
Neither k8s_cp nor k8s_exec works with HTTP proxy authentication #246

Closed
itaru2622 opened this issue Sep 23, 2021 · 10 comments · Fixed by kubernetes-client/python-base#256
Labels
type/bug Something isn't working

Comments

@itaru2622
Contributor

SUMMARY

Neither k8s_cp nor k8s_exec works with HTTP proxy authentication.
Both work only when the HTTP proxy needs no authentication.

I tested most of the patterns for HTTP proxy auth described in docs/kubernetes.core.k8s_xxx_module.rst:

  • environment variables: K8S_AUTH_PROXY_HEADERS_BASIC_AUTH and K8S_AUTH_PROXY_HEADERS_PROXY_BASIC_AUTH
  • specifying a proxy_headers block with basic_auth and proxy_basic_auth

but it always fails with the message below.

Deploying a pod with kubernetes.core.k8s via the same authenticated HTTP proxy works.

task path: /mnt/k8s.yaml:38
redirecting (type: action) kubernetes.core.k8s_exec to kubernetes.core.k8s_info
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1632388824.0367606-6684-265204369558111 `" && echo ansible-tmp-1632388824.0367606-6684-265204369558111="` echo /root/.ansible/tmp/ansible-tmp-1632388824.0367606-6684-265204369558111 `" ) && sleep 0'
Loading collection cloud.common from /root/.ansible/collections/ansible_collections/cloud/common
Using module file /root/.ansible/collections/ansible_collections/kubernetes/core/plugins/modules/k8s_exec.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-6492mx8edj7e/tmpp1r4bfl7 TO /root/.ansible/tmp/ansible-tmp-1632388824.0367606-6684-265204369558111/AnsiballZ_k8s_exec.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1632388824.0367606-6684-265204369558111/ /root/.ansible/tmp/ansible-tmp-1632388824.0367606-6684-265204369558111/AnsiballZ_k8s_exec.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'python /root/.ansible/tmp/ansible-tmp-1632388824.0367606-6684-265204369558111/AnsiballZ_k8s_exec.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1632388824.0367606-6684-265204369558111/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
  File "/tmp/ansible_kubernetes.core.k8s_exec_payload_tngkyn7y/ansible_kubernetes.core.k8s_exec_payload.zip/ansible_collections/kubernetes/core/plugins/modules/k8s_exec.py", line 159, in execute_module
  File "/usr/local/lib/python3.9/dist-packages/kubernetes/stream/stream.py", line 35, in _websocket_request
    return api_method(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/kubernetes/client/api/core_v1_api.py", line 994, in connect_get_namespaced_pod_exec
    return self.connect_get_namespaced_pod_exec_with_http_info(name, namespace, **kwargs)  # noqa: E501
  File "/usr/local/lib/python3.9/dist-packages/kubernetes/client/api/core_v1_api.py", line 1101, in connect_get_namespaced_pod_exec_with_http_info
    return self.api_client.call_api(
  File "/usr/local/lib/python3.9/dist-packages/kubernetes/client/api_client.py", line 348, in call_api
    return self.__call_api(resource_path, method,
  File "/usr/local/lib/python3.9/dist-packages/kubernetes/client/api_client.py", line 180, in __call_api
    response_data = self.request(
  File "/usr/local/lib/python3.9/dist-packages/kubernetes/stream/ws_client.py", line 474, in websocket_call
    raise ApiException(status=0, reason=str(e))
fatal: [localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "api_key": null,
            "ca_cert": null,
            "client_cert": null,
            "client_key": null,
            "command": "/bin/sh -c 'rm -f /tmp/debian-bullseye.txt\ndate > /tmp/debian-bullseye.txt\n'",
            "container": "debian-bullseye",
            "context": null,
            "host": null,
            "kubeconfig": "/root/.kube/config",
            "namespace": "default",
            "password": null,
            "persist_config": null,
            "pod": "sample-0",
            "proxy": "http://basicauth.proxy.local:8080",
            "proxy_headers": {
                "basic_auth": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                "proxy_basic_auth": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                "user_agent": null
            },
            "username": null,
            "validate_certs": null
        }
    },
    "msg": "Failed to execute on pod sample-0 due to : (0)\nReason: failed CONNECT via proxy status: 407\n"
ISSUE TYPE
  • Bug Report
COMPONENT NAME
  • k8s_cp
  • k8s_exec
  • ... (not sure)
ANSIBLE VERSION
ansible [core 2.11.5]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
  jinja version = 3.0.1
  libyaml = True
COLLECTION VERSION
# /root/.ansible/collections/ansible_collections
Collection      Version
--------------- -------
cloud.common    2.0.4
kubernetes.core 2.2.0

# pip list
kubernetes         18.20.0
CONFIGURATION
(empty)
OS / ENVIRONMENT

debian 11 (bullseye)

STEPS TO REPRODUCE
  1. Set up an HTTP proxy server requiring basic authentication.
  2. Deploy a pod from behind the HTTP proxy (any official image is OK).
  3. Run k8s_exec (command: 'date') or k8s_cp (upload or download a text file) from behind the above HTTP proxy.

The following is my test code, run with:
ansible-playbook -i hosts -e test=alpine:latest

- name: bootstrap ansible test
  hosts: localhost
  gather_facts: no
  connection: local
  tasks:
   - ...
---
- name: setup
  block:
  - set_fact:
      images: [ "alpine:latest",  "debian:bullseye" ]
      kubeconfig: "/root/.kube/config"
      cases: "{{ [] }}"
  - set_fact:
      cases: |
        {{ cases + [{  'image': item , 'container': item|replace(':','-')|replace('/','-')|replace('.','')  }] }}
    with_items: "{{ images }}"
    
  - template:
      src: "pod-template.j2"
      dest: "/tmp/pod-sample.yaml"
  - k8s:
      state: present
      src: "/tmp/pod-sample.yaml"
      kubeconfig: "{{ kubeconfig }}"

  - name: wait pod ready (this doesn't work in some cases...)
    command: |
       kubectl wait pod --for=condition=ready --kubeconfig {{kubeconfig}} sample-0 --timeout=150s

  - name: prepare content
    copy:
      dest: "/tmp/{{item}}.sh"
      content: |
         #!/bin/sh
         echo "{{item}}" > /tmp/{{item}}.txt
         date            >> /tmp/{{item}}.txt
      mode: +x
    with_items: "{{ cases | map(attribute='container')| list }}"

- name: "test : {{ test }}"
  set_fact:
    target: "{{ test |replace(':','-')|replace('/','-')| replace('.','') }}"

- name: test  {{ target }}
  block:
    - include_tasks: "k8s.yaml"
      vars:
        container: "{{ target }}"
        namespace:  "default"
        pod:        "sample-0"
        shell:      "/bin/sh -c"
        exec: |
           rm -f /tmp/{{target}}.txt
           date > /tmp/{{target}}.txt
        download: { 'remote_path': "/tmp/{{target}}.txt", 'local_path': "/tmp/{{target}}1.txt" }
    - include_tasks: "k8s.yaml"
      vars:
        container: "{{ target }}"
        namespace:  "default"
        pod:        "sample-0"
        shell:      "/bin/sh -c"
        upload: { 'local_path': "/tmp/{{target}}.sh", 'remote_path':"/tmp/{{target}}1.sh" }
        exec: |
           rm -f /tmp/{{target}}.txt
           /tmp/{{target}}1.sh
        download: { 'remote_path': "/tmp/{{target}}.txt", 'local_path': "/tmp/{{target}}1.txt" }
    - debug:
        msg: "got: {{ lookup('file', '/tmp/' + target + '1.txt') }}"

---
# k8s.yaml
- name: get environment variables from current shell for proxy
  set_fact:
    localenv: "{{  localenv|default({}) | combine ({ item : (lookup('env', item)| default('')) }) }}"
  with_items:
    - https_proxy
    - no_proxy
    - K8S_AUTH_PROXY
    - K8S_AUTH_PROXY_HEADERS_BASIC_AUTH
    - K8S_AUTH_PROXY_HEADERS_PROXY_BASIC_AUTH

- debug:
   var: localenv

- name: "uploading     x       core.k8s_cp"
  when: upload is defined
  kubernetes.core.k8s_cp:
     state:         to_pod
     kubeconfig:    "{{ kubeconfig }}"
     namespace:     "{{ namespace }}"
     pod:           "{{ pod }}"
     container:     "{{ container }}"
     local_path:    "{{ upload.local_path }}"
     remote_path:   "{{ upload.remote_path }}"
     proxy: "{{ localenv.K8S_AUTH_PROXY }}"
     proxy_headers:
       basic_auth: "{{ localenv.K8S_AUTH_PROXY_HEADERS_BASIC_AUTH }}"
       proxy_basic_auth: "{{ localenv.K8S_AUTH_PROXY_HEADERS_PROXY_BASIC_AUTH }}"

- name: "exec          x       core.k8s_exec"
  when: exec is defined
  kubernetes.core.k8s_exec:
     kubeconfig:   "{{ kubeconfig }}"
     namespace:    "{{ namespace }}"
     pod:          "{{ pod }}"
     container:    "{{ container }}"
     command: >-
        {{shell}} '{{ exec }}'
     proxy: "{{ localenv.K8S_AUTH_PROXY }}"
     proxy_headers:
       basic_auth: "{{ localenv.K8S_AUTH_PROXY_HEADERS_BASIC_AUTH }}"
       proxy_basic_auth: "{{ localenv.K8S_AUTH_PROXY_HEADERS_PROXY_BASIC_AUTH }}"
  register: result

- name: "result of remote execution"
  debug:
    var: result

- name: "downloading   x       core.k8s_cp"
  when: download is defined
  kubernetes.core.k8s_cp:
     state: from_pod
     kubeconfig:   "{{ kubeconfig }}"
     namespace:    "{{ namespace }}"
     pod:          "{{ pod }}"
     container:    "{{ container }}"
     remote_path:  "{{ download.remote_path }}"
     local_path:   "{{ download.local_path }}"
     proxy: "{{ localenv.K8S_AUTH_PROXY }}"
     proxy_headers:
       basic_auth: "{{ localenv.K8S_AUTH_PROXY_HEADERS_BASIC_AUTH }}"
       proxy_basic_auth: "{{ localenv.K8S_AUTH_PROXY_HEADERS_PROXY_BASIC_AUTH }}"
---
# pod-template.j2
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: "sample"
  namespace: "default"
spec:
  selector:
    matchLabels:
      k8s-app: "sample"
  serviceName: "sample"
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: "sample"
    spec:
      containers:
{% for node in cases %}
      - name: {{ node.container }}
        image: {{ node.image }}
        imagePullPolicy: IfNotPresent
        command:
          - sh
          - "-c"
          - |
            tail -f /dev/null
{% endfor %}
EXPECTED RESULTS

As described in the docs, both k8s_cp and k8s_exec should work with HTTP proxy authentication.

ACTUAL RESULTS

Both k8s_cp and k8s_exec fail when the HTTP proxy requires authentication.


@itaru2622
Contributor Author

itaru2622 commented Sep 23, 2021

How to set up an HTTP proxy with basic authentication, powered by Squid:

/etc/squid/squid.conf:

# http://www.squid-cache.org/Doc/config/
# https://blog.nillsf.com/index.php/2019/09/09/setting-up-a-squid-proxy-with-authentication/

http_port 3128

acl anywhere src all

# start:  requiring basic authentication >>>>>>>>>>>>>>>>>>>>>>
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
acl auth_users proxy_auth REQUIRED
http_access allow auth_users
# end:    requiring basic authentication <<<<<<<<<<<<<<<<<<<<<

http_access allow anywhere
http_access deny  all

Set up an account, run Squid, and test the proxying:

htpasswd -b -c /etc/squid/passwd demo demo
squid -f /etc/squid/squid.conf -NYCd 1 --foreground

# the following should work fine
https_proxy=http://demo:demo@yourIP:3128/ curl -sSL https://google.com/

# the following should NOT return 200 OK while 'requiring basic authentication' is enabled
https_proxy=http://yourIP:3128/ curl -sSL https://google.com/

k8s_cp and k8s_exec fail if the Squid proxy has the above configuration.
When you comment out the block named 'requiring basic authentication', both k8s_cp and k8s_exec work.

@gravesm
Member

gravesm commented Sep 23, 2021

@itaru2622 Thanks for the detailed bug report. This looks to be a problem with the underlying Kubernetes client. I believe the proxy auth is not being set on the connection here: https://github.com/kubernetes-client/python-base/blob/b0afc93ffabb66d930abcdfb1255214d167bf8d5/stream/ws_client.py#L450. I'll take a closer look and try to get a PR submitted soon.
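For context, the websocket-client library (which `ws_client.py` builds on) can take proxy credentials as an `http_proxy_auth` (user, password) tuple on `create_connection`. A minimal sketch of assembling those kwargs from a proxy URL follows; the `proxy_kwargs` helper is illustrative only and is not part of the Kubernetes client:

```python
from urllib.parse import urlparse

def proxy_kwargs(proxy_url, auth=None):
    """Build websocket-client create_connection() proxy kwargs.

    `auth` is an optional (user, password) tuple. This is a hypothetical
    helper for illustration, not code from kubernetes-client.
    """
    parsed = urlparse(proxy_url)
    kwargs = {
        "http_proxy_host": parsed.hostname,
        "http_proxy_port": parsed.port,
    }
    if auth:
        kwargs["http_proxy_auth"] = auth
    return kwargs

print(proxy_kwargs("http://basicauth.proxy.local:8080", ("demo", "demo")))
```

This mirrors the approach taken by PR kubernetes-client/python-base#230 (passing credentials through to websocket-client) rather than the header-based approach discussed later in this thread.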

@gravesm gravesm added the type/bug Something isn't working label Sep 23, 2021
@itaru2622
Contributor Author

itaru2622 commented Sep 25, 2021

@gravesm thank you for your investigation.

I found kubernetes-client/python-base#230, which requests a fix for the same issue, but it was closed by a bot before being merged.
PR 230 takes a different approach than the kubernetes.core docs, but the modification to stream/ws_client.py would be almost the same.

I'd appreciate your help with this issue.

@itaru2622
Contributor Author

itaru2622 commented Sep 25, 2021

@gravesm I checked the sources and found the following lines in common.py:

elif key == 'proxy_headers':
    headers = urllib3.util.make_headers(**value)
    setattr(configuration, key, headers)

https://github.com/kubernetes-client/python/blob/d3de7a85a63fa6bec6518d1cc75dc5e9458b9bbc/kubernetes/client/rest.py#L86-L97

So the patch for this issue could be something like the following, because configuration.proxy_headers is stored as an HTTP header dictionary:

diff --git a/stream/ws_client.py b/stream/ws_client.py
--- a/stream/ws_client.py
+++ b/stream/ws_client.py
@@ -429,6 +429,10 @@ def create_websocket(configuration, url, headers=None):
     else:
         header.append("sec-websocket-protocol: v4.channel.k8s.io")
 
+    if configuration.proxy_headers:
+        for key, value in configuration.proxy_headers.items():
+            header.append("%s: %s" % (key, value))
+
     if url.startswith('wss://') and configuration.verify_ssl:
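For reference, the header dictionary that the client builds via `urllib3.util.make_headers()` can be reproduced with the standard library alone. The sketch below (the `make_proxy_headers` name is mine, not the client's) shows the dictionary the patch above would flatten into `"key: value"` handshake lines:

```python
import base64

def make_proxy_headers(basic_auth=None, proxy_basic_auth=None):
    """Mimic the subset of urllib3.util.make_headers() the kubernetes
    client uses: 'user:pass' strings become Basic auth header values."""
    headers = {}
    if basic_auth:
        headers["authorization"] = (
            "Basic " + base64.b64encode(basic_auth.encode()).decode())
    if proxy_basic_auth:
        headers["proxy-authorization"] = (
            "Basic " + base64.b64encode(proxy_basic_auth.encode()).decode())
    return headers

headers = make_proxy_headers(proxy_basic_auth="demo:demo")
# flatten into "key: value" lines, as the proposed ws_client.py patch does
header_lines = ["%s: %s" % (k, v) for k, v in headers.items()]
print(header_lines)  # -> ['proxy-authorization: Basic ZGVtbzpkZW1v']
```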

I don't know how to implement an e2e test of the above for https://github.com/kubernetes-client/python
... maybe in e2e_test/test_client.py, but I have no idea for the code...

@itaru2622
Contributor Author

itaru2622 commented Sep 29, 2021

@gravesm please let me know how to debug plugins/modules/k8s_xxx.py and/or plugins/module_utils/*.py.

I want to print debugging messages, but I couldn't get them to show up, even though messages from plugins/connection/kubectl.py do come out.

I know that plugins/connection/kubectl.py uses "ansible.utils.display.Display". I implemented almost the same code in plugins/module/k8s_xxx.py, but my messages do not come out.

@gravesm
Member

gravesm commented Sep 29, 2021

@itaru2622 The problem is that modules are run in a separate process, so the messages won't go to stdout for the main controller process. The easiest thing to do is to use something like https://github.com/zestyping/q.
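A minimal file-based fallback, as a sketch: since modules run in a separate process and their stdout never reaches the controller, appending to a file is the simplest way to see output. The log path and helper name here are arbitrary choices, not anything the collection provides:

```python
import datetime

DEBUG_LOG = "/tmp/k8s_module_debug.log"  # arbitrary path; pick your own

def debug_log(msg, path=DEBUG_LOG):
    # print() inside a module is swallowed by the module wrapper;
    # an append-only file survives and can be tailed from another shell.
    with open(path, "a") as fh:
        fh.write("%s %s\n" % (datetime.datetime.now().isoformat(), msg))

debug_log("entered execute_module")
```

Then run `tail -f /tmp/k8s_module_debug.log` in a second terminal while the playbook executes.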

@itaru2622
Contributor Author

@gravesm thank you for your help. Through debugging, I finally got working code, different from the code above.

I will post a PR to https://github.com/kubernetes-client/python-base ...

@gravesm
Member

gravesm commented Oct 21, 2021

@itaru2622 thanks for submitting the PR on the client! This is really helpful. Have you had a chance to test your client changes against this collection?

@itaru2622
Contributor Author

@gravesm

Yes.
My PRs to kubernetes-client/python-base and kubernetes-client/python have already been merged.
I confirmed kubernetes.core works with them when the latest master branch of kubernetes-client/python is installed as below:

pip install git+https://github.com/kubernetes-client/python.git

Unfortunately, a kubernetes-client release containing my PR is not out yet (no new version number has been assigned), so this issue stays open for the moment. I will close it when the new kubernetes-client is released as current stable (not a pre-release).

Anyway, you can use the proxy authentication feature via the above command.

@itaru2622
Contributor Author

@gravesm the latest official Python kubernetes library has supported proxy authentication since 19.15.0. It is now available via the usual installation:

pip install kubernetes
# or
pip install 'kubernetes>=19.15.0'
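A quick runtime check that the installed client is at least 19.15.0 can be done with the standard library alone; this sketch (the `supports_ws_proxy_auth` helper is my own naming) assumes the package is installed under the name `kubernetes`:

```python
from importlib.metadata import version, PackageNotFoundError

def supports_ws_proxy_auth(pkg="kubernetes", minimum=(19, 15, 0)):
    """Return True if the installed package version is >= minimum."""
    try:
        installed = tuple(int(p) for p in version(pkg).split(".")[:3])
    except (PackageNotFoundError, ValueError):
        # missing package, or a version string we can't parse numerically
        return False
    return installed >= minimum

print(supports_ws_proxy_auth())
```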

So I am closing this issue.
