This repository has been archived by the owner on Jun 13, 2024. It is now read-only.

k8s inventory plugin not working #73

Closed
603627156 opened this issue Apr 14, 2020 · 6 comments
Labels
type/bug Something isn't working

Comments

@603627156

603627156 commented Apr 14, 2020

I'm trying to create a dynamic inventory from a Kubernetes cluster using the k8s plugin, but I'm unable to get it to work. I could not get the configuration right by following the documentation. How should it be configured?
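
For context, the k8s inventory plugin is normally configured through a dedicated YAML inventory source (see the plugin documentation) rather than inside a playbook. A minimal sketch, where the file name, kubeconfig path, and context are illustrative assumptions rather than a verified setup, looks roughly like this:

# cluster.kubernetes.yml -- the plugin is only auto-detected for file names
# ending in .k8s.yml / .kubernetes.yml (or .yaml)
plugin: community.kubernetes.k8s
connections:
  - kubeconfig: /root/.kube/config
    context: default

The plugin usually also has to be enabled (for example enable_plugins = community.kubernetes.k8s under [inventory] in ansible.cfg), and the playbook run pointed at this file with -i.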

1. Playbook

[dev] [root@k8s-node1 ~]# cat k8s.yml 
---
- hosts: localhost
  gather_facts: false
  connection: local

  collections:
    - community.kubernetes
  tasks:
    - name: Ensure the myapp Namespace exists.
      k8s:
        api_version: v1
        kind: Namespace
        name: testing
        state: present

2. kubeconfig

[dev] [root@k8s-node1 ~]# cat /root/.kube/
cache/       .config.swd  .config.swf  .config.swh  .config.swj  .config.swl  .config.swn  .config.swp
config       .config.swe  .config.swg  .config.swi  .config.swk  .config.swm  .config.swo  http-cache/
[dev] [root@k8s-node1 ~]# cat /root/.kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2akNDQXFhZ0F3SUJBZ0lVS25YeHg1NjV1aGpSSFFuY3QvOXY3VS8xdUVVd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEU1TURjeE1EQTJOVGN3TUZvWERUSTBNRGN3T0RBMk5UY3dNRm93WlRFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbGFXcHBibWN4RERBSwpCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByZFdKbGNtNWxkR1Z6Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBc0I1TndHRyt3QzVYdTdPNFV3QUIKcENhNFBSbVhzM2NDZFpsTW8rV2huSVVvT3UwRmVqdDNCQ1U4RHBvTXZpNGRZNzBVVnIrWVVNRS9BenczR1UvTgpYclBiR2drZHlmL291ZEIzMk94ZmxyOXhYeThCeGVUWnhqWlp5TkE0RmVVWFI1VXV3NWxoL0ErQkVRV1U2MW1MClRRSU4xYUk3RXMvREZMUDVHT3lXYkNOcnVwNVRMU1ZCZ3dEVzM5Rkh5YWhzNytPV0xhM1JyRTYxZFc2blRMdE4KbmZwazZEV05GMzRtRC8vM1BnTDZ2N0VteXEwZWcxbFplWVEzT1h1d1ZGUWV2ZlVnYzlaK2RpaTdTWVhzY3FKRwpLRDVwb1lNRzM5bmxTQnhVQytFMDFVemV4dTl5cFRCNXJ3M1hOL013SmVxTU93K21TTnJCTFZCbGZ2TTdRb1gzCjFRSURBUUFCbzJZd1pEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0VnWURWUjBUQVFIL0JBZ3dCZ0VCL3dJQkFqQWQKQmdOVkhRNEVGZ1FVK29HMm1QYmtaTDNyeEN0cEc3KzFXeHpSRVhFd0h3WURWUjBqQkJnd0ZvQVUrb0cybVBiawpaTDNyeEN0cEc3KzFXeHpSRVhFd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJQVdEOUpibWdOOHJneWVGM2dRCjRnUGFZQ1BiMit3QzQzd1BVVUtvcjlnTnZjNkVUMVFWY2lmV0dMZHExS3JDREN0QWxJSzVGUVVHak8xNUVMV3MKdWFOeGMzZURDY0NqaUE4SlBhUXNyeWpkeit2R0Iyd0xtN0VGQTVtdy9TcnN3cG5uWGo1RlpoVDdubHlDOTZwRwpOakZyQzlvK2taelRzVndqaUZmdzRUL0J3TWxXaUw2YTZ3bFowY0x1V1JWUnVZRSszQ3NMcHE2NFFJbnBxa3NDCllJOEFrTkY0anN1QmhMSjlMcXE1eDlMdEE2a1NzVFA5M0lnaCtSR2M2UGkreUZEMzcxeDlqcFJBcHNZRDlUOHcKTHB0RTBGWmNjNWx3RGZzSTJUaTVodzI1enZGa3F6OHl4U0RSQVFERXk2Q280aG9zODJwdEFJTjhzZlNyT0lobQpINms9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.10.130:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: cluser-admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: cluster-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQzVENDQXNXZ0F3SUJBZ0lVR3liT3hCMFhqTmlWd09ma01OdkpqVnVoenBBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEU1TURjeE1EQTJOVGN3TUZvWERUSTVNRGN3TnpBMk5UY3dNRm93YXpFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFVcHBibWN4RURBT0JnTlZCQWNUQjBKbGFVcHBibWN4RnpBVgpCZ05WQkFvVERuTjVjM1JsYlRwdFlYTjBaWEp6TVE4d0RRWURWUVFMRXdaVGVYTjBaVzB4RGpBTUJnTlZCQU1UCkJXRmtiV2x1TUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUFydzgvQ1c3dzVuUVEKdTFrYkRqVkVEMCtSaWtaT29lTjY3WTd3Q0xucmxENVFGekV4UmlkbmxHajdzRklJQ2Y0VDIxVWNKQzhkWUNFOQpHcDdPYytmL0RselpIMi8zYW5QSFNlU0pjSEZNT05pL2U5c0ZnY0dXVkd1dFVYL1ZnTVV1WkV1VGRzRUNDSUxwCk54YnZ6WEhLWjZoSG4yTjhHSU1OR3k1WGdiNnFYUVhxYnZScjBsRGp5SXc4V29FMEJmejFndnRWMy9UbkZKQUUKTUE3cVVjYTVyQnVqZi9acHNJRklGdVFqYVFjNVgyNkhUNFA1NVJmSXY4K1BFanBwY2JoU29yajBTOVhTdm0vZAp6UlUxTnBCbnQ2MXlSdmRMalZhZWpab0ZjeFlFb2NFbUtuNzEzN2hDODlnbDVhZWlNZXNFWTk0SE1hUy9QMEhkCnE1V1FBREk1cndJREFRQUJvMzh3ZlRBT0JnTlZIUThCQWY4RUJBTUNCYUF3SFFZRFZSMGxCQll3RkFZSUt3WUIKQlFVSEF3RUdDQ3NHQVFVRkJ3TUNNQXdHQTFVZEV3RUIvd1FDTUFBd0hRWURWUjBPQkJZRUZCNTF3QkFESzhJZAo1MWt5N01xM0tHMWJVVHl3TUI4R0ExVWRJd1FZTUJhQUZQcUJ0cGoyNUdTOTY4UXJhUnUvdFZzYzBSRnhNQTBHCkNTcUdTSWIzRFFFQkN3VUFBNElCQVFCbmpHc0Iyd3RubnZFVjIzZ0h2SnpQcUoxNXk3b3dLK1lSbjJtV2wydnYKV2RQWHdhNnRvOGdNV0RVK2hpeTdkVHFtTFloOUlpWG5PUlNMWDRXZHFVQVNMUjFaVUNYekRyc0xFRlZQVjFaawpYR1d5bGVUVG1meWw3Z0ZoSmx2WWw0SjVDenpucDNNVGFDeDJRUzZqWW1qNUczSlRNWUNnNm9hZzNDVlVNZzg2CjNUOFFQZ0V6Tm1rb1AvNzd1RW1Jb1hCMm94U1UvTVdLY1Nyb0JiN1VDblBvVEQ5dll3b2UybmJhMlFUYkxRRWwKTGsva2hCT1E3d1FvZFVpY2tyV1BaQ1hVcFJjMGdobzZzR2hwYmdHam5HRmVITGxPeGl4d1poeXlqZUpXMzdhaQo0Z1Azc2wycWt4bFV0T1loNlRaTys0SlRpZzNtTW9JZ1NubHdiOEFTa3JWRgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBcnc4L0NXN3c1blFRdTFrYkRqVkVEMCtSaWtaT29lTjY3WTd3Q0xucmxENVFGekV4ClJpZG5sR2o3c0ZJSUNmNFQyMVVjSkM4ZFlDRTlHcDdPYytmL0RselpIMi8zYW5QSFNlU0pjSEZNT05pL2U5c0YKZ2NHV1ZHdXRVWC9WZ01VdVpFdVRkc0VDQ0lMcE54YnZ6WEhLWjZoSG4yTjhHSU1OR3k1WGdiNnFYUVhxYnZScgowbERqeUl3OFdvRTBCZnoxZ3Z0VjMvVG5GSkFFTUE3cVVjYTVyQnVqZi9acHNJRklGdVFqYVFjNVgyNkhUNFA1CjVSZkl2OCtQRWpwcGNiaFNvcmowUzlYU3ZtL2R6UlUxTnBCbnQ2MXlSdmRMalZhZWpab0ZjeFlFb2NFbUtuNzEKMzdoQzg5Z2w1YWVpTWVzRVk5NEhNYVMvUDBIZHE1V1FBREk1cndJREFRQUJBb0lCQUJCSk9kTVYyQ0dJY0xvTgpPeUFpUW5ldUxsc1AyV2JrTTk1LzZzTFZFUjZVZ1h6MjNaK3FNTSsweUoySnRDZkIxSFVXUU96NDJTSEZWZHJ4CkpVSFJOb0JPa1FDRXVSN1ZNSmdtUThjTE0wMGlsUVhmeFc1aDVTdHJiUTlrOWlicHNUd3hiOEdmaVNIamsvREYKR0lBamN2SWJ6TFgrV21BcGFRRzdXUGJBRnpkYUd3dlhjaFFHaGZtVjdyRFlxekNIeTBic2RRTzkyaUkzSXRQbwpWS3BoSWwzMzI2czVvdWFYbkJxV3VwMFZJRURYNHBQSm9YM0FzY0hIODl6ODlGUVVMTVZDNCtlbkNFZ21NbkFHCmhwSStBajc0dzI3cnVkd0RpSitTVmFTRTY1dWg2UTR2MFR0UnBmM2FOSXBCRVNoMkdmbVd4UFp6QlhRdzlsUDMKbEszTFZBa0NnWUVBNDY2MDRLQzFpTE9iWUEyOXlKOFRpdHBFdmNTSGN1d1JrTmcvRXdoeEVIZ054Ym5uMzZ1Vwp1ZHBTT1p1QWQ4VlhEYkdTSGRNWWRKcFh5aUx1Um91bWhPc09MRkRybExYVkJrQldpMTNQNDRUS01sWTdnMUpuClkvL3VIbGQrdll5b1B3cW1nQWpjbE9Pa09hNU9nNU15OXVrS240RmV1TCs3dmVWd0tiZlZvWVVDZ1lFQXhOVU4KdFI2djQwa3RlcnEyNTloOVdSUDJIOWgyWndOZGg3OU9LSXVPMFN2bUtUanNPdElNS0o0ZFkzTVljS2FPYUxXbwp2MEFpWGY1WTdGSmxlOVBlSDNNU3ZIUGllakhSRzQrUE8rNmxOQ3UxRm8yNExqQUxDamlRV1lmbFhxZ0lkT25RCkFOQTZwTGlRM2d3SWZGQ1JrczNqZHBYSHBWOE13ZVJobFVCWGVxTUNnWUVBdFgyaFAyRzc4MEZBZkp2WGlhR00KZ1dXbDRDTlYyVHptYjdDQTd0b096cEwwWDRYbW1Mdjl4UjZMNXRIVzRTSlVWMTBSM1daVkd6V2cvMGRDNnNjTgpNT3p4K2s5eXlyTDdJU1dPRnovcnBEQkl3VUZONVV0OWtSQUVydmtOMVdqWEFKR3IwV20rODR4V2I0aExtOFJ0Cm5yWjdPbFIwdmc1UVNIb3BJNGdmNmNVQ2dZQXJHRzY0NGpBbWRtWXp3ZCs4SVdWSWRKdGwyNUlJK2U2bmd4Wk0Kd0VtVHVLWGJEckNDTEcwbkUzOWh2OWh4Q2JhU2JIdTI3QWJhUjQ4V3B1KzdUZWNMUWJtdmN6djUveUJHaFljWgoyeVZtcDg4dFVmZ3FmTEJlRzRaWFkrNnZhK0QySUI4L25sZklxdlJrK1lOK0hISFRENnNtMHFKMHJidndVOTJkCnZRbXFPd0tCZ1FDTjR4aXJpZXcwdVltZitpbFNVUVgrTEdsTFJFQjIyZ1U3SW9pSGFKWTJXMmNDcG1raVdKT3AKSktFVHgwdmNUSi9NandpbzV3MVJxeXNqeDFyTlFkczM3cGZLMDVSc29vLzlrZlpzS2dXU3FrOEJhc3RReE5zTQp0TWhvelpBNFBlTTExTmlPZTAxOUdGUStiRmc1WHBSbWw3L0hnaU92T2VQREF1SjZYandvSXc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

3. Error

[dev] [root@k8s-node1 ~]# ansible-playbook  k8s.yml
[WARNING]:  * Failed to parse /etc/ansible/hosts with k8s plugin: Syntax Error while loading YAML.   expected
'<document start>', but found '<scalar>'  The error appears to be in '/etc/ansible/hosts': line 2, column 1, but may
be elsewhere in the file depending on the exact syntax problem.  The offending line appears to be:  [test]
192.168.10.130 ^ here
[WARNING]: Unable to parse /etc/ansible/hosts as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not
match 'all'

PLAY [localhost] ****************************************************************************************************

TASK [Ensure the myapp Namespace exists.] ***************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: argument of type 'NoneType' is not iterable
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"/tmp/ansible_k8s_payload__9mlod7c/ansible_k8s_payload.zip/ansible_collections/community/kubernetes/plugins/module_utils/common.py\", line 193, in get_api_client\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/incluster_config.py\", line 96, in load_incluster_config\n    cert_filename=SERVICE_CERT_FILENAME).load_and_set()\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/incluster_config.py\", line 47, in load_and_set\n    self._load_config()\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/incluster_config.py\", line 53, in _load_config\n    raise ConfigException(\"Service host/port is not set.\")\nkubernetes.config.config_exception.ConfigException: Service host/port is not set.\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1586849534.6105824-263930027939916/AnsiballZ_k8s.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1586849534.6105824-263930027939916/AnsiballZ_k8s.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1586849534.6105824-263930027939916/AnsiballZ_k8s.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.community.kubernetes.plugins.modules.k8s', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/data/apps/python3/lib/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/data/apps/python3/lib/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/data/apps/python3/lib/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_k8s_payload__9mlod7c/ansible_k8s_payload.zip/ansible_collections/community/kubernetes/plugins/modules/k8s.py\", line 273, in <module>\n  File \"/tmp/ansible_k8s_payload__9mlod7c/ansible_k8s_payload.zip/ansible_collections/community/kubernetes/plugins/modules/k8s.py\", line 269, in main\n  File \"/tmp/ansible_k8s_payload__9mlod7c/ansible_k8s_payload.zip/ansible_collections/community/kubernetes/plugins/module_utils/raw.py\", line 174, in execute_module\n  File \"/tmp/ansible_k8s_payload__9mlod7c/ansible_k8s_payload.zip/ansible_collections/community/kubernetes/plugins/module_utils/common.py\", line 195, in get_api_client\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/kube_config.py\", line 743, in load_kube_config\n    loader.load_and_set(config)\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/kube_config.py\", line 551, in load_and_set\n    self._load_cluster_info()\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/kube_config.py\", line 517, in _load_cluster_info\n    file_base_path=base_path).as_file()\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/kube_config.py\", line 100, in __init__\n    if data_key_name in obj:\nTypeError: argument of type 'NoneType' is not iterable\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

PLAY RECAP **********************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   
@geerlingguy
Collaborator

I'm not sure why you'd need to use the k8s inventory plugin for this particular play—it's operating on a server (192.168.10.130 I presume) that is running Kubernetes, and the way the kubernetes module works, your setting of connection: local seems to indicate it should just work with normal inventory.

How are you configuring the k8s inventory plugin to be used? Can you give more of the contents of /etc/ansible/hosts?
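
Separately, if the goal is only to manage cluster objects from this node (which is what the play above does), the k8s module can be pointed at the kubeconfig explicitly instead of relying on defaults. A rough sketch, assuming the kubeconfig shown above:

    - name: Ensure the myapp Namespace exists.
      community.kubernetes.k8s:
        api_version: v1
        kind: Namespace
        name: testing
        state: present
        kubeconfig: /root/.kube/config
        context: default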

@geerlingguy geerlingguy added the type/question Further information is requested label Apr 14, 2020
@603627156
Author

603627156 commented Apr 15, 2020

Error: Service host/port is not set.
[dev] [root@k8s-node1 ~]# ansible-playbook k8s.yml 

PLAY [all] **************************************************************************

TASK [Ensure the myapp Namespace exists.] *******************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: argument of type 'NoneType' is not iterable
fatal: [192.168.10.131]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"/tmp/ansible_k8s_payload_ndr17hu8/ansible_k8s_payload.zip/ansible_collections/community/kubernetes/plugins/module_utils/common.py\", line 193, in get_api_client\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/incluster_config.py\", line 96, in load_incluster_config\n    cert_filename=SERVICE_CERT_FILENAME).load_and_set()\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/incluster_config.py\", line 47, in load_and_set\n    self._load_config()\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/incluster_config.py\", line 53, in _load_config\n    raise ConfigException(\"Service host/port is not set.\")\nkubernetes.config.config_exception.ConfigException: Service host/port is not set.\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1586915757.726608-31882894553169/AnsiballZ_k8s.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1586915757.726608-31882894553169/AnsiballZ_k8s.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1586915757.726608-31882894553169/AnsiballZ_k8s.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.community.kubernetes.plugins.modules.k8s', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/data/apps/python3/lib/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/data/apps/python3/lib/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/data/apps/python3/lib/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_k8s_payload_ndr17hu8/ansible_k8s_payload.zip/ansible_collections/community/kubernetes/plugins/modules/k8s.py\", line 273, in <module>\n  File \"/tmp/ansible_k8s_payload_ndr17hu8/ansible_k8s_payload.zip/ansible_collections/community/kubernetes/plugins/modules/k8s.py\", line 269, in main\n  File \"/tmp/ansible_k8s_payload_ndr17hu8/ansible_k8s_payload.zip/ansible_collections/community/kubernetes/plugins/module_utils/raw.py\", line 174, in execute_module\n  File \"/tmp/ansible_k8s_payload_ndr17hu8/ansible_k8s_payload.zip/ansible_collections/community/kubernetes/plugins/module_utils/common.py\", line 195, in get_api_client\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/kube_config.py\", line 743, in load_kube_config\n    loader.load_and_set(config)\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/kube_config.py\", line 551, in load_and_set\n    self._load_cluster_info()\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/kube_config.py\", line 517, in _load_cluster_info\n    file_base_path=base_path).as_file()\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/kube_config.py\", line 100, in __init__\n    if data_key_name in obj:\nTypeError: argument of type 'NoneType' is not iterable\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

PLAY RECAP **************************************************************************
192.168.10.131             : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
---
- hosts: all
  gather_facts: false
  connection: local 
         
  collections:
    - community.kubernetes
  tasks:     
    - name: Ensure the myapp Namespace exists.
      k8s:
        api_version: v1 
        kind: Namespace
        name: testing
        state: present
[dev] [root@k8s-node1 ~]# kubectl get nodes
NAME             STATUS   ROLES    AGE    VERSION
192.168.10.131   Ready    <none>   279d   v1.12.1
192.168.10.132   Ready    <none>   279d   v1.12.1


@603627156
Author

[dev] [root@k8s-node1 ~]# cat /etc/ansible/hosts
[test]
192.168.10.131
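
For reference, the "Failed to parse /etc/ansible/hosts with k8s plugin" warning from the first run is consistent with the k8s inventory plugin being enabled and then handed this INI file, which it cannot parse as YAML. One common layout (a sketch; the file name and path are illustrative) keeps the INI hosts file as-is and gives the plugin its own source, e.g. /etc/ansible/cluster.kubernetes.yml containing:

plugin: community.kubernetes.k8s
connections:
  - kubeconfig: /root/.kube/config

ansible-playbook can then be given both sources (for example with two -i options) so the static hosts and the Kubernetes-derived hosts are combined.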

@bmillemathias

Hello,

I don't understand what your problem is. Dumping a lot of content does not help.

Could you state clearly and briefly what you would like to achieve, how you are doing it, and what you are getting?

@Akasurde
Member

Akasurde commented Jul 2, 2020

@603627156 There is a typo in your kubeconfig. The username should be 'cluster-admin' instead of 'cluser-admin' in contexts.

...
- context:
    cluster: kubernetes
    user: cluser-admin
...
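
With the typo fixed, that block would read:

...
- context:
    cluster: kubernetes
    user: cluster-admin
...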

The kubernetes Python library is not surfacing the actual error. The following PR will change this behavior: kubernetes-client/python-base#201

@Akasurde Akasurde self-assigned this Jul 3, 2020
@Akasurde Akasurde added type/bug Something isn't working and removed type/question Further information is requested labels Jul 7, 2020
@Akasurde
Member

PR is merged. Closing.
