array with empty dict, will change every time it is applied #113

Closed
qcu266 opened this issue May 19, 2021 · 1 comment · Fixed by #131
Assignees: abikouo
Labels: jira, type/bug (Something isn't working), verified (The issue is reproduced)

qcu266 (Contributor) commented May 19, 2021

SUMMARY

When the k8s module is used to apply a Kubernetes NetworkPolicy (apply=true), an egress value of [{}] changes every time the playbook is applied: each run alternately removes and re-adds the rule, so the task never becomes idempotent.
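For a NetworkPolicy, egress: [{}] is meaningful: a single empty rule allows all egress traffic, which is distinct from egress: [] (no traffic allowed) and from omitting the field entirely. Any diff or normalization step that prunes empty mappings before comparing specs conflates these cases. The sketch below is a hypothetical illustration of that failure mode (the prune_empty helper is invented for this example, not taken from the collection):

# Hypothetical illustration of the failure mode -- not the collection's code.
# A differ that prunes "empty" values cannot tell these three specs apart,
# even though Kubernetes gives each one different egress semantics.

def prune_empty(obj):
    """Recursively drop empty dicts/lists, as a naive normalizer might."""
    if isinstance(obj, dict):
        cleaned = {k: prune_empty(v) for k, v in obj.items()}
        return {k: v for k, v in cleaned.items() if v not in ({}, [], None)}
    if isinstance(obj, list):
        return [v for v in map(prune_empty, obj) if v not in ({}, [], None)]
    return obj

allow_all = {"egress": [{}]}  # one empty rule: allow all egress
deny_all = {"egress": []}     # rules present but empty: deny all egress
unset = {}                    # field absent: egress not managed here

# All three collapse to the same pruned form, so a pruning differ can never
# settle on whether egress still needs to be applied.
assert prune_empty(allow_all) == prune_empty(deny_all) == prune_empty(unset) == {}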

ISSUE TYPE
  • Bug Report
COMPONENT NAME
k8s
ANSIBLE VERSION
ansible 2.10.1
  config file = /home/qcu266/ansible-debug/ansible.cfg
  configured module search path = ['/home/qcu266/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/qcu266/miniconda3/lib/python3.7/site-packages/ansible
  executable location = /home/qcu266/miniconda3/bin/ansible
  python version = 3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0]
CONFIGURATION
COLLECTIONS_PATHS(/home/qcu266/ansible-debug/ansible.cfg) = ['/home/qcu266/ansible-debug']
DEFAULT_HOST_LIST(/home/qcu266/ansible-debug/ansible.cfg) = ['/home/qcu266/ansible-debug/inventory/hosts']
DEFAULT_PRIVATE_ROLE_VARS(/home/qcu266/ansible-debug/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/qcu266/ansible-debug/ansible.cfg) = yaml
GALAXY_ROLE_SKELETON_IGNORE(/home/qcu266/ansible-debug/ansible.cfg) = ['^.git$', '^.*/.git_keep$']
INVENTORY_ENABLED(/home/qcu266/ansible-debug/ansible.cfg) = ['k8s']
OS / ENVIRONMENT
OS: Ubuntu 18.04 bionic
Kernel: x86_64 Linux 5.0.0-23-generic
STEPS TO REPRODUCE
  • NetworkPolicy manifest (a parsing sanity check for this file follows the playbook below):
# np.yml 
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: test-np
  labels:
    app: test-np
  annotations:
    {}
spec:
  podSelector: 
    matchLabels:
      app: test-np
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - ports:
      - port: 9093
        protocol: TCP
  egress: 
    - {}
  • playbook:
# test-play.yml 
- hosts: "127.0.0.1"
  connection: local
  gather_facts: False
  tasks:
  - name: apply networkpolicy
    community.kubernetes.k8s:
      namespace: "default"
      definition: "{{ lookup('file', 'np.yml' ) }}"
      apply:      true
      force:      false
      validate:
        fail_on_error: yes
        strict: yes
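Before blaming the manifest, a quick sanity check (assuming PyYAML, which Ansible itself depends on) confirms that loading np.yml preserves the empty rule, so the flip-flop is introduced later, in the apply/diff step, not during YAML parsing:

# Sanity check (assumes PyYAML): np.yml round-trips with the empty egress
# rule intact, so YAML parsing is not what loses the [{}] value.
import yaml

with open("np.yml") as f:
    manifest = yaml.safe_load(f)

assert manifest["spec"]["egress"] == [{}]
print(manifest["spec"]["egress"])  # [{}]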
EXPECTED RESULTS

egress should remain [{}] after the first apply, and subsequent runs of the same playbook should report no change.

ACTUAL RESULTS
$ ansible-playbook -i 'localhost,' test.yml --diff -vvvv
ansible-playbook 2.10.1
  config file = /home/qcu266/ansible-debug/ansible.cfg
  configured module search path = ['/home/qcu266/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/qcu266/miniconda3/lib/python3.7/site-packages/ansible
  executable location = /home/qcu266/miniconda3/bin/ansible-playbook
  python version = 3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0]
Using /home/qcu266/ansible-debug/ansible.cfg as config file
setting up inventory plugins
redirecting (type: inventory) ansible.builtin.k8s to community.kubernetes.k8s
Loading collection community.kubernetes from /home/qcu266/ansible-debug/ansible_collections/community/kubernetes
Skipping due to inventory source not existing or not being readable by the current user
ansible_collections.community.kubernetes.plugins.inventory.k8s declined parsing localhost, as it did not pass its verify_file() method
[WARNING]: Unable to parse localhost, as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
redirecting (type: action) community.kubernetes.k8s to community.kubernetes.k8s_info
redirecting (type: callback) ansible.builtin.yaml to community.general.yaml
Loading collection community.general from /home/qcu266/miniconda3/lib/python3.7/site-packages/ansible_collections/community/general
redirecting (type: callback) ansible.builtin.yaml to community.general.yaml
Loading callback plugin community.general.yaml of type stdout, v2.0 from /home/qcu266/miniconda3/lib/python3.7/site-packages/ansible_collections/community/general/plugins/callback/yaml.py

PLAYBOOK: test.yml *************************************************************************************************************************************************************************************************************************
Positional arguments: test.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
diff: True
inventory: ('localhost,',)
forks: 5
1 plays in test.yml

PLAY [127.0.0.1] ***************************************************************************************************************************************************************************************************************************
META: ran handlers
redirecting (type: action) community.kubernetes.k8s to community.kubernetes.k8s_info

TASK [apply networkpolicy] *****************************************************************************************************************************************************************************************************************
task path: /home/qcu266/ansible-debug/test.yml:6
File lookup using /home/qcu266/ansible-debug/np.yml as file
redirecting (type: action) community.kubernetes.k8s to community.kubernetes.k8s_info
redirecting (type: action) community.kubernetes.k8s to community.kubernetes.k8s_info
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: qcu266
<127.0.0.1> EXEC /bin/sh -c 'echo ~qcu266 && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/qcu266/.ansible/tmp `"&& mkdir "` echo /home/qcu266/.ansible/tmp/ansible-tmp-1621427323.2463717-13840-159408552289947 `" && echo ansible-tmp-1621427323.2463717-13840-159408552289947="` echo /home/qcu266/.ansible/tmp/ansible-tmp-1621427323.2463717-13840-159408552289947 `" ) && sleep 0'
Using module file /home/qcu266/ansible-debug/ansible_collections/community/kubernetes/plugins/modules/k8s.py
<127.0.0.1> PUT /home/qcu266/.ansible/tmp/ansible-local-138342m0ocwak/tmpbu48vx0q TO /home/qcu266/.ansible/tmp/ansible-tmp-1621427323.2463717-13840-159408552289947/AnsiballZ_k8s.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/qcu266/.ansible/tmp/ansible-tmp-1621427323.2463717-13840-159408552289947/ /home/qcu266/.ansible/tmp/ansible-tmp-1621427323.2463717-13840-159408552289947/AnsiballZ_k8s.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/home/qcu266/miniconda3/bin/python /home/qcu266/.ansible/tmp/ansible-tmp-1621427323.2463717-13840-159408552289947/AnsiballZ_k8s.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/qcu266/.ansible/tmp/ansible-tmp-1621427323.2463717-13840-159408552289947/ > /dev/null 2>&1 && sleep 0'
--- before
+++ after
@@ -1,5 +1,5 @@
 metadata:
-  generation: 13
+  generation: 14
   managedFields:
   - apiVersion: networking.k8s.io/v1
     fieldsType: FieldsV1
@@ -12,7 +12,6 @@
           .: {}
           f:app: {}
       f:spec:
-        f:egress: {}
         f:ingress: {}
         f:podSelector:
           f:matchLabels:
@@ -22,7 +21,5 @@
     manager: OpenAPI-Generator
     operation: Update
     time: '2021-05-19T12:28:16Z'
-  resourceVersion: '233707268'
-spec:
-  egress:
-  - {}
+  resourceVersion: '233708015'
+spec: {}

changed: [127.0.0.1] => changed=true 
  diff:
    after:
      metadata:
        generation: 14
        managedFields:
        - apiVersion: networking.k8s.io/v1
          fieldsType: FieldsV1
          fieldsV1:
            f:metadata:
              f:annotations:
                .: {}
                f:kubectl.kubernetes.io/last-applied-configuration: {}
              f:labels:
                .: {}
                f:app: {}
            f:spec:
              f:ingress: {}
              f:podSelector:
                f:matchLabels:
                  .: {}
                  f:app: {}
              f:policyTypes: {}
          manager: OpenAPI-Generator
          operation: Update
          time: '2021-05-19T12:28:16Z'
        resourceVersion: '233708015'
      spec: {}
    before:
      metadata:
        generation: 13
        managedFields:
        - apiVersion: networking.k8s.io/v1
          fieldsType: FieldsV1
          fieldsV1:
            f:metadata:
              f:annotations:
                .: {}
                f:kubectl.kubernetes.io/last-applied-configuration: {}
              f:labels:
                .: {}
                f:app: {}
            f:spec:
              f:egress: {}
              f:ingress: {}
              f:podSelector:
                f:matchLabels:
                  .: {}
                  f:app: {}
              f:policyTypes: {}
          manager: OpenAPI-Generator
          operation: Update
          time: '2021-05-19T12:28:16Z'
        resourceVersion: '233707268'
      spec:
        egress:
        - {}
  invocation:
    module_args:
      api_key: null
      api_version: v1
      append_hash: false
      apply: true
      ca_cert: null
      client_cert: null
      client_key: null
      context: null
      delete_options: null
      force: false
      host: null
      kind: null
      kubeconfig: null
      merge_type: null
      name: null
      namespace: default
      password: null
      persist_config: null
      proxy: null
      resource_definition: |-
        kind: NetworkPolicy
        apiVersion: networking.k8s.io/v1
        metadata:
          name:  test-np
          labels:
            app: test-np
          annotations:
            {}
        spec:
          podSelector:
            matchLabels:
              app: test-np
          policyTypes:
            - Ingress
            - Egress
          ingress:
            - ports:
              - port: 9093
                protocol: TCP
          egress:
            - {}
      src: null
      state: present
      template: null
      username: null
      validate:
        fail_on_error: true
        strict: true
        version: null
      validate_certs: null
      wait: false
      wait_condition: null
      wait_sleep: 5
      wait_timeout: 120
  method: apply
  result:
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: '{"apiVersion":"networking.k8s.io/v1","kind":"NetworkPolicy","metadata":{"annotations":{},"labels":{"app":"test-np"},"name":"test-np","namespace":"default"},"spec":{"egress":[{}],"ingress":[{"ports":[{"port":9093,"protocol":"TCP"}]}],"podSelector":{"matchLabels":{"app":"test-np"}},"policyTypes":["Ingress","Egress"]}}'
      creationTimestamp: '2021-05-19T12:05:18Z'
      generation: 14
      labels:
        app: test-np
      managedFields:
      - apiVersion: networking.k8s.io/v1
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:annotations:
              .: {}
              f:kubectl.kubernetes.io/last-applied-configuration: {}
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:ingress: {}
            f:podSelector:
              f:matchLabels:
                .: {}
                f:app: {}
            f:policyTypes: {}
        manager: OpenAPI-Generator
        operation: Update
        time: '2021-05-19T12:28:16Z'
      name: test-np
      namespace: default
      resourceVersion: '233708015'
      selfLink: /apis/networking.k8s.io/v1/namespaces/default/networkpolicies/test-np
      uid: fda28c53-41f7-4a04-acc8-c5e9ff9973d6
    spec:
      ingress:
      - ports:
        - port: 9093
          protocol: TCP
      podSelector:
        matchLabels:
          app: test-np
      policyTypes:
      - Ingress
      - Egress
META: ran handlers
META: ran handlers

PLAY RECAP *********************************************************************************************************************************************************************************************************************************
127.0.0.1                  : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
  • The next playbook run adds egress: [{}] back:
--- before
+++ after
@@ -1,5 +1,5 @@
 metadata:
-  generation: 14
+  generation: 15
   managedFields:
   - apiVersion: networking.k8s.io/v1
     fieldsType: FieldsV1
@@ -12,6 +12,7 @@
           .: {}
           f:app: {}
       f:spec:
+        f:egress: {}
         f:ingress: {}
         f:podSelector:
           f:matchLabels:
@@ -20,6 +21,8 @@
         f:policyTypes: {}
     manager: OpenAPI-Generator
     operation: Update
-    time: '2021-05-19T12:28:16Z'
-  resourceVersion: '233708015'
-spec: {}
+    time: '2021-05-19T12:32:32Z'
+  resourceVersion: '233714050'
+spec:
+  egress:
+  - {}
  • Running again removes egress: [{}] once more (a model of this add/remove cycle follows the diff below):
--- before
+++ after
@@ -1,5 +1,5 @@
 metadata:
-  generation: 15
+  generation: 16
   managedFields:
   - apiVersion: networking.k8s.io/v1
     fieldsType: FieldsV1
@@ -12,7 +12,6 @@
           .: {}
           f:app: {}
       f:spec:
-        f:egress: {}
         f:ingress: {}
         f:podSelector:
           f:matchLabels:
@@ -22,7 +21,5 @@
     manager: OpenAPI-Generator
     operation: Update
     time: '2021-05-19T12:32:32Z'
-  resourceVersion: '233714050'
-spec:
-  egress:
-  - {}
+  resourceVersion: '233716986'
+spec: {}
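
The add/remove cycle above is consistent with a patch computation in which two code paths disagree about whether egress: [{}] counts as set. The following model is an assumption about the bug class, not the collection's actual merge code (is_effectively_empty and buggy_patch are invented names); it reproduces the exact oscillation shown in the diffs:

# Hypothetical model of the flip-flop -- invented for illustration, not the
# collection's merge code. The deletion rule treats a list of empty mappings
# as unset, while the addition rule compares raw values; together they
# alternate between removing and re-adding the field.

def is_effectively_empty(value):
    """A pruning differ considers [{}] equivalent to an unset field."""
    if isinstance(value, list):
        return all(is_effectively_empty(v) for v in value)
    if isinstance(value, dict):
        return all(is_effectively_empty(v) for v in value.values())
    return value is None

def buggy_patch(desired, live):
    patch = {}
    for key, want in desired.items():
        if key in live and is_effectively_empty(want):
            patch[key] = None   # looks unset after pruning: schedule deletion
        elif key not in live:
            patch[key] = want   # missing from live object: add verbatim
    return patch

live = {"egress": [{}]}
desired = {"egress": [{}]}

for run in range(1, 5):
    patch = buggy_patch(desired, live)
    for key, value in patch.items():
        if value is None:
            live.pop(key, None)
        else:
            live[key] = value
    print(f"run {run}: patch={patch} live={live}")
# run 1: patch={'egress': None} live={}
# run 2: patch={'egress': [{}]} live={'egress': [{}]}
# run 3: patch={'egress': None} live={}
# run 4: patch={'egress': [{}]} live={'egress': [{}]}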
@abikouo transferred this issue from ansible-collections/community.kubernetes on May 21, 2021
@gravesm added the jira, type/bug (Something isn't working), and needs_verify labels on Jun 4, 2021
@abikouo self-assigned this on Jun 11, 2021
@abikouo added the verified (The issue is reproduced) label and removed the needs_verify label on Jun 11, 2021
abikouo (Contributor) commented Jun 11, 2021

@qcu266 could you please validate that it is working properly now with the fix in #131?
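
A minimal way to validate the fix, sketched under the assumption that the PLAY RECAP line reports changed=0 on an idempotent run (file and inventory names are taken from the reproduction above):

# Hypothetical verification script: run the playbook twice; after the fix,
# the second run should report changed=0 in its PLAY RECAP.
import subprocess

CMD = ["ansible-playbook", "-i", "localhost,", "test.yml", "--diff"]

subprocess.run(CMD, check=True)  # first run may legitimately report a change
second = subprocess.run(CMD, check=True, capture_output=True, text=True)

assert "changed=0" in second.stdout, "apply is still not idempotent"
print("second run reported no changes")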
