Fix waiting for daemonset when desired number of pods is 0 #756
Conversation
Build succeeded. ✔️ ansible-galaxy-importer SUCCESS in 4m 21s
fa4c8ab to 3dc8b12
Build succeeded. ✔️ ansible-galaxy-importer SUCCESS in 3m 39s
Could you add an integration test for this?
Build succeeded. ✔️ ansible-galaxy-importer SUCCESS in 3m 31s
f76539a to 9fb7bbb
Build succeeded. ✔️ ansible-galaxy-importer SUCCESS in 4m 30s
@gravesm Sure, I've just pushed a new commit with integration tests.
9fb7bbb to 45331cb
Build succeeded. ✔️ ansible-galaxy-importer SUCCESS in 4m 17s
Build succeeded (gate pipeline). ✔️ ansible-galaxy-importer SUCCESS in 2m 58s
b07fbd6 into ansible-collections:main
Backport to stable-3: 💚 backport PR created ✅ Backported as #761 🤖 @patchback
Backport to stable-5: 💚 backport PR created ✅ Backported as #762 🤖 @patchback
Reviewed-by: Mike Graves <mgraves@redhat.com> (cherry picked from commit b07fbd6)
This is a backport of PR #756 as merged into main (b07fbd6).
SUMMARY
Version 3.3.0 of the ansible collection kubernetes.core comes with several improvements and bugfixes.
ISSUE TYPE
New release pull request
Changelog
Minor Changes
- k8s_drain - Improve error message for pod disruption budget when draining a node (#797).
Bugfixes
- helm - Helm version checks did not support RC versions. They now accept any version tags. (#745).
- helm_pull - Apply no_log=True to pass_credentials to silence a false positive warning (#796).
- k8s_drain - Fix k8s_drain does not wait for single pod (#769).
- k8s_drain - Fix k8s_drain runs into a timeout when evicting a pod which is part of a stateful set (#792).
- kubeconfig option should not appear in module invocation log (#782).
- kustomize - kustomize plugin fails with deprecation warnings (#639).
- waiter - Fix waiting for daemonset when desired number of pods is 0 (#756).
ADDITIONAL INFORMATION
Collection kubernetes.core version 3.3.0 is compatible with ansible-core>=2.14.0.
Reviewed-by: Alina Buzachis
Reviewed-by: Yuriy Novostavskiy
Reviewed-by: Mike Graves <mgraves@redhat.com>
SUMMARY
This release comes with the new helm_registry_auth module, improvements to the error messages in the k8s_drain module, a new insecure_registry parameter for the helm_template module, and several bug fixes.
ISSUE TYPE
New release pull request
Changelog
Minor Changes
- Bump version of ansible-lint to minimum 24.7.0 (#765).
- Parameter insecure_registry added to helm_template as the equivalent of insecure-skip-tls-verify (#805).
- connection/kubectl.py - Added an example of using the kubectl connection plugin to the documentation (#741).
- k8s_drain - Improve error message for pod disruption budget when draining a node (#797).
Bugfixes
- helm - Helm version checks did not support RC versions. They now accept any version tags. (#745).
- helm_pull - Apply no_log=True to pass_credentials to silence a false positive warning (#796).
- k8s_drain - Fix k8s_drain does not wait for single pod (#769).
- k8s_drain - Fix k8s_drain runs into a timeout when evicting a pod which is part of a stateful set (#792).
- kubeconfig option should not appear in module invocation log (#782).
- kustomize - kustomize plugin fails with deprecation warnings (#639).
- waiter - Fix waiting for daemonset when desired number of pods is 0 (#756).
New Modules
- helm_registry_auth - Helm registry authentication module
ADDITIONAL INFORMATION
Collection kubernetes.core version 3.1.0 is compatible with ansible-core>=2.15.0.
Reviewed-by: Mike Graves <mgraves@redhat.com>
Fixes #755
SUMMARY
Because we don't have any node with the non_exisiting_label label (see the code below), the desired number of Pods will be 0. Kubernetes won't create the .status.updatedNumberScheduled field (at least on version v1.27), because no Pods are ever going to be created. So if .status.updatedNumberScheduled doesn't exist, we should assume that the number is 0.
Code to reproduce:
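```yaml
- name: Create daemonset
  kubernetes.core.k8s:
    state: present
    wait: true
    definition:
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: my-daemonset
        namespace: default
      spec:
        selector:
          matchLabels:
            app: my-app
        template:
          metadata:
            labels:
              app: my-app
          spec:
            containers:
              - name: my-container
                image: nginx
            nodeSelector:
              non_exisiting_label: 1
```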
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
kubernetes.core.plugins.module_utils.k8s.waiter
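In other words, the waiter's DaemonSet readiness check has to fall back to 0 when .status.updatedNumberScheduled is missing. The snippet below is a minimal sketch of that idea, assuming a readiness predicate over the raw DaemonSet dict; the function name and the exact set of fields checked are illustrative, not the actual code in this module.

```python
# Illustrative sketch, not the actual kubernetes.core waiter implementation.
# The idea of the fix: when a DaemonSet's node selector matches no nodes,
# desiredNumberScheduled is 0 and Kubernetes never populates
# updatedNumberScheduled, so a missing field must default to 0 instead of
# leaving the wait condition unsatisfied until the timeout.


def daemonset_ready(daemonset: dict) -> bool:
    """Return True once the DaemonSet status has converged on its spec."""
    status = daemonset.get("status") or {}
    desired = status.get("desiredNumberScheduled", 0)
    updated = status.get("updatedNumberScheduled", 0)   # absent when desired == 0
    available = status.get("numberAvailable", 0)        # also absent when desired == 0
    observed = status.get("observedGeneration", 0)
    generation = daemonset.get("metadata", {}).get("generation", 0)
    return observed == generation and updated == desired and available == desired


# The DaemonSet from the reproducer converges immediately: every count is 0.
example = {
    "metadata": {"generation": 1},
    "status": {"currentNumberScheduled": 0, "desiredNumberScheduled": 0,
               "numberMisscheduled": 0, "numberReady": 0, "observedGeneration": 1},
}
assert daemonset_ready(example)
```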
ADDITIONAL INFORMATION
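The DaemonSet status returned by the task, and the kubectl view (note the absence of an updatedNumberScheduled field):

```
TASK [Create daemonset] **********************************************************************
changed: [controlplane] => {"changed": true, "duration": 5, "method": "create", "result": {...,
  "status": {"currentNumberScheduled": 0, "desiredNumberScheduled": 0, "numberMisscheduled": 0,
             "numberReady": 0, "observedGeneration": 1}}}

~$ kubectl get ds
NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR           AGE
my-daemonset   0         0         0       0            0           non_exisiting_label=1   30s
```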