What happened:
During RollingUpdate, old daemons are not terminated after new daemons are successfully created and running.
What you expected to happen:
Old daemons should be terminated after the new daemons are successfully created and running.
How to reproduce it (as minimally and precisely as possible):
1. Install kruise v1.2.0.
2. Create ads.yaml (a sketch of such a manifest is shown after these steps).
3. Update the image version (ads.yaml#L28; v2.6.0 -> v2.7.0).
   a. Observe that new daemons are created and running.
   b. Observe that old daemons are not terminated.
4. Delete an old daemon pod with kubectl delete pod <pod_name>, OR update the image again (ads.yaml#L28; v2.7.0 -> v2.8.0).
   a. Observe that the other old daemon pods are terminated.
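For reference, a minimal sketch of what ads.yaml could look like: the fluentd-elasticsearch DaemonSet example from the Kubernetes docs, adapted to an OpenKruise Advanced DaemonSet. The image tag, minReadySeconds, and maxSurge values here are illustrative assumptions, not the exact manifest from this report.

```yaml
# Hypothetical ads.yaml, adapted from the fluentd-elasticsearch DaemonSet example
# in the Kubernetes docs. apiVersion/kind target the OpenKruise Advanced DaemonSet.
apiVersion: apps.kruise.io/v1alpha1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  minReadySeconds: 15          # assumed non-zero, since the bug involves minReadySeconds
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1              # assumed: surge-style update, matching the "surging pods" in the reply
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.6.0   # the tag bumped to v2.7.0, then v2.8.0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
```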
Anything else we need to know?:
- After manually terminating one of the old daemon pods with kubectl delete pod <pod_name>, the rest of the old daemon pods start terminating.
- When another RollingUpdate on the same ads is started, the old daemon pods start terminating.
- Advanced DaemonSet RollingUpdate worked fine in kruise v1.0.1.
- The same bug is present in kruise v1.1.0.
- The same bug is present with kubectl v1.24.
- The same bug is present when kruise is installed with helm install kruise openkruise/kruise --version 1.2.0.
Environment:
- Kubernetes version (kubectl version): v1.18
- Advanced DaemonSet manifest: ads.yaml (adapted from DaemonSet | Kubernetes)
@aaronseahyh Thanks for reporting. I reproduced your case; it turns out to be a bug in the minReadySeconds handling, which lacks an enqueueAfter for those surging pods.