Cloudwatch agent pods don't get restarted when doing rollout-restart #124
I was able to reproduce the issue with a new cluster.
The current workaround is to delete the daemonset. The EKS Addon will recreate it.
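A minimal sketch of the workaround, using the DaemonSet name and namespace already shown in this issue; the addon controller should recreate the DaemonSet shortly after deletion:

```shell
# Delete the DaemonSet; the EKS addon recreates it
kubectl delete daemonset cloudwatch-agent -n amazon-cloudwatch

# Watch the pods come back up
kubectl get pods -n amazon-cloudwatch -w
```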
Describe the bug
All the pods that are part of the DaemonSet cloudwatch-agent are not getting restarted when doing
kubectl rollout restart ds cloudwatch-agent -n amazon-cloudwatch
Only one pod is getting restarted.
Steps to reproduce
Created a cluster of version 1.28 and installed the addon Amazon CloudWatch Observability of version v1.2.2-eksbuild.1.
Initially we have 2 pods:
1st Restart:
We can see that only 1 pod got restarted; the other pod is still running:
Same behaviour every time:
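A sketch of the commands used to reproduce and observe this, assuming the addon's default namespace and DaemonSet name:

```shell
# Check the pods before the restart
kubectl get pods -n amazon-cloudwatch

# Trigger the rollout restart of the agent DaemonSet
kubectl rollout restart ds cloudwatch-agent -n amazon-cloudwatch

# Watch which pods are recreated; only one pod gets replaced
kubectl get pods -n amazon-cloudwatch -w
```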
What did you expect to see?
I expected all the pods of the DaemonSet to be restarted.
What did you see instead?
Instead, I see that only 1 pod is getting restarted.
What version did you use?
v1.2.2-eksbuild.1
What config did you use?
NA
Environment
Tried for cluster versions 1.26, 1.27 & 1.28
Additional context
I could observe a difference in the creation of the controllerrevisions.
For a sample DaemonSet where rollout restart works perfectly fine, 1 new controllerrevision is created when we perform a rollout restart.
Whereas in the case of the cloudwatch-agent pods, the 1st controllerrevision is deleted and 2 new controllerrevisions are created; the 3rd one is the same as the 1st one. Below is the pattern:
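A minimal sketch of how to inspect the controllerrevisions and compare them, assuming the default namespace; `<revision-name>` is a placeholder for the names the previous command prints:

```shell
# List the controllerrevisions before and after the rollout restart
kubectl get controllerrevisions -n amazon-cloudwatch

# Dump a specific revision to compare its pod template with the others
kubectl get controllerrevision <revision-name> -n amazon-cloudwatch -o yaml
```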