Use systemd as the containerd cgroup driver #717
Conversation
/assign @abeer91
FYI @ravisinha0506
@Callisto13 related to your #593 (comment), I assume that the changes in here aren't likely to cause issues with the legacy path, as containerd wouldn't be in use?
@stevehipwell from what I can see that is correct. We are not exposing the containerd option in the legacy codepath, so there should be no driver conflict. I am happy to verify with a test build before you release if you have any concerns, but I think we should be fine.
Thanks @Callisto13, I think we should be good; just waiting to hear from a maintainer on this.
(force-pushed from 7cadb3a to a96413e)
@abeer91 could you take a look at this?
(force-pushed from a96413e to 1d774f5)
@abeer91 if you're not the right person to look into this, could you let me know who is?
Could one of the project maintainers please review this or offer some feedback?
@stevehipwell Thanks for submitting the PR. We are prioritizing testing of the systemd cgroup driver based on your changes. I don't have an ECD (estimated completion date) to share currently, but will update within 2 weeks.
Thanks @rtripat, hopefully we can get this solved ASAP.
@rtripat how is the testing going?
@rtripat it's been almost 2 months since you said you'd get back in 2 weeks; how is this going?
@stevehipwell We are actively working on the change. Appreciate the patience.
@rtripat are you by any chance also looking at cgroup v2?
We need to address this kubelet issue by updating our kubelet configuration. Regarding cgroup v2, we are unable to make this change until a newer version of runc is available.
@cartermckinnon after having a look through that issue, it looks like it's only relevant for Kubernetes v1.22? If so, and as EKS v1.22 isn't due until "early 2022", can't this be added for the current EKS versions? Re cgroup v2: is it worth adding a new issue to this repo to track the progress?
My understanding is:
Older Kubernetes release lines are affected by the first point regardless.
I've created #824 to track cgroup v2.
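For anyone following along, a quick general-purpose check (not from this thread) of which cgroup version a host is running:

```bash
# Print the filesystem type mounted at the cgroup root.
# "cgroup2fs" indicates cgroup v2 (unified hierarchy); "tmpfs" indicates cgroup v1.
stat -fc %T /sys/fs/cgroup/
```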
@rtripat could we get a status update on this?
Apologies for the delay, our packaging team has had to prioritize issues related to the log4j vulnerabilities. We will proceed as soon as that work is complete. Thanks for your patience! 🙏
Unfortunately, we aren't able to make this change in the near future. Our packaging team has to prioritize stability across a variety of Amazon Linux distributions, and the changelog of runc gives us pause.
In the meantime, if anyone has experienced the sort of stability issues hinted at in the Kubernetes documentation, we would love to hear about it.
@cartermckinnon I'm interested in why this isn't seen as a high priority and what the blockers are here. I guess this is the final nail in the coffin for running K8s workloads on AL2 (and I assume AL2022), and we should all move to Bottlerocket ASAP to stay in line with the community.
EDIT: OK, I looked at the runc changelog; that's a bummer if you don't intend to update.
I'm not an authority on the priorities of the Amazon Linux packaging folks, but I can say: we have many users, and I'm not aware of any case in which we've attributed instability to the presence of two cgroup managers. Again, if you have observed this type of instability, we're very interested in hearing about it.
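As a side note (not part of the thread), a quick way to see which driver each component is configured with on a node; the paths below are the standard EKS AMI locations, assumed here:

```bash
# Show the runc runtime options in containerd's config (look for SystemdCgroup = true/false).
grep -A 3 'runtimes.runc.options' /etc/containerd/config.toml

# Show the kubelet's configured cgroup driver.
grep cgroupDriver /etc/kubernetes/kubelet/kubelet-config.json
```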
@stevehipwell I have good news! We now have a version of runc that unblocks this change. There are a couple of things we need to get this merged; I'm happy to open a separate PR if you don't have the cycles. 😄
@cartermckinnon I'll update my PR.
(force-pushed from 1d774f5 to fa88ad7)
@cartermckinnon this should be good now. I've copied the shell script patterns from the rest of the script rather than the ones I'd have used, but I can update the PR to make them more correct if required.
Thanks, @stevehipwell, LGTM! I'll do some sanity tests and try to get this merged this week.
I built 1.19-1.22 AMIs from this branch, and created a nodegroup using each as follows:
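The exact command wasn't captured here; a plausible eksctl invocation for this kind of test, with the cluster name, region, and AMI ID as placeholders, might be:

```bash
# Hypothetical sketch: create an unmanaged nodegroup from a custom AMI built off this branch.
eksctl create nodegroup \
  --cluster test-cluster \
  --region us-west-2 \
  --name systemd-cgroup-test \
  --node-ami ami-0123456789abcdef0 \
  --nodes 2 \
  --managed=false
```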
Pods came up, LGTM. I want to ship this change in a dedicated AMI release, so I'll wait to merge until we ship #921. @stevehipwell would you mind addressing the conflicts that came from #921?
(force-pushed from 3a49afd to 049ae30)
@cartermckinnon I've rebased this PR, so it should be good to merge when you're ready.
Any updates on this? Is there an ETA for a release?
(force-pushed from 049ae30 to 427e7b8)
@cartermckinnon could we get this merged and released?
Lgtm!
LGTM
I'm going to merge this. We have an AMI build currently making its way through our release process, and this change won't make it into that version; it will land in the subsequent version. I'll update here once it's available. Thanks for all your work on this, @stevehipwell! 🙏
This is creating a mismatch between the kubelet and containerd cgroup drivers when creating unmanaged nodegroups with eksctl.
This is affecting clusters using eksctl v0.90.0 (or earlier). Up to this version, eksctl generates its own kubelet config, which does not include a cgroupDriver setting; this causes a mismatch where containerd uses systemd while the kubelet defaults to cgroupfs.
AFAIK older versions of eksctl don't set the cgroup driver in their generated kubelet config.
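To illustrate the fix on an affected node (a hypothetical sketch, not something prescribed in this thread; the kubelet config path is the one the EKS AMI uses):

```bash
# Point the kubelet at the same cgroup driver containerd is using, then restart it.
# Requires jq; back up the original config first.
cp /etc/kubernetes/kubelet/kubelet-config.json /etc/kubernetes/kubelet/kubelet-config.json.bak
jq '.cgroupDriver = "systemd"' /etc/kubernetes/kubelet/kubelet-config.json.bak \
  > /etc/kubernetes/kubelet/kubelet-config.json
systemctl restart kubelet
```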
aws/containers-roadmap#1210
Description of changes:
This PR sets systemd as the cgroup driver when using containerd, as recommended by the Kubernetes docs. It is complementary to #593, which adds support for the systemd cgroup driver to Docker, but it shouldn't be blocked by legacy eksctl constraints, as containerd support is newer than the systemd cgroup support there.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
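For context, the containerd side of a change like this typically sets the standard CRI runc option; a minimal sketch using containerd's standard table names, not quoted from this PR's diff:

```bash
# Hypothetical sketch: enable the systemd cgroup driver for containerd's runc runtime.
cat >> /etc/containerd/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
systemctl restart containerd
```

The kubelet's cgroupDriver then needs to be set to systemd to match, as discussed earlier in the thread.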