Use systemd cgroup driver for v1.24.0+ #2737
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: BenTheElder
Force-pushed from 3c7d658 to 4938201
Force-pushed from 1c3c1c0 to a5cf544
hmm 1.24 kubelet unhealthy
right ... all the entrypoint tricks are aimed at cgroupfs.
Force-pushed from a5cf544 to 524cdd7
So, when using the systemd cgroup driver, kubelet will automatically append [...]. But we set up [...]. We'll need to handle this differently.
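For context, the cgroup driver is selected through kubelet's KubeletConfiguration. A minimal sketch of the relevant fragment, embedded as a Go raw string only for illustration (the constant name is an assumption, not this PR's code):

```go
package config

// Sketch of the KubeletConfiguration fragment that selects the systemd
// cgroup driver. With this set, kubelet maps its cgroup paths to systemd
// slice names instead of raw cgroupfs paths. Constant name is illustrative.
const kubeletConfigSystemdSketch = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
`
```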
Got back to this a bit today:
Debating whether we should plumb through a switch or just always set up both; leaning towards the latter so it's simpler to ensure either can be used.
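A loose sketch of the "always set up both" idea, assuming a unified cgroup v2 hierarchy; the paths and function name are illustrative guesses, not the entrypoint's actual logic:

```go
package cgroups

import "os"

// setupKubeletCgroupBothDrivers pre-creates the kubelet cgroup under both
// naming conventions, so kubelet can start regardless of which cgroup
// driver it is configured with. Paths are assumptions for illustration.
func setupKubeletCgroupBothDrivers() error {
	for _, dir := range []string{
		"/sys/fs/cgroup/kubelet",       // what the cgroupfs driver expects
		"/sys/fs/cgroup/kubelet.slice", // what the systemd driver expects
	} {
		if err := os.MkdirAll(dir, 0o755); err != nil {
			return err
		}
	}
	return nil
}
```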
Force-pushed from d1c5998 to 9b0db6c
Force-pushed from 18834fa to 5e58731
logs still show failing to remove the rdma cgroup, but that's not new to this PR / driver kubernetes/kubernetes#109182
We need to test this in more environments before moving forward, just to be on the safe side that we haven't missed a quirk. I'm pushing a node image from this, will update the PR with it so that GitHub Actions environments pick it up, and then see about getting it tested in some additional environments.
with podman the node only got this far:
exit status 1 https://github.com/kubernetes-sigs/kind/actions/runs/2296780194
we still have the rdma issue https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_kind/2737/pull-kind-conformance-parallel-ipv6/1523846989832261632 but that's not new
for use implementing containerd config patching at node image build time
this code is still gnarly
TODO: just set up a real systemd slice?
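Roughly, the containerd side of that patching comes down to toggling the runc runtime's SystemdCgroup option in the CRI plugin config. A sketch of the fragment involved, embedded as a Go raw string in the project's style (the constant name is an assumption, not the PR's actual code):

```go
package patches

// Sketch of the containerd CRI config fragment that switches runc to the
// systemd cgroup driver; a node image build would merge something like this
// into /etc/containerd/config.toml. Constant name is illustrative.
const containerdSystemdCgroupPatch = `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
```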
Force-pushed from 32c9c29 to 3f553a8
SystemdCgroup = false
`

func configureContainerdSystemdCgroupFalse(containerCmdr exec.Cmder, config string) error {
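For readers without the diff context, a hypothetical sketch of what a helper with this shape might do on a running node: flip the SystemdCgroup flag in containerd's config and restart containerd. It uses plain os/exec and docker exec for illustration; it is not the PR's implementation, and the command details are assumptions.

```go
package nodeutils

import (
	"fmt"
	"os/exec"
)

// configureContainerdSystemdCgroup is a hypothetical illustration: rewrite
// the SystemdCgroup setting in the node container's containerd config and
// restart containerd so the change takes effect.
func configureContainerdSystemdCgroup(node string, enabled bool) error {
	sedExpr := fmt.Sprintf("s/SystemdCgroup = .*/SystemdCgroup = %t/", enabled)
	if err := exec.Command("docker", "exec", node,
		"sed", "-i", sedExpr, "/etc/containerd/config.toml").Run(); err != nil {
		return fmt.Errorf("patching containerd config on %s: %w", node, err)
	}
	return exec.Command("docker", "exec", node,
		"systemctl", "restart", "containerd").Run()
}
```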
can this fail against an image that was built for 1.24.x with an old version of kind, without the containerd change?
yes, users that upgrade to kind v0.13 will need to use an image built with a newer version;
we have no official images built that way.
for custom-built images that are 1.24+ with kind < v0.13, users will need to build new images. we'll have a release note.
the opposite is also true: 1.24+ official images / images built going forward will require v0.13.
this already would have happened in 1.21 with kubeadm defaulting to systemd then; we just shifted it a few releases.
/retest
we have a report that macOS / docker desktop works fine https://kubernetes.slack.com/archives/CEKK1KTN2/p1652192992627339?thread_ts=1652149847.653599&cid=CEKK1KTN2
/lgtm
Working on v0.13.0 based on this, pushing images. Wrote release notes.
see: #1726