improve "kubeadm reset" kube-proxy cleanup #3133
per kubernetes/kubernetes#126596, we don't depend on crictl either after v1.31.
@danwinship i think kube-proxy needs to:
yes, kubeadm now interacts directly with the CRI. 'reset' can create a container and call kube-proxy with the --cleanup flag, but some users of kubeadm don't deploy kube-proxy, so that would mean also checking if the kube-proxy config map exists. i think this is doable, but i don't have bandwidth to play with that. also, i think it may step over the line of the 'best-effort' principle of 'reset', i.e. we consider 'reset' to be a best-effort cleanup of the host - some things are skipped, some things may fail.
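For illustration only, a minimal sketch of that "check whether kube-proxy was actually deployed" step, not taken from this thread; the kube-system/kube-proxy ConfigMap name is the kubeadm default, and using kubectl here (rather than kubeadm's own client-go code) is an assumption:

```
# sketch: skip the kube-proxy cleanup when the add-on was never deployed
if kubectl -n kube-system get configmap kube-proxy >/dev/null 2>&1; then
  echo "kube-proxy ConfigMap found, proceeding with cleanup"
else
  echo "kube-proxy not deployed, skipping cleanup"
fi
```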
+1. A document link is preferred.
also +1 to linking to a website page
putting 1.33 milestone on this, but there is nothing actionable for kubeadm ATM.
We don't currently document the flag anywhere other than the autogenerated man page, but there's barely anything to document: you just run kube-proxy --cleanup.
though fwiw
i don't think many users run kube-proxy --help or check the reference docs for the component. the fact that kubeadm runs kube-proxy as a container doesn't make this easier, as there is no kube-proxy binary on the host. i guess in the kubeadm reset docs we can add a section to run kube-proxy in a privileged container and do the cleanup.
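As a rough sketch of that privileged-container approach (not from this thread; the use of docker, the mounted paths, and the v1.31.0 image tag are all assumptions - the kube-proxy image ships the binary, so nothing extra is needed on the host):

```
# sketch only: run kube-proxy --cleanup once in a privileged, host-network container
sudo docker run --rm --privileged --net=host \
  -v /run/xtables.lock:/run/xtables.lock \
  -v /lib/modules:/lib/modules:ro \
  registry.k8s.io/kube-proxy:v1.31.0 \
  kube-proxy --cleanup
```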
@danwinship so i tested
but that gives me errors:
the alternative approach of running kube-proxy in a container seems to
but doing that, i don't think your statement "but there's barely anything to document" is entirely valid.
INPUT and OUTPUT chains are not cleaned by --cleanup with this container approach.
before kubeadm init:
after kubeadm init:
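One way to see what --cleanup left behind (a hedged check, not part of the original comment; KUBE- is the chain-name prefix kube-proxy uses, and kube-proxy's nftables mode keeps its rules in tables named kube-proxy):

```
# list any kube-proxy chains or jump rules that survived the cleanup
sudo iptables-save | grep -E '^:KUBE-|-j KUBE-'
# check for leftover nftables tables from nftables mode
sudo nft list tables 2>/dev/null | grep kube-proxy
```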
What I meant was there's barely anything to document on kube-proxy's end; if you are able to run kube-proxy --cleanup, that's pretty much all there is to it. Yes, in the context of kubeadm, if you want to do the cleanup you have to run it in a container. I opened kubernetes/kubernetes#129639 about the lack of any mention of --cleanup in the docs.
kubeadm docs PR
update for k/k: |
@pacoxu pointed out the kubeadm reset kube-proxy cleanup instructions in kubernetes/kubernetes#128886. I started writing a better version:
But it seems weird and awkward that we explain how to clean up nftables and the ipvs half of ipvs mode, but we don't explain how to clean up iptables (including the iptables half of ipvs mode). Plus, kubeadm no longer requires the user to have the iptables binaries installed anyway...
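For context, a hedged sketch of what fully manual cleanup would involve; the commands are assumptions based on kube-proxy's default chain/table names, not the text of any docs PR:

```
# iptables mode (and the iptables half of ipvs mode): drop KUBE-* chains and the
# rules that jump to them (crude filter; review the output before restoring on a real node)
sudo iptables-save | grep -v KUBE- | sudo iptables-restore
# ipvs half of ipvs mode: clear all virtual services
sudo ipvsadm --clear
# nftables mode: kube-proxy keeps everything in its own tables
sudo nft delete table ip kube-proxy 2>/dev/null
sudo nft delete table ip6 kube-proxy 2>/dev/null
```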
kube-proxy already has a --cleanup flag that will clean up all iptables, ipvs, and nftables rules. Maybe it would be nicer to tell the user to run something like: (not tested). Or maybe something with crictl (except obviously the user isn't going to have that installed). Or maybe kubeadm reset could (optionally?) run the pod itself via CRI? Or by running kubelet in standalone mode just long enough to run one pod? I'm not sure what would be easiest for kubeadm... Anyway, feel free to just take the patch above if you want the simple fix.
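To make the "something with crictl" idea concrete, here is an untested sketch (not from the issue); the JSON follows the CRI PodSandboxConfig/ContainerConfig field names, while the image tag and file paths are assumptions:

```
# sketch only: run a one-shot kube-proxy --cleanup pod directly via the CRI socket
cat > /tmp/cleanup-pod.json <<'EOF'
{
  "metadata": {"name": "kube-proxy-cleanup", "namespace": "kube-system", "uid": "cleanup", "attempt": 0},
  "linux": {"security_context": {"namespace_options": {"network": 2}, "privileged": true}}
}
EOF
cat > /tmp/cleanup-ctr.json <<'EOF'
{
  "metadata": {"name": "kube-proxy-cleanup"},
  "image": {"image": "registry.k8s.io/kube-proxy:v1.31.0"},
  "command": ["kube-proxy", "--cleanup"],
  "linux": {"security_context": {"privileged": true}}
}
EOF
sudo crictl pull registry.k8s.io/kube-proxy:v1.31.0
POD_ID=$(sudo crictl runp /tmp/cleanup-pod.json)       # network mode 2 == NODE (host network)
CTR_ID=$(sudo crictl create "$POD_ID" /tmp/cleanup-ctr.json /tmp/cleanup-pod.json)
sudo crictl start "$CTR_ID"
```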