containerd is not upgraded because it is not restarted #9019
Comments
I also hit this problem today. Even when I change the containerd preferences and run `cluster.yml --tags=containerd`, the config changes on the nodes, but containerd is not restarted. I have to restart it manually for the new options in the config to take effect.
Can confirm by using
That is exactly how `import_*` works (see also #9279 and https://serverfault.com/questions/875247/whats-the-difference-between-include-tasks-and-import-tasks).
That part seems less clear: why is the condition `container_manager != "containerd"`? Since #8439 was added, it looks like this would cause containerd to be uninstalled when `container_manager != "containerd"`. In any case, the last handler of a given name that is invoked should override earlier ones, so the handlers invoked normally in https://github.com/kubernetes-sigs/kubespray/blob/v2.19.0/roles/container-engine/containerd/tasks/main.yml should override this one: https://github.com/kubernetes-sigs/kubespray/blob/v2.19.0/roles/container-engine/validate-container-engine/tasks/main.yml#L90
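To make the static/dynamic distinction concrete, here is a minimal sketch, assuming a hypothetical role `myrole` and the kubespray `container_manager` variable; it is not the actual kubespray playbook:

```yaml
# Minimal sketch of static import vs. dynamic include.
# With import_role, the `when` below is copied at parse time onto every
# task AND handler the role brings in; with include_role, it gates only
# the include task itself.
- hosts: all
  tasks:
    - name: Static import (condition propagates into the role's handlers)
      import_role:
        name: myrole
      when: container_manager != "containerd"

    - name: Dynamic include (condition applies to this task only)
      include_role:
        name: myrole
      when: container_manager != "containerd"
```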
It's also slightly odd that the `restart containerd` handler is a no-op that just notifies another handler. It should still work, but it might be part of the corner case here and could explain why this is hitting some obscure bug (I looked through the ansible issues but didn't find one that seemed pertinent).
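For reference, a sketch of that chained-handler pattern; the handler names follow the kubespray containerd role, but the bodies here are illustrative, not the actual role contents:

```yaml
# handlers/main.yml (sketch). The "restart containerd" entry does no real
# work itself; it only fans out to the handlers that do.
- name: restart containerd
  command: /bin/true
  notify:
    - containerd | restart containerd
    - containerd | wait for containerd

- name: containerd | restart containerd
  systemd:
    name: containerd
    state: restarted

- name: containerd | wait for containerd
  command: ctr version
  register: containerd_ready
  until: containerd_ready.rc == 0
  retries: 8
  delay: 4
```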
"the location where it is inserted doesn't affect when the handlers are added" |
Environment:

- Ansible version (`ansible --version`): ansible==4.10.0, ansible-core==2.11.12
- Python version (`python --version`): 2.7.5
- Kubespray version (commit) (`git rev-parse --short HEAD`): v2.19.0
- Command used to invoke ansible: `ansible-playbook upgrade-cluster.yml`
- Anything else do we need to know:
On a cluster deployed using kubespray-v2.18.0, after running `ansible-playbook upgrade-cluster.yml` with kubespray-v2.19.0, containerd is still on the old version, as shown by `kubectl get nodes -o wide`. After restarting the containerd service, the new version is used.

The problem is that the `restart containerd` handler is never run, even though it is notified multiple times in https://github.com/kubernetes-sigs/kubespray/blob/v2.19.0/roles/container-engine/containerd/tasks/main.yml.

After some troubleshooting, the problem seems to be caused by the `import_role` in https://github.com/kubernetes-sigs/kubespray/blob/v2.19.0/roles/container-engine/validate-container-engine/tasks/main.yml#L90: after I updated it to `include_role`, the problem was gone.
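A schematic before/after of that change (the task name and condition here are illustrative and abridged; the real task is at the linked line):

```yaml
# Before: static import; a `when` here is attached at parse time to the
# role's tasks and handlers as well.
- name: Clean up containerd
  import_role:
    name: container-engine/containerd
  when: container_manager != "containerd"

# After: dynamic include; the condition gates only this task, so the
# role's handlers keep their own conditions.
- name: Clean up containerd
  include_role:
    name: container-engine/containerd
  when: container_manager != "containerd"
```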
My GUESS is that the `when` conditions are added to the containerd handlers when `import_role` runs in https://github.com/kubernetes-sigs/kubespray/blob/v2.19.0/roles/container-engine/validate-container-engine/tasks/main.yml#L90, and since my upgrade doesn't match those conditions (e.g. `container_manager != "containerd"`), the handler is never run... What a confusing ansible behaviour...
Also, if you just run `ansible-playbook cluster.yml` on a fresh node, you will see that the handler is never run either, but containerd will still be running because of https://github.com/kubernetes-sigs/kubespray/blob/v2.19.0/roles/container-engine/containerd/tasks/main.yml#L100. I tried that with ansible==5.7.1, ansible-core==2.12.6, and python 3.10, and the problem happens as well, so this doesn't look like old-ansible-only behaviour.
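For what it's worth, my guess at why containerd still ends up running on a fresh install is a service task along these lines (schematic; the real task is at the linked main.yml#L100 and may differ):

```yaml
# Sketch of a "make sure the service is up" task that masks the missing
# handler run on fresh installs: the service gets started here whether
# or not "restart containerd" ever fires.
- name: containerd | Ensure containerd is started and enabled
  systemd:
    name: containerd
    state: started
    enabled: true
    daemon_reload: true
```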