Cannot re-run 1.12 playbook (e.g. to add nodes) - kubeadm rbac issue #203
Note that this also means nodes can't be added to the cluster. That requires the install playbook to run, and the run must include the etcd nodes (so that the primary master has that variable set correctly), then primary_master (to get the kubeadm install token), and then the nodes. However, the playbook fails when trying to install packages on the master, and then can't generate the kubeadm token.
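For context, a minimal sketch of the manual fallback when the play can't generate a token; this is plain kubeadm, not wardroom's own flow:

```sh
# On the primary master: mint a fresh bootstrap token and print the
# matching `kubeadm join` command for new nodes.
kubeadm token create --print-join-command
```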
It looks like the root of my original comment was a […]. Unfortunately the playbook still can't be run - I'm running into this issue when adding a new node: kubernetes/kubeadm#907:
From GitHub, this has usually been due to a version mismatch, but here everything was installed/upgraded via wardroom and the versions seem to match. On the master:
On new node:
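For reference, a minimal sketch of the version checks being compared here; the joining node looks up a kubelet-config ConfigMap keyed by the cluster's minor version, so these need to agree:

```sh
# All three should report the same minor version.
kubeadm version -o short   # kubeadm binary performing the join
kubelet --version          # kubelet on the node
kubectl version --short    # client and API server versions
```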
What is the state of the scoped token you are trying to use during this run? Are you sure that it has not expired?
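A quick way to answer that on the master; bootstrap tokens created by kubeadm default to a 24-hour TTL:

```sh
# List bootstrap tokens along with their expiration timestamps.
kubeadm token list
```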
I'm using the token generated by wardroom on the master - it fails during the wardroom node install, even if I do […]. Is there a role/rolebinding being misconfigured that's supposed to allow group […]? On the master:
On the node:
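For reference, a sketch of how the relevant RBAC objects could be inspected on the master. The `kubeadm:kubelet-config-1.12` name follows kubeadm's usual convention but is an assumption for this particular cluster:

```sh
# Find the Role/RoleBinding that lets bootstrapping nodes read the
# kubelet-config ConfigMap in kube-system.
kubectl -n kube-system get role,rolebinding | grep kubelet-config

# Check which groups are bound; the output is expected to include
# system:bootstrappers:kubeadm:default-node-token.
kubectl -n kube-system describe rolebinding kubeadm:kubelet-config-1.12
```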
/kind bug
What steps did you take and what happened:
Ran the upgrade script from a 1.11.6 cluster to 1.12.7. The masters failed due to temporary API server unavailability and ansible aborted. kubectl get nodes showed that the masters were successfully upgraded, so I tried to re-run the script to make sure all plays were performed; the script now fails on package install.
What did you expect to happen:
Detect that no change is necessary on the masters for stages that already succeeded, and only apply the needed changes (i.e. the playbook should be idempotent on re-run).
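As an illustration of the desired behavior, a partially failed run can sometimes be resumed with stock ansible-playbook flags; the inventory/playbook filenames and task name below are hypothetical:

```sh
# Re-run only the node hosts, skipping the already-upgraded masters.
ansible-playbook -i inventory.ini install.yml --limit nodes

# Or resume from the failing task by its name.
ansible-playbook -i inventory.ini install.yml --start-at-task "install packages"
```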
Anything else you would like to add:
Environment:
- branch: 1.12
- OS (e.g. from `/etc/os-release`): ubuntu 18.04

@craigtracey