Add docs for the new kops reconcile cluster command #17191
base: master
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
Why would this be an error?

Because updating both the cluster's control-plane launch templates (or other cloud-provider equivalents) and node launch templates at the same time will cause new nodes to fail to join the cluster until all control-plane instances have been upgraded. So if Cluster Autoscaler or Karpenter scales up nodes before or during the control-plane rolling-update, they will fail to join and workloads will be stuck in Pending. This is almost certainly not what the user wants, and is why we're introducing the new command.

We could allow the user to bypass the error if they know what they're doing, for example on clusters that don't use Cluster Autoscaler or Karpenter.
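For readers skimming this thread, here is a sketch of the two flows being contrasted. It assumes kOps ≥ 1.31 behavior as described above; the exact flag spellings should be checked against the docs this PR adds.

```sh
# Historical flow: updates control-plane AND node launch templates together,
# then rolls everything. Nodes scaled up by CAS/Karpenter in the gap run the
# new kubelet against old apiservers and may fail to join.
kops update cluster --yes
kops rolling-update cluster --yes

# New flow discussed in this PR: one command that interleaves update and
# rolling-update per role, control plane first.
kops reconcile cluster --yes
```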
Hold on, hasn't this sequence always been the standard upgrade sequence? These steps are even documented in https://kops.sigs.k8s.io/operations/updates_and_upgrades/#automated-update. And now it is an error?
Now it may cause node failures during the k8s 1.31 upgrade, yes. Hence the bold release note being added in this PR and my proposal to prevent users from making this mistake by returning a (skippable) error. I'll update that docs page to note this change as well.
Sorry for my persistence, but what has changed in k8s 1.31 that makes the regular kOps upgrade procedure dangerous?
I updated this PR to link to the k/k issue that goes into more detail: kubernetes/kubernetes#127316
Oh, what a long read! Maybe #16907 would be shorter and more to the point; it is also mentioned within the longer post. However, I think I understand the innovation now. The new …
Yes, that's correct. |
@@ -1,15 +1,59 @@

# Upgrading kubernetes

## **NOTE for Kubernetes >1.31**

Kops' upgrade procedure has hostorically risked violating the [Kubelet version skew policy](https://kubernetes.io/releases/version-skew-policy/#kubelet). Between `kops update cluster --yes` and every kube-apiserver being rotated with `kops rolling-update cluster --yes`, newly launched nodes running new kubelet versions could be connecting to older `kube-apiserver` nodes.
Suggested change:
Kops' upgrade procedure has historically risked violating the [Kubelet version skew policy](https://kubernetes.io/releases/version-skew-policy/#kubelet). After `kops update cluster --yes` completes and before every kube-apiserver is replaced with `kops rolling-update cluster --yes`, newly launched nodes running newer kubelet versions could be connecting to older `kube-apiserver` nodes.
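As an aside (not part of the PR), one generic way to see that skew window on a live cluster is to compare per-node kubelet versions with the server version:

```sh
# VERSION column = each node's kubelet version; values are mixed mid-upgrade.
kubectl get nodes

# Reports the kube-apiserver version currently being served.
kubectl version
```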
2. `kops rolling-update cluster --instance-group-roles=control-plane,apiserver --yes`
3. `kops update cluster --yes`
4. `kops rolling-update cluster --yes`
Suggested change:
5. `kops update cluster --prune --yes`
- Upgrading kubernetes is similar to changing the image on an InstanceGroup, except that the kubernetes version is
+ Upgrading kubernetes is similar to changing the image on an InstanceGroup, the kubernetes version is
I think the language here was clearer with "except that".
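Putting the hunk's numbered steps together, the manual role-by-role sequence appears to be the following sketch. Step 1 is not visible in the excerpt above, so that line is inferred from the visible steps and may not match the actual docs; step 5 is the reviewer's suggested addition.

```sh
# Manual upgrade sequence for Kubernetes >= 1.31 (sketch):
kops update cluster --instance-group-roles=control-plane,apiserver --yes          # step 1 (inferred)
kops rolling-update cluster --instance-group-roles=control-plane,apiserver --yes  # step 2
kops update cluster --yes                                                         # step 3
kops rolling-update cluster --yes                                                 # step 4
kops update cluster --prune --yes                                                 # step 5 (suggested)
```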
@danports: changing LGTM is restricted to collaborators. In response to this: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Yes, eventually, but I don't think it needs to happen right away, since
That would be a smart idea, though perhaps it should only be a warning if the cluster doesn't have CAS/Karpenter enabled.
👍 More context for error messages is always good.
I am confused about this myself. Based on the commit history I think maybe @justinsb just added the …
@rifelpet: The following tests failed, say `/retest` to rerun all failed tests or `/retest-required` to rerun all mandatory failed tests:

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/hold for feedback
A few open questions:
- Should we update `rolling-update cluster` docs references to `reconcile cluster`?
- … `update cluster --yes`? (with no `--instance-group*` filtering)
- What about the `update cluster --reconcile` flag? When would a user use it instead of `kops reconcile cluster`?
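For reference, the two entry points these questions compare. Whether the flag form takes the same companion flags is my assumption, not something this thread confirms:

```sh
# New dedicated command documented by this PR.
kops reconcile cluster --yes

# Flag form on the existing command, mentioned in the questions above; its
# exact semantics relative to `reconcile cluster` is the open question.
kops update cluster --reconcile --yes
```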