Conversation
This is awesome! I will have to make some time to test this (along with the HA stuff).

Looking forward to testing this!

@bfallik once #465 is rebased on this PR, you'll have pods drained off nodes before they shut down and are destroyed. Cluster upgrades are entirely functional here, but keep in mind that the nodes will be shut down ungracefully, and consequently requests will be routed to the pod IPs for some amount of time after the containers have disappeared.
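For context, the drain-on-shutdown behavior referenced above boils down to evicting pods before the instance terminates. A minimal sketch of that step, assuming `kubectl` is available on the node and the node is registered under its FQDN (both assumptions, not guaranteed by this PR):

```sh
# Sketch of a pre-shutdown drain step (what #465 automates).
# Assumes kubectl is installed on the node and the kubelet registered
# the node under the machine's FQDN, as is typical on AWS.
NODE_NAME="$(hostname -f)"

# Mark the node unschedulable and evict its pods, so service endpoints
# are updated before the instance is destroyed.
kubectl drain "${NODE_NAME}" --force --ignore-daemonsets
```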
At a high level, I'd like to think more about these commands and flags.

**Current (this PR)**

The current method in this PR:

```
$ kube-aws render
$ git diff # view changes to rendered assets
$ kube-aws up --update
```

```
$ kube-aws render --generate-credentials
$ kube-aws up --update
```

This retains the two primary commands that we are used to, but makes them much more complicated.

**The Proposal**

I propose changing these names. Here are the same scenarios:

```
$ kube-aws render stack
$ git diff # view changes to rendered assets
$ kube-aws update stack
```

```
$ kube-aws render credentials
$ kube-aws update credentials
```

Note the use of the same subcommand for each. Makes it easier to teach the terms and pieces that are involved. This PR does a great job of separating out the

**Backwards Compatible**

For backwards compatibility, we can alias (but not document) the render command from the last release:

```
$ kube-aws render # v0.8.1
$ kube-aws render stack # master
```
@@ -0,0 +1,40 @@
# kube-aws cluster updates

To fit the naming scheme, can we name this doc `kubernetes-on-aws-updates.md`?
@robszumski great suggestion! I'll think it over more thoroughly while implementing it, but sgtm and I'll move forward with what you have outlined. I was unsure of what to do with the command tree... thanks for figuring it out.
## Types of cluster update

There are two distinct categories of cluster update.

* **Parameter-level update**: Only changes to `cluster.yaml` and/or TLS assets in `credentials/` folder are reflected. To enact this type of update. Modifications to CloudFormation or cloud-config userdata templates will not be reflected. In this case, you do not have to re-render:
"To enact this type of update."?
yeah, should probably add that.
@pieterlange any news on the calico problem you encountered with this PR?
This fixes the problem I ran into here: coreos#608 (comment)
@pieterlange I've cherry-picked in your commit

Force-pushed from b1036a7 to 7b138ea
@robszumski check out b4d05dc

Nice, lookin' good!

How far away is this from being merged? I'm keen to start using this.
Does rolling replacement update on controller ASG, followed by workers. Punts on upgrading etcd cluster - simply makes sure resource definitions don't change after create.
render command now operates on stack and credentials independently; add top-level update command

Force-pushed from b4d05dc to 08e5fda
@iwarp we're working on getting this code reviewed! Sorry for the delay. If you're really keen to start using it, it should all be functional if you pull from
Note to self - I also need to add
I've been using the branch. Minor note: the update-policy for the worker autoscaler might need a little increase from the default 2 minutes, depending on app startup time. The kubernetes master also has a brief window where it's unavailable, but everything recovers just fine.
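For reference, the 2-minute default being discussed is the pause between batches in CloudFormation's rolling-update policy on the auto scaling group. A hypothetical fragment showing where that knob lives; the logical resource name and the values are illustrative, not taken from this PR's templates:

```yaml
# Illustrative only: the shape of a rolling-update policy on a worker ASG.
# Resource name and values below are placeholders, not this PR's output.
AutoScaleWorker:
  Type: "AWS::AutoScaling::AutoScalingGroup"
  UpdatePolicy:
    AutoScalingRollingUpdate:
      MinInstancesInService: "1"   # keep workers serving while others roll
      MaxBatchSize: "1"            # replace one instance per batch
      PauseTime: "PT5M"            # raised from the ~2 minute default to cover slow app startup
```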
Any updates on this? Would love to start using it 🚀
An update for all interested parties: we'll be merging this functionality (along with some of @mumoshu's work regarding node draining) in an
@colhom Thanks for the update!

@colhom Tested using the latest stable and alpha OS releases. For my current setup this works pretty well. Next I will try to put etcd in an Auto Scaling Group with S3 daily backups. If there is someone interested in it, I have a working branch:
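Not part of this PR, but for anyone curious what a daily etcd-to-S3 backup could look like: a rough sketch assuming etcd2's `etcdctl backup` and the AWS CLI on the instance. The bucket name and paths are placeholders:

```sh
# Hypothetical daily backup job (e.g. run from cron or a systemd timer).
# Paths and bucket are placeholders; assumes an etcd2-style data directory.
BACKUP_DIR="/var/lib/etcd2-backup/$(date +%Y-%m-%d)"

# Snapshot the etcd data directory into a restorable backup.
etcdctl backup \
  --data-dir /var/lib/etcd2 \
  --backup-dir "${BACKUP_DIR}"

# Ship the backup to S3 (requires an instance profile allowing s3:PutObject).
aws s3 cp --recursive "${BACKUP_DIR}" \
  "s3://my-etcd-backups/$(hostname)/$(date +%Y-%m-%d)/"
```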
Hmmm, interesting change of direction. What's the guidance for a highly available cluster that I should be using right now, then? I was planning on this PR being complete before going live on a new project. Do I need to create multiple k8s clusters and load balance across them, which is closer to the k8s federation approach? How have others approached this?
Echoing the previous response: I put off deploying kubernetes until this PR was finished, but now I find myself unsure how to proceed. I might just look at kops at this point, until there's a clearer vision for coreos-kubernetes. We looked at enterprise support for coreos, but this project was a blocker for us being able to proceed with that (in case it helps justify anyone spending time on laying out a clear roadmap).
I was pushing back the date to deploy coreos kube to production while waiting for this PR to land in master. We would like to have a procedure for future updates, and this seemed like a good solution. Any plans on providing guidance on how updates will be done with future releases?
Complete and working upgrade path for kube-aws clusters, minus the discrete etcd cluster instances.
As part of this, we now have external CA support for TLS asset generation, along with support for allowing the user to generate all TLS assets.
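As a sketch of the external-CA flow: generate (or obtain) a CA out of band, then hand it to kube-aws when rendering credentials. The openssl commands below are standard; the exact kube-aws flags for importing the CA are not spelled out in this thread, so treat them as placeholders:

```sh
# Generate a self-signed CA out of band (plain openssl; not kube-aws specific).
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 365 \
  -subj "/CN=kube-ca" -out ca.pem

# Render TLS assets against the externally managed CA.
# NOTE: the flag names below are placeholders; check
# `kube-aws render --help` on this branch for the actual interface.
kube-aws render credentials --ca-cert-path=ca.pem --ca-key-path=ca-key.pem
```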
Fixes #104, #161
Depends on #544, #596
Follow-up: #465
Unfortunately, this does not support upgrading clusters that have already launched. --edit-- By "already launched", I mean created by kube-aws code prior to this functionality merging.
@mumoshu I'd like to get your work on node draining on shutdown integrated as well.
\cc @plange @whereisaaron @robszumski @sym3tri @bfallik
Ref #340 #230 #161