Feature: Node pools #46
For now, I'm going to build a POC like I've described in coreos/coreos-kubernetes#667 (comment). Any requests, comments, or proposals are welcome!
If we implement node pools by firing off new CloudFormation stacks, we could generate a separate stack template for each pool. We should still maintain a "main" stack for cross-referencing resources.
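For illustration, here's a minimal sketch of how a "main" stack could expose resources for pool stacks to cross-reference via CloudFormation's export/import mechanism. The resource and export names are made up, and this isn't necessarily how the POC wires it:

```yaml
# Main stack (sketch): export a resource the node pool stacks need.
# "SecurityGroupWorker" and the export name are illustrative.
Outputs:
  WorkerSecurityGroup:
    Value: !Ref SecurityGroupWorker
    Export:
      Name: mycluster-WorkerSecurityGroup
```

```yaml
# Node pool stack (sketch): import the exported value instead of
# duplicating the resource in every pool's template.
Resources:
  PoolWorkersLC:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-12345678        # illustrative
      InstanceType: t2.medium      # illustrative
      SecurityGroups:
        - !ImportValue mycluster-WorkerSecurityGroup
```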
Sounds great and definitely would provide a lot of flexibility. I think it's a similar concept to kops instance groups? Part of me thinks it would be good to combine efforts, as kops has a ticket for CoreOS support too.
@c-knowles Thanks, I had never noticed kops already had that!
Added notes about node labels, cluster-autoscaler, and spot pricing to organize what I should include in the POC.
Regarding the POC, I've ended up with this. If you're bored of just waiting for this to land, please feel free to build the branch yourself.
TODOs
Several notes on the current implementation:
Usage
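A rough sketch of the intended workflow follows; the subcommand and flag names are assumptions based on the TODO list in this issue, not a verbatim transcript of the POC:

```sh
# Sketch only: command and flag names are assumed from the TODO list
# in this issue and may not match the POC exactly.
kube-aws node-pools init --node-pool-name mypool     # scaffold node-pools/mypool/cluster.yaml
kube-aws node-pools up --node-pool-name mypool       # create the pool's own CloudFormation stack
kube-aws node-pools update --node-pool-name mypool   # apply configuration changes to the pool
kube-aws node-pools destroy --node-pool-name mypool  # tear the pool down
```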
Designs
Implementation details
Implications
File tree
The whole file tree for a main cluster named
Btw, I named it
Relevant kops pull for autoscaler - kubernetes/kops#914
@c-knowles Thanks for the info! I guess the cluster-autoscaler can be deployed like that in kube-aws too, once node pools have landed and we ensure
Yeah I think the AZ/ASG relationship seems to be the most important part for autoscaling, in its current form at least.
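For context, cluster-autoscaler is pointed at specific ASGs via its `--nodes` flag, which is why a predictable pool-to-ASG (and AZ-to-ASG) mapping matters. A sketch, with made-up ASG names:

```sh
# Register one ASG per node pool/AZ with cluster-autoscaler.
# --nodes takes <min>:<max>:<asg-name>; names here are illustrative.
./cluster-autoscaler \
  --cloud-provider=aws \
  --nodes=1:10:mycluster-pool1-workers \
  --nodes=1:10:mycluster-pool2-workers
```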
For anyone interested, I've just rebased the branch onto current master and finished squashing, redoing commits in meaningful units.
To put separate node labels on each node pool, I'd like the kubelet to do that on startup, so that we can avoid a race between the kubelet starting to accept pods and the node being labeled. Fortunately, it is supported as
I'm going to accept relying on this alpha feature for now, but warn me if that's something anyone wants me to avoid at this stage 😃 A related reading: kubernetes/kubernetes#16090
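For reference, a sketch of labeling at registration time, assuming the alpha feature referred to above is the kubelet's `--node-labels` flag; the label keys/values are illustrative:

```sh
# Have the kubelet register itself with labels already applied, so there
# is no window where the node accepts pods unlabeled.
kubelet \
  --node-labels=node-pool=mypool,spot=true
  # ...plus the usual kubelet flags (API servers, kubeconfig, etc.)
```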
Just a forecast: maybe I'll merge part of my node pools branch (the refactoring part) to master once we've released v0.9.1 final.
I'm satisfied with the refactoring but now it needs to be rebased again 😃
Btw I've moved several things including node labels to "Later TODOs".
Squashed.
All the refactoring commits merged into master.
Final brush-up is in progress before submitting the pull request. Here are the final notes (= the commit message) for the PR's commit.
Notes on the initial implementation of Node Pools.
Usage
Design
Sources of inspiration
Specifications
Implications
File tree
The whole file tree for a main cluster named
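A sketch of what that tree could look like, assuming a pool named `yourpoolname` (the path `node-pools/yourpoolname/cluster.yaml` appears in the TODO list below; the other entries are assumptions about typical kube-aws render artifacts):

```text
.
├── cluster.yaml                    # main cluster configuration
├── credentials/                    # assumption: TLS assets for the main cluster
├── stack-template.json             # assumption: the "main" CloudFormation stack
├── userdata/                       # assumption: cloud-config for controller/workers
└── node-pools/
    └── yourpoolname/
        ├── cluster.yaml            # pool-specific configuration
        └── stack-template.json     # assumption: the pool's own stack
```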
Notes for POC users/testers
While in the POC building phase, see notes on the POC: #46 (comment). They're now named
#106 is merged and will be included in the second RC of kube-aws v0.9.2. I'm still wondering how I could, or whether I should, explicitly label Node Pools an experimental feature. Maybe the commands can be changed to
Hey @pieterlange @c-knowles @camilb @nicr9 @danielfm @sdouche, do you mind sharing whether and how these commands should be explicitly presented to our users as experimental features?
@mumoshu IMO adding commands to
Maybe we can just use confirmation prompts (like "This is an experimental feature. Confirm? [y/n]"), or just execute the command and print a warning message at the end, something like that. What do you think?
Agreed with @danielfm. Simply printing a warning message is sufficient; we shouldn't be changing cmdline options for this purpose.
@danielfm @pieterlange Thanks for your feedback! I'm now convinced that we should not introduce the
As for which alternative to use, I'd rather "start" with a warning message without prompting, so that we don't prevent our users from scripting around
Also/however, I'll be open to future pull requests covering both:
+1, if someone wanted to use this feature in an existing CI pipeline that uses kube-aws they'd need a way to skip any prompts.
Hi @mumoshu
@sdouche Thanks for your feedback! Then I'll start with just a warning in the very beginning of cluster.yaml for now.
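Something along these lines at the top of the generated file would do; the wording here is a sketch, not the text that actually landed:

```yaml
# WARNING: Node pools are an experimental feature.
# Please be aware that their configuration format and behavior may change
# incompatibly, or the feature may be removed, in future kube-aws releases.
# See https://github.com/coreos/kube-aws/issues/46 for background.
```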
So, explain the fact in the very beginning of cluster.yaml for node pools, according to what we've discussed in kubernetes-retired#46 (comment). ref kubernetes-retired#46
@mumoshu I'm eager to test that!
… to node pools
The following experimental features are added to node pools via this change:
* awsEnvironment
* waitSignal
* nodeLabel
* loadBalancer
ref kubernetes-retired#46
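For a sense of what these toggles look like in a node pool's cluster.yaml, here's a sketch; the exact key shapes are assumptions based on the feature names in the commit message, so check the linked change for the real schema:

```yaml
experimental:
  awsEnvironment:
    enabled: true
    environment:
      CFNSTACK: '{ "Ref" : "AWS::StackId" }'   # illustrative variable
  waitSignal:
    enabled: true          # wait for cfn-signal before considering nodes up
  nodeLabel:
    enabled: true          # later renamed; see the rename note below
  loadBalancer:
    enabled: true
    names:
      - my-existing-elb    # illustrative ELB name
```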
@c-knowles I've just noticed that I've missed not only
Added to the TODOs list in this issue's description.
@sdouche FYI, v0.9.2-rc.4 is available with the experimental node pools feature!
This complements Node Pools (kubernetes-retired#46) and Spot Fleet support (kubernetes-retired#112). The former `experimental.nodeLabel` configuration key is renamed to `experimental.awsNodeLabels` to avoid collision with the newly added `experimental.nodeLabels` and for consistency with `experimental.awsEnvironment`.
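In cluster.yaml terms, the rename looks roughly like this (the key names come from the commit message above; the value shapes are assumptions):

```yaml
experimental:
  # Before: experimental.nodeLabel toggled the AWS-derived labels.
  # After the rename:
  awsNodeLabels:
    enabled: true                        # AWS-derived labels (formerly nodeLabel)
  nodeLabels:                            # newly added: arbitrary user-specified labels
    kube-aws.coreos.com/role: worker     # illustrative key/value
```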
It seems I've finished all the TODOs
The documentation for Node Pools is at https://github.com/coreos/kube-aws/blob/master/Documentation/kubernetes-on-aws-node-pool.md
Closing this issue as the initial iterations to bring the feature have finished.
Successor to coreos/coreos-kubernetes#667.
I'd like to add a GKE "Node Pools"-like feature to kube-aws.
With that feature, we can differentiate the following things per pool without maintaining multiple etcd clusters and Kubernetes control planes:
It will eventually give us more granular control over:
- node labels, e.g. `spot=true`
Beware that until the taint-and-toleration feature is implemented in Kubernetes, we'd need to give pods specific node selectors to prevent them from being scheduled onto undesirable nodes.
Edit: Taints and tolerations are partially supported in Kubernetes 1.4.x and introduced in kube-aws since #132
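As an example of the node-selector workaround mentioned above, a pod can be pinned to a pool's nodes like this (assuming the pool's nodes carry an illustrative `spot=true` label):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spot-only-batch-job       # illustrative name
spec:
  nodeSelector:
    spot: "true"                  # schedule only onto nodes labeled spot=true
  containers:
    - name: worker
      image: alpine:3.4
      command: ["sh", "-c", "echo running on a spot node"]
```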
TODOs (possibly done in separate pull requests)
- `kube-aws nodepool update` (Add `kube-aws node-pools update` command #130)
- `kube-aws nodepool update` which was always failing 😉 (Make `kube-aws node-pools update` not to fail #140)
- `kube-aws nodepool validate` (Add `kube-aws node-pools validate` command #161)
- `kube-aws nodepool destroy` (Add `kube-aws node-pools destroy` command #139)
- `node-pools/yourpoolname/cluster.yaml`
- Experimental feature: User-specified node labels for worker nodes #149
Non-TODOs