Adding temporary multizone-nodepool support for CAS #7013
Conversation
Hi @aagusuab. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/ok-to-test
Looks good, just a minor nit.
```go
//Picks random zones for Multi-zone nodepool when scaling from zero.
//This random zone will not be the same as the zone of the VMSS that is being created, the purpose of creating
//the node template with random zone is to initiate scaling from zero on the multi-zone nodepool.
//Note that the if the customer is to have some pod affinity picking exact zone, this logic won't work.
//For now, discourage the customers from using podAffinity to pick the availability zones.
```
```diff
-//Picks random zones for Multi-zone nodepool when scaling from zero.
-//This random zone will not be the same as the zone of the VMSS that is being created, the purpose of creating
-//the node template with random zone is to initiate scaling from zero on the multi-zone nodepool.
-//Note that the if the customer is to have some pod affinity picking exact zone, this logic won't work.
-//For now, discourage the customers from using podAffinity to pick the availability zones.
+// Picks random zones for Multi-zone nodepool when scaling from zero.
+// This random zone will not be the same as the zone of the VMSS that is being created, the purpose of creating
+// the node template with random zone is to initiate scaling from zero on the multi-zone nodepool.
+// Note that the if the customer is to have some pod affinity picking exact zone, this logic won't work.
+// For now, discourage the customers from using podAffinity to pick the availability zones.
```
lol

Just for my own understanding, can you give an example of one of these errors? Also, to help me understand, which affinities exactly are you referring to here?
Sure, @nojnhuh! This caused an error because cluster autoscaler currently concatenates multiple zones into a single node label (e.g. `zone1__zone2__zone3`), and a pod affinity targeting an individual zone label like "zone1" is not matched by that multi-zone label. Therefore, we are removing this `zone1__zone2__zone3` label (or any similar availability_zone setup) and setting up the node template with a single random zone instead.
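A minimal sketch of the mismatch described above (names are hypothetical, not from the PR): Kubernetes zone selectors compare label values exactly, so a selector asking for "zone1" never matches a concatenated template label.

```go
package main

import "fmt"

// zoneSelectorMatches is a hypothetical illustration: a pod's zone
// selector asks for an exact label value, so the concatenated
// multi-zone label on a scale-from-zero node template never matches it.
func zoneSelectorMatches(nodeZoneLabel, wantedZone string) bool {
	// Label selectors compare values exactly; "zone1" does not
	// match "zone1__zone2__zone3".
	return nodeZoneLabel == wantedZone
}

func main() {
	concatenated := "zone1__zone2__zone3"
	fmt.Println(zoneSelectorMatches(concatenated, "zone1")) // false: exact match fails
	fmt.Println(zoneSelectorMatches("zone1", "zone1"))      // true once a single zone is used
}
```

This is why the PR labels the template with one real zone rather than the concatenation.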
```go
//This random zone will not be the same as the zone of the VMSS that is being created, the purpose of creating
//the node template with random zone is to initiate scaling from zero on the multi-zone nodepool.
//Note that the if the customer is to have some pod affinity picking exact zone, this logic won't work.
//For now, discourage the customers from using podAffinity to pick the availability zones.
```
> For now, discourage the customers from using podAffinity to pick the availability zones

"... if using multi-zone nodepool" (but probably clear from context)
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: aagusuab, Bryce-Soghigian, comtalyst, tallaxes. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
/lgtm

/cherry-pick cluster-autoscaler-release-1.27
@aagusuab: only kubernetes org members may request cherry picks. If you are already part of the org, make sure to change your membership to public. Otherwise you can still do the cherry-pick manually. In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/cherry-pick cluster-autoscaler-release-1.27
@comtalyst: #7013 failed to apply on top of branch "cluster-autoscaler-release-1.27":
In response to this:
/cherry-pick cluster-autoscaler-release-1.30
@comtalyst: new pull request created: #7190. In response to this:
/cherry-pick cluster-autoscaler-release-1.29

/cherry-pick cluster-autoscaler-release-1.28
@comtalyst: new pull request created: #7192. In response to this:
@comtalyst: #7013 failed to apply on top of branch "cluster-autoscaler-release-1.28":
In response to this:
What type of PR is this?
What this PR does / why we need it:
This PR prevents an error when cluster-autoscaler scales a multi-zone nodepool from zero.
This is necessary because, currently, when we create a node template from a multi-zone nodepool, we set the zone label "topology.kubernetes.io/zone" to a concatenated value such as "eastus-1__eastus-2__eastus-3".
However, this label is not recognized by some affinities.
With this PR, Cluster Autoscaler will create node templates using a random zone from those the VMSS contains. However, this zone isn't necessarily the zone the VMSS assigns VMs to; the VMSS will decide that separately.

Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: