v0.9.9 cluster autoscaling with two nodepools #1072

This is partly a question of how this is supposed to work when creating a fresh cluster. I created a cluster with two node pools in private subnets with autoscaling enabled. When the cluster successfully came up, there was only a cluster autoscaler on one of my controller nodes. Is this the expected behavior? Or should there be two autoscalers deployed, one in each node pool, for the separate subnets/availability zones? Example partial config below; let me know if you need more.
@jcrugzz Hi, thanks for trying kube-aws! I may be missing something but let me answer each question anyway:
Yes.
No. CA works cluster-wide, and therefore there should be no need to have a separate CA per subnet/AZ in a normal use-case, in my mind. What is your concrete use-case for CA?
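To illustrate the cluster-wide setup, here is a minimal sketch of a single cluster-autoscaler Deployment that watches both node pools' Auto Scaling Groups through repeated `--nodes` flags. This is not the exact add-on kube-aws deploys; the ASG names, min/max sizes, image tag, region, and namespace are placeholders, and RBAC, resource requests, and AWS credentials are omitted.

```yaml
# Minimal sketch: one cluster-autoscaler instance for the whole cluster.
# The ASG names below are placeholders; use the ASG names that kube-aws
# created for your node pools.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      containers:
        - name: cluster-autoscaler
          image: k8s.gcr.io/cluster-autoscaler:v1.2.2  # pick a tag matching your Kubernetes version
          command:
            - ./cluster-autoscaler
            - --cloud-provider=aws
            - --nodes=1:10:nodepool1-workers   # ASG of the first node pool (placeholder name)
            - --nodes=1:10:nodepool2-workers   # ASG of the second node pool (placeholder name)
          env:
            - name: AWS_REGION
              value: us-west-2                 # placeholder region
```

Both node pools scale through this one instance; nothing extra needs to run per subnet or AZ.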
@mumoshu gotcha, from reading the config comments I was under the impression that when enabling different node pools with different availability zones, it was necessary to have one cluster autoscaler to manage each availability zone. If that's not the case then everything is good and my question is answered :)
@jcrugzz Thanks for the confirmation 👍 Just to make sure you won't get into trouble - let me also add that you should have a separate node pool per AZ when you're going multi-AZ while enabling CA. In other words, a single node pool spanning multiple AZs does break CA a bit. See the note starting "Cluster autoscaler is not zone aware" in https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#common-notes-and-gotchas for more info.
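For reference, the per-AZ layout would look roughly like the following cluster.yaml fragment. The exact keys vary between kube-aws versions, so treat this as a sketch and check the cluster.yaml reference for your release; the subnet names, CIDRs, AZs, and pool sizes are placeholders.

```yaml
# Sketch only: one node pool pinned to each AZ (keys may differ by kube-aws version).
subnets:
  - name: private-a
    availabilityZone: us-west-2a
    instanceCIDR: 10.0.1.0/24
    private: true
  - name: private-b
    availabilityZone: us-west-2b
    instanceCIDR: 10.0.2.0/24
    private: true

worker:
  nodePools:
    - name: pool-a
      subnets:
        - name: private-a       # pinned to a single AZ
      autoScalingGroup:
        minSize: 1
        maxSize: 5
    - name: pool-b
      subnets:
        - name: private-b       # pinned to a single AZ
      autoScalingGroup:
        minSize: 1
        maxSize: 5
```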
@mumoshu I've always been confused about multi-AZ with cluster-autoscaler. Even though CA is not zone aware, AWS is, and when CA scales a multi-AZ pool up or down, AWS will tend towards an even distribution. With three single-AZ CA node pools, and CA zone unaware, you could easily get an uneven AZ distribution, unless CA strongly tends to keep all pools the same size? A single multi-AZ pool seems more likely to distribute evenly. I thought the issue was when you did need to be zone aware, e.g. you need more nodes in AZ 'x' because the required EBS volumes are in AZ 'x', so CA could work out that scaling a single-zone AZ 'x' pool will get more pods scheduled (rather than scaling up an AZ 'y' pool). So it seemed like single-AZ CA pools only made sense if CA were in fact zone aware, and thus able to choose which single-AZ pool to scale?
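As a concrete (hypothetical) example of the AZ-specific case: a pod that mounts a pre-existing EBS-backed PersistentVolume can only run in that volume's AZ, so only scaling the matching single-AZ pool would help it schedule. A rough sketch, with the volume ID, zone, and label key as placeholders (older clusters use failure-domain.beta.kubernetes.io/zone instead of topology.kubernetes.io/zone):

```yaml
# Illustration of AZ-specific scheduling: an EBS-backed PersistentVolume is only
# usable in its own AZ, so a pod bound to it can only run on nodes in that AZ.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-us-west-2a
spec:
  capacity:
    storage: 20Gi
  accessModes: ["ReadWriteOnce"]
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0   # placeholder volume ID
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values: ["us-west-2a"]
```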
I found the key discussion here: kubernetes-retired/contrib#1552 (diff). In it @mumoshu makes the same argument that I made above, that auto-scaled multi-AZ ASGs should work fine, if not better than single-AZ ASGs. In short: if you are sure none of your scheduling is AZ-aware/specific, you are probably fine with multi-AZ node pools. In fact you may get better balancing this way. But if anything could be AZ-specific, you need to have only one AZ per node pool. There is a workaround to get better balancing with single-zone ASGs, but it relies on you not using any custom node tags. It would be handy to reference this discussion in the documentation.
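The balancing workaround mentioned above is presumably cluster-autoscaler's `--balance-similar-node-groups` flag, which tries to keep node groups with matching labels and capacity at similar sizes; custom per-pool node labels can stop CA from treating the groups as similar. A sketch of the relevant flags on the CA command line, with placeholder ASG names:

```yaml
# Sketch: flags on the cluster-autoscaler container from the earlier example.
command:
  - ./cluster-autoscaler
  - --cloud-provider=aws
  - --balance-similar-node-groups          # keep the single-AZ ASGs at similar sizes
  - --nodes=1:10:pool-us-west-2a-workers   # placeholder ASG names, one per AZ
  - --nodes=1:10:pool-us-west-2b-workers
  - --nodes=1:10:pool-us-west-2c-workers
```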
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue.