Hello,
I'm looking into the cluster autoscaler and found this:
https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws

EBS volumes cannot span multiple AWS Availability Zones. If you have a Pod with a Persistent Volume in an AZ, it must be running on a k8s/EKS node in the same Availability Zone as the Persistent Volume. If the AWS Auto Scaling Group launches a new k8s/EKS node in a different AZ and moves this Pod onto the new node, the Persistent Volume in the previous AZ will not be available from the new AZ, and the Pod will stay in Pending status. The workaround is to use a single AZ for the k8s/EKS nodes.

Is there a way to use eksctl to ensure nodes will be spawned in a single availability zone?
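For context, this is roughly how that pinning manifests in Kubernetes: an EBS-backed PersistentVolume carries node affinity for the zone the volume lives in, so Pods bound to it can only be scheduled onto nodes in that zone. A minimal sketch (the volume ID and zone are placeholders, and the legacy in-tree `awsElasticBlockStore` source is used for brevity):

```yaml
# Illustration only: an EBS volume exists in exactly one AZ, so the PV
# restricts scheduling to nodes in that zone via nodeAffinity.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-ebs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0   # placeholder volume ID
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values: ["us-east-1a"]  # the AZ the volume was created in
```

If the autoscaler brings up capacity in any other zone, a Pod using this volume cannot be placed there and stays Pending.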
I would also like to see this feature implemented in eksctl. In our use case, a single AZ is desirable because of the data transfer cost between pods running in different AZs. Our application shuffles a lot of temporary data between pods, so the cost is not negligible.
If I understand you correctly, you wish to have the data plane containing workers/pods in a single AZ?
Here is a very simple example YAML file that I believe achieves this:
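(A minimal sketch of such a config; the cluster name, region, AZ names, and instance type below are placeholders. Note that the EKS control plane requires at least two AZs, so the single-AZ constraint is applied at the nodegroup level via `availabilityZones`.)

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: single-az-cluster   # placeholder name
  region: us-east-1         # placeholder region

# The EKS control plane must span at least two AZs.
availabilityZones: ["us-east-1a", "us-east-1b"]

nodeGroups:
  - name: ng-1
    instanceType: m5.large  # placeholder instance type
    desiredCapacity: 2
    # Pin all worker nodes (and their EBS-backed volumes) to one AZ.
    availabilityZones: ["us-east-1a"]
```

You would then create the cluster with `eksctl create cluster -f single-az-cluster.yaml`.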