
spawn nodes in same availability zone #1548

Closed
masterkain opened this issue Nov 13, 2019 · 4 comments
Labels
kind/help Request for help

Comments

@masterkain

Hello,
I'm looking into the cluster autoscaler and found this:

EBS volumes cannot span multiple AWS Availability Zones. If a Pod has a Persistent Volume in an AZ, it must run on a k8s/EKS node in the same Availability Zone as the Persistent Volume. If the AWS Auto Scaling Group launches a new k8s/EKS node in a different AZ and moves this Pod onto the new node, the Persistent Volume in the previous AZ will not be available from the new AZ, and the Pod will stay in Pending status. The workaround is to use a single AZ for the k8s/EKS nodes.

https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws

is there a way to use eksctl and ensure nodes will be spawned in a single availability zone?

@masterkain masterkain added the kind/help Request for help label Nov 13, 2019
@glapark

glapark commented Dec 18, 2019

I would also like to see this feature implemented in eksctl. In our use case, using a single AZ is desirable because of the data transfer cost between pods running in different AZs. Our application shuffles a lot of temporary data between pods, so the cost is not negligible.

@ksykulev

If I understand you correctly, you want the data plane (workers/pods) to stay in a single AZ?
Here is a very simple example YAML file that I believe achieves this:

#simple-cluster.yaml

---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: cluster-1
  region: us-west-2

# AZs the cluster's VPC spans (the EKS control plane requires at least two)
availabilityZones: ["us-west-2a", "us-west-2b", "us-west-2c"]

nodeGroups:
  - name: ng-1
    instanceType: t2.micro
    desiredCapacity: 1
    # Pin this nodegroup's Auto Scaling Group to a single AZ, so all of
    # its nodes (and any EBS volumes attached there) stay in us-west-2a
    availabilityZones: ["us-west-2a"]

Here are some resources for more detail:
#232
https://github.com/weaveworks/eksctl/blob/master/examples/05-advanced-nodegroups.yaml
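A config like the one above is applied with eksctl's `-f` flag. A minimal sketch, assuming the file is saved as `simple-cluster.yaml` and AWS credentials are configured:

```shell
# Create the cluster from the config file
eksctl create cluster -f simple-cluster.yaml

# Verify which AZ each node landed in; newer clusters expose the
# topology.kubernetes.io/zone label (older ones use
# failure-domain.beta.kubernetes.io/zone)
kubectl get nodes -L topology.kubernetes.io/zone
```

All nodes in ng-1 should report us-west-2a in the ZONE column, so EBS-backed pods scheduled there never end up stranded across AZs.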

@michaelbeaumont
Contributor

Thanks @ksykulev. Closing this issue! Let us know if your problem is solved!

@masterkain
Author

Yes, it is solved, thanks!
