
Support Private/Public Endpoints #649

Closed
christopherhein opened this issue Mar 20, 2019 · 21 comments · Fixed by #1149
Labels: area/aws-vpc, kind/feature (New feature or request)

Comments

@christopherhein
Contributor

christopherhein commented Mar 20, 2019

❗️ Currently this is blocked on CloudFormation supporting the Params - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-cluster.html

Announcement - https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-eks-introduces-kubernetes-api-server-endpoint-access-cont/

Why do you want this feature?
This would add support for enabling Private and/or Public API server endpoints, allowing you to gate access to your clusters via the VPC. This lets you isolate the API server and limit its exposure.

What feature/behavior/change do you want?
Add support for `endpointPrivateAccess` and `endpointPublicAccess` on the `ResourcesVpcConfig`.

https://docs.aws.amazon.com/eks/latest/APIReference/API_VpcConfigRequest.html#AmazonEKS-Type-VpcConfigRequest-endpointPrivateAccess

https://docs.aws.amazon.com/eks/latest/APIReference/API_VpcConfigRequest.html#AmazonEKS-Type-VpcConfigRequest-endpointPublicAccess
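For reference, the underlying API fields can already be exercised directly via the AWS CLI. A minimal sketch of what that looks like (cluster name, role ARN, subnet/security-group IDs and region are placeholders; the shorthand keys are assumed to mirror the VpcConfigRequest fields linked above):

```sh
# Sketch: create a cluster with both endpoint access types enabled by passing
# the VpcConfigRequest fields through --resources-vpc-config (placeholder values).
aws eks create-cluster \
  --name my-cluster \
  --role-arn arn:aws:iam::111122223333:role/eks-cluster-role \
  --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc,endpointPublicAccess=true,endpointPrivateAccess=true \
  --region us-west-2
```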

@christopherhein christopherhein self-assigned this Mar 20, 2019
@christopherhein christopherhein added area/aws-vpc kind/feature New feature or request labels Mar 20, 2019
@errordeveloper
Contributor

Thanks @christopherhein! Do you think we should consider enabling private access by default? Seems like we could actually do it quite safely.

@mindfulmonk

I don't see any cons to enabling both by default.

It is worth tracking aws/containers-roadmap#22 for the private only situation.

@christopherhein
Contributor Author

@mindfulmonk @errordeveloper I agree with making both enabled by default, with flags/config options to turn them off.

@whereisaaron

Not enabling private access seems odd to me; why would you want worker nodes to have to loop out through the Internet to get to their own control plane? So when they added this feature, my first thought was 'Wha? They weren't doing this already?' 😄

@whereisaaron

Although CF does not have these settings yet, you can enable private access on existing clusters via the AWS API/CLI:

aws eks --region <region> update-cluster-config --name dev --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true

So we could apply the setting after CF creation completes. Does that mess up CF? Given it is unaware of the settings, would it keep reverting them on updates, or just not touch them?
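If we do apply it out of band, one way to check whether a later CF update has reverted the setting is to query the live config; a quick sketch (cluster name and region are placeholders):

```sh
# Sketch: inspect the current endpoint access settings on an existing cluster.
aws eks describe-cluster \
  --name dev \
  --region <region> \
  --query 'cluster.resourcesVpcConfig.{public: endpointPublicAccess, private: endpointPrivateAccess}'
```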

I would wait, but sometimes the AWS CF team takes actual years to add the simplest things, or just never does.

@BernhardLenz

Any idea approximately when the "endpointPublicAccess=false" feature will be available? Is there a way to support this change and help raise its priority?

@errordeveloper
Contributor

Until now we were waiting for CloudFormation support, but due to many requests we are going to add this by calling the EKS API directly.

@BernhardLenz in the future, please be sure to reach out on Slack and ping me or @kalbir there.

@D3nn
Contributor

D3nn commented Aug 8, 2019

Some considerations that must be worked through to implement this functionality:

  1. The default cluster endpoint access is PublicAccess=true/PrivateAccess=false, so when disabling public access, we must necessarily enable private access as well or no clients would be able to access the API server.
  2. One must update the control plane security group to allow port 443 access from the client location (a sketch of this step is included below).
  3. Additionally, there may be more work to provide proper DNS name resolution. See the section "Accessing the API Server from within the VPC" in Amazon EKS Cluster Endpoint Access Control for further information. Some options require additional DNS configuration to access the API server by name from peered VPCs or from networks connected through Direct Connect or a VPN. See Enabling DNS resolution for Amazon EKS cluster endpoints for further info.

It's possible we could account for the worker node VPC access to the API server, but the other options would be out of scope as we wouldn't know where the traffic might come from. We might be able to create a README for these topics (which could be a short document with just pointers to the Amazon documentation).
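For item 2 above, the security group change itself is a single API call against the cluster's control plane security group; a rough sketch (the group ID and client CIDR are placeholders):

```sh
# Sketch: allow a client network to reach the private API endpoint on port 443.
# The group ID is the cluster's control plane security group and the CIDR is
# the client location (e.g. a peered VPC or VPN range); both are placeholders.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 10.20.0.0/16
```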

@whereisaaron

@D3nn those are valid considerations for the most complex situation, PublicAccess=false / PrivateAccess=true, where there is no public access and end users need to route to, resolve DNS for, and be given access to the VPC-internal endpoint.

The initial proposal here is to enable the much simpler PublicAccess=true / PrivateAccess=true by default, which is really what the default should be IMHO. This is where nodes access the control plane API internal to the VPC, and end users access the public API/DNS. This avoids the current undesirable default (PublicAccess=true / PrivateAccess=false) where cluster-internal communication is looped out to the Internet and back.

In the long run, all three functioning options could be supported:

  1. PublicAccess=true / PrivateAccess=true (new eksctl default)
  2. PublicAccess=false / PrivateAccess=true (fully private cluster)
  3. PublicAccess=true / PrivateAccess=false (for April 1 deployments only)

@D3nn
Contributor

D3nn commented Aug 9, 2019

@whereisaaron Just a small correction to the behavior when PublicAccess=true/PrivateAccess=false.

According to Amazon's documentation for the default state:

Kubernetes API requests that originate from within your cluster's VPC (such as worker node to control plane communication) leave the VPC but not Amazon's network.

I don't disagree that all three options need to be supported; my point is just that, in the public=false/private=true case, we either need to make the additional changes so that at least worker node VPC to control plane API server communication works, or we should refer users to Amazon's documentation on how to make this communication functional.

@whereisaaron

@D3nn yeah I read that too, but I remain unimpressed 😄

I read it and translated it to ‘yeah we know this is far from ideal and we’re sensitive about it too’ 🤣

Kubernetes API requests that originate from within your cluster's VPC (such as worker node to control plane communication) leave the VPC but not Amazon's network.

@MatteoMori

Hey, sorry to bother, but are there any updates on this? 😅

@D3nn
Contributor

D3nn commented Aug 28, 2019

Working on this currently. Need to add some tests and make sure all results are as expected, to wit:

  1. Public: false, Private: true should succeed but give a warning message with a link to the AWS doc on enabling access from within the VPC
  2. Public: true, Private: true should succeed
  3. Public: true, Private: false should work (with or without specifying it on the command line/config file)
  4. Public: false, Private: false should give an error (not an allowed configuration)
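For illustration, a cluster config exercising these combinations might look roughly like the following; the field names under vpc.clusterEndpoints follow the docs referenced later in this thread and should be treated as an assumption here, and the name/region are placeholders:

```sh
# Sketch: write a ClusterConfig with explicit endpoint access settings and
# create the cluster from it (vpc.clusterEndpoints field names are assumed).
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: endpoint-test   # placeholder
  region: us-west-2     # placeholder
vpc:
  clusterEndpoints:
    publicAccess: true
    privateAccess: true
EOF
eksctl create cluster -f cluster.yaml
```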

@errordeveloper errordeveloper modified the milestones: 0.5.0, 0.6.0 Aug 30, 2019
@mr-karan

Eagerly waiting for this :) Currently my best bet is to just update this setting using the EKS Console UI, is that right?

@lodotek

lodotek commented Oct 3, 2019

Eagerly waiting for this :) Currently my best bet is to just update this setting using the EKS Console UI, is that right?
see:
#649 (comment)

@gemagomez gemagomez removed this from the 0.6.0 milestone Oct 4, 2019
@D3nn D3nn closed this as completed in #1149 Oct 9, 2019
@morinap

morinap commented Oct 10, 2019

The change merged in #1149 is behaving differently for the create and update cases. In the case of update, it allows setting "Private Only", but in create it does not (see Update vs Create). It seems to be a valid use case (in fact, it is for us) to allow creation of a private-only cluster (we only manage the cluster from a bastion host within the VPC). Was this simply an oversight in the newly merged PR?

@atheiman

It’s documented that way @morinap, so I don't think it was an oversight: https://github.com/weaveworks/eksctl/blob/master/site/content/usage/06-vpc-networking.md#managing-access-to-the-kubernetes-api-server-endpoints

EKS does allow creating a configuration which allows only private access to be enabled, but eksctl doesn't support it during cluster creation as it prevents eksctl from being able to join the worker nodes to the cluster.
To create private-only Kubernetes API endpoint access, one must first create the cluster with public Kubernetes API endpoint access, and then use eksctl utils update-cluster-endpoints to change it after the cluster is finished creating.

Do you agree that the private endpoint during cluster creation “prevents eksctl from being able to join the worker nodes to the cluster”?

@morinap

morinap commented Oct 10, 2019

@atheiman Thanks for pointing that out, I had missed that in the documentation.

Do you agree that the private endpoint during cluster creation “prevents eksctl from being able to join the worker nodes to the cluster”?

I'm not sure? Let me build this and test this on my own to verify. I don't see at a glance why this wouldn't work if I'm actually running eksctl from a host within the same VPC as the EKS cluster.

@morinap

morinap commented Oct 10, 2019

Ahh, I see now. Even from a bastion host within the VPC, a rule needs to be added to the control plane security group to allow that host access when only private access is enabled. I'll stew on this and see if I can come up with an acceptable change.

@BernhardLenz

@morinap @atheiman The current workaround to create a private cluster is to first create the cluster with public access using eksctl and then use the "aws eks update-cluster-config" CLI to update the endpoint to private. There is not much value in changing the second step to use eksctl instead of the aws CLI. The value add would be being able to create a private cluster in one step...
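Sketched end to end, that two-step workaround looks something like this (cluster name and region are placeholders):

```sh
# Sketch of the current two-step workaround (name/region are placeholders).
# Step 1: create the cluster with the default public endpoint so eksctl can
# reach the API server and join the worker nodes.
eksctl create cluster --name private-test --region us-west-2

# Step 2: flip the endpoint to private-only once creation has finished.
aws eks update-cluster-config \
  --name private-test \
  --region us-west-2 \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true
```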

@morinap

morinap commented Oct 10, 2019

@BernhardLenz I've just opened a PR at #1434 that takes one simple approach to resolve this; this approach has worked successfully for me today. I was able to create a new cluster with private-only access from a bastion host in one command using my forked code.
