Rewrite EKS L2 construct (EKSv2) #605
Comments
Yes please. Also, given how many people have created their clusters via the custom resource, please think about how to introduce an import system so that we can transition from the old model to the new model seamlessly (I know that's not trivial, but it should be possible).
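For context, the current aws-eks module already supports referencing an existing cluster by its attributes; a seamless migration path would presumably need to build on something like this. A minimal sketch, with a hypothetical cluster name and kubectl role ARN:

```ts
import { App, Stack, aws_eks as eks } from 'aws-cdk-lib';

const app = new App();
const stack = new Stack(app, 'ImportSketch');

// Reference an existing (e.g. custom-resource-created) cluster by name.
// The cluster name and role ARN below are hypothetical.
const imported = eks.Cluster.fromClusterAttributes(stack, 'Imported', {
  clusterName: 'my-existing-cluster',
  kubectlRoleArn: 'arn:aws:iam::111122223333:role/KubectlRole',
});
```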
Hey, this would be really great. Also, would this other approach drop the need to create two nested stacks when creating the EKS cluster, and one nested stack when importing it into another stack?
Hi @kromanow94, yes, the nested stacks will be removed in the new EKS L2.
I know that in Dynamo we have `TableV2`. This also facilitates API updates, as we really would be independent of the existing "v1" API.
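For reference, the DynamoDB module does ship `Table` and `TableV2` side by side, each backed by a different resource type, so a new EKS construct could plausibly coexist with the old one in the same way. A minimal sketch (construct IDs and keys are illustrative only):

```ts
import { App, Stack, aws_dynamodb as dynamodb } from 'aws-cdk-lib';

const app = new App();
const stack = new Stack(app, 'DynamoNamingSketch');

// The original L2, backed by AWS::DynamoDB::Table.
new dynamodb.Table(stack, 'OldStyle', {
  partitionKey: { name: 'pk', type: dynamodb.AttributeType.STRING },
});

// The newer L2, backed by AWS::DynamoDB::GlobalTable, shipped alongside the old one.
new dynamodb.TableV2(stack, 'NewStyle', {
  partitionKey: { name: 'pk', type: dynamodb.AttributeType.STRING },
});
```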
Description
When an EKS cluster is created, the only role that has access to the cluster itself (e.g. running kubectl commands) is the role that created the cluster. When using CloudFormation, this would be the CloudFormation execution role. Since this role isn't assumable by anyone, it is effectively impossible to connect to the cluster after creation.
In CDK, we work around this issue by implementing the L2 using custom resources instead of L1s. This allows us to create the role that creates the cluster (i.e. invokes the `eks.CreateCluster` API), and subsequently use this role to grant additional (user-defined) roles permissions on the cluster.

The EKS team added a new feature that allows more control over cluster access. It is now possible for CloudFormation to specify a list of roles to be granted access to the cluster, in addition to the role that creates the cluster.
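Loosely, this means cluster access can now be managed by plain CloudFormation, and therefore by plain L1s. A rough sketch of what that could look like with the generated L1 constructs in recent aws-cdk-lib versions; the subnet IDs and admin role ARN are hypothetical, and the property names follow the AWS::EKS::Cluster and AWS::EKS::AccessEntry schemas:

```ts
import { App, Stack, aws_eks as eks, aws_iam as iam } from 'aws-cdk-lib';

const app = new App();
const stack = new Stack(app, 'EksAccessSketch');

// Service role assumed by the EKS control plane.
const clusterRole = new iam.Role(stack, 'ClusterRole', {
  assumedBy: new iam.ServicePrincipal('eks.amazonaws.com'),
  managedPolicies: [iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEKSClusterPolicy')],
});

// Plain L1 cluster -- no custom resource involved.
const cluster = new eks.CfnCluster(stack, 'Cluster', {
  roleArn: clusterRole.roleArn,
  resourcesVpcConfig: { subnetIds: ['subnet-aaaa1111', 'subnet-bbbb2222'] }, // hypothetical subnets
  accessConfig: {
    authenticationMode: 'API_AND_CONFIG_MAP',
    // Don't rely solely on the CloudFormation execution role for access.
    bootstrapClusterCreatorAdminPermissions: false,
  },
});

// Grant a user-defined role admin access to the cluster via an access entry.
new eks.CfnAccessEntry(stack, 'AdminAccess', {
  clusterName: cluster.ref,
  principalArn: 'arn:aws:iam::111122223333:role/MyAdminRole', // hypothetical role
  accessPolicies: [{
    policyArn: 'arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy',
    accessScope: { type: 'cluster' },
  }],
});
```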
This RFC proposes creating a new EKS L2 construct and dropping the custom resource implementation in favor of the native L1.
This will incur a breaking change that requires cluster replacement (because the resource type will change from `Custom::AWSCDK-EKS-Cluster` to `AWS::EKS::Cluster`). Given that a breaking change is inevitable, we can also decide to make some additional breaking changes in the API that make it more ergonomic and aligned with the new cluster implementation.
Roles
Workflow
- status/proposed
- status/review
- status/api-approved (applied to pull request)
- status/final-comments-period
- status/approved
- status/planning
- status/implementing
- status/done