‼️ NOTICE: eks overly permissive trust policies #25674

Closed · iliapolo opened this issue May 22, 2023 · 4 comments
Labels
@aws-cdk/aws-eks, management/tracking, p0

Comments

iliapolo commented May 22, 2023

Status

Resolved


What is the issue?

eks.Cluster and eks.FargateCluster constructs create two roles that have an overly permissive trust policy.

The first, referred to as the CreationRole, is used by lambda handlers to create the cluster and deploy Kubernetes resources (e.g. KubernetesManifest, HelmChart, ...) onto it.

The second, referred to as the default MastersRole, is provisioned only if the mastersRole property isn't provided and has permissions to execute kubectl commands on the cluster.

Both of these roles use the account root principal in their trust policy, which allows any identity in the account with the appropriate sts:AssumeRole permissions to assume them. For example, this can happen if another role in your account has sts:AssumeRole permissions on Resource: "*".
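
To illustrate the pattern (a minimal sketch of the trust relationship in question, not the constructs' actual internals; the role name is hypothetical), a role trusted by the account root principal looks like this:

```ts
import * as iam from 'aws-cdk-lib/aws-iam';

// Trusting the account root principal means the trust policy alone does not
// restrict who may assume the role: any identity in the account that carries
// sts:AssumeRole permissions (e.g. on Resource: "*") can assume it.
const role = new iam.Role(this, 'OverlyPermissiveRole', {
  assumedBy: new iam.AccountRootPrincipal(),
});
```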


Error message

No error message will be presented to the customer.


What is the impact?

An identity with access to a role that has the appropriate sts:AssumeRole permission can gain greater access to the cluster than intended.


Workaround

Until a fix is available, you can apply the workarounds below.


MastersRole

To avoid creating the default MastersRole, use the mastersRole property to explicitly provide a role. For example:

```ts
new eks.Cluster(this, 'Cluster', {
  ...
  mastersRole: iam.Role.fromRoleArn(this, 'Admin', 'arn:aws:iam::xxx:role/Admin'),
});
```

CreationRole

There is no workaround available for CreationRole.


Who is affected?

MastersRole

Users on CDK version 1.57.0 or higher (including v2 users) who do not specify the mastersRole property. The role in question can be located in the IAM console. It will have the following name pattern:

```
*-MastersRole-*
```

CreationRole

Users on CDK version 1.62.0 or higher (including v2 users). The role in question can be located in the IAM console. It will have the following name pattern:

```
*-ClusterCreationRole-*
```

How do I resolve this?

The issue has been fixed in versions v1.202.0 and v2.80.0. We recommend you upgrade to a fixed version as soon as possible. See Managing Dependencies in the CDK Developer Guide for instructions on how to do this.

The new versions no longer use the account root principal. Instead, they restrict the trust policy to the specific roles of lambda handlers that need it. This introduces some breaking changes that might require you to perform code changes.
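
Conceptually, the fix is equivalent to trusting only the specific handler roles; a rough sketch under the assumption of a single handler (the ARN is a placeholder, not the construct's actual wiring):

```ts
import * as iam from 'aws-cdk-lib/aws-iam';

// Trust only the specific lambda handler role instead of the whole account.
const creationRole = new iam.Role(this, 'CreationRole', {
  assumedBy: new iam.ArnPrincipal('arn:aws:iam::xxx:role/kubectl-handler-role'), // hypothetical handler role
});
```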

MastersRole

If you relied on the default MastersRole, it will no longer be available; you must now provide a role explicitly via the mastersRole property. To retain the previous behavior, you could do:

```ts
new eks.Cluster(this, 'Cluster', {
  ...
  mastersRole: new iam.Role(this, 'MastersRole', {
    assumedBy: new iam.AccountRootPrincipal(),
  }),
});
```

However, this would still result in an overly permissive trust policy. We recommend specifying a stricter trust policy; see the IAM documentation on trust policies for guidance.
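
For example, a minimal sketch of a stricter mastersRole that trusts a single role and additionally requires MFA (the ARN and the condition are illustrative assumptions, not a recommendation for your specific setup):

```ts
import * as iam from 'aws-cdk-lib/aws-iam';

// Trust one specific role, and only when the caller authenticated with MFA.
const mastersRole = new iam.Role(this, 'MastersRole', {
  assumedBy: new iam.ArnPrincipal('arn:aws:iam::xxx:role/Admin').withConditions({
    Bool: { 'aws:MultiFactorAuthPresent': 'true' },
  }),
});
```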

CreationRole

The change in the CreationRole trust policy should be mostly transparent to you. One scenario in which it could cause disruption is when it is used on imported clusters, for example:

```ts
const cluster = eks.Cluster.fromClusterAttributes(this, 'Cluster', {
  clusterName: '',
  kubectlRoleArn: '',
});
cluster.addManifest(...);
```

Previously, the lambda handlers created for the KubectlProvider of the imported cluster were granted the appropriate sts:AssumeRole permissions, and since the CreationRole used the account root principal, the handlers were able to assume it and execute kubectl commands. After the upgrade, the CreationRole no longer uses the account root principal and will therefore not allow the new lambda handlers to assume it. The workaround differs based on whether you are creating a new stack or updating an existing one.

For Existing Stacks

For imported clusters in existing stacks to continue to work, you will need to add the role of the kubectl provider function to the trust policy of the cluster's admin role:

```ts
// do this for each stack where you import the original cluster
this.cluster.adminRole.assumeRolePolicy?.addStatements(new iam.PolicyStatement({
  actions: ['sts:AssumeRole'],
  principals: [iam.Role.fromRoleArn(this, 'KubectlHandlerImportStackRole', 'arn-of-kubectl-provider-function-role-in-import-stack')],
}));
```

To locate the relevant ARN, find the Lambda function in the import stack that has the description "onEvent handler for EKS kubectl resource provider" and use its role ARN. Redeploy the cluster stack and everything should work; no changes to the import stack are needed.

Alternatively, you can do the reverse and specify the kubectlLambdaRole property when importing the cluster to point to the role of the original kubectl provider:

```ts
const cluster = eks.Cluster.fromClusterAttributes(this, 'Cluster', {
  clusterName: '',
  kubectlRoleArn: '',
  kubectlLambdaRole: iam.Role.fromRoleArn(this, 'KubectlLambdaRole', 'arn-of-kubectl-provider-function-role-in-cluster-stack'),
});
```

This makes the role of the new provider the same as that of the original provider, which is already trusted by the creation role of the cluster.

For New Stacks

If you are importing the cluster in a new stack, you should reuse the entire KubectlProvider of the cluster:

```ts
// import the existing provider
const kubectlProvider = eks.KubectlProvider.fromKubectlProviderAttributes(this, 'KubectlProvider', {
  functionArn: '',
  handlerRole: iam.Role.fromRoleArn(this, 'HandlerRole', ''),
  kubectlRoleArn: '',
});

// and reuse it
const cluster = eks.Cluster.fromClusterAttributes(this, 'Cluster', {
  clusterName: '',
  kubectlProvider,
});
cluster.addManifest(...);
```

This way, no additional lambda handlers are created, and therefore no additional permissions are required. Note that this approach will not work for your existing resources, because it modifies the service token of the custom resources, which isn't allowed.

See https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_eks-readme.html#kubectl-support



iliapolo added the management/tracking, p0, and @aws-cdk/aws-eks labels May 22, 2023
iliapolo commented:

Resolved by #25580 and #25473

iliapolo pinned this issue May 22, 2023
github-actions commented:

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see.
If you need more assistance, please either tag a team member or open a new issue that references this one.
If you wish to keep having a conversation with other community members under this issue feel free to do so.

mergify bot pushed a commit to cdklabs/aws-cdk-notices that referenced this issue May 23, 2023
Since there are two separate problems, introduced in two separate commits, issue two notices based on the affected versions. Note that we have no way of knowing if customers are actually impacted by the default masters role, because we don't know whether they pass the `mastersRole` property or not.
storyalex commented:

Documentation is not updated after this. It still says mastersRole is created by default.

iliapolo commented:

> Documentation is not updated after this. It still says mastersRole is created by default.

@storyalex thanks! will update promptly 👍

rix0rrr unpinned this issue Jul 10, 2023
xiehan added a commit to hashicorp/cdktf-aws-cdk that referenced this issue Aug 25, 2023
Because of a security vulnerability in `aws-cdk-lib` prior to version 2.80.0, we are increasing the minimum required version to v2.80. See GHSA-rx28-r23p-2qc3 for the full CVE and impacts.

### References

aws/aws-cdk#25674

If you have any questions or comments about this advisory we ask that you contact AWS/Amazon Security via their [vulnerability reporting page](https://aws.amazon.com/security/vulnerability-reporting) or directly via email to aws-security@amazon.com.

Closes #225