
feat(eks): option to disable manifest validation #12012

Merged
merged 5 commits into aws:master from ayush987goyal:pr/k8s-invalid on Dec 22, 2020

Conversation

ayush987goyal (Contributor) commented Dec 11, 2020

Closes #11763


By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license

github-actions bot added the @aws-cdk/aws-eks (Related to Amazon Elastic Kubernetes Service) label Dec 11, 2020
ayush987goyal (Contributor, Author)

For the kubectl logs:

Running command: ['kubectl', 'apply', '--kubeconfig', '/tmp/kubeconfig', '-f', '/tmp/manifest.yaml', '--prune', '-l', 'aws.cdk.eks/prune-c8ae759e1d059c6c67a772adfc4e04d8c732118222', '--validate=false']

ayush987goyal (Contributor, Author) commented Dec 14, 2020

@iliapolo @eladb Could you please take a look at this change? It includes a big breaking change, but I feel this might be okay in the long run since we might have a lot of customizations for the manifests.

eladb (Contributor) left a comment


Instead of changing the api for addManifest, would it be sufficient to specify this option at the cluster level (same as prune)?

ayush987goyal (Contributor, Author)

Instead of changing the api for addManifest, would it be sufficient to specify this option at the cluster level (same as prune)?

That would set the flag for the complete cluster. What about the case when you only want to skip validation for one particular manifest that you added through addManifest?

eladb (Contributor) commented Dec 14, 2020

Instead of changing the api for addManifest, would it be sufficient to specify this option at the cluster level (same as prune)?

That would set the flag for the complete cluster. What about the case when you only want to skip validation for one particular manifest that you added through addManifest?

Do you think that's a very common use case?

You can still just use KubernetesManifest directly.
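
For illustration only, a minimal sketch of the two ways to apply a manifest being discussed here: it assumes an existing eks.Cluster in scope, uses placeholder manifest contents and construct IDs, and borrows the validate flag name from the integ test snippet later in this thread (the final API naming may differ):

  import * as eks from '@aws-cdk/aws-eks';

  declare const cluster: eks.Cluster;

  // Convenience method: creates a KubernetesManifest under the hood,
  // but exposes no per-manifest options.
  cluster.addManifest('HelloApp', {
    apiVersion: 'v1',
    kind: 'ConfigMap',
    metadata: { name: 'hello-app' },
    data: { greeting: 'hello' },
  });

  // Direct construction: the same manifest attached to the same cluster,
  // leaving room for per-manifest options such as the validation flag
  // proposed in this PR.
  new eks.KubernetesManifest(cluster.stack, 'HelloAppNoValidation', {
    cluster,
    manifest: [{
      apiVersion: 'v1',
      kind: 'ConfigMap',
      metadata: { name: 'hello-app-no-validation' },
      data: { greeting: 'hello' },
    }],
    validate: false, // flag name as proposed in this thread; may differ in the merged API
  });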

ayush987goyal (Contributor, Author)

You can still just use KubernetesManifest directly.

I am not sure how we can add a manifest initialized via KubernetesManifest to the cluster (apart from addManifest).

eladb (Contributor) commented Dec 14, 2020

I am not sure how we can add a manifest initialized via KubernetesManifest to the cluster (apart from addManifest).

See docs

ayush987goyal (Contributor, Author) commented Dec 14, 2020

Oh, understood now (I missed that KubernetesManifest accepts a cluster!).

I will go ahead and make the changes to keep this only on KubernetesManifest. @eladb Do you think this flag should also be present on the cluster, like prune? My slight concern with adding it to the cluster is the blast radius: a skipped validation would then apply to all the manifests.

iliapolo (Contributor)

@ayush987goyal I think it's OK to only expose validate: false on the manifest level and not on the cluster; it stands to reason that this behavior will not normally apply to all manifests in the cluster.

ayush987goyal (Contributor, Author) commented Dec 14, 2020

@iliapolo So I am trying to add an integ test for this by initializing KubernetesManifest with the cluster provided:

  private assertManifestWithoutValidation() {
    // apply a kubernetes manifest with server-side validation disabled
    new eks.KubernetesManifest(this, 'HelloAppWithoutValidation', {
      cluster: this.cluster,
      manifest: [{
        apiVersion: 'v1',
        kind: 'Service',
        metadata: { name: 'hello-kubernetes-without-validation' },
        spec: {
          type: 'LoadBalancer',
          ports: [{ port: 80, targetPort: 8080 }],
          selector: { app: 'hello-kubernetes-without-validation' },
        },
      }],
      validate: false,
    });
  }

I am getting the following CloudFormation error:

Failed to create resource. Error: b'Error from server (AlreadyExists): error when creating "/tmp/manifest.yaml": configmaps "aws-auth" already exists\n' Logs: /aws/lambda/aws-cdk-eks-cluster-test-awscdkaws-Handler886CB40B-4P9SW7KB0BEL at invokeUserFunction (/var/task/framework.js:95:19) at process._tickCallback (internal/process/next_tick.js:68:7)

I am wondering if this is related to the scope in which the manifest is created (the context of the cluster vs. the stack), but I am not sure. Do you have any clue what might be wrong here?

iliapolo (Contributor)

@ayush987goyal Yes, we broke our integration tests in this commit :)

@eladb is working on a fix.

You can build an earlier commit to see that your logic works, but it is probably better to hold off until the fix is merged, since you'll need to re-run the tests.

ayush987goyal (Contributor, Author)

Hi @iliapolo , could you please take a look at this now?

iliapolo added the response-requested (Waiting on additional info and feedback. Will move to "closing-soon" in 7 days.) label Dec 17, 2020
github-actions bot removed the response-requested label Dec 18, 2020
iliapolo added the response-requested label Dec 18, 2020
github-actions bot removed the response-requested label Dec 20, 2020
iliapolo (Contributor)

@ayush987goyal Also notice there is a conflict now... :\

aws-cdk-automation (Collaborator)

AWS CodeBuild CI Report

  • CodeBuild project: AutoBuildProject6AEA49D1-qxepHUsryhcu
  • Commit ID: db4bf0a
  • Result: SUCCEEDED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

ayush987goyal (Contributor, Author)

@iliapolo Fixed both! :)

iliapolo changed the title from "feat(eks): add support to disable manifest validation" to "feat(eks): option to disable manifest validation" Dec 22, 2020
iliapolo (Contributor) left a comment


@ayush987goyal Thanks :)

mergify bot commented Dec 22, 2020

Thank you for contributing! Your pull request will be updated from master and then merged automatically (do not update manually, and be sure to allow changes to be pushed to your fork).

mergify bot merged commit 579b923 into aws:master Dec 22, 2020
ayush987goyal deleted the pr/k8s-invalid branch December 22, 2020 18:49
flochaz pushed a commit to flochaz/aws-cdk that referenced this pull request Jan 5, 2021
Labels
@aws-cdk/aws-eks Related to Amazon Elastic Kubernetes Service
Development

Successfully merging this pull request may close these issues.

(aws-eks): Add validate flag to KubernetesManifest class to pass invalidated objects
4 participants