feat: Update default chart versions to latest minor version supported #364
Conversation
```hcl
enable_external_dns = true
external_dns_route53_zone_arns = [
  "arn:aws:route53:::hostedzone/*",
]
```
The pod was in a crash loop because it was trying to list hosted zones, since we default to Route53.
```diff
@@ -210,6 +172,14 @@ module "eks_blueprints_addons" {
   ## An S3 Bucket ARN is required. This can be declared with or without a Prefix.
   velero = {
     s3_backup_location = "${module.velero_backup_s3_bucket.s3_bucket_arn}/backups"
     values = [
       # https://github.com/vmware-tanzu/helm-charts/issues/550#issuecomment-1959933230
```
I don't want to create another version mapping like we have for cluster-autoscaler, so I'm just applying the fix in the test case for now while waiting for the upstream project to find a viable solution.
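For context, the inline `values` override referenced above would take roughly the following shape. This is a sketch, not the exact fix from the PR: the `kubectl.image` keys and the pinned tag are assumptions based on the linked upstream issue, and the real workaround may differ.

```hcl
velero = {
  s3_backup_location = "${module.velero_backup_s3_bucket.s3_bucket_arn}/backups"
  values = [
    # Hypothetical workaround shape for the chart issue linked above:
    # pin the kubectl helper image instead of tracking its default tag.
    yamlencode({
      kubectl = {
        image = {
          tag = "1.29" # assumed placeholder tag, not from the source PR
        }
      }
    })
  ]
}
```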
Hey @bryantbiggs @askulkarni2, sorry to ping on a merged/closed PR, but I noticed this PR removed
It's just an EKS addon, so there isn't strong motivation to cover it in the tests here: either it's enabled or it's not. The GuardDuty agent will also want to create a VPC endpoint if one does not exist, which just adds to the overhead of the test suite.
Thanks, yeah, that VPC endpoint is part of how I got here. The "automatic creation" of the VPC endpoint doesn't let us set the subnets, and it sometimes picks our "intra" subnets, which don't support any routing in our VPC setup. But the Org is configured to mandate use of the EKS GuardDuty agent, so I have to figure out how to manage it. On top of that, when I go to destroy the cluster, the destroy won't complete until I clear out the aws-guardduty-agent addon that was auto-added by the Org config. Just trying to find a good pattern for managing it all!
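One possible pattern for the situation described above is to take explicit Terraform ownership of both the addon and the endpoint, so the subnets are chosen deliberately and the destroy ordering is handled by the dependency graph. This is a hedged sketch, not a recommendation from the maintainers; the `module.eks` and `module.vpc` references are assumptions about the surrounding configuration.

```hcl
# Sketch: manage the Org-mandated GuardDuty agent addon explicitly so
# `terraform destroy` removes it as part of the plan. Assumes an "eks" module.
resource "aws_eks_addon" "guardduty_agent" {
  cluster_name = module.eks.cluster_name
  addon_name   = "aws-guardduty-agent"
}

data "aws_region" "current" {}

# Sketch: create the GuardDuty VPC endpoint yourself, pinning it to routable
# subnets instead of letting auto-creation pick the "intra" subnets.
# Assumes a "vpc" module exposing vpc_id and private_subnets.
resource "aws_vpc_endpoint" "guardduty" {
  vpc_id              = module.vpc.vpc_id
  service_name        = "com.amazonaws.${data.aws_region.current.name}.guardduty-data"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = module.vpc.private_subnets
  private_dns_enabled = true
}
```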
What does this PR do?
Motivation
More
- [ ] Yes, I ran `pre-commit run -a` with this PR
For Moderators
Additional Notes