1.25 #341
Conversation
Allow pods in the karpenter namespace to run on fargate
fstype = "csi.storage.k8s.io/fstype: ${var.pv_fstype}", | ||
} | ||
) | ||
resource "kubernetes_config_map_v1_data" "aws_auth" { |
I see this block is very similar to resource "kubernetes_config_map" "aws_auth". Why do we have it twice?
The kubernetes_config_map resource creates the configmap, whereas this resource manages the content of its data field. Managing it this way means changes to the data show up in the terraform plan. I copied the idea from this module: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/main.tf#L536-L569
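For readers less familiar with the pattern, here is a minimal sketch of the two resources side by side. Names like `local.aws_auth_roles` are placeholders for illustration, not the module's actual variables:

```hcl
# Sketch of the two-resource aws-auth pattern: kubernetes_config_map creates
# the object, kubernetes_config_map_v1_data owns the data field so drift in
# the data shows up in the plan.
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode(local.aws_auth_roles) # placeholder local
  }

  lifecycle {
    # Hand ownership of data over to the v1_data resource after creation.
    ignore_changes = [data]
  }
}

resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode(local.aws_auth_roles) # placeholder local
  }

  # Take over fields that were created by another field manager (e.g. EKS).
  force = true

  depends_on = [kubernetes_config_map.aws_auth]
}
```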
]
},
{
rolearn = aws_iam_role.karpenter_node.arn
In web-platform-X we use instanceProfile: EKSNode. Will we have to change it in the provisioner here?
That is the intention, but we can add the old profile name to the map during transition.
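For illustration, a hedged sketch of what the transition mapping could look like; the legacy role ARN below is a placeholder, not a value from this PR:

```hcl
# Sketch only: map both the new Karpenter node role and the legacy
# instance-profile role in aws-auth during the transition.
locals {
  aws_auth_roles = [
    {
      rolearn  = aws_iam_role.karpenter_node.arn
      username = "system:node:{{EC2PrivateDNSName}}"
      groups   = ["system:bootstrappers", "system:nodes"]
    },
    {
      # Legacy node role behind the existing EKSNode instance profile
      # (placeholder ARN).
      rolearn  = "arn:aws:iam::111111111111:role/EKSNode"
      username = "system:node:{{EC2PrivateDNSName}}"
      groups   = ["system:bootstrappers", "system:nodes"]
    },
  ]
}
```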
@@ -0,0 +1,74 @@
resource "aws_sqs_queue" "karpenter_interruption" {
Maybe it would be clearer to rename this file to karpenter_spot_node_interruption_queue.tf?
I don't think that makes it clearer; this queue will receive a number of different events, not only spot interruptions.
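As a rough sketch of what I mean, several EventBridge rules can all target the same queue. The rule names and the for_each structure below are illustrative, not taken from this PR:

```hcl
# Sketch: multiple event types routed to the one interruption queue.
locals {
  karpenter_events = {
    spot_interruption     = { source = "aws.ec2", detail_type = "EC2 Spot Instance Interruption Warning" }
    rebalance             = { source = "aws.ec2", detail_type = "EC2 Instance Rebalance Recommendation" }
    instance_state_change = { source = "aws.ec2", detail_type = "EC2 Instance State-change Notification" }
    scheduled_change      = { source = "aws.health", detail_type = "AWS Health Event" }
  }
}

resource "aws_cloudwatch_event_rule" "karpenter" {
  for_each = local.karpenter_events

  name = "Karpenter-${each.key}-${var.name}"
  event_pattern = jsonencode({
    source      = [each.value.source]
    detail-type = [each.value.detail_type]
  })
}

resource "aws_cloudwatch_event_target" "karpenter" {
  for_each = aws_cloudwatch_event_rule.karpenter

  rule = each.value.name
  arn  = aws_sqs_queue.karpenter_interruption.arn
}
```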
@@ -0,0 +1,74 @@
resource "aws_sqs_queue" "karpenter_interruption" {
name = "Karpenter-${var.name}"
Maybe karpenter-spot-node-interruption-${var.name} would be clearer?
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 4.47.0"
5.1.0 maybe?
>= 4.47.0 already covers this. Since we expect other projects to depend on this module, we should keep constraints loose (within reason).
kubernetes = {
source = "hashicorp/kubernetes"
version = ">= 2.10"
2.21.1 is the latest
Same
The design here is that each new cluster gets its own IAM role. But to avoid recreating legacy clusters on upgrade, we need a way to use an externally managed IAM role (a sketch of one option follows below).
* Also adds a Fargate profile for `kube-system`, otherwise some addons will fail.
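One way the externally managed role could be wired in, as a sketch only; the variable name node_iam_role_arn and the count-based role are assumptions, not this module's real interface:

```hcl
# Sketch: create the node role only when an external one is not supplied.
variable "node_iam_role_arn" {
  description = "Existing node IAM role ARN to use instead of creating one (legacy clusters)."
  type        = string
  default     = null
}

locals {
  create_node_role = var.node_iam_role_arn == null
  node_role_arn    = local.create_node_role ? aws_iam_role.karpenter_node[0].arn : var.node_iam_role_arn
}

resource "aws_iam_role" "karpenter_node" {
  count = local.create_node_role ? 1 : 0

  name = "KarpenterNode-${var.name}"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}
```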
📝 TODO