feat: New Karpenter sub-module for easily enabling Karpenter on EKS #2303
Conversation
Looks pretty good (I didn't run it).
Co-authored-by: Anton Babenko <anton@antonbabenko.com>
hmm, is GitHub having a moment?! 😅
@antonbabenko is the WIP bot back? or did it not get removed from this repo (saw it show up in the initial checks then disappear)
I think we have had the WIP bot for ages. I just double-checked it in other repositories.
## [18.31.0](v18.30.3...v18.31.0) (2022-11-21)

### Features

* New Karpenter sub-module for easily enabling Karpenter on EKS ([#2303](#2303)) ([f24de33](f24de33))

This PR is included in version 18.31.0 🎉
```diff
@@ -60,7 +61,7 @@ module "eks" {
   version = "~> 18.0"

   cluster_name    = "my-cluster"
-  cluster_version = "1.22"
+  cluster_version = "1.24"
```
In the future, can we do these sorts of changes in a different PR? A card that mentions Karpenter, and then all of a sudden a major change to the cluster version, is not what anyone would expect.
I think this is just in the README, so probably not a big deal
The PR description contains all the necessary details, it just seems some have not taken the time to read it 😉
I'm sorry. It seems I don't have notifications enabled for PRs, only releases, so I missed this PR when it was initially opened. And when I saw the release email, I clicked on the commit ID rather than opening the PR, so I never got to read the description.
All good! I do try to put the relevant bits in the PR body with associated links as best as I can. It's not always perfect though.
```diff
@@ -10,13 +10,13 @@ provider "kubernetes" {
     api_version = "client.authentication.k8s.io/v1beta1"
     command     = "aws"
     # This requires the awscli to be installed locally where Terraform is executed
-    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
+    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
```
what's the reasoning for this change?
Seems more correct to pass the module.eks.cluster_name attribute instead of cluster_id to --cluster-name, no?
See the PR changes for background and reasoning. More context can be found here: hashicorp/terraform-provider-aws#27560
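To illustrate the distinction being drawn, here is a hedged sketch of how the two outputs can coexist (the internal resource address is an assumption for illustration, not the verbatim module source):

```hcl
# Inside the EKS module (illustrative sketch only)
output "cluster_name" {
  description = "The name of the EKS cluster"
  value       = aws_eks_cluster.this.name
}

output "cluster_id" {
  description = "The ID of the EKS cluster"
  # Today the AWS provider returns the cluster name as the ID; keeping the two
  # outputs separate lets the module follow the provider if that ever changes
  value = aws_eks_cluster.this.id
}
```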
Thanks for explaining. Will modify internally to match
I'm rather confused why this Karpenter module is being added to the EKS cluster repo instead of the terraform-aws-eks-blueprints project.
I'm not sure where the confusion is: they are two separate projects aimed at two entirely different use cases. This module provides (nearly) all of the AWS infrastructure required for an EKS cluster; the EKS Blueprints project states its own, different scope.

As I stated in the PR description, Karpenter is quickly becoming the entity that manages the entire lifecycle of the data plane, where today we have EKS managed node groups, self-managed node groups, and Fargate profiles. It is only natural to add this functionality here to aid in running Karpenter, now that more AWS infrastructure resources are required to support it.
To state it a bit differently, now in hindsight: with the changes made in Karpenter v0.19.0, Karpenter would no longer have a place in https://github.com/terraform-aws-modules/terraform-aws-iam/tree/master/modules/iam-role-for-service-accounts-eks and would only have its AWS resources defined here. The actual provisioning of the Karpenter controller and CRDs is outside the scope of what I intend for this module. Maybe someday soon I can remove the Kubernetes provider and the aws-auth configmap, and we can return to a more "pure" module that only interacts with the AWS API.

The actual provisioning of Karpenter is left up to users, since there are numerous ways folks manage this deployment and lifecycle today: manually (it happens), via Helm, via ArgoCD, via FluxCD, via other SDKs, etc.
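To make that division of responsibilities concrete, here is a minimal sketch of the intended pattern: the sub-module creates the AWS-side resources, and the controller itself is installed separately (Helm shown here). The sub-module input names and chart values are assumptions for illustration; check the sub-module's documented variables:

```hcl
# Sketch only: input names are illustrative, not verbatim from the sub-module docs
module "karpenter" {
  source = "terraform-aws-modules/eks/aws//modules/karpenter"

  cluster_name           = module.eks.cluster_name
  irsa_oidc_provider_arn = module.eks.oidc_provider_arn
}

# Installing the controller itself is left to the user - Helm is one common option
resource "helm_release" "karpenter" {
  namespace        = "karpenter"
  create_namespace = true

  name       = "karpenter"
  repository = "oci://public.ecr.aws/karpenter" # public ECR may require an authorization token
  chart      = "karpenter"
  version    = "v0.19.0"

  set {
    name  = "settings.aws.clusterName"
    value = module.eks.cluster_name
  }
}
```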
Apologies first if I post this in the wrong place. I am able to follow the example (https://github.com/ABOUTYOUGMBH/terraform-aws-eks/tree/v18.29.0/examples/karpenter) to get Karpenter installed with EKS, but Terraform stops at the `resource "kubectl_manifest" "karpenter_provisioner"` block (the one marked with the workaround comment for hashicorp/terraform-provider-kubernetes#1380 (comment)), and I get the following error: `kubectl_manifest.karpenter_provisioner: Creating...` I am not sure what the correct workaround is. Can you please point me in the right direction so I can test Karpenter? Thank you very much in advance for your help!
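For reference, a hedged sketch of the pattern the karpenter example uses: the gavinbunney/kubectl provider (rather than the kubernetes provider, which is where the linked issue's plan-time CRD limitation shows up) applies the Provisioner manifest only after the Helm release exists. Values here are illustrative, not the exact example code:

```hcl
terraform {
  required_providers {
    kubectl = {
      source = "gavinbunney/kubectl"
    }
  }
}

provider "kubectl" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  load_config_file       = false

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

resource "kubectl_manifest" "karpenter_provisioner" {
  yaml_body = <<-YAML
    apiVersion: karpenter.sh/v1alpha5
    kind: Provisioner
    metadata:
      name: default
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]
      limits:
        resources:
          cpu: 1000
      provider:
        subnetSelector:
          karpenter.sh/discovery: ${module.eks.cluster_name}
        securityGroupSelector:
          karpenter.sh/discovery: ${module.eks.cluster_name}
      ttlSecondsAfterEmpty: 30
  YAML

  # Assumes a helm_release.karpenter resource like the sketch above
  depends_on = [helm_release.karpenter]
}
```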
I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Description

- New Karpenter sub-module for easily enabling Karpenter on EKS
- Adds a `cluster_name` output

Motivation and Context

The addition of `cluster_name` is to start differentiating between cluster ID and cluster name. Today the cluster ID output is the cluster name; in the future, the provider API for this may change, and so we'll start following that behavior sooner.

Breaking Changes
How Has This Been Tested?

- I have updated at least one of the `examples/*` to demonstrate and validate my change(s)
- I have tested and validated these changes using one or more of the provided `examples/*` projects
- I have executed `pre-commit run -a` on my pull request