Description

When creating a cluster with Karpenter, the controller gives the error shown below under "Actual behavior". The VPC/subnets are created separately and we are using private subnets only.
✋ I have searched the open/closed issues and my issue is not listed.
⚠️ Note
Before you submit an issue, please perform the following first:

1. Remove the local .terraform directory (ONLY if state is stored remotely, which hopefully is the best practice you are following!): rm -rf .terraform/
2. Re-initialize the project root to pull down modules: terraform init
3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

Module version [Required]: 20.14.0
Terraform version: Terraform v1.8.5 on linux_amd64
Provider version(s):

Reproduction Code [Required]
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "20.14.0"
cluster_name = "test-dev-cluster"
cluster_version = "1.30"
# Gives Terraform identity admin access to cluster which will
# allow deploying resources (Karpenter) into the cluster
enable_cluster_creator_admin_permissions = true
cluster_endpoint_public_access = true
#disable OIDC provider creation
enable_irsa = false
iam_role_permissions_boundary = var.permissions_boundary_arn
cluster_addons = {
coredns = {}
eks-pod-identity-agent = {}
kube-proxy = {}
vpc-cni = {}
}
vpc_id = data.aws_vpc.selected.id
control_plane_subnet_ids = data.aws_subnets.app.ids
cluster_additional_security_group_ids = data.aws_security_groups.cluster_tiers.ids
  eks_managed_node_groups = {
    karpenter = {
      ami_type       = "AL2023_x86_64_STANDARD"
      instance_types = ["t2.large", "t3.large", "m4.large", "m5.large", "m6i.large"]

      min_size     = 1
      max_size     = 5
      desired_size = 1

      taints = {
        # This taint aims to keep just EKS addons and Karpenter running on this MNG
        # The pods that do not tolerate this taint should run on nodes created by Karpenter
        addons = {
          key    = "CriticalAddonsOnly"
          value  = "true"
          effect = "NO_SCHEDULE"
        }
      }

      subnet_ids             = data.aws_subnets.app.ids
      vpc_security_group_ids = data.aws_security_groups.app_tier.ids
    }
  }
  iam_role_additional_policies = {
    AmazonSSMManagedInstanceCore             = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
    AmazonSSMManagedEC2InstanceDefaultPolicy = "arn:aws:iam::aws:policy/AmazonSSMManagedEC2InstanceDefaultPolicy"
  }

  tags = merge(var.tags, {
    # NOTE - if creating multiple security groups with this module, only tag the
    # security group that Karpenter should utilize with the following tag
    # (i.e. - at most, only one security group should have this tag in your account)
    "karpenter.sh/discovery" = "test-dev-cluster"
  })
}
module "karpenter" {
source = "terraform-aws-modules/eks/aws//modules/karpenter"
version = "20.14.0"
cluster_name = "test-dev-cluster"
enable_pod_identity = true
create_pod_identity_association = true
# Used to attach additional IAM policies to the Karpenter node IAM role
node_iam_role_additional_policies = {
AmazonSSMManagedInstanceCore = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
AmazonSSMManagedEC2InstanceDefaultPolicy = "arn:aws:iam::aws:policy/AmazonSSMManagedEC2InstanceDefaultPolicy"
}
tags = var.tags
}
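For completeness, the controller itself is installed with a Helm release along these lines. This is a minimal sketch rather than our exact code: the chart version, namespace, and service account name are assumptions (based on the module defaults) and must match the pod identity association:

resource "helm_release" "karpenter" {
  name       = "karpenter"
  namespace  = "kube-system" # assumed; must match the pod identity association
  repository = "oci://public.ecr.aws/karpenter"
  chart      = "karpenter"
  version    = "0.37.0" # assumed chart version

  values = [
    <<-EOT
    serviceAccount:
      name: karpenter
    settings:
      clusterName: test-dev-cluster
      clusterEndpoint: ${module.eks.cluster_endpoint}
      interruptionQueue: ${module.karpenter.queue_name}
    EOT
  ]
}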
Steps to reproduce the behavior:

Points to note:

- The VPC/subnets are created separately and tagged; there is no public subnet. Tags used are:
    "kubernetes.io/role/internal-elb" = 1
    "karpenter.sh/discovery"          = "test-dev-cluster"
  (the discovery tag is consumed by Karpenter's node class selectors, as sketched below)
- enable_irsa is set to false as we have no permission to create an OIDC provider
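For context on how the "karpenter.sh/discovery" tag is consumed: Karpenter selects subnets and security groups through its node class selector terms. A minimal sketch, assuming Karpenter's v1beta1 API and the kubernetes provider; the node class name "default" is illustrative:

resource "kubernetes_manifest" "ec2_node_class" {
  manifest = {
    apiVersion = "karpenter.k8s.aws/v1beta1"
    kind       = "EC2NodeClass"
    metadata   = { name = "default" }
    spec = {
      amiFamily = "AL2023"
      role      = module.karpenter.node_iam_role_name
      # Only resources carrying the discovery tag are picked up
      subnetSelectorTerms = [
        { tags = { "karpenter.sh/discovery" = "test-dev-cluster" } }
      ]
      securityGroupSelectorTerms = [
        { tags = { "karpenter.sh/discovery" = "test-dev-cluster" } }
      ]
    }
  }
}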
Expected behavior

EKS cluster & Karpenter controller should come up cleanly.

Actual behavior

The EKS cluster comes up cleanly, but the Karpenter controller fails with:

{"level":"ERROR","time":"2024-06-30T11:45:00.909Z","logger":"controller","message":"ec2 api connectivity check failed","commit":"490ef94","error":"NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors"}
Terminal Output Screenshot(s)
Additional context
Running pods:
ServiceAccount exists:
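For reference, with enable_irsa = false the controller can only obtain AWS credentials through EKS Pod Identity, i.e. the eks-pod-identity-agent addon plus an association linking the controller's service account to its IAM role. A sketch of the association that create_pod_identity_association = true is expected to create; the namespace and service account here are assumed module defaults and must match the Helm install:

resource "aws_eks_pod_identity_association" "karpenter" {
  cluster_name    = "test-dev-cluster"
  namespace       = "kube-system" # assumed; must match the Helm release namespace
  service_account = "karpenter"   # assumed; must match serviceAccount.name in the chart values
  role_arn        = module.karpenter.iam_role_arn
}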