Add support for launch template and tf 0.13 #6

Merged · 1 commit · Sep 1, 2020
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -18,7 +18,7 @@ repos:
args: ['--allow-missing-credentials']
- id: trailing-whitespace
- repo: git://github.com/antonbabenko/pre-commit-terraform
- rev: v1.31.0
+ rev: v1.36.0
hooks:
- id: terraform_fmt
- id: terraform_docs
11 changes: 6 additions & 5 deletions README.md
@@ -3,20 +3,19 @@ Terraform module to provision EKS Managed Node Group

## Resources created

- This module will create EKS managed Node Group that will join your existing Kubernetes cluster.
+ This module will create an EKS managed Node Group that will join your existing Kubernetes cluster. It supports the use of a launch template, allowing you to further customize the worker nodes.
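
A minimal sketch of the new capability (the `aws_launch_template.example` resource name is hypothetical; the `id`/`version` keys follow the module's `launch_template` variable as used in this PR's example):

```hcl
module "eks-node-group" {
  source  = "umotif-public/eks-node-group/aws"
  version = "~> 3.0.0"

  cluster_name = aws_eks_cluster.cluster.id
  subnet_ids   = ["subnet-1", "subnet-2", "subnet-3"]

  desired_size = 1
  min_size     = 1
  max_size     = 1

  # Hypothetical launch template resource defined elsewhere in your configuration.
  launch_template = {
    id      = aws_launch_template.example.id
    version = aws_launch_template.example.latest_version
  }
}
```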

## Terraform versions

- Terraform 0.12. Pin module version to `~> v2.0`. Submit pull-requests to `master` branch.
+ Terraform 0.12. Pin module version to `~> v3.0`. Submit pull-requests to `master` branch.

## Usage

```hcl
module "eks-node-group" {
source = "umotif-public/eks-node-group/aws"
-  version = "~> 2.0.0"
+  version = "~> 3.0.0"

-  enabled = true
cluster_name = aws_eks_cluster.cluster.id

subnet_ids = ["subnet-1","subnet-2","subnet-3"]
@@ -43,12 +42,14 @@ module "eks-node-group" {

## Assumptions

- Module is to be used with Terraform > 0.12.
+ Module is to be used with Terraform 0.13. It is fully compatible with Terraform 0.12 as well.
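
The compatibility statement above can be pinned explicitly; a minimal sketch (the `required_version` bound reflects the 0.12/0.13 support note, not a constraint taken from the module itself):

```hcl
terraform {
  # Accept Terraform 0.12 and 0.13, per the compatibility note above.
  required_version = ">= 0.12"
}
```

Pair this with the module pin `version = "~> 3.0.0"` shown in the Usage section.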

## Examples

* [EKS Node Group - single](https://github.com/umotif-public/terraform-aws-eks-node-group/tree/master/examples/single-node-group)
* [EKS Node Group - multiple az setup](https://github.com/umotif-public/terraform-aws-eks-node-group/tree/master/examples/multiaz-node-group)
* [EKS Node Group - single named node group](https://github.com/umotif-public/terraform-aws-eks-node-group/tree/master/examples/single-named-node-group)
+ * [EKS Node Group - single with launch template](https://github.com/umotif-public/terraform-aws-eks-node-group/tree/master/examples/single-node-group-with-launch-template)

## Authors

7 changes: 2 additions & 5 deletions examples/multiaz-node-group/main.tf
@@ -7,7 +7,7 @@ provider "aws" {
#####
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
-  version = "2.21.0"
+  version = "2.48.0"

name = "simple-vpc"

@@ -46,7 +46,7 @@ resource "aws_eks_cluster" "cluster" {
enabled_cluster_log_types = []
name = "eks"
role_arn = aws_iam_role.cluster.arn
-  version = "1.14"
+  version = "1.17"

vpc_config {
subnet_ids = flatten([module.vpc.public_subnets, module.vpc.private_subnets])
@@ -130,7 +130,6 @@ resource "aws_iam_role_policy_attachment" "main_AmazonEC2ContainerRegistryReadOn
module "eks-node-group-a" {
source = "../../"

-  enabled = true
create_iam_role = false

cluster_name = aws_eks_cluster.cluster.id
@@ -158,7 +157,6 @@ module "eks-node-group-a" {
module "eks-node-group-b" {
source = "../../"

-  enabled = true
create_iam_role = false

cluster_name = aws_eks_cluster.cluster.id
@@ -186,7 +184,6 @@ module "eks-node-group-c" {
module "eks-node-group-c" {
source = "../../"

-  enabled = true
create_iam_role = false

cluster_name = aws_eks_cluster.cluster.id
5 changes: 2 additions & 3 deletions examples/single-named-node-group/main.tf
@@ -7,7 +7,7 @@ provider "aws" {
#####
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
-  version = "2.21.0"
+  version = "2.48.0"

name = "simple-vpc"

@@ -46,7 +46,7 @@ resource "aws_eks_cluster" "cluster" {
enabled_cluster_log_types = []
name = "eks"
role_arn = aws_iam_role.cluster.arn
-  version = "1.14"
+  version = "1.17"

vpc_config {
subnet_ids = flatten([module.vpc.public_subnets, module.vpc.private_subnets])
@@ -94,7 +94,6 @@ module "eks-node-group" {
node_group_name = "example-nodegroup"
node_group_role_name = "example-nodegroup"

-  enabled = true
cluster_name = aws_eks_cluster.cluster.id

subnet_ids = flatten([module.vpc.private_subnets])
158 changes: 158 additions & 0 deletions examples/single-node-group-with-launch-template/main.tf
@@ -0,0 +1,158 @@
provider "aws" {
region = "eu-west-1"
}

#####
# VPC and subnets
#####
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "2.48.0"

name = "simple-vpc"

cidr = "10.0.0.0/16"

azs = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

private_subnet_tags = {
"kubernetes.io/role/internal-elb" = "1"
}

public_subnet_tags = {
"kubernetes.io/role/elb" = "1"
}

enable_dns_hostnames = true
enable_dns_support = true
enable_nat_gateway = true
enable_vpn_gateway = true
single_nat_gateway = true
one_nat_gateway_per_az = false

tags = {
"kubernetes.io/cluster/eks" = "shared",
Environment = "test"
}
}

#####
# EKS Cluster
#####

resource "aws_eks_cluster" "cluster" {
enabled_cluster_log_types = []
name = "eks"
role_arn = aws_iam_role.cluster.arn
version = "1.17"

vpc_config {
subnet_ids = flatten([module.vpc.public_subnets, module.vpc.private_subnets])
security_group_ids = []
endpoint_private_access = "true"
endpoint_public_access = "true"
}
}

resource "aws_iam_role" "cluster" {
name = "eks-cluster-role"

assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.cluster.name
}

resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSServicePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
role = aws_iam_role.cluster.name
}

#####
# Launch Template with AMI
#####
data "aws_ssm_parameter" "cluster" {
name = "/aws/service/eks/optimized-ami/${aws_eks_cluster.cluster.version}/amazon-linux-2/recommended/image_id"
}

data "aws_launch_template" "cluster" {
name = aws_launch_template.cluster.name

depends_on = [aws_launch_template.cluster]
}

resource "aws_launch_template" "cluster" {
image_id = data.aws_ssm_parameter.cluster.value
instance_type = "t3.medium"
name = "eks-launch-template-test"
update_default_version = true

key_name = "eks-test"

block_device_mappings {
device_name = "/dev/sda1"

ebs {
volume_size = 20
}
}

tag_specifications {
resource_type = "instance"

tags = {
Name = "eks-node-group-instance-name"
"kubernetes.io/cluster/eks" = "owned"
}
}

user_data = base64encode(templatefile("userdata.tpl", { CLUSTER_NAME = aws_eks_cluster.cluster.name, B64_CLUSTER_CA = aws_eks_cluster.cluster.certificate_authority[0].data, API_SERVER_URL = aws_eks_cluster.cluster.endpoint }))
}

#####
# EKS Node Group
#####
module "eks-node-group" {
source = "../../"

cluster_name = aws_eks_cluster.cluster.id

subnet_ids = flatten([module.vpc.private_subnets])

desired_size = 1
min_size = 1
max_size = 1

launch_template = {
id = data.aws_launch_template.cluster.id
version = data.aws_launch_template.cluster.latest_version
}

kubernetes_labels = {
lifecycle = "OnDemand"
}

tags = {
"kubernetes.io/cluster/eks" = "owned"
Environment = "test"
}

depends_on = [data.aws_launch_template.cluster]
}
17 changes: 17 additions & 0 deletions examples/single-node-group-with-launch-template/userdata.tpl
@@ -0,0 +1,17 @@
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
set -ex

exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1

yum install -y amazon-ssm-agent
systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent

/etc/eks/bootstrap.sh ${CLUSTER_NAME} --b64-cluster-ca ${B64_CLUSTER_CA} --apiserver-endpoint ${API_SERVER_URL}

--==MYBOUNDARY==--
5 changes: 2 additions & 3 deletions examples/single-node-group/main.tf
@@ -7,7 +7,7 @@ provider "aws" {
#####
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
-  version = "2.21.0"
+  version = "2.48.0"

name = "simple-vpc"

@@ -46,7 +46,7 @@ resource "aws_eks_cluster" "cluster" {
enabled_cluster_log_types = []
name = "eks"
role_arn = aws_iam_role.cluster.arn
-  version = "1.14"
+  version = "1.17"

vpc_config {
subnet_ids = flatten([module.vpc.public_subnets, module.vpc.private_subnets])
@@ -91,7 +91,6 @@ resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSServicePolicy" {
module "eks-node-group" {
source = "../../"

-  enabled = true
cluster_name = aws_eks_cluster.cluster.id

subnet_ids = flatten([module.vpc.private_subnets])