Modularization for AWS terraform scripts #650

Merged: 12 commits, Jul 17, 2019
6 changes: 6 additions & 0 deletions deploy/.gitignore
@@ -0,0 +1,6 @@
.terraform/
credentials/
terraform.tfstate
terraform.tfstate.backup
.terraform.tfstate.lock.info
kubeconfig_*.yaml
143 changes: 130 additions & 13 deletions deploy/aws/README.md
@@ -187,9 +187,9 @@ module example-cluster {
source = "./tidb-cluster"

# The target EKS, required
eks_info = local.default_eks
eks_info = local.eks
# The subnets of node pools of this TiDB cluster, required
subnets = local.default_subnets
subnets = local.subnets
# TiDB cluster name, required
cluster_name = "example-cluster"

@@ -261,27 +261,144 @@ $ terraform destroy
>
> You have to manually delete the EBS volumes in AWS console after running terraform destroy if you do not need the data on the volumes anymore.
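
If you prefer the CLI to the console, the leftover volumes can also be located and removed with the AWS CLI. The following is only a sketch: it assumes your default profile and region point at the account that hosted the cluster, it lists every unattached (`available`) volume in that region, and the volume ID in the delete command is a placeholder.

```shell
# list unattached EBS volumes left behind after terraform destroy
$ aws ec2 describe-volumes --filters Name=status,Values=available --query "Volumes[*].{ID:VolumeId,Size:Size}" --output table
# delete a volume once you are sure its data is no longer needed
$ aws ec2 delete-volume --volume-id vol-0123456789abcdef0
```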

## Advanced Guide: Use the tidb-cluster and tidb-operator Modules
## Advanced: Multiple Kubernetes Management

Under the hood, this terraform module composes two sub-modules:
In this section, we describe the best practices for managing multiple Kubernetes clusters, each with one or more TiDB clusters installed.

- [tidb-operator](./tidb-operator/README.md), which provisions the Kubernetes control plane for TiDB cluster
- [tidb-cluster](./tidb-cluster/README.md), which provisions a TiDB cluster in the target Kubernetes cluster
Under the hood, this terraform module composes several sub-modules:

You can use these modules separately in your own terraform scripts, by either referencing these modules locally or publish these modules to your terraform module registry.
- [tidb-operator](../modules/aws/tidb-operator/README.md), which provisions the Kubernetes control plane for the TiDB cluster
- [tidb-cluster](../modules/aws/tidb-cluster/README.md), which provisions a TiDB cluster in the target Kubernetes cluster
- ...and a `VPC` module, a `bastion` module, and a `key-pair` module that are dedicated to TiDB on AWS

For example, let's say you create a terraform module in `/deploy/aws/staging`, you can reference the tidb-operator and tidb-cluster modules as following:
The best practice is to create a new directory for each of your Kubernetes clusters and compose these modules via terraform scripts, so that the terraform state and cluster credentials of each cluster do not get mixed up. Here's an example:

```shell
# assume we are in the project root
$ mkdir -p deploy/aws-staging
$ vim deploy/aws-staging/main.tf
```

The content of `deploy/aws-staging/main.tf` could be:

```hcl
module "setup-control-plane" {
source = "../tidb-operator"
provider "aws" {
region = "us-west-1"
}

# create a key pair for SSH access to the bastion, and from the bastion to the worker nodes
module "key-pair" {
source = "../modules/aws/key-pair"

name = "another-eks-cluster"
path = "${path.cwd}/credentials/"
}

# provision a VPC
module "vpc" {
source = "../modules/aws/vpc"

vpc_name = "another-eks-cluster"
}

# provision an EKS control plane with tidb-operator installed
module "tidb-operator" {
source = "../modules/aws/tidb-operator"

eks_name = "another-eks-cluster"
config_output_path = "credentials/"
subnets = module.vpc.private_subnets
vpc_id = module.vpc.vpc_id
ssh_key_name = module.key-pair.key_name
}

# HACK: force the helm provider to depend on the EKS cluster
resource "local_file" "kubeconfig" {
depends_on = [module.tidb-operator.eks]
sensitive_content = module.tidb-operator.eks.kubeconfig
filename = module.tidb-operator.eks.kubeconfig_filename
}
provider "helm" {
alias = "eks"
insecure = true
install_tiller = false
kubernetes {
config_path = local_file.kubeconfig.filename
}
}

# provision a tidb-cluster in the eks cluster
module "tidb-cluster-a" {
source = "../tidb-cluster"
source = "../modules/aws/tidb-cluster"
providers = {
helm = "helm.eks"
}

cluster_name = "tidb-cluster-a"
eks = module.tidb-operator.eks
ssh_key_name = module.key-pair.key_name
subnets = module.vpc.private_subnets
}

# provision another tidb-cluster in the eks cluster
module "tidb-cluster-b" {
source = "../tidb-cluster"
source = "../modules/aws/tidb-cluster"
providers = {
helm = "helm.eks"
}

cluster_name = "tidb-cluster-b"
eks = module.tidb-operator.eks
ssh_key_name = module.key-pair.key_name
subnets = module.vpc.private_subnets
}

# provision a bastion machine to access the TiDB service and worker nodes
module "bastion" {
source = "../modules/aws/bastion"

bastion_name = "another-eks-cluster-bastion"
key_name = module.key-pair.key_name
public_subnets = module.vpc.public_subnets
vpc_id = module.vpc.vpc_id
worker_security_group_id = module.tidb-operator.eks.worker_security_group_id
enable_ssh_to_workers = true
}

# print the tidb hostname of tidb-cluster-a
output "cluster-a_tidb-dns" {
description = "tidb service endpoints"
value = module.tidb-cluster-a.tidb_hostname
}

# print the monitor hostname of tidb-cluster-b
output "cluster-b_monitor-dns" {
description = "tidb service endpoint"
value = module.tidb-cluster-b.monitor_hostname
}

output "bastion_ip" {
description = "Bastion IP address"
value = module.bastion.bastion_ip
}
```
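
With `main.tf` in place, provisioning the new cluster follows the usual terraform workflow from inside that directory. A quick sketch (the output names match the ones defined above):

```shell
$ cd deploy/aws-staging
$ terraform init
$ terraform apply
# read the endpoints and bastion address from the outputs defined above
$ terraform output cluster-a_tidb-dns
$ terraform output bastion_ip
```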

As shown above, you can omit most of the parameters in each module call because there are reasonable defaults, and the setup is easy to customize: for example, simply delete the bastion module call if you don't need it.
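
For instance, a sketch of a slightly customized cluster call might pin the TiDB version via `cluster_version` (a parameter this module already uses elsewhere; the module name and version string below are only an illustration):

```hcl
module "tidb-cluster-c" {
  source = "../modules/aws/tidb-cluster"
  providers = {
    helm = "helm.eks"
  }

  cluster_name    = "tidb-cluster-c"
  # pin a specific TiDB version instead of relying on the module default
  cluster_version = "v3.0.1"
  eks             = module.tidb-operator.eks
  ssh_key_name    = module.key-pair.key_name
  subnets         = module.vpc.private_subnets
}
```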

To customize each field, you can refer to this terraform module as a working example; you can also always check the `variables.tf` of each module to see all the available parameters.
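
As an illustration of what such an entry looks like, a variable in one of the modules' `variables.tf` typically follows this shape (the description and default below are assumptions for illustration, not the module's actual values):

```hcl
variable "bastion_instance_type" {
  description = "EC2 instance type of the bastion host"  # assumed description
  type        = string
  default     = "t2.micro"                                # assumed default
}
```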

It also takes little effort to integrate these modules into your own terraform codebase, which is exactly what they are designed for.
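
If your codebase lives in a separate repository, terraform can also pull a module straight from a git source instead of a relative path. A sketch under assumptions (the repository URL and `ref` below are placeholders; point them at the copy you actually use):

```hcl
module "tidb-cluster" {
  # assumed repository URL and tag; adjust to your fork or vendored copy
  source = "git::https://github.com/pingcap/tidb-operator.git//deploy/modules/aws/tidb-cluster?ref=v1.0.0"
  providers = {
    helm = "helm.eks"  # assumes a helm provider aliased as in the example above
  }

  cluster_name = "example-cluster"
  eks          = module.tidb-operator.eks
  ssh_key_name = module.key-pair.key_name
  subnets      = module.vpc.private_subnets
}
```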

> **Note:**
>
> If you create the new directory elsewhere, take care to adjust the relative paths to the modules accordingly.

> **Note:**
>
> If you want to use these modules outside of the tidb-operator project, make sure you copy the whole `modules` directory and keep the relative path of each module inside the directory unchanged.
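
For example, something along these lines keeps the layout intact (the destination path is hypothetical):

```shell
# copy the whole modules tree; the relative paths inside it must stay unchanged
$ cp -r deploy/modules /path/to/your-terraform-project/modules
```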

> **Note:**
>
> The hack around the helm provider is necessary because of [hashicorp/terraform#2430](https://github.com/hashicorp/terraform/issues/2430#issuecomment-370685911); please keep it in your terraform scripts.

If you prefer not to touch the terraform code at all, copying this directory for each of your Kubernetes clusters also makes sense.
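
A sketch of that approach (the directory names are just examples):

```shell
# keep one copy of the deploy/aws scripts per Kubernetes cluster, side by side
# under deploy/ so the relative module paths keep working
$ cp -r deploy/aws deploy/aws-prod
$ cp -r deploy/aws deploy/aws-dev
# then adjust the variables in each copy and run terraform in it separately
```
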
36 changes: 0 additions & 36 deletions deploy/aws/bastion.tf

This file was deleted.

6 changes: 3 additions & 3 deletions deploy/aws/clusters.tf
@@ -41,9 +41,9 @@ module "default-cluster" {
providers = {
helm = "helm.eks"
}
source = "./tidb-cluster"
eks = local.default_eks
subnets = local.default_subnets
source = "../modules/aws/tidb-cluster"
eks = local.eks
subnets = local.subnets

cluster_name = var.default_cluster_name
cluster_version = var.default_cluster_version
57 changes: 27 additions & 30 deletions deploy/aws/main.tf
@@ -3,49 +3,46 @@ provider "aws" {
}

locals {
default_subnets = split(",", var.create_vpc ? join(",", module.vpc.private_subnets) : join(",", var.subnets))
default_eks = module.tidb-operator.eks
eks = module.tidb-operator.eks
subnets = module.vpc.private_subnets
}

module "key-pair" {
source = "./aws-key-pair"
name = var.eks_name
path = "${path.module}/credentials/"
source = "../modules/aws/key-pair"

name = var.eks_name
path = "${path.cwd}/credentials/"
}

module "vpc" {
source = "terraform-aws-modules/vpc/aws"

version = "2.6.0"
name = var.eks_name
cidr = var.vpc_cidr
create_vpc = var.create_vpc
azs = data.aws_availability_zones.available.names
private_subnets = var.private_subnets
public_subnets = var.public_subnets
enable_nat_gateway = true
single_nat_gateway = true

# The following tags are required for ELB
private_subnet_tags = {
"kubernetes.io/cluster/${var.eks_name}" = "shared"
}
public_subnet_tags = {
"kubernetes.io/cluster/${var.eks_name}" = "shared"
}
vpc_tags = {
"kubernetes.io/cluster/${var.eks_name}" = "shared"
}
source = "../modules/aws/vpc"

vpc_name = var.eks_name
create_vpc = var.create_vpc
private_subnets = var.private_subnets
public_subnets = var.public_subnets
vpc_cidr = var.vpc_cidr
}

module "tidb-operator" {
source = "./tidb-operator"
source = "../modules/aws/tidb-operator"

eks_name = var.eks_name
eks_version = var.eks_version
operator_version = var.operator_version
config_output_path = "credentials/"
subnets = local.default_subnets
vpc_id = var.create_vpc ? module.vpc.vpc_id : var.vpc_id
subnets = local.subnets
vpc_id = module.vpc.vpc_id
ssh_key_name = module.key-pair.key_name
}

module "bastion" {
source = "../modules/aws/bastion"

bastion_name = "${var.eks_name}-bastion"
key_name = module.key-pair.key_name
public_subnets = module.vpc.public_subnets
vpc_id = module.vpc.vpc_id
worker_security_group_id = local.eks.worker_security_group_id
enable_ssh_to_workers = true
}
6 changes: 3 additions & 3 deletions deploy/aws/outputs.tf
@@ -20,15 +20,15 @@ output "kubeconfig_filename" {

output "default-cluster_tidb-dns" {
description = "tidb service endpoints"
value = module.default-cluster.tidb_dns
value = module.default-cluster.tidb_hostname
}

output "default-cluster_monitor-dns" {
description = "tidb service endpoint"
value = module.default-cluster.monitor_dns
value = module.default-cluster.monitor_hostname
}

output "bastion_ip" {
description = "Bastion IP address"
value = module.ec2.public_ip
value = module.bastion.bastion_ip
}
7 changes: 0 additions & 7 deletions deploy/aws/tidb-cluster/outputs.tf

This file was deleted.

43 changes: 43 additions & 0 deletions deploy/modules/aws/bastion/bastion.tf
@@ -0,0 +1,43 @@
resource "aws_security_group" "accept_ssh_from_local" {
name = var.bastion_name
description = "Allow SSH access for bastion instance"
vpc_id = var.vpc_id
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = var.bastion_ingress_cidr
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}

resource "aws_security_group_rule" "enable_ssh_to_workers" {
count = var.enable_ssh_to_workers ? 1 : 0
security_group_id = var.worker_security_group_id
source_security_group_id = aws_security_group.accept_ssh_from_local.id
from_port = 22
to_port = 22
protocol = "tcp"
type = "ingress"
}

module "ec2" {
source = "terraform-aws-modules/ec2-instance/aws"

version = "2.3.0"
name = var.bastion_name
instance_count = 1
ami = data.aws_ami.amazon-linux-2.id
instance_type = var.bastion_instance_type
key_name = var.key_name
associate_public_ip_address = true
monitoring = false
user_data = file("${path.module}/bastion-userdata")
vpc_security_group_ids = [aws_security_group.accept_ssh_from_local.id]
subnet_ids = var.public_subnets
}
3 changes: 0 additions & 3 deletions deploy/aws/data.tf → deploy/modules/aws/bastion/data.tf
@@ -1,6 +1,3 @@
data "aws_availability_zones" "available" {
}

data "aws_ami" "amazon-linux-2" {
most_recent = true

4 changes: 4 additions & 0 deletions deploy/modules/aws/bastion/outputs.tf
@@ -0,0 +1,4 @@
output "bastion_ip" {
description = "Bastion IP address"
value = module.ec2.public_ip
}