etcd3-terraform

A Terraform recipe, forked from Monzo's etcd3-terraform and updated to provide easy deployment of a non-Kubernetes-resident etcd cluster on AWS for Ondat.

Stack 🎮

This will create a set of 3 EC2 instances, each managed by its own Auto Scaling Group and running the latest Ubuntu image by default. These ASGs are distributed across 3 Availability Zones detected from the current region in use (e.g. passed via the AWS_REGION environment variable). All resources are deployed into the VPC selected by setting vpc_id.

This will also create a local Route 53 zone for the domain you pick and bind it to the VPC so its records can be resolved. This domain does not need to be registered. An SRV record suitable for etcd discovery is also created, along with a Lambda function that monitors ASG events and maintains an A record for each member of the cluster.
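
A quick way to sanity-check those records from a host inside the VPC is to query the private zone directly. This is only a sketch: it assumes the domain passed via the dns variable is mycompany.int and that the SRV record follows etcd's standard TLS discovery naming (_etcd-server-ssl._tcp); the per-member A record names depend on the role, region and environment variables (see the etcdctl member list output further down).

dig +short _etcd-server-ssl._tcp.mycompany.int SRV
dig +short peer-0.etcd.eu-west-2.i.development.mycompany.int A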

A Network Load Balancer will be created for clients of the etcd cluster. It wraps all of the auto-scaling group instances on port 2379 with a health check to ensure that only functional instances are presented.

High Availability

As mentioned above, the default size of the cluster is 3 nodes, which can tolerate the loss of only a single node - losing 2 nodes breaks quorum and causes a catastrophic cluster failure (a cluster of n members tolerates floor((n-1)/2) failures). To reduce this risk, it's suggested to use a larger cluster in any real-world scenario - 5, 7 or 9 nodes should be sufficient depending on risk appetite.

Elasticity

Scaling out is as easy as increasing the size of the cluster via the aforementioned variable. When scaling down/in, first remove the member from the cluster using etcdctl, then destroy the extraneous instances and Auto Scaling Groups manually via terraform destroy -target=... before running another terraform apply, as sketched below. Future work could implement lifecycle hooks and autoscaling to make this more automated.
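
For example, scaling in by one node might look like the following. This is a sketch only - the member ID is taken from etcdctl member list and the Terraform resource addresses (including the index) are illustrative; find the real ones with terraform state list.

# 1. remove the member from etcd first (uses the ETCDCTL_* setup shown further down)
etcdctl member list
etcdctl member remove 38308ae09ffc8b32

# 2. destroy the corresponding ASG/launch configuration (hypothetical addresses)
terraform destroy -target='module.etcd.aws_autoscaling_group.default[2]' \
                  -target='module.etcd.aws_launch_configuration.default[2]'

# 3. lower cluster_size so the next apply does not recreate the node
terraform apply -var 'cluster_size=2'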

Backups

Volume snapshots of each node's data volume are taken automatically every day at 2am, and a week of snapshots is retained for each node. To restore from snapshot, take down the cluster and manually replace each EBS volume, then use terraform import to bring the new volumes into the state and reconcile from the Terraform end.
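
As a hedged sketch of that flow: the module also exposes a restore_snapshot_ids variable (see the Inputs table in the Appendix) which maps node indices to snapshot IDs, and terraform import can reconcile a volume that was swapped by hand. The snapshot/volume IDs below are placeholders and the resource address is hypothetical - check terraform state list for the real one under the attached-ebs module.

# restore via the module's own variable (placeholder snapshot IDs)
terraform apply -var 'restore_snapshot_ids={"0"="snap-abcdef","1"="snap-fedcba","2"="snap-012345"}'

# or, after replacing a volume manually, import it so the state matches reality
terraform import 'module.etcd.module.attached-ebs.aws_ebs_volume.ssd[0]' vol-0123456789abcdef0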

Security 🔒

In this distribution, we've:

  • encrypted all etcd and root volumes
  • encrypted and authenticated all etcd traffic between peers and clients
  • locked down network access to the minimum
  • ensured that all AWS policies that enable writing to resources are constrained to acting on the resources created by this module
  • used a modern, stable default AMI (the latest Ubuntu, resolved via ami_name_regex)

This makes for a secure base configuration.

It is suggested that this is deployed to private subnets only, targeted via the parameter subnet_ids.

Authentication

The etcd nodes authenticate with each other via individual TLS certificates and keys. Clients authenticate using a single certificate. Role Based Access Control is possible with further configuration via etcd itself.
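
For instance, RBAC can be layered on with standard etcdctl commands once client connectivity works (the role and user names below are purely illustrative):

# create a role scoped to a key prefix and a user bound to it
etcdctl role add app-readwrite
etcdctl role grant-permission app-readwrite readwrite /app/ --prefix=true
etcdctl user add app-user
etcdctl user grant-role app-user app-readwrite

# a root user must exist before authentication is switched on
etcdctl user add root
etcdctl auth enable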

Certificates

A CA and several certificates for peers, servers and clients are generated by Terraform and stored in the state file. It is therefore suggested that the state file is stored securely (and ideally remotely, e.g. in an encrypted S3 bucket with limited access). Certificates are valid for 5 years (the CA) and 1 year (all others). At the moment, the renewal process requires replacing the nodes one at a time after the certificates have been destroyed and re-created in Terraform. This should be done carefully, using terraform destroy -target=... and terraform apply -target=... for each of the resources in series and spacing out the node replacements to ensure that quorum is not broken. Replacing the CA certificate additionally requires manually copying the new certificates to each instance and restarting the etcd-member systemd unit so that the cluster remains in sync throughout the node replacement process.
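
As an illustration of re-issuing the leaf certificates (the addresses assume the module is instantiated as module.etcd and correspond to the TLS resources listed in the Appendix - confirm the exact ones with terraform state list):

terraform destroy -target='module.etcd.tls_locally_signed_cert.server' \
                  -target='module.etcd.tls_locally_signed_cert.peer' \
                  -target='module.etcd.tls_locally_signed_cert.client'
terraform apply   -target='module.etcd.tls_locally_signed_cert.server' \
                  -target='module.etcd.tls_locally_signed_cert.peer' \
                  -target='module.etcd.tls_locally_signed_cert.client'

Then roll the nodes one at a time as described in the Maintenance section, confirming quorum after each replacement.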

The client certificate must be used to authenticate with the server when communicating with etcd from allowed clients (those within the CIDR ranges set in client_cidrs). The certificate and key are generated by Terraform and placed in the current working directory, named client.pem and client.key respectively.

How to configure and deploy 🕹

The file variables.tf declares the Terraform variables required to run this stack. Almost everything has a default: the region is detected from the AWS_REGION environment variable and the cluster spans the maximum available zones within your preferred region. You will be asked to provide a VPC ID, subnet IDs and an SSH public key to launch the stack. Variables can all be overridden in a terraform.tfvars file or by passing runtime parameters. By default we use the latest Ubuntu AMI - be sure to change this if you are using a different region!
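
For example, the required values can be supplied as runtime parameters instead of a terraform.tfvars file (the IDs and key below are placeholders):

terraform apply \
  -var 'vpc_id=vpc-abcdef' \
  -var 'subnet_ids=["subnet-abcdef","subnet-fedcba","subnet-cdbeaf"]' \
  -var 'key_pair_public_key=ssh-rsa AAAA... user@host'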

Example (minimal for development env, creates a VPC and all resources in us-east-1)

provider "aws" {region = "us-east-1"}

data "aws_availability_zones" "available" {}

locals {
  tenant      = "aws001"  # AWS account name or unique id for tenant
  environment = "preprod" # Environment area eg., preprod or prod
  zone        = "dev"     # Environment within one sub_tenant or business unit
  vpc_cidr  = "10.0.0.0/16"
  vpc_name  = join("-", [local.tenant, local.environment, local.zone, "vpc"])
  etcd_name = join("-", [local.tenant, local.environment, local.zone, "etcd"])
  terraform_version = "Terraform v1.1.5"
}

module "etcd" {
  source     = "github.com/ondat/etcd3-terraform"
  vpc_id     = module.aws_vpc.vpc_id
  subnet_ids = module.aws_vpc.private_subnets

  ssd_size      = 32
  instance_type = "t3.large"

  client_cidrs = module.aws_vpc.private_subnets_cidr_blocks # etcd access for private nodes
  dns          = "${local.etcd_name}.int"
  environment  = "a"
}

module "aws_vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "v3.2.0"

  name = local.vpc_name
  cidr = local.vpc_cidr
  azs  = data.aws_availability_zones.available.names

  public_subnets       = [for k, v in slice(data.aws_availability_zones.available.names, 0, 3) : cidrsubnet(local.vpc_cidr, 8, k)]
  private_subnets      = [for k, v in slice(data.aws_availability_zones.available.names, 0, 3) : cidrsubnet(local.vpc_cidr, 8, k + 10)]
  enable_nat_gateway   = true
  create_igw           = true
  enable_dns_hostnames = true
  single_nat_gateway   = true
}

Example ('airgapped' environment)

Though 'airgapped' in terms of inbound/outbound internet access, this still relies on access to the AWS metadata and API services from the instances in order to attach the volumes. This example (covering the etcd module only) uses Debian 10 as an alternative to Ubuntu.

module "etcd" {
  source = "github.com/ondat/etcd3-terraform"
  key_pair_public_key = "ssh-rsa..."
  ssh_cidrs = ["10.2.3.4/32"] # ssh jumpbox
  dns = "mycompany.int"

  client_cidrs = ["10.3.0.0/16"] # k8s cluster

  ssd_size = 1024
  cluster_size = 9
  instance_type = "c5a.4xlarge"

  vpc_id = "vpc-abcdef"
  subnet_ids = [ "subnet-abcdef", "subnet-fedcba", "subnet-cdbeaf" ]

  role = "etcd0"
  environment = "performance"

  allow_download_from_cidrs = ["10.2.3.5/32"] # HTTPS server for file (certificate must be valid and verifiable)
  create_s3_bucket = "false"
  ebs_bootstrap_binary_url = "https://10.2.3.5/ebs_bootstrap"
  etcd_url = "https://10.2.3.5/etcd-v3.5.1.tgz"
  ami = "ami-031283ff8a43b021c" # Debian 10
}

Next Steps

To retrieve the LB address, refer to the outputs of the module:

export ETCD_LB="$(terraform show -json | jq -r '.values.outputs.lb_address.value')"; echo $ETCD_LB
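
If your Terraform version supports it, terraform output reads the same value more directly:

export ETCD_LB="$(terraform output -raw lb_address)"; echo $ETCD_LB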

You can extract the certificates from the state:

terraform show -json | jq -r '.values.outputs.ca_cert.value' > ca.pem
terraform show -json | jq -r '.values.outputs.client_cert.value' > client.pem
terraform show -json | jq -r '.values.outputs.client_key.value' > client.key

From here, we can copy all the files and variables to a host within the VPC and test the LB with curl:

curl -v --cert client.pem --key client.key --cacert ca.pem https://$ETCD_LB:2379

Troubleshooting

Note that if you are creating a VPC alongside this module, you may need to apply it first, before the rest of the configuration. To do so, target the VPC module (module.aws_vpc in the example above):

terraform apply -target=module.aws_vpc
terraform apply

This module creates a private DNS zone. If you use custom DNS servers on your VPC, there are two options: either delegate the etcd subdomain from your domain to this zone, or implement Route 53 Resolver to intercept queries for the internal domain and forward them to your VPC DNS servers.

This module requires outbound internet access in its default configuration to retrieve the etcd binaries and the ebs-bootstrap utility from GitHub. The sources for these tools are configurable via the variables outlined in the examples - you can provide an HTTPS endpoint to retrieve them from in an internal environment.

Maintenance

By default, etcd is configured with a 100GB data disk per node on Amazon EBS SSDs (configurable via the ssd_size variable), auto-compaction in 'revision' mode with a retention of 20000 revisions, and a backend space quota of 8589934592 bytes (8 GiB). An automatic cron job on each node ensures defragmentation happens at least once a month; it briefly blocks reads/writes on a single node at a time, starting at 3:05am on a different day of the month for each node.
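
To inspect database size or trigger a defragmentation manually outside the cron schedule, the standard etcdctl commands apply (using the client certificate setup and $ETCD_LB from the sections above):

ETCDCTL_API=3 etcdctl --endpoints="https://$ETCD_LB:2379" --cert client.pem --key client.key endpoint status --cluster -w table
ETCDCTL_API=3 etcdctl --endpoints="https://$ETCD_LB:2379" --cert client.pem --key client.key defrag --cluster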

For further details of what these values and settings mean, refer to etcd's official documentation.

When conducting upgrades or maintenance such as expanding storage, make any necessary changes, then use terraform destroy -target=... and terraform apply -target=... on each ASG/launch configuration individually to roll the nodes in series without breaking quorum, checking each time that the new node has rejoined the cluster before replacing the next one.
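
A rough sketch of one iteration of that roll (the resource address and index are illustrative - use terraform state list to find the real ones, and repeat for each node in turn):

terraform destroy -target='module.etcd.aws_autoscaling_group.default[0]'
terraform apply   -target='module.etcd.aws_autoscaling_group.default[0]'
etcdctl member list   # confirm the replacement has rejoined (see 'How to run etcdctl' below)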

How to run etcdctl 🔧

We presume that whatever system you choose to run these commands on can connect to the NLB (i.e. if you're using a private subnet, your client machine is within the VPC or connected via a VPN).

First, install the CA certificate to your client machine. On Ubuntu/Debian, this can be done by copying ca.pem to /usr/local/share/ca-certificates/my-etcd-ca.crt and running update-ca-certificates.
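
On Ubuntu/Debian, that amounts to:

sudo cp ca.pem /usr/local/share/ca-certificates/my-etcd-ca.crt
sudo update-ca-certificates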

You're now ready to test etcdctl functionality - replace $insert_nlb_address with the DNS name of the NLB (the lb_address output retrieved above).

$ ETCDCTL_API=3 ETCDCTL_CERT=client.pem ETCDCTL_KEY=client.key ETCDCTL_ENDPOINTS="https://$insert_nlb_address:2379" etcdctl member list
25f97d08c726ed1, started, peer-2, https://peer-2.etcd.eu-west-2.i.development.mycompany.int:2380, https://peer-2.ondat.eu-west-2.i.development.mycompany.int:2379, false
326a6d27c048c8ea, started, peer-1, https://peer-1.etcd.eu-west-2.i.development.mycompany.int:2380, https://peer-1.ondat.eu-west-2.i.development.mycompany.int:2379, false
38308ae09ffc8b32, started, peer-0, https://peer-0.etcd.eu-west-2.i.development.mycompany.int:2380, https://peer-0.ondat.eu-west-2.i.development.mycompany.int:2379, false

How to (synthetically) benchmark etcd in your environment 📊

Prep

Be sure that you have go installed and $GOPATH correctly set with $GOPATH/bin in your $PATH in addition to being able to run etcdctl successfully as above.

$ go get go.etcd.io/etcd/v3/tools/benchmark

Note that performance will vary significantly depending on the client machine you run the benchmarks from - running them over the internet, even through a VPN, does not provide equitable performance to running directly from inside your VPC. For the first benchmark, we will demonstrate this before we continue using a VPC-resident instance only to run the rest.

Benchmark the write rate to leader (high-spec workstation, 100mbps connected over internet) 📉

$ benchmark --endpoints="https://$insert_nlb_address:2379" --cert client.pem --key client.key --target-leader --conns=1 --clients=1 put --key-size=8 --sequential-keys --total=10000 --val-size=256
Summary:
  Total:	383.4450 secs.
  Slowest:	0.2093 secs.
  Fastest:	0.0283 secs.
  Average:	0.0383 secs.
  Stddev:	0.0057 secs.
  Requests/sec:	26.0794

Response time histogram:
  0.0283 [1]	|
  0.0464 [9199]	|∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.0645 [764]	|∎∎∎
  0.0826 [31]	|
  0.1007 [4]	|
  0.1188 [0]	|
  0.1369 [0]	|
  0.1550 [0]	|
  0.1731 [0]	|
  0.1912 [0]	|
  0.2093 [1]	|

Latency distribution:
  10% in 0.0335 secs.
  25% in 0.0350 secs.
  50% in 0.0364 secs.
  75% in 0.0405 secs.
  90% in 0.0450 secs.
  95% in 0.0495 secs.
  99% in 0.0585 secs.
  99.9% in 0.0754 secs.

Benchmark the write rate to leader (VPC-resident c4.large instance) 📈

$ benchmark --endpoints="https://$insert_nlb_address:2379" --cert client.pem --key client.key --target-leader --conns=1 --clients=1 put --key-size=8 --sequential-keys --total=10000 --val-size=256
Summary:
  Total:	19.0950 secs.
  Slowest:	0.0606 secs.
  Fastest:	0.0014 secs.
  Average:	0.0019 secs.
  Stddev:	0.0011 secs.
  Requests/sec:	523.6961

Response time histogram:
  0.0014 [1]	|
  0.0073 [9972]	|∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.0133 [17]	|
  0.0192 [4]	|
  0.0251 [0]	|
  0.0310 [2]	|
  0.0369 [2]	|
  0.0428 [0]	|
  0.0487 [0]	|
  0.0547 [0]	|
  0.0606 [2]	|

Latency distribution:
  10% in 0.0016 secs.
  25% in 0.0017 secs.
  50% in 0.0018 secs.
  75% in 0.0019 secs.
  90% in 0.0022 secs.
  95% in 0.0025 secs.
  99% in 0.0044 secs.
  99.9% in 0.0139 secs.
$ benchmark --endpoints="https://$insert_nlb_address:2379" --cert client.pem --key client.key --target-leader --conns=100 --clients=1000 put --key-size=8 --sequential-keys --total=100000 --val-size=256
Summary:
  Total:	17.8645 secs.
  Slowest:	1.1992 secs.
  Fastest:	0.0338 secs.
  Average:	0.1782 secs.
  Stddev:	0.0785 secs.
  Requests/sec:	5597.7090

Response time histogram:
  0.0338 [1]	|
  0.1503 [37453]	|∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.2668 [54595]	|∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.3834 [6561]	|∎∎∎∎
  0.4999 [627]	|
  0.6165 [268]	|
  0.7330 [187]	|
  0.8495 [108]	|
  0.9661 [76]	|
  1.0826 [89]	|
  1.1992 [35]	|

Latency distribution:
  10% in 0.1061 secs.
  25% in 0.1313 secs.
  50% in 0.1678 secs.
  75% in 0.2060 secs.
  90% in 0.2528 secs.
  95% in 0.2935 secs.
  99% in 0.4293 secs.
  99.9% in 0.9885 secs.

Benchmark writes to all members 📈

$ benchmark --endpoints="https://$insert_nlb_address:2379" --cert client.pem --key client.key --conns=100 --clients=1000 put --key-size=8 --sequential-keys --total=100000 --val-size=256
Summary:
  Total:	7.0381 secs.
  Slowest:	0.3753 secs.
  Fastest:	0.0111 secs.
  Average:	0.0694 secs.
  Stddev:	0.0241 secs.
  Requests/sec:	14208.3928

Response time histogram:
  0.0111 [1]	|
  0.0475 [12583]	|∎∎∎∎∎∎∎
  0.0840 [68178]	|∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.1204 [15990]	|∎∎∎∎∎∎∎∎∎
  0.1568 [2456]	|∎
  0.1932 [562]	|
  0.2297 [135]	|
  0.2661 [25]	|
  0.3025 [0]	|
  0.3389 [0]	|
  0.3753 [70]	|

Latency distribution:
  10% in 0.0459 secs.
  25% in 0.0540 secs.
  50% in 0.0654 secs.
  75% in 0.0793 secs.
  90% in 0.0963 secs.
  95% in 0.1092 secs.
  99% in 0.1524 secs.
  99.9% in 0.2080 secs.

Benchmark single connection reads 📈

$ benchmark --endpoints="https://$insert_nlb_address:2379" --cert client.pem --key client.key --conns=1 --clients=1 range YOUR_KEY --consistency=l --total=10000
Summary:
  Total:	27.1453 secs.
  Slowest:	0.3582 secs.
  Fastest:	0.0023 secs.
  Average:	0.0027 secs.
  Stddev:	0.0039 secs.
  Requests/sec:	368.3883

Response time histogram:
  0.0023 [1]	|
  0.0379 [9992]	|∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.0735 [5]	|
  0.1091 [1]	|
  0.1446 [0]	|
  0.1802 [0]	|
  0.2158 [0]	|
  0.2514 [0]	|
  0.2870 [0]	|
  0.3226 [0]	|
  0.3582 [1]	|

Latency distribution:
  10% in 0.0024 secs.
  25% in 0.0025 secs.
  50% in 0.0026 secs.
  75% in 0.0027 secs.
  90% in 0.0028 secs.
  95% in 0.0028 secs.
  99% in 0.0032 secs.
  99.9% in 0.0359 secs.
$ benchmark --endpoints="https://$insert_nlb_address:2379" --cert client.pem --key client.key --conns=1 --clients=1 range YOUR_KEY --consistency=s --total=10000
Summary:
  Total:	10.9325 secs.
  Slowest:	0.0685 secs.
  Fastest:	0.0009 secs.
  Average:	0.0011 secs.
  Stddev:	0.0008 secs.
  Requests/sec:	914.7062

Response time histogram:
  0.0009 [1]	|
  0.0077 [9989]	|∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.0144 [5]	|
  0.0212 [3]	|
  0.0279 [1]	|
  0.0347 [0]	|
  0.0414 [0]	|
  0.0482 [0]	|
  0.0550 [0]	|
  0.0617 [0]	|
  0.0685 [1]	|

Latency distribution:
  10% in 0.0010 secs.
  25% in 0.0010 secs.
  50% in 0.0010 secs.
  75% in 0.0012 secs.
  90% in 0.0012 secs.
  95% in 0.0013 secs.
  99% in 0.0014 secs.
  99.9% in 0.0077 secs.

Benchmark many concurrent reads 📈

$ benchmark --endpoints="https://$insert_nlb_address:2379" --cert client.pem --key client.key --conns=100 --clients=1000 range YOUR_KEY --consistency=l --total=100000
Summary:
  Total:	6.2002 secs.
  Slowest:	0.6050 secs.
  Fastest:	0.0030 secs.
  Average:	0.0570 secs.
  Stddev:	0.0428 secs.
  Requests/sec:	16128.4008

Response time histogram:
  0.0030 [1]	|
  0.0632 [72786]	|∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.1234 [20556]	|∎∎∎∎∎∎∎∎∎∎∎
  0.1836 [4931]	|∎∎
  0.2438 [1145]	|
  0.3040 [193]	|
  0.3642 [293]	|
  0.4244 [29]	|
  0.4846 [6]	|
  0.5448 [0]	|
  0.6050 [60]	|

Latency distribution:
  10% in 0.0239 secs.
  25% in 0.0316 secs.
  50% in 0.0438 secs.
  75% in 0.0664 secs.
  90% in 0.1096 secs.
  95% in 0.1336 secs.
  99% in 0.2207 secs.
  99.9% in 0.3603 secs.
$ benchmark --endpoints="https://$insert_nlb_address:2379" --cert client.pem --key client.key --conns=100 --clients=1000 range YOUR_KEY --consistency=s --total=100000
Summary:
  Total:	5.0824 secs.
  Slowest:	0.6650 secs.
  Fastest:	0.0018 secs.
  Average:	0.0452 secs.
  Stddev:	0.0321 secs.
  Requests/sec:	19675.9040

Response time histogram:
  0.0018 [1]	|
  0.0681 [85681]	|∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  0.1344 [12171]	|∎∎∎∎∎
  0.2008 [1710]	|
  0.2671 [271]	|
  0.3334 [79]	|
  0.3997 [23]	|
  0.4660 [33]	|
  0.5324 [21]	|
  0.5987 [1]	|
  0.6650 [9]	|

Latency distribution:
  10% in 0.0190 secs.
  25% in 0.0262 secs.
  50% in 0.0371 secs.
  75% in 0.0537 secs.
  90% in 0.0795 secs.
  95% in 0.1006 secs.
  99% in 0.1665 secs.
  99.9% in 0.2903 secs.

Appendix

Requirements

No requirements.

Providers

Name | Version
archive | 2.2.0
aws | 3.71.0
tls | 3.1.0

Modules

Name | Source | Version
attached-ebs | github.com/ondat/etcd3-bootstrap/terraform/modules/attached_ebs | n/a

Resources

Name | Type
aws_autoscaling_group.default | resource
aws_cloudwatch_event_rule.autoscaling | resource
aws_cloudwatch_event_rule.ec2 | resource
aws_cloudwatch_event_target.lambda-cloudwatch-dns-service-autoscaling | resource
aws_cloudwatch_event_target.lambda-cloudwatch-dns-service-ec2 | resource
aws_iam_instance_profile.default | resource
aws_iam_policy.default | resource
aws_iam_role.default | resource
aws_iam_role.lambda-cloudwatch-dns-service | resource
aws_iam_role_policy.lambda-cloudwatch-dns-service | resource
aws_iam_role_policy_attachment.default | resource
aws_iam_role_policy_attachment.lambda-cloudwatch-dns-service-logs | resource
aws_iam_role_policy_attachment.lambda-cloudwatch-dns-service-xray | resource
aws_key_pair.default | resource
aws_lambda_function.cloudwatch-dns-service | resource
aws_lambda_permission.cloudwatch-dns-service-autoscaling | resource
aws_lambda_permission.cloudwatch-dns-service-ec2 | resource
aws_launch_configuration.default | resource
aws_lb.nlb | resource
aws_lb_listener.https | resource
aws_lb_target_group.https | resource
aws_route53_record.defaultclient | resource
aws_route53_record.defaultssl | resource
aws_route53_record.nlb | resource
aws_route53_record.peers | resource
aws_route53_zone.default | resource
aws_security_group.default | resource
tls_cert_request.client | resource
tls_cert_request.peer | resource
tls_cert_request.server | resource
tls_locally_signed_cert.client | resource
tls_locally_signed_cert.peer | resource
tls_locally_signed_cert.server | resource
tls_private_key.ca | resource
tls_private_key.client | resource
tls_private_key.peer | resource
tls_private_key.server | resource
tls_self_signed_cert.ca | resource
archive_file.lambda-dns-service | data source
aws_ami.ami | data source
aws_availability_zones.available | data source
aws_region.current | data source
aws_subnet.target | data source
aws_vpc.target | data source

Inputs

Name | Description | Type | Default | Required
allow_download_from_cidrs | CIDRs from which to allow downloading etcd and etcd-bootstrap binaries via TLS (443 outbound). By default, this is totally open as S3 and GitHub IP addresses are unpredictable. | list | ["0.0.0.0/0"] | no
ami | AMI to launch with - if set, overrides the value found via ami_name_regex and ami_owner | string | "" | no
ami_name_regex | Regex to match the preferred AMI name | string | "ubuntu/images/hvm-ssd/ubuntu-.*-amd64-server-*" | no
ami_owner | AMI owner ID | string | "099720109477" | no
associate_public_ips | Whether to associate public IPs with etcd instances (suggest false for security) | string | "false" | no
client_cidrs | CIDRs to allow client access to etcd | list | ["10.0.0.0/8"] | no
cluster_size | Number of etcd nodes to launch | number | 3 | no
dns | Private, internal domain name to generate for etcd | string | "mycompany.int" | no
ebs_bootstrap_binary_url | Custom URL from which to download the ebs-bootstrap binary | any | null | no
environment | Target environment, used to apply tags | string | "development" | no
etcd_url | Custom URL from which to download the etcd tgz | any | null | no
etcd_version | etcd version to install | string | "3.5.1" | no
instance_type | AWS instance type; at least c5a.large is recommended (etcd suggests m4.large) | string | "c5a.large" | no
key_pair_public_key | Public key for SSH access | string | "" | no
nlb_internal | 'true' to expose the NLB internally only, 'false' to expose it to the internet | bool | true | no
restore_snapshot_ids | Map of the snapshots to use to restore etcd data storage - eg. {0: "snap-abcdef", 1: "snap-fedcba", 2: "snap-012345"} | map(string) | {} | no
role | Role name used for internal logic | string | "etcd" | no
ssd_size | Size (in GB) of the SSD to be used for etcd data storage | string | "100" | no
ssh_cidrs | CIDRs to allow SSH access to the nodes from (by default, none) | list | [] | no
subnet_ids | The subnet IDs to which to deploy etcd | any | n/a | yes
vpc_id | The VPC ID to use | any | n/a | yes

Outputs

Name | Description
ca_cert | CA certificate to add to client trust stores (also see ./ca.pem)
client_cert | Client certificate to use to authenticate with etcd (also see ./client.pem)
client_key | Client private key to use to authenticate with etcd (also see ./client.key)
lb_address | Load balancer address for use by clients
