Added terraform support for Exoscale #7141

Merged (2 commits) on Jan 23, 2021
154 changes: 154 additions & 0 deletions contrib/terraform/exoscale/README.md
# Kubernetes on Exoscale with Terraform

Provision a Kubernetes cluster on [Exoscale](https://www.exoscale.com/) using Terraform and Kubespray

## Overview

The setup looks like the following:

```text
Kubernetes cluster
+-----------------------+
+---------------+ | +--------------+ |
| | | | +--------------+ |
| API server LB +---------> | | | |
| | | | | Master/etcd | |
+---------------+ | | | node(s) | |
| +-+ | |
| +--------------+ |
| ^ |
| | |
| v |
+---------------+ | +--------------+ |
| | | | +--------------+ |
| Ingress LB +---------> | | | |
| | | | | Worker | |
+---------------+ | | | node(s) | |
| +-+ | |
| +--------------+ |
+-----------------------+
```

## Requirements

* Terraform 0.13.0 or newer

*0.12 also works if you modify the provider block to include a version constraint and remove all `versions.tf` files*
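
For 0.12, a minimal sketch of such a provider block (the version constraint below is an assumption; pin whichever release you target):

```hcl
provider "exoscale" {
  # Terraform 0.12-style version pinning; 0.13+ declares this in
  # a required_providers block inside versions.tf instead
  version = "~> 0.18"
}
```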

## Quickstart

NOTE: *Assumes you are at the root of the kubespray repo*

Copy the sample inventory for your cluster along with the default Terraform variables.

```bash
CLUSTER=my-exoscale-cluster
cp -r inventory/sample inventory/$CLUSTER
cp contrib/terraform/exoscale/default.tfvars inventory/$CLUSTER/
cd inventory/$CLUSTER
```

Edit `default.tfvars` to match your setup

```bash
# Ensure $EDITOR points to your favorite editor, e.g., vim, emacs, VS Code, etc.
$EDITOR default.tfvars
```

For authentication, you can use the credentials file `~/.cloudstack.ini` or `./cloudstack.ini`.
The file should look something like this:

```ini
[cloudstack]
key = <API key>
secret = <API secret>
```

Follow the [Exoscale IAM Quick-start](https://community.exoscale.com/documentation/iam/quick-start/) to learn how to generate API keys.

### Encrypted credentials

To have the credentials encrypted at rest, you can use [sops](https://github.com/mozilla/sops) and only decrypt the credentials at runtime.

```bash
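# Create a plaintext skeleton, then encrypt it in place with your PGP key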
cat << EOF > cloudstack.ini
[cloudstack]
key =
secret =
EOF
sops --encrypt --in-place --pgp <PGP key fingerprint> cloudstack.ini
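# Open the file in $EDITOR; sops decrypts on open and re-encrypts on save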
sops cloudstack.ini
```

Run Terraform to create the infrastructure

```bash
terraform init ../../contrib/terraform/exoscale
terraform apply -var-file default.tfvars ../../contrib/terraform/exoscale
```

If your cloudstack credentials file is encrypted with sops, run the following instead. `sops exec-file` decrypts the credentials to a temporary file (`--no-fifo` forces a regular file rather than a FIFO), substitutes its path for `{}`, and the provider picks it up through the `CLOUDSTACK_CONFIG` environment variable:

```bash
terraform init ../../contrib/terraform/exoscale
sops exec-file -no-fifo cloudstack.ini 'CLOUDSTACK_CONFIG={} terraform apply -var-file default.tfvars ../../contrib/terraform/exoscale'
```

You should now have an inventory file named `inventory.ini` that you can use with Kubespray.
Run `terraform output` to find the IP addresses of the nodes as well as the control-plane and data-plane load balancers.
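
For example (the root module's output names are defined in its `output.tf`, which is not shown in this diff):

```bash
# Print all outputs recorded in the Terraform state
terraform output
```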

It is a good idea to check that you have basic SSH connectivity to the nodes before continuing. You can do that by running:

```bash
ansible -i inventory.ini -m ping all
```

For example, to set up a cluster using this inventory and the default `cluster.yml` playbook:

```bash
ansible-playbook -i inventory.ini ../../cluster.yml -b -v
```

## Teardown

The Kubernetes cluster cannot create any load balancers or disks of its own, so teardown is as simple as running `terraform destroy`:

```bash
terraform destroy -var-file default.tfvars ../../contrib/terraform/exoscale
```

## Variables

### Required

* `ssh_pub_key`: Path to the public SSH key to use for all machines
* `zone`: The zone in which to run the cluster
* `machines`: Machines to provision. The key of each entry is used as the name of the machine (a minimal sketch follows this list)
  * `node_type`: The role of this node *(master|worker)*
  * `size`: The instance size to use
  * `boot_disk`: The boot disk to use
    * `image_name`: Name of the image
    * `root_partition_size`: Size *(in GB)* of the root partition
    * `ceph_partition_size`: Size *(in GB)* of the partition for rook to use as ceph storage *(Set to 0 to disable)*
    * `node_local_partition_size`: Size *(in GB)* of the partition for node-local-storage *(Set to 0 to disable)*
* `ssh_whitelist`: List of IP ranges (CIDR) that will be allowed to SSH to the nodes
* `api_server_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the API server
* `nodeport_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the kubernetes nodes on ports 30000-32767 (kubernetes nodeports)
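
A minimal `machines` entry matching the shape above (values are illustrative; see `default.tfvars` below for a complete file):

```hcl
machines = {
  "master-0" : {
    "node_type" : "master",
    "size" : "Small",
    "boot_disk" : {
      "image_name" : "Linux Ubuntu 20.04 LTS 64-bit",
      "root_partition_size" : 50,
      "node_local_partition_size" : 0,
      "ceph_partition_size" : 0
    }
  }
}
```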

### Optional

* `prefix`: Prefix to use for all resources; must be unique across all clusters in the same project *(Defaults to `default`)*

An example variables file can be found in `default.tfvars`.

## Known limitations

### Only single disk

Since Exoscale doesn't support attaching additional disks to an instance, this script can instead create extra partitions on the boot disk for [Rook](https://rook.io/) and [node-local-storage](https://kubernetes.io/docs/concepts/storage/volumes/#local).
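
For example, a `boot_disk` that reserves space for both (sizes here are illustrative assumptions):

```hcl
"boot_disk" : {
  "image_name" : "Linux Ubuntu 20.04 LTS 64-bit",
  "root_partition_size" : 50,
  "node_local_partition_size" : 20, # 20 GB for node-local-storage
  "ceph_partition_size" : 100      # 100 GB for rook to use as ceph storage
}
```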

### No Kubernetes API

The current solution doesn't use the [Exoscale Kubernetes cloud controller](https://github.com/exoscale/exoscale-cloud-controller-manager).
This means that we need to set up an HTTP(S) load balancer in front of all workers and run the Ingress controller as a DaemonSet.
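
As a sketch, one way to do that is Kubespray's bundled NGINX ingress controller, which it deploys as a DaemonSet. The variable names and group-vars path below are assumptions based on the standard sample inventory; verify them in your checkout:

```bash
# From inventory/$CLUSTER: enable the bundled ingress controller with host
# networking, so the data-plane load balancer can target every worker node
cat >> group_vars/k8s-cluster/addons.yml << EOF
ingress_nginx_enabled: true
ingress_nginx_host_network: true
EOF
```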
61 changes: 61 additions & 0 deletions contrib/terraform/exoscale/default.tfvars
prefix = "default"
zone = "ch-gva-2"

inventory_file = "inventory.ini"

ssh_pub_key = "~/.ssh/id_rsa.pub"

machines = {
"master-0": {
"node_type": "master",
"size": "Small",
"boot_disk": {
"image_name": "Linux Ubuntu 20.04 LTS 64-bit",
"root_partition_size": 50,
"node_local_partition_size": 0,
"ceph_partition_size": 0
}
},
"worker-0": {
"node_type": "worker",
"size": "Large",
"boot_disk": {
"image_name": "Linux Ubuntu 20.04 LTS 64-bit",
"root_partition_size": 50,
"node_local_partition_size": 0,
"ceph_partition_size": 0
}
},
"worker-1": {
"node_type": "worker",
"size": "Large",
"boot_disk": {
"image_name": "Linux Ubuntu 20.04 LTS 64-bit",
"root_partition_size": 50,
"node_local_partition_size": 0,
"ceph_partition_size": 0
}
},
"worker-2": {
"node_type": "worker",
"size": "Large",
"boot_disk": {
"image_name": "Linux Ubuntu 20.04 LTS 64-bit",
"root_partition_size": 50,
"node_local_partition_size": 0,
"ceph_partition_size": 0
}
}
}

nodeport_whitelist = [
"0.0.0.0/0"
]

ssh_whitelist = [
"0.0.0.0/0"
]

api_server_whitelist = [
"0.0.0.0/0"
]
49 changes: 49 additions & 0 deletions contrib/terraform/exoscale/main.tf
provider "exoscale" {}

module "kubernetes" {
source = "./modules/kubernetes-cluster"

prefix = var.prefix

machines = var.machines

ssh_pub_key = var.ssh_pub_key

ssh_whitelist = var.ssh_whitelist
api_server_whitelist = var.api_server_whitelist
nodeport_whitelist = var.nodeport_whitelist
}

#
# Generate ansible inventory
#

data "template_file" "inventory" {
template = file("${path.module}/templates/inventory.tpl")

vars = {
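    # One inventory line per node, e.g.
    # "master-0 ansible_user=ubuntu ansible_host=<public ip> ip=<private ip> etcd_member_name=etcd1"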
connection_strings_master = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s etcd_member_name=etcd%d",
keys(module.kubernetes.master_ip_addresses),
values(module.kubernetes.master_ip_addresses).*.public_ip,
values(module.kubernetes.master_ip_addresses).*.private_ip,
range(1, length(module.kubernetes.master_ip_addresses) + 1)))
connection_strings_worker = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s",
keys(module.kubernetes.worker_ip_addresses),
values(module.kubernetes.worker_ip_addresses).*.public_ip,
values(module.kubernetes.worker_ip_addresses).*.private_ip))

list_master = join("\n", keys(module.kubernetes.master_ip_addresses))
list_worker = join("\n", keys(module.kubernetes.worker_ip_addresses))
api_lb_ip_address = module.kubernetes.control_plane_lb_ip_address
}
}

resource "null_resource" "inventories" {
provisioner "local-exec" {
command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
}

triggers = {
template = data.template_file.inventory.rendered
}
}