
How To Install Kubernetes On An AWS Cluster Using KubeOne

In this quick start we're going to show how to get started with KubeOne on AWS. We'll cover how to create the needed infrastructure using our example Terraform scripts and then install Kubernetes. Finally, we're going to show how to destroy the cluster along with the infrastructure.

As a result, you'll get a highly available (HA) Kubernetes 1.16.1 cluster with three control plane nodes and three worker nodes.

Prerequisites

To follow this quick start, you'll need:

  • kubeone v0.11.0 or newer installed, which can be done by following the Installing KubeOne section of the README,
  • terraform v0.12.0 or later installed. Older releases are not compatible. The binaries for terraform can be found on the Terraform website.
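You can confirm that both tools are installed and recent enough by printing their versions (this assumes your kubeone build includes the version subcommand):

kubeone version
terraform version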

Setting Up Credentials

In order for Terraform to successfully create the infrastructure and for KubeOne to install Kubernetes and create worker nodes, you need an IAM account with the appropriate permissions.

Once you have the IAM account you need to set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables:

export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...

If the environment variables are not set, KubeOne will try to read the default profile from ~/.aws/credentials.
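For reference, a default profile in ~/.aws/credentials has the following shape (values elided):

[default]
aws_access_key_id = ...
aws_secret_access_key = ...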

Note: The credentials are also deployed to the cluster to be used by machine-controller for creating worker nodes.

Creating Infrastructure

KubeOne is based on the Bring-Your-Own-Infra approach, which means that you have to provide the machines and other needed resources yourself. To make this task easier, we provide example Terraform scripts that you can use to get started. You're free to use your own scripts or any other preferred approach.

The Terraform scripts for AWS are located in the ./examples/terraform/aws directory.

Note: KubeOne comes with Terraform integration that is capable of reading information about the infrastructure from Terraform output. If you decide not to use our Terraform scripts but want to use Terraform integration, make sure variable names in the output match variable names used by KubeOne. Alternatively, if you decide not to use Terraform, you can provide needed information about the infrastructure manually in the KubeOne configuration file.
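If you go the manual route instead, here is a minimal sketch of how control plane hosts can be declared in the KubeOne configuration file; the addresses, user, and key path below are placeholders, and kubeone config print --full shows the authoritative schema:

apiVersion: kubeone.io/v1alpha1
kind: KubeOneCluster
hosts:
# illustrative placeholders - replace with your instances' details
- publicAddress: '1.2.3.4'
  privateAddress: '172.31.0.10'
  sshUsername: 'ubuntu'
  sshPrivateKeyFile: '/home/user/.ssh/id_rsa'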

First, we need to switch to the directory with Terraform scripts:

cd ./examples/terraform/aws

Before we can use Terraform to create the infrastructure for us, Terraform needs to download the AWS plugin and set up its environment. This is done by running the init command:

terraform init

Note: You need to run this command only the first time before using the scripts.

You may want to configure the provisioning process by setting variables that define the cluster name, AWS region, instance size, and similar. The easiest way is to create a terraform.tfvars file and store the variables there. This file is automatically read by Terraform.

nano terraform.tfvars

For the list of available settings along with their names please see the variables.tf file. You should consider setting:

  • cluster_name (required) - prefix for cloud resources
  • aws_region (default: eu-west-3)
  • ssh_public_key_file (default: ~/.ssh/id_rsa.pub) - path to your SSH public key that's deployed on instances
  • control_plane_type (default: t3.medium) - note that you should have at least 2 GB RAM and 2 CPUs for Kubernetes to work properly

The terraform.tfvars file can look like:

cluster_name = "demo"

aws_region = "us-east-1"
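If you want to pin the optional variables from the list above as well, lines like these can be appended (the values shown here are just the defaults from variables.tf, purely for illustration):

ssh_public_key_file = "~/.ssh/id_rsa.pub"
control_plane_type = "t3.medium"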

Now that you have configured Terraform, you can use the plan command to see what changes will be made:

terraform plan

Finally, if you agree with the proposed changes, you can proceed and provision the infrastructure:

terraform apply

Shortly after, you'll be asked to enter yes to confirm your intention to provision the infrastructure.

Infrastructure provisioning takes around 5 minutes.
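Once apply finishes, you can inspect the generated values that KubeOne's Terraform integration reads:

terraform output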

Installing Kubernetes

Now that you have the infrastructure, you can proceed with installing Kubernetes using KubeOne.

Before you start, you'll need a configuration file that defines how Kubernetes will be installed, e.g. which version will be used and which features will be enabled. To see the full configuration file reference, run:

kubeone config print --full

To get started you can use the following configuration. It'll install Kubernetes 1.16.1 and create three worker nodes. KubeOne automatically populates all needed information about worker nodes from the Terraform output. Alternatively, you can set that information manually. As KubeOne uses the Kubermatic machine-controller for creating worker nodes, see the AWS example manifest for available options.

For example, to create a cluster with Kubernetes 1.16.1, save the following to config.yaml:

apiVersion: kubeone.io/v1alpha1
kind: KubeOneCluster
versions:
  kubernetes: '1.16.1'
cloudProvider:
  name: 'aws'

Finally, we're going to install Kubernetes by using the install command and providing the configuration file and the Terraform output:

kubeone install config.yaml --tfjson <DIR-WITH-tfstate-FILE>

Note: --tfjson accepts a file as well as a directory containing the Terraform state file. To pass a file, generate the JSON output by running the following command and use it as the value for the --tfjson flag:

terraform output -json > tf.json

Alternatively, if the Terraform state file is in the current working directory, --tfjson . can be used as well.
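For example, using the tf.json file generated above:

kubeone install config.yaml --tfjson tf.json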

The installation process takes some time, usually 5-10 minutes. The output should look like the following:

time="11:59:19 UTC" level=info msg="Installing prerequisites…"
time="11:59:20 UTC" level=info msg="Determine operating system…" node=157.230.114.40
time="11:59:20 UTC" level=info msg="Determine operating system…" node=157.230.114.39
time="11:59:20 UTC" level=info msg="Determine operating system…" node=157.230.114.42
time="11:59:21 UTC" level=info msg="Determine hostname…" node=157.230.114.40
time="11:59:21 UTC" level=info msg="Creating environment file…" node=157.230.114.40
time="11:59:21 UTC" level=info msg="Installing kubeadm…" node=157.230.114.40 os=ubuntu
time="11:59:21 UTC" level=info msg="Determine hostname…" node=157.230.114.39
time="11:59:21 UTC" level=info msg="Creating environment file…" node=157.230.114.39
time="11:59:21 UTC" level=info msg="Installing kubeadm…" node=157.230.114.39 os=ubuntu
time="11:59:22 UTC" level=info msg="Determine hostname…" node=157.230.114.42
time="11:59:22 UTC" level=info msg="Creating environment file…" node=157.230.114.42
time="11:59:22 UTC" level=info msg="Installing kubeadm…" node=157.230.114.42 os=ubuntu
time="11:59:59 UTC" level=info msg="Deploying configuration files…" node=157.230.114.39 os=ubuntu
time="12:00:03 UTC" level=info msg="Deploying configuration files…" node=157.230.114.42 os=ubuntu
time="12:00:04 UTC" level=info msg="Deploying configuration files…" node=157.230.114.40 os=ubuntu
time="12:00:05 UTC" level=info msg="Generating kubeadm config file…"
time="12:00:06 UTC" level=info msg="Configuring certs and etcd on first controller…"
time="12:00:06 UTC" level=info msg="Ensuring Certificates…" node=157.230.114.39
time="12:00:09 UTC" level=info msg="Generating PKI…"
time="12:00:09 UTC" level=info msg="Running kubeadm…" node=157.230.114.39
time="12:00:09 UTC" level=info msg="Downloading PKI files…" node=157.230.114.39
time="12:00:10 UTC" level=info msg="Creating local backup…" node=157.230.114.39
time="12:00:10 UTC" level=info msg="Deploying PKI…"
time="12:00:10 UTC" level=info msg="Uploading files…" node=157.230.114.42
time="12:00:10 UTC" level=info msg="Uploading files…" node=157.230.114.40
time="12:00:13 UTC" level=info msg="Configuring certs and etcd on consecutive controller…"
time="12:00:13 UTC" level=info msg="Ensuring Certificates…" node=157.230.114.40
time="12:00:13 UTC" level=info msg="Ensuring Certificates…" node=157.230.114.42
time="12:00:15 UTC" level=info msg="Initializing Kubernetes on leader…"
time="12:00:15 UTC" level=info msg="Running kubeadm…" node=157.230.114.39
time="12:01:47 UTC" level=info msg="Joining controlplane node…"
time="12:03:01 UTC" level=info msg="Copying Kubeconfig to home directory…" node=157.230.114.39
time="12:03:01 UTC" level=info msg="Copying Kubeconfig to home directory…" node=157.230.114.40
time="12:03:01 UTC" level=info msg="Copying Kubeconfig to home directory…" node=157.230.114.42
time="12:03:03 UTC" level=info msg="Building Kubernetes clientset…"
time="12:03:04 UTC" level=info msg="Applying canal CNI plugin…"
time="12:03:06 UTC" level=info msg="Installing machine-controller…"
time="12:03:28 UTC" level=info msg="Installing machine-controller webhooks…"
time="12:03:28 UTC" level=info msg="Waiting for machine-controller to come up…"
time="12:04:08 UTC" level=info msg="Creating worker machines…"
time="12:04:10 UTC" level=info msg="Skipping Ark deployment because no backup provider was configured."

KubeOne automatically downloads the Kubeconfig file for the cluster. It's named <cluster_name>-kubeconfig, where <cluster_name> is the name from your configuration. You can pass it to kubectl on every invocation:

kubectl --kubeconfig=<cluster_name>-kubeconfig

or export the KUBECONFIG environment variable:

export KUBECONFIG=$PWD/<cluster_name>-kubeconfig
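With the variable exported, plain kubectl commands target the new cluster, so listing the nodes should show three control plane and three worker nodes:

kubectl get nodes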

Scaling Worker Nodes

Worker nodes are managed by the machine-controller. Initially, it creates one worker node per availability zone. They can be scaled up and down (including to 0) using the Kubernetes API. To do so, first retrieve the MachineDeployments:

kubectl get machinedeployments -n kube-system

The names of the MachineDeployments are generated. You can scale the worker nodes in a MachineDeployment via:

kubectl --namespace kube-system scale machinedeployment/<MACHINE-DEPLOYMENT-NAME> --replicas=3
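The new instances need a few minutes to boot and join the cluster; you can watch them appear with:

kubectl get nodes --watch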

Note: The kubectl scale command does not work as expected with kubectl 1.15, returning an error such as:

The machinedeployments "<MACHINE-DEPLOYMENT-NAME>" is invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update

As a workaround, follow the steps described in issue 593, or upgrade to kubectl 1.16 or newer.

Deleting The Cluster

Before deleting a cluster you should clean up all MachineDeployments, so that all worker nodes are deleted. You can do this with the kubeone reset command:

kubeone reset config.yaml --tfjson <DIR-WITH-tfstate-FILE>
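While reset runs, you can watch the worker Machine objects disappear from a second terminal (machine-controller stores them in the kube-system namespace):

kubectl --namespace kube-system get machines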

The reset command waits for all worker nodes to be gone. Once it's done, you can proceed and destroy the AWS infrastructure using Terraform:

terraform destroy

You'll be asked to enter yes to confirm your intention to destroy the infrastructure.

Congratulations! You're now running a Kubernetes 1.16.1 HA cluster with three control plane nodes and three worker nodes. If you want to learn more about KubeOne and its features, such as upgrades, make sure to check our documentation.