akosiaris/k0s_hetzner

A PoC repo for creating and running a k0s cluster on Hetzner Cloud

Intro

This is a Proof of Concept. Use anything here at your own risk.

Usage

If you use Windows, go get a Linux machine.

Get Terraform, clone this repo with git, and cd into the directory.
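For example (clone URL inferred from the repository name):

$ git clone https://github.com/akosiaris/k0s_hetzner.git
$ cd k0s_hetzner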

You now have two options: either rely on Terraform to create SSH keys for you, or create a new SSH key pair yourself. Per the Terraform docs for the TLS provider, the former is meant mostly for development purposes, so if you plan to use this for production, create an SSH key pair on your own. You can do so with the following command:

$ ssh-keygen -t ed25519 -f id_ed25519_k0s_hetzner_poc

Now, go to the Hetzner Cloud console, create a project, go to Security, and create an API token.

Create a file named terraform.tfvars and put the API token from Hetzner's portal and your domain in it, as below. Note that this file is git-ignored:

hcloud_token = "API_TOKEN_HERE"
domain       = "example.com"

These are the absolute necessities; see the Tunables section below for everything you can configure using this file.

If you have created your own SSH key pair, also add the following to that file:

ssh_pub_key       = "SSH_KEY_HERE"
ssh_priv_key_path = "path_to_priv_key_file"
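
With the key pair generated earlier, you can print the public key to paste into ssh_pub_key with:

$ cat id_ed25519_k0s_hetzner_poc.pub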

Now, run the following to fetch the providers:

$ terraform init

Then validate that the HCL is sound:

$ terraform validate

Then spew out the plan and stare at it for a while, to dispel demons and the like:

$ terraform plan
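Optionally, if you want to apply exactly what you reviewed, Terraform can save the plan to a file (the plan.out name is just an example):

$ terraform plan -out plan.out
$ terraform apply plan.out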

Once you are sure that the demons have been exorcised, create the resources:

$ SSH_KNOWN_HOSTS=/dev/null terraform apply -auto-approve

Wait for the output and fetch the IPv6 and IPv4 addresses. Depending on which of the two you have connectivity for, SSH to the nodes. Note that I am redirecting the host keys to /dev/null on purpose, as I don't want to keep them around for this PoC.

ssh -o UserKnownHostsFile=/dev/null -i ~/.ssh/id_ed25519_k0s_hetzner_poc root@<ip_address>
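
If you need the addresses again later, Terraform can re-print the outputs at any time:

$ terraform output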

k0s usage

There are two ways to use the cluster: internally, using the tooling provided by k0s itself, or via the kubeconfig file and standard tooling.

Internal use

k0s is a distribution of Kubernetes that has some interesting properties. Some basic commands:

List nodes

# k0s kubectl get nodes -o wide

List all pods everywhere

# k0s kubectl get pods --all-namespaces -o wide

List namespaces

# k0s kubectl get ns

Deploy a basic pod

# k0s kubectl run nginx --image=nginx

Delete pods

# k0s kubectl delete pods --all

And so on. All of these are meant to be run from a controller node.

Standard tooling

If you have kubectl (or any other compatible Kubernetes client, e.g. Lens or Helm) locally, there is a kubeconfig file saved in the root of the repo after a successful apply; use it at your discretion. Note: this file provides FULL access to the cluster, so don't mishandle it.

Standard kubectl

Make sure you have a compatible kubectl version around. Following the kubectl version skew policy, you can use a kubectl one minor version newer or older than whatever the cluster runs.
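You can check your local client version with:

$ kubectl version --client

Then, pointing kubectl at the generated kubeconfig: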

KUBECONFIG=kubeconfig kubectl get nodes -o wide

Deploy using helm

This assumes you know your way around basic Helm usage:

KUBECONFIG=kubeconfig helm install <name> <chart>
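
For instance, a throwaway nginx deployment using the Bitnami chart repository (the repository and release name here are just an example, not something this repo depends on):

$ KUBECONFIG=kubeconfig helm repo add bitnami https://charts.bitnami.com/bitnami
$ KUBECONFIG=kubeconfig helm install my-nginx bitnami/nginx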

k0s admin notes

Back up the configuration of k0s:

# mkdir -p backup
# k0s backup --save-path backup

Restore the above:

# k0s restore backup/<file_name>

Check the system. Note that the IPv4/IPv6 conntrack/nat warnings are expected for kernels past 4.19 and 5.1 respectively:

# k0s sysinfo | grep -v pass

Add workers/controllers

Just increase controller_count or worker_count (see the example after the command below) and run:

$ SSH_KNOWN_HOSTS=/dev/null terraform apply -auto-approve
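
For example, a terraform.tfvars that grows the cluster to 3 controllers and 5 workers (counts illustrative):

controller_count = 3
worker_count     = 5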

Remove workers/controllers

Not supported right now.

Full Removal of ALL resources

The cloud isn't free and this is a PoC. Delete everything when done to avoid runaway costs:

$ SSH_KNOWN_HOSTS=/dev/null terraform apply -auto-approve -destroy

Tunables

See variables.md

TODO
