
Build a Kubernetes cluster using K3s via Ansible

Author: https://github.com/itwars
Current Maintainer: https://github.com/dereknola

Easily bring up a cluster on machines running:

  • Debian
  • Ubuntu
  • Raspberry Pi OS
  • RHEL Family (CentOS, Red Hat, Rocky Linux...)
  • SUSE Family (SLES, openSUSE Leap, Tumbleweed...)
  • Arch Linux

on processor architectures:

  • x64
  • arm64
  • armhf

System requirements

The control node must have Ansible 8.0+ (ansible-core 2.15+) installed.

All managed nodes in inventory must have:

  • Passwordless SSH access
  • Root access (or a user with equivalent permissions)

It is also recommended that all managed nodes disable firewalls and swap. See K3s Requirements for more information.
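For reference, disabling the firewall and swap on a managed node typically looks like the following (firewalld is shown as an example; Debian/Ubuntu hosts may use ufw instead, and the fstab edit is needed to keep swap off after a reboot):

# On each managed node, as root:
systemctl disable --now firewalld   # or: ufw disable
swapoff -a                          # turn swap off for the current boot
# Also comment out any swap entries in /etc/fstab so swap stays off after a reboot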

Usage

First, copy the sample inventory to inventory.yml:

cp inventory-sample.yml inventory.yml

Second, edit the inventory file to match your cluster setup. For example:

k3s_cluster:
  children:
    server:
      hosts:
        192.16.35.11:
    agent:
      hosts:
        192.16.35.12:
        192.16.35.13:

If needed, you can also edit the vars section at the bottom to match your environment.
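As an illustration, a vars section might pin the K3s version and pass extra server flags (the values below are placeholders; see inventory-sample.yml for the full list of supported variables):

k3s_cluster:
  ...
  vars:
    k3s_version: v1.30.2+k3s1   # placeholder; use the K3s release you want
    extra_server_args: ""       # extra flags passed to the k3s server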

If multiple hosts are in the server group, the playbook will automatically set up K3s in HA mode with embedded etcd. An odd number of server nodes is required (3, 5, 7, ...). Read the official documentation for more information.
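For example, an inventory with three server nodes (addresses are placeholders) results in an embedded-etcd HA cluster:

k3s_cluster:
  children:
    server:
      hosts:
        192.16.35.11:
        192.16.35.12:
        192.16.35.13:
    agent:
      hosts:
        192.16.35.14: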

Setting up a load balancer or VIP beforehand to use as the API endpoint is possible but not covered here.
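Before provisioning, you can optionally confirm that Ansible can reach every node in the inventory (this assumes passwordless SSH is already configured as described above):

ansible all -i inventory.yml -m ping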

Start provisioning of the cluster using the following command:

ansible-playbook playbooks/site.yml -i inventory.yml

Using an external database

If an external database is preferred, pass --datastore-endpoint as an extra server argument and set the use_external_database flag to true.

k3s_cluster:
  children:
    server:
      hosts:
        192.16.35.11:
        192.16.35.12:
    agent:
      hosts:
        192.16.35.13:

  vars:
    use_external_database: true
    extra_server_args: "--datastore-endpoint=postgres://username:password@hostname:port/database-name"

The use_external_database flag is required when more than one server is defined, as otherwise an embedded etcd cluster will be created instead.

The format of the datastore-endpoint parameter depends on the datastore backend; see the K3s datastore endpoint format documentation for details on the format and supported datastores.
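For reference, two illustrative endpoint strings (hostnames, ports, and credentials are placeholders; consult the K3s documentation for the exact syntax of your backend):

# PostgreSQL (as in the example above)
--datastore-endpoint=postgres://username:password@hostname:port/database-name

# MySQL / MariaDB
--datastore-endpoint=mysql://username:password@tcp(hostname:3306)/database-name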

Upgrading

A playbook is provided to upgrade K3s on all nodes in the cluster. To use it, update k3s_version with the desired version in inventory.yml and run:

ansible-playbook playbooks/upgrade.yml -i inventory.yml

Airgap Install

Airgap installation is supported via the airgap_dir variable. This variable should be set to the path of a directory containing the K3s binary and images. The release artifacts can be downloaded from the K3s Releases page. You must download the appropriate images for your architecture (any of the compression formats will work).

An example folder for an x86_64 cluster:

$ ls -lh ./playbooks/my-airgap/
total 248M
-rwxr-xr-x 1 $USER $USER  58M Nov 14 11:28 k3s
-rw-r--r-- 1 $USER $USER 190M Nov 14 11:30 k3s-airgap-images-amd64.tar.gz

$ cat inventory.yml
...
airgap_dir: ./my-airgap # Paths are relative to the playbooks directory

Additionally, if deploying on an OS with SELinux, you will also need to download the latest k3s-selinux RPM and place it in the airgap folder.

It is assumed that the control node has access to the internet. The playbook will automatically download the k3s install script on the control node, and then distribute all three artifacts to the managed nodes.
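As a sketch, fetching the two artifacts for an x86_64 cluster might look like this (the release tag is a placeholder, with "+" URL-encoded as %2B; adjust the image archive name for other architectures):

mkdir -p ./playbooks/my-airgap
K3S_VERSION="v1.30.2%2Bk3s1"   # placeholder release tag
curl -Lo ./playbooks/my-airgap/k3s \
  "https://github.com/k3s-io/k3s/releases/download/${K3S_VERSION}/k3s"
curl -Lo ./playbooks/my-airgap/k3s-airgap-images-amd64.tar.gz \
  "https://github.com/k3s-io/k3s/releases/download/${K3S_VERSION}/k3s-airgap-images-amd64.tar.gz"
chmod +x ./playbooks/my-airgap/k3s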

Kubeconfig

After successful bringup, the kubeconfig of the cluster is copied to the control node and merged with ~/.kube/config under the k3s-ansible context. Assuming you have kubectl installed, you can confirm access to your Kubernetes cluster with the following:

kubectl config use-context k3s-ansible
kubectl get nodes

If you wish for your kubeconfig to be copied elsewhere and not merged, you can set the kubeconfig variable in inventory.yml to the desired path.
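For example (the path is a placeholder):

k3s_cluster:
  ...
  vars:
    kubeconfig: ~/my-cluster/kubeconfig.yaml   # placeholder path; the file is copied here instead of being merged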

Local Testing

A Vagrantfile is provided that provisions a 5-node cluster using Vagrant (LibVirt or VirtualBox as the provider). To use it:

vagrant up

By default, each node is given 2 cores and 2GB of RAM and runs Ubuntu 20.04. You can customize these settings by editing the Vagrantfile.
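If both providers are installed, you can select one explicitly with the standard Vagrant flag:

vagrant up --provider=libvirt    # or --provider=virtualbox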

Need More Features?

This project is intended to provide a "vanilla" K3s install. If you need more features, such as:

  • Private Registry
  • Advanced Storage (Longhorn, Ceph, etc)
  • External Database
  • External Load Balancer or VIP
  • Alternative CNIs

See these other projects:
