- 1 controller (I used the first numbered runner in each environment)
- X nodes to do the work
ℹ️ All nodes must have at least 2 CPUs!
- Create a new RHEL 7 VM
- Join to IDM
- Run yum updates and reboot
- Run this Ansible playbook and it'll do the setup steps for all nodes.
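That playbook isn't reproduced here, but a minimal sketch of what it could cover looks something like this (abridged, and assuming Ansible 2.9-era short module names on RHEL 7; it is an illustration, not the actual playbook):

```
# setup-nodes.yml (sketch) - an abridged illustration of the per-node
# setup steps described below, not the actual playbook.
- hosts: all
  become: true
  tasks:
    - name: Set SELinux to permissive
      selinux:
        policy: targeted
        state: permissive

    - name: Enable the Extras and Optional repos
      rhsm_repository:
        name:
          - rhel-7-server-extras-rpms
          - rhel-7-server-optional-rpms

    - name: Download the Docker CE repo
      get_url:
        url: https://download.docker.com/linux/centos/docker-ce.repo
        dest: /etc/yum.repos.d/docker-ce.repo

    - name: Install Docker CE
      yum:
        name: docker-ce
        state: present

    - name: Create the docker group
      group:
        name: docker

    - name: Create the runner user in the docker group
      user:
        name: runner
        groups: docker
        append: true

    - name: Use the systemd cgroup driver
      copy:
        content: '{ "exec-opts": ["native.cgroupdriver=systemd"] }'
        dest: /etc/docker/daemon.json

    - name: Enable and start Docker
      systemd:
        name: docker
        enabled: true
        state: started

    # ...and so on for the Kubernetes repo and packages, the sysctl
    # settings, swap, and firewalld steps described below.
```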
- (all nodes) Create however many VMs you need, plus 1 for a controller. Join them to IDM, install VMware Tools, run updates, reboot, etc. These should probably be reasonably beefy VMs.
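  The updates-and-reboot part of that is the usual:

  ```
  sudo yum update -y
  sudo systemctl reboot
  ```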
- (all nodes) Set SELinux to permissive by editing `/etc/selinux/config`. (TODO - Actually write a policy to prevent having to do this.)

  ```
  # This file controls the state of SELinux on the system.
  # SELINUX= can take one of these three values:
  #     enforcing - SELinux security policy is enforced.
  #     permissive - SELinux prints warnings instead of enforcing.
  #     disabled - No SELinux policy is loaded.
  SELINUX=permissive
  # SELINUXTYPE= can take one of three values:
  #     targeted - Targeted processes are protected,
  #     minimum - Modification of targeted policy. Only selected processes are protected.
  #     mls - Multi Level Security protection.
  SELINUXTYPE=targeted
  ```
- (all nodes) Enable the Extras and Optional repos.

  ```
  sudo subscription-manager repos --enable=rhel-7-server-extras-rpms
  sudo subscription-manager repos --enable=rhel-7-server-optional-rpms
  ```
- (all nodes) Download the Docker CE repo.

  ```
  cd /etc/yum.repos.d/
  sudo wget https://download.docker.com/linux/centos/docker-ce.repo
  ```
- (all nodes) Install Docker CE.

  ```
  sudo yum install docker-ce -y
  ```
- (all nodes) Create a docker group so that users can run docker without superuser rights.

  ```
  sudo groupadd docker
  ```
- (all nodes) Create a runner user and add it to the docker group.

  ```
  sudo useradd -mU runner
  sudo usermod -aG docker runner
  ```
- (all nodes) Change the cgroup driver by creating `/etc/docker/daemon.json` with the following contents:

  ```
  {
    "exec-opts": ["native.cgroupdriver=systemd"]
  }
  ```
- (all nodes) Enable and start the Docker service.

  ```
  sudo systemctl enable docker.service --now
  ```
- (all nodes) Create the repo file for Kubernetes at `/etc/yum.repos.d/kubernetes.repo` with the following contents:

  ```
  [kubernetes]
  name=Kubernetes
  baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
  enabled=1
  gpgcheck=1
  repo_gpgcheck=1
  gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
  ```
- (all nodes) Install Kubernetes.

  ```
  sudo yum install kubelet kubeadm kubectl -y
  ```
- (all nodes) Let iptables see bridged traffic by making a file at `/etc/sysctl.d/k8s.conf` with the following content:

  ```
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  ```
- (all nodes) Reload the sysctl settings.

  ```
  sudo sysctl --system
  ```
- (all nodes) Turn off swap.

  ```
  sudo swapoff -a
  ```
- (all nodes) Make that stick by editing `/etc/fstab` and commenting out the swap line.
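  Once commented out, the swap entry will look something like this (the device path will vary by machine):

  ```
  # /dev/mapper/rhel-swap   swap   swap   defaults   0 0
  ```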
- (all nodes) Turn off firewalld.

  ```
  sudo systemctl disable firewalld
  sudo systemctl stop firewalld
  ```
- (all nodes) Enable the kubelet service.

  ```
  sudo systemctl enable kubelet.service
  ```
- (controller) Run all the preflight checks and initialize.

  ⚠️ The pod network CIDR `10.244.0.0/16` is required for Flannel! ⚠️

  ```
  sudo kubeadm init --pod-network-cidr=10.244.0.0/16
  ```
- (controller) Copy the admin kubeconfig so the runner user can manage the cluster.

  ```
  sudo mkdir -p /home/runner/.kube
  sudo cp -i /etc/kubernetes/admin.conf /home/runner/.kube/config
  sudo chown -R runner:runner /home/runner
  ```
- (controller) Apply Flannel for networking.

  ```
  sudo su - runner
  kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  ```
- (controller) Copy/paste the joining token for later. It'll look something like what's in the next step.
- (controller) Install Helm.

  ```
  curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
  sudo bash get_helm.sh
  ```
- (worker nodes) Run the join command on each worker node.

  ```
  sudo kubeadm join IPADDRESSHERE:6443 --token WEIRDLONGTOKEN \
      --discovery-token-ca-cert-hash sha256:SHASUMGOESHERE
  ```
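ℹ️ If that token has scrolled away or expired (they only last 24 hours by default), you can print a fresh join command on the controller:

```
sudo kubeadm token create --print-join-command
```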
- (controller) Verify the node has joined by running the command below:

  ```
  $ kubectl get nodes
  NAME                STATUS   ROLES                  AGE   VERSION
  dev-runner01.fqdn   Ready    control-plane,master   32m   v1.21.3
  dev-runner02.fqdn   Ready    <none>                 25s   v1.21.3
  ```

ℹ️ All of these commands are run as the `runner` user.
- Install and set up cert-manager (check the version).

  ```
  kubectl create namespace cert-manager
  helm repo add jetstack https://charts.jetstack.io
  helm repo update
  helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.5.4 --set installCRDs=true
  ```
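  It's worth making sure the cert-manager pods are all Running before moving on; something like:

  ```
  kubectl get pods --namespace cert-manager
  ```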
- Install and set up the Actions controller (check the version).

  ```
  kubectl create namespace actions-runner-system
  helm repo add actions-runner-controller https://actions-runner-controller.github.io/actions-runner-controller
  helm repo update
  helm install -n actions-runner-system actions-runner-controller actions-runner-controller/actions-runner-controller --version=0.13.2
  ```
- Point the controller at the GitHub Enterprise Server instance in the appropriate environment.

  ```
  kubectl set env deploy actions-runner-controller -c manager GITHUB_ENTERPRISE_URL=https://HOSTNAME --namespace actions-runner-system
  ```
- Set up the secret to control the runners. This token must have the `admin:enterprise` scope. It does not require any other scope.

  ```
  kubectl create secret generic controller-manager -n actions-runner-system --from-literal=github_token=PATGOESHERE
  ```
- Add namespaces for the runners to live in.

  ```
  kubectl create namespace runners
  kubectl create namespace test-runners
  ```
- Add the secret to pull the container image from GitHub Packages. Insert credentials as appropriate.

  ```
  kubectl create secret docker-registry ghe -n runners --docker-server=https://docker.HOSTNAME --docker-username=ghe-username --docker-password=ghe-token --docker-email=youremail@domain.com
  kubectl create secret docker-registry ghe -n test-runners --docker-server=https://docker.HOSTNAME --docker-username=ghe-username --docker-password=ghe-token --docker-email=youremail@domain.com
  ```
- Actually deploy the runners using any one of these files. This could take some time depending on whether you need to pull the image.

  ```
  kubectl apply -f runnerdeployment.yml
  ```
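Those deployment files aren't shown here, but a minimal `RunnerDeployment` for an enterprise-level runner might look something like this sketch (the enterprise name, image path, and replica count are placeholders, not values from this environment):

```
# runnerdeployment.yml (sketch) - ENTERPRISENAME and the image path are
# placeholders. The enterprise field registers these runners at the
# enterprise level, which is what the admin:enterprise PAT above allows.
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: enterprise-runners
  namespace: runners
spec:
  replicas: 2
  template:
    spec:
      enterprise: ENTERPRISENAME
      # A custom runner image hosted in GitHub Packages, pulled with the
      # "ghe" secret created in the previous step.
      image: docker.HOSTNAME/ORGNAME/REPONAME/runner-image:latest
      imagePullSecrets:
        - name: ghe
```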