This document walks you through setting up a local Kubernetes (K8s) environment on a Linux machine for GPU-enabled machine learning and artificial intelligence workloads using MicroK8s. Before you begin, make sure your system meets the following requirements:
- A Linux-based operating system.
- Snap package manager installed.
- A compatible GPU with the necessary drivers installed.
MicroK8s is a lightweight Kubernetes distribution that can run on a local machine. To install MicroK8s, use the following command:
snap install microk8s --classic
(These commands are commented out in the provided script. Uncomment if needed.)
# microk8s.start
# watch microk8s status
Enable the addons this guide relies on: Helm 3, DNS, the community addon repository, hostpath storage, Ingress, RBAC, and the metrics server:
microk8s enable helm3 dns community hostpath-storage ingress rbac metrics-server
MetalLB is a load-balancer implementation for bare-metal Kubernetes clusters. To enable it, set the dhcpList environment variable to the IP address range MetalLB may hand out, then run:
export dhcpList=<YOUR-DHCP-RANGE>
[ ! -z "${dhcpList}" ] && microk8s enable metallb:${dhcpList}
Replace <YOUR-DHCP-RANGE> with an unused IP range on your local network, ideally outside your router's DHCP pool so the addresses don't collide.
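Before enabling the addon, it can help to sanity-check that the range is in the `a.b.c.d-w.x.y.z` form MetalLB expects. A minimal sketch (the range shown is a hypothetical example, not a recommendation):

```shell
# Sanity-check the MetalLB range format before enabling the addon.
# The range below is a hypothetical example; substitute your own.
dhcpList="192.168.1.240-192.168.1.250"

range_re='^([0-9]{1,3}\.){3}[0-9]{1,3}-([0-9]{1,3}\.){3}[0-9]{1,3}$'
if echo "${dhcpList}" | grep -Eq "${range_re}"; then
  echo "range format OK: ${dhcpList}"
  # microk8s enable metallb:${dhcpList}
else
  echo "invalid range format: ${dhcpList}" >&2
  exit 1
fi
```

This only validates the shape of the string, not whether the addresses are actually free on your network.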
To enable GPU support in MicroK8s, execute:
microk8s enable gpu
Kubectl is the command-line tool for Kubernetes. Either install it as a standalone snap, or uncomment the alias line to use the kubectl bundled with MicroK8s instead:
snap install kubectl --classic
#snap alias microk8s.kubectl kubectl
Helm is a package manager for Kubernetes. MicroK8s bundles Helm 3 via the helm3 addon enabled earlier; set up an alias for ease of use:
snap alias microk8s.helm3 helm
To interact with your Kubernetes cluster, set up the KUBECONFIG environment variable and export the MicroK8s configuration:
export KUBECONFIG=${HOME}/.kube/config
mkdir -p ${HOME}/.kube
microk8s.kubectl config view --raw > $KUBECONFIG
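Because this file contains cluster-admin credentials, it is worth restricting it to your user. A minimal sketch, assuming KUBECONFIG was exported as above (the fallback path mirrors that export):

```shell
# Assumes KUBECONFIG was exported as above; fall back to the default path.
KUBECONFIG="${KUBECONFIG:-${HOME}/.kube/config}"
mkdir -p "$(dirname "${KUBECONFIG}")"
touch "${KUBECONFIG}"            # no-op if the file already exists
chmod 600 "${KUBECONFIG}"        # owner read/write only
stat -c '%a' "${KUBECONFIG}"     # prints 600
```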
Finally, create a new namespace named infra-root and set it as the default namespace for your kubectl context:
kubectl create namespace infra-root
kubectl config set-context --current --namespace=infra-root
After completing these steps, your Kubernetes cluster should be up and running with GPU support. You can now deploy GPU-accelerated ML/AI applications to your cluster.
To ensure that everything is set up correctly, run the following command to check the status of your nodes and enabled addons:
microk8s status --wait-ready
Also, verify that your GPU is recognized by the cluster:
kubectl get nodes "-o=custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"
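As a further check, you can schedule a short-lived pod that requests a GPU and runs nvidia-smi. This is a sketch: the pod name is arbitrary, and the CUDA image tag is an assumption that may need adjusting to match your driver version.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test       # hypothetical name
  namespace: infra-root
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04   # adjust to your driver
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
```

Apply it with kubectl apply -f, then kubectl logs gpu-smoke-test should print the nvidia-smi device table; clean up afterwards with kubectl delete pod gpu-smoke-test.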
If you encounter any issues during installation, check the following:
- Ensure your GPU drivers are correctly installed and compatible with your Kubernetes version.
- Verify that all commands are executed with proper permissions; use sudo if required.
- Consult the MicroK8s documentation for troubleshooting specific to the addons.
For more information on MicroK8s, see the official MicroK8s documentation.
For more details on Kubernetes, refer to the official Kubernetes documentation.
You now have a local Kubernetes cluster powered by MicroK8s with GPU support, ready for running machine learning and artificial intelligence workloads.