The DigitalOcean Kubernetes Challenge is to deploy a GitOps CI/CD implementation.
The first step was creating a DigitalOcean account, which comes with a $100 free credit valid for two months.
To provision the infrastructure, i.e. the DigitalOcean managed Kubernetes cluster (DOKS), and to deploy Argo CD on top of it, I have used HashiCorp's Terraform, an open-source Infrastructure as Code (IaC) tool.
To store the state file remotely, I have configured an S3-compatible backend backed by a DigitalOcean Space:
terraform {
  backend "s3" {
    endpoint                    = "ams3.digitaloceanspaces.com" # Spaces endpoint in the AMS3 region
    region                      = "eu-west-1"                   # a valid AWS region name is required by the S3 backend; the actual location is set by the endpoint
    bucket                      = "terraform-state"
    key                         = "terraform.tfstate"
    skip_credentials_validation = true                          # Spaces does not support the AWS credential validation calls
  }
}
To initialize the backend:
terraform init -backend-config="access_key=$access_key" -backend-config="secret_key=$secret_key"
Both the access key and the secret key are Spaces access keys, which are different from the personal access token (PAT). The latter is also required by the DigitalOcean Terraform provider.
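For completeness, a minimal sketch of the provider configuration, assuming the PAT is passed in through a variable named do_token (a name of my own choosing):

terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean" # can live in the same terraform block as the backend above
    }
  }
}

variable "do_token" {
  type      = string
  sensitive = true
}

provider "digitalocean" {
  token = var.do_token
}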
To interact with the DigitalOcean API via the command line, I have installed doctl, the official DigitalOcean CLI, on Windows following this guide.
To authenticate, I have created a new personal access token in the DigitalOcean portal.
To supply the arguments of the DigitalOcean Kubernetes cluster resource with values, I have run the following doctl commands to retrieve them, as shown in the GIF below:
doctl auth init --context $CONTEXT # and enter the PAT
doctl auth list # lists the contexts
doctl auth switch --context $CONTEXT # switches to the recently created context
doctl account get # retrieves accounts details
doctl compute region list # lists the available regions such as ams3, fra1, etc.
doctl kubernetes options versions # lists the supported k8s versions
doctl kubernetes options sizes # lists the available node sizes
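Those values then feed the DOKS cluster resource; a minimal sketch, where the cluster name, region, version, and node pool sizing are example values rather than the exact ones I used:

resource "digitalocean_kubernetes_cluster" "doks" {
  name    = "doks-gitops"
  region  = "ams3"        # from doctl compute region list
  version = "1.21.5-do.0" # from doctl kubernetes options versions

  node_pool {
    name       = "default-pool"
    size       = "s-2vcpu-2gb" # from doctl kubernetes options sizes
    node_count = 2
  }
}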
To fully automate the process, I have set up a workflow in GitHub Actions. The sensitive values, such as the token and the secret key, have been stored as secrets in GitHub.
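A trimmed-down sketch of such a workflow; the trigger, the secret names, and the step layout are assumptions rather than the exact workflow:

name: terraform
on:
  push:
    branches: [main]
jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: hashicorp/setup-terraform@v1
      - name: Terraform init
        run: terraform init -backend-config="access_key=${{ secrets.SPACES_ACCESS_KEY }}" -backend-config="secret_key=${{ secrets.SPACES_SECRET_KEY }}"
      - name: Terraform apply
        run: terraform apply -auto-approve -var "do_token=${{ secrets.DO_TOKEN }}"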
Terraform outputs the Kubernetes cluster ID, which is required to add the cluster's credentials to the local kubeconfig file. To do so:
doctl kubernetes cluster kubeconfig save <cluster ID>
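The output itself is a one-liner referencing the cluster resource (here the hypothetical digitalocean_kubernetes_cluster.doks from the sketch above):

output "cluster_id" {
  value = digitalocean_kubernetes_cluster.doks.id
}

With Terraform 0.14 or later, the two steps can be combined: doctl kubernetes cluster kubeconfig save $(terraform output -raw cluster_id).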
To automate the application build process, I have set up a workflow that builds the Docker image and pushes it to Docker Hub.
This workflow also bumps the version of the Docker image in the Kubernetes deployment manifest using sed upon a successful merge of a Pull Request (PR) into the main branch.
This is particularly important for the GitOps approach given that the GitOps agent monitors the git repository as the single source of truth and synchronizes the desired state into the Kubernetes cluster.
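The bump itself can be a single sed step in the workflow; a sketch, assuming the manifest lives at k8s/deployment.yaml, the image is myuser/simple-app, and the tag is the short commit SHA (all hypothetical):

- name: Bump image tag in deployment manifest
  run: |
    # replace whatever tag is currently referenced with the short SHA of the merge commit
    sed -i "s|image: myuser/simple-app:.*|image: myuser/simple-app:${GITHUB_SHA::7}|" k8s/deployment.yaml
    git config user.name "github-actions"
    git config user.email "github-actions@github.com"
    git commit -am "Bump simple-app image to ${GITHUB_SHA::7}"
    git push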
To continuously deploy the latest version of the application into the Kubernetes cluster, I have used Argo CD, a widely used open-source GitOps operator for Kubernetes.
GitOps is a set of principles, practices, and tools to manage infrastructure and application delivery using a developer-friendly tool, Git.
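Argo CD itself was installed on the cluster by Terraform, as mentioned earlier; one common way to do that, and the one assumed in this sketch, is the community argo-cd Helm chart via the Helm provider (which still needs to be pointed at the DOKS cluster credentials):

resource "helm_release" "argocd" {
  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  namespace        = "argocd"
  create_namespace = true
}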
To log in to the Argo CD server:
export password=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
argocd login $serverUrl --username=admin --password=$password
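If Argo CD is not exposed through a LoadBalancer or an ingress, port-forwarding the server service is enough for a quick login, with localhost:8080 as the server URL (the service name below assumes a default installation and may differ):

kubectl -n argocd port-forward svc/argocd-server 8080:443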
Once logged in, an application can be deployed with Argo CD. To do so, I have configured an application in a YAML manifest and have run the following Argo CD CLI command to create it:
argocd app create -f argocd/simple-app.yaml
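For reference, a sketch of what argocd/simple-app.yaml could contain; the repository URL, path, and target namespace below are placeholders rather than the real ones:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: simple-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<user>/<repo>.git # placeholder repository
    targetRevision: main
    path: k8s                                     # folder holding the deployment manifest
  destination:
    server: https://kubernetes.default.svc
    namespace: simple-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

The automated sync policy is what lets Argo CD roll out a new image tag without any manual sync.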
To check the application status using Argo CD CLI:
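argocd app get simple-app # assuming the application is named simple-app, as in the manifest sketch above
argocd app list # or list all applications with their sync and health status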
It is also possible to use the UI as shown in the GIF below:
Argo CD updates the application once a new version has been released, i.e. once the Docker image tag in the deployment manifest has been bumped after a PR gets merged.
I have also recorded a walkthrough video showing the aforementioned steps.