This org file is intended to capture and automate the end-to-end workflow for deploying an instance of Kimai on Google Cloud Platform.
We’ll use shell blocks inside this file which can be executed with Babel or tangled out to files within this repository, so that the scripts can then be executed manually or by a continuous delivery pipeline.
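As a rough sketch of how that works, each runnable block in this file carries an org-babel header that names where it tangles to; the path below is purely illustrative and is not one of the files used later in this workflow.
#+begin_src shell :tangle scripts/example.sh
# Executed in place with C-c C-c, or written out to scripts/example.sh by org-babel-tangle
echo "hello from an org babel block"
#+end_src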
Notes:
- To interact with this org file we’re using the Humacs distribution of Emacs.
- This workflow has only been tested on the Ubuntu 20.04 Linux distribution, via WSL 2.
- To run manually you can just click the “Open In Google Cloud Shell” button above.
To automate our interactions with Google Cloud Platform we’ll use the GCP SDK, which provides us with a number of command line tools to interact with the platform, such as gcloud, gsutil and kubectl.
Tangle the shell block below to a shell script by pressing , b t in emacs command mode:
# Add the Cloud SDK distribution URI as a package source
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee /etc/apt/sources.list.d/google-cloud-sdk.list
# Make sure apt-transport-https is installed
sudo apt-get install -y apt-transport-https ca-certificates gnupg
# Import the Google Cloud public key
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
# Update and install the SDK
sudo apt-get update && sudo apt-get install -y google-cloud-sdk kubectl
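Before moving on, an optional check that the tools installed correctly is to print their versions:
gcloud version
kubectl version --client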
With the GCP SDK now installed we need to authenticate, create a project, and then create an Autopilot kubernetes cluster that we will install Kimai into later in the workflow.
First up is authentication, so that our GCP SDK installation can carry out actions in a given account and project. This is currently a manual step, as the login flow includes some interactive prompts.
In future we could automate this as part of a continuous delivery pipeline, using a GCP service account with permission to create the required resources.
gcloud auth login
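Once the browser-based login completes, you can confirm which account the SDK will act as with:
gcloud auth list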
Once we have authenticated we can create a project and then create a new kubernetes cluster within that project.
First, let’s create a new project. A project is the logical boundary that all the cloud resources for this deployment will live within. To be able to deploy resources we also need to enable billing on the project.
Tangle the shell block below to a shell script by pressing , b t in emacs command mode:
# Set the project id for this deployment
export gcp_project_id="kimai-gcp"
# Create the new project
gcloud projects create $gcp_project_id
# Ensure billing is enabled for the project
export gcp_billing_account=$(gcloud alpha billing accounts list --limit=1 --format='value(name.basename())')
gcloud alpha billing projects link $gcp_project_id --billing-account $gcp_billing_account
# Make sure the project is set active
gcloud config set project $gcp_project_id
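Neither step prints much on success, so an optional sanity check is to describe the new project and confirm it is now the active configuration:
gcloud projects describe $gcp_project_id
gcloud config get-value project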
Once we have a project we can create a new cluster. Before creating the cluster we need to ensure the Kubernetes Engine API is enabled.
Tangle the shell block below to a shell script by pressing , b t in emacs command mode:
# Ensure the Kubernetes Engine API is enabled in the project
gcloud services enable container.googleapis.com
# Create the new Autopilot cluster
gcloud container clusters create-auto kimai-gcp --region australia-southeast1
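Autopilot clusters can take several minutes to provision. If you want to confirm the cluster is up before continuing, these optional checks list the clusters in the project and report the status of the new one:
gcloud container clusters list
gcloud container clusters describe kimai-gcp --region australia-southeast1 --format='value(status)'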
Now that we have a cluster running, we need to deploy our Kimai database and application to it.
Tangle the code blocks below to create the kubernetes resource yaml files and the shell script that will apply those resources to our cluster.
The first file is the database deployment, which will create a MySQL database pod. Adjust the credentials to suit your environment.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kimai-db
  labels:
    app: kimai-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kimai-db
  template:
    metadata:
      labels:
        app: kimai-db
    spec:
      containers:
        - name: mysql
          image: mysql:latest
          ports:
            - containerPort: 3306
          resources:
            limits:
              cpu: "500m"
              memory: "1Gi"
            requests:
              cpu: "250m"
              memory: "512Mi"
          env:
            - name: MYSQL_DATABASE
              value: kimai
            - name: MYSQL_USER
              value: kimai
            - name: MYSQL_PASSWORD
              value: kimai
            - name: MYSQL_ROOT_PASSWORD
              value: kimai
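The credentials above are hard-coded for simplicity. If you would rather keep them out of the repository, one option (not part of the tangled files above) is to store them in a Kubernetes Secret and reference it from the deployment; the kimai-db-credentials name below is just an illustrative choice.
---
apiVersion: v1
kind: Secret
metadata:
  name: kimai-db-credentials
type: Opaque
stringData:
  MYSQL_PASSWORD: change-me
  MYSQL_ROOT_PASSWORD: change-me
With a secret in place, the password entries in the deployment env could use valueFrom with a secretKeyRef pointing at kimai-db-credentials instead of a literal value.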
The second file creates the application deployment, which will create a second pod in the cluster running the Kimai application itself. Remember to adjust the TRUSTED_HOSTS environment variable to suit your needs.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kimai-app
  labels:
    app: kimai-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kimai-app
  template:
    metadata:
      labels:
        app: kimai-app
    spec:
      containers:
        - name: kimai
          image: kimai/kimai2:apache
          ports:
            - containerPort: 8001
          resources:
            limits:
              cpu: "500m"
              memory: "512Mi"
            requests:
              cpu: "250m"
              memory: "256Mi"
          env:
            - name: DATABASE_URL
              value: mysql://kimai:kimai@kimai-db:3306/kimai
            - name: TRUSTED_HOSTS
              value: 127.0.0.1,localhost,kimai.asterion.digital
The third kubernetes resource file creates the kubernetes service that will allow our Kimai application pod to talk to our MySQL database.
---
apiVersion: v1
kind: Service
metadata:
  name: kimai-db
  labels:
    app: kimai-db
spec:
  selector:
    app: kimai-db
  ports:
    - name: "mysql"
      protocol: "TCP"
      port: 3306
      targetPort: 3306
The fourth file creates the kubernetes LoadBalancer service that we will use to access our application externally over the internet.
---
apiVersion: v1
kind: Service
metadata:
  name: kimai-app
  labels:
    app: kimai-app
spec:
  selector:
    app: kimai-app
  ports:
    - name: "kimai"
      protocol: "TCP"
      port: 8001
      targetPort: 8001
  type: LoadBalancer
The final file we will tangle is the shell script that will retrieve the credentials for our kubernetes cluster and then apply our resources.
# Define where kubeconfig file will be stored
export KUBECONFIG=/home/$USER/.kube/config
# Retrieve credentials for the cluster
gcloud container clusters get-credentials kimai-gcp --region australia-southeast1
# Apply the application deployment yaml
kubectl apply -f config/app-deployment.yaml
kubectl apply -f config/db-deployment.yaml
kubectl apply -f config/db-service.yaml
kubectl apply -f config/app-service.yaml
# Create the admin account to login to Kimai
podname=$(kubectl get pods -o=name | grep kimai-app | sed "s/^.\{4\}//")
kubectl exec -ti $podname -- /opt/kimai/bin/console kimai:create-user admin admin@example.com ROLE_SUPER_ADMIN
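After the script runs it can take a few minutes for GKE to schedule the pods and provision the external load balancer. These optional commands show when the pods are ready and which external IP the kimai-app service has been assigned:
kubectl get pods
kubectl get service kimai-app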
The Google Cloud Platform resources created by this process come at a cost, so it’s important we have an easy way to teardown those resources as soon as we’re finished with them!
The script below will find any projects containing kimai in the name and delete any kubernetes clusters running in those projects. The gcloud projects delete command at the end is commented out; uncomment it if you also want to remove the projects themselves.
Tangle the shell block below to a shell script by pressing , b t in emacs command mode:
# Iterate over any matching projects
for project in $(gcloud projects list | awk '{ print $1 }' | grep kimai); do
  # Iterate over any clusters in the project
  for cluster in $(gcloud container clusters list --project $project --format="value(name)"); do
    # Delete the cluster (Autopilot clusters are regional, so use --region)
    gcloud container clusters delete --quiet $cluster --region australia-southeast1 --project $project
  done
  # Delete the project as well (uncomment to enable)
  #gcloud projects delete $project --quiet
done
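To confirm the teardown worked, the same listings run again should come back empty (apart from any projects you chose not to delete):
gcloud container clusters list
gcloud projects list | grep kimai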