This is an example of a Corteza deployment on a managed Kubernetes cluster.
Note: Since managed Kubernetes offerings vary from one cloud provider to another, some additional per-provider configuration is needed.
Important: This is a proof-of-concept Kubernetes configuration for a single-pod Corteza instance running with a default PostgreSQL installation.
Not included:

- proper scaling support for the Corteza server
- a database operator or stateful set (the database runs as an ephemeral store)
- MinIO support
- TLS (or Let's Encrypt)
- Corredor (it should be set up as a sidecar container)
- Kubernetes 1.26.0
- Calico CNI
- Nginx ingress (https://kubernetes.github.io/ingress-nginx/)
- Corteza server (with web apps) 2022.9.5
- PostgreSQL 14.5
One or more worker nodes and a control plane with a configured networking plugin are required. kubectl needs to be set up locally in order to manage the cluster.
# alias kubectl
alias k=kubectl

# merge the new cluster's kubeconfig into the local config
# (write to a temporary file first; redirecting straight into
# ~/.kube/config would truncate it before it is read)
KUBECONFIG=~/.kube/config:~/my-test-cluster.kubeconfig k config view --flatten > /tmp/config.merged
mv /tmp/config.merged ~/.kube/config
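A note on the redirection: when a shell redirection targets a file that the command also reads, the shell truncates the file before the command runs, which is why it is safer to write the merged kubeconfig to a temporary file first. A throwaway illustration:

```shell
# demonstrate the pitfall with a disposable file
printf 'a\nb\n' > /tmp/demo.conf

# the shell truncates the target *before* sort reads it,
# so the original contents are lost
sort /tmp/demo.conf > /tmp/demo.conf

wc -c < /tmp/demo.conf   # -> 0
```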
Raise the inotify (file watch) limits on each of the nodes.
Important: The sysctl limits need to be raised, otherwise Kubernetes starts failing.
Note: You can use the ssh kubectl plugin to access the nodes.
# fetch the first node (repeat for every node)
NODE=$(k get nodes -o jsonpath='{.items[0].metadata.name}')

# ssh to the node
k ssh node ${NODE}

# raise the limits for the running kernel
sysctl fs.inotify.max_user_instances=8192
sysctl fs.inotify.max_user_watches=524288

# persist them across reboots (append, do not overwrite)
echo 'fs.inotify.max_user_instances=8192' >> /etc/sysctl.conf
echo 'fs.inotify.max_user_watches=524288' >> /etc/sysctl.conf
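Instead of echoing into /etc/sysctl.conf, the limits can also live in their own drop-in file, which keeps them easy to review and remove. A minimal sketch (written to a temporary path here for illustration; on a real node the file would go to `/etc/sysctl.d/90-inotify.conf` and be applied with `sysctl --system`):

```shell
# stand-in path for illustration; use /etc/sysctl.d/90-inotify.conf on the node
conf=/tmp/90-inotify.conf

# write both limits as a self-contained drop-in
cat > "$conf" <<'EOF'
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=524288
EOF

# sanity-check: both settings present
grep -c '^fs\.inotify' "$conf"   # -> 2
```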
# db volume
mkdir /mnt/data
# corteza volume
mkdir /mnt/corteza
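With the directories in place, the pod can consume them through hostPath persistent volumes. A minimal sketch, assuming hypothetical volume names (`corteza-db-pv`, `corteza-server-pv`) and illustrative sizes; actual capacity, access modes, and storage class depend on your provider:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: corteza-db-pv          # hypothetical name
spec:
  capacity:
    storage: 5Gi               # illustrative size
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data            # the database directory created above
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: corteza-server-pv      # hypothetical name
spec:
  capacity:
    storage: 5Gi               # illustrative size
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/corteza         # the Corteza server directory created above
```

Apply the manifests with `kubectl apply -f <file>` and bind them from the pod spec with matching persistent volume claims.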