This information shows you how to deploy OpenRMF to a Kubernetes instance. I have used this for manually deploying to AWS EKS (outside of ingress) as well as to minikube locally.
To generate your YAML use the Helm Chart after making your namespace and persistent volume.
I use a named profile so I can try things out; I run Minikube like this in a .sh file:
minikube start \
--vm-driver virtualbox --disk-size 40GB \
--cpus 3 --memory 8096 --profile openrmf
Run `kubectl config set-context openrmf --namespace=openrmf`, where the `openrmf` after `set-context` is the named Minikube profile you are using.
You have a couple of choices when you wish to expose your application endpoints out of k8s to your local computer with Minikube. They are outlined below.
For a regular k8s setup you would have an ingress controller like NGINX, Traefik, or even HAProxy to help match ingress rules to services in a similar manner.
Follow the information at https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/ to enable the Minikube ingress addon (`minikube addons enable ingress`) and then expose your pod HTTP path with a /path extension off the main Minikube IP (see the sketch after this list).
- Each service YAML has an ingress.yaml that goes with it, which maps a /* path and a hostname off the Minikube IP to that service, getting you into the pod.
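As a rough sketch, assuming the named `openrmf` profile from above (the namespace the ingress controller lands in varies by Minikube version, so adjust the `-n` flag accordingly):

```sh
# enable the NGINX ingress addon for the named profile
minikube addons enable ingress --profile openrmf

# verify the ingress controller pod came up
# (newer Minikube versions place it in ingress-nginx; older ones use kube-system)
kubectl get pods -n ingress-nginx
```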
Follow the directions at https://github.com/elsonrodriguez/minikube-lb-patch to get external IPs.
- Make sure you have `jq`, and if not run `brew install jq` or the equivalent on a Linux box (see the quick check after this list).
- Make sure the script that uses the minikube profile has the correct path if you have a named profile for Minikube.
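A quick check for the `jq` prerequisite (the Homebrew command is macOS-only; use your distribution's package manager on Linux):

```sh
# install jq only if it is not already on the PATH
command -v jq >/dev/null 2>&1 || brew install jq
```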
There is a hidden gem at https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md on how to set up the ingress controllers, especially if you have sub paths. The example below has a $2 in the rewrite target. This means "add the extra stuff on the end". So in this example, if I call http://openrmf.local/controls/healthz/ it will add the /healthz/ to the root of the API call internally. Otherwise it was always just dropping it and calling the root no matter what "sub path" of the URL I was calling.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: openrmf-controls-ingress
  namespace: openrmf
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, OPTIONS, POST, PUT, DELETE"
spec:
  rules:
  - host: openrmf.local
    http:
      paths:
      - path: /controls(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: openrmf-controls
            port:
              number: 8080
```
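Once the ingress and your /etc/hosts entry are in place, you can watch the rewrite work from your workstation; the /healthz/ sub path here is the same example as above:

```sh
# the /controls prefix is stripped by the rewrite target and /healthz/ is passed
# through to the root of the openrmf-controls service
curl http://openrmf.local/controls/healthz/
```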
Using the ingress information below, where the "host" and the "path" pieces come into play, you can for instance go to http://openrmf.local/web/ and get the web UI. The web.yaml file in here has a ConfigMap that rewrites the API path variables the web UI uses to call the READ or CONTROLS or COMPLIANCE or UPLOAD API and return the required data. All of that hinges on the ConfigMap in web.yaml, the *ingress.yaml files, and your /etc/hosts file pointing to the correct IP for "openrmf.local". You can of course change the name, but change it everywhere. And the "IP" to use is found by running `minikube ip` if you are using Minikube locally as a "kubernetes cluster of 1" (see the /etc/hosts example after the snippet below).
```yaml
spec:
  rules:
  - host: openrmf.local
    http:
      paths:
      - path: /web(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: openrmf-compliance
            port:
              number: 8080
```
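A minimal sketch of wiring up that hostname, assuming the named `openrmf` profile and the `openrmf.local` name used throughout (adjust if you picked a different name):

```sh
# get the IP of the local Minikube cluster
minikube ip --profile openrmf

# point openrmf.local at that IP (appends to /etc/hosts; edit by hand if an entry already exists)
echo "$(minikube ip --profile openrmf) openrmf.local" | sudo tee -a /etc/hosts
```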
Use the Helm Chart to generate a single-file template so you can deploy locally/manually without a rooted Tiller running in your cluster; a sketch follows.
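A hedged sketch of that workflow; the chart path, values file, and output filename here are assumptions, and the flags differ slightly between Helm 2 (the Tiller era) and Helm 3:

```sh
# render the chart to a single YAML file without needing Tiller
# (Helm 2 syntax shown; Helm 3 uses "helm template openrmf ./chart" instead)
helm template ./chart --name openrmf --namespace openrmf -f values.yaml > openrmf.yaml

# apply the rendered manifests manually
kubectl apply -f openrmf.yaml -n openrmf
```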
Other things you can do to make this work well while testing:
- Run `kubectl config set-context openrmf --namespace=openrmf` to set your default namespace if you wish
- Run `kubectl get pods -o wide` to make sure things are coming up correctly
- Run `kubectl get pvc` to make sure the persistent volume claims are AOK
- Run `kubectl describe node minikube` to make sure CPU and disk pressure are not stopping you
From the installation root folder run `cd keycloak` to go to the Keycloak folder. Run `./setup-realm.sh NAMEOFNAMESPACE NAMEOFKEYCLOAKCONTAINER OPENRMFDNSNAME OPENRMFADMIN`,
where the name of the Keycloak container can be found using the information above. The OPENRMFDNSNAME is the same DNS name for OpenRMF Professional that you put into the values.yaml file in a previous step for the value dnsName:. The OPENRMFADMIN is the initial login for OpenRMF Professional that will have full Administrator rights across the application for setup. The script defaults to allowing http://OPENRMFDNSNAME/* to call Keycloak for authentication. If you are going to use HTTPS, we will need to update that in the Client setup in a later step here.
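As a hedged example, every argument below is a placeholder for your own values and the Keycloak pod name will differ in your cluster:

```sh
# find the Keycloak pod/container name in your namespace
kubectl get pods -n openrmf | grep keycloak

# run the realm setup: namespace, Keycloak container name, OpenRMF DNS name, initial admin login
./setup-realm.sh openrmf openrmf-keycloak-0 openrmf.local openrmf-admin
```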