- What is ArgoCD
- Prerequisites
- Installing ArgoCD on Openshift 4
- Configuring OpenShift 4
- Multi-cluster Management
In this guide we will explore managing OpenShift 4 cluster configurations with GitOps using ArgoCD.
ArgoCD is a declarative continuous delivery tool that leverages GitOps to maintain cluster resources. ArgoCD is implemented as a controller that continuously monitors application definitions and configurations defined in a Git repository and compares the desired state of those configurations with their live state on the cluster. Configurations that deviate from their desired state in the Git repository are classified as OutOfSync. ArgoCD reports these differences and allows administrators to automatically or manually resync configurations to the desired state.
The examples contained in this guide require:
- the oc OpenShift client command-line tool
- a kubeconfig file for an existing OpenShift cluster (default location is ~/.kube/config)
- the argocd command-line tool
These manual steps will hopefully be replaced by an ArgoCD operator on OperatorHub in the near future.
oc new-project argocd
oc apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
oc create route passthrough --service=argocd-server
# but this does not seem to work for console logins...
#oc apply -n argocd -f argocd.yaml
#oc create route edge --service=argocd-server
# Get the initial ArgoCD 'admin' password (the name of the argocd-server pod):
ARGO_ADMIN_PASS=`kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2`
# Login:
ARGO_ROUTE=`oc get route argocd-server -n argocd -o jsonpath='{.spec.host}'`
argocd login $ARGO_ROUTE:443 --username admin --password $ARGO_ADMIN_PASS --insecure
# Change the ArgoCD password:
argocd account update-password
NOTE: ArgoCD does not have any local users other than the built-in admin user. By default, only the admin user may interact with ArgoCD and its apps. Additional users can manage ArgoCD via SSO if configured. See the ArgoCD Operator Manual.
- ArgoCD "Applications" (despite the name) can be used to deliver global custom resources such as those which configure OpenShift v4 clusters.
- When creating an application you will be required to provide a namespace. In the case of an application delivering global custom resources this doesn't make much sense, but you can provide the name of any namespace to get past the requirement.
- By default ArgoCD will prune resources should you ever delete the application that delivered them. OpenShift v4 global configuration custom resources are often blocked from being deleted, which can cause ArgoCD to become stuck. Adding the argocd.argoproj.io/sync-options: Prune=false annotation to the custom resources in your configuration Git repository avoids this problem, as shown below. If you do run into it, you will need to manually "kubectl edit" the ArgoCD Application and remove the finalizer which blocks until resources are pruned.
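A minimal sketch of that annotation on a global custom resource (the resource kind shown is just an illustration):

```yaml
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
  annotations:
    # Tell ArgoCD never to prune this resource, even if the app that delivered it is deleted:
    argocd.argoproj.io/sync-options: Prune=false
```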
The following section demonstrates the use of ArgoCD to deliver some of the available OpenShift v4 Cluster Customizations.
The identity-providers directory contains an example for deploying an HTPasswd OAuth provider, and the associated secret. Deploying this as an ArgoCD application should allow you to log in to your cluster as user1 / MyPassword!. For information on how this secret was created, see the OpenShift 4 Documentation.
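A minimal sketch of the OAuth resource such a directory would contain, assuming an htpass-secret Secret in openshift-config holding the htpasswd data (names here are illustrative and may differ from the example repository):

```yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: htpasswd_provider  # illustrative provider name
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret  # Secret in openshift-config containing the htpasswd file
```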
argocd app create htpasswd-oauth --repo https://github.com/dgoodwin/openshift4-gitops.git --path=identity-providers --dest-server=https://kubernetes.default.svc --dest-namespace=openshift-config
argocd app sync htpasswd-oauth
This example includes both a global OAuth config resource, and a namespaced secret.
WARNING: The openshift-oauth operator copies your specified secrets to the openshift-authentication namespace, including their labels. One of these labels is added by ArgoCD to indicate the secret is owned by the htpasswd-oauth application. When the secret is copied, ArgoCD sees the copy as a resource it doesn't know about yet believes is owned by this app, and thus should be pruned. You can disable pruning with the normal annotation, but you will still see this secret as out of sync in the UI.
The builds directory contains an example global Build configuration.
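A minimal sketch of such a global Build resource (the buildDefaults values are illustrative):

```yaml
apiVersion: config.openshift.io/v1
kind: Build
metadata:
  name: cluster
spec:
  buildDefaults:
    # Labels applied to every image produced by cluster builds:
    imageLabels:
    - name: mylabel
      value: myvalue
```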
argocd app create builds-config --repo https://github.com/dgoodwin/openshift4-gitops.git --path=builds/base --dest-server=https://kubernetes.default.svc --dest-namespace=openshift-config
argocd app sync builds-config
The image directory contains an example global Image configuration which sets allowedRegistriesForImport, limiting the container image registries from which normal users may import images to only include quay.io.
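A minimal sketch of that Image resource, assuming the same quay.io restriction:

```yaml
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  # Normal users may only import images (e.g. via `oc import-image`) from these registries:
  allowedRegistriesForImport:
  - domainName: quay.io
    insecure: false
```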
argocd app create image-config --repo https://github.com/dgoodwin/openshift4-gitops.git --path=image --dest-server=https://kubernetes.default.svc --dest-namespace=openshift-config
argocd app sync image-config
The console directory contains a simple configuration for the OpenShift console which changes the logout behavior to redirect to Google.
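A minimal sketch of that Console resource (the redirect URL in the example repository may differ):

```yaml
apiVersion: config.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  authentication:
    # Where to send users after they log out of the console:
    logoutRedirect: https://www.google.com
```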
argocd app create console-config --repo https://github.com/dgoodwin/openshift4-gitops.git --path=console --dest-server=https://kubernetes.default.svc --dest-namespace=openshift-config
argocd app sync console-config
TODO: The --dest-namespace here is odd as this example contains only a global resource.
The scheduler directory contains an example scheduler policy configmap which can be deployed to override the default scheduler policy. For information regarding scheduler predicates, see the OpenShift 4 Documentation.
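A minimal sketch of such a policy ConfigMap (the name and predicate list are illustrative; see the linked documentation for the available predicates and priorities):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: scheduler-policy
  namespace: openshift-config
data:
  policy.cfg: |
    {
      "kind": "Policy",
      "apiVersion": "v1",
      "predicates": [
        {"name": "PodFitsResources"},
        {"name": "MatchNodeSelector"}
      ]
    }
```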
argocd app create scheduler-policy --repo https://github.com/dgoodwin/openshift4-gitops.git --path=scheduler --dest-server=https://kubernetes.default.svc --dest-namespace=openshift-config
argocd app sync scheduler-policy
The machine-sets directory contains an example MachineSet being deployed as an application via ArgoCD:
argocd app create machineset --repo https://github.com/dgoodwin/openshift4-gitops.git --path=machine-sets --dest-server=https://kubernetes.default.svc --dest-namespace=openshift-machine-api
argocd app sync machineset
However, there is a problem here: if you view the YAML you will see the cluster's generated InfraID referenced multiple times. This value is generated by the OpenShift installer and is used in the naming of many cloud objects. Committing this cluster config to Git is problematic, as the value is not known before install and is not consistent across clusters.
A standard OpenShift 4 cluster with 3 compute nodes in us-east-1 comes with 6 MachineSets, one per AZ (in my account), with only three of them scaled to 1 replica. Each MachineSet references the generated InfraID roughly 9 times, including in the following places (a trimmed sketch follows the list):
- MachineSet Name
- Selector
- IAM Instance Profile
- Security Group Name
- Subnet
- AWS Tags
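A trimmed sketch of where the InfraID appears in one such MachineSet; `<infraID>` stands for the generated value, and most other fields are omitted:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <infraID>-worker-us-east-1a
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: <infraID>
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: <infraID>-worker-us-east-1a
  template:
    spec:
      providerSpec:
        value:
          iamInstanceProfile:
            id: <infraID>-worker-profile
          securityGroups:
          - filters:
            - name: tag:Name
              values:
              - <infraID>-worker-sg
          subnet:
            filters:
            - name: tag:Name
              values:
              - <infraID>-private-us-east-1a
          tags:
          - name: kubernetes.io/cluster/<infraID>
            value: owned
```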
TODO: Should we recommend against using MachineSets with gitops and Argo? Or is there a templating solution we should explore? In this case the value we want to template is a fact about the individual cluster it's being deployed to.
Deploy an operator from OperatorHub by creating OperatorGroup and Subscription objects. In this example we will deploy the grafana operator.
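A minimal sketch of the two objects such a directory contains; the channel and catalog source are illustrative and may differ from the example repository:

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: grafana-operator-group
  namespace: default
spec:
  # Have the operator watch only the namespace it is installed into:
  targetNamespaces:
  - default
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: grafana-operator
  namespace: default
spec:
  channel: alpha               # illustrative channel
  name: grafana-operator
  source: community-operators  # illustrative catalog source
  sourceNamespace: openshift-marketplace
```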
argocd app create grafana-operator --repo https://github.com/dgoodwin/openshift4-gitops.git --path=grafana-operator --dest-server=https://kubernetes.default.svc --dest-namespace=default
argocd app sync grafana-operator
In this example we will manage the build configuration of two OpenShift 4.x clusters: a pre-production cluster (context: pre) and a production cluster (context: pro).
The example build configuration we will deploy contains customizations to be made per cluster environment.
Ensure we have access to both clusters via kubeconfig context,
$ oc --context pre get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-133-97.ec2.internal Ready master 5h v1.14.6+7e13ab9a7
ip-10-0-136-91.ec2.internal Ready worker 5h v1.14.6+7e13ab9a7
ip-10-0-144-237.ec2.internal Ready worker 5h v1.14.6+7e13ab9a7
ip-10-0-147-216.ec2.internal Ready master 5h v1.14.6+7e13ab9a7
ip-10-0-165-161.ec2.internal Ready master 5h v1.14.6+7e13ab9a7
ip-10-0-169-135.ec2.internal Ready worker 5h v1.14.6+7e13ab9a7
$ oc --context pro get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-133-100.ec2.internal Ready master 5h v1.14.6+7e13ab9a7
ip-10-0-138-244.ec2.internal Ready worker 5h v1.14.6+7e13ab9a7
ip-10-0-146-118.ec2.internal Ready master 5h v1.14.6+7e13ab9a7
ip-10-0-151-40.ec2.internal Ready worker 5h v1.14.6+7e13ab9a7
ip-10-0-165-83.ec2.internal Ready worker 5h v1.14.6+7e13ab9a7
ip-10-0-175-20.ec2.internal Ready master 5h v1.14.6+7e13ab9a7
NOTE: Setting up multiple contexts with separate kubeconfigs can be achieved by merging kubeconfigs. In order to merge several kubeconfigs, ensure that each kubeconfig you wish to merge is configured with a user unique to that particular kubeconfig. For example, if each kubeconfig you wish to merge contains an admin user, then that user would need to be renamed to something unique to the cluster identified by the kubeconfig, such as admin1. Simply update the user string in the kubeconfig.
For this example, we will have two kubeconfig files, cluster1.kubeconfig and cluster2.kubeconfig, that will be merged into merged-config.kubeconfig.
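# merged-config.kubeconfig is listed first, so new entries written by `oc config set-context` below land in it: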
export KUBECONFIG="merged-config.kubeconfig:cluster1.kubeconfig:cluster2.kubeconfig"
$ oc config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
admin1 cluster1 admin1
admin2 cluster2 admin2
$ oc config set-context pre --cluster=cluster1 --user=admin1
Context "pre" created.
$ oc config set-context pro --cluster=cluster2 --user=admin2
Context "pro" created.
Next, ensure that each cluster has been registered with ArgoCD. Clusters are added to ArgoCD by specifying the context,
$ argocd cluster add
ERRO[0000] Choose a context name from:
CURRENT NAME CLUSTER SERVER
admin1 cluster1 https://api.cluster1.new-installer.openshift.com:6443
admin2 cluster2 https://api.cluster2.new-installer.openshift.com:6443
* pre cluster1 https://api.cluster1.new-installer.openshift.com:6443
pro cluster2 https://api.cluster2.new-installer.openshift.com:6443
$ argocd cluster add pre
INFO[0000] ServiceAccount "argocd-manager" created in namespace "kube-system"
INFO[0000] ClusterRole "argocd-manager-role" created
INFO[0000] ClusterRoleBinding "argocd-manager-role-binding" created, bound "argocd-manager" to "argocd-manager-role"
Cluster 'pre' added
$ argocd cluster add pro
INFO[0000] ServiceAccount "argocd-manager" created in namespace "kube-system"
INFO[0000] ClusterRole "argocd-manager-role" created
INFO[0000] ClusterRoleBinding "argocd-manager-role-binding" created, bound "argocd-manager" to "argocd-manager-role"
Cluster 'pro' added
$ argocd cluster list
SERVER NAME STATUS MESSAGE
https://kubernetes.default.svc Successful
https://api.cluster2.new-installer.openshift.com:6443 pro Successful
https://api.cluster1.new-installer.openshift.com:6443 pre Successful
Add our build configuration repository to ArgoCD. The build configuration repository has pre and pro kustomize overlays which override the build imageLabels per cluster, but we will start by deploying the base build configuration.
$ argocd repo add https://github.com/dgoodwin/openshift4-gitops.git
Deploy custom OpenShift build configuration to pre-production and production clusters,
$ argocd app create --project default \
--name pre-builds \
--repo https://github.com/dgoodwin/openshift4-gitops.git \
--path builds/base \
--dest-server https://api.cluster1.new-installer.openshift.com:6443 \
--dest-namespace=openshift-config \
--revision master
$ argocd app create --project default \
--name pro-builds \
--repo https://github.com/dgoodwin/openshift4-gitops.git \
--path builds/base \
--dest-server https://api.cluster2.new-installer.openshift.com:6443 \
--dest-namespace=openshift-config \
--revision master
Sync the configuration to both clusters manually; as we have not defined an ArgoCD sync policy for these apps, configurations must be synced by hand.
$ argocd app sync pre-builds
$ argocd app sync pro-builds
Ensure both configurations have been successfully synced,
$ argocd app list
NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH
pre-builds https://api.cluster1.new-installer.openshift.com:6443 openshift-config default Synced Healthy
pro-builds https://api.cluster2.new-installer.openshift.com:6443 openshift-config default Synced Healthy
Grab the modified build configuration from each cluster and ensure that it has been updated,
$ oc --context pre get build.config.openshift.io/cluster -o yaml -n openshift-config
$ oc --context pro get build.config.openshift.io/cluster -o yaml -n openshift-config
In this example, we will modify our build configuration based on which cluster we are deploying to. ArgoCD leverages kustomize to manage configuration overrides across environments. In the pre and pro overlay directories of our Git repository there are kustomization files which include patches to apply to the base configuration. We will specify the overlay directory containing our kustomizations as the application path instead of the base builds configuration directory.
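A minimal sketch of what one such overlay looks like; the file names and patch mechanism are illustrative and may differ from the example repository:

```yaml
# builds/overlays/pre/kustomization.yaml
bases:
- ../../base
patchesStrategicMerge:
- build-patch.yaml
```

```yaml
# builds/overlays/pre/build-patch.yaml: override imageLabels for the pre environment
apiVersion: config.openshift.io/v1
kind: Build
metadata:
  name: cluster
spec:
  buildDefaults:
    imageLabels:
    - name: preprodbuild
      value: "true"
```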
Deploy kustomized build configuration to pre-production and production clusters,
$ argocd app create --project default \
--name pre-kustomize-builds \
--repo https://github.com/dgoodwin/openshift4-gitops.git \
--path builds/overlays/pre \
--dest-server https://api.cluster1.new-installer.openshift.com:6443 \
--dest-namespace openshift-config \
--revision master \
--sync-policy automated
$ argocd app create --project default \
--name pro-kustomize-builds \
--repo https://github.com/dgoodwin/openshift4-gitops.git \
--path builds/overlays/pro \
--dest-server https://api.cluster2.new-installer.openshift.com:6443 \
--dest-namespace openshift-config \
--revision master \
--sync-policy automated
Ensure that configuration applications have been synced successfully,
$ argocd app get pre-kustomize-builds
Name: pre-kustomize-builds
Project: default
Server: https://api.cluster1.new-installer.openshift.com:6443
Namespace: openshift-config
URL: https://argocd-server-argocd.apps.cluster1.new-installer.openshift.com/applications/pre-kustomize-builds
Repo: https://github.com/dgoodwin/openshift4-gitops.git
Target: pre
Path: builds/overlays/pre
Sync Policy: Automated
Sync Status: Synced to master (884a6db)
Health Status: Healthy
GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
config.openshift.io Build openshift-config cluster Running Synced build.config.openshift.io/cluster configured
config.openshift.io Build cluster Synced Unknown
$ argocd app get pro-kustomize-builds
Name: pro-kustomize-builds
Project: default
Server: https://api.cluster2.new-installer.openshift.com:6443
Namespace: openshift-config
URL: https://argocd-server-argocd.apps.cluster2.new-installer.openshift.com/applications/pro-kustomize-builds
Repo: https://github.com/dgoodwin/openshift4-gitops.git
Target: pro
Path: builds/overlays/pro
Sync Policy: Automated
Sync Status: Synced to master (884a6db)
Health Status: Healthy
GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
config.openshift.io Build openshift-config cluster Running Synced build.config.openshift.io/cluster unchanged
config.openshift.io Build cluster Synced Unknown
Grab the imageLabels which have been modified per environment using kustomize,
$ oc --context pre get build.config.openshift.io/cluster -n openshift-config -o jsonpath='{.spec.buildDefaults.imageLabels}'
[map[value:true name:preprodbuild]]
$ oc --context pro get build.config.openshift.io/cluster -n openshift-config -o jsonpath='{.spec.buildDefaults.imageLabels}'
[map[value:true name:prodbuild]]