add a solution to deploy argocd apps
Signed-off-by: Yang Le <yangle@redhat.com>
elgnay committed Aug 31, 2022
1 parent e0322bd commit c06cb06
Showing 9 changed files with 291 additions and 0 deletions.
173 changes: 173 additions & 0 deletions solutions/deploy-argocd-apps/README.md
# Deploy applications with Argo CD

The script and instructions in this doc help you set up an Open Cluster Management (OCM) environment with kind clusters and integrate it with Argo CD. You can then deploy Argo CD applications to OCM managed clusters.

## Prerequisites

- [kind](https://kind.sigs.k8s.io) must be installed on your local machine. The Kubernetes version must be >= 1.19, see [kind user guide](https://kind.sigs.k8s.io/docs/user/quick-start/#creating-a-cluster) for more details.

- Download and install [clusteradm](https://github.com/open-cluster-management-io/clusteradm/releases). On Linux, run the following commands:

```bash
wget -qO- https://github.com/open-cluster-management-io/clusteradm/releases/latest/download/clusteradm_linux_amd64.tar.gz | sudo tar -xvz -C /usr/local/bin/
sudo chmod +x /usr/local/bin/clusteradm
```
## Set up the clusters
1. Edit the kind configuration files for the managed clusters, `cluster1-config.yaml` and `cluster2-config.yaml`, and set `apiServerAddress` to your private IP address. This makes the kube-apiserver of each managed cluster accessible to Argo CD running on the hub cluster.
2. Run `bash ./setup-ocm.sh`. When it completes, you should see two clusters registered on the hub cluster:
```
NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE
cluster1 true https://cluster1-control-plane:6443 18s
cluster2 true https://cluster2-control-plane:6443 1s
```
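If you prefer not to edit the files by hand, the two configs from step 1 can also be generated in one go. This is only a sketch: `MY_IP` is a placeholder you must replace with your machine's private IP, and the port numbers match the checked-in configs (10443 and 11443).

```shell
# Generate cluster1-config.yaml and cluster2-config.yaml with your own IP.
# MY_IP is a placeholder value; set it to your real private IP address.
MY_IP="192.168.1.50"
for i in 1 2; do
  # cluster1 listens on 10443, cluster2 on 11443
  port=$((10443 + (i - 1) * 1000))
  cat > "cluster${i}-config.yaml" <<EOF
# cluster${i}-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "${MY_IP}"
  apiServerPort: ${port}
EOF
done
```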
## Install Argo CD
1. Install Argo CD on the OCM hub cluster
```bash
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
See [Argo CD Getting Started](https://argo-cd.readthedocs.io/en/stable/getting_started/) for more information.
2. Confirm that all pods are running.
```bash
$ kubectl -n argocd get pods
NAME                                               READY   STATUS    RESTARTS   AGE
argocd-application-controller-0 1/1 Running 0 22s
argocd-applicationset-controller-7f466f7cc-9hdlq 1/1 Running 0 22s
argocd-dex-server-54cd4596c4-t5sb5 1/1 Running 0 22s
argocd-notifications-controller-8445d56d96-8745z 1/1 Running 0 22s
argocd-redis-65596bf87-lkrg2 1/1 Running 0 22s
argocd-repo-server-5ccf4bd568-hxlzw 1/1 Running 0 22s
argocd-server-7dff66c8f8-qxzp9 1/1 Running 0 22s
```
3. Download the Argo CD CLI
Download the latest version of `argocd` from https://github.com/argoproj/argo-cd/releases/latest. More detailed installation instructions can be found in the [CLI installation documentation](https://argo-cd.readthedocs.io/en/stable/cli_installation/).
## Integrate OCM with Argo CD
1. Register the managed clusters to Argo CD
Start port forwarding to expose the Argo CD API server on the hub cluster.
```bash
kubectl port-forward svc/argocd-server -n argocd 8080:443
```
Log in to Argo CD using the CLI.
```bash
export ADMIN_PASS=$(kubectl -n argocd get secret argocd-initial-admin-secret -o=jsonpath='{.data.password}' | base64 -d)
argocd login localhost:8080 --username=admin --password="${ADMIN_PASS}" --insecure
```
Register the two OCM managed clusters with Argo CD.
```bash
argocd cluster add kind-cluster1 --name=cluster1
argocd cluster add kind-cluster2 --name=cluster2
```
2. Grant Argo CD permission to access the OCM Placement API.
```bash
kubectl apply -f ./manifests/ocm-placement-consumer-role.yaml
kubectl apply -f ./manifests/ocm-placement-consumer-rolebinding.yaml
```
3. Bind at least one `ManagedClusterSet` to the `argocd` namespace. Applications can then be deployed to clusters belonging to those clustersets. For example, bind the global `ManagedClusterSet`, which includes all OCM managed clusters.
```bash
clusteradm clusterset bind global --namespace argocd
```
Confirm the clusterset is bound successfully.
```bash
$ clusteradm get clustersets
NAME BOUND NAMESPACES STATUS
default 2 ManagedClusters selected
global argocd 2 ManagedClusters selected
```
4. Create the configuration for the OCM Placement generator, a [Cluster Decision Resource generator](https://argo-cd.readthedocs.io/en/stable/operator-manual/applicationset/Generators-Cluster-Decision-Resource/) backed by the OCM Placement API.
```bash
kubectl apply -f ./manifests/ocm-placement-generator-cm.yaml
```
## Deploy an application to managed clusters
1. Create a placement to select a set of managed clusters.
```bash
kubectl apply -f ./manifests/guestbook-app-placement.yaml
```
Confirm the placement selects all managed clusters.
```bash
$ kubectl -n argocd get placementdecisions -l cluster.open-cluster-management.io/placement=guestbook-app-placement -o yaml
apiVersion: v1
items:
- apiVersion: cluster.open-cluster-management.io/v1beta1
  kind: PlacementDecision
  metadata:
    labels:
      cluster.open-cluster-management.io/placement: guestbook-app-placement
    name: guestbook-app-placement-decision-1
    namespace: argocd
  status:
    decisions:
    - clusterName: cluster1
      reason: ""
    - clusterName: cluster2
      reason: ""
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```
2. Create an Argo CD `ApplicationSet`.
```bash
kubectl apply -f ./manifests/guestbook-app.yaml
```
With references to the OCM Placement generator and a placement, an `ApplicationSet` can target the application to the clusters selected by the placement.
3. Confirm the `ApplicationSet` is created and an `Application` is generated for each selected managed cluster.
```bash
$ kubectl -n argocd get applicationsets
NAME AGE
guestbook-app 4s
$ kubectl -n argocd get applications
NAME SYNC STATUS HEALTH STATUS
cluster1-guestbook-app Synced Progressing
cluster2-guestbook-app Synced Progressing
```
4. Confirm the `Application` is running on the selected managed clusters
```bash
$ kubectl --context kind-cluster1 -n guestbook get pods
NAME READY STATUS RESTARTS AGE
guestbook-ui-6b689986f-cdrk8 1/1 Running 0 112s
$ kubectl --context kind-cluster2 -n guestbook get pods
NAME READY STATUS RESTARTS AGE
guestbook-ui-6b689986f-x9tsq 1/1 Running 0 2m33s
```
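## Clean up the environment

When you are done experimenting, the demo environment can be torn down by deleting the three kind clusters (the names assume the defaults used by `setup-ocm.sh`):

```shell
# Delete the hub and both managed clusters; "|| true" keeps the loop going
# if a cluster has already been removed.
for c in hub cluster1 cluster2; do
  kind delete cluster --name "${c}" || true
done
```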
7 changes: 7 additions & 0 deletions solutions/deploy-argocd-apps/cluster1-config.yaml
# cluster1-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # replace the IP address with your real IP address
  apiServerAddress: "192.168.1.123"
  apiServerPort: 10443
7 changes: 7 additions & 0 deletions solutions/deploy-argocd-apps/cluster2-config.yaml
# cluster2-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # replace the IP address with your real IP address
  apiServerAddress: "192.168.1.123"
  apiServerPort: 11443
6 changes: 6 additions & 0 deletions solutions/deploy-argocd-apps/manifests/guestbook-app-placement.yaml
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: guestbook-app-placement
  namespace: argocd
spec: {}
30 changes: 30 additions & 0 deletions solutions/deploy-argocd-apps/manifests/guestbook-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook-app
  namespace: argocd
spec:
  generators:
  - clusterDecisionResource:
      configMapRef: ocm-placement-generator
      labelSelector:
        matchLabels:
          cluster.open-cluster-management.io/placement: guestbook-app-placement
      requeueAfterSeconds: 30
  template:
    metadata:
      name: '{{clusterName}}-guestbook-app'
    spec:
      project: default
      source:
        repoURL: 'https://github.com/argoproj/argocd-example-apps.git'
        targetRevision: HEAD
        path: guestbook
      destination:
        name: '{{clusterName}}'
        namespace: guestbook
      syncPolicy:
        automated:
          prune: true
        syncOptions:
        - CreateNamespace=true
13 changes: 13 additions & 0 deletions solutions/deploy-argocd-apps/manifests/ocm-placement-consumer-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ocm-placement-consumer
  namespace: argocd
rules:
# Allow the applicationset controller to read placements and placementdecisions
- apiGroups: ["cluster.open-cluster-management.io"]
  resources: ["placements"]
  verbs: ["get", "list"]
- apiGroups: ["cluster.open-cluster-management.io"]
  resources: ["placementdecisions"]
  verbs: ["get", "list"]
13 changes: 13 additions & 0 deletions solutions/deploy-argocd-apps/manifests/ocm-placement-consumer-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ocm-placement-consumer:argocd
  namespace: argocd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ocm-placement-consumer
subjects:
- kind: ServiceAccount
  namespace: argocd
  name: argocd-applicationset-controller
10 changes: 10 additions & 0 deletions solutions/deploy-argocd-apps/manifests/ocm-placement-generator-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ocm-placement-generator
  namespace: argocd
data:
  apiVersion: cluster.open-cluster-management.io/v1beta1
  kind: placementdecisions
  statusListKey: decisions
  matchKey: clusterName
32 changes: 32 additions & 0 deletions solutions/deploy-argocd-apps/setup-ocm.sh
#!/bin/bash
set -e

hub=${HUB:-hub}
c1=${CLUSTER1:-cluster1}
c2=${CLUSTER2:-cluster2}

hubctx="kind-${hub}"
c1ctx="kind-${c1}"
c2ctx="kind-${c2}"

kind create cluster --name "${hub}"
kind create cluster --name "${c1}" --config cluster1-config.yaml
kind create cluster --name "${c2}" --config cluster2-config.yaml

kubectl config use-context "${hubctx}"
echo "Initialize the ocm hub cluster"
joincmd=$(clusteradm init --use-bootstrap-token | grep clusteradm)

kubectl config use-context "${c1ctx}"
echo "Join cluster1 to hub"
$(echo ${joincmd} --force-internal-endpoint-lookup --wait | sed "s/<cluster_name>/$c1/g")

kubectl config use-context "${c2ctx}"
echo "Join cluster2 to hub"
$(echo ${joincmd} --force-internal-endpoint-lookup --wait | sed "s/<cluster_name>/$c2/g")

kubectl config use-context "${hubctx}"
echo "Accept join of cluster1 and cluster2"
clusteradm accept --clusters ${c1},${c2} --wait

kubectl get managedclusters
