Prerequisites:
- Helm
- kubectl

```bash
brew install helm
brew install kubectl
```
You may want to delete resources left over from a previous installation. If so, run the following steps:

```bash
kubectl delete crd alertmanagerconfigs.monitoring.coreos.com
kubectl delete crd alertmanagers.monitoring.coreos.com
kubectl delete crd podmonitors.monitoring.coreos.com
kubectl delete crd probes.monitoring.coreos.com
kubectl delete crd prometheuses.monitoring.coreos.com
kubectl delete crd prometheusrules.monitoring.coreos.com
kubectl delete crd servicemonitors.monitoring.coreos.com
kubectl delete crd thanosrulers.monitoring.coreos.com
kubectl delete crd applications.argoproj.io
kubectl delete crd applicationsets.argoproj.io
kubectl delete crd appprojects.argoproj.io
helm del -n argocd argocd
```
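After the deletes, you can confirm nothing was left behind. This is a quick sanity check (not part of the original steps), matching on the two API groups used above:

```bash
# List any remaining CRDs from the Prometheus operator or Argo CD API groups.
# An empty match (and the fallback message) means the cleanup succeeded.
kubectl get crd -o name | grep -E 'monitoring\.coreos\.com|argoproj\.io' || echo "no leftover CRDs"
```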
We will install Argo CD via Helm, using the argo-cd chart from the argo-helm repository (added below with `helm repo add`).

We would like to expose two sets of Argo CD metrics, so we will enable:
- Application controller metrics: `controller.metrics.enabled=true`
- API server metrics: `server.metrics.enabled=true`
Let's install:

```bash
kubectl create namespace argocd
helm repo add argo https://argoproj.github.io/argo-helm
helm upgrade -i argocd --namespace argocd \
  --set redis.exporter.enabled=true \
  --set redis.metrics.enabled=true \
  --set server.metrics.enabled=true \
  --set controller.metrics.enabled=true \
  argo/argo-cd
```
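Before logging in, it can help to wait until the chart's workloads are ready. A small check using standard kubectl commands (the deployment name comes from the chart's defaults):

```bash
# Block until the Argo CD API server finishes rolling out, then list the pods.
kubectl -n argocd rollout status deployment/argocd-server --timeout=120s
kubectl -n argocd get pods
```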
Log in with username `admin`. Retrieve the initial password:

```bash
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
```
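The secret's value is stored base64-encoded, which is why the command above pipes through `base64 -d`. Here is the same decoding step in isolation, using a made-up example string rather than the real password:

```bash
# Kubernetes Secrets store values base64-encoded; decoding is a plain shell pipe.
encoded="cGFzc3dvcmQ="            # example value, as it would appear in the Secret
decoded=$(echo "$encoded" | base64 -d)
echo "$decoded"                    # prints: password
```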
The Application CRD is the Kubernetes resource object representing a deployed application instance in an environment
Let's apply it:
```bash
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: workshop
  namespace: argocd
spec:
  destination:
    namespace: argocd
    server: https://kubernetes.default.svc
  project: default
  source:
    path: argoCD/
    repoURL: https://github.com/naturalett/continuous-delivery
    targetRevision: main
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
EOF
```
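Once applied, you can watch Argo CD reconcile the Application from the terminal. The field paths below come from the Application CRD's status section:

```bash
# Show the sync and health status of the workshop Application.
kubectl -n argocd get application workshop \
  -o jsonpath='{.status.sync.status}{" / "}{.status.health.status}{"\n"}'
```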
```bash
kubectl port-forward service/argocd-server -n argocd 8080:443
```

Connect: https://localhost:8080
We will install the full stack: kube-prometheus-stack. The stack ships with:
- Prometheus
- Grafana (with dashboards)
- Alertmanager
- etc.

We will disable the default node-exporter and add its Helm chart separately. There are two options for deploying the stack; we are using Option 2.
Option 1: apply the Application manifest manually:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prometheus
  namespace: argocd
spec:
  destination:
    name: in-cluster
    namespace: argocd
  project: default
  source:
    repoURL: https://prometheus-community.github.io/helm-charts
    targetRevision: 44.3.0
    chart: kube-prometheus-stack
EOF
```
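Either way, you can verify that Argo CD picked up the chart-based Application (the output columns come from the Application CRD's printer columns):

```bash
# List all Argo CD Applications with their sync and health status.
kubectl -n argocd get applications.argoproj.io
```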
Option 2: this Application is already applied through the workshop Application CRD that we deployed earlier. You can check the Application YAML in the repository.
Prometheus exposes its metrics at /metrics. In Grafana we will define a Prometheus data source. In addition, there are more metrics that we want to display in Grafana, so we will scrape them with Prometheus.
The application is defined declaratively
Related issue: Fix prometheus CRD being too big #4439
We deployed the Prometheus CRDs.
```bash
kubectl port-forward service/kube-prometheus-stack-prometheus -n argocd 9090:9090
```

Connect: http://localhost:9090
Log in with username `admin`. Retrieve the Grafana password and start the port-forward:

```bash
kubectl get secret -n argocd kube-prometheus-stack-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
kubectl port-forward service/kube-prometheus-stack-grafana -n argocd 9092:80
```
For the dashboard, we created a ConfigMap, applied it using Kustomize, and attached it during the deployment of Grafana. We also scrape the metrics exposed by Argo CD.

Connect: http://localhost:9092
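With the port-forward running, you can sanity-check Grafana from the terminal before opening the browser. This uses Grafana's built-in `/api/health` endpoint (not part of the original steps):

```bash
# Query Grafana's health endpoint through the port-forward;
# a healthy instance returns a small JSON payload with "database": "ok".
curl -s http://localhost:9092/api/health
```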
Run the following script:
https://github.com/naturalett/continuous-delivery/blob/main/trigger_alert.sh
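One way to run the script without cloning the repository (the raw.githubusercontent.com path below is the standard raw-file mapping of the link above, so treat it as an assumption and inspect the script before piping it to a shell):

```bash
# Fetch the alert-trigger script from the workshop repository and run it.
curl -sL https://raw.githubusercontent.com/naturalett/continuous-delivery/main/trigger_alert.sh | bash
```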
```bash
kubectl port-forward service/alertmanager-operated -n argocd 9093:9093
```

Connect: http://localhost:9093
Watch the alert:

```bash
kubectl port-forward service/alertmanager-operated -n argocd 9093:9093
```
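With the port-forward in place, active alerts can also be inspected from the terminal via Alertmanager's v2 HTTP API (`jq` is assumed to be installed):

```bash
# List the names of currently active alerts via the Alertmanager v2 API.
curl -s http://localhost:9093/api/v2/alerts | jq '.[].labels.alertname'
```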