The infrastructure runs on AWS and is defined as code with Terraform.
Enter the environment directory:

```sh
cd environments/${ENV}
```
Configure AWS access:

```sh
aws configure --profile gitops-example-${ENV}
```

All AWS Terraform providers are configured to use the `gitops-example-${ENV}` profile, so the Terraform commands below pick up these credentials automatically.
Create VPC:

```sh
DIR='01_vpc'; terraform -chdir="$DIR" init && terraform -chdir="$DIR" apply
```
Create Aurora RDS Cluster:

```sh
DIR='02_rds'; terraform -chdir="$DIR" init && terraform -chdir="$DIR" apply
```
Create EKS:

```sh
DIR='03_eks'; terraform -chdir="$DIR" init && terraform -chdir="$DIR" apply
```
Before running the step below, connect to the EKS cluster:

```sh
aws eks update-kubeconfig --name gitops-example-${ENV} --profile gitops-example-${ENV}
```
Create ArgoCD server and Applications:

```sh
DIR='04_argocd'; terraform -chdir="$DIR" init && terraform -chdir="$DIR" apply
```
ArgoCD will take care of deploying everything :)
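Concretely, the Terraform in `04_argocd` registers each application with ArgoCD; the result is roughly equivalent to an `Application` manifest like the sketch below. The name, repo URL, and paths are illustrative assumptions, not copied from this repo:

```yaml
# Hedged sketch only: field values are illustrative assumptions.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api                     # one Application per app managed by ArgoCD
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-example.git  # hypothetical repo URL
    targetRevision: master      # ArgoCD tracks the master branch
    path: dev/api/chart         # the app's Helm chart (see "To create a new app" below)
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:                  # auto-sync: changes are applied as they land on master
      prune: true
      selfHeal: true
```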
To create a new app:

- Create a Helm Chart at `dev/${APP_NAME}/chart`
- Create a values file at `ops/environments/${ENV}/05_applications/${APP_NAME}_values.yaml` (see the sketch below)
- Add `${APP_NAME}` to the list of `applications` at `ops/environments/${ENV}/04_argocd/main.tf`
- Apply the changes to ArgoCD (`terraform apply`)
- Commit the code. Once it hits the `master` branch, ArgoCD will do its magic.
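The values file wires environment-specific settings into the chart. Below is a minimal sketch of what `ops/environments/dev/05_applications/api_values.yaml` might contain; the actual schema is defined by the chart under `dev/api/chart`, so treat every key here as an assumption:

```yaml
# Hypothetical values: the real keys depend on the chart itself.
replicaCount: 2
image:
  repository: example.dkr.ecr.eu-west-1.amazonaws.com/api  # hypothetical image
  tag: latest
ingress:
  host: gitops-example-dev.foo.bar  # the host the DNS section below refers to
  path: /api                        # api is exposed on /api, webapp on /
```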
ArgoCD deploys an NGINX Ingress Controller, which creates a Load Balancer to expose the applications. To access them, a DNS entry should be created for the hosts indicated in each application's Helm values. Alternatively, you can force DNS resolution via `/etc/hosts`.
`webapp` will be exposed on path `/` and `api` will be exposed on path `/api`.
For the `dev` environment:

```sh
sudo sh -c "echo \"$(host $(kubectl get svc -n ingress-nginx | grep LoadBalancer | awk '{print $4}') | head -n1 | awk '{print $4}') gitops-example-dev.foo.bar\" >> /etc/hosts"
```
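With that entry in place, you can sanity-check the routing with `curl http://gitops-example-dev.foo.bar/` and `curl http://gitops-example-dev.foo.bar/api` (assuming the ingress serves plain HTTP on port 80).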
A simple way to stress the applications and check whether the autoscaler kicks in:
```sh
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stress
spec:
  replicas: 10
  selector:
    matchLabels:
      app: stress
  template:
    metadata:
      labels:
        app: stress
    spec:
      containers:
        - image: alpine/curl
          name: stress-api
          command: ["sh", "-c"]
          args: ["while true; do curl -s api; done"]
          resources:
            limits:
              cpu: 50m
              memory: 64Mi
            requests:
              cpu: 50m
              memory: 64Mi
        - image: alpine/curl
          name: stress-webapp
          command: ["sh", "-c"]
          args: ["while true; do curl -s webapp; done"]
          resources:
            limits:
              cpu: 50m
              memory: 64Mi
            requests:
              cpu: 50m
              memory: 64Mi
EOF
```
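While the stress deployment runs, `kubectl get hpa --watch` shows the HorizontalPodAutoscalers reacting (assuming the charts define HPAs for `api` and `webapp`), and `kubectl get nodes --watch` shows new nodes joining if the extra pods no longer fit on the existing ones. Clean up with `kubectl delete deployment stress` when done.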
ArgoCD itself already gives us some observability into the cluster state from its UI. To access it:

```sh
kubectl -n argocd port-forward svc/argocd-server 8080:443
```
The user is `admin` and the password is in the `argocd-initial-admin-secret` secret:

```sh
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d
```
Beyond that, ArgoCD also deploys a Prometheus server to collect cluster metrics and a set of Grafana dashboards, so we can take a deeper look at how the cluster and our applications are performing.
To access Prometheus:

```sh
kubectl -n observability port-forward svc/prometheus-kube-prometheus-prometheus 9090
```
To access Grafana:

```sh
kubectl -n observability port-forward svc/prometheus-grafana 3000:80
```
The user is `admin` and the password is in the `prometheus-grafana` secret:

```sh
kubectl -n observability get secret prometheus-grafana -o jsonpath='{.data.admin-password}' | base64 -d
```