A ramped deployment updates pods in a rolling-update fashion: a secondary ReplicaSet is created with the new version of the application, then the number of replicas of the old version is decreased while the number of replicas of the new version is increased, until the desired replica count is reached.
kubectl create namespace rolling-update
cd ~/Workshop-K8S/k8s/rolling-update
kubectl -n rolling-update apply -f hello-world.yaml
kubectl -n rolling-update rollout status deployment hello-world -w
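The pace of the ramped deployment described above is controlled by the deployment's update strategy. A sketch for inspecting and tuning it (the maxSurge/maxUnavailable values below are illustrative assumptions, not taken from hello-world.yaml):

```shell
# Show the update strategy currently configured on the deployment
kubectl -n rolling-update get deployment hello-world -o jsonpath='{.spec.strategy}{"\n"}'

# Illustrative values: allow at most 1 extra pod during the update,
# and never drop more than 1 pod below the desired replica count
kubectl -n rolling-update patch deployment hello-world --type merge -p \
  '{"spec":{"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxSurge":1,"maxUnavailable":1}}}}'
```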
kubectl -n rolling-update get events
kubectl -n rolling-update get pods -o wide
kubectl -n rolling-update apply -f hello-world-service.yaml
kubectl -n rolling-update get service
kubectl -n rolling-update apply -f hello-world-ingress.yaml
Note down the public IP address of the load balancer:
kubectl get svc -A
Test with curl:
curl http://[public IP address load balancer]/api
Or in a browser:
http://[public IP address load balancer]
Find the right pod name in the rolling-update namespace and delete the pod:
kubectl -n rolling-update get pods
kubectl -n rolling-update delete pod <hello-world pod name>
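Alternatively, you can delete by label selector instead of by pod name. This assumes the manifest labels the pods with app=hello-world, which may differ in your hello-world.yaml:

```shell
# Delete all pods matching the selector; the ReplicaSet immediately
# starts replacements (label value is an assumption)
kubectl -n rolling-update delete pod -l app=hello-world
```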
See what happened and if a new pod is started:
kubectl -n rolling-update get events
kubectl -n rolling-update get pods
Check health check:
curl [public IP address load balancer]/health
Mark the server as down:
curl -X POST [public IP address load balancer]/down
Check the health check again:
curl [public IP address load balancer]/health
Check that the health check fails:
kubectl -n rolling-update get events -w
kubectl -n rolling-update get pods -o wide
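Marking the server down should make its readiness probe fail, so the pod is removed from the Service's endpoints without being restarted (a failing readiness probe, unlike a liveness probe, does not kill the container). You can watch this happen, assuming the Service is named hello-world:

```shell
# Watch the Service endpoints: the pod's IP disappears while its
# readiness probe fails, and returns once the probe passes again
kubectl -n rolling-update get endpoints hello-world -w
```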
Update App to use new version
kubectl -n rolling-update set image deployments/hello-world hello-world=avthart/hello-app:1.0.1
This results in a new pod running a broken version (status CrashLoopBackOff). The new version is broken :-(
Check the rollout status:
kubectl -n rolling-update rollout status deployments/hello-world
kubectl -n rolling-update get pods
kubectl -n rolling-update describe pod hello-world
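To see why the new version crashes, check the container logs; with a pod in CrashLoopBackOff, `--previous` shows the output of the last failed container instance:

```shell
# Inspect logs of the crashing container (use the actual pod name)
kubectl -n rolling-update logs <hello-world pod name>
# Logs from the previous, crashed container instance
kubectl -n rolling-update logs <hello-world pod name> --previous
```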
Which version is running? Still alive?
curl [public IP address load balancer]/api
Roll back the app
kubectl -n rolling-update rollout history deployments/hello-world
kubectl -n rolling-update rollout history deployments/hello-world --revision=2
kubectl -n rolling-update rollout undo deployments/hello-world
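By default, `rollout undo` reverts to the previous revision; to return to a specific revision listed by `rollout history`, pass `--to-revision` (the revision number below is illustrative):

```shell
# Roll back to an explicit revision from the rollout history
kubectl -n rolling-update rollout undo deployments/hello-world --to-revision=1
```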
Update to working version:
kubectl -n rolling-update set image deployments/hello-world hello-world=avthart/hello-app:1.0.3
kubectl -n rolling-update rollout status deployments/hello-world
Which version is running? Was there any downtime?
curl [public IP address load balancer]/api
Scale the app to 15 instances
kubectl -n rolling-update scale --replicas=15 deployment/hello-world && \
kubectl -n rolling-update get pods -w
Check that request is handled by all instances:
while sleep 0.5; do curl [public IP address load balancer]/api; done
Stop with CONTROL-C
kubectl delete ns rolling-update