My automated infrastructure deployment
- Automates Kubernetes deployment (based on mmumshad/kubernetes-the-hard-way).
- Note: CNI pod networking, DNS, the ingress controller, and the load balancer are not yet covered by the playbooks; they are applied manually (see the steps below).
- Also provisions GlusterFS -- not yet finished.
Steps:
- Create the topology in both inventory.yaml and config/certs/generate_all.sh (see the example inventory sketch after this list).
  - Nodes are needed for etcd, the control plane, the load balancer, and the workers.
- Run config/certs/generate_all.sh.
- Run ansible-playbook -i inventory.yaml playbooks/kubernetes/<playbook> for each of the following, in order (a wrapper sketch follows this list):
  - etcd.yaml
  - control_plane.yaml
  - load_balancer.yaml
  - worker_pre.yaml
  - then either:
    - worker.yaml, or
    - with TLS bootstrapping (currently not working): tls_bootstrapping.yaml, then worker_with_tls_bootstrap.yaml
- kubectl apply -f kube-flannel (CNI pod networking)
- init_rbac_kubelet_authorization.yaml
- kubectl apply -f coredns.yaml
- istioctl install --set profile=default -f ../istio_examples/overrides.yaml
- kubectl apply -f metallb-native.yaml
- Workaround note: kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io metallb-webhook-configuration
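
For reference, here is a minimal sketch of what the inventory topology might look like. The group names and host placement are illustrative assumptions, not necessarily the repo's actual layout; server, node0, and node1 are the current hostnames mentioned below.

```yaml
# Hypothetical inventory.yaml layout; group names and which host carries
# which role are assumptions for illustration only.
all:
  children:
    etcd:
      hosts:
        server:
    control_plane:
      hosts:
        server:
    load_balancer:
      hosts:
        server:
    workers:
      hosts:
        node0:
        node1:
```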
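The playbook sequence above could also be captured in a single wrapper playbook. This is a sketch only -- no such site.yaml exists in the repo yet, and it assumes the file paths listed above:

```yaml
# Hypothetical site.yaml: runs the playbooks above in order via
#   ansible-playbook -i inventory.yaml site.yaml
- import_playbook: playbooks/kubernetes/etcd.yaml
- import_playbook: playbooks/kubernetes/control_plane.yaml
- import_playbook: playbooks/kubernetes/load_balancer.yaml
- import_playbook: playbooks/kubernetes/worker_pre.yaml
- import_playbook: playbooks/kubernetes/worker.yaml   # or the TLS-bootstrapping pair
```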
Things to improve:
- server, node0, and node1 are still hardcoded names, set in my router's DNS settings. The scripts need to adapt based on the inventory file; as is, this won't work when I provision my next Raspberry Pi 5 cluster (3+3 nodes). I think the majority of that work is in the certs and kubeconfig generation.
- DNS. I am bound by my ISP-provided router (an Orange Funbox). Setting DNS names there is less than ideal: dashes are not supported and subdomains do not work. It also has no support for alternative DNS providers, so additional configuration is needed if I want private DNS servers (e.g. making the private DNS server the DHCP server as well).
- The generation of certificates and kubeconfigs needs to be "ansible-fied"; it is currently handled by bash scripts (a sketch follows this list).
- Refactor "init_server.yaml" to be 3 multiple playbooks. Currently, all of the functionalities of server (api-server, controller-manager, and scheduler) are all in one playbook. But in production environments these can be ran on different hosts and it should be decoupled for flexibility.
- Encryption at rest is still not implemented (an example EncryptionConfiguration follows this list).
- Override ~/.kube/config generation behavior.
- Refactor the GlusterFS provisioning code. The vars feel unorganized and the playbooks are a mess.
- Add revert counterparts to the GlusterFS playbooks.
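
One direction for ansible-fying the cert generation, as a minimal sketch: it assumes the community.crypto collection is installed, a workers inventory group exists, and each host defines ansible_host. All paths and names here are illustrative, not the repo's actual ones.

```yaml
# Hypothetical replacement for part of generate_all.sh using community.crypto.
# Assumes: community.crypto collection, a "workers" group, ansible_host set
# per host in the inventory, and an existing CA at certs/ca.{crt,key}.
- name: Generate per-node kubelet certificates from the inventory
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a private key for each worker
      community.crypto.openssl_privatekey:
        path: "certs/{{ item }}.key"
      loop: "{{ groups['workers'] }}"

    - name: Create a CSR with SANs taken from inventory data
      community.crypto.openssl_csr:
        path: "certs/{{ item }}.csr"
        privatekey_path: "certs/{{ item }}.key"
        common_name: "system:node:{{ item }}"
        organization_name: "system:nodes"
        subject_alt_name:
          - "DNS:{{ item }}"
          - "IP:{{ hostvars[item]['ansible_host'] }}"
      loop: "{{ groups['workers'] }}"

    - name: Sign each CSR with the cluster CA
      community.crypto.x509_certificate:
        path: "certs/{{ item }}.crt"
        csr_path: "certs/{{ item }}.csr"
        ownca_path: "certs/ca.crt"
        ownca_privatekey_path: "certs/ca.key"
        provider: ownca
      loop: "{{ groups['workers'] }}"
```

Because the loop is driven by the inventory, adding the next cluster's nodes would only require editing inventory.yaml, which also addresses the hardcoded-hostnames item above.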
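One possible shape for the init_server.yaml split, assuming the three components are extracted into the hypothetical playbook files named below:

```yaml
# Hypothetical init_server.yaml after the split: each component gets its
# own playbook so it can later be targeted at a different host group.
- import_playbook: api_server.yaml
- import_playbook: controller_manager.yaml
- import_playbook: scheduler.yaml
```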
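For encryption at rest, the standard approach is an EncryptionConfiguration file passed to kube-apiserver via --encryption-provider-config. A minimal example for encrypting Secrets (the key value is a placeholder):

```yaml
# Minimal EncryptionConfiguration for encrypting Secrets at rest.
# Generate a key with: head -c 32 /dev/urandom | base64
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>  # placeholder
      - identity: {}  # fallback so existing unencrypted data stays readable
```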