- install virtualbox
- install vagrant
- run
vagrant up
- when Vagrant starts a VM, it may ask which network interface to use; select the public-facing network interface so the VMs can download the Kubernetes packages
- after the VM environment is ready, log in to kube-master with
vagrant ssh kube-master
- get the join token with:
sudo kubeadm token list
- copy the token and exit the current ssh session
- log in to kube-node and run the join command
vagrant ssh kube-node
sudo kubeadm join --token=<token> 11.22.33.44:6443
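Before pasting the token into the join command, it can help to sanity-check it against kubeadm's token format, `[a-z0-9]{6}.[a-z0-9]{16}`. A small sketch (the token below is a hypothetical placeholder, not a real one):

```shell
# Hypothetical token copied from `sudo kubeadm token list` on kube-master.
token="abcdef.0123456789abcdef"

# kubeadm tokens are 6 lowercase alphanumerics, a dot, then 16 more.
if printf '%s' "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "token looks valid"
else
  echo "token looks malformed" >&2
fi
```

A malformed token (often a stray space or truncated paste) is the most common reason the join command fails.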
- on the master node, edit /etc/kubernetes/manifests/kube-apiserver.yaml with sudo
sudo nano /etc/kubernetes/manifests/kube-apiserver.yaml
set --advertise-address and livenessProbe/httpGet/host to 11.22.33.44, and make sure --secure-port and livenessProbe/httpGet/port are both 6443
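The address replacement can also be scripted with sed instead of editing by hand. A sketch, assuming the flag line looks like a default kubeadm manifest (on the master you would point sed at /etc/kubernetes/manifests/kube-apiserver.yaml; a sample line stands in for the file here):

```shell
# Sample line as it might appear in a default kubeadm-generated manifest.
sample='    - --advertise-address=10.0.2.15'

# Rewrite the advertise address to the master IP used throughout this guide.
patched=$(printf '%s\n' "$sample" | sed 's/--advertise-address=.*/--advertise-address=11.22.33.44/')
echo "$patched"
```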
- add kube-dns to the DNS nameservers; first find the kube-dns service with:
kubectl -n kube-system describe service kube-dns
then you should get the name server's IP address; in my environment it is 10.96.0.10
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: <none>
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.96.0.10
Port: dns 53/UDP
Endpoints: 10.244.0.9:53
Port: dns-tcp 53/TCP
Endpoints: 10.244.0.9:53
Session Affinity: None
Events: <none>
edit /etc/resolv.conf and add nameserver <dns-ip>
echo "nameserver 10.96.0.10" >> /etc/resolv.conf
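Copying the IP by hand is error-prone, so the ClusterIP can be pulled out of the describe output with awk. A sketch: the here-document below mirrors the output shown above; on a live node you would pipe `kubectl -n kube-system describe service kube-dns` instead of the sample text.

```shell
# Sample `kubectl describe` output standing in for the live command.
describe='Name:              kube-dns
Namespace:         kube-system
IP:                10.96.0.10
Port:              dns  53/UDP'

# The ClusterIP is the second field of the line starting with "IP:".
dns_ip=$(printf '%s\n' "$describe" | awk '/^IP:/ {print $2}')
echo "nameserver $dns_ip"
```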
- log in to kube-master and check that all nodes are ready
vagrant ssh kube-master
kubectl get nodes # check if all nodes are ready
kubectl get pod --all-namespaces # check if all pods are running
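The readiness check can be scripted rather than eyeballed: any node whose STATUS column is not `Ready` counts as a failure. A sketch: the sample listing below mirrors `kubectl get nodes --no-headers` for the two VMs in this guide (the version strings are assumptions); in a live cluster, substitute the real command.

```shell
# Sample `kubectl get nodes --no-headers` output for the two-VM cluster.
nodes='kube-master   Ready    master   5m    v1.10.0
kube-node     Ready    <none>   3m    v1.10.0'

# Count rows whose second column (STATUS) is anything other than Ready.
not_ready=$(printf '%s\n' "$nodes" | awk '$2 != "Ready"' | wc -l)
if [ "$not_ready" -eq 0 ]; then
  echo "all nodes ready"
fi
```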
- deploy /tmp/resource/demo.yaml and /tmp/resource/demo-service.yaml with
vagrant ssh kube-master
kubectl apply -f /tmp/resource/demo.yaml
kubectl apply -f /tmp/resource/demo-service.yaml
- get service
> kubectl describe service nginxservice
Name: nginxservice
Namespace: default
Labels: name=nginxservice
Annotations: kubectl.kubernetes.io/last-applied-configuration=...
Selector: app=nginx
Type: NodePort
IP: 10.100.172.146
Port: http 80/TCP
NodePort: http 31212/TCP
Endpoints: 10.244.1.38:80,10.244.2.9:80
Session Affinity: None
Events: <none>
- hit the main page:
curl http://10.100.172.146
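The ClusterIP above only works from inside the cluster; because the service type is NodePort, it is also reachable on any node's IP at the listed node port. A sketch deriving that URL from the NodePort line of the describe output (11.22.33.44 is the master IP used throughout this guide; the sample line stands in for the live command):

```shell
# NodePort line as shown in the `kubectl describe service` output above.
nodeport_line='NodePort:          http  31212/TCP'

# Third field is "31212/TCP"; strip the protocol to get the port number.
port=$(printf '%s\n' "$nodeport_line" | awk '{print $3}' | cut -d/ -f1)
echo "http://11.22.33.44:${port}"
```

That URL should serve the same nginx page from outside the VM as well, since NodePorts bind on the node's own interfaces.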
- deploy the dashboard from /tmp/resource/dashboard.yaml
vagrant ssh kube-master
kubectl apply -f /tmp/resource/dashboard.yaml
First, add your AWS credentials to the environment (no spaces around `=` in shell assignments):
export AWS_ACCESS_KEY_ID=<your-aws-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-aws-secret-access-key>
export AWS_DEFAULT_REGION=<default-region>
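Since the Vagrantfile forwards these variables into the VM, it's worth failing fast if any of them is unset before running `vagrant up`. A sketch (the dummy values are placeholders for illustration only; use your real credentials):

```shell
# Dummy credentials so the sketch is self-contained -- replace with real ones.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLE"
export AWS_SECRET_ACCESS_KEY="examplesecret"
export AWS_DEFAULT_REGION="us-east-1"

# Abort early if any required variable is empty or unset.
for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_DEFAULT_REGION; do
  eval "val=\${$v}"
  if [ -z "$val" ]; then
    echo "missing $v" >&2
    exit 1
  fi
done
echo "aws env ok"
```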
Open the Vagrantfile, comment out the default provisioner, and uncomment the provisioner with args --enable-aws-ecr:
# default provision command
# config.vm.provision "shell", path: "common.sh", env: {"AWS_ACCESS_KEY_ID" => ENV["AWS_ACCESS_KEY_ID"], "AWS_SECRET_ACCESS_KEY" => ENV["AWS_SECRET_ACCESS_KEY"], "AWS_DEFAULT_REGION" => ENV["AWS_DEFAULT_REGION"]}
# if you want to add AWS ECR to your Docker registries, comment out the default provision command and uncomment the next line:
config.vm.provision "shell", path: "common.sh", args: "--enable-aws-ecr", env: {"AWS_ACCESS_KEY_ID" => ENV["AWS_ACCESS_KEY_ID"], "AWS_SECRET_ACCESS_KEY" => ENV["AWS_SECRET_ACCESS_KEY"], "AWS_DEFAULT_REGION" => ENV["AWS_DEFAULT_REGION"]}
...
...
- edit the Vagrantfile to add a VM section, then bootstrap the node with
vagrant up new_node_name
- join the current kubernetes cluster
vagrant ssh new_node_name
sudo kubeadm join --token=<token> 11.22.33.44:6443
After the kubernetes cluster has bootstrapped, you can shut it down with
vagrant halt
And start it again with
vagrant up --no-provision # the --no-provision flag avoids running the bootstrap commands again
Also, reload with
vagrant reload --no-provision