k8s_lab with kind: setting up a simple cluster
- kubectl rollout
- kubectl cp
- kubectl diff
- kubectl debug
- netshoot: a Docker + Kubernetes network trouble-shooting swiss-army container
- Need some explanation of kubectl stdin and pipes
- Why does dig not resolve a K8s Service by DNS name while nslookup has no problem with it? (see the sketch after this list)
- Workloads
- Persistent Volumes
- Access Modes
- uchan/Installation
- k8s NFS Output: mount.nfs: Operation not permitted
- I was trying to create a PVC with RWX access mode in the kind cluster and saw an error that it is not supported.
- Enable Simulation of automatically provisioned ReadWriteMany PVs
- docker Language-specific guides
- DNS for Services and Pods
- Service type
- k8s/components
- rbac
- install-bash-auto-completion
- Spring Cloud for Microservices Compared to Kubernetes
- Re: [Question] Is cloud technology a must-have skill for Java engineers?
- An architect's point of view: what kind of CI/CD do you need?
- How to read a flame graph? Flamegraph
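A quick sketch of the dig vs. nslookup question above (my-svc and my-ns are placeholder names, not from this lab): nslookup applies the search domains from the pod's /etc/resolv.conf, while dig ignores the search list unless told otherwise.
#inside a pod, /etc/resolv.conf contains search domains like my-ns.svc.cluster.local svc.cluster.local cluster.local
nslookup my-svc
#dig does not use the search list by default, so the short name fails
dig my-svc
#either enable the search list or use the fully-qualified name
dig +search my-svc
dig my-svc.my-ns.svc.cluster.local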
#check docker / podman
which docker
which podman
podman ps
#check kubectl
#setting up the kubectl alias
which kubectl
kubectl version --client
#kustomize
curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
sudo mv kustomize /usr/local/bin/
#kustomize bash completion: either system-wide (as root) or into the per-user completion dir
# sudo -s
# kustomize completion bash > /usr/share/bash-completion/completions/kustomize
#make sure the per-user completions directory exists first (see the mkdir -p step below)
kustomize completion bash > ${BASH_COMPLETION_USER_DIR:-${XDG_DATA_HOME:-$HOME/.local/share}/bash-completion}/completions/kustomize
exec bash
#test kustomize build
kustomize build deployment/hong-lab/base/
#install kubie
wget https://github.com/sbstp/kubie/releases/download/v0.22.0/kubie-linux-amd64
mv kubie-linux-amd64 kubie
chmod +x kubie
sudo mv kubie /usr/local/bin/
#check kubie
which kubie
#add alias for kubie
vim ~/.bashrc
alias k='kubectl'
alias kic='kubie ctx'
alias kin='kubie ns'
#vim .kube/config
#bash completion: create the per-user completions directory
mkdir -p ${BASH_COMPLETION_USER_DIR:-${XDG_DATA_HOME:-$HOME/.local/share}/bash-completion}/completions
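#kubectl's own completion can go into the same per-user directory (same pattern as kustomize and kind)
kubectl completion bash > ${BASH_COMPLETION_USER_DIR:-${XDG_DATA_HOME:-$HOME/.local/share}/bash-completion}/completions/kubectl
#to make completion also work for the k alias, add this line to ~/.bashrc (standard snippet from the kubectl docs)
complete -o default -F __start_kubectl k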
#replace the current shell so the new completion files are picked up
exec bash
#install go
wget https://go.dev/dl/go1.21.4.linux-amd64.tar.gz
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf go1.21.4.linux-amd64.tar.gz
#add /usr/local/go/bin to PATH (e.g. append `export PATH=$PATH:/usr/local/go/bin` to ~/.bashrc)
exec bash
go version
which go
#install kind
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
#check kind
which kind
#install kind completion bash
kind completion bash > ${BASH_COMPLETION_USER_DIR:-${XDG_DATA_HOME:-$HOME/.local/share}/bash-completion}/completions/kind
kind create --help
#create a local k8s cluster from a config file
#according to --config, this creates a k8s cluster with 1 control-plane node and 2 worker nodes
cd architecture/
kind create cluster --name hong-cluster --config kind-example.config.yaml
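#a minimal sketch of what kind-example.config.yaml is assumed to contain
#(one control-plane node and two worker nodes; the actual file lives in architecture/):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker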
#delete
kind delete cluster --name hong-cluster
#ERROR: failed to create cluster: running kind with rootless provider requires cgroup v2, see https://kind.sigs.k8s.io/docs/user/rootless/
sudo vim /etc/default/grub
#add this to /etc/default/grub
GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=1"
#update
sudo update-grub
#
sudo mkdir -p /etc/systemd/system/user@.service.d/
cat <<EOF | sudo tee /etc/systemd/system/user@.service.d/delegate.conf
[Service]
Delegate=yes
EOF
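#reload systemd so the Delegate= drop-in takes effect (logging out/in or a reboot may also be required)
sudo systemctl daemon-reload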
#
cat <<EOF | sudo tee /etc/modules-load.d/iptables.conf
ip6_tables
ip6table_nat
ip_tables
iptable_nat
EOF
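#load the listed modules right away instead of waiting for the next boot
sudo systemctl restart systemd-modules-load.service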
#check cluster
kind get clusters
kind get nodes --name hong-cluster
#delete the cluster
kind delete cluster --name hong-cluster
#ops with kind cluster
kic
#leave with exit
exit
#setting up kubie with ~/.kube
#check details in kubie.yaml
touch ~/.kube/kubie.yaml
#copy and paste the YAML from k8s_lab/env/kubie.yaml
vim ~/.kube/kubie.yaml
mkdir -p ~/.kube/configs
#move the default kubeconfig into configs/ and rename it
mv ~/.kube/config ~/.kube/configs/kind-hong-cluster.yaml
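#a sketch of what ~/.kube/kubie.yaml might contain so kubie picks up the files under ~/.kube/configs
#(the actual settings are in k8s_lab/env/kubie.yaml):
configs:
  include:
    - ~/.kube/configs/*.yaml
    - ~/.kube/configs/*.yml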
#test with uchan
kic
kin
k create ns hong-lab
k create ns uchan
k create ns varnish-operator
#example output from `kind delete cluster --name hong-cluster`
# enabling experimental podman provider
# Deleting cluster "hong-cluster" ...
# Deleted nodes: ["hong-cluster-worker" "hong-cluster-worker2" "hong-cluster-control-plane"]
#Setting Up An Ingress Controller
#src: https://kind.sigs.k8s.io/docs/user/ingress/#ingress-nginx
#ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=90s
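#the curl tests below hit localhost:38000, which assumes the kind config used at cluster creation
#maps containerPort 80 of the control-plane node to hostPort 38000 and labels that node ingress-ready,
#roughly like this (ports and labels are assumptions, adjust to the real kind-example.config.yaml):
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 38000
    protocol: TCP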
#test ingress
#kin hong-lab
kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/usage.yaml
# should output "foo-app"
curl localhost:38000/foo/hostname
# should output "bar-app"
curl localhost:38000/bar/hostname
#check container images
podman images
#Test env
# export KUBECONFIG=/home/hong/.kube/configs/hong-cluster.yaml
env | grep "KUBECONFIG"
#deploy a single YAML manifest to k8s (server-side dry run)
kubectl apply -f ns.yaml --dry-run=server
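#a sketch of what ns.yaml might look like (the namespace name is an assumption):
apiVersion: v1
kind: Namespace
metadata:
  name: hong-lab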
#port-forward
kubectl port-forward services/postgres-svc 9000:5432
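#with the port-forward running, the database is reachable on localhost:9000 from the host,
#e.g. (assuming psql is installed and a postgres user exists):
psql -h localhost -p 9000 -U postgres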
#deploy objects to k8s with kustomize and kubectl
kustomize build . | kubectl apply -f - --dry-run=server
kustomize build . | kubectl apply -f -
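#a minimal sketch of the kustomization.yaml that `kustomize build .` expects in the current directory
#(the listed resource files are assumptions):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ns.yaml
  - deployment.yaml
  - service.yaml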
#test with ubuntu image
kubectl run test-ubuntu --image=ubuntu --restart=Never -- sleep 1d
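#once the pod is Running, open a shell in it for ad-hoc checks, then clean up
kubectl exec -it test-ubuntu -- bash
kubectl delete pod test-ubuntu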
- Ingress NGINX Controller
- metrics-server
- fluentbit
- Reloader
- gitlab runner
- emqx-operator
- Specify secrets for listeners TLS certificates
- SSL/TLS: TLS termination proxy
- nacos
- node discovery in Nacos cluster mode
- Kubernetes Nacos
- nacos-k8s
- kubevirt
- RabbitMQ
- prometheus
- ELK (Elasticsearch/Logstash/Kibana) stack or EFK (Elasticsearch/Fluent Bit/Kibana) stack
#test metrics-server
kubectl top nodes
kubectl top pods
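#if kubectl top fails on kind, a commonly used workaround is to let metrics-server skip kubelet TLS verification
#(an assumption about this setup; only do this in a lab cluster):
kubectl patch deployment metrics-server -n kube-system --type=json \
  -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'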
== We're Using GitHub Under Protest ==
This project is currently hosted on GitHub. This is not ideal; GitHub is a proprietary, trade-secret system that is not Free and Open Source Software (FOSS). We are deeply concerned about using a proprietary system like GitHub to develop our FOSS project. We have an open {bug ticket, mailing list thread, etc.} where the project contributors are actively discussing how we can move away from GitHub in the long term. We urge you to read about the Give up GitHub campaign from the Software Freedom Conservancy to understand some of the reasons why GitHub is not a good place to host FOSS projects.
If you are a contributor who personally has already quit using GitHub, please check this resource for how to send us contributions without using GitHub directly.
Any use of this project's code by GitHub Copilot, past or present, is done without our permission. We do not consent to GitHub's use of this project's code in Copilot.