Error when installing #2221

Open
Nello-Angelo opened this issue Apr 23, 2024 · 2 comments
Labels
bug Something isn't working

Comments

@Nello-Angelo

What version of KubeKey has the issue?

3.1.1

What is your OS environment?

Debian 11.5

KubeKey config file

```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 192.168.163.130, internalAddress: 192.168.163.130, user: root, password: "123"}
  - {name: worker1, address: 192.168.163.131, internalAddress: 192.168.163.131, user: root, password: "123"}
  - {name: worker2, address: 192.168.163.132, internalAddress: 192.168.163.132, user: root, password: "123"} 
  roleGroups:
    etcd:
    - master
    control-plane: 
    - master
    worker:
    - worker1
    - worker2
  controlPlaneEndpoint: 
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.27.0
    clusterName: cluster.local
    masqueradeAll: false
    maxPods: 110
    nodeCidrMaskSize: 24
    proxyMode: ipvs
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubeadm
  network:
    plugin: calico
    calico:
      ipipMode: Never
      vxlanMode: Never
      vethMTU: 1440   
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18    
    multusCNI:
      enabled: false
```
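
For context, a spec like the one above is normally generated with `kk create config` and then edited before the cluster is created. A minimal sketch of that workflow, reusing the file name and Kubernetes version from this report (exact flags can differ between KubeKey releases):

```shell
# Generate an editable sample cluster spec (hosts, roleGroups, network, etc.)
./kk create config --with-kubernetes v1.27.0 -f config-sample.yaml

# After editing the spec, create the cluster from it
./kk create cluster -f config-sample.yaml
```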


### A clear and concise description of what happened.

Running `kk create cluster -f config-sample.yaml` fails at the `[InitKubernetesModule] Init cluster using kubeadm` step. During the kubeadm preflight image pull, `kubesphere/etcd:v3.5.13` cannot be pulled (`docker.io/kubesphere/etcd:v3.5.13: not found`), and `kubeadm init` then times out in the `wait-control-plane` phase, after which KubeKey resets the node. The full output is attached below under "Relevant log output".
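
To confirm that the missing image tag (rather than general connectivity) is the problem, the same pull can be retried by hand on the master node. This is only a diagnostic sketch; the runtime endpoint is the one kubeadm itself suggests in the log:

```shell
# Retry the exact image kubeadm tried to pull; this should reproduce the
# same "not found" error if the tag does not exist on Docker Hub
crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
  pull docker.io/kubesphere/etcd:v3.5.13

# List the etcd images that were actually pulled, if any
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images | grep etcd
```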


### Relevant log output

```shell
root@debian:~# kk create cluster -f config-sample.yaml


 _   __      _          _   __
| | / /     | |        | | / /
| |/ / _   _| |__   ___| |/ /  ___ _   _
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

22:53:42 MSK [GreetingsModule] Greetings
22:53:42 MSK message: [worker2]
Greetings, KubeKey!
22:53:42 MSK message: [master]
Greetings, KubeKey!
22:53:42 MSK message: [worker1]
Greetings, KubeKey!
22:53:42 MSK success: [worker2]
22:53:42 MSK success: [master]
22:53:42 MSK success: [worker1]
22:53:42 MSK [NodePreCheckModule] A pre-check on nodes
22:53:43 MSK success: [worker1]
22:53:43 MSK success: [master]
22:53:43 MSK success: [worker2]
22:53:43 MSK [ConfirmModule] Display confirmation form
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name    | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| master  | y    | y    | y       | y        | y     |       |         | y         |        |        | v1.7.13    |            |             |                  | MSK 22:53:43 |
| worker1 | y    | y    | y       | y        | y     |       |         | y         |        |        | v1.7.13    |            |             |                  | MSK 22:53:43 |
| worker2 | y    | y    | y       | y        | y     |       |         | y         |        |        | v1.7.13    |            |             |                  | MSK 22:53:43 |
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
22:53:45 MSK success: [LocalHost]
22:53:45 MSK [NodeBinariesModule] Download installation binaries
22:53:45 MSK message: [localhost]
downloading amd64 kubeadm v1.28.0 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 48.3M  100 48.3M    0     0  7358k      0  0:00:06  0:00:06 --:--:-- 9034k
22:53:53 MSK message: [localhost]
downloading amd64 kubelet v1.28.0 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  105M  100  105M    0     0  8688k      0  0:00:12  0:00:12 --:--:-- 10.5M
22:54:06 MSK message: [localhost]
downloading amd64 kubectl v1.28.0 ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 47.5M  100 47.5M    0     0  7416k      0  0:00:06  0:00:06 --:--:-- 8901k
22:54:13 MSK message: [localhost]
downloading amd64 helm v3.14.3 ...
22:54:13 MSK message: [localhost]
helm exists
22:54:13 MSK message: [localhost]
downloading amd64 kubecni v1.2.0 ...
22:54:13 MSK message: [localhost]
kubecni exists
22:54:13 MSK message: [localhost]
downloading amd64 crictl v1.29.0 ...
22:54:14 MSK message: [localhost]
crictl exists
22:54:14 MSK message: [localhost]
downloading amd64 etcd v3.5.13 ...
22:54:14 MSK message: [localhost]
etcd exists
22:54:14 MSK message: [localhost]
downloading amd64 containerd 1.7.13 ...
22:54:14 MSK message: [localhost]
containerd exists
22:54:14 MSK message: [localhost]
downloading amd64 runc v1.1.12 ...
22:54:14 MSK message: [localhost]
runc exists
22:54:14 MSK message: [localhost]
downloading amd64 calicoctl v3.27.3 ...
22:54:15 MSK message: [localhost]
calicoctl exists
22:54:15 MSK success: [LocalHost]
22:54:15 MSK [ConfigureOSModule] Get OS release
22:54:15 MSK success: [worker2]
22:54:15 MSK success: [master]
22:54:15 MSK success: [worker1]
22:54:15 MSK [ConfigureOSModule] Prepare to init OS
22:54:15 MSK success: [worker2]
22:54:15 MSK success: [worker1]
22:54:15 MSK success: [master]
22:54:15 MSK [ConfigureOSModule] Generate init os script
22:54:15 MSK success: [master]
22:54:15 MSK success: [worker1]
22:54:15 MSK success: [worker2]
22:54:15 MSK [ConfigureOSModule] Exec init os script
22:54:16 MSK stdout: [worker2]
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
net.core.netdev_max_backlog = 65535
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 1048576
net.ipv4.neigh.default.gc_thresh1 = 512
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.tcp_max_orphans = 65535
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv4.udp_rmem_min = 131072
net.ipv4.udp_wmem_min = 131072
net.ipv4.conf.all.arp_accept = 1
net.ipv4.conf.default.arp_accept = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
vm.max_map_count = 262144
vm.swappiness = 0
vm.overcommit_memory = 0
fs.inotify.max_user_instances = 524288
fs.inotify.max_user_watches = 524288
fs.pipe-max-size = 4194304
fs.aio-max-nr = 262144
kernel.pid_max = 65535
kernel.watchdog_thresh = 5
kernel.hung_task_timeout_secs = 5
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
22:54:16 MSK stdout: [worker1]
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
net.core.netdev_max_backlog = 65535
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 1048576
net.ipv4.neigh.default.gc_thresh1 = 512
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.tcp_max_orphans = 65535
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv4.udp_rmem_min = 131072
net.ipv4.udp_wmem_min = 131072
net.ipv4.conf.all.arp_accept = 1
net.ipv4.conf.default.arp_accept = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
vm.max_map_count = 262144
vm.swappiness = 0
vm.overcommit_memory = 0
fs.inotify.max_user_instances = 524288
fs.inotify.max_user_watches = 524288
fs.pipe-max-size = 4194304
fs.aio-max-nr = 262144
kernel.pid_max = 65535
kernel.watchdog_thresh = 5
kernel.hung_task_timeout_secs = 5
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
22:54:16 MSK stdout: [master]
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
net.core.netdev_max_backlog = 65535
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 1048576
net.ipv4.neigh.default.gc_thresh1 = 512
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.tcp_max_orphans = 65535
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv4.udp_rmem_min = 131072
net.ipv4.udp_wmem_min = 131072
net.ipv4.conf.all.arp_accept = 1
net.ipv4.conf.default.arp_accept = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
vm.max_map_count = 262144
vm.swappiness = 0
vm.overcommit_memory = 0
fs.inotify.max_user_instances = 524288
fs.inotify.max_user_watches = 524288
fs.pipe-max-size = 4194304
fs.aio-max-nr = 262144
kernel.pid_max = 65535
kernel.watchdog_thresh = 5
kernel.hung_task_timeout_secs = 5
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
22:54:16 MSK success: [worker2]
22:54:16 MSK success: [worker1]
22:54:16 MSK success: [master]
22:54:16 MSK [ConfigureOSModule] configure the ntp server for each node
22:54:16 MSK skipped: [worker2]
22:54:16 MSK skipped: [master]
22:54:16 MSK skipped: [worker1]
22:54:16 MSK [KubernetesStatusModule] Get kubernetes cluster status
22:54:16 MSK success: [master]
22:54:16 MSK [InstallContainerModule] Sync containerd binaries
22:54:16 MSK skipped: [master]
22:54:16 MSK skipped: [worker2]
22:54:16 MSK skipped: [worker1]
22:54:16 MSK [InstallContainerModule] Generate containerd service
22:54:16 MSK skipped: [master]
22:54:16 MSK skipped: [worker2]
22:54:16 MSK skipped: [worker1]
22:54:16 MSK [InstallContainerModule] Generate containerd config
22:54:16 MSK skipped: [master]
22:54:16 MSK skipped: [worker2]
22:54:16 MSK skipped: [worker1]
22:54:16 MSK [InstallContainerModule] Enable containerd
22:54:16 MSK skipped: [master]
22:54:16 MSK skipped: [worker2]
22:54:16 MSK skipped: [worker1]
22:54:16 MSK [InstallContainerModule] Sync crictl binaries
22:54:16 MSK skipped: [master]
22:54:16 MSK skipped: [worker2]
22:54:16 MSK skipped: [worker1]
22:54:16 MSK [InstallContainerModule] Generate crictl config
22:54:16 MSK success: [worker2]
22:54:16 MSK success: [master]
22:54:16 MSK success: [worker1]
22:54:16 MSK [PullModule] Start to pull images on all nodes
22:54:16 MSK message: [worker2]
downloading image: kubesphere/pause:3.9
22:54:16 MSK message: [master]
downloading image: kubesphere/etcd:v3.5.13
22:54:16 MSK message: [worker1]
downloading image: kubesphere/pause:3.9
22:54:17 MSK message: [worker1]
downloading image: kubesphere/kube-proxy:v1.28.0
22:54:17 MSK message: [master]
downloading image: kubesphere/pause:3.9
22:54:17 MSK message: [worker2]
downloading image: kubesphere/kube-proxy:v1.28.0
22:54:17 MSK message: [worker1]
downloading image: coredns/coredns:1.9.3
22:54:17 MSK message: [worker2]
downloading image: coredns/coredns:1.9.3
22:54:17 MSK message: [master]
downloading image: kubesphere/kube-apiserver:v1.28.0
22:54:17 MSK message: [worker1]
downloading image: kubesphere/k8s-dns-node-cache:1.22.20
22:54:17 MSK message: [worker2]
downloading image: kubesphere/k8s-dns-node-cache:1.22.20
22:54:17 MSK message: [master]
downloading image: kubesphere/kube-controller-manager:v1.28.0
22:54:17 MSK message: [worker1]
downloading image: calico/kube-controllers:v3.27.3
22:54:17 MSK message: [worker2]
downloading image: calico/kube-controllers:v3.27.3
22:54:17 MSK message: [master]
downloading image: kubesphere/kube-scheduler:v1.28.0
22:54:17 MSK message: [worker1]
downloading image: calico/cni:v3.27.3
22:54:17 MSK message: [worker2]
downloading image: calico/cni:v3.27.3
22:54:17 MSK message: [master]
downloading image: kubesphere/kube-proxy:v1.28.0
22:54:17 MSK message: [worker1]
downloading image: calico/node:v3.27.3
22:54:17 MSK message: [worker2]
downloading image: calico/node:v3.27.3
22:54:17 MSK message: [master]
downloading image: coredns/coredns:1.9.3
22:54:17 MSK message: [worker1]
downloading image: calico/pod2daemon-flexvol:v3.27.3
22:54:17 MSK message: [worker2]
downloading image: calico/pod2daemon-flexvol:v3.27.3
22:54:17 MSK message: [master]
downloading image: kubesphere/k8s-dns-node-cache:1.22.20
22:54:17 MSK message: [worker1]
downloading image: library/haproxy:2.9.6-alpine
22:54:17 MSK message: [worker2]
downloading image: library/haproxy:2.9.6-alpine
22:54:17 MSK message: [master]
downloading image: calico/kube-controllers:v3.27.3
22:54:17 MSK message: [master]
downloading image: calico/cni:v3.27.3
22:54:17 MSK message: [master]
downloading image: calico/node:v3.27.3
22:54:17 MSK message: [master]
downloading image: calico/pod2daemon-flexvol:v3.27.3
22:54:17 MSK success: [worker1]
22:54:17 MSK success: [worker2]
22:54:17 MSK success: [master]
22:54:17 MSK [InstallKubeBinariesModule] Synchronize kubernetes binaries
22:54:40 MSK success: [master]
22:54:40 MSK success: [worker2]
22:54:40 MSK success: [worker1]
22:54:40 MSK [InstallKubeBinariesModule] Change kubelet mode
22:54:40 MSK success: [worker1]
22:54:40 MSK success: [master]
22:54:40 MSK success: [worker2]
22:54:40 MSK [InstallKubeBinariesModule] Generate kubelet service
22:54:40 MSK success: [worker1]
22:54:40 MSK success: [master]
22:54:40 MSK success: [worker2]
22:54:40 MSK [InstallKubeBinariesModule] Enable kubelet service
22:54:41 MSK success: [worker2]
22:54:41 MSK success: [worker1]
22:54:41 MSK success: [master]
22:54:41 MSK [InstallKubeBinariesModule] Generate kubelet env
22:54:41 MSK success: [worker1]
22:54:41 MSK success: [master]
22:54:41 MSK success: [worker2]
22:54:41 MSK [InitKubernetesModule] Generate kubeadm config
22:54:41 MSK success: [master]
22:54:41 MSK [InitKubernetesModule] Generate audit policy
22:54:41 MSK skipped: [master]
22:54:41 MSK [InitKubernetesModule] Generate audit webhook
22:54:41 MSK skipped: [master]
22:54:41 MSK [InitKubernetesModule] Init cluster using kubeadm
22:59:44 MSK stdout: [master]
W0423 22:54:41.834178    5820 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.28.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
        [WARNING ImagePull]: failed to pull image kubesphere/etcd:v3.5.13: output: E0423 22:55:39.163611    5916 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"docker.io/kubesphere/etcd:v3.5.13\": failed to resolve reference \"docker.io/kubesphere/etcd:v3.5.13\": docker.io/kubesphere/etcd:v3.5.13: not found" image="kubesphere/etcd:v3.5.13"
time="2024-04-23T22:55:39+03:00" level=fatal msg="pulling image: rpc error: code = NotFound desc = failed to pull and unpack image \"docker.io/kubesphere/etcd:v3.5.13\": failed to resolve reference \"docker.io/kubesphere/etcd:v3.5.13\": docker.io/kubesphere/etcd:v3.5.13: not found"
, error: exit status 1
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost master master.cluster.local worker1 worker1.cluster.local worker2 worker2.cluster.local] and IPs [10.233.0.1 192.168.163.130 127.0.0.1 192.168.163.131 192.168.163.132]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.163.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.163.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
22:59:45 MSK stdout: [master]
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0423 22:59:44.708699    6563 reset.go:120] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 192.168.163.130:6443: connect: connection refused
[preflight] Running pre-flight checks
W0423 22:59:44.708888    6563 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
22:59:45 MSK message: [master]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
W0423 22:54:41.834178    5820 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.28.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
        [WARNING ImagePull]: failed to pull image kubesphere/etcd:v3.5.13: output: E0423 22:55:39.163611    5916 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"docker.io/kubesphere/etcd:v3.5.13\": failed to resolve reference \"docker.io/kubesphere/etcd:v3.5.13\": docker.io/kubesphere/etcd:v3.5.13: not found" image="kubesphere/etcd:v3.5.13"
time="2024-04-23T22:55:39+03:00" level=fatal msg="pulling image: rpc error: code = NotFound desc = failed to pull and unpack image \"docker.io/kubesphere/etcd:v3.5.13\": failed to resolve reference \"docker.io/kubesphere/etcd:v3.5.13\": docker.io/kubesphere/etcd:v3.5.13: not found"
, error: exit status 1
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost master master.cluster.local worker1 worker1.cluster.local worker2 worker2.cluster.local] and IPs [10.233.0.1 192.168.163.130 127.0.0.1 192.168.163.131 192.168.163.132]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.163.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.163.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
```

### Additional information

_No response_
@Nello-Angelo added the `bug` (Something isn't working) label on Apr 23, 2024
@Nello-Angelo (Author)

KubeKey v3.1.0-alpha.5 works.
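
If downgrading is used as a workaround, the KubeKey release can be pinned when fetching the binary. A sketch based on the install script documented in the KubeKey README (treat the URL and version string as assumptions to verify against the repository):

```shell
# Fetch a specific KubeKey release instead of the latest one
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.1.0-alpha.5 sh -

# Confirm which binary is now in use
./kk version
```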

@littlejiancc

Encountered the same problem. In my setup the registry node and the master node are the same host, which causes problems with containerd: the /etc/containerd directory ends up missing.
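
A quick way to check for that symptom on the affected node, and to restore a stock configuration if the directory really is missing, is sketched below. KubeKey normally renders its own containerd config, so regenerating containerd's built-in default is only a diagnostic step, not a replacement for what the installer would have written:

```shell
# Is the containerd config directory present?
ls -l /etc/containerd

# If not, recreate it with containerd's built-in default configuration
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
```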
