Error output during installation:
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
10:23:43 CST stdout: [master]
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0518 10:23:43.212698 52331 reset.go:103] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 10.30.10.13:6443: connect: connection refused
systemctl status kubelet
[root@master ~]# systemctl status kubelet -l
○ kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: inactive (dead) since Sat 2024-05-18 13:10:52 CST; 20h ago
Docs: http://kubernetes.io/docs/
Process: 25693 ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=0/SUCCESS)
Main PID: 25693 (code=exited, status=0/SUCCESS)
CPU: 2.152s
May 18 13:10:52 master kubelet[25693]: E0518 13:10:52.207042 25693 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 13:10:52 master kubelet[25693]: E0518 13:10:52.258281 25693 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master\" not found"
May 18 13:10:52 master kubelet[25693]: E0518 13:10:52.290267 25693 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network>
May 18 13:10:52 master kubelet[25693]: E0518 13:10:52.307719 25693 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 13:10:52 master kubelet[25693]: E0518 13:10:52.408646 25693 kubelet.go:2448] "Error getting node" err="node "master" not found"
May 18 13:10:52 master kubelet[25693]: I0518 13:10:52.453330 25693 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 18 13:10:52 master systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
May 18 13:10:52 master systemd[1]: kubelet.service: Deactivated successfully.
May 18 13:10:52 master systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 18 13:10:52 master systemd[1]: kubelet.service: Consumed 2.152s CPU time.
Relevant log output
journalctl -u kubelet | less
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.028247 24629 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 173.194.174.82:443: i/o timeout"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.028297 24629 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 173.194.174.82:443: i/o timeout" pod="kube-system/kube-apiserver-master"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.028322 24629 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 173.194.174.82:443: i/o timeout" pod="kube-system/kube-apiserver-master"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.028371 24629 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-master_kube-system(85c702565972003fef2047c1d4381b47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-master_kube-system(85c702565972003fef2047c1d4381b47)\\\": rpc error: code = DeadlineExceeded desc = failed to get sandbox image \\\"registry.k8s.io/pause:3.8\\\": failed to pull image \\\"registry.k8s.io/pause:3.8\\\": failed to pull and unpack image \\\"registry.k8s.io/pause:3.8\\\": failed to resolve reference \\\"registry.k8s.io/pause:3.8\\\": failed to do request: Head \\\"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\\\": dial tcp 173.194.174.82:443: i/o timeout\"" pod="kube-system/kube-apiserver-master" podUID=85c702565972003fef2047c1d4381b47
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.051437 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.152102 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.252506 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.353246 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.453684 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.554420 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.654520 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.754968 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.855634 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:09 master kubelet[24629]: E0518 12:59:09.955940 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.056573 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.157297 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.257810 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.358416 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.459080 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.471881 24629 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://lb.kubesphere.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master?timeout=10s": dial tcp 10.30.10.13:6443: connect: connection refused
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.559284 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: I0518 12:59:10.578336 24629 kubelet_node_status.go:70] "Attempting to register node" node="master"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.578790 24629 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://lb.kubesphere.local:6443/api/v1/nodes\": dial tcp 10.30.10.13:6443: connect: connection refused" node="master"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.660110 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.760766 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.861511 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:10 master kubelet[24629]: E0518 12:59:10.962549 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:11 master kubelet[24629]: E0518 12:59:11.062884 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:11 master kubelet[24629]: E0518 12:59:11.163298 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:11 master kubelet[24629]: E0518 12:59:11.263576 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:11 master kubelet[24629]: E0518 12:59:11.364322 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:11 master kubelet[24629]: E0518 12:59:11.464921 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:11 master kubelet[24629]: E0518 12:59:11.565703 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:11 master kubelet[24629]: E0518 12:59:11.666495 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
May 18 12:59:11 master kubelet[24629]: E0518 12:59:11.767049 24629 kubelet.go:2448] "Error getting node" err="node \"master\" not found"
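The journal above shows the underlying failure: the node cannot reach registry.k8s.io to pull the pause:3.8 sandbox image, so the kube-apiserver static Pod never gets a sandbox, the API server never comes up, and every later request to lb.kubesphere.local:6443 is refused. One common workaround, sketched here under the assumption that an Aliyun mirror (or any other registry) is reachable from the node, is to point containerd's sandbox image at that mirror:

```toml
# Fragment of /etc/containerd/config.toml (containerd 1.7.x CRI plugin section).
# The mirror path below is an assumption, not taken from this report;
# substitute any registry the node can actually reach.
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"
```

After editing, restart containerd (`systemctl restart containerd`) before re-running the installation, so the CRI plugin picks up the new sandbox image.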
Additional information
Kubernetes: v1.25.3
RuntimeName: containerd
RuntimeVersion: v1.7.1
What version of KubeKey has the issue?
v3.1.1
What is your OS environment?
openEuler 22.10 LTS
KubeKey config file
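(The config file itself was not attached.) If the root cause is the unreachable registry.k8s.io, KubeKey's cluster config also lets image pulls be redirected to a private or mirror registry. A minimal sketch, with the mirror address being an assumption rather than something taken from this report:

```yaml
# Fragment of a KubeKey Cluster config (kind: Cluster), spec.registry section.
spec:
  registry:
    # Assumed mirror; replace with a registry the nodes can actually reach.
    privateRegistry: "registry.cn-beijing.aliyuncs.com/kubesphereio"
    namespaceOverride: ""
```

With `privateRegistry` set, KubeKey rewrites the control-plane image references (including the pause image) to the given registry, which avoids the direct pull from registry.k8s.io that failed here.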