
Fail on the Kubeadm init process - timeout on waiting for the kubelet to boot up, from fresh new GCP debian OS #2853

Closed
jerryxnqiu opened this issue Apr 3, 2023 · 2 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

@jerryxnqiu

I was able to complete the installation process about 20 days ago, but now kubeadm init keeps failing with the timeout messages below. Since I am installing on a fresh new VM using GCP's Debian image, I could not find a similar issue or resolution on the internet. I am not sure where I went wrong; could someone please take a look and advise? The configuration and log details are below. Thanks.

Versions

kubeadm version (use kubeadm version): v1.26.3

Environment:

  • Kubernetes version (use kubectl version): v1.26.3
  • Cloud provider or hardware configuration: GCP Compute Engine / OS: debian-11-bullseye-v20230306 / CPU: c3-highcpu-4, x86_64 / RAM: 4G / Storage: 10G
  • OS (e.g. from /etc/os-release): OS: debian-11-bullseye-v20230306
  • Kernel (e.g. uname -a): Linux k8s-master 5.10.0-21-cloud-amd64 #1 SMP Debian 5.10.162-1 (2023-01-21) x86_64 GNU/Linux
  • Container runtime (CRI) (e.g. containerd, cri-o): Docker
  • Container networking plugin (CNI) (e.g. Calico, Cilium): none installed yet, but the init command uses Calico's default pod CIDR
  • Others: swap is disabled
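The prerequisites listed above (swap disabled, bridged-traffic settings, the cri-dockerd socket) can be sanity-checked before re-running init. A quick sketch, assuming the stock systemd unit names shipped by the Docker and cri-dockerd packages (docker, cri-docker); adjust if yours differ:

```shell
# Environment sanity checks before re-running kubeadm init.
swapon --show                       # no output means swap is fully disabled
lsmod | grep br_netfilter           # module must be loaded for bridged-traffic rules
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables   # both should be 1
systemctl is-active docker cri-docker kubelet
ls -l /var/run/cri-dockerd.sock     # the socket the init command points at
```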

What happened?

Following the installation process from the official website, the init timed out at the stage below:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

Below are the steps I followed before running kubeadm init:

  1. Install Docker Engine
  2. Disable swap
  3. Configure IPv4 forwarding and let iptables see bridged traffic
  4. Install cri-dockerd for Docker
  5. Install kubeadm, kubelet and kubectl
  6. Run sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock --apiserver-advertise-address=
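Step 3 is the one most easily mis-applied; for reference, the kernel-module and sysctl settings from the official Kubernetes container-runtime prerequisites look like this (a sketch of the documented settings, not taken from this report):

```shell
# Load the bridge-netfilter and overlay modules now and on every boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Let iptables see bridged traffic and enable IPv4 forwarding, persistently
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```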

What you expected to happen?

The success message appears: "Your Kubernetes control-plane has initialized successfully!"

How to reproduce it (as minimally and precisely as possible)?

Create a new VM on GCP and follow the steps above; the failure reproduces every time.

Anything else we need to know?

  1. Below is the full output of the kubeadm init run:

jerry_xnqiu@k8s-master:~/cri-dockerd$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock --apiserver-advertise-address=10.128.0.27
[init] Using Kubernetes version: v1.26.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.128.0.27]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [10.128.0.27 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [10.128.0.27 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

  2. Below is the image list obtained during the process:

jerry_xnqiu@k8s-master:~/cri-dockerd$ kubeadm config images list
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
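Since the init run pulls these images inline, pre-pulling them through the same CRI socket can rule out registry or pull problems as a cause of the crash loops. A sketch reusing the same version and socket as the failing command:

```shell
# Pre-pull the control-plane images over the cri-dockerd socket
sudo kubeadm config images pull \
  --kubernetes-version v1.26.3 \
  --cri-socket unix:///var/run/cri-dockerd.sock
```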

  3. Below is the output of "systemctl status kubelet":

● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Mon 2023-04-03 11:34:28 UTC; 4min 59s ago
Docs: https://kubernetes.io/docs/home/
Main PID: 68292 (kubelet)
Tasks: 12 (limit: 9522)
Memory: 51.2M
CPU: 17.657s
CGroup: /system.slice/kubelet.service
└─68292 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --pod-infra-contain>

Apr 03 11:39:26 k8s-master kubelet[68292]: E0403 11:39:26.340773 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 2m40s restarting failed container=etcd pod=etcd-k8s-maste>
Apr 03 11:39:26 k8s-master kubelet[68292]: E0403 11:39:26.457699 68292 controller.go:146] failed to ensure lease exists, will retry in 7s, error: Get "https://10.128.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-master?timeout=10>
Apr 03 11:39:27 k8s-master kubelet[68292]: I0403 11:39:27.083195 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49953648ecd65c62562298add39b5ee52c5238f1b426f660b8ae8721de02d80d"
Apr 03 11:39:27 k8s-master kubelet[68292]: I0403 11:39:27.114251 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6cc91233ed1465e33c6c630b9a328c9097376f9e76fcea284e1eac20dc8abe3"
Apr 03 11:39:27 k8s-master kubelet[68292]: I0403 11:39:27.169727 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57fafde520b4870f20b0c7233ed81cc6519c726a805ca74fff4d655ee1fcce77"
Apr 03 11:39:27 k8s-master kubelet[68292]: I0403 11:39:27.234877 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bda929d3d77c2508494444b223a66a712851bbeb9fcfd04ddf5e2062a6206df5"
Apr 03 11:39:27 k8s-master kubelet[68292]: E0403 11:39:27.303188 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-schedule>
Apr 03 11:39:27 k8s-master kubelet[68292]: E0403 11:39:27.340260 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-apiserve>
Apr 03 11:39:27 k8s-master kubelet[68292]: E0403 11:39:27.430147 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube>
Apr 03 11:39:27 k8s-master kubelet[68292]: E0403 11:39:27.497530 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 2m40s restarting failed container=etcd pod=etcd-k8s-maste>

  4. Below is the output of "journalctl -xeu kubelet":

jerry_xnqiu@k8s-master:~/cri-dockerd$ journalctl -xeu kubelet
Apr 03 11:40:17 k8s-master kubelet[68292]: E0403 11:40:17.758653 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-schedule>
Apr 03 11:40:17 k8s-master kubelet[68292]: E0403 11:40:17.768204 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-apiserve>
Apr 03 11:40:17 k8s-master kubelet[68292]: E0403 11:40:17.778483 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube->
Apr 03 11:40:17 k8s-master kubelet[68292]: E0403 11:40:17.886149 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 5m0s restarting failed container=etcd pod=etcd-k8s-master>
Apr 03 11:40:18 k8s-master kubelet[68292]: I0403 11:40:18.671310 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53dbd506cb1f9a5a95e2be6d0e1a92abf2760f9c5a4154f575e56cf2f5c8f2b4"
Apr 03 11:40:18 k8s-master kubelet[68292]: I0403 11:40:18.714522 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ee7add4c0dca62c297ab7a3301e7d4ca2d9e0e6cb17976f47814b39433976f4"
Apr 03 11:40:18 k8s-master kubelet[68292]: E0403 11:40:18.750979 68292 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"k8s-master.1752698dd13e86b1", GenerateName:"", Namespace:"defau>
Apr 03 11:40:18 k8s-master kubelet[68292]: I0403 11:40:18.761126 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57665ece7a83ff41ef8a7b02171407ec11e6696830b76588a5ed2c646063b6ca"
Apr 03 11:40:18 k8s-master kubelet[68292]: I0403 11:40:18.813230 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69142bea178db7de70aea0dc46ab1d3fcebf453adf6158c76869ae6c8a0f6cb5"
Apr 03 11:40:18 k8s-master kubelet[68292]: E0403 11:40:18.958992 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube->
Apr 03 11:40:18 k8s-master kubelet[68292]: E0403 11:40:18.964291 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-apiserve>
Apr 03 11:40:18 k8s-master kubelet[68292]: E0403 11:40:18.977189 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 5m0s restarting failed container=etcd pod=etcd-k8s-master>
Apr 03 11:40:19 k8s-master kubelet[68292]: E0403 11:40:19.085226 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-schedule>
Apr 03 11:40:19 k8s-master kubelet[68292]: I0403 11:40:19.866092 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a37607eab797c52694777f1a2956ff6115c72bb9c745f1d7eba48a5ceac66cb"
Apr 03 11:40:19 k8s-master kubelet[68292]: I0403 11:40:19.902696 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e3534fe1dd150f5365b1751d07667cb2182e60bb8d2e4f24ce5b46bfadb34f5"
Apr 03 11:40:19 k8s-master kubelet[68292]: I0403 11:40:19.949701 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="289aae52d058af3370c4a47a5513a6826e9d1344fa0c8dc07827846ef1988454"
Apr 03 11:40:20 k8s-master kubelet[68292]: I0403 11:40:20.008225 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b8757e892fcb03536ede44ff742ea660634af023e6709d01237567f8b9a6c87"
Apr 03 11:40:20 k8s-master kubelet[68292]: E0403 11:40:20.047686 68292 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node "k8s-master" not found"
Apr 03 11:40:20 k8s-master kubelet[68292]: E0403 11:40:20.168717 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-apiserve>
Apr 03 11:40:20 k8s-master kubelet[68292]: E0403 11:40:20.174950 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube->
Apr 03 11:40:20 k8s-master kubelet[68292]: E0403 11:40:20.182648 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-schedule>
Apr 03 11:40:20 k8s-master kubelet[68292]: E0403 11:40:20.303071 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 5m0s restarting failed container=etcd pod=etcd-k8s-master>
Apr 03 11:40:21 k8s-master kubelet[68292]: I0403 11:40:21.060031 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b842c643315de5c98586c90763f42193b40a4021cb0e01e692edcc8193105cf6"
Apr 03 11:40:21 k8s-master kubelet[68292]: I0403 11:40:21.095395 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed37e7c2c770340837baf428fac07cca65951057cb32faea6d8d8d9a92eee023"
Apr 03 11:40:21 k8s-master kubelet[68292]: I0403 11:40:21.142544 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d71ca6776dc48e87322ea73b7fa77102a20ee34a05d1d52ba7bf32cbf5993004"
Apr 03 11:40:21 k8s-master kubelet[68292]: I0403 11:40:21.206203 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2bfca9b943fc9df6c6017d80ce5d28842598ce7cf991b63ae843819053f5a4c"
Apr 03 11:40:21 k8s-master kubelet[68292]: E0403 11:40:21.355225 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 5m0s restarting failed container=etcd pod=etcd-k8s-master>
Apr 03 11:40:21 k8s-master kubelet[68292]: E0403 11:40:21.357965 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-schedule>
Apr 03 11:40:21 k8s-master kubelet[68292]: E0403 11:40:21.373016 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-apiserve>
Apr 03 11:40:21 k8s-master kubelet[68292]: E0403 11:40:21.504628 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube->
Apr 03 11:40:21 k8s-master kubelet[68292]: I0403 11:40:21.594892 68292 kubelet_node_status.go:70] "Attempting to register node" node="k8s-master"
Apr 03 11:40:21 k8s-master kubelet[68292]: E0403 11:40:21.595199 68292 kubelet_node_status.go:92] "Unable to register node with API server" err="Post "https://10.128.0.27:6443/api/v1/nodes\": dial tcp 10.128.0.27:6443: connect: connection refused" node="k8s->
Apr 03 11:40:21 k8s-master kubelet[68292]: W0403 11:40:21.950701 68292 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.128.0.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.>
Apr 03 11:40:21 k8s-master kubelet[68292]: E0403 11:40:21.950751 68292 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.27:6443/api/v1/services?limit=500&resourc>
Apr 03 11:40:22 k8s-master kubelet[68292]: I0403 11:40:22.262297 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65ecd69bc15c62b0ce329c3ef7035e411de2a5a214cc105834d228369fa7e42c"
Apr 03 11:40:22 k8s-master kubelet[68292]: I0403 11:40:22.305586 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="276e2678670c18e325e27ffe3d5e971a5bfeaf972cfe66a4899097cddad98625"
Apr 03 11:40:22 k8s-master kubelet[68292]: I0403 11:40:22.354338 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0b3ddc03310003f5a76d8889cd5d730a28bea9bd3bfa1f2bc621383219fc458"
Apr 03 11:40:22 k8s-master kubelet[68292]: I0403 11:40:22.411044 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8a5c93fe95b231c30332fc71bddcbcaf3eb50a86f6c7881089ff3e31958c824"
Apr 03 11:40:22 k8s-master kubelet[68292]: E0403 11:40:22.463530 68292 controller.go:146] failed to ensure lease exists, will retry in 7s, error: Get "https://10.128.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-master?timeout=10>
Apr 03 11:40:22 k8s-master kubelet[68292]: E0403 11:40:22.532810 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-schedule>
Apr 03 11:40:22 k8s-master kubelet[68292]: E0403 11:40:22.544029 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-apiserve>
Apr 03 11:40:22 k8s-master kubelet[68292]: E0403 11:40:22.624431 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube->
Apr 03 11:40:22 k8s-master kubelet[68292]: E0403 11:40:22.671674 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 5m0s restarting failed container=etcd pod=etcd-k8s-master>
Apr 03 11:40:23 k8s-master kubelet[68292]: I0403 11:40:23.474785 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ed4f0502a7c4cb1d6281eb3074025a7b2e05a9053c0d8890e5ef38085ba7b96"
Apr 03 11:40:23 k8s-master kubelet[68292]: I0403 11:40:23.513767 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8af9c22d2432f91be2bc33abfe035d9631f57c90a8cd7aceb872e47b1b0f2d8c"
Apr 03 11:40:23 k8s-master kubelet[68292]: I0403 11:40:23.562468 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09449192ce926ab06a3c8f8353b88ddfae949251e861550c665f4ae27c10ecf5"
Apr 03 11:40:23 k8s-master kubelet[68292]: I0403 11:40:23.619000 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65db26fcd6425c42bf4e38a24f5f001e3f0d64317baba388216d8a47b1fc0fe1"
Apr 03 11:40:23 k8s-master kubelet[68292]: E0403 11:40:23.787942 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-schedule>
Apr 03 11:40:23 k8s-master kubelet[68292]: E0403 11:40:23.789912 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-apiserve>
Apr 03 11:40:23 k8s-master kubelet[68292]: E0403 11:40:23.790167 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 5m0s restarting failed container=etcd pod=etcd-k8s-master>
Apr 03 11:40:23 k8s-master kubelet[68292]: E0403 11:40:23.922168 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube->
Apr 03 11:40:24 k8s-master kubelet[68292]: I0403 11:40:24.674328 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f53ab04dadbaa2d388d6eea78800383b3b126971e704b5030a1dfd4257adfe0"
Apr 03 11:40:24 k8s-master kubelet[68292]: I0403 11:40:24.720471 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eae7035a9aaa99175ea235bb277dddb6a6e99e53f0911be6928752330dec4d1"
Apr 03 11:40:24 k8s-master kubelet[68292]: I0403 11:40:24.766212 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50020c248053e19d0cc79ffc18163cc83166a5601256686f0e8d6e9e17107e29"
Apr 03 11:40:24 k8s-master kubelet[68292]: I0403 11:40:24.815565 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef0c1b6c0de71db5e35af3400dcf50688aaffd8440ee3ba13730ad8feb237823"
Apr 03 11:40:24 k8s-master kubelet[68292]: E0403 11:40:24.955784 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-schedule>
Apr 03 11:40:24 k8s-master kubelet[68292]: E0403 11:40:24.973802 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-apiserve>
Apr 03 11:40:25 k8s-master kubelet[68292]: E0403 11:40:25.040323 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube->
Apr 03 11:40:25 k8s-master kubelet[68292]: E0403 11:40:25.122305 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 5m0s restarting failed container=etcd pod=etcd-k8s-master>
Apr 03 11:40:25 k8s-master kubelet[68292]: I0403 11:40:25.874909 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c649bb0326669da8736d8eaf9d159f17a02d66eaab384f3baa9271b14674c44c"
Apr 03 11:40:25 k8s-master kubelet[68292]: I0403 11:40:25.920642 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b18305f883fa4dba05a1a906d39301ce557d278410d9122c5e2bab1a3dca86"
Apr 03 11:40:25 k8s-master kubelet[68292]: I0403 11:40:25.967791 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9859332976c41dddaa43587a7737ec58220bd7551d0f9662ff9b82cdce61444"
Apr 03 11:40:26 k8s-master kubelet[68292]: I0403 11:40:26.034135 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18242eeb6f38764a2ff9fe3eaa78a2641b5a41fb460731c9eaf17eb2d26c2894"
Apr 03 11:40:26 k8s-master kubelet[68292]: E0403 11:40:26.184999 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-schedule>
Apr 03 11:40:26 k8s-master kubelet[68292]: E0403 11:40:26.188813 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-apiserve>
Apr 03 11:40:26 k8s-master kubelet[68292]: E0403 11:40:26.225472 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube->
Apr 03 11:40:26 k8s-master kubelet[68292]: E0403 11:40:26.308220 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 5m0s restarting failed container=etcd pod=etcd-k8s-master>
Apr 03 11:40:27 k8s-master kubelet[68292]: I0403 11:40:27.093161 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b814de4e05a7662ad7a08a1e815a8f637ee08c89b50234f8ec6c89246f4cd59"
Apr 03 11:40:27 k8s-master kubelet[68292]: I0403 11:40:27.136771 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66e708220875430f3739316eebf80e7e256284e92d336fcc3e452353038f6530"
Apr 03 11:40:27 k8s-master kubelet[68292]: I0403 11:40:27.187664 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5309ad0dcfb09f864c33ea88f0dc911e87ee58f5627566d11a5c1b8d13002780"
Apr 03 11:40:27 k8s-master kubelet[68292]: I0403 11:40:27.249498 68292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e79429a5b2d246b15f51ae4f94556bbc4dc8321f3a8753721488c2e8bce6ff18"
Apr 03 11:40:27 k8s-master kubelet[68292]: E0403 11:40:27.380665 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube->
Apr 03 11:40:27 k8s-master kubelet[68292]: E0403 11:40:27.394655 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 5m0s restarting failed container=etcd pod=etcd-k8s-master>
Apr 03 11:40:27 k8s-master kubelet[68292]: E0403 11:40:27.436891 68292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-schedule>
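The repeated CrashLoopBackOff entries above mean the control-plane containers start and then exit, so the next place to look is the exited containers' own logs. Following kubeadm's own hint, one way to dig them out (the grep pattern is illustrative; substitute the real CONTAINER ID from the listing):

```shell
SOCK=unix:///var/run/cri-dockerd.sock
# List all control-plane containers, including exited ones
sudo crictl --runtime-endpoint "$SOCK" ps -a | grep -E 'etcd|kube-' | grep -v pause
# Then inspect a failing one, e.g. etcd, by its CONTAINER ID from the list above
sudo crictl --runtime-endpoint "$SOCK" logs <CONTAINER-ID>
```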

@neolit123
Member

Please try asking for help on the support channels; links are below.

/support

@github-actions

github-actions bot commented Apr 3, 2023

Hello, @jerryxnqiu 🤖 👋

You seem to have troubles using Kubernetes and kubeadm.
Note that our issue trackers should not be used for providing support to users.
There are special channels for that purpose.

Please see:

@github-actions github-actions bot added the kind/support Categorizes issue or PR as a support question. label Apr 3, 2023
@github-actions github-actions bot closed this as completed Apr 3, 2023