
Container runtime network not ready #2746

Closed

yangjianfeng1208 opened this issue Aug 18, 2022 · 2 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

yangjianfeng1208 commented Aug 18, 2022

What keywords did you search in kubeadm issues before filing this one?

CNI doesn't work after kubeadm init cluster

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

Versions

kubeadm version (use kubeadm version): v1.24.2

Environment:

  • Kubernetes version (use kubectl version): v1.24.2
  • Cloud provider or hardware configuration:
    • Two virtual machines on KVM
    • 16 cores, 32 GB RAM, and 64 GB storage per VM
  • OS (e.g. from /etc/os-release): ubuntu-2204
  • Kernel (e.g. uname -a): Linux 5.15.0-25-generic #25-Ubuntu SMP Wed Mar 30 15:54:22 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
  • Container runtime (CRI) (e.g. containerd, cri-o): containerd
  • Container networking plugin (CNI) (e.g. Calico, Cilium): Calico and flannel
  • Others:

What happened?

I used kubeadm to initialize a cluster; the YAML file is shown below:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.122.100
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-node01
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.24.2
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
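
For reference, kubeadm consumes a config like this via its --config flag; assuming the two documents above were saved to one file (the name kubeadm-config.yaml is only illustrative, it is not stated in the issue):

# kubeadm init --config kubeadm-config.yaml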

The cluster initialized well, and then I installed a CNI. I tried flannel first:

# kubectl get node
NAME         STATUS     ROLES           AGE   VERSION
k8s-node01   NotReady   control-plane   36m   v1.24.2
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
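
As a sanity check, the flannel manifest defaults to a 10.244.0.0/16 pod network, which matches the podSubnet configured above; the deployed value can be read back from flannel's ConfigMap (the namespace and ConfigMap name below match the manifest that produced the pod listing that follows, but may differ across flannel versions):

# kubectl -n kube-flannel get cm kube-flannel-cfg -o jsonpath='{.data.net-conf\.json}'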

The flannel pod is running fine, but the coredns pods are still Pending:

# kubectl get po -A
NAMESPACE      NAME                                 READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-cqxvn                1/1     Running   0          34m
kube-system    coredns-6d4b75cb6d-4l4k5             0/1     Pending   0          38m
kube-system    coredns-6d4b75cb6d-p8fsq             0/1     Pending   0          38m
kube-system    etcd-k8s-node01                      1/1     Running   8          38m
kube-system    kube-apiserver-k8s-node01            1/1     Running   5          38m
kube-system    kube-controller-manager-k8s-node01   1/1     Running   0          38m
kube-system    kube-proxy-vhzxq                     1/1     Running   0          38m
kube-system    kube-scheduler-k8s-node01            1/1     Running   5          38m

Running kubectl describe -n kube-system po coredns-6d4b75cb6d-p8fsq:

Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  33s (x9 over 40m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
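
That untolerated taint is the node.kubernetes.io/not-ready taint, which is applied automatically while the node's Ready condition is False (here, because the runtime network is not ready); it can be confirmed directly on the node, for example:

# kubectl describe node k8s-node01 | grep -i taints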

Running journalctl -xefu kubelet:

Aug 18 08:35:23 k8s-node01 kubelet[376556]: E0818 08:35:23.055980  376556 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 18 08:35:28 k8s-node01 kubelet[376556]: E0818 08:35:28.057657  376556 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 18 08:35:33 k8s-node01 kubelet[376556]: E0818 08:35:33.059659  376556 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 18 08:35:38 k8s-node01 kubelet[376556]: E0818 08:35:38.060972  376556 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 18 08:35:43 k8s-node01 kubelet[376556]: E0818 08:35:43.062988  376556 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
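
That "cni plugin not initialized" message comes from containerd's CRI plugin, which keeps reporting NetworkReady=false until it can load a CNI config. A typical first check, assuming containerd's default paths (/etc/cni/net.d for configs, /opt/cni/bin for plugin binaries; both can be overridden in /etc/containerd/config.toml):

# ls /etc/cni/net.d
# ls /opt/cni/bin
# crictl info | grep -A 4 NetworkReady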

What you expected to happen?

How to reproduce it (as minimally and precisely as possible)?

Anything else we need to know?

neolit123 (Member) commented

Hi, please try asking on the support channels.

/support

github-actions (bot) commented

Hello, @yangjianfeng1208 🤖 👋

You seem to be having trouble using Kubernetes and kubeadm.
Note that our issue trackers should not be used for providing support to users.
There are special channels for that purpose.

Please see:

github-actions bot added the kind/support label Aug 18, 2022