What happened?
After installing a kubeadm-based cluster with the latest version of Kubespray, all static pods go into CrashLoopBackOff, including kube-apiserver, kube-scheduler, and kube-controller-manager.
I also found that the recent pull request adding containerd v2.0.x support does not generate a config.toml compatible with v2.0.x and the config version 3 schema documented by containerd.
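For context, containerd 2.0 split the single io.containerd.grpc.v1.cri plugin (config version 2) into separate runtime and images plugins (config version 3), so CRI settings written under the old plugin ID do not land where containerd 2.0.x looks for them. Below is a minimal sketch of the difference for a standard runc setup; the values are illustrative, not the exact Kubespray template:

# File written for containerd 1.x (config version 2):
version = 2
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

# Equivalent file for containerd 2.0.x (config version 3) -- same
# settings, but the CRI runtime config moved to a new plugin ID:
version = 3
[plugins."io.containerd.cri.v1.runtime".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.cri.v1.runtime".containerd.runtimes.runc.options]
  SystemdCgroup = true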
Cluster Kube API Connection Issue
kubectl get pods -o wide -A
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
Containerd Running Containers (crictl ps)
[root@uat-cluster-master-1 containerd]# crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
7aa346fe34353 c42f13656d0b2 22 seconds ago Running kube-apiserver 7 e9f8f274c8b34 kube-apiserver-uat-cluster-master-1
d00c26a309e27 a0bf559e280cf About a minute ago Running kube-proxy 3 3814bd84f4a7b kube-proxy-zss25
c363cf59bf7ab c7aad43836fa5 About a minute ago Running kube-controller-manager 8 714bec976ca96 kube-controller-manager-uat-cluster-master-1
6135705ace823 259c8277fcbbc About a minute ago Running kube-scheduler 4 c788bbc0ddec1 kube-scheduler-uat-cluster-master-1
Journal Logs (journalctl -xeu containerd)
(The pasted journal lines were truncated; only these fragments are recoverable.)
..." must be in running or unknown state, current state "CONTAINER_EXITED"" (repeated for several containers)
...id:"3bccefc2d5591cf9c26e3b0bb3d39385edce465093952978269baa3295389d81" pid:353301 exit_status:137 exited_at:{seconds:1738822725 nanos:810805678}"
...address="unix:///run/containerd/s/01da5035e626b0ee5c318bc594164600a3464beadb44c5cb3ebb58bf6fdb445e" namespace=k8s.io protocol=ttrpc version=3
...Namespace:kube-system,Attempt:3,} returns sandbox id "f09a48eef1f296442f414fee02330c9e1c22db00977301e11571e8404572979b"
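Exit status 137 is 128 + 9, i.e. the containers are being SIGKILLed rather than failing on their own, which fits the kubelet repeatedly tearing down control-plane containers it considers unhealthy. A quick check of whether containerd itself accepts the generated file (a sketch using containerd's config subcommand; the redirect just silences the merged output):

containerd -c /etc/containerd/config.toml config dump > /dev/null && echo "config parses"

Note that if the old schema is merely deprecated rather than rejected, this still succeeds, so also watch the containerd journal for deprecation warnings at startup.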
Containerd Version
[root@uat-cluster-master-1 containerd]# k get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
uat-cluster-master-1 NotReady control-plane 13m v1.30.0 192.168.213.200 <none> Rocky Linux 9.4 (Blue Onyx) 5.14.0-427.16.1.el9_4.x86_64 containerd://2.0.2
uat-cluster-worker-1 NotReady <none> 10m v1.30.0 192.168.213.201 <none> Rocky Linux 9.4 (Blue Onyx) 5.14.0-427.16.1.el9_4.x86_64 containerd://2.0.2
uat-cluster-worker-2 NotReady <none> 10m v1.30.0 192.168.213.202 <none> Rocky Linux 9.4 (Blue Onyx) 5.14.0-427.16.1.el9_4.x86_64 containerd://2.0.2
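containerd 2.0 also documents a config migrate subcommand that rewrites an older config to the current schema; comparing its output against the file Kubespray generated should make any schema mismatch obvious. A sketch (back up the original before replacing anything):

containerd config default | grep '^version'   # prints "version = 3" on 2.0.x
containerd -c /etc/containerd/config.toml config migrate > /tmp/config-v3.toml
diff /etc/containerd/config.toml /tmp/config-v3.toml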
What did you expect to happen?
Kubernetes static pods should run without issues.
How can we reproduce it (as minimally and precisely as possible)?
Provision a Kubernetes cluster using Kubespray, ensuring that the container runtime is set to containerd://2.0.2. Upon deployment, the aforementioned error is expected to occur.
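A minimal sketch of the provisioning step, assuming a stock Kubespray checkout and the inventory listed below; containerd_version is Kubespray's variable for pinning the runtime, and the inventory path here is illustrative:

ansible-playbook -i inventory/uat/inventory.ini cluster.yml -b -e containerd_version=2.0.2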
OS
Rocky Linux 9.4 (Blue Onyx) 5.14.0-427.16.1.el9_4.x86_64
Version of Ansible
2.17.0
Version of Python
python 3.12.3
Version of Kubespray (commit)
fe0a1f4
Network plugin used
cni
Full inventory with variables
[all]
uat-cluster-master-1 ansible_host=192.168.213.200 ip=192.168.213.200 ansible_user=ansible
uat-cluster-worker-1 ansible_host=192.168.213.201 ip=192.168.213.201 ansible_user=ansible
uat-cluster-worker-2 ansible_host=192.168.213.202 ip=192.168.213.202 ansible_user=ansible
[kube_control_plane]
uat-cluster-master-1 etcd_member_name=etcd1
[etcd:children]
kube_control_plane
[kube_node]
uat-cluster-worker-1
uat-cluster-worker-2
[all:vars]
ansible_become=yes
ansible_become_method=sudo
ansible_become_user=root
Command used to invoke ansible
Output of ansible run
Anything else we need to know
No response