Persistent Volume Claim (PVC) stuck in status pending #2712
Comments
@hoba84 can you please paste the configmap output which contains the clusterID and mon mapping? Can you also try running commands like [...]? Note: you need to pass the monitor, user and key.
Hi @Madhu-1, these are the two config maps for [...]

For the suggested debugging I ran [...]. Also, I do not know how to pass the monitor, user and key to that container.
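The exact commands are not preserved in this extract, so the following is only an assumed sketch of how the monitor, user and key could be passed explicitly to an `rbd` command inside the provisioner's csi-rbdplugin container; pool name, monitor address, user and key are placeholders, not values from this issue:

```sh
# Assumed sketch: run an rbd command inside the csi-rbdplugin container of the
# provisioner pod, passing monitor, user and key explicitly instead of relying
# on a ceph.conf/keyring. All values in angle brackets are placeholders.
kubectl exec -it csi-rbdplugin-provisioner-6bbfdc7c78-5mrgq -c csi-rbdplugin -- \
  rbd ls <pool-name> \
    -m <mon-ip>:6789 \
    --id <ceph-user> \
    --key=<ceph-user-key>
```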
To check network connectivity you can run the below command in the csi-rbdplugin container: [...]

If the above command does not return any value like [...]
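The suggested command itself is truncated above; a minimal sketch of such a reachability check, assuming one of the ceph monitor addresses on the dedicated 10.1.30.0/24 network mentioned below, could be:

```sh
# Assumed sketch: check from inside the provisioner's csi-rbdplugin container
# whether a ceph monitor endpoint is reachable at all (address is a placeholder).
kubectl exec -it csi-rbdplugin-provisioner-6bbfdc7c78-5mrgq -c csi-rbdplugin -- \
  ping -c 3 <mon-ip>
```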
Hi @Madhu-1, the above command [...]. The only special thing about my setup is that the kubernetes nodes have multiple network interfaces; one (10.1.30.0/24) is dedicated to ceph access. In the other container (pod csi-rbdplugin with container csi-rbdplugin) I can run the above commands to access the ceph cluster.
@hoba84 we don't put any requirements on networking; the prerequisite is that the ceph cluster should be reachable from the CSI pods.
Hi @Madhu-1, I was able to solve the problem, and indeed it was missing network connectivity inside the provisioner pod. I resolved it by configuring an additional network interface with the help of Multus; now I am able to use the PVC. Maybe the documentation and/or logging could be improved. Thanks again for your valuable help!
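The comment does not include the actual Multus configuration, so the following is only an assumed sketch of what attaching the provisioner pod to the dedicated ceph network might look like. Only the 10.1.30.0/24 subnet comes from this issue; the object name, master interface and IPAM range are placeholders:

```sh
# Assumed sketch: a Multus NetworkAttachmentDefinition for the dedicated ceph
# network (10.1.30.0/24). Interface name, IPAM range and object name are
# placeholders, not taken from this issue.
cat <<'EOF' | kubectl apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ceph-public-net
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "ipam": {
        "type": "host-local",
        "subnet": "10.1.30.0/24",
        "rangeStart": "10.1.30.200",
        "rangeEnd": "10.1.30.220"
      }
    }
EOF

# The provisioner pod template then needs the matching annotation so Multus
# attaches the extra interface, e.g.:
#   k8s.v1.cni.cncf.io/networks: ceph-public-net
```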
Describe the bug
Following this guide, the PVC I want to create gets stuck in status `Pending`, with the following detail message: `Aborted desc = an operation with the given Volume ID pvc-55b22d71-055e-4f23-9e61-7386e4cde6c5 already exists`. I don't know how to further debug this. One thing I found out is that from the csi-rbdplugin-provisioner/csi-rbdplugin container I cannot reach the ceph cluster, while from the csi-rbdplugin/csi-rbdplugin container I can reach the ceph cluster (see section Additional context). I don't know if this is how it should be (?).

Environment details
Mounter used for mounting the PVC (for cephfs it is `fuse` or `kernel`; for rbd it is `krbd` or `rbd-nbd`): ceph-common (?)

The Ceph cluster is hosted on a Proxmox 7 environment. The MicroK8s nodes are running as Ubuntu 20.04 VMs on Proxmox. MicroK8s is running in HA mode (3 nodes), which means that the Calico CNI is used. Enabled add-ons for MicroK8s are: dns, ha-cluster, ingress, metallb, metrics-server, rbac.
Steps to reproduce
Same as in this guide:
Actual results
The PVC is stuck in status pending.
Expected behavior
The PVC should be created successfully.
Additional context
When I run `kubectl exec -i -t csi-rbdplugin-provisioner-6bbfdc7c78-5mrgq --container csi-rbdplugin -- /bin/bash`, from this container I cannot reach my ceph cluster (ping does not work). My network interfaces in this container are the following: [...]

When I run `kubectl exec -i -t csi-rbdplugin-cfmfh --container csi-rbdplugin -- /bin/bash`, from this container I can reach my ceph cluster. The network interfaces are the following (they are the same as on the host machines `k8s-node-[1..3]`): [...]

Logs
If the issue is in PVC creation, deletion, or cloning, please attach complete logs of the below containers (a log-collection sketch follows this list).
provisioner pod.
csi-rbdplugin-provisioner/csi-rbdplugin
[...]
csi-rbdplugin-provisioner/csi-provisioner
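A minimal sketch of how these provisioner-side logs can be collected; the pod and container names are the ones from this issue, while the namespace is a placeholder that depends on where ceph-csi is deployed:

```sh
# Collect provisioner-side logs for PVC create/delete/clone issues.
# The namespace is a placeholder for this particular deployment.
kubectl logs csi-rbdplugin-provisioner-6bbfdc7c78-5mrgq -c csi-rbdplugin -n <namespace>
kubectl logs csi-rbdplugin-provisioner-6bbfdc7c78-5mrgq -c csi-provisioner -n <namespace>
```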
If the issue is in PVC resize please attach complete logs of below containers.
provisioner pod.
no issue with resizer
If the issue is in snapshot creation and deletion please attach complete logs
of below containers.
provisioner pod.
no issue with snapshots
If the issue is in PVC mounting please attach complete logs of below containers.
csi-rbdplugin/csi-cephfsplugin and driver-registrar container logs from
plugin pod from the node where the mount is failing.
no issue with mounting (we do not get to that point)
if required attach dmesg logs.
dmesg of csi-rbdplugin-cfmfh/csi-rbdplugin: [...]

dmesg of csi-rbdplugin-provisioner-6bbfdc7c78-5mrgq/csi-rbdplugin: [...]