Pod and PVC stuck in Pending with WaitForFirstConsumer #453
Comments
Hi @maxnrb. Can you please share the output of …?
One more thing to add: do you have ZFS pools created? I see you are using … How many nodes do you have in your cluster?
Can you also try with the latest version and see whether the issue still reproduces?
@maxnrb I will close this issue in a week if there is no response.
Got the same log from openebs-zfs-controller-0. The node pods don't appear to print anything as a result of creating a PVC.
PVC:
Pod:
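The PVC and Pod bodies did not survive this capture. A minimal hypothetical pair consistent with the `lv` StorageClass shown below (names and sizes are placeholders, not the reporter's originals):

```yaml
# Hypothetical reconstruction; only storageClassName "lv" is taken
# from the thread, everything else is a placeholder.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: lv
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: lv-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - mountPath: /data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: lv-claim
```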
StorageClass:

```yaml
allowVolumeExpansion: true
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
    - uca1k
    - uca2k
    - uca3k
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"allowVolumeExpansion":true,"allowedTopologies":[{"matchLabelExpressions":[{"key":"kubernetes.io/hostname","values":["uca1k","uca2k","uca3k"]}]}],"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"lv"},"parameters":{"compression":"off","dedup":"off","fstype":"zfs","poolname":"lv","recordsize":"128k"},"provisioner":"zfs.csi.openebs.io","volumeBindingMode":"WaitForFirstConsumer"}
  creationTimestamp: "2023-12-04T13:13:20Z"
  name: lv
  resourceVersion: "4198060"
  uid: 3f0fb9ec-073e-4cf9-bb58-f13f16a67031
parameters:
  compression: "off"
  dedup: "off"
  fstype: zfs
  poolname: lv
  recordsize: 128k
provisioner: zfs.csi.openebs.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
aep@stark: /work/kraud/ansible/uca/k8s
zpool on the nodes
df:
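The zpool and df output is likewise missing from this capture. A sketch of the commands one would run on each node to gather it, assuming the `lv` pool from the StorageClass above:

```sh
# List all pools and their health, then the datasets under "lv"
zpool list
zpool status lv
zfs list -r lv
# Confirm the pool's datasets are mounted
df -h
```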
The root cause might be this? Not sure how these are supposed to be created; the manual doesn't mention them.
Just ran into this again on a different cluster; it's caused by the same deployment, however. cockroachdb is creating an STS with 3 replicas. When switching to Immediate mode I get …
Possibly the root cause is still the same. Recreating the entire operator results in exactly those 2 zfsnodes coming back, with the 3rd missing again.
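A sketch of how one might compare the registered ZFSNode custom resources against the cluster's nodes (the namespace depends on the install; `openebs` is assumed here):

```sh
# Every node running the ZFS node plugin should have a matching ZFSNode CR
kubectl get nodes
kubectl get zfsnodes -n openebs
```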
Finally figured out the missing zfsnode! I completely missed that kubectl logs gives you only the first container's logs, but the actually interesting log is from openebs-zfs-plugin.
This is because the zpool has listsnapshots=on. Unfortunately, this has nothing to do with the original issue here; the pods are still not scheduled.
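For reference, a sketch of the two steps described above; the pod name and namespace are placeholders for your install:

```sh
# kubectl logs defaults to the first container; -c selects the
# openebs-zfs-plugin container, which is where the useful log lives
kubectl logs -n openebs openebs-zfs-node-xxxxx -c openebs-zfs-plugin

# Inspect and clear the pool property mentioned above
zpool get listsnapshots lv
zpool set listsnapshots=off lv
```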
What steps did you take and what happened:
Hello,
I'm trying to implement ZFS LocalPV with volumeBindingMode: WaitForFirstConsumer in the storage class; however, my Pod and the PVC are stuck in Pending, and the PV is not created. I get the following message in the pod description: …
Note that if I deploy the same elements with volumeBindingMode: Immediate in the SC, my PV is created.
Here are the different elements I have deployed (SC, PVC and Pod):
StorageClass:
PersistentVolumeClaim:
Pod:
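The manifest bodies are missing from this capture (the full StorageClass appears in a comment above). To surface the exact Pending message mentioned in the description, a typical sequence (object names are placeholders):

```sh
# Events on the Pod and PVC explain why binding/scheduling is stuck
kubectl describe pod <pod-name>
kubectl describe pvc <pvc-name>
kubectl get events --sort-by=.metadata.creationTimestamp
```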
Environment:
- Kubernetes version (kubectl version): v1.27.2
- OS (/etc/os-release): Debian GNU/Linux 11 (bullseye)

How can I debug this?
Thanks for your help!