What steps did you take and what happened:
I can't get two pods running on the same node to access one given dataset. I've read all the issues and docs I could find about sharing a dataset between pods, but I can't get past this error:

MountVolume.SetUp failed for volume "consume-pv" : rpc error: code = Internal desc = rpc error: code = Internal desc = verifyMount: device already mounted at [/var/lib/k0s/kubelet/pods/7f3fc9cd-5e94-4d32-9e59-3ae0caa41fc4/volumes/kubernetes.io~csi/import-pv/mount /host/var/lib/k0s/kubelet/pods/7f3fc9cd-5e94-4d32-9e59-3ae0caa41fc4/volumes/kubernetes.io~csi/import-pv/mount]

I also don't see the shared: yes parameter in my ZFSVolume, as described in #152 (comment).

Here are my curated resources:

What did you expect to happen:
I expect the two pods to share the same ZFS dataset so that I have a single destination for my files (the two applications have distinct concerns depending on the files placed in it).
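For reference, a minimal StorageClass sketch that enables shared mounts in ZFS-LocalPV (the class name is a placeholder; the pool value is taken from the YAMLs discussed in this thread):

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfs-shared            # placeholder name
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "data/import"     # must match the dataset the PV actually uses
  fstype: "zfs"
  shared: "yes"               # lets multiple pods on the same node mount the volume
```

Whether shared actually took effect can be checked on the provisioned ZFSVolume CR with kubectl get zfsvolume -o yaml (namespace depends on your install).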
Environment:
ZFS-LocalPV version: zfs-driver:2.4.0
Kubernetes version (use kubectl version):
Client Version: v1.28.5
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.4+k0s
Kubernetes installer & version: k0s v1.28.4+k0s.0
Cloud provider or hardware configuration: Odroid HC4
OS (e.g. from /etc/os-release): Armbian 23.11.1 bookworm 6.1.63-current-meson64
Thanks in advance for any clues about this.
On Jan 17, 2024, etlfg changed the title from "Device already mounted at for two pods on the same shared=yes ZFS dataset" to «"Device already mounted at /var/lib/kubelet/pods" with a shared=yes ZFS dataset».
Hi @etlfg, I was also trying to use a shared mount and was able to do it successfully. So I gave it a try with the YAMLs you provided, and it worked for me here as well: I can see shared: yes in the -o yaml output of the ZFSVolume CR. Does this issue still persist for you? I would suggest trying it once again.
One point I want to check:
In your StorageClass YAML I see poolname: data/import, but in your ZFSVolume and PV YAMLs it is only poolName: data. Can you confirm whether, by any chance, your StorageClass YAML was different while provisioning the volume?
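To illustrate the mismatch being asked about, a hypothetical pair of excerpts where the pool reference is consistent (values taken from the thread; field names follow the ZFS-LocalPV CRs):

```
# StorageClass parameters (excerpt)
parameters:
  poolname: "data/import"
  shared: "yes"
---
# ZFSVolume CR spec (excerpt) - poolName should match the StorageClass,
# i.e. "data/import" here, not just "data"
spec:
  poolName: "data/import"
  shared: "yes"
```

If the StorageClass was edited after the volume was provisioned, the ZFSVolume keeps the pool it was created with, which would explain the discrepancy.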