Unable to mount two folders on same EFS volume #100
I have the same problem, and I did some testing. I found that the issue is that one pod cannot mount multiple PVCs.
Below are the yaml manifests and results from my tests.
The error message is as follows:
pv and pvc:
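The manifests referenced above were not captured in this thread. A minimal sketch of the failing setup, assuming the driver and volume attributes visible in the logs below (the filesystem ID `fs-8bbb3e0a` and the `path` attribute appear there; all other names are hypothetical), would be two PVs that share the same `volumeHandle` and differ only in `path`:

```yaml
# Hypothetical reproduction: both PVs use the same volumeHandle (fs-8bbb3e0a)
# and differ only in the "path" volume attribute, so Kubernetes treats them
# as the same volume and only one of them actually gets mounted.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-8bbb3e0a
    volumeAttributes:
      path: /test1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv2
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-8bbb3e0a
    volumeAttributes:
      path: /test2
```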
I did some quick testing and was able to reproduce the same. Taking a look. Here is the sample spec I'm using:
There wasn't even a call into the driver for `efs-pv`:

$ kk logs efs-csi-node-kp8rl -n kube-system efs-plugin | grep efs-pv
I1107 19:43:50.153495 1 node.go:49] NodePublishVolume: called with args volume_id:"fs-8bbb3e0a" target_path:"/var/lib/kubelet/pods/cf02d596-adcd-498d-b9a7-aba672b288ef/volumes/kubernetes.io~csi/efs-pv2/mount" volume_capability:<mount:<> access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:"path" value:"/test2" >
I1107 19:43:50.153556 1 node.go:103] NodePublishVolume: creating dir /var/lib/kubelet/pods/cf02d596-adcd-498d-b9a7-aba672b288ef/volumes/kubernetes.io~csi/efs-pv2/mount
I1107 19:43:50.153575 1 node.go:108] NodePublishVolume: mounting fs-8bbb3e0a:/test2 at /var/lib/kubelet/pods/cf02d596-adcd-498d-b9a7-aba672b288ef/volumes/kubernetes.io~csi/efs-pv2/mount with options []
I1107 19:43:50.153586 1 mount_linux.go:135] Mounting cmd (mount) with arguments ([-t efs fs-8bbb3e0a:/test2 /var/lib/kubelet/pods/cf02d596-adcd-498d-b9a7-aba672b288ef/volumes/kubernetes.io~csi/efs-pv2/mount])

Do you see the same in the csi-node logs (the csi node pod that shares the node the pod is deployed to)? @ShanHsl @stefansedich I'm checking kubelet for what's happening.
Tested consuming two PVs, each with a different EFS filesystem ID, and it works.
I think I figured out why. The cause is that both PVs use the same filesystem ID as their volume handle. In the kubernetes volume manager's desired_state_of_world cache, whether a volume needs to be mounted is decided by a unique volume name, and the csi_plugin derives that name from the volume handle. In comparison, the NFS plugin does use the path as part of the volume name.
As required by the CSI Volume spec, the volume handle needs to uniquely identify the volume. The fix will be modifying the subpath feature so that the subpath is specified through the volume handle.
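Under that fix, the two folders from the reproduction would be distinguished by putting the subpath into the `volumeHandle` itself, giving each PV a unique handle. A sketch (the filesystem ID matches the logs above; PV names are hypothetical):

```yaml
# Sketch of the fixed form: the subpath is part of volumeHandle, so the two
# PVs have distinct handles and Kubernetes mounts both. A second PV would
# use volumeHandle: fs-8bbb3e0a:/test2.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-8bbb3e0a:/test1
```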
If I remove the …
I'm going to finish up the PR to fix this issue. @ShanHsl @stefansedich, let me know if this approach works for you.
WIP: unit test
WIP: docs

Expands the supported `volumeHandle` formats to include a three-field version: `{fsid}:{subpath}:{apid}`. This addresses the limitation originally described in kubernetes-sigs#100 whereby k8s relies solely on the `volumeHandle` to uniquely distinguish one PV from another.

As part of this fix, specifying `accesspoint=fsap-...` in `mountOptions` is deprecated. For more details, see the related issue (kubernetes-sigs#167).

The following scenarios were tested in a live environment:

**Conflicting access point in volumeHandle and mountOptions**
- `volumeHandle: fs::ap1`
- `mountOptions: ['tls', 'accesspoint=ap2']`
- expect: fail
- actual: fail with `Warning FailedMount 1s (x4 over 4s) kubelet, ip-10-0-137-122.ec2.internal MountVolume.SetUp failed for volume "pv-aptest-1" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = InvalidArgument desc = Found conflicting access point IDs in mountOptions (fsap-04d3307beebd04739) and volumeHandle (fsap-057e9a9b209ec9531)`
- result: ✔

**Same access point in volumeHandle and mountOptions**
- `volumeHandle: fs::ap1`
- `mountOptions: ['tls', 'accesspoint=ap1']`
- expect: success
- result: ✔

**Two access points on the same file system** (also makes sure we populate tls and accesspoint in mountOptions)
- `mountOptions: []` (for both)
- PV1: `volumeHandle: fs1::ap1`
- PV2: `volumeHandle: fs1::ap2`
- expect: success, both mounts accessible and distinct
- result: ✔

**Subpaths with access points**
- `mountOptions: []` (for all)
- PV1: `volumeHandle: fs1::ap1` (root -- should be able to see /foo and bar)
- PV2: `volumeHandle: fs1:/foo/bar:ap1` (absolute path)
- PV3: `volumeHandle: fs1:foo/bar:ap1` (relative path)
- expect: success
- actual: success (had to create `$absolutemountpoint/foo/bar` in the fs first, as expected)
- result: ✔

Fixes: kubernetes-sigs#167
Signed-off-by: Eric Fried <efried@redhat.com>
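A PV using the three-field `{fsid}:{subpath}:{apid}` format described in the commit message above might look like this (the filesystem and access point IDs are placeholders, not values from this thread):

```yaml
# Hypothetical PV using the three-field volumeHandle format:
# {fsid}:{subpath}:{apid}. An empty middle field means no subpath,
# i.e. the root exposed by the access point.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-ap-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - tls
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678::fsap-0123456789abcdef0
```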
/kind bug
What happened?
When attempting to mount two different folders on the same EFS volume using multiple PVs and PVCs, the pod fails to start, showing a timeout binding the volumes:
Binding works fine if I mount only one of the volumes (it doesn't matter which one), but as soon as there is more than one it fails.
What you expected to happen?
I would expect this to work as it did when I was previously using an NFS PV.
How to reproduce it (as minimally and precisely as possible)?
Use the following manifests; it will require an EFS volume with the `/efs-test` and `/efs-test2` folders created.
Anything else we need to know?:
Environment
Kubernetes version (use kubectl version): 1.14