Unable to mount two folders on same EFS volume #100

Closed
stefansedich opened this issue Oct 30, 2019 · 9 comments · Fixed by #102
Labels
kind/bug Categorizes issue or PR as related to a bug.
Milestone
0.3

Comments

stefansedich commented Oct 30, 2019

/kind bug

What happened?

When attempting to mount two different folders on the same EFS volume using multiple PVs and PVCs, the pod fails to start, showing a timeout while binding the volumes:

Unable to mount volumes for pod "efs_default": timeout expired waiting for volumes to attach or mount for pod "default"/"efs". list of unmounted volumes=[efs-test2]. list of unattached volumes=[efs-test efs-test2 default-token-nffl4]

Binding works fine if I mount only one of the volumes (it doesn't matter which one), but as soon as there is more than one it fails.

What you expected to happen?

I would expect this to work as it did when I was previously using an NFS PV.

How to reproduce it (as minimally and precisely as possible)?

Use the following manifests; they require an EFS volume with the /efs-test and /efs-test2 folders already created.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-test
spec:
  storageClassName: efs
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  mountOptions:
    - tls
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxx
    volumeAttributes:
      path: "/efs-test"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: efs
  volumeName: efs-test
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-test2
spec:
  storageClassName: efs
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  mountOptions:
    - tls
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxx
    volumeAttributes:
      path: "/efs-test2"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-test2
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: efs
  volumeName: efs-test2
---
apiVersion: v1
kind: Pod
metadata:
  name: efs
spec:
  containers:
    - name: ubuntu
      image: ubuntu:latest
      command: [ "/bin/bash", "-c", "--" ]
      args: [ "while true; do sleep 30; done;" ]
      volumeMounts:
        - mountPath: /mnt/efs-test
          name: efs-test
          readOnly: false
        - mountPath: /mnt/efs-test2
          name: efs-test2
          readOnly: false
  volumes:
    - name: efs-test
      persistentVolumeClaim:
        claimName: efs-test
    - name: efs-test2
      persistentVolumeClaim:
        claimName: efs-test2

Anything else we need to know?:

Environment

  • Kubernetes version (use kubectl version): 1.14
  • Driver version: 0.2.0
k8s-ci-robot added the kind/bug label Oct 30, 2019
ShanHsl commented Nov 1, 2019

I have the same problem. I did some testing and found that the issue is that a single pod cannot mount multiple PVCs.

ShanHsl commented Nov 1, 2019

Below are the YAML manifests I used for testing and the results.
YAML that does not work:

kind: Pod
apiVersion: v1
metadata:
  name: volume-debugger
spec:
  volumes:
    - name: airflow-dags
      persistentVolumeClaim:
       claimName: airflow-dags
    - name: airflow-logs
      persistentVolumeClaim:
       claimName: airflow-logs
  containers:
    - name: debugger
      image: debug:1.10.5
      command: ['sleep', '36000']
      volumeMounts:
        - name: airflow-dags
          mountPath: /root/airflow/dags
        - name: airflow-logs
          mountPath: /root/airflow/logs

The error message is as follows:
Warning FailedMount 112s (x3 over 6m23s) kubelet, ip-xxxx-xxxx-xxxx-xxxx.ec2.internal Unable to mount volumes for pod "volume-debugger_airflow(8e802dfa-fbef-11e9-99a0-0eea81d2edb2)": timeout expired waiting for volumes to attach or mount for pod "airflow"/"volume-debugger". list of unmounted volumes=[airflow-logs]. list of unattached volumes=[airflow-dags airflow-logs default-token-xt7gr]

ShanHsl commented Nov 1, 2019

The PVs and PVCs:

---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: airflow-dags
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-**********
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: airflow-dags
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 10Gi
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: airflow-logs
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-**********
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: airflow-logs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 10Gi
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: test-volume
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-**********
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-volume
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 10Gi

leakingtapan commented Nov 2, 2019

I did some quick testing and was able to reproduce the same issue. Taking a look.

Here is the sample spec I'm using:

---

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv1
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-26e5fe8d
    volumeAttributes:
      path: /test1

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv2
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-26e5fe8d
    volumeAttributes:
      path: /test2

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim2
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi

---
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: pv1 
      mountPath: /data/pv1
    - name: pv2
      mountPath: /data/pv2
  volumes:
  - name: pv1 
    persistentVolumeClaim:
      claimName: efs-claim1
  - name: pv2
    persistentVolumeClaim:
      claimName: efs-claim2

leakingtapan commented Nov 7, 2019

There wasn't even a call into the driver for NodePublishVolume for efs-pv1; only efs-pv2 shows up in the node plugin logs:

$ kk logs efs-csi-node-kp8rl -n kube-system efs-plugin | grep efs-pv
I1107 19:43:50.153495       1 node.go:49] NodePublishVolume: called with args volume_id:"fs-8bbb3e0a" target_path:"/var/lib/kubelet/pods/cf02d596-adcd-498d-b9a7-aba672b288ef/volumes/kubernetes.io~csi/efs-pv2/mount" volume_capability:<mount:<> access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:"path" value:"/test2" >
I1107 19:43:50.153556       1 node.go:103] NodePublishVolume: creating dir /var/lib/kubelet/pods/cf02d596-adcd-498d-b9a7-aba672b288ef/volumes/kubernetes.io~csi/efs-pv2/mount
I1107 19:43:50.153575       1 node.go:108] NodePublishVolume: mounting fs-8bbb3e0a:/test2 at /var/lib/kubelet/pods/cf02d596-adcd-498d-b9a7-aba672b288ef/volumes/kubernetes.io~csi/efs-pv2/mount with options []
I1107 19:43:50.153586       1 mount_linux.go:135] Mounting cmd (mount) with arguments ([-t efs fs-8bbb3e0a:/test2 /var/lib/kubelet/pods/cf02d596-adcd-498d-b9a7-aba672b288ef/volumes/kubernetes.io~csi/efs-pv2/mount])

Do you see the same in the csi-node logs (the CSI node pod running on the same node where the pod is deployed)? @ShanHsl @stefansedich

I'm checking the kubelet to see what's happening.

leakingtapan commented Nov 7, 2019

I tested consuming two PVs, each with a different EFS filesystem ID, and it works.

---

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv1
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-8bbb3e0a
    #volumeAttributes:
    #  path: /test1

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv2
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-3e63e6bf
    #volumeAttributes:
    #  path: /test2

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim2
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi

---
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: pv1
      mountPath: /data/pv1
    - name: pv2
      mountPath: /data/pv2
  volumes:
  - name: pv1
    persistentVolumeClaim:
      claimName: efs-claim1
  - name: pv2
    persistentVolumeClaim:
      claimName: efs-claim2

leakingtapan commented Nov 7, 2019

I think I figured out why. The cause is that both PVs use the same filesystem ID as the volumeHandle. It makes perfect sense for an RWX volume to reuse the same volumeHandle value, since the volume is expected to be shared by multiple containers.

However, the Kubernetes volume manager's desired_state_of_world cache decides whether a volume needs to be mounted based on a unique volume name here, and csi_plugin's GetVolumeName method derives that name only from the CSI driver name and the volumeHandle. As a result, the second EFS CSI volume is never mounted, since kubelet never calls the CSI driver for it again. This also explains why using a different EFS filesystem ID works.

In comparison, the NFS plugin does use the path as part of the volume name here.
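
To make the collision concrete, here is a minimal Go sketch of the effect described above. It is a simplification, not kubelet's actual code: the separator and the cache internals are assumptions for illustration only.

package main

import "fmt"

// getVolumeName mimics how the in-tree CSI plugin derives the unique volume
// name from the driver name and the volumeHandle (separator simplified).
func getVolumeName(driver, volumeHandle string) string {
	return fmt.Sprintf("%s^%s", driver, volumeHandle)
}

func main() {
	// Both PVs point at the same EFS filesystem; only volumeAttributes.path differs.
	pv1 := getVolumeName("efs.csi.aws.com", "fs-26e5fe8d") // efs-pv1, path /test1
	pv2 := getVolumeName("efs.csi.aws.com", "fs-26e5fe8d") // efs-pv2, path /test2

	// The volume manager keys its desired_state_of_world by the unique volume
	// name, so the second PV collapses into the first and never gets its own
	// NodePublishVolume call.
	desiredStateOfWorld := map[string]bool{}
	desiredStateOfWorld[pv1] = true
	desiredStateOfWorld[pv2] = true
	fmt.Println(len(desiredStateOfWorld)) // prints 1: both PVs map to the same name
}
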

leakingtapan added this to the 0.3 milestone Nov 7, 2019
leakingtapan commented Nov 7, 2019

As required by CSI Volume here, the volume handle needs to uniquely identify the volume.

The fix will be to modify the subpath feature so that the subpath is specified through the volumeHandle, like so:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv2
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-3e63e6bf:/test2

If I remove the subpath field under volumeAttributes, this will unfortunately be a backward-incompatible change.
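
For illustration, a minimal Go sketch of how the node plugin could split such a volumeHandle into a filesystem ID and a subpath is shown below; `parseVolumeHandle` is a hypothetical helper, not the driver's actual implementation.

package main

import (
	"fmt"
	"strings"
)

// parseVolumeHandle splits a volumeHandle of the form "fs-xxx" or
// "fs-xxx:/subpath" into the filesystem ID and the subpath to mount.
// Illustrative only; the real driver may parse this differently.
func parseVolumeHandle(handle string) (fsID, subpath string) {
	parts := strings.SplitN(handle, ":", 2)
	fsID = parts[0]
	subpath = "/"
	if len(parts) == 2 && parts[1] != "" {
		subpath = parts[1]
	}
	return fsID, subpath
}

func main() {
	fsID, subpath := parseVolumeHandle("fs-3e63e6bf:/test2")
	// The node plugin would then mount fsID:subpath at the target path.
	fmt.Printf("mount -t efs %s:%s <target>\n", fsID, subpath)
}
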

@leakingtapan

I'm going to finish up the PR to fix this issue. @ShanHsl @stefansedich let me know if this approach works for you.

2uasimojo added a commit to 2uasimojo/aws-efs-csi-driver that referenced this issue Jun 16, 2020
WIP: unit test
WIP: docs

2uasimojo added a commit to 2uasimojo/aws-efs-csi-driver that referenced this issue Jun 16, 2020
WIP: unit test
WIP: docs

Expands the supported `volumeHandle` formats to include a three-field
version: `{fsid}:{subpath}:{apid}`. This addresses the limitation
originally described in kubernetes-sigs#100 whereby k8s relies solely on the
`volumeHandle` to uniquely distinguish one PV from another.

As part of this fix, specifying `accesspoint=fsap-...` in `mountOptions`
is deprecated.

For more details, see the related issue (kubernetes-sigs#167).

The following scenarios were tested in a live environment:

**Conflicting access point in volumeHandle and mountOptions**

- `volumeHandle: fs::ap1`
- `mountOptions: ['tls', 'accesspoint=ap2']`
- expect: fail
- actual: fail with `Warning  FailedMount  1s (x4 over 4s)  kubelet, ip-10-0-137-122.ec2.internal  MountVolume.SetUp failed for volume "pv-aptest-1" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = InvalidArgument desc = Found conflicting access point IDs in mountOptions (fsap-04d3307beebd04739) and volumeHandle (fsap-057e9a9b209ec9531)`
- result: ✔

**Same access point in volumeHandle and mountOptions**

- `volumeHandle: fs::ap1`
- `mountOptions: ['tls', 'accesspoint=ap1']`
- expect: success
- result: ✔

**Two access points on the same file system**

Also makes sure we populate tls and accesspoint in mountOptions

- `mountOptions: []` (for both)
- PV1:
  - `volumeHandle: fs1::ap1`
- PV2:
  - `volumeHandle: fs1::ap2`
- expect: success, both mounts accessible and distinct
- result: ✔

**Subpaths with access points**

- `mountOptions: []` (for all)
- PV1:
  - `volumeHandle: fs1::ap1` (root -- should be able to see /foo and bar)
- PV2:
  - `volumeHandle: fs1:/foo/bar:ap1` (absolute path)
- PV3:
  - `volumeHandle: fs1:foo/bar:ap1` (relative path)
- expect: success
- actual: success (had to create `$absolutemountpoint/foo/bar` in the fs first, as expected)
- result: ✔

Fixes: kubernetes-sigs#167

Signed-off-by: Eric Fried <efried@redhat.com>
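
As a rough illustration of the three-field `{fsid}:{subpath}:{apid}` format and the conflict check described in the commit message above, the Go sketch below splits the handle and rejects a mismatching `accesspoint=` mount option; `splitHandle` and `checkAccessPoint` are hypothetical helpers, and the real driver's parsing and validation may differ.

package main

import (
	"fmt"
	"strings"
)

// splitHandle parses a volumeHandle of the form {fsid}:{subpath}:{apid},
// where subpath and apid may be empty. Hypothetical helper for illustration.
func splitHandle(handle string) (fsID, subpath, apID string) {
	parts := strings.SplitN(handle, ":", 3)
	fsID = parts[0]
	if len(parts) > 1 {
		subpath = parts[1]
	}
	if len(parts) > 2 {
		apID = parts[2]
	}
	return
}

// checkAccessPoint mirrors the conflict check described in the commit message:
// an accesspoint= mount option must match the access point in the volumeHandle.
func checkAccessPoint(handleAP string, mountOptions []string) error {
	for _, opt := range mountOptions {
		if strings.HasPrefix(opt, "accesspoint=") {
			optAP := strings.TrimPrefix(opt, "accesspoint=")
			if handleAP != "" && optAP != handleAP {
				return fmt.Errorf("conflicting access point IDs in mountOptions (%s) and volumeHandle (%s)", optAP, handleAP)
			}
		}
	}
	return nil
}

func main() {
	fsID, subpath, apID := splitHandle("fs-12345678:/foo/bar:fsap-abcdef01") // placeholder IDs
	fmt.Println(fsID, subpath, apID)
	fmt.Println(checkAccessPoint(apID, []string{"tls", "accesspoint=fsap-deadbeef"})) // conflict -> error
}
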
2uasimojo added a commit to 2uasimojo/aws-efs-csi-driver that referenced this issue Jun 16, 2020
WIP: unit test
WIP: docs

2uasimojo added a commit to 2uasimojo/aws-efs-csi-driver that referenced this issue Jun 18, 2020

2uasimojo added a commit to 2uasimojo/aws-efs-csi-driver that referenced this issue Jun 18, 2020