statefulset mount failed following mount_path example #110

Closed
liangrog opened this issue Dec 8, 2019 · 6 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@liangrog

liangrog commented Dec 8, 2019

/kind bug

What happened?
Volume mount failed:

Note: the directory "efs-test" does exist on EFS and has 777 permissions.

Events:
  Type     Reason       Age   From                                                     Message
  ----     ------       ----  ----                                                     -------
  Normal   Scheduled    36s   default-scheduler                                        Successfully assigned default/pod-0 to XXXXXX.compute.internal
  Warning  FailedMount  0s    kubelet, XXXXXX.compute.internal  MountVolume.SetUp failed for volume "efs-test" : rpc error: code = Internal desc = Could not mount "fs-XXXXX:/efs-test:/" at "/var/lib/kubelet/pods/4148d172-1a0f-11ea-8294-02c1d933bf74/volumes/kubernetes.io~csi/efs-bitbucket/mount": mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t efs fs-XXXXX:/efs-test:/ /var/lib/kubelet/pods/4148d172-1a0f-11ea-8294-02c1d933bf74/volumes/kubernetes.io~csi/efs-test/mount
Output: mount.nfs4: mounting fs-XXXXX.efs.ap-southeast-2.amazonaws.com:/efs-test:/ failed, reason given by server: No such file or directory

What you expected to happen?
The volume should be mounted as per the example.

How to reproduce it (as minimally and precisely as possible)?
Instead of mounting the volume in a Pod as in the example, mount it in a StatefulSet.
Anything else we need to know?:

Environment

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.8-eks-b7174d", GitCommit:"b7174db5ee0e30c94a0b9899c20ac980c0850fc8", GitTreeState:"clean", BuildDate:"2019-10-18T17:56:01Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

  • Driver version:
    master branch

@k8s-ci-robot added the kind/bug label on Dec 8, 2019
@leakingtapan
Contributor

Thx for reporting the issue. What's your statefulset manifest?

@liangrog
Author

liangrog commented Dec 13, 2019

Please see the manifest below, which doesn't work:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-test
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxxx:/test
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs
  resources:
    requests:
      storage: 5Gi
  volumeName: efs-test
---
apiVersion: v1
kind: Service
metadata:
  name: efs-test-headless
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: test
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: efs-test
spec:
  serviceName: efs-test-headless
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      volumes:
        - name: efs-test
          persistentVolumeClaim:
            claimName: efs-test
      containers:
      - name: busybox
        image: busybox:latest
        command:
          - sleep
          - "3600"
        volumeMounts:
        - name: efs-test
          mountPath: /efs-test

The error messages will look like this (notice the mount path):

Mounting command: mount
Mounting arguments: -t efs fs-xxxxx:/test:/ /var/lib/kubelet/pods/f8a332d8-1d45-11ea-8294-02c1d933bf74/volumes/kubernetes.io~csi/efs-test/mount

To make it work, you need to update the PV volumeHandle to:

volumeHandle: fs-xxxxx

Then update the StatefulSet volumeMounts:

        volumeMounts:
        - name: efs-test
          mountPath: /efs-test
          subPath: test

As you can see, the mount path in the PV volumeHandle doesn't work.
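
For completeness, the corrected PV with that workaround applied looks roughly like this (fs-xxxxx remains a placeholder for the real file system ID):

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-test
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs
  csi:
    driver: efs.csi.aws.com
    # no path in the volume handle; the subdirectory is selected via subPath in the StatefulSet
    volumeHandle: fs-xxxxx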

@tyrken

tyrken commented Dec 19, 2019

The feature to have paths in the volumeHandle only hit master in #102, 10 days before your bug report, and your mount error message looks very much like what would happen if you used the new config with older code that doesn't understand it.

I haven't tried it myself yet (still reading up), but I suggest forcing a re-pull of the actual latest container images or, more easily, dropping back to the old-style config with the path in volumeAttributes/path, e.g.:

volumeHandle: fs-0434d1e6
volumeAttributes:
  path: /a/b/c/
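
A full PV spec along those lines would look roughly like the sketch below (the PV name, file system ID and path are just example values, and it assumes an older driver image that still reads the path volume attribute):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0434d1e6
    # old-style config: the subdirectory is passed as a volume attribute instead of in the handle
    volumeAttributes:
      path: /a/b/c/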

@rimaulana
Contributor

I agree with @tyrken. I built my own image from the master branch, tried the sample manifest provided, and it works like a charm.

@leakingtapan
Contributor

/close
feel free to reopen if the problem still happens in the latest image

@k8s-ci-robot
Contributor

@leakingtapan: Closing this issue.

In response to this:

/close
feel free to reopen if the problem still happens in the latest image

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
