
Error 95 Operation not supported when mounting pvcs #344

Closed
hyperbolic2346 opened this issue May 2, 2019 · 10 comments

@hyperbolic2346
Contributor

I'm running a test that worked a few weeks ago but is now failing with error 95. I have Kubernetes 1.14.1 running in AWS.
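
For reference, a minimal reproduction of the failing test looks roughly like the following (reconstructed from the describe output below; the StorageClass name and the requested size are assumptions):

# PVC backed by the rbd.csi.ceph.com StorageClass (name and size assumed)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-xfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-xfs

# Pod that writes a marker file to the mounted volume
apiVersion: v1
kind: Pod
metadata:
  name: ceph-xfs-write-test
spec:
  containers:
    - name: ceph-xfs-write-test
      image: ubuntu
      command: ["/bin/bash", "-c", "echo 'CEPH TEST' > /data/ceph"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
  volumes:
    - name: shared-data
      persistentVolumeClaim:
        claimName: ceph-xfs-pvc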

$ kubectl describe po/ceph-xfs-write-test
Name:         ceph-xfs-write-test
Namespace:    default
Node:         ip-172-31-30-145.ec2.internal/172.31.30.145
Start Time:   Thu, 02 May 2019 13:19:53 -0400
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"ceph-xfs-write-test","namespace":"default"},"spec":{"containers":[{"c...
Status:       Pending
IP:           
Containers:
  ceph-xfs-write-test:
    Container ID:  
    Image:         ubuntu
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/bash
      -c
      echo 'CEPH TEST' > /data/ceph
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data from shared-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zq2p6 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  shared-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  ceph-xfs-pvc
    ReadOnly:   false
  default-token-zq2p6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-zq2p6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age   From                                    Message
  ----     ------                  ----  ----                                    -------
  Normal   Scheduled               22s   default-scheduler                       Successfully assigned default/ceph-xfs-write-test to ip-172-31-30-145.ec2.internal
  Normal   SuccessfulAttachVolume  22s   attachdetach-controller                 AttachVolume.Attach succeeded for volume "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912"
  Warning  FailedMount             13s   kubelet, ip-172-31-30-145.ec2.internal  MountVolume.SetUp failed for volume "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912" : rpc error: code = Unknown desc = fail to check rbd image status with: (exit status 95), rbd output: (did not load config file, using default settings.
2019-05-02 17:20:02.146 7fbf9cba2b00 -1 Errors while parsing config file!
2019-05-02 17:20:02.146 7fbf9cba2b00 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:02.146 7fbf9cba2b00 -1 parse_file: cannot open /root/.ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:02.146 7fbf9cba2b00 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2019-05-02 17:20:02.146 7fbf9cba2b00 -1 Errors while parsing config file!
2019-05-02 17:20:02.146 7fbf9cba2b00 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:02.146 7fbf9cba2b00 -1 parse_file: cannot open /root/.ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:02.146 7fbf9cba2b00 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2019-05-02 17:20:02.178 7fbf9cba2b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:02.182 7fbf9cba2b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:02.182 7fbf9cba2b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:02.186 7fbf9cba2b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:02.186 7fbf9cba2b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:02.186 7fbf9cba2b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:02.210 7fbf75ffb700 -1 librbd::image::RefreshRequest: failed to retrieve group: (95) Operation not supported
2019-05-02 17:20:02.214 7fbf757fa700 -1 librbd::image::OpenRequest: failed to refresh image: (95) Operation not supported
2019-05-02 17:20:02.214 7fbf757fa700 -1 librbd::ImageState: 0x562841ce9500 failed to open image: (95) Operation not supported
rbd: error opening image pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912: (95) Operation not supported
)
  Warning  FailedMount  13s  kubelet, ip-172-31-30-145.ec2.internal  MountVolume.SetUp failed for volume "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912" : rpc error: code = Unknown desc = fail to check rbd image status with: (exit status 95), rbd output: (did not load config file, using default settings.
2019-05-02 17:20:02.874 7f9d0d4e6b00 -1 Errors while parsing config file!
2019-05-02 17:20:02.874 7f9d0d4e6b00 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:02.874 7f9d0d4e6b00 -1 parse_file: cannot open /root/.ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:02.874 7f9d0d4e6b00 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2019-05-02 17:20:02.874 7f9d0d4e6b00 -1 Errors while parsing config file!
2019-05-02 17:20:02.874 7f9d0d4e6b00 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:02.874 7f9d0d4e6b00 -1 parse_file: cannot open /root/.ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:02.874 7f9d0d4e6b00 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2019-05-02 17:20:02.894 7f9d0d4e6b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:02.898 7f9d0d4e6b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:02.898 7f9d0d4e6b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:02.898 7f9d0d4e6b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:02.902 7f9d0d4e6b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:02.902 7f9d0d4e6b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:02.922 7f9ce67fc700 -1 librbd::image::RefreshRequest: failed to retrieve group: (95) Operation not supported
2019-05-02 17:20:02.922 7f9ce5ffb700 -1 librbd::image::OpenRequest: failed to refresh image: (95) Operation not supported
2019-05-02 17:20:02.922 7f9ce5ffb700 -1 librbd::ImageState: 0x5611a57fc7d0 failed to open image: (95) Operation not supported
rbd: error opening image pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912: (95) Operation not supported
)
  Warning  FailedMount  11s  kubelet, ip-172-31-30-145.ec2.internal  MountVolume.SetUp failed for volume "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912" : rpc error: code = Unknown desc = fail to check rbd image status with: (exit status 95), rbd output: (did not load config file, using default settings.
2019-05-02 17:20:04.070 7f1185002b00 -1 Errors while parsing config file!
2019-05-02 17:20:04.070 7f1185002b00 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:04.070 7f1185002b00 -1 parse_file: cannot open /root/.ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:04.070 7f1185002b00 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2019-05-02 17:20:04.074 7f1185002b00 -1 Errors while parsing config file!
2019-05-02 17:20:04.074 7f1185002b00 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:04.074 7f1185002b00 -1 parse_file: cannot open /root/.ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:04.074 7f1185002b00 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2019-05-02 17:20:04.094 7f1185002b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:04.098 7f1185002b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:04.098 7f1185002b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:04.102 7f1185002b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:04.102 7f1185002b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:04.102 7f1185002b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:04.126 7f1165ffb700 -1 librbd::image::RefreshRequest: failed to retrieve group: (95) Operation not supported
2019-05-02 17:20:04.126 7f11657fa700 -1 librbd::image::OpenRequest: failed to refresh image: (95) Operation not supported
2019-05-02 17:20:04.126 7f11657fa700 -1 librbd::ImageState: 0x55bd283b1ce0 failed to open image: (95) Operation not supported
rbd: error opening image pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912: (95) Operation not supported
)
  Warning  FailedMount  9s  kubelet, ip-172-31-30-145.ec2.internal  MountVolume.SetUp failed for volume "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912" : rpc error: code = Unknown desc = fail to check rbd image status with: (exit status 95), rbd output: (did not load config file, using default settings.
2019-05-02 17:20:06.282 7fb32c315b00 -1 Errors while parsing config file!
2019-05-02 17:20:06.282 7fb32c315b00 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:06.282 7fb32c315b00 -1 parse_file: cannot open /root/.ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:06.282 7fb32c315b00 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2019-05-02 17:20:06.282 7fb32c315b00 -1 Errors while parsing config file!
2019-05-02 17:20:06.282 7fb32c315b00 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:06.282 7fb32c315b00 -1 parse_file: cannot open /root/.ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:06.282 7fb32c315b00 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2019-05-02 17:20:06.310 7fb32c315b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:06.322 7fb32c315b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:06.322 7fb32c315b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:06.326 7fb32c315b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:06.326 7fb32c315b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:06.326 7fb32c315b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:06.362 7fb305ffb700 -1 librbd::image::RefreshRequest: failed to retrieve group: (95) Operation not supported
2019-05-02 17:20:06.362 7fb3057fa700 -1 librbd::image::OpenRequest: failed to refresh image: (95) Operation not supported
2019-05-02 17:20:06.362 7fb3057fa700 -1 librbd::ImageState: 0x55863d35f850 failed to open image: (95) Operation not supported
rbd: error opening image pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912: (95) Operation not supported
)
  Warning  FailedMount  5s  kubelet, ip-172-31-30-145.ec2.internal  MountVolume.SetUp failed for volume "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912" : rpc error: code = Unknown desc = fail to check rbd image status with: (exit status 95), rbd output: (did not load config file, using default settings.
2019-05-02 17:20:10.490 7fead2146b00 -1 Errors while parsing config file!
2019-05-02 17:20:10.490 7fead2146b00 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:10.490 7fead2146b00 -1 parse_file: cannot open /root/.ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:10.490 7fead2146b00 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2019-05-02 17:20:10.490 7fead2146b00 -1 Errors while parsing config file!
2019-05-02 17:20:10.490 7fead2146b00 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:10.490 7fead2146b00 -1 parse_file: cannot open /root/.ceph/ceph.conf: (2) No such file or directory
2019-05-02 17:20:10.490 7fead2146b00 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2019-05-02 17:20:10.514 7fead2146b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:10.514 7fead2146b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:10.514 7fead2146b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:10.518 7fead2146b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:10.518 7fead2146b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:10.518 7fead2146b00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-05-02 17:20:10.546 7feab37fe700 -1 librbd::image::RefreshRequest: failed to retrieve group: (95) Operation not supported
2019-05-02 17:20:10.546 7feab2ffd700 -1 librbd::image::OpenRequest: failed to refresh image: (95) Operation not supported
2019-05-02 17:20:10.546 7feab2ffd700 -1 librbd::ImageState: 0x5625360ce320 failed to open image: (95) Operation not supported
rbd: error opening image pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912: (95) Operation not supported
$ kubectl describe ds csi-rbdplugin
Name:           csi-rbdplugin
Selector:       app=csi-rbdplugin
Node-Selector:  <none>
Labels:         cdk-addons=true
Annotations:    deprecated.daemonset.template.generation: 1
                kubectl.kubernetes.io/last-applied-configuration:
                  {"apiVersion":"apps/v1beta2","kind":"DaemonSet","metadata":{"annotations":{},"labels":{"cdk-addons":"true"},"name":"csi-rbdplugin","namesp...
Desired Number of Nodes Scheduled: 3
Current Number of Nodes Scheduled: 3
Number of Nodes Scheduled with Up-to-date Pods: 3
Number of Nodes Scheduled with Available Pods: 3
Number of Nodes Misscheduled: 0
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=csi-rbdplugin
  Service Account:  rbd-csi-nodeplugin
  Containers:
   driver-registrar:
    Image:      quay.io/k8scsi/csi-node-driver-registrar:v1.0.2
    Port:       <none>
    Host Port:  <none>
    Args:
      --v=5
      --csi-address=/csi/csi.sock
      --kubelet-registration-path=/var/lib/kubelet/plugins/rbd.csi.ceph.com/csi.sock
    Environment:
      KUBE_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /csi from plugin-dir (rw)
      /registration from registration-dir (rw)
   csi-rbdplugin:
    Image:      quay.io/cephcsi/rbdplugin:v1.0.0
    Port:       <none>
    Host Port:  <none>
    Args:
      --nodeid=$(NODE_ID)
      --endpoint=$(CSI_ENDPOINT)
      --v=5
      --drivername=rbd.csi.ceph.com
      --containerized=true
      --metadatastorage=k8s_configmap
    Environment:
      HOST_ROOTFS:    /rootfs
      NODE_ID:         (v1:spec.nodeName)
      POD_NAMESPACE:   (v1:metadata.namespace)
      CSI_ENDPOINT:   unix://var/lib/kubelet/plugins_registry/rbd.csi.ceph.com/csi.sock
    Mounts:
      /dev from host-dev (rw)
      /lib/modules from lib-modules (ro)
      /rootfs from host-rootfs (rw)
      /sys from host-sys (rw)
      /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/ from plugin-mount-dir (rw)
      /var/lib/kubelet/plugins_registry/rbd.csi.ceph.com from plugin-dir (rw)
      /var/lib/kubelet/pods from pods-mount-dir (rw)
  Volumes:
   plugin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/rbd.csi.ceph.com
    HostPathType:  DirectoryOrCreate
   plugin-mount-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/
    HostPathType:  DirectoryOrCreate
   registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry/
    HostPathType:  Directory
   pods-mount-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods
    HostPathType:  Directory
   host-dev:
    Type:          HostPath (bare host directory volume)
    Path:          /dev
    HostPathType:  
   host-rootfs:
    Type:          HostPath (bare host directory volume)
    Path:          /
    HostPathType:  
   host-sys:
    Type:          HostPath (bare host directory volume)
    Path:          /sys
    HostPathType:  
   lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
Events:
  Type    Reason            Age   From                  Message
  ----    ------            ----  ----                  -------
  Normal  SuccessfulCreate  28m   daemonset-controller  Created pod: csi-rbdplugin-7zjrh
  Normal  SuccessfulCreate  28m   daemonset-controller  Created pod: csi-rbdplugin-wvfsm
  Normal  SuccessfulCreate  28m   daemonset-controller  Created pod: csi-rbdplugin-z7wbp
$ kubectl describe po/csi-rbdplugin-attacher-0
Name:           csi-rbdplugin-attacher-0
Namespace:      default
Node:           ip-172-31-69-186.ec2.internal/172.31.69.186
Start Time:     Thu, 02 May 2019 13:00:02 -0400
Labels:         app=csi-rbdplugin-attacher
                controller-revision-hash=csi-rbdplugin-attacher-7dbc8c4969
                statefulset.kubernetes.io/pod-name=csi-rbdplugin-attacher-0
Annotations:    <none>
Status:         Running
IP:             10.1.77.6
Controlled By:  StatefulSet/csi-rbdplugin-attacher
Containers:
  csi-rbdplugin-attacher:
    Container ID:  docker://728d33abf5705c3c86cedd9529dc831a8726246fcac0d530b64a7e98a20f436f
    Image:         quay.io/k8scsi/csi-attacher:v1.0.1
    Image ID:      docker-pullable://quay.io/k8scsi/csi-attacher@sha256:6425af42299ba211de685a94953a5c4c6fcbfd2494e445437dd9ebd70b28bf8a
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --csi-address=$(ADDRESS)
    State:          Running
      Started:      Thu, 02 May 2019 13:00:19 -0400
    Ready:          True
    Restart Count:  0
    Environment:
      ADDRESS:  /var/lib/kubelet/plugins/rbd.csi.ceph.com/csi.sock
    Mounts:
      /var/lib/kubelet/plugins/rbd.csi.ceph.com from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from rbd-csi-attacher-token-66dff (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  socket-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/rbd.csi.ceph.com
    HostPathType:  DirectoryOrCreate
  rbd-csi-attacher-token-66dff:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  rbd-csi-attacher-token-66dff
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From                                    Message
  ----     ------            ----               ----                                    -------
  Warning  FailedScheduling  29m (x2 over 29m)  default-scheduler                       no nodes available to schedule pods
  Normal   Scheduled         29m                default-scheduler                       Successfully assigned default/csi-rbdplugin-attacher-0 to ip-172-31-69-186.ec2.internal
  Normal   Pulling           29m                kubelet, ip-172-31-69-186.ec2.internal  Pulling image "quay.io/k8scsi/csi-attacher:v1.0.1"
  Normal   Pulled            29m                kubelet, ip-172-31-69-186.ec2.internal  Successfully pulled image "quay.io/k8scsi/csi-attacher:v1.0.1"
  Normal   Created           28m                kubelet, ip-172-31-69-186.ec2.internal  Created container csi-rbdplugin-attacher
  Normal   Started           28m                kubelet, ip-172-31-69-186.ec2.internal  Started container csi-rbdplugin-attacher
$ kubectl logs csi-rbdplugin-attacher-0
I0502 17:00:19.247853       1 main.go:76] Version: v1.0.1-0-gb7dadac
I0502 17:00:19.248772       1 connection.go:89] Connecting to /var/lib/kubelet/plugins/rbd.csi.ceph.com/csi.sock
I0502 17:00:19.249029       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:19.249343       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:20.249315       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:20.249338       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:21.291323       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:21.291536       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:22.467731       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:23.533589       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:24.508792       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:24.517185       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:25.478809       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:25.478880       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:26.553703       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:27.380109       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:28.242926       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:29.081665       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:30.002518       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:30.002581       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:31.008306       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:31.008371       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:32.133914       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:33.019809       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:33.972073       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:33.972160       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:34.899470       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:35.887130       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:36.800531       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:37.717855       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:37.717966       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:38.789651       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:39.677221       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:39.678150       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:40.558576       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:41.503045       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:41.503099       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:42.531447       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:42.531608       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:43.676574       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:43.676627       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:44.593991       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:45.512929       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:45.513002       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:46.614196       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:46.614251       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:47.496971       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:48.643212       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:48.643578       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:49.722115       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:50.731786       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:51.543164       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:51.543309       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:52.406679       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:52.406771       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:53.449761       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:53.449809       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:54.642127       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:54.642261       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:55.474360       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:55.474409       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:56.512442       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:00:56.512472       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:57.337076       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:58.415156       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:00:59.337080       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:01:00.206547       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:01:01.223117       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:01:02.240974       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:01:03.152452       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:01:04.121878       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:01:05.134259       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:01:06.035795       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:01:06.948787       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:01:08.064355       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:01:09.009204       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:01:10.161489       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:01:10.161578       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:01:11.080492       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:01:11.080552       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:01:12.238389       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:01:12.238410       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:01:13.081064       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:01:13.081089       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:01:14.271990       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:01:15.101845       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:01:15.990812       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:01:15.990896       1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0502 17:01:17.063496       1 connection.go:116] Still trying, connection is CONNECTING
I0502 17:01:17.063577       1 connection.go:113] Connected
I0502 17:01:17.063597       1 connection.go:242] GRPC call: /csi.v1.Identity/Probe
I0502 17:01:17.063603       1 connection.go:243] GRPC request: {}
I0502 17:01:17.066927       1 connection.go:245] GRPC response: {}
I0502 17:01:17.067330       1 connection.go:246] GRPC error: <nil>
I0502 17:01:17.067336       1 main.go:211] Probe succeeded
I0502 17:01:17.067353       1 connection.go:242] GRPC call: /csi.v1.Identity/GetPluginInfo
I0502 17:01:17.067361       1 connection.go:243] GRPC request: {}
I0502 17:01:17.069180       1 connection.go:245] GRPC response: {"name":"rbd.csi.ceph.com","vendor_version":"1.0.0"}
I0502 17:01:17.069680       1 connection.go:246] GRPC error: <nil>
I0502 17:01:17.069688       1 main.go:128] CSI driver name: "rbd.csi.ceph.com"
I0502 17:01:17.069695       1 connection.go:242] GRPC call: /csi.v1.Identity/GetPluginCapabilities
I0502 17:01:17.069700       1 connection.go:243] GRPC request: {}
I0502 17:01:17.071915       1 connection.go:245] GRPC response: {"capabilities":[{"Type":{"Service":{"type":1}}}]}
I0502 17:01:17.073102       1 connection.go:246] GRPC error: <nil>
I0502 17:01:17.073110       1 connection.go:242] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
I0502 17:01:17.073114       1 connection.go:243] GRPC request: {}
I0502 17:01:17.077679       1 connection.go:245] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":7}}}]}
I0502 17:01:17.080556       1 connection.go:246] GRPC error: <nil>
I0502 17:01:17.080595       1 main.go:152] CSI driver supports ControllerPublishUnpublish, using real CSI handler
I0502 17:01:17.080751       1 controller.go:111] Starting CSI attacher
E0502 17:01:37.944394       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=11, ErrCode=NO_ERROR, debug=""
E0502 17:01:37.944800       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=11, ErrCode=NO_ERROR, debug=""
E0502 17:01:37.946023       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=11, ErrCode=NO_ERROR, debug=""
E0502 17:01:37.947384       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.PersistentVolume: Get https://10.152.183.1:443/api/v1/persistentvolumes?resourceVersion=882&timeout=8m39s&timeoutSeconds=519&watch=true: dial tcp 10.152.183.1:443: connect: connection refused
E0502 17:01:37.948688       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.VolumeAttachment: Get https://10.152.183.1:443/apis/storage.k8s.io/v1beta1/volumeattachments?resourceVersion=889&timeout=9m0s&timeoutSeconds=540&watch=true: dial tcp 10.152.183.1:443: connect: connection refused
E0502 17:01:37.949189       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Node: Get https://10.152.183.1:443/api/v1/nodes?resourceVersion=1119&timeout=5m54s&timeoutSeconds=354&watch=true: dial tcp 10.152.183.1:443: connect: connection refused
E0502 17:01:56.021280       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=11, ErrCode=NO_ERROR, debug=""
E0502 17:01:56.021296       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=11, ErrCode=NO_ERROR, debug=""
E0502 17:01:56.021336       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=11, ErrCode=NO_ERROR, debug=""
E0502 17:01:56.023146       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Node: Get https://10.152.183.1:443/api/v1/nodes?resourceVersion=1158&timeout=5m27s&timeoutSeconds=327&watch=true: dial tcp 10.152.183.1:443: connect: connection refused
E0502 17:02:28.328999       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.PersistentVolume: Get https://10.152.183.1:443/api/v1/persistentvolumes?resourceVersion=1123&timeout=8m33s&timeoutSeconds=513&watch=true: dial tcp 10.152.183.1:443: connect: connection refused
E0502 17:02:28.329083       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Node: Get https://10.152.183.1:443/api/v1/nodes?resourceVersion=1214&timeout=8m27s&timeoutSeconds=507&watch=true: dial tcp 10.152.183.1:443: connect: connection refused
E0502 17:02:28.329138       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.VolumeAttachment: Get https://10.152.183.1:443/apis/storage.k8s.io/v1beta1/volumeattachments?resourceVersion=1123&timeout=6m44s&timeoutSeconds=404&watch=true: dial tcp 10.152.183.1:443: connect: connection refused
I0502 17:19:47.256111       1 controller.go:203] Started PV processing "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912"
I0502 17:19:47.256143       1 csi_handler.go:418] CSIHandler: processing PV "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912"
I0502 17:19:47.256153       1 csi_handler.go:422] CSIHandler: processing PV "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912": no deletion timestamp, ignoring
I0502 17:19:47.263938       1 controller.go:203] Started PV processing "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912"
I0502 17:19:47.263962       1 csi_handler.go:418] CSIHandler: processing PV "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912"
I0502 17:19:47.263968       1 csi_handler.go:422] CSIHandler: processing PV "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912": no deletion timestamp, ignoring
I0502 17:19:47.272167       1 controller.go:203] Started PV processing "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912"
I0502 17:19:47.272200       1 csi_handler.go:418] CSIHandler: processing PV "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912"
I0502 17:19:47.272224       1 csi_handler.go:422] CSIHandler: processing PV "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912": no deletion timestamp, ignoring
I0502 17:19:53.402442       1 controller.go:173] Started VA processing "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac"
I0502 17:19:53.402467       1 csi_handler.go:93] CSIHandler: processing VA "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac"
I0502 17:19:53.402473       1 csi_handler.go:120] Attaching "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac"
I0502 17:19:53.402480       1 csi_handler.go:259] Starting attach operation for "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac"
I0502 17:19:53.402551       1 csi_handler.go:220] Adding finalizer to PV "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912"
I0502 17:19:53.410472       1 controller.go:203] Started PV processing "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912"
I0502 17:19:53.410496       1 csi_handler.go:418] CSIHandler: processing PV "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912"
I0502 17:19:53.410504       1 csi_handler.go:422] CSIHandler: processing PV "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912": no deletion timestamp, ignoring
I0502 17:19:53.410547       1 csi_handler.go:228] PV finalizer added to "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912"
I0502 17:19:53.411632       1 csi_handler.go:524] Can't get CSINodeInfo ip-172-31-30-145.ec2.internal: the server could not find the requested resource (get csinodeinfos.csi.storage.k8s.io ip-172-31-30-145.ec2.internal)
I0502 17:19:53.411885       1 csi_handler.go:181] VA finalizer added to "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac"
I0502 17:19:53.411901       1 csi_handler.go:195] NodeID annotation added to "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac"
I0502 17:19:53.417818       1 csi_handler.go:205] VolumeAttachment "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac" updated with finalizer and/or NodeID annotation
I0502 17:19:53.417849       1 connection.go:242] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0502 17:19:53.417854       1 connection.go:243] GRPC request: {"node_id":"ip-172-31-30-145.ec2.internal","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"adminid":"admin","fsType":"xfs","imageFeatures":"layering","imageFormat":"2","monitors":"172.31.16.16:6789 172.31.79.249:6789 172.31.83.31:6789","pool":"xfs-pool","storage.kubernetes.io/csiProvisionerIdentity":"1556816415867-8081-","userid":"admin"},"volume_id":"csi-rbd-vol-7b6b1b1c-6cfe-11e9-be11-8a129e7529b1"}
I0502 17:19:53.422390       1 connection.go:245] GRPC response: {}
I0502 17:19:53.422780       1 connection.go:246] GRPC error: <nil>
I0502 17:19:53.422789       1 csi_handler.go:133] Attached "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac"
I0502 17:19:53.422797       1 util.go:33] Marking as attached "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac"
I0502 17:19:53.429125       1 util.go:43] Marked as attached "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac"
I0502 17:19:53.429146       1 csi_handler.go:139] Fully attached "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac"
I0502 17:19:53.429154       1 csi_handler.go:109] CSIHandler: finished processing "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac"
I0502 17:19:53.429208       1 controller.go:173] Started VA processing "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac"
I0502 17:19:53.429218       1 csi_handler.go:93] CSIHandler: processing VA "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac"
I0502 17:19:53.429225       1 csi_handler.go:115] "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac" is already attached
I0502 17:19:53.429236       1 csi_handler.go:109] CSIHandler: finished processing "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac"
I0502 17:22:34.772726       1 controller.go:173] Started VA processing "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac"
I0502 17:22:34.772766       1 csi_handler.go:93] CSIHandler: processing VA "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac"
I0502 17:22:34.772774       1 csi_handler.go:115] "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac" is already attached
I0502 17:22:34.772782       1 csi_handler.go:109] CSIHandler: finished processing "csi-26c5f4392c6c0f0d6ab40442cd3c39bb6e24d30fce3edb11af17072bfc1216ac"
I0502 17:22:34.777487       1 controller.go:203] Started PV processing "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912"
I0502 17:22:34.777511       1 csi_handler.go:418] CSIHandler: processing PV "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912"
I0502 17:22:34.777520       1 csi_handler.go:422] CSIHandler: processing PV "pvc-7b6245e7-6cfe-11e9-a5f8-12e255a35912": no deletion timestamp, ignoring
@dillaman

dillaman commented May 2, 2019

@hyperbolic2346 You are hitting [1] - the initial Nautilus release of librbd fails to open images stored on a Luminous release cluster.

[1] http://tracker.ceph.com/issues/38834
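
For anyone wanting to confirm the same client/cluster version mismatch, the rbd version inside the CSI plugin pod can be compared with the cluster release (a quick sketch; the pod name is taken from the DaemonSet events above, and any of the csi-rbdplugin pods works):

# client side: rbd/librbd version used by the node plugin
kubectl exec csi-rbdplugin-7zjrh -c csi-rbdplugin -- rbd --version
# server side: Ceph release(s) running on the cluster, run with cluster admin access
ceph versions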

@hyperbolic2346
Contributor Author

hyperbolic2346 commented May 2, 2019

@dillaman Yes, that seems to be exactly my issue. Thank you. Do we have an estimate on how long it will take for that fix to roll out into the containers? Due to #345, I can't just roll back to a previous Docker image to fix this.

@dillaman

dillaman commented May 2, 2019

@hyperbolic2346 I'm not really involved in the CSI work, so I couldn't say. The 14.2.1 release is out, so a new rebuild of the container should hopefully pick up the fixed packages.

@ShyamsundarR
Contributor

@dillaman The latest builds of 1.0 use Mimic, afaict, and the Makefile and the Dockerfile in deploy/rbd/docker/ seem to confirm that. So should this issue also be present with a Mimic client against a Luminous server?

@hyperbolic2346 We started building with Nautilus versions very recently; it looks like we need some updates to the 1.0 branch to make it use the same Ceph images as the cephcsi build uses.
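
One way to check which librbd a published plugin image actually carries (a quick sketch, assuming docker is available locally; the tag is the one from the DaemonSet above):

docker run --rm --entrypoint rpm quay.io/cephcsi/rbdplugin:v1.0.0 -q librbd1 ceph-common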

@dillaman

dillaman commented May 2, 2019

@dillaman The latest builds of 1.0 use Mimic, afaict, and the Makefile and the Dockerfile in deploy/rbd/docker/ seem to confirm that. So should this issue also be present with a Mimic client against a Luminous server?

Negative -- it was only an issue in 14.2.0.

@dillaman

dillaman commented May 2, 2019

The package manifest shows that 14.2.0 of librbd1 is installed [1]

[1] https://quay.io/repository/cephcsi/rbdplugin/manifest/sha256:77f9dfaa6f2408e94262e97aba9994bcc19f6892f4c01c1db6eacce7340d0d86?tab=packages

@ShyamsundarR
Contributor

The package manifest shows that 14.2.0 of librbd1 is installed [1]

[1] https://quay.io/repository/cephcsi/rbdplugin/manifest/sha256:77f9dfaa6f2408e94262e97aba9994bcc19f6892f4c01c1db6eacce7340d0d86?tab=packages

Right, the line that calls out mimic has no meaning in the Dockerfile:

yum install -y centos-release-ceph && yum install -y ceph-common e2fsprogs xfsprogs rbd-nbd && yum clean all

ENV CEPH_VERSION=mimic

The line above it actually installs the latest `centos-release-ceph` and moves on from there. Thanks for catching the package manifest, @dillaman.

The latest in the CentOS Storage SIG repo is still 14.2.0.

@hyperbolic2346 We either need to fix the Dockerfile to use what cephcsi uses or wait for the CentOS Storage SIG to update to the latest (I assume the former is better from a future-proofing perspective). Let me start fixing that.
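
A rough sketch of the direction, rebasing the plugin image on the Ceph container image instead of plain CentOS 7 plus the Storage SIG repo (illustrative only; the tag, package list, and binary path are assumptions, not the exact change that was merged):

# Illustrative Dockerfile sketch: inherit librbd from the Ceph 14.2.x image
FROM ceph/ceph:v14.2
RUN yum install -y e2fsprogs xfsprogs rbd-nbd && yum clean all
COPY rbdplugin /usr/local/bin/rbdplugin   # binary name/path assumed
ENTRYPOINT ["/usr/local/bin/rbdplugin"]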

ShyamsundarR added a commit to ShyamsundarR/ceph-csi that referenced this issue May 3, 2019
plugin images were using centos 7 images as the base image. This
is now moved to the ceph container image that supports required
content since 14.2 version.

Fixes ceph#344

Signed-off-by: ShyamsundarR <srangana@redhat.com>
ShyamsundarR added a commit to ShyamsundarR/ceph-csi that referenced this issue May 24, 2019
plugin images were using centos 7 images as the base image. This
is now moved to the ceph container image that supports required
content since 14.2 version.

Fixes ceph#344

Signed-off-by: ShyamsundarR <srangana@redhat.com>
ShyamsundarR added a commit to ShyamsundarR/ceph-csi that referenced this issue May 24, 2019
plugin images were using centos 7 images as the base image. This
is now moved to the ceph container image that supports required
content since 14.2 version.

Fixes ceph#344

Signed-off-by: ShyamsundarR <srangana@redhat.com>
mergify bot pushed a commit that referenced this issue May 28, 2019
plugin images were using centos 7 images as the base image. This
is now moved to the ceph container image that supports required
content since 14.2 version.

Fixes #344

Signed-off-by: ShyamsundarR <srangana@redhat.com>
@Madhu-1
Collaborator

Madhu-1 commented May 28, 2019

@hyperbolic2346 This issue is fixed in #346.
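
Once a rebuilt image is published, a quick way to confirm the fix landed is to run the Ceph CLI inside it and check that it reports 14.2.1 or later (the tag below is only an example; use whichever tag the rebuild ships under):

docker run --rm --entrypoint ceph quay.io/cephcsi/rbdplugin:v1.0.0 --version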

@humblec
Collaborator

humblec commented May 28, 2019

Yep, I am closing this issue for now. Please feel free to reopen, @hyperbolic2346, and thanks for reporting it!

@humblec humblec closed this as completed May 28, 2019
@hyperbolic2346
Contributor Author

Yes, this is fixed for me now. Thank you all for your help.

openshift-cherrypick-robot pushed a commit to openshift-cherrypick-robot/ceph-csi that referenced this issue Sep 9, 2024
Syncing latest changes from devel for ceph-csi