
With "layerlist: storage", "placementPolicy: FollowTopology" creates a drbd device instead of an LV #102

Closed
dkhachyan opened this issue Dec 19, 2020 · 4 comments · Fixed by #103
Assignees
Labels
bug Something isn't working

Comments

@dkhachyan
Copy link

Hi!
I'm using the latest piraeus-operator. With "layerlist: storage", setting "placementPolicy: FollowTopology" provisions a DRBD device instead of a plain LV.

sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-hdd-lvm
provisioner: linstor.csi.linbit.com
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  layerlist: storage
  placementCount: "1"
  placementPolicy: FollowTopology
  allowRemoteVolumeAccess: "false"
  disklessOnRemaining: "false"
  csi.storage.k8s.io/fstype: xfs
  mountOpts: noatime,discard
  storagePool: hdd

oc -n piraeus-demo get pvc

NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
linstor-lvm   Bound    pvc-15523a8a-e0d2-48c3-9fa4-e79725def3cd   23Gi       RWO            linstor-hdd-lvm   9s

kubectl -n piraeus-demo describe pv pvc-15523a8a-e0d2-48c3-9fa4-e79725def3cd

Name:              pvc-15523a8a-e0d2-48c3-9fa4-e79725def3cd
Labels:            <none>
Annotations:       pv.kubernetes.io/provisioned-by: linstor.csi.linbit.com
Finalizers:        [kubernetes.io/pv-protection external-attacher/linstor-csi-linbit-com]
StorageClass:      linstor-hdd-lvm
Status:            Bound
Claim:             piraeus-demo/linstor-lvm
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          23Gi
Node Affinity:
  Required Terms:
    Term 0:        linbit.com/hostname in [intel-1]
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            linstor.csi.linbit.com
    FSType:            xfs
    VolumeHandle:      pvc-15523a8a-e0d2-48c3-9fa4-e79725def3cd
    ReadOnly:          false
    VolumeAttributes:      storage.kubernetes.io/csiProvisionerIdentity=1608004506593-8081-linstor.csi.linbit.com
Events:                <none>

The LINSTOR volume list shows the volume is backed by a DRBD device:

┊ intel-1 ┊ pvc-15523a8a-e0d2-48c3-9fa4-e79725def3cd ┊ raid10 ┊ 0 ┊ 1013 ┊ /dev/drbd1013 ┊ 23.01 GiB ┊ InUse ┊ UpToDate ┊

@WanzenBug WanzenBug added the bug Something isn't working label Dec 21, 2020
@WanzenBug WanzenBug self-assigned this Dec 21, 2020
@WanzenBug
Member

Hi!

Thanks for the report! It looks like we forgot to pass this information to LINSTOR when not using the AutoPlace scheduler. I have a fix ready; I just need to test it properly.

@dkhachyan
Author

Hi! The same problem occurs after upgrading to:
operator: 1.7.1
csi: 0.17.0
linstor-server: 1.17.0

@WanzenBug
Member

Please switch to the AutoplaceTopology placement policy. You can simply remove the placementPolicy parameter from your storage class, since that policy is the default.

FollowTopology is broken in subtle and not-so-subtle ways.
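For reference, a sketch of the adjusted storage class with the placementPolicy parameter removed, so the default AutoplaceTopology policy applies (all other parameters kept as in the original report; verify against your own setup):

sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-hdd-lvm
provisioner: linstor.csi.linbit.com
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  layerlist: storage
  placementCount: "1"
  allowRemoteVolumeAccess: "false"
  disklessOnRemaining: "false"
  csi.storage.k8s.io/fstype: xfs
  mountOpts: noatime,discard
  storagePool: hdd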

@dkhachyan
Author

Thank you!
