Volume provisioning with multiple ceph clusters #4857

Closed
lechugaletal opened this issue Sep 23, 2024 · 5 comments
Labels
wontfix This will not be worked on

Comments

@lechugaletal

I'm trying to find out whether there is a way to provision PVs from multiple Ceph clusters, given a StatefulSet that has a single StorageClass defined in its volumeClaimTemplates section.

Given a StatefulSet with spec.volumeClaimTemplates.storageClassName: test-sc (pointing to a single StorageClass), and a StorageClass with a hypothetical spec like:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-sc
parameters:
  clusters:
    - clusterID: cluster1
      csi.storage.k8s.io/controller-expand-secret-name: cluster1
      csi.storage.k8s.io/controller-expand-secret-namespace: csi-rbd
      csi.storage.k8s.io/fstype: ext4
      imageFeatures: layering
      pool: rbd
    - clusterID: cluster2
      csi.storage.k8s.io/controller-expand-secret-name: cluster2
      csi.storage.k8s.io/controller-expand-secret-namespace: csi-rbd
      csi.storage.k8s.io/fstype: ext4
      imageFeatures: layering
      pool: rbd
provisioner: rbd.csi.ceph.com
reclaimPolicy: Retain
volumeBindingMode: Immediate

Would it be possible to provision PVs from cluster1 or cluster2, depending on the zone or region in which the pod is scheduled?
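
For context, as far as I can tell, the StorageClass format ceph-csi supports today takes a single clusterID, so each cluster needs its own StorageClass. Roughly like this (secret names, namespace, and pool are placeholders from my hypothetical setup):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cluster1-sc
provisioner: rbd.csi.ceph.com
parameters:
  # a single cluster per StorageClass; cluster2 would need its own cluster2-sc
  clusterID: cluster1
  pool: rbd
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: cluster1
  csi.storage.k8s.io/provisioner-secret-namespace: csi-rbd
  csi.storage.k8s.io/controller-expand-secret-name: cluster1
  csi.storage.k8s.io/controller-expand-secret-namespace: csi-rbd
  csi.storage.k8s.io/node-stage-secret-name: cluster1
  csi.storage.k8s.io/node-stage-secret-namespace: csi-rbd
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Retain
volumeBindingMode: Immediate

And since volumeClaimTemplates can only reference one StorageClass, that is exactly the limitation I'm running into.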

Thank you very much for your help!

@Madhu-1 (Collaborator) commented Sep 23, 2024

We had the same requirement, tracked in #4611. Currently it isn't a priority for us, but we always welcome community contributions.

@lechugaletal (Author)

I understand 🤔.
I've been reading through the Helm values documentation, and this parameter seems to be related:

  # topologyConstrainedPools: |
  #   [{"poolName":"pool0",
  #     "dataPool":"ec-pool0" # optional, erasure-coded pool for data
  #     "domainSegments":[
  #       {"domainLabel":"region","value":"east"},
  #       {"domainLabel":"zone","value":"zone1"}]},
  #    {"poolName":"pool1",
  #     "dataPool":"ec-pool1" # optional, erasure-coded pool for data
  #     "domainSegments":[
  #       {"domainLabel":"region","value":"east"},
  #       {"domainLabel":"zone","value":"zone2"}]},
  #    {"poolName":"pool2",
  #     "dataPool":"ec-pool2" # optional, erasure-coded pool for data
  #     "domainSegments":[
  #       {"domainLabel":"region","value":"west"},
  #       {"domainLabel":"zone","value":"zone1"}]}
  #   ]

As far as I understand, I can create RBD pools in Ceph and then use labels on nodes/pods to create some sort of data affinity. Is the domainSegments property configured in the Ceph cluster, or is it only related to Kubernetes labels on resources?

Thank you @Madhu-1 for your help!!

@Madhu-1 (Collaborator) commented Sep 24, 2024

https://rook.github.io/docs/rook/v1.14/CRDs/Cluster/external-cluster/topology-for-external-mode/#ceph-cluster contains the same documentation; it gives a better idea of this feature.
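
Roughly, as an untested sketch (label values and pool names below are examples, not from your setup): the pools in topologyConstrainedPools are ordinary RBD pools you create on the Ceph side (how data is placed within them is up to your CRUSH rules), while domainSegments are matched purely on the Kubernetes side against node labels that the nodeplugin advertises (see topology.enabled and topology.domainLabels in the Helm chart).

# nodes carry topology labels, set by the cloud provider or manually, e.g.:
#   kubectl label node worker-1 topology.kubernetes.io/region=east \
#     topology.kubernetes.io/zone=zone1
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: cluster1
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  # ...plus the usual csi.storage.k8s.io/*-secret-name/-namespace parameters
  topologyConstrainedPools: |
    [{"poolName":"pool0",
      "domainSegments":[
        {"domainLabel":"region","value":"east"},
        {"domainLabel":"zone","value":"zone1"}]},
     {"poolName":"pool1",
      "domainSegments":[
        {"domainLabel":"region","value":"east"},
        {"domainLabel":"zone","value":"zone2"}]}]
# delay binding so provisioning can see where the pod was scheduled
volumeBindingMode: WaitForFirstConsumer

With WaitForFirstConsumer, the PVC is provisioned only after the pod is scheduled, and the image is created in the pool whose domainSegments match the chosen node's labels.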

github-actions bot commented

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

github-actions bot added the wontfix label on Oct 24, 2024

github-actions bot commented Nov 1, 2024

This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.

github-actions bot closed this as not planned on Nov 1, 2024