
Don't add topology details in the pv volumeAttributes #4497

Closed
parth-gr opened this issue Mar 14, 2024 · 0 comments · Fixed by #4499
Labels: bug (Something isn't working), component/rbd (Issues related to RBD)


Describe the bug

When provisioning with topology-constrained pools, the topology details are also added to the PV's volumeAttributes. The proposal is to stop adding those details, as they are not required there.

An example:

~ % oc get pv pvc-d1a26969-a15c-4e60-8f76-847d1fdd6041 -o=jsonpath='{.spec.csi.volumeAttributes}' | jq
{
 "clusterID": "openshift-storage",
 "imageFeatures": "layering,deep-flatten,exclusive-lock,object-map,fast-diff",
 "imageFormat": "2",
 "imageName": "csi-vol-937ee9e9-2f99-11ed-b2d0-0a580a83001a",
 "journalPool": "ocs-storagecluster-cephblockpool",
 "pool": "ocs-storagecluster-cephblockpool-us-east-1a",
 "storage.kubernetes.io/csiProvisionerIdentity": "1662652453217-8081-openshift-storage.rbd.csi.ceph.com",
 "topologyConstrainedPools": "[\n {\n \"poolName\": \"ocs-storagecluster-cephblockpool-us-east-1a\",\n \"domainSegments\": [\n {\n \"domainLabel\": \"zone\",\n \"value\": \"us-east-1a\"\n }\n ]\n },\n {\n \"poolName\": \"ocs-storagecluster-cephblockpool-us-east-1b\",\n \"domainSegments\": [\n {\n \"domainLabel\": \"zone\",\n \"value\": \"us-east-1b\"\n }\n ]\n },\n {\n \"poolName\": \"ocs-storagecluster-cephblockpool-us-east-1c\",\n \"domainSegments\": [\n {\n \"domainLabel\": \"zone\",\n \"value\": \"us-east-1c\"\n }\n ]\n }\n]"
}
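
For comparison, a rough sketch of what the same PV's volumeAttributes could look like once the topology details are no longer added (assuming only the topologyConstrainedPools key is dropped and the selected pool stays as-is):

~ % oc get pv pvc-d1a26969-a15c-4e60-8f76-847d1fdd6041 -o=jsonpath='{.spec.csi.volumeAttributes}' | jq
{
 "clusterID": "openshift-storage",
 "imageFeatures": "layering,deep-flatten,exclusive-lock,object-map,fast-diff",
 "imageFormat": "2",
 "imageName": "csi-vol-937ee9e9-2f99-11ed-b2d0-0a580a83001a",
 "journalPool": "ocs-storagecluster-cephblockpool",
 "pool": "ocs-storagecluster-cephblockpool-us-east-1a",
 "storage.kubernetes.io/csiProvisionerIdentity": "1662652453217-8081-openshift-storage.rbd.csi.ceph.com"
}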

Environment details

  • Image/version of Ceph CSI driver :
  • Helm chart version :
  • Kernel version :
  • Mounter used for mounting PVC (for CephFS it is fuse or kernel; for RBD it is
    krbd or rbd-nbd) :
  • Kubernetes cluster version :
  • Ceph cluster version :

Steps to reproduce

Steps to reproduce the behavior:

  1. Setup details: provision a PVC with topology-constrained pools (RBD StorageClass); see the check below.
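
One way to confirm that the StorageClass in use is topology-constrained is to inspect its topologyConstrainedPools parameter. This is only a suggested check, not output from the reported setup, and the StorageClass name is a placeholder:

~ % oc get sc <rbd-storageclass-name> -o=jsonpath='{.parameters.topologyConstrainedPools}' | jq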

Actual results

The PV's volumeAttributes include the full topologyConstrainedPools JSON, as shown in the example above.

Expected behavior

The topology details (topologyConstrainedPools) should not appear in the PV's volumeAttributes.
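
Once addressed, a possible way to verify is to query the PV (name taken from the example above) for the key directly; the command should print nothing:

~ % oc get pv pvc-d1a26969-a15c-4e60-8f76-847d1fdd6041 -o=jsonpath='{.spec.csi.volumeAttributes.topologyConstrainedPools}'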

Logs

If the issue is in PVC creation, deletion, or cloning, please attach complete
logs of the containers below.

  • csi-provisioner and csi-rbdplugin/csi-cephfsplugin container logs from the
    provisioner pod.

If the issue is in PVC resize, please attach complete logs of the containers below.

  • csi-resizer and csi-rbdplugin/csi-cephfsplugin container logs from the
    provisioner pod.

If the issue is in snapshot creation or deletion, please attach complete logs
of the containers below.

  • csi-snapshotter and csi-rbdplugin/csi-cephfsplugin container logs from the
    provisioner pod.

If the issue is in PVC mounting, please attach complete logs of the containers below.

  • csi-rbdplugin/csi-cephfsplugin and driver-registrar container logs from the
    plugin pod on the node where the mount is failing.

  • If required, attach dmesg logs.

Note: if it is an RBD issue, please provide only RBD-related logs; if it is a
CephFS issue, please provide CephFS logs.

Additional context

Add any other context about the problem here.

For example:

Any existing bug report that describes a similar issue/behavior.
