
rbd: add backend support for VolumeGroup operations #4719

Merged: 9 commits merged into ceph:devel from the csi-addons/rbd/volumegroup/journal branch on Jul 24, 2024

Conversation

@nixpanic (Member) commented Jul 18, 2024

Describe what this PR does

Implementation of the VolumeGroup backend. It keeps a clear separation between the VolumeGroup type and its functions on one side, and the CSI-Addons responsibilities on the other.
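
Roughly, the separation looks like this (a simplified sketch with made-up names, not the exact interfaces added by this PR): the rbd-internal VolumeGroup type owns the journal bookkeeping and the RBD group on the Ceph cluster, while the CSI-Addons server only validates gRPC requests, calls into a manager, and maps errors to gRPC status codes.

// Simplified sketch only; type and method names are illustrative.
package sketch

import "context"

// VolumeGroup is the rbd-internal side: journal bookkeeping plus the
// RBD group object on the Ceph cluster.
type VolumeGroup interface {
	GetID(ctx context.Context) (string, error)
	Create(ctx context.Context) error
	Delete(ctx context.Context) error
	AddVolume(ctx context.Context, volumeID string) error
	RemoveVolume(ctx context.Context, volumeID string) error
	ListVolumes(ctx context.Context) ([]string, error)
	// Destroy frees the connection and ioctx that were opened for the group.
	Destroy(ctx context.Context)
}

// Manager is what the CSI-Addons VolumeGroupServer talks to; the gRPC
// handlers themselves contain no journal or librbd logic.
type Manager interface {
	CreateVolumeGroup(ctx context.Context, name string) (VolumeGroup, error)
	GetVolumeGroupByID(ctx context.Context, id string) (VolumeGroup, error)
	DeleteVolumeGroup(ctx context.Context, vg VolumeGroup) error
}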

Is there anything that requires special attention

Tested with a small application that calls the CSI-Addons procedures directly. The application was modified several times to test different scenarios. At the moment I am confident that the basic functionality works as expected.

I0722 14:28:27.040708       1 utils.go:235] ID: 22 GRPC call: /volumegroup.Controller/CreateVolumeGroup
I0722 14:28:27.040980       1 utils.go:236] ID: 22 GRPC request: {"name":"my-group","parameters":{"clusterID":"openshift-storage","pool":"ocs-storagecluster-cephblockpool"},"secrets":"***stripped***","volume_ids":["0001-0011-openshift-storage-0000000000000001-eafd9e22-70de-459b-97fc-2d46e03d5f88","0001-0011-openshift-storage-0000000000000001-2f1acca8-95ca-4594-be0c-e5e70b6cbc43"]}
I0722 14:28:27.065729       1 omap.go:89] ID: 22 got omap values: (pool="ocs-storagecluster-cephblockpool", namespace="", name="csi.volume.eafd9e22-70de-459b-97fc-2d46e03d5f88"): map[csi.imageid:67c16fdb4f5d csi.imagename:csi-vol-eafd9e22-70de-459b-97fc-2d46e03d5f88 csi.volname:pvc-37249c71-61f2-4c2f-8cdd-08cb2f99d668 csi.volume.owner:default]
I0722 14:28:27.112579       1 omap.go:89] ID: 22 got omap values: (pool="ocs-storagecluster-cephblockpool", namespace="", name="csi.volume.2f1acca8-95ca-4594-be0c-e5e70b6cbc43"): map[csi.imageid:67c1ec38b881 csi.imagename:csi-vol-2f1acca8-95ca-4594-be0c-e5e70b6cbc43 csi.volname:pvc-5aae7b38-7ec3-4c45-b3c9-1a21c1d37b78 csi.volume.owner:default]
I0722 14:28:27.143738       1 volumegroup.go:106] ID: 22 all 2 Volumes for VolumeGroup "my-group" have been found
I0722 14:28:27.156146       1 omap.go:159] ID: 22 set omap keys (pool="ocs-storagecluster-cephblockpool", namespace="", name="csi.groups."): map[csi.volume.group.my-group:46ae4226-e71b-4955-8440-4801f178979a])
I0722 14:28:27.160879       1 omap.go:159] ID: 22 set omap keys (pool="ocs-storagecluster-cephblockpool", namespace="", name="csi.volume.group.46ae4226-e71b-4955-8440-4801f178979a"): map[csi.groupname:csi-vol-group-46ae4226-e71b-4955-8440-4801f178979a csi.volname:my-group])
I0722 14:28:27.161611       1 omap.go:221] ID: 22 got omap values: (pool="ocs-storagecluster-cephblockpool", namespace="", name="csi.volume.group.46ae4226-e71b-4955-8440-4801f178979a"): map[csi.groupname:csi-vol-group-46ae4226-e71b-4955-8440-4801f178979a csi.volname:my-group]
I0722 14:28:27.161663       1 volume_group.go:149] ID: 22 GetVolumeGroup(0001-0011-openshift-storage-0000000000000001-46ae4226-e71b-4955-8440-4801f178979a) returns {id:0001-0011-openshift-storage-0000000000000001-46ae4226-e71b-4955-8440-4801f178979a name:csi-vol-group-46ae4226-e71b-4955-8440-4801f178979a clusterID:openshift-storage credentials:0xc0008111a0 conn:<nil> ioctx:<nil> monitors:172.30.253.8:3300,172.30.149.93:3300,172.30.68.64:3300 pool:ocs-storagecluster-cephblockpool namespace: journal:0xc000b2e840 volumes:[] volumesToFree:[]}
I0722 14:28:27.161697       1 volume_group.go:249] ID: 22 connection established for volume group "0001-0011-openshift-storage-0000000000000001-46ae4226-e71b-4955-8440-4801f178979a"
I0722 14:28:27.161710       1 volume_group.go:277] ID: 22 iocontext created for volume group "0001-0011-openshift-storage-0000000000000001-46ae4226-e71b-4955-8440-4801f178979a" in pool "ocs-storagecluster-cephblockpool"
I0722 14:28:27.170848       1 volume_group.go:326] ID: 22 volume group "csi-vol-group-46ae4226-e71b-4955-8440-4801f178979a" has been created
I0722 14:28:27.170869       1 volumegroup.go:118] ID: 22 VolumeGroup "my-group" had been created: csi-vol-group-46ae4226-e71b-4955-8440-4801f178979a
I0722 14:28:27.194642       1 omap.go:159] ID: 22 set omap keys (pool="ocs-storagecluster-cephblockpool", namespace="", name="csi.volume.group.46ae4226-e71b-4955-8440-4801f178979a"): map[0001-0011-openshift-storage-0000000000000001-eafd9e22-70de-459b-97fc-2d46e03d5f88:])
I0722 14:28:27.216181       1 omap.go:159] ID: 22 set omap keys (pool="ocs-storagecluster-cephblockpool", namespace="", name="csi.volume.group.46ae4226-e71b-4955-8440-4801f178979a"): map[0001-0011-openshift-storage-0000000000000001-2f1acca8-95ca-4594-be0c-e5e70b6cbc43:])
I0722 14:28:27.216203       1 volumegroup.go:133] ID: 22 all 2 Volumes have been added to for VolumeGroup "my-group"
I0722 14:28:27.216519       1 utils.go:242] ID: 22 GRPC response: {"volume_group":{"volume_group_id":"0001-0011-openshift-storage-0000000000000001-46ae4226-e71b-4955-8440-4801f178979a","volumes":[{"capacity_bytes":17179869184,"volume_context":{"imageName":"csi-vol-eafd9e22-70de-459b-97fc-2d46e03d5f88","journalPool":"ocs-storagecluster-cephblockpool","pool":"ocs-storagecluster-cephblockpool"},"volume_id":"0001-0011-openshift-storage-0000000000000001-eafd9e22-70de-459b-97fc-2d46e03d5f88"},{"capacity_bytes":8589934592,"volume_context":{"imageName":"csi-vol-2f1acca8-95ca-4594-be0c-e5e70b6cbc43","journalPool":"ocs-storagecluster-cephblockpool","pool":"ocs-storagecluster-cephblockpool"},"volume_id":"0001-0011-openshift-storage-0000000000000001-2f1acca8-95ca-4594-be0c-e5e70b6cbc43"}]}}
I0722 14:28:27.217582       1 utils.go:235] ID: 23 GRPC call: /volumegroup.Controller/DeleteVolumeGroup
I0722 14:28:27.217667       1 utils.go:236] ID: 23 GRPC request: {"secrets":"***stripped***","volume_group_id":"0001-0011-openshift-storage-0000000000000001-46ae4226-e71b-4955-8440-4801f178979a"}
I0722 14:28:27.218503       1 omap.go:221] ID: 23 got omap values: (pool="ocs-storagecluster-cephblockpool", namespace="", name="csi.volume.group.46ae4226-e71b-4955-8440-4801f178979a"): map[0001-0011-openshift-storage-0000000000000001-2f1acca8-95ca-4594-be0c-e5e70b6cbc43: 0001-0011-openshift-storage-0000000000000001-eafd9e22-70de-459b-97fc-2d46e03d5f88: csi.groupname:csi-vol-group-46ae4226-e71b-4955-8440-4801f178979a csi.volname:my-group]
I0722 14:28:27.219984       1 omap.go:89] ID: 23 got omap values: (pool="ocs-storagecluster-cephblockpool", namespace="", name="csi.volume.2f1acca8-95ca-4594-be0c-e5e70b6cbc43"): map[csi.imageid:67c1ec38b881 csi.imagename:csi-vol-2f1acca8-95ca-4594-be0c-e5e70b6cbc43 csi.volname:pvc-5aae7b38-7ec3-4c45-b3c9-1a21c1d37b78 csi.volume.owner:default]
I0722 14:28:27.258472       1 omap.go:89] ID: 23 got omap values: (pool="ocs-storagecluster-cephblockpool", namespace="", name="csi.volume.eafd9e22-70de-459b-97fc-2d46e03d5f88"): map[csi.imageid:67c16fdb4f5d csi.imagename:csi-vol-eafd9e22-70de-459b-97fc-2d46e03d5f88 csi.volname:pvc-37249c71-61f2-4c2f-8cdd-08cb2f99d668 csi.volume.owner:default]
I0722 14:28:27.289357       1 volume_group.go:149] ID: 23 GetVolumeGroup(0001-0011-openshift-storage-0000000000000001-46ae4226-e71b-4955-8440-4801f178979a) returns {id:0001-0011-openshift-storage-0000000000000001-46ae4226-e71b-4955-8440-4801f178979a name:csi-vol-group-46ae4226-e71b-4955-8440-4801f178979a clusterID:openshift-storage credentials:0xc000c832c0 conn:<nil> ioctx:<nil> monitors:172.30.253.8:3300,172.30.149.93:3300,172.30.68.64:3300 pool:ocs-storagecluster-cephblockpool namespace: journal:0xc00016f640 volumes:[0xc000658b48 0xc000658d88] volumesToFree:[0xc000658b48 0xc000658d88]}
I0722 14:28:27.289378       1 volumegroup.go:185] ID: 23 VolumeGroup "0001-0011-openshift-storage-0000000000000001-46ae4226-e71b-4955-8440-4801f178979a" has been found
I0722 14:28:27.289387       1 volumegroup.go:197] ID: 23 VolumeGroup "0001-0011-openshift-storage-0000000000000001-46ae4226-e71b-4955-8440-4801f178979a" contains 2 volumes
I0722 14:28:27.289420       1 volume_group.go:307] ID: 23 destroyed volume group instance with id "0001-0011-openshift-storage-0000000000000001-46ae4226-e71b-4955-8440-4801f178979a"
E0722 14:28:27.289438       1 utils.go:240] ID: 23 GRPC error: rpc error: code = FailedPrecondition desc = rejecting to delete non-empty volume group "0001-0011-openshift-storage-0000000000000001-46ae4226-e71b-4955-8440-4801f178979a"
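
The FailedPrecondition error at the end is expected: the group still contains both volumes, so the DeleteVolumeGroup call gets rejected. As a rough illustration (not the exact code in internal/csi-addons/rbd/volumegroup.go; the server, manager and request/response types here stand in for the real ones), the check can be expressed like this:

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// DeleteVolumeGroup sketch: refuse to delete a group that still has volumes.
func (vs *VolumeGroupServer) DeleteVolumeGroup(ctx context.Context, req *DeleteVolumeGroupRequest) (*DeleteVolumeGroupResponse, error) {
	vg, err := vs.manager.GetVolumeGroupByID(ctx, req.VolumeGroupId)
	if err != nil {
		return nil, status.Errorf(codes.NotFound, "could not find volume group %q: %v", req.VolumeGroupId, err)
	}
	// free the connection/ioctx that GetVolumeGroupByID opened
	defer vg.Destroy(ctx)

	volumes, err := vg.ListVolumes(ctx)
	if err != nil {
		return nil, status.Errorf(codes.Internal, "failed to list volumes of group %q: %v", req.VolumeGroupId, err)
	}
	if len(volumes) != 0 {
		// this is the error shown in the log above
		return nil, status.Errorf(codes.FailedPrecondition, "rejecting to delete non-empty volume group %q", req.VolumeGroupId)
	}

	if err := vs.manager.DeleteVolumeGroup(ctx, vg); err != nil {
		return nil, status.Errorf(codes.Internal, "failed to delete volume group %q: %v", req.VolumeGroupId, err)
	}

	return &DeleteVolumeGroupResponse{}, nil
}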

Show available bot commands

These commands are normally not required, but in case of issues, leave any of
the following bot commands in an otherwise empty comment in this PR:

  • /retest ci/centos/<job-name>: retest the <job-name> after unrelated
    failure (please report the failure too!)

@mergify mergify bot added the component/rbd Issues related to RBD label Jul 18, 2024
@nixpanic nixpanic force-pushed the csi-addons/rbd/volumegroup/journal branch from b71a67e to beaf170 Compare July 18, 2024 19:11
@nixpanic nixpanic changed the title rbd: rbd: add backend support for VolumeGroup operations Jul 18, 2024
Resolved review thread: internal/rbd/controllerserver.go
Comment on lines +23 to +25
librbd "github.com/ceph/go-ceph/rbd"

"github.com/ceph/ceph-csi/internal/rbd/types"
Collaborator:
rearrange the order here? cephcsi import should come first

Member Author:
This guideline is not followed everywhere. I think Golang suggests having imports of local packages at the bottom; that seems to be a coding convention many other Go projects use.

Collaborator:
https://github.com/ceph/ceph-csi/blob/devel/docs/coding.md#imports is the guideline we have; let's stick to it, and update other places where it is not followed as a follow-up cleanup?

Member Author:
See #4721. I don't think we should strictly enforce it for existing imports, but it would be nice to use this order in new files.
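
For illustration, the grouping asked for in the review (read here as: standard library first, then ceph-csi packages, then external dependencies) would look roughly like the hypothetical file below; the authoritative rule is the linked coding.md.

package rbd

import (
	"context"

	"github.com/ceph/ceph-csi/internal/rbd/types"

	librbd "github.com/ceph/go-ceph/rbd"
)

// Blank declarations only so that every import in this illustration is used;
// types.VolumeGroup is assumed to exist in the new internal/rbd/types package.
var (
	_ context.Context
	_ types.VolumeGroup
	_ *librbd.Image
)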

Resolved review threads: internal/rbd/group.go (×3), internal/rbd/manager.go (×3), internal/rbd_group/volume_group.go
@nixpanic nixpanic force-pushed the csi-addons/rbd/volumegroup/journal branch 2 times, most recently from dc80fab to 7a758aa Compare July 19, 2024 14:00
@nixpanic nixpanic requested a review from Madhu-1 July 19, 2024 14:18
@nixpanic nixpanic force-pushed the csi-addons/rbd/volumegroup/journal branch from 7a758aa to a34d7fe Compare July 19, 2024 18:25
@nixpanic nixpanic force-pushed the csi-addons/rbd/volumegroup/journal branch from a34d7fe to eca8a7a Compare July 22, 2024 06:41
@Madhu-1 (Collaborator) left a comment:
LGTM. Let me know once it's tested; I will do the final review and approve it.

@nixpanic nixpanic force-pushed the csi-addons/rbd/volumegroup/journal branch from eca8a7a to 5c8bdb0 Compare July 22, 2024 08:16
@nixpanic nixpanic marked this pull request as ready for review July 22, 2024 14:55
@nixpanic nixpanic force-pushed the csi-addons/rbd/volumegroup/journal branch from 6dec5e0 to e214d05 Compare July 22, 2024 14:59
@nixpanic nixpanic requested review from Madhu-1 and a team July 22, 2024 14:59
@nixpanic nixpanic force-pushed the csi-addons/rbd/volumegroup/journal branch from e214d05 to ccd5536 Compare July 22, 2024 15:51
@yati1998 (Contributor):
LGTM

Resolved review thread: internal/rbd/manager.go
@nixpanic nixpanic force-pushed the csi-addons/rbd/volumegroup/journal branch from 1bc0ca0 to 75ccdcc Compare July 23, 2024 12:54
@nixpanic nixpanic force-pushed the csi-addons/rbd/volumegroup/journal branch 2 times, most recently from 5746225 to f552d4d Compare July 23, 2024 13:19
@Rakshith-R (Contributor) left a comment:
Tiny nits.

There are 3 commits named "rbd: implement the VolumeGroup interface" that I think can be squashed together.

Resolved review thread: internal/csi-addons/rbd/volumegroup.go
@nixpanic nixpanic force-pushed the csi-addons/rbd/volumegroup/journal branch from f552d4d to 3362475 Compare July 24, 2024 08:50
@nixpanic nixpanic requested a review from Rakshith-R July 24, 2024 08:50
@Rakshith-R (Contributor) left a comment:
Thanks, LGTM

@nixpanic (Member Author):
@Mergifyio rebase

Add support for adding and removing the RBD-image from a group.

Signed-off-by: Niels de Vos <ndevos@ibm.com>
Signed-off-by: Niels de Vos <ndevos@ibm.com>
Signed-off-by: Niels de Vos <ndevos@ibm.com>
Signed-off-by: Niels de Vos <ndevos@ibm.com>
Signed-off-by: Niels de Vos <ndevos@ibm.com>
Add extra error checking to make sure that trying to create an existing volume group does not result in a failure. The same applies to deleting a non-existing volume group, and to adding volumes to or removing them from the volume group.

Signed-off-by: Niels de Vos <ndevos@ibm.com>
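
The error checking described in the commit message above might look roughly like this (hypothetical helper and error names; the real checks live in the rbd manager and group code of this PR):

import (
	"context"
	"errors"
)

// ErrGroupNotFound stands in for whatever sentinel error the journal lookup returns.
var ErrGroupNotFound = errors.New("volume group not found")

// CreateVolumeGroup sketch: creating a group that already exists is not a failure.
func (mgr *manager) CreateVolumeGroup(ctx context.Context, name string) (VolumeGroup, error) {
	vg, err := mgr.getVolumeGroupByName(ctx, name) // hypothetical journal lookup
	if err == nil {
		// the group was already reserved and created in an earlier attempt
		return vg, nil
	}
	if !errors.Is(err, ErrGroupNotFound) {
		return nil, err
	}

	// hypothetical: reserve the group in the journal and create the RBD group
	return mgr.newVolumeGroup(ctx, name)
}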
An RBD image can only be part of a single group. When an image is added to a group, check whether the image is already part of a group, and return an error if it is.

Signed-off-by: Niels de Vos <ndevos@ibm.com>
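
A sketch of that membership check (again with hypothetical names; go-ceph can report the group an image belongs to, but the exact call is simplified here):

import (
	"context"
	"fmt"
)

// AddVolume sketch: an RBD image may belong to at most one group.
func (vg *volumeGroup) AddVolume(ctx context.Context, vol Volume) error {
	current, err := vol.GroupName(ctx) // hypothetical: the group the image is in, "" if none
	if err != nil {
		return err
	}
	if current != "" && current != vg.name {
		return fmt.Errorf("image %q is already part of volume group %q", vol.Name(), current)
	}

	// hypothetical wrapper around the librbd group-image-add call
	return vg.addImageToGroup(ctx, vol)
}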

mergify bot commented Jul 24, 2024

rebase

✅ Branch has been successfully rebased

@nixpanic nixpanic force-pushed the csi-addons/rbd/volumegroup/journal branch from 3362475 to 1825c27 Compare July 24, 2024 11:35
@nixpanic (Member Author):

@Mergifyio queue


mergify bot commented Jul 24, 2024

queue

🛑 The pull request has been removed from the queue default

The queue conditions cannot be satisfied due to failing checks.

You can take a look at Queue: Embarked in merge queue check runs for more details.

In case of a failure due to a flaky test, you should first retrigger the CI.
Then, re-embark the pull request into the merge queue by posting the comment
@mergifyio refresh on the pull request.

@mergify mergify bot added the ok-to-test Label to trigger E2E tests label Jul 24, 2024
@ceph-csi-bot (Collaborator) triggered the E2E retests:

/test ci/centos/k8s-e2e-external-storage/1.27
/test ci/centos/k8s-e2e-external-storage/1.29
/test ci/centos/mini-e2e-helm/k8s-1.27
/test ci/centos/mini-e2e-helm/k8s-1.29
/test ci/centos/upgrade-tests-cephfs
/test ci/centos/mini-e2e/k8s-1.27
/test ci/centos/mini-e2e/k8s-1.29
/test ci/centos/upgrade-tests-rbd
/test ci/centos/k8s-e2e-external-storage/1.30
/test ci/centos/mini-e2e-helm/k8s-1.30
/test ci/centos/k8s-e2e-external-storage/1.28
/test ci/centos/mini-e2e/k8s-1.30
/test ci/centos/mini-e2e-helm/k8s-1.28
/test ci/centos/mini-e2e/k8s-1.28

@ceph-csi-bot ceph-csi-bot removed the ok-to-test Label to trigger E2E tests label Jul 24, 2024
@nixpanic (Member Author):

@Mergifyio requeue

The multi-arch-build CI job failed while installing some packages. It has been restarted and has passed the section that previously failed.


mergify bot commented Jul 24, 2024

requeue

✅ The queue state of this pull request has been cleaned. It can be re-embarked automatically

@mergify mergify bot merged commit f9ab14e into ceph:devel Jul 24, 2024
40 checks passed
Labels
component/rbd Issues related to RBD
5 participants