
Support Subvolume{Group} pinning in ceph csi #2637

Closed
Tracked by #3336
humblec opened this issue Nov 15, 2021 · 9 comments
Labels: component/cephfs (Issues related to CephFS), dependency/go-ceph (depends on go-ceph functionality), enhancement (New feature or request), keepalive (disable stale bot activity in the repo)

humblec (Collaborator) commented Nov 15, 2021

Describe the feature you'd like to have

Subvolumes and subvolume groups can be automatically pinned to ranks according
to policies. This can help distribute load across MDS ranks in predictable and
stable ways.

Pinning is configured with:
$ ceph fs subvolumegroup pin <vol_name> <group_name> <pin_type> <pin_setting>
or, for subvolumes:
$ ceph fs subvolume pin <vol_name> <subvol_name> <pin_type> <pin_setting>

In Ceph CSI we have to pin the subvolumegroup. The pin_type may be
one of export, distributed, or random.

So, for example, setting a distributed pinning strategy on a subvolume group:

$ ceph fs subvolumegroup pin cephfilesystem-a csi distributed 1

will enable the distributed subtree partitioning policy for the "csi" subvolume
group. This causes every subvolume within the group to be automatically
pinned to one of the available ranks on the file system.

Additional ref: ceph/ceph#43896
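The pin_type/pin_setting combinations above can be sketched as a small validation helper. This is an illustrative sketch, not ceph-csi code: the function names (validatePin, pinGroupArgs) are hypothetical, and it assumes the documented value ranges — export takes an MDS rank (with -1 to unpin), distributed takes 0 or 1, and random takes a probability in [0.0, 1.0].

```go
package main

import (
	"fmt"
	"strconv"
)

// validatePin checks a (pinType, pinSetting) pair against the ranges
// described in the Ceph docs. Hypothetical helper for illustration.
func validatePin(pinType, pinSetting string) error {
	switch pinType {
	case "export":
		// export pin takes an MDS rank; -1 removes the pin.
		rank, err := strconv.Atoi(pinSetting)
		if err != nil || rank < -1 {
			return fmt.Errorf("export pin wants an MDS rank >= -1, got %q", pinSetting)
		}
	case "distributed":
		// distributed pin is a boolean flag: 0 or 1.
		if pinSetting != "0" && pinSetting != "1" {
			return fmt.Errorf("distributed pin wants 0 or 1, got %q", pinSetting)
		}
	case "random":
		// random pin takes a probability in [0.0, 1.0].
		p, err := strconv.ParseFloat(pinSetting, 64)
		if err != nil || p < 0.0 || p > 1.0 {
			return fmt.Errorf("random pin wants a probability in [0,1], got %q", pinSetting)
		}
	default:
		return fmt.Errorf("unknown pin_type %q", pinType)
	}
	return nil
}

// pinGroupArgs builds the CLI argument vector for the command shown above,
// e.g. "ceph fs subvolumegroup pin cephfilesystem-a csi distributed 1".
func pinGroupArgs(volName, groupName, pinType, pinSetting string) ([]string, error) {
	if err := validatePin(pinType, pinSetting); err != nil {
		return nil, err
	}
	return []string{"fs", "subvolumegroup", "pin", volName, groupName, pinType, pinSetting}, nil
}

func main() {
	args, err := pinGroupArgs("cephfilesystem-a", "csi", "distributed", "1")
	fmt.Println(args, err)
}
```

The same validation would apply to the subvolume variant of the command; only the noun ("subvolume" instead of "subvolumegroup") and the name argument change.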

Cc @nixpanic

@humblec humblec self-assigned this Nov 15, 2021
@nixpanic nixpanic added enhancement New feature or request component/cephfs Issues related to CephFS labels Nov 15, 2021
phlogistonjohn (Contributor):

Hi @humblec, I am assuming you'd prefer to access this via the API rather than the CLI if possible. Would you like me to file an issue in go-ceph for API calls for these, or would you prefer to do it?

humblec (Collaborator, Author) commented Nov 15, 2021

> Hi @humblec, I am assuming you'd prefer to access this via the API rather than the CLI if possible.

Hi @phlogistonjohn, yeah, indeed it's good to have the API support and keep away from the CLI.

> Would you like me to file an issue in go-ceph for API calls for these or would you prefer to do it?

Anything works for me, please let me know if you are filing it.

phlogistonjohn (Contributor):

Done. ceph/go-ceph#611 :-)

@nixpanic nixpanic added the dependency/go-ceph depends on go-ceph functionality label Nov 15, 2021
github-actions (bot):

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the wontfix This will not be worked on label Dec 15, 2021
github-actions (bot):

This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.

@Rakshith-R Rakshith-R reopened this Jan 6, 2022
@Rakshith-R Rakshith-R added keepalive This label can be used to disable stale bot activiity in the repo and removed wontfix This will not be worked on labels Jan 6, 2022
@humblec humblec added this to the release-3.7 milestone May 27, 2022
humblec (Collaborator, Author) commented Jun 16, 2022

go-ceph v0.16.0 (the most recent release), which is the base for 3.7, doesn't have this functionality, so moving the milestone to 3.8: https://github.com/ceph/ceph-csi/milestone/15

@humblec humblec modified the milestones: release-3.7, release-3.8 Jun 16, 2022
@humblec humblec mentioned this issue Jan 19, 2023
9 tasks
@Madhu-1 Madhu-1 modified the milestones: release-3.8, release-v3.9 Feb 23, 2023
Madhu-1 (Collaborator) commented Apr 28, 2023

I don't think it's a good idea for us to create subvolumegroups at cephcsi. It should be the responsibility of the ceph admin to handle this task. Instead, cephcsi should only create subvolumes that correspond to the PVC/PV.
Additionally, we should avoid including pin details in the storageclass or clusterID configmap since a single subvolumegroup might be used by multiple storageclasses/clusterIDs. Pinning details are subject to change and should be considered a dynamic configuration.
Lastly, it's best if the ceph admin manages the creation and management of the filesystem and subvolumegroup.

Rakshith-R (Contributor):

+1

As discussed, I think we can close this issue and let the admin/Rook at a higher level handle pinning.

Rakshith-R (Contributor):

Closing this epic as per the above comments.
Subvolumegroup pinning support will be added to Rook in upcoming releases.
In standalone Ceph clusters, storage admins need to do it.


5 participants