What is the advantage of maintaining the rbdVolumes map? #135
Comments
When calling |
But both |
yes, but this information is not available in delete and stage. So CSI drivers have to persist it when volumes are created, and retrieve it from persistent storage during delete and stage. |
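To illustrate that point, here is a minimal Go sketch (not the actual ceph-csi code; the `volInfo` fields and the `store` interface are assumptions) of why metadata captured at create time has to be kept somewhere the driver can reach again when delete or stage arrive with only the volume ID:

```go
// Illustrative only: a stand-in for whatever durable backend the driver uses
// (e.g. a ConfigMap). Names and fields here are assumptions, not ceph-csi's.
package main

import (
	"errors"
	"fmt"
)

// volInfo is the metadata needed later to reach the rbd image again.
type volInfo struct {
	Pool     string
	Image    string
	Monitors string
}

// store abstracts the persistent backend. An in-memory map is used only to
// keep the example runnable; a real driver would lose it on restart.
type store interface {
	Put(volID string, v volInfo) error
	Get(volID string) (volInfo, error)
}

type memStore map[string]volInfo

func (m memStore) Put(id string, v volInfo) error { m[id] = v; return nil }

func (m memStore) Get(id string) (volInfo, error) {
	v, ok := m[id]
	if !ok {
		return volInfo{}, errors.New("no metadata for volume " + id)
	}
	return v, nil
}

// createVolume persists the metadata, because DeleteVolume/NodeStageVolume
// will only receive the volume ID back.
func createVolume(s store, volID string, v volInfo) error {
	return s.Put(volID, v)
}

func deleteVolume(s store, volID string) error {
	v, err := s.Get(volID)
	if err != nil {
		return err
	}
	fmt.Printf("would remove image %s/%s via monitors %s\n", v.Pool, v.Image, v.Monitors)
	return nil
}

func main() {
	s := memStore{}
	_ = createVolume(s, "vol-1", volInfo{Pool: "rbd", Image: "pvc-1", Monitors: "mon1:6789"})
	_ = deleteVolume(s, "vol-1")
}
```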
Ohh, I see that |
vol info should be retrieved from configmaps if not in local cache |
If I'm not mistaken, such a situation shouldn't occur because both the provisioner and the snapshotter are deployed within the same pod: |
I had the same observation reading the code as the initial reporter: the additional maps are actually not required and are a concern. Thanks to @JohnStrunk for help with understanding the k8s plumbing in this regard, and a disclaimer ahead that I am a Ceph noob! The problems, as I see it, with storing these config maps are,
The solution rather seems to lie in how we abstract the various configuration elements into either the StorageClass (SC) or the CSI configuration. If the required parameters to reach the Ceph pool/cluster are part of the CSI configuration, they need not be stored in the said maps. Further, the StorageClass can then carry options that help create rbd images with differing properties as part of its "parameters", rather than information about the pool/cluster. The Node Service receives the information it needs to mount and so on from the RPC payload itself, so this does not change it. This leads to a few things. If the abstraction is changed as above, it seems better aligned with the overall purpose and avoids the maps. It also makes the CSI driver talk to a Ceph cluster based on its configuration, rather than being a funnel to talk to any Ceph cluster. Further, it makes the CSI driver nearly stateless; it just acts as a pass-through based on the configuration, and hence no reconciliation is ever needed (other than of the core cluster parameters). Thoughts? |
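To make the abstraction above concrete, here is a minimal sketch under assumed type and field names: cluster reachability comes from the driver's own configuration, while the StorageClass parameters only describe the rbd image to create, so no per-volume map is kept:

```go
// Hypothetical sketch of the proposed split; none of these names are ceph-csi's.
package main

import "fmt"

// clusterConfig is what the CSI driver reads from its own (deployment-time)
// configuration, not from each PV.
type clusterConfig struct {
	Monitors []string
	Pool     string
}

// imageOptions is what a StorageClass "parameters" section would carry:
// properties of the rbd image itself, not how to reach the cluster.
type imageOptions struct {
	Features   []string
	ObjectSize string
}

// createImage derives everything from static configuration plus the request,
// so the driver acts as a pass-through with no per-volume state.
func createImage(c clusterConfig, name string, opts imageOptions) {
	fmt.Printf("create %s/%s on %v with features %v\n", c.Pool, name, c.Monitors, opts.Features)
}

func main() {
	cfg := clusterConfig{Monitors: []string{"mon1:6789"}, Pool: "rbd"}
	createImage(cfg, "pvc-123", imageOptions{Features: []string{"layering"}, ObjectSize: "4M"})
}
```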
Agree. Split brain is a constant theme in storage orchestration sadly.
This is being addressed. The drivers can store both the monitors and the keyring in the secret. When the monitors or the keyring change, the controller only needs to update the secret that is used by the PVs rather than update all PVs.
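A small sketch of that idea, with assumed key names ("monitors" and "keyring") inside the secret. CSI hands secrets to the driver as a plain string map, so rotating monitors or keys only means updating this one secret:

```go
// Sketch only; the key names are assumptions, not a documented format.
package main

import (
	"errors"
	"fmt"
)

// credsFromSecret extracts connection details from the secret map that CSI
// passes along with each request.
func credsFromSecret(secrets map[string]string) (string, string, error) {
	mons, ok := secrets["monitors"]
	if !ok {
		return "", "", errors.New("secret is missing the monitors entry")
	}
	keyring, ok := secrets["keyring"]
	if !ok {
		return "", "", errors.New("secret is missing the keyring entry")
	}
	return mons, keyring, nil
}

func main() {
	secret := map[string]string{"monitors": "mon1:6789,mon2:6789", "keyring": "AQB..."}
	mons, _, err := credsFromSecret(secret)
	if err != nil {
		panic(err)
	}
	// Rotating monitors means updating the secret, not rewriting every PV.
	fmt.Println("connect via", mons)
}
```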
If there are multiple Ceph clusters, we have to create one StorageClass per Ceph cluster. |
Had an offline discussion with @rootfs on the topic. What it came down to was,
@rootfs please add any details that I may have missed, thanks. |
@ShyamsundarR looking forward to your PR 👍 |
This has been fixed in several PRs. Thanks @ShyamsundarR for fixing this. Closing this one. |
Hi all, I've read the conversation on #66 and the cm fix, but I still don't understand the advantage of having maps like `rbdVolumes` and `rbdSnapshots`. For example, the `rbdVolumes` map is used just 2 times:

1. In `CreateVolume`, to check if the given volume name already exists; couldn't the same thing be done with a call to `rbd info` or `rbd status`?
2. In `CreateSnapshot`, it seems to be used to retrieve the volume name from the volume ID. I understand that according to the CSI spec, volName and volID MUST be different, but couldn't we just use a hash of volName as the volID to avoid saving it?
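For illustration, a hedged Go sketch of the two alternatives suggested above. The pool/image names and the ID prefix are made up, and it assumes shelling out to the rbd CLI is acceptable: check image existence with rbd info instead of consulting a local map, and derive a deterministic volume ID from the volume name so the mapping need not be stored:

```go
// Sketch only; not the ceph-csi implementation.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os/exec"
)

// imageExists asks the cluster directly; "rbd info" exits non-zero when the
// image is absent, so no in-memory rbdVolumes map is needed for the check.
func imageExists(pool, image string) bool {
	cmd := exec.Command("rbd", "info", fmt.Sprintf("%s/%s", pool, image))
	return cmd.Run() == nil
}

// volIDForName derives a stable ID from the volume name, as suggested above,
// so the name-to-ID mapping does not have to be persisted separately.
func volIDForName(volName string) string {
	sum := sha256.Sum256([]byte(volName))
	return "csi-rbd-" + hex.EncodeToString(sum[:16])
}

func main() {
	fmt.Println(volIDForName("pvc-1234"))
	fmt.Println("exists:", imageExists("rbd", "pvc-1234"))
}
```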