Describe the feature you'd like to have.
Volume group snapshots in Kubernetes are being planned (currently in alpha). This would allow a single volume group snapshot to be taken across multiple PVCs. Potentially this could be used by VolSync to simplify (and group) replication/backup of multiple PVCs.
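For reference, a minimal sketch of what the upstream API looks like. The API group/version follows the external-snapshotter CRDs and may change as the feature graduates (v1alpha1 while alpha); the class name and labels here are made up:

```yaml
# Sketch only: a VolumeGroupSnapshot selects its member PVCs by label.
# "csi-groupsnapclass" and the "app" label are illustrative names.
apiVersion: groupsnapshot.storage.k8s.io/v1beta1
kind: VolumeGroupSnapshot
metadata:
  name: app-group-snapshot
  namespace: demo
spec:
  volumeGroupSnapshotClassName: csi-groupsnapclass
  source:
    selector:
      matchLabels:
        app: my-app
```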
What is the value to the end user? (why is it a priority?)
It would be good to have a consistent way to take a snapshot of multiple PVCs, and potentially even have this handled by a single ReplicationSource/ReplicationDestination.
How will we know we have a good solution? (acceptance criteria)
Looks like we will still need one mover pod per PVC: even though all PVCs in the group must be in the same namespace, the source PVCs may not be schedulable on the same node.
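As a purely hypothetical illustration (none of this exists in VolSync today), a group-aware ReplicationSource might select its source PVCs by label, mirroring the VolumeGroupSnapshot API, then fan out one mover pod per PVC:

```yaml
# HYPOTHETICAL sketch -- VolSync's ReplicationSource only supports a single
# sourcePVC today. The sourcePVCSelector field is invented for illustration.
apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: app-group-source
  namespace: demo
spec:
  # Hypothetical: select every PVC in the group by label; a single
  # VolumeGroupSnapshot would be taken, then one mover pod per PVC.
  sourcePVCSelector:
    matchLabels:
      app: my-app
  trigger:
    schedule: "*/15 * * * *"
  rsyncTLS:
    copyMethod: Snapshot
    keySecret: app-group-keys
    address: replication-dest.example.com
```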
Investigated two approaches. The first was creating a ReplicationSource with multiple PVCs attached (simple flow), but this has issues, as not all volume snapshots in the group may be accessible on the same node.
Also created a more complex prototype where multiple jobs would be started (one per source PVC), each communicating with a single ReplicationDestination service that routes traffic to the correct destination.
Prototype VolSync code was written to confirm this can work (rsync-tls only at this point), but it is complex to define from the ReplicationDestination side, since all PVCs need to be mapped from source to destination. However, it is workable. Prototype code here: https://github.com/tesshuflower/volsync/tree/test_vgroup_multi_job
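To make the destination-side complexity concrete, here is a hypothetical sketch of what the prototype's ReplicationDestination might need to express. The pvcMapping field is invented for illustration and does not exist in VolSync:

```yaml
# HYPOTHETICAL sketch of the prototype approach -- not a real VolSync API.
# One job per source PVC sends data to a single destination service, which
# must know how to route each stream to the right destination PVC.
apiVersion: volsync.backube/v1alpha1
kind: ReplicationDestination
metadata:
  name: app-group-destination
  namespace: demo
spec:
  rsyncTLS:
    copyMethod: Snapshot
    keySecret: app-group-keys
    # Hypothetical: every source PVC must be re-declared here and mapped
    # to a destination PVC (which may end up with a different name).
    pvcMapping:
      app-data-0: app-data-0-dest
      app-data-1: app-data-1-dest
```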
Issues:
VolumeGroupSnapshots are still in beta and appear to be some way from general availability.
Created issue: VolumeGroupSnapshots - how to rebuild/restore a VolumeGroupSnapshot? kubernetes-csi/external-snapshotter#969 - Restoring volume group snapshots is not easy for a user, which will prevent adoption. A user would need to have their source PVCs still running (or record them all), and would still need access to a number of cluster-wide APIs to figure out how to restore each snapshot to the correct PVC (see the restore sketch after this list).
Complexity - as mentioned above, the complexity could also be an issue. On the destination side, all PVCs in the group need to be defined, since we need to map between the source PVCs (which are not even known to the ReplicationSource spec, as they are simply selected via a label selector) and the destination PVCs. Destination PVCs could also end up with different names.
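To illustrate the restore friction mentioned above: a group snapshot materializes as one VolumeSnapshot per member PVC, with generated names, and each must be restored individually. A minimal sketch for a single member (all names illustrative):

```yaml
# Restoring ONE member of a group snapshot; the user repeats this for every
# PVC in the group, after working out which generated VolumeSnapshot
# corresponds to which original PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-0-restored
  namespace: demo
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: snapshot-member-0   # generated member snapshot name, illustrative
```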
The main problem right now is timing: VolSync is not ready to take these changes until VolumeGroupSnapshots in upstream Kubernetes storage get closer to being generally available.
Additional context
Upstream KEP for volume group snapshots:
https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/3476-volume-group-snapshot
Sample implementation in the host-path driver:
kubernetes-csi/csi-driver-host-path#399