Backward compatibility and migration to 1.1 volumes from existing 1.0 versions #378
Comments
Why not just read the data out of the configmaps, inject it into the omaps, and drop the old configmaps?
Mounting does not need the config maps and also does not use them in the current scheme. That would leave the delete path, which still needs the cluster/pool details the config maps carried.
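If that route were taken, a one-shot migration tool could read each config map entry and write it into a RADOS omap before dropping the config map. Below is a minimal sketch of that idea, assuming modern client-go and go-ceph; the config map name, namespace, pool, omap object name, and key layout are all placeholders, not the plugin's actual names.

```go
// Hypothetical one-shot migration: copy per-volume metadata out of the old
// config map into a RADOS omap, then (after verification) drop the config map.
// Names, namespaces, pool, and omap keys below are illustrative only.
package main

import (
	"context"
	"fmt"

	"github.com/ceph/go-ceph/rados"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.Background()

	// Kubernetes side: read the old-style config map (name/namespace assumed).
	kcfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(kcfg)
	if err != nil {
		panic(err)
	}
	cm, err := cs.CoreV1().ConfigMaps("default").Get(ctx, "csi-rbd-metadata", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Ceph side: write the same key/value pairs into an omap on a well-known object.
	conn, err := rados.NewConn()
	if err != nil {
		panic(err)
	}
	if err := conn.ReadDefaultConfigFile(); err != nil {
		panic(err)
	}
	if err := conn.Connect(); err != nil {
		panic(err)
	}
	defer conn.Shutdown()

	ioctx, err := conn.OpenIOContext("rbd") // pool name assumed
	if err != nil {
		panic(err)
	}
	defer ioctx.Destroy()

	pairs := make(map[string][]byte, len(cm.Data))
	for k, v := range cm.Data {
		pairs[k] = []byte(v)
	}
	if err := ioctx.SetOmap("csi.volumes.default", pairs); err != nil { // object name assumed
		panic(err)
	}
	fmt.Printf("migrated %d entries from config map %q\n", len(pairs), cm.Name)

	// Only after verifying the omap contents would the config map be dropped, e.g.:
	// cs.CoreV1().ConfigMaps("default").Delete(ctx, cm.Name, metav1.DeleteOptions{})
}
```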
Why was the VolumeID format changed?
The config maps held the cluster/pool required to delete the backing image. Dropping them required us to change the VolumeID format so that this information is carried in the ID itself.
So in the new format, the cluster/pool is encoded into the VolumeIDs?
Would sticking them in as volumeAttributes work too?
No, volumeAttributes are not passed to the Delete request, as detailed in container-storage-interface/spec#167 (there was one other discussion, whose link I am failing to find, about a timing issue between when the attribute information is deleted and when the volume is deleted; in short, this information is not available). Also, the cluster/pool in the new VolumeID is encoded as detailed here.
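For illustration only, a toy VolumeID that carries the cluster and pool alongside a unique image suffix shows why DeleteVolume can then be served from the ID alone, without config maps or volumeAttributes. The real on-the-wire encoding is whatever #312 defines; the prefix and separators below are assumptions.

```go
// Illustrative only: a toy VolumeID carrying cluster and pool so that delete
// requests need nothing beyond the ID itself. Not the actual ceph-csi format.
package volid

import (
	"fmt"
	"strings"
)

type VolumeIdentifier struct {
	ClusterID string // which Ceph cluster the image lives on
	Pool      string // which pool holds the image
	ImageUUID string // unique suffix used to name the backing RBD image
}

// Compose joins the fields with a fixed version prefix. For this toy format,
// ClusterID and Pool are assumed not to contain the "-" separator.
func (v VolumeIdentifier) Compose() string {
	return strings.Join([]string{"0001", v.ClusterID, v.Pool, v.ImageUUID}, "-")
}

// Decompose recovers the fields from an ID produced by Compose.
func Decompose(id string) (VolumeIdentifier, error) {
	parts := strings.SplitN(id, "-", 4)
	if len(parts) != 4 || parts[0] != "0001" {
		return VolumeIdentifier{}, fmt.Errorf("unrecognized volume ID %q", id)
	}
	return VolumeIdentifier{ClusterID: parts[1], Pool: parts[2], ImageUUID: parts[3]}, nil
}
```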
Hmm, I see. What about saad's suggestion of adding the cluster/pool into a secret and injecting it via 'credentials'? That would decouple it from the volume ID.
I guess that would still require an edit to the PVs, which may not be supported.
Another migration option was mentioned on sig-storage: force delete the PV while leaving the PVC. This would leave the PVC in the 'Lost' state. If you then delete all pods referencing the PVC, no new instances will be launched. Then you can re-upload the PVC with any changes, and it should become un-lost and work again. I have not tested this theory.
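That suggestion is untested; for reference, one reading of the force-delete half could look roughly like the following with modern client-go. The PV name, the cleared finalizers, and the fields edited before re-uploading are all placeholders, and whether the PV or the PVC is the object that gets re-uploaded is left open above.

```go
// Untested sketch of the force-delete idea above, using modern client-go.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.Background()
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pvs := cs.CoreV1().PersistentVolumes()

	// 1. Keep a copy of the existing PV object before touching it.
	old, err := pvs.Get(ctx, "pvc-0123", metav1.GetOptions{}) // PV name is a placeholder
	if err != nil {
		panic(err)
	}

	// 2. Force-delete it: drop finalizers, then delete. The bound PVC stays
	//    behind and its status becomes Lost.
	_, err = pvs.Patch(ctx, old.Name, types.MergePatchType,
		[]byte(`{"metadata":{"finalizers":null}}`), metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	if err := pvs.Delete(ctx, old.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// 3. Re-upload an edited copy of the deleted object (the PV here; the
	//    comment above also talks about the PVC), clearing server-set fields.
	fresh := old.DeepCopy()
	fresh.ResourceVersion = ""
	fresh.UID = ""
	// fresh.Spec.CSI.Driver / VolumeHandle = ... (desired edits go here)
	if _, err := pvs.Create(ctx, fresh, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```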
I think it is important to decide on what the end result looks like here, and whether we are looking at migration or just plain backward compatibility.

For backward compatibility: retain the config maps and function with them as before, just not adding any more entries to them. We could even flatten the config maps to a file and use that instead of an actual Kubernetes config map being accessed by the plugin (but that raises the question of who stores this flat file and where). This is the simplest form of backward compatibility we can reach, from the code to the deployment as it exists.

For migration, I think the end result is that everything is in the new format (the PV parameters, the VolumeID, and the metadata stored on the backend), as this would be ideal in the long run, instead of supporting some other intermediate form/format/parameters in any entity. For reaching the migration goal, I see the possibility of a "down pod" conversion of the PVC to a new-format PV (as detailed in Option-A), without incurring a data copy. IOW, as we control the metadata on the Ceph cluster we can manipulate it, but we recreate the required objects on the Kubernetes cluster, as we do not have control over those.

@kfox1111 Are you attempting to find other ways to reach the same end state/goal? Or is your end goal different from what I am looking at?
Same goal, really. The configmaps have always been a bit of a pain for operators to deal with, so not having to special-case some old volumes would be better all around. The helm chart also has a bug that caused the configmaps to land in the wrong namespace (default). So one additional goal is to decide whether we need yet another migration plan for those, moving them from the wrong namespace to the right one, or whether the migration plan for this issue solves that issue too.
The current migration plan will get rid of the config map, as we would not need it any longer. As a result, at the end of the migration it also solves the problem of the config maps landing in the default namespace (as those would be deleted). Just to be on the same page: the current migration plan is to recreate the PV/PVC on the Kubernetes end, but shift the old image to serve as the new image in the Ceph pool; hence the old PV is deleted and so the config map is also removed.
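A sketch of what that Ceph-side "shift" could look like with go-ceph, assuming a plain RBD rename is sufficient; the pool and image names are placeholders, and this is not code from the plugin.

```go
// Sketch of the "shift the old image as the new image" step with go-ceph.
package main

import (
	"github.com/ceph/go-ceph/rados"
	"github.com/ceph/go-ceph/rbd"
)

func main() {
	conn, err := rados.NewConn()
	if err != nil {
		panic(err)
	}
	if err := conn.ReadDefaultConfigFile(); err != nil {
		panic(err)
	}
	if err := conn.Connect(); err != nil {
		panic(err)
	}
	defer conn.Shutdown()

	ioctx, err := conn.OpenIOContext("rbd") // pool name assumed
	if err != nil {
		panic(err)
	}
	defer ioctx.Destroy()

	// Rename the image that backed the 1.0 PV to the name the 1.1 plugin
	// expects for the freshly provisioned PV (both names are placeholders).
	oldImage := rbd.GetImage(ioctx, "kubernetes-dynamic-pvc-0123")
	if err := oldImage.Rename("csi-vol-abcdef"); err != nil {
		panic(err)
	}
}
```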
I'm good with getting rid of the configmaps if we can, but deleting PVCs is really painful, error-prone, and potentially disastrous. My preferred list of ways to solve this (highest to lowest preference):
We understand the manual effort, but unfortunately we are in that state. Also, please remember that this driver had not been declared stable and there were no stable releases before. We will make sure we do not break things again, but this change is inevitable.
PVs have been immutable since day 0 for a reason. I don't think it's something the community is going to accept. You can give it a try for sure, and I can even help, but it looks very difficult to me to get this change accepted upstream, at least in the short term. As an additional note, we are more than happy to have the above-mentioned process written up, or to get some contribution from the community to mitigate the effect of this change!
This commit adds support to mount and delete volumes provisioned by older plugin versions (1.0.0), in order to provide backward compatibility for 1.0.0-created volumes.

It adds back the ability to specify where the older metadata was stored, using the metadatastorage option to the plugin, and uses the provided metadata to mount and delete the older volumes. It also supports the various ways in which monitor information may have been specified (in the storage class, or in the secret), to keep the monitor information current.

Testing done:
- Mount/Delete of a 1.0.0-plugin-created volume with monitors in the StorageClass
- Mount/Delete of a 1.0.0-plugin-created volume with monitors in the secret under the key "monitors"
- Mount/Delete of a 1.0.0-plugin-created volume with monitors in the secret under a user-specified key
- PVC creation and deletion with the current version (to ensure, at a minimum, no broken functionality)
- Some negative cases, where monitor information is missing in secrets or present under a different key name, to confirm failure scenarios behave as expected

Updates ceph#378

Follow-up work:
- Documentation on how to upgrade to the 1.1 plugin and retain the above functionality for older volumes

Signed-off-by: ShyamsundarR <srangana@redhat.com>
@ShyamsundarR anything pending on this one?
@Madhu-1 what is the current status for CephFS? Also, we possibly need documentation on how to upgrade and use older volumes from 1.0.
@poornimag anything pending for CephFS 1.0? Note: I think we don't need to support migration from 0.3 to 1.0 or 1.1.0.
If I understand the above right, you are stating "migration" is not supported, but "backward compatibility" is, correct?
Do we need to support backward compatibility?
Yes, that is what we agreed to do. So we should support "using" 1.0-created volumes, and by that I mean node services and the ability to delete such volumes.
@ShyamsundarR backward compatibility for 1.0 is fine, but not for 0.3.
Agreed.
@ShyamsundarR, in that case, can you please fix the issue title?
Closing this one, as we support mounting/deleting 1.0.0 PVCs.
As the plugin moves from the current config maps to a more descriptive VolumeID and RADOS objects (once #312 is merged) for storing the Kubernetes VolumeName and its backing image details, it needs to ensure the following for existing users:
1) Backward compatibility:
NOTE: This is as mentioned in this comment, #296 (comment)
For volumes created using existing 1.0 versions of the Ceph-CSI plugins, the following actions would be supported by the 1.1 version of the plugin:
- DeleteVolume
- DeleteSnapshot
- NodePublishVolume (IOW, mounting and using the volume for required IO operations)
And the following would be unsupported:
- CreateVolume from a snapshot source created by an older version
- CreateSnapshot from a volume source created by an older version
NOTE: Support for 0.3-created PVs requires a feasibility analysis to ensure the above compatibility can be guaranteed for them as well.
The method for doing this would be to continue using the existing config maps and, on detecting an older-style VolumeID in the mentioned RPC requests, read the required data from the config maps and process the request.
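A sketch of what that compatibility branch could look like; the ID-detection heuristic and the legacy metadata-store interface below are assumptions for illustration, not the plugin's actual code.

```go
// Sketch of the compatibility branch described above: if a request carries an
// old-style (1.0) volume ID, answer it from the legacy config-map metadata;
// otherwise use the new scheme. Heuristic and interfaces are illustrative.
package compat

import "strings"

type legacyStore interface {
	// Get returns the details recorded for a 1.0 volume in the config maps.
	Get(volumeID string) (monitors, pool, image string, err error)
}

// isLegacyVolumeID assumes new-format IDs start with a version prefix such as
// "0001-"; anything else is treated as a 1.0 ID kept in the config maps.
func isLegacyVolumeID(id string) bool {
	return !strings.HasPrefix(id, "0001-")
}

// DeleteVolume routes a delete request to the legacy or new-format path.
func DeleteVolume(id string, store legacyStore, deleteImage func(monitors, pool, image string) error) error {
	if isLegacyVolumeID(id) {
		monitors, pool, image, err := store.Get(id)
		if err != nil {
			return err
		}
		return deleteImage(monitors, pool, image)
	}
	// New-format ID: cluster/pool are decoded from the ID itself and the
	// per-volume details come from the RADOS omaps (not shown here).
	return deleteNewFormatVolume(id)
}

func deleteNewFormatVolume(id string) error {
	// placeholder for the new-scheme path
	return nil
}
```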
2) Migration
These were discussed in issue #296
Option A: #296 (comment)
In short: create a new PVC using the new version of the plugin, use Ceph CLIs to clone the image backing the older PVC as the image backing the newer PVC, and then update the pods that use the older PVC to use the new PVC.
Option B: #296 (comment)
Deals with PV.ClaimRef juggling as in the link above.
Option C: Migrate data using a PV to PV data copy
Option A is what I have tested and am far more comfortable recommending, as it updates the driver name as well as the required VolumeID and other fields, and hence is future-proof from a Ceph-CSI implementation standpoint.
Further, with Option-A and the backward compatibility included as above, the migration can be staged and need not happen in one go, hopefully alleviating some concerns around downtime for the pods using the PVs.
Documentation and instructions for achieving the same would be provided.
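For reference, a rough go-ceph rendering of the Ceph CLI steps behind Option-A (snapshot, protect, clone); the image and snapshot names are placeholders, and the calls reflect my understanding of the go-ceph library, not code from this repository.

```go
// Rough go-ceph equivalent of the Option-A Ceph CLI steps: snapshot the image
// backing the old PVC, protect the snapshot, and clone it as the image backing
// the new PVC. All names are placeholders.
package main

import (
	"github.com/ceph/go-ceph/rados"
	"github.com/ceph/go-ceph/rbd"
)

func main() {
	conn, err := rados.NewConn()
	if err != nil {
		panic(err)
	}
	if err := conn.ReadDefaultConfigFile(); err != nil {
		panic(err)
	}
	if err := conn.Connect(); err != nil {
		panic(err)
	}
	defer conn.Shutdown()

	ioctx, err := conn.OpenIOContext("rbd") // pool name assumed
	if err != nil {
		panic(err)
	}
	defer ioctx.Destroy()

	// Snapshot and protect the image that backs the old (1.0) PVC.
	oldImg, err := rbd.OpenImage(ioctx, "kubernetes-dynamic-pvc-0123", rbd.NoSnapshot)
	if err != nil {
		panic(err)
	}
	defer oldImg.Close()
	snap, err := oldImg.CreateSnapshot("migrate")
	if err != nil {
		panic(err)
	}
	if err := snap.Protect(); err != nil {
		panic(err)
	}

	// Clone it as the image name the new (1.1) PV/PVC pair points at.
	opts := rbd.NewRbdImageOptions()
	defer opts.Destroy()
	if err := rbd.CloneImage(ioctx, "kubernetes-dynamic-pvc-0123", "migrate",
		ioctx, "csi-vol-abcdef", opts); err != nil {
		panic(err)
	}
	// Pods are then switched from the old PVC to the new one; flattening the
	// clone and cleaning up the snapshot/old image would follow later.
}
```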
Other options:
NOTE: CephFS would be backward compatible, but may not be able to leverage the provided migration solution, as in Option-A, as it involves snapshots and clones.