-
Scenario: an HA control-plane Kubernetes cluster where some automation went awry and wiped out 3 of the 5 control nodes. The cluster was using externally hosted Ceph pools, along with this plugin, to provide PVCs to the cluster. Snapshots fell out of retention over the extended weekend we had. With that said, I would like to stand up a new cluster, point it at the same external Ceph cluster, and import the existing data into the new cluster. Is this possible? If so, what would be required? It is not a production cluster, but there is some very inconveniently replaceable data that would be nice to retain, and I wouldn't want to have to clear the 5 TB or so just because the control plane died. Manual snapshots were taken of the 2 remaining control nodes before any additional actions; the worker nodes appear untouched.
-
You might be able to create static PVCs for the volumes you had in the previous Kubernetes cluster. Applications should be able to consume those directly, and it also makes it easy to move the data from an old volume to a new, dynamically provisioned one.
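As a rough sketch of what that could look like, here is a minimal static PV/PVC pair modelled on the ceph-csi static PVC documentation. All of the concrete values (pool name, RBD image name, clusterID, secret name/namespace, capacity) are placeholders and would need to come from your external Ceph cluster and your old CSI configuration, e.g. `rbd ls <pool>` to find the existing `csi-vol-...` image names.

```yaml
# Minimal static-provisioning sketch for one pre-existing RBD image.
# Placeholder values (pool, image name, clusterID, secret) must be
# replaced with the real ones from the external Ceph cluster.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: restored-data-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 100Gi                      # size of the existing RBD image
  persistentVolumeReclaimPolicy: Retain # keep the image if the PVC is deleted
  storageClassName: ""                  # empty so no provisioner touches it
  volumeMode: Filesystem
  csi:
    driver: rbd.csi.ceph.com
    fsType: ext4
    # the RBD image name as it exists in the pool, e.g. csi-vol-<uuid>
    volumeHandle: csi-vol-00000000-1111-2222-3333-444444444444
    volumeAttributes:
      clusterID: "<ceph-cluster-id>"
      pool: "<pool-name>"
      staticVolume: "true"
      imageFeatures: "layering"
    nodeStageSecretRef:
      name: csi-rbd-secret
      namespace: ceph-csi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-data-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi                    # should match the PV capacity
  storageClassName: ""                  # must match the PV
  volumeName: restored-data-pv          # bind explicitly to the static PV above
```

Keep in mind that statically provisioned volumes are not managed by the provisioner, so some CSI features (snapshot, clone) may not be available on them; that is one reason to eventually copy the data into a dynamically provisioned PVC once the new cluster is up.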
-
@gerethd it should be possible to do this in 3 different ways.