
Feature Request: Support retained PV as source #2359

Closed
haslersn opened this issue Jul 16, 2022 · 11 comments

Comments

@haslersn

Is your feature request related to a problem? Please describe:

I accidentally deleted a DataVolume. The PersistentVolume was still there. I wanted to reuse it, including its content. However, containerized-data-importer currently doesn't support this.

Describe the solution you'd like:

Perhaps something like the following could be made to work. At the moment it doesn't, because the importer would create a blank disk inside the volume, AFAIK.

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <DataVolume name>
spec:
  source:
    blank: {}
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    volumeName: <PV to reclaim>

Describe alternatives you've considered:

I think a workaround is to manually create a PVC to reclaim the PV and then use the PVC source. However, this would create a copy of the data.
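For illustration, a rough sketch of that PVC-source workaround (all names here are placeholders, and the retained PV's claimRef would likely need to be cleared first so it can bind to the new claim):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: reclaimed-pvc            # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: <PV to reclaim>    # bind explicitly to the retained PV
---
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: copied-dv                # placeholder name
spec:
  source:
    pvc:
      name: reclaimed-pvc
      namespace: <namespace of reclaimed-pvc>
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi

As noted, this imports the data into a new volume instead of reusing the retained one in place.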

@mhenriks (Member)

Hi @haslersn, as a workaround you should be able to create a PVC that is explicitly bound to the retained PV. Add the following annotation to the PVC:

cdi.kubevirt.io/storage.populatedFor: <pvc name>

Then, when a DataVolume with the same name as the PVC is created, the DataVolume controller should skip population and mark the DataVolume succeeded.
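A hedged sketch of what that could look like (placeholder names; the PVC's size and access modes have to be compatible with the retained PV, and both objects go in the same namespace):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-dv                                      # same name the DataVolume will use
  annotations:
    cdi.kubevirt.io/storage.populatedFor: my-dv    # value is the PVC/DataVolume name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: <retained PV name>                   # bind explicitly to the retained PV
---
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: my-dv                                      # must match the PVC name above
spec:
  source:
    blank: {}                                      # placeholder source; population should be skipped
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi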

@kubevirt-bot (Contributor)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot added the lifecycle/stale label on Oct 19, 2022
@haslersn (Author)

/remove-lifecycle stale

kubevirt-bot removed the lifecycle/stale label on Oct 19, 2022
@kubevirt-bot (Contributor)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot added the lifecycle/stale label on Jan 17, 2023
@haslersn (Author)

/remove-lifecycle stale

kubevirt-bot removed the lifecycle/stale label on Jan 18, 2023
@aglitke (Member) commented Mar 13, 2023

@mhenriks may have some updated information for you on this issue...

@kubevirt-bot (Contributor)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot added the lifecycle/stale label on Jun 11, 2023
@haslersn (Author)

/remove-lifecycle stale

@mhenriks is there an update (as suggested by @aglitke)?

kubevirt-bot removed the lifecycle/stale label on Jun 13, 2023
@akalenyu (Collaborator) commented Jun 19, 2023

One possible workaround, which was introduced for this in #2683, is to specify a dummy, unique storage class name on both the DV and the PV (just for the sake of binding).

Regarding the full functionality that was suggested (volumeName), I believe we do not have a solution for that yet.
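Roughly, and only as a sketch of the binding part, that could look like this (the storage class name is a made-up unique string, and the class itself is assumed not to exist as a real provisioner):

# On the retained PV, set a unique dummy storage class name:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <retained PV name>
spec:
  storageClassName: dummy-sc-for-reclaim    # made-up, unique value
  # rest of the existing PV spec stays unchanged
---
# On the DataVolume, request the same dummy storage class:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <DataVolume name>
spec:
  source:
    blank: {}                                # placeholder; see #2683 for the intended usage
  pvc:
    storageClassName: dummy-sc-for-reclaim   # must match the value on the PV
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi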

@alromeros (Collaborator)

Hey @haslersn, I think this feature might come in handy in your case: #2583.

Basically, you can now create a DV with the cdi.kubevirt.io/storage.checkStaticVolume annotation, and CDI will look for PVs with claimRefs that match its name and namespace. There's more information in the PR, but creating a DV with the same name and namespace as before, plus that annotation, should be enough.

Let us know if this fixes your problem!
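For reference, a minimal sketch of such a DataVolume, assuming the annotation takes "true" as its value and that the retained PV's claimRef still points at the original name and namespace:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <original DataVolume name>           # must match the name in the PV's claimRef
  namespace: <original namespace>            # must match the namespace in the PV's claimRef
  annotations:
    cdi.kubevirt.io/storage.checkStaticVolume: "true"
spec:
  source:
    blank: {}                                # placeholder; see #2583 for the exact expected spec
  storage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi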

@alromeros (Collaborator)

Hey @haslersn, closing the issue since the topic has been addressed (should be fixed by the previous approach). Feel free to reopen if you want to follow up on the discussion or have other related issues. Thanks!
