Two PVCs bound to the same PV #2827

Open · 1 task done

jonaskello opened this issue Jun 9, 2022 · 1 comment
Comments

jonaskello (Contributor) commented Jun 9, 2022

Describe the bug

This is exactly the same problem as #2456, but I'm not commenting there since opening a new issue was requested in this comment.

All our PVs and PVCs are reserved using claimRef and volumeName according to the docs:

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reserving-a-persistentvolume
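For reference, the reservation pattern from those docs looks roughly like this (a minimal sketch; the names, sizes, and hostPath backend are illustrative, not our actual manifests):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: ""          # empty class so only explicit binding applies
  claimRef:                     # reserves this PV for one specific PVC
    name: example-pvc
    namespace: default
  hostPath:
    path: /data/example
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  volumeName: example-pv        # pins the claim to that PV
  resources:
    requests:
      storage: 10Gi
```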

This stopped working after we upgraded to the new Flux version, probably because the claimRef uid value is somehow lost when applied by Flux (or, more precisely, by the API/tool Flux uses to do the apply), as mentioned in #2250.

The conclusion seems to be to remove claimRef from all PVs, and that also works for us, but the official docs say it should be there if we want to reserve a PV for a specific PVC. See the sketch below for what that workaround amounts to.
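Concretely, the workaround keeps only the PVC-side half of the pattern above: the PVC still sets volumeName, while the PV is applied without any claimRef block (sketch, same illustrative names):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  # no claimRef here; binding relies solely on volumeName in the PVC
  hostPath:
    path: /data/example
```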

Steps to reproduce

See #2456

Expected behavior

See #2456

Screenshots and recordings

See #2456

OS / Distro

See #2456

Flux version

kustomize-controller:v0.26.1

Flux check

N/A

Git provider

No response

Container Registry provider

No response

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct
stefanprodan (Member) commented

> Probably because the claimRef uid value is somehow lost when applied by Flux (or, more precisely, by the API/tool Flux uses to do the apply).

Flux doesn't remove anything from PVs; it uses Kubernetes server-side apply to reconcile them. My guess is that the Kubernetes API drops the claimRef when it patches the objects in etcd. I will bring this up with the Kubernetes maintainers.

The current workaround is to place the PVs in a Helm chart in Git and apply it with a Flux HelmRelease. Unlike Flux, Helm does not use server-side apply, so it should keep the claimRef in place.
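A rough sketch of that workaround, assuming the PV manifests are packaged as a chart at ./charts/reserved-pvs in an existing GitRepository source named flux-system (all names and intervals are illustrative):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: reserved-pvs
  namespace: flux-system
spec:
  interval: 10m
  chart:
    spec:
      chart: ./charts/reserved-pvs   # chart in Git containing the PV templates
      sourceRef:
        kind: GitRepository
        name: flux-system
        namespace: flux-system
```

Since helm-controller applies the rendered manifests through Helm's own client-side patching rather than server-side apply, the claimRef set in the chart should stay in place.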
