Support annotation to prevent argo from deleting resources if app gets deleted #11227
Comments
I would love to see this enhancement become available at some point!
@slig2008 awesome, I didn't know that there is an existing workaround for this; that's why I implemented it myself. My code looks nearly the same, except I'm also checking the name and not only the namespace and the status, because it's possible that there are multiple released volumes in the same namespace. Thanks for sharing the link to the workaround.
I would love this feature too. My use case is when an operator creating a new Application mistakenly includes an existing namespace that holds resources managed by another Application. When the operator tries to fix it by renaming the namespace, Argo CD (when auto-sync is enabled) will delete the previous namespace, which deletes every resource inside it.
This is a very important feature. In my opinion, namespaces, PVCs and PVs should never be deleted by default, just like CRDs. An annotation is a great solution, as it will support resources created by operators as well.
+1
For us, we will use OPA to prevent that: if a resource is not annotated with a specific annotation, the deletion is not allowed.
I may be misunderstanding what this issue is about, but isn't this already supported? Just slap the annotation on the resource:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql-data
  annotations:
    argocd.argoproj.io/sync-options: Delete=false
spec:
```

If you deploy a namespace manifest, don't forget to add the annotation to it, too. Otherwise all your resources will be deleted implicitly on namespace deletion:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: oncall
  annotations:
    argocd.argoproj.io/sync-options: Delete=false
```
Summary
What change do you think needs making?
Support flags/annotations that prevent resources from being deleted if the Argo app gets deleted.
Motivation
If you have multiple customers with dev and prod systems, it's sometimes necessary to stop a customer's system to save resources and costs. If you have a dynamically created persistent volume, it gets deleted when the Argo app gets deleted. If you install the application manually with a helm install, there is a policy
"helm.sh/resource-policy": keep
that prevents Helm from deleting the PVC and the PV with it. As stated in the Argo docs, this policy is not supported, and I can confirm that. If we use a StorageClass with
reclaimPolicy: Retain
the PVC will be deleted by Argo but the PV won't; it switches to the status "Released". If you now recreate the application, a new PV will be created because the claimRef in the released PV does not match the criteria. The uid is the problem here, because it's the uid of the PVC that created the PV in the first place.
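To illustrate, a leftover PV in this state looks roughly like the following when inspected; every name, the namespace, the volume handle, and the uid are placeholders, not values from the issue. The stale `claimRef.uid` still points at the deleted PVC, so the new PVC (which gets a new uid) can never bind:

```yaml
# Illustrative only: PV name, namespace, PVC name, volumeHandle, and uid are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-0b1c2d3e-placeholder
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-retain                # a StorageClass with reclaimPolicy: Retain
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0       # placeholder EBS volume ID
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    namespace: customer-dev
    name: postgresql-data
    uid: 11111111-2222-3333-4444-555555555555 # uid of the old, deleted PVC
status:
  phase: Released
```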
As Argo currently does not support PostDelete hooks, we have to use a PreSync hook for that:
What we do right now is use a PreSync hook that checks whether there is an existing PV whose status is "Released" and which is in the same namespace and has the specific name. If there is a match, we patch the PV by removing the uid from the claimRef. With this, the PV changes its status to "Available", and during the Sync the newly created PVC is able to bind to the PV.
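A minimal sketch of such a PreSync hook as a Job follows. The Job name, ServiceAccount, image, and the `customer-dev/postgresql-data` claim reference are assumptions for illustration; the ServiceAccount is assumed to already have RBAC permission to get and patch PersistentVolumes:

```yaml
# Sketch only: names, namespace/claim, ServiceAccount, and image are assumptions.
apiVersion: batch/v1
kind: Job
metadata:
  name: release-pv-claimref
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      serviceAccountName: pv-patcher      # assumed to have RBAC to get/patch PersistentVolumes
      restartPolicy: Never
      containers:
        - name: patch-pv
          image: bitnami/kubectl:latest   # any image with kubectl will do
          command:
            - /bin/sh
            - -c
            - |
              # Find Released PVs whose claimRef points at the expected namespace/name
              # and drop the stale uid so the PV becomes "Available" again.
              for pv in $(kubectl get pv -o jsonpath='{range .items[?(@.status.phase=="Released")]}{.metadata.name} {.spec.claimRef.namespace}/{.spec.claimRef.name}{"\n"}{end}' \
                          | awk '$2 == "customer-dev/postgresql-data" {print $1}'); do
                kubectl patch pv "$pv" --type=json \
                  -p '[{"op": "remove", "path": "/spec/claimRef/uid"}]'
              done
```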
Specs:
ArgoCD version: 2.5.2
CSI driver used: ebs.csi.aws.com
Proposal
Handle flags/annotations like Helm does and do not delete resources if the Argo app gets deleted. Another possibility is to support PostDelete hooks, which would allow implementing the above workaround there. The advantage is that the process would only run when the app gets deleted and would not need to run on every sync, which is a waste of resources.
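For illustration only, the proposed annotation could look something like the snippet below. The key `argocd.argoproj.io/resource-policy: keep` is purely hypothetical; it mirrors Helm's `helm.sh/resource-policy` and does not exist in Argo CD:

```yaml
# Hypothetical annotation key, shown only to illustrate the proposal; not an existing Argo CD feature.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql-data
  annotations:
    argocd.argoproj.io/resource-policy: keep  # hypothetical: keep this resource when the Application is deleted
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```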