Terraform: Helm module should remove itself at destroy time before cluster deletion #1422
Comments
Relates to #1403.
First of all, the solution is already mentioned in this repo:
I have added this to my PR #1375 and tested that no leftovers could be found in the Firewall Rules and Forwarding Rules (load balancer) tabs.
I have tried using a provisioner:
And it is doing
/cc @chrisst any thoughts on this? Is there something wrong with our Helm approach?
Weird that it fails when you do that work, @aLekSer -- if I run a script to delete all Helm instances before destroying everything else, it's fine. A couple of theories:
Unfortunately I'm not very experienced with mixing Helm and Terraform, so my thoughts are more educated guesses at this point. I don't think using a local-exec provisioner is heading down the correct path. If a Terraform resource, in this case the helm release, is removed or modified by an external process such as local-exec, it is almost always going to be problematic for Terraform. Cleaning up after a resource should be handled by the resource's destroy call, in this case the helm_release's own destroy. I suspect it's failing because Terraform is trying to delete the helm release which has already been removed through the provisioner call. You can try looking at the debug logs for more information.
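For reference, a minimal sketch of the approach described above, where the helm_release resource itself owns the cleanup instead of a local-exec provisioner. The resource names and chart are illustrative, not taken from this issue:

```hcl
# Sketch: the release is managed entirely by Terraform, so its own destroy
# call removes the chart. Because it depends on the cluster, Terraform
# destroys the release first and the cluster afterwards (reverse
# dependency order).
resource "google_container_cluster" "primary" {
  name     = "example-cluster"
  location = "us-central1-a"
  # ... remaining cluster configuration ...
}

resource "helm_release" "ingress" {
  name  = "ingress"
  chart = "stable/nginx-ingress"

  # Created after the cluster, destroyed before it.
  depends_on = [google_container_cluster.primary]
}
```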
@aLekSer and I tried the following approaches:
Result:
Hi, so the fix was: prior to destroy, run `terraform refresh`.
Hi, I also found a similar issue where the helm release deletion was not completing before the namespace was removed. In my case the only thing deployed to the namespace was the chart handled by the helm_release resource, so I added the configuration below, which meant it deleted cleanly, rather than creating the namespace separately.
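The snippet referred to above is not preserved in this thread; one plausible shape for it, assuming the helm provider's create_namespace argument is what was used (an assumption, not confirmed by the comment), would be:

```hcl
# Hypothetical reconstruction: letting helm_release create (and therefore
# delete) the namespace itself, instead of a separate kubernetes_namespace
# resource, keeps the release and namespace lifecycles ordered correctly.
resource "helm_release" "app" {
  name             = "app"            # assumed
  chart            = "./charts/app"   # assumed
  namespace        = "app"
  create_namespace = true             # requires helm provider >= 1.2
}
```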
This issue is marked as stale due to inactivity for more than 30 days. To avoid being marked as stale, please add the 'awaiting-maintainer' label or add a comment. Thank you for your contributions.
This issue is marked as obsolete due to inactivity for the last 60 days. To avoid the issue being closed in the next 30 days, please add a comment or the 'awaiting-maintainer' label. Thank you for your contributions.
We are closing this as there has been no activity on this issue for the last 90 days. Please reopen if you'd like to discuss anything further.
Is your feature request related to a problem? Please describe.
This is particularly frustrating with GKE; I am not sure how it is with other providers.
When you delete a GKE cluster, if there is a Service set up, the firewall rules and load balancers are left in place and aren't deleted. So you can sometimes hit quota limits and/or incur extra charges for load balancers, external IPs, etc. that you aren't using.
Describe the solution you'd like
What I would suggest is that when a `destroy` event occurs on a cluster, the Terraform Helm module should do a `helm delete --purge` on the installed chart before the cluster is removed, to ensure this gets cleaned up. See:
https://www.terraform.io/docs/provisioners/index.html#destroy-time-provisioners
for the hooks to implement this.
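As an illustration of what such a destroy-time hook could look like, here is a minimal sketch using a null_resource with a destroy-time local-exec provisioner. The resource names, the release name my-release, and the surrounding cluster resource are assumptions for the example, not something this module provides today:

```hcl
# Hypothetical sketch: run `helm delete --purge` when this resource is
# destroyed. Because it depends on the cluster, Terraform destroys it
# (and so deletes the release) before tearing down the cluster.
resource "null_resource" "helm_cleanup" {
  depends_on = [google_container_cluster.primary]

  provisioner "local-exec" {
    when    = destroy
    command = "helm delete --purge my-release"   # release name assumed
  }
}
```

Note that, as discussed in the comments above, a local-exec cleanup like this can conflict with a helm_release resource's own destroy if both manage the same release.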
Describe alternatives you've considered
Writing a bash script to clean up orphaned resources, but load balancers in GCP are a combination of various other resources, and it gets complicated 😕
Additional context
https://github.com/pantheon-systems/kube-gce-cleanup
kubernetes/ingress-gce#136