Do not delete a resource but create a new resource when change is detected #15485
Interesting idea, @Puneeth-n! Thanks for suggesting it. I think a question we'd need to figure out here is what does happen to the old instance. Should Terraform just "forget" it (no longer reference it from state) and leave some other process to clean it up? Or maybe it should become "deposed" but not deleted. In that case, a subsequent run would delete it, so that's probably not what you want here.
Thanks @apparentlymart. I have been giving this some thought over the past few days, as we plan to use AWS Step Functions in the near future. My thinking: introduce a new instance state for resources like this, plus a new CLI option to delete those instances only when explicitly requested.
Okay... so this implies an entirely new instance state, "decommissioned", in addition to "tainted" and "deposed", which behaves a bit like deposed but only gets deleted when specifically requested. Ideally I'd rather avoid the complexity of introducing a new instance state, so I'd like to let this soak for a little while and see if we can find a way to build something similar out of our existing concepts, or to add a new concept that addresses a broader problem in this same area of gradually decommissioning things. For example: a common problem is gracefully migrating between clusters that have actions to be taken when nodes leave, like Consul and Nomad clusters. Currently people do this in several manual steps: ramping up a new cluster, gradually phasing out the old cluster, and then destroying the old cluster. This seems like a similar problem to yours, and in both cases there is some action to be taken between the creation of the new thing and the destruction of the old thing. This idea has come up a number of times with different details.
@apparentlymart: This would be really useful for dealing with, e.g., AWS launch configuration changes. Each time you change one, you need to first create a new one, point your ASG at it, and then destroy the old one.
It would be great to be able to somehow indicate that Terraform should always create a new one and not delete the old one. That way you preserve the history of what the values were and can switch back easily.
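For context, the closest existing tool is the create_before_destroy lifecycle flag; it handles the ordering (create the replacement first) but still deletes the old object afterwards, which is exactly the gap this issue describes. A minimal sketch of that pattern, with illustrative names and values:

resource "aws_launch_configuration" "web" {
  # name_prefix (rather than a fixed name) lets the replacement coexist
  # with the old configuration while the ASG is repointed.
  name_prefix   = "web-"
  image_id      = var.ami_id
  instance_type = "t3.micro"

  lifecycle {
    # Create the replacement first, then destroy the old configuration.
    create_before_destroy = true
  }
}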
Hi, any chance of getting this into an upcoming release?
@maulik887 what is your use case? When I was working with Step Functions a year ago, I had this requirement because there was no Update API and we didn't want Terraform to delete our step functions.
My case is: I'm creating an API Gateway API and using it as a Lambda proxy. Now I want to create API stages per Lambda version, and I don't want to delete the old stage versions.
I would like to check on the state of this. My use case: I have an application that performs long-running processes (hours, sometimes days), but I would like to roll out updates seamlessly. I see two options: either a way to query the application for its status (an HTTP request against an endpoint exposed for Terraform), or a way to schedule a resource for deletion after some period (in my case, probably a week). That way an update would also delete all older (finished) instances.
I'd also find this useful. I have a use case where I'm deploying an S3 object, and we deploy them with the version tag in the object name, so 'myjs-.js'. When we change the version, I want a new S3 object deployed, but I don't want the old version removed.
I have a similar need to create a new version of a resource without destroying the old one, but in my use case I don't really care about cleaning up old versions, so I'd be OK with Terraform just forgetting about the old resource. The way I'd see it working would be an additional attribute modifier similar to the existing lifecycle flags. I'm not sure how this would work with dependent resources though, which I wouldn't necessarily want destroyed, even though I need to grab the ID of the new resource that got created.
Similar use case to @sjmh: I want to keep my old Lambda code versions around for a bit in S3, since they can now be attached to older Lambda versions with an alias and (soon) I should be able to route traffic to those old versions, but there is no way to upload a new code version and alias without deleting the old code version with aws_s3_bucket_object.
The Terraform Core team is not currently doing any work in the area of this issue due to being focused elsewhere. In the meantime, I think some of the use cases described here (thanks!) could be handled more locally by features within providers themselves, such as flags to skip deletion on a per-resource-type basis, so I'd encourage you all to open an issue within the relevant provider (unless I've missed someone, it looks like you're all talking about AWS provider stuff) to discuss the more specific use case and see if a more short-term solution is possible. There is already some precedent for resource-type-specific flags to disable destroying an object. Having some specific examples of solutions for this in individual providers is often a good way to figure out what a general solution might look like, or even to see whether a general solution is warranted, so if you do open such a ticket please mention this issue.
I am also trying to get a similar feature. Whenever there is a feature change, it fires up a Jenkins job and new tasks/services are created in the AWS ECS cluster. I want to keep the old tasks/services too, so that if anything goes wrong I can roll my load balancer back to them.
I build an EBS volume with Packer, then take an EBS volume snapshot with Terraform. For potential rollback purposes, I don't want Terraform to replace (delete) the old EBS volume snapshot. At the moment there is a hack with Terraform state manipulation in the script wrapper that runs my Terraform commands:
I just remove the resource from Terraform state to free the place for the new one on the next apply. It would be nice to have an HCL resource declaration for this.
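That wrapper hack, as a minimal sketch; the resource address (aws_ebs_snapshot.packer) is illustrative, and this assumes it is acceptable that the old snapshot lives on only outside of Terraform:

# Forget the previous snapshot: it stays in AWS but leaves the state,
# so the next apply creates a new snapshot instead of replacing it.
terraform state rm aws_ebs_snapshot.packer
terraform apply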
The solution my team implemented was similar to what @Tensho described: we just run terraform state rm before the apply.
For some things we did recently, we actually use terraform state mv to move the resource out of the way, and -target to get a new one built. Subsequent applies will then want to tear down the old resources, so we could apply that when we were ready; that way the state was still tracked and we didn't have to do any manual clicking about.
This is also not viable for me, because our deploys are done entirely via CI with no manual intervention. We also deploy many times a day, so we can't introduce manual steps each time. We would be better off just creating the task definition outside of Terraform, but that loses our shared task configuration and adds more tooling, when we'd like to keep it as simple as possible.
+1
Hard to believe Terraform doesn't have this feature. In the case of AWS snapshots, I somewhat obviously want more than one snapshot. I want to create a new snapshot but KEEP THE EXISTING ONES, TOO! Y'all don't support that? Seriously? "Make new, but keep existing." Like when I install a new binary on my Windows laptop, I don't want to delete and reinstall all of the other binaries; I want TO KEEP WHAT I'VE GOT and ADD THE NEW ONE. Could this please get a look from the HashiCorp core dev team?
Same thing here with EMR clusters. I want to launch a new cluster but keep the old one running. Terraform always destroys the old one and replaces it with a new one.
Similar workflow here: I want to deploy a new EC2 instance and leave the old one running until the load balancer marks the new one as healthy, then run another plan/apply to delete the old instance. It raises some issues around indexes, because it has to increment them; for example, my LB uses the instance count to add the entries to the target group. During the no-destroy phase I expect the extra entry to be added (so somehow the state needs to increment its count), and then when we actually apply the plan that destroys, the count goes back to normal.
@DavidGamba you can already do that, as long as your change to the EC2 instance forces a destroy (e.g. changing the AMI), by using the create_before_destroy lifecycle flag. It will also do this for you in a single apply.
I think this feature would be useful in many cases and should be considered cross-provider.
This is exactly the same issue I face, but with Google Cloud. Ideally Terraform could create new objects in cloud storage and publish new function versions without destroying the old ones. Another area where I see this being useful is certificate management (again, cross-provider): you never really want to delete your old cert, just provision a new one. This feature would help there too. In terms of where I would expect this to be surfaced, I would like to see it as a lifecycle flag, e.g. abandon_on_destroy.
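Sketched as configuration, that proposal might look like the following. To be clear, abandon_on_destroy is hypothetical: it is a name suggested in this thread, not an existing Terraform feature, and the resource and values are illustrative.

resource "google_storage_bucket_object" "function_source" {
  name   = "function-${var.release}.zip"
  bucket = google_storage_bucket.source.name
  source = "function.zip"

  lifecycle {
    # Hypothetical flag: on replacement or destroy, drop the object from
    # state and leave the real object in cloud storage untouched.
    abandon_on_destroy = true
  }
}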
How do you plan to pick up the resources left behind after abandon_on_destroy?
In my case, I create an AMI from an image every time I run Terraform. It would be great if Terraform could create new AMIs without having to destroy the old AMIs. We could have a lifecycle option so that replacing a resource uses only the create step and skips the delete. I don't think the existing lifecycle options cover this.
This works well for me. I am able to forget the resource from state and keep the history of the resource in AWS.
My use case is that we are trying to migrate to a new AWS account, and we want to create the new resources in the target destination first and test them thoroughly, then do a cutover, then tidy up on success. Ideally I would like to do this in three applies.
On those lines, abandoning the resource wouldn't be ideal, as I would have to manually clean up the old resources. Is that something that could be done?
My use case is to not destroy the Lambda layer version when the source code changes. I want Terraform to deploy the new layer version and keep the old ones.
I am sharing a layer between accounts. Since this is done by version, when the layer is deleted we have to update/redeploy the Lambda functions in the other accounts that use this layer (updating the version number, since the previous layer is destroyed).
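For this specific resource, the AWS provider later gained a per-resource flag along the lines the core team suggested above. A minimal sketch, assuming a reasonably recent AWS provider and illustrative names:

resource "aws_lambda_layer_version" "shared" {
  layer_name = "shared-deps"
  filename   = "layer.zip"

  # Retain superseded layer versions instead of deleting them when a
  # new version is published; Terraform simply forgets the old one.
  skip_destroy = true
}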
We have a use case for this: we have some Terraform configurations that manage both a resource and a CloudWatch log group that the resource logs to. If we ever want to change the name of the log group, we can't just change it in the configuration, because a log group name change forces a recreation, and our log groups are undeletable for audit reasons. To accomplish what we want, we have to manually remove the old log group from state first.
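This particular case is now covered by a resource-level flag in the AWS provider; a minimal sketch, assuming a recent provider version and an illustrative name:

resource "aws_cloudwatch_log_group" "audit" {
  name = "audit-logs"

  # On destroy or forced replacement, remove the log group from state
  # instead of deleting it, so the logs survive for audit purposes.
  skip_destroy = true
}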
There is a way.
Note: this carries risk, as the very first VM does not get deleted (it is considered to have been created outside the scope of Terraform), so its cost may persist. You also have to keep backups (incremental, for the state files). Hope this helps.
I have another use case, similar but not identical to those that were presented here, which could be solved by something like the lifecycle flags discussed above. On GCP, I'd like to remove a BigQuery table from the Terraform state without deleting the actual underlying table, which would result in an unrecoverable loss of data. Setting any kind of lifecycle parameter would make it clear that I know what a destroy means, and that I do not want the actual data to be deleted. The entire process is part of CI/CD, so manual state surgery on every run is not an option. Please note that prevent_destroy is not a solution here, since it only makes the operation fail rather than skip the deletion. On a slightly different note, by browsing around I stumbled upon CloudFormation's DeletionPolicy, which looks like their solution to the need expressed in this issue. (I never used CloudFormation, though, and could be completely wrong.)
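Terraform 1.7 eventually shipped a first-class mechanism for this: config-driven removal via a removed block, which drops a resource from state without deleting the real object. A minimal sketch, assuming Terraform >= 1.7 and an illustrative resource address:

# First delete the resource "google_bigquery_table" "events" block from
# the configuration, then declare how Terraform should let go of it:
removed {
  from = google_bigquery_table.events

  lifecycle {
    # Forget the table on the next apply instead of destroying it.
    destroy = false
  }
}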
I also want this.
Any existing workarounds?
I can't give you a code sample, but if you erase the diff of the task definition after terraform apply with jq or a similar script, the plan's behavior will be as intended.
Adding another use case: we use Terraform to automate creating new stacks in Grafana Cloud, with the stack name passed as a variable. The first run creates a new stack and stores the details in the state. We don't want to destroy the first stack (and thus lose all its data) when creating the second. The current approach is a teardown stage that removes the references from state: terraform state list | %{if($_ -match "new_stack|grafana_data_source|grafana_cloud_api_key"){terraform state rm $_}}. A better, less hacky way would be preferred.
Are additional resources only ever added? On GCP I just add the additional resources to the same code, and it deploys them, keeping the original state as is and just adding the new state. Is this what you're looking for?
Does anyone have any good ideas for doing this when building things within Azure?
I am adding this here; it was originally its own feature request but has been deemed a duplicate of this one.

Use Cases

The core idea is to create a way to tell Terraform to remove a resource from the state file during a destroy, instead of deleting the real object. This would make it possible to handle nested objects, such as resources that live inside another resource that is itself being destroyed.

Attempted Solutions

HashiCorp would recommend using one Terraform workflow to spin up a k8s cluster and a separate Terraform workflow to install things into that cluster. However, there are many times when at least some bootstrapping will occur in the initial Terraform workflow. This fix would allow users to quickly identify resources that need not be destroyed via their API.

Proposal

lifecycle {
  # This would cause Terraform to remove the resource from the state file
  # instead of calling the owning API to delete it.
  state_only_destroy = true
}
Any update? It's been 7 years.
Another use case for this: I have a Google Cloud Spanner instance, created outside Terraform, that should be permanent. With Terraform, I want to create a database in that instance and then drop the database on terraform destroy. I need the instance in the configuration to refer to its properties, so I currently keep it in the configuration.
Another idea for this use case would be an option to exclude specific resources from destroy.
@alexeyinkin Your case, where the config doesn't actually manage the resource, seems like a prime example of where to use a data source. Now, I was kinda expecting you to say you want Terraform to manage the configuration (i.e. all the knobs) but not the lifetime (creation/deletion); I'd see that as a possible case, but from what you described, that doesn't seem to be your use case.
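A minimal sketch of the data source approach for the Spanner case, assuming the google provider's google_spanner_instance data source and illustrative names:

# Refer to the permanent, externally managed instance without owning it.
data "google_spanner_instance" "main" {
  name = "permanent-instance"
}

# The database is fully managed by Terraform and dropped on destroy,
# while the instance above is never touched.
resource "google_spanner_database" "app" {
  instance            = data.google_spanner_instance.main.name
  name                = "app-db"
  deletion_protection = false # allow terraform destroy to drop the database
}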
I'm creating an AMI from the instances on which I'm deploying code and using that AMI in my launch template. When I create an AMI from another instance, it destroys the previous AMI. This locks me in: if something goes wrong, I can't roll back to my previous AMI version. Is there any way in Terraform to create a new AMI while keeping the older AMIs?
This "state_only_destroy = true" flag would be the dogs bollocks for us. If you do frequent destroy and rebuild cycles you can preserve one or two resources and then actually pick them up again using an import{ } block. There's not only pet things like databases but sometimes you have infrastructure that simply wont go away. I can think of a few resources on GCP, but also I am working with Vault and a vault auth backend - it can only be emptied out on our setup, curiously it cant be deleted at all. So when you absolutely cannot delete something but you need to destroy everything else then this state_only_destroy = true idea is a winner as far as I'm concerned. |
Adding my 2 cents on this: our use case is the rotation of passwords/certificates/etc. We are using azuread_application_password to create a client ID and secret, but we want to rotate them before they expire. Rotating via azuread_application_password deletes the existing secret, which kicks out everyone using it. If we could just append the new secret to the application, we could set an expiry of 2x the rotation period, allowing applications to cycle through their lifecycle and grab the new secret from our Vault instance without disrupting the applications running at the time of secret creation.
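One hedged workaround sketch, assuming the azuread and time providers (argument names vary between provider versions), with all names and periods illustrative: keep two password resources on rotation clocks offset by half a period. Terraform still deletes each secret when its clock rotates, but the other secret remains valid, so clients are never locked out.

# Two rotation clocks, offset by 90 days via their base timestamps.
resource "time_rotating" "a" {
  rfc3339       = "2024-01-01T00:00:00Z"
  rotation_days = 180
}

resource "time_rotating" "b" {
  rfc3339       = "2024-04-01T00:00:00Z"
  rotation_days = 180
}

# Each secret lives ~2x the rotation interval, so when one is replaced
# the other is still within its validity window.
resource "azuread_application_password" "a" {
  application_object_id = azuread_application.example.object_id
  end_date_relative     = "8640h" # ~360 days

  rotate_when_changed = {
    rotation = time_rotating.a.id
  }
}

resource "azuread_application_password" "b" {
  application_object_id = azuread_application.example.object_id
  end_date_relative     = "8640h"

  rotate_when_changed = {
    rotation = time_rotating.b.id
  }
}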
Can Terraform be configured to create a new resource, but not delete the existing resource, when it sees a change? For example, with AWS Step Functions one can only create or delete a state machine, not modify it.
I want Terraform to create a new state machine each time it sees a change, but not delete the old one, as it might contain states.