State file is not saved locally when uploading to remote backend fails #14298
Comments
I can confirm that all Terraform 0.9.x versions are impacted. Terraform 0.8 allowed us to simply run a second apply with new credentials to resume operations, since the local, updated copy of the state was synced to remote storage at the start of the second run.
In the old remote state system we had the idea of a local backup, which is actually still present for the legacy backends but no longer applies for the new-style backends like the s3 backend. It's problematic when an apply runs for long enough that someone's time-limited AWS STS credentials expire and then Terraform fails and can't persist state to S3.

To reduce the risk of lost state, here we add some extra fallback code for the local apply operation in particular. If either state writing or state persisting fails, we attempt to write the state to a special backup file errored.tfstate and produce an error message that guides the user on how to retry uploading this state. In the unlikely event that we can't write to local disk either (e.g. permissions problems), we take a last-ditch attempt to dump the JSON onto stdout and advise the user to manually copy it into a file for import. If even that doesn't work for some reason, we assume a critical Terraform bug (JSON-serialization problem with states?) and bail out with an apologetic error message.

This is implemented for the apply command in particular because this is the one command where new objects are created in real APIs that we don't want to lose track of. For other operations it's less bad to just generate a simple error message and have the user retry. This fixes #14298.
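As a rough illustration of the retry the error message guides the user toward, here is a hedged sketch. It assumes the failed apply left errored.tfstate in the working directory and that the terraform state push / terraform state list subcommands are available (as they are from 0.9 onward); it is not the exact guidance text produced by the fix.

```sh
# Hedged recovery sketch: assumes errored.tfstate was written by the failed
# apply and that backend credentials (e.g. AWS STS) have since been refreshed.
terraform state push errored.tfstate   # re-upload the saved state to the configured backend
terraform state list                   # sanity check: the backend now serves the pushed state
```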
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Terraform does not save a state file locally when uploading to the remote backend fails. In my case I was creating 104 resources on AWS. After about an hour of work, just as the run completed, Terraform failed to upload the resulting state file to S3 with a 403 error on the bucket (for no apparent reason; I did not have access problems). Crucially, it left me without a state file at all, not even a local copy. So basically, I ended up with a ton of orphaned AWS resources.
Terraform Version
v0.9.3, v0.9.4
Affected Resource(s)
Uploading the state file to the S3 remote backend
Expected Behavior
Per https://www.terraform.io/docs/backends/state.html#state-storage, if persisting state to the backend fails, Terraform should keep a local copy of the state to prevent data loss.
Actual Behavior
Data loss: you are left with orphaned resources :(
Steps to Reproduce
Here my-bucket should be a real bucket. Now run an apply against it (a possible configuration is sketched below); while the provisioner is sleeping you have about 30 seconds to quickly edit your /etc/hosts and add a line like

1.2.3.4 my-bucket.s3-us-west-2.amazonaws.com

to repoint DNS at something fake so that the upload to the remote backend fails. No matter what the error is (it could be 403, 404 or, in my case, certificate problems), Terraform only saves the current state to terraform.tfstate.backup locally, which is the state from before resource creation. Once the resources are created and the upload fails, nothing is saved to either the current folder or to .terraform/, and you get data loss.

Thanks, please let me know if you need any further information.
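The original command and configuration are not captured in the text above; the following is a minimal, hypothetical sketch of one way such a reproduction could look with an S3 backend. The bucket name, key, region and the sleeping null_resource are assumptions, not the reporter's actual setup.

```sh
# Hypothetical reproduction sketch; bucket, key and region are placeholders.
cat > main.tf <<'EOF'
terraform {
  backend "s3" {
    bucket = "my-bucket"
    key    = "repro/terraform.tfstate"
    region = "us-west-2"
  }
}

# Something slow enough to leave time to break S3 DNS mid-apply.
resource "null_resource" "slow" {
  provisioner "local-exec" {
    command = "sleep 30"
  }
}
EOF

terraform init   # configures the S3 backend
terraform apply  # while the provisioner sleeps, add the fake /etc/hosts entry
                 # (1.2.3.4 my-bucket.s3-us-west-2.amazonaws.com) so the final
                 # state upload to the bucket fails
```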