
Failed destroy causes unresolved resource references in output variables #1172

Closed
willmcg opened this issue Mar 10, 2015 · 5 comments

willmcg commented Mar 10, 2015

I'm encountering a situation where a failed destroy leaves the Terraform state in a bad way: output variables that reference destroyed resources cause a re-run of the destroy to fail with complaints that they now reference non-existent resources.

The destroy failure itself, and the resulting need to run destroy multiple times, is annoying (due, I believe, to the subnet destroy failing while autoscaled instances are still terminating in the subnets), but that is not the subject of this particular issue. The issue here is that the usual workaround of running the destroy twice cannot be used because the state is invalid.

Specifically, in this case I have an output variable referencing an ELB resource's DNS name:

output "elb-front-dns" {
    value = "${aws_elb.front.dns_name}"
}

I run a destroy operation that fails because of the aforementioned autoscaling/subnet problem:

$ terraform destroy --force meta/terraform/test/
aws_vpc.vpc: Refreshing state... (ID: vpc-c90abcac)
aws_route_table.private.2: Refreshing state... (ID: rtb-11f24c74)
.
.
aws_elb.front: Destruction complete
aws_security_group.nat: Destruction complete
aws_security_group.compute: Destroying...
aws_security_group.compute: Destruction complete
aws_subnet.public.2: Destroying...
aws_subnet.public.1: Destroying...
aws_subnet.public.0: Destroying...
aws_security_group.elb-front: Destroying...
aws_subnet.public.1: Error: Error deleting subnet: The subnet 'subnet-f458f483' has dependencies and cannot be deleted. (DependencyViolation)
aws_subnet.public.0: Error: Error deleting subnet: The subnet 'subnet-3078f655' has dependencies and cannot be deleted. (DependencyViolation)
aws_subnet.public.2: Error: Error deleting subnet: The subnet 'subnet-fd4587a4' has dependencies and cannot be deleted. (DependencyViolation)
aws_security_group.elb-front: Destruction complete
aws_subnet.private.2: Destroying...
aws_subnet.private.1: Destroying...
aws_subnet.private.0: Destroying...
aws_subnet.private.0: Destruction complete
aws_subnet.private.2: Destruction complete
aws_subnet.private.1: Destruction complete
Error applying plan:

1 error(s) occurred:

* Error deleting subnet: The subnet 'subnet-f458f483' has dependencies and cannot be deleted. (DependencyViolation)

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Now, re-running the destroy operation fails as follows:

$ terraform destroy --force meta/terraform/test/
aws_vpc.vpc: Refreshing state... (ID: vpc-c90abcac)
aws_internet_gateway.igw: Refreshing state... (ID: igw-46e53723)
aws_subnet.public.2: Refreshing state... (ID: subnet-fd4587a4)
aws_subnet.public.1: Refreshing state... (ID: subnet-f458f483)
aws_subnet.public.0: Refreshing state... (ID: subnet-3078f655)
Error creating plan: Resource 'aws_elb.front' not found for variable 'aws_elb.front.dns_name'

The only way to work around this problem is to comment out all the problematic output variables in the templates, as shown below; destroy then works.
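For reference, the workaround is just the output block from earlier with every line commented out:

# output "elb-front-dns" {
#     value = "${aws_elb.front.dns_name}"
# }

With the outputs commented out, the destroy completes: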

$ terraform destroy --force meta/terraform/test/
aws_vpc.vpc: Refreshing state... (ID: vpc-c90abcac)
aws_internet_gateway.igw: Refreshing state... (ID: igw-46e53723)
aws_subnet.public.2: Refreshing state... (ID: subnet-fd4587a4)
aws_subnet.public.1: Refreshing state... (ID: subnet-f458f483)
aws_subnet.public.0: Refreshing state... (ID: subnet-3078f655)
aws_subnet.public.2: Destroying...
aws_subnet.public.1: Destroying...
aws_subnet.public.0: Destroying...
aws_subnet.public.2: Destruction complete
aws_subnet.public.0: Destruction complete
aws_subnet.public.1: Destruction complete
aws_internet_gateway.igw: Destroying...
aws_internet_gateway.igw: Destruction complete
aws_vpc.vpc: Destroying...
aws_vpc.vpc: Destruction complete

Apply complete! Resources: 0 added, 0 changed, 5 destroyed.
radeksimko (Member) commented

This is a known issue which was recently fixed in #522.

I'd encourage you either to build from current master and report back whether it fixes the problem for you, or to wait until the next release (most likely this month).
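For reference, building from master looks roughly like this (assuming a working Go toolchain; the repo README documents the exact steps, including GOPATH placement):

$ git clone https://github.com/hashicorp/terraform.git
$ cd terraform
$ make dev                 # builds a development binary into ./bin
$ ./bin/terraform version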


willmcg commented Mar 11, 2015

Ahhh... that did not turn up in my issue search. I will try running from master and close out this issue if it is resolved there. Thanks!


willmcg commented Mar 11, 2015

OK... that seems to work now. Looks like it was a duplicate of #522.

However, this has now surfaced another issue in master with apply, where Terraform seems to be bumping up against the AWS API request throttling limit and crapping out. I did not see this in 0.3.7, but I also added a bunch of network ACLs, which probably ramped up the number of API requests:

$ terraform apply meta/terraform/test/
aws_vpc.vpc: Creating...
  cidr_block:                "" => "10.0.0.0/16"
  default_network_acl_id:    "" => "<computed>"
  default_security_group_id: "" => "<computed>"
  enable_dns_hostnames:      "" => "1"
  enable_dns_support:        "" => "1"
  main_route_table_id:       "" => "<computed>"
  tags.#:                    "" => "2"
  tags.Deployment:           "" => "blah"
  tags.Name:                 "" => "vpc"
aws_vpc.vpc: Creation complete
aws_internet_gateway.igw: Creating...
  tags.#:          "0" => "2"
  tags.Deployment: "" => "blah"
.
.
.
aws_elb.front: Creation complete
aws_security_group.compute: Error: 1 error(s) occurred:

* Request limit exceeded.
aws_network_acl.public.2: Creation complete
aws_network_acl.public.1: Error: 1 error(s) occurred:

*
aws_network_acl.public.0: Error: 1 error(s) occurred:

*
Error applying plan:

4 error(s) occurred:

* 1 error(s) occurred:

* 1 error(s) occurred:

* Request limit exceeded.
* 1 error(s) occurred:

* Resource 'aws_launch_configuration.compute' not found for variable 'aws_launch_configuration.compute.name'
* 1 error(s) occurred:

* Resource 'aws_security_group.nat' not found for variable 'aws_security_group.nat.id'
* 2 error(s) occurred:

* 1 error(s) occurred:

*
* 1 error(s) occurred:

*

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Re-running the apply finishes the deployment just fine, so it looks like Terraform is not gracefully backing off its requests when the AWS API starts rate-limiting calls.
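In the meantime, since a second apply converges on its own, a crude shell retry loop works as a stopgap (this assumes, as observed above, that the only failures are the throttling errors; the 30-second pause is an arbitrary choice):

until terraform apply meta/terraform/test/; do
    echo "apply failed (likely rate limiting); retrying in 30s..." >&2
    sleep 30
done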

I'm going to look around and see if there are any existing issues that match this before creating a new issue for this problem.

willmcg closed this as completed Mar 11, 2015

willmcg commented Mar 11, 2015

This secondary API rate limiting looks like issue #1051.

Added comments there.


ghost commented May 4, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators May 4, 2020