400 response when destroying aws_rds_cluster not respected #11378

Closed
WillSewell opened this issue Jan 24, 2017 · 7 comments · Fixed by #11795
Labels: bug, provider/aws, waiting-response

Comments

@WillSewell

Terraform Version

0.8.4

Affected Resource(s)

aws_rds_cluster

Terraform Configuration Files

resource "aws_rds_cluster" "feeds" {
  cluster_identifier     = "feeds-cluster"
  availability_zones     = ["us-east-1c","us-east-1d"]
  database_name          = "feeds"
  master_username        = "${var.mysql_username}"
  master_password        = "${var.mysql_password}"
  vpc_security_group_ids = ["${aws_security_group.feeds_db.id}"]
  apply_immediately      = true
  db_subnet_group_name   = "${aws_db_subnet_group.feeds_db.id}"
}

resource "aws_rds_cluster_instance" "feeds_0" {
  identifier           = "feeds-0"
  cluster_identifier   = "${aws_rds_cluster.feeds.id}"
  instance_class       = "db.t2.medium"
  publicly_accessible  = true
  db_subnet_group_name = "${aws_db_subnet_group.feeds_db.id}"

  tags {
      Name = "feeds-0"
  }
}

resource "aws_rds_cluster_instance" "feeds_1" {
  identifier           = "feeds-1"
  cluster_identifier   = "${aws_rds_cluster.feeds.id}"
  instance_class       = "db.t2.medium"
  publicly_accessible  = true
  db_subnet_group_name = "${aws_db_subnet_group.feeds_db.id}"

  tags {
      Name = "feeds-1"
  }
}

Debug Output

https://gist.github.com/WillSewell/7a6cc8e26e24bb1a49e2ecac47ca7ca3

Expected Behavior

Terraform received this error from AWS:

2017/01/24 11:19:46 [DEBUG] plugin: terraform: aws-provider (internal) 2017/01/24 11:19:46 [DEBUG] [aws-sdk-go] DEBUG: Response rds/DeleteDBCluster Details:
2017/01/24 11:19:46 [DEBUG] plugin: terraform: ---[ RESPONSE ]--------------------------------------
2017/01/24 11:19:46 [DEBUG] plugin: terraform: HTTP/1.1 400 Bad Request
2017/01/24 11:19:46 [DEBUG] plugin: terraform: Connection: close
2017/01/24 11:19:46 [DEBUG] plugin: terraform: Content-Length: 337
2017/01/24 11:19:46 [DEBUG] plugin: terraform: Content-Type: text/xml
2017/01/24 11:19:46 [DEBUG] plugin: terraform: Date: Tue, 24 Jan 2017 11:19:45 GMT
2017/01/24 11:19:46 [DEBUG] plugin: terraform: X-Amzn-Requestid: 030415e1-e227-11e6-9fa4-1f1d842a38f5
2017/01/24 11:19:46 [DEBUG] plugin: terraform:
2017/01/24 11:19:46 [DEBUG] plugin: terraform: <ErrorResponse xmlns="http://rds.amazonaws.com/doc/2014-10-31/">
2017/01/24 11:19:46 [DEBUG] plugin: terraform:   <Error>
2017/01/24 11:19:46 [DEBUG] plugin: terraform:     <Type>Sender</Type>
2017/01/24 11:19:46 [DEBUG] plugin: terraform:     <Code>InvalidDBClusterStateFault</Code>
2017/01/24 11:19:46 [DEBUG] plugin: terraform:     <Message>Cluster cannot be deleted, it still contains DB instances in non-deleting state.</Message>
2017/01/24 11:19:46 [DEBUG] plugin: terraform:   </Error>
2017/01/24 11:19:46 [DEBUG] plugin: terraform:   <RequestId>030415e1-e227-11e6-9fa4-1f1d842a38f5</RequestId>
2017/01/24 11:19:46 [DEBUG] plugin: terraform: </ErrorResponse>

Terraform should have reported this error to the user and stopped running.

Actual Behavior

Instead, Terraform keeps retrying in a loop, waiting for the resource to reach the expected state, until the operation eventually times out.
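
For illustration, here is a minimal Go sketch of the kind of retry loop that can produce this behaviour. It is not the actual provider code: deleteClusterNaive is a hypothetical helper, and the point is only that if every failure from DeleteDBCluster is treated as transient, a permanent 400 such as InvalidDBClusterStateFault is silently retried until the timeout instead of being surfaced.

// Hypothetical sketch, not the actual AWS provider code: every failure
// from DeleteDBCluster is retried blindly, so a permanent 400 such as
// InvalidDBClusterStateFault keeps being retried until the timeout.
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rds"
)

func deleteClusterNaive(conn *rds.RDS, id string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		_, err := conn.DeleteDBCluster(&rds.DeleteDBClusterInput{
			DBClusterIdentifier: aws.String(id),
			SkipFinalSnapshot:   aws.Bool(true),
		})
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out deleting DB cluster %s: %s", id, err)
		}
		// The error code is never inspected, so a permanent
		// InvalidDBClusterStateFault looks just like a transient failure.
		time.Sleep(30 * time.Second)
	}
}

func main() {
	sess := session.Must(session.NewSession())
	if err := deleteClusterNaive(rds.New(sess), "feeds-cluster", 5*time.Minute); err != nil {
		log.Fatal(err)
	}
}

The missing piece is any inspection of the AWS error code; a check there is what would let Terraform fail fast on a non-transient error.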

Steps to Reproduce

  1. terraform apply .
  2. terraform destroy --target=aws_rds_cluster.feeds
@grubernaut (Contributor)

Hi @WillSewell, thanks for the issue!

I definitely believe you're seeing a legitimate issue, but I cannot reproduce it on 0.8.5 by just specifying the destroy target of aws_rds_cluster.feeds. Is there another way to reproduce the issue you're seeing?

@grubernaut (Contributor)

Attempted to reproduce with 0.8.4 as well, but no such luck: terraform destroy removes the aws_rds_cluster_instance resources before destroying the aws_rds_cluster resource, as expected.

grubernaut added the waiting-response label on Feb 3, 2017
@WillSewell (Author)

I'm no longer able to reproduce this either, and I can't remember exactly what I was doing at the time, so I suspect the "Steps to Reproduce" may be incorrect. Perhaps I had messed up my tfstate or manually fiddled with the AWS resources.

That said, even if the underlying cause was my own mistake, I still think it is wrong that Terraform ignores the 400 response from AWS and keeps attempting to destroy the resource in an infinite loop.

@grubernaut (Contributor)

@WillSewell, it's odd that I can't reproduce it, but I fully believe that error should be caught. I have a WIP changeset that catches the 400 as well; I'll keep working on a reproduction case so the change can be fully tested.
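
For anyone following along, here is a hedged sketch of what catching the 400 could look like, using aws-sdk-go's awserr package and Terraform's helper/resource retry helpers. deleteClusterWithCheck is a hypothetical helper and this is not necessarily the approach taken in the eventual changeset; it only shows the mechanics of inspecting the error code and returning a non-retryable error so the failure is reported instead of retried.

// Hedged sketch of catching the 400; deleteClusterWithCheck is a
// hypothetical helper, not the code in the actual fix.
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rds"
	"github.com/hashicorp/terraform/helper/resource"
)

func deleteClusterWithCheck(conn *rds.RDS, id string) error {
	return resource.Retry(5*time.Minute, func() *resource.RetryError {
		_, err := conn.DeleteDBCluster(&rds.DeleteDBClusterInput{
			DBClusterIdentifier: aws.String(id),
			SkipFinalSnapshot:   aws.Bool(true),
		})
		if err == nil {
			return nil
		}
		if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "InvalidDBClusterStateFault" {
			// Permanent condition (e.g. instances still attached):
			// surface it to the user instead of looping until timeout.
			return resource.NonRetryableError(fmt.Errorf(
				"RDS cluster %s cannot be deleted: %s", id, awsErr.Message()))
		}
		return resource.RetryableError(err)
	})
}

func main() {
	sess := session.Must(session.NewSession())
	if err := deleteClusterWithCheck(rds.New(sess), "feeds-cluster"); err != nil {
		log.Fatal(err)
	}
}

Whether that error should always be fatal is a separate design question (a destroy that is still tearing down instances might legitimately want to retry it for a while); the sketch only shows where such a decision would live.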

@grubernaut (Contributor)

Reproduced! Should have a fix in place soon. Thanks for your patience on this @WillSewell.

@WillSewell (Author)

Awesome. Good job.

@ghost commented Apr 17, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited the conversation to collaborators on Apr 17, 2020