aws rds cluster is not recreated when snapshot identifier is updated #15563
Comments
The only workaround I have so far is manually deleting the cluster and then running `terraform apply`; it then creates the cluster and restores it from the snapshot.
I have noticed this bug too. Looking at https://github.com/hashicorp/terraform-provider-aws/blob/main/aws/resource_aws_rds_cluster.go#L346, it seems to me that we should update it to include
Also running into this issue. Updating the
I really hope this is the intended behavior, i.e. to not re-create the cluster on
At every apply the cluster is destroyed and recreated. It is a nightmare.
There's an easy fix for the above:
With the current behavior, however, it's not possible to update a cluster from a new snapshot without either destroying the cluster first, or tainting the resource. As a data point, AWS CloudFormation's behavior is to recreate a cluster if the value of

I fear this issue has become largely academic, however, as the provider has been functioning this way for a significant period of time, and a breaking change which could result in the loss of DB clusters seems a bit too risky to implement at this point. With that being the case, perhaps updating

What could be quite useful is some mechanism (perhaps a meta-argument) for forcing recreation for a change on any specified argument(s). Something along these lines:
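Terraform has since shipped something close to the meta-argument sketched above: the `replace_triggered_by` lifecycle argument (Terraform >= 1.2), which can be paired with a `terraform_data` resource (Terraform >= 1.4) to force cluster replacement whenever an input such as the snapshot name changes. A sketch, with variable and resource names assumed:

```hcl
# Sketch only; names are assumptions. Requires Terraform >= 1.4
# for the built-in terraform_data resource.
variable "snapshot_id" {
  type = string
}

# terraform_data is replaced whenever triggers_replace changes...
resource "terraform_data" "snapshot_trigger" {
  triggers_replace = var.snapshot_id
}

resource "aws_rds_cluster" "example" {
  cluster_identifier  = "example"
  engine              = "aurora-mysql"
  snapshot_identifier = var.snapshot_id

  lifecycle {
    # ...which in turn forces this cluster to be replaced.
    replace_triggered_by = [terraform_data.snapshot_trigger]
  }
}
```

This does not change the provider's behavior for `snapshot_identifier` itself; it only gives the practitioner an explicit opt-in to the recreate-on-change semantics discussed in this thread.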
An alternative could be to read another immutable argument "through" a
@bgshacklett Another fun bug I ran into with this: when I taint the resource, that doesn't work either. It tries to create the replacement before it destroys the old one (which is odd, because I don't have `create_before_destroy` defined for the resource).
@bgshacklett Agreed that it's very misleading to see a "diff" that will result in no changes. The diff, IMHO, should be nil; in a `terraform plan` it is very misleading output.
This caused our company a serious headache in an emergency situation, as we thought updating the snapshot would allow us to roll back safely. It did not. Is this going to be updated?
I understand that this behavior was present for so long that change is probably not possible. But what is an official workaround? Destroy the cluster manually before applying the plan?
Recreating the cluster is the intended behaviour for restoring RDS snapshots. I don't see how this "is not able to be changed"; the current behaviour is plain broken. If you change the snapshot, you want it to change the data in the DB. The plan now says it will, but it doesn't.
If the functionality is not updated/fixed, could the docs warn about this? It goes against what one would reasonably expect to happen in the Terraform model.
This functionality has been released in v5.0.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
I'm going to lock this issue because it has been closed for 30 days. This helps our maintainers find and focus on the active issues.
When the `snapshot_identifier` for an `aws_rds_cluster` resource is updated, the resource is not recreated.
Expected Behavior
A new cluster is created from the snapshot.
Actual Behavior
`terraform apply` does not replace the cluster.
Steps to Reproduce
Terraform v0.13.3
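The report doesn't include a configuration, but a minimal reproduction might look like the following (cluster name, engine, and snapshot names are assumptions). Apply once, then change `snapshot_identifier` to a different snapshot and re-apply: the provider plans an in-place update rather than the expected replacement.

```hcl
# Minimal repro sketch; identifiers below are assumed, not from the report.
resource "aws_rds_cluster" "restored" {
  cluster_identifier  = "restored-cluster"
  engine              = "aurora-mysql"
  snapshot_identifier = "my-snapshot-1" # later change to "my-snapshot-2" and re-apply
}
```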