
aws rds cluster is not recreated when snapshot identifier is updated #15563

Closed
mbig opened this issue Oct 8, 2020 · 14 comments · Fixed by #29409
Assignees
Labels
bug Addresses a defect in current functionality. service/rds Issues and PRs that pertain to the rds service.
Milestone

Comments


mbig commented Oct 8, 2020

When the snapshot identifier for an aws_rds_cluster resource is updated, the resource is not recreated.

Expected Behavior

New cluster created with the snapshot.

Actual Behavior

Terraform apply does not replace the cluster.

Steps to Reproduce

  • Create an aws_rds_cluster resource
  • Add a snapshot identifier
  • Run terraform apply
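
A minimal configuration that reproduces the steps above might look like this (the engine, identifiers, and snapshot name are illustrative placeholders, not from the original report):

    resource "aws_rds_cluster" "example" {
      cluster_identifier  = "example-cluster"
      engine              = "aurora-mysql"
      master_username     = "foo"
      master_password     = "barbarbar"
      skip_final_snapshot = true

      # Changing this value on a later apply shows an in-place
      # update in the plan, but the cluster is not replaced and
      # its data is not restored from the new snapshot.
      snapshot_identifier = "example-cluster-snapshot-2020-10-08"
    }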

Terraform v0.13.3

@github-actions github-actions bot added the needs-triage Waiting for first response or review from a maintainer. label Oct 8, 2020
@ewbankkit ewbankkit added the service/rds Issues and PRs that pertain to the rds service. label Oct 13, 2020

mbig commented Oct 13, 2020

The only workaround I have so far is manually deleting the cluster and then running terraform apply; that creates the cluster and restores it from the snapshot.

@bflad bflad added bug Addresses a defect in current functionality. and removed needs-triage Waiting for first response or review from a maintainer. labels Oct 30, 2020
@bflad bflad self-assigned this Oct 30, 2020

kurtmc commented Feb 2, 2021

I have noticed this bug too. Looking at https://github.com/hashicorp/terraform-provider-aws/blob/main/aws/resource_aws_rds_cluster.go#L346, it seems to me that we should update the snapshot_identifier schema to include ForceNew: true?

@rajaie-sg

Also running into this issue. Updating the snapshot_identifier does not force a re-creation of the aws_rds_cluster resource.


gdecicco commented Oct 1, 2021

I really hope this is the intended behavior, i.e. not re-creating the cluster on a snapshot_identifier update.
Imagine a production cluster that is created from:


    data "aws_db_cluster_snapshot" "snapshot" {
      db_cluster_identifier = "<my-id>"
      most_recent           = true
    }

    resource "aws_rds_cluster" "default" {
      cluster_identifier  = "<my-id>"
      database_name       = "mydb"
      master_username     = "foo"
      master_password     = "bar"
      snapshot_identifier = data.aws_db_cluster_snapshot.snapshot.id
    }

If snapshot_identifier forced replacement, the cluster would be destroyed and recreated at every apply. It would be a nightmare.

@bgshacklett

There's an easy fix for the above:

    lifecycle {
        ignore_changes = [snapshot_identifier]
    }

With the current behavior, however, it's not possible to update a cluster from a new snapshot without either destroying the cluster first, or tainting the resource. As a data point, AWS CloudFormation's behavior is to recreate a cluster if the value of SnapshotIdentifier changes: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-dbcluster.html#cfn-rds-dbcluster-snapshotidentifier

I fear this issue has become largely academic, however: the provider has been functioning this way for a significant period of time, and a breaking change which could result in the loss of DB clusters seems too risky to implement at this point. That being the case, perhaps updating snapshot_identifier after the cluster's initial creation should result in a warning.

What could be quite useful is some mechanism (perhaps a meta-argument) for forcing recreation for a change on any specified argument(s). Something along these lines:

    lifecycle {
        force_recreation = [snapshot_identifier]
    }

An alternative could be to read another immutable argument "through" a random_id resource, with snapshot_identifier as a keeper.
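
The keeper-based alternative above could be sketched roughly like this, using the random_id resource from the hashicorp/random provider (variable and resource names here are illustrative, not from the thread):

    resource "random_id" "snapshot_trigger" {
      byte_length = 4

      # A change to the snapshot identifier forces a new random_id,
      # which in turn changes any immutable argument that references it.
      keepers = {
        snapshot_identifier = var.snapshot_identifier
      }
    }

    resource "aws_rds_cluster" "default" {
      # cluster_identifier is immutable, so embedding the random_id
      # output here forces cluster replacement whenever the snapshot
      # identifier changes.
      cluster_identifier  = "mydb-${random_id.snapshot_trigger.hex}"
      snapshot_identifier = var.snapshot_identifier
      master_username     = "foo"
      master_password     = "bar"
      skip_final_snapshot = true
    }

The trade-off is that the cluster identifier itself changes on every restore, which may matter for monitoring or connection strings.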

@charles-d-burton

@bgshacklett Another fun bug I ran into with this: when I taint the resource, that doesn't work either. It tries to create the resource before it destroys it (which is odd, because I don't have create_before_destroy defined for the resource).


klovelyPhraseHealth commented Jul 5, 2022

@bgshacklett Agree that it's very misleading to see a "diff" that will result in no changes; the diff IMHO should be nil.

In a terraform plan it is very misleading output.

@yamikuronue

This caused our company a serious headache in an emergency situation as we thought updating the snapshot would allow us to roll back safely. It did not. Is this going to be updated?


nepto commented Feb 11, 2023

I understand that this behavior has been present for so long that a change is probably not possible.

But what is the official workaround? Destroying the cluster manually before applying the plan?
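
One workaround that avoids deleting the cluster outside Terraform is to force replacement through the CLI (sketch only; -replace is available in newer Terraform versions, and note the earlier report in this thread that tainting ran into create-before-destroy ordering):

    # Newer Terraform: plan and apply a replacement of just this resource
    terraform apply -replace="aws_rds_cluster.default"

    # Older Terraform: mark the resource as tainted, then apply
    terraform taint aws_rds_cluster.default
    terraform apply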


autarchprinceps commented Mar 8, 2023

Recreating the cluster is the intended behaviour for restoring RDS snapshots. I don't see how this "is not able to be changed"; the current behaviour is plain broken. If you change the snapshot, you want the data in the DB to change, which Terraform now claims to do but doesn't.
Deleting the cluster manually also creates significantly more downtime than creating the new cluster and then cleaning up the old one from Terraform. And that assumes you even know this behaviour is a thing: since the apply runs through successfully, you first have to debug why the "modified DB" isn't working, potentially trying multiple snapshots, until you find this thread with a lot of googling.

@evanstoddard23

If the functionality is not updated/fixed, could the docs warn about this? It goes against what one would reasonably expect to happen in the Terraform model.

@jar-b jar-b added this to the v5.0.0 milestone May 1, 2023
@jar-b jar-b self-assigned this May 23, 2023

jar-b commented May 23, 2023

Closed by #29409, merged to main via #31392

@jar-b jar-b closed this as completed May 23, 2023
@github-actions

This functionality has been released in v5.0.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jun 25, 2023