
CloudFront KeyGroup resource update to eTag on associated CloudFront Distribution not captured in state #24033

Closed
frankloye opened this issue Apr 5, 2022 · 6 comments · Fixed by #24537
Labels
bug Addresses a defect in current functionality. service/cloudfront Issues and PRs that pertain to the cloudfront service.
Comments

frankloye commented Apr 5, 2022

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Affected Resource(s)

  • aws_cloudfront_distribution
  • aws_cloudfront_public_key
  • aws_cloudfront_key_group

Terraform Configuration Files

terraform {
  backend "azurerm" {}
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.53.0"
    }
  }
}

resource "aws_cloudfront_public_key" "key" {
  encoded_key = file(var.encoded_key)
  name_prefix = "key-${var.xxx}-${var.yyy}-${var.zzz}-"
  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_cloudfront_key_group" "key_group" {
  items = [aws_cloudfront_public_key.key.id]
  name  = "key-group-${local.xxx}-${var.yyy}-${var.zzz}"
}

Expected Behavior

Updating the CloudFront KeyGroup resource should not, by itself, modify the CloudFront Distribution. Only the CloudFront Distribution resource should apply the new KeyGroup values to the distribution and record the resulting eTag in state.

Actual Behavior

Every time the CloudFront KeyGroup value changes, the CloudFront KeyGroup resource updates the associated CloudFront Distribution with the new PublicKey and KeyGroup values, which changes the eTag on the CloudFront Distribution. The new eTag is not captured in the state of the associated CloudFront Distribution, so the next update to the distribution fails because the eTag value in state no longer matches the eTag on the CloudFront Distribution:

Error: error updating CloudFront Distribution (XXXXX): PreconditionFailed: The request failed because it didn't meet the preconditions in one or more request-header fields.
status code: 412

Steps to Reproduce

  1. terraform apply
@github-actions github-actions bot added needs-triage Waiting for first response or review from a maintainer. service/cloudfront Issues and PRs that pertain to the cloudfront service. labels Apr 5, 2022
@justinretzolk justinretzolk added bug Addresses a defect in current functionality. and removed needs-triage Waiting for first response or review from a maintainer. labels Apr 14, 2022
enzolupia commented Apr 22, 2022

Hi, I hit the same issue when updating an aws_cloudfront_cache_policy or an aws_cloudfront_response_headers_policy linked to a CloudFront distribution.
During an update, the aws_cloudfront_cache_policy and aws_cloudfront_response_headers_policy are updated first; then, when the CloudFront distribution itself is updated, it fails with:

Error: error updating CloudFront Distribution (XXXXX): PreconditionFailed: The request failed because it didn't meet the preconditions in one or more request-header fields.
status code: 412

@aholthagerty

Correct me if I'm wrong, but it appears the issue outlined by @enzolupia also affects the aws_cloudfront_response_headers_policy resource.

Is there a workaround for this? As it stands, these resources cannot be updated in subsequent terraform apply runs without hitting:

Error: error updating CloudFront Distribution (XXXXX): PreconditionFailed: The request failed because it didn't meet the preconditions in one or more request-header fields.
status code: 412

@aholthagerty

The best workaround I currently have is to let the first terraform apply fail with PreconditionFailed, then run terraform refresh, then run terraform apply again.
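The workaround above can be sketched as a short command sequence (the commands are as described in the comment; any flags beyond the bare commands are illustrative):

```shell
# Sketch of the apply/refresh/apply workaround described above.
terraform apply     # expected to fail with PreconditionFailed (HTTP 412)
terraform refresh   # re-reads the distribution's current eTag into state
terraform apply     # succeeds now that the eTag in state matches the live one
```

Note that terraform refresh only syncs state with the real infrastructure; it does not change the distribution itself, which is why the second apply can then send the correct If-Match eTag.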

@frankloye
Author

@aholthagerty that is the same workaround I am using

github-actions bot commented May 5, 2022

This functionality has been released in v4.13.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
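To pick up the fix, the provider constraint from the configuration above can be bumped; a minimal sketch, assuming an upgrade to the 4.x provider is acceptable:

```terraform
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # v4.13.0 is the first release containing the fix (#24537)
      version = ">= 4.13.0"
    }
  }
}
```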

github-actions bot commented Jun 5, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jun 5, 2022