
Terraform plan/refresh error caused by existing data source state after schema change #30823

Closed
pst opened this issue Apr 8, 2022 · 6 comments · Fixed by #30830
Labels: bug, core, confirmed (a Terraform Core team member has reproduced this issue)

Comments

pst commented Apr 8, 2022

SDK version

$ go list -m github.com/hashicorp/terraform-plugin-sdk/...
github.com/hashicorp/terraform-plugin-sdk/v2 v2.13.0

I validated the bug exists with v2.13.0. However, the steps to reproduce below use v2.12.0, because I don't yet have a release with the updated minor version available.

Relevant provider source code

The change to the data source's schema linked below is causing this. The change was required for the upgrade from SDK v1 to v2, since the schema that was valid with v1 is no longer valid with v2.

https://github.com/kbst/terraform-provider-kustomization/pull/107/files#diff-a909667eef2686d43e97d8494c1a015e69b05673f68d2ba8c8f58ef8dec28fa4R235-R463
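
For context, a minimal sketch of the kind of schema change involved, assuming the target argument moved from a map of strings (which SDK v1 tolerated) to a nested block (which SDK v2 requires for structured values). The variable names are hypothetical, the attribute names are taken from the log output further down, and the actual diff is in the PR linked above:

package kustomize // hypothetical placement; the real schema lives in the provider's data source file

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// Roughly the old shape: "target" declared as a plain map of strings (SDK v1 era).
var targetSchemaOld = &schema.Schema{
	Type:     schema.TypeMap,
	Optional: true,
	Elem:     &schema.Schema{Type: schema.TypeString},
}

// Roughly the new shape: "target" declared as a single nested block (SDK v2).
var targetSchemaNew = &schema.Schema{
	Type:     schema.TypeList,
	Optional: true,
	MaxItems: 1,
	Elem: &schema.Resource{
		Schema: map[string]*schema.Schema{
			"group":               {Type: schema.TypeString, Optional: true},
			"version":             {Type: schema.TypeString, Optional: true},
			"kind":                {Type: schema.TypeString, Optional: true},
			"name":                {Type: schema.TypeString, Optional: true},
			"namespace":           {Type: schema.TypeString, Optional: true},
			"label_selector":      {Type: schema.TypeString, Optional: true},
			"annotation_selector": {Type: schema.TypeString, Optional: true},
		},
	},
}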

Terraform Configuration Files

terraform {
  required_providers {
    kustomization = {
      source  = "kbst/kustomization"
      version = "0.7.2"
      #version = "0.8.0"
    }
  }
}

data "kustomization_overlay" "debug" {
  config_map_generator {
    name      = "debug"
    namespace = "default"
    literals = [
      "KEY=VALUE"
    ]
  }

  patches {
    patch = <<-EOF
    - op: replace
      path: /metadata/name
      value: debug-patched
    EOF

    target = {
      kind      = "ConfigMap"
      name      = "debug"
      namespace = "default"
    }
  }
}

resource "kustomization_resource" "debug" {
  for_each = data.kustomization_overlay.debug.ids

  manifest = data.kustomization_overlay.debug.manifests[each.value]
}

...

Debug Output

https://gist.github.com/pst/e1155d1d05b96c1ce18acde32c1de5e6

Expected Behavior

terraform plan succeeds.

Actual Behavior

$ terraform plan
╷
│ Error: .patches[0].target: missing expected [
│ 
│ 
╵

Steps to Reproduce

  1. export KUBECONFIG_PATH=~/.kube/config (any local cluster is fine: kind, minikube, k3d)
  2. terraform init
  3. terraform apply --auto-approve
  4. Change the configuration:
    1. the provider version constraint from 0.7.2 to 0.8.0, by commenting/uncommenting
    2. target to be a block, as required by the new schema (see the sketch after these steps)
  5. terraform init --upgrade
  6. terraform plan
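
The configuration change in step 4, sketched against the example above (the exact 0.8.0 block syntax is defined by the provider's new schema):

    # required_providers: swap the version constraint
    kustomization = {
      source  = "kbst/kustomization"
      #version = "0.7.2"
      version = "0.8.0"
    }

  # inside data "kustomization_overlay" "debug": target changes from a map
  # argument (0.7.2) to a nested block (0.8.0)
  patches {
    patch = <<-EOF
    - op: replace
      path: /metadata/name
      value: debug-patched
    EOF

    # 0.7.2:
    # target = {
    #   kind      = "ConfigMap"
    #   name      = "debug"
    #   namespace = "default"
    # }

    # 0.8.0:
    target {
      kind      = "ConfigMap"
      name      = "debug"
      namespace = "default"
    }
  }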

References

kbst/terraform-provider-kustomization#169

@pst pst added the bug label Apr 8, 2022
bflad commented Apr 8, 2022

Hi @pst 👋 Thank you for raising this and apologies you ran into trouble here.

Interestingly enough, this may be an issue either in the SDK or within Terraform CLI. Typically it should be expected that data sources are fully refreshed each run and therefore any prior state does not matter; however, there is currently logic that in certain scenarios triggers the prior state to actually be read. In that case, the current schema mismatches that prior state and can generate the error seen here.

Relevant log lines:

2022-04-08T10:36:42.137+0200 [TRACE] readResourceInstanceState: reading state for data.kustomization_overlay.debug
2022-04-08T10:36:42.137+0200 [WARN]  UpgradeResourceState: unexpected type cty.List(cty.Object(map[string]cty.Type{"annotation_selector":cty.String, "group":cty.String, "kind":cty.String, "label_selector":cty.String, "name":cty.String, "namespace":cty.String, "version":cty.String})) for map in json state
2022-04-08T10:36:42.138+0200 [ERROR] vertex "data.kustomization_overlay.debug" error: .patches[0].target: missing expected [
2022-04-08T10:36:42.138+0200 [TRACE] vertex "data.kustomization_overlay.debug": visit complete, with errors

One workaround, although unfortunately a breaking change for practitioner configurations, would be to drop the existing data source attribute and use a new one. In that case Terraform CLI should ignore the existing/old attribute state. Since it appears you may have been breaking existing configurations anyway, this might be an acceptable change for you.

Thanks again for reporting this issue.

bflad commented Apr 8, 2022

I'm actually going to transfer this over to the Terraform CLI repository, so it becomes visible for one of the maintainers there to triage this from their perspective. 👍

@bflad bflad transferred this issue from hashicorp/terraform-plugin-sdk Apr 8, 2022
@alisdair alisdair added the new (new issue not yet triaged) label and removed the upstream-terraform label Apr 8, 2022
@jbardin jbardin added the core and confirmed (a Terraform Core team member has reproduced this issue) labels and removed the new (new issue not yet triaged) label Apr 8, 2022
pst commented Apr 8, 2022

Thanks for looking into this so quickly. The provider release is already out and the issue was only discovered after the release. Renaming the attribute now would be a second breaking change for users, and the first one was already, from a regular user's perspective, unnecessary (it was forced by a change in the SDK). For users it would certainly be best to fix the bug's root cause.

Currently a workaround for users is to terraform state rm the data source; after that everything works as expected again.

But with single-item state rm plus the locking, unlocking and backups, this takes multiple seconds per data source. The more sizeable Kubernetes platforms amongst my users have hundreds of these data sources.
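
For reference, a batch variant along these lines (the grep pattern is only an assumption about the address prefix and needs adjusting) removes all matching data sources with a single terraform state rm invocation, so the state is locked, backed up and written once instead of per item:

$ terraform state list | grep '^data\.kustomization_overlay\.' | xargs terraform state rm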

Let's see what the Terraform core team says about this.

@crw crw added and removed the new (new issue not yet triaged) label Apr 8, 2022
jbardin commented Apr 8, 2022

Unfortunately I think the terraform state rm workaround is the only way to move forward once you've encountered this condition. Since the prior state in this case is really only informational, I think we'll be able to patch this up in a minor release.

The provider schema upgrade tests only covered the addition and removal of attributes, so this type of change was never verified. We can probably just ignore prior state in the case where the schema doesn't match as an interim solution, though I'm not sure if the lack of prior state in the plan output is going to surprise anyone or not.

pst commented Apr 15, 2022

Thanks for getting this fixed so quickly!

@github-actions
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 16, 2022