Terraform plan fails with "cannot be determined until apply" even if the depends_on dependency is known. #34391

Closed
boillodmanuel opened this issue Dec 11, 2023 · 5 comments
Labels
bug, new (new issue not yet triaged)

Comments

@boillodmanuel

Terraform Version

Terraform v1.7.0-beta1
on darwin_amd64
+ provider registry.terraform.io/alekc/kubectl v2.0.3

The same issue occurs with Terraform 1.6 and 1.5:

Terraform v1.6.5
on darwin_amd64
+ provider registry.terraform.io/alekc/kubectl v2.0.3


Terraform v1.5.7
on darwin_amd64
+ provider registry.terraform.io/alekc/kubectl v2.0.3

Terraform Configuration Files

🔗 Available on GitHub: https://github.com/boillodmanuel/terraform-depends-on-issue

  • main.tf:
#file: main.tf

terraform {
  required_version = ">= 1.0"
  backend "local" {
    path = "backend/terraform.tfstate"
  }
  required_providers {
    kubectl = {
      source  = "alekc/kubectl"
      version = "~> 2.0"
    }
  }
}
resource "terraform_data" "known_data" {
  input = "1"
}

module "docs" {
  source          = "./modules/docs"
  depends_on = [ terraform_data.known_data ]
}
  • modules/docs/main.tf:
#file: modules/docs/main.tf

terraform {
  required_providers {
    kubectl = {
      source  = "alekc/kubectl"
      version = "~> 2.0"
    }
  }
}

# Split YAML Manifest into 2 YAML Documents
data "kubectl_file_documents" "docs" {
  content = <<EOF
apiVersion: v1
kind: docs
metadata:
  name: doc1
---
apiVersion: v1
kind: docs
metadata:
  name: doc2
EOF
}

resource "kubectl_manifest" "docs" {
  count     = length(data.kubectl_file_documents.docs.documents)
  yaml_body = data.kubectl_file_documents.docs.documents[count.index]
}

Debug Output

https://github.com/boillodmanuel/terraform-depends-on-issue/blob/main/tf.plan.debug.output

Expected Behavior

The terraform plan command should succeed, since the module dependency is fully known.

The result should be the same as when the depends_on line is commented out:

module.docs.data.kubectl_file_documents.docs: Reading...
module.docs.data.kubectl_file_documents.docs: Read complete after 0s [id=fec7d70476e63a50bf0be1e07a5de930dcacde17a90a1280f4aac58c75b3a29b]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # terraform_data.known_data will be created
  + resource "terraform_data" "known_data" {
      + id     = (known after apply)
      + input  = "1"
      + output = (known after apply)
    }

  # module.docs.kubectl_manifest.docs[0] will be created
  + resource "kubectl_manifest" "docs" {
      + api_version             = "v1"
      + apply_only              = false
      + field_manager           = "kubectl"
      + force_conflicts         = false
      + force_new               = false
      + id                      = (known after apply)
      + kind                    = "docs"
      + live_manifest_incluster = (sensitive value)
      + live_uid                = (known after apply)
      + name                    = "doc1"
      + namespace               = (known after apply)
      + server_side_apply       = false
      + uid                     = (known after apply)
      + validate_schema         = true
      + wait_for_rollout        = true
      + yaml_body               = (sensitive value)
      + yaml_body_parsed        = <<-EOT
            apiVersion: v1
            kind: docs
            metadata:
              name: doc1
        EOT
      + yaml_incluster          = (sensitive value)
    }

  # module.docs.kubectl_manifest.docs[1] will be created
  + resource "kubectl_manifest" "docs" {
      + api_version             = "v1"
      + apply_only              = false
      + field_manager           = "kubectl"
      + force_conflicts         = false
      + force_new               = false
      + id                      = (known after apply)
      + kind                    = "docs"
      + live_manifest_incluster = (sensitive value)
      + live_uid                = (known after apply)
      + name                    = "doc2"
      + namespace               = (known after apply)
      + server_side_apply       = false
      + uid                     = (known after apply)
      + validate_schema         = true
      + wait_for_rollout        = true
      + yaml_body               = (sensitive value)
      + yaml_body_parsed        = <<-EOT
            apiVersion: v1
            kind: docs
            metadata:
              name: doc2
        EOT
      + yaml_incluster          = (sensitive value)
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Actual Behavior

The terraform plan command fails with an "Invalid count argument" error:

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform planned the following actions, but then encountered a problem:

  # terraform_data.known_data will be created
  + resource "terraform_data" "known_data" {
      + id     = (known after apply)
      + input  = "1"
      + output = (known after apply)
    }

  # module.docs.data.kubectl_file_documents.docs will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "kubectl_file_documents" "docs" {
      + content   = <<-EOT
            apiVersion: v1
            kind: docs
            metadata:
              name: doc1
            ---
            apiVersion: v1
            kind: docs
            metadata:
              name: doc2
        EOT
      + documents = (known after apply)
      + id        = (known after apply)
      + manifests = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
╷
│ Error: Invalid count argument
│ 
│   on modules/docs/main.tf line 26, in resource "kubectl_manifest" "docs":
│   26:   count     = length(data.kubectl_file_documents.docs.documents)
│ 
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only
│ the resources that the count depends on.

Steps to Reproduce

  1. terraform init
  2. terraform plan

Additional Context

No response

References

No response

@jbardin
Member

jbardin commented Dec 11, 2023

Hi @boillodmanuel,

In the docs module call you have put depends_on, which declares that everything in the module depends on any change to the given dependency. Because of this constraint, the kubectl_file_documents data source cannot be read until those changes are complete.
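In other words, it is roughly as if every resource inside the module, including the data source, carried that dependency itself. A hypothetical illustration of the effective configuration inside modules/docs (this is not code Terraform generates, and the root resource is not actually addressable from inside the module; it is only shown here as if it were local):

# Effective ordering implied by the module-level depends_on:
data "kubectl_file_documents" "docs" {
  content    = "..."                         # unchanged
  depends_on = [terraform_data.known_data]   # read is deferred while this has pending changes
}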

We use GitHub issues for tracking bugs and enhancements, rather than for questions. While we can sometimes help with certain simple problems here, it's better to use the community forum where there are more people ready to help.

Thanks!

@jbardin closed this as not planned Dec 11, 2023
@boillodmanuel
Author

Hi @jbardin ,

Because of this constraint, the kubectl_file_documents data source cannot be read until those changes are complete.

I'm aware, and that's exactly what I'm complaining about.

🛑 For me, this is an issue, and let me explain why.

The dependency is fully known, so there is no reason not to read the data source immediately. Currently, the read is delayed until the changes are complete, as you mentioned, and the consequence is that the plan fails with the error shown above.

Of course, I simplified the example to reproduce the situation, but this is a blocker in many situations.
For example, suppose you have three modules (similar to the terraform-aws-eks example):

  • module vpc
  • module eks-cluster
  • module eks-pods

If you don't add dependencies between them, creation and/or destruction may fail due to an improper ordering of resource creation/destruction.
For example, the vpc module creates the nat-gateway resource for internet access. The eks-cluster module doesn't have any implicit dependency on this nat-gateway resource, so the eks-cluster or eks-pods resources can be created before the NAT gateway exists => this may fail because the creation order is not correct.

The only way to fix this is to explicitly define dependencies between resources, so that there is a proper order in which to create (or destroy) them.
But today, due to the issue I mentioned, this is not possible!

You can reproduce this issue in a more "classic" use case by updating the terraform-aws-eks example and adding a depends_on between the eks module and the vpc module, along the lines of the sketch below.
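A hypothetical sketch of that wiring (module sources and input/output names are illustrative, loosely based on the public terraform-aws-modules registry modules):

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
  # ... vpc inputs ...
}

module "eks" {
  source     = "terraform-aws-modules/eks/aws"
  subnet_ids = module.vpc.private_subnets # fine-grained implicit dependency
  depends_on = [module.vpc]               # coarse explicit dependency that triggers this
                                          # issue: every data source inside module.eks is
                                          # deferred until module.vpc is fully applied
}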

@jbardin, could you reopen this issue, please?

@jbardin
Member

jbardin commented Dec 11, 2023

Hi @boillodmanuel,

Terraform can only understand dependencies to the extent they are declared in the configuration. When you use depends_on on a module, you are declaring that everything in the module depends on all changes in the dependency. Because of this, it is almost always a mistake to use depends_on in a module block, especially if that module contains data resources. In order for Terraform to have more fine-grained control over the dependencies, a more precise data flow between the modules must be declared. Threading the computed outputs of dependencies into the module, and using those values judiciously within the module to control the order of operations, is the only way to avoid blocking everything in the module at once (see the sketch below). In other words, you should not be putting depends_on between the eks and vpc modules in the first place.
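A minimal sketch of that threading pattern applied to the reproduction above; the ordering_id variable and the terraform_data.ordering resource are hypothetical names added for illustration:

# root main.tf: pass a computed value into the module instead of depends_on
module "docs" {
  source      = "./modules/docs"
  ordering_id = terraform_data.known_data.id
}

# modules/docs/main.tf
variable "ordering_id" {
  type = string
}

# An intermediate resource carries the dependency, so only the resources
# that reference it are ordered after terraform_data.known_data.
resource "terraform_data" "ordering" {
  input = var.ordering_id
}

data "kubectl_file_documents" "docs" {
  # no dependency here: the content is static, so it can be read during plan
  content = "..." # the two YAML documents, as in the original example
}

resource "kubectl_manifest" "docs" {
  count      = length(data.kubectl_file_documents.docs.documents)
  yaml_body  = data.kubectl_file_documents.docs.documents[count.index]
  depends_on = [terraform_data.ordering] # ordering preserved without deferring the read
}

With this shape the data source has no pending dependencies, so count is known at plan time, while the kubectl_manifest resources are still created after terraform_data.known_data.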

If the infrastructure is built such that there are distinct layers, e.g. you must deploy some resources first before you can begin provisioning a Kubernetes cluster on top of those resources, then that will require multiple plan+apply cycles with multiple configurations (or, at a minimum, using -target to break up a single configuration; however, that is not a recommended or scalable solution). For example:
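An illustrative -target split over the layered example discussed above (module names hypothetical):

# First pass: apply only the base layer everything else depends on
terraform apply -target=module.vpc

# Second pass: plan and apply the rest, now that the base layer exists
terraform apply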

If you are interested in progress towards designing new workflows to deal with these layered configurations, you can follow #30937, which is a generalized issue about the error you are encountering.

@boillodmanuel
Author

Thanks @jbardin for the explanation.

I tried to move the dependencies into the module itself, but this is not possible with community modules.
In that case, the only workaround is to use -target or separate Terraform projects.
This is not very convenient. By adding depends_on, I don't want to change the plan (which works); I only want to change the order. This seems impossible today 😭
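A hypothetical partial workaround, assuming the community module exposes an input (here tags) that it interpolates into its resources; whether this actually helps depends on where the module uses that input, and it would still defer any data source that reads it:

module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... other inputs ...

  # Carrying the computed value in an existing input creates an implicit
  # dependency only for the resources inside the module that use it.
  tags = {
    ordering = terraform_data.known_data.id
  }
}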

@github-actions (bot)
Contributor

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jan 11, 2024