Terraform create/destroy dependencies between resources #22572

Closed
brokoli18 opened this issue Aug 23, 2019 · 4 comments · Fixed by #30900

Comments

@brokoli18

Current Terraform Version

v0.11.13

Use-cases

I have a Kubernetes (k8s) cluster deployed in Google Cloud, onto which I then deploy some resources using Terraform. A short example is below:

resource "google_container_cluster" "primary" {
  name               = "marcellus-wallace"
  location           = "us-central1-a"
  initial_node_count = 3

  master_auth {
    username = ""
    password = ""

    client_certificate_config {
      issue_client_certificate = false
    }
  }
}

resource "kubernetes_namespace" "example" {
  metadata {
    annotations = {
      name = "example-annotation"
    }

    labels = {
      mylabel = "label-value"
    }

    name = "terraform-example-namespace"
  }
}

If I were, for example, to change the location of the google_container_cluster, the entire resource would be recreated. However, this would mean that the namespace above is destroyed on the cluster but remains as a resource in the Terraform state file. I would like to add a dependency between the google_container_cluster and the kubernetes_namespace so that Terraform is aware that when the cluster is recreated, the namespace needs to be recreated as well.

Attempted Solutions

An example of the behavior I would like to replicate is the triggers argument available for null_resource: https://www.terraform.io/docs/providers/null/resource.html#triggers.
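
For illustration, a minimal sketch of that pattern (watching the cluster's endpoint attribute here is just an assumed choice of value to track):

resource "null_resource" "cluster_tracker" {
  # Any change to the values below forces this null_resource to be replaced.
  triggers = {
    cluster_endpoint = "${google_container_cluster.primary.endpoint}"
  }
}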

I know that it is also possible to use the depends_on argument for resources, but from what I can see this only affects the order in which resources are provisioned; it would not achieve the behavior I am describing above. Perhaps I am mistaken?
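
For example (0.11-style syntax; as far as I can tell this only guarantees the namespace is created after the cluster exists, not that it is replaced along with it):

resource "kubernetes_namespace" "example" {
  # Ordering only: created after the cluster, but not recreated with it.
  depends_on = ["google_container_cluster.primary"]

  metadata {
    name = "terraform-example-namespace"
  }
}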

Proposal

Essentially I would like to replicate the triggers option available for null_resource for other resources in Terraform, specifically the kubernetes and helm providers.
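
Something like the following, as purely hypothetical syntax that Terraform does not support today:

resource "kubernetes_namespace" "example" {
  # Hypothetical: replace this namespace whenever the tracked value changes.
  triggers = {
    cluster_endpoint = "${google_container_cluster.primary.endpoint}"
  }

  metadata {
    name = "terraform-example-namespace"
  }
}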

References

@teamterraform
Contributor

Thanks for sharing this use-case, @brokoli18!

This seems like it might be another good example case for developing the idea in #22094. This new situation is special in a couple of ways:

  • It seems like in this case the relationship is indirect through the provider configuration, rather than explicit in the resource types. That is, there's presumably a reference to google_container_cluster.primary in the provider "kubernetes" block to populate its hostname/etc. (as sketched after this list), and thus we could say that all resources from that provider configuration are in a sense "contained within" the google_container_cluster.primary object. The proposal in Global IDs for representing relationships between resource objects (object containment, name collision detection, etc) #22094 doesn't have room for that idea at the time of writing.
  • The containment spans across different providers such that if this were to be addressed with a "global ID" sort of idea it would require the two providers to agree on a common way to talk about this relationship. For example, the Kubernetes provider might define a global id scheme to represent the idea of a Kubernetes cluster and indicate that everything it creates belongs to the cluster, in which case the google_container_cluster resource type implementation would need to know how to construct that ID scheme so that it can express the idea that the Kubernetes cluster is "contained within" it.
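
To make that first point concrete, here is a sketch of the kind of provider wiring we have in mind (the attribute names are illustrative, and a real configuration would also need credentials):

provider "kubernetes" {
  # Because this configuration references the cluster's attributes, every
  # resource created through this provider implicitly depends on the cluster.
  host                   = "https://${google_container_cluster.primary.endpoint}"
  cluster_ca_certificate = "${base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}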

This particular relationship is subject to the problem described under "Global Object IDs for multi-instance systems" in that issue, because a Kubernetes cluster is identified only by its location, but it ought to be possible to move a Kubernetes cluster (a self-hosted one, most likely) to a different physical network address without implying that everything in it is destroyed and recreated.

@brokoli18
Author

Thank you for the detailed response.

@timota commented Aug 27, 2019

Thank you

@github-actions
Contributor

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 23, 2022