
Configuring one provider with a dynamic attribute from another (was: depends_on for providers) #2430

Closed
dupuy opened this issue Jun 23, 2015 · 138 comments
Labels
config enhancement unknown-values Issues related to Terraform's treatment of unknown values

Comments

@dupuy

dupuy commented Jun 23, 2015

This issue was inspired by this question on Google Groups.

I've got some Terraform code that doesn't work because the EC2 instance running the Docker daemon doesn't exist yet, so I get "* Error pinging Docker server: Get http://${aws_instance.docker.public_ip}:2375/_ping: dial tcp: lookup ${aws_instance.docker.public_ip}: no such host" when I run plan or apply.

There are providers (docker and consul, and theoretically also openstack, though that's a stretch) whose backing services can themselves be built with Terraform using other providers such as AWS. If a Terraform deployment contains other resources that use the docker or consul provider, those resources cannot be provisioned or managed in any way until the resources that implement the docker server or consul cluster have been successfully provisioned.

If providers like docker and consul supported a depends_on clause, this kind of dependency could be managed automatically. In the absence of this, it may be possible to add depends_on clauses to all the resources using the docker or consul provider, but that does not fully address the problem: Terraform will attempt (and fail, if they are not already provisioned) to discover the state of the docker/consul resources during the planning stage, long before it has finished computing dependencies. Multiple plan/apply runs may be able to work around that specific problem, but a depends_on clause for providers would allow everything to be managed in a single pass.
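A hypothetical depends_on on the provider block (not valid Terraform syntax, then or now; shown only to illustrate the request, with a placeholder AMI) might look like:

```hcl
resource "aws_instance" "docker" {
  ami           = "ami-12345678" # placeholder AMI
  instance_type = "t2.micro"
}

provider "docker" {
  host = "tcp://${aws_instance.docker.public_ip}:2375/"

  # Proposed (not supported): defer provider instantiation until the
  # instance that runs the Docker daemon exists.
  depends_on = ["aws_instance.docker"]
}
```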

@bitglue

bitglue commented Oct 22, 2015

fleet is another example of a service for which this is a problem.

That provider used to work: if the IP address was an empty string, it would use a mock API that failed on everything. But that solution no longer works as of Terraform 0.6.3.

@apparentlymart

I think this is describing the same issue I wrote up in #2976, in which case unfortunately the problem is a bit more subtle than supporting depends_on on the provider.

Terraform is actually already able to correctly handle provider instantiation in the dependency graph, correctly understanding that (in your example) the docker provider instantiation depends on the completion of the EC2 instance.

The key issue here is that providers need to be instantiated for all operations, not just apply. Thus when terraform plan is run, Terraform will run the plan for the AWS instance, noting that it needs to create it, and then it will try to instantiate the Docker provider to plan the docker_container resource, which of course it can't do because we won't know the AWS instance results until apply.

When I attempted to define this problem in #2976 I was focused on working with resources that don't really have a concept of "creation", like consul_keys or template_file, rather than things like aws_instance, etc. There really isn't a good way to make that EC2 instance and docker example work as long as we preserve Terraform's strict separation between plan and apply.

The workaround for this problem is to explicitly split the problem into two steps: make one Terraform config that creates the EC2 instance and produces the instance IP address as an output, publish the state from that configuration somewhere, and then use the terraform_remote_state resource to reference that from a separate downstream config that sets up the Docker resources.

Unfortunately if you follow my above advice, you will then run into the issue that I described in #2976: the terraform_remote_state resource also won't get instantiated during plan. That issue seems solvable, however; terraform_remote_state just reads some data from elsewhere and doesn't actually create anything, so it should be safe to refresh it during plan and get the data necessary to populate the provider configuration before the provider is instantiated.
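A rough sketch of the downstream half of that split (the backend settings, output name, and attribute path are illustrative, using the resource-based terraform_remote_state of that era):

```hcl
# Downstream config: read the state published by the upstream EC2 config.
resource "terraform_remote_state" "infra" {
  backend = "s3"
  config {
    bucket = "my-terraform-state" # placeholder bucket/key
    key    = "infra.tfstate"
  }
}

provider "docker" {
  # "docker_ip" would be an output defined in the upstream configuration.
  host = "tcp://${terraform_remote_state.infra.output.docker_ip}:2375/"
}
```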

@bitglue

bitglue commented Oct 22, 2015

@apparentlymart: In that issue, you are describing a case "which is very handy when parts of the config are set dynamically from outside the configuration." And you propose to get around the issue by making those resources that represent the outside configuration "pre-refreshed", meaning you skip the create step and immediately go to the read step.

But I'm describing a case where parts of the config are set dynamically from inside the configuration. For example, I want to manipulate a Docker or Fleet service that exists on the EC2 instance I just made. Would pre-refreshing help in this case?

@apparentlymart

apparentlymart commented Oct 22, 2015

@bitglue no, as my (rather verbose) comment described, having a "true" resource from one provider be used as input to another is not compatible with Terraform's model of separating plan from apply. The only way to solve that, without changing Terraform's architecture considerably, is to break the problem into two separate configurations and then use terraform_remote_state (which can potentially be pre-refreshed, but isn't yet) to pass resource data between the two.

@bitglue

bitglue commented Oct 23, 2015

edited a bit to clarify the plan/apply stages

@apparentlymart I don't think it's impossible, because terraform-provider-fleet used to do it successfully. When the configuration is being planned for the first time, the provider doesn't actually need to do anything, which is why it can use a Fleet API object which fails all the time. None of the provider methods get called because it's not necessary: if there's no prior state, then the plan is trivially to create everything, and you don't need to call any provider methods to know that.

After the plan is made and it's time to apply, then the provider can be initialized after having created the EC2 instance, and now it has a proper endpoint and can actually work and create the fleet resources.

On subsequent planning runs, the public IP address of the EC2 instance is already known, so planning can happen as usual. Bonus points for refreshing the EC2 instance before initializing the fleet provider to do its refreshing.

I'd also think it's not the separation of plan and apply that's really the issue here, but more specifically refresh. You can always terraform plan -refresh=false and that will work even if the providers can't connect to anything, right? Assuming the state file is accurate, of course.

You can run into a little trouble if you delete the EC2 instance to which fleet was connecting after the fleet resources have been created. Now plan can't work. But there are a few ways to resolve that situation:

  1. Give the fleet provider the IP address of another EC2 instance in the same fleet cluster which hasn't been deleted.
  2. If the entire fleet cluster has been deleted and so there is no IP address you could give it, then all the fleet units have been deleted, too. So you can delete all the fleet units from the state file.
  3. Assuming you have an accurate (enough) state file, create the plan without refreshing and apply it. Then the missing EC2 instance will exist again and things will be back to normal.

Granted, these resolutions require a little hackish manual action, but it's not a situation I ever hit in practice. I'm sure with a little refinement it could be made less hackish.

@apparentlymart

@bitglue it sounds like you're saying that in principle the providers could tolerate their configurations being incomplete until they are asked to do something. That is certainly theoretically true: while today most of them verify their config during instantiation and fail hard if it is incomplete (as you saw the Docker provider do), they could potentially let an incomplete config pass and then fail only when a later operation actually tries to use the client.

So one thing we could prototype is to revise how helper/schema handles provider configuration errors: in Configure, rather than returning an error when the ConfigureFunc returns an error, it could simply remember the error as internal state and return nil. The Apply and Refresh functions would then be responsible for checking for that saved error and returning it, so that Diff (which, as you noted, does not depend on the instantiated client) can complete successfully.

Having a prototype of that would allow us to try out the different cases and see what it fixes and when/how it fails. As you said, it should resolve the initial creation case because at that point the Refresh method won't be called. The case I'm less sure about -- which is admittedly an edge case -- is when a provider starts off with a totally literal configuration and is later changed to depend on the outcome of another resource; in that case Terraform will try to refresh the resources belonging to that provider, which will presumably fail.

@bitglue

bitglue commented Oct 23, 2015

@apparentlymart That's more or less my thinking, yeah. Though from what I've observed trying to get terraform-fleet-provider working again, in many circumstances the failure happens before ConfigureFunc is even called, so we might need a different approach.

@apparentlymart

@bitglue I guess the schema validation will catch cases where a field is required but empty, so you're right that what I described won't entirely fix it unless we make all provider arguments optional and handle their absence inside the ConfigureFunc.

@mtekel

mtekel commented Feb 25, 2016

This is an issue for the postgresql provider as well, e.g. when you want to create an AWS RDS instance and then use its port in the provider configuration. This fails because the provider initializes before the RDS instance is created: the port number is returned as "" and that doesn't convert to an int:

  * provider.postgresql: cannot parse '' as int: strconv.ParseInt: parsing "": invalid syntax

TF code:

 provider "postgresql" {
   host = "${aws_db_instance.myDB.address}"
   port = "${aws_db_instance.myDB.port}"
   username = "${aws_db_instance.myDB.username}"
   password = "abc"
 }

mtekel added a commit to alphagov/paas-cf that referenced this issue Feb 25, 2016
Don't specify port, as the RDS instance doesn't exist yet in the
moment of postgresQL provider initialization, which then breaks,
because port is returned as empty quotes, which doesn't convert to
string. See hashicorp/terraform#2430
@mtekel

mtekel commented Feb 26, 2016

Interestingly, in your case the TF graph does show the provider dependency, yet the provider still runs in parallel. This is especially problematic on destroy, as the RDS instance gets destroyed before the postgresql provider has a chance to destroy the resources it created, leaving an "undestructable" state file behind. See #5340

@BastienM

Hello there,

I got a similar problem with the Docker provider when used inside an OpenStack instance (graph).

# main.tf
module "openstack" {
    source            = "./openstack"
    user-name         = "${var.openstack_user-name}"
    tenant-name       = "${var.openstack_tenant-name}"
    user-password     = "${var.openstack_user-password}"
    auth-url          = "${var.openstack_auth-url}"
    dc-region         = "${var.openstack_dc-region}"
    key-name          = "${var.openstack_key-name}"
    key-path          = "${var.openstack_key-path}"
    instance-flavor   = "${var.openstack_instance-flavor}"
    instance-os       = "${var.openstack_instance-os}"
}

module "docker" {
    source                  = "./docker"
    dockerhub-organization  = "${var.docker_dockerhub-organization}"
    instance-public-ip      = "${module.openstack.instance_public_ip}"
}
$ terraform plan

Error running plan: 1 error(s) occurred:
* Error initializing Docker client: invalid endpoint

Even though I'm using:

provider "docker" {
    host = "${var.instance-public-ip}:2375/"
}

Logically it should wait for the instance to be up, but sadly the provider is still initialized at the very beginning.


So as a workaround I split my project into modules (module.openstack & module.docker) and then apply them one at a time with the -target parameter, like this:

$ terraform apply -target=module.openstack && terraform apply -target=module.docker

It does the job but makes the whole process quite annoying, as we must always specify the modules in the right order for each step (plan, apply, destroy ...).

So until we get an option such as depends_on, I don't see any other way to do it.
Is there an update on this matter?

@closconsultancy

closconsultancy commented May 9, 2016

I've submitted a similar question to the google group on this:

https://groups.google.com/forum/#!topic/terraform-tool/OhDdMrSoWK8

The workaround of specifying the modules separately didn't seem to work for me; weirdly, the docker provider was spinning up a second EC2 instance?! I've also noticed that terraform destroy didn't seem to take notice of the target module. See below:

#terraform destroy -target=module.aws

Do you really want to destroy?
  Terraform will delete the following infrastructure:
    module.aws

module.aws.aws_instance.my_ec2_instance: Refreshing state... (ID: i-69b034e5)
.....
Error refreshing state: 1 error(s) occurred:

* Error initializing Docker client: invalid endpoint

@apparentlymart

Issue #4149 was my later proposal to alter Terraform's workflow to better support the situation of having a single Terraform config work at multiple levels of abstraction (a VM and the app running on it, as in this case).

It's not an easy fix but it essentially formalizes the use of -target to apply a config in multiple steps and uses Terraform's knowledge of the dependency graph to do it automatically.

@dbcoliveira

dbcoliveira commented Jun 2, 2016

@apparentlymart I don't fully understand why the dependency graph isn't honored during the plan step. Intuitively I would expect the dependency graph to be applied at all stages (including provider instantiation at each step); at least for all these cases it would avoid a bunch of problems.
In other words, the dependency graph should indicate whether provider instantiation should wait for a certain resource.
Issues like this limit the tool's capabilities. It's a bit silly that the problem can be fixed with a series of POSIX commands but not programmatically (using TF logic).

@CloudSurgeon

So, if I understand this correctly, there is no way for me to tell a provider not to configure until after certain resources have been created. For example, this won't work because custom_provider will already be initialized before my_machine is built:

provider "custom_provider" {
  url      = "${aws_instance.my_machine.public_ip}"
  username = "admin"
  password = "password"
}

The only option would be to run an apply with a -target option for my_machine first, then run the apply again after the dependency has been satisfied.

brianantonelli added a commit to Cox-Automotive/terraform-provider-alks that referenced this issue Mar 22, 2017
but this will never work until this issue is resolved
hashicorp/terraform#2430
@derFunk

derFunk commented Mar 29, 2017

+1 for depends_on for providers.
I want to be able to depend on having all actions from another provider applied first.

My use case:
I want to create another database and roles/schema inside this database in PostgreSQL.

To do so, I have to connect as the "root" user first, create the new role with appropriate permissions, and then connect again with the new user to create the database and schema in it.

So I need two providers with aliases, one with root and one for the application db.
The application postgresql provider depends on the finished actions from the root postgresql provider.

My current workaround is to comment out the second part, apply the first part, then uncomment the second part and apply it as well. :(

# ====================================================================
# Execute first
# ====================================================================

provider "postgresql" {
  alias           = "root"
  host            = "${var.db_pg_application_host}"
  port            = "${var.db_pg_application_port}"
  username        = "root"
  password        = "${lookup(var.rds_root_pws, "application")}"
  database        = "postgres"
}

resource "postgresql_role" "application" {
  provider        = "postgresql.root"
  name            = "application"
  login           = true
  create_database = true
  password        = "${lookup(var.rds_user_pws, "application")}"
}

# ====================================================================
# Execute second
# ====================================================================

provider "postgresql" {
  alias           = "application"
  host            = "${var.db_pg_application_host}"
  port            = "${var.db_pg_application_port}"
  username        = "application"
  password        = "${lookup(var.rds_user_pws, "application")}"
  database        = ""
}

resource "postgresql_database" "application" {
  provider = "postgresql.application"
  name     = "application"
  owner = "${postgresql_role.application.name}"
}

resource "postgresql_schema" "myschema" {
  provider = "postgresql.application"
  name     = "myschema"
  owner      = "${postgresql_role.application.name}"

  policy {
    create = true
    usage  = true
    role   = "${postgresql_role.application.name}"
  }

  policy {
    create = true
    usage  = true
    role   = "root"
  }
}

@apparentlymart

@derFunk your use case there (deferring a particular provider and its resources until after its dependencies are ready) is a big part of what #4149 is about. (Just mentioning this here to create the issue link, so I can find this again later!)

@andylockran

I've managed to hit this same issue with the postgres provider depending on an aws_db_instance outside my module. Is there a workaround available now?

@jjlakis

jjlakis commented Feb 22, 2018

Any progress here?

@johnmarcou

johnmarcou commented Mar 6, 2018

Hi all.

I had a similar issue where I am using Terraform to:
1 - deploy an infrastructure (Kubernetes Typhoon)
2 - then deploy resources on the freshly deployed infrastructure (Helm packages)

The helm provider was checking the connection file (kubeconfig) at terraform initialisation, before the file itself was created (that happens during step 1).
So the helm resource creation inevitably crashed, because the provider was unable to contact the infrastructure.

A double terraform apply works, though. Here is how I managed to make it work with a single terraform apply, forcing the helm provider to wait for the infrastructure to be online and exporting the infrastructure config file to a temporary local_file:

resource "local_file" "kubeconfig" {
  # HACK: depends_on for the helm provider
  # Passing provider configuration value via a local_file
  depends_on = ["module.typhoon"]
  content    = "${module.typhoon.kubeconfig}"
  filename   = "./terraform.tfstate.helmprovider.kubeconfig"
}

provider "helm" {
  kubernetes {
    # HACK: depends_on via an another resource
    # config_path = "${module.typhoon.kubeconfig}", but via the dependency
    config_path = "${local_file.kubeconfig.filename}"
  }
}

resource "helm_release" "openvmtools" {
  count      = "${var.enable_addons ? 1 : 0}"
   # HACK: when destroy, don't delete the resource dependency before the resource
  depends_on = ["module.typhoon"]
  name       = "openvmtools"
  namespace  = "kube-system"
  chart      = "${path.module}/addons/charts/open-vm-tools"
}

NB: This hack works because the provider expects a file path as a config value.

Hope it can help.

@ap1969

ap1969 commented Apr 8, 2018

Hi,
Similar issue with Rancher. The rancher provider requires the URL for the rancher host, but if the plan is to create the rancher host and some other hosts to run the containerized services, it's then impossible to:

  1. create the rancher host, and
  2. have the other hosts register with the rancher hosts.

This is due to the rancher provider failing during the terraform plan step, as it can't reach the API.

@rchernobelskiy

Probably a lot of people come here looking for how to provision Kubernetes objects with the same Terraform that creates the cluster.
The way to do that is to avoid using a data source and instead reference a module output, possibly together with an exec block.
For example, to provision Kubernetes objects after creating a cluster with the eks module, the kubernetes provider should be set up roughly as follows:

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

--role-arn can be included in the args above if your aws provider is also assuming a role.
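For example (illustrative only; the role ARN is a placeholder, and the rest mirrors the exec block above):

```hcl
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # --role-arn added when the aws provider assumes a role (placeholder ARN)
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name,
            "--role-arn", "arn:aws:iam::123456789012:role/terraform"]
  }
}
```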

@brandongallagher1999

@rchernobelskiy Isn't that provider executed at build time? Meaning that the module of which it's pointing to already needs to exist prior to?

@rchernobelskiy

@rchernobelskiy Isn't that provider executed at build time? Meaning that the module of which it's pointing to already needs to exist prior to?

In practice that seems to not be the case and arguments from other modules seem to be ok to put in providers prior to those modules existing.

@jbg

jbg commented May 17, 2023

@rchernobelskiy Isn't that provider executed at build time? Meaning that the module of which it's pointing to already needs to exist prior to?

In practice that seems to not be the case and arguments from other modules seem to be ok to put in providers prior to those modules existing.

As long as no resources are added yet that depend on the provider being added. In which case why are you adding the provider?

@rchernobelskiy

@rchernobelskiy Isn't that provider executed at build time? Meaning that the module of which it's pointing to already needs to exist prior to?

In practice that seems to not be the case and arguments from other modules seem to be ok to put in providers prior to those modules existing.

As long as no resources are added yet that depend on the provider being added. In which case why are you adding the provider?

I'm using one terraform apply to both create an EKS cluster and then provision kubernetes objects in it, like configmaps.
This works in one go, with one apply command, even though the configuration for the kubernetes provider is based on a cluster that does not exist yet at the time of running terraform apply.

@jbg

jbg commented May 18, 2023

I'm using one terraform apply to both create an eks kube and then provision kubernetes objects in it, like configmaps.

This works in one go, one apply command, even though the configuration for the kubernetes provider is based on a cluster that does not exist yet at the time of running terraform apply.

It may work if you don't have any resources which depend on k8s resources via for_each or count, don't use any k8s data sources which need to be read at plan time, and don't use the kubernetes_manifest resource. It will fail with anything that requires the provider to connect to the cluster at plan time (since the cluster doesn't exist yet). Non-trivial configurations tend to have things that require cluster access at plan time, as discussed above in this issue.
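For instance, a plan-time read like this (resource names and data keys are placeholders) forces the provider to reach the cluster during plan, before anything has been applied:

```hcl
# Illustrative: data sources are read at plan time, so this requires a
# reachable cluster before the first apply.
data "kubernetes_config_map" "settings" {
  metadata {
    name      = "app-settings" # placeholder name
    namespace = "default"
  }
}

# Deriving for_each (or count) from cluster data has the same effect.
resource "kubernetes_namespace" "teams" {
  for_each = toset(split(",", data.kubernetes_config_map.settings.data["teams"]))
  metadata {
    name = each.value
  }
}
```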

@edicsonm

Hi, I had the same issue when configuring a Kubernetes provider that uses outputs from other modules. This is what I have in my provider configuration:

provider "kubernetes" {
  host                   = module.management-cluster-module.host
  cluster_ca_certificate = module.management-cluster-module.certificate-authority-data
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["eks", "get-token", "--profile", "allianz", "--cluster-name", module.management-cluster-module.cluster-name]
    command     = "aws"
  }
}

My current main.tf uses a few modules, and in some of those modules I used this resource definition:

data "kubectl_file_documents" "secrets_manifest" {
  content = file("${path.module}/manifests/secrets.yaml")
}

resource "kubectl_manifest" "metrics_server" {
  for_each  = data.kubectl_file_documents.secrets_manifest.manifests
  yaml_body = each.value
}

Up to this point my whole terraform implementation was working fine using the provider configuration described above. Then one day I needed to set up some secrets and decided to use the configuration below. It uses resource "kubernetes_manifest", which describes/creates the k8s objects in a different way: inline in the resource rather than via a YAML file, as you can see below. This is when I started getting this error: cannot create REST client: no client config.

Secrets to use during Jenkins Installation:

resource "kubernetes_manifest" "jenkins_spc" {
  manifest = {
    apiVersion = "secrets-store.csi.x-k8s.io/v1alpha1"
    kind       = "SecretProviderClass"
    metadata = {
      namespace = "jenkins"
      name      = "jenkins-secrets"
    }
    spec = {
      provider = "aws"
      parameters = {
        objects = yamlencode([
          {
            objectName = "management/jenkins"
            objectType = "secretsmanager"
            jmesPath = [
              ... this section was deleted on purpose
            ]
          }
        ])
      }
      secretObjects = [
        {
          ... this section was deleted on purpose
        }
      ]
    }
  }
}

What was my solution? I went back to creating my secrets configuration in k8s using resource "kubectl_manifest", like this:

data "kubectl_file_documents" "secrets_manifest" {
  content = file("${path.module}/manifests/secrets.yaml")
}

resource "kubectl_manifest" "metrics_server" {
  for_each  = data.kubectl_file_documents.secrets_manifest.manifests
  yaml_body = each.value
}

Hope that helps someone with the same issue.

@villesau

Could this actually indicate that the parameters are defined in the wrong place? Maybe the parameters that are currently passed to the provider should actually belong to the resources instead? It would be more repetitive, but it would solve the problem without fundamental changes to Terraform.

@brandongallagher1999

We need a feature that allows us to create our Terraform infrastructure end to end using run-time-dependent providers, so that I can, for example, create a Kubernetes cluster and then, within the same run, deploy Helm charts into it (Prometheus, Grafana, etc.).

This would be game-changing in terms of IaC capabilities. I sometimes question the use-case of IaC beyond creating infrastructure from other modules' properties. Since I have to keep multiple folders and manually run terraform destroy or terraform apply in each module folder, even though they are all part of the same app, things become extremely tedious: it is manual infrastructure management minus a few steps, since I can delete some (but not all) dependent resources.

Please HashiCorp!

@UncleSamSwiss

@brandongallagher1999 It gets even worse when you want to deploy an entire application stack.

We have these completely independent Terraform module folders:

  • Kubernetes Cluster
  • Postgres (separate because it uses the Kubernetes Provider)
  • Keycloak (because it uses Postgres and Kubernetes Providers)
  • our application (because it needs Keycloak, Postgres and Kubernetes Providers)

Currently the only better way to do this would be to use CDKTF which apparently can manage multiple independent "Stacks" that share properties using Terraform state.

So yes, a "native" solution inside Terraform would really be appreciated!

@marziply

marziply commented Jun 16, 2023

@brandongallagher1999 It gets even worse when you want to deploy an entire application stack.

We have these completely independent Terraform module folders:

  • Kubernetes Cluster
  • Postgres (separate because it uses the Kubernetes Provider)
  • Keycloak (because it uses Postgres and Kubernetes Providers)
  • our application (because it needs Keycloak, Postgres and Kubernetes Providers)

Currently the only better way to do this would be to use CDKTF which apparently can manage multiple independent "Stacks" that share properties using Terraform state.

So yes, a "native" solution inside Terraform would really be appreciated!

I share these sentiments exactly. As a user of Terraform, I want to run terraform apply once. I theoretically should not need multiple "stages" for Terraform, because surely depends_on should handle dependencies between whatever I configure as dependants. I've often found myself in a "chicken or the egg" situation with Terraform: I want to install a bunch of Helm packages via the Helm resource, but I can't use kubernetes_manifest CRD resources within the same apply because the OpenAPI spec is needed first. A frustrating constraint that means I have to split my Terraform applies into sequential stages.

We need a feature that allows us to create our Terraform infrastructure from end to end using run-time dependant providers so that I can for example: Create a Kubernetes Cluster then within the same runtime --> Deploy Helm charts into my cluster (Prometheus, Grafana, etc).

This would be game-changing in terms of IaC capabilities. I sometimes question the use-case of IaC beyond creating infrastructure using other module's properties. Since I have to have multiple folders and have to manually run through each module folder and run terraform destroy or terraform apply, even though they're all part of the same app, it makes things extremely tedious as it's manual infrastructure management minus a few steps since I can delete some (not all) dependant resources.

Please HashiCorp!

This mirrors my point exactly. We need support for some level of dependency configuration between providers, especially for read operations, as in the Helm case. I want to configure Terraform to wait until resource X (e.g. Helm packages) has been successfully installed in Kubernetes before attempting to read the CRD OpenAPI spec.

@ajostergaard

This comment was marked as off-topic.

@UncleSamSwiss

I assume you've all tried using Terragrunt to help with this scenario, what's the reason that didn't work out?

You are right, I didn't know about Terragrunt. So we have two community projects that work around this issue: CDKTF and Terragrunt. This is great, but it adds complexity: even more tools to learn and understand. IMHO it would still be better to have this solved once and for all in Terraform itself.

@brandongallagher1999

I assume you've all tried using Terragrunt to help with this scenario, what's the reason that didn't work out?

Terragrunt unfortunately does not solve this use-case either. Not to mention that managing all of the sub .hcl files Terragrunt needs in order to function is quite a nuisance. I find Terragrunt only useful for running things like terragrunt format in multiple subfolders; beyond that it doesn't improve the underlying functionality of Terraform.

@jmturwy

jmturwy commented Jul 13, 2023

Same issue as everyone else:
I'm using Terraform to spin up an entire EKS ecosystem, then using the Helm provider to deploy the Vault chart, then using the Vault provider to configure Vault. It fails on the Vault provider because it cannot connect to the Vault URL, which doesn't exist yet. It works fine if the server is already stood up.

Going to just see if I can configure everything I need in the Helm provider.
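
For anyone hitting this for the first time, the failing chain above can be sketched in HCL. All module names and output attributes here are hypothetical; this is a minimal illustration of the unknown-value problem, not a working configuration:

```hcl
# Hypothetical names throughout; a sketch of the chain that fails at plan time.

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint          # unknown until the cluster exists
    cluster_ca_certificate = base64decode(module.eks.cluster_ca)  # unknown until the cluster exists
  }
}

resource "helm_release" "vault" {
  name       = "vault"
  repository = "https://helm.releases.hashicorp.com"
  chart      = "vault"
}

provider "vault" {
  # Unknown at plan time, so the Vault provider cannot connect during plan:
  address = module.vault_ingress.url
}
```

The Vault provider tries to reach its address while planning, before anything above it has been applied, which is exactly the failure described in this comment.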

@brandongallagher1999

We need run-time-dependent providers. This feature would be groundbreaking in terms of IaC.

@Sodki

Sodki commented Jul 14, 2023

We need run-time-dependent providers. This feature would be groundbreaking in terms of IaC.

Not groundbreaking. Other tools like Pulumi have had it for years. Terraform just needs to pick up the pace.

@brandongallagher1999

brandongallagher1999 commented Jul 14, 2023

Not groundbreaking. Other tools like Pulumi have had it for years. Terraform just needs to pick up the pace.

Wasn't aware Pulumi had this capability. It would be nice to spin up our entire infrastructure and all of its related resources with one command. If Pulumi is actually capable of this, I might consider moving over. However, in terms of the DevOps industry, Terraform still seems to be the most in demand, which concerns me.

@brandongallagher1999

@Sodki Follow-up from your comment: I have tried out Pulumi to recreate our entire stack, and I will NEVER go back to Terraform. I can spin up an entire stack (database, K8s cluster, K8s resources, storage accounts, etc.) in a single file (or an organized set of folders); the old problem of deploying K8s resources that required the cluster to be created before runtime is simply gone.

I'd recommend that everyone here move over to Pulumi: not only does it solve the issue in this thread, it also lets you use TypeScript or Python, which is extremely convenient compared to the complexity of HCL for anything beyond static declaration (for loops, for example, are INSANE in HCL, especially when accessing object properties).

@apparentlymart
Contributor

The Terraform team is planning to resolve this problem as part of #30937, by introducing a new capability for a provider to report that it doesn't yet have enough information to complete planning (e.g. something crucial like the API URL isn't known yet), which Terraform Core would then handle by deferring the planning of affected resources until a subsequent plan/apply round.

You could think of this as something similar to automatically adding -target options to exclude the resources that can't be applied yet, although the implementation is quite different because it's still desirable to at least validate the deferred resources, and validation doesn't require a complete provider configuration.
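
Until that lands, the manual equivalent of the deferral described above is the well-known two-phase apply workaround. The target address below is hypothetical; the point is that the first pass creates the infrastructure that the downstream providers' configurations depend on, so the second pass sees only known values:

```
# First pass: create only the resources the dependent providers need.
terraform apply -target=module.eks_cluster

# Second pass: the provider configuration values are now known.
terraform apply
```

This is exactly the pattern the planned deferral mechanism is meant to automate, without the user having to guess which -target options to pass.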

The history of this issue seems to be causing ongoing confusion, since the underlying problem here has nothing to do with dependencies and is instead about unknown values appearing in the provider configuration.

Since there's already work underway to solve this as part of a broader effort to deal with unknown values in inconvenient places, I'm going to close this issue just to consolidate with the other one that has a clearer description of what the problem is and is tracking the ongoing work to deal with it.

Thanks for the discussion here! If you are subscribed to this issue and would like to continue getting notifications about this topic then I'd suggest subscribing to #30937 instead.

@apparentlymart apparentlymart closed this as not planned on May 30, 2024
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jun 30, 2024