
Cannot use terraform import with module that has dynamic provider configuration #13018

Closed
jszwedko opened this issue Mar 23, 2017 · 83 comments

@jszwedko
Contributor

Terraform Version

Terraform v0.9.1

Affected Resource(s)

N/A

Terraform Configuration Files

# ./main.tf
module "module_us_west_1" {
  source = "./module"
  region = "us-west-1"
}

# ./module/main.tf
variable "region" {
  description = "AWS region for provider"
}

provider "aws" {
  region = "${var.region}"
}

resource "aws_cloudwatch_log_group" "rds_os" {
  name = "RDSOSMetrics"
  retention_in_days = 30
}

Debug Output

https://gist.github.com/ff54870fee49636209ecfaa5de272175

Panic Output

N/A

Expected Behavior

Resource was imported

Actual Behavior

Error importing: 1 error(s) occurred:

* module.module_us_west_1.provider.aws: 1:3: unknown variable accessed: var.region in:

${var.region}

Steps to Reproduce

  1. terraform import module.module_us_west_1.aws_cloudwatch_log_group.rds_os "arn:aws:logs:us-west-1:FILTERED:log-group:RDSOSMetrics:*"

Important Factoids

N/A

References

N/A

@Flygsand

Flygsand commented Apr 3, 2017

Same issue here on v0.9.2. Seems related to #7774 (closed).

@iancward

iancward commented Apr 4, 2017

I'm seeing the same issue with the datadog provider, so this isn't just AWS.

@haidangwa

I've found that adding a provider alias resolves this issue. A static value works, e.g. alias = "myregion".
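For example, something along these lines (a sketch adapted from the module in the original report; the alias name is arbitrary):

# ./module/main.tf
provider "aws" {
  alias  = "myregion"            # static alias value
  region = "${var.region}"
}

resource "aws_cloudwatch_log_group" "rds_os" {
  provider          = "aws.myregion"
  name              = "RDSOSMetrics"
  retention_in_days = 30
}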

@iancward

iancward commented Apr 4, 2017

It looks like you can sort of get around this by aliasing the provider.

provider "aws" {
  alias = "${var.region}"
  region = "${var.region}"
}

resource "aws_cloudwatch_log_group" "rds_os" {
  provider = "aws.${var.region}"
  name = "RDSOSMetrics"
  retention_in_days = 30
}

This may also be worth a read: #1819
As is this: #3285

@Chili-Man
Contributor

I'm having the same issue as well with Terraform 0.9.4

@bflad
Contributor

bflad commented Jun 2, 2017

Seeing this behavior with Terraform 0.9.6 and a similar use case of a module with a variable in the provider:

# module/variables.tf
variable "aws_profile" {
  description = "AWS profile for the provider"
  type        = "string"
}

variable "aws_region" {
  description = "AWS region for the provider"
  type        = "string"
}

# module/provider.tf
provider "aws" {
  profile = "${var.aws_profile}"
  region  = "${var.aws_region}"
}
The import then fails with:

* module.EXAMPLE.provider.aws: 1:3: unknown variable accessed: var.aws_profile in:

${var.aws_profile}

@christianbradley

This behavior still exists in v0.10.3. Any update on this?

@apparentlymart
Contributor

Hi all! Sorry for this limitation and for the long silence here.

Unfortunately the import system struggles a bit with more complex situations like this because it builds a different sort of graph to deal with the import case.

From the example config and output given here, it looks like the interpolation context isn't being constructed correctly when in import mode, and so the variables from the parent module aren't coming through correctly. This is definitely a bug, but is likely fiddly to fix. 😖

The fact that aliasing the provider makes this work is interesting. In that situation, is there an unaliased provider "aws" block (with a literal region) in the parent/root module that could be getting used instead? That's the best explanation I can come up with for why that didn't produce an error saying the region needs to be set.

@christianbradley

@apparentlymart - sounds fiddly indeed. Is there anyone assigned to this bug who can get fiddlin?

@apparentlymart
Contributor

There isn't anyone available to look at this right now, but we want to get to it eventually. Any additional information we can gather in the meantime (including the answer to the question at the end of my previous comment) could help explain the issue here and thus make it easier to plan a fix.

To be honest, the current import functionality is primarily focused on simple cases where people are getting started and importing for the first time, so its design struggles with more complex scenarios. We will probably end up having to rethink it in a broader sense before too long in order to make it more usable.

Our usual expectation is that import is something people use only for a short period when they are getting started with Terraform, but from the comments here I get the sense that some or all of you are using it in a more ongoing/routine manner. Is that right? If so, it'd be great to hear what you're using it for since that will help inform future changes to make it more generally-usable.

@christianbradley

@apparentlymart - I can't answer your question personally as the "workaround" didn't work for me...

I won't say I'm a terraform expert just yet - I've been using "import" to migrate legacy deploys of an app we develop into terraform control so that future updates can be automated with terraform.

Would you suggest a different/faster method for pulling existing resources into state so that terraform does not try to destroy/recreate them?

@apparentlymart
Contributor

I've not personally run into this since I've only used import in pretty simple cases, but when faced with this problem I would probably try to work around it like this:

  • Import the resource into the root module, using a provider that has the intended region statically configured.
  • Use terraform state mv to move the state for the imported resource over into the target module.
  • Move the configuration block into the target module too.
  • Run terraform plan to make sure things have settled.
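A rough sketch of that sequence, using the resource and module from the original report (exact addresses and IDs will vary):

# 1. With a temporary copy of the resource block and a statically configured
#    provider "aws" in the root module, import into the root:
terraform import aws_cloudwatch_log_group.rds_os "arn:aws:logs:us-west-1:FILTERED:log-group:RDSOSMetrics:*"

# 2. Move the imported state into the target module:
terraform state mv aws_cloudwatch_log_group.rds_os module.module_us_west_1.aws_cloudwatch_log_group.rds_os

# 3. Move the resource block back into the module, delete the temporary
#    root-level copy, and confirm nothing wants to change:
terraform plan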

I'm not under any illusion that the above is a great workaround, but I think it would work, and it's what I'd try when faced with this problem right now.

If the root module's config is explicitly using a different region, a variant of the above would be to temporarily create a child module that exists only to receive imports, with a static provider config, and import into that before moving into the final location.
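That variant might look roughly like this (a sketch; the scratch module name and region are illustrative):

# ./import-scratch/main.tf -- throwaway module with a static provider config
provider "aws" {
  region = "us-west-1"
}

resource "aws_cloudwatch_log_group" "rds_os" {
  name              = "RDSOSMetrics"
  retention_in_days = 30
}

# ./main.tf -- temporary module block alongside the real ones
module "import_scratch" {
  source = "./import-scratch"
}

You would then import into module.import_scratch.aws_cloudwatch_log_group.rds_os, use terraform state mv to move the resource to its final address, and delete the scratch module.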

To be completely honest, I'm not sure why the workaround of temporarily adding an alias works, so I would not have thought to try that. If it is working, I expect it's due to some edge case and thus not something I'd expect to work reliably.

lancekuo added a commit to lancekuo/terraform-docker-swarm that referenced this issue Sep 13, 2017
grahamlyons added a commit to mergermarket/tf_aws_cross_account_vpc_peering_connection that referenced this issue Sep 20, 2017
@grahamlyons

@iancward's alias workaround worked for me.

@philax

philax commented Oct 2, 2017

The alias workaround is not working for me.

@vascop

vascop commented Oct 5, 2017

We're using dynamic configuration of modules to be able to store our secrets (postgresql password, datadog API keys...) in Vault and pull them in for Terraform to use when importing, planning, and applying. Hinting that we should just give everyone passwords for everything and risk them being committed to source control doesn't seem like sound advice.

Furthermore, this happens any time we're importing anything, even from another provider. So if I want to start using Terraform to provision a new provider, I have to go to my other providers and replace all the dynamic references to static ones again.

@wraithm

wraithm commented Oct 12, 2017

I did exactly what @apparentlymart suggested here. What happened is that the terraform plan completely plowed over my carefully constructed terraform state. I think that it's actually only terraform refresh that's causing the state to be changed.

In the root module, I have an aws provider that is unaliased. It seems that in my modules where I use different providers, associated with different AWS accounts, the resources got re-associated with that aws provider in the root module.

EDIT: It was only terraform refresh that destroyed state. terraform plan works perfectly fine. Aliasing the unaliased provider after doing all of the imports and moves makes terraform refresh not destroy state.

@daveadams
Contributor

I have a slightly different concern about this problem, which has gotten worse with the 0.10 changes to how imports work. In order to use the workaround of importing into the root module and then doing terraform state mv, you also need to create temporary resource stubs in the code. This has practical implications that make the import process truly burdensome.

My understanding is that the requirement to stub out the imported resources was imposed to prevent people from immediately and accidentally destroying those imported resources by running terraform apply. But with the new default of requiring interactive approval before applying, this is surely a much smaller concern.

As for the provider config, I'm also willing to specify all of that on the command line. So it'd be great if we had the option to ignore the resource and provider config in the tf code, specify the provider args on the command line, and give a path we wish to import to.

This is becoming a bigger problem for us as we try to get more people in our organization to use Terraform. The lack of ability to import into any but the most explicitly specified code is a real block to adoption.

@sebglon

sebglon commented Nov 11, 2017

This works for me on v0.10.7 with the Google provider.

@christianbradley

This is really becoming a major issue for adopting terraform for our current app. It is composed of multiple modules (vpc, dns, auth, api, cms, portal, etc) that have dependencies upon one another. This is very easily definable using a root module that passes output vars of one module to the input vars of another, but then we lose the ability to import existing infrastructure (which is a common scenario for some of our legacy apps). The alternative is to write and maintain a ton of scripts that do the same thing while keeping each module independent.

The response from the terraform dev team isn't giving us a lot of faith in the platform either; it's been 10 months since this issue was opened and we've seen very little actual focus on it. We're currently investigating alternatives to terraform as a result.

I'd love to keep using terraform, but as the weeks go by I'm seeing many such pain points - and it's getting harder to convince the rest of the team to keep moving forward.

@apparentlymart - it's been 3 months since we've had any update from the team on this issue, other than saying "it's fiddly to fix". Some bugs are fiddly, I get it. But these are two of the features that drew us to terraform...

  1. Modular definitions
  2. The ability to import existing infrastructure into TF management.

Let's get this fixed, or honestly, we're moving on.

@leszekeljasz

The suggested solution with aliases does not solve the problem for me. It fixes one issue and adds another: terraform keeps asking for provider.aws.region each time I run import / plan / apply / destroy.

Terraform version:

$ terraform version
Terraform v0.11.2
+ provider.aws v1.7.1

Ex. config:

$ tree
.
├── main.tf
└── modules
    └── environment
        └── env.tf

main.tf

$ cat main.tf
module "uswest" {
    source  = "modules/environment"
    sg_name = "test_sg_uswest"
    region  = "us-west-2"  # Oregon
}

module "useast" {
    source  = "modules/environment"
    sg_name = "test_sg_useast"
    region  = "us-east-1"  # N. Virginia
}

modules/environment/env.tf

$ cat modules/environment/env.tf
variable "region" {}
variable "sg_name" {}

provider "aws" {
    alias   = "${var.region}"
    region  = "${var.region}"
}

resource "aws_security_group" "sg" {
    name = "${var.sg_name}"

terraform plan

$ terraform plan
provider.aws.region
  The region where AWS operations will take place. Examples
  are us-east-1, us-west-2, etc.

  Default: us-east-1
  Enter a value:

@philax

philax commented Jan 24, 2018

Is there any sort of ETA on this at all?

@meowmeowhappykitten

My hacky workaround for this (that probably won't work in more complex situations) is just to replace the ${var.region} references in my module with the actual string value of the region while I'm importing a resource. After the import I put the ${var.region} references back and can verify via terraform plan that my resource has been imported successfully.
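In config terms, the temporary edit is roughly this (a sketch based on the module from the original report):

# ./module/main.tf -- temporarily hard-coded just for the import
provider "aws" {
  region = "us-west-1"    # was: region = "${var.region}"
}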

This works for me because my main.tf files only refer to one region. If you had a main.tf with the same module creating resources in multiple regions, this workaround probably wouldn't work for you.

@masonembry

I just spent a whole morning on this. Please fix this bug.

@msmagoo87

Encountered in v0.12.23. Please fix!

@renatoargh

Encountered in v0.12.24

@allymat

allymat commented Apr 9, 2020

Seems like this is still a big issue for lots of people - come on GitHub team, get this prioritized!

@kzap

kzap commented May 7, 2020

3 years and counting... Note to self: if you find this issue again in 1 year, don't worry, it's not your fault; add +1 to the yearly counter.

@llamahunter

So, https://www.terraform.io/docs/commands/import.html#provider-configuration says that provider configuration during import can depend on variables (but not data sources). Yet this does not work if the provider is in a module.

@abrahamoshel

abrahamoshel commented May 21, 2020

So, https://www.terraform.io/docs/commands/import.html#provider-configuration says that provider configuration during import can depend on variables (but not data sources). Yet this does not work if the provider is in a module.

Just ran into the same problem. I am using a module that has outputs. It runs the plan and the apply perfectly, but it does not allow me to import or run terraform state rm.

@KursLabIgor

This topic and the following post helped me solve my import issue:
https://medium.com/@lhk.igor/this-is-how-i-solved-problem-of-importing-resources-in-terraform-with-dynamic-provider-a2f255a9a303

@mhenniges

I faced this issue as well. After reading this thread and all the workarounds, I decided to first just try commenting out basically all of my other Terraform code before importing. The import worked perfectly, and after uncommenting everything again, I was back to a clean plan.

@spikewang

Is there a way to manually create the state for a resource that isn't even related to the affected provider?

I have the provider dynamically set in the module as well, but when trying to import an existing kubernetes namespace it still fails on the aws provider... which doesn't make any sense, since it's not even related...

@archoversight

archoversight commented Aug 13, 2020

This is an issue even when it is not the provider that is dynamically configured: if a value is generated in an upstream module and passed through as a variable to a downstream module, the import fails because the downstream module never receives the upstream data correctly.

This makes it very difficult to import the resource.

Similar to this comment: #13018 (comment)

@brikis98
Contributor

brikis98 commented Sep 2, 2020

We ran into this issue as well. Any time we had a module with a provider block nested in it, and that provider block used any dynamic data (e.g., set region to a variable that was passed in), import would no longer work.

I just updated Terragrunt with a new aws-provider-patch command, which is an experimental, temporary workaround for this issue. You can run a command like this:

terragrunt aws-provider-patch --terragrunt-override-attr region=eu-west-1

And Terragrunt will:

  1. Run terraform init to download the code for all your modules into .terraform/modules.
  2. Scan all the Terraform code in .terraform/modules, find AWS provider blocks, and hard-code the region param to eu-west-1 for each one. You can of course set other params to override using the --terragrunt-override-attr option.

Once you do this, you'll hopefully be able to run import on that module. After that, you can delete the modified .terraform/modules and go back to normal.

Note that you should be able to use this even if you're not a Terragrunt user. Just add an empty terragrunt.hcl to any Terraform module you're using (e.g., run touch terragrunt.hcl), and run the same terragrunt aws-provider-patch command as above.

This is obviously an ugly, hacky solution, but as this bug has been open for ~3 years now, I figured an ugly solution is better than none. The fix is available in Terragrunt v0.23.40 and above. Give it a try and let me know if it works! PRs to improve it further are also welcome.
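For reference, the non-Terragrunt sequence described above would look roughly like this (a sketch; the resource address, import ID, and region are placeholders):

touch terragrunt.hcl                       # empty file so Terragrunt can run in this module
terragrunt aws-provider-patch --terragrunt-override-attr region=eu-west-1
terraform import <resource-address> <id>   # should now succeed against the patched modules
# afterwards, delete the modified .terraform/modules and re-run terraform init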

brikis98 added a commit to gruntwork-io/terragrunt that referenced this issue Oct 15, 2020
I have come across yet another `import` bug with Terraform. The [original one](hashicorp/terraform#13018) prompted us to add the `aws-provider-patch` command. The [new one](hashicorp/terraform#26211), which is the motivation for this PR, requires that the `aws-provider-patch` command can override not only top-level attributes in the `provider` block, but also attributes nested within blocks within the `provider` block, such as a `role_arn` within an `assume_role` block. This PR allows you to specify these nested attributes using a "dot" notation: e.g., `--terragrunt-override-attr assume_role.role_arn=""`.
gokuldaemon pushed a commit to Daemon-Solutions/tf-aws-asg-ebs-persist that referenced this issue Oct 22, 2020
Provider is not used anywhere else in the module.
It can also be a source of an error as per  hashicorp/terraform#13018
@JohannesAtGit

@brikis98
This terragrunt aws-provider-patch worked fine for us. Thank you for this, but we have a similar problem in Azure!
Is there any plan to fix this in Terraform itself?

@brikis98
Contributor

brikis98 commented Feb 4, 2021

@JohannesAtGit We aren't currently focused on Azure, so it's hard to carve out time for that right now, but if someone wants to make aws-provider-patch more generic (e.g., provider-patch), we'd very much welcome PRs!

@wagnst

wagnst commented Feb 15, 2021

Any update on this? Still facing this in 0.14.

@jbardin
Member

jbardin commented Feb 25, 2021

Hello All!

Sorry about the delay, but I have confirmed that the original issue as presented here now works in the latest release.

Since the original examples here are strictly concerned with using a provider which obtains its configuration via module variables, and those cases presented are now working in the current development release, I'm going to close this out. A lot has changed in Terraform over the course of this thread, and it's becoming difficult to follow which parts here could be relevant to the current version of Terraform.

Note that now we no longer recommend provider configurations within modules, and have better methods for explicitly passing provider configuration into modules. This form of provider configuration will also work with import when adapted to the configurations presented here.
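Adapted to the configuration from the original report, that explicit pattern looks roughly like this (a sketch; the alias name is illustrative):

# ./main.tf
provider "aws" {
  alias  = "usw1"
  region = "us-west-1"
}

module "module_us_west_1" {
  source = "./module"
  providers = {
    aws = aws.usw1
  }
}

# ./module/main.tf -- no provider block; the configuration is passed in
resource "aws_cloudwatch_log_group" "rds_os" {
  name              = "RDSOSMetrics"
  retention_in_days = 30
}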

As stated in the docs, it's still not possible to use a data source which is not already present in the state for provider configuration during import. This is a known shortcoming, and I'm going to open a new issue specifically for this concern.

If there are other examples where providers are not configured properly during import, please test with the latest development release and file a new issue, or ask in the community forum where there are more people ready to help.

Thanks!

@llamahunter

llamahunter commented Mar 8, 2021

Still not working with terraform 0.14.7

% terraform import kubernetes_config_map.qea1 kafka-copier-core/qea1-backup-topics
Acquiring state lock. This may take a few moments...

Error: Invalid provider configuration

  on /Users/rdlee/Documents/tivo-research/git/inception-k8s-deployment-dev/backup/tek2/main.tf line 58:
  58: provider "kubernetes" {

The configuration for provider["registry.terraform.io/hashicorp/kubernetes"]
depends on values that cannot be determined until apply.


Error: Invalid provider configuration

  on /Users/rdlee/Documents/tivo-research/git/inception-k8s-deployment-dev/backup/tek2/main.tf line 64:
  64: provider "k8sraw" {

The configuration for provider["terraform.tivo.com/nabancard/k8sraw"] depends
on values that cannot be determined until apply.

Releasing state lock. This may take a few moments...

These providers are at the root level, not in a module.

@llamahunter

See #27934 for an update on how to fix this.

hashicorp locked as resolved and limited conversation to collaborators Mar 8, 2021