
Unable to use for_each to create multiple clusters at once: nested provider configuration for "kubernetes" #673

Closed
BundyQ opened this issue Sep 15, 2020 · 9 comments · Fixed by #777


@BundyQ

BundyQ commented Sep 15, 2020

I am sorry if this issue has been discussed before, but I couldn't find anything about it.

We are currently trying to create multiple clusters at once using Terraform's for_each. The idea was to use terraform-google-modules/project-factory to create projects for different environments (dev, stg, prd, ...) and afterwards create the same resources for each project. Unfortunately, when we try to call the GKE module with for_each like:

module "gke" {
  for_each = module.project_hierarchy.projects
  source = "terraform-google-modules/kubernetes-engine/google//modules/beta-public-cluster"
  project_id = each.value
  name = "cluster-${each.key}"
  create_service_account = true
  regional = false
  region = var.region
  zones = [var.zone]
  [...]
}

where each.key is the name of the environment and each.value is the newly created project ID, we get the error:

Error: Module does not support for_each

on main.tf line 36, in module "gke":
36: for_each = module.project_hierarchy.projects

Module "gke" cannot be used with for_each because it contains a nested
provider configuration for "kubernetes", at
.terraform/modules/gke/modules/beta-public-cluster/auth.tf:29,10-22.

This module can be made compatible with for_each by changing it to receive all
of its provider configurations from the calling module, by using the
"providers" argument in the calling module block.

Are we doing something wrong or is for_each simply not supported (yet)?

@bharathkkb
Member

Hi @BundyQ
Yes, this is one of the limitations I ran into while spot-testing the release candidate. The only workaround I found was to remove that kubernetes provider block, after which for_each worked.

@morgante, as this is the recommended approach from 0.11, what are your thoughts on making stub_domains, configure_ip_masq, etc. into a separate submodule?

@morgante
Contributor

@morgante, as this is the recommended approach from 0.11, what are your thoughts on making stub_domains, configure_ip_masq, etc. into a separate submodule?

I'd like to avoid it if possible, as it introduces additional complexity for users and wiring them together is non-trivial.

@bharathkkb
Member

bharathkkb commented Sep 23, 2020

I think there are two options:
i) We just remove the k8s provider from the module and give instructions on how to wire up the k8s provider (non-trivial, like you mentioned; see the sketch below). Perhaps an upgrade guide and a few examples would help with the transition?

ii) A submodule approach where we get the necessary info using the google_container_cluster data source and use that to instantiate the k8s provider. This still means the submodule would be incompatible with for_each; however, the interface would be minimal, requiring only the project_id, location, cluster_name, and the stub_domains/configure_ip_masq-specific info.

From an end-user perspective both are pretty big changes, so happy to discuss further if anyone has other ideas.
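A rough sketch of what the option (i) wiring could look like from the caller's side, assuming the module exposes endpoint and ca_certificate outputs and using a google_client_config data source for the access token:

data "google_client_config" "default" {}

provider "kubernetes" {
  load_config_file = false

  # Module outputs feed the provider at the root instead of inside the module.
  # Assumes the module exposes these outputs.
  host                   = "https://${module.gke.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.gke.ca_certificate)
}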

@PCatinean

Hi, Terraform 0.13 suggests the following: This module can be made compatible with count by changing it to receive all of its provider configurations from the calling module, by using the "providers" argument in the calling module block.

I still don't get how you can bypass this given this message. When you say to remove the provider block, @bohdanyurov-gl, do you mean in the local modules?

@ystoneman

ystoneman commented Oct 26, 2020

I had the same frustration with provider and count limitations when creating clusters and instances within an Aurora global DB, but it seems to be the direction Terraform is going: hashicorp/terraform#25120 (comment)

@johnatalima

Hi, Terraform 0.13 suggests the following: This module can be made compatible with count by changing it to receive all of its provider configurations from the calling module, by using the "providers" argument in the calling module block.

I still don't get how you can bypass this given this message. When you say to remove the provider block, @bohdanyurov-gl, do you mean in the local modules?

Yes. Remove the provider block in your local modules.

@thecodeassassin

Hi, Terraform 0.13 suggests the following: This module can be made compatible with count by changing it to receive all of its provider configurations from the calling module, by using the "providers" argument in the calling module block.
I still don't get how you can bypass this given this message. When you say to remove the provider block, @bohdanyurov-gl, do you mean in the local modules?

Yes. Remove the provider block in your local modules.

How do you plan to do this with Kubernetes?

provider "kubernetes" {
  load_config_file = false

  host  = google_container_cluster.gke_cluster.endpoint
  token = data.google_client_config.provider.access_token

  client_certificate     = base64decode(google_container_cluster.gke_cluster.master_auth[0].client_certificate)
  client_key             = base64decode(google_container_cluster.gke_cluster.master_auth[0].client_key)
  cluster_ca_certificate = base64decode(google_container_cluster.gke_cluster.master_auth[0].cluster_ca_certificate)
}

The provider needs arguments (such as host) from resources created in the module.
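One workaround, sketched here under the assumption that the module exports endpoint and ca_certificate outputs: move the provider block to the root and feed it the module's outputs. The catch is that provider blocks themselves cannot use for_each, so with for_each on the module each cluster key still needs its own hand-written aliased provider block:

data "google_client_config" "default" {}

# Provider blocks cannot use for_each, so one aliased block has to be
# written per cluster key ("dev" here is one illustrative key).
provider "kubernetes" {
  alias            = "dev"
  load_config_file = false

  host                   = "https://${module.gke["dev"].endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.gke["dev"].ca_certificate)
}

That per-key duplication is essentially the copy-pasting described below.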

@alvis

alvis commented Dec 10, 2020

@thecodeassassin I'm facing the same issue as you. Any luck in the end?

@thecodeassassin

@alvis Well, I didn't succeed, so basically I'm stuck copy-pasting modules :(
