
Can not use dynamic Service Account #27

Closed
AdrienWalkowiak opened this issue Oct 30, 2018 · 20 comments · Fixed by #225
Labels: bug (Something isn't working)

@AdrienWalkowiak

I am trying to use this module, based on the provided examples, but can't seem to get it to work. It used to work fine a few days ago, but not anymore.

Here is the error I get:

Warning: module.gke-cluster.google_container_cluster.primary: "region": [DEPRECATED] This field is in beta and will be removed from this provider. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

Warning: module.gke-cluster.google_container_node_pool.pools: "node_config.0.taint": [DEPRECATED] This field is in beta and will be removed from this provider. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

Warning: module.gke-cluster.google_container_node_pool.pools: "region": [DEPRECATED] This field is in beta and will be removed from this provider. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

Warning: module.gke-cluster.google_container_node_pool.zonal_pools: "node_config.0.taint": [DEPRECATED] This field is in beta and will be removed from this provider. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

Warning: module.project.google_project.project: "app_engine": [DEPRECATED] Use the google_app_engine_application resource instead.

Error: module.gke-cluster.google_container_node_pool.pools: node_config.0.tags: should be a list

Error: module.gke-cluster.google_container_node_pool.pools: node_config.0.taint: should be a list

Error: module.gke-cluster.google_container_node_pool.zonal_pools: node_config.0.tags: should be a list

Error: module.gke-cluster.google_container_node_pool.zonal_pools: node_config.0.taint: should be a list

And here is the Terraform code used:

module "gke-cluster" {
  source                     = "github.com/terraform-google-modules/terraform-google-kubernetes-engine"
  project_id                 = "${local.project_id}"
  name                       = "${local.gke_cluster_name}"
  network                    = "${local.network_name}"
  subnetwork                 = "${local.subnetwork_name}"
  region                     = "${var.default_region}"
  zones                      = "${var.default_zones}"
  ip_range_pods              = "${var.default_region}-gke-01-pods"
  ip_range_services          = "${var.default_region}-gke-01-services"
  http_load_balancing        = true
  horizontal_pod_autoscaling = true
  kubernetes_dashboard       = true
  network_policy             = true
  kubernetes_version         = "1.10.6-gke.6"



  node_pools = [
    {
      name            = "default-node-pool"
      machine_type    = "${var.node_pool_machine_type}"
      min_count       = 1
      max_count       = 10
      disk_size_gb    = 100
      disk_type       = "pd-standard"
      image_type      = "COS"
      auto_repair     = true
      auto_upgrade    = true
      service_account = "${module.project.service_account_name}"
    },
  ]

  node_pools_labels = {
    all = {}

    default-node-pool = {
      default-node-pool = "true"
    }
  }

  node_pools_taints = {
    all = []

    default-node-pool = [
      {
        key    = "default-node-pool"
        value  = "true"
        effect = "PREFER_NO_SCHEDULE"
      },
    ]
  }

  node_pools_tags = {
    all = []

    default-node-pool = [
      "default-node-pool",
    ]
  }
}
@ogreface commented Nov 20, 2018

Hi @AdrienWalkowiak,

Is this still an issue for you? I've been unable to duplicate the issue. I'm able to correctly plan and apply using just about the same terraform you pasted. I've put what I'm using below.

If this is still an issue you're running into, could you send me your whole project?

Best,

Rishi

My main.tf

module "gke" {
  source = "github.com/terraform-google-modules/terraform-google-kubernetes-engine"

  project_id        = "${var.project_id}"
  name              = "deploy-service-cluster"
  region            = "${var.region}"
  network           = "${var.network}"
  subnetwork        = "${var.subnetwork}"
  ip_range_pods     = "${var.ip_range_pods}"
  ip_range_services = "${var.ip_range_services}"
  http_load_balancing = true
  horizontal_pod_autoscaling = true
  kubernetes_dashboard = true
  network_policy = true
  kubernetes_version = "1.11.2-gke.18"



  node_pools = [
    {
      name = "default-node-pool"
      machine_type = "n1-standard-2"
      min_count = 1
      max_count = 10
      disk_size_gb = 100
      disk_type = "pd-standard"
      image_type = "COS"
      auto_repair = true
      auto_upgrade = true
    },
  ]

  node_pools_labels = {
    all = {}

    default-node-pool = {
      default-node-pool = "true"
    }
  }

  node_pools_taints = {
    all = []

    default-node-pool = [
      {
        key = "default-node-pool"
        value = "true"
        effect = "PREFER_NO_SCHEDULE"
      },
    ]
  }

  node_pools_tags = {
    all = []

    default-node-pool = [
      "default-node-pool",
    ]
  }
}

My vars

/**
 * Copyright 2018 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

variable "project_id" {
  description = "The project ID to host the cluster in (required)"
}

variable "name" {
  description = "The name of the cluster (required)"
}

variable "description" {
  description = "The description of the cluster"
  default     = ""
}

variable "regional" {
  description = "Whether to create a regional cluster (zonal cluster if set to false. WARNING: changing this after cluster creation is destructive!)"
  default     = true
}

variable "region" {
  description = "The region to host the cluster in (required)"
}

variable "zones" {
  type        = "list"
  description = "The zones to host the cluster in (optional if regional cluster / required if zonal)"
  default     = []
}

variable "network" {
  description = "The VPC network to host the cluster in (required)"
}

variable "network_project_id" {
  description = "The project ID of the shared VPC's host (for shared vpc support)"
  default     = ""
}

variable "subnetwork" {
  description = "The subnetwork to host the cluster in (required)"
}

variable "kubernetes_version" {
  description = "The Kubernetes version of the masters. If set to 'latest' it will pull latest available version in the selected region."
  default     = "1.10.6"
}

variable "node_version" {
  description = "The Kubernetes version of the node pools. Defaults to the kubernetes_version (master) variable and can be overridden for individual node pools by setting the version key on them. Must be empty or set to the same as the master at cluster creation."
  default     = ""
}

variable "master_authorized_networks_config" {
  type = "list"

  description = <<EOF
  The desired configuration options for master authorized networks. Omit the nested cidr_blocks attribute to disallow external access (except the cluster node IPs, which GKE automatically whitelists)

  ### example format ###
  master_authorized_networks_config = [{
    cidr_blocks = [{
      cidr_block   = "10.0.0.0/8"
      display_name = "example_network"
    }],
  }]

  EOF

  default = []
}

variable "horizontal_pod_autoscaling" {
  description = "Enable horizontal pod autoscaling addon"
  default     = false
}

variable "http_load_balancing" {
  description = "Enable HTTP load balancer addon"
  default     = true
}

variable "kubernetes_dashboard" {
  description = "Enable kubernetes dashboard addon"
  default     = false
}

variable "network_policy" {
  description = "Enable network policy addon"
  default     = false
}

variable "maintenance_start_time" {
  description = "Time window specified for daily maintenance operations in RFC3339 format"
  default     = "05:00"
}

variable "ip_range_pods" {
  description = "The secondary ip range to use for pods"
}

variable "ip_range_services" {
  description = "The secondary ip range to use for services"
}

variable "node_pools" {
  type        = "list"
  description = "List of maps containing node pools"

  default = [
    {
      name = "default-node-pool"
    },
  ]
}

variable "node_pools_labels" {
  type        = "map"
  description = "Map of maps containing node labels by node-pool name"

  default = {
    all               = {}
    default-node-pool = {}
  }
}

variable "node_pools_taints" {
  type        = "map"
  description = "Map of lists containing node taints by node-pool name"

  default = {
    all               = []
    default-node-pool = []
  }
}

variable "node_pools_tags" {
  type        = "map"
  description = "Map of lists containing node network tags by node-pool name"

  default = {
    all               = []
    default-node-pool = []
  }
}

variable "stub_domains" {
  type        = "map"
  description = "Map of stub domains and their resolvers to forward DNS queries for a certain domain to an external DNS server"
  default     = {}
}

variable "non_masquerade_cidrs" {
  type        = "list"
  description = "List of strings in CIDR notation that specify the IP address ranges that do not use IP masquerading."
  default     = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
}

variable "ip_masq_resync_interval" {
  description = "The interval at which the agent attempts to sync its ConfigMap file from the disk."
  default     = "60s"
}

variable "ip_masq_link_local" {
  description = "Whether to masquerade traffic to the link-local prefix (169.254.0.0/16)."
  default     = "false"
}

variable "logging_service" {
  description = "The logging service that the cluster should write logs to. Available options include logging.googleapis.com, logging.googleapis.com/kubernetes (beta), and none"
  default     = "logging.googleapis.com"
}

variable "monitoring_service" {
  description = "The monitoring service that the cluster should write metrics to. Automatically sends metrics from pods in the cluster to the Google Cloud Monitoring API. VM metrics will be collected by Google Compute Engine regardless of this setting. Available options include monitoring.googleapis.com, monitoring.googleapis.com/kubernetes (beta), and none."
  default     = "monitoring.googleapis.com"
}

And my outputs

/**
 * Copyright 2018 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
output "name_example" {
  description = "Cluster name"
  value       = "${module.gke.name}"
}

output "endpoint_example" {
  sensitive   = true
  description = "Cluster endpoint"
  value       = "${module.gke.endpoint}"
}

output "location_example" {
  description = "Cluster location"
  value       = "${module.gke.location}"
}

output "zones_example" {
  description = "List of zones in which the cluster resides"
  value       = "${module.gke.zones}"
}

output "node_pools_names_example" {
  value = "${module.gke.node_pools_names}"
}

output "node_pools_versions_example" {
  value = "${module.gke.node_pools_versions}"
}

@AdrienWalkowiak (Author)

Thank you for checking. I tried using your code and it seems to get past the error, so I will close this issue and see what's wrong on my end, probably a syntax issue.

Thanks

@zbutt-muvaki commented Jan 18, 2019

This is definitely an issue.

I get the following error:

Warning: module.kubernetes-engine.google_container_node_pool.pools: "node_config.0.taint": [DEPRECATED] This field is in beta and will be removed from this provider. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

Warning: module.kubernetes-engine.google_container_node_pool.zonal_pools: "node_config.0.taint": [DEPRECATED] This field is in beta and will be removed from this provider. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

Error: module.kubernetes-engine.google_container_node_pool.pools: node_config.0.taint: should be a list

Error: module.kubernetes-engine.google_container_node_pool.zonal_pools: node_config.0.taint: should be a list

My config is pretty straightforward:

module "kubernetes-engine" {
    source   = "github.com/terraform-google-modules/terraform-google-kubernetes-engine"

    project_id                 = "${var.project-id}"
    name                       = "${var.environment}-gke"

    region            = "us-central1"
    network           = "${module.vpc.name}"
    subnetwork        = "us-central1"
    service_account   = "${module.gke-sa.email}"

    kubernetes_version = "1.11.6-gke.2"

    http_load_balancing        = true
    horizontal_pod_autoscaling = true

    # enables vpc-native
    ip_range_pods       = "${var.environment}-gke-pod-range"
    ip_range_services   = "${var.environment}-gke-services-range"

    remove_default_node_pool = "true"

    node_pools = [
        {
            name            = "standard"
            machine_type    = "n1-standard-4"
            min_count       = 1
            max_count       = 10
            disk_size_gb    = 100
            disk_type       = "pd-standard"
            image_type      = "COS"
            auto_repair     = true
            auto_upgrade    = true
            service_account = "${module.gke-sa.email}"
            preemptible     = false
        },
    ]

    node_pools_labels = {
        all = {
            type = "gke"
            provisioner = "terraform"
        }

        standard = {
            node_cluster = "standard"
        }
    }

    node_pools_taints = {
        all = []
        standard = []
    }

    # control firewall for nodes via this tag
    node_pools_tags = {
        all = [
            "gke-cluster"
        ]
    }
}

I am thinking this should fix it; it needs to be applied in both regional.tf and zonal.tf:

taint        = ["${concat(var.node_pools_taints["all"], var.node_pools_taints[lookup(var.node_pools[count.index], "name")])}"]
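In context, that line would sit inside the node_config block of both pool resources, roughly like this (a sketch only, with the surrounding arguments elided rather than the module's actual file contents):

resource "google_container_node_pool" "pools" {
  # ... count and the other pool arguments as in the module's regional.tf
  node_config {
    # ... other node_config arguments unchanged
    taint = ["${concat(var.node_pools_taints["all"], var.node_pools_taints[lookup(var.node_pools[count.index], "name")])}"]
  }
}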

@morgante reopened this Jan 19, 2019
@liafizan commented Feb 13, 2019

I have the same issue now. I'm trying to deploy a GKE cluster and keep getting this error:

Error: module.gke.google_container_node_pool.pools: "node_config.0.taint": [REMOVED] This field is in beta. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

The taint option does not seem to be available now.

I removed the taint option from the local module and it worked fine. I do not see this option in the documentation for node pools. However, the upstream API docs still reference taints.

Also, the error suggests using the beta provider, and I did, but from my understanding this option has been deprecated.

@tommyknows (Contributor) commented Feb 18, 2019

Same for me with a minimal config:

provider "google-beta" {
  project = "${var.project_id}"
  region  = "${var.region}"
}


module "gke" {
  source               = "terraform-google-modules/kubernetes-engine/google"
  project_id           = "${var.project_id}"
  name                 = "${var.cluster_name}"
  region               = "${var.region}"
  zones                = ["${var.cluster_zone}"]
  network              = "${var.network_name}"
  subnetwork           = "${var.subnetwork_name}"
  ip_range_pods        = "${var.ip_range_pods}"
  ip_range_services    = "${var.ip_range_services}"
  service_account      = "[SERVICE ACCOUNT NAME]"
  kubernetes_dashboard = true
}

terraform validate gives me:

Error: module.gke.google_container_node_pool.pools: "node_config.0.taint": [REMOVED] This field is in beta. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

@deenski commented Feb 26, 2019

Same here:

module "kubernetes-engine" {
  source  = "github.com/terraform-google-modules/terraform-google-kubernetes-engine"
  project_id                 = "${var.project_id}"
  name                       = "${var.cluster_name}"
  region                     = "${var.region}"
  zones                      = "${var.zones}"
  network                    = "${var.network}"
  subnetwork                 = "${var.subnetwork}"
  ip_range_pods              = "${var.ip_range_pods}"
  ip_range_services          = "${var.ip_range_services}"
  http_load_balancing        = true
  horizontal_pod_autoscaling = true
  kubernetes_dashboard       = true
  network_policy             = true

  node_pools = [
    {
      name            = "default-node-pool"
      machine_type    = "${var.default_node_pool_instance_type}"
      min_count       = "${var.default_node_pool_min_count}"
      max_count       = "${var.default_node_pool_max_count}"
      disk_size_gb    = 20
      disk_type       = "pd-ssd"
      image_type      = "COS"
      auto_repair     = true
      auto_upgrade    = true
      service_account = "${var.default_node_pool_service_account}"
      preemptible     = false
    },
  ]
  node_pools_tags = "${var.node_pool_tags}"
}

plan output:

Error: module.kubernetes-engine.google_container_node_pool.pools: "node_config.0.taint": [REMOVED] This field is in beta. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/provider_versions.html for more details.

@ogreface commented Feb 27, 2019

@deenski @tommyknows @faizan82 Can any of you let me know what version of the provider you're using? I can confirm the issue on 2.0.

@ogreface commented Feb 27, 2019

OK, so the documentation states that you need version 1.8 of the provider, and that is the supported configuration.

Software Dependencies
- Kubectl: kubectl 1.9.x
- Terraform and Plugins: Terraform 0.11.x, terraform-provider-google v1.8.0

I can confirm that the examples work with the provider pinned to 1.8.
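For reference, pinning to the supported provider version looks roughly like this (a minimal sketch; the project and region variables are assumptions carried over from the earlier examples):

provider "google" {
  version = "~> 1.8.0"
  project = "${var.project_id}"
  region  = "${var.region}"
}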

Using the 2.0 version of the provider is unsupported. If you are using that, you do need to be on the beta version of the Google provider. This configuration seems to work for me.

provider "google-beta" {
  version = "~> 2.0.0"
  project = "${var.project_id}"
  region  = "${var.region}"
}

module "gke" {
  providers = {
    google ="google-beta"
  }
  source                 = "terraform-google-modules/kubernetes-engine/google"
  project_id                 = "${var.project_id}"
  name                       = "issue27-test-cluster"
  region                     = "us-east4"
  zones                      = ["us-east4-a"]
  network                    = "${var.network}"
  subnetwork                 = "${var.subnetwork}"
  ip_range_pods              = "${var.ip_range_pods}"
  ip_range_services          = "${var.ip_range_services}"
  http_load_balancing        = true
  horizontal_pod_autoscaling = true
  kubernetes_dashboard       = true
  network_policy             = true
}

Note that the only difference is you have to explicitly pass the beta provider into the module, so that it inherits correctly.

@morgante changed the title from "Deployment issues" to "Deployment issues with 2.0.0 provider" Mar 5, 2019
@deenski commented Mar 9, 2019

Can confirm, I was on the 2.0 version. The configuration @ogreface provided also works for me.

Edit: sorry for the delay.

@wadadli commented Mar 19, 2019

@ogreface -- we're giving this a shot right now. We even tried to explicitly pin to 2.0.0 of the beta provider and no dice. Seems this is now completely borked?

@ogreface

@wadadli Could you paste your code? Happy to take a look, but the example above still seems to work for me.

@wadadli commented Mar 20, 2019

Here's the tf that results in the following error:

Error: module.vault_kubernetes.google_container_node_pool.pools: node_config.0.tags: should be a list

module "vault_kubernetes" {
  providers = {
    google = "google-beta"
  }

  source                     = "terraform-google-modules/kubernetes-engine/google"
  name                       = "vault-${random_id.id.hex}"
  project_id                 = "${module.management_project.project_id}"
  network                    = "management"
  subnetwork                 = "mgmt-private-01"
  region                     = "us-east4"
  zones                      = ["us-west4-a", "us-west4-b", "us-west4-c"]
  ip_range_pods              = "kubernetes-pods"
  ip_range_services          = "kubernetes-services"
  http_load_balancing        = true
  horizontal_pod_autoscaling = true
  kubernetes_dashboard       = true
  network_policy             = true
  kubernetes_version         = "1.11.2-gke.18"

  node_pools = [
    {
      name         = "default-node-pool"
      machine_type = "n1-standard-2"
      min_count    = 1
      max_count    = 10
      disk_size_gb = 100
      disk_type    = "pd-standard"
      image_type   = "COS"
      auto_repair  = true
      auto_upgrade = true
    },
  ]

  node_pools_labels = {
    all = {}

    default-node-pool = {
      default-node-pool = "true"
    }
  }

  node_pools_taints = {
    all = []

    default-node-pool = [
      {
        key    = "default-node-pool"
        value  = "true"
        effect = "PREFER_NO_SCHEDULE"
      },
    ]
  }

  node_pools_tags = {
    all = []

    default-node-pool = [
      "default-node-pool",
    ]
  }
}

We have tried adding the

provider "google-beta" {}

to both _init.tf and within the same tf file as the resource above.

@ogreface commented Mar 21, 2019

@wadadli That TF pretty much works for me in terms of validation. Are you specifying the provider that's being passed in?

provider "google-beta" {
  version = "2.0.0"
  project = "${var.project_id}"
  region  = "${var.region}"
}

module "gke" {
  providers = {
    google = "google-beta"
  }

  source                     = "../../"
  name                       = "vault-foobar"
  project_id                 = "${var.project_id}"
  network                    = "management"
  subnetwork                 = "mgmt-private-01"
  region                     = "${var.region}"
  zones                      = "${var.zones}"
  ip_range_pods              = "kubernetes-pods"
  ip_range_services          = "kubernetes-services"
  http_load_balancing        = true
  horizontal_pod_autoscaling = true
  kubernetes_dashboard       = true
  network_policy             = true
  kubernetes_version         = "1.11.2-gke.18"

  node_pools = [
    {
      name         = "default-node-pool"
      machine_type = "n1-standard-2"
      min_count    = 1
      max_count    = 10
      disk_size_gb = 100
      disk_type    = "pd-standard"
      image_type   = "COS"
      auto_repair  = true
      auto_upgrade = true
    },
  ]

  node_pools_labels = {
    all = {}

    default-node-pool = {
      default-node-pool = "true"
    }
  }

  node_pools_taints = {
    all = []

    default-node-pool = [
      {
        key    = "default-node-pool"
        value  = "true"
        effect = "PREFER_NO_SCHEDULE"
      },
    ]
  }

  node_pools_tags = {
    all = []

    default-node-pool = [
      "default-node-pool",
    ]
  }
}
data "google_client_config" "default" {}

@g0blin79

I still have this issue.

I'm using the master code of this repo, downloaded a few minutes ago.
This is my Terraform cluster definition using this module:

module "kubernetes-cluster" {
  providers = {
    google = "google-beta"
  }

  source  = "github.com/terraform-google-modules/terraform-google-kubernetes-engine?ref=master"
  project_id         = "${var.project_id}"
  name               = "${var.cluster_name}"
  regional           = false
  region             = "${var.region}"
  zones              = ["${var.zone}"]
  network            = "${var.network_name}"
  subnetwork         = "${var.network_name}-subnet-01"
  ip_range_pods      = "${var.network_name}-pod-secondary-range"
  ip_range_services  = "${var.network_name}-services-secondary-range"
  kubernetes_version = "${var.kubernetes_version}"
  node_version       = "${var.kubernetes_version}"
  remove_default_node_pool = true
  disable_legacy_metadata_endpoints = "false"
  service_account = "create"

  node_pools = [
    {
      name            = "forge-pool"
      machine_type    = "n1-standard-2"
      min_count       = 1
      max_count       = 3
      disk_size_gb    = 100
      disk_type       = "pd-standard"
      image_type      = "COS"
      auto_repair     = true
      auto_upgrade    = false
      service_account = "${module.kubernetes-cluster.service_account}"
    },
  ]

  node_pools_metadata = {
    all = {}

    forge-pool = {}
  }

  node_pools_labels = {
    all = {}

    forge-pool = {
      forge-pool = "true"
    }
  }

  node_pools_taints = {
    all = []

    forge-pool = [
      {
        key    = "forge-pool"
        value  = "true"
        effect = "PREFER_NO_SCHEDULE"
      },
    ]
  }

  node_pools_tags = {
    all = []

    forge-pool = [
      "forge-pool",
    ]
  }
}

This is my provider configuration:

provider "google" {
  credentials = "${file("/root/.config/gcloud/application_default_credentials.json")}"
  project = "${var.project_id}"
  region = "${var.region}"
  zone = "${var.zone}"
  version = "~> 2.2"
}

provider "google-beta" {
  credentials = "${file("/root/.config/gcloud/application_default_credentials.json")}"
  project = "${var.project_id}"
  region = "${var.region}"
  zone = "${var.zone}"
  version = "~> 2.2"
}

Still getting these errors:

Error: module.kubernetes-cluster.google_container_node_pool.pools: node_config.0.taint: should be a list

Error: module.kubernetes-cluster.google_container_node_pool.zonal_pools: node_config.0.taint: should be a list

Is there some mistake I made?

@thiagonache commented Apr 12, 2019

Folks, I think I've found the issue. You must specifically use a variable; if you use a local or a module output, it fails. (A sketch of the variable wiring follows the two examples below.)

node_pools = [
    {
      name               = "default-node-pool"
      machine_type       = "n1-standard-2"
      min_count          = 0
      max_count          = 1
      disk_size_gb       = 100
      disk_type          = "pd-standard"
      image_type         = "COS"
      auto_repair        = true
      auto_upgrade       = true
      service_account    = "${local.default_service_account}"
      preemptible        = false
      initial_node_count = 0
    },
  ]

Does not work

node_pools = [
    {
      name               = "default-node-pool"
      machine_type       = "n1-standard-2"
      min_count          = 0
      max_count          = 1
      disk_size_gb       = 100
      disk_type          = "pd-standard"
      image_type         = "COS"
      auto_repair        = true
      auto_upgrade       = true
      service_account    = "${var.default_service_account}"
      preemptible        = false
      initial_node_count = 0
    },
  ]

works
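For completeness, the variable referenced in the working example could be wired up along these lines (a minimal sketch; the variable name matches the snippet above, while the description and the tfvars value are assumptions):

variable "default_service_account" {
  description = "Email of the service account to attach to the node pools"
  default     = ""
}

# Supplied at plan/apply time, e.g. in terraform.tfvars:
# default_service_account = "gke-nodes@my-project.iam.gserviceaccount.com"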

@aaron-lane (Contributor)

Hi all. I apologize for the persistence of this issue. A workaround in addition to the one shared by @thiagonache is to allow the module to create a dedicated service account for the cluster:

module "kubernetes_engine" {
  # ...
  service_account = "create"
}

@aaron-lane added the bug label May 27, 2019
@aaron-lane changed the title from "Deployment issues with 2.0.0 provider" to "Can not use dynamic Service Account" Jul 16, 2019
@kopachevsky (Contributor) commented Jul 25, 2019

Not reproducible.
Tested on TF 0.12 with the google/google-beta provider v2.9.0.
For the test I created two node pools, one with the service account name from another module's output, the second from locals:

locals {
  cluster_type = "node-pool"
  mysa = "xxxxxx@xxxxxx.iam.gserviceaccount.com"
}

provider "google" {
  version = "~> 2.9.0"
  region  = var.region
}

provider "google-beta" {
  version = "~> 2.9.0"
  region  = var.region
}

module "sa" {
    source = "./sa"
}

module "gke" {
  source                            = "../terraform-google-kubernetes-engine"
  project_id                        = var.project_id
  name                              = "${local.cluster_type}-cluster${var.cluster_name_suffix}"
  regional                          = false
  region                            = var.region
  zones                             = var.zones
  network                           = var.network
  subnetwork                        = var.subnetwork
  ip_range_pods                     = var.ip_range_pods
  ip_range_services                 = var.ip_range_services
  remove_default_node_pool          = true
  disable_legacy_metadata_endpoints = false

   node_pools = [
    {
      name            = "pool-01"
      min_count       = 1
      max_count       = 2
      service_account = module.sa.name
      auto_upgrade    = false
    },
    {
      name            = "pool-02"
      min_count       = 1
      max_count       = 2
      service_account = local.mysa
      auto_upgrade    = false
    },
  ]
}

Module file sa.tf:

locals {
    prefix = "xxxxxx"
    suffix = "xxxxxx.iam.gserviceaccount.com"
}

output "name" {
  value = "${local.prefix}@${local.suffix}"
}

@morgante (Contributor)

@kopachevsky Please attempt to reproduce when you include the SA in the same config as your module invocation.

i.e.

resource "google_service_account" "gke" {
  account_id   = "object-viewer"
  display_name = "Object viewer"
}

module "gke" {
  source                            = "../terraform-google-kubernetes-engine"
  project_id                        = var.project_id
  name                              = "${local.cluster_type}-cluster${var.cluster_name_suffix}"
  regional                          = false
  region                            = var.region
  zones                             = var.zones
  network                           = var.network
  subnetwork                        = var.subnetwork
  ip_range_pods                     = var.ip_range_pods
  ip_range_services                 = var.ip_range_services
  remove_default_node_pool          = true
  disable_legacy_metadata_endpoints = false

   node_pools = [
    {
      name            = "pool-01"
      min_count       = 1
      max_count       = 2
      service_account = module.sa.name
      auto_upgrade    = false
    },
    {
      name            = "pool-02"
      min_count       = 1
      max_count       = 2
      service_account = google_service_account.gke.email
      auto_upgrade    = false
    },
  ]
}

@kopachevsky (Contributor) commented Jul 30, 2019

@morgante this scenario is working fine, tested several times; see the gist that works for me: https://gist.github.com/kopachevsky/6152449ac8e2a177e0759564915ed84f

So a dynamic service account definition in the node_pools parameter works:

 node_pools = [
    {
      name            = "pool-01"
      min_count       = 1
      max_count       = 1 
      service_account = google_service_account.sa.email
      auto_upgrade    = false
      auto_repair     = false
    },
  ]

But if I set the service account for the default pool via the top-level service_account parameter:

module "gke" {
  source                       = "../terraform-google-kubernetes-engine"
  project_id                   = "gl-akopachevskyy-gke"
  initial_node_count           = 1
  service_account              =  google_service_account.gke.email

I'm getting the following error:

Error: Invalid count argument

  on .terraform/modules/gke/sa.tf line 37, in resource "google_service_account" "cluster_service_account":
  37:   count        = var.service_account == "create" ? 1 : 0

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.

A possible solution here is to add a new boolean parameter, create_service_account, true by default, and use it in this form:

module "gke" {
  source                   =  "../terraform-google-kubernetes-engine"
  project_id               =  "gl-akopachevskyy-gke"
  initial_node_count       =  1
  create_service_account   =  false
  service_account           =  google_service_account.gke.email
  //..other props
}
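Inside the module, the count on the service account resource would then depend only on the new boolean; a rough sketch of the relevant change (the variable description is an assumption, and the remaining resource arguments are elided):

variable "create_service_account" {
  description = "Defines if the module should create a service account for the cluster nodes"
  default     = true
}

# sa.tf (only the relevant change shown)
resource "google_service_account" "cluster_service_account" {
  count = var.create_service_account ? 1 : 0
  # ... existing account_id, display_name and project arguments unchanged
}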

What do you think?

@morgante (Contributor)

@kopachevsky That sounds good to me.

morgante added a commit that referenced this issue Aug 14, 2019
Bugfix: Can not use dynamic Service Account #27
morgante added a commit that referenced this issue Aug 14, 2019
morgante added a commit that referenced this issue Aug 15, 2019
morgante added a commit that referenced this issue Aug 15, 2019
morgante added a commit that referenced this issue Aug 20, 2019
morgante added a commit that referenced this issue Aug 28, 2019
morgante added a commit that referenced this issue Sep 25, 2019
CPL-markus pushed a commit to WALTER-GROUP/terraform-google-kubernetes-engine that referenced this issue Jul 15, 2024
Added a boolean create_service_account variable, true by default. After this change, google_service_account.cluster_service_account.count depends on the new variable and not on the service_account variable, which means the service_account variable can be dynamic from now on.
CPL-markus pushed a commit to WALTER-GROUP/terraform-google-kubernetes-engine that referenced this issue Jul 15, 2024
…27/dynamic-sa

Bugfix: Can not use dynamic Service Account terraform-google-modules#27