
Error 400 creating google_container_cluster on v3.29.0 #6744

Closed
Eshanel opened this issue Jul 7, 2020 · 8 comments · Fixed by GoogleCloudPlatform/magic-modules#3732 or hashicorp/terraform-provider-google-beta#2260
Labels: bug, forward/review (In review; remove label to forward), service/container

Comments

Eshanel commented Jul 7, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version

Terraform v0.12.25 - Terraform Cloud

  • provider.google v3.29.0

Affected Resource(s)

  • google_container_cluster

Terraform Configuration Files

resource "google_container_cluster" "cluster" {
  provider    = google-beta
  name        = var.CLUSTER_NAME
  location    = var.CLUSTER_ZONE
  description = var.CLUSTER_DESCRIPTION

  // https://www.terraform.io/docs/providers/google/r/container_cluster.html
  // We can't create a cluster with no node pool defined, but we want to only use
  // separately managed node pools. So we create the smallest possible default
  // node pool and immediately delete it.
  remove_default_node_pool = true
  initial_node_count       = 1

  enable_binary_authorization = false
  enable_kubernetes_alpha     = false
  enable_legacy_abac          = false
  enable_shielded_nodes       = true
  enable_intranode_visibility = true
  default_max_pods_per_node   = 110
  logging_service             = "logging.googleapis.com/kubernetes"
  monitoring_service          = "monitoring.googleapis.com/kubernetes"
  network                     = google_compute_network.cluster_vpc.self_link
  subnetwork                  = google_compute_subnetwork.cluster_vpc_subnetwork.self_link
  resource_labels             = var.GOOGLE_LABELS

  addons_config {
    horizontal_pod_autoscaling {
      disabled = false
    }
    http_load_balancing {
      disabled = false
    }
    network_policy_config {
      disabled = false
    }
  }
  cluster_autoscaling {
    enabled = false
  }
  database_encryption {
    state    = "ENCRYPTED"
    key_name = google_kms_crypto_key.cluster_kms_key_etcd.self_link
  }
  ip_allocation_policy {
    cluster_ipv4_cidr_block  = ""
    services_ipv4_cidr_block = ""
  }
  maintenance_policy {
    daily_maintenance_window {
      start_time = "11:00"
    }
  }
  master_auth {
    client_certificate_config {
      issue_client_certificate = false
    }
  }
  master_authorized_networks_config {
    dynamic "cidr_blocks" {
      for_each = var.CLUSTER_MASTER_AUTHORIZED_IPS
      content {
        display_name = cidr_blocks.key
        cidr_block   = cidr_blocks.value
      }
    }
  }
  network_policy {
    enabled = true
  }
  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }
  release_channel {
    channel = "REGULAR"
  }
}

Expected Behavior

The cluster should be created.

Actual Behavior

The plan was successful, but the apply failed with an error from the Google API:

Error: googleapi: Error 400: DefaultMaxPodsConstraint can only be used if IpAllocationPolicy.UseIpAliases is true., badRequest

  on .terraform/modules/k8s/main.tf line 81, in resource "google_container_cluster" "cluster":
  81: resource "google_container_cluster" "cluster" {

Steps to Reproduce

  1. terraform apply

Important Factoids

The plan is run from Terraform Cloud.
We have also been running this config every morning, successfully until today.

@ghost ghost added the bug label Jul 7, 2020
@jahernandezmartinez13

This was failing for me as well with provider.google v3.29.
I had to downgrade to terraform-provider-google_v3.14.0_x5 to make it work again.

Eshanel commented Jul 7, 2020

I found a successful workaround: setting the beta argument networking_mode to VPC_NATIVE.

I wonder if the problem is that the default blank cluster_ipv4_cidr_block and services_ipv4_cidr_block in the ip_allocation_policy block no longer infer VPC_NATIVE mode for the cluster.

FYI: My configuration now looks like this:

resource "google_container_cluster" "cluster" {
  provider    = google-beta
  name        = var.CLUSTER_NAME
  location    = var.CLUSTER_ZONE
  description = var.CLUSTER_DESCRIPTION

  // https://www.terraform.io/docs/providers/google/r/container_cluster.html
  // We can't create a cluster with no node pool defined, but we want to only use
  // separately managed node pools. So we create the smallest possible default
  // node pool and immediately delete it.
  remove_default_node_pool = true
  initial_node_count       = 1

  enable_binary_authorization = false
  enable_kubernetes_alpha     = false
  enable_legacy_abac          = false
  enable_shielded_nodes       = true
  enable_intranode_visibility = true
  default_max_pods_per_node   = 110
  logging_service             = "logging.googleapis.com/kubernetes"
  monitoring_service          = "monitoring.googleapis.com/kubernetes"
  networking_mode             = "VPC_NATIVE" // Added to avoid cluster creation error
  network                     = google_compute_network.cluster_vpc.self_link
  subnetwork                  = google_compute_subnetwork.cluster_vpc_subnetwork.self_link
  resource_labels             = var.GOOGLE_LABELS

  addons_config {
    horizontal_pod_autoscaling {
      disabled = false
    }
    http_load_balancing {
      disabled = false
    }
    network_policy_config {
      disabled = false
    }
  }
  cluster_autoscaling {
    enabled = false
  }
  database_encryption {
    state    = "ENCRYPTED"
    key_name = google_kms_crypto_key.cluster_kms_key_etcd.self_link
  }
  ip_allocation_policy {
    cluster_ipv4_cidr_block  = ""
    services_ipv4_cidr_block = ""
  }
  maintenance_policy {
    daily_maintenance_window {
      start_time = "11:00"
    }
  }
  master_auth {
    client_certificate_config {
      issue_client_certificate = false
    }
  }
  master_authorized_networks_config {
    dynamic "cidr_blocks" {
      for_each = var.CLUSTER_MASTER_AUTHORIZED_IPS
      content {
        display_name = cidr_blocks.key
        cidr_block   = cidr_blocks.value
      }
    }
  }
  network_policy {
    enabled = true
  }
  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }
  release_channel {
    channel = "REGULAR"
  }
}


milentzvetkov commented Jul 7, 2020

I can confirm the workaround works. I spent a good 2-3 hours wondering why everything was fine just 2 days ago...
Thanks @Eshanel, I owe you one for this.
The latest working provider is 3.28, as far as I can see in my Cloud Build history...


M3t0r commented Jul 7, 2020

As @jahernandezmartinez13 pointed out, just using a previous version works. But since it took me a while to find the syntax for specifying versions, I'm posting what helped me here:

provider "google" {
  version = "~> 3.18, != 3.29.0"
  project = local.gcp_project
  region  = local.region
}

provider "google-beta" {
  version = "~> 3.18, != 3.29.0"
  project = local.gcp_project
  region  = local.region
}

@farmdawgnation

Hi folks,

I'm just here to note that I ran into a related issue with a different error message. While attempting to create a private cluster, I received:

Alias IP addresses are required for private cluster, please make sure you enable alias IPs when creating a cluster.

My ip_allocation_policy uses cluster_secondary_range_name and services_secondary_range_name to call out specific ranges in the network to pull addresses from. It appears I had to specify networking_mode = "VPC_NATIVE" explicitly to get things working; a trimmed sketch follows.
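A minimal sketch of that shape (not my exact config; the network references and the "pods"/"services" secondary range names are hypothetical placeholders):

resource "google_container_cluster" "private_cluster" {
  provider = google-beta
  name     = "private-cluster"
  location = "us-central1-a"

  # Explicitly opting into alias IPs; this is what resolved the error.
  networking_mode = "VPC_NATIVE"

  network    = google_compute_network.vpc.self_link
  subnetwork = google_compute_subnetwork.subnet.self_link

  # Pull pod and service addresses from named secondary ranges defined
  # on the subnetwork (the range names here are placeholders).
  ip_allocation_policy {
    cluster_secondary_range_name  = "pods"
    services_secondary_range_name = "services"
  }

  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }
}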


megan07 commented Jul 8, 2020

Sorry for the mix-up with this! Yes, networking_mode = "VPC_NATIVE" is what we want to use going forward. Previously it was the default whenever ip_allocation_policy was set, but we changed that in the most recent release and unfortunately missed this backward-compatibility case in the implementation. I have created a PR to address it. Thanks for your patience!
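Until that fix ships, the interim change for configs like the one in this issue is the one-line explicit opt-in; a trimmed sketch, showing only the relevant lines (the rest of the resource stays unchanged):

resource "google_container_cluster" "cluster" {
  # ... other arguments as in the original config ...

  # Explicit opt-in restores the behavior that was previously implied
  # by the presence of an ip_allocation_policy block.
  networking_mode = "VPC_NATIVE"

  ip_allocation_policy {
    cluster_ipv4_cidr_block  = ""
    services_ipv4_cidr_block = ""
  }
}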

@farmdawgnation

No worries, thanks for the quick reply


ghost commented Aug 8, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked and limited conversation to collaborators Aug 8, 2020
@github-actions github-actions bot added service/container forward/review In review; remove label to forward labels Jan 14, 2025