
[private-cluster-update-variant] Module does not allow updating node pool labels without pool recreation #1695

Closed
alarex opened this issue Jul 20, 2023 · 3 comments · Fixed by #1698
Labels
bug Something isn't working

Comments

@alarex

alarex commented Jul 20, 2023

TL;DR

The private-cluster-update-variant and beta-private-cluster-update-variant modules still force node pools to be recreated when labels change, even though labels can be updated in-place since provider version 4.48.0.

Can this be fixed by simply removing the labels block from the keepers?
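For context, a minimal sketch of how the update-variant keeper mechanism presumably works (resource and variable names here are illustrative, not the module's actual source):

```hcl
# Hedged sketch, not the module's exact code: the update-variant modules
# derive a random_id suffix from node pool attributes and embed it in the
# pool name, so changing any keeper rotates the suffix and recreates the pool.
resource "random_id" "name" {
  byte_length = 2
  keepers = {
    machine_type = var.machine_type
    disk_size_gb = var.disk_size_gb
    # Dropping labels from the keepers should stop label-only changes from
    # rotating the suffix, since labels are updatable in-place on
    # google_container_node_pool as of provider 4.48.0.
    labels = jsonencode(var.node_pools_labels)
  }
}
```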

Expected behavior

Updating labels for a node pool updates it in-place.

Observed behavior

Updating labels through the module forces the random_id resource to be recreated, which in turn causes the node_pool to be recreated as well.

Terraform Configuration

module "gke-usc1" {
  source     = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster-update-variant"
  version    = "27.0.0"
### Non-important stuff
  node_pools = [
    {
      name                        = "g2-std-4"
      accelerator_count           = 1
      accelerator_type            = "nvidia-l4"
      auto_repair                 = true
      auto_upgrade                = true
      autoscaling                 = true
      cpu_cfs_quota               = true
      disk_size_gb                = 60
      disk_type                   = "pd-ssd"
      enable_gcfs                 = false
      enable_gvnic                = false
      enable_integrity_monitoring = true
      enable_secure_boot          = true
      image_type                  = "COS_CONTAINERD"
      initial_node_count          = 0
      local_ssd_count             = 0
      local_ssd_ephemeral_count   = 0
      location_policy             = "BALANCED"
      machine_type                = "g2-standard-4"
      max_count                   = 1024
      max_pods_per_node           = 16
      max_surge                   = 3
      max_unavailable             = 1
      min_count                   = 0
      node_locations              = "us-central1-a,us-central1-b"
    },
  ]
  node_pools_labels = {
    all = {}
    g2-std-4 = {
      "example_label" = "true",
    }
  }
}

Terraform Version

1.5.0

Additional information

No response

@alarex alarex added the bug Something isn't working label Jul 20, 2023
@lauraseidler
Contributor

We're running into the same issue. I'll test whether removing the keeper is enough, and will open a PR.

lauraseidler added a commit to lauraseidler/terraform-google-kubernetes-engine that referenced this issue Jul 24, 2023
Will cause node pools to be re-created on update, unless remote state
of the `random_id` resources is manually modified to reflect the new keepers.

Fixes terraform-google-modules#1695
@github-actions

This issue is stale because it has been open 60 days with no activity. Remove the stale label or comment, or this will be closed in 7 days.

@github-actions github-actions bot added the Stale label Sep 22, 2023
@lauraseidler
Contributor

Not stale; this is still an open issue.

@github-actions github-actions bot removed the Stale label Sep 23, 2023
lauraseidler added a commit to lauraseidler/terraform-google-kubernetes-engine that referenced this issue Oct 12, 2023
Will cause node pools to be re-created on update, unless remote state
of the `random_id` resources is manually modified to reflect the new keepers.

Fixes terraform-google-modules#1695