
master_global_access_config repeatedly shows up in Terraform plan #7829

Closed
currankaushik opened this issue Nov 17, 2020 · 3 comments · Fixed by GoogleCloudPlatform/magic-modules#4343 or hashicorp/terraform-provider-google-beta#2816
Assignees: edwardmedia
Labels: bug, service/container, forward/review (In review; remove label to forward)

Comments

currankaushik commented Nov 17, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version

$ terraform -v
Terraform v0.13.5
+ provider registry.terraform.io/hashicorp/google v3.47.0
+ provider registry.terraform.io/hashicorp/google-beta v3.47.0

Affected Resource(s)

  • google_container_cluster

Terraform Configuration Files

resource "google_container_cluster" "test-gke-cluster" {
  provider                 = google-beta
  name                     = "test-gke-cluster"
  location                 = "us-east1"
  remove_default_node_pool = true
  initial_node_count       = 1
  network                  = google_compute_network.test-vpc.name
  subnetwork               = google_compute_subnetwork.test-subnet.name
  networking_mode          = "VPC_NATIVE"
  ip_allocation_policy {
    cluster_ipv4_cidr_block  = "10.2.0.0/16"
    services_ipv4_cidr_block = "10.3.0.0/16"
  }
  master_auth {
    username = ""
    password = ""
    client_certificate_config {
      issue_client_certificate = false
    }
  }
  master_authorized_networks_config {
    cidr_blocks {
      cidr_block = "10.1.0.0/16"
    }
  }
  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = true
    master_ipv4_cidr_block  = "10.4.0.0/28"
    master_global_access_config {
      enabled = false
    }
  }
}
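
For context, the configuration above references a network and subnetwork that are not included in the report. A minimal sketch of what those resources might look like is shown below; the CIDR range and other values are assumptions, not taken from the original report.

# Sketch of the referenced network resources (values are assumed, not from the report).
resource "google_compute_network" "test-vpc" {
  name                    = "test-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "test-subnet" {
  name          = "test-subnet"
  region        = "us-east1"
  network       = google_compute_network.test-vpc.id
  ip_cidr_range = "10.0.0.0/16"
}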

Expected Behavior

After running terraform apply, subsequent terraform plans should show "No changes. Infrastructure is up-to-date."

Actual Behavior

Subsequent terraform plans continue to show a change to the master_global_access_config:

  # google_container_cluster.test-gke-cluster will be updated in-place
  ~ resource "google_container_cluster" "test-gke-cluster" {
        cluster_ipv4_cidr           = "10.2.0.0/16"
        default_max_pods_per_node   = 110
        enable_binary_authorization = false
        enable_intranode_visibility = false
        enable_kubernetes_alpha     = false
        enable_legacy_abac          = false
        enable_shielded_nodes       = false
        enable_tpu                  = false
        endpoint                    = "10.4.0.2"
        id                          = "projects/<REMOVED>/locations/us-east1/clusters/test-gke-cluster"
        initial_node_count          = 1
        instance_group_urls         = []
        label_fingerprint           = "a9dc16a7"
        location                    = "us-east1"
        logging_service             = "logging.googleapis.com/kubernetes"
        master_version              = "1.16.13-gke.401"
        monitoring_service          = "monitoring.googleapis.com/kubernetes"
        name                        = "test-gke-cluster"
        network                     = "projects/<REMOVED>/global/networks/test-vpc"
        networking_mode             = "VPC_NATIVE"
        node_locations              = [
            "us-east1-b",
            "us-east1-c",
            "us-east1-d",
        ]
        node_version                = "1.16.13-gke.401"
        project                     = "<REMOVED>"
        remove_default_node_pool    = true
        resource_labels             = {}
        self_link                   = "https://container.googleapis.com/v1beta1/projects/<REMOVED>/locations/us-east1/clusters/test-gke-cluster"
        services_ipv4_cidr          = "10.3.0.0/16"
        subnetwork                  = "projects/<REMOVED>/regions/us-east1/subnetworks/test-subnet"

        addons_config {

            network_policy_config {
                disabled = true
            }
        }

        cluster_autoscaling {
            autoscaling_profile = "BALANCED"
            enabled             = false
        }

        cluster_telemetry {
            type = "ENABLED"
        }

        database_encryption {
            state = "DECRYPTED"
        }

        default_snat_status {
            disabled = false
        }

        ip_allocation_policy {
            cluster_ipv4_cidr_block       = "10.2.0.0/16"
            cluster_secondary_range_name  = "gke-test-gke-cluster-pods-67809078"
            services_ipv4_cidr_block      = "10.3.0.0/16"
            services_secondary_range_name = "gke-test-gke-cluster-services-67809078"
        }

        master_auth {
            cluster_ca_certificate = "<REDACTED>"

            client_certificate_config {
                issue_client_certificate = false
            }
        }

        master_authorized_networks_config {
            cidr_blocks {
                cidr_block = "10.1.0.0/16"
            }
        }

        network_policy {
            enabled  = false
            provider = "PROVIDER_UNSPECIFIED"
        }

        notification_config {
            pubsub {
                enabled = false
            }
        }

        pod_security_policy_config {
            enabled = false
        }

      ~ private_cluster_config {
            enable_private_endpoint = true
            enable_private_nodes    = true
            master_ipv4_cidr_block  = "10.4.0.0/28"
            peering_name            = "<REMOVED>"
            private_endpoint        = "10.4.0.2"
            public_endpoint         = "<REMOVED>"

          + master_global_access_config {
              + enabled = false
            }
        }

        release_channel {
            channel = "UNSPECIFIED"
        }
    }

Steps to Reproduce

  1. terraform apply
  2. terraform plan
  3. See "1 to change" in the new plan.
  4. Go to step 2.

References

  • I believe this may be the same issue, but reported on the wrong project
@ghost ghost added the bug label Nov 17, 2020
@edwardmedia edwardmedia self-assigned this Nov 17, 2020
edwardmedia (Contributor) commented Nov 17, 2020

Interesting: masterGlobalAccessConfig was sent in the request but was not included in the response.

https://paste.googleplex.com/5481982390697984

gnuruzzi commented Dec 4, 2020

A temporary workaround for me was to ignore changes to that block:

  lifecycle {
    ignore_changes = [private_cluster_config[0].master_global_access_config]
  }
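
Placed inside the resource from the original configuration, the workaround looks like this (a sketch; only the lifecycle block is new relative to the reporter's config, and the other arguments are elided):

resource "google_container_cluster" "test-gke-cluster" {
  # ... same arguments as in the configuration above ...

  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = true
    master_ipv4_cidr_block  = "10.4.0.0/28"
    master_global_access_config {
      enabled = false
    }
  }

  # Suppress the perpetual diff on master_global_access_config.
  lifecycle {
    ignore_changes = [private_cluster_config[0].master_global_access_config]
  }
}

Note that ignore_changes on this nested block also hides intentional edits to master_global_access_config, so the lifecycle rule should be removed once the provider fix is in place.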

ghost commented Jan 28, 2021

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators Jan 28, 2021
@github-actions github-actions bot added the service/container and forward/review labels Jan 14, 2025