
monitoring_enable_managed_prometheus field not working in v27.0.0 #1706

Open
prateekn opened this issue Aug 9, 2023 · 16 comments
@prateekn

prateekn commented Aug 9, 2023

TL;DR

Despite setting the "monitoring_enable_managed_prometheus" variable to false on v27.0.0 of this Terraform module, "Managed Service for Prometheus" in GCP is still enabled.

GKE version: 1.27.4-gke.900

Expected behavior

Managed Service for Prometheus should be disabled.

Observed behavior

Managed Service for Prometheus is enabled.


Terraform Configuration

module "gke" {
  source                        = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster-update-variant"
  version                       = "v27.0.0"
  description                   = "GKE Prod Cluster"
  kubernetes_version            = var.kubernetes_version
  regional                      = true
  region                        = var.region
  network                       = var.network
  network_project_id            = var.network_project_id
  subnetwork                    = var.subnetwork
  ip_range_pods                 = var.ip_range_pods
  ip_range_services             = var.ip_range_services
  create_service_account        = false
  service_account               = var.default_cluster_sa
  add_cluster_firewall_rules    = true
  firewall_inbound_ports        = ["8443", "9443", "15017"]
  default_max_pods_per_node     = 32
  monitoring_enable_managed_prometheus = false
  http_load_balancing           = true
  network_policy                = false
  horizontal_pod_autoscaling    = true
  filestore_csi_driver          = false
  enable_private_endpoint       = true
  enable_private_nodes          = true
  master_ipv4_cidr_block        = "10.10.20.0/28"
  remove_default_node_pool      = true
  gce_pd_csi_driver             = true
  enable_intranode_visibility   = false
  enable_binary_authorization   = true
  database_encryption           = [{
    state = "ENCRYPTED"
    key_name = data.google_kms_crypto_key.gke_key_1.id
  }]
  release_channel               = "UNSPECIFIED"
  grant_registry_access         = false
  node_metadata                 = "GKE_METADATA"
  logging_enabled_components    = ["SYSTEM_COMPONENTS"]
  monitoring_enabled_components = ["SYSTEM_COMPONENTS"]
  gateway_api_channel           = "CHANNEL_STANDARD"
  master_authorized_networks    = [
    {
      cidr_block   = "10.0.0.0/8"
      display_name = "Internal Ips"
    }
  ]
}

Terraform Version

v1.1.7

Additional information

https://cloud.google.com/stackdriver/docs/managed-prometheus/setup-managed#enable-mgdcoll-gke

@prateekn prateekn added the bug Something isn't working label Aug 9, 2023
@nourspace

The way the monitoring_config block is defined prevents it from being set when monitoring_enable_managed_prometheus = false or monitoring_enabled_components = [], so the cluster falls back to its default value:

  dynamic "monitoring_config" {
    for_each = length(var.monitoring_enabled_components) > 0 || var.monitoring_enable_managed_prometheus ? [1] : []

    content {
      enable_components = length(var.monitoring_enabled_components) > 0 ? var.monitoring_enabled_components : []

      dynamic "managed_prometheus" {
        for_each = var.monitoring_enable_managed_prometheus ? [1] : []

        content {
          enabled = var.monitoring_enable_managed_prometheus
        }
      }
    }
  }
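One possible fix (a sketch, not the module's actual patch, assuming the provider accepts an explicit enabled = false) is to render the blocks unconditionally so the disabled value is actually sent to the API instead of being omitted:

  # Hypothetical rewrite, not the module's merged fix: drop the for_each
  # guards so managed_prometheus { enabled = false } is written explicitly
  # rather than omitted and left to the cluster-side default.
  monitoring_config {
    enable_components = var.monitoring_enabled_components

    managed_prometheus {
      enabled = var.monitoring_enable_managed_prometheus
    }
  }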

@BhaRgav-MoRadiya

When for_each evaluates to an empty list, the managed_prometheus block is never rendered, so neither enabled = true nor enabled = false is ever sent.

@ericyz
Collaborator

ericyz commented Aug 22, 2023

#1715

@alexberry

Upvoting this, as we've just noticed it applying to all our new and upgraded clusters despite the false default.

@Syphon83

Any news about this? We are using the private cluster module and it's impossible to disable managed Prometheus.

@mansourkheffache

+1 on this, I still cannot manage to disable it

@bsgrigorov

+1 seeing the same in 28.0.0

@prateekn
Author

prateekn commented Nov 2, 2023

+1 seeing the same in 28.0.0

The fix is in 29.0.0:
#1746

@shybbko

shybbko commented Nov 7, 2023

The fix doesn't seem to work for me:

module "gke_euw3" {

  source  = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster-update-variant"
  version = "29.0.0"

(...)
  monitoring_enable_managed_prometheus = false
  monitoring_enabled_components = []

The plan contains a lot of noise (due to #1773 as I would be upgrading from 27.0 to 29.0), but does not mention making any changes to this block of TF state:

            "monitoring_config": [
              {
                "advanced_datapath_observability_config": [
                  {
                    "enable_metrics": false,
                    "relay_mode": "DISABLED"
                  }
                ],
                "enable_components": [
                  "SYSTEM_COMPONENTS"
                ],
                "managed_prometheus": [
                  {
                    "enabled": true
                  }
                ]
              }
            ],

@gueux

gueux commented Nov 14, 2023

The fix doesn't seem to work for me:

module "gke_euw3" {

  source  = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster-update-variant"
  version = "29.0.0"

(...)
  monitoring_enable_managed_prometheus = false
  monitoring_enabled_components = []

Because of some faulty conditions in the module, you have to specify at least one component if you want to disable Prometheus. Try this:

monitoring_enable_managed_prometheus = false
monitoring_enabled_components = ["SYSTEM_COMPONENTS"]


This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 7 days

@github-actions github-actions bot added the Stale label Jan 13, 2024
@github-actions github-actions bot closed this as not planned Jan 20, 2024
@msgongora
Contributor

+1 seeing the same in 28.0.0

the fix in 29.0.0 #1746

This isn't correct. I hit it recently using beta-private-cluster-update-variant 30.0. While gueux's suggestion is a workaround, this issue should be addressed in the module code by changing the default value of the monitoring_enabled_components variable.
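That suggested change might look roughly like this in the module's variables.tf (a sketch; the type and description shown are assumptions, not the module's actual code):

  variable "monitoring_enabled_components" {
    type        = list(string)
    description = "List of monitoring components to enable, e.g. SYSTEM_COMPONENTS."
    # Assumed non-empty default so the monitoring_config block always renders
    # and an explicit managed_prometheus setting can take effect.
    default     = ["SYSTEM_COMPONENTS"]
  }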

@atorrembo

Issue still exists in 30.0.0.

@bharathkkb bharathkkb reopened this May 28, 2024
@bharathkkb
Member

@ericyz looks like users still have some trouble configuring this. Could you PTAL?

@pramodsetlur

Thanks for reopening this issue. I'm wondering if the fix can be backported to previously released versions (we use v26.0.0).

@github-actions github-actions bot removed the Stale label May 28, 2024
@kholisrag

Issue still exists in v31.0.0.
