
Ability to set workload policies allow-net-admin into a cluster #1785

Closed
sonja-hiltunen-ivadolabs opened this issue Oct 27, 2023 · 6 comments
Labels: enhancement (New feature or request)

Comments


sonja-hiltunen-ivadolabs commented Oct 27, 2023

TL;DR

GKE Autopilot clusters recently added support for bring-your-own service meshes. The GKE documentation tells us to

"use --workload-policies=allow-net-admin when we create a cluster or update an existing cluster".

It doesn't currently seem possible to set this with this module. Could this be something you would consider adding?

sonja-hiltunen-ivadolabs added the enhancement (New feature or request) label on Oct 27, 2023

kol-ratner commented Nov 7, 2023

@sonja-hiltunen-ivadolabs Yes, thank you for opening this issue. My firm is in the exact same boat: we are running an Autopilot cluster and had to enable allow-net-admin manually, so now our Terraform tries to disable NET_ADMIN on every apply and thankfully fails, because active workloads (Linkerd) depend on it.

Here's our module code:

module "gke_prd_euw4" {
  source  = "terraform-google-modules/kubernetes-engine/google//modules/beta-autopilot-private-cluster"
  version = "~> 29.0.0"

  name       = "gke-prd-euw4"
  project_id = module.k8s_euw4_prd.project_id
  regional   = true
  region     = "europe-west4"

  network            = module.cluster_shared_vpc_prd.network_name
  network_project_id = module.cluster_shared_vpc_prd.project_id
  subnetwork         = "${module.cluster_shared_vpc_prd.network_name}-euw4"
  ip_range_pods      = "${module.cluster_shared_vpc_prd.network_name}-euw4-gke-pods"
  ip_range_services  = "${module.cluster_shared_vpc_prd.network_name}-euw4-gke-svc"

  master_ipv4_cidr_block = local.networking.cidr_blocks.k8s_prd_euw4_control_plane_cidr
  master_authorized_networks = [{
    cidr_block   = local.networking.cidr_blocks.cluster_shared_vpc_prd_euw4_cidr
    display_name = "${module.cluster_shared_vpc_prd.network_name}-euw4"
  }]

  enable_private_endpoint = true
  enable_private_nodes    = true

  release_channel                 = "REGULAR"
  enable_vertical_pod_autoscaling = true
  horizontal_pod_autoscaling      = true

  security_posture_mode               = "BASIC"
  security_posture_vulnerability_mode = "VULNERABILITY_BASIC"
}

And here's the output from our terraform apply:

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # module.gke_prd_euw4.google_container_cluster.primary will be updated in-place
  ~ resource "google_container_cluster" "primary" {
      - allow_net_admin             = true -> null
        id                          = "projects/k8s-euw4-prd-0a8b/locations/europe-west4/clusters/gke-prd-euw4"
        name                        = "gke-prd-euw4"
        # (29 unchanged attributes hidden)

      ~ security_posture_config {
          ~ mode               = "DISABLED" -> "BASIC"
            # (1 unchanged attribute hidden)
        }

        # (32 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.gke_prd_euw4.google_container_cluster.primary: Modifying... [id=projects/k8s-euw4-prd-0a8b/locations/europe-west4/clusters/gke-prd-euw4]
╷
│ Error: googleapi: Error 400: Resource is in use by other resources: there are workloads currently using NET_ADMIN: linkerd-destination, linkerd-identity, linkerd-proxy-injector, linkerd-destination-75755bb48b, linkerd-destination-7dcc8ff6d4, linkerd-identity-66fff7d8cc, linkerd-identity-ff8665c45, linkerd-proxy-injector-69fdbdc8b5, linkerd-proxy-injector-75db56c5b7, flagger-54949fdf8d-dtc2c, [truncated].
│ Details:
│ [
│   {
│     "@type": "type.googleapis.com/google.rpc.RequestInfo",
│     "requestId": "0x39e02271f1cd1dca"
│   }
│ ]
│ , badRequest
│ 
│   with module.gke_prd_euw4.google_container_cluster.primary,
│   on .terraform/modules/gke_prd_euw4/modules/beta-autopilot-private-cluster/cluster.tf line 22, in resource "google_container_cluster" "primary":
│   22: resource "google_container_cluster" "primary" {
│ 

The ask is simply to expose the allow_net_admin parameter that the underlying google_container_cluster resource already supports, per the provider documentation here:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster#allow_net_admin
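
For reference, a minimal sketch of that attribute on the raw resource (placeholder names; allow_net_admin is the same attribute that shows up in the plan output above and in the linked provider docs):

resource "google_container_cluster" "example" {
  # Placeholder names for illustration only.
  name             = "example-autopilot-cluster"
  location         = "europe-west4"
  enable_autopilot = true

  # Lets Autopilot workloads request the NET_ADMIN capability,
  # which service meshes such as Linkerd and Istio need.
  allow_net_admin = true
}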

sonja-hiltunen-ivadolabs (author) commented

@kol-ratner looks like there's already a PR open here: #1768

sonja-hiltunen-ivadolabs changed the title from "Ability to add workload policies into a cluster" to "Ability to set workload policies allow-net-admin into a cluster" on Nov 13, 2023
sonja-hiltunen-ivadolabs (author) commented

The PR is merged, so I'll close this issue 🥳 It'll be out in release 30.0.0.
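
Once 30.0.0 is out, usage should presumably look something like this (assuming the new module input mirrors the provider attribute name allow_net_admin; check the released variables.tf for the exact name):

module "gke_prd_euw4" {
  source  = "terraform-google-modules/kubernetes-engine/google//modules/beta-autopilot-private-cluster"
  version = "~> 30.0.0"

  # ... existing inputs unchanged ...

  # Assumed input name mirroring the provider attribute; verify against
  # the module's variables.tf once 30.0.0 is released.
  allow_net_admin = true
}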

kol-ratner commented

Amazing, that's really great! Thanks all!

sureshpalemoni commented

I couldn't see the changes for net_admin. Could you please share the link?
