
fix!: enable private nodes with specified pod ip range #1514

Conversation

@splichy splichy commented Dec 30, 2022

Fixes #1493
The enable_private_nodes attribute was introduced in google provider 4.45.0, and the Google API now requires enable_private_nodes to be set on both the cluster and the node pool when pod_ip_range is specified at the node pool level.
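
For illustration, here is a minimal sketch of the combination the API now expects, written against the raw google-beta provider resources rather than this module; all names, ranges and the machine type are placeholders, and the pod_range/enable_private_nodes attributes in network_config require provider 4.45.0 or newer:

resource "google_container_cluster" "cluster" {
  name       = "private-cluster" # placeholder
  location   = "europe-west1"    # placeholder
  network    = "my-vpc"          # placeholder
  subnetwork = "my-subnet"       # placeholder

  remove_default_node_pool = true
  initial_node_count       = 1

  ip_allocation_policy {
    cluster_secondary_range_name  = "pods"     # cluster-wide default pod range
    services_secondary_range_name = "services"
  }

  # enable_private_nodes must be set at the cluster level ...
  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }
}

resource "google_container_node_pool" "np" {
  name       = "np-custom-pods"
  cluster    = google_container_cluster.cluster.name
  location   = google_container_cluster.cluster.location
  node_count = 1

  # ... and again at the node pool level once a pod range is overridden here
  network_config {
    pod_range            = "pods-np-override" # pre-created secondary range, placeholder name
    enable_private_nodes = true
  }

  node_config {
    machine_type = "e2-medium"
  }
}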

@splichy splichy requested review from a team and Jberlinsky as code owners December 30, 2022 11:23
@splichy splichy changed the title from "fix/1493: Unable to enable private nodes with specified pod ip range" to "fix:1493/Unable to enable private nodes with specified pod ip range" on Dec 30, 2022
@splichy splichy force-pushed the fix/1493_enable_private_nodes branch 3 times, most recently from b8af263 to a185443 on December 30, 2022 11:31
@splichy splichy changed the title from "fix:1493/Unable to enable private nodes with specified pod ip range" to "fix: enable private nodes with specified pod ip range" on Dec 30, 2022
@splichy splichy force-pushed the fix/1493_enable_private_nodes branch 2 times, most recently from 587daac to c312230 on January 3, 2023 10:05
@comment-bot-dev

@splichy
Thanks for the PR! 🚀
✅ Lint checks have passed.

@splichy splichy changed the title from "fix: enable private nodes with specified pod ip range" to "fix!: enable private nodes with specified pod ip range" on Jan 3, 2023

splichy commented Jan 3, 2023

@bharathkkb I have finally found what was wrong with the tests, so it's ready to be merged.
Also, since the issue is caused by a change in the Google API, it might be worth considering a backport to module v23 as well, because upgrading to v24 requires a manual step due to a breaking change.

By the way, is it expected that the integration tests don't report which part failed? I had to create a test project under my personal GCP account; hopefully it won't cost me hundreds of dollars.

@bharathkkb bharathkkb left a comment

Thanks for the PR @splichy

@bharathkkb

> Also, since the issue is caused by a change in the Google API, it might be worth considering a backport to module v23 as well, because upgrading to v24 requires a manual step due to a breaking change.

We usually don't do backports unless it is a critical issue. However, reading the API docs, it seems that even if this is not set, it should now default to the cluster config. Are you still able to reproduce the error?

https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters.nodePools#nodenetworkconfig
> Whether nodes have internal IP addresses only. If enablePrivateNodes is not specified, then the value is derived from [cluster.privateClusterConfig.enablePrivateNodes]

@bharathkkb bharathkkb merged commit 8190439 into terraform-google-modules:master Jan 10, 2023

splichy commented Jan 10, 2023

I'm still able to reproduce the error:

  "error": {
    "code": 400,
    "message": "EnablePrivateNodes must be enabled for private clusters with valid masterIpv4Cidr.",
    "errors": [
      {
        "message": "EnablePrivateNodes must be enabled for private clusters with valid masterIpv4Cidr.",
        "domain": "global",
        "reason": "badRequest"
      }
    ],
    "status": "INVALID_ARGUMENT"
  }
}

It's probably a bug in the Google API: the value is not derived from [cluster.privateClusterConfig.enablePrivateNodes] when NodePool.NodeNetworkConfig.podRange is defined and differs from the cluster default.

There are also other bugs around https://cloud.google.com/kubernetes-engine/docs/how-to/multi-pod-cidr - for example, the docs state that you can use a subnet smaller than /24 as a node pool override. You can indeed use e.g. /25 as the cluster default, but as soon as you try anything smaller than /24 as a node pool override, you get an error.

sivadeepN commented Mar 6, 2023

> I'm still able to reproduce the error […] (quoting @splichy's full comment above)

@splichy @bharathkkb I'm facing the exact same issue you mentioned above. How do I fix it?


splichy commented Mar 9, 2023

@sivadeepN you have to set enable_private_nodes on both the cluster and the node_pool. If you mean the inability to use a subnet smaller than /24 as a node pool override, that doesn't have a solution yet; I tried to resolve it with GCP support, spent a few days emailing with them, and then gave up. You can, however, use a smaller subnet cluster-wide and then add /24 ranges as node pool overrides.
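
For anyone consuming this through the module, here is a rough sketch of the shape described above, using the beta-private-cluster submodule. The exact node_pools keys (in particular pod_range) and all names here are assumptions, so check variables.tf for your module version, and note that a module release containing this PR is needed for the node-pool-level enable_private_nodes to be wired through:

module "gke" {
  source = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster"
  # pick a module release that includes this PR

  project_id        = "my-project"      # placeholder
  name              = "private-cluster" # placeholder
  region            = "europe-west1"    # placeholder
  network           = "my-vpc"          # placeholder
  subnetwork        = "my-subnet"       # placeholder
  ip_range_pods     = "pods"            # cluster-wide default secondary range
  ip_range_services = "services"

  # private nodes enabled at the cluster level ...
  enable_private_nodes   = true
  master_ipv4_cidr_block = "172.16.0.0/28"

  node_pools = [
    {
      name      = "np-custom-pods"
      # ... with a per-node-pool pod range override (assumed key name)
      pod_range = "pods-np-override"
    },
  ]
}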

henryhcheung2 commented Mar 22, 2023

@splichy thank you for your work.

When I set private_cluster_config.enable_private_nodes = true in google_container_cluster and network_config.enable_private_nodes = false in google_container_node_pool, I get a similar error:

Error: error creating NodePool: googleapi: Error 400: EnablePrivateNodes must be enabled for private clusters with valid masterIpv4Cidr., badRequest

What I am trying to achieve is a private cluster with mixed node pools (public and private), but it does not seem to work, although according to the documentation it should: https://cloud.google.com/blog/products/containers-kubernetes/understanding-gkes-new-control-plane-connectivity#:~:text=Allow%20toggling%20and%20mixed%2Dmode%20clusters%20with%20public%20and%20private%20node%20pools

agates4 commented Aug 9, 2023

I'm able to get around this and create public node pools with a different pod IP range when I pin the google-beta version:

terraform {
  required_version = ">=0.13"

  provider_meta "google-beta" {
    module_name = "blueprints/terraform/terraform-google-kubernetes-engine:safer-cluster-update-variant/v16.0.1"
  }

  required_providers {
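    # 4.45.0 introduced the change discussed above; 4.44.1 still allows a public node pool with a custom pod range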
    google-beta = "~> 4.44.1"
  }

}

This issue was introduced in 4.45.0, and 4.44.1 is the latest version that can still create a public node pool with a different pod IP range. If you need to create both a public and a private node pool, each with a different pod IP range, you'll unfortunately have to wait for a fix in this repository.

CPL-markus pushed a commit to WALTER-GROUP/terraform-google-kubernetes-engine that referenced this pull request Jul 15, 2024