
Add network_config to node_pool #984

Merged
merged 14 commits into master from the network_config branch on Aug 24, 2021

Conversation

DrFaust92 (Contributor)

Closes #983

comment-bot-dev commented Aug 22, 2021

Thanks for the PR! 🚀
✅ Lint checks have passed.

morgante (Contributor) left a comment

Please add this value to one of the examples/ to show how it would be used and verify implementation.

DrFaust92 (Contributor, Author)

@morgante added an example and tested the config.

@@ -77,3 +77,11 @@ variable "cluster_autoscaling" {
  }
  description = "Cluster autoscaling configuration. See [more details](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.locations.clusters#clusterautoscaling)"
}

variable "network_config" {
Contributor

Don't make this a variable. Just hard code it. Examples can and should use hardcoding extensively.

Contributor Author

No problem, it just makes an assumption about the user's environment. Will change.

Contributor Author

fixed

dynamic "network_config" {
for_each = lookup(each.value, "network_config", false) ? [each.value] : []
content {
create_pod_range = lookup(network_config.value, "create_pod_range", false)
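
For context, a node pool entry driving this dynamic block at this stage of the review might look roughly like the following sketch. The key names come from the excerpt above and the later discussion; the pool and range names are hypothetical, and the shape changed later in the review (create_pod_range and the CIDR variable were removed).

node_pools = [
  {
    name             = "pool-01"               # hypothetical pool name
    network_config   = true                    # opts this pool into the dynamic block above
    create_pod_range = true                    # removed later in this review
    pod_range        = "pods-secondary-range"  # hypothetical secondary range name
  },
]
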
Contributor

I'm not sure we want to dynamically create the range on demand, since that's not very declarative.

Contributor Author

Cool, I'll remove both the create_pod_range and cidr vars.

Contributor Author

fixed


Hi folks, why not give the possibility to customize the CIDR if it is set? It looks like pod_ipv4_cidr_block cannot be set, so it falls back to a default. How can we set this to a CIDR range of our choosing? Thanks!

modules/beta-private-cluster-update-variant/cluster.tf (outdated review thread, resolved)
DrFaust92 and others added 3 commits August 25, 2021 00:34
DrFaust92 (Contributor, Author)

Thanks for the quick review @morgante! addressed comments.

@@ -199,6 +199,8 @@ The node_pools variable takes the following parameters:
{% endif %}
| min_count | Minimum number of nodes in the NodePool. Must be >=0 and <= max_count. Should be used when autoscaling is true | 1 | Optional |
| name | The name of the node pool | | Required |
| network_config | Configuration for Adding Pod IP address ranges to the node pool. | | Optional |
Contributor

Since only pod_range is used now, could we collapse this into being the only variable? So network_config doesn't need to be supplied at all?

DrFaust92 (Contributor, Author) commented Aug 24, 2021

Future-proofing? What if there will be an additional var there?

Contributor

YAGNI - we shouldn't add complexity for hypothetical gains.

Contributor Author

Fair enough. Changed.
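
With network_config collapsed away, pod_range becomes the only node-pool-level parameter for this feature. A minimal sketch of the resulting usage, assuming hypothetical pool and secondary range names:

node_pools = [
  {
    name      = "pool-01"               # hypothetical pool name
    pod_range = "pods-secondary-range"  # existing secondary IP range for Pods (hypothetical name)
  },
]
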

morgante (Contributor) left a comment

Please increase the minimum required provider version: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/blob/master/versions.tf#L24

Also, make sure the changes only apply to the beta modules.
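
A minimal sketch of what that provider bump might look like in the autogen versions.tf template, assuming the v3.79.0 floor from the merged breaking change; the exact constraints in the repository (upper bounds, Terraform core version) may differ:

terraform {
  required_providers {
    google-beta = {
      source  = "hashicorp/google-beta"
      version = ">= 3.79.0"  # assumed minimum for node-pool-level pod range support
    }
  }
}
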

autogen/main/README.md (review thread resolved)
DrFaust92 (Contributor, Author)

@morgante made the changes and tested the example again (it needed a minor change to the for_each logic), and bumped the beta provider in the template.

@morgante morgante merged commit 9d1274f into terraform-google-modules:master Aug 24, 2021
@DrFaust92 DrFaust92 deleted the network_config branch August 24, 2021 23:13
CPL-markus pushed a commit to WALTER-GROUP/terraform-google-kubernetes-engine that referenced this pull request Jul 15, 2024
…ls (terraform-google-modules#984)

BREAKING CHANGE: Minimum beta provider version increased to v3.79.0.
Development

Successfully merging this pull request may close these issues.

Add support for node pool level pods range
4 participants