
Webhook firewall rule not targeting correct source range #859

Closed
dkpat12 opened this issue Apr 1, 2021 · 4 comments
Labels
bug (Something isn't working) · P2 (high priority issues) · triaged (Scoped and ready for work)

Comments


dkpat12 commented Apr 1, 2021

Hi team!

I'm not able to use this module directly at my current job, unfortunately, but I have used it as a reference for building our internal module. I implemented the webhook firewall rule as shown in this module, but after it was created my nginx ingress webhook stopped working.

I realized the rule needs to target the master CIDR block (the whole /28 range), not the primary endpoint as a /32 CIDR.

source_ranges = [local.cluster_endpoint_for_nodes]
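For illustration, a minimal Terraform sketch of the corrected rule. The resource name, network reference, target tag, and port 8443 (a common ingress-nginx admission webhook port) are assumptions for this example, not taken from the module:

```hcl
# Illustrative sketch only: allow the GKE control plane to reach the
# admission webhook on the nodes. The source must be the full /28
# master range, not the single endpoint as a /32.
resource "google_compute_firewall" "master_webhooks" {
  name      = "gke-master-to-webhook" # hypothetical name
  network   = var.network             # assumed variable
  direction = "INGRESS"

  allow {
    protocol = "tcp"
    ports    = ["8443"] # assumed webhook port
  }

  source_ranges = [var.master_ipv4_cidr_block] # the whole /28, per this issue
  target_tags   = ["gke-node"]                 # hypothetical node tag
}
```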

@morgante added the bug, P2, and triaged labels on Apr 5, 2021

fstr commented Apr 27, 2021

This is already fixed in the beta-private-cluster module.

modules/beta-private-cluster/main.tf:113

cluster_endpoint_for_nodes = var.master_ipv4_cidr_block

I assume this can be safely ported to the non-beta module?


fstr commented Apr 29, 2021

Previously I didn't know about the autogen feature of this repo. After looking into it more, I see that this difference in cluster_endpoint_for_nodes between the public and private clusters was explicitly added in #470. It is not something that crept in over time.

Any idea why it was implemented like this?

autogen/main/main.tf.tmpl

{% if private_cluster %}
  ...
  cluster_endpoint_for_nodes = var.master_ipv4_cidr_block
{% else %}
  ...
  cluster_endpoint_for_nodes = "${google_container_cluster.primary.endpoint}/32"
{% endif %}
git diff 5791ac1f64cbd9355a9e2ee96f29d1c5b8686d60:autogen/main/main.tf.tmpl 16bdd6e6310ae248991462494f50876b99a36bbe:autogen/main/main.tf.tmpl
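For reference, the two template branches render to the following locals. The assignments are copied verbatim from the snippet above; the enclosing locals block is assumed for context:

```hcl
locals {
  # private_cluster branch: the full /28 master range
  cluster_endpoint_for_nodes = var.master_ipv4_cidr_block

  # public branch: a /32 around the primary endpoint only
  # cluster_endpoint_for_nodes = "${google_container_cluster.primary.endpoint}/32"
}
```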

morgante commented

I don't recall a particular reason for this separation, switching to cluster_endpoint_for_nodes = var.master_ipv4_cidr_block for everything makes sense.


fstr commented May 1, 2021

Public GKE clusters do not have --master-ipv4-cidr as an argument or property, so var.master_ipv4_cidr_block is only available in private cluster configurations.

The code is correct.

Private Clusters
...
--master-ipv4-cidr=MASTER_IPV4_CIDR
IPv4 CIDR range to use for the master network. This should have a
netmask of size /28 and should be used in conjunction with the
--enable-private-nodes flag.
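In Terraform terms, the /28 only exists when a private_cluster_config block is set. A hedged sketch (names, location, and the CIDR value are placeholders; other required cluster arguments are omitted):

```hcl
# Illustrative: master_ipv4_cidr_block only exists on private clusters;
# public clusters have no equivalent argument.
resource "google_container_cluster" "primary" {
  name     = "example-private" # placeholder
  location = "us-central1"     # placeholder

  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false
    master_ipv4_cidr_block  = "172.16.0.0/28" # the /28 described above
  }
}
```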

@dkpat12 likely copied the firewall rule from the root module's firewall.tf, but when running a private cluster it should have been copied from one of the private cluster submodules.

Issue can be closed imo.

@morgante morgante closed this as completed May 3, 2021