
modules/private-cluster endpoint incorrect #828

Closed
edd-trent opened this issue Feb 22, 2021 · 8 comments · Fixed by #841
Labels
enhancement (New feature or request), good first issue (Good for newcomers), P4 (low priority issues), triaged (Scoped and ready for work)

Comments

@edd-trent

edd-trent commented Feb 22, 2021

Sorry if I've got the wrong end of the stick here, but I'm having a problem with my private clusters. I've been seeing this in my own project, so I decided to run the example project (https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/examples/private_zonal_with_networking), but I get the same result.

If I look at the cluster in the GCP UI, it tells me the endpoint is 172.16.0.2, which is what I would expect, but the output within Terraform has the endpoint 104.196.20.229.

Outputs:

ca_certificate = LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLekNDQWhPZ0F3SUJBZ0lSQUtqM3pTK0VMSVFFQ1UxNEl4eitwNGt3RFFZSktvWklodmNOQVFFTEJRQXcKTHpFdE1Dc0dBMVVFQXhNa1lXVmtNVGhoTUdNdE56WXhZUzAwWkdZeExUaGpPREl0WmpZMk5HRTROVGhqTlROaQpNQjRYRFRJeE1ESXhPVEUxTlRZek4xb1hEVEkyTURJeE9ERTJOVFl6TjFvd0x6RXRNQ3NHQTFVRUF4TWtZV1ZrCk1UaGhNR010TnpZeFlTMDBaR1l4TFRoak9ESXRaalkyTkdFNE5UaGpOVE5pTUlJQklqQU5CZ2txaGtpRzl3MEIKQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBcTIxVFBmVHlmYWRobGFVT0ZWbjQvR3FPaWtNVU81TWpSaUErUThxVgptaTVsVklLa2JOcU9haXVsYzl0SnBtempwaUU3MjNRN1pUK2tBY3B0YXVyeGxweENrMmduK1o5RSs1QThBOC9QClZYcXBjdVhIUzB1a3h6RXZob2E1bmdDRmlSRGpqMWxiL2ZBbW80bldTKzVMc2I0S1dvazZnVzE4NG1SUWhydysKcC9TaFBNM1dTbVhDT0ZMZmswL0VVYVAyK3BZbTA4WVYrWUtqMU8vS1dsZFR6UWV3MnJRcXRHK29zR2lLVktnRApyR2V5bXhOYUJKdGcvdWFCYnZiQ0xGc2dTeGgrVE9GNkZ2WWRZMUNaQ25vSExYSDNvWXpvQ2pETllQNUZ5Z2dmCkFxQ1UwWFhPWlhPSmtrdHNMR3hXWVhlNXlvOTJJS3JJbUxOVDhsdG42ajJTaXdJREFRQUJvMEl3UURBT0JnTlYKSFE4QkFmOEVCQU1DQWdRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVXNPMDZJNlVaQ3ZhSApMRURKQS8zWVFrclRlVG93RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQURaSWdFVDNaRG9rakZ6aWx4Z2VKbU9NCndOVVd4MUFUQmZmalZkd3NGL0tvMUtIZnlHYXNvWjlzWTQyclJJMENac1hWQUwvY0M4WEFmdWRvcTFDVTFxY0IKb2dFZDg4cU9LTVAvWFZrYm1kZS9oMitQUzRNTk94NHo0VkVETlppVzVRdFNCdCsrYU9UbFlGcHpBTGJUVkQ3SApoWDFpb1RhdGozM0Vuc0RsemlNOThEc3hTWXJkT0o5YnUrazNNTWZHR0RBV1RKUVdZU1djbGNqMWZDR3BOeHBJCmt5WXpicW9zY2h0b1FTaC9qU0tyTjVPeGkvMTYxb3pNbTc3cG9Vc2pqM3ZYTWpPRzhIRHdNbmdEMHR5a255L1cKcmxRN0xvZDNMbW9MUnF2amMwUUIwK3JMWVJ2TG5QWU1kRVkxSW9pdERYT3Y0NDh2akdkZ0crUnNUd0ZuV0xjPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
client_token = <sensitive>
cluster_name = gke-on-vpc-cluster
ip_range_pods_name = ip-range-pods
ip_range_services_name = ip-range-scv
kubernetes_endpoint = 104.196.20.229
location = us-east1-b
master_kubernetes_version = 1.18.15-gke.1100
network = gke-network
network_name = gke-network
peering_name = gke-n9c50b1751d1d8287509-37e3-8f1f-peer
project_id = tf-example-test
region = us-east1
service_account = tf-gke-gke-on-vpc-clus-razb@tf-example-test.iam.gserviceaccount.com
subnet_name = [
  "gke-subnet",
]
subnet_secondary_ranges = [
  [
    {
      "ip_cidr_range" = "192.168.0.0/18"
      "range_name" = "ip-range-pods"
    },
    {
      "ip_cidr_range" = "192.168.64.0/18"
      "range_name" = "ip-range-scv"
    },
  ],
]
subnetwork = gke-subnet
zones = []
@morgante
Contributor

The module is providing the public endpoint. You can also look up the private endpoint if you need it.
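A minimal sketch of looking it up with a google_container_cluster data source (the module reference and resource name below are assumptions based on the example project):

    data "google_container_cluster" "gke" {
      name     = module.gke.name
      location = module.gke.location
      project  = var.project_id
    }

    # The master's internal IP is exposed under private_cluster_config.
    output "private_endpoint" {
      value = data.google_container_cluster.gke.private_cluster_config[0].private_endpoint
    }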

@edd-trent
Author

Thank you. I can get the private IP by running a describe on the cluster, but is there a way to output it from the module or with Terraform? I'm hoping to use Terraform for the Helm deploy too, and I need to provide the endpoint in the provider. I was expecting to do something like host = "https://${module.gke-us-east1.endpoint}"; hard-coding an IP after the cluster is built doesn't seem right.
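Something along these lines is what I was hoping to do (a sketch only; the module name and the token lookup via google_client_config are just examples):

    data "google_client_config" "default" {}

    provider "helm" {
      kubernetes {
        host                   = "https://${module.gke-us-east1.endpoint}"
        cluster_ca_certificate = base64decode(module.gke-us-east1.ca_certificate)
        token                  = data.google_client_config.default.access_token
      }
    }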

@morgante
Contributor

We should probably output the cluster itself to make retrieving additional values like this possible.

You can also use the auth submodule to retrieve the cluster connection info: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/modules/auth
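A rough usage sketch (check the auth module README for the exact input and output names, and whether the CA certificate output still needs base64decode()):

    module "gke_auth" {
      source = "terraform-google-modules/kubernetes-engine/google//modules/auth"

      project_id   = var.project_id
      cluster_name = module.gke.name
      location     = module.gke.location
    }

    provider "helm" {
      kubernetes {
        host                   = module.gke_auth.host
        token                  = module.gke_auth.token
        cluster_ca_certificate = module.gke_auth.cluster_ca_certificate
      }
    }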

@morgante added the enhancement, good first issue, P4, and triaged labels on Feb 23, 2021
@edd-trent
Author

Thanks for your quick reply and work on this module, it's great!

@fstr
Contributor

fstr commented Mar 5, 2021

If you are using the beta-private-cluster module, there's a flag called deploy_using_private_endpoint that you can set.

(Beta) A toggle for Terraform and kubectl to connect to the master's internal IP address during deployment.

If this flag is set, module.gke.endpoint will return the private IP address of the cluster.

A possible improvement could be to return both the public and the private endpoint as module outputs. This way, the caller can decide which one to use for any follow-up operations.
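For example (a sketch only; the other required module inputs are omitted):

    module "gke" {
      source = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster"

      # ... project_id, name, network, subnetwork, ip ranges, etc. ...

      enable_private_nodes          = true
      deploy_using_private_endpoint = true
    }

    # With the flag set, this resolves to the master's internal IP.
    output "kubernetes_endpoint" {
      sensitive = true
      value     = module.gke.endpoint
    }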

@fstr
Contributor

fstr commented Mar 5, 2021

As @morgante pointed out, the values are already available. The module should just return both endpoints or the whole cluster as an output.

In my cluster, which currently has both a private and a public endpoint (not exclusively private at the moment), it looks like this:

      + private_cluster_config            = [
          + {
              + enable_private_endpoint     = false
              + enable_private_nodes        = true
              + master_global_access_config = [
                  + {
                      + enabled = true
                    },
                ]
              + master_ipv4_cidr_block      = "10.164.32.0/28"
              + peering_name                = "gke-123abc-peer"
              + private_endpoint            = "10.164.32.2"
              + public_endpoint             = "34.x.x.x"
            },
        ]

The auth module is great and could be improved by adding a bool flag connect_on_private_endpoint, which would then use private_cluster_config[0].private_endpoint. I'm not sure whether it's possible for this list to have multiple entries?
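A hypothetical sketch of what that could look like inside the auth module (not the actual implementation; the data source name is an assumption):

    variable "connect_on_private_endpoint" {
      description = "Connect to the cluster on its private endpoint instead of the public one."
      type        = bool
      default     = false
    }

    locals {
      # data.google_container_cluster.gke_cluster stands in for the lookup the auth module already performs.
      endpoint = var.connect_on_private_endpoint ? data.google_container_cluster.gke_cluster.private_cluster_config[0].private_endpoint : data.google_container_cluster.gke_cluster.endpoint
      host     = "https://${local.endpoint}"
    }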

@morgante
Contributor

morgante commented Mar 5, 2021

Private cluster config should have 0 or 1 entries, so if it's present we can safely take the private endpoint.

@edd-trent
Copy link
Author

I have tried connect_on_private_endpoint and using the auth module; both work as expected.
