
Kubeconfig in connection secret is not working in the latest (0.4) version #128

Closed
vadasambar opened this issue Jan 3, 2020 · 11 comments · Fixed by #129
Labels
bug Something isn't working

Comments


vadasambar commented Jan 3, 2020

What happened?

The kubeconfig in the connection secret seems to be missing the username and password (both are empty). When I try to use the kubeconfig locally with the CLI to access the cluster, I am prompted for a username and password.
When I looked at this cluster's configuration in the GKE console, Basic Authentication was disabled.

How can we reproduce it?

Use Crossplane 0.6 and stack-gcp 0.4, then try provisioning a cluster using the following cluster class, claim, and node pool:

# class
apiVersion: container.gcp.crossplane.io/v1beta1
kind: GKEClusterClass
metadata:
  labels:
    className: "app-kubernetes-class"
  name: app-kubernetes-class
  namespace: crossplane-system
specTemplate:
  forProvider:
    location: us-central1
  providerRef:
    name: gcp-provider
  reclaimPolicy: Delete
  writeConnectionSecretsToNamespace: crossplane-system
---
# claim
apiVersion: compute.crossplane.io/v1alpha1
kind: KubernetesCluster
metadata:
  name: app-kubernetes
  namespace: crossplane-system
  annotations:
    crossplane.io/external-name: foobarbaz
spec:
  classSelector:
    matchLabels:
      className: "app-kubernetes-class"
  writeConnectionSecretToRef:
    name: app-kubernetes
---
# nodepool
apiVersion: container.gcp.crossplane.io/v1alpha1
kind: NodePool
metadata:
  name: gke-nodepool
  namespace: crossplane-system
spec:
  providerRef:
    name: gcp-provider
  writeConnectionSecretToRef:
    name: gke-nodepool
    namespace: crossplane-system

  forProvider:
    cluster: "projects/myproject-12345/locations/us-central1/clusters/foobarbaz"
    initialNodeCount: 2

Check the cluster connection secret using

$ kubectl get secret app-kubernetes -o yaml

The username and password fields are empty.

What environment did it happen in?

  • I am running crossplane on minikube
  • Crossplane version: 0.6
  • Cloud provider: GCP
  • Kubernetes version (use kubectl version)
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes distribution: GKE

More details

I first tried provisioning a cluster manually, providing a username in masterAuth (referring to this code)
https://github.com/crossplaneio/stack-gcp/blob/a6131969f4d1b2d6cbb0abd84cf4d452a1400367/pkg/clients/gke/gke.go#L98

through GKE's REST API (https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters/create). If I provide a username in the API here, sure enough, Basic Authentication is enabled and I get a username and password.
This makes me wonder if there's a problem on our side. I am looking into the codebase to check whether something is wrong.

Update 1

  • The masterAuth code above is for v1alpha3; v1beta1 does not seem to have Basic Authentication enabled by default:

https://github.com/crossplaneio/stack-gcp/blob/a6131969f4d1b2d6cbb0abd84cf4d452a1400367/pkg/clients/cluster/cluster.go#L255


@vadasambar vadasambar added the bug Something isn't working label Jan 3, 2020
@hasheddan (Member)

Hey @vadasambar! Thanks for opening this issue and apologies for any inconvenience you have experienced. I want to make sure I have a good understanding of where you are at in your troubleshooting now following your updates. It sounds like the connection Secret generated does have enough information to connect to the GKE cluster (specifically when a KubernetesApplication is scheduled to it), but you would like for it to provide a username / password for your own manual connection.

If this is indeed the state you are in, it is actually by design right now, while we determine a secure way to specify credentials on creation for v1beta1 resources. In the meantime, we do not support setting a username and password, per security guidelines in the GKE documentation. You can, however, request issuance of a client certificate with the following stanza:

masterAuth:
  clientCertificateConfig:
    issueClientCertificate: true

Let me know if this helps!
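As a point of reference, here is a hedged sketch of where that stanza would live in the class from the reproduction steps above; the placement under specTemplate.forProvider is an assumption based on the v1beta1 API shape, not something stated in the thread:

```yaml
# Sketch (assumption): masterAuth sits under specTemplate.forProvider
# of the GKEClusterClass shown earlier in this issue.
apiVersion: container.gcp.crossplane.io/v1beta1
kind: GKEClusterClass
metadata:
  name: app-kubernetes-class
specTemplate:
  forProvider:
    location: us-central1
    masterAuth:
      clientCertificateConfig:
        issueClientCertificate: true
  providerRef:
    name: gcp-provider
  reclaimPolicy: Delete
  writeConnectionSecretsToNamespace: crossplane-system
```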

@vadasambar (Author)

It sounds like the connection Secret generated does have enough information to connect to the GKE cluster (specifically when a KubernetesApplication is scheduled to it), but you would like for it to provide a username / password for your own manual connection.

I want a way to connect to the provisioned cluster. It does not have to be username/password.

masterAuth:
  clientCertificateConfig:
    issueClientCertificate: true

I will try this. Thank you!
I could not find anything related to this in the release notes or the documentation; I came across it just before reading your reply.

@vadasambar (Author)

I do get the cert fields populated now. Thank you!

But when I use the raw kubeconfig with the CLI (I wrote it to a config file):

$ kubectl get po --kubeconfig=./config
Error from server (Forbidden): pods is forbidden: User "client" cannot list resource "pods" in API group "" in the namespace "default"
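A possible workaround, offered here as an editor's sketch rather than something from the thread: the certificate's CN is "client", so an identity that already has access to the cluster (for example, the IAM user behind gcloud) could grant it permissions via RBAC:

```yaml
# Hypothetical RBAC grant for the client-certificate user "client".
# Apply with an already-authorized identity (e.g. one set up via gcloud).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: client-cert-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: client
```

Binding cluster-admin is shown only for illustration; a narrower ClusterRole would be the safer choice.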

@vadasambar (Author)

Thread on Slack about this issue: https://crossplane.slack.com/archives/CEG3T90A1/p1578059370045600

@muvaf (Member)

muvaf commented Jan 3, 2020

As far as I understand, there are 3 ways to authenticate to a GKE cluster (please correct me if I'm wrong):

  • Basic auth: simple username and password.
    • We do not support this for now until we have Secret input, which should be available in the short term.
  • Client cert: a user with almost no permissions.
    • I run into this when I manually create a GKE cluster from the console as well, so that's how they work, and we support it. The UX is bad, but there's not much we can do about it, I guess.
  • IAM auth: this is what gcloud uses; your user in GCP is actually a user in GKE cluster as well.
    • In the kubeconfig that gcloud generates, there is an auth provider referred that authenticates you. In kubeconfig, it looks something like this:
- name: gke_crossplane-playground_us-central1_foobarbaz
  user:
    auth-provider:
      config:
        access-token: <REDACTED>
        cmd-args: config config-helper --format=json
        cmd-path: /Users/username/sdks/google-cloud-sdk/bin/gcloud
        expiry: "2020-01-03T13:09:59Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp

I'm not sure whether it's possible, but we could investigate how to generate a kubeconfig that doesn't depend on gcloud for the case where neither basic auth nor the client cert is enabled. In the end, there is a Kubernetes user/serviceaccount in the cluster that you're using in any scenario, so it should be possible to fetch its raw credentials given the necessary GCP credentials. The most straightforward (but definitely not the best) way would be for the GKE controller to authenticate to the cluster using the credentials in Provider (via the Google GKE client, or even gcloud), find that user, and fetch its credentials.

I found this but didn't dig into it much: https://gist.github.com/ahmetb/548059cdbf12fb571e4e2f1e29c48997
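To illustrate the token-based route described above, here is a hedged kubeconfig sketch that bypasses both basic auth and the client certificate by embedding a short-lived OAuth2 access token directly. All placeholder values are hypothetical; the token could come from `gcloud auth print-access-token` or any OAuth2 client with the cloud-platform scope:

```yaml
# Sketch: kubeconfig with a bearer token instead of the gcloud
# auth-provider. <ENDPOINT>, <BASE64_CA>, and <TOKEN> are placeholders.
apiVersion: v1
kind: Config
clusters:
- name: gke
  cluster:
    server: https://<ENDPOINT>
    certificate-authority-data: <BASE64_CA>
users:
- name: token-user
  user:
    token: <TOKEN>
contexts:
- name: gke
  context:
    cluster: gke
    user: token-user
current-context: gke
```

Since such tokens expire (typically within an hour), this shape suits interactive use rather than a long-lived connection secret.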

@negz (Member)

negz commented Jan 3, 2020

I mean, in the end there is a Kubernetes user/serviceaccount in the cluster that you're using in any scenario

I'm fairly confident this isn't the case, unfortunately. The cluster doesn't actually have knowledge of what users exist; it typically defers to some external authentication system for that. For example, the CN of any valid auth certificate is used as the username when cert auth is in use.

@negz negz closed this as completed in #129 Jan 4, 2020
@vadasambar (Author)

Slack thread discussing the fix: https://crossplane.slack.com/archives/CKXQHM7U3/p1578063791101100

@muvaf (Member)

muvaf commented Jan 6, 2020

The cluster doesn't actually have knowledge of what users exist - it typically defers to some external authentication system for that.

Yes this seems to be the case, one more thing to learn for me. Thanks!

@echarles

echarles commented May 8, 2021

@Agarnier22

I'm reopening the subject, as the Slack threads are not accessible anymore (I don't know whether buying Pro would give me access to them?). I am facing the same problem of a client certificate with insufficient permissions. Could someone explain how to connect to the cluster with enough permissions (ideally admin) without using gcloud? All the other cloud providers provide a sufficient token or certificate to connect, but GCP seems a bit harder to use. Thank you.

@cdesaintleger

Hello, we are facing the same problem.
Please, can anyone tell us how to solve it?
Thank you 🙏🏻
