The current implementation uses `gcloud container clusters get-credentials` to generate the temporary privileged kubeconfig. The implementation of that command (as seen here) just reads the user's gcloud config to get the credentials.
kubectl's implementation of the gcloud auth provider just pulls the user's tokens from the fields specified here.
The `cmd/eiam/internal/proxy/shell.go:startShell` method was generating a temporary kubeconfig for the privileged session using `gcloud container clusters get-credentials`. That command adds a new context to the kubeconfig whose user entry relies on the `gcp` auth provider plugin, which uses GCP credentials to provide tokens for kubectl to authenticate itself to the apiserver. The plugin simply instructs kubectl to run `gcloud config config-helper --format=json` and read the access token from the output, so kubectl commands end up authenticated as the default user account stored in the active gcloud config rather than the service account for the privileged session.
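A minimal diagnostic sketch (not the project's code, assuming client-go's `clientcmd` package) that loads the kubeconfig written by `gcloud container clusters get-credentials` and prints the gcp auth-provider settings kubectl will fall back to:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig that get-credentials wrote (path from $KUBECONFIG here).
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	for name, authInfo := range cfg.AuthInfos {
		if authInfo.AuthProvider == nil || authInfo.AuthProvider.Name != "gcp" {
			continue
		}
		// Typically contains cmd-path (gcloud binary) and
		// cmd-args "config config-helper --format=json", which is why kubectl
		// authenticates as the active gcloud account.
		fmt.Printf("user %q auth-provider config: %v\n", name, authInfo.AuthProvider.Config)
	}
}
```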
The fix is to add the service account's token info to the `access-token` and `expiry` fields in the generated kubeconfig. These fields are intended to serve as a token cache: kubectl uses the value in `access-token` to authenticate API calls until the time specified in `expiry`.
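A minimal sketch of that approach (again assuming client-go; the function name and call site are illustrative, not the actual patch): write the privileged session's token and expiry into the gcp auth provider's cache fields so kubectl never shells out to gcloud.

```go
package main

import (
	"time"

	"k8s.io/client-go/tools/clientcmd"
)

// addTokenToKubeconfig injects the service account's access token and expiry
// into the token-cache fields of every gcp auth-provider user entry.
func addTokenToKubeconfig(kubeconfigPath, accessToken string, expiry time.Time) error {
	cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
	if err != nil {
		return err
	}
	for _, authInfo := range cfg.AuthInfos {
		if authInfo.AuthProvider == nil || authInfo.AuthProvider.Name != "gcp" {
			continue
		}
		if authInfo.AuthProvider.Config == nil {
			authInfo.AuthProvider.Config = map[string]string{}
		}
		// kubectl treats these two fields as a token cache: access-token is
		// used for API calls until the expiry timestamp has passed.
		authInfo.AuthProvider.Config["access-token"] = accessToken
		authInfo.AuthProvider.Config["expiry"] = expiry.Format(time.RFC3339)
	}
	return clientcmd.WriteToFile(*cfg, kubeconfigPath)
}
```

Because the cached token is only honored until `expiry`, the injected value naturally stops working when the privileged session's token expires.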