--kubeconfig cannot be set via DEVSPACE_FLAGS #2859

Closed
siredmar opened this issue May 29, 2024 · 4 comments · Fixed by #2860
Labels
kind/bug Something isn't working

Comments

@siredmar
Contributor

siredmar commented May 29, 2024

What happened?
I'm using a devspace command to spin up a kind cluster. The kubeconfig is located in the project's root, e.g. ./dev/run/kubeconfig, so it does not interfere with any existing real clusters.

I was using the DEVSPACE_FLAGS variable to set the --kubeconfig and --kube-context arguments.
When running a pipeline, DevSpace cannot access the kubeconfig and throws this error:

warn Unable to create new kubectl client: kube config is invalid

create_deployments: Please make sure you have an existing valid kube config. You might want to check one of the following things:

* Make sure you can use 'kubectl get namespaces' locally
* If you are using Loft, you might want to run 'devspace create space' or 'loft create space'

fatal exit status 1

I can use some really ugly workarounds that I really don't like, e.g.:

  1. writing a wrapper script around devspace that sets the arguments explicitly
  2. devspace run-pipeline init --kubeconfig dev/run/kubeconfig
  3. KUBECONFIG=./dev/run/kubeconfig devspace run-pipeline init

I liked the idea of keeping things clear, well defined, and explicit in YAML files.
So this seems like a bug to me.

What did you expect to happen instead?

I expected that I could pass the kubeconfig I like via DEVSPACE_FLAGS without any workarounds.

How can we reproduce the bug? (as minimally and precisely as possible)

My devspace.yaml:

version: v2beta1
name: foo
vars:
  DEVSPACE_FLAGS: -s --kubeconfig ./dev/run/kubeconfig --kube-context kind-mycluster

Local Environment:

  • DevSpace Version: 6.3.12
  • Operating System: linux
  • ARCH of the OS: AMD64

Kubernetes Cluster:

  • Cloud Provider: kind

Anything else we need to know?

At first glance, the problem seems to be that the KUBECONFIG variable is not evaluated after the root command is executed:

devspace/cmd/root.go

Lines 72 to 77 in 3212b31

if globalFlags.KubeConfig != "" {
    err := os.Setenv("KUBECONFIG", globalFlags.KubeConfig)
    if err != nil {
        log.Errorf("Unable to set KUBECONFIG variable: %v", err)
    }
}

Setting the --kubeconfig argument via the DEVSPACE_FLAGS variable should trigger setting the KUBECONFIG env variable as well.
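
A minimal sketch of what such a fix could look like. This is only an illustration, not DevSpace's actual code: applyKubeConfigFlag is a hypothetical helper that mirrors the root.go snippet above, and the idea is that it would also be invoked after the flags injected via DEVSPACE_FLAGS have been parsed, not only in the root command:

package main

import (
    "log"
    "os"
)

// applyKubeConfigFlag mirrors the snippet from root.go: it exports the
// --kubeconfig value as KUBECONFIG so that later kubectl client creation
// picks it up. The function name and the plain standard-library logger are
// illustrative only.
func applyKubeConfigFlag(kubeConfig string) {
    if kubeConfig == "" {
        return
    }
    if err := os.Setenv("KUBECONFIG", kubeConfig); err != nil {
        log.Printf("Unable to set KUBECONFIG variable: %v", err)
    }
}

func main() {
    // Value as it would arrive from --kubeconfig, whether passed on the
    // command line or injected via DEVSPACE_FLAGS.
    applyKubeConfigFlag("./dev/run/kubeconfig")
}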

@lizardruss
Collaborator

@siredmar

IMO your third option of setting the KUBECONFIG environment variable might be the most straightforward, since it's unlikely to have side effects:

$ KUBECONFIG=./dev/run/kubeconfig devspace run-pipeline init

Custom commands might be an option to clean it up a little more.

@siredmar
Contributor Author

@lizardruss thanks for responding. However, variant three doesn't seem ideal given that there already is an option called --kubeconfig.
Would you mind having a look at my PR #2860?

@lizardruss
Collaborator

@siredmar

I'll take a look. However, in my opinion, the --kubeconfig flag might be a good candidate for deprecation. We've learned that it's difficult to ensure consistency if DevSpace is also setting environment variables. I believe we've also deprecated other flags for similar reasons, so while it exists, I'd discourage its use unless it can be shown to be the only way to accomplish something.

@siredmar
Contributor Author

So, either deprecate it or make it work the way it promises to.
