Examples show off load_kube_config() as the Right Way to set up the module, but an attempt at load_incluster_config() is also required to match the behavior of kubectl and work from inside pods #1005

Closed
adamnovak opened this issue Nov 12, 2019 · 34 comments
Labels: help wanted, kind/documentation, lifecycle/rotten

Comments

@adamnovak

Link to the issue (please include a link to the specific documentation or example):

See the examples in the README:

https://github.com/kubernetes-client/python#examples

Description of the issue (please include outputs or screenshots if possible):

The examples of how to set up the module all (except for in_cluster_config.py) look like this:

from kubernetes import client, config
config.load_kube_config()
v1 = client.CoreV1Api()
# Do stuff with Kubernetes

This gives the impression that this is all you need to do to pick up "the" Kubernetes configuration that your user is going to expect you to use (i.e. whatever kubectl would use). However, this is not the case.

If you are running in a pod, and you want to use the configuration that kubectl picks up (for the pod's service account, talking to the current Kubernetes cluster), you need to run config.load_incluster_config() if/when config.load_kube_config() fails. Since, outside of very specialized situations, you don't really know in advance where your users will run your software or which method will produce the actual Kubernetes credentials, the Right Way to connect to Kubernetes is not a single method call but a try/except, something like this:

from kubernetes import config

try:
    config.load_kube_config()
except Exception:
    # load_kube_config() raises if there is no config file, but it does not
    # document what it raises, so we can't rely on any particular exception type.
    config.load_incluster_config()

The examples in the README, and possibly in the examples folder, should be changed to demonstrate credential loading that works like kubectl and pulls from either of these sources as available.

Ideally, the two utility methods should be merged/wrapped in a utility method that loads whichever config is available.
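A minimal sketch of what such a wrapper could look like, using only the two existing loaders (the name load_config_with_fallback is made up here for illustration):

from kubernetes import config

def load_config_with_fallback():
    # Try the kubeconfig file first, matching what kubectl does by default.
    try:
        config.load_kube_config()
    except Exception:
        # No usable kubeconfig; assume we are running inside a pod and use
        # the mounted service account instead.
        config.load_incluster_config()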

It looks like a similar proposal (with the order reversed) was made as part of #487, but that was part of a larger request, and it was killed by the stale bot without anyone actually solving this particular problem.

@roycaihw
Member

/assign @fabianvf

@adamnovak
Author

Some documentation of the Right Way to get the "current" namespace would also be helpful here; the approach that works when using config files (calling config.list_kube_config_contexts() to get the active context, then asking it for its namespace) doesn't work when using a pod's service account, where you need to read /var/run/secrets/kubernetes.io/serviceaccount/namespace instead. Most of the examples seem written to avoid this by calling methods that don't need a namespace, or by hardcoding "default".
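For illustration, a rough sketch of namespace detection that covers both cases (current_namespace is a made-up helper, not part of the library; the file path is the standard service account mount):

import os
from kubernetes import config

SERVICE_ACCOUNT_NAMESPACE = "/var/run/secrets/kubernetes.io/serviceaccount/namespace"

def current_namespace():
    # Inside a pod, the service account's namespace is mounted as a file.
    if os.path.exists(SERVICE_ACCOUNT_NAMESPACE):
        with open(SERVICE_ACCOUNT_NAMESPACE) as f:
            return f.read().strip()
    # Otherwise fall back to the active context in the kubeconfig file,
    # defaulting to "default" if the context doesn't set a namespace.
    _, active_context = config.list_kube_config_contexts()
    return active_context["context"].get("namespace", "default")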

@ramnes
Contributor

ramnes commented Nov 14, 2019

Frankly, all of this would be much simpler with a single load_config function that does the try-except mentioned above.

@fabianvf
Contributor

Yeah, I think a general load_config function that more closely mimics what kubectl does would make sense; I've always ended up reimplementing it in projects that use the client. The configuration code is all here: https://github.com/kubernetes-client/python-base/tree/master/config. Is there anyone with the bandwidth/desire to pick this up?

@adamnovak
Author

This is also related to #741: once the config has been loaded, credentials that expire aren't refreshed in the background. So the Right Way to get credentials ought to include some kind of periodic refresh, or at least a note that you need it.
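As a rough illustration of the workaround idea (not a complete fix), one could re-run the loader on a timer; note that existing ApiClient instances may keep their old settings, so clients might need to be recreated after each reload:

import threading
from kubernetes import config

def keep_credentials_fresh(interval_seconds=300):
    # Re-read the kubeconfig so that rotated credentials are picked up.
    config.load_kube_config()
    # Schedule the next refresh; a daemon timer won't keep the process alive.
    timer = threading.Timer(interval_seconds, keep_credentials_fresh, [interval_seconds])
    timer.daemon = True
    timer.start()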

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Feb 17, 2020
@adamnovak
Author

Most of the examples still ignore load_incluster_config.

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on Feb 18, 2020
@roycaihw added the help wanted label on Feb 19, 2020
@roycaihw
Member

client-go has a function that does "try and fall back". We should have a method that mimics its behavior.

@iamneha
Contributor

iamneha commented Feb 24, 2020

I would like to work on this issue.

/assign @iamneha

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on May 24, 2020
@adamnovak
Author

The Python examples still don't note the need for load_incluster_config().

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on May 26, 2020
@moshevayner
Member

Hey @iamneha
Are you actively working on that issue?
If not, I'd like to take a stab at it.
Please let me know if you have any objections.

@moshevayner
Member

Looks like this isn't being actively worked on.
I'm going to try and start working on that over the weekend.
/assign

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Sep 23, 2020
@dinvlad

dinvlad commented Sep 23, 2020

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on Sep 23, 2020
@Frankkkkk

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on Jun 22, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Sep 20, 2021
@Frankkkkk

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on Sep 22, 2021
@seanocca

/remove-lifecycle stale

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Feb 15, 2022
@Frankkkkk

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on Feb 15, 2022
tiraboschi added a commit to tiraboschi/hyperconverged-cluster-operator that referenced this issue Mar 30, 2022
Try to load KUBECONFIG first and fall back to the in-cluster config only if that fails, to mimic kubectl behaviour.

See: kubernetes-client/python#1005

Signed-off-by: Simone Tiraboschi <stirabos@redhat.com>
kubevirt-bot pushed a commit to kubevirt/hyperconverged-cluster-operator that referenced this issue Mar 31, 2022
Try to load KUBECONFIG first and fall back to the in-cluster config only if that fails, to mimic kubectl behaviour.

See: kubernetes-client/python#1005

Signed-off-by: Simone Tiraboschi <stirabos@redhat.com>
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on May 16, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jun 15, 2022
@dinvlad

dinvlad commented Jun 15, 2022

/remove-lifecycle stale

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

benclifford added a commit to Parsl/parsl that referenced this issue Apr 29, 2024
The KubernetesProvider now falls back to loading config in-cluster if a kube-config file is not found. This allows in-cluster submission of parsl jobs.

See kubernetes-client/python#1005 for a good description of the issue.


Co-authored-by: T. Andrew Manning <56734137+manning-ncsa@users.noreply.github.com>
Co-authored-by: Matt Fisher <mfisher87@gmail.com>
Co-authored-by: Ben Clifford <benc@hawaga.org.uk>
johanneskoester pushed a commit to snakemake/snakemake-executor-plugin-kubernetes that referenced this issue Aug 14, 2024
The current setup for the plugin assumes that we have a kubeconfig file
used to connect to the cluster. However, this is not the case if the
code submitting the job is itself running in a pod in the cluster, as we
will need to use the default service account provided by the cluster.
This modifies the init to catch any errors when loading a kubeconfig file and use the in-cluster configuration instead.

The issue in the python kubernetes client here describes the problem:
kubernetes-client/python#1005