
Clients are not working if the k8s cluster requires a proxy to access #333

Closed
abhi24k opened this issue Aug 27, 2017 · 8 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


abhi24k commented Aug 27, 2017

I am trying to reach a k8s cluster hosted in AWS. With the kubectl command I can list all the nodes in the cluster, since the proxy is set in the environment, but with the kubernetes-incubator Python client I cannot, and I don't know where to set the proxy.

Code snippet:

from kubernetes import client, config
from kubernetes.client import configuration

# Load credentials and the cluster endpoint from a kubeconfig file.
config.load_kube_config('/root/.kube/config.devenv')
print(configuration.host)

v1 = client.CoreV1Api()
v1.list_node()

Error output:
https://abhishek-api.dev.devops.net
2017-08-27 03:28:40,499 WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fdccc9d7210>: Failed to establish a new connection: [Errno 110] Connection timed out',)': /api/v1/nodes
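For context on why kubectl succeeds while the client times out: kubectl picks up the HTTP_PROXY/HTTPS_PROXY environment variables, but the Python client's urllib3-based REST layer only routes requests through a proxy when one is set on the client Configuration object, which is what the replies below work out.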


IamPrvn commented Jan 6, 2018

try this:

from kubernetes import client, config
from kubernetes.client import Configuration

conf = Configuration()
conf.http_proxy_url = "<your proxy>"
config.load_kube_config(client_configuration=conf)
v1 = client.CoreV1Api()
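Note: depending on the client version, the attribute the REST layer actually reads may be Configuration.proxy rather than http_proxy_url; the working example further down this thread sets .proxy.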


montyz commented Jun 6, 2018

@IamPrvn If I try the above, it sets the proxy but then doesn't load the kubeconfig file and tries to contact localhost:443. It just seems to ignore the kubeconfig:

config.load_kube_config(config_file=kube_config,
                        context=context, 
                        client_configuration=conf)

Any ideas how to get both to work?

@jueast08

@montyz Have you been able to work out this problem?


montyz commented Jun 18, 2018

@jueast08 the following worked for me, though it is a bit of a hack. The trick was that I had to call load_kube_config to cause Configuration._default to be created, and then modify that directly with the proxy URL before creating the CoreV1Api object. Perhaps there is a better way to do this via constructors, but I couldn't figure it out. Any suggestions welcome.

import os
import logging

from kubernetes import client, config

kube_config = os.getenv('KUBE_CONFIG')
context = os.getenv('CONTEXT')
proxy_url = os.getenv('HTTP_PROXY', None)

config.load_kube_config(config_file=kube_config,
                        context=context)
if proxy_url:
    logging.warning("Setting proxy: {}".format(proxy_url))
    # Modify the default Configuration that load_kube_config populated.
    client.Configuration._default.proxy = proxy_url

kubernetes_client = client.CoreV1Api()
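A variant that avoids poking the private _default attribute is to load the kubeconfig into an explicit Configuration and hand that to the ApiClient yourself. This is only a sketch, assuming a reasonably recent client where load_kube_config accepts client_configuration and ApiClient accepts configuration (the KUBE_CONFIG, CONTEXT, and HTTP_PROXY environment variables follow the snippet above):

import os
import logging

from kubernetes import client, config

# Load the kubeconfig into an explicit Configuration rather than the
# process-wide default singleton.
cfg = client.Configuration()
config.load_kube_config(config_file=os.getenv('KUBE_CONFIG'),
                        context=os.getenv('CONTEXT'),
                        client_configuration=cfg)

# The urllib3-based REST layer only uses a proxy when Configuration.proxy
# is set; proxy environment variables are not consulted.
proxy_url = os.getenv('HTTP_PROXY')
if proxy_url:
    logging.warning("Setting proxy: %s", proxy_url)
    cfg.proxy = proxy_url

# Pass the populated Configuration explicitly, so the bare CoreV1Api()
# default (localhost) is never used.
kubernetes_client = client.CoreV1Api(api_client=client.ApiClient(configuration=cfg))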

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 24, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 24, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
