
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate | Rancher EKS Cluster #1767

Closed
iridian-ks opened this issue Mar 31, 2022 · 26 comments
Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@iridian-ks

What happened (please include outputs or screenshots):

Here's me using kubectl. I'm just expecting the equivalent Python to work.

→ export KUBECONFIG=~/.kube/config
→ kubectl get no
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-122-0-141.us-west-2.compute.internal   Ready    <none>   23h   v1.20.11-eks-f17b81
ip-10-122-16-82.us-west-2.compute.internal   Ready    <none>   23h   v1.20.11-eks-f17b81

Here's my attempt at doing the same thing in Python, which gets different results.

>>> import kubernetes
>>> kubernetes.config.load_config()
>>> v1 = kubernetes.client.CoreV1Api()
>>> v1.list_node()
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(                                                                                                                                           File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 386, in _make_request
    self._validate_conn(conn)                                                                                                                                                        File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 1040, in _validate_conn
    conn.connect()                                                                                                                                                                   File "/usr/local/lib/python3.9/dist-packages/urllib3/connection.py", line 414, in connect
    self.sock = ssl_wrap_socket(                                                                                                                                                     File "/usr/local/lib/python3.9/dist-packages/urllib3/util/ssl_.py", line 449, in ssl_wrap_socket
    ssl_sock = _ssl_wrap_socket_impl(                                                                                                                                                File "/usr/local/lib/python3.9/dist-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)                                                                                                            File "/usr/lib/python3.9/ssl.py", line 500, in wrap_socket
    return self.sslsocket_class._create(                                                                                                                                             File "/usr/lib/python3.9/ssl.py", line 1040, in _create
    self.do_handshake()
  File "/usr/lib/python3.9/ssl.py", line 1309, in do_handshake
    self._sslobj.do_handshake()                                                                                                                                                    ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1123)
                                                                                                                                                                                   During handling of the above exception, another exception occurred:

OK, doesn't work. Let's try loading the kube_config directly.

>>> import os
>>> os.path.exists(os.environ["KUBECONFIG"])
True
>>> kubernetes.config.load_kube_config(os.environ["KUBECONFIG"])
>>> v1 = kubernetes.client.CoreV1Api()
>>> v1.list_node()  
# SAME ERROR AS ABOVE

OK, digging through GitHub issues I came across #1622, which leads to #36.
I can try that...

>>> from kubernetes import client
>>> from kubernetes import config                                                                                                                                                 
>>> from kubernetes.client.api import core_v1_api
>>> config.load_config()
>>> configuration = client.Configuration()
>>> configuration.assert_hostname = False
>>> configuration.verify_ssl = True
>>> client.Configuration.set_default(configuration)
>>>
>>> v1 = core_v1_api.CoreV1Api()
>>> v1.list_node()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.9/dist-packages/kubernetes/client/api/core_v1_api.py", line 16844, in list_node
    return self.list_node_with_http_info(**kwargs)  # noqa: E501
  File "/usr/local/lib/python3.9/dist-packages/kubernetes/client/api/core_v1_api.py", line 16951, in list_node_with_http_info
    return self.api_client.call_api(
  File "/usr/local/lib/python3.9/dist-packages/kubernetes/client/api_client.py", line 348, in call_api
    return self.__call_api(resource_path, method,
  File "/usr/local/lib/python3.9/dist-packages/kubernetes/client/api_client.py", line 180, in __call_api
    response_data = self.request(
  File "/usr/local/lib/python3.9/dist-packages/kubernetes/client/api_client.py", line 373, in request
    return self.rest_client.GET(url,
  File "/usr/local/lib/python3.9/dist-packages/kubernetes/client/rest.py", line 240, in GET
    return self.request("GET", url,
  File "/usr/local/lib/python3.9/dist-packages/kubernetes/client/rest.py", line 213, in request
    r = self.pool_manager.request(method, url,
  File "/usr/local/lib/python3.9/dist-packages/urllib3/request.py", line 74, in request
    return self.request_encode_url(
  File "/usr/local/lib/python3.9/dist-packages/urllib3/request.py", line 96, in request_encode_url
    return self.urlopen(method, url, **extra_kw)
  File "/usr/local/lib/python3.9/dist-packages/urllib3/poolmanager.py", line 376, in urlopen
    response = conn.urlopen(method, u.request_uri, **kw)
  File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 692, in urlopen
    conn = self._get_conn(timeout=pool_timeout)
  File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 281, in _get_conn
    return conn or self._new_conn()
  File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 235, in _new_conn
    conn = self.ConnectionCls(
  File "/usr/local/lib/python3.9/dist-packages/urllib3/connection.py", line 130, in __init__
    _HTTPConnection.__init__(self, *args, **kw)
TypeError: __init__() got an unexpected keyword argument 'assert_hostname'

OK, what if we just disable SSL verify instead?

>>> from kubernetes import client
>>> from kubernetes import config
>>> from kubernetes.client.api import core_v1_api
>>> config.load_config()
>>> configuration = client.Configuration()
>>> configuration.verify_ssl = False
>>> client.Configuration.set_default(configuration)
>>> v1 = core_v1_api.CoreV1Api()
>>> v1.list_node()
...
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /api/v1/nodes (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f245a5701f0>: Failed to establish a new connection: [Errno 111] Connection refused'))

I can't get this client to work in any way that I've tried...
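(A likely explanation for the localhost:80 error in the last attempt: a fresh client.Configuration() starts from library defaults, http://localhost, rather than the host and credentials that load_config() stored in the default configuration. A minimal sketch that copies the loaded defaults before tweaking them:)

from kubernetes import client, config

config.load_config()

# Start from the configuration that load_config() just populated (host, token,
# CA bundle) instead of building an empty client.Configuration() from scratch.
configuration = client.Configuration.get_default_copy()
configuration.assert_hostname = False  # example tweak from the attempt above
client.Configuration.set_default(configuration)

v1 = client.CoreV1Api()
print(len(v1.list_node().items))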

What you expected to happen:

Anything to work.

How to reproduce it (as minimally and precisely as possible):

Many examples above.

Anything else we need to know?:

Environment:

  • Kubernetes version (kubectl version):
→ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-02-16T12:30:48Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.11-eks-f17b81", GitCommit:"f17b810c9e5a82200d28b6210b458497ddfcf31b", GitTreeState:"clean", BuildDate:"2021-10-15T21:46:21Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.23) and server (1.20) exceeds the supported minor version skew of +/-1
  • OS (e.g., MacOS 10.13.6):
→ uname -a
Linux 7CWL5Y2 5.10.16.3-microsoft-standard-WSL2 #1 SMP Fri Apr 2 22:23:49 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
  • Python version (python --version)
# python3 --version
Python 3.9.2
  • Python client version (pip list | grep kubernetes)
# pip list | grep kubernetes
kubernetes          20.13.0

I originally tried the very latest version of this library (v23), but then tested a downgrade to see if that would work too. I downgraded so that the client version matches the server version.

@iridian-ks added the kind/bug label Mar 31, 2022
@iridian-ks
Author

iridian-ks commented Mar 31, 2022

This is how I was able to connect:

import os
import yaml
import kubernetes
import base64

with open(os.environ["KUBECONFIG"], "r"):
    kubeconfig = yaml.load(fd.read(), yaml.FullLoader)

cluster = kubeconfig["clusters"][0]
ca_b64 = cluster["cluster"]["certificate-authority-data"]
host = cluster["cluster"]["server"]
user = kubeconfig["users"][0]
token = user["user"]["token"]
ca = base64.b64decode(ca_b64.encode("utf-8").decode("utf-8")
with open("ca.crt", "w") as fd:
  fd.write(ca) 
configuration = kubernetes.client.Configuration()
configuration.host = host
configuration.api_key = {"authorization": token}
configuration.api_key_prefix = {"authorization": "bearer"}
configuration.ssl_ca_crt = "ca.crt"
# The above does not work so I still needed to disable SSL
configuration.verify_ssl = False
client = kubernetes.client.ApiClient(configuration)
v1 = kubernetes.client.CoreV1Api(client)
v1.list_node()
# Success!

Not ideal, as SSL looks totally broken. But glad I was able to get it to work. I suppose I was expecting this to be handled for me. :(

@roycaihw
Member

/assign @yliaog

@yliaog
Contributor

yliaog commented Apr 12, 2022

There seem to be issues in your use of the client library. Could you try the given example in https://github.com/kubernetes-client/python#examples?
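For reference, the list-all-pods example from that page is roughly:

from kubernetes import client, config

# Configs can be set in the Configuration class directly or using this helper.
config.load_kube_config()

v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))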

@jameslewis4891

jameslewis4891 commented Apr 16, 2022

I am seeing the exact same issue

from kubernetes import client, config
import ssl


def get_pods():
    config.load_kube_config()

    v1 = client.CoreV1Api()

    print(ssl.OPENSSL_VERSION)
    print("Listing pods with their IPs:")

    ret = v1.list_pod_for_all_namespaces()

    for i in ret.items:
        print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
OpenSSL 1.1.1n  15 Mar 2022
Listing pods with their IPs:
{"@l":"WARNING","@t":"2022-04-16T11:58:10.169Z","@n":"urllib3.connectionpool","@m":"Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1129)'))': /api/v1/pods"}
{"@l":"WARNING","@t":"2022-04-16T11:58:10.830Z","@n":"urllib3.connectionpool","@m":"Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1129)'))': /api/v1/pods"}
{"@l":"WARNING","@t":"2022-04-16T11:58:11.491Z","@n":"urllib3.connectionpool","@m":"Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1129)'))': /api/v1/pods"}

urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='kube-oidc-proxy.aws-us-east-1.gpt.williamhill.plc', port=443): Max retries exceeded with url: /api/v1/pods (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1129)')))

Pipfile

[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
flask = "==2.1.1"
boto3 = "==1.21.32"
flask-restx = "==0.5.1"
importlib-resources = "==5.6.0"
awscli = "==1.22.87"
typing-extensions = "==4.1.1"
waitress = "==2.1.1"
requests = "==2.27.1"
kubernetes = "==23.3.0"

[dev-packages]
pytest = "==7.1.1"
pytest-flask = "==1.2.0"
coverage = "==6.3.2"
pytest-cov = "==3.0.0"
moto = "==3.1.5.dev9"

[requires]
python_version = "3.9"

Locally, kubectl can get all pods on the remote EKS cluster after setting up .kube/config:

kubectl get pods --all-namespaces
NAMESPACE                                NAME                                                              READY   STATUS             RESTARTS   AGE
*                                       *                                                                  2/2     Running            0          11d
*                                       *                                                                  2/2     Running            0          11d
*                                       *                                                                  1/1     Running            0          32h
*                                       *                                                                  2/2     Running            0          11d
*                                       *                                                                  2/2     Running            0          11d

Confirmed this also worked for me -> #1767 (comment)

@maver1ck

maver1ck commented Jun 7, 2022

Maybe it's a bug in Rancher? I have a similar issue on-premises.

@Lerentis

Lerentis commented Aug 8, 2022

I have the same issue and I am not using Rancher. kubectl is able to use

...
clusters:
- cluster:
    certificate-authority-data: ...
...

while the Python client seems to ignore this information. I disabled validation directly in the kubeconfig in the end:

    with open(os.environ["KUBECONFIG"], "r") as fd:
        kubeconfig = yaml.load(fd, Loader=yaml.FullLoader)
        kubeconfig["clusters"][0]["cluster"]["insecure-skip-tls-verify"] = True
    with open(os.environ["KUBECONFIG"], "w") as fd:
        yaml.dump(kubeconfig, fd, default_flow_style=False)

    config.load_kube_config(os.environ['KUBECONFIG'])

Note that these modifications ultimately break kubectl, since specifying a root certificate file together with the insecure flag is no longer allowed, so a fix to honor clusters[0].cluster.certificate-authority-data would be highly appreciated.

I would provide a patch as well, but I am not sure about the logic of where to set the CA here so that this if statement here evaluates to True. If somebody (probably @yliaog) has a hint, I could try to create a patch 😃
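(In the meantime, a hedged sketch of a user-level workaround that keeps TLS verification on, assuming a single cluster and a static token user as in the kubeconfig snippets above: write certificate-authority-data to a file and point the client Configuration at it via ssl_ca_cert.)

import base64
import os
import tempfile

import yaml
from kubernetes import client

with open(os.environ["KUBECONFIG"]) as fd:
    kubeconfig = yaml.safe_load(fd)

cluster = kubeconfig["clusters"][0]["cluster"]
user = kubeconfig["users"][0]["user"]

# Persist the CA bundle from certificate-authority-data so verify_ssl can stay on.
ca_file = tempfile.NamedTemporaryFile(delete=False, suffix=".crt")
ca_file.write(base64.b64decode(cluster["certificate-authority-data"]))
ca_file.close()

configuration = client.Configuration()
configuration.host = cluster["server"]
configuration.ssl_ca_cert = ca_file.name  # note the attribute spelling: ssl_ca_cert
configuration.api_key = {"authorization": user["token"]}  # assumes a static token user
configuration.api_key_prefix = {"authorization": "Bearer"}

v1 = client.CoreV1Api(client.ApiClient(configuration))
print(len(v1.list_node().items))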

@dheeg

dheeg commented Aug 17, 2022

Thanks, I had the same problem as well and wanted to keep kubectl working.

I generate my client now like this until this is fixed:

import os
import yaml
import tempfile
from pathlib import Path
from kubernetes import client, config
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


def gen_client():
    kube_config_orig = f'{Path.home()}/.kube/config'
    tmp_config = tempfile.NamedTemporaryFile().name

    with open(kube_config_orig, "r") as fd:
        kubeconfig = yaml.load(fd, Loader=yaml.FullLoader)
    for cluster in kubeconfig["clusters"]:
        cluster["cluster"]["insecure-skip-tls-verify"] = True
    with open(tmp_config, "w") as fd:
        yaml.dump(kubeconfig, fd, default_flow_style=False)

    config.load_kube_config(tmp_config)
    os.remove(tmp_config)

    return client.CoreV1Api()


v1 = gen_client()

@santosh0705

I just started with this module and came across this issue. Is this the standard behaviour?
The above solution didn't work for me.

@EraYaN

EraYaN commented Dec 7, 2022

The client just does not seem to read certificate-authority-data at all and then (obviously) fails to connect.

EDIT: It does seem to read it correctly, but somehow it's broken for some clusters. With a remote cluster on AKS it works just fine. The broken cluster is an in-LAN cluster created with kubeadm.

The one difference I can detect is that AKS uses 4096-bit RSA while kubeadm uses 2048-bit, and the kubeadm CA cert has the following extra extensions:

X509v3 Subject Key Identifier:
    D7:FB:F3<snip>
X509v3 Subject Alternative Name:
    DNS:kubernetes

And it connects to a nonstandard port, which might also cause the issue.

configuration.verify_ssl = False
configuration.client_side_validation = False

These have no effect, which is curious.

@DanInProgress

Just to help any others who come across this: I had a similar error message while using Ansible with the kubernetes.core modules.

Likely not the same cause as above, but the certificate that I provided in certificate-authority-data was not self-signed. Our internal setup used:

  • a per-worker certificate (hereinafter worker)
  • signed by a cluster-specific intermediary CA (hereinafter cluster)
  • signed by a department-level intermediary CA (hereinafter department)
  • signed by a root CA (hereinafter company)
  • signed by company itself (self-signed).

The value of certificate-authority-data in my ~/.kube/config was effectively set to:

Issuer: department
Subject: cluster
...

This doesn't make a valid certificate chain, but using an intermediary as your certificate-authority-data does work with kubectl and any tooling based on kubernetes/client-go.

I resolved my issue by setting certificate-authority-data to something more like:

Issuer: company
Subject: company
...

This wouldn't be ideal for inter-worker communication (it broadens the trust scope a great deal), but it will be fine for my local config.

I wouldn't be surprised if other tooling (à la kubeadm) generates a similar config when configured to get its certificates from an issuer like Vault.

The only thing left that is curious to me is why it didn't resolve the root certificate from the operating system store (since company is present in my trust store), but that's an investigation for another day.

Hope this helps someone :)

@OriHoch

OriHoch commented Dec 13, 2022

I also encountered this problem. The following is some debug info which might be useful; it seems like a problem with urllib3.

Reproduction steps:

  • set env vars
    • USERNAME=<YOUR KUBECTL CONFIG USER NAME>
    • CLUSTER=<YOUR KUBECTL CONFIG CLUSTER NAME>
    • CACERT=<FILE TO STORE THE CA CERTIFICATE IN>
    • TOKEN="$(kubectl config view --raw "-ojsonpath={.users[?(@.name=='${USERNAME}')].user.token}")"
    • kubectl config view --raw "-ojsonpath={.clusters[?(@.name=='${CLUSTER}')].cluster['certificate-authority-data']}" | base64 -d > $CACERT
    • SERVER="$(kubectl config view --raw "-ojsonpath={.clusters[?(@.name=='${CLUSTER}')].cluster.server}")"
  • make a request to Kubernetes using curl
    • curl -H "authorization: Bearer ${TOKEN}" "${SERVER}/api/v1/nodes" --cacert $CACERT
    • request succeeds
  • make the same request using urllib3 (same way the kubernetes python client does it)
    • python3 -c "import urllib3; urllib3.PoolManager(ca_certs='${CACERT}').request('GET', '${SERVER}/api/v1/nodes', headers={'authorization': 'Bearer ${TOKEN}'})"

Expected

  • Request using urllib3 should succeed

Actual

  • Request using urllib3 fails with following error:
  • urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='***', port=443): Max retries exceeded with url: ****/api/v1/nodes (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1131)')))

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Mar 13, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Apr 12, 2023
@Lerentis

As there seems to be no traction here, and by the looks of the investigation done by @OriHoch the issue might not even relate to this repository, could we maybe document the workaround provided by @dheeg in #1767 (comment), @yliaog?
While I am not really happy with the security implications, there seems to be very little that can be done.

@claudio-walser

This really does seem to be an issue. We ran into it some days ago after renewing our cluster certificate.
Everything works fine with the same config using kubectl itself; only the Python library has an issue for some reason.

@yliaog the list-all-pods example from https://github.com/kubernetes-client/python#examples does not work either. Is there any intention to fix this?

@claudio-walser

Besides the root CA, we also had to add the issuing CA to the system, which fixed the issue.
I agree with @OriHoch that this issue is most probably located in urllib3.

@aravindhkudiyarasan

This is still an issue with the kubernetes module on the latest stable version too. Please help us with a resolution.

@aravindhkudiyarasan

Error: urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='***', port=443): Max retries exceeded with url: ****/api/v1/nodes (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1131)')))

@intfish123

(quoting @dheeg's gen_client workaround from the comment above)

This solved my problem very nicely, big thumbs up 👍

@OriHoch

OriHoch commented May 31, 2023

I found a permanent fix for this problem which should also be secure. The following script fixes a kubeconfig file's certificate-authority-data to a valid value based on what is actually used when connecting to the relevant server.

https://raw.githubusercontent.com/Kamatera/kamateratoolbox-iac/main/bin/fix_kubeconfig_ca_certs.py

Explanation: after a lot of research into SSL, I found the source of the problem in my case to be a mismatch in the root certificate. I'm using a Let's Encrypt certificate, and the certificate chain served by the server differs from the actual chain used by the client (e.g. when using curl or a web browser). For some reason the Python ssl library does not take the client certificate store into account when a cafile is provided, and I think that's the reason it fails. My fix uses Python code to fetch the certificate chain that is actually used by the client, and then uses that as the certificate-authority-data.
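(The cafile behaviour described above is visible directly in the standard library: when a cafile is passed, ssl.create_default_context() loads only that file and skips the OS trust store. A small illustration with a hypothetical ca.crt:)

import ssl

# With a cafile, create_default_context() calls load_verify_locations() only,
# so certificates in the OS trust store are not consulted at all.
only_kubeconfig_ca = ssl.create_default_context(cafile="ca.crt")

# Loading the defaults first and then adding the kubeconfig CA trusts both.
both = ssl.create_default_context()
both.load_verify_locations(cafile="ca.crt")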

@aravindhkudiyarasan

(quoting @dheeg's gen_client workaround and @intfish123's reply above)

This is not working. Below are the error details.

    pod_list = v1.list_pod_for_all_namespaces().items
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api/core_v1_api.py", line 17309, in list_pod_for_all_namespaces
    return self.list_pod_for_all_namespaces_with_http_info(**kwargs)  # noqa: E501
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api/core_v1_api.py", line 17430, in list_pod_for_all_namespaces_with_http_info
    collection_formats=collection_formats)
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 353, in call_api
    _preload_content, _request_timeout, _host)
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 184, in __call_api
    _request_timeout=_request_timeout)
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 377, in request
    headers=headers)
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/rest.py", line 245, in GET
    query_params=query_params)
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/rest.py", line 235, in request
    raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'Content-Type': 'text/plain; charset=utf-8', 'X-Content-Type-Options': 'nosniff', 'Date': 'Wed, 31 May 2023 08:00:59 GMT', 'Content-Length': '10'})
HTTP response body: Forbidden

@aravindhkudiyarasan

@OriHoch I generated the kubeconfig file with your Python script and used it to fetch data with the Kubernetes provider and also with kubectl, but I am still getting a cert error.

Unable to connect to the server: tls: failed to verify certificate: x509: certificate signed by unknown authority

@yliaog I am looking for a way to move forward with the Kubernetes provider. Why can't we skip SSL verification in the latest kubernetes module?

@gjhenrique

While troubleshooting with Wireshark and comparing against the kubectl request, I discovered that the plugin doesn't set the SNI value in the TLS handshake. This value should be derived from the tls-server-name property in the kubeconfig.

I considered opening a PR, but I found that there's already an open one (#1933) addressing the same issue.

As a temporary solution, I directly set addition_pool_args['server_hostname'] to the value of tls-server-name in the init method of the client/rest.py file, until that PR is merged and released.
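(A rough illustration of the missing piece, not the actual client/rest.py patch: urllib3 accepts a server_hostname that is sent as SNI and used for hostname verification independently of the connection host. The host, CA path, and token below are placeholders, and this assumes urllib3 1.26+.)

import urllib3

pool = urllib3.PoolManager(
    ca_certs="ca.crt",                        # cluster CA bundle (placeholder path)
    server_hostname="kubernetes.example.com"  # the kubeconfig's tls-server-name
)
resp = pool.request(
    "GET",
    "https://10.0.0.1:6443/api/v1/nodes",     # placeholder API server address
    headers={"authorization": "Bearer <token>"},
)
print(resp.status)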

@basura-persistent

Hi everyone. Any guidance on how to resolve this issue when using config.load_incluster_config?

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Jan 20, 2024