
Getting a "cannot fetch certificate" when working with kubeseal client #317

Closed
creydr opened this issue Nov 14, 2019 · 42 comments
@creydr

creydr commented Nov 14, 2019

When trying to seal a secret with the kubeseal client as follows, kubeseal hangs:

kubeseal < mysecret.yml -o yaml

When I set a timeout with the --request-timeout option, I get a more detailed message:

E1114 14:56:18.638781    8199 round_trippers.go:174] CancelRequest not implemented by *oidc.roundTripper
E1114 14:56:18.639062    8199 request.go:858] Unexpected error when reading response body: net/http: request canceled (Client.Timeout exceeded while reading body)
error: cannot fetch certificate: Unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout exceeded while reading body)

Using it with an explicit certificate works:
kubeseal < mysecret.yml -o yaml --cert certfile.cert

What am I doing wrong?

Some details about my setup:

  • using release version 0.9.5 (client & controller)
  • accessing the cert.pem via port-forwarding the sealed-secrets-controller service and downloading it from /v1/cert.pem works
  • RBAC is enabled (but the used user has full permissions on the resources in the namespace)

Thanks for your help
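For reference, when no --cert is given, kubeseal fetches the certificate through the apiserver's service proxy. A minimal sketch of the path it requests, assuming the default controller name and namespace from the setup above:

```shell
# Build the apiserver service-proxy path kubeseal requests for the
# sealing certificate (default controller name/namespace assumed).
namespace="kube-system"
service="sealed-secrets-controller"
path="/api/v1/namespaces/${namespace}/services/http:${service}:/proxy/v1/cert.pem"
echo "${path}"
```

This is the same path that shows up in the verbose logs later in this thread, which is why a proxy or firewall between kubeseal and the apiserver can break certificate fetching while port-forwarding still works.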

@mkmik
Collaborator

mkmik commented Nov 14, 2019

did you deploy the controller via the helm chart?

@creydr
Author

creydr commented Nov 14, 2019

No. I took the yaml from https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.9.5/controller.yaml and applied it via kubectl apply -f controller.yaml

@mkmik
Collaborator

mkmik commented Nov 14, 2019

Good. Is the controller actually running (or crashlooping)? (Edit: ah I remember now that you said you tried the port forwarding)

@creydr
Author

creydr commented Nov 15, 2019

Yes controller is running:

NAME                                                      READY   STATUS        RESTARTS   AGE
...
pod/sealed-secrets-controller-6b8c688c89-nbss2            1/1     Running       0          16h


NAME                                                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
...
service/sealed-secrets-controller                      ClusterIP   10.233.57.173   <none>        8080/TCP                 16h

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
...
deployment.apps/sealed-secrets-controller   1/1     1            1           16h

NAME                                                   DESIRED   CURRENT   READY   AGE
...
replicaset.apps/sealed-secrets-controller-6b8c688c89   1         1         1       16h

@creydr
Author

creydr commented Nov 18, 2019

Can it be related to using OpenID Connect as the auth provider?
When testing with Minikube (without OIDC), it seems to work.

@JamesDowning

I installed the controller via helm:

sealed-secrets  1  Mon Nov 18 11:44:34 2019  DEPLOYED  sealed-secrets-1.4.3  0.9.1  kube-system

Initial call worked but after a couple of minutes I tried again only to receive the following error:

cmd:

kubectl create secret generic secret-name --dry-run --from-env-file=secrets.env -o yaml | \
  kubeseal \
  --controller-name=sealed-secrets \
  --controller-namespace=kube-system \
  --format yaml > mysealedsecret.yaml

error: cannot fetch certificate: no endpoints available for service "http:sealed-secrets:"

@mkmik
Collaborator

mkmik commented Nov 18, 2019

Can it be related to using OpenID Connect as the auth provider?
When testing with Minikube (without OIDC), it seems to work.

possibly. I just realized we don't expose a flag to control log verbosity for the k8s client library.

@mkmik
Collaborator

mkmik commented Nov 18, 2019

can I get some more info, like the version of the k8s clusters you guys are running?

@JamesDowning

JamesDowning commented Nov 18, 2019

I'm running eks.4 with kube version 1.13; the controller was definitely running when I previously ran the command. Annoyingly/happily, the exact same command now works for me with no changes being made, suggesting this is an intermittent issue.

@linkvt

linkvt commented Nov 18, 2019

@creydr and I are running an on-premises Kubernetes cluster (1.14.3) set up with Kubespray. We are in an enterprise context, but it does not work even without HTTP proxies in between (direct clearance exists, so access is possible in general), so we don't think it is proxy-related.
Edit: We use OIDC with a Keycloak instance but also didn't get it to work with a local user.

@mkmik
Collaborator

mkmik commented Nov 18, 2019

please try:

shell 1:

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

shell 2:

$ curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:sealed-secrets-controller:/proxy/v1/cert.pem

@linkvt

linkvt commented Nov 18, 2019

Very interesting, I get a CNTLM error message despite not having any proxies configured:

$ env | grep -i prox                                                                              
$ kubectl proxy
Starting to serve on 127.0.0.1:8001

and

$ env | grep -i prox              
$ curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:sealed-secrets-controller:/proxy/v1/cert.pem                  
<html><body><h1>502 Connection refused</h1><p><a href="http://cntlm.sf.net/">Cntlm</a> proxy failed to complete the request.</p></body></html>%

We have cntlm running on our k8s servers, and this problem also shows up when trying to access other services like the kubernetes-dashboard. We will investigate tomorrow, thanks for the tip so far!

@mkmik
Collaborator

mkmik commented Nov 18, 2019

I'm running eks.4 with kube version 1.13; the controller was definitely running when I previously ran the command. Annoyingly/happily, the exact same command now works for me with no changes being made, suggesting this is an intermittent issue.

@JamesDowning is it possible the container got restarted for some reason?

@mkmik
Collaborator

mkmik commented Nov 18, 2019

what do you think about #282? would that help?

Access to the controller via the proxy is still useful for features such as kubeseal --validate and kubeseal --re-encrypt, but I guess getting the certificate is by far the most frequent thing you need live from the controller.
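That offline flow already works today with kubeseal's existing flags (--fetch-cert and --cert, both used elsewhere in this thread). A sketch, assuming the default controller name and namespace; these commands need a live cluster for the first step only:

```shell
# Fetch the sealing certificate once, while the controller is reachable
# (this is the only step that talks to the cluster).
kubeseal --fetch-cert \
  --controller-name=sealed-secrets-controller \
  --controller-namespace=kube-system > cert.pem

# Later, seal offline against the saved certificate; no apiserver proxy
# involved, which sidesteps the problems discussed in this thread.
kubeseal --cert cert.pem -o yaml < mysecret.yml > mysealedsecret.yml
```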

@JamesDowning

@mkmik I can't be 100% sure as I'm not the only one operating the cluster, but the pod's events don't suggest so:

Events:
  Type    Reason     Age   From                                                Message
  ----    ------     ----  ----                                                -------
  Normal  Scheduled  50m   default-scheduler                                   Successfully assigned kube-system/sealed-secrets-7864f98bd4-dfz5v to ip-***** 
  Normal  Pulling    50m   kubelet, ip-*****   pulling image "quay.io/bitnami/sealed-secrets-controller:v0.9.1"
  Normal  Pulled     50m   kubelet, ip-*****  Successfully pulled image "quay.io/bitnami/sealed-secrets-controller:v0.9.1"
  Normal  Created    50m   kubelet, ip-*****   Created container
  Normal  Started    50m   kubelet, ip-***** Started container

@creydr
Author

creydr commented Nov 19, 2019

Hi @mkmik,
we already set up an Ingress as a workaround. But it would be best if we could use kubeseal the normal way, without the Ingress.

I tested it with your branch from #320 and got the following output (still hangs without result):

$ go run cmd/kubeseal/main.go < ../../mysecret.yml -o yaml -v 10
I1119 13:29:57.945237   32642 loader.go:359] Config loaded from file:  /home/x/.kube/config
I1119 13:29:57.945618   32642 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/x-pem-file, */*" -H "User-Agent: main/v0.0.0 (linux/amd64) kubernetes/$Format" 'https://<api-server>:6443/api/v1/namespaces/kube-system/services/http:sealed-secrets-controller:/proxy/v1/cert.pem'
I1119 13:29:57.945677   32642 round_trippers.go:419] curl -k -v -XGET  'https://<our-oid-provider>/auth/realms/master/.well-known/openid-configuration'
I1119 13:29:58.023839   32642 round_trippers.go:438] GET https://<our-oid-provider>/auth/realms/master/.well-known/openid-configuration 200 OK in 78 milliseconds
I1119 13:29:58.023879   32642 round_trippers.go:444] Response Headers:
I1119 13:29:58.023892   32642 round_trippers.go:447]     Content-Type: application/json
I1119 13:29:58.023904   32642 round_trippers.go:447]     Content-Length: 2512
I1119 13:29:58.023915   32642 round_trippers.go:447]     Connection: keep-alive
I1119 13:29:58.023926   32642 round_trippers.go:447]     Cache-Control: no-cache, must-revalidate, no-transform, no-store
I1119 13:29:58.023942   32642 round_trippers.go:447]     Server: nginx/1.17.1
I1119 13:29:58.023953   32642 round_trippers.go:447]     Date: Tue, 19 Nov 2019 12:30:11 GMT
I1119 13:29:58.024299   32642 round_trippers.go:419] curl -k -v -XPOST  -H "Content-Type: application/x-www-form-urlencoded" -H "Authorization: Basic a3ViZXJuZXRlczo=" 'https://<our-oid-provider>/auth/realms/master/protocol/openid-connect/token'
I1119 13:29:58.108632   32642 round_trippers.go:438] POST https://<our-oid-provider>/auth/realms/master/protocol/openid-connect/token 200 OK in 84 milliseconds
I1119 13:29:58.108692   32642 round_trippers.go:444] Response Headers:
I1119 13:29:58.108724   32642 round_trippers.go:447]     Connection: keep-alive
I1119 13:29:58.108870   32642 round_trippers.go:447]     Cache-Control: no-store
I1119 13:29:58.108894   32642 round_trippers.go:447]     Pragma: no-cache
I1119 13:29:58.108907   32642 round_trippers.go:447]     Server: nginx/1.17.1
I1119 13:29:58.108921   32642 round_trippers.go:447]     Date: Tue, 19 Nov 2019 12:30:11 GMT
I1119 13:29:58.108933   32642 round_trippers.go:447]     Content-Type: application/json
I1119 13:29:58.108947   32642 round_trippers.go:447]     Content-Length: 3373
I1119 13:29:58.116896   32642 loader.go:359] Config loaded from file:  /home/x/.kube/config
I1119 13:29:58.122637   32642 loader.go:359] Config loaded from file:  /home/x/.kube/config
I1119 13:29:58.133128   32642 loader.go:359] Config loaded from file:  /home/x/.kube/config
I1119 13:29:58.196332   32642 round_trippers.go:438] GET https://<api-server>:6443/api/v1/namespaces/kube-system/services/http:sealed-secrets-controller:/proxy/v1/cert.pem 400 Bad Request in 250 milliseconds
I1119 13:29:58.196376   32642 round_trippers.go:444] Response Headers:
I1119 13:29:58.196389   32642 round_trippers.go:447]     Date: Tue, 19 Nov 2019 12:30:11 GMT
I1119 13:29:58.196401   32642 round_trippers.go:447]     Audit-Id: 9abc14bb-8933-4c3b-970a-302f007715f4

When trying to access another service via kubectl proxy I can curl it. E.g the dashboard:

$ curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/\#\!/login 

 <!doctype html> <html ng-app="kubernetesDashboard"> <head> <meta charset="utf-8"> <title ng-controller="kdTitle as $ctrl" ng-bind="$ctrl.title()"></title> <link rel="icon" type="image/png" href="assets/images/kubernetes-logo.png"> <meta name="viewport" content="width=device-width"> <link rel="stylesheet" href="static/vendor.93db0a0d.css"> <link rel="stylesheet" href="static/app.ddd3b5ec.css"> </head> <body ng-controller="kdMain as $ctrl"> <!--[if lt IE 10]>
      <p class="browsehappy">You are using an <strong>outdated</strong> browser.
      Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your
      experience.</p>
    <![endif]--> <kd-login layout="column" layout-fill="" ng-if="$ctrl.isLoginState()"> </kd-login> <kd-chrome layout="column" layout-fill="" ng-if="!$ctrl.isLoginState()"> </kd-chrome> <script src="static/vendor.bd425c26.js"></script> <script src="api/appConfig.json"></script> <script src="static/app.91a96542.js"></script> </body> </html>

bors bot added a commit that referenced this issue Nov 19, 2019
320: Add klog flags so we can troubleshoot k8s client r=mkmik a=mkmik

will help troubleshoot #317

Co-authored-by: Marko Mikulicic <mkm@bitnami.com>
@mkmik
Collaborator

mkmik commented Nov 19, 2019

But it would be best if we could use kubeseal the normal way, without the Ingress.

Ideally I'd like kubeseal to transparently access your Ingress, thus avoiding the whole class of problems caused by proxying calls via the apiserver.

@mkmik
Collaborator

mkmik commented Nov 19, 2019

When trying to access another service via kubectl proxy I can curl it. E.g the dashboard:

just to double-check: you can access http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

but you can not access:

$ curl http://localhost:8001/api/v1/namespaces/kube-system/services/http:sealed-secrets-controller:/proxy/v1/cert.pem 

?

If so, I'm tempted to conclude that it's not a client side problem.

Could you please share:

$ kubectl get -n kube-system svc sealed-secrets-controller -oyaml

?

@creydr
Author

creydr commented Nov 20, 2019

Yes, that is correct. I can curl the dashboard, but not the cert.pem:

$ curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ -v
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 3128 (#0)
> GET http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ HTTP/1.1
> Host: localhost:8001
> User-Agent: curl/7.58.0
> Accept: */*
> Proxy-Connection: Keep-Alive
> 
< HTTP/1.1 200 OK
< Accept-Ranges: bytes
< Audit-Id: 8bebe9bf-5af7-46df-9774-8aeb67723a64
< Cache-Control: no-store
< Content-Type: text/html; charset=utf-8
< Date: Wed, 20 Nov 2019 06:05:16 GMT
< Last-Modified: Mon, 17 Dec 2018 09:04:43 GMT
< Content-Length: 996
< Proxy-Connection: keep-alive
< Connection: keep-alive
< 
 <!doctype html> <html ng-app="kubernetesDashboard"> <head> <meta charset="utf-8"> <title ng-controller="kdTitle as $ctrl" ng-bind="$ctrl.title()"></title> <link rel="icon" type="image/png" href="assets/images/kubernetes-logo.png"> <meta name="viewport" content="width=device-width"> <link rel="stylesheet" href="static/vendor.93db0a0d.css"> <link rel="stylesheet" href="static/app.ddd3b5ec.css"> </head> <body ng-controller="kdMain as $ctrl"> <!--[if lt IE 10]>
      <p class="browsehappy">You are using an <strong>outdated</strong> browser.
      Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your
      experience.</p>
* Connection #0 to host localhost left intact
    <![endif]--> <kd-login layout="column" layout-fill="" ng-if="$ctrl.isLoginState()"> </kd-login> <kd-chrome layout="column" layout-fill="" ng-if="!$ctrl.isLoginState()"> </kd-chrome> <script src="static/vendor.bd425c26.js"></script> <script src="api/appConfig.json"></script> <script src="static/app.91a96542.js"></script> </body> </html> %                                                                                

$ curl http://localhost:8001/api/v1/namespaces/kube-system/services/http:sealed-secrets-controller:/proxy/v1/cert.pem -v
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 3128 (#0)
> GET http://localhost:8001/api/v1/namespaces/kube-system/services/http:sealed-secrets-controller:/proxy/v1/cert.pem HTTP/1.1
> Host: localhost:8001
> User-Agent: curl/7.58.0
> Accept: */*
> Proxy-Connection: Keep-Alive
> 
< HTTP/1.1 502 Bad Gateway
< Audit-Id: 62316a80-bc22-483e-be20-c6d67936b9a9
< Content-Type: text/html
< Date: Wed, 20 Nov 2019 06:05:44 GMT
< Content-Length: 142
< Proxy-Connection: keep-alive
< Connection: keep-alive
< 
* Connection #0 to host localhost left intact
<html><body><h1>502 Connection refused</h1><p><a href="http://cntlm.sf.net/">Cntlm</a> proxy failed to complete the request.</p></body></html>%

The config of my service is:

$ kubectl get -n kube-system svc sealed-secrets-controller -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"sealed-secrets-controller"},"name":"sealed-secrets-controller","namespace":"kube-system"},"spec":{"ports":[{"port":8080,"targetPort":8080}],"selector":{"name":"sealed-secrets-controller"},"type":"ClusterIP"}}
  creationTimestamp: "2019-11-14T13:35:43Z"
  labels:
    name: sealed-secrets-controller
  name: sealed-secrets-controller
  namespace: kube-system
  resourceVersion: "7236448"
  selfLink: /api/v1/namespaces/kube-system/services/sealed-secrets-controller
  uid: a7c524d1-06e3-11ea-bc7d-005056002a8d
spec:
  clusterIP: 10.233.57.173
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    name: sealed-secrets-controller
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

@creydr
Author

creydr commented Dec 11, 2019

Hi,
are there any updates on this issue? Any suggestions as to why this occurs?

Thanks

@mkmik
Collaborator

mkmik commented Dec 11, 2019

I cannot reproduce the issue

@gfrntz

gfrntz commented Dec 16, 2019

I had the same issue in a v1.14.7-gke.23 cluster.

Installed via the stable sealed-secrets helm chart. It only works on bare metal; in GKE the issue is still present.

kind: Service
metadata:
  creationTimestamp: "2019-12-13T16:55:04Z"
  labels:
    app.kubernetes.io/instance: sealed-secrets
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: sealed-secrets
    app.kubernetes.io/version: 0.9.6
    helm.sh/chart: sealed-secrets-1.6.1
  name: sealed-secrets
  namespace: default
  resourceVersion: "112226122"
  selfLink: /api/v1/namespaces/default/services/sealed-secrets
  uid: 4f322bd7-1dc9-11ea-a992-4201ac100003
spec:
  clusterIP: 10.124.10.81
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: sealed-secrets
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Endpoints:

...
  name: sealed-secrets
  namespace: default
  resourceVersion: "112226123"
  selfLink: /api/v1/namespaces/default/endpoints/sealed-secrets
  uid: 4f33e7fa-1dc9-11ea-a992-4201ac100003
subsets:
- addresses:
  - ip: 10.60.9.178
    nodeName: gke-ebaysocial-nl-stateful-b7d2b8a1-pp6z
    targetRef:
      kind: Pod
      name: sealed-secrets-6b795f77f8-n6bqg
      namespace: default
      resourceVersion: "110495809"
      uid: 4f61e696-1dc9-11ea-a992-4201ac100003
  ports:
  - port: 8080
    protocol: TCP

Get cert

kubeseal \
 --controller-name=sealed-secrets \
 --controller-namespace=default \
 --fetch-cert -v 9 > ~/.kubeseal/foo.pem
I1216 15:30:16.191839   34818 loader.go:359] Config loaded from file:  /Users/garus/.kube/config.yaml
I1216 15:30:16.195168   34818 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/x-pem-file, */*" -H "User-Agent: kubeseal/v0.0.0 (darwin/amd64) kubernetes/$Format" 'https://gke-api/api/v1/namespaces/default/services/http:sealed-secrets:/proxy/v1/cert.pem'
I1216 15:30:46.401507   34818 round_trippers.go:438] GET https://gke-api/api/v1/namespaces/default/services/http:sealed-secrets:/proxy/v1/cert.pem 503 Service Unavailable in 30206 milliseconds
I1216 15:30:46.401556   34818 round_trippers.go:444] Response Headers:
I1216 15:30:46.401562   34818 round_trippers.go:447]     Audit-Id: e2ffd9a2-240b-4bb8-b1a4-e6d444e19653
I1216 15:30:46.401566   34818 round_trippers.go:447]     Date: Mon, 16 Dec 2019 12:30:46 GMT
I1216 15:30:46.401629   34818 request.go:947] Response Body: Error: 'dial tcp 10.60.9.178:8080: i/o timeout'
Trying to reach: 'http://10.60.9.178:8080/v1/cert.pem'
I1216 15:30:46.401680   34818 request.go:1150] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: invalid character 'E' looking for beginning of value
error: cannot fetch certificate: the server is currently unable to handle the request (get services http:sealed-secrets:)

@gfrntz

gfrntz commented Dec 23, 2019

In GKE, check the firewall on port 8080.

Terraform example:

resource "google_compute_firewall" "kubeseal-http" {
  name    = "kubeseal-http"
  network = "projects/${var.project}/global/networks/default"
  project = var.project

  allow {
    protocol = "tcp"
    ports    = ["8080"]
  }

  source_ranges = ["${google_container_cluster.primary.private_cluster_config.0.master_ipv4_cidr_block}"]
}

@xamox

xamox commented Jan 14, 2020

So I was having the same issue and not sure if this is the issue, but:

For reference, I am using:

GKE: v1.14.8-gke.12
Helm Chart: 1.6.1
Sealed Secrets: 0.9.6

I followed the normal instructions and ran into the error when trying to create a secret. Thanks to @mkmik's comments I realized what the issue is.

When the helm chart spins it up, it uses "sealed-secrets" instead of "sealed-secrets-controller":
https://github.com/helm/charts/blob/master/stable/sealed-secrets/templates/service.yaml#L5

I noticed that if I do the proxy method previously mentioned and run it:

> curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:sealed-secrets-controller:/proxy/v1/cert.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "services \"sealed-secrets-controller\" not found",
  "reason": "NotFound",
  "details": {
    "name": "sealed-secrets-controller",
    "kind": "services"
  },
  "code": 404
}

Same thing using kubeseal client:

> kubeseal \                                                                                                                                                                 
   --controller-name=sealed-secrets \
   --controller-namespace=default \
   --fetch-cert -v 9 
I0114 15:00:32.022033   10988 loader.go:359] Config loaded from file:  /home/xamox/.kube/config
I0114 15:00:32.022712   10988 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/x-pem-file, */*" -H "User-Agent: kubeseal/v0.0.0 (linux/amd64) kubernetes/$Format" 'https://35.203.166.246/api/v1/namespaces/default/services/http:sealed-secrets:/proxy/v1/cert.pem'
I0114 15:00:32.362715   10988 round_trippers.go:438] GET https://35.203.166.246/api/v1/namespaces/default/services/http:sealed-secrets:/proxy/v1/cert.pem 404 Not Found in 339 milliseconds
I0114 15:00:32.362769   10988 round_trippers.go:444] Response Headers:
I0114 15:00:32.362793   10988 round_trippers.go:447]     Audit-Id: fa3c8e89-2184-49be-a42b-8a0789185371
I0114 15:00:32.362815   10988 round_trippers.go:447]     Content-Type: application/json
I0114 15:00:32.362836   10988 round_trippers.go:447]     Content-Length: 204
I0114 15:00:32.362857   10988 round_trippers.go:447]     Date: Tue, 14 Jan 2020 20:00:32 GMT
I0114 15:00:32.362943   10988 request.go:947] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"services \"sealed-secrets\" not found","reason":"NotFound","details":{"name":"sealed-secrets","kind":"services"},"code":404}
error: cannot fetch certificate: services "sealed-secrets" not found

But if I curl using what matches the actual service name it works:

> curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:sealed-secrets:/proxy/v1/cert.pem
-----BEGIN CERTIFICATE-----
REDACTED
-----END CERTIFICATE-----

The controller.yaml on the release page seems to be correct (i.e. https://github.com/bitnami-labs/sealed-secrets/releases/tag/v0.9.6).

@pnowy

pnowy commented Feb 4, 2020

We had the same problem on GKE with a private cluster. Solved by the fix suggested by @gfrntz (a firewall rule for the masters).

@mkmik
Collaborator

mkmik commented Feb 4, 2020

We have some documentation about that: https://github.com/bitnami-labs/sealed-secrets/blob/master/docs/GKE.md#private-gke-clusters

Improvements to the docs welcome (possibly in the shape of a PR :-) )

@pnowy

pnowy commented Feb 5, 2020

@mkmik what do you think about adding this problem description to the FAQ, with a link to the GKE documentation you mentioned and to this issue? I figured out the proxy problem from this ticket, and only later noticed that the solution had already been provided ;)

Something like that:

I get a 'dial tcp IP:8080 timeout' error and a 'Trying to reach http://IP:8080' error. What should I do?

And here, as a response, a link to the documentation you pointed to and a link to this issue.

What do you think? If it's OK I can create an MR.

@mkmik
Collaborator

mkmik commented Feb 5, 2020

The GKE specific instructions are linked twice from the main README:

  1. when documenting how the public key gets published: here
  2. installation section: here

That said, I guess that a FAQ entry could help those people who only notice that something is broken after otherwise successfully installing it (I have to admit that more often than not I'm a non-RTFM person myself too). Pull request welcome! let's discuss the exact wording in the PR review.

@tereschenkov

Today I ran into the same problem. My configuration is:

  • Kubernetes cluster v1.16.8 on AWS
  • sealed-secrets v0.12.1 installed using helm v3

When I run kubeseal --fetch-cert --controller-name=sealed-secrets I get

error: cannot fetch certificate: no endpoints available for service "http:sealed-secrets:"

After trying to access the certificate using kubectl proxy, I found out that the proxy URL has changed. I got the certificate by running curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/sealed-secrets:http/proxy/v1/cert.pem
Note /http:sealed-secrets:/proxy/v1/cert.pem -> /sealed-secrets:http/proxy/v1/cert.pem

Most probably the proxy URL format changed in some version of Kubernetes.
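The two spellings can be put side by side (a sketch; service and port names taken from the comment above, the claim that only the placement of the port name differs comes from that comment, not from independent testing):

```shell
# Spelling used earlier in the thread: port-name prefix form.
old="/api/v1/namespaces/kube-system/services/http:sealed-secrets:/proxy/v1/cert.pem"
# Spelling that worked here: name:port suffix form.
new="/api/v1/namespaces/kube-system/services/sealed-secrets:http/proxy/v1/cert.pem"
printf '%s\n%s\n' "${old}" "${new}"
```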

@mkmik
Collaborator

mkmik commented May 3, 2020

@tereschenkov

You might have been affected by #397. There is an open PR against the helm chart (which contains a bug), but the maintainers of the helm chart are currently unresponsive.

@tereschenkov

@mkmik Thanks for the tip. I tried the updated helm chart version and it works now.

@linkvt

linkvt commented Jun 10, 2020

The ticket can be closed; we were able to solve the issue.
We deployed our clusters with Kubespray into an environment that needs proxies, and the Kubespray version we used didn't set the proxy exceptions (no_proxy) correctly; this was fixed in Kubespray 2.12.

After adding the Kubernetes-internal pod and service subnets to the proxy exceptions, requests to the sealed-secrets-controller were no longer incorrectly routed to the proxy, but reached the correct service inside the Kubernetes cluster.

@mkmik
Collaborator

mkmik commented Jun 10, 2020

thanks for the feedback

@mkmik mkmik closed this as completed Jun 10, 2020
@omerfsen

There is an issue with helm, so you must use sealed-secrets-controller as the release name and nothing else; otherwise you can get this error, because the controller service is created with the release name while the internal pods still try to connect to sealed-secrets-controller.

@mkmik
Collaborator

mkmik commented Oct 30, 2020

yeah, if you use any other name you must set the environment variable SEALED_SECRETS_CONTROLLER_NAME to whatever name your sealed-secrets service resource ended up with; see kubectl -n namespace-where-you-installed-it get svc -l name=sealed-secrets-controller
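A sketch of both override styles (the release name "sealed-secrets" is just an example; the flag form is the one used elsewhere in this thread, and the command is printed rather than executed since it needs a live cluster):

```shell
# Environment-variable form mentioned above.
export SEALED_SECRETS_CONTROLLER_NAME="sealed-secrets"

# Equivalent flag form; shown as a string, since running it requires a
# reachable controller.
cmd="kubeseal --controller-name=${SEALED_SECRETS_CONTROLLER_NAME} --fetch-cert"
echo "${cmd}"
```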

@kingfisher-strong

I was facing the same issue while trying to fetch the certificate. Here are the steps that worked for me.

  1. First expose the controller service locally: kubectl port-forward service/sealed-secrets-controller -n kube-system 8081:8080

  2. Call the endpoint: curl localhost:8081/v1/cert.pem

@dcharbonnier

same issue for me:
error: cannot fetch certificate: error trying to reach service: dial tcp 10.42.4.150:8080: i/o timeout
but curl gets an immediate response:

curl 10.42.4.150:8080/v1/cert.pem
-----BEGIN CERTIFICATE-----
MIIErjCCApagAwIBAgIRALWM6qWGgQT/AbX4qJjC/zowDQYJKoZIhvcNAQELBQAw

@macdrorepo

@shashank0202 same for me, did you find any solution?

@kingfisher-strong

@macdrorepo that's the solution I posted above (#317 (comment)).

@YevheniiPokhvalii

YevheniiPokhvalii commented May 30, 2021

This issue might also be related. ArgoCD replaces sealed-secrets-controller with an app name in a Helm chart:

argoproj/argo-cd#1066

@kbreit

kbreit commented Jan 3, 2022

I am seeing this problem using kubeseal 0.17.1 and controller 0.17.1. This is an on-premises deployment, so no cloud components.

(⎈ |supervisor:sealed-secrets)supervisor [main●] % \cat ss.json | kubeseal --controller-namespace sealed-secrets --controller-name sealed-secrets
cat: ss.json: No such file or directory
error: cannot fetch certificate: no endpoints available for service "http:sealed-secrets:"

The release name is sealed-secrets but this has worked in the past. When I issue raw curl commands as described in this issue, it works with sealed-secrets:http but not the other way around. kubeseal appears to be using the incorrect form. How do I resolve this?

@bibAtWork

Had the same issue with:

  • sealed-secrets-controller:v0.17.3
  • kubectl:v0.21.10

Replacing kubeseal.exe with the latest version fixed the problem for me.
