
Kubernetes-UI / Internal Server Error (500) : Get https://10.100.0.1:443/api/v1/replicationcontrollers: dial tcp 10.100.0.1:443: getsockopt: connection refused #38

Closed
jantoniucci opened this issue May 26, 2016 · 3 comments

Comments

@jantoniucci

I installed "coreos-kubernetes-cluster-osx" v0.6.2 and when I click on the "Kubernetes-UI" menu option:

Internal Server Error (500)
Get https://10.100.0.1:443/api/v1/replicationcontrollers: dial tcp 10.100.0.1:443: getsockopt: connection refused

I realised that the UI container is trying to access the API Server through its internal service IP (10.100.0.1) instead of 172.17.15.101 or any other reachable address.

I tried adding --bind-address=0.0.0.0 to the API Server using the fleet-ui, but it made no difference.
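
In case it helps, these checks should show what 10.100.0.1 actually maps to (a sketch only, assuming kubectl on the master is pointed at this cluster):

$ kubectl get svc kubernetes          # the ClusterIP the dashboard talks to (10.100.0.1:443)
$ kubectl get endpoints kubernetes    # the real apiserver address/port behind that ClusterIP

If the endpoints list is empty, or points at an address/port the apiserver is not actually listening on, in-cluster requests to 10.100.0.1:443 will be refused.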

Any idea?

Thanks!

@rimusz
Owner

rimusz commented May 26, 2016

@jantoniucci no idea why it is not working yet.
In my other Kube-Solo app I have it running with no problems, without any changes to the dashboard svc/rc.

@davecaplinger

I'm having the same issue, and my theory is that it's because there is no way to route or proxy my browser requests from the vbox host-only network (172.17.15.0/24) to the cluster IP range (10.100.0.0/16), perhaps via the flannel network (10.244.0.0/16).

As a comparison, browser access to fleet-ui does work because docker-proxy maps the container's port 3000 to the master host's (172.17.15.101) port 3000, so neither kubernetes nor flannel is involved there.
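
Roughly what that mapping looks like (a sketch only, not the actual fleet-ui unit; the image name is a placeholder):

$ docker run -d --name fleet-ui -p 3000:3000 <fleet-ui-image>   # docker-proxy publishes 3000 on the host
$ docker port fleet-ui                                          # 3000/tcp -> 0.0.0.0:3000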

An ingress controller and corresponding ingress pod/container might be part of the solution, but I'm pretty new to this and could have it wrong. One approach that might work is to run an haproxy or nginx container on the master node (like fleet-ui, i.e. not managed by k8s) that is auto-configured from etcd (e.g., using confd). This would match k8s's expectation that IP ingress/load-balancing is an external "cloud provider" service.
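
A minimal sketch of that idea, assuming the master can reach the flannel network; the upstream address (the dashboard endpoint 10.244.91.2:9090 from the kubectl output below) is hard-coded here, whereas confd would render it from etcd:

$ cat > /tmp/dashboard-proxy.conf <<'EOF'
server {
    listen 8080;
    location / { proxy_pass http://10.244.91.2:9090; }
}
EOF
$ docker run -d --name dashboard-proxy --net=host \
    -v /tmp/dashboard-proxy.conf:/etc/nginx/conf.d/default.conf:ro nginx

The dashboard would then be reachable at http://172.17.15.101:8080/ from the host-only network.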

Since the master participates in the flannel network (10.244.0.0/16), it can proxy browser requests coming from 172.17.15.0/24. In my current setup, k8s knows the link between the flannel IP address and the k8s cluster IP address, so that looks like the final link in the chain:

$ kubectl describe svc kubernetes-dashboard --namespace=kube-system
Name:           kubernetes-dashboard
Namespace:      kube-system
Labels:         k8s-app=kubernetes-dashboard,kubernetes.io/cluster-service=true
Selector:       k8s-app=kubernetes-dashboard
Type:           ClusterIP
IP:         10.100.132.157
Port:           <unset> 80/TCP
Endpoints:      10.244.91.2:9090
Session Affinity:   None
No events.

Whatever the solution is for kubernetes-dashboard, hopefully that same solution can then apply to any web-facing workload the user deploys via kubernetes.
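
Two workarounds along these lines, as sketches only (not verified against this exact setup, and assuming kubectl on the OS X host is configured for the cluster):

# (a) tunnel through the apiserver, no extra routing needed:
$ kubectl proxy --port=8001
# then browse to http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/

# (b) route the service and flannel ranges from the OS X host via the master:
$ sudo route -n add -net 10.100.0.0/16 172.17.15.101
$ sudo route -n add -net 10.244.0.0/16 172.17.15.101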

@rimusz
Owner

rimusz commented May 31, 2016

fixed in v0.6.3

@rimusz rimusz closed this as completed May 31, 2016