
http: TLS handshake error from 172.30.117.64:25970: EOF #145

Closed
ixiaoyi93 opened this issue Sep 30, 2018 · 12 comments

Comments

@ixiaoyi93

$ kubectl logs -f metrics-server-589cc698c4-fbw5t -n kube-system
I0930 02:55:01.584798 1 logs.go:49] http: TLS handshake error from 172.30.117.64:25830: EOF
I0930 02:55:04.354311 1 logs.go:49] http: TLS handshake error from 172.30.117.64:25904: EOF
I0930 02:55:04.890066 1 logs.go:49] http: TLS handshake error from 172.30.117.64:25914: EOF
I0930 02:55:06.195554 1 logs.go:49] http: TLS handshake error from 172.30.51.128:14770: EOF

$ cat metrics-server-deployment.yaml
......
image: hexun/metrics-server-amd64:v0.3.0
imagePullPolicy: Always
volumeMounts:
- name: tmp-dir
  mountPath: /tmp
command:
- /metrics-server
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP

At present, kubectl top node / kubectl top pod return resources as expected, but the log frequently shows "http: TLS handshake error from 172.30.117.64:52590: EOF". Can this problem be solved?
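
To see what is behind the connecting address, a quick check (a sketch; it assumes the IP belongs to a pod or node in this cluster):

$ kubectl get pods --all-namespaces -o wide | grep 172.30.117.64
$ kubectl get nodes -o wide | grep 172.30.117.64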

@gjmzj

gjmzj commented Sep 30, 2018

In your cluster, is the kubelet running with the flag '--anonymous-auth=false' or not?
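
A quick way to check this (a sketch, assuming SSH access to a node and a systemd-managed kubelet; unit names and config paths differ per distribution):

$ systemctl cat kubelet | grep -i anonymous-auth
$ ps -ef | grep kubelet | tr ' ' '\n' | grep -E 'anonymous-auth|authentication-token-webhook'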

@ixiaoyi93
Author

@gjmzj Yes, I have.

@gjmzj

gjmzj commented Sep 30, 2018

My question is: how can the metrics-server get authorized by the kubelet server?

When I upgraded metrics-server to 0.3.0, I got this error in the metrics-server pod log:

E0930 02:45:31.619297       1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:10.xx.yy.41: unable to fetch metrics from Kubelet 10.xx.yy.41 (10.xx.yy.41): request failed - "401 Unauthorized", response: "Unauthorized"
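
The 401 can usually be reproduced directly against the kubelet's secure port (a sketch; 10250 is the default port, /stats/summary is the endpoint metrics-server 0.3.x scrapes, and $TOKEN stands for a service-account token that has the right RBAC):

# rejected when --anonymous-auth=false and no token is sent
$ curl -sk https://10.xx.yy.41:10250/stats/summary
# should succeed once webhook token authentication is enabled and the token is authorized
$ curl -sk -H "Authorization: Bearer $TOKEN" https://10.xx.yy.41:10250/stats/summary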

@ixiaoyi93
Author

@gjmzj

git clone https://github.com/kubernetes-incubator/metrics-server.git
cd ./metrics-server/deploy/1.8+

Did you deploy it this way?
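
(For reference, those manifests are usually applied roughly like this; a sketch, the directory name may differ between releases:)

$ kubectl apply -f deploy/1.8+/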

@gjmzj

gjmzj commented Oct 1, 2018

Through the manifests in deploy/1.8+, yes. I also made a few changes:

      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      - name: ssl-dir
        secret:
          secretName: metrics-server-secrets
          defaultMode: 0400
      containers:
      - name: metrics-server
        #image: k8s.gcr.io/metrics-server-amd64:v0.3.0
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.1
        imagePullPolicy: IfNotPresent
        command:
        - /metrics-server
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        - --logtostderr=true
        - --tls-cert-file=/etc/ssl/ms-cert
        - --tls-private-key-file=/etc/ssl/ms-key
        - --v=2
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
        - name: ssl-dir
          mountPath: /etc/ssl

I created the 'metrics-server-secrets' secret before deployment:

kubectl create secret generic -n kube-system metrics-server-secrets \
    --from-file=ca=ca.pem \
    --from-file=ms-key=metrics-server-key.pem \
    --from-file=ms-cert=metrics-server.pem
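
To verify the secret before deploying (a sketch): each key (ca, ms-key, ms-cert) becomes a file under the mountPath, which is what the --tls-cert-file / --tls-private-key-file flags point at.

$ kubectl -n kube-system describe secret metrics-server-secrets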

@DirectXMan12
Contributor

@gjmzj metrics-server attempts to authorize itself using token authentication. Please ensure that you're running your kubelets with webhook token authentication turned on.
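
A sketch of the relevant kubelet settings (these are standard kubelet flags; the CA path is illustrative and depends on your setup):

--anonymous-auth=false
--authentication-token-webhook=true
--authorization-mode=Webhook
--client-ca-file=/etc/kubernetes/ssl/ca.pem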

@DirectXMan12
Contributor

@xiaomuyi looks like something's connecting and then immediately disconnecting. If everything is working, I wouldn't worry about it.

ixiaoyi93 reopened this Oct 8, 2018
@alanh0vx

alanh0vx commented Dec 14, 2018

I have a similar situation:

http: TLS handshake error from 192.168.133.64:51926:EOF

192.168.133.64 is the internal IP address of the api-server.

From the api-server log:

E1214 02:06:35.625504       1 memcache.go:134] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1214 02:07:05.573958       1 available_controller.go:311] v1beta1.metrics.k8s.io failed with: Get https://10.109.179.236:443: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1214 02:07:05.658841       1 memcache.go:134] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request

Kubelet version is 1.12.3, metrics-server 0.3.1.

I have other clusters with the same version and configuration where metrics-server works just fine.
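
The "server is currently unable to handle the request" errors usually mean the v1beta1.metrics.k8s.io APIService is not Available; a sketch of how to check (the 10.109.179.236 in the log is likely the metrics-server service ClusterIP):

$ kubectl get apiservice v1beta1.metrics.k8s.io
$ kubectl get apiservice v1beta1.metrics.k8s.io -o jsonpath='{.status.conditions[0].message}'
$ kubectl -n kube-system get svc metrics-server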

@tvildo

tvildo commented Jan 11, 2019

metrics-server v0.3.1
calico v3.4
kubernetes v1.13.1
I was using the Calico CNI. The TLS handshake messages are large packets, and the MTU in Calico was wrong, so I changed it according to https://docs.projectcalico.org/v3.4/usage/configuration/mtu.
I changed the MTU from 1550 to 1450 and everything is fixed. It was not a metrics-server issue in my case 👍
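
For a manifest-based Calico install the MTU usually lives in the calico-config ConfigMap; a sketch (key names and pod labels may differ per Calico version):

$ kubectl -n kube-system get configmap calico-config -o yaml | grep -i mtu
$ kubectl -n kube-system edit configmap calico-config
# e.g. set veth_mtu: "1450", then restart calico-node so the change is picked up:
$ kubectl -n kube-system delete pod -l k8s-app=calico-node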

@alanh0vx

Thanks, it turned out to be an MTU issue.

In the Calico config it was 1500, while the interface MTU is 1450.

Finally I ran kubectl edit configmap calico-config -n kube-system and changed the MTU value from 1500 to 1430.
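
To compare what the interfaces actually use against the Calico setting (a sketch; interface names such as eth0 and tunl0 depend on the environment and encapsulation mode):

$ ip link show eth0 | grep -o 'mtu [0-9]*'
$ ip link show tunl0 | grep -o 'mtu [0-9]*'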

@zhangxingdeppon

metrics-server v0.3.1
calico v3.4
kubernetes v1.13.1
I was using the Calico CNI. The TLS handshake messages are large packets, and the MTU in Calico was wrong, so I changed it according to https://docs.projectcalico.org/v3.4/usage/configuration/mtu.
I changed the MTU from 1550 to 1450 and everything is fixed. It was not a metrics-server issue in my case 👍

@tvildo Hi buddy, why does the MTU configuration cause this problem? Can you explain it to me? I ran into the same problem and want to know the reason so that I can handle it. Thanks.

@Juludut

Juludut commented Mar 18, 2021

Hi there,
Writing this here in case it helps anyone else hitting this problem.

We had similar problems and it was also an MTU issue: while upgrading the underlying virtualization infrastructure of the Kubernetes nodes, the nodes lost their MTU (8950) and fell back to a much smaller one (1450), which made TLS packets too large for the new MTU, hence these messages (and a lot of other erratic errors).
Putting the right MTU back on the master/worker nodes solved the issue.
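
A quick way to test for this kind of path-MTU problem between nodes (a sketch for Linux; 1472 = 1500 minus 28 bytes of IP/ICMP headers, and <other-node-ip> is a placeholder):

# send a non-fragmentable packet sized for a 1500-byte MTU
$ ping -M do -s 1472 <other-node-ip>
# if that fails while a smaller payload succeeds, the path MTU is lower than the interfaces advertise
$ ping -M do -s 1372 <other-node-ip>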
