TLS handshake error with multi-cluster setup #2402
Comments
I know this is a very long description, but unfortunately I am a bit desperate. If I don't get rid of these messages, then I don't even need to think about driving my PoC further. I am totally excited about the setup, but these TLS errors are driving me crazy. I still have the problem that I get a lot of TLS handshake errors in the agones-allocator log.

My setup:
1.) Create a client certificate for later use (see the sketch right after this list)
2.) Deploy
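A minimal sketch of that client-certificate step, assuming a self-signed certificate; the file names, CN and secret name below are placeholders, not the actual ones from this setup:

# generate a self-signed client certificate/key pair for mTLS against the allocator
openssl genrsa -out client.key 2048
openssl req -new -x509 -key client.key -out client.crt -days 365 \
  -subj "/CN=allocator-client"

# store the pair as a TLS secret so an allocation client can reference it later (placeholder name)
kubectl create secret tls allocator-client-tls \
  --namespace agones-system --cert=client.crt --key=client.key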
|
It might relate to your ELB setup if you are using AWS. If the protocol is not set up correctly, the health check will continually throw TLS handshake errors. You may refer to this and change the protocol to SSL instead of TLS for the ELB.
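If the in-tree AWS cloud-provider annotations are what manage this ELB (an assumption about the setup), a sketch of that health-check protocol change could look like:

kubectl annotate --overwrite service agones-allocator -n agones-system \
  'service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol'='ssl'
|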
Thanks for your response. Does the allocator broadcast any messages or try to scrape some health endpoints? I'm a little bit lost. |
I tried the annotation. Now I have a different error :-/
It seems that the agones-allocator does not provide a non-mTLS health check endpoint, or does it? |
@cindy52 thanks for pointing me in the right direction +1 In the end, the AWS load balancer needs an endpoint for the health check. The AWS default is to pick the first port in the service definition. The solution is to patch the service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: agones
    meta.helm.sh/release-namespace: agones-system
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /live
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http
  labels:
    app: agones
    app.kubernetes.io/managed-by: Helm
    chart: agones-1.19.0
    component: allocator
    heritage: Helm
    release: agones
  name: agones-allocator
  namespace: agones-system
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    multicluster.agones.dev/role: allocator
  sessionAffinity: None
  type: LoadBalancer
or

kubectl annotate --overwrite service agones-allocator -n agones-system 'service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval'='10'
kubectl annotate --overwrite service agones-allocator -n agones-system 'service.beta.kubernetes.io/aws-load-balancer-healthcheck-path'='/live'
kubectl annotate --overwrite service agones-allocator -n agones-system 'service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol'='http'

kubectl patch service \
  agones-allocator \
  -n agones-system \
  --type merge \
  --patch \
  '{"spec": {"ports":[{"name":"http", "port":8080, "targetPort":8080},{"name":"https", "port":443, "targetPort":8443}]}}'
|
I have a multi-cluster setup with two clusters, A and B, and the related GameServerAllocationPolicy. Everything seems to work fine. I can use the allocator endpoints of cluster-A or cluster-B, and the GameServerAllocationPolicy resources work as expected. On the surface everything is great, and I'm very satisfied. But in the log of the agones-allocator there are many, many TLS handshake errors. I'm not sure whether these come from the communication between cluster-A and cluster-B, or whether they are related to an internal sync call?
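For reference, a minimal sketch of what such a GameServerAllocationPolicy can look like; the name, namespace, endpoint and secret below are placeholders, not the actual values from this setup:

apiVersion: multicluster.agones.dev/v1
kind: GameServerAllocationPolicy
metadata:
  name: allocate-from-cluster-b
  namespace: default
spec:
  priority: 1
  weight: 100
  connectionInfo:
    allocationEndpoints:
    - allocator-b.example.com                    # placeholder: cluster-B's agones-allocator endpoint
    clusterName: cluster-b
    namespace: default                           # namespace to allocate game servers from in cluster-B
    secretName: allocator-client-to-cluster-b    # placeholder: secret with the mTLS client cert/key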