[nginx] Setup on Azure failing since 0.9.0-beta.4 #758

Closed
Globegitter opened this issue May 24, 2017 · 9 comments · Fixed by #760

Globegitter (Contributor) commented May 24, 2017

I am following this example here: https://blogs.technet.microsoft.com/livedevopsinjapan/2017/02/28/configure-nginx-ingress-controller-for-tls-termination-on-kubernetes-on-azure-2/ for an nginx ingress controller setup on Azure. The example just sets up the nginx ingress controller exposed via a LoadBalancer, plus an Ingress that forwards to the example http-svc provided in this repo. It uses nginx-ingress-controller v0.9.0-beta.2 and works fine with that version, as well as with v0.9.0-beta.3, but fails once I upgrade to beta.4 or beta.5. When I try to curl the service after setting everything up, I get the same error on beta.4 and beta.5:

curl https://<IP>
curl: (35) gnutls_handshake() failed: The TLS connection was non-properly terminated.
curl https://<IP> -k
curl: (35) gnutls_handshake() failed: The TLS connection was non-properly terminated.
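
For context, the Ingress from that tutorial looks roughly like this (a minimal sketch reconstructed from the blog post rather than my exact manifest; the secret name tls-secret matches the controller logs below, and http-svc is the example service from this repo):

# Sketch only: the name, path, and port are reconstructed placeholders.
kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: tls-secret
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
EOF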

If I open up port 80 as well on the LoadBalancer, I do manage to make a curl request, but I get this response:

<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.11.12</center>
</body>
</html>
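
If it helps: the 301 on port 80 is presumably just the controller's default HTTP-to-HTTPS redirect for ingresses that have TLS configured, so the Location header should point back at the broken HTTPS endpoint. A quick check (the IP is a placeholder):

# -i prints the response headers; Location should show the https:// URL
# that plain HTTP requests get redirected to.
curl -i http://<IP>/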

Also, looking at the logs of the ingress controller (this is beta.4 here), it does look like it is starting up correctly:

I0524 08:12:56.207926       1 controller.go:1184] starting Ingress controller
I0524 08:12:56.208574       1 leaderelection.go:203] attempting to acquire leader lease...
I0524 08:12:56.218665       1 event.go:217] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"c3b028bf-4057-11e7-8dc8-000d3ab6aab1", APIVersion:"extensions", ResourceVersion:"6171849", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/nginx-ingress
W0524 08:12:57.208705       1 backend_ssl.go:42] deferring sync till endpoints controller has synced
W0524 08:12:57.218428       1 queue.go:94] requeuing default/nginx-ingress, err deferring sync till endpoints controller has synced
I0524 08:13:06.665070       1 metrics.go:34] changing prometheus collector from  to default
I0524 08:13:06.717207       1 controller.go:421] ingress backend successfully reloaded...
I0524 08:13:07.209283       1 backend_ssl.go:71] adding secret default/tls-secret to the local store
I0524 08:13:26.456412       1 leaderelection.go:213] successfully acquired lease kube-system/ingress-controller-leader-nginx
I0524 08:13:56.218459       1 status.go:302] updating Ingress default/nginx-ingress status to [{52.169.47.215 }]
I0524 08:13:56.227724       1 event.go:217] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"c3b028bf-4057-11e7-8dc8-000d3ab6aab1", APIVersion:"extensions", ResourceVersion:"6172691", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/nginx-ingress

So I am not quite sure what exactly changed between beta.3 and beta.4 that now causes the issue, or how to modify the setup to fix it (or whether it is a bug?). Any insight on this would be great, and let me know if there are any more details I can provide.

Edit: Just seeing there is #643, which seems related.

Edit2: Just tried the setup again (with beta.5), and I am now seeing this in the logs:

2017/05/24 08:58:32 [warn] 140#140: *6 using uninitialized "proxy_upstream_name" variable while logging request, client: <IP>, server: _, request: "GET / HTTP/1.1", host: "<IP2>"
<IP> - [<IP>] - - [24/May/2017:08:58:32 +0000] "GET / HTTP/1.1" 301 186 "-" "curl/7.47.0" 76 0.000 [] - - - -

This shows up whenever I make a curl request. So I suppose this uninitialized variable is the cause of the issue?
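
In case someone wants to dig in: one way to see where $proxy_upstream_name is (or is not) being set is to dump the config nginx is actually running with (the pod name and namespace are placeholders for your controller pod):

# Hypothetical debugging step: print the rendered nginx.conf from inside
# the controller pod and show the lines mentioning proxy_upstream_name.
kubectl exec -n kube-system <nginx-controller-pod> -- \
  cat /etc/nginx/nginx.conf | grep -n proxy_upstream_name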

Edit3: Out of curiosity I just tried the latest test version of the image, quay.io/aledbf/nginx-ingress-controller:0.124, and while I do get an error in the logs, I can actually curl the service and get the expected response from it.

In case it might be useful, this is the error I am seeing in the logs:

E0524 09:15:00.763832       7 event.go:260] Could not construct reference to: '&v1beta1.Ingress{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nginx-ingress", GenerateName:"", Namespace:"default", SelfLink:"/apis/extensions/v1beta1/namespaces/default/ingresses/nginx-ingress", UID:"f8d38a19-405e-11e7-8dc8-000d3ab6aab1", ResourceVersion:"6178599", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{sec:63631213030, nsec:0, loc:(*time.Location)(0x1de97a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"kubernetes.io/ingress.class":"nginx"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1beta1.IngressSpec{Backend:(*v1beta1.IngressBackend)(nil), TLS:[]v1beta1.IngressTLS{v1beta1.IngressTLS{Hosts:[]string(nil), SecretName:"tls-secret"}}, Rules:[]v1beta1.IngressRule{v1beta1.IngressRule{Host:"", IngressRuleValue:v1beta1.IngressRuleValue{HTTP:(*v1beta1.HTTPIngressRuleValue)(0xc4209673c0)}}}}, Status:v1beta1.IngressStatus{LoadBalancer:v1.LoadBalancerStatus{Ingress:[]v1.LoadBalancerIngress{v1.LoadBalancerIngress{IP:"IP", Hostname:""}}}}}' due to: 'no kind is registered for the type v1beta1.Ingress'. Will not report event: 'Normal' 'UPDATE' 'Ingress default/nginx-ingress'
andor44 commented May 24, 2017

There's definitely something going on in beta.4 that's causing SSL issues on certain setups. After upgrading to beta.4, our hardware load balancer is unable to make HTTPS requests to the ingress controller, with similar early-TLS-termination error messages, and a colleague's local OpenSSL client fails the same way. The list of changes for beta.4 is massive, so it's a little hard to pinpoint the cause. The only thing that seems suspect to me is the new SSL backend handoff.
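
For reference, this is roughly how we reproduced it without curl (the address is a placeholder); on the broken versions the handshake terminates before a certificate comes back:

# Reproduction sketch with the OpenSSL client mentioned above; replace
# <IP> with the load balancer address.
openssl s_client -connect <IP>:443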

aledbf (Member) commented May 24, 2017

@Globegitter @andor44 please update the image to quay.io/aledbf/nginx-ingress-controller:0.124
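
(A sketch of that update, assuming the controller runs as a Deployment named nginx-ingress-controller with a container of the same name; adjust both names to your setup:)

# Assumed deployment and container names; this swaps the controller image
# in place and lets the Deployment roll out new pods.
kubectl set image deployment/nginx-ingress-controller \
  nginx-ingress-controller=quay.io/aledbf/nginx-ingress-controller:0.124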

aledbf (Member) commented May 24, 2017

> The only thing that seems suspect to me is the new SSL backend handoff.

You are right; this new feature introduced undesired behaviors in the controller. These issues were fixed in beta.5

andor44 commented May 24, 2017

@aledbf sorry, I should have been more precise. We actually jumped straight from beta 3 to beta 5 and noticed the issue there, so it is present in beta 5 too. We then downgraded to beta 4, which showed the same symptoms; going back to beta 3 made it go away. I will give your own fork a try and see how it goes.

andor44 commented May 24, 2017

With the quay.io/aledbf/nginx-ingress-controller:0.124 image, the device that was unable to make HTTPS requests works again, but I get errors like

'no kind is registered for the type v1beta1.Ingress'. Will not report event: 'Normal' 'CREATE' 'Ingress <INGRESSNAME>'

I assume this image is using newer API versions?

Globegitter (Contributor, Author) commented

Yes @aledbf, updating the image fixes that issue for me, but it also shows the error messages that @andor44 is reporting.

aledbf (Member) commented May 24, 2017

@andor44 @Globegitter the error messages are related to the library we use (client-go). I hope this will be fixed before the end of the day.

aledbf (Member) commented May 24, 2017

@andor44 @Globegitter please update the image to quay.io/aledbf/nginx-ingress-controller:0.125

Globegitter (Contributor, Author) commented

Thanks @aledbf, this is fixed with the latest beta.6.
