TCP Proxy Not Listening (tcp-services) #4213
Comments
I ran into the same issue, and the problem was only that nginx doesn't reload automatically when the TCP config changes. Deleting the pod fixed the problem.
I have the same problem with the configmap.
Changing the ingress-nginx "LoadBalancer" annotation worked for me: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
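For reference, a minimal sketch of where that annotation goes on the controller's LoadBalancer Service; the service name, namespace, and ports here are illustrative, not taken from this thread:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    # Tell the AWS ELB to speak plain TCP to the backend instead of HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
spec:
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
```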
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@bitva77 did you fix this?
Similar issue here, and updating some Ingress rule fixed it. It looks like editing the tcp-services ConfigMap by adding/removing the 'PROXY' field(s) doesn't trigger a re-generation of the nginx.conf file.
Setup:
Steps to reproduce:
As @yizha said, you need to update two places (the Service and the ConfigMap) in order to open a new TCP port, as sketched below.
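A minimal sketch of those two places, assuming a backend Service named example-tcp in the default namespace listening on 5432 and an external port of 9000; all names and ports are placeholders:

```yaml
# 1. The tcp-services ConfigMap read by the controller:
#    key = port nginx should listen on, value = <namespace>/<service>:<service port>
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "9000": "default/example-tcp:5432"
---
# 2. The controller's own Service also needs the port exposed,
#    otherwise traffic never reaches nginx even if nginx is listening.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  ports:
    - name: proxied-tcp-9000
      port: 9000
      targetPort: 9000
      protocol: TCP
```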
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
I followed the above steps but still no luck. The Service has an exposed port 5671 (RabbitMQ) and I applied a tcp-services YAML on top of my existing ingress-nginx namespace. Then I went into the ingress-nginx container to check nginx.conf, and I see something strange in the "stream" section, as if the upstream server did not get configured and was left as a placeholder:
I have the same behaviour here :/
I have the same behavior. I am using an AWS load balancer with multiple services deployed in the cluster; the HTTP/S routes are working (80/443) but TCP is not. I have a Mosquitto broker running in the cluster which uses TCP port 8883 (configured in both the nginx tcp-services ConfigMap and the Service). Bug?
Same as you guys, my nginx.conf shows the same placeholder in the stream section. But if you look at the bottom of this Oracle article, it states that "The upstream is proxying via Lua." and seems to acknowledge that the placeholder is expected. Furthermore, when I look at an earlier section of nginx.conf:
So it looks like nginx-ingress is no longer using a plain stream upstream, at least for TCP services; it uses Lua instead. Looking at the CHANGELOG, this change was perhaps introduced around v0.21. But to be honest I'm not familiar with Lua, so I need to do some more investigation as well. Hope this sheds some light on the issue; let me know if any of you have figured out the problem.
Just wanted to report back: the placeholder is not of concern here (since nginx-ingress seems to use Lua instead of a static upstream), so it's probably something else that prevents you from reaching that port inside nginx. This post gives some tips for debugging such a case, which you might find helpful.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I was using image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.31.1. I have the same problem and I found that I had missed
I had also tested adding a new port in the tcp-services.yaml file. For the new port to take effect in the nginx-ingress-controller pod, you must re-apply both tcp-services.yaml and nginx-ingress-controller.yaml.
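Another piece that is easy to miss (and possibly what is being referred to above, though that's an assumption): the controller only watches the ConfigMap named by its --tcp-services-configmap flag. A sketch of the relevant part of the controller Deployment, with values matching the standard mandatory manifest but shown here only for illustration:

```yaml
# Excerpt from the nginx-ingress-controller Deployment spec (illustrative, not the full manifest).
containers:
  - name: nginx-ingress-controller
    image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.31.1
    args:
      - /nginx-ingress-controller
      - --configmap=$(POD_NAMESPACE)/nginx-configuration
      # Without this flag the tcp-services ConfigMap is ignored entirely.
      - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
      - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
      - --publish-service=$(POD_NAMESPACE)/ingress-nginx
```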
@nrvmodi where do you set that?
@marcusjwhelan if you're using Helm, you can take a look at this comment from the Helm issue. Basically, you just need to set the tcp values in the chart; the chart then takes care of the tcp-services ConfigMap and the controller Service ports for you.
Personally I'd go for Helm because it's much simpler.
@rivernews Would it be fine to just patch the deployment, since I followed the ingress-nginx install at https://kubernetes.github.io/ingress-nginx/deploy/#azure (which is for Azure)? Reading https://minikube.sigs.k8s.io/docs/tutorials/nginx_tcp_udp_ingress/ and https://skryvets.com/blog/2019/04/09/exposing-tcp-and-udp-services-via-ingress-on-minikube/ I get the idea that I don't need to create a NodePort/ClusterIP for the nodes, and that it will automatically just connect to the ports of the pods I am creating. Both are on port 25565, so I need a way to route to each of them on a different port. Or do I need to create a NodePort/ClusterIP for each pod so I can specify a different port? How does that work exactly?
I can only speak for myself and my setup; I'm using ClusterIP, not sure about NodePort.
You can try to update the deployment in place and see how that works; if you have the strategy set to Recreate, the pod will get deleted and a brand-new controller pod will spin up using the latest deployment (see the sketch below).
You can always delete and re-create the modified deployment if you can tolerate the controller being down for a moment.
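If you go the patch-in-place route, the relevant bit of the Deployment would look roughly like this; a sketch only, not the exact manifest from the linked install guide:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  # Recreate kills the old controller pod before starting the new one,
  # so the replacement definitely picks up the updated spec.
  strategy:
    type: Recreate
  # ... rest of the controller Deployment unchanged ...
```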
Something like this works for Helm with values.yaml:
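The original snippet wasn't preserved in this copy of the thread, but with the chart's tcp values it presumably looked something like this; the backend service names and namespaces are placeholders, with ports borrowed from examples mentioned earlier in the thread:

```yaml
# values.yaml for the nginx-ingress / ingress-nginx Helm chart.
# Each entry maps an external port to <namespace>/<service>:<service port>;
# the chart then creates the tcp-services ConfigMap and adds the Service ports.
tcp:
  5671: "default/rabbitmq:5671"
  8883: "default/mosquitto:8883"
```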
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
NGINX Ingress controller version:
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Baremetal: kubeadm install on RedHat 7.6
What happened:
Created a TCP proxy and the port is not being listened on.
What you expected to happen:
TCP port to be exposed.
How to reproduce it (as minimally and precisely as possible):
Controller installed via the mandatory YAML file in the docs.
Nginx Service created like so:
tcp-services configured like so (the namespace is correct):
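The ConfigMap itself did not survive in this copy of the issue, but for the port in question (9615, per the netstat check below) a typical tcp-services entry would look like this; the backend service name and namespace are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # <nginx listen port>: "<namespace>/<service>:<service port>[:PROXY[:PROXY]]"
  "9615": "default/some-backend:9615"
```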
Anything else we need to know:
Logs:
I've tried various combinations of the PROXY options (PROXY:PROXY, ::PROXY, :PROXY) as well, but nada.
It's close to #3984; however, in that one the proxy seems to actually happen. I'm not even getting that far.
netstat -an | grep 9615 is empty.