Service latency degradation #1905
Also, removing option
I will add an option to make this configurable via the ConfigMap.
The only way we can fix this is to remove the need to reload nginx on endpoint changes. Right now the only way to do that is to use Lua, more specifically balancer_by_lua_block.
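The idea above can be sketched roughly as follows. This is a minimal illustration, not the controller's actual code: the shared dict name, the "my-service" key, and the fallback address are all made up for the example. The point is that endpoints live in a shared dict that can be updated at runtime, so changing them never requires an nginx reload.

```nginx
http {
    # Shared memory zone holding the current endpoint list; some runtime
    # mechanism (e.g. an internal API location) would update it on changes.
    lua_shared_dict endpoints 1m;

    upstream dynamic_backend {
        server 0.0.0.1;   # placeholder; the real peer is chosen in Lua below

        balancer_by_lua_block {
            local balancer = require("ngx.balancer")
            -- Look up the current endpoint for this service; a real
            -- implementation would balance over the full endpoint list.
            local ep = ngx.shared.endpoints:get("my-service") or "127.0.0.1:8080"
            local host, port = ep:match("([^:]+):(%d+)")
            local ok, err = balancer.set_current_peer(host, tonumber(port))
            if not ok then
                ngx.log(ngx.ERR, "failed to set peer: ", err)
            end
        }
    }

    server {
        listen 80;
        location / {
            proxy_pass http://dynamic_backend;
        }
    }
}
```

With this layout, an endpoint update only touches the shared dict; the nginx master never re-reads the configuration, so the reload-induced latency spikes described in this issue disappear.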
Since PR #2174 will implement the "no reload" feature, I'm asking here about the next steps. We already discussed this issue while I was proposing to implement Lua for virtual hosts and SSL too. @aledbf @ElvinEfendi were involved in the discussion. What are the next steps?
@valeriano-manassero thanks for starting the discussion. IMHO the next immediate step after #2174 gets merged should be coming up with a way to test Lua code. One option here would be to use the current e2e test framework. Once the test process is in place, I suggest we focus on the remaining tasks in the following priority:
@Lookyan can you test quay.io/aledbf/nginx-ingress-controller:0.343 adding the flag
Yes, thank you, I'll test it.
@ElvinEfendi @aledbf sorry, I was very busy over the last few days. Regarding dynamic reconfiguration, I was in a hurry to deploy something usable for my company, so I was forced to create my fork. There are, obviously, huge differences from the point where I forked, but I hope we can merge it back in the future. Feel free to comment or suggest a way to merge those functionalities here.
@valeriano-manassero some good stuff in that fork! It would be great to get the consistent-hashing and dynamic-certificate features ported from your fork to this repository. This week I'm going to refactor
Closing. This is fixed by the new dynamic configuration feature. We will enable it by default in a couple of releases.
@Lookyan have you tried
Sorry, I forgot to write here.
@Lookyan Did you try the Fluid fork (https://github.com/NCCloud/fluid)? We hit the same issues and tried to avoid reloads entirely. The fork is not perfect, but maybe it can help you.
Is this a BUG REPORT or FEATURE REQUEST?: BUG REPORT
NGINX Ingress controller version: 0.9.0 (quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0)
Kubernetes version:
Environment:
What happened:
We use the ingress controller for almost all our services, and one of them has a big load of 30k rps. This traffic is handled by 3 ingress controllers (running on three separate physical servers). We have a problem with the deploy process: when any service using the same ingress controller updates its pods (endpoints), the ingress controller reloads its configuration, and at the same time latency increases for all network communication on every service that uses these ingress controllers (even latency from a service to its database and to localhost). What we see in our monitoring: a growing number of established connections and increased latency.
What you expected to happen:
Deploy process shouldn't affect other services.
How to reproduce it (as minimally and precisely as possible):
The easiest way to reproduce it is to deploy a service, point any load-testing tool at it to generate load, and then run nginx -t on the ingress. Because the ingress controller tests the configuration before every reload, nginx -t is invoked each time any service is deployed. Run nginx -t a few hundred times and you will see the degradation. These are our metrics:
Anything else we need to know:
This problem can be reproduced only under heavy load, starting from 5k rps to one ingress controller.
Also, I saw this related issue and we have that patch applied, but we still have problems: kubernetes/kubernetes#48358