haproxy ingress 1.11.2 reloads when number of backend pods scales #646
Comments
Solved by downgrading the helm chart from 1.39.1 to 1.37.0 (haproxy ingress controller 1.11.2 -> 1.10.11) with exactly the same configuration.
Hi @nosmicek, thanks for reporting, I'll check what happened.
Hi, thanks. If you need some info about our setup or configuration, don't hesitate to ask; I have some limited time dedicated to solving this. Also, another issue I noticed while working on this: when I configured hard-stop-after and close-spread-time to 300000 (5 minutes), the haproxy ingress controller ended up periodically reloading every 10 minutes on version 1.11.2, regardless of our service scaling (this was constant for the whole test period), again due to this prometheus endpoint. We tried this as a workaround for the reloads, but it ended up being a much worse scenario.
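For reference, a minimal sketch of that workaround, assuming both timers are exposed as keys in the controller's ConfigMap (the ConfigMap name and namespace below are common helm-chart defaults, not confirmed in this thread); values are in milliseconds, matching the comment above:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-kubernetes-ingress   # illustrative: depends on your helm release
  namespace: haproxy-controller      # illustrative
data:
  hard-stop-after: "300000"    # 5 minutes, per the comment above
  close-spread-time: "300000"  # 5 minutes
```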
Hi @nosmicek, can you try the nightly build? There's a change that could solve the issue. To use it, just replace the tag in your YAML. Switch from your current tag to nightly.
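A sketch of the suggested tag swap, assuming the stock haproxytech/kubernetes-ingress image; the Deployment name, container name, and labels are illustrative, not taken from this thread:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: haproxy-kubernetes-ingress   # illustrative name
spec:
  selector:
    matchLabels:
      app: haproxy-ingress           # illustrative label
  template:
    metadata:
      labels:
        app: haproxy-ingress
    spec:
      containers:
        - name: kubernetes-ingress-controller          # illustrative name
          image: haproxytech/kubernetes-ingress:nightly  # replaces the pinned release tag
```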
Hi @ivanmatmati, with the latest, the issue is still present. I tried it today.
Latest or nightly?
I tried now with nightly and the problem is even worse: I get lots of reloads due to prometheus even when not scaling our backend service pods. It's really hard to distinguish whether it is caused by scaling or not, but it seems to happen when scaling.
I don't see these reloads due to prometheus. Can you check the commit you're on? On my side it's c3cd22c.
The same happened to me on chart version 1.39.4. It was resolved after downgrading to 1.37.0, as suggested by @nosmicek.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
We are experiencing an issue with haproxy ingress controller v1.11.2: it reloads every time our kubernetes service is scaled up or down, and the controller logs a reload each time.
The number of backend server slots is not changed, just one slot is enabled or disabled, but it always triggers a reload.
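For background (based on the controller's documented behavior, not this thread): the number of pre-allocated server slots per backend is governed by the scale-server-slots ConfigMap option, and scaling within that allocation should only enable or disable a slot at runtime rather than reload. A sketch, assuming that option name and an illustrative ConfigMap name:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-kubernetes-ingress   # illustrative
  namespace: haproxy-controller      # illustrative
data:
  # Pre-allocate server slots so that scaling the Service up or down
  # within this count only toggles slots at runtime instead of reloading.
  scale-server-slots: "42"
```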
A similar problem was solved in #638 and #634, so maybe this case was missed.
We don't run the controller with the --prometheus CLI flag; we just scrape metrics via a ServiceMonitor bound to the stats port, i.e. it goes through the frontend where this rule is applied:
http-request use-service prometheus-exporter if { path /metrics }
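For completeness, a hedged sketch of that scraping setup: a ServiceMonitor pointed at the controller's stats port rather than a dedicated --prometheus endpoint. The names, label selector, and port name below are illustrative, not taken from this thread:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: haproxy-ingress            # illustrative
  namespace: haproxy-controller    # illustrative
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kubernetes-ingress   # illustrative label
  endpoints:
    - port: stats      # the stats frontend that serves /metrics
      path: /metrics
      interval: 30s
```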