Multiple reloads of haproxy config without any apparent changes #745
Please add the
Thanks, I'll do that and get back to you with the results.
@jcmoraisjr I have sent you the log file to your gmail. As I mentioned in the mail, I have verified that the config does not change between haproxy restarts.
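For reference, this is roughly how I checked (paths are from our setup; the controller renders the config to /etc/haproxy/haproxy.cfg in our pod, so adjust CFG for yours):

```shell
# Sketch: verify the rendered haproxy config really is unchanged across a reload.
CFG="${CFG:-/etc/haproxy/haproxy.cfg}"
snap="$(mktemp)"
cp "$CFG" "$snap"
# ... wait for the suspicious reload to happen, then:
if diff -u "$snap" "$CFG"; then
    echo "config unchanged"
fi
```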
I can reproduce this. DNS-based updates are not properly updating the controller's internal record of the server-template size. I've just pushed a fix. Maybe you can also consider giving Kubernetes' endpoint-based updates a chance, removing the resolver configuration. Out of curiosity - why use DNS-based updates? Endpoints should update faster due to the controller's watch on the k8s API, and haproxy is also updated without the need to reload.
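To illustrate what DNS-based discovery means on the haproxy side, it boils down to a backend along these lines (a sketch - backend name, slot count and resolver address are illustrative, not taken from this issue):

```
resolvers kube-dns
    nameserver dns1 10.96.0.10:53
    accepted_payload_size 8192
    hold valid 1s

backend my-service
    # haproxy itself resolves the service name and fills up to 30 server slots
    # from the DNS answer; slots beyond the answer stay empty
    server-template srv 30 my-service.my-namespace.svc.cluster.local:8080 \
        resolvers kube-dns init-addr none check
```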
Honestly I don't remember now if there was a specific reason why we chose DNS discovery. We had been using haproxy for a long time in a VM environment, and when we moved to Kubernetes we initially tried nginx-ingress. But it was too unstable: it used varying amounts of memory, the dynamic updates were not reliable, and it was not able to reload its config without dropping existing connections. So I set out to bring in haproxy-ingress and haproxy instead, and submitted several updates to the incubator/haproxy-ingress chart to bring it on par with the stable/nginx-ingress chart.

We also found we needed the dynamic weight-based load balancing provided by haproxy's agent-check, to be able to spread long-lived websocket connections evenly over a number of pods without overloading any of them. My main goal was to be able to do dynamic updates with as few restarts as possible, while not dropping existing connections. I had read about haproxy being able to do DNS lookups (A and SRV records) to discover backend servers, and it sounded like a good alternative, so I guess that's why we went with that.

So endpoint-based updates would also provide dynamic updates, without unnecessary config reloads? Are there any drawbacks?
Great! You should also know but history was preserved
It's true, and it's the only doable alternative in static environments. But it's just another strategy when you have a discovery system like Kubernetes, a dynamically configurable proxy like HAProxy with its Runtime API, and a controller that can read from one and apply to the other in a fast and safe way. Looking at the big picture, DNS sounds a bit ... old =)
Yep, endpoint-based updates are also dynamic and use native support - no addons, no memory leak. I'm not aware of any drawbacks; on the contrary, I'd say it's safer and faster. Safer because we have used it for ages on pretty large and noisy clusters with tens of rolling updates every day - I'm pretty close to the SRE team, taking care of logs and metrics, looking for misbehavior like unexpected reloads. Faster because it's almost instantaneous, without the need to rely on DNS updates.
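For completeness, the endpoint-based path means the controller sends commands over haproxy's stats socket instead of reloading - conceptually something like the following Runtime API commands (backend and server names are illustrative):

```
set server my-service/srv1 addr 10.0.0.12 port 8080
set server my-service/srv1 state ready
set server my-service/srv2 state maint
```

Each command takes effect immediately in the running process, which is why no fork/reload is needed when pods come and go.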
In case you've not found
v0.12.2, v0.11.5 and v0.10.6 were just released and fix this issue. Closing.
After recently migrating from haproxy-ingress 0.7.6 to 0.12.1 (!), I saw a change in behavior when rolling out a new version of a service deployment. haproxy reloaded its configuration multiple times during the rollout, apparently without any changes to the configuration.
We have a service with very long-lived websocket connections, so we have configured haproxy-ingress to avoid restarting and dropping those connections as much as possible. We are using the reusesocket reload strategy, dynamic scaling and DNS resolvers. We also set
timeout-stop
to 24h so that existing connections are kept alive and most clients have time to re-connect over the next 24 hours. (If we didn't do this we would get a burst of clients re-connecting at the same time.)

What happened is that our service with 30 pods was rolled out 6 pods at a time. At each rollout step, haproxy forked another instance, so the haproxy-ingress pod's memory usage grew in steps, and of course the count of current connections restarted from zero each time.
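If I read the docs right, timeout-stop maps to haproxy's hard-stop-after setting in the global section, so an old process keeps draining its connections for up to that long before being killed - something like:

```
global
    # old processes linger for up to 24h after a reload,
    # serving established connections only
    hard-stop-after 24h
```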
Note also that I am running an "external" haproxy in a side-car.
Expected behavior
Since backend server discovery is done through DNS lookups, haproxy.cfg does not change, so there should not be a need to restart haproxy. (Even if we didn't use DNS lookups, dynamic scaling should have allowed updates without restarting haproxy.) This is the way it used to work in 0.7.6.
Steps to reproduce the problem
Environment information
HAProxy Ingress version:
v0.12.1
Command-line options:
Global options:
Ingress objects:
This results in the following backend configurations:
In an haproxy container, it looked like this after the rollout:
Here's a snapshot of the number of connections during the rollout (there were some problems with the service initially). You can see how haproxy forks a new instance in all haproxy-ingress pods with each new step of the deployment rollout. (But note also that there seem to be many more haproxy instances running in the example above than just four or five.):
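A quick way to see the lingering instances is to count haproxy processes inside the pod. Here's a sketch over a made-up ps listing (in the real pod you would pipe `ps -eo pid,etime,comm` directly; the PIDs and elapsed times below are fabricated for illustration):

```shell
# Skip the header line, count processes whose command is "haproxy";
# anything beyond master + current worker is an old worker still draining.
printf 'PID ELAPSED COMMAND\n1 10:00 haproxy\n7 05:00 haproxy\n9 01:00 haproxy\n' |
    awk 'NR>1 && $3=="haproxy" {n++} END {print n " haproxy processes"}'
# → 3 haproxy processes
```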