Bump haproxy version to 2.8.14 #756

Merged 2 commits into master from haproxy-auto-bump-master on Jan 30, 2025
Conversation

@CFN-CI (Contributor) commented on Jan 30, 2025

Automatic bump from version 2.8.13 to version 2.8.14, downloaded from https://www.haproxy.org/download/2.8/src/haproxy-2.8.14.tar.gz.

After merge, consider releasing a new version of haproxy-boshrelease.

@CFN-CI requested a review from a team on January 30, 2025 at 07:03
@CFN-CI added the run-ci (Allow this PR to be tested on Concourse) label on Jan 30, 2025
@peanball (Contributor) commented on Jan 30, 2025

release note: https://www.mail-archive.com/haproxy@formilux.org/msg45570.html

Somewhat relevant points (a config sketch of the affected directives follows the list):

  • Most of the remaining issues with queue management were fixed. The
    dequeuing process is now called when a stream is closed. This should
    ensure no stream remains blocked in the queue indefinitely and prevents
    infinite loops in some extreme cases. It was also possible to exceed the
    configured maxconn when a server was brought back up: only the proxy
    queue was evaluated at that stage, while the server queue must also be
    processed. Note that the issue is not totally fixed in 3.0. We can
    occasionally see a few more connections than maxconn, but the most that
    has been observed is 4 extra connections; we no longer see multiples of
    maxconn. This was improved in 3.2 to strictly respect the maxconn value.
  • In H1, it was possible for unusable client connections to sit waiting
    for the client timeout when they should have been closed. This happened
    when a connection error was encountered immediately after connection
    establishment, at the same time as the connection closure. It was not a
    leak, because the connections were eventually closed, but it was a waste
    of resources, especially with a high client timeout.
  • The H1 multiplexer was only able to handle timeouts if the client or
    server timeouts were defined, depending on the side. As a result, the
    client-fin/server-fin and http-keep-alive/http-request timeouts could
    be ignored.
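
For context, here is a minimal haproxy.cfg sketch showing the directives these fixes touch. The timeout values and backend names are illustrative assumptions, not recommendations from this release:

```
# Hypothetical haproxy.cfg fragment; timeout values are illustrative only.
defaults
    mode http
    timeout connect         5s
    timeout client          30s   # inactivity timeout the stuck H1 client connections waited on
    timeout server          30s
    timeout client-fin      5s    # previously could be ignored by the H1 mux
    timeout server-fin      5s    # same, on the server side
    timeout http-keep-alive 10s   # previously could be ignored
    timeout http-request    10s   # same

backend app
    # maxconn caps concurrent connections per server; excess requests queue.
    # The dequeue fixes above keep that queue from stalling, and 3.2
    # strictly enforces the cap when a server comes back up.
    server app1 10.0.0.10:8080 check maxconn 100
```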

@peanball merged commit d23816e into master on Jan 30, 2025
4 checks passed
@peanball deleted the haproxy-auto-bump-master branch on January 30, 2025 at 08:14