Optimize connection pool implementation #924
Conversation
httpcore/_async/connection_pool.py
```
@@ -238,24 +238,27 @@ def _assign_requests_to_connections(self) -> List[AsyncConnectionInterface]:
        those connections to be handled separately.
        """
        closing_connections = []
        idling_connections = {c for c in self._connections if c.is_idle()}
```
It seems like we use this collection just for tracking the count of idle connections. Maybe it should just be an integer for simplicity?
With an integer we would need to recheck all the connections again. We can't just decrement below in the if-elif branches, as we could end up going negative, etc.
Heh, sorry, you are right. We can just check e.g. whether the expired connection was also idle and decrement!
Does that part significantly change the performance? Did you run any tests? Maybe we are gaining the same performance boost from the polling change alone?
When I checked with pyinstrument, it showed this exact spot as a hot spot, spending time iterating the connections in the outer and inner loops. I'll rerun the benchmark to check it again.
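For context, a minimal way to reproduce this kind of profile with pyinstrument could look like the sketch below; the URL, request count, and pool settings are placeholders, not the benchmark used in this PR:

```python
# Minimal profiling sketch; the URL and request count are placeholders,
# not the benchmark used in this PR.
import asyncio

import httpcore
from pyinstrument import Profiler


async def main() -> None:
    profiler = Profiler()
    profiler.start()

    async with httpcore.AsyncConnectionPool() as pool:
        for _ in range(1000):
            await pool.request("GET", "http://localhost:8000/")

    profiler.stop()
    print(profiler.output_text(unicode=True))


asyncio.run(main())
```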
Anyway, I have now pushed the integer fix.
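Roughly, the idea is to track the idle connections as a single count and decrement it whenever an idle connection is removed, instead of building a set and re-scanning it. A simplified, standalone sketch of that bookkeeping (`prune_connections` and its parameters are hypothetical names, not the PR's actual code):

```python
def prune_connections(connections: list, max_keepalive: int) -> list:
    """Return connections that should be closed, tracking idle connections as a count."""
    closing = []
    idle_count = sum(1 for c in connections if c.is_idle())

    for connection in list(connections):
        if connection.has_expired():
            # An expired connection may also have been idle, so keep the count accurate.
            if connection.is_idle():
                idle_count -= 1
            connections.remove(connection)
            closing.append(connection)
        elif connection.is_idle() and idle_count > max_keepalive:
            # Close surplus idle connections beyond the keep-alive limit.
            idle_count -= 1
            connections.remove(connection)
            closing.append(connection)

    return closing
```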
I have similar results without this PR. Am I doing something wrong? Here is what I got on my machine without this PR:

UPDATE: However, the test results for httpcore vary between 3500 and 5500 for me (without this change).
What kind of system are you on, and which Python version?
```python
# Checking the readable status is relatively expensive so check it at a lower frequency.
if (now - self._network_stream_used_at) > self._socket_poll_interval():
    self._network_stream_used_at = now
    server_disconnected = (
        self._state == HTTPConnectionState.IDLE
        and self._network_stream.get_extra_info("is_readable")
    )
    if server_disconnected:
        return True

return False
```
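The `self._socket_poll_interval()` used here is what the PR summary below calls a "smudged" interval. A minimal sketch of such a jittered interval, with illustrative constants that are not taken from this PR:

```python
import random

# Illustrative constants only; the actual base interval and jitter in the PR may differ.
BASE_POLL_INTERVAL = 1.0  # seconds between readability polls on an idle connection
POLL_JITTER = 0.2         # +/- 20% "smudge" so all connections don't poll at the same time


def socket_poll_interval() -> float:
    """Return a jittered ("smudged") poll interval."""
    return BASE_POLL_INTERVAL * (1.0 + random.uniform(-POLL_JITTER, POLL_JITTER))
```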
Actually, I'm not happy with the interval calculation. Could we improve `is_readable` instead? FWIW, I noticed the anyio and sync backends use `httpcore._utils.is_socket_readable` while the trio backend uses its own. @MarkusSintonen, is there any benchmark difference when you switch the backend to trio?

For improving `is_readable` in `get_extra_info`:

- Always assume it is readable, and flip it to false on specific events such as receiving a socket close from the server.
- Use a synchronized Event on readability status changes.

I didn't dig deeply into whether these cases are possible or not. They are only my opinions 🤷‍♂️
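For context, a readability check of this kind is typically a zero-timeout `select()` on the raw socket, roughly like the sketch below (not necessarily httpcore's exact implementation), which is why doing it on every pool operation is relatively expensive:

```python
import select
import socket
from typing import Optional


def socket_is_readable(sock: Optional[socket.socket]) -> bool:
    """Rough sketch of a zero-timeout readability check on a raw socket.

    For an idle HTTP connection, pending data (or EOF) usually means the
    server has closed the connection, so it should not be reused.
    """
    if sock is None or sock.fileno() < 0:
        return True  # no usable socket; treat the connection as not reusable
    readable, _, _ = select.select([sock], [], [], 0)
    return bool(readable)
```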
> For improving `is_readable` in `get_extra_info`: always assume it is readable, and flip it to false on specific events such as receiving a socket close from the server.

I'm not aware of any way to get events about the socket getting closed. As far as I know, the only way to know is to use the socket. 🤔 But I agree `is_readable` could be better so it's not so heavyweight. We could make it flag-based, so on the networking side we just set a boolean flag when we detect a network error during usage. The downside is a greater probability of handing out already-broken connections from the pool. But as far as I know, this is how it's usually done.
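A rough sketch of that flag-based idea, with hypothetical names (not code from this PR): a stream wrapper records a failure when I/O raises, and the pool's expiry check reads the flag instead of polling the socket.

```python
from typing import Optional


class FlaggedStream:
    """Wraps a network stream and records whether I/O on it has ever failed."""

    def __init__(self, stream) -> None:
        self._stream = stream
        self.broken = False  # the pool would check this instead of polling the socket

    async def read(self, max_bytes: int, timeout: Optional[float] = None) -> bytes:
        try:
            return await self._stream.read(max_bytes, timeout=timeout)
        except Exception:
            self.broken = True  # remember the failure so the connection isn't reused
            raise

    async def write(self, buffer: bytes, timeout: Optional[float] = None) -> None:
        try:
            await self._stream.write(buffer, timeout=timeout)
        except Exception:
            self.broken = True
            raise
```

As noted above, the trade-off is that a connection can break while idle without any I/O to set the flag, so the breakage is only noticed on the next use.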
Another option would be to move the interval-based polling into the network backends. Or get even more elaborate and run the socket polling at a specific interval via `loop.call_later`, which runs the actual poll via `loop.run_in_executor` to avoid any possibly non-async socket IO in async land. Gets hairy easily 😄
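A very rough sketch of that idea, with hypothetical names and no cancellation or error handling; `loop.call_later` schedules the poll and `loop.run_in_executor` keeps the blocking socket check out of the event loop, reusing the `httpcore._utils.is_socket_readable` helper mentioned earlier:

```python
import asyncio
import socket
from typing import Callable

from httpcore._utils import is_socket_readable  # sync helper mentioned above

POLL_INTERVAL = 1.0  # hypothetical interval; not taken from this PR


def schedule_socket_poll(
    loop: asyncio.AbstractEventLoop,
    sock: socket.socket,
    on_broken: Callable[[], None],
) -> None:
    """Poll the raw socket periodically without blocking the event loop."""

    def arm() -> None:
        loop.call_later(POLL_INTERVAL, lambda: asyncio.ensure_future(poll()))

    async def poll() -> None:
        # Run the blocking readability check in the default thread pool executor.
        if await loop.run_in_executor(None, is_socket_readable, sock):
            on_broken()  # readable while idle usually means the server closed it
        else:
            arm()  # re-arm the timer for the next poll

    arm()
```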
Except for that interval mechanism (which could perhaps be excluded from this PR), it's LGTM. Thank you!

That was the beef of the PR, as the constant socket polling is the most expensive thing in the whole pooling implementation 😅 (the loop complexity gets shadowed by the socket polling).
Let's get this split into two separate PRs please, so we can look at each in isolation and determine if it's a benefit.
Will do. FYI, here was the pool without the loop complexity fix: #924 (comment)
@tomchristie / @T-256
Summary
Second PR in a series of optimizations to improve the performance of `httpcore` and even reach the performance levels of `aiohttp` (and the `urllib3` library).

Related discussion: encode/httpx#3215 (comment)
Optimizes the connection pool by reducing the time complexity of the idle-connection checks. It also no longer checks the socket's readable status on every pool operation. The pool's `has_expired` check uses socket polling via the `is_readable` check, which is relatively expensive. So the polling is now done at smudged intervals (smudging so that all polls don't happen at exactly the same time). This should still leave a relatively low chance of encountering a broken keep-alive connection when it is picked up from the pool.

Async previously:
Async with PR:
Async request latency is not so stable yet (as this doesn't include #922) but the overall duration of the benchmark improves by 7.5x. (The difference diminishes when server latency increases over 100ms or so.)
Sync previously:
Sync with PR:
Sync request latency improves by 2.5x. With this, `httpcore` has the same performance as the `urllib3` library.

Checklist