Timeout unexpectedly not raised #814
-
Hi, this is a question I originally asked in httpx: encode/httpx#2861. Since the same conditions apply, I paste the question again below; at the bottom I add my findings in httpcore.

I was investigating why some of my requests take much longer than the configured timeout. I was able to isolate and reproduce the scenario. I have these versions, on Python 3.11.3:
Let's take this minimal server, which just sleeps 2 seconds:

```python
import asyncio

from fastapi import FastAPI

app = FastAPI()


@app.get("/")
async def root():
    await asyncio.sleep(2)
```

I name it. And here is the test:

```python
import asyncio
import time

from httpx import Limits, AsyncClient, Timeout


async def read():
    start = time.time()
    try:
        response = await http.get("http://127.0.0.1:8000")
        response.raise_for_status()
    except Exception as ex:
        print(f"failed: {ex.__class__}")
    print(f"time: {time.time()-start}")


async def main():
    tasks = []
    for x in range(100):
        tasks.append(asyncio.create_task(read()))
    await asyncio.gather(*tasks, return_exceptions=True)


if __name__ == "__main__":
    limits = Limits(
        max_keepalive_connections=None,
        max_connections=3,
    )
    http = AsyncClient(
        limits=limits,
        timeout=Timeout(5, read=5, pool=5),
    )
    asyncio.get_event_loop().run_until_complete(main())
```

I get timings way above 5 seconds, but I don't understand why; I would expect a pool timeout. I can probably work around this by having a timeout on the task itself (like here: ); a sketch of that workaround is at the end of this post.

Following this, I tracked down where the PoolTimeout logic originates. It starts here:

httpcore/httpcore/_async/connection_pool.py, lines 229 to 234 in 94ffb33

After that, it can fail with:

httpcore/httpcore/_async/connection_pool.py, lines 246 to 250 in 94ffb33

But then it loops and tries again with the same timeout, so depending on the size of the backlog the total wait can go way beyond the intended timeout. To validate this assumption, I tried this very rough patch:

```python
started = time.time()
while True:
    timeouts = request.extensions.get("timeout", {})
    timeout = timeouts.get("pool", None)
    if timeout is not None:
        # Deduct the time already spent waiting, so retries share the
        # original pool timeout instead of each getting a fresh one.
        timeout = max(0, timeout - (time.time() - started))
    try:
        connection = await status.wait_for_connection(timeout=timeout)
```

Following that, the example worked as I expected. I presume the current behaviour is not intentional, but please correct me if I'm wrong.
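For reference, here is a minimal sketch of the task-level workaround mentioned above. It is only an illustration, not part of the original report; the use of asyncio.wait_for and the 10-second budget are assumptions on my side:

```python
import asyncio


async def read_with_deadline():
    try:
        # Bound the whole task from the outside, independently of how
        # httpx/httpcore applies its internal per-phase timeouts.
        await asyncio.wait_for(read(), timeout=10)
    except asyncio.TimeoutError:
        print("task-level timeout")
```

This reuses the `read()` coroutine from the test script above.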
-
Thanks for the thorough description! This looks like a bug to me. If another maintainer agrees we can work on a fix.
-
Hi @valsteen! Thank you for providing such a clear description. You can see #732, where we discussed what to do in such cases for read/write timeouts; in the end we decided to keep it simple and not change anything (refs: the OverallTimeout class). Applying your solution seems reasonable to me, but we may run into the same issue with read/write operations: for example, with a read timeout of 5 you can still hang indefinitely, because the server may send 1 byte every 4 seconds.

We should probably resolve this if the difference is significant enough.
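To make that concrete, here is a minimal sketch (illustrative only, not httpcore's actual code or API) of the difference between a per-operation timeout and an overall deadline; the function names and the generic `read_chunk` callable are assumptions for the example:

```python
import asyncio
import time


# Per-operation timeout: the 5-second budget restarts on every read, so a
# server that drips one byte every 4 seconds never trips it.
async def read_all_per_read_timeout(read_chunk, read_timeout=5.0):
    chunks = []
    while True:
        chunk = await asyncio.wait_for(read_chunk(), timeout=read_timeout)
        if not chunk:
            return b"".join(chunks)
        chunks.append(chunk)


# Overall deadline: elapsed time is deducted before each wait, in the same
# spirit as the pool-timeout patch proposed above.
async def read_all_overall_timeout(read_chunk, total_timeout=5.0):
    started = time.time()
    chunks = []
    while True:
        remaining = total_timeout - (time.time() - started)
        if remaining <= 0:
            raise TimeoutError("overall timeout exceeded")
        chunk = await asyncio.wait_for(read_chunk(), timeout=remaining)
        if not chunk:
            return b"".join(chunks)
        chunks.append(chunk)
```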
-
Here is the same example that @valsteen provided, but using httpcore directly. test.py:

```python
import time

import trio

from httpcore import AsyncConnectionPool

TIMEOUTS = {"read": 5, "write": 5, "pool": 5, "connect": 5}


async def read():
    start = time.time()
    try:
        response = await http.request(
            "GET", "http://127.0.0.1:8000", extensions={"timeout": TIMEOUTS}
        )
        if response.status // 100 != 2:
            raise RuntimeError
    except Exception as ex:
        print(f"failed: {ex.__class__}")
    print(f"time: {time.time()-start}")


async def main():
    async with trio.open_nursery() as nursery:
        for _ in range(100):
            nursery.start_soon(read)


if __name__ == "__main__":
    http = AsyncConnectionPool(
        max_connections=3,
        max_keepalive_connections=3,
    )
    trio.run(main)
```
-
this is solved: |