Replies: 2 comments
I think there's not enough context here. We'd need a more complete reproducer to judge what's happening in your case.
It's possible you're measuring the wrong thing, especially if you have multiple requests being handled at the same time. `requests` is synchronous, so your entire application stops and waits for it to finish the write. Test it with a simple script (without a web application) that sends a single request and see if you observe aiohttp taking twice as much time. If the difference is not there in a simple script, then I'm pretty sure it's just the measurement problem described above.
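A simple single-request comparison along those lines could look like the sketch below. It spins up a throwaway local aiohttp server so no external network is involved; the `/graphql` path, port, and the `{ __typename }` query are placeholders, not anything from the original post.

```python
import asyncio
import time

import aiohttp
import requests
from aiohttp import web

# Placeholder query; substitute your real GraphQL query.
QUERY = {"query": "{ __typename }"}

async def handler(request: web.Request) -> web.Response:
    await request.json()  # read the body, as a real GraphQL server would
    return web.json_response({"data": {"__typename": "Query"}})

async def main() -> tuple[float, float]:
    # Throwaway local server so the comparison needs no external network.
    app = web.Application()
    app.router.add_post("/graphql", handler)
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, "127.0.0.1", 8099)
    await site.start()
    url = "http://127.0.0.1:8099/graphql"
    try:
        # One blocking call with requests, run in a thread so it does not
        # stall the event loop that is also serving the request.
        t0 = time.perf_counter()
        await asyncio.to_thread(requests.post, url, json=QUERY, timeout=10)
        sync_elapsed = time.perf_counter() - t0

        # One call with aiohttp, reusing a single ClientSession.
        async with aiohttp.ClientSession() as session:
            t0 = time.perf_counter()
            async with session.post(url, json=QUERY) as resp:
                await resp.json()
            async_elapsed = time.perf_counter() - t0
    finally:
        await runner.cleanup()
    return sync_elapsed, async_elapsed

if __name__ == "__main__":
    s, a = asyncio.run(main())
    print(f"requests: {s:.4f}s  aiohttp: {a:.4f}s")
```

If the two numbers are comparable here but diverge inside the web application, that points to the concurrent-measurement effect rather than aiohttp itself.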
Currently I am sending some GraphQL queries using a typical POST request. This is executed by different FastAPI endpoints that get hit at the same time. The POST requests do run in parallel, but each one takes noticeably longer than plain `requests.post`: ~1.5 seconds with `requests`, around 4 seconds each with aiohttp. When profiling, most of the time in each call (about 2 seconds) is spent in `ClientRequest.write_bytes`. Am I using the functions wrong, or is this just expected overhead? Any help would be appreciated.
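The concurrent-endpoint setup described above can by itself inflate per-call wall-clock timings: while one coroutine awaits I/O, the event loop runs the others, so each call "measures" time spent serving its neighbors. A stdlib-only sketch of that effect (the 1 ms work slices and the count of 4 concurrent tasks are arbitrary illustration values):

```python
import asyncio
import time

async def fake_request(work_units: int = 50) -> float:
    """Simulate an async call: small CPU slices with awaits in between."""
    start = time.perf_counter()
    for _ in range(work_units):
        # ~1 ms of CPU work, standing in for encoding/writing a request body
        t = time.perf_counter()
        while time.perf_counter() - t < 0.001:
            pass
        await asyncio.sleep(0)  # yield to other coroutines, like awaiting I/O
    return time.perf_counter() - start

async def main() -> tuple[float, float]:
    solo = await fake_request()  # one call with the loop to itself
    concurrent = await asyncio.gather(*[fake_request() for _ in range(4)])
    return solo, max(concurrent)

if __name__ == "__main__":
    solo, worst = asyncio.run(main())
    print(f"alone: {solo:.3f}s  worst of 4 concurrent: {worst:.3f}s")
```

Each task does the same amount of work in both runs, yet its measured wall time grows with concurrency; a per-call timer inside a busy FastAPI app can overstate the cost of `write_bytes` in the same way.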