Optimizing batch requests #1280
Replies: 1 comment 2 replies
The most common cause is a lack of operating-system resources to perform that many requests in parallel. You are asking your OS to open 10k sockets, each one consuming a file descriptor and a few KB of memory. If each of them needs 64KB (very common), you'll need a bit more than 600MB of kernel-space RAM just to allocate them. Essentially, such a high number of concurrent connections is possible only with sophisticated network tuning. Given how TCP windows work, I recommend creating a fixed number of connections, e.g. 128, and then setting up pipelining to maximize their throughput, e.g. 8 or 10 pipelined requests.
Hi, I just had a few questions about Undici and optimizing for running a tonne of requests.
I have a batch of 10k HTTP requests to make and want to squeeze out every ounce of juice I can get. In the test below I purposefully set connections to 10,000 to see what would happen.
Running the above took a total of 38,960 ms, of which 4,683 requests completed and 5,317 failed. Here's a full picture of the run stats:
The errors.json file has the same failing error for each failed request (I'd also note this error is not documented in the error types file). I am aware of file descriptor limitations on Linux, Windows handle limitations (this test was run on a Windows machine), and port exhaustion, though I checked my machine's event logs for port exhaustion and none were reported, and I don't believe I ran into a handle limitation either.
Here's a recording of the socket lifecycle captured for this run (partly for the hell of it, and in case it helps relate the socket lifecycle states to the failed-request error; some states may be missed, since TaskExplorer itself refreshes every 300 ms). Socket activity starts at 0:09:
sockets_lifecycle.mp4
I'd just like to ask