Issue: "Too many open files" error despite using TCPConnector with limit #9496
Unanswered

loic-bellinger asked this question in Q&A

Replies: 1 comment
It would be good if you could try this on master. There's a possibility it's fixed there and couldn't be backported to 3.x releases.
Hello,
I am encountering the classic "Too many open files" / "too many file descriptors" system error when making a large number of requests (50K+) to different domains. Despite setting up a TCPConnector with `limit=500`, the error is still triggered. For reference, `ulimit -n` on my system returns 1024.

I suspect that the sockets aren't being closed properly, which causes the file descriptor limit to be reached. Using `force_close=True` on the TCPConnector seems to solve the problem, but I assume this introduces a lot of overhead by not reusing connections.

Given my session configuration (provided below), do you see a solution to ensure that I stay within the file descriptor limits of my system? I've also tried using `asyncio.BoundedSemaphore`, but it didn't seem to make a difference. In any case, I assume it's redundant with the `limit` parameter on the TCPConnector, correct?

Here's how I am configuring my session:
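A minimal sketch of the kind of configuration being described (a shared session with a TCPConnector capped at `limit=500`), assuming aiohttp 3.x. This is not the original snippet: the `limit_per_host` and `enable_cleanup_closed` settings, the timeout value, and the `fetch`/`main` names are illustrative.

```python
import asyncio

import aiohttp  # third-party: pip install aiohttp


async def fetch(session: aiohttp.ClientSession, url: str) -> int:
    # Each request borrows a pooled connection; the connector caps
    # how many connections may be open at once.
    async with session.get(url) as resp:
        await resp.read()
        return resp.status


async def main(urls: list[str]) -> list[int]:
    connector = aiohttp.TCPConnector(
        limit=500,          # at most 500 concurrent connections overall
        limit_per_host=10,  # illustrative: also cap connections per host
        # force_close=True,  # the workaround from the question: no keep-alive
        enable_cleanup_closed=True,  # reap transports whose SSL shutdown hung
    )
    timeout = aiohttp.ClientTimeout(total=60)  # illustrative value
    async with aiohttp.ClientSession(connector=connector,
                                     timeout=timeout) as session:
        return await asyncio.gather(*(fetch(session, u) for u in urls))
```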
Do you have any advice on how to properly handle the file descriptor limits without forcing the closure of every socket?
Thanks in advance!