We have run into a problem where the media server runs out of sockets (file descriptors) and hangs.
```
[ERR]-EventLoop::Start() | could not start pipe [errno:24]
[ERR]-RTPBundleTransport::Init() | too many failed attemps opening sockets
```
This was reproduced by reducing the number of file descriptors available to the node process and then opening new connections to the media server, which ended up hanging the process. `strace -p {pid}` shows the process blocked in

```
futex(0x359c800, FUTEX_WAIT, 2147483648, NULL
```

along with multiple messages such as:

```
bind(-1, {sa_family=AF_INET, sin_port=htons(19823), sin_addr=inet_addr("0.0.0.0")}, 16) = -1 EBADF (Bad file descriptor)
socket(AF_INET, SOCK_DGRAM, IPPROTO_IP) = -1 EMFILE (Too many open files)
```
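For anyone trying to reproduce the EMFILE condition in isolation, a minimal sketch (assuming Linux and a hypothetical soft limit of 32) that lowers the process's own file-descriptor limit and opens UDP sockets until the kernel refuses, mirroring the failing `socket(AF_INET, SOCK_DGRAM, ...)` calls in the strace output:

```python
import errno
import resource
import socket

# Lower the soft fd limit for this process only (32 is an arbitrary value for the demo).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (32, hard))

# Open UDP sockets until socket() fails with EMFILE ("Too many open files").
sockets = []
try:
    while True:
        sockets.append(socket.socket(socket.AF_INET, socket.SOCK_DGRAM))
except OSError as exc:
    assert exc.errno == errno.EMFILE
    print("EMFILE after", len(sockets), "sockets")
finally:
    for s in sockets:
        s.close()
```

This only demonstrates the fd exhaustion itself; the hang reported here is the separate question of how the media server's event loop behaves once `socket()` starts failing.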
This was tested on both 0.100.2 and 0.117.2 with the same result.
The process seems to be stuck waiting for a value that never comes back, and so it hangs without entering a zombie state.
The file descriptor limits are only affected by the process limits and are not inherited from the system or user limits (this was tested).
Thank you for taking a look at this!
pal-gstama changed the title from "Running out of sockets leads the node process to hang." to "Running out of sockets leads the node process to hang" on May 4, 2022.