We set a maximum response size of 20 MB, since the servers we fetch from can sometimes send us an endless amount of data (I mean 12 hours of useless 300 GB data endless! 😄).
Once the size limit is breached, node-fetch throws an error and the promise is rejected as expected, but the underlying HTTP request isn't aborted, so everything appears fine.
That is, until you suddenly see huge spikes in inbound traffic and CPU, and eventually realize a node process has been running for 12 hours instead of 12 seconds!
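For anyone hitting the same thing, here's a minimal sketch of the behavior we expected. `readCapped` is a hypothetical helper of ours, not node-fetch API: it consumes the body stream manually and destroys it once the cap is breached, so the underlying socket is actually torn down instead of only rejecting the promise:

```javascript
// Hypothetical helper: read a stream up to maxBytes, destroying the
// source (and with it the socket, for an HTTP response body) on overflow.
function readCapped(stream, maxBytes) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    let total = 0;
    stream.on('data', (chunk) => {
      total += chunk.length;
      if (total > maxBytes) {
        stream.destroy(); // tear the source down instead of just rejecting
        reject(new Error('max-size exceeded'));
        return;
      }
      chunks.push(chunk);
    });
    stream.on('end', () => resolve(Buffer.concat(chunks)));
    stream.on('error', reject);
  });
}
```

With node-fetch v2 the body is a Node `Readable`, so something like `readCapped(res.body, 20 * 1024 * 1024)` could stand in for `res.buffer()`.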
Is this part of the spec, or a bug? It feels like unexpected behavior to me.
Thanks!
Set a timeout or use AbortSignal to the same effect.
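A sketch of that suggestion, assuming Node 15+ where `AbortController` is global; `withTimeout` and `startFetch` are names I made up for illustration:

```javascript
// Sketch: abort a fetch-like call after a deadline. startFetch is any
// function that takes an AbortSignal and returns a promise (e.g. a
// node-fetch call with { signal }).
function withTimeout(startFetch, ms) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  return startFetch(controller.signal).finally(() => clearTimeout(timer));
}

// With node-fetch this would look something like:
//   withTimeout((signal) => fetch(url, { signal }).then((res) => res.buffer()), 12000);
```

Unlike the size-limit rejection, aborting via the signal actually destroys the in-flight request.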
In the v3 release we are stricter about piping network-related errors, so hopefully this issue won't recur.
Normally the underlying connection should time out according to your OS socket settings. I haven't seen a situation where node-fetch keeps the connection alive (we default to `Connection: close`), so you might want to look into your agent configuration and your OS configuration.