maxBuffer default too small #9829
Comments
I would go with 10 MB. (Which is what I'm using as the default in …)

How about we make maxBuffer mandatory?

@sam-github That sounds like an extremely breaking change and would be very inconvenient. Just set it to an arbitrarily high value so most users won't hit it.

Yeah, bit of a strawman... but any value is likely to be smaller than someone needs.

@sam-github Yes, but 200 KB is IMHO way too low. Anything higher will at least improve the situation.

The flip side is that the memory footprint for a program that spawns twenty simultaneous child processes goes up from a max of 4 MB to a max of 200 MB. That's a pretty steep increase. I infer that you use maxBuffer as a convenient way to collect output, but it's intended to be used as a circuit breaker for runaway processes. Also, I'm curious why you are using child_process.exec/execFile for programs that print a lot to stdout. You'd be better served by child_process.spawn(), which knows how to stream the data.
That's only if they actually use all 10 MB each, right? Honestly, even …
@bnoordhuis That is a good point, though it doesn't solve the core issue: developers hit this limit and their child processes terminate without much indication as to why. Adding to what @sindresorhus said, if you are using …
I'd be OK with upping it to 1 meg, if someone PRs it; I've run into the limit too (but I just set maxBuffer when I do). 10 meg seems excessive; if you want that much output, explicitly configuring Node to expect it is reasonable. Or perhaps the limit should be max-string, and runaway processes become the user's problem to protect against?
Having the maxBuffer at all is the bug. It gives users an option they don't want to have to specify. Much better to remove the maxBuffer option and have the buffer grow dynamically. Programs with small streams use small amounts of memory; programs with large streams use more memory. Everything just works, and the programmer doesn't need to know about the issue at all.
@peterhal I take it you didn't read this comment?
Increase the default maxBuffer for child_process.exec from 200 * 1024 bytes to 1024 KB so child processes don't get terminated too soon. This is a fix for nodejs#9829
Having a maxBuffer at all is the problem: Node is trying to do what a developer should be doing. My humble opinion is that maxBuffer should be something developers opt into, not a default. Otherwise, as has been noted above, any default value for maxBuffer will be too small for someone.
Is this a change we're likely to consider? Should this remain open? |
There is an open but stalled pull request. Seeing there hasn't been much movement, I'll close this out. If someone wants to adopt #11196, feel free. edit: I just closed the PR; didn't seem likely its author was going to pick it up again. |
Can someone help me change the default buffer size? I have the same issue executing "az status vm list skus".
Currently maxBuffer for child_process.exec is set to 200 * 1024 bytes, or ~204.8 KB. I ran into an issue where my child process was being terminated, and tracking it down was quite tough. It ended up being that it was producing enough output that it exceeded maxBuffer.

I think the buffer size is too small, and this behavior (terminating a child) is drastic enough that it should only happen when a child is producing a much larger amount of output.

I'm not sure what's sane here, perhaps 5 MB+?