Increase default 'max_semi_space_size' value to reduce GC overhead in V8 #42511
Comments
@nodejs/v8
I remember there were similar requests for
@joyeecheung Thanks for your explanation! Are there any official documents about what Node.js core prefers? And if there are
I guess this also applies to workers and the
Actually it doesn't seem to be possible to set the semi-space size for workers? @jasnell @addaleax Only:
are possible to set. https://lbwa.github.io/v8-reference/classv8_1_1_resource_constraints.html
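For reference, a minimal sketch (assuming the current `worker_threads` API) of the per-worker heap limits that can be set from JavaScript today; note these control the young/old generation budgets as a whole rather than the semi-space size directly, and `./task.js` is just a placeholder:

```js
// Sketch: the resource limits currently exposed for workers. The
// young-generation budget is the closest knob to the semi-space size,
// but it is not the same flag as --max-semi-space-size.
const { Worker } = require('node:worker_threads');

const worker = new Worker('./task.js', {
  resourceLimits: {
    maxYoungGenerationSizeMb: 64, // young-generation cap for this worker
    maxOldGenerationSizeMb: 512,  // old-generation cap
    codeRangeSizeMb: 16,          // space reserved for generated code
    stackSizeMb: 4,               // default thread stack size
  },
});

worker.on('exit', (code) => console.log('worker exited with code', code));
```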
The problem with this is that 99% of users don't know what is best for their use case, so I guess we have to make some kind of decision about defaults that works best for most of our users. I guess the current defaults are more optimized for browser workloads rather than server workloads? IMHO it might be worth looking into optimizing the defaults to better suit our users.
We already document
I agree with this, and V8 should consider the browser's memory consumption on mobile devices with small RAM. But for server scenarios, memory usually isn't the bottleneck. V8 provides the interface set_max_semi_space_size_in_kb() to change the default max_semi_space_size. I found a related issue trying to set up the default max_young and max_old generation sizes according to the system's physical memory, which uses the ResourceConstraints::ConfigureDefaults interface in V8. But V8 has limited the max heap size to 2GB, and there is a proportional relationship between semi_space_size and heap_size, so the default max_semi_space_size can't be larger than 16MB. Can we create a similar
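As an illustration of the "scale with physical memory" idea from userland (not how core would implement it), here is a hedged sketch of a launcher that picks a `--max-semi-space-size` value from `os.totalmem()` and starts the app with that flag; the thresholds and the `server.js` entry point are assumptions:

```js
// Hypothetical launcher: choose a semi-space size based on physical memory,
// then start the real app with the flag. Thresholds are illustrative only.
const { spawn } = require('node:child_process');
const os = require('node:os');

const totalGiB = os.totalmem() / 2 ** 30;
const semiSpaceMb = totalGiB >= 16 ? 64 : totalGiB >= 8 ? 32 : 16;

const child = spawn(
  process.execPath,
  [`--max-semi-space-size=${semiSpaceMb}`, 'server.js'],
  { stdio: 'inherit' }
);
child.on('exit', (code) => process.exit(code ?? 0));
```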
Add the '--max_semi_space_size' flag into useful V8 option. Fixes: nodejs#42511
OK, I added the
Add link for issue nodejs#42511.
Unfortunately this doesn't yet allow using this option via NODE_OPTIONS. It seems it has to be explicitly allowed in
Hi @fdc-liebreich, I'm not quite clear about this: "Unfortunately this doesn't yet allow using this option via NODE_OPTIONS." Do you mean Node.js does not support passing the runtime flag
Add the `--max_semi_space_size` flag into useful V8 option. Fixes: nodejs/node#42511 PR-URL: nodejs/node#42575 Reviewed-By: Ben Noordhuis <info@bnoordhuis.nl> Reviewed-By: James M Snell <jasnell@gmail.com> Reviewed-By: Michael Dawson <midawson@redhat.com>
@JialuZhang-intel I have tried setting the max semi-space size to 32, 64, and 128 MB and ran a backend server using Express. After load testing with Apache Bench, as rightly mentioned, throughput increased at 64 MB (RPS for the backend increased by about 10%). However, when repeating the test multiple times, RPS actually decreased significantly in one test run; I believe that was due to the scavenge GC freeing up memory. With 16 MB, RPS is quite consistent across all runs with no significant dip, unlike at 64 MB. Setting it to 128 MB showed no significant improvement. Is setting the max semi-space size to 128 MB or 64 MB still a good idea?
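When comparing runs like this, it can help to measure how often scavenges actually happen rather than relying on RPS alone. A minimal sketch using the built-in `perf_hooks` GC entries (assuming Node.js 16+, where the GC kind lives in `entry.detail`):

```js
// Count scavenge (minor) GC pauses separately from all other GC pauses and
// track their total duration, so different --max-semi-space-size settings
// can be compared under the same load.
const { PerformanceObserver, constants } = require('node:perf_hooks');

const stats = { minor: { count: 0, ms: 0 }, other: { count: 0, ms: 0 } };

const obs = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const kind = entry.detail?.kind ?? entry.kind; // entry.kind on older Node
    const bucket = kind === constants.NODE_PERFORMANCE_GC_MINOR ? stats.minor : stats.other;
    bucket.count += 1;
    bucket.ms += entry.duration;
  }
});
obs.observe({ entryTypes: ['gc'] });

process.on('exit', () => console.log('GC summary:', JSON.stringify(stats)));
```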
use more space for the young generations to avoid GC runs see https://github.com/nodejs/node/blob/main/doc/api/cli.md#--max-semi-space-sizesize-in-megabytes and nodejs/node#42511 for details
We would love to have a PR for this to discuss.
Context: #46608 (comment)
This was discussed at the TSC meeting again. There is interest in seeing a PR. Until then there is not much to discuss at the TSC level.
…ost of some more memory See https://github.com/nodejs/node/blob/main/doc/api/cli.md#useful-v8-options Also see https://www.alibabacloud.com/blog/better-node-application-performance-through-gc-optimization_595119 and nodejs/node#42511 for some details on impact
* perf: speed up `splitLast` function
* perf: skip function wrapping if profiler is disabled
* perf: index files into a tree structure to get files for a subdirectory more quickly
* perf: increase max-semi-space-size for less GC interruptions at the cost of some more memory. See https://github.com/nodejs/node/blob/main/doc/api/cli.md#useful-v8-options Also see https://www.alibabacloud.com/blog/better-node-application-performance-through-gc-optimization_595119 and nodejs/node#42511 for some details on impact
* refactor: replace Bluebird with native Promises (part 1)
* refactor: replace bluebird with native promises (part 2)
* refactor: replace bluebird with native promises (part 3)
* chore: make the linter happy after all the bluebird replacements
* fix: splitLast behavior with no index found
* chore: linter again
* chore: cleanup file-tree and add header
* refactor: replace pLimit with pMap
There has been no activity on this feature request for 5 months and it is unlikely to be implemented. It will be closed 6 months after the last non-automated comment. For more information on how the project manages feature requests, please consult the feature request management document.
There has been no activity on this feature request and it is being closed. If you feel closing this issue is not the right thing to do, please leave a comment. For more information on how the project manages feature requests, please consult the feature request management document.
I guess this issue should stay open as a reminder for making a PR?
There's nodejs/performance#67 already. I'm going to close this but feel free to send a PR.
What is the problem this feature will solve?
When I used Node.js to run the web-tooling-benchmark, I found that the runtime flag `--max_semi_space_size` has a big impact on the test result. The total throughput increased by about 18% after I passed the runtime flag `--max_semi_space_size=128` to node. So I did some investigation into the `max_semi_space_size` flag.

From some V8 official blogs (Getting garbage collection for free, orinoco-parallel-scavenger), I found there are two garbage collection strategies in V8: Scavenge (minor GC) for the young generation and Mark-Compact (major GC) for the old generation.
When we create a new object in JavaScript code, the object is placed into the semi-space as a young-generation object. When the semi-space is about to be used up, the V8 engine triggers a Scavenge GC to clean up the garbage objects in the semi-space.
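To make the semi-space behaviour concrete, here is a small hedged sketch that inspects the young generation with the built-in `v8` module (the `new_space` entry reflects the semi-space currently in use); the allocation loop is just an illustrative workload:

```js
// Allocate a burst of short-lived objects and print the young-generation
// ('new_space') statistics before and after, using the built-in v8 module.
const v8 = require('node:v8');

function newSpace() {
  return v8.getHeapSpaceStatistics().find((s) => s.space_name === 'new_space');
}

console.log('before:', newSpace());

let junk = [];
for (let i = 0; i < 1e5; i++) junk.push({ i, payload: 'x'.repeat(32) });
junk = null; // make the objects garbage; a scavenge will reclaim them

console.log('after :', newSpace());
```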
If I use the `--max_semi_space_size` flag to increase the maximum semi-space size, Scavenge GC will occur less frequently. This brings both advantages and disadvantages: it's a trade-off between time and space.

V8 sets the default `max_semi_space_size` to 16MB on 64-bit systems and 8MB on 32-bit systems (related code). I think this is a heuristic value chosen mainly with client devices with small RAM in mind (for example, some Android devices only have 4GB of RAM). But for server scenarios, memory usually isn't the bottleneck, while throughput is the actual bottleneck. So the problem is:
Is the current default `max_semi_space_size` (16MB/8MB) in V8 also the best configuration for Node.js?

What is the feature you are proposing to solve the problem?
To solve the problem above, I tuned `--max_semi_space_size` (16MB, 32MB, 64MB, 128MB, 256MB) and tested on the web-tooling-benchmark and a simple service based on ghost.js. Here are the test results:

From the figure we can see that:
So I think we can choose a better `max_semi_space_size` value and pass this runtime flag to V8 when Node.js starts up.

What alternatives have you considered?
Test environment:
Test process: