[kbn/optimizer] disable parallelization in terser plugin #58396
Conversation
Pinging @elastic/kibana-operations (Team:Operations)
While I do see a slight reduction (~7%), I doubt this is the cause of the OOM in the release manager given how little memory it's consuming. Still, it's worth the reduction in memory usage and worker count. I'm going to run some tests during the actual build to see what we're seeing resource-wise there. Wrote a crude bash script to measure the change in memory usage over time:
Built the platform plugins with the following command while monitoring the usage:
master: Duration of ~69.07s (69.3, 68.1, 69.8)
fix/memory-usage-when-building-dist: Duration of ~68.57s (68.3, 68, 69.4)
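The "crude bash script" mentioned above isn't included in the thread; the following is a hypothetical reconstruction of that kind of sampler (names and defaults are illustrative): it polls the resident set size of a process at a fixed interval and prints a timestamped log you can diff between branches.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a crude memory sampler (not the original script):
# prints "<unix-timestamp> rss_kb=<NNN>" once per interval for a given PID.
sample_rss() {
  local pid="${1:-$$}" samples="${2:-5}" interval="${3:-1}"
  for _ in $(seq 1 "$samples"); do
    local rss
    # RSS in kilobytes as reported by ps for this one process.
    rss=$(ps -o rss= -p "$pid" | awk '{print $1}')
    echo "$(date +%s) rss_kb=${rss}"
    sleep "$interval"
  done
}

# Example: sample our own shell twice with no delay.
sample_rss "$$" 2 0
```

Piping the output to a file while a build runs gives a rough memory-over-time curve; it won't catch short-lived child processes, which is consistent with calling it "crude".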
I'm not sure the exact memory usage is relevant. I think the issue is that when memory usage is high and we then try to spawn a process, the spawn is refused unless there is enough uncommitted RAM for the entire parent process to be duplicated... or something like that. https://github.com/nodejs/node/issues/25382#issuecomment-580319938 is really interesting; this might be even worse once we're on node 12...
* [kbn/optimizer] disable parallelization in terser plugin * use more workers when building the dist
7.7/7.x: ce10569
In order to avoid a memory issue when running the build in smaller VMs, we should prevent terser from launching worker processes. We're already running in workers, and node child processes reserve the same amount of memory as the parent according to https://github.com/nodejs/node/issues/25382. Additionally, when running on smaller machines we run the optimizer with two workers and 4GB of max-old-space-size for each worker. This combination seems to be exploding on build machines creating builds of the whole stack. Disabling the `parallel` option in `TerserPlugin` seems to fix this by running terser in the same process as the webpack compiler, so it respects the worker count.

Additionally, rather than limiting the worker count to `cpuCount / 3`, when building the distributable we'll use `cpuCount - 1` workers because, by default, we only build the dist when it's the only thing happening on the machine, so we don't have to be as considerate of other processes as we try to be in dev.
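The change described above can be sketched as a webpack config fragment. This is a minimal illustration, not the actual kbn/optimizer source: it assumes a standard `terser-webpack-plugin` setup, and the `BUILDING_DIST` environment flag is a hypothetical stand-in for however the optimizer distinguishes dist builds from dev.

```javascript
// Hypothetical sketch of the fix, assuming terser-webpack-plugin (webpack 4 era).
const os = require('os');
const TerserPlugin = require('terser-webpack-plugin');

const cpuCount = os.cpus().length;

// Dev builds share the machine, so stay conservative; dist builds are
// assumed to be the only thing running, so use nearly all cores.
const workerCount = process.env.BUILDING_DIST
  ? Math.max(1, cpuCount - 1)
  : Math.max(1, Math.floor(cpuCount / 3));

module.exports = {
  optimization: {
    minimizer: [
      new TerserPlugin({
        // `parallel: false` keeps terser inside the webpack compiler's
        // process instead of forking its own pool of children, so the
        // total number of processes stays bounded by `workerCount`.
        parallel: false,
        sourceMap: false,
        cache: false,
      }),
    ],
  },
};
```

With `parallel: true` (the plugin's default behavior when enabled), each optimizer worker would fork additional minification children, multiplying memory reservations; disabling it makes the worker count the single knob controlling process count.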