Support multi-threaded compilation for --build #30235
Comments
We'll be looking at multi-threaded stuff more when Node's worker threads API leaves the Experimental phase. It's certainly a ripe opportunity. |
More keywords since I had trouble digging this up from #30900: multi-core multicore multithreading multithreaded multi-thread parallel concurrent spawn |
Node 12 just came out, and while worker threads are still experimental, they no longer require any special node command-line args to enable them (I believe 11.7 no longer required the flag either). Given that Node 12 will become LTS in 6 months, now is a pretty good time to test this out. I would also bet TypeScript would be the biggest user and could help drive the internals of this API. |
@DanielRosenwasser I think I found this issue faster thanks to you. @ericanderson I agree with you. Node 12 is out now and presents a very good opportunity to try a feature like this. Modern consumer-grade computers have a lot of power to leverage! |
My project's TS compilation slowed down by a fair margin after switching from parallel lerna builds to project references. Almost definitely from this lack of parallelization. Would love to reap the benefits of both! |
@RyanCavanaugh The worker threads API just left the experimental stage and was lifted to stable (see: https://nodejs.org/api/worker_threads.html). Finally! Can we expect that it will be integrated into the compiler in the near future? |
This would be a great addition to typescript. Our builds are currently taking a good 5-6 minutes to complete. |
As part of the parallelization push, it would be nice if a parallelization-by-ply strategy over the dependency DAG could solve both issues (parallelization and dependency management); and it would seem to me that some recognition of the dependency DAG is necessary in any case? |
hi @Bnaya, thanks for the reply! What I meant was that, to the best of my knowledge, we have to place the … I was suggesting that any parallelization of the compiler will need to do a topological sort anyway (I'm assuming that we will be doing a by-ply parallelization); if so, then we can also avoid the requirement that |
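To make the by-ply idea a bit more concrete, here is a minimal sketch (TypeScript, with a made-up `deps` shape; this is not compiler code) of how a topological levelling could group projects into plies, where every ply can be compiled in parallel once the previous ply has finished:

// Sketch: group projects into "plies" so that each ply only depends on
// projects in earlier plies and can therefore be compiled in parallel.
// `deps` maps a project name to the projects it depends on (made-up shape).
function computePlies(deps: Map<string, string[]>): string[][] {
  const remaining = new Map<string, Set<string>>();
  for (const [project, projectDeps] of deps) {
    remaining.set(project, new Set(projectDeps));
  }

  const plies: string[][] = [];
  while (remaining.size > 0) {
    // Everything whose dependencies are already satisfied goes into the next ply.
    const ready = [...remaining.keys()].filter((p) => remaining.get(p)!.size === 0);
    if (ready.length === 0) {
      throw new Error("Cycle detected in project references");
    }
    plies.push(ready);
    for (const p of ready) remaining.delete(p);
    for (const pending of remaining.values()) {
      for (const p of ready) pending.delete(p);
    }
  }
  return plies;
}

// Example: computePlies(new Map([["a", []], ["b", ["a"]], ["c", ["a", "b"]]]))
//   => [["a"], ["b"], ["c"]] |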
If |
Is this going to be part of 3.8? Would love to get improved compilation speed sooner rather than later. Feature-wise, TypeScript pretty much checks all the boxes (3.7 with optional chaining was the last key feature missing), so from my side, a focus on performance while ensuring it continues to be stable is most important. |
Any update? |
I would also highly appreciate support for multi-threading. |
Just writing my 2 cents about that. tl;dr: try to compile your project with https://github.com/timocov/parallel-typescript and see/share your results 😂 If we want to speed up the build via parallelization, we need to share state between workers/threads (parsed source files, configs, fs cache, etc). Afaik Node.js doesn't provide a way to share the same object between threads (please correct me if I'm wrong), so we can only post a serialized object and de-serialize it in the receiver (for that we need to have implemented something like this). If we don't share the state, we'll parse the same stuff again and again in every worker that requires that stuff, which might affect the performance. On the other hand, we could compile every file inside the project in parallel, but that requires either changing the compiler API from synchronous to asynchronous (to be honest I don't believe that will be done any time soon 😂) or having a way to "stop the world and wait until another thread/process has finished" (see @DanielRosenwasser's tweet about that).
Because of that, it's possible (just possible) that multi-threaded compilation (of any kind: workers or processes) won't speed up your build as much as you expect (at least for some projects). I have a small piece of code which runs compilation in parallel for independent projects and assumes that
Some of my assumptions can be inaccurate (or even all of them), just my thoughts. If you wish, you can try to compile your project with https://github.com/timocov/parallel-typescript (don't forget to run |
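For anyone curious what per-project parallelism looks like mechanically, here is a rough sketch of the general approach (not the actual parallel-typescript implementation, whose internals I haven't checked): spawn one tsc process per project as soon as all of its referenced projects have finished. The project paths and the deps map below are made up for illustration:

import { spawn } from "child_process";
import * as os from "os";

// Hypothetical project-reference map: project path -> referenced project paths.
const deps = new Map<string, string[]>([
  ["packages/core", []],
  ["packages/utils", ["packages/core"]],
  ["packages/app", ["packages/core", "packages/utils"]],
]);

const started = new Set<string>();
const finished = new Set<string>();
let running = 0;
const maxParallel = os.cpus().length;

function schedule(): void {
  for (const [project, projectDeps] of deps) {
    if (running >= maxParallel) break;
    if (started.has(project)) continue;
    if (!projectDeps.every((d) => finished.has(d))) continue;

    started.add(project);
    running++;
    // One tsc process per ready project (on Windows, spawn may need shell: true).
    const child = spawn("npx", ["tsc", "-p", project], { stdio: "inherit" });
    child.on("exit", (code) => {
      running--;
      if (code !== 0) process.exit(code ?? 1);
      finished.add(project);
      schedule();
    });
  }
}

schedule(); |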
@timocov Thank you for setting up that project. Just tested it out here on a TypeScript project with 253 packages on a Xeon W-2133 (6 cores @ 3.6ghz) with 32GB ram and got fairly good results (see below). For a project like ours it looks like there'd be significant improvement (~30% faster) just implementing what you've done. One thing worth noting is that it expectedly chews up my entire CPU. With the original 14 minute run my PC was still usable; however, on the 10 minute run it's completely unusable. It's clearly doing a bunch of redundant work, which is also expected from what you said. The workers version was slightly easier on the CPU, but only slightly. To me this means our build machines would benefit more than our local machines. In our dev workflow it's relatively rare to have to do a full build of all of the projects.

Before:

> Measure-Command { tsc -b | Out-Host }
Days : 0
Hours : 0
Minutes : 14
Seconds : 9
Milliseconds : 203
Ticks : 8492037715
TotalDays : 0.00982874735532407
TotalHours : 0.235889936527778
TotalMinutes : 14.1533961916667
TotalSeconds : 849.2037715
TotalMilliseconds : 849203.7715

After (Processes):

> Measure-Command { node ..\..\parallel-typescript\index.js | Out-Host }
Days : 0
Hours : 0
Minutes : 10
Seconds : 9
Milliseconds : 573
Ticks : 6095731335
TotalDays : 0.00705524460069444
TotalHours : 0.169325870416667
TotalMinutes : 10.159552225
TotalSeconds : 609.5731335
TotalMilliseconds : 609573.1335

After (Workers):

> Measure-Command { node ..\..\parallel-typescript\index.js tsconfig.json node_modules/typescript/lib/tsc.js --workers | Out-Host }
Days : 0
Hours : 0
Minutes : 9
Seconds : 45
Milliseconds : 854
Ticks : 5858549663
TotalDays : 0.00678072877662037
TotalHours : 0.162737490638889
TotalMinutes : 9.76424943833333
TotalSeconds : 585.8549663
TotalMilliseconds : 585854.9663 |
Wow, that's really impressive. I didn't even think that I could find any project with a speed improvement here, though 😂
Yeah, I guess this is the cost of the parallelization.
Agreed. @evandigby, is it possible to share the "build flow" for your project (the tool prints it at the start, before the compilation)? Anyway, thank you for the feedback! |
@timocov No problem! Here's the build flow:
|
@timocov this is not completely true
If it's true that you cannot share the same object (i.e. share the memory area of the object) between threads, you can always use messages to share the object, manipulate it, and send back the edited one. For example:

const { Worker, isMainThread, parentPort } = require('worker_threads');
var obj = { /* what you need */ };
if (isMainThread) {
  const worker = new Worker(__filename);
  worker.once('message', (message) => {
    // edit data here and use condition to avoid "infinite loop", e.g. wrap the object in a "message object" with the status or the "kind" of message
    worker.postMessage(message); // message should contain the edited object
  });
  // send first message to the worker process
  worker.postMessage(obj); // or, maybe is better to replace obj with { status: "status identifier", args: [ obj ] }
} else {
  parentPort.once('message', (message) => {
    // edit data here and use condition to avoid "infinite loop", e.g. wrap the object in a "message object" with the state or the "kind" of message
    parentPort.postMessage(message);
  });
}

It can be a little tricky, but it is possible; I've done it to achieve a (really heavily customized) parallel TypeScript build at my company. We're actually building with 4 parallel threads, with around a 30-40% speed-up (but the customization slows down the build process by around 20%). Note 1: the code shown in the example has not been tested exactly as written and will probably require some fixes to work properly. Sorry, but I'm not authorized to copy-paste the code (not even portions of it); for this example I've just edited the NodeJS documentation page to share my knowledge. Note 2: I didn't fully understand why, but if you use a multiple-process build (using child_process) instead of workers, the build speeds up by around 10% more, but memory usage increases by around 25%. We're not using that approach because the build process of our application uses around 4-6GB of memory (consider that the source code disk size is around 10GB and the "application" is composed of 12 "sub-applications"), and with more tsc processes we easily reach 8GB 😆 Hope this can be useful! 😄 |
@KingOss Yes, I said that in the context of sending/receiving the whole AST between workers to avoid unnecessary reads/parses; e.g. you can't (or can you?) send an object which has methods/functions in it.
Is it possible to open-source it? |
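For what it's worth, the "can't send an object with methods" question is easy to check: postMessage on worker_threads ports uses the structured clone algorithm, which rejects functions. A tiny standalone TypeScript check (the AST-like objects here are hypothetical, not real compiler nodes):

import { MessageChannel } from "worker_threads";

// MessagePort.postMessage uses the same structured-clone rules as worker postMessage.
const { port1, port2 } = new MessageChannel();

port2.once("message", (msg) => {
  console.log("received:", msg);
  port1.close(); // let the process exit once the plain-data message arrives
});

// Plain data clones fine...
port1.postMessage({ kind: "Identifier", text: "foo" });

// ...but a value carrying a function is rejected with a DataCloneError.
try {
  port1.postMessage({ kind: "Identifier", getText: () => "foo" });
} catch (e) {
  console.error("cannot post an object with methods:", (e as Error).name);
} |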
@KingOss so I made this project for shared memory |
Is this ask on the agenda now? It's really awkward to see only 4 threads being utilized when building a large project while the other 28 sit idle like this... (yes, I love my 3950X). I hope we get this sorted out sooner rather than later. Which part of tsc is supposed to take care of this? Can someone share some pointers on where to begin, as if I were the one to make this change? I'd love to contribute. |
OK, I checked out the TypeScript compiler code locally, played with it, and debugged a little bit. Now I know there are build, createProgram, getNextInvalidatedProject, etc. functions. However, judging from the code below in the build function, this seems to be single-threaded: an infinite while loop that picks the next project and builds projects one by one,
It seems the magic of building the buildOrder is in the createBuildOrder function, where a DFS algorithm is used: starting from the root tsconfig.json, visit top-down until a package has no dependencies, push the node onto the queue, then trace back to the root package. While building the order, no graph information is recorded, such as the edges or the dependencies between each package. So there are two things:
a b c => d e f => h i j => k (k is the root)
We could also use some simple prediction for optimization, e.g. compute the total files * lines for each package and sort them descending (or ascending?), then schedule the packages to workers one by one in that order, to prevent worker starvation. Although this is I/O heavy and may introduce some overhead, we'll see. Comments? |
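To sketch the first point (recording edges while computing the build order), one option is to keep an indegree count per project plus a reverse-edge map, so that when a worker finishes a project we can immediately hand any newly unblocked dependents to an idle worker instead of waiting for a whole ply. This is purely illustrative; the types and names are invented and not based on createBuildOrder's actual internals:

interface BuildGraph {
  // How many of a project's dependencies have not been built yet.
  indegree: Map<string, number>;
  // Reverse edges: project -> projects that depend on it.
  dependents: Map<string, string[]>;
}

// Hypothetical shape: config path -> referenced config paths.
function buildGraph(references: Map<string, string[]>): BuildGraph {
  const indegree = new Map<string, number>();
  const dependents = new Map<string, string[]>();
  for (const [project, projectDeps] of references) {
    indegree.set(project, projectDeps.length);
    for (const dep of projectDeps) {
      if (!dependents.has(dep)) dependents.set(dep, []);
      dependents.get(dep)!.push(project);
    }
  }
  return { indegree, dependents };
}

// Called when a worker reports that `project` has finished building.
// Returns the projects that just became ready to hand to an idle worker.
function onProjectBuilt(graph: BuildGraph, project: string): string[] {
  const ready: string[] = [];
  for (const dependent of graph.dependents.get(project) ?? []) {
    const remaining = graph.indegree.get(dependent)! - 1;
    graph.indegree.set(dependent, remaining);
    if (remaining === 0) ready.push(dependent);
  }
  return ready;
} |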
Sorry, the email with the quote was marked as spam and I didn't see it. However:
@timocov no, sorry, that code cannot be open-sourced because it is part of a commercial product. In any case, as I'm the author of that code, if necessary I could rewrite a simplified version of the same library; let me know if you'd like that 😉
@Bnaya if I rewrite the code I'll inform you 😁 |
@KingOss I am curious, would you say that this speed-up would justify the invested time and added complexity of the parallel build process? Even if it does for your company/product, do you think it would make sense for the majority of TypeScript's user-base? For a large codebase such as the one I was working on above (which has grown significantly since then) 30% can be a significant amount of time. For a small codebase that could be a few seconds (or it may be slower in very small projects given the 20% slow-down you describe). I am wondering if one of the reasons this hasn't been attempted within the TypeScript team is that the cost/benefit of the parallelization just isn't there for the majority of the customer base. To be clear, I am not affiliated with the TS team in any way so I'm speculating as much as anyone. |
@KingOss yes please, I think it might be very useful for others 👍 |
@evandigby well, it surely depends on the structure of the project. In our case the project is made of more than 3k TS files that generate more than 600 bundles. We're speaking of more than 3k objects. In addition, right now we're not using TS modules for most of the files (as we are migrating them from JS and the migration to TS is still not complete). In our case we noticed a reduction of the build time from 22 minutes to around 15 minutes. Applying the same approach only to the TS files with modules (around 100 bundles and around 300 files), the build-time speed-up is not so significant (less than 10%), but this part is only partially developed: to manage the parallel build with module: "None" we've created a "custom tsinfo" that works with /// <reference ... /> and this takes a significant amount of time. Surely if we activate the incremental flag of the standard TS build and use the generated tsinfo, the process will be much more efficient.
@timocov ok, then I'll prepare a PoC and share it with you all. However, I'll make it really simple and partial, only for demonstration purposes. Is that OK? |
I would be very interested in a feature like this. Currently my typescript build takes around 8 minutes. With the |
If parallel-typescript is possible, would it be possible to let something like NVIDIA CUDA compile the code? |
I have also tried out the

before (tsc -b): 115.14s
after (parallel-typescript): 74.97s |
@RyanCavanaugh Node.js Worker threads API has been stable for a while. Is this the time to bring multi-thread support to tsc? |
Please. My m1 pro laptop is beating my 5950x in transpiling times. This cannot stand haha. |
What if, just hear me out, but what if there was also a focus on porting to Go, or to WebAssembly? It would be great to also boost single-threaded performance and startup; tsc takes forever just to boot, let alone compile a small project. |
Definitely agree. Using a better-performing language like Go or Rust for the compiler back end, with FFI to communicate with it from a Node front end (CLI), is a sensible approach. Parcel uses a Rust back end for compilation with a Node front end. What's awesome is that this allowed the back end to be compiled to wasm and a front end to be written for the browser, allowing its use in-browser. TypeScript type checking is the bottleneck as we rely on |
@evandigby how did you make this run? Using tsc -b, my project shows a time of around 58 seconds, but with parallel-typescript it increases to 1 min 46 seconds, and with parallel-typescript --workers to 1 min 35 seconds. |
This might be relevant here,
Even on a single thread, without any code to evaluate, the TS compiler takes an extra |
@avin-kavish the type-checker is a single 2 mb+ file: https://github.com/microsoft/TypeScript/blob/main/src/compiler/checker.ts
|
My point is that there’s no code to type check. Note the -e expression that passes empty code to evaluate |
When, in a real-world scenario, would you not have code to type check, but would still invoke |
Yeah, so what I am saying is, if the type checker is lazily loaded instead of all at once, then there wouldn’t be a ~600ms delay in starting to process. Example,
I.e. something like a JIT type interpreter. Looking at the way the type checker is written, it doesn't look possible without a complete rewrite; it's very monolithic. |
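As a sketch of the lazy-loading idea being floated here (this is not how tsc is structured today; the "./checker" module and its API are hypothetical), the parse/evaluate cost of a heavy module can be deferred until the first file actually needs checking:

// Hypothetical interface for a heavy checker module.
type Checker = { checkSourceFile(fileName: string): void };

let cachedChecker: Checker | undefined;

async function getChecker(): Promise<Checker> {
  if (!cachedChecker) {
    // The import cost (reading + parsing + evaluating the big module) is
    // paid here, on first use, instead of at process startup.
    const mod = await import("./checker"); // hypothetical heavy module
    cachedChecker = mod as Checker;
  }
  return cachedChecker;
}

async function compile(fileNames: string[]): Promise<void> {
  for (const fileName of fileNames) {
    // Parsing/emit could begin immediately; the checker is only loaded once
    // there is actually something to type-check.
    const checker = await getChecker();
    checker.checkSourceFile(fileName);
  }
} |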
This is not how the TypeScript checker works. And remember, you will have lib.es2015, lib.dom, or @types/node that need to be loaded before checking your file. |
Is it safe to assume, because of the massive effort of parallelizing the TS compiler (it apparently requires a full rewrite, if I've understood the previous comments correctly), that this issue is unlikely to be implemented in the short and long term? |
Search Terms
Project references, multi-thread, compilation, performance
Suggestion
We have migrated our main project to use project references. It simplifies the development workflow, but brings basically zero noticeable performance improvement (for both local development and CI). It even feels like we now get worse performance during development.
I guess the main reason behind this is the lack of support for multi-threaded compilation.
Use Cases
Improve compilation performance.
Examples
N/A
Checklist
My suggestion meets these guidelines: