Add API to (de)serialize parsed Node objects #26871
Comments
Canonical S-expressions could be a candidate format.
Not sure if I misunderstand this issue, but is watch mode not an option? I'm using that in my project https://github.com/AlCalzone/virtual-tsc (with a barebones language service host and an in-memory FS), where compilation times drop from a few seconds on the first compilation to a few milliseconds when only single files get changed.
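The watch-mode approach described in the comment above can be sketched with the compiler's public watch API. This is a minimal sketch, assuming the `typescript` package is installed; the file path is illustrative, and `ts.createWatchProgram(host)` is deliberately not called so the snippet terminates instead of watching:

```typescript
import * as ts from "typescript";
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Illustrative root file for the watch host.
const file = path.join(os.tmpdir(), "watch-demo.ts");
fs.writeFileSync(file, "export const answer: number = 42;\n");

// The watch host keeps parsed SourceFiles in memory between rebuilds,
// so only files whose text actually changed are re-parsed.
const host = ts.createWatchCompilerHost(
  [file],
  { noEmit: true },
  ts.sys,
  ts.createSemanticDiagnosticsBuilderProgram
);

// Calling ts.createWatchProgram(host) here would start the watcher;
// subsequent edits to `file` trigger incremental re-parse of that file only.
console.log(typeof host.createProgram); // "function"
```

This avoids the need for on-disk AST serialization as long as a single long-lived process performs the compilations.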
Not exactly. Watch mode by …
For example, you work on your feature branch and now you need to switch to the master/dev branch. You stop … Or, for example, caches could even be shared between co-workers, CI builds, etc., to avoid re-parsing files that haven't changed for some time.
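The shared-cache idea above could work roughly like this. A minimal stdlib-only sketch: `cacheDir`, `getOrParse`, and the fake parser are all hypothetical names, and the cached payload is just a string standing in for whatever serialized-AST format this issue proposes:

```typescript
import * as crypto from "crypto";
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Hypothetical stand-in for an on-disk parse cache; a real one would store
// the serialized AST produced by the API this issue requests.
const cacheDir = fs.mkdtempSync(path.join(os.tmpdir(), "ts-parse-cache-"));

// Key entries by a hash of the file text, so the cache is safe to share
// between machines: a hit means the exact same source was parsed before.
function cacheKey(sourceText: string): string {
  return crypto.createHash("sha256").update(sourceText).digest("hex");
}

function getOrParse(sourceText: string, parse: (text: string) => string): string {
  const entry = path.join(cacheDir, cacheKey(sourceText));
  if (fs.existsSync(entry)) {
    return fs.readFileSync(entry, "utf8"); // cache hit: skip parsing entirely
  }
  const result = parse(sourceText);        // cache miss: parse once and persist
  fs.writeFileSync(entry, result);
  return result;
}

let parses = 0;
const fakeParse = (text: string) => { parses++; return `(ast of ${text.length} chars)`; };

const first = getOrParse("const a = 1;", fakeParse);
const second = getOrParse("const a = 1;", fakeParse); // served from disk
console.log(parses); // 1
```

Because entries are keyed by content hash, switching branches back and forth only pays parse cost for files whose text is genuinely new.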
Is this an XY problem? I.e., is the request here to make compilation faster in general, or do you specifically need a serialized parse tree for some other reason? I will say that 10-15s to parse 226k lines seems quite high and may indicate a real bug somewhere. Our self-host compilation of …
The thing that inspired me to create this issue is compilation performance.
It would still be nice to have such an API, but if compilation/parse time becomes fast enough, I hope nobody will need to store the parsed tree to a file (because parse time would be comparable to reading and restoring that tree from another file).
Wow, that looks good. That said, I'm not sure it will be faster than restoring the AST from a file, but we need to check anyway.
Hm, I'm happy to help track down that problem. What can I provide to help find the bug, if there is one?
Also, it would be nice to understand what happened here and find the cause. Please let me know if anything can help pinpoint the problem.
Are you using …
No, at that moment I deliberately disabled this flag to avoid extra checking/emitting. With it enabled …
@ajafff it seems there was some lag on my side - I just ran the compilations again:

noEmit: true:

noEmit: false:
@RyanCavanaugh ping
Okay, I just ran with …
It seems that parse time is ok now. |
Search Terms
Serialization, deserialization, dumping, dump source file, compiler cache
Related issues
#25658, TypeStrong/ts-loader#825
Suggestion
I'd like to suggest adding an API to (de-)serialize any parsed Node, or making the parsed AST easily (de-)serializable via, for example, `JSON.stringify()`.

Use Cases
It can be used to cache parsed source files, so that unchanged source files don't have to be parsed again between compilations.
For example, for our project at the moment we have the following times for compilation:
(I have no idea why `--noEmit` does not reduce "Total time" by "Emit time" - it is strange.)

I hope that if we have such an API, or even caching built into `tsc`, it can reduce compilation time, especially "Parse time". Also, in that case, the TypeScript package could be published with already-parsed lib files, which could reduce compilation time even without enabling the `--skipLibCheck` option.

Examples
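One wrinkle for the `JSON.stringify()` route mentioned in the Suggestion: TypeScript AST nodes hold `parent` back-references, so the tree is cyclic and plain stringification throws. A sketch using a hand-made mock of that node shape (not the real compiler API) shows how a (de)serialization step could normalize the tree:

```typescript
// Mock of the cyclic parent/child shape of TS AST nodes (illustrative only).
interface MockNode { kind: string; parent?: MockNode; children: MockNode[]; }

const root: MockNode = { kind: "SourceFile", children: [] };
const child: MockNode = { kind: "VariableStatement", parent: root, children: [] };
root.children.push(child);

// JSON.stringify(root) would throw "Converting circular structure to JSON",
// so drop `parent` during serialization and rebuild it after parsing.
const text = JSON.stringify(root, (key, value) => (key === "parent" ? undefined : value));

const restored: MockNode = JSON.parse(text);
const fixParents = (node: MockNode): void =>
  node.children.forEach(c => { c.parent = node; fixParents(c); });
fixParents(restored);

console.log(restored.children[0].parent === restored); // true
```

A real API would also have to deal with positions, node flags, and interned identifiers, which is presumably why a dedicated serializer is being requested rather than plain JSON.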
For example, we can override the `getSourceFile` method in `CompilerHost` and provide files from the cache when a file has not changed since the last compilation. This could be done via provided methods like `ts.serializeNode(node: ts.Node): string | ByteArray` / `ts.deserializeNode(nodeData: string | ByteArray): ts.Node`.

It could also be added to the `tsc` command as an argument like `--use-cache` (or something similar) to enable caching by default (though I believe in that case we would also need a new option like `cacheDir` or `--force`).

A similar cache can be found in bazel rules_typescript, but it is an in-memory cache.
In our internal tool we have a very similar in-memory cache too (it speeds up repeated compilations of the same files).
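A caching `getSourceFile` along the lines described above can be sketched today. Since the proposed `ts.serializeNode`/`ts.deserializeNode` do not exist yet, this sketch (assuming the `typescript` package is installed) caches the live `ts.SourceFile` objects in memory, keyed by file name and text:

```typescript
import * as ts from "typescript";

// In-memory stand-in for the proposed serialize/deserialize round trip:
// reuse the parsed SourceFile whenever name and text are unchanged.
const cache = new Map<string, ts.SourceFile>();

function getSourceFileCached(fileName: string, text: string): ts.SourceFile {
  const key = fileName + "\0" + text;
  let sf = cache.get(key);
  if (!sf) {
    sf = ts.createSourceFile(fileName, text, ts.ScriptTarget.ES2015, /*setParentNodes*/ true);
    cache.set(key, sf);
  }
  return sf;
}

const first = getSourceFileCached("a.ts", "const x = 1;");
const second = getSourceFileCached("a.ts", "const x = 1;"); // cache hit: same object
console.log(first === second); // true
console.log(first.statements.length); // 1
```

A real persistent cache would replace the `Map` with the requested on-disk (de)serialization, so the parse survives across `tsc` processes.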
Checklist
My suggestion meets these guidelines: