Slow tests with 9.0.0-next.6 #757
There are a few things which are different from 8.x.x
I don't exactly understand the primary request.
Thank you for the quick reply, @ahnpnl. I'll try to explain my primary request, but it's relatively open-ended and may be infeasible - I don't have much understanding of what the preset does. I'm looking for any way to get the simpler + faster code path. I looked at this PR, which says:
So, for example, I'm wondering if I can try switching out the transformers, eg use inline-files and strip-styles, or is that likely to be a wasted effort? Or do you have any other ideas for achieving similar performance? 10x slower is a LOT slower, so I figure it's worth finding the root cause and optimizing it. Thank you again for all your work on this!
The different transformers are almost the same implementation; we just switched to Angular's implementation, since that means less code for us to maintain.
Like I explained above, here is a short explanation according to what I've experienced and understood:
Most of the hard work happens there. Currently I only have one solution to improve it.
Besides, there is something odd about that usage: I've seen it before, but I have no clue why it behaves differently.
2 more updates on performance data: on Linux, the slowdown between 8.3.1 and 9.0.0-next.6 is much less pronounced - about a 35% slowdown on Linux vs a 400%-700% slowdown on Windows. So that's good news, unless you develop on Windows.
Maybe a cache file miss on Windows? After a file is read, it is put into memory. It could be that when reading from memory, the path is a Windows path while the cached path is different. This can be checked by producing the log file via the env variable.
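To illustrate the failure mode being suspected here, a minimal sketch (not the actual ts-jest code; `normalizeCacheKey`, `getCached`, and the cache shape are assumptions for illustration): if cache keys are built from raw paths, `C:\repo\src\a.ts` and `C:/repo/src/a.ts` become two different entries and every lookup misses, so normalizing separators and drive-letter case before keying avoids that.

```javascript
// Hypothetical sketch: normalize path separators and case before using a
// file path as a cache key, so Windows-style and POSIX-style spellings of
// the same file hit the same in-memory entry.
const normalizeCacheKey = (filePath) => filePath.replace(/\\/g, '/').toLowerCase();

const memoryCache = new Map();
let computeCount = 0;
function getCached(filePath, compute) {
  const key = normalizeCacheKey(filePath);
  if (!memoryCache.has(key)) {
    computeCount++; // pay the transform cost only once per file
    memoryCache.set(key, compute(filePath));
  }
  return memoryCache.get(key);
}
```

With this in place, looking up the same file under both path spellings computes the transform only once.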
Hmm - I'll dig into file caching on Windows and see what I can find. I've looked at the ts-jest log recently and didn't see anything. I'm seeing another issue on Windows (haven't tested on Linux) that might be related: when running in --watch mode, file changes are detected (eg the test file rebuilds), but only the original code is run - the updated code is not running. I have to clear the cache and restart to pick up changes. If I run jest not in watch mode, but with file caching enabled, the changes are picked up - so this doesn't appear to be a file cache issue, more an interaction between watch mode and the file cache. Not sure if that's related to any perf issues or not, b/c the perf problem is there either way.
I used procmon to collect logs for all the system-level file API calls while jest runs with the cache empty. It appears that the files being written to jest-transform-cache are each overwritten about 11 times with the same transform output. So there may be a race condition or something causing that, and if so, that could explain the perf degradation, or at least be a place for optimization. I haven't looked at the code yet, but I'd be happy to if you could point me to where to look. Here's an example from just after the setupFiles have been run:
Each process writes about 50k to the cache entry, and then the files are closed. Each later process writes the same number of bytes to each file (I can't say for sure that the contents are the same, but it is the same number of bytes for the map file and the cache entry), and then renames them to the same target files.
Ah ha, nice information!! In the case of a monorepo, I think it is possibly related to jestjs/jest#10835. Each transformed file has a cache key, which is provided by https://github.com/kulshekhar/ts-jest/blob/master/src/ts-jest-transformer.ts#L257. Each cache key is associated with a transformer's cache.
It might also explain why I didn't see much difference with my work project, since I use a single project only.
The project that I've been measuring is a monorepo (multiple libs and apps), but we (currently) run it with a single jest config in the root for the whole repo - that's been fine b/c the perf was good. So it's not the same situation as https://github.com/lexanth/jest-projects-repro, which is the repro repo for jestjs/jest#10835. What I'm seeing is that the initial and final cache path and key are the same (all 11 processes create or re-create the same file); the only difference is the suffix used by each process for writing the file before it renames to the same target file. It seems like the 2nd and later processes are not properly detecting that the file exists, even though procmon says the first read for the map file succeeded. I'd assumed that the number of jest worker processes was based on available cores, if not specified. I'll step through the transformer cache key logic and see what I can figure out. Thank you for the pointer!
There is one thing to make clear: tests run in JIT mode, which doesn't actually use the Ivy compiler directly but uses normal TypeScript compilation. The whole implementation lies in https://github.com/angular/angular-cli/blob/master/packages/ngtools/webpack/src/angular_compiler_plugin.ts#L382. The perennial issue with Jest is: a transformer is a pure function which is part of the Jest ecosystem. It is different from Angular CLI, where they invoke compilation and webpack before triggering Karma + Jasmine. With a Jest transformer, we transform file by file, as each file is passed in. Since the transformer in each worker (associated with each CPU thread) doesn't know about the others, we end up with an x amount of duplicated TypeScript state. I want to look into the approach of compiling before running tests, like Angular CLI does. That approach looks promising.
I have a hypothesis on a way to fix this. I spent a bunch of time yesterday stepping through the code and adding logpoints to see what is going on. Now I understand what jest-preset-angular does :) It appears that, in the "cache is empty" case, the list of root files keeps growing. This isn't an issue for the cached case. I will debug further.
So we took the approach of simulating incremental compilation. Test files are excluded from the startup Program, and other files which are not known at startup are automatically added to the Program later, as are test files. This reduces I/O on disk, which helps non-SSD users. In general, the approach when working with Jest is file-by-file processing.
What you're saying makes sense - thank you for the explanation. I like the idea of compiling ahead of time. The only issue I see with that is avoiding unnecessary compilation. How about the idea of limiting the scope of a Program's rootNames to just the test file and the setup files? And when the worker moves on to the next test, it passes the old Program to the new one, presumably reusing some of the previous compilation. Right now I'm under the impression that the Program is getting recreated many times and just getting larger and larger, and the old rootNames are no longer needed. The idea of limiting the Program's scope is probably easier said than done, because I don't see a straightforward way to do it. I'll keep testing to see if I can find any changes that work and make a big perf difference.
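The idea in this comment can be sketched with plain objects standing in for `ts.Program` (a hypothetical model, not the TypeScript API; `createProgram` and the `sources` map are illustrative): scope each Program's rootNames to the current test file plus the fixed non-test roots, and hand the previous program to the next one so already-parsed files are reused instead of reparsed.

```javascript
// Model of Program reuse: rootNames stay narrow, while previously parsed
// sources carry over from the old program, so only new roots are "parsed".
function createProgram(rootNames, oldProgram) {
  const sources = new Map(oldProgram ? oldProgram.sources : []);
  for (const name of rootNames) {
    if (!sources.has(name)) sources.set(name, `parsed:${name}`); // parse only new roots
  }
  return { rootNames, sources };
}

let program = createProgram(['setup-jest.ts', 'app.component.spec.ts']);
// next test file: narrow rootNames again, reuse prior parses via oldProgram
program = createProgram(['setup-jest.ts', 'other.component.spec.ts'], program);
```

This mirrors the real compiler API's pattern of passing an old program into program creation so unchanged source files are not re-processed, while keeping each program's root set small.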
Based on my now slightly improved understanding, the explanation for the 11 parallel + redundant transforms of tslib/es6 is:
I'm still puzzled why the file cache isn't working - perhaps it's just a race condition. Would be nice to improve, but I don't think it's the critical issue. |
Does the cache still not work when limiting the number of workers to 1 (e.g. using a single worker)?
The problem with this approach is that each test file depends on certain other files, aka its dependencies. Currently, we already pass the old program into the new program for each file that Jest passes in. An issue is that, in parallel mode, the transformers don't talk to each other and therefore don't share the Program. An idea is:
In the implementation of
Check the discussion about improving the default resolver here: jestjs/jest#11034
I tried a few different approaches to improve perf, and these changes worked well for me. The change, in a nutshell, is to limit the root files to just the file being transformed plus the non-test roots, instead of accumulating all files that the worker has transformed since testing started. I tested this going back and forth with the current approach, and it makes a significant difference (in my repo, 2x faster for the first run). I also avoided creating the Program until a file is transformed (just b/c it will always need to be re-created). Also, there's an important change needed for my commit yesterday to fix the issue described above. Here are the timings I'm seeing: jest-preset-angular@9.0.0-custom with isolatedModules: false, 'jest-preset-angular/presets/defaults-esm', 1st run test/sec: 3.6. This puts it 2x as fast (for me) on Windows 1st run with Ivy support, though 8.3 (non-Ivy) is still twice as fast as this. And it's as fast as any when the file cache is in place. I think this could still be a lot better with your pre-compile idea (there's still a lot of waste, I'm sure), but for a relatively "easy" tweak on top of all the work you've done, this seems like a good perf improvement. Let me know if you'd like me to create a PR, or if you just want to take a look. One issue I have is that 2 of the tests are failing, and I'm not sure of the right way to fix them. The failures are:
Neither test reports readable errors in the console, but these 2 errors can be found internally:
Left a few comments on the commit. The approach is almost the same as what I had thought of; the only problem is that type changes won't trigger re-type-checking because the cache key stays the same - see the PR kulshekhar/ts-jest#2167 and its related issues. Regarding the errors, I suspect the approach might cause side effects or unknown errors. I will try to test your tar file against my work project.
Thank you @ahnpnl. RE the test errors, it is trying to resolve
@ahnpnl - you're correct that when a dependency (eg a component) is updated, tests using the dependency are re-run, but they still use the old dependency's transform output - ie what kulshekhar/ts-jest#2167 fixed (I think). Hmm.
FYI, I've made a version which uses
I tested it myself and didn't see any differences compared to the previous usage. Another note: this version requires any types in a constructor to be imported without using type-only imports. The whole experiment is about making 9.0.0 work exactly the same as 8.3.2, except using different AST transformers.
Using:
Perf when linking to libraries (still no ESM): so, on Windows, this is as fast as 8.3 on first run, and faster than 8.3 when cached. And it's using Ivy (obviously), b/c I have tests that fail on ViewEngine. If this is a viable approach, it definitely addresses this issue. It's interesting that I didn't see any perf improvement from linking to pre-built libraries; presumably this is because transformation happens either way.
I didn't debug these errors deeper. I've found that "linking error" means there is an unreported error, which I could probably find by running jest in the debugger.
Interesting that it resolved the perf issue on your side; that could confirm my theories about something being different in the TypeScript usage.
The link error is a typical error with ESM; I'm not sure why it happens, but I never see it on CI, only locally. I don't expect any perf gain from linking, because of how the ESM implementation works.
Indeed, I observed on CI that Windows tests are consistently faster with the changes, by about 25%.
THANK YOU very much @ahnpnl. For us, this fix keeps jest valuable for Angular development. On Windows, I'm seeing a 900% speedup when uncached (10m -> 1m) going from 9.0.0-next.6 to 9.0.0-next.8. When cached, there is a minuscule slowdown of 1-2s (< 1%). On Linux, the speedup is < 1% (2s) when uncached. Hopefully tests will become even faster once we can use ESM modules (by avoiding the conversion to CJS).
💥 Performance Regression Report
I've spent a lot more time than I'd like to admit upgrading to 9.0.0-next* and getting things working, and trying all different configurations to determine if it's ready to use. This past week was the 3rd time I've attempted to upgrade, but my conclusion is that the performance hit from upgrading to 9.0.0-next is still too big. I'm detailing a bunch of things that I tried here in hopes of both helping the performance effort for this project, and helping others with perf troubleshooting.
In a nutshell, I've found the release version (jest-preset-angular@8.3.1) to yield acceptably fast tests - "I'd like faster, but Jest is the best option for most tests". And I've found 9.0.0-next.6, and earlier 9.0.0-next versions, with `isolatedModules: false`, unacceptably slow - 7x slower than @8.3.1 for the first run.

Note that 9.0.0-next.6 with `isolatedModules: true` is FAST - it's what I'd hoped for using Ivy in Jest. Unfortunately we don't want the limitations of TypeScript compiling with isolatedModules, as there's no other benefit to using isolatedModules that we care about. If isolatedModules resulted in faster Angular build times, or some other similar benefit, it would probably be worth giving up `const enum`.

Primary request: Is there any way we can use the same jest compilation path that is being used for isolatedModules: true, but without isolatedModules: true in tsc?

Secondary request: Explanation as to why `isolatedModules: true` is soooo much faster than `isolatedModules: false` in 9.0.0-next.

Last working version

Worked up to version: 8.3.1
Stopped working in version: 9.0.0-next.3, 9.0.0-next.6
Motivation
The main reason I've been putting so much effort into upgrading to 9.0.0-next.6 is Ivy support: there are a number of benefits (type safety + build perf), but the main reason is that we have code that depends on Ivy behavior that doesn't exist in ViewEngine (right now we have to run those tests in Karma). Also, it makes sense to test with the same code used in production.
Since it's available, I also decided to try Jest + node ESM support. Since we use ESM (primarily) in prod, and many libraries ship separate code for ESM, let's use the ESM code in tests. And, I was hopeful it would help perf - partly b/c I figured we could use pre-built library code within Jest, reducing Jest's compilation workload.
Upgrade steps
To clarify the changes between these different test runs:
jest.config.js changes :
setup-jest.ts changes :
Replace `import 'jest-preset-angular';` with `import 'jest-preset-angular/setup-jest';`
Upgrading to ESM
In jest.config.js, use 'jest-preset-angular/presets/defaults-esm'.
Run tests using `node --experimental-vm-modules ./node_modules/jest/bin/jest.js`.

Performance Tactics
A few of the things I tried to improve Jest performance:
Note that I could load internal ESM files during Jest compilation by using this in my Jest.config (and equivalent in tsconfig.spec.json):
```js
moduleNameMapper: {
  '^@this/([a-z\-]+)/([a-z\-]+)$': '/dist/$1/fesm2015/this-$1-$2.js',
},
```
But all the external dependencies (angular, zone, rxjs) are still loaded as UMD - as noted in #751, this requires a resolver that is ESM aware.
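An ESM-aware resolver along these lines might look like the following minimal sketch (hypothetical; it assumes Jest's `resolver` config option and the `packageFilter` hook accepted by Jest's default resolver - the file name `esm-resolver.js` and helper `preferEsmEntry` are made up for illustration):

```javascript
// esm-resolver.js - hypothetical sketch of an ESM-aware resolver, wired up
// via the `resolver` option in jest.config.js (export it with module.exports
// there). packageFilter rewrites each package.json before resolution, so
// packages that ship a "module" entry resolve to ESM instead of UMD "main".
const preferEsmEntry = (pkg) => (pkg.module ? { ...pkg, main: pkg.module } : pkg);

const esmAwareResolver = (request, options) =>
  options.defaultResolver(request, { ...options, packageFilter: preferEsmEntry });
```

This only helps packages that actually publish a `module` entry point; anything UMD-only still resolves as before.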
In other words, changing tsconfig.spec.json from:
to
and jest.config.js from :
to
This didn't work for me because we use some language features that break in ES5 (base class properties), and this linking path combined with jest's default resolver pulls in UMD/ES5 library code. You can work around that for internal libraries using:
But it will still use UMD/ES5 for external libraries.
Timing data
This is non-scientific, run on an 8 core laptop with SSD.
jest-preset-angular@8.3.1 with jest 26, isolatedModules: false (default) :
for 392 tests, 78 suites
First run (no cache): 62s
2nd run (uses cache): 27s
jest-preset-angular@8.3.1 with jest 26, isolatedModules: true :
for 381 tests, 71 suites (8 suites don't compile)
First run (no cache): 42s
2nd run (uses cache): 32s
jest-preset-angular@9.0.0-next.6 with jest@27.0.0-next.2, isolatedModules: false, 'jest-preset-angular/presets/defaults', +ngcc-jest-processor
for 392 tests, 78 suites
First run (no cache): 446s
NOTE: The first test doesn't complete for 115s
2nd run (uses cache): 33s
jest-preset-angular@9.0.0-next.6 with jest@27.0.0-next.2, isolatedModules: true, 'jest-preset-angular/presets/defaults'
for 381 tests, 71 suites (8 suites don't compile)
First run (no cache): 43s
2nd run (uses cache): 32s
jest-preset-angular@9.0.0-next.6 with jest@27.0.0-next.2, isolatedModules: false, 'jest-preset-angular/presets/defaults-esm'
for 392 tests, 78 suites
First run (no cache): 435s
NOTE: The first test doesn't complete for 117s
2nd run (uses cache): 41s
jest-preset-angular@9.0.0-next.6 with jest@27.0.0-next.2, isolatedModules: true, 'jest-preset-angular/presets/defaults-esm'
for 282 tests, 52 suites (27 suites don't compile)
First run (no cache): 41s
2nd run (uses cache): 32s
jest-preset-angular@9.0.0-next.6 with jest@27.0.0-next.2, isolatedModules: false, 'jest-preset-angular/presets/defaults-esm', internal library ESM references
for 392 tests, 78 suites
First run (no cache): 258s
NOTE: The first test doesn't complete for 40s
2nd run (uses cache): 40s
jest-preset-angular@9.0.0-next.6 with jest@27.0.0-next.2, isolatedModules: true, 'jest-preset-angular/presets/defaults-esm', internal library ESM references
for 287 tests, 55 suites (24 suites don't compile)
First run (no cache): 38s
2nd run (uses cache): 27s
Tests per second for different configurations
Link to repo
I can't expose all our tests, but the setup is very similar to this repo + branch:
https://github.com/johncrim/repro-ivy-jest-preset-angular/tree/bug/inline-directive
envinfo