feat(jest-core): Add performance markers around significant lifecycle events #13859
Conversation
Yay, super exciting! I've wanted to do this sort of thing for years, but never got around to it. I'm actually working on instrumentation at work at the moment (albeit using OpenTelemetry), so it's been on my mind lately to get into it for Jest as well.
const {contexts, hasteMapInstances} = await buildContextsAndHasteMaps(
  configs,
  globalConfig,
  outputStream,
);
performance.mark('jest/buildContextsAndHasteMaps:end');
Would using something like https://nodejs.org/api/perf_hooks.html#performancetimerifyfn-options make sense? Both here and in other places with a `start` & `end` pair.
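For concreteness, a rough sketch of what the `timerify` approach could look like (the wrapped function and observer setup here are illustrative, not taken from this PR):

```ts
import {performance, PerformanceObserver} from 'node:perf_hooks';

// Hypothetical stand-in for the real function; not taken from this PR.
async function buildContextsAndHasteMaps(): Promise<void> {
  await new Promise(resolve => setTimeout(resolve, 10));
}

// Timerified timings are only delivered to observers of the 'function' entry type.
const obs = new PerformanceObserver(list => {
  for (const entry of list.getEntries()) {
    // entry.name is the literal function name, with no way to attach extra detail.
    console.log(entry.name, entry.duration);
  }
});
obs.observe({entryTypes: ['function']});

const timed = performance.timerify(buildContextsAndHasteMaps);
void timed();
```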
I did consider `timerify`, but it's limited - there's no way to pass `detail`, unlike with marks (I used that here for `numTests`), and the only label on the entry is the literal name of the function, which isn't necessarily descriptive out of context. It's not interchangeable with `mark` either, because it requires a `PerformanceObserver` and creates different entry types.
One thing I like about being able to specify the labels is being able to namespace them to Jest, which is important if you've got other instrumented code in the same process (if, e.g., Babel introduced instrumentation, we'd want to be able to distinguish it from Jest's, and at Meta we run Jest programmatically in a wrapper that has its own instrumentation).
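For illustration, passing structured `detail` alongside a namespaced mark might look like this (the marker name and payload are illustrative, not necessarily the ones used in this PR):

```ts
import {performance} from 'node:perf_hooks';

const numTests = 42; // illustrative value

// The 'jest/' prefix namespaces the entry so it can be told apart from other
// instrumented code in the same process, and `detail` carries structured data.
performance.mark('jest/scheduleAndRun:start', {detail: {numTests}});
```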
Another alternative I looked at is `measure`, but it pretty much just changes the syntax of the end marker:
const fooStartMarker = performance.mark('foo:start');
// ...foo
performance.measure('foo', fooStartMarker); // OR performance.measure('foo', 'foo:start');
vs
performance.mark('foo:start');
// ...foo
performance.mark('foo:end');
...I'm not sure the former is any cleaner. Also, a consumer can trivially create measures from marks if that's what they want, whereas the reverse is harder.
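For example, deriving a measure from a start/end mark pair can be done after the fact (the marker names below follow this PR's naming convention but are shown purely as an illustration):

```ts
import {performance} from 'node:perf_hooks';

// Given marks recorded elsewhere, e.g.:
performance.mark('jest/buildContextsAndHasteMaps:start');
// ...work happens...
performance.mark('jest/buildContextsAndHasteMaps:end');

// ...a consumer can derive a measure from the existing marks.
const measure = performance.measure(
  'jest/buildContextsAndHasteMaps',
  'jest/buildContextsAndHasteMaps:start',
  'jest/buildContextsAndHasteMaps:end',
);
console.log(measure.duration);
```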
So TL;DR: IMO `mark` optimises for flexibility and simplicity. Admittedly it isn't super clean, but I wouldn't anticipate littering the whole codebase, or even adding much more than what's in this PR - if we did end up adding lots more markers, we could reduce some duplication and strengthen typing with an abstraction (`require('jest-instrumentation').startMark('foo')` or similar).
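As a rough sketch, such a hypothetical `jest-instrumentation` helper might look like this (nothing like it exists in this PR; the names and types are illustrative):

```ts
import {performance} from 'node:perf_hooks';

// Constraining the names gives stronger typing than free-form strings.
type JestMarkerName = 'buildContextsAndHasteMaps' | 'scheduleAndRun';

export function startMark(
  name: JestMarkerName,
  detail?: Record<string, unknown>,
): void {
  performance.mark(`jest/${name}:start`, {detail});
}

export function endMark(
  name: JestMarkerName,
  detail?: Record<string, unknown>,
): void {
  performance.mark(`jest/${name}:end`, {detail});
}
```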
Cool, thanks for the thorough response 👍
This pull request has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Summary
I'd like to upstream some of the work we've been doing over in Metro to improve startup times (particularly with an overhaul of `jest-haste-map` that started by forking it), and possibly carve out some time to optimise other areas of Jest startup. A first step (especially to get some buy-in at Meta!) is instrumentation - this PR is a stab at adding some key markers via the standard global `performance` API.
I'm not wedded to this implementation - I'm open to bikeshedding about labels, abstraction, or a pre-existing Jest instrumentation pattern I've missed, but I'm particularly looking for some way to time startup independently of running tests.
Test plan
Running Jest programmatically in a thin wrapper allows performance data to be gathered:
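A minimal sketch of such a wrapper, assuming the `jest/`-prefixed mark names from this PR (the argv handling is simplified for illustration):

```ts
import {PerformanceObserver} from 'node:perf_hooks';
import {runCLI} from 'jest';

async function main() {
  // Subscribe before Jest runs so every mark is captured.
  const obs = new PerformanceObserver(list => {
    for (const entry of list.getEntries()) {
      if (entry.name.startsWith('jest/')) {
        console.log(`${entry.name} @ ${entry.startTime.toFixed(1)}ms`);
      }
    }
  });
  obs.observe({entryTypes: ['mark']});

  // Simplified argv; a real wrapper would build this from process.argv.
  await runCLI({_: [], $0: 'jest'} as any, [process.cwd()]);
}

void main();
```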