chore: Add performance benchmark to diff-sequences package #7603
Conversation
This is awesome, great job!
Would be cool to include some tests that favor longer lines instead of more lines and see how that compares. Looks good :)
@pedrottimark thoughts on @thymikee's comment?
Yes, it is a great point to add more realistic tests. @thymikee review comments always stretch my thinking: it hit me just now that an additional benchmark is to compare … For y'all's info, a long-term goal to solve edge cases in the report when an assertion fails is for …
Don't bother with it now, it's just a nice-to-have for somewhere later. If you have any personal to-do list, it's worth putting it there, or create an issue so we don't forget about a follow-up :)
chore: Add performance benchmark to diff-sequences package (#7603)

* chore: Add performance benchmark to diff-sequences
* Change require to refer to build directory
* Update CHANGELOG.md
* Add Facebook copyright header to index.js file
* Replace process.stdout.write with console.log
* Add link to 7627 as a way to restart CI :)
This pull request has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Summary
Compare improved `diff-sequences` package to baseline `diff` package.

- `perf/index.js` file
- `perf/example.md` file which contains copied result of a run
- `"scripts"` in `package.json` file
- `benchmark` and `diff` as `devDependencies`
Because allocating and freeing of temporary objects is the root of the performance problem, the tests call `global.gc()` before every test cycle, so make sure to run node with the `--expose-gc` option!

Above 2000 items, the `benchmark` package can't keep the relative mean error below its target of 1%. For example, notice in the next-to-last row of `example.md` that an outlier low `0.0083` ratio corresponds to an outlier high `3.60%` baseline rme: an inaccurately high denominator causes a low ratio.

Your critique is always welcome, especially because a perf benchmark is new for me :)
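For anyone unfamiliar with the `benchmark` package, here is a minimal sketch of the setup described above. It is not the PR's actual `perf/index.js`; the input arrays, test names, and suite layout are invented for illustration only.

```js
/* Run with: node --expose-gc sketch.js */
const Benchmark = require('benchmark');
const {diffLines} = require('diff'); // baseline package
const diffSequences = require('diff-sequences').default; // improved package (assumes a default export)

// Placeholder inputs; the real perf/index.js builds its own test cases.
const a = Array.from({length: 1000}, (_, i) => `line ${i}`);
const b = a.map((line, i) => (i % 10 === 0 ? `changed ${i}` : line));

const options = {
  // Free temporary objects between test cycles so allocation from one cycle
  // does not distort the timing of the next; requires `--expose-gc`.
  onCycle() {
    global.gc();
  },
};

new Benchmark.Suite()
  .add('baseline diff', () => diffLines(a.join('\n'), b.join('\n')), options)
  .add(
    'improved diff-sequences',
    () =>
      diffSequences(
        a.length,
        b.length,
        (aIndex, bIndex) => a[aIndex] === b[bIndex],
        () => {}, // foundSubsequence callback; results are ignored here
      ),
    options,
  )
  .on('cycle', event => console.log(String(event.target)))
  .run();
```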
P.S. I added `/* eslint import/no-extraneous-dependencies: "off" */` because the rule demanded that `benchmark` and `diff` become `dependencies` instead of `devDependencies`.
Test plan
To achieve 1% relative mean error, run the benchmark. I ran it with node 10.15.0, not via `yarn`, in the subdirectory:

`node --expose-gc perf/index.js`

See that most values of ratio are between `0.05` and `0.15`.
The most relevant test for catastrophic performance problems is `insert 80%`, that is, after every 2 expected items, insert 8 unexpected items.
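As an illustration only (this helper is hypothetical, not the code in `perf/index.js`), inputs for such a case could be built like this:

```js
// Hypothetical helper: builds the "insert 80%" inputs described above.
// For every 2 items kept from `expected`, 8 unexpected items are inserted
// into `received`, so 80% of the received items are insertions.
const makeInsert80 = n => {
  const expected = Array.from({length: n}, (_, i) => `common ${i}`);
  const received = [];
  expected.forEach((item, i) => {
    received.push(item);
    if (i % 2 === 1) {
      for (let j = 0; j < 8; j += 1) {
        received.push(`inserted ${i}-${j}`);
      }
    }
  });
  return {expected, received};
};

// Example: 2000 expected items produce 2000 + 8000 = 10000 received items.
const {expected, received} = makeInsert80(2000);
console.log(expected.length, received.length); // 2000 10000
```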