I'm getting to the point where changes in e.g. `abstract-level` result in benchmark differences that are too small to say anything meaningful about them. Which is a good sign, but it means that the benchmarks must become more precise to still be useful. Rough plan:

1. Measure timer resolution, i.e. the average smallest measurable time
2. Determine the minimum duration of a benchmark as `Math.max(resolutionInSeconds() / 2 / 0.01, 0.05) * 1e3` (half the resolution is the worst-case timer error; dividing by 0.01 keeps that error below 1% of the measured duration, with a floor of 50 ms, converted to milliseconds). See the sketches below this list.
3. Let `iterations` be 1
4. Optionally run the benchmark for warmup: `fn(); fn(); eval('%OptimizeFunctionOnNextCall(fn)'); fn()`
5. Run the benchmark, which should call a function `iterations` times
6. Optionally subtract time spent on GC
7. If needed, increase `iterations` to satisfy the minimum duration, and repeat
8. If the minimum duration is met, record the duration in a histogram, repeating until `histogram.minimumSize()` is satisfied
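To make steps 1 and 2 concrete, here's a minimal sketch assuming Node.js and `process.hrtime.bigint()`; the function names and the sample count are placeholders of mine, not an existing API:

```js
// Sketch: estimate timer resolution as the average smallest measurable
// time, by spinning on back-to-back hrtime readings until the clock ticks.
function resolutionInSeconds (samples = 100) {
  let total = 0n
  for (let i = 0; i < samples; i++) {
    const start = process.hrtime.bigint()
    let now = start
    while (now === start) now = process.hrtime.bigint() // wait for a tick
    total += now - start
  }
  // hrtime is in nanoseconds; return the mean tick in seconds
  return Number(total / BigInt(samples)) / 1e9
}

// Minimum duration in milliseconds: half the resolution (worst-case
// quantization error) over a 1% error budget, with a floor of 50 ms.
function minimumDurationInMilliseconds () {
  return Math.max(resolutionInSeconds() / 2 / 0.01, 0.05) * 1e3
}
```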
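And a sketch of the measuring loop (steps 3-8), assuming the `minimumDurationInMilliseconds()` helper above and Node's `perf_hooks.createHistogram()`; note that `%OptimizeFunctionOnNextCall` only parses when node is started with `--allow-natives-syntax`:

```js
const { createHistogram } = require('perf_hooks')

// Sketch of the measuring loop. Assumes fn is synchronous; GC subtraction
// (step 6) is omitted here.
function benchmark (fn, { warmup = true } = {}) {
  const minimumNs = minimumDurationInMilliseconds() * 1e6
  const histogram = createHistogram()
  let iterations = 1

  if (warmup) {
    // Only works when node is started with --allow-natives-syntax
    fn(); fn(); eval('%OptimizeFunctionOnNextCall(fn)'); fn()
  }

  while (true) {
    const start = process.hrtime.bigint()
    for (let i = 0; i < iterations; i++) fn()
    const elapsed = Number(process.hrtime.bigint() - start)

    if (elapsed < minimumNs) {
      iterations *= 2 // too short to measure reliably; scale up and retry
    } else {
      // Record the mean time per call in nanoseconds. A real runner would
      // keep sampling until the histogram has enough entries (the
      // histogram.minimumSize() in the plan above).
      histogram.record(Math.max(1, Math.round(elapsed / iterations)))
      return histogram
    }
  }
}
```

Doubling `iterations` keeps calibration cheap (logarithmic in the target duration), and recording the per-call mean keeps histogram values comparable across iteration counts.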