General benchmarks suite #5508
Comments
Would it be possible/a good idea to benchmark every spec? With an enhancement of the spec runner, it would allow not writing a speed spec for everything, and still having a lot of data on performance. And if we need to spec the performance of very specific things, or common code, we can add a speed-spec file for this or that class.
I like the idea of an optional param for `it`.
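For what it's worth, spec tags (available in recent Crystal versions) come close to that idea today; a minimal sketch, assuming the `benchmark` tag name is something we pick ourselves:

```crystal
require "spec"

# Tag the specs that are also interesting as benchmarks, then select only
# those with `crystal spec --tag benchmark`.
it "String#includes?", tags: "benchmark" do
  "foo".includes?("o").should be_true
end
```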
In my mind it should be a separate suite. Specs are usually small, so they're not good for benchmarks: `"foo".includes?("o")`. I'd actually like to benchmark that with both short strings and huge strings, but not have many redundant specs being benchmarked. So I personally don't know about the idea of mixing specs and benchmarks (I know Go can do this, but it's a separate feature in my mind).
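For concreteness, a minimal sketch of such a dedicated benchmark using the existing `Benchmark.ips` (the input sizes are made up):

```crystal
require "benchmark"

# The same method exercised with inputs of very different sizes, which a
# plain correctness spec would never do.
short = "foo"
huge  = "a" * 1_000_000 + "o"

Benchmark.ips do |x|
  x.report("includes? (short)") { short.includes?("o") }
  x.report("includes? (huge)")  { huge.includes?("o") }
end
```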
Just a note: Comparing …
In my experience, this type of performance test can only run post-submit, and you need to watch trends (possibly with automatic alerts). Otherwise, false alarms due to noisy tests are inevitable. And this is not even hard to achieve; you just need dedicated hardware.
This is on Hacker News today: https://ziglang.org/perf/. They have a benchmark suite (https://github.com/ziglang/gotta-go-fast) which runs against every commit on master, and the performance is tracked over time. As @oprypin mentioned, the key to this is having dedicated hardware where you can eliminate environment influences.
Crystal needs a general benchmarks suite for its standard library.
It should cover most of the core types, like String, Array, Hash, and Enumerable, and their methods.
This will be very useful to see how a change affects performance, for example when deciding whether to accept a PR or not.
Ideally, it should also show memory allocation (so `Benchmark.ips` should be improved). I don't know how to implement this though (I don't know how this can be computed with the current GC).
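A rough sketch of how per-block allocation might be approximated with the current GC, assuming `GC.stats.total_bytes` keeps a cumulative allocation count (this is not something `Benchmark.ips` reports today):

```crystal
# Rough estimate only: GC.stats.total_bytes counts all allocations in the
# process, so anything else allocating at the same time skews the number.
def allocated_bytes
  before = GC.stats.total_bytes
  yield
  GC.stats.total_bytes - before
end

bytes = allocated_bytes { Array.new(1000) { |i| i.to_s } }
puts "~#{bytes} bytes allocated"
```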
Also, it should be possible to choose what things to benchmark. I guess we could have one file per class, and then another file that combines all of them via `require`. Then you could just run one or some of them by directly compiling them (with `--release`).
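A possible layout (file names are hypothetical): one benchmark file per class, plus an umbrella file that requires them all:

```crystal
# benchmarks/all.cr -- umbrella file pulling in the per-class benchmarks
require "./string_bench"
require "./array_bench"
require "./hash_bench"

# Run a single suite:  crystal build --release benchmarks/string_bench.cr && ./string_bench
# Run everything:      crystal build --release benchmarks/all.cr && ./all
```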
I believe Go has something like that. You can see that in some PRs, information about how times change is shown. Maybe for this utility we'd need to execute the benchmark using `crystal`, then benchmark it again with `bin/crystal` (so changes in the standard library are picked up), parse the output of both programs, and be able to compare the outputs.
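A rough sketch of that comparison (paths and labels are made up, and parsing/diffing the `Benchmark.ips` output is left out):

```crystal
# Build and run the same benchmark file with the released compiler and with
# the locally built one, then print both outputs for manual comparison.
BENCH = "benchmarks/string_bench.cr" # hypothetical path

{"crystal" => "released", "bin/crystal" => "local"}.each do |compiler, label|
  system("#{compiler} build --release -o bench_#{label} #{BENCH}") || abort "build with #{compiler} failed"
  puts "== #{label} (#{compiler}) =="
  puts `./bench_#{label}`
end
```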