
General benchmarks suite #5508

Open
asterite opened this issue Jan 2, 2018 · 6 comments

@asterite
Member

asterite commented Jan 2, 2018

Crystal needs a general benchmarks suite for its standard library.

It should cover most of the core types, like String, Array, Hash and Enumerable, and their methods.

This will be very useful to see how a change affects performance, for example when deciding whether to accept a PR or not.

Ideally, it should also show memory allocations (so Benchmark.ips should be improved). I don't know how to implement this, though (I don't know how this can be computed with the current GC).
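One possible way to approximate this would be to diff the GC's cumulative allocation counter around the measured block. This is only a sketch, assuming GC.stats.total_bytes reports cumulative bytes allocated by the Boehm GC:

```crystal
# Sketch only: assumes GC.stats.total_bytes is a cumulative (monotonically
# growing) allocation counter, so the difference approximates the bytes
# allocated by the block.
def bytes_allocated
  before = GC.stats.total_bytes
  yield
  GC.stats.total_bytes - before
end

puts bytes_allocated { Array.new(1_000) { |i| i.to_s } }
```

The numbers from this would only be rough, but they could be printed alongside the ips results.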

Also, it should be possible to choose what things to benchmark. I guess we could have one file per class, and then another file that combines all of them via require. Then you could run just one or some of them by compiling them directly (with --release), as sketched below.
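For illustration only (the file and method names below are made up), each per-class file could be a plain Benchmark.ips script, and a top-level file could simply require all of them:

```crystal
# benchmarks/string_bench.cr — hypothetical per-class benchmark file
require "benchmark"

Benchmark.ips do |x|
  x.report("String#split") { "a,b,c,d".split(',') }
  x.report("String#gsub")  { "hello world".gsub("o", "0") }
end

# benchmarks/all_bench.cr would then just be a list of requires:
#   require "./string_bench"
#   require "./array_bench"
#   require "./hash_bench"
```

Either the combined file or a single per-class file could then be compiled directly with --release.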

I believe Go has something like that: you can see in some PRs that information about how timings change is shown. Maybe for this utility we'd need to execute the benchmark using crystal, then benchmark it again with bin/crystal (so changes in the standard library are picked up), then parse the output of both programs and compare them.

@bew
Contributor

bew commented Jan 2, 2018

Would it be possible/a good idea to benchmark every it block of all the specs?

With an enhancement of the spec runner, this would avoid having to write a speed spec for everything, while still producing a lot of performance data.

And if we need to benchmark the performance of very specific things, or common code, we can add a speed-spec file for this or that class, as sketched below.
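A rough sketch of what piggy-backing on the spec runner could look like (Spec.before_each, Spec.after_each and Time.monotonic are existing APIs; the collection and reporting shown here are purely illustrative, since the current hooks don't expose the example's description and a real implementation would need changes to the runner itself):

```crystal
require "spec"

# Hypothetical: record wall-clock time around every `it` block.
example_times = [] of Time::Span
started = Time.monotonic

Spec.before_each { started = Time.monotonic }
Spec.after_each { example_times << (Time.monotonic - started) }

at_exit do
  total = example_times.sum(Time::Span.zero)
  puts "#{example_times.size} examples, #{total.total_milliseconds.round(2)} ms total"
end
```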

@drhuffman12

I like the idea of an optional param for crystal spec to compare performance; maybe something like crystal spec --benchmark path/to/benchmark. But would we want a separate [optional?] benchmark folder, since 'run all specs' should usually be quick and running some benchmarks might intentionally take some time? Or maybe add a crystal bench command that would basically be the same as crystal spec --benchmark, but defaults to using a 'benchmarks' [or 'bench'] folder instead of the 'spec' folder?

@asterite
Member Author

asterite commented Jan 2, 2018

In my mind it should be a separate suite. Specs are usually small, so not good for benchmarks:

"foo".includes?("o")

I'd actually like to benchmark that with both short strings and huge strings, but without having many redundant specs being benchmarked.
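For example (sketch only, the string sizes are arbitrary), that one call could become two entries in a Benchmark.ips report:

```crystal
require "benchmark"

short = "foo"
huge  = "foo bar baz " * 100_000

Benchmark.ips do |x|
  x.report("String#includes? (short)") { short.includes?("o") }
  x.report("String#includes? (huge, miss)") { huge.includes?("qux") }
end
```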

So I personally don't know about the idea of mixing specs and benchmarks (I know Go can do this, but it's a separate feature in my mind).

@straight-shoota
Member

Just a note: Comparing crystal vs. bin/crystal won't do, because there might already be other changes in master affecting the results. You would need to run bin/crystal on both master and the PR branch to get valid results.

@oprypin
Member

oprypin commented Apr 9, 2018

In my experience, this type of performance test can only be post-submit, and you need to watch trends (possibly with automatic alerts). Otherwise false alarms due to noisy tests are inevitable.

And this is not even hard to achieve; you just need dedicated hardware.

@RX14 reopened this May 29, 2018
@straight-shoota
Member

straight-shoota commented Dec 6, 2021

This is on Hacker News today: https://ziglang.org/perf/

They have a benchmark suite (https://github.com/ziglang/gotta-go-fast) which runs against every commit on master, and the performance is tracked over time. As @oprypin mentioned, the key to this is having dedicated hardware where you can eliminate environmental influences.
