
Add basic runtime benchmark infrastructure #1423

Merged
merged 3 commits into master on Oct 4, 2022

Conversation

@Kobzol (Contributor) commented Aug 24, 2022

This PR contains basic collector infrastructure for runtime benchmarks. It implements the ideas described in https://hackmd.io/j19WFamVQlmQMRigF9wrFw.

There are no DB or UI changes yet; I think we should first support the benchmarks only locally, for experimentation. Later I will add support for storing the results in the DB.

Currently the design looks like this:

  • A new crate was added to the collector directory. It's called benchlib and it contains a small library for defining and measuring microbenchmarks (currently using perf counters via perf_event_open, plus wall time; see the measurement sketch at the end of this description). It's used by collector to read benchmark results (currently transferred as JSON) and by the runtime benchmarks to define what should be measured.
  • There is a directory collector/runtime-benchmarks that will contain runtime benchmarks, each using benchlib to define a set of related benchmarks. The benchmarks are not executed right away; instead, benchlib decides what to do once it parses CLI arguments (for example, it could have a command to list the available benchmarks and their core requirements, so that when you actually execute a benchmark you can choose how many cores to pin it to). A sketch of what such a benchmark binary could look like is shown right after this list.
  • A single runtime benchmark was added that measures hashmap insertions.
  • A new command called bench_runtime_local was added to collector. It takes a path to rustc, uses it to compile the runtime benchmarks, and then executes them. Eventually, it might make sense to instead create something like bench_local comptime --scenarios ... and bench_local runtime .... A new BenchNext command shouldn't be needed, since the collector should be able to decide from the next_artifact API call whether it should also run runtime benchmarks.
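
To make the design above a bit more concrete, here is a minimal self-contained sketch (standard library only) of the CLI-driven pattern; the names and output format are illustrative, not benchlib's actual API:

```rust
// A sketch of the pattern described above: a runtime-benchmark binary
// registers named benchmarks and lets the CLI argument decide whether to
// list them or execute them and report results. Illustrative only;
// benchlib's real API and output format differ.
use std::collections::HashMap;
use std::time::Instant;

// The workload of a single registered benchmark: hashmap insertions.
fn hashmap_insert_10k() {
    let mut map = HashMap::new();
    for i in 0u64..10_000 {
        map.insert(i, i);
    }
    // Keep the result alive so the work is not optimized away.
    std::hint::black_box(map);
}

fn main() {
    // Benchmarks are registered by name; nothing runs at this point.
    let benchmarks: Vec<(&str, fn())> = vec![("hashmap-insert-10k", hashmap_insert_10k)];

    // The command decides what happens; `list` could also print core
    // requirements so the caller can choose which cores to pin to.
    match std::env::args().nth(1).as_deref() {
        Some("list") => {
            for (name, _) in &benchmarks {
                println!("{name}");
            }
        }
        _ => {
            for (name, bench) in &benchmarks {
                let start = Instant::now();
                bench();
                let wall_time_ns = start.elapsed().as_nanos();
                // The collector would parse structured (e.g. JSON) output.
                println!("{{\"name\": \"{name}\", \"wall_time_ns\": {wall_time_ns}}}");
            }
        }
    }
}
```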

Best reviewed commit by commit.
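
For the perf_event_open measurements mentioned in the first bullet, the following is a minimal sketch of counting hardware events around a workload using the perf-event crate; this is for illustration only and is not necessarily how benchlib implements its counters (on Linux, reading counters may also require lowering perf_event_paranoid or extra permissions):

```rust
// Illustrative only: count retired instructions around a workload via
// perf_event_open, using the `perf-event` crate. benchlib's actual
// measurement code may differ.
use perf_event::{events::Hardware, Builder};

fn main() -> std::io::Result<()> {
    let mut counter = Builder::new().kind(Hardware::INSTRUCTIONS).build()?;

    counter.enable()?;
    let sum: u64 = (0..1_000_000u64).map(|x| x.wrapping_mul(x)).sum(); // measured workload
    counter.disable()?;

    println!("instructions: {}, result: {}", counter.read()?, sum);
    Ok(())
}
```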

@anp (Member) commented Aug 24, 2022

This is awesome! After this last RustConf I've been thinking about spending time on this kind of thing again, let me know if I can help! I'm still waiting on some new Linux hardware to be able to use perf_event_open locally but I'll be able to help with results/UI/regression detection/porting benchmarks/etc soon.

@Kobzol (Contributor, Author) commented Aug 25, 2022

I think that we first have to get the basic infrastructure in place (code layout, measurements, DB, etc.); I'm not sure how much of that is "parallelizable". But once that's in place, we will need to populate the benchmark suite and possibly build some new UI for it, and then it will definitely be good to cooperate :) It might still be a way off though; the infrastructure will probably go through a lot of iterations.

@nnethercote (Contributor) left a comment

I've had a quick look; it all seems quite plausible.

Why do the benchmarks appear twice? E.g. collector/lolperf/benchmarks/nbody/src/nbody.rs and runtime/benchmarks/nbody/src/nbody.rs. Same story with other source files like measure.rs.

I have a strong preference for the structure of the runtime benchmarks and the compile time benchmarks to be similar. E.g. compile time benchmarks are in collector/benchmarks/, so the runtime ones should be in a related spot at the same level of nesting. This will make it easier to navigate. If this requires renaming some existing things, that's fine. (Likewise with the subcommand names and structures, as you suggested.)

Review comment on collector/lolperf/benchlib/src/measure.rs (outdated, resolved)
@Kobzol (Contributor, Author) commented Aug 25, 2022

I also want to make the structure similar to the compile-time benchmarks; I just put it into a separate directory for now to better separate it from the rest of the code. There's also one additional piece beyond just a directory of benchmarks, namely the benchmarking library. I'm fine with changing the file/directory layout in any way we find better.

@Kobzol (Contributor, Author) commented Aug 25, 2022

Why do the benchmarks appear twice? E.g. collector/lolperf/benchmarks/nbody/src/nbody.rs and runtime/benchmarks/nbody/src/nbody.rs. Same story with other source files like measure.rs.

Oops, sorry, I accidentally committed another directory where I had stored it previously. And I was wondering why the diff was so large... 😅

@Kobzol changed the title from "Add a sketch of a runtime benchmark suite" to "Add basic runtime benchmark infrastructure" on Sep 1, 2022
@Kobzol marked this pull request as ready for review on September 1, 2022 at 21:22
@Kobzol force-pushed the lolperf branch 2 times, most recently from b9ccaa6 to 93bb62c, on September 1, 2022 at 21:36
@Kobzol (Contributor, Author) commented Sep 1, 2022

I got some nasty diffs there because of a botched directory move; now it should be fine.

@Kobzol force-pushed the lolperf branch 3 times, most recently from 83c6be8 to 88ed11a, on October 1, 2022 at 15:22
@Kobzol (Contributor, Author) commented Oct 1, 2022

@rylev Can you take a look please? :)

@nnethercote (Contributor) left a comment

I'm annoyed by the inconsistency between the runtime-benchmarks and compile-benchmarks directory names, i.e. the use of time in one but not the other. compile-time-benchmarks would be more consistent, but longer. (There's still the inconsistency between runtime being a single word and compile time being two words but that's a pre-existing bug in English 😁)

run-benchmarks would also be more consistent, but then it sounds like a verb phrase, which is no good.

I don't know, maybe it's good enough as is.

Other than that, and a few minor comments above, this looks good to me.

Review comment on collector/benchlib/src/measure/perf_counter/unix.rs (outdated, resolved)
Review comment on collector/benchlib/Cargo.toml (resolved)
Review comment on collector/runtime-benchmarks/hashmap/src/main.rs (outdated, resolved)
@Kobzol (Contributor, Author) commented Oct 3, 2022

We could also name it comptime-benchmarks? But I think that compile vs. runtime is obvious enough.

@nnethercote (Contributor) commented

Ok, compile-benchmarks is good enough.

@Kobzol (Contributor, Author) commented Oct 4, 2022

I'm going to go ahead and merge it so that we can start experimenting with it.

@Kobzol merged commit 4b6d196 into master on Oct 4, 2022
@Kobzol deleted the lolperf branch on October 4, 2022 at 18:16