
Prefix benchmark names with module path #16647

Open · BD103 opened this issue Dec 4, 2024 · 4 comments
Labels: A-Cross-Cutting, C-Benchmarks, C-Code-Quality, D-Straightforward, S-Ready-For-Implementation
Comments

BD103 (Member) commented Dec 4, 2024

Problem

I've slowly been returning to bevy-bencher, and am trying to migrate it to use Bevy's in-tree benchmarks instead of custom ones. One of the original issues I had with this approach is that it's difficult to see what a benchmark is testing from its name alone. For example:

  • layers_intersect
  • entity_hash
  • easing_1000
  • param/combinator_system/8_piped_systems
  • concrete_list_clone_dynamic
  • ray_mesh_intersection/1000_vertices
  • despawn_world_recursive/100_entities
  • overhead_par_iter/threads_4
  • run_condition/yes_using_resource

All of these names were pulled from our current benchmarks, and are the names that would be displayed in Bencher's UI. Can you guess what each benchmark tracks specifically? Probably not, unless you're deeply familiar with that specific subsystem.

Solution

Now look at the same list again, but with a few changes:

  • render::render_layers::intersect
  • ecs::world::entity_hash
  • math::bezier::easing_1000
  • ecs::param::combinator_system::8_piped_systems
  • reflect::list::concrete_clone_dynamic
  • picking::ray_mesh_intersection/1000_vertices
  • ecs::world::despawn_recursive/100_entities
  • tasks::overhead_par_iter/threads_4
  • ecs::scheduling::run_condition/yes_using_resource

This naming scheme prefixes each benchmark's name with its module path and strips redundant words from the name itself. There are a few benefits to this approach:

  1. The name makes it far clearer what is being tested.
  2. It's easy to locate the benchmark from the name alone.
    • With the name `render::render_layers::intersect`, you know the benchmark lives in `bevy_render/render_layers.rs`.
  3. You can easily filter which benchmarks run by category.
    • For instance, `cargo bench -- ecs::world` runs all `World`-related benchmarks (see the examples below).
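
A hypothetical session showing that filtering in practice (assuming the benchmarks live in the in-tree `benches` package):

```sh
# Run every ECS World benchmark.
cargo bench -p benches -- ecs::world

# Narrow further to just the scheduling run-condition benchmarks.
cargo bench -p benches -- ecs::scheduling::run_condition
```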

Automation

We can partially automate this naming using macros, specifically `module_path!()`. As a quick sketch, you may be able to do this:

```rust
// I may have messed up this syntax, but you get the idea :)
macro_rules! bench {
    ($name:literal) => {
        concat!(module_path!(), "::", $name)
    };
}

// Crate: `bevy_math`
mod bezier {
    use criterion::Criterion;

    fn easing(c: &mut Criterion) {
        // Name is `bevy_math::bezier::easing`.
        c.bench_function(bench!("easing"), |b| {
            // ...
        });
    }
}
```
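
Both `module_path!()` and `concat!()` expand at compile time, so the macro yields a `&'static str` with no runtime cost, and the name updates automatically whenever the benchmark moves to a different module.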
BD103 added the C-Code-Quality, S-Needs-Design, C-Benchmarks, A-Cross-Cutting, and D-Straightforward labels on Dec 4, 2024
github-merge-queue bot pushed a commit that referenced this issue Dec 10, 2024
# Objective

- Benchmarks are inconsistently set up: several do not compile, and
several others need to be reformatted.
- Related / precursor to #16647: this is part of my attempt at migrating
[`bevy-bencher`](https://github.com/TheBevyFlock/bevy-bencher) to the
official benchmarks.

## Solution

> [!TIP]
>
> I recommend reviewing this PR commit-by-commit, instead of all at once!

In 5d26f56 I reorganized how benches
were registered. There is now one `[[bench]]` per Bevy crate. In each
crate's benchmark folder, there is a `main.rs` that calls
`criterion_main!`. I also disabled automatic benchmark discovery, which
isn't necessarily required, but may clear up confusion with our custom
setup. I also fixed a few errors that were causing the benchmarks to
fail to compile.

In afc8d33 I ran `rustfmt` on all of
the benchmarks.

In d6cdf96 I fixed all of the Clippy
warnings.

In ee94d48 I fixed some of the
benchmarks' usage of `black_box()`. I ended up opening
rust-lang/rust#133942 due to this, which should
help prevent this in the future.

In cbe1688 I renamed all of the ECS
benchmark groups to be called `benches`, to be consistent with the other
crate benchmarks.

In e701c21 and
8815bb7 I re-ordered some imports and
module definitions, and uplifted `fragmentation/mod.rs` to
`fragmentation.rs`.

Finally, in b0065e0 I organized
`Cargo.toml` and bumped Criterion to v0.5.

## Testing

- `cd benches && cargo clippy --benches`
- `cd benches && cargo fmt --all`
BenjaminBrienen (Contributor) commented:
I think the macro should take the raw identifier, and internally it can use `quote!`, no?

BD103 (Member, Author) commented Dec 12, 2024

> I think the macro should take the raw identifier, and internally it can use `quote!`, no?

Do you mean something like this?

```rust
bench!(easing)
// Instead of...
bench!("easing")
```

That's certainly possible with `stringify!()`, though I prefer the original version for style purposes.
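
For reference, a minimal sketch of the identifier-taking variant: since this can remain a declarative macro, `stringify!()` suffices, with no proc macro (and therefore no `quote!`) needed:

```rust
macro_rules! bench {
    ($name:ident) => {
        concat!(module_path!(), "::", stringify!($name))
    };
}

// Invoked as `bench!(easing)` inside `bevy_math::bezier`,
// this expands to the string "bevy_math::bezier::easing".
```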

BenjaminBrienen (Contributor) commented:

I don't see why it would be a string literal, but no big deal either way.

@BD103 BD103 moved this to Bevy-Bencher in BD103 Work Planning Dec 26, 2024
github-merge-queue bot pushed a commit that referenced this issue Dec 26, 2024
# Objective

- Please see #16647 for the full reasoning behind this change.

## Solution

- Create the `bench!` macro, which generates the name of the benchmark
at compile time.

Migrating is a single-line change, and the name will automatically update if
you move the benchmark to a different module:

  ```diff
  + use benches::bench;

  fn my_benchmark(c: &mut Criterion) {
  -   c.bench_function("my_benchmark", |b| {});
  +   c.bench_function(bench!("my_benchmark"), |b| {});
  }
  ```

- Migrate all reflection benchmarks to use `bench!`.
- Fix a few places where `black_box()` or Criterion is misused.

## Testing

```sh
cd benches

# Will take a long time!
cargo bench --bench reflect

# List out the names of all reflection benchmarks, to ensure I didn't miss anything.
cargo bench --bench reflect -- --list

# Check for linter warnings.
cargo clippy --bench reflect

# Run each benchmark once.
cargo test --bench reflect
```
github-merge-queue bot pushed a commit that referenced this issue Dec 29, 2024
# Objective

- Part of #16647.
- The benchmarks for bezier curves have several issues and do not yet
use the new `bench!` naming scheme.

## Solution

- Make all `bevy_math` benchmarks use the `bench!` macro for their name.
- Delete the `build_accel_cubic()` benchmark, since it was an exact
duplicate of `build_pos_cubic()`.
- Remove the `collect::<Vec<_>>()` call in `build_pos_cubic()` and replace
it with a `for` loop.
- Combine all of the benchmarks that measure `curve.position()` under a
single group, `curve_position`, and extract the common bench routine
into a helper function.
- Move the time calculation for the `curve.ease()` benchmark into the
setup closure so it is not tracked (see the sketch after this list).
- Rename the benchmarks to be more descriptive on what they do.
  - `easing_1000` -> `segment_ease`
  - `cubic_position_Vec2` -> `curve_position/vec2`
  - `cubic_position_Vec3A` -> `curve_position/vec3a`
  - `cubic_position_Vec3` -> `curve_position/vec3`
  - `build_pos_cubic_100_points` -> `curve_iter_positions`
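
As an aside on the setup-closure point above: the standard Criterion tool for keeping setup work out of the measured region is `Bencher::iter_batched`. A minimal, self-contained sketch (the closure bodies are stand-ins, not the actual `bevy_math` code):

```rust
use criterion::{BatchSize, Criterion};

fn segment_ease(c: &mut Criterion) {
    c.bench_function("segment_ease", |b| {
        b.iter_batched(
            // Setup: build the sample times here, outside the timed region.
            || (0..1000).map(|i| i as f32 / 1000.0).collect::<Vec<f32>>(),
            // Routine: only this closure is measured (smoothstep stands in for `ease`).
            |times| times.iter().map(|t| t * t * (3.0 - 2.0 * t)).sum::<f32>(),
            BatchSize::SmallInput,
        );
    });
}
```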

## Testing

- `cargo test -p benches --bench math`
- `cargo bench -p benches --bench math`
  - Then open `./target/criterion/report/index.html` to see the report!

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
BD103 added the S-Ready-For-Implementation label and removed the S-Needs-Design label on Dec 30, 2024
BD103 (Member, Author) commented Dec 30, 2024

This issue is ready for implementation! I've been personally going through each crate's benchmarks, but anyone else can help too if they're interested! My progress is:

github-merge-queue bot pushed a commit that referenced this issue Dec 30, 2024
# Objective

- Part of #16647.
- This PR goes through our `ray_cast::ray_mesh_intersection()`
benchmarks and overhauls them with more comments and better
extensibility. The code is also a lot less duplicated!

## Solution

- Create a `Benchmarks` enum that describes all of the different kinds of
scenarios we want to benchmark (see the sketch after this list).
- Merge all of our existing benchmark functions into a single one,
`bench()`, which sets up the scenarios all at once.
- Add comments to `mesh_creation()` and `ptoxznorm()`, and move some
lines around to be a bit clearer.
- Make the benchmarks use the new `bench!` macro, as part of #16647.
- Rename many functions and benchmarks to be clearer.
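
A rough sketch of what that enum-driven setup might look like; the variant and helper names here are hypothetical, and the real ones live in the PR:

```rust
use criterion::Criterion;

/// Hypothetical scenario enum; the actual variants are defined in the PR.
#[derive(Clone, Copy)]
enum Benchmarks {
    CullIntersect,
    NoIntersection,
}

impl Benchmarks {
    const ALL: [Benchmarks; 2] = [Benchmarks::CullIntersect, Benchmarks::NoIntersection];

    fn name(self) -> &'static str {
        match self {
            Benchmarks::CullIntersect => "cull_intersect",
            Benchmarks::NoIntersection => "no_intersection",
        }
    }
}

fn bench(c: &mut Criterion) {
    let mut group = c.benchmark_group("ray_mesh_intersection");
    for scenario in Benchmarks::ALL {
        for vertex_count in [100, 1_000, 10_000] {
            group.bench_function(format!("{}/{}_vertices", scenario.name(), vertex_count), |b| {
                // Build the mesh and ray for `scenario`, then time the intersection.
                b.iter(|| { /* ray_mesh_intersection(...) */ });
            });
        }
    }
    group.finish();
}
```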

## For reviewers

I split this PR up into several easier-to-digest commits. You might find it
easier to review commit-by-commit, instead of looking at the complete file
changes.

None of my changes actually modify the behavior of the benchmarks; they
still track the exact same test cases. There shouldn't be significant
changes in benchmark performance before and after this PR.

## Testing

List all picking benchmarks: `cargo bench -p benches --bench picking -- --list`

Run the benchmarks once in debug mode: `cargo test -p benches --bench picking`

Run the benchmarks and analyze their performance: `cargo bench -p benches --bench picking`

- Check out the generated HTML report in
`./target/criterion/report/index.html` once you're done!

---

## Showcase

List of all picking benchmarks, after having been renamed:

<img width="524" alt="image"
src="https://github.com/user-attachments/assets/a1b53daf-4a8b-4c45-a25a-c6306c7175d1"
/>

Example report for
`picking::ray_mesh_intersection::cull_intersect/100_vertices`:

<img width="992" alt="image"
src="https://github.com/user-attachments/assets/a1aaf53f-ce21-4bef-89c4-b982bb158f5d"
/>
github-merge-queue bot pushed a commit that referenced this issue Jan 2, 2025
# Objective

- `entity_cloning` was separated from the rest of the ECS benchmarks.
- There was some room for improvement in the benchmarks themselves.
- Part of #16647.

## Solution

- Merge `entity_cloning` into the rest of the ECS benchmarks.
- Apply the `bench!` macro to all benchmark names.
- Reorganize benchmarks and their helper functions, with more comments
than before.
- Remove all the extra component definitions (`C2`, `C3`, etc.), and
just leave one. Now all entities have exactly one component.

## Testing

```sh
# List all entity cloning benchmarks, to verify their names have updated.
cargo bench -p benches --bench ecs entity_cloning -- --list

# Test benchmarks by running them once.
cargo test -p benches --bench ecs entity_cloning

# Run all benchmarks (takes about a minute).
cargo bench -p benches --bench ecs entity_cloning
```

---

## Showcase


![image](https://github.com/user-attachments/assets/4e3d7d98-015a-4974-ae16-363cf1b9423c)

Interestingly, using `Clone` instead of `Reflect` appears to be 2-2.5
times faster. Furthermore, there were noticeable jumps in time when
running the benchmarks:


![image](https://github.com/user-attachments/assets/bd8513de-3922-432f-b3dd-1b1b7750bdb5)


I theorize this is because the `World` is allocating more space for all
the entities, but I don't know for certain. Neat!