# Prefix benchmark names with module path #16647
# Objective

- Benchmarks are inconsistently set up, there are several that do not compile, and several others that need to be reformatted.
- Related / precursor to #16647; this is part of my attempt to migrate [`bevy-bencher`](https://github.com/TheBevyFlock/bevy-bencher) to the official benchmarks.

## Solution

> [!TIP]
> I recommend reviewing this PR commit-by-commit, instead of all at once!

In 5d26f56 I reorganized how benches were registered. Now there is one `[[bench]]` per Bevy crate. In each crate's benchmark folder, there is a `main.rs` that calls `criterion_main!`. I also disabled automatic benchmark discovery, which isn't necessarily required, but may clear up confusion with our custom setup. I also fixed a few errors that were causing the benchmarks to fail to compile.

In afc8d33 I ran `rustfmt` on all of the benchmarks.

In d6cdf96 I fixed all of the Clippy warnings.

In ee94d48 I fixed some of the benchmarks' usage of `black_box()`. I ended up opening rust-lang/rust#133942 due to this, which should help prevent this in the future.

In cbe1688 I renamed all of the ECS benchmark groups to `benches`, to be consistent with the other crate benchmarks.

In e701c21 and 8815bb7 I re-ordered some imports and module definitions, and uplifted `fragmentation/mod.rs` to `fragmentation.rs`.

Finally, in b0065e0 I organized `Cargo.toml` and bumped Criterion to v0.5.

## Testing

- `cd benches && cargo clippy --benches`
- `cd benches && cargo fmt --all`
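For illustration, here is a minimal sketch of a per-crate entry point in the spirit of this registration scheme; the module, file, and benchmark names are placeholders rather than the actual files in the repository:

```rust
// Sketch of a per-crate benchmark entry point. In `Cargo.toml`, each Bevy
// crate gets its own `[[bench]]` target with `harness = false`, pointing at
// a `main.rs` like this one.
use criterion::criterion_main;

// In the real layout each module lives in its own file (e.g. `world.rs`);
// it is inlined here so the sketch stands on its own.
mod world {
    use criterion::{criterion_group, Criterion};

    fn spawn(c: &mut Criterion) {
        // Trivial placeholder workload.
        c.bench_function("spawn", |b| b.iter(|| std::hint::black_box(2 + 2)));
    }

    // Every module registers its benchmarks into a group named `benches`,
    // matching the group renaming described above.
    criterion_group!(benches, spawn);
}

// `criterion_main!` generates `fn main()` and runs every listed group.
criterion_main!(world::benches);
```

With automatic benchmark discovery disabled, Cargo only builds the declared `[[bench]]` targets, and Criterion's generated `main` takes over from there.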
I think the macro should take the raw identifier, and internally it can use …
Do you mean something like this: `bench!(easing)` instead of `bench!("easing")`? That's certainly possible with …
I don't see why it would be a string literal, but no big deal either way.
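For illustration, a minimal sketch of the identifier-based form under discussion; the `stringify!`/`concat!` internals are an assumption on my part, since the comment above is truncated:

```rust
// Identifier-based form: `bench!(easing)` rather than `bench!("easing")`.
// `stringify!` turns the identifier into a string at compile time, and
// `module_path!()` expands at the call site, so the full name follows the
// benchmark if it is moved to another module.
macro_rules! bench {
    ($name:ident) => {
        concat!(module_path!(), "::", stringify!($name))
    };
}

fn main() {
    // At crate root this prints "<crate name>::easing"; inside a nested
    // module it picks up the full module path automatically.
    println!("{}", bench!(easing));
}
```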
# Objective

- Please see #16647 for the full reasoning behind this change.

## Solution

- Create the `bench!` macro, which generates the name of the benchmark at compile time. Migrating is a single line change, and it will automatically update if you move the benchmark to a different module:

  ```diff
  + use benches::bench;

    fn my_benchmark(c: &mut Criterion) {
  -     c.bench_function("my_benchmark", |b| {});
  +     c.bench_function(bench!("my_benchmark"), |b| {});
    }
  ```

- Migrate all reflection benchmarks to use `bench!`.
- Fix a few places where `black_box()` or Criterion is misused.

## Testing

```sh
cd benches

# Will take a long time!
cargo bench --bench reflect

# List out the names of all reflection benchmarks, to ensure I didn't miss anything.
cargo bench --bench reflect -- --list

# Check for linter warnings.
cargo clippy --bench reflect

# Run each benchmark once.
cargo test --bench reflect
```
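As background on the `black_box()` fixes mentioned above, this is the general shape such call sites tend toward; a sketch with a made-up `sum` workload, not the actual reflect benchmarks:

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use std::hint::black_box;

fn sum(values: &[u64]) -> u64 {
    values.iter().sum()
}

fn sum_benchmark(c: &mut Criterion) {
    let values: Vec<u64> = (0..1_000).collect();

    c.bench_function("sum", |b| {
        // Wrap the *input* in `black_box` so the optimizer cannot fold the
        // whole computation into a constant; Criterion's `iter` already
        // black-boxes the closure's return value.
        b.iter(|| sum(black_box(&values)));
    });
}

criterion_group!(benches, sum_benchmark);
criterion_main!(benches);
```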
# Objective

- Part of #16647.
- The benchmarks for Bézier curves have several issues and do not yet use the new `bench!` naming scheme.

## Solution

- Make all `bevy_math` benchmarks use the `bench!` macro for their name.
- Delete the `build_accel_cubic()` benchmark, since it was an exact duplicate of `build_pos_cubic()`.
- Remove the `collect::<Vec<_>>()` call in `build_pos_cubic()` and replace it with a `for` loop.
- Combine all of the benchmarks that measure `curve.position()` under a single group, `curve_position`, and extract the common bench routine into a helper function.
- Move the time calculation for the `curve.ease()` benchmark into the setup closure so it is not tracked.
- Rename the benchmarks to be more descriptive about what they do:
  - `easing_1000` -> `segment_ease`
  - `cubic_position_Vec2` -> `curve_position/vec2`
  - `cubic_position_Vec3A` -> `curve_position/vec3a`
  - `cubic_position_Vec3` -> `curve_position/vec3`
  - `build_pos_cubic_100_points` -> `curve_iter_positions`

## Testing

- `cargo test -p benches --bench math`
- `cargo bench -p benches --bench math`
- Then open `./target/criterion/report/index.html` to see the report!

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
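On the "move the time calculation into the setup closure" point above, here is a hedged illustration of the general Criterion pattern, using a made-up easing function rather than the real `bevy_math` curve types:

```rust
use criterion::{criterion_group, criterion_main, BatchSize, Criterion};
use std::hint::black_box;

// Placeholder easing function standing in for the real curve code.
fn ease(t: f32) -> f32 {
    t * t * (3.0 - 2.0 * t)
}

fn segment_ease(c: &mut Criterion) {
    c.bench_function("segment_ease", |b| {
        b.iter_batched(
            // Setup closure: generating the `t` values is *not* timed.
            || (0..1_000).map(|i| i as f32 / 1_000.0).collect::<Vec<f32>>(),
            // Routine: only the easing calls themselves are measured.
            |ts| {
                for t in ts {
                    black_box(ease(black_box(t)));
                }
            },
            BatchSize::SmallInput,
        );
    });
}

criterion_group!(benches, segment_ease);
criterion_main!(benches);
```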
This issue is ready for implementation! I've been personally going through each crate's benchmarks, but anyone else can help too if they're interested!
# Objective

- Part of #16647.
- This PR goes through our `ray_cast::ray_mesh_intersection()` benchmarks and overhauls them with more comments and better extensibility. The code is also a lot less duplicated!

## Solution

- Create a `Benchmarks` enum that describes all of the different kinds of scenarios we want to benchmark.
- Merge all of our existing benchmark functions into a single one, `bench()`, which sets up the scenarios all at once.
- Add comments to `mesh_creation()` and `ptoxznorm()`, and move some lines around to be a bit clearer.
- Make the benchmarks use the new `bench!` macro, as part of #16647.
- Rename many functions and benchmarks to be clearer.

## For reviewers

I split this PR up into several easier-to-digest commits. You might find it easier to review by looking through each commit, instead of the complete file changes.

None of my changes actually modifies the behavior of the benchmarks; they still track the exact same test cases. There shouldn't be significant changes in benchmark performance before and after this PR.

## Testing

- List all picking benchmarks: `cargo bench -p benches --bench picking -- --list`
- Run the benchmarks once in debug mode: `cargo test -p benches --bench picking`
- Run the benchmarks and analyze their performance: `cargo bench -p benches --bench picking`
- Check out the generated HTML report in `./target/criterion/report/index.html` once you're done!

---

## Showcase

List of all picking benchmarks, after having been renamed:

<img width="524" alt="image" src="https://github.com/user-attachments/assets/a1b53daf-4a8b-4c45-a25a-c6306c7175d1" />

Example report for `picking::ray_mesh_intersection::cull_intersect/100_vertices`:

<img width="992" alt="image" src="https://github.com/user-attachments/assets/a1aaf53f-ce21-4bef-89c4-b982bb158f5d" />
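A rough, hedged outline of the enum-driven structure described above; the variant names, vertex counts, and dummy workload are illustrative, not the actual picking code:

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use std::hint::black_box;

/// Each variant describes one scenario to measure; listing them in one place
/// replaces a pile of near-duplicate benchmark functions.
#[derive(Clone, Copy)]
enum Benchmarks {
    /// The ray hits the mesh, with backface culling enabled.
    CullHit,
    /// The ray misses the mesh entirely.
    Miss,
}

impl Benchmarks {
    const ALL: [Benchmarks; 2] = [Benchmarks::CullHit, Benchmarks::Miss];

    fn name(self) -> &'static str {
        match self {
            Benchmarks::CullHit => "cull_hit",
            Benchmarks::Miss => "miss",
        }
    }
}

/// A single `bench()` function sets up every scenario at every mesh size.
fn bench(c: &mut Criterion) {
    for scenario in Benchmarks::ALL {
        for vertices in [100, 1_000] {
            // Produces names such as "cull_hit/100_vertices".
            let id = format!("{}/{}_vertices", scenario.name(), vertices);
            c.bench_function(&id, |b| {
                b.iter(|| {
                    // The real benchmark would build a mesh with `vertices`
                    // vertices and call `ray_mesh_intersection()` here.
                    black_box((scenario.name().len(), vertices))
                });
            });
        }
    }
}

criterion_group!(benches, bench);
criterion_main!(benches);
```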
# Objective

- `entity_cloning` was separated from the rest of the ECS benchmarks.
- There was some room for improvement in the benchmarks themselves.
- Part of #16647.

## Solution

- Merge `entity_cloning` into the rest of the ECS benchmarks.
- Apply the `bench!` macro to all benchmark names.
- Reorganize benchmarks and their helper functions, with more comments than before.
- Remove all the extra component definitions (`C2`, `C3`, etc.) and just leave one. Now all entities have exactly one component.

## Testing

```sh
# List all entity cloning benchmarks, to verify their names have updated.
cargo bench -p benches --bench ecs entity_cloning -- --list

# Test benchmarks by running them once.
cargo test -p benches --bench ecs entity_cloning

# Run all benchmarks (takes about a minute).
cargo bench -p benches --bench ecs entity_cloning
```

---

## Showcase

![image](https://github.com/user-attachments/assets/4e3d7d98-015a-4974-ae16-363cf1b9423c)

Interestingly, using `Clone` instead of `Reflect` appears to be 2-2.5 times faster. Furthermore, there were noticeable jumps in time when running the benchmarks:

![image](https://github.com/user-attachments/assets/bd8513de-3922-432f-b3dd-1b1b7750bdb5)

I theorize this is because the `World` is allocating more space for all the entities, but I don't know for certain. Neat!
## Problem

I've slowly been returning to [`bevy-bencher`](https://github.com/TheBevyFlock/bevy-bencher), and am trying to migrate it to use Bevy's in-tree benchmarks instead of custom ones. One of the original issues I had with this approach is that it's difficult to see what a benchmark is testing from its name alone. For example:

- `layers_intersect`
- `entity_hash`
- `easing_1000`
- `param/combinator_system/8_piped_systems`
- `concrete_list_clone_dynamic`
- `ray_mesh_intersection/1000_vertices`
- `despawn_world_recursive/100_entities`
- `overhead_par_iter/threads_4`
- `run_condition/yes_using_resource`
All of these names were pulled from our current benchmarks, and are the names that would be displayed in Bencher's UI. Can you guess what each benchmark tracks specifically? Probably not, unless you're deeply familiar with that specific subsystem.
## Solution

Now look at the same list again, but with a few changes:

- `render::render_layers::intersect`
- `ecs::world::entity_hash`
- `math::bezier::easing_1000`
- `ecs::param::combinator_system::8_piped_systems`
- `reflect::list::concrete_clone_dynamic`
- `picking::ray_mesh_intersection/1000_vertices`
- `ecs::world::despawn_recursive/100_entities`
- `tasks::overhead_par_iter/threads_4`
- `ecs::scheduling::run_condition/yes_using_resource`
This naming scheme includes the module path in the benchmark name and removes any redundant words from the name itself. There are a few benefits to this approach:

- The name tells you where the benchmark lives: if you see `render::render_layers::intersect`, you know the benchmark is within `bevy_render/render_layers.rs`.
- You can filter benchmarks by module path: for example, run `cargo bench -- ecs::world` to run all `World`-related benchmarks.

## Automation
We can automate this naming a little bit using macros, specifically with `module_path!()`. For a quick sketch, you may be able to do this:
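The sketch itself did not survive in this copy of the issue. Below is a minimal reconstruction of the idea, assuming a `bench!` macro built from `concat!` and `module_path!()`; the macro actually adopted in the follow-up PRs may differ in its details:

```rust
// Hypothetical `bench!` macro: because `module_path!()` expands at the call
// site, the generated name automatically includes the module path and stays
// correct if the benchmark is moved to a different module.
#[macro_export]
macro_rules! bench {
    ($name:literal) => {
        concat!(module_path!(), "::", $name)
    };
}

// Inlined modules to demonstrate the expansion; in the benchmarks these
// would be separate files under the bench target.
mod math {
    pub mod bezier {
        pub fn name() -> &'static str {
            bench!("easing_1000")
        }
    }
}

fn main() {
    // Prints "<crate name>::math::bezier::easing_1000".
    println!("{}", math::bezier::name());
}
```

The macro produces a `&'static str` at compile time, so it can be passed directly to Criterion's `bench_function`, as shown in the migration diff above.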