
Update benchmarking macros #3934

Merged
merged 77 commits into from
Apr 10, 2024

Conversation

@pgherveou (Contributor) commented Apr 2, 2024:

The current benchmarking macro returns a closure that captures the benchmarked code. This can cause issues when the benchmarked code has complex lifetime requirements.

This PR updates the existing macro to inject the recording parameter and invoke the start/stop methods around the benchmarked block, instead of returning a closure.

Another added benefit is that you can now write this kind of code as well:

```rust
let v;
#[block]
{ v = func.call(); }
dbg!(v); // or assert something on v
```

Weights compare link

@pgherveou pgherveou marked this pull request as ready for review April 2, 2024 09:40
@pgherveou pgherveou requested a review from a team as a code owner April 2, 2024 09:40
@pgherveou pgherveou changed the title from "Update benchmarking" to "Update benchmarking macros" Apr 2, 2024
@ggwpez (Member) left a comment:
Fine once we re-run the substrate benches. Thanks!

Review thread on substrate/frame/benchmarking/src/utils.rs (outdated, resolved)
@paritytech-review-bot paritytech-review-bot bot requested a review from a team April 2, 2024 11:02
@@ -1077,15 +1077,13 @@ mod tests {
(frame_benchmarking::BenchmarkParameter::v, v),
(frame_benchmarking::BenchmarkParameter::n, n),
];
let closure_to_benchmark =
@pgherveou (Contributor, Author) commented:

Can this test be removed? It should be executed when running the benchmark tests anyway.

@ggwpez (Member) replied:

Yea looks a bit weird.

@pgherveou (Contributor, Author) replied:

Ah, never mind, it's marked as #[extra], so I guess this is to ensure the test runs every time.

@pgherveou (Contributor, Author) commented:
bot help

@command-bot commented Apr 2, 2024:

Here's a link to docs

@pgherveou pgherveou added the label T12-benchmarks (This PR/Issue is related to benchmarking and weights.) Apr 2, 2024
@pgherveou (Contributor, Author) commented:

bot bench-all substrate

@command-bot commented Apr 2, 2024:

@pgherveou https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5730507 was started for your command "$PIPELINE_SCRIPTS_DIR/commands/bench-all/bench-all.sh" --target_dir=substrate. Check out https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/pipelines?page=1&scope=all&username=group_605_bot to know what else is being executed currently.

Comment bot cancel 1-e2f49155-d59b-42ba-95a0-e568537d0eff to cancel this command or bot cancel to cancel all commands in this pull request.

@command-bot commented Apr 2, 2024:

@pgherveou Command "$PIPELINE_SCRIPTS_DIR/commands/bench-all/bench-all.sh" --target_dir=substrate has finished. Result: https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5730507 has finished. If any artifacts were generated, you can download them from https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5730507/artifacts/download.

@pgherveou (Contributor, Author) commented:

bot bench-all substrate

@command-bot commented Apr 2, 2024:

@pgherveou "$PIPELINE_SCRIPTS_DIR/commands/bench-all/bench-all.sh" --target_dir=substrate (https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5736938) was cancelled in #3934 (comment)

@pgherveou (Contributor, Author) commented:

bot cancel 2-0444d85c-3a2b-4901-aa42-5e499a58ce60

@command-bot commented Apr 2, 2024:

@pgherveou Command "$PIPELINE_SCRIPTS_DIR/commands/bench-all/bench-all.sh" --target_dir=substrate has finished. Result: https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5736938 has finished. If any artifacts were generated, you can download them from https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5736938/artifacts/download.

@pgherveou (Contributor, Author) commented:

bot bench-all substrate

@command-bot commented Apr 2, 2024:

@pgherveou https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5737796 was started for your command "$PIPELINE_SCRIPTS_DIR/commands/bench-all/bench-all.sh" --target_dir=substrate. Check out https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/pipelines?page=1&scope=all&username=group_605_bot to know what else is being executed currently.

Comment bot cancel 3-41cdd745-28bd-4ad4-adae-a90a48ba89f3 to cancel this command or bot cancel to cancel all commands in this pull request.

@command-bot commented Apr 8, 2024:

@pgherveou https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5822874 was started for your command "$PIPELINE_SCRIPTS_DIR/commands/bench-all/bench-all.sh" --target_dir=substrate. Check out https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/pipelines?page=1&scope=all&username=group_605_bot to know what else is being executed currently.

Comment bot cancel 1-b771a94b-ab21-4d4e-a3fa-ff99ad54f1ee to cancel this command or bot cancel to cancel all commands in this pull request.

@command-bot commented Apr 8, 2024:

@pgherveou Command "$PIPELINE_SCRIPTS_DIR/commands/bench-all/bench-all.sh" --target_dir=substrate has finished. Result: https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5822874 has finished. If any artifacts were generated, you can download them from https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5822874/artifacts/download.

@ggwpez (Member) commented Apr 8, 2024:

bot bench-all substrate

@command-bot commented Apr 8, 2024:

@ggwpez https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5831222 was started for your command "$PIPELINE_SCRIPTS_DIR/commands/bench-all/bench-all.sh" --target_dir=substrate. Check out https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/pipelines?page=1&scope=all&username=group_605_bot to know what else is being executed currently.

Comment bot cancel 1-b2242482-db25-434e-b302-4acdf5610c21 to cancel this command or bot cancel to cancel all commands in this pull request.

@command-bot commented Apr 9, 2024:

@ggwpez Command "$PIPELINE_SCRIPTS_DIR/commands/bench-all/bench-all.sh" --target_dir=substrate has finished. Result: https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5831222 has finished. If any artifacts were generated, you can download them from https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5831222/artifacts/download.

@pgherveou pgherveou mentioned this pull request Apr 9, 2024
@pgherveou pgherveou enabled auto-merge April 9, 2024 08:42
@paritytech-review-bot paritytech-review-bot bot requested a review from a team April 9, 2024 11:14
@pgherveou (Contributor, Author) commented:

@bkontur @acatangiu can I get a +1 from one of you for the bridge review lock?

@serban300 (Contributor) commented Apr 10, 2024:

@pgherveou could you just revert the change in the bridges folder please? I think it's just noise. It looks like it wouldn't make a difference, but it would create conflicts for our migration PR: #4024

@pgherveou (Contributor, Author) commented:

@serban300 I just fixed the constant because the tests were failing, how is that calculated?

@serban300 (Contributor) replied:

> @serban300 I just fixed the constant because the tests were failing, how is that calculated?

Oh ok. I'm not sure how it's calculated. But we can keep it if it's needed for the test to pass.

@serban300 (Contributor) left a comment:

The bridges change looks ok.

Labels: T12-benchmarks (This PR/Issue is related to benchmarking and weights.)

4 participants