Create Benchmarking Setup for Identity Pallet #4695 #4818
Conversation
Logic seems reasonable (sans macros). Formatting needs some love.
```rust
fn foo(self) -> Self
{
	bar()
}
```

is wrong. The rule is that two neighbouring lines at the same indent level should be syntactically interchangeable, at least in terms of their initial token. Also, indent levels should never increase or decrease by more than one per line.

As such it's either:

```rust
fn foo(self) -> Self {
	bar()
}
```

Or:

```rust
fn foo(self)
	-> Self
{
	bar()
}
```

Or:

```rust
fn foo(
	self
) -> Self {
	bar()
}
```

Or (though I've never needed to use this myself):

```rust
fn foo(
	self
)
	-> Self
{
	bar()
}
```
```rust
@@ -804,6 +806,25 @@ impl_runtime_apis! {
			SessionKeys::decode_into_raw_public_keys(&encoded)
		}
	}

	impl crate::Benchmark<Block> for Runtime {
```
Should I hide the entire benchmarking pipeline behind a feature flag?
This means by default we should:
- Not expose a runtime API
- Not have CLI
- Not have benchmarking code in our runtime wasm
Is it possible?
I think so; we don't really want benchmarking code making it onto the chain.
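For context, Rust's conditional compilation makes this straightforward; a minimal sketch, assuming a Cargo feature named `runtime-benchmarks` (the feature name is an assumption, not confirmed by this PR):

```rust
// Sketch: gate all benchmarking code behind a Cargo feature so that a
// production wasm build contains none of it. The feature name
// `runtime-benchmarks` is an assumption for illustration.

#[cfg(feature = "runtime-benchmarks")]
pub mod benchmarking;

// The runtime API implementation and the CLI wiring can be gated the same way.
```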
client/db/src/bench.rs:

```rust
state.reopen()?;
let child_delta = genesis.children.into_iter().map(|(storage_key, child_content)| (
		storage_key,
```
double indent level
This PR introduces a pipeline to benchmark Substrate FRAME Pallets.
Changes in this PR
This PR introduces a `bench` database which is useful specifically for benchmarking, as it gives the runtime access to control the database through host functions.

This PR introduces a new set of host functions under the name `benchmarking`, which give the runtime that control over the database.

This PR also introduces a new Runtime API, `Benchmark_dispatch_benchmark`, which allows you to call into the runtime to execute benchmarking tests.

This Runtime API is easily accessible through a new CLI sub-command, `substrate benchmark`, which allows the user to execute benchmarks from their Substrate node executable.

A set of benchmarks which use this pipeline has been added for the Identity Pallet and all of its extrinsics.
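For illustration, host functions in Substrate are declared with the `#[runtime_interface]` macro from `sp-runtime-interface`. The sketch below shows roughly what a `benchmarking` interface could look like; the function names (`current_time`, `wipe_db`, `commit_db`) and the `wipe`/`commit` externalities calls are assumptions, not necessarily the exact set added by this PR:

```rust
use sp_runtime_interface::runtime_interface;

// Sketch of a `benchmarking` host interface. All names here are
// assumptions for illustration.
#[runtime_interface]
pub trait Benchmarking {
	/// Read the current host time in nanoseconds, so the runtime can
	/// time individual extrinsic dispatches.
	fn current_time() -> u128 {
		std::time::SystemTime::now()
			.duration_since(std::time::SystemTime::UNIX_EPOCH)
			.expect("system time is after the Unix epoch; qed")
			.as_nanos()
	}

	/// Wipe the benchmarking database back to genesis between runs.
	fn wipe_db(&mut self) {
		self.wipe()
	}

	/// Commit pending changes to the benchmarking database.
	fn commit_db(&mut self) {
		self.commit()
	}
}
```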
How to Use
The Substrate node CLI now exposes a `benchmark` sub-command which can be used to run the Pallet benchmarks. For example:

```sh
substrate benchmark --chain dev --execution=wasm --wasm-execution=compiled --pallet pallet-identity --extrinsic add_registrar --steps 10 --repeat 100 > add_registrar.csv
```
As shown above, the output of this command will be the benchmark results in CSV format.
CLI Params
The `benchmark` sub-command shares the global node configuration parameters. Of note, when testing the Substrate runtime, it is important to set the following:
- `--execution=wasm`: Ensures you are running the benchmark completely in the Wasm environment.
- `--wasm-execution=compiled`: Ensures that you are using compiled Wasm rather than the much slower interpreted Wasm.
- `--chain <your chain spec>`: Needed to specify the genesis state of your benchmarking setup.

Custom parameters include:
- `--pallet`/`-p`: The pallet you want to test (e.g. `pallet-identity` or `identity`).
- `--extrinsic`/`-e`: The extrinsic you want to test (e.g. `set_identity`).
- `--steps`/`-s`: The maximum number of sample points to take between component ranges (see below) [Default: 1].
- `--repeat`/`-r`: The number of times to repeat every benchmark [Default: 1].

If you put an invalid `--pallet` or `--extrinsic`, the CLI will return an error and no benchmarks will be run.

Output
The output will look something like:
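(Purely for shape; the column names and numbers below are invented for illustration, not produced by a real run:)

```
Pallet: pallet-identity, Extrinsic: add_registrar, Steps: 10, Repeat: 100
registrar_count,time
1,10521
2,10684
3,10850
```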
Other than the first line, which is metadata about what has been run, the rest of the output is in CSV format. `time` is always measured in nanoseconds.

Creating Benchmarks for a Pallet
Using the patterns defined in this PR, you should be able to create benchmarks for any pallet:
- Create a `benchmark.rs` in your runtime pallet.
- Implement the `Benchmarking` trait for `Module<T>`.
- Create a struct for each benchmark, and implement `BenchmarkingSetup` for that struct.

The specific implementation of these functions is left to the user, but the implementation done in this PR is meant to be reusable and general for any benchmarks; a sketch of the pattern follows.
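The shapes below are only a sketch of that pattern: the trait names come from the steps above, but these signatures are assumptions, not the PR's exact code. The idea is that each extrinsic gets its own setup type describing which components to sweep and how to build one concrete call to time:

```rust
// Sketch only: signatures are assumptions for illustration.

/// A component: a named parameter plus the (low, high) range to sweep.
pub type Component = (&'static str, u32, u32);

/// Per-benchmark setup; one implementor (e.g. a unit struct such as
/// `struct AddRegistrar;`) per extrinsic being measured.
pub trait BenchmarkingSetup<Call, Origin> {
	/// The components this benchmark sweeps, with their ranges. The CLI's
	/// `--steps` controls how many sample points are taken per range.
	fn components(&self) -> Vec<Component>;

	/// Do any storage setup required, then return one concrete call and
	/// origin to dispatch and time for the chosen component values.
	fn instance(
		&self,
		components: &[(&'static str, u32)],
	) -> Result<(Call, Origin), &'static str>;
}
```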
When running an actual benchmark, you should be sure to execute it in a way similar to this:
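The PR's own snippet is not reproduced here; as a stand-in, this sketch reuses the hypothetical `BenchmarkingSetup` trait and `benchmarking` host functions from the sketches above (all identifiers are illustrative):

```rust
// Sketch of a benchmark execution loop. Every identifier that is not
// standard Rust is a placeholder, reused from the sketches above.
fn run_benchmark<Setup, Call, Origin>(
	setup: &Setup,
	steps: u32,
	repeat: u32,
	dispatch: impl Fn(Call, Origin),
) -> Vec<(Vec<(&'static str, u32)>, u128)>
where
	Setup: BenchmarkingSetup<Call, Origin>,
{
	let mut results = Vec::new();
	// Sweep each component across `steps` sample points in its range.
	for (name, low, high) in setup.components() {
		for step in 0..steps {
			let value = low + (high - low) * step / steps.max(1);
			let selected = vec![(name, value)];
			// Repeat each sample point to average out noise.
			for _ in 0..repeat {
				let (call, origin) = setup
					.instance(&selected)
					.expect("benchmark setup must not fail");
				// Time only the dispatch itself, in nanoseconds.
				let start = benchmarking::current_time();
				dispatch(call, origin);
				let elapsed = benchmarking::current_time() - start;
				// Reset storage so runs don't contaminate each other.
				benchmarking::wipe_db();
				results.push((selected.clone(), elapsed));
			}
		}
	}
	results
}
```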
You will also need to update the `run_benchmarks` call to map to your respective benchmark. This is a little nasty for now, but should be fixed with Create macros to simplify benchmarking. #4861

Exposing Your Pallet Benchmarks
After you have written the benchmarks for your pallet, you need to expose them through your Substrate node. You can do this via a runtime API in your main node file:
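A sketch of the declaration, assuming `sp_api::decl_runtime_apis!`; the parameter and result types here are guesses inferred from the CLI options above, not the PR's exact signature:

```rust
// Sketch of the runtime API declaration; types are assumptions.
sp_api::decl_runtime_apis! {
	pub trait Benchmark {
		/// Run the benchmark named by `extrinsic` in `module`, taking
		/// `steps` sample points and repeating each one `repeat` times.
		/// Returns `(component values, time in ns)` rows, or `None` if
		/// the pallet/extrinsic pair is unknown.
		fn dispatch_benchmark(
			module: Vec<u8>,
			extrinsic: Vec<u8>,
			steps: u32,
			repeat: u32,
		) -> Option<Vec<(Vec<u32>, u128)>>;
	}
}
```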
You will need to implement that API for your runtime:
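A sketch of that implementation, matching the `impl crate::Benchmark<Block> for Runtime` fragment shown in the diff above; the match arms and the `run_benchmark` entry point are placeholders for whatever your pallet actually exposes:

```rust
// Sketch: inside your runtime's `impl_runtime_apis!` block.
impl crate::Benchmark<Block> for Runtime {
	fn dispatch_benchmark(
		module: Vec<u8>,
		extrinsic: Vec<u8>,
		steps: u32,
		repeat: u32,
	) -> Option<Vec<(Vec<u32>, u128)>> {
		// Route the pallet name to the matching module; the pallet then
		// routes the extrinsic name to its own benchmark.
		match module.as_slice() {
			b"pallet-identity" | b"identity" =>
				Identity::run_benchmark(extrinsic, steps, repeat).ok(),
			_ => None,
		}
	}
}
```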
You can simply map the appropriate strings to your Module's `run_benchmark` function.

Benchmarking Philosophy
You can read the philosophy and process we are using for benchmarking and weighing pallets here: https://hackmd.io/UD0HojfARqyUMC9Jxs5-RA