From 378cfb26d984bcde467781f07ef8ddb6998212cb Mon Sep 17 00:00:00 2001 From: Kian Paimani <5588131+kianenigma@users.noreply.github.com> Date: Wed, 14 Dec 2022 17:05:01 +0000 Subject: [PATCH] Lexnv/kiz revamp try runtime stuff (#12932) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Introduce sensible weight constants (#12868) * Introduce sensible weight constants * cargo fmt * Remove unused import * Add missing import * ".git/.scripts/bench-bot.sh" pallet dev pallet_lottery Co-authored-by: command-bot <> * Checkout to the branch HEAD explicitly in `build-linux-substrate` (#12876) * cli: Improve pruning documentation (#12819) * cli: Improve pruning documentation Signed-off-by: Alexandru Vasile * cli: Keep `finalized` notation and remove `canonical` one * cli: Fix cargo doc * cli: `PruningModeClap` IR enum Signed-off-by: Alexandru Vasile * cli: Convert PruningModeClap into pruning modes Signed-off-by: Alexandru Vasile * cli: Use `PruningModeClap` Signed-off-by: Alexandru Vasile * cli: Rename to `DatabasePruningMode` Signed-off-by: Alexandru Vasile * cli: Implement `FromStr` instead of `clap::ValueEnum` Signed-off-by: Alexandru Vasile * Update client/cli/src/params/pruning_params.rs Co-authored-by: Bastian Köcher * Fix clippy Signed-off-by: Alexandru Vasile * cli: Add option documentation back Signed-off-by: Alexandru Vasile * Apply suggestions from code review Signed-off-by: Alexandru Vasile Co-authored-by: Bastian Köcher * Revert "Move LockableCurrency trait to fungibles::Lockable and deprecate LockableCurrency (#12798)" (#12882) This reverts commit ea3ca3f757ff9d9559665719a77da81f4cf0f0ce. * Don't indefinitely block on shutting down Tokio (#12885) * Don't block indefinitely on shutting down Tokio Now we wait at most 60 seconds before shutting down the node. Tasks may be leaked, leading to some data corruption. * Drink less :thinking_face: * General Message Queue Pallet (#12485) * The message queue * Make fully generic * Refactor * Docs * Refactor * Use iter not slice * Per-origin queues * Multi-queue processing * Introduce MaxReady * Remove MaxReady in favour of ready ring * Cleanups * ReadyRing and tests * Stale page reaping * from_components -> from_parts Signed-off-by: Oliver Tale-Yazdi * Move WeightCounter to sp_weights Signed-off-by: Oliver Tale-Yazdi * Add MockedWeightInfo Signed-off-by: Oliver Tale-Yazdi * Deploy to kitchensink Signed-off-by: Oliver Tale-Yazdi * Use WeightCounter Signed-off-by: Oliver Tale-Yazdi * Small fixes and logging Signed-off-by: Oliver Tale-Yazdi * Add service_page Signed-off-by: Oliver Tale-Yazdi * Typo Signed-off-by: Oliver Tale-Yazdi * Move service_page below service_queue Signed-off-by: Oliver Tale-Yazdi * Add service_message Signed-off-by: Oliver Tale-Yazdi * Use correct weight function Signed-off-by: Oliver Tale-Yazdi * Overweight execution * Refactor * Missing file * Fix WeightCounter usage in scheduler Signed-off-by: Oliver Tale-Yazdi * Fix peek_index Take into account that decoding from a mutable slice modifies it.
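The `peek_index` fix above relies on a SCALE detail that is easy to miss: `Decode::decode` takes `&mut input` and advances the input slice past the bytes it consumed, so reusing the same slice reference to "peek" actually moves the read position. A minimal standalone illustration of the gotcha with the `parity-scale-codec` crate:

    use parity_scale_codec::{Decode, Encode};

    fn main() {
        let encoded = (7u32, 9u32).encode();

        // `decode` advances the input reference past the bytes it consumed.
        let mut input = &encoded[..];
        let first = u32::decode(&mut input).unwrap();
        assert_eq!(first, 7);
        assert_eq!(input.len(), encoded.len() - 4); // 4 bytes were consumed

        // A true "peek" must therefore work on a copy of the reference,
        // leaving the original read position untouched.
        let mut peek = input;
        let second = u32::decode(&mut peek).unwrap();
        assert_eq!(second, 9);
        assert_eq!(input.len(), 4); // `input` did not move
    }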
Signed-off-by: Oliver Tale-Yazdi * Add tests and bench service_page_item Signed-off-by: Oliver Tale-Yazdi * Add debug_info Signed-off-by: Oliver Tale-Yazdi * Add no-progress check to service_queues Signed-off-by: Oliver Tale-Yazdi * Add more benches Signed-off-by: Oliver Tale-Yazdi * Bound from_message and try_append_message Signed-off-by: Oliver Tale-Yazdi * Add PageReaped event Signed-off-by: Oliver Tale-Yazdi * Rename BookStateOf and BookStateFor Signed-off-by: Oliver Tale-Yazdi * Update tests and remove logging Signed-off-by: Oliver Tale-Yazdi * Remove redundant per-message origins; add footprint() and sweep_queue() * Move testing stuff to mock.rs Signed-off-by: Oliver Tale-Yazdi * Add integration test Signed-off-by: Oliver Tale-Yazdi * Fix no-progress check Signed-off-by: Oliver Tale-Yazdi * Fix debug_info Signed-off-by: Oliver Tale-Yazdi * Fixup merge and tests Signed-off-by: Oliver Tale-Yazdi * Fix footprint tracking * Introduce * Formatting * OverweightEnqueued event, auto-servicing config item * Update tests and benchmarks Signed-off-by: Oliver Tale-Yazdi * Clippy Signed-off-by: Oliver Tale-Yazdi * Add tests Signed-off-by: Oliver Tale-Yazdi * Provide change handler * Add missing BookStateFor::insert and call QueueChangeHandler Signed-off-by: Oliver Tale-Yazdi * Docs Signed-off-by: Oliver Tale-Yazdi * Update benchmarks and weights Signed-off-by: Oliver Tale-Yazdi * More tests... Signed-off-by: Oliver Tale-Yazdi * Use weight metering functions Signed-off-by: Oliver Tale-Yazdi * weightInfo::process_message_payload is gone Signed-off-by: Oliver Tale-Yazdi * Add defensive_saturating_accrue Signed-off-by: Oliver Tale-Yazdi * Rename WeightCounter to WeightMeter Ctrl+Shift+H should do the trick. Signed-off-by: Oliver Tale-Yazdi * Test on_initialize Signed-off-by: Oliver Tale-Yazdi * Add module docs Signed-off-by: Oliver Tale-Yazdi * Remove origin from MaxMessageLen The message origin is not encoded into the heap and therefore no longer influences the max message length. Signed-off-by: Oliver Tale-Yazdi * Add BoundedVec::as_slice Signed-off-by: Oliver Tale-Yazdi * Test Page::{from_message, try_append_message} Signed-off-by: Oliver Tale-Yazdi * Fixup docs Signed-off-by: Oliver Tale-Yazdi * Docs * Do nothing in sweep_queue if the queue does not exist ... otherwise it inserts default values into the storage. Signed-off-by: Oliver Tale-Yazdi * Test ring (un)knitting Signed-off-by: Oliver Tale-Yazdi * Upgrade stress-test Change the test to not assume that all queued messages will be processed in the next block, but to split the processing over multiple blocks. Signed-off-by: Oliver Tale-Yazdi * More tests... Signed-off-by: Oliver Tale-Yazdi * Beauty fixes Signed-off-by: Oliver Tale-Yazdi * clippy Signed-off-by: Oliver Tale-Yazdi * Rename BoundedVec::as_slice to as_bounded_slice Conflicts with deref().as_slice() otherwise. Signed-off-by: Oliver Tale-Yazdi * Fix imports Signed-off-by: Oliver Tale-Yazdi * Remove ReadyRing struct Was used for testing only. Instead use 'fn assert_ring' which also checks the service head and backlinks.
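Several of the bullets above (WeightCounter/WeightMeter, "Use weight metering functions", defensive_saturating_accrue) circle the same pattern: metering consumed weight against a hard limit while servicing queues, and refusing any item that would overrun it. A minimal conceptual sketch of that pattern — the type and method names below are illustrative stand-ins, not the actual `sp_weights` API:

    // Illustrative only: a simplified stand-in for the weight meter the
    // changelog describes (the real type lives in `sp_weights`).
    #[derive(Debug, Clone, Copy)]
    struct Meter {
        consumed: u64, // weight used so far
        limit: u64,    // hard limit for this servicing round
    }

    impl Meter {
        fn from_limit(limit: u64) -> Self {
            Self { consumed: 0, limit }
        }

        // Consume `w` if it still fits; report whether it was accrued.
        fn check_accrue(&mut self, w: u64) -> bool {
            match self.consumed.checked_add(w) {
                Some(total) if total <= self.limit => {
                    self.consumed = total;
                    true
                },
                _ => false,
            }
        }

        fn remaining(&self) -> u64 {
            self.limit - self.consumed
        }
    }

    fn main() {
        let mut meter = Meter::from_limit(10);
        assert!(meter.check_accrue(4));  // fits: 4 of 10 used
        assert!(!meter.check_accrue(7)); // would exceed the limit, rejected
        assert_eq!(meter.remaining(), 6);
    }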
Signed-off-by: Oliver Tale-Yazdi * Beauty fixes Signed-off-by: Oliver Tale-Yazdi * Fix stale page watermark Signed-off-by: Oliver Tale-Yazdi * Cleanup Signed-off-by: Oliver Tale-Yazdi * Fix test feature and clippy Signed-off-by: Oliver Tale-Yazdi * QueueChanged handler is called correctly Signed-off-by: Oliver Tale-Yazdi * Update benches Signed-off-by: Oliver Tale-Yazdi * Abstract testing functions Signed-off-by: Oliver Tale-Yazdi * More tests Signed-off-by: Oliver Tale-Yazdi * Cleanup Signed-off-by: Oliver Tale-Yazdi * Clippy Signed-off-by: Oliver Tale-Yazdi * fmt Signed-off-by: Oliver Tale-Yazdi * Simplify tests Signed-off-by: Oliver Tale-Yazdi * Make stuff compile Signed-off-by: Oliver Tale-Yazdi * Extend overweight execution benchmark Signed-off-by: Oliver Tale-Yazdi * Remove TODOs Signed-off-by: Oliver Tale-Yazdi * Test service queue with faulty MessageProcessor Signed-off-by: Oliver Tale-Yazdi * fmt Signed-off-by: Oliver Tale-Yazdi * Update pallet ui tests to 1.65 Signed-off-by: Oliver Tale-Yazdi * More docs Signed-off-by: Oliver Tale-Yazdi * Review doc fixes Co-authored-by: Robert Klotzner Signed-off-by: Oliver Tale-Yazdi * Add weight_limit to extrinsic weight of execute_overweight * Correctly return unused weight * Return actual weight consumed in do_execute_overweight * Review fixes Signed-off-by: Oliver Tale-Yazdi * Set version 7.0.0-dev Signed-off-by: Oliver Tale-Yazdi * Make it compile Signed-off-by: Oliver Tale-Yazdi * Switch message_size to u64 Signed-off-by: Oliver Tale-Yazdi * Switch message_count to u64 Signed-off-by: Oliver Tale-Yazdi * Fix benchmarks Signed-off-by: Oliver Tale-Yazdi * Make CI green Signed-off-by: Oliver Tale-Yazdi * Docs * Update tests Signed-off-by: Oliver Tale-Yazdi * ".git/.scripts/bench-bot.sh" pallet dev pallet_message_queue * Don't mention README.md in the Cargo.toml Signed-off-by: Oliver Tale-Yazdi * Remove reference to readme Signed-off-by: Oliver Tale-Yazdi Co-authored-by: Oliver Tale-Yazdi Co-authored-by: parity-processbot <> Co-authored-by: Robert Klotzner Co-authored-by: Keith Yeung * zombienet timings adjusted (#12890) * zombienet tests: add some timeout to allow net spin-up Sometimes tests fail on the first try, as the pods are not up yet. Adding a timeout should allow the network to spin up properly. * initial timeout increased to 30s * Move import queue out of `sc-network` (#12764) * Move import queue out of `sc-network` Add supplementary asynchronous API for the import queue, which means it can be run as an independent task and communicated with through the `ImportQueueService`. This commit removes block and justification imports from `sc-network` and provides `ChainSync` with a handle to the import queue so it can import blocks and justifications. Polling of the import queue is moved completely out of `sc-network`, and `sc_consensus::Link` is implemented for `ChainSyncInterfaceHandled` so the import queue can still influence the syncing process.
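As a rough mental model of the new shape — not the actual `sc-consensus`/`sc-network` API; all names below are hypothetical — the import queue becomes an independent task that syncing talks to only through a channel-backed service handle:

    use std::sync::mpsc::{channel, Receiver, Sender};
    use std::thread;

    // Illustrative stand-ins for blocks and justifications.
    struct IncomingBlock(u64);
    struct Justification(u64);

    enum ToImportQueue {
        ImportBlocks(Vec<IncomingBlock>),
        ImportJustification(Justification),
    }

    // Hypothetical service handle: the only thing syncing holds on to.
    #[derive(Clone)]
    struct ImportQueueServiceHandle(Sender<ToImportQueue>);

    impl ImportQueueServiceHandle {
        fn import_blocks(&self, blocks: Vec<IncomingBlock>) {
            let _ = self.0.send(ToImportQueue::ImportBlocks(blocks));
        }
        fn import_justification(&self, j: Justification) {
            let _ = self.0.send(ToImportQueue::ImportJustification(j));
        }
    }

    // The import queue runs as its own task: nobody polls it, it just
    // drains its channel until all handles are gone.
    fn run_import_queue(rx: Receiver<ToImportQueue>) {
        while let Ok(msg) = rx.recv() {
            match msg {
                ToImportQueue::ImportBlocks(blocks) =>
                    for IncomingBlock(n) in &blocks {
                        println!("importing block {n}");
                    },
                ToImportQueue::ImportJustification(Justification(n)) =>
                    println!("importing justification for block {n}"),
            }
        }
    }

    fn main() {
        let (tx, rx) = channel();
        let queue = thread::spawn(move || run_import_queue(rx));

        // `ChainSync` would hold a handle like this instead of owning the queue.
        let service = ImportQueueServiceHandle(tx);
        service.import_blocks(vec![IncomingBlock(1), IncomingBlock(2)]);
        service.import_justification(Justification(2));

        drop(service); // closing the channel lets the queue task finish
        queue.join().unwrap();
    }

The point of the indirection is that the networking layer no longer needs to poll the queue; it only sends messages, while the queue task drives itself.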
* Fix tests * Apply review comments * Apply suggestions from code review Co-authored-by: Bastian Köcher * Update client/network/sync/src/lib.rs Co-authored-by: Bastian Köcher Co-authored-by: Bastian Köcher * Trace response payload in default `jsonrpsee` middleware (#12886) * Trace result in default `jsonrpsee` middleware * `rpc_metrics::extra` Co-authored-by: Bastian Köcher Co-authored-by: Bastian Köcher * Ensure that we inform all tasks to stop before starting the 60 seconds shutdown (#12897) * Ensure that we inform all tasks to stop before starting the 60 seconds shutdown The change to wait at most 60 seconds for the node to shut down actually introduced a bug. We were always waiting the full 60 seconds, as we didn't inform our tasks to shut down. The solution to this problem is to drop the task manager, as this will then inform all tasks to end. It also adds tests to ensure that the behaviors work as expected. (This should already have been done in the first pr! :() * ".git/.scripts/fmt.sh" 1 Co-authored-by: command-bot <> * Safe desired targets call (#12826) * checked call for desired targets * fix compile * fmt * fix tests * cleaner with and_then * Fix typo (#12900) * ValidateUnsigned: Improve docs. (#12870) * ValidateUnsigned: Improve docs. * Review comments * rpc server with HTTP/WS on the same socket (#12663) * jsonrpsee v0.16: add backwards compatibility, run old http server on http only * cargo fmt * update jsonrpsee 0.16.1 * less verbose cors log * fix nit in log: WS -> HTTP * revert needless changes in Cargo.lock * remove unused features in tower * fix nits; add client-core feature * jsonrpsee v0.16.2 * `pallet-message-queue`: Fix license (#12895) * Fix license Signed-off-by: Oliver Tale-Yazdi * Add mock doc Signed-off-by: Oliver Tale-Yazdi Signed-off-by: Oliver Tale-Yazdi * Use explicit call indices (#12891) * frame-system: explicit call index Signed-off-by: Oliver Tale-Yazdi * Use explicit call indices Signed-off-by: Oliver Tale-Yazdi * pallet-template: explicit call index Signed-off-by: Oliver Tale-Yazdi * DNM: Temporarily require call_index Signed-off-by: Oliver Tale-Yazdi * Revert "DNM: Temporarily require call_index" This reverts commit c4934e312e12af72ca05a8029d7da753a9c99346. Signed-off-by: Oliver Tale-Yazdi * Pin canonicalized block (#12902) * Remove implicit approval chilling upon slash.
(#12420) * don't read slashing spans when taking election snapshot * update cargo.toml * bring back remote test * fix merge stuff * fix npos-voters function sig * remove as much redundant diff as you can * Update frame/staking/src/pallet/mod.rs Co-authored-by: Andronik * fix * Update frame/staking/src/pallet/impls.rs * update lock * fix all tests * review comments * fmt * fix offence bench * clippy * ".git/.scripts/bench-bot.sh" pallet dev pallet_staking Co-authored-by: Andronik Co-authored-by: Ankan Co-authored-by: command-bot <> * bounties calls docs fix (#12909) Co-authored-by: parity-processbot <> * pallet-contracts migration pre-upgrade fix for v8 (#12905) * Only run pre-v8 migration check for versions older than 8 * Logic fix * use custom environment for publishing crates (#12912) * [contracts] Add debug buffer limit + enforcement (#12845) * Add debug buffer limit + enforcement Add debug buffer limit + enforcement * use BoundedVec for the debug buffer * revert schedule (debug buf len limit not needed anymore) * return DispatchError * addressed review comments * frame/remote-externalities: Fix clippy Signed-off-by: Alexandru Vasile * frame/rpc: Add previous export Signed-off-by: Alexandru Vasile * Fixup some wrong dependencies (#12899) * Fixup some wrong dependencies Dev dependencies should not appear in the feature list. If features are required, they should be directly enabled for the `dev-dependency`. * More fixups * Fix fix * Remove deprecated feature * Make all work properly and nice!! * FMT * Fix formatting * add numerator and denominator to Rational128 Debug impl and increase precision of float representation (#12914) * Fix state-db pinning (#12927) * Pin all canonicalized blocks * Added a test * Docs * [ci] add job switcher (#12922) * Use LOG_TARGET in consensus related crates (#12875) * Use shared LOG_TARGET in consensus related crates * Rename target from "afg" to "grandpa" Signed-off-by: Alexandru Vasile Signed-off-by: Oliver Tale-Yazdi Co-authored-by: Keith Yeung Co-authored-by: Vlad Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com> Co-authored-by: Bastian Köcher Co-authored-by: Anthony Alaribe Co-authored-by: Gavin Wood Co-authored-by: Oliver Tale-Yazdi Co-authored-by: Robert Klotzner Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com> Co-authored-by: Aaro Altonen <48052676+altonen@users.noreply.github.com> Co-authored-by: tgmichel Co-authored-by: Ankan <10196091+Ank4n@users.noreply.github.com> Co-authored-by: Luke Schoen Co-authored-by: Niklas Adolfsson Co-authored-by: Arkadiy Paronyan Co-authored-by: Andronik Co-authored-by: Ankan Co-authored-by: Muharem Ismailov Co-authored-by: Dino Pačandi <3002868+Dinonard@users.noreply.github.com> Co-authored-by: João Paulo Silva de Souza <77391175+joao-paulo-parity@users.noreply.github.com> Co-authored-by: Sasha Gryaznov Co-authored-by: Alexandru Vasile Co-authored-by: Alexander Popiak Co-authored-by: Alexander Samusev <41779041+alvicsam@users.noreply.github.com> Co-authored-by: Davide Galassi --- .gitlab-ci.yml | 9 +- Cargo.lock | 210 +-- Cargo.toml | 1 + README.md | 2 +- bin/node-template/node/Cargo.toml | 2 +- bin/node-template/pallets/template/src/lib.rs | 2 + bin/node-template/runtime/src/lib.rs | 6 +- bin/node/cli/Cargo.toml | 2 +- bin/node/rpc/Cargo.toml | 2 +- bin/node/runtime/Cargo.toml | 4 + bin/node/runtime/src/lib.rs | 37 +- client/beefy/rpc/Cargo.toml | 2 +- client/beefy/rpc/src/lib.rs | 2 +- client/cli/Cargo.toml | 1 + client/cli/src/params/pruning_params.rs |
114 +- client/cli/src/runner.rs | 219 ++- client/consensus/aura/src/import_queue.rs | 7 +- client/consensus/aura/src/lib.rs | 8 +- client/consensus/babe/rpc/Cargo.toml | 2 +- client/consensus/babe/src/aux_schema.rs | 4 +- client/consensus/babe/src/lib.rs | 86 +- client/consensus/babe/src/tests.rs | 10 +- client/consensus/babe/src/verification.rs | 15 +- client/consensus/common/Cargo.toml | 1 + client/consensus/common/src/import_queue.rs | 60 +- .../common/src/import_queue/basic_queue.rs | 92 +- .../common/src/import_queue/buffered_link.rs | 32 +- .../consensus/common/src/import_queue/mock.rs | 46 + client/consensus/manual-seal/Cargo.toml | 2 +- .../manual-seal/src/consensus/babe.rs | 6 +- client/consensus/manual-seal/src/lib.rs | 2 + client/consensus/pow/src/lib.rs | 20 +- client/consensus/pow/src/worker.rs | 34 +- client/consensus/slots/src/lib.rs | 16 +- client/consensus/slots/src/slots.rs | 4 +- client/db/src/lib.rs | 1 + client/finality-grandpa/rpc/Cargo.toml | 2 +- client/finality-grandpa/rpc/src/lib.rs | 2 +- client/finality-grandpa/src/authorities.rs | 20 +- client/finality-grandpa/src/aux_schema.rs | 13 +- .../src/communication/gossip.rs | 47 +- .../finality-grandpa/src/communication/mod.rs | 39 +- .../src/communication/periodic.rs | 8 +- client/finality-grandpa/src/environment.rs | 80 +- client/finality-grandpa/src/finality_proof.rs | 10 +- client/finality-grandpa/src/import.rs | 17 +- client/finality-grandpa/src/lib.rs | 25 +- client/finality-grandpa/src/observer.rs | 6 +- client/finality-grandpa/src/until_imported.rs | 6 +- client/merkle-mountain-range/rpc/Cargo.toml | 2 +- client/network/common/src/sync.rs | 28 +- client/network/src/behaviour.rs | 31 +- client/network/src/config.rs | 7 - client/network/src/lib.rs | 9 +- client/network/src/protocol.rs | 107 +- client/network/src/service.rs | 88 +- client/network/src/service/metrics.rs | 10 - .../network/src/service/tests/chain_sync.rs | 106 +- client/network/src/service/tests/mod.rs | 40 +- client/network/sync/Cargo.toml | 1 + client/network/sync/src/lib.rs | 483 +++--- client/network/sync/src/mock.rs | 14 +- client/network/sync/src/service/chain_sync.rs | 53 + client/network/sync/src/service/mock.rs | 33 +- client/network/sync/src/tests.rs | 3 + client/network/test/src/lib.rs | 12 +- client/rpc-api/Cargo.toml | 2 +- client/rpc-servers/Cargo.toml | 5 +- client/rpc-servers/src/lib.rs | 142 +- client/rpc-servers/src/middleware.rs | 129 +- client/rpc-spec-v2/Cargo.toml | 2 +- client/rpc-spec-v2/src/chain_spec/tests.rs | 2 +- client/rpc/Cargo.toml | 2 +- client/rpc/src/author/tests.rs | 2 +- client/rpc/src/chain/tests.rs | 2 +- client/rpc/src/state/mod.rs | 3 +- client/rpc/src/state/tests.rs | 2 +- client/rpc/src/system/tests.rs | 2 +- client/service/Cargo.toml | 2 +- client/service/src/builder.rs | 6 +- client/service/src/chain_ops/import_blocks.rs | 2 +- client/service/src/lib.rs | 50 +- client/state-db/src/lib.rs | 10 + client/state-db/src/noncanonical.rs | 51 + client/sync-state-rpc/Cargo.toml | 2 +- docs/Upgrading-2.0-to-3.0.md | 4 +- frame/alliance/src/lib.rs | 18 + frame/assets/Cargo.toml | 3 +- frame/assets/src/lib.rs | 30 + frame/atomic-swap/src/lib.rs | 3 + frame/aura/src/lib.rs | 4 +- frame/authorship/src/lib.rs | 1 + frame/babe/src/default_weights.rs | 21 +- frame/babe/src/equivocation.rs | 16 +- frame/babe/src/lib.rs | 5 + frame/bags-list/src/lib.rs | 60 +- frame/balances/README.md | 8 +- frame/balances/src/lib.rs | 32 +- frame/balances/src/tests.rs | 6 +- frame/benchmarking/src/tests.rs | 3 + 
frame/benchmarking/src/tests_instance.rs | 2 + frame/bounties/src/lib.rs | 16 +- frame/child-bounties/src/lib.rs | 7 + frame/collective/src/lib.rs | 7 + frame/collective/src/tests.rs | 1 + frame/contracts/src/exec.rs | 84 +- frame/contracts/src/lib.rs | 31 +- frame/contracts/src/migration.rs | 2 +- frame/contracts/src/tests.rs | 7 +- frame/contracts/src/wasm/mod.rs | 4 +- frame/contracts/src/wasm/runtime.rs | 4 +- frame/conviction-voting/src/benchmarking.rs | 2 +- frame/conviction-voting/src/lib.rs | 12 +- frame/democracy/src/lib.rs | 28 +- frame/democracy/src/tests.rs | 4 +- .../election-provider-multi-phase/src/lib.rs | 18 +- .../election-provider-multi-phase/src/mock.rs | 2 +- .../src/signed.rs | 10 +- frame/election-provider-support/Cargo.toml | 3 +- frame/election-provider-support/src/lib.rs | 19 +- frame/elections-phragmen/src/lib.rs | 16 +- frame/examples/basic/src/lib.rs | 2 + frame/examples/offchain-worker/src/lib.rs | 3 + frame/executive/src/lib.rs | 16 +- frame/fast-unstake/Cargo.toml | 6 - frame/fast-unstake/src/lib.rs | 3 + frame/fast-unstake/src/mock.rs | 4 +- frame/grandpa/src/default_weights.rs | 20 +- frame/grandpa/src/equivocation.rs | 16 +- frame/grandpa/src/lib.rs | 5 + frame/grandpa/src/migrations/v4.rs | 5 +- frame/identity/src/lib.rs | 15 + frame/im-online/src/lib.rs | 1 + frame/indices/src/lib.rs | 5 + frame/lottery/src/lib.rs | 4 + frame/lottery/src/weights.rs | 58 +- frame/membership/src/lib.rs | 7 + .../src/default_weights.rs | 4 +- frame/message-queue/Cargo.toml | 53 + frame/message-queue/src/benchmarking.rs | 205 +++ frame/message-queue/src/integration_test.rs | 225 +++ frame/message-queue/src/lib.rs | 1311 +++++++++++++++++ frame/message-queue/src/mock.rs | 315 ++++ frame/message-queue/src/mock_helpers.rs | 186 +++ frame/message-queue/src/tests.rs | 1092 ++++++++++++++ frame/message-queue/src/weights.rs | 216 +++ frame/multisig/src/lib.rs | 4 + frame/nicks/src/lib.rs | 4 + frame/nis/src/lib.rs | 4 + frame/node-authorization/src/lib.rs | 9 + frame/nomination-pools/Cargo.toml | 2 +- .../nomination-pools/benchmarking/Cargo.toml | 2 - frame/nomination-pools/src/lib.rs | 14 + frame/offences/benchmarking/src/lib.rs | 14 +- frame/offences/benchmarking/src/mock.rs | 4 +- frame/offences/src/mock.rs | 6 +- frame/preimage/src/lib.rs | 4 + frame/proxy/src/lib.rs | 10 + frame/ranked-collective/src/lib.rs | 6 + frame/recovery/src/lib.rs | 9 + frame/referenda/src/lib.rs | 15 +- frame/remark/src/lib.rs | 1 + frame/root-offences/src/lib.rs | 1 + frame/root-testing/src/lib.rs | 1 + frame/scheduler/Cargo.toml | 2 + frame/scheduler/src/lib.rs | 9 +- frame/scheduler/src/mock.rs | 2 + frame/scored-pool/src/lib.rs | 5 + frame/session/src/lib.rs | 2 + frame/society/src/lib.rs | 12 + frame/staking/Cargo.toml | 8 +- frame/staking/src/benchmarking.rs | 11 +- frame/staking/src/mock.rs | 2 +- frame/staking/src/pallet/impls.rs | 83 +- frame/staking/src/pallet/mod.rs | 40 +- frame/staking/src/slashing.rs | 10 +- frame/staking/src/tests.rs | 339 +++-- frame/staking/src/weights.rs | 631 ++++---- frame/state-trie-migration/src/lib.rs | 6 + frame/sudo/src/lib.rs | 4 + frame/sudo/src/mock.rs | 2 + frame/support/src/traits.rs | 7 +- frame/support/src/traits/messages.rs | 202 +++ frame/support/src/traits/tokens/currency.rs | 5 +- .../src/traits/tokens/currency/lockable.rs | 48 +- frame/support/src/traits/tokens/fungible.rs | 1 - frame/support/src/traits/tokens/fungibles.rs | 2 - .../src/traits/tokens/fungibles/lockable.rs | 65 - frame/support/src/weights/block_weights.rs | 9 +- 
.../support/src/weights/extrinsic_weights.rs | 9 +- frame/support/src/weights/paritydb_weights.rs | 12 +- frame/support/src/weights/rocksdb_weights.rs | 12 +- frame/support/test/tests/pallet.rs | 3 + .../test/tests/pallet_compatibility.rs | 1 + .../tests/pallet_compatibility_instance.rs | 1 + frame/support/test/tests/pallet_instance.rs | 2 + ...age_ensure_span_are_ok_on_wrong_gen.stderr | 2 +- ...re_span_are_ok_on_wrong_gen_unnamed.stderr | 2 +- frame/support/test/tests/storage_layers.rs | 1 + frame/system/src/lib.rs | 8 + frame/system/src/limits.rs | 2 +- frame/timestamp/src/lib.rs | 1 + frame/tips/src/lib.rs | 6 + .../asset-tx-payment/Cargo.toml | 4 - .../asset-tx-payment/src/tests.rs | 5 +- frame/transaction-payment/rpc/Cargo.toml | 2 +- frame/transaction-storage/src/lib.rs | 3 + frame/treasury/src/lib.rs | 5 + frame/uniques/src/lib.rs | 26 + frame/utility/src/lib.rs | 6 + frame/utility/src/tests.rs | 4 + frame/vesting/src/lib.rs | 16 +- frame/whitelist/src/lib.rs | 4 + primitives/arithmetic/src/rational.rs | 4 +- primitives/core/src/bounded/bounded_vec.rs | 7 + primitives/core/src/lib.rs | 46 + primitives/runtime/src/traits.rs | 33 +- primitives/staking/Cargo.toml | 2 + primitives/staking/src/lib.rs | 2 + primitives/weights/src/lib.rs | 11 +- primitives/weights/src/weight_meter.rs | 6 + scripts/ci/gitlab/pipeline/build.yml | 6 + scripts/ci/gitlab/pipeline/publish.yml | 4 + scripts/ci/gitlab/pipeline/test.yml | 2 + .../frame/benchmarking-cli/src/block/bench.rs | 4 +- .../benchmarking-cli/src/overhead/README.md | 6 +- .../benchmarking-cli/src/overhead/weights.hbs | 13 +- .../benchmarking-cli/src/storage/README.md | 4 +- .../benchmarking-cli/src/storage/weights.hbs | 12 +- utils/frame/remote-externalities/src/lib.rs | 155 +- utils/frame/rpc/client/Cargo.toml | 2 +- utils/frame/rpc/client/src/lib.rs | 1 + .../rpc/state-trie-migration-rpc/Cargo.toml | 2 +- utils/frame/rpc/support/Cargo.toml | 4 +- utils/frame/rpc/system/Cargo.toml | 2 +- utils/wasm-builder/src/wasm_project.rs | 9 +- .../0000-block-building/block-building.zndsl | 4 +- .../0001-basic-warp-sync/test-warp-sync.zndsl | 8 +- 238 files changed, 7108 insertions(+), 2048 deletions(-) create mode 100644 client/consensus/common/src/import_queue/mock.rs create mode 100644 frame/message-queue/Cargo.toml create mode 100644 frame/message-queue/src/benchmarking.rs create mode 100644 frame/message-queue/src/integration_test.rs create mode 100644 frame/message-queue/src/lib.rs create mode 100644 frame/message-queue/src/mock.rs create mode 100644 frame/message-queue/src/mock_helpers.rs create mode 100644 frame/message-queue/src/tests.rs create mode 100644 frame/message-queue/src/weights.rs create mode 100644 frame/support/src/traits/messages.rs delete mode 100644 frame/support/src/traits/tokens/fungibles/lockable.rs diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml index 25d61cf349615..dcc0cbb7c9693 100644 --- a/.gitlab-ci.yml +++ b/.gitlab-ci.yml @@ -81,8 +81,14 @@ default: paths: - artifacts/ +.job-switcher: + before_script: + - if echo "$CI_DISABLED_JOBS" | grep -xF "$CI_JOB_NAME"; then echo "The job has been cancelled in CI settings"; exit 0; fi + .kubernetes-env: image: "${CI_IMAGE}" + before_script: + - !reference [.job-switcher, before_script] tags: - kubernetes-parity-build @@ -95,6 +101,7 @@ default: .pipeline-stopper-vars: script: + - !reference [.job-switcher, before_script] - echo "Collecting env variables for the cancel-pipeline job" - echo "FAILED_JOB_URL=${CI_JOB_URL}" > pipeline-stopper.env - echo "FAILED_JOB_NAME=${CI_JOB_NAME}" 
>> pipeline-stopper.env @@ -110,6 +117,7 @@ default: before_script: # TODO: remove unset invocation when we'll be free from 'ENV RUSTC_WRAPPER=sccache' & sccache itself in all images - unset RUSTC_WRAPPER + - !reference [.job-switcher, before_script] - !reference [.rust-info-script, script] - !reference [.rusty-cachier, before_script] - !reference [.pipeline-stopper-vars, script] @@ -300,7 +308,6 @@ rusty-cachier-notify: PR_NUM: "${PR_NUM}" trigger: project: "parity/infrastructure/ci_cd/pipeline-stopper" - branch: "as-improve" remove-cancel-pipeline-message: stage: .post diff --git a/Cargo.lock b/Cargo.lock index 360fe12d31ae4..b840cadce3e5c 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -288,7 +288,7 @@ dependencies = [ "futures-sink", "futures-util", "memchr", - "pin-project-lite 0.2.6", + "pin-project-lite 0.2.9", ] [[package]] @@ -2295,7 +2295,7 @@ dependencies = [ "futures-io", "memchr", "parking", - "pin-project-lite 0.2.6", + "pin-project-lite 0.2.9", "waker-fn", ] @@ -2352,7 +2352,7 @@ dependencies = [ "futures-sink", "futures-task", "memchr", - "pin-project-lite 0.2.6", + "pin-project-lite 0.2.9", "pin-utils", "slab", ] @@ -2669,15 +2669,21 @@ dependencies = [ [[package]] name = "http-body" -version = "0.4.2" +version = "0.4.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "60daa14be0e0786db0f03a9e57cb404c9d756eed2b6c62b9ea98ec5743ec75a9" +checksum = "d5f38f16d184e36f2408a55281cd658ecbd3ca05cce6d6510a176eca393e26d1" dependencies = [ "bytes", "http", - "pin-project-lite 0.2.6", + "pin-project-lite 0.2.9", ] +[[package]] +name = "http-range-header" +version = "0.3.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0bfe8eed0a9285ef776bb792479ea3834e8b94e13d615c2f66d03dd50a435a29" + [[package]] name = "httparse" version = "1.8.0" @@ -2712,7 +2718,7 @@ dependencies = [ "httparse", "httpdate", "itoa 1.0.4", - "pin-project-lite 0.2.6", + "pin-project-lite 0.2.9", "socket2", "tokio", "tower-service", @@ -2909,24 +2915,23 @@ dependencies = [ [[package]] name = "jsonrpsee" -version = "0.15.1" +version = "0.16.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8bd0d559d5e679b1ab2f869b486a11182923863b1b3ee8b421763cdd707b783a" +checksum = "7d291e3a5818a2384645fd9756362e6d89cf0541b0b916fa7702ea4a9833608e" dependencies = [ "jsonrpsee-core", - "jsonrpsee-http-server", "jsonrpsee-proc-macros", + "jsonrpsee-server", "jsonrpsee-types", "jsonrpsee-ws-client", - "jsonrpsee-ws-server", "tracing", ] [[package]] name = "jsonrpsee-client-transport" -version = "0.15.1" +version = "0.16.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8752740ecd374bcbf8b69f3e80b0327942df76f793f8d4e60d3355650c31fb74" +checksum = "965de52763f2004bc91ac5bcec504192440f0b568a5d621c59d9dbd6f886c3fb" dependencies = [ "futures-util", "http", @@ -2945,9 +2950,9 @@ dependencies = [ [[package]] name = "jsonrpsee-core" -version = "0.15.1" +version = "0.16.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f3dc3e9cf2ba50b7b1d7d76a667619f82846caa39e8e8daa8a4962d74acaddca" +checksum = "a4e70b4439a751a5de7dd5ed55eacff78ebf4ffe0fc009cb1ebb11417f5b536b" dependencies = [ "anyhow", "arrayvec 0.7.2", @@ -2958,10 +2963,8 @@ dependencies = [ "futures-timer", "futures-util", "globset", - "http", "hyper", "jsonrpsee-types", - "lazy_static", "parking_lot 0.12.1", "rand 0.8.5", "rustc-hash", @@ -2971,45 +2974,48 @@ dependencies = [ "thiserror", "tokio", "tracing", - "tracing-futures", - "unicase", ] 
[[package]] -name = "jsonrpsee-http-server" -version = "0.15.1" +name = "jsonrpsee-proc-macros" +version = "0.16.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "baa6da1e4199c10d7b1d0a6e5e8bd8e55f351163b6f4b3cbb044672a69bd4c1c" +dependencies = [ + "heck", + "proc-macro-crate", + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "jsonrpsee-server" +version = "0.16.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "03802f0373a38c2420c70b5144742d800b509e2937edc4afb116434f07120117" +checksum = "1fb69dad85df79527c019659a992498d03f8495390496da2f07e6c24c2b356fc" dependencies = [ "futures-channel", "futures-util", + "http", "hyper", "jsonrpsee-core", "jsonrpsee-types", "serde", "serde_json", + "soketto", "tokio", + "tokio-stream", + "tokio-util", + "tower", "tracing", - "tracing-futures", -] - -[[package]] -name = "jsonrpsee-proc-macros" -version = "0.15.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bd67957d4280217247588ac86614ead007b301ca2fa9f19c19f880a536f029e3" -dependencies = [ - "proc-macro-crate", - "proc-macro2", - "quote", - "syn", ] [[package]] name = "jsonrpsee-types" -version = "0.15.1" +version = "0.16.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e290bba767401b646812f608c099b922d8142603c9e73a50fb192d3ac86f4a0d" +checksum = "5bd522fe1ce3702fd94812965d7bb7a3364b1c9aba743944c5a00529aae80f8c" dependencies = [ "anyhow", "beef", @@ -3021,9 +3027,9 @@ dependencies = [ [[package]] name = "jsonrpsee-ws-client" -version = "0.15.1" +version = "0.16.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6ee5feddd5188e62ac08fcf0e56478138e581509d4730f3f7be9b57dd402a4ff" +checksum = "0b83daeecfc6517cfe210df24e570fb06213533dfb990318fae781f4c7119dd9" dependencies = [ "http", "jsonrpsee-client-transport", @@ -3031,26 +3037,6 @@ dependencies = [ "jsonrpsee-types", ] -[[package]] -name = "jsonrpsee-ws-server" -version = "0.15.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d488ba74fb369e5ab68926feb75a483458b88e768d44319f37e4ecad283c7325" -dependencies = [ - "futures-channel", - "futures-util", - "http", - "jsonrpsee-core", - "jsonrpsee-types", - "serde_json", - "soketto", - "tokio", - "tokio-stream", - "tokio-util", - "tracing", - "tracing-futures", -] - [[package]] name = "k256" version = "0.11.5" @@ -3119,6 +3105,7 @@ dependencies = [ "pallet-indices", "pallet-lottery", "pallet-membership", + "pallet-message-queue", "pallet-mmr", "pallet-multisig", "pallet-nis", @@ -5218,7 +5205,6 @@ dependencies = [ "frame-support", "frame-system", "log", - "pallet-assets", "pallet-balances", "pallet-staking", "pallet-staking-reward-curve", @@ -5351,6 +5337,28 @@ dependencies = [ "sp-std", ] +[[package]] +name = "pallet-message-queue" +version = "7.0.0-dev" +dependencies = [ + "frame-benchmarking", + "frame-support", + "frame-system", + "log", + "parity-scale-codec", + "rand 0.8.5", + "rand_distr", + "scale-info", + "serde", + "sp-arithmetic", + "sp-core", + "sp-io", + "sp-runtime", + "sp-std", + "sp-tracing", + "sp-weights", +] + [[package]] name = "pallet-mmr" version = "4.0.0-dev" @@ -5738,6 +5746,7 @@ dependencies = [ "sp-io", "sp-runtime", "sp-std", + "sp-weights", "substrate-test-utils", ] @@ -6339,9 +6348,9 @@ checksum = "257b64915a082f7811703966789728173279bdebb956b143dbcd23f6f970a777" [[package]] name = "pin-project-lite" -version = "0.2.6" +version = "0.2.9" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "dc0e1f259c92177c30a4c9d177246edd0a3568b25756a977d0632cf8fa37e905" +checksum = "e0a7ae3ac2f1173085d398531c705756c94a4c56843785df85a60c1a0afac116" [[package]] name = "pin-utils" @@ -7232,6 +7241,7 @@ dependencies = [ "clap 4.0.11", "fdlimit", "futures", + "futures-timer", "libp2p", "log", "names", @@ -7333,6 +7343,7 @@ dependencies = [ "futures-timer", "libp2p", "log", + "mockall", "parking_lot 0.12.1", "sc-client-api", "sc-utils", @@ -7934,6 +7945,7 @@ dependencies = [ "sp-runtime", "sp-test-primitives", "sp-tracing", + "substrate-prometheus-endpoint", "substrate-test-runtime-client", "thiserror", "tokio", @@ -8112,11 +8124,14 @@ name = "sc-rpc-server" version = "4.0.0-dev" dependencies = [ "futures", + "http", "jsonrpsee", "log", "serde_json", "substrate-prometheus-endpoint", "tokio", + "tower", + "tower-http", ] [[package]] @@ -8426,9 +8441,9 @@ dependencies = [ [[package]] name = "scale-info" -version = "2.1.1" +version = "2.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8980cafbe98a7ee7a9cc16b32ebce542c77883f512d83fbf2ddc8f6a85ea74c9" +checksum = "333af15b02563b8182cd863f925bd31ef8fa86a0e095d30c091956057d436153" dependencies = [ "bitvec", "cfg-if", @@ -8440,9 +8455,9 @@ dependencies = [ [[package]] name = "scale-info-derive" -version = "2.1.1" +version = "2.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4260c630e8a8a33429d1688eff2f163f24c65a4e1b1578ef6b565061336e4b6f" +checksum = "53f56acbd0743d29ffa08f911ab5397def774ad01bab3786804cf6ee057fb5e1" dependencies = [ "proc-macro-crate", "proc-macro2", @@ -8599,9 +8614,9 @@ checksum = "388a1df253eca08550bef6c72392cfe7c30914bf41df5269b68cbd6ff8f570a3" [[package]] name = "serde" -version = "1.0.136" +version = "1.0.145" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ce31e24b01e1e524df96f1c2fdd054405f8d7376249a5110886fb4b658484789" +checksum = "728eb6351430bccb993660dfffc5a72f91ccc1295abaa8ce19b27ebe4f75568b" dependencies = [ "serde_derive", ] @@ -8618,9 +8633,9 @@ dependencies = [ [[package]] name = "serde_derive" -version = "1.0.136" +version = "1.0.145" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "08597e7152fcd306f41838ed3e37be9eaeed2b61c42e2117266a554fab4662f9" +checksum = "81fa1584d3d1bcacd84c277a0dfe21f5b0f6accf4a23d04d4c6d61f1af522b4c" dependencies = [ "proc-macro2", "quote", @@ -8824,6 +8839,7 @@ dependencies = [ "bytes", "flate2", "futures", + "http", "httparse", "log", "rand 0.8.5", @@ -9487,6 +9503,7 @@ version = "4.0.0-dev" dependencies = [ "parity-scale-codec", "scale-info", + "sp-core", "sp-runtime", "sp-std", ] @@ -10299,7 +10316,7 @@ dependencies = [ "mio", "num_cpus", "parking_lot 0.12.1", - "pin-project-lite 0.2.6", + "pin-project-lite 0.2.9", "signal-hook-registry", "socket2", "tokio-macros", @@ -10335,7 +10352,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7b2f3f698253f03119ac0102beaa64f67a67e08074d03a22d18784104543727f" dependencies = [ "futures-core", - "pin-project-lite 0.2.6", + "pin-project-lite 0.2.9", "tokio", ] @@ -10362,7 +10379,7 @@ dependencies = [ "futures-core", "futures-io", "futures-sink", - "pin-project-lite 0.2.6", + "pin-project-lite 0.2.9", "tokio", "tracing", ] @@ -10376,6 +10393,41 @@ dependencies = [ "serde", ] +[[package]] +name = "tower" +version = "0.4.13" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"b8fa9be0de6cf49e536ce1851f987bd21a43b771b09473c3549a6c853db37c1c" +dependencies = [ + "tower-layer", + "tower-service", + "tracing", +] + +[[package]] +name = "tower-http" +version = "0.3.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f873044bf02dd1e8239e9c1293ea39dad76dc594ec16185d0a1bf31d8dc8d858" +dependencies = [ + "bitflags", + "bytes", + "futures-core", + "futures-util", + "http", + "http-body", + "http-range-header", + "pin-project-lite 0.2.9", + "tower-layer", + "tower-service", +] + +[[package]] +name = "tower-layer" +version = "0.3.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c20c8dbed6283a09604c3e69b4b7eeb54e298b8a600d4d5ecb5ad39de609f1d0" + [[package]] name = "tower-service" version = "0.3.1" @@ -10389,7 +10441,8 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a400e31aa60b9d44a52a8ee0343b5b18566b03a8321e0d321f695cf56e940160" dependencies = [ "cfg-if", - "pin-project-lite 0.2.6", + "log", + "pin-project-lite 0.2.9", "tracing-attributes", "tracing-core", ] @@ -10681,15 +10734,6 @@ dependencies = [ "static_assertions", ] -[[package]] -name = "unicase" -version = "2.6.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "50f37be617794602aabbeee0be4f259dc1778fabe05e2d67ee8f79326d5cb4f6" -dependencies = [ - "version_check", -] - [[package]] name = "unicode-bidi" version = "0.3.4" diff --git a/Cargo.toml b/Cargo.toml index 12f2ced0d1d03..eb78d5e104486 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -121,6 +121,7 @@ members = [ "frame/offences/benchmarking", "frame/preimage", "frame/proxy", + "frame/message-queue", "frame/nomination-pools", "frame/nomination-pools/fuzzer", "frame/nomination-pools/benchmarking", diff --git a/README.md b/README.md index 02f8a7591acc5..7d8c7e575581c 100644 --- a/README.md +++ b/README.md @@ -12,7 +12,7 @@ Then try out one of the [tutorials](https://docs.substrate.io/tutorials/). ## Community & Support -Join the highly active and supportive community on the [Susbstrate Stack Exchange](https://substrate.stackexchange.com/) to ask questions about use and problems you run into using this software. +Join the highly active and supportive community on the [Substrate Stack Exchange](https://substrate.stackexchange.com/) to ask questions about use and problems you run into using this software. Please do report bugs and [issues here](https://github.com/paritytech/substrate/issues) for anything you suspect requires action in the source. 
## Contributions & Code of Conduct diff --git a/bin/node-template/node/Cargo.toml b/bin/node-template/node/Cargo.toml index 3d9c6e5df9546..364cfa25d3c6b 100644 --- a/bin/node-template/node/Cargo.toml +++ b/bin/node-template/node/Cargo.toml @@ -44,7 +44,7 @@ frame-system = { version = "4.0.0-dev", path = "../../../frame/system" } pallet-transaction-payment = { version = "4.0.0-dev", default-features = false, path = "../../../frame/transaction-payment" } # These dependencies are used for the node template's RPCs -jsonrpsee = { version = "0.15.1", features = ["server"] } +jsonrpsee = { version = "0.16.2", features = ["server"] } sc-rpc = { version = "4.0.0-dev", path = "../../../client/rpc" } sp-api = { version = "4.0.0-dev", path = "../../../primitives/api" } sc-rpc-api = { version = "0.10.0-dev", path = "../../../client/rpc-api" } diff --git a/bin/node-template/pallets/template/src/lib.rs b/bin/node-template/pallets/template/src/lib.rs index 28d36ac2c6321..4630e344add31 100644 --- a/bin/node-template/pallets/template/src/lib.rs +++ b/bin/node-template/pallets/template/src/lib.rs @@ -64,6 +64,7 @@ pub mod pallet { impl Pallet { /// An example dispatchable that takes a singles value as a parameter, writes the value to /// storage and emits an event. This function must be dispatched by a signed extrinsic. + #[pallet::call_index(0)] #[pallet::weight(10_000 + T::DbWeight::get().writes(1).ref_time())] pub fn do_something(origin: OriginFor, something: u32) -> DispatchResult { // Check that the extrinsic was signed and get the signer. @@ -81,6 +82,7 @@ pub mod pallet { } /// An example dispatchable that may throw a custom error. + #[pallet::call_index(1)] #[pallet::weight(10_000 + T::DbWeight::get().reads_writes(1,1).ref_time())] pub fn cause_error(origin: OriginFor) -> DispatchResult { let _who = ensure_signed(origin)?; diff --git a/bin/node-template/runtime/src/lib.rs b/bin/node-template/runtime/src/lib.rs index 37f84bf51bf3e..f76b2c449ee4a 100644 --- a/bin/node-template/runtime/src/lib.rs +++ b/bin/node-template/runtime/src/lib.rs @@ -32,7 +32,9 @@ pub use frame_support::{ ConstU128, ConstU32, ConstU64, ConstU8, KeyOwnerProofSystem, Randomness, StorageInfo, }, weights::{ - constants::{BlockExecutionWeight, ExtrinsicBaseWeight, RocksDbWeight, WEIGHT_PER_SECOND}, + constants::{ + BlockExecutionWeight, ExtrinsicBaseWeight, RocksDbWeight, WEIGHT_REF_TIME_PER_SECOND, + }, IdentityFee, Weight, }, StorageValue, @@ -141,7 +143,7 @@ parameter_types! { /// We allow for 2 seconds of compute with a 6 second average block time. 
pub BlockWeights: frame_system::limits::BlockWeights = frame_system::limits::BlockWeights::with_sensible_defaults( - (2u64 * WEIGHT_PER_SECOND).set_proof_size(u64::MAX), + Weight::from_parts(2u64 * WEIGHT_REF_TIME_PER_SECOND, u64::MAX), NORMAL_DISPATCH_RATIO, ); pub BlockLength: frame_system::limits::BlockLength = frame_system::limits::BlockLength diff --git a/bin/node/cli/Cargo.toml b/bin/node/cli/Cargo.toml index 3c492a56f869f..6b50115fd9a00 100644 --- a/bin/node/cli/Cargo.toml +++ b/bin/node/cli/Cargo.toml @@ -39,7 +39,7 @@ array-bytes = "4.1" clap = { version = "4.0.9", features = ["derive"], optional = true } codec = { package = "parity-scale-codec", version = "3.0.0" } serde = { version = "1.0.136", features = ["derive"] } -jsonrpsee = { version = "0.15.1", features = ["server"] } +jsonrpsee = { version = "0.16.2", features = ["server"] } futures = "0.3.21" log = "0.4.17" rand = "0.8" diff --git a/bin/node/rpc/Cargo.toml b/bin/node/rpc/Cargo.toml index 9d2810413613f..f34922a287dfe 100644 --- a/bin/node/rpc/Cargo.toml +++ b/bin/node/rpc/Cargo.toml @@ -13,7 +13,7 @@ publish = false targets = ["x86_64-unknown-linux-gnu"] [dependencies] -jsonrpsee = { version = "0.15.1", features = ["server"] } +jsonrpsee = { version = "0.16.2", features = ["server"] } node-primitives = { version = "2.0.0", path = "../primitives" } pallet-transaction-payment-rpc = { version = "4.0.0-dev", path = "../../../frame/transaction-payment/rpc/" } mmr-rpc = { version = "4.0.0-dev", path = "../../../client/merkle-mountain-range/rpc/" } diff --git a/bin/node/runtime/Cargo.toml b/bin/node/runtime/Cargo.toml index 02a2ae292d83e..477545c9ac332 100644 --- a/bin/node/runtime/Cargo.toml +++ b/bin/node/runtime/Cargo.toml @@ -75,6 +75,7 @@ pallet-indices = { version = "4.0.0-dev", default-features = false, path = "../. 
pallet-identity = { version = "4.0.0-dev", default-features = false, path = "../../../frame/identity" } pallet-lottery = { version = "4.0.0-dev", default-features = false, path = "../../../frame/lottery" } pallet-membership = { version = "4.0.0-dev", default-features = false, path = "../../../frame/membership" } +pallet-message-queue = { version = "7.0.0-dev", default-features = false, path = "../../../frame/message-queue" } pallet-mmr = { version = "4.0.0-dev", default-features = false, path = "../../../frame/merkle-mountain-range" } pallet-multisig = { version = "4.0.0-dev", default-features = false, path = "../../../frame/multisig" } pallet-nomination-pools = { version = "1.0.0", default-features = false, path = "../../../frame/nomination-pools"} @@ -150,6 +151,7 @@ std = [ "sp-inherents/std", "pallet-lottery/std", "pallet-membership/std", + "pallet-message-queue/std", "pallet-mmr/std", "pallet-multisig/std", "pallet-nomination-pools/std", @@ -229,6 +231,7 @@ runtime-benchmarks = [ "pallet-indices/runtime-benchmarks", "pallet-lottery/runtime-benchmarks", "pallet-membership/runtime-benchmarks", + "pallet-message-queue/runtime-benchmarks", "pallet-mmr/runtime-benchmarks", "pallet-multisig/runtime-benchmarks", "pallet-nomination-pools-benchmarking/runtime-benchmarks", @@ -282,6 +285,7 @@ try-runtime = [ "pallet-identity/try-runtime", "pallet-lottery/try-runtime", "pallet-membership/try-runtime", + "pallet-message-queue/try-runtime", "pallet-mmr/try-runtime", "pallet-multisig/try-runtime", "pallet-nomination-pools/try-runtime", diff --git a/bin/node/runtime/src/lib.rs b/bin/node/runtime/src/lib.rs index d971eae135620..0e3bee8821fc2 100644 --- a/bin/node/runtime/src/lib.rs +++ b/bin/node/runtime/src/lib.rs @@ -32,13 +32,15 @@ use frame_support::{ pallet_prelude::Get, parameter_types, traits::{ - fungible::ItemOf, fungibles, AsEnsureOriginWithArg, ConstBool, ConstU128, ConstU16, - ConstU32, Currency, EitherOfDiverse, EqualPrivilegeOnly, Everything, Imbalance, - InstanceFilter, KeyOwnerProofSystem, Nothing, OnUnbalanced, U128CurrencyToVote, + fungible::ItemOf, AsEnsureOriginWithArg, ConstBool, ConstU128, ConstU16, ConstU32, + Currency, EitherOfDiverse, EqualPrivilegeOnly, Everything, Imbalance, InstanceFilter, + KeyOwnerProofSystem, LockIdentifier, Nothing, OnUnbalanced, U128CurrencyToVote, WithdrawReasons, }, weights::{ - constants::{BlockExecutionWeight, ExtrinsicBaseWeight, RocksDbWeight, WEIGHT_PER_SECOND}, + constants::{ + BlockExecutionWeight, ExtrinsicBaseWeight, RocksDbWeight, WEIGHT_REF_TIME_PER_SECOND, + }, ConstantMultiplier, IdentityFee, Weight, }, PalletId, RuntimeDebug, @@ -173,7 +175,8 @@ const AVERAGE_ON_INITIALIZE_RATIO: Perbill = Perbill::from_percent(10); /// by Operational extrinsics. const NORMAL_DISPATCH_RATIO: Perbill = Perbill::from_percent(75); /// We allow for 2 seconds of compute with a 6 second average block time, with maximum proof size. -const MAXIMUM_BLOCK_WEIGHT: Weight = WEIGHT_PER_SECOND.saturating_mul(2).set_proof_size(u64::MAX); +const MAXIMUM_BLOCK_WEIGHT: Weight = + Weight::from_parts(WEIGHT_REF_TIME_PER_SECOND.saturating_mul(2), u64::MAX); parameter_types! { pub const BlockHashCount: BlockNumber = 2400; @@ -1003,7 +1006,7 @@ parameter_types! 
{ pub const DesiredRunnersUp: u32 = 7; pub const MaxVoters: u32 = 10 * 1000; pub const MaxCandidates: u32 = 1000; - pub const ElectionsPhragmenPalletId: fungibles::LockIdentifier = *b"phrelect"; + pub const ElectionsPhragmenPalletId: LockIdentifier = *b"phrelect"; } // Make sure that there are no more than `MaxMembers` members elected via elections-phragmen. @@ -1132,6 +1135,25 @@ impl pallet_bounties::Config for Runtime { type ChildBountyManager = ChildBounties; } +parameter_types! { + /// Allocate at most 20% of each block for message processing. + /// + /// Is set to 20% since the scheduler can already consume a maximum of 80%. + pub MessageQueueServiceWeight: Option = Some(Perbill::from_percent(20) * RuntimeBlockWeights::get().max_block); +} + +impl pallet_message_queue::Config for Runtime { + type RuntimeEvent = RuntimeEvent; + type WeightInfo = (); + /// NOTE: Always set this to `NoopMessageProcessor` for benchmarking. + type MessageProcessor = pallet_message_queue::mock_helpers::NoopMessageProcessor; + type Size = u32; + type QueueChangeHandler = (); + type HeapSize = ConstU32<{ 64 * 1024 }>; + type MaxStale = ConstU32<128>; + type ServiceWeight = MessageQueueServiceWeight; +} + parameter_types! { pub const ChildBountyValueMinimum: Balance = 1 * DOLLARS; } @@ -1194,6 +1216,7 @@ impl pallet_contracts::Config for Runtime { type MaxCodeLen = ConstU32<{ 128 * 1024 }>; type MaxStorageKeyLen = ConstU32<128>; type UnsafeUnstableInterface = ConstBool; + type MaxDebugBufferLen = ConstU32<{ 2 * 1024 * 1024 }>; } impl pallet_sudo::Config for Runtime { @@ -1696,6 +1719,7 @@ construct_runtime!( RankedPolls: pallet_referenda::, RankedCollective: pallet_ranked_collective, FastUnstake: pallet_fast_unstake, + MessageQueue: pallet_message_queue, } ); @@ -1790,6 +1814,7 @@ mod benches { [pallet_indices, Indices] [pallet_lottery, Lottery] [pallet_membership, TechnicalMembership] + [pallet_message_queue, MessageQueue] [pallet_mmr, Mmr] [pallet_multisig, Multisig] [pallet_nomination_pools, NominationPoolsBench::] diff --git a/client/beefy/rpc/Cargo.toml b/client/beefy/rpc/Cargo.toml index d27225824539a..f5b5770153477 100644 --- a/client/beefy/rpc/Cargo.toml +++ b/client/beefy/rpc/Cargo.toml @@ -11,7 +11,7 @@ homepage = "https://substrate.io" [dependencies] codec = { package = "parity-scale-codec", version = "3.0.0", features = ["derive"] } futures = "0.3.21" -jsonrpsee = { version = "0.15.1", features = ["server", "macros"] } +jsonrpsee = { version = "0.16.2", features = ["client-core", "server", "macros"] } log = "0.4" parking_lot = "0.12.1" serde = { version = "1.0.136", features = ["derive"] } diff --git a/client/beefy/rpc/src/lib.rs b/client/beefy/rpc/src/lib.rs index d29ed433c38db..59a133b86214e 100644 --- a/client/beefy/rpc/src/lib.rs +++ b/client/beefy/rpc/src/lib.rs @@ -172,7 +172,7 @@ mod tests { }; use beefy_primitives::{known_payloads, Payload, SignedCommitment}; use codec::{Decode, Encode}; - use jsonrpsee::{types::EmptyParams, RpcModule}; + use jsonrpsee::{types::EmptyServerParams as EmptyParams, RpcModule}; use sp_runtime::traits::{BlakeTwo256, Hash}; use substrate_test_runtime_client::runtime::Block; diff --git a/client/cli/Cargo.toml b/client/cli/Cargo.toml index 2f079a0c7c56f..fd84ff4d4574b 100644 --- a/client/cli/Cargo.toml +++ b/client/cli/Cargo.toml @@ -49,6 +49,7 @@ sp-version = { version = "5.0.0", path = "../../primitives/version" } [dev-dependencies] tempfile = "3.1.0" +futures-timer = "3.0.1" [features] default = ["rocksdb"] diff --git a/client/cli/src/params/pruning_params.rs 
b/client/cli/src/params/pruning_params.rs index 2da1de919771c..7e50f53d7169a 100644 --- a/client/cli/src/params/pruning_params.rs +++ b/client/cli/src/params/pruning_params.rs @@ -23,57 +23,93 @@ use sc_service::{BlocksPruning, PruningMode}; /// Parameters to define the pruning mode #[derive(Debug, Clone, PartialEq, Args)] pub struct PruningParams { - /// Specify the state pruning mode, a number of blocks to keep or 'archive'. + /// Specify the state pruning mode. /// - /// Default is to keep only the last 256 blocks, - /// otherwise, the state can be kept for all of the blocks (i.e 'archive'), - /// or for all of the canonical blocks (i.e 'archive-canonical'). - #[arg(alias = "pruning", long, value_name = "PRUNING_MODE")] - pub state_pruning: Option, - /// Specify the blocks pruning mode, a number of blocks to keep or 'archive'. + /// This mode specifies when the block's state (ie, storage) + /// should be pruned (ie, removed) from the database. /// - /// Default is to keep all finalized blocks. - /// otherwise, all blocks can be kept (i.e 'archive'), - /// or for all canonical blocks (i.e 'archive-canonical'), - /// or for the last N blocks (i.e a number). + /// Possible values: + /// 'archive' Keep the state of all blocks. + /// 'archive-canonical' Keep only the state of finalized blocks. + /// number Keep the state of the last number of finalized blocks. + #[arg(alias = "pruning", long, value_name = "PRUNING_MODE", default_value = "256")] + pub state_pruning: DatabasePruningMode, + /// Specify the blocks pruning mode. /// - /// NOTE: only finalized blocks are subject for removal! - #[arg(alias = "keep-blocks", long, value_name = "COUNT")] - pub blocks_pruning: Option, + /// This mode specifies when the block's body (including justifications) + /// should be pruned (ie, removed) from the database. + /// + /// Possible values: + /// 'archive' Keep all blocks. + /// 'archive-canonical' Keep only finalized blocks. + /// number Keep the last `number` of finalized blocks. + #[arg( + alias = "keep-blocks", + long, + value_name = "PRUNING_MODE", + default_value = "archive-canonical" + )] + pub blocks_pruning: DatabasePruningMode, } impl PruningParams { /// Get the pruning value from the parameters pub fn state_pruning(&self) -> error::Result> { - self.state_pruning - .as_ref() - .map(|s| match s.as_str() { - "archive" => Ok(PruningMode::ArchiveAll), - "archive-canonical" => Ok(PruningMode::ArchiveCanonical), - bc => bc - .parse() - .map_err(|_| { - error::Error::Input("Invalid state pruning mode specified".to_string()) - }) - .map(PruningMode::blocks_pruning), - }) - .transpose() + Ok(Some(self.state_pruning.into())) } /// Get the block pruning value from the parameters pub fn blocks_pruning(&self) -> error::Result { - match self.blocks_pruning.as_ref() { - Some(bp) => match bp.as_str() { - "archive" => Ok(BlocksPruning::KeepAll), - "archive-canonical" => Ok(BlocksPruning::KeepFinalized), - bc => bc - .parse() - .map_err(|_| { - error::Error::Input("Invalid blocks pruning mode specified".to_string()) - }) - .map(BlocksPruning::Some), - }, - None => Ok(BlocksPruning::KeepFinalized), + Ok(self.blocks_pruning.into()) + } +} + +/// Specifies the pruning mode of the database. +/// +/// This specifies when the block's data (either state via `--state-pruning` +/// or body via `--blocks-pruning`) should be pruned (ie, removed) from +/// the database. +#[derive(Debug, Clone, Copy, PartialEq)] +pub enum DatabasePruningMode { + /// Keep the data of all blocks. 
+ Archive, + /// Keep only the data of finalized blocks. + ArchiveCanonical, + /// Keep the data of the last number of finalized blocks. + Custom(u32), +} + +impl std::str::FromStr for DatabasePruningMode { + type Err = String; + + fn from_str(input: &str) -> Result { + match input { + "archive" => Ok(Self::Archive), + "archive-canonical" => Ok(Self::ArchiveCanonical), + bc => bc + .parse() + .map_err(|_| "Invalid pruning mode specified".to_string()) + .map(Self::Custom), + } + } +} + +impl Into for DatabasePruningMode { + fn into(self) -> PruningMode { + match self { + DatabasePruningMode::Archive => PruningMode::ArchiveAll, + DatabasePruningMode::ArchiveCanonical => PruningMode::ArchiveCanonical, + DatabasePruningMode::Custom(n) => PruningMode::blocks_pruning(n), + } + } +} + +impl Into for DatabasePruningMode { + fn into(self) -> BlocksPruning { + match self { + DatabasePruningMode::Archive => BlocksPruning::KeepAll, + DatabasePruningMode::ArchiveCanonical => BlocksPruning::KeepFinalized, + DatabasePruningMode::Custom(n) => BlocksPruning::Some(n), } } } diff --git a/client/cli/src/runner.rs b/client/cli/src/runner.rs index f6edd8444735a..d4191feddfa90 100644 --- a/client/cli/src/runner.rs +++ b/client/cli/src/runner.rs @@ -22,7 +22,7 @@ use futures::{future, future::FutureExt, pin_mut, select, Future}; use log::info; use sc_service::{Configuration, Error as ServiceError, TaskManager}; use sc_utils::metrics::{TOKIO_THREADS_ALIVE, TOKIO_THREADS_TOTAL}; -use std::marker::PhantomData; +use std::{marker::PhantomData, time::Duration}; #[cfg(target_family = "unix")] async fn main(func: F) -> std::result::Result<(), E> @@ -145,9 +145,19 @@ impl Runner { E: std::error::Error + Send + Sync + 'static + From, { self.print_node_infos(); + let mut task_manager = self.tokio_runtime.block_on(initialize(self.config))?; let res = self.tokio_runtime.block_on(main(task_manager.future().fuse())); - Ok(res?) + // We need to drop the task manager here to inform all tasks that they should shut down. + // + // This is important to be done before we instruct the tokio runtime to shutdown. Otherwise + // the tokio runtime will wait the full 60 seconds for all tasks to stop. + drop(task_manager); + + // Give all futures 60 seconds to shutdown, before tokio "leaks" them. + self.tokio_runtime.shutdown_timeout(Duration::from_secs(60)); + + res.map_err(Into::into) } /// A helper function that runs a command with the configuration of this node. 
@@ -204,3 +214,208 @@ pub fn print_node_infos(config: &Configuration) { ); info!("⛓ Native runtime: {}", C::native_runtime_version(&config.chain_spec)); } + +#[cfg(test)] +mod tests { + use std::{ + path::PathBuf, + sync::atomic::{AtomicU64, Ordering}, + }; + + use sc_network::config::NetworkConfiguration; + use sc_service::{Arc, ChainType, GenericChainSpec, NoExtension}; + use sp_runtime::create_runtime_str; + use sp_version::create_apis_vec; + + use super::*; + + struct Cli; + + impl SubstrateCli for Cli { + fn author() -> String { + "test".into() + } + + fn impl_name() -> String { + "yep".into() + } + + fn impl_version() -> String { + "version".into() + } + + fn description() -> String { + "desc".into() + } + + fn support_url() -> String { + "no.pe".into() + } + + fn copyright_start_year() -> i32 { + 2042 + } + + fn load_spec( + &self, + _: &str, + ) -> std::result::Result, String> { + Err("nope".into()) + } + + fn native_runtime_version( + _: &Box, + ) -> &'static sp_version::RuntimeVersion { + const VERSION: sp_version::RuntimeVersion = sp_version::RuntimeVersion { + spec_name: create_runtime_str!("spec"), + impl_name: create_runtime_str!("name"), + authoring_version: 0, + spec_version: 0, + impl_version: 0, + apis: create_apis_vec!([]), + transaction_version: 2, + state_version: 0, + }; + + &VERSION + } + } + + fn create_runner() -> Runner { + let runtime = build_runtime().unwrap(); + + let runner = Runner::new( + Configuration { + impl_name: "spec".into(), + impl_version: "3".into(), + role: sc_service::Role::Authority, + tokio_handle: runtime.handle().clone(), + transaction_pool: Default::default(), + network: NetworkConfiguration::new_memory(), + keystore: sc_service::config::KeystoreConfig::InMemory, + keystore_remote: None, + database: sc_client_db::DatabaseSource::ParityDb { path: PathBuf::from("db") }, + trie_cache_maximum_size: None, + state_pruning: None, + blocks_pruning: sc_client_db::BlocksPruning::KeepAll, + chain_spec: Box::new(GenericChainSpec::from_genesis( + "test", + "test_id", + ChainType::Development, + || unimplemented!("Not required in tests"), + Vec::new(), + None, + None, + None, + None, + NoExtension::None, + )), + wasm_method: Default::default(), + wasm_runtime_overrides: None, + execution_strategies: Default::default(), + rpc_http: None, + rpc_ws: None, + rpc_ipc: None, + rpc_ws_max_connections: None, + rpc_cors: None, + rpc_methods: Default::default(), + rpc_max_payload: None, + rpc_max_request_size: None, + rpc_max_response_size: None, + rpc_id_provider: None, + rpc_max_subs_per_conn: None, + ws_max_out_buffer_capacity: None, + prometheus_config: None, + telemetry_endpoints: None, + default_heap_pages: None, + offchain_worker: Default::default(), + force_authoring: false, + disable_grandpa: false, + dev_key_seed: None, + tracing_targets: None, + tracing_receiver: Default::default(), + max_runtime_instances: 8, + announce_block: true, + base_path: None, + informant_output_format: Default::default(), + runtime_cache_size: 2, + }, + runtime, + ) + .unwrap(); + + runner + } + + #[test] + fn ensure_run_until_exit_informs_tasks_to_end() { + let runner = create_runner(); + + let counter = Arc::new(AtomicU64::new(0)); + let counter2 = counter.clone(); + + runner + .run_node_until_exit(move |cfg| async move { + let task_manager = TaskManager::new(cfg.tokio_handle.clone(), None).unwrap(); + let (sender, receiver) = futures::channel::oneshot::channel(); + + // We need to use `spawn_blocking` here so that we get a dedicated thread for our + // future. 
This is important for this test, as otherwise tokio can just "drop" the + // future. + task_manager.spawn_handle().spawn_blocking("test", None, async move { + let _ = sender.send(()); + loop { + counter2.fetch_add(1, Ordering::Relaxed); + futures_timer::Delay::new(Duration::from_millis(50)).await; + } + }); + + task_manager.spawn_essential_handle().spawn_blocking("test2", None, async { + // Let's stop this essential task directly when our other task started. + // It will signal that the task manager should end. + let _ = receiver.await; + }); + + Ok::<_, sc_service::Error>(task_manager) + }) + .unwrap_err(); + + let count = counter.load(Ordering::Relaxed); + + // Ensure that our counting task was running for less than 30 seconds. + // It should be directly killed, but for CI and whatever we are being a little bit more + // "relaxed". + assert!((count as u128) < (Duration::from_secs(30).as_millis() / 50)); + } + + /// This test ensures that `run_node_until_exit` aborts waiting for "stuck" tasks after 60 + /// seconds, aka doesn't wait until they are finished (which may never happen). + #[test] + fn ensure_run_until_exit_is_not_blocking_indefinitely() { + let runner = create_runner(); + + runner + .run_node_until_exit(move |cfg| async move { + let task_manager = TaskManager::new(cfg.tokio_handle.clone(), None).unwrap(); + let (sender, receiver) = futures::channel::oneshot::channel(); + + // We need to use `spawn_blocking` here so that we get a dedicated thread for our + // future. This future is more blocking code that will never end. + task_manager.spawn_handle().spawn_blocking("test", None, async move { + let _ = sender.send(()); + loop { + std::thread::sleep(Duration::from_secs(30)); + } + }); + + task_manager.spawn_essential_handle().spawn_blocking("test2", None, async { + // Let's stop this essential task directly when our other task started. + // It will signal that the task manager should end. + let _ = receiver.await; + }); + + Ok::<_, sc_service::Error>(task_manager) + }) + .unwrap_err(); + } +} diff --git a/client/consensus/aura/src/import_queue.rs b/client/consensus/aura/src/import_queue.rs index 07f982542c95b..d5cf40f33359e 100644 --- a/client/consensus/aura/src/import_queue.rs +++ b/client/consensus/aura/src/import_queue.rs @@ -20,6 +20,7 @@ use crate::{ aura_err, authorities, find_pre_digest, slot_author, AuthorityId, CompatibilityMode, Error, + LOG_TARGET, }; use codec::{Codec, Decode, Encode}; use log::{debug, info, trace}; @@ -88,7 +89,7 @@ where .map_err(Error::Client)? 
{ info!( - target: "aura", + target: LOG_TARGET, "Slot author is equivocating at slot {} with headers {:?} and {:?}", slot, equivocation_proof.first_header.hash(), @@ -256,7 +257,7 @@ where block.body = Some(inner_body); } - trace!(target: "aura", "Checked {:?}; importing.", pre_header); + trace!(target: LOG_TARGET, "Checked {:?}; importing.", pre_header); telemetry!( self.telemetry; CONSENSUS_TRACE; @@ -272,7 +273,7 @@ where Ok((block, None)) }, CheckedHeader::Deferred(a, b) => { - debug!(target: "aura", "Checking {:?} failed; {:?}, {:?}.", hash, a, b); + debug!(target: LOG_TARGET, "Checking {:?} failed; {:?}, {:?}.", hash, a, b); telemetry!( self.telemetry; CONSENSUS_DEBUG; diff --git a/client/consensus/aura/src/lib.rs b/client/consensus/aura/src/lib.rs index 46b9124f9077f..a8ed80d7c0432 100644 --- a/client/consensus/aura/src/lib.rs +++ b/client/consensus/aura/src/lib.rs @@ -72,6 +72,8 @@ pub use sp_consensus_aura::{ AuraApi, ConsensusLog, SlotDuration, AURA_ENGINE_ID, }; +const LOG_TARGET: &str = "aura"; + type AuthorityId

<P> = <P as Pair>

::Public; /// Run `AURA` in a compatibility mode. @@ -530,7 +532,7 @@ where } fn aura_err(error: Error) -> Error { - debug!(target: "aura", "{}", error); + debug!(target: LOG_TARGET, "{}", error); error } @@ -580,10 +582,10 @@ pub fn find_pre_digest(header: &B::Header) -> Resul let mut pre_digest: Option = None; for log in header.digest().logs() { - trace!(target: "aura", "Checking log {:?}", log); + trace!(target: LOG_TARGET, "Checking log {:?}", log); match (CompatibleDigestItem::::as_aura_pre_digest(log), pre_digest.is_some()) { (Some(_), true) => return Err(aura_err(Error::MultipleHeaders)), - (None, _) => trace!(target: "aura", "Ignoring digest not meant for us"), + (None, _) => trace!(target: LOG_TARGET, "Ignoring digest not meant for us"), (s, false) => pre_digest = s, } } diff --git a/client/consensus/babe/rpc/Cargo.toml b/client/consensus/babe/rpc/Cargo.toml index d0a65a3fc3193..4f5aaf85494b9 100644 --- a/client/consensus/babe/rpc/Cargo.toml +++ b/client/consensus/babe/rpc/Cargo.toml @@ -13,7 +13,7 @@ readme = "README.md" targets = ["x86_64-unknown-linux-gnu"] [dependencies] -jsonrpsee = { version = "0.15.1", features = ["server", "macros"] } +jsonrpsee = { version = "0.16.2", features = ["client-core", "server", "macros"] } futures = "0.3.21" serde = { version = "1.0.136", features = ["derive"] } thiserror = "1.0" diff --git a/client/consensus/babe/src/aux_schema.rs b/client/consensus/babe/src/aux_schema.rs index fef84bda86974..2a09aa738f4ec 100644 --- a/client/consensus/babe/src/aux_schema.rs +++ b/client/consensus/babe/src/aux_schema.rs @@ -21,7 +21,7 @@ use codec::{Decode, Encode}; use log::info; -use crate::{migration::EpochV0, Epoch}; +use crate::{migration::EpochV0, Epoch, LOG_TARGET}; use sc_client_api::backend::AuxStore; use sc_consensus_epochs::{ migration::{EpochChangesV0For, EpochChangesV1For}, @@ -82,7 +82,7 @@ pub fn load_epoch_changes( let epoch_changes = SharedEpochChanges::::new(maybe_epoch_changes.unwrap_or_else(|| { info!( - target: "babe", + target: LOG_TARGET, "👶 Creating empty BABE epoch changes on what appears to be first startup.", ); EpochChangesFor::::default() diff --git a/client/consensus/babe/src/lib.rs b/client/consensus/babe/src/lib.rs index 84b6893648f49..b50874ae3401d 100644 --- a/client/consensus/babe/src/lib.rs +++ b/client/consensus/babe/src/lib.rs @@ -149,6 +149,8 @@ pub mod aux_schema; #[cfg(test)] mod tests; +const LOG_TARGET: &str = "babe"; + /// BABE epoch information #[derive(Decode, Encode, PartialEq, Eq, Clone, Debug)] pub struct Epoch { @@ -323,7 +325,7 @@ impl From> for String { } fn babe_err(error: Error) -> Error { - debug!(target: "babe", "{}", error); + debug!(target: LOG_TARGET, "{}", error); error } @@ -345,7 +347,7 @@ where let block_id = if client.usage_info().chain.finalized_state.is_some() { BlockId::Hash(client.usage_info().chain.best_hash) } else { - debug!(target: "babe", "No finalized state is available. Reading config from genesis"); + debug!(target: LOG_TARGET, "No finalized state is available. 
Reading config from genesis"); BlockId::Hash(client.usage_info().chain.genesis_hash) }; @@ -486,7 +488,7 @@ where telemetry, }; - info!(target: "babe", "👶 Starting BABE Authorship worker"); + info!(target: LOG_TARGET, "👶 Starting BABE Authorship worker"); let slot_worker = sc_consensus_slots::start_slot_worker( babe_link.config.slot_duration(), @@ -523,12 +525,8 @@ fn aux_storage_cleanup + HeaderBackend, Block: B Ok(meta) => { hashes.insert(meta.parent); }, - Err(err) => warn!( - target: "babe", - "Failed to lookup metadata for block `{:?}`: {}", - first, - err, - ), + Err(err) => + warn!(target: LOG_TARGET, "Failed to lookup metadata for block `{:?}`: {}", first, err,), } // Cleans data for finalized block's ancestors @@ -716,7 +714,7 @@ where type AuxData = ViableEpochDescriptor, Epoch>; fn logging_target(&self) -> &'static str { - "babe" + LOG_TARGET } fn block_import(&mut self) -> &mut Self::BlockImport { @@ -749,7 +747,7 @@ where slot: Slot, epoch_descriptor: &ViableEpochDescriptor, Epoch>, ) -> Option { - debug!(target: "babe", "Attempting to claim slot {}", slot); + debug!(target: LOG_TARGET, "Attempting to claim slot {}", slot); let s = authorship::claim_slot( slot, self.epoch_changes @@ -760,7 +758,7 @@ where ); if s.is_some() { - debug!(target: "babe", "Claimed slot {}", slot); + debug!(target: LOG_TARGET, "Claimed slot {}", slot); } s @@ -777,7 +775,7 @@ where Ok(()) => true, Err(e) => if e.is_full() { - warn!(target: "babe", "Trying to notify a slot but the channel is full"); + warn!(target: LOG_TARGET, "Trying to notify a slot but the channel is full"); true } else { false @@ -904,10 +902,10 @@ pub fn find_pre_digest(header: &B::Header) -> Result = None; for log in header.digest().logs() { - trace!(target: "babe", "Checking log {:?}, looking for pre runtime digest", log); + trace!(target: LOG_TARGET, "Checking log {:?}, looking for pre runtime digest", log); match (log.as_babe_pre_digest(), pre_digest.is_some()) { (Some(_), true) => return Err(babe_err(Error::MultiplePreRuntimeDigests)), - (None, _) => trace!(target: "babe", "Ignoring digest not meant for us"), + (None, _) => trace!(target: LOG_TARGET, "Ignoring digest not meant for us"), (s, false) => pre_digest = s, } } @@ -920,13 +918,13 @@ fn find_next_epoch_digest( ) -> Result, Error> { let mut epoch_digest: Option<_> = None; for log in header.digest().logs() { - trace!(target: "babe", "Checking log {:?}, looking for epoch change digest.", log); + trace!(target: LOG_TARGET, "Checking log {:?}, looking for epoch change digest.", log); let log = log.try_to::(OpaqueDigestItemId::Consensus(&BABE_ENGINE_ID)); match (log, epoch_digest.is_some()) { (Some(ConsensusLog::NextEpochData(_)), true) => return Err(babe_err(Error::MultipleEpochChangeDigests)), (Some(ConsensusLog::NextEpochData(epoch)), false) => epoch_digest = Some(epoch), - _ => trace!(target: "babe", "Ignoring digest not meant for us"), + _ => trace!(target: LOG_TARGET, "Ignoring digest not meant for us"), } } @@ -939,13 +937,13 @@ fn find_next_config_digest( ) -> Result, Error> { let mut config_digest: Option<_> = None; for log in header.digest().logs() { - trace!(target: "babe", "Checking log {:?}, looking for epoch change digest.", log); + trace!(target: LOG_TARGET, "Checking log {:?}, looking for epoch change digest.", log); let log = log.try_to::(OpaqueDigestItemId::Consensus(&BABE_ENGINE_ID)); match (log, config_digest.is_some()) { (Some(ConsensusLog::NextConfigData(_)), true) => return Err(babe_err(Error::MultipleConfigChangeDigests)), 
(Some(ConsensusLog::NextConfigData(config)), false) => config_digest = Some(config), - _ => trace!(target: "babe", "Ignoring digest not meant for us"), + _ => trace!(target: LOG_TARGET, "Ignoring digest not meant for us"), } } @@ -1075,7 +1073,10 @@ where None => match generate_key_owner_proof(&best_id)? { Some(proof) => proof, None => { - debug!(target: "babe", "Equivocation offender is not part of the authority set."); + debug!( + target: LOG_TARGET, + "Equivocation offender is not part of the authority set." + ); return Ok(()) }, }, @@ -1091,7 +1092,7 @@ where ) .map_err(Error::RuntimeApi)?; - info!(target: "babe", "Submitted equivocation report for author {:?}", author); + info!(target: LOG_TARGET, "Submitted equivocation report for author {:?}", author); Ok(()) } @@ -1121,7 +1122,7 @@ where mut block: BlockImportParams, ) -> BlockVerificationResult { trace!( - target: "babe", + target: LOG_TARGET, "Verifying origin: {:?} header: {:?} justification(s): {:?} body: {:?}", block.origin, block.header, @@ -1140,7 +1141,11 @@ where return Ok((block, Default::default())) } - debug!(target: "babe", "We have {:?} logs in this header", block.header.digest().logs().len()); + debug!( + target: LOG_TARGET, + "We have {:?} logs in this header", + block.header.digest().logs().len() + ); let create_inherent_data_providers = self .create_inherent_data_providers @@ -1204,7 +1209,10 @@ where ) .await { - warn!(target: "babe", "Error checking/reporting BABE equivocation: {}", err); + warn!( + target: LOG_TARGET, + "Error checking/reporting BABE equivocation: {}", err + ); } if let Some(inner_body) = block.body { @@ -1233,7 +1241,7 @@ where block.body = Some(inner_body); } - trace!(target: "babe", "Checked {:?}; importing.", pre_header); + trace!(target: LOG_TARGET, "Checked {:?}; importing.", pre_header); telemetry!( self.telemetry; CONSENSUS_TRACE; @@ -1252,7 +1260,7 @@ where Ok((block, Default::default())) }, CheckedHeader::Deferred(a, b) => { - debug!(target: "babe", "Checking {:?} failed; {:?}, {:?}.", hash, a, b); + debug!(target: LOG_TARGET, "Checking {:?} failed; {:?}, {:?}.", hash, a, b); telemetry!( self.telemetry; CONSENSUS_DEBUG; @@ -1520,21 +1528,23 @@ where log::Level::Info }; - log!(target: "babe", - log_level, - "👶 New epoch {} launching at block {} (block slot {} >= start slot {}).", - viable_epoch.as_ref().epoch_index, - hash, - slot, - viable_epoch.as_ref().start_slot, + log!( + target: LOG_TARGET, + log_level, + "👶 New epoch {} launching at block {} (block slot {} >= start slot {}).", + viable_epoch.as_ref().epoch_index, + hash, + slot, + viable_epoch.as_ref().start_slot, ); let next_epoch = viable_epoch.increment((next_epoch_descriptor, epoch_config)); - log!(target: "babe", - log_level, - "👶 Next epoch starts at slot {}", - next_epoch.as_ref().start_slot, + log!( + target: LOG_TARGET, + log_level, + "👶 Next epoch starts at slot {}", + next_epoch.as_ref().start_slot, ); // prune the tree of epochs not part of the finalized chain or @@ -1565,7 +1575,7 @@ where }; if let Err(e) = prune_and_import() { - debug!(target: "babe", "Failed to launch next epoch: {}", e); + debug!(target: LOG_TARGET, "Failed to launch next epoch: {}", e); *epoch_changes = old_epoch_changes.expect("set `Some` above and not taken; qed"); return Err(e) diff --git a/client/consensus/babe/src/tests.rs b/client/consensus/babe/src/tests.rs index 7f51eb2c51977..d7691235a550d 100644 --- a/client/consensus/babe/src/tests.rs +++ b/client/consensus/babe/src/tests.rs @@ -323,7 +323,7 @@ impl TestNetFactory for BabeTestNet { 
use substrate_test_runtime_client::DefaultTestClientBuilderExt; let client = client.as_client(); - trace!(target: "babe", "Creating a verifier"); + trace!(target: LOG_TARGET, "Creating a verifier"); // ensure block import and verifier are linked correctly. let data = maybe_link @@ -352,12 +352,12 @@ impl TestNetFactory for BabeTestNet { } fn peer(&mut self, i: usize) -> &mut BabePeer { - trace!(target: "babe", "Retrieving a peer"); + trace!(target: LOG_TARGET, "Retrieving a peer"); &mut self.peers[i] } fn peers(&self) -> &Vec { - trace!(target: "babe", "Retrieving peers"); + trace!(target: LOG_TARGET, "Retrieving peers"); &self.peers } @@ -583,7 +583,7 @@ fn can_author_block() { // with secondary slots enabled it should never be empty match claim_slot(i.into(), &epoch, &keystore) { None => i += 1, - Some(s) => debug!(target: "babe", "Authored block {:?}", s.0), + Some(s) => debug!(target: LOG_TARGET, "Authored block {:?}", s.0), } // otherwise with only vrf-based primary slots we might need to try a couple @@ -593,7 +593,7 @@ fn can_author_block() { match claim_slot(i.into(), &epoch, &keystore) { None => i += 1, Some(s) => { - debug!(target: "babe", "Authored block {:?}", s.0); + debug!(target: LOG_TARGET, "Authored block {:?}", s.0); break }, } diff --git a/client/consensus/babe/src/verification.rs b/client/consensus/babe/src/verification.rs index 53ec3002e6a85..e77a70c8e465a 100644 --- a/client/consensus/babe/src/verification.rs +++ b/client/consensus/babe/src/verification.rs @@ -17,9 +17,9 @@ // along with this program. If not, see . //! Verification for BABE headers. -use super::{ +use crate::{ authorship::{calculate_primary_threshold, check_primary_threshold, secondary_slot_author}, - babe_err, find_pre_digest, BlockT, Epoch, Error, + babe_err, find_pre_digest, BlockT, Epoch, Error, LOG_TARGET, }; use log::{debug, trace}; use sc_consensus_slots::CheckedHeader; @@ -67,7 +67,7 @@ pub(super) fn check_header( let authorities = &epoch.authorities; let pre_digest = pre_digest.map(Ok).unwrap_or_else(|| find_pre_digest::(&header))?; - trace!(target: "babe", "Checking header"); + trace!(target: LOG_TARGET, "Checking header"); let seal = header .digest_mut() .pop() @@ -93,7 +93,8 @@ pub(super) fn check_header( match &pre_digest { PreDigest::Primary(primary) => { - debug!(target: "babe", + debug!( + target: LOG_TARGET, "Verifying primary block #{} at slot: {}", header.number(), primary.slot, @@ -104,7 +105,8 @@ pub(super) fn check_header( PreDigest::SecondaryPlain(secondary) if epoch.config.allowed_slots.is_secondary_plain_slots_allowed() => { - debug!(target: "babe", + debug!( + target: LOG_TARGET, "Verifying secondary plain block #{} at slot: {}", header.number(), secondary.slot, @@ -115,7 +117,8 @@ pub(super) fn check_header( PreDigest::SecondaryVRF(secondary) if epoch.config.allowed_slots.is_secondary_vrf_slots_allowed() => { - debug!(target: "babe", + debug!( + target: LOG_TARGET, "Verifying secondary VRF block #{} at slot: {}", header.number(), secondary.slot, diff --git a/client/consensus/common/Cargo.toml b/client/consensus/common/Cargo.toml index 971ee71ab8040..b61c6a4334285 100644 --- a/client/consensus/common/Cargo.toml +++ b/client/consensus/common/Cargo.toml @@ -18,6 +18,7 @@ futures = { version = "0.3.21", features = ["thread-pool"] } futures-timer = "3.0.1" libp2p = { version = "0.49.0", default-features = false } log = "0.4.17" +mockall = "0.11.2" parking_lot = "0.12.1" serde = { version = "1.0", features = ["derive"] } thiserror = "1.0.30" diff --git 
a/client/consensus/common/src/import_queue.rs b/client/consensus/common/src/import_queue.rs
index 3741fa99663cd..02309cc6a365e 100644
--- a/client/consensus/common/src/import_queue.rs
+++ b/client/consensus/common/src/import_queue.rs
@@ -45,6 +45,8 @@ use crate::{
pub use basic_queue::BasicQueue;
use sp_consensus::{error::Error as ConsensusError, BlockOrigin, CacheKeyId};
+const LOG_TARGET: &str = "sync::import-queue";
+
/// A commonly-used Import Queue type.
///
/// This defines the transaction type of the `BasicQueue` to be the transaction type for a client.
@@ -53,6 +55,7 @@ pub type DefaultImportQueue<Block, Client> =
mod basic_queue;
pub mod buffered_link;
+pub mod mock;
/// Shared block import struct used by the queue.
pub type BoxBlockImport<B, Transaction> =
@@ -105,10 +108,10 @@ pub trait Verifier<B: BlockT>: Send + Sync {
/// Blocks import queue API.
///
/// The `import_*` methods can be called in order to send elements for the import queue to verify.
-/// Afterwards, call `poll_actions` to determine how to respond to these elements.
-pub trait ImportQueue<B: BlockT>: Send {
+pub trait ImportQueueService<B: BlockT>: Send {
	/// Import a bunch of blocks.
	fn import_blocks(&mut self, origin: BlockOrigin, blocks: Vec<IncomingBlock<B>>);
+
	/// Import block justifications.
	fn import_justifications(
		&mut self,
@@ -117,12 +120,26 @@
		number: NumberFor<B>,
		justifications: Justifications,
	);
-	/// Polls for actions to perform on the network.
-	///
+}
+
+#[async_trait::async_trait]
+pub trait ImportQueue<B: BlockT>: Send {
+	/// Get a copy of the handle to [`ImportQueueService`].
+	fn service(&self) -> Box<dyn ImportQueueService<B>>;
+
+	/// Get a reference to the handle to [`ImportQueueService`].
+	fn service_ref(&mut self) -> &mut dyn ImportQueueService<B>;
+
	/// This method should behave in a way similar to `Future::poll`. It can register the current
	/// task and notify later when more actions are ready to be polled. To continue the comparison,
	/// it is as if this method always returned `Poll::Pending`.
	fn poll_actions(&mut self, cx: &mut futures::task::Context, link: &mut dyn Link<B>);
+
+	/// Start asynchronous runner for import queue.
+	///
+	/// Takes an object implementing [`Link`] which allows the import queue to
+	/// influence the synchronization process.
+	async fn run(self, link: Box<dyn Link<B>>);
}
/// Hooks that the verification queue can use to influence the synchronization
@@ -232,15 +249,15 @@ pub(crate) async fn import_single_block_metered<
		(Some(header), justifications) => (header, justifications),
		(None, _) => {
			if let Some(ref peer) = peer {
-				debug!(target: "sync", "Header {} was not provided by {} ", block.hash, peer);
+				debug!(target: LOG_TARGET, "Header {} was not provided by {} ", block.hash, peer);
			} else {
-				debug!(target: "sync", "Header {} was not provided ", block.hash);
+				debug!(target: LOG_TARGET, "Header {} was not provided ", block.hash);
			}
			return Err(BlockImportError::IncompleteHeader(peer))
		},
	};
-	trace!(target: "sync", "Header {} has {:?} logs", block.hash, header.digest().logs().len());
+	trace!(target: LOG_TARGET, "Header {} has {:?} logs", block.hash, header.digest().logs().len());
	let number = *header.number();
	let hash = block.hash;
@@ -248,27 +265,31 @@
	let import_handler = |import| match import {
		Ok(ImportResult::AlreadyInChain) => {
-			trace!(target: "sync", "Block already in chain {}: {:?}", number, hash);
+			trace!(target: LOG_TARGET, "Block already in chain {}: {:?}", number, hash);
			Ok(BlockImportStatus::ImportedKnown(number, peer))
		},
		Ok(ImportResult::Imported(aux)) =>
			Ok(BlockImportStatus::ImportedUnknown(number, aux, peer)),
		Ok(ImportResult::MissingState) => {
-			debug!(target: "sync", "Parent state is missing for {}: {:?}, parent: {:?}",
-				number, hash, parent_hash);
+			debug!(
+				target: LOG_TARGET,
+				"Parent state is missing for {}: {:?}, parent: {:?}", number, hash, parent_hash
+			);
			Err(BlockImportError::MissingState)
		},
		Ok(ImportResult::UnknownParent) => {
-			debug!(target: "sync", "Block with unknown parent {}: {:?}, parent: {:?}",
-				number, hash, parent_hash);
+			debug!(
+				target: LOG_TARGET,
+				"Block with unknown parent {}: {:?}, parent: {:?}", number, hash, parent_hash
+			);
			Err(BlockImportError::UnknownParent)
		},
		Ok(ImportResult::KnownBad) => {
-			debug!(target: "sync", "Peer gave us a bad block {}: {:?}", number, hash);
+			debug!(target: LOG_TARGET, "Peer gave us a bad block {}: {:?}", number, hash);
			Err(BlockImportError::BadBlock(peer))
		},
		Err(e) => {
-			debug!(target: "sync", "Error importing block {}: {:?}: {}", number, hash, e);
+			debug!(target: LOG_TARGET, "Error importing block {}: {:?}: {}", number, hash, e);
			Err(BlockImportError::Other(e))
		},
	};
@@ -309,9 +330,16 @@
	let (import_block, maybe_keys) = verifier.verify(import_block).await.map_err(|msg| {
		if let Some(ref peer) = peer {
-			trace!(target: "sync", "Verifying {}({}) from {} failed: {}", number, hash, peer, msg);
+			trace!(
+				target: LOG_TARGET,
+				"Verifying {}({}) from {} failed: {}",
+				number,
+				hash,
+				peer,
+				msg
+			);
		} else {
-			trace!(target: "sync", "Verifying {}({}) failed: {}", number, hash, msg);
+			trace!(target: LOG_TARGET, "Verifying {}({}) failed: {}", number, hash, msg);
		}
		if let Some(metrics) = metrics.as_ref() {
			metrics.report_verification(false, started.elapsed());
diff --git a/client/consensus/common/src/import_queue/basic_queue.rs b/client/consensus/common/src/import_queue/basic_queue.rs
index 0e607159b75c3..b63bc192b2e77 100644
--- a/client/consensus/common/src/import_queue/basic_queue.rs
+++ b/client/consensus/common/src/import_queue/basic_queue.rs
@@ -34,7 +34,8 @@ use crate::{
	import_queue::{
		buffered_link::{self, BufferedLinkReceiver, BufferedLinkSender},
		import_single_block_metered, BlockImportError, BlockImportStatus,
		BoxBlockImport,
-		BoxJustificationImport, ImportQueue, IncomingBlock, Link, RuntimeOrigin, Verifier,
+		BoxJustificationImport, ImportQueue, ImportQueueService, IncomingBlock, Link,
+		RuntimeOrigin, Verifier, LOG_TARGET,
	},
	metrics::Metrics,
};
@@ -42,10 +43,8 @@ use crate::{
/// Interface to a basic block import queue that is importing blocks sequentially in a separate
/// task, with pluggable verification.
pub struct BasicQueue<B: BlockT, Transaction> {
-	/// Channel to send justification import messages to the background task.
-	justification_sender: TracingUnboundedSender<worker_messages::ImportJustification<B>>,
-	/// Channel to send block import messages to the background task.
-	block_import_sender: TracingUnboundedSender<worker_messages::ImportBlocks<B>>,
+	/// Handle for sending justification and block import messages to the background task.
+	handle: BasicQueueHandle<B>,
	/// Results coming from the worker task.
	result_port: BufferedLinkReceiver<B>,
	_phantom: PhantomData<Transaction>,
}
impl<B: BlockT, Transaction> Drop for BasicQueue<B, Transaction> {
	fn drop(&mut self) {
		// Flush the queue and close the receiver to terminate the future.
-		self.justification_sender.close_channel();
-		self.block_import_sender.close_channel();
+		self.handle.close();
		self.result_port.close();
	}
}
@@ -95,24 +93,50 @@ impl<B: BlockT, Transaction: Send + 'static> BasicQueue<B, Transaction> {
			future.boxed(),
		);
-		Self { justification_sender, block_import_sender, result_port, _phantom: PhantomData }
+		Self {
+			handle: BasicQueueHandle::new(justification_sender, block_import_sender),
+			result_port,
+			_phantom: PhantomData,
+		}
	}
}
-impl<B: BlockT, Transaction: Send> ImportQueue<B> for BasicQueue<B, Transaction> {
+#[derive(Clone)]
+struct BasicQueueHandle<B: BlockT> {
+	/// Channel to send justification import messages to the background task.
+	justification_sender: TracingUnboundedSender<worker_messages::ImportJustification<B>>,
+	/// Channel to send block import messages to the background task.
+	block_import_sender: TracingUnboundedSender<worker_messages::ImportBlocks<B>>,
+}
+
+impl<B: BlockT> BasicQueueHandle<B> {
+	pub fn new(
+		justification_sender: TracingUnboundedSender<worker_messages::ImportJustification<B>>,
+		block_import_sender: TracingUnboundedSender<worker_messages::ImportBlocks<B>>,
+	) -> Self {
+		Self { justification_sender, block_import_sender }
+	}
+
+	pub fn close(&mut self) {
+		self.justification_sender.close_channel();
+		self.block_import_sender.close_channel();
+	}
+}
+
+impl<B: BlockT> ImportQueueService<B> for BasicQueueHandle<B> {
	fn import_blocks(&mut self, origin: BlockOrigin, blocks: Vec<IncomingBlock<B>>) {
		if blocks.is_empty() {
			return
		}
-		trace!(target: "sync", "Scheduling {} blocks for import", blocks.len());
+		trace!(target: LOG_TARGET, "Scheduling {} blocks for import", blocks.len());
		let res = self
			.block_import_sender
			.unbounded_send(worker_messages::ImportBlocks(origin, blocks));
		if res.is_err() {
			log::error!(
-				target: "sync",
+				target: LOG_TARGET,
				"import_blocks: Background import task is no longer alive"
			);
		}
@@ -132,16 +156,46 @@ impl<B: BlockT, Transaction: Send> ImportQueue<B> for BasicQueue<B, Transaction>
		if res.is_err() {
			log::error!(
-				target: "sync",
+				target: LOG_TARGET,
				"import_justification: Background import task is no longer alive"
			);
		}
	}
}
+}
+
+#[async_trait::async_trait]
+impl<B: BlockT, Transaction: Send> ImportQueue<B> for BasicQueue<B, Transaction> {
+	/// Get handle to [`ImportQueueService`].
+	fn service(&self) -> Box<dyn ImportQueueService<B>> {
+		Box::new(self.handle.clone())
+	}
+
+	/// Get a reference to the handle to [`ImportQueueService`].
+	fn service_ref(&mut self) -> &mut dyn ImportQueueService<B> {
+		&mut self.handle
+	}
+
+	/// Poll actions from network.
	fn poll_actions(&mut self, cx: &mut Context, link: &mut dyn Link<B>) {
		if self.result_port.poll_actions(cx, link).is_err() {
-			log::error!(target: "sync", "poll_actions: Background import task is no longer alive");
+			log::error!(
+				target: LOG_TARGET,
+				"poll_actions: Background import task is no longer alive"
+			);
+		}
+	}
+
+	/// Start asynchronous runner for import queue.
+	///
+	/// Takes an object implementing [`Link`] which allows the import queue to
+	/// influence the synchronization process.
+	async fn run(mut self, mut link: Box<dyn Link<B>>) {
+		loop {
+			if self.result_port.next_action(&mut *link).await.is_err() {
+				log::error!(target: LOG_TARGET, "run: Background import task is no longer alive");
+				return
+			}
		}
	}
}
@@ -180,7 +234,7 @@ async fn block_import_process<B: BlockT, Transaction: Send + 'static>(
		Some(blocks) => blocks,
		None => {
			log::debug!(
-				target: "block-import",
+				target: LOG_TARGET,
				"Stopping block import because the import channel was closed!",
			);
			return
@@ -254,7 +308,7 @@ impl<B: BlockT, Transaction: Send + 'static> BlockImportWorker<B, Transaction> {
			// down and we should end this future.
			if worker.result_sender.is_closed() {
				log::debug!(
-					target: "block-import",
+					target: LOG_TARGET,
					"Stopping block import because result channel was closed!",
				);
				return
@@ -267,7 +321,7 @@ impl<B: BlockT, Transaction: Send + 'static> BlockImportWorker<B, Transaction> {
					worker.import_justification(who, hash, number, justification).await,
				None => {
					log::debug!(
-						target: "block-import",
+						target: LOG_TARGET,
						"Stopping block import because justification channel was closed!",
					);
					return
@@ -302,7 +356,7 @@ impl<B: BlockT, Transaction: Send + 'static> BlockImportWorker<B, Transaction> {
			.await
			.map_err(|e| {
				debug!(
-					target: "sync",
+					target: LOG_TARGET,
					"Justification import failed for hash = {:?} with number = {:?} coming from node = {:?} with error: {}",
					hash,
					number,
@@ -356,7 +410,7 @@ async fn import_many_blocks<B: BlockT, V: Verifier<B>, Transaction: Send + 'stat
		_ => Default::default(),
	};
-	trace!(target: "sync", "Starting import of {} blocks {}", count, blocks_range);
+	trace!(target: LOG_TARGET, "Starting import of {} blocks {}", count, blocks_range);
	let mut imported = 0;
	let mut results = vec![];
@@ -396,7 +450,7 @@ async fn import_many_blocks<B: BlockT, V: Verifier<B>, Transaction: Send + 'stat
		if import_result.is_ok() {
			trace!(
-				target: "sync",
+				target: LOG_TARGET,
				"Block imported successfully {:?} ({})",
				block_number,
				block_hash,
diff --git a/client/consensus/common/src/import_queue/buffered_link.rs b/client/consensus/common/src/import_queue/buffered_link.rs
index 5d418dddf0853..e6d3b212fdbac 100644
--- a/client/consensus/common/src/import_queue/buffered_link.rs
+++ b/client/consensus/common/src/import_queue/buffered_link.rs
@@ -80,7 +80,7 @@ impl<B: BlockT> Clone for BufferedLinkSender<B> {
}
/// Internal buffered message.
-enum BlockImportWorkerMsg<B: BlockT> {
+pub enum BlockImportWorkerMsg<B: BlockT> {
	BlocksProcessed(usize, usize, Vec<(BlockImportResult<B>, B::Hash)>),
	JustificationImported(RuntimeOrigin, B::Hash, NumberFor<B>, bool),
	RequestJustification(B::Hash, NumberFor<B>),
@@ -122,6 +122,18 @@ pub struct BufferedLinkReceiver<B: BlockT> {
}
impl<B: BlockT> BufferedLinkReceiver<B> {
+	/// Send action for the synchronization to perform.
+	pub fn send_actions(&mut self, msg: BlockImportWorkerMsg<B>, link: &mut dyn Link<B>) {
+		match msg {
+			BlockImportWorkerMsg::BlocksProcessed(imported, count, results) =>
+				link.blocks_processed(imported, count, results),
+			BlockImportWorkerMsg::JustificationImported(who, hash, number, success) =>
+				link.justification_imported(who, &hash, number, success),
+			BlockImportWorkerMsg::RequestJustification(hash, number) =>
+				link.request_justification(&hash, number),
+		}
+	}
+
+	/// Polls for the buffered link actions.
Any enqueued action will be propagated to the link
	/// passed as parameter.
	///
@@ -138,15 +150,17 @@ impl<B: BlockT> BufferedLinkReceiver<B> {
				Poll::Pending => break Ok(()),
			};
-			match msg {
-				BlockImportWorkerMsg::BlocksProcessed(imported, count, results) =>
-					link.blocks_processed(imported, count, results),
-				BlockImportWorkerMsg::JustificationImported(who, hash, number, success) =>
-					link.justification_imported(who, &hash, number, success),
-				BlockImportWorkerMsg::RequestJustification(hash, number) =>
-					link.request_justification(&hash, number),
-			}
+			self.send_actions(msg, &mut *link);
+		}
+	}
+
+	/// Poll next element from import queue and send the corresponding action command over the link.
+	pub async fn next_action(&mut self, link: &mut dyn Link<B>) -> Result<(), ()> {
+		if let Some(msg) = self.rx.next().await {
+			self.send_actions(msg, link);
+			return Ok(())
		}
+		Err(())
	}
	/// Close the channel.
diff --git a/client/consensus/common/src/import_queue/mock.rs b/client/consensus/common/src/import_queue/mock.rs
new file mode 100644
index 0000000000000..67deee9514a1c
--- /dev/null
+++ b/client/consensus/common/src/import_queue/mock.rs
@@ -0,0 +1,46 @@
+// This file is part of Substrate.
+
+// Copyright (C) 2022 Parity Technologies (UK) Ltd.
+// SPDX-License-Identifier: GPL-3.0-or-later WITH Classpath-exception-2.0
+
+// This program is free software: you can redistribute it and/or modify
+// it under the terms of the GNU General Public License as published by
+// the Free Software Foundation, either version 3 of the License, or
+// (at your option) any later version.
+
+// This program is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License
+// along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+use super::*;
+
+mockall::mock! {
+	pub ImportQueueHandle<B: BlockT> {}
+
+	impl<B: BlockT> ImportQueueService<B> for ImportQueueHandle<B> {
+		fn import_blocks(&mut self, origin: BlockOrigin, blocks: Vec<IncomingBlock<B>>);
+		fn import_justifications(
+			&mut self,
+			who: RuntimeOrigin,
+			hash: B::Hash,
+			number: NumberFor<B>,
+			justifications: Justifications,
+		);
+	}
+}
+
+mockall::mock! {
+	pub ImportQueue<B: BlockT> {}
+
+	#[async_trait::async_trait]
+	impl<B: BlockT> ImportQueue<B> for ImportQueue<B> {
+		fn service(&self) -> Box<dyn ImportQueueService<B>>;
+		fn service_ref(&mut self) -> &mut dyn ImportQueueService<B>;
+		fn poll_actions<'a>(&mut self, cx: &mut futures::task::Context<'a>, link: &mut dyn Link<B>);
+		async fn run(self, link: Box<dyn Link<B>>);
+	}
+}
diff --git a/client/consensus/manual-seal/Cargo.toml b/client/consensus/manual-seal/Cargo.toml
index cf151424c2ee5..fb89445a97002 100644
--- a/client/consensus/manual-seal/Cargo.toml
+++ b/client/consensus/manual-seal/Cargo.toml
@@ -13,7 +13,7 @@ readme = "README.md"
targets = ["x86_64-unknown-linux-gnu"]
[dependencies]
-jsonrpsee = { version = "0.15.1", features = ["server", "macros"] }
+jsonrpsee = { version = "0.16.2", features = ["client-core", "server", "macros"] }
assert_matches = "1.3.0"
async-trait = "0.1.57"
codec = { package = "parity-scale-codec", version = "3.0.0" }
diff --git a/client/consensus/manual-seal/src/consensus/babe.rs b/client/consensus/manual-seal/src/consensus/babe.rs
index 206f5163a13cd..d2bea3a3a3656 100644
--- a/client/consensus/manual-seal/src/consensus/babe.rs
+++ b/client/consensus/manual-seal/src/consensus/babe.rs
@@ -20,7 +20,7 @@
//! that expect babe-specific digests.
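As an aside on the `mock` module added above: `mockall::mock!` generates types such as `MockImportQueueHandle` whose expectations can stand in for a real import queue in sync tests. A hypothetical usage sketch, assuming the module ends up reachable as `sc_consensus::import_queue::mock` and using `substrate_test_runtime::Block` as the block type (this test is not part of the patch):

use sc_consensus::import_queue::{mock::MockImportQueueHandle, ImportQueueService};
use sp_consensus::BlockOrigin;
use substrate_test_runtime::Block;

#[test]
fn scheduling_blocks_reaches_the_service() {
	let mut service = MockImportQueueHandle::<Block>::new();

	// Expect exactly one `import_blocks` call carrying the origin passed below.
	service
		.expect_import_blocks()
		.withf(|origin, blocks| *origin == BlockOrigin::NetworkBroadcast && blocks.is_empty())
		.times(1)
		.returning(|_, _| ());

	service.import_blocks(BlockOrigin::NetworkBroadcast, Vec::new());
}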
use super::ConsensusDataProvider; -use crate::Error; +use crate::{Error, LOG_TARGET}; use codec::Encode; use sc_client_api::{AuxStore, UsageProvider}; use sc_consensus_babe::{ @@ -179,7 +179,7 @@ where let epoch = epoch_changes .viable_epoch(&epoch_descriptor, |slot| Epoch::genesis(&self.config, slot)) .ok_or_else(|| { - log::info!(target: "babe", "create_digest: no viable_epoch :("); + log::info!(target: LOG_TARGET, "create_digest: no viable_epoch :("); sp_consensus::Error::InvalidAuthoritiesSet })?; @@ -290,7 +290,7 @@ where let has_authority = epoch.authorities.iter().any(|(id, _)| *id == *authority); if !has_authority { - log::info!(target: "manual-seal", "authority not found"); + log::info!(target: LOG_TARGET, "authority not found"); let timestamp = inherents .timestamp_inherent_data()? .ok_or_else(|| Error::StringError("No timestamp inherent data".into()))?; diff --git a/client/consensus/manual-seal/src/lib.rs b/client/consensus/manual-seal/src/lib.rs index 09ab139b91c73..700b94cf1d704 100644 --- a/client/consensus/manual-seal/src/lib.rs +++ b/client/consensus/manual-seal/src/lib.rs @@ -49,6 +49,8 @@ pub use self::{ use sc_transaction_pool_api::TransactionPool; use sp_api::{ProvideRuntimeApi, TransactionFor}; +const LOG_TARGET: &str = "manual-seal"; + /// The `ConsensusEngineId` of Manual Seal. pub const MANUAL_SEAL_ENGINE_ID: ConsensusEngineId = [b'm', b'a', b'n', b'l']; diff --git a/client/consensus/pow/src/lib.rs b/client/consensus/pow/src/lib.rs index ac7ce3b411333..ace00a34459af 100644 --- a/client/consensus/pow/src/lib.rs +++ b/client/consensus/pow/src/lib.rs @@ -67,6 +67,8 @@ use sp_runtime::{ }; use std::{cmp::Ordering, collections::HashMap, marker::PhantomData, sync::Arc, time::Duration}; +const LOG_TARGET: &str = "pow"; + #[derive(Debug, thiserror::Error)] pub enum Error { #[error("Header uses the wrong engine {0:?}")] @@ -531,7 +533,7 @@ where } if sync_oracle.is_major_syncing() { - debug!(target: "pow", "Skipping proposal due to sync."); + debug!(target: LOG_TARGET, "Skipping proposal due to sync."); worker.on_major_syncing(); continue } @@ -540,7 +542,7 @@ where Ok(x) => x, Err(err) => { warn!( - target: "pow", + target: LOG_TARGET, "Unable to pull new block for authoring. \ Select best chain error: {}", err @@ -561,7 +563,7 @@ where Ok(x) => x, Err(err) => { warn!( - target: "pow", + target: LOG_TARGET, "Unable to propose new block for authoring. \ Fetch difficulty failed: {}", err, @@ -577,7 +579,7 @@ where Ok(x) => x, Err(err) => { warn!( - target: "pow", + target: LOG_TARGET, "Unable to propose new block for authoring. \ Creating inherent data providers failed: {}", err, @@ -590,7 +592,7 @@ where Ok(r) => r, Err(e) => { warn!( - target: "pow", + target: LOG_TARGET, "Unable to propose new block for authoring. \ Creating inherent data failed: {}", e, @@ -610,7 +612,7 @@ where Ok(x) => x, Err(err) => { warn!( - target: "pow", + target: LOG_TARGET, "Unable to propose new block for authoring. \ Creating proposer failed: {:?}", err, @@ -624,7 +626,7 @@ where Ok(x) => x, Err(err) => { warn!( - target: "pow", + target: LOG_TARGET, "Unable to propose new block for authoring. 
\ Creating proposal failed: {}",
						err,
@@ -654,14 +656,14 @@ where
fn find_pre_digest<B: BlockT>(header: &B::Header) -> Result<Option<Vec<u8>>, Error<B>> {
	let mut pre_digest: Option<_> = None;
	for log in header.digest().logs() {
-		trace!(target: "pow", "Checking log {:?}, looking for pre runtime digest", log);
+		trace!(target: LOG_TARGET, "Checking log {:?}, looking for pre runtime digest", log);
		match (log, pre_digest.is_some()) {
			(DigestItem::PreRuntime(POW_ENGINE_ID, _), true) =>
				return Err(Error::MultiplePreRuntimeDigests),
			(DigestItem::PreRuntime(POW_ENGINE_ID, v), false) => {
				pre_digest = Some(v.clone());
			},
-			(_, _) => trace!(target: "pow", "Ignoring digest not meant for us"),
+			(_, _) => trace!(target: LOG_TARGET, "Ignoring digest not meant for us"),
		}
	}
diff --git a/client/consensus/pow/src/worker.rs b/client/consensus/pow/src/worker.rs
index a00da6e7022fb..b53227bb3ca50 100644
--- a/client/consensus/pow/src/worker.rs
+++ b/client/consensus/pow/src/worker.rs
@@ -41,7 +41,7 @@ use std::{
	time::Duration,
};
-use crate::{PowAlgorithm, PowIntermediate, Seal, INTERMEDIATE_KEY, POW_ENGINE_ID};
+use crate::{PowAlgorithm, PowIntermediate, Seal, INTERMEDIATE_KEY, LOG_TARGET, POW_ENGINE_ID};
/// Mining metadata. This is the information needed to start an actual mining loop.
#[derive(Clone, Eq, PartialEq)]
@@ -159,26 +159,16 @@ where
		) {
			Ok(true) => (),
			Ok(false) => {
-				warn!(
-					target: "pow",
-					"Unable to import mined block: seal is invalid",
-				);
+				warn!(target: LOG_TARGET, "Unable to import mined block: seal is invalid",);
				return false
			},
			Err(err) => {
-				warn!(
-					target: "pow",
-					"Unable to import mined block: {}",
-					err,
-				);
+				warn!(target: LOG_TARGET, "Unable to import mined block: {}", err,);
				return false
			},
		}
	} else {
-		warn!(
-			target: "pow",
-			"Unable to import mined block: metadata does not exist",
-		);
+		warn!(target: LOG_TARGET, "Unable to import mined block: metadata does not exist",);
		return false
	}
@@ -192,10 +182,7 @@ where
	} {
		build
	} else {
-		warn!(
-			target: "pow",
-			"Unable to import mined block: build does not exist",
-		);
+		warn!(target: LOG_TARGET, "Unable to import mined block: build does not exist",);
		return false
	};
@@ -225,18 +212,13 @@ where
			);
			info!(
-				target: "pow",
-				"✅ Successfully mined block on top of: {}",
-				build.metadata.best_hash
+				target: LOG_TARGET,
+				"✅ Successfully mined block on top of: {}", build.metadata.best_hash
			);
			true
		},
		Err(err) => {
-			warn!(
-				target: "pow",
-				"Unable to import mined block: {}",
-				err,
-			);
+			warn!(target: LOG_TARGET, "Unable to import mined block: {}", err,);
			false
		},
	}
diff --git a/client/consensus/slots/src/lib.rs b/client/consensus/slots/src/lib.rs
index bc68797dc734e..6126647e6190d 100644
--- a/client/consensus/slots/src/lib.rs
+++ b/client/consensus/slots/src/lib.rs
@@ -48,6 +48,8 @@ use std::{
	time::{Duration, Instant},
};
+const LOG_TARGET: &str = "slots";
+
/// The changes that need to be applied to the storage to create the state for a block.
///
/// See [`sp_state_machine::StorageChanges`] for more information.
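The hunk above repeats the refactor applied throughout this patch: a single module-level `LOG_TARGET` constant instead of a string literal at every call site, so a typo in one literal can no longer silently escape `RUST_LOG`-style target filters. A standalone sketch of the pattern (assuming `log` and `env_logger` as dependencies; not taken from the patch):

const LOG_TARGET: &str = "slots";

fn main() {
	// With e.g. `RUST_LOG=slots=debug`, exactly the lines using this target show up;
	// a misspelled literal at one call site would silently fall outside the filter.
	env_logger::init();

	log::debug!(target: LOG_TARGET, "Skipping proposal slot due to sync.");
	log::warn!(target: LOG_TARGET, "Error while polling for next slot: {}", "timeout");
}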
@@ -198,9 +200,9 @@ pub trait SimpleSlotWorker { > { let slot = slot_info.slot; let telemetry = self.telemetry(); - let logging_target = self.logging_target(); + let log_target = self.logging_target(); - let inherent_data = Self::create_inherent_data(&slot_info, &logging_target).await?; + let inherent_data = Self::create_inherent_data(&slot_info, &log_target).await?; let proposing_remaining_duration = self.proposing_remaining_duration(&slot_info); let logs = self.pre_digest_data(slot, claim); @@ -220,19 +222,19 @@ pub trait SimpleSlotWorker { let proposal = match futures::future::select(proposing, proposing_remaining).await { Either::Left((Ok(p), _)) => p, Either::Left((Err(err), _)) => { - warn!(target: logging_target, "Proposing failed: {}", err); + warn!(target: log_target, "Proposing failed: {}", err); return None }, Either::Right(_) => { info!( - target: logging_target, + target: log_target, "⌛️ Discarding proposal for slot {}; block production took too long", slot, ); // If the node was compiled with debug, tell the user to use release optimizations. #[cfg(build_type = "debug")] info!( - target: logging_target, + target: log_target, "👉 Recompile your node in `--release` mode to mitigate this problem.", ); telemetry!( @@ -525,13 +527,13 @@ pub async fn start_slot_worker( let slot_info = match slots.next_slot().await { Ok(r) => r, Err(e) => { - warn!(target: "slots", "Error while polling for next slot: {}", e); + warn!(target: LOG_TARGET, "Error while polling for next slot: {}", e); return }, }; if sync_oracle.is_major_syncing() { - debug!(target: "slots", "Skipping proposal slot due to sync."); + debug!(target: LOG_TARGET, "Skipping proposal slot due to sync."); continue } diff --git a/client/consensus/slots/src/slots.rs b/client/consensus/slots/src/slots.rs index f8f366d89c82c..9bb5650b313d5 100644 --- a/client/consensus/slots/src/slots.rs +++ b/client/consensus/slots/src/slots.rs @@ -20,7 +20,7 @@ //! //! This is used instead of `futures_timer::Interval` because it was unreliable. -use super::{InherentDataProviderExt, Slot}; +use super::{InherentDataProviderExt, Slot, LOG_TARGET}; use sp_consensus::{Error, SelectChain}; use sp_inherents::{CreateInherentDataProviders, InherentDataProvider}; use sp_runtime::traits::{Block as BlockT, Header as HeaderT}; @@ -146,7 +146,7 @@ where Ok(x) => x, Err(e) => { log::warn!( - target: "slots", + target: LOG_TARGET, "Unable to author block in slot. 
No best block header: {}", e, ); diff --git a/client/db/src/lib.rs b/client/db/src/lib.rs index 426876f5cba8c..4452d5dcb3f29 100644 --- a/client/db/src/lib.rs +++ b/client/db/src/lib.rs @@ -2002,6 +2002,7 @@ impl sc_client_api::backend::Backend for Backend { .map_err(sp_blockchain::Error::from_state_db)?; Err(e) } else { + self.storage.state_db.sync(); Ok(()) } } diff --git a/client/finality-grandpa/rpc/Cargo.toml b/client/finality-grandpa/rpc/Cargo.toml index 7be77c122bab2..252c5e3871a64 100644 --- a/client/finality-grandpa/rpc/Cargo.toml +++ b/client/finality-grandpa/rpc/Cargo.toml @@ -12,7 +12,7 @@ homepage = "https://substrate.io" [dependencies] finality-grandpa = { version = "0.16.0", features = ["derive-codec"] } futures = "0.3.16" -jsonrpsee = { version = "0.15.1", features = ["server", "macros"] } +jsonrpsee = { version = "0.16.2", features = ["client-core", "server", "macros"] } log = "0.4.8" parity-scale-codec = { version = "3.0.0", features = ["derive"] } serde = { version = "1.0.105", features = ["derive"] } diff --git a/client/finality-grandpa/rpc/src/lib.rs b/client/finality-grandpa/rpc/src/lib.rs index 85df72de77b54..dfdad666ba8f3 100644 --- a/client/finality-grandpa/rpc/src/lib.rs +++ b/client/finality-grandpa/rpc/src/lib.rs @@ -138,7 +138,7 @@ mod tests { use std::{collections::HashSet, convert::TryInto, sync::Arc}; use jsonrpsee::{ - types::{EmptyParams, SubscriptionId}, + types::{EmptyServerParams as EmptyParams, SubscriptionId}, RpcModule, }; use parity_scale_codec::{Decode, Encode}; diff --git a/client/finality-grandpa/src/authorities.rs b/client/finality-grandpa/src/authorities.rs index 0803e6b3c2931..a61c66979bb5c 100644 --- a/client/finality-grandpa/src/authorities.rs +++ b/client/finality-grandpa/src/authorities.rs @@ -29,7 +29,7 @@ use sc_consensus::shared_data::{SharedData, SharedDataLocked}; use sc_telemetry::{telemetry, TelemetryHandle, CONSENSUS_INFO}; use sp_finality_grandpa::{AuthorityId, AuthorityList}; -use crate::SetId; +use crate::{SetId, LOG_TARGET}; /// Error type returned on operations on the `AuthoritySet`. #[derive(Debug, thiserror::Error)] @@ -314,7 +314,7 @@ where let number = pending.canon_height.clone(); debug!( - target: "afg", + target: LOG_TARGET, "Inserting potential standard set change signaled at block {:?} (delayed by {:?} blocks).", (&number, &hash), pending.delay, @@ -323,7 +323,7 @@ where self.pending_standard_changes.import(hash, number, pending, is_descendent_of)?; debug!( - target: "afg", + target: LOG_TARGET, "There are now {} alternatives for the next pending standard change (roots), and a \ total of {} pending standard changes (across all forks).", self.pending_standard_changes.roots().count(), @@ -362,7 +362,7 @@ where .unwrap_or_else(|i| i); debug!( - target: "afg", + target: LOG_TARGET, "Inserting potential forced set change at block {:?} (delayed by {:?} blocks).", (&pending.canon_height, &pending.canon_hash), pending.delay, @@ -370,7 +370,11 @@ where self.pending_forced_changes.insert(idx, pending); - debug!(target: "afg", "There are now {} pending forced changes.", self.pending_forced_changes.len()); + debug!( + target: LOG_TARGET, + "There are now {} pending forced changes.", + self.pending_forced_changes.len() + ); Ok(()) } @@ -475,7 +479,7 @@ where if standard_change.effective_number() <= median_last_finalized && is_descendent_of(&standard_change.canon_hash, &change.canon_hash)? 
{ - log::info!(target: "afg", + log::info!(target: LOG_TARGET, "Not applying authority set change forced at block #{:?}, due to pending standard change at block #{:?}", change.canon_height, standard_change.effective_number(), @@ -488,7 +492,7 @@ where } // apply this change: make the set canonical - afg_log!( + grandpa_log!( initial_sync, "👴 Applying authority set change forced at block #{:?}", change.canon_height, @@ -570,7 +574,7 @@ where } if let Some(change) = change { - afg_log!( + grandpa_log!( initial_sync, "👴 Applying authority set change scheduled at block #{:?}", change.canon_height, diff --git a/client/finality-grandpa/src/aux_schema.rs b/client/finality-grandpa/src/aux_schema.rs index 235453ea35df1..a7357a7fa5e40 100644 --- a/client/finality-grandpa/src/aux_schema.rs +++ b/client/finality-grandpa/src/aux_schema.rs @@ -38,7 +38,7 @@ use crate::{ CompletedRound, CompletedRounds, CurrentRounds, HasVoted, SharedVoterSetState, VoterSetState, }, - GrandpaJustification, NewAuthoritySet, + GrandpaJustification, NewAuthoritySet, LOG_TARGET, }; const VERSION_KEY: &[u8] = b"grandpa_schema_version"; @@ -100,8 +100,8 @@ where // previously we only supported at most one pending change per fork &|_, _| Ok(false), ) { - warn!(target: "afg", "Error migrating pending authority set change: {}", err); - warn!(target: "afg", "Node is in a potentially inconsistent state."); + warn!(target: LOG_TARGET, "Error migrating pending authority set change: {}", err); + warn!(target: LOG_TARGET, "Node is in a potentially inconsistent state."); } } @@ -384,8 +384,11 @@ where } // genesis. - info!(target: "afg", "👴 Loading GRANDPA authority set \ - from genesis on what appears to be first startup."); + info!( + target: LOG_TARGET, + "👴 Loading GRANDPA authority set \ + from genesis on what appears to be first startup." + ); let genesis_authorities = genesis_authorities()?; let genesis_set = AuthoritySet::genesis(genesis_authorities) diff --git a/client/finality-grandpa/src/communication/gossip.rs b/client/finality-grandpa/src/communication/gossip.rs index 218b4b668c10f..408cbda745e56 100644 --- a/client/finality-grandpa/src/communication/gossip.rs +++ b/client/finality-grandpa/src/communication/gossip.rs @@ -99,7 +99,7 @@ use sp_finality_grandpa::AuthorityId; use sp_runtime::traits::{Block as BlockT, NumberFor, Zero}; use super::{benefit, cost, Round, SetId, NEIGHBOR_REBROADCAST_PERIOD}; -use crate::{environment, CatchUp, CompactCommit, SignedMessage}; +use crate::{environment, CatchUp, CompactCommit, SignedMessage, LOG_TARGET}; use std::{ collections::{HashSet, VecDeque}, @@ -578,8 +578,13 @@ impl Peers { last_update: Some(now), }; - trace!(target: "afg", "Peer {} updated view. Now at {:?}, {:?}", - who, peer.view.round, peer.view.set_id); + trace!( + target: LOG_TARGET, + "Peer {} updated view. Now at {:?}, {:?}", + who, + peer.view.round, + peer.view.set_id + ); Ok(Some(&peer.view)) } @@ -801,8 +806,12 @@ impl Inner { let set_id = local_view.set_id; - debug!(target: "afg", "Voter {} noting beginning of round {:?} to network.", - self.config.name(), (round, set_id)); + debug!( + target: LOG_TARGET, + "Voter {} noting beginning of round {:?} to network.", + self.config.name(), + (round, set_id) + ); local_view.update_round(round); @@ -824,7 +833,7 @@ impl Inner { authorities.iter().collect::>(); if diff_authorities { - debug!(target: "afg", + debug!(target: LOG_TARGET, "Gossip validator noted set {:?} twice with different authorities. 
\ Was the authority set hard forked?", set_id, @@ -912,7 +921,7 @@ impl Inner { // ensure authority is part of the set. if !self.authorities.contains(&full.message.id) { - debug!(target: "afg", "Message from unknown voter: {}", full.message.id); + debug!(target: LOG_TARGET, "Message from unknown voter: {}", full.message.id); telemetry!( self.config.telemetry; CONSENSUS_DEBUG; @@ -929,7 +938,7 @@ impl Inner { full.round.0, full.set_id.0, ) { - debug!(target: "afg", "Bad message signature {}", full.message.id); + debug!(target: LOG_TARGET, "Bad message signature {}", full.message.id); telemetry!( self.config.telemetry; CONSENSUS_DEBUG; @@ -964,7 +973,7 @@ impl Inner { if full.message.precommits.len() != full.message.auth_data.len() || full.message.precommits.is_empty() { - debug!(target: "afg", "Malformed compact commit"); + debug!(target: LOG_TARGET, "Malformed compact commit"); telemetry!( self.config.telemetry; CONSENSUS_DEBUG; @@ -1023,9 +1032,9 @@ impl Inner { PendingCatchUp::Processing { .. } => { self.pending_catch_up = PendingCatchUp::None; }, - state => debug!(target: "afg", - "Noted processed catch up message when state was: {:?}", - state, + state => debug!( + target: LOG_TARGET, + "Noted processed catch up message when state was: {:?}", state, ), } } @@ -1067,7 +1076,9 @@ impl Inner { return (None, Action::Discard(Misbehavior::OutOfScopeMessage.cost())) } - trace!(target: "afg", "Replying to catch-up request for round {} from {} with round {}", + trace!( + target: LOG_TARGET, + "Replying to catch-up request for round {} from {} with round {}", request.round.0, who, last_completed_round.number, @@ -1141,9 +1152,9 @@ impl Inner { let (catch_up_allowed, catch_up_report) = self.note_catch_up_request(who, &request); if catch_up_allowed { - debug!(target: "afg", "Sending catch-up request for round {} to {}", - round, - who, + debug!( + target: LOG_TARGET, + "Sending catch-up request for round {} to {}", round, who, ); catch_up = Some(GossipMessage::::CatchUpRequest(request)); @@ -1347,7 +1358,7 @@ impl GossipValidator { let metrics = match prometheus_registry.map(Metrics::register) { Some(Ok(metrics)) => Some(metrics), Some(Err(e)) => { - debug!(target: "afg", "Failed to register metrics: {:?}", e); + debug!(target: LOG_TARGET, "Failed to register metrics: {:?}", e); None }, None => None, @@ -1466,7 +1477,7 @@ impl GossipValidator { }, Err(e) => { message_name = None; - debug!(target: "afg", "Error decoding message: {}", e); + debug!(target: LOG_TARGET, "Error decoding message: {}", e); telemetry!( self.telemetry; CONSENSUS_DEBUG; diff --git a/client/finality-grandpa/src/communication/mod.rs b/client/finality-grandpa/src/communication/mod.rs index 75a7697812c6c..e7e3c12989c96 100644 --- a/client/finality-grandpa/src/communication/mod.rs +++ b/client/finality-grandpa/src/communication/mod.rs @@ -54,7 +54,7 @@ use sp_runtime::traits::{Block as BlockT, Hash as HashT, Header as HeaderT, Numb use crate::{ environment::HasVoted, CatchUp, Commit, CommunicationIn, CommunicationOutH, CompactCommit, - Error, Message, SignedMessage, + Error, Message, SignedMessage, LOG_TARGET, }; use gossip::{ FullCatchUpMessage, FullCommitMessage, GossipMessage, GossipValidator, PeerReport, VoteMessage, @@ -274,7 +274,8 @@ impl> NetworkBridge { gossip_engine.lock().register_gossip_message(topic, message.encode()); } - trace!(target: "afg", + trace!( + target: LOG_TARGET, "Registered {} messages for topic {:?} (round: {}, set_id: {})", round.votes.len(), topic, @@ -340,13 +341,19 @@ impl> NetworkBridge { match 
decoded { Err(ref e) => { - debug!(target: "afg", "Skipping malformed message {:?}: {}", notification, e); + debug!( + target: LOG_TARGET, + "Skipping malformed message {:?}: {}", notification, e + ); future::ready(None) }, Ok(GossipMessage::Vote(msg)) => { // check signature. if !voters.contains(&msg.message.id) { - debug!(target: "afg", "Skipping message from unknown voter {}", msg.message.id); + debug!( + target: LOG_TARGET, + "Skipping message from unknown voter {}", msg.message.id + ); return future::ready(None) } @@ -388,7 +395,7 @@ impl> NetworkBridge { future::ready(Some(msg.message)) }, _ => { - debug!(target: "afg", "Skipping unknown message type"); + debug!(target: LOG_TARGET, "Skipping unknown message type"); future::ready(None) }, } @@ -631,7 +638,12 @@ fn incoming_global( // this could be optimized by decoding piecewise. let decoded = GossipMessage::::decode(&mut ¬ification.message[..]); if let Err(ref e) = decoded { - trace!(target: "afg", "Skipping malformed commit message {:?}: {}", notification, e); + trace!( + target: LOG_TARGET, + "Skipping malformed commit message {:?}: {}", + notification, + e + ); } future::ready(decoded.map(move |d| (notification, d)).ok()) }) @@ -642,7 +654,7 @@ fn incoming_global( GossipMessage::CatchUp(msg) => process_catch_up(msg, notification, &gossip_engine, &gossip_validator, &voters), _ => { - debug!(target: "afg", "Skipping unknown message type"); + debug!(target: LOG_TARGET, "Skipping unknown message type"); None }, }) @@ -748,7 +760,7 @@ impl Sink> for OutgoingMessages { }); debug!( - target: "afg", + target: LOG_TARGET, "Announcing block {} to peers which we voted on in round {} in set {}", target_hash, self.round, @@ -813,7 +825,7 @@ fn check_compact_commit( return Err(cost::MALFORMED_COMMIT) } } else { - debug!(target: "afg", "Skipping commit containing unknown voter {}", id); + debug!(target: LOG_TARGET, "Skipping commit containing unknown voter {}", id); return Err(cost::MALFORMED_COMMIT) } } @@ -838,7 +850,7 @@ fn check_compact_commit( set_id.0, &mut buf, ) { - debug!(target: "afg", "Bad commit message signature {}", id); + debug!(target: LOG_TARGET, "Bad commit message signature {}", id); telemetry!( telemetry; CONSENSUS_DEBUG; @@ -886,7 +898,10 @@ fn check_catch_up( return Err(cost::MALFORMED_CATCH_UP) } } else { - debug!(target: "afg", "Skipping catch up message containing unknown voter {}", id); + debug!( + target: LOG_TARGET, + "Skipping catch up message containing unknown voter {}", id + ); return Err(cost::MALFORMED_CATCH_UP) } } @@ -922,7 +937,7 @@ fn check_catch_up( if !sp_finality_grandpa::check_message_signature_with_buffer( &msg, id, sig, round, set_id, buf, ) { - debug!(target: "afg", "Bad catch up message signature {}", id); + debug!(target: LOG_TARGET, "Bad catch up message signature {}", id); telemetry!( telemetry; CONSENSUS_DEBUG; diff --git a/client/finality-grandpa/src/communication/periodic.rs b/client/finality-grandpa/src/communication/periodic.rs index c001796b5ca5d..7e50abb96e441 100644 --- a/client/finality-grandpa/src/communication/periodic.rs +++ b/client/finality-grandpa/src/communication/periodic.rs @@ -21,17 +21,19 @@ use futures::{future::FutureExt as _, prelude::*, ready, stream::Stream}; use futures_timer::Delay; use log::debug; -use sc_utils::mpsc::{tracing_unbounded, TracingUnboundedReceiver, TracingUnboundedSender}; use std::{ pin::Pin, task::{Context, Poll}, time::Duration, }; -use super::gossip::{GossipMessage, NeighborPacket}; use sc_network::PeerId; +use sc_utils::mpsc::{tracing_unbounded, 
TracingUnboundedReceiver, TracingUnboundedSender}; use sp_runtime::traits::{Block as BlockT, NumberFor}; +use super::gossip::{GossipMessage, NeighborPacket}; +use crate::LOG_TARGET; + /// A sender used to send neighbor packets to a background job. #[derive(Clone)] pub(super) struct NeighborPacketSender( @@ -46,7 +48,7 @@ impl NeighborPacketSender { neighbor_packet: NeighborPacket>, ) { if let Err(err) = self.0.unbounded_send((who, neighbor_packet)) { - debug!(target: "afg", "Failed to send neighbor packet: {:?}", err); + debug!(target: LOG_TARGET, "Failed to send neighbor packet: {:?}", err); } } } diff --git a/client/finality-grandpa/src/environment.rs b/client/finality-grandpa/src/environment.rs index f235c3a86c04e..110d0eb2c927e 100644 --- a/client/finality-grandpa/src/environment.rs +++ b/client/finality-grandpa/src/environment.rs @@ -60,7 +60,7 @@ use crate::{ until_imported::UntilVoteTargetImported, voting_rule::VotingRule as VotingRuleT, ClientForGrandpa, CommandOrError, Commit, Config, Error, NewAuthoritySet, Precommit, Prevote, - PrimaryPropose, SignedMessage, VoterCommand, + PrimaryPropose, SignedMessage, VoterCommand, LOG_TARGET, }; type HistoricalVotes = finality_grandpa::HistoricalVotes< @@ -551,7 +551,10 @@ where { Some(proof) => proof, None => { - debug!(target: "afg", "Equivocation offender is not part of the authority set."); + debug!( + target: LOG_TARGET, + "Equivocation offender is not part of the authority set." + ); return Ok(()) }, }; @@ -609,8 +612,13 @@ where let tree_route = match tree_route_res { Ok(tree_route) => tree_route, Err(e) => { - debug!(target: "afg", "Encountered error computing ancestry between block {:?} and base {:?}: {}", - block, base, e); + debug!( + target: LOG_TARGET, + "Encountered error computing ancestry between block {:?} and base {:?}: {}", + block, + base, + e + ); return Err(GrandpaError::NotDescendent) }, @@ -955,7 +963,8 @@ where historical_votes: &HistoricalVotes, ) -> Result<(), Self::Error> { debug!( - target: "afg", "Voter {} completed round {} in set {}. Estimate = {:?}, Finalized in round = {:?}", + target: LOG_TARGET, + "Voter {} completed round {} in set {}. Estimate = {:?}, Finalized in round = {:?}", self.config.name(), round, self.set_id, @@ -1016,7 +1025,8 @@ where historical_votes: &HistoricalVotes, ) -> Result<(), Self::Error> { debug!( - target: "afg", "Voter {} concluded round {} in set {}. Estimate = {:?}, Finalized in round = {:?}", + target: LOG_TARGET, + "Voter {} concluded round {} in set {}. 
Estimate = {:?}, Finalized in round = {:?}", self.config.name(), round, self.set_id, @@ -1102,9 +1112,12 @@ where Self::Signature, >, ) { - warn!(target: "afg", "Detected prevote equivocation in the finality worker: {:?}", equivocation); + warn!( + target: LOG_TARGET, + "Detected prevote equivocation in the finality worker: {:?}", equivocation + ); if let Err(err) = self.report_equivocation(equivocation.into()) { - warn!(target: "afg", "Error reporting prevote equivocation: {}", err); + warn!(target: LOG_TARGET, "Error reporting prevote equivocation: {}", err); } } @@ -1117,9 +1130,12 @@ where Self::Signature, >, ) { - warn!(target: "afg", "Detected precommit equivocation in the finality worker: {:?}", equivocation); + warn!( + target: LOG_TARGET, + "Detected precommit equivocation in the finality worker: {:?}", equivocation + ); if let Err(err) = self.report_equivocation(equivocation.into()) { - warn!(target: "afg", "Error reporting precommit equivocation: {}", err); + warn!(target: LOG_TARGET, "Error reporting precommit equivocation: {}", err); } } } @@ -1158,7 +1174,8 @@ where let base_header = match client.header(BlockId::Hash(block))? { Some(h) => h, None => { - debug!(target: "afg", + debug!( + target: LOG_TARGET, "Encountered error finding best chain containing {:?}: couldn't find base block", block, ); @@ -1172,7 +1189,10 @@ where // proceed onwards. most of the time there will be no pending transition. the limit, if any, is // guaranteed to be higher than or equal to the given base number. let limit = authority_set.current_limit(*base_header.number()); - debug!(target: "afg", "Finding best chain containing block {:?} with number limit {:?}", block, limit); + debug!( + target: LOG_TARGET, + "Finding best chain containing block {:?} with number limit {:?}", block, limit + ); let result = match select_chain.finality_target(block, None).await { Ok(best_hash) => { @@ -1234,7 +1254,10 @@ where .or_else(|| Some((target_header.hash(), *target_header.number()))) }, Err(e) => { - warn!(target: "afg", "Encountered error finding best chain containing {:?}: {}", block, e); + warn!( + target: LOG_TARGET, + "Encountered error finding best chain containing {:?}: {}", block, e + ); None }, }; @@ -1273,7 +1296,7 @@ where // This can happen after a forced change (triggered manually from the runtime when // finality is stalled), since the voter will be restarted at the median last finalized // block, which can be lower than the local best finalized block. 
- warn!(target: "afg", "Re-finalized block #{:?} ({:?}) in the canonical chain, current best finalized is #{:?}", + warn!(target: LOG_TARGET, "Re-finalized block #{:?} ({:?}) in the canonical chain, current best finalized is #{:?}", hash, number, status.finalized_number, @@ -1303,7 +1326,10 @@ where ) { if let Some(sender) = justification_sender { if let Err(err) = sender.notify(justification) { - warn!(target: "afg", "Error creating justification for subscriber: {}", err); + warn!( + target: LOG_TARGET, + "Error creating justification for subscriber: {}", err + ); } } } @@ -1354,11 +1380,16 @@ where client .apply_finality(import_op, hash, persisted_justification, true) .map_err(|e| { - warn!(target: "afg", "Error applying finality to block {:?}: {}", (hash, number), e); + warn!( + target: LOG_TARGET, + "Error applying finality to block {:?}: {}", + (hash, number), + e + ); e })?; - debug!(target: "afg", "Finalizing blocks up to ({:?}, {})", number, hash); + debug!(target: LOG_TARGET, "Finalizing blocks up to ({:?}, {})", number, hash); telemetry!( telemetry; @@ -1376,13 +1407,17 @@ where let (new_id, set_ref) = authority_set.current(); if set_ref.len() > 16 { - afg_log!( + grandpa_log!( initial_sync, "👴 Applying GRANDPA set change to new set with {} authorities", set_ref.len(), ); } else { - afg_log!(initial_sync, "👴 Applying GRANDPA set change to new set {:?}", set_ref); + grandpa_log!( + initial_sync, + "👴 Applying GRANDPA set change to new set {:?}", + set_ref + ); } telemetry!( @@ -1411,8 +1446,11 @@ where ); if let Err(e) = write_result { - warn!(target: "afg", "Failed to write updated authority set to disk. Bailing."); - warn!(target: "afg", "Node is in a potentially inconsistent state."); + warn!( + target: LOG_TARGET, + "Failed to write updated authority set to disk. Bailing." + ); + warn!(target: LOG_TARGET, "Node is in a potentially inconsistent state."); return Err(e.into()) } diff --git a/client/finality-grandpa/src/finality_proof.rs b/client/finality-grandpa/src/finality_proof.rs index 453b41bc63468..43ed0ed31993e 100644 --- a/client/finality-grandpa/src/finality_proof.rs +++ b/client/finality-grandpa/src/finality_proof.rs @@ -52,7 +52,7 @@ use crate::{ authorities::{AuthoritySetChangeId, AuthoritySetChanges}, best_justification, justification::GrandpaJustification, - SharedAuthoritySet, + SharedAuthoritySet, LOG_TARGET, }; const MAX_UNKNOWN_HEADERS: usize = 100_000; @@ -163,7 +163,7 @@ where "Requested finality proof for descendant of #{} while we only have finalized #{}.", block, info.finalized_number, ); - trace!(target: "afg", "{}", &err); + trace!(target: LOG_TARGET, "{}", &err); return Err(FinalityProofError::BlockNotYetFinalized) } @@ -175,7 +175,7 @@ where justification } else { trace!( - target: "afg", + target: LOG_TARGET, "No justification found for the latest finalized block. \ Returning empty proof.", ); @@ -194,7 +194,7 @@ where grandpa_justification } else { trace!( - target: "afg", + target: LOG_TARGET, "No justification found when making finality proof for {}. \ Returning empty proof.", block, @@ -205,7 +205,7 @@ where }, AuthoritySetChangeId::Unknown => { warn!( - target: "afg", + target: LOG_TARGET, "AuthoritySetChanges does not cover the requested block #{} due to missing data. 
\ You need to resync to populate AuthoritySetChanges properly.", block, diff --git a/client/finality-grandpa/src/import.rs b/client/finality-grandpa/src/import.rs index 3715287eea31f..e061c105eeea5 100644 --- a/client/finality-grandpa/src/import.rs +++ b/client/finality-grandpa/src/import.rs @@ -45,6 +45,7 @@ use crate::{ justification::GrandpaJustification, notification::GrandpaJustificationSender, AuthoritySetChanges, ClientForGrandpa, CommandOrError, Error, NewAuthoritySet, VoterCommand, + LOG_TARGET, }; /// A block-import handler for GRANDPA. @@ -589,18 +590,16 @@ where Ok(ImportResult::Imported(aux)) => aux, Ok(r) => { debug!( - target: "afg", - "Restoring old authority set after block import result: {:?}", - r, + target: LOG_TARGET, + "Restoring old authority set after block import result: {:?}", r, ); pending_changes.revert(); return Ok(r) }, Err(e) => { debug!( - target: "afg", - "Restoring old authority set after block import error: {}", - e, + target: LOG_TARGET, + "Restoring old authority set after block import error: {}", e, ); pending_changes.revert(); return Err(ConsensusError::ClientImport(e.to_string())) @@ -665,7 +664,7 @@ where import_res.unwrap_or_else(|err| { if needs_justification { debug!( - target: "afg", + target: LOG_TARGET, "Requesting justification from peers due to imported block #{} that enacts authority set change with invalid justification: {}", number, err @@ -678,7 +677,7 @@ where None => if needs_justification { debug!( - target: "afg", + target: LOG_TARGET, "Imported unjustified block #{} that enacts authority set change, waiting for finality for enactment.", number, ); @@ -803,7 +802,7 @@ where match result { Err(CommandOrError::VoterCommand(command)) => { - afg_log!( + grandpa_log!( initial_sync, "👴 Imported justification for block #{} that triggers \ command {}, signaling voter.", diff --git a/client/finality-grandpa/src/lib.rs b/client/finality-grandpa/src/lib.rs index c1b4962d04a12..efc46d8f93a6d 100644 --- a/client/finality-grandpa/src/lib.rs +++ b/client/finality-grandpa/src/lib.rs @@ -93,10 +93,12 @@ use std::{ time::Duration, }; +const LOG_TARGET: &str = "grandpa"; + // utility logging macro that takes as first argument a conditional to // decide whether to log under debug or info level (useful to restrict // logging under initial sync). -macro_rules! afg_log { +macro_rules! grandpa_log { ($condition:expr, $($msg: expr),+ $(,)?) => { { let log_level = if $condition { @@ -105,7 +107,7 @@ macro_rules! afg_log { log::Level::Info }; - log::log!(target: "afg", log_level, $($msg),+); + log::log!(target: LOG_TARGET, log_level, $($msg),+); } }; } @@ -803,10 +805,11 @@ where ); let voter_work = voter_work.map(|res| match res { - Ok(()) => error!(target: "afg", + Ok(()) => error!( + target: LOG_TARGET, "GRANDPA voter future has concluded naturally, this should be unreachable." ), - Err(e) => error!(target: "afg", "GRANDPA voter error: {}", e), + Err(e) => error!(target: LOG_TARGET, "GRANDPA voter error: {}", e), }); // Make sure that `telemetry_task` doesn't accidentally finish and kill grandpa. @@ -871,7 +874,7 @@ where let metrics = match prometheus_registry.as_ref().map(Metrics::register) { Some(Ok(metrics)) => Some(metrics), Some(Err(e)) => { - debug!(target: "afg", "Failed to register metrics: {:?}", e); + debug!(target: LOG_TARGET, "Failed to register metrics: {:?}", e); None }, None => None, @@ -913,7 +916,12 @@ where /// state. This method should be called when we know that the authority set /// has changed (e.g. as signalled by a voter command). 
fn rebuild_voter(&mut self) { - debug!(target: "afg", "{}: Starting new voter with set ID {}", self.env.config.name(), self.env.set_id); + debug!( + target: LOG_TARGET, + "{}: Starting new voter with set ID {}", + self.env.config.name(), + self.env.set_id + ); let maybe_authority_id = local_authority_id(&self.env.voters, self.env.config.keystore.as_ref()); @@ -974,7 +982,8 @@ where // Repoint shared_voter_state so that the RPC endpoint can query the state if self.shared_voter_state.reset(voter.voter_state()).is_none() { - info!(target: "afg", + info!( + target: LOG_TARGET, "Timed out trying to update shared GRANDPA voter state. \ RPC endpoints may return stale data." ); @@ -1043,7 +1052,7 @@ where Ok(()) }, VoterCommand::Pause(reason) => { - info!(target: "afg", "Pausing old validator set: {}", reason); + info!(target: LOG_TARGET, "Pausing old validator set: {}", reason); // not racing because old voter is shut down. self.env.update_voter_set_state(|voter_set_state| { diff --git a/client/finality-grandpa/src/observer.rs b/client/finality-grandpa/src/observer.rs index 9bcb03c0555c2..1efb71e5903ec 100644 --- a/client/finality-grandpa/src/observer.rs +++ b/client/finality-grandpa/src/observer.rs @@ -43,7 +43,7 @@ use crate::{ environment, global_communication, notification::GrandpaJustificationSender, ClientForGrandpa, CommandOrError, CommunicationIn, Config, Error, LinkHalf, VoterCommand, - VoterSetState, + VoterSetState, LOG_TARGET, }; struct ObserverChain<'a, Block: BlockT, Client> { @@ -145,7 +145,7 @@ where // proceed processing with new finalized block number future::ok(finalized_number) } else { - debug!(target: "afg", "Received invalid commit: ({:?}, {:?})", round, commit); + debug!(target: LOG_TARGET, "Received invalid commit: ({:?}, {:?})", round, commit); finality_grandpa::process_commit_validation_result(validation_result, callback); @@ -317,7 +317,7 @@ where // update it on-disk in case we restart as validator in the future. self.persistent_data.set_state = match command { VoterCommand::Pause(reason) => { - info!(target: "afg", "Pausing old validator set: {}", reason); + info!(target: LOG_TARGET, "Pausing old validator set: {}", reason); let completed_rounds = self.persistent_data.set_state.read().completed_rounds(); let set_state = VoterSetState::Paused { completed_rounds }; diff --git a/client/finality-grandpa/src/until_imported.rs b/client/finality-grandpa/src/until_imported.rs index df0b63348e94b..95b658e92298a 100644 --- a/client/finality-grandpa/src/until_imported.rs +++ b/client/finality-grandpa/src/until_imported.rs @@ -24,7 +24,7 @@ use super::{ BlockStatus as BlockStatusT, BlockSyncRequester as BlockSyncRequesterT, CommunicationIn, Error, - SignedMessage, + SignedMessage, LOG_TARGET, }; use finality_grandpa::voter; @@ -296,7 +296,7 @@ where let next_log = *last_log + LOG_PENDING_INTERVAL; if Instant::now() >= next_log { debug!( - target: "afg", + target: LOG_TARGET, "Waiting to import block {} before {} {} messages can be imported. \ Requesting network sync service to retrieve the block. 
\ Possible fork?", @@ -346,7 +346,7 @@ where fn warn_authority_wrong_target(hash: H, id: AuthorityId) { warn!( - target: "afg", + target: LOG_TARGET, "Authority {:?} signed GRANDPA message with \ wrong block number for hash {}", id, diff --git a/client/merkle-mountain-range/rpc/Cargo.toml b/client/merkle-mountain-range/rpc/Cargo.toml index ca14544000bdb..dcc5e49c52051 100644 --- a/client/merkle-mountain-range/rpc/Cargo.toml +++ b/client/merkle-mountain-range/rpc/Cargo.toml @@ -13,7 +13,7 @@ targets = ["x86_64-unknown-linux-gnu"] [dependencies] codec = { package = "parity-scale-codec", version = "3.0.0" } -jsonrpsee = { version = "0.15.1", features = ["server", "macros"] } +jsonrpsee = { version = "0.16.2", features = ["client-core", "server", "macros"] } serde = { version = "1.0.136", features = ["derive"] } sp-api = { version = "4.0.0-dev", path = "../../../primitives/api" } sp-blockchain = { version = "4.0.0-dev", path = "../../../primitives/blockchain" } diff --git a/client/network/common/src/sync.rs b/client/network/common/src/sync.rs index bed9935698769..5e8219c550d19 100644 --- a/client/network/common/src/sync.rs +++ b/client/network/common/src/sync.rs @@ -24,9 +24,7 @@ pub mod warp; use libp2p::PeerId; use message::{BlockAnnounce, BlockData, BlockRequest, BlockResponse}; -use sc_consensus::{ - import_queue::RuntimeOrigin, BlockImportError, BlockImportStatus, IncomingBlock, -}; +use sc_consensus::{import_queue::RuntimeOrigin, IncomingBlock}; use sp_consensus::BlockOrigin; use sp_runtime::{ traits::{Block as BlockT, NumberFor}, @@ -317,6 +315,12 @@ pub trait ChainSync: Send { response: BlockResponse, ) -> Result, BadPeer>; + /// Process received block data. + fn process_block_response_data( + &mut self, + blocks_to_import: Result<OnBlockData<Block>, BadPeer>, + ); + /// Handle a response from the remote to a justification request that we made. /// /// `request` must be the original request that triggered `response`. @@ -326,17 +330,6 @@ pub trait ChainSync: Send { response: BlockResponse, ) -> Result, BadPeer>; - /// A batch of blocks have been processed, with or without errors. - /// - /// Call this when a batch of blocks have been processed by the import - /// queue, with or without errors. - fn on_blocks_processed( - &mut self, - imported: usize, - count: usize, - results: Vec<(Result>, BlockImportError>, Block::Hash)>, - ) -> Box), BadPeer>>>; - /// Call this when a justification has been processed by the import queue, /// with or without errors. fn on_justification_import( @@ -378,7 +371,7 @@ pub trait ChainSync: Send { /// Call when a peer has disconnected. /// Canceled obsolete block requests may result in some blocks being ready for /// import, so this function checks for such blocks and imports them. - fn peer_disconnected(&mut self, who: &PeerId) -> Option>; + fn peer_disconnected(&mut self, who: &PeerId); /// Return some key metrics. fn metrics(&self) -> Metrics; @@ -395,7 +388,10 @@ pub trait ChainSync: Send { /// Internally calls [`ChainSync::poll_block_announce_validation()`] and /// this function should be polled until it returns [`Poll::Pending`] to /// consume all pending events. 
- fn poll(&mut self, cx: &mut std::task::Context) -> Poll>; + fn poll( + &mut self, + cx: &mut std::task::Context, + ) -> Poll>; /// Send block request to peer fn send_block_request(&mut self, who: PeerId, request: BlockRequest); diff --git a/client/network/src/behaviour.rs b/client/network/src/behaviour.rs index 48d6127f642c3..3a977edbca574 100644 --- a/client/network/src/behaviour.rs +++ b/client/network/src/behaviour.rs @@ -32,7 +32,6 @@ use libp2p::{ NetworkBehaviour, }; -use sc_consensus::import_queue::{IncomingBlock, RuntimeOrigin}; use sc_network_common::{ protocol::{ event::DhtEvent, @@ -43,18 +42,14 @@ use sc_network_common::{ }; use sc_peerset::{PeersetHandle, ReputationChange}; use sp_blockchain::HeaderBackend; -use sp_consensus::BlockOrigin; -use sp_runtime::{ - traits::{Block as BlockT, NumberFor}, - Justifications, -}; +use sp_runtime::traits::Block as BlockT; use std::{collections::HashSet, time::Duration}; pub use crate::request_responses::{InboundFailure, OutboundFailure, RequestId, ResponseFailure}; /// General behaviour of the network. Combines all protocols together. #[derive(NetworkBehaviour)] -#[behaviour(out_event = "BehaviourOut")] +#[behaviour(out_event = "BehaviourOut")] pub struct Behaviour where B: BlockT, @@ -72,10 +67,7 @@ where } /// Event generated by `Behaviour`. -pub enum BehaviourOut { - BlockImport(BlockOrigin, Vec>), - JustificationImport(RuntimeOrigin, B::Hash, NumberFor, Justifications), - +pub enum BehaviourOut { /// Started a random iterative Kademlia discovery query. RandomKademliaStarted, @@ -107,10 +99,7 @@ pub enum BehaviourOut { }, /// A request protocol handler issued reputation changes for the given peer. - ReputationChanges { - peer: PeerId, - changes: Vec, - }, + ReputationChanges { peer: PeerId, changes: Vec }, /// Opened a substream with the given node with the given notifications protocol. 
/// @@ -306,13 +295,9 @@ fn reported_roles_to_observed_role(roles: Roles) -> ObservedRole { } } -impl From> for BehaviourOut { +impl From> for BehaviourOut { fn from(event: CustomMessageOutcome) -> Self { match event { - CustomMessageOutcome::BlockImport(origin, blocks) => - BehaviourOut::BlockImport(origin, blocks), - CustomMessageOutcome::JustificationImport(origin, hash, nb, justification) => - BehaviourOut::JustificationImport(origin, hash, nb, justification), CustomMessageOutcome::NotificationStreamOpened { remote, protocol, @@ -344,7 +329,7 @@ impl From> for BehaviourOut { } } -impl From for BehaviourOut { +impl From for BehaviourOut { fn from(event: request_responses::Event) -> Self { match event { request_responses::Event::InboundRequest { peer, protocol, result } => @@ -357,14 +342,14 @@ impl From for BehaviourOut { } } -impl From for BehaviourOut { +impl From for BehaviourOut { fn from(event: peer_info::PeerInfoEvent) -> Self { let peer_info::PeerInfoEvent::Identified { peer_id, info } = event; BehaviourOut::PeerIdentify { peer_id, info } } } -impl From for BehaviourOut { +impl From for BehaviourOut { fn from(event: DiscoveryOut) -> Self { match event { DiscoveryOut::UnroutablePeer(_peer_id) => { diff --git a/client/network/src/config.rs b/client/network/src/config.rs index b10612dd17094..52993e2519400 100644 --- a/client/network/src/config.rs +++ b/client/network/src/config.rs @@ -40,7 +40,6 @@ use libp2p::{ multiaddr, Multiaddr, }; use prometheus_endpoint::Registry; -use sc_consensus::ImportQueue; use sc_network_common::{ config::{MultiaddrWithPeerId, NonDefaultSetConfig, SetConfig, TransportConfig}, sync::ChainSync, @@ -82,12 +81,6 @@ where /// name on the wire. pub fork_id: Option, - /// Import queue to use. - /// - /// The import queue is the component that verifies that blocks received from other nodes are - /// valid. - pub import_queue: Box>, - /// Instance of chain sync implementation. 
pub chain_sync: Box>, diff --git a/client/network/src/lib.rs b/client/network/src/lib.rs index f3faa44ee6dbd..f185458e0dace 100644 --- a/client/network/src/lib.rs +++ b/client/network/src/lib.rs @@ -258,6 +258,7 @@ pub mod network_state; #[doc(inline)] pub use libp2p::{multiaddr, Multiaddr, PeerId}; pub use protocol::PeerInfo; +use sc_consensus::{JustificationSyncLink, Link}; pub use sc_network_common::{ protocol::{ event::{DhtEvent, Event}, @@ -297,11 +298,15 @@ const MAX_CONNECTIONS_ESTABLISHED_INCOMING: u32 = 10_000; /// Abstraction over syncing-related services pub trait ChainSyncInterface: - NetworkSyncForkRequest> + Send + Sync + NetworkSyncForkRequest> + JustificationSyncLink + Link + Send + Sync { } impl ChainSyncInterface for T where - T: NetworkSyncForkRequest> + Send + Sync + T: NetworkSyncForkRequest> + + JustificationSyncLink + + Link + + Send + + Sync { } diff --git a/client/network/src/protocol.rs b/client/network/src/protocol.rs index 8c1dd39b49be3..10eb31b595253 100644 --- a/client/network/src/protocol.rs +++ b/client/network/src/protocol.rs @@ -29,32 +29,26 @@ use libp2p::{ }, Multiaddr, PeerId, }; -use log::{debug, error, info, log, trace, warn, Level}; +use log::{debug, error, log, trace, warn, Level}; use lru::LruCache; use message::{generic::Message as GenericMessage, Message}; use notifications::{Notifications, NotificationsOut}; use prometheus_endpoint::{register, Gauge, GaugeVec, Opts, PrometheusError, Registry, U64}; use sc_client_api::HeaderBackend; -use sc_consensus::import_queue::{ - BlockImportError, BlockImportStatus, IncomingBlock, RuntimeOrigin, -}; use sc_network_common::{ config::NonReservedPeerMode, error, protocol::{role::Roles, ProtocolName}, sync::{ message::{BlockAnnounce, BlockAnnouncesHandshake, BlockData, BlockResponse, BlockState}, - BadPeer, ChainSync, ImportResult, OnBlockData, PollBlockAnnounceValidation, PollResult, - SyncStatus, + BadPeer, ChainSync, PollBlockAnnounceValidation, SyncStatus, }, utils::{interval, LruHashSet}, }; use sp_arithmetic::traits::SaturatedConversion; -use sp_consensus::BlockOrigin; use sp_runtime::{ generic::BlockId, traits::{Block as BlockT, CheckedSub, Header as HeaderT, NumberFor, Zero}, - Justifications, }; use std::{ collections::{HashMap, HashSet, VecDeque}, @@ -481,12 +475,7 @@ where } if let Some(_peer_data) = self.peers.remove(&peer) { - if let Some(OnBlockData::Import(origin, blocks)) = - self.chain_sync.peer_disconnected(&peer) - { - self.pending_messages - .push_back(CustomMessageOutcome::BlockImport(origin, blocks)); - } + self.chain_sync.peer_disconnected(&peer); self.default_peers_set_no_slot_connected_peers.remove(&peer); Ok(()) } else { @@ -785,25 +774,13 @@ where }], }, ); + self.chain_sync.process_block_response_data(blocks_to_import); if is_best { self.pending_messages.push_back(CustomMessageOutcome::PeerNewBest(who, number)); } - match blocks_to_import { - Ok(OnBlockData::Import(origin, blocks)) => - CustomMessageOutcome::BlockImport(origin, blocks), - Ok(OnBlockData::Request(peer, req)) => { - self.chain_sync.send_block_request(peer, req); - CustomMessageOutcome::None - }, - Ok(OnBlockData::Continue) => CustomMessageOutcome::None, - Err(BadPeer(id, repu)) => { - self.behaviour.disconnect_peer(&id, HARDCODED_PEERSETS_SYNC); - self.peerset_handle.report_peer(id, repu); - CustomMessageOutcome::None - }, - } + CustomMessageOutcome::None } /// Call this when a block has been finalized. 
The sync layer may have some additional @@ -812,58 +789,6 @@ where self.chain_sync.on_block_finalized(&hash, *header.number()) } - /// Request a justification for the given block. - /// - /// Uses `protocol` to queue a new justification request and tries to dispatch all pending - /// requests. - pub fn request_justification(&mut self, hash: &B::Hash, number: NumberFor) { - self.chain_sync.request_justification(hash, number) - } - - /// Clear all pending justification requests. - pub fn clear_justification_requests(&mut self) { - self.chain_sync.clear_justification_requests(); - } - - /// A batch of blocks have been processed, with or without errors. - /// Call this when a batch of blocks have been processed by the importqueue, with or without - /// errors. - pub fn on_blocks_processed( - &mut self, - imported: usize, - count: usize, - results: Vec<(Result>, BlockImportError>, B::Hash)>, - ) { - let results = self.chain_sync.on_blocks_processed(imported, count, results); - for result in results { - match result { - Ok((id, req)) => self.chain_sync.send_block_request(id, req), - Err(BadPeer(id, repu)) => { - self.behaviour.disconnect_peer(&id, HARDCODED_PEERSETS_SYNC); - self.peerset_handle.report_peer(id, repu) - }, - } - } - } - - /// Call this when a justification has been processed by the import queue, with or without - /// errors. - pub fn justification_import_result( - &mut self, - who: PeerId, - hash: B::Hash, - number: NumberFor, - success: bool, - ) { - self.chain_sync.on_justification_import(hash, number, success); - if !success { - info!("💔 Invalid justification provided by {} for #{}", who, hash); - self.behaviour.disconnect_peer(&who, HARDCODED_PEERSETS_SYNC); - self.peerset_handle - .report_peer(who, sc_peerset::ReputationChange::new_fatal("Invalid justification")); - } - } - /// Set whether the syncing peers set is in reserved-only mode. pub fn set_reserved_only(&self, reserved_only: bool) { self.peerset_handle.set_reserved_only(HARDCODED_PEERSETS_SYNC, reserved_only); @@ -997,8 +922,6 @@ where #[derive(Debug)] #[must_use] pub enum CustomMessageOutcome { - BlockImport(BlockOrigin, Vec>), - JustificationImport(RuntimeOrigin, B::Hash, NumberFor, Justifications), /// Notification protocols have been opened with a remote. NotificationStreamOpened { remote: PeerId, @@ -1106,23 +1029,9 @@ where // Process any received requests received from `NetworkService` and // check if there is any block announcement validation finished. 
while let Poll::Ready(result) = self.chain_sync.poll(cx) { - match result { - PollResult::Import(import) => self.pending_messages.push_back(match import { - ImportResult::BlockImport(origin, blocks) => - CustomMessageOutcome::BlockImport(origin, blocks), - ImportResult::JustificationImport(origin, hash, number, justifications) => - CustomMessageOutcome::JustificationImport( - origin, - hash, - number, - justifications, - ), - }), - PollResult::Announce(announce) => - match self.process_block_announce_validation_result(announce) { - CustomMessageOutcome::None => {}, - outcome => self.pending_messages.push_back(outcome), - }, + match self.process_block_announce_validation_result(result) { + CustomMessageOutcome::None => {}, + outcome => self.pending_messages.push_back(outcome), } } diff --git a/client/network/src/service.rs b/client/network/src/service.rs index d35594a07e38a..08e498299a1d3 100644 --- a/client/network/src/service.rs +++ b/client/network/src/service.rs @@ -54,7 +54,6 @@ use libp2p::{ use log::{debug, error, info, trace, warn}; use metrics::{Histogram, HistogramVec, MetricSources, Metrics}; use parking_lot::Mutex; -use sc_consensus::{BlockImportError, BlockImportStatus, ImportQueue, Link}; use sc_network_common::{ config::{MultiaddrWithPeerId, TransportConfig}, error::Error, @@ -450,7 +449,6 @@ where is_major_syncing, network_service: swarm, service, - import_queue: params.import_queue, from_service, event_streams: out_events::OutChannels::new(params.metrics_registry.as_ref())?, peers_notifications_sinks, @@ -748,13 +746,11 @@ impl sc_consensus::JustificationSyncLink for NetworkSe /// On success, the justification will be passed to the import queue that was part at /// initialization as part of the configuration. fn request_justification(&self, hash: &B::Hash, number: NumberFor) { - let _ = self - .to_worker - .unbounded_send(ServiceToWorkerMsg::RequestJustification(*hash, number)); + let _ = self.chain_sync_service.request_justification(hash, number); } fn clear_justification_requests(&self) { - let _ = self.to_worker.unbounded_send(ServiceToWorkerMsg::ClearJustificationRequests); + let _ = self.chain_sync_service.clear_justification_requests(); } } @@ -1208,8 +1204,6 @@ impl<'a> NotificationSenderReadyT for NotificationSenderReady<'a> { /// /// Each entry corresponds to a method of `NetworkService`. enum ServiceToWorkerMsg { - RequestJustification(B::Hash, NumberFor), - ClearJustificationRequests, AnnounceBlock(B::Hash, Option>), GetValue(KademliaKey), PutValue(KademliaKey, Vec), @@ -1261,8 +1255,6 @@ where service: Arc>, /// The *actual* network. network_service: Swarm>, - /// The import queue that was passed at initialization. - import_queue: Box>, /// Messages from the [`NetworkService`] that must be processed. from_service: TracingUnboundedReceiver>, /// Senders for events that happen on the network. @@ -1290,10 +1282,6 @@ where fn poll(mut self: Pin<&mut Self>, cx: &mut std::task::Context) -> Poll { let this = &mut *self; - // Poll the import queue for actions to perform. - this.import_queue - .poll_actions(cx, &mut NetworkLink { protocol: &mut this.network_service }); - // At the time of writing of this comment, due to a high volume of messages, the network // worker sometimes takes a long time to process the loop below. When that happens, the // rest of the polling is frozen. 
In order to avoid negative side-effects caused by this @@ -1322,16 +1310,6 @@ where .behaviour_mut() .user_protocol_mut() .announce_block(hash, data), - ServiceToWorkerMsg::RequestJustification(hash, number) => this - .network_service - .behaviour_mut() - .user_protocol_mut() - .request_justification(&hash, number), - ServiceToWorkerMsg::ClearJustificationRequests => this - .network_service - .behaviour_mut() - .user_protocol_mut() - .clear_justification_requests(), ServiceToWorkerMsg::GetValue(key) => this.network_service.behaviour_mut().get_value(key), ServiceToWorkerMsg::PutValue(key, value) => @@ -1435,23 +1413,6 @@ where match poll_value { Poll::Pending => break, - Poll::Ready(SwarmEvent::Behaviour(BehaviourOut::BlockImport(origin, blocks))) => { - if let Some(metrics) = this.metrics.as_ref() { - metrics.import_queue_blocks_submitted.inc(); - } - this.import_queue.import_blocks(origin, blocks); - }, - Poll::Ready(SwarmEvent::Behaviour(BehaviourOut::JustificationImport( - origin, - hash, - nb, - justifications, - ))) => { - if let Some(metrics) = this.metrics.as_ref() { - metrics.import_queue_justifications_submitted.inc(); - } - this.import_queue.import_justifications(origin, hash, nb, justifications); - }, Poll::Ready(SwarmEvent::Behaviour(BehaviourOut::InboundRequest { protocol, result, @@ -1952,51 +1913,6 @@ where { } -// Implementation of `import_queue::Link` trait using the available local variables. -struct NetworkLink<'a, B, Client> -where - B: BlockT, - Client: HeaderBackend + 'static, -{ - protocol: &'a mut Swarm>, -} - -impl<'a, B, Client> Link for NetworkLink<'a, B, Client> -where - B: BlockT, - Client: HeaderBackend + 'static, -{ - fn blocks_processed( - &mut self, - imported: usize, - count: usize, - results: Vec<(Result>, BlockImportError>, B::Hash)>, - ) { - self.protocol - .behaviour_mut() - .user_protocol_mut() - .on_blocks_processed(imported, count, results) - } - fn justification_imported( - &mut self, - who: PeerId, - hash: &B::Hash, - number: NumberFor, - success: bool, - ) { - self.protocol - .behaviour_mut() - .user_protocol_mut() - .justification_import_result(who, *hash, number, success); - } - fn request_justification(&mut self, hash: &B::Hash, number: NumberFor) { - self.protocol - .behaviour_mut() - .user_protocol_mut() - .request_justification(hash, number) - } -} - fn ensure_addresses_consistent_with_transport<'a>( addresses: impl Iterator, transport: &TransportConfig, diff --git a/client/network/src/service/metrics.rs b/client/network/src/service/metrics.rs index db1b6f7f6500d..a099bba716eb9 100644 --- a/client/network/src/service/metrics.rs +++ b/client/network/src/service/metrics.rs @@ -53,8 +53,6 @@ pub struct Metrics { pub connections_opened_total: CounterVec, pub distinct_peers_connections_closed_total: Counter, pub distinct_peers_connections_opened_total: Counter, - pub import_queue_blocks_submitted: Counter, - pub import_queue_justifications_submitted: Counter, pub incoming_connections_errors_total: CounterVec, pub incoming_connections_total: Counter, pub issued_light_requests: Counter, @@ -103,14 +101,6 @@ impl Metrics { "substrate_sub_libp2p_distinct_peers_connections_opened_total", "Total number of connections opened with distinct peers" )?, registry)?, - import_queue_blocks_submitted: prometheus::register(Counter::new( - "substrate_import_queue_blocks_submitted", - "Number of blocks submitted to the import queue.", - )?, registry)?, - import_queue_justifications_submitted: prometheus::register(Counter::new( - 
"substrate_import_queue_justifications_submitted", - "Number of justifications submitted to the import queue.", - )?, registry)?, incoming_connections_errors_total: prometheus::register(CounterVec::new( Opts::new( "substrate_sub_libp2p_incoming_connections_handshake_errors_total", diff --git a/client/network/src/service/tests/chain_sync.rs b/client/network/src/service/tests/chain_sync.rs index bd4967f25973a..0f47b64c352f2 100644 --- a/client/network/src/service/tests/chain_sync.rs +++ b/client/network/src/service/tests/chain_sync.rs @@ -86,27 +86,26 @@ async fn normal_network_poll_no_peers() { #[tokio::test] async fn request_justification() { - // build `ChainSyncInterface` provider and set no expecations for it (i.e., it cannot be - // called) - let chain_sync_service = - Box::new(MockChainSyncInterface::::new()); - - // build `ChainSync` and verify that call to `request_justification()` is made - let mut chain_sync = - Box::new(MockChainSync::::new()); - let hash = H256::random(); let number = 1337u64; - chain_sync - .expect_request_justification() + // build `ChainSyncInterface` provider and and expect + // `JustificationSyncLink::request_justification() to be called once + let mut chain_sync_service = + Box::new(MockChainSyncInterface::::new()); + + chain_sync_service + .expect_justification_sync_link_request_justification() .withf(move |in_hash, in_number| &hash == in_hash && &number == in_number) .once() .returning(|_, _| ()); + // build `ChainSync` and set default expecations for it + let mut chain_sync = MockChainSync::::new(); + set_default_expecations_no_peers(&mut chain_sync); let mut network = TestNetworkBuilder::new(Handle::current()) - .with_chain_sync((chain_sync, chain_sync_service)) + .with_chain_sync((Box::new(chain_sync), chain_sync_service)) .build(); // send "request justifiction" message and poll the network @@ -121,17 +120,20 @@ async fn request_justification() { #[tokio::test] async fn clear_justification_requests() { - // build `ChainSyncInterface` provider and set no expecations for it (i.e., it cannot be - // called) - let chain_sync_service = + // build `ChainSyncInterface` provider and expect + // `JustificationSyncLink::clear_justification_requests()` to be called + let mut chain_sync_service = Box::new(MockChainSyncInterface::::new()); - // build `ChainSync` and verify that call to `clear_justification_requests()` is made + chain_sync_service + .expect_justification_sync_link_clear_justification_requests() + .once() + .returning(|| ()); + + // build `ChainSync` and set default expecations for it let mut chain_sync = Box::new(MockChainSync::::new()); - chain_sync.expect_clear_justification_requests().once().returning(|| ()); - set_default_expecations_no_peers(&mut chain_sync); let mut network = TestNetworkBuilder::new(Handle::current()) .with_chain_sync((chain_sync, chain_sync_service)) @@ -235,19 +237,13 @@ async fn on_block_finalized() { // and verify that connection to the peer is closed #[tokio::test] async fn invalid_justification_imported() { - struct DummyImportQueue( - Arc< - RwLock< - Option<( - PeerId, - substrate_test_runtime_client::runtime::Hash, - sp_runtime::traits::NumberFor, - )>, - >, - >, - ); + struct DummyImportQueueHandle; - impl sc_consensus::ImportQueue for DummyImportQueue { + impl + sc_consensus::import_queue::ImportQueueService< + substrate_test_runtime_client::runtime::Block, + > for DummyImportQueueHandle + { fn import_blocks( &mut self, _origin: sp_consensus::BlockOrigin, @@ -265,7 +261,23 @@ async fn 
invalid_justification_imported() { _justifications: sp_runtime::Justifications, ) { } + } + struct DummyImportQueue( + Arc< + RwLock< + Option<( + PeerId, + substrate_test_runtime_client::runtime::Hash, + sp_runtime::traits::NumberFor, + )>, + >, + >, + DummyImportQueueHandle, + ); + + #[async_trait::async_trait] + impl sc_consensus::ImportQueue for DummyImportQueue { fn poll_actions( &mut self, _cx: &mut futures::task::Context, @@ -275,13 +287,40 @@ async fn invalid_justification_imported() { link.justification_imported(peer, &hash, number, false); } } + + fn service( + &self, + ) -> Box< + dyn sc_consensus::import_queue::ImportQueueService< + substrate_test_runtime_client::runtime::Block, + >, + > { + Box::new(DummyImportQueueHandle {}) + } + + fn service_ref( + &mut self, + ) -> &mut dyn sc_consensus::import_queue::ImportQueueService< + substrate_test_runtime_client::runtime::Block, + > { + &mut self.1 + } + + async fn run( + self, + _link: Box>, + ) { + } } let justification_info = Arc::new(RwLock::new(None)); let listen_addr = config::build_multiaddr![Memory(rand::random::())]; let (service1, mut event_stream1) = TestNetworkBuilder::new(Handle::current()) - .with_import_queue(Box::new(DummyImportQueue(justification_info.clone()))) + .with_import_queue(Box::new(DummyImportQueue( + justification_info.clone(), + DummyImportQueueHandle {}, + ))) .with_listen_addresses(vec![listen_addr.clone()]) .build() .start_network(); @@ -331,6 +370,7 @@ async fn disconnect_peer_using_chain_sync_handle() { let client = Arc::new(TestClientBuilder::with_default_backend().build_with_longest_chain().0); let listen_addr = config::build_multiaddr![Memory(rand::random::())]; + let import_queue = Box::new(sc_consensus::import_queue::mock::MockImportQueueHandle::new()); let (chain_sync_network_provider, chain_sync_network_handle) = sc_network_sync::service::network::NetworkServiceProvider::new(); let handle_clone = chain_sync_network_handle.clone(); @@ -344,7 +384,9 @@ async fn disconnect_peer_using_chain_sync_handle() { Box::new(sp_consensus::block_validation::DefaultBlockAnnounceValidator), 1u32, None, + None, chain_sync_network_handle.clone(), + import_queue, ProtocolName::from("block-request"), ProtocolName::from("state-request"), None, @@ -353,7 +395,7 @@ async fn disconnect_peer_using_chain_sync_handle() { let (node1, mut event_stream1) = TestNetworkBuilder::new(Handle::current()) .with_listen_addresses(vec![listen_addr.clone()]) - .with_chain_sync((Box::new(chain_sync), chain_sync_service)) + .with_chain_sync((Box::new(chain_sync), Box::new(chain_sync_service))) .with_chain_sync_network((chain_sync_network_provider, chain_sync_network_handle)) .with_client(client.clone()) .build() diff --git a/client/network/src/service/tests/mod.rs b/client/network/src/service/tests/mod.rs index f8635e39e9da9..fa1486a791213 100644 --- a/client/network/src/service/tests/mod.rs +++ b/client/network/src/service/tests/mod.rs @@ -21,7 +21,7 @@ use crate::{config, ChainSyncInterface, NetworkService, NetworkWorker}; use futures::prelude::*; use libp2p::Multiaddr; use sc_client_api::{BlockBackend, HeaderBackend}; -use sc_consensus::ImportQueue; +use sc_consensus::{ImportQueue, Link}; use sc_network_common::{ config::{ NonDefaultSetConfig, NonReservedPeerMode, NotificationHandshake, ProtocolId, SetConfig, @@ -93,6 +93,7 @@ impl TestNetwork { struct TestNetworkBuilder { import_queue: Option>>, + link: Option>>, client: Option>, listen_addresses: Vec, set_config: Option, @@ -106,6 +107,7 @@ impl TestNetworkBuilder { pub fn 
new(rt_handle: Handle) -> Self { Self { import_queue: None, + link: None, client: None, listen_addresses: Vec::new(), set_config: None, @@ -212,13 +214,14 @@ impl TestNetworkBuilder { } } - let import_queue = self.import_queue.unwrap_or(Box::new(sc_consensus::BasicQueue::new( - PassThroughVerifier(false), - Box::new(client.clone()), - None, - &sp_core::testing::TaskExecutor::new(), - None, - ))); + let mut import_queue = + self.import_queue.unwrap_or(Box::new(sc_consensus::BasicQueue::new( + PassThroughVerifier(false), + Box::new(client.clone()), + None, + &sp_core::testing::TaskExecutor::new(), + None, + ))); let protocol_id = ProtocolId::from("test-protocol-name"); let fork_id = Some(String::from("test-fork-id")); @@ -289,15 +292,23 @@ impl TestNetworkBuilder { Box::new(sp_consensus::block_validation::DefaultBlockAnnounceValidator), network_config.max_parallel_downloads, None, + None, chain_sync_network_handle, + import_queue.service(), block_request_protocol_config.name.clone(), state_request_protocol_config.name.clone(), None, ) .unwrap(); - (Box::new(chain_sync), chain_sync_service) + if let None = self.link { + self.link = Some(Box::new(chain_sync_service.clone())); + } + (Box::new(chain_sync), Box::new(chain_sync_service)) }); + let mut link = self + .link + .unwrap_or(Box::new(sc_network_sync::service::mock::MockChainSyncInterface::new())); let handle = self.rt_handle.clone(); let executor = move |f| { @@ -316,7 +327,6 @@ impl TestNetworkBuilder { chain: client.clone(), protocol_id, fork_id, - import_queue, chain_sync, chain_sync_service, metrics_registry: None, @@ -333,6 +343,16 @@ impl TestNetworkBuilder { self.rt_handle.spawn(async move { let _ = chain_sync_network_provider.run(service).await; }); + self.rt_handle.spawn(async move { + loop { + futures::future::poll_fn(|cx| { + import_queue.poll_actions(cx, &mut *link); + std::task::Poll::Ready(()) + }) + .await; + tokio::time::sleep(std::time::Duration::from_millis(250)).await; + } + }); TestNetwork::new(worker, self.rt_handle) } diff --git a/client/network/sync/Cargo.toml b/client/network/sync/Cargo.toml index 086ab3c30cc25..e29d8047161ce 100644 --- a/client/network/sync/Cargo.toml +++ b/client/network/sync/Cargo.toml @@ -28,6 +28,7 @@ prost = "0.11" smallvec = "1.8.0" thiserror = "1.0" fork-tree = { version = "3.0.0", path = "../../../utils/fork-tree" } +prometheus-endpoint = { package = "substrate-prometheus-endpoint", version = "0.10.0-dev", path = "../../../utils/prometheus" } sc-client-api = { version = "4.0.0-dev", path = "../../api" } sc-consensus = { version = "0.10.0-dev", path = "../../consensus/common" } sc-network-common = { version = "0.10.0-dev", path = "../common" } diff --git a/client/network/sync/src/lib.rs b/client/network/sync/src/lib.rs index 697445334a073..75eda91219ec8 100644 --- a/client/network/sync/src/lib.rs +++ b/client/network/sync/src/lib.rs @@ -54,9 +54,12 @@ use futures::{ }; use libp2p::{request_response::OutboundFailure, PeerId}; use log::{debug, error, info, trace, warn}; +use prometheus_endpoint::{register, Counter, PrometheusError, Registry, U64}; use prost::Message; use sc_client_api::{BlockBackend, ProofProvider}; -use sc_consensus::{BlockImportError, BlockImportStatus, IncomingBlock}; +use sc_consensus::{ + import_queue::ImportQueueService, BlockImportError, BlockImportStatus, IncomingBlock, +}; use sc_network_common::{ config::{ NonDefaultSetConfig, NonReservedPeerMode, NotificationHandshake, ProtocolId, SetConfig, @@ -71,8 +74,8 @@ use sc_network_common::{ warp::{EncodedProof, 
WarpProofRequest, WarpSyncPhase, WarpSyncProgress, WarpSyncProvider}, BadPeer, ChainSync as ChainSyncT, ImportResult, Metrics, OnBlockData, OnBlockJustification, OnStateData, OpaqueBlockRequest, OpaqueBlockResponse, OpaqueStateRequest, - OpaqueStateResponse, PeerInfo, PeerRequest, PollBlockAnnounceValidation, PollResult, - SyncMode, SyncState, SyncStatus, + OpaqueStateResponse, PeerInfo, PeerRequest, PollBlockAnnounceValidation, SyncMode, + SyncState, SyncStatus, }, }; use sc_utils::mpsc::{tracing_unbounded, TracingUnboundedReceiver}; @@ -233,6 +236,32 @@ impl Default for AllowedRequests { } } +struct SyncingMetrics { + pub import_queue_blocks_submitted: Counter, + pub import_queue_justifications_submitted: Counter, +} + +impl SyncingMetrics { + fn register(registry: &Registry) -> Result { + Ok(Self { + import_queue_blocks_submitted: register( + Counter::new( + "substrate_sync_import_queue_blocks_submitted", + "Number of blocks submitted to the import queue.", + )?, + registry, + )?, + import_queue_justifications_submitted: register( + Counter::new( + "substrate_sync_import_queue_justifications_submitted", + "Number of justifications submitted to the import queue.", + )?, + registry, + )?, + }) + } +} + struct GapSync { blocks: BlockCollection, best_queued_number: NumberFor, @@ -311,6 +340,10 @@ pub struct ChainSync { warp_sync_protocol_name: Option, /// Pending responses pending_responses: FuturesUnordered>, + /// Handle to import queue. + import_queue: Box>, + /// Metrics. + metrics: Option, } /// All the data we have about a Peer that we are trying to sync with @@ -961,6 +994,19 @@ where Ok(self.validate_and_queue_blocks(new_blocks, gap)) } + fn process_block_response_data(&mut self, blocks_to_import: Result, BadPeer>) { + match blocks_to_import { + Ok(OnBlockData::Import(origin, blocks)) => self.import_blocks(origin, blocks), + Ok(OnBlockData::Request(peer, req)) => self.send_block_request(peer, req), + Ok(OnBlockData::Continue) => {}, + Err(BadPeer(id, repu)) => { + self.network_service + .disconnect_peer(id, self.block_announce_protocol_name.clone()); + self.network_service.report_peer(id, repu); + }, + } + } + fn on_block_justification( &mut self, who: PeerId, @@ -1016,156 +1062,6 @@ where Ok(OnBlockJustification::Nothing) } - fn on_blocks_processed( - &mut self, - imported: usize, - count: usize, - results: Vec<(Result>, BlockImportError>, B::Hash)>, - ) -> Box), BadPeer>>> { - trace!(target: "sync", "Imported {} of {}", imported, count); - - let mut output = Vec::new(); - - let mut has_error = false; - for (_, hash) in &results { - self.queue_blocks.remove(hash); - self.blocks.clear_queued(hash); - if let Some(gap_sync) = &mut self.gap_sync { - gap_sync.blocks.clear_queued(hash); - } - } - for (result, hash) in results { - if has_error { - break - } - - if result.is_err() { - has_error = true; - } - - match result { - Ok(BlockImportStatus::ImportedKnown(number, who)) => - if let Some(peer) = who { - self.update_peer_common_number(&peer, number); - }, - Ok(BlockImportStatus::ImportedUnknown(number, aux, who)) => { - if aux.clear_justification_requests { - trace!( - target: "sync", - "Block imported clears all pending justification requests {}: {:?}", - number, - hash, - ); - self.clear_justification_requests(); - } - - if aux.needs_justification { - trace!( - target: "sync", - "Block imported but requires justification {}: {:?}", - number, - hash, - ); - self.request_justification(&hash, number); - } - - if aux.bad_justification { - if let Some(ref peer) = who { - warn!("💔 Sent 
block with bad justification to import"); - output.push(Err(BadPeer(*peer, rep::BAD_JUSTIFICATION))); - } - } - - if let Some(peer) = who { - self.update_peer_common_number(&peer, number); - } - let state_sync_complete = - self.state_sync.as_ref().map_or(false, |s| s.target() == hash); - if state_sync_complete { - info!( - target: "sync", - "State sync is complete ({} MiB), restarting block sync.", - self.state_sync.as_ref().map_or(0, |s| s.progress().size / (1024 * 1024)), - ); - self.state_sync = None; - self.mode = SyncMode::Full; - output.extend(self.restart()); - } - let warp_sync_complete = self - .warp_sync - .as_ref() - .map_or(false, |s| s.target_block_hash() == Some(hash)); - if warp_sync_complete { - info!( - target: "sync", - "Warp sync is complete ({} MiB), restarting block sync.", - self.warp_sync.as_ref().map_or(0, |s| s.progress().total_bytes / (1024 * 1024)), - ); - self.warp_sync = None; - self.mode = SyncMode::Full; - output.extend(self.restart()); - } - let gap_sync_complete = - self.gap_sync.as_ref().map_or(false, |s| s.target == number); - if gap_sync_complete { - info!( - target: "sync", - "Block history download is complete." - ); - self.gap_sync = None; - } - }, - Err(BlockImportError::IncompleteHeader(who)) => - if let Some(peer) = who { - warn!( - target: "sync", - "💔 Peer sent block with incomplete header to import", - ); - output.push(Err(BadPeer(peer, rep::INCOMPLETE_HEADER))); - output.extend(self.restart()); - }, - Err(BlockImportError::VerificationFailed(who, e)) => - if let Some(peer) = who { - warn!( - target: "sync", - "💔 Verification failed for block {:?} received from peer: {}, {:?}", - hash, - peer, - e, - ); - output.push(Err(BadPeer(peer, rep::VERIFICATION_FAIL))); - output.extend(self.restart()); - }, - Err(BlockImportError::BadBlock(who)) => - if let Some(peer) = who { - warn!( - target: "sync", - "💔 Block {:?} received from peer {} has been blacklisted", - hash, - peer, - ); - output.push(Err(BadPeer(peer, rep::BAD_BLOCK))); - }, - Err(BlockImportError::MissingState) => { - // This may happen if the chain we were requesting upon has been discarded - // in the meantime because other chain has been finalized. - // Don't mark it as bad as it still may be synced if explicitly requested. 
- trace!(target: "sync", "Obsolete block {:?}", hash); - }, - e @ Err(BlockImportError::UnknownParent) | e @ Err(BlockImportError::Other(_)) => { - warn!(target: "sync", "💔 Error importing block {:?}: {}", hash, e.unwrap_err()); - self.state_sync = None; - self.warp_sync = None; - output.extend(self.restart()); - }, - Err(BlockImportError::Cancelled) => {}, - }; - } - - self.allowed_requests.set_all(); - Box::new(output.into_iter()) - } - fn on_justification_import(&mut self, hash: B::Hash, number: NumberFor, success: bool) { let finalization_result = if success { Ok((hash, number)) } else { Err(()) }; self.extra_justifications @@ -1331,7 +1227,7 @@ where } } - fn peer_disconnected(&mut self, who: &PeerId) -> Option> { + fn peer_disconnected(&mut self, who: &PeerId) { self.blocks.clear_peer_download(who); if let Some(gap_sync) = &mut self.gap_sync { gap_sync.blocks.clear_peer_download(who) @@ -1343,8 +1239,13 @@ where target.peers.remove(who); !target.peers.is_empty() }); + let blocks = self.ready_blocks(); - (!blocks.is_empty()).then(|| self.validate_and_queue_blocks(blocks, false)) + if let Some(OnBlockData::Import(origin, blocks)) = + (!blocks.is_empty()).then(|| self.validate_and_queue_blocks(blocks, false)) + { + self.import_blocks(origin, blocks); + } } fn metrics(&self) -> Metrics { @@ -1421,22 +1322,56 @@ where .map_err(|error: codec::Error| error.to_string()) } - fn poll(&mut self, cx: &mut std::task::Context) -> Poll> { + fn poll( + &mut self, + cx: &mut std::task::Context, + ) -> Poll> { while let Poll::Ready(Some(event)) = self.service_rx.poll_next_unpin(cx) { match event { ToServiceCommand::SetSyncForkRequest(peers, hash, number) => { self.set_sync_fork_request(peers, &hash, number); }, + ToServiceCommand::RequestJustification(hash, number) => + self.request_justification(&hash, number), + ToServiceCommand::ClearJustificationRequests => self.clear_justification_requests(), + ToServiceCommand::BlocksProcessed(imported, count, results) => { + for result in self.on_blocks_processed(imported, count, results) { + match result { + Ok((id, req)) => self.send_block_request(id, req), + Err(BadPeer(id, repu)) => { + self.network_service + .disconnect_peer(id, self.block_announce_protocol_name.clone()); + self.network_service.report_peer(id, repu) + }, + } + } + }, + ToServiceCommand::JustificationImported(peer, hash, number, success) => { + self.on_justification_import(hash, number, success); + if !success { + info!(target: "sync", "💔 Invalid justification provided by {} for #{}", peer, hash); + self.network_service + .disconnect_peer(peer, self.block_announce_protocol_name.clone()); + self.network_service.report_peer( + peer, + sc_peerset::ReputationChange::new_fatal("Invalid justification"), + ); + } + }, } } self.process_outbound_requests(); - if let Poll::Ready(result) = self.poll_pending_responses(cx) { - return Poll::Ready(PollResult::Import(result)) + while let Poll::Ready(result) = self.poll_pending_responses(cx) { + match result { + ImportResult::BlockImport(origin, blocks) => self.import_blocks(origin, blocks), + ImportResult::JustificationImport(who, hash, number, justifications) => + self.import_justifications(who, hash, number, justifications), + } } if let Poll::Ready(announce) = self.poll_block_announce_validation(cx) { - return Poll::Ready(PollResult::Announce(announce)) + return Poll::Ready(announce) } Poll::Pending @@ -1494,11 +1429,13 @@ where block_announce_validator: Box + Send>, max_parallel_downloads: u32, warp_sync_provider: Option>>, + metrics_registry: 
Option<&Registry>, network_service: service::network::NetworkServiceHandle, + import_queue: Box>, block_request_protocol_name: ProtocolName, state_request_protocol_name: ProtocolName, warp_sync_protocol_name: Option, - ) -> Result<(Self, Box>, NonDefaultSetConfig), ClientError> { + ) -> Result<(Self, ChainSyncInterfaceHandle, NonDefaultSetConfig), ClientError> { let (tx, service_rx) = tracing_unbounded("mpsc_chain_sync"); let block_announce_config = Self::get_block_announce_proto_config( protocol_id, @@ -1544,10 +1481,22 @@ where .clone() .into(), pending_responses: Default::default(), + import_queue, + metrics: if let Some(r) = &metrics_registry { + match SyncingMetrics::register(r) { + Ok(metrics) => Some(metrics), + Err(err) => { + error!(target: "sync", "Failed to register metrics for ChainSync: {err:?}"); + None + }, + } + } else { + None + }, }; sync.reset_sync_start_point()?; - Ok((sync, Box::new(ChainSyncInterfaceHandle::new(tx)), block_announce_config)) + Ok((sync, ChainSyncInterfaceHandle::new(tx), block_announce_config)) } /// Returns the median seen block number. @@ -2173,8 +2122,10 @@ where if request.fields == BlockAttributes::JUSTIFICATION { match self.on_block_justification(peer_id, block_response) { Ok(OnBlockJustification::Nothing) => None, - Ok(OnBlockJustification::Import { peer, hash, number, justifications }) => - Some(ImportResult::JustificationImport(peer, hash, number, justifications)), + Ok(OnBlockJustification::Import { peer, hash, number, justifications }) => { + self.import_justifications(peer, hash, number, justifications); + None + }, Err(BadPeer(id, repu)) => { self.network_service .disconnect_peer(id, self.block_announce_protocol_name.clone()); @@ -2184,8 +2135,10 @@ where } } else { match self.on_block_data(&peer_id, Some(request), block_response) { - Ok(OnBlockData::Import(origin, blocks)) => - Some(ImportResult::BlockImport(origin, blocks)), + Ok(OnBlockData::Import(origin, blocks)) => { + self.import_blocks(origin, blocks); + None + }, Ok(OnBlockData::Request(peer, req)) => { self.send_block_request(peer, req); None @@ -2712,6 +2665,182 @@ where }, } } + + fn import_blocks(&mut self, origin: BlockOrigin, blocks: Vec>) { + if let Some(metrics) = &self.metrics { + metrics.import_queue_blocks_submitted.inc(); + } + + self.import_queue.import_blocks(origin, blocks); + } + + fn import_justifications( + &mut self, + peer: PeerId, + hash: B::Hash, + number: NumberFor, + justifications: Justifications, + ) { + if let Some(metrics) = &self.metrics { + metrics.import_queue_justifications_submitted.inc(); + } + + self.import_queue.import_justifications(peer, hash, number, justifications); + } + + /// A batch of blocks have been processed, with or without errors. + /// + /// Call this when a batch of blocks have been processed by the import + /// queue, with or without errors. 
+ fn on_blocks_processed( + &mut self, + imported: usize, + count: usize, + results: Vec<(Result>, BlockImportError>, B::Hash)>, + ) -> Box), BadPeer>>> { + trace!(target: "sync", "Imported {} of {}", imported, count); + + let mut output = Vec::new(); + + let mut has_error = false; + for (_, hash) in &results { + self.queue_blocks.remove(hash); + self.blocks.clear_queued(hash); + if let Some(gap_sync) = &mut self.gap_sync { + gap_sync.blocks.clear_queued(hash); + } + } + for (result, hash) in results { + if has_error { + break + } + + if result.is_err() { + has_error = true; + } + + match result { + Ok(BlockImportStatus::ImportedKnown(number, who)) => + if let Some(peer) = who { + self.update_peer_common_number(&peer, number); + }, + Ok(BlockImportStatus::ImportedUnknown(number, aux, who)) => { + if aux.clear_justification_requests { + trace!( + target: "sync", + "Block imported clears all pending justification requests {}: {:?}", + number, + hash, + ); + self.clear_justification_requests(); + } + + if aux.needs_justification { + trace!( + target: "sync", + "Block imported but requires justification {}: {:?}", + number, + hash, + ); + self.request_justification(&hash, number); + } + + if aux.bad_justification { + if let Some(ref peer) = who { + warn!("💔 Sent block with bad justification to import"); + output.push(Err(BadPeer(*peer, rep::BAD_JUSTIFICATION))); + } + } + + if let Some(peer) = who { + self.update_peer_common_number(&peer, number); + } + let state_sync_complete = + self.state_sync.as_ref().map_or(false, |s| s.target() == hash); + if state_sync_complete { + info!( + target: "sync", + "State sync is complete ({} MiB), restarting block sync.", + self.state_sync.as_ref().map_or(0, |s| s.progress().size / (1024 * 1024)), + ); + self.state_sync = None; + self.mode = SyncMode::Full; + output.extend(self.restart()); + } + let warp_sync_complete = self + .warp_sync + .as_ref() + .map_or(false, |s| s.target_block_hash() == Some(hash)); + if warp_sync_complete { + info!( + target: "sync", + "Warp sync is complete ({} MiB), restarting block sync.", + self.warp_sync.as_ref().map_or(0, |s| s.progress().total_bytes / (1024 * 1024)), + ); + self.warp_sync = None; + self.mode = SyncMode::Full; + output.extend(self.restart()); + } + let gap_sync_complete = + self.gap_sync.as_ref().map_or(false, |s| s.target == number); + if gap_sync_complete { + info!( + target: "sync", + "Block history download is complete." + ); + self.gap_sync = None; + } + }, + Err(BlockImportError::IncompleteHeader(who)) => + if let Some(peer) = who { + warn!( + target: "sync", + "💔 Peer sent block with incomplete header to import", + ); + output.push(Err(BadPeer(peer, rep::INCOMPLETE_HEADER))); + output.extend(self.restart()); + }, + Err(BlockImportError::VerificationFailed(who, e)) => + if let Some(peer) = who { + warn!( + target: "sync", + "💔 Verification failed for block {:?} received from peer: {}, {:?}", + hash, + peer, + e, + ); + output.push(Err(BadPeer(peer, rep::VERIFICATION_FAIL))); + output.extend(self.restart()); + }, + Err(BlockImportError::BadBlock(who)) => + if let Some(peer) = who { + warn!( + target: "sync", + "💔 Block {:?} received from peer {} has been blacklisted", + hash, + peer, + ); + output.push(Err(BadPeer(peer, rep::BAD_BLOCK))); + }, + Err(BlockImportError::MissingState) => { + // This may happen if the chain we were requesting upon has been discarded + // in the meantime because other chain has been finalized. + // Don't mark it as bad as it still may be synced if explicitly requested. 
+ trace!(target: "sync", "Obsolete block {:?}", hash); + }, + e @ Err(BlockImportError::UnknownParent) | e @ Err(BlockImportError::Other(_)) => { + warn!(target: "sync", "💔 Error importing block {:?}: {}", hash, e.unwrap_err()); + self.state_sync = None; + self.warp_sync = None; + output.extend(self.restart()); + }, + Err(BlockImportError::Cancelled) => {}, + }; + } + + self.allowed_requests.set_all(); + Box::new(output.into_iter()) + } } // This is purely for a backwards-compatible transitional period and should be removed @@ -3089,6 +3218,7 @@ mod test { let block_announce_validator = Box::new(DefaultBlockAnnounceValidator); let peer_id = PeerId::random(); + let import_queue = Box::new(sc_consensus::import_queue::mock::MockImportQueueHandle::new()); let (_chain_sync_network_provider, chain_sync_network_handle) = NetworkServiceProvider::new(); let (mut sync, _, _) = ChainSync::new( @@ -3100,7 +3230,9 @@ mod test { block_announce_validator, 1, None, + None, chain_sync_network_handle, + import_queue, ProtocolName::from("block-request"), ProtocolName::from("state-request"), None, @@ -3151,6 +3283,7 @@ mod test { #[test] fn restart_doesnt_affect_peers_downloading_finality_data() { let mut client = Arc::new(TestClientBuilder::new().build()); + let import_queue = Box::new(sc_consensus::import_queue::mock::MockImportQueueHandle::new()); let (_chain_sync_network_provider, chain_sync_network_handle) = NetworkServiceProvider::new(); @@ -3163,7 +3296,9 @@ mod test { Box::new(DefaultBlockAnnounceValidator), 1, None, + None, chain_sync_network_handle, + import_queue, ProtocolName::from("block-request"), ProtocolName::from("state-request"), None, @@ -3330,6 +3465,7 @@ mod test { sp_tracing::try_init_simple(); let mut client = Arc::new(TestClientBuilder::new().build()); + let import_queue = Box::new(sc_consensus::import_queue::mock::MockImportQueueHandle::new()); let (_chain_sync_network_provider, chain_sync_network_handle) = NetworkServiceProvider::new(); @@ -3342,7 +3478,9 @@ mod test { Box::new(DefaultBlockAnnounceValidator), 5, None, + None, chain_sync_network_handle, + import_queue, ProtocolName::from("block-request"), ProtocolName::from("state-request"), None, @@ -3453,6 +3591,7 @@ mod test { }; let mut client = Arc::new(TestClientBuilder::new().build()); + let import_queue = Box::new(sc_consensus::import_queue::mock::MockImportQueueHandle::new()); let (_chain_sync_network_provider, chain_sync_network_handle) = NetworkServiceProvider::new(); let info = client.info(); @@ -3466,7 +3605,9 @@ mod test { Box::new(DefaultBlockAnnounceValidator), 5, None, + None, chain_sync_network_handle, + import_queue, ProtocolName::from("block-request"), ProtocolName::from("state-request"), None, @@ -3584,6 +3725,7 @@ mod test { fn can_sync_huge_fork() { sp_tracing::try_init_simple(); + let import_queue = Box::new(sc_consensus::import_queue::mock::MockImportQueueHandle::new()); let (_chain_sync_network_provider, chain_sync_network_handle) = NetworkServiceProvider::new(); let mut client = Arc::new(TestClientBuilder::new().build()); @@ -3619,7 +3761,9 @@ mod test { Box::new(DefaultBlockAnnounceValidator), 5, None, + None, chain_sync_network_handle, + import_queue, ProtocolName::from("block-request"), ProtocolName::from("state-request"), None, @@ -3722,6 +3866,7 @@ mod test { fn syncs_fork_without_duplicate_requests() { sp_tracing::try_init_simple(); + let import_queue = Box::new(sc_consensus::import_queue::mock::MockImportQueueHandle::new()); let (_chain_sync_network_provider, chain_sync_network_handle) =
NetworkServiceProvider::new(); let mut client = Arc::new(TestClientBuilder::new().build()); @@ -3757,7 +3902,9 @@ mod test { Box::new(DefaultBlockAnnounceValidator), 5, None, + None, chain_sync_network_handle, + import_queue, ProtocolName::from("block-request"), ProtocolName::from("state-request"), None, @@ -3881,6 +4028,7 @@ mod test { #[test] fn removes_target_fork_on_disconnect() { sp_tracing::try_init_simple(); + let import_queue = Box::new(sc_consensus::import_queue::mock::MockImportQueueHandle::new()); let (_chain_sync_network_provider, chain_sync_network_handle) = NetworkServiceProvider::new(); let mut client = Arc::new(TestClientBuilder::new().build()); @@ -3895,7 +4043,9 @@ mod test { Box::new(DefaultBlockAnnounceValidator), 1, None, + None, chain_sync_network_handle, + import_queue, ProtocolName::from("block-request"), ProtocolName::from("state-request"), None, @@ -3921,6 +4071,7 @@ mod test { #[test] fn can_import_response_with_missing_blocks() { sp_tracing::try_init_simple(); + let import_queue = Box::new(sc_consensus::import_queue::mock::MockImportQueueHandle::new()); let (_chain_sync_network_provider, chain_sync_network_handle) = NetworkServiceProvider::new(); let mut client2 = Arc::new(TestClientBuilder::new().build()); @@ -3937,7 +4088,9 @@ mod test { Box::new(DefaultBlockAnnounceValidator), 1, None, + None, chain_sync_network_handle, + import_queue, ProtocolName::from("block-request"), ProtocolName::from("state-request"), None, diff --git a/client/network/sync/src/mock.rs b/client/network/sync/src/mock.rs index 48d72c425bd03..b59ea7e4fea70 100644 --- a/client/network/sync/src/mock.rs +++ b/client/network/sync/src/mock.rs @@ -21,11 +21,10 @@ use futures::task::Poll; use libp2p::PeerId; -use sc_consensus::{BlockImportError, BlockImportStatus}; use sc_network_common::sync::{ message::{BlockAnnounce, BlockData, BlockRequest, BlockResponse}, BadPeer, ChainSync as ChainSyncT, Metrics, OnBlockData, OnBlockJustification, - OpaqueBlockResponse, PeerInfo, PollBlockAnnounceValidation, PollResult, SyncStatus, + OpaqueBlockResponse, PeerInfo, PollBlockAnnounceValidation, SyncStatus, }; use sp_runtime::traits::{Block as BlockT, NumberFor}; @@ -60,17 +59,12 @@ mockall::mock! { request: Option>, response: BlockResponse, ) -> Result, BadPeer>; + fn process_block_response_data(&mut self, blocks_to_import: Result, BadPeer>); fn on_block_justification( &mut self, who: PeerId, response: BlockResponse, ) -> Result, BadPeer>; - fn on_blocks_processed( - &mut self, - imported: usize, - count: usize, - results: Vec<(Result>, BlockImportError>, Block::Hash)>, - ) -> Box), BadPeer>>>; fn on_justification_import( &mut self, hash: Block::Hash, @@ -89,7 +83,7 @@ mockall::mock! { &mut self, cx: &mut std::task::Context<'a>, ) -> Poll>; - fn peer_disconnected(&mut self, who: &PeerId) -> Option>; + fn peer_disconnected(&mut self, who: &PeerId); fn metrics(&self) -> Metrics; fn block_response_into_blocks( &self, @@ -99,7 +93,7 @@ mockall::mock! { fn poll<'a>( &mut self, cx: &mut std::task::Context<'a>, - ) -> Poll>; + ) -> Poll>; fn send_block_request( &mut self, who: PeerId, diff --git a/client/network/sync/src/service/chain_sync.rs b/client/network/sync/src/service/chain_sync.rs index cf07c65ee3109..50ded5b643dea 100644 --- a/client/network/sync/src/service/chain_sync.rs +++ b/client/network/sync/src/service/chain_sync.rs @@ -17,6 +17,7 @@ // along with this program. If not, see . 
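The `ChainSyncT` mock changes just above mirror the reshaped trait: `on_blocks_processed` leaves the public surface and `process_block_response_data` takes its place, because block and justification imports are now submitted from inside `ChainSync`. A minimal sketch of the resulting caller pattern, assuming the extraction-stripped signatures read `Result<OnBlockData<B>, BadPeer>` as in `sc-network-common` of this era (the free function itself is hypothetical):

// Hypothetical driver, e.g. a protocol handler holding a boxed ChainSyncT.
fn on_block_response<B: BlockT>(
	sync: &mut dyn ChainSyncT<B>,
	peer: PeerId,
	request: BlockRequest<B>,
	response: BlockResponse<B>,
) {
	// Validate and classify the response; any blocks it yields are imported
	// by `ChainSync` itself rather than being returned to the caller.
	let blocks_to_import = sync.on_block_data(&peer, Some(request), response);
	sync.process_block_response_data(blocks_to_import);
}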
use libp2p::PeerId; +use sc_consensus::{BlockImportError, BlockImportStatus, JustificationSyncLink, Link}; use sc_network_common::service::NetworkSyncForkRequest; use sc_utils::mpsc::TracingUnboundedSender; use sp_runtime::traits::{Block as BlockT, NumberFor}; @@ -25,9 +26,18 @@ use sp_runtime::traits::{Block as BlockT, NumberFor}; #[derive(Debug)] pub enum ToServiceCommand<B: BlockT> { SetSyncForkRequest(Vec<PeerId>, B::Hash, NumberFor<B>), + RequestJustification(B::Hash, NumberFor<B>), + ClearJustificationRequests, + BlocksProcessed( + usize, + usize, + Vec<(Result<BlockImportStatus<NumberFor<B>>, BlockImportError>, B::Hash)>, + ), + JustificationImported(PeerId, B::Hash, NumberFor<B>, bool), } /// Handle for communicating with `ChainSync` asynchronously +#[derive(Clone)] pub struct ChainSyncInterfaceHandle<B: BlockT> { tx: TracingUnboundedSender<ToServiceCommand<B>>, } @@ -56,3 +66,46 @@ impl NetworkSyncForkRequest> .unbounded_send(ToServiceCommand::SetSyncForkRequest(peers, hash, number)); } } + +impl<B: BlockT> JustificationSyncLink<B> for ChainSyncInterfaceHandle<B> { + /// Request a justification for the given block from the network. + /// + /// On success, the justification will be passed to the import queue that was given at + /// initialization as part of the configuration. + fn request_justification(&self, hash: &B::Hash, number: NumberFor<B>) { + let _ = self.tx.unbounded_send(ToServiceCommand::RequestJustification(*hash, number)); + } + + fn clear_justification_requests(&self) { + let _ = self.tx.unbounded_send(ToServiceCommand::ClearJustificationRequests); + } +} + +impl<B: BlockT> Link<B> for ChainSyncInterfaceHandle<B> { + fn blocks_processed( + &mut self, + imported: usize, + count: usize, + results: Vec<(Result<BlockImportStatus<NumberFor<B>>, BlockImportError>, B::Hash)>, + ) { + let _ = self + .tx + .unbounded_send(ToServiceCommand::BlocksProcessed(imported, count, results)); + } + + fn justification_imported( + &mut self, + who: PeerId, + hash: &B::Hash, + number: NumberFor<B>, + success: bool, + ) { + let _ = self + .tx + .unbounded_send(ToServiceCommand::JustificationImported(who, *hash, number, success)); + } + + fn request_justification(&mut self, hash: &B::Hash, number: NumberFor<B>) { + let _ = self.tx.unbounded_send(ToServiceCommand::RequestJustification(*hash, number)); + } +} diff --git a/client/network/sync/src/service/mock.rs b/client/network/sync/src/service/mock.rs index c8a29e1fba8ea..d8aad2fa7bac1 100644 --- a/client/network/sync/src/service/mock.rs +++ b/client/network/sync/src/service/mock.rs @@ -18,6 +18,7 @@ use futures::channel::oneshot; use libp2p::{Multiaddr, PeerId}; +use sc_consensus::{BlockImportError, BlockImportStatus}; use sc_network_common::{ config::MultiaddrWithPeerId, protocol::ProtocolName, @@ -29,13 +30,43 @@ use sp_runtime::traits::{Block as BlockT, NumberFor}; use std::collections::HashSet; mockall::mock!
{ - pub ChainSyncInterface {} + pub ChainSyncInterface { + pub fn justification_sync_link_request_justification(&self, hash: &B::Hash, number: NumberFor); + pub fn justification_sync_link_clear_justification_requests(&self); + } impl NetworkSyncForkRequest> for ChainSyncInterface { fn set_sync_fork_request(&self, peers: Vec, hash: B::Hash, number: NumberFor); } + + impl sc_consensus::Link for ChainSyncInterface { + fn blocks_processed( + &mut self, + imported: usize, + count: usize, + results: Vec<(Result>, BlockImportError>, B::Hash)>, + ); + fn justification_imported( + &mut self, + who: PeerId, + hash: &B::Hash, + number: NumberFor, + success: bool, + ); + fn request_justification(&mut self, hash: &B::Hash, number: NumberFor); + } +} + +impl sc_consensus::JustificationSyncLink for MockChainSyncInterface { + fn request_justification(&self, hash: &B::Hash, number: NumberFor) { + self.justification_sync_link_request_justification(hash, number); + } + + fn clear_justification_requests(&self) { + self.justification_sync_link_clear_justification_requests(); + } } mockall::mock! { diff --git a/client/network/sync/src/tests.rs b/client/network/sync/src/tests.rs index a03e657f03ab2..61de08443a6c2 100644 --- a/client/network/sync/src/tests.rs +++ b/client/network/sync/src/tests.rs @@ -37,6 +37,7 @@ use substrate_test_runtime_client::{TestClientBuilder, TestClientBuilderExt as _ // poll `ChainSync` and verify that a new sync fork request has been registered #[tokio::test] async fn delegate_to_chainsync() { + let import_queue = Box::new(sc_consensus::import_queue::mock::MockImportQueueHandle::new()); let (_chain_sync_network_provider, chain_sync_network_handle) = NetworkServiceProvider::new(); let (mut chain_sync, chain_sync_service, _) = ChainSync::new( sc_network_common::sync::SyncMode::Full, @@ -47,7 +48,9 @@ async fn delegate_to_chainsync() { Box::new(DefaultBlockAnnounceValidator), 1u32, None, + None, chain_sync_network_handle, + import_queue, ProtocolName::from("block-request"), ProtocolName::from("state-request"), None, diff --git a/client/network/test/src/lib.rs b/client/network/test/src/lib.rs index d3642e69cb632..173ca81653b1a 100644 --- a/client/network/test/src/lib.rs +++ b/client/network/test/src/lib.rs @@ -43,8 +43,8 @@ use sc_client_api::{ }; use sc_consensus::{ BasicQueue, BlockCheckParams, BlockImport, BlockImportParams, BoxJustificationImport, - ForkChoiceStrategy, ImportResult, JustificationImport, JustificationSyncLink, LongestChain, - Verifier, + ForkChoiceStrategy, ImportQueue, ImportResult, JustificationImport, JustificationSyncLink, + LongestChain, Verifier, }; use sc_network::{ config::{NetworkConfiguration, RequestResponseConfig, Role, SyncMode}, @@ -896,7 +896,9 @@ where block_announce_validator, network_config.max_parallel_downloads, Some(warp_sync), + None, chain_sync_network_handle, + import_queue.service(), block_request_protocol_config.name.clone(), state_request_protocol_config.name.clone(), Some(warp_protocol_config.name.clone()), @@ -915,9 +917,8 @@ where chain: client.clone(), protocol_id, fork_id, - import_queue, chain_sync: Box::new(chain_sync), - chain_sync_service, + chain_sync_service: Box::new(chain_sync_service.clone()), metrics_registry: None, block_announce_config, request_response_protocol_configs: [ @@ -936,6 +937,9 @@ where self.rt_handle().spawn(async move { chain_sync_network_provider.run(service).await; }); + self.rt_handle().spawn(async move { + import_queue.run(Box::new(chain_sync_service)).await; + }); self.mut_peers(move |peers| { for peer in 
peers.iter_mut() { diff --git a/client/rpc-api/Cargo.toml b/client/rpc-api/Cargo.toml index cb82a3b26706b..c46488db2d8e1 100644 --- a/client/rpc-api/Cargo.toml +++ b/client/rpc-api/Cargo.toml @@ -28,4 +28,4 @@ sp-rpc = { version = "6.0.0", path = "../../primitives/rpc" } sp-runtime = { version = "7.0.0", path = "../../primitives/runtime" } sp-tracing = { version = "6.0.0", path = "../../primitives/tracing" } sp-version = { version = "5.0.0", path = "../../primitives/version" } -jsonrpsee = { version = "0.15.1", features = ["server", "macros"] } +jsonrpsee = { version = "0.16.2", features = ["server", "client-core", "macros"] } diff --git a/client/rpc-servers/Cargo.toml b/client/rpc-servers/Cargo.toml index a3e64c367afb6..b494749ffd26a 100644 --- a/client/rpc-servers/Cargo.toml +++ b/client/rpc-servers/Cargo.toml @@ -14,8 +14,11 @@ targets = ["x86_64-unknown-linux-gnu"] [dependencies] futures = "0.3.21" -jsonrpsee = { version = "0.15.1", features = ["server"] } +jsonrpsee = { version = "0.16.2", features = ["server"] } log = "0.4.17" serde_json = "1.0.85" tokio = { version = "1.22.0", features = ["parking_lot"] } prometheus-endpoint = { package = "substrate-prometheus-endpoint", version = "0.10.0-dev", path = "../../utils/prometheus" } +tower-http = { version = "0.3.4", features = ["cors"] } +tower = "0.4.13" +http = "0.2.8" diff --git a/client/rpc-servers/src/lib.rs b/client/rpc-servers/src/lib.rs index 7eb825e169bfa..1fa2ba81d8672 100644 --- a/client/rpc-servers/src/lib.rs +++ b/client/rpc-servers/src/lib.rs @@ -21,17 +21,21 @@ #![warn(missing_docs)] use jsonrpsee::{ - http_server::{AccessControlBuilder, HttpServerBuilder, HttpServerHandle}, - ws_server::{WsServerBuilder, WsServerHandle}, + server::{ + middleware::proxy_get_request::ProxyGetRequestLayer, AllowHosts, ServerBuilder, + ServerHandle, + }, RpcModule, }; use std::{error::Error as StdError, net::SocketAddr}; -pub use crate::middleware::{RpcMetrics, RpcMiddleware}; +pub use crate::middleware::RpcMetrics; +use http::header::HeaderValue; pub use jsonrpsee::core::{ id_providers::{RandomIntegerIdProvider, RandomStringIdProvider}, traits::IdProvider, }; +use tower_http::cors::{AllowOrigin, CorsLayer}; const MEGABYTE: usize = 1024 * 1024; @@ -46,12 +50,11 @@ const WS_MAX_SUBS_PER_CONN: usize = 1024; pub mod middleware; -/// Type alias for http server -pub type HttpServer = HttpServerHandle; -/// Type alias for ws server -pub type WsServer = WsServerHandle; +/// Type alias JSON-RPC server +pub type Server = ServerHandle; -/// WebSocket specific settings on the server. +/// Server config. +#[derive(Debug, Clone)] pub struct WsConfig { /// Maximum connections. pub max_connections: Option, @@ -67,8 +70,8 @@ impl WsConfig { // Deconstructs the config to get the finalized inner values. // // `Payload size` or `max subs per connection` bigger than u32::MAX will be truncated. 
- fn deconstruct(self) -> (u32, u32, u64, u32) { - let max_conns = self.max_connections.unwrap_or(WS_MAX_CONNECTIONS) as u64; + fn deconstruct(self) -> (u32, u32, u32, u32) { + let max_conns = self.max_connections.unwrap_or(WS_MAX_CONNECTIONS) as u32; let max_payload_in_mb = payload_size_or_default(self.max_payload_in_mb) as u32; let max_payload_out_mb = payload_size_or_default(self.max_payload_out_mb) as u32; let max_subs_per_conn = self.max_subs_per_conn.unwrap_or(WS_MAX_SUBS_PER_CONN) as u32; @@ -86,31 +89,27 @@ pub async fn start_http( metrics: Option, rpc_api: RpcModule, rt: tokio::runtime::Handle, -) -> Result> { - let max_payload_in = payload_size_or_default(max_payload_in_mb); - let max_payload_out = payload_size_or_default(max_payload_out_mb); +) -> Result> { + let max_payload_in = payload_size_or_default(max_payload_in_mb) as u32; + let max_payload_out = payload_size_or_default(max_payload_out_mb) as u32; + let host_filter = hosts_filter(cors.is_some(), &addrs); - let mut acl = AccessControlBuilder::new(); + let middleware = tower::ServiceBuilder::new() + // Proxy `GET /health` requests to internal `system_health` method. + .layer(ProxyGetRequestLayer::new("/health", "system_health")?) + .layer(try_into_cors(cors)?); - if let Some(cors) = cors { - // Whitelist listening address. - // NOTE: set_allowed_hosts will whitelist both ports but only one will used. - acl = acl.set_allowed_hosts(format_allowed_hosts(&addrs[..]))?; - acl = acl.set_allowed_origins(cors)?; - }; - - let builder = HttpServerBuilder::new() - .max_request_body_size(max_payload_in as u32) - .max_response_body_size(max_payload_out as u32) - .set_access_control(acl.build()) - .health_api("/health", "system_health")? - .custom_tokio_runtime(rt); + let builder = ServerBuilder::new() + .max_request_body_size(max_payload_in) + .max_response_body_size(max_payload_out) + .set_host_filtering(host_filter) + .set_middleware(middleware) + .custom_tokio_runtime(rt) + .http_only(); let rpc_api = build_rpc_api(rpc_api); let (handle, addr) = if let Some(metrics) = metrics { - let middleware = RpcMiddleware::new(metrics, "http".into()); - let builder = builder.set_middleware(middleware); - let server = builder.build(&addrs[..]).await?; + let server = builder.set_logger(metrics).build(&addrs[..]).await?; let addr = server.local_addr(); (server.start(rpc_api)?, addr) } else { @@ -120,16 +119,16 @@ pub async fn start_http( }; log::info!( - "Running JSON-RPC HTTP server: addr={}, allowed origins={:?}", + "Running JSON-RPC HTTP server: addr={}, allowed origins={}", addr.map_or_else(|_| "unknown".to_string(), |a| a.to_string()), - cors + format_cors(cors) ); Ok(handle) } -/// Start WS server listening on given address. -pub async fn start_ws( +/// Start a JSON-RPC server listening on given address that supports both HTTP and WS. +pub async fn start( addrs: [SocketAddr; 2], cors: Option<&Vec>, ws_config: WsConfig, @@ -137,27 +136,26 @@ pub async fn start_ws( rpc_api: RpcModule, rt: tokio::runtime::Handle, id_provider: Option>, -) -> Result> { +) -> Result> { let (max_payload_in, max_payload_out, max_connections, max_subs_per_conn) = ws_config.deconstruct(); - let mut acl = AccessControlBuilder::new(); + let host_filter = hosts_filter(cors.is_some(), &addrs); - if let Some(cors) = cors { - // Whitelist listening address. - // NOTE: set_allowed_hosts will whitelist both ports but only one will used. 
- acl = acl.set_allowed_hosts(format_allowed_hosts(&addrs[..]))?; - acl = acl.set_allowed_origins(cors)?; - }; + let middleware = tower::ServiceBuilder::new() + // Proxy `GET /health` requests to internal `system_health` method. + .layer(ProxyGetRequestLayer::new("/health", "system_health")?) + .layer(try_into_cors(cors)?); - let mut builder = WsServerBuilder::new() + let mut builder = ServerBuilder::new() .max_request_body_size(max_payload_in) .max_response_body_size(max_payload_out) .max_connections(max_connections) .max_subscriptions_per_connection(max_subs_per_conn) .ping_interval(std::time::Duration::from_secs(30)) - .custom_tokio_runtime(rt) - .set_access_control(acl.build()); + .set_host_filtering(host_filter) + .set_middleware(middleware) + .custom_tokio_runtime(rt); if let Some(provider) = id_provider { builder = builder.set_id_provider(provider); @@ -167,9 +165,7 @@ pub async fn start_ws( let rpc_api = build_rpc_api(rpc_api); let (handle, addr) = if let Some(metrics) = metrics { - let middleware = RpcMiddleware::new(metrics, "ws".into()); - let builder = builder.set_middleware(middleware); - let server = builder.build(&addrs[..]).await?; + let server = builder.set_logger(metrics).build(&addrs[..]).await?; let addr = server.local_addr(); (server.start(rpc_api)?, addr) } else { @@ -179,23 +175,14 @@ pub async fn start_ws( }; log::info!( - "Running JSON-RPC WS server: addr={}, allowed origins={:?}", + "Running JSON-RPC WS server: addr={}, allowed origins={}", addr.map_or_else(|_| "unknown".to_string(), |a| a.to_string()), - cors + format_cors(cors) ); Ok(handle) } -fn format_allowed_hosts(addrs: &[SocketAddr]) -> Vec { - let mut hosts = Vec::with_capacity(addrs.len() * 2); - for addr in addrs { - hosts.push(format!("localhost:{}", addr.port())); - hosts.push(format!("127.0.0.1:{}", addr.port())); - } - hosts -} - fn build_rpc_api(mut rpc_api: RpcModule) -> RpcModule { let mut available_methods = rpc_api.method_names().collect::>(); available_methods.sort(); @@ -214,3 +201,40 @@ fn build_rpc_api(mut rpc_api: RpcModule) -> RpcModu fn payload_size_or_default(size_mb: Option) -> usize { size_mb.map_or(RPC_MAX_PAYLOAD_DEFAULT, |mb| mb.saturating_mul(MEGABYTE)) } + +fn hosts_filter(enabled: bool, addrs: &[SocketAddr]) -> AllowHosts { + if enabled { + // NOTE The listening addresses are whitelisted by default. + let mut hosts = Vec::with_capacity(addrs.len() * 2); + for addr in addrs { + hosts.push(format!("localhost:{}", addr.port()).into()); + hosts.push(format!("127.0.0.1:{}", addr.port()).into()); + } + AllowHosts::Only(hosts) + } else { + AllowHosts::Any + } +} + +fn try_into_cors( + maybe_cors: Option<&Vec>, +) -> Result> { + if let Some(cors) = maybe_cors { + let mut list = Vec::new(); + for origin in cors { + list.push(HeaderValue::from_str(origin)?); + } + Ok(CorsLayer::new().allow_origin(AllowOrigin::list(list))) + } else { + // allow all cors + Ok(CorsLayer::permissive()) + } +} + +fn format_cors(maybe_cors: Option<&Vec>) -> String { + if let Some(cors) = maybe_cors { + format!("{:?}", cors) + } else { + format!("{:?}", ["*"]) + } +} diff --git a/client/rpc-servers/src/middleware.rs b/client/rpc-servers/src/middleware.rs index 0d77442323241..1c25ac1dfd1b3 100644 --- a/client/rpc-servers/src/middleware.rs +++ b/client/rpc-servers/src/middleware.rs @@ -16,9 +16,9 @@ // You should have received a copy of the GNU General Public License // along with this program. If not, see . -//! RPC middlware to collect prometheus metrics on RPC calls. +//! 
RPC middleware to collect prometheus metrics on RPC calls. -use jsonrpsee::core::middleware::{Headers, HttpMiddleware, MethodKind, Params, WsMiddleware}; +use jsonrpsee::server::logger::{HttpRequest, Logger, MethodKind, Params, TransportProtocol}; use prometheus_endpoint::{ register, Counter, CounterVec, HistogramOpts, HistogramVec, Opts, PrometheusError, Registry, U64, @@ -54,9 +54,9 @@ pub struct RpcMetrics { calls_started: CounterVec<U64>, /// Number of calls completed. calls_finished: CounterVec<U64>, - /// Number of Websocket sessions opened (Websocket only). + /// Number of Websocket sessions opened. ws_sessions_opened: Option<Counter<U64>>, - /// Number of Websocket sessions closed (Websocket only). + /// Number of Websocket sessions closed. ws_sessions_closed: Option<Counter<U64>>, } @@ -139,62 +139,61 @@ impl RpcMetrics { } } -#[derive(Clone)] -/// Middleware for RPC calls -pub struct RpcMiddleware { - metrics: RpcMetrics, - transport_label: &'static str, -} +impl Logger for RpcMetrics { + type Instant = std::time::Instant; -impl RpcMiddleware { - /// Create a new [`RpcMiddleware`] with the provided [`RpcMetrics`]. - pub fn new(metrics: RpcMetrics, transport_label: &'static str) -> Self { - Self { metrics, transport_label } + fn on_connect( + &self, + _remote_addr: SocketAddr, + _request: &HttpRequest, + transport: TransportProtocol, + ) { + if let TransportProtocol::WebSocket = transport { + self.ws_sessions_opened.as_ref().map(|counter| counter.inc()); + } } - /// Called when a new JSON-RPC request comes to the server. - fn on_request(&self) -> std::time::Instant { + fn on_request(&self, transport: TransportProtocol) -> Self::Instant { + let transport_label = transport_label_str(transport); let now = std::time::Instant::now(); - self.metrics.requests_started.with_label_values(&[self.transport_label]).inc(); + self.requests_started.with_label_values(&[transport_label]).inc(); now } - /// Called on each JSON-RPC method call, batch requests will trigger `on_call` multiple times. - fn on_call(&self, name: &str, params: Params, kind: MethodKind) { + fn on_call(&self, name: &str, params: Params, kind: MethodKind, transport: TransportProtocol) { + let transport_label = transport_label_str(transport); log::trace!( target: "rpc_metrics", "[{}] on_call name={} params={:?} kind={}", - self.transport_label, + transport_label, name, params, kind, ); - self.metrics - .calls_started - .with_label_values(&[self.transport_label, name]) - .inc(); + self.calls_started.with_label_values(&[transport_label, name]).inc(); } - /// Called on each JSON-RPC method completion, batch requests will trigger `on_result` multiple - /// times. - fn on_result(&self, name: &str, success: bool, started_at: std::time::Instant) { + fn on_result( + &self, + name: &str, + success: bool, + started_at: Self::Instant, + transport: TransportProtocol, + ) { + let transport_label = transport_label_str(transport); let micros = started_at.elapsed().as_micros(); log::debug!( target: "rpc_metrics", "[{}] {} call took {} μs", - self.transport_label, + transport_label, name, micros, ); - self.metrics - .calls_time - .with_label_values(&[self.transport_label, name]) - .observe(micros as _); + self.calls_time.with_label_values(&[transport_label, name]).observe(micros as _); - self.metrics - .calls_finished + self.calls_finished .with_label_values(&[ - self.transport_label, + transport_label, name, // the label "is_error", so `success` should be regarded as false // and vice-versa to be registered correctly.
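With `RpcMetrics` implementing jsonrpsee's `Logger` directly, the separate per-transport `RpcMiddleware` wrapper disappears: one metrics instance is registered on the server builder and every hook receives the `TransportProtocol`, which `transport_label_str` at the end of this file maps back to the old "http"/"ws" label values. A condensed sketch of the wiring, mirroring the `set_logger` calls introduced in lib.rs above (`metrics`, `rt`, `addrs` and `rpc_api` assumed in scope):

// Sketch: a single logger instance now serves both HTTP and WS transports.
let builder = ServerBuilder::new().custom_tokio_runtime(rt);
let server = match metrics {
	Some(m) => builder.set_logger(m).build(&addrs[..]).await?,
	None => builder.build(&addrs[..]).await?,
};
let handle = server.start(rpc_api)?;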
@@ -203,57 +202,23 @@ impl RpcMiddleware { .inc(); } - /// Called once the JSON-RPC request is finished and response is sent to the output buffer. - fn on_response(&self, _result: &str, started_at: std::time::Instant) { - log::trace!(target: "rpc_metrics", "[{}] on_response started_at={:?}", self.transport_label, started_at); - self.metrics.requests_finished.with_label_values(&[self.transport_label]).inc(); - } -} - -impl WsMiddleware for RpcMiddleware { - type Instant = std::time::Instant; - - fn on_connect(&self, _remote_addr: SocketAddr, _headers: &Headers) { - self.metrics.ws_sessions_opened.as_ref().map(|counter| counter.inc()); - } - - fn on_request(&self) -> Self::Instant { - self.on_request() - } - - fn on_call(&self, name: &str, params: Params, kind: MethodKind) { - self.on_call(name, params, kind) + fn on_response(&self, result: &str, started_at: Self::Instant, transport: TransportProtocol) { + let transport_label = transport_label_str(transport); + log::trace!(target: "rpc_metrics", "[{}] on_response started_at={:?}", transport_label, started_at); + log::trace!(target: "rpc_metrics::extra", "[{}] result={:?}", transport_label, result); + self.requests_finished.with_label_values(&[transport_label]).inc(); } - fn on_result(&self, name: &str, success: bool, started_at: Self::Instant) { - self.on_result(name, success, started_at) - } - - fn on_response(&self, _result: &str, started_at: Self::Instant) { - self.on_response(_result, started_at) - } - - fn on_disconnect(&self, _remote_addr: SocketAddr) { - self.metrics.ws_sessions_closed.as_ref().map(|counter| counter.inc()); + fn on_disconnect(&self, _remote_addr: SocketAddr, transport: TransportProtocol) { + if let TransportProtocol::WebSocket = transport { + self.ws_sessions_closed.as_ref().map(|counter| counter.inc()); + } } } -impl HttpMiddleware for RpcMiddleware { - type Instant = std::time::Instant; - - fn on_request(&self, _remote_addr: SocketAddr, _headers: &Headers) -> Self::Instant { - self.on_request() - } - - fn on_call(&self, name: &str, params: Params, kind: MethodKind) { - self.on_call(name, params, kind) - } - - fn on_result(&self, name: &str, success: bool, started_at: Self::Instant) { - self.on_result(name, success, started_at) - } - - fn on_response(&self, _result: &str, started_at: Self::Instant) { - self.on_response(_result, started_at) +fn transport_label_str(t: TransportProtocol) -> &'static str { + match t { + TransportProtocol::Http => "http", + TransportProtocol::WebSocket => "ws", } } diff --git a/client/rpc-spec-v2/Cargo.toml b/client/rpc-spec-v2/Cargo.toml index a0ae3038378ff..930aeb4bd8956 100644 --- a/client/rpc-spec-v2/Cargo.toml +++ b/client/rpc-spec-v2/Cargo.toml @@ -13,7 +13,7 @@ readme = "README.md" targets = ["x86_64-unknown-linux-gnu"] [dependencies] -jsonrpsee = { version = "0.15.1", features = ["server", "macros"] } +jsonrpsee = { version = "0.16.2", features = ["client-core", "server", "macros"] } # Internal chain structures for "chain_spec". sc-chain-spec = { version = "4.0.0-dev", path = "../chain-spec" } # Pool for submitting extrinsics required by "transaction" diff --git a/client/rpc-spec-v2/src/chain_spec/tests.rs b/client/rpc-spec-v2/src/chain_spec/tests.rs index 6c078b2974e98..6f662ba422bc4 100644 --- a/client/rpc-spec-v2/src/chain_spec/tests.rs +++ b/client/rpc-spec-v2/src/chain_spec/tests.rs @@ -17,7 +17,7 @@ // along with this program. If not, see . 
use super::*; -use jsonrpsee::{types::EmptyParams, RpcModule}; +use jsonrpsee::{types::EmptyServerParams as EmptyParams, RpcModule}; use sc_chain_spec::Properties; const CHAIN_NAME: &'static str = "TEST_CHAIN_NAME"; diff --git a/client/rpc/Cargo.toml b/client/rpc/Cargo.toml index d690e2c7b4cf1..a241807cc242b 100644 --- a/client/rpc/Cargo.toml +++ b/client/rpc/Cargo.toml @@ -16,7 +16,7 @@ targets = ["x86_64-unknown-linux-gnu"] codec = { package = "parity-scale-codec", version = "3.0.0" } futures = "0.3.21" hash-db = { version = "0.15.2", default-features = false } -jsonrpsee = { version = "0.15.1", features = ["server"] } +jsonrpsee = { version = "0.16.2", features = ["server"] } lazy_static = { version = "1.4.0", optional = true } log = "0.4.17" parking_lot = "0.12.1" diff --git a/client/rpc/src/author/tests.rs b/client/rpc/src/author/tests.rs index f969812e5b14c..573d01630de32 100644 --- a/client/rpc/src/author/tests.rs +++ b/client/rpc/src/author/tests.rs @@ -23,7 +23,7 @@ use assert_matches::assert_matches; use codec::Encode; use jsonrpsee::{ core::Error as RpcError, - types::{error::CallError, EmptyParams}, + types::{error::CallError, EmptyServerParams as EmptyParams}, RpcModule, }; use sc_transaction_pool::{BasicPool, FullChainApi}; diff --git a/client/rpc/src/chain/tests.rs b/client/rpc/src/chain/tests.rs index 1e6dbd5aca148..224d021f9409e 100644 --- a/client/rpc/src/chain/tests.rs +++ b/client/rpc/src/chain/tests.rs @@ -19,7 +19,7 @@ use super::*; use crate::testing::{test_executor, timeout_secs}; use assert_matches::assert_matches; -use jsonrpsee::types::EmptyParams; +use jsonrpsee::types::EmptyServerParams as EmptyParams; use sc_block_builder::BlockBuilderProvider; use sp_consensus::BlockOrigin; use sp_rpc::list::ListOrValue; diff --git a/client/rpc/src/state/mod.rs b/client/rpc/src/state/mod.rs index 7213e4360ae2b..fd802e5a80391 100644 --- a/client/rpc/src/state/mod.rs +++ b/client/rpc/src/state/mod.rs @@ -28,9 +28,8 @@ use std::sync::Arc; use crate::SubscriptionTaskExecutor; use jsonrpsee::{ - core::{Error as JsonRpseeError, RpcResult}, + core::{server::rpc_module::SubscriptionSink, Error as JsonRpseeError, RpcResult}, types::SubscriptionResult, - ws_server::SubscriptionSink, }; use sc_rpc_api::{state::ReadProof, DenyUnsafe}; diff --git a/client/rpc/src/state/tests.rs b/client/rpc/src/state/tests.rs index 53dd8ebf50499..3ef59e5ca9a7c 100644 --- a/client/rpc/src/state/tests.rs +++ b/client/rpc/src/state/tests.rs @@ -23,7 +23,7 @@ use assert_matches::assert_matches; use futures::executor; use jsonrpsee::{ core::Error as RpcError, - types::{error::CallError as RpcCallError, EmptyParams, ErrorObject}, + types::{error::CallError as RpcCallError, EmptyServerParams as EmptyParams, ErrorObject}, }; use sc_block_builder::BlockBuilderProvider; use sc_rpc_api::DenyUnsafe; diff --git a/client/rpc/src/system/tests.rs b/client/rpc/src/system/tests.rs index 2f91648008ff7..00ab9c46861e2 100644 --- a/client/rpc/src/system/tests.rs +++ b/client/rpc/src/system/tests.rs @@ -21,7 +21,7 @@ use assert_matches::assert_matches; use futures::prelude::*; use jsonrpsee::{ core::Error as RpcError, - types::{error::CallError, EmptyParams}, + types::{error::CallError, EmptyServerParams as EmptyParams}, RpcModule, }; use sc_network::{self, config::Role, PeerId}; diff --git a/client/service/Cargo.toml b/client/service/Cargo.toml index 87949ef12d888..4d1d267d45c97 100644 --- a/client/service/Cargo.toml +++ b/client/service/Cargo.toml @@ -22,7 +22,7 @@ test-helpers = [] runtime-benchmarks = 
["sc-client-db/runtime-benchmarks"] [dependencies] -jsonrpsee = { version = "0.15.1", features = ["server"] } +jsonrpsee = { version = "0.16.2", features = ["server"] } thiserror = "1.0.30" futures = "0.3.21" rand = "0.7.3" diff --git a/client/service/src/builder.rs b/client/service/src/builder.rs index dd89ce6dff10a..7153672030d6a 100644 --- a/client/service/src/builder.rs +++ b/client/service/src/builder.rs @@ -853,7 +853,9 @@ where block_announce_validator, config.network.max_parallel_downloads, warp_sync_provider, + config.prometheus_config.as_ref().map(|config| config.registry.clone()).as_ref(), chain_sync_network_handle, + import_queue.service(), block_request_protocol_config.name.clone(), state_request_protocol_config.name.clone(), warp_sync_protocol_config.as_ref().map(|config| config.name.clone()), @@ -877,9 +879,8 @@ where chain: client.clone(), protocol_id: protocol_id.clone(), fork_id: config.chain_spec.fork_id().map(ToOwned::to_owned), - import_queue: Box::new(import_queue), chain_sync: Box::new(chain_sync), - chain_sync_service, + chain_sync_service: Box::new(chain_sync_service.clone()), metrics_registry: config.prometheus_config.as_ref().map(|config| config.registry.clone()), block_announce_config, request_response_protocol_configs: request_response_protocol_configs @@ -925,6 +926,7 @@ where Some("networking"), chain_sync_network_provider.run(network.clone()), ); + spawn_handle.spawn("import-queue", None, import_queue.run(Box::new(chain_sync_service))); let (system_rpc_tx, system_rpc_rx) = tracing_unbounded("mpsc_system_rpc"); diff --git a/client/service/src/chain_ops/import_blocks.rs b/client/service/src/chain_ops/import_blocks.rs index c0612124dd0c2..ca09c1658d72f 100644 --- a/client/service/src/chain_ops/import_blocks.rs +++ b/client/service/src/chain_ops/import_blocks.rs @@ -157,7 +157,7 @@ fn import_block_to_queue( let (header, extrinsics) = signed_block.block.deconstruct(); let hash = header.hash(); // import queue handles verification and importing it into the client. - queue.import_blocks( + queue.service_ref().import_blocks( BlockOrigin::File, vec![IncomingBlock:: { hash, diff --git a/client/service/src/lib.rs b/client/service/src/lib.rs index 091b4bbe9fe5f..f0e3f72510c28 100644 --- a/client/service/src/lib.rs +++ b/client/service/src/lib.rs @@ -43,7 +43,6 @@ use log::{debug, error, warn}; use sc_client_api::{blockchain::HeaderBackend, BlockBackend, BlockchainEvents, ProofProvider}; use sc_network::PeerId; use sc_network_common::{config::MultiaddrWithPeerId, service::NetworkBlock}; -use sc_rpc_server::WsConfig; use sc_utils::mpsc::TracingUnboundedReceiver; use sp_blockchain::HeaderMetadata; use sp_consensus::SyncOracle; @@ -294,20 +293,9 @@ async fn build_network_future< // Wrapper for HTTP and WS servers that makes sure they are properly shut down. mod waiting { - pub struct HttpServer(pub Option); + pub struct Server(pub Option); - impl Drop for HttpServer { - fn drop(&mut self) { - if let Some(server) = self.0.take() { - // This doesn't not wait for the server to be stopped but fires the signal. - let _ = server.stop(); - } - } - } - - pub struct WsServer(pub Option); - - impl Drop for WsServer { + impl Drop for Server { fn drop(&mut self) { if let Some(server) = self.0.take() { // This doesn't not wait for the server to be stopped but fires the signal. 
@@ -326,9 +314,6 @@ fn start_rpc_servers( where R: Fn(sc_rpc::DenyUnsafe) -> Result, Error>, { - let (max_request_size, ws_max_response_size, http_max_response_size) = - legacy_cli_parsing(config); - fn deny_unsafe(addr: SocketAddr, methods: &RpcMethods) -> sc_rpc::DenyUnsafe { let is_exposed_addr = !addr.ip().is_loopback(); match (is_exposed_addr, methods) { @@ -337,6 +322,9 @@ where } } + let (max_request_size, ws_max_response_size, http_max_response_size) = + legacy_cli_parsing(config); + let random_port = |mut addr: SocketAddr| { addr.set_port(0); addr @@ -346,6 +334,7 @@ where .rpc_ws .unwrap_or_else(|| "127.0.0.1:9944".parse().expect("valid sockaddr; qed")); let ws_addr2 = random_port(ws_addr); + let http_addr = config .rpc_http .unwrap_or_else(|| "127.0.0.1:9933".parse().expect("valid sockaddr; qed")); @@ -353,29 +342,29 @@ where let metrics = sc_rpc_server::RpcMetrics::new(config.prometheus_registry())?; + let server_config = sc_rpc_server::WsConfig { + max_connections: config.rpc_ws_max_connections, + max_payload_in_mb: max_request_size, + max_payload_out_mb: ws_max_response_size, + max_subs_per_conn: config.rpc_max_subs_per_conn, + }; + let http_fut = sc_rpc_server::start_http( [http_addr, http_addr2], config.rpc_cors.as_ref(), max_request_size, http_max_response_size, metrics.clone(), - gen_rpc_module(deny_unsafe(ws_addr, &config.rpc_methods))?, + gen_rpc_module(deny_unsafe(http_addr, &config.rpc_methods))?, config.tokio_handle.clone(), ); - let ws_config = WsConfig { - max_connections: config.rpc_ws_max_connections, - max_payload_in_mb: max_request_size, - max_payload_out_mb: ws_max_response_size, - max_subs_per_conn: config.rpc_max_subs_per_conn, - }; - - let ws_fut = sc_rpc_server::start_ws( + let ws_fut = sc_rpc_server::start( [ws_addr, ws_addr2], config.rpc_cors.as_ref(), - ws_config, - metrics, - gen_rpc_module(deny_unsafe(http_addr, &config.rpc_methods))?, + server_config.clone(), + metrics.clone(), + gen_rpc_module(deny_unsafe(ws_addr, &config.rpc_methods))?, config.tokio_handle.clone(), rpc_id_provider, ); @@ -383,8 +372,7 @@ where match tokio::task::block_in_place(|| { config.tokio_handle.block_on(futures::future::try_join(http_fut, ws_fut)) }) { - Ok((http, ws)) => - Ok(Box::new((waiting::HttpServer(Some(http)), waiting::WsServer(Some(ws))))), + Ok((http, ws)) => Ok(Box::new((waiting::Server(Some(http)), waiting::Server(Some(ws))))), Err(e) => Err(Error::Application(e)), } } diff --git a/client/state-db/src/lib.rs b/client/state-db/src/lib.rs index 94d41787701b3..5e01a0e063ac1 100644 --- a/client/state-db/src/lib.rs +++ b/client/state-db/src/lib.rs @@ -470,6 +470,10 @@ impl StateDbSync { } } + fn sync(&mut self) { + self.non_canonical.sync(); + } + pub fn get( &self, key: &Q, @@ -573,6 +577,12 @@ impl StateDb { self.db.write().unpin(hash) } + /// Confirm that all changes made to commit sets are on disk. Allows for temporarily pinned + /// blocks to be released. + pub fn sync(&self) { + self.db.write().sync() + } + /// Get a value from non-canonical/pruning overlay or the backing DB. pub fn get( &self, diff --git a/client/state-db/src/noncanonical.rs b/client/state-db/src/noncanonical.rs index df09a9c017747..84ba94c052909 100644 --- a/client/state-db/src/noncanonical.rs +++ b/client/state-db/src/noncanonical.rs @@ -38,6 +38,7 @@ pub struct NonCanonicalOverlay { // would be deleted but kept around because block is pinned, ref counted. 
pinned: HashMap<BlockHash, u32>, pinned_insertions: HashMap<BlockHash, (Vec<Key>, u32)>, + pinned_canonicalized: Vec<BlockHash>, } #[cfg_attr(test, derive(PartialEq, Debug))] @@ -225,6 +226,7 @@ impl NonCanonicalOverlay { pinned: Default::default(), pinned_insertions: Default::default(), values, + pinned_canonicalized: Default::default(), }) } @@ -348,6 +350,18 @@ impl NonCanonicalOverlay { self.last_canonicalized.as_ref().map(|&(_, n)| n) } + /// Confirm that all changes made to commit sets are on disk. Allows for temporarily pinned + /// blocks to be released. + pub fn sync(&mut self) { + let mut pinned = std::mem::take(&mut self.pinned_canonicalized); + for hash in pinned.iter() { + self.unpin(hash) + } + pinned.clear(); + // Reuse the same memory buffer + self.pinned_canonicalized = pinned; + } + /// Select a top-level root and canonicalize it. Discards all sibling subtrees and the root. /// Add a set of changes of the canonicalized block to `CommitSet` /// Return the block number of the canonicalized block @@ -367,6 +381,12 @@ impl NonCanonicalOverlay { .position(|overlay| overlay.hash == *hash) .ok_or(StateDbError::InvalidBlock)?; + // No failures are possible beyond this point. + + // Force pin the canonicalized block so that it is not discarded immediately + self.pin(hash); + self.pinned_canonicalized.push(hash.clone()); + let mut discarded_journals = Vec::new(); let mut discarded_blocks = Vec::new(); for (i, overlay) in level.blocks.into_iter().enumerate() { @@ -680,6 +700,7 @@ mod tests { db.commit(&overlay.insert(&h2, 11, &h1, make_changeset(&[5], &[3])).unwrap()); let mut commit = CommitSet::default(); overlay.canonicalize(&h1, &mut commit).unwrap(); + overlay.unpin(&h1); db.commit(&commit); assert_eq!(overlay.levels.len(), 1); @@ -707,6 +728,7 @@ mod tests { let mut commit = CommitSet::default(); overlay.canonicalize(&h1, &mut commit).unwrap(); db.commit(&commit); + overlay.sync(); assert!(!contains(&overlay, 5)); assert!(contains(&overlay, 7)); assert_eq!(overlay.levels.len(), 1); @@ -714,6 +736,7 @@ mod tests { let mut commit = CommitSet::default(); overlay.canonicalize(&h2, &mut commit).unwrap(); db.commit(&commit); + overlay.sync(); assert_eq!(overlay.levels.len(), 0); assert_eq!(overlay.parents.len(), 0); assert!(db.data_eq(&make_db(&[1, 4, 6, 7, 8]))); @@ -732,6 +755,7 @@ mod tests { let mut commit = CommitSet::default(); overlay.canonicalize(&h_1, &mut commit).unwrap(); db.commit(&commit); + overlay.sync(); assert!(!contains(&overlay, 1)); } @@ -819,6 +843,7 @@ mod tests { let mut commit = CommitSet::default(); overlay.canonicalize(&h_1, &mut commit).unwrap(); db.commit(&commit); + overlay.sync(); assert_eq!(overlay.levels.len(), 2); assert_eq!(overlay.parents.len(), 6); assert!(!contains(&overlay, 1)); @@ -839,6 +864,7 @@ mod tests { let mut commit = CommitSet::default(); overlay.canonicalize(&h_1_2, &mut commit).unwrap(); db.commit(&commit); + overlay.sync(); assert_eq!(overlay.levels.len(), 1); assert_eq!(overlay.parents.len(), 3); assert!(!contains(&overlay, 11)); @@ -855,6 +881,7 @@ mod tests { let mut commit = CommitSet::default(); overlay.canonicalize(&h_1_2_2, &mut commit).unwrap(); db.commit(&commit); + overlay.sync(); assert_eq!(overlay.levels.len(), 0); assert_eq!(overlay.parents.len(), 0); assert!(db.data_eq(&make_db(&[1, 12, 122]))); @@ -938,6 +965,28 @@ mod tests { assert!(!contains(&overlay, 1)); } + #[test] + fn pins_canonicalized() { + let mut db = make_db(&[]); + + let (h_1, c_1) = (H256::random(), make_changeset(&[1], &[])); + let (h_2, c_2) = (H256::random(), make_changeset(&[2], &[])); + + let
mut overlay = NonCanonicalOverlay::::new(&db).unwrap(); + db.commit(&overlay.insert(&h_1, 1, &H256::default(), c_1).unwrap()); + db.commit(&overlay.insert(&h_2, 2, &h_1, c_2).unwrap()); + + let mut commit = CommitSet::default(); + overlay.canonicalize(&h_1, &mut commit).unwrap(); + overlay.canonicalize(&h_2, &mut commit).unwrap(); + assert!(contains(&overlay, 1)); + assert!(contains(&overlay, 2)); + db.commit(&commit); + overlay.sync(); + assert!(!contains(&overlay, 1)); + assert!(!contains(&overlay, 2)); + } + #[test] fn pin_keeps_parent() { let mut db = make_db(&[]); @@ -964,6 +1013,7 @@ mod tests { assert!(contains(&overlay, 1)); overlay.unpin(&h_21); assert!(!contains(&overlay, 1)); + overlay.unpin(&h_12); assert!(overlay.pinned.is_empty()); } @@ -999,6 +1049,7 @@ mod tests { let mut commit = CommitSet::default(); overlay.canonicalize(&h21, &mut commit).unwrap(); // h11 should stay in the DB db.commit(&commit); + overlay.sync(); assert!(!contains(&overlay, 21)); } diff --git a/client/sync-state-rpc/Cargo.toml b/client/sync-state-rpc/Cargo.toml index 9730eb56e9bd6..a72b4106ba873 100644 --- a/client/sync-state-rpc/Cargo.toml +++ b/client/sync-state-rpc/Cargo.toml @@ -13,7 +13,7 @@ targets = ["x86_64-unknown-linux-gnu"] [dependencies] codec = { package = "parity-scale-codec", version = "3.0.0" } -jsonrpsee = { version = "0.15.1", features = ["server", "macros"] } +jsonrpsee = { version = "0.16.2", features = ["client-core", "server", "macros"] } serde = { version = "1.0.136", features = ["derive"] } serde_json = "1.0.85" thiserror = "1.0.30" diff --git a/docs/Upgrading-2.0-to-3.0.md b/docs/Upgrading-2.0-to-3.0.md index 7540e0d5b5b8c..906018db9a707 100644 --- a/docs/Upgrading-2.0-to-3.0.md +++ b/docs/Upgrading-2.0-to-3.0.md @@ -100,12 +100,12 @@ And update the overall definition for weights on frame and a few related types a +/// by Operational extrinsics. +const NORMAL_DISPATCH_RATIO: Perbill = Perbill::from_percent(75); +/// We allow for 2 seconds of compute with a 6 second average block time. -+const MAXIMUM_BLOCK_WEIGHT: Weight = 2u64 * WEIGHT_PER_SECOND; ++const MAXIMUM_BLOCK_WEIGHT: Weight = Weight::from_parts(2u64 * WEIGHT_REF_TIME_PER_SECOND, u64::MAX); + parameter_types! { pub const BlockHashCount: BlockNumber = 2400; - /// We allow for 2 seconds of compute with a 6 second average block time. -- pub const MaximumBlockWeight: Weight = 2u64 * WEIGHT_PER_SECOND; +- pub const MaximumBlockWeight: Weight = Weight::from_parts(2u64 * WEIGHT_REF_TIME_PER_SECOND, u64::MAX); - pub const AvailableBlockRatio: Perbill = Perbill::from_percent(75); - /// Assume 10% of weight for average on_initialize calls. - pub MaximumExtrinsicWeight: Weight = diff --git a/frame/alliance/src/lib.rs b/frame/alliance/src/lib.rs index 7e03da9ac1c7b..790c3c384e701 100644 --- a/frame/alliance/src/lib.rs +++ b/frame/alliance/src/lib.rs @@ -503,6 +503,7 @@ pub mod pallet { /// Add a new proposal to be voted on. /// /// Must be called by a Fellow. + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::propose_proposed( *length_bound, // B T::MaxFellows::get(), // M @@ -524,6 +525,7 @@ pub mod pallet { /// Add an aye or nay vote for the sender to the given proposal. /// /// Must be called by a Fellow. + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::vote(T::MaxFellows::get()))] pub fn vote( origin: OriginFor, @@ -541,6 +543,7 @@ pub mod pallet { /// Close a vote that is either approved, disapproved, or whose voting period has ended. /// /// Must be called by a Fellow. 
+ #[pallet::call_index(2)] #[pallet::weight({ let b = *length_bound; let m = T::MaxFellows::get(); @@ -573,6 +576,7 @@ pub mod pallet { /// The Alliance must be empty, and the call must provide some founding members. /// /// Must be called by the Root origin. + #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::init_members( fellows.len() as u32, allies.len() as u32, @@ -623,6 +627,7 @@ pub mod pallet { /// Disband the Alliance, remove all active members and unreserve deposits. /// /// Witness data must be set. + #[pallet::call_index(4)] #[pallet::weight(T::WeightInfo::disband( witness.fellow_members, witness.ally_members, @@ -673,6 +678,7 @@ pub mod pallet { } /// Set a new IPFS CID to the alliance rule. + #[pallet::call_index(5)] #[pallet::weight(T::WeightInfo::set_rule())] pub fn set_rule(origin: OriginFor, rule: Cid) -> DispatchResult { T::AdminOrigin::ensure_origin(origin)?; @@ -684,6 +690,7 @@ pub mod pallet { } /// Make an announcement of a new IPFS CID about alliance issues. + #[pallet::call_index(6)] #[pallet::weight(T::WeightInfo::announce())] pub fn announce(origin: OriginFor, announcement: Cid) -> DispatchResult { T::AnnouncementOrigin::ensure_origin(origin)?; @@ -699,6 +706,7 @@ pub mod pallet { } /// Remove an announcement. + #[pallet::call_index(7)] #[pallet::weight(T::WeightInfo::remove_announcement())] pub fn remove_announcement(origin: OriginFor, announcement: Cid) -> DispatchResult { T::AnnouncementOrigin::ensure_origin(origin)?; @@ -716,6 +724,7 @@ pub mod pallet { } /// Submit oneself for candidacy. A fixed deposit is reserved. + #[pallet::call_index(8)] #[pallet::weight(T::WeightInfo::join_alliance())] pub fn join_alliance(origin: OriginFor) -> DispatchResult { let who = ensure_signed(origin)?; @@ -752,6 +761,7 @@ pub mod pallet { /// A Fellow can nominate someone to join the alliance as an Ally. There is no deposit /// required from the nominator or nominee. + #[pallet::call_index(9)] #[pallet::weight(T::WeightInfo::nominate_ally())] pub fn nominate_ally(origin: OriginFor, who: AccountIdLookupOf) -> DispatchResult { let nominator = ensure_signed(origin)?; @@ -776,6 +786,7 @@ pub mod pallet { } /// Elevate an Ally to Fellow. + #[pallet::call_index(10)] #[pallet::weight(T::WeightInfo::elevate_ally())] pub fn elevate_ally(origin: OriginFor, ally: AccountIdLookupOf) -> DispatchResult { T::MembershipManager::ensure_origin(origin)?; @@ -792,6 +803,7 @@ pub mod pallet { /// As a member, give a retirement notice and start a retirement period required to pass in /// order to retire. + #[pallet::call_index(11)] #[pallet::weight(T::WeightInfo::give_retirement_notice())] pub fn give_retirement_notice(origin: OriginFor) -> DispatchResult { let who = ensure_signed(origin)?; @@ -814,6 +826,7 @@ pub mod pallet { /// /// This can only be done once you have called `give_retirement_notice` and the /// `RetirementPeriod` has passed. + #[pallet::call_index(12)] #[pallet::weight(T::WeightInfo::retire())] pub fn retire(origin: OriginFor) -> DispatchResult { let who = ensure_signed(origin)?; @@ -836,6 +849,7 @@ pub mod pallet { } /// Kick a member from the Alliance and slash its deposit. + #[pallet::call_index(13)] #[pallet::weight(T::WeightInfo::kick_member())] pub fn kick_member(origin: OriginFor, who: AccountIdLookupOf) -> DispatchResult { T::MembershipManager::ensure_origin(origin)?; @@ -853,6 +867,7 @@ pub mod pallet { } /// Add accounts or websites to the list of unscrupulous items. 
+ #[pallet::call_index(14)] #[pallet::weight(T::WeightInfo::add_unscrupulous_items(items.len() as u32, T::MaxWebsiteUrlLength::get()))] pub fn add_unscrupulous_items( origin: OriginFor, @@ -882,6 +897,7 @@ pub mod pallet { } /// Deem some items no longer unscrupulous. + #[pallet::call_index(15)] #[pallet::weight(>::WeightInfo::remove_unscrupulous_items( items.len() as u32, T::MaxWebsiteUrlLength::get() ))] @@ -907,6 +923,7 @@ pub mod pallet { /// Close a vote that is either approved, disapproved, or whose voting period has ended. /// /// Must be called by a Fellow. + #[pallet::call_index(16)] #[pallet::weight({ let b = *length_bound; let m = T::MaxFellows::get(); @@ -934,6 +951,7 @@ pub mod pallet { /// Abdicate one's position as a voting member and just be an Ally. May be used by Fellows /// who do not want to leave the Alliance but do not have the capacity to participate /// operationally for some time. + #[pallet::call_index(17)] #[pallet::weight(T::WeightInfo::abdicate_fellow_status())] pub fn abdicate_fellow_status(origin: OriginFor) -> DispatchResult { let who = ensure_signed(origin)?; diff --git a/frame/assets/Cargo.toml b/frame/assets/Cargo.toml index 715149b20c042..84bfd9535a461 100644 --- a/frame/assets/Cargo.toml +++ b/frame/assets/Cargo.toml @@ -23,9 +23,9 @@ frame-support = { version = "4.0.0-dev", default-features = false, path = "../su # `system` module provides us with all sorts of useful stuff and macros depend on it being around. frame-system = { version = "4.0.0-dev", default-features = false, path = "../system" } frame-benchmarking = { version = "4.0.0-dev", default-features = false, path = "../benchmarking", optional = true } +sp-core = { version = "7.0.0", default-features = false, path = "../../primitives/core" } [dev-dependencies] -sp-core = { version = "7.0.0", path = "../../primitives/core" } sp-std = { version = "5.0.0", path = "../../primitives/std" } sp-io = { version = "7.0.0", path = "../../primitives/io" } pallet-balances = { version = "4.0.0-dev", path = "../balances" } @@ -35,6 +35,7 @@ default = ["std"] std = [ "codec/std", "scale-info/std", + "sp-core/std", "sp-std/std", "sp-runtime/std", "frame-support/std", diff --git a/frame/assets/src/lib.rs b/frame/assets/src/lib.rs index 2902477d0f2b5..ab589c0eef0f4 100644 --- a/frame/assets/src/lib.rs +++ b/frame/assets/src/lib.rs @@ -563,6 +563,7 @@ pub mod pallet { /// Emits `Created` event when successful. /// /// Weight: `O(1)` + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::create())] pub fn create( origin: OriginFor, @@ -620,6 +621,7 @@ pub mod pallet { /// Emits `ForceCreated` event when successful. /// /// Weight: `O(1)` + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::force_create())] pub fn force_create( origin: OriginFor, @@ -645,6 +647,7 @@ pub mod pallet { /// asset. /// /// The asset class must be frozen before calling `start_destroy`. + #[pallet::call_index(2)] #[pallet::weight(T::WeightInfo::start_destroy())] pub fn start_destroy(origin: OriginFor, id: T::AssetIdParameter) -> DispatchResult { let maybe_check_owner = match T::ForceOrigin::try_origin(origin) { @@ -667,6 +670,7 @@ pub mod pallet { /// asset. /// /// Each call emits the `Event::DestroyedAccounts` event. + #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::destroy_accounts(T::RemoveItemsLimit::get()))] pub fn destroy_accounts( origin: OriginFor, @@ -690,6 +694,7 @@ pub mod pallet { /// asset. /// /// Each call emits the `Event::DestroyedApprovals` event. 
+ #[pallet::call_index(4)] #[pallet::weight(T::WeightInfo::destroy_approvals(T::RemoveItemsLimit::get()))] pub fn destroy_approvals( origin: OriginFor, @@ -711,6 +716,7 @@ pub mod pallet { /// asset. /// /// Each successful call emits the `Event::Destroyed` event. + #[pallet::call_index(5)] #[pallet::weight(T::WeightInfo::finish_destroy())] pub fn finish_destroy(origin: OriginFor, id: T::AssetIdParameter) -> DispatchResult { let _ = ensure_signed(origin)?; @@ -730,6 +736,7 @@ pub mod pallet { /// /// Weight: `O(1)` /// Modes: Pre-existing balance of `beneficiary`; Account pre-existence of `beneficiary`. + #[pallet::call_index(6)] #[pallet::weight(T::WeightInfo::mint())] pub fn mint( origin: OriginFor, @@ -759,6 +766,7 @@ pub mod pallet { /// /// Weight: `O(1)` /// Modes: Post-existence of `who`; Pre & post Zombie-status of `who`. + #[pallet::call_index(7)] #[pallet::weight(T::WeightInfo::burn())] pub fn burn( origin: OriginFor, @@ -793,6 +801,7 @@ pub mod pallet { /// Weight: `O(1)` /// Modes: Pre-existence of `target`; Post-existence of sender; Account pre-existence of /// `target`. + #[pallet::call_index(8)] #[pallet::weight(T::WeightInfo::transfer())] pub fn transfer( origin: OriginFor, @@ -826,6 +835,7 @@ pub mod pallet { /// Weight: `O(1)` /// Modes: Pre-existence of `target`; Post-existence of sender; Account pre-existence of /// `target`. + #[pallet::call_index(9)] #[pallet::weight(T::WeightInfo::transfer_keep_alive())] pub fn transfer_keep_alive( origin: OriginFor, @@ -860,6 +870,7 @@ pub mod pallet { /// Weight: `O(1)` /// Modes: Pre-existence of `dest`; Post-existence of `source`; Account pre-existence of /// `dest`. + #[pallet::call_index(10)] #[pallet::weight(T::WeightInfo::force_transfer())] pub fn force_transfer( origin: OriginFor, @@ -887,6 +898,7 @@ pub mod pallet { /// Emits `Frozen`. /// /// Weight: `O(1)` + #[pallet::call_index(11)] #[pallet::weight(T::WeightInfo::freeze())] pub fn freeze( origin: OriginFor, @@ -923,6 +935,7 @@ pub mod pallet { /// Emits `Thawed`. /// /// Weight: `O(1)` + #[pallet::call_index(12)] #[pallet::weight(T::WeightInfo::thaw())] pub fn thaw( origin: OriginFor, @@ -958,6 +971,7 @@ pub mod pallet { /// Emits `Frozen`. /// /// Weight: `O(1)` + #[pallet::call_index(13)] #[pallet::weight(T::WeightInfo::freeze_asset())] pub fn freeze_asset(origin: OriginFor, id: T::AssetIdParameter) -> DispatchResult { let origin = ensure_signed(origin)?; @@ -984,6 +998,7 @@ pub mod pallet { /// Emits `Thawed`. /// /// Weight: `O(1)` + #[pallet::call_index(14)] #[pallet::weight(T::WeightInfo::thaw_asset())] pub fn thaw_asset(origin: OriginFor, id: T::AssetIdParameter) -> DispatchResult { let origin = ensure_signed(origin)?; @@ -1011,6 +1026,7 @@ pub mod pallet { /// Emits `OwnerChanged`. /// /// Weight: `O(1)` + #[pallet::call_index(15)] #[pallet::weight(T::WeightInfo::transfer_ownership())] pub fn transfer_ownership( origin: OriginFor, @@ -1054,6 +1070,7 @@ pub mod pallet { /// Emits `TeamChanged`. /// /// Weight: `O(1)` + #[pallet::call_index(16)] #[pallet::weight(T::WeightInfo::set_team())] pub fn set_team( origin: OriginFor, @@ -1098,6 +1115,7 @@ pub mod pallet { /// Emits `MetadataSet`. /// /// Weight: `O(1)` + #[pallet::call_index(17)] #[pallet::weight(T::WeightInfo::set_metadata(name.len() as u32, symbol.len() as u32))] pub fn set_metadata( origin: OriginFor, @@ -1122,6 +1140,7 @@ pub mod pallet { /// Emits `MetadataCleared`. 
/// /// Weight: `O(1)` + #[pallet::call_index(18)] #[pallet::weight(T::WeightInfo::clear_metadata())] pub fn clear_metadata(origin: OriginFor, id: T::AssetIdParameter) -> DispatchResult { let origin = ensure_signed(origin)?; @@ -1153,6 +1172,7 @@ pub mod pallet { /// Emits `MetadataSet`. /// /// Weight: `O(N + S)` where N and S are the length of the name and symbol respectively. + #[pallet::call_index(19)] #[pallet::weight(T::WeightInfo::force_set_metadata(name.len() as u32, symbol.len() as u32))] pub fn force_set_metadata( origin: OriginFor, @@ -1204,6 +1224,7 @@ pub mod pallet { /// Emits `MetadataCleared`. /// /// Weight: `O(1)` + #[pallet::call_index(20)] #[pallet::weight(T::WeightInfo::force_clear_metadata())] pub fn force_clear_metadata( origin: OriginFor, @@ -1243,6 +1264,7 @@ pub mod pallet { /// Emits `AssetStatusChanged` with the identity of the asset. /// /// Weight: `O(1)` + #[pallet::call_index(21)] #[pallet::weight(T::WeightInfo::force_asset_status())] pub fn force_asset_status( origin: OriginFor, @@ -1299,6 +1321,7 @@ pub mod pallet { /// Emits `ApprovedTransfer` on success. /// /// Weight: `O(1)` + #[pallet::call_index(22)] #[pallet::weight(T::WeightInfo::approve_transfer())] pub fn approve_transfer( origin: OriginFor, @@ -1325,6 +1348,7 @@ pub mod pallet { /// Emits `ApprovalCancelled` on success. /// /// Weight: `O(1)` + #[pallet::call_index(23)] #[pallet::weight(T::WeightInfo::cancel_approval())] pub fn cancel_approval( origin: OriginFor, @@ -1361,6 +1385,7 @@ pub mod pallet { /// Emits `ApprovalCancelled` on success. /// /// Weight: `O(1)` + #[pallet::call_index(24)] #[pallet::weight(T::WeightInfo::force_cancel_approval())] pub fn force_cancel_approval( origin: OriginFor, @@ -1410,6 +1435,7 @@ pub mod pallet { /// Emits `TransferredApproved` on success. /// /// Weight: `O(1)` + #[pallet::call_index(25)] #[pallet::weight(T::WeightInfo::transfer_approved())] pub fn transfer_approved( origin: OriginFor, @@ -1434,6 +1460,7 @@ pub mod pallet { /// - `id`: The identifier of the asset for the account to be created. /// /// Emits `Touched` event when successful. + #[pallet::call_index(26)] #[pallet::weight(T::WeightInfo::mint())] pub fn touch(origin: OriginFor, id: T::AssetIdParameter) -> DispatchResult { let id: T::AssetId = id.into(); @@ -1448,6 +1475,7 @@ pub mod pallet { /// - `allow_burn`: If `true` then assets may be destroyed in order to complete the refund. /// /// Emits `Refunded` event when successful. + #[pallet::call_index(27)] #[pallet::weight(T::WeightInfo::mint())] pub fn refund( origin: OriginFor, @@ -1459,3 +1487,5 @@ pub mod pallet { } } } + +sp_core::generate_feature_enabled_macro!(runtime_benchmarks_enabled, feature = "runtime-benchmarks", $); diff --git a/frame/atomic-swap/src/lib.rs b/frame/atomic-swap/src/lib.rs index 9c6056497118c..66628e8e6f242 100644 --- a/frame/atomic-swap/src/lib.rs +++ b/frame/atomic-swap/src/lib.rs @@ -243,6 +243,7 @@ pub mod pallet { /// - `duration`: Locked duration of the atomic swap. For safety reasons, it is recommended /// that the revealer uses a shorter duration than the counterparty, to prevent the /// situation where the revealer reveals the proof too late around the end block. + #[pallet::call_index(0)] #[pallet::weight(T::DbWeight::get().reads_writes(1, 1).ref_time().saturating_add(40_000_000))] pub fn create_swap( origin: OriginFor, @@ -278,6 +279,7 @@ pub mod pallet { /// - `proof`: Revealed proof of the claim. /// - `action`: Action defined in the swap, it must match the entry in blockchain. 
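`sp_core::generate_feature_enabled_macro!` (invoked at the end of `frame/assets/src/lib.rs` above, and again in `frame/election-provider-support` below) emits a helper macro that keeps its body only when the named cargo feature is active in the defining crate; that is what lets a downstream pallet such as `pallet-bags-list` gate code on another crate's `runtime-benchmarks` feature. A hand-rolled sketch of the idea (the generated macro's internals differ):

// Sketch only: roughly what such a generated macro boils down to. The real
// macro is generated in the crate that owns the feature, so the `cfg` check
// is evaluated against that crate's features, not the caller's.
macro_rules! runtime_benchmarks_enabled {
    ( $( $it:item )* ) => {
        $( #[cfg(feature = "runtime-benchmarks")] $it )*
    };
}

runtime_benchmarks_enabled! {
    fn score_update_worst_case_stub() -> u32 { 42 }
}

fn main() {
    #[cfg(feature = "runtime-benchmarks")]
    assert_eq!(score_update_worst_case_stub(), 42);
    #[cfg(not(feature = "runtime-benchmarks"))]
    println!("compiled without the runtime-benchmarks feature");
}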
Otherwise /// the operation fails. This is used for weight calculation. + #[pallet::call_index(1)] #[pallet::weight( T::DbWeight::get().reads_writes(1, 1) .saturating_add(action.weight()) @@ -318,6 +320,7 @@ pub mod pallet { /// /// - `target`: Target of the original atomic swap. /// - `hashed_proof`: Hashed proof of the original atomic swap. + #[pallet::call_index(2)] #[pallet::weight(T::DbWeight::get().reads_writes(1, 1).ref_time().saturating_add(40_000_000))] pub fn cancel_swap( origin: OriginFor, diff --git a/frame/aura/src/lib.rs b/frame/aura/src/lib.rs index ff2c5df04a453..635e22c5d58aa 100644 --- a/frame/aura/src/lib.rs +++ b/frame/aura/src/lib.rs @@ -58,6 +58,8 @@ mod tests; pub use pallet::*; +const LOG_TARGET: &str = "runtime::aura"; + #[frame_support::pallet] pub mod pallet { use super::*; @@ -222,7 +224,7 @@ impl OneSessionHandler for Pallet { if last_authorities != next_authorities { if next_authorities.len() as u32 > T::MaxAuthorities::get() { log::warn!( - target: "runtime::aura", + target: LOG_TARGET, "next authorities list larger than {}, truncating", T::MaxAuthorities::get(), ); diff --git a/frame/authorship/src/lib.rs b/frame/authorship/src/lib.rs index c08e773abe3a7..a40f32d36c265 100644 --- a/frame/authorship/src/lib.rs +++ b/frame/authorship/src/lib.rs @@ -237,6 +237,7 @@ pub mod pallet { #[pallet::call] impl Pallet { /// Provide a set of uncles. + #[pallet::call_index(0)] #[pallet::weight((0, DispatchClass::Mandatory))] pub fn set_uncles(origin: OriginFor, new_uncles: Vec) -> DispatchResult { ensure_none(origin)?; diff --git a/frame/babe/src/default_weights.rs b/frame/babe/src/default_weights.rs index d3e0c9d044883..f864fd18d86a6 100644 --- a/frame/babe/src/default_weights.rs +++ b/frame/babe/src/default_weights.rs @@ -19,7 +19,7 @@ //! This file was not auto-generated. 
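Hoisting the log target into a `const LOG_TARGET: &str` (done here for `runtime::aura`, and further down for `runtime::babe`) gives every log statement in the pallet one authoritative target string instead of repeated inline literals. A minimal sketch, assuming the `log` crate as a dependency; the message and value are illustrative:

// Minimal sketch assuming the `log` crate; values are illustrative.
const LOG_TARGET: &str = "runtime::aura";

fn main() {
    let max_authorities = 100u32;
    // Every call site names the same const, so the target cannot drift.
    log::warn!(
        target: LOG_TARGET,
        "next authorities list larger than {}, truncating",
        max_authorities,
    );
}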
use frame_support::weights::{ - constants::{RocksDbWeight as DbWeight, WEIGHT_PER_MICROS, WEIGHT_PER_NANOS}, + constants::{RocksDbWeight as DbWeight, WEIGHT_REF_TIME_PER_MICROS, WEIGHT_REF_TIME_PER_NANOS}, Weight, }; @@ -38,17 +38,20 @@ impl crate::WeightInfo for () { const MAX_NOMINATORS: u64 = 200; // checking membership proof - let ref_time_weight = (35u64 * WEIGHT_PER_MICROS) - .saturating_add((175u64 * WEIGHT_PER_NANOS).saturating_mul(validator_count)) + Weight::from_ref_time(35u64 * WEIGHT_REF_TIME_PER_MICROS) + .saturating_add( + Weight::from_ref_time(175u64 * WEIGHT_REF_TIME_PER_NANOS) + .saturating_mul(validator_count), + ) .saturating_add(DbWeight::get().reads(5)) // check equivocation proof - .saturating_add(110u64 * WEIGHT_PER_MICROS) + .saturating_add(Weight::from_ref_time(110u64 * WEIGHT_REF_TIME_PER_MICROS)) // report offence - .saturating_add(110u64 * WEIGHT_PER_MICROS) - .saturating_add(25u64 * WEIGHT_PER_MICROS * MAX_NOMINATORS) + .saturating_add(Weight::from_ref_time(110u64 * WEIGHT_REF_TIME_PER_MICROS)) + .saturating_add(Weight::from_ref_time( + 25u64 * WEIGHT_REF_TIME_PER_MICROS * MAX_NOMINATORS, + )) .saturating_add(DbWeight::get().reads(14 + 3 * MAX_NOMINATORS)) - .saturating_add(DbWeight::get().writes(10 + 3 * MAX_NOMINATORS)); - - ref_time_weight + .saturating_add(DbWeight::get().writes(10 + 3 * MAX_NOMINATORS)) } } diff --git a/frame/babe/src/equivocation.rs b/frame/babe/src/equivocation.rs index f55bda751887d..70f087fd461f9 100644 --- a/frame/babe/src/equivocation.rs +++ b/frame/babe/src/equivocation.rs @@ -48,7 +48,7 @@ use sp_staking::{ }; use sp_std::prelude::*; -use crate::{Call, Config, Pallet}; +use crate::{Call, Config, Pallet, LOG_TARGET}; /// A trait with utility methods for handling equivocation reports in BABE. /// The trait provides methods for reporting an offence triggered by a valid @@ -161,15 +161,9 @@ where }; match SubmitTransaction::>::submit_unsigned_transaction(call.into()) { - Ok(()) => log::info!( - target: "runtime::babe", - "Submitted BABE equivocation report.", - ), - Err(e) => log::error!( - target: "runtime::babe", - "Error submitting equivocation report: {:?}", - e, - ), + Ok(()) => log::info!(target: LOG_TARGET, "Submitted BABE equivocation report.",), + Err(e) => + log::error!(target: LOG_TARGET, "Error submitting equivocation report: {:?}", e,), } Ok(()) @@ -192,7 +186,7 @@ impl Pallet { TransactionSource::Local | TransactionSource::InBlock => { /* allowed */ }, _ => { log::warn!( - target: "runtime::babe", + target: LOG_TARGET, "rejecting unsigned report equivocation transaction because it is not local/in-block.", ); diff --git a/frame/babe/src/lib.rs b/frame/babe/src/lib.rs index eadaa036332fa..16b2b2119793a 100644 --- a/frame/babe/src/lib.rs +++ b/frame/babe/src/lib.rs @@ -50,6 +50,8 @@ use sp_consensus_vrf::schnorrkel; pub use sp_consensus_babe::{AuthorityId, PUBLIC_KEY_LENGTH, RANDOMNESS_LENGTH, VRF_OUTPUT_LENGTH}; +const LOG_TARGET: &str = "runtime::babe"; + mod default_weights; mod equivocation; mod randomness; @@ -403,6 +405,7 @@ pub mod pallet { /// the equivocation proof and validate the given key ownership proof /// against the extracted offender. If both are valid, the offence will /// be reported. + #[pallet::call_index(0)] #[pallet::weight(::WeightInfo::report_equivocation( key_owner_proof.validator_count(), ))] @@ -424,6 +427,7 @@ pub mod pallet { /// block authors will call it (validated in `ValidateUnsigned`), as such /// if the block author is defined it will be defined as the equivocation /// reporter. 
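The `babe/default_weights.rs` hunk above is part of the move from bare-`u64` weight math (`35 * WEIGHT_PER_MICROS`) to the two-dimensional `Weight` type, built via `Weight::from_ref_time` or `Weight::from_parts(ref_time, proof_size)`. A self-contained toy model of that arithmetic (the real type lives in `sp_weights`/`frame_support`):

// Toy model of the two-part weight; the real `Weight` lives in `sp_weights`.
// `ref_time` is picoseconds of compute, so one microsecond is 10^6.
const WEIGHT_REF_TIME_PER_MICROS: u64 = 1_000_000;

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct Weight {
    ref_time: u64,
    proof_size: u64,
}

impl Weight {
    const fn from_ref_time(ref_time: u64) -> Self {
        Self { ref_time, proof_size: 0 }
    }
    const fn from_parts(ref_time: u64, proof_size: u64) -> Self {
        Self { ref_time, proof_size }
    }
    fn saturating_add(self, rhs: Self) -> Self {
        Self {
            ref_time: self.ref_time.saturating_add(rhs.ref_time),
            proof_size: self.proof_size.saturating_add(rhs.proof_size),
        }
    }
}

fn main() {
    // Mirrors the shape of the rewritten BABE default weights.
    let w = Weight::from_ref_time(35 * WEIGHT_REF_TIME_PER_MICROS)
        .saturating_add(Weight::from_ref_time(110 * WEIGHT_REF_TIME_PER_MICROS));
    assert_eq!(w, Weight::from_parts(145 * WEIGHT_REF_TIME_PER_MICROS, 0));
}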
+ #[pallet::call_index(1)] #[pallet::weight(::WeightInfo::report_equivocation( key_owner_proof.validator_count(), ))] @@ -445,6 +449,7 @@ pub mod pallet { /// the next call to `enact_epoch_change`. The config will be activated one epoch after. /// Multiple calls to this method will replace any existing planned config change that had /// not been enacted yet. + #[pallet::call_index(2)] #[pallet::weight(::WeightInfo::plan_config_change())] pub fn plan_config_change( origin: OriginFor, diff --git a/frame/bags-list/src/lib.rs b/frame/bags-list/src/lib.rs index 2b48fbf0ca630..14f8a613eb798 100644 --- a/frame/bags-list/src/lib.rs +++ b/frame/bags-list/src/lib.rs @@ -224,6 +224,7 @@ pub mod pallet { /// `ScoreProvider`. /// /// If `dislocated` does not exists, it returns an error. + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::rebag_non_terminal().max(T::WeightInfo::rebag_terminal()))] pub fn rebag(origin: OriginFor, dislocated: AccountIdLookupOf) -> DispatchResult { ensure_signed(origin)?; @@ -242,6 +243,7 @@ pub mod pallet { /// Only works if /// - both nodes are within the same bag, /// - and `origin` has a greater `Score` than `lighter`. + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::put_in_front_of())] pub fn put_in_front_of( origin: OriginFor, @@ -357,25 +359,26 @@ impl, I: 'static> SortedListProvider for Pallet List::::unsafe_clear() } - #[cfg(feature = "runtime-benchmarks")] - fn score_update_worst_case(who: &T::AccountId, is_increase: bool) -> Self::Score { - use frame_support::traits::Get as _; - let thresholds = T::BagThresholds::get(); - let node = list::Node::::get(who).unwrap(); - let current_bag_idx = thresholds - .iter() - .chain(sp_std::iter::once(&T::Score::max_value())) - .position(|w| w == &node.bag_upper()) - .unwrap(); - - if is_increase { - let next_threshold_idx = current_bag_idx + 1; - assert!(thresholds.len() > next_threshold_idx); - thresholds[next_threshold_idx] - } else { - assert!(current_bag_idx != 0); - let prev_threshold_idx = current_bag_idx - 1; - thresholds[prev_threshold_idx] + frame_election_provider_support::runtime_benchmarks_enabled! { + fn score_update_worst_case(who: &T::AccountId, is_increase: bool) -> Self::Score { + use frame_support::traits::Get as _; + let thresholds = T::BagThresholds::get(); + let node = list::Node::::get(who).unwrap(); + let current_bag_idx = thresholds + .iter() + .chain(sp_std::iter::once(&T::Score::max_value())) + .position(|w| w == &node.bag_upper) + .unwrap(); + + if is_increase { + let next_threshold_idx = current_bag_idx + 1; + assert!(thresholds.len() > next_threshold_idx); + thresholds[next_threshold_idx] + } else { + assert!(current_bag_idx != 0); + let prev_threshold_idx = current_bag_idx - 1; + thresholds[prev_threshold_idx] + } } } } @@ -387,14 +390,15 @@ impl, I: 'static> ScoreProvider for Pallet { Node::::get(id).map(|node| node.score()).unwrap_or_default() } - #[cfg(any(feature = "runtime-benchmarks", feature = "fuzz", test))] - fn set_score_of(id: &T::AccountId, new_score: T::Score) { - ListNodes::::mutate(id, |maybe_node| { - if let Some(node) = maybe_node.as_mut() { - node.set_score(new_score) - } else { - panic!("trying to mutate {:?} which does not exists", id); - } - }) + frame_election_provider_support::runtime_benchmarks_or_fuzz_enabled! 
{ + fn set_score_of(id: &T::AccountId, new_score: T::Score) { + ListNodes::::mutate(id, |maybe_node| { + if let Some(node) = maybe_node.as_mut() { + node.score = new_score; + } else { + panic!("trying to mutate {:?} which does not exists", id); + } + }) + } } } diff --git a/frame/balances/README.md b/frame/balances/README.md index d32fffbf0e7ad..93e424a89c721 100644 --- a/frame/balances/README.md +++ b/frame/balances/README.md @@ -57,7 +57,7 @@ that you need, then you can avoid coupling with the Balances module. fungible assets system. - [`ReservableCurrency`](https://docs.rs/frame-support/latest/frame_support/traits/trait.ReservableCurrency.html): Functions for dealing with assets that can be reserved from an account. -- [`Lockable`](https://docs.rs/frame-support/latest/frame_support/traits/fungibles/trait.Lockable.html): Functions for +- [`LockableCurrency`](https://docs.rs/frame-support/latest/frame_support/traits/trait.LockableCurrency.html): Functions for dealing with accounts that allow liquidity restrictions. - [`Imbalance`](https://docs.rs/frame-support/latest/frame_support/traits/trait.Imbalance.html): Functions for handling imbalances between total issuance in the system and account balances. Must be used when a function @@ -88,13 +88,13 @@ pub type NegativeImbalanceOf = <::Currency as Currency<; + type Currency: LockableCurrency; } fn update_ledger( diff --git a/frame/balances/src/lib.rs b/frame/balances/src/lib.rs index d74de37e993f7..57f76b1ff679d 100644 --- a/frame/balances/src/lib.rs +++ b/frame/balances/src/lib.rs @@ -79,7 +79,7 @@ //! - [`ReservableCurrency`](frame_support::traits::ReservableCurrency): //! - [`NamedReservableCurrency`](frame_support::traits::NamedReservableCurrency): //! Functions for dealing with assets that can be reserved from an account. -//! - [`Lockable`](frame_support::traits::fungibles::Lockable): Functions for +//! - [`LockableCurrency`](frame_support::traits::LockableCurrency): Functions for //! dealing with accounts that allow liquidity restrictions. //! - [`Imbalance`](frame_support::traits::Imbalance): Functions for handling //! imbalances between total issuance in the system and account balances. Must be used when a @@ -113,13 +113,13 @@ //! # fn main() {} //! ``` //! -//! The Staking pallet uses the `fungibles::Lockable` trait to lock a stash account's funds: +//! The Staking pallet uses the `LockableCurrency` trait to lock a stash account's funds: //! //! ``` -//! use frame_support::traits::{WithdrawReasons, fungibles, fungibles::Lockable}; +//! use frame_support::traits::{WithdrawReasons, LockableCurrency}; //! use sp_runtime::traits::Bounded; //! pub trait Config: frame_system::Config { -//! type Currency: fungibles::Lockable; +//! type Currency: LockableCurrency; //! } //! # struct StakingLedger { //! 
# stash: ::AccountId, @@ -171,13 +171,11 @@ use frame_support::{ ensure, pallet_prelude::DispatchResult, traits::{ - tokens::{ - fungible, fungibles, BalanceStatus as Status, DepositConsequence, WithdrawConsequence, - }, + tokens::{fungible, BalanceStatus as Status, DepositConsequence, WithdrawConsequence}, Currency, DefensiveSaturating, ExistenceRequirement, ExistenceRequirement::{AllowDeath, KeepAlive}, - Get, Imbalance, NamedReservableCurrency, OnUnbalanced, ReservableCurrency, SignedImbalance, - StoredMap, TryDrop, WithdrawReasons, + Get, Imbalance, LockIdentifier, LockableCurrency, NamedReservableCurrency, OnUnbalanced, + ReservableCurrency, SignedImbalance, StoredMap, TryDrop, WithdrawReasons, }, WeakBoundedVec, }; @@ -284,6 +282,7 @@ pub mod pallet { /// --------------------------------- /// - Origin account is already in memory, so no DB operations for them. /// # + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::transfer())] pub fn transfer( origin: OriginFor, @@ -309,6 +308,7 @@ pub mod pallet { /// it will reset the account nonce (`frame_system::AccountNonce`). /// /// The dispatch origin for this call is `root`. + #[pallet::call_index(1)] #[pallet::weight( T::WeightInfo::set_balance_creating() // Creates a new account. .max(T::WeightInfo::set_balance_killing()) // Kills an existing account. @@ -362,6 +362,7 @@ pub mod pallet { /// - Same as transfer, but additional read and write because the source account is not /// assumed to be in the overlay. /// # + #[pallet::call_index(2)] #[pallet::weight(T::WeightInfo::force_transfer())] pub fn force_transfer( origin: OriginFor, @@ -387,6 +388,7 @@ pub mod pallet { /// 99% of the time you want [`transfer`] instead. /// /// [`transfer`]: struct.Pallet.html#method.transfer + #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::transfer_keep_alive())] pub fn transfer_keep_alive( origin: OriginFor, @@ -416,6 +418,7 @@ pub mod pallet { /// keep the sender account alive (true). # /// - O(1). Just like transfer, but reading the user's transferable balance first. /// # + #[pallet::call_index(4)] #[pallet::weight(T::WeightInfo::transfer_all())] pub fn transfer_all( origin: OriginFor, @@ -434,6 +437,7 @@ pub mod pallet { /// Unreserve some balance from a user by force. /// /// Can only be called by ROOT. + #[pallet::call_index(5)] #[pallet::weight(T::WeightInfo::force_unreserve())] pub fn force_unreserve( origin: OriginFor, @@ -664,7 +668,7 @@ impl BitOr for Reasons { #[derive(Encode, Decode, Clone, PartialEq, Eq, RuntimeDebug, MaxEncodedLen, TypeInfo)] pub struct BalanceLock { /// An identifier for this lock. Only one lock may be in existence for each identifier. - pub id: fungibles::LockIdentifier, + pub id: LockIdentifier, /// The amount which the free balance may not drop below when this lock is in effect. pub amount: Balance, /// If true, then the lock remains in effect even for payment of transaction fees. @@ -2133,7 +2137,7 @@ where } } -impl, I: 'static> fungibles::Lockable for Pallet +impl, I: 'static> LockableCurrency for Pallet where T::Balance: MaybeSerializeDeserialize + Debug, { @@ -2144,7 +2148,7 @@ where // Set a lock on the balance of `who`. // Is a no-op if lock amount is zero or `reasons` `is_none()`. fn set_lock( - id: fungibles::LockIdentifier, + id: LockIdentifier, who: &T::AccountId, amount: T::Balance, reasons: WithdrawReasons, @@ -2166,7 +2170,7 @@ where // Extend a lock on the balance of `who`. // Is a no-op if lock amount is zero or `reasons` `is_none()`. 
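This hunk restores the `LockableCurrency`/`LockIdentifier` naming that the reverted commit had moved to `fungibles::Lockable`. The semantics are unchanged: at most one lock per 8-byte identifier per account, with `set_lock` replacing any existing lock under the same id. A toy model of that bookkeeping (types simplified; the real impl also tracks `WithdrawReasons` and persists locks in storage):

// Toy model; the real code also tracks `WithdrawReasons` and storage.
type LockIdentifier = [u8; 8];

struct BalanceLock {
    id: LockIdentifier,
    amount: u64,
}

// Replace any lock with the same id, mirroring `set_lock`'s no-stacking rule.
fn set_lock(locks: &mut Vec<BalanceLock>, id: LockIdentifier, amount: u64) {
    locks.retain(|l| l.id != id);
    locks.push(BalanceLock { id, amount });
}

// The frozen balance is governed by the largest lock, not their sum.
fn frozen(locks: &[BalanceLock]) -> u64 {
    locks.iter().map(|l| l.amount).max().unwrap_or(0)
}

fn main() {
    const ID_1: LockIdentifier = *b"1       ";
    const ID_2: LockIdentifier = *b"2       ";
    let mut locks = Vec::new();
    set_lock(&mut locks, ID_1, 10);
    set_lock(&mut locks, ID_1, 5); // replaces the previous ID_1 lock
    set_lock(&mut locks, ID_2, 7);
    assert_eq!(frozen(&locks), 7);
}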
fn extend_lock( - id: fungibles::LockIdentifier, + id: LockIdentifier, who: &T::AccountId, amount: T::Balance, reasons: WithdrawReasons, @@ -2195,7 +2199,7 @@ where Self::update_locks(who, &locks[..]); } - fn remove_lock(id: fungibles::LockIdentifier, who: &T::AccountId) { + fn remove_lock(id: LockIdentifier, who: &T::AccountId) { let mut locks = Self::locks(who); locks.retain(|l| l.id != id); Self::update_locks(who, &locks[..]); diff --git a/frame/balances/src/tests.rs b/frame/balances/src/tests.rs index 44a71b93257db..83944caf9f7ff 100644 --- a/frame/balances/src/tests.rs +++ b/frame/balances/src/tests.rs @@ -28,15 +28,15 @@ macro_rules! decl_tests { use frame_support::{ assert_noop, assert_storage_noop, assert_ok, assert_err, traits::{ - fungibles, fungibles::Lockable, WithdrawReasons, + LockableCurrency, LockIdentifier, WithdrawReasons, Currency, ReservableCurrency, ExistenceRequirement::AllowDeath } }; use pallet_transaction_payment::{ChargeTransactionPayment, Multiplier}; use frame_system::RawOrigin; - const ID_1: fungibles::LockIdentifier = *b"1 "; - const ID_2: fungibles::LockIdentifier = *b"2 "; + const ID_1: LockIdentifier = *b"1 "; + const ID_2: LockIdentifier = *b"2 "; pub const CALL: &<$test as frame_system::Config>::RuntimeCall = &RuntimeCall::Balances(pallet_balances::Call::transfer { dest: 0, value: 0 }); diff --git a/frame/benchmarking/src/tests.rs b/frame/benchmarking/src/tests.rs index 88a7d6d0286b2..1499f9c182fce 100644 --- a/frame/benchmarking/src/tests.rs +++ b/frame/benchmarking/src/tests.rs @@ -51,6 +51,7 @@ mod pallet_test { #[pallet::call] impl Pallet { + #[pallet::call_index(0)] #[pallet::weight(0)] pub fn set_value(origin: OriginFor, n: u32) -> DispatchResult { let _sender = ensure_signed(origin)?; @@ -58,12 +59,14 @@ mod pallet_test { Ok(()) } + #[pallet::call_index(1)] #[pallet::weight(0)] pub fn dummy(origin: OriginFor, _n: u32) -> DispatchResult { let _sender = ensure_none(origin)?; Ok(()) } + #[pallet::call_index(2)] #[pallet::weight(0)] pub fn always_error(_origin: OriginFor) -> DispatchResult { return Err("I always fail".into()) diff --git a/frame/benchmarking/src/tests_instance.rs b/frame/benchmarking/src/tests_instance.rs index 7e1cd48840687..ecc0a78a199b9 100644 --- a/frame/benchmarking/src/tests_instance.rs +++ b/frame/benchmarking/src/tests_instance.rs @@ -61,6 +61,7 @@ mod pallet_test { where ::OtherEvent: Into<>::RuntimeEvent>, { + #[pallet::call_index(0)] #[pallet::weight(0)] pub fn set_value(origin: OriginFor, n: u32) -> DispatchResult { let _sender = ensure_signed(origin)?; @@ -68,6 +69,7 @@ mod pallet_test { Ok(()) } + #[pallet::call_index(1)] #[pallet::weight(0)] pub fn dummy(origin: OriginFor, _n: u32) -> DispatchResult { let _sender = ensure_none(origin)?; diff --git a/frame/bounties/src/lib.rs b/frame/bounties/src/lib.rs index 2e350dd1e2484..c3c2c08d24b2a 100644 --- a/frame/bounties/src/lib.rs +++ b/frame/bounties/src/lib.rs @@ -151,8 +151,7 @@ pub enum BountyStatus { Approved, /// The bounty is funded and waiting for curator assignment. Funded, - /// A curator has been proposed by the `ApproveOrigin`. Waiting for acceptance from the - /// curator. + /// A curator has been proposed. Waiting for acceptance from the curator. CuratorProposed { /// The assigned curator of this bounty. curator: AccountId, @@ -333,6 +332,7 @@ pub mod pallet { /// - `fee`: The curator fee. /// - `value`: The total payment amount of this bounty, curator fee included. /// - `description`: The description of this bounty. 
+ #[pallet::call_index(0)] #[pallet::weight(>::WeightInfo::propose_bounty(description.len() as u32))] pub fn propose_bounty( origin: OriginFor, @@ -347,11 +347,12 @@ pub mod pallet { /// Approve a bounty proposal. At a later time, the bounty will be funded and become active /// and the original deposit will be returned. /// - /// May only be called from `T::ApproveOrigin`. + /// May only be called from `T::SpendOrigin`. /// /// # /// - O(1). /// # + #[pallet::call_index(1)] #[pallet::weight(>::WeightInfo::approve_bounty())] pub fn approve_bounty( origin: OriginFor, @@ -378,11 +379,12 @@ pub mod pallet { /// Assign a curator to a funded bounty. /// - /// May only be called from `T::ApproveOrigin`. + /// May only be called from `T::SpendOrigin`. /// /// # /// - O(1). /// # + #[pallet::call_index(2)] #[pallet::weight(>::WeightInfo::propose_curator())] pub fn propose_curator( origin: OriginFor, @@ -432,6 +434,7 @@ pub mod pallet { /// # /// - O(1). /// # + #[pallet::call_index(3)] #[pallet::weight(>::WeightInfo::unassign_curator())] pub fn unassign_curator( origin: OriginFor, @@ -517,6 +520,7 @@ pub mod pallet { /// # /// - O(1). /// # + #[pallet::call_index(4)] #[pallet::weight(>::WeightInfo::accept_curator())] pub fn accept_curator( origin: OriginFor, @@ -559,6 +563,7 @@ pub mod pallet { /// # /// - O(1). /// # + #[pallet::call_index(5)] #[pallet::weight(>::WeightInfo::award_bounty())] pub fn award_bounty( origin: OriginFor, @@ -606,6 +611,7 @@ pub mod pallet { /// # /// - O(1). /// # + #[pallet::call_index(6)] #[pallet::weight(>::WeightInfo::claim_bounty())] pub fn claim_bounty( origin: OriginFor, @@ -669,6 +675,7 @@ pub mod pallet { /// # /// - O(1). /// # + #[pallet::call_index(7)] #[pallet::weight(>::WeightInfo::close_bounty_proposed() .max(>::WeightInfo::close_bounty_active()))] pub fn close_bounty( @@ -760,6 +767,7 @@ pub mod pallet { /// # /// - O(1). /// # + #[pallet::call_index(8)] #[pallet::weight(>::WeightInfo::extend_bounty_expiry())] pub fn extend_bounty_expiry( origin: OriginFor, diff --git a/frame/child-bounties/src/lib.rs b/frame/child-bounties/src/lib.rs index 2dfe0660ad68e..9eb784eaccd23 100644 --- a/frame/child-bounties/src/lib.rs +++ b/frame/child-bounties/src/lib.rs @@ -237,6 +237,7 @@ pub mod pallet { /// - `parent_bounty_id`: Index of parent bounty for which child-bounty is being added. /// - `value`: Value for executing the proposal. /// - `description`: Text description for the child-bounty. + #[pallet::call_index(0)] #[pallet::weight(::WeightInfo::add_child_bounty(description.len() as u32))] pub fn add_child_bounty( origin: OriginFor, @@ -311,6 +312,7 @@ pub mod pallet { /// - `child_bounty_id`: Index of child bounty. /// - `curator`: Address of child-bounty curator. /// - `fee`: payment fee to child-bounty curator for execution. + #[pallet::call_index(1)] #[pallet::weight(::WeightInfo::propose_curator())] pub fn propose_curator( origin: OriginFor, @@ -380,6 +382,7 @@ pub mod pallet { /// /// - `parent_bounty_id`: Index of parent bounty. /// - `child_bounty_id`: Index of child bounty. + #[pallet::call_index(2)] #[pallet::weight(::WeightInfo::accept_curator())] pub fn accept_curator( origin: OriginFor, @@ -456,6 +459,7 @@ pub mod pallet { /// /// - `parent_bounty_id`: Index of parent bounty. /// - `child_bounty_id`: Index of child bounty. + #[pallet::call_index(3)] #[pallet::weight(::WeightInfo::unassign_curator())] pub fn unassign_curator( origin: OriginFor, @@ -570,6 +574,7 @@ pub mod pallet { /// - `parent_bounty_id`: Index of parent bounty. 
/// - `child_bounty_id`: Index of child bounty. /// - `beneficiary`: Beneficiary account. + #[pallet::call_index(4)] #[pallet::weight(::WeightInfo::award_child_bounty())] pub fn award_child_bounty( origin: OriginFor, @@ -636,6 +641,7 @@ pub mod pallet { /// /// - `parent_bounty_id`: Index of parent bounty. /// - `child_bounty_id`: Index of child bounty. + #[pallet::call_index(5)] #[pallet::weight(::WeightInfo::claim_child_bounty())] pub fn claim_child_bounty( origin: OriginFor, @@ -745,6 +751,7 @@ pub mod pallet { /// /// - `parent_bounty_id`: Index of parent bounty. /// - `child_bounty_id`: Index of child bounty. + #[pallet::call_index(6)] #[pallet::weight(::WeightInfo::close_child_bounty_added() .max(::WeightInfo::close_child_bounty_active()))] pub fn close_child_bounty( diff --git a/frame/collective/src/lib.rs b/frame/collective/src/lib.rs index 06d5b1fab78e7..c522b71891b3c 100644 --- a/frame/collective/src/lib.rs +++ b/frame/collective/src/lib.rs @@ -372,6 +372,7 @@ pub mod pallet { /// - `P` storage mutations (codec `O(M)`) for updating the votes for each proposal /// - 1 storage write (codec `O(1)`) for deleting the old `prime` and setting the new one /// # + #[pallet::call_index(0)] #[pallet::weight(( T::WeightInfo::set_members( *old_count, // M @@ -429,6 +430,7 @@ pub mod pallet { /// - DB: 1 read (codec `O(M)`) + DB access of `proposal` /// - 1 event /// # + #[pallet::call_index(1)] #[pallet::weight(( T::WeightInfo::execute( *length_bound, // B @@ -492,6 +494,7 @@ pub mod pallet { /// - 1 storage write `Voting` (codec `O(M)`) /// - 1 event /// # + #[pallet::call_index(2)] #[pallet::weight(( if *threshold < 2 { T::WeightInfo::propose_execute( @@ -557,6 +560,7 @@ pub mod pallet { /// - 1 storage mutation `Voting` (codec `O(M)`) /// - 1 event /// # + #[pallet::call_index(3)] #[pallet::weight((T::WeightInfo::vote(T::MaxMembers::get()), DispatchClass::Operational))] pub fn vote( origin: OriginFor, @@ -610,6 +614,7 @@ pub mod pallet { /// - any mutations done while executing `proposal` (`P1`) /// - up to 3 events /// # + #[pallet::call_index(4)] #[pallet::weight(( { let b = *length_bound; @@ -653,6 +658,7 @@ pub mod pallet { /// * Reads: Proposals /// * Writes: Voting, Proposals, ProposalOf /// # + #[pallet::call_index(5)] #[pallet::weight(T::WeightInfo::disapprove_proposal(T::MaxProposals::get()))] pub fn disapprove_proposal( origin: OriginFor, @@ -695,6 +701,7 @@ pub mod pallet { /// - any mutations done while executing `proposal` (`P1`) /// - up to 3 events /// # + #[pallet::call_index(6)] #[pallet::weight(( { let b = *length_bound; diff --git a/frame/collective/src/tests.rs b/frame/collective/src/tests.rs index 3d1540a8c3b5c..648b6f88ec86c 100644 --- a/frame/collective/src/tests.rs +++ b/frame/collective/src/tests.rs @@ -69,6 +69,7 @@ mod mock_democracy { #[pallet::call] impl Pallet { + #[pallet::call_index(0)] #[pallet::weight(0)] pub fn external_propose_majority(origin: OriginFor) -> DispatchResult { T::ExternalMajorityOrigin::ensure_origin(origin)?; diff --git a/frame/contracts/src/exec.rs b/frame/contracts/src/exec.rs index c0cf6a9f4c4c4..945095dc20329 100644 --- a/frame/contracts/src/exec.rs +++ b/frame/contracts/src/exec.rs @@ -18,8 +18,8 @@ use crate::{ gas::GasMeter, storage::{self, Storage, WriteOutcome}, - BalanceOf, CodeHash, Config, ContractInfo, ContractInfoOf, Determinism, Error, Event, Nonce, - Pallet as Contracts, Schedule, + BalanceOf, CodeHash, Config, ContractInfo, ContractInfoOf, DebugBufferVec, Determinism, Error, + Event, Nonce, Pallet as Contracts, Schedule, 
}; use frame_support::{ crypto::ecdsa::ECDSAExt, @@ -279,7 +279,7 @@ pub trait Ext: sealing::Sealed { /// when the code is executing on-chain. /// /// Returns `true` if debug message recording is enabled. Otherwise `false` is returned. - fn append_debug_buffer(&mut self, msg: &str) -> bool; + fn append_debug_buffer(&mut self, msg: &str) -> Result; /// Call some dispatchable and return the result. fn call_runtime(&self, call: ::RuntimeCall) -> DispatchResultWithPostInfo; @@ -409,7 +409,7 @@ pub struct Stack<'a, T: Config, E> { /// /// All the bytes added to this field should be valid UTF-8. The buffer has no defined /// structure and is intended to be shown to users as-is for debugging purposes. - debug_message: Option<&'a mut Vec>, + debug_message: Option<&'a mut DebugBufferVec>, /// The determinism requirement of this call stack. determinism: Determinism, /// No executable is held by the struct but influences its behaviour. @@ -617,7 +617,7 @@ where schedule: &'a Schedule, value: BalanceOf, input_data: Vec, - debug_message: Option<&'a mut Vec>, + debug_message: Option<&'a mut DebugBufferVec>, determinism: Determinism, ) -> Result { let (mut stack, executable) = Self::new( @@ -652,7 +652,7 @@ where value: BalanceOf, input_data: Vec, salt: &[u8], - debug_message: Option<&'a mut Vec>, + debug_message: Option<&'a mut DebugBufferVec>, ) -> Result<(T::AccountId, ExecReturnValue), ExecError> { let (mut stack, executable) = Self::new( FrameArgs::Instantiate { @@ -681,7 +681,7 @@ where storage_meter: &'a mut storage::meter::Meter, schedule: &'a Schedule, value: BalanceOf, - debug_message: Option<&'a mut Vec>, + debug_message: Option<&'a mut DebugBufferVec>, determinism: Determinism, ) -> Result<(Self, E), ExecError> { let (first_frame, executable, nonce) = Self::new_frame( @@ -1328,14 +1328,16 @@ where &mut self.top_frame_mut().nested_gas } - fn append_debug_buffer(&mut self, msg: &str) -> bool { + fn append_debug_buffer(&mut self, msg: &str) -> Result { if let Some(buffer) = &mut self.debug_message { if !msg.is_empty() { - buffer.extend(msg.as_bytes()); + buffer + .try_extend(&mut msg.bytes()) + .map_err(|_| Error::::DebugBufferExhausted)?; } - true + Ok(true) } else { - false + Ok(false) } } @@ -2503,12 +2505,16 @@ mod tests { #[test] fn printing_works() { let code_hash = MockLoader::insert(Call, |ctx, _| { - ctx.ext.append_debug_buffer("This is a test"); - ctx.ext.append_debug_buffer("More text"); + ctx.ext + .append_debug_buffer("This is a test") + .expect("Maximum allowed debug buffer size exhausted!"); + ctx.ext + .append_debug_buffer("More text") + .expect("Maximum allowed debug buffer size exhausted!"); exec_success() }); - let mut debug_buffer = Vec::new(); + let mut debug_buffer = DebugBufferVec::::try_from(Vec::new()).unwrap(); ExtBuilder::default().build().execute_with(|| { let min_balance = ::Currency::minimum_balance(); @@ -2531,18 +2537,22 @@ mod tests { .unwrap(); }); - assert_eq!(&String::from_utf8(debug_buffer).unwrap(), "This is a testMore text"); + assert_eq!(&String::from_utf8(debug_buffer.to_vec()).unwrap(), "This is a testMore text"); } #[test] fn printing_works_on_fail() { let code_hash = MockLoader::insert(Call, |ctx, _| { - ctx.ext.append_debug_buffer("This is a test"); - ctx.ext.append_debug_buffer("More text"); + ctx.ext + .append_debug_buffer("This is a test") + .expect("Maximum allowed debug buffer size exhausted!"); + ctx.ext + .append_debug_buffer("More text") + .expect("Maximum allowed debug buffer size exhausted!"); exec_trapped() }); - let mut debug_buffer = 
Vec::new(); + let mut debug_buffer = DebugBufferVec::::try_from(Vec::new()).unwrap(); ExtBuilder::default().build().execute_with(|| { let min_balance = ::Currency::minimum_balance(); @@ -2565,7 +2575,43 @@ mod tests { assert!(result.is_err()); }); - assert_eq!(&String::from_utf8(debug_buffer).unwrap(), "This is a testMore text"); + assert_eq!(&String::from_utf8(debug_buffer.to_vec()).unwrap(), "This is a testMore text"); + } + + #[test] + fn debug_buffer_is_limited() { + let code_hash = MockLoader::insert(Call, move |ctx, _| { + ctx.ext.append_debug_buffer("overflowing bytes")?; + exec_success() + }); + + // Pre-fill the buffer up to its limit + let mut debug_buffer = + DebugBufferVec::::try_from(vec![0u8; DebugBufferVec::::bound()]).unwrap(); + + ExtBuilder::default().build().execute_with(|| { + let schedule: Schedule = ::Schedule::get(); + let min_balance = ::Currency::minimum_balance(); + let mut gas_meter = GasMeter::::new(GAS_LIMIT); + set_balance(&ALICE, min_balance * 10); + place_contract(&BOB, code_hash); + let mut storage_meter = storage::meter::Meter::new(&ALICE, Some(0), 0).unwrap(); + assert_err!( + MockStack::run_call( + ALICE, + BOB, + &mut gas_meter, + &mut storage_meter, + &schedule, + 0, + vec![], + Some(&mut debug_buffer), + Determinism::Deterministic, + ) + .map_err(|e| e.error), + Error::::DebugBufferExhausted + ); + }); } #[test] diff --git a/frame/contracts/src/lib.rs b/frame/contracts/src/lib.rs index 4bbb311313d61..b76acf9d1db08 100644 --- a/frame/contracts/src/lib.rs +++ b/frame/contracts/src/lib.rs @@ -142,6 +142,7 @@ type BalanceOf = type CodeVec = BoundedVec::MaxCodeLen>; type RelaxedCodeVec = WeakBoundedVec::MaxCodeLen>; type AccountIdLookupOf = <::Lookup as StaticLookup>::Source; +type DebugBufferVec = BoundedVec::MaxDebugBufferLen>; /// Used as a sentinel value when reading and writing contract memory. /// @@ -344,6 +345,10 @@ pub mod pallet { /// Do **not** set to `true` on productions chains. #[pallet::constant] type UnsafeUnstableInterface: Get; + + /// The maximum length of the debug buffer in bytes. + #[pallet::constant] + type MaxDebugBufferLen: Get; } #[pallet::hooks] @@ -385,6 +390,7 @@ pub mod pallet { as HasCompact>::Type: Clone + Eq + PartialEq + Debug + TypeInfo + Encode, { /// Deprecated version if [`Self::call`] for use in an in-storage `Call`. + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::call().saturating_add(>::compat_weight(*gas_limit)))] #[allow(deprecated)] #[deprecated(note = "1D weight is used in this extrinsic, please migrate to `call`")] @@ -407,6 +413,7 @@ pub mod pallet { } /// Deprecated version if [`Self::instantiate_with_code`] for use in an in-storage `Call`. + #[pallet::call_index(1)] #[pallet::weight( T::WeightInfo::instantiate_with_code(code.len() as u32, salt.len() as u32) .saturating_add(>::compat_weight(*gas_limit)) @@ -436,6 +443,7 @@ pub mod pallet { } /// Deprecated version if [`Self::instantiate`] for use in an in-storage `Call`. + #[pallet::call_index(2)] #[pallet::weight( T::WeightInfo::instantiate(salt.len() as u32).saturating_add(>::compat_weight(*gas_limit)) )] @@ -481,6 +489,7 @@ pub mod pallet { /// To avoid this situation a constructor could employ access control so that it can /// only be instantiated by permissioned entities. The same is true when uploading /// through [`Self::instantiate_with_code`]. 
+ #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::upload_code(code.len() as u32))] pub fn upload_code( origin: OriginFor, @@ -497,6 +506,7 @@ pub mod pallet { /// /// A code can only be removed by its original uploader (its owner) and only if it is /// not used by any contract. + #[pallet::call_index(4)] #[pallet::weight(T::WeightInfo::remove_code())] pub fn remove_code( origin: OriginFor, @@ -518,6 +528,7 @@ pub mod pallet { /// This does **not** change the address of the contract in question. This means /// that the contract address is no longer derived from its code hash after calling /// this dispatchable. + #[pallet::call_index(5)] #[pallet::weight(T::WeightInfo::set_code())] pub fn set_code( origin: OriginFor, @@ -563,6 +574,7 @@ pub mod pallet { /// * If the account is a regular account, any value will be transferred. /// * If no account exists and the call value is not less than `existential_deposit`, /// a regular account will be created and any value will be transferred. + #[pallet::call_index(6)] #[pallet::weight(T::WeightInfo::call().saturating_add(*gas_limit))] pub fn call( origin: OriginFor, @@ -619,6 +631,7 @@ pub mod pallet { /// - The smart-contract account is created at the computed address. /// - The `value` is transferred to the new account. /// - The `deploy` function is executed in the context of the newly-created account. + #[pallet::call_index(7)] #[pallet::weight( T::WeightInfo::instantiate_with_code(code.len() as u32, salt.len() as u32) .saturating_add(*gas_limit) @@ -661,6 +674,7 @@ pub mod pallet { /// This function is identical to [`Self::instantiate_with_code`] but without the /// code deployment step. Instead, the `code_hash` of an on-chain deployed wasm binary /// must be supplied. + #[pallet::call_index(8)] #[pallet::weight( T::WeightInfo::instantiate(salt.len() as u32).saturating_add(*gas_limit) )] @@ -854,6 +868,9 @@ pub mod pallet { CodeRejected, /// An indetermistic code was used in a context where this is not permitted. Indeterministic, + /// The debug buffer size used during contract execution exceeded the limit determined by + /// the `MaxDebugBufferLen` pallet config parameter. + DebugBufferExhausted, } /// A mapping from an original code hash to the original code, untouched by instrumentation. 
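The new `DebugBufferVec<T> = BoundedVec<u8, T::MaxDebugBufferLen>` alias and the `DebugBufferExhausted` error above change `append_debug_buffer` from infallible growth of a `Vec<u8>` into a fallible append against a hard cap. A standalone sketch of that behaviour, using plain types instead of `BoundedVec`:

// Standalone sketch with plain types; the pallet uses `BoundedVec` and a
// `MaxDebugBufferLen` config constant instead.
struct BoundedBuffer {
    bytes: Vec<u8>,
    bound: usize,
}

impl BoundedBuffer {
    // Fail instead of growing past the bound, like BoundedVec's `try_extend`.
    fn try_extend(&mut self, msg: &str) -> Result<(), &'static str> {
        if self.bytes.len() + msg.len() > self.bound {
            return Err("DebugBufferExhausted");
        }
        self.bytes.extend_from_slice(msg.as_bytes());
        Ok(())
    }
}

fn main() {
    let mut buf = BoundedBuffer { bytes: Vec::new(), bound: 16 };
    assert!(buf.try_extend("This is a test").is_ok());
    // The second append would exceed the 16-byte bound and is rejected.
    assert!(buf.try_extend("More text").is_err());
}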
@@ -952,7 +969,7 @@ where debug: bool, determinism: Determinism, ) -> ContractExecResult> { - let mut debug_message = if debug { Some(Vec::new()) } else { None }; + let mut debug_message = if debug { Some(DebugBufferVec::::default()) } else { None }; let output = Self::internal_call( origin, dest, @@ -968,7 +985,7 @@ where gas_consumed: output.gas_meter.gas_consumed(), gas_required: output.gas_meter.gas_required(), storage_deposit: output.storage_deposit, - debug_message: debug_message.unwrap_or_default(), + debug_message: debug_message.unwrap_or_default().to_vec(), } } @@ -994,7 +1011,7 @@ where salt: Vec, debug: bool, ) -> ContractInstantiateResult> { - let mut debug_message = if debug { Some(Vec::new()) } else { None }; + let mut debug_message = if debug { Some(DebugBufferVec::::default()) } else { None }; let output = Self::internal_instantiate( origin, value, @@ -1013,7 +1030,7 @@ where gas_consumed: output.gas_meter.gas_consumed(), gas_required: output.gas_meter.gas_required(), storage_deposit: output.storage_deposit, - debug_message: debug_message.unwrap_or_default(), + debug_message: debug_message.unwrap_or_default().to_vec(), } } @@ -1104,7 +1121,7 @@ where gas_limit: Weight, storage_deposit_limit: Option>, data: Vec, - debug_message: Option<&mut Vec>, + debug_message: Option<&mut DebugBufferVec>, determinism: Determinism, ) -> InternalCallOutput { let mut gas_meter = GasMeter::new(gas_limit); @@ -1147,7 +1164,7 @@ where code: Code>, data: Vec, salt: Vec, - mut debug_message: Option<&mut Vec>, + mut debug_message: Option<&mut DebugBufferVec>, ) -> InternalInstantiateOutput { let mut storage_deposit = Default::default(); let mut gas_meter = GasMeter::new(gas_limit); @@ -1163,7 +1180,7 @@ where TryInstantiate::Skip, ) .map_err(|(err, msg)| { - debug_message.as_mut().map(|buffer| buffer.extend(msg.as_bytes())); + debug_message.as_mut().map(|buffer| buffer.try_extend(&mut msg.bytes())); err })?; // The open deposit will be charged during execution when the diff --git a/frame/contracts/src/migration.rs b/frame/contracts/src/migration.rs index aa04d8b9b1084..56d688abc7309 100644 --- a/frame/contracts/src/migration.rs +++ b/frame/contracts/src/migration.rs @@ -69,7 +69,7 @@ impl OnRuntimeUpgrade for Migration { fn pre_upgrade() -> Result, &'static str> { let version = >::on_chain_storage_version(); - if version == 8 { + if version == 7 { v8::pre_upgrade::()?; } diff --git a/frame/contracts/src/tests.rs b/frame/contracts/src/tests.rs index f4c8889ef05f4..6121d880ca8c5 100644 --- a/frame/contracts/src/tests.rs +++ b/frame/contracts/src/tests.rs @@ -37,10 +37,10 @@ use frame_support::{ parameter_types, storage::child, traits::{ - fungibles::Lockable, BalanceStatus, ConstU32, ConstU64, Contains, Currency, Get, OnIdle, + BalanceStatus, ConstU32, ConstU64, Contains, Currency, Get, LockableCurrency, OnIdle, OnInitialize, ReservableCurrency, WithdrawReasons, }, - weights::{constants::WEIGHT_PER_SECOND, Weight}, + weights::{constants::WEIGHT_REF_TIME_PER_SECOND, Weight}, }; use frame_system::{self as system, EventRecord, Phase}; use pretty_assertions::{assert_eq, assert_ne}; @@ -285,7 +285,7 @@ impl RegisteredChainExtension for TempStorageExtension { parameter_types! 
{ pub BlockWeights: frame_system::limits::BlockWeights = frame_system::limits::BlockWeights::simple_max( - (2u64 * WEIGHT_PER_SECOND).set_proof_size(u64::MAX), + Weight::from_parts(2u64 * WEIGHT_REF_TIME_PER_SECOND, u64::MAX), ); pub static ExistentialDeposit: u64 = 1; } @@ -415,6 +415,7 @@ impl Config for Test { type MaxCodeLen = ConstU32<{ 128 * 1024 }>; type MaxStorageKeyLen = ConstU32<128>; type UnsafeUnstableInterface = UnstableInterface; + type MaxDebugBufferLen = ConstU32<{ 2 * 1024 * 1024 }>; } pub const ALICE: AccountId32 = AccountId32::new([1u8; 32]); diff --git a/frame/contracts/src/wasm/mod.rs b/frame/contracts/src/wasm/mod.rs index e9e6b42dc3f8a..d85dac95cc712 100644 --- a/frame/contracts/src/wasm/mod.rs +++ b/frame/contracts/src/wasm/mod.rs @@ -594,9 +594,9 @@ mod tests { fn gas_meter(&mut self) -> &mut GasMeter { &mut self.gas_meter } - fn append_debug_buffer(&mut self, msg: &str) -> bool { + fn append_debug_buffer(&mut self, msg: &str) -> Result { self.debug_buffer.extend(msg.as_bytes()); - true + Ok(true) } fn call_runtime( &self, diff --git a/frame/contracts/src/wasm/runtime.rs b/frame/contracts/src/wasm/runtime.rs index b933688eb61ec..50ad9996e6eb6 100644 --- a/frame/contracts/src/wasm/runtime.rs +++ b/frame/contracts/src/wasm/runtime.rs @@ -2395,11 +2395,11 @@ pub mod env { str_len: u32, ) -> Result { ctx.charge_gas(RuntimeCosts::DebugMessage)?; - if ctx.ext.append_debug_buffer("") { + if ctx.ext.append_debug_buffer("")? { let data = ctx.read_sandbox_memory(memory, str_ptr, str_len)?; let msg = core::str::from_utf8(&data).map_err(|_| >::DebugMessageInvalidUTF8)?; - ctx.ext.append_debug_buffer(msg); + ctx.ext.append_debug_buffer(msg)?; return Ok(ReturnCode::Success) } Ok(ReturnCode::LoggingDisabled) diff --git a/frame/conviction-voting/src/benchmarking.rs b/frame/conviction-voting/src/benchmarking.rs index 4bebc6a97c49b..117bb7fe22989 100644 --- a/frame/conviction-voting/src/benchmarking.rs +++ b/frame/conviction-voting/src/benchmarking.rs @@ -23,7 +23,7 @@ use assert_matches::assert_matches; use frame_benchmarking::{account, benchmarks_instance_pallet, whitelist_account}; use frame_support::{ dispatch::RawOrigin, - traits::{Currency, Get}, + traits::{fungible, Currency, Get}, }; use sp_runtime::traits::Bounded; use sp_std::collections::btree_map::BTreeMap; diff --git a/frame/conviction-voting/src/lib.rs b/frame/conviction-voting/src/lib.rs index 992b532fb93ed..141e9690fa29d 100644 --- a/frame/conviction-voting/src/lib.rs +++ b/frame/conviction-voting/src/lib.rs @@ -31,7 +31,7 @@ use frame_support::{ dispatch::{DispatchError, DispatchResult}, ensure, traits::{ - fungible, fungibles, fungibles::Lockable, Currency, Get, PollStatus, Polling, + fungible, Currency, Get, LockIdentifier, LockableCurrency, PollStatus, Polling, ReservableCurrency, WithdrawReasons, }, }; @@ -60,7 +60,7 @@ mod tests; #[cfg(feature = "runtime-benchmarks")] pub mod benchmarking; -const CONVICTION_VOTING_ID: fungibles::LockIdentifier = *b"pyconvot"; +const CONVICTION_VOTING_ID: LockIdentifier = *b"pyconvot"; type AccountIdLookupOf = <::Lookup as StaticLookup>::Source; type BalanceOf = @@ -104,7 +104,7 @@ pub mod pallet { type WeightInfo: WeightInfo; /// Currency type with which voting happens. type Currency: ReservableCurrency - + fungibles::Lockable + + LockableCurrency + fungible::Inspect; /// The implementation of the logic which conducts polls. @@ -208,6 +208,7 @@ pub mod pallet { /// - `vote`: The vote configuration. /// /// Weight: `O(R)` where R is the number of polls the voter has voted on. 
+ #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::vote_new().max(T::WeightInfo::vote_existing()))] pub fn vote( origin: OriginFor, @@ -243,6 +244,7 @@ pub mod pallet { /// voted on. Weight is initially charged as if maximum votes, but is refunded later. // NOTE: weight must cover an incorrect voting of origin with max votes, this is ensure // because a valid delegation cover decoding a direct voting with max votes. + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::delegate(T::MaxVotes::get()))] pub fn delegate( origin: OriginFor, @@ -274,6 +276,7 @@ pub mod pallet { /// voted on. Weight is initially charged as if maximum votes, but is refunded later. // NOTE: weight must cover an incorrect voting of origin with max votes, this is ensure // because a valid delegation cover decoding a direct voting with max votes. + #[pallet::call_index(2)] #[pallet::weight(T::WeightInfo::undelegate(T::MaxVotes::get().into()))] pub fn undelegate( origin: OriginFor, @@ -293,6 +296,7 @@ pub mod pallet { /// - `target`: The account to remove the lock on. /// /// Weight: `O(R)` with R number of vote of target. + #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::unlock())] pub fn unlock( origin: OriginFor, @@ -334,6 +338,7 @@ pub mod pallet { /// /// Weight: `O(R + log R)` where R is the number of polls that `target` has voted on. /// Weight is calculated for the maximum number of vote. + #[pallet::call_index(4)] #[pallet::weight(T::WeightInfo::remove_vote())] pub fn remove_vote( origin: OriginFor, @@ -360,6 +365,7 @@ pub mod pallet { /// /// Weight: `O(R + log R)` where R is the number of polls that `target` has voted on. /// Weight is calculated for the maximum number of vote. + #[pallet::call_index(5)] #[pallet::weight(T::WeightInfo::remove_other_vote())] pub fn remove_other_vote( origin: OriginFor, diff --git a/frame/democracy/src/lib.rs b/frame/democracy/src/lib.rs index 096122cb1caa5..2c65e5d94bc46 100644 --- a/frame/democracy/src/lib.rs +++ b/frame/democracy/src/lib.rs @@ -157,11 +157,9 @@ use frame_support::{ ensure, traits::{ defensive_prelude::*, - fungibles, - fungibles::Lockable, schedule::{v3::Named as ScheduleNamed, DispatchTime}, - Bounded, Currency, Get, OnUnbalanced, QueryPreimage, ReservableCurrency, StorePreimage, - WithdrawReasons, + Bounded, Currency, Get, LockIdentifier, LockableCurrency, OnUnbalanced, QueryPreimage, + ReservableCurrency, StorePreimage, WithdrawReasons, }, weights::Weight, }; @@ -191,7 +189,7 @@ pub mod benchmarking; pub mod migrations; -const DEMOCRACY_ID: fungibles::LockIdentifier = *b"democrac"; +const DEMOCRACY_ID: LockIdentifier = *b"democrac"; /// A proposal index. pub type PropIndex = u32; @@ -236,7 +234,7 @@ pub mod pallet { /// Currency type for this pallet. type Currency: ReservableCurrency - + fungibles::Lockable; + + LockableCurrency; /// The period between a proposal being approved and enacted. /// @@ -551,6 +549,7 @@ pub mod pallet { /// - `value`: The amount of deposit (must be at least `MinimumDeposit`). /// /// Emits `Proposed`. + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::propose())] pub fn propose( origin: OriginFor, @@ -593,6 +592,7 @@ pub mod pallet { /// must have funds to cover the deposit, equal to the original deposit. /// /// - `proposal`: The index of the proposal to second. + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::second())] pub fn second( origin: OriginFor, @@ -618,6 +618,7 @@ pub mod pallet { /// /// - `ref_index`: The index of the referendum to vote for. 
/// - `vote`: The vote configuration. + #[pallet::call_index(2)] #[pallet::weight(T::WeightInfo::vote_new().max(T::WeightInfo::vote_existing()))] pub fn vote( origin: OriginFor, @@ -636,6 +637,7 @@ pub mod pallet { /// -`ref_index`: The index of the referendum to cancel. /// /// Weight: `O(1)`. + #[pallet::call_index(3)] #[pallet::weight((T::WeightInfo::emergency_cancel(), DispatchClass::Operational))] pub fn emergency_cancel( origin: OriginFor, @@ -658,6 +660,7 @@ pub mod pallet { /// The dispatch origin of this call must be `ExternalOrigin`. /// /// - `proposal_hash`: The preimage hash of the proposal. + #[pallet::call_index(4)] #[pallet::weight(T::WeightInfo::external_propose())] pub fn external_propose( origin: OriginFor, @@ -686,6 +689,7 @@ pub mod pallet { /// pre-scheduled `external_propose` call. /// /// Weight: `O(1)` + #[pallet::call_index(5)] #[pallet::weight(T::WeightInfo::external_propose_majority())] pub fn external_propose_majority( origin: OriginFor, @@ -707,6 +711,7 @@ pub mod pallet { /// pre-scheduled `external_propose` call. /// /// Weight: `O(1)` + #[pallet::call_index(6)] #[pallet::weight(T::WeightInfo::external_propose_default())] pub fn external_propose_default( origin: OriginFor, @@ -733,6 +738,7 @@ pub mod pallet { /// Emits `Started`. /// /// Weight: `O(1)` + #[pallet::call_index(7)] #[pallet::weight(T::WeightInfo::fast_track())] pub fn fast_track( origin: OriginFor, @@ -785,6 +791,7 @@ pub mod pallet { /// Emits `Vetoed`. /// /// Weight: `O(V + log(V))` where V is number of `existing vetoers` + #[pallet::call_index(8)] #[pallet::weight(T::WeightInfo::veto_external())] pub fn veto_external(origin: OriginFor, proposal_hash: H256) -> DispatchResult { let who = T::VetoOrigin::ensure_origin(origin)?; @@ -819,6 +826,7 @@ pub mod pallet { /// - `ref_index`: The index of the referendum to cancel. /// /// # Weight: `O(1)`. + #[pallet::call_index(9)] #[pallet::weight(T::WeightInfo::cancel_referendum())] pub fn cancel_referendum( origin: OriginFor, @@ -851,6 +859,7 @@ pub mod pallet { /// voted on. Weight is charged as if maximum votes. // NOTE: weight must cover an incorrect voting of origin with max votes, this is ensure // because a valid delegation cover decoding a direct voting with max votes. + #[pallet::call_index(10)] #[pallet::weight(T::WeightInfo::delegate(T::MaxVotes::get()))] pub fn delegate( origin: OriginFor, @@ -879,6 +888,7 @@ pub mod pallet { /// voted on. Weight is charged as if maximum votes. // NOTE: weight must cover an incorrect voting of origin with max votes, this is ensure // because a valid delegation cover decoding a direct voting with max votes. + #[pallet::call_index(11)] #[pallet::weight(T::WeightInfo::undelegate(T::MaxVotes::get()))] pub fn undelegate(origin: OriginFor) -> DispatchResultWithPostInfo { let who = ensure_signed(origin)?; @@ -891,6 +901,7 @@ pub mod pallet { /// The dispatch origin of this call must be _Root_. /// /// Weight: `O(1)`. + #[pallet::call_index(12)] #[pallet::weight(T::WeightInfo::clear_public_proposals())] pub fn clear_public_proposals(origin: OriginFor) -> DispatchResult { ensure_root(origin)?; @@ -905,6 +916,7 @@ pub mod pallet { /// - `target`: The account to remove the lock on. /// /// Weight: `O(R)` with R number of vote of target. 
+ #[pallet::call_index(13)] #[pallet::weight(T::WeightInfo::unlock_set(T::MaxVotes::get()).max(T::WeightInfo::unlock_remove(T::MaxVotes::get())))] pub fn unlock(origin: OriginFor, target: AccountIdLookupOf) -> DispatchResult { ensure_signed(origin)?; @@ -940,6 +952,7 @@ pub mod pallet { /// /// Weight: `O(R + log R)` where R is the number of referenda that `target` has voted on. /// Weight is calculated for the maximum number of vote. + #[pallet::call_index(14)] #[pallet::weight(T::WeightInfo::remove_vote(T::MaxVotes::get()))] pub fn remove_vote(origin: OriginFor, index: ReferendumIndex) -> DispatchResult { let who = ensure_signed(origin)?; @@ -961,6 +974,7 @@ pub mod pallet { /// /// Weight: `O(R + log R)` where R is the number of referenda that `target` has voted on. /// Weight is calculated for the maximum number of vote. + #[pallet::call_index(15)] #[pallet::weight(T::WeightInfo::remove_other_vote(T::MaxVotes::get()))] pub fn remove_other_vote( origin: OriginFor, @@ -989,6 +1003,7 @@ pub mod pallet { /// /// Weight: `O(p)` (though as this is an high-privilege dispatch, we assume it has a /// reasonable value). + #[pallet::call_index(16)] #[pallet::weight((T::WeightInfo::blacklist(), DispatchClass::Operational))] pub fn blacklist( origin: OriginFor, @@ -1038,6 +1053,7 @@ pub mod pallet { /// - `prop_index`: The index of the proposal to cancel. /// /// Weight: `O(p)` where `p = PublicProps::::decode_len()` + #[pallet::call_index(17)] #[pallet::weight(T::WeightInfo::cancel_proposal())] pub fn cancel_proposal( origin: OriginFor, diff --git a/frame/democracy/src/tests.rs b/frame/democracy/src/tests.rs index eceb1a3400bba..41b279035028e 100644 --- a/frame/democracy/src/tests.rs +++ b/frame/democracy/src/tests.rs @@ -77,7 +77,9 @@ impl Contains for BaseFilter { parameter_types! { pub BlockWeights: frame_system::limits::BlockWeights = - frame_system::limits::BlockWeights::simple_max(frame_support::weights::constants::WEIGHT_PER_SECOND.set_proof_size(u64::MAX)); + frame_system::limits::BlockWeights::simple_max( + Weight::from_parts(frame_support::weights::constants::WEIGHT_REF_TIME_PER_SECOND, u64::MAX), + ); } impl frame_system::Config for Test { type BaseCallFilter = BaseFilter; diff --git a/frame/election-provider-multi-phase/src/lib.rs b/frame/election-provider-multi-phase/src/lib.rs index 2d49cd79dbcad..6c4a55800f7e8 100644 --- a/frame/election-provider-multi-phase/src/lib.rs +++ b/frame/election-provider-multi-phase/src/lib.rs @@ -892,6 +892,7 @@ pub mod pallet { /// putting their authoring reward at risk. /// /// No deposit or reward is associated with this submission. + #[pallet::call_index(0)] #[pallet::weight(( T::WeightInfo::submit_unsigned( witness.voters, @@ -941,6 +942,7 @@ pub mod pallet { /// Dispatch origin must be aligned with `T::ForceOrigin`. /// /// This check can be turned off by setting the value to `None`. + #[pallet::call_index(1)] #[pallet::weight(T::DbWeight::get().writes(1))] pub fn set_minimum_untrusted_score( origin: OriginFor, @@ -959,6 +961,7 @@ pub mod pallet { /// The solution is not checked for any feasibility and is assumed to be trustworthy, as any /// feasibility check itself can in principle cause the election process to fail (due to /// memory/weight constrains). + #[pallet::call_index(2)] #[pallet::weight(T::DbWeight::get().reads_writes(1, 1))] pub fn set_emergency_election_result( origin: OriginFor, @@ -996,6 +999,7 @@ pub mod pallet { /// /// A deposit is reserved and recorded for the solution. 
Based on the outcome, the solution /// might be rewarded, slashed, or get all or a part of the deposit back. + #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::submit())] pub fn submit( origin: OriginFor, @@ -1065,6 +1069,7 @@ pub mod pallet { /// /// This can only be called when [`Phase::Emergency`] is enabled, as an alternative to /// calling [`Call::set_emergency_election_result`]. + #[pallet::call_index(4)] #[pallet::weight(T::DbWeight::get().reads_writes(1, 1))] pub fn governance_fallback( origin: OriginFor, @@ -1409,12 +1414,12 @@ impl Pallet { return Err(ElectionError::DataProvider("Snapshot too big for submission.")) } - let mut desired_targets = - T::DataProvider::desired_targets().map_err(ElectionError::DataProvider)?; + let mut desired_targets = as ElectionProviderBase>::desired_targets_checked() + .map_err(|e| ElectionError::DataProvider(e))?; - // If `desired_targets` > `targets.len()`, cap `desired_targets` to that - // level and emit a warning - let max_desired_targets: u32 = (targets.len() as u32).min(T::MaxWinners::get()); + // If `desired_targets` > `targets.len()`, cap `desired_targets` to that level and emit a + // warning + let max_desired_targets: u32 = targets.len() as u32; if desired_targets > max_desired_targets { log!( warn, @@ -2288,6 +2293,8 @@ mod tests { assert_eq!(MultiPhase::elect().unwrap_err(), ElectionError::Fallback("NoFallback.")); // phase is now emergency. assert_eq!(MultiPhase::current_phase(), Phase::Emergency); + // snapshot is still there until election finalizes. + assert!(MultiPhase::snapshot().is_some()); assert_eq!( multi_phase_events(), @@ -2313,6 +2320,7 @@ mod tests { // phase is now emergency. assert_eq!(MultiPhase::current_phase(), Phase::Emergency); assert!(MultiPhase::queued_solution().is_none()); + assert!(MultiPhase::snapshot().is_some()); // no single account can trigger this assert_noop!( diff --git a/frame/election-provider-multi-phase/src/mock.rs b/frame/election-provider-multi-phase/src/mock.rs index 8ab7e5bbf733d..347a4f19185f9 100644 --- a/frame/election-provider-multi-phase/src/mock.rs +++ b/frame/election-provider-multi-phase/src/mock.rs @@ -239,7 +239,7 @@ parameter_types! 
{ pub const ExistentialDeposit: u64 = 1; pub BlockWeights: frame_system::limits::BlockWeights = frame_system::limits::BlockWeights ::with_sensible_defaults( - Weight::from_parts(2u64 * constants::WEIGHT_PER_SECOND.ref_time(), u64::MAX), + Weight::from_parts(2u64 * constants::WEIGHT_REF_TIME_PER_SECOND, u64::MAX), NORMAL_DISPATCH_RATIO, ); } diff --git a/frame/election-provider-multi-phase/src/signed.rs b/frame/election-provider-multi-phase/src/signed.rs index 9d629ad77fd79..12d39e83b6c09 100644 --- a/frame/election-provider-multi-phase/src/signed.rs +++ b/frame/election-provider-multi-phase/src/signed.rs @@ -594,10 +594,12 @@ mod tests { DesiredTargets::set(4); MaxWinners::set(3); - let (_, _, actual_desired_targets) = MultiPhase::create_snapshot_external().unwrap(); - - // snapshot is created with min of desired_targets and MaxWinners - assert_eq!(actual_desired_targets, 3); + // snapshot not created because data provider returned an unexpected number of + // desired_targets + assert_noop!( + MultiPhase::create_snapshot_external(), + ElectionError::DataProvider("desired_targets must not be greater than MaxWinners."), + ); }) } diff --git a/frame/election-provider-support/Cargo.toml b/frame/election-provider-support/Cargo.toml index b9584c899e1b1..33a6f25ed0822 100644 --- a/frame/election-provider-support/Cargo.toml +++ b/frame/election-provider-support/Cargo.toml @@ -21,10 +21,10 @@ sp-arithmetic = { version = "6.0.0", default-features = false, path = "../../pri sp-npos-elections = { version = "4.0.0-dev", default-features = false, path = "../../primitives/npos-elections" } sp-runtime = { version = "7.0.0", default-features = false, path = "../../primitives/runtime" } sp-std = { version = "5.0.0", default-features = false, path = "../../primitives/std" } +sp-core = { version = "7.0.0", default-features = false, path = "../../primitives/core" } [dev-dependencies] rand = "0.7.3" -sp-core = { version = "7.0.0", path = "../../primitives/core" } sp-io = { version = "7.0.0", path = "../../primitives/io" } sp-npos-elections = { version = "4.0.0-dev", path = "../../primitives/npos-elections" } @@ -38,6 +38,7 @@ std = [ "scale-info/std", "sp-arithmetic/std", "sp-npos-elections/std", + "sp-core/std", "sp-runtime/std", "sp-std/std", ] diff --git a/frame/election-provider-support/src/lib.rs b/frame/election-provider-support/src/lib.rs index 38924a18e2f54..9d5d6c018e5e1 100644 --- a/frame/election-provider-support/src/lib.rs +++ b/frame/election-provider-support/src/lib.rs @@ -386,15 +386,13 @@ pub trait ElectionProviderBase { /// checked call to `Self::DataProvider::desired_targets()` ensuring the value never exceeds /// [`Self::MaxWinners`]. 
fn desired_targets_checked() -> data_provider::Result<u32> { - match Self::DataProvider::desired_targets() { - Ok(desired_targets) => - if desired_targets <= Self::MaxWinners::get() { - Ok(desired_targets) - } else { - Err("desired_targets should not be greater than MaxWinners") - }, - Err(e) => Err(e), - } + Self::DataProvider::desired_targets().and_then(|desired_targets| { + if desired_targets <= Self::MaxWinners::get() { + Ok(desired_targets) + } else { + Err("desired_targets must not be greater than MaxWinners.") + } + }) } } @@ -673,3 +671,6 @@ pub type BoundedSupportsOf<E> = BoundedSupports< <E as ElectionProviderBase>::AccountId, <E as ElectionProviderBase>::MaxWinners, >; + +sp_core::generate_feature_enabled_macro!(runtime_benchmarks_enabled, feature = "runtime-benchmarks", $); +sp_core::generate_feature_enabled_macro!(runtime_benchmarks_or_fuzz_enabled, any(feature = "runtime-benchmarks", feature = "fuzzing"), $); diff --git a/frame/elections-phragmen/src/lib.rs b/frame/elections-phragmen/src/lib.rs index 13190237ea784..1cfdc25fd9b47 100644 --- a/frame/elections-phragmen/src/lib.rs +++ b/frame/elections-phragmen/src/lib.rs @@ -101,8 +101,8 @@ use codec::{Decode, Encode}; use frame_support::{ traits::{ - defensive_prelude::*, fungibles, fungibles::Lockable, ChangeMembers, Contains, - ContainsLengthBound, Currency, CurrencyToVote, Get, InitializeMembers, OnUnbalanced, + defensive_prelude::*, ChangeMembers, Contains, ContainsLengthBound, Currency, + CurrencyToVote, Get, InitializeMembers, LockIdentifier, LockableCurrency, OnUnbalanced, ReservableCurrency, SortedMembers, WithdrawReasons, }, weights::Weight, @@ -199,10 +199,10 @@ pub mod pallet { /// Identifier for the elections-phragmen pallet's lock #[pallet::constant] - type PalletId: Get<fungibles::LockIdentifier>; + type PalletId: Get<LockIdentifier>; /// The currency that people are electing with. - type Currency: fungibles::Lockable<Self::AccountId, Moment = Self::BlockNumber> + type Currency: LockableCurrency<Self::AccountId, Moment = Self::BlockNumber> + ReservableCurrency<Self::AccountId>; /// What to do when the members change. @@ -309,6 +309,7 @@ pub mod pallet { /// # /// We assume the maximum weight among all 3 cases: vote_equal, vote_more and vote_less. /// # + #[pallet::call_index(0)] #[pallet::weight( T::WeightInfo::vote_more(votes.len() as u32) .max(T::WeightInfo::vote_less(votes.len() as u32)) @@ -371,6 +372,7 @@ pub mod pallet { /// This removes the lock and returns the deposit. /// /// The dispatch origin of this call must be signed and be a voter. + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::remove_voter())] pub fn remove_voter(origin: OriginFor) -> DispatchResult { let who = ensure_signed(origin)?; @@ -394,6 +396,7 @@ pub mod pallet { /// # /// The number of current candidates must be provided as witness data. /// # + #[pallet::call_index(2)] #[pallet::weight(T::WeightInfo::submit_candidacy(*candidate_count))] pub fn submit_candidacy( origin: OriginFor, @@ -438,6 +441,7 @@ pub mod pallet { /// # /// The type of renouncing must be provided as witness data. /// # + #[pallet::call_index(3)] #[pallet::weight(match *renouncing { Renouncing::Candidate(count) => T::WeightInfo::renounce_candidacy_candidate(count), Renouncing::Member => T::WeightInfo::renounce_candidacy_members(), @@ -500,6 +504,7 @@ pub mod pallet { /// If we have a replacement, we use a small weight. Else, since this is a root call and /// will go into phragmen, we assume full block for now.
/// # + #[pallet::call_index(4)] #[pallet::weight(if *rerun_election { T::WeightInfo::remove_member_without_replacement() } else { @@ -535,6 +540,7 @@ pub mod pallet { /// # /// The total number of voters and those that are defunct must be provided as witness data. /// # + #[pallet::call_index(5)] #[pallet::weight(T::WeightInfo::clean_defunct_voters(*_num_voters, *_num_defunct))] pub fn clean_defunct_voters( origin: OriginFor, @@ -1274,7 +1280,7 @@ mod tests { } parameter_types! { - pub const ElectionsPhragmenPalletId: fungibles::LockIdentifier = *b"phrelect"; + pub const ElectionsPhragmenPalletId: LockIdentifier = *b"phrelect"; pub const PhragmenMaxVoters: u32 = 1000; pub const PhragmenMaxCandidates: u32 = 100; } diff --git a/frame/examples/basic/src/lib.rs b/frame/examples/basic/src/lib.rs index 256529421caae..d5045157dade7 100644 --- a/frame/examples/basic/src/lib.rs +++ b/frame/examples/basic/src/lib.rs @@ -497,6 +497,7 @@ pub mod pallet { // // For the weight of this extrinsic, we rely on the auto-generated `WeightInfo` from the // benchmark toolchain. + #[pallet::call_index(0)] #[pallet::weight( ::WeightInfo::accumulate_dummy() )] @@ -541,6 +542,7 @@ pub mod pallet { // // For the weight of this extrinsic, we use our own weight object `WeightForSetDummy` to // determine its weight + #[pallet::call_index(1)] #[pallet::weight(WeightForSetDummy::<T>(BalanceOf::<T>::from(100u32)))] pub fn set_dummy( origin: OriginFor, diff --git a/frame/examples/offchain-worker/src/lib.rs b/frame/examples/offchain-worker/src/lib.rs index fdf8b61a01acd..46ff7725e3b16 100644 --- a/frame/examples/offchain-worker/src/lib.rs +++ b/frame/examples/offchain-worker/src/lib.rs @@ -229,6 +229,7 @@ pub mod pallet { /// working and receives (and provides) meaningful data. /// This example is not focused on correctness of the oracle itself, but rather its /// purpose is to showcase offchain worker capabilities. + #[pallet::call_index(0)] #[pallet::weight(0)] pub fn submit_price(origin: OriginFor, price: u32) -> DispatchResultWithPostInfo { // Retrieve sender of the transaction. @@ -254,6 +255,7 @@ pub mod pallet { /// /// This example is not focused on correctness of the oracle itself, but rather its /// purpose is to showcase offchain worker capabilities. + #[pallet::call_index(1)] #[pallet::weight(0)] pub fn submit_price_unsigned( origin: OriginFor, @@ -270,6 +272,7 @@ pub mod pallet { Ok(().into()) } + #[pallet::call_index(2)] #[pallet::weight(0)] pub fn submit_price_unsigned_with_signed_payload( origin: OriginFor, diff --git a/frame/executive/src/lib.rs b/frame/executive/src/lib.rs index 4bdfc1c815c2f..1f20e93f43c30 100644 --- a/frame/executive/src/lib.rs +++ b/frame/executive/src/lib.rs @@ -686,7 +686,10 @@ mod tests { use frame_support::{ assert_err, parameter_types, - traits::{fungibles, ConstU32, ConstU64, ConstU8, Currency, WithdrawReasons}, + traits::{ + ConstU32, ConstU64, ConstU8, Currency, LockIdentifier, LockableCurrency, + WithdrawReasons, + }, weights::{ConstantMultiplier, IdentityFee, RuntimeDbWeight, Weight, WeightToFee}, }; use frame_system::{Call as SystemCall, ChainContext, LastRuntimeUpgradeInfo}; @@ -737,6 +740,7 @@ mod tests { #[pallet::call] impl Pallet { + #[pallet::call_index(0)] #[pallet::weight(100)] pub fn some_function(origin: OriginFor) -> DispatchResult { // NOTE: does not make any difference.
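A note on the recurring `#[pallet::call_index(..)]` additions throughout this patch: a dispatchable is SCALE-encoded as its call index followed by its arguments, and without the attribute the index is implied by the source order of the `#[pallet::call]` impl block, so reordering, inserting, or removing a function silently changes the on-wire encoding of every call after it. A minimal sketch of the attribute pair, using a hypothetical pallet with made-up call and `WeightInfo` names (not from this patch):

```rust
#[pallet::call]
impl<T: Config> Pallet<T> {
	// Always encoded as call index 0, no matter where it sits in this
	// impl block.
	#[pallet::call_index(0)]
	#[pallet::weight(T::WeightInfo::do_something())]
	pub fn do_something(origin: OriginFor<T>) -> DispatchResult {
		ensure_signed(origin)?;
		Ok(())
	}

	// A call added or moved later keeps its own pinned index and does
	// not disturb the indices of the existing calls.
	#[pallet::call_index(1)]
	#[pallet::weight(T::WeightInfo::do_something_else())]
	pub fn do_something_else(origin: OriginFor<T>) -> DispatchResult {
		ensure_signed(origin)?;
		Ok(())
	}
}
```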
@@ -744,36 +748,42 @@ mod tests { Ok(()) } + #[pallet::call_index(1)] #[pallet::weight((200, DispatchClass::Operational))] pub fn some_root_operation(origin: OriginFor) -> DispatchResult { frame_system::ensure_root(origin)?; Ok(()) } + #[pallet::call_index(2)] #[pallet::weight(0)] pub fn some_unsigned_message(origin: OriginFor) -> DispatchResult { frame_system::ensure_none(origin)?; Ok(()) } + #[pallet::call_index(3)] #[pallet::weight(0)] pub fn allowed_unsigned(origin: OriginFor) -> DispatchResult { frame_system::ensure_root(origin)?; Ok(()) } + #[pallet::call_index(4)] #[pallet::weight(0)] pub fn unallowed_unsigned(origin: OriginFor) -> DispatchResult { frame_system::ensure_root(origin)?; Ok(()) } + #[pallet::call_index(5)] #[pallet::weight((0, DispatchClass::Mandatory))] pub fn inherent_call(origin: OriginFor) -> DispatchResult { frame_system::ensure_none(origin)?; Ok(()) } + #[pallet::call_index(6)] #[pallet::weight(0)] pub fn calculate_storage_root(_origin: OriginFor) -> DispatchResult { let root = sp_io::storage::root(sp_runtime::StateVersion::V1); @@ -1248,11 +1258,11 @@ mod tests { #[test] fn can_pay_for_tx_fee_on_full_lock() { - let id: fungibles::LockIdentifier = *b"0 "; + let id: LockIdentifier = *b"0 "; let execute_with_lock = |lock: WithdrawReasons| { let mut t = new_test_ext(1); t.execute_with(|| { - as fungibles::Lockable>::set_lock( + as LockableCurrency>::set_lock( id, &1, 110, lock, ); let xt = TestXt::new( diff --git a/frame/fast-unstake/Cargo.toml b/frame/fast-unstake/Cargo.toml index 61bc823cc11e5..b61060e775a9f 100644 --- a/frame/fast-unstake/Cargo.toml +++ b/frame/fast-unstake/Cargo.toml @@ -25,10 +25,7 @@ sp-std = { version = "5.0.0", default-features = false, path = "../../primitives sp-staking = { default-features = false, path = "../../primitives/staking" } frame-election-provider-support = { default-features = false, path = "../election-provider-support" } -# optional dependencies for cargo features frame-benchmarking = { version = "4.0.0-dev", default-features = false, optional = true, path = "../benchmarking" } -pallet-staking = { default-features = false, optional = true, path = "../staking" } -pallet-assets = { default-features = false, optional = true, path = "../assets" } [dev-dependencies] pallet-staking-reward-curve = { version = "4.0.0-dev", path = "../staking/reward-curve" } @@ -38,8 +35,6 @@ sp-tracing = { version = "6.0.0", path = "../../primitives/tracing" } pallet-staking = { path = "../staking" } pallet-balances = { path = "../balances" } pallet-timestamp = { path = "../timestamp" } -pallet-assets = { path = "../assets" } - [features] default = ["std"] @@ -64,6 +59,5 @@ runtime-benchmarks = [ "frame-benchmarking/runtime-benchmarks", "frame-system/runtime-benchmarks", "sp-staking/runtime-benchmarks", - "pallet-staking/runtime-benchmarks" ] try-runtime = ["frame-support/try-runtime"] diff --git a/frame/fast-unstake/src/lib.rs b/frame/fast-unstake/src/lib.rs index 618afa63c2c4c..7f226826cbc53 100644 --- a/frame/fast-unstake/src/lib.rs +++ b/frame/fast-unstake/src/lib.rs @@ -228,6 +228,7 @@ pub mod pallet { /// If the check fails, the stash remains chilled and waiting to be unbonded as with /// the normal staking system, but it loses part of its unbonding chunks due to consuming /// the chain's resources.
+ #[pallet::call_index(0)] #[pallet::weight(::WeightInfo::register_fast_unstake())] pub fn register_fast_unstake(origin: OriginFor) -> DispatchResult { let ctrl = ensure_signed(origin)?; @@ -257,6 +258,7 @@ pub mod pallet { /// Note that the associated stash is still fully unbonded and chilled as a consequence of /// calling `register_fast_unstake`. This should probably be followed by a call to /// `Staking::rebond`. + #[pallet::call_index(1)] #[pallet::weight(::WeightInfo::deregister())] pub fn deregister(origin: OriginFor) -> DispatchResult { let ctrl = ensure_signed(origin)?; @@ -282,6 +284,7 @@ pub mod pallet { /// Control the operation of this pallet. /// /// Dispatch origin must be signed by the [`Config::ControlOrigin`]. + #[pallet::call_index(2)] #[pallet::weight(::WeightInfo::control())] pub fn control(origin: OriginFor, unchecked_eras_to_check: EraIndex) -> DispatchResult { let _ = T::ControlOrigin::ensure_origin(origin)?; diff --git a/frame/fast-unstake/src/mock.rs b/frame/fast-unstake/src/mock.rs index d66f4ba5663d9..b67dcf581ed97 100644 --- a/frame/fast-unstake/src/mock.rs +++ b/frame/fast-unstake/src/mock.rs @@ -21,7 +21,7 @@ use frame_support::{ pallet_prelude::*, parameter_types, traits::{ConstU64, Currency}, - weights::constants::WEIGHT_PER_SECOND, + weights::constants::WEIGHT_REF_TIME_PER_SECOND, }; use sp_runtime::traits::{Convert, IdentityLookup}; @@ -37,7 +37,7 @@ pub type T = Runtime; parameter_types! { pub BlockWeights: frame_system::limits::BlockWeights = frame_system::limits::BlockWeights::simple_max( - (2u64 * WEIGHT_PER_SECOND).set_proof_size(u64::MAX), + Weight::from_parts(2u64 * WEIGHT_REF_TIME_PER_SECOND, u64::MAX), ); } diff --git a/frame/grandpa/src/default_weights.rs b/frame/grandpa/src/default_weights.rs index 4ca94dd576fb7..ba343cabac622 100644 --- a/frame/grandpa/src/default_weights.rs +++ b/frame/grandpa/src/default_weights.rs @@ -19,7 +19,7 @@ //! This file was not auto-generated. 
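All of the weight-constant hunks in this patch follow one mechanical rule: the old `WEIGHT_PER_*` constants were `Weight` values, whereas the new `WEIGHT_REF_TIME_PER_*` constants are bare `u64` ref-time values, so scalar arithmetic stays in `u64` and the two-dimensional `Weight` (ref time plus proof size) is constructed explicitly. A small self-contained sketch of the before/after shapes seen in these files:

```rust
use frame_support::weights::{
	constants::{WEIGHT_REF_TIME_PER_MICROS, WEIGHT_REF_TIME_PER_SECOND},
	Weight,
};

fn weight_constants_sketch() {
	// Old style: `2u64 * WEIGHT_PER_SECOND` multiplied a `Weight` directly.
	// New style: multiply the bare ref-time constant, then attach an
	// explicit proof-size component.
	let max_block = Weight::from_parts(2u64 * WEIGHT_REF_TIME_PER_SECOND, u64::MAX);

	// Where only computation time is modelled, `from_ref_time` yields a
	// weight with zero proof size.
	let hash_cost = Weight::from_ref_time(2u64 * WEIGHT_REF_TIME_PER_MICROS);

	assert!(hash_cost.ref_time() < max_block.ref_time());
}
```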
use frame_support::weights::{ - constants::{RocksDbWeight as DbWeight, WEIGHT_PER_MICROS, WEIGHT_PER_NANOS}, + constants::{RocksDbWeight as DbWeight, WEIGHT_REF_TIME_PER_MICROS, WEIGHT_REF_TIME_PER_NANOS}, Weight, }; @@ -34,14 +34,19 @@ impl crate::WeightInfo for () { const MAX_NOMINATORS: u64 = 200; // checking membership proof - (35u64 * WEIGHT_PER_MICROS) - .saturating_add((175u64 * WEIGHT_PER_NANOS).saturating_mul(validator_count)) + Weight::from_ref_time(35u64 * WEIGHT_REF_TIME_PER_MICROS) + .saturating_add( + Weight::from_ref_time(175u64 * WEIGHT_REF_TIME_PER_NANOS) + .saturating_mul(validator_count), + ) .saturating_add(DbWeight::get().reads(5)) // check equivocation proof - .saturating_add(95u64 * WEIGHT_PER_MICROS) + .saturating_add(Weight::from_ref_time(95u64 * WEIGHT_REF_TIME_PER_MICROS)) // report offence - .saturating_add(110u64 * WEIGHT_PER_MICROS) - .saturating_add(25u64 * WEIGHT_PER_MICROS * MAX_NOMINATORS) + .saturating_add(Weight::from_ref_time(110u64 * WEIGHT_REF_TIME_PER_MICROS)) + .saturating_add(Weight::from_ref_time( + 25u64 * WEIGHT_REF_TIME_PER_MICROS * MAX_NOMINATORS, + )) .saturating_add(DbWeight::get().reads(14 + 3 * MAX_NOMINATORS)) .saturating_add(DbWeight::get().writes(10 + 3 * MAX_NOMINATORS)) // fetching set id -> session index mappings @@ -49,6 +54,7 @@ impl crate::WeightInfo for () { } fn note_stalled() -> Weight { - (3u64 * WEIGHT_PER_MICROS).saturating_add(DbWeight::get().writes(1)) + Weight::from_ref_time(3u64 * WEIGHT_REF_TIME_PER_MICROS) + .saturating_add(DbWeight::get().writes(1)) } } diff --git a/frame/grandpa/src/equivocation.rs b/frame/grandpa/src/equivocation.rs index 181d22fba545c..23801a4982a82 100644 --- a/frame/grandpa/src/equivocation.rs +++ b/frame/grandpa/src/equivocation.rs @@ -52,7 +52,7 @@ use sp_staking::{ SessionIndex, }; -use super::{Call, Config, Pallet}; +use super::{Call, Config, Pallet, LOG_TARGET}; /// A trait with utility methods for handling equivocation reports in GRANDPA. /// The offence type is generic, and the trait provides methods for reporting an offence @@ -170,15 +170,9 @@ where }; match SubmitTransaction::<T, Call<T>>::submit_unsigned_transaction(call.into()) { - Ok(()) => log::info!( - target: "runtime::afg", - "Submitted GRANDPA equivocation report.", - ), - Err(e) => log::error!( - target: "runtime::afg", - "Error submitting equivocation report: {:?}", - e, - ), + Ok(()) => log::info!(target: LOG_TARGET, "Submitted GRANDPA equivocation report.",), + Err(e) => + log::error!(target: LOG_TARGET, "Error submitting equivocation report: {:?}", e,), } Ok(()) @@ -211,7 +205,7 @@ impl Pallet { TransactionSource::Local | TransactionSource::InBlock => { /* allowed */ }, _ => { log::warn!( - target: "runtime::afg", + target: LOG_TARGET, "rejecting unsigned report equivocation transaction because it is not local/in-block." ); diff --git a/frame/grandpa/src/lib.rs b/frame/grandpa/src/lib.rs index fe5b9861853bf..716cf54865773 100644 --- a/frame/grandpa/src/lib.rs +++ b/frame/grandpa/src/lib.rs @@ -70,6 +70,8 @@ pub use equivocation::{ pub use pallet::*; +const LOG_TARGET: &str = "runtime::grandpa"; + #[frame_support::pallet] pub mod pallet { use super::*; @@ -193,6 +195,7 @@ pub mod pallet { /// equivocation proof and validate the given key ownership proof /// against the extracted offender. If both are valid, the offence /// will be reported.
+ #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::report_equivocation(key_owner_proof.validator_count()))] pub fn report_equivocation( origin: OriginFor, @@ -213,6 +216,7 @@ pub mod pallet { /// block authors will call it (validated in `ValidateUnsigned`), as such /// if the block author is defined it will be defined as the equivocation /// reporter. + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::report_equivocation(key_owner_proof.validator_count()))] pub fn report_equivocation_unsigned( origin: OriginFor, @@ -240,6 +244,7 @@ pub mod pallet { /// block of all validators of the new authority set. /// /// Only callable by root. + #[pallet::call_index(2)] #[pallet::weight(T::WeightInfo::note_stalled())] pub fn note_stalled( origin: OriginFor, diff --git a/frame/grandpa/src/migrations/v4.rs b/frame/grandpa/src/migrations/v4.rs index 81dbd3bab4b67..33e200b728336 100644 --- a/frame/grandpa/src/migrations/v4.rs +++ b/frame/grandpa/src/migrations/v4.rs @@ -15,6 +15,7 @@ // See the License for the specific language governing permissions and // limitations under the License. +use crate::LOG_TARGET; use frame_support::{ traits::{Get, StorageVersion}, weights::Weight, @@ -34,14 +35,14 @@ pub const OLD_PREFIX: &[u8] = b"GrandpaFinality"; pub fn migrate>(new_pallet_name: N) -> Weight { if new_pallet_name.as_ref().as_bytes() == OLD_PREFIX { log::info!( - target: "runtime::afg", + target: LOG_TARGET, "New pallet name is equal to the old prefix. No migration needs to be done.", ); return Weight::zero() } let storage_version = StorageVersion::get::>(); log::info!( - target: "runtime::afg", + target: LOG_TARGET, "Running migration to v3.1 for grandpa with storage version {:?}", storage_version, ); diff --git a/frame/identity/src/lib.rs b/frame/identity/src/lib.rs index 95f5a84d8abb7..8eab2c67418a1 100644 --- a/frame/identity/src/lib.rs +++ b/frame/identity/src/lib.rs @@ -284,6 +284,7 @@ pub mod pallet { /// - One storage mutation (codec `O(R)`). /// - One event. /// # + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::add_registrar(T::MaxRegistrars::get()))] pub fn add_registrar( origin: OriginFor, @@ -329,6 +330,7 @@ pub mod pallet { /// - One storage mutation (codec-read `O(X' + R)`, codec-write `O(X + R)`). /// - One event. /// # + #[pallet::call_index(1)] #[pallet::weight( T::WeightInfo::set_identity( T::MaxRegistrars::get(), // R T::MaxAdditionalFields::get(), // X @@ -404,6 +406,7 @@ pub mod pallet { // N storage items for N sub accounts. Right now the weight on this function // is a large overestimate due to the fact that it could potentially write // to 2 x T::MaxSubAccounts::get(). + #[pallet::call_index(2)] #[pallet::weight(T::WeightInfo::set_subs_old(T::MaxSubAccounts::get()) // P: Assume max sub accounts removed. .saturating_add(T::WeightInfo::set_subs_new(subs.len() as u32)) // S: Assume all subs are new. )] @@ -475,6 +478,7 @@ pub mod pallet { /// - `2` storage reads and `S + 2` storage deletions. /// - One event. /// # + #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::clear_identity( T::MaxRegistrars::get(), // R T::MaxSubAccounts::get(), // S @@ -526,6 +530,7 @@ pub mod pallet { /// - Storage: 1 read `O(R)`, 1 mutate `O(X + R)`. /// - One event. /// # + #[pallet::call_index(4)] #[pallet::weight(T::WeightInfo::request_judgement( T::MaxRegistrars::get(), // R T::MaxAdditionalFields::get(), // X @@ -588,6 +593,7 @@ pub mod pallet { /// - One storage mutation `O(R + X)`. 
/// - One event /// # + #[pallet::call_index(5)] #[pallet::weight(T::WeightInfo::cancel_request( T::MaxRegistrars::get(), // R T::MaxAdditionalFields::get(), // X @@ -636,6 +642,7 @@ pub mod pallet { /// - One storage mutation `O(R)`. /// - Benchmark: 7.315 + R * 0.329 µs (min squares analysis) /// # + #[pallet::call_index(6)] #[pallet::weight(T::WeightInfo::set_fee(T::MaxRegistrars::get()))] // R pub fn set_fee( origin: OriginFor, @@ -674,6 +681,7 @@ pub mod pallet { /// - One storage mutation `O(R)`. /// - Benchmark: 8.823 + R * 0.32 µs (min squares analysis) /// # + #[pallet::call_index(7)] #[pallet::weight(T::WeightInfo::set_account_id(T::MaxRegistrars::get()))] // R pub fn set_account_id( origin: OriginFor, @@ -713,6 +721,7 @@ pub mod pallet { /// - One storage mutation `O(R)`. /// - Benchmark: 7.464 + R * 0.325 µs (min squares analysis) /// # + #[pallet::call_index(8)] #[pallet::weight(T::WeightInfo::set_fields(T::MaxRegistrars::get()))] // R pub fn set_fields( origin: OriginFor, @@ -761,6 +770,7 @@ pub mod pallet { /// - Storage: 1 read `O(R)`, 1 mutate `O(R + X)`. /// - One event. /// # + #[pallet::call_index(9)] #[pallet::weight(T::WeightInfo::provide_judgement( T::MaxRegistrars::get(), // R T::MaxAdditionalFields::get(), // X @@ -834,6 +844,7 @@ pub mod pallet { /// - `S + 2` storage mutations. /// - One event. /// # + #[pallet::call_index(10)] #[pallet::weight(T::WeightInfo::kill_identity( T::MaxRegistrars::get(), // R T::MaxSubAccounts::get(), // S @@ -874,6 +885,7 @@ pub mod pallet { /// /// The dispatch origin for this call must be _Signed_ and the sender must have a registered /// sub identity of `sub`. + #[pallet::call_index(11)] #[pallet::weight(T::WeightInfo::add_sub(T::MaxSubAccounts::get()))] pub fn add_sub( origin: OriginFor, @@ -909,6 +921,7 @@ pub mod pallet { /// /// The dispatch origin for this call must be _Signed_ and the sender must have a registered /// sub identity of `sub`. + #[pallet::call_index(12)] #[pallet::weight(T::WeightInfo::rename_sub(T::MaxSubAccounts::get()))] pub fn rename_sub( origin: OriginFor, @@ -930,6 +943,7 @@ pub mod pallet { /// /// The dispatch origin for this call must be _Signed_ and the sender must have a registered /// sub identity of `sub`. + #[pallet::call_index(13)] #[pallet::weight(T::WeightInfo::remove_sub(T::MaxSubAccounts::get()))] pub fn remove_sub(origin: OriginFor, sub: AccountIdLookupOf) -> DispatchResult { let sender = ensure_signed(origin)?; @@ -959,6 +973,7 @@ pub mod pallet { /// /// NOTE: This should not normally be used, but is provided in the case that the non- /// controller of an account is maliciously registered as a sub-account. + #[pallet::call_index(14)] #[pallet::weight(T::WeightInfo::quit_sub(T::MaxSubAccounts::get()))] pub fn quit_sub(origin: OriginFor) -> DispatchResult { let sender = ensure_signed(origin)?; diff --git a/frame/im-online/src/lib.rs b/frame/im-online/src/lib.rs index 342522ff29b19..f23e610a4874d 100644 --- a/frame/im-online/src/lib.rs +++ b/frame/im-online/src/lib.rs @@ -474,6 +474,7 @@ pub mod pallet { /// # // NOTE: the weight includes the cost of validate_unsigned as it is part of the cost to // import block with such an extrinsic. 
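A pattern worth noting in the identity and im-online hunks: the `#[pallet::weight(..)]` expressions charge the worst case up front, either from configured maxima (such as `T::MaxRegistrars::get()`) or from caller-supplied witness data, and they compose alternative execution paths with `max` and independent parts with `saturating_add`. A hedged fragment with hypothetical `WeightInfo` functions (`path_a`/`path_b` are not from this patch):

```rust
// Two possible execution paths are bounded by the more expensive one,
// and a known storage access is added on top without overflow.
#[pallet::call_index(0)]
#[pallet::weight(
	T::WeightInfo::path_a(input.len() as u32)
		.max(T::WeightInfo::path_b(input.len() as u32))
		.saturating_add(T::DbWeight::get().reads_writes(1, 1))
)]
pub fn do_work(origin: OriginFor<T>, input: Vec<u8>) -> DispatchResult {
	ensure_signed(origin)?;
	// ... either path a or path b runs here; both are pre-paid above ...
	Ok(())
}
```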
+ #[pallet::call_index(0)] #[pallet::weight(::WeightInfo::validate_unsigned_and_then_heartbeat( heartbeat.validators_len as u32, heartbeat.network_state.external_addresses.len() as u32, diff --git a/frame/indices/src/lib.rs b/frame/indices/src/lib.rs index 41893254c3c97..95d3cf4b2eed1 100644 --- a/frame/indices/src/lib.rs +++ b/frame/indices/src/lib.rs @@ -98,6 +98,7 @@ pub mod pallet { /// ------------------- /// - DB Weight: 1 Read/Write (Accounts) /// # + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::claim())] pub fn claim(origin: OriginFor, index: T::AccountIndex) -> DispatchResult { let who = ensure_signed(origin)?; @@ -131,6 +132,7 @@ pub mod pallet { /// - Reads: Indices Accounts, System Account (recipient) /// - Writes: Indices Accounts, System Account (recipient) /// # + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::transfer())] pub fn transfer( origin: OriginFor, @@ -171,6 +173,7 @@ pub mod pallet { /// ------------------- /// - DB Weight: 1 Read/Write (Accounts) /// # + #[pallet::call_index(2)] #[pallet::weight(T::WeightInfo::free())] pub fn free(origin: OriginFor, index: T::AccountIndex) -> DispatchResult { let who = ensure_signed(origin)?; @@ -207,6 +210,7 @@ pub mod pallet { /// - Reads: Indices Accounts, System Account (original owner) /// - Writes: Indices Accounts, System Account (original owner) /// # + #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::force_transfer())] pub fn force_transfer( origin: OriginFor, @@ -245,6 +249,7 @@ pub mod pallet { /// ------------------- /// - DB Weight: 1 Read/Write (Accounts) /// # + #[pallet::call_index(4)] #[pallet::weight(T::WeightInfo::freeze())] pub fn freeze(origin: OriginFor, index: T::AccountIndex) -> DispatchResult { let who = ensure_signed(origin)?; diff --git a/frame/lottery/src/lib.rs b/frame/lottery/src/lib.rs index c501a30ef5f4a..3255062e3fe7f 100644 --- a/frame/lottery/src/lib.rs +++ b/frame/lottery/src/lib.rs @@ -296,6 +296,7 @@ pub mod pallet { /// should listen for the `TicketBought` event. /// /// This extrinsic must be called by a signed origin. + #[pallet::call_index(0)] #[pallet::weight( T::WeightInfo::buy_ticket() .saturating_add(call.get_dispatch_info().weight) @@ -317,6 +318,7 @@ pub mod pallet { /// provided by this pallet, which uses storage to determine the valid calls. /// /// This extrinsic must be called by the Manager origin. + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::set_calls(calls.len() as u32))] pub fn set_calls( origin: OriginFor, @@ -344,6 +346,7 @@ pub mod pallet { /// * `length`: How long the lottery should run for starting at the current block. /// * `delay`: How long after the lottery end we should wait before picking a winner. /// * `repeat`: If the lottery should repeat when completed. + #[pallet::call_index(2)] #[pallet::weight(T::WeightInfo::start_lottery())] pub fn start_lottery( origin: OriginFor, @@ -376,6 +379,7 @@ pub mod pallet { /// The lottery will continue to run to completion. /// /// This extrinsic must be called by the `ManagerOrigin`. + #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::stop_repeat())] pub fn stop_repeat(origin: OriginFor) -> DispatchResult { T::ManagerOrigin::ensure_origin(origin)?; diff --git a/frame/lottery/src/weights.rs b/frame/lottery/src/weights.rs index c0936ae6c8073..e9ee528cc43b8 100644 --- a/frame/lottery/src/weights.rs +++ b/frame/lottery/src/weights.rs @@ -18,7 +18,7 @@ //! Autogenerated weights for pallet_lottery //! //! 
THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION 4.0.0-dev -//! DATE: 2022-11-18, STEPS: `50`, REPEAT: 20, LOW RANGE: `[]`, HIGH RANGE: `[]` +//! DATE: 2022-12-08, STEPS: `50`, REPEAT: 20, LOW RANGE: `[]`, HIGH RANGE: `[]` //! HOSTNAME: `bm3`, CPU: `Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz` //! EXECUTION: Some(Wasm), WASM-EXECUTION: Compiled, CHAIN: Some("dev"), DB CACHE: 1024 @@ -67,33 +67,33 @@ impl WeightInfo for SubstrateWeight { // Storage: System Account (r:1 w:1) // Storage: Lottery Tickets (r:0 w:1) fn buy_ticket() -> Weight { - // Minimum execution time: 53_735 nanoseconds. - Weight::from_ref_time(54_235_000) + // Minimum execution time: 52_479 nanoseconds. + Weight::from_ref_time(53_225_000) .saturating_add(T::DbWeight::get().reads(6)) .saturating_add(T::DbWeight::get().writes(4)) } // Storage: Lottery CallIndices (r:0 w:1) /// The range of component `n` is `[0, 10]`. fn set_calls(n: u32, ) -> Weight { - // Minimum execution time: 15_065 nanoseconds. - Weight::from_ref_time(16_467_398) - // Standard Error: 5_392 - .saturating_add(Weight::from_ref_time(294_914).saturating_mul(n.into())) + // Minimum execution time: 14_433 nanoseconds. + Weight::from_ref_time(15_660_780) + // Standard Error: 5_894 + .saturating_add(Weight::from_ref_time(290_482).saturating_mul(n.into())) .saturating_add(T::DbWeight::get().writes(1)) } // Storage: Lottery Lottery (r:1 w:1) // Storage: Lottery LotteryIndex (r:1 w:1) // Storage: System Account (r:1 w:1) fn start_lottery() -> Weight { - // Minimum execution time: 45_990 nanoseconds. - Weight::from_ref_time(46_789_000) + // Minimum execution time: 43_683 nanoseconds. + Weight::from_ref_time(44_580_000) .saturating_add(T::DbWeight::get().reads(3)) .saturating_add(T::DbWeight::get().writes(3)) } // Storage: Lottery Lottery (r:1 w:1) fn stop_repeat() -> Weight { - // Minimum execution time: 10_783 nanoseconds. - Weight::from_ref_time(11_180_000) + // Minimum execution time: 10_514 nanoseconds. + Weight::from_ref_time(10_821_000) .saturating_add(T::DbWeight::get().reads(1)) .saturating_add(T::DbWeight::get().writes(1)) } @@ -103,8 +103,8 @@ impl WeightInfo for SubstrateWeight { // Storage: Lottery TicketsCount (r:1 w:1) // Storage: Lottery Tickets (r:1 w:0) fn on_initialize_end() -> Weight { - // Minimum execution time: 62_088 nanoseconds. - Weight::from_ref_time(63_670_000) + // Minimum execution time: 60_254 nanoseconds. + Weight::from_ref_time(61_924_000) .saturating_add(T::DbWeight::get().reads(6)) .saturating_add(T::DbWeight::get().writes(4)) } @@ -115,8 +115,8 @@ impl WeightInfo for SubstrateWeight { // Storage: Lottery Tickets (r:1 w:0) // Storage: Lottery LotteryIndex (r:1 w:1) fn on_initialize_repeat() -> Weight { - // Minimum execution time: 64_953 nanoseconds. - Weight::from_ref_time(65_465_000) + // Minimum execution time: 61_552 nanoseconds. + Weight::from_ref_time(62_152_000) .saturating_add(T::DbWeight::get().reads(7)) .saturating_add(T::DbWeight::get().writes(5)) } @@ -132,33 +132,33 @@ impl WeightInfo for () { // Storage: System Account (r:1 w:1) // Storage: Lottery Tickets (r:0 w:1) fn buy_ticket() -> Weight { - // Minimum execution time: 53_735 nanoseconds. - Weight::from_ref_time(54_235_000) + // Minimum execution time: 52_479 nanoseconds. + Weight::from_ref_time(53_225_000) .saturating_add(RocksDbWeight::get().reads(6)) .saturating_add(RocksDbWeight::get().writes(4)) } // Storage: Lottery CallIndices (r:0 w:1) /// The range of component `n` is `[0, 10]`. 
fn set_calls(n: u32, ) -> Weight { - // Minimum execution time: 15_065 nanoseconds. - Weight::from_ref_time(16_467_398) - // Standard Error: 5_392 - .saturating_add(Weight::from_ref_time(294_914).saturating_mul(n.into())) + // Minimum execution time: 14_433 nanoseconds. + Weight::from_ref_time(15_660_780) + // Standard Error: 5_894 + .saturating_add(Weight::from_ref_time(290_482).saturating_mul(n.into())) .saturating_add(RocksDbWeight::get().writes(1)) } // Storage: Lottery Lottery (r:1 w:1) // Storage: Lottery LotteryIndex (r:1 w:1) // Storage: System Account (r:1 w:1) fn start_lottery() -> Weight { - // Minimum execution time: 45_990 nanoseconds. - Weight::from_ref_time(46_789_000) + // Minimum execution time: 43_683 nanoseconds. + Weight::from_ref_time(44_580_000) .saturating_add(RocksDbWeight::get().reads(3)) .saturating_add(RocksDbWeight::get().writes(3)) } // Storage: Lottery Lottery (r:1 w:1) fn stop_repeat() -> Weight { - // Minimum execution time: 10_783 nanoseconds. - Weight::from_ref_time(11_180_000) + // Minimum execution time: 10_514 nanoseconds. + Weight::from_ref_time(10_821_000) .saturating_add(RocksDbWeight::get().reads(1)) .saturating_add(RocksDbWeight::get().writes(1)) } @@ -168,8 +168,8 @@ impl WeightInfo for () { // Storage: Lottery TicketsCount (r:1 w:1) // Storage: Lottery Tickets (r:1 w:0) fn on_initialize_end() -> Weight { - // Minimum execution time: 62_088 nanoseconds. - Weight::from_ref_time(63_670_000) + // Minimum execution time: 60_254 nanoseconds. + Weight::from_ref_time(61_924_000) .saturating_add(RocksDbWeight::get().reads(6)) .saturating_add(RocksDbWeight::get().writes(4)) } @@ -180,8 +180,8 @@ impl WeightInfo for () { // Storage: Lottery Tickets (r:1 w:0) // Storage: Lottery LotteryIndex (r:1 w:1) fn on_initialize_repeat() -> Weight { - // Minimum execution time: 64_953 nanoseconds. - Weight::from_ref_time(65_465_000) + // Minimum execution time: 61_552 nanoseconds. + Weight::from_ref_time(62_152_000) .saturating_add(RocksDbWeight::get().reads(7)) .saturating_add(RocksDbWeight::get().writes(5)) } diff --git a/frame/membership/src/lib.rs b/frame/membership/src/lib.rs index 4191bbcc5d86e..77b53aa72dad8 100644 --- a/frame/membership/src/lib.rs +++ b/frame/membership/src/lib.rs @@ -166,6 +166,7 @@ pub mod pallet { /// Add a member `who` to the set. /// /// May only be called from `T::AddOrigin`. + #[pallet::call_index(0)] #[pallet::weight(50_000_000)] pub fn add_member(origin: OriginFor, who: AccountIdLookupOf) -> DispatchResult { T::AddOrigin::ensure_origin(origin)?; @@ -188,6 +189,7 @@ pub mod pallet { /// Remove a member `who` from the set. /// /// May only be called from `T::RemoveOrigin`. + #[pallet::call_index(1)] #[pallet::weight(50_000_000)] pub fn remove_member(origin: OriginFor, who: AccountIdLookupOf) -> DispatchResult { T::RemoveOrigin::ensure_origin(origin)?; @@ -211,6 +213,7 @@ pub mod pallet { /// May only be called from `T::SwapOrigin`. /// /// Prime membership is *not* passed from `remove` to `add`, if extant. + #[pallet::call_index(2)] #[pallet::weight(50_000_000)] pub fn swap_member( origin: OriginFor, @@ -244,6 +247,7 @@ pub mod pallet { /// pass `members` pre-sorted. /// /// May only be called from `T::ResetOrigin`. + #[pallet::call_index(3)] #[pallet::weight(50_000_000)] pub fn reset_members(origin: OriginFor, members: Vec) -> DispatchResult { T::ResetOrigin::ensure_origin(origin)?; @@ -266,6 +270,7 @@ pub mod pallet { /// May only be called from `Signed` origin of a current member. 
/// /// Prime membership is passed from the origin account to `new`, if extant. + #[pallet::call_index(4)] #[pallet::weight(50_000_000)] pub fn change_key(origin: OriginFor, new: AccountIdLookupOf) -> DispatchResult { let remove = ensure_signed(origin)?; @@ -300,6 +305,7 @@ pub mod pallet { /// Set the prime member. Must be a current member. /// /// May only be called from `T::PrimeOrigin`. + #[pallet::call_index(5)] #[pallet::weight(50_000_000)] pub fn set_prime(origin: OriginFor, who: AccountIdLookupOf) -> DispatchResult { T::PrimeOrigin::ensure_origin(origin)?; @@ -313,6 +319,7 @@ pub mod pallet { /// Remove the prime member if it exists. /// /// May only be called from `T::PrimeOrigin`. + #[pallet::call_index(6)] #[pallet::weight(50_000_000)] pub fn clear_prime(origin: OriginFor) -> DispatchResult { T::PrimeOrigin::ensure_origin(origin)?; diff --git a/frame/merkle-mountain-range/src/default_weights.rs b/frame/merkle-mountain-range/src/default_weights.rs index e513e2197f1c6..e4f9750fbcba5 100644 --- a/frame/merkle-mountain-range/src/default_weights.rs +++ b/frame/merkle-mountain-range/src/default_weights.rs @@ -19,7 +19,7 @@ //! This file was not auto-generated. use frame_support::weights::{ - constants::{RocksDbWeight as DbWeight, WEIGHT_PER_NANOS}, + constants::{RocksDbWeight as DbWeight, WEIGHT_REF_TIME_PER_NANOS}, Weight, }; @@ -28,7 +28,7 @@ impl crate::WeightInfo for () { // Reading the parent hash. let leaf_weight = DbWeight::get().reads(1); // Blake2 hash cost. - let hash_weight = 2u64 * WEIGHT_PER_NANOS; + let hash_weight = Weight::from_ref_time(2u64 * WEIGHT_REF_TIME_PER_NANOS); // No-op hook. let hook_weight = Weight::zero(); diff --git a/frame/message-queue/Cargo.toml b/frame/message-queue/Cargo.toml new file mode 100644 index 0000000000000..47d114902f52c --- /dev/null +++ b/frame/message-queue/Cargo.toml @@ -0,0 +1,53 @@ +[package] +authors = ["Parity Technologies "] +edition = "2021" +name = "pallet-message-queue" +version = "7.0.0-dev" +license = "Apache-2.0" +homepage = "https://substrate.io" +repository = "https://github.com/paritytech/substrate/" +description = "FRAME pallet to queue and process messages" + +[dependencies] +codec = { package = "parity-scale-codec", version = "3.0.0", default-features = false, features = ["derive"] } +scale-info = { version = "2.1.2", default-features = false, features = ["derive"] } +serde = { version = "1.0.137", optional = true, features = ["derive"] } +log = { version = "0.4.17", default-features = false } + +sp-core = { version = "7.0.0", default-features = false, path = "../../primitives/core" } +sp-io = { version = "7.0.0", default-features = false, path = "../../primitives/io" } +sp-runtime = { version = "7.0.0", default-features = false, path = "../../primitives/runtime" } +sp-std = { version = "5.0.0", default-features = false, path = "../../primitives/std" } +sp-arithmetic = { version = "6.0.0", default-features = false, path = "../../primitives/arithmetic" } +sp-weights = { version = "4.0.0", default-features = false, path = "../../primitives/weights" } + +frame-benchmarking = { version = "4.0.0-dev", default-features = false, optional = true, path = "../benchmarking" } +frame-support = { version = "4.0.0-dev", default-features = false, path = "../support" } +frame-system = { version = "4.0.0-dev", default-features = false, path = "../system" } + +[dev-dependencies] +sp-tracing = { version = "6.0.0", path = "../../primitives/tracing" } +rand = "0.8.5" +rand_distr = "0.4.3" + +[features] +default = ["std"] +std = [ + 
"codec/std", + "scale-info/std", + "sp-core/std", + "sp-io/std", + "sp-runtime/std", + "sp-std/std", + "sp-arithmetic/std", + "sp-weights/std", + "frame-benchmarking?/std", + "frame-support/std", + "frame-system/std", +] +runtime-benchmarks = [ + "frame-benchmarking/runtime-benchmarks", + "frame-support/runtime-benchmarks", + "frame-system/runtime-benchmarks", +] +try-runtime = ["frame-support/try-runtime"] diff --git a/frame/message-queue/src/benchmarking.rs b/frame/message-queue/src/benchmarking.rs new file mode 100644 index 0000000000000..9cd6b75e4d0ae --- /dev/null +++ b/frame/message-queue/src/benchmarking.rs @@ -0,0 +1,205 @@ +// This file is part of Substrate. + +// Copyright (C) 2022 Parity Technologies (UK) Ltd. +// SPDX-License-Identifier: Apache-2.0 + +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +//! Benchmarking for the message queue pallet. + +#![cfg(feature = "runtime-benchmarks")] +#![allow(unused_assignments)] // Needed for `ready_ring_knit`. + +use super::{mock_helpers::*, Pallet as MessageQueue, *}; + +use frame_benchmarking::{benchmarks, whitelisted_caller}; +use frame_support::traits::Get; +use frame_system::RawOrigin; +use sp_std::prelude::*; + +benchmarks! { + where_clause { + where + // NOTE: We need to generate multiple origins, therefore Origin is `From`. The + // `PartialEq` is for asserting the outcome of the ring (un)knitting and *could* be + // removed if really necessary. + <::MessageProcessor as ProcessMessage>::Origin: From + PartialEq, + ::Size: From, + } + + // Worst case path of `ready_ring_knit`. + ready_ring_knit { + let mid: MessageOriginOf:: = 1.into(); + build_ring::(&[0.into(), mid.clone(), 2.into()]); + unknit::(&mid); + assert_ring::(&[0.into(), 2.into()]); + let mut neighbours = None; + }: { + neighbours = MessageQueue::::ready_ring_knit(&mid).ok(); + } verify { + // The neighbours needs to be modified manually. + BookStateFor::::mutate(&mid, |b| { b.ready_neighbours = neighbours }); + assert_ring::(&[0.into(), 2.into(), mid]); + } + + // Worst case path of `ready_ring_unknit`. + ready_ring_unknit { + build_ring::(&[0.into(), 1.into(), 2.into()]); + assert_ring::(&[0.into(), 1.into(), 2.into()]); + let o: MessageOriginOf:: = 0.into(); + let neighbours = BookStateFor::::get(&o).ready_neighbours.unwrap(); + }: { + MessageQueue::::ready_ring_unknit(&o, neighbours); + } verify { + assert_ring::(&[1.into(), 2.into()]); + } + + // `service_queues` without any queue processing. + service_queue_base { + }: { + MessageQueue::::service_queue(0.into(), &mut WeightMeter::max_limit(), Weight::MAX) + } + + // `service_page` without any message processing but with page completion. 
+ service_page_base_completion { + let origin: MessageOriginOf = 0.into(); + let page = PageOf::::default(); + Pages::::insert(&origin, 0, &page); + let mut book_state = single_page_book::(); + let mut meter = WeightMeter::max_limit(); + let limit = Weight::MAX; + }: { + MessageQueue::::service_page(&origin, &mut book_state, &mut meter, limit) + } + + // `service_page` without any message processing and without page completion. + service_page_base_no_completion { + let origin: MessageOriginOf = 0.into(); + let mut page = PageOf::::default(); + // Mock the storage such that `is_complete` returns `false` but `peek_first` returns `None`. + page.first = 1.into(); + page.remaining = 1.into(); + Pages::::insert(&origin, 0, &page); + let mut book_state = single_page_book::(); + let mut meter = WeightMeter::max_limit(); + let limit = Weight::MAX; + }: { + MessageQueue::::service_page(&origin, &mut book_state, &mut meter, limit) + } + + // Processing a single message from a page. + service_page_item { + let msg = vec![1u8; MaxMessageLenOf::::get() as usize]; + let mut page = page::(&msg.clone()); + let mut book = book_for::(&page); + assert!(page.peek_first().is_some(), "There is one message"); + let mut weight = WeightMeter::max_limit(); + }: { + let status = MessageQueue::::service_page_item(&0u32.into(), 0, &mut book, &mut page, &mut weight, Weight::MAX); + assert_eq!(status, ItemExecutionStatus::Executed(true)); + } verify { + // Check that it was processed. + assert_last_event::(Event::Processed { + hash: T::Hashing::hash(&msg), origin: 0.into(), + weight_used: 1.into_weight(), success: true + }.into()); + let (_, processed, _) = page.peek_index(0).unwrap(); + assert!(processed); + assert_eq!(book.message_count, 0); + } + + // Worst case for calling `bump_service_head`. + bump_service_head { + setup_bump_service_head::(0.into(), 10.into()); + let mut weight = WeightMeter::max_limit(); + }: { + MessageQueue::::bump_service_head(&mut weight); + } verify { + assert_eq!(ServiceHead::::get().unwrap(), 10u32.into()); + assert_eq!(weight.consumed, T::WeightInfo::bump_service_head()); + } + + reap_page { + // Mock the storage to get a *cullable* but not *reapable* page. + let origin: MessageOriginOf = 0.into(); + let mut book = single_page_book::(); + let (page, msgs) = full_page::(); + + for p in 0 .. T::MaxStale::get() * T::MaxStale::get() { + if p == 0 { + Pages::::insert(&origin, p, &page); + } + book.end += 1; + book.count += 1; + book.message_count += msgs as u64; + book.size += page.remaining_size.into() as u64; + } + book.begin = book.end - T::MaxStale::get(); + BookStateFor::::insert(&origin, &book); + assert!(Pages::::contains_key(&origin, 0)); + + }: _(RawOrigin::Signed(whitelisted_caller()), 0u32.into(), 0) + verify { + assert_last_event::(Event::PageReaped{ origin: 0.into(), index: 0 }.into()); + assert!(!Pages::::contains_key(&origin, 0)); + } + + // Worst case for `execute_overweight` where the page is removed as completed. + // + // The worst case occurs when executing the last message in a page of which all are skipped since it is using `peek_index` which has linear complexities. + execute_overweight_page_removed { + let origin: MessageOriginOf = 0.into(); + let (mut page, msgs) = full_page::(); + // Skip all messages. 
+ for _ in 1..msgs { + page.skip_first(true); + } + page.skip_first(false); + let book = book_for::(&page); + Pages::::insert(&origin, 0, &page); + BookStateFor::::insert(&origin, &book); + }: { + MessageQueue::::execute_overweight(RawOrigin::Signed(whitelisted_caller()).into(), 0u32.into(), 0u32, ((msgs - 1) as u32).into(), Weight::MAX).unwrap() + } + verify { + assert_last_event::(Event::Processed { + hash: T::Hashing::hash(&((msgs - 1) as u32).encode()), origin: 0.into(), + weight_used: Weight::from_parts(1, 1), success: true + }.into()); + assert!(!Pages::::contains_key(&origin, 0), "Page must be removed"); + } + + // Worst case for `execute_overweight` where the page is updated. + execute_overweight_page_updated { + let origin: MessageOriginOf = 0.into(); + let (mut page, msgs) = full_page::(); + // Skip all messages. + for _ in 0..msgs { + page.skip_first(false); + } + let book = book_for::(&page); + Pages::::insert(&origin, 0, &page); + BookStateFor::::insert(&origin, &book); + }: { + MessageQueue::::execute_overweight(RawOrigin::Signed(whitelisted_caller()).into(), 0u32.into(), 0u32, ((msgs - 1) as u32).into(), Weight::MAX).unwrap() + } + verify { + assert_last_event::(Event::Processed { + hash: T::Hashing::hash(&((msgs - 1) as u32).encode()), origin: 0.into(), + weight_used: Weight::from_parts(1, 1), success: true + }.into()); + assert!(Pages::::contains_key(&origin, 0), "Page must be updated"); + } + + impl_benchmark_test_suite!(MessageQueue, crate::mock::new_test_ext::(), crate::integration_test::Test); +} diff --git a/frame/message-queue/src/integration_test.rs b/frame/message-queue/src/integration_test.rs new file mode 100644 index 0000000000000..f4b1b7a125449 --- /dev/null +++ b/frame/message-queue/src/integration_test.rs @@ -0,0 +1,225 @@ +// This file is part of Substrate. + +// Copyright (C) 2022 Parity Technologies (UK) Ltd. +// SPDX-License-Identifier: Apache-2.0 + +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +//! Stress tests pallet-message-queue. Defines its own runtime config to use larger constants for +//! `HeapSize` and `MaxStale`. + +#![cfg(test)] + +use crate::{ + mock::{ + new_test_ext, CountingMessageProcessor, IntoWeight, MockedWeightInfo, NumMessagesProcessed, + }, + *, +}; + +use crate as pallet_message_queue; +use frame_support::{ + parameter_types, + traits::{ConstU32, ConstU64}, +}; +use rand::{rngs::StdRng, Rng, SeedableRng}; +use rand_distr::Pareto; +use sp_core::H256; +use sp_runtime::{ + testing::Header, + traits::{BlakeTwo256, IdentityLookup}, +}; + +type UncheckedExtrinsic = frame_system::mocking::MockUncheckedExtrinsic; +type Block = frame_system::mocking::MockBlock; + +frame_support::construct_runtime!( + pub enum Test where + Block = Block, + NodeBlock = Block, + UncheckedExtrinsic = UncheckedExtrinsic, + { + System: frame_system::{Pallet, Call, Config, Storage, Event}, + MessageQueue: pallet_message_queue::{Pallet, Call, Storage, Event}, + } +); + +parameter_types! 
{ + pub BlockWeights: frame_system::limits::BlockWeights = + frame_system::limits::BlockWeights::simple_max(frame_support::weights::Weight::from_ref_time(1024)); +} +impl frame_system::Config for Test { + type BaseCallFilter = frame_support::traits::Everything; + type BlockWeights = (); + type BlockLength = (); + type DbWeight = (); + type RuntimeOrigin = RuntimeOrigin; + type Index = u64; + type BlockNumber = u64; + type Hash = H256; + type RuntimeCall = RuntimeCall; + type Hashing = BlakeTwo256; + type AccountId = u64; + type Lookup = IdentityLookup; + type Header = Header; + type RuntimeEvent = RuntimeEvent; + type BlockHashCount = ConstU64<250>; + type Version = (); + type PalletInfo = PalletInfo; + type AccountData = (); + type OnNewAccount = (); + type OnKilledAccount = (); + type SystemWeightInfo = (); + type SS58Prefix = (); + type OnSetCode = (); + type MaxConsumers = ConstU32<16>; +} + +parameter_types! { + pub const HeapSize: u32 = 32 * 1024; + pub const MaxStale: u32 = 32; + pub static ServiceWeight: Option = Some(Weight::from_parts(100, 100)); +} + +impl Config for Test { + type RuntimeEvent = RuntimeEvent; + type WeightInfo = MockedWeightInfo; + type MessageProcessor = CountingMessageProcessor; + type Size = u32; + type QueueChangeHandler = (); + type HeapSize = HeapSize; + type MaxStale = MaxStale; + type ServiceWeight = ServiceWeight; +} + +/// Simulates heavy usage by enqueueing and processing large amounts of messages. +/// +/// Best to run with `-r`, `RUST_LOG=info` and `RUSTFLAGS='-Cdebug-assertions=y'`. +/// +/// # Example output +/// +/// ```pre +/// Enqueued 1189 messages across 176 queues. Payload 46.97 KiB +/// Processing 772 of 1189 messages +/// Enqueued 9270 messages across 1559 queues. Payload 131.85 KiB +/// Processing 6262 of 9687 messages +/// Enqueued 5025 messages across 1225 queues. Payload 100.23 KiB +/// Processing 1739 of 8450 messages +/// Enqueued 42061 messages across 6357 queues. Payload 536.29 KiB +/// Processing 11675 of 48772 messages +/// Enqueued 20253 messages across 2420 queues. Payload 288.34 KiB +/// Processing 28711 of 57350 messages +/// Processing all remaining 28639 messages +/// ``` +#[test] +#[ignore] // Only run in the CI. +fn stress_test_enqueue_and_service() { + let blocks = 20; + let max_queues = 10_000; + let max_messages_per_queue = 10_000; + let max_msg_len = MaxMessageLenOf::::get(); + let mut rng = StdRng::seed_from_u64(42); + + new_test_ext::().execute_with(|| { + let mut msgs_remaining = 0; + for _ in 0..blocks { + // Start by enqueuing a large number of messages. + let (enqueued, _) = + enqueue_messages(max_queues, max_messages_per_queue, max_msg_len, &mut rng); + msgs_remaining += enqueued; + + // Pick a fraction of all messages currently in queue and process them. + let processed = rng.gen_range(1..=msgs_remaining); + log::info!("Processing {} of all messages {}", processed, msgs_remaining); + process_messages(processed); // This also advances the block. + msgs_remaining -= processed; + } + log::info!("Processing all remaining {} messages", msgs_remaining); + process_messages(msgs_remaining); + post_conditions(); + }); +} + +/// Enqueue a random number of random messages into a random number of queues. 
+fn enqueue_messages( + max_queues: u32, + max_per_queue: u32, + max_msg_len: u32, + rng: &mut StdRng, +) -> (u32, usize) { + let num_queues = rng.gen_range(1..max_queues); + let mut num_messages = 0; + let mut total_msg_len = 0; + for origin in 0..num_queues { + let num_messages_per_queue = + (rng.sample(Pareto::new(1.0, 1.1).unwrap()) as u32).min(max_per_queue); + + for m in 0..num_messages_per_queue { + let mut message = format!("{}:{}", &origin, &m).into_bytes(); + let msg_len = (rng.sample(Pareto::new(1.0, 1.0).unwrap()) as u32) + .clamp(message.len() as u32, max_msg_len); + message.resize(msg_len as usize, 0); + MessageQueue::enqueue_message( + BoundedSlice::defensive_truncate_from(&message), + origin.into(), + ); + total_msg_len += msg_len; + } + num_messages += num_messages_per_queue; + } + log::info!( + "Enqueued {} messages across {} queues. Payload {:.2} KiB", + num_messages, + num_queues, + total_msg_len as f64 / 1024.0 + ); + (num_messages, total_msg_len as usize) +} + +/// Process the number of messages. +fn process_messages(num_msgs: u32) { + let weight = (num_msgs as u64).into_weight(); + ServiceWeight::set(Some(weight)); + let consumed = next_block(); + + assert_eq!(consumed, weight, "\n{}", MessageQueue::debug_info()); + assert_eq!(NumMessagesProcessed::take(), num_msgs as usize); +} + +/// Returns the weight consumed by `MessageQueue::on_initialize()`. +fn next_block() -> Weight { + MessageQueue::on_finalize(System::block_number()); + System::on_finalize(System::block_number()); + System::set_block_number(System::block_number() + 1); + System::on_initialize(System::block_number()); + MessageQueue::on_initialize(System::block_number()) +} + +/// Assert that the pallet is in the expected post state. +fn post_conditions() { + // All queues are empty. + for (_, book) in BookStateFor::::iter() { + assert!(book.end >= book.begin); + assert_eq!(book.count, 0); + assert_eq!(book.size, 0); + assert_eq!(book.message_count, 0); + assert!(book.ready_neighbours.is_none()); + } + // No pages remain. + assert_eq!(Pages::::iter().count(), 0); + // Service head is gone. + assert!(ServiceHead::::get().is_none()); + // This still works fine. + assert_eq!(MessageQueue::service_queues(Weight::MAX), Weight::zero(), "Nothing left"); + next_block(); +} diff --git a/frame/message-queue/src/lib.rs b/frame/message-queue/src/lib.rs new file mode 100644 index 0000000000000..8d9faebe0517f --- /dev/null +++ b/frame/message-queue/src/lib.rs @@ -0,0 +1,1311 @@ +// This file is part of Substrate. + +// Copyright (C) 2022 Parity Technologies (UK) Ltd. +// SPDX-License-Identifier: Apache-2.0 + +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +//! # Generalized Message Queue Pallet +//! +//! Provides generalized message queuing and processing capabilities on a per-queue basis for +//! arbitrary use-cases. +//! +//! # Design Goals +//! +//! 1. Minimal assumptions about `Message`s and `MessageOrigin`s. Both should be MEL bounded blobs. +//! This ensures the generality and reusability of the pallet. +//! 
2. Well known and tightly limited pre-dispatch PoV weights, especially for message execution. +//! This is paramount for the success of the pallet since message execution is done in +//! `on_initialize` which must _never_ under-estimate its PoV weight. It also needs a frugal PoV +//! footprint since PoV is scarce and this is (possibly) done in every block. This must also hold +//! in the presence of unpredictable message size distributions. +//! 3. Usable as XCMP, DMP and UMP message/dispatch queue - possibly through adapter types. +//! +//! # Design +//! +//! The pallet has means to enqueue, store and process messages. This is implemented by having +//! *queues* which store enqueued messages and can be *served* to process said messages. A queue is +//! identified by its origin in the `BookStateFor`. Each message has an origin which defines into +//! which queue it will be stored. Messages are stored by being appended to the last [`Page`] of a +//! book. Each book keeps track of its pages by indexing `Pages`. The `ReadyRing` contains all +//! queues which hold at least one unprocessed message and are thereby *ready* to be serviced. The +//! `ServiceHead` indicates which *ready* queue is the next to be serviced. +//! The pallet implements [`frame_support::traits::EnqueueMessage`], +//! [`frame_support::traits::ServiceQueues`] and has [`frame_support::traits::ProcessMessage`] and +//! [`OnQueueChanged`] hooks to communicate with the outside world. +//! +//! NOTE: The storage items are not linked since they are not public. +//! +//! **Message Execution** +//! +//! Executing a message is offloaded to the [`Config::MessageProcessor`] which contains the actual +//! logic of how to handle the message, since messages are opaque blobs. A message can be temporarily or +//! permanently overweight. The pallet will perpetually try to execute a temporarily overweight +//! message. A permanently overweight message is skipped and must be executed manually. +//! +//! **Pagination** +//! +//! Queues are stored in a *paged* manner by splitting their messages into [`Page`]s. This results +//! in a lot of complexity when implementing the pallet but is completely necessary to achieve the +//! second #[Design Goal](design-goals). The problem comes from the fact that a message can *possibly* be +//! quite large, let's say 64KiB. This then results in a *MEL* of at least 64KiB which results in a +//! PoV of at least 64KiB. Now we have the assumption that most messages are much shorter than their +//! maximum allowed length. This would result in most messages having a pre-dispatch PoV size which +//! is much larger than their post-dispatch PoV size, possibly by a factor of a thousand. Disregarding +//! this observation would cripple the processing power of the pallet since it cannot straighten out +//! this discrepancy at runtime. Conceptually, the implementation is packing as many messages into a +//! single bounded vec as actually fit into the bounds. This reduces the wasted PoV. +//! +//! **Page Data Layout** +//! +//! A Page contains a heap which holds all its messages. The heap is built by concatenating +//! `(ItemHeader, Message)` pairs. The [`ItemHeader`] contains the length of the message which is +//! needed for retrieving it. This layout allows for constant access time of the next message and +//! linear access time for any message in the page. The header must remain minimal to reduce its PoV +//! impact. +//! +//! **Weight Metering** +//! +//!
+//! The pallet utilizes the [`sp_weights::WeightMeter`] to manually track its consumption to always
+//! stay within the required limit. This implies that the message processor hook can calculate the
+//! weight of a message without executing it. This restricts the possible use-cases but is necessary
+//! since the pallet runs in `on_initialize` which has a hard weight limit. The weight meter is used
+//! in a way that `can_accrue` and `check_accrue` are always used to check the remaining weight of
+//! an operation before committing to it. The process of exiting due to insufficient weight is
+//! termed "bailing".
+//!
+//! # Scenario: Message enqueuing
+//!
+//! A message `m` is enqueued for origin `o` into queue `Q[o]` through
+//! [`frame_support::traits::EnqueueMessage::enqueue_message`]`(m, o)`.
+//!
+//! First, the queue is loaded if it exists, or otherwise created with empty default values. The
+//! message is then inserted into the queue by appending it to its last `Page`, or by creating a
+//! new `Page` just for `m` if it does not fit there. The number of messages in the `Book` is
+//! incremented.
+//!
+//! `Q[o]` is now *ready* which will eventually result in `m` being processed.
+//!
+//! # Scenario: Message processing
+//!
+//! The pallet runs each block in `on_initialize`, or when manually called through
+//! [`frame_support::traits::ServiceQueues::service_queues`].
+//!
+//! First it tries to "rotate" the `ReadyRing` by one by advancing the `ServiceHead` to the
+//! next *ready* queue. It then starts to service this queue by servicing as many pages of it as
+//! possible. Servicing a page means executing as many of its messages as possible. Each executed
+//! message is marked as *processed* if the [`Config::MessageProcessor`] returns `Ok`. An event
+//! [`Event::Processed`] is emitted afterwards. It is possible that the weight limit of the pallet
+//! will never allow a specific message to be executed. In this case the message remains
+//! unprocessed and is skipped. This process stops if either there are no more messages in the
+//! queue or the remaining weight becomes insufficient to service this queue. If there is enough
+//! weight it tries to advance to the next *ready* queue and service it. This continues until there
+//! are no more queues on which it can make progress, or there is not enough weight left to even
+//! check for that.
+//!
+//! # Scenario: Overweight execution
+//!
+//! A permanently overweight message which was skipped by the message processing will never be
+//! executed automatically through `on_initialize` nor by calling
+//! [`frame_support::traits::ServiceQueues::service_queues`].
+//!
+//! Manual intervention in the form of
+//! [`frame_support::traits::ServiceQueues::execute_overweight`] is necessary. Overweight messages
+//! emit an [`Event::OverweightEnqueued`] event which can be used to extract the arguments for
+//! manual execution. This only works on permanently overweight messages. There is no guarantee
+//! that this will work since the message could be part of a stale page and be reaped before
+//! execution commences.
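+//!
+//! # Example: Enqueue and service
+//!
+//! A condensed sketch of the two scenarios above, written in the style of this pallet's tests.
+//! The `msg` helper, `MessageOrigin` and `into_weight` come from the testing/mocking code and are
+//! not part of the pallet itself:
+//!
+//! ```ignore
+//! // Enqueue two messages for the mocked origin `Here`.
+//! MessageQueue::enqueue_message(msg("a"), MessageOrigin::Here);
+//! MessageQueue::enqueue_message(msg("b"), MessageOrigin::Here);
+//! // Service with enough weight for exactly one message: only "a" is processed.
+//! assert_eq!(MessageQueue::service_queues(1.into_weight()), 1.into_weight());
+//! // A second call processes the remaining "b".
+//! assert_eq!(MessageQueue::service_queues(1.into_weight()), 1.into_weight());
+//! ```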
+//!
+//! # Terminology
+//!
+//! - `Message`: A blob of data into which the pallet has no introspection, defined as
+//! [`BoundedSlice<u8, MaxMessageLenOf<T>>`]. The message length is limited by [`MaxMessageLenOf`]
+//! which is calculated from [`Config::HeapSize`] and [`ItemHeader::max_encoded_len()`].
+//! - `MessageOrigin`: A generic *origin* of a message, defined as [`MessageOriginOf`]. The
+//! requirements for it are kept minimal to remain as generic as possible. The type is defined in
+//! [`frame_support::traits::ProcessMessage::Origin`].
+//! - `Page`: An array of `Message`s, see [`Page`]. Can never be empty.
+//! - `Book`: A list of `Page`s, see [`BookState`]. Can be empty.
+//! - `Queue`: A `Book` together with a `MessageOrigin` which can be part of the `ReadyRing`. Can
+//! be empty.
+//! - `ReadyRing`: A double-linked list which contains all *ready* `Queue`s. It chains together the
+//! queues via their `ready_neighbours` fields. A `Queue` is *ready* if it contains at least one
+//! `Message` which can be processed. Can be empty.
+//! - `ServiceHead`: A pointer into the `ReadyRing` to the next `Queue` to be serviced.
+//! - (`un`)`processed`: A message is marked as *processed* after it was executed by the pallet. A
+//! message which was not yet executed, or could not be executed, remains `unprocessed`, which is
+//! the default state for a message after being enqueued.
+//! - `knitting`/`unknitting`: The means of adding or removing a `Queue` from the `ReadyRing`.
+//! - `MEL`: The Max Encoded Length of a type, see [`codec::MaxEncodedLen`].
+//!
+//! # Properties
+//!
+//! **Liveness - Enqueueing**
+//!
+//! It is always possible to enqueue any message for any `MessageOrigin`.
+//!
+//! **Liveness - Processing**
+//!
+//! `on_initialize` always respects its finite weight-limit.
+//!
+//! **Progress - Enqueueing**
+//!
+//! An enqueued message immediately becomes *unprocessed* and thereby eligible for execution.
+//!
+//! **Progress - Processing**
+//!
+//! The pallet will execute at least one unprocessed message per block, if there is any. Ensuring
+//! this property needs careful consideration of the concrete weights, since it is possible that the
+//! weight limit of `on_initialize` never allows for the execution of even one message; trivially if
+//! the limit is set to zero. `integrity_test` can be used to ensure that this property holds.
+//!
+//! **Fairness - Enqueuing**
+//!
+//! Enqueueing a message for a specific `MessageOrigin` does not influence the ability to enqueue a
+//! message for the same or any other `MessageOrigin`; guaranteed by **Liveness - Enqueueing**.
+//!
+//! **Fairness - Processing**
+//!
+//! The average amount of weight available for message processing is the same for each queue if the
+//! number of queues is constant. Creating a new queue must therefore be, possibly economically,
+//! expensive. Currently this is achieved by having one queue per para-chain/thread, which keeps the
+//! number of queues within `O(n)` and should be "good enough".
+
+#![cfg_attr(not(feature = "std"), no_std)]
+
+mod benchmarking;
+mod integration_test;
+mod mock;
+pub mod mock_helpers;
+mod tests;
+pub mod weights;
+
+use codec::{Codec, Decode, Encode, MaxEncodedLen};
+use frame_support::{
+	defensive,
+	pallet_prelude::*,
+	traits::{
+		DefensiveTruncateFrom, EnqueueMessage, ExecuteOverweightError, Footprint, ProcessMessage,
+		ProcessMessageError, ServiceQueues,
+	},
+	BoundedSlice, CloneNoBound, DefaultNoBound,
+};
+use frame_system::pallet_prelude::*;
+pub use pallet::*;
+use scale_info::TypeInfo;
+use sp_arithmetic::traits::{BaseArithmetic, Unsigned};
+use sp_runtime::{
+	traits::{Hash, One, Zero},
+	SaturatedConversion, Saturating,
+};
+use sp_std::{fmt::Debug, ops::Deref, prelude::*, vec};
+use sp_weights::WeightMeter;
+pub use weights::WeightInfo;
+
+/// Type for identifying a page.
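+/// Within one queue, page indices grow monotonically: new pages are appended at `BookState::end`
+/// (which is only ever incremented) and old pages are removed from the front or reaped, so an
+/// index is not reused for a queue.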
+type PageIndex = u32; + +/// Data encoded and prefixed to the encoded `MessageItem`. +#[derive(Encode, Decode, PartialEq, MaxEncodedLen, Debug)] +pub struct ItemHeader { + /// The length of this item, not including the size of this header. The next item of the page + /// follows immediately after the payload of this item. + payload_len: Size, + /// Whether this item has been processed. + is_processed: bool, +} + +/// A page of messages. Pages always contain at least one item. +#[derive( + CloneNoBound, Encode, Decode, RuntimeDebugNoBound, DefaultNoBound, TypeInfo, MaxEncodedLen, +)] +#[scale_info(skip_type_params(HeapSize))] +#[codec(mel_bound(Size: MaxEncodedLen))] +pub struct Page + Debug + Clone + Default, HeapSize: Get> { + /// Messages remaining to be processed; this includes overweight messages which have been + /// skipped. + remaining: Size, + /// The size of all remaining messages to be processed. + /// + /// Includes overweight messages outside of the `first` to `last` window. + remaining_size: Size, + /// The number of items before the `first` item in this page. + first_index: Size, + /// The heap-offset of the header of the first message item in this page which is ready for + /// processing. + first: Size, + /// The heap-offset of the header of the last message item in this page. + last: Size, + /// The heap. If `self.offset == self.heap.len()` then the page is empty and should be deleted. + heap: BoundedVec>, +} + +impl< + Size: BaseArithmetic + Unsigned + Copy + Into + Codec + MaxEncodedLen + Debug + Default, + HeapSize: Get, + > Page +{ + /// Create a [`Page`] from one unprocessed message. + fn from_message(message: BoundedSlice>) -> Self { + let payload_len = message.len(); + let data_len = ItemHeader::::max_encoded_len().saturating_add(payload_len); + let payload_len = payload_len.saturated_into(); + let header = ItemHeader:: { payload_len, is_processed: false }; + + let mut heap = Vec::with_capacity(data_len); + header.using_encoded(|h| heap.extend_from_slice(h)); + heap.extend_from_slice(message.deref()); + + Page { + remaining: One::one(), + remaining_size: payload_len, + first_index: Zero::zero(), + first: Zero::zero(), + last: Zero::zero(), + heap: BoundedVec::defensive_truncate_from(heap), + } + } + + /// Try to append one message to a page. + fn try_append_message( + &mut self, + message: BoundedSlice>, + ) -> Result<(), ()> { + let pos = self.heap.len(); + let payload_len = message.len(); + let data_len = ItemHeader::::max_encoded_len().saturating_add(payload_len); + let payload_len = payload_len.saturated_into(); + let header = ItemHeader:: { payload_len, is_processed: false }; + let heap_size: u32 = HeapSize::get().into(); + if (heap_size as usize).saturating_sub(self.heap.len()) < data_len { + // Can't fit. + return Err(()) + } + + let mut heap = sp_std::mem::take(&mut self.heap).into_inner(); + header.using_encoded(|h| heap.extend_from_slice(h)); + heap.extend_from_slice(message.deref()); + self.heap = BoundedVec::defensive_truncate_from(heap); + self.last = pos.saturated_into(); + self.remaining.saturating_inc(); + self.remaining_size.saturating_accrue(payload_len); + Ok(()) + } + + /// Returns the first message in the page without removing it. + /// + /// SAFETY: Does not panic even on corrupted storage. 
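+	///
+	/// A rough usage sketch (`T` stands for any runtime fulfilling the bounds of this impl; the
+	/// message value is a placeholder):
+	///
+	/// ```ignore
+	/// let message = BoundedSlice::defensive_truncate_from(&b"hello"[..]);
+	/// let page = PageOf::<T>::from_message::<T>(message);
+	/// // Peeking does not mutate the page and yields the message payload.
+	/// assert_eq!(&page.peek_first().unwrap()[..], &b"hello"[..]);
+	/// ```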
+ fn peek_first(&self) -> Option>> { + if self.first > self.last { + return None + } + let f = (self.first.into() as usize).min(self.heap.len()); + let mut item_slice = &self.heap[f..]; + if let Ok(h) = ItemHeader::::decode(&mut item_slice) { + let payload_len = h.payload_len.into() as usize; + if payload_len <= item_slice.len() { + // impossible to truncate since is sliced up from `self.heap: BoundedVec` + return Some(BoundedSlice::defensive_truncate_from(&item_slice[..payload_len])) + } + } + defensive!("message-queue: heap corruption"); + None + } + + /// Point `first` at the next message, marking the first as processed if `is_processed` is true. + fn skip_first(&mut self, is_processed: bool) { + let f = (self.first.into() as usize).min(self.heap.len()); + if let Ok(mut h) = ItemHeader::decode(&mut &self.heap[f..]) { + if is_processed && !h.is_processed { + h.is_processed = true; + h.using_encoded(|d| self.heap[f..f + d.len()].copy_from_slice(d)); + self.remaining.saturating_dec(); + self.remaining_size.saturating_reduce(h.payload_len); + } + self.first + .saturating_accrue(ItemHeader::::max_encoded_len().saturated_into()); + self.first.saturating_accrue(h.payload_len); + self.first_index.saturating_inc(); + } + } + + /// Return the message with index `index` in the form of `(position, processed, message)`. + fn peek_index(&self, index: usize) -> Option<(usize, bool, &[u8])> { + let mut pos = 0; + let mut item_slice = &self.heap[..]; + let header_len: usize = ItemHeader::::max_encoded_len().saturated_into(); + for _ in 0..index { + let h = ItemHeader::::decode(&mut item_slice).ok()?; + let item_len = h.payload_len.into() as usize; + if item_slice.len() < item_len { + return None + } + item_slice = &item_slice[item_len..]; + pos.saturating_accrue(header_len.saturating_add(item_len)); + } + let h = ItemHeader::::decode(&mut item_slice).ok()?; + if item_slice.len() < h.payload_len.into() as usize { + return None + } + item_slice = &item_slice[..h.payload_len.into() as usize]; + Some((pos, h.is_processed, item_slice)) + } + + /// Set the `is_processed` flag for the item at `pos` to be `true` if not already and decrement + /// the `remaining` counter of the page. + /// + /// Does nothing if no [`ItemHeader`] could be decoded at the given position. + fn note_processed_at_pos(&mut self, pos: usize) { + if let Ok(mut h) = ItemHeader::::decode(&mut &self.heap[pos..]) { + if !h.is_processed { + h.is_processed = true; + h.using_encoded(|d| self.heap[pos..pos + d.len()].copy_from_slice(d)); + self.remaining.saturating_dec(); + self.remaining_size.saturating_reduce(h.payload_len); + } + } + } + + /// Returns whether the page is *complete* which means that no messages remain. + fn is_complete(&self) -> bool { + self.remaining.is_zero() + } +} + +/// A single link in the double-linked Ready Ring list. +#[derive(Clone, Encode, Decode, MaxEncodedLen, TypeInfo, RuntimeDebug, PartialEq)] +pub struct Neighbours { + /// The previous queue. + prev: MessageOrigin, + /// The next queue. + next: MessageOrigin, +} + +/// The state of a queue as represented by a book of its pages. +/// +/// Each queue has exactly one book which holds all of its pages. All pages of a book combined +/// contain all of the messages of its queue; hence the name *Book*. +/// Books can be chained together in a double-linked fashion through their `ready_neighbours` field. +#[derive(Clone, Encode, Decode, MaxEncodedLen, TypeInfo, RuntimeDebug)] +pub struct BookState { + /// The first page with some items to be processed in it. 
If this is `>= end`, then there are
+	/// no pages with items to be processed in them.
+	begin: PageIndex,
+	/// One more than the last page with some items to be processed in it.
+	end: PageIndex,
+	/// The number of pages stored at present.
+	///
+	/// This might be larger than `end-begin`, because we keep pages with unprocessed overweight
+	/// messages outside of the end/begin window.
+	count: PageIndex,
+	/// If this book has any ready pages, then this will be `Some` with the previous and next
+	/// neighbours. This wraps around.
+	ready_neighbours: Option<Neighbours<MessageOrigin>>,
+	/// The number of unprocessed messages stored at present.
+	message_count: u64,
+	/// The total size of all unprocessed messages stored at present.
+	size: u64,
+}
+
+impl<MessageOrigin> Default for BookState<MessageOrigin> {
+	fn default() -> Self {
+		Self { begin: 0, end: 0, count: 0, ready_neighbours: None, message_count: 0, size: 0 }
+	}
+}
+
+/// Handler code for when the items in a queue change.
+pub trait OnQueueChanged<Id> {
+	/// Note that the queue `id` now has `item_count` items in it, taking up `items_size` bytes.
+	fn on_queue_changed(id: Id, items_count: u64, items_size: u64);
+}
+
+impl<Id> OnQueueChanged<Id> for () {
+	fn on_queue_changed(_: Id, _: u64, _: u64) {}
+}
+
+#[frame_support::pallet]
+pub mod pallet {
+	use super::*;
+
+	#[pallet::pallet]
+	#[pallet::generate_store(pub(super) trait Store)]
+	pub struct Pallet<T>(_);
+
+	/// The module configuration trait.
+	#[pallet::config]
+	pub trait Config: frame_system::Config {
+		/// The overarching event type.
+		type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;
+
+		/// Weight information for extrinsics in this pallet.
+		type WeightInfo: WeightInfo;
+
+		/// Processor for a message.
+		///
+		/// Must be set to [`mock_helpers::NoopMessageProcessor`] for benchmarking.
+		/// Other message processors that consume exactly `(1, 1)` weight for any given message
+		/// will work as well. Otherwise the benchmarking will also measure the weight of the
+		/// message processor, which is not desired.
+		type MessageProcessor: ProcessMessage;
+
+		/// Page/heap size type.
+		type Size: BaseArithmetic
+			+ Unsigned
+			+ Copy
+			+ Into<u32>
+			+ Member
+			+ Encode
+			+ Decode
+			+ MaxEncodedLen
+			+ TypeInfo
+			+ Default;
+
+		/// Code to be called when a message queue changes - either with items introduced or
+		/// removed.
+		type QueueChangeHandler: OnQueueChanged<<Self::MessageProcessor as ProcessMessage>::Origin>;
+
+		/// The size of the page; this implies the maximum message size which can be sent.
+		///
+		/// A good value depends on the expected message sizes, their weights, the weight that is
+		/// available for processing them and the maximal needed message size. The maximal message
+		/// size is slightly lower than this as defined by [`MaxMessageLenOf`].
+		#[pallet::constant]
+		type HeapSize: Get<Self::Size>;
+
+		/// The maximum number of stale pages (i.e. of overweight messages) allowed before culling
+		/// can happen. Once there are more stale pages than this, then historical pages may be
+		/// dropped, even if they contain unprocessed overweight messages.
+		#[pallet::constant]
+		type MaxStale: Get<u32>;
+
+		/// The amount of weight (if any) which should be provided to the message queue for
+		/// servicing enqueued items.
+		///
+		/// This may be legitimately `None` in the case that you will call
+		/// `ServiceQueues::service_queues` manually.
+		#[pallet::constant]
+		type ServiceWeight: Get<Option<Weight>>;
+	}
+
+	#[pallet::event]
+	#[pallet::generate_deposit(pub(super) fn deposit_event)]
+	pub enum Event<T: Config> {
+		/// Message discarded due to an inability to decode the item.
Usually caused by state + /// corruption. + Discarded { hash: T::Hash }, + /// Message discarded due to an error in the `MessageProcessor` (usually a format error). + ProcessingFailed { hash: T::Hash, origin: MessageOriginOf, error: ProcessMessageError }, + /// Message is processed. + Processed { hash: T::Hash, origin: MessageOriginOf, weight_used: Weight, success: bool }, + /// Message placed in overweight queue. + OverweightEnqueued { + hash: T::Hash, + origin: MessageOriginOf, + page_index: PageIndex, + message_index: T::Size, + }, + /// This page was reaped. + PageReaped { origin: MessageOriginOf, index: PageIndex }, + } + + #[pallet::error] + pub enum Error { + /// Page is not reapable because it has items remaining to be processed and is not old + /// enough. + NotReapable, + /// Page to be reaped does not exist. + NoPage, + /// The referenced message could not be found. + NoMessage, + /// The message was already processed and cannot be processed again. + AlreadyProcessed, + /// The message is queued for future execution. + Queued, + /// There is temporarily not enough weight to continue servicing messages. + InsufficientWeight, + } + + /// The index of the first and last (non-empty) pages. + #[pallet::storage] + pub(super) type BookStateFor = + StorageMap<_, Twox64Concat, MessageOriginOf, BookState>, ValueQuery>; + + /// The origin at which we should begin servicing. + #[pallet::storage] + pub(super) type ServiceHead = StorageValue<_, MessageOriginOf, OptionQuery>; + + /// The map of page indices to pages. + #[pallet::storage] + pub(super) type Pages = StorageDoubleMap< + _, + Twox64Concat, + MessageOriginOf, + Twox64Concat, + PageIndex, + Page, + OptionQuery, + >; + + #[pallet::hooks] + impl Hooks> for Pallet { + fn on_initialize(_n: BlockNumberFor) -> Weight { + if let Some(weight_limit) = T::ServiceWeight::get() { + Self::service_queues(weight_limit) + } else { + Weight::zero() + } + } + + /// Check all assumptions about [`crate::Config`]. + fn integrity_test() { + assert!(!MaxMessageLenOf::::get().is_zero(), "HeapSize too low"); + } + } + + #[pallet::call] + impl Pallet { + /// Remove a page which has no more messages remaining to be processed or is stale. + #[pallet::call_index(0)] + #[pallet::weight(T::WeightInfo::reap_page())] + pub fn reap_page( + origin: OriginFor, + message_origin: MessageOriginOf, + page_index: PageIndex, + ) -> DispatchResult { + let _ = ensure_signed(origin)?; + Self::do_reap_page(&message_origin, page_index) + } + + /// Execute an overweight message. + /// + /// - `origin`: Must be `Signed`. + /// - `message_origin`: The origin from which the message to be executed arrived. + /// - `page`: The page in the queue in which the message to be executed is sitting. + /// - `index`: The index into the queue of the message to be executed. + /// - `weight_limit`: The maximum amount of weight allowed to be consumed in the execution + /// of the message. + /// + /// Benchmark complexity considerations: O(index + weight_limit). 
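+		///
+		/// For example, from a test context (all concrete values are placeholders; `page` and
+		/// `index` would normally be taken from a prior [`Event::OverweightEnqueued`]):
+		///
+		/// ```ignore
+		/// MessageQueue::execute_overweight(
+		/// 	RuntimeOrigin::signed(1),
+		/// 	MessageOrigin::Here,
+		/// 	0, // page index
+		/// 	0, // message index within the page
+		/// 	Weight::from_parts(10, 10),
+		/// )?;
+		/// ```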
+ #[pallet::call_index(1)] + #[pallet::weight( + T::WeightInfo::execute_overweight_page_updated().max( + T::WeightInfo::execute_overweight_page_removed()).saturating_add(*weight_limit) + )] + pub fn execute_overweight( + origin: OriginFor, + message_origin: MessageOriginOf, + page: PageIndex, + index: T::Size, + weight_limit: Weight, + ) -> DispatchResultWithPostInfo { + let _ = ensure_signed(origin)?; + let actual_weight = + Self::do_execute_overweight(message_origin, page, index, weight_limit)?; + Ok(Some(actual_weight).into()) + } + } +} + +/// The status of a page after trying to execute its next message. +#[derive(PartialEq, Debug)] +enum PageExecutionStatus { + /// The execution bailed because there was not enough weight remaining. + Bailed, + /// No more messages could be loaded. This does _not_ imply `page.is_complete()`. + /// + /// The reasons for this status are: + /// - The end of the page is reached but there could still be skipped messages. + /// - The storage is corrupted. + NoMore, +} + +/// The status after trying to execute the next item of a [`Page`]. +#[derive(PartialEq, Debug)] +enum ItemExecutionStatus { + /// The execution bailed because there was not enough weight remaining. + Bailed, + /// The item was not found. + NoItem, + /// Whether the execution of an item resulted in it being processed. + /// + /// One reason for `false` would be permanently overweight. + Executed(bool), +} + +/// The status of an attempt to process a message. +#[derive(PartialEq)] +enum MessageExecutionStatus { + /// There is not enough weight remaining at present. + InsufficientWeight, + /// There will never be enough weight. + Overweight, + /// The message was processed successfully. + Processed, + /// The message was processed and resulted in a permanent error. + Unprocessable, +} + +impl Pallet { + /// Knit `origin` into the ready ring right at the end. + /// + /// Return the two ready ring neighbours of `origin`. + fn ready_ring_knit(origin: &MessageOriginOf) -> Result>, ()> { + if let Some(head) = ServiceHead::::get() { + let mut head_book_state = BookStateFor::::get(&head); + let mut head_neighbours = head_book_state.ready_neighbours.take().ok_or(())?; + let tail = head_neighbours.prev; + head_neighbours.prev = origin.clone(); + head_book_state.ready_neighbours = Some(head_neighbours); + BookStateFor::::insert(&head, head_book_state); + + let mut tail_book_state = BookStateFor::::get(&tail); + let mut tail_neighbours = tail_book_state.ready_neighbours.take().ok_or(())?; + tail_neighbours.next = origin.clone(); + tail_book_state.ready_neighbours = Some(tail_neighbours); + BookStateFor::::insert(&tail, tail_book_state); + + Ok(Neighbours { next: head, prev: tail }) + } else { + ServiceHead::::put(origin); + Ok(Neighbours { next: origin.clone(), prev: origin.clone() }) + } + } + + fn ready_ring_unknit(origin: &MessageOriginOf, neighbours: Neighbours>) { + if origin == &neighbours.next { + debug_assert!( + origin == &neighbours.prev, + "unknitting from single item ring; outgoing must be only item" + ); + // Service queue empty. 
+			ServiceHead::<T>::kill();
+		} else {
+			BookStateFor::<T>::mutate(&neighbours.next, |book_state| {
+				if let Some(ref mut n) = book_state.ready_neighbours {
+					n.prev = neighbours.prev.clone()
+				}
+			});
+			BookStateFor::<T>::mutate(&neighbours.prev, |book_state| {
+				if let Some(ref mut n) = book_state.ready_neighbours {
+					n.next = neighbours.next.clone()
+				}
+			});
+			if let Some(head) = ServiceHead::<T>::get() {
+				if &head == origin {
+					ServiceHead::<T>::put(neighbours.next);
+				}
+			} else {
+				defensive!("`ServiceHead` must be some if there was a ready queue");
+			}
+		}
+	}
+
+	/// Tries to bump the current `ServiceHead` to the next ready queue.
+	///
+	/// Returns the current head if it got bumped and `None` otherwise.
+	fn bump_service_head(weight: &mut WeightMeter) -> Option<MessageOriginOf<T>> {
+		if !weight.check_accrue(T::WeightInfo::bump_service_head()) {
+			return None
+		}
+
+		if let Some(head) = ServiceHead::<T>::get() {
+			let mut head_book_state = BookStateFor::<T>::get(&head);
+			if let Some(head_neighbours) = head_book_state.ready_neighbours.take() {
+				ServiceHead::<T>::put(&head_neighbours.next);
+				Some(head)
+			} else {
+				None
+			}
+		} else {
+			None
+		}
+	}
+
+	fn do_enqueue_message(
+		origin: &MessageOriginOf<T>,
+		message: BoundedSlice<u8, MaxMessageLenOf<T>>,
+	) {
+		let mut book_state = BookStateFor::<T>::get(origin);
+		book_state.message_count.saturating_inc();
+		book_state
+			.size
+			// This should be payload size, but here the payload *is* the message.
+			.saturating_accrue(message.len() as u64);
+
+		if book_state.end > book_state.begin {
+			debug_assert!(book_state.ready_neighbours.is_some(), "Must be in ready ring if ready");
+			// Already have a page in progress - attempt to append.
+			let last = book_state.end - 1;
+			let mut page = match Pages::<T>::get(origin, last) {
+				Some(p) => p,
+				None => {
+					defensive!("Corruption: referenced page doesn't exist.");
+					return
+				},
+			};
+			if page.try_append_message::<T>(message).is_ok() {
+				Pages::<T>::insert(origin, last, &page);
+				BookStateFor::<T>::insert(origin, book_state);
+				return
+			}
+		} else {
+			debug_assert!(
+				book_state.ready_neighbours.is_none(),
+				"Must not be in ready ring if not ready"
+			);
+			// insert into ready queue.
+			match Self::ready_ring_knit(origin) {
+				Ok(neighbours) => book_state.ready_neighbours = Some(neighbours),
+				Err(()) => {
+					defensive!("Ring state invalid when knitting");
+				},
+			}
+		}
+		// No room on the page or no page - link in a new page.
+		book_state.end.saturating_inc();
+		book_state.count.saturating_inc();
+		let page = Page::from_message::<T>(message);
+		Pages::<T>::insert(origin, book_state.end - 1, page);
+		// NOTE: `T::QueueChangeHandler` is called by the caller.
+		BookStateFor::<T>::insert(origin, book_state);
+	}
+
+	/// Try to execute a single message that was marked as overweight.
+	///
+	/// The `weight_limit` is the weight that can be consumed to execute the message. The base
+	/// weight of the function itself must be measured by the caller.
+ pub fn do_execute_overweight( + origin: MessageOriginOf, + page_index: PageIndex, + index: T::Size, + weight_limit: Weight, + ) -> Result> { + let mut book_state = BookStateFor::::get(&origin); + let mut page = Pages::::get(&origin, page_index).ok_or(Error::::NoPage)?; + let (pos, is_processed, payload) = + page.peek_index(index.into() as usize).ok_or(Error::::NoMessage)?; + let payload_len = payload.len() as u64; + ensure!( + page_index < book_state.begin || + (page_index == book_state.begin && pos < page.first.into() as usize), + Error::::Queued + ); + ensure!(!is_processed, Error::::AlreadyProcessed); + use MessageExecutionStatus::*; + let mut weight_counter = WeightMeter::from_limit(weight_limit); + match Self::process_message_payload( + origin.clone(), + page_index, + index, + payload, + &mut weight_counter, + Weight::MAX, + // ^^^ We never recognise it as permanently overweight, since that would result in an + // additional overweight event being deposited. + ) { + Overweight | InsufficientWeight => Err(Error::::InsufficientWeight), + Unprocessable | Processed => { + page.note_processed_at_pos(pos); + book_state.message_count.saturating_dec(); + book_state.size.saturating_reduce(payload_len); + let page_weight = if page.remaining.is_zero() { + debug_assert!( + page.remaining_size.is_zero(), + "no messages remaining; no space taken; qed" + ); + Pages::::remove(&origin, page_index); + debug_assert!(book_state.count >= 1, "page exists, so book must have pages"); + book_state.count.saturating_dec(); + T::WeightInfo::execute_overweight_page_removed() + // no need to consider .first or ready ring since processing an overweight page + // would not alter that state. + } else { + Pages::::insert(&origin, page_index, page); + T::WeightInfo::execute_overweight_page_updated() + }; + BookStateFor::::insert(&origin, &book_state); + T::QueueChangeHandler::on_queue_changed( + origin, + book_state.message_count, + book_state.size, + ); + Ok(weight_counter.consumed.saturating_add(page_weight)) + }, + } + } + + /// Remove a stale page or one which has no more messages remaining to be processed. + fn do_reap_page(origin: &MessageOriginOf, page_index: PageIndex) -> DispatchResult { + let mut book_state = BookStateFor::::get(origin); + // definitely not reapable if the page's index is no less than the `begin`ning of ready + // pages. + ensure!(page_index < book_state.begin, Error::::NotReapable); + + let page = Pages::::get(origin, page_index).ok_or(Error::::NoPage)?; + + // definitely reapable if the page has no messages in it. + let reapable = page.remaining.is_zero(); + + // also reapable if the page index has dropped below our watermark. + let cullable = || { + let total_pages = book_state.count; + let ready_pages = book_state.end.saturating_sub(book_state.begin).min(total_pages); + + // The number of stale pages - i.e. pages which contain unprocessed overweight messages. + // We would prefer to keep these around but will restrict how far into history they can + // extend if we notice that there's too many of them. + // + // We don't know *where* in history these pages are so we use a dynamic formula which + // reduces the historical time horizon as the stale pages pile up and increases it as + // they reduce. + let stale_pages = total_pages - ready_pages; + + // The maximum number of stale pages (i.e. of overweight messages) allowed before + // culling can happen at all. 
Once there are more stale pages than this, then historical
+			// pages may be dropped, even if they contain unprocessed overweight messages.
+			let max_stale = T::MaxStale::get();
+
+			// The amount beyond the maximum which is being used. If it's not beyond the maximum
+			// then we exit now since no culling is needed.
+			let overflow = match stale_pages.checked_sub(max_stale + 1) {
+				Some(x) => x + 1,
+				None => return false,
+			};
+
+			// The special formula which tells us how deep into index-history we will cull pages.
+			// As the overflow is greater (and thus the need to drop items from storage is more
+			// urgent) this is reduced, allowing a greater range of pages to be culled.
+			// With a minimum `overflow` (`1`), this returns `max_stale ** 2`, indicating we only
+			// cull beyond that number of indices deep into history.
+			// As this overflow increases, our depth reduces down to a limit of `max_stale`. We
+			// never want to reduce below this since this will certainly allow enough pages to be
+			// culled in order to bring `overflow` back to zero.
+			let backlog = (max_stale * max_stale / overflow).max(max_stale);
+
+			let watermark = book_state.begin.saturating_sub(backlog);
+			page_index < watermark
+		};
+		ensure!(reapable || cullable(), Error::<T>::NotReapable);
+
+		Pages::<T>::remove(origin, page_index);
+		debug_assert!(book_state.count > 0, "reaping a page implies there are pages");
+		book_state.count.saturating_dec();
+		book_state.message_count.saturating_reduce(page.remaining.into() as u64);
+		book_state.size.saturating_reduce(page.remaining_size.into() as u64);
+		BookStateFor::<T>::insert(origin, &book_state);
+		T::QueueChangeHandler::on_queue_changed(
+			origin.clone(),
+			book_state.message_count,
+			book_state.size,
+		);
+		Self::deposit_event(Event::PageReaped { origin: origin.clone(), index: page_index });
+
+		Ok(())
+	}
+
+	/// Execute any messages remaining to be processed in the queue of `origin`, using up to
+	/// `weight_limit` to do so. Any messages which would take more than `overweight_limit` to
+	/// execute are deemed overweight and ignored.
+	fn service_queue(
+		origin: MessageOriginOf<T>,
+		weight: &mut WeightMeter,
+		overweight_limit: Weight,
+	) -> (bool, Option<MessageOriginOf<T>>) {
+		if !weight.check_accrue(
+			T::WeightInfo::service_queue_base().saturating_add(T::WeightInfo::ready_ring_unknit()),
+		) {
+			return (false, None)
+		}
+
+		let mut book_state = BookStateFor::<T>::get(&origin);
+		let mut total_processed = 0;
+
+		while book_state.end > book_state.begin {
+			let (processed, status) =
+				Self::service_page(&origin, &mut book_state, weight, overweight_limit);
+			total_processed.saturating_accrue(processed);
+			match status {
+				// Store the page progress and do not go to the next one.
+				PageExecutionStatus::Bailed => break,
+				// Go to the next page if this one is at the end.
+				PageExecutionStatus::NoMore => (),
+			};
+			book_state.begin.saturating_inc();
+		}
+		let next_ready = book_state.ready_neighbours.as_ref().map(|x| x.next.clone());
+		if book_state.begin >= book_state.end && total_processed > 0 {
+			// No longer ready - unknit.
+			if let Some(neighbours) = book_state.ready_neighbours.take() {
+				Self::ready_ring_unknit(&origin, neighbours);
+			} else {
+				defensive!("Freshly processed queue must have been ready");
+			}
+		}
+		BookStateFor::<T>::insert(&origin, &book_state);
+		if total_processed > 0 {
+			T::QueueChangeHandler::on_queue_changed(
+				origin,
+				book_state.message_count,
+				book_state.size,
+			);
+		}
+		(total_processed > 0, next_ready)
+	}
+
+	/// Service as many messages of a page as possible.
+ /// + /// Returns how many messages were processed and the page's status. + fn service_page( + origin: &MessageOriginOf, + book_state: &mut BookStateOf, + weight: &mut WeightMeter, + overweight_limit: Weight, + ) -> (u32, PageExecutionStatus) { + use PageExecutionStatus::*; + if !weight.check_accrue( + T::WeightInfo::service_page_base_completion() + .max(T::WeightInfo::service_page_base_no_completion()), + ) { + return (0, Bailed) + } + + let page_index = book_state.begin; + let mut page = match Pages::::get(origin, page_index) { + Some(p) => p, + None => { + defensive!("message-queue: referenced page not found"); + return (0, NoMore) + }, + }; + + let mut total_processed = 0; + + // Execute as many messages as possible. + let status = loop { + use ItemExecutionStatus::*; + match Self::service_page_item( + origin, + page_index, + book_state, + &mut page, + weight, + overweight_limit, + ) { + Bailed => break PageExecutionStatus::Bailed, + NoItem => break PageExecutionStatus::NoMore, + // Keep going as long as we make progress... + Executed(true) => total_processed.saturating_inc(), + Executed(false) => (), + } + }; + + if page.is_complete() { + debug_assert!(status != Bailed, "we never bail if a page became complete"); + Pages::::remove(origin, page_index); + debug_assert!(book_state.count > 0, "completing a page implies there are pages"); + book_state.count.saturating_dec(); + } else { + Pages::::insert(origin, page_index, page); + } + (total_processed, status) + } + + /// Execute the next message of a page. + pub(crate) fn service_page_item( + origin: &MessageOriginOf, + page_index: PageIndex, + book_state: &mut BookStateOf, + page: &mut PageOf, + weight: &mut WeightMeter, + overweight_limit: Weight, + ) -> ItemExecutionStatus { + // This ugly pre-checking is needed for the invariant + // "we never bail if a page became complete". + if page.is_complete() { + return ItemExecutionStatus::NoItem + } + if !weight.check_accrue(T::WeightInfo::service_page_item()) { + return ItemExecutionStatus::Bailed + } + + let payload = &match page.peek_first() { + Some(m) => m, + None => return ItemExecutionStatus::NoItem, + }[..]; + + use MessageExecutionStatus::*; + let is_processed = match Self::process_message_payload( + origin.clone(), + page_index, + page.first_index, + payload.deref(), + weight, + overweight_limit, + ) { + InsufficientWeight => return ItemExecutionStatus::Bailed, + Processed | Unprocessable => true, + Overweight => false, + }; + + if is_processed { + book_state.message_count.saturating_dec(); + book_state.size.saturating_reduce(payload.len() as u64); + } + page.skip_first(is_processed); + ItemExecutionStatus::Executed(is_processed) + } + + /// Print the pages in each queue and the messages in each page. + /// + /// Processed messages are prefixed with a `*` and the current `begin`ning page with a `>`. 
+ /// + /// # Example output + /// + /// ```text + /// queue Here: + /// page 0: [] + /// > page 1: [] + /// page 2: ["\0weight=4", "\0c", ] + /// page 3: ["\0bigbig 1", ] + /// page 4: ["\0bigbig 2", ] + /// page 5: ["\0bigbig 3", ] + /// ``` + #[cfg(feature = "std")] + pub fn debug_info() -> String { + let mut info = String::new(); + for (origin, book_state) in BookStateFor::::iter() { + let mut queue = format!("queue {:?}:\n", &origin); + let mut pages = Pages::::iter_prefix(&origin).collect::>(); + pages.sort_by(|(a, _), (b, _)| a.cmp(b)); + for (page_index, mut page) in pages.into_iter() { + let page_info = if book_state.begin == page_index { ">" } else { " " }; + let mut page_info = format!( + "{} page {} ({:?} first, {:?} last, {:?} remain): [ ", + page_info, page_index, page.first, page.last, page.remaining + ); + for i in 0..u32::MAX { + if let Some((_, processed, message)) = + page.peek_index(i.try_into().expect("std-only code")) + { + let msg = String::from_utf8_lossy(message.deref()); + if processed { + page_info.push('*'); + } + page_info.push_str(&format!("{:?}, ", msg)); + page.skip_first(true); + } else { + break + } + } + page_info.push_str("]\n"); + queue.push_str(&page_info); + } + info.push_str(&queue); + } + info + } + + /// Process a single message. + /// + /// The base weight of this function needs to be accounted for by the caller. `weight` is the + /// remaining weight to process the message. `overweight_limit` is the maximum weight that a + /// message can ever consume. Messages above this limit are marked as permanently overweight. + fn process_message_payload( + origin: MessageOriginOf, + page_index: PageIndex, + message_index: T::Size, + message: &[u8], + weight: &mut WeightMeter, + overweight_limit: Weight, + ) -> MessageExecutionStatus { + let hash = T::Hashing::hash(message); + use ProcessMessageError::Overweight; + match T::MessageProcessor::process_message(message, origin.clone(), weight.remaining()) { + Err(Overweight(w)) if w.any_gt(overweight_limit) => { + // Permanently overweight. + Self::deposit_event(Event::::OverweightEnqueued { + hash, + origin, + page_index, + message_index, + }); + MessageExecutionStatus::Overweight + }, + Err(Overweight(_)) => { + // Temporarily overweight - save progress and stop processing this + // queue. + MessageExecutionStatus::InsufficientWeight + }, + Err(error) => { + // Permanent error - drop + Self::deposit_event(Event::::ProcessingFailed { hash, origin, error }); + MessageExecutionStatus::Unprocessable + }, + Ok((success, weight_used)) => { + // Success + weight.defensive_saturating_accrue(weight_used); + let event = Event::::Processed { hash, origin, weight_used, success }; + Self::deposit_event(event); + MessageExecutionStatus::Processed + }, + } + } +} + +/// Provides a [`sp_core::Get`] to access the `MEL` of a [`codec::MaxEncodedLen`] type. +pub struct MaxEncodedLenOf(sp_std::marker::PhantomData); +impl Get for MaxEncodedLenOf { + fn get() -> u32 { + T::max_encoded_len() as u32 + } +} + +/// Calculates the maximum message length and exposed it through the [`codec::MaxEncodedLen`] trait. +pub struct MaxMessageLen( + sp_std::marker::PhantomData<(Origin, Size, HeapSize)>, +); +impl, HeapSize: Get> Get + for MaxMessageLen +{ + fn get() -> u32 { + (HeapSize::get().into()).saturating_sub(ItemHeader::::max_encoded_len() as u32) + } +} + +/// The maximal message length. +pub type MaxMessageLenOf = + MaxMessageLen, ::Size, ::HeapSize>; +/// The maximal encoded origin length. 
+pub type MaxOriginLenOf<T> = MaxEncodedLenOf<MessageOriginOf<T>>;
+/// The `MessageOrigin` of this pallet.
+pub type MessageOriginOf<T> = <<T as Config>::MessageProcessor as ProcessMessage>::Origin;
+/// The maximal heap size of a page.
+pub type HeapSizeU32Of<T> = IntoU32<<T as Config>::HeapSize, <T as Config>::Size>;
+/// The [`Page`] of this pallet.
+pub type PageOf<T> = Page<<T as Config>::Size, <T as Config>::HeapSize>;
+/// The [`BookState`] of this pallet.
+pub type BookStateOf<T> = BookState<MessageOriginOf<T>>;
+
+/// Converts a [`sp_core::Get`] which returns a type that can be cast into an `u32` into a `Get`
+/// which returns an `u32`.
+pub struct IntoU32<T, O>(sp_std::marker::PhantomData<(T, O)>);
+impl<T: Get<O>, O: Into<u32>> Get<u32> for IntoU32<T, O> {
+	fn get() -> u32 {
+		T::get().into()
+	}
+}
+
+impl<T: Config> ServiceQueues for Pallet<T> {
+	type OverweightMessageAddress = (MessageOriginOf<T>, PageIndex, T::Size);
+
+	fn service_queues(weight_limit: Weight) -> Weight {
+		// The maximum weight that processing a single message may take.
+		let overweight_limit = weight_limit;
+		let mut weight = WeightMeter::from_limit(weight_limit);
+
+		let mut next = match Self::bump_service_head(&mut weight) {
+			Some(h) => h,
+			None => return weight.consumed,
+		};
+		// The last queue that did not make any progress.
+		// The loop aborts as soon as it arrives at this queue again without making any progress
+		// on other queues in between.
+		let mut last_no_progress = None;
+
+		loop {
+			let (progressed, n) = Self::service_queue(next.clone(), &mut weight, overweight_limit);
+			next = match n {
+				Some(n) =>
+					if !progressed {
+						if last_no_progress == Some(n.clone()) {
+							break
+						}
+						if last_no_progress.is_none() {
+							last_no_progress = Some(next.clone())
+						}
+						n
+					} else {
+						last_no_progress = None;
+						n
+					},
+				None => break,
+			}
+		}
+		weight.consumed
+	}
+
+	/// Execute a single overweight message.
+	///
+	/// The weight limit must be enough for `execute_overweight` and the message execution itself.
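+	///
+	/// The overweight message address is best taken from a previously deposited
+	/// [`Event::OverweightEnqueued`]. A rough sketch of the intended flow:
+	///
+	/// ```ignore
+	/// // `origin`, `page_index` and `message_index` as deposited in an
+	/// // `Event::OverweightEnqueued { .. }`:
+	/// let addr = (origin, page_index, message_index);
+	/// let consumed = <Pallet<T> as ServiceQueues>::execute_overweight(weight_limit, addr)?;
+	/// ```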
+ fn execute_overweight( + weight_limit: Weight, + (message_origin, page, index): Self::OverweightMessageAddress, + ) -> Result { + let mut weight = WeightMeter::from_limit(weight_limit); + if !weight.check_accrue( + T::WeightInfo::execute_overweight_page_removed() + .max(T::WeightInfo::execute_overweight_page_updated()), + ) { + return Err(ExecuteOverweightError::InsufficientWeight) + } + + Pallet::::do_execute_overweight(message_origin, page, index, weight.remaining()).map_err( + |e| match e { + Error::::InsufficientWeight => ExecuteOverweightError::InsufficientWeight, + _ => ExecuteOverweightError::NotFound, + }, + ) + } +} + +impl EnqueueMessage> for Pallet { + type MaxMessageLen = + MaxMessageLen<::Origin, T::Size, T::HeapSize>; + + fn enqueue_message( + message: BoundedSlice, + origin: ::Origin, + ) { + Self::do_enqueue_message(&origin, message); + let book_state = BookStateFor::::get(&origin); + T::QueueChangeHandler::on_queue_changed(origin, book_state.message_count, book_state.size); + } + + fn enqueue_messages<'a>( + messages: impl Iterator>, + origin: ::Origin, + ) { + for message in messages { + Self::do_enqueue_message(&origin, message); + } + let book_state = BookStateFor::::get(&origin); + T::QueueChangeHandler::on_queue_changed(origin, book_state.message_count, book_state.size); + } + + fn sweep_queue(origin: MessageOriginOf) { + if !BookStateFor::::contains_key(&origin) { + return + } + let mut book_state = BookStateFor::::get(&origin); + book_state.begin = book_state.end; + if let Some(neighbours) = book_state.ready_neighbours.take() { + Self::ready_ring_unknit(&origin, neighbours); + } + BookStateFor::::insert(&origin, &book_state); + } + + fn footprint(origin: MessageOriginOf) -> Footprint { + let book_state = BookStateFor::::get(&origin); + Footprint { count: book_state.message_count, size: book_state.size } + } +} diff --git a/frame/message-queue/src/mock.rs b/frame/message-queue/src/mock.rs new file mode 100644 index 0000000000000..7159840d1c01b --- /dev/null +++ b/frame/message-queue/src/mock.rs @@ -0,0 +1,315 @@ +// This file is part of Substrate. + +// Copyright (C) 2022 Parity Technologies (UK) Ltd. +// SPDX-License-Identifier: Apache-2.0 + +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +//! Test helpers and runtime setup for the message queue pallet. 
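+//!
+//! A typical test then looks like this (a sketch mirroring the patterns in `tests.rs`):
+//!
+//! ```ignore
+//! new_test_ext::<Test>().execute_with(|| {
+//! 	MessageQueue::enqueue_message(msg("a"), MessageOrigin::Here);
+//! 	assert_eq!(MessageQueue::service_queues(1.into_weight()), 1.into_weight());
+//! 	assert_eq!(MessagesProcessed::take(), vec![(b"a".to_vec(), MessageOrigin::Here)]);
+//! });
+//! ```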
+ +#![cfg(test)] + +pub use super::mock_helpers::*; +use super::*; + +use crate as pallet_message_queue; +use frame_support::{ + parameter_types, + traits::{ConstU32, ConstU64}, +}; +use sp_core::H256; +use sp_runtime::{ + testing::Header, + traits::{BlakeTwo256, IdentityLookup}, +}; +use sp_std::collections::btree_map::BTreeMap; + +type UncheckedExtrinsic = frame_system::mocking::MockUncheckedExtrinsic; +type Block = frame_system::mocking::MockBlock; + +frame_support::construct_runtime!( + pub enum Test where + Block = Block, + NodeBlock = Block, + UncheckedExtrinsic = UncheckedExtrinsic, + { + System: frame_system::{Pallet, Call, Config, Storage, Event}, + MessageQueue: pallet_message_queue::{Pallet, Call, Storage, Event}, + } +); +parameter_types! { + pub BlockWeights: frame_system::limits::BlockWeights = + frame_system::limits::BlockWeights::simple_max(frame_support::weights::Weight::from_ref_time(1024)); +} +impl frame_system::Config for Test { + type BaseCallFilter = frame_support::traits::Everything; + type BlockWeights = (); + type BlockLength = (); + type DbWeight = (); + type RuntimeOrigin = RuntimeOrigin; + type Index = u64; + type BlockNumber = u64; + type Hash = H256; + type RuntimeCall = RuntimeCall; + type Hashing = BlakeTwo256; + type AccountId = u64; + type Lookup = IdentityLookup; + type Header = Header; + type RuntimeEvent = RuntimeEvent; + type BlockHashCount = ConstU64<250>; + type Version = (); + type PalletInfo = PalletInfo; + type AccountData = (); + type OnNewAccount = (); + type OnKilledAccount = (); + type SystemWeightInfo = (); + type SS58Prefix = (); + type OnSetCode = (); + type MaxConsumers = ConstU32<16>; +} +parameter_types! { + pub const HeapSize: u32 = 24; + pub const MaxStale: u32 = 2; + pub const ServiceWeight: Option = Some(Weight::from_parts(10, 10)); +} +impl Config for Test { + type RuntimeEvent = RuntimeEvent; + type WeightInfo = MockedWeightInfo; + type MessageProcessor = RecordingMessageProcessor; + type Size = u32; + type QueueChangeHandler = RecordingQueueChangeHandler; + type HeapSize = HeapSize; + type MaxStale = MaxStale; + type ServiceWeight = ServiceWeight; +} + +/// Mocked `WeightInfo` impl with allows to set the weight per call. +pub struct MockedWeightInfo; + +parameter_types! { + /// Storage for `MockedWeightInfo`, do not use directly. + pub static WeightForCall: BTreeMap = Default::default(); +} + +/// Set the return value for a function from the `WeightInfo` trait. +impl MockedWeightInfo { + /// Set the weight of a specific weight function. 
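+	///
+	/// For example, as the tests do it (for the mock `Test` runtime):
+	///
+	/// ```ignore
+	/// MockedWeightInfo::set_weight::<Test>("service_queue_base", Weight::MAX);
+	/// ```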
+	pub fn set_weight<T: Config>(call_name: &str, weight: Weight) {
+		let mut calls = WeightForCall::get();
+		calls.insert(call_name.into(), weight);
+		WeightForCall::set(calls);
+	}
+}
+
+impl crate::weights::WeightInfo for MockedWeightInfo {
+	fn reap_page() -> Weight {
+		WeightForCall::get().get("reap_page").copied().unwrap_or_default()
+	}
+	fn execute_overweight_page_updated() -> Weight {
+		WeightForCall::get()
+			.get("execute_overweight_page_updated")
+			.copied()
+			.unwrap_or_default()
+	}
+	fn execute_overweight_page_removed() -> Weight {
+		WeightForCall::get()
+			.get("execute_overweight_page_removed")
+			.copied()
+			.unwrap_or_default()
+	}
+	fn service_page_base_completion() -> Weight {
+		WeightForCall::get()
+			.get("service_page_base_completion")
+			.copied()
+			.unwrap_or_default()
+	}
+	fn service_page_base_no_completion() -> Weight {
+		WeightForCall::get()
+			.get("service_page_base_no_completion")
+			.copied()
+			.unwrap_or_default()
+	}
+	fn service_queue_base() -> Weight {
+		WeightForCall::get().get("service_queue_base").copied().unwrap_or_default()
+	}
+	fn bump_service_head() -> Weight {
+		WeightForCall::get().get("bump_service_head").copied().unwrap_or_default()
+	}
+	fn service_page_item() -> Weight {
+		WeightForCall::get().get("service_page_item").copied().unwrap_or_default()
+	}
+	fn ready_ring_knit() -> Weight {
+		WeightForCall::get().get("ready_ring_knit").copied().unwrap_or_default()
+	}
+	fn ready_ring_unknit() -> Weight {
+		WeightForCall::get().get("ready_ring_unknit").copied().unwrap_or_default()
+	}
+}
+
+parameter_types! {
+	pub static MessagesProcessed: Vec<(Vec<u8>, MessageOrigin)> = vec![];
+}
+
+/// A message processor which records all processed messages into [`MessagesProcessed`].
+pub struct RecordingMessageProcessor;
+impl ProcessMessage for RecordingMessageProcessor {
+	/// The transport from where a message originates.
+	type Origin = MessageOrigin;
+
+	/// Process the given message, using no more than `weight_limit` in weight to do so.
+	///
+	/// Consumes exactly `n` weight in all components if the message starts with `weight=n`, and
+	/// `1` otherwise. Errors if the given `weight_limit` is insufficient to process the message,
+	/// or if the message is `badformat`, `corrupt` or `unsupported`, with the respective error.
+	fn process_message(
+		message: &[u8],
+		origin: Self::Origin,
+		weight_limit: Weight,
+	) -> Result<(bool, Weight), ProcessMessageError> {
+		processing_message(message)?;
+
+		let weight = if message.starts_with(&b"weight="[..]) {
+			let mut w: u64 = 0;
+			for &c in &message[7..] {
+				if (b'0'..=b'9').contains(&c) {
+					w = w * 10 + (c - b'0') as u64;
+				} else {
+					break
+				}
+			}
+			w
+		} else {
+			1
+		};
+		let weight = Weight::from_parts(weight, weight);
+
+		if weight.all_lte(weight_limit) {
+			let mut m = MessagesProcessed::get();
+			m.push((message.to_vec(), origin));
+			MessagesProcessed::set(m);
+			Ok((true, weight))
+		} else {
+			Err(ProcessMessageError::Overweight(weight))
+		}
+	}
+}
+
+/// Processes a mocked message. Messages that end with `badformat`, `corrupt` or `unsupported`
+/// will fail with the respective error.
+fn processing_message(msg: &[u8]) -> Result<(), ProcessMessageError> {
+	let msg = String::from_utf8_lossy(msg);
+	if msg.ends_with("badformat") {
+		Err(ProcessMessageError::BadFormat)
+	} else if msg.ends_with("corrupt") {
+		Err(ProcessMessageError::Corrupt)
+	} else if msg.ends_with("unsupported") {
+		Err(ProcessMessageError::Unsupported)
+	} else {
+		Ok(())
+	}
+}
+
+parameter_types!
{ + pub static NumMessagesProcessed: usize = 0; + pub static NumMessagesErrored: usize = 0; +} + +/// Similar to [`RecordingMessageProcessor`] but only counts the number of messages processed and +/// does always consume one weight per message. +/// +/// The [`RecordingMessageProcessor`] is a bit too slow for the integration tests. +pub struct CountingMessageProcessor; +impl ProcessMessage for CountingMessageProcessor { + type Origin = MessageOrigin; + + fn process_message( + message: &[u8], + _origin: Self::Origin, + weight_limit: Weight, + ) -> Result<(bool, Weight), ProcessMessageError> { + if let Err(e) = processing_message(message) { + NumMessagesErrored::set(NumMessagesErrored::get() + 1); + return Err(e) + } + let weight = Weight::from_parts(1, 1); + + if weight.all_lte(weight_limit) { + NumMessagesProcessed::set(NumMessagesProcessed::get() + 1); + Ok((true, weight)) + } else { + Err(ProcessMessageError::Overweight(weight)) + } + } +} + +parameter_types! { + /// Storage for `RecordingQueueChangeHandler`, do not use directly. + pub static QueueChanges: Vec<(MessageOrigin, u64, u64)> = vec![]; +} + +/// Records all queue changes into [`QueueChanges`]. +pub struct RecordingQueueChangeHandler; +impl OnQueueChanged for RecordingQueueChangeHandler { + fn on_queue_changed(id: MessageOrigin, items_count: u64, items_size: u64) { + QueueChanges::mutate(|cs| cs.push((id, items_count, items_size))); + } +} + +/// Create new test externalities. +/// +/// Is generic since it is used by the unit test, integration tests and benchmarks. +pub fn new_test_ext() -> sp_io::TestExternalities +where + ::BlockNumber: From, +{ + sp_tracing::try_init_simple(); + WeightForCall::take(); + QueueChanges::take(); + NumMessagesErrored::take(); + let t = frame_system::GenesisConfig::default().build_storage::().unwrap(); + let mut ext = sp_io::TestExternalities::new(t); + ext.execute_with(|| frame_system::Pallet::::set_block_number(1.into())); + ext +} + +/// Set the weight of a specific weight function. +pub fn set_weight(name: &str, w: Weight) { + MockedWeightInfo::set_weight::(name, w); +} + +/// Assert that exactly these pages are present. Assumes `Here` origin. +pub fn assert_pages(indices: &[u32]) { + assert_eq!(Pages::::iter().count(), indices.len()); + for i in indices { + assert!(Pages::::contains_key(MessageOrigin::Here, i)); + } +} + +/// Build a ring with three queues: `Here`, `There` and `Everywhere(0)`. +pub fn build_triple_ring() { + use MessageOrigin::*; + build_ring::(&[Here, There, Everywhere(0)]) +} + +/// Shim to get rid of the annoying `::` everywhere. +pub fn assert_ring(queues: &[MessageOrigin]) { + super::mock_helpers::assert_ring::(queues); +} + +pub fn knit(queue: &MessageOrigin) { + super::mock_helpers::knit::(queue); +} + +pub fn unknit(queue: &MessageOrigin) { + super::mock_helpers::unknit::(queue); +} diff --git a/frame/message-queue/src/mock_helpers.rs b/frame/message-queue/src/mock_helpers.rs new file mode 100644 index 0000000000000..f12cf4cc41073 --- /dev/null +++ b/frame/message-queue/src/mock_helpers.rs @@ -0,0 +1,186 @@ +// This file is part of Substrate. + +// Copyright (C) 2022 Parity Technologies (UK) Ltd. +// SPDX-License-Identifier: Apache-2.0 + +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//! Std setup helpers for testing and benchmarking.
+//!
+//! Cannot be put into mock.rs since benchmarks require no-std and mock.rs is std.
+
+use crate::*;
+use frame_support::traits::Defensive;
+
+/// Converts `Self` into a `Weight` by using `Self` for all components.
+pub trait IntoWeight {
+	fn into_weight(self) -> Weight;
+}
+
+impl IntoWeight for u64 {
+	fn into_weight(self) -> Weight {
+		Weight::from_parts(self, self)
+	}
+}
+
+/// Mocked message origin for testing.
+#[derive(Copy, Clone, Eq, PartialEq, Encode, Decode, MaxEncodedLen, TypeInfo, Debug)]
+pub enum MessageOrigin {
+	Here,
+	There,
+	Everywhere(u32),
+}
+
+impl From<u32> for MessageOrigin {
+	fn from(i: u32) -> Self {
+		Self::Everywhere(i)
+	}
+}
+
+/// Processes any message and consumes (1, 1) weight per message.
+pub struct NoopMessageProcessor;
+impl ProcessMessage for NoopMessageProcessor {
+	type Origin = MessageOrigin;
+
+	fn process_message(
+		_message: &[u8],
+		_origin: Self::Origin,
+		weight_limit: Weight,
+	) -> Result<(bool, Weight), ProcessMessageError> {
+		let weight = Weight::from_parts(1, 1);
+
+		if weight.all_lte(weight_limit) {
+			Ok((true, weight))
+		} else {
+			Err(ProcessMessageError::Overweight(weight))
+		}
+	}
+}
+
+/// Create a message from the given data.
+pub fn msg<N: Get<u32>>(x: &'static str) -> BoundedSlice<'static, u8, N> {
+	BoundedSlice::defensive_truncate_from(x.as_bytes())
+}
+
+pub fn vmsg(x: &'static str) -> Vec<u8> {
+	x.as_bytes().to_vec()
+}
+
+/// Create a page from a single message.
+pub fn page<T: Config>(msg: &[u8]) -> PageOf<T> {
+	PageOf::<T>::from_message::<T>(msg.try_into().unwrap())
+}
+
+pub fn single_page_book<T: Config>() -> BookStateOf<T> {
+	BookState { begin: 0, end: 1, count: 1, ready_neighbours: None, message_count: 0, size: 0 }
+}
+
+pub fn empty_book<T: Config>() -> BookStateOf<T> {
+	BookState { begin: 0, end: 1, count: 1, ready_neighbours: None, message_count: 0, size: 0 }
+}
+
+/// Returns a full page of messages with their index as payload and the number of messages.
+pub fn full_page<T: Config>() -> (PageOf<T>, usize) {
+	let mut msgs = 0;
+	let mut page = PageOf::<T>::default();
+	for i in 0..u32::MAX {
+		let r = i.using_encoded(|d| page.try_append_message::<T>(d.try_into().unwrap()));
+		if r.is_err() {
+			break
+		} else {
+			msgs += 1;
+		}
+	}
+	assert!(msgs > 0, "page must hold at least one message");
+	(page, msgs)
+}
+
+/// Returns a book state for a single ready page, taking message count and size from `page`.
+pub fn book_for<T: Config>(page: &PageOf<T>) -> BookStateOf<T> {
+	BookState {
+		count: 1,
+		begin: 0,
+		end: 1,
+		ready_neighbours: None,
+		message_count: page.remaining.into() as u64,
+		size: page.remaining_size.into() as u64,
+	}
+}
+
+/// Assert the last event that was emitted.
+#[cfg(any(feature = "std", feature = "runtime-benchmarks", test))]
+pub fn assert_last_event<T: Config>(generic_event: <T as Config>::RuntimeEvent) {
+	assert!(
+		!frame_system::Pallet::<T>::block_number().is_zero(),
+		"The genesis block has no events"
+	);
+	frame_system::Pallet::<T>::assert_last_event(generic_event.into());
+}
+
+/// Provide a setup for `bump_service_head`.
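+///
+/// It makes `current` the `ServiceHead` and knits it into a mocked ready ring whose other member
+/// is `next`, so that bumping the service head from `current` must yield `next`. A sketch of the
+/// intended use (with the mocked origins from this module):
+///
+/// ```ignore
+/// setup_bump_service_head::<Test>(MessageOrigin::Here, MessageOrigin::There);
+/// assert_eq!(ServiceHead::<Test>::get(), Some(MessageOrigin::Here));
+/// ```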
+pub fn setup_bump_service_head( + current: <::MessageProcessor as ProcessMessage>::Origin, + next: <::MessageProcessor as ProcessMessage>::Origin, +) { + let mut book = single_page_book::(); + book.ready_neighbours = Some(Neighbours::> { prev: next.clone(), next }); + ServiceHead::::put(&current); + BookStateFor::::insert(&current, &book); +} + +/// Knit a queue into the ready-ring and write it back to storage. +pub fn knit(o: &<::MessageProcessor as ProcessMessage>::Origin) { + let mut b = BookStateFor::::get(o); + b.ready_neighbours = crate::Pallet::::ready_ring_knit(o).ok().defensive(); + BookStateFor::::insert(o, b); +} + +/// Unknit a queue from the ready-ring and write it back to storage. +pub fn unknit(o: &<::MessageProcessor as ProcessMessage>::Origin) { + let mut b = BookStateFor::::get(o); + crate::Pallet::::ready_ring_unknit(o, b.ready_neighbours.unwrap()); + b.ready_neighbours = None; + BookStateFor::::insert(o, b); +} + +/// Build a ready ring from the given queues. +pub fn build_ring( + queues: &[<::MessageProcessor as ProcessMessage>::Origin], +) { + for queue in queues { + BookStateFor::::insert(queue, empty_book::()); + } + for queue in queues { + knit::(queue); + } + assert_ring::(queues); +} + +/// Check that the Ready Ring consists of `queues` in that exact order. +/// +/// Also check that all backlinks are valid and that the first element is the service head. +pub fn assert_ring( + queues: &[<::MessageProcessor as ProcessMessage>::Origin], +) { + for (i, origin) in queues.iter().enumerate() { + let book = BookStateFor::::get(origin); + assert_eq!( + book.ready_neighbours, + Some(Neighbours { + prev: queues[(i + queues.len() - 1) % queues.len()].clone(), + next: queues[(i + 1) % queues.len()].clone(), + }) + ); + } + assert_eq!(ServiceHead::::get(), queues.first().cloned()); +} diff --git a/frame/message-queue/src/tests.rs b/frame/message-queue/src/tests.rs new file mode 100644 index 0000000000000..103fb690ddba7 --- /dev/null +++ b/frame/message-queue/src/tests.rs @@ -0,0 +1,1092 @@ +// This file is part of Substrate. + +// Copyright (C) 2019-2022 Parity Technologies (UK) Ltd. +// SPDX-License-Identifier: Apache-2.0 + +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +//! Tests for Message Queue Pallet. + +#![cfg(test)] + +use crate::{mock::*, *}; + +use frame_support::{assert_noop, assert_ok, assert_storage_noop, StorageNoopGuard}; +use rand::{rngs::StdRng, Rng, SeedableRng}; + +#[test] +fn mocked_weight_works() { + new_test_ext::().execute_with(|| { + assert!(::WeightInfo::service_queue_base().is_zero()); + }); + new_test_ext::().execute_with(|| { + set_weight("service_queue_base", Weight::MAX); + assert_eq!(::WeightInfo::service_queue_base(), Weight::MAX); + }); + // The externalities reset it.
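// ---------------------------------------------------------------------------
// [Editor's illustrative sketch, not part of the patch] `assert_ring` above
// relies on a modular-arithmetic invariant: in a ready ring of `k` queues,
// queue `i` must point at `prev = (i + k - 1) % k` and `next = (i + 1) % k`.
// A self-contained restatement in plain Rust (no pallet types assumed):
fn ring_neighbours(i: usize, k: usize) -> (usize, usize) {
    // Adding `k - 1` instead of subtracting 1 avoids underflow at `i == 0`.
    ((i + k - 1) % k, (i + 1) % k)
}
// For the triple ring [Here, There, Everywhere(0)], `There` (i = 1) gets
// prev = Here and next = Everywhere(0); with a single queue (k = 1) the ring
// wraps onto itself and prev = next = the queue.
// ---------------------------------------------------------------------------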
+ new_test_ext::().execute_with(|| { + assert!(::WeightInfo::service_queue_base().is_zero()); + }); +} + +#[test] +fn enqueue_within_one_page_works() { + new_test_ext::().execute_with(|| { + use MessageOrigin::*; + MessageQueue::enqueue_message(msg("a"), Here); + MessageQueue::enqueue_message(msg("b"), Here); + MessageQueue::enqueue_message(msg("c"), Here); + assert_eq!(MessageQueue::service_queues(2.into_weight()), 2.into_weight()); + assert_eq!(MessagesProcessed::take(), vec![(b"a".to_vec(), Here), (b"b".to_vec(), Here)]); + + assert_eq!(MessageQueue::service_queues(2.into_weight()), 1.into_weight()); + assert_eq!(MessagesProcessed::take(), vec![(b"c".to_vec(), Here)]); + + assert_eq!(MessageQueue::service_queues(2.into_weight()), 0.into_weight()); + assert!(MessagesProcessed::get().is_empty()); + + MessageQueue::enqueue_messages([msg("a"), msg("b"), msg("c")].into_iter(), There); + + assert_eq!(MessageQueue::service_queues(2.into_weight()), 2.into_weight()); + assert_eq!( + MessagesProcessed::take(), + vec![(b"a".to_vec(), There), (b"b".to_vec(), There),] + ); + + MessageQueue::enqueue_message(msg("d"), Everywhere(1)); + + assert_eq!(MessageQueue::service_queues(2.into_weight()), 2.into_weight()); + assert_eq!(MessageQueue::service_queues(2.into_weight()), 0.into_weight()); + assert_eq!( + MessagesProcessed::take(), + vec![(b"c".to_vec(), There), (b"d".to_vec(), Everywhere(1))] + ); + }); +} + +#[test] +fn queue_priority_retains() { + new_test_ext::().execute_with(|| { + use MessageOrigin::*; + assert_ring(&[]); + MessageQueue::enqueue_message(msg("a"), Everywhere(1)); + assert_ring(&[Everywhere(1)]); + MessageQueue::enqueue_message(msg("b"), Everywhere(2)); + assert_ring(&[Everywhere(1), Everywhere(2)]); + MessageQueue::enqueue_message(msg("c"), Everywhere(3)); + assert_ring(&[Everywhere(1), Everywhere(2), Everywhere(3)]); + MessageQueue::enqueue_message(msg("d"), Everywhere(2)); + assert_ring(&[Everywhere(1), Everywhere(2), Everywhere(3)]); + // service head is 1, it will process a, leaving service head at 2. it also processes b but + // does not empty queue 2, so service head will end at 2. + assert_eq!(MessageQueue::service_queues(2.into_weight()), 2.into_weight()); + assert_eq!( + MessagesProcessed::take(), + vec![(vmsg("a"), Everywhere(1)), (vmsg("b"), Everywhere(2)),] + ); + assert_ring(&[Everywhere(2), Everywhere(3)]); + // service head is 2, so will process d first, then c. + assert_eq!(MessageQueue::service_queues(2.into_weight()), 2.into_weight()); + assert_eq!( + MessagesProcessed::get(), + vec![(vmsg("d"), Everywhere(2)), (vmsg("c"), Everywhere(3)),] + ); + assert_ring(&[]); + }); +} + +#[test] +fn queue_priority_reset_once_serviced() { + new_test_ext::().execute_with(|| { + use MessageOrigin::*; + MessageQueue::enqueue_message(msg("a"), Everywhere(1)); + MessageQueue::enqueue_message(msg("b"), Everywhere(2)); + MessageQueue::enqueue_message(msg("c"), Everywhere(3)); + // service head is 1, it will process a, leaving service head at 2. it also processes b and + // empties queue 2, so service head will end at 3. + assert_eq!(MessageQueue::service_queues(2.into_weight()), 2.into_weight()); + MessageQueue::enqueue_message(msg("d"), Everywhere(2)); + // service head is 3, so will process c first, then d.
+ assert_eq!(MessageQueue::service_queues(2.into_weight()), 2.into_weight()); + + assert_eq!( + MessagesProcessed::get(), + vec![ + (vmsg("a"), Everywhere(1)), + (vmsg("b"), Everywhere(2)), + (vmsg("c"), Everywhere(3)), + (vmsg("d"), Everywhere(2)), + ] + ); + }); +} + +#[test] +fn service_queues_basic_works() { + use MessageOrigin::*; + new_test_ext::().execute_with(|| { + MessageQueue::enqueue_messages(vec![msg("a"), msg("ab"), msg("abc")].into_iter(), Here); + MessageQueue::enqueue_messages(vec![msg("x"), msg("xy"), msg("xyz")].into_iter(), There); + assert_eq!(QueueChanges::take(), vec![(Here, 3, 6), (There, 3, 6)]); + + // Service one message from `Here`. + assert_eq!(MessageQueue::service_queues(1.into_weight()), 1.into_weight()); + assert_eq!(MessagesProcessed::take(), vec![(vmsg("a"), Here)]); + assert_eq!(QueueChanges::take(), vec![(Here, 2, 5)]); + + // Service one message from `There`. + ServiceHead::::set(There.into()); + assert_eq!(MessageQueue::service_queues(1.into_weight()), 1.into_weight()); + assert_eq!(MessagesProcessed::take(), vec![(vmsg("x"), There)]); + assert_eq!(QueueChanges::take(), vec![(There, 2, 5)]); + + // Service the remaining from `Here`. + ServiceHead::::set(Here.into()); + assert_eq!(MessageQueue::service_queues(2.into_weight()), 2.into_weight()); + assert_eq!(MessagesProcessed::take(), vec![(vmsg("ab"), Here), (vmsg("abc"), Here)]); + assert_eq!(QueueChanges::take(), vec![(Here, 0, 0)]); + + // Service all remaining messages. + assert_eq!(MessageQueue::service_queues(Weight::MAX), 2.into_weight()); + assert_eq!(MessagesProcessed::take(), vec![(vmsg("xy"), There), (vmsg("xyz"), There)]); + assert_eq!(QueueChanges::take(), vec![(There, 0, 0)]); + }); +} + +#[test] +fn service_queues_failing_messages_works() { + use MessageOrigin::*; + new_test_ext::().execute_with(|| { + set_weight("service_page_item", 1.into_weight()); + MessageQueue::enqueue_message(msg("badformat"), Here); + MessageQueue::enqueue_message(msg("corrupt"), Here); + MessageQueue::enqueue_message(msg("unsupported"), Here); + // Starts with three pages. + assert_pages(&[0, 1, 2]); + + assert_eq!(MessageQueue::service_queues(1.into_weight()), 1.into_weight()); + assert_last_event::( + Event::ProcessingFailed { + hash: ::Hashing::hash(b"badformat"), + origin: MessageOrigin::Here, + error: ProcessMessageError::BadFormat, + } + .into(), + ); + assert_eq!(MessageQueue::service_queues(1.into_weight()), 1.into_weight()); + assert_last_event::( + Event::ProcessingFailed { + hash: ::Hashing::hash(b"corrupt"), + origin: MessageOrigin::Here, + error: ProcessMessageError::Corrupt, + } + .into(), + ); + assert_eq!(MessageQueue::service_queues(1.into_weight()), 1.into_weight()); + assert_last_event::( + Event::ProcessingFailed { + hash: ::Hashing::hash(b"unsupported"), + origin: MessageOrigin::Here, + error: ProcessMessageError::Unsupported, + } + .into(), + ); + // All pages removed. + assert_pages(&[]); + }); +} + +#[test] +fn reap_page_permanent_overweight_works() { + use MessageOrigin::*; + new_test_ext::().execute_with(|| { + // Create 10 pages more than the stale limit. + let n = (MaxStale::get() + 10) as usize; + for _ in 0..n { + MessageQueue::enqueue_message(msg("weight=2"), Here); + } + assert_eq!(Pages::::iter().count(), n); + assert_eq!(QueueChanges::take().len(), n); + // Mark all pages as stale since their message is permanently overweight. + MessageQueue::service_queues(1.into_weight()); + + // Check that we can reap everything below the watermark. 
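// ---------------------------------------------------------------------------
// [Editor's illustrative sketch, not part of the patch] The loop below derives
// a reaping watermark. Restated as a stand-alone function under the same
// assumptions as the test (integer arithmetic, `stale_pages > max_stale`):
fn reap_watermark(begin: u32, stale_pages: u32, max_stale: u32) -> u32 {
    // How far the stale count overshoots the allowance, at least 1.
    let overflow = stale_pages.saturating_sub(max_stale + 1) + 1;
    // The tolerated backlog shrinks as the overshoot grows, but never drops
    // below `max_stale` itself.
    let backlog = (max_stale * max_stale / overflow).max(max_stale);
    begin.saturating_sub(backlog)
}
// Worked example with `max_stale = 2` and 12 stale pages: overflow = 10,
// backlog = max(4 / 10, 2) = 2, so only pages below `begin - 2` are reapable.
// ---------------------------------------------------------------------------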
+ let max_stale = MaxStale::get(); + for i in 0..n as u32 { + let b = BookStateFor::::get(Here); + let stale_pages = n as u32 - i; + let overflow = stale_pages.saturating_sub(max_stale + 1) + 1; + let backlog = (max_stale * max_stale / overflow).max(max_stale); + let watermark = b.begin.saturating_sub(backlog); + + if i >= watermark { + break + } + assert_ok!(MessageQueue::do_reap_page(&Here, i)); + assert_eq!(QueueChanges::take(), vec![(Here, b.message_count - 1, b.size - 8)]); + } + + // Cannot reap any more pages. + for (o, i, _) in Pages::::iter() { + assert_noop!(MessageQueue::do_reap_page(&o, i), Error::::NotReapable); + assert!(QueueChanges::take().is_empty()); + } + }); +} + +#[test] +fn reaping_overweight_fails_properly() { + use MessageOrigin::*; + assert_eq!(MaxStale::get(), 2, "The stale limit is two"); + + new_test_ext::().execute_with(|| { + // page 0 + MessageQueue::enqueue_message(msg("weight=4"), Here); + MessageQueue::enqueue_message(msg("a"), Here); + // page 1 + MessageQueue::enqueue_message(msg("weight=4"), Here); + MessageQueue::enqueue_message(msg("b"), Here); + // page 2 + MessageQueue::enqueue_message(msg("weight=4"), Here); + MessageQueue::enqueue_message(msg("c"), Here); + // page 3 + MessageQueue::enqueue_message(msg("bigbig 1"), Here); + // page 4 + MessageQueue::enqueue_message(msg("bigbig 2"), Here); + // page 5 + MessageQueue::enqueue_message(msg("bigbig 3"), Here); + // Double-check that exactly these pages exist. + assert_pages(&[0, 1, 2, 3, 4, 5]); + + assert_eq!(MessageQueue::service_queues(2.into_weight()), 2.into_weight()); + assert_eq!(MessagesProcessed::take(), vec![(vmsg("a"), Here), (vmsg("b"), Here)]); + // 2 stale now. + + // Nothing reapable yet, because we haven't hit the stale limit. + for (o, i, _) in Pages::::iter() { + assert_noop!(MessageQueue::do_reap_page(&o, i), Error::::NotReapable); + } + assert_pages(&[0, 1, 2, 3, 4, 5]); + + assert_eq!(MessageQueue::service_queues(1.into_weight()), 1.into_weight()); + assert_eq!(MessagesProcessed::take(), vec![(vmsg("c"), Here)]); + // 3 stale now: we can reap something up to 4 pages back in history. + + assert_eq!(MessageQueue::service_queues(1.into_weight()), 1.into_weight()); + assert_eq!(MessagesProcessed::take(), vec![(vmsg("bigbig 1"), Here)]); + + // Nothing reapable yet, because we haven't hit the stale limit. + for (o, i, _) in Pages::::iter() { + assert_noop!(MessageQueue::do_reap_page(&o, i), Error::::NotReapable); + } + assert_pages(&[0, 1, 2, 4, 5]); + + assert_eq!(MessageQueue::service_queues(1.into_weight()), 1.into_weight()); + assert_eq!(MessagesProcessed::take(), vec![(vmsg("bigbig 2"), Here)]); + assert_pages(&[0, 1, 2, 5]); + + // First is now reapable as it is too far behind the first ready page (5). + assert_ok!(MessageQueue::do_reap_page(&Here, 0)); + // Others not reapable yet, because we haven't hit the stale limit. + for (o, i, _) in Pages::::iter() { + assert_noop!(MessageQueue::do_reap_page(&o, i), Error::::NotReapable); + } + assert_pages(&[1, 2, 5]); + + assert_eq!(MessageQueue::service_queues(1.into_weight()), 1.into_weight()); + assert_eq!(MessagesProcessed::take(), vec![(vmsg("bigbig 3"), Here)]); + + assert_noop!(MessageQueue::do_reap_page(&Here, 0), Error::::NoPage); + assert_noop!(MessageQueue::do_reap_page(&Here, 3), Error::::NoPage); + assert_noop!(MessageQueue::do_reap_page(&Here, 4), Error::::NoPage); + // Still not reapable, since the number of stale pages is only 2.
+ for (o, i, _) in Pages::::iter() { + assert_noop!(MessageQueue::do_reap_page(&o, i), Error::::NotReapable); + } + }); +} + +#[test] +fn service_queue_bails() { + // Not enough weight for `service_queue_base`. + new_test_ext::().execute_with(|| { + set_weight("service_queue_base", 2.into_weight()); + let mut meter = WeightMeter::from_limit(1.into_weight()); + + assert_storage_noop!(MessageQueue::service_queue(0u32.into(), &mut meter, Weight::MAX)); + assert!(meter.consumed.is_zero()); + }); + // Not enough weight for `ready_ring_unknit`. + new_test_ext::().execute_with(|| { + set_weight("ready_ring_unknit", 2.into_weight()); + let mut meter = WeightMeter::from_limit(1.into_weight()); + + assert_storage_noop!(MessageQueue::service_queue(0u32.into(), &mut meter, Weight::MAX)); + assert!(meter.consumed.is_zero()); + }); + // Not enough weight for `service_queue_base` and `ready_ring_unknit`. + new_test_ext::().execute_with(|| { + set_weight("service_queue_base", 2.into_weight()); + set_weight("ready_ring_unknit", 2.into_weight()); + + let mut meter = WeightMeter::from_limit(3.into_weight()); + assert_storage_noop!(MessageQueue::service_queue(0.into(), &mut meter, Weight::MAX)); + assert!(meter.consumed.is_zero()); + }); +} + +#[test] +fn service_page_works() { + use super::integration_test::Test; // Run with larger page size. + use MessageOrigin::*; + use PageExecutionStatus::*; + new_test_ext::().execute_with(|| { + set_weight("service_page_base_completion", 2.into_weight()); + set_weight("service_page_item", 3.into_weight()); + + let (page, mut msgs) = full_page::(); + assert!(msgs >= 10, "pre-condition: need at least 10 msgs per page"); + let mut book = book_for::(&page); + Pages::::insert(Here, 0, page); + + // Call it a few times each with a random weight limit. + let mut rng = rand::rngs::StdRng::seed_from_u64(42); + while msgs > 0 { + let process = rng.gen_range(0..=msgs); + msgs -= process; + + // Enough weight to process `process` messages. + let mut meter = WeightMeter::from_limit(((2 + (3 + 1) * process) as u64).into_weight()); + System::reset_events(); + let (processed, status) = + crate::Pallet::::service_page(&Here, &mut book, &mut meter, Weight::MAX); + assert_eq!(processed as usize, process); + assert_eq!(NumMessagesProcessed::take(), process); + assert_eq!(System::events().len(), process); + if msgs == 0 { + assert_eq!(status, NoMore); + } else { + assert_eq!(status, Bailed); + } + } + assert!(!Pages::::contains_key(Here, 0), "The page got removed"); + }); +} + +// `service_page` does nothing when called with an insufficient weight limit. +#[test] +fn service_page_bails() { + // Not enough weight for `service_page_base_completion`. + new_test_ext::().execute_with(|| { + set_weight("service_page_base_completion", 2.into_weight()); + let mut meter = WeightMeter::from_limit(1.into_weight()); + + let (page, _) = full_page::(); + let mut book = book_for::(&page); + Pages::::insert(MessageOrigin::Here, 0, page); + + assert_storage_noop!(MessageQueue::service_page( + &MessageOrigin::Here, + &mut book, + &mut meter, + Weight::MAX + )); + assert!(meter.consumed.is_zero()); + }); + // Not enough weight for `service_page_base_no_completion`. 
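// ---------------------------------------------------------------------------
// [Editor's illustrative sketch, not part of the patch] The bail tests here
// combine two helpers: `assert_storage_noop!`, which wraps an expression, and
// `StorageNoopGuard`, which checks via RAII that the storage root is unchanged
// when it is dropped. A minimal sketch of the guard pattern:
fn assert_no_storage_writes(f: impl FnOnce()) {
    let _guard = frame_support::StorageNoopGuard::default();
    f();
    // `_guard` drops here and panics if `f` wrote to storage.
}
// The block below repeats the same bail check for
// `service_page_base_no_completion`.
// ---------------------------------------------------------------------------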
+ new_test_ext::().execute_with(|| { + set_weight("service_page_base_no_completion", 2.into_weight()); + let mut meter = WeightMeter::from_limit(1.into_weight()); + + let (page, _) = full_page::(); + let mut book = book_for::(&page); + Pages::::insert(MessageOrigin::Here, 0, page); + + assert_storage_noop!(MessageQueue::service_page( + &MessageOrigin::Here, + &mut book, + &mut meter, + Weight::MAX + )); + assert!(meter.consumed.is_zero()); + }); +} + +#[test] +fn service_page_item_bails() { + new_test_ext::().execute_with(|| { + let _guard = StorageNoopGuard::default(); + let (mut page, _) = full_page::(); + let mut weight = WeightMeter::from_limit(10.into_weight()); + let overweight_limit = 10.into_weight(); + set_weight("service_page_item", 11.into_weight()); + + assert_eq!( + MessageQueue::service_page_item( + &MessageOrigin::Here, + 0, + &mut book_for::(&page), + &mut page, + &mut weight, + overweight_limit, + ), + ItemExecutionStatus::Bailed + ); + }); +} + +#[test] +fn bump_service_head_works() { + use MessageOrigin::*; + new_test_ext::().execute_with(|| { + // Create a ready ring with three queues. + BookStateFor::::insert(Here, empty_book::()); + knit(&Here); + BookStateFor::::insert(There, empty_book::()); + knit(&There); + BookStateFor::::insert(Everywhere(0), empty_book::()); + knit(&Everywhere(0)); + + // Bump 99 times. + for i in 0..99 { + let current = MessageQueue::bump_service_head(&mut WeightMeter::max_limit()).unwrap(); + assert_eq!(current, [Here, There, Everywhere(0)][i % 3]); + } + + // The ready ring is intact and the service head is still `Here`. + assert_ring(&[Here, There, Everywhere(0)]); + }); +} + +/// `bump_service_head` does nothing when called with an insufficient weight limit. +#[test] +fn bump_service_head_bails() { + new_test_ext::().execute_with(|| { + set_weight("bump_service_head", 2.into_weight()); + setup_bump_service_head::(0.into(), 10.into()); + + let _guard = StorageNoopGuard::default(); + let mut meter = WeightMeter::from_limit(1.into_weight()); + assert!(MessageQueue::bump_service_head(&mut meter).is_none()); + assert_eq!(meter.consumed, 0.into_weight()); + }); +} + +#[test] +fn bump_service_head_trivial_works() { + new_test_ext::().execute_with(|| { + set_weight("bump_service_head", 2.into_weight()); + let mut meter = WeightMeter::max_limit(); + + assert_eq!(MessageQueue::bump_service_head(&mut meter), None, "Cannot bump"); + assert_eq!(meter.consumed, 2.into_weight()); + + setup_bump_service_head::(0.into(), 1.into()); + + assert_eq!(MessageQueue::bump_service_head(&mut meter), Some(0.into())); + assert_eq!(ServiceHead::::get().unwrap(), 1.into(), "Bumped the head"); + assert_eq!(meter.consumed, 4.into_weight()); + + assert_eq!(MessageQueue::bump_service_head(&mut meter), None, "Cannot bump"); + assert_eq!(meter.consumed, 6.into_weight()); + }); +} + +#[test] +fn bump_service_head_no_head_noops() { + use MessageOrigin::*; + new_test_ext::().execute_with(|| { + // Create a ready ring with three queues. + BookStateFor::::insert(Here, empty_book::()); + knit(&Here); + BookStateFor::::insert(There, empty_book::()); + knit(&There); + BookStateFor::::insert(Everywhere(0), empty_book::()); + knit(&Everywhere(0)); + + // But remove the service head. + ServiceHead::::kill(); + + // Nothing happens. 
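// ---------------------------------------------------------------------------
// [Editor's illustrative sketch, not part of the patch] The `bump_service_head`
// tests meter weight through `WeightMeter`. Assuming the `sp_weights` API this
// PR introduces (`from_limit`, `check_accrue`, a public `consumed` field), the
// recurring pattern is "consume if it fits, otherwise bail":
fn try_consume(meter: &mut WeightMeter, cost: Weight) -> bool {
    // `check_accrue` adds `cost` to `consumed` only if the limit still holds;
    // on failure it leaves the meter untouched so the caller can bail out.
    meter.check_accrue(cost)
}
// E.g. a meter limited to (1, 1) rejects a (2, 2) bump, which is why
// `bump_service_head_bails` above observes `consumed == 0`.
// (The storage-noop assertion for the removed head follows below.)
// ---------------------------------------------------------------------------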
+ assert_storage_noop!(MessageQueue::bump_service_head(&mut WeightMeter::max_limit())); + }); +} + +#[test] +fn service_page_item_consumes_correct_weight() { + new_test_ext::().execute_with(|| { + let mut page = page::(b"weight=3"); + let mut weight = WeightMeter::from_limit(10.into_weight()); + let overweight_limit = 0.into_weight(); + set_weight("service_page_item", 2.into_weight()); + + assert_eq!( + MessageQueue::service_page_item( + &MessageOrigin::Here, + 0, + &mut book_for::(&page), + &mut page, + &mut weight, + overweight_limit + ), + ItemExecutionStatus::Executed(true) + ); + assert_eq!(weight.consumed, 5.into_weight()); + }); +} + +/// `service_page_item` skips a permanently `Overweight` message and marks it as `unprocessed`. +#[test] +fn service_page_item_skips_perm_overweight_message() { + new_test_ext::().execute_with(|| { + let mut page = page::(b"TooMuch"); + let mut weight = WeightMeter::from_limit(2.into_weight()); + let overweight_limit = 0.into_weight(); + set_weight("service_page_item", 2.into_weight()); + + assert_eq!( + crate::Pallet::::service_page_item( + &MessageOrigin::Here, + 0, + &mut book_for::(&page), + &mut page, + &mut weight, + overweight_limit + ), + ItemExecutionStatus::Executed(false) + ); + assert_eq!(weight.consumed, 2.into_weight()); + assert_last_event::( + Event::OverweightEnqueued { + hash: ::Hashing::hash(b"TooMuch"), + origin: MessageOrigin::Here, + message_index: 0, + page_index: 0, + } + .into(), + ); + + // Check that the message was skipped. + let (pos, processed, payload) = page.peek_index(0).unwrap(); + assert_eq!(pos, 0); + assert!(!processed); + assert_eq!(payload, b"TooMuch".encode()); + }); +} + +#[test] +fn peek_index_works() { + use super::integration_test::Test; // Run with larger page size. + new_test_ext::().execute_with(|| { + // Fill a page with messages. + let (mut page, msgs) = full_page::(); + let msg_enc_len = ItemHeader::<::Size>::max_encoded_len() + 4; + + for i in 0..msgs { + // Skip all even messages. + page.skip_first(i % 2 == 0); + // Peek each message and check that it is correct. + let (pos, processed, payload) = page.peek_index(i).unwrap(); + assert_eq!(pos, msg_enc_len * i); + assert_eq!(processed, i % 2 == 0); + // `full_page` uses the index as payload. + assert_eq!(payload, (i as u32).encode()); + } + }); +} + +#[test] +fn peek_first_and_skip_first_works() { + use super::integration_test::Test; // Run with larger page size. + new_test_ext::().execute_with(|| { + // Fill a page with messages. + let (mut page, msgs) = full_page::(); + + for i in 0..msgs { + let msg = page.peek_first().unwrap(); + // `full_page` uses the index as payload. + assert_eq!(msg.deref(), (i as u32).encode()); + page.skip_first(i % 2 == 0); // True or false should not matter here. + } + assert!(page.peek_first().is_none(), "Page must be at the end"); + + // Check that all messages were correctly marked as (un)processed. + for i in 0..msgs { + let (_, processed, _) = page.peek_index(i).unwrap(); + assert_eq!(processed, i % 2 == 0); + } + }); +} + +#[test] +fn note_processed_at_pos_works() { + use super::integration_test::Test; // Run with larger page size.
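// ---------------------------------------------------------------------------
// [Editor's illustrative sketch, not part of the patch] `peek_index_works`
// above asserts `pos == msg_enc_len * i`: a page heap is a sequence of encoded
// `ItemHeader`s (a fixed `max_encoded_len()` bytes in these tests), each
// directly followed by its payload. With uniform 4-byte payloads the offset of
// message `i` is therefore:
fn item_offset(i: usize, header_len: usize, payload_len: usize) -> usize {
    i * (header_len + payload_len)
}
// (The `note_processed_at_pos` test continues below.)
// ---------------------------------------------------------------------------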
+ new_test_ext::().execute_with(|| { + let (mut page, msgs) = full_page::(); + + for i in 0..msgs { + let (pos, processed, _) = page.peek_index(i).unwrap(); + assert!(!processed); + assert_eq!(page.remaining as usize, msgs - i); + + page.note_processed_at_pos(pos); + + let (_, processed, _) = page.peek_index(i).unwrap(); + assert!(processed); + assert_eq!(page.remaining as usize, msgs - i - 1); + } + // `skip_first` still works fine. + for _ in 0..msgs { + page.peek_first().unwrap(); + page.skip_first(false); + } + assert!(page.peek_first().is_none()); + }); +} + +#[test] +fn note_processed_at_pos_idempotent() { + let (mut page, _) = full_page::(); + page.note_processed_at_pos(0); + + let original = page.clone(); + page.note_processed_at_pos(0); + assert_eq!(page.heap, original.heap); +} + +#[test] +fn is_complete_works() { + use super::integration_test::Test; // Run with larger page size. + new_test_ext::().execute_with(|| { + let (mut page, msgs) = full_page::(); + assert!(msgs > 3, "Boring"); + let msg_enc_len = ItemHeader::<::Size>::max_encoded_len() + 4; + + assert!(!page.is_complete()); + for i in 0..msgs { + if i % 2 == 0 { + page.skip_first(false); + } else { + page.note_processed_at_pos(msg_enc_len * i); + } + } + // Not complete since `skip_first` was called with `false`. + assert!(!page.is_complete()); + for i in 0..msgs { + if i % 2 == 0 { + assert!(!page.is_complete()); + let (pos, _, _) = page.peek_index(i).unwrap(); + page.note_processed_at_pos(pos); + } + } + assert!(page.is_complete()); + assert_eq!(page.remaining_size, 0); + // Each message is marked as processed. + for i in 0..msgs { + let (_, processed, _) = page.peek_index(i).unwrap(); + assert!(processed); + } + }); +} + +#[test] +fn page_from_message_basic_works() { + assert!(MaxMessageLenOf::::get() > 0, "pre-condition unmet"); + let mut msg: BoundedVec> = Default::default(); + msg.bounded_resize(MaxMessageLenOf::::get() as usize, 123); + + let page = PageOf::::from_message::(msg.as_bounded_slice()); + assert_eq!(page.remaining, 1); + assert_eq!(page.remaining_size as usize, msg.len()); + assert!(page.first_index == 0 && page.first == 0 && page.last == 0); + + // Verify the content of the heap. + let mut heap = Vec::::new(); + let header = + ItemHeader::<::Size> { payload_len: msg.len() as u32, is_processed: false }; + heap.extend(header.encode()); + heap.extend(msg.deref()); + assert_eq!(page.heap, heap); +} + +#[test] +fn page_try_append_message_basic_works() { + use super::integration_test::Test; // Run with larger page size. + + let mut page = PageOf::::default(); + let mut msgs = 0; + // Append as many 4-byte messages as possible. + for i in 0..u32::MAX { + let r = i.using_encoded(|i| page.try_append_message::(i.try_into().unwrap())); + if r.is_err() { + break + } else { + msgs += 1; + } + } + let expected_msgs = (::HeapSize::get()) / + (ItemHeader::<::Size>::max_encoded_len() as u32 + 4); + assert_eq!(expected_msgs, msgs, "Wrong number of messages"); + assert_eq!(page.remaining, msgs); + assert_eq!(page.remaining_size, msgs * 4); + + // Verify that the heap content is correct. + let mut heap = Vec::::new(); + for i in 0..msgs { + let header = ItemHeader::<::Size> { payload_len: 4, is_processed: false }; + heap.extend(header.encode()); + heap.extend(i.encode()); + } + assert_eq!(page.heap, heap); +} + +#[test] +fn page_try_append_message_max_msg_len_works() { + use super::integration_test::Test; // Run with larger page size. + + // We start off with an empty page.
+ let mut page = PageOf::::default(); + // … and append a message with maximum possible length. + let msg = vec![123u8; MaxMessageLenOf::::get() as usize]; + // … which works. + page.try_append_message::(BoundedSlice::defensive_truncate_from(&msg)) + .unwrap(); + // Now we cannot append *anything* since the heap is full. + page.try_append_message::(BoundedSlice::defensive_truncate_from(&[])) + .unwrap_err(); + assert_eq!(page.heap.len(), ::HeapSize::get() as usize); +} + +#[test] +fn page_try_append_message_with_remaining_size_works() { + use super::integration_test::Test; // Run with larger page size. + let header_size = ItemHeader::<::Size>::max_encoded_len(); + + // We start off with an empty page. + let mut page = PageOf::::default(); + let mut remaining = ::HeapSize::get() as usize; + let mut msgs = Vec::new(); + let mut rng = StdRng::seed_from_u64(42); + // Now we keep appending messages with different lengths. + while remaining >= header_size { + let take = rng.gen_range(0..=(remaining - header_size)); + let msg = vec![123u8; take]; + page.try_append_message::(BoundedSlice::defensive_truncate_from(&msg)) + .unwrap(); + remaining -= take + header_size; + msgs.push(msg); + } + // Cannot even fit a single header in there now. + assert!(remaining < header_size); + assert_eq!(::HeapSize::get() as usize - page.heap.len(), remaining); + assert_eq!(page.remaining as usize, msgs.len()); + assert_eq!( + page.remaining_size as usize, + msgs.iter().fold(0, |mut a, m| { + a += m.len(); + a + }) + ); + // Verify the heap content. + let mut heap = Vec::new(); + for msg in msgs.into_iter() { + let header = ItemHeader::<::Size> { + payload_len: msg.len() as u32, + is_processed: false, + }; + heap.extend(header.encode()); + heap.extend(msg); + } + assert_eq!(page.heap, heap); +} + +// `Page::from_message` does not panic when called with the maximum message and origin lengths. +#[test] +fn page_from_message_max_len_works() { + let max_msg_len: usize = MaxMessageLenOf::::get() as usize; + + let page = PageOf::::from_message::(vec![1; max_msg_len][..].try_into().unwrap()); + + assert_eq!(page.remaining, 1); +} + +#[test] +fn sweep_queue_works() { + use MessageOrigin::*; + new_test_ext::().execute_with(|| { + build_triple_ring(); + + let book = BookStateFor::::get(Here); + assert!(book.begin != book.end); + // Removing the service head works + assert_eq!(ServiceHead::::get(), Some(Here)); + MessageQueue::sweep_queue(Here); + assert_ring(&[There, Everywhere(0)]); + // The book still exists, but has updated begin and end. + let book = BookStateFor::::get(Here); + assert_eq!(book.begin, book.end); + + // Removing something that is not the service head works. + assert!(ServiceHead::::get() != Some(Everywhere(0))); + MessageQueue::sweep_queue(Everywhere(0)); + assert_ring(&[There]); + // The book still exists, but has updated begin and end. + let book = BookStateFor::::get(Everywhere(0)); + assert_eq!(book.begin, book.end); + + MessageQueue::sweep_queue(There); + // The book still exists, but has updated begin and end. + let book = BookStateFor::::get(There); + assert_eq!(book.begin, book.end); + assert_ring(&[]); + + // Sweeping a queue never calls OnQueueChanged. + assert!(QueueChanges::take().is_empty()); + }) +} + +/// Test that `sweep_queue` also works if the ReadyRing wraps around.
+#[test] +fn sweep_queue_wraps_works() { + use MessageOrigin::*; + new_test_ext::().execute_with(|| { + BookStateFor::::insert(Here, empty_book::()); + knit(&Here); + + MessageQueue::sweep_queue(Here); + let book = BookStateFor::::get(Here); + assert!(book.ready_neighbours.is_none()); + }); +} + +#[test] +fn sweep_queue_invalid_noops() { + use MessageOrigin::*; + new_test_ext::().execute_with(|| { + assert_storage_noop!(MessageQueue::sweep_queue(Here)); + }); +} + +#[test] +fn footprint_works() { + new_test_ext::().execute_with(|| { + let origin = MessageOrigin::Here; + let (page, msgs) = full_page::(); + let book = book_for::(&page); + BookStateFor::::insert(origin, book); + + let info = MessageQueue::footprint(origin); + assert_eq!(info.count as usize, msgs); + assert_eq!(info.size, page.remaining_size as u64); + + // Sweeping a queue never calls OnQueueChanged. + assert!(QueueChanges::take().is_empty()); + }) +} + +/// The footprint of an invalid queue is the default footprint. +#[test] +fn footprint_invalid_works() { + new_test_ext::().execute_with(|| { + let origin = MessageOrigin::Here; + assert_eq!(MessageQueue::footprint(origin), Default::default()); + }) +} + +/// The footprint of a swept queue is still correct. +#[test] +fn footprint_on_swept_works() { + use MessageOrigin::*; + new_test_ext::().execute_with(|| { + let mut book = empty_book::(); + book.message_count = 3; + book.size = 10; + BookStateFor::::insert(Here, &book); + knit(&Here); + + MessageQueue::sweep_queue(Here); + let fp = MessageQueue::footprint(Here); + assert_eq!(fp.count, 3); + assert_eq!(fp.size, 10); + }) +} + +#[test] +fn execute_overweight_works() { + new_test_ext::().execute_with(|| { + set_weight("bump_service_head", 1.into_weight()); + set_weight("service_queue_base", 1.into_weight()); + set_weight("service_page_base_completion", 1.into_weight()); + + // Enqueue a message + let origin = MessageOrigin::Here; + MessageQueue::enqueue_message(msg("weight=6"), origin); + // Load the current book + let book = BookStateFor::::get(origin); + assert_eq!(book.message_count, 1); + assert!(Pages::::contains_key(origin, 0)); + + // Mark the message as permanently overweight. + assert_eq!(MessageQueue::service_queues(4.into_weight()), 4.into_weight()); + assert_eq!(QueueChanges::take(), vec![(origin, 1, 8)]); + assert_last_event::( + Event::OverweightEnqueued { + hash: ::Hashing::hash(b"weight=6"), + origin: MessageOrigin::Here, + message_index: 0, + page_index: 0, + } + .into(), + ); + + // Now try to execute it with too little weight. + let consumed = + ::execute_overweight(5.into_weight(), (origin, 0, 0)); + assert_eq!(consumed, Err(ExecuteOverweightError::InsufficientWeight)); + + // Execute it with enough weight. + assert_eq!(Pages::::iter().count(), 1); + assert!(QueueChanges::take().is_empty()); + let consumed = + ::execute_overweight(7.into_weight(), (origin, 0, 0)) + .unwrap(); + assert_eq!(consumed, 6.into_weight()); + assert_eq!(QueueChanges::take(), vec![(origin, 0, 0)]); + // There is no message left in the book. + let book = BookStateFor::::get(origin); + assert_eq!(book.message_count, 0); + // And no more pages. + assert_eq!(Pages::::iter().count(), 0); + + // Doing it again with enough weight will error.
+ let consumed = + ::execute_overweight(70.into_weight(), (origin, 0, 0)); + assert_eq!(consumed, Err(ExecuteOverweightError::NotFound)); + assert!(QueueChanges::take().is_empty()); + assert!(!Pages::::contains_key(origin, 0), "Page is gone"); + }); +} + +/// Checks that (un)knitting the ready ring works with just one queue. +/// +/// This case is interesting since the ring wraps around and many `mutate` calls then operate on the same object. +#[test] +fn ready_ring_knit_basic_works() { + use MessageOrigin::*; + + new_test_ext::().execute_with(|| { + BookStateFor::::insert(Here, empty_book::()); + + for i in 0..10 { + if i % 2 == 0 { + knit(&Here); + assert_ring(&[Here]); + } else { + unknit(&Here); + assert_ring(&[]); + } + } + assert_ring(&[]); + }); +} + +#[test] +fn ready_ring_knit_and_unknit_works() { + use MessageOrigin::*; + + new_test_ext::().execute_with(|| { + // Place three queues into the storage. + BookStateFor::::insert(Here, empty_book::()); + BookStateFor::::insert(There, empty_book::()); + BookStateFor::::insert(Everywhere(0), empty_book::()); + + // Knit them into the ready ring. + assert_ring(&[]); + knit(&Here); + assert_ring(&[Here]); + knit(&There); + assert_ring(&[Here, There]); + knit(&Everywhere(0)); + assert_ring(&[Here, There, Everywhere(0)]); + + // Now unknit… + unknit(&Here); + assert_ring(&[There, Everywhere(0)]); + unknit(&There); + assert_ring(&[Everywhere(0)]); + unknit(&Everywhere(0)); + assert_ring(&[]); + }); +} + +#[test] +fn enqueue_message_works() { + use MessageOrigin::*; + let max_msg_per_page = ::HeapSize::get() as u64 / + (ItemHeader::<::Size>::max_encoded_len() as u64 + 1); + + new_test_ext::().execute_with(|| { + // Enqueue messages which should fill three pages. + let n = max_msg_per_page * 3; + for i in 1..=n { + MessageQueue::enqueue_message(msg("a"), Here); + assert_eq!(QueueChanges::take(), vec![(Here, i, i)], "OnQueueChanged not called"); + } + assert_eq!(Pages::::iter().count(), 3); + + // Enqueue one more onto page 4. + MessageQueue::enqueue_message(msg("abc"), Here); + assert_eq!(QueueChanges::take(), vec![(Here, n + 1, n + 3)]); + assert_eq!(Pages::::iter().count(), 4); + + // Check the state. + assert_eq!(BookStateFor::::iter().count(), 1); + let book = BookStateFor::::get(Here); + assert_eq!(book.message_count, n + 1); + assert_eq!(book.size, n + 3); + assert_eq!((book.begin, book.end), (0, 4)); + assert_eq!(book.count as usize, Pages::::iter().count()); + }); +} + +#[test] +fn enqueue_messages_works() { + use MessageOrigin::*; + let max_msg_per_page = ::HeapSize::get() as u64 / + (ItemHeader::<::Size>::max_encoded_len() as u64 + 1); + + new_test_ext::().execute_with(|| { + // Enqueue messages which should fill three pages. + let n = max_msg_per_page * 3; + let msgs = vec![msg("a"); n as usize]; + + // Now queue all messages at once. + MessageQueue::enqueue_messages(msgs.into_iter(), Here); + // The changed handler should only be called once. + assert_eq!(QueueChanges::take(), vec![(Here, n, n)], "OnQueueChanged not called"); + assert_eq!(Pages::::iter().count(), 3); + + // Enqueue one more onto page 4. + MessageQueue::enqueue_message(msg("abc"), Here); + assert_eq!(QueueChanges::take(), vec![(Here, n + 1, n + 3)]); + assert_eq!(Pages::::iter().count(), 4); + + // Check the state.
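// ---------------------------------------------------------------------------
// [Editor's illustrative sketch, not part of the patch] The state checks in
// `enqueue_message(s)_works` follow a simple bookkeeping rule: every enqueue
// bumps the message count and size, and `end` advances whenever a fresh page
// has to be opened. In miniature (field names mirror `BookState`):
struct MiniBook { message_count: u64, size: u64, end: u32 }
fn note_enqueued(book: &mut MiniBook, msg_len: u64, opened_new_page: bool) {
    book.message_count += 1;
    book.size += msg_len;
    if opened_new_page {
        book.end += 1;
    }
}
// With pages holding `max_msg_per_page` one-byte messages, `3 * max + 1`
// enqueues yield `(begin, end) = (0, 4)`, as asserted above.
// ---------------------------------------------------------------------------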
+ assert_eq!(BookStateFor::::iter().count(), 1); + let book = BookStateFor::::get(Here); + assert_eq!(book.message_count, n + 1); + assert_eq!(book.size, n + 3); + assert_eq!((book.begin, book.end), (0, 4)); + assert_eq!(book.count as usize, Pages::::iter().count()); + }); +} diff --git a/frame/message-queue/src/weights.rs b/frame/message-queue/src/weights.rs new file mode 100644 index 0000000000000..cd9268ffde224 --- /dev/null +++ b/frame/message-queue/src/weights.rs @@ -0,0 +1,216 @@ +// This file is part of Substrate. + +// Copyright (C) 2022 Parity Technologies (UK) Ltd. +// SPDX-License-Identifier: Apache-2.0 + +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +//! Autogenerated weights for pallet_message_queue +//! +//! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION 4.0.0-dev +//! DATE: 2022-12-08, STEPS: `50`, REPEAT: 20, LOW RANGE: `[]`, HIGH RANGE: `[]` +//! HOSTNAME: `bm3`, CPU: `Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz` +//! EXECUTION: Some(Wasm), WASM-EXECUTION: Compiled, CHAIN: Some("dev"), DB CACHE: 1024 + +// Executed Command: +// /home/benchbot/cargo_target_dir/production/substrate +// benchmark +// pallet +// --steps=50 +// --repeat=20 +// --extrinsic=* +// --execution=wasm +// --wasm-execution=compiled +// --heap-pages=4096 +// --json-file=/var/lib/gitlab-runner/builds/zyw4fam_/0/parity/mirrors/substrate/.git/.artifacts/bench.json +// --pallet=pallet_message_queue +// --chain=dev +// --header=./HEADER-APACHE2 +// --output=./frame/message-queue/src/weights.rs +// --template=./.maintain/frame-weight-template.hbs + +#![cfg_attr(rustfmt, rustfmt_skip)] +#![allow(unused_parens)] +#![allow(unused_imports)] + +use frame_support::{traits::Get, weights::{Weight, constants::RocksDbWeight}}; +use sp_std::marker::PhantomData; + +/// Weight functions needed for pallet_message_queue. +pub trait WeightInfo { + fn ready_ring_knit() -> Weight; + fn ready_ring_unknit() -> Weight; + fn service_queue_base() -> Weight; + fn service_page_base_completion() -> Weight; + fn service_page_base_no_completion() -> Weight; + fn service_page_item() -> Weight; + fn bump_service_head() -> Weight; + fn reap_page() -> Weight; + fn execute_overweight_page_removed() -> Weight; + fn execute_overweight_page_updated() -> Weight; +} + +/// Weights for pallet_message_queue using the Substrate node and recommended hardware. +pub struct SubstrateWeight(PhantomData); +impl WeightInfo for SubstrateWeight { + // Storage: MessageQueue ServiceHead (r:1 w:0) + // Storage: MessageQueue BookStateFor (r:2 w:2) + fn ready_ring_knit() -> Weight { + // Minimum execution time: 12_330 nanoseconds. + Weight::from_ref_time(12_711_000) + .saturating_add(T::DbWeight::get().reads(3)) + .saturating_add(T::DbWeight::get().writes(2)) + } + // Storage: MessageQueue BookStateFor (r:2 w:2) + // Storage: MessageQueue ServiceHead (r:1 w:1) + fn ready_ring_unknit() -> Weight { + // Minimum execution time: 12_322 nanoseconds. 
+ Weight::from_ref_time(12_560_000) + .saturating_add(T::DbWeight::get().reads(3)) + .saturating_add(T::DbWeight::get().writes(3)) + } + // Storage: MessageQueue BookStateFor (r:1 w:1) + fn service_queue_base() -> Weight { + // Minimum execution time: 4_652 nanoseconds. + Weight::from_ref_time(4_848_000) + .saturating_add(T::DbWeight::get().reads(1)) + .saturating_add(T::DbWeight::get().writes(1)) + } + // Storage: MessageQueue Pages (r:1 w:1) + fn service_page_base_completion() -> Weight { + // Minimum execution time: 7_115 nanoseconds. + Weight::from_ref_time(7_407_000) + .saturating_add(T::DbWeight::get().reads(1)) + .saturating_add(T::DbWeight::get().writes(1)) + } + // Storage: MessageQueue Pages (r:1 w:1) + fn service_page_base_no_completion() -> Weight { + // Minimum execution time: 6_974 nanoseconds. + Weight::from_ref_time(7_200_000) + .saturating_add(T::DbWeight::get().reads(1)) + .saturating_add(T::DbWeight::get().writes(1)) + } + fn service_page_item() -> Weight { + // Minimum execution time: 79_657 nanoseconds. + Weight::from_ref_time(80_050_000) + } + // Storage: MessageQueue ServiceHead (r:1 w:1) + // Storage: MessageQueue BookStateFor (r:1 w:0) + fn bump_service_head() -> Weight { + // Minimum execution time: 7_598 nanoseconds. + Weight::from_ref_time(8_118_000) + .saturating_add(T::DbWeight::get().reads(2)) + .saturating_add(T::DbWeight::get().writes(1)) + } + // Storage: MessageQueue BookStateFor (r:1 w:1) + // Storage: MessageQueue Pages (r:1 w:1) + fn reap_page() -> Weight { + // Minimum execution time: 60_562 nanoseconds. + Weight::from_ref_time(61_430_000) + .saturating_add(T::DbWeight::get().reads(2)) + .saturating_add(T::DbWeight::get().writes(2)) + } + // Storage: MessageQueue BookStateFor (r:1 w:1) + // Storage: MessageQueue Pages (r:1 w:1) + fn execute_overweight_page_removed() -> Weight { + // Minimum execution time: 74_582 nanoseconds. + Weight::from_ref_time(75_445_000) + .saturating_add(T::DbWeight::get().reads(2)) + .saturating_add(T::DbWeight::get().writes(2)) + } + // Storage: MessageQueue BookStateFor (r:1 w:1) + // Storage: MessageQueue Pages (r:1 w:1) + fn execute_overweight_page_updated() -> Weight { + // Minimum execution time: 87_526 nanoseconds. + Weight::from_ref_time(88_055_000) + .saturating_add(T::DbWeight::get().reads(2)) + .saturating_add(T::DbWeight::get().writes(2)) + } +} + +// For backwards compatibility and tests +impl WeightInfo for () { + // Storage: MessageQueue ServiceHead (r:1 w:0) + // Storage: MessageQueue BookStateFor (r:2 w:2) + fn ready_ring_knit() -> Weight { + // Minimum execution time: 12_330 nanoseconds. + Weight::from_ref_time(12_711_000) + .saturating_add(RocksDbWeight::get().reads(3)) + .saturating_add(RocksDbWeight::get().writes(2)) + } + // Storage: MessageQueue BookStateFor (r:2 w:2) + // Storage: MessageQueue ServiceHead (r:1 w:1) + fn ready_ring_unknit() -> Weight { + // Minimum execution time: 12_322 nanoseconds. + Weight::from_ref_time(12_560_000) + .saturating_add(RocksDbWeight::get().reads(3)) + .saturating_add(RocksDbWeight::get().writes(3)) + } + // Storage: MessageQueue BookStateFor (r:1 w:1) + fn service_queue_base() -> Weight { + // Minimum execution time: 4_652 nanoseconds. + Weight::from_ref_time(4_848_000) + .saturating_add(RocksDbWeight::get().reads(1)) + .saturating_add(RocksDbWeight::get().writes(1)) + } + // Storage: MessageQueue Pages (r:1 w:1) + fn service_page_base_completion() -> Weight { + // Minimum execution time: 7_115 nanoseconds. 
+ Weight::from_ref_time(7_407_000) + .saturating_add(RocksDbWeight::get().reads(1)) + .saturating_add(RocksDbWeight::get().writes(1)) + } + // Storage: MessageQueue Pages (r:1 w:1) + fn service_page_base_no_completion() -> Weight { + // Minimum execution time: 6_974 nanoseconds. + Weight::from_ref_time(7_200_000) + .saturating_add(RocksDbWeight::get().reads(1)) + .saturating_add(RocksDbWeight::get().writes(1)) + } + fn service_page_item() -> Weight { + // Minimum execution time: 79_657 nanoseconds. + Weight::from_ref_time(80_050_000) + } + // Storage: MessageQueue ServiceHead (r:1 w:1) + // Storage: MessageQueue BookStateFor (r:1 w:0) + fn bump_service_head() -> Weight { + // Minimum execution time: 7_598 nanoseconds. + Weight::from_ref_time(8_118_000) + .saturating_add(RocksDbWeight::get().reads(2)) + .saturating_add(RocksDbWeight::get().writes(1)) + } + // Storage: MessageQueue BookStateFor (r:1 w:1) + // Storage: MessageQueue Pages (r:1 w:1) + fn reap_page() -> Weight { + // Minimum execution time: 60_562 nanoseconds. + Weight::from_ref_time(61_430_000) + .saturating_add(RocksDbWeight::get().reads(2)) + .saturating_add(RocksDbWeight::get().writes(2)) + } + // Storage: MessageQueue BookStateFor (r:1 w:1) + // Storage: MessageQueue Pages (r:1 w:1) + fn execute_overweight_page_removed() -> Weight { + // Minimum execution time: 74_582 nanoseconds. + Weight::from_ref_time(75_445_000) + .saturating_add(RocksDbWeight::get().reads(2)) + .saturating_add(RocksDbWeight::get().writes(2)) + } + // Storage: MessageQueue BookStateFor (r:1 w:1) + // Storage: MessageQueue Pages (r:1 w:1) + fn execute_overweight_page_updated() -> Weight { + // Minimum execution time: 87_526 nanoseconds. + Weight::from_ref_time(88_055_000) + .saturating_add(RocksDbWeight::get().reads(2)) + .saturating_add(RocksDbWeight::get().writes(2)) + } +} diff --git a/frame/multisig/src/lib.rs b/frame/multisig/src/lib.rs index ae4efb76335a0..076a289e06519 100644 --- a/frame/multisig/src/lib.rs +++ b/frame/multisig/src/lib.rs @@ -272,6 +272,7 @@ pub mod pallet { /// - DB Weight: None /// - Plus Call Weight /// # + #[pallet::call_index(0)] #[pallet::weight({ let dispatch_info = call.get_dispatch_info(); ( @@ -365,6 +366,7 @@ pub mod pallet { /// - Writes: Multisig Storage, [Caller Account] /// - Plus Call Weight /// # + #[pallet::call_index(1)] #[pallet::weight({ let s = other_signatories.len() as u32; let z = call.using_encoded(|d| d.len()) as u32; @@ -428,6 +430,7 @@ pub mod pallet { /// - Read: Multisig Storage, [Caller Account] /// - Write: Multisig Storage, [Caller Account] /// # + #[pallet::call_index(2)] #[pallet::weight({ let s = other_signatories.len() as u32; @@ -480,6 +483,7 @@ pub mod pallet { /// - Read: Multisig Storage, [Caller Account], Refund Account /// - Write: Multisig Storage, [Caller Account], Refund Account /// # + #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::cancel_as_multi(other_signatories.len() as u32))] pub fn cancel_as_multi( origin: OriginFor, diff --git a/frame/nicks/src/lib.rs b/frame/nicks/src/lib.rs index b3238630d3174..79daeb9bdb9a8 100644 --- a/frame/nicks/src/lib.rs +++ b/frame/nicks/src/lib.rs @@ -135,6 +135,7 @@ pub mod pallet { /// - One storage read/write. /// - One event. /// # + #[pallet::call_index(0)] #[pallet::weight(50_000_000)] pub fn set_name(origin: OriginFor, name: Vec) -> DispatchResult { let sender = ensure_signed(origin)?; @@ -167,6 +168,7 @@ pub mod pallet { /// - One storage read/write. /// - One event. 
/// # + #[pallet::call_index(1)] #[pallet::weight(70_000_000)] pub fn clear_name(origin: OriginFor) -> DispatchResult { let sender = ensure_signed(origin)?; @@ -193,6 +195,7 @@ pub mod pallet { /// - One storage read/write. /// - One event. /// # + #[pallet::call_index(2)] #[pallet::weight(70_000_000)] pub fn kill_name(origin: OriginFor, target: AccountIdLookupOf) -> DispatchResult { T::ForceOrigin::ensure_origin(origin)?; @@ -220,6 +223,7 @@ pub mod pallet { /// - One storage read/write. /// - One event. /// # + #[pallet::call_index(3)] #[pallet::weight(70_000_000)] pub fn force_name( origin: OriginFor, diff --git a/frame/nis/src/lib.rs b/frame/nis/src/lib.rs index 97f727c241479..dff64625a3654 100644 --- a/frame/nis/src/lib.rs +++ b/frame/nis/src/lib.rs @@ -520,6 +520,7 @@ pub mod pallet { /// /// Complexities: /// - `Queues[duration].len()` (just take max). + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::place_bid_max())] pub fn place_bid( origin: OriginFor, @@ -581,6 +582,7 @@ pub mod pallet { /// /// - `amount`: The amount of the previous bid. /// - `duration`: The duration of the previous bid. + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::retract_bid(T::MaxQueueLen::get()))] pub fn retract_bid( origin: OriginFor, @@ -615,6 +617,7 @@ pub mod pallet { /// Ensure we have sufficient funding for all potential payouts. /// /// - `origin`: Must be accepted by `FundOrigin`. + #[pallet::call_index(2)] #[pallet::weight(T::WeightInfo::fund_deficit())] pub fn fund_deficit(origin: OriginFor) -> DispatchResult { T::FundOrigin::ensure_origin(origin)?; @@ -636,6 +639,7 @@ pub mod pallet { /// - `index`: The index of the receipt. /// - `portion`: If `Some`, then only the given portion of the receipt should be thawed. If /// `None`, then all of it should be. + #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::thaw())] pub fn thaw( origin: OriginFor, diff --git a/frame/node-authorization/src/lib.rs b/frame/node-authorization/src/lib.rs index bd1b14d10b013..543ba24500ebc 100644 --- a/frame/node-authorization/src/lib.rs +++ b/frame/node-authorization/src/lib.rs @@ -210,6 +210,7 @@ pub mod pallet { /// May only be called from `T::AddOrigin`. /// /// - `node`: identifier of the node. + #[pallet::call_index(0)] #[pallet::weight((T::WeightInfo::add_well_known_node(), DispatchClass::Operational))] pub fn add_well_known_node( origin: OriginFor, @@ -239,6 +240,7 @@ pub mod pallet { /// May only be called from `T::RemoveOrigin`. /// /// - `node`: identifier of the node. + #[pallet::call_index(1)] #[pallet::weight((T::WeightInfo::remove_well_known_node(), DispatchClass::Operational))] pub fn remove_well_known_node(origin: OriginFor, node: PeerId) -> DispatchResult { T::RemoveOrigin::ensure_origin(origin)?; @@ -264,6 +266,7 @@ pub mod pallet { /// /// - `remove`: the node which will be moved out from the list. /// - `add`: the node which will be put in the list. + #[pallet::call_index(2)] #[pallet::weight((T::WeightInfo::swap_well_known_node(), DispatchClass::Operational))] pub fn swap_well_known_node( origin: OriginFor, @@ -300,6 +303,7 @@ pub mod pallet { /// May only be called from `T::ResetOrigin`. /// /// - `nodes`: the new nodes for the allow list. + #[pallet::call_index(3)] #[pallet::weight((T::WeightInfo::reset_well_known_nodes(), DispatchClass::Operational))] pub fn reset_well_known_nodes( origin: OriginFor, @@ -318,6 +322,7 @@ pub mod pallet { /// PeerId, so claim it right away! /// /// - `node`: identifier of the node. 
+ #[pallet::call_index(4)] #[pallet::weight(T::WeightInfo::claim_node())] pub fn claim_node(origin: OriginFor, node: PeerId) -> DispatchResult { let sender = ensure_signed(origin)?; @@ -335,6 +340,7 @@ pub mod pallet { /// needs to reach consensus among the network participants. /// /// - `node`: identifier of the node. + #[pallet::call_index(5)] #[pallet::weight(T::WeightInfo::remove_claim())] pub fn remove_claim(origin: OriginFor, node: PeerId) -> DispatchResult { let sender = ensure_signed(origin)?; @@ -355,6 +361,7 @@ pub mod pallet { /// /// - `node`: identifier of the node. /// - `owner`: new owner of the node. + #[pallet::call_index(6)] #[pallet::weight(T::WeightInfo::transfer_node())] pub fn transfer_node( origin: OriginFor, @@ -378,6 +385,7 @@ pub mod pallet { /// /// - `node`: identifier of the node. /// - `connections`: additional nodes from which the connections are allowed. + #[pallet::call_index(7)] #[pallet::weight(T::WeightInfo::add_connections())] pub fn add_connections( origin: OriginFor, @@ -412,6 +420,7 @@ pub mod pallet { /// /// - `node`: identifier of the node. /// - `connections`: additional nodes from which the connections are not allowed anymore. + #[pallet::call_index(8)] #[pallet::weight(T::WeightInfo::remove_connections())] pub fn remove_connections( origin: OriginFor, diff --git a/frame/nomination-pools/Cargo.toml b/frame/nomination-pools/Cargo.toml index 4894e3d97f19a..3eb2d4bc5fd9b 100644 --- a/frame/nomination-pools/Cargo.toml +++ b/frame/nomination-pools/Cargo.toml @@ -26,7 +26,7 @@ sp-core = { version = "7.0.0", default-features = false, path = "../../primitive sp-io = { version = "7.0.0", default-features = false, path = "../../primitives/io" } log = { version = "0.4.0", default-features = false } -# Optional: usef for testing and/or fuzzing +# Optional: use for testing and/or fuzzing pallet-balances = { version = "4.0.0-dev", path = "../balances", optional = true } sp-tracing = { version = "6.0.0", path = "../../primitives/tracing", optional = true } diff --git a/frame/nomination-pools/benchmarking/Cargo.toml b/frame/nomination-pools/benchmarking/Cargo.toml index be52d9777ac86..74b71a353fe7f 100644 --- a/frame/nomination-pools/benchmarking/Cargo.toml +++ b/frame/nomination-pools/benchmarking/Cargo.toml @@ -31,7 +31,6 @@ sp-runtime = { version = "7.0.0", default-features = false, path = "../../../pri sp-runtime-interface = { version = "7.0.0", default-features = false, path = "../../../primitives/runtime-interface" } sp-staking = { version = "4.0.0-dev", default-features = false, path = "../../../primitives/staking" } sp-std = { version = "5.0.0", default-features = false, path = "../../../primitives/std" } -sp-io = { optional = true, default-features = false, path = "../../../primitives/io" } [dev-dependencies] pallet-balances = { version = "4.0.0-dev", default-features = false, path = "../../balances" } @@ -53,7 +52,6 @@ std = [ "pallet-nomination-pools/std", "sp-runtime/std", "sp-runtime-interface/std", - "sp-io/std", "sp-staking/std", "sp-std/std", ] diff --git a/frame/nomination-pools/src/lib.rs b/frame/nomination-pools/src/lib.rs index 9ca9539b3dca8..fd533ee3762b4 100644 --- a/frame/nomination-pools/src/lib.rs +++ b/frame/nomination-pools/src/lib.rs @@ -1506,6 +1506,7 @@ pub mod pallet { /// * This call will *not* dust the member account, so the member must have at least /// `existential deposit + amount` in their account.
/// * Only a pool with [`PoolState::Open`] can be joined + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::join())] pub fn join( origin: OriginFor, @@ -1563,6 +1564,7 @@ pub mod pallet { // NOTE: this transaction is implemented with the sole purpose of readability and // correctness, not optimization. We read/write several storage items multiple times instead // of just once, in the spirit of reusing code. + #[pallet::call_index(1)] #[pallet::weight( T::WeightInfo::bond_extra_transfer() .max(T::WeightInfo::bond_extra_reward()) @@ -1605,6 +1607,7 @@ pub mod pallet { /// /// The member will earn rewards pro rata based on the member's stake vs the sum of all /// members' stake in the pool. Rewards do not "expire". + #[pallet::call_index(2)] #[pallet::weight(T::WeightInfo::claim_payout())] pub fn claim_payout(origin: OriginFor) -> DispatchResult { let who = ensure_signed(origin)?; @@ -1644,6 +1647,7 @@ pub mod pallet { /// [`Call::pool_withdraw_unbonded`] can be called to try and minimize unlocking chunks. If /// there are too many unlocking chunks, the result of this call will likely be the /// `NoMoreChunks` error from the staking system. + #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::unbond())] pub fn unbond( origin: OriginFor, @@ -1719,6 +1723,7 @@ pub mod pallet { /// can be cleared by withdrawing. In the case there are too many unlocking chunks, the user /// would probably see an error like `NoMoreChunks` emitted from the staking system when /// they attempt to unbond. + #[pallet::call_index(4)] #[pallet::weight(T::WeightInfo::pool_withdraw_unbonded(*num_slashing_spans))] pub fn pool_withdraw_unbonded( origin: OriginFor, @@ -1753,6 +1758,7 @@ pub mod pallet { /// # Note /// /// If the target is the depositor, the pool will be destroyed. + #[pallet::call_index(5)] #[pallet::weight( T::WeightInfo::withdraw_unbonded_kill(*num_slashing_spans) )] @@ -1874,6 +1880,7 @@ pub mod pallet { /// /// In addition to `amount`, the caller will transfer the existential deposit; so the caller /// needs to have at least `amount + existential_deposit` transferrable. + #[pallet::call_index(6)] #[pallet::weight(T::WeightInfo::create())] pub fn create( origin: OriginFor, @@ -1898,6 +1905,7 @@ pub mod pallet { /// /// same as `create` with the inclusion of /// * `pool_id` - A valid PoolId. + #[pallet::call_index(7)] #[pallet::weight(T::WeightInfo::create())] pub fn create_with_pool_id( origin: OriginFor, @@ -1922,6 +1930,7 @@ pub mod pallet { /// /// This directly forwards the call to the staking pallet, on behalf of the pool bonded /// account. + #[pallet::call_index(8)] #[pallet::weight(T::WeightInfo::nominate(validators.len() as u32))] pub fn nominate( origin: OriginFor, @@ -1944,6 +1953,7 @@ pub mod pallet { /// 1. signed by the state toggler, or the root role of the pool, /// 2. if the pool conditions to be open are NOT met (as described by `ok_to_be_open`), and /// then the state of the pool can be permissionlessly changed to `Destroying`. + #[pallet::call_index(9)] #[pallet::weight(T::WeightInfo::set_state())] pub fn set_state( origin: OriginFor, @@ -1972,6 +1982,7 @@ pub mod pallet { /// /// The dispatch origin of this call must be signed by the state toggler, or the root role /// of the pool. + #[pallet::call_index(10)] #[pallet::weight(T::WeightInfo::set_metadata(metadata.len() as u32))] pub fn set_metadata( origin: OriginFor, @@ -2003,6 +2014,7 @@ pub mod pallet { /// * `max_pools` - Set [`MaxPools`]. /// * `max_members` - Set [`MaxPoolMembers`].
/// * `max_members_per_pool` - Set [`MaxPoolMembersPerPool`]. + #[pallet::call_index(11)] #[pallet::weight(T::WeightInfo::set_configs())] pub fn set_configs( origin: OriginFor, @@ -2039,6 +2051,7 @@ pub mod pallet { /// /// It emits an event, notifying UIs of the role change. This event is quite relevant to /// most pool members and they should be informed of changes to pool roles. + #[pallet::call_index(12)] #[pallet::weight(T::WeightInfo::update_roles())] pub fn update_roles( origin: OriginFor, @@ -2091,6 +2104,7 @@ pub mod pallet { /// /// This directly forwards the call to the staking pallet, on behalf of the pool bonded /// account. + #[pallet::call_index(13)] #[pallet::weight(T::WeightInfo::chill())] pub fn chill(origin: OriginFor, pool_id: PoolId) -> DispatchResult { let who = ensure_signed(origin)?; diff --git a/frame/offences/benchmarking/src/lib.rs b/frame/offences/benchmarking/src/lib.rs index 555ec42882ee1..e5ec2952f8114 100644 --- a/frame/offences/benchmarking/src/lib.rs +++ b/frame/offences/benchmarking/src/lib.rs @@ -308,17 +308,20 @@ benchmarks! { let slash_amount = slash_fraction * bond_amount; let reward_amount = slash_amount.saturating_mul(1 + n) / 2; let reward = reward_amount / r; + let slash_report = |id| core::iter::once( + ::RuntimeEvent::from(StakingEvent::::SlashReported{ validator: id, fraction: slash_fraction, slash_era: 0}) + ); let slash = |id| core::iter::once( - ::RuntimeEvent::from(StakingEvent::::Slashed{staker: id, amount: BalanceOf::::from(slash_amount)}) + ::RuntimeEvent::from(StakingEvent::::Slashed{ staker: id, amount: BalanceOf::::from(slash_amount) }) ); let balance_slash = |id| core::iter::once( - ::RuntimeEvent::from(pallet_balances::Event::::Slashed{who: id, amount: slash_amount.into()}) + ::RuntimeEvent::from(pallet_balances::Event::::Slashed{ who: id, amount: slash_amount.into() }) ); let chill = |id| core::iter::once( - ::RuntimeEvent::from(StakingEvent::::Chilled{stash: id}) + ::RuntimeEvent::from(StakingEvent::::Chilled{ stash: id }) ); let balance_deposit = |id, amount: u32| - ::RuntimeEvent::from(pallet_balances::Event::::Deposit{who: id, amount: amount.into()}); + ::RuntimeEvent::from(pallet_balances::Event::::Deposit{ who: id, amount: amount.into() }); let mut first = true; let slash_events = raw_offenders.into_iter() .flat_map(|offender| { @@ -328,6 +331,7 @@ benchmarks! { }); let mut events = chill(offender.stash.clone()).map(Into::into) + .chain(slash_report(offender.stash.clone()).map(Into::into)) .chain(balance_slash(offender.stash.clone()).map(Into::into)) .chain(slash(offender.stash).map(Into::into)) .chain(nom_slashes) @@ -407,6 +411,7 @@ benchmarks! { System::::event_count(), 0 + 1 // offence + 3 // reporter (reward + endowment) + + 1 // offenders reported + 2 // offenders slashed + 1 // offenders chilled + 2 * n // nominators slashed @@ -443,6 +448,7 @@ benchmarks!
{ System::::event_count(), 0 + 1 // offence + 3 // reporter (reward + endowment) + + 1 // offenders reported + 2 // offenders slashed + 1 // offenders chilled + 2 * n // nominators slashed diff --git a/frame/offences/benchmarking/src/mock.rs b/frame/offences/benchmarking/src/mock.rs index e022d81c5b5bd..de3a4eca6308d 100644 --- a/frame/offences/benchmarking/src/mock.rs +++ b/frame/offences/benchmarking/src/mock.rs @@ -24,7 +24,7 @@ use frame_election_provider_support::{onchain, SequentialPhragmen}; use frame_support::{ parameter_types, traits::{ConstU32, ConstU64}, - weights::constants::WEIGHT_PER_SECOND, + weights::{constants::WEIGHT_REF_TIME_PER_SECOND, Weight}, }; use frame_system as system; use pallet_session::historical as pallet_session_historical; @@ -41,7 +41,7 @@ type Balance = u64; parameter_types! { pub BlockWeights: frame_system::limits::BlockWeights = frame_system::limits::BlockWeights::simple_max( - 2u64 * WEIGHT_PER_SECOND + Weight::from_parts(2u64 * WEIGHT_REF_TIME_PER_SECOND, u64::MAX) ); } diff --git a/frame/offences/src/mock.rs b/frame/offences/src/mock.rs index 31dac8d51d3b1..8e4256ec3d3e6 100644 --- a/frame/offences/src/mock.rs +++ b/frame/offences/src/mock.rs @@ -26,7 +26,7 @@ use frame_support::{ parameter_types, traits::{ConstU32, ConstU64}, weights::{ - constants::{RocksDbWeight, WEIGHT_PER_SECOND}, + constants::{RocksDbWeight, WEIGHT_REF_TIME_PER_SECOND}, Weight, }, }; @@ -85,7 +85,9 @@ frame_support::construct_runtime!( parameter_types! { pub BlockWeights: frame_system::limits::BlockWeights = - frame_system::limits::BlockWeights::simple_max(2u64 * WEIGHT_PER_SECOND); + frame_system::limits::BlockWeights::simple_max( + Weight::from_parts(2u64 * WEIGHT_REF_TIME_PER_SECOND, u64::MAX), + ); } impl frame_system::Config for Runtime { type BaseCallFilter = frame_support::traits::Everything; diff --git a/frame/preimage/src/lib.rs b/frame/preimage/src/lib.rs index 6549832c11f5d..bf7d602057cac 100644 --- a/frame/preimage/src/lib.rs +++ b/frame/preimage/src/lib.rs @@ -153,6 +153,7 @@ pub mod pallet { /// /// If the preimage was previously requested, no fees or deposits are taken for providing /// the preimage. Otherwise, a deposit is taken proportional to the size of the preimage. + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::note_preimage(bytes.len() as u32))] pub fn note_preimage(origin: OriginFor, bytes: Vec) -> DispatchResultWithPostInfo { // We accept a signed origin which will pay a deposit, or a root origin where a deposit @@ -172,6 +173,7 @@ pub mod pallet { /// /// - `hash`: The hash of the preimage to be removed from the store. /// - `len`: The length of the preimage of `hash`. + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::unnote_preimage())] pub fn unnote_preimage(origin: OriginFor, hash: T::Hash) -> DispatchResult { let maybe_sender = Self::ensure_signed_or_manager(origin)?; @@ -182,6 +184,7 @@ pub mod pallet { /// /// If the preimage request has already been provided on-chain, we unreserve any deposit /// a user may have paid, and take control of the preimage out of their hands. + #[pallet::call_index(2)] #[pallet::weight(T::WeightInfo::request_preimage())] pub fn request_preimage(origin: OriginFor, hash: T::Hash) -> DispatchResult { T::ManagerOrigin::ensure_origin(origin)?; @@ -192,6 +195,7 @@ pub mod pallet { /// Clear a previously made request for a preimage. /// /// NOTE: THIS MUST NOT BE CALLED ON `hash` MORE TIMES THAN `request_preimage`.
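Aside: the two mock-runtime hunks above show the weight migration that recurs across this patch. The deprecated scalar `WEIGHT_PER_SECOND` is replaced by the two-dimensional `Weight`: `ref_time` keeps the old execution-time meaning (via `WEIGHT_REF_TIME_PER_SECOND`), while `proof_size` is pinned to `u64::MAX` because these test mocks do not meter PoV size. A minimal sketch of the equivalence (the assertions are illustrative, not part of the patch):

    use frame_support::weights::{constants::WEIGHT_REF_TIME_PER_SECOND, Weight};

    // Old: `2u64 * WEIGHT_PER_SECOND` yielded a one-dimensional weight.
    // New: both components are spelled out explicitly.
    let max_block: Weight = Weight::from_parts(2u64 * WEIGHT_REF_TIME_PER_SECOND, u64::MAX);
    assert_eq!(max_block.ref_time(), 2_000_000_000_000); // two seconds of ref time
    assert_eq!(max_block.proof_size(), u64::MAX); // PoV size deliberately unbounded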
+ #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::unrequest_preimage())] pub fn unrequest_preimage(origin: OriginFor, hash: T::Hash) -> DispatchResult { T::ManagerOrigin::ensure_origin(origin)?; diff --git a/frame/proxy/src/lib.rs b/frame/proxy/src/lib.rs index 5c07a2b012243..d98534d16a21b 100644 --- a/frame/proxy/src/lib.rs +++ b/frame/proxy/src/lib.rs @@ -191,6 +191,7 @@ pub mod pallet { /// - `real`: The account that the proxy will make a call on behalf of. /// - `force_proxy_type`: Specify the exact proxy type to be used and checked for this call. /// - `call`: The call to be made by the `real` account. + #[pallet::call_index(0)] #[pallet::weight({ let di = call.get_dispatch_info(); (T::WeightInfo::proxy(T::MaxProxies::get()) @@ -224,6 +225,7 @@ pub mod pallet { /// - `proxy_type`: The permissions allowed for this proxy account. /// - `delay`: The announcement period required of the initial proxy. Will generally be /// zero. + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::add_proxy(T::MaxProxies::get()))] pub fn add_proxy( origin: OriginFor, @@ -243,6 +245,7 @@ pub mod pallet { /// Parameters: /// - `proxy`: The account that the `caller` would like to remove as a proxy. /// - `proxy_type`: The permissions currently enabled for the removed proxy account. + #[pallet::call_index(2)] #[pallet::weight(T::WeightInfo::remove_proxy(T::MaxProxies::get()))] pub fn remove_proxy( origin: OriginFor, @@ -261,6 +264,7 @@ pub mod pallet { /// /// WARNING: This may be called on accounts created by `pure`, however if done, then /// the unreserved fees will be inaccessible. **All access to this account will be lost.** + #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::remove_proxies(T::MaxProxies::get()))] pub fn remove_proxies(origin: OriginFor) -> DispatchResult { let who = ensure_signed(origin)?; @@ -288,6 +292,7 @@ pub mod pallet { /// same sender, with the same parameters. /// /// Fails if there are insufficient funds to pay for deposit. + #[pallet::call_index(4)] #[pallet::weight(T::WeightInfo::create_pure(T::MaxProxies::get()))] pub fn create_pure( origin: OriginFor, @@ -335,6 +340,7 @@ pub mod pallet { /// /// Fails with `NoPermission` in case the caller is not a previously created pure /// account whose `pure` call has corresponding parameters. + #[pallet::call_index(5)] #[pallet::weight(T::WeightInfo::kill_pure(T::MaxProxies::get()))] pub fn kill_pure( origin: OriginFor, @@ -372,6 +378,7 @@ pub mod pallet { /// Parameters: /// - `real`: The account that the proxy will make a call on behalf of. /// - `call_hash`: The hash of the call to be made by the `real` account. + #[pallet::call_index(6)] #[pallet::weight(T::WeightInfo::announce(T::MaxPending::get(), T::MaxProxies::get()))] pub fn announce( origin: OriginFor, @@ -421,6 +428,7 @@ pub mod pallet { /// Parameters: /// - `real`: The account that the proxy will make a call on behalf of. /// - `call_hash`: The hash of the call to be made by the `real` account. + #[pallet::call_index(7)] #[pallet::weight(T::WeightInfo::remove_announcement( T::MaxPending::get(), T::MaxProxies::get() @@ -447,6 +455,7 @@ pub mod pallet { /// Parameters: /// - `delegate`: The account that previously announced the call. /// - `call_hash`: The hash of the call to be made. + #[pallet::call_index(8)] #[pallet::weight(T::WeightInfo::reject_announcement( T::MaxPending::get(), T::MaxProxies::get() @@ -476,6 +485,7 @@ pub mod pallet { /// - `real`: The account that the proxy will make a call on behalf of. 
/// - `force_proxy_type`: Specify the exact proxy type to be used and checked for this call. /// - `call`: The call to be made by the `real` account. + #[pallet::call_index(9)] #[pallet::weight({ let di = call.get_dispatch_info(); (T::WeightInfo::proxy_announced(T::MaxPending::get(), T::MaxProxies::get()) diff --git a/frame/ranked-collective/src/lib.rs b/frame/ranked-collective/src/lib.rs index 33aed2704918c..b057a57508023 100644 --- a/frame/ranked-collective/src/lib.rs +++ b/frame/ranked-collective/src/lib.rs @@ -470,6 +470,7 @@ pub mod pallet { /// - `rank`: The rank to give the new member. /// /// Weight: `O(1)` + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::add_member())] pub fn add_member(origin: OriginFor, who: AccountIdLookupOf) -> DispatchResult { let _ = T::PromoteOrigin::ensure_origin(origin)?; @@ -483,6 +484,7 @@ pub mod pallet { /// - `who`: Account of existing member. /// /// Weight: `O(1)` + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::promote_member(0))] pub fn promote_member(origin: OriginFor, who: AccountIdLookupOf) -> DispatchResult { let max_rank = T::PromoteOrigin::ensure_origin(origin)?; @@ -497,6 +499,7 @@ pub mod pallet { /// - `who`: Account of existing member of rank greater than zero. /// /// Weight: `O(1)`, less if the member's index is highest in its rank. + #[pallet::call_index(2)] #[pallet::weight(T::WeightInfo::demote_member(0))] pub fn demote_member(origin: OriginFor, who: AccountIdLookupOf) -> DispatchResult { let max_rank = T::DemoteOrigin::ensure_origin(origin)?; @@ -528,6 +531,7 @@ pub mod pallet { /// - `min_rank`: The rank of the member or greater. /// /// Weight: `O(min_rank)`. + #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::remove_member(*min_rank as u32))] pub fn remove_member( origin: OriginFor, @@ -562,6 +566,7 @@ pub mod pallet { /// fee. /// /// Weight: `O(1)`, less if there was no previous vote on the poll by the member. + #[pallet::call_index(4)] #[pallet::weight(T::WeightInfo::vote())] pub fn vote( origin: OriginFor, @@ -618,6 +623,7 @@ pub mod pallet { /// Transaction fees are waived if the operation is successful. /// /// Weight `O(max)` (less if there are fewer items to remove than `max`). + #[pallet::call_index(5)] #[pallet::weight(T::WeightInfo::cleanup_poll(*max))] pub fn cleanup_poll( origin: OriginFor, diff --git a/frame/recovery/src/lib.rs b/frame/recovery/src/lib.rs index 18d3d48dc024c..9c57ca79d2e47 100644 --- a/frame/recovery/src/lib.rs +++ b/frame/recovery/src/lib.rs @@ -374,6 +374,7 @@ pub mod pallet { /// Parameters: /// - `account`: The recovered account you want to make a call on-behalf-of. /// - `call`: The call you want to make with the recovered account. + #[pallet::call_index(0)] #[pallet::weight({ let dispatch_info = call.get_dispatch_info(); ( @@ -403,6 +404,7 @@ pub mod pallet { /// Parameters: /// - `lost`: The "lost account" to be recovered. /// - `rescuer`: The "rescuer account" which can call as the lost account. + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::set_recovered())] pub fn set_recovered( origin: OriginFor, @@ -437,6 +439,7 @@ pub mod pallet { /// friends. /// - `delay_period`: The number of blocks after a recovery attempt is initialized that /// needs to pass before the account can be recovered. + #[pallet::call_index(2)] #[pallet::weight(T::WeightInfo::create_recovery(friends.len() as u32))] pub fn create_recovery( origin: OriginFor, @@ -488,6 +491,7 @@ pub mod pallet { /// Parameters: /// - `account`: The lost account that you want to recover. 
This account needs to be /// recoverable (i.e. have a recovery configuration). + #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::initiate_recovery())] pub fn initiate_recovery( origin: OriginFor, @@ -532,6 +536,7 @@ pub mod pallet { /// /// The combination of these two parameters must point to an active recovery /// process. + #[pallet::call_index(4)] #[pallet::weight(T::WeightInfo::vouch_recovery(T::MaxFriends::get()))] pub fn vouch_recovery( origin: OriginFor, @@ -575,6 +580,7 @@ pub mod pallet { /// Parameters: /// - `account`: The lost account that you want to claim has been successfully recovered by /// you. + #[pallet::call_index(5)] #[pallet::weight(T::WeightInfo::claim_recovery(T::MaxFriends::get()))] pub fn claim_recovery( origin: OriginFor, @@ -622,6 +628,7 @@ pub mod pallet { /// /// Parameters: /// - `rescuer`: The account trying to rescue this recoverable account. + #[pallet::call_index(6)] #[pallet::weight(T::WeightInfo::close_recovery(T::MaxFriends::get()))] pub fn close_recovery( origin: OriginFor, @@ -659,6 +666,7 @@ pub mod pallet { /// /// The dispatch origin for this call must be _Signed_ and must be a /// recoverable account (i.e. has a recovery configuration). + #[pallet::call_index(7)] #[pallet::weight(T::WeightInfo::remove_recovery(T::MaxFriends::get()))] pub fn remove_recovery(origin: OriginFor) -> DispatchResult { let who = ensure_signed(origin)?; @@ -681,6 +689,7 @@ pub mod pallet { /// /// Parameters: /// - `account`: The recovered account you are able to call on-behalf-of. + #[pallet::call_index(8)] #[pallet::weight(T::WeightInfo::cancel_recovered())] pub fn cancel_recovered( origin: OriginFor, diff --git a/frame/referenda/src/lib.rs b/frame/referenda/src/lib.rs index 551628fee9159..0b846faf88558 100644 --- a/frame/referenda/src/lib.rs +++ b/frame/referenda/src/lib.rs @@ -1,3 +1,5 @@ +// This file is part of Substrate. + // Copyright (C) 2017-2022 Parity Technologies (UK) Ltd. // SPDX-License-Identifier: Apache-2.0 @@ -66,12 +68,11 @@ use codec::{Codec, Encode}; use frame_support::{ ensure, traits::{ - fungibles, schedule::{ v3::{Anon as ScheduleAnon, Named as ScheduleNamed}, DispatchTime, }, - Currency, OnUnbalanced, OriginTrait, PollStatus, Polling, QueryPreimage, + Currency, LockIdentifier, OnUnbalanced, OriginTrait, PollStatus, Polling, QueryPreimage, ReservableCurrency, StorePreimage, VoteTally, }, BoundedVec, @@ -132,7 +133,7 @@ macro_rules! impl_tracksinfo_get { }; } -const ASSEMBLY_ID: fungibles::LockIdentifier = *b"assembly"; +const ASSEMBLY_ID: LockIdentifier = *b"assembly"; #[frame_support::pallet] pub mod pallet { @@ -396,6 +397,7 @@ pub mod pallet { /// - `enactment_moment`: The moment that the proposal should be enacted. /// /// Emits `Submitted`. + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::submit())] pub fn submit( origin: OriginFor, @@ -443,6 +445,7 @@ pub mod pallet { /// posted. /// /// Emits `DecisionDepositPlaced`. + #[pallet::call_index(1)] #[pallet::weight(ServiceBranch::max_weight_of_deposit::())] pub fn place_decision_deposit( origin: OriginFor, @@ -470,6 +473,7 @@ pub mod pallet { /// refunded. /// /// Emits `DecisionDepositRefunded`. + #[pallet::call_index(2)] #[pallet::weight(T::WeightInfo::refund_decision_deposit())] pub fn refund_decision_deposit( origin: OriginFor, @@ -499,6 +503,7 @@ pub mod pallet { /// - `index`: The index of the referendum to be cancelled. /// /// Emits `Cancelled`. 
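Aside: the `ASSEMBLY_ID` hunk above is part of reverting the `fungibles::Lockable` refactor, so the identifier type comes from `frame_support::traits::LockIdentifier` again. A `LockIdentifier` is a plain `[u8; 8]`, which is why the eight-byte literal `*b"assembly"` fits exactly. A hedged sketch of the usual pattern, assuming a pallet whose `Config` declares `type Currency: LockableCurrency<Self::AccountId>` (all names below are illustrative, not from this patch):

    use frame_support::traits::{Currency, LockIdentifier, LockableCurrency, WithdrawReasons};

    // The conventional balance alias for such a pallet.
    type BalanceOf<T> =
        <<T as Config>::Currency as Currency<<T as frame_system::Config>::AccountId>>::Balance;

    // Must be exactly eight bytes; shorter names are padded, as in `*b"staking "`.
    const EXAMPLE_ID: LockIdentifier = *b"example ";

    fn lock_funds<T: Config>(who: &T::AccountId, amount: BalanceOf<T>) {
        // Calling `set_lock` again under the same identifier replaces the
        // previous lock rather than stacking a new one.
        T::Currency::set_lock(EXAMPLE_ID, who, amount, WithdrawReasons::all());
    }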
+ #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::cancel())] pub fn cancel(origin: OriginFor, index: ReferendumIndex) -> DispatchResult { T::CancelOrigin::ensure_origin(origin)?; @@ -523,6 +528,7 @@ pub mod pallet { /// - `index`: The index of the referendum to be cancelled. /// /// Emits `Killed` and `DepositSlashed`. + #[pallet::call_index(4)] #[pallet::weight(T::WeightInfo::kill())] pub fn kill(origin: OriginFor, index: ReferendumIndex) -> DispatchResult { T::KillOrigin::ensure_origin(origin)?; @@ -543,6 +549,7 @@ pub mod pallet { /// /// - `origin`: must be `Root`. /// - `index`: the referendum to be advanced. + #[pallet::call_index(5)] #[pallet::weight(ServiceBranch::max_weight_of_nudge::())] pub fn nudge_referendum( origin: OriginFor, @@ -569,6 +576,7 @@ pub mod pallet { /// `DecidingCount` is not yet updated. This means that we should either: /// - begin deciding another referendum (and leave `DecidingCount` alone); or /// - decrement `DecidingCount`. + #[pallet::call_index(6)] #[pallet::weight(OneFewerDecidingBranch::max_weight::())] pub fn one_fewer_deciding( origin: OriginFor, @@ -602,6 +610,7 @@ pub mod pallet { /// refunded. /// /// Emits `SubmissionDepositRefunded`. + #[pallet::call_index(7)] #[pallet::weight(T::WeightInfo::refund_submission_deposit())] pub fn refund_submission_deposit( origin: OriginFor, diff --git a/frame/remark/src/lib.rs b/frame/remark/src/lib.rs index b61c79f7f273d..80fe393c20f4a 100644 --- a/frame/remark/src/lib.rs +++ b/frame/remark/src/lib.rs @@ -62,6 +62,7 @@ pub mod pallet { #[pallet::call] impl Pallet { /// Index and store data off chain. + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::store(remark.len() as u32))] pub fn store(origin: OriginFor, remark: Vec) -> DispatchResultWithPostInfo { ensure!(!remark.is_empty(), Error::::Empty); diff --git a/frame/root-offences/src/lib.rs b/frame/root-offences/src/lib.rs index 298fe0078a6a6..ed039f46becc8 100644 --- a/frame/root-offences/src/lib.rs +++ b/frame/root-offences/src/lib.rs @@ -81,6 +81,7 @@ pub mod pallet { #[pallet::call] impl Pallet { /// Allows the `root` origin, for example sudo, to create an offence. + #[pallet::call_index(0)] #[pallet::weight(T::DbWeight::get().reads(2))] pub fn create_offence( origin: OriginFor, diff --git a/frame/root-testing/src/lib.rs b/frame/root-testing/src/lib.rs index 25d66cfac202d..da67904967853 100644 --- a/frame/root-testing/src/lib.rs +++ b/frame/root-testing/src/lib.rs @@ -45,6 +45,7 @@ pub mod pallet { #[pallet::call] impl Pallet { /// A dispatch that will fill the block weight up to the given ratio.
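Aside: the `fill_block` weight expression in the next hunk, `*_ratio * T::BlockWeights::get().max_block`, relies on a `Perbill` multiplying a `Weight`, which scales `ref_time` and `proof_size` component-wise. A small illustrative check (values made up):

    use frame_support::weights::Weight;
    use sp_runtime::Perbill;

    let max_block = Weight::from_parts(2_000_000_000_000, u64::MAX);
    // 25% of the block weight, applied to both components at once.
    let quarter = Perbill::from_percent(25) * max_block;
    assert_eq!(quarter.ref_time(), 500_000_000_000);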
+ #[pallet::call_index(0)] #[pallet::weight(*_ratio * T::BlockWeights::get().max_block)] pub fn fill_block(origin: OriginFor, _ratio: Perbill) -> DispatchResult { ensure_root(origin)?; diff --git a/frame/scheduler/Cargo.toml b/frame/scheduler/Cargo.toml index 86ca63c753bea..25ac602681cc0 100644 --- a/frame/scheduler/Cargo.toml +++ b/frame/scheduler/Cargo.toml @@ -19,6 +19,7 @@ frame-system = { version = "4.0.0-dev", default-features = false, path = "../sys sp-io = { version = "7.0.0", default-features = false, path = "../../primitives/io" } sp-runtime = { version = "7.0.0", default-features = false, path = "../../primitives/runtime" } sp-std = { version = "5.0.0", default-features = false, path = "../../primitives/std" } +sp-weights = { version = "4.0.0", default-features = false, path = "../../primitives/weights" } [dev-dependencies] pallet-preimage = { version = "4.0.0-dev", path = "../preimage" } @@ -42,5 +43,6 @@ std = [ "sp-io/std", "sp-runtime/std", "sp-std/std", + "sp-weights/std", ] try-runtime = ["frame-support/try-runtime"] diff --git a/frame/scheduler/src/lib.rs b/frame/scheduler/src/lib.rs index 78533540be98f..d6a66c5e2cb2c 100644 --- a/frame/scheduler/src/lib.rs +++ b/frame/scheduler/src/lib.rs @@ -73,7 +73,6 @@ use frame_support::{ weights::{Weight, WeightMeter}, }; use frame_system::{self as system}; -pub use pallet::*; use scale_info::TypeInfo; use sp_io::hashing::blake2_256; use sp_runtime::{ @@ -81,6 +80,8 @@ use sp_runtime::{ BoundedVec, RuntimeDebug, }; use sp_std::{borrow::Borrow, cmp::Ordering, marker::PhantomData, prelude::*}; + +pub use pallet::*; pub use weights::WeightInfo; /// Just a simple index for naming period tasks. @@ -296,6 +297,7 @@ pub mod pallet { #[pallet::call] impl Pallet { /// Anonymously schedule a task. + #[pallet::call_index(0)] #[pallet::weight(::WeightInfo::schedule(T::MaxScheduledPerBlock::get()))] pub fn schedule( origin: OriginFor, @@ -317,6 +319,7 @@ pub mod pallet { } /// Cancel an anonymously scheduled task. + #[pallet::call_index(1)] #[pallet::weight(::WeightInfo::cancel(T::MaxScheduledPerBlock::get()))] pub fn cancel(origin: OriginFor, when: T::BlockNumber, index: u32) -> DispatchResult { T::ScheduleOrigin::ensure_origin(origin.clone())?; @@ -326,6 +329,7 @@ pub mod pallet { } /// Schedule a named task. + #[pallet::call_index(2)] #[pallet::weight(::WeightInfo::schedule_named(T::MaxScheduledPerBlock::get()))] pub fn schedule_named( origin: OriginFor, @@ -349,6 +353,7 @@ pub mod pallet { } /// Cancel a named scheduled task. + #[pallet::call_index(3)] #[pallet::weight(::WeightInfo::cancel_named(T::MaxScheduledPerBlock::get()))] pub fn cancel_named(origin: OriginFor, id: TaskName) -> DispatchResult { T::ScheduleOrigin::ensure_origin(origin.clone())?; @@ -362,6 +367,7 @@ pub mod pallet { /// # /// Same as [`schedule`]. /// # + #[pallet::call_index(4)] #[pallet::weight(::WeightInfo::schedule(T::MaxScheduledPerBlock::get()))] pub fn schedule_after( origin: OriginFor, @@ -387,6 +393,7 @@ pub mod pallet { /// # /// Same as [`schedule_named`](Self::schedule_named). 
/// # + #[pallet::call_index(5)] #[pallet::weight(::WeightInfo::schedule_named(T::MaxScheduledPerBlock::get()))] pub fn schedule_named_after( origin: OriginFor, diff --git a/frame/scheduler/src/mock.rs b/frame/scheduler/src/mock.rs index 61efdfb67b73e..0aaac56667dcb 100644 --- a/frame/scheduler/src/mock.rs +++ b/frame/scheduler/src/mock.rs @@ -72,6 +72,7 @@ pub mod logger { where ::RuntimeOrigin: OriginTrait, { + #[pallet::call_index(0)] #[pallet::weight(*weight)] pub fn log(origin: OriginFor, i: u32, weight: Weight) -> DispatchResult { Self::deposit_event(Event::Logged(i, weight)); @@ -81,6 +82,7 @@ pub mod logger { Ok(()) } + #[pallet::call_index(1)] #[pallet::weight(*weight)] pub fn log_without_filter(origin: OriginFor, i: u32, weight: Weight) -> DispatchResult { Self::deposit_event(Event::Logged(i, weight)); diff --git a/frame/scored-pool/src/lib.rs b/frame/scored-pool/src/lib.rs index a015c1c568153..5db9c6506d770 100644 --- a/frame/scored-pool/src/lib.rs +++ b/frame/scored-pool/src/lib.rs @@ -311,6 +311,7 @@ pub mod pallet { /// /// The `index` parameter of this function must be set to /// the index of the transactor in the `Pool`. + #[pallet::call_index(0)] #[pallet::weight(0)] pub fn submit_candidacy(origin: OriginFor) -> DispatchResult { let who = ensure_signed(origin)?; @@ -340,6 +341,7 @@ pub mod pallet { /// /// The `index` parameter of this function must be set to /// the index of the transactor in the `Pool`. + #[pallet::call_index(1)] #[pallet::weight(0)] pub fn withdraw_candidacy(origin: OriginFor, index: u32) -> DispatchResult { let who = ensure_signed(origin)?; @@ -358,6 +360,7 @@ pub mod pallet { /// /// The `index` parameter of this function must be set to /// the index of `dest` in the `Pool`. + #[pallet::call_index(2)] #[pallet::weight(0)] pub fn kick( origin: OriginFor, @@ -382,6 +385,7 @@ pub mod pallet { /// /// The `index` parameter of this function must be set to /// the index of the `dest` in the `Pool`. + #[pallet::call_index(3)] #[pallet::weight(0)] pub fn score( origin: OriginFor, @@ -421,6 +425,7 @@ pub mod pallet { /// (this happens each `Period`). /// /// May only be called from root. 
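Aside: the `#[pallet::call_index(n)]` attribute added throughout this patch (including on `change_member_count` in the next hunk) pins a dispatchable's index in the encoded `Call` enum. Without it the index is implicit in declaration order, so inserting or reordering calls would silently change the transaction encoding that wallets and tools rely on. A minimal sketch of the pattern inside the usual `#[frame_support::pallet]` module (names and weights below are illustrative, not from this patch):

    #[pallet::call]
    impl<T: Config> Pallet<T> {
        // Pinned to index 0: stays stable even if calls are reordered later.
        #[pallet::call_index(0)]
        #[pallet::weight(Weight::from_parts(10_000, 0))]
        pub fn do_something(origin: OriginFor<T>) -> DispatchResult {
            let _who = ensure_signed(origin)?;
            Ok(())
        }

        // A call added later takes a fresh index without disturbing index 0.
        #[pallet::call_index(1)]
        #[pallet::weight(Weight::from_parts(10_000, 0))]
        pub fn do_something_privileged(origin: OriginFor<T>) -> DispatchResult {
            ensure_root(origin)?;
            Ok(())
        }
    }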
+ #[pallet::call_index(4)] #[pallet::weight(0)] pub fn change_member_count(origin: OriginFor, count: u32) -> DispatchResult { ensure_root(origin)?; diff --git a/frame/session/src/lib.rs b/frame/session/src/lib.rs index 7b97a20860175..4e2caf5e0874e 100644 --- a/frame/session/src/lib.rs +++ b/frame/session/src/lib.rs @@ -595,6 +595,7 @@ pub mod pallet { /// - DbReads per key id: `KeyOwner` /// - DbWrites per key id: `KeyOwner` /// # + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::set_keys())] pub fn set_keys(origin: OriginFor, keys: T::Keys, proof: Vec) -> DispatchResult { let who = ensure_signed(origin)?; @@ -620,6 +621,7 @@ pub mod pallet { /// - DbWrites: `NextKeys`, `origin account` /// - DbWrites per key id: `KeyOwner` /// # + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::purge_keys())] pub fn purge_keys(origin: OriginFor) -> DispatchResult { let who = ensure_signed(origin)?; diff --git a/frame/society/src/lib.rs b/frame/society/src/lib.rs index 73a09490ea579..0edf00ff80f6e 100644 --- a/frame/society/src/lib.rs +++ b/frame/society/src/lib.rs @@ -711,6 +711,7 @@ pub mod pallet { /// /// Total Complexity: O(M + B + C + logM + logB + X) /// # + #[pallet::call_index(0)] #[pallet::weight(T::BlockWeights::get().max_block / 10)] pub fn bid(origin: OriginFor, value: BalanceOf) -> DispatchResult { let who = ensure_signed(origin)?; @@ -750,6 +751,7 @@ pub mod pallet { /// /// Total Complexity: O(B + X) /// # + #[pallet::call_index(1)] #[pallet::weight(T::BlockWeights::get().max_block / 10)] pub fn unbid(origin: OriginFor, pos: u32) -> DispatchResult { let who = ensure_signed(origin)?; @@ -822,6 +824,7 @@ pub mod pallet { /// /// Total Complexity: O(M + B + C + logM + logB + X) /// # + #[pallet::call_index(2)] #[pallet::weight(T::BlockWeights::get().max_block / 10)] pub fn vouch( origin: OriginFor, @@ -873,6 +876,7 @@ pub mod pallet { /// /// Total Complexity: O(B) /// # + #[pallet::call_index(3)] #[pallet::weight(T::BlockWeights::get().max_block / 10)] pub fn unvouch(origin: OriginFor, pos: u32) -> DispatchResult { let voucher = ensure_signed(origin)?; @@ -914,6 +918,7 @@ pub mod pallet { /// /// Total Complexity: O(M + logM + C) /// # + #[pallet::call_index(4)] #[pallet::weight(T::BlockWeights::get().max_block / 10)] pub fn vote( origin: OriginFor, @@ -950,6 +955,7 @@ pub mod pallet { /// /// Total Complexity: O(M + logM) /// # + #[pallet::call_index(5)] #[pallet::weight(T::BlockWeights::get().max_block / 10)] pub fn defender_vote(origin: OriginFor, approve: bool) -> DispatchResult { let voter = ensure_signed(origin)?; @@ -984,6 +990,7 @@ pub mod pallet { /// /// Total Complexity: O(M + logM + P + X) /// # + #[pallet::call_index(6)] #[pallet::weight(T::BlockWeights::get().max_block / 10)] pub fn payout(origin: OriginFor) -> DispatchResult { let who = ensure_signed(origin)?; @@ -1026,6 +1033,7 @@ pub mod pallet { /// /// Total Complexity: O(1) /// # + #[pallet::call_index(7)] #[pallet::weight(T::BlockWeights::get().max_block / 10)] pub fn found( origin: OriginFor, @@ -1060,6 +1068,7 @@ pub mod pallet { /// /// Total Complexity: O(1) /// # + #[pallet::call_index(8)] #[pallet::weight(T::BlockWeights::get().max_block / 10)] pub fn unfound(origin: OriginFor) -> DispatchResult { let founder = ensure_signed(origin)?; @@ -1105,6 +1114,7 @@ pub mod pallet { /// /// Total Complexity: O(M + logM + B) /// # + #[pallet::call_index(9)] #[pallet::weight(T::BlockWeights::get().max_block / 10)] pub fn judge_suspended_member( origin: OriginFor, @@ -1182,6 +1192,7 @@ pub mod pallet { /// /// 
Total Complexity: O(M + logM + B + X) /// # + #[pallet::call_index(10)] #[pallet::weight(T::BlockWeights::get().max_block / 10)] pub fn judge_suspended_candidate( origin: OriginFor, @@ -1255,6 +1266,7 @@ pub mod pallet { /// /// Total Complexity: O(1) /// # + #[pallet::call_index(11)] #[pallet::weight(T::BlockWeights::get().max_block / 10)] pub fn set_max_members(origin: OriginFor, max: u32) -> DispatchResult { ensure_root(origin)?; diff --git a/frame/staking/Cargo.toml b/frame/staking/Cargo.toml index 3ad63ad94a08a..a7fca045cc4ba 100644 --- a/frame/staking/Cargo.toml +++ b/frame/staking/Cargo.toml @@ -32,10 +32,9 @@ sp-application-crypto = { version = "7.0.0", default-features = false, path = ". frame-election-provider-support = { version = "4.0.0-dev", default-features = false, path = "../election-provider-support" } log = { version = "0.4.17", default-features = false } -# optional dependencies for cargo features +# Optional imports for benchmarking frame-benchmarking = { version = "4.0.0-dev", default-features = false, path = "../benchmarking", optional = true } rand_chacha = { version = "0.2", default-features = false, optional = true } -pallet-bags-list = { default-features = false, optional = true, path = "../bags-list" } [dev-dependencies] sp-tracing = { version = "6.0.0", path = "../../primitives/tracing" } @@ -75,10 +74,5 @@ runtime-benchmarks = [ "frame-election-provider-support/runtime-benchmarks", "rand_chacha", "sp-staking/runtime-benchmarks", - "pallet-bags-list/runtime-benchmarks", ] try-runtime = ["frame-support/try-runtime"] -fuzz = [ - "pallet-bags-list/fuzz", - "frame-election-provider-support/fuzz", -] diff --git a/frame/staking/src/benchmarking.rs b/frame/staking/src/benchmarking.rs index dcb861e2ce419..8409b5413f992 100644 --- a/frame/staking/src/benchmarking.rs +++ b/frame/staking/src/benchmarking.rs @@ -792,12 +792,10 @@ benchmarks! { } get_npos_voters { - // number of validator intention. + // number of validator intention. we will iterate all of them. let v in (MaxValidators::::get() / 2) .. MaxValidators::::get(); - // number of nominator intention. + // number of nominator intention. we will iterate all of them. let n in (MaxNominators::::get() / 2) .. MaxNominators::::get(); - // total number of slashing spans. Assigned to validators randomly. - let s in 1 .. 20; let validators = create_validators_with_nominators_for_era::( v, n, T::MaxNominations::get() as usize, false, None @@ -806,9 +804,8 @@ benchmarks! { .map(|v| T::Lookup::lookup(v).unwrap()) .collect::>(); - (0..s).for_each(|index| { - add_slashing_spans::(&validators[index as usize], 10); - }); + assert_eq!(Validators::::count(), v); + assert_eq!(Nominators::::count(), n); let num_voters = (v + n) as usize; }: { diff --git a/frame/staking/src/mock.rs b/frame/staking/src/mock.rs index 16e4e5ddd7aa2..d3affda05277a 100644 --- a/frame/staking/src/mock.rs +++ b/frame/staking/src/mock.rs @@ -115,7 +115,7 @@ impl FindAuthor for Author11 { parameter_types! 
{ pub BlockWeights: frame_system::limits::BlockWeights = frame_system::limits::BlockWeights::simple_max( - frame_support::weights::constants::WEIGHT_PER_SECOND * 2 + Weight::from_parts(frame_support::weights::constants::WEIGHT_REF_TIME_PER_SECOND * 2, u64::MAX), ); pub static SessionsPerEra: SessionIndex = 3; pub static ExistentialDeposit: Balance = 1; diff --git a/frame/staking/src/pallet/impls.rs b/frame/staking/src/pallet/impls.rs index f0d332f20880b..5aa40bf33ea52 100644 --- a/frame/staking/src/pallet/impls.rs +++ b/frame/staking/src/pallet/impls.rs @@ -25,9 +25,8 @@ use frame_support::{ dispatch::WithPostDispatchInfo, pallet_prelude::*, traits::{ - fungibles::Lockable, Currency, CurrencyToVote, Defensive, DefensiveResult, - EstimateNextNewSession, Get, Imbalance, OnUnbalanced, TryCollect, UnixTime, - WithdrawReasons, + Currency, CurrencyToVote, Defensive, DefensiveResult, EstimateNextNewSession, Get, + Imbalance, LockableCurrency, OnUnbalanced, TryCollect, UnixTime, WithdrawReasons, }, weights::Weight, }; @@ -41,7 +40,7 @@ use sp_staking::{ offence::{DisableStrategy, OffenceDetails, OnOffenceHandler}, EraIndex, SessionIndex, Stake, StakingInterface, }; -use sp_std::{collections::btree_map::BTreeMap, prelude::*}; +use sp_std::prelude::*; use crate::{ log, slashing, weights::WeightInfo, ActiveEraInfo, BalanceOf, EraPayout, Exposure, ExposureOf, @@ -352,6 +351,7 @@ impl Pallet { } } + /// Start a new era. It does: /// /// * Increment `active_era.index`, /// * reset `active_era.start`, @@ -705,11 +705,6 @@ impl Pallet { /// `maybe_max_len` can impose a cap on the number of voters returned; /// /// This function is self-weighing as [`DispatchClass::Mandatory`]. - /// - /// ### Slashing - /// - /// All votes that have been submitted before the last non-zero slash of the corresponding - /// target are *auto-chilled*, but still count towards the limit imposed by `maybe_max_len`. pub fn get_npos_voters(maybe_max_len: Option) -> Vec> { let max_allowed_len = { let all_voter_count = T::VoterList::count() as usize; @@ -720,7 +715,6 @@ impl Pallet { // cache a few things. let weight_of = Self::weight_of_fn(); - let slashing_spans = >::iter().collect::>(); let mut voters_seen = 0u32; let mut validators_taken = 0u32; @@ -738,18 +732,12 @@ impl Pallet { None => break, }; - if let Some(Nominations { submitted_in, mut targets, suppressed: _ }) = - >::get(&voter) - { - // if this voter is a nominator: - targets.retain(|stash| { - slashing_spans - .get(stash) - .map_or(true, |spans| submitted_in >= spans.last_nonzero_slash()) - }); - if !targets.len().is_zero() { + if let Some(Nominations { targets, .. }) = >::get(&voter) { + if !targets.is_empty() { all_voters.push((voter.clone(), weight_of(&voter), targets)); nominators_taken.saturating_inc(); + } else { + // Technically should never happen, but not much we can do about it. } } else if Validators::::contains_key(&voter) { // if this voter is a validator: @@ -772,18 +760,14 @@ impl Pallet { warn, "DEFENSIVE: invalid item in `VoterList`: {:?}, this nominator probably has too many nominations now", voter - ) + ); } } // all_voters should not have re-allocated.
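    // Note on the hunk above: `get_npos_voters` used to pre-collect all
    // `SlashingSpans` and drop any nomination submitted before the target's
    // last non-zero slash. Since slashes no longer invalidate nominations,
    // targets are now taken verbatim, the new `else` branch merely documents
    // the defensive case of an empty target list, and the registered
    // benchmark weight loses its slashing-span dimension (see the
    // `register_weight` change just below).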
debug_assert!(all_voters.capacity() == max_allowed_len); - Self::register_weight(T::WeightInfo::get_npos_voters( - validators_taken, - nominators_taken, - slashing_spans.len() as u32, - )); + Self::register_weight(T::WeightInfo::get_npos_voters(validators_taken, nominators_taken)); log!( info, @@ -1286,6 +1270,12 @@ where disable_strategy, }); + Self::deposit_event(Event::::SlashReported { + validator: stash.clone(), + fraction: *slash_fraction, + slash_era, + }); + if let Some(mut unapplied) = unapplied { let nominators_len = unapplied.others.len() as u64; let reporters_len = details.reporters.len() as u64; @@ -1339,7 +1329,7 @@ impl ScoreProvider for Pallet { Self::weight_of(who) } - #[cfg(any(feature = "runtime-benchmarks", feature = "fuzz"))] + #[cfg(feature = "runtime-benchmarks")] fn set_score_of(who: &T::AccountId, weight: Self::Score) { // this will clearly result in an inconsistent state, but it should not matter for a // benchmark. @@ -1604,28 +1594,27 @@ impl StakingInterface for Pallet { Self::nominate(RawOrigin::Signed(ctrl).into(), targets) } - #[cfg(feature = "runtime-benchmarks")] - fn nominations(who: Self::AccountId) -> Option> { - Nominators::::get(who).map(|n| n.targets.into_inner()) - } + sp_staking::runtime_benchmarks_enabled! { + fn nominations(who: Self::AccountId) -> Option> { + Nominators::::get(who).map(|n| n.targets.into_inner()) + } - #[cfg(feature = "runtime-benchmarks")] - fn add_era_stakers( - current_era: &EraIndex, - stash: &T::AccountId, - exposures: Vec<(Self::AccountId, Self::Balance)>, - ) { - let others = exposures - .iter() - .map(|(who, value)| IndividualExposure { who: who.clone(), value: value.clone() }) - .collect::>(); - let exposure = Exposure { total: Default::default(), own: Default::default(), others }; - Self::add_era_stakers(current_era.clone(), stash.clone(), exposure) - } + fn add_era_stakers( + current_era: &EraIndex, + stash: &T::AccountId, + exposures: Vec<(Self::AccountId, Self::Balance)>, + ) { + let others = exposures + .iter() + .map(|(who, value)| IndividualExposure { who: who.clone(), value: value.clone() }) + .collect::>(); + let exposure = Exposure { total: Default::default(), own: Default::default(), others }; + >::insert(&current_era, &stash, &exposure); + } - #[cfg(feature = "runtime-benchmarks")] - fn set_current_era(era: EraIndex) { - CurrentEra::::put(era); + fn set_current_era(era: EraIndex) { + CurrentEra::::put(era); + } } } diff --git a/frame/staking/src/pallet/mod.rs b/frame/staking/src/pallet/mod.rs index fd0c494fa6723..fda455ca3c166 100644 --- a/frame/staking/src/pallet/mod.rs +++ b/frame/staking/src/pallet/mod.rs @@ -24,8 +24,8 @@ use frame_support::{ dispatch::Codec, pallet_prelude::*, traits::{ - fungibles, fungibles::Lockable, Currency, CurrencyToVote, Defensive, DefensiveResult, - DefensiveSaturating, EnsureOrigin, EstimateNextNewSession, Get, OnUnbalanced, TryCollect, + Currency, CurrencyToVote, Defensive, DefensiveResult, DefensiveSaturating, EnsureOrigin, + EstimateNextNewSession, Get, LockIdentifier, LockableCurrency, OnUnbalanced, TryCollect, UnixTime, }, weights::Weight, @@ -50,7 +50,7 @@ use crate::{ ValidatorPrefs, }; -const STAKING_ID: fungibles::LockIdentifier = *b"staking "; +const STAKING_ID: LockIdentifier = *b"staking "; #[frame_support::pallet] pub mod pallet { @@ -78,7 +78,7 @@ pub mod pallet { #[pallet::config] pub trait Config: frame_system::Config { /// The staking balance.
- type Currency: fungibles::Lockable< + type Currency: LockableCurrency< Self::AccountId, Moment = Self::BlockNumber, Balance = Self::CurrencyBalance, @@ -517,7 +517,7 @@ pub mod pallet { #[pallet::storage] #[pallet::getter(fn slashing_spans)] #[pallet::unbounded] - pub(crate) type SlashingSpans = + pub type SlashingSpans = StorageMap<_, Twox64Concat, T::AccountId, slashing::SlashingSpans>; /// Records information about the maximum slash of a stash within a slashing span, @@ -671,8 +671,11 @@ pub mod pallet { EraPaid { era_index: EraIndex, validator_payout: BalanceOf, remainder: BalanceOf }, /// The nominator has been rewarded by this amount. Rewarded { stash: T::AccountId, amount: BalanceOf }, - /// One staker (and potentially its nominators) has been slashed by the given amount. + /// A staker (validator or nominator) has been slashed by the given amount. Slashed { staker: T::AccountId, amount: BalanceOf }, + /// A slash for the given validator, for the given percentage of their stake, at the given + /// era has been reported. + SlashReported { validator: T::AccountId, fraction: Perbill, slash_era: EraIndex }, /// An old slashing report from a prior era was discarded because it could /// not be processed. OldSlashingReportDiscarded { session_index: SessionIndex }, @@ -831,6 +834,7 @@ pub mod pallet { /// unless the `origin` falls below _existential deposit_ and gets removed as dust. /// ------------------ /// # + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::bond())] pub fn bond( origin: OriginFor, @@ -900,6 +904,7 @@ pub mod pallet { /// - Independent of the arguments. Insignificant complexity. /// - O(1). /// # + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::bond_extra())] pub fn bond_extra( origin: OriginFor, @@ -953,6 +958,7 @@ pub mod pallet { /// Emits `Unbonded`. /// /// See also [`Call::withdraw_unbonded`]. + #[pallet::call_index(2)] #[pallet::weight(T::WeightInfo::unbond())] pub fn unbond( origin: OriginFor, @@ -1032,6 +1038,7 @@ pub mod pallet { /// Complexity O(S) where S is the number of slashing spans to remove /// NOTE: Weight annotation is the kill scenario, we refund otherwise. /// # + #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::withdraw_unbonded_kill(*num_slashing_spans))] pub fn withdraw_unbonded( origin: OriginFor, @@ -1079,6 +1086,7 @@ pub mod pallet { /// Effects will be felt at the beginning of the next era. /// /// The dispatch origin for this call must be _Signed_ by the controller, not the stash. + #[pallet::call_index(4)] #[pallet::weight(T::WeightInfo::validate())] pub fn validate(origin: OriginFor, prefs: ValidatorPrefs) -> DispatchResult { let controller = ensure_signed(origin)?; @@ -1122,6 +1130,7 @@ pub mod pallet { /// which is capped at CompactAssignments::LIMIT (T::MaxNominations). /// - Both the reads and writes follow a similar pattern. /// # + #[pallet::call_index(5)] #[pallet::weight(T::WeightInfo::nominate(targets.len() as u32))] pub fn nominate( origin: OriginFor, @@ -1190,6 +1199,7 @@ pub mod pallet { /// - Contains one read. /// - Writes are limited to the `origin` account key.
/// # + #[pallet::call_index(6)] #[pallet::weight(T::WeightInfo::chill())] pub fn chill(origin: OriginFor) -> DispatchResult { let controller = ensure_signed(origin)?; @@ -1214,6 +1224,7 @@ pub mod pallet { /// - Read: Ledger /// - Write: Payee /// # + #[pallet::call_index(7)] #[pallet::weight(T::WeightInfo::set_payee())] pub fn set_payee( origin: OriginFor, @@ -1242,6 +1253,7 @@ pub mod pallet { /// - Read: Bonded, Ledger New Controller, Ledger Old Controller /// - Write: Bonded, Ledger New Controller, Ledger Old Controller /// # + #[pallet::call_index(8)] #[pallet::weight(T::WeightInfo::set_controller())] pub fn set_controller( origin: OriginFor, @@ -1270,6 +1282,7 @@ pub mod pallet { /// Weight: O(1) /// Write: Validator Count /// # + #[pallet::call_index(9)] #[pallet::weight(T::WeightInfo::set_validator_count())] pub fn set_validator_count( origin: OriginFor, @@ -1294,6 +1307,7 @@ pub mod pallet { /// # /// Same as [`Self::set_validator_count`]. /// # + #[pallet::call_index(10)] #[pallet::weight(T::WeightInfo::set_validator_count())] pub fn increase_validator_count( origin: OriginFor, @@ -1319,6 +1333,7 @@ pub mod pallet { /// # /// Same as [`Self::set_validator_count`]. /// # + #[pallet::call_index(11)] #[pallet::weight(T::WeightInfo::set_validator_count())] pub fn scale_validator_count(origin: OriginFor, factor: Percent) -> DispatchResult { ensure_root(origin)?; @@ -1349,6 +1364,7 @@ pub mod pallet { /// - Weight: O(1) /// - Write: ForceEra /// # + #[pallet::call_index(12)] #[pallet::weight(T::WeightInfo::force_no_eras())] pub fn force_no_eras(origin: OriginFor) -> DispatchResult { ensure_root(origin)?; @@ -1372,6 +1388,7 @@ pub mod pallet { /// - Weight: O(1) /// - Write ForceEra /// # + #[pallet::call_index(13)] #[pallet::weight(T::WeightInfo::force_new_era())] pub fn force_new_era(origin: OriginFor) -> DispatchResult { ensure_root(origin)?; @@ -1382,6 +1399,7 @@ pub mod pallet { /// Set the validators who cannot be slashed (if any). /// /// The dispatch origin must be Root. + #[pallet::call_index(14)] #[pallet::weight(T::WeightInfo::set_invulnerables(invulnerables.len() as u32))] pub fn set_invulnerables( origin: OriginFor, @@ -1395,6 +1413,7 @@ pub mod pallet { /// Force a current staker to become completely unstaked, immediately. /// /// The dispatch origin must be Root. + #[pallet::call_index(15)] #[pallet::weight(T::WeightInfo::force_unstake(*num_slashing_spans))] pub fn force_unstake( origin: OriginFor, @@ -1420,6 +1439,7 @@ pub mod pallet { /// The election process starts multiple blocks before the end of the era. /// If this is called just before a new era is triggered, the election process may not /// have enough blocks to get a result. + #[pallet::call_index(16)] #[pallet::weight(T::WeightInfo::force_new_era_always())] pub fn force_new_era_always(origin: OriginFor) -> DispatchResult { ensure_root(origin)?; @@ -1432,6 +1452,7 @@ pub mod pallet { /// Can be called by the `T::SlashCancelOrigin`. /// /// Parameters: era and indices of the slashes for that era to kill. + #[pallet::call_index(17)] #[pallet::weight(T::WeightInfo::cancel_deferred_slash(slash_indices.len() as u32))] pub fn cancel_deferred_slash( origin: OriginFor, @@ -1477,6 +1498,7 @@ pub mod pallet { /// NOTE: weights are assuming that payouts are made to alive stash account (Staked). /// Paying even a dead controller is cheaper weight-wise. We don't do any refunds here. 
/// # + #[pallet::call_index(18)] #[pallet::weight(T::WeightInfo::payout_stakers_alive_staked( T::MaxNominatorRewardedPerValidator::get() ))] @@ -1498,6 +1520,7 @@ pub mod pallet { /// - Bounded by `MaxUnlockingChunks`. /// - Storage changes: Can't increase storage, only decrease it. /// # + #[pallet::call_index(19)] #[pallet::weight(T::WeightInfo::rebond(T::MaxUnlockingChunks::get() as u32))] pub fn rebond( origin: OriginFor, @@ -1542,6 +1565,7 @@ pub mod pallet { /// It can be called by anyone, as long as `stash` meets the above requirements. /// /// Refunds the transaction fees upon successful execution. + #[pallet::call_index(20)] #[pallet::weight(T::WeightInfo::reap_stash(*num_slashing_spans))] pub fn reap_stash( origin: OriginFor, @@ -1574,6 +1598,7 @@ pub mod pallet { /// /// Note: Making this call only makes sense if you first set the validator preferences to /// block any further nominations. + #[pallet::call_index(21)] #[pallet::weight(T::WeightInfo::kick(who.len() as u32))] pub fn kick(origin: OriginFor, who: Vec>) -> DispatchResult { let controller = ensure_signed(origin)?; @@ -1621,6 +1646,7 @@ pub mod pallet { /// to kick people under the new limits, `chill_other` should be called. // We assume the worst case for this call is either: all items are set or all items are // removed. + #[pallet::call_index(22)] #[pallet::weight( T::WeightInfo::set_staking_configs_all_set() .max(T::WeightInfo::set_staking_configs_all_remove()) @@ -1681,6 +1707,7 @@ pub mod pallet { /// /// This can be helpful if bond requirements are updated, and we need to remove old users /// who do not satisfy these requirements. + #[pallet::call_index(23)] #[pallet::weight(T::WeightInfo::chill_other())] pub fn chill_other(origin: OriginFor, controller: T::AccountId) -> DispatchResult { // Anyone can call this function. @@ -1743,6 +1770,7 @@ pub mod pallet { /// Force a validator to have at least the minimum commission. This will not affect a /// validator who already has a commission greater than or equal to the minimum. Any account /// can call this. + #[pallet::call_index(24)] #[pallet::weight(T::WeightInfo::force_apply_min_commission())] pub fn force_apply_min_commission( origin: OriginFor, diff --git a/frame/staking/src/slashing.rs b/frame/staking/src/slashing.rs index a1900136d64fd..aeea0a1a58c63 100644 --- a/frame/staking/src/slashing.rs +++ b/frame/staking/src/slashing.rs @@ -239,9 +239,9 @@ pub(crate) fn compute_slash( return None } - let (prior_slash_p, _era_slash) = + let prior_slash_p = as Store>::ValidatorSlashInEra::get(¶ms.slash_era, params.stash) - .unwrap_or((Perbill::zero(), Zero::zero())); + .map_or(Zero::zero(), |(prior_slash_proportion, _)| prior_slash_proportion); // compare slash proportions rather than slash values to avoid issues due to rounding // error. @@ -390,9 +390,7 @@ fn slash_nominators( let mut era_slash = as Store>::NominatorSlashInEra::get(¶ms.slash_era, stash) .unwrap_or_else(Zero::zero); - era_slash += own_slash_difference; - as Store>::NominatorSlashInEra::insert(¶ms.slash_era, stash, &era_slash); era_slash @@ -411,12 +409,10 @@ fn slash_nominators( let target_span = spans.compare_and_update_span_slash(params.slash_era, era_slash); if target_span == Some(spans.span_index()) { - // End the span, but don't chill the nominator. its nomination - // on this validator will be ignored in the future. + // end the span, but don't chill the nominator. 
spans.end_span(params.now); } } - nominators_slashed.push((stash.clone(), nom_slashed)); } diff --git a/frame/staking/src/tests.rs b/frame/staking/src/tests.rs index 6609b9087637d..3e0a62f53d886 100644 --- a/frame/staking/src/tests.rs +++ b/frame/staking/src/tests.rs @@ -2845,6 +2845,8 @@ fn deferred_slashes_are_deferred() { assert_eq!(Balances::free_balance(101), 2000); let nominated_value = exposure.others.iter().find(|o| o.who == 101).unwrap().value; + System::reset_events(); + on_offence_now( &[OffenceDetails { offender: (11, Staking::eras_stakers(active_era(), 11)), @@ -2853,6 +2855,9 @@ fn deferred_slashes_are_deferred() { &[Perbill::from_percent(10)], ); + // nominations are not removed regardless of the deferring. + assert_eq!(Staking::nominators(101).unwrap().targets, vec![11, 21]); + assert_eq!(Balances::free_balance(11), 1000); assert_eq!(Balances::free_balance(101), 2000); @@ -2866,8 +2871,6 @@ fn deferred_slashes_are_deferred() { assert_eq!(Balances::free_balance(11), 1000); assert_eq!(Balances::free_balance(101), 2000); - System::reset_events(); - // at the start of era 4, slashes from era 1 are processed, // after being deferred for at least 2 full eras. mock::start_active_era(4); @@ -2875,15 +2878,16 @@ fn deferred_slashes_are_deferred() { assert_eq!(Balances::free_balance(11), 900); assert_eq!(Balances::free_balance(101), 2000 - (nominated_value / 10)); - assert_eq!( - staking_events_since_last_call(), - vec![ - Event::StakersElected, - Event::EraPaid { era_index: 3, validator_payout: 11075, remainder: 33225 }, + assert!(matches!( + staking_events_since_last_call().as_slice(), + &[ + Event::Chilled { stash: 11 }, + Event::SlashReported { validator: 11, slash_era: 1, .. }, + .., Event::Slashed { staker: 11, amount: 100 }, Event::Slashed { staker: 101, amount: 12 } ] - ); + )); }) } @@ -2896,25 +2900,29 @@ fn retroactive_deferred_slashes_two_eras_before() { let exposure_11_at_era1 = Staking::eras_stakers(active_era(), 11); mock::start_active_era(3); + + assert_eq!(Staking::nominators(101).unwrap().targets, vec![11, 21]); + + System::reset_events(); on_offence_in_era( &[OffenceDetails { offender: (11, exposure_11_at_era1), reporters: vec![] }], &[Perbill::from_percent(10)], 1, // should be deferred for two full eras, and applied at the beginning of era 4. DisableStrategy::Never, ); - System::reset_events(); mock::start_active_era(4); - assert_eq!( - staking_events_since_last_call(), - vec![ - Event::StakersElected, - Event::EraPaid { era_index: 3, validator_payout: 7100, remainder: 21300 }, + assert!(matches!( + staking_events_since_last_call().as_slice(), + &[ + Event::Chilled { stash: 11 }, + Event::SlashReported { validator: 11, slash_era: 1, .. }, + .., Event::Slashed { staker: 11, amount: 100 }, - Event::Slashed { staker: 101, amount: 12 }, + Event::Slashed { staker: 101, amount: 12 } ] - ); + )); }) } @@ -2932,35 +2940,29 @@ fn retroactive_deferred_slashes_one_before() { assert_ok!(Staking::unbond(RuntimeOrigin::signed(10), 100)); mock::start_active_era(3); + System::reset_events(); on_offence_in_era( &[OffenceDetails { offender: (11, exposure_11_at_era1), reporters: vec![] }], &[Perbill::from_percent(10)], 2, // should be deferred for two full eras, and applied at the beginning of era 5. 
DisableStrategy::Never, ); - System::reset_events(); mock::start_active_era(4); - assert_eq!( - staking_events_since_last_call(), - vec![ - Event::StakersElected, - Event::EraPaid { era_index: 3, validator_payout: 11075, remainder: 33225 } - ] - ); assert_eq!(Staking::ledger(10).unwrap().total, 1000); // slash happens after the next line. + mock::start_active_era(5); - assert_eq!( - staking_events_since_last_call(), - vec![ - Event::StakersElected, - Event::EraPaid { era_index: 4, validator_payout: 11075, remainder: 33225 }, + assert!(matches!( + staking_events_since_last_call().as_slice(), + &[ + Event::SlashReported { validator: 11, slash_era: 2, .. }, + .., Event::Slashed { staker: 11, amount: 100 }, Event::Slashed { staker: 101, amount: 12 } ] - ); + )); // their ledger has already been slashed. assert_eq!(Staking::ledger(10).unwrap().total, 900); @@ -3068,6 +3070,7 @@ fn remove_deferred() { mock::start_active_era(2); // reported later, but deferred to start of era 4 as well. + System::reset_events(); on_offence_in_era( &[OffenceDetails { offender: (11, exposure.clone()), reporters: vec![] }], &[Perbill::from_percent(15)], @@ -3094,19 +3097,18 @@ fn remove_deferred() { // at the start of era 4, slashes from era 1 are processed, // after being deferred for at least 2 full eras. - System::reset_events(); mock::start_active_era(4); - // the first slash for 10% was cancelled, but the 15% one - assert_eq!( - staking_events_since_last_call(), - vec![ - Event::StakersElected, - Event::EraPaid { era_index: 3, validator_payout: 11075, remainder: 33225 }, + // the first slash for 10% was cancelled, but the 15% one not. + assert!(matches!( + staking_events_since_last_call().as_slice(), + &[ + Event::SlashReported { validator: 11, slash_era: 1, .. }, + .., Event::Slashed { staker: 11, amount: 50 }, Event::Slashed { staker: 101, amount: 7 } ] - ); + )); let slash_10 = Perbill::from_percent(10); let slash_15 = Perbill::from_percent(15); @@ -3196,6 +3198,9 @@ fn slash_kicks_validators_not_nominators_and_disables_nominator_for_kicked_valid assert_eq!(Balances::free_balance(11), 1000); assert_eq!(Balances::free_balance(101), 2000); + // 100 has approval for 11 as of now + assert!(Staking::nominators(101).unwrap().targets.contains(&11)); + // 11 and 21 both have the support of 100 let exposure_11 = Staking::eras_stakers(active_era(), &11); let exposure_21 = Staking::eras_stakers(active_era(), &21); @@ -3208,23 +3213,29 @@ fn slash_kicks_validators_not_nominators_and_disables_nominator_for_kicked_valid &[Perbill::from_percent(10)], ); + assert_eq!( + staking_events_since_last_call(), + vec![ + Event::StakersElected, + Event::EraPaid { era_index: 0, validator_payout: 11075, remainder: 33225 }, + Event::Chilled { stash: 11 }, + Event::SlashReported { + validator: 11, + fraction: Perbill::from_percent(10), + slash_era: 1 + }, + Event::Slashed { staker: 11, amount: 100 }, + Event::Slashed { staker: 101, amount: 12 }, + ] + ); + // post-slash balance let nominator_slash_amount_11 = 125 / 10; assert_eq!(Balances::free_balance(11), 900); assert_eq!(Balances::free_balance(101), 2000 - nominator_slash_amount_11); - // This is the best way to check that the validator was chilled; `get` will - // return default value. - for (stash, _) in ::Validators::iter() { - assert!(stash != 11); - } - - let nominations = ::Nominators::get(&101).unwrap(); - - // and make sure that the vote will be ignored even if the validator - // re-registers. 
- let last_slash = ::SlashingSpans::get(&11).unwrap().last_nonzero_slash(); - assert!(nominations.submitted_in < last_slash); + // check that validator was chilled. + assert!(::Validators::iter().all(|(stash, _)| stash != 11)); // actually re-bond the slashed validator assert_ok!(Staking::validate(RuntimeOrigin::signed(10), Default::default())); @@ -3233,11 +3244,12 @@ fn slash_kicks_validators_not_nominators_and_disables_nominator_for_kicked_valid let exposure_11 = Staking::eras_stakers(active_era(), &11); let exposure_21 = Staking::eras_stakers(active_era(), &21); - // 10 is re-elected, but without the support of 100 - assert_eq!(exposure_11.total, 900); - - // 20 is re-elected, with the (almost) entire support of 100 - assert_eq!(exposure_21.total, 1000 + 500 - nominator_slash_amount_11); + // 11's own expo is reduced. sum of support from 11 is less (448), which is 500 + // 900 + 146 + assert!(matches!(exposure_11, Exposure { own: 900, total: 1046, .. })); + // 1000 + 342 + assert!(matches!(exposure_21, Exposure { own: 1000, total: 1342, .. })); + assert_eq!(500 - 146 - 342, nominator_slash_amount_11); }); } @@ -3256,12 +3268,40 @@ fn non_slashable_offence_doesnt_disable_validator() { &[Perbill::zero()], ); + // it does NOT affect the nominator. + assert_eq!(Staking::nominators(101).unwrap().targets, vec![11, 21]); + // offence that slashes 25% of the bond on_offence_now( &[OffenceDetails { offender: (21, exposure_21.clone()), reporters: vec![] }], &[Perbill::from_percent(25)], ); + // it DOES NOT affect the nominator. + assert_eq!(Staking::nominators(101).unwrap().targets, vec![11, 21]); + + assert_eq!( + staking_events_since_last_call(), + vec![ + Event::StakersElected, + Event::EraPaid { era_index: 0, validator_payout: 11075, remainder: 33225 }, + Event::Chilled { stash: 11 }, + Event::SlashReported { + validator: 11, + fraction: Perbill::from_percent(0), + slash_era: 1 + }, + Event::Chilled { stash: 21 }, + Event::SlashReported { + validator: 21, + fraction: Perbill::from_percent(25), + slash_era: 1 + }, + Event::Slashed { staker: 21, amount: 250 }, + Event::Slashed { staker: 101, amount: 94 } + ] + ); + // the offence for validator 10 wasn't slashable so it wasn't disabled assert!(!is_disabled(10)); // whereas validator 20 gets disabled @@ -3288,6 +3328,9 @@ fn slashing_independent_of_disabling_validator() { DisableStrategy::Always, ); + // nomination remains untouched. + assert_eq!(Staking::nominators(101).unwrap().targets, vec![11, 21]); + // offence that slashes 25% of the bond, BUT not disabling on_offence_in_era( &[OffenceDetails { offender: (21, exposure_21.clone()), reporters: vec![] }], @@ -3296,6 +3339,31 @@ fn slashing_independent_of_disabling_validator() { DisableStrategy::Never, ); + // nomination remains untouched. 
+ assert_eq!(Staking::nominators(101).unwrap().targets, vec![11, 21]); + + assert_eq!( + staking_events_since_last_call(), + vec![ + Event::StakersElected, + Event::EraPaid { era_index: 0, validator_payout: 11075, remainder: 33225 }, + Event::Chilled { stash: 11 }, + Event::SlashReported { + validator: 11, + fraction: Perbill::from_percent(0), + slash_era: 1 + }, + Event::Chilled { stash: 21 }, + Event::SlashReported { + validator: 21, + fraction: Perbill::from_percent(25), + slash_era: 1 + }, + Event::Slashed { staker: 21, amount: 250 }, + Event::Slashed { staker: 101, amount: 94 } + ] + ); + // the offence for validator 10 was explicitly disabled assert!(is_disabled(10)); // whereas validator 20 is explicitly not disabled @@ -3370,6 +3438,9 @@ fn disabled_validators_are_kept_disabled_for_whole_era() { &[Perbill::from_percent(25)], ); + // nominations are not updated. + assert_eq!(Staking::nominators(101).unwrap().targets, vec![11, 21]); + // validator 10 should not be disabled since the offence wasn't slashable assert!(!is_disabled(10)); // validator 20 gets disabled since it got slashed @@ -3387,6 +3458,9 @@ fn disabled_validators_are_kept_disabled_for_whole_era() { &[Perbill::from_percent(25)], ); + // nominations are not updated. + assert_eq!(Staking::nominators(101).unwrap().targets, vec![11, 21]); + advance_session(); // and both are disabled in the last session of the era @@ -3503,18 +3577,10 @@ fn zero_slash_keeps_nominators() { assert_eq!(Balances::free_balance(11), 1000); assert_eq!(Balances::free_balance(101), 2000); - // This is the best way to check that the validator was chilled; `get` will - // return default value. - for (stash, _) in <Staking as Store>::Validators::iter() { - assert!(stash != 11); - } - - let nominations = <Staking as Store>::Nominators::get(&101).unwrap(); - - // and make sure that the vote will not be ignored, because the slash was - // zero. - let last_slash = <Staking as Store>::SlashingSpans::get(&11).unwrap().last_nonzero_slash(); - assert!(nominations.submitted_in >= last_slash); + // 11 is still removed... + assert!(<Staking as Store>::Validators::iter().all(|(stash, _)| stash != 11)); + // but their nominations are kept. + assert_eq!(Staking::nominators(101).unwrap().targets, vec![11, 21]); }); } @@ -4365,9 +4431,10 @@ mod election_data_provider { #[test] fn targets_2sec_block() { let mut validators = 1000; - while <Test as Config>::WeightInfo::get_npos_targets(validators) - .all_lt(2u64 * frame_support::weights::constants::WEIGHT_PER_SECOND) - { + while <Test as Config>::WeightInfo::get_npos_targets(validators).all_lt(Weight::from_parts( + 2u64 * frame_support::weights::constants::WEIGHT_REF_TIME_PER_SECOND, + u64::MAX, + )) { validators += 1; } @@ -4379,13 +4446,14 @@ mod election_data_provider { // we assume a network only wants up to 1000 validators in most cases, thus having 2000 // candidates is as high as it gets. let validators = 2000; - // we assume the worst case: each validator also has a slashing span.
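// (A note on the ~2-second bound used by the loop above and the one below: `Weight` is
// two-dimensional (`ref_time`, `proof_size`) and `all_lt` holds only when *every*
// component is strictly below the bound, so passing `u64::MAX` as the proof-size
// component effectively bounds reference time alone. A minimal sketch of the pattern,
// with `weight_of` standing in for the `WeightInfo` calls used in these tests:
//
//     let bound = Weight::from_parts(2u64 * WEIGHT_REF_TIME_PER_SECOND, u64::MAX);
//     while weight_of(validators).all_lt(bound) {
//         validators += 1; // grow until the snapshot would no longer fit in ~2s
//     }
// )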
- let slashing_spans = validators; let mut nominators = 1000; - while <Test as Config>::WeightInfo::get_npos_voters(validators, nominators, slashing_spans) - .all_lt(2u64 * frame_support::weights::constants::WEIGHT_PER_SECOND) - { + while <Test as Config>::WeightInfo::get_npos_voters(validators, nominators).all_lt( + Weight::from_parts( + 2u64 * frame_support::weights::constants::WEIGHT_REF_TIME_PER_SECOND, + u64::MAX, + ), + ) { nominators += 1; } @@ -4407,49 +4475,6 @@ mod election_data_provider { }) } - #[test] - fn voters_exclude_slashed() { - ExtBuilder::default().build_and_execute(|| { - assert_eq!(Staking::nominators(101).unwrap().targets, vec![11, 21]); - assert_eq!( - <Staking as ElectionDataProvider>::electing_voters(None) - .unwrap() - .iter() - .find(|x| x.0 == 101) - .unwrap() - .2, - vec![11, 21] - ); - - start_active_era(1); - add_slash(&11); - - // 11 is gone. - start_active_era(2); - assert_eq!( - <Staking as ElectionDataProvider>::electing_voters(None) - .unwrap() - .iter() - .find(|x| x.0 == 101) - .unwrap() - .2, - vec![21] - ); - - // resubmit and it is back - assert_ok!(Staking::nominate(RuntimeOrigin::signed(100), vec![11, 21])); - assert_eq!( - <Staking as ElectionDataProvider>::electing_voters(None) - .unwrap() - .iter() - .find(|x| x.0 == 101) - .unwrap() - .2, - vec![11, 21] - ); - }) - } - #[test] fn respects_snapshot_len_limits() { ExtBuilder::default() @@ -4486,10 +4511,26 @@ mod election_data_provider { fn only_iterates_max_2_times_max_allowed_len() { ExtBuilder::default() .nominate(false) - // the other nominators only nominate 21 - .add_staker(61, 60, 2_000, StakerStatus::<AccountId>::Nominator(vec![21])) - .add_staker(71, 70, 2_000, StakerStatus::<AccountId>::Nominator(vec![21])) - .add_staker(81, 80, 2_000, StakerStatus::<AccountId>::Nominator(vec![21])) + // the best way to invalidate a bunch of nominators is to have them nominate a lot of + // validators, but then lower the `MaxNominations` limit. + .add_staker( + 61, + 60, + 2_000, + StakerStatus::<AccountId>::Nominator(vec![21, 22, 23, 24, 25]), + ) + .add_staker( + 71, + 70, + 2_000, + StakerStatus::<AccountId>::Nominator(vec![21, 22, 23, 24, 25]), + ) + .add_staker( + 81, + 80, + 2_000, + StakerStatus::<AccountId>::Nominator(vec![21, 22, 23, 24, 25]), + ) .build_and_execute(|| { // all voters ordered by stake, assert_eq!( <Test as Config>::VoterList::iter().collect::<Vec<_>>(), vec![61, 71, 81, 11, 21, 31] ); - run_to_block(25); - - // slash 21, the only validator nominated by our first 3 nominators - add_slash(&21); + MaxNominations::set(2); // we want 2 voters now, and at most we allow 4 iterations. This is what happens: // 61 is pruned; @@ -4520,55 +4558,6 @@ mod election_data_provider { }); } - // Even if some of the higher staked nominators are slashed, we still get up to max len voters - // by adding more lower staked nominators. In other words, we assert that we keep on adding - // valid nominators until we reach max len voters; as opposed to simply stopping after we - // have iterated max len voters, but not adding all of them to voters due to some nominators not - // having valid targets.
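// (Both this removed narrative and the test below relied on slashed nominations being
// filtered out of the snapshot. With `get_npos_voters` no longer consulting
// `SlashingSpans` — see the weights change further down — a slashed-but-unresubmitted
// nomination stays in the voter list, so nominator invalidation is exercised via the
// `MaxNominations` limit above instead.)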
- #[test] - fn get_max_len_voters_even_if_some_nominators_are_slashed() { - ExtBuilder::default() - .nominate(false) - .add_staker(61, 60, 20, StakerStatus::<AccountId>::Nominator(vec![21])) - .add_staker(71, 70, 10, StakerStatus::<AccountId>::Nominator(vec![11, 21])) - .add_staker(81, 80, 10, StakerStatus::<AccountId>::Nominator(vec![11, 21])) - .build_and_execute(|| { - // given our voters ordered by stake, - assert_eq!( - <Test as Config>::VoterList::iter().collect::<Vec<_>>(), - vec![11, 21, 31, 61, 71, 81] - ); - - // we take 4 voters - assert_eq!( - Staking::electing_voters(Some(4)) - .unwrap() - .iter() - .map(|(stash, _, _)| stash) - .copied() - .collect::<Vec<_>>(), - vec![11, 21, 31, 61], - ); - - // roll to session 5 - run_to_block(25); - - // slash 21, the only validator nominated by 61. - add_slash(&21); - - // we take 4 voters; 71 and 81 are replacing the ejected ones. - assert_eq!( - Staking::electing_voters(Some(4)) - .unwrap() - .iter() - .map(|(stash, _, _)| stash) - .copied() - .collect::<Vec<_>>(), - vec![11, 31, 71, 81], - ); - }); - } - #[test] fn estimate_next_election_works() { ExtBuilder::default().session_per_era(5).period(5).build_and_execute(|| { diff --git a/frame/staking/src/weights.rs b/frame/staking/src/weights.rs index 56374ffbc4b62..21fc3d6f077bc 100644 --- a/frame/staking/src/weights.rs +++ b/frame/staking/src/weights.rs @@ -18,24 +18,25 @@ //! Autogenerated weights for pallet_staking //! //! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION 4.0.0-dev -//! DATE: 2022-11-07, STEPS: `50`, REPEAT: 20, LOW RANGE: `[]`, HIGH RANGE: `[]` -//! HOSTNAME: `bm2`, CPU: `Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz` +//! DATE: 2022-12-12, STEPS: `50`, REPEAT: 20, LOW RANGE: `[]`, HIGH RANGE: `[]` +//! HOSTNAME: `bm3`, CPU: `Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz` //! EXECUTION: Some(Wasm), WASM-EXECUTION: Compiled, CHAIN: Some("dev"), DB CACHE: 1024 // Executed Command: -// ./target/production/substrate +// /home/benchbot/cargo_target_dir/production/substrate // benchmark // pallet -// --chain=dev // --steps=50 // --repeat=20 -// --pallet=pallet_staking // --extrinsic=* // --execution=wasm // --wasm-execution=compiled // --heap-pages=4096 -// --output=./frame/staking/src/weights.rs +// --json-file=/var/lib/gitlab-runner/builds/zyw4fam_/0/parity/mirrors/substrate/.git/.artifacts/bench.json +// --pallet=pallet_staking +// --chain=dev // --header=./HEADER-APACHE2 +// --output=./frame/staking/src/weights.rs // --template=./.maintain/frame-weight-template.hbs #![cfg_attr(rustfmt, rustfmt_skip)] @@ -70,7 +71,7 @@ pub trait WeightInfo { fn rebond(l: u32, ) -> Weight; fn reap_stash(s: u32, ) -> Weight; fn new_era(v: u32, n: u32, ) -> Weight; - fn get_npos_voters(v: u32, n: u32, s: u32, ) -> Weight; + fn get_npos_voters(v: u32, n: u32, ) -> Weight; fn get_npos_targets(v: u32, ) -> Weight; fn set_staking_configs_all_set() -> Weight; fn set_staking_configs_all_remove() -> Weight; @@ -87,10 +88,10 @@ impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> { // Storage: Balances Locks (r:1 w:1) // Storage: Staking Payee (r:0 w:1) fn bond() -> Weight { - // Minimum execution time: 53_097 nanoseconds. - Weight::from_ref_time(53_708_000 as u64) - .saturating_add(T::DbWeight::get().reads(4 as u64)) - .saturating_add(T::DbWeight::get().writes(4 as u64)) + // Minimum execution time: 56_034 nanoseconds.
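// (How to read the regenerated bodies below, taking `bond()` as the example: the
// `ref_time` argument is in picoseconds of reference-machine execution, so
// 56_646_000 ≈ 56.6µs, consistent with the ~56_034ns minimum recorded above; storage
// costs are not folded into that constant but added per operation through
// `T::DbWeight::get().reads(..)` / `.writes(..)`.)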
+ Weight::from_ref_time(56_646_000) + .saturating_add(T::DbWeight::get().reads(4)) + .saturating_add(T::DbWeight::get().writes(4)) } // Storage: Staking Bonded (r:1 w:0) // Storage: Staking Ledger (r:1 w:1) @@ -98,10 +99,10 @@ impl WeightInfo for SubstrateWeight { // Storage: VoterList ListNodes (r:3 w:3) // Storage: VoterList ListBags (r:2 w:2) fn bond_extra() -> Weight { - // Minimum execution time: 92_199 nanoseconds. - Weight::from_ref_time(93_541_000 as u64) - .saturating_add(T::DbWeight::get().reads(8 as u64)) - .saturating_add(T::DbWeight::get().writes(7 as u64)) + // Minimum execution time: 94_354 nanoseconds. + Weight::from_ref_time(95_318_000) + .saturating_add(T::DbWeight::get().reads(8)) + .saturating_add(T::DbWeight::get().writes(7)) } // Storage: Staking Ledger (r:1 w:1) // Storage: Staking Nominators (r:1 w:0) @@ -113,10 +114,10 @@ impl WeightInfo for SubstrateWeight { // Storage: Staking Bonded (r:1 w:0) // Storage: VoterList ListBags (r:2 w:2) fn unbond() -> Weight { - // Minimum execution time: 98_227 nanoseconds. - Weight::from_ref_time(99_070_000 as u64) - .saturating_add(T::DbWeight::get().reads(12 as u64)) - .saturating_add(T::DbWeight::get().writes(8 as u64)) + // Minimum execution time: 99_960 nanoseconds. + Weight::from_ref_time(101_022_000) + .saturating_add(T::DbWeight::get().reads(12)) + .saturating_add(T::DbWeight::get().writes(8)) } // Storage: Staking Ledger (r:1 w:1) // Storage: Staking CurrentEra (r:1 w:0) @@ -124,12 +125,12 @@ impl WeightInfo for SubstrateWeight { // Storage: System Account (r:1 w:1) /// The range of component `s` is `[0, 100]`. fn withdraw_unbonded_update(s: u32, ) -> Weight { - // Minimum execution time: 45_058 nanoseconds. - Weight::from_ref_time(46_592_713 as u64) - // Standard Error: 413 - .saturating_add(Weight::from_ref_time(63_036 as u64).saturating_mul(s as u64)) - .saturating_add(T::DbWeight::get().reads(4 as u64)) - .saturating_add(T::DbWeight::get().writes(3 as u64)) + // Minimum execution time: 45_819 nanoseconds. + Weight::from_ref_time(48_073_614) + // Standard Error: 1_410 + .saturating_add(Weight::from_ref_time(62_881).saturating_mul(s.into())) + .saturating_add(T::DbWeight::get().reads(4)) + .saturating_add(T::DbWeight::get().writes(3)) } // Storage: Staking Ledger (r:1 w:1) // Storage: Staking CurrentEra (r:1 w:0) @@ -146,10 +147,10 @@ impl WeightInfo for SubstrateWeight { // Storage: Staking Payee (r:0 w:1) /// The range of component `s` is `[0, 100]`. fn withdraw_unbonded_kill(_s: u32, ) -> Weight { - // Minimum execution time: 86_087 nanoseconds. - Weight::from_ref_time(87_627_894 as u64) - .saturating_add(T::DbWeight::get().reads(13 as u64)) - .saturating_add(T::DbWeight::get().writes(11 as u64)) + // Minimum execution time: 86_035 nanoseconds. + Weight::from_ref_time(89_561_735) + .saturating_add(T::DbWeight::get().reads(13)) + .saturating_add(T::DbWeight::get().writes(11)) } // Storage: Staking Ledger (r:1 w:0) // Storage: Staking MinValidatorBond (r:1 w:0) @@ -163,22 +164,22 @@ impl WeightInfo for SubstrateWeight { // Storage: VoterList CounterForListNodes (r:1 w:1) // Storage: Staking CounterForValidators (r:1 w:1) fn validate() -> Weight { - // Minimum execution time: 67_690 nanoseconds. - Weight::from_ref_time(68_348_000 as u64) - .saturating_add(T::DbWeight::get().reads(11 as u64)) - .saturating_add(T::DbWeight::get().writes(5 as u64)) + // Minimum execution time: 68_748 nanoseconds. 
+ Weight::from_ref_time(69_285_000) + .saturating_add(T::DbWeight::get().reads(11)) + .saturating_add(T::DbWeight::get().writes(5)) } // Storage: Staking Ledger (r:1 w:0) // Storage: Staking Nominators (r:1 w:1) /// The range of component `k` is `[1, 128]`. fn kick(k: u32, ) -> Weight { - // Minimum execution time: 43_512 nanoseconds. - Weight::from_ref_time(47_300_477 as u64) - // Standard Error: 11_609 - .saturating_add(Weight::from_ref_time(6_770_405 as u64).saturating_mul(k as u64)) - .saturating_add(T::DbWeight::get().reads(1 as u64)) - .saturating_add(T::DbWeight::get().reads((1 as u64).saturating_mul(k as u64))) - .saturating_add(T::DbWeight::get().writes((1 as u64).saturating_mul(k as u64))) + // Minimum execution time: 41_641 nanoseconds. + Weight::from_ref_time(48_919_231) + // Standard Error: 11_548 + .saturating_add(Weight::from_ref_time(6_901_201).saturating_mul(k.into())) + .saturating_add(T::DbWeight::get().reads(1)) + .saturating_add(T::DbWeight::get().reads((1_u64).saturating_mul(k.into()))) + .saturating_add(T::DbWeight::get().writes((1_u64).saturating_mul(k.into()))) } // Storage: Staking Ledger (r:1 w:0) // Storage: Staking MinNominatorBond (r:1 w:0) @@ -193,13 +194,13 @@ impl WeightInfo for SubstrateWeight { // Storage: Staking CounterForNominators (r:1 w:1) /// The range of component `n` is `[1, 16]`. fn nominate(n: u32, ) -> Weight { - // Minimum execution time: 74_296 nanoseconds. - Weight::from_ref_time(73_201_782 as u64) - // Standard Error: 5_007 - .saturating_add(Weight::from_ref_time(2_810_370 as u64).saturating_mul(n as u64)) - .saturating_add(T::DbWeight::get().reads(12 as u64)) - .saturating_add(T::DbWeight::get().reads((1 as u64).saturating_mul(n as u64))) - .saturating_add(T::DbWeight::get().writes(6 as u64)) + // Minimum execution time: 75_097 nanoseconds. + Weight::from_ref_time(74_052_497) + // Standard Error: 6_784 + .saturating_add(Weight::from_ref_time(2_842_146).saturating_mul(n.into())) + .saturating_add(T::DbWeight::get().reads(12)) + .saturating_add(T::DbWeight::get().reads((1_u64).saturating_mul(n.into()))) + .saturating_add(T::DbWeight::get().writes(6)) } // Storage: Staking Ledger (r:1 w:0) // Storage: Staking Validators (r:1 w:0) @@ -209,59 +210,59 @@ impl WeightInfo for SubstrateWeight { // Storage: VoterList ListBags (r:1 w:1) // Storage: VoterList CounterForListNodes (r:1 w:1) fn chill() -> Weight { - // Minimum execution time: 66_605 nanoseconds. - Weight::from_ref_time(67_279_000 as u64) - .saturating_add(T::DbWeight::get().reads(8 as u64)) - .saturating_add(T::DbWeight::get().writes(6 as u64)) + // Minimum execution time: 67_307 nanoseconds. + Weight::from_ref_time(67_838_000) + .saturating_add(T::DbWeight::get().reads(8)) + .saturating_add(T::DbWeight::get().writes(6)) } // Storage: Staking Ledger (r:1 w:0) // Storage: Staking Payee (r:0 w:1) fn set_payee() -> Weight { - // Minimum execution time: 18_897 nanoseconds. - Weight::from_ref_time(19_357_000 as u64) - .saturating_add(T::DbWeight::get().reads(1 as u64)) - .saturating_add(T::DbWeight::get().writes(1 as u64)) + // Minimum execution time: 18_831 nanoseconds. + Weight::from_ref_time(19_047_000) + .saturating_add(T::DbWeight::get().reads(1)) + .saturating_add(T::DbWeight::get().writes(1)) } // Storage: Staking Bonded (r:1 w:1) // Storage: Staking Ledger (r:2 w:2) fn set_controller() -> Weight { - // Minimum execution time: 26_509 nanoseconds. 
- Weight::from_ref_time(26_961_000 as u64) - .saturating_add(T::DbWeight::get().reads(3 as u64)) - .saturating_add(T::DbWeight::get().writes(3 as u64)) + // Minimum execution time: 27_534 nanoseconds. + Weight::from_ref_time(27_806_000) + .saturating_add(T::DbWeight::get().reads(3)) + .saturating_add(T::DbWeight::get().writes(3)) } // Storage: Staking ValidatorCount (r:0 w:1) fn set_validator_count() -> Weight { - // Minimum execution time: 5_025 nanoseconds. - Weight::from_ref_time(5_240_000 as u64) - .saturating_add(T::DbWeight::get().writes(1 as u64)) + // Minimum execution time: 5_211 nanoseconds. + Weight::from_ref_time(5_372_000) + .saturating_add(T::DbWeight::get().writes(1)) } // Storage: Staking ForceEra (r:0 w:1) fn force_no_eras() -> Weight { - // Minimum execution time: 5_107 nanoseconds. - Weight::from_ref_time(5_320_000 as u64) - .saturating_add(T::DbWeight::get().writes(1 as u64)) + // Minimum execution time: 5_382 nanoseconds. + Weight::from_ref_time(5_654_000) + .saturating_add(T::DbWeight::get().writes(1)) } // Storage: Staking ForceEra (r:0 w:1) fn force_new_era() -> Weight { - // Minimum execution time: 5_094 nanoseconds. - Weight::from_ref_time(5_377_000 as u64) - .saturating_add(T::DbWeight::get().writes(1 as u64)) + // Minimum execution time: 5_618 nanoseconds. + Weight::from_ref_time(5_714_000) + .saturating_add(T::DbWeight::get().writes(1)) } // Storage: Staking ForceEra (r:0 w:1) fn force_new_era_always() -> Weight { - // Minimum execution time: 5_219 nanoseconds. - Weight::from_ref_time(5_434_000 as u64) - .saturating_add(T::DbWeight::get().writes(1 as u64)) + // Minimum execution time: 5_589 nanoseconds. + Weight::from_ref_time(5_776_000) + .saturating_add(T::DbWeight::get().writes(1)) } // Storage: Staking Invulnerables (r:0 w:1) /// The range of component `v` is `[0, 1000]`. fn set_invulnerables(v: u32, ) -> Weight { - // Minimum execution time: 5_122 nanoseconds. - Weight::from_ref_time(5_977_533 as u64) - // Standard Error: 34 - .saturating_add(Weight::from_ref_time(10_205 as u64).saturating_mul(v as u64)) - .saturating_add(T::DbWeight::get().writes(1 as u64)) + // Minimum execution time: 5_541 nanoseconds. + Weight::from_ref_time(6_479_253) + // Standard Error: 49 + .saturating_add(Weight::from_ref_time(10_125).saturating_mul(v.into())) + .saturating_add(T::DbWeight::get().writes(1)) } // Storage: Staking Bonded (r:1 w:1) // Storage: Staking SlashingSpans (r:1 w:0) @@ -278,23 +279,23 @@ impl WeightInfo for SubstrateWeight { // Storage: Staking SpanSlash (r:0 w:2) /// The range of component `s` is `[0, 100]`. fn force_unstake(s: u32, ) -> Weight { - // Minimum execution time: 80_216 nanoseconds. - Weight::from_ref_time(86_090_609 as u64) - // Standard Error: 2_006 - .saturating_add(Weight::from_ref_time(1_039_308 as u64).saturating_mul(s as u64)) - .saturating_add(T::DbWeight::get().reads(11 as u64)) - .saturating_add(T::DbWeight::get().writes(12 as u64)) - .saturating_add(T::DbWeight::get().writes((1 as u64).saturating_mul(s as u64))) + // Minimum execution time: 81_041 nanoseconds. + Weight::from_ref_time(88_526_481) + // Standard Error: 11_494 + .saturating_add(Weight::from_ref_time(1_095_933).saturating_mul(s.into())) + .saturating_add(T::DbWeight::get().reads(11)) + .saturating_add(T::DbWeight::get().writes(12)) + .saturating_add(T::DbWeight::get().writes((1_u64).saturating_mul(s.into()))) } // Storage: Staking UnappliedSlashes (r:1 w:1) /// The range of component `s` is `[1, 1000]`. 
fn cancel_deferred_slash(s: u32, ) -> Weight { - // Minimum execution time: 92_034 nanoseconds. - Weight::from_ref_time(896_585_370 as u64) - // Standard Error: 58_231 - .saturating_add(Weight::from_ref_time(4_908_277 as u64).saturating_mul(s as u64)) - .saturating_add(T::DbWeight::get().reads(1 as u64)) - .saturating_add(T::DbWeight::get().writes(1 as u64)) + // Minimum execution time: 92_308 nanoseconds. + Weight::from_ref_time(900_351_007) + // Standard Error: 59_145 + .saturating_add(Weight::from_ref_time(4_944_988).saturating_mul(s.into())) + .saturating_add(T::DbWeight::get().reads(1)) + .saturating_add(T::DbWeight::get().writes(1)) } // Storage: Staking CurrentEra (r:1 w:0) // Storage: Staking ErasValidatorReward (r:1 w:0) @@ -307,14 +308,14 @@ impl WeightInfo for SubstrateWeight { // Storage: System Account (r:1 w:1) /// The range of component `n` is `[0, 256]`. fn payout_stakers_dead_controller(n: u32, ) -> Weight { - // Minimum execution time: 127_936 nanoseconds. - Weight::from_ref_time(184_556_084 as u64) - // Standard Error: 26_981 - .saturating_add(Weight::from_ref_time(21_786_304 as u64).saturating_mul(n as u64)) - .saturating_add(T::DbWeight::get().reads(9 as u64)) - .saturating_add(T::DbWeight::get().reads((3 as u64).saturating_mul(n as u64))) - .saturating_add(T::DbWeight::get().writes(2 as u64)) - .saturating_add(T::DbWeight::get().writes((1 as u64).saturating_mul(n as u64))) + // Minimum execution time: 131_855 nanoseconds. + Weight::from_ref_time(197_412_779) + // Standard Error: 21_283 + .saturating_add(Weight::from_ref_time(22_093_758).saturating_mul(n.into())) + .saturating_add(T::DbWeight::get().reads(9)) + .saturating_add(T::DbWeight::get().reads((3_u64).saturating_mul(n.into()))) + .saturating_add(T::DbWeight::get().writes(2)) + .saturating_add(T::DbWeight::get().writes((1_u64).saturating_mul(n.into()))) } // Storage: Staking CurrentEra (r:1 w:0) // Storage: Staking ErasValidatorReward (r:1 w:0) @@ -328,14 +329,14 @@ impl WeightInfo for SubstrateWeight { // Storage: Balances Locks (r:1 w:1) /// The range of component `n` is `[0, 256]`. fn payout_stakers_alive_staked(n: u32, ) -> Weight { - // Minimum execution time: 157_778 nanoseconds. - Weight::from_ref_time(223_306_359 as u64) - // Standard Error: 27_216 - .saturating_add(Weight::from_ref_time(30_612_663 as u64).saturating_mul(n as u64)) - .saturating_add(T::DbWeight::get().reads(10 as u64)) - .saturating_add(T::DbWeight::get().reads((5 as u64).saturating_mul(n as u64))) - .saturating_add(T::DbWeight::get().writes(3 as u64)) - .saturating_add(T::DbWeight::get().writes((3 as u64).saturating_mul(n as u64))) + // Minimum execution time: 163_118 nanoseconds. + Weight::from_ref_time(229_356_697) + // Standard Error: 30_740 + .saturating_add(Weight::from_ref_time(31_575_360).saturating_mul(n.into())) + .saturating_add(T::DbWeight::get().reads(10)) + .saturating_add(T::DbWeight::get().reads((5_u64).saturating_mul(n.into()))) + .saturating_add(T::DbWeight::get().writes(3)) + .saturating_add(T::DbWeight::get().writes((3_u64).saturating_mul(n.into()))) } // Storage: Staking Ledger (r:1 w:1) // Storage: Balances Locks (r:1 w:1) @@ -345,12 +346,12 @@ impl WeightInfo for SubstrateWeight { // Storage: VoterList ListBags (r:2 w:2) /// The range of component `l` is `[1, 32]`. fn rebond(l: u32, ) -> Weight { - // Minimum execution time: 92_880 nanoseconds. 
- Weight::from_ref_time(94_434_663 as u64) - // Standard Error: 1_734 - .saturating_add(Weight::from_ref_time(34_453 as u64).saturating_mul(l as u64)) - .saturating_add(T::DbWeight::get().reads(9 as u64)) - .saturating_add(T::DbWeight::get().writes(8 as u64)) + // Minimum execution time: 94_048 nanoseconds. + Weight::from_ref_time(95_784_236) + // Standard Error: 2_313 + .saturating_add(Weight::from_ref_time(52_798).saturating_mul(l.into())) + .saturating_add(T::DbWeight::get().reads(9)) + .saturating_add(T::DbWeight::get().writes(8)) } // Storage: System Account (r:1 w:1) // Storage: Staking Bonded (r:1 w:1) @@ -367,16 +368,15 @@ impl WeightInfo for SubstrateWeight { // Storage: Staking SpanSlash (r:0 w:1) /// The range of component `s` is `[1, 100]`. fn reap_stash(s: u32, ) -> Weight { - // Minimum execution time: 92_334 nanoseconds. - Weight::from_ref_time(95_207_614 as u64) - // Standard Error: 1_822 - .saturating_add(Weight::from_ref_time(1_036_787 as u64).saturating_mul(s as u64)) - .saturating_add(T::DbWeight::get().reads(12 as u64)) - .saturating_add(T::DbWeight::get().writes(12 as u64)) - .saturating_add(T::DbWeight::get().writes((1 as u64).saturating_mul(s as u64))) + // Minimum execution time: 93_342 nanoseconds. + Weight::from_ref_time(95_756_184) + // Standard Error: 2_067 + .saturating_add(Weight::from_ref_time(1_090_785).saturating_mul(s.into())) + .saturating_add(T::DbWeight::get().reads(12)) + .saturating_add(T::DbWeight::get().writes(12)) + .saturating_add(T::DbWeight::get().writes((1_u64).saturating_mul(s.into()))) } // Storage: VoterList CounterForListNodes (r:1 w:0) - // Storage: Staking SlashingSpans (r:1 w:0) // Storage: VoterList ListBags (r:200 w:0) // Storage: VoterList ListNodes (r:101 w:0) // Storage: Staking Nominators (r:101 w:0) @@ -395,20 +395,19 @@ impl WeightInfo for SubstrateWeight { /// The range of component `v` is `[1, 10]`. /// The range of component `n` is `[0, 100]`. fn new_era(v: u32, n: u32, ) -> Weight { - // Minimum execution time: 535_169 nanoseconds. - Weight::from_ref_time(548_667_000 as u64) - // Standard Error: 1_759_252 - .saturating_add(Weight::from_ref_time(58_283_319 as u64).saturating_mul(v as u64)) - // Standard Error: 175_299 - .saturating_add(Weight::from_ref_time(13_578_512 as u64).saturating_mul(n as u64)) - .saturating_add(T::DbWeight::get().reads(207 as u64)) - .saturating_add(T::DbWeight::get().reads((5 as u64).saturating_mul(v as u64))) - .saturating_add(T::DbWeight::get().reads((4 as u64).saturating_mul(n as u64))) - .saturating_add(T::DbWeight::get().writes(3 as u64)) - .saturating_add(T::DbWeight::get().writes((3 as u64).saturating_mul(v as u64))) + // Minimum execution time: 506_874 nanoseconds. 
+ Weight::from_ref_time(507_798_000) + // Standard Error: 1_802_261 + .saturating_add(Weight::from_ref_time(59_874_736).saturating_mul(v.into())) + // Standard Error: 179_585 + .saturating_add(Weight::from_ref_time(13_668_574).saturating_mul(n.into())) + .saturating_add(T::DbWeight::get().reads(206)) + .saturating_add(T::DbWeight::get().reads((5_u64).saturating_mul(v.into()))) + .saturating_add(T::DbWeight::get().reads((4_u64).saturating_mul(n.into()))) + .saturating_add(T::DbWeight::get().writes(3)) + .saturating_add(T::DbWeight::get().writes((3_u64).saturating_mul(v.into()))) } // Storage: VoterList CounterForListNodes (r:1 w:0) - // Storage: Staking SlashingSpans (r:21 w:0) // Storage: VoterList ListBags (r:200 w:0) // Storage: VoterList ListNodes (r:1500 w:0) // Storage: Staking Nominators (r:1500 w:0) @@ -417,29 +416,27 @@ impl WeightInfo for SubstrateWeight { // Storage: Staking Ledger (r:1500 w:0) /// The range of component `v` is `[500, 1000]`. /// The range of component `n` is `[500, 1000]`. - /// The range of component `s` is `[1, 20]`. - fn get_npos_voters(v: u32, n: u32, s: u32, ) -> Weight { - // Minimum execution time: 25_323_129 nanoseconds. - Weight::from_ref_time(25_471_672_000 as u64) - // Standard Error: 266_391 - .saturating_add(Weight::from_ref_time(6_665_504 as u64).saturating_mul(v as u64)) - // Standard Error: 266_391 - .saturating_add(Weight::from_ref_time(6_956_606 as u64).saturating_mul(n as u64)) - .saturating_add(T::DbWeight::get().reads(202 as u64)) - .saturating_add(T::DbWeight::get().reads((5 as u64).saturating_mul(v as u64))) - .saturating_add(T::DbWeight::get().reads((4 as u64).saturating_mul(n as u64))) - .saturating_add(T::DbWeight::get().reads((1 as u64).saturating_mul(s as u64))) + fn get_npos_voters(v: u32, n: u32, ) -> Weight { + // Minimum execution time: 24_634_585 nanoseconds. + Weight::from_ref_time(24_718_377_000) + // Standard Error: 324_839 + .saturating_add(Weight::from_ref_time(3_654_508).saturating_mul(v.into())) + // Standard Error: 324_839 + .saturating_add(Weight::from_ref_time(2_927_535).saturating_mul(n.into())) + .saturating_add(T::DbWeight::get().reads(201)) + .saturating_add(T::DbWeight::get().reads((5_u64).saturating_mul(v.into()))) + .saturating_add(T::DbWeight::get().reads((4_u64).saturating_mul(n.into()))) } // Storage: Staking CounterForValidators (r:1 w:0) // Storage: Staking Validators (r:501 w:0) /// The range of component `v` is `[500, 1000]`. fn get_npos_targets(v: u32, ) -> Weight { - // Minimum execution time: 4_905_036 nanoseconds. - Weight::from_ref_time(78_163_554 as u64) - // Standard Error: 23_723 - .saturating_add(Weight::from_ref_time(9_784_870 as u64).saturating_mul(v as u64)) - .saturating_add(T::DbWeight::get().reads(2 as u64)) - .saturating_add(T::DbWeight::get().reads((1 as u64).saturating_mul(v as u64))) + // Minimum execution time: 4_805_490 nanoseconds. + Weight::from_ref_time(118_475_494) + // Standard Error: 26_332 + .saturating_add(Weight::from_ref_time(9_635_188).saturating_mul(v.into())) + .saturating_add(T::DbWeight::get().reads(2)) + .saturating_add(T::DbWeight::get().reads((1_u64).saturating_mul(v.into()))) } // Storage: Staking MinCommission (r:0 w:1) // Storage: Staking MinValidatorBond (r:0 w:1) @@ -448,9 +445,9 @@ impl WeightInfo for SubstrateWeight { // Storage: Staking MaxNominatorsCount (r:0 w:1) // Storage: Staking MinNominatorBond (r:0 w:1) fn set_staking_configs_all_set() -> Weight { - // Minimum execution time: 10_096 nanoseconds. 
- Weight::from_ref_time(10_538_000 as u64) - .saturating_add(T::DbWeight::get().writes(6 as u64)) + // Minimum execution time: 10_816 nanoseconds. + Weight::from_ref_time(11_242_000) + .saturating_add(T::DbWeight::get().writes(6)) } // Storage: Staking MinCommission (r:0 w:1) // Storage: Staking MinValidatorBond (r:0 w:1) @@ -459,9 +456,9 @@ impl WeightInfo for SubstrateWeight { // Storage: Staking MaxNominatorsCount (r:0 w:1) // Storage: Staking MinNominatorBond (r:0 w:1) fn set_staking_configs_all_remove() -> Weight { - // Minimum execution time: 9_045 nanoseconds. - Weight::from_ref_time(9_379_000 as u64) - .saturating_add(T::DbWeight::get().writes(6 as u64)) + // Minimum execution time: 9_581 nanoseconds. + Weight::from_ref_time(10_383_000) + .saturating_add(T::DbWeight::get().writes(6)) } // Storage: Staking Ledger (r:1 w:0) // Storage: Staking Nominators (r:1 w:1) @@ -474,18 +471,18 @@ impl WeightInfo for SubstrateWeight { // Storage: VoterList ListBags (r:1 w:1) // Storage: VoterList CounterForListNodes (r:1 w:1) fn chill_other() -> Weight { - // Minimum execution time: 81_457 nanoseconds. - Weight::from_ref_time(82_410_000 as u64) - .saturating_add(T::DbWeight::get().reads(11 as u64)) - .saturating_add(T::DbWeight::get().writes(6 as u64)) + // Minimum execution time: 83_669 nanoseconds. + Weight::from_ref_time(84_772_000) + .saturating_add(T::DbWeight::get().reads(11)) + .saturating_add(T::DbWeight::get().writes(6)) } // Storage: Staking MinCommission (r:1 w:0) // Storage: Staking Validators (r:1 w:1) fn force_apply_min_commission() -> Weight { - // Minimum execution time: 19_684 nanoseconds. - Weight::from_ref_time(20_059_000 as u64) - .saturating_add(T::DbWeight::get().reads(2 as u64)) - .saturating_add(T::DbWeight::get().writes(1 as u64)) + // Minimum execution time: 20_553 nanoseconds. + Weight::from_ref_time(20_933_000) + .saturating_add(T::DbWeight::get().reads(2)) + .saturating_add(T::DbWeight::get().writes(1)) } } @@ -497,10 +494,10 @@ impl WeightInfo for () { // Storage: Balances Locks (r:1 w:1) // Storage: Staking Payee (r:0 w:1) fn bond() -> Weight { - // Minimum execution time: 53_097 nanoseconds. - Weight::from_ref_time(53_708_000 as u64) - .saturating_add(RocksDbWeight::get().reads(4 as u64)) - .saturating_add(RocksDbWeight::get().writes(4 as u64)) + // Minimum execution time: 56_034 nanoseconds. + Weight::from_ref_time(56_646_000) + .saturating_add(RocksDbWeight::get().reads(4)) + .saturating_add(RocksDbWeight::get().writes(4)) } // Storage: Staking Bonded (r:1 w:0) // Storage: Staking Ledger (r:1 w:1) @@ -508,10 +505,10 @@ impl WeightInfo for () { // Storage: VoterList ListNodes (r:3 w:3) // Storage: VoterList ListBags (r:2 w:2) fn bond_extra() -> Weight { - // Minimum execution time: 92_199 nanoseconds. - Weight::from_ref_time(93_541_000 as u64) - .saturating_add(RocksDbWeight::get().reads(8 as u64)) - .saturating_add(RocksDbWeight::get().writes(7 as u64)) + // Minimum execution time: 94_354 nanoseconds. + Weight::from_ref_time(95_318_000) + .saturating_add(RocksDbWeight::get().reads(8)) + .saturating_add(RocksDbWeight::get().writes(7)) } // Storage: Staking Ledger (r:1 w:1) // Storage: Staking Nominators (r:1 w:0) @@ -523,10 +520,10 @@ impl WeightInfo for () { // Storage: Staking Bonded (r:1 w:0) // Storage: VoterList ListBags (r:2 w:2) fn unbond() -> Weight { - // Minimum execution time: 98_227 nanoseconds. 
- Weight::from_ref_time(99_070_000 as u64) - .saturating_add(RocksDbWeight::get().reads(12 as u64)) - .saturating_add(RocksDbWeight::get().writes(8 as u64)) + // Minimum execution time: 99_960 nanoseconds. + Weight::from_ref_time(101_022_000) + .saturating_add(RocksDbWeight::get().reads(12)) + .saturating_add(RocksDbWeight::get().writes(8)) } // Storage: Staking Ledger (r:1 w:1) // Storage: Staking CurrentEra (r:1 w:0) @@ -534,12 +531,12 @@ impl WeightInfo for () { // Storage: System Account (r:1 w:1) /// The range of component `s` is `[0, 100]`. fn withdraw_unbonded_update(s: u32, ) -> Weight { - // Minimum execution time: 45_058 nanoseconds. - Weight::from_ref_time(46_592_713 as u64) - // Standard Error: 413 - .saturating_add(Weight::from_ref_time(63_036 as u64).saturating_mul(s as u64)) - .saturating_add(RocksDbWeight::get().reads(4 as u64)) - .saturating_add(RocksDbWeight::get().writes(3 as u64)) + // Minimum execution time: 45_819 nanoseconds. + Weight::from_ref_time(48_073_614) + // Standard Error: 1_410 + .saturating_add(Weight::from_ref_time(62_881).saturating_mul(s.into())) + .saturating_add(RocksDbWeight::get().reads(4)) + .saturating_add(RocksDbWeight::get().writes(3)) } // Storage: Staking Ledger (r:1 w:1) // Storage: Staking CurrentEra (r:1 w:0) @@ -556,10 +553,10 @@ impl WeightInfo for () { // Storage: Staking Payee (r:0 w:1) /// The range of component `s` is `[0, 100]`. fn withdraw_unbonded_kill(_s: u32, ) -> Weight { - // Minimum execution time: 86_087 nanoseconds. - Weight::from_ref_time(87_627_894 as u64) - .saturating_add(RocksDbWeight::get().reads(13 as u64)) - .saturating_add(RocksDbWeight::get().writes(11 as u64)) + // Minimum execution time: 86_035 nanoseconds. + Weight::from_ref_time(89_561_735) + .saturating_add(RocksDbWeight::get().reads(13)) + .saturating_add(RocksDbWeight::get().writes(11)) } // Storage: Staking Ledger (r:1 w:0) // Storage: Staking MinValidatorBond (r:1 w:0) @@ -573,22 +570,22 @@ impl WeightInfo for () { // Storage: VoterList CounterForListNodes (r:1 w:1) // Storage: Staking CounterForValidators (r:1 w:1) fn validate() -> Weight { - // Minimum execution time: 67_690 nanoseconds. - Weight::from_ref_time(68_348_000 as u64) - .saturating_add(RocksDbWeight::get().reads(11 as u64)) - .saturating_add(RocksDbWeight::get().writes(5 as u64)) + // Minimum execution time: 68_748 nanoseconds. + Weight::from_ref_time(69_285_000) + .saturating_add(RocksDbWeight::get().reads(11)) + .saturating_add(RocksDbWeight::get().writes(5)) } // Storage: Staking Ledger (r:1 w:0) // Storage: Staking Nominators (r:1 w:1) /// The range of component `k` is `[1, 128]`. fn kick(k: u32, ) -> Weight { - // Minimum execution time: 43_512 nanoseconds. - Weight::from_ref_time(47_300_477 as u64) - // Standard Error: 11_609 - .saturating_add(Weight::from_ref_time(6_770_405 as u64).saturating_mul(k as u64)) - .saturating_add(RocksDbWeight::get().reads(1 as u64)) - .saturating_add(RocksDbWeight::get().reads((1 as u64).saturating_mul(k as u64))) - .saturating_add(RocksDbWeight::get().writes((1 as u64).saturating_mul(k as u64))) + // Minimum execution time: 41_641 nanoseconds. 
+ Weight::from_ref_time(48_919_231) + // Standard Error: 11_548 + .saturating_add(Weight::from_ref_time(6_901_201).saturating_mul(k.into())) + .saturating_add(RocksDbWeight::get().reads(1)) + .saturating_add(RocksDbWeight::get().reads((1_u64).saturating_mul(k.into()))) + .saturating_add(RocksDbWeight::get().writes((1_u64).saturating_mul(k.into()))) } // Storage: Staking Ledger (r:1 w:0) // Storage: Staking MinNominatorBond (r:1 w:0) @@ -603,13 +600,13 @@ impl WeightInfo for () { // Storage: Staking CounterForNominators (r:1 w:1) /// The range of component `n` is `[1, 16]`. fn nominate(n: u32, ) -> Weight { - // Minimum execution time: 74_296 nanoseconds. - Weight::from_ref_time(73_201_782 as u64) - // Standard Error: 5_007 - .saturating_add(Weight::from_ref_time(2_810_370 as u64).saturating_mul(n as u64)) - .saturating_add(RocksDbWeight::get().reads(12 as u64)) - .saturating_add(RocksDbWeight::get().reads((1 as u64).saturating_mul(n as u64))) - .saturating_add(RocksDbWeight::get().writes(6 as u64)) + // Minimum execution time: 75_097 nanoseconds. + Weight::from_ref_time(74_052_497) + // Standard Error: 6_784 + .saturating_add(Weight::from_ref_time(2_842_146).saturating_mul(n.into())) + .saturating_add(RocksDbWeight::get().reads(12)) + .saturating_add(RocksDbWeight::get().reads((1_u64).saturating_mul(n.into()))) + .saturating_add(RocksDbWeight::get().writes(6)) } // Storage: Staking Ledger (r:1 w:0) // Storage: Staking Validators (r:1 w:0) @@ -619,59 +616,59 @@ impl WeightInfo for () { // Storage: VoterList ListBags (r:1 w:1) // Storage: VoterList CounterForListNodes (r:1 w:1) fn chill() -> Weight { - // Minimum execution time: 66_605 nanoseconds. - Weight::from_ref_time(67_279_000 as u64) - .saturating_add(RocksDbWeight::get().reads(8 as u64)) - .saturating_add(RocksDbWeight::get().writes(6 as u64)) + // Minimum execution time: 67_307 nanoseconds. + Weight::from_ref_time(67_838_000) + .saturating_add(RocksDbWeight::get().reads(8)) + .saturating_add(RocksDbWeight::get().writes(6)) } // Storage: Staking Ledger (r:1 w:0) // Storage: Staking Payee (r:0 w:1) fn set_payee() -> Weight { - // Minimum execution time: 18_897 nanoseconds. - Weight::from_ref_time(19_357_000 as u64) - .saturating_add(RocksDbWeight::get().reads(1 as u64)) - .saturating_add(RocksDbWeight::get().writes(1 as u64)) + // Minimum execution time: 18_831 nanoseconds. + Weight::from_ref_time(19_047_000) + .saturating_add(RocksDbWeight::get().reads(1)) + .saturating_add(RocksDbWeight::get().writes(1)) } // Storage: Staking Bonded (r:1 w:1) // Storage: Staking Ledger (r:2 w:2) fn set_controller() -> Weight { - // Minimum execution time: 26_509 nanoseconds. - Weight::from_ref_time(26_961_000 as u64) - .saturating_add(RocksDbWeight::get().reads(3 as u64)) - .saturating_add(RocksDbWeight::get().writes(3 as u64)) + // Minimum execution time: 27_534 nanoseconds. + Weight::from_ref_time(27_806_000) + .saturating_add(RocksDbWeight::get().reads(3)) + .saturating_add(RocksDbWeight::get().writes(3)) } // Storage: Staking ValidatorCount (r:0 w:1) fn set_validator_count() -> Weight { - // Minimum execution time: 5_025 nanoseconds. - Weight::from_ref_time(5_240_000 as u64) - .saturating_add(RocksDbWeight::get().writes(1 as u64)) + // Minimum execution time: 5_211 nanoseconds. + Weight::from_ref_time(5_372_000) + .saturating_add(RocksDbWeight::get().writes(1)) } // Storage: Staking ForceEra (r:0 w:1) fn force_no_eras() -> Weight { - // Minimum execution time: 5_107 nanoseconds. 
- Weight::from_ref_time(5_320_000 as u64) - .saturating_add(RocksDbWeight::get().writes(1 as u64)) + // Minimum execution time: 5_382 nanoseconds. + Weight::from_ref_time(5_654_000) + .saturating_add(RocksDbWeight::get().writes(1)) } // Storage: Staking ForceEra (r:0 w:1) fn force_new_era() -> Weight { - // Minimum execution time: 5_094 nanoseconds. - Weight::from_ref_time(5_377_000 as u64) - .saturating_add(RocksDbWeight::get().writes(1 as u64)) + // Minimum execution time: 5_618 nanoseconds. + Weight::from_ref_time(5_714_000) + .saturating_add(RocksDbWeight::get().writes(1)) } // Storage: Staking ForceEra (r:0 w:1) fn force_new_era_always() -> Weight { - // Minimum execution time: 5_219 nanoseconds. - Weight::from_ref_time(5_434_000 as u64) - .saturating_add(RocksDbWeight::get().writes(1 as u64)) + // Minimum execution time: 5_589 nanoseconds. + Weight::from_ref_time(5_776_000) + .saturating_add(RocksDbWeight::get().writes(1)) } // Storage: Staking Invulnerables (r:0 w:1) /// The range of component `v` is `[0, 1000]`. fn set_invulnerables(v: u32, ) -> Weight { - // Minimum execution time: 5_122 nanoseconds. - Weight::from_ref_time(5_977_533 as u64) - // Standard Error: 34 - .saturating_add(Weight::from_ref_time(10_205 as u64).saturating_mul(v as u64)) - .saturating_add(RocksDbWeight::get().writes(1 as u64)) + // Minimum execution time: 5_541 nanoseconds. + Weight::from_ref_time(6_479_253) + // Standard Error: 49 + .saturating_add(Weight::from_ref_time(10_125).saturating_mul(v.into())) + .saturating_add(RocksDbWeight::get().writes(1)) } // Storage: Staking Bonded (r:1 w:1) // Storage: Staking SlashingSpans (r:1 w:0) @@ -688,23 +685,23 @@ impl WeightInfo for () { // Storage: Staking SpanSlash (r:0 w:2) /// The range of component `s` is `[0, 100]`. fn force_unstake(s: u32, ) -> Weight { - // Minimum execution time: 80_216 nanoseconds. - Weight::from_ref_time(86_090_609 as u64) - // Standard Error: 2_006 - .saturating_add(Weight::from_ref_time(1_039_308 as u64).saturating_mul(s as u64)) - .saturating_add(RocksDbWeight::get().reads(11 as u64)) - .saturating_add(RocksDbWeight::get().writes(12 as u64)) - .saturating_add(RocksDbWeight::get().writes((1 as u64).saturating_mul(s as u64))) + // Minimum execution time: 81_041 nanoseconds. + Weight::from_ref_time(88_526_481) + // Standard Error: 11_494 + .saturating_add(Weight::from_ref_time(1_095_933).saturating_mul(s.into())) + .saturating_add(RocksDbWeight::get().reads(11)) + .saturating_add(RocksDbWeight::get().writes(12)) + .saturating_add(RocksDbWeight::get().writes((1_u64).saturating_mul(s.into()))) } // Storage: Staking UnappliedSlashes (r:1 w:1) /// The range of component `s` is `[1, 1000]`. fn cancel_deferred_slash(s: u32, ) -> Weight { - // Minimum execution time: 92_034 nanoseconds. - Weight::from_ref_time(896_585_370 as u64) - // Standard Error: 58_231 - .saturating_add(Weight::from_ref_time(4_908_277 as u64).saturating_mul(s as u64)) - .saturating_add(RocksDbWeight::get().reads(1 as u64)) - .saturating_add(RocksDbWeight::get().writes(1 as u64)) + // Minimum execution time: 92_308 nanoseconds. 
+ Weight::from_ref_time(900_351_007) + // Standard Error: 59_145 + .saturating_add(Weight::from_ref_time(4_944_988).saturating_mul(s.into())) + .saturating_add(RocksDbWeight::get().reads(1)) + .saturating_add(RocksDbWeight::get().writes(1)) } // Storage: Staking CurrentEra (r:1 w:0) // Storage: Staking ErasValidatorReward (r:1 w:0) @@ -717,14 +714,14 @@ impl WeightInfo for () { // Storage: System Account (r:1 w:1) /// The range of component `n` is `[0, 256]`. fn payout_stakers_dead_controller(n: u32, ) -> Weight { - // Minimum execution time: 127_936 nanoseconds. - Weight::from_ref_time(184_556_084 as u64) - // Standard Error: 26_981 - .saturating_add(Weight::from_ref_time(21_786_304 as u64).saturating_mul(n as u64)) - .saturating_add(RocksDbWeight::get().reads(9 as u64)) - .saturating_add(RocksDbWeight::get().reads((3 as u64).saturating_mul(n as u64))) - .saturating_add(RocksDbWeight::get().writes(2 as u64)) - .saturating_add(RocksDbWeight::get().writes((1 as u64).saturating_mul(n as u64))) + // Minimum execution time: 131_855 nanoseconds. + Weight::from_ref_time(197_412_779) + // Standard Error: 21_283 + .saturating_add(Weight::from_ref_time(22_093_758).saturating_mul(n.into())) + .saturating_add(RocksDbWeight::get().reads(9)) + .saturating_add(RocksDbWeight::get().reads((3_u64).saturating_mul(n.into()))) + .saturating_add(RocksDbWeight::get().writes(2)) + .saturating_add(RocksDbWeight::get().writes((1_u64).saturating_mul(n.into()))) } // Storage: Staking CurrentEra (r:1 w:0) // Storage: Staking ErasValidatorReward (r:1 w:0) @@ -738,14 +735,14 @@ impl WeightInfo for () { // Storage: Balances Locks (r:1 w:1) /// The range of component `n` is `[0, 256]`. fn payout_stakers_alive_staked(n: u32, ) -> Weight { - // Minimum execution time: 157_778 nanoseconds. - Weight::from_ref_time(223_306_359 as u64) - // Standard Error: 27_216 - .saturating_add(Weight::from_ref_time(30_612_663 as u64).saturating_mul(n as u64)) - .saturating_add(RocksDbWeight::get().reads(10 as u64)) - .saturating_add(RocksDbWeight::get().reads((5 as u64).saturating_mul(n as u64))) - .saturating_add(RocksDbWeight::get().writes(3 as u64)) - .saturating_add(RocksDbWeight::get().writes((3 as u64).saturating_mul(n as u64))) + // Minimum execution time: 163_118 nanoseconds. + Weight::from_ref_time(229_356_697) + // Standard Error: 30_740 + .saturating_add(Weight::from_ref_time(31_575_360).saturating_mul(n.into())) + .saturating_add(RocksDbWeight::get().reads(10)) + .saturating_add(RocksDbWeight::get().reads((5_u64).saturating_mul(n.into()))) + .saturating_add(RocksDbWeight::get().writes(3)) + .saturating_add(RocksDbWeight::get().writes((3_u64).saturating_mul(n.into()))) } // Storage: Staking Ledger (r:1 w:1) // Storage: Balances Locks (r:1 w:1) @@ -755,12 +752,12 @@ impl WeightInfo for () { // Storage: VoterList ListBags (r:2 w:2) /// The range of component `l` is `[1, 32]`. fn rebond(l: u32, ) -> Weight { - // Minimum execution time: 92_880 nanoseconds. - Weight::from_ref_time(94_434_663 as u64) - // Standard Error: 1_734 - .saturating_add(Weight::from_ref_time(34_453 as u64).saturating_mul(l as u64)) - .saturating_add(RocksDbWeight::get().reads(9 as u64)) - .saturating_add(RocksDbWeight::get().writes(8 as u64)) + // Minimum execution time: 94_048 nanoseconds. 
+ Weight::from_ref_time(95_784_236) + // Standard Error: 2_313 + .saturating_add(Weight::from_ref_time(52_798).saturating_mul(l.into())) + .saturating_add(RocksDbWeight::get().reads(9)) + .saturating_add(RocksDbWeight::get().writes(8)) } // Storage: System Account (r:1 w:1) // Storage: Staking Bonded (r:1 w:1) @@ -777,16 +774,15 @@ impl WeightInfo for () { // Storage: Staking SpanSlash (r:0 w:1) /// The range of component `s` is `[1, 100]`. fn reap_stash(s: u32, ) -> Weight { - // Minimum execution time: 92_334 nanoseconds. - Weight::from_ref_time(95_207_614 as u64) - // Standard Error: 1_822 - .saturating_add(Weight::from_ref_time(1_036_787 as u64).saturating_mul(s as u64)) - .saturating_add(RocksDbWeight::get().reads(12 as u64)) - .saturating_add(RocksDbWeight::get().writes(12 as u64)) - .saturating_add(RocksDbWeight::get().writes((1 as u64).saturating_mul(s as u64))) + // Minimum execution time: 93_342 nanoseconds. + Weight::from_ref_time(95_756_184) + // Standard Error: 2_067 + .saturating_add(Weight::from_ref_time(1_090_785).saturating_mul(s.into())) + .saturating_add(RocksDbWeight::get().reads(12)) + .saturating_add(RocksDbWeight::get().writes(12)) + .saturating_add(RocksDbWeight::get().writes((1_u64).saturating_mul(s.into()))) } // Storage: VoterList CounterForListNodes (r:1 w:0) - // Storage: Staking SlashingSpans (r:1 w:0) // Storage: VoterList ListBags (r:200 w:0) // Storage: VoterList ListNodes (r:101 w:0) // Storage: Staking Nominators (r:101 w:0) @@ -805,20 +801,19 @@ impl WeightInfo for () { /// The range of component `v` is `[1, 10]`. /// The range of component `n` is `[0, 100]`. fn new_era(v: u32, n: u32, ) -> Weight { - // Minimum execution time: 535_169 nanoseconds. - Weight::from_ref_time(548_667_000 as u64) - // Standard Error: 1_759_252 - .saturating_add(Weight::from_ref_time(58_283_319 as u64).saturating_mul(v as u64)) - // Standard Error: 175_299 - .saturating_add(Weight::from_ref_time(13_578_512 as u64).saturating_mul(n as u64)) - .saturating_add(RocksDbWeight::get().reads(207 as u64)) - .saturating_add(RocksDbWeight::get().reads((5 as u64).saturating_mul(v as u64))) - .saturating_add(RocksDbWeight::get().reads((4 as u64).saturating_mul(n as u64))) - .saturating_add(RocksDbWeight::get().writes(3 as u64)) - .saturating_add(RocksDbWeight::get().writes((3 as u64).saturating_mul(v as u64))) + // Minimum execution time: 506_874 nanoseconds. + Weight::from_ref_time(507_798_000) + // Standard Error: 1_802_261 + .saturating_add(Weight::from_ref_time(59_874_736).saturating_mul(v.into())) + // Standard Error: 179_585 + .saturating_add(Weight::from_ref_time(13_668_574).saturating_mul(n.into())) + .saturating_add(RocksDbWeight::get().reads(206)) + .saturating_add(RocksDbWeight::get().reads((5_u64).saturating_mul(v.into()))) + .saturating_add(RocksDbWeight::get().reads((4_u64).saturating_mul(n.into()))) + .saturating_add(RocksDbWeight::get().writes(3)) + .saturating_add(RocksDbWeight::get().writes((3_u64).saturating_mul(v.into()))) } // Storage: VoterList CounterForListNodes (r:1 w:0) - // Storage: Staking SlashingSpans (r:21 w:0) // Storage: VoterList ListBags (r:200 w:0) // Storage: VoterList ListNodes (r:1500 w:0) // Storage: Staking Nominators (r:1500 w:0) @@ -827,29 +822,27 @@ impl WeightInfo for () { // Storage: Staking Ledger (r:1500 w:0) /// The range of component `v` is `[500, 1000]`. /// The range of component `n` is `[500, 1000]`. - /// The range of component `s` is `[1, 20]`. 
- fn get_npos_voters(v: u32, n: u32, s: u32, ) -> Weight { - // Minimum execution time: 25_323_129 nanoseconds. - Weight::from_ref_time(25_471_672_000 as u64) - // Standard Error: 266_391 - .saturating_add(Weight::from_ref_time(6_665_504 as u64).saturating_mul(v as u64)) - // Standard Error: 266_391 - .saturating_add(Weight::from_ref_time(6_956_606 as u64).saturating_mul(n as u64)) - .saturating_add(RocksDbWeight::get().reads(202 as u64)) - .saturating_add(RocksDbWeight::get().reads((5 as u64).saturating_mul(v as u64))) - .saturating_add(RocksDbWeight::get().reads((4 as u64).saturating_mul(n as u64))) - .saturating_add(RocksDbWeight::get().reads((1 as u64).saturating_mul(s as u64))) + fn get_npos_voters(v: u32, n: u32, ) -> Weight { + // Minimum execution time: 24_634_585 nanoseconds. + Weight::from_ref_time(24_718_377_000) + // Standard Error: 324_839 + .saturating_add(Weight::from_ref_time(3_654_508).saturating_mul(v.into())) + // Standard Error: 324_839 + .saturating_add(Weight::from_ref_time(2_927_535).saturating_mul(n.into())) + .saturating_add(RocksDbWeight::get().reads(201)) + .saturating_add(RocksDbWeight::get().reads((5_u64).saturating_mul(v.into()))) + .saturating_add(RocksDbWeight::get().reads((4_u64).saturating_mul(n.into()))) } // Storage: Staking CounterForValidators (r:1 w:0) // Storage: Staking Validators (r:501 w:0) /// The range of component `v` is `[500, 1000]`. fn get_npos_targets(v: u32, ) -> Weight { - // Minimum execution time: 4_905_036 nanoseconds. - Weight::from_ref_time(78_163_554 as u64) - // Standard Error: 23_723 - .saturating_add(Weight::from_ref_time(9_784_870 as u64).saturating_mul(v as u64)) - .saturating_add(RocksDbWeight::get().reads(2 as u64)) - .saturating_add(RocksDbWeight::get().reads((1 as u64).saturating_mul(v as u64))) + // Minimum execution time: 4_805_490 nanoseconds. + Weight::from_ref_time(118_475_494) + // Standard Error: 26_332 + .saturating_add(Weight::from_ref_time(9_635_188).saturating_mul(v.into())) + .saturating_add(RocksDbWeight::get().reads(2)) + .saturating_add(RocksDbWeight::get().reads((1_u64).saturating_mul(v.into()))) } // Storage: Staking MinCommission (r:0 w:1) // Storage: Staking MinValidatorBond (r:0 w:1) @@ -858,9 +851,9 @@ impl WeightInfo for () { // Storage: Staking MaxNominatorsCount (r:0 w:1) // Storage: Staking MinNominatorBond (r:0 w:1) fn set_staking_configs_all_set() -> Weight { - // Minimum execution time: 10_096 nanoseconds. - Weight::from_ref_time(10_538_000 as u64) - .saturating_add(RocksDbWeight::get().writes(6 as u64)) + // Minimum execution time: 10_816 nanoseconds. + Weight::from_ref_time(11_242_000) + .saturating_add(RocksDbWeight::get().writes(6)) } // Storage: Staking MinCommission (r:0 w:1) // Storage: Staking MinValidatorBond (r:0 w:1) @@ -869,9 +862,9 @@ impl WeightInfo for () { // Storage: Staking MaxNominatorsCount (r:0 w:1) // Storage: Staking MinNominatorBond (r:0 w:1) fn set_staking_configs_all_remove() -> Weight { - // Minimum execution time: 9_045 nanoseconds. - Weight::from_ref_time(9_379_000 as u64) - .saturating_add(RocksDbWeight::get().writes(6 as u64)) + // Minimum execution time: 9_581 nanoseconds. + Weight::from_ref_time(10_383_000) + .saturating_add(RocksDbWeight::get().writes(6)) } // Storage: Staking Ledger (r:1 w:0) // Storage: Staking Nominators (r:1 w:1) @@ -884,17 +877,17 @@ impl WeightInfo for () { // Storage: VoterList ListBags (r:1 w:1) // Storage: VoterList CounterForListNodes (r:1 w:1) fn chill_other() -> Weight { - // Minimum execution time: 81_457 nanoseconds. 
- Weight::from_ref_time(82_410_000 as u64) - .saturating_add(RocksDbWeight::get().reads(11 as u64)) - .saturating_add(RocksDbWeight::get().writes(6 as u64)) + // Minimum execution time: 83_669 nanoseconds. + Weight::from_ref_time(84_772_000) + .saturating_add(RocksDbWeight::get().reads(11)) + .saturating_add(RocksDbWeight::get().writes(6)) } // Storage: Staking MinCommission (r:1 w:0) // Storage: Staking Validators (r:1 w:1) fn force_apply_min_commission() -> Weight { - // Minimum execution time: 19_684 nanoseconds. - Weight::from_ref_time(20_059_000 as u64) - .saturating_add(RocksDbWeight::get().reads(2 as u64)) - .saturating_add(RocksDbWeight::get().writes(1 as u64)) + // Minimum execution time: 20_553 nanoseconds. + Weight::from_ref_time(20_933_000) + .saturating_add(RocksDbWeight::get().reads(2)) + .saturating_add(RocksDbWeight::get().writes(1)) } } diff --git a/frame/state-trie-migration/src/lib.rs b/frame/state-trie-migration/src/lib.rs index 8027dc88f81b1..23f73bb56b173 100644 --- a/frame/state-trie-migration/src/lib.rs +++ b/frame/state-trie-migration/src/lib.rs @@ -546,6 +546,7 @@ pub mod pallet { /// Control the automatic migration. /// /// The dispatch origin of this call must be [`Config::ControlOrigin`]. + #[pallet::call_index(0)] #[pallet::weight(T::DbWeight::get().reads_writes(1, 1))] pub fn control_auto_migration( origin: OriginFor<T>, @@ -577,6 +578,7 @@ pub mod pallet { /// Based on the documentation of [`MigrationTask::migrate_until_exhaustion`], the /// recommended way of doing this is to pass a `limit` that only bounds `count`, as the /// `size` limit can always be overwritten. + #[pallet::call_index(1)] #[pallet::weight( // the migration process Pallet::<T>::dynamic_weight(limits.item, * real_size_upper) @@ -648,6 +650,7 @@ pub mod pallet { /// /// This does not affect the global migration process tracker ([`MigrationProcess`]), and /// should only be used in case any keys are leftover due to a bug. + #[pallet::call_index(2)] #[pallet::weight( T::WeightInfo::migrate_custom_top_success() .max(T::WeightInfo::migrate_custom_top_fail()) @@ -704,6 +707,7 @@ pub mod pallet { /// /// This does not affect the global migration process tracker ([`MigrationProcess`]), and /// should only be used in case any keys are leftover due to a bug. + #[pallet::call_index(3)] #[pallet::weight( T::WeightInfo::migrate_custom_child_success() .max(T::WeightInfo::migrate_custom_child_fail()) @@ -764,6 +768,7 @@ pub mod pallet { } /// Set the maximum limit of the signed migration. + #[pallet::call_index(4)] #[pallet::weight(T::DbWeight::get().reads_writes(1, 1))] pub fn set_signed_max_limits( origin: OriginFor<T>, @@ -783,6 +788,7 @@ pub mod pallet { /// /// In case you mess things up, you can also, in principle, use this to reset the migration /// process. + #[pallet::call_index(5)] #[pallet::weight(T::DbWeight::get().reads_writes(1, 1))] pub fn force_set_progress( origin: OriginFor<T>, diff --git a/frame/sudo/src/lib.rs b/frame/sudo/src/lib.rs index c18ced8911193..0867f24b1691e 100644 --- a/frame/sudo/src/lib.rs +++ b/frame/sudo/src/lib.rs @@ -148,6 +148,7 @@ pub mod pallet { /// - One DB write (event). /// - Weight of derivative `call` execution + 10,000. /// # </weight> + #[pallet::call_index(0)] #[pallet::weight({ let dispatch_info = call.get_dispatch_info(); (dispatch_info.weight, dispatch_info.class) @@ -176,6 +177,7 @@ pub mod pallet { /// - O(1). /// - The weight of this call is defined by the caller.
/// # </weight> + #[pallet::call_index(1)] #[pallet::weight((*_weight, call.get_dispatch_info().class))] pub fn sudo_unchecked_weight( origin: OriginFor<T>, @@ -202,6 +204,7 @@ pub mod pallet { /// - Limited storage reads. /// - One DB change. /// # </weight> + #[pallet::call_index(2)] #[pallet::weight(0)] pub fn set_key( origin: OriginFor<T>, @@ -229,6 +232,7 @@ pub mod pallet { /// - One DB write (event). /// - Weight of derivative `call` execution + 10,000. /// # </weight> + #[pallet::call_index(3)] #[pallet::weight({ let dispatch_info = call.get_dispatch_info(); ( diff --git a/frame/sudo/src/mock.rs b/frame/sudo/src/mock.rs index db2ad4d563910..639e81ceaa308 100644 --- a/frame/sudo/src/mock.rs +++ b/frame/sudo/src/mock.rs @@ -49,6 +49,7 @@ pub mod logger { #[pallet::call] impl<T: Config> Pallet<T> { + #[pallet::call_index(0)] #[pallet::weight(*weight)] pub fn privileged_i32_log( origin: OriginFor<T>, @@ -62,6 +63,7 @@ pub mod logger { Ok(().into()) } + #[pallet::call_index(1)] #[pallet::weight(*weight)] pub fn non_privileged_log( origin: OriginFor<T>, diff --git a/frame/support/src/traits.rs b/frame/support/src/traits.rs index 3a831d9c27cc6..63c86c1f68459 100644 --- a/frame/support/src/traits.rs +++ b/frame/support/src/traits.rs @@ -20,7 +20,6 @@ //! NOTE: If you're looking for `parameter_types`, it has moved into the top-level module. pub mod tokens; -#[allow(deprecated)] pub use tokens::{ currency::{ ActiveIssuanceOf, Currency, LockIdentifier, LockableCurrency, NamedReservableCurrency, @@ -113,6 +112,12 @@ pub use voting::{ mod preimages; pub use preimages::{Bounded, BoundedInline, FetchResult, Hash, QueryPreimage, StorePreimage}; +mod messages; +pub use messages::{ + EnqueueMessage, ExecuteOverweightError, Footprint, ProcessMessage, ProcessMessageError, + ServiceQueues, +}; + #[cfg(feature = "try-runtime")] mod try_runtime; #[cfg(feature = "try-runtime")] diff --git a/frame/support/src/traits/messages.rs b/frame/support/src/traits/messages.rs new file mode 100644 index 0000000000000..9b86c421ad9e0 --- /dev/null +++ b/frame/support/src/traits/messages.rs @@ -0,0 +1,202 @@ +// This file is part of Substrate. + +// Copyright (C) 2019-2022 Parity Technologies (UK) Ltd. +// SPDX-License-Identifier: Apache-2.0 + +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +//! Traits for managing message queuing and handling. + +use codec::{Decode, Encode, FullCodec, MaxEncodedLen}; +use scale_info::TypeInfo; +use sp_core::{ConstU32, Get, TypedGet}; +use sp_runtime::{traits::Convert, BoundedSlice, RuntimeDebug}; +use sp_std::{fmt::Debug, marker::PhantomData, prelude::*}; +use sp_weights::Weight; + +/// Errors that can happen when attempting to process a message with +/// [`ProcessMessage::process_message()`]. +#[derive(Copy, Clone, Eq, PartialEq, Encode, Decode, TypeInfo, RuntimeDebug)] +pub enum ProcessMessageError { + /// The message data format is unknown (e.g. unrecognised header). + BadFormat, + /// The message data is bad (e.g. decoding returns an error). + Corrupt, + /// The message format is unsupported (e.g. old XCM version).
+ Unsupported, + /// Message processing was not attempted because it was not certain that the weight limit + /// would be respected. The parameter gives the maximum weight which the message could take + /// to process. + Overweight(Weight), +} + +/// Can process messages from a specific origin. +pub trait ProcessMessage { + /// The transport from where a message originates. + type Origin: FullCodec + MaxEncodedLen + Clone + Eq + PartialEq + TypeInfo + Debug; + + /// Process the given message, using no more than `weight_limit` in weight to do so. + fn process_message( + message: &[u8], + origin: Self::Origin, + weight_limit: Weight, + ) -> Result<(bool, Weight), ProcessMessageError>; +} + +/// Errors that can happen when attempting to execute an overweight message with +/// [`ServiceQueues::execute_overweight()`]. +#[derive(Eq, PartialEq, RuntimeDebug)] +pub enum ExecuteOverweightError { + /// The referenced message was not found. + NotFound, + /// The available weight was insufficient to execute the message. + InsufficientWeight, +} + +/// Can service queues and execute overweight messages. +pub trait ServiceQueues { + /// Addresses a specific overweight message. + type OverweightMessageAddress; + + /// Service all message queues in some fair manner. + /// + /// - `weight_limit`: The maximum amount of dynamic weight that this call can use. + /// + /// Returns the dynamic weight used by this call; is never greater than `weight_limit`. + fn service_queues(weight_limit: Weight) -> Weight; + + /// Executes a message that could not be executed by [`Self::service_queues()`] because it was + /// temporarily overweight. + fn execute_overweight( + _weight_limit: Weight, + _address: Self::OverweightMessageAddress, + ) -> Result { + Err(ExecuteOverweightError::NotFound) + } +} + +/// The resource footprint of a queue. +#[derive(Default, Copy, Clone, Eq, PartialEq, RuntimeDebug)] +pub struct Footprint { + pub count: u64, + pub size: u64, +} + +/// Can enqueue messages for multiple origins. +pub trait EnqueueMessage { + /// The maximal length any enqueued message may have. + type MaxMessageLen: Get; + + /// Enqueue a single `message` from a specific `origin`. + fn enqueue_message(message: BoundedSlice, origin: Origin); + + /// Enqueue multiple `messages` from a specific `origin`. + fn enqueue_messages<'a>( + messages: impl Iterator>, + origin: Origin, + ); + + /// Any remaining unprocessed messages should happen only lazily, not proactively. + fn sweep_queue(origin: Origin); + + /// Return the state footprint of the given queue. + fn footprint(origin: Origin) -> Footprint; +} + +impl EnqueueMessage for () { + type MaxMessageLen = ConstU32<0>; + fn enqueue_message(_: BoundedSlice, _: Origin) {} + fn enqueue_messages<'a>( + _: impl Iterator>, + _: Origin, + ) { + } + fn sweep_queue(_: Origin) {} + fn footprint(_: Origin) -> Footprint { + Footprint::default() + } +} + +/// Transform the origin of an [`EnqueueMessage`] via `C::convert`. 
+pub struct TransformOrigin(PhantomData<(E, O, N, C)>); +impl, O: MaxEncodedLen, N: MaxEncodedLen, C: Convert> EnqueueMessage + for TransformOrigin +{ + type MaxMessageLen = E::MaxMessageLen; + + fn enqueue_message(message: BoundedSlice, origin: N) { + E::enqueue_message(message, C::convert(origin)); + } + + fn enqueue_messages<'a>( + messages: impl Iterator>, + origin: N, + ) { + E::enqueue_messages(messages, C::convert(origin)); + } + + fn sweep_queue(origin: N) { + E::sweep_queue(C::convert(origin)); + } + + fn footprint(origin: N) -> Footprint { + E::footprint(C::convert(origin)) + } +} + +/// Handles incoming messages for a single origin. +pub trait HandleMessage { + /// The maximal length any enqueued message may have. + type MaxMessageLen: Get; + + /// Enqueue a single `message` with an implied origin. + fn handle_message(message: BoundedSlice); + + /// Enqueue multiple `messages` from an implied origin. + fn handle_messages<'a>( + messages: impl Iterator>, + ); + + /// Any remaining unprocessed messages should happen only lazily, not proactively. + fn sweep_queue(); + + /// Return the state footprint of the queue. + fn footprint() -> Footprint; +} + +/// Adapter type to transform an [`EnqueueMessage`] with an origin into a [`HandleMessage`] impl. +pub struct EnqueueWithOrigin(PhantomData<(E, O)>); +impl, O: TypedGet> HandleMessage for EnqueueWithOrigin +where + O::Type: MaxEncodedLen, +{ + type MaxMessageLen = E::MaxMessageLen; + + fn handle_message(message: BoundedSlice) { + E::enqueue_message(message, O::get()); + } + + fn handle_messages<'a>( + messages: impl Iterator>, + ) { + E::enqueue_messages(messages, O::get()); + } + + fn sweep_queue() { + E::sweep_queue(O::get()); + } + + fn footprint() -> Footprint { + E::footprint(O::get()) + } +} diff --git a/frame/support/src/traits/tokens/currency.rs b/frame/support/src/traits/tokens/currency.rs index 29603198e9a2b..48247b6021798 100644 --- a/frame/support/src/traits/tokens/currency.rs +++ b/frame/support/src/traits/tokens/currency.rs @@ -32,10 +32,7 @@ use sp_std::fmt::Debug; mod reservable; pub use reservable::{NamedReservableCurrency, ReservableCurrency}; mod lockable; - -#[deprecated(note = "Deprecated in favour of using fungibles::Lockable trait directly")] -pub use super::fungibles::{LockIdentifier, Lockable as LockableCurrency}; -pub use lockable::VestingSchedule; +pub use lockable::{LockIdentifier, LockableCurrency, VestingSchedule}; /// Abstraction over a fungible assets system. pub trait Currency { diff --git a/frame/support/src/traits/tokens/currency/lockable.rs b/frame/support/src/traits/tokens/currency/lockable.rs index 5b7cad3b5c1d3..a10edd6e3e874 100644 --- a/frame/support/src/traits/tokens/currency/lockable.rs +++ b/frame/support/src/traits/tokens/currency/lockable.rs @@ -17,8 +17,52 @@ //! The lockable currency trait and some associated types. -use super::Currency; -use crate::dispatch::DispatchResult; +use super::{super::misc::WithdrawReasons, Currency}; +use crate::{dispatch::DispatchResult, traits::misc::Get}; + +/// An identifier for a lock. Used for disambiguating different locks so that +/// they can be individually replaced or removed. +pub type LockIdentifier = [u8; 8]; + +/// A currency whose accounts can have liquidity restrictions. +pub trait LockableCurrency: Currency { + /// The quantity used to denote time; usually just a `BlockNumber`. + type Moment; + + /// The maximum number of locks a user should have on their account. + type MaxLocks: Get; + + /// Create a new balance lock on account `who`. 
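With the `messages.rs` traits in place, a hedged sketch of a `ProcessMessage` implementation may help; the processor type, its flat per-message cost, and the `u32` payload are all hypothetical, not part of this diff:

```rust
use codec::Decode;
use frame_support::traits::{ProcessMessage, ProcessMessageError};
use sp_weights::Weight;

/// Toy processor: each message is an encoded `u32` and costs a flat weight.
pub struct CounterMessageProcessor;

impl ProcessMessage for CounterMessageProcessor {
	// A single anonymous origin; real transports would use a richer type.
	type Origin = ();

	fn process_message(
		message: &[u8],
		_origin: Self::Origin,
		weight_limit: Weight,
	) -> Result<(bool, Weight), ProcessMessageError> {
		// Assumed flat cost; a real implementation would benchmark this.
		let required = Weight::from_ref_time(10_000);
		if !required.all_lte(weight_limit) {
			// Report how much weight would be needed, so the queue can retry
			// the message later via `ServiceQueues::execute_overweight`.
			return Err(ProcessMessageError::Overweight(required))
		}
		let _value = u32::decode(&mut &message[..]).map_err(|_| ProcessMessageError::Corrupt)?;
		// `true`: the message is fully processed and must not be kept.
		Ok((true, required))
	}
}
```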
+ /// + /// If the new lock is valid (i.e. not already expired), it will push the struct to + /// the `Locks` vec in storage. Note that you can lock more funds than a user has. + /// + /// If the lock `id` already exists, this will update it. + fn set_lock( + id: LockIdentifier, + who: &AccountId, + amount: Self::Balance, + reasons: WithdrawReasons, + ); + + /// Changes a balance lock (selected by `id`) so that it becomes less liquid in all + /// parameters or creates a new one if it does not exist. + /// + /// Calling `extend_lock` on an existing lock `id` differs from `set_lock` in that it + /// applies the most severe constraints of the two, while `set_lock` replaces the lock + /// with the new parameters. As in, `extend_lock` will set: + /// - maximum `amount` + /// - bitwise mask of all `reasons` + fn extend_lock( + id: LockIdentifier, + who: &AccountId, + amount: Self::Balance, + reasons: WithdrawReasons, + ); + + /// Remove an existing lock. + fn remove_lock(id: LockIdentifier, who: &AccountId); +} /// A vesting schedule over a currency. This allows a particular currency to have vesting limits /// applied to it. diff --git a/frame/support/src/traits/tokens/fungible.rs b/frame/support/src/traits/tokens/fungible.rs index d11959fd7c5d2..05e109b870ec0 100644 --- a/frame/support/src/traits/tokens/fungible.rs +++ b/frame/support/src/traits/tokens/fungible.rs @@ -29,7 +29,6 @@ use sp_runtime::traits::Saturating; mod balanced; mod imbalance; - pub use balanced::{Balanced, Unbalanced}; pub use imbalance::{CreditOf, DebtOf, HandleImbalanceDrop, Imbalance}; diff --git a/frame/support/src/traits/tokens/fungibles.rs b/frame/support/src/traits/tokens/fungibles.rs index 045ecd05134c2..a29cb974fe450 100644 --- a/frame/support/src/traits/tokens/fungibles.rs +++ b/frame/support/src/traits/tokens/fungibles.rs @@ -33,9 +33,7 @@ pub mod metadata; pub use balanced::{Balanced, Unbalanced}; mod imbalance; pub use imbalance::{CreditOf, DebtOf, HandleImbalanceDrop, Imbalance}; -mod lockable; pub mod roles; -pub use lockable::{LockIdentifier, Lockable}; /// Trait for providing balance-inspection access to a set of named fungible assets. pub trait Inspect { diff --git a/frame/support/src/traits/tokens/fungibles/lockable.rs b/frame/support/src/traits/tokens/fungibles/lockable.rs deleted file mode 100644 index 185b40eae9b28..0000000000000 --- a/frame/support/src/traits/tokens/fungibles/lockable.rs +++ /dev/null @@ -1,65 +0,0 @@ -// This file is part of Substrate. - -// Copyright (C) 2019-2022 Parity Technologies (UK) Ltd. -// SPDX-License-Identifier: Apache-2.0 - -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -//! The Lockable trait and some associated types. - -use super::{super::misc::WithdrawReasons, currency::Currency}; -use crate::traits::misc::Get; - -/// An identifier for a lock. Used for disambiguating different locks so that -/// they can be individually replaced or removed. -pub type LockIdentifier = [u8; 8]; - -/// A currency whose accounts can have liquidity restrictions. 
-pub trait Lockable: Currency { - /// The quantity used to denote time; usually just a `BlockNumber`. - type Moment; - - /// The maximum number of locks a user should have on their account. - type MaxLocks: Get; - - /// Create a new balance lock on account `who`. - /// - /// If the new lock is valid (i.e. not already expired), it will push the struct to - /// the `Locks` vec in storage. Note that you can lock more funds than a user has. - /// - /// If the lock `id` already exists, this will update it. - fn set_lock( - id: LockIdentifier, - who: &AccountId, - amount: Self::Balance, - reasons: WithdrawReasons, - ); - - /// Changes a balance lock (selected by `id`) so that it becomes less liquid in all - /// parameters or creates a new one if it does not exist. - /// - /// Calling `extend_lock` on an existing lock `id` differs from `set_lock` in that it - /// applies the most severe constraints of the two, while `set_lock` replaces the lock - /// with the new parameters. As in, `extend_lock` will set: - /// - maximum `amount` - /// - bitwise mask of all `reasons` - fn extend_lock( - id: LockIdentifier, - who: &AccountId, - amount: Self::Balance, - reasons: WithdrawReasons, - ); - - /// Remove an existing lock. - fn remove_lock(id: LockIdentifier, who: &AccountId); -} diff --git a/frame/support/src/weights/block_weights.rs b/frame/support/src/weights/block_weights.rs index 5c8e1f1c86e9d..b68c1fb508b01 100644 --- a/frame/support/src/weights/block_weights.rs +++ b/frame/support/src/weights/block_weights.rs @@ -37,7 +37,7 @@ // --repeat=100 use sp_core::parameter_types; -use sp_weights::{constants::WEIGHT_PER_NANOS, Weight}; +use sp_weights::{constants::WEIGHT_REF_TIME_PER_NANOS, Weight}; parameter_types! { /// Time to execute an empty block. @@ -53,7 +53,8 @@ parameter_types! { /// 99th: 390_723 /// 95th: 365_799 /// 75th: 361_582 - pub const BlockExecutionWeight: Weight = WEIGHT_PER_NANOS.saturating_mul(358_523); + pub const BlockExecutionWeight: Weight = + Weight::from_ref_time(WEIGHT_REF_TIME_PER_NANOS.saturating_mul(358_523)); } #[cfg(test)] @@ -69,12 +70,12 @@ mod test_weights { // At least 100 µs. assert!( - w.ref_time() >= 100u64 * constants::WEIGHT_PER_MICROS.ref_time(), + w.ref_time() >= 100u64 * constants::WEIGHT_REF_TIME_PER_MICROS, "Weight should be at least 100 µs." ); // At most 50 ms. assert!( - w.ref_time() <= 50u64 * constants::WEIGHT_PER_MILLIS.ref_time(), + w.ref_time() <= 50u64 * constants::WEIGHT_REF_TIME_PER_MILLIS, "Weight should be at most 50 ms." ); } diff --git a/frame/support/src/weights/extrinsic_weights.rs b/frame/support/src/weights/extrinsic_weights.rs index 1db2281dfe488..ced1fb91621f6 100644 --- a/frame/support/src/weights/extrinsic_weights.rs +++ b/frame/support/src/weights/extrinsic_weights.rs @@ -37,7 +37,7 @@ // --repeat=100 use sp_core::parameter_types; -use sp_weights::{constants::WEIGHT_PER_NANOS, Weight}; +use sp_weights::{constants::WEIGHT_REF_TIME_PER_NANOS, Weight}; parameter_types! { /// Time to execute a NO-OP extrinsic, for example `System::remark`. @@ -53,7 +53,8 @@ parameter_types! { /// 99th: 99_202 /// 95th: 99_163 /// 75th: 99_030 - pub const ExtrinsicBaseWeight: Weight = WEIGHT_PER_NANOS.saturating_mul(98_974); + pub const ExtrinsicBaseWeight: Weight = + Weight::from_ref_time(WEIGHT_REF_TIME_PER_NANOS.saturating_mul(98_974)); } #[cfg(test)] @@ -69,12 +70,12 @@ mod test_weights { // At least 10 µs. 
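Since `LockableCurrency` moves back into `currency::lockable` wholesale, a short usage sketch may be useful; the lock identifier and the generic helper are illustrative only, not from this diff:

```rust
use frame_support::traits::{LockIdentifier, LockableCurrency, WithdrawReasons};

const EXAMPLE_ID: LockIdentifier = *b"example ";

/// Lock `amount` for `who`, then widen the lock's scope.
fn lock_then_widen<AccountId, C: LockableCurrency<AccountId>>(who: &AccountId, amount: C::Balance) {
	// set_lock replaces any existing lock stored under EXAMPLE_ID.
	C::set_lock(EXAMPLE_ID, who, amount, WithdrawReasons::TRANSFER);
	// extend_lock keeps the larger amount and ORs the reasons together.
	C::extend_lock(EXAMPLE_ID, who, amount, WithdrawReasons::all());
}
```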
assert!( - w.ref_time() >= 10u64 * constants::WEIGHT_PER_MICROS.ref_time(), + w.ref_time() >= 10u64 * constants::WEIGHT_REF_TIME_PER_MICROS, "Weight should be at least 10 µs." ); // At most 1 ms. assert!( - w.ref_time() <= constants::WEIGHT_PER_MILLIS.ref_time(), + w.ref_time() <= constants::WEIGHT_REF_TIME_PER_MILLIS, "Weight should be at most 1 ms." ); } diff --git a/frame/support/src/weights/paritydb_weights.rs b/frame/support/src/weights/paritydb_weights.rs index 344e6cf0ddb6e..6fd1112ee2947 100644 --- a/frame/support/src/weights/paritydb_weights.rs +++ b/frame/support/src/weights/paritydb_weights.rs @@ -24,8 +24,8 @@ pub mod constants { /// ParityDB can be enabled with a feature flag, but is still experimental. These weights /// are available for brave runtime engineers who may want to try this out as default. pub const ParityDbWeight: RuntimeDbWeight = RuntimeDbWeight { - read: 8_000 * constants::WEIGHT_PER_NANOS.ref_time(), - write: 50_000 * constants::WEIGHT_PER_NANOS.ref_time(), + read: 8_000 * constants::WEIGHT_REF_TIME_PER_NANOS, + write: 50_000 * constants::WEIGHT_REF_TIME_PER_NANOS, }; } @@ -41,20 +41,20 @@ pub mod constants { fn sane() { // At least 1 µs. assert!( - W::get().reads(1).ref_time() >= constants::WEIGHT_PER_MICROS.ref_time(), + W::get().reads(1).ref_time() >= constants::WEIGHT_REF_TIME_PER_MICROS, "Read weight should be at least 1 µs." ); assert!( - W::get().writes(1).ref_time() >= constants::WEIGHT_PER_MICROS.ref_time(), + W::get().writes(1).ref_time() >= constants::WEIGHT_REF_TIME_PER_MICROS, "Write weight should be at least 1 µs." ); // At most 1 ms. assert!( - W::get().reads(1).ref_time() <= constants::WEIGHT_PER_MILLIS.ref_time(), + W::get().reads(1).ref_time() <= constants::WEIGHT_REF_TIME_PER_MILLIS, "Read weight should be at most 1 ms." ); assert!( - W::get().writes(1).ref_time() <= constants::WEIGHT_PER_MILLIS.ref_time(), + W::get().writes(1).ref_time() <= constants::WEIGHT_REF_TIME_PER_MILLIS, "Write weight should be at most 1 ms." ); } diff --git a/frame/support/src/weights/rocksdb_weights.rs b/frame/support/src/weights/rocksdb_weights.rs index 4dec2d8c877ea..b18b387de9957 100644 --- a/frame/support/src/weights/rocksdb_weights.rs +++ b/frame/support/src/weights/rocksdb_weights.rs @@ -24,8 +24,8 @@ pub mod constants { /// By default, Substrate uses RocksDB, so this will be the weight used throughout /// the runtime. pub const RocksDbWeight: RuntimeDbWeight = RuntimeDbWeight { - read: 25_000 * constants::WEIGHT_PER_NANOS.ref_time(), - write: 100_000 * constants::WEIGHT_PER_NANOS.ref_time(), + read: 25_000 * constants::WEIGHT_REF_TIME_PER_NANOS, + write: 100_000 * constants::WEIGHT_REF_TIME_PER_NANOS, }; } @@ -41,20 +41,20 @@ pub mod constants { fn sane() { // At least 1 µs. assert!( - W::get().reads(1).ref_time() >= constants::WEIGHT_PER_MICROS.ref_time(), + W::get().reads(1).ref_time() >= constants::WEIGHT_REF_TIME_PER_MICROS, "Read weight should be at least 1 µs." ); assert!( - W::get().writes(1).ref_time() >= constants::WEIGHT_PER_MICROS.ref_time(), + W::get().writes(1).ref_time() >= constants::WEIGHT_REF_TIME_PER_MICROS, "Write weight should be at least 1 µs." ); // At most 1 ms. assert!( - W::get().reads(1).ref_time() <= constants::WEIGHT_PER_MILLIS.ref_time(), + W::get().reads(1).ref_time() <= constants::WEIGHT_REF_TIME_PER_MILLIS, "Read weight should be at most 1 ms." 
); assert!( - W::get().writes(1).ref_time() <= constants::WEIGHT_PER_MILLIS.ref_time(), + W::get().writes(1).ref_time() <= constants::WEIGHT_REF_TIME_PER_MILLIS, "Write weight should be at most 1 ms." ); } diff --git a/frame/support/test/tests/pallet.rs b/frame/support/test/tests/pallet.rs index 0fd32dad2242a..c0376d5aa450f 100644 --- a/frame/support/test/tests/pallet.rs +++ b/frame/support/test/tests/pallet.rs @@ -192,6 +192,7 @@ pub mod pallet { T::AccountId: From + From + SomeAssociation1, { /// Doc comment put in metadata + #[pallet::call_index(0)] #[pallet::weight(Weight::from_ref_time(*_foo as u64))] pub fn foo( origin: OriginFor, @@ -206,6 +207,7 @@ pub mod pallet { } /// Doc comment put in metadata + #[pallet::call_index(1)] #[pallet::weight(1)] pub fn foo_storage_layer( _origin: OriginFor, @@ -220,6 +222,7 @@ pub mod pallet { } // Test for DispatchResult return type + #[pallet::call_index(2)] #[pallet::weight(1)] pub fn foo_no_post_info(_origin: OriginFor) -> DispatchResult { Ok(()) diff --git a/frame/support/test/tests/pallet_compatibility.rs b/frame/support/test/tests/pallet_compatibility.rs index 398137d644ee4..300fb9a40cf4e 100644 --- a/frame/support/test/tests/pallet_compatibility.rs +++ b/frame/support/test/tests/pallet_compatibility.rs @@ -141,6 +141,7 @@ pub mod pallet { #[pallet::call] impl Pallet { + #[pallet::call_index(0)] #[pallet::weight(>::into(new_value.clone()))] pub fn set_dummy( origin: OriginFor, diff --git a/frame/support/test/tests/pallet_compatibility_instance.rs b/frame/support/test/tests/pallet_compatibility_instance.rs index e8b5fe9fa33d4..79370d911b943 100644 --- a/frame/support/test/tests/pallet_compatibility_instance.rs +++ b/frame/support/test/tests/pallet_compatibility_instance.rs @@ -127,6 +127,7 @@ pub mod pallet { #[pallet::call] impl, I: 'static> Pallet { + #[pallet::call_index(0)] #[pallet::weight(>::into(new_value.clone()))] pub fn set_dummy( origin: OriginFor, diff --git a/frame/support/test/tests/pallet_instance.rs b/frame/support/test/tests/pallet_instance.rs index 7e05e2ecf783b..d8ad13ceda1dd 100644 --- a/frame/support/test/tests/pallet_instance.rs +++ b/frame/support/test/tests/pallet_instance.rs @@ -82,6 +82,7 @@ pub mod pallet { #[pallet::call] impl, I: 'static> Pallet { /// Doc comment put in metadata + #[pallet::call_index(0)] #[pallet::weight(Weight::from_ref_time(*_foo as u64))] pub fn foo( origin: OriginFor, @@ -93,6 +94,7 @@ pub mod pallet { } /// Doc comment put in metadata + #[pallet::call_index(1)] #[pallet::weight(1)] pub fn foo_storage_layer( origin: OriginFor, diff --git a/frame/support/test/tests/pallet_ui/storage_ensure_span_are_ok_on_wrong_gen.stderr b/frame/support/test/tests/pallet_ui/storage_ensure_span_are_ok_on_wrong_gen.stderr index a1b1e364bf2b0..999d8585c221a 100644 --- a/frame/support/test/tests/pallet_ui/storage_ensure_span_are_ok_on_wrong_gen.stderr +++ b/frame/support/test/tests/pallet_ui/storage_ensure_span_are_ok_on_wrong_gen.stderr @@ -69,7 +69,7 @@ error[E0277]: the trait bound `Bar: TypeInfo` is not satisfied (A, B, C, D) (A, B, C, D, E) (A, B, C, D, E, F) - and 161 others + and 162 others = note: required for `Bar` to implement `StaticTypeInfo` = note: required for `frame_support::pallet_prelude::StorageValue<_GeneratedPrefixForStorageFoo, Bar>` to implement `StorageEntryMetadataBuilder` diff --git a/frame/support/test/tests/pallet_ui/storage_ensure_span_are_ok_on_wrong_gen_unnamed.stderr b/frame/support/test/tests/pallet_ui/storage_ensure_span_are_ok_on_wrong_gen_unnamed.stderr index 
0c712324aa000..e2870ffb9e86f 100644 --- a/frame/support/test/tests/pallet_ui/storage_ensure_span_are_ok_on_wrong_gen_unnamed.stderr +++ b/frame/support/test/tests/pallet_ui/storage_ensure_span_are_ok_on_wrong_gen_unnamed.stderr @@ -69,7 +69,7 @@ error[E0277]: the trait bound `Bar: TypeInfo` is not satisfied (A, B, C, D) (A, B, C, D, E) (A, B, C, D, E, F) - and 161 others + and 162 others = note: required for `Bar` to implement `StaticTypeInfo` = note: required for `frame_support::pallet_prelude::StorageValue<_GeneratedPrefixForStorageFoo, Bar>` to implement `StorageEntryMetadataBuilder` diff --git a/frame/support/test/tests/storage_layers.rs b/frame/support/test/tests/storage_layers.rs index 6fbbb8ac67bd7..cff81c0bea2ed 100644 --- a/frame/support/test/tests/storage_layers.rs +++ b/frame/support/test/tests/storage_layers.rs @@ -46,6 +46,7 @@ pub mod pallet { #[pallet::call] impl Pallet { + #[pallet::call_index(0)] #[pallet::weight(1)] pub fn set_value(_origin: OriginFor, value: u32) -> DispatchResult { Value::::put(value); diff --git a/frame/system/src/lib.rs b/frame/system/src/lib.rs index f7e3849beeb8d..b41083538a325 100644 --- a/frame/system/src/lib.rs +++ b/frame/system/src/lib.rs @@ -372,6 +372,7 @@ pub mod pallet { /// # /// - `O(1)` /// # + #[pallet::call_index(0)] #[pallet::weight(T::SystemWeightInfo::remark(_remark.len() as u32))] pub fn remark(origin: OriginFor, _remark: Vec) -> DispatchResultWithPostInfo { ensure_signed_or_root(origin)?; @@ -379,6 +380,7 @@ pub mod pallet { } /// Set the number of pages in the WebAssembly environment's heap. + #[pallet::call_index(1)] #[pallet::weight((T::SystemWeightInfo::set_heap_pages(), DispatchClass::Operational))] pub fn set_heap_pages(origin: OriginFor, pages: u64) -> DispatchResultWithPostInfo { ensure_root(origin)?; @@ -399,6 +401,7 @@ pub mod pallet { /// The weight of this function is dependent on the runtime, but generally this is very /// expensive. We will treat this as a full block. /// # + #[pallet::call_index(2)] #[pallet::weight((T::BlockWeights::get().max_block, DispatchClass::Operational))] pub fn set_code(origin: OriginFor, code: Vec) -> DispatchResultWithPostInfo { ensure_root(origin)?; @@ -416,6 +419,7 @@ pub mod pallet { /// - 1 event. /// The weight of this function is dependent on the runtime. We will treat this as a full /// block. # + #[pallet::call_index(3)] #[pallet::weight((T::BlockWeights::get().max_block, DispatchClass::Operational))] pub fn set_code_without_checks( origin: OriginFor, @@ -427,6 +431,7 @@ pub mod pallet { } /// Set some items of storage. + #[pallet::call_index(4)] #[pallet::weight(( T::SystemWeightInfo::set_storage(items.len() as u32), DispatchClass::Operational, @@ -443,6 +448,7 @@ pub mod pallet { } /// Kill some items from storage. + #[pallet::call_index(5)] #[pallet::weight(( T::SystemWeightInfo::kill_storage(keys.len() as u32), DispatchClass::Operational, @@ -459,6 +465,7 @@ pub mod pallet { /// /// **NOTE:** We rely on the Root origin to provide us the number of subkeys under /// the prefix we are removing to accurately calculate the weight of this function. + #[pallet::call_index(6)] #[pallet::weight(( T::SystemWeightInfo::kill_prefix(_subkeys.saturating_add(1)), DispatchClass::Operational, @@ -474,6 +481,7 @@ pub mod pallet { } /// Make some on-chain remark and emit event. 
+ #[pallet::call_index(7)] #[pallet::weight(T::SystemWeightInfo::remark_with_event(remark.len() as u32))] pub fn remark_with_event( origin: OriginFor, diff --git a/frame/system/src/limits.rs b/frame/system/src/limits.rs index eb95b699eba32..54d27c5b9e86d 100644 --- a/frame/system/src/limits.rs +++ b/frame/system/src/limits.rs @@ -208,7 +208,7 @@ pub struct BlockWeights { impl Default for BlockWeights { fn default() -> Self { Self::with_sensible_defaults( - Weight::from_parts(constants::WEIGHT_PER_SECOND.ref_time(), u64::MAX), + Weight::from_parts(constants::WEIGHT_REF_TIME_PER_SECOND, u64::MAX), DEFAULT_NORMAL_RATIO, ) } diff --git a/frame/timestamp/src/lib.rs b/frame/timestamp/src/lib.rs index 6a7f849d1329a..e859474c2cb9e 100644 --- a/frame/timestamp/src/lib.rs +++ b/frame/timestamp/src/lib.rs @@ -198,6 +198,7 @@ pub mod pallet { /// `on_finalize`) /// - 1 event handler `on_timestamp_set`. Must be `O(1)`. /// # + #[pallet::call_index(0)] #[pallet::weight(( T::WeightInfo::set(), DispatchClass::Mandatory diff --git a/frame/tips/src/lib.rs b/frame/tips/src/lib.rs index 9313a26e52e00..dd9ebc9813233 100644 --- a/frame/tips/src/lib.rs +++ b/frame/tips/src/lib.rs @@ -235,6 +235,7 @@ pub mod pallet { /// - DbReads: `Reasons`, `Tips` /// - DbWrites: `Reasons`, `Tips` /// # + #[pallet::call_index(0)] #[pallet::weight(>::WeightInfo::report_awesome(reason.len() as u32))] pub fn report_awesome( origin: OriginFor, @@ -292,6 +293,7 @@ pub mod pallet { /// - DbReads: `Tips`, `origin account` /// - DbWrites: `Reasons`, `Tips`, `origin account` /// # + #[pallet::call_index(1)] #[pallet::weight(>::WeightInfo::retract_tip())] pub fn retract_tip(origin: OriginFor, hash: T::Hash) -> DispatchResult { let who = ensure_signed(origin)?; @@ -330,6 +332,7 @@ pub mod pallet { /// - DbReads: `Tippers`, `Reasons` /// - DbWrites: `Reasons`, `Tips` /// # + #[pallet::call_index(2)] #[pallet::weight(>::WeightInfo::tip_new(reason.len() as u32, T::Tippers::max_len() as u32))] pub fn tip_new( origin: OriginFor, @@ -384,6 +387,7 @@ pub mod pallet { /// - DbReads: `Tippers`, `Tips` /// - DbWrites: `Tips` /// # + #[pallet::call_index(3)] #[pallet::weight(>::WeightInfo::tip(T::Tippers::max_len() as u32))] pub fn tip( origin: OriginFor, @@ -417,6 +421,7 @@ pub mod pallet { /// - DbReads: `Tips`, `Tippers`, `tip finder` /// - DbWrites: `Reasons`, `Tips`, `Tippers`, `tip finder` /// # + #[pallet::call_index(4)] #[pallet::weight(>::WeightInfo::close_tip(T::Tippers::max_len() as u32))] pub fn close_tip(origin: OriginFor, hash: T::Hash) -> DispatchResult { ensure_signed(origin)?; @@ -443,6 +448,7 @@ pub mod pallet { /// `T` is charged as upper bound given by `ContainsLengthBound`. /// The actual cost depends on the implementation of `T::Tippers`. 
/// # + #[pallet::call_index(5)] #[pallet::weight(>::WeightInfo::slash_tip(T::Tippers::max_len() as u32))] pub fn slash_tip(origin: OriginFor, hash: T::Hash) -> DispatchResult { T::RejectOrigin::ensure_origin(origin)?; diff --git a/frame/transaction-payment/asset-tx-payment/Cargo.toml b/frame/transaction-payment/asset-tx-payment/Cargo.toml index b192c4e9cd96e..8e4645a2677f9 100644 --- a/frame/transaction-payment/asset-tx-payment/Cargo.toml +++ b/frame/transaction-payment/asset-tx-payment/Cargo.toml @@ -19,12 +19,10 @@ sp-io = { version = "7.0.0", default-features = false, path = "../../../primitiv sp-runtime = { version = "7.0.0", default-features = false, path = "../../../primitives/runtime" } sp-std = { version = "5.0.0", default-features = false, path = "../../../primitives/std" } -# optional dependencies for cargo features frame-support = { version = "4.0.0-dev", default-features = false, path = "../../support" } frame-system = { version = "4.0.0-dev", default-features = false, path = "../../system" } pallet-transaction-payment = { version = "4.0.0-dev", default-features = false, path = ".." } frame-benchmarking = { version = "4.0.0-dev", default-features = false, path = "../../benchmarking", optional = true } -pallet-assets = { default-features = false, optional = true, path = "../../assets" } # Other dependencies codec = { package = "parity-scale-codec", version = "3.0.0", default-features = false, features = ["derive"] } @@ -40,7 +38,6 @@ pallet-assets = { version = "4.0.0-dev", path = "../../assets" } pallet-authorship = { version = "4.0.0-dev", path = "../../authorship" } pallet-balances = { version = "4.0.0-dev", path = "../../balances" } - [features] default = ["std"] std = [ @@ -60,6 +57,5 @@ runtime-benchmarks = [ "frame-benchmarking/runtime-benchmarks", "sp-runtime/runtime-benchmarks", "frame-system/runtime-benchmarks", - "pallet-assets/runtime-benchmarks", ] try-runtime = ["frame-support/try-runtime"] diff --git a/frame/transaction-payment/asset-tx-payment/src/tests.rs b/frame/transaction-payment/asset-tx-payment/src/tests.rs index 02e15654f3eed..b70a88d02c6e1 100644 --- a/frame/transaction-payment/asset-tx-payment/src/tests.rs +++ b/frame/transaction-payment/asset-tx-payment/src/tests.rs @@ -173,8 +173,9 @@ impl pallet_assets::Config for Runtime { type Extra = (); type WeightInfo = (); type RemoveItemsLimit = ConstU32<1000>; - #[cfg(feature = "runtime-benchmarks")] - type BenchmarkHelper = (); + pallet_assets::runtime_benchmarks_enabled! 
{ + type BenchmarkHelper = (); + } } pub struct HardcodedAuthor; diff --git a/frame/transaction-payment/rpc/Cargo.toml b/frame/transaction-payment/rpc/Cargo.toml index 06dcaca937381..b77143201ffd4 100644 --- a/frame/transaction-payment/rpc/Cargo.toml +++ b/frame/transaction-payment/rpc/Cargo.toml @@ -14,7 +14,7 @@ targets = ["x86_64-unknown-linux-gnu"] [dependencies] codec = { package = "parity-scale-codec", version = "3.0.0" } -jsonrpsee = { version = "0.15.1", features = ["server", "macros"] } +jsonrpsee = { version = "0.16.2", features = ["client-core", "server", "macros"] } pallet-transaction-payment-rpc-runtime-api = { version = "4.0.0-dev", path = "./runtime-api" } sp-api = { version = "4.0.0-dev", path = "../../../primitives/api" } sp-blockchain = { version = "4.0.0-dev", path = "../../../primitives/blockchain" } diff --git a/frame/transaction-storage/src/lib.rs b/frame/transaction-storage/src/lib.rs index 07144c5617113..cda7610efdf87 100644 --- a/frame/transaction-storage/src/lib.rs +++ b/frame/transaction-storage/src/lib.rs @@ -188,6 +188,7 @@ pub mod pallet { /// - n*log(n) of data size, as all data is pushed to an in-memory trie. /// Additionally contains a DB write. /// # + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::store(data.len() as u32))] pub fn store(origin: OriginFor, data: Vec) -> DispatchResult { ensure!(data.len() > 0, Error::::EmptyTransaction); @@ -236,6 +237,7 @@ pub mod pallet { /// # /// - Constant. /// # + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::renew())] pub fn renew( origin: OriginFor, @@ -281,6 +283,7 @@ pub mod pallet { /// There's a DB read for each transaction. /// Here we assume a maximum of 100 probed transactions. /// # + #[pallet::call_index(2)] #[pallet::weight((T::WeightInfo::check_proof_max(), DispatchClass::Mandatory))] pub fn check_proof( origin: OriginFor, diff --git a/frame/treasury/src/lib.rs b/frame/treasury/src/lib.rs index 4aa00c348585c..0ffc53d8b7978 100644 --- a/frame/treasury/src/lib.rs +++ b/frame/treasury/src/lib.rs @@ -350,6 +350,7 @@ pub mod pallet { /// - DbReads: `ProposalCount`, `origin account` /// - DbWrites: `ProposalCount`, `Proposals`, `origin account` /// # + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::propose_spend())] pub fn propose_spend( origin: OriginFor, @@ -380,6 +381,7 @@ pub mod pallet { /// - DbReads: `Proposals`, `rejected proposer account` /// - DbWrites: `Proposals`, `rejected proposer account` /// # + #[pallet::call_index(1)] #[pallet::weight((T::WeightInfo::reject_proposal(), DispatchClass::Operational))] pub fn reject_proposal( origin: OriginFor, @@ -410,6 +412,7 @@ pub mod pallet { /// - DbReads: `Proposals`, `Approvals` /// - DbWrite: `Approvals` /// # + #[pallet::call_index(2)] #[pallet::weight((T::WeightInfo::approve_proposal(T::MaxApprovals::get()), DispatchClass::Operational))] pub fn approve_proposal( origin: OriginFor, @@ -431,6 +434,7 @@ pub mod pallet { /// /// NOTE: For record-keeping purposes, the proposer is deemed to be equivalent to the /// beneficiary. + #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::spend())] pub fn spend( origin: OriginFor, @@ -472,6 +476,7 @@ pub mod pallet { /// - `ProposalNotApproved`: The `proposal_id` supplied was not found in the approval queue, /// i.e., the proposal has not been approved. This could also mean the proposal does not /// exist altogether, thus there is no way it would have been approved in the first place. 
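The `pallet_assets::runtime_benchmarks_enabled!` block just above is produced by the new `sp_core::generate_feature_enabled_macro!` helper introduced later in this patch. A hedged sketch of the general pattern, with hypothetical names; the key property is that the gate reflects the feature state of the *defining* crate, not of the caller:

```rust
// In the crate that owns the feature (e.g. a pallet):
sp_core::generate_feature_enabled_macro!(my_feature_enabled, feature = "my-feature", $);

// In a dependent crate: the block below is compiled only when "my-feature"
// was enabled for the defining crate; otherwise it expands to nothing.
my_feature_enabled! {
	pub fn only_with_my_feature() {}
}
```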
+ #[pallet::call_index(4)] #[pallet::weight((T::WeightInfo::remove_approval(), DispatchClass::Operational))] pub fn remove_approval( origin: OriginFor, diff --git a/frame/uniques/src/lib.rs b/frame/uniques/src/lib.rs index 185d8fc0c8edd..8157817d4166e 100644 --- a/frame/uniques/src/lib.rs +++ b/frame/uniques/src/lib.rs @@ -449,6 +449,7 @@ pub mod pallet { /// Emits `Created` event when successful. /// /// Weight: `O(1)` + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::create())] pub fn create( origin: OriginFor, @@ -485,6 +486,7 @@ pub mod pallet { /// Emits `ForceCreated` event when successful. /// /// Weight: `O(1)` + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::force_create())] pub fn force_create( origin: OriginFor, @@ -520,6 +522,7 @@ pub mod pallet { /// - `n = witness.items` /// - `m = witness.item_metadatas` /// - `a = witness.attributes` + #[pallet::call_index(2)] #[pallet::weight(T::WeightInfo::destroy( witness.items, witness.item_metadatas, @@ -555,6 +558,7 @@ pub mod pallet { /// Emits `Issued` event when successful. /// /// Weight: `O(1)` + #[pallet::call_index(3)] #[pallet::weight(T::WeightInfo::mint())] pub fn mint( origin: OriginFor, @@ -584,6 +588,7 @@ pub mod pallet { /// /// Weight: `O(1)` /// Modes: `check_owner.is_some()`. + #[pallet::call_index(4)] #[pallet::weight(T::WeightInfo::burn())] pub fn burn( origin: OriginFor, @@ -622,6 +627,7 @@ pub mod pallet { /// Emits `Transferred`. /// /// Weight: `O(1)` + #[pallet::call_index(5)] #[pallet::weight(T::WeightInfo::transfer())] pub fn transfer( origin: OriginFor, @@ -658,6 +664,7 @@ pub mod pallet { /// is not permitted to call it. /// /// Weight: `O(items.len())` + #[pallet::call_index(6)] #[pallet::weight(T::WeightInfo::redeposit(items.len() as u32))] pub fn redeposit( origin: OriginFor, @@ -718,6 +725,7 @@ pub mod pallet { /// Emits `Frozen`. /// /// Weight: `O(1)` + #[pallet::call_index(7)] #[pallet::weight(T::WeightInfo::freeze())] pub fn freeze( origin: OriginFor, @@ -749,6 +757,7 @@ pub mod pallet { /// Emits `Thawed`. /// /// Weight: `O(1)` + #[pallet::call_index(8)] #[pallet::weight(T::WeightInfo::thaw())] pub fn thaw( origin: OriginFor, @@ -779,6 +788,7 @@ pub mod pallet { /// Emits `CollectionFrozen`. /// /// Weight: `O(1)` + #[pallet::call_index(9)] #[pallet::weight(T::WeightInfo::freeze_collection())] pub fn freeze_collection( origin: OriginFor, @@ -806,6 +816,7 @@ pub mod pallet { /// Emits `CollectionThawed`. /// /// Weight: `O(1)` + #[pallet::call_index(10)] #[pallet::weight(T::WeightInfo::thaw_collection())] pub fn thaw_collection( origin: OriginFor, @@ -835,6 +846,7 @@ pub mod pallet { /// Emits `OwnerChanged`. /// /// Weight: `O(1)` + #[pallet::call_index(11)] #[pallet::weight(T::WeightInfo::transfer_ownership())] pub fn transfer_ownership( origin: OriginFor, @@ -883,6 +895,7 @@ pub mod pallet { /// Emits `TeamChanged`. /// /// Weight: `O(1)` + #[pallet::call_index(12)] #[pallet::weight(T::WeightInfo::set_team())] pub fn set_team( origin: OriginFor, @@ -923,6 +936,7 @@ pub mod pallet { /// Emits `ApprovedTransfer` on success. /// /// Weight: `O(1)` + #[pallet::call_index(13)] #[pallet::weight(T::WeightInfo::approve_transfer())] pub fn approve_transfer( origin: OriginFor, @@ -976,6 +990,7 @@ pub mod pallet { /// Emits `ApprovalCancelled` on success. 
/// /// Weight: `O(1)` + #[pallet::call_index(14)] #[pallet::weight(T::WeightInfo::cancel_approval())] pub fn cancel_approval( origin: OriginFor, @@ -1028,6 +1043,7 @@ pub mod pallet { /// Emits `ItemStatusChanged` with the identity of the item. /// /// Weight: `O(1)` + #[pallet::call_index(15)] #[pallet::weight(T::WeightInfo::force_item_status())] pub fn force_item_status( origin: OriginFor, @@ -1077,6 +1093,7 @@ pub mod pallet { /// Emits `AttributeSet`. /// /// Weight: `O(1)` + #[pallet::call_index(16)] #[pallet::weight(T::WeightInfo::set_attribute())] pub fn set_attribute( origin: OriginFor, @@ -1139,6 +1156,7 @@ pub mod pallet { /// Emits `AttributeCleared`. /// /// Weight: `O(1)` + #[pallet::call_index(17)] #[pallet::weight(T::WeightInfo::clear_attribute())] pub fn clear_attribute( origin: OriginFor, @@ -1188,6 +1206,7 @@ pub mod pallet { /// Emits `MetadataSet`. /// /// Weight: `O(1)` + #[pallet::call_index(18)] #[pallet::weight(T::WeightInfo::set_metadata())] pub fn set_metadata( origin: OriginFor, @@ -1250,6 +1269,7 @@ pub mod pallet { /// Emits `MetadataCleared`. /// /// Weight: `O(1)` + #[pallet::call_index(19)] #[pallet::weight(T::WeightInfo::clear_metadata())] pub fn clear_metadata( origin: OriginFor, @@ -1299,6 +1319,7 @@ pub mod pallet { /// Emits `CollectionMetadataSet`. /// /// Weight: `O(1)` + #[pallet::call_index(20)] #[pallet::weight(T::WeightInfo::set_collection_metadata())] pub fn set_collection_metadata( origin: OriginFor, @@ -1356,6 +1377,7 @@ pub mod pallet { /// Emits `CollectionMetadataCleared`. /// /// Weight: `O(1)` + #[pallet::call_index(21)] #[pallet::weight(T::WeightInfo::clear_collection_metadata())] pub fn clear_collection_metadata( origin: OriginFor, @@ -1392,6 +1414,7 @@ pub mod pallet { /// ownership transferal. /// /// Emits `OwnershipAcceptanceChanged`. + #[pallet::call_index(22)] #[pallet::weight(T::WeightInfo::set_accept_ownership())] pub fn set_accept_ownership( origin: OriginFor, @@ -1428,6 +1451,7 @@ pub mod pallet { /// - `max_supply`: The maximum amount of items a collection could have. /// /// Emits `CollectionMaxSupplySet` event when successful. + #[pallet::call_index(23)] #[pallet::weight(T::WeightInfo::set_collection_max_supply())] pub fn set_collection_max_supply( origin: OriginFor, @@ -1467,6 +1491,7 @@ pub mod pallet { /// /// Emits `ItemPriceSet` on success if the price is not `None`. /// Emits `ItemPriceRemoved` on success if the price is `None`. + #[pallet::call_index(24)] #[pallet::weight(T::WeightInfo::set_price())] pub fn set_price( origin: OriginFor, @@ -1489,6 +1514,7 @@ pub mod pallet { /// - `bid_price`: The price the sender is willing to pay. /// /// Emits `ItemBought` on success. + #[pallet::call_index(25)] #[pallet::weight(T::WeightInfo::buy_item())] #[transactional] pub fn buy_item( diff --git a/frame/utility/src/lib.rs b/frame/utility/src/lib.rs index 00cb18e1b23aa..2d60ae15679d5 100644 --- a/frame/utility/src/lib.rs +++ b/frame/utility/src/lib.rs @@ -181,6 +181,7 @@ pub mod pallet { /// `BatchInterrupted` event is deposited, along with the number of successful calls made /// and the error of the failed call. If all were successful, then the `BatchCompleted` /// event is deposited. + #[pallet::call_index(0)] #[pallet::weight({ let dispatch_infos = calls.iter().map(|call| call.get_dispatch_info()).collect::>(); let dispatch_weight = dispatch_infos.iter() @@ -254,6 +255,7 @@ pub mod pallet { /// NOTE: Prior to version *12, this was called `as_limited_sub`. /// /// The dispatch origin for this call must be _Signed_. 
+ #[pallet::call_index(1)] #[pallet::weight({ let dispatch_info = call.get_dispatch_info(); ( @@ -302,6 +304,7 @@ pub mod pallet { /// # /// - Complexity: O(C) where C is the number of calls to be batched. /// # + #[pallet::call_index(2)] #[pallet::weight({ let dispatch_infos = calls.iter().map(|call| call.get_dispatch_info()).collect::>(); let dispatch_weight = dispatch_infos.iter() @@ -377,6 +380,7 @@ pub mod pallet { /// - One DB write (event). /// - Weight of derivative `call` execution + T::WeightInfo::dispatch_as(). /// # + #[pallet::call_index(3)] #[pallet::weight({ let dispatch_info = call.get_dispatch_info(); ( @@ -414,6 +418,7 @@ pub mod pallet { /// # /// - Complexity: O(C) where C is the number of calls to be batched. /// # + #[pallet::call_index(4)] #[pallet::weight({ let dispatch_infos = calls.iter().map(|call| call.get_dispatch_info()).collect::>(); let dispatch_weight = dispatch_infos.iter() @@ -481,6 +486,7 @@ pub mod pallet { /// Root origin to specify the weight of the call. /// /// The dispatch origin for this call must be _Root_. + #[pallet::call_index(5)] #[pallet::weight((*_weight, call.get_dispatch_info().class))] pub fn with_weight( origin: OriginFor, diff --git a/frame/utility/src/tests.rs b/frame/utility/src/tests.rs index d48ce139d839c..f9d6a16c1a0d4 100644 --- a/frame/utility/src/tests.rs +++ b/frame/utility/src/tests.rs @@ -53,11 +53,13 @@ pub mod example { #[pallet::call] impl Pallet { + #[pallet::call_index(0)] #[pallet::weight(*_weight)] pub fn noop(_origin: OriginFor, _weight: Weight) -> DispatchResult { Ok(()) } + #[pallet::call_index(1)] #[pallet::weight(*_start_weight)] pub fn foobar( origin: OriginFor, @@ -78,6 +80,7 @@ pub mod example { } } + #[pallet::call_index(2)] #[pallet::weight(0)] pub fn big_variant(_origin: OriginFor, _arg: [u8; 400]) -> DispatchResult { Ok(()) @@ -105,6 +108,7 @@ mod mock_democracy { #[pallet::call] impl Pallet { + #[pallet::call_index(3)] #[pallet::weight(0)] pub fn external_propose_majority(origin: OriginFor) -> DispatchResult { T::ExternalMajorityOrigin::ensure_origin(origin)?; diff --git a/frame/vesting/src/lib.rs b/frame/vesting/src/lib.rs index 226f539a740f8..3439608af3ce4 100644 --- a/frame/vesting/src/lib.rs +++ b/frame/vesting/src/lib.rs @@ -62,7 +62,7 @@ use frame_support::{ ensure, storage::bounded_vec::BoundedVec, traits::{ - fungibles, fungibles::Lockable, Currency, ExistenceRequirement, Get, VestingSchedule, + Currency, ExistenceRequirement, Get, LockIdentifier, LockableCurrency, VestingSchedule, WithdrawReasons, }, weights::Weight, @@ -83,12 +83,11 @@ pub use weights::WeightInfo; type BalanceOf = <::Currency as Currency<::AccountId>>::Balance; -type MaxLocksOf = <::Currency as fungibles::Lockable< - ::AccountId, ->>::MaxLocks; +type MaxLocksOf = + <::Currency as LockableCurrency<::AccountId>>::MaxLocks; type AccountIdLookupOf = <::Lookup as StaticLookup>::Source; -const VESTING_ID: fungibles::LockIdentifier = *b"vesting "; +const VESTING_ID: LockIdentifier = *b"vesting "; // A value placed in storage that represents the current version of the Vesting storage. // This value is used by `on_runtime_upgrade` to determine whether we run storage migration logic. @@ -160,7 +159,7 @@ pub mod pallet { type RuntimeEvent: From> + IsType<::RuntimeEvent>; /// The currency trait. - type Currency: fungibles::Lockable; + type Currency: LockableCurrency; /// Convert the block number into a balance. 
type BlockNumberToBalance: Convert>; @@ -304,6 +303,7 @@ pub mod pallet { /// - Reads: Vesting Storage, Balances Locks, [Sender Account] /// - Writes: Vesting Storage, Balances Locks, [Sender Account] /// # + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::vest_locked(MaxLocksOf::::get(), T::MAX_VESTING_SCHEDULES) .max(T::WeightInfo::vest_unlocked(MaxLocksOf::::get(), T::MAX_VESTING_SCHEDULES)) )] @@ -327,6 +327,7 @@ pub mod pallet { /// - Reads: Vesting Storage, Balances Locks, Target Account /// - Writes: Vesting Storage, Balances Locks, Target Account /// # + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::vest_other_locked(MaxLocksOf::::get(), T::MAX_VESTING_SCHEDULES) .max(T::WeightInfo::vest_other_unlocked(MaxLocksOf::::get(), T::MAX_VESTING_SCHEDULES)) )] @@ -353,6 +354,7 @@ pub mod pallet { /// - Reads: Vesting Storage, Balances Locks, Target Account, [Sender Account] /// - Writes: Vesting Storage, Balances Locks, Target Account, [Sender Account] /// # + #[pallet::call_index(2)] #[pallet::weight( T::WeightInfo::vested_transfer(MaxLocksOf::::get(), T::MAX_VESTING_SCHEDULES) )] @@ -384,6 +386,7 @@ pub mod pallet { /// - Reads: Vesting Storage, Balances Locks, Target Account, Source Account /// - Writes: Vesting Storage, Balances Locks, Target Account, Source Account /// # + #[pallet::call_index(3)] #[pallet::weight( T::WeightInfo::force_vested_transfer(MaxLocksOf::::get(), T::MAX_VESTING_SCHEDULES) )] @@ -418,6 +421,7 @@ pub mod pallet { /// /// - `schedule1_index`: index of the first schedule to merge. /// - `schedule2_index`: index of the second schedule to merge. + #[pallet::call_index(4)] #[pallet::weight( T::WeightInfo::not_unlocking_merge_schedules(MaxLocksOf::::get(), T::MAX_VESTING_SCHEDULES) .max(T::WeightInfo::unlocking_merge_schedules(MaxLocksOf::::get(), T::MAX_VESTING_SCHEDULES)) diff --git a/frame/whitelist/src/lib.rs b/frame/whitelist/src/lib.rs index 1b2dc9415607e..8a5666331c7e9 100644 --- a/frame/whitelist/src/lib.rs +++ b/frame/whitelist/src/lib.rs @@ -119,6 +119,7 @@ pub mod pallet { #[pallet::call] impl Pallet { + #[pallet::call_index(0)] #[pallet::weight(T::WeightInfo::whitelist_call())] pub fn whitelist_call(origin: OriginFor, call_hash: PreimageHash) -> DispatchResult { T::WhitelistOrigin::ensure_origin(origin)?; @@ -136,6 +137,7 @@ pub mod pallet { Ok(()) } + #[pallet::call_index(1)] #[pallet::weight(T::WeightInfo::remove_whitelisted_call())] pub fn remove_whitelisted_call( origin: OriginFor, @@ -152,6 +154,7 @@ pub mod pallet { Ok(()) } + #[pallet::call_index(2)] #[pallet::weight( T::WeightInfo::dispatch_whitelisted_call(*call_encoded_len) .saturating_add(*call_weight_witness) @@ -190,6 +193,7 @@ pub mod pallet { Ok(actual_weight.into()) } + #[pallet::call_index(3)] #[pallet::weight({ let call_weight = call.get_dispatch_info().weight; let call_len = call.encoded_size() as u32; diff --git a/primitives/arithmetic/src/rational.rs b/primitives/arithmetic/src/rational.rs index 54cabfc6214e8..447b37551bb1f 100644 --- a/primitives/arithmetic/src/rational.rs +++ b/primitives/arithmetic/src/rational.rs @@ -94,14 +94,14 @@ pub struct Rational128(u128, u128); #[cfg(feature = "std")] impl sp_std::fmt::Debug for Rational128 { fn fmt(&self, f: &mut sp_std::fmt::Formatter<'_>) -> sp_std::fmt::Result { - write!(f, "Rational128({:.4})", self.0 as f32 / self.1 as f32) + write!(f, "Rational128({} / {} ≈ {:.8})", self.0, self.1, self.0 as f64 / self.1 as f64) } } #[cfg(not(feature = "std"))] impl sp_std::fmt::Debug for Rational128 { fn fmt(&self, f: &mut 
sp_std::fmt::Formatter<'_>) -> sp_std::fmt::Result { - write!(f, "Rational128(..)") + write!(f, "Rational128({} / {})", self.0, self.1) } } diff --git a/primitives/core/src/bounded/bounded_vec.rs b/primitives/core/src/bounded/bounded_vec.rs index 2f39f3340ce50..6e1e1c7cfda64 100644 --- a/primitives/core/src/bounded/bounded_vec.rs +++ b/primitives/core/src/bounded/bounded_vec.rs @@ -675,6 +675,13 @@ impl> BoundedVec { } } +impl BoundedVec { + /// Return a [`BoundedSlice`] with the content and bound of [`Self`]. + pub fn as_bounded_slice(&self) -> BoundedSlice { + BoundedSlice(&self.0[..], PhantomData::default()) + } +} + impl Default for BoundedVec { fn default() -> Self { // the bound cannot be below 0, which is satisfied by an empty vector diff --git a/primitives/core/src/lib.rs b/primitives/core/src/lib.rs index fda7604d5337f..30d0cc199b74d 100644 --- a/primitives/core/src/lib.rs +++ b/primitives/core/src/lib.rs @@ -622,3 +622,49 @@ macro_rules! bounded_btree_map { } }; } + +/// Generates a macro for checking if a certain feature is enabled. +/// +/// These feature checking macros can be used to conditionally enable/disable code in a dependent +/// crate based on a feature in the crate where the macro is called. +#[macro_export] +// We need to skip formatting this macro because of this bug: +// https://github.com/rust-lang/rustfmt/issues/5283 +#[rustfmt::skip] +macro_rules! generate_feature_enabled_macro { + ( $macro_name:ident, $feature_name:meta, $d:tt ) => { + /// Enable/disable the given code depending on + #[doc = concat!("`", stringify!($feature_name), "`")] + /// being enabled for the crate or not. + /// + /// # Example + /// + /// ```nocompile + /// // Will add the code depending on the feature being enabled or not. + #[doc = concat!(stringify!($macro_name), "!( println!(\"Hello\") )")] + /// ``` + #[cfg($feature_name)] + #[macro_export] + macro_rules! $macro_name { + ( $d ( $d input:tt )* ) => { + $d ( $d input )* + } + } + + /// Enable/disable the given code depending on + #[doc = concat!("`", stringify!($feature_name), "`")] + /// being enabled for the crate or not. + /// + /// # Example + /// + /// ```nocompile + /// // Will add the code depending on the feature being enabled or not. + #[doc = concat!(stringify!($macro_name), "!( println!(\"Hello\") )")] + /// ``` + #[cfg(not($feature_name))] + #[macro_export] + macro_rules! $macro_name { + ( $d ( $d input:tt )* ) => {}; + } + }; +} diff --git a/primitives/runtime/src/traits.rs b/primitives/runtime/src/traits.rs index 37291d2125ed8..375475141b818 100644 --- a/primitives/runtime/src/traits.rs +++ b/primitives/runtime/src/traits.rs @@ -1357,12 +1357,13 @@ pub trait GetNodeBlockType { type NodeBlock: self::Block; } -/// Something that can validate unsigned extrinsics for the transaction pool. +/// Provide validation for unsigned extrinsics. /// -/// Note that any checks done here are only used for determining the validity of -/// the transaction for the transaction pool. -/// During block execution phase one need to perform the same checks anyway, -/// since this function is not being called. +/// This trait provides two functions [`pre_dispatch`](Self::pre_dispatch) and +/// [`validate_unsigned`](Self::validate_unsigned). The [`pre_dispatch`](Self::pre_dispatch) +/// function is called right before dispatching the call wrapped by an unsigned extrinsic. 
The +/// [`validate_unsigned`](Self::validate_unsigned) function is mainly used in the context of +/// the transaction pool to check the validity of the call wrapped by an unsigned extrinsic. pub trait ValidateUnsigned { /// The call to validate type Call; @@ -1370,13 +1371,15 @@ pub trait ValidateUnsigned { /// Validate the call right before dispatch. /// /// This method should be used to prevent transactions already in the pool - /// (i.e. passing `validate_unsigned`) from being included in blocks - /// in case we know they now became invalid. + /// (i.e. passing [`validate_unsigned`](Self::validate_unsigned)) from being included in blocks + /// in case they became invalid since being added to the pool. /// - /// By default it's a good idea to call `validate_unsigned` from within - /// this function again to make sure we never include an invalid transaction. + /// By default it's a good idea to call [`validate_unsigned`](Self::validate_unsigned) from + /// within this function again to make sure we never include an invalid transaction. Otherwise + /// the implementation of the call or of this method will need to provide proper validation to + /// ensure that the transaction is valid. /// - /// Changes made to storage WILL be persisted if the call returns `Ok`. + /// Changes made to storage *WILL* be persisted if the call returns `Ok`. fn pre_dispatch(call: &Self::Call) -> Result<(), TransactionValidityError> { Self::validate_unsigned(TransactionSource::InBlock, call) .map(|_| ()) @@ -1385,8 +1388,14 @@ pub trait ValidateUnsigned { /// Return the validity of the call /// - /// This doesn't execute any side-effects; it merely checks - /// whether the transaction would panic if it were included or not. + /// This method has no side-effects. It merely checks whether the call would be rejected + /// by the runtime in an unsigned extrinsic. + /// + /// The validity checks should be as lightweight as possible because every node will execute + /// this code before the unsigned extrinsic enters the transaction pool and also periodically + /// afterwards to ensure continued validity. To prevent DoS-ing a network with unsigned + /// extrinsics, these validity checks should include a uniqueness check, for example + /// checking that the unsigned extrinsic was sent by an authority in the active set. /// /// Changes made to storage should be discarded by caller.
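A hedged sketch of what the rewritten guidance asks for, with hypothetical types; the tag-based uniqueness check is the part that throttles unsigned spam:

```rust
use sp_runtime::{
	traits::ValidateUnsigned,
	transaction_validity::{TransactionSource, TransactionValidity, ValidTransaction},
};

pub struct Example;

impl ValidateUnsigned for Example {
	type Call = ();

	fn validate_unsigned(_source: TransactionSource, _call: &Self::Call) -> TransactionValidity {
		// Lightweight, stateless checks only. `and_provides` gives the pool a
		// uniqueness tag so identical unsigned extrinsics cannot flood it.
		ValidTransaction::with_tag_prefix("Example")
			.priority(100)
			.and_provides("single-instance")
			.longevity(64)
			.propagate(true)
			.build()
	}
	// `pre_dispatch` defaults to re-running `validate_unsigned`, as the docs advise.
}
```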
fn validate_unsigned(source: TransactionSource, call: &Self::Call) -> TransactionValidity; diff --git a/primitives/staking/Cargo.toml b/primitives/staking/Cargo.toml index 550c1485e992c..35feae43ebb8c 100644 --- a/primitives/staking/Cargo.toml +++ b/primitives/staking/Cargo.toml @@ -15,6 +15,7 @@ targets = ["x86_64-unknown-linux-gnu"] [dependencies] codec = { package = "parity-scale-codec", version = "3.0.0", default-features = false, features = ["derive"] } scale-info = { version = "2.1.1", default-features = false, features = ["derive"] } +sp-core = { version = "7.0.0", default-features = false, path = "../core" } sp-runtime = { version = "7.0.0", default-features = false, path = "../runtime" } sp-std = { version = "5.0.0", default-features = false, path = "../std" } @@ -23,6 +24,7 @@ default = ["std"] std = [ "codec/std", "scale-info/std", + "sp-core/std", "sp-runtime/std", "sp-std/std", ] diff --git a/primitives/staking/src/lib.rs b/primitives/staking/src/lib.rs index 703f0abe80458..9eb4a4890cdf8 100644 --- a/primitives/staking/src/lib.rs +++ b/primitives/staking/src/lib.rs @@ -190,3 +190,5 @@ pub trait StakingInterface { #[cfg(feature = "runtime-benchmarks")] fn set_current_era(era: EraIndex); } + +sp_core::generate_feature_enabled_macro!(runtime_benchmarks_enabled, feature = "runtime-benchmarks", $); diff --git a/primitives/weights/src/lib.rs b/primitives/weights/src/lib.rs index af9e730fbfefd..928080d139864 100644 --- a/primitives/weights/src/lib.rs +++ b/primitives/weights/src/lib.rs @@ -47,12 +47,13 @@ pub use weight_meter::*; pub use weight_v2::*; pub mod constants { - use super::Weight; + pub const WEIGHT_REF_TIME_PER_SECOND: u64 = 1_000_000_000_000; + pub const WEIGHT_REF_TIME_PER_MILLIS: u64 = 1_000_000_000; + pub const WEIGHT_REF_TIME_PER_MICROS: u64 = 1_000_000; + pub const WEIGHT_REF_TIME_PER_NANOS: u64 = 1_000; - pub const WEIGHT_PER_SECOND: Weight = Weight::from_ref_time(1_000_000_000_000); - pub const WEIGHT_PER_MILLIS: Weight = Weight::from_ref_time(1_000_000_000); - pub const WEIGHT_PER_MICROS: Weight = Weight::from_ref_time(1_000_000); - pub const WEIGHT_PER_NANOS: Weight = Weight::from_ref_time(1_000); + pub const WEIGHT_PROOF_SIZE_PER_MB: u64 = 1024 * 1024; + pub const WEIGHT_PROOF_SIZE_PER_KB: u64 = 1024; } /// The old weight type. diff --git a/primitives/weights/src/weight_meter.rs b/primitives/weights/src/weight_meter.rs index d03e72968bb09..17c5da1502e9e 100644 --- a/primitives/weights/src/weight_meter.rs +++ b/primitives/weights/src/weight_meter.rs @@ -71,6 +71,12 @@ impl WeightMeter { time.max(pov) } + /// Consume some weight and defensively fail if it is over the limit. Saturate in any case. + pub fn defensive_saturating_accrue(&mut self, w: Weight) { + self.consumed.saturating_accrue(w); + debug_assert!(self.consumed.all_lte(self.limit), "Weight counter overflow"); + } + /// Consume the given weight after checking that it can be consumed. Otherwise do nothing. 
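Further down this hunk, `WeightMeter` gains `defensive_saturating_accrue` to complement the existing checked accrual. A hedged sketch of how the two styles differ (limit value illustrative; `from_limit` is the meter's existing constructor):

```rust
use sp_weights::{Weight, WeightMeter};

fn metered_work() {
	let mut meter = WeightMeter::from_limit(Weight::from_ref_time(10_000));

	// check_accrue only consumes the weight if it still fits under the limit.
	if meter.check_accrue(Weight::from_ref_time(4_000)) {
		// ... do the work that the 4_000 ref-time units pay for ...
	}

	// defensive_saturating_accrue always consumes (saturating), but
	// debug-asserts that the limit was respected, so overshoot is caught
	// in tests without panicking in production.
	meter.defensive_saturating_accrue(Weight::from_ref_time(1_000));
}
```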
pub fn check_accrue(&mut self, w: Weight) -> bool { self.consumed.checked_add(&w).map_or(false, |test| { diff --git a/scripts/ci/gitlab/pipeline/build.yml b/scripts/ci/gitlab/pipeline/build.yml index 5bbc3fb8f751c..ba529569d0fc1 100644 --- a/scripts/ci/gitlab/pipeline/build.yml +++ b/scripts/ci/gitlab/pipeline/build.yml @@ -62,8 +62,12 @@ build-linux-substrate: - job: test-linux-stable artifacts: false before_script: + - !reference [.job-switcher, before_script] - mkdir -p ./artifacts/substrate/ - !reference [.rusty-cachier, before_script] + # tldr: we need to checkout the branch HEAD explicitly because of our dynamic versioning approach while building the substrate binary + # see https://github.com/paritytech/ci_cd/issues/682#issuecomment-1340953589 + - git checkout -B "$CI_COMMIT_REF_NAME" "$CI_COMMIT_SHA" script: - rusty-cachier snapshot create - WASM_BUILD_NO_COLOR=1 time cargo build --locked --release --verbose @@ -91,6 +95,7 @@ build-linux-substrate: # this variable gets overriden by "rusty-cachier environment inject", use the value as default CARGO_TARGET_DIR: "$CI_PROJECT_DIR/target" before_script: + - !reference [.job-switcher, before_script] - mkdir -p ./artifacts/subkey - !reference [.rusty-cachier, before_script] script: @@ -115,6 +120,7 @@ build-subkey-macos: # duplicating before_script & script sections from .build-subkey hidden job # to overwrite rusty-cachier integration as it doesn't work on macos before_script: + - !reference [.job-switcher, before_script] - mkdir -p ./artifacts/subkey script: - cd ./bin/utils/subkey diff --git a/scripts/ci/gitlab/pipeline/publish.yml b/scripts/ci/gitlab/pipeline/publish.yml index 9053035a61cdb..6a0d6d6341304 100644 --- a/scripts/ci/gitlab/pipeline/publish.yml +++ b/scripts/ci/gitlab/pipeline/publish.yml @@ -12,6 +12,7 @@ DOCKERFILE: $PRODUCT.Dockerfile IMAGE_NAME: docker.io/$IMAGE_PATH before_script: + - !reference [.job-switcher, before_script] - cd ./artifacts/$PRODUCT/ - VERSION="$(cat ./VERSION)" - echo "${PRODUCT} version = ${VERSION}" @@ -211,6 +212,9 @@ update-node-template: # to roughly 202 minutes of delay, or 3h and 22 minutes. As such, the job needs to have a much # higher timeout than average. timeout: 5h + # A custom publishing environment is used for us to be able to set up protected secrets + # specifically for it + environment: publish-crates script: - rusty-cachier snapshot create - git clone diff --git a/scripts/ci/gitlab/pipeline/test.yml b/scripts/ci/gitlab/pipeline/test.yml index ea1240af8bd57..a468a7b04caeb 100644 --- a/scripts/ci/gitlab/pipeline/test.yml +++ b/scripts/ci/gitlab/pipeline/test.yml @@ -65,6 +65,7 @@ cargo-check-benches: before_script: # perform rusty-cachier operations before any further modifications to the git repo to make cargo feel cheated not so much - !reference [.rust-info-script, script] + - !reference [.job-switcher, before_script] - !reference [.rusty-cachier, before_script] - !reference [.pipeline-stopper-vars, script] # merges in the master branch on PRs @@ -414,6 +415,7 @@ cargo-check-each-crate-macos: - .collect-artifacts - .pipeline-stopper-artifacts before_script: + - !reference [.job-switcher, before_script] - !reference [.rust-info-script, script] - !reference [.pipeline-stopper-vars, script] variables: diff --git a/utils/frame/benchmarking-cli/src/block/bench.rs b/utils/frame/benchmarking-cli/src/block/bench.rs index 5a67b11f494f5..578158d8a2356 100644 --- a/utils/frame/benchmarking-cli/src/block/bench.rs +++ b/utils/frame/benchmarking-cli/src/block/bench.rs @@ -18,7 +18,7 @@ //! 
 //! Contains the core benchmarking logic.
 
 use codec::DecodeAll;
-use frame_support::weights::constants::WEIGHT_PER_NANOS;
+use frame_support::weights::constants::WEIGHT_REF_TIME_PER_NANOS;
 use frame_system::ConsumedWeight;
 use sc_block_builder::{BlockBuilderApi, BlockBuilderProvider};
 use sc_cli::{Error, Result};
@@ -148,7 +148,7 @@ where
 		let weight = ConsumedWeight::decode_all(&mut raw_weight)?;
 
 		// Should be divisible, but still use floats in case we ever change that.
-		Ok((weight.total().ref_time() as f64 / WEIGHT_PER_NANOS.ref_time() as f64).floor()
+		Ok((weight.total().ref_time() as f64 / WEIGHT_REF_TIME_PER_NANOS as f64).floor()
 			as NanoSeconds)
 	}
 
diff --git a/utils/frame/benchmarking-cli/src/overhead/README.md b/utils/frame/benchmarking-cli/src/overhead/README.md
index b21d051e9d44c..1584c2affe0a3 100644
--- a/utils/frame/benchmarking-cli/src/overhead/README.md
+++ b/utils/frame/benchmarking-cli/src/overhead/README.md
@@ -30,7 +30,8 @@ The file will contain the concrete weight value and various statistics about the
 /// 99th: 3_631_863
 /// 95th: 3_595_674
 /// 75th: 3_526_435
-pub const BlockExecutionWeight: Weight = WEIGHT_PER_NANOS.saturating_mul(3_532_484);
+pub const BlockExecutionWeight: Weight =
+	Weight::from_ref_time(WEIGHT_REF_TIME_PER_NANOS.saturating_mul(3_532_484));
 ```
 
 In this example it takes 3.5 ms to execute an empty block. That means that it always takes at least 3.5 ms to execute *any* block.
@@ -59,7 +60,8 @@ The relevant section in the output file looks like this:
 /// 99th: 68_758
 /// 95th: 67_843
 /// 75th: 67_749
-pub const ExtrinsicBaseWeight: Weight = WEIGHT_PER_NANOS.saturating_mul(67_745);
+pub const ExtrinsicBaseWeight: Weight =
+	Weight::from_ref_time(WEIGHT_REF_TIME_PER_NANOS.saturating_mul(67_745));
 ```
 
 In this example it takes 67.7 µs to execute a NO-OP extrinsic. That means that it always takes at least 67.7 µs to execute *any* extrinsic.
diff --git a/utils/frame/benchmarking-cli/src/overhead/weights.hbs b/utils/frame/benchmarking-cli/src/overhead/weights.hbs
index 8d1a369372721..c54393d200bd3 100644
--- a/utils/frame/benchmarking-cli/src/overhead/weights.hbs
+++ b/utils/frame/benchmarking-cli/src/overhead/weights.hbs
@@ -14,7 +14,7 @@
 {{/each}}
 
 use sp_core::parameter_types;
-use sp_weights::{constants::WEIGHT_PER_NANOS, Weight};
+use sp_weights::{constants::WEIGHT_REF_TIME_PER_NANOS, Weight};
 
 parameter_types! {
 	{{#if (eq short_name "block")}}
@@ -34,7 +34,8 @@ parameter_types! {
 	/// 99th: {{underscore stats.p99}}
 	/// 95th: {{underscore stats.p95}}
 	/// 75th: {{underscore stats.p75}}
-	pub const {{long_name}}Weight: Weight = WEIGHT_PER_NANOS.saturating_mul({{underscore weight}});
+	pub const {{long_name}}Weight: Weight =
+		Weight::from_ref_time(WEIGHT_REF_TIME_PER_NANOS.saturating_mul({{underscore weight}}));
 }
 
 #[cfg(test)]
 mod test_weights {
@@ -51,23 +52,23 @@ mod test_weights {
 		{{#if (eq short_name "block")}}
 		// At least 100 µs.
 		assert!(
-			w.ref_time() >= 100u64 * constants::WEIGHT_PER_MICROS.ref_time(),
+			w.ref_time() >= 100u64 * constants::WEIGHT_REF_TIME_PER_MICROS,
 			"Weight should be at least 100 µs."
 		);
 		// At most 50 ms.
 		assert!(
-			w.ref_time() <= 50u64 * constants::WEIGHT_PER_MILLIS.ref_time(),
+			w.ref_time() <= 50u64 * constants::WEIGHT_REF_TIME_PER_MILLIS,
 			"Weight should be at most 50 ms."
 		);
 		{{else}}
 		// At least 10 µs.
 		assert!(
-			w.ref_time() >= 10u64 * constants::WEIGHT_PER_MICROS.ref_time(),
+			w.ref_time() >= 10u64 * constants::WEIGHT_REF_TIME_PER_MICROS,
 			"Weight should be at least 10 µs."
 		);
 		// At most 1 ms.
 		assert!(
-			w.ref_time() <= constants::WEIGHT_PER_MILLIS.ref_time(),
+			w.ref_time() <= constants::WEIGHT_REF_TIME_PER_MILLIS,
 			"Weight should be at most 1 ms."
 		);
 		{{/if}}
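To see why the rewritten constants are drop-in compatible, here is a minimal sketch (not part of the patch; it assumes only `Weight::from_ref_time`, `Weight::saturating_mul` and the new `u64` constants shown above):

```rust
use sp_weights::{constants::WEIGHT_REF_TIME_PER_NANOS, Weight};

#[test]
fn formulations_agree() {
	// Old style: scale a `Weight` of 1_000 ref-time (one nanosecond) by the sample count.
	let old = Weight::from_ref_time(1_000).saturating_mul(3_532_484);
	// New style: do the arithmetic on plain `u64`s and construct the `Weight` at the end.
	let new = Weight::from_ref_time(WEIGHT_REF_TIME_PER_NANOS.saturating_mul(3_532_484));

	assert_eq!(old, new);
	// 3_532_484 ns of ref-time, i.e. the ~3.5 ms the README above talks about.
	assert_eq!(new.ref_time(), 3_532_484_000);
}
```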
diff --git a/utils/frame/benchmarking-cli/src/storage/README.md b/utils/frame/benchmarking-cli/src/storage/README.md
index ecaf4edadab38..f61b7ba1bddd0 100644
--- a/utils/frame/benchmarking-cli/src/storage/README.md
+++ b/utils/frame/benchmarking-cli/src/storage/README.md
@@ -69,7 +69,7 @@ The interesting part in the generated weight file tells us the weight constants
 /// 99th: 18_270
 /// 95th: 16_190
 /// 75th: 14_819
-read: 14_262 * constants::WEIGHT_PER_NANOS,
+read: 14_262 * constants::WEIGHT_REF_TIME_PER_NANOS,
 
 /// Time to write one storage item.
 /// Calculated by multiplying the *Average* of all values with `1.1` and adding `0`.
@@ -84,7 +84,7 @@ read: 14_262 * constants::WEIGHT_PER_NANOS,
 /// 99th: 135_839
 /// 95th: 106_129
 /// 75th: 79_239
-write: 71_347 * constants::WEIGHT_PER_NANOS,
+write: 71_347 * constants::WEIGHT_REF_TIME_PER_NANOS,
 ```
 
 ## Arguments
diff --git a/utils/frame/benchmarking-cli/src/storage/weights.hbs b/utils/frame/benchmarking-cli/src/storage/weights.hbs
index 82e581cf990c8..135b18b193746 100644
--- a/utils/frame/benchmarking-cli/src/storage/weights.hbs
+++ b/utils/frame/benchmarking-cli/src/storage/weights.hbs
@@ -43,7 +43,7 @@ pub mod constants {
 		/// 99th: {{underscore read.0.p99}}
 		/// 95th: {{underscore read.0.p95}}
 		/// 75th: {{underscore read.0.p75}}
-		read: {{underscore read_weight}} * constants::WEIGHT_PER_NANOS,
+		read: {{underscore read_weight}} * constants::WEIGHT_REF_TIME_PER_NANOS,
 
 		/// Time to write one storage item.
 		/// Calculated by multiplying the *{{params.weight_params.weight_metric}}* of all values with `{{params.weight_params.weight_mul}}` and adding `{{params.weight_params.weight_add}}`.
@@ -58,7 +58,7 @@ pub mod constants {
 		/// 99th: {{underscore write.0.p99}}
 		/// 95th: {{underscore write.0.p95}}
 		/// 75th: {{underscore write.0.p75}}
-		write: {{underscore write_weight}} * constants::WEIGHT_PER_NANOS,
+		write: {{underscore write_weight}} * constants::WEIGHT_REF_TIME_PER_NANOS,
 	};
 }
 
@@ -74,20 +74,20 @@ pub mod constants {
 		fn bound() {
 			// At least 1 µs.
 			assert!(
-				W::get().reads(1).ref_time() >= constants::WEIGHT_PER_MICROS.ref_time(),
+				W::get().reads(1).ref_time() >= constants::WEIGHT_REF_TIME_PER_MICROS,
 				"Read weight should be at least 1 µs."
 			);
 			assert!(
-				W::get().writes(1).ref_time() >= constants::WEIGHT_PER_MICROS.ref_time(),
+				W::get().writes(1).ref_time() >= constants::WEIGHT_REF_TIME_PER_MICROS,
 				"Write weight should be at least 1 µs."
 			);
 			// At most 1 ms.
 			assert!(
-				W::get().reads(1).ref_time() <= constants::WEIGHT_PER_MILLIS.ref_time(),
+				W::get().reads(1).ref_time() <= constants::WEIGHT_REF_TIME_PER_MILLIS,
 				"Read weight should be at most 1 ms."
 			);
 			assert!(
-				W::get().writes(1).ref_time() <= constants::WEIGHT_PER_MILLIS.ref_time(),
+				W::get().writes(1).ref_time() <= constants::WEIGHT_REF_TIME_PER_MILLIS,
 				"Write weight should be at most 1 ms."
 			);
 		}
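The generated file plugs these `u64` constants into a `RuntimeDbWeight`, which runtimes then consume through their `DbWeight` config. A small usage sketch, assuming the standard `RocksDbWeight` wiring from `frame_support`:

```rust
use frame_support::weights::constants::RocksDbWeight;
use sp_weights::Weight;

/// Weight of a hypothetical operation that reads two storage items and writes one.
fn two_reads_one_write() -> Weight {
	// `reads_writes` scales the per-operation `u64` constants into a `Weight`.
	RocksDbWeight::get().reads_writes(2, 1)
}
```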
diff --git a/utils/frame/remote-externalities/src/lib.rs b/utils/frame/remote-externalities/src/lib.rs
index b98796a82b2a8..fb63b4275172d 100644
--- a/utils/frame/remote-externalities/src/lib.rs
+++ b/utils/frame/remote-externalities/src/lib.rs
@@ -42,7 +42,9 @@ use std::{
 	sync::Arc,
 	thread,
 };
-use substrate_rpc_client::{rpc_params, ws_client, ChainApi, ClientT, StateApi, WsClient};
+use substrate_rpc_client::{
+	rpc_params, ws_client, BatchRequestBuilder, ChainApi, ClientT, StateApi, WsClient,
+};
 
 type KeyValue = (StorageKey, StorageData);
 type TopKeyValues = Vec<KeyValue>;
@@ -415,8 +417,12 @@ where
 			keys.chunks(thread_chunk_size).map(|s| s.into()).collect::<Vec<Vec<StorageKey>>>();
 
 		enum Message {
+			/// This thread completed the assigned work.
 			Terminated,
+			/// The thread produced the following batch response.
 			Batch(Vec<(Vec<u8>, Vec<u8>)>),
+			/// A request from the batch failed.
+			BatchFailed(String),
 		}
 		let (tx, mut rx) = mpsc::unbounded::<Message>();
@@ -427,57 +433,75 @@ where
 			let handle = std::thread::spawn(move || {
 				let rt = tokio::runtime::Runtime::new().unwrap();
 				let mut thread_key_values = Vec::with_capacity(thread_keys.len());
+
 				for chunk_keys in thread_keys.chunks(DEFAULT_VALUE_DOWNLOAD_BATCH) {
-					let batch = chunk_keys
-						.iter()
-						.cloned()
-						.map(|key| ("state_getStorage", rpc_params![key, at]))
-						.collect::<Vec<_>>();
+					let mut batch = BatchRequestBuilder::new();
+
+					for key in chunk_keys.iter() {
+						batch
+							.insert("state_getStorage", rpc_params![key, at])
+							.map_err(|_| "Invalid batch params")
+							.unwrap();
+					}
 
-					let values = rt
+					let batch_response = rt
 						.block_on(thread_client.batch_request::<Option<StorageData>>(batch))
 						.map_err(|e| {
 							log::error!(
 								target: LOG_TARGET,
-								"failed to execute batch of {} values. Error: {:?}",
-								chunk_keys.len(),
+								"failed to execute batch: {:?}. Error: {:?}",
+								chunk_keys.iter().map(HexDisplay::from).collect::<Vec<_>>(),
 								e
 							);
 							"batch failed."
 						})
 						.unwrap();
 
-					let batch_kv = chunk_keys
-						.into_iter()
-						.enumerate()
-						.map(|(idx, key)| {
-							let maybe_value = values[idx].clone();
-							let value = maybe_value.unwrap_or_else(|| {
+					// Check if we got responses for all submitted requests.
+					assert_eq!(chunk_keys.len(), batch_response.len());
+
+					let mut batch_kv = Vec::with_capacity(chunk_keys.len());
+					for (key, maybe_value) in chunk_keys.into_iter().zip(batch_response) {
+						match maybe_value {
+							Ok(Some(data)) => {
+								thread_key_values.push((key.clone(), data.clone()));
+								batch_kv.push((key.clone().0, data.0));
+							},
+							Ok(None) => {
 								log::warn!(
 									target: LOG_TARGET,
 									"key {:?} had none corresponding value.",
 									&key
 								);
-								StorageData(vec![])
-							});
-							thread_key_values.push((key.clone(), value.clone()));
-							if thread_key_values.len() % (thread_keys.len() / 10).max(1) == 0 {
-								let ratio: f64 =
-									thread_key_values.len() as f64 / thread_keys.len() as f64;
-								log::debug!(
-									target: LOG_TARGET,
-									"[thread = {:?}] progress = {:.2} [{} / {}]",
-									std::thread::current().id(),
-									ratio,
-									thread_key_values.len(),
-									thread_keys.len(),
-								);
-							}
-							(key.clone().0, value.0)
-						})
-						.collect::<Vec<_>>();
+								let data = StorageData(vec![]);
+								thread_key_values.push((key.clone(), data.clone()));
+								batch_kv.push((key.clone().0, data.0));
+							},
+							Err(e) => {
+								let reason = format!("key {:?} failed: {:?}", &key, e);
+								log::error!(target: LOG_TARGET, "Reason: {}", reason);
+								// Signal failures to the main thread, stop aggregating (key, value)
+								// pairs and return immediately an error.
+								thread_sender.unbounded_send(Message::BatchFailed(reason)).unwrap();
+								return Default::default()
+							},
+						};
+
+						if thread_key_values.len() % (thread_keys.len() / 10).max(1) == 0 {
+							let ratio: f64 =
+								thread_key_values.len() as f64 / thread_keys.len() as f64;
+							log::debug!(
+								target: LOG_TARGET,
+								"[thread = {:?}] progress = {:.2} [{} / {}]",
+								std::thread::current().id(),
+								ratio,
+								thread_key_values.len(),
+								thread_keys.len(),
+							);
+						}
+					}
 
-					// send this batch to the main thread to start inserting.
+					// Send this batch to the main thread to start inserting.
 					thread_sender.unbounded_send(Message::Batch(batch_kv)).unwrap();
 				}
 
@@ -491,6 +515,7 @@ where
 		// first, wait until all threads send a `Terminated` message, in the meantime populate
 		// `pending_ext`.
 		let mut terminated = 0usize;
+		let mut batch_failed = false;
 		loop {
 			match rx.next().await.unwrap() {
 				Message::Batch(kv) => {
@@ -502,6 +527,11 @@ where
 						pending_ext.insert(k, v);
 					}
 				},
+				Message::BatchFailed(error) => {
+					log::error!(target: LOG_TARGET, "Batch processing failed: {:?}", error);
+					batch_failed = true;
+					break
+				},
 				Message::Terminated => {
 					terminated += 1;
 					if terminated == handles.len() {
@@ -511,8 +541,13 @@ where
 			}
 		}
 
+		// Ensure all threads finished execution before returning.
 		let keys_and_values =
-			handles.into_iter().map(|h| h.join().unwrap()).flatten().collect::<Vec<_>>();
+			handles.into_iter().flat_map(|h| h.join().unwrap()).collect::<Vec<_>>();
+
+		if batch_failed {
+			return Err("Batch failed.")
+		}
 
 		Ok(keys_and_values)
 	}
@@ -525,12 +560,14 @@ where
 		at: B::Hash,
 	) -> Result<Vec<KeyValue>, &'static str> {
 		let mut child_kv_inner = vec![];
+		let mut batch_success = true;
+
 		for batch_child_key in child_keys.chunks(DEFAULT_VALUE_DOWNLOAD_BATCH) {
-			let batch_request = batch_child_key
-				.iter()
-				.cloned()
-				.map(|key| {
-					(
+			let mut batch_request = BatchRequestBuilder::new();
+
+			for key in batch_child_key {
+				batch_request
+					.insert(
 						"childstate_getStorage",
 						rpc_params![
 							PrefixedStorageKey::new(prefixed_top_key.as_ref().to_vec()),
 							key,
 							at
 						],
 					)
-				})
-				.collect::<Vec<_>>();
+					.map_err(|_| "Invalid batch params")?;
+			}
 
 			let batch_response =
 				client.batch_request::<Option<StorageData>>(batch_request).await.map_err(|e| {
@@ -554,17 +591,32 @@ where
 
 			assert_eq!(batch_child_key.len(), batch_response.len());
 
-			for (idx, key) in batch_child_key.iter().enumerate() {
-				let maybe_value = batch_response[idx].clone();
-				let value = maybe_value.unwrap_or_else(|| {
-					log::warn!(target: LOG_TARGET, "key {:?} had none corresponding value.", &key);
-					StorageData(vec![])
-				});
-				child_kv_inner.push((key.clone(), value));
+			for (key, maybe_value) in batch_child_key.iter().zip(batch_response) {
+				match maybe_value {
+					Ok(Some(v)) => {
+						child_kv_inner.push((key.clone(), v));
+					},
+					Ok(None) => {
+						log::warn!(
+							target: LOG_TARGET,
+							"key {:?} had none corresponding value.",
+							&key
+						);
+						child_kv_inner.push((key.clone(), StorageData(vec![])));
+					},
+					Err(e) => {
+						log::error!(target: LOG_TARGET, "key {:?} failed: {:?}", &key, e);
+						batch_success = false;
+					},
+				};
 			}
 		}
 
-		Ok(child_kv_inner)
+		if batch_success {
+			Ok(child_kv_inner)
+		} else {
+			Err("batch failed.")
+		}
 	}
 
 	pub(crate) async fn rpc_child_get_keys(
@@ -724,8 +776,7 @@ where
 		let child_kv = handles
 			.into_iter()
-			.map(|h| h.join().unwrap())
-			.flatten()
+			.flat_map(|h| h.join().unwrap())
 			.flatten()
 			.collect::<Vec<_>>();
 
 		Ok(child_kv)
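The jsonrpsee 0.16 batching API that both call sites above migrate to, as a self-contained sketch (the endpoint and the storage-key literals are placeholders):

```rust
use jsonrpsee::{
	core::{client::ClientT, params::BatchRequestBuilder},
	rpc_params,
	ws_client::WsClientBuilder,
};

async fn batched_get_storage() -> Result<(), Box<dyn std::error::Error>> {
	let client = WsClientBuilder::default().build("ws://127.0.0.1:9944").await?;

	// 0.16 replaces the `Vec<(method, params)>` batches of 0.15 with a builder
	// whose `insert` can fail on invalid params, hence the error handling.
	let mut batch = BatchRequestBuilder::new();
	batch.insert("state_getStorage", rpc_params!["0x3a636f6465"])?;
	batch.insert("state_getStorage", rpc_params!["0x00"])?;

	// Each entry of the response is its own `Result`, so a single bad request no
	// longer poisons the whole batch; this is what the new
	// `Ok(Some(..)) / Ok(None) / Err(..)` match arms above exploit.
	for entry in client.batch_request::<Option<String>>(batch).await? {
		match entry {
			Ok(maybe_value) => println!("value: {:?}", maybe_value),
			Err(e) => eprintln!("request failed: {:?}", e),
		}
	}
	Ok(())
}
```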
diff --git a/utils/frame/rpc/client/Cargo.toml b/utils/frame/rpc/client/Cargo.toml
index bbe8879818092..ee9982971cee3 100644
--- a/utils/frame/rpc/client/Cargo.toml
+++ b/utils/frame/rpc/client/Cargo.toml
@@ -12,7 +12,7 @@ description = "Shared JSON-RPC client"
 targets = ["x86_64-unknown-linux-gnu"]
 
 [dependencies]
-jsonrpsee = { version = "0.15.1", features = ["ws-client"] }
+jsonrpsee = { version = "0.16.2", features = ["ws-client"] }
 sc-rpc-api = { version = "0.10.0-dev", path = "../../../../client/rpc-api" }
 async-trait = "0.1.57"
 serde = "1"
diff --git a/utils/frame/rpc/client/src/lib.rs b/utils/frame/rpc/client/src/lib.rs
index 01c6cd84ba266..a6f73ba6784b2 100644
--- a/utils/frame/rpc/client/src/lib.rs
+++ b/utils/frame/rpc/client/src/lib.rs
@@ -45,6 +45,7 @@ use std::collections::VecDeque;
 pub use jsonrpsee::{
 	core::{
 		client::{ClientT, Subscription, SubscriptionClientT},
+		params::BatchRequestBuilder,
 		Error, RpcResult,
 	},
 	rpc_params,
diff --git a/utils/frame/rpc/state-trie-migration-rpc/Cargo.toml b/utils/frame/rpc/state-trie-migration-rpc/Cargo.toml
index 4886563a99440..3a1b4b8b6cbf8 100644
--- a/utils/frame/rpc/state-trie-migration-rpc/Cargo.toml
+++ b/utils/frame/rpc/state-trie-migration-rpc/Cargo.toml
@@ -25,7 +25,7 @@ sp-state-machine = { path = "../../../../primitives/state-machine" }
 sp-trie = { path = "../../../../primitives/trie" }
 trie-db = "0.24.0"
 
-jsonrpsee = { version = "0.15.1", features = ["server", "macros"] }
+jsonrpsee = { version = "0.16.2", features = ["client-core", "server", "macros"] }
 
 # Substrate Dependencies
 sc-client-api = { version = "4.0.0-dev", path = "../../../../client/api" }
diff --git a/utils/frame/rpc/support/Cargo.toml b/utils/frame/rpc/support/Cargo.toml
index 119acbd937c8a..d098877e7302c 100644
--- a/utils/frame/rpc/support/Cargo.toml
+++ b/utils/frame/rpc/support/Cargo.toml
@@ -17,7 +17,7 @@ targets = ["x86_64-unknown-linux-gnu"]
 [dependencies]
 codec = { package = "parity-scale-codec", version = "3.0.0" }
 futures = "0.3.21"
-jsonrpsee = { version = "0.15.1", features = ["jsonrpsee-types"] }
+jsonrpsee = { version = "0.16.2", features = ["jsonrpsee-types"] }
 serde = "1"
 frame-support = { version = "4.0.0-dev", path = "../../../../frame/support" }
 sc-rpc-api = { version = "0.10.0-dev", path = "../../../../client/rpc-api" }
@@ -25,7 +25,7 @@ sp-storage = { version = "7.0.0", path = "../../../../primitives/storage" }
 
 [dev-dependencies]
 scale-info = "2.1.1"
-jsonrpsee = { version = "0.15.1", features = ["ws-client", "jsonrpsee-types"] }
+jsonrpsee = { version = "0.16.2", features = ["ws-client", "jsonrpsee-types"] }
 tokio = "1.22.0"
 sp-core = { version = "7.0.0", path = "../../../../primitives/core" }
 sp-runtime = { version = "7.0.0", path = "../../../../primitives/runtime" }
diff --git a/utils/frame/rpc/system/Cargo.toml b/utils/frame/rpc/system/Cargo.toml
index 56b8a79f8c080..92cf6882a10f1 100644
--- a/utils/frame/rpc/system/Cargo.toml
+++ b/utils/frame/rpc/system/Cargo.toml
@@ -15,7 +15,7 @@ targets = ["x86_64-unknown-linux-gnu"]
 [dependencies]
 serde_json = "1"
 codec = { package = "parity-scale-codec", version = "3.0.0" }
-jsonrpsee = { version = "0.15.1", features = ["server"] }
+jsonrpsee = { version = "0.16.2", features = ["client-core", "server", "macros"] }
 futures = "0.3.21"
 log = "0.4.17"
 frame-system-rpc-runtime-api = { version = "4.0.0-dev", path = "../../../../frame/system/rpc/runtime-api" }
diff --git a/utils/wasm-builder/src/wasm_project.rs b/utils/wasm-builder/src/wasm_project.rs
index 7688069dd7cca..d17997360deef 100644
--- a/utils/wasm-builder/src/wasm_project.rs
+++ b/utils/wasm-builder/src/wasm_project.rs
@@ -379,14 +379,15 @@ fn find_package_by_manifest_path<'a>(
 	if let Some(pkg) = crate_metadata.packages.iter().find(|p| p.manifest_path == manifest_path) {
 		return pkg
 	}
+
 	let pkgs_by_name = crate_metadata
 		.packages
 		.iter()
 		.filter(|p| p.name == pkg_name)
 		.collect::<Vec<_>>();
-	let mut pkgs = pkgs_by_name.iter();
-	if let Some(pkg) = pkgs.next() {
-		if pkgs.next().is_some() {
+
+	if let Some(pkg) = pkgs_by_name.first() {
+		if pkgs_by_name.len() > 1 {
 			panic!(
 				"Found multiple packages matching the name {pkg_name} ({manifest_path:?}): {:?}",
 				pkgs_by_name
@@ -395,7 +396,7 @@ fn find_package_by_manifest_path<'a>(
 			return pkg
 		}
 	} else {
-		panic!("Failed to find entry for package {pkg_name} ({manifest_path:?})");
+		panic!("Failed to find entry for package {pkg_name} ({manifest_path:?}).");
 	}
 }
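The wasm-builder change above is behavior-preserving: probing a fresh iterator twice is replaced by `first()` plus a length check on the already-collected `Vec`. The same pattern in isolation, with illustrative names:

```rust
/// Returns the unique item equal to `name`, panicking on ambiguity or absence,
/// mirroring the fallback path of `find_package_by_manifest_path`.
fn pick_unique<'a>(items: &[&'a str], name: &str) -> &'a str {
	let matches = items.iter().copied().filter(|i| *i == name).collect::<Vec<_>>();

	if let Some(item) = matches.first() {
		if matches.len() > 1 {
			panic!("Found multiple items matching the name {name}: {matches:?}");
		} else {
			item
		}
	} else {
		panic!("Failed to find entry for item {name}.");
	}
}
```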
diff --git a/zombienet/0000-block-building/block-building.zndsl b/zombienet/0000-block-building/block-building.zndsl
index c53e50915c202..86a54773484b3 100644
--- a/zombienet/0000-block-building/block-building.zndsl
+++ b/zombienet/0000-block-building/block-building.zndsl
@@ -2,8 +2,8 @@ Description: Block building
 Network: ./block-building.toml
 Creds: config
 
-alice: is up
-bob: is up
+alice: is up within 30 seconds
+bob: is up within 30 seconds
 alice: reports node_roles is 4
 bob: reports node_roles is 4
 
diff --git a/zombienet/0001-basic-warp-sync/test-warp-sync.zndsl b/zombienet/0001-basic-warp-sync/test-warp-sync.zndsl
index 1ccacb2e6d038..8ceb61c8b039d 100644
--- a/zombienet/0001-basic-warp-sync/test-warp-sync.zndsl
+++ b/zombienet/0001-basic-warp-sync/test-warp-sync.zndsl
@@ -2,10 +2,10 @@ Description: Warp sync
 Network: ./test-warp-sync.toml
 Creds: config
 
-alice: is up
-bob: is up
-charlie: is up
-dave: is up
+alice: is up within 30 seconds
+bob: is up within 30 seconds
+charlie: is up within 30 seconds
+dave: is up within 30 seconds
 alice: reports node_roles is 1
 bob: reports node_roles is 1