This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

Document weight for asset, system and timestamp pallets #5593

Merged (13 commits) on Apr 24, 2020
21 changes: 21 additions & 0 deletions frame/assets/src/lib.rs
@@ -157,6 +157,13 @@ decl_module! {
/// Issue a new class of fungible assets. There are, and will only ever be, `total`
/// such assets and they'll all belong to the `origin` initially. It will have an
/// identifier `AssetId` instance: this will be specified in the `Issued` event.
///
/// # <weight>
/// - `O(1)`
/// - 1 storage mutation (codec `O(1)`).
/// - 2 storage writes (codec `O(1)`).
/// - 1 event.
/// # </weight>
#[weight = frame_support::weights::SimpleDispatchInfo::default()]
fn issue(origin, #[compact] total: T::Balance) {
let origin = ensure_signed(origin)?;
@@ -171,6 +178,13 @@ decl_module! {
}

/// Move some assets from one holder to another.
///
/// # <weight>
/// - `O(1)`
/// - 1 static lookup
/// - 2 storage mutations (codec `O(1)`).
/// - 1 event.
/// # </weight>
#[weight = frame_support::weights::SimpleDispatchInfo::default()]
fn transfer(origin,
#[compact] id: T::AssetId,
@@ -190,6 +204,13 @@ decl_module! {
}

/// Destroy any assets of `id` owned by `origin`.
///
/// # <weight>
/// - `O(1)`
/// - 1 storage mutation (codec `O(1)`).
/// - 1 storage deletion (codec `O(1)`).
/// - 1 event.
/// # </weight>
#[weight = frame_support::weights::SimpleDispatchInfo::default()]
fn destroy(origin, #[compact] id: T::AssetId) {
let origin = ensure_signed(origin)?;
53 changes: 53 additions & 0 deletions frame/system/src/lib.rs
@@ -482,19 +482,35 @@ decl_module! {
}

/// Make some on-chain remark.
///
/// # <weight>
/// - `O(1)`
/// # </weight>
#[weight = SimpleDispatchInfo::FixedNormal(10_000)]
fn remark(origin, _remark: Vec<u8>) {
ensure_signed(origin)?;
}

/// Set the number of pages in the WebAssembly environment's heap.
///
/// # <weight>
/// - `O(1)`
/// - 1 storage write.
/// # </weight>
#[weight = SimpleDispatchInfo::FixedOperational(10_000)]
fn set_heap_pages(origin, pages: u64) {
ensure_root(origin)?;
storage::unhashed::put_raw(well_known_keys::HEAP_PAGES, &pages.encode());
}

/// Set the new runtime code.
///
/// # <weight>
/// - `O(C + S)` where `C` is the length of `code` and `S` the complexity of `can_set_code`
/// - 1 storage write (codec `O(C)`).
/// - 1 call to `can_set_code`: `O(S)` (calls `sp_io::misc::runtime_version` which is expensive).
/// - 1 event.
Comment on lines +521 to +524
@gui1117 (Contributor), Apr 17, 2020:

I'm not sure how we should express this:

Shouldn't it be complexity = O(C) + S, with S the cost of sp_io::misc::runtime_version?

Here S is not a variable complexity, it is a constant one.

I guess if I follow the benchmark philosophy (in Notion) then it should be written just O(C), and the benchmark will figure out K1 and K2 with weight = K1 + K2*C.
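The fitting step this thread alludes to can be sketched in plain Rust: given two timing samples of set_code at different code lengths, the constants of weight(C) = K1 + K2*C fall out of a two-point linear solve. This is only an illustration of the benchmarking idea; the sample timings are invented, and no Substrate types are involved.

```rust
// Hedged sketch: derive weight(C) = K1 + K2*C from two benchmark samples.
// The sample timings below are invented for illustration, not real numbers.

fn fit_linear(samples: [(f64, f64); 2]) -> (f64, f64) {
    let ((c1, t1), (c2, t2)) = (samples[0], samples[1]);
    let k2 = (t2 - t1) / (c2 - c1); // marginal cost per byte of `code`
    let k1 = t1 - k2 * c1;          // fixed overhead (absorbs any constant S)
    (k1, k2)
}

fn main() {
    // (code length in bytes, measured time in microseconds): hypothetical data
    let samples = [(1_000.0, 250.0), (3_000.0, 650.0)];
    let (k1, k2) = fit_linear(samples);
    assert!((k2 - 0.2).abs() < 1e-9);
    assert!((k1 - 50.0).abs() < 1e-9);
    println!("weight(C) = {k1} + {k2} * C");
}
```

Note how any constant cost S inside can_set_code ends up inside K1, which is the point gui1117 makes below.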

Contributor Author:

Because big O notation only gives an upper bound and not the tight upper bound, both O(C + S) and O(C) + S are totally admissible. The only guarantee is "will never be higher than the complexity of a function in this category", so in principle it's fine either way.

I guess in practice we often try to give a reasonably tight upper bound, but I don't think we need to spend a lot of time discussing how tight exactly it should be. 🤷

Contributor Author:

The nice thing of O(S) (instead of S) is that there might be some constant costs associated with calling can_set_code (or as in your comment runtime_version) that are included with big O, but not if you just put it there.

Contributor:

I think O(C + S), if S is constant complexity, is equivalent to O(C), and so we should prefer the second one. By equivalent I mean that there is mathematically no difference at all (but I actually forgot a bit of my approximation math courses).

In the Notion page https://www.notion.so/Benchmarking-Philosophy-c5aafd34578f4be9ab8c8d7510e98314
it seems the strategy is to then come up with real numbers using a benchmark. But S here is not varying, so we don't benchmark its variation; we will only benchmark with C varying and compute w = K1 + K2*C, and so S will disappear into K1.

> The nice thing of O(S) (instead of S) is that there might be some constant costs associated with calling can_set_code (or as in your comment runtime_version) that are included with big O, but not if you just put it there.

I don't understand this: for me S is a constant cost, so when a chain benchmarks its stuff it will be included in K1 no matter what the chain is using under the hood.

Contributor:

PS: I'm not trying to be picky, I'm just thinking that if we want to follow the strategy:

  • for complexity: O(M + logM + B + E)
  • benchmark will find constant for weight(M, B, E) = K_1 + K_2 * M + K_3 * logM + B + E

Hmm, actually I lost myself with this paragraph in Notion:

> When doing empirical testing, we are unable to separate complexities which have the same order. This means that there could be many many more operations added to this function, of order O(M), O(logM), etc.. but it would not change our final formula as a function of M, B, and E:

I thought the benchmark would compute: weight(M, B, E) = K_1 + K_2 * M + K_3 * logM + K_4*B + K_5*E
So I'm not sure anymore about how this works.
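The quoted Notion point (that same-order complexities cannot be separated empirically) can be checked with a toy model: two distinct operations that are both O(M) collapse into a single fitted coefficient. This is a plain-Rust illustration with invented per-unit costs, not anything measured.

```rust
// Hedged sketch: two same-order costs (a*M and b*M) are empirically
// indistinguishable; fitting t(M) = K1 + K2*M recovers only their sum.
// Per-unit costs below are invented for illustration.

fn measure(m: f64) -> f64 {
    let op_a = 3.0 * m; // first O(M) operation
    let op_b = 2.0 * m; // second O(M) operation, hidden from the fitter
    let fixed = 7.0;    // constant overhead
    fixed + op_a + op_b
}

fn main() {
    let (m1, m2) = (10.0, 50.0);
    let (t1, t2) = (measure(m1), measure(m2));
    let k2 = (t2 - t1) / (m2 - m1); // fitted slope
    let k1 = t1 - k2 * m1;          // fitted intercept
    // The fit sees a single O(M) term with coefficient a + b = 5.0:
    assert!((k2 - 5.0).abs() < 1e-9);
    assert!((k1 - 7.0).abs() < 1e-9);
    println!("t(M) = {k1} + {k2} * M");
}
```

Adding more O(M) work would change the fitted K2 but not the shape of the formula, which is how the Notion paragraph seems to be meant.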

Contributor Author:

I have not looked into the benchmarks much yet, so I'm sad to tell you that I cannot be very helpful there. Maybe @shawntabrizi can weigh in.

@gui1117 (Contributor), Apr 17, 2020:

Ah, I mixed myself up indeed:
complexity(sp_io::misc::runtime_version(&code)) == O(code.len())
so the cost of the extrinsic should be O(C + C) = O(C), no?

In the end, weight(C) = K1 + K2*C; any constant cost in can_set_code will be in K1 anyway.

Contributor Author:

That's true, except that I don't know whether S might be O(C^2) or something even more crazy. I went the conservative way because I saw the note on sp_io::misc::runtime_version being expensive and relying on executing the code in question.

@gui1117 (Contributor), Apr 23, 2020:

Hmm, OK, I was also mistaken about the complexity of sp_io::misc::runtime_version then. I'm OK with having S as the complexity of sp_io::misc::runtime_version, which is unknown to me.

I guess to come up with a correct formula we should say this weighs a full block?

EDIT: at the same time, knowing the version of the runtime shouldn't be too costly, so this upper bound is very inaccurate. Another idea would be to actually benchmark it for the current runtime, do weight = 10 * benchmark_result, and say that we don't expect a new runtime to increase this cost by more than 10x. Or maybe we can compare it to the cost of an empty block or the base transaction weight.

But anyway, this will only happen on runtime upgrade, so IMHO it is fine.

Contributor Author:

Maybe? I'm out of my depth here.
@gavofyork or @shawntabrizi, what do you think?

/// # </weight>
#[weight = SimpleDispatchInfo::FixedOperational(200_000)]
pub fn set_code(origin, code: Vec<u8>) {
Self::can_set_code(origin, &code)?;
@@ -504,6 +520,12 @@ decl_module! {
}

/// Set the new runtime code without doing any checks of the given `code`.
///
/// # <weight>
/// - `O(C)` where `C` is the length of `code`
/// - 1 storage write (codec `O(C)`).
/// - 1 event.
/// # </weight>
#[weight = SimpleDispatchInfo::FixedOperational(200_000)]
pub fn set_code_without_checks(origin, code: Vec<u8>) {
ensure_root(origin)?;
@@ -512,6 +534,12 @@ decl_module! {
}

/// Set the new changes trie configuration.
///
/// # <weight>
/// - `O(L)` where `L` is the complexity of `deposit_log`
/// - 1 storage write or delete (codec `O(1)`).
/// - 1 call to `deposit_log`: `O(L)` (which is `O(D)` with `D` the length of `Digest`)
apopiak marked this conversation as resolved.
/// # </weight>
#[weight = SimpleDispatchInfo::FixedOperational(20_000)]
pub fn set_changes_trie_config(origin, changes_trie_config: Option<ChangesTrieConfiguration>) {
ensure_root(origin)?;
@@ -530,6 +558,11 @@ decl_module! {
}

/// Set some items of storage.
///
/// # <weight>
/// - `O(VI)` where `V` is the length of `items` and `I` the length of one item
/// - `V` storage writes (codec `O(I)`).
Contributor:

Shouldn't we consider that a storage write is constant complexity? #2966 (comment)

So the complexity would be O(V), and with a benchmark we can find w = K1 + K2*V. Otherwise, how can we find a solution with a benchmark? If we do w = K1 + K2*V*I then I'm not sure we can find any value for K2, because I think w(V, I) is somewhat equal to w(V, I*2).
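A toy model makes this identifiability concern concrete: if each storage write really costs the same regardless of item size I, then doubling I leaves the measurement unchanged, so no coefficient on V*I can be recovered, while the per-write coefficient on V alone can. Plain Rust, with invented costs:

```rust
// Hedged sketch: if each write has constant cost regardless of item size I,
// then w(V, I) == w(V, 2I), so a model term K2*V*I is unidentifiable.
// Costs are invented for illustration.

fn measure(v: u32, _i: u32) -> f64 {
    let fixed = 10.0;
    let per_write = 4.0; // constant cost per write, independent of item size
    fixed + per_write * v as f64
}

fn main() {
    let (w_small, w_big) = (measure(8, 100), measure(8, 200));
    // Doubling I changes nothing, so there is no signal to fit K2 in K2*V*I:
    assert!((w_big - w_small).abs() < 1e-9);

    // Varying V instead identifies the per-write coefficient in w = K1 + K2*V:
    let k2 = (measure(16, 100) - measure(8, 100)) / (16.0 - 8.0);
    assert!((k2 - 4.0).abs() < 1e-9);
    println!("per-write cost recovered: {k2}");
}
```

Whether real storage writes are actually size-independent is exactly what the thread goes on to debate.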

Contributor Author:

The thing is that there is no bound (that I could see by looking at the code here) on the size of the item. AFAICS it's just a vec and could thus be arbitrarily big.
Do let me know if you know of some bounds for the item size, though.

Contributor:

If we can find a correct upper bound for the size of a value where it is still kind of constant complexity, we can benchmark against this upper bound and say in the docs of this extrinsic that if I is more than this upper bound then the weight can be off.

(Note that such a documentation constraint is only possible because this is a root extrinsic.)

/// # </weight>
#[weight = SimpleDispatchInfo::FixedOperational(10_000)]
fn set_storage(origin, items: Vec<KeyValue>) {
ensure_root(origin)?;
@@ -539,6 +572,11 @@ decl_module! {
}

/// Kill some items from storage.
///
/// # <weight>
/// - `O(VK)` where `V` is the length of `keys` and `K` the length of one key
/// - `V` storage deletions (codec `O(K)`).
apopiak marked this conversation as resolved.
/// # </weight>
#[weight = SimpleDispatchInfo::FixedOperational(10_000)]
fn kill_storage(origin, keys: Vec<Key>) {
ensure_root(origin)?;
@@ -548,6 +586,11 @@ decl_module! {
}

/// Kill all storage items with a key that starts with the given prefix.
///
/// # <weight>
/// - `O(P)` where `P` is the number of keys with prefix `prefix`
apopiak marked this conversation as resolved.
/// - `P` storage deletions.
/// # </weight>
#[weight = SimpleDispatchInfo::FixedOperational(10_000)]
fn kill_prefix(origin, prefix: Key) {
ensure_root(origin)?;
@@ -556,6 +599,11 @@ decl_module! {

/// Kill the sending account, assuming there are no references outstanding and the composite
/// data is equal to its default value.
///
/// # <weight>
/// - `O(1)`
/// - 1 storage read and deletion.
apopiak marked this conversation as resolved.
/// # </weight>
#[weight = SimpleDispatchInfo::FixedOperational(25_000)]
fn suicide(origin) {
let who = ensure_signed(origin)?;
@@ -912,6 +960,11 @@ impl<T: Trait> Module<T> {
}

/// Deposits a log and ensures it matches the block's log data.
///
/// # <weight>
/// - `O(D)` where `D` is the length of `Digest`
/// - 1 storage mutation (codec `O(D)`).
/// # </weight>
pub fn deposit_log(item: DigestItemOf<T>) {
let mut l = <Digest<T>>::get();
l.push(item);
apopiak marked this conversation as resolved.
10 changes: 10 additions & 0 deletions frame/timestamp/src/lib.rs
@@ -147,6 +147,12 @@ decl_module! {
/// `MinimumPeriod`.
///
/// The dispatch origin for this call must be `Inherent`.
///
/// # <weight>
/// - `O(T)` where `T` is the complexity of `on_timestamp_set`
/// - 2 storage mutations (codec `O(1)`).
/// - 1 event handler `on_timestamp_set`: `O(T)`.
/// # </weight>
#[weight = SimpleDispatchInfo::FixedMandatory(10_000)]
fn set(origin, #[compact] now: T::Moment) {
ensure_none(origin)?;
@@ -161,6 +167,10 @@ decl_module! {
<T::OnTimestampSet as OnTimestampSet<_>>::on_timestamp_set(now);
}

/// # <weight>
/// - `O(1)`
/// - 1 storage deletion (codec `O(1)`).
/// # </weight>
fn on_finalize() {
assert!(<Self as Store>::DidUpdate::take(), "Timestamp must be updated once in the block");
}