Document weight for asset, system and timestamp pallets #5593
Conversation
It looks like @apopiak signed our Contributor License Agreement. 👍 Many thanks, Parity Technologies CLA Bot
small nits but looks good otherwise
@shawntabrizi Do you know why the bot keeps adding this?
@apopiak just ignore it.
```rust
/// - `O(C + S)` where `C` length of `code` and `S` complexity of `can_set_code`
/// - 1 storage write (codec `O(C)`).
/// - 1 call to `can_set_code`: `O(S)` (calls `sp_io::misc::runtime_version` which is expensive).
/// - 1 event.
```
I'm not sure how we should express this: shouldn't it be `complexity = O(C) + S`, with `S` the cost of `sp_io::misc::runtime_version`? Here `S` is not a variable complexity, it is a constant one.
I guess if I follow the benchmarking philosophy (in Notion), it should be written just `O(C)`, and the benchmark will figure out `K1` and `K2` with `weight = K1 + K2*C`.
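To make that concrete, here is a toy sketch of the fitting step; the benchmark numbers are invented for illustration, not real measurements:

```rust
/// Toy illustration of fitting `weight(C) = K1 + K2 * C` from two
/// hypothetical benchmark runs of `set_code` (all numbers invented):
///   C =  1_000 bytes -> 1_100_000 ns
///   C = 10_000 bytes -> 2_000_000 ns
fn fitted_weight(c: u64) -> u64 {
    // Slope: K2 = (2_000_000 - 1_100_000) / (10_000 - 1_000) = 100 ns per byte.
    const K2: u64 = 100;
    // Intercept: K1 = 1_100_000 - K2 * 1_000 = 1_000_000 ns; any constant
    // cost (e.g. a cheap `can_set_code` check) is absorbed here.
    const K1: u64 = 1_000_000;
    K1 + K2 * c
}

fn main() {
    assert_eq!(fitted_weight(1_000), 1_100_000);
    assert_eq!(fitted_weight(10_000), 2_000_000);
}
```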
Because big-O notation only gives an upper bound, not the tight upper bound, both `O(C + S)` and `O(C) + S` are totally admissible. The only guarantee is "will never be higher than the complexity of a function in this category", so in principle it's fine either way.
I guess in practice we often try to give a reasonably tight upper bound, but I don't think we need to spend a lot of time discussing how tight exactly it should be. 🤷
The nice thing about `O(S)` (instead of `S`) is that there might be some constant costs associated with calling `can_set_code` (or, as in your comment, `runtime_version`) that are included with big O, but not if you just put it there.
I think `O(C + S)`, if `S` is constant complexity, is equivalent to `O(C)`, and so we should prefer the second one. By equivalent I mean that there is mathematically no difference at all (but I admittedly forgot a bit of my approximation math courses).
In the Notion doc https://www.notion.so/Benchmarking-Philosophy-c5aafd34578f4be9ab8c8d7510e98314 the strategy seems to be to then come up with real numbers using the benchmark. But `S` here is not varying, so we don't benchmark its variation; we will only benchmark with `C` varying, compute `w = K1 + K2*C`, and `S` will disappear into `K1`.

> The nice thing about `O(S)` (instead of `S`) is that there might be some constant costs associated with calling `can_set_code` (or, as in your comment, `runtime_version`) that are included with big O, but not if you just put it there.

I don't understand this: for me `S` is a constant cost, so when a chain benchmarks its stuff, it will be included in `K1` no matter what the chain is using under the hood.
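For the record, the standard big-O argument behind that equivalence (my addition, not from the thread):

```latex
% If S is a constant with respect to C, then for every C >= 1:
C + S \le (1 + S)\,C \quad\Longrightarrow\quad C + S \in O(C),
% and trivially C \le C + S, hence
O(C + S) = O(C).
```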
PS: I'm not trying to be picky, I'm just thinking that if we want to follow the strategy:
- for complexity: `O(M + logM + B + E)`
- the benchmark will find constants for `weight(M, B, E) = K_1 + K_2 * M + K_3 * logM + B + E`

Hmm, actually I lost myself with this paragraph in Notion:

> When doing empirical testing, we are unable to separate complexities which have the same order. This means that there could be many many more operations added to this function, of order O(M), O(logM), etc., but it would not change our final formula as a function of M, B, and E.

I thought the benchmark would compute `weight(M, B, E) = K_1 + K_2 * M + K_3 * logM + K_4*B + K_5*E`, so I'm not sure anymore how this works.
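My reading of that paragraph, as a toy sketch (the split into two O(M) steps and all numbers are hypothetical):

```rust
/// Two distinct O(M) operations inside one call: a benchmark only ever
/// observes their sum, so the fit recovers a single coefficient
/// K2 = a + b. Adding more O(M) work changes the value of K2, but not
/// the shape `weight(M) = K1 + K2 * M` of the final formula.
fn observed_cost(m: u64) -> u64 {
    let (a, b) = (3, 7); // hypothetical per-item costs of the two O(M) steps
    a * m + b * m        // indistinguishable from (a + b) * m = 10 * m
}

fn main() {
    assert_eq!(observed_cost(5), 50); // same as one O(M) step costing 10 per item
}
```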
I have not looked into the benchmarks much yet, so I'm sad to tell you that I cannot be very helpful there. Maybe @shawntabrizi can weigh in.
Ah, I mixed things up indeed: `complexity(sp_io::misc::runtime_version(&code)) == O(code.len())`, so the cost of the extrinsic should be `O(C + C) = O(C)`, no?
In the end `weight(C) = K1 + K2*C`; any constant cost in `can_set_code` will be in `K1` anyway.
That's true, except that I don't know whether `S` might be `O(C^2)` or something even crazier. I went the conservative way because I saw the note about `sp_io::misc::runtime_version` being expensive and relying on executing the code in question.
Hmm, OK, I was also mistaken about the complexity of `sp_io::misc::runtime_version`. Then I'm fine having `S` as the complexity of `sp_io::misc::runtime_version`, which is unknown to me.
I guess to come up with a correct formula we should say this weighs a full block?
EDIT: at the same time, knowing the version of the runtime shouldn't be too costly, so this upper bound is very inaccurate. Another idea would be to actually benchmark it for the current runtime and set `weight = 10 * benchmark_result`, saying that we don't expect a new runtime to increase this cost by more than 10x. Or maybe we can compare it to the cost of an empty block or the base transaction weight.
But anyway this will only happen on runtime upgrade, so IMHO it is fine.
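A minimal sketch of that safety-factor idea, assuming a plain `u64` weight and an invented benchmark result (neither is from this PR):

```rust
/// Assumed weight type for this sketch (not Substrate's actual definition).
type Weight = u64;

/// Pretend benchmark of `sp_io::misc::runtime_version` on the current
/// runtime; the number is invented.
const RUNTIME_VERSION_BENCH: Weight = 5_000_000;

/// "We don't expect a new runtime to increase this cost by more than 10x."
const SAFETY_FACTOR: Weight = 10;

/// Fixed weight charged for `set_code`, covering the unknown `S`.
const SET_CODE_WEIGHT: Weight = SAFETY_FACTOR * RUNTIME_VERSION_BENCH;

fn main() {
    println!("set_code weight: {}", SET_CODE_WEIGHT);
}
```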
Maybe? I'm out of my depth here.
@gavofyork or @shawntabrizi What do you think?
frame/system/src/lib.rs (outdated)
```rust
/// - `O(VI)` where `V` length of `items` and `I` length of one item
/// - `V` storage writes (codec `O(I)`).
```
Shouldn't we consider that a storage write is constant complexity? #2966 (comment)
So the complexity would be `O(V)`, and with the benchmark we can find `w = K1 + K2*V`. Otherwise, how can we find a solution with the benchmark? If we do `w = K1 + K2*V*I`, then I'm not sure we can find any value for `K2`, because I think `w(V, I)` is somewhat equal to `w(V, I*2)`.
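To illustrate the identifiability problem with toy numbers (assuming a write costs roughly the same regardless of item size):

```rust
/// If a storage write costs ~W regardless of the item size I, the observed
/// weight is K1 + W*V for any I, and a model `w = K1 + K2*V*I` has no
/// stable K2: doubling I would simply halve the fitted coefficient.
fn observed(v: u64, _i: u64) -> u64 {
    const K1: u64 = 10_000; // hypothetical base cost
    const W: u64 = 50_000;  // hypothetical per-write cost, independent of item size
    K1 + W * v
}

fn main() {
    // w(V, I) == w(V, 2*I): the benchmark cannot pin down a V*I coefficient.
    assert_eq!(observed(4, 100), observed(4, 200));
}
```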
The thing is that there is no bound (that I could see by looking at the code here) on the size of the item. AFAICS it's just a vec and could thus be arbitrarily big.
Do let me know if you know of some bounds for the item size, though.
If we can find a correct upper bound for the size of a value where it is still kind of constant complexity, we can benchmark against this upper bound and say in the doc of this extrinsic that if `I` is more than this upper bound, the weight can be off.
(Note that such a documentation constraint is only possible because this is a root extrinsic.)
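A sketch of that idea; `I_MAX` and the constants are hypothetical, and the linear form mirrors the `w = K1 + K2*V` fit discussed above:

```rust
/// Assumed per-item size used when running the benchmark. Items larger
/// than this make the fitted weight an underestimate, which the extrinsic
/// docs would have to call out (acceptable here since it is root-only).
const I_MAX: usize = 1024;

/// Weight for `set_storage` with `v` items, from a hypothetical fit of
/// `w(V) = K1 + K2 * V` against I_MAX-sized items (numbers invented).
fn set_storage_weight(v: u64) -> u64 {
    const K1: u64 = 10_000; // hypothetical base cost
    const K2: u64 = 50_000; // hypothetical cost per write of an I_MAX-sized item
    K1 + K2 * v
}

fn main() {
    println!("benchmarked with items of up to {} bytes", I_MAX);
    assert_eq!(set_storage_weight(3), 160_000);
}
```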
Looks about right to me
We'll take another pass at all of these weight descriptions once we update the weight functions.
Cool. Cannot merge, though.
Co-Authored-By: thiolliere <gui.thiolliere@gmail.com>
This PR adds weight documentation to some FRAME pallets according to the benchmarking/weight philosophy.
Partially addresses #5596