I think we should define some performance requirements for bitswap (for example: "10 bitswap nodes exchanging 1000 blocks among themselves must complete the exchange in X seconds or less"), and then apply these to the already existing test/benchmarks.
This is almost done (test/benchmarks can already be parametrised to spawn a network of x nodes exchanging y blocks), so I think it's just a matter of defining the baseline to:
a) prevent performance regressions
b) incorporate improvements
Sounds about right. One thing that is needed for the baseline is the specs of the machine where the benchmarks are run, so we can still approximately track the baseline on less or more powerful machines.
My thinking is that we should probably use some standard specs from an average laptop as the baseline, and we can spin up CI workers that are close to that.
Sounds good. Couchbase has a nice dashboard to easily spot regressions between builds: http://showfast.sc.couchbase.com/. Having something like that would be great. I know that's a separate discussion, but I thought this is a good place to bring it up.
@victorbjelkholm @vmx Thoughts on this?