[BUG] bandwidth degradation #4727
Comments
It should also be noted that the degradation happens even with a single peer in the network.
I tried to reproduce it locally and, apart from the degradation of TPS over time, noticed that TPS is very unstable. With a single peer and the configuration from this report, under a constant load of 1600 requests per second (one request is one transaction with a single transfer-asset instruction; some requests may return an error if the queue is full), I got the following results (TPS plots attached). I think this is related to the queue.
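A constant-rate load like this can be generated with a simple pacing loop; the following is only a sketch, and `submit_transfer` is a hypothetical stand-in for the real client call:

```rust
use std::time::{Duration, Instant};

// Hypothetical stand-in for the client call that submits one transaction
// with a single transfer-asset instruction; may get a "queue full" error.
fn submit_transfer(i: u64) {
    let _ = i; // build and send the transaction here
}

fn main() {
    const RATE: u64 = 1600; // target requests per second
    let interval = Duration::from_nanos(1_000_000_000 / RATE);
    let start = Instant::now();

    for i in 0u64.. {
        submit_transfer(i);
        // Sleep until the next tick so the *average* rate stays at RATE
        // even when individual submissions take varying amounts of time.
        let next_tick = start + interval * (i as u32 + 1);
        if let Some(wait) = next_tick.checked_duration_since(Instant::now()) {
            std::thread::sleep(wait);
        }
    }
}
```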
Hmm, we use …
@Erigara yes, but the thing is that currently each transaction is pushed into the queue twice. E.g. consider a queue with capacity 8 and max transactions in block = 2 (a toy model of this is sketched below). The queue then ends up holding a lot of transactions from old blocks; such transactions are eventually removed, but because of this the TPS numbers fluctuate a lot (first TPS plot).
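A minimal toy model of the double push, using illustrative numbers and a plain `VecDeque` rather than Iroha's actual queue code:

```rust
use std::collections::VecDeque;

fn main() {
    const QUEUE_CAPACITY: usize = 8;
    const MAX_TXS_IN_BLOCK: usize = 2;

    // Fill the queue to capacity with fresh transactions (ids 0..8).
    let mut queue: VecDeque<u32> = (0..QUEUE_CAPACITY as u32).collect();

    // Pop transactions for the next block...
    let block: Vec<u32> = (0..MAX_TXS_IN_BLOCK)
        .filter_map(|_| queue.pop_front())
        .collect();

    // ...and push them back to the *end* of the queue (the second push).
    // They now sit behind all remaining transactions and stay in the queue
    // until they are eventually recognized as stale and removed.
    for tx in &block {
        queue.push_back(*tx);
    }

    println!("block = {block:?}"); // [0, 1]
    println!("queue = {queue:?}"); // [2, 3, 4, 5, 6, 7, 0, 1]
}
```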
And you want to be able to push transactions back at the beginning of the queue so that they are removed from the queue earlier?
Yes, I think this would improve the stability of the TPS numbers.
Your idea got me thinking that we can pull off something similar without changing the type of the queue.
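A hedged sketch of one way this could work (hypothetical structure and names; the actual proposal was elided from this thread): keep re-queued transactions in a side buffer that is drained before the main FIFO, which behaves like pushing to the front without changing the underlying queue type.

```rust
use std::collections::VecDeque;

// Hypothetical sketch, not Iroha's actual queue API.
struct Queue<T> {
    fifo: VecDeque<T>,     // stand-in for the existing FIFO queue type
    requeued: VecDeque<T>, // transactions taken for a block, awaiting re-check
}

impl<T> Queue<T> {
    fn new() -> Self {
        Self { fifo: VecDeque::new(), requeued: VecDeque::new() }
    }

    fn push(&mut self, tx: T) {
        self.fifo.push_back(tx);
    }

    // Transactions returned after block creation go to the side buffer
    // instead of the back of the FIFO, so they are re-examined (and stale
    // ones dropped) before any fresh transactions.
    fn push_back_for_recheck(&mut self, tx: T) {
        self.requeued.push_back(tx);
    }

    fn pop(&mut self) -> Option<T> {
        if let Some(tx) = self.requeued.pop_front() {
            return Some(tx);
        }
        self.fifo.pop_front()
    }
}

fn main() {
    let mut q = Queue::new();
    q.push(1);
    q.push(2);
    q.push_back_for_recheck(0); // came back from block creation
    assert_eq!(q.pop(), Some(0)); // re-checked before fresh transactions
}
```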
Good idea, I will try to implement it.
The reason for the TPS degradation over time in the single-peer case is that the time per block increases. Two methods are responsible for the time-per-block increase: …
Is this the case even after #4995? If yes, then I can only suggest that we consider some faster serialization format. Since I think SCALE is pretty fast, we could only go for a zero-copy serialization. Is it the act of decoding that is slow, or the act of copying bytes into linear memory? If it's the latter, then I don't see what we can do.
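For reference, a minimal SCALE round-trip with parity-scale-codec looks roughly like this (illustrative types, not Iroha's actual definitions; assumes the crate with its "derive" feature):

```rust
use parity_scale_codec::{Decode, Encode};

// Illustrative type only, not Iroha's actual transaction definition.
#[derive(Debug, PartialEq, Encode, Decode)]
struct Transfer {
    from: [u8; 32],
    to: [u8; 32],
    amount: u128,
}

fn main() {
    let tx = Transfer { from: [1; 32], to: [2; 32], amount: 100 };

    // Encoding writes every field into a fresh byte buffer.
    let bytes: Vec<u8> = tx.encode();

    // Decoding reads the buffer back field by field into new values;
    // this per-field copying is what a zero-copy format would avoid.
    let decoded = Transfer::decode(&mut bytes.as_slice()).expect("valid encoding");
    assert_eq!(tx, decoded);
}
```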
OS and Environment
linux, k8s
GIT commit hash
0275cfa
Minimum working example / Steps to reproduce
Test preconditions
Create:
Test
Actual result
Throughput does not increase.
(Attachments: profiler, grafana, and queue screenshots.)
Problems were detected in the following modules:
iroha_core::gossiper::TransactionGossiper::run
iroha_core::_::<impl parity_scale_codec::codec::Decode for iroha_core
iroha_core::sumeragi::main_loop::handle_block_sync
Expected result
Throughput increases in proportion to the load
Logs
Who can help to reproduce?
@timofeevmd
Notes
No response