Profile, benchmark, and add more load tests for portions of the p2p stack #1162
Comments
I have been scrutinizing the code to pinpoint potential bottlenecks (outlined below) that could impede message transmission (on the sender side), thereby hindering network efficiency and increasing block time. I will then conduct various test scenarios (in the upcoming PRs) based on these findings aimed at stretching these bottlenecks to their limits. By measuring the time taken for each, I will determine which elements are most responsible for slowing down message transmission. Subsequently, I will conduct a similar analysis on the receiving flow.
A PR towards #1162. This PR introduces a comprehensive benchmark suite for MConnection, aimed at pinpointing performance bottlenecks. The tests evaluate:
- the impact of send queue capacity on message transmission delay
- whether MConnection can fully utilize its maximum bandwidth
- behavior beyond the available bandwidth, i.e., how the system behaves when message send and receive rates surpass it

So far, no significant bottlenecks have been identified. In a follow-up PR, I plan to adjust additional parameters and share the results and insights gained from these tests.
Another PR towards #1162. This benchmark assesses the impact of individual message size, particularly in relation to the MaxPacketMsgPayloadSize, on the overall performance of MConnection and on its ability to utilize the maximum bandwidth / send rate.
…lestiaorg#1162) Bumps [bufbuild/buf-setup-action](https://github.com/bufbuild/buf-setup-action) from 1.24.0 to 1.25.0.
- [Release notes](https://github.com/bufbuild/buf-setup-action/releases)
- [Commits](bufbuild/buf-setup-action@v1.24.0...v1.25.0)

updated-dependencies:
- dependency-name: bufbuild/buf-setup-action
  dependency-type: direct:production
  update-type: version-update:semver-minor

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
The MConnection is used as a final queue for all incoming and outgoing packets. For this reason, it seems at least within the realm of possibility that it plays a role in what I refer to as "The Syndrome": validators being unable to reach consensus even though neither bandwidth nor compute limits are being hit.
To better understand whether the MConnection plays a role in events that trigger "The Syndrome", such as the inscription events a few weeks ago, we should create simple load unit tests/benchmarks, while also profiling them to check for unexpected effective mutexes. The same goes for other low-level transport logic, such as the MultiplexTransport.