I did some back-of-the-envelope calculations about network bandwidth used with some ideal parameters (16MB PoV, 1000 validators, 100 parachain cores) and we can see that bandwidth is dominated by backing.
# Parachain Networking Figures
Relating PoV size, number of validators (N_V), number of backers per candidate (N_B), number of parachains (N_P), and checkers required (N_C).
Backing:

- Validators have to fetch the PoV from a collator in T_C and distribute it to the other validators in their group in T_V.
- With contextual execution, we have a 12-second window.
- Every 6s, each validator needs to fetch N_P chunks of size (PoV/N_V)*3.
  - At 16MB and 1000 validators, that's roughly 50KB per chunk. At 100 parachains, that's about 5MB downloaded overall, so 40Mb/6s or ~6.7Mbps down.
- The backers each need to serve N_V/N_B chunks of size (PoV/N_V)*3.
  - At 16MB, 1000 validators, and 100 cores, that's 200 chunks of ~50KB, so roughly 10MB up, assuming that validators distribute their requests to backers randomly. This is every 6 seconds, so that's 80/6 Mbps or ~13.3Mbps up.
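A minimal sketch of that arithmetic (parameter names and values here are illustrative, taken from the figures above rather than from any Polkadot codebase):

```rust
// Back-of-envelope check of the backing/availability-distribution figures.
// All names and values are illustrative, not taken from the Polkadot codebase.
fn main() {
    let pov_bytes: f64 = 16.0 * 1024.0 * 1024.0; // 16MB PoV
    let n_v: f64 = 1000.0; // validators
    let n_b: f64 = 5.0;    // backers per candidate, so N_V/N_B = 200
    let n_p: f64 = 100.0;  // parachain cores
    let slot_secs: f64 = 6.0;

    // Erasure coding expands the data ~3x, split into N_V chunks.
    let chunk_bytes = pov_bytes * 3.0 / n_v; // ~50KB

    // Download: every validator fetches one chunk per candidate per 6s.
    let down_mbps = n_p * chunk_bytes * 8.0 / 1e6 / slot_secs;

    // Upload: each backer serves N_V/N_B chunks of its group's candidate.
    let up_mbps = (n_v / n_b) * chunk_bytes * 8.0 / 1e6 / slot_secs;

    println!("chunk size: ~{:.0}KB", chunk_bytes / 1024.0);      // ~49KB
    println!("availability download: ~{:.1}Mbps", down_mbps);    // ~6.7Mbps
    println!("backer availability upload: ~{:.1}Mbps", up_mbps); // ~13.4Mbps
}
```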
Approval:

- Every 6s, each validator needs to recover data for ~N_P*(N_C/N_V) cores, although this is bursty as it's a Poisson distribution.
- As a counterpart, every 6s each validator needs to serve ~N_P*N_C/3 chunk requests, since each of the N_P*N_C recoveries fetches chunks from N_V/3 of the validators.
- With 1000 validators, 20 checkers, 100 parachains, and 16MB PoV, we're looking at validators recovering about 100*(20/1000)*16MB or 32MB of chunks. That'd be 256Mb over 6 seconds, or another ~43Mbps down. This can be burstier.
- For upload with the same parameters: ~667 chunks served on average per 6s, with each chunk having 3*(16MB/1000) ≈ 48KB size. That's around 32MB of upload per 6 seconds, or ~43Mbps up, symmetric with the download side, since every chunk recovered has to be served by some validator.
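The same sketch extended to the approval side (again back-of-envelope; N_C = 20 checkers is the assumption used above, and the slight difference from the ~43Mbps figure comes from using MiB for the PoV):

```rust
// Approval-side figures under the same illustrative parameters as above.
fn main() {
    let pov_bytes: f64 = 16.0 * 1024.0 * 1024.0; // 16MB PoV
    let n_v: f64 = 1000.0; // validators
    let n_p: f64 = 100.0;  // parachain cores
    let n_c: f64 = 20.0;   // approval checkers per candidate (assumed)
    let slot_secs: f64 = 6.0;

    // Download: each validator checks ~N_P*(N_C/N_V) candidates per 6s and
    // recovers the full PoV for each (N_V/3 chunks reconstruct the data).
    let recoveries_per_validator = n_p * n_c / n_v; // ~2
    let down_mbps = recoveries_per_validator * pov_bytes * 8.0 / 1e6 / slot_secs;

    // Upload: N_P*N_C recoveries happen in total per 6s, each requesting a
    // chunk from N_V/3 validators, so each validator serves ~N_P*N_C/3 chunks.
    let chunk_bytes = pov_bytes * 3.0 / n_v;
    let chunks_served = n_p * n_c / 3.0; // ~667
    let up_mbps = chunks_served * chunk_bytes * 8.0 / 1e6 / slot_secs;

    println!("approval download: ~{:.0}Mbps (bursty)", down_mbps); // ~45Mbps
    println!("approval upload:   ~{:.0}Mbps (bursty)", up_mbps);   // ~45Mbps
}
```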
Disputes are rare, so they shouldn't require much extra bandwidth. Basically, an occasional 16MB download/upload by each validator.
Overall, with the desired parameters, we're looking at roughly:

| | Download | Upload |
| --- | --- | --- |
| Backing: availability distribution | ~7Mbps (all validators) | ~13Mbps (backers) |
| Backing: PoV fetch and in-group distribution | ~21Mbps (backers, 16MB/6s) | up to ~85Mbps (a backer serving 4 group peers) |
| Approval | ~43Mbps, bursty | ~43Mbps, bursty |
| Disputes | negligible on average | negligible on average |
This doesn't account for latency at all, but since backing is both the most bandwidth-intensive and the most latency-sensitive component, it makes the most sense to optimize our networking for the backing pipeline. Contextual execution will help this substantially.
Off-chain XCMP, in the current line of thought, will also be a consumer of bandwidth. Mostly, we can think of it as using pretty much the same resources as the PoV, so we can say that it will eat up some of the 16 MiB of the PoV. However, on top of that there is additional bandwidth for the collators of the receiving chains to recover the messages. We were thinking that there should be a cooperative case, i.e. that the collators of the sending network would serve the messages; however, that doesn't help us here, since we still need to reserve bandwidth for message recovery anyway.
Off-chain code. AFAIU, this will require hand-offs between the validators of the current and the next set. This doesn't seem too bad, taking into account the large session window.
This assumes paritytech/polkadot#3779.