chore: update the default per peer bandwidth limits #2390
```diff
@@ -28,6 +28,14 @@ import (
 	coretypes "github.com/tendermint/tendermint/types"
 )
 
+const (
+	// oneGigabit 1Gbit == 1 billion bits
+	oneGigabit = 1_000_000_000
+	// defaultBandwidth is the symmetric bandwidth a node is assumed to have in
+	// bytes (not bits!)
+	defaultBandwidth = oneGigabit / 8
+)
+
 // bankModule defines a custom wrapper around the x/bank module's AppModuleBasic
 // implementation to provide custom default genesis state.
 type bankModule struct {
```
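As a quick sanity check of the unit conversion in the new constants, here is a minimal standalone sketch (not part of the PR itself):

```go
package main

import "fmt"

func main() {
	// Mirrors the constants added in the diff above.
	const oneGigabit = 1_000_000_000        // bits per second
	const defaultBandwidth = oneGigabit / 8 // bytes per second

	// 1 Gbit/s == 125,000,000 bytes/s == 125 MB/s.
	fmt.Printf("defaultBandwidth = %d bytes/s (%.0f MB/s)\n",
		defaultBandwidth, float64(defaultBandwidth)/1_000_000)
}
```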
```diff
@@ -175,6 +183,9 @@ func DefaultConsensusConfig() *tmcfg.Config {
 	cfg.Consensus.TimeoutCommit = appconsts.TimeoutCommit
 	cfg.Consensus.SkipTimeoutCommit = false
 	cfg.TxIndex.Indexer = "null"
+
+	cfg.P2P.SendRate = int64(defaultBandwidth / cfg.P2P.MaxNumInboundPeers)
+	cfg.P2P.RecvRate = int64(defaultBandwidth / cfg.P2P.MaxNumInboundPeers)
 
 	return cfg
 }
```

Review discussion on the `SendRate` line (staheri14 marked this conversation as resolved):

> Shouldn't the send rate be divided by `cfg.P2P.MaxNumOutboundPeers` instead?

> In theory yeah, but I think having a default where peers are sending more bandwidth than that peer is configured to receive is likely a bad idea. That being said, I haven't tried it.

> This being due to nodes by default having more inbound than outbound peers.

> The total amount of consumed bandwidth for upload and download would remain the same for a node; only the amount dedicated to each connected peer would change.

> I see. Is the concern about symmetry in the send and receive rate per connection? Unless two connected peers ensure parity in their counts of inbound connections, disparities in their sending and receiving bandwidth could arise.

> Yeah, exactly. Having a default send rate per peer that is higher than the receive rate naively seemed like a bad idea. Our default `MaxNumOutboundPeers` is 10, while our default inbound is 40, so we could end up sending 4 times as much data as our peer is willing to receive.
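To make the thread's arithmetic concrete, here is a hedged sketch using the default peer counts quoted in the discussion above (40 max inbound, 10 max outbound; these counts come from the comments, not from this diff):

```go
package main

import "fmt"

func main() {
	const defaultBandwidth = 1_000_000_000 / 8 // 125,000,000 bytes/s

	// Defaults quoted in the review thread above.
	const maxNumInboundPeers = 40
	const maxNumOutboundPeers = 10

	// What the PR does: both rates share the budget across the inbound max.
	perPeer := defaultBandwidth / maxNumInboundPeers
	fmt.Println(perPeer) // 3125000 bytes/s (~3.1 MB/s) per peer, send and receive

	// The suggested alternative would divide SendRate by the outbound max,
	// letting a node send 4x what a default-configured peer will receive.
	sendIfOutbound := defaultBandwidth / maxNumOutboundPeers
	fmt.Println(sendIfOutbound, sendIfOutbound/perPeer) // 12500000 4
}
```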
Review discussion on the `defaultBandwidth` constant:

> What's the rationale for this number?

> IIRC, this was the bandwidth recommendation for a consensus node (side note: this should be gigabyte, I think, since we're converting to bits below). I could be wrong though, in which case we should update the default bandwidth expectation below.
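On the gigabit-vs-gigabyte side note: the division by 8 in the diff only makes sense if the constant is in bits. A small hedged comparison of the two readings:

```go
package main

import "fmt"

func main() {
	// Reading 1 (what the diff implements): a 1 Gbit/s link, converted to bytes.
	const oneGigabit = 1_000_000_000 // bits per second
	fmt.Println(oneGigabit / 8)      // 125000000 bytes/s = 125 MB/s

	// Reading 2 (the side note's "gigabyte" reading): a 1 GB/s link.
	// A byte value would not be divided by 8, so the budget would be 8x larger.
	const oneGigabyte = 1_000_000_000 // bytes per second (hypothetical)
	fmt.Println(oneGigabyte)          // 1000000000 bytes/s = 1 GB/s
}
```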