eth, eth/downloader,p2p: reserve half peer slots for snap peers during snap sync #22171
Conversation
eth/downloader/downloader.go
Outdated
@@ -289,6 +289,10 @@ func (d *Downloader) Synchronising() bool {
	return atomic.LoadInt32(&d.synchronising) > 0
}

func (d *Downloader) SnapsyncInProgress() bool {
	return d.snapSync && d.Synchronising()
}
Hm, this probably won't work, because synchronising is only set after we find a peer to start syncing against.
Checking d.snapSync should be sufficient; it's just that we currently don't set it to 0 after the sync is done.
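A minimal sketch of the idea above, with hypothetical, simplified names (not the actual go-ethereum types): if the snap-sync flag is cleared once the sync finishes, checking the flag alone covers the whole lifecycle without relying on Synchronising().

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// downloader is an illustrative stand-in: the snapSync flag is set when a
// snap sync starts and, crucially, cleared again when it finishes.
type downloader struct {
	snapSync uint32 // 1 while a snap sync is in progress
}

func (d *downloader) startSnapSync() { atomic.StoreUint32(&d.snapSync, 1) }
func (d *downloader) finishSync()    { atomic.StoreUint32(&d.snapSync, 0) }

// SnapSyncInProgress needs no extra Synchronising() check here, because the
// flag is reset on completion rather than left dangling.
func (d *downloader) SnapSyncInProgress() bool {
	return atomic.LoadUint32(&d.snapSync) == 1
}

func main() {
	d := new(downloader)
	d.startSnapSync()
	fmt.Println(d.SnapSyncInProgress()) // true while syncing
	d.finishSync()
	fmt.Println(d.SnapSyncInProgress()) // false once done
}
```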
Deployed this on a node:
=> 25 (ish) non-snap peers
One snap peer found ...but gone one second later
It finished eventually
Later on, after the sync is done, we're back at the previous peer mix
eth/handler.go
Outdated
if atomic.LoadUint32(&h.snapSync) == 1 && !peer.SupportsCap("snap", 1) {
	// If we are running snap-sync, we want to reserve half the peer slots for
	// peers supporting the snap protocol
	reserved = h.maxPeers/2 - len(h.peers.snapPeers)
peers needs a lock before you can access an internal field. We already have a Len for the eth peers; we can add a second one for the snap peers.
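A sketch of what such a lock-protected accessor could look like; the peerSet type, fields, and method names here are simplified assumptions for illustration, not the actual go-ethereum definitions.

```go
package main

import (
	"fmt"
	"sync"
)

// peerSet is an illustrative stand-in for the handler's peer set.
type peerSet struct {
	lock      sync.RWMutex
	peers     map[string]struct{} // all eth peers
	snapPeers int                 // count of peers that also run snap/1
}

// Len mirrors the existing eth peer count accessor: take the read lock
// rather than touching internal fields directly.
func (ps *peerSet) Len() int {
	ps.lock.RLock()
	defer ps.lock.RUnlock()
	return len(ps.peers)
}

// SnapLen is the suggested second accessor for snap peers, guarded by the
// same lock.
func (ps *peerSet) SnapLen() int {
	ps.lock.RLock()
	defer ps.lock.RUnlock()
	return ps.snapPeers
}

func main() {
	ps := &peerSet{peers: map[string]struct{}{"a": {}, "b": {}}, snapPeers: 1}
	fmt.Println(ps.Len(), ps.SnapLen()) // 2 1
}
```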
}
if reserved < 0 {
	reserved = 0
}
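The reservation math in the diff above can be isolated as a small pure function (names are illustrative): hold back half of maxPeers for snap-capable peers, minus those already connected, clamped at zero.

```go
package main

import "fmt"

// reservedSlots computes how many peer slots are still held back for
// snap-capable peers: half the budget, minus connected snap peers,
// never negative.
func reservedSlots(maxPeers, snapPeers int) int {
	reserved := maxPeers/2 - snapPeers
	if reserved < 0 {
		reserved = 0
	}
	return reserved
}

func main() {
	fmt.Println(reservedSlots(50, 10)) // 15 slots still reserved
	fmt.Println(reservedSlots(50, 30)) // already past half: 0
}
```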
I like it that the solution is short and sweet, but there's a corner case we need to support. Currently the P2P layer splits the allowed peer count into 2, reserving half for inbound and half for outbound connections. If my node is NATed, I won't get any inbound, so I'll use up 1/2 of my slots outbound. The issue is that I'm able to saturate my outbound connections fully with eth-only peers without triggering the above snap reservation.
One solution is that we reserve half of the inbound and half of the outbound connections separately for snap. That would solve the above mentioned issue, but it can still be a bit flaky if I only have 1/4 of the peers.
Another alternative could be to enforce the restriction based on the currently connected peer count. E.g. if peers.LenEth() > min(2 * peers.LenSnap(), 5), then reject. That would guarantee that we can always get some baseline number of eth peers, but above that, we're enforcing half and half.
Not quite sure I understand... Do you mean like this?

	snapPeers := h.peers.SnapLen()
	ethPeers := h.peers.Len() - snapPeers // snap-peers are a subset of eth-peers
	if ethPeers > 5 && ethPeers > 2*snapPeers {
		reject = true
	}

In that case, it would always accept up to 5 non-snap peers, but would otherwise allow twice as many non-snap as snap.
Alternatively:

	snapPeers := h.peers.SnapLen()
	ethPeers := h.peers.Len() - snapPeers // snap-peers are a subset of eth-peers
	if ethPeers > 5 && ethPeers > snapPeers {
		reject = true
	}

In that case, it would always accept up to 5 non-snap peers, but would otherwise allow only as many non-snap as snap.
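The two candidate rules above differ only in the multiplier on the snap-peer count. A self-contained sketch (helper names are made up for illustration) makes the difference at the boundary concrete:

```go
package main

import "fmt"

// rejectDouble allows twice as many non-snap as snap peers above the
// floor of 5 always-accepted non-snap peers.
func rejectDouble(ethPeers, snapPeers int) bool {
	return ethPeers > 5 && ethPeers > 2*snapPeers
}

// rejectEqual allows only as many non-snap as snap peers above the
// same floor.
func rejectEqual(ethPeers, snapPeers int) bool {
	return ethPeers > 5 && ethPeers > snapPeers
}

func main() {
	// A 6th non-snap peer with only 1 snap peer connected:
	fmt.Println(rejectDouble(6, 1)) // true: needs 3 snap peers first
	fmt.Println(rejectEqual(6, 1))  // true: needs 6 snap peers first
	// With 3 snap peers, the doubling rule lets the 6th in:
	fmt.Println(rejectDouble(6, 3)) // false
}
```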
I guess a different method would be to "reject if non-snap peers are more than 5 + snap-peers".
As it is currently (83a9567), to go from 5 to 6 non-snap peers you need 3 snap peers to join before the 6th non-snap peer is allowed.
The alternative would only require 1 snap-peer before the 6th is allowed in. And eventually, with 50 peers, you'll have something like 23 snap and 27 non-snap peers.
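The 23/27 figure can be checked mechanically. A small sketch (illustrative helper, not project code) searches for the first split of a 50-peer budget that the alternative rule "reject if non-snap peers exceed 5 + snap peers" admits:

```go
package main

import "fmt"

// firstAdmissibleSplit finds the smallest snap-peer count for which the
// rule "reject if eth > 5 + snap" no longer rejects, given a fixed
// total peer budget.
func firstAdmissibleSplit(total int) (snap, eth int) {
	for snap = 0; snap <= total; snap++ {
		eth = total - snap
		if eth <= 5+snap {
			return snap, eth
		}
	}
	return total, 0
}

func main() {
	snap, eth := firstAdmissibleSplit(50)
	fmt.Println(snap, eth) // 23 27, matching the split mentioned above
}
```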
I made an implementation of what I think you meant, PTAL.
LGTM
…hereum#22171)
* eth, p2p: reserve half peer slots for snap peers during snap sync
* eth: less logging
* eth: rework the eth/snap peer reservation logic
* eth: rework the eth/snap peer reservation logic (again)
This PR attempts to reserve half the peer slots for snap/1 peers, if we're actively trying to perform a snap sync.

Some caveats:
* If we run with --snapshot, but syncmode=fast, then it should do nothing.
* If we run with --snapshot, and syncmode=snap, but are done syncing, it should do nothing.