p2p: smart discovery (dead/solo remedy subnets) #1949
base: stage
Conversation
Another thing I've realized is happening: because we only trim once we've reached the MaxPeers limit, at some point we might stop trimming completely. For example, say we've gotten to the max amount of incoming connections we allow via the 0.5 ratio (say, 30), plus we've discovered some outgoing peers but not enough to reach the MaxPeers limit. In other words, we've discovered everything we could (and we keep discovering, just at a very slow rate - due to quite an aggressive filter there aren't many connections we are interested in), but that also means we've stopped re-cycling (trimming) our incoming connections periodically, because we don't trim incoming/outgoing connections separately. I've added a commit to address that - d230c4a - it's a pretty "hacky/ad-hoc" way to do the job. Ideally (to simplify this) I think we probably want separate limits per incoming/outgoing connections and somewhat separate trimming logic (see the sketch below), WDYT?
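Roughly what I have in mind (a hypothetical sketch, not the actual PR code - all names here are made up): keep separate caps per direction and trim each group independently, so filling up the inbound cap alone still re-cycles inbound peers even if the outbound side never reaches its limit:

```go
package p2p

import (
	"sort"
	"time"
)

// connInfo is a hypothetical view of a live connection used for trimming decisions.
type connInfo struct {
	id       string
	inbound  bool
	score    float64 // e.g. subnet-usefulness score; lower gets trimmed first
	openedAt time.Time
}

// trimConfig holds separate limits per direction (hypothetical, not the PR's actual config).
type trimConfig struct {
	maxInbound  int // e.g. MaxPeers * 0.5
	maxOutbound int
}

// trimByDirection trims inbound and outbound connections independently and
// returns the IDs that should be disconnected.
func trimByDirection(conns []connInfo, cfg trimConfig) []string {
	var in, out []connInfo
	for _, c := range conns {
		if c.inbound {
			in = append(in, c)
		} else {
			out = append(out, c)
		}
	}
	trim := func(group []connInfo, limit int) []string {
		if len(group) <= limit {
			return nil
		}
		// Drop the lowest-scoring connections first.
		sort.Slice(group, func(i, j int) bool { return group[i].score < group[j].score })
		ids := make([]string, 0, len(group)-limit)
		for _, c := range group[:len(group)-limit] {
			ids = append(ids, c.id)
		}
		return ids
	}
	return append(trim(in, cfg.maxInbound), trim(out, cfg.maxOutbound)...)
}
```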
"go.uber.org/zap" | ||
"tailscale.com/util/singleflight" |
previous location looks better
I think we need a make format command that would take care of all the imports; it's a bit of a time-waste to look out for this manually.
Can we do a separate task/PR for this to fix it project-wide?
cc @moshe-blox @nkryuchkov @oleg-ssvlabs @MatusKysel
I've had success using https://github.com/daixiang0/gci in the past - it's fast, and we can make separate sections for golang deps, 3rd-party deps, and our own package deps, for example.
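For example, a make format target could look something like this (an untested sketch - the prefix below assumes the repo's module path, adjust as needed):

```makefile
.PHONY: format
format:
	go install github.com/daixiang0/gci@latest
	# Group imports into: stdlib, 3rd-party, then our own packages.
	gci write --skip-generated -s standard -s default -s "prefix(github.com/ssvlabs/ssv)" .
	gofmt -w .
```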
@iurii-ssv I agree, we often change imports according to different rules, and it causes merge conflicts
…ed entities across records & commons sub-packages
…ad subnets linger for ~30m after startup)
network/p2p/p2p.go
Outdated
if peers.DiscoveredPeersPool.Has(proposal.ID) {
	// this log line is commented out as it is too spammy
couldn't we just use this pool in the filters to minimize discovering the same peer over and over?
Indeed we can, done - 82d4805
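Something along these lines (a hypothetical sketch to illustrate the idea, not the actual 82d4805 diff): a discovery filter that rejects candidates already sitting in the discovered-peers pool, so discovery doesn't keep re-proposing them:

```go
package discovery

import "github.com/libp2p/go-libp2p/core/peer"

// pool is the minimal interface we need from the discovered-peers pool
// (hypothetical; the real DiscoveredPeersPool type differs).
type pool interface {
	Has(id peer.ID) bool
}

// notAlreadyDiscovered returns a filter that skips peers already in the
// pool, keeping discovery focused on genuinely new candidates.
func notAlreadyDiscovered(p pool) func(peer.ID) bool {
	return func(id peer.ID) bool {
		return !p.Has(id)
	}
}
```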
// peersToProposePoolCnt is the size of the candidate-peers pool we'll be choosing exactly
// peersToProposeCnt peers from
peersToProposePoolCnt := 2 * peersToProposeCnt
why not choose from the whole pool?
The size of peersByPriority (which is a slightly trimmed version of the whole current pool of discovered peers, peers.DiscoveredPeersPool) can be large - hundreds or maybe even thousands. From what I understand, there isn't a much better solution to this problem than brute force (which is what I'm doing here), because any combination might be the best one - we don't know until we compare it against the best we've got so far. So from my testing I figured 24 is the highest reasonable value:
// also limit how many peers we want to propose with respect to "peer synergy" (this
// value can't be too high for performance reasons, 12 seems like a good middle-ground)
const peersToProposeMaxWithSynergy = 12
peersToProposeCnt = min(peersToProposeCnt, peersToProposeMaxWithSynergy)
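To make the cost concrete (a simplified sketch - the scoring function and types here are made up, not the PR's actual synergy metric): choosing the best k peers out of a pool of n means enumerating all C(n, k) subsets, and C(24, 12) is already ~2.7 million, which is why the pool is capped at 2 * 12 = 24:

```go
package discovery

// synergyScore is a stand-in for the real "peer synergy" metric: it rewards
// combinations whose peers complement each other (here, cover more subnets
// together) rather than scoring each peer in isolation.
func synergyScore(combo []int, peerSubnets [][]int) int {
	covered := map[int]struct{}{}
	for _, p := range combo {
		for _, s := range peerSubnets[p] {
			covered[s] = struct{}{}
		}
	}
	return len(covered)
}

// bestCombination brute-forces all C(n, k) combinations of peer indexes
// [0..n) and returns the highest-scoring one. Any combination might be best,
// so we can't prune: cost grows combinatorially, hence the small pool cap.
func bestCombination(n, k int, peerSubnets [][]int) []int {
	var best []int
	bestScore := -1
	combo := make([]int, 0, k)
	var rec func(start int)
	rec = func(start int) {
		if len(combo) == k {
			if s := synergyScore(combo, peerSubnets); s > bestScore {
				bestScore = s
				best = append([]int(nil), combo...)
			}
			return
		}
		for i := start; i < n; i++ {
			combo = append(combo, i)
			rec(i + 1)
			combo = combo[:len(combo)-1]
		}
	}
	rec(0)
	return best
}
```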
…f discovery filters
Replaces #1917, additionally improving a bunch of p2p-handling code, with the main contributing factor being "selective discovery".
Before merging:
should we set MinConnectivitySubnets to a higher-than-0 value (say, 3)? There isn't really a good way to tell how it affects network connectivity (but it's almost certainly going to improve it, not worsen it)