
p2p: smart discovery (dead/solo remedy subnets) #1949

Open · wants to merge 15 commits into base: stage
Conversation

@iurii-ssv (Contributor) commented Dec 18, 2024

Replaces #1917, additionally improving a number of p2p-handling areas; the main contribution is "selective discovery".

Before merging:

  • remove/resolve TODOs used for testing
  • (when testing) make sure this PR doesn't introduce memory leaks or super-high-CPU-consumption type of issues
  • do we want to set MinConnectivitySubnets to a value higher than 0 (say, 3)? There isn't really a good way to tell how it affects network connectivity, but it's almost certainly going to improve it rather than worsen it
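For illustration, the "selective discovery" idea can be sketched roughly as follows. This is a hypothetical simplification, not the PR's actual code: wantPeer and subnetPeerCounts are made-up names, and the real implementation tracks SSV's fixed subnet set with richer scoring.

```go
package main

import "fmt"

// wantPeer is a hypothetical selective-discovery filter: accept a newly
// discovered peer only if it serves at least one "dead" subnet (0 connected
// peers) or "solo" subnet (exactly 1 connected peer).
// subnetPeerCounts[s] = how many currently connected peers serve subnet s.
func wantPeer(peerSubnets []int, subnetPeerCounts map[int]int) bool {
	for _, s := range peerSubnets {
		if subnetPeerCounts[s] <= 1 { // dead (0) or solo (1)
			return true
		}
	}
	return false
}

func main() {
	counts := map[int]int{1: 5, 2: 0, 3: 1}
	fmt.Println(wantPeer([]int{1}, counts))    // false: subnet 1 already has 5 peers
	fmt.Println(wantPeer([]int{2, 1}, counts)) // true: subnet 2 is dead
}
```

A filter like this is what makes discovery "aggressive": well-covered peers are rejected outright, so only connections that remedy a dead or solo subnet get through.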

@iurii-ssv iurii-ssv force-pushed the testing/deploy/deadsub-remedy branch 2 times, most recently from 5dac0bc to a364c2c Compare December 18, 2024 13:31
@iurii-ssv (Contributor Author)

Another thing I've realized is happening: because we only trim once we've reached the MaxPeers limit, at some point we might stop trimming completely. For example, we've reached the maximum number of incoming connections we allow via the 0.5 ratio (say, 30), and we've discovered some outgoing peers, but not enough to reach the MaxPeers limit.

In other words, we've discovered everything we could (and we keep discovering, just at a very slow rate; due to quite an aggressive filter there aren't many connections we are interested in), but that also means we've stopped recycling (trimming) our incoming connections periodically, because we don't trim incoming and outgoing connections separately.

I've added a commit to address that (d230c4a), but it's a pretty hacky/ad-hoc way to do the job. Ideally (to simplify this) I think we'd want separate limits for incoming and outgoing connections and somewhat separate trimming logic, WDYT?
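To make the per-direction idea concrete, here's a minimal sketch under stated assumptions: connCounts, trimTargets, and the watermark values are all hypothetical, not the PR's actual code.

```go
package main

import "fmt"

// connCounts is a hypothetical snapshot of current connections by direction.
type connCounts struct {
	inbound, outbound int
}

// trimTargets sketches per-direction trimming: instead of trimming only when
// inbound+outbound hits a single MaxPeers limit, each direction gets its own
// cap, so inbound connections keep getting recycled even when the outbound
// side never fills up. Returns how many connections to drop per direction.
func trimTargets(c connCounts, maxInbound, maxOutbound int) (dropIn, dropOut int) {
	if c.inbound > maxInbound {
		dropIn = c.inbound - maxInbound
	}
	if c.outbound > maxOutbound {
		dropOut = c.outbound - maxOutbound
	}
	return dropIn, dropOut
}

func main() {
	// Scenario from the comment above: inbound sits at its ratio-derived cap
	// (30) while outbound never reaches MaxPeers, so a combined limit would
	// never trigger a trim. A separate (slightly lower) inbound cap forces
	// periodic recycling of inbound slots regardless of the outbound count.
	dropIn, dropOut := trimTargets(connCounts{inbound: 30, outbound: 10}, 28, 30)
	fmt.Println(dropIn, dropOut) // 2 0: recycle two inbound slots
}
```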

@iurii-ssv iurii-ssv force-pushed the testing/deploy/deadsub-remedy branch 2 times, most recently from 88a752c to 44665d0 Compare January 9, 2025 13:58

codecov bot commented Jan 9, 2025

@iurii-ssv iurii-ssv force-pushed the testing/deploy/deadsub-remedy branch 4 times, most recently from 04f69b3 to 846688f Compare January 17, 2025 16:41
@iurii-ssv iurii-ssv force-pushed the testing/deploy/deadsub-remedy branch from 846688f to 13b33d0 Compare January 27, 2025 10:02
@iurii-ssv iurii-ssv changed the title p2p: dead/solo subnets remedy (enhanced) p2p: smart discovery (dead/solo remedy subnets) Jan 27, 2025
Comment on lines +27 to +28
"go.uber.org/zap"
"tailscale.com/util/singleflight"
Contributor

previous location looks better

Contributor Author
@iurii-ssv Jan 30, 2025

I think we need a make format command that takes care of all the imports; it's a bit of a time-waste to watch out for this manually.

Can we do a separate task/PR to fix this project-wide?

cc @moshe-blox @nkryuchkov @oleg-ssvlabs @MatusKysel

I've had success using https://github.com/daixiang0/gci in the past. It's fast, and we can define separate sections for standard-library deps, 3rd-party deps, and our own package deps, for example.
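For reference, a make format target wired up with gci might look roughly like this. This is a sketch: the module prefix github.com/ssvlabs/ssv and the exact section split are assumptions, to be adjusted to the repo's actual module path.

```make
# Hypothetical `make format` target: gofmt first, then gci groups imports into
# three sections: Go standard library, 3rd-party, and our own packages.
format:
	gofmt -w .
	gci write --section standard --section default \
		--section "prefix(github.com/ssvlabs/ssv)" .
```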

Contributor

@iurii-ssv I agree; we often change imports according to different rules, and it causes merge conflicts.

Comment on lines 268 to 269
if peers.DiscoveredPeersPool.Has(proposal.ID) {
// this log line is commented out as it is too spammy
Contributor

couldn't we just use this pool in the filters to avoid discovering the same peer over and over?

Contributor Author

Indeed we can; done in 82d4805.

Comment on lines +350 to +352
// peersToProposePoolCnt is the size of the candidate-peers pool we'll be
// choosing exactly peersToProposeCnt peers from
peersToProposePoolCnt := 2 * peersToProposeCnt
Contributor

why not choose from the whole pool?

Contributor Author
@iurii-ssv Feb 19, 2025

The size of peersByPriority (which is a slightly trimmed version of the whole current pool of discovered peers, peers.DiscoveredPeersPool) can be large: hundreds, or maybe even thousands.

From what I understand, there isn't a much better solution to this problem than brute force (which is what I'm doing here), because any combination might turn out to be the best one; we don't know until we compare it against the best we've found so far.

So from my testing I figured 24 is the highest reasonable pool size:

		// also limit how many peers we want to propose with respect to "peer synergy" (this
		// value can't be too high for performance reasons, 12 seems like a good middle-ground)
		const peersToProposeMaxWithSynergy = 12
		peersToProposeCnt = min(peersToProposeCnt, peersToProposeMaxWithSynergy)
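To make the brute-force idea concrete, here's a rough self-contained sketch. The peer type, bestCombination, and coverage-count scoring are hypothetical simplifications of the PR's actual "peer synergy" logic, which scores candidates over SSV's subnet set.

```go
package main

import (
	"fmt"
	"sort"
)

// peer is a hypothetical stand-in for a discovered peer: an ID plus the set
// of subnets it serves.
type peer struct {
	id      string
	subnets map[int]bool
}

// bestCombination brute-forces every k-subset of candidates and returns the
// one covering the most distinct subnets. Cost grows as C(n, k), which is why
// the candidate pool must be capped (e.g. n = 2*k) before calling this.
func bestCombination(candidates []peer, k int) []peer {
	var best []peer
	bestScore := -1
	var recurse func(start int, chosen []peer)
	recurse = func(start int, chosen []peer) {
		if len(chosen) == k {
			covered := map[int]bool{}
			for _, p := range chosen {
				for s := range p.subnets {
					covered[s] = true
				}
			}
			if len(covered) > bestScore {
				bestScore = len(covered)
				best = append([]peer(nil), chosen...) // copy: chosen is reused
			}
			return
		}
		// leave enough remaining candidates to still reach k picks
		for i := start; i <= len(candidates)-(k-len(chosen)); i++ {
			recurse(i+1, append(chosen, candidates[i]))
		}
	}
	recurse(0, nil)
	return best
}

func main() {
	pool := []peer{
		{"a", map[int]bool{1: true, 2: true}},
		{"b", map[int]bool{1: true, 2: true}}, // duplicates a's coverage
		{"c", map[int]bool{3: true}},
		{"d", map[int]bool{4: true, 5: true}},
	}
	chosen := bestCombination(pool, 2)
	var ids []string
	for _, p := range chosen {
		ids = append(ids, p.id)
	}
	sort.Strings(ids)
	fmt.Println(ids) // [a d]: together they cover 4 subnets, the max for k=2
}
```

With the caps from the snippet above (k = 12 picks from a pool of 24), the search space is C(24, 12) ≈ 2.7 million combinations, which is roughly where exhaustive enumeration stops being cheap.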

4 participants