fix: correctly track CPLs of never refreshed buckets #71
```diff
@@ -365,3 +365,17 @@ func (rt *RoutingTable) bucketIdForPeer(p peer.ID) int {
 	}
 	return bucketID
 }
+
+// maxCommonPrefix returns the maximum common prefix length between any peer in
+// the table and the current peer.
+func (rt *RoutingTable) maxCommonPrefix() uint {
+	rt.tabLock.RLock()
+	defer rt.tabLock.RUnlock()
+
+	for i := len(rt.buckets) - 1; i >= 0; i-- {
+		if rt.buckets[i].len() > 0 {
+			return rt.buckets[i].maxCommonPrefix(rt.local)
+		}
+	}
+	return 0
+}
```

Review discussion on this change:

While #72 is a valid point, are we sure we actually want to return a smaller number of tracked buckets and ramp back up? Upsides: scales nicely as the network grows without us touching anything. Downsides: maybe we want some minimal number of buckets to ramp up our scale. Feel free to ignore this comment or just say "meh, this is probably fine".

The number of buckets shouldn't matter in practice, right?

The number of buckets we have shouldn't matter. We might care about the number of refreshes, but as long as it's ~15-20 and not ~100 it's probably fine. I'm not sure if there's any tweaking to be done over how we do initial bootstrapping so that we fill our buckets fairly quickly. I suspect in the real network this will be much less problematic than in the test network, though.

My point is that, in terms of refreshing the routing table, the logic in this PR will refresh every CPL, regardless of how many buckets we actually have. In practice, the number of buckets we have shouldn't affect anything but memory usage, unless we're doing something wrong.
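To make that concrete, here is a rough sketch of how maxCommonPrefix would drive a refresh pass: every CPL from 0 up to the deepest observed prefix gets a query, independent of how many buckets the table has actually split into. The method name refreshTrackedCpls and the refreshCpl callback are placeholders for illustration, not the actual refresh/bootstrap API in the DHT.

```go
// Sketch only: illustrates the "refresh every CPL" behaviour discussed above,
// written as if it lived in the same package as RoutingTable. The refreshCpl
// callback (e.g. a DHT query for a random key with the given prefix length)
// is a placeholder, not an existing API.
func (rt *RoutingTable) refreshTrackedCpls(refreshCpl func(cpl uint) error) error {
	maxCpl := rt.maxCommonPrefix()
	// One refresh per CPL in 0..maxCpl, regardless of bucket count.
	for cpl := uint(0); cpl <= maxCpl; cpl++ {
		if err := refreshCpl(cpl); err != nil {
			return err
		}
	}
	return nil
}
```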
This is interesting... it means that if we remove the maxCpl cap (because we fix the RPCs in the DHT to allow querying random KadIDs), and there are 2^10 peers in the network but bucket number 10 has someone really close to us (e.g. 20 shared bits), then I'm now going to be querying 20 buckets. Not sure that's really what we want, is it?
We should probably query until we fail to fill a bucket.
You mean do a query, wait for it to finish and stop if the percentage of the bucket that is full after the query drops below x%? That seems reasonable.
Yeah, something like that. But I'm fine cutting an RC without that.
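A rough sketch of the "stop when a bucket fails to fill" idea, under the assumption that the refresh callback can report how full the corresponding bucket ended up once the query finishes; the callback signature and the 0.5 threshold are made up for illustration, not part of the existing code.

```go
// Sketch only: stop refreshing deeper CPLs once a refreshed bucket stays
// mostly empty. The fill fraction returned by refreshCpl and the minFill
// threshold are assumptions, not existing APIs.
func (rt *RoutingTable) refreshUntilUnderfilled(refreshCpl func(cpl uint) (fill float64, err error)) error {
	const minFill = 0.5 // the "x%" from the discussion above
	maxCpl := rt.maxCommonPrefix()
	for cpl := uint(0); cpl <= maxCpl; cpl++ {
		fill, err := refreshCpl(cpl)
		if err != nil {
			return err
		}
		// If this bucket stayed mostly empty after the query, deeper CPLs are
		// unlikely to have enough peers to be worth querying.
		if fill < minFill {
			break
		}
	}
	return nil
}
```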
No problem at all since we max out at 15 anyway.
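For reference, the cap mentioned here amounts to a simple clamp on the value returned by maxCommonPrefix; the constant name below is illustrative, not necessarily the one used in the code.

```go
// Sketch only: clamp the deepest CPL we refresh to a fixed maximum.
const maxCplForRefresh uint = 15

func cplsToRefresh(maxCpl uint) uint {
	if maxCpl > maxCplForRefresh {
		return maxCplForRefresh
	}
	return maxCpl
}
```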
I'll file an issue to track it