
Reconsider peer connection strategy #33

Open

rakoo opened this issue Jan 15, 2014 · 5 comments

Comments

@rakoo
Contributor

rakoo commented Jan 15, 2014

Currently, we accept all incoming connections and try to connect to every new peer we learn of. According to this blog post from the libtorrent main dev (http://blog.libtorrent.org/2012/12/swarm-connectivity/), the standard behavior is to cap connectivity at 50 peers.

The outcome of the current strategy is that global connectivity is low, and it takes a long time before a new entrant can find anyone willing to share in the swarm.

The proposed alternative is a commutative function that takes both peers' addresses and outputs a priority; we then connect to the highest-priority peers first, even going as far as disconnecting peers we are currently connected to if another one turns out to be better.

Here's the function (from the simulator code, https://github.com/arvidn/peer_ordering):

import hashlib

prio_cache = {}

def prio(n1, n2):
        # Sort the pair so the function is commutative: prio(a, b) == prio(b, a).
        if n1 > n2:
                n1, n2 = n2, n1

        if (n1, n2) in prio_cache:
                return prio_cache[(n1, n2)]

        h = hashlib.sha1()
        h.update(('%d%d' % (n1, n2)).encode())  # hash wants bytes in Python 3
        p = h.hexdigest()
        prio_cache[(n1, n2)] = p
        return p

Fairly simple, could be interesting to consider.
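To make the proposal concrete, here is a minimal sketch of how the priority could drive peer selection. The `select_peers` helper and the `PEER_CAP` constant are hypothetical names, not part of the simulator code; the cap of 50 comes from the blog post cited above.

```python
import hashlib

PEER_CAP = 50  # hypothetical constant; cap suggested by the libtorrent blog post

prio_cache = {}

def prio(n1, n2):
    # Sort the pair so the function is commutative: prio(a, b) == prio(b, a).
    if n1 > n2:
        n1, n2 = n2, n1
    if (n1, n2) in prio_cache:
        return prio_cache[(n1, n2)]
    h = hashlib.sha1()
    h.update(('%d%d' % (n1, n2)).encode())  # hash wants bytes in Python 3
    p = h.hexdigest()
    prio_cache[(n1, n2)] = p
    return p

def select_peers(my_id, candidates, cap=PEER_CAP):
    # Rank candidates by priority relative to us (hex digests compare
    # lexicographically) and keep only the top `cap`; anyone outside the
    # top `cap` would be disconnected in favor of a better peer.
    ranked = sorted(candidates, key=lambda peer: prio(my_id, peer), reverse=True)
    return ranked[:cap]
```

Because `prio` is commutative, both ends of a potential connection compute the same priority for it, so the two peers agree on whether the connection is worth keeping.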

@jackpal
Owner

jackpal commented Jan 16, 2014

Yeah, TT should do something better than it currently does. Fine with me if you want to implement this proposed policy. (Might be nice if the policy were pluggable and independently testable.)


@jbenet

jbenet commented Feb 2, 2014

It's worth mentioning that policies like this can be vulnerable to sybil (and other) attacks: e.g., people can fabricate peer IDs to attack particular hosts. (For example, Kademlia mitigates this by trusting its "oldest" peers more.) This is probably not worth worrying much about, though :)

@jackpal
Owner

jackpal commented May 4, 2014

Just a point of information: it looks like TT currently does have a limit of MAX_NUM_PEERS per torrent session, and MAX_NUM_PEERS is currently 60.

@sashabaranov

I've tested transmission and TT on the same magnet link, and TT is always slower. I wonder if it is because the wrong peers (ones with low speed) are chosen.

@nictuku
Collaborator

nictuku commented May 5, 2015 via email
