
throughput is 5 MB/s on loopback #3

Closed

guymguym opened this issue Sep 7, 2015 · 13 comments
guymguym commented Sep 7, 2015

Hi

I tested with go 1.5 on Mac Air:

$ go get github.com/anacrolix/utp
$ go get github.com/anacrolix/utp/cmd/ucat
$ ./bin/ucat -l -p 9876 >/dev/null &
$ dd if=/dev/zero bs=1m count=100 | ./bin/ucat 127.0.0.1 9876
100+0 records in
100+0 records out
104857600 bytes transferred in 18.542269 secs ( *** 5655058 bytes/sec *** )
2015/09/07 03:51:28 wrote 104857600 bytes
2015/09/07 03:51:28 received 0 bytes
2015/09/07 03:51:28 received 104857600 bytes

This is compared to native build of ucat from https://github.com/bittorrent/libutp:

$ ./build/Release/ucat -l -p 9876 >/dev/null &
$ dd if=/dev/zero bs=1m count=1000 | ./build/Release/ucat -B $((256*1024)) 127.0.0.1 9876
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 11.888499 secs ( *** 88200875 bytes/sec *** )

That's a big drop.
Cheers

anacrolix (Owner)

There are some justifications in the README. Working over localhost on my MBP, using the provided pingpong utility (which I'll update in a sec), I get the following performance:

utp->libutp 20.7MiB/s
libutp->utp 13.7MiB/s
libutp->libutp 32.1MiB/s
utp->utp 6.93MiB/s

For a 348MiB file.

I think there are some improvements that can be made to receiving; it seems to take a few seconds to accept. Also, utp->utp isn't great. In the wild, utp usually interacts with libutp or some other derivative, so this isn't necessarily the worst case for performance.

If you'd like to help out, the low hanging fruit is doing some profiling, and fiddling with the MTU value in the library.

anacrolix (Owner)

There was some congestion in Socket.dispatcher, which I've rewritten. It seems to have improved speeds a bit.

Here other_ucat is libutp's ucat; the before-changes figures are on the left, after on the right:

utp->other_ucat
[20.4MiB/s] [20.8MiB/s]
other_ucat->utp
[14.5MiB/s] [18.4MiB/s]
libutp->libutp
[30.7MiB/s] [31.1MiB/s]
utp->utp
[7.22MiB/s] [ 11MiB/s]

anacrolix (Owner)

With some ricing of constants it's now:
utp->other_ucat
[39.6MiB/s]
other_ucat->utp
[18.8MiB/s]
libutp->libutp
[29.7MiB/s]
utp->utp
[22.6MiB/s]

For utp->utp, that's nearly 4x the speed I got with the revision at which this issue was reported.

jbenet commented Oct 2, 2015

👍 Nice progress! I've formatted the table better:

direction    1           2           3
go->lib      20.7 MiB/s  20.8 MiB/s  39.6 MiB/s
lib->go      13.7 MiB/s  18.4 MiB/s  18.8 MiB/s
lib->lib     32.1 MiB/s  31.1 MiB/s  29.7 MiB/s
go->go       6.93 MiB/s  11 MiB/s    22.6 MiB/s

Curious that lib->go is the bad one now.

anacrolix (Owner)

Thanks for that. I expect it has to do with only selectively acking the next 64 inbound packets, or with something I'm feeding libutp in my outbound headers that it doesn't like. Interestingly, lib->go is actually the most important metric for the torrent use case, since it represents inbound throughput; though it's a non-issue, as that's ~18-19 MiB/s per peer. If you have any feedback from your project, @jbenet, that would be great.
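For context on the selective-ACK mechanics mentioned here: in uTP (BEP 29), the selective ACK extension carries a bitmask in which bit n, counting from the least significant bit of the first byte, marks packet ack_nr + 2 + n as received out of order. A rough sketch of building such a mask; the function name and map representation are my own illustration, not this library's code:

```go
package main

import "fmt"

// selAckMask builds a uTP-style selective-ACK bitmask (BEP 29, extension 1).
// Bit n, starting from the least significant bit of the first byte, marks
// packet ackNr+2+n as having been received out of order.
func selAckMask(ackNr uint16, received map[uint16]bool) [8]byte {
	var mask [8]byte
	for n := 0; n < 64; n++ {
		seq := ackNr + 2 + uint16(n)
		if received[seq] {
			mask[n/8] |= 1 << (n % 8)
		}
	}
	return mask
}

func main() {
	// Packets 102 and 105 arrived out of order after ack_nr = 100.
	mask := selAckMask(100, map[uint16]bool{102: true, 105: true})
	fmt.Printf("%08b %08b\n", mask[0], mask[1]) // prints "00001001 00000000"
}
```

A mask covering only the next 64 packets caps how much out-of-order data the sender can learn about per ACK, which is one plausible reason the lib->go direction lags.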

jbenet commented Oct 2, 2015

@anacrolix we haven't deployed it because we were waiting for this throughput fix. I think 19 MiB/s per peer is good enough to try; we'll ship it and report back with results. If you'd like any metrics, we can see about adding them. go-ipfs can output both custom event logs and Prometheus metrics.

cc @whyrusleeping

jbenet commented Oct 2, 2015

Also, @anacrolix, I've been meaning to set up some benchmarks to check the throughput of various transports (uTP, UDT, SCTP, QUIC, ...) on various kinds of network setups. It would be interesting to have a solid bandwidth benchmark that tests transports over real networks. If you're interested in this, we will be adapting golang/build to help us run these.

anacrolix added a commit that referenced this issue Oct 2, 2015
Improves libutp->utp performance from ~18.8MiB/s to ~25MiB/s. See issue #3.
anacrolix (Owner)

Performance now:

direction    1           2           3           4
go->lib      20.7 MiB/s  20.8 MiB/s  39.6 MiB/s  38.6 MiB/s
lib->go      13.7 MiB/s  18.4 MiB/s  18.8 MiB/s  27.3 MiB/s
lib->lib     32.1 MiB/s  31.1 MiB/s  29.7 MiB/s  28.9 MiB/s
go->go       6.93 MiB/s  11 MiB/s    22.6 MiB/s  39.4 MiB/s

Localhost doesn't take into account connections between peers with greater latencies, so I'm closing this issue: the localhost performance is already greatly improved, and further ricing comes at the expense of more important real-world scenarios. Thanks.

jbenet commented Oct 4, 2015

👏 👏 👏 Great work @anacrolix! We're deploying this in go-ipfs 0.3.8. Will keep you posted.

AudriusButkevicius

Hey,

One question: has this seen any production use? I mean lossy, high-latency networks, etc.?

Thanks.

guymguym (Author) commented Oct 4, 2015

@anacrolix did you get only ~30 MB/s with ucat->ucat over loopback? I get almost 90 MB/s ...

whyrusleeping

@anacrolix Thanks for the great work so far! I've started the integration of utp into ipfs here: ipfs/kubo#1789

It works moderately well, although we've seen a few random halts; they aren't necessarily caused by utp, but there's a chance they are. I can continue to post updates here or elsewhere if you'd like, and I'll be sure to file issues for anything I find.

@anacrolix
Copy link
Owner

@AudriusButkevicius: The package is used in production 24/7 by an application using package torrent.

@whyrusleeping : If you get any more information about the stalls with package utp, please report an issue.
