
File mode slow, live mode fast for same route, what am I doing wrong! #835

Closed
oviano opened this issue Aug 26, 2019 · 13 comments
Labels: Type: Question (Questions or things that require clarification)

oviano (Contributor) commented Aug 26, 2019

Between the two endpoints that I am currently testing (London -> Istanbul, both FTTC connections) I can get really good performance in live mode - I've tried 7 Mbps and it streams video flawlessly. I suspect I could go even higher, but I've not tried.

However, I cannot get more than about 800 kbps using file mode. I am using the buffer API, and in my test code I throw a 10 MB chunk of data at the send function.
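For context, a buffer-mode send with the SRT C API typically looks something like the sketch below. This is illustrative only, not the project's actual code; it assumes an already-connected socket configured with SRTT_FILE/stream API, where srt_send() may accept fewer bytes than requested and so needs a loop:

```cpp
#include <srt/srt.h>
#include <cstddef>

// Queue an entire buffer on a stream-mode (file transtype) SRT socket.
// srt_send() may copy only part of the buffer, so loop until it is all queued.
static bool send_all(SRTSOCKET sock, const char* data, size_t size)
{
    size_t total = 0;
    while (total < size)
    {
        size_t remaining = size - total;
        int chunk = remaining > 0x7fffffff ? 0x7fffffff : (int)remaining;
        int n = srt_send(sock, data + total, chunk);
        if (n == SRT_ERROR)
            return false;  // inspect srt_getlasterror_str() for the reason
        total += (size_t)n;
    }
    return true;
}
```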

I assume I must be missing something? Here are my logs.

Server:
https://ovcollyer.synology.me:5001/d/f/506319724680847378

Client:
https://ovcollyer.synology.me:5001/d/f/506320602078912532

oviano (Contributor, Author) commented Aug 26, 2019

Ugh, my calculations were wrong; it's faster than I thought.

@oviano oviano closed this as completed Aug 26, 2019
maxsharabayko (Collaborator) commented:

@oviano
Please also consider configuring the receiver's and sender's buffers, as well as the flow control window.
See #703.
File congestion control improvements are on the way in PR #807.
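For readers following along, the options being referred to are presumably SRTO_FC (flow control window, in packets) and SRTO_SNDBUF/SRTO_RCVBUF (send/receive buffers, in bytes); a minimal sketch with illustrative values, which should really be sized from your bandwidth-delay product:

```cpp
#include <srt/srt.h>

// Illustrative values only; tune fc_window to your link's bandwidth-delay
// product. Buffers are commonly sized as roughly fc_window * (MSS - 28).
static void configure_buffers(SRTSOCKET sock)
{
    int fc_window = 25600;                    // flow control window, packets
    int buf_bytes = fc_window * (1500 - 28);  // matching buffer size, bytes
    srt_setsockopt(sock, 0, SRTO_FC, &fc_window, sizeof fc_window);
    srt_setsockopt(sock, 0, SRTO_SNDBUF, &buf_bytes, sizeof buf_bytes);
    srt_setsockopt(sock, 0, SRTO_RCVBUF, &buf_bytes, sizeof buf_bytes);
}
```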

oviano (Contributor, Author) commented Aug 26, 2019

Yeah I've configured all that.

I've been investigating this more closely, as I thought I was getting slower speeds yesterday, and I've found something quite strange.

I am finding that with heavy logging enabled on the client (which is OS X), it receives data from the server on the order of 3-4x faster.

I've been running dozens of tests one after the other, alternating between linking against SRT with heavy logging enabled and disabled, and there is a clear correlation.

maxsharabayko (Collaborator) commented:

> with heavy logging enabled on the client (which is OS X), it is of the order of 3 or 4 x faster receiving the data from the server.

On the client... This might be due to the "delivery rate" calculation on the receiving side. The faster the client receives the data, the faster the server tries to send.
You can try to limit the sending rate with the SRTO_MAXBW option if you know your available bandwidth. Otherwise it will fluctuate, especially at the start.
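For example (a sketch with an assumed, illustrative bandwidth cap; SRTO_MAXBW takes the limit in bytes per second as an int64_t):

```cpp
#include <srt/srt.h>
#include <cstdint>

// Cap the sending rate. The SRTO_MAXBW value is in bytes per second and must
// be passed as a 64-bit integer.
static void cap_send_rate(SRTSOCKET sock, int64_t bytes_per_second)
{
    srt_setsockopt(sock, 0, SRTO_MAXBW, &bytes_per_second,
                   sizeof bytes_per_second);
}
```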

You can use this script to plot the stats, like the charts in #807.

oviano (Contributor, Author) commented Aug 26, 2019

Thanks. SRTO_MAXBW on the server didn't seem to make a difference.

Also, I tried a longer test, transmitting 100 MB instead of 10 MB, and I got 5 Mbps without heavy logging and 10 Mbps with heavy logging.

oviano (Contributor, Author) commented Aug 26, 2019

So here are two logs showing the difference in the server's heavy log output when heavy logging is enabled/disabled on the client:

Server log (client heavy logging enabled), transmitted at 10 Mbps:
https://ovcollyer.synology.me:5001/d/f/506345748764303386

Server log (client heavy logging disabled), transmitted at 3 Mbps:
https://ovcollyer.synology.me:5001/d/f/506344812805365784

For a start, when it's slower, the server log is massively bigger.

oviano (Contributor, Author) commented Aug 26, 2019

Right, so here is what I have found.

MAXBW does indeed help with this issue. However, I think I may have uncovered a bug: if you set MAXBW on the listen socket, then even though it gets passed down/inherited to the accepted socket (getsockopt on the accepted socket returns the correct value), it is not properly taken into account, and in fact the socket behaves as if it were using the default maxbw of zero.

However, if you explicitly call setsockopt for maxbw on the socket after accepting it, the value is then taken into account.
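The workaround described here would look roughly like the following sketch (illustrative names, POSIX socket headers assumed; not the project's actual code): re-apply SRTO_MAXBW explicitly on the socket returned by srt_accept() rather than relying on inheritance from the listener.

```cpp
#include <srt/srt.h>
#include <cstdint>
#include <sys/socket.h>

// Accept a connection and explicitly re-apply the bandwidth cap on the
// accepted socket instead of relying on inheritance from the listener.
static SRTSOCKET accept_with_maxbw(SRTSOCKET listener, int64_t maxbw_bytes_per_sec)
{
    sockaddr_storage peer;
    int peer_len = sizeof peer;
    SRTSOCKET accepted = srt_accept(listener, (sockaddr*)&peer, &peer_len);
    if (accepted == SRT_INVALID_SOCK)
        return accepted;

    srt_setsockopt(accepted, 0, SRTO_MAXBW, &maxbw_bytes_per_sec,
                   sizeof maxbw_bytes_per_sec);
    return accepted;
}
```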

Here are two logs. I've deliberately specified a low value so I can clearly see whether it is working or not.

The first is the log when maxbw is specified as 262144 only for the listen socket, and the accepted socket is supposed to inherit this value:

https://ovcollyer.synology.me:5001/d/f/506361188731330598

The second is the log when maxbw is set to the same value directly on the accepted socket immediately after it has been accepted:

https://ovcollyer.synology.me:5001/d/f/506361155636174884

You don't even need to read the logs; just look at the size difference. When we rely on the listen socket value, it is ignored, which in my test scenario leads to tons of retransmissions as FileCC struggles to cope, hence the larger file.

With the overridden value, however, it correctly limits the bandwidth and we get a shorter log due to fewer retransmissions.

So I see two issues here:

  1. the original issue, which is really a request for a general improvement to FileCC so that it performs well without having to specify maxbw. Specifically, in my test scenario, when I set a maxbw of 8 Mbps it uses all of it, but when I leave it at zero it struggles to use 2-3 Mbps. That doesn't seem optimal. As I mentioned earlier, I tried with a larger file (100 MB instead of 10 MB) and got a similar result (maybe slightly better, but it was far from using the available bandwidth).

  2. if you do getsockopt on an accepted socket, it correctly returns the maxbw that was originally set on the listen socket, but the value is not actually used.

I assume you are right in your suggestion that when the client is slowed down by heavy logging, this has a similar effect of reducing the bandwidth it tries to use. It's curious, however, that enabling heavy logging changed things just enough that it made the connection behave as I would expect FileCC to behave with maxbw = 0, i.e. almost exactly utilising the available bandwidth?

@oviano oviano reopened this Aug 26, 2019
maxsharabayko (Collaborator) commented:

Please collect CSV stats instead.
If you are using srt-live-transmit, add the following options: -s 1000 -pf csv -statsout stats-snd.csv
From the logs I can see that in the 3 Mbps case the time between packets is 100 microseconds (10,000 pkts/sec -> trying to deliver data at ~116 Mbps), so I would suspect a high loss rate.
In the 10 Mbps case the time between packets is 900 microseconds (~10 Mbps sending rate).

Could you also check the branch in PR #807?

oviano (Contributor, Author) commented Aug 26, 2019

OK Maxim, I'll try to figure out how to generate the stats (this is custom code within my project, not srt-live-transmit).

Will also take a look at #807.

maxsharabayko (Collaborator) commented:

Stats writing code in srt-live-transmit: link.
Retrieve stats on a socket, e.g. every second, by calling srt_bstats.
Use the existing function to write CSV stats. It is better to preserve the format.
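A minimal sketch of that approach (polling srt_bstats() roughly once per second and appending a few fields as CSV; the field selection here is illustrative and is not the full srt-live-transmit format):

```cpp
#include <srt/srt.h>
#include <fstream>

// Fetch interval-based stats (clear = 1) and append selected fields as a CSV row.
static void append_stats_csv(SRTSOCKET sock, std::ofstream& out)
{
    SRT_TRACEBSTATS perf;
    if (srt_bstats(sock, &perf, 1) == SRT_ERROR)
        return;
    out << perf.msTimeStamp << ','
        << perf.mbpsSendRate << ','
        << perf.pktRetrans << ','
        << perf.pktSndLoss << ','
        << perf.msRTT << ','
        << perf.mbpsBandwidth << '\n';
}
```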

oviano (Contributor, Author) commented Aug 26, 2019

OK, ignore my point 2 above; I was being a fool. I didn't notice that maxbw is an int64_t, so I was creating some weird scenario by passing in an int.

I'm having a bad day, please forgive me.
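(For anyone else tripping over the same thing, the mistake described above looks roughly like this; a hypothetical sketch rather than the actual project code.)

```cpp
#include <srt/srt.h>
#include <cstdint>

static void set_maxbw(SRTSOCKET sock)
{
    // Wrong: SRTO_MAXBW expects an int64_t; passing an int with an int-sized
    // length is not what the API expects and leads to odd behaviour.
    int bad = 262144;
    srt_setsockopt(sock, 0, SRTO_MAXBW, &bad, sizeof bad);

    // Right: a 64-bit value with a matching length.
    int64_t good = 262144;
    srt_setsockopt(sock, 0, SRTO_MAXBW, &good, sizeof good);
}
```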

maxsharabayko (Collaborator) commented:

No worries. :)
So far everything looks as suggested in this comment.
So SRTO_MAXBW should help reduce the fluctuations. E.g. if you set it to 1250000 (bytes per second), the maximum sending rate will be limited to 10 Mbps.
PR #807 should improve this, and there will be further improvements to FileCC beyond that.
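To make the unit conversion explicit (a small helper sketch; the name is illustrative):

```cpp
#include <cstdint>

// Convert a cap in megabits per second to the bytes-per-second value that
// SRTO_MAXBW expects: 10 Mbps = 10,000,000 bits/s / 8 = 1,250,000 bytes/s.
constexpr int64_t maxbw_from_mbps(int64_t mbps)
{
    return mbps * 1000000 / 8;
}

static_assert(maxbw_from_mbps(10) == 1250000, "10 Mbps -> 1,250,000 B/s");
```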

@maxsharabayko maxsharabayko added the Type: Question Questions or things that require clarification label Aug 27, 2019
@mbakholdina mbakholdina added this to the Parking Lot milestone Feb 6, 2020
maxsharabayko (Collaborator) commented:

Closing as abandoned.
Please don't hesitate to reopen if the question is still topical.
