srt-live-transmit bitrate issue #933

Open
ltrayanov opened this issue Oct 30, 2019 · 33 comments
Labels
[apps] Area: Test applications related improvements [core] Area: Changes in SRT library core Priority: Low
Milestone

Comments

@ltrayanov

When testing release 1.4 with the srt-live-transmit application we found a strange behavior between caller (sender) to listener (receiver) and listener (sender) to caller (receiver).
In both cases our input was a 54M MPEG-TS multicast stream, 225.168.111.105:51011 in the example below. The sender had the IP address 192.168.111.14 and the receiver was at 192.168.111.15.

Everything worked as expected with the config below: sending and receiving rates were around 57M, and the output multicast stream was fine:
sender udp://@225.168.111.105:51011 srt://192.168.111.15:55555
receiver srt://:55555 udp://225.168.111.15:51021

When we switched the modes to:
sender udp://@225.168.111.105:51011 srt://:55555
receiver srt://192.168.111.14:55555 udp://225.168.111.15:51021
the send and receive rates were down to 40M, and the output multicast stream had numerous continuity counter and PCR errors.

The problem stopped when the input MPEG-TS stream rate was reduced to 20M or below.
Is that expected behavior of SRT in listener -> caller mode?
Are there any socket settings that can improve that mode?
Thanks!

@maxsharabayko maxsharabayko added this to the v1.4.1 milestone Oct 30, 2019
@maxsharabayko maxsharabayko added the [core] Area: Changes in SRT library core label Oct 30, 2019
@maxsharabayko
Collaborator

Hi @ltrayanov This does not look like expected behavior. We will have a look.
There have been a couple of important fixes, though. Could you please also check with the latest version on the master branch?

@ltrayanov
Author

We already did. Unfortunately same result.

@ethouris
Collaborator

ethouris commented Nov 1, 2019

Note that listening on a port number above 32768 is extremely risky, especially if UDP sockets have the SO_REUSEPORT flag on by default. Please confirm that this also happens when you change the SRT port to, for example, 5555.
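
For reference, a minimal sketch (plain POSIX sockets, Linux assumed; not part of SRT itself) to check what the defaults actually are on a freshly created UDP socket on your system:

// Illustrative only: print the default SO_REUSEADDR / SO_REUSEPORT state
// of a newly created UDP socket on this system.
#include <cstdio>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main()
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    int val = 0;
    socklen_t len = sizeof(val);
    if (getsockopt(s, SOL_SOCKET, SO_REUSEADDR, &val, &len) == 0)
        printf("SO_REUSEADDR default: %d\n", val);

#ifdef SO_REUSEPORT
    val = 0; len = sizeof(val);
    if (getsockopt(s, SOL_SOCKET, SO_REUSEPORT, &val, &len) == 0)
        printf("SO_REUSEPORT default: %d\n", val);
#endif

    close(s);
    return 0;
}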

@ltrayanov
Author

I just performed tests with ports 5555, 4444, 3333 and 2222.
Same result. There are no issues with caller (sender) -> listener (receiver); however, listener (sender) -> caller (receiver) has reduced bandwidth.

@maxsharabayko
Collaborator

@ltrayanov Could you also check v1.3.4 to understand if it is something newly introduced?

@maxsharabayko maxsharabayko added [apps] Area: Test applications related improvements and removed [core] Area: Changes in SRT library core labels Nov 12, 2019
@ltrayanov
Author

Sorry for my late response. We did try the old versions and we can report the same results.

@maxsharabayko
Collaborator

@ltrayanov Thanks for the update. One more kind request:
could you please collect CSV statistics for both cases with the latest version?
srt-live-transmit options to use:
-statsout stats-snd.csv -pf csv -s 2000

We can then take a look at the situation.

@maxsharabayko maxsharabayko modified the milestones: v1.4.1, v1.4.2 Nov 26, 2019
@mbakholdina mbakholdina self-assigned this Jan 17, 2020
@J-Rogmann
Contributor

Hello @ltrayanov
I would like to understand this issue and will try to reproduce it in my local network today. Did you have a chance to create the statistics files?
Furthermore, I would like to understand your setup. Did you use SRT in your own application, or did you try the sample application srt-live-transmit? Would you mind posting the complete command line, including the parameters set for SRT for sender & receiver?

I'll let you know if I can reproduce it here.

@ltrayanov
Author

I am sorry,
the files are attached as per your request.

We were using the live-transmit app on both sides with the following command line:
sender udp://@225.168.111.104:51011 srt://192.168.111.15:51050
receiver srt://:55555 udp://192.168.111.15:20000
The UDP stream is 50M; a screenshot of the analyzer is inside the zip archive.
The stats from the receiver device are in the stats-snd-caller.csv file.
With that setup we don't see any problems.

When we use the same UDP source with:
sender udp://@225.168.111.105:51011 srt://:51050
receiver srt://192.168.111.14:55555 udp://192.168.111.15:20000
the received UDP stream is bad, and the stats are in the stats-snd-listener.csv file.

We are in the process of building an app for our encoder and I will update with the results.
Thanks,

stats.zip

@ltrayanov
Author

We also decided to test srt-live-transmit ver 1.4.1.
With the latest release, any UDP input transport stream over 20M is not processed the right way. With a 94M input TS we see a sending rate below 55M; with a 47M input rate the sending rate is ~37M; with a 28M input rate the sending rate is 27M. In all cases the output TS has a large number of CRC errors.

@J-Rogmann
Contributor

We will have a look and get back to you as soon as possible.

@maxsharabayko
Collaborator

stats-rcv-caller: [chart omitted]

stats-rcv-listener: [chart omitted]

No drops, no retransmissions.

@ltrayanov Please recheck with different port numbers like 4200.

@J-Rogmann
Contributor

Ahoi @ltrayanov
I tried all day to reproduce your issue, but streams are stable at 70 Mbps in v1.4.1.
sender:

SRT Data payload:          69.800 Mbps
SRT Data overhead:         1.824%
SRT Data lost:             0.000%
SRT Data rexmit overhead:  0.000%
SRT ACK overhead:          0.089%
SRT ACKACK overhead:       0.089%
SRT NAK overhead:          0.000%
===========================================
SRT overall overhead:      0.120%
SRT Retransmitted:         0.000% of original packets

receiver:

SRT Data payload:          69.800 Mbps
SRT Data overhead:         1.824%
SRT Data lost:             0.000%
SRT Data rexmit overhead:  0.000%
SRT ACK overhead:          0.089%
SRT ACKACK overhead:       0.089%
SRT NAK overhead:          0.000%
===========================================
SRT overall overhead:      0.120%
SRT Retransmitted:         0.000% of original packets

Can you please post the exact commands, including all parameters, that you are using?
Also, your example seems to have an error:

sender udp://@225.168.111.104:51011 srt://192.168.111.15:51050
receiver srt://:55555 udp://192.168.111.15:20000

The receiver should be changed to the following, according to the example above:

receiver srt://:51050 udp://192.168.111.15:20000

Maybe it was just a typo when writing this, but please double-check that you did not try to receive another SRT stream by mixing up port numbers on the receiver side.

Is there a chance you can create a packet capture of the stream on the receiver side? (Start the capture, then run srt-live-transmit for maybe 20 sec, stop srt-live-transmit, and stop the capture.)
To capture traffic, you can use Wireshark or its command-line tool tshark as follows (assuming the interface is eth0 on the receiver and the receiving port is 51050; feel free to change if using a different setup):

sudo tshark -i eth0 -f "udp port 51050" -s 1500 -w ./receiver.pcapng

Don't hesitate to get back to me in case you have any questions.
best regards
Justus

@ltrayanov
Author

ltrayanov commented Jan 27, 2020

@maxsharabayko what can explain the 10M lower bit rate? Please note that the sender was getting the same input TS in both tests. We are testing with the srt-live-transmit app.
As per your request we also tested with port 4200; same results.

I am uploading two more tests with different commits, same input stream was used for all of them.

One of the tests was done with srt 1.4.0 (Oct 4th) commit ef8ba13 and has results similar to our previous tests. We find only the snd-caller to rcv-listener combination to work (with the exception of high CPU load).

The other test was done with srt 1.4.0 (Oct 29) commit 0f8e93e, which fixes the high CPU usage. From that commit till now (srt 1.4.1) we see truncated sending rates with any input TS higher than 20M.

srt 1.4.0_ef8ba13.zip
srt 1.4.0_0f8e93.zip

@maxsharabayko
Collaborator

@ltrayanov Let me sum up the results so far.

  1. For SRT v1.4.0 and above, srt-live-transmit provides similar results: listener (sender) to caller (receiver) cannot sustain 51 Mbps of throughput.
  2. From srt 1.4.0 (Oct 29, commit 0f8e93e with epoll fixes resolving the high CPU usage) till now (srt 1.4.1) you see truncated sending rates with any input TS higher than 20M.

Changing ports to 4400, etc., does not help.

Which OS do you have?

I see two directions to proceed with this. @ltrayanov could you please:

  1. Check v.1.3.4 as well.
  2. Collect network captures with the latest SRT version, as suggested by @J-Rogmann above.

@maxsharabayko
Collaborator

maxsharabayko commented Jan 28, 2020

From srt 1.4.0_0f8e93.zip (srt 1.4.0, Oct 29, commit 0f8e93e with fixes for epoll):
sender-caller (stats-snd-call.csv): [chart omitted]

sender-listener (stats-snd-list.csv): [chart omitted]

@ltrayanov
Author

ltrayanov commented Jan 28, 2020

@maxsharabayko Our test machines are Ubuntu 18.04 LTS, and we are using the srt-live-transmit app.

Commands used for caller(sender) to listener(receiver) are:

./srt-live-transmit -v -statsout stats-snd-call.csv -pf csv -s 2000 udp://@225.168.111.11:4300?adapter=192.168.111.15 srt://192.168.111.16:4200?port=4100
./srt-live-transmit -v -statsout stats-rcv-list.csv -pf csv -s 2000 srt://:4200 udp://225.168.111.16:51011?adapter=192.168.111.16

Commands used for listener(sender) to caller(receiver) are:
./srt-live-transmit -v -statsout stats-snd-list.csv -pf csv -s 2000 udp://@225.168.111.11:4300?adapter=192.168.111.15 srt://:4200
./srt-live-transmit -v -statsout stats-rcv-call.csv -pf csv -s 2000 srt://192.168.111.15:4200?port=4100 udp://225.168.111.16:51011?adapter=192.168.111.16

Captures and csv files for tests with SRT 1.3.4, 1.4.0, 1.4.1 can be downloaded from:
https://www.myqnapcloud.com/smartshare/61434g737p84o286w4x264zc_6XaeBtZ

I also included a capture from udp://@225.168.111.11:4300 (test_ts.ts) for your reference.

@maxsharabayko
Collaborator

In the data for "SRT tag 1.4.1" I see good transmission results for both cases.
@ltrayanov Please confirm.

@ltrayanov
Author

ltrayanov commented Jan 30, 2020

@J-Rogmann The TS used in the test came out of a broadcast encoder with multicast output. The encoder used for that capture was a Radiant Communications VL4522; we had the same results with a Harmonic Electra 8100 and with TS muxes from Moto Sem v8 and Arris CAP1000. All streams are ATSC CBR MPEG-TS, and when analyzed they don't show any errors or abnormalities.

@ltrayanov
Author

@maxsharabayko The transmission results are good, but our issue is that the input stream has a 59.9M mux rate while the sender rate is ~48M.

@maxsharabayko
Collaborator

Probably I am confused already; in that case, sorry. All the dumps show 50 Mbps, and they don't include UDP. You say that the input was 60 Mbps.

I also included a capture from udp://@225.168.111.11:4300 (test_ts.ts) for your reference.

Could you then collect a network capture of that UDP stream as well?

@ltrayanov
Author

A pcap of the source TS was uploaded to the same location; the file name is mcast source.zip.

@J-Rogmann
Contributor

Hello @ltrayanov
I did some tests over the last few days and found the issue. It looks like the UDP input of srt-live-transmit is unstable at bitrates above 8 Mbps (depending on the PC it is running on). It can happen that the process doesn't read the UDP data fast enough. With SRT input to SRT/UDP output, srt-live-transmit can handle bitrates above 62 Mbps.
This is a limitation of the sample application srt-live-transmit, not a bug in the SRT library itself!
I cross-checked with some other applications, which do not have this issue. I could play back your .ts file as MPEG-TS and feed (for example) a Haivision Media Gateway with it, which flipped it to SRT. Another srt-live-transmit instance pulled that SRT stream and flipped it back to MPEG-TS, and VLC and ffplay could play it without any issue.

We have also seen some implementations piping NDI streams through SRT, which run at 150 Mbps on the input side without any issue.

We will update the srt-live-transmit documentation and point out this limitation more clearly. This sample application is meant for quick testing, and we did not consider such high bitrates. If there is time, we will have a look at this again and improve the UDP input of srt-live-transmit.

For now, please don't hesitate to continue with your SRT implementation and rest assured that SRT can handle >62 Mbps streams.

best regards,
Justus

@mbakholdina
Collaborator

@J-Rogmann, @maxsharabayko, please take a look at #762; it might be related to the current issue.

@J-Rogmann
Contributor

Hello @ltrayanov
We have confirmed the issue with high-bitrate MPEG-TS streams into the srt-live-transmit sample application and will look into it later.
For the time being, you might want to have a look at another test application, which gives better MPEG-TS to SRT performance.
When building SRT from source, try the option -DENABLE_TESTING=ON.
That will generate an executable named srt-test-live which, according to our tests, gives you way better MPEG-TS performance on the input side.
But please keep in mind that these sample applications are just there to get you familiar with the SRT protocol and are not meant for production. The SRT library itself can handle very high bitrates, even way above 150 Mbps, without issues.

best regards,
Justus

@ethouris
Collaborator

Just mind that srt-test-live is an application intended for developer testing; for example, it doesn't feature automatic reconnection.

@ltrayanov
Author

Thank you! We will perform a couple of tests.

@maxsharabayko
Collaborator

(Some development notes for future reference)
srt-live-transmit works in a non-blocking mode. Before reading data from a socket the app waits for a notification from epoll that reading is possible.

After some optimization during experiments, the reading loop of srt-live-transmit has the following logic (in pseudocode):

while (true) {
    // Wait for a notification from epoll that the source is readable.
    if (srt_epoll_wait(pollid, ...) < 0)
        continue;
    // Then drain everything that is currently readable.
    for (;;)
    {
        int res = src->Read(1456, pkt);
        if (res == 0)
            break;
        dst->Write(pkt);
    }
}

The app waits until there is something to read from the source (UDP). Then the reading subloop (for (;;)) reads all the data it can. If there is nothing to read, 0 is returned, the subloop breaks, and the app waits for the next notification from epoll.

Even this optimized version still has severe packet losses.

The only way to make it work with UDP input was to remove the epoll_wait completely:

while (true) {
    int res = src->Read(1456, pkt);
    if (res == 0)
        continue;
    dst->Write(pkt);
}

This proved to work. However, it is hard to imagine an easy way to fix this behavior of srt-live-transmit.
The current data flow and implementation logic of srt-live-transmit assume that every source and target supports epoll, and this is exposed in the main loop of the application itself.
We need either a blocking mode for UDP input, or to not wait for epoll in the case of UDP.

Reading from UDP socket:

  • blocking mode - no losses
  • non-blocking + ::select(..) - no losses
  • non-blocking + linux epoll_wait(..) - losses
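
To make the comparison concrete, here is a minimal sketch of the two loss-free variants using plain POSIX sockets (illustrative only; this is not the srt-live-transmit source):

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <sys/time.h>

// Variant 1: blocking socket - recv() sleeps until a datagram arrives,
// so the kernel buffer is drained as fast as the application can run.
ssize_t read_udp_blocking(int udp_sock, char* buf, size_t len)
{
    return recv(udp_sock, buf, len, 0);
}

// Variant 2: non-blocking socket + select() - wait up to 100 ms for
// readability, then read. This also showed no losses, unlike epoll_wait().
ssize_t read_udp_select(int udp_sock, char* buf, size_t len)
{
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(udp_sock, &rfds);
    timeval tv = {0, 100000}; // 100 ms timeout

    if (select(udp_sock + 1, &rfds, nullptr, nullptr, &tv) <= 0)
        return 0; // timeout or error: nothing read in this round

    return recv(udp_sock, buf, len, 0);
}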

@maxsharabayko
Collaborator

A workaround for the existing srt-live-transmit implementation was found: increase UDP Receiver Buffer Size.

The UDP streaming you have (@ltrayanov) has an average of 60 Mbps, with peaks up to ~110 Mbps.
[bitrate chart omitted]

In that case epoll turned out to be not fast enough, and packets accumulated in the UDP socket's receive buffer. Because that buffer is small by default, during these peaks there was not enough space to store the packets, and they were eventually dropped on the UDP input.

Increasing the UDP buffer helped. First, the system's maximum buffer size should be increased as follows:

$ cat /proc/sys/net/core/rmem_max
212992
$ sudo sysctl -w net.core.rmem_max=26214400
net.core.rmem_max = 26214400
$ cat /proc/sys/net/core/rmem_max
26214400

Then the desired buffer size (in bytes) can be specified on the UDP socket (after PR #1152 is merged).
For example, 64 MB:

./srt-live-transmit "udp://:4200?rcvbuf=67108864" srt://192.168.0.10:4200 -v
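
For reference, the rcvbuf option presumably maps to the standard SO_RCVBUF socket option. A minimal sketch of setting it directly on a plain UDP socket (illustrative only, not the srt-live-transmit code):

#include <cstdio>
#include <sys/socket.h>

// Request a larger receive buffer on an already created UDP socket.
// Note: the kernel silently caps the effective size at net.core.rmem_max,
// which is why the sysctl above has to be raised first. On Linux the value
// reported back by getsockopt() is roughly double the requested size.
bool set_udp_rcvbuf(int udp_sock, int bytes)
{
    if (setsockopt(udp_sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) != 0)
    {
        perror("setsockopt(SO_RCVBUF)");
        return false;
    }

    int actual = 0;
    socklen_t len = sizeof(actual);
    getsockopt(udp_sock, SOL_SOCKET, SO_RCVBUF, &actual, &len);
    printf("requested %d bytes, kernel reports %d bytes\n", bytes, actual);
    return true;
}

// Usage matching the example above: set_udp_rcvbuf(sock, 67108864); // 64 MB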

@maxsharabayko
Collaborator

The remaining issue is the reported reduced performance of the receiver in listener mode, while the same receiving in caller mode works well.

The current guess is that the listener socket that remains in the listening state after the connection is established (accepted) may impact the performance of the listener-receiver.
Closing the listener socket after a connection is established and keeping only the handle of the accepted socket might be a way to confirm this guess.
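
A sketch of how that could be checked with the public SRT C API (assumes srt_startup() has already been called; illustrative only, not the actual srt-live-transmit code):

#include <srt/srt.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <cstring>

// Accept a single connection, then close the listening socket and keep
// only the accepted socket, to see whether the lingering listener was
// affecting the listener-receiver performance.
SRTSOCKET accept_and_close_listener(int port)
{
    sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    sa.sin_addr.s_addr = INADDR_ANY;

    SRTSOCKET lsn = srt_create_socket();
    srt_bind(lsn, (sockaddr*)&sa, sizeof(sa));
    srt_listen(lsn, 1);

    sockaddr_storage peer;
    int peerlen = sizeof(peer);
    SRTSOCKET conn = srt_accept(lsn, (sockaddr*)&peer, &peerlen);

    srt_close(lsn); // drop the listener right after accepting
    return conn;    // receive on 'conn' and compare the throughput
}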

@maxsharabayko maxsharabayko added [core] Area: Changes in SRT library core and removed Status: Completed labels Apr 15, 2020
@maxsharabayko
Collaborator

srt-live-transmit closes the listening socket (apps/transmitmedia.cpp:247), so the guess above is disproved.

@maxsharabayko maxsharabayko modified the milestones: v1.5.0, v1.5.1 Jun 25, 2020
@mbakholdina mbakholdina modified the milestones: v1.5.1, Parking Lot Apr 12, 2021