TL Packet Drop suppresses MAXBW limitation #713

Closed
maxsharabayko opened this issue May 30, 2019 · 11 comments · Fixed by #2232
Labels: [core] · Priority: High · Type: Bug

@maxsharabayko (Collaborator) commented May 30, 2019

When TL packet drop is turned on and the input bitrate exceeds the maxbw limitation, the MAXBW limit is suppressed. This can also happen when the amount of retransmission is high (see #638).
By default maxbw is limited to 1 Gbps (30 Mbps prior to v1.3.3).
The latest SRT version at the moment of testing was 6f6b76b.
URI query for both the receiver and the sender:
transtype=live&messageapi=1&payloadsize=1456&rcvbuf=125000000&sndbuf=125000000
Link RTT is 0.24 ms (local 1 Gbps switch).

The sender sends packets at 900 Mbps and nothing stops it. But it looks like the receiver is acknowledging packets at a slower rate, so the sender's buffer gets full.

E.g. this query actually limits the sending rate to 30 Mbps:
transtype=file&congestion=live&messageapi=1&payloadsize=1456&rcvbuf=125000000&sndbuf=125000000
The difference is that TL packet drop is turned off by transtype=file (there is no URI query socket option to turn off TSBPD directly).
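
For reference, the same configuration can also be expressed through the C API rather than the URI query. A minimal sketch, assuming the current public calls srt_create_socket()/srt_setsockflag() and the option names quoted in this thread (values illustrative, error handling omitted):

```cpp
#include <srt/srt.h>   // header path may also be plain "srt.h" depending on the install
#include <cstdint>

// Sketch: keep the live profile, but switch TSBPD and TL packet drop off explicitly
// and cap the sending rate. SRTO_MAXBW is expressed in bytes per second.
SRTSOCKET make_capped_live_socket()
{
    srt_startup();
    SRTSOCKET s = srt_create_socket();

    SRT_TRANSTYPE tt = SRTT_LIVE;                         // set the profile first...
    srt_setsockflag(s, SRTO_TRANSTYPE, &tt, sizeof tt);

    bool off = false;                                     // ...then override the derived options
    srt_setsockflag(s, SRTO_TSBPDMODE, &off, sizeof off);
    srt_setsockflag(s, SRTO_TLPKTDROP, &off, sizeof off);

    int64_t maxbw = 3750000;                              // 3.75 MB/s == 30 Mbps
    srt_setsockflag(s, SRTO_MAXBW, &maxbw, sizeof maxbw);
    return s;
}
```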

Sender-side packets. Notice that packets start dropping only at the end, and the sender starts printing error messages: SND-DROPPED 893 packets - lost delaying for 1032ms.
win-900Mbps-test-04-snd-packets

The packets start dropping when the sender's available buffer size goes down to 0:
win-900Mbps-test-04-snd-availbuffer

Meanwhile, the time interval between packets is 400 μs, which corresponds to roughly 30 Mbps:
win-900Mbps-test-04-snd-pktsendperiod
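
A quick back-of-the-envelope check (my own arithmetic, assuming one 1456-byte payload per packet and ignoring SRT/UDP/IP header overhead):

```cpp
#include <cstdio>

int main()
{
    const double payload_bits = 1456 * 8.0;
    // inter-packet interval needed for a given bitrate = payload bits / bitrate
    std::printf("30 Mbps  -> %.1f us between packets\n", payload_bits / 30e6 * 1e6);   // ~388 us (~400 us observed)
    std::printf("900 Mbps -> %.1f us between packets\n", payload_bits / 900e6 * 1e6);  // ~12.9 us
}
```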

Receiving rate:
win-900Mbps-test-04-rcv-packets

The receiver's buffer size, although set to 125 MB (1 Gbit), is actually limited by the FC (flow control window) size, so with the default FC of 25600 packets it holds only about 300 Mbit (25600 × 1456 B × 8 ≈ 298 Mbit; refer to #700).
win-900Mbps-test-04-rcv-availbuffer
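
As a side note, a hedged sketch of widening that window through the C API, assuming SRTO_FC (packets) and SRTO_RCVBUF (bytes) behave as documented; whether that is advisable at 900 Mbps is a separate question:

```cpp
#include <srt/srt.h>
#include <cstdint>

// Sketch: the receiver buffer is effectively bounded by the flow-control window
// (SRTO_FC, counted in packets), so a bigger rcvbuf only helps if FC is raised too.
// 125000000 B / 1456 B per packet ~= 85852 packets.
void widen_receiver_window(SRTSOCKET s)
{
    int fc = 86000;                                    // packets; roughly matches the rcvbuf below
    srt_setsockflag(s, SRTO_FC, &fc, sizeof fc);

    int rcvbuf = 125000000;                            // bytes, as in the URI query above
    srt_setsockflag(s, SRTO_RCVBUF, &rcvbuf, sizeof rcvbuf);
}
```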

In the end the receiver closes the connection due to:
SRT:RcvQ:worker*E:SRT.c: %915948307:SEQUENCE DISCREPANCY, reception no longer possible. REQUESTING TO CLOSE.

Setting maxbw=125000000 (bytes per second, i.e. 1 Gbps) solves the problem, although in the described setup the receiver's buffer is too small, so the connection is still closed with the error messages:

SRT:RcvQ:worker*E:SRT.c: %488590918:No room to store incoming packet: offset=0 avail=0 ack.seq=2032363150 pkt.seq=2032363150 rcv-remain=25599
SRT:RcvQ:worker*E:SRT.c: %488590918:No room to store incoming packet: offset=1 avail=0 ack.seq=2032363150 pkt.seq=2032363151 rcv-remain=25599

But notice the receiving rate is 900 Mbps:
win-900Mbps-test-05-rcv-rate

Whereas in the case when maxbw = 30 Mbps, the receiving rate is actually lower:
win-900Mbps-test-04-rcv-rate

Also refer to #553, where an almost identical experiment was conducted.

maxsharabayko added the Type: Bug and [core] labels on May 30, 2019
maxsharabayko added this to the v1.3.4 milestone on May 30, 2019
@jeandube (Collaborator)

There is an option (SRTO_TSBPDMODE) to turn TSBPD on/off. You probably mean there is no URI query option.

@jeandube (Collaborator)

If all issues were presented that way, I can't imagine where SRT would be.

@maxsharabayko (Collaborator, Author)

@jeandube

There is an option (SRTO_TSBPDMODE) to turn TSBPD on/off. You probably mean there is no URI query option.

Yes. I've improved the description. Thank you, Jean.

If all issues were presented that way, I can't imagine where SRT would be.

I am looking forward to having a cool automated toolset for such cool things. 😄

@alexpokotilo (Contributor)

"While the time interval between the packets is 400 μs, that corresponds to 30 Mbps."
it should be 12.9422222222μs for 900MBps if we send one packet at time. Do you mean SRT sends ~30packets each time ?

Another question.
If I use live mode (TSBPD is on) and set maxbw to a value greater than the stream's maximum sending rate, just to limit retransmission traffic, can I expect maxbw to still work?
Is this problem reproduced only if the actual stream bandwidth is > 30 Mbps, or may lower limits be suppressed too?
When and how does maxbw work at all? :) I had just relaxed, thinking I got your explanation from #638, before I read this. :)

@maxsharabayko (Collaborator, Author)

@alexpokotilo

Is this problem reproduced only if the actual stream bandwidth is > 30 Mbps, or may lower limits be suppressed too?

Yes, the problem is in the "abnormal" usage scenario, when the actual input bitrate exceeds the maximum bandwidth value. In this case it looks like the packet origin time somehow gains priority over the desired time intervals between packets.
In the normal usage scenario (input bitrate < maxbw) maxbw should work, so no worries. 🙂
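
To illustrate that normal scenario with a sketch (values illustrative; the maxbw = 0 "relative to input rate" mode is mentioned as documented behavior, not something verified in this issue):

```cpp
#include <srt/srt.h>
#include <cstdint>

// Sketch: the stream runs well below the cap (e.g. ~60 Mbps vs a 100 Mbps cap),
// so MAXBW only bounds retransmission bursts and is not expected to be suppressed.
void cap_retransmission_headroom(SRTSOCKET s)
{
    int64_t maxbw = 12500000;                          // bytes/s: 12.5 MB/s == 100 Mbps
    srt_setsockflag(s, SRTO_MAXBW, &maxbw, sizeof maxbw);

    // Alternative "relative" mode: maxbw = 0 derives the cap from SRTO_INPUTBW
    // (or its internal estimate) plus SRTO_OHEADBW percent of headroom.
}
```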

@maxsharabayko (Collaborator, Author)

Further investigation shows that the waiting gets interrupted from CSndUList::update and CSndUList::insert.
As we know, CTimer::interrupt resets the target time, unlike plain CTimer::tick.
Looking further into this.

@maxsharabayko (Collaborator, Author)

Most probably NeedDrop(Ref(bCongestion)) decides to drop packets when TSBPD is enabled.
Because of that, the sending is rescheduled by calling m_pSndQueue->m_pSndUList->update(this, CSndUList::rescheduleIf(bCongestion)) with bCongestion=true. And that triggers sending the packet immediately, instead of letting it wait the scheduled 400 μs.
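
To make that path easier to follow, here is a toy reconstruction of the flow described above; the functions are stand-ins for the quoted SRT internals, not the actual sources:

```cpp
#include <cstdio>

// Stand-in for NeedDrop(Ref(bCongestion)): too-late packets are dropped
// and the congestion flag is raised.
static bool need_drop(bool& bCongestion)
{
    bCongestion = true;
    return true;
}

// Stand-in for CSndUList::update(..., rescheduleIf(bCongestion)).
static void reschedule(bool immediately)
{
    if (immediately)
        std::puts("send time reset to 'now' - the MAXBW pacing interval is skipped");
    else
        std::puts("keep the scheduled send time (e.g. 400 us later)");
}

int main()
{
    bool bCongestion = false;
    if (need_drop(bCongestion))     // TL packet drop enabled and latency budget exceeded
        reschedule(bCongestion);    // bCongestion == true -> next packet goes out immediately
}
```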

@ethouris (Collaborator) commented Aug 7, 2019

Most probably NeedDrop(Ref(bCongestion)) decides to drop packets when TSBPD is enabled,

Dropping packets is controlled by the SRTO_TLPKTDROP option. This has nothing to do with TSBPD; it is only that without that option it will work more clumsily. Note that SRTO_TRANSTYPE controls all these options.

@maxsharabayko (Collaborator, Author)

Yes, I was thinking of TSBPD and the TL packet drop mechanism as a single thing. In fact this is about TL packet drop.

maxsharabayko changed the title from "TSBPD suppresses MAXBW limitation" to "TL Packet Drop suppresses MAXBW limitation" on Aug 7, 2019
maxsharabayko removed this from the v1.3.4 milestone on Aug 9, 2019
@maxsharabayko (Collaborator, Author)

Update. The TL packet drop mechanism triggers sending of the next scheduled packet once
timespan_ms > (m_iPeerTsbPdDelay_ms / 2)
where timespan_ms is the time difference between the very first packet and the latest packet in the sender's buffer.
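
In other words (a simplified stand-in, not the actual SRT code), the trigger condition boils down to:

```cpp
// Drop too-late packets (and reschedule sending) once the sender's buffer spans
// more than half of the peer's TSBPD delay.
bool should_drop_too_late(int timespan_ms, int peer_tsbpd_delay_ms)
{
    // timespan_ms: difference between the oldest and the newest packet
    // currently held in the sender's buffer
    return timespan_ms > peer_tsbpd_delay_ms / 2;
}
```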

maxsharabayko added this to the v1.4.2 milestone on Nov 4, 2019
maxsharabayko modified the milestones: v1.5.0, v1.6.0 on Dec 27, 2019
@stoneljp

When I use SRT live and set transtype=0, tsbpdmode=false, tlpktdrop=0, it still has SND-DROP and then the socket closes.
After changing the parameters to the above, I can send a 60 Mbps stream perfectly: no mosaic, no socket closing.
