
15:31:15.563681/SRT:TsbPd!W:SRT.br: RCV-DROPPED packet delay=550ms #1630

Closed
nyl0330 opened this issue Oct 28, 2020 · 16 comments

Comments


nyl0330 commented Oct 28, 2020

15:31:15.563681/SRT:TsbPd!W:SRT.br: RCV-DROPPED packet delay=550ms
15:31:15.873286/SRT:TsbPd!W:SRT.br: RCV-DROPPED packet delay=639ms
15:31:16.021725/SRT:TsbPd!W:SRT.br: RCV-DROPPED packet delay=705ms
15:31:16.034575/SRT:TsbPd!W:SRT.br: RCV-DROPPED packet delay=715ms
15:31:16.103417/SRT:TsbPd!W:SRT.br: RCV-DROPPED packet delay=649ms
15:31:16.120738/SRT:TsbPd!W:SRT.br: RCV-DROPPED packet delay=660ms
15:31:16.174797/SRT:TsbPd!W:SRT.br: RCV-DROPPED packet delay=676ms
15:31:16.191686/SRT:TsbPd!W:SRT.br: RCV-DROPPED packet delay=690ms
15:31:16.201825/SRT:TsbPd!W:SRT.br: RCV-DROPPED packet delay=699ms
15:31:16.220559/SRT:TsbPd!W:SRT.br: RCV-DROPPED packet delay=716ms

Why does this happen?

nyl0330 added the Type: Question label on Oct 28, 2020
mbakholdina (Collaborator) commented:

Hi @nyl0330,
Could you please provide more details on your test setup, the network conditions (e.g. RTT, packet loss on the link), and the SRT settings you use?
What exactly is the question?


nyl0330 commented Oct 30, 2020

Hi @mbakholdina,

I'm streaming TS over 5G. The RTT fluctuates greatly: generally about 25 ms, but sometimes around 500 ms.
The latency is set to 300 ms. When the RTT reaches 500 ms, the packet loss rate increases and the picture stutters.
Is there a problem with my latency setting? What other parameters do I need to set to avoid this problem?

ethouris (Collaborator) commented:

The latency definitely cannot be less than the RTT, and with greater RTT variance you need even more. The default latency of 120 ms assumes an RTT of no more than 30 ms on average, with fairly low fluctuation. If you can have a case of 500 ms RTT, I think you should try a latency of at least 1 s and see whether that looks good. High RTT variance is quite a problem: if you want your video to look good regardless of the network conditions, it should be configured for the worst case. Normally we recommend a latency of 4 × RTT, but if such a high RTT happens only once in a while and very briefly, you could probably risk a latency lower than 2 s, counting on the fact that if this causes packet loss, it will be recovered quickly.
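To illustrate, a minimal sketch of raising the latency through libsrt's C API (sock is a hypothetical, already-created SRT socket; srt_startup() and connection setup are omitted, and the option must be set before the connection is established):

#include <srt/srt.h>

void configure_latency(SRTSOCKET sock)
{
    /* Receiver-side buffering latency in milliseconds. The 1000 ms
       value follows the suggestion above for RTT spikes of ~500 ms;
       it is not a universal recommendation. */
    int latency_ms = 1000;
    srt_setsockopt(sock, 0, SRTO_RCVLATENCY, &latency_ms, sizeof latency_ms);
}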

mbakholdina self-assigned this on Oct 30, 2020
mbakholdina (Collaborator) commented:

Hi @nyl0330,
Do you have any updates on the ticket? Has increasing the latency helped?


nyl0330 commented Nov 11, 2020

Thank you for your reply!

Do I understand correctly that with SRT the RTT may be somewhat large, but it should be relatively stable? Otherwise the latency setting is of limited use and packet loss cannot be ruled out.

Another problem:

14:33:47.244854/SRT:TsbPd!W:SRT.br: RCV-DROPPED packet delay=0ms
14:33:47.280548/SRT:TsbPd!W:SRT.br: RCV-DROPPED packet delay=1ms
14:34:29.592825/SRT:TsbPd!W:SRT.br: RCV-DROPPED packet delay=0ms
14:34:29.611813/SRT:TsbPd!W:SRT.br: RCV-DROPPED packet delay=0ms

When these messages are reported, have packets actually been lost?
What does "delay=0ms" mean?

mbakholdina (Collaborator) commented:

Hi @nyl0330,

The RTT does not necessarily have to be stable; it depends on the network conditions, the use case and the cross traffic on the link. SRT is designed to work in both cases, but for highly variable networks there is a trade-off: either you choose a low-latency configuration and accept that there will be losses whenever the RTT spikes, or you set the latency high enough for SRT to have time to retransmit and recover packets in such cases.

The recommended latency is 4 times the RTT. You can experiment with 2.5, 3, 3.5 and 4 × RTT, where RTT = 500 ms (your maximum observed value), and see how many dropped packets you get. Alternatively, you can set a lower latency, depending on how critical the drops are for your task.

Regarding the logs: this message indicates that packets were dropped by SRT, meaning the latency isn't high enough to recover all of them. The packet delay is internal library information giving insight into the delay between the time the packet was given by SRT to the upstream application and its target delivery time; it is mostly there for debugging purposes. A good indicator: the more of these messages you see, the more packets are being dropped.

Additionally, you can consider looking into the SRT statistic pktRcvDrop.
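If it helps, a minimal sketch of reading that statistic through libsrt's C API (sock is a hypothetical, already-connected SRT socket; error handling omitted):

#include <stdio.h>
#include <srt/srt.h>

void print_rcv_drops(SRTSOCKET sock)
{
    SRT_TRACEBSTATS stats;
    /* Third argument: non-zero clears the interval counters after reading. */
    if (srt_bstats(sock, &stats, 0) != SRT_ERROR)
        printf("packets dropped by receiver: %d\n", stats.pktRcvDrop);
}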

mbakholdina (Collaborator) commented:

I've also created a feature request for improving the log format a bit. See #1659.


nyl0330 commented Nov 21, 2020

I have another question: can multiple SRT clients connect to one server? When I test multiple SRT clients sending TS streams to one SRT server at the same time, the following messages appear:

16:08:52.083330/SRT:RcvQ:w1!W:SRT.qr: @259645361:No room to store incoming packet: offset=2725 avail=0 ack.seq=1223164144 pkt.seq=1223166869 rcv-remain=8191 drift=0
16:08:52.085522/SRT:RcvQ:w1!W:SRT.qr: @259645361:No room to store incoming packet: offset=2726 avail=0 ack.seq=1223164144 pkt.seq=1223166870 rcv-remain=8191 drift=0
16:08:52.085762/SRT:RcvQ:w1!W:SRT.qr: @259645361:No room to store incoming packet: offset=2727 avail=0 ack.seq=1223164144 pkt.seq=1223166871 rcv-remain=8191 drift=0


nyl0330 commented Nov 21, 2020

Just as with TCP programming, can the service (i.e. the listener side) be implemented so that it accepts multiple clients and avoids the following problem:

16:08:52.083256/SRT:RcvQ:w1!W:SRT.qr: @259645361:No room to store incoming packet: offset=2724 avail=0 ack.seq=1223164144 pkt.seq=1223166868 rcv-remain=8191 drift=0
16:08:52.083330/SRT:RcvQ:w1!W:SRT.qr: @259645361:No room to store incoming packet: offset=2725 avail=0 ack.seq=1223164144 pkt.seq=1223166869 rcv-remain=8191 drift=0

maxsharabayko (Collaborator) commented:

Hi @nyl0330,

SRT supports multiplexing several connections over one UDP path, meaning that you can establish any number of SRT connections over the same <src_udp_ip>:<src_udp_port> -> <dst_udp_ip>:<dst_udp_port>. SRT distinguishes packets belonging to a particular SRT connection by the SRT Socket ID field of the SRT packet header. See the "Destination Socket ID" field here.

All the application has to do is create another SRT socket on the caller side and bind it to the same IP and port (see srt_bind()). On the listener side, just keep accepting new connections (srt_accept()) and do not stop listening (i.e. do not close the listening socket after the first connection has been accepted). A minimal accept loop is sketched below.
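For illustration, a sketch assuming libsrt's C API (srt_startup(), address setup and per-connection handling are omitted):

#include <netinet/in.h>
#include <srt/srt.h>

void serve(const struct sockaddr_in *listen_addr)
{
    SRTSOCKET ls = srt_create_socket();
    srt_bind(ls, (const struct sockaddr *)listen_addr, sizeof *listen_addr);
    srt_listen(ls, 5);   /* backlog of pending connections */

    for (;;)
    {
        struct sockaddr_storage peer;
        int peer_len = sizeof peer;
        /* Each accepted socket is an independent SRT connection;
           the listening socket ls stays open for further callers. */
        SRTSOCKET conn = srt_accept(ls, (struct sockaddr *)&peer, &peer_len);
        if (conn == SRT_INVALID_SOCK)
            break;
        /* hand conn over to a worker thread or an epoll set (omitted) */
    }
    srt_close(ls);
}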

When I test multiple SRT clients sending TS streams to one SRT server at the same time, the following messages appear:

It is not clear whether you use a single SRT connection for all your TS senders or one connection per sender.
"No room to store incoming packet" means that there is no room in the receiver buffer to store a newly arrived packet.
Please see these guidelines on receiver buffer size configuration.
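As a hedged sketch under those guidelines (the formula below is an approximation, not an exact quote; the numbers are illustrative for a 20 Mbps stream with 500 ms latency and 500 ms worst-case RTT, and the option must be set before connecting):

#include <srt/srt.h>

void configure_rcvbuf(SRTSOCKET sock)
{
    /* Roughly: buffer >= bitrate/8 * (latency + RTT/2), in bytes. */
    int rcvbuf_bytes = (20000000 / 8) * (500 + 250) / 1000;   /* ~1.9 MB */
    srt_setsockopt(sock, 0, SRTO_RCVBUF, &rcvbuf_bytes, sizeof rcvbuf_bytes);
    /* If the computed size exceeds SRTO_FC (the flow-control window, in
       packets) times the payload size, SRTO_FC must be raised as well. */
}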


nyl0330 commented Nov 26, 2020

I connect one listener with two callers at the same time. Do I need to adjust SRTO_RCVBUF to avoid "No room to store incoming packet"?


nyl0330 commented Nov 26, 2020

I tried adjusting SRTO_RCVBUF, but "No room to store incoming packet" still appeared.

maxsharabayko (Collaborator) commented:

No room to store incoming packet:
offset=2724 avail=0 ack.seq=1223164144 pkt.seq=1223166868 rcv-remain=8191 drift=0

  • offset=2724 packets - offset of the newly arrived packet from the very first packet in the receiver buffer.
  • avail=0 - the available space in the buffer to store new packets (buffer size - acknowledged packets).
  • ack.seq=1223164144 - last acknowledged sequence number
  • pkt.seq=1223166868 - the sequence number of the newly arrived packet.
  • rcv-remain=8191 - the number of acknowledged packets that could be read.
  • drift=0 - estimated clock drift.

From the log message you've shared, all packets in the receiver's buffer are acknowledged and potentially ready to be read.
When TSBPD mode is enabled (the default for live mode), a buffering delay is also applied.

Possible reasons for this "no room to store" situation:

  • SRT signals to the application that it can read incoming packets, but the application does not read, thus accumulating packets in SRT's receiver buffer.
  • SRT does not notify the application that it can read due to the buffering latency.

@nyl0330
Since you say you don't see this issue with one SRT client, is it possible that your application somehow misses the "available to read" notifications from SRT? Which synchronization mode do you use: blocking or non-blocking?
What is the bitrate of your stream?

Potentially, this log message could be extended to show whether there are packets actually ready to be read after the buffering latency.
Another option is to collect debug logs.
Could you please collect CSV statistics and network captures? Ideally from both the sender and receiver sides, but at least from the receiver side. That way we could check the bitrate and the receiver buffer fullness.


nyl0330 commented Dec 3, 2020

Thank you for your reply.
The flow is: SRT receives a packet, then UDP sends it out. The bitrate is 20 Mbps, latency = 500 ms.
My code is roughly like this:

#include <pthread.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <srt/srt.h>

/* their_fd (connected SRT socket), udp_fd and dest_addr are set up elsewhere. */
extern SRTSOCKET their_fd;
extern int udp_fd;
extern struct sockaddr_in dest_addr;

int flag;                 /* 0: buf is free, 1: buf holds a packet to forward */
int n;                    /* size of the last received packet */
char buf[1500];           /* large enough for SRT's default 1456-byte payload */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *srt_recv(void *arg)
{
    while (1)
    {
        pthread_mutex_lock(&mutex);
        if (flag == 0)
        {
            n = srt_recvmsg(their_fd, buf, sizeof buf);
            if (n > 0)
                flag = 1;
        }
        pthread_mutex_unlock(&mutex);
    }
    return NULL;
}

void *udp_send(void *arg)
{
    while (1)
    {
        pthread_mutex_lock(&mutex);
        if (flag == 1)
        {
            sendto(udp_fd, buf, n, 0,
                   (struct sockaddr *)&dest_addr, sizeof dest_addr);
            flag = 0;
        }
        pthread_mutex_unlock(&mutex);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, srt_recv, NULL); /* thread 1 */
    pthread_create(&t2, NULL, udp_send, NULL); /* thread 2 */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

maxsharabayko (Collaborator) commented:

From your code sample, I assume srt_recv and udp_send run in parallel threads. Depending on how often these functions get to run, receiving might not be fast enough, causing packets to accumulate in the receiver buffer.

Consider sending to UDP right after each successful reception and check whether you still see the message, to validate the assumption. A single-threaded variant is sketched below.
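For illustration, a minimal single-threaded variant (a sketch reusing the hypothetical their_fd, udp_fd and dest_addr globals from the code sample above):

#include <netinet/in.h>
#include <sys/socket.h>
#include <srt/srt.h>

extern SRTSOCKET their_fd;
extern int udp_fd;
extern struct sockaddr_in dest_addr;

void forward_loop(void)
{
    char buf[1500];
    for (;;)
    {
        int n = srt_recvmsg(their_fd, buf, sizeof buf);
        if (n <= 0)
            break;   /* connection closed or error */
        /* Forwarding immediately keeps the SRT receiver buffer drained
           as fast as packets arrive. */
        sendto(udp_fd, buf, n, 0,
               (struct sockaddr *)&dest_addr, sizeof dest_addr);
    }
}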

mbakholdina (Collaborator) commented:

Closing due to inactivity. Please feel free to reopen the ticket in case of further questions.
