How can I configure an SRT socket without loss #1579
I have a log like this; it seems that the flag is on. src/srtcore/core.cpp l=2744 area=processSrtMsg_HSRSP msg=: HSRSP/rcv: Ve
For that you'd better use the file mode. There are multiple options affected, so you'd better use the combined transtype option that sets them all at once. Also, for TCP proxying there is also an application for that.
Thanks for the suggestions. I wonder whether the only difference between file mode and live mode is the initial configuration for dropping messages. What can I do if I have to use live mode?
No. The following options are affected by it: tsbpdmode, tlpktdrop, linger, messageapi, nakreport and payloadsize.
What exactly is your use case? For me, making a TCP proxy by using the SRT live mode doesn't make any sense; at least, a solution that relies on blindly transferring the data between two TCP endpoints won't work correctly with live mode.
I have a test like this:
The other options (nakreport, payloadsize) I have not read about. I will test them later.
I can only recommend that you read the sources of the example applications.
Hi, I have tested the options in live mode (livecc). I turned off tsbpdmode and nakreport, and set linger as in file mode. The only differing option is messageapi, which is true, as I cannot set messageapi to false in livecc mode. The test result still shows packet loss. It seems that livecc cannot work like file mode. Do you have any idea what is causing this? By the way, I want to use live mode because it works well for low latency.
Hi @byrcoder
Please define how you measure the loss. Do you use SRT statistics or your own? Please note that in SRT, if a packet is lost, it is retransmitted until delivered or until it is decided to be dropped due to latency constraints. Refer to pktRcvDropTotal and pktRcvLossTotal. You should also disable TL Packet Drop to prevent SRT from dropping packets. Also, you should not close the connection (srt_close) until all data is delivered. Linger can help, but you can also check it on your own using srt_getsndbuffer(..). See how it is used in the example applications.
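For illustration, a minimal sketch of that drain-before-close idea, assuming libsrt's srt_getsndbuffer and srt_close (error handling omitted):

```c
#include <srt/srt.h>   // libsrt public API
#include <unistd.h>    // usleep

/* Wait until the sender buffer is fully delivered before closing,
   so that srt_close() does not discard still-pending data. */
void close_when_drained(SRTSOCKET sock)
{
    size_t blocks = 0, bytes = 0;
    /* srt_getsndbuffer reports how much data still sits in the sender buffer */
    while (srt_getsndbuffer(sock, &blocks, &bytes) != SRT_ERROR && bytes > 0)
    {
        usleep(10000); /* poll every 10 ms until everything is acknowledged */
        blocks = bytes = 0;
    }
    srt_close(sock);
}
```

The polling interval here is arbitrary; linger, if enabled, achieves a similar effect implicitly.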
The live mode has several restrictions:
This means that the stream you are sending over SRT in live mode should already be a live stream, that is, the time distances between consecutive packets should be more or less even, with very little tolerance. If you are reading the data on the fly from a TCP connection, you should also have some intermediate module that makes sure that the times between consecutive packets are even. You can turn off conditional dropping of packets by turning off tlpktdrop. Delivery times of a stream received over TCP aren't something you can rely on, as TCP prefers reliability over timely delivery and may introduce unnecessary delays. SRT in live mode differs from it mainly in that it doesn't do any sending speed control, except for the overhead, if configured.
Do you use SRT statistics or your own? Please define how you measure the loss.
Yes, the test TCP source works as a live stream. The data is received from TCP endlessly unless the SRT send buffer is full. When the SRT buffer is full, the TCP side does not read until SRT is ready to send again. I had turned off the tlpktdrop option. And it is expected that the connection pauses when SRT sending stops or the SRT buffer is full.
I'm not exactly following you. Please describe your configuration again. Do you have data transfer from SRT to TCP only, or from TCP to SRT as well? If you have a case of reading from TCP and sending these data over SRT, then you need some intermediate buffer that is filled all the time whenever data come over TCP. A separate procedure should read from this buffer in portions of at most 1316 bytes (by default; this size can be increased up to 1456 by payloadsize). If you are talking about having the SRT sender buffer full, then there's something wrong, because this shall not happen in live mode. The sender buffer in SRT actually plays two roles: it holds packets scheduled for sending but not yet sent (the Schedule Window), and it keeps packets already sent but not yet acknowledged, in case they need retransmission (the Flight Window).
Both of these windows also have their assigned sizes.
So, in File mode the value of the Schedule Window may grow up to the buffer capacity decreased by the Flight Window. In Live mode, however, the Schedule Window shall be mostly constantly 1, at worst 2, and only exceptionally may it grow a little more, in the case when you lately had quite a large portion of lost packets that have to be retransmitted and you have configured a bandwidth limit on the socket; this way the regular packets get extra delayed, so the Schedule Window may temporarily grow.

As the Flight Window is usually about 1/10 the size of the sender buffer, the situation where you make the SRT sender buffer full in live mode simply means that you are sending faster than your network allows, and the network will respond with packet drops. In other words, the speed with which you are sending packets over SRT in live mode must be exactly the desired speed of sending the packets over the network, and not just average in general, but an average counted over at most 16 packets (once per 16 packets a "packet pair" is sent, when two consecutive packets are sent without any delay between them, which is part of the procedure measuring the upper bandwidth limit).

Let's say you send packets (or, say, "portions of data") of 1316 bytes, and T1 through T16 are the times at which 16 consecutive packets were sent. In this case the value calculated as 16*1316*8/(T16-T1) must come out equal to your current live stream bitrate. This must hold true for any range of packets, also for a range of only 4 packets.

That's exactly the difference between File and Live mode. In File mode you can call the sending function at whatever speed you like; the flow control is handled internally.
Thanks for your detailed answer. The data transfer is from a TCP server to an SRT server. The TCP server and SRT server work as follows:
The simple code is as follows:
So the maximum length of the intermediate buffer is 2048 bytes. As the TCP input is a live stream (something like RTMP), even though loss happens, why wouldn't retransmission work for SRT in live mode?
Here is exactly the problem:
This means that when you send 2048 bytes from the live TCP stream at once, then you'd call the sending function twice in a row, without the required delay between those calls. Maintaining delays between calls to the sending function is something you would have to implement yourself.

OTOH, if it is so, as you said, that you are sending this over SRT in order to archive it, and moreover you are setting TSBPD mode off, which means that timing isn't important for your transmission, I still think that "file mode" is exactly what you need. So first of all you need this option: the transtype set to file.
This also takes care of the TLPKTDROP and TSBPDMODE options the same way as you set them, so you can remove these from the list. Note also that setting this value also sets messageapi to false. I also think that you are trying to do something similar to the TCP proxying application mentioned before.
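Under the assumption that SRTO_TRANSTYPE is the option the reply refers to, the file-mode setup with libsrt would look roughly like this (error checks omitted):

```c
#include <srt/srt.h>

/* Switch a socket to file mode before connecting. This single option
   adjusts tsbpdmode, tlpktdrop, messageapi, nakreport, payloadsize
   and linger to file-transfer defaults. */
void configure_file_mode(SRTSOCKET sock)
{
    SRT_TRANSTYPE tt = SRTT_FILE;
    srt_setsockflag(sock, SRTO_TRANSTYPE, &tt, sizeof tt);
}
```

This must be set on the socket before the connection is established.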
Yes, file mode is OK. But the test shows that file mode has two problems:
So we want a better mechanism, something like live mode.
Ok, but in order to use the live mode you'd have to keep the bitrate at the same level as the bitrate of the TCP stream, and implement this yourself by calling the sending function at evenly spaced times. In this case you might want to have TSBPDMODE=off, TLPKTDROP=off, NAKREPORT=on and PAYLOADSIZE=1456. Just remember that the payload size can only be increased up to 1456 bytes.
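A sketch of that option set with libsrt (illustrative only; error checks omitted, and the values mirror the suggestion above):

```c
#include <srt/srt.h>

/* Live mode, but with time-based features disabled as suggested:
   TSBPDMODE=off, TLPKTDROP=off, NAKREPORT=on, PAYLOADSIZE=1456. */
void configure_live_no_drop(SRTSOCKET sock)
{
    int no = 0, yes = 1, payload = 1456;
    srt_setsockflag(sock, SRTO_TSBPDMODE,   &no,      sizeof no);
    srt_setsockflag(sock, SRTO_TLPKTDROP,   &no,      sizeof no);
    srt_setsockflag(sock, SRTO_NAKREPORT,   &yes,     sizeof yes);
    srt_setsockflag(sock, SRTO_PAYLOADSIZE, &payload, sizeof payload);
}
```

As with transtype, these options must be set before connecting; the sender-side pacing still has to be done by the application.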
Hi @byrcoder |
Thanks. My test is on Linux.
It is unreasonable if I must implement the evenly paced sending myself.
Hi,
I am trying to use SRT as a TCP proxy, which must make sure that no packet loss is allowed.
The SRT config code is as follows:
When I check the bytes from the SRT proxy, somehow data gets lost sometimes. Do you have any idea what is causing this?