From 029d5726cfd46ea3424ec069f1f222af6ce5a010 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Miko=C5=82aj=20Ma=C5=82ecki?= Date: Wed, 14 Feb 2024 11:49:10 +0100 Subject: [PATCH 1/7] Updated the documentation about latency and transtype --- docs/API/API-socket-options.md | 38 ++++++++++++++++++++++++++-------- 1 file changed, 29 insertions(+), 9 deletions(-) diff --git a/docs/API/API-socket-options.md b/docs/API/API-socket-options.md index ef6f87513..32ddd7f7d 100644 --- a/docs/API/API-socket-options.md +++ b/docs/API/API-socket-options.md @@ -301,7 +301,8 @@ connection is rejected - **however** you may also change the value of this option for the accepted socket in the listener callback (see `srt_listen_callback`) if an appropriate instruction was given in the Stream ID. -Currently supported congestion controllers are designated as "live" and "file" +Currently supported congestion controllers are designated as "live" and "file", +which correspond to the Live and File modes. Note that it is not recommended to change this option manually, but you should rather change the whole set of options using the [`SRTO_TRANSTYPE`](#SRTO_TRANSTYPE) option. @@ -772,7 +773,8 @@ for more details. | `SRTO_LATENCY` | 1.0.2 | pre | `int32_t` | ms | 120 * | 0.. | RW | GSD | This option sets both [`SRTO_RCVLATENCY`](#SRTO_RCVLATENCY) and [`SRTO_PEERLATENCY`](#SRTO_PEERLATENCY) -to the same value specified. +to the same value specified. Note that the default value for `SRTO_RCVLATENCY` is modified by the +[`SRTO_TRANSTYPE`](#SRTO_TRANSTYPE) option. Prior to SRT version 1.3.0 `SRTO_LATENCY` was the only option to set the latency. However it is effectively equivalent to setting `SRTO_PEERLATENCY` in the sending direction @@ -1212,6 +1214,8 @@ considered broken on timeout. The latency value (as described in [`SRTO_RCVLATENCY`](#SRTO_RCVLATENCY)) provided by the sender side as a minimum value for the receiver. +This value is only significant when [`SRTO_TSBPDMODE`](#SRTO_TSBPDMODE) is enabled. 
+ Reading the value of the option on an unconnected socket reports the configured value. Reading the value on a connected socket reports the effective receiver buffering latency of the peer. @@ -1296,16 +1300,22 @@ This value is only significant when [`SRTO_TSBPDMODE`](#SRTO_TSBPDMODE) is enabl **Default value**: 120 ms in Live mode, 0 in File mode (see [`SRTO_TRANSTYPE`](#SRTO_TRANSTYPE)). The latency value defines the **minimum** receiver buffering delay before delivering an SRT data packet -from a receiving SRT socket to a receiving application. The provided value is used in the connection establishment (handshake exchange) stage -to fix the end-to-end latency of the transmission. The effective end-to-end latency `L` will be fixed -as the network transmission time of the final handshake packet (~1/2 RTT) plus the **negotiated** latency value `Ln`. -Data packets will stay in the receiver buffer for at least `L` microseconds since the timestamp of the -packet, independent of the actual network transmission times (RTT variations) of these packets. +from a receiving SRT socket to a receiving application. The actual value of the receiver buffering delay `Ln` (the negotiated latency) used on a connection is determined by the negotiation in the connection establishment (handshake exchange) phase as the maximum of the `SRTO_RCVLATENCY` value and the value of [`SRTO_PEERLATENCY`](#SRTO_PEERLATENCY) set by the peer. +The general idea for the latency mechanism is to keep the time distance between two consecutive +received packets the same as the time when these same packets were scheduled for sending by the +sender application (or per the time explicitly declared when sending - see +[`srt_sendmsg2`](API-functions.md#srt_sendmsg2) for details). This makes the packets, that have arrived +earlier than their delivery time, kept in the receiver buffer until this time comes. This should +compensate any jitter in the network and an extra delay needed for a packet retransmission. 
+ +For the detailed information on how the latency setting influences the actual packet delivery time and +how this time is defined, refer to the [latency documentation](../features/latency.md). + Reading the `SRTO_RCVLATENCY` value on a socket after the connection is established provides the actual (negotiated) latency value `Ln`. @@ -1638,9 +1648,19 @@ enabled in sender if receiver supports it. Sets the transmission type for the socket, in particular, setting this option sets multiple other parameters to their default values as required for a -particular transmission type. +particular transmission type. This set the following options to their defaults +in particular mode: + +* [`SRTO_CONGESTION`](#SRTO_CONGESTION) +* [`SRTO_MESSAGEAPI`](#SRTO_MESSAGEAPI) +* [`SRTO_NAKREPORT`](#SRTO_NAKREPORT) +* [`SRTO_RCVLATENCY`](#SRTO_RCVLATENCY), also set as [`SRTO_LATENCY`](#SRTO_LATENCY) +* [`SRTO_TLPKTDROP`](#SRTO_TLPKTDROP) +* [`SRTO_TSBPDMODE`](#SRTO_TSBPDMODE) + + -Values defined by enum `SRT_TRANSTYPE` (see above for possible values) +Values defined by enum [`SRT_TRANSTYPE`](#SRT_TRANSTYPE). [Return to list](#list-of-options) From 86f4e81e8a4515e1d9fdb66489a9ce1455c84843 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Miko=C5=82aj=20Ma=C5=82ecki?= Date: Wed, 14 Feb 2024 13:27:03 +0100 Subject: [PATCH 2/7] Added the latency document --- docs/features/latency.md | 231 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 231 insertions(+) create mode 100644 docs/features/latency.md diff --git a/docs/features/latency.md b/docs/features/latency.md new file mode 100644 index 000000000..70da45017 --- /dev/null +++ b/docs/features/latency.md @@ -0,0 +1,231 @@ +General statement about the latency +=================================== + +In the live streaming, where we transmit what really is happening through the +camera, there are many things happening between the camera's objective and the +screen of the video player. 
The view of the player is then delayed and this +delay is called "latency". But this is an overall latency, which includes +everything in between: the camera frame grabber device passing to the encoder, +encoding, multiplexing, sending over the network, splitting, decoding and +then finally displaying. What is defined in SRT as "latency" is only the +part referring to sending over the network: it's the time between the moment +when the `srt_sendmsg2` function is called at the sender side up to the moment +when the `srt_recvmsg2` function is called at the receiver side. Note however +that this is the "true latency", that is, the actual time difference between +these two events. + +The goal of the latency (TSBPD) mechanism +========================================= + +The strict goal of this mechanism is to have the time distance between two +consecutive packets on the receiver side identical as they were at the +sender side. Obviously this requires some extra delay defined from upside +that should define when exactly the packet can be retrieved by the receiver +application, and if the packet arrived earlier than this time, it will have to +wait in the receiver buffer until this time comes. This time for the packet N +is roughly defined as: + +``` +PTS[N] = ETS[N] + LATENCY(option) +``` + +where `ETS[N]` is the time when the packet would arrive, if all delays +from the network and the processing software on both sides are identical +as they were for the very first received data packet. This means that +for the very first packet `ETS[0]` is equal to this packet's arrival time. +For every next packet the delivery time distance should be equal to the same +packet's declared scheduling time distance. 
+ + +SRT's approach to packet arrival time +===================================== + +SRT provides two socket options `SRTO_PEERLATENCY` and `SRTO_RCVLATENCY`, +where the "latency" name was used for convenience (also for a common option +`SRTO_LATENCY`), but it doesn't mean that it will define the true time +distance between the `srt_sendmsg2` and `srt_recvmsg2` calls for the same +packet. This is only an extra delay added at the receiver side towards +the time when the packet "should" arrive (ETS). This extra delay is used to +compensate two things: + +* The extra network delay, that is, if the packet arrived later than it +"should have arrived" + +* The packet retransmission, regarding that there might be a need to be +requested at least twice + +Note that the values included in these formulas are values that are +actually present, but many of them are not controllable and many are +not even measurable. In many cases there are measured values that are +sums of other values, but ingredients can't be extracted. Values that +we get at the receiver side are actually two: + +* ATS: actual arrival time. It's simply the time when the UDP packet +has been extracted through the `recvmsg` system call. + +* TS: time recorded in the packet header, set on the sender side and extracted +from the packet at the receiver side + +Note that the timestamp in the packet's header is 32-bit, which gives +it more-less 2.5 minutes to roll over. Therefore there is the timestamp +rollover tracked and a segment increase is done in order to keep an +eye on the overall actual time. For the needs of the formula definitions +it will be stated that TS is the true difference between the connection +start time and the time when the sending time has been declared when +the sender application is calling any of the `srt_send*` functions +(see [`srt_sendmsg2`](../API/API-functions.md#srt_sendmsg2) for details). 
+ + +SRT latency components +====================== + +To understand the latency components we need also other definitions: + +* ETS: expected arrival time. This is the time of the packet when it +"should" arrive according to its TS + +* PTS: packet's play time. It's the time when SRT gives up the packet +to the `srt_recvmsg2` call (that is, it sets up the IN flag in epoll +and resumes the blocked function call, if it was in blocking mode). + +* STS: the time declared as the sending time when the packet was +scheduled for sending at the sender side (if you don't use the +declared time, by default it's the monotonic time taken when this +function was called), which is represented by TS. + +* RTS: the same as STS, but calculated at the receiver side. The +only way to extract it is by using some initial statements. + +The "true latency" for a particular packet in SRT can be simply defined as: + +* `TD = PTS - STS` + +Note that this is a stable definition (independent on the packet), +but this value is not really controllable. So let's define the PTS +for the packet `x`: + +* `PTS[x] = ETS[x] + LATENCY + DRIFT` + +where `LATENCY` is the negotiated latency value (out of the +`SRTO_RCVLATENCY` on the agent and `SRTO_PEERLATENCY` on the peer) +and DRIFT will be described later (for simplification you can +state it's initially 0). + +These components undergo the following formula: + +* `ETS[x] = start_time + TS[x]` + +Note that it's not possible to simply define it basing on STS +because sender and receiver are two different machines that can only +see one another through the network, but their clocks are separate, +and can even run on different or changing speeds, while the only +visible phenomena happen only at a packet arrival machine. 
This +above formula, however, allows us to define the start time because +we state the following for the very first data packet: + +* `ETS[0] = ATS[0]` + +This means that from this formula we can define the start time: + +* `start_time = ATS[0] - TS[0]` + +Therefore we can state that if we have two identical clocks on +both machines with identical time bases and speeds, then: + +* `ATS[x] = program_delay[x] + network_delay[x] + STS[x]` + +(The only problem with treating this above formula too seriously +is that there doesn't exist the common clock base for two +network-communicating machines, so these components should be +treated as something that does exist, but isn't exactly measurable). + +But even if there is still this formula for ATS, it doesn't +apply to the real latency - this one is based strictly on ETS. +But you can apply this formula for the very first arriving +packet, because for this one they are equal: `ATS[0] = ETS[0]`. + +Therefore this formula is true for the very first packet: + +* `ETS[0] = prg_delay[0] + net_delay[0] + STS[0]` + +We know also that the TS set on the sender side is: + +* `TS[x] = STS[x] - snd_connect_time` + +Taking both formulas for ETS together: + +* `ETS[x] = start_time + TS[x] = prg_delay[0] + net_delay[0] + snd_connect_time + TS[x]` + +we have then: + +* `start_time = prg_delay[0] + net_delay[0] + snd_connect_time` + +Note important thing: `start_time` is not the time of arrival of the first packet, +but that time taken backwards by using the delay already recorded in TS. As TS should +represent the delay towards `snd_connect_time`, `start_time` should be simply the same +as `snd_connect_time`, just on the receiver side, and so obviously shifted by the +first packet's delays of `prg_delay` and `net_delay`. 
+ +So, as we have the start time defined, the above formulas: + +* `ETS[x] = start_time + TS[x]` +* `PTS[x] = ETS[x] + LATENCY + DRIFT` + +define now the packet delivery time as: + +* `PTS[x] = start_time + TS[x] + LATENCY + DRIFT` + +and after replacing the start time we have: + +* `PTS[x] = prg_delay[0] + net_delay[0] + snd_connect_time + TS[x] + LATENCY + DRIFT` + +and for the formula of TS we get STS, so we replace it: + +* `PTS[x] = prg_delay[0] + net_delay[0] + STS[x] + LATENCY + DRIFT` + +so the true network latency in SRT we can get by moving STS to the other side: + +* `PTS[x] - STS[x] = prg_delay[0] + net_delay[0] + LATENCY + DRIFT` + + +The DRIFT +========= + +The DRIFT, for simplifyint the calculations above, should be treated as 0, +which is the initial state. In time, however, it gets changed basing on the +value of the Arrival Time Deviation: + +* `ATD[x] = ATS[x] - ETS[x]` + +The drift is then formed as: + +* `DRIFT[x] = average(ATD[x-N] ... ATD[x])` + +The value of the drift is tracked by appropriate number of samples and if +it exceeds a threshold value, the drift value is applied to modify the +base time. However, as you can see from the formula for ATD, the drift is +simply taken from the real time when the packet was arrived, and the time +when it would arrive, if the `prg_delay` and `net_delay` values were +exactly the same as for the very first packet. ATD then represents the +changes in these values. There can be two main factors that could result +in having this value as nonzero: + +1. There has been observed a phenomenon in several types of networks that +the very first packet arrives very quickly, but then as the data packets +come in regularly, the network delay slightly increases and then stays +for a long time with this increased value. 
This phenomenon could be +mitigated by having a reliable value of RTT, so once it's observed as +increased, a special factor could be used to decrease the positive value +of the drift, but this currently isn't implemented. This phenomenon also +isn't observed in every network, especially in a longer distance. + +2. The clock speed on both machines isn't exactly the same, which means +that if you decipher the ETS basing on the TS, after time it may result +in values that even precede the STS (and this way suggesting as if the +network delay was negative) or having an enormous delay (with ATS exceeding +PTS). This is actually the main reason of tracking the drift. + + + + + From 8748d758698a1ab0a0838ef031054ff5733ef507 Mon Sep 17 00:00:00 2001 From: Sektor van Skijlen Date: Thu, 15 Feb 2024 09:56:24 +0100 Subject: [PATCH 3/7] Apply suggestions from code review (first portion) Co-authored-by: stevomatthews --- docs/API/API-socket-options.md | 2 +- docs/features/latency.md | 26 +++++++++++++------------- 2 files changed, 14 insertions(+), 14 deletions(-) diff --git a/docs/API/API-socket-options.md b/docs/API/API-socket-options.md index 32ddd7f7d..ba2f0959f 100644 --- a/docs/API/API-socket-options.md +++ b/docs/API/API-socket-options.md @@ -1648,7 +1648,7 @@ enabled in sender if receiver supports it. Sets the transmission type for the socket, in particular, setting this option sets multiple other parameters to their default values as required for a -particular transmission type. This set the following options to their defaults +particular transmission type. 
This sets the following options to their defaults in particular mode: * [`SRTO_CONGESTION`](#SRTO_CONGESTION) diff --git a/docs/features/latency.md b/docs/features/latency.md index 70da45017..6461a4635 100644 --- a/docs/features/latency.md +++ b/docs/features/latency.md @@ -1,18 +1,18 @@ -General statement about the latency +General statement about latency =================================== -In the live streaming, where we transmit what really is happening through the -camera, there are many things happening between the camera's objective and the -screen of the video player. The view of the player is then delayed and this -delay is called "latency". But this is an overall latency, which includes -everything in between: the camera frame grabber device passing to the encoder, -encoding, multiplexing, sending over the network, splitting, decoding and -then finally displaying. What is defined in SRT as "latency" is only the -part referring to sending over the network: it's the time between the moment -when the `srt_sendmsg2` function is called at the sender side up to the moment -when the `srt_recvmsg2` function is called at the receiver side. Note however -that this is the "true latency", that is, the actual time difference between -these two events. +In the live streaming there are many things happening between the +camera's lens and the screen of the video player, all of which contribute +to a delay that is generally referred to as "latency". This overall latency +includes the time it takes for the camera frame grabber device to pass +frames to the encoder, encoding, multiplexing, **sending over the network**, +splitting, decoding and then finally displaying. + +In SRT, however, "latency" is defined as only the delay introduced by **sending +over the network**. It's the time between the moment when the `srt_sendmsg2` +function is called at the sender side up to the moment when the `srt_recvmsg2` +function is called at the receiver side. 
This SRT latency is the actual time difference +between these two events. The goal of the latency (TSBPD) mechanism ========================================= From 25e251239f383aa7b7e917744a5b75a6d65d21d1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Miko=C5=82aj=20Ma=C5=82ecki?= Date: Thu, 15 Feb 2024 10:04:20 +0100 Subject: [PATCH 4/7] Applied a fix that was incorrectly formatted by github --- docs/features/latency.md | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-) diff --git a/docs/features/latency.md b/docs/features/latency.md index 6461a4635..3fda52f98 100644 --- a/docs/features/latency.md +++ b/docs/features/latency.md @@ -14,15 +14,16 @@ function is called at the receiver side. This SRT latency is the actual time difference between these two events. + The goal of the latency (TSBPD) mechanism ========================================= -The strict goal of this mechanism is to have the time distance between two -consecutive packets on the receiver side identical as they were at the -sender side. Obviously this requires some extra delay defined from upside -that should define when exactly the packet can be retrieved by the receiver -application, and if the packet arrived earlier than this time, it will have to -wait in the receiver buffer until this time comes. This time for the packet N +SRT employs a TimeStamp Based Packet Delivery (TSBPD) mechanism +with the strict goal of keeping the time interval between two consecutive packets +on the receiver side identical to what it was at the sender side. This +requires introducing an extra delay that should define when exactly the packet +can be retrieved by the receiver application -- if the packet arrives early, it must +wait in the receiver buffer until the delivery time.
This time for a packet N is roughly defined as: ``` @@ -31,10 +32,10 @@ PTS[N] = ETS[N] + LATENCY(option) where `ETS[N]` is the time when the packet would arrive, if all delays from the network and the processing software on both sides are identical -as they were for the very first received data packet. This means that +to what they were for the very first received data packet. This means that for the very first packet `ETS[0]` is equal to this packet's arrival time. -For every next packet the delivery time distance should be equal to the same -packet's declared scheduling time distance. +For every following packet the delivery time interval should be equal to +that packet's declared scheduling time interval. SRT's approach to packet arrival time From 31eeef60f0f0e9b60d07d9e17a7ddf2439d09be4 Mon Sep 17 00:00:00 2001 From: Sektor van Skijlen Date: Thu, 15 Feb 2024 10:41:33 +0100 Subject: [PATCH 5/7] Apply suggestions from code review (completed) Co-authored-by: stevomatthews --- docs/features/latency.md | 128 ++++++++++++++++++--------------------- 1 file changed, 60 insertions(+), 68 deletions(-) diff --git a/docs/features/latency.md b/docs/features/latency.md index 3fda52f98..38ee40da3 100644 --- a/docs/features/latency.md +++ b/docs/features/latency.md @@ -41,37 +41,35 @@ that packet's declared scheduling time interval. SRT's approach to packet arrival time ===================================== -SRT provides two socket options `SRTO_PEERLATENCY` and `SRTO_RCVLATENCY`, -where the "latency" name was used for convenience (also for a common option -`SRTO_LATENCY`), but it doesn't mean that it will define the true time -distance between the `srt_sendmsg2` and `srt_recvmsg2` calls for the same -packet. This is only an extra delay added at the receiver side towards +SRT provides two socket options `SRTO_PEERLATENCY` and `SRTO_RCVLATENCY`.
+While they have "latency" in their names, they do *not* define the true time +interval between the `srt_sendmsg2` and `srt_recvmsg2` calls for the same +packet. They are only used to add an extra delay (at the receiver side) to the time when the packet "should" arrive (ETS). This extra delay is used to -compensate two things: +compensate for two things: -* The extra network delay, that is, if the packet arrived later than it -"should have arrived" +* an extra network delay (that is, if the packet arrived later than it +"should have arrived"), or -* The packet retransmission, regarding that there might be a need to be -requested at least twice +* a packet retransmission. -Note that the values included in these formulas are values that are -actually present, but many of them are not controllable and many are -not even measurable. In many cases there are measured values that are -sums of other values, but ingredients can't be extracted. Values that -we get at the receiver side are actually two: +Note that many of the values included in these formulas are not controllable and +some cannot be measured directly. In many cases there are measured values +that are sums of other values, but the component values can't be extracted. -* ATS: actual arrival time. It's simply the time when the UDP packet +There are two values that we can obtain at the receiver side: + +* ATS: actual arrival time, which is the time when the UDP packet has been extracted through the `recvmsg` system call. * TS: time recorded in the packet header, set on the sender side and extracted from the packet at the receiver side Note that the timestamp in the packet's header is 32-bit, which gives -it more-less 2.5 minutes to roll over. Therefore there is the timestamp -rollover tracked and a segment increase is done in order to keep an +it more or less 72 minutes (2^32 microseconds) to roll over. Therefore timestamp +rollover is tracked and a segment increase is performed in order to keep an eye on the overall actual time.
For the needs of the formula definitions -it will be stated that TS is the true difference between the connection +it must be stated that TS is the true difference between the connection start time and the time when the sending time has been declared when the sender application is calling any of the `srt_send*` functions (see [`srt_sendmsg2`](../API/API-functions.md#srt_sendmsg2) for details). @@ -82,28 +80,28 @@ SRT latency components To understand the latency components we need also other definitions: -* ETS: expected arrival time. This is the time of the packet when it -"should" arrive according to its TS +* **ETS** (Expected Time Stamp): The packet's expected arrival time, when it +"should" arrive according to its timestamp -* PTS: packet's play time. It's the time when SRT gives up the packet +* **PTS** (Presentation Time Stamp): The packet's play time, when SRT gives the packet to the `srt_recvmsg2` call (that is, it sets up the IN flag in epoll and resumes the blocked function call, if it was in blocking mode). -* STS: the time declared as the sending time when the packet was +* **STS** (Sender Time Stamp): The time when the packet was scheduled for sending at the sender side (if you don't use the -declared time, by default it's the monotonic time taken when this -function was called), which is represented by TS. +declared time, by default it's the monotonic time used when this +function is called). -* RTS: the same as STS, but calculated at the receiver side. The +* **RTS** (Receiver Time Stamp): The same as STS, but calculated at the receiver side. The only way to extract it is by using some initial statements. The "true latency" for a particular packet in SRT can be simply defined as: * `TD = PTS - STS` -Note that this is a stable definition (independent on the packet), +Note that this is a stable definition (independent of the packet), but this value is not really controllable. 
So let's define the PTS -for the packet `x`: +for a packet `x`: * `PTS[x] = ETS[x] + LATENCY + DRIFT` @@ -116,12 +114,12 @@ These components undergo the following formula: * `ETS[x] = start_time + TS[x]` -Note that it's not possible to simply define it basing on STS -because sender and receiver are two different machines that can only -see one another through the network, but their clocks are separate, -and can even run on different or changing speeds, while the only -visible phenomena happen only at a packet arrival machine. This -above formula, however, allows us to define the start time because +Note that it's not possible to simply define a "true" latency based on STS +because the sender and receiver are two different machines that can only +see one another through the network. Their clocks are separate, +and can even run at different or changing speeds, and the only +visible phenomena happen when packets arrive at the receiver machine. +However, the formula above does allow us to define the start time because we state the following for the very first data packet: * `ETS[0] = ATS[0]` @@ -140,10 +138,9 @@ is that there doesn't exist the common clock base for two network-communicating machines, so these components should be treated as something that does exist, but isn't exactly measurable). -But even if there is still this formula for ATS, it doesn't -apply to the real latency - this one is based strictly on ETS. -But you can apply this formula for the very first arriving -packet, because for this one they are equal: `ATS[0] = ETS[0]`. +This formula for ATS doesn't apply to the real latency, which is based strictly +on ETS. But you can apply this formula for the very first arriving packet, +because in this case they are equal: `ATS[0] = ETS[0]`. 
Therefore this formula is true for the very first packet: @@ -161,10 +158,10 @@ we have then: * `start_time = prg_delay[0] + net_delay[0] + snd_connect_time` -Note important thing: `start_time` is not the time of arrival of the first packet, +**IMPORTANT**: `start_time` is not the time of arrival of the first packet, but that time taken backwards by using the delay already recorded in TS. As TS should represent the delay towards `snd_connect_time`, `start_time` should be simply the same -as `snd_connect_time`, just on the receiver side, and so obviously shifted by the +as `snd_connect_time`, just on the receiver side, and so shifted by the first packet's delays of `prg_delay` and `net_delay`. So, as we have the start time defined, the above formulas: @@ -172,7 +169,7 @@ So, as we have the start time defined, the above formulas: * `ETS[x] = start_time + TS[x]` * `PTS[x] = ETS[x] + LATENCY + DRIFT` -define now the packet delivery time as: +now define the packet delivery time as: * `PTS[x] = start_time + TS[x] + LATENCY + DRIFT` @@ -180,11 +177,11 @@ and after replacing the start time we have: * `PTS[x] = prg_delay[0] + net_delay[0] + snd_connect_time + TS[x] + LATENCY + DRIFT` -and for the formula of TS we get STS, so we replace it: +and from the TS formula we get STS, so we replace it: * `PTS[x] = prg_delay[0] + net_delay[0] + STS[x] + LATENCY + DRIFT` -so the true network latency in SRT we can get by moving STS to the other side: +We can now get the true network latency in SRT by moving STS to the other side: * `PTS[x] - STS[x] = prg_delay[0] + net_delay[0] + LATENCY + DRIFT` @@ -198,35 +195,30 @@ value of the Arrival Time Deviation: * `ATD[x] = ATS[x] - ETS[x]` -The drift is then formed as: +The drift is then calculated as: * `DRIFT[x] = average(ATD[x-N] ... ATD[x])` -The value of the drift is tracked by appropriate number of samples and if +The value of the drift is tracked over an appropriate number of samples. 
If it exceeds a threshold value, the drift value is applied to modify the base time. However, as you can see from the formula for ATD, the drift is -simply taken from the real time when the packet was arrived, and the time -when it would arrive, if the `prg_delay` and `net_delay` values were +simply taken from the actual time when the packet arrived, and the time +when it would have arrived if the `prg_delay` and `net_delay` values were exactly the same as for the very first packet. ATD then represents the changes in these values. There can be two main factors that could result -in having this value as nonzero: - -1. There has been observed a phenomenon in several types of networks that -the very first packet arrives very quickly, but then as the data packets -come in regularly, the network delay slightly increases and then stays -for a long time with this increased value. This phenomenon could be -mitigated by having a reliable value of RTT, so once it's observed as -increased, a special factor could be used to decrease the positive value -of the drift, but this currently isn't implemented. This phenomenon also -isn't observed in every network, especially in a longer distance. - -2. The clock speed on both machines isn't exactly the same, which means -that if you decipher the ETS basing on the TS, after time it may result -in values that even precede the STS (and this way suggesting as if the -network delay was negative) or having an enormous delay (with ATS exceeding -PTS). This is actually the main reason of tracking the drift. - - - - - +in having this value as non-zero: + +1. A phenomenon has been observed in several types of networks where +the very first packet arrives quickly, but as subsequent data packets +come in regularly, the network delay slightly increases and then remains fixed +for a long time at this increased value. This phenomenon can be +mitigated by having a reliable value for RTT. 
Once the increase is observed, +a special factor could be applied to decrease the positive value +of the drift. This isn't currently implemented. This phenomenon also +isn't observed in every network, especially those covering longer distances. + +2. The clock speed on both machines (sender and receiver) isn't exactly the same, +which means that if you decipher the ETS based on the TS, over time it may result +in values that even precede the STS (suggesting a negative network delay) or that +have an enormous delay (with ATS exceeding PTS). This is actually the main reason +for tracking the drift. From 0f6d2d313f5132a31968c89d3fbb945605bc0e28 Mon Sep 17 00:00:00 2001 From: Sektor van Skijlen Date: Fri, 16 Feb 2024 08:55:02 +0100 Subject: [PATCH 6/7] Apply suggestions from code review (doc review, finished) Co-authored-by: stevomatthews --- docs/API/API-socket-options.md | 8 ++++---- docs/features/latency.md | 12 ++++++------ 2 files changed, 10 insertions(+), 10 deletions(-) diff --git a/docs/API/API-socket-options.md b/docs/API/API-socket-options.md index ba2f0959f..84e361f6d 100644 --- a/docs/API/API-socket-options.md +++ b/docs/API/API-socket-options.md @@ -1309,11 +1309,11 @@ is determined by the negotiation in the connection establishment (handshake exch The general idea for the latency mechanism is to keep the time distance between two consecutive received packets the same as the time when these same packets were scheduled for sending by the sender application (or per the time explicitly declared when sending - see -[`srt_sendmsg2`](API-functions.md#srt_sendmsg2) for details). This makes the packets, that have arrived -earlier than their delivery time, kept in the receiver buffer until this time comes. This should -compensate any jitter in the network and an extra delay needed for a packet retransmission. +[`srt_sendmsg2`](API-functions.md#srt_sendmsg2) for details).
This keeps any packets that have arrived +earlier than their delivery time in the receiver buffer until their delivery time comes. This should +compensate for any jitter in the network and provides an extra delay needed for a packet retransmission. -For the detailed information on how the latency setting influences the actual packet delivery time and +For detailed information on how the latency setting influences the actual packet delivery time and how this time is defined, refer to the [latency documentation](../features/latency.md). Reading the `SRTO_RCVLATENCY` value on a socket after the connection is established provides the actual (negotiated) diff --git a/docs/features/latency.md b/docs/features/latency.md index 38ee40da3..cfbb779a4 100644 --- a/docs/features/latency.md +++ b/docs/features/latency.md @@ -133,10 +133,9 @@ both machines with identical time bases and speeds, then: * `ATS[x] = program_delay[x] + network_delay[x] + STS[x]` -(The only problem with treating this above formula too seriously -is that there doesn't exist the common clock base for two -network-communicating machines, so these components should be -treated as something that does exist, but isn't exactly measurable). +Note that two machines communicating over a network do not typically have a +common clock base. Therefore, although this formula is correct, it involves +components that can neither be measured nor captured at the receiver side. This formula for ATS doesn't apply to the real latency, which is based strictly on ETS. But you can apply this formula for the very first arriving packet, @@ -189,8 +188,9 @@ We can now get the true network latency in SRT by moving STS to the other side: The DRIFT ========= -The DRIFT, for simplifyint the calculations above, should be treated as 0, -which is the initial state. In time, however, it gets changed basing on the +The DRIFT is a measure of the variance over time of the base time. 
+To simplify the calculations above, DRIFT is considered to be 0,
+which is the initial state. In time, however, it changes based on the
 value of the Arrival Time Deviation:
 
 * `ATD[x] = ATS[x] - ETS[x]`

From 27b72e1b02b6050f6896a138fa16377e005fc89c Mon Sep 17 00:00:00 2001
From: Sektor van Skijlen
Date: Fri, 16 Feb 2024 10:20:49 +0100
Subject: [PATCH 7/7] Fixed heading style

Co-authored-by: Maxim Sharabayko
---
 docs/features/latency.md | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/docs/features/latency.md b/docs/features/latency.md
index cfbb779a4..d287ac030 100644
--- a/docs/features/latency.md
+++ b/docs/features/latency.md
@@ -1,5 +1,4 @@
-General statement about latency
-===================================
+## General statement about latency
 
 In the live streaming there are many things happening
 between the camera's lens and the screen of the video player, all of which contribute
@@ -15,8 +14,7 @@ function is called at the receiver side. This SRT latency is the actual time dif
 between these two events.
 
-The goal of the latency (TSBPD) mechanism
-=========================================
+## The goal of the latency (TSBPD) mechanism
 
 SRT employs a TimeStamp Based Packet Delivery (TSBPD) mechanism
 with strict goal of keeping the time interval between two consecutive packets
@@ -38,8 +36,7 @@ For every following packet the delivery time interval should be equal to the
 that packet's declared scheduling time interval.
 
-SRT's approach to packet arrival time
-=====================================
+## SRT's approach to packet arrival time
 
 SRT provides two socket options `SRTO_PEERLATENCY` and `SRTO_RCVLATENCY`.
 While they have "latency" in their names, they do *not* define the true time
@@ -75,8 +72,7 @@ the sender application is calling any of the `srt_send*` functions (see
 [`srt_sendmsg2`](../API/API-functions.md#srt_sendmsg2) for details).
-SRT latency components -====================== +## SRT latency components To understand the latency components we need also other definitions: @@ -185,8 +181,7 @@ We can now get the true network latency in SRT by moving STS to the other side: * `PTS[x] - STS[x] = prg_delay[0] + net_delay[0] + LATENCY + DRIFT` -The DRIFT -========= +## The DRIFT The DRIFT is a measure of the variance over time of the base time. To simplify the calculations above, DRIFT is considered to be 0,
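The timing rules collected in these patches — the handshake negotiating the effective latency as the maximum of the two option values, and the delivery-time formula `PTS[x] = ETS[x] + LATENCY + DRIFT` with drift tracked from `ATD[x] = ATS[x] - ETS[x]` — can be sketched as a small numeric model. This is a hypothetical illustration, not SRT's implementation: the names (`TsbpdModel`, `on_packet`), the millisecond units, and the moving-average drift tracker are assumptions made for the sketch.

```python
def negotiated_latency(rcv_latency_ms, peer_latency_ms):
    """Handshake rule from the docs: the effective latency Ln is the maximum
    of the local SRTO_RCVLATENCY and the peer's SRTO_PEERLATENCY."""
    return max(rcv_latency_ms, peer_latency_ms)


class TsbpdModel:
    """Toy model of TSBPD delivery-time calculation (not SRT source code)."""

    def __init__(self, latency_ms):
        self.latency_ms = latency_ms
        self.base_time_ms = None  # fixed by the first packet, so ETS[0] == ATS[0]
        self.drift_ms = 0.0       # DRIFT starts at 0, then tracks ATD over time

    def on_packet(self, ts_ms, arrival_ms):
        """Return the delivery time PTS[x] = ETS[x] + LATENCY + DRIFT."""
        if self.base_time_ms is None:
            # The very first packet anchors the base time.
            self.base_time_ms = arrival_ms - ts_ms
        ets = self.base_time_ms + ts_ms  # expected arrival time ETS[x]
        atd = arrival_ms - ets           # Arrival Time Deviation ATD[x]
        # Toy drift tracker: exponential moving average of ATD
        # (SRT's real tracker is threshold-based and works differently).
        self.drift_ms = 0.9 * self.drift_ms + 0.1 * atd
        return ets + self.latency_ms + self.drift_ms


lat = negotiated_latency(rcv_latency_ms=120, peer_latency_ms=80)
model = TsbpdModel(lat)
pts0 = model.on_packet(ts_ms=0, arrival_ms=1000)   # first packet anchors base time
pts1 = model.on_packet(ts_ms=20, arrival_ms=1025)  # arrived 5 ms later than expected
```

In this model the first packet defines `ETS[0] = ATS[0]`, so its drift contribution is zero and it is delivered exactly `LATENCY` after arrival; the second packet's 5 ms deviation feeds the drift estimate, nudging subsequent delivery times.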