Latency and Buffers explained #743
Replies: 3 comments
-
There are two latency settings.
Are these the same thing? Do they add up?
-
Let's think about network issues which cause the playback to stop on some clients and then resume after a few seconds. This can and does happen, especially on WiFi networks. (The following is just my thinking; please tell me if it is right or wrong and add missing facts.)

What is going on here? The network connection between server and client is temporarily interrupted, the client's buffer runs empty and playback stops. When the network connection is working again, the buffer is refilled and playback resumes. OK, but what happens in detail? The connection between server and client is based on TCP, so there are two possibilities: either the connection is torn down completely and has to be re-established, or packets are temporarily lost or delayed while the connection itself survives.
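To make the underrun mechanics concrete, here is a toy model of a client buffer during a network stall (purely illustrative, with invented names and numbers; not Snapcast code):

```python
def simulate(buffer_ms: int, stall_start_ms: int, stall_len_ms: int, total_ms: int):
    """Return the millisecond at which playback stops (underrun), or None."""
    fill = buffer_ms  # client buffer starts full
    for t in range(total_ms):
        stalled = stall_start_ms <= t < stall_start_ms + stall_len_ms
        if not stalled and fill < buffer_ms:
            fill += 1          # 1 ms of audio arrives from the network
        fill -= 1              # 1 ms of audio is played back
        if fill <= 0:
            return t           # buffer empty: playback stops
    return None

# A 1000 ms buffer rides out a 500 ms stall, but a 1500 ms stall drains it:
print(simulate(1000, 200, 500, 5000))   # → None
print(simulate(1000, 200, 1500, 5000))  # → 1198 (underrun before the stall ends)
```

In this toy model the refill rate equals the playback rate; a real TCP connection would burst-deliver the backlog once the stall ends, which is exactly the too-late data discussed next.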
TCP is designed to recover in the second case and deliver all of the missing packets in the correct order. But by the time it delivers them to the client, it is already too late: the buffer was empty and playback has stopped. These packets of audio data, delivered far too late, are discarded by the client, which fast-forwards until it finds audio data with current timestamps.

My question/thought here: would it benefit the reliability of snapcast, or rather snapcast's ability to recover from and cope with network issues, if UDP were used instead of TCP? It would be necessary to implement some custom protocol that allowed the client to ask for retransmission of missing packets, but it would also allow simply skipping missing packets that would be discarded anyway. Maybe the recovery time could be reduced? Maybe it would also make sense to use forward error correction?
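A hypothetical sketch of that UDP idea: sequence-numbered packets, a retransmission request only while the missing audio could still arrive in time, and skipping forward otherwise. This is invented illustration code, not Snapcast's protocol; `RETRANSMIT_RTT_MS` and all names are made up:

```python
RETRANSMIT_RTT_MS = 50  # assumed round-trip budget for a retransmission

class UdpReceiver:
    """Toy jitter-buffer front end; packets carry (seq, play_at_ms)."""

    def __init__(self):
        self.next_seq = 0  # next sequence number we expect

    def on_packet(self, seq: int, play_at_ms: int, now_ms: int) -> str:
        if play_at_ms <= now_ms:
            return "discard-late"        # too late to play anyway
        if seq < self.next_seq:
            return "discard-duplicate"   # already played or skipped
        if seq == self.next_seq:
            self.next_seq += 1
            return "buffer"
        # Gap: seq > next_seq. Ask for a retransmit only if the missing
        # packets can still make their deadline; otherwise give up on the
        # gap and fast-forward. (A real client would also buffer this
        # out-of-order packet; omitted here for brevity.)
        if play_at_ms - now_ms > RETRANSMIT_RTT_MS:
            return "request-retransmit"
        self.next_seq = seq + 1
        return "skip-and-buffer"
```

The key difference from TCP is visible in the last branch: the receiver decides per gap whether waiting is worthwhile, instead of unconditionally stalling until retransmission succeeds.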
-
First response without having read everything: here is a pamphlet about latencies: #663
-
Hello there!
Is there an explanation or documentation of the buffers used in snapcast? I would like to understand how it works.
I have the following idea of the system and want to know if it is correct.
The server buffer config (`buffer` in `snapserver.conf`) is the general buffer in ms that accounts for the network latency. The actual buffer is created on the clients. When something is piped into the server, it gets encoded (there is/must be an additional buffer for this?), timestamped and sent over the network to the clients. The buffer is filled on the client and the sound is played back with a delay equal to the size of the buffer.

But what about the latency setting? The client's latency is the delay introduced by the sound card, amp or other stuff after snapclient's output. A latency of 200ms, for example, means that the sound leaves the speaker 200ms later than it was played by the client.
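For concreteness, this is where the two settings live, as far as I can tell (example values, not recommendations): the shared buffer is the `buffer` option in the `[stream]` section of `snapserver.conf`, while latency is set per client, e.g. via snapclient's `--latency` option or, if I remember correctly, the `Client.SetLatency` JSON-RPC method.

```ini
# snapserver.conf (example values)
[stream]
# shared end-to-end buffer in milliseconds
buffer = 1000
```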
So, when I now set a (positive) latency of 200ms for this client, the client knows about the latency of the output system and can compensate by playing back the sound 200ms earlier. This effectively reduces the (used) buffer by 200ms on this client, doesn't it?
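If that reading is right, the timing can be written as one line of arithmetic (a sketch of my understanding, not authoritative; `play_at` is an invented name):

```python
# Assumed timing model: every chunk is timestamped by the server, and each
# client schedules it for
#   play_at = server_timestamp + buffer_ms - client_latency_ms
def play_at(server_ts_ms: int, buffer_ms: int, client_latency_ms: int) -> int:
    return server_ts_ms + buffer_ms - client_latency_ms

# buffer = 1000 ms, latency = 200 ms: the client outputs the chunk 800 ms
# after its timestamp, so after the 200 ms of downstream delay the sound
# leaves the speaker at timestamp + 1000 ms, in sync with the other clients.
print(play_at(0, 1000, 200))  # → 800
```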
What happens if the buffer is 1000ms and I set a latency of 1500ms? Since latency > buffer in that case, is the buffer automatically increased, which would affect the other clients too?
What about negative latency? When there are two clients and one plays too late, I could set a positive latency on the late client, but I could also set a negative latency on the early client. This would make sense if there were more than two clients and only one of them is early (the others are late). Negative latency would require the buffer to be increased. Does this happen?
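Both edge cases can be phrased with the same arithmetic (again only my interpretation of buffer and latency, not a statement about what Snapcast actually does):

```python
# Effective per-client playback delay under the model discussed above
# (invented helper name; assumes delay = buffer - latency).
def effective_delay(buffer_ms: int, latency_ms: int) -> int:
    return buffer_ms - latency_ms

# latency (1500) > buffer (1000): the chunk would have to play 500 ms before
# it was even timestamped, which cannot work unless the shared buffer grows:
print(effective_delay(1000, 1500))  # → -500

# negative latency (-200) on the early client stretches its delay to 1200 ms,
# i.e. that client must hold chunks 200 ms longer than the buffer size:
print(effective_delay(1000, -200))  # → 1200
```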