Better algorithm for stream flow control #733
Status quo neqo

- Static connection flow-control limit:
- Static stream receive window limit: neqo/neqo-transport/src/recv_stream.rs, line 33 in c004359
- Static stream send window limit: neqo/neqo-transport/src/send_stream.rs, line 36 in c004359

Other implementations

Potential solution

Implement auto-tuning based on Google's QUIC design document. This is used in quic-go. I have also implemented this in the past for a muxer on top of TCP: libp2p/rust-yamux#176.

Alternatives

More research needed. Pointers very much appreciated.
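A minimal sketch of the auto-tuning idea from Google's QUIC design document: when window updates are being sent more often than roughly once per two RTTs, the window is too small for the current bandwidth-delay product, so it is doubled up to a cap. All names, fields, and the exact trigger condition here are illustrative assumptions, not Neqo's (or quic-go's) actual code.

```rust
use std::time::{Duration, Instant};

// Hypothetical receive-window auto-tuner, loosely following Google's QUIC
// design document. Names are illustrative, not Neqo's API.
struct ReceiveWindow {
    max_window: u64,      // current maximum receive window in bytes
    max_window_cap: u64,  // hard upper bound for auto-tuning
    last_update: Instant, // when the previous window update was sent
}

impl ReceiveWindow {
    /// Called whenever a window update (e.g. MAX_STREAM_DATA) is about to
    /// be sent. If updates are being sent less than two RTTs apart, the
    /// window is too small for the bandwidth-delay product: double it,
    /// capped at `max_window_cap`.
    fn on_window_update(&mut self, now: Instant, rtt: Duration) {
        if now.duration_since(self.last_update) < 2 * rtt {
            self.max_window = (self.max_window * 2).min(self.max_window_cap);
        }
        self.last_update = now;
    }
}
```

The cap bounds the memory commitment per stream, since the receiver must be prepared to buffer a full window.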
This commit adds a basic smoke test using the `test-fixture` simulator, asserting that on a connection with unlimited bandwidth and a 50 ms round-trip time, Neqo can eventually achieve > 1 Gbit/s throughput. This showcases the potential a future stream flow-control auto-tuning algorithm can have. See mozilla#733.
I believe that a better approach would be not to double the window, but to increase the window by an amount equal to the amount consumed since the last increase, if the time taken to consume the window is less than one RTT (increased by a small fudge factor to allow slack for reporting delays). This is similar to the increase in Reno congestion control. The effect would be that the window increases less rapidly and perhaps more often, but it would end up with a commitment that is closer to a 2x factor of the BDP, rather than being 4x or more.
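The proposal above could be sketched roughly as follows: grow the window by the amount consumed since the last increase, but only when a full window was consumed in less than one RTT plus a fudge factor. The names, structure, and the choice of fudge factor (half an RTT) are assumptions for illustration, not Neqo code.

```rust
use std::time::{Duration, Instant};

// Illustrative sketch of a Reno-like window increase; all names and the
// half-RTT fudge factor are assumptions.
struct WindowTuner {
    window: u64,                  // current flow-control window in bytes
    consumed_since_increase: u64, // bytes consumed in the current epoch
    last_increase: Instant,       // start of the current measurement epoch
}

impl WindowTuner {
    fn on_data_consumed(&mut self, bytes: u64, now: Instant, rtt: Duration) {
        self.consumed_since_increase += bytes;
        if self.consumed_since_increase >= self.window {
            // A full window has been consumed. If that happened in under
            // one RTT (plus slack for reporting delays), grow the window
            // by the amount consumed, Reno-style.
            let fudge = rtt / 2;
            if now.duration_since(self.last_increase) < rtt + fudge {
                self.window += self.consumed_since_increase;
            }
            // Start a new measurement epoch either way.
            self.consumed_since_increase = 0;
            self.last_increase = now;
        }
    }
}
```

Note that when a full window is consumed within the deadline, the increase equals at least the window, so this sketch degenerates to doubling; the distinction is that the increase tracks the amount actually consumed rather than the window size itself.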
Given that the premise ("if the time taken to consume the window is less than one RTT") requires the window to be fully consumed, the "amount consumed" will equal the window, so the increase will equal the window, and thus the new window will be twice the current window. How is that different from doubling the window as proposed above? I might be confusing (a) the increase of the available window and (b) the increase of the maximum window size above. @martinthomson, would you mind rephrasing your proposal with the terminology used in lines 393 to 405 in 0ad1b77?
Sure. The goal is to increase the value of […]. Therefore, I suggest that if the rate at which […]. This will approximately lead to a doubling as […]. Note that we could increase only based on the amount by which the increase exceeds previous expectations, which would lead to a closer adaptation, but it would be more reliant on the ACK rate than the simple scheme.
This commit adds a basic smoke test using the `test-fixture` simulator, asserting the expected bandwidth on a 1 Gbit/s link. Given mozilla#733, the current expected bandwidth is limited by the fixed-size stream receive buffer (1 MiB).
…eviewers Add a Glean metric counting the number of QUIC frames, labeled by frame type. This is helpful, e.g., to measure the impact of [stream receive window auto-tuning](mozilla/neqo#733) by looking at the number of max_stream_data frames sent and stream_data_blocked frames received. Differential Revision: https://phabricator.services.mozilla.com/D228295
What we have now is the approach that was fast to implement, rather than a fully thought-out design. We should improve our algorithm when possible.
from #726 (comment):