Better algorithm for stream flow control #733

Open
agrover opened this issue Jun 16, 2020 · 4 comments · May be fixed by #1868
Labels
enhancement (New feature or request) · p3 (Backlog)

Comments

@agrover
Contributor

agrover commented Jun 16, 2020

What we have now is the approach that was fastest to implement, rather than a fully thought-out design. We should improve the algorithm when possible.

from #726 (comment):

We have an estimate of BDP we might use to tune this. We should aim for BDP*2 at a minimum if we have an algorithm like this, otherwise we will need more frequent sending of updates.

@mxinden
Collaborator

mxinden commented Apr 19, 2024

Status quo in neqo

Static connection flow-control limit:

const LOCAL_MAX_DATA: u64 = 0x3FFF_FFFF_FFFF_FFFF; // 2^62-1

Static stream receive window limit:

const RX_STREAM_DATA_WINDOW: u64 = 0x10_0000; // 1MiB

Static stream send window limit:

pub const SEND_BUFFER_SIZE: usize = 0x10_0000; // 1 MiB

Other implementations

Potential solution

Implement auto-tuning based on Google's QUIC design document (a Rust sketch of the algorithm follows below):

Algorithm

  • As above, the flow control window update is triggered when:
    available window < max window size / 2,
    where available window = max receive window offset - bytes consumed.
  • To realize auto-tuning, add the following logic just before issuing a window update:
    • Keep track of the time interval between subsequent window updates; call this since_last_update.
    • If (since_last_update < RTT * trigger factor) then max window size = MIN(max window size * increase factor, upper bound).
      • trigger factor is 2
      • increase factor is 2
  • As above, the window update sets:
    max receive window offset += (max window size - available window)

https://docs.google.com/document/d/1F2YfdDXKpy20WVKJueEf4abn_LVZHhMUMS5gX6Pgjl4/edit#heading=h.j3nq4bn2eaud

Used in quic-go.

I also implemented this in the past for a multiplexer on top of TCP: libp2p/rust-yamux#176.
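For illustration, here is a minimal Rust sketch of the auto-tuning logic above, using the design document's terminology rather than neqo's fc.rs names; all identifiers are illustrative and not actual neqo API:

use std::time::{Duration, Instant};

// Illustrative receive-window state; field names follow the design document,
// not neqo's fc.rs.
struct ReceiveWindow {
    max_window_size: u64,     // "max window size"
    max_receive_offset: u64,  // "max receive window offset" advertised to the peer
    bytes_consumed: u64,      // bytes read by the application
    last_update: Option<Instant>,
    upper_bound: u64,         // hard cap on the window size
}

impl ReceiveWindow {
    // Returns the new offset to advertise, if a window update is due.
    fn maybe_update(&mut self, now: Instant, rtt: Duration) -> Option<u64> {
        const TRIGGER_FACTOR: u32 = 2;
        const INCREASE_FACTOR: u64 = 2;

        let available = self.max_receive_offset - self.bytes_consumed;
        // Only issue a window update once less than half the window remains.
        if available >= self.max_window_size / 2 {
            return None;
        }
        // Auto-tuning: if the previous update was less than TRIGGER_FACTOR RTTs
        // ago, grow the window, capped at upper_bound.
        if let Some(last) = self.last_update {
            if now - last < rtt * TRIGGER_FACTOR {
                self.max_window_size =
                    (self.max_window_size * INCREASE_FACTOR).min(self.upper_bound);
            }
        }
        self.last_update = Some(now);
        // Advance the advertised offset so that a full window is open again.
        self.max_receive_offset += self.max_window_size - available;
        Some(self.max_receive_offset)
    }
}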

Alternatives

More research needed. Pointers very much appreciated.

mxinden added a commit to mxinden/neqo that referenced this issue Apr 25, 2024
This commit adds a basic smoke test using the `test-fixture` simulator,
asserting that on a connection with unlimited bandwidth and a 50 ms round-trip time
Neqo can eventually achieve > 1 Gbit/s throughput.

Showcases the potential a future stream flow-control auto-tuning algorithm can have.

See mozilla#733.
@martinthomson
Member

I believe that a better approach would be not to double the window, but to increase it by an amount equal to the amount consumed since the last increase, if the time taken to consume the window is less than one RTT (increased by a small fudge factor to allow slack for reporting delays). This is similar to the increase in Reno CC. The effect would be that the window increases less rapidly, and maybe more often, but it would end up with a commitment that is closer to a 2x factor of BDP, rather than 4x or more.
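A rough Rust sketch of that idea, with hypothetical names that do not correspond to fc.rs (the 1.25 fudge factor is an assumption, not a value from the proposal):

use std::time::{Duration, Instant};

// Illustrative sketch of the Reno-style increase described above.
struct WindowTuner {
    window: u64,                 // currently committed receive window
    consumed_since_update: u64,  // bytes consumed since the last increase
    last_update: Option<Instant>,
    upper_bound: u64,
}

impl WindowTuner {
    // Called just before a window update is issued.
    fn on_window_update(&mut self, now: Instant, rtt: Duration) {
        // Allow some slack for reporting delays.
        let threshold = rtt.mul_f64(1.25);
        if let Some(last) = self.last_update {
            if now - last < threshold {
                // Grow by the amount consumed since the last increase,
                // rather than doubling outright.
                self.window = self
                    .window
                    .saturating_add(self.consumed_since_update)
                    .min(self.upper_bound);
            }
        }
        self.consumed_since_update = 0;
        self.last_update = Some(now);
    }
}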

@mxinden mxinden linked a pull request May 2, 2024 that will close this issue
@mxinden
Collaborator

mxinden commented May 14, 2024

I believe that a better approach would be not to double the window, but to increase the window by an amount equal to the amount consumed since the last increase if the time taken to consume the window is less than one RTT

Given that the premise ("if the time taken to consume the window is less than one RTT") requires the window to be fully consumed, the "amount consumed" will equal the window, so the increase will equal the window, and thus the new window will be twice as large as the current one. How is that different from doubling the window as proposed above?

I might be confusing (a) the increase of the available window and (b) the increase of the maximum window size above.

@martinthomson would you mind rephrasing your proposal above using the terminology of fc.rs? For what it is worth, here is the implementation of the Google QUIC algorithm:

// Trigger a window update once less than half the window remains, and
// auto-tune: if the previous window was consumed in under two RTTs,
// double `max_active`, capped at `STREAM_MAX_ACTIVE_LIMIT`.
if self.retired + self.max_active / 2 > self.max_allowed
    && self
        .max_allowed_sent_at
        .is_some_and(|at| now - at < rtt * 2)
    && self.max_active < STREAM_MAX_ACTIVE_LIMIT
{
    let prev_max_active = self.max_active;
    self.max_active = min(self.max_active * 2, STREAM_MAX_ACTIVE_LIMIT);
    qdebug!(
        "Increasing max stream receive window: previous max_active: {} MiB new max_active: {} MiB now: {now:?} rtt: {rtt:?} stream_id: {}",
        prev_max_active / 1024 / 1024,
        self.max_active / 1024 / 1024,
        self.subject,
    );
}

@martinthomson
Copy link
Member

Sure. The goal is to increase the value of self.max_active based on perceived changes in the BDP, as observed by the throughput on the affected stream.

Therefore, I suggest that if the rate at which self.retired increases (that is, the change in that value, divided by the time elapsed) exceeds some function of self.max_active / path.rtt, then we can increase self.max_active by the amount that self.retired has increased.

This will approximately lead to a doubling as self.max_active determines how much data can be outstanding, but that doubling might happen on a shorter timescale than an RTT, meaning that rate increases could exceed one doubling per RTT. The same is true of what you proposed as well, just that this version will cap out at a lower value for self.max_active.

Note that we could increase only based on the amount by which the increase exceeds previous expectations, which would lead to a closer adaptation, but would be more reliant on ACK rate than the simple scheme.
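Read literally, a sketch of that proposal in fc.rs-like terms might look as follows. The field and constant names are illustrative, the cap value is arbitrary, and "some function of self.max_active / path.rtt" is taken as the identity here, all of which are assumptions:

use std::cmp::min;
use std::time::{Duration, Instant};

// Illustrative cap; not the value neqo uses.
const STREAM_MAX_ACTIVE_LIMIT: u64 = 1 << 26; // 64 MiB

struct ReceiverFlowControl {
    retired: u64,     // bytes consumed by the application
    max_active: u64,  // current window size
    retired_at_last_adjustment: u64,
    last_adjustment: Option<Instant>,
}

impl ReceiverFlowControl {
    fn maybe_grow_window(&mut self, now: Instant, rtt: Duration) {
        let Some(last) = self.last_adjustment else {
            self.retired_at_last_adjustment = self.retired;
            self.last_adjustment = Some(now);
            return;
        };
        let elapsed = now - last;
        if elapsed.is_zero() {
            return;
        }
        let delta_retired = self.retired - self.retired_at_last_adjustment;
        // Observed consumption rate on this stream vs. the rate the current
        // window can sustain over one RTT.
        let observed = delta_retired as f64 / elapsed.as_secs_f64();
        let sustainable = self.max_active as f64 / rtt.as_secs_f64();
        if observed > sustainable {
            // Grow by the amount retired since the last adjustment, capped.
            self.max_active = min(self.max_active + delta_retired, STREAM_MAX_ACTIVE_LIMIT);
        }
        self.retired_at_last_adjustment = self.retired;
        self.last_adjustment = Some(now);
    }
}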

mxinden added a commit to mxinden/neqo that referenced this issue Oct 26, 2024
This commit adds a basic smoke test using the `test-fixture` simulator,
asserting the expected bandwidth on a 1 Gbit/s link.

Given mozilla#733, the current expected bandwidth
is limited by the fixed-size stream receive buffer (1 MiB).
moz-v2v-gh pushed a commit to mozilla/gecko-dev that referenced this issue Nov 12, 2024
…eviewers

Add a Glean metric counting the number of QUIC frames, labeled by frame type.

This is e.g. helpful to measure the impact of [stream receive window
auto-tuning](mozilla/neqo#733), looking at the number
of max_stream_data frames sent and stream_data_blocked frames received.

Differential Revision: https://phabricator.services.mozilla.com/D228295
mxinden added a commit to mxinden/neqo that referenced this issue Dec 2, 2024
This commit adds a basic smoke test using the `test-fixture` simulator,
asserting the expected bandwidth on a 1 Gbit/s link.

Given mozilla#733, the current expected bandwidth
is limited by the fixed-size stream receive buffer (1 MiB).