test(transport): assert maximum bandwidth on gbit link #2203
base: main
Conversation
This commit adds a basic smoke test using the `test-fixture` simulator, asserting the expected bandwidth on a 1 gbit link. Given mozilla#733, the current expected bandwidth is limited by the fixed-size stream receive buffer (1 MiB), not by the bandwidth of the link.

While a bit unconventional, I think it is worth having this smoke test to make sure we don't regress. What do folks think?
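For a quick sanity check of that limit, here is a minimal sketch (not part of the PR) of the arithmetic, assuming the 100 ms round-trip time implied by the 50 ms one-way delay used in `gbit_link()` below:

```rust
// Back-of-the-envelope check: with a fixed 1 MiB stream receive buffer and a
// 2 x 50 ms = 100 ms round-trip time, the flow-control window caps throughput
// well below the 1 Gbit/s link rate.
const MIB: usize = 1024 * 1024;

fn main() {
    let rwnd_bits = (MIB * 8) as f64; // 1 MiB receive buffer, in bits
    let rtt_s = 0.1; // 100 ms round-trip time on the simulated link
    let cap_mbit_s = rwnd_bits / rtt_s / 1e6;
    println!("receive-buffer bandwidth cap: ~{cap_mbit_s:.1} Mbit/s"); // ~83.9 Mbit/s
}
```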
```rust
#[allow(clippy::cast_precision_loss)]
fn gbit_bandwidth() {
    const MIB: usize = 1024 * 1024;
    const TRANSFER_AMOUNT: usize = 100 * MIB;
    // …
```
Takes ~2 s to run on my machine, 1 s for upload and 1 s for download. Using less than 100 MiB doesn't give me consistent results.
Worth spending 2s of our unit-test runtime on this?
(Not to be confused with simulated time.)
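If the wall-clock cost turns out to be a concern, one option (a sketch, not what this PR does) would be to keep the test out of the default run and opt in explicitly:

```rust
// Hypothetical gating, not part of this PR: skip the ~2 s test by default and
// run it explicitly with `cargo test -- --ignored` (or `--include-ignored`).
#[test]
#[ignore = "takes ~2 s of wall-clock time; run with `cargo test -- --ignored`"]
fn gbit_bandwidth() {
    // ... simulator setup and bandwidth assertion as in this PR ...
}
```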
```rust
    //
    // Tracked in https://github.com/mozilla/neqo/issues/733.
    let maximum_bandwidth = MIB as f64 * 8.0 / 0.1; // bandwidth-delay product / delay = bandwidth
    let expected_utilization = 0.5;
```
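To make the shape of the assertion concrete, here is a sketch with hypothetical variable and function names (not the exact diff) of how the achieved bandwidth could be checked against `maximum_bandwidth` and `expected_utilization`:

```rust
use std::time::Duration;

// Hypothetical helper, not the PR's code: given the amount transferred and the
// simulated transfer time, compute the achieved bandwidth and compare it
// against the expected fraction of the receive-buffer-limited maximum.
#[allow(clippy::cast_precision_loss)]
fn assert_bandwidth(transferred_bytes: usize, simulated_time: Duration) {
    const MIB: usize = 1024 * 1024;
    let maximum_bandwidth = MIB as f64 * 8.0 / 0.1; // window / RTT, in bit/s
    let expected_utilization = 0.5;

    let achieved = transferred_bytes as f64 * 8.0 / simulated_time.as_secs_f64();
    assert!(
        achieved >= maximum_bandwidth * expected_utilization,
        "achieved {achieved} bit/s, expected at least {} bit/s",
        maximum_bandwidth * expected_utilization
    );
}
```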
Running this test in a loop, Neqo is not able to utilize more than ~50% of the maximum bandwidth of 80 Mbit/s. Intuitively, even with packet loss, I would expect a congestion controller to be able to saturate more than 50% of a link.
```
45.95196838717057
49.31818797697269
45.826026606578566
49.53933723888579
45.997472274917506
49.31788343565587
45.58981370339061
49.31818797697269
40.48485804428038
49.451830153095756
40.5585726298109
49.31793477754305
40.55224164532084
54.71429614593755
45.57110592947965
49.45192167242847
40.53596208031323
54.79092558021697
45.95144831798395
```
I will look for mistakes in my math with a fresh mind tomorrow.
Is this using Reno or Cubic? Pacing or not?
```rust
pub const fn gbit_link() -> Self {
    let rate = 1_000_000_000 / 8;
    let delay = Duration::from_millis(50);
    let capacity = rate / 20; // rate * 0.05
    // …
```
This is simply the bandwidth-delay product. Happy to take more realistic suggestions.
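As a quick standalone check of the constants above (not part of the diff): at 1 Gbit/s the byte rate is 125,000,000 B/s, and with a 50 ms one-way delay the bandwidth-delay product is 125,000,000 × 0.05 = 6,250,000 bytes, which is exactly `rate / 20` (~6 MiB).

```rust
// Standalone check of the gbit_link() constants.
fn main() {
    let rate: u64 = 1_000_000_000 / 8; // 125_000_000 bytes/s
    let delay_s = 0.05; // 50 ms one-way delay
    let bdp = (rate as f64 * delay_s) as u64; // 6_250_000 bytes
    assert_eq!(bdp, rate / 20); // capacity == bandwidth-delay product
    println!("queue capacity: {bdp} bytes (~{:.2} MiB)", bdp as f64 / (1024.0 * 1024.0));
}
```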
QUIC Interop Runner, client vs. server: failed, succeeded, and unsupported interop tests reported for neqo-latest as client and as server (full results omitted).
A `Node` (e.g. a `Client`, `Server`, or `TailDrop` router) can be in 3 states:

```rust
enum NodeState {
    /// The node just produced a datagram. It should be activated again as soon as possible.
    Active,
    /// The node is waiting.
    Waiting(Instant),
    /// The node became idle.
    Idle,
}
```

`NodeHolder::ready()` determines whether a `Node` is ready to be processed again. When `NodeState::Waiting`, it should only be ready once `t <= now`, i.e. the waiting time has passed, not `t >= now`.

```rust
impl NodeHolder {
    fn ready(&self, now: Instant) -> bool {
        match self.state {
            Active => true,
            Waiting(t) => t <= now, // not >=
            Idle => false,
        }
    }
}
```

The previous behavior led to wasteful processing of non-ready `Node`s and thus a large test runtime when e.g. simulating a gbit connection (mozilla#2203).
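A self-contained illustration of the corrected comparison (a simplified stand-in for the simulator types, just to show the semantics): a waiting node only becomes ready once its wake-up time has passed.

```rust
use std::time::{Duration, Instant};

// Simplified stand-ins for the simulator types, not the real test-fixture code.
#[allow(dead_code)]
enum NodeState {
    Active,
    Waiting(Instant),
    Idle,
}

struct NodeHolder {
    state: NodeState,
}

impl NodeHolder {
    fn ready(&self, now: Instant) -> bool {
        match self.state {
            NodeState::Active => true,
            // Ready once the wake-up time has passed; `t >= now` would instead
            // report nodes still waiting in the future as ready.
            NodeState::Waiting(t) => t <= now,
            NodeState::Idle => false,
        }
    }
}

fn main() {
    let now = Instant::now();
    let waiting = NodeHolder { state: NodeState::Waiting(now + Duration::from_millis(10)) };
    assert!(!waiting.ready(now)); // still waiting
    assert!(waiting.ready(now + Duration::from_millis(10))); // wake-up time reached
}
```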