test(transport): assert maximum bandwidth on gbit link #2203
base: main
```rust
@@ -196,3 +196,47 @@ fn transfer_fixed_seed() {
    sim.seed_str("117f65d90ee5c1a7fb685f3af502c7730ba5d31866b758d98f5e3c2117cf9b86");
    sim.run();
}

#[test]
#[allow(clippy::cast_precision_loss)]
fn gbit_bandwidth() {
    const MIB: usize = 1024 * 1024;
    const TRANSFER_AMOUNT: usize = 100 * MIB;

    for upload in [false, true] {
        let sim = Simulator::new(
            "gbit-bandwidth",
            boxed![
                ConnectionNode::default_client(if upload {
                    boxed![SendData::new(TRANSFER_AMOUNT)]
                } else {
                    boxed![ReceiveData::new(TRANSFER_AMOUNT)]
                }),
                TailDrop::gbit_link(),
                ConnectionNode::default_server(if upload {
                    boxed![ReceiveData::new(TRANSFER_AMOUNT)]
                } else {
                    boxed![SendData::new(TRANSFER_AMOUNT)]
                }),
                TailDrop::gbit_link()
            ],
        );

        let simulated_time = sim.setup().run();
        let bandwidth = TRANSFER_AMOUNT as f64 * 8.0 / simulated_time.as_secs_f64();

        // Given Neqo's current static stream receive buffer of 1 MiB, maximum
        // bandwidth is below gbit link bandwidth.
        //
        // Tracked in https://github.com/mozilla/neqo/issues/733.
        let maximum_bandwidth = MIB as f64 * 8.0 / 0.1; // bandwidth-delay-product / delay = bandwidth
        let expected_utilization = 0.5;

        assert!(
            maximum_bandwidth * expected_utilization < bandwidth,
            "with upload {upload} expected to reach {expected_utilization} of maximum bandwidth ({} Mbit/s) but got {} Mbit/s",
            maximum_bandwidth / MIB as f64,
            bandwidth / MIB as f64,
        );
    }
}
```

Review comment (on `let expected_utilization = 0.5;`):

> Running this test in a loop, Neqo is not able to utilize more than ~50% of the maximum bandwidth of 80 Mbit/s. Intuitively, even with packet loss, I would expect a congestion controller to be able to saturate more than 50% of a link.
>
> I will look for mistakes in my math with a fresh mind tomorrow.

Reply:

> Is this using Reno or Cubic? Pacing or not?
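The cap asserted above can be checked independently of the simulator. The sketch below is illustrative, not Neqo code; it only assumes the test's parameters: a fixed 1 MiB receive window and a 100 ms round trip (two 50 ms forward delays). With a static window, throughput is bounded by window / RTT regardless of the 1 Gbps link rate:

```rust
/// Throughput cap for a fixed receive window: window / RTT,
/// expressed in Mbit/s using the test's MiB-based convention.
/// Illustrative helper, not part of Neqo's API.
fn max_bandwidth_mbit_per_s(window_bytes: usize, rtt_s: f64) -> f64 {
    const MIB: usize = 1024 * 1024;
    window_bytes as f64 * 8.0 / rtt_s / MIB as f64
}

fn main() {
    const MIB: usize = 1024 * 1024;
    // A 1 MiB window over a 100 ms RTT caps throughput at 80 Mbit/s,
    // matching the `maximum_bandwidth` computed in the test.
    println!("{} Mbit/s", max_bandwidth_mbit_per_s(MIB, 0.1));
}
```

This is why the assertion compares against `maximum_bandwidth * expected_utilization` rather than the raw link rate: the receive window, not the link, is the binding constraint here.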
```rust
@@ -90,6 +90,16 @@ impl TailDrop {
        Self::new(200_000, 8_192, Duration::from_millis(50))
    }

    /// A tail drop queue on a 1 Gbps link with the default forward delay of
    /// 50ms and a buffer equal to the bandwidth-delay-product.
    #[must_use]
    pub const fn gbit_link() -> Self {
        let rate = 1_000_000_000 / 8;
        let delay = Duration::from_millis(50);
        let capacity = rate / 20; // rate * 0.05

        Self::new(rate, capacity, delay)
    }

    /// How "big" is this datagram, accounting for overheads.
    /// This approximates by using the same overhead for storing in the queue
    /// and for sending on the wire.
```

Review comment (on `let capacity = rate / 20;`):

> Simply the bandwidth-delay-product. Happy for more realistic suggestions.
Review comment:

> Takes ~2s to run on my machine, 1s for up-, 1s for download. Using < 100 MiB doesn't give me consistent results.
>
> Worth spending 2s of our unit-test runtime on this? (Not to be confused with simulated time.)