Neqo currently sets a maximum stream send and receive window of 1 MiB:

- `RX_STREAM_DATA_WINDOW` in neqo/neqo-transport/src/recv_stream.rs (line 33 at c004359)
- `SEND_BUFFER_SIZE` in neqo/neqo-transport/src/send_stream.rs (line 36 at c004359)
If my math is correct, on e.g. a 50 ms connection, a 1 MiB window does not allow for more than 160 Mbit/s per stream:
```python
delay_s = 0.05
window_bits = 1 * 1024 * 1024 * 8
bandwidth_bits_s = window_bits / delay_s
bandwidth_mbits_s = bandwidth_bits_s / 1024 / 1024  # 160.0
```
Intuitively, 160 Mbit/s on a 50 ms connection seems small.
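Inverting the calculation shows how much window a faster stream would need. For instance (keeping the same binary-unit convention as above; the 1 Gbit/s target is just an illustrative number):

```python
# Window needed to sustain a target per-stream rate over a given delay
delay_s = 0.05
target_bits_s = 1024 * 1024 * 1024  # 1 Gbit/s
window_bytes = target_bits_s * delay_s / 8
window_mib = window_bytes / 1024 / 1024  # 6.4
```

So sustaining 1 Gbit/s on the same connection would need a window of roughly 6.4 MiB per stream.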
Am I missing something? Is this relevant? Are applications expected to leverage multiple streams to exhaust the bandwidth of a connection? What do other projects use?
We should do auto-tuning.
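For illustration, a minimal sketch of what receive-window auto-tuning could look like (an assumption for discussion, not Neqo's actual algorithm; the class name and thresholds here are hypothetical):

```python
# Hypothetical sketch: grow the advertised receive window whenever the
# peer exhausts it within a couple of RTTs, i.e. whenever the window
# rather than the path is the bottleneck.
class ReceiveWindow:
    def __init__(self, initial=1 << 20, maximum=1 << 25):
        self.window = initial   # start at 1 MiB
        self.maximum = maximum  # cap growth at 32 MiB

    def on_window_consumed(self, elapsed_s, rtt_s):
        # Called when the sender has used up the current window.
        if elapsed_s < 2 * rtt_s:
            self.window = min(self.window * 2, self.maximum)
        return self.window
```

This mirrors receive-buffer auto-tuning in TCP stacks: the window keeps growing until it stops being the limiting factor.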
Closing here in favor of #733.