Send only significant MAX_STREAM_DATA updates #880
Conversation
---
This looks like a nice improvement, thanks for working on it!
@@ -1034,6 +1048,27 @@ impl Recv {
    }
}

/// Returns the window that should be advertised for this Stream in a
---
I think I'd prefer to leave the `announcable` prefixes and infixes out here; that feels a little wordy and doesn't seem to add much value.
Also, in doc comments we should probably keep the first line short enough to fit on a single line, and our convention is to not end it with a period (and to have an empty line between the first line and the rest).
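For illustration, a doc comment following those conventions might look like this (the wording and the dummy item are just an example):

```rust
/// Returns the window that should be advertised in a `MAX_STREAM_DATA` frame
///
/// The first line stays short and has no trailing period; an empty doc
/// line separates it from the body, which can elaborate at whatever
/// length is needed.
fn window_update_docs_example() {}
```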
---
I updated to follow the comment conventions, and removed some of the `announcable` in the body.

I wouldn't like to remove it from the function name, however, since the distinction seems important here.
---
Distinction between what and what? I think the `update` serves to make it clear enough.
---
Distinction between:
- The maximum window we could announce at all.
- A window update we could send, but won't, because it isn't deemed significant enough.

I guess I could also change it by switching the return type of the function to `fn window_update(&self, stream_receive_window: u64) -> (u64, bool)`, where the bool indicates whether transmission is wanted. Or call it `max_stream_data`. I'll leave it to you which option is preferred, and make that change.
---
To me it makes sense that the stream gets to decide whether a window update needs to be announced. If the stream decides it needs to be announced, it returns it to the connection, which just acts on the stream's behalf. As such, between the connection and the stream there's no need to introduce the concept of announceability. (And therefore I prefer the original `Option`-based return type.)
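For concreteness, a minimal sketch of that `Option`-based shape. The struct, its field names (`bytes_read`, `max_stream_data_sent`), and the 1/8 threshold are illustrative assumptions, not Quinn's actual internals:

```rust
/// Illustrative stand-in for Quinn's `Recv` stream state; the field
/// names here are assumptions, not the real struct
struct Recv {
    bytes_read: u64,
    max_stream_data_sent: u64,
}

impl Recv {
    /// Returns the window to advertise, if the increase is significant
    fn window_update(&self, stream_receive_window: u64) -> Option<u64> {
        // The maximum window we could announce at all
        let max = self.bytes_read + stream_receive_window;
        // ...but only worth announcing if it moved at least 1/8 of the
        // window beyond what we last sent
        if max.saturating_sub(self.max_stream_data_sent) < stream_receive_window / 8 {
            return None;
        }
        Some(max)
    }
}
```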
---
Thanks, this is definitely needed!
///
/// If `peek` is set to `false`, no transmission of the outgoing update would
/// be recorded.
fn window_update(&mut self, stream_receive_window: u64, peek: bool) -> Option<u64> {
---
I think call sites would be a bit more readable if `peek` was removed in favor of a separate call (or direct assignment) that updates the counter.
---
I had the separate assignment initially, but I removed it after djc's comments to make the `Streams` struct more self-contained. A separate method would be possible.
---
I definitely think a separate method is clearer. @djc, thoughts?
---
This was now kind of necessary to properly handle the retransmits anyway.
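As a rough sketch of what that gives the call site, computing the update and recording its transmission become two explicit steps instead of a `peek: bool` flag. `Recv`, `window_update`, and `record_sent_max_stream_data` here follow the illustrative sketches elsewhere in this thread, not Quinn's real API:

```rust
// Sketch of a call site with `peek` removed: compute, record, then
// build the frame. All names are assumptions from the earlier sketches.
fn queue_max_stream_data(rs: &mut Recv, stream_receive_window: u64) -> Option<u64> {
    let max = rs.window_update(stream_receive_window)?;
    // Recording is now a separate, explicit call
    rs.record_sent_max_stream_data(max);
    // The caller would construct the MAX_STREAM_DATA frame from `max`
    Some(max)
}
```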
let max = match rs.window_update(self.stream_receive_window, false) {
    Some(max) => max,
    None => continue,
};
---
If this packet is lost, this change will lead us to fail to retransmit the `MAX_STREAM_DATA` frame, possibly deadlocking the sender. We should instead perform this check before adding the stream to `pending.max_stream_data` and leave frame construction unchanged, ensuring that a loss will cause the latest data to be retransmitted.
---
Oh, that's not good. I thought Quinn retransmits all lost frames completely, and therefore thought this was fine.

So the idea is: any time `pending` is set, it grabs the highest possible window from the stream, and the `read` methods make sure `pending` is only set when the increment is high enough? I think this has a catch, but I will look into it.
---
> I thought Quinn retransmits all lost frames completely

We keep track of what needs to be retransmitted by recording the retransmittable frames carried by each in-flight packet, and re-queueing those frames when the packet is deemed lost. See `sent.max_stream_data.insert(id)` below. The key is that that data is abstract; we don't store the literal frames, because they might become out of date. Instead, we just rerun this exact code again when it's time to retransmit.
> So the idea is:

Any time a stream appears in `pending.max_stream_data`, yeah, that's the idea exactly.
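A compressed sketch of that bookkeeping, with simplified stand-in types (the real structures live in quinn-proto; everything here is illustrative):

```rust
use std::collections::HashSet;

type StreamId = u64; // stand-in for Quinn's stream ID type

/// Simplified stand-in for the set of frames a packet must carry
#[derive(Default)]
struct Retransmits {
    max_stream_data: HashSet<StreamId>,
}

/// When building a packet: drain the pending set, regenerate each frame
/// from the stream's *current* window, and remember what the packet
/// carried so it can be re-queued on loss
fn populate_packet(pending: &mut Retransmits, sent: &mut Retransmits) {
    for id in pending.max_stream_data.drain() {
        // ... construct a fresh MAX_STREAM_DATA frame for `id` here;
        // the literal frame is never stored, only the abstract intent ...
        sent.max_stream_data.insert(id);
    }
}

/// On loss: re-queue the abstract frames; the next packet re-runs the
/// exact same construction code with up-to-date state
fn on_packet_lost(pending: &mut Retransmits, sent: Retransmits) {
    pending.max_stream_data.extend(sent.max_stream_data);
}
```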
---
I think the new logic should cover it. The transmission will now suppress further updates, but retransmits can still grab the value.
/// This will suppress enqueuing further `MAX_STREAM_DATA` frames unless
/// either the previous transmission was not acknowledged or the window
/// further increased.
pub fn record_transmitted_max_stream_data(&mut self, transmitted_value: u64) {
---
I find the `record_` prefix here pretty superfluous. Otherwise this seems fine.
---
Well, it should have some prefix, since it's not a getter. And I'd rather not use `set_` for this, since that sounds like you can always call it with an arbitrary value.
---
Maybe replace `transmitted` with `sent`? I just find `record_transmitted_max_stream_data` very wordy.
---
I updated it to `sent`.

However, I personally don't think anything wordy or verbose in code is bad, as long as it helps readers understand what is meant.
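The resulting method, roughly. This is a sketch that reuses the hypothetical `max_stream_data_sent` counter from the earlier `Recv` sketch, not Quinn's actual implementation:

```rust
impl Recv {
    /// Records that a `MAX_STREAM_DATA` frame announcing
    /// `transmitted_value` was sent
    ///
    /// Raising the counter suppresses further enqueued updates until the
    /// window grows significantly past this value again; retransmits can
    /// still read the current window directly
    pub fn record_sent_max_stream_data(&mut self, transmitted_value: u64) {
        // Only move forward; recording a smaller (retransmitted) value
        // must not re-enable updates
        if transmitted_value > self.max_stream_data_sent {
            self.max_stream_data_sent = transmitted_value;
        }
    }
}
```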
---
LGTM, thanks! I like how retransmits further defer future updates; wouldn't have occurred to me to structure it that way.
Thanks!
This change mirrors quinn-rs#880, but targets the connection window, which is updated via `MAX_DATA` frames instead of the stream windows. As in quinn-rs#880, frames are only enqueued if the update is significant enough.

Quinn previously had an optimization that suppressed MAX_DATA frames when the window was set to the maximum at the start of the connection [1]. However, the frame was still marked as `pending` [2], which triggered the connection logic to create a packet for it [3]. Since the payload was suppressed, this led the connection to either send additional ACK-only packets or create empty packets.

This change fixes that behavior, and thereby reduces the number of packets and improves performance even when the maximum connection flow control window is specified. With this change, the throughput for large streams on Windows increases from 30 MB/s to 56 MB/s.

[1] https://github.com/quinn-rs/quinn/blob/bd14aa1dc2162e68c3a90027e3ca4e6404aea94d/quinn-proto/src/connection/streams.rs#L485
[2] https://github.com/quinn-rs/quinn/blob/bd14aa1dc2162e68c3a90027e3ca4e6404aea94d/quinn-proto/src/connection/mod.rs#L2789
[3] https://github.com/quinn-rs/quinn/blob/bd14aa1dc2162e68c3a90027e3ca4e6404aea94d/quinn-proto/src/connection/mod.rs#L2789
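A sketch of the fix that cross-referenced commit message describes: perform the significance check *before* marking the frame pending, so no packet is built just to carry a suppressed `MAX_DATA` frame. All names and the 1/8 threshold here are illustrative, not Quinn's real fields:

```rust
/// Illustrative connection-level state for the MAX_DATA sketch
struct ConnState {
    bytes_consumed: u64,    // data the application has consumed
    receive_window: u64,    // configured connection receive window
    max_data_sent: u64,     // highest MAX_DATA value already sent
    pending_max_data: bool, // whether a MAX_DATA frame is queued
}

impl ConnState {
    fn maybe_queue_max_data(&mut self) {
        let new_max = self.bytes_consumed + self.receive_window;
        // Check significance before setting `pending`, instead of
        // queueing unconditionally and suppressing the payload later
        if new_max.saturating_sub(self.max_data_sent) >= self.receive_window / 8 {
            self.pending_max_data = true;
        }
    }
}
```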
The current version of Quinn tries to enqueue a MAX_STREAM_DATA update after every read from the stream. Those updates can be tiny. Besides wasting network capacity by sending tiny packets, that behavior also causes a wakeup on the connection task.

With this change, MAX_STREAM_DATA frames are only enqueued if they are deemed significant enough. In the version here, that means the increment must amount to at least 12.5% of the overall window, but this could be changed.

Note that the general connection task wakeup after reads is still there. A similar change would need to be performed for updating the connection window, in order to determine whether the connection task wakeup is still necessary.

I gathered some example metrics for the number and increment of window updates in the benchmark:

Before:
Num max stream announcements: 1663, Total window announced: 58500130, Avg window diff: 35177

After:
Num max stream announcements: 301, Total window announced: 58195055, Avg window diff: 193339

==> This sends 5x fewer window updates
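To make the 12.5% threshold concrete, a small worked example (the numbers are illustrative, not taken from the benchmark above):

```rust
/// An update is significant if it advances the advertised maximum by at
/// least 1/8 (12.5%) of the configured window
fn is_significant(new_max: u64, last_sent_max: u64, window: u64) -> bool {
    new_max.saturating_sub(last_sent_max) >= window / 8
}

fn main() {
    let window = 1024 * 1024; // 1 MiB stream receive window
    let last = 1_048_576; // last announced maximum
    // threshold = window / 8 = 131072 bytes (128 KiB)
    assert!(!is_significant(last + 8 * 1024, last, window)); // +8 KiB: suppressed
    assert!(is_significant(last + 256 * 1024, last, window)); // +256 KiB: announced
    println!("threshold: {} bytes", window / 8);
}
```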