Cloned mpsc::Sender never blocks #403
cc @carllerche Thanks for the report! Currently, however, I believe this is intended behavior. The method of backpressure is that each sender gets a "free send", and typically the set of senders reaches a fixed-ish point, but clearly here that's not happening!
Oh, that's quite unexpected. Do the docs mention this anywhere? What's the reason for this "free send"?
For what it's worth, the reason I run into this is because I have an
So, the intended behavior of this code:

```rust
extern crate futures;

use futures::{Future, Sink};

fn main() {
    let (tx, rx) = futures::sync::mpsc::channel(10);
    loop {
        if let Err(e) = tx.clone().send(0).wait() {
            println!(":( {:?}", e);
            break;
        }
    }
    tx.send(0).wait().unwrap();
    println!("done sending");
    drop(rx);
}
```

is for it to run out of memory? Despite the channel being bounded and of size 10?
Yes, this behavior is documented as part of the
There are some implementation notes here: https://github.com/alexcrichton/futures-rs/blob/master/src/sync/mpsc/mod.rs#L30-L68 The short version of "why" is because it has to :)
Just saw this part of the docs for
I think this is counter-intuitive behavior... What is the motivation for having this in the first place?
Hehe, @carllerche beat me to it. Why does it have to?

EDIT: Even reading through the implementation notes you link to, the claim that you need to be able to know if the send will succeed before sending seems strange to me. Clearly it is possible to have a
This is an interesting case and relevant to a communications library ("herding server clients") I'm working on - cheers @jonhoo. It's not clear if you spotted this already, but something I had missed for a while is that a non-consuming send can be done using
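Whether `Sink::start_send` is the method meant here is a guess, but it is a non-consuming option in futures 0.1: it takes `&mut self` rather than consuming the sender, and it never blocks. A minimal sketch:

```rust
extern crate futures;

use futures::{AsyncSink, Sink};

fn main() {
    let (mut tx, _rx) = futures::sync::mpsc::channel::<u32>(10);
    // start_send takes &mut self, so the Sender handle survives the call.
    // It never blocks; a full channel hands the value back as NotReady.
    // (When full it also registers the current task for wakeup, so in a
    // real program this would run inside a task rather than in main.)
    match tx.start_send(42) {
        Ok(AsyncSink::Ready) => println!("queued"),
        Ok(AsyncSink::NotReady(v)) => println!("channel full, got back {}", v),
        Err(e) => println!("receiver gone: {:?}", e),
    }
}
```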
@46bit I actually ended up working around this by creating a wrapper struct which internally uses
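The internals of that wrapper aren't shown in the thread; as a hypothetical sketch of one shape such a workaround could take (names and structure are assumptions, not the actual code), a single shared `Sender` behind a mutex means cloning the wrapper never mints extra guaranteed slots:

```rust
extern crate futures;

use std::sync::{Arc, Mutex};
use futures::{Future, Sink};
use futures::sync::mpsc::{SendError, Sender};

// Hypothetical workaround: all handles share one underlying Sender, so
// cloning SharedSender never creates new per-sender guaranteed slots.
#[derive(Clone)]
struct SharedSender<T>(Arc<Mutex<Option<Sender<T>>>>);

impl<T> SharedSender<T> {
    fn new(tx: Sender<T>) -> Self {
        SharedSender(Arc::new(Mutex::new(Some(tx))))
    }

    fn send(&self, item: T) -> Result<(), SendError<T>> {
        let mut slot = self.0.lock().unwrap();
        // Sink::send consumes the Sender, so take it out of the slot and
        // put the returned handle back on success.
        let tx = slot.take().expect("sender lost after an earlier error");
        match tx.send(item).wait() {
            Ok(tx) => {
                *slot = Some(tx);
                Ok(())
            }
            Err(e) => Err(e),
        }
    }
}

fn main() {
    let (tx, rx) = futures::sync::mpsc::channel(2);
    let shared = SharedSender::new(tx);
    let handle = shared.clone(); // shares the same underlying Sender
    handle.send(1).unwrap();
    drop(rx);
}
```

Blocking in `wait()` while holding the lock serializes every sender through the one handle; that serialization is exactly what restores the bound, at the cost of contention.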
@jonhoo the current strategy is used to avoid the thundering herd problem. If only one slot is available and there are 1mm blocked senders, they will all be woken up even though only one sender can succeed. This behavior will end up continuing for every available slot. Guaranteeing a slot for the sender avoids this problem. I'm going to close the issue since this isn't a bug, but will keep an eye on further comments.
@carllerche I don't see how the guaranteed slot fixes the problem? Say the reader isn't reading from the channel, and it fills up completely, including all the "free" slots for the 1mm senders. Then the reader decides to read a single value. Why will the single slot that opens up not cause a thundering herd?
In practice, it is a bit more complex. You can read the code here: https://github.com/alexcrichton/futures-rs/blob/master/src/sync/mpsc/mod.rs#L748-L778 It does work, though: popping one message will notify at most one sender task.
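As an analogy with ordinary threads rather than the actual futures-rs code: in a classic bounded queue, waking with `notify_all` sends every blocked sender racing for a single freed slot, while `notify_one` wakes one sender per popped message.

```rust
use std::collections::VecDeque;
use std::sync::{Condvar, Mutex};

// Thread-based analogy only; futures-rs does this with task notification.
struct Bounded<T> {
    queue: Mutex<VecDeque<T>>,
    cap: usize,
    slot_free: Condvar,
}

impl<T> Bounded<T> {
    fn send(&self, item: T) {
        let mut q = self.queue.lock().unwrap();
        while q.len() == self.cap {
            // Blocked senders sleep here until a slot frees up.
            q = self.slot_free.wait(q).unwrap();
        }
        q.push_back(item);
    }

    fn recv(&self) -> Option<T> {
        let mut q = self.queue.lock().unwrap();
        let item = q.pop_front();
        // Wake at most one blocked sender per popped message. Using
        // notify_all here would recreate the thundering herd: every
        // sender wakes, one wins the slot, and the rest re-block.
        self.slot_free.notify_one();
        item
    }
}

fn main() {
    let ch = Bounded {
        queue: Mutex::new(VecDeque::new()),
        cap: 2,
        slot_free: Condvar::new(),
    };
    ch.send(1);
    ch.send(2);
    assert_eq!(ch.recv(), Some(1));
}
```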
I've written similar lockless queue code in the past, and I guess I'm having a hard time recognizing why

Taking a step back, this strikes me as fairly unintuitive behavior for a "FIFO queue with back pressure", as under some conditions it simply does not provide back pressure at all. The docs also only reference this behavior in a single place, and arguably in the place you are least likely to look; I suspect few people look up the docs for

I think personally I would prefer
Something to keep in mind is that, in the async world of futures, a task cannot "block" in the critical section. This means that when a sender is notified, there is no actual guarantee that the task that was notified will ever touch the sender.

I'm sure a PR adding docs in the

I empathize with the confusion, but at this point I don't think a breaking change is worth it.
Okay, I'll see if I get some time this week to write up a PR! I might also end up taking a stab at writing a "real" bounded channel using a lockless circular buffer, but that's for another repo and for another day. |
Sounds good, I would be happy to be proven wrong :) I would suggest reading the comment thread in #305 and the other issues related to
Thanks — I'll check it out. I probably won't submit a PR changing the underlying implementation, but a documentation PR should be doable. If I do write up a new bounded

Another note about the bounds, I don't think the claim about
If you found a bug, a failing test PR would be super helpful. |
I believe I have found a pretty serious bug with `sync::mpsc::Sender`. Consider the program below, which has one thread sending on a channel (and waiting for each send to complete before sending the next), and one thread reading from that same channel. The sender is faster than the reader (due to the `sleep` in the receiver thread).
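A minimal sketch of such a program, reconstructed from this description (the message count, sleep duration, and exact print formats are assumptions; futures 0.1 APIs):

```rust
extern crate futures;

use std::thread;
use std::time::Duration;
use futures::{Future, Sink, Stream};

fn main() {
    // In futures 0.1 the effective capacity is buffer + number of
    // senders, so a buffer of 10 with one Sender gives 11 slots.
    let (mut tx, rx) = futures::sync::mpsc::channel(10);

    let sender = thread::spawn(move || {
        for i in 0..1001 {
            // Blocks (via wait) whenever the channel is full.
            tx = tx.send(i).wait().unwrap();
        }
        println!("done sending");
    });

    let receiver = thread::spawn(move || {
        // The sleep makes the receiver slower than the sender.
        for i in rx.wait() {
            thread::sleep(Duration::from_millis(10));
            println!("recv {}", i.unwrap());
        }
        println!("recv done");
    });

    sender.join().unwrap();
    receiver.join().unwrap();
}
```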
This program runs correctly; it produces 1001 lines of `recv i`, then a `recv done`, with `done sending` appearing somewhere near `recv 11` (since the channel has a buffer size of 11).

Now try changing the send line so that each iteration sends on a fresh clone of the transmit handle, roughly as sketched below.
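The exact snippets aren't shown here; judging by the description that follows, the change would be roughly:

```rust
// before: send on tx itself, keeping the handle that send returns
tx = tx.send(i).wait().unwrap();

// after: send on a fresh clone of the transmit handle each iteration
tx.clone().send(i).wait().unwrap();
```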
Semantically, these should be the same. The only change should be that the latter clones the transmit handle to the sender before sending on the corresponding channel (blocking if necessary). However, what happens instead is that the `.send().wait()` in the second version never blocks. That is, the channel suddenly behaves as if it is unbounded!
Interestingly, if the line is instead replaced with a third variant, the code reverts to the expected blocking behavior.