Undoing a Sink::poll_ready #2109
Comments
I'll add to this that there is only a small number of …
The observation here is essentially that you cannot currently yield after calling …
I was just looking into the Sink API myself, and I found it rather hard to find prior discussions of why everything is the way it is, and why people don't like it (e.g. it's not being nominated for std even though …). As for …, I haven't been able to find any more justification that this is actually the case. Maybe it is. In any case, the requirement has a considerable API cost which would benefit from proper justification and probably good documentation for implementers. It also contrasts a bit with the discussion in the PR you linked above, where it seems @carllerche argues that keeping one slot per sender is fine and you should just call …. I can see a number of issues with the current …
In practice, this puts constraints on both the implementer and the client, and those aren't really documented very well, nor compiler-enforced. And as @jonhoo points out above, even the combinators in the futures lib don't really follow the "spirit" of …. Looking at …: that being said, it still is susceptible to deadlocking, I think (haven't tested), when all slots are reserved by clients that don't send. Allowing to undo a …. To move forward, I see the following as essential: …
One thing we've played around with in tower (which has a very similar API) is to have …. In my mind, there are three options:

1. require the caller to have the item in hand before checking whether it can be sent;
2. add a way to explicitly give up a claimed readiness slot;
3. tie the readiness slot to a token/lifetime (likely requiring GAT).
I think 3 would be great, but I think it's not feasible without GAT. I think 1 is unfortunate because it means you must have the item you're going to send before seeing whether you can send it (this hinges on @carllerche's argument that this is important, which I'm inclined to believe). That leaves us with 2. Here's why I think option 2 isn't that bad: it will essentially only be used by people implementing low-level futures interfaces. "High-level" users will likely just be using …
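Option 2 (explicitly giving a claimed slot back, as in the `disarm_ready` proposal discussed in the issue below) can be modeled with a toy, synchronous sketch. `DisarmSink` and all the signatures here are illustrative assumptions, not the real poll-based `Sink` API:

```rust
// Toy, synchronous model of the "disarm" option: a `poll_ready` that
// reserved a slot can be undone with an explicit method call.
// `DisarmSink` and all signatures here are illustrative only.

struct DisarmSink {
    queued: usize,
    capacity: usize,
    reserved: bool, // a slot claimed by `poll_ready` but not yet used
}

impl DisarmSink {
    fn new(capacity: usize) -> Self {
        DisarmSink { queued: 0, capacity, reserved: false }
    }

    /// Reserve a send slot if one is available.
    fn poll_ready(&mut self) -> bool {
        if self.reserved {
            return true;
        }
        if self.queued < self.capacity {
            self.reserved = true;
            true
        } else {
            false
        }
    }

    /// Use the reserved slot.
    fn start_send(&mut self, _item: u32) {
        assert!(self.reserved, "start_send without poll_ready");
        self.reserved = false;
        self.queued += 1;
    }

    /// Give the slot back: the "undo" for `poll_ready`. Low-level
    /// callers that decide not to send call this instead of sending.
    fn disarm_ready(&mut self) {
        self.reserved = false;
    }
}

fn main() {
    let mut sink = DisarmSink::new(1);
    assert!(sink.poll_ready());
    sink.disarm_ready(); // changed our mind; release the slot
    assert!(!sink.reserved);

    assert!(sink.poll_ready());
    sink.start_send(7);
    assert_eq!(sink.queued, 1);
    assert!(!sink.poll_ready()); // now full
}
```

The cost this comment alludes to is visible here: nothing forces a caller who bails out of a send to remember to call `disarm_ready`, which is why it mostly burdens low-level implementers.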
Before we had `Sink::poll_ready`, senders always had to have an item ready to send before interacting with the `Sink`. This sometimes led to unnecessary buffering. The most trivial example of this is the current implementation of `StreamExt::forward`, which still adheres to the "have an item before you access the sink" model. It calls `Stream::poll_next`, and then calls `Sink::poll_ready` when it gets an element. Assuming `poll_ready` returns `Poll::Ready`, the element is sent, and all is good. The issue arises if `Sink::poll_ready` returns `Poll::Pending`. The implementation must now buffer that element somewhere, and re-try `Sink::poll_ready` on the next call to `poll` before it attempts to receive another element from the `Stream` (this is the `buffered_item` field on `Forward`). The upside of this approach is that when `poll_ready` returns `Poll::Ready`, we almost immediately call `.start_send` and "consume" the slot that `poll_ready` told us was available.

The alternative approach that `Sink::poll_ready` enabled is one that does not require any buffering, and it can be written out pretty easily. Here, we never need to buffer, since we only ever accept another item if we know we can send it immediately (I use this technique in `tokio-tower`, for example).

Unfortunately, this example highlights a problem with `Sink` as it stands: we may now take a long time (potentially forever) between when we get `Poll::Ready` from `poll_ready` and when we call `start_send`. It might be that the client is simply not sending us any more requests. But the `Sink` has promised us that we have a slot available to send, which we are still holding up. In the context of something like a bounded channel, this may mean that we are holding up one of the few slots in the channel, even though we will never use it (see #1312 and #984 (comment) for related discussion). If we have multiple of these loops (e.g., if you are forwarding multiple streams to clones of the same bounded `mpsc::Sender`), then you can easily end up in a livelock: all the slots of the bounded channel are held up by forwards for streams that do not have any elements to send, and as a result, forwards for streams that do have elements to send are unable to forward them, since `poll_ready` returns `Poll::Pending` for them.

This is something we've also run into in `tower`, whose `Service` trait features a similar `poll_ready` mechanism. What is needed here is some way to "undo" a `Sink::poll_ready` that returned `Poll::Ready`. In `tower` we've had two different proposals come up. The first (tower-rs/tower#408) is to add a `disarm_ready` method. The idea here is that implementations of `poll` that discover they won't use the slot given to them by `poll_ready` can "disarm"/give up that slot by calling the `disarm_ready` method. The second (tower-rs/tower#412) is to have `poll_ready` return some kind of "token" that must then be passed to `start_send` (`tower`'s equivalent is called `call`). This makes the API more misuse-resistant (you can't call `start_send` without calling `poll_ready`), and it also gives us a way to give up a `poll_ready` slot: just drop the token.

Neither of these proposals is necessarily the right one, but I figured I would start the discussion here, since this is definitely an issue with `Sink` (and similar traits that include a `poll_ready` method), and one that must be fixed before the trait is considered finished.
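The buffer-free loop the issue refers to was lost from this rendering of the page. A toy, synchronous model of the pattern, and of the held-slot problem it creates, might look like the following; `ToySink`, `forward`, and all the signatures are illustrative assumptions, not the real poll-based, fallible futures-rs API:

```rust
// Toy, synchronous model of the `poll_ready`/`start_send` protocol.
// All names and signatures here are illustrative, not the real
// futures-rs API (which is poll-based and fallible).

struct ToySink {
    buf: Vec<u32>,
    capacity: usize,
    reserved: bool, // slot claimed by `poll_ready` but not yet used
}

impl ToySink {
    fn new(capacity: usize) -> Self {
        ToySink { buf: Vec::new(), capacity, reserved: false }
    }

    /// Returns true if a send slot is available, reserving it.
    fn poll_ready(&mut self) -> bool {
        if self.reserved {
            return true;
        }
        if self.buf.len() < self.capacity {
            self.reserved = true;
            true
        } else {
            false
        }
    }

    /// Must only be called after `poll_ready` returned true.
    fn start_send(&mut self, item: u32) {
        assert!(self.reserved, "start_send without poll_ready");
        self.reserved = false;
        self.buf.push(item);
    }
}

/// The "poll_ready first" forwarding loop: we only pull an item from
/// the stream once we know the sink can accept it, so no buffering is
/// ever needed on our side.
fn forward(stream: &mut impl Iterator<Item = u32>, sink: &mut ToySink) {
    loop {
        if !sink.poll_ready() {
            return; // sink full; a real impl would return Poll::Pending
        }
        match stream.next() {
            Some(item) => sink.start_send(item),
            // The problem: `poll_ready` already reserved a slot, and we
            // return while still holding it.
            None => return,
        }
    }
}

fn main() {
    // With items available, the loop forwards until the sink is full.
    let mut sink = ToySink::new(2);
    forward(&mut vec![1, 2, 3].into_iter(), &mut sink);
    assert_eq!(sink.buf, vec![1, 2]);

    // With an idle stream, the reservation is held indefinitely.
    let mut idle_sink = ToySink::new(2);
    forward(&mut std::iter::empty(), &mut idle_sink);
    assert!(idle_sink.reserved);
}
```

The second call in `main` is exactly the livelock ingredient described above: an idle forwarder holding a channel slot it will never use.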
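The second proposal (tower-rs/tower#412) can be sketched in the same toy, synchronous style: `poll_ready` hands back a token, `start_send` requires it, and dropping the token releases the slot. `TokenSink` and `SendToken` are hypothetical names, not a real API:

```rust
// Toy sketch of the token-based proposal: a reservation is represented
// by a value, so giving the slot back is just dropping the token.
// `TokenSink`/`SendToken` are hypothetical names, not a real API.

use std::cell::Cell;
use std::rc::Rc;

struct TokenSink {
    used: Rc<Cell<usize>>, // buffered items + live reservations
    capacity: usize,
    buf: Vec<u32>,
}

struct SendToken {
    used: Rc<Cell<usize>>,
    spent: bool,
}

impl Drop for SendToken {
    // An unused token returns its slot to the sink.
    fn drop(&mut self) {
        if !self.spent {
            self.used.set(self.used.get() - 1);
        }
    }
}

impl TokenSink {
    fn new(capacity: usize) -> Self {
        TokenSink { used: Rc::new(Cell::new(0)), capacity, buf: Vec::new() }
    }

    /// Reserve a slot. Misuse-resistant: without a token, there is
    /// simply no way to call `start_send`.
    fn poll_ready(&mut self) -> Option<SendToken> {
        if self.used.get() < self.capacity {
            self.used.set(self.used.get() + 1);
            Some(SendToken { used: Rc::clone(&self.used), spent: false })
        } else {
            None
        }
    }

    /// Consume the reservation; the slot now holds the item (freeing it
    /// again on flush is elided here).
    fn start_send(&mut self, mut token: SendToken, item: u32) {
        token.spent = true;
        self.buf.push(item);
    }
}

fn main() {
    let mut sink = TokenSink::new(1);

    let token = sink.poll_ready().expect("slot available");
    assert!(sink.poll_ready().is_none()); // capacity 1: no second slot
    drop(token); // "undo" the poll_ready: just drop the token

    let token = sink.poll_ready().expect("slot available again");
    sink.start_send(token, 42);
    assert_eq!(sink.buf, vec![42]);
}
```

In the real trait this shape is what runs into lifetime trouble: a token borrowing from the sink across `poll` calls is hard to express without generic associated types.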