Thoughts on backpressure #112
Comments
The proposed …
@jonhoo The existing (and really any concurrent code) is at risk of introducing deadlocks. Granted, this makes it a bit easier. Given that the only other option (that I can think of) is unbounded buffering, I think it is a necessary hazard.
Yeah, I mostly raised it because this seems like a pre-made deadlock trap, much like locks are, and should probably be documented as such. That's not to say I think we shouldn't have it; we should just make sure that users are aware that this can deadlock.
In this case, … The problem is that … Thoughts? @olix0r
This is implemented, but we need to document it better.
Closing in favor of the doc meta issue (#33).
Background

`Service` is a push-based API: requests are pushed into the service. This makes it vulnerable to situations where the request producer generates requests faster than the service is able to process them.

To mitigate this, `Service` provides a back pressure strategy based on `Service::poll_ready`. Before pushing a new request into the service, the producer must call `Service::poll_ready`. The service is then able to inform the producer whether it is ready to handle the new request. If it is not ready, the service will notify the producer when it becomes ready using the task notification system.
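For illustration, here is a minimal sketch of that producer-side protocol, assuming tower's current `ServiceExt::ready` helper (which simply drives `poll_ready` to completion); the function name, the request vector, and the sequential awaiting are illustrative, not part of the issue:

```rust
use tower::{Service, ServiceExt}; // `ServiceExt` provides the `ready()` helper

// Push requests into a service, waiting for readiness before each call.
async fn produce<S, Req>(mut service: S, requests: Vec<Req>) -> Result<(), S::Error>
where
    S: Service<Req>,
{
    for request in requests {
        // Wait until the service signals that it can accept one more request.
        let ready_service = service.ready().await?;
        // Only now is it correct to push the request into the service.
        let _response = ready_service.call(request).await?;
    }
    Ok(())
}
```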
Services and concurrency

The `Service::call` function returns immediately with a future of the response. The producer may call the service repeatedly with new requests before previously returned futures complete. This allows a single service value to process requests concurrently.

This behavior complicates calling services sequentially. Consider the `and_then` combinator:
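(The original code snippet is not preserved in this copy of the issue; the following is a sketch of the kind of composition it describes, using `and_then` from the futures crate's `TryFutureExt` rather than tower's own combinator.)

```rust
use futures::TryFutureExt; // provides `and_then` on futures
use tower::Service;

// Chain two services: the response of `service1` is fed into `service2`.
// Note that `service2` is moved into the closure, so the combined response
// future must own a handle to it.
fn chain<S1, S2, Req>(
    mut service1: S1,
    mut service2: S2,
    my_request: Req,
) -> impl std::future::Future<Output = Result<S2::Response, S1::Error>>
where
    S1: Service<Req>,
    S2: Service<S1::Response, Error = S1::Error>,
{
    service1
        .call(my_request)
        .and_then(move |response| service2.call(response))
}
```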
This will result in `my_request` being pushed into `service1`, and when the returned future completes, the response is pushed into `service2`. Because the response from `service1` completes asynchronously, the combined response future must contain a handle to `service2`. Because the `Service` trait requires `&mut self` in order to be used, `service2` cannot be stored in an `Arc` and cloned into the shared response future.

The strategy for handling this is to require `service2` to implement `Clone` and let each service implementation manage how it handles concurrency. Concurrency can be added to any service by adding a layer of message passing: a task is spawned to drive the service itself, and a channel is used to buffer queued requests. The channel sender implements `Service`. This pattern is provided by `buffer`.
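For illustration, a sketch of what that pattern looks like with the `Buffer` middleware as it exists in tower 0.4 (this assumes the `buffer` feature and a tokio runtime, since `Buffer::new` spawns the worker task; the capacity and the function name are illustrative):

```rust
use tower::{buffer::Buffer, Service};

// Wrap a service in a channel so that cheaply cloneable handles can be
// shared across tasks; each handle itself implements `Service`.
fn shareable<S, Req>(service: S) -> Buffer<S, Req>
where
    S: Service<Req> + Send + 'static,
    S::Future: Send,
    S::Error: Into<tower::BoxError> + Send + Sync,
    Req: Send + 'static,
{
    // The worker task owns `service` and drives it; handles forward requests
    // over the internal channel. The bound (here 32) limits queued requests.
    Buffer::new(service, 32)
}
```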
Problem

The question at hand is how to combine the back pressure API (`poll_ready`) with the pattern of cloning service handles to handle concurrency.

The first option is to use the same strategy as channels for back pressure. In short, each `Sender` handle has a dedicated buffer of 1 slot (see here for a detailed discussion). Given that it is permitted to clone `Service` handles once per request, applying this behavior to `Service` would effectively result in unbounded buffering.
Proposal

Instead, `Service::poll_ready` should use a reservation strategy. Calling `Service::poll_ready` reserves capacity for the producer to send one request. Once `poll_ready` returns `Ready`, the next invocation of `call` will not result in an out-of-capacity error.

This strategy also means that it is possible for a service's capacity to be depleted without any requests being in flight yet. Consider `buffer` with a capacity of 1: a producer calls `poll_ready` before generating the request. In order to guarantee that capacity is available when the request has been produced, the service must reserve a slot. The service then has no remaining capacity, but the request has yet to be produced.
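A sketch of that scenario, assuming the tower 0.4 `Buffer` (which implements the reservation strategy proposed here) and a tokio runtime, since `Buffer::new` spawns a worker task; the names and the capacity of 1 are illustrative:

```rust
use futures::FutureExt; // for `now_or_never`
use tower::{buffer::Buffer, Service, ServiceExt};

// Demonstrates capacity being depleted with no request in flight yet.
async fn reservation_demo<S>(inner: S)
where
    S: Service<String> + Send + 'static,
    S::Future: Send,
    S::Error: Into<tower::BoxError> + Send + Sync,
{
    let mut a = Buffer::new(inner, 1); // a single slot of capacity
    let mut b = a.clone();

    // Handle `a` reserves the only slot before it has even produced a request.
    let _reserved = a.ready().await;

    // Handle `b` now sees the service at capacity: its readiness future does
    // not resolve, even though nothing is actually in flight. The slot frees
    // up once `a` issues its call (or drops its handle).
    assert!(b.ready().now_or_never().is_none());
}
```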
The `and_then` combinator

In the case of the `and_then` combinator, `poll_ready` is forwarded to both services. For the combined service to be ready, capacity must be reserved in both the first and the second service. This means that, if the response future for the first service takes significant time to complete, the second service could be starved. This can be mitigated somewhat by adding additional buffering to the second service.

cc @olix0r @seanmonstar
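For illustration, a sketch (not tower's actual implementation) of the readiness rule described in the last section, where the combined service reports readiness only after both inner services have reserved capacity:

```rust
use std::task::{Context, Poll};
use futures::ready;
use tower::Service;

// Poll readiness of a two-stage composition: reserve capacity in the first
// service, then in the second, before reporting Ready.
fn poll_both_ready<A, B, Req>(
    first: &mut A,
    second: &mut B,
    cx: &mut Context<'_>,
) -> Poll<Result<(), A::Error>>
where
    A: Service<Req>,
    B: Service<A::Response, Error = A::Error>,
{
    // Reserve capacity in the first service...
    ready!(first.poll_ready(cx))?;
    // ...and only report Ready once the second has capacity as well. While
    // this stays Pending, the first service's reserved slot sits idle, which
    // is the starvation issue noted above.
    second.poll_ready(cx)
}
```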