Request timeouts #13
Comments
@stakach anything I'm missing here, or things that this could inadvertently break?
Yeah, this would work, i.e. 5 timers max, with a new 5 minute timer created every minute. Each request then races the latest timer against the function completion.
Rather than using a timer directly, what about hooking into the uv-rays scheduler for that thread? That way it can reject the same deferrable without setting up any additional promises.
Yep, that would be more efficient again.

```ruby
thread.schedule.every(1.minutes.to_i * 1000, :perform_now) do
  # Roll over to a fresh bucket each minute...
  local_queue = thread.cancellation_queue = []
  # ...and sweep it 5 minutes later, rejecting anything still outstanding
  thread.schedule.in(5.minutes.to_i * 1000) { local_queue.each(&:reject) }
end
```

Then, wherever the deferral is created: `thread.cancellation_queue << deferral`
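To illustrate how a request might then use that queue, here is a minimal sketch. It assumes libuv-style deferreds (`thread.defer`, `resolve`, `reject`, `promise`) and uses a hypothetical `perform_forward` helper standing in for the existing forwarding logic; neither name is taken from the codebase.

```ruby
# Hypothetical request path: register the deferred with the shared sweep above.
# Assumption: rejecting an already-settled deferred is harmless, so requests
# that complete before the sweep don't need explicit cleanup.
defer = thread.defer
thread.cancellation_queue << defer

perform_forward(*args)                        # assumed: returns the real request promise
  .then  { |result| defer.resolve(result) }
  .catch { |error|  defer.reject(error) }

defer.promise
```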
I'm looking at this a little deeper now and wondering if it's worth even shooting this functionality further upstream into libuv. It then becomes possible to implement an optional TTL on any deferrable. One approach could be to spin up a timer on first use that repeats with the desired granularity (maybe an ENV config option). This can keep spinning away, incrementing a counter that mods over some sane range. When a timeout is set, the associated deferrable is popped into a hash keyed against the counter value at which it should expire. If the hash ever empties out, the timer can also be stopped, keeping things as minimal as possible.
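As a rough illustration of that shape (not the code in the linked branch), here is a sketch built on the same `thread.schedule` calls used above; the class and constant names, the wheel size, and the scheduled event's `cancel` call are all assumptions.

```ruby
# Sketch of an optional TTL for deferrables: a single coarse timer ticks a
# counter that wraps over a fixed range, and each tick rejects whatever was
# filed under it. All names here are illustrative, not the uv-rays/libuv API.
class DeferredTTL
  GRANULARITY = Integer(ENV['DEFER_TTL_GRANULARITY'] || 60_000) # ms per tick
  WHEEL_SIZE  = 1024 # the "sane range" the counter mods over

  def initialize(thread)
    @thread  = thread
    @tick    = 0
    @buckets = {}   # expiry tick => [deferrables]
    @timer   = nil
  end

  # Reject `defer` with :timeout roughly `ttl_ticks` ticks from now
  def watch(defer, ttl_ticks)
    start_timer if @timer.nil?   # spin the timer up on first use only
    (@buckets[(@tick + ttl_ticks) % WHEEL_SIZE] ||= []) << defer
  end

  private

  def start_timer
    @timer = @thread.schedule.every(GRANULARITY) do
      @tick = (@tick + 1) % WHEEL_SIZE
      (@buckets.delete(@tick) || []).each { |d| d.reject(:timeout) }
      stop_timer if @buckets.empty?   # keep things minimal when idle
    end
  end

  def stop_timer
    @timer.cancel   # assumed: scheduled events can be cancelled
    @timer = nil
  end
end
```

Usage would then just be something like `ttl.watch(defer, 5)` for a roughly 5 tick TTL at the configured granularity.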
Likely not needed at this point, but a draft implementation of the above is available over at https://github.com/cotag/uv-rays/tree/deferred-ttl.
When performing any inter-module comms, or interacting with modules via the websocket API, requests have no upper bound on resolution time. This becomes more evident where logic modules combine async interactions across multiple devices, as the failure of one request to resolve can cause multiple subsequent event chains to halt and the associated callbacks to remain in memory. This can be worked around in user code, but it may be neater to shift it upstream so it doesn't need to be thought about and doesn't clutter logic.
I'd recommend that we race the promise returned by `Orchestrator::Core::RequestForward#request` against a timer with a sane upper limit (say 5 minutes), after which it rejects with `:timeout`. This should act as a neat background cleanup task. Cases where a shorter guaranteed response is needed (e.g. websocket responses for UI updates) can still be handled in user code with a separate race, with this still sitting in the background as a safety net.
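A minimal sketch of that race, reusing the `thread.schedule.in` call shown in the comments above; `perform_forward`, the deferred helpers, and the scheduled event's `cancel` are assumptions rather than the current `RequestForward` internals.

```ruby
REQUEST_TTL = 5 * 60 * 1000   # ms; the "sane upper limit" suggested above

# Hypothetical wrapper: race the real request promise against a one-shot timer.
def request_with_timeout(thread, *args)
  defer   = thread.defer
  timeout = thread.schedule.in(REQUEST_TTL) { defer.reject(:timeout) }

  promise = perform_forward(*args)   # assumed: the existing request promise
  promise.then  { |result| timeout.cancel; defer.resolve(result) }
  promise.catch { |error|  timeout.cancel; defer.reject(error) }

  defer.promise
end
```

Shorter, caller-specific deadlines (e.g. for websocket UI responses) could still race their own timer against the returned promise on top of this.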
The possible negative impact is that it would prevent modules from performing async interactions where the response time is greater than whatever timeout value is chosen. I can't think of many use cases for that though.