[FEATURE REQUEST] Ability to control/speed up sleep when processing empty queue #506
Comments
@yousifh Thank you for creating this issue! And thanks for providing this insight, very helpful. In an ideal world, Redis would provide a command to do just what we want, but that's not likely (context: redis/redis#1785). Making the sleep time configurable sounds ok to me, but maybe we should brainstorm other alternatives first.
I was reading the issue linked. The other alternative approach is something like the sketch below: if the processor finds messages, it resets the sleep to a baseline (100ms for example); if it doesn't find any messages, it keeps incrementing the sleep until it hits the max of 1 sec. So while the server is usually busy, it will try to fetch messages faster even if it hits an empty-queue error every once in a while, and while the server is idle it falls back to the same old behaviour of a 1 sec sleep. The numbers for the base sleep and the increments can be tweaked; I just went for something simple. During my benchmark tests this lowered the average time-to-execution latency as well. It tries to do adaptive flow control, but I kept it as basic as possible so as not to complicate the code.
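A minimal, self-contained sketch of that adaptive sleep idea (my own illustration, not the actual asynq code or the exact patch from the comment; the names `errNoTask`, `pollOnce`, and the step sizes are assumptions):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errNoTask stands in for asynq's internal "no processable task" condition.
var errNoTask = errors.New("no processable task")

const (
	baseSleep = 100 * time.Millisecond // poll quickly right after finding work
	maxSleep  = time.Second            // the current fixed sleep on an idle queue
	sleepStep = 100 * time.Millisecond // how fast the sleep grows while idle
)

// pollOnce dequeues one task and returns the sleep to use before the next poll:
// reset to the baseline when work is found, back off toward maxSleep when not.
func pollOnce(dequeue func() (string, error), sleep time.Duration) time.Duration {
	task, err := dequeue()
	if errors.Is(err, errNoTask) {
		time.Sleep(sleep)
		if sleep += sleepStep; sleep > maxSleep {
			sleep = maxSleep
		}
		return sleep
	}
	if err != nil {
		time.Sleep(sleep) // transient Redis error: treat like an empty poll
		return sleep
	}
	fmt.Println("processing", task)
	return baseSleep
}

func main() {
	// Simulate a queue that is empty three times, then yields a task.
	polls := []func() (string, error){
		func() (string, error) { return "", errNoTask },
		func() (string, error) { return "", errNoTask },
		func() (string, error) { return "", errNoTask },
		func() (string, error) { return "email:welcome", nil },
	}
	sleep := baseSleep
	for _, dequeue := range polls {
		sleep = pollOnce(dequeue, sleep)
		fmt.Println("next sleep:", sleep)
	}
}
```

While the queue stays empty the sleep climbs 100ms at a time up to 1 sec, and a single successful dequeue drops it straight back to 100ms.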
Is your feature request related to a problem? Please describe.
I have been benchmarking this library and measuring time-to-execution latency: the time between when a task is enqueued and when it gets picked up by a worker. I'm using the following parameters:
One thing I noticed is that the average time-to-execution latency was about `190ms`, which seemed quite high. Digging into the code, the cause appears to be how long the processor sleeps when it hits an empty queue (asynq/processor.go, lines 176 to 184 in c70ff6a).
Since globally all the workers can process tasks at a faster rate than they are being enqueued, they frequently hit this 1 sec sleep period, which increases the latency of picking up tasks.
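The snippet embedded from processor.go is not reproduced here; in essence (a simplified restatement of the behaviour described above, not the library's actual source), the fetch path behaves like this:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Stand-in for the "no processable task" condition the processor checks for.
var errNoProcessableTask = errors.New("no processable task")

// fetchLoop polls for work; whenever the queue turns up empty it parks for a
// full second, so a task enqueued during that window can wait up to ~1s
// before any worker picks it up.
func fetchLoop(dequeue func() (string, error), polls int) {
	for i := 0; i < polls; i++ {
		task, err := dequeue()
		if errors.Is(err, errNoProcessableTask) {
			time.Sleep(time.Second) // the hard-coded sleep in question
			continue
		}
		if err != nil {
			continue // transient error; the real code logs and retries
		}
		fmt.Println("processing", task)
	}
}

func main() {
	// A dequeue that always finds the queue empty, to exercise the sleep path.
	fetchLoop(func() (string, error) { return "", errNoProcessableTask }, 3)
}
```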
Describe the solution you'd like
I prototyped a couple of solutions to see if there are ways to lower the latency.
The first solution was to make the sleep period configurable, and I set it to `100ms`. That seemed reasonable given that the Redis operations Asynq runs are very fast, and `100ms` still leaves enough time for other background work such as the scheduler. Running the benchmark again, the latency dropped to an average of `30ms`.
Looking at Redis metrics, on an idle system with 7 workers the `redis.cpu.user` metric was around `2ms` before the change and `5ms` after it. That is more than double, but still seems reasonable. Of course these are idle conditions; during normal operation the impact on the Redis server is negligible. For example, during the benchmark the `redis.cpu.user` metric averaged `94ms` in both cases, but the lower sleep period still had lower latency as an advantage.
Describe alternatives you've considered
I tested an alternative solution where the sleep period for empty queues changes dynamically between 1 sec and 100ms, backing off based on whether fetches see an empty or a non-empty queue: it starts at 1 sec, decreases toward 100ms on every successive fetch that finds a task, and climbs back up to 1 sec while the queue stays empty. But that just seemed to add more complexity to the code.
Additional context
If you think the first approach is reasonable, I can push a PR to allow users to set the sleep period while letting it default to 1 sec. That way, an app that doesn't need lower latency can ignore the option, while apps that do need low latency can set it at the cost of higher CPU utilization on the Redis server while it's idle.
If there are other solutions to explore, I'm more than happy to prototype and open a PR.
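For the first approach, here is a rough sketch of what the knob could look like; the `Config` struct and the `EmptyQueueSleep` field are hypothetical stand-ins for illustration, not an existing asynq option:

```go
package main

import (
	"fmt"
	"time"
)

// Config imitates a server config with one new optional duration.
type Config struct {
	Concurrency     int
	EmptyQueueSleep time.Duration // zero value means "keep the 1s default"
}

// emptyQueueSleep resolves the effective sleep, preserving today's behaviour
// for apps that never set the option.
func (c Config) emptyQueueSleep() time.Duration {
	if c.EmptyQueueSleep > 0 {
		return c.EmptyQueueSleep
	}
	return time.Second
}

func main() {
	lowLatency := Config{Concurrency: 7, EmptyQueueSleep: 100 * time.Millisecond}
	defaults := Config{Concurrency: 7}
	fmt.Println(lowLatency.emptyQueueSleep(), defaults.emptyQueueSleep()) // 100ms 1s
}
```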