Make it work for worker processes #574
Comments
I can't see why you are handling start/shutdown of Kue. It is not normal to use a queue this way... you normally start Kue as a daemon and let it run. Also, it seems you are on Kue 0.8, in which case you should call …
I'd like to handle the start/shutdown of Kue because I'd like to kill the daemon so I'm not charged for processing time when there is nothing to do, which in my case will be most of the time. Any ideas for that?
The worker uses the Redis BLPOP command, and since it is blocking, it shouldn't load your CPU. Would you please first confirm whether a live worker is actually charging you, or whether it is just the open OS process that your cloud provider bills for?
Yes, my provider charges me for the live worker even if it is not consuming CPU, so I'd like to kill it when it's not processing jobs. How can I periodically check the queue count? If I close the queue with 0 active, 0 inactive and 1 delayed job, then when I check the queue with inactiveCount and activeCount, the counts never change, even if a delayed job is overdue, unless I start processing jobs...
When your scheduled process ticks, first call …
Thanks for your guidance. I added a 100ms setTimeout right after calling …

I would suggest allowing a call to …

Also, maybe I found a bug: if I set my setTimeout to the same value as the promote ms, the queue close gets stuck: the process doesn't exit after the "queue closed" message is shown. Although this can be a working solution, I don't really like it, and I think that emitting an event when the promoter has finished checking all jobs would be the right solution for this use case. Do you think it makes sense to add this feature to Kue? I'm happy to work on a PR...

By the way, I see that you deprecated …
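The promote-wait-count workaround discussed above can be sketched as a small helper. This is only a sketch under assumptions: the `queue` argument is any object exposing Kue-0.8-style `promote()`, `inactiveCount(cb)`, `activeCount(cb)`, and `shutdown(timeout, cb)`; it is injected as a parameter so the helper can be exercised without Redis, and `checkAndMaybeShutDown` is a name invented here.

```javascript
// Sketch: one tick of the scheduled worker, assuming a Kue-0.8-style
// queue object (promote, inactiveCount, activeCount, shutdown) is passed in.
function checkAndMaybeShutDown(queue, done) {
  // Push any overdue delayed jobs onto the inactive list.
  queue.promote();

  // Give the promotion pass a moment to touch Redis before counting
  // (the 100ms setTimeout workaround from the comment above).
  setTimeout(function () {
    queue.inactiveCount(function (err, inactive) {
      if (err) return done(err);
      queue.activeCount(function (err, active) {
        if (err) return done(err);
        if (inactive === 0 && active === 0) {
          // Nothing to do: close the queue so the process can exit.
          queue.shutdown(5000, function (err) { done(err, 'closed'); });
        } else {
          done(null, 'working');
        }
      });
    });
  }, 100);
}
```

As the thread notes, the fixed delay is fragile: if it coincides with the promotion interval, the close can race the promoter, which is what motivates the event-based approach below in the discussion.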
This one is more important and relates to the promotion timer; it will involve both 0.8 and 0.9.
We can add promotion start/end events to the queue object. A PR will be most welcome if it's based on master (0.9) :)
In 0.9, promotion is cluster-aware: all nodes automatically run a race against a distributed lock on Redis, and the winner does the promotion each time, so you can't be sure which node will run promotion or emit promotion events locally. I think we should emit those events globally on all queue instances on all nodes.
…to wait for interval if there are already jobs to promote on start. Automattic#574
Closing this for now... can be reopened later.
I have a worker process that gets executed once per day, looks for overdue jobs, and if there are no jobs to execute, exits the process. I can't figure out how to make Kue work for this use case.
I prepared an example script to make more clear what I'm trying to achieve:
The first is a script called add-jobs.js: I execute it, it creates the job, and then the process closes. All good.
And this is a worker process called worker.js, supposed to be called at regular intervals, let's say every 5 seconds. If there are no jobs to process at that moment, the process should close. So the idea is to check if there are jobs needing attention and start the queue if so. Also, after each job is finished, it should check whether there are still any jobs left needing attention, and if not, exit the process.
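The intended flow for worker.js might be sketched roughly like this. A sketch only: `startIfNeeded` and `closeIfCan` are the function names used in this issue, the `'task'` job type is invented here, and the queue object is injected so the skeleton can be exercised without Redis.

```javascript
// Sketch of the worker.js flow described above, assuming a Kue-0.8-style
// queue object is passed in. exitFn is called once the queue has closed.
function makeWorker(queue, exitFn) {
  function closeIfCan() {
    queue.inactiveCount(function (err, inactive) {
      queue.activeCount(function (err2, active) {
        if (!err && !err2 && inactive === 0 && active === 0) {
          // Nothing queued and nothing running: let the process exit.
          queue.shutdown(5000, exitFn);
        }
      });
    });
  }

  function startIfNeeded() {
    queue.promote(); // move overdue delayed jobs to inactive (Kue 0.8)
    queue.process('task', function (job, done) {
      // ... do the real work here ...
      done();
      closeIfCan(); // after each job, see whether we can exit
    });
    closeIfCan();   // cover the "no jobs at all" case
  }

  return { startIfNeeded: startIfNeeded, closeIfCan: closeIfCan };
}
```

Note that the counts ignore delayed jobs, so this skeleton still has the promotion-timing problem the rest of the thread discusses.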
At the moment the first line of the startIfNeeded function, queue.promote();, has no effect. Also, it will be deprecated in the next version. If I call the closeIfCan() function inside .progress(), it would only work when there are active or inactive jobs, but not if there are only delayed jobs. So I'm not sure how to check for jobs needing promotion without actually starting to work on the jobs.
Is there any way to make kue work for this use case?