Unique queues (prevent pushing duplicate / identical jobs into the queue) #2151
Comments
Nice list! I like the implementation of https://github.com/mlntn/laravel-unique-queue but unless I missed it, it requires you to implement this for all queues and you can't selectively do this on a "per job" basis? But maybe I'm wrong. However, it's strictly a Redis feature, though OTOH it supports Horizon (which afaik only supports Redis anyway). https://github.com/mingalevme/illuminate-uqueue seems complicated; it supports databases but doesn't mention Horizon (though maybe it works transparently). https://github.com/mbm-rafal/laravel-single-dispatch is something else entirely: it's only in-memory dispatching to prevent double release of events. Also, this lib clashes if you already want/need to use another dispatcher (which I'm doing, e.g. https://github.com/fntneves/laravel-transactional-events).
This is really needed. I have also implemented my own unique ID for my database driver. The list is great, but I think it should be added to the core.
Following thread.
@themsaid, you're the queue guru, right? An example: let's say we have products and a "product updated" event which triggers an index update to, for example, Algolia. When the queue is busy with 1000 jobs and we update a product 10 times, 10 jobs will be added to the queue. When the queue finally gets there after processing the 1000 jobs, the same thing is executed 10 times.
Implemented something similar in laravel/framework#34794. EDIT: It isn't exactly the same use case though. Instead of "pushing unique", it just "processes unique", meaning if there's an overlapping job, you can get rid of it with the middleware.
Great work @paras-malhotra! But preventing jobs from overlapping isn't the same as uniqueness, right? With your middleware the jobs are still added to the queue and are still executed.
Great, I will check it later.
@royduin, yeah the jobs are still added to the queue but you have the option to delete jobs if duplicates are being processed at the same time.
So let's take my example from earlier, and say we've got 2 queue workers running. With your middleware, when the jobs are processed, the first one is executed, and the second, third, fourth, etc. are not executed by the second worker until the first one is finished. So it's possible that, for example, the first job is executed and then the 7th. But just the first one is enough. I think it's better to check if there is a job in the queue before adding another one, to get real uniqueness.
Yes, the subsequent jobs won't be executed until the first one is finished. If you use `dontRelease()`, the overlapping jobs are deleted instead of being released back onto the queue. The packages that you mentioned don't really avoid duplicate processing of jobs. This is best done with database queries. Let's take https://github.com/mingalevme/illuminate-uqueue for example: it only checks for uniqueness when pushing, which still can't guarantee that a job isn't processed twice.
If you truly need to avoid processing duplicate jobs, database queries are probably the best way to do so. Just mark a column as processed, and before processing check whether it has already been processed. This approach does not suffer from any of the problems above. TL;DR: I don't recommend any of the packages listed above - the approach is faulty. A simple DB query will do the job!
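A minimal sketch of this DB-flag approach, assuming a hypothetical boolean `processed` column on the products table and an illustrative `IndexProduct` job:

```php
<?php

namespace App\Jobs;

use App\Models\Product;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;

class IndexProduct implements ShouldQueue
{
    use Dispatchable;

    public function __construct(public int $productId)
    {
    }

    public function handle(): void
    {
        $product = Product::find($this->productId);

        // Skip if the product is gone or a previous job already handled it.
        if (! $product || $product->processed) {
            return;
        }

        // ... push the product to the search index here ...

        // Flag the row so any duplicate jobs still in the queue become no-ops.
        $product->forceFill(['processed' => true])->save();
    }
}
```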
Keeping track of the uniqueness with a database column isn't the way to go. What you're saying is that we add a boolean column to the products table, and after the first job ran we set that column to true. In the job we check that boolean and skip the work if it has already been done. So we can only update the product once? That's not the behavior I want. The purpose of the uniqueness is to only have one job at a time in the queue, as they're all doing the same thing. But they may be executed more than once: if I update the product now, one job should be executed; if I update it tomorrow, another one should run. But if the queue is busy and I update the product 10 times, that job should only run once.
Oh I see. So you don't mind the jobs being executed again, you just want to skip pushing to the queue if it's busy. In that case, the package meets your requirement. Although, for the same use case (search indexing), I would recommend that you just drop overlapping jobs with the `WithoutOverlapping` middleware's `dontRelease` option. This way you don't have to rely on a custom queue implementation.
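On the job class that could look roughly like this; `WithoutOverlapping` and `dontRelease()` are the framework APIs from that PR, while the product-ID key is an illustrative assumption:

```php
use Illuminate\Queue\Middleware\WithoutOverlapping;

// Inside the job class: overlapping duplicates for the same product
// are deleted instead of being released back onto the queue.
public function middleware(): array
{
    return [
        (new WithoutOverlapping($this->product->id))->dontRelease(),
    ];
}
```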
But it feels strange to push jobs into the queue that don't have to be executed. Plus, the uniqueness isn't guaranteed: once the first job is finished, the next one will be executed, as it prevents overlapping, not duplicates. Nevertheless, thanks for your work and help! Hope to see uniqueness implemented in the framework itself someday.
This will also happen with the custom queue implementation packages. When the first job is finished, it is deleted from the queue. The next one will still be executed. For this specific case, a DB query is the only solution.
True, but then there won't be another job, as the uniqueness is handled when pushing jobs into the queue. If another job makes it into the queue, that means the previous one is finished, so the new one should be executed.
I think what you need is this: https://laravel.com/docs/8.x/cache#managing-locks-across-processes You acquire a lock before dispatching the job and release it when the job is done. New jobs won't get dispatched while the lock is still in place. Once the lock is released, new dispatches will come in.
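A minimal sketch of that pattern; `Cache::lock()` and `forceRelease()` are the documented APIs, while the key, the 600-second timeout, and the `IndexProduct` job are illustrative assumptions:

```php
use Illuminate\Support\Facades\Cache;

// When the product changes: dispatch only if no identical job is pending.
// The timeout should comfortably exceed the worst-case queue delay.
$lock = Cache::lock("index-product-{$product->id}", 600);

if ($lock->get()) {
    IndexProduct::dispatch($product->id);
}

// At the end of the job's handle(): free the lock (forceRelease ignores
// ownership, so the worker process may release a lock it didn't acquire).
Cache::lock("index-product-{$product->id}")->forceRelease();
```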
That's an option when the dispatching is handled manually and only one event is queued, but when I have multiple listeners on an event, and a listener on multiple events, it won't work. Example: a product model with:
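Presumably something along these lines, using Eloquent's `$dispatchesEvents` with `ProductSaved` as an illustrative event name:

```php
use App\Events\ProductSaved;
use Illuminate\Database\Eloquent\Model;

class Product extends Model
{
    // Automatically fire an event every time the model is saved.
    protected $dispatchesEvents = [
        'saved' => ProductSaved::class,
    ];
}
```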
Within the EventServiceProvider there are multiple listeners on that event, and also another event which triggers the same listener:
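For instance (with `NotifyWarehouse` and `CategorySaved` as illustrative stand-in names):

```php
// In App\Providers\EventServiceProvider:
protected $listen = [
    \App\Events\ProductSaved::class => [
        \App\Listeners\UpdateSearchIndex::class,
        \App\Listeners\NotifyWarehouse::class,   // multiple listeners on one event
    ],
    \App\Events\CategorySaved::class => [
        \App\Listeners\UpdateSearchIndex::class, // same listener fired by another event
    ],
];
```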
I could dispatch the event manually from every place where a product is updated, but with the model events and multiple listeners above that isn't practical.
Dispatch a job inside the listener, and put the lock around dispatching that job.
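A sketch of that suggestion, assuming a synchronous listener that guards the actual queued job with the cache lock from the docs linked above (names illustrative):

```php
namespace App\Listeners;

use App\Jobs\IndexProduct;
use Illuminate\Support\Facades\Cache;

class UpdateSearchIndex
{
    // The listener itself runs synchronously; only the inner job is queued,
    // and only when no identical job is already waiting in the queue.
    public function handle(object $event): void
    {
        $lock = Cache::lock("index-product-{$event->product->id}", 600);

        if ($lock->get()) {
            IndexProduct::dispatch($event->product->id);
        }
    }
}
```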
So un-queue the listeners and dispatch a lock-guarded job from them? Don't you think it's cleaner to have this in the framework, so people can just add a unique key to their jobs and listeners?
Possibly :) Will look into that when I get a chance. Or maybe @paras-malhotra will beat me to it, he has been on fire lately 💪🏽🔥
I wrote a quick post on that: https://divinglaravel.com/dispatching-unique-jobs-to-laravel-queues
Very elegant solution indeed @themsaid, lock before dispatching and release after processing! 👍
Looks smart indeed! Only thing to watch out for: you need to have some idea of how long it takes to process your queue, or how long jobs live in there, as a timeout is required (rightly so).
Do we need to create a unique key for every lock, per job?
If you want unique queues, you should create a unique key. You can't just call it "products", because then you won't queue any job for any product until the current one has finished. I think the example key should include the product ID.
@elfeffe so for each and every product I need to create a unique lock?
If you want to avoid duplicate jobs for that product update, your key must include the ID of the product.
I guess the middleware for this has already been created ;) https://github.com/laravel/framework/blob/c659d37854e11f251d88fb543f408af77fc3f177/src/Illuminate/Queue/Middleware/WithoutOverlapping.php
That's not the same as discussed earlier.
I thought about working on implementing this. We could implement a dispatch method that takes care of acquiring the lock, plus a matching call to release it when the job is done. So, in order to trigger a unique job dispatch, we'd have to call both explicitly. This two-step process just doesn't seem very intuitive / automatic: you have to remember to both acquire the lock and release it. It would be awesome if we could just pull in a trait on the job instead. Thoughts? If you guys can think of a better / more intuitive way, I'd be happy to chip in with a PR.
I've submitted a PR with a cleaner approach (I hope 🤞).
This feature will be released tomorrow. Docs here: laravel/docs#6558. @royduin, Taylor modified the PR based on your feedback to allow for flexibility to use a separate cache connection/driver for the unique job locks. You can check out the docs on how to do that. Thanks @themsaid and @royduin for your valuable feedback on the PR! 👍
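For reference, a minimal sketch of the shipped feature as described in the linked docs PR; `ShouldBeUnique`, `uniqueId()`, `$uniqueFor`, and `uniqueVia()` are the documented APIs, the job itself is illustrative:

```php
use App\Models\Product;
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Support\Facades\Cache;

class UpdateSearchIndex implements ShouldQueue, ShouldBeUnique
{
    use Dispatchable;

    // Drop the unique lock after an hour even if the job never completes.
    public $uniqueFor = 3600;

    public function __construct(public Product $product)
    {
    }

    // Only one pending/processing instance per product is allowed.
    public function uniqueId(): string
    {
        return (string) $this->product->id;
    }

    // Optional: keep the unique-job locks on a dedicated cache store.
    public function uniqueVia()
    {
        return Cache::driver('redis');
    }

    public function handle(): void
    {
        // ... push $this->product to the search index ...
    }
}
```

Dispatching this job ten times while the queue is busy should then result in a single queued instance per product, which is exactly the behavior asked for in this thread.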
Awesome! Thank you for your work 🚀
It would be nice to have unique queues baked into the framework by default, so that when pushing the same job twice only one makes it into the queue.
References:
- https://github.com/mlntn/laravel-unique-queue
- https://github.com/mingalevme/illuminate-uqueue
- https://github.com/mbm-rafal/laravel-single-dispatch