[BUG] Same task received/processed by more than one worker #90
Thanks for filing this bug report! I ran the same code on my machine with 4 worker processes reading from the same Redis instance but could not reproduce the bug. It could be that you've run the client code multiple times and there were duplicate tasks in Redis. Would you mind trying this again with a clean Redis DB? You can flush the Redis DB to start from a clean slate.
@hibiken I prepared an environment where this can be reproduced (uses Docker and Docker Compose):
It will not always give exactly the same results; I guess there's some stochastic factor that determines which taskrunner picks up which task. Example outputs:
We see that taskrunner1 and taskrunner3 have both processed task 2, and taskrunner2 and taskrunner3 have both processed task 1. Is there any way we can ensure this does not happen? Would it be possible to programmatically clear all tasks before starting to create tasks, just to make sure?
@gunnsth You can flush the Redis DB before enqueueing tasks:

package main

import (
	"log"

	"github.com/go-redis/redis/v7"
)

func main() {
	// Flush DB first to start from a clean slate.
	rdb := redis.NewClient(&redis.Options{
		Addr:     "redis:6379",
		Password: "xxxx",
	})
	if err := rdb.FlushDB().Err(); err != nil {
		log.Fatalln(err)
	}
	// ... create asynq.Client and schedule tasks (your existing code)
}

Let me know if you are still seeing duplicate tasks.
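For a quick manual check, the same flush can also be done from the shell with redis-cli (assuming the same host and password as in the snippet above):

redis-cli -h redis -a xxxx flushdb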
Describe the bug
The problem is that I spawned a taskqueuer that queued tasks scheduled from now - 10 minutes to now + 10 minutes, with 4 workers running (each at concurrency 1).
The output I got was:
So tasks 3 and 4 were received twice, which could lead to problems, although I admit the case I am working with is a bit unusual (i.e., scheduling tasks minutes in the past).
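For context, a minimal sketch of the kind of queuer described above, written against the current asynq client API; the task type demo:task, the payload, and the Redis address/password are illustrative assumptions, not the reporter's actual code:

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	// Assumed Redis address/password; adjust to match your environment.
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "redis:6379", Password: "xxxx"})
	defer client.Close()

	// Schedule tasks from now-10m to now+10m, five minutes apart.
	for i := -10; i <= 10; i += 5 {
		task := asynq.NewTask("demo:task", []byte(fmt.Sprintf(`{"n": %d}`, i)))
		processAt := time.Now().Add(time.Duration(i) * time.Minute)
		if _, err := client.Enqueue(task, asynq.ProcessAt(processAt)); err != nil {
			log.Fatalf("could not enqueue task: %v", err)
		}
	}
}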
To Reproduce
Steps to reproduce the behavior (Code snippets if applicable):
Start 4 taskrunners
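For reference, a minimal sketch of what each taskrunner might look like, again using the current asynq API with concurrency 1 (the handler and the demo:task type are assumptions matching the queuer sketch above, not the reporter's actual code):

package main

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	srv := asynq.NewServer(
		// Assumed Redis address/password; adjust to match your environment.
		asynq.RedisClientOpt{Addr: "redis:6379", Password: "xxxx"},
		asynq.Config{Concurrency: 1}, // one task at a time per worker
	)
	mux := asynq.NewServeMux()
	mux.HandleFunc("demo:task", func(ctx context.Context, t *asynq.Task) error {
		log.Printf("processing task, payload=%s", t.Payload())
		return nil
	})
	if err := srv.Run(mux); err != nil {
		log.Fatalf("could not run server: %v", err)
	}
}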
As per the output above, tasks 3 and 4 were received by two workers each. It would be good if we could guarantee that each task is processed only once.
Note that, given the way I am spawning the workers and the queuer, it is possible that the tasks are queued before some of the workers start, since everything starts at the same time. Ideally that would not matter.
Expected behavior
Expected that each task would be received by only one worker and processed only once.
Screenshots
N/A
Environment (please complete the following information):
Additional context
If needed, I can clean up my Docker Compose environment and provide a fully self-contained example.