I'm currently investigating a problem where two concurrent jobs are stalling even though I specified `block: 0`. The basic logic is:

```ruby
begin
  RedisMutex.with_lock(self.class.to_s, block: 0) do
    # do work
  end
rescue RedisMutex::LockError
  # someone else is already working on it - going home now.
end
```
So far I have been unable to force a reproduction on my local machine. Do you have an idea what might be the cause? One difference that comes to mind is that the real machine uses a hosted Redis (cluster mode, 1 shard, 2 nodes) versus the local Redis on my dev machine.
You might have hit the issue described in #5 - a PR is ready in #27, but I think Redlock would work better for newer versions of Redis. I tried to tweak it to fit the redis-mutex API, but it wasn't a small task. A PR would be appreciated.
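For reference, here is a rough sketch of how the same fail-fast pattern might look with the redlock gem (redlock-rb) instead of redis-mutex. The client options, key name, and TTL are illustrative, and the exact API may vary between gem versions; this is not the redis-mutex API, just an approximation of the equivalent behaviour.

```ruby
require 'redlock'

# retry_count: 1 approximates block: 0 - a single acquisition attempt
# with no waiting (assumption: redlock-rb's retry_count option).
lock_manager = Redlock::Client.new(['redis://localhost:6379'], retry_count: 1)

# The block receives false when the lock is already held elsewhere.
lock_manager.lock('my_job_lock', 60_000) do |locked|
  if locked
    # do work
  else
    # someone else is already working on it - going home now.
  end
end
```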
I'm not sure I understand the non-blocking mode correctly, but I recently had a problem where a job was started from cron on three servers and finished in less than a second (a fast task). The job ran on all three servers and never raised LockError. `block: 1` looks like a way to cope with intermittent problems connecting to Redis, right? It gives a one-second grace period for establishing the connection and acquiring the lock. In practice this option means the task can run several times within that one second (`block: 1`) if each run finishes sooner.
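To illustrate that scenario: with `block: 1`, each server keeps retrying for up to one second, and because the job finishes (and releases the lock) well within that window, every server eventually acquires the lock and runs the job. A minimal sketch, assuming the gem's documented `block:` (seconds to keep retrying) and `sleep:` (polling interval) options, the `RedisClassy` configuration style from the README, and an illustrative job name:

```ruby
require 'redis-mutex'

# Recent redis-mutex versions are configured through RedisClassy.
RedisClassy.redis = Redis.new

# Started from cron on all three servers at the same moment.
RedisMutex.with_lock('fast_report', block: 1, sleep: 0.1) do
  # The job itself takes well under a second, so the lock is released
  # while the other two servers are still inside their 1-second retry
  # window...
  do_the_fast_work  # hypothetical method standing in for the real job
end
# ...and each of them then grabs the freed lock in turn, so the job runs
# three times instead of once. With block: 0 the late servers would raise
# RedisMutex::LockError immediately instead of waiting.
```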