
Delaying & Scheduling extension: Wrap ops in multi/exec #701

Open
wants to merge 1 commit into master
Conversation

@pjambet commented Nov 12, 2020

Wrap write operations in multi/exec blocks to prevent any possibility
of Redis being left in an inconsistent state. This also has the added
benefit of a small optimization: it reduces the number of round-trips.

The commit uses a verbose approach of wrapping most calls to
`redis.multi` in a `redis.pipelined` block, which is _technically_
unnecessary since the redis-rb gem does not send the `multi` & `exec`
commands right away if it is pipelining the commands.

The reason this commit uses both is that the two methods have different
semantics: `pipelined` is meant to be an optimization, whereas `multi`
provides the guarantees of a Redis transaction.

While it may seem unlikely, it is possible that future versions of
the redis gem will change the underlying behavior of the `multi` and
`pipelined` commands. Additionally, using both is a little more explicit
in terms of describing intent: "I want these commands to be run in an
atomic fashion, and I'm OK sending them all at once".
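
For reference, the pattern described above looks roughly like this (a minimal sketch in the block style the redis gem used at the time of this PR; the key names and payload are illustrative, not the actual diff):

```ruby
require "redis"

redis = Redis.new
timestamp    = Time.now.to_i                    # illustrative values only
encoded_item = '{"class":"SomeJob","args":[]}'

# The inner multi provides the transaction guarantees; the outer pipelined
# states the "ship everything in one round-trip" intent explicitly.
redis.pipelined do
  redis.multi do
    redis.rpush("delayed:#{timestamp}", encoded_item)
    redis.zadd("delayed_queue_schedule", timestamp, timestamp)
  end
end
```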

The call to `redis.del(key)` in `clean_up_timestamp` was unnecessary,
since the only reason the `llen` call above it would return `0` is if
the list did not exist.
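
To illustrate why the extra `del` is redundant, here is a tiny sketch with a hypothetical key, relying on the fact that Redis deletes a list key once it becomes empty:

```ruby
require "redis"

redis = Redis.new

redis.rpush("demo_list", "a")  # the list now exists with one element
redis.lpop("demo_list")        # popping the last element removes the key itself
redis.llen("demo_list")        # => 0
redis.exists?("demo_list")     # => false, so a follow-up del would be a no-op
```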

The call to `pipelined` in this example might seem like even more
overkill, since we only give a single command to `multi`, but
`multi`/`exec` are themselves commands, so in the event that the redis
gem starts sending the `multi` command right away in a future version,
wrapping it in a `pipelined` call is explicitly asking it to send
`multi`/`zrem`/`exec` all at once.
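
Again purely as a sketch (hypothetical key names), the clean-up described in the two paragraphs above would look something like this:

```ruby
require "redis"

redis = Redis.new
timestamp = Time.now.to_i           # illustrative
key       = "delayed:#{timestamp}"

# An empty per-timestamp list no longer exists, so only the schedule entry
# needs to be removed; MULTI, ZREM and EXEC go out in a single round-trip.
if redis.llen(key).zero?
  redis.pipelined do
    redis.multi do
      redis.zrem("delayed_queue_schedule", timestamp)
    end
  end
end
```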


I totally understand that the `redis.pipelined`/`redis.multi` approach I took in this commit is kinda pedantic. Given that a call to `pipelined` is enough to guarantee atomicity, we could drop a bunch of the `multi` calls in this commit and maintain the same behavior (with the exception of the existing `multi` call in `clean_up_timestamp`), since none of them use `WATCH` or `DISCARD`.

There, we could drop the `pipelined` call and still benefit from the Ruby client pipelining the commands anyway, but that would be less explicit. Happy to hear what others think.

@pjambet (Author) commented Nov 12, 2020

Tests failed for ruby 2.6 and resque ~1.27; trying to replicate locally, I need to install the right ruby version and all that.

Update: Tests pass on my machine with ruby 2.6.6 and resque 1.27.4, which confirms my gut feeling that the failures are not directly related to my changes.

Happy to hear if there's anything else I can do. Could it be that these tests randomly fail? Or is the test suite usually pretty robust and free of random failures?
