chore: note it is safe to auto-reconnect #126

Open

wants to merge 1 commit into main from is_it_idemponent

Conversation


@bf4 bf4 commented Feb 16, 2023

@bf4 bf4 force-pushed the is_it_idemponent branch from 7793589 to e93ff30 on February 16, 2023 21:40
@bf4 bf4 mentioned this pull request Feb 16, 2023
@@ -169,15 +169,17 @@ def initialize(connection)
 def lock(resource, val, ttl, allow_new_lock)
   recover_from_script_flush do
     @redis.with { |conn|
-      conn.call('EVALSHA', Scripts::LOCK_SCRIPT_SHA, 1, resource, val, ttl, allow_new_lock)
+      # NOTE: is idempotent and safe to retry
+      conn.call_once('EVALSHA', Scripts::LOCK_SCRIPT_SHA, 1, resource, val, ttl, allow_new_lock)

I assume you mean unsafe if you're using call_once?

Contributor Author

Ah, the comment got out of date with my code. I changed my mind (locking probably isn't idempotent) and changed the code from `call` to `call_once`, but didn't update the comment.

Contributor Author
@bf4 bf4 Feb 17, 2023

Now the logic and comments are in sync:

unlock/lock:

    # NOTE: is not idempotent and unsafe to retry
    conn.call_once(...)

load_scripts/get remaining ttl:

    # NOTE: is idempotent and safe to retry
    conn.call(...)
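
To make the distinction concrete, here is a minimal sketch, not part of this PR, of how redis-client treats the two call styles when `reconnect_attempts` is enabled (the URL and key names are placeholders):

    require "redis_client"

    # With reconnect_attempts set, redis-client may transparently reconnect and
    # re-issue commands sent via `call`; commands sent via `call_once` are never
    # re-issued after a connection error.
    redis = RedisClient.config(
      url: ENV.fetch("REDIS_URL", "redis://localhost:6379"),
      reconnect_attempts: 1
    ).new_client

    # Idempotent read: a retry after a reconnect cannot change the outcome.
    ttl_ms = redis.call("PTTL", "some-resource-key")

    # Non-idempotent write (acquiring a lock): issue it at most once, so an
    # ambiguous network failure can never double-apply it.
    acquired = redis.call_once(
      "SET", "some-resource-key", "owner-token", "NX", "PX", 10_000
    )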

@bf4 bf4 force-pushed the is_it_idemponent branch from e93ff30 to 99a24b2 on February 16, 2023 23:17
@@ -19,6 +19,9 @@ Redlock works with Redis versions 6.0 or later.
 
 Redlock >= 2.0 only works with `RedisClient` instance.
 
+If you'd like to enable auto-reconnect attempts like in Redis 5,
+be sure to instantiate a RedisClient with `reconnect_attempts: 1`.
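
As a hedged sketch of what that setup could look like end to end (the URL, resource name, and TTL here are placeholders, not from the PR):

    require "redlock"
    require "redis_client"

    # reconnect_attempts: 1 restores a single auto-reconnect attempt, similar to
    # the redis gem 5.x behaviour the README text above refers to.
    client = RedisClient.config(
      url: "redis://localhost:6379",
      reconnect_attempts: 1
    ).new_client

    # Redlock >= 2.0 accepts RedisClient instances.
    lock_manager = Redlock::Client.new([client])

    lock_manager.lock("my-resource", 2_000) do |locked|
      if locked
        # critical section runs only while the lock is held
      end
    end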
Contributor Author
@bf4 bf4 Feb 17, 2023

Aside: interesting logic inside Sidekiq, where it defines `redis(&block)`: https://github.com/sidekiq/sidekiq/blob/v7.0.5/lib/sidekiq/config.rb#L118-L133

    def redis
      raise ArgumentError, "requires a block" unless block_given?
      redis_pool.with do |conn|
        retryable = true
        begin
          yield conn
        rescue RedisClientAdapter::BaseError => ex
          # 2550 Failover can cause the server to become a replica, need
          # to disconnect and reopen the socket to get back to the primary.
          # 4495 Use the same logic if we have a "Not enough replicas" error from the primary
          # 4985 Use the same logic when a blocking command is force-unblocked
          # The same retry logic is also used in client.rb
          if retryable && ex.message =~ /READONLY|NOREPLICAS|UNBLOCKED/
            conn.close
            retryable = false
            retry
          end
          raise
        end
      end
    end

I'm doing some more comparisons, since we got a flood of OpenSSL errors for about 10 minutes today. Heroku (https://help.heroku.com/70UD7VSR/why-am-i-seeing-syscall-returned-5-errno-0-state-sslv3-tls-write-client-hello-when-trying-to-connect-to-heroku-redis) attributes that error to "the maximum number of clients to Redis has been reached", which appears to be what happened.

Nothing this PR would address, I don't think, but worth sharing in the upgrade-changes discussion.
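
For context only, a hedged adaptation of that Sidekiq retry wrapper to a plain `RedisClient` connection pool; the pool size, env var, and error class below are assumptions on my part, not something this PR adds:

    require "connection_pool"
    require "redis_client"

    REDIS_POOL = ConnectionPool.new(size: 5, timeout: 5) do
      RedisClient.config(url: ENV.fetch("REDIS_URL", "redis://localhost:6379")).new_client
    end

    # Retry exactly once when the error indicates we are talking to the wrong
    # node (failover made it a replica, replicas are missing, or a blocking
    # command was force-unblocked), reopening the socket before retrying.
    def with_redis
      raise ArgumentError, "requires a block" unless block_given?

      REDIS_POOL.with do |conn|
        retryable = true
        begin
          yield conn
        rescue RedisClient::Error => ex
          if retryable && ex.message =~ /READONLY|NOREPLICAS|UNBLOCKED/
            conn.close
            retryable = false
            retry
          end
          raise
        end
      end
    end

    with_redis { |conn| conn.call("PING") }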
