
Failsafe on Redis error replies in RedisCacheStoreProxy. #421

Merged · 1 commit · Jul 15, 2019
Conversation

@cristiangreco commented Jul 7, 2019

RedisCacheStoreProxy will blow up when RedisCacheStore raises a
CommandError exception. By default, the proxied store only handles
BaseConnectionError exceptions and lets any other type of exception
from the underlying client bubble up.

This pull request uses the same approach as RedisProxy: store
operations are wrapped in a rescuing block that rescues and ignores
BaseError exceptions (the most generic exception class the Redis
client can raise).

Best viewed ignoring whitespace.
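
For context, a minimal sketch of that pattern (class and method shapes assumed from the description, not the exact rack-attack code):

```ruby
require "redis"

# Sketch only: every store operation funnels through `rescuing`, which
# swallows Redis::BaseError, the root of redis-rb's exception hierarchy
# (covering CommandError as well as BaseConnectionError).
class RedisCacheStoreProxy
  def initialize(store)
    @store = store
  end

  def write(*args)
    rescuing { @store.write(*args) }
  end

  def increment(*args)
    rescuing { @store.increment(*args) }
  end

  private

  # Fail open: if the Redis client raises, ignore the error and return
  # nil, so a Redis hiccup degrades the throttle instead of crashing
  # the request.
  def rescuing
    yield
  rescue Redis::BaseError
    nil
  end
end
```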

@grzuy (Collaborator) left a comment

Thank you!

To be consistent I guess you would also want to rescue #read, right?

@cristiangreco (Author) commented

@grzuy thanks for reviewing this PR and sorry for coming back to you so late!

> To be consistent I guess you would also want to rescue #read, right?

I'm not sure what exactly you mean here. #read is not reimplemented by RedisCacheStoreProxy; it's only used within #increment, which is itself already wrapped in a rescuing block. Or did I misunderstand your suggestion?

@grzuy (Collaborator) commented Jul 12, 2019

Hi @cristiangreco,

Sorry for my short and vague comment :-P

So, rack-attack will eventually call #read, #write and #increment on whichever cache store you configure.

Whether or not those methods are re-implemented on a particular proxy, they are still going to be called.

In particular, for the RedisCacheStoreProxy, #read is not re-implemented because it wasn't necessary up until now, but it is still being called and delegated to ActiveSupport::Cache::RedisCacheStore#read.

If you don't re-implement and rescue #read, you will continue to get Redis::BaseError for those #read calls.
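
A hypothetical illustration of that failure mode (the key name and error reply are made up; `Rack::Attack.cache.store` returns the configured store proxy):

```ruby
store = Rack::Attack.cache.store

# Delegated straight to ActiveSupport::Cache::RedisCacheStore#read,
# with no rescue around it:
store.read("rack::attack:12345:req/ip:1.2.3.4")
# => raises Redis::CommandError ("WRONGTYPE Operation against a key
#    holding the wrong kind of value") instead of returning nil
```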

@cristiangreco (Author) commented

@grzuy thanks for the thorough explanation! I've force-pushed a change to wrap read operations as well.

@grzuy (Collaborator) left a comment

Thank you for the update!

```ruby
  end
end

def read(name)
  rescuing { super(name) }
end
```
@grzuy (Collaborator) commented:

I think we are overriding ActiveSupport::Cache::Store#read(name, options = nil), which can receive up to two arguments.

What about going with the following?

```ruby
def read(*_args)
  rescuing { super }
end
```

That way we allow the options argument, and we make the override forward-compatible with any arity changes to the original #read.
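
For the record, a bare `super` (no parentheses, no arguments) re-sends exactly the arguments the current method received, which is what makes the override above forward-compatible. A tiny standalone illustration (hypothetical Demo class):

```ruby
class Demo < Hash
  # Bare `super` forwards whatever arguments #fetch received,
  # without the override having to name them.
  def fetch(*_args)
    super
  end
end

Demo.new.fetch(:missing, :default) # => :default
```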

@cristiangreco (Author) commented:

That looks sensible, I've updated it!

@grzuy (Collaborator) commented Jul 15, 2019

Thank you!
