
Add a Redis Sentinel backend #175

Open — sebmil-daily wants to merge 6 commits into master from add_redis_sentinel_backend
Conversation

sebmil-daily (Contributor)

This new backend allows using a Redis Sentinel cluster to reliably store objects in the cache.

With the redundancy and automatic master switching of a Redis Sentinel cluster, the service and its data stay available even if some of the storage nodes go down.

Please note that due to the delay in Redis replication, some of the most recent data inserted in the current master can be lost if this host goes down before having a chance to replicate the change. It also takes some time for Sentinel to detect a host going down and elect a new master, so a short outage may occur when the current master fails.

Redis Sentinel documentation (https://redis.io/docs/latest/operate/oss_and_stack/management/sentinel/) recommends a minimum of 3 nodes and gives several architecture examples.

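For illustration, a backend configuration for this mode might look like the sketch below. Only `sentinel_addrs` appears in this PR's diff; the `master_name` and `password` keys are hypothetical placeholders shown here to indicate what a complete Sentinel section typically needs:

```yaml
backend:
  type: redis_sentinel
  redis_sentinel:
    # Addresses of the Sentinel processes, not the Redis data nodes.
    sentinel_addrs: [ "127.0.0.1:26379", "127.0.0.1:26380", "127.0.0.1:26381" ]
    # Hypothetical keys, for illustration only:
    master_name: "mymaster"
    password: "s3cret"
```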
@bsardo bsardo assigned bsardo, guscarreon and AlexBVolcy and unassigned bsardo Apr 29, 2024
@sebmil-daily (Contributor, Author)

Hi @guscarreon, do you have some time to review this PR, please?
As you were the one who reviewed the first POC by @cyrinux, I think you already have the context 😄

@sebmil-daily (Contributor, Author)

Hello, do any of you have time to review this PR?
Thank you

@bsardo (Collaborator)

bsardo commented May 30, 2024

Hi @sebmil-daily, thanks for your patience. We'll get to this shortly.

@sebmil-daily (Contributor, Author)

Hello @bsardo, thanks for your last message.
I'm just sending a small reminder, as that was almost one month ago.

@sebmil-daily (Contributor, Author)

Hello, would anyone be available to review this?

@bsardo bsardo assigned przemkaczmarek and unassigned AlexBVolcy Aug 14, 2024
@sebmil-daily sebmil-daily force-pushed the add_redis_sentinel_backend branch from 5e08d83 to ed2bd33 on September 3, 2024 13:19
This fix solves these issues:
* The log messages were incorrectly using `redis-sentinel` instead of `redis_sentinel` when dumping the configuration to the logs, which was misleading.
* The configuration entries were not given default values, which prevented Viper from reading the associated environment variables.
* The default configuration content was not tested.
@sebmil-daily sebmil-daily force-pushed the add_redis_sentinel_backend branch from ed2bd33 to 742e424 on September 3, 2024 13:23
When using a password with Redis Sentinel, it should be provided at both levels.
@guscarreon (Contributor) left a comment:

@sebmil-daily thank you for your contribution to our repo, we are thrilled to support Redis Sentinel and I'm sorry for the late review.

Looking at your proposed changes: given that the files redis-sentinel.go and redis.go are almost identical, can we consolidate them into a single file? That way we won't have repetitive code and won't need both redis v8 and redis v9. Can we keep redis.go with redis v9.4.0 and let go of redis-sentinel.go and redis v8.11.5? Maybe redis.go could be modified to instantiate one client or the other. Such an approach would avoid repeating code. Thoughts?

@@ -32,6 +32,14 @@ backend:
# tls:
# enabled: false
# insecure_skip_verify: false
# redis_sentinel:
# sentinel_addrs: [ "127.0.0.1:26379", "127.0.0.1:26380", "127.0.0.1:26381" ]
Contributor:

Even if we consolidate redis-sentinel.go and redis.go, I think we can keep the configuration logic as is: one section for redis sentinel and the other for redis cluster. I agree with the current approach here; let's just use the same adapter (redis.go) for both regular redis and redis_sentinel.

@@ -14,6 +14,7 @@ require (
github.com/prometheus/client_golang v1.12.2
github.com/prometheus/client_model v0.2.0
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475
github.com/redis/go-redis/v9 v9.4.0
Contributor:

If we consolidate redis-sentinel.go and redis.go, we shouldn't need to import both v8.11.5 and v9.4.0 versions of effectively the same tool. Can we please test both redis cluster and redis sentinel with v9.4.0 and keep only the updated version?

@sebmil-daily (Contributor, Author)

Hi @guscarreon, thanks for the review and the suggestion.

I agree that this version has duplicated code; the objective was to clearly separate the configuration options for each scenario (as opposed to our initial PR, which mixed everything in redis.go) and to keep compatibility with existing Redis deployments.

I will try merging both source files while keeping distinct configuration sections, but I won't be able to submit a new PR for at least two months, as we have other priorities at the moment.

Right now, a version compiled from this PR has been deployed in production on our side for a week; it's working great and fulfills all our requirements.

@przemkaczmarek przemkaczmarek self-requested a review September 12, 2024 11:13
7 participants