[Core] GCS FT with redis sentinel #47335
Conversation
Signed-off-by: Kan Wang <kan.wang@datadoghq.com>
hey @rkooo567 gentle ping on this PR.
Hey, @kanwang per Ray Slack; aiming to get to this early next week.
hey @anyscalesam
Hi @kanwang sorry, the team is a bit busy right now. We will review it ASAP.
Hey team, any chance that we can get this reviewed this week or early next week? Thank you! cc @jjyao
Very sorry for the late review. I'll actively review this PR from now on.
src/ray/gcs/redis_context.cc (outdated):
@@ -506,6 +564,14 @@ Status RedisContext::Connect(const std::string &address,
  // Ray has some restrictions for RedisDB. Validate it here.
  ValidateRedisDB(*this);
If it's Redis Sentinel, what will INFO CLUSTER return?
It will return empty; basically that section doesn't exist. Here's the full result of INFO on Redis Sentinel:
# Server
redis_version:7.0.7
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:5dad631ce1b0fc10
redis_mode:sentinel
os:Linux 6.8.0-1017-aws x86_64
arch_bits:64
monotonic_clock:POSIX clock_gettime
multiplexing_api:epoll
atomicvar_api:c11-builtin
gcc_version:9.4.0
process_id:7
process_supervised:no
run_id:de8c925350decfbb0abd5940bf718855eb191f8d
tcp_port:26379
server_time_usec:1730346466295003
uptime_in_seconds:728236
uptime_in_days:8
hz:14
configured_hz:10
lru_clock:2293218
executable:/redis-sentinel
config_file:/etc/redis/sentinel.conf
io_threads_active:0
# Clients
connected_clients:3
cluster_connections:0
maxclients:10000
client_recent_max_input_buffer:20480
client_recent_max_output_buffer:0
blocked_clients:0
tracking_clients:0
clients_in_timeout_table:0
# Stats
total_connections_received:121374
total_commands_processed:2334908
instantaneous_ops_per_sec:3
total_net_input_bytes:135052866
total_net_output_bytes:163753996
total_net_repl_input_bytes:0
total_net_repl_output_bytes:0
instantaneous_input_kbps:0.17
instantaneous_output_kbps:0.02
instantaneous_input_repl_kbps:0.00
instantaneous_output_repl_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
expire_cycle_cpu_milliseconds:12828
evicted_keys:0
evicted_clients:0
total_eviction_exceeded_time:0
current_eviction_exceeded_time:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
pubsubshard_channels:0
latest_fork_usec:0
total_forks:0
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0
total_active_defrag_time:0
current_active_defrag_time:0
tracking_total_keys:0
tracking_total_items:0
tracking_total_prefixes:0
unexpected_error_replies:0
total_error_replies:97084
dump_payload_sanitizations:0
total_reads_processed:2502068
total_writes_processed:2380696
io_threaded_reads_processed:0
io_threaded_writes_processed:0
reply_buffer_shrinks:29
reply_buffer_expands:0
# CPU
used_cpu_sys:1282.061897
used_cpu_user:931.763629
used_cpu_sys_children:0.032320
used_cpu_user_children:0.026019
used_cpu_sys_main_thread:1282.061999
used_cpu_user_main_thread:931.762826
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_tilt_since_seconds:-1
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=redis-ha,status=ok,address=10.112.187.90:6379,slaves=2,sentinels=3
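(Editor's note: the redis_mode field above is the reliable discriminator between the three deployment modes. A minimal sketch, assuming redis-py and a Sentinel on the default port 26379, of how a client could tell them apart; this is illustrative, not the PR's code.)

import redis

# A Sentinel answers INFO like a regular Redis server, but reports
# redis_mode as "sentinel"; a cluster node reports "cluster".
r = redis.Redis(host="127.0.0.1", port=26379)
mode = r.info("server").get("redis_mode", "standalone")
if mode == "sentinel":
    print("endpoint is a Sentinel; resolve the primary before connecting")
elif mode == "cluster":
    print("endpoint is a Redis Cluster node")
else:
    print("plain standalone Redis")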
src/ray/gcs/redis_context.cc (outdated):
  // directly. continue otherwise.
  return sentinel_status;
}

// Find the true leader
This is needed only for Redis Cluster. I feel our high-level code for RedisContext::Connect should be:

if (redis_cluster) {
  ValidateRedisCluster()  // make sure it only has 1 shard
  FindMasterUsingMOVED()
} else {
  // redis sentinel
  FindMasterUsingSentinel()
}
Thanks for the suggestion! Refactored a little bit, so the logic in RedisContext::Connect is now mostly untouched: I added one validation for redis_sentinel and an else branch to handle Sentinel.
This should be ready for re-review.
Co-authored-by: Jiajun Yao <jeromeyjj@gmail.com> Signed-off-by: kanwang <kerrywang881122@gmail.com>
Signed-off-by: Kan Wang <kan.wang@datadoghq.com>
Thanks! Addressed the feedback.
LG. I'll review the tests later.
Have you tested with real Redis Sentinel?
src/ray/gcs/redis_context.cc (outdated):
bool isRedisSentinel(RedisContext &context) {
  auto reply = context.RunArgvSync(std::vector<std::string>{"INFO", "SENTINEL"});
  if (reply->IsNil() || reply->IsError() || reply->ReadAsString().length() == 0) {
    RAY_LOG(INFO) << "failed to get redis sentinel info, continue as a regular redis.";
I'd remove this log and add logs inside ConnectRedisCluster and ConnectRedisSentinel like
RAY_LOG(INFO) << "Connect to Redis cluster/sentinel";
src/ray/gcs/redis_context.cc (outdated):
    ::redisCommandArgv(context.sync_context(), cmds.size(), argv.data(), argc.data()));

RAY_CHECK(redis_reply) << "failed to get redis sentinel masters info";
RAY_CHECK(redis_reply->type == REDIS_REPLY_ARRAY)
RAY_CHECK_EQ
RAY_LOG(INFO) << "connecting to the redis primary node behind sentinel: " << actual_ip | ||
<< ":" << actual_port; | ||
context.Disconnect(); | ||
return context.Connect(actual_ip, std::stoi(actual_port), password, enable_ssl); |
When we connect to the primary, IsRedisSentinel will return false now?
Co-authored-by: Jiajun Yao <jeromeyjj@gmail.com> Signed-off-by: kanwang <kerrywang881122@gmail.com>
Yes, I did test it on redis sentinel locally (with a remote redis sentinel instance).
That is correct. Sentinel is basically a special Redis instance that runs in front of a Redis instance and its replicas; it knows their topology and failover state. Once you've queried the primary address, you connect to the Redis instance directly. One thing still missing is reconnection: when Redis fails over, the primary/replica information is updated in Sentinel, but Ray won't reconnect until the head restarts. That's acceptable for us for the moment. I opened an issue about this before (#46982), and I saw a more recent issue being worked on (#47419). They are not exactly the same issue (vs. Sentinel reconnect), but in general, if the head tries to reconnect to Redis instead of failing, I think all cases will be solved. I will see if there are more updates there, or I can try to contribute that too.
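(Editor's note: the documented way a client resolves the primary through Sentinel is the SENTINEL GET-MASTER-ADDR-BY-NAME command. A minimal redis-py sketch of the flow described above, reusing the redis-ha master name from the earlier INFO dump; the addresses are assumptions, and this is not the PR's C++ code.)

import redis

sentinel = redis.Redis(host="127.0.0.1", port=26379)

# Ask Sentinel for the current primary of the monitored group; the reply
# is a two-element array: [ip, port].
ip, port = sentinel.execute_command(
    "SENTINEL", "GET-MASTER-ADDR-BY-NAME", "redis-ha"
)

# Drop the Sentinel connection and talk to the primary directly, which is
# why a subsequent INFO SENTINEL check on the new connection comes back empty.
primary = redis.Redis(host=ip.decode(), port=int(port))
primary.ping()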
Looks great!!
python/ray/tests/conftest.py (outdated):
]
# setup replicas of the master

time.sleep(1)
Can we do better than a sleep, i.e. something more deterministic?
Good point! I updated it to a wait function that checks all Redis instances are up.
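(Editor's note: a sketch of the poll-instead-of-sleep idea, assuming redis-py; the helper name, port list, and timeout are made up, and this is not the PR's actual fixture code.)

import time
import redis

def wait_for_redis_instances(ports, timeout_s=30):
    # Poll each instance with PING until it answers, instead of sleeping a
    # fixed interval and hoping everything is up by then.
    deadline = time.monotonic() + timeout_s
    for port in ports:
        while True:
            try:
                redis.Redis(host="127.0.0.1", port=port).ping()
                break  # this instance is up; check the next one
            except redis.exceptions.ConnectionError:
                if time.monotonic() > deadline:
                    raise TimeoutError(f"redis on port {port} never came up")
                time.sleep(0.1)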
python/ray/tests/conftest.py (outdated):
for port in redis_ports[2:]:
    redis_cli = get_redis_cli(port, False)
    redis_cli.replicaof("127.0.0.1", master_port)
sentinel_process = start_redis_sentinel_instance(
indentation?
python/ray/_private/test_utils.py (outdated):
@@ -98,6 +98,18 @@ def redis_replicas():
    return int(os.environ.get("TEST_EXTERNAL_REDIS_REPLICAS", "1"))


def enable_redis_sentinel():
this is not used?
Co-authored-by: Jiajun Yao <jeromeyjj@gmail.com> Signed-off-by: kanwang <kerrywang881122@gmail.com>
Let me know when it's ready for another review.
Signed-off-by: Kan Wang <kan.wang@datadoghq.com>
Thank you for approving! Looks like all tests passed. Can you help with merging?
Hi @kanwang, does this change already fit your needs? I am a little concerned about the implementation because it apparently does not fully follow the protocol of Redis Sentinel and Redis Cluster.
Hey @rueian, it does solve our problem at the moment. Based on my understanding, this is how a client should connect to Redis Sentinel. It does not handle reconnection, which probably requires a bigger change in Ray. Which particular part of the protocol is it not following for Redis Sentinel? Let me know; if it's critical/breaking I can send a fix too.
Hi @kanwang, good to know your problem has been solved. However, the reconnection issue is exactly what I am concerned about. Both Redis Sentinel and Redis Cluster require clients to monitor their topology proactively and connect to the right nodes accordingly, not just at the beginning but throughout the application's lifetime. I know this kind of active monitoring is extremely difficult to implement, but if we don't do that, we probably cannot say publicly that GCS FT supports Sentinel and Cluster, because it will still fail on Redis topology changes. An easier way is querying the master before sending a command every time. Also, the way of querying the master should be … In terms of the Redis Cluster, the …
Implementing full support for Redis Sentinel and Redis Cluster is extremely hard. And even with the above aggressive reconnection approaches, we still need to keep discovering the latest topology of Sentinel and Cluster; otherwise, GCS FT will still fail if all preconfigured Redis addresses are down.
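(Editor's note: for comparison, redis-py's built-in Sentinel helper implements the active discovery described above: master_for() re-resolves the current primary whenever it reconnects, so failovers are followed automatically. A minimal sketch with an assumed local Sentinel and the redis-ha master name; Ray's C++ GCS client would need equivalent logic.)

from redis.sentinel import Sentinel

sentinel = Sentinel([("127.0.0.1", 26379)], socket_timeout=0.5)
# master_for() returns a client that queries Sentinel for the primary's
# address on each (re)connect, instead of pinning one address forever.
primary = sentinel.master_for("redis-ha", socket_timeout=0.5)
primary.set("ray_gcs_key", "value")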
Signed-off-by: Kan Wang <kan.wang@datadoghq.com> Signed-off-by: Connor Sanders <connor@elastiflow.com>
Signed-off-by: Kan Wang <kan.wang@datadoghq.com> Signed-off-by: hjiang <dentinyhao@gmail.com>
Our current solution is that when the topology changes, GCS exits and is restarted, and during restart it finds the new master. It's not ideal; we want GCS to automatically find the new master without exiting, and that is what @MortalHappiness is working on.
Why are these changes needed?
We want to use redis sentinel to support Ray GCS FT. I opened a ticket here: #46983. Redis sentinel should provide high availability without worrying too much about redis cluster operations.
Related issue number
Closes #46983
Checks
- I've signed off every commit (git commit -s) in this PR.
- I've run scripts/format.sh to lint the changes in this PR.
- If I'm adding a new method in Tune, I've added it in doc/source/tune/api/ under the corresponding .rst file.