
Cache CLUSTER SLOTS response for improving throughput and reduced latency. #53

Merged: 18 commits from cache-cluster-slots into valkey-io:unstable on May 22, 2024

Conversation

@roshkhatri (Member)

This PR adds logic to cache the CLUSTER SLOTS response for reduced latency, and updates the cache when a change in the cluster is detected.

Historically, the CLUSTER SLOTS command was deprecated; however, all the server clients have been using CLUSTER SLOTS and have not migrated to CLUSTER SHARDS. In the future, this logic can be extended to other commands to improve the performance of the engine.
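
Conceptually, the change boils down to the minimal sketch below. This is a hedged illustration with hypothetical names (cached_slots_reply, build_slots_reply, and so on are not the identifiers added in this PR): the serialized reply is built once, served verbatim to every subsequent caller, and dropped whenever the cluster topology changes.

```
/* Minimal sketch of the caching idea; names are illustrative, not the
 * identifiers used in this PR. */
#include <stdlib.h>
#include <string.h>

static char *cached_slots_reply = NULL;   /* serialized RESP reply */
static size_t cached_slots_len = 0;

/* Stand-in for the expensive walk over all slot ranges and node endpoints. */
static char *build_slots_reply(size_t *len) {
    const char *payload = "*0\r\n";       /* placeholder RESP payload */
    *len = strlen(payload);
    return strdup(payload);
}

/* Serve the cached reply if present; otherwise build it once and keep it. */
const char *cluster_slots_reply(size_t *len) {
    if (cached_slots_reply == NULL)
        cached_slots_reply = build_slots_reply(&cached_slots_len);
    *len = cached_slots_len;
    return cached_slots_reply;
}

/* Called whenever slot ownership, endpoints, or announced names change. */
void clear_cached_slots_reply(void) {
    free(cached_slots_reply);
    cached_slots_reply = NULL;
    cached_slots_len = 0;
}
```

The cost of building the reply is then paid only on the first CLUSTER SLOTS call after a topology change; every other call is a plain copy of the cached bytes.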

To measure the performance gain this PR offers, I ran benchmarks for two scenarios with 2 primaries in the cluster:

  1. Best case scenario - contiguous slot ownership, where the primaries own slots 0-8191 and 8192-16383 respectively.
  2. Worst case scenario - fully fragmented slot ownership, where one primary owns the odd-numbered slots and the other owns the even-numbered slots. Since CLUSTER SLOTS emits one entry per contiguous slot range, this produces the largest and most expensive reply to build.

Complete benchmark results:

  1. For the BEST CASE we see a 76% gain in throughput, from ~21k to ~37k RPS, and a 2x drop in average latency, from 0.044 ms to 0.021 ms, for 100,000 requests.

BEST CASE unstable branch:

% src/placeholderkv-benchmark -n 100000 -c 1 CLUSTER SLOTS
====== CLUSTER SLOTS ======                                                   
  100000 requests completed in 4.60 seconds
  1 parallel clients
  28 bytes payload
  keep alive: 1
  host configuration "save": 
  host configuration "appendonly": no
  multi-thread: no

Latency by percentile distribution:
0.000% <= 0.039 milliseconds (cumulative count 1)
50.000% <= 0.047 milliseconds (cumulative count 99148)
99.219% <= 0.055 milliseconds (cumulative count 99740)
99.805% <= 0.063 milliseconds (cumulative count 99873)
99.902% <= 0.127 milliseconds (cumulative count 99916)
99.951% <= 0.167 milliseconds (cumulative count 99955)
99.976% <= 0.223 milliseconds (cumulative count 99980)
99.988% <= 0.239 milliseconds (cumulative count 99988)
99.994% <= 0.287 milliseconds (cumulative count 99994)
99.997% <= 0.303 milliseconds (cumulative count 99998)
99.998% <= 0.311 milliseconds (cumulative count 99999)
99.999% <= 0.631 milliseconds (cumulative count 100000)
100.000% <= 0.631 milliseconds (cumulative count 100000)

Cumulative distribution of latencies:
99.892% <= 0.103 milliseconds (cumulative count 99892)
99.966% <= 0.207 milliseconds (cumulative count 99966)
99.998% <= 0.303 milliseconds (cumulative count 99998)
99.999% <= 0.407 milliseconds (cumulative count 99999)
100.000% <= 0.703 milliseconds (cumulative count 100000)

Summary:
  throughput summary: 21724.96 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.044     0.032     0.047     0.047     0.047     0.631

BEST CASE This PR:

% src/placeholderkv-benchmark -n 100000 -c 1 CLUSTER SLOTS
====== CLUSTER SLOTS ======                                                   
  100000 requests completed in 2.67 seconds
  1 parallel clients
  28 bytes payload
  keep alive: 1
  host configuration "save": 
  host configuration "appendonly": no
  multi-thread: no

Latency by percentile distribution:
0.000% <= 0.023 milliseconds (cumulative count 94098)
96.875% <= 0.031 milliseconds (cumulative count 99712)
99.805% <= 0.039 milliseconds (cumulative count 99899)
99.902% <= 0.047 milliseconds (cumulative count 99925)
99.951% <= 0.111 milliseconds (cumulative count 99962)
99.976% <= 0.191 milliseconds (cumulative count 99977)
99.988% <= 0.311 milliseconds (cumulative count 99988)
99.994% <= 0.399 milliseconds (cumulative count 99994)
99.997% <= 0.431 milliseconds (cumulative count 99997)
99.998% <= 0.471 milliseconds (cumulative count 99999)
99.999% <= 0.551 milliseconds (cumulative count 100000)
100.000% <= 0.551 milliseconds (cumulative count 100000)

Cumulative distribution of latencies:
99.947% <= 0.103 milliseconds (cumulative count 99947)
99.980% <= 0.207 milliseconds (cumulative count 99980)
99.986% <= 0.303 milliseconds (cumulative count 99986)
99.995% <= 0.407 milliseconds (cumulative count 99995)
99.999% <= 0.503 milliseconds (cumulative count 99999)
100.000% <= 0.607 milliseconds (cumulative count 100000)

Summary:
  throughput summary: 37439.16 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.021     0.016     0.023     0.031     0.031     0.551

  2. For the WORST CASE we see a 24% gain in throughput, from 49.07 to 61.15 RPS, and a 22x drop in average latency, from 4.026 ms to 0.181 ms, for 1,000 requests.

WORST CASE unstable - Single Benchmark:

% src/placeholderkv-benchmark -n 1000 -c 1 CLUSTER SLOTS
====== CLUSTER SLOTS ======                                             
  1000 requests completed in 20.38 seconds
  1 parallel clients
  28 bytes payload
  keep alive: 1
  host configuration "save": 
  host configuration "appendonly": no
  multi-thread: no

Latency by percentile distribution:
0.000% <= 3.983 milliseconds (cumulative count 10)
50.000% <= 4.007 milliseconds (cumulative count 679)
75.000% <= 4.015 milliseconds (cumulative count 768)
87.500% <= 4.087 milliseconds (cumulative count 889)
93.750% <= 4.103 milliseconds (cumulative count 955)
96.875% <= 4.119 milliseconds (cumulative count 970)
98.438% <= 4.167 milliseconds (cumulative count 986)
99.219% <= 4.727 milliseconds (cumulative count 993)
99.609% <= 5.127 milliseconds (cumulative count 997)
99.805% <= 5.367 milliseconds (cumulative count 999)
99.902% <= 5.463 milliseconds (cumulative count 1000)
100.000% <= 5.463 milliseconds (cumulative count 1000)

Cumulative distribution of latencies:
0.000% <= 0.103 milliseconds (cumulative count 0)
95.500% <= 4.103 milliseconds (cumulative count 955)
99.600% <= 5.103 milliseconds (cumulative count 996)
100.000% <= 6.103 milliseconds (cumulative count 1000)

Summary:
  throughput summary: 49.07 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        4.026     3.976     4.007     4.103     4.591     5.463

WORST CASE This PR - Single Benchmark:

src/placeholderkv-benchmark -n 1000 -c 1 CLUSTER SLOTS 
====== CLUSTER SLOTS ======                                             
  1000 requests completed in 16.35 seconds
  1 parallel clients
  28 bytes payload
  keep alive: 1
  host configuration "save": 
  host configuration "appendonly": no
  multi-thread: no

Latency by percentile distribution:
0.000% <= 0.159 milliseconds (cumulative count 3)
50.000% <= 0.175 milliseconds (cumulative count 804)
87.500% <= 0.183 milliseconds (cumulative count 924)
93.750% <= 0.191 milliseconds (cumulative count 955)
96.875% <= 0.199 milliseconds (cumulative count 969)
98.438% <= 0.263 milliseconds (cumulative count 988)
99.219% <= 0.271 milliseconds (cumulative count 998)
99.805% <= 0.831 milliseconds (cumulative count 999)
99.902% <= 6.935 milliseconds (cumulative count 1000)
100.000% <= 6.935 milliseconds (cumulative count 1000)

Cumulative distribution of latencies:
0.000% <= 0.103 milliseconds (cumulative count 0)
97.200% <= 0.207 milliseconds (cumulative count 972)
99.800% <= 0.303 milliseconds (cumulative count 998)
99.900% <= 0.903 milliseconds (cumulative count 999)
100.000% <= 7.103 milliseconds (cumulative count 1000)

Summary:
  throughput summary: 61.15 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.181     0.152     0.175     0.191     0.271     6.935

It seemed like the benchmark client itself was the bottleneck in the worst case.

So I also ran 5 benchmarks in parallel for the worst-case scenario against both unstable and this PR:
On unstable, each benchmark ran at around 25 RPS with ~17.8 ms average latency, and the server's CPU utilization went up to around 56%.
With this PR, each benchmark ran at around 48 RPS with ~0.28 ms average latency, and the server's CPU utilization dropped to just 12%.

That is roughly a 100% gain in RPS, about 60x lower latency, and around a 5x drop in CPU usage.

In the images below, placeholderkv-s is the server and placeholderkv-b are the benchmark clients.

WORST CASE scenario unstable - 5 Benchmarks:

CPU utilizations:
Screenshot 2024-03-27 at 12 47 59 PM
Benchmarks:
Screenshot 2024-03-27 at 12 49 31 PM

WORST CASE scenario PR - 5 Benchmarks:

CPU utilizations:
Screenshot 2024-03-27 at 12 54 43 PM
Benchmarks:
Screenshot 2024-03-27 at 12 55 32 PM

CONCLUSION:

  1. For the BEST CASE: 76% gain in throughput and a 2x drop in average latency.
  2. For the WORST CASE: 100% gain in throughput, around a 60x drop in average latency, and a 5x drop in CPU usage.

Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
@roshkhatri roshkhatri force-pushed the cache-cluster-slots branch from 18c139e to 1836d9f on March 27, 2024 20:19
@roshkhatri (Member, Author)

Added Sign-off to the PR

@roshkhatri roshkhatri changed the title from "Cache CLUSTER SLOTS response for reduced latency and throughput." to "Cache CLUSTER SLOTS response for improving throughput and reduced latency." Mar 27, 2024
@roshkhatri roshkhatri requested a review from hpatro March 27, 2024 21:41
@hpatro hpatro left a comment (Contributor)

This improves the CLUSTER SLOTS response time significantly in a scenario of fragmented slots (the extreme case being slots 1, 3, 5, and so on on shard 1 and the remaining slots on shard 2) by caching the output and sharing it across client connections. Lots of clients still use the CLUSTER SLOTS command for topology discovery, so it would be quite beneficial for them.

Regarding the PR, one of the key challenges is detecting any change in the cluster topology and invoking clearCachedClusterSlotsResp to recompute the CLUSTER SLOTS response. It would be nice to hear if there are any better ways to approach this.

@valkey-io/core-team Please take a look.

@PingXie PingXie left a comment (Member)

I have not thought through all the cache invalidation paths. Need to take a closer look next.

Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
@madolson (Member)

@roshkhatri There is a memory sanitizer error. Looks like we're leaking some memory now.

Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>

codecov bot commented May 10, 2024

Codecov Report

Attention: Patch coverage is 92.40506%, with 6 lines in your changes missing coverage. Please review.

Project coverage is 69.83%. Comparing base (72f2a87) to head (c0228eb).
Report is 15 commits behind head on unstable.

Additional details and impacted files
@@             Coverage Diff              @@
##           unstable      #53      +/-   ##
============================================
+ Coverage     69.67%   69.83%   +0.15%     
============================================
  Files           109      109              
  Lines         61801    61864      +63     
============================================
+ Hits          43062    43200     +138     
+ Misses        18739    18664      -75     
Files Coverage Δ
src/cluster_legacy.c 86.04% <100.00%> (-0.20%) ⬇️
src/config.c 77.88% <100.00%> (+0.07%) ⬆️
src/connection.h 93.58% <ø> (ø)
src/networking.c 85.21% <100.00%> (+0.26%) ⬆️
src/cluster.c 85.89% <83.33%> (-0.39%) ⬇️

... and 13 files with indirect coverage changes

@madolson madolson left a comment (Member)

I think the one open question left is @PingXie's concern about using the original client vs. a new caching client, so I would like to close that out before merging.

@hpatro commented May 13, 2024 (Contributor)

> I think the one open question left is @PingXie's concern about using the original client vs. a new caching client, so I would like to close that out before merging.

@roshkhatri had tried the original client approach first. The change becomes a bit complex because it has to track both the clientReplyBlock where the reply starts and the offset within it. With the new caching client, all of the offset bookkeeping becomes unnecessary since the buffer is guaranteed to be empty.

@madolson (Member)

> had tried the original client approach first. The change becomes a bit complex because it has to track both the clientReplyBlock where the reply starts and the offset within it. With the new caching client, all of the offset bookkeeping becomes unnecessary since the buffer is guaranteed to be empty.

I would like to see more information about this. I'm still not really convinced this is true.

@roshkhatri (Member, Author)

We add a dummy node using addReplyDeferredLen where we start our command reply. However, after we finish adding the command reply to the reply list and want to fill in the length of the reply, we optimize it via setDeferredReply (https://github.com/valkey-io/valkey/blob/unstable/src/networking.c#L762), which makes it complex to track the start node and the address where the command output begins. The implementation using a caching client is cleaner and reusable, without having to touch a lot of core networking code.
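
Roughly, the dedicated-client idea looks like the sketch below (hypothetical types and names, not the PR's actual code): because the recording client's reply buffer starts empty, the cached bytes are simply the whole buffer, and there is no start node or intra-block offset to remember.

```
/* Hypothetical sketch: render CLUSTER SLOTS through a dedicated
 * "caching client" whose reply buffer starts empty, so the cached bytes
 * are simply the whole buffer -- no start node or offset to remember.
 * Names are illustrative, not this PR's identifiers; error handling is
 * omitted. */
#include <stdlib.h>
#include <string.h>

typedef struct {
    char  *buf;   /* reply bytes accumulated so far */
    size_t len;
} caching_client;

/* Stand-in for the existing reply-building code that appends to a client. */
static void reply_cluster_slots(caching_client *c) {
    const char *payload = "*0\r\n";                /* placeholder RESP */
    c->buf = realloc(c->buf, c->len + strlen(payload));
    memcpy(c->buf + c->len, payload, strlen(payload));
    c->len += strlen(payload);
}

static char  *cached_reply = NULL;
static size_t cached_reply_len = 0;

void refresh_cluster_slots_cache(void) {
    caching_client recorder = { .buf = NULL, .len = 0 };  /* empty buffer */
    reply_cluster_slots(&recorder);   /* output is guaranteed to start at offset 0 */
    free(cached_reply);
    cached_reply = recorder.buf;      /* the whole buffer is the cache */
    cached_reply_len = recorder.len;
}
```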

@madolson (Member)

> However, after we finish adding the command reply to the reply list and want to fill in the length of the reply, we optimize it via setDeferredReply (https://github.com/valkey-io/valkey/blob/unstable/src/networking.c#L762), which makes it complex to track the start node and the address where the command output begins.

I don't think this is where the complexity stems from. My original concern was that there are a lot of other edge cases around the client that could corrupt the output, such as disabling client replies or hitting the client output buffer (CoB) limits. Having a dedicated client side-steps a lot of this complexity since we control that secondary client.

@hpatro commented May 13, 2024 (Contributor)

> However, after we finish adding the command reply to the reply list and want to fill in the length of the reply, we optimize it via setDeferredReply (https://github.com/valkey-io/valkey/blob/unstable/src/networking.c#L762), which makes it complex to track the start node and the address where the command output begins.

> I don't think this is where the complexity stems from. My original concern was that there are a lot of other edge cases around the client that could corrupt the output, such as disabling client replies or hitting the client output buffer (CoB) limits. Having a dedicated client side-steps a lot of this complexity since we control that secondary client.

Yes, that's also one of the points we discussed internally 👍. I'm aligned with the current approach; if there are no strong concerns I would like to merge this in.

@PingXie @madolson

@hpatro commented May 13, 2024 (Contributor)

> had tried the original client approach first. The change becomes a bit complex because it has to track both the clientReplyBlock where the reply starts and the offset within it. With the new caching client, all of the offset bookkeeping becomes unnecessary since the buffer is guaranteed to be empty.

> I would like to see more information about this. I'm still not really convinced this is true.

@roshkhatri Should be able to share the diff.

Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
@madolson madolson self-requested a review May 14, 2024 01:41
@madolson madolson left a comment (Member)

So my sanity check for when we need to invalidate (a sketch of the corresponding call sites follows the list):

  1. When the announced IP or hostname config changes ✅
  2. When the preferred endpoint type changes ✅
  3. Whether the cluster uses TLS cannot change, so that default cannot change.
  4. When a node updates its slot ownership ✅
  5. When a failover happens ✅
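
In code terms, the checklist reduces to calling one cache-clearing helper from each of those paths. A hypothetical outline, with illustrative names rather than the PR's actual call sites:

```
/* Hypothetical outline mirroring the checklist above; function names are
 * illustrative, not the PR's actual call sites. */
static void clear_cached_cluster_slots_response(void) {
    /* Free the cached reply and reset the pointer (see the earlier sketch). */
}

void on_announce_config_changed(void) { clear_cached_cluster_slots_response(); } /* (1) announced IP/hostname */
void on_endpoint_type_changed(void)   { clear_cached_cluster_slots_response(); } /* (2) preferred endpoint type */
void on_slot_ownership_changed(void)  { clear_cached_cluster_slots_response(); } /* (4) slot assignment change */
void on_failover_completed(void)      { clear_cached_cluster_slots_response(); } /* (5) failover / role change */
```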

Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
@madolson madolson left a comment (Member)

OK, I'm happy with all the updates. @PingXie Let me know if you want to take another look at it before merging.

@PingXie PingXie left a comment (Member)

Mostly readability comments.

Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
…es in cluster_legacy file

Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
@hpatro hpatro left a comment (Contributor)

Thanks for working on it. 👍

@PingXie PingXie left a comment (Member)

There is one last nitpick - not a blocker to me.

I like this PR :-). Thanks @roshkhatri!

@madolson madolson merged commit c478206 into valkey-io:unstable May 22, 2024
17 checks passed
@zuiderkwast (Contributor)

We're improving a command that's deprecated. :)

@PingXie commented May 22, 2024 (Member)

> We're improving a command that's deprecated. :)

Maybe we should reconsider the decision :).

@madolson madolson added the release-notes label (This issue should get a line item in the release notes) May 22, 2024
@madolson (Member)

@zuiderkwast @PingXie I think we should un-deprecate it; @hpatro and I were just talking about it. He'll open a PR and we can discuss it there.

@hpatro commented May 22, 2024 (Contributor)

> We're improving a command that's deprecated. :)

Have you bugged our office? :D

@roshkhatri roshkhatri deleted the cache-cluster-slots branch May 23, 2024 12:02
jonathanspw pushed a commit to jonathanspw/valkey that referenced this pull request May 23, 2024
…ency. (valkey-io#53)

This commit adds logic to cache the `CLUSTER SLOTS` response for reduced
latency and also updates the cache when a change in the cluster is
detected.

Historically, the `CLUSTER SLOTS` command was deprecated; however, all the
server clients have been using `CLUSTER SLOTS` and have not migrated to
`CLUSTER SHARDS`. In the future this logic can be extended to other
commands to improve the performance of the engine.

---------

Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>

convert centos 7 tests to almalinux 8
zuiderkwast pushed a commit that referenced this pull request May 24, 2024
Undeprecate the CLUSTER SLOTS command. This command is widely used by
clients to discover the cluster topology, and with the recent change to
improve the performance of `CLUSTER SLOTS` via #53, as well as our plans
to further improve its usability via #517, it makes sense to
undeprecate this command.

---------

Signed-off-by: Harkrishn Patro <harkrisp@amazon.com>
enjoy-binbin added a commit to enjoy-binbin/valkey that referenced this pull request May 28, 2024
In valkey-io#53, we will cache the CLUSTER SLOTS response to improve the
throughput and reduce the latency.

In the code snippet below, the second cluster slots will use the
old hostname:
```
config set cluster-preferred-endpoint-type hostname
config set cluster-announce-hostname old-hostname.com
multi
cluster slots
config set cluster-announce-hostname new-hostname.com
cluster slots
exec
```

When updating the hostname, in updateAnnouncedHostname, we will set
CLUSTER_TODO_SAVE_CONFIG and we will call clearCachedClusterSlotsResponse
in clusterSaveConfigOrDie, so it is harmless in most cases.

We will call clearCachedClusterSlotsResponse in updateClusterAnnouncedPort,
so it is reasonable to also call clearCachedClusterSlotsResponse in
updateClusterHostname.

Signed-off-by: Binbin <binloveplay1314@qq.com>
enjoy-binbin added a commit that referenced this pull request May 30, 2024
#564)

In #53, we will cache the CLUSTER SLOTS response to improve the
throughput and reduce the latency.

In the code snippet below, the second cluster slots will use the
old hostname:
```
config set cluster-preferred-endpoint-type hostname
config set cluster-announce-hostname old-hostname.com
multi
cluster slots
config set cluster-announce-hostname new-hostname.com
cluster slots
exec
```

When updating the hostname, in updateAnnouncedHostname, we will set
CLUSTER_TODO_SAVE_CONFIG and we will call clearCachedClusterSlotsResponse
in clusterSaveConfigOrDie, so it is harmless in most cases.

Move the clearCachedClusterSlotsResponse call to clusterDoBeforeSleep
instead of scheduling it to be called in clusterSaveConfigOrDie.

Signed-off-by: Binbin <binloveplay1314@qq.com>
soloestoy added a commit that referenced this pull request Jun 28, 2024
PR #53 introduced a cache of the CLUSTER SLOTS response, but the cache has
some problems for different types of clients:

1. the RESP version is wrongly ignored:

    ```
    $./valkey-cli
    127.0.0.1:6379> cluster slots
    1) 1) (integer) 0
       2) (integer) 16383
       3) 1) ""
          2) (integer) 6379
          3) "f1aeceb352401ce57acd432c68c60b359c00ef85"
          4) (empty array)
    127.0.0.1:6379> hello 3
    1# "server" => "valkey"
    2# "version" => "255.255.255"
    3# "proto" => (integer) 3
    4# "id" => (integer) 3
    5# "mode" => "cluster"
    6# "role" => "master"
    7# "modules" => (empty array)
    127.0.0.1:6379> cluster slots
    1) 1) (integer) 0
       2) (integer) 16383
       3) 1) ""
          2) (integer) 6379
          3) "f1aeceb352401ce57acd432c68c60b359c00ef85"
          4) (empty array)
    ```

    RESP3 should get an "empty hash" but gets RESP2's "empty array".

2. we should use the original client's connection type, otherwise Lua/Function
and Module calls would get the wrong port:

    ```
    $./valkey-cli --tls --insecure -p 6789
    127.0.0.1:6789> config get port tls-port
    1) "tls-port"
    2) "6789"
    3) "port"
    4) "6379"
    127.0.0.1:6789> cluster slots
    1) 1) (integer) 0
       2) (integer) 16383
       3) 1) ""
          2) (integer) 6789
          3) "f1aeceb352401ce57acd432c68c60b359c00ef85"
          4) (empty array)
    127.0.0.1:6789> eval "return redis.call('cluster','slots')" 0
    1) 1) (integer) 0
       2) (integer) 16383
       3) 1) ""
          2) (integer) 6379
          3) "f1aeceb352401ce57acd432c68c60b359c00ef85"
          4) (empty array)
        ```

---------

Signed-off-by: zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
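
One way to read the two problems above is that the cache needs to be keyed by every client property that affects the rendered reply. A hedged sketch of that idea (hypothetical names; an illustration of the issue rather than the actual follow-up fix):

```
/* Hypothetical sketch: key the cached reply by the two client properties
 * the commit above calls out (RESP version and connection type) instead
 * of keeping a single global entry. Illustrative only; not the actual
 * follow-up fix. */
#include <stdlib.h>
#include <string.h>

enum { RESP2, RESP3, RESP_MAX };
enum { CONN_TCP, CONN_TLS, CONN_MAX };

/* Stand-in for the real builder, which formats the advertised port and
 * the reply framing differently per protocol and connection type. */
static char *build_slots_reply(int resp, int conn, size_t *len) {
    (void)resp; (void)conn;
    const char *payload = "*0\r\n";          /* placeholder payload */
    *len = strlen(payload);
    return strdup(payload);
}

static char  *cache[RESP_MAX][CONN_MAX];
static size_t cache_len[RESP_MAX][CONN_MAX];

const char *cached_cluster_slots(int resp, int conn, size_t *len) {
    if (cache[resp][conn] == NULL)
        cache[resp][conn] = build_slots_reply(resp, conn, &cache_len[resp][conn]);
    *len = cache_len[resp][conn];
    return cache[resp][conn];
}
```
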
Labels: performance, release-notes