
CCR: Do not minimize requesting range on leader #30980

Merged
dnhatn merged 4 commits into elastic:ccr from dnhatn:remove-seq-min-check on May 31, 2018

Conversation

dnhatn
Member

@dnhatn dnhatn commented May 30, 2018

Today, before reading operations on the leading shard, we minimize the requesting range with the global checkpoint. However, this might make the request invalid if the following shard generates a requesting range based on the global checkpoint from a primary shard and sends that request to a replica whose global checkpoint is lagging.

Another issue is that we are mutating the request when applying the minimization. If the request becomes invalid on a replica, we will retry that mutated request on the primary instead of the original one.

This commit removes the minimization and replaces it with a range check against the local checkpoint.
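
In code terms, the change swaps a request mutation for a read-only validation. A minimal sketch of the before and after, simplified from the snippets quoted in the review comments below (the exception message tail is illustrative, not the exact production text):

// Before: the leader clamped the upper bound to its (possibly stale) knowledge of the
// global checkpoint, mutating the request in the process.
request.maxSeqNo = Math.min(request.maxSeqNo, indexShard.getGlobalCheckpoint());

// After: the request is left untouched; the leader only verifies that its local history covers the range.
final long localCheckpoint = indexShard.getLocalCheckpoint();
if (localCheckpoint < request.minSeqNo || localCheckpoint < request.maxSeqNo) {
    throw new IllegalStateException("invalid request from_seqno=[" + request.minSeqNo + "], "
        + "to_seqno=[" + request.maxSeqNo + "], local_checkpoint=[" + localCheckpoint + "]");
}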

Today, before reading operations on the leading shard, we minimize the
requesting range with the global checkpoint. However, this might
generate an invalid range if the following shard generates a requesting
range based on the global checkpoint from a primary shard and sends that
request to a replica whose global checkpoint is lagging.

I see two possible solutions for this:

1. Remove the minimization, as the requesting ranges are safely
generated by the following shards.
2. Apply the minimization to both min_seqno and max_seqno.

I pick the first approach in this PR.
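
For contrast, a sketch of what the second option might have looked like (illustrative only; this approach was not taken):

// Option 2 (not taken): minimize both bounds against the leader copy's knowledge of the global checkpoint.
final long globalCheckpoint = indexShard.getGlobalCheckpoint();
request.minSeqNo = Math.min(request.minSeqNo, globalCheckpoint);
request.maxSeqNo = Math.min(request.maxSeqNo, globalCheckpoint);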
@dnhatn dnhatn added the >bug and :Distributed Indexing/CCR (Issues around the Cross Cluster State Replication features) labels May 30, 2018
@elasticmachine
Collaborator

Pinging @elastic/es-distributed

@martijnvg
Member

Thanks @dnhatn. Although this option seems reasonable to me, it doesn't explain what I have been seeing in the latest benchmark run. It looks like the invalid range occurred on all shard copies when retrieving write operations via the shard changes API.

The error happened on the follower node:

[2018-05-30T16:40:39,949][ERROR][o.e.x.c.a.ShardFollowTasksExecutor$ChunksCoordinator] [leader3][3] Failure processing chunk [16453701/16453897]
org.elasticsearch.transport.RemoteTransportException: [ccr-es-cluster-b-mvg-0][10.132.0.4:39300][indices:data/read/xpack/ccr/shard_changes]
Caused by: org.elasticsearch.transport.RemoteTransportException: [ccr-es-cluster-b-mvg-2][10.132.0.5:39300][indices:data/read/xpack/ccr/shard_changes[s]]
Caused by: java.lang.IllegalArgumentException: Invalid range; from_seqno [16453701], to_seqno [16451890]
	at org.elasticsearch.index.engine.LuceneChangesSnapshot.<init>(LuceneChangesSnapshot.java:83) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.index.engine.InternalEngine.newLuceneChangesSnapshot(InternalEngine.java:2361) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.index.shard.IndexShard.newLuceneChangesSnapshot(IndexShard.java:1621) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.xpack.ccr.action.ShardChangesAction.getOperationsBetween(ShardChangesAction.java:275) ~[?:?]
	at org.elasticsearch.xpack.ccr.action.ShardChangesAction$TransportAction.shardOperation(ShardChangesAction.java:242) ~[?:?]
	at org.elasticsearch.xpack.ccr.action.ShardChangesAction$TransportAction.shardOperation(ShardChangesAction.java:217) ~[?:?]
	at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$1.doRun(TransportSingleShardAction.java:112) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:724) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_151]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_151]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]

And this is the corresponding error on the coordinating node in the follower cluster:

[2018-05-30T16:40:39,780][TRACE][o.e.x.c.a.ShardChangesAction$TransportAction] [ccr-es-cluster-b-mvg-0] executing [org.elasticsearch.xpack.ccr.action.ShardChangesAction$Request@b4b58279] on shard [[leader3][3]]
[2018-05-30T16:40:39,812][TRACE][o.e.x.c.a.ShardChangesAction$TransportAction] [ccr-es-cluster-b-mvg-0] [leader3][3], node[PLfFdzJsRg2mOiP1zlttLA], [R], s[STARTED], a[id=F8kMQ9msTDukI7693S3k0g]: failed to execute [org.elasticsearch.xpack.ccr.action.ShardChangesAction$Request@37684446]
org.elasticsearch.transport.RemoteTransportException: [ccr-es-cluster-b-mvg-0][10.132.0.4:39300][indices:data/read/xpack/ccr/shard_changes[s]]
Caused by: java.lang.IllegalArgumentException: Invalid range; from_seqno [16453701], to_seqno [16451890]
	at org.elasticsearch.index.engine.LuceneChangesSnapshot.<init>(LuceneChangesSnapshot.java:83) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.index.engine.InternalEngine.newLuceneChangesSnapshot(InternalEngine.java:2361) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.index.shard.IndexShard.newLuceneChangesSnapshot(IndexShard.java:1621) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.xpack.ccr.action.ShardChangesAction.getOperationsBetween(ShardChangesAction.java:275) ~[?:?]
	at org.elasticsearch.xpack.ccr.action.ShardChangesAction$TransportAction.shardOperation(ShardChangesAction.java:242) ~[?:?]
	at org.elasticsearch.xpack.ccr.action.ShardChangesAction$TransportAction.shardOperation(ShardChangesAction.java:217) ~[?:?]
	at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$1.doRun(TransportSingleShardAction.java:112) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:724) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
[2018-05-30T16:40:39,819][TRACE][o.e.x.c.a.ShardChangesAction$TransportAction] [ccr-es-cluster-b-mvg-0] sending request [org.elasticsearch.xpack.ccr.action.ShardChangesAction$Request@37684446] to shard [[leader3][3]] on node [{ccr-es-cluster-b-mvg-2}{vNIfgWKGTb2B3tDYJ4HKbQ}{M85VH9STSZiXrBU7JaLOdg}{10.132.0.5}{10.132.0.5:39300}{xpack.installed=true}]
[2018-05-30T16:40:39,884][TRACE][o.e.x.c.a.ShardChangesAction$TransportAction] [ccr-es-cluster-b-mvg-0] [leader3][3], node[vNIfgWKGTb2B3tDYJ4HKbQ], [P], s[STARTED], a[id=WaKISuItS16-nvEFSVJrdg]: failed to execute [org.elasticsearch.xpack.ccr.action.ShardChangesAction$Request@37684446]
org.elasticsearch.transport.RemoteTransportException: [ccr-es-cluster-b-mvg-2][10.132.0.5:39300][indices:data/read/xpack/ccr/shard_changes[s]]
Caused by: java.lang.IllegalArgumentException: Invalid range; from_seqno [16453701], to_seqno [16451890]
	at org.elasticsearch.index.engine.LuceneChangesSnapshot.<init>(LuceneChangesSnapshot.java:83) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.index.engine.InternalEngine.newLuceneChangesSnapshot(InternalEngine.java:2361) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.index.shard.IndexShard.newLuceneChangesSnapshot(IndexShard.java:1621) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.xpack.ccr.action.ShardChangesAction.getOperationsBetween(ShardChangesAction.java:275) ~[?:?]
	at org.elasticsearch.xpack.ccr.action.ShardChangesAction$TransportAction.shardOperation(ShardChangesAction.java:242) ~[?:?]
	at org.elasticsearch.xpack.ccr.action.ShardChangesAction$TransportAction.shardOperation(ShardChangesAction.java:217) ~[?:?]
	at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$1.doRun(TransportSingleShardAction.java:112) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:724) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_151]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_151]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
[2018-05-30T16:40:39,887][DEBUG][o.e.x.c.a.ShardChangesAction$TransportAction] [ccr-es-cluster-b-mvg-0] null: failed to execute [org.elasticsearch.xpack.ccr.action.ShardChangesAction$Request@37684446]
org.elasticsearch.transport.RemoteTransportException: [ccr-es-cluster-b-mvg-2][10.132.0.5:39300][indices:data/read/xpack/ccr/shard_changes[s]]
Caused by: java.lang.IllegalArgumentException: Invalid range; from_seqno [16453701], to_seqno [16451890]
	at org.elasticsearch.index.engine.LuceneChangesSnapshot.<init>(LuceneChangesSnapshot.java:83) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.index.engine.InternalEngine.newLuceneChangesSnapshot(InternalEngine.java:2361) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.index.shard.IndexShard.newLuceneChangesSnapshot(IndexShard.java:1621) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.xpack.ccr.action.ShardChangesAction.getOperationsBetween(ShardChangesAction.java:275) ~[?:?]
	at org.elasticsearch.xpack.ccr.action.ShardChangesAction$TransportAction.shardOperation(ShardChangesAction.java:242) ~[?:?]
	at org.elasticsearch.xpack.ccr.action.ShardChangesAction$TransportAction.shardOperation(ShardChangesAction.java:217) ~[?:?]
	at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$1.doRun(TransportSingleShardAction.java:112) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:724) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.0.0-alpha1-SNAPSHOT.jar:7.0.0-alpha1-SNAPSHOT]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_151]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_151]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]

So I'm afraid that this change will hide the underlying issue that we're trying to find?
It would be good if we can take a look at the logs and the environment (it is still running) later today.

@jasontedor
Member

I agree with @martijnvg. The issue that we looked at together earlier this week occurred on the primary shard; this shard copy can not be lagging in any knowledge here.

@dnhatn
Member Author

dnhatn commented May 31, 2018

I talked to @martijnvg before working on this but did not gather enough information. I will close this and dig again. Thanks for looking @martijnvg and @jasontedor!

@dnhatn dnhatn closed this May 31, 2018
@dnhatn dnhatn deleted the remove-seq-min-check branch May 31, 2018 12:14
@dnhatn dnhatn reopened this May 31, 2018
@dnhatn
Member Author

dnhatn commented May 31, 2018

@martijnvg and @jasontedor This is ready. Can you please have a look? Thank you!

Member

@martijnvg martijnvg left a comment


💯

request.maxSeqNo = Math.min(request.maxSeqNo, indexShard.getGlobalCheckpoint());
// The following shard generates the request based on the global checkpoint which may not be synced to all leading copies.
// However, this guarantees that the requesting range always be below the local-checkpoint of any leading copies.
final long localCheckpoint = indexShard.getLocalCheckpoint();
Member

@martijnvg martijnvg May 31, 2018


+1 to using the local checkpoint instead of the global checkpoint here. It should be just as safe as using the global checkpoint.

Contributor


I'm sorry, but I don't think we should use the local checkpoint. I understand that it's safe because of the follower semantics, and I also understand that the request is not supposed to be below the local checkpoint (this can be an assertion, as Jason noted), but I don't think we should rely on it. It's too subtle and difficult to understand. If there's no good reason to use the local checkpoint here (please share if there is, I can't see it), can we please go back to using the global checkpoint?

We also don't really need to fail the request here but rather return what we have, if we have it (as before).

PS - can you also add a comment that this is all best effort and that the true check is done when creating the snapshot? (merge policies etc. can change the availability of operations)
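
A rough sketch of this alternative, i.e. clamp to the global checkpoint and return what is available instead of failing (hypothetical; emptyResponse is an illustrative helper, not an existing method):

// Sketch of the suggested alternative: clamp the upper bound to the local knowledge of the
// global checkpoint and return whatever operations are available, rather than failing.
final long globalCheckpoint = indexShard.getGlobalCheckpoint();
final long toSeqNo = Math.min(request.maxSeqNo, globalCheckpoint);
if (toSeqNo < request.minSeqNo) {
    // nothing available in the requested range yet; return an empty response instead of an error
    return emptyResponse(indexMetaDataVersion, globalCheckpoint);
}
// otherwise read operations in [request.minSeqNo, toSeqNo] and return a (possibly partial) result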

Member


We also don't really need to fail the request here but rather return what we have, if we have it (as before).

This will make the logic a bit more complex in the shard follow task. I prefer to fail here, knowing that the primary copy will have the requested range.

Alternatively, the shard follow task can maybe use the global checkpoint of the shard copy with the lowest global checkpoint (like was discussed in the es-ccr channel last night). Then this problem shouldn't occur either.

Contributor


This will make the logic a bit more complex in the shard follow task.

Why is that? We already account for partial results due to byte size limits.

Member


That is true. I guess I reacted too quickly. We already do this correctly; the assumption right now is that the byte size limit has been reached.

Member Author


I made this an assertion in 2a9a200#diff-64da2b915e53a36fdc911178059a02e5R242. The only purpose of this assertion is to make sure that the follower never requests a wrong range. However, we cannot use the global checkpoint here, so I "loosened" the condition by using the local checkpoint as a best effort. I am okay with removing this assertion. WDYT?

Member

@jasontedor jasontedor Jun 1, 2018


@bleskes I do not agree that using the local checkpoint is too subtle and difficult to understand; we are relying on fundamental relationships between local and global checkpoints here? The problem is that the global checkpoint on the replica is not the global checkpoint, it's only local knowledge (say "local global checkpoint" three times fast) that could be out of date, but we know:

local checkpoint on replica >= actual global checkpoint >= global checkpoint on request

and that's why we can have an assertion here.
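
In code form, that invariant would back an assertion along these lines (a sketch, assuming the follower builds the range from its knowledge of the global checkpoint; not the exact change that was committed):

// local checkpoint on replica >= actual global checkpoint >= global checkpoint used to build the request,
// so a well-formed request can never reach past the local checkpoint of any in-sync copy.
final long localCheckpoint = indexShard.getLocalCheckpoint();
assert localCheckpoint >= request.maxSeqNo :
    "to_seqno [" + request.maxSeqNo + "] is above the local checkpoint [" + localCheckpoint + "]";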

Member


And I also don't think we should use the global checkpoint and return partial results here.

Contributor


I do not agree that using the local checkpoint is too subtle and difficult to understand;

Let me unpack a bit what I meant by subtle. The current approach relies on a well-behaved client that follows the pattern of formulating its requests based on some knowledge of the global checkpoint. That behavior is not clear when you look at the ShardChangesAction. It is part of the shard following task, which is not trivial to follow. That's what I meant by subtle. The system is complex, and the more we can understand by reading a single file the better.

we are relying on fundamental relationships between local and global checkpoints here?

It is true that if you sample the global checkpoint from somewhere, all local checkpoints of in-sync shards are above it, and therefore it is safe to trim any request that uses a global checkpoint as an upper bound by the local checkpoint of an in-sync shard. It is also true that search requests should never be routed to not-in-sync shards, if we could manage it. Sadly that's not true, and I'm not sure how to achieve it without other drawbacks that are worse or some schemes that are complicated and will take time to bake.

Search requests are routed based on a cluster state they sample, which may be stale. They use a list of shard copies and prefer to go to active shards, but if those fail they will go and try initializing shards. We don't know at what phase of recovery they are. We also don't know what their local checkpoint means. It is highly likely it will be lower than the local checkpoint of the primary and thus will be safe (based on the behavior of the client), but maybe it's not? Maybe it was constructed by a primary that has since failed and has transferred operations that weren't safe and those aren't rolled back yet? I'm not saying that's necessarily broken. I am saying that this gets complicated very quickly and I'm not sure it's right.

Using the local knowledge of the global checkpoint is always safe and is simple to understand. The complicated part is how the global checkpoint is maintained but you don't need to know that.

PS I want to go back to the notion of a well-behaved client, from a different angle than complexity. It's true that we are currently building CCR and not the changes API, but we do plan to build infrastructure that will power the Changes API (which CCR would be based on if we had it). With that in mind, I would rather avoid adding assumptions to the code that rely on some correctness aspects of the request. The logic can hopefully stay simple: you can ask for anything you want, but we're not exposing unsafe ops. Also, this is why the original API was designed to say "give me X operations starting at this point and up" (X being a number or size) rather than the current API of "give me this range please". To be clear - I'm OK with the range change (for now - we'll see how the Changes API develops) but I want to be conscious of the Changes API and potential implications to it.

@dnhatn
Member Author

dnhatn commented May 31, 2018

Thanks @martijnvg and @jasontedor

@dnhatn dnhatn merged commit fa54be2 into elastic:ccr May 31, 2018
@jasontedor
Member

Good catch @dnhatn!

Member

@jasontedor jasontedor left a comment


I am sorry that I have missed this before it was merged but I left some comments.

// The following shard generates the request based on the global checkpoint which may not be synced to all leading copies.
// However, this guarantees that the requesting range always be below the local-checkpoint of any leading copies.
final long localCheckpoint = indexShard.getLocalCheckpoint();
if (localCheckpoint < request.minSeqNo || localCheckpoint < request.maxSeqNo) {
Member


I think that all we need here is that the local history at least covers the requested range, so the opposite of localCheckpoint >= request.maxSeqNo should be sufficient here, as we already validate on the request that minSeqNo < maxSeqNo. Therefore, I think that this condition can be indexShard.getLocalCheckpoint() < request.maxSeqNo.
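
In other words, the check could be reduced to something like this (a sketch; the exception message is elided):

// minSeqNo <= maxSeqNo is already validated on the request itself, so it is enough
// to check the upper bound against the leader copy's local checkpoint.
if (indexShard.getLocalCheckpoint() < request.maxSeqNo) {
    throw new IllegalStateException("invalid request ...");
}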

// However, this guarantees that the requesting range always be below the local-checkpoint of any leading copies.
final long localCheckpoint = indexShard.getLocalCheckpoint();
if (localCheckpoint < request.minSeqNo || localCheckpoint < request.maxSeqNo) {
throw new IllegalStateException("invalid request from_seqno=[" + request.minSeqNo + "], " +
Member


I think that we should add an assertion here? This should never happen in production because the global checkpoint on the primary, by definition, is not more than the local checkpoint of any of the in-sync shard copies. This shard copy must be in-sync or it would not be receiving this request, and therefore I think we should treat this as a fatal condition? I am not sure if we are being harsh enough here.

final long indexMetaDataVersion = clusterService.state().metaData().index(shardId.getIndex()).getVersion();
request.maxSeqNo = Math.min(request.maxSeqNo, indexShard.getGlobalCheckpoint());
// The following shard generates the request based on the global checkpoint which may not be synced to all leading copies.
Member

@jasontedor jasontedor May 31, 2018


Would you amend this comment to say that the primary copy on the follower generates the request based on its knowledge of the global checkpoint on the primary copy on the leader?

dnhatn added a commit that referenced this pull request Jun 1, 2018
This commit clarifies the origin of the global checkpoint that the
following shard uses and replaces the IllegalStateException with an assertion.

Relates #30980
@dnhatn
Member Author

dnhatn commented Jun 1, 2018

@jasontedor I've pushed 2a9a200 to address all your comments. Thanks for an extra look.

dnhatn added a commit that referenced this pull request Jun 1, 2018
Today, before reading operations on the leading shard, we minimize the
requesting range with the global checkpoint. However, this might
make the request invalid if the following shard generates a requesting
range based on the global checkpoint from a primary shard and sends
that request to a replica whose global checkpoint is lagging.

Another issue is that we are mutating the request when applying the
minimization. If the request becomes invalid on a replica, we will
reroute the mutated request instead of the original one to the primary.

This commit removes the minimization and replaces it with a range check
against the local checkpoint.
dnhatn added a commit that referenced this pull request Jun 1, 2018
This commit clarifies the origin of the global checkpoint that the
following shard uses and replaces the IllegalStateException with an assertion.

Relates #30980
@jasontedor
Member

That looks good to me @dnhatn. Thank you. ❤️

@dnhatn dnhatn changed the title from "CCR: Do not minimization requesting range on leader" to "CCR: Do not minimize requesting range on leader" Jun 1, 2018