
Make Shard Started Response Handling only Return after the CS Update Completes #82790

Merged (4 commits, Jan 19, 2022)

Conversation

original-brownbear (Member)

Somewhat lazy solution, copying the approach from the failed handler 1:1 for now.
Added a TODO to clean this up.

closes #81628

@original-brownbear added the >bug, :Distributed Coordination/Allocation, v8.0.0, and v8.1.0 labels on Jan 19, 2022
@elasticmachine added the Team:Distributed (Obsolete) label on Jan 19, 2022
@elasticmachine (Collaborator)

Pinging @elastic/es-distributed (Team:Distributed)

@original-brownbear added the :Distributed Indexing/Recovery label and removed the :Distributed Coordination/Allocation label on Jan 19, 2022
@DaveCTurner (Contributor) left a comment

Looks good although I think we should have a test for it too. I left a few comments. I'll leave it to @idegtiarenko to review too as he's working in this area and might want to wait to avoid merge conflicts.

new ClusterStateTaskListener() {
    @Override
    public void onFailure(Exception e) {
        logger.error(
Contributor:

I think it'd be better to keep the DEBUG level for FailedToCommitClusterStateException and NotMasterException. ERROR is a bit overdramatic in any case here.

Member Author:

++ Made it DEBUG in onNoLongerMaster now.

Contributor:
That fixes the NotMasterException case, but not FailedToCommitClusterStateException.

Member Author:
🤦 Right, added conditional handling for FailedToCommitClusterStateException now as well.
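
For reference, a minimal sketch of what such conditional handling can look like; the log messages and surrounding context are illustrative, not the exact code from this PR (the exception types are the usual Elasticsearch ones):

// Sketch only: demote expected master-failover exceptions to DEBUG and keep
// ERROR for genuinely unexpected failures. Messages are illustrative.
@Override
public void onFailure(Exception e) {
    if (e instanceof NotMasterException || e instanceof FailedToCommitClusterStateException) {
        logger.debug("node stepped down as master while handling shard-started task", e);
    } else {
        logger.error("unexpected failure while handling shard-started request", e);
    }
}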

    channel.sendResponse(e);
} catch (Exception channelException) {
    channelException.addSuppressed(e);
    logger.warn(
Contributor:

This is pretty much what ChannelActionListener does; maybe we should just use that?

Member Author:

Yeah right, let's do that :)
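
For context: ChannelActionListener (org.elasticsearch.action.support) wraps a TransportChannel as an ActionListener, sending the response in onResponse and the exception in onFailure, with any secondary channel failure suppressed. Roughly, and going from memory on the 8.x-era constructor (actionName and request are assumed to be in scope):

// Sketch: replace the hand-rolled sendResponse/addSuppressed block with
// ChannelActionListener, which does the same bookkeeping internally.
ActionListener<TransportResponse.Empty> listener =
    new ChannelActionListener<>(channel, actionName, request);
listener.onResponse(TransportResponse.Empty.INSTANCE); // success path
// listener.onFailure(e) would send the exception back instead.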

@@ -594,6 +595,7 @@ public void shardStarted(
);
}

// TODO: Make this a TransportMasterNodeAction and remove duplication of master failover retrying from upstream code
Contributor:

Mostly 👍, except that I believe TransportMasterNodeAction requires a timeout today, but these things should not time out. Relates to #82185 too, I think.

Member Author:

Yeah, but we can just do what we did for the snapshot shard state update and set the timeout to the max value. We could even do better here and create an override of the master node request that doesn't physically write the always-max value redundantly.
I actually mostly implemented this already this morning, but then figured this PR could get merged more quickly, and it actually helps my benchmarks :)
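
Something like this hypothetical base request (class name made up; it assumes the timeout is only consumed via the masterNodeTimeout accessor):

// Hypothetical base class: pins the master-node timeout to an effectively
// infinite value so these requests never time out.
public abstract class NoTimeoutMasterNodeRequest<R extends MasterNodeRequest<R>> extends MasterNodeRequest<R> {

    protected NoTimeoutMasterNodeRequest() {
        masterNodeTimeout(TimeValue.timeValueNanos(Long.MAX_VALUE));
    }

    // To avoid physically serializing the constant timeout, writeTo and the
    // StreamInput constructor would skip the timeout field entirely (not shown).
}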

@original-brownbear (Member Author)

> I think we should have a test for it too.

Would be nice to have indeed. I don't see a quick way of adding one though. The existing unit-test infrastructure doesn't seem to have the plumbing for this. For ITs (which I'd prefer) I'd have to implement something along the lines of org.elasticsearch.test.disruption.BusyMasterServiceDisruption (but with actual state updates, roughly as sketched below) and then use the mock transport to inspect whether or not I get a response before the task goes through, or something along those lines?

Maybe it's OK to leave this for later again? :) I don't think I'll have the time to implement that today, and I don't want to block @idegtiarenko's refactoring efforts here longer than necessary. I manually verified that this works correctly (I'm seeing lots of dedup actually happening now in internal-cluster tests that wouldn't happen before).
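
Very roughly, the disruption half could look like the sketch below; every name is illustrative, and the cluster-state task API matches the era of this PR:

// Sketch of a BusyMasterServiceDisruption-style helper that submits a real,
// slow cluster state update so shard-started tasks queue up behind it.
void blockMasterWithRealStateUpdate(ClusterService masterClusterService, CountDownLatch release) {
    masterClusterService.submitStateUpdateTask("test-block-master", new ClusterStateUpdateTask() {
        @Override
        public ClusterState execute(ClusterState currentState) throws Exception {
            release.await(); // keep the master service busy until the test releases it
            // return a fresh (but equivalent) state so the update actually publishes
            return ClusterState.builder(currentState).build();
        }

        @Override
        public void onFailure(Exception e) {
            throw new AssertionError(e);
        }
    });
}

A MockTransportService hook on the shard-started response path could then assert that no response arrives before the blocking task completes.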

@DaveCTurner (Contributor) left a comment

LGTM

@original-brownbear (Member Author)

Jenkins run elasticsearch-ci/part-1 (unrelated + known)

@original-brownbear (Member Author)

Jenkins run elasticsearch-ci/part-1 (unrelated but new ... will open an issue)

@original-brownbear (Member Author)

Thanks both!

@original-brownbear original-brownbear merged commit 5232d67 into elastic:master Jan 19, 2022
@original-brownbear original-brownbear deleted the 81628-round-2 branch January 19, 2022 12:26
@elasticsearchmachine (Collaborator)

💔 Backport failed

Branch 8.0: commit could not be cherry-picked due to conflicts.

You can use sqren/backport to manually backport by running backport --upstream elastic/elasticsearch --pr 82790

@original-brownbear original-brownbear restored the 81628-round-2 branch April 18, 2023 20:42
Labels
backport pending, >bug, :Distributed Indexing/Recovery, Team:Distributed (Obsolete), v8.0.0-rc2, v8.1.0
Development

Successfully merging this pull request may close these issues.

Stop unnecessary retries of shard-started tasks
6 participants