Make Shard Started Response Handling only Return after the CS Update Completes #82790
Conversation
Pinging @elastic/es-distributed (Team:Distributed)
Looks good, although I think we should have a test for it too. I left a few comments. I'll leave it to @idegtiarenko to review too as he's working in this area and might want to wait to avoid merge conflicts.
new ClusterStateTaskListener() {
    @Override
    public void onFailure(Exception e) {
        logger.error(
I think it'd be better to keep the DEBUG level for FailedToCommitClusterStateException and NotMasterException. ERROR is a bit overdramatic in any case here.
++ made it DEBUG in the onNoLongerMaster now.
That fixes the NotMasterException case but not FailedToCommitClusterStateException.
🤦 right, added conditional handling for FailedToCommitClusterStateException now as well.
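For illustration, a minimal sketch of what that conditional handling could look like, mirroring the quoted listener above; the channelListener and request variables are hypothetical placeholders, not the code merged in this PR:

```java
// Hedged sketch, not the merged change: downgrade the expected failover-related exceptions
// to DEBUG while keeping ERROR for anything unexpected.
ClusterStateTaskListener taskListener = new ClusterStateTaskListener() {
    @Override
    public void onFailure(Exception e) {
        if (e instanceof NotMasterException || e instanceof FailedToCommitClusterStateException) {
            // expected while the master changes or a publication fails; the sender will retry
            logger.debug("failure during shard-started cluster state update [" + request + "]", e);
        } else {
            logger.error("unexpected failure during shard-started cluster state update [" + request + "]", e);
        }
        channelListener.onFailure(e); // hypothetical: propagate the failure back over the transport channel
    }
    // other listener methods elided
};
```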
    channel.sendResponse(e);
} catch (Exception channelException) {
    channelException.addSuppressed(e);
    logger.warn(
This is pretty much what ChannelActionListener does, maybe we should just use that?
Yea right, let's do that :)
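Roughly, the substitution being suggested here; the three-argument ChannelActionListener constructor shown matches the form in use around this time but may differ in other versions, and channel, SHARD_STARTED_ACTION_NAME, and request are taken as given from the surrounding handler:

```java
// Sketch only: wrap the channel once and let ChannelActionListener handle sending the
// response or the failure, including logging when the channel itself cannot be written to,
// instead of hand-rolling sendResponse + addSuppressed + logger.warn.
final ActionListener<TransportResponse.Empty> listener =
    new ChannelActionListener<>(channel, SHARD_STARTED_ACTION_NAME, request);

listener.onResponse(TransportResponse.Empty.INSTANCE); // success path: ack the data node
// ... or, on failure:
listener.onFailure(e); // sends the exception back; a failure to send is logged for us
```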
@@ -594,6 +595,7 @@ public void shardStarted(
        );
    }

    // TODO: Make this a TransportMasterNodeAction and remove duplication of master failover retrying from upstream code
Mostly 👍 except that I believe TransportMasterNodeAction requires a timeout today, but these things should not time out. Relates to #82185 too I think.
Yea, but we can just do what we did for the snapshot shard state update and set it to the max value. We could even do something nicer here and create an override of the master node request that doesn't physically write the always-max value redundantly.
I actually mostly implemented this already this morning, but then figured this one I can get merged more quickly, and it actually helps my benchmarks :)
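A rough sketch of that idea, along the lines of the snapshot shard state update; the class name is hypothetical, and a complete version would presumably also avoid serializing the always-max timeout rather than just re-pinning it:

```java
// Hypothetical request type, not from this PR: a master-node request whose master node
// timeout is always the maximum, since shard-started updates should never time out.
public class ShardStartedClusterStateUpdateRequest extends MasterNodeRequest<ShardStartedClusterStateUpdateRequest> {

    public ShardStartedClusterStateUpdateRequest() {
        masterNodeTimeout(TimeValue.MAX_VALUE); // never time out waiting for a master
    }

    public ShardStartedClusterStateUpdateRequest(StreamInput in) throws IOException {
        super(in);
        // a nicer version would skip the timeout on the wire entirely; here we just re-pin it
        masterNodeTimeout(TimeValue.MAX_VALUE);
    }

    @Override
    public ActionRequestValidationException validate() {
        return null; // nothing to validate in this sketch
    }
}
```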
Would be nice to have indeed. I don't see a quick way of adding one though. The existing UT infrastructure doesn't seem to have the plumbing for this. For ITs (which I'd like better) I'd have to implement something along the lines of … maybe it's again OK to leave this for later? :)
I don't think I'll have the time to implement that today and don't want to block @idegtiarenko's refactoring efforts here longer than necessary. I manually verified that this works correctly (seeing lots of dedup now actually happening in internal cluster tests that wouldn't happen before).
LGTM
Jenkins run elasticsearch-ci/part-1 (unrelated + known)
Jenkins run elasticsearch-ci/part-1 (unrelated but new ... will open an issue)
Thanks both!
💔 Backport failed
You can use sqren/backport to manually backport by running
Somewhat lazy solution by copying the approach from the failed handler 1:1 for now.
Added a TODO to clean this up.
Closes #81628
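Put together, a minimal sketch of the approach (illustrative variable names and simplified listener signatures, not the merged code): the transport channel is answered from the cluster state task listener, so the data node only gets its response once the cluster state update has completed or failed, matching the existing shard-failed handler:

```java
// The response is sent from the task listener rather than right after submitting the task,
// so "shard started" handling only returns once the cluster state update completes.
clusterService.submitStateUpdateTask("shard-started " + request, task, config, executor, new ClusterStateTaskListener() {
    @Override
    public void clusterStateProcessed(ClusterState oldState, ClusterState newState) {
        channelListener.onResponse(TransportResponse.Empty.INSTANCE); // ack the data node only now
    }

    @Override
    public void onFailure(Exception e) {
        channelListener.onFailure(e); // let the data node see the failure and retry against the new master
    }
});
```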