testDoNotInfinitelyWaitForMapping fails #47974
Labels
:Distributed Coordination/Allocation (All issues relating to the decision making around placing a shard, both master logic & on the nodes)
>test-failure (Triaged test failures from CI)
Comments
dnhatn added the >test-failure and :Distributed Coordination/Allocation labels on Oct 14, 2019
Pinging @elastic/es-distributed (:Distributed/Allocation)
howardhuanghua pushed a commit to TencentCloudES/elasticsearch that referenced this issue on Oct 14, 2019
dnhatn added a commit that referenced this issue on Nov 1, 2019:
This change fixes a poisonous situation where an ongoing recovery was canceled because a better copy was found on a node that the cluster had previously tried, and failed, to allocate the shard to. The solution is to keep track of the set of nodes on which an allocation has failed, so that we avoid canceling the current recovery in favor of a copy on one of those failed nodes. Closes #47974
(A sketch illustrating this approach follows the timeline below.)
dnhatn added a commit that referenced this issue on Nov 9, 2019, with the same commit message as above.
dnhatn added a commit that referenced this issue on Nov 9, 2019, with the same commit message as above.
This was referenced Feb 3, 2020
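To make the approach described in the commit message above concrete, here is a minimal Java sketch. `FailedNodeTracker` and its methods are hypothetical illustrations and not the actual Elasticsearch allocation classes; the real change lives in the allocator code referenced by the commits.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

/**
 * Hypothetical sketch (not Elasticsearch internals): remember the nodes on which
 * allocation of a shard has already failed, and consult that set before canceling
 * an ongoing recovery in favor of a "better" copy found on one of those nodes.
 */
final class FailedNodeTracker {

    // Node ids on which allocation of this shard has previously failed.
    private final Set<String> failedNodeIds = new HashSet<>();

    /** Record a failed allocation attempt on the given node. */
    void onAllocationFailed(String nodeId) {
        failedNodeIds.add(nodeId);
    }

    /** Read-only view, e.g. for carrying the set along with the unassigned info. */
    Set<String> failedNodeIds() {
        return Collections.unmodifiableSet(failedNodeIds);
    }

    /**
     * Only cancel the current recovery in favor of a copy on {@code candidateNodeId}
     * if that node is not one we already failed to allocate the shard to; otherwise
     * the shard could bounce between the failing node and the recovering one forever.
     */
    boolean shouldCancelOngoingRecoveryFor(String candidateNodeId) {
        return !failedNodeIds.contains(candidateNodeId);
    }
}
```

The key point is the guard in `shouldCancelOngoingRecoveryFor`: a copy found on a node that has already failed this allocation never triggers cancellation of the in-flight recovery.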
This test has been failing since #46959, where we cancel an ongoing recovery if we find a new copy that can perform a noop recovery.
CI: https://gradle-enterprise.elastic.co/s/zbewn2l6ksvd2/tests/kyv2y2z3r4v7m-bgzwhe6nv7k4c
Relates #46959
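As a rough illustration of how the cancellation behavior from #46959 interacts with the fix, here is a hedged toy simulation. All names are hypothetical and none of this is the real allocator code; it only shows why remembering failed nodes breaks the cancel-and-fail loop.

```java
import java.util.HashSet;
import java.util.Set;

/**
 * Toy simulation of the scenario in this issue (hypothetical names): a copy on a
 * previously failed node keeps looking "better" because it could do a noop recovery,
 * so the ongoing recovery is repeatedly canceled unless failed nodes are remembered.
 */
public class RecoveryCancellationLoopDemo {

    public static void main(String[] args) {
        Set<String> failedNodeIds = new HashSet<>();
        failedNodeIds.add("node-A"); // allocation of the shard already failed on node-A

        boolean copyOnNodeACanDoNoopRecovery = true; // e.g. its sync id / seq numbers match
        boolean guardWithFailedNodes = true;         // behavior after the fix

        boolean cancelOngoingRecovery = copyOnNodeACanDoNoopRecovery
                && (!guardWithFailedNodes || !failedNodeIds.contains("node-A"));

        // With the guard, the recovery currently running on node-B keeps going; without
        // it, the recovery would be canceled in favor of node-A, fail there again, and
        // the cycle would repeat, which is what made the test wait indefinitely.
        System.out.println("cancel ongoing recovery on node-B? " + cancelOngoingRecovery);
    }
}
```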