fix cluster not able to spin up issue when disk usage exceeds threshold #15258
Conversation
❌ Gradle check result for d9096b2: FAILURE Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?
Sure, this is a draft PR to prove this can fix the issue, but this fix has drawbacks, e.g. it changes the
Force-pushed from 7ce9886 to 3e8df68
❌ Gradle check result for 7ce9886: FAILURE Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?
❌ Gradle check result for 3e8df68: FAILURE Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?
keepAliveThread.start();
node.start();
Naively speaking, if node.start() doesn't succeed, then there is nothing to keep alive and the JVM shutting down seems like the right thing to do. It seems like this change could lead to a partially-started or not-at-all-started node running indefinitely, because the keepAliveThread will just keep it alive in some sort of zombie state after node.start() fails. What am I missing?
Naively responding, I've no objection to the JVM shutting down, but having spent way too many hours trying to figure out why the JVM shut down in various situations, I'd be really happy to have at least one thread faithfully logging whatever issues caused it to shut down, before shutting itself down.
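Something along these lines would be enough (a hypothetical sketch, not the actual OpenSearch bootstrap code; StartupRunner, the java.util.logging logger, and the Runnable stand-in for node.start() are all made up for illustration):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical wrapper: log the startup failure before the process exits,
// so the reason for the shutdown always ends up in the logs.
final class StartupRunner {
    private static final Logger logger = Logger.getLogger(StartupRunner.class.getName());

    static void startNode(Runnable nodeStart) {
        try {
            nodeStart.run(); // stand-in for node.start()
        } catch (RuntimeException e) {
            // Record why startup failed before shutting down.
            logger.log(Level.SEVERE, "node failed to start; shutting down", e);
            throw e; // rethrow so the JVM still exits as before
        }
    }
}
```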
Not every node.start() failure means the JVM should quit. In cases like the one in the corresponding issue, if we keep the JVM running, the user can fix the problem simply by changing the cluster settings. So the fix is to give the user the ability to interact with the running cluster (even a partially-started one).
For the cases where node.start() fails and the cluster is not available at all, this fix has no real impact, because in the end the user needs to check the error/fatal logs to find and fix the root cause anyway. This change doesn't block the user in any way, but without it, users in the first case are blocked.
So, all in all, I think this is a positive enhancement for users.
like the case in the corresponding issue, if we're able to keep JVM running, user can fix this issue simply by changing the cluster settings.
This seems like a really blunt change for an extremely specific case. In general, quitting the JVM and letting it restart is preferable to continuing to run in a partially-started state. Prior to this change, a transient failure in node.start() might automatically recover because the JVM would quit and whatever is monitoring the process would restart it. Now there is a risk (in my opinion) that the process will continue running in a non-functional, partially-started state that will require human intervention to resolve. How do we know that is not a risk here?
The assumption that anything useful can be done with a partially started node was true in the specific case mentioned but is not true in general.
but having spent way too many hours trying to figure out why the JVM shut down in various situations, I'd be really happy to have at least one thread faithfully logging whatever issues caused it to shut down, before shutting itself down.
@dbwiddis Not logging why the JVM shut down is clearly a problem. However, I'd argue this change might make things considerably worse: instead of knowing there's a problem (because the process restarted), we'll have a partially-started node running in a crippled state with no indication that something went wrong during startup.
I can create the revert PR now, and I'll look into solution 2 as the long-term fix.
Thanks @zane-neo. Please also consider solution 1 and/or the bug I reported; one of those two issues should be fixed ASAP.
@andrross @dblock @dbwiddis Do you think we can conclude that no plugin should block OpenSearch from starting up, even when the plugin itself encounters an issue? If that's true, we can simply wrap the onNodeStarted method in a try/catch to avoid this (see the sketch below). The pros of this fix are that it's simple and effective, and it prevents the same problem from happening in other plugins.
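A minimal sketch of that try/catch, assuming a plugin callback hook like onNodeStarted (NodeStartedListener and PluginNotifier below are simplified stand-ins, not the real OpenSearch plugin API):

```java
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

// Simplified stand-in for the plugin callback interface; the real OpenSearch API may differ.
interface NodeStartedListener {
    void onNodeStarted();
}

final class PluginNotifier {
    private static final Logger logger = Logger.getLogger(PluginNotifier.class.getName());

    // Notify every plugin, but never let a single failing plugin abort node startup.
    static void notifyNodeStarted(List<NodeStartedListener> plugins) {
        for (NodeStartedListener plugin : plugins) {
            try {
                plugin.onNodeStarted();
            } catch (Exception e) {
                // Log and continue so the remaining plugins and the node keep starting.
                logger.log(Level.WARNING, "onNodeStarted failed for plugin " + plugin, e);
            }
        }
    }
}
```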
Do you think we can conclude that all plugins shouldn't block the OpenSearch from starting up
@zane-neo I don't think this is true generally. For example, if the security plugin can't properly start up it would be safer to fail versus coming up with security disabled.
@andrross Thanks, I created this PR for the short-term fix: opensearch-project/observability#1873, and I will continue exploring the long-term fix.
Force-pushed from c81fbc1 to af9d70d
❌ Gradle check result for af9d70d: FAILURE Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?
…ncy issue caused test failure Signed-off-by: zane-neo <zaniu@amazon.com>
❌ Gradle check result for 7f24452: FAILURE Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?
fix cluster not able to spin up issue when disk usage exceeds threshold (#15258)
* fix cluster not able to spin up issue when disk usage exceeds threshold
* Add comment to changes
* Add UT to ensure the keepAliveThread starts before node starts
* remove unused imports
* Fix forbidden API calls check failed issue
* format code
* format code
* change setInstance method to static
* Add countdownlatch in test to coordinate the thread to avoid concurrency issue caused test failure
Signed-off-by: zane-neo <zaniu@amazon.com>
(cherry picked from commit 62081f2)
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
fix cluster not able to spin up issue when disk usage exceeds threshold (opensearch-project#15258)
* fix cluster not able to spin up issue when disk usage exceeds threshold
* Add comment to changes
* Add UT to ensure the keepAliveThread starts before node starts
* remove unused imports
* Fix forbidden API calls check failed issue
* format code
* format code
* change setInstance method to static
* Add countdownlatch in test to coordinate the thread to avoid concurrency issue caused test failure
Signed-off-by: zane-neo <zaniu@amazon.com>
… threshold (opensearch-project#15258)" This reverts commit 62081f2.
…x the issue. Signed-off-by: zane-neo <zaniu@amazon.com>
…x the issue. (opensearch-project#16377) Signed-off-by: zane-neo <zaniu@amazon.com>
Description
Root cause: when disk usage exceeds the threshold, node.start() fails, and because no non-daemon thread is running at that point, the JVM exits and the user has no way to interact with the cluster to fix the settings.
Fix: change the cluster start code so that the keepAliveThread, a non-daemon thread, is started before the node. This makes sure at least one non-daemon thread is running, so the JVM won't exit.
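For illustration, a minimal sketch of this ordering, assuming a simplified bootstrap class (BootstrapSketch, the CountDownLatch, and the Runnable stand-in for node.start() are illustrative, not the actual OpenSearch Bootstrap code):

```java
import java.util.concurrent.CountDownLatch;

// Minimal sketch: start the non-daemon keep-alive thread before starting the node,
// so a failure in node startup does not leave the JVM with zero non-daemon threads.
final class BootstrapSketch {
    private final CountDownLatch keepAliveLatch = new CountDownLatch(1);
    private final Thread keepAliveThread;
    private final Runnable nodeStart; // stand-in for node.start()

    BootstrapSketch(Runnable nodeStart) {
        this.nodeStart = nodeStart;
        this.keepAliveThread = new Thread(() -> {
            try {
                keepAliveLatch.await(); // block until shutdown is requested
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "opensearch[keepAlive]");
        this.keepAliveThread.setDaemon(false); // non-daemon: keeps the JVM alive
    }

    void start() {
        keepAliveThread.start(); // started first, as in this PR
        nodeStart.run();         // node start; a failure here no longer takes the JVM down
    }

    void stop() {
        keepAliveLatch.countDown(); // release the keep-alive thread on shutdown
    }
}
```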
Related Issues
#14791
Check List
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.