
[CI] RemoveCorruptedShardDataCommandIT.testCorruptIndex failure #52835

Closed · dimitris-athanasiou opened this issue Feb 26, 2020 · 3 comments

Labels: :Distributed Indexing/Distributed, >test-failure

Comments

@dimitris-athanasiou (Contributor)

Failed on 7.x.

Build log: https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+7.x+multijob+fast+part1/3581/console

Build scan: https://gradle-enterprise.elastic.co/s/kqmna7aoemr3o

Failure:

 No text input configured for prompt [Confirm [y/N] ]
at __randomizedtesting.SeedInfo.seed([E6E73F63F51FEC69:90E32898CDEDDF0C]:0)
at org.elasticsearch.cli.MockTerminal.readText(MockTerminal.java:60)
at org.elasticsearch.index.shard.RemoveCorruptedShardDataCommand.confirm(RemoveCorruptedShardDataCommand.java:235)
at org.elasticsearch.index.shard.RemoveCorruptedShardDataCommand.dropCorruptMarkerFiles(RemoveCorruptedShardDataCommand.java:208)
at org.elasticsearch.index.shard.RemoveCorruptedShardDataCommand.lambda$processNodePaths$2(RemoveCorruptedShardDataCommand.java:377)
at org.elasticsearch.index.shard.RemoveCorruptedShardDataCommand.findAndProcessShardPath(RemoveCorruptedShardDataCommand.java:183)
at org.elasticsearch.index.shard.RemoveCorruptedShardDataCommand.processNodePaths(RemoveCorruptedShardDataCommand.java:258)
at org.elasticsearch.cluster.coordination.ElasticsearchNodeCommand.processNodePaths(ElasticsearchNodeCommand.java:120)
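The immediate failure is that `RemoveCorruptedShardDataCommand` issued the interactive `Confirm [y/N]` prompt, but the test's `MockTerminal` had no answer queued for it. A minimal self-contained sketch of why such a mock throws (class and method names here are illustrative, not the actual Elasticsearch test-framework API):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative stand-in for a mock terminal: tests pre-queue answers,
// and reading past the queue fails loudly instead of blocking.
class FakeTerminal {
    private final Deque<String> textInputs = new ArrayDeque<>();

    void addTextInput(String input) {
        textInputs.addLast(input);
    }

    String readText(String prompt) {
        if (textInputs.isEmpty()) {
            // Mirrors the failure in this issue: the command prompted
            // "Confirm [y/N]" but the test had queued no answer.
            throw new IllegalStateException("No text input configured for prompt [" + prompt + "]");
        }
        return textInputs.removeFirst();
    }
}

public class Demo {
    public static void main(String[] args) {
        FakeTerminal terminal = new FakeTerminal();
        terminal.addTextInput("y");
        System.out.println(terminal.readText("Confirm [y/N] ")); // prints "y"
        try {
            terminal.readText("Confirm [y/N] "); // second prompt, nothing queued
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Under this reading, the test likely expected the command to take a different code path (one without a confirmation prompt), so no input was queued; the unexpected prompt then tripped the mock.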

Also in the stack:

org.apache.lucene.index.CorruptIndexException: checksum failed (hardware problem?) : expected=1e1ff2b7 actual=e817cfb1 (resource=BufferedChecksumIndexInput(MMapIndexInput(path="/dev/shm/elastic+elasticsearch+7.x+multijob+fast+part1/server/build/testrun/integTest/temp/org.elasticsearch.index.shard.RemoveCorruptedShardDataCommandIT_E6E73F63F51FEC69-001/tempDir-003/node_t0/nodes/0/indices/7KPUop-fTHSb8Y-NmZmqHg/0/index/_8.cfs")))
	at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:419)
	at org.apache.lucene.codecs.CodecUtil.checksumEntireFile(CodecUtil.java:526)
	at org.elasticsearch.index.store.Store.checkIntegrity(Store.java:522)
	at org.elasticsearch.index.shard.IndexShard.doCheckIndex(IndexShard.java:2498)
	at org.elasticsearch.index.shard.IndexShard.checkIndex(IndexShard.java:2474)
	at org.elasticsearch.index.shard.IndexShard.maybeCheckIndex(IndexShard.java:2464)
	at org.elasticsearch.index.shard.IndexShard.openEngineAndRecoverFromTranslog(IndexShard.java:1589)
	at org.elasticsearch.index.shard.StoreRecovery.internalRecoverFromStore(StoreRecovery.java:431)
	at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromStore$0(StoreRecovery.java:97)
	at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:325)
	at org.elasticsearch.index.shard.StoreRecovery.recoverFromStore(StoreRecovery.java:95)
	at org.elasticsearch.index.shard.IndexShard.recoverFromStore(IndexShard.java:1873)
	at org.elasticsearch.action.ActionRunnable$2.doRun(ActionRunnable.java:73)
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:692)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
checksum passed: _c.cfe
checksum passed: _8.si
[2020-02-26T07:15:34,108][WARN ][o.e.i.c.IndicesClusterStateService] [node_t0] [index42][0] marking and sending shard failed due to [failed recovery]
org.elasticsearch.indices.recovery.RecoveryFailedException: [index42][0]: Recovery failed on {node_t0}{-uaTgAO2SdO65AjRKFu92A}{tUjKl9flR2iMhYrQYw4fkw}{127.0.0.1}{127.0.0.1:40836}{dim}
	at org.elasticsearch.index.shard.IndexShard.lambda$executeRecovery$21(IndexShard.java:2640) ~[main/:?]
	at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:71) ~[main/:?]
	at org.elasticsearch.index.shard.StoreRecovery.lambda$recoveryListener$6(StoreRecovery.java:361) ~[main/:?]
	at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:71) ~[main/:?]
	at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:328) ~[main/:?]
	at org.elasticsearch.index.shard.StoreRecovery.recoverFromStore(StoreRecovery.java:95) ~[main/:?]
	at org.elasticsearch.index.shard.IndexShard.recoverFromStore(IndexShard.java:1873) ~[main/:?]
	at org.elasticsearch.action.ActionRunnable$2.doRun(ActionRunnable.java:73) [main/:?]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:692) [main/:?]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [main/:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_241]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_241]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_241]
Caused by: org.elasticsearch.index.shard.IndexShardRecoveryException: failed recovery
	... 11 more
Caused by: org.elasticsearch.indices.recovery.RecoveryFailedException: [index42][0]: Recovery failed on {node_t0}{-uaTgAO2SdO65AjRKFu92A}{tUjKl9flR2iMhYrQYw4fkw}{127.0.0.1}{127.0.0.1:40836}{dim} (check index failed)
	at org.elasticsearch.index.shard.IndexShard.maybeCheckIndex(IndexShard.java:2466) ~[main/:?]
	at org.elasticsearch.index.shard.IndexShard.openEngineAndRecoverFromTranslog(IndexShard.java:1589) ~[main/:?]
	at org.elasticsearch.index.shard.StoreRecovery.internalRecoverFromStore(StoreRecovery.java:431) ~[main/:?]
	at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromStore$0(StoreRecovery.java:97) ~[main/:?]
	at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:325) ~[main/:?]
	... 8 more
Caused by: org.apache.lucene.index.CorruptIndexException: checksum failed (hardware problem?) : expected=1e1ff2b7 actual=e817cfb1 (resource=BufferedChecksumIndexInput(MMapIndexInput(path="/dev/shm/elastic+elasticsearch+7.x+multijob+fast+part1/server/build/testrun/integTest/temp/org.elasticsearch.index.shard.RemoveCorruptedShardDataCommandIT_E6E73F63F51FEC69-001/tempDir-003/node_t0/nodes/0/indices/7KPUop-fTHSb8Y-NmZmqHg/0/index/_8.cfs")))
	at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:419) ~[lucene-core-8.5.0-snapshot-b01d7cb.jar:8.5.0-snapshot-b01d7cb b01d7cb79a00aae89c108cdf3185971ee68ecda6 - jpountz - 2020-02-19 09:20:02]
	at org.apache.lucene.codecs.CodecUtil.checksumEntireFile(CodecUtil.java:526) ~[lucene-core-8.5.0-snapshot-b01d7cb.jar:8.5.0-snapshot-b01d7cb b01d7cb79a00aae89c108cdf3185971ee68ecda6 - jpountz - 2020-02-19 09:20:02]
	at org.elasticsearch.index.store.Store.checkIntegrity(Store.java:522) ~[main/:?]
	at org.elasticsearch.index.shard.IndexShard.doCheckIndex(IndexShard.java:2498) ~[main/:?]
	at org.elasticsearch.index.shard.IndexShard.checkIndex(IndexShard.java:2474) ~[main/:?]
	at org.elasticsearch.index.shard.IndexShard.maybeCheckIndex(IndexShard.java:2464) ~[main/:?]
	at org.elasticsearch.index.shard.IndexShard.openEngineAndRecoverFromTranslog(IndexShard.java:1589) ~[main/:?]
	at org.elasticsearch.index.shard.StoreRecovery.internalRecoverFromStore(StoreRecovery.java:431) ~[main/:?]
	at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromStore$0(StoreRecovery.java:97) ~[main/:?]
	at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:325) ~[main/:?]
	... 8 more
[2020-02-26T07:15:34,122][WARN ][o.e.c.r.a.AllocationService] [node_t0] failing shard [failed shard, shard [index42][0], node[-uaTgAO2SdO65AjRKFu92A], [P], recovery_source[existing store recovery; bootstrap_history_uuid=false], s[INITIALIZING], a[id=-yGTkijeSByRIvYfscntVQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2020-02-26T15:15:34.007Z], delayed=false, allocation_status[fetching_shard_data]], message [failed recovery], failure [RecoveryFailedException[[index42][0]: Recovery failed on {node_t0}{-uaTgAO2SdO65AjRKFu92A}{tUjKl9flR2iMhYrQYw4fkw}{127.0.0.1}{127.0.0.1:40836}{dim}]; nested: IndexShardRecoveryException[failed recovery]; nested: RecoveryFailedException[[index42][0]: Recovery failed on {node_t0}{-uaTgAO2SdO65AjRKFu92A}{tUjKl9flR2iMhYrQYw4fkw}{127.0.0.1}{127.0.0.1:40836}{dim} (check index failed)]; nested: CorruptIndexException[checksum failed (hardware problem?) : expected=1e1ff2b7 actual=e817cfb1 (resource=BufferedChecksumIndexInput(MMapIndexInput(path="/dev/shm/elastic+elasticsearch+7.x+multijob+fast+part1/server/build/testrun/integTest/temp/org.elasticsearch.index.shard.RemoveCorruptedShardDataCommandIT_E6E73F63F51FEC69-001/tempDir-003/node_t0/nodes/0/indices/7KPUop-fTHSb8Y-NmZmqHg/0/index/_8.cfs")))]; ], markAsStale [true]]
org.elasticsearch.indices.recovery.RecoveryFailedException: [index42][0]: Recovery failed on {node_t0}{-uaTgAO2SdO65AjRKFu92A}{tUjKl9flR2iMhYrQYw4fkw}{127.0.0.1}{127.0.0.1:40836}{dim}
	at org.elasticsearch.index.shard.IndexShard.lambda$executeRecovery$21(IndexShard.java:2640) ~[main/:?]
	at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:71) ~[main/:?]
	at org.elasticsearch.index.shard.StoreRecovery.lambda$recoveryListener$6(StoreRecovery.java:361) ~[main/:?]
	at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:71) ~[main/:?]
	at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:328) ~[main/:?]
	at org.elasticsearch.index.shard.StoreRecovery.recoverFromStore(StoreRecovery.java:95) ~[main/:?]
	at org.elasticsearch.index.shard.IndexShard.recoverFromStore(IndexShard.java:1873) ~[main/:?]
	at org.elasticsearch.action.ActionRunnable$2.doRun(ActionRunnable.java:73) ~[main/:?]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:692) ~[main/:?]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[main/:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_241]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_241]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_241]
Caused by: org.elasticsearch.index.shard.IndexShardRecoveryException: failed recovery
	... 11 more
Caused by: org.elasticsearch.indices.recovery.RecoveryFailedException: [index42][0]: Recovery failed on {node_t0}{-uaTgAO2SdO65AjRKFu92A}{tUjKl9flR2iMhYrQYw4fkw}{127.0.0.1}{127.0.0.1:40836}{dim} (check index failed)
	at org.elasticsearch.index.shard.IndexShard.maybeCheckIndex(IndexShard.java:2466) ~[main/:?]
	at org.elasticsearch.index.shard.IndexShard.openEngineAndRecoverFromTranslog(IndexShard.java:1589) ~[main/:?]
	at org.elasticsearch.index.shard.StoreRecovery.internalRecoverFromStore(StoreRecovery.java:431) ~[main/:?]
	at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromStore$0(StoreRecovery.java:97) ~[main/:?]
	at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:325) ~[main/:?]
	... 8 more
Caused by: org.apache.lucene.index.CorruptIndexException: checksum failed (hardware problem?) : expected=1e1ff2b7 actual=e817cfb1 (resource=BufferedChecksumIndexInput(MMapIndexInput(path="/dev/shm/elastic+elasticsearch+7.x+multijob+fast+part1/server/build/testrun/integTest/temp/org.elasticsearch.index.shard.RemoveCorruptedShardDataCommandIT_E6E73F63F51FEC69-001/tempDir-003/node_t0/nodes/0/indices/7KPUop-fTHSb8Y-NmZmqHg/0/index/_8.cfs")))
	at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:419) ~[lucene-core-8.5.0-snapshot-b01d7cb.jar:8.5.0-snapshot-b01d7cb b01d7cb79a00aae89c108cdf3185971ee68ecda6 - jpountz - 2020-02-19 09:20:02]
	at org.apache.lucene.codecs.CodecUtil.checksumEntireFile(CodecUtil.java:526) ~[lucene-core-8.5.0-snapshot-b01d7cb.jar:8.5.0-snapshot-b01d7cb b01d7cb79a00aae89c108cdf3185971ee68ecda6 - jpountz - 2020-02-19 09:20:02]
	at org.elasticsearch.index.store.Store.checkIntegrity(Store.java:522) ~[main/:?]
	at org.elasticsearch.index.shard.IndexShard.doCheckIndex(IndexShard.java:2498) ~[main/:?]
	at org.elasticsearch.index.shard.IndexShard.checkIndex(IndexShard.java:2474) ~[main/:?]
	at org.elasticsearch.index.shard.IndexShard.maybeCheckIndex(IndexShard.java:2464) ~[main/:?]
	at org.elasticsearch.index.shard.IndexShard.openEngineAndRecoverFromTranslog(IndexShard.java:1589) ~[main/:?]
	at org.elasticsearch.index.shard.StoreRecovery.internalRecoverFromStore(StoreRecovery.java:431) ~[main/:?]
	at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromStore$0(StoreRecovery.java:97) ~[main/:?]
	at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:325) ~[main/:?]
	... 8 more

Reproduce with:

./gradlew ':server:integTest' --tests "org.elasticsearch.index.shard.RemoveCorruptedShardDataCommandIT.testCorruptIndex" -Dtests.seed=E6E73F63F51FEC69 -Dtests.security.manager=true -Dtests.locale=hu -Dtests.timezone=America/Dawson -Dcompiler.java=13

Could not reproduce locally.

Could be a timing issue, but I wasn't sure, so I'm raising it for a deeper look.

@dimitris-athanasiou added the >test-failure and :Distributed Indexing/Distributed labels on Feb 26, 2020
@elasticmachine (Collaborator)

Pinging @elastic/es-distributed (:Distributed/Distributed)

@mark-vieira (Contributor)

This has happened twice today. If we don't have a fix on-deck for this we should consider muting.

@DaveCTurner (Contributor)

Duplicates #52490 - this is a Lucene issue that's now fixed upstream.
