KAFKA-12867: Fix ConsumeBenchWorker exit behavior for maxMessages config #10797
Bug:
The trogdor `ConsumeBenchWorker` has a bug. It can run several consumption tasks in parallel; the number is configurable through the `threadsPerWorker` config. If one of the consumption tasks completes successfully because it has consumed `maxMessages`, it prematurely completes the `doneFuture`, which halts the entire `ConsumeBenchWorker`. This becomes a problem when more than one consumption task is running in parallel, because the successful completion of one task shuts down the whole worker while the other tasks are still running. When the worker is shut down, it kills all of the active consumption tasks even though they have not yet consumed `maxMessages`. This is not the desired behavior.

How to reproduce?:
The bug is easy to reproduce by running a `ConsumeBenchSpec` task configured with a `maxMessages` value and `threadsPerWorker` > 1. When a trogdor workload is started with such a spec and one of the threads (i.e. consumption tasks) has consumed `maxMessages`, you can see that it prematurely shuts down the worker even though the other threads have not yet consumed at least `maxMessages`.
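For illustration, a trogdor spec along these lines should trigger the issue (the node name, broker address, topics, and counts below are placeholders, not values taken from this PR):

```json
{
  "class": "org.apache.kafka.trogdor.workload.ConsumeBenchSpec",
  "durationMs": 10000000,
  "consumerNode": "node0",
  "bootstrapServers": "localhost:9092",
  "maxMessages": 100,
  "threadsPerWorker": 5,
  "activeTopics": ["foo[1-3]"]
}
```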
Fix:
The fix is to defer the completion of the `doneFuture` to the `CloseStatusUpdater` thread. That thread is already responsible for tracking the consumption tasks and updating the worker's status once all of them have completed, so it seems like the right place to complete the `doneFuture` as well.
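Conceptually, the change moves the `doneFuture` completion out of the individual consumption tasks and into the thread that watches all of them. Below is a minimal, self-contained sketch of that pattern using plain `CompletableFuture`s and hypothetical names; it is not the actual trogdor code, just the shape of the fix:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Sketch of the deferred-completion pattern: each consumption task completes
// only its own future; a single watcher thread (analogous to
// CloseStatusUpdater) completes the shared doneFuture once every task future
// has finished.
public class DeferredDoneFutureSketch {
    public static void main(String[] args) {
        CompletableFuture<String> doneFuture = new CompletableFuture<>();
        List<CompletableFuture<Void>> taskFutures = List.of(
            new CompletableFuture<>(), new CompletableFuture<>(), new CompletableFuture<>());

        // Watcher thread: waits for ALL tasks, then signals that the worker is done.
        Thread statusUpdater = new Thread(() -> {
            CompletableFuture.allOf(taskFutures.toArray(new CompletableFuture[0])).join();
            doneFuture.complete("");  // empty string == success, by convention in this sketch
        }, "status-updater");
        statusUpdater.start();

        // Simulated consumption tasks: each one finishes after "consuming" its
        // maxMessages, but no longer touches doneFuture directly.
        for (CompletableFuture<Void> taskFuture : taskFutures) {
            new Thread(() -> {
                // ... consume up to maxMessages ...
                taskFuture.complete(null);
            }).start();
        }

        doneFuture.join();  // unblocks only after every task has completed
        System.out.println("worker done");
    }
}
```

The key property is that `doneFuture` can only complete after every per-task future has completed, so one fast task can no longer tear down the worker underneath the others.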