Deadlock when reaching channel limit in DirectMessageListenerContainer #2653
Comments
Thank you for the report!
And it indeed never ends:
As you pointed out correctly, we never exit from the `adjustConsumers()` while loop. I'll try to play with a fair lock. Any other ideas?
Yeah, I think fair locking is the way to go, allowing other operations on consumers (whether normal disposal or cancellation) to change the state.
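A minimal sketch of that idea, assuming the adjusting loop releases and re-acquires the lock on each attempt (hypothetical shape, not the actual framework change):

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of the fair-locking idea, not the actual framework change.
abstract class FairLockSketch {

    // true = fair ordering: waiting threads acquire the lock FIFO, so the
    // consumer-cancellation thread queued behind the adjusting loop is not
    // starved when the loop releases and re-acquires the lock.
    private final ReentrantLock consumersLock = new ReentrantLock(true);

    void adjustConsumers() {
        boolean created = false;
        while (!created) {
            consumersLock.lock();
            try {
                created = tryCreateConsumer(); // one attempt per acquisition
            } finally {
                // Releasing between attempts hands the lock to queued threads,
                // e.g. the check that cancels timed-out consumers.
                consumersLock.unlock();
            }
        }
    }

    // Stands in for the real consumer creation; assumed to return false
    // while no channel is available.
    protected abstract boolean tryCreateConsumer();
}
```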
Will it also be possible to backport the fix to the 2.4.x version? Or is it no longer maintained with non-security fixes? 3.x requires a Spring update, which we can do, but it'd be nice if this bug wasn't the main call to action :)
That version is out of Open Source support: https://spring.io/projects/spring-amqp#support.
Fixes: #2653

When there are not enough channels in the cache, `DirectMessageListenerContainer.consume()` returns null and `adjustConsumers()` goes into an infinite loop, since an already active consumer does not release its channel.

* Fix `DirectMessageListenerContainer.consume()` to re-throw the `AmqpTimeoutException` that is thrown when no channels are available in the cache
* Catch the `AmqpTimeoutException` in `DirectReplyToMessageListenerContainer.getChannelHolder()` and reset `this.consumerCount--` to allow retrying until an existing consumer becomes available, e.g. when it receives a reply or times out
* Change `DirectReplyToMessageListenerContainer.consumerCount` to an `AtomicInteger`
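A minimal sketch of the retry pattern those bullets describe; the helper `consumeFromNewChannel` and the `ChannelHolder` body are hypothetical stand-ins, not the actual framework source:

```java
import java.util.concurrent.atomic.AtomicInteger;

import org.springframework.amqp.AmqpTimeoutException;

// Hypothetical sketch of the fix pattern, not the actual framework source.
abstract class GetChannelHolderSketch {

    private final AtomicInteger consumerCount = new AtomicInteger();

    ChannelHolder getChannelHolder() {
        while (true) {
            consumerCount.incrementAndGet();     // reserve a slot for a new consumer
            try {
                return consumeFromNewChannel();  // throws when the cache is exhausted
            }
            catch (AmqpTimeoutException ex) {
                consumerCount.decrementAndGet(); // roll the reservation back and retry
                                                 // until an existing consumer frees a channel
            }
        }
    }

    // Stands in for the channel checkout + consumer registration the real
    // container performs; assumed to throw AmqpTimeoutException when no
    // channel is available within the checkout timeout.
    protected abstract ChannelHolder consumeFromNewChannel();

    record ChannelHolder() { }
}
```

The real change also has to respect container state (e.g. stop retrying on shutdown); the linked commit is authoritative.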
In what version(s) of Spring AMQP are you seeing this issue?
2.4.11 (the reproduction example is for 3.1.2)
Describe the bug
We use `convertSendAndReceive` for asynchronous RPC in our project. Invoking such a method requires creating a new consumer (and thus a channel to communicate with the broker). When the load is high and all the channels are occupied, consumer creation is attempted under `consumersLock` in a while loop that always fails to create a new one (`DirectMessageListenerContainer.adjustConsumers`). With the lock held by `adjustConsumers`, the timed-out consumers can't be cancelled in `DirectMessageListenerContainer.checkConsumers`, so no channels are ever released.
To Reproduce
Run the example `RabbitmqDeadlockApplication` and observe that `TaskSubmitter` never has its futures completed. The `main` thread is blocking the `asyncRabbitTemplate-1` thread, which tries to check the consumers and possibly cancel some of them.

Expected behavior
Timed-out requests release the underlying channels, eventually allowing all futures to complete.
Sample
https://github.com/utikeev/spring-amqp-directmessagelistenercontainer-deadlock
The sample sets the channel limit to 2 to make the issue easier to reproduce (with just 5 `convertSendAndReceive` calls). In our case, we hit the broker limit of 2047 channels per connection.

We also set the `MessageListener` programmatically, but as far as I understand, that shouldn't be the issue.

The problem might also be incorrect usage of the Spring AMQP API. In that case, if you could recommend a better way to handle this use case, I'd consider that a solution as well.
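For reference, a setup along these lines reproduces the hang (host, queue name, and timeout are assumptions; the linked repository is authoritative):

```java
import org.springframework.amqp.rabbit.AsyncRabbitTemplate;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

// Sketch of a reproduction setup; names and values are assumptions.
public class DeadlockReproSketch {

    public static void main(String[] args) {
        CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
        cf.setChannelCacheSize(2);            // at most 2 channels...
        cf.setChannelCheckoutTimeout(10_000); // ...enforced as a hard limit once
                                              // a checkout timeout is set

        RabbitTemplate template = new RabbitTemplate(cf);
        template.setRoutingKey("rpc.requests"); // assumed request queue

        // Uses a DirectReplyToMessageListenerContainer internally.
        AsyncRabbitTemplate asyncTemplate = new AsyncRabbitTemplate(template);
        asyncTemplate.start();

        // Five concurrent RPCs against two channels: once both channels are
        // checked out, the container loops trying to add a consumer, the
        // timed-out consumers are never cancelled, and no future completes.
        for (int i = 0; i < 5; i++) {
            asyncTemplate.convertSendAndReceive("request-" + i);
        }
    }
}
```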