Producer Send and SendAsync are blocked forever when pulsar is down #515
Proposed fix: In

This is a duplicate of #496

#496 seems to be an issue with the consumers. Also, does the proposed fix look good? If yes, I can create a PR for the same.
megarajtm added a commit to megarajtm/pulsar-client-go that referenced this issue on Jun 23, 2021:
…down. Issue link - apache#515 Signed-off-by: Megaraj Mahadikar <megarajtm@gmail.com>
I had the same issue with v0.14.0.
Producer Send and SendAsync are blocked forever when pulsar is down if MaxReconnectToBroker is set to unlimited retry. When the pulsar broker goes down, within runEventsLoop in producer_partition.go, the call enters reconnectToBroker and stays in a forever loop until the broker connection is re-established. Because of this, no more events are consumed from the eventsChan channel, causing both Send and SendAsync to be blocked, and the SendTimeout is not honoured either.
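A minimal, self-contained sketch of this blocking pattern (simplified stand-ins, not the actual client code; the channel, loop, and reconnect function here only mimic their namesakes in producer_partition.go): while the loop is stuck reconnecting, nothing drains the channel, so the next enqueue blocks.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	eventsChan := make(chan string, 1) // tiny buffer, like a full pending queue
	brokerUp := false                  // simulates "pulsar is down"

	// Stand-in for runEventsLoop in producer_partition.go.
	go func() {
		for ev := range eventsChan {
			if ev == "connectionClosed" {
				// Stand-in for reconnectToBroker: with unlimited
				// MaxReconnectToBroker this never returns while the broker
				// is down, so eventsChan is no longer consumed.
				for !brokerUp {
					time.Sleep(100 * time.Millisecond)
				}
			}
		}
	}()

	eventsChan <- "connectionClosed" // consumed; loop is now stuck reconnecting
	eventsChan <- "send-1"           // fills the buffer

	select {
	case eventsChan <- "send-2": // what Send/SendAsync effectively do
		fmt.Println("enqueued")
	case <-time.After(2 * time.Second):
		fmt.Println("send blocked: events loop is stuck in reconnectToBroker")
	}
}
```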
Expected behavior
Producer Send must not be blocked forever when the pulsar broker is down. It must honour the SendTimeout and return with an error.
Producer SendAsync must never be blocked when the pulsar broker is down. It must honour the SendTimeout and call the callback function.
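Until the client enforces SendTimeout during reconnects, callers can bound the wait themselves. A sketch of a hypothetical caller-side guard (sendWithDeadline is not part of the client; it assumes the Producer API where Send takes a context and returns (MessageID, error)); note the spawned goroutine may still stay blocked inside Send until the broker returns, so this mitigates the symptom for callers rather than fixing the client:

```go
package workaround

import (
	"context"
	"fmt"
	"time"

	"github.com/apache/pulsar-client-go/pulsar"
)

// sendWithDeadline bounds how long the caller waits, even when the producer's
// event loop is stuck in reconnectToBroker. The spawned goroutine can remain
// blocked inside Send until the broker comes back.
func sendWithDeadline(p pulsar.Producer, msg *pulsar.ProducerMessage, d time.Duration) (pulsar.MessageID, error) {
	type result struct {
		id  pulsar.MessageID
		err error
	}
	ch := make(chan result, 1) // buffered: the goroutine never blocks on send
	go func() {
		ctx, cancel := context.WithTimeout(context.Background(), d)
		defer cancel()
		id, err := p.Send(ctx, msg)
		ch <- result{id, err}
	}()
	select {
	case r := <-ch:
		return r.id, r.err
	case <-time.After(d):
		return nil, fmt.Errorf("send did not complete within %v", d)
	}
}
```

A caller would use it as `id, err := sendWithDeadline(producer, &pulsar.ProducerMessage{Payload: payload}, 3*time.Second)`.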
Actual behavior
Due to the above mentioned issue, Producer Send/SendAsync block forever when the pulsar broker is down.
Steps to reproduce
Bring the pulsar broker down and keep calling Send/SendAsync; once the pendingQueue is filled, the call is blocked forever.
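A reproduction sketch under stated assumptions (a local broker at pulsar://localhost:6650 and a made-up topic name; MaxReconnectToBroker left unset so it defaults to unlimited retries): start it against a running broker, then stop the broker and watch the send loop hang well past the 3-second SendTimeout.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/apache/pulsar-client-go/pulsar"
)

func main() {
	client, err := pulsar.NewClient(pulsar.ClientOptions{
		URL: "pulsar://localhost:6650", // assumed local broker
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	producer, err := client.CreateProducer(pulsar.ProducerOptions{
		Topic:       "persistent://public/default/repro-515", // made-up topic
		SendTimeout: 3 * time.Second,
		// MaxReconnectToBroker is left unset: it defaults to unlimited
		// retries, the configuration that triggers the hang.
	})
	if err != nil {
		log.Fatal(err)
	}
	defer producer.Close()

	// Stop the broker while this loop runs: once the internal pendingQueue
	// fills up, Send blocks forever and the SendTimeout is never honoured.
	for i := 0; ; i++ {
		start := time.Now()
		_, err := producer.Send(context.Background(), &pulsar.ProducerMessage{
			Payload: []byte("msg"),
		})
		log.Printf("send %d: err=%v took=%v", i, err, time.Since(start))
	}
}
```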
System configuration
Pulsar client version - v0.4.0