This issue seems to happen with all clients (nodejs, ruby, python, etc.), so it is most likely expected, if broken, behavior from GCP Pub/Sub. Per the documentation, a streaming pull always returns an error when it closes. In my scenario, the client stopped pulling messages and only resumed when the pod received a SIGTERM and restarted.
StreamingPull has a 100% error rate
StreamingPull streams always close with a non-OK status. Unlike in unary RPCs, this status for StreamingPull is simply an indication that the stream is broken. The requests are not failing. Therefore, while the StreamingPull API might have a surprising 100% error rate, this behavior is by design.
Some developers have pointed to "excess pings", the stream sitting idle indefinitely, RST_STREAM errors, etc. This may be out of scope for this crate, but investigating and mitigating this "defect by design" at a lower level would be very welcome (see the sketch below).
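One possible mitigation, given that the docs say stream closure is expected, is to treat a closed StreamingPull stream as routine and reconnect with backoff rather than surfacing it as a fatal error or letting the subscriber stall. Below is a minimal sketch of that idea; `open_streaming_pull`, `PullStream`, and `ReceivedMessage` are hypothetical stand-ins, not this crate's actual API.

```rust
use std::time::Duration;
use tokio::time::sleep;

/// Hypothetical message type standing in for whatever the client yields.
struct ReceivedMessage;

/// Hypothetical handle for an open StreamingPull stream. `next()` yields
/// messages while the stream is healthy and `None` once the server closes it.
struct PullStream;

impl PullStream {
    async fn next(&mut self) -> Option<ReceivedMessage> {
        unimplemented!("stand-in for the crate's streaming pull API")
    }
}

/// Hypothetical constructor; the real crate has its own way to open a stream.
async fn open_streaming_pull(
    _subscription: &str,
) -> Result<PullStream, Box<dyn std::error::Error>> {
    unimplemented!("stand-in for the crate's streaming pull API")
}

async fn handle(_msg: ReceivedMessage) {
    // ack / process the message here
}

/// Treat stream closure as expected: log it, back off briefly, and reopen,
/// instead of treating it as a request failure or silently stalling.
async fn pull_forever(subscription: &str) {
    let mut backoff = Duration::from_millis(500);
    loop {
        match open_streaming_pull(subscription).await {
            Ok(mut stream) => {
                backoff = Duration::from_millis(500); // reset after a successful open
                while let Some(msg) = stream.next().await {
                    handle(msg).await;
                }
                // Reaching here means the server closed the stream with a
                // non-OK status; per the docs this is by design, so reconnect.
            }
            Err(err) => eprintln!("failed to open StreamingPull: {err}"),
        }
        sleep(backoff).await;
        backoff = (backoff * 2).min(Duration::from_secs(30));
    }
}
```

The key point is only the reconnect loop; whether the backoff, logging, and stream lifecycle belong in this crate or in application code is exactly the open question of this issue.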