Caching Introduced by Pull #374 Absorbs Errors and Impedes Error Retry #523
Comments
I can reproduce the behavior as reported. Interestingly, #374 was later reverted in #400, but the retry issue remained.
This is intended behavior: when a job fails, we have always raised an exception. Will keep this issue open to update the documentation strings with this information.
@githubwua OK, so here's the thing: after digging in, this actually appears to be expected behavior that was changed here and later retained even after the revert. The reason is that when the job is DONE, it is done for good and will not change anymore, so any further retries are redundant.
Edit: Tim beat me to it. Indeed, we need to clarify this in the docs, so I'm re-classifying this.
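The rationale in the comment above can be sketched in plain Python. The class and names below are illustrative, not the library's actual code: a job's terminal state is immutable, so caching it once the job is DONE is safe and saves redundant server polls.

```python
class Job:
    """Hypothetical job whose terminal state is cached once reached."""

    def __init__(self, poll):
        self._poll = poll      # callable returning ("DONE", rows) or ("RUNNING", None)
        self._final = None     # cached terminal result

    def done(self):
        if self._final is not None:
            return True        # DONE is forever: no need to contact the server again
        state, rows = self._poll()
        if state == "DONE":
            self._final = rows
        return state == "DONE"


states = iter([("RUNNING", None), ("DONE", [1, 2, 3]), ("NEVER POLLED", None)])
job = Job(lambda: next(states))
print(job.done())  # → False: first poll sees RUNNING
print(job.done())  # → True: second poll sees DONE and caches the result
print(job.done())  # → True: served from cache; the third state is never fetched
```

Caching a *successful* terminal state like this is harmless. The issue below arises when a *failure* outcome is cached the same way.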
The following commit introduced request caching.
85bf2bc
Caching catches and absorbs errors, which prevents the retrying library from catching them.
As a result, failed requests are not being caught and retried.
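The failure mode can be sketched in plain Python (hypothetical names, not the library's actual code): once a call's outcome is cached, including an exception, every retry replays the cached error instead of re-contacting the server, so the retry loop can never observe a recovery.

```python
class CachedCall:
    """Hypothetical cache wrapper: remembers the first outcome,
    including an exception, and replays it on every later call."""

    def __init__(self, fn):
        self._fn = fn
        self._outcome = None

    def __call__(self):
        if self._outcome is None:
            try:
                self._outcome = ("ok", self._fn())
            except Exception as exc:
                self._outcome = ("err", exc)  # the error is absorbed into the cache
        kind, value = self._outcome
        if kind == "err":
            raise value
        return value


calls = {"n": 0}

def flaky_request():
    """Pretend server call: fails once, would succeed on a real retry."""
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("HTTP 500")
    return "rows"

def retry(fn, attempts=3):
    """Naive stand-in for the retrying library."""
    last = None
    for _ in range(attempts):
        try:
            return fn()
        except RuntimeError as exc:
            last = exc
    raise last

cached = CachedCall(flaky_request)
try:
    retry(cached)
except RuntimeError:
    pass
print(calls["n"])  # → 1: the server was contacted once; retries replayed the cached error
```

Without the cache, `retry(flaky_request)` would succeed on the second attempt; with it, the transient error becomes permanent.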
Environment details
Last known good version: google-cloud-bigquery==2.3.1
First version that broke retry: google-cloud-bigquery==2.4.0
Steps to reproduce
Run repro.py below with google-cloud-bigquery==2.3.1
Result: the error is retried until the 60-second timeout.
Run repro.py below with google-cloud-bigquery==2.4.0
Result: the error is not retried; the script exits early.
Code example
Stack trace
Can we either fix google/cloud/bigquery/job/query.py or roll it back to the previous version?