
fix(common): handle expired policies in *RetryLoop #12301

Merged
2 commits merged into googleapis:main on Aug 3, 2023

Conversation

coryan
Contributor

@coryan coryan commented Aug 2, 2023

The retry policy may be expired when it enters the retry loop. For example, time-based policies may suffer from poor scheduling luck.

Part of the work for #12294
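To illustrate the scenario the PR fixes, here is a minimal sketch (with hypothetical class and function names, not the library's actual `RetryLoop` implementation) of a time-based policy whose budget is already spent when the loop starts:

```cpp
#include <chrono>
#include <string>

// Hypothetical time-based retry policy: its budget can already be spent
// before the loop starts, e.g. because the thread was scheduled late.
class LimitedTimeRetryPolicy {
 public:
  explicit LimitedTimeRetryPolicy(std::chrono::milliseconds budget)
      : deadline_(std::chrono::steady_clock::now() + budget) {}
  bool IsExhausted() const {
    return std::chrono::steady_clock::now() >= deadline_;
  }

 private:
  std::chrono::steady_clock::time_point deadline_;
};

// The idea behind the fix: check for exhaustion on entry, so the loop
// reports a clear error instead of attempting a request it could never
// retry.
std::string RetryLoopSketch(LimitedTimeRetryPolicy const& retry_policy) {
  if (retry_policy.IsExhausted()) {
    return "Retry policy exhausted before first request attempt";
  }
  return "ok";  // the real loop would issue the request here
}
```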



@coryan coryan temporarily deployed to internal August 2, 2023 22:54 — with GitHub Actions Inactive
@codecov

codecov bot commented Aug 3, 2023

Codecov Report

Patch coverage: 85.18% and project coverage change: -0.01% ⚠️

Comparison is base (4d5204c) 93.56% compared to head (2f28f7d) 93.56%.
Report is 3 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main   #12301      +/-   ##
==========================================
- Coverage   93.56%   93.56%   -0.01%     
==========================================
  Files        2012     2012              
  Lines      177918   177945      +27     
==========================================
+ Hits       166475   166490      +15     
- Misses      11443    11455      +12     
Files Changed                                    Coverage Δ
google/cloud/internal/rest_retry_loop_test.cc    97.29% <75.00%> (-2.71%) ⬇️
google/cloud/internal/retry_loop_test.cc         97.48% <75.00%> (-2.52%) ⬇️
google/cloud/internal/rest_retry_loop.h         100.00% <100.00%> (ø)
google/cloud/internal/retry_loop.h              100.00% <100.00%> (ø)

... and 4 files with indirect coverage changes


@coryan coryan marked this pull request as ready for review August 3, 2023 01:27
@coryan coryan requested a review from a team as a code owner August 3, 2023 01:28
@@ -61,41 +62,35 @@
 template <
     typename std::enable_if<google::cloud::internal::is_invocable<
                                 Functor, RestContext&, Request const&>::value,
                             int>::type = 0>
-auto RestRetryLoopImpl(std::unique_ptr<RetryPolicy> retry_policy,
-                       std::unique_ptr<BackoffPolicy> backoff_policy,
+auto RestRetryLoopImpl(RetryPolicy& retry_policy, BackoffPolicy& backoff_policy,
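The signature change above swaps ownership transfer for borrowing. A small sketch of the difference, using hypothetical stand-ins for the library's policy types:

```cpp
#include <memory>

// Hypothetical stand-ins for the library's policy types, for illustration
// only.
struct RetryPolicy {
  virtual ~RetryPolicy() = default;
  virtual bool IsExhausted() const = 0;
};

struct CountingRetryPolicy : RetryPolicy {
  int remaining = 3;
  bool IsExhausted() const override { return remaining <= 0; }
};

// Before: the loop took ownership of the policy, so the caller lost access
// to it once the loop was called.
bool LoopTakesOwnership(std::unique_ptr<RetryPolicy> policy) {
  return policy->IsExhausted();
}

// After: the loop borrows the policy; the caller keeps ownership and can
// still inspect the policy (e.g. in tests) after the loop returns.
bool LoopBorrows(RetryPolicy const& policy) { return policy.IsExhausted(); }
```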
Member

If we are going to make this change, we should do it in RetryLoopImpl too.

Contributor Author

Done.

Comment on lines 87 to 92
// The last error cannot be retried, but it is not because the retry
// policy is exhausted. We call these "permanent errors", and they
// get a special message.
char const* prefix = !retry_policy.IsExhausted()
? "Permanent error in"
: "Retry policy exhausted in";
Member

Suggested change
-// The last error cannot be retried, but it is not because the retry
-// policy is exhausted. We call these "permanent errors", and they
-// get a special message.
-char const* prefix = !retry_policy.IsExhausted()
-    ? "Permanent error in"
-    : "Retry policy exhausted in";
+// Distinguish between a permanent error and too many transient errors.
+char const* prefix = retry_policy.IsExhausted()
+    ? "Retry policy exhausted in"
+    : "Permanent error in";

Member

nit: I'd drop the negation.

Contributor

Nit nit: It looks like RetryLoopError() could well handle the " in " part itself.

Contributor Author

Removed the negation(s). I am going to leave the " in " change for a future PR.

char const* prefix = !retry_policy.IsExhausted()
? "Permanent error in"
: "Retry policy exhausted in";
return internal::RetryLoopError(prefix, location, last_status);
Member

observation: in the (rare) exhausted before start case, the error message is a bit redundant:

Retry policy exhausted in <location>:  Retry policy exhausted before first request attempt

I am fine with this.
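A sketch of how such a message might be assembled (hypothetical: the real `RetryLoopError()` takes a `Status` as its last argument, not a string), reproducing the redundancy observed in the exhausted-before-start case:

```cpp
#include <string>

// Hypothetical message assembly, illustrating why the prefix and the status
// message repeat each other when the policy is exhausted before the first
// attempt.
std::string RetryLoopErrorSketch(std::string const& prefix,
                                 std::string const& location,
                                 std::string const& status_message) {
  return prefix + " " + location + ": " + status_message;
}
```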

Contributor Author

I think I will open a separate issue to clean up the error messages from retry loops.

@coryan coryan temporarily deployed to internal August 3, 2023 12:46 — with GitHub Actions Inactive
@coryan coryan enabled auto-merge (squash) August 3, 2023 12:48
@coryan coryan merged commit f93555a into googleapis:main Aug 3, 2023
@coryan coryan deleted the fix-common-handle-expired-policies branch August 3, 2023 14:33