
Retry

ℹ️ This documentation describes the previous Polly v7 API. If you are using the new v8 API, please refer to pollydocs.org.

Syntax

RetryPolicy retry = Policy
  .Handle<HttpRequestException>()
  .Retry(3);

The above example will create a retry policy which will retry up to three times if an action fails with an exception handled by the Policy.

For full retry syntax and overloads (including retry-forever, wait-and-retry, and related variants), see https://github.com/App-vNext/Polly/tree/7.2.4#retry.

The syntax examples given are sync; comparable async overloads exist for asynchronous operation: see the readme and wiki.
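
For example, a minimal async sketch of the policy above (RetryAsync is the async counterpart of Retry; DoSomethingAsync stands in for your own Task-returning delegate):

AsyncRetryPolicy retry = Policy
  .Handle<HttpRequestException>()
  .RetryAsync(3);

await retry.ExecuteAsync(() => DoSomethingAsync()); // DoSomethingAsync is a placeholder for your own delegate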

How Polly Retry works

[Diagram: retry operation]

When an action is executed through the policy:

  • The retry policy attempts the action passed in the .Execute(…) (or similar) delegate.
    • If the action executes successfully, the return value (if relevant) is returned and the policy exits.
    • If the action throws an unhandled exception, it is rethrown and the policy exits: no further tries are made.
  • If the action throws a handled exception (or when handling faults: returns a handled fault), the policy:
    • Counts the exception/fault
    • Checks whether another retry is permitted.
      • If not, the exception is rethrown (/the fault is returned) and the policy terminates.
      • If another try is permitted, the policy:
        • for wait-and-retry policies, calculates the duration to wait from the supplied sleep duration configuration;
        • raises the onRetry delegate (if configured);
        • for wait-and-retry policies, waits for the calculated duration;
        • returns to the beginning of the cycle, to retry executing the action again.

Overall number of attempts

The overall number of attempts that may be made to execute the action is one plus the number of retries configured. For example, if the policy is configured .Retry(3), up to four attempts are made: the initial attempt, plus up to three retries.
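
To illustrate, the counting code below (a hypothetical sketch, not part of the Polly API) executes an always-failing action through .Retry(3): four attempts are observed before the exception is finally rethrown to the caller.

var attempts = 0;

var policy = Policy
  .Handle<HttpRequestException>()
  .Retry(3, onRetry: (exception, retryCount) =>
    Console.WriteLine($"Retry {retryCount} of 3"));

try
{
  policy.Execute(() =>
  {
    attempts++;
    throw new HttpRequestException("still failing");
  });
}
catch (HttpRequestException)
{
  Console.WriteLine($"Gave up after {attempts} attempts."); // attempts == 4: the initial try plus three retries
}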

Configurable policy delegates

onRetry / onRetryAsync

An optional onRetry or onRetryAsync delegate can be configured on many policies. The input parameters to the delegate may be some or all of:

int retryCount

  • 1, after the first attempt, before the first retry
  • 2, after the first retry, before the second retry (etc)

Exception or DelegateResult<TResult>

Exception: the exception which triggered a retry, for retry policies which handle only exceptions and not also result values

or:

DelegateResult<TResult>: the result of the preceding execution, which triggered a retry, for retry policies handling exceptions and/or results:

  • DelegateResult<TResult>.Exception will contain the exception which triggered a retry, if a handled exception triggered the retry; else null
  • DelegateResult<TResult>.Result will contain the handled result value which triggered a retry, if a handled result triggered a retry; else default(TResult)

TimeSpan

The duration of wait-before-next-try, for WaitAndRetry policies. The onRetry/Async delegate is called before the wait commences.

Context

The unique execution context travelling with this execution through the policy.
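
Bringing those parameters together, a sketch of a wait-and-retry policy whose onRetry delegate receives the exception, the calculated wait, the retry count and the Context (the logging is illustrative only):

Policy
  .Handle<HttpRequestException>()
  .WaitAndRetry(
    retryCount: 3,
    sleepDurationProvider: retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)),
    onRetry: (exception, timespan, retryCount, context) =>
    {
      // Called after the handled exception, before the wait-before-retry commences.
      Console.WriteLine($"Retry {retryCount} after {exception.GetType().Name}; waiting {timespan.TotalSeconds}s.");
    });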

sleepDurationProvider

Some WaitAndRetry policies take a sleepDurationProvider as a function to provide the duration to wait for a particular retry attempt.

All sleepDurationProvider overloads return a TimeSpan: the time to wait before making the next retry.

The input parameters to the delegate may be some or all of:

  • int retryCount: see above.
  • Context: see above.
  • Exception or DelegateResult<TResult>: see above.

Exponential backoff

A common retry strategy is exponential backoff: retries are made quickly at first, then at progressively longer intervals, to avoid hitting a subsystem with repeated, frequent calls when it may already be struggling.

Exponential backoff can be achieved by configuring waits-between-retries manually (useful if you desire some custom or an easily-readable scheme):

Policy
  .Handle<SomeExceptionType>()
  .WaitAndRetry(new[]
  {
    TimeSpan.FromSeconds(1),
    TimeSpan.FromSeconds(2),
    TimeSpan.FromSeconds(4),
    TimeSpan.FromSeconds(8),
    TimeSpan.FromSeconds(15),
    TimeSpan.FromSeconds(30)
  });

or by calculation:

Policy
  .Handle<SomeExceptionType>()
  .WaitAndRetry(3, retryAttempt =>
    TimeSpan.FromSeconds(Math.Pow(2, retryAttempt))
  );

The above code demonstrates how to build a common exponential backoff pattern from scratch, but our awesome community also contributed concise helper methods for this and a number of other common wait-and-retry cases: see Polly.Contrib.WaitAndRetry!

Jitter

In very high throughput scenarios it can be beneficial to add jitter to wait-and-retry strategies, to prevent retries bunching into further spikes of load.
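
For example, Polly.Contrib.WaitAndRetry provides a recommended jittered backoff generator. A sketch, assuming that package is referenced:

using Polly.Contrib.WaitAndRetry;

// Produces exponentially growing delays with jitter, starting around a one-second median.
IEnumerable<TimeSpan> delays = Backoff.DecorrelatedJitterBackoffV2(
  medianFirstRetryDelay: TimeSpan.FromSeconds(1), retryCount: 5);

Policy
  .Handle<HttpRequestException>()
  .WaitAndRetry(delays);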

Dynamically adjusting delay-before-retry at runtime

Retry intervals can be adjusted dynamically as the policy runs by:

  • using any of the retry overloads which take a Func<..., TimeSpan> sleepDurationProvider;
  • using retry overloads which take an IEnumerable<TimeSpan> sleepDurations parameter, and using a yield return implementation which returns dynamically changing values.

For occasionally tweaking parameters of policies which otherwise remain fairly constant, consider also the dynamic reconfiguration during runtime approach suggested via PolicyWrap.
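
A sketch of the second approach (the doubling scheme here is illustrative): because an iterator is evaluated lazily, each delay is computed only when that retry is actually needed, and can reflect state at that moment.

static IEnumerable<TimeSpan> DynamicSleepDurations()
{
  var delay = TimeSpan.FromMilliseconds(200);
  for (var i = 0; i < 5; i++)
  {
    yield return delay;                           // produced lazily, per retry
    delay = TimeSpan.FromTicks(delay.Ticks * 2);  // adjust however you need at runtime
  }
}

Policy
  .Handle<HttpRequestException>()
  .WaitAndRetry(DynamicSleepDurations());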

RetryAfter: When the response specifies how long to wait

Some systems specify how long to wait before retrying as part of the fault response returned. This is typically expressed as a Retry-After header with a 429 response code.

This can be handled by using WaitAndRetry/Forever/Async(...) overloads where the sleepDurationProvider takes the handled fault/exception as an input parameter (example overload; discussion and sample code).
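
A sketch of that approach for an HTTP 429 carrying a Retry-After header (the one-second fallback is illustrative; see the linked overload and sample code for the definitive version):

Policy
  .HandleResult<HttpResponseMessage>(r => r.StatusCode == (HttpStatusCode)429)
  .WaitAndRetryAsync(
    retryCount: 3,
    sleepDurationProvider: (retryAttempt, outcome, context) =>
      // Honour the server-specified delay if present; otherwise fall back to one second.
      outcome.Result?.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(1),
    onRetryAsync: (outcome, timespan, retryAttempt, context) => Task.CompletedTask);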

Some SDKs wrap RetryAfter in custom responses or exceptions: for example, the underlying Azure CosmosDB architecture (and RESTful API) sends a 429 response code (too many requests) with an x-ms-retry-after-ms header, but the Azure client SDK surfaces this to calling code by throwing a DocumentClientException with a RetryAfter property. The same overloads can be used to handle these. For the CosmosDB case with the Azure client SDK, some example code is here.

Retry to refresh authorization

A retry policy can be used to maintain authorisation against a third-party system, where that authorisation periodically lapses. For example:

var authorisationEnsuringPolicy = Policy
    .HandleResult<HttpResponseMessage>(r => r.StatusCode == HttpStatusCode.Unauthorized)
    .RetryAsync(
       retryCount: 1, // Consider how many retries. If auth lapses and you have valid credentials, one retry should be enough; too many tries can cause some auth systems to block or throttle the caller.
       onRetryAsync: async (outcome, retryNumber, context) => await FooRefreshAuthorizationAsync(context)
       /* more configuration */);

var response = await authorisationEnsuringPolicy.ExecuteAsync(
    (context, ct) => DoSomethingThatRequiresAuthorization(context, ct),
    new Context(), cancellationToken);

The FooRefreshAuthorizationAsync(...) method can obtain a new authorization token and pass it to the delegate executed through the policy using Polly.Context. For a worked example, see Jerrie Pelser's blog: Refresh a Google Access Token with Polly.

If you wish to combine retry-for-reauthentication (as above) with retry-for-transient-faults, each retry policy can be expressed separately, and then combined with PolicyWrap.
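
A sketch of that combination (transientRetryPolicy stands in for whatever transient-fault-handling policy you define; the first argument to Policy.WrapAsync is the outermost policy):

// authorisationEnsuringPolicy handles 401s as above; transientRetryPolicy is a separately-defined
// policy handling transient faults (e.g. 5xx responses, timeouts).
var resilientCall = Policy.WrapAsync(authorisationEnsuringPolicy, transientRetryPolicy);

The resulting resilientCall wrap can then be executed in exactly the same way as authorisationEnsuringPolicy above.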

Thread-safety

Each call to .Execute(…) (or similar) through a retry policy maintains its own private state. A retry policy can therefore be re-used safely in a multi-threaded environment.

The internal operation of the retry policy is thread-safe, but this does not magically make delegates you execute through the policy thread-safe: if the delegates you execute through the policy are not thread-safe, they remain not thread-safe.
