[Docs] Hedging and rate limiter strategy docs #1560

Merged
merged 6 commits into from
Sep 7, 2023
24 changes: 0 additions & 24 deletions README_V8.md
@@ -425,30 +425,6 @@ new ResiliencePipelineBuilder()
PermitLimit = 100,
Window = TimeSpan.FromMinutes(1)
}));

// Create a custom partitioned rate limiter.
var partitionedLimiter = PartitionedRateLimiter.Create<Polly.ResilienceContext, string>(context =>
{
// Extract the partition key.
string partitionKey = GetPartitionKey(context);

return RateLimitPartition.GetConcurrencyLimiter(
partitionKey,
key => new ConcurrencyLimiterOptions
{
PermitLimit = 100
});
});

new ResiliencePipelineBuilder()
.AddRateLimiter(new RateLimiterStrategyOptions
{
// Provide a custom rate limiter delegate.
RateLimiter = args =>
{
return partitionedLimiter.AcquireAsync(args.Context, 1, args.Context.CancellationToken);
}
});
```
<!-- endSnippet -->

5 changes: 5 additions & 0 deletions docs/resilience-strategies.md
@@ -5,6 +5,11 @@

Resilience strategies are essential components of Polly, designed to execute user-defined callbacks while adding an extra layer of resilience. These strategies can't be executed directly; they must be run through a **resilience pipeline**. Polly provides an API to construct resilience pipelines by incorporating one or more resilience strategies through the pipeline builders.

Polly categorizes resilience strategies into two main groups:

- **Reactive**: These strategies handle specific exceptions that are thrown, or results that are returned, by the callbacks executed through the strategy.
- **Proactive**: Unlike reactive strategies, proactive strategies do not focus on handling errors that the callbacks might throw or return. Instead, they can make proactive decisions to cancel or reject the execution of callbacks (e.g., using a rate limiter or a timeout resilience strategy), as illustrated in the sketch below.
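
As an illustration only, the minimal sketch below combines one strategy from each category into a single pipeline; the handled exceptions and the ten-second timeout are placeholder values, not recommendations:

```cs
// Reactive: the retry strategy inspects the exception thrown or the result
// returned by the callback and decides whether to retry.
// Proactive: the timeout strategy cancels the callback after a fixed period,
// regardless of its outcome.
ResiliencePipeline<HttpResponseMessage> pipeline = new ResiliencePipelineBuilder<HttpResponseMessage>()
    .AddRetry(new RetryStrategyOptions<HttpResponseMessage>
    {
        ShouldHandle = new PredicateBuilder<HttpResponseMessage>()
            .Handle<HttpRequestException>()
            .HandleResult(response => !response.IsSuccessStatusCode)
    })
    .AddTimeout(TimeSpan.FromSeconds(10))
    .Build();
```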

## Usage

Extensions for adding resilience strategies to the builders are provided by each strategy. Depending on the type of strategy, these extensions may be available for both `ResiliencePipelineBuilder` and `ResiliencePipelineBuilder<T>` or just one of them. Proactive strategies like timeout or rate limiter are available for both types of builders, while specialized reactive strategies are only available for `ResiliencePipelineBuilder<T>`. Adding multiple resilience strategies is supported.
2 changes: 2 additions & 0 deletions docs/strategies/circuit-breaker.md
@@ -11,6 +11,8 @@
- `BrokenCircuitException`: Thrown when a circuit is broken and the action could not be executed.
- `IsolatedCircuitException`: Thrown when a circuit is isolated (held open) by manual override.

---

> [!NOTE]
> Version 8 documentation for this strategy has not yet been migrated. For more information on circuit breaker concepts and behavior, refer to the [older documentation](https://github.com/App-vNext/Polly/wiki/Circuit-Breaker).

2 changes: 2 additions & 0 deletions docs/strategies/fallback.md
@@ -6,6 +6,8 @@
- **Extensions**: `AddFallback`
- **Strategy Type**: Reactive

---

> [!NOTE]
> Version 8 documentation for this strategy has not yet been migrated. For more information on fallback concepts and behavior, refer to the [older documentation](https://github.com/App-vNext/Polly/wiki/Fallback).

218 changes: 216 additions & 2 deletions docs/strategies/hedging.md
@@ -6,7 +6,14 @@
- **Extensions**: `AddHedging`
- **Strategy Type**: Reactive

> 🚧 This documentation is being written as part of the Polly v8 release.
---

The hedging strategy enables the re-execution of a user-defined callback if the previous execution takes too long. This approach gives you the option to either run the original callback again or specify a new callback for subsequent hedged attempts. Implementing a hedging strategy can boost the overall responsiveness of the system. However, it's essential to note that this improvement comes at the cost of increased resource utilization. If low latency is not a critical requirement, you may find the [retry strategy](retry.md) is more appropriate.

This strategy also supports multiple [concurrency modes](#concurrency-modes) for added flexibility.

> [!NOTE]
> Please do not start any background work when executing actions using the hedging strategy. This strategy can spawn multiple parallel tasks, so any background work started by the callback may also be started multiple times.

## Usage

@@ -58,5 +65,212 @@ new ResiliencePipelineBuilder<HttpResponseMessage>()
| `MaxHedgedAttempts` | 1 | The maximum number of hedged actions to use, in addition to the original action. |
| `Delay` | 2 seconds | The maximum waiting time before spawning a new hedged action. |
| `ActionGenerator` | Returns the original callback that was passed to the hedging strategy. | Generator that creates hedged actions. |
| `DelayGenerator` | `null` | Used for generating custom delays for hedging. |
| `DelayGenerator` | `null` | Used for generating custom delays for hedging. If `null` then `Delay` is used. |
| `OnHedging` | `null` | Event that is raised when hedging is performed. |
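
As a point of reference, the following minimal sketch configures the options listed above; the attempt count, delay, and console output are purely illustrative:

```cs
new ResiliencePipelineBuilder<HttpResponseMessage>()
    .AddHedging(new()
    {
        MaxHedgedAttempts = 2,
        Delay = TimeSpan.FromSeconds(1),
        OnHedging = args =>
        {
            // Raised whenever the strategy performs hedging.
            Console.WriteLine("Hedging: spawning an additional attempt.");
            return default;
        }
    });
```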

You can use the following special values for `Delay` or in `HedgingDelayGenerator`:

- `0 seconds` - the hedging strategy immediately creates a total of `MaxHedgedAttempts` hedged executions and completes when the fastest acceptable result is available.
- `-1 millisecond` - this value indicates that the strategy does not create a new hedged task before the previous one completes. This supports scenarios where running multiple hedged tasks concurrently could cause side effects.

## Concurrency modes

The sections below explore the different concurrency modes available in the hedging strategy. The behavior is primarily controlled by the `Delay` property value.

### Latency mode

When the `Delay` property is set to a value greater than zero, the hedging strategy operates in latency mode. In this mode, additional executions are triggered when the initial ones take too long to complete. By default, the `Delay` is set to 2 seconds.

- The primary execution is initiated.
- If the initial execution either fails or takes longer than the `Delay` to complete, a new execution is initiated.
- If the first two executions fail or exceed the `Delay` (calculated from the last initiated execution), another execution is triggered.
- The final result is the result of the fastest successful execution.
- If all executions fail, the final result will be the first failure encountered.
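
A minimal latency-mode configuration might look like the sketch below; the one-second delay and the attempt count are illustrative values only:

```cs
new ResiliencePipelineBuilder<HttpResponseMessage>()
    .AddHedging(new()
    {
        // A positive delay selects latency mode: a new execution is spawned
        // if the previous one does not complete within one second.
        Delay = TimeSpan.FromSeconds(1),
        MaxHedgedAttempts = 2
    });
```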

### Fallback mode

In fallback mode, the `Delay` value should be less than `TimeSpan.Zero`. This mode allows only a single execution to proceed at a given time.

- An execution is initiated, and the strategy waits for its completion.
- If the initial execution fails, a new one is initiated.
- The final result will be the first successful execution.
- If all executions fail, the final result will be the first failure encountered.
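
A minimal fallback-mode configuration might look like the following sketch; the attempt count is illustrative only:

```cs
new ResiliencePipelineBuilder<HttpResponseMessage>()
    .AddHedging(new()
    {
        // A negative delay (infinite timespan) selects fallback mode:
        // a new execution starts only after the previous one has failed.
        Delay = System.Threading.Timeout.InfiniteTimeSpan,
        MaxHedgedAttempts = 2
    });
```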

### Parallel mode

The hedging strategy operates in parallel mode when the `Delay` property is set to `TimeSpan.Zero`. In this mode, all executions are initiated simultaneously, and the strategy waits for the fastest completion.

> [!IMPORTANT]
> Use this mode only when absolutely necessary, as it consumes the most resources, particularly when the hedging strategy uses remote resources such as remote HTTP services.

- All executions are initiated simultaneously, adhering to the `MaxHedgedAttempts` limit.
- The final result will be the fastest successful execution.
- If all executions fail, the final result will be the first failure encountered.
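
A minimal parallel-mode configuration might look like the following sketch; again, the attempt count is illustrative only:

```cs
new ResiliencePipelineBuilder<HttpResponseMessage>()
    .AddHedging(new()
    {
        // TimeSpan.Zero selects parallel mode: all executions start at once.
        Delay = TimeSpan.Zero,
        MaxHedgedAttempts = 2
    });
```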

### Dynamic mode

In dynamic mode, you have the flexibility to control how the hedging strategy behaves during each execution. This control is achieved through the `HedgingDelayGenerator` property.

> [!NOTE]
> The `Delay` property is disregarded when `HedgingDelayGenerator` is set.

Example scenario:

- First, initiate the first two executions in parallel mode.
- Subsequently, switch to fallback mode for additional executions.

To configure hedging according to the above scenario, use the following code:

<!-- snippet: hedging-dynamic-mode -->
```cs
new ResiliencePipelineBuilder<HttpResponseMessage>()
.AddHedging(new()
{
MaxHedgedAttempts = 3,
DelayGenerator = args =>
{
var delay = args.AttemptNumber switch
{
0 => TimeSpan.FromSeconds(1),
1 => TimeSpan.FromSeconds(2),
_ => System.Threading.Timeout.InfiniteTimeSpan
};

return new ValueTask<TimeSpan>(delay);
}
});
```
<!-- endSnippet -->

With this configuration, the hedging strategy:

- Initiates a maximum of `4` executions. This includes the initial action and an additional 3 hedged attempts.
- Allows the first two executions to proceed in parallel, while the third and fourth executions follow the fallback mode.

## Action generator

The hedging options include an `ActionGenerator` property, allowing you to customize the actions executed during hedging. By default, the `ActionGenerator` returns the original callback passed to the strategy. The original callback also includes any logic introduced by subsequent resilience strategies. For more advanced scenarios, the `ActionGenerator` can be used to return entirely new hedged actions, as demonstrated in the example below:

<!-- snippet: hedging-action-generator -->
```cs
new ResiliencePipelineBuilder<HttpResponseMessage>()
.AddHedging(new()
{
ActionGenerator = args =>
{
// You can access data from the original (primary) context here
var customData = args.PrimaryContext.Properties.GetValue(customDataKey, "default-custom-data");

Console.WriteLine($"Hedging, Attempt: {args.AttemptNumber}, Custom Data: {customData}");

// Here, we can access the original callback and return it or return a completely new action
var callback = args.Callback;

// A delegate that returns a ValueTask<Outcome<HttpResponseMessage>> is required.
return async () =>
{
try
{
// A dedicated ActionContext is provided for each hedged action.
// It comes with a separate CancellationToken created specifically for this hedged attempt,
// which can be cancelled later if needed.
//
// Note that the "MyRemoteCallAsync" call won't have any additional resilience applied.
// You are responsible for wrapping it with any additional resilience pipeline.
var response = await MyRemoteCallAsync(args.ActionContext.CancellationToken);

return Outcome.FromResult(response);
}
catch (Exception e)
{
// Note: All exceptions should be caught and converted to Outcome.
return Outcome.FromException<HttpResponseMessage>(e);
}
};
}
});
```
<!-- endSnippet -->

### Parameterized callbacks and action generator

When you have control over the callbacks that the resilience pipeline receives, you can parameterize them. This flexibility allows for reusing the callbacks within an action generator.

A common use case is with [`DelegatingHandler`](https://learn.microsoft.com/aspnet/web-api/overview/advanced/http-message-handlers). Here, you can parameterize the `HttpRequestMessage`:

<!-- snippet: hedging-handler -->
```cs
internal class HedgingHandler : DelegatingHandler
{
private readonly ResiliencePipeline<HttpResponseMessage> _pipeline;

public HedgingHandler(ResiliencePipeline<HttpResponseMessage> pipeline)
{
_pipeline = pipeline;
}

protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
{
var context = ResilienceContextPool.Shared.Get(cancellationToken);

// Store the incoming request in the context
context.Properties.Set(ResilienceKeys.RequestMessage, request);

try
{
return await _pipeline.ExecuteAsync(async context =>
{
// Allow the pipeline to use request message that was stored in the context.
// This allows replacing the request message with a new one in the resilience pipeline.
request = context.Properties.GetValue(ResilienceKeys.RequestMessage, request);

return await base.SendAsync(request, context.CancellationToken);
},
context);
}
finally
{
ResilienceContextPool.Shared.Return(context);
}
}
}
```
<!-- endSnippet -->

Where `ResilienceKeys` is defined as:

<!-- snippet: hedging-resilience-keys -->
```cs
internal static class ResilienceKeys
{
public static readonly ResiliencePropertyKey<HttpRequestMessage> RequestMessage = new("MyFeature.RequestMessage");
}
```
<!-- endSnippet -->

In your `ActionGenerator`, you can provide your own `HttpRequestMessage` to the `ActionContext`, and the original callback will use it:

<!-- snippet: hedging-parametrized-action-generator -->
```cs
new ResiliencePipelineBuilder<HttpResponseMessage>()
.AddHedging(new()
{
ActionGenerator = args =>
{
if (!args.PrimaryContext.Properties.TryGetValue(ResilienceKeys.RequestMessage, out var request))
{
throw new InvalidOperationException("The request message must be provided.");
}

// Prepare a new request message for the callback, potentially involving:
//
// - Cloning the request message
// - Providing alternate endpoint URLs
request = PrepareRequest(request);

// Then, execute the original callback
return () => args.Callback(args.ActionContext);
}
});
```
<!-- endSnippet -->
91 changes: 66 additions & 25 deletions docs/strategies/rate-limiter.md
@@ -9,7 +9,14 @@
- `RateLimiterRejectedException`: Thrown when a rate limiter rejects an execution.
- **Package**: [Polly.RateLimiting](https://www.nuget.org/packages/Polly.RateLimiting)

> 🚧 This documentation is being written as part of the Polly v8 release.
---

The rate limiter resilience strategy controls the number of operations that can pass through it. This strategy is a thin layer over the API provided by the [`System.Threading.RateLimiting`](https://www.nuget.org/packages/System.Threading.RateLimiting) package.

Further reading:

- [Announcing rate limiting for .NET](https://devblogs.microsoft.com/dotnet/announcing-rate-limiting-for-dotnet/)
- [Rate limiting API documentation](https://learn.microsoft.com/dotnet/api/system.threading.ratelimiting)

## Usage

@@ -31,30 +38,6 @@ new ResiliencePipelineBuilder()
PermitLimit = 100,
Window = TimeSpan.FromMinutes(1)
}));

// Create a custom partitioned rate limiter.
var partitionedLimiter = PartitionedRateLimiter.Create<Polly.ResilienceContext, string>(context =>
{
// Extract the partition key.
string partitionKey = GetPartitionKey(context);

return RateLimitPartition.GetConcurrencyLimiter(
partitionKey,
key => new ConcurrencyLimiterOptions
{
PermitLimit = 100
});
});

new ResiliencePipelineBuilder()
.AddRateLimiter(new RateLimiterStrategyOptions
{
// Provide a custom rate limiter delegate.
RateLimiter = args =>
{
return partitionedLimiter.AcquireAsync(args.Context, 1, args.Context.CancellationToken);
}
});
```
<!-- endSnippet -->

@@ -90,3 +73,61 @@ catch (RateLimiterRejectedException ex)
| `RateLimiter` | `null` | Generator that creates a `RateLimitLease` for executions. |
| `DefaultRateLimiterOptions` | `PermitLimit` set to 1000 and `QueueLimit` set to 0. | The options for the default concurrency limiter that will be used when `RateLimiter` is `null`. |
| `OnRejected` | `null` | Event that is raised when the execution is rejected by the rate limiter. |
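
For illustration, the defaults above could be overridden as in the following sketch; the permit and queue limits shown are placeholder values:

```cs
new ResiliencePipelineBuilder()
    .AddRateLimiter(new RateLimiterStrategyOptions
    {
        // Used because the RateLimiter delegate is left as null.
        DefaultRateLimiterOptions = new ConcurrencyLimiterOptions
        {
            PermitLimit = 100,
            QueueLimit = 50
        },
        OnRejected = args =>
        {
            Console.WriteLine("The execution was rejected by the rate limiter.");
            return default;
        }
    });
```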

## Disposal of rate limiters

The `RateLimiter` is a disposable resource. When you explicitly create a `RateLimiter` instance, it's good practice to dispose of it once it's no longer needed. This is usually not an issue when manually creating resilience pipelines using the `ResiliencePipelineBuilder`. However, when dynamic reloads are enabled, failing to dispose of discarded rate limiters can lead to excessive resource consumption. Fortunately, Polly provides a way to dispose of discarded rate limiters, as demonstrated in the example below:

<!-- snippet: rate-limiter-disposal -->
```cs
services
.AddResiliencePipeline("my-pipeline", (builder, context) =>
{
var options = context.GetOptions<ConcurrencyLimiterOptions>("my-concurrency-options");

// This call enables dynamic reloading of the pipeline
// when the named ConcurrencyLimiterOptions change.
context.EnableReloads<ConcurrencyLimiterOptions>("my-concurrency-options");

var limiter = new ConcurrencyLimiter(options);

builder.AddRateLimiter(limiter);

// Dispose of the limiter when the pipeline is disposed.
context.OnPipelineDisposed(() => limiter.Dispose());
});
```
<!-- endSnippet -->

The above example uses the `AddResiliencePipeline(...)` extension method to configure the pipeline. However, a similar approach can be taken when directly using the `ResiliencePipelineRegistry<T>`.
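
The rough sketch below shows the same idea with a registry; it assumes that `TryAddBuilder` and an `OnPipelineDisposed` callback are available on the registry's builder context, and the pipeline key and permit limit are placeholders:

```cs
var registry = new ResiliencePipelineRegistry<string>();

registry.TryAddBuilder("my-pipeline", (builder, context) =>
{
    var limiter = new ConcurrencyLimiter(new ConcurrencyLimiterOptions
    {
        PermitLimit = 100
    });

    builder.AddRateLimiter(limiter);

    // Assumed: dispose of the limiter when the pipeline is disposed.
    context.OnPipelineDisposed(() => limiter.Dispose());
});
```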

## Partitioned rate limiter

For advanced use-cases, the partitioned rate limiter is also available. The following example illustrates how to retrieve a partition key from `ResilienceContext` using the `GetPartitionKey` method:

<!-- snippet: rate-limiter-partitioned -->
```cs
var partitionedLimiter = PartitionedRateLimiter.Create<ResilienceContext, string>(context =>
{
// Extract the partition key.
string partitionKey = GetPartitionKey(context);

return RateLimitPartition.GetConcurrencyLimiter(
partitionKey,
key => new ConcurrencyLimiterOptions
{
PermitLimit = 100
});
});

new ResiliencePipelineBuilder()
.AddRateLimiter(new RateLimiterStrategyOptions
{
// Provide a custom rate limiter delegate.
RateLimiter = args =>
{
return partitionedLimiter.AcquireAsync(args.Context, 1, args.Context.CancellationToken);
}
});
```
<!-- endSnippet -->