[WIP] RateLimitPolicy sketch #666
Conversation
Hi @georgiosd. Can you describe the use case/real-world scenario for "should retry" in more detail, so that I can better understand? Thanks!
Of course @reisenberger. Think of a trading system. The exchange API has a rate limit, so it's possible I will have to wait to send my order. But if the prices have changed by the time I get my turn and my order is no longer relevant (I calculate that I would make a loss), then I should discard the order.
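A rough sketch of how such a "should retry" hook might behave (none of these names are Polly API; the rate-limit wait is simulated with a plain sleep): after waiting out the delay, a caller-supplied predicate decides whether the action is still worth executing.

```csharp
using System;
using System.Threading;

static class RateLimitedSend
{
    // Returns true if the order was sent, false if it was discarded as stale.
    // 'stillRelevant' is the caller's "should retry" decision, evaluated only
    // after the rate-limit wait has elapsed.
    public static bool SendIfStillRelevant(TimeSpan retryAfter, Func<bool> stillRelevant, Action send)
    {
        Thread.Sleep(retryAfter);   // wait for our rate-limit slot
        if (!stillRelevant())       // prices may have moved in the meantime
            return false;           // discard the now-unprofitable order
        send();
        return true;
    }
}
```

In the trading example, `stillRelevant` would re-price the order and return false if executing it now would make a loss.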
Thanks for raising this and elaborating, @georgiosd. The goal of my question was to understand whether the scenario should be handled by We'll come back to this when we have a policy closer to fruition. My first reaction to adding That perspective can evolve though as we learn more about how we and everybody would want to use the policy - more ideas/thoughts, anyone?
Hi @reisenberger, The problem manifests itself in multi-threading environments where
It appears to me that there is a window of uncertainty in LockFreeTokenBucketRateLimiter.cs between the call to
I'm afraid I cannot come up with a fix off the top of my head, but at least I can offer a unit test that will expose the problem. Please modify TokenBucketRateLimiterTestsBase.cs to include the following:
The test will fail every so often when using LockFreeTokenBucketRateLimiter, at least when executed via the command-line test runner. A possible workaround for the time being might be reverting the hard-coded IRateLimiter implementation inside RateLimiterFactory from LockFreeTokenBucketRateLimiter back to LockBasedTokenBucketRateLimiter. Hope this helps. Cheers
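For illustration only (this is not the PR's actual code): one way to close the window between two separate interlocked operations is to read the shared state and commit the decremented value with a single Interlocked.CompareExchange, retrying on contention, so check-and-consume happens atomically. A full limiter would also fold the refill timestamp into the shared state.

```csharp
using System.Threading;

// Minimal CAS-based bucket: checking for tokens and consuming one
// happen as a single atomic step, so no other thread can observe
// an intermediate state between them.
public class CasTokenBucket
{
    private long _tokens;
    private readonly long _capacity;

    public CasTokenBucket(long capacity)
    {
        _capacity = capacity;
        _tokens = capacity;
    }

    public bool TryConsume()
    {
        while (true)
        {
            long current = Interlocked.Read(ref _tokens);
            if (current <= 0)
                return false;                       // bucket is empty
            // Commit only if no other thread touched the count meanwhile:
            if (Interlocked.CompareExchange(ref _tokens, current - 1, current) == current)
                return true;
            // Lost the race; loop and retry with the fresh value.
        }
    }

    public void Refill() => Interlocked.Exchange(ref _tokens, _capacity);
}
```

The CAS loop trades a possible retry under contention for the guarantee that no tokens are lost or double-spent between the check and the decrement.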
Thanks @djoe47441. Yes, I agree. I'd also independently spotted a potential race condition in
Hi @reisenberger, BTW, could I ask you to extend the Before:
After:
Hi @djoe47441
That's a great idea! 👍 A near-identical suggestion came up on the Polly Slack yesterday as well. We should generalise this so that all Polly-defined execution context is included, not only
I've just stumbled across this PR and, assuming I've understood it correctly, the proposal does not allow for a use case I'm interested in (apologies if it does and I've misunderstood). This appears to apply a global rate limit for the policy across all requests, but in some use cases (such as mine) you'd want to rate-limit per user (e.g. by user ID, an access token, IP address, etc.). While this is possible with the implementation as-is, it requires the implementer to maintain a policy per user. For a system with thousands of distinct users, this would consume a non-trivial amount of additional memory, and would require a mechanism to evict the policies over time when they're no longer in use (you wouldn't want a Policy lying around in memory for every user who's ever made a request to the app). Could the functionality be extended to allow an arbitrary key to shard the buckets inside the policy, making this a built-in feature rather than a layer that many consumers would have to build themselves in a very similar manner?
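As a rough illustration of the keyed idea (hypothetical, not part of this PR, and simplified to a fixed per-key budget with no time window or eviction): the policy could hold one ConcurrentDictionary of buckets, sharded by an arbitrary caller-supplied key.

```csharp
using System.Collections.Concurrent;

// Simplified keyed limiter: each key lazily gets its own budget on first use.
public class KeyedRateLimiter
{
    private readonly ConcurrentDictionary<string, int> _remaining =
        new ConcurrentDictionary<string, int>();
    private readonly int _limitPerKey;

    public KeyedRateLimiter(int limitPerKey) => _limitPerKey = limitPerKey;

    public bool TryAcquire(string key)
    {
        // Create the key's budget on first use, otherwise decrement it;
        // ConcurrentDictionary retries the update delegate under contention.
        int after = _remaining.AddOrUpdate(key, _limitPerKey - 1, (_, n) => n - 1);
        return after >= 0;
    }
}
```

A production version would replace the plain int with a real token bucket per key and evict entries idle beyond some TTL, which is exactly the memory/eviction concern raised above.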
Review comment on this line:

    private static string DefaultMessage(TimeSpan retryAfter) => $"The operation has been rate-limited and should be retried after ${retryAfter}";

Suggested change (the stray $ inside the interpolated string is a JavaScript-style template token and would be printed literally; C# interpolation only needs the braces):

    private static string DefaultMessage(TimeSpan retryAfter) => $"The operation has been rate-limited and should be retried after {retryAfter}";
@martincostello Yes, great use case, and I have also been thinking about this. I'll come back to this (and involve you) when we circle back to the rate-limit policy! Thanks again.
@reisenberger No problem, thanks! FYI, I took this PR as-is as source (excluding two tweaks, which I've left code comments on above) and incorporated it into a production service using a
This looks great; I'd love to see this merged in! I like the smoothing offered by the token bucket approach. Two thoughts:
This would also allow for specifying compound policies like "no more than 10 requests/second, and also no more than 100 requests/minute". These can be useful when you expect requests to come in bursts (and want to accommodate that) but don't want to sustain the burst-level rate constantly (I've worked with APIs that have such compound limits).
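A sketch of that compound idea (hypothetical names; budgets simplified to plain counters with no refill): a request is admitted only when every constituent limit has budget, and only then is a unit consumed from each, so one limit refusing doesn't leak tokens from the others.

```csharp
using System.Linq;

public static class CompoundLimiter
{
    public class Counter
    {
        public int Remaining;
        public Counter(int remaining) => Remaining = remaining;
    }

    // Check all budgets before consuming from any, so a refusal by one
    // limit does not waste tokens in the others. A concurrent version
    // would need to make check-then-consume atomic (e.g. under a lock).
    public static bool TryAcquire(params Counter[] limits)
    {
        if (limits.Any(l => l.Remaining <= 0))
            return false;
        foreach (var l in limits)
            l.Remaining--;
        return true;
    }
}
```

With a per-second budget of 10 and a per-minute budget of 100, a burst is admitted until either budget runs out, matching the "both limits must hold" semantics described above.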
For the compound key scenario, would wrapping the rate limit in a bulkhead policy (or vice versa) suit your use case?
@martincostello I'm not sure it would. If I understand correctly, bulkhead manages the number of concurrently executing actions, which is different from the rate of actions. For example, a third-party API might be able to process requests very quickly but still block you from submitting requests faster than a certain rate. Another scenario is the one I mentioned here, where we are using a rate limit to protect our storage layer but the actions being executed always complete very quickly.
Hey there, would love this feature to get merged in. Is this still being looked into? |
This feature is currently on hold for an indefinite period, as the author's personal commitments leave them unable to pursue it further. I've had success using this from source in production applications for my own use cases, so you might find the same approach works for you until work on this PR resumes.
I have taken the changes from this pull request and incorporated them into a new one: #903. If you are an interested party to this PR, please direct any comments there.
The issue or feature being addressed
#260 Rate limiting
Confirm the following
I started this PR by branching from the head of the latest dev vX.Y branch, or I have rebased on the latest dev vX.Y branch, or I have merged the latest changes from the dev vX.Y branch