
Add Throttling Support #139

Closed
ricmoo opened this issue Mar 15, 2018 · 12 comments

@ricmoo
Member

ricmoo commented Mar 15, 2018

To prevent being soft-banned from INFURA or Etherscan, or to avoid DoS-ing a node, it would be useful to have throttling available for providers:

  • provider.maximumRequestsPerMinute
  • provider.maximumConcurrentRequests
  • provider.maximumBacklog
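For illustration, here is a rough sketch (hypothetical; none of these options exist on ethers providers today) of how a client-side limiter wrapped around a provider's send() could enforce them:

```ts
// Hypothetical sketch only: the three options proposed above are not part of
// the ethers API. This wraps a provider's send() with simple client-side limits.
import { ethers } from "ethers";

interface ThrottleOptions {
  maximumRequestsPerMinute: number;   // steady-state rate limit
  maximumConcurrentRequests: number;  // in-flight cap
  maximumBacklog: number;             // reject new requests beyond this queue size
}

function throttle(provider: ethers.providers.JsonRpcProvider, options: ThrottleOptions): void {
  const interval = 60000 / options.maximumRequestsPerMinute;
  const send = provider.send.bind(provider);

  let nextSlot = 0;   // earliest time the next request may start
  let inFlight = 0;   // requests currently awaiting a response
  let backlog = 0;    // requests waiting for a slot

  provider.send = async (method: string, params: Array<any>): Promise<any> => {
    if (backlog >= options.maximumBacklog) {
      throw new Error("throttle: backlog exceeded");
    }

    backlog++;
    // Reserve the next time slot and wait for it
    const slot = Math.max(Date.now(), nextSlot);
    nextSlot = slot + interval;
    await new Promise((resolve) => setTimeout(resolve, slot - Date.now()));
    while (inFlight >= options.maximumConcurrentRequests) {
      await new Promise((resolve) => setTimeout(resolve, 10));
    }
    backlog--;

    inFlight++;
    try {
      return await send(method, params);
    } finally {
      inFlight--;
    }
  };
}
```

Something like `throttle(provider, { maximumRequestsPerMinute: 120, maximumConcurrentRequests: 5, maximumBacklog: 100 })` would then cap the request rate before anything reaches the backend.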
@ricmoo ricmoo added the enhancement New feature or improvement. label Mar 15, 2018
@ricmoo ricmoo self-assigned this Mar 15, 2018
@ricmoo
Member Author

ricmoo commented Jul 19, 2020

Initial throttling support added in v5.0.6. The throttling is currently only available on the server-side. It may still make sense to support client-side throttling, but for the time being I'm going to close this issue.

If anyone has a need for client-side throttling, please re-open. I worry that this may cause large backlogs though, which might result in apps becoming unable to keep up...

@ricmoo ricmoo closed this as completed Jul 19, 2020
@zemse
Collaborator

zemse commented Jul 19, 2020

@ricmoo what is the difference between server-side and client-side throttling in this feature? I assume that the same ethers package from npm, when run on Node.js, would be server-side, and when bundled with something like webpack, client-side. So what makes this feature only work on Node.js? Or am I getting the context wrong?

@ricmoo
Member Author

ricmoo commented Jul 19, 2020

Server-side means the server can complain that it is too busy and send back a response (a 429 status, or a throttle error thrown from a custom processFunc) to initiate throttling. The developer does not need to do anything; the library will automatically throttle requests based on the server's responses.
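As a concrete sketch, assuming the ConnectionInfo fields ethers v5 accepts in place of a plain URL (throttleLimit, throttleSlotInterval and throttleCallback), that server-driven behaviour can be tuned when constructing the provider:

```ts
import { ethers } from "ethers";

// Sketch (assuming ethers v5 ConnectionInfo): tune how the library reacts when
// the server answers with a throttle response such as a 429.
const provider = new ethers.providers.JsonRpcProvider({
  url: "https://mainnet.infura.io/v3/YOUR_PROJECT_ID",  // placeholder project ID
  throttleLimit: 10,          // give up after this many throttled attempts
  throttleSlotInterval: 100,  // base slot (ms) for the exponential back-off
  throttleCallback: async (attempt: number, url: string) => {
    console.log(`throttled (attempt ${attempt}): ${url}`);
    return true;              // return false to stop retrying
  },
});
```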

Client-side would allow the developer to specify a maximum request rate, which the library would enforce by stalling the next request until that duration has elapsed since the last request. I’ve used this technique in an iOS wallet, though, and it led to “bunching”: all requests get queued up, and if requests come in faster than the rate allows, the queue grows indefinitely. Memory-pressure issues and stale data ensue...

So it doesn’t have anything to do with Node vs. browser, etc. :)

Make sense?

@ricmoo
Member Author

ricmoo commented Jul 19, 2020

(maybe it’s easier to think about: from the point-of-view of the library, the Provider is always a client, regardless of whether it is being used in a server or an app)

@zemse
Collaborator

zemse commented Jul 19, 2020

Oh, I get it, thanks for explaining! So I think it's great that server-side throttling has been added, as it makes it possible to go all out. I'm not sure why client-side throttling would be needed, as the client's request rate cannot exceed what the server side allows, and if it did, the server-side throttling would be there to control the errors.

Edit: I just realized that client-side throttling might be needed when the server-side rate limiting has a big interval, e.g. blockcypher for bitcoin APIs allows 200 requests per hour, so one could add client-side throttling at a lower rate (1 request per 20 sec) in such an application. However, I don't think there is any Ethereum provider that is as devilish as blockcypher for bitcoin in terms of server-side throttling.

@ricmoo
Member Author

ricmoo commented Jul 20, 2020

At the time I thought of this issue, I didn't know (and possibly they didn't?) that INFURA and Etherscan provided meaningful server-side throttle errors, but their (at the time, ignored) nominal rate limits were published. The plan was to bake these into the various providers.

But I agree, server-side is so much nicer from a developer's point-of-view, and it allows the server to retain some additional control. Alchemy uses the Retry-After header and I'll be bugging INFURA about adding it too. The ethers library will honour it if present, and otherwise falls back to normal exponential back-off. :)
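A minimal sketch of that retry policy (not ethers' actual implementation): honour Retry-After when the server sends it, otherwise back off exponentially with a little jitter:

```ts
// Sketch of the retry policy described above: prefer the server's Retry-After
// header, otherwise fall back to exponential back-off with jitter.
async function fetchWithBackoff(url: string, maxAttempts = 6): Promise<Response> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const response = await fetch(url);
    if (response.status !== 429) { return response; }

    // Retry-After in seconds (the HTTP-date form is ignored in this sketch)
    const retryAfter = Number(response.headers.get("Retry-After"));
    const delayMs = (Number.isFinite(retryAfter) && retryAfter > 0)
      ? retryAfter * 1000                              // server-specified delay
      : Math.random() * 100 * Math.pow(2, attempt);    // exponential back-off with jitter
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error("throttled: retry limit exceeded");
}
```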

michaeltout pushed a commit to michaeltout/ethers.js that referenced this issue Aug 23, 2020
@appleseed-iii

appleseed-iii commented Aug 22, 2021

FYI, I'm seeing 429 errors in my app but I'm never hitting showThrottleMessage (88c7eae#diff-9de0b6aaa58d0ca93a08f6e1a532023d397e517fffb3f23658fcbb8219f102e2R460). Is that unexpected to you, or am I misunderstanding? Upon further investigation, this may be limited to Infura only.

@ricmoo
Member Author

ricmoo commented Aug 22, 2021

The showThrottleMessage is only shown for community resources, i.e. when you are using the default keys provided to ethers by services such as INFURA and Alchemy.

A 429 simply guides the exponential back-off logic (unless it includes retry fields, in which case those are honoured).

@appleseed-iii

Much thanks, friend. Do you have any built-in handling where I can tap in and prevent retries? I'm actually not positive that would be best practice, but I'm curious whether it's possible.

@ricmoo
Member Author

ricmoo commented Aug 22, 2021

There is no ability to do this in v5, but I will be adding more flexibility to the Connection object in v6.

But it is probably a bad idea to prevent retries entirely. Exponential back-off is your friend. :)

Why do you want to stop them?

@appleseed-iii

I'm not sure that I want to stop them entirely; it was just something we were thinking about and wanting to experiment with. I see that you have a utility for retryLimit (though it doesn't appear to be accessible from within baseProvider). In theory we could have one provider hit its limit and then switch to a new provider (e.g. move from Infura to Alchemy programmatically).

In the meantime I'm looking at setting provider.pollingInterval. It looks like the default is 250ms. I might experiment with setting a larger interval.

Really appreciate your feedback. Thank you.

@ricmoo
Member Author

ricmoo commented Aug 22, 2021

If you use the FallbackProvider, it will already do much of that for you. You can specify a longer stallTimeout before it attempts the next provider in the chain, and each provider can be given more or less weight.

The default polling interval is 4000ms. Only set it lower than that if you are connecting to a local Geth node; otherwise you will definitely trigger throttles from the backends. :)
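For example (ethers v5; the keys, weights and timeouts below are placeholders):

```ts
import { ethers } from "ethers";

const INFURA_PROJECT_ID = "YOUR_INFURA_PROJECT_ID";  // placeholder
const ALCHEMY_API_KEY = "YOUR_ALCHEMY_API_KEY";      // placeholder

const infura = new ethers.providers.InfuraProvider("homestead", INFURA_PROJECT_ID);
const alchemy = new ethers.providers.AlchemyProvider("homestead", ALCHEMY_API_KEY);

// Each backend gets a priority (lower is preferred), a weight (its vote toward
// the quorum) and a stallTimeout (ms to wait before trying the next provider).
const provider = new ethers.providers.FallbackProvider([
  { provider: infura, priority: 1, weight: 2, stallTimeout: 2000 },
  { provider: alchemy, priority: 2, weight: 1, stallTimeout: 2000 },
], 1);  // quorum of 1: the first satisfactory response wins

// The default is 4000ms; raise it to poll less often, never lower it against hosted backends.
provider.pollingInterval = 8000;
```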
