Configure batch support max items #987
Hey, we currently provide the ability to specify the max body size of those items here: jsonrpsee/server/src/transport/ws.rs, lines 110 to 115 in dab1bfc.
Similarly, we could extend that area to support a configurable limit on the number of items in a batch request. We would greatly appreciate any help with this matter and any improvements made here. Let us know if you are interested, and we could provide further guidance! Thanks for taking an interest in this!
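The requested check can be sketched as a standalone function; `check_batch_len` and `max_batch_request_len` are hypothetical names for illustration, not jsonrpsee's actual API.

```rust
// A minimal sketch of the requested feature: reject an incoming batch
// before executing any of its calls if it contains more items than a
// configured limit. The names here are assumptions, not jsonrpsee's API.

/// Returns an error if the batch exceeds the configured item limit.
fn check_batch_len(num_calls: usize, max_batch_request_len: usize) -> Result<(), String> {
    if num_calls > max_batch_request_len {
        Err(format!(
            "batch too large: {num_calls} calls, limit is {max_batch_request_len}"
        ))
    } else {
        Ok(())
    }
}

fn main() {
    // A small batch passes; an oversized one is rejected up front.
    assert!(check_batch_len(3, 10).is_ok());
    assert!(check_batch_len(100, 10).is_err());
    println!("batch length checks passed");
}
```

In practice such a limit would sit next to the existing max-body-size setting, so an oversized batch is rejected before any call in it allocates memory.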
As you may have noticed, I have raised multiple issues in the past two days. To give you a bit more context, I am researching using jsonrpsee to implement a JSON-RPC reverse proxy server, something similar to http://github.com/tomusdrw/jsonrpc-proxy but with up-to-date deps. So far I have identified multiple missing features and raised issues here, but none of them are significant enough to be a blocker. So I am likely to stick with the original plan: use jsonrpsee for JSON-RPC handling, make the necessary changes in our own fork, and contribute them upstream. This is the project: https://github.com/AcalaNetwork/subway
Thanks a lot for all the context and interest!
I can confirm this is also important for us.
If the response is bigger than that limit, we simply bail out and drop the rest of the futures/calls in the batch. However, until then the calls in the batch could use plenty of memory, as they are executed "concurrently".

Are you really sure that batch requests are the cause of serde_json using > 10 GB? How big are the batch requests then?

Fine, but it is already possible to disable batch requests completely. Let's add this to the next release then.
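The bail-out behavior described above can be illustrated with a small sketch (assumed names, not jsonrpsee's exact code): responses are appended until adding the next one would exceed the byte limit, after which every remaining call is dropped.

```rust
// Sketch of "bail out once the response size limit is hit": accumulate
// batch responses until the next one would exceed the limit, then drop
// that call and all remaining ones. Illustration only, not jsonrpsee's
// actual implementation.

/// Returns the accumulated response body and the number of dropped calls.
fn collect_batch_responses(responses: Vec<String>, max_response_size: usize) -> (String, usize) {
    let mut out = String::new();
    let mut dropped = 0;
    let mut iter = responses.into_iter();
    while let Some(r) = iter.next() {
        if out.len() + r.len() > max_response_size {
            // Bail out: this call and all remaining ones are dropped.
            dropped = 1 + iter.count();
            break;
        }
        out.push_str(&r);
    }
    (out, dropped)
}

fn main() {
    let batch = vec!["aa".to_string(), "bb".to_string(), "cc".to_string()];
    let (body, dropped) = collect_batch_responses(batch, 3);
    assert_eq!(body, "aa");
    assert_eq!(dropped, 2);
    println!("kept {} bytes, dropped {} calls", body.len(), dropped);
}
```

Note that this cap only bounds the *output*: until the limit is hit, all calls in the batch may still be executing and holding memory concurrently, which is why a separate limit on batch item count is still useful.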
Yes
I haven't dug too deep into it yet, but what is strange is that it only happens with batches over WebSocket. The same batch over HTTPS doesn't trigger such high memory usage.
The HTTP server doesn't have the internal mpsc channel that the WS connection has, which may use a bunch of memory. We are in the process of moving everything to bounded channels/backpressure; that PR will probably go in in a couple of weeks: #962
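The bounded-channel idea behind that work can be shown with a general std-library sketch (not jsonrpsee's implementation): with a `sync_channel` of capacity N, a fast producer blocks once N messages are queued, so per-connection memory stays bounded instead of growing without limit as with an unbounded channel.

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Backpressure via a bounded channel: the sender blocks when the buffer
// is full, so a slow consumer naturally throttles a fast producer.
fn run_bounded(capacity: usize, n: u32) -> Vec<u32> {
    let (tx, rx) = sync_channel::<u32>(capacity);
    let producer = thread::spawn(move || {
        for i in 0..n {
            tx.send(i).unwrap(); // blocks while the buffer is full
        }
    });
    // Drain the receiver; the producer is released as slots free up.
    let out: Vec<u32> = rx.iter().collect();
    producer.join().unwrap();
    out
}

fn main() {
    let received = run_bounded(2, 5);
    assert_eq!(received, vec![0, 1, 2, 3, 4]);
    println!("received all {} messages with a bounded buffer", received.len());
}
```

With an unbounded channel, a client that sends faster than the server can respond piles messages up in memory; the bounded version caps that queue at `capacity` items per connection.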
It would be quite interesting if you could try paritytech/substrate#13992. Once that PR is included, we could actually make the batch request size configurable in the substrate CLI as well, but my hunch is that backpressure should be sufficient.
We would like to enable batch support, but with relatively low limits. So it would be good if we could configure the max number of items in a batch request.