feat: add support for batch requests #86
Conversation
Thank you! Can we add some more test scenarios? A single-item batch, a batch with successes and failures, a batch with one unrecognised method, etc.
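For concreteness, a hedged sketch of what such fixtures could look like; "Add" and "NoSuchMethod" are placeholder method names, not handlers this repo actually registers:

package jsonrpc

// batchFixtures sketches request bodies for the scenarios listed above.
// Real tests would reuse the handlers from the existing test setup.
var batchFixtures = map[string]string{
	"single-item batch":       `[{"jsonrpc":"2.0","id":1,"method":"Add","params":[1,2]}]`,
	"successes and failures":  `[{"jsonrpc":"2.0","id":1,"method":"Add","params":[1,2]},{"jsonrpc":"2.0","id":2,"method":"Add","params":["not-a-number",2]}]`,
	"one unrecognised method": `[{"jsonrpc":"2.0","id":1,"method":"Add","params":[1,2]},{"jsonrpc":"2.0","id":2,"method":"NoSuchMethod","params":[]}]`,
}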
@@ -168,14 +167,36 @@ func (s *RPCServer) handleReader(ctx context.Context, r io.Reader, w io.Writer,
s.maxRequestSize))
return
}
// check if bufferedRequest is array or not.
if bufferedRequest.Bytes()[0] != '[' {
Are we sure there's no way to send a request with an empty body? We probably need to check if the request is empty before continuing.
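A minimal sketch of that guard, assuming a bytes.Buffer like the bufferedRequest in the diff above (the helper name looksLikeBatch is invented):

package jsonrpc

import (
	"bytes"

	"golang.org/x/xerrors"
)

// looksLikeBatch rejects an empty (or whitespace-only) body up front so that
// peeking at the first byte cannot panic, then reports whether the payload
// is a JSON array, i.e. a batch.
func looksLikeBatch(buffered *bytes.Buffer) (bool, error) {
	b := bytes.TrimSpace(buffered.Bytes())
	if len(b) == 0 {
		return false, xerrors.New("empty request body")
	}
	return b[0] == '[', nil
}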
if err := json.Unmarshal(bufferedRequest.Bytes(), &reqs); err != nil {
rpcError(wf, &req, rpcParseError, xerrors.Errorf("unmarshaling requests: %w", err))
return
}
A single malformed request would make the whole batch fail. Is this what the JSON-RPC 2.0 spec mandates? If not, we would need to unmarshal requests individually, and record individual failures in the buffered response.
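If the spec does require per-request isolation, individual unmarshaling could look roughly like this sketch; the request struct below is illustrative, and the repo's own request type would be used instead:

package jsonrpc

import (
	"encoding/json"

	"golang.org/x/xerrors"
)

// request is an illustrative stand-in for this repo's request type.
type request struct {
	Jsonrpc string            `json:"jsonrpc"`
	ID      *int64            `json:"id,omitempty"`
	Method  string            `json:"method"`
	Params  []json.RawMessage `json:"params,omitempty"`
}

// decodeBatch splits the batch into raw elements first, then unmarshals each
// element on its own, so a single malformed entry produces a single error
// slot instead of failing the whole batch. An invalid envelope is still a
// batch-level parse error.
func decodeBatch(body []byte) ([]request, []error) {
	var raw []json.RawMessage
	if err := json.Unmarshal(body, &raw); err != nil {
		return nil, []error{xerrors.Errorf("unmarshaling batch: %w", err)}
	}
	reqs := make([]request, len(raw))
	errs := make([]error, len(raw))
	for i, r := range raw {
		if err := json.Unmarshal(r, &reqs[i]); err != nil {
			errs[i] = xerrors.Errorf("unmarshaling request %d: %w", i, err)
		}
	}
	return reqs, errs
}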
return
}
for _, req := range reqs {
s.doHandleRequest(ctx, wf, req, *bufferedRequest, rpcError)
Will a single failing request make the whole batch fail? Pretty sure the JSON-RPC 2.0 spec doesn't expect atomicity here, but rather individual reporting of failures. We need to test a variety of failure scenarios:
- Two out of four requests carry an invalid ID.
- Two out of four requests fail functionally (generate an error in their handlers).
- Two out of four requests carry the same ID.
- Two out of four requests fail to deserialize.
I suspect that in all these cases, the JSON-RPC 2.0 spec would mandate that those failures don't prevent the other two successful requests from being processed.
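All four cases above reduce to the same assertion: the failing entries come back as error responses while the other two still succeed. A hypothetical test helper along these lines could back that; the response shape is the generic JSON-RPC 2.0 one, not this repo's internal type:

package jsonrpc

import "encoding/json"

// countBatchResults decodes a batch response and reports how many entries
// carry an "error" member versus a "result" member, so a test can assert
// that failing requests did not prevent the others from being served.
func countBatchResults(body []byte) (errCount, okCount int, err error) {
	var resps []struct {
		Error  json.RawMessage `json:"error"`
		Result json.RawMessage `json:"result"`
	}
	if uerr := json.Unmarshal(body, &resps); uerr != nil {
		return 0, 0, uerr
	}
	for _, r := range resps {
		if r.Error != nil {
			errCount++
		} else {
			okCount++
		}
	}
	return errCount, okCount, nil
}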
Any updates here? Can help contribute if needed 🙏 we're using this for a Go, MIT-licensed eth node.
@itsdevbear that would be great. I believe @alvin-reyes was supposed to get to this last week, but I guess we can consider this PR stale and have another soul claim it.
Will take a look. We are currently using the geth server, but it's LGPL... Is there anything major still to be done, or just addressing the above comments?
@raulk unrelated, but is this lib used in prod on Filecoin nodes?
Any updates on this PR? Highly anticipated feature.
Fixed in #99
Context
Batched requests are an optional feature in the JSON-RPC 2.0 specification. However, Ethereum clients offer this feature and downstream users like block explorers (e.g. Blockscout) depend on it.
Change
This adds support for batch requests as per the spec here: Batched requests
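For reference, a wire-level illustration of what that spec section describes (all values below are invented): the request body is a JSON array of request objects, and the response is an array of response objects matched by id, with failures reported per entry rather than for the batch as a whole.

package jsonrpc

// Illustrative payloads only; the ids, methods and result are made up.
const exampleBatchRequest = `[
	{"jsonrpc": "2.0", "id": 1, "method": "eth_blockNumber", "params": []},
	{"jsonrpc": "2.0", "id": 2, "method": "no_such_method", "params": []}
]`

const exampleBatchResponse = `[
	{"jsonrpc": "2.0", "id": 1, "result": "0x10d4f"},
	{"jsonrpc": "2.0", "id": 2, "error": {"code": -32601, "message": "Method not found"}}
]`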
Related Issue
filecoin-project/ref-fvm#1047