Apollo DataSources REST: Request Parallelization #1451
There is nothing in `apollo-datasource-rest` itself that forces requests to run in sequence. In your example, both fetching the list of popular movies and fetching the individual movies happens in the resolver, so the sequencing comes from how that resolver awaits each request.
The caching you're seeing is caching of REST responses, which is separate from the whole-query caching Engine shows.
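As a sketch of what that REST-response caching looks like (the endpoint and TTL below are made up for illustration; the real fork's data source will differ):

```js
import { RESTDataSource } from 'apollo-datasource-rest';

// `apollo-datasource-rest` caches GET responses in its HTTP cache based on
// the upstream response's Cache-Control headers, or an explicit ttl override.
// This is per-REST-request caching, not whole-query caching.
class MoviesAPI extends RESTDataSource {
  constructor() {
    super();
    this.baseURL = 'https://api.example.com/'; // placeholder endpoint
  }

  async getPopularMovies() {
    // Cache this response for 60 seconds even if the upstream API
    // doesn't send Cache-Control headers.
    return this.get('movies/popular', undefined, { cacheOptions: { ttl: 60 } });
  }

  async getMovie(id) {
    return this.get(`movies/${id}`);
  }
}
```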
So in the original example from that repo, the requests run in sequence because the resolver awaits each movie fetch before starting the next one.
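To make that concrete, here's a sketch of the kind of resolver that produces the cascade (the `moviesAPI` data source and its `getPopularMovies`/`getMovie` methods are stand-ins, not the fork's actual code):

```js
// Sequential anti-pattern: each `await` inside the loop blocks the next
// request, so the total latency is the sum of every request's duration.
const resolvers = {
  Query: {
    movies: async (_source, _args, { dataSources }) => {
      const { results } = await dataSources.moviesAPI.getPopularMovies();
      const movies = [];
      for (const { id } of results) {
        movies.push(await dataSources.moviesAPI.getMovie(id)); // one at a time
      }
      return movies;
    },
  },
};
```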
Yeah, that is pretty misleading. I wonder if that UI could be improved, because to me that seemed like a clear sign that I was doing something wrong.
Thanks for opening this originally! We've expanded and changed our documentation quite a bit since this was originally opened, and we've introduced a full-response cache within Apollo Server which takes the place of the previous-generation Apollo Engine Proxy (a black-box Go binary) that once served a similar purpose, but is open-source and written in JavaScript. This new full-response cache is documented here and respects the `cacheControl` hints in your schema.

Furthermore, the data sources documentation has come along since this was first opened as well.

It seems like any remaining concern here might be a concern with Apollo Engine itself, rather than Apollo Server, but that would be best surfaced to the Apollo Engine team using the support tooling within Apollo Engine. I'm sure the Apollo Platform team would very much appreciate feedback on how that part of the UI could be made clearer.

Thanks again!
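For anyone landing here later, a minimal sketch of enabling that full-response cache, assuming the `apollo-server-plugin-response-cache` package and an illustrative schema (the resolver here is just a stub):

```js
import { ApolloServer, gql } from 'apollo-server';
import responseCachePlugin from 'apollo-server-plugin-response-cache';

const typeDefs = gql`
  type Movie {
    id: ID!
    title: String
  }
  type Query {
    # Hint: whole-query responses containing this field may be cached for 60s.
    movies: [Movie]! @cacheControl(maxAge: 60)
  }
`;

// Stub resolver, just so the sketch is self-contained.
const resolvers = {
  Query: { movies: () => [] },
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
  plugins: [responseCachePlugin()], // cache entire query responses in memory
});
```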
Hey there! Not sure whether to label this a bug report or a feature request. The gist of it is: when multiple requests are in flight to a REST API using `apollo-datasource-rest`, they all run in sequence rather than in parallel, which inflates the response time for some queries.

To better illustrate this, I've thrown together a basic example forked from `fullstack-workshop-server`; a screenshot of the trace in Engine shows the sequential cascade I'm talking about. And here's the fork itself:
https://github.com/Saeris/fullstack-workshop-server/tree/parallel-requests-example
Basically, here's what's happening: the resolver first requests the list of popular movies, and once that returns it fetches each of the 20 movies' details one after another, each request waiting on the one before it.
What should happen here is that once we have all the movie IDs, we should fetch all of their data in parallel to optimize the response time of our server. That way, resolving `movies: [Movie]!` in our schema will only take as long as the longest of the 20 responses takes to come back, which in the screenshot above would be 6.86ms instead of the total 29.2ms it took to resolve the whole cascade. Obviously, the deeper you go in a schema like this, the more time could be saved by running requests in parallel.
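For illustration, a minimal sketch of the parallel version (using the same hypothetical `moviesAPI` methods as the sequential sketch above):

```js
// Parallel version: start all 20 detail requests at once and await them
// together, so this field resolves in roughly the time of the slowest
// single request rather than the sum of all of them.
const resolvers = {
  Query: {
    movies: async (_source, _args, { dataSources }) => {
      const { results } = await dataSources.moviesAPI.getPopularMovies();
      return Promise.all(
        results.map(({ id }) => dataSources.moviesAPI.getMovie(id))
      );
    },
  },
};
```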
I'm not entirely certain how this can be accomplished; this just came up as I was picking apart the code for `apollo-datasource-rest` to solve an issue I'd been having with caching in Lambda.

Also, please note that even though we're clearly hitting the cache on repeat requests for this query (you'll notice the response time shrink considerably), Engine does not appear to be receiving any caching information. I believe I've got the server configured correctly to support it, so I don't know what's wrong there. If you inspect the network requests being sent in Playground, you'll see that `cacheControl` is included in the extensions along with the other tracing data.