apollo-server-lambda require taking seconds to finish #2718
Two things to try: …
Also, do note that you do not need …
@abernix Thanks for the response. I tried to update, but then I'm getting CORS errors. I tried the …
Can you investigate the cause of the CORS errors and provide the errors here so we can help debug? If you could try …
@abernix Just to give you an update: indeed, I'm still debugging what's happening. I will get back to you as soon as I have any news! I will try to have something this week. Thanks for the clues!
@luanmuniz Ok, that seems to narrow it down to #2674 then! Thanks for the confirmation! What are the specific CORS errors? It's possible that this is either a new bug introduced by that PR, or that previously misconfigured CORS settings on that server are now being respected properly. I'm curious whether @dsanders11, who originally introduced #2597, has any thoughts (or is able to reproduce a regression themselves?).
@abernix, isn't this just describing AWS Lambda cold-start behavior? It sounds like what one would expect with Lambda; it's not a constant performance problem. @luanmuniz, I think this is expected behavior. Look into Lambda "cold starts": execution can be quite slow when the function hasn't been invoked in a while. Regarding #2674, yes, there's a regression. I'll comment on that PR.
@dsanders11 I would agree with you if it were always the case, but the problem doesn't happen every time a cold start is triggered. It's just sometimes, which is very weird. I'm tracking it down at the moment. Maybe it's related to the Lambda runtime and not this module. The timeouts are keeping us from releasing our product, so I have a deadline to solve the issue. This week, without fail, I will have a more detailed report to give you. Thanks for all the help so far!
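For illustration, a minimal sketch (not from the thread; all names are illustrative) of one way to verify whether slow requests line up with cold starts: module-scope code in a Node.js Lambda runs once per container, so a module-scope flag distinguishes the first (cold) invocation from warm ones.

```js
// Illustrative sketch: detecting cold starts in a Node.js Lambda handler.
// Module-scope code runs once per container, so `coldStart` is true only
// on the first invocation each container serves.
let coldStart = true;
const initStartedAt = Date.now();

exports.handler = async (event) => {
  if (coldStart) {
    console.log(`cold start; container initialized at ${initStartedAt}`);
    coldStart = false;
  } else {
    console.log('warm invocation');
  }
  return { statusCode: 200, body: 'ok' };
};
```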
As helpfully debugged and identified by @dsanders11 (thank you!), this was a mistake in my original implementation, which incorrectly spread the iterable `Headers` into the `headers` object of the response. While I would have loved to create a regression test for this, there was no place currently wired up to support handler-specific options (like `cors`), which sometimes live on the top-level `ApolloServer` constructor and sometimes on `createHandler` or `applyMiddleware`. I did try to futz with it for a full hour, to no avail. I'm already aware of a number of limitations in the current so-called `integration-testsuite`, and I'm putting this on the list of things to address in an upcoming overhaul of that testing infrastructure. That said, I've manually tested this, and I believe it's working properly; an upcoming release should confirm that. Ref: https://github.com/apollographql/apollo-server/pull/2674/files#r288053103 Ref: #2718 (comment)
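A minimal sketch of the class of bug described above (illustrative, shown here with node-fetch's `Headers` rather than the actual apollo-server-lambda source): a fetch-style `Headers` object is iterable, but its entries are not own enumerable properties, so object spread silently yields an empty object and the CORS headers are dropped from the response.

```js
// Using node-fetch v2's fetch-style Headers for demonstration.
const { Headers } = require('node-fetch');

const headers = new Headers({ 'access-control-allow-origin': '*' });

// Buggy: object spread copies own enumerable properties only. Headers
// stores its entries in internal state, so this produces an empty object.
const broken = { ...headers }; // => {}

// Working: explicitly iterate the Headers entries into a plain object.
const fixed = {};
for (const [name, value] of headers) {
  fixed[name] = value;
}
// fixed => { 'access-control-allow-origin': '*' }
```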
As @dsanders11 has noted, there are certainly some properties of AWS Lambda that result in longer cold-start times, but I was under the impression that you were saying it was the finishing of the request that took longer. Though, looking at the code, I don't see where and how the … That said, the CORS errors should be fixed thanks to 7e78a23 and the latest …
Hello everyone! An update: @abernix, the response time was not affected, but the cold-started request sometimes took a long time, and that was because of the startup code (requires, instantiation, and so on). Together with an AWS representative, we tracked down the issue. When the function's memory was not high (128–256 MB), the MJML module was basically consuming all of the vCPU available to the container, so when it came time for GraphQL to run, it didn't have enough CPU for its initial load and had to wait for the MJML module to free the CPU; that's why the load was delayed. According to AWS, at 1792 MB of memory we get the equivalent of 1 vCPU, scaling linearly up and down from there. After I removed the MJML module, GraphQL still took a little time, but not as much as before. I'm going to test the CORS problem this week and get back to you. Thank you all again for the help; it was SUPER helpful while debugging the problem.
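A hedged sketch of one mitigation for the finding above (illustrative names, not from the thread): defer requiring the CPU-heavy module until it is first used, so it does not compete with the GraphQL stack for the container's fractional vCPU during cold start.

```js
// Illustrative: lazy-load mjml on first use instead of at cold start,
// so its initialization cost is not paid while the container starts up.
let mjml2html; // cached after the first call

function renderEmail(mjmlTemplate) {
  if (!mjml2html) {
    mjml2html = require('mjml'); // deferred, one-time require
  }
  // mjml's default export compiles MJML markup and returns { html, errors }.
  return mjml2html(mjmlTemplate).html;
}

module.exports = { renderEmail };
```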
Those are really great findings, @luanmuniz! Glad you got to the bottom of it. I'm going to close this, as it sounds like the original issue is actually resolved. If the CORS thing is still an issue, I suspect it is a separate issue (possibly even a known, separate one). Thanks for filing this originally!
Original issue description: This piece of code in my setup is taking several seconds in some cases:
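(The snippet itself was not preserved in this copy. Given the issue title, it was presumably something along these lines; this is a hypothetical reconstruction, not the reporter's actual code.)

```js
// Hypothetical reconstruction: timing the require named in the title.
console.time('require apollo-server-lambda');
const { ApolloServer, gql } = require('apollo-server-lambda');
console.timeEnd('require apollo-server-lambda');
```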
Log output from this piece of code: …
I'm using: …
Honestly, I have no clue what could be happening, because the issue is not consistent. It happens only sometimes: the same code sometimes takes ~400 ms and sometimes several seconds.
I've tried apollo-server-lambda@2.4.8, but the issue remains.