gzippin #1037
Conversation
Beat me to it! Thanks for getting it done, though! |
so stoked! |
Looks great, nice work! I just tested deploying a bare-bones project with Python 2.7.12 and am getting an error from
If you put the tarfile in streaming mode 'r|gz' (instead of 'r:gz'), it quiets that error: with tarfile.open(fileobj=archive_on_s3['Body'], mode="r|gz") as t: After that update, if you check the .tar.gz on S3, the files are in the right places but they all have a size of 0 bytes. I haven't had time to look into that yet. It might be related to |
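For reference, here is a minimal sketch of the streaming-mode call being suggested, assuming a boto3 client and placeholder bucket/key names (this is illustrative, not Zappa's exact code):

```python
import tarfile

import boto3

s3 = boto3.client("s3")
# Bucket and key names are placeholders for this example.
archive_on_s3 = s3.get_object(Bucket="my-zappa-bucket", Key="project.tar.gz")

# mode="r|gz" reads the response body as a non-seekable stream;
# "r:gz" would try to seek() on it and raise the error mentioned above.
with tarfile.open(fileobj=archive_on_s3["Body"], mode="r|gz") as t:
    t.extractall("/tmp/project")
```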
OK, try it now. Works for me with a 200 MB project on a 128 MB RAM Lambda. Test environments were OS X py2.7 and OS X py3.6. |
Working now, 2.7 and 3.6 on Ubuntu 16.04. Here are a couple of cold start times on different memory size instances.

414 MB project:

Memory Size | Gzip Extract Time (s)
---|---
1536 MB | 3.05
512 MB | 12.49
256 MB | 24.79
128 MB | Error (timeout)

For the cases I could check (when the project + zip is less than 500 MB), cold start performance is at least as fast as zip on disk, usually better.

225 MB project:

Memory Size | Gzip Extract Time (s) | Zip Extract Time (s)
---|---|---
1536 MB | 1.88 | 1.86
512 MB | 4.10 | 5.99
256 MB | 9.56 | 10.01
128 MB | 16.89 | 20.74

Given the slow cold start times on small instances, the win here will be huge deployments on large instances versus big-ish deployments on tiny instances.

Notes:
- Precompiled packages were turned off to make sure extract sizes matched local size.
- All times are the slowest of 3 attempts.
|
Awesome testing! Was the timeout due to the 30-second timeout on Lambda?
|
Yes, all tests went through API Gateway and timed out after 30 seconds. I'm sure it would have completed over direct invocation if you crank the Lambda timeout up to 5 minutes. |
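For anyone reproducing this, a hedged sketch of a direct invocation (the function name is a placeholder), which bypasses the API Gateway cap:

```python
import boto3

lam = boto3.client("lambda")

# "my-zappa-function" is a placeholder. Direct invocation is bound only by
# the function's own timeout (up to 5 minutes at the time of this PR),
# not by API Gateway's 30-second limit.
response = lam.invoke(
    FunctionName="my-zappa-function",
    InvocationType="RequestResponse",
)
print(response["StatusCode"])
```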
Any ideas on speeding that up? It might be a separate issue from this PR but it sure would be nice to make that more doable.
…Sent from my iPhone
On Aug 6, 2017, at 3:27 PM, Oliver Rice ***@***.***> wrote:
Yes, all tests went through API Gateway and timed out after 30 seconds. I'm sure the it would have completed over drect invocation if you crank the Lambda timeout up to 5 minutes.
—
You are receiving this because you authored the thread.
Reply to this email directly, view it on GitHub, or mute the thread.
|
Completely agree, though I'm not yet clear where the bottleneck is. I tried removing gzip compression and stream-extracting an uncompressed tarball to see if CPU usage during extraction was the issue. Speeds were pretty similar to gzip, so it wasn't a helpful test. We need a better understanding of disk/network/CPU performance at each Lambda memory size to figure out whether there's more we can do to improve cold starts; a rough way to instrument that is sketched below. I think disk is most likely to be the problem. I'll run some tests this week and get back to you, but (as you say) it's outside the scope of this PR. |
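As a starting point, a minimal, assumption-laden sketch of timing the extract step inside the handler (the helper name and paths are illustrative, not code from this PR):

```python
import tarfile
import time

def timed_extract(stream, dest="/tmp/project"):
    # Times only the streaming gunzip + untar step, the part of the
    # cold start this thread is benchmarking.
    start = time.time()
    with tarfile.open(fileobj=stream, mode="r|gz") as t:
        t.extractall(dest)
    elapsed = time.time() - start
    print("extracted to %s in %.2fs" % (dest, elapsed))
    return elapsed
```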
Any thoughts on a fallback strategy to zip or an uncompressed tarfile when zlib isn't available? zlib is technically an optional component, so using gzip without a backup breaks slim_handler for certain valid builds. Too niche to worry about? |
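To illustrate the idea, a hedged sketch of probing for zlib and falling back to an uncompressed tarball; the function name and return values are hypothetical, not part of Zappa:

```python
def pick_archive_format():
    # zlib is an optional component of CPython, so some otherwise valid
    # builds won't have it; gzip support in tarfile depends on it.
    try:
        import zlib  # noqa: F401
        return "tar.gz"  # gzipped tarball, as this PR uses
    except ImportError:
        return "tar"  # a plain tarball needs no zlib
```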
Hi all, is it already possible to use this fix? I am getting the following error
Got any details about the project? Size of the zip? RAM size on Lambda? Size of the project unzipped? Etc. Oh, just reading your trace: you're using the current code and want this new code. Not saying that this PR is broken. |
Sure: |
Check out this PR and give it a shot. I'd love to see how the 487M project works. Also, if you're on Windows, we'd love to see it succeed there as well. |
I'm on Ubuntu. It's actually working for me now, though I'm not sure whether I already have your modified version. I believe the reason it wasn't working is that the unzipped project was sometimes larger than 525M when I had all the Python files. But after deleting them (the project is now 487M), it works. |
Description
Using gzipped tarballs for the slim_handler's package. This allows the project to be downloaded and extracted into /tmp on the fly.
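For context, a minimal sketch of the packaging side of this approach, i.e. writing a project directory as a gzipped tarball; the helper name and paths are illustrative, not Zappa's actual implementation (see the core.py link below for the real code):

```python
import tarfile

def make_gzipped_tarball(project_dir, archive_path):
    # "w:gz" writes a gzip-compressed tarball; arcname="." keeps member
    # paths relative so they extract directly into the target directory
    # (e.g. /tmp on Lambda).
    with tarfile.open(archive_path, mode="w:gz") as tar:
        tar.add(project_dir, arcname=".")
```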
Would love help testing, especially on Windows, as I attempted to address
https://github.com/Miserlou/Zappa/blob/master/zappa/core.py#L568
from PR #716 with the tarball approach.
GitHub Issues
#961
#881
#1020