Large build output cannot be uploaded #32
This is strange. We already have the limit increased from the default here: https://github.com/fox1t/turborepo-remote-cache/blob/main/src/plugins/remote-cache/index.ts#L18
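For context, a minimal sketch of how a Fastify body limit is typically raised at server creation; the 100 MiB value here is purely illustrative, not the exact value used in the linked file:

```typescript
import Fastify from 'fastify'

// Illustrative value: raise Fastify's default 1 MiB body limit
// to roughly 100 MiB so larger artifacts are accepted.
const app = Fastify({
  bodyLimit: 100 * 1024 * 1024, // bytes
})
```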
Hmm. I'm building individually packaged Lambda functions using esbuild, so the packages vary between about 1 MB and 5 MB. But there are quite a lot of them in one service... the total size of the service build folder containing the ZIP files is 228 MB! That probably explains it if they're all being uploaded together as a single artifact: there's no reason to think your limit is ineffective. 100 MB does seem like a reasonable limit if the data is being buffered in memory. Do you think it would be possible to stream the request to storage, along the lines of fastify/fastify#534? I appreciate this is probably harder to implement, though it might also yield a performance benefit.
I think we can support arbitrarily large artifacts using a strategy similar to the one I used here: https://github.com/fox1t/fastify-multer (multer is only for multipart/form-data). The idea is to have a content parser that flags the incoming request as "interesting" and then handles the upload in a preHandler hook, piping the incoming stream directly to a writable stream (whether the target is a local file or a remote bucket). Would you like to open a PR to support it?
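A rough sketch of that strategy, assuming a local filesystem target; the route path, parameter name, and `/tmp` destination are illustrative only, and a remote bucket's writable stream would slot in the same way:

```typescript
import Fastify from 'fastify'
import { createWriteStream } from 'node:fs'
import { pipeline } from 'node:stream/promises'

const app = Fastify()

// Hand the raw request stream back untouched instead of buffering it,
// so request.body becomes the readable payload stream.
app.addContentTypeParser(
  'application/octet-stream',
  (request, payload, done) => done(null, payload),
)

app.route({
  method: 'PUT',
  url: '/artifacts/:hash', // illustrative path, not the project's actual route
  // The preHandler pipes the incoming stream straight into a writable stream,
  // so the artifact never needs to fit in memory.
  preHandler: async (request) => {
    const { hash } = request.params as { hash: string }
    await pipeline(
      request.body as NodeJS.ReadableStream,
      createWriteStream(`/tmp/${hash}`),
    )
  },
  handler: async () => ({ ok: true }),
})
```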
This might also be related to vercel/turborepo#2096 and vercel/turborepo#2280
After further testing and creating a PR in turborepo (vercel/turborepo#2428), I can confirm that I was able to upload an artifact of 92 MB to the remote cache.
I have an issue where my artifacts are larger than the 104 MB file size limit. Would it be possible to increase the default, or add an environment variable to set this limit for the server?
I'd like the same as @simonjpartridge. An environment variable would be great |
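A minimal sketch of how an environment variable could feed the body limit; the `BODY_LIMIT` name is hypothetical and not an existing option of this project:

```typescript
import Fastify from 'fastify'

// BODY_LIMIT is a hypothetical variable name used only for illustration;
// fall back to roughly 100 MiB when it is not set.
const bodyLimit = Number(process.env.BODY_LIMIT ?? 100 * 1024 * 1024)

const app = Fastify({ bodyLimit })
```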
## [1.7.3](ducktors/turborepo-remote-cache@v1.7.2...v1.7.3) (2022-11-05)

### Bug Fixes

* [#32](ducktors/turborepo-remote-cache#32), [#62](ducktors/turborepo-remote-cache#62) ([1052cb7](ducktors/turborepo-remote-cache@1052cb7))
* remove mapped envs ([ea614b3](ducktors/turborepo-remote-cache@ea614b3))
* typescript ([2f2902e](ducktors/turborepo-remote-cache@2f2902e))
* update dockerfile ([e03a900](ducktors/turborepo-remote-cache@e03a900))
I found that artifacts from the end of my build pipeline (just before deploy) are not cached.
Bit of a show-stopper, as it causes a full redeploy!
It looks like Fastify has a default body size limit of 1 MiB, so I suspect that is the reason.
Could we have a configuration option to bump this considerably higher?
A backport to the last stable version (so 1.1.2 I guess) would be really helpful, BTW.