Limit used memory/CPU or parallel jobs #359
Comments
This should be as simple as passing on runtime options to the buildkit container, right?
Also, docker buildx build already supports …
In my testing, setting …
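One possible stopgap (an assumption on my part, not something from this thread or the buildx docs): the docker-container driver just runs buildkitd in an ordinary container, so its resources can be capped after the builder is bootstrapped. The builder name "limited" and the numbers below are only examples, and the container name pattern buildx_buildkit_<name>0 should be confirmed with docker ps:

docker buildx create --name limited --driver docker-container --use
docker buildx inspect --bootstrap
# cap the BuildKit container itself; confirm the exact container name with docker ps
docker update --cpus 2 --memory 4g --memory-swap 4g buildx_buildkit_limited0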
Interested in this feature as well. It would be more useful to have options to limit the CPU/memory when creating a builder.
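For the create-time side of that, recent buildx releases document resource driver-opts on the docker-container driver. A sketch, with the caveat that the exact opt names and whether your version supports them should be checked against docker buildx create --help ("limited" and the values are just examples):

docker buildx create --name limited --driver docker-container \
  --driver-opt memory=4g \
  --driver-opt cpuset-cpus=0-1 \
  --use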
I'm running into similar issues with the Kubernetes driver. If I limit the CPU, I just end up CPU-throttled. My build jobs don't seem to fan out when I set replicas either, so if I bake too many images at once I eventually get timeouts. I mention this because the same issue could come up with the docker-container approach on its own.
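For the kubernetes driver specifically, the equivalent knobs are driver-opts as well. A sketch assuming the opt names from the buildx kubernetes driver docs (replicas, requests.*, limits.*, loadbalance), which are worth double-checking against your buildx version:

docker buildx create --name k8s-builder --driver kubernetes \
  --driver-opt namespace=buildkit,replicas=3,requests.cpu=1,requests.memory=2Gi,limits.cpu=2,limits.memory=4Gi,loadbalance=random \
  --use

loadbalance=random spreads builds across the replicas instead of pinning a client to one pod, which might help with the fan-out problem, at the cost of cache locality.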
I'm still running into this same issue :(
Compose supports COMPOSE_PARALLEL_LIMIT (https://docs.docker.com/compose/environment-variables/envvars/#compose_parallel_limit). Alternatively, maybe add a similar option to buildx bake?
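For reference, the Compose-side limit from that link is just an environment variable; whether it also throttles build fan-out depends on the Compose version, so treat this as something to test rather than a guarantee:

COMPOSE_PARALLEL_LIMIT=2 docker compose build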
That would be great!
In my opinion this feature is absolutely critical to using buildx bake. My workaround for this is to run …
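One way to cap how much bake runs at once is BuildKit's max-parallelism setting, passed in through a buildkitd config file when the builder is created. A sketch ("limited", the value 2, and the file path are just examples, and depending on the buildx version the flag may be spelled --config or --buildkitd-config):

# buildkitd.toml
[worker.oci]
  max-parallelism = 2

docker buildx create --name limited --driver docker-container --config ./buildkitd.toml --use
docker buildx bake -f docker-compose.yml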
Hi,
I tried to use buildx to speed up our pipeline.
We have a huge docker-compose file with many services that need to be built, and our build server is a bit limited in memory and CPU.
Once I started using buildx to build our project with
docker buildx bake -f docker-compose.yml
after a while the build server stopped responding; it was out of memory and CPU, with many, many build processes running in parallel. Is there any way to limit the amount of RAM and CPU used, or at least to specify the maximum number of parallel jobs? Right now it seems to run everything at once, which overwhelms our build server.