Gradle workers #447

v10.0.0 introduced Gradle workers to perform the linting. This causes us a slight issue with memory management on our CI builds, as we need to keep the number of threads to a minimum to avoid exceeding the max heap size.

Please could you make the workers configurable, i.e. allow us to turn them off?
The worker count is configured via the Gradle org.gradle.workers.max property (or by disabling parallel task execution). Worker memory can be set via workerMaxHeapSize, though I would advise you not to decrease it further. If you are using a Docker executor with a cgroup memory limit on your CI, I would also advise you to set -XX:+UseContainerSupport.
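A minimal sketch of how these knobs fit together. The property names below come from Gradle and from this plugin, but where exactly workerMaxHeapSize is exposed varies by plugin version (it may live on the extension or on the check tasks); the task class name and all values are assumptions for illustration, not recommendations:

```properties
# gradle.properties
# Cap the number of concurrent Gradle worker processes (defaults to the CPU count).
org.gradle.workers.max=2
# Alternatively, disable parallel task execution entirely.
# org.gradle.parallel=false
```

```kotlin
// build.gradle.kts -- assuming the property is exposed on the plugin's
// check tasks (adjust the import to your plugin version).
import org.jlleitschuh.gradle.ktlint.tasks.BaseKtLintCheckTask

tasks.withType<BaseKtLintCheckTask>().configureEach {
    // Heap ceiling for each ktlint worker JVM; illustrative value.
    workerMaxHeapSize.set("512m")
}
```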
This will not happen.
Ha! Fair enough. You wouldn't believe how much pain we've suffered trying to tweak the Java options for memory management. Another couple of options to throw into the mix - maybe the magic spell will work this time. :)
I ran into this too, but on Windows (so UseContainerSupport doesn't help). We have 6 Gradle modules with 2-3 source sets each. Our logs are filled with:

This has doubled our memory usage and is causing us to run out of paging file on our Windows machines. It seems the workers use keepAlive DAEMON, so the memory is never returned once ktlint is done running. After downgrading to 9.4.1, we are not seeing this memory growth curve.
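For context, this is roughly how a plugin submits work through Gradle's Worker API with process isolation, which is the mechanism behind the behaviour described above: the forked worker JVM is a long-lived daemon that Gradle keeps alive for reuse, so its heap is not returned when the task finishes. A sketch only, not ktlint-gradle's actual code; LintAction, LintTask, and the heap size are made up:

```kotlin
import javax.inject.Inject
import org.gradle.api.DefaultTask
import org.gradle.api.tasks.TaskAction
import org.gradle.workers.WorkAction
import org.gradle.workers.WorkParameters
import org.gradle.workers.WorkerExecutor

// Hypothetical stand-in for the plugin's lint work action.
abstract class LintAction : WorkAction<WorkParameters.None> {
    override fun execute() {
        // ...run the linter inside the worker process...
    }
}

abstract class LintTask @Inject constructor(
    private val workers: WorkerExecutor,
) : DefaultTask() {
    @TaskAction
    fun runLint() {
        // processIsolation() forks a separate worker JVM. Gradle reuses
        // compatible worker daemons across tasks, and with the DAEMON
        // keep-alive mode mentioned above they outlive the build, so
        // their heap stays allocated after the lint task completes.
        val queue = workers.processIsolation { spec ->
            spec.forkOptions { opts -> opts.maxHeapSize = "512m" }
        }
        queue.submit(LintAction::class.java) { }
    }
}
```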
I run into this pretty regularly for local builds on macOS without containers. I'll capture a dump the next time it happens, if that would be helpful. Since it only happens once or twice a day, I usually just kill the daemons with