General issue to discuss repeated "out of disk space" on Nodes #2510

There seems to be a persistent issue of nodes running out of disk space; several contributing factors are discussed in the comments below.

Comments
`docker images -a` on node docker-packet-ubuntu2004-x64-1 shows a lot of possibly unused images, taking up a lot of space.
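For reference, a minimal sketch of the kind of commands that surface and reclaim this space; the 7-day retention window is an illustrative choice, not a project policy:

```sh
# summarise space used by images, containers, volumes and build cache
docker system df

# full image list, including intermediate/dangling layers
docker images -a

# remove images not used by any container and older than 7 days
docker image prune -a -f --filter "until=168h"
```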
In this case the job was assigned to the machine test-docker-ubuntu2110-x64-1 and failed when doing cleanWs. (I cannot log in to the machine to check whether any files remain in that workspace; besides, the machine seems to be working now: https://ci.adoptopenjdk.net/view/work-in-progress/job/grinder_sandbox_new/529/console) Based on this case I suspect that deferred wipeout may need extra space to do the wipeout, as it copies/renames the workspace directory to a temporary directory name and then starts a background task to delete that temporary directory, in order to shorten the build time. PR created in aqa-tests to improve this: adoptium/aqa-tests#3467
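If those temporary directories do accumulate on a node, something like the sketch below could find and remove them by hand. This is assumption-laden: the `/home/jenkins/workspace` root is illustrative, and the `*_ws-cleanup_*` suffix is my understanding of what the Workspace Cleanup plugin uses for the renamed directories and would need verifying against the actual nodes.

```sh
# find renamed workspace directories left behind by deferred wipeout
# and delete any that are more than a day old
find /home/jenkins/workspace -maxdepth 1 -type d -name '*_ws-cleanup_*' \
  -mtime +1 -exec rm -rf {} +
```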
@Haroon-Khel @sxa @smlambert
Here's the full list of images:
Unable to run the installer pipeline as docker-packet-ubuntu2004-intel-1 is reporting "no space left on device".
It's sitting at 82GB free just now, which ought to be enough 🤷🏻
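For anyone double-checking a node, the free-space figure can be confirmed with `df`; the path below is an illustrative mount point, not necessarily where this number came from:

```sh
df -h /var/lib/docker   # free space on the volume backing Docker's storage
```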
Ah, the other system.
@smlambert I've removed one particularly large docker container that I was using for some testing (hadn't realised it had got that large!) and am re-running at https://ci.adoptopenjdk.net/view/work-in-progress/job/Sophia-adoptium-packages-linux-pipeline/74/
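A hedged sketch of how such a container can be spotted and removed; the container name is a placeholder, not the one actually deleted here:

```sh
# list all containers with the size of their writable layer,
# so the largest offenders stand out
docker ps -a --size

# remove the oversized container once identified
docker rm -f <container-name>
```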
The containers hosted on docker-packet-ubuntu2004-amd-1 have been moved to another disk on that machine. They are now pulling from 256GB of available space, about 5x what was available before.
New disk added to docker-packet-ubuntu2004-intel-1. The containers hosted on that machine now have access to 600GB+ of disk space. More can be added if we continue to see bottlenecks.
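One common way to do this kind of move is to relocate Docker's data root onto the new disk. The sketch below is an assumption about the approach, not a record of what was actually done on these machines, and `/mnt/docker-data` is an illustrative mount point:

```sh
# stop the daemon before touching its storage
systemctl stop docker

# copy the existing data to the new disk, preserving attributes and hard links
rsync -aHX /var/lib/docker/ /mnt/docker-data/

# point dockerd at the new location
# (merge by hand if daemon.json already has other settings)
echo '{ "data-root": "/mnt/docker-data" }' > /etc/docker/daemon.json

systemctl start docker
docker info --format '{{.DockerRootDir}}'   # should now print /mnt/docker-data
```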
I'm going to close this now, as the issues that were causing problems have been resolved. We do need to ensure we carry out appropriate capacity planning when new releases come out, and not just assume there will be sufficient space when adding new things to the pipelines.
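As a small step towards that, a cron-driven check along these lines could flag nodes before they fill up. This is only a sketch: the 85% threshold and the alert address are illustrative, and it assumes a working mail setup on the node:

```sh
#!/bin/sh
# warn when the root filesystem passes 85% usage
USED=$(df -P / | awk 'NR==2 { gsub("%", "", $5); print $5 }')
if [ "$USED" -gt 85 ]; then
  echo "$(hostname): root filesystem at ${USED}% used" \
    | mail -s "disk space warning" infra-alerts@example.org
fi
```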