No space left on device #1042
I agree that this is not a satisfactory situation - out of curiosity, if you do a factory reset, do you get back to a working system?
Thanks @friism for the workaround. It worked! I just lost my settings, but that was expected and it didn't take me long to configure them again. I saved the Hyper-V disk on an external drive if someone is interested in investigating further (I can send it by Dropbox or other means).
Next time run
@hinell As said in my ticket description, I already pruned everything.
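For context, the cleanup being discussed is the standard Docker prune workflow. A minimal sketch (hedged: these commands free Docker-managed data, but do not shrink the MobyLinuxVM's .vhdx file itself, which is the leak this thread is about):

```shell
# Show what Docker thinks it is using (images, containers, volumes, build cache)
docker system df

# Remove all unused data, including unused images and volumes.
# --volumes also deletes volumes not referenced by any container.
docker system prune --all --volumes
```

Even after this reports space reclaimed, the dynamically expanding VHD on the Windows side stays at its high-water mark, which is why commenters below still hit "no space left on device".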
@gaetancollaud Ohh yep, sorry, just missed that part. Seems like a memory leak.
I am encountering this limitation as well. For me the issue is that we are working with large Oracle images populated with test data. Why is it not possible to extend the disk on the MobyLinuxVM? If there's no UI for it, at least give us the ability to access the VM to extend the disk!
@vmarinelli I agree. The inability to access the native Moby VM is frustrating, so I'm forced to use a stand-alone virtual machine in particular cases.
Hi @vmarinelli @hinell Stop Docker
@jasonbivins Thanks for the tip. But it seems like there is a memory leak somewhere. Increasing the disk doesn't help because we will reach the physical disk's maximum size at some point. Are you interested in having the Hyper-V disk that I saved just before my factory reset?
Even after editing MobyLinux.ps1 with a VhdSize of 100GB and restarting Docker, nothing changed.
@jasonbivins Thanks very much for that workaround. I was able to get the disk size increased, but I had to add one additional step to your instructions to make it work. After restarting Docker, I had to click "Reset to Factory Defaults". Then, when the MobyLinuxVM was rebuilt, it came up with the larger volume. I made a post to the Forums to document this workaround and linked it back to this issue. @gaetancollaud While I have encountered the "Out of space on device" error repeatedly while trying to restore large Oracle containers to my D4Win install, I haven't encountered the issue of not being able to free up space even after deleting images and containers and running prune. So for me, the workaround provided addresses my issue. I will keep an eye out for this now that I am able to work, and will post here if I encounter the same behavior.
@Jamby93 TIP: use
@hinell Yes, and also back up modified containers, volumes, registry and swarm configuration, networks and so on. That's simply unaffordable in a production environment (let's talk about CD infrastructure that builds a ton of images a day). All for a bug that simply makes no sense at all. I mean, so far nobody has really proposed a reason why this bug could happen, instead of simply trying to work around it and forget it. I prefer to try to understand why an issue is occurring instead of only fixing short-term specific problems that will come back again one day or another. I'm willing to help gather information if that could help address the issue.
I have this as well with larger image content files. Here is an easy way to retest: a bash script to create 2x8GB files.
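The script itself did not survive in this thread; a minimal sketch might look like the following (filenames are hypothetical, and `count=8` writes cheap 8 MB files for a dry run — raise it to 8192 for the ~8 GB files of the original repro):

```shell
#!/bin/sh
# Create two large files filled from /dev/zero.
# count=8 => 8 MB each (quick dry run); the original report used ~8 GB (count=8192).
dd if=/dev/zero of=bigfile1.bin bs=1M count=8
dd if=/dev/zero of=bigfile2.bin bs=1M count=8
ls -lh bigfile1.bin bigfile2.bin
```

Building an image that copies these files repeatedly inflates the VM's virtual disk toward its 60GB ceiling.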
Dockerfile:
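The Dockerfile contents were lost as well; presumably it simply copied the large files into the image, along these lines (base image and paths are assumptions):

```dockerfile
FROM alpine:3.12
# Copying large files into image layers consumes space inside the Moby VM's
# virtual disk, which is what eventually triggers "no space left on device".
COPY bigfile1.bin bigfile2.bin /data/
```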
Try to build the image. Output:
Analysis
Docker version: Server: Docker info: Containers: 267
I managed to increase the disk size in the Hyper-V GUI.
@cybertk That did not work properly for me. Sure, the MobyLinuxVM reported a larger "max disk size" in Hyper-V manager afterwards, but the overlay inside of containers that were then created was still 60GB max:
It wasn't until I modified MobyLinux.ps1 and "reset to factory default" as described by @jasonbivins and @vmarinelli that the overlay was increased. |
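A quick way to confirm which size the container overlay filesystem actually gets (assuming a working `docker` CLI) is to check `df` from inside a throwaway container:

```shell
# Reports the size of / as seen by a container; on affected installs this
# stayed at ~60G even after enlarging the VHD in Hyper-V Manager.
docker run --rm alpine df -h /
```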
@jasonbivins, what's the status of the enhancement mentioned at "#1042 (comment)"?
@NdubisiOnuora This has been added to the edge channel. We are also working on some improvements to automatically reclaim space, but I don't have an ETA for those.
Having the same issue on Windows 10. I am using the oracle12c image hosted at https://hub.docker.com/r/sath89/oracle-12c/. Every time I do a commit or start the container again, the Moby VM file keeps increasing in size until it gets above 60GB. After that you cannot commit, save, load, or even start a container.
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with a /lifecycle frozen comment. If this issue is safe to close now please do so. Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/remove-lifecycle stale |
problem is still present on:

```
Client:
 Version:       18.03.1-ce
 API version:   1.37
 Go version:    go1.9.5
 Git commit:    9ee9f40
 Built:         Thu Apr 26 07:12:48 2018
 OS/Arch:       windows/amd64
 Experimental:  false
 Orchestrator:  swarm

Server:
 Engine:
  Version:      18.05.0-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.10.1
  Git commit:   f150324
  Built:        Wed May 9 22:20:42 2018
  OS/Arch:      linux/amd64
  Experimental: false
```
What is the status of stopping the disk leak? This should be a rather high-priority issue given the impact, right?
@mikeparker yes, it looks like I am indeed out of inodes when this issue occurs. I have seen this mentioned in a few of the SO issues I read, but don't know how to convert this into a resolution, or find the reason that we are hitting the inode limit when other developers do not. I have, however, had a suspicion that the way in which we do our hot-reloading volume mounting contributes to this issue (somehow). Our anonymous volumes don't seem to get cleaned up between `docker-compose down`/`up` cycles.
And after another round of `docker-compose up`/`down`, the inode count climbs again. And again, I can lower the inode count with a manual volume cleanup.
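The inode figures being discussed come from `df -i`; a quick check of inode usage on the VM's filesystem, via a throwaway container (assuming a working `docker` CLI), looks like:

```shell
# The IUsed/IFree columns show how close the filesystem is to inode
# exhaustion, which produces the same ENOSPC error as running out of bytes.
docker run --rm alpine df -i /
```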
Thoughts:
For reference, this is the way in which we share our volumes:
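The compose snippet itself was elided above; an anonymous-volume hot-reload setup of the kind being described looks roughly like this (service name, image, and paths are assumptions):

```yaml
# docker-compose.yml (sketch)
version: "3.7"
services:
  app:
    image: node:12
    volumes:
      - .:/app              # bind mount of source for hot reloading
      - /app/node_modules   # anonymous volume: a new one is created on every
                            # recreate and the old one is never cleaned up
```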
Btw, as an aside @cjancsar, you can resize the disk on your Docker host. As I noted above, you can use the Advanced settings in the GUI to increase the disk size. So that "Maximum Disk Size" is actually configurable. You could throw more of your 2TB disk at it.
@cjancsar great detail, thanks. It looks like your up/down loop is creating 7GB of volume data every time instead of sharing the same volume for each loop. This is likely because you're using an anonymous volume instead of naming it, so it's recreated every time and never cleaned up. I agree it seems completely pointless for us to keep the anonymous volume around if it's impossible to access it again. I have raised this with the docker-compose team, so I will keep you updated as to the response. Options:
Even if we raised the inode count you'd still hit the disk space limit before long, so it's not really a fix. If this fixes your issue we need to think about how to provide tools to prevent other users from hitting this, or make it clearer what's going on and what the problem is. I suppose in a good way it's nice to know this isn't a Docker bug, but the UI definitely needs some work, or maybe there is something in docker-compose we can change.
@cjancsar I spoke to the docker-compose team and they said that: So option 3 is to stop using |
@mikeparker yeah, I would recommend maybe throwing a different error when the inode count is hit versus storage space being hit, if that is possible. It would at least create a future separation in the similar issues (between actual disk storage space vs the inode limit being hit). If I knew enough about Docker internals I would try for a PR, but alas... I do not. I do however see some tests on the Docker engine around the concept of the I think the main reason we do the I guess for now, we will continue to see why our inode counts are so high and see if we can do anything to reduce them, as this issue also apparently bleeds into our published images (so if we pull images they also have extremely high @mikeparker thank you for your help, I guess we will continue to try and manually clean our volumes until we can find the root cause of why our
Unfortunately, changing the disk size does not seem to affect the inode count; we had already tried this. It seems to be a hard-coded limit.
@cjancsar the reason your inode count is so high is likely twofold: a) Each time you docker-compose up you end up with 7GB of volume data, probably with thousands of files. Ultimately solving (b) is more important than (a); I personally wouldn't spend time on (a), because (b) will solve your immediate problem and the solutions are quick and simple. There are two basic routes to do this: b1) Reuse the volume from the previous docker-compose up, by naming your volume in your compose file. Both of these solutions seem fairly straightforward, so I'd be interested if these don't solve the problem.
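Option (b1), naming the volume so each `docker-compose up` reuses it instead of minting a new anonymous one, is a small change in the compose file. A sketch (service name and paths are assumptions, mirroring the hot-reload setup described above):

```yaml
# docker-compose.yml (sketch)
version: "3.7"
services:
  app:
    image: node:12
    volumes:
      - .:/app
      - node_modules:/app/node_modules   # named: reused across up/down cycles

volumes:
  node_modules: {}   # declared top-level so compose keeps one stable volume
```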
Thanks @mikeparker, we will trial those suggestions and monitor how they affect performance! I will keep detailed notes in case things don't work out.
For reference / clarity, if you want to dig around in the VM to find out where the inodes and space is used:
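The commands were lost above; a commonly used way to get a shell inside the Docker Desktop VM is a privileged container that enters the host's namespaces (note: `justincormack/nsenter1` is a community image, not an official Docker component):

```shell
# Get a root shell inside the Moby VM
docker run --rm -it --privileged --pid=host justincormack/nsenter1

# Then, inside the VM:
#   df -h                      # where the disk space is going
#   df -i                      # inode usage vs the limit
#   du -sh /var/lib/docker/*   # per-subsystem breakdown (overlay2, volumes, ...)
```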
Hello, what helped me was the following: edit the PowerShell script under C:\Program Files\Docker\Docker\resources\MobyLinux.ps1:
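The edit itself was elided; based on the VhdSize workaround discussed earlier in the thread, the change amounts to raising the default VHD size declared in that script, roughly (variable name and exact form may differ between Docker for Windows versions):

```powershell
# MobyLinux.ps1 (excerpt, sketch): raise the default dynamic VHD size
$global:VhdSize = 100GB   # default was 60GB; takes effect on next VM (re)creation
```

As noted above, a "Reset to Factory Defaults" may be needed before the rebuilt MobyLinuxVM picks up the larger size.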
@andreasdim You can change the VM hard drive size in the settings UI (see #1042 (comment)); you don't need to edit PowerShell (unless you are resetting to factory defaults a lot and want to change the factory default, in which case there is a wider problem!).
@mikeparker Thank you, but I cannot see that option. I'm running Docker on Version
@cjancsar any luck?
@mikeparker I'm also having the problem some other users have described. I've allocated 300GB to the disk image, but when I look at the Docker Desktop VM, only ~60GB is allocated (even after a restart). Our problem is that we actually need to unpack more than 60GB of data inside our build (true, this could be done with a mount at run time, but this is the current setup I have to work with). So it's not an issue of just cleaning up old volumes/containers.
Allocated 300GB
VM has a max of 60GB
Workaround without factory reset
For others, I've found a workaround that doesn't require a factory reset, by manually updating the image size in Hyper-V Manager. I'm not sure if this change will persist through updates though.
@tophers42 there is a bug at the moment in Docker Desktop which prevents it from resizing the VM disk.
Sorry for the inconvenience.
It seems I still experience this issue myself (on v19.03.5 however), but after manually setting my disk to 200GB in Hyper-V, then setting it to 200GB in Docker for Windows AND changing my RAM from 8 to 4GB, Docker just accepts the new size. I was even able to change the RAM back and the disk still remained 200GB. Seems like maybe the disk slider itself is just bugging out?
Hello team, did you find a solution for this? I'm also experiencing the same issue in CI when running Docker, so it's failing every build! It says "no space left on device".
Don't set both the graph parameter in the JSON configuration file and "Disk image location" in Settings
I had the "No space left on device" error for a totally different reason and wanted to share the solution to my specific problem. I wanted to change the location of my docker images, and ended up setting both:
Then, when trying to pull images through docker-compose, I kept having errors. After removing the graph parameter, it worked. I was confused by the fact that on Windows, when using Linux containers, images are not actually on the Windows file-system but in the file-system of the *.vhdx Hard Disk Image File of the Moby Virtual Machine. But apparently setting both parameters provokes weird behavior in Docker.
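For context, the daemon-level setting involved here is the `graph` key in the engine's JSON configuration, a deprecated key later superseded by `data-root`. The conflicting setup looked roughly like this (path is a hypothetical example):

```json
{
  "graph": "D:\\docker-images"
}
```

Setting this in daemon.json while also pointing "Disk image location" elsewhere in Settings gives the engine two competing ideas of where its data lives.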
Regardless of the fact that this might be an issue of Docker itself: A Docker reclaimed nearly 55 GB.
Issues go stale after 90 days of inactivity. Prevent issues from auto-closing with a /lifecycle frozen comment. If this issue is safe to close now please do so. Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
This helped me on Ubuntu 18.04. Run
Then run:
The error for me is
Your solution works for me, thanks! I increased the disk image size limit from 68GB to 144GB, memory from 2GB to 8GB, and swap from 1GB to 4GB.
Closed issues are locked after 30 days of inactivity. If you have found a problem that seems similar to this, please open a new issue. Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
Expected behavior
Be able to use Docker for Windows for more than one week after installation.
Actual behavior
I cannot start/build anything, since I always get the "No space left on device" message. It seems like my MobyLinux disk is full (60/60 GB used).
Information
I have already run
docker system prune -all
and I tried the commands in #600. Now I have no image and no container left. But I still cannot do anything.
Steps to reproduce the behavior
Use Docker on Windows and build a lot of images. It took me less than a week after I installed Docker for Windows. My images can be heavy: between 500MB and 2GB, and I built a lot of them in the last week (maybe 50 to 100). This could match the 60GB of the MobyLinux VM.