
No space left on device #1042

Closed
gaetancollaud opened this issue Aug 25, 2017 · 72 comments

Comments

@gaetancollaud

gaetancollaud commented Aug 25, 2017

Expected behavior

Be able to use Docker for Windows for more than one week after installation.

Actual behavior

I cannot start or build anything since I always get the "No space left on device" message. It seems like my MobyLinux disk is full (60/60 GB used).

Information

  • Diagnostic ID 0DCAC250-7F6C-4EB3-AA58-BD33AD062218/2017-08-25_17-18-28
  • I have the same problem as "No space left on device" #600, but that ticket was closed without any solution, so I opened a new one.
  • I can see the MobyLinux virtual machine in the Hyper-V Manager, but there is no way to connect to it. It's like a black box. If someone has insights on how I can gather more information about it, that would be helpful.

I have already run docker system prune --all and I tried the commands in #600. Now I have no images and no containers left, but I still cannot do anything.

Steps to reproduce the behavior

Use Docker on Windows and build a lot of images. It took me less than a week after installing Docker for Windows to hit this. My images can be heavy (between 500 MB and 2 GB) and I built a lot of them over the last week (maybe 50 to 100). This could match the 60 GB of the MobyLinux VM.

@friism

friism commented Aug 25, 2017

I agree that this is not a satisfactory situation - out of curiosity, if you do a factory reset, do you get back to a working system?

@gaetancollaud
Author

gaetancollaud commented Aug 28, 2017

Thanks @friism for the workaround. It worked! I just lost my settings, but that was expected and it didn't take me long to configure them again.

I saved the Hyper-V disk on an external drive in case someone is interested in investigating further (I can send it via Dropbox or similar).

@hinell

hinell commented Aug 28, 2017

Next time run docker image prune -f to remove unused images.
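
A minimal sketch of the prune variants (standard docker CLI flags; the -a form is more aggressive):

# remove dangling images only, skipping the confirmation prompt
docker image prune -f
# remove all images not referenced by at least one container
docker image prune -a -f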

@gaetancollaud
Author

@hinell As said in my ticket description, I already pruned everything. The -f argument only skips the confirmation prompt.

@hinell

hinell commented Aug 29, 2017

@gaetancollaud Oh, yep, sorry, I just missed that part. Seems like a disk space leak.

@vmarinelli

I am encountering this limitation as well. For me the issue is that we are working with large Oracle images populated with test data. Why is it not possible to extend the disk on the MobyLinuxVM? If there's no UI for it, at least give us the ability to access the VM to extend the disk!

@hinell

hinell commented Aug 31, 2017

@vmarinelli I agree. The inability to access the native Moby VM is frustrating, so in some cases I'm forced to use a stand-alone virtual machine.

@jasonbivins

Hi @vmarinelli @hinell
We do have an open enhancement request around the hard-coded MobyLinuxVM size, but I don't have an ETA on when that will make it onto the roadmap. There are a few options we're looking at.
For the time being, we do have a manual way to extend the MobyLinuxVM. Manual changes made to these scripts will be overwritten during upgrades or reinstalls though, so please keep that in mind.

  1. Stop Docker.
  2. In Windows Explorer, go to C:\Program Files\Docker\Docker\Resources and edit MobyLinux.ps1 (it's in Program Files, so this will require admin privileges).
  3. Find line 86, $global:VhdSize = 60*1024*1024*1024, and change the 60 to the number of GB you want allocated to the drive.
  4. Restart Docker.
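
For illustration, the edited line for an 80 GB disk would look like this (80 is an arbitrary example value, not a recommendation):

# MobyLinux.ps1, line 86: size of the MobyLinuxVM virtual disk, in bytes
$global:VhdSize = 80*1024*1024*1024  # 80 GB instead of the default 60 GB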

@gaetancollaud
Author

@jasonbivins Thanks for the tip.

But it seems like there is a disk space leak somewhere. Increasing the disk size is of limited use, because we will reach the physical disk's maximum size at some point.

Are you interested in having the Hyper-V disk that I saved just before my factory reset?

@Jamby93

Jamby93 commented Sep 1, 2017

Even after editing MobyLinux.ps1 with a VhdSize of 100 GB and restarting Docker, nothing changed.
Inspecting the VHD from Hyper-V correctly says "Current File Size 60GB" and "Maximum Disk Size 100GB", but any Docker container reports 100% disk usage with a size of 55G and isn't able to create any file.
I have also pruned about 2 GB of containers, but the space left is always the same.
I don't want to reset to factory defaults nor lose my downloaded images. How could anyone use Docker for Windows in production if you need to reset it from time to time to restore leaked space?

@vmarinelli

vmarinelli commented Sep 1, 2017

@jasonbivins Thanks very much for that workaround. I was able to get the disk size increased, but I had to add one additional step to your instructions to make it work. After restarting Docker, I had to click "Reset to Factory Defaults". Then when the MobyLinuxVM was rebuilt, it came up with the larger volume.
[screenshot]

I made a post to the Forums to document this workaround and linked it back to this issue.

@gaetancollaud While I have encountered the "Out of space on device" error repeatedly while trying to restore large Oracle containers to my D4Win install, I haven't encountered the issue of not being able to free up space even after deleting images and containers and running prune. So for me, the workaround provided addresses my issue. I will keep an eye out for this now that I am able to work, and will post here if I encounter the same behavior.

@hinell

hinell commented Sep 2, 2017

@Jamby93 TIP: use docker save > foo.img and docker load < foo.img to export and import your images respectively, if you don't want to download them again after resetting.
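
A minimal sketch of that workflow (image and file names are illustrative):

# before the reset: export an image to a tar archive
docker save -o myimage.tar myimage:latest
# after the reset: load it back into the local image store
docker load -i myimage.tar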

@Jamby93

Jamby93 commented Sep 2, 2017

@hinell Yes, and also back up modified containers, volumes, registry and swarm configuration, networks and so on. That's simply unaffordable in a production environment (think of a CD infrastructure that builds tons of images a day). All for a bug that simply makes no sense at all. So far nobody has really proposed a reason why this bug happens; everyone simply works around it and forgets it. I prefer to understand why an issue is occurring instead of only fixing short-term, specific problems that will come back again one day or another. I'm willing to help gather information if that could help address the issue.

@synergiator

synergiator commented Sep 5, 2017

I have this as well with larger image content files.

Here is an easy way to retest:

A bash script to create 2x8GB files

#!/bin/bash
dd if=/dev/zero of=file1.dat  bs=1M  count=8000
dd if=/dev/zero of=file2.dat  bs=1M  count=8000

Dockerfile:

FROM ubuntu:latest
WORKDIR /tmp
ADD . /tmp

Try to build image:
docker build . -t foo

Output:

Sending build context to Docker daemon  16.78GB
Error response from daemon: Error processing tar file(exit status 1): write /file2.dat: no space left on device

Analysis

  • Why the reference to a tar file?! In this setup, there are no tar files.
  • The prune command did not help: it reclaimed only 320 MB of space, and the error is still there.
  • In the logs (service.txt and the standard one you get via the systray) there is nothing suspicious.
  • In MobyLinux.ps1 the disk size is set to 120 GB. Probably I need to reset Docker to factory settings as described above.
  • UPDATE: I have reset Docker and the above setup now works.
    Diagnostic ID: 99071758-71EE-4434-938A-45DEF36AD8C5/2017-09-05_07-54-59

Docker version:
Client:
Version: 17.06.1-ce
API version: 1.30
Go version: go1.8.3
Git commit: 874a737
Built: Thu Aug 17 22
OS/Arch: windows/amd64

Server:
Version: 17.06.1-ce
API version: 1.30 (minimum
Go version: go1.8.3
Git commit: 874a737
Built: Thu Aug 17 22
OS/Arch: linux/amd64
Experimental: true

Docker info:

Containers: 267
Running: 0
Paused: 0
Stopped: 267
Images: 150
Server Version: 17.06.1-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.41-moby
Operating System: Alpine Linux v3.5
OSType: linux
Architecture: x86_64
CPUs: 5
Total Memory: 22.99GiB
Name: moby
ID: MCC6:TLLN:GJGF:FFS7:XBYR:2JPF:OYK3:4CWU:ZHWB:HDQY:S3VP:TCZ2
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 15
Goroutines: 26
System Time: 2017-09-05T06:05:02.8221668Z
EventsListeners: 0
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

@cybertk

cybertk commented Nov 27, 2017

I managed to increase the disk size in the Hyper-V GUI.

@e-rol

e-rol commented Feb 1, 2018

@cybertk That did not work properly for me. Sure, the MobyLinuxVM reported a larger "max disk size" in Hyper-V Manager afterwards, but the overlay inside containers created afterwards was still 60 GB max:

[root@db2server /]# df
Filesystem     1K-blocks     Used Available Use% Mounted on
overlay         61664044  3899176  54602808   7% /

It wasn't until I modified MobyLinux.ps1 and did a "reset to factory default" as described by @jasonbivins and @vmarinelli that the overlay was increased.

@NdubisiOnuora

@jasonbivins, what's the status of the enhancement mentioned at "#1042 (comment)"?

@jasonbivins

@NdubisiOnuora This has been added to the edge channel. We are also working on some improvements to automatically reclaim space, but I don't have an ETA for those.

[screenshot]

@imranrajjad

Having the same issue on Windows 10. I am using the oracle12c image hosted at https://hub.docker.com/r/sath89/oracle-12c/

Every time I do a commit or start the container again, the MobyVM file keeps increasing in size until it goes above 60 GB. After that you cannot commit, save, load or even start a container.

@docker-robott
Collaborator

Issues go stale after 90d of inactivity.
Mark the issue as fresh with a /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30d of inactivity.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle stale

@pavel-agarkov

/remove-lifecycle stale

@pavel-agarkov

The problem is still present on:

Client:
 Version:      18.03.1-ce
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   9ee9f40
 Built:        Thu Apr 26 07:12:48 2018
 OS/Arch:      windows/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.05.0-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.10.1
  Git commit:   f150324
  Built:        Wed May  9 22:20:42 2018
  OS/Arch:      linux/amd64
  Experimental: false

@rogierschouten

What is the status of stopping the disk leak? This should be a rather high-priority issue given the impact, right?

@cjancsar

@mikeparker yes, it looks like I am indeed out of inodes when this issue occurs. I have seen this mentioned in a few of the SO issues I read, but I don't know how to convert this into a resolution, or find the reason that we are hitting the inode limit when other developers do not:
[screenshot]

I have, however, had a suspicion that the way in which we do our hot-reloading volume mounting contributes to this issue (somehow). Our anonymous volumes don't seem to get cleaned up between up/down cycles, but I don't have the expertise to dig deeper into this. In fact, when I did a docker volume prune, the inode count returned to 35% capacity and I was able to re-up containers successfully.

After the docker volume prune operation: [screenshot]

Then, after re-up of containers: [screenshot]

down command again: [screenshot]

up command again: [screenshot]

down command again: [screenshot]

up command again (hit inode limit): [screenshot]

And another round of down / up, and we encounter the error: [screenshot]

And again, the inode count can be lowered with docker volume prune: [screenshot]

When I disable our volume mounting strategy, the inode capacity stays consistently at 35%, we do not get dangling anonymous volumes after each up/down cycle, and consequently we do not encounter the no space left on device error. However, we need to be able to do hot reloading of our packages so that we can develop in the multi-service environment and watch our changes.

Thoughts:

  • How can I profile specifically what is consuming the inode count within these anonymous volumes? (Note: these are NodeJS services, so I anticipate that the node_modules folders are the culprit, and again, that would be necessary for the watch/build hot-reloading cycle.)
  • Can we increase the number of inodes available? I saw a note from justincormack (from Docker) about this, but again, it would be more of a band-aid and wouldn't solve the root issue: [screenshot]
  • Is there a different hot-reloading strategy that we should be following?
  • I have read through the docker and docker-compose CLI commands, but is there a way to ensure that dangling anonymous volumes are garbage-collected when the containers they are attached to are downed? What is the point in retaining them? Can they somehow be re-attached when you up the same containers?

As reference, this is the way in which we share our volumes:

  some-service:
    container_name: some-service
    ...... other stuff
    volumes:
      - .:/app
      - /app/node_modules

@minusdavid

Btw as an aside @cjancsar you can re-size the disk on your Docker host. As I noted above, you can use the Advanced settings in the GUI to increase the disk size. So that "Maximum Disk Size" is actually configurable. You could throw more of your 2TB disk at it.

@mikeparker
Copy link
Contributor

mikeparker commented Jan 24, 2020

@cjancsar great detail, thanks.

It looks like your up/down loop is creating 7GB of volume data every time instead of sharing the same volume for each loop. This is likely because you're using an anonymous volume instead of naming it, so it's recreated every time and never cleaned up.

I agree it seems completely pointless for us to keep the anonymous volume around if it's impossible to access it again. I have raised this with the docker-compose team, so I will keep you updated as to the response.

Options:

  1. Name the volume and reuse the same volume every time you do up/down
  2. Remove the volume manually (docker-compose rm .. -v, see https://docs.docker.com/compose/reference/rm/)
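
For option 1, a minimal compose sketch based on the layout quoted above (the volume name node_modules is illustrative):

  some-service:
    container_name: some-service
    volumes:
      - .:/app
      # named instead of anonymous, so every up/down cycle reuses it
      - node_modules:/app/node_modules

# declare the named volume once at the top level of the compose file
volumes:
  node_modules: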

Even if we raised the inode count you'd still hit the disk space limit before long, so it's not really a fix. If this fixes your issue, we need to think about how to provide tools to prevent other users from hitting this, or make it clearer what's going on and what the problem is.

I suppose in a good way it's nice to know this isn't a Docker bug, but the UI definitely needs some work, or maybe there is something in docker-compose we can change.

@mikeparker
Contributor

mikeparker commented Jan 24, 2020

@cjancsar I spoke to the docker-compose team and they said that:
a) anonymous volumes are reused if you do docker-compose up again without running docker-compose down.
b) You can use docker-compose down -v to remove the volumes

So option 3 is to stop using down and simply re-run up.
Option 4 is to add -v to your down command.
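
A quick sketch of those two options:

# option 3: skip down entirely; re-running up reuses the anonymous volumes
docker-compose up -d
# option 4: when a teardown is needed, remove the volumes along with it
docker-compose down -v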

@cjancsar

@mikeparker yeah, I would recommend maybe throwing a different error when the inode count is hit versus when the storage space limit is hit, if that is possible. It would at least create a separation between the similar issues (actual disk storage space vs. the inode limit being hit). If I knew enough about Docker internals I would try for a PR, but alas... I do not. I do, however, see some tests on the Docker engine around the concept of the No space left on device error.

I think the main reason we do the up/down cycle is so we can rebuild the dependencies that live in the Docker container (when adding a new dependency or when switching branches). Since our node_modules are not shared between host and container (Windows to Linux), the only way to 'refresh' the dependencies (hit the install stage of the Dockerfile) is to bring the container down and rebuild it. If we just do an up again, the re-install stage is not run. We could manually connect to each container and re-install dependencies, but generally we have just tried to use the docker-compose CLI to manage the system. I think this is just something wrong we are doing with our workflow.

I guess for now we will continue to investigate why our inode counts are so high and see if we can do anything to reduce them, as this issue apparently also bleeds into our published images (so if we pull those images, they also have extremely high inode counts).

@mikeparker thank you for your help. I guess we will continue to manually clean our volumes until we can find the root cause of why our inode count is high.

@minusdavid

Btw as an aside @cjancsar you can re-size the disk on your Docker host. As I noted above, you can use the Advanced settings in the GUI to increase the disk size. So that "Maximum Disk Size" is actually configurable. You could throw more of your 2TB disk at it.

Unfortunately, changing the disk size does not seem to affect the inode count; we had already tried this. It seems to be a hard-coded limit.

@mikeparker
Contributor

@cjancsar the reason your inode count is so high is likely twofold:

a) Each time you docker-compose up you end up with 7GB of volume data, probably with thousands of files
b) You are recreating this volume from scratch every time and not deleting the old one or reusing it.

Ultimately solving (b) is more important than (a); I personally wouldn't spend time on (a), because (b) will solve your immediate problem and the solutions are quick and simple. There are two basic routes to do this:

b1) Reuse the volume from the previous docker-compose up, by naming your volume in your compose file
b2) If you need 7GB of fresh data every time, delete the old one. Simply add -v to the docker-compose down command.

Both of these solutions seem fairly straightforward so I'd be interested if these don't solve the problem.

@cjancsar

Thanks @mikeparker, we will trial those suggestions and monitor how it affects performance! I will keep detailed notes in case things don't work out.

@mikeparker
Contributor

mikeparker commented Jan 24, 2020

For reference / clarity, if you want to dig around in the VM to find out where the inodes and space are used:

  1. Open a terminal inside the linux vm: docker run -it --privileged --pid=host justincormack/nsenter1
  2. Use df -i <path> to see the inode count or df -i to see overall (df = disk filesystem, i = inodes)
  3. Use du -hs <path> to see the disk usage (du = disk usage, -hs = human readable, summary)
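
For example, a session might look like this (assuming the Docker root dir is /var/lib/docker, as reported by docker info above):

# open a shell inside the Linux VM
docker run -it --privileged --pid=host justincormack/nsenter1
# then, inside the VM:
df -i /var/lib/docker            # inode usage of the filesystem backing Docker's data
du -hs /var/lib/docker/volumes   # disk usage of the volumes directory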

@andreasdim

andreasdim commented Jan 29, 2020

Hello,

What helped me was the following: edit the PowerShell script under C:\Program Files\Docker\Docker\resources\MobyLinux.ps1.
Line 86: Change
$global:VhdSize = 60*1024*1024*1024 # 60GB
to whatever you want. In my case:
$global:VhdSize = 120*1024*1024*1024 # 120GB

Then reset Docker to factory defaults: [screenshot]

For me that worked: [screenshot]

@mikeparker
Contributor

mikeparker commented Jan 29, 2020

@andreasdim You can change the VM hard drive size in the settings UI (see #1042 (comment)), you don't need to edit the PowerShell script (unless you are resetting to factory defaults a lot and want to change the factory default, in which case there is a wider problem!).

@andreasdim

@mikeparker Thank you, but I cannot see that option. I'm running Docker version: [screenshot]

@mikeparker
Contributor

@cjancsar any luck?

@tophers42

tophers42 commented Mar 10, 2020

@mikeparker I'm also having the problem some other users have described. I've allocated 300 GB to the disk image, but when I look at the Docker Desktop VM, only ~60 GB is allocated (even after a restart). Our problem is that we actually need to unpack more than 60 GB of data inside our build (true, this could be done with a mount at run time, but this is the current setup I have to work with). So it's not an issue of just cleaning up old volumes/containers.

Allocated 300 GB: [screenshot]

VM has a max of 60 GB: [screenshot]

Workaround without factory reset

For others, I've found a workaround that doesn't require a factory reset by manually updating the image size in Hyper-V Manager. I'm not sure if this change will persist through updates though.

[screenshots: editing the disk size in Hyper-V Manager]

Docker version: [screenshot]

@mat007
Member

mat007 commented Mar 14, 2020

@tophers42 there is a bug at the moment in Docker Desktop which makes it not resize the VM disk.
To work around this you need to resize it manually, e.g. from an admin powershell:

Resize-VHD -Path 'C:\ProgramData\DockerDesktop\vm-data\DockerDesktop.vhdx' -SizeBytes 300gb
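
To check the current and maximum size first, the same Hyper-V PowerShell module offers Get-VHD (a sketch, same path as above):

# FileSize = current size on disk, Size = maximum the VHD can grow to
Get-VHD -Path 'C:\ProgramData\DockerDesktop\vm-data\DockerDesktop.vhdx' | Select-Object Path, FileSize, Size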

Sorry for the inconvenience.

@tophers42

Thanks @mat007, I also noticed it's marked as a known issue in the latest release notes. Is this the issue to track? #4725

@Xortrox

Xortrox commented Apr 20, 2020

It seems I still experience this issue myself (on v19.03.5, however), but after manually setting my disk to 200 GB in Hyper-V, then setting it to 200 GB in Docker for Windows AND changing my RAM from 8 GB to 4 GB, Docker just accepted the new size. I was even able to change the RAM back again and the disk still remained 200 GB.

Seems like maybe the disk slider itself is just bugging out?

@AnushaErrabelli

Hello team, did you find a solution for this? I'm also experiencing the same issue in CI when running Docker, so every build is failing with "no space left on device".

@tnodet

tnodet commented May 18, 2020

Don't set both the graph parameter in the JSON configuration file and the Disk image location in Settings

Windows 10, 1809, 17763.1158 - Docker 2.3.0.2 (45183) - Linux containers

I had the "No space left on device" error for a totally different reason and wanted to share the solution to my specific problem.

I wanted to change the location of my docker images, and ended up setting both:

  • "graph": "/D/path/to/docker/images in the engine's JSON configuration file (Settings → Docker Engine)
  • D:\path\to\docker\DockerDesktop as Disk image location (Settings → Resources)
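
For reference, the conflicting entry looked like this in the engine's JSON configuration (path illustrative; graph is the legacy name for what is now data-root):

{
  "graph": "/D/path/to/docker/images"
}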

Then, when trying to pull images through docker-compose, I kept having errors:
ERROR: for image_name write /D/path/to/docker/images/tmp/GetImageBlob<uid>: no space left on device.

After removing the "graph" parameter (keeping only the Disk image location), I could pull images normally.

I was confused by the fact that on Windows, when using Linux containers, images are not actually on the Windows file-system but in the file-system of the *.vhdx hard disk image file of the Moby virtual machine. But apparently setting both parameters provokes weird behavior in Docker.

@byt3pool

byt3pool commented May 26, 2020

Regardless of the fact that this might be an issue of docker itself:

A docker system prune -a followed by docker volume prune did the trick for me. At least for now.
(as mentioned by @cjancsar on 23 Jan)

Docker reclaimed nearly 55 GB.

@docker-robott
Collaborator

Issues go stale after 90 days of inactivity.
Mark the issue as fresh with a /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30 days of inactivity.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle stale

@kwabena53

(quoting the original issue description from @gaetancollaud above)

This helped me, on Ubuntu 18.04:
docker system prune --all --force

@kwabena53

kwabena53 commented Oct 19, 2020

Run docker system df to see what is taking up space in your Docker installation.

Then run:

docker system prune --all --force

to remove all unused containers and images. Note that a plain docker system prune does not remove all unused images, only dangling ones.

@Bill0412

Bill0412 commented Nov 6, 2020

@NdubisiOnuora This has been added to the edge channel. We are also working on some improvements to automatically reclaim space, but I don't have an ETA for those.

[screenshot]

The error for me is

d532e87af17e: Loading layer  18.73GB/19.53GB
Error processing tar file(exit status 1): write /swapfile: no space left on device

Your solution works for me, thanks!

I increased the disk image size limit from 68 GB to 144 GB, memory from 2 GB to 8 GB, and swap from 1 GB to 4 GB.

[screenshot]

@docker-robott
Collaborator

Closed issues are locked after 30 days of inactivity.
This helps our team focus on active issues.

If you have found a problem that seems similar to this, please open a new issue.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle locked

@docker docker locked and limited conversation to collaborators Dec 6, 2020