Investigate if we can get more disk space on VMs #211
How to use this for our runners?

Option A: Allocate one big storage volume (1 TB) and mount it to each of our runners. Unfortunately, the docker cache cannot be shared, so each VM would use a different folder as its docker cache (see the sketch below).

Option B: Allocate and mount additional storage for each VM individually.

Option A seems simpler to manage, as we only have one storage volume. Also, less space is wasted if we pool storage like this.
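A rough sketch of what Option A could look like, with each VM keeping its own docker data root on the shared volume, keyed by hostname. The /mnt/shared mount point and directory layout here are assumptions, not anything decided:

```python
# Hypothetical sketch for Option A: give each VM its own Docker data root
# on the shared volume, keyed by hostname. Paths here are assumptions.
import json
import socket
from pathlib import Path

SHARED_MOUNT = Path("/mnt/shared")  # assumed mount point of the shared 1 TB volume
data_root = SHARED_MOUNT / socket.gethostname() / "docker"
data_root.mkdir(parents=True, exist_ok=True)

# "data-root" is Docker's documented daemon.json option for relocating
# /var/lib/docker. Writing this file requires root.
daemon_config = Path("/etc/docker/daemon.json")
config = json.loads(daemon_config.read_text()) if daemon_config.exists() else {}
config["data-root"] = str(data_root)
daemon_config.write_text(json.dumps(config, indent=2))
# dockerd must be restarted (e.g. systemctl restart docker) to pick this up.
```

Each VM then writes only under its own hostname directory, so there is a single volume to manage but no shared docker state.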
Looks like the mounted volume doesn't support extended attributes and can't be used for the docker cache.
As part of #211 I tried setting up additional storage for the VMs and using it as the docker cache. The mounted volume doesn't support extended attributes.
WekaFS doesn't function well as a docker cache due to its lack of xattr support.
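For anyone reproducing this, here is a quick way to check whether a mount supports extended attributes at all; a minimal sketch assuming a Linux host, probing with a user-namespace xattr (the mount path is a placeholder):

```python
# Probe a mount for extended-attribute support. Docker's overlay2 storage
# driver depends on xattrs, so a failure here explains the cache problem.
import os
import tempfile

def supports_xattr(mount_point: str) -> bool:
    # Create a scratch file on the target mount and try to set a user xattr.
    with tempfile.NamedTemporaryFile(dir=mount_point) as f:
        try:
            os.setxattr(f.name, "user.xattr-probe", b"1")
            return True
        except OSError:
            # Typically ENOTSUP on filesystems without xattr support.
            return False

print(supports_xattr("/mnt/shared"))  # expected: False on the WekaFS mount
```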
Is it possible to get cloud VMs with more local storage? The ones we are currently using have 100 GB (70 GB free after setup). We are using docker images that are quite large (20 GB+), and the docker cache can fill that up fast.
@teijo
You can get more storage by creating a new volume under "Storage" (if you don't see that tab, it needs to be enabled for each team separately). Then, when you create a VM, you can pick the volume to be mounted when the VM starts (it'll go under /mnt/<volume-name>). You could point your cache to that mount, or try to symlink to it.
You could start with e.g. a 1 TB disk. There is no resize option in the UI yet, but we can enlarge the volume under the hood if needed.
The mount can be attached to multiple VMs at a time, but be aware that if your systems try to write to the same file, things will likely get corrupted, so you need to be mindful of how you organize the data if you're looking to share the mount. You could e.g. use /mnt/<volume>/<hostname>/docker-cache/* to have one volume but dodge conflicts between writing hosts (sketched below).
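To illustrate that per-host layout (the volume name and local cache path below are made up for the example):

```python
# Hedged sketch: one shared volume, one subdirectory per writing host,
# with the local cache path replaced by a symlink into the volume.
import socket
from pathlib import Path

volume = Path("/mnt/ci-volume")  # assumed mount point of the shared volume
host_dir = volume / socket.gethostname() / "docker-cache"
host_dir.mkdir(parents=True, exist_ok=True)

local_cache = Path("/var/cache/ci")  # hypothetical local cache location
if not local_cache.exists() and not local_cache.is_symlink():
    # Point the local path at this host's private directory on the volume;
    # other hosts write under their own hostnames, so writes never collide.
    local_cache.symlink_to(host_dir)
```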
If possible, I'd wait a few more days before committing to the above approach. I'm hoping to wrap up this ticket, which should improve the experience of dealing with the mounts if you ever need to restart the VM. The improvement would be available in VMs started after the change gets to production (i.e. not retroactively for old VMs).