
[Feature] support mapped btrfs devices on Linux #48

Closed · wants to merge 6 commits

Conversation

@jaredallard commented May 15, 2019

What this PR does: Adds support for users running btrfs on LUKS- or LVM-mapped devices on Linux. This is related to k3s-io/k3s#471.

Notes for my reviewer: This might not be the best way to do this, since it will likely fail on pure Windows machines in the future (one day). I've tested this on machines with LUKS+LVM and on machines without it.
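For context, the gist of the change is small enough to sketch. The Go snippet below is illustrative only -- mapperBindMount is a made-up name, not the PR's actual code -- and assumes the simplest form of the idea: add the bind mount only when /dev/mapper exists on a Linux host.

package main

import (
	"fmt"
	"os"
	"runtime"
)

// mapperBindMount returns the extra volume argument for /dev/mapper, or an
// empty string when the host has no device-mapper directory.
func mapperBindMount() string {
	if runtime.GOOS != "linux" {
		return "" // e.g. pure Windows hosts have no /dev/mapper
	}
	if _, err := os.Stat("/dev/mapper"); err != nil {
		return ""
	}
	return "/dev/mapper:/dev/mapper"
}

func main() {
	if m := mapperBindMount(); m != "" {
		fmt.Println("adding bind mount:", m)
	}
}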

@jaredallard (Author)

Let me know if #46 will land first, and I will rebase this to use that.

@iwilltry42 added the enhancement (New feature or request) label May 15, 2019
cli/container.go — review comments (outdated, resolved)
@iwilltry42 (Member) left a comment

Very nice feature that a bunch of people will certainly need, thanks for that 👍
Just added some minor comments.

@iwilltry42 (Member)

Hey there, PR #46 just got merged 👍

cli/commands.go — review comment (outdated, resolved)
@andyz-dev (Contributor)

First of all, thank you so much @jaredallard for sharing your findings, filing a nice bug report, and working on the PR. I am sure the facts collected and experiences described here will help many k3s / k3d users.

To be honest, I am ambivalent about this patch. On one hand, it is very helpful to folks who need /dev/mapper access for their cluster to work -- the error message they get otherwise is not very helpful in pointing them toward the root cause or a solution. Having k3d build it in helps the out-of-box experience.

On the other hand, it feels like k3d would be doing too much. It is supposed to be a thin layer over k3s, and we already have the -v option that can be applied when needed. Another concern I have is that this option is always added whether the cluster needs it or not -- this can be a security concern, since we are unnecessarily increasing the attack surface, and there is no way to turn it off. The third concern is that the option gets added behind the user's back -- it can be a surprise to many users unless they know k3d very well.

Maybe a reasonable path forward is to document this, say in a k3d FAQ?

Another possible option is to add a "--helpful" option to k3d, telling the user that k3d will automatically add a few options for them; this mapping could be part of the "--helpful" option. That leaves default k3d bare-bones.

If we still want to go ahead with this patch, I'd suggest we add an option to not apply this mapping.

WDYT?

@iwilltry42 (Member)

I already added a FAQ / Nice to know section to the README now 👍

@iwilltry42 (Member)

On a side-note: I'm using LUKS in LVM as well and I didn't have any problems with k3d so far 🤔

@jaredallard (Author) commented May 16, 2019

On a side-note: I'm using LUKS in LVM as well and I didn't have any problems with k3d so far

I think the problem actually lies with how /proc/self/mountinfo is presented in some cases: the Source attribute points to a device that doesn't exist in the container, which causes google/cadvisor/fs/fs.go to disregard that mount on start. The partition map then doesn't contain the root filesystem, which causes k8s to die. It may not be specific to LUKS+LVM but rather to my setup, and potentially others.

EDIT: I'd be very interested to see what the output of cat /proc/self/mountinfo is from your machine inside the created k3s container.
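A minimal sketch of the failure mode described above (my own illustration in Go, not cadvisor's actual code): parse /proc/self/mountinfo, read the source field after the " - " separator, and note that a /dev/mapper source which doesn't exist inside the container cannot be resolved to a device -- which is roughly why the mount gets disregarded.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/self/mountinfo")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Everything after " - " is: fstype source superopts.
		parts := strings.SplitN(sc.Text(), " - ", 2)
		if len(parts) != 2 {
			continue
		}
		fields := strings.Fields(parts[1])
		if len(fields) < 2 {
			continue
		}
		source := fields[1]
		if strings.HasPrefix(source, "/dev/mapper/") {
			if _, err := os.Stat(source); err != nil {
				// The source device is not visible in this mount namespace.
				fmt.Printf("source %s missing in this namespace -> mount would be disregarded\n", source)
			}
		}
	}
}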

@jaredallard (Author)

@andyz-dev I think it should be included by default, but perhaps we should consider further how that volume is presented in the container, to reduce the attack surface. If the outcome is a --helpful flag or a flag to disable it, then for people who run into this issue in automated setups, or setups where a ton of people with different machines use it (i.e. developer environments), the default usage will be to always include that flag anyway.

I can set -v to fix this in my codebase, but I think that might cause more issues, as I will then be required to add logic on my side to include it all the time just to solve the basic use case of running k3d.

But, again, I can see the argument for withholding it as well, so I think I'm OK with whatever is agreed upon by everyone here.

@iwilltry42 (Member)

@jaredallard there you go:

$ docker exec -it k3d-k3s-default-server sh
/ # cat /proc/self/mountinfo 
2061 1941 0:88 / / rw,relatime master:983 - overlay overlay rw,lowerdir=/var/lib/docker/overlay2/l/WRHFV73CMG2Y2D5GJLSF4WHPCE:/var/lib/docker/overlay2/l/3ZY5NG6BSIH6OPWVHSJRDUCFXP:/var/lib/docker/overlay2/l/Y3JO3VSXOX25MAWU4LQPWEI7NS,upperdir=/var/lib/docker/overlay2/b74cc788bcc107a875d6f9e7060686f7f9efbc1b75343da1207e261d72ddfb5a/diff,workdir=/var/lib/docker/overlay2/b74cc788bcc107a875d6f9e7060686f7f9efbc1b75343da1207e261d72ddfb5a/work,xino=off
2062 2061 0:91 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw
2063 2061 0:92 / /dev rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755
2064 2063 0:93 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666
2065 2061 0:94 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw
2066 2065 0:95 / /sys/fs/cgroup rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,mode=755
2067 2066 0:30 /docker/1b298f11af8dc2bb90831ad9250f10bad042b13f8b90f7906c8fb073b857517b /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime master:11 - cgroup cgroup rw,xattr,name=systemd
2068 2066 0:34 /docker/1b298f11af8dc2bb90831ad9250f10bad042b13f8b90f7906c8fb073b857517b /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime master:16 - cgroup cgroup rw,cpu,cpuacct
2069 2066 0:35 /docker/1b298f11af8dc2bb90831ad9250f10bad042b13f8b90f7906c8fb073b857517b /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime master:17 - cgroup cgroup rw,blkio
2070 2066 0:36 /docker/1b298f11af8dc2bb90831ad9250f10bad042b13f8b90f7906c8fb073b857517b /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime master:18 - cgroup cgroup rw,devices
2071 2066 0:37 /docker/1b298f11af8dc2bb90831ad9250f10bad042b13f8b90f7906c8fb073b857517b /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime master:19 - cgroup cgroup rw,hugetlb
2072 2066 0:38 /docker/1b298f11af8dc2bb90831ad9250f10bad042b13f8b90f7906c8fb073b857517b /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime master:20 - cgroup cgroup rw,net_cls,net_prio
2073 2066 0:39 /docker/1b298f11af8dc2bb90831ad9250f10bad042b13f8b90f7906c8fb073b857517b /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime master:21 - cgroup cgroup rw,cpuset
2074 2066 0:40 /docker/1b298f11af8dc2bb90831ad9250f10bad042b13f8b90f7906c8fb073b857517b /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime master:22 - cgroup cgroup rw,perf_event
2075 2066 0:41 /docker/1b298f11af8dc2bb90831ad9250f10bad042b13f8b90f7906c8fb073b857517b /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime master:23 - cgroup cgroup rw,freezer
2076 2066 0:42 /docker/1b298f11af8dc2bb90831ad9250f10bad042b13f8b90f7906c8fb073b857517b /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime master:24 - cgroup cgroup rw,memory
2077 2066 0:43 / /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime master:25 - cgroup cgroup rw,rdma
2078 2066 0:44 /docker/1b298f11af8dc2bb90831ad9250f10bad042b13f8b90f7906c8fb073b857517b /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime master:26 - cgroup cgroup rw,pids
2079 2063 0:90 / /dev/mqueue rw,nosuid,nodev,noexec,relatime - mqueue mqueue rw
2080 2061 253:1 /var/lib/docker/volumes/a416a98098c110a9e0acd82961613949d27e5887b5479ccf28e00073d3d3d6b9/_data /var/log rw,relatime master:1 - ext4 /dev/mapper/ubuntu--budgie--vg-root rw,errors=remount-ro
2081 2061 253:1 /var/lib/docker/containers/1b298f11af8dc2bb90831ad9250f10bad042b13f8b90f7906c8fb073b857517b/resolv.conf /etc/resolv.conf rw,relatime - ext4 /dev/mapper/ubuntu--budgie--vg-root rw,errors=remount-ro
2082 2061 253:1 /var/lib/docker/containers/1b298f11af8dc2bb90831ad9250f10bad042b13f8b90f7906c8fb073b857517b/hostname /etc/hostname rw,relatime - ext4 /dev/mapper/ubuntu--budgie--vg-root rw,errors=remount-ro
2083 2061 253:1 /var/lib/docker/containers/1b298f11af8dc2bb90831ad9250f10bad042b13f8b90f7906c8fb073b857517b/hosts /etc/hosts rw,relatime - ext4 /dev/mapper/ubuntu--budgie--vg-root rw,errors=remount-ro
2084 2063 0:89 / /dev/shm rw,nosuid,nodev,noexec,relatime - tmpfs shm rw,size=65536k
2085 2061 253:1 /var/lib/docker/volumes/82fa3b45ee6350da8c45a51bf0e1e0efa1dd1cd448b7cc8f6aab8d6414fb107b/_data /var/lib/cni rw,relatime master:1 - ext4 /dev/mapper/ubuntu--budgie--vg-root rw,errors=remount-ro
2086 2061 253:1 /var/lib/docker/volumes/f8e8ecbf2a0225ede67c7d2c66be3a056fa240ed87827e188885952d25e62e9c/_data /var/lib/rancher/k3s rw,relatime master:1 - ext4 /dev/mapper/ubuntu--budgie--vg-root rw,errors=remount-ro
1940 2086 253:1 /var/lib/docker/volumes/f8e8ecbf2a0225ede67c7d2c66be3a056fa240ed87827e188885952d25e62e9c/_data/agent/kubelet /var/lib/rancher/k3s/agent/kubelet rw,relatime shared:1046 master:1 - ext4 /dev/mapper/ubuntu--budgie--vg-root rw,errors=remount-ro

@iwilltry42 (Member)

I think we should only include the option by default once we've figured out what exactly the issue is, and only if we can verify that the executing system has this problem should we adjust for it in k3d. WDYT?

@jaredallard (Author) commented May 16, 2019

Interesting:

2087 2061 253:1 /var/lib/docker/volumes/cd9643d8197c128c76aaaa0af4b2468badfaea0537aad5a36ce7af3c4b987e86/_data /var/lib/rancher/k3s rw,relatime master:1 - ext4 /dev/mapper/ubuntu--vg-root rw,errors=remount-ro
2488 2087 253:1 /var/lib/docker/volumes/cd9643d8197c128c76aaaa0af4b2468badfaea0537aad5a36ce7af3c4b987e86/_data/agent/kubelet /var/lib/rancher/k3s/agent/kubelet rw,relatime shared:1130 master:1 - ext4 /dev/mapper/ubuntu--vg-root rw,errors=remount-ro

These two should be causing problems, since the Source attribute is a mapper device that, presumably, doesn't exist in the container. Interesting. I'll boot up an Ubuntu VM later tonight to see if I can get a repro (Japan Time).

EDIT: Yeah I agree with that, I'll look at this more. Thanks for the active feedback everyone.


I've pushed code that adjusts the params a bit to remove init and to be safer in case someone is already using -v /dev/mapper:/dev/mapper.
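The dedupe idea behind "be safer" is straightforward; here is a hypothetical sketch (addDefaultMapperMount is an illustrative name, not the PR's actual function): only append the default mount if the user hasn't already supplied it.

package main

import "fmt"

// addDefaultMapperMount appends the /dev/mapper bind mount unless the user
// already passed it via -v, so the mount list contains it only once.
func addDefaultMapperMount(volumes []string) []string {
	const mapper = "/dev/mapper:/dev/mapper"
	for _, v := range volumes {
		if v == mapper {
			return volumes // user already asked for it, don't duplicate
		}
	}
	return append(volumes, mapper)
}

func main() {
	fmt.Println(addDefaultMapperMount([]string{"/tmp:/tmp"}))
	fmt.Println(addDefaultMapperMount([]string{"/dev/mapper:/dev/mapper"}))
}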

@iwilltry42 (Member)

I see what you mean... thanks for your efforts in investigating what the problem is here 👍

@jaredallard (Author) commented May 16, 2019

OK! Done investigating: the issue is the combination of /dev/mapper and btrfs. It does not occur when running on ext4, only on btrfs with mapped devices. I confirmed this with an Ubuntu VM using the stock "install with LVM, etc." option, and then did the same but set the filesystem type to btrfs, and the issue then occurred.

@andyz-dev (Contributor)

@jaredallard nice progress!

I am not an expert in device mapper, so the following is really just my curiosity.

Are there other volume types that may need this besides btrfs? Soft RAID and encrypted volumes come to mind, but I don't know them well enough.

Do we have a way to determine whether /dev/mapper is being used? Maybe we can use "/dev/mapper is in use" as the indicator rather than detecting volume types? I wonder if that would be a more reliable approach in the long run.

For the patch, I wonder if we should back off if the user maps /dev/mapper to some other path:

-v /my-vm-image/dev/mapper:/dev/mapper

With that, a user can block the hidden bind mount by:

-v /tmp:/dev/mapper

At any rate, when adding the hidden mount, we should probably print a console message to tell the user about it.

@jaredallard (Author) commented May 16, 2019

@andyz-dev My guess right now is that this is actually not an issue with mapped devices at all, but with how the special btrfs code in cAdvisor is written. That code was written to handle subvolumes and the like; my guess is that it stats the /dev/mapper path from Source to determine whether it's a subvolume, which other filesystems don't require. It's also possible this may occur without mapped devices, on a plain btrfs filesystem. But this is just speculation.

I'll attempt to scope this down further later today, as well as potentially dig through cadvisor's fs code to see if there is anything that can be done to not require that.

For backing off, yes, we should do that. I'll include that as part of the "dedupe" aspect.

Also, I'll include a notification when the default bind mount is added (determined by the initial stat check). EDIT: Done.

@jaredallard (Author)

Update: This isn't occurring on Ubuntu without mapped devices, but an important note is that the disk provided by Source from /proc/self/mountinfo DOES exist in the container at /dev/nvme... so that wouldn't likely trigger this.

So, to recap: at the moment this will only occur on systems using mapped devices + btrfs. I'll continue by digging into cadvisor.

@jaredallard (Author) commented May 17, 2019

So, in cadvisor/fs/fs.go we can see that here: https://github.com/google/cadvisor/blob/master/fs/fs.go#L740 a fix was made to resolve the partition IDs reported by /proc/self/mountinfo by stat-ing the device. That device path is taken from the Source attribute in the same file, here: https://github.com/google/cadvisor/blob/master/fs/fs.go#L745

I can't see a way to fix that in cadvisor, since it was added to resolve that issue. I think we should go ahead with this fix since it resolves the problem, and including mapped devices is, in my opinion, no more vulnerable than the currently existing /dev mounts. We can also move forward with making this mount read-only, as that would slightly mitigate any issues that pop up in the future.
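Roughly, the mechanism being referenced works like the sketch below (my own paraphrase in Go, not the actual cadvisor code): the mount's device numbers are resolved by stat-ing the Source path, which inside the k3s container only succeeds if /dev/mapper has been bind-mounted in. The path used here is just an example taken from the mountinfo dump above.

package main

import (
	"fmt"
	"syscall"

	"golang.org/x/sys/unix"
)

// deviceNumbers stats a block-device path (e.g. a /dev/mapper node) and
// returns its major:minor numbers, as cadvisor-style code would need to do.
func deviceNumbers(path string) (uint32, uint32, error) {
	var st syscall.Stat_t
	if err := syscall.Stat(path, &st); err != nil {
		// Fails inside the container if /dev/mapper isn't mounted in.
		return 0, 0, err
	}
	rdev := uint64(st.Rdev)
	return unix.Major(rdev), unix.Minor(rdev), nil
}

func main() {
	major, minor, err := deviceNumbers("/dev/mapper/ubuntu--vg-root")
	if err != nil {
		fmt.Println("cannot resolve device:", err)
		return
	}
	fmt.Printf("%d:%d\n", major, minor)
}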

@jaredallard changed the title from "feat(cli): support LUKS + LVM on linux" to "feat(cli): support mapped btrfs devices on Linux" on May 18, 2019
@zeerorg (Collaborator) commented May 19, 2019

As far as I understand, the issue is with systems where the filesystem is btrfs and /dev/mapper/ needs to be mounted? Maybe I'm wrong.

But still, as I see it the issue can be solved by following the README, and the PR doesn't do anything else. Can you summarize the PR and how it is different from following the README?

@jaredallard-okta

As far as I understand, the issue is with systems where the filesystem is btrfs and /dev/mapper/ needs to be mounted? Maybe I'm wrong.

But still, as I see it the issue can be solved by following the README, and the PR doesn't do anything else. Can you summarize the PR and how it is different from following the README?

There was nothing in the README until this PR was made and I discovered the root cause of this. Yes, that is correct, that is the issue; again, though, that all came out of this thread if you read back through it.

@iwilltry42 (Member) commented May 19, 2019

Hi again @jaredallard, thanks a lot for investigating the specific cases in which this issue might occur.
Do you still think it's worth adding this specific case to the core of k3d, or would a more detailed section in the README/FAQ be enough for this edge case?
I don't have a strong opinion here, but the simpler the solution, the better, right?

UPDATE: if we're going to include this in the code as a default setting, we should ensure that it's only being applied in the cases where it's really needed and then inform the user about it.

@@ -28,6 +28,8 @@ const (
defaultServerCount = 1
)

var defaultBindMounts = []string{"/dev/mapper:/dev/mapper"}
Member (review comment):

Maybe this shouldn't be in defaultBindMounts, since it's not default, but only added in specific cases, right?

Author (review comment):

Sort of -- it checks whether it exists, so it's a default bind mount, but it's only applied when it exists. We could change the naming, but I'm not sure what that would get us, since it's still sort of a default.

@jaredallard (Author)

Hi again @jaredallard, thanks a lot for investigating the specific cases in which this issue might occur.
Do you still think it's worth adding this specific case to the core of k3d, or would a more detailed section in the README/FAQ be enough for this edge case?
I don't have a strong opinion here, but the simpler the solution, the better, right?

UPDATE: if we're going to include this in the code as a default setting, we should ensure that it's only being applied in the cases where it's really needed and then inform the user about it.

I'm still undecided on which to do. If I always have to pass -v /dev/mapper:/dev/mapper, I'll have to add logic in all of my code to handle whether it exists, which is detrimental from a UX standpoint; but for the majority of users this might never occur, so it's not an issue for them.

I'd prefer to have this go in, but I can see the reasons not to. At this point I'd say this is ready to go, since it's only applied when /dev/mapper exists. The only other step I could add is to apply it only when ANY device is detected to be a btrfs device. My problem there is that it gets complicated to detect this conditionally, because then we need to parse the user's -v additions to see if they included a volume path that lives on a btrfs device backed by /dev/mapper, etc. That's when it gets really unclean.

@iwilltry42 (Member)

I understand your point of view that it should be included to ease usage on systems with btrfs mapped volumes. But to me, checking only for the existence of /dev/mapper as a justification to mount it into the k3d server doesn't seem like enough. /dev/mapper is, for example, also there on my machine, but since I'm not using btrfs I definitely don't need it, so it would just add another volume mount that's simply not needed. I guess one could argue that it doesn't harm... but you never know...

Why would you need to check for -v-provided volumes from /dev/mapper/btrfs?
I'd only check for the existence of both /dev/mapper and a btrfs device/volume and then go with the de-duplication logic which you already have.
But I may be missing something here 🤔

@jaredallard (Author) commented May 24, 2019

I understand your point of view that it should be included to ease usage on systems with btrfs mapped volumes. But to me, checking only for the existence of /dev/mapper as a justification to mount it into the k3d server doesn't seem like enough. /dev/mapper is, for example, also there on my machine, but since I'm not using btrfs I definitely don't need it, so it would just add another volume mount that's simply not needed. I guess one could argue that it doesn't harm... but you never know...

Why would you need to check for -v-provided volumes from /dev/mapper/btrfs?
I'd only check for the existence of both /dev/mapper and a btrfs device/volume and then go with the de-duplication logic which you already have.
But I may be missing something here

Yeah, I agree with that. The idea behind checking -v was to only mount it if an included path, or a VOLUME declaration, will be on a btrfs device. Otherwise, I can just scan /dev/mapper for btrfs volumes, but that runs the risk of mounting it even when no btrfs volume is actually in use. Anyway, I think the scanning option is probably better than nothing; if nobody is opposed to it, I'll implement it after confirmation (see the sketch below).

Thanks everyone for being involved in this, now massive, thread! 🚀
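For reference, the "scan for btrfs on mapped devices" heuristic discussed above could look roughly like this (a hypothetical sketch, not merged k3d code; note that some setups report /dev/dm-N rather than /dev/mapper/... in /proc/self/mounts, which this simple check would miss):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hostNeedsMapperMount reports whether any mounted filesystem is btrfs and
// backed by a device-mapper path, according to /proc/self/mounts.
func hostNeedsMapperMount() bool {
	f, err := os.Open("/proc/self/mounts")
	if err != nil {
		return false
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// /proc/self/mounts format: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) < 3 {
			continue
		}
		if strings.HasPrefix(fields[0], "/dev/mapper/") && fields[2] == "btrfs" {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println("needs /dev/mapper bind mount:", hostNeedsMapperMount())
}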

@iwilltry42 changed the title from "feat(cli): support mapped btrfs devices on Linux" to "[Feature] support mapped btrfs devices on Linux" on May 27, 2019
@jaredallard (Author)

Hoping to get some work on this sometime this week. 🙀

@zer0def commented Aug 16, 2019

Based on schu/kubedee#2, the solution for btrfs would be to pass in the block device underneath DockerRootDir. However, a broader solution/workaround (for example for those who, like me, run DockerRootDir on tmpfs) would be to create, mount, and pass in a loop block device underneath /var/lib/rancher/k3s with a filesystem that the kubelet doesn't barf on.

Perhaps https://github.com/ashald/docker-volume-loopback could be of use here? The example below works regardless of the underlying filesystem:

VOLUME_NAME=asdf
docker plugin install ashald/docker-volume-loopback
docker volume create -d ashald/docker-volume-loopback ${VOLUME_NAME} -o sparse=true -o fs=ext4
k3d c -v ${VOLUME_NAME}:/var/lib/rancher/k3s

This example obviously fails on a multi-node cluster due to sharing state not meant to be shared, but it's something to be pushed down to per-container volume creation when starting a cluster.

Although, the "quickest" solution might be passing through --volume-driver to containers created for the cluster.

@zer0def commented Aug 29, 2019

@jaredallard @iwilltry42 thoughts on my comment?

@iwilltry42 (Member)

Hi @zer0def, at first glance this looks quite cool to me, especially if it's platform/FS-agnostic.
But I'm a bit off track here and probably cannot be of too much help atm.
Maybe @andyz-dev has thoughts on this?
In any case, PRs are always welcome :)

@zer0def commented Oct 12, 2019

@iwilltry42 with the introduction of #116, one can dodge this limitation by doing something like this:

#!/bin/bash -x

CLUSTER_NAME="${1:-k3s-default}"
NUM_WORKERS="${2:-2}"

setup() {
  # Install and enable the loopback volume plugin if it isn't already active.
  PLUGIN_LS_OUT=`docker plugin ls --format '{{.Name}},{{.Enabled}}' | grep -E '^ashald/docker-volume-loopback'`
  [ -z "${PLUGIN_LS_OUT}" ] && docker plugin install ashald/docker-volume-loopback DATA_DIR=/tmp/docker-loop/data
  sleep 3
  [ "${PLUGIN_LS_OUT##*,}" != "true" ] && docker plugin enable ashald/docker-volume-loopback

  # Create one ext4-backed loopback volume per node and mount it at /var/lib/rancher/k3s.
  K3D_MOUNTS=()
  for i in `seq 0 ${NUM_WORKERS}`; do
    [ ${i} -eq 0 ] && VOLUME_NAME="k3d-${CLUSTER_NAME}-server" || VOLUME_NAME="k3d-${CLUSTER_NAME}-worker-$((${i}-1))"
    docker volume create -d ashald/docker-volume-loopback ${VOLUME_NAME} -o sparse=true -o fs=ext4
    K3D_MOUNTS+=('-v' "${VOLUME_NAME}:/var/lib/rancher/k3s@${VOLUME_NAME}")
  done
  k3d c -i rancher/k3s:v0.9.1 -n ${CLUSTER_NAME} -w ${NUM_WORKERS} "${K3D_MOUNTS[@]}"
}

cleanup() {
  # Tear down the cluster, then remove the per-node volumes.
  K3D_VOLUMES=()
  k3d d -n ${CLUSTER_NAME}
  for i in `seq 0 ${NUM_WORKERS}`; do
    [ ${i} -eq 0 ] && VOLUME_NAME="k3d-${CLUSTER_NAME}-server" || VOLUME_NAME="k3d-${CLUSTER_NAME}-worker-$((${i}-1))"
    K3D_VOLUMES+=("${VOLUME_NAME}")
  done
  docker volume rm -f "${K3D_VOLUMES[@]}"
}

setup
sleep 300  # should be enough to inspect whether everything's fine
cleanup

Granted, it's a little clunky compared to just providing a volume driver for Docker to consume, but it at least gets the job done. This also means that this PR can probably be closed.

@iwilltry42 (Member)

Closing this due to the existing workarounds and inactivity. Feel free to reopen if there's need for it.

@diepes commented Nov 9, 2021

I found this thread after struggling to get k3d working due to disk-pressure taints on all nodes:

    taints:
    - effect: NoSchedule
      key: node.kubernetes.io/disk-pressure

Got it working after reading this thread.
k3d cluster create test -s 3 -a 3 -v /dev/mapper:/dev/mapper

It does seem to complicate k3d just working out of the box, though.

@iwilltry42 (Member)

@diepes, this will be handled OOB soon as per #629, and it's also mentioned in the FAQ: https://k3d.io/faq/faq/#issues-with-btrfs 👍

Labels: enhancement (New feature or request)
7 participants