
snap remove lxd apparently deletes external zfs zpool but the disk is not re-usable #13812

Open
tomponline opened this issue Jul 24, 2024 · 2 comments
Labels: Bug (Confirmed to be a bug)
Milestone: lxd-6.2

Comments

@tomponline (Member) commented Jul 24, 2024

Problem: The zpool created by LXD on an external disk apparently gets removed on snap remove, but not sufficiently for the disk to be reused for a fresh ZFS pool.

Fresh Ubuntu 24.04 system with an empty external disk /dev/sdb:

lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0     7:0    0 74.2M  1 loop /snap/core22/1380
loop2     7:2    0 38.8M  1 loop /snap/snapd/21759
sda       8:0    0   10G  0 disk 
├─sda1    8:1    0    9G  0 part /
├─sda14   8:14   0    4M  0 part 
├─sda15   8:15   0  106M  0 part /boot/efi
└─sda16 259:0    0  913M  0 part /boot
sdb       8:16   0   10G  0 disk 

Install LXD and create a ZFS pool on /dev/sdb:

snap install lxd
lxd (5.21/stable) 5.21.2-34459c8 from Canonical✓ installed
lxc storage create local zfs source=/dev/sdb
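
For context, the zpool invocation LXD makes here is visible in the failure output further below; it is effectively:

zpool create -m none -O compression=on local /dev/sdb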

Install the ZFS tools and inspect the newly created zpool:

apt install zfsutils-linux
zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
local  9.50G   622K  9.50G        -         -     0%     0%  1.00x    ONLINE  -

zpool status
  pool: local
 state: ONLINE
config:

	NAME        STATE     READ WRITE CKSUM
	local       ONLINE       0     0     0
	  sdb       ONLINE       0     0     0

errors: No known data errors

Remove LXD and see that the partitions are left behind on /dev/sdb but the zpool is apparently gone:

snap remove lxd --purge

lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0     7:0    0 74.2M  1 loop /snap/core22/1380
loop2     7:2    0 38.8M  1 loop /snap/snapd/21759
sda       8:0    0   10G  0 disk 
├─sda1    8:1    0    9G  0 part /
├─sda14   8:14   0    4M  0 part 
├─sda15   8:15   0  106M  0 part /boot/efi
└─sda16 259:0    0  913M  0 part /boot
sdb       8:16   0   10G  0 disk 
├─sdb1    8:17   0   10G  0 part 
└─sdb9    8:25   0    8M  0 part 

zpool status
no pools available
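
Note (not part of the original report): zpool status only lists imported pools, so "no pools available" does not mean the on-disk ZFS labels are gone. Scanning for importable pools, or dumping the labels directly, should still show traces of the exported pool:

# List pools that could be imported (scans /dev by default)
zpool import

# Dump the ZFS labels still present on the leftover partition
zdb -l /dev/sdb1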

Install LXD again and try to create a ZFS storage pool on /dev/sdb.
It fails due to the leftover export of the "local" zpool:

snap install lxd
lxd (5.21/stable) 5.21.2-34459c8 from Canonical✓ installed
lxc storage create local zfs source=/dev/sdb
Error: Failed to run: zpool create -m none -O compression=on local /dev/sdb: exit status 1 (invalid vdev specification
use '-f' to override the following errors:
/dev/sdb1 is part of exported pool 'local')
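
Two possible manual workarounds at this point (a sketch, not taken from the original report):

# Option 1: clear the stale ZFS label from the leftover partition
zpool labelclear -f /dev/sdb1

# Option 2: reimport the exported pool and destroy it cleanly
zpool import local
zpool destroy local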

Manually wipe the partitions on /dev/sdb and see that LXD can then create a new ZFS pool successfully on /dev/sdb:

fdisk /dev/sdb

Welcome to fdisk (util-linux 2.39.3).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): d
Partition number (1,9, default 9): 1

Partition 1 has been deleted.

Command (m for help): d
Selected partition 9
Partition 9 has been deleted.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
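
For reference, a non-interactive equivalent of the fdisk session above (assuming wipefs from util-linux is available):

# Wipe filesystem signatures from the leftover partitions, then
# remove the partition table itself
wipefs --all /dev/sdb1 /dev/sdb9
wipefs --all /dev/sdb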


lxc storage create local zfs source=/dev/sdb
Storage pool local created
@MusicDin (Member) commented Aug 23, 2024

Adding here for later reference: it seems that LVM has a similar issue.

## version 6.1
$ lxc storage create test lvm

$ sudo snap remove lxd --purge
$ sudo snap install lxd --channel=5.21/edge

$ lxc storage create test lvm
Error: A volume group already exists called "test"
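
The leftover volume group can be confirmed and removed manually (a sketch; these commands are not from the original comment):

# Show volume groups; the stale "test" VG should still be listed
vgs

# Remove it so the name can be reused
vgremove -f test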

@simondeziel (Member) commented
I think that deleting (as in zpool destroy -f) an external pool would actually be bad. Being able to reimport a zpool that was left behind is pretty useful, and LXD shouldn't assume ownership of the pool even if it created it, as it's possible for the local user to make use of the pool for other purposes.

In your snap remove lxd --purge case, seeing sdb1 and sdb9 left behind is a good thing IMHO, as this lets you manually reimport your zpool if you want to recover your data.
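
A minimal sketch of that recovery path (commands assumed, not from the original comment):

# Scan for importable pools, then import the leftover "local" pool
zpool import
zpool import local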

I guess the question is how much purging LXD should do when going away. I think external pools are off limits.

The snap remove --purge is IMHO a snap concept meaning "do not keep rollback data", not something the snap itself should be concerned with.
