
ZFS

ZFS is a mature filesystem (with features similar to the more recent BTRFS) that was originally created for Solaris and later ported to a few other systems (mainly FreeBSD and Linux).

Despite its nice features and maturity there is one strong controversy: its unclear licensing situation - Linus has stated that he has no plans to include it in the official Linux kernel. https://www.phoronix.com/news/Linus-Says-No-To-ZFS-Linux

Some core kernel developers even hate ZFS and do absurd things that defy common sense, such as restricting the export of some functions to GPL-only modules:

Sebastian Andrzej Siewior: So btrfs uses crc32c() / kernel's crypto API for that and ZFS can't? Well the crypto API is GPL only exported so that won't work. crc32c() is EXPORT_SYMBOL() so it would work. On the other hand it does not look right to provide a EXPORT_SYMBOL wrapper around a GPL only interface…

Greg Kroah-Hartman:

Yes, the "GPL condom" attempt doesn't work at all. It's been shot down a long time ago in the courts.

My tolerance for ZFS is pretty non-existant. Sun explicitly did not want their code to work on Linux, so why would we do extra work to get their code to work properly?

One really interesting feature is support for layered read caching and layered writes (basically tiered storage).

Read layers:

  • ARC (Adaptive Replacement Cache) - data is always read through this cache, which lives in kernel memory
  • L2ARC - optional 2nd-level cache - typically on SSD, to cache reads from a big HDD

Write layers:

  • ZIL (ZFS Intent Log) - synchronous writes are first recorded in this transaction log
  • SLOG - the intent log optionally stored on a dedicated SSD device - transactions are quickly committed to the SSD and later written asynchronously to the HDD in batches, which is a more optimal access pattern for an HDD (see the sketch below)
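
As a rough sketch (the device paths below are made up, not from this setup), the optional L2ARC and SLOG layers map to cache and log vdevs that are added with zpool add:

# attach an SSD partition as L2ARC (read cache) to a pool named tank
zpool add tank cache /dev/disk/by-id/nvme-EXAMPLE-part1
# attach another SSD partition as SLOG (separate intent log device)
zpool add tank log /dev/disk/by-id/nvme-EXAMPLE-part2
# verify the new vdev roles
zpool status tank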

Please note that the ARC cache can cause heavy swapping on Linux because it is not accounted as Cache (which can be quickly dropped when memory is needed) but as allocated kernel memory. It is known to cause heavy swapping on Proxmox (because there is an 80% threshold of allocated memory at which both KSMd and VM ballooning kick in) unless one uses several workarounds.
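
A common workaround is to cap the ARC size via the zfs_arc_max module parameter; a minimal sketch, assuming a 4 GiB limit suits your workload:

# limit ARC to 4 GiB at runtime (value is in bytes)
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
# make the limit persistent across reboots
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
# on Debian/Proxmox the initramfs must be refreshed so the option is applied early
update-initramfs -u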

ZFS on root

Please note that the typical usage of ZFS is software RAID on top of plain disks (known as JBOD - "just a bunch of disks"), while the OS is often installed on a traditional filesystem (say ext4 or XFS). In such a case the pool is often called tank, because that name is used in the official manual page examples.
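
For illustration only (the device names are made up), such a data-only pool could be created like this:

# mirrored data pool named "tank", as in the manual page examples
zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
# create a dataset inside it
zfs create tank/data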

On the other hand, you can even use ZFS for the OS installation disk. However, be prepared for a few quirks: only some distributions support ZFS on the root filesystem.

Example Xubuntu installation layout

Fdisk:

Disk /dev/sda: 17,95 GiB, 19273465856 bytes, 37643488 sectors
Disk model: VBOX HARDDISK   
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt

Device       Start      End  Sectors  Size Type
/dev/sda1     2048     4095     2048    1M BIOS boot
/dev/sda2     4096  1054719  1050624  513M EFI System
/dev/sda3  1054720  4986879  3932160  1,9G Linux swap
/dev/sda4  4986880  6815743  1828864  893M Solaris boot
/dev/sda5  6815744 37643454 30827711 14,7G Solaris root

The Solaris boot partition contains the pool bpool (Boot pool) and the Solaris root partition contains rpool (Root pool).

The dedicated bpool is required because GRUB has limited support for ZFS features - so the Boot pool is set up with only those limited features, while the Root pool can use all available features without any restrictions.
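
To compare which features ended up enabled on the restricted Boot pool versus the Root pool, you can check the feature@ properties (my own quick check, not part of the installer docs):

zpool get all bpool | grep feature@
zpool get all rpool | grep feature@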

cat /proc/cmdline

BOOT_IMAGE=/BOOT/ubuntu_lp7a5y@/vmlinuz-5.19.0-41-generic \
 root=ZFS=rpool/ROOT/ubuntu_lp7a5y \
 ro quiet splash

Pool(s) status:

 sudo zpool status

  pool: bpool
 state: ONLINE
config:

	NAME                                    STATE     READ WRITE CKSUM
	bpool                                   ONLINE       0     0     0
	  adf95ab8-1f48-b845-868d-eea185083f8e  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
config:

	NAME                                    STATE     READ WRITE CKSUM
	rpool                                   ONLINE       0     0     0
	  fa00daa7-436e-7d41-aec0-f8c75a9f3843  ONLINE       0     0     0

errors: No known data errors

Dataset list:

sudo zfs list

NAME                                               USED  AVAIL     REFER  MOUNTPOINT
bpool                                              279M   425M       96K  /boot
bpool/BOOT                                         278M   425M       96K  none
bpool/BOOT/ubuntu_lp7a5y                           278M   425M      278M  /boot
rpool                                             5.52G  8.52G       96K  /
rpool/ROOT                                        5.52G  8.52G       96K  none
rpool/ROOT/ubuntu_lp7a5y                          5.52G  8.52G     3.92G  /
rpool/ROOT/ubuntu_lp7a5y/srv                        96K  8.52G       96K  /srv
rpool/ROOT/ubuntu_lp7a5y/usr                       224K  8.52G       96K  /usr
rpool/ROOT/ubuntu_lp7a5y/usr/local                 128K  8.52G      128K  /usr/local
rpool/ROOT/ubuntu_lp7a5y/var                      1.60G  8.52G       96K  /var
rpool/ROOT/ubuntu_lp7a5y/var/games                  96K  8.52G       96K  /var/games
rpool/ROOT/ubuntu_lp7a5y/var/lib                  1.60G  8.52G     1.44G  /var/lib
rpool/ROOT/ubuntu_lp7a5y/var/lib/AccountsService   100K  8.52G      100K  /var/lib/AccountsService
rpool/ROOT/ubuntu_lp7a5y/var/lib/NetworkManager    124K  8.52G      124K  /var/lib/NetworkManager
rpool/ROOT/ubuntu_lp7a5y/var/lib/apt               134M  8.52G      134M  /var/lib/apt
rpool/ROOT/ubuntu_lp7a5y/var/lib/dpkg             29.9M  8.52G     29.9M  /var/lib/dpkg
rpool/ROOT/ubuntu_lp7a5y/var/log                     2M  8.52G        2M  /var/log
rpool/ROOT/ubuntu_lp7a5y/var/mail                   96K  8.52G       96K  /var/mail
rpool/ROOT/ubuntu_lp7a5y/var/snap                  628K  8.52G      628K  /var/snap
rpool/ROOT/ubuntu_lp7a5y/var/spool                 112K  8.52G      112K  /var/spool
rpool/ROOT/ubuntu_lp7a5y/var/www                    96K  8.52G       96K  /var/www
rpool/USERDATA                                     696K  8.52G       96K  /
rpool/USERDATA/lxi_alu18m                          488K  8.52G      488K  /home/lxi
rpool/USERDATA/root_alu18m                         112K  8.52G      112K  /root

Non-default dataset properties:

sudo zfs get -r -s local -o name,property,value all bpool
NAME                      PROPERTY              VALUE
bpool                     mountpoint            /boot
bpool                     compression           lz4
bpool                     devices               off
bpool                     canmount              off
bpool                     xattr                 sa
bpool                     acltype               posix
bpool                     relatime              on
bpool/BOOT                mountpoint            none
bpool/BOOT                canmount              off
bpool/BOOT/ubuntu_lp7a5y  mountpoint            /boot

sudo zfs get -r -s local -o name,property,value all rpool
NAME                                              PROPERTY                         VALUE
rpool                                             mountpoint                       /
rpool                                             compression                      lz4
rpool                                             canmount                         off
rpool                                             xattr                            sa
rpool                                             sync                             standard
rpool                                             dnodesize                        auto
rpool                                             acltype                          posix
rpool                                             relatime                         on
rpool/ROOT                                        mountpoint                       none
rpool/ROOT                                        canmount                         off
rpool/ROOT/ubuntu_lp7a5y                          mountpoint                       /
rpool/ROOT/ubuntu_lp7a5y                          com.ubuntu.zsys:bootfs           yes
rpool/ROOT/ubuntu_lp7a5y                          com.ubuntu.zsys:last-used        1682775301
rpool/ROOT/ubuntu_lp7a5y/srv                      com.ubuntu.zsys:bootfs           no
rpool/ROOT/ubuntu_lp7a5y/usr                      canmount                         off
rpool/ROOT/ubuntu_lp7a5y/usr                      com.ubuntu.zsys:bootfs           no
rpool/ROOT/ubuntu_lp7a5y/var                      canmount                         off
rpool/ROOT/ubuntu_lp7a5y/var                      com.ubuntu.zsys:bootfs           no
rpool/USERDATA                                    mountpoint                       /
rpool/USERDATA                                    canmount                         off
rpool/USERDATA/lxi_alu18m                         mountpoint                       /home/lxi
rpool/USERDATA/lxi_alu18m                         canmount                         on
rpool/USERDATA/lxi_alu18m                         com.ubuntu.zsys:bootfs-datasets  rpool/ROOT/ubuntu_lp7a5y
rpool/USERDATA/root_alu18m                        mountpoint                       /root
rpool/USERDATA/root_alu18m                        canmount                         on
rpool/USERDATA/root_alu18m                        com.ubuntu.zsys:bootfs-datasets  rpool/ROOT/ubuntu_lp7a5y

Example FreeBSD 13.1 layout

Here is an example of the default ZFS layout of a FreeBSD 13.1 installation (ZFS is supported out of the box), installed on bare metal (NVIDIA MCP55 chipset; I must use a SiI SATA PCI card to avoid data corruption - see the FreeBSD page).

But there is a catch:

If you install the Auto (ZFS) option onto an MBR-based disk, you will get an unbootable system! I think that I found the cause:

How to get disk device names:

# sysctl kern.disks

kern.disks: cd0 ada0

Fdisk - NOT USEFUL: it shows only the fake protective MBR partition (which exists to allow booting from GPT and to prevent accidental overwriting of the GPT partitions):

fdisk ada0

The data for partition 1 is:
sysid 238 (0xee),(EFI GPT)
    start 1, size 976773167 (476940 Meg), flag 0
	beg: cyl 0/ head 0/ sector 2;
	end: cyl 1023/ head 255/ sector 63
The data for partition 2 is:
<UNUSED>
The data for partition 3 is:
<UNUSED>
The data for partition 4 is:
<UNUSED>

The disklabel command can't be used on GPT - FreeBSD allocates partitions directly on GPT (as, for example, Linux does).

We have to use gpart to see GPT partitions:

gpart show ada0

=>       40  976773088  ada0  GPT  (466G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048   33554432     2  freebsd-swap  (16G)
   33556480  943216640     3  freebsd-zfs  (450G)
  976773120          8        - free -  (4.0K)

Please note that the 1st line (with the => arrow) is the whole disk, while the others are real GPT partition entries.

Now we can use commands similar to those on Linux to query the ZFS status:

zpool status

  pool: zroot
 state: ONLINE
config:

	NAME        STATE     READ WRITE CKSUM
	zroot       ONLINE       0     0     0
	  ada0p3    ONLINE       0     0     0

errors: No known data errors

ZFS list:

zfs list
NAME                                        USED  AVAIL     REFER  MOUNTPOINT
zroot                                      1.44G   433G       96K  /zroot
zroot/ROOT                                 1.43G   433G       96K  none
zroot/ROOT/13.1-RELEASE_2023-05-01_100656     8K   433G      774M  /
zroot/ROOT/default                         1.43G   433G     1.37G  /
zroot/tmp                                    96K   433G       96K  /tmp
zroot/usr                                   428K   433G       96K  /usr
zroot/usr/home                              140K   433G      140K  /usr/home
zroot/usr/ports                              96K   433G       96K  /usr/ports
zroot/usr/src                                96K   433G       96K  /usr/src
zroot/var                                   632K   433G       96K  /var
zroot/var/audit                              96K   433G       96K  /var/audit
zroot/var/crash                              96K   433G       96K  /var/crash
zroot/var/log                               152K   433G      152K  /var/log
zroot/var/mail                               96K   433G       96K  /var/mail
zroot/var/tmp                                96K   433G       96K  /var/tmp

Non-default properties:

zfs get -r -s local -o name,property,value all zroot
NAME                                       PROPERTY              VALUE
zroot                                      mountpoint            /zroot
zroot                                      compression           lz4
zroot                                      atime                 off
zroot/ROOT                                 mountpoint            none
zroot/ROOT/13.1-RELEASE_2023-05-01_100656  mountpoint            /
zroot/ROOT/13.1-RELEASE_2023-05-01_100656  canmount              noauto
zroot/ROOT/default                         mountpoint            /
zroot/ROOT/default                         canmount              noauto
zroot/tmp                                  mountpoint            /tmp
zroot/tmp                                  exec                  on
zroot/tmp                                  setuid                off
zroot/usr                                  mountpoint            /usr
zroot/usr                                  canmount              off
zroot/usr/ports                            setuid                off
zroot/var                                  mountpoint            /var
zroot/var                                  canmount              off
zroot/var/audit                            exec                  off
zroot/var/audit                            setuid                off
zroot/var/crash                            exec                  off
zroot/var/crash                            setuid                off
zroot/var/log                              exec                  off
zroot/var/log                              setuid                off
zroot/var/mail                             atime                 on
zroot/var/tmp                              setuid                off

Some other useful commands:

zpool list

NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot   448G  6.84G   441G        -         -     0%     1%  1.00x    ONLINE  -

Or an iostat-like view:

zpool iostat 1

             capacity     operations     bandwidth 
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zroot       6.83G   441G     15     14   675K  1.25M
zroot       6.83G   441G      0      0      0      0
...

NOTE: the arc_summary command can be installed with pkg install arc_summary (tested on FreeBSD 14.1).
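
Even without arc_summary, basic ARC statistics are exposed via sysctl on FreeBSD; a small sketch (exact OID names may vary between releases):

# current and maximum ARC size in bytes
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max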

TODO: more...

Study

Here are pointers to ZFS resources I'm currently studying:

Important notes

From Proxmox https://pve.proxmox.com/wiki/ZFS_on_Linux:

If you are experimenting with an installation of Proxmox VE inside a VM (Nested Virtualization), don’t use virtio for disks of that VM, as they are not supported by ZFS. Use IDE or SCSI instead (also works with the virtio SCSI controller type).

This is because Virtio-BLK has a blank serial number, so these disks are missing under /dev/disk/by-id/ - see for example:
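
A quick way to see the problem from inside the guest (my own check, not from the Proxmox wiki): with Virtio-BLK and no serial set, the disk simply does not appear under /dev/disk/by-id/, while with Virtio-SCSI it shows up with a QEMU-generated name:

# with Virtio-SCSI you should see entries like scsi-0QEMU_QEMU_HARDDISK_drive-scsi0
ls -l /dev/disk/by-id/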

Recover GRUB in Proxmox/ZFS

When I played with NetBSD it overwrote my MBR, which contained GRUB from Proxmox on ZFS - the only bootloader capable of booting Proxmox, OpenBSD and NetBSD on my setup (GPT+BIOS).

There is no obvious guide on how to boot a rescue Proxmox VE/Linux with working ZFS support (too many known boot media failed).

I had to do this:

  • boot the Proxmox ISO proxmox-ve_7.4-1.iso
  • select Advanced Mode (Recovery mode failed with terrible errors)
  • exit the 1st shell (it contains a useless RAM disk without the ZFS module)
  • wait for the 2nd shell - you should see a message about loading the ZFS module before this shell appears
  • now we have to use ZFS black magic (or art):
mkdir /mnt/zfs
zpool import -R /mnt/zfs -f rpool
# now your complete Proxmox/ZFS should be mounted on /mnt/zfs
# to continue we have to bind-mount /sys /proc and /dev as usual:
mount --bind /proc /mnt/zfs/proc
mount --bind /dev /mnt/zfs/dev
mount --bind /sys /mnt/zfs/sys
# finally chroot
chroot /mnt/zfs
# restore GRUB in MBR in my case:
/usr/sbin/grub-install.real /dev/sda
# unmount everything using
umount /mnt/zfs/proc
umount /mnt/zfs/dev
umount /mnt/zfs/sys
# and this magic command unmounts and releases the ZFS pool (note: zpool, not zfs):
zpool export rpool

Now you can reboot to your Proxmox/HDD and it should work...

ZFS on Ubuntu

Ubuntu was a pioneer of ZFS on Linux, but then they suddenly decided to drop ZFS support from the installer, only to add it back in the 23.10 Desktop release.

I guess judging by https://github.com/canonical/subiquity/pull/1731 it looks like subiquity is going to have zfs support?

https://www.phoronix.com/news/Ubuntu-23.10-ZFS-Install

Ubuntu 23.10 Restores ZFS File-System Support In Its Installer

Currently testing 23.10 Desktop:

Notes:

  • must use the Desktop installer (NOT the Server one)

Tried in a Proxmox VM (a matching qm create sketch follows this list):

  • 1 vCPU or more in host mode (I have 2 cores on the host, so 1 vCPU is a reasonable maximum for the guest)
  • 6 GB (6144 MB) RAM or more (I have only 8 GB on the host - so 6 GB is a reasonable maximum for the guest)
  • Machine: q35 (emulates PCIe instead of the plain PCI of i440fx - a chipset from the late '90s)
  • Graphics: SPICE (allows use of the accelerated remote-viewer from the Proxmox Web Console)
  • mode: BIOS
  • 32 GB disk. Must use Virtio-SCSI!
    • Do NOT use Virtio-BLK - ZFS depends on unique disk serial numbers, while Virtio-BLK has none assigned by default unless additional settings are used!
    • enabled discard, and Cache: write-back (unsafe)
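
For reference, these settings correspond roughly to a qm create invocation like the one below (a hedged sketch: the VM ID, the storage name local-lvm and the ISO path are assumptions - adjust them to your environment):

qm create 130 --name ubuntu2310-zfs --memory 6144 --cores 1 --cpu host \
  --machine q35 --bios seabios --vga qxl \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:32,discard=on,cache=writeback \
  --net0 virtio,bridge=vmbr0 \
  --cdrom local:iso/ubuntu-23.10.1-desktop-amd64.iso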

In Wizard did:

  • language: English
  • type: Install Ubuntu
  • keyboard English US
  • connect to network: I don't want to connect (try to avoid bloat)
  • Skip Update
  • Default installation
  • Erase Disk and Install Ubuntu:
    • select Advanced Features: EXPERIMENTAL: Erase disk and use ZFS
  • partitions: just confirm the only offered layout (4 partitions, using 1 swap partition)
  • Time zone: select yours
  • set up your Account: ...
  • Theme: Dark
  • and finally the installation should proceed...

Notes:

  • although I selected NOT to install updates, this choice was ignored anyway (according to the installation log)
  • you can click on the Console icon to see more details about what the installer is doing
  • there is no progress bar - so it is hard to estimate the installation time
  • Ubuntu uses its specific zfs-zed.service to manage some ZFS features
  • there are many bloat packages installed but the SSH server is missing (!). You have to install it with apt-get install openssh-server

ZFS Details:

# zpool history

History for 'bpool':
2024-02-15.17:06:10 zpool create -o ashift=12 -o autotrim=on -o feature@async_destroy=enabled -o feature@bookmarks=enabled -o feature@embedded_data=enabled -o feature@empty_bpobj=enabled -o feature@enabled_txg=enabled -o feature@extensible_dataset=enabled -o feature@filesystem_limits=enabled -o feature@hole_birth=enabled -o feature@large_blocks=enabled -o feature@lz4_compress=enabled -o feature@spacemap_histogram=enabled -O canmount=off -O normalization=formD -O acltype=posixacl -O compression=lz4 -O devices=off -O relatime=on -O sync=standard -O xattr=sa -O mountpoint=/boot -R /target -d bpool /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part2
2024-02-15.17:06:10 zpool set cachefile=/etc/zfs/zpool.cache bpool
2024-02-15.17:06:11 zfs create -o canmount=off -o mountpoint=none bpool/BOOT
2024-02-15.17:06:28 zfs create -o canmount=on -o mountpoint=/boot bpool/BOOT/ubuntu_d1awaj
2024-02-15.17:46:40 zpool set cachefile= bpool
2024-02-15.17:46:43 zpool export -a
2024-02-15.17:48:08 zpool import -c /etc/zfs/zpool.cache -aN
2024-02-15.18:16:07 zpool import -c /etc/zfs/zpool.cache -aN
2024-02-15.18:44:29 zpool import -c /etc/zfs/zpool.cache -aN
2024-02-15.19:13:12 zpool import -c /etc/zfs/zpool.cache -aN
2024-02-15.19:18:49 zpool import -c /etc/zfs/zpool.cache -aN

History for 'rpool':
2024-02-15.17:06:16 zpool create -o ashift=12 -o autotrim=on -O canmount=off -O normalization=formD -O acltype=posixacl -O compression=lz4 -O devices=off -O dnodesize=auto -O relatime=on -O sync=standard -O xattr=sa -O mountpoint=/ -R /target rpool /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part4
2024-02-15.17:06:16 zpool set cachefile=/etc/zfs/zpool.cache rpool
2024-02-15.17:06:17 zfs create -o canmount=off -o mountpoint=none rpool/ROOT
2024-02-15.17:06:18 zfs create -o canmount=on -o mountpoint=/ rpool/ROOT/ubuntu_d1awaj
2024-02-15.17:06:19 zfs create -o canmount=off rpool/ROOT/ubuntu_d1awaj/var
2024-02-15.17:06:20 zfs create -o canmount=on rpool/ROOT/ubuntu_d1awaj/var/lib
2024-02-15.17:06:21 zfs create -o canmount=on rpool/ROOT/ubuntu_d1awaj/var/lib/AccountsService
2024-02-15.17:06:21 zfs create -o canmount=on rpool/ROOT/ubuntu_d1awaj/var/lib/apt
2024-02-15.17:06:22 zfs create -o canmount=on rpool/ROOT/ubuntu_d1awaj/var/lib/dpkg
2024-02-15.17:06:22 zfs create -o canmount=on rpool/ROOT/ubuntu_d1awaj/var/lib/NetworkManager
2024-02-15.17:06:23 zfs create -o canmount=on rpool/ROOT/ubuntu_d1awaj/srv
2024-02-15.17:06:23 zfs create -o canmount=off rpool/ROOT/ubuntu_d1awaj/usr
2024-02-15.17:06:23 zfs create -o canmount=on rpool/ROOT/ubuntu_d1awaj/usr/local
2024-02-15.17:06:24 zfs create -o canmount=on rpool/ROOT/ubuntu_d1awaj/var/games
2024-02-15.17:06:25 zfs create -o canmount=on rpool/ROOT/ubuntu_d1awaj/var/log
2024-02-15.17:06:25 zfs create -o canmount=on rpool/ROOT/ubuntu_d1awaj/var/mail
2024-02-15.17:06:26 zfs create -o canmount=on rpool/ROOT/ubuntu_d1awaj/var/snap
2024-02-15.17:06:26 zfs create -o canmount=on rpool/ROOT/ubuntu_d1awaj/var/spool
2024-02-15.17:06:27 zfs create -o canmount=on rpool/ROOT/ubuntu_d1awaj/var/www
2024-02-15.17:46:40 zpool set cachefile= rpool
2024-02-15.17:46:44 zpool export -a
2024-02-15.17:47:50 zpool import -N rpool
2024-02-15.18:15:56 zpool import -N rpool
2024-02-15.18:44:18 zpool import -N rpool
# command below done by me:
2024-02-15.18:53:40 zfs destroy rpool/ROOT/ubuntu_d1awaj/var/snap
# on every boot there is a "zpool import" which "opens" the pool for use.
2024-02-15.19:13:00 zpool import -N rpool
2024-02-15.19:18:37 zpool import -N rpool

# zpool status

  pool: bpool
 state: ONLINE
config:

	NAME                                          STATE     READ WRITE CKSUM
	bpool                                         ONLINE       0     0     0
	  scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part2  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
config:

	NAME                                          STATE     READ WRITE CKSUM
	rpool                                         ONLINE       0     0     0
	  scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part4  ONLINE       0     0     0

# zpool list

NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bpool  1.88G  77.9M  1.80G        -         -     0%     4%  1.00x    ONLINE  -
rpool  25.5G  2.05G  23.5G        -         -     3%     8%  1.00x    ONLINE  -

# zfs list -t all

NAME                                               USED  AVAIL  REFER  MOUNTPOINT
bpool                                             77.8M  1.67G    96K  /boot
bpool/BOOT                                        77.2M  1.67G    96K  none
bpool/BOOT/ubuntu_d1awaj                          77.1M  1.67G  77.1M  /boot

rpool                                             2.05G  22.7G    96K  /
rpool/ROOT                                        2.04G  22.7G    96K  none
rpool/ROOT/ubuntu_d1awaj                          2.04G  22.7G  1.05G  /
rpool/ROOT/ubuntu_d1awaj/srv                        96K  22.7G    96K  /srv
rpool/ROOT/ubuntu_d1awaj/usr                       224K  22.7G    96K  /usr
rpool/ROOT/ubuntu_d1awaj/usr/local                 128K  22.7G   128K  /usr/local
rpool/ROOT/ubuntu_d1awaj/var                      1015M  22.7G    96K  /var
rpool/ROOT/ubuntu_d1awaj/var/games                  96K  22.7G    96K  /var/games
rpool/ROOT/ubuntu_d1awaj/var/lib                  1010M  22.7G   921M  /var/lib
rpool/ROOT/ubuntu_d1awaj/var/lib/AccountsService    96K  22.7G    96K  /var/lib/AccountsService
rpool/ROOT/ubuntu_d1awaj/var/lib/NetworkManager    128K  22.7G   128K  /var/lib/NetworkManager
rpool/ROOT/ubuntu_d1awaj/var/lib/apt              71.6M  22.7G  71.6M  /var/lib/apt
rpool/ROOT/ubuntu_d1awaj/var/lib/dpkg             16.8M  22.7G  16.8M  /var/lib/dpkg
rpool/ROOT/ubuntu_d1awaj/var/log                  4.22M  22.7G  4.22M  /var/log
rpool/ROOT/ubuntu_d1awaj/var/mail                   96K  22.7G    96K  /var/mail
rpool/ROOT/ubuntu_d1awaj/var/snap                 1.00M  22.7G  1.00M  /var/snap
rpool/ROOT/ubuntu_d1awaj/var/spool                  96K  22.7G    96K  /var/spool
rpool/ROOT/ubuntu_d1awaj/var/www                    96K  22.7G    96K  /var/www

Slimming Ubuntu 23.10.1 Desktop for CLI:

  • WORK IN PROGRESS
  • after reboot I uninstalled a lot of Desktop and other bloat - I plan to use the CLI only:
apt-get purge snapd
# you will likely need a reboot and then run it again:
apt-get purge snapd
# the command below will remove most GUI apps :-)
apt-get purge libx11-6 plymouth\*
# WARNING! Keep network-manager installed - it seems that systemd-networkd does not work well...
apt-get purge kerneloops polkitd accountsservice sssd\*
apt-get purge avahi\* bluez cloud\* cups-\* fwupd\* gnome-\* gsettings-\* irqbalance laptop-detect printer-driver-\*

# run the commands below in a VM only (not on a bare metal machine!)
apt-get purge linux-firmware fwupd

# finally remove auto-installed but now orphaned packages:
apt-get autoremove --purge

Comment out all pam_motd.so lines in /etc/pam.d/* - to avoid running bloat on every user login.
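
A hedged one-liner for that (my own sketch, assuming the standard Ubuntu /etc/pam.d layout; verify with grep afterwards):

# comment out every pam_motd.so line in the PAM configuration
sed -i 's/^\(session.*pam_motd\.so.*\)$/#\1/' /etc/pam.d/*
# verify that no active pam_motd.so line remains
grep -r pam_motd /etc/pam.d/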

TODO: after reboot the network was broken (I fixed it with a few manual tweaks).

Masked some timers:

systemctl mask motd-news.timer dpkg-db-backup.timer \
   apt-daily-upgrade.timer man-db.timer apt-daily.timer e2scrub_all.timer fstrim.timer

And remember to disable the worst systemd crap ever:

systemctl mask --now systemd-oomd

Many unlucky users reported how that useless tool killed their LibreOffice or Firefox without any confirmation and without any chance to save their work...

After reboot we have to purge again the packages that failed to uninstall; list them with:

dpkg -l | grep ^ic

When I tried to remove snapd again with apt-get purge snapd, there were these problems:

  • unable to delete host-hunspell. Fixed with:
    umount /var/snap/firefox/common/host-hunspell
  • then apt-get purge snapd should finally remove it, with only an error that it was unable to remove /var/snap. Fixed with:
    zfs destroy rpool/ROOT/ubuntu_d1awaj/var/snap
  • these had to be removed manually:
    rm /etc/systemd/system/snapd.mounts.target.wants/var-snap-firefox-common-host\\x2dhunspell.mount
    rm /etc/systemd/system/multi-user.target.wants/var-snap-firefox-common-host\\x2dhunspell.mount
    rm /etc/systemd/system/var-snap-firefox-common-host\\x2dhunspell.mount

ZFS on openSUSE

Although I followed https://openzfs.github.io/openzfs-docs/Getting%20Started/openSUSE/openSUSE%20Leap%20Root%20on%20ZFS.html closely, I was unable to get a working grub-install /dev/sdX command (GPT+BIOS, package grub2-i386-pc-extras); it always ended with:

grub2-install: error: ../grub-core/kern/fs.c:123:unknown filesystem.

There is a deeper problem with GRUB: it insists on scanning all partitions, even when it only needs the /boot partition...

I resolved this problem by using extlinux (package syslinux) - which I already know from Alpine Linux.

  • I formatted and mounted /boot as ext4 (instead of the bpool ZFS pool)
  • with an entry in /etc/fstab like: UUID=MY_EXT4_UUID_FROM_LSBLK_F /boot ext4 defaults 0 2
  • then I just ran extlinux -i /boot
  • and copied /usr/share/syslinux/menu.c32 to /boot
  • finally I created /boot/extlinux.conf with these contents (I grabbed one from a real Alpine Linux install on another ZFS partition and adapted it a bit):
DEFAULT menu.c32
PROMPT 0
MENU TITLE OpenSUSE ZFS
MENU AUTOBOOT OpenSUSE ZFS # seconds.
TIMEOUT 30
LABEL suse
  MENU DEFAULT
  MENU LABEL suse Linux lts
  LINUX vmlinuz-6.4.0-150600.23.17-default
  INITRD initrd-6.4.0-150600.23.17-default
  APPEND root=ZFS=spool/ROOT/suse

MENU SEPARATOR

NOTE: I use a ZFS pool called spool (SUSE Pool), while the guide uses rpool (root pool). This is because on the same disk I also have FreeBSD on ZFS (another ZFS partition) and Alpine Linux on ZFS (yet another partition), and each pool should have a unique name.
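
Note that if you do end up with a name clash, zpool import can rename a pool while importing it (a general ZFS feature, shown here only as a sketch):

# import the pool currently named rpool under the new name spool
zpool import rpool spool
# if the name is ambiguous, use the numeric pool id printed by a plain "zpool import":
# zpool import <pool-id> spool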

I use the FreeBSD loader to chain-load openSUSE (the lsdev and chain commands of the FreeBSD loader).

ZFS Backup and restore

ZFS Backup

There is a great article on how to transfer ZFS on Proxmox from a bigger disk to a smaller target disk.

I decided to use a similar strategy to back up my ZFS installation of FreeBSD.

  • mounted a FAT32 backup USB pendrive:
    mount -t msdos /dev/da0s1 /mnt/target
  • WARNING! FAT32 is limited to a 4 GB (unsigned 32-bit integer) file size. For bigger pools you have to use split (see the sketch after this list) or another filesystem.
  • NOTE: to get the estimated size of the uncompressed backup, use the zpool list command and check the ALLOC column for your pool.
  • strongly recommended - back up metadata, i.e. the output of these commands:
    camcontrol devlist
    fdisk ada0
    gpart show ada0
    zpool status
    zfs list
    zpool history # important!
  • now create a snapshot and back it up using zfs send (I also compress it with gzip). Note: my ZFS pool is named zbsd:
    zfs snapshot -r zbsd@migrate
    zfs send -R zbsd@migrate | gzip -1c > /mnt/target/00BACKUPS/wd500-fbsd-zfs/zfs-backup-notcompress.bin.gz
    • example with zstd instead of gzip: zfs send -R pool@snapshot | zstd -o /backup_target/pool.bin.zst
  • this command can be used to inspect stream backup:
    zcat /mnt/target/00BACKUPS/wd500-fbsd-zfs/zfs-backup-notcompress.bin.gz | zstream dump
    • or in zstd case: zstdcat /path_to_backup.bin.zst | zstream dump
  • in my case I unmount USB pendrive with umount /mnt/target
  • now we can list all snapshots and delete our migrate snapshot:
    zfs list -t snapshot
    # dry run:
    zfs destroy -rvn zbsd@migrate
    # real destroy!
    zfs destroy -rv zbsd@migrate
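
Regarding the FAT32 4 GB limit mentioned above, here is a hedged sketch of splitting the compressed stream into chunks that fit on FAT32 (paths and chunk size are assumptions):

# backup: split the compressed stream into 3 GiB chunks
zfs send -R zbsd@migrate | gzip -1c | split -b 3G - /mnt/target/zfs-backup.gz.part-
# restore: concatenate the chunks back into a single stream
cat /mnt/target/zfs-backup.gz.part-* | zcat | zfs recv -F zroot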

ZFS Restore

NOTE: for the restore I used a different FreeBSD ZFS installation than the one in the backup section above. Thus my current pool is named zroot instead of the expected zbsd. Please keep this in mind.

Restore - the most important points:

  • I wanted to shrink an existing FreeBSD-on-ZFS installation: GPT+BIOS, with ZFS on partition ada0p3
  • so I recreated a smaller ada0p3 partition
  • and then used the FreeBSD install ISO in "Live System" mode
  • please see the details on FreeBSD remote install for how I configured network and SSHD access.

First make sure that there is no duplicate pool (for example from an old Linux install). I had the following problem (all importable pools can be listed with a plain zpool import):

zpool import

   pool: zroot
     id: 17366506765034322418
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	zroot       ONLINE
	  ada0p3    ONLINE

   pool: zroot
     id: 11489144554183585940
  state: UNAVAIL
status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
 config:

	zroot                   UNAVAIL  insufficient replicas
	  diskid/DISK-5QF3SLHZ  UNAVAIL  invalid label

In such a case we have to use zpool labelclear - in my case on ada0 (dangerous!). Double-check the "to be cleared" pool name:

zpool labelclear ada0

  use '-f' to override the following error:
  /dev/ada0 is a member of potentially active pool "zroot"

zpool labelclear -f ada0

Here is how I DESTROYED the ZFS partition (from the Live ISO) and created a smaller empty one:

gpart show ada0

  =>       40  625142368  ada0  GPT  (298G)
           40       1024     1  freebsd-boot  (512K)
         1064        984        - free -  (492K)
         2048   33554432     2  freebsd-swap  (16G)
     33556480  591585280     3  freebsd-zfs  (282G)
    625141760        648        - free -  (324K)

# DESTROYING ZFS:
# WRONG (ZFS keeps a copy of its metadata at the end of the disk/partition):
#        dd if=/dev/zero of=/dev/ada0p3 bs=1024k count=64
# Correct: you have to use "zpool labelclear" - see above

# Resize partition:
gpart resize -i 3 -s 50g ada0

  ada0p3 resized

gpart show ada0
=>       40  625142368  ada0  GPT  (298G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048   33554432     2  freebsd-swap  (16G)
   33556480  104857600     3  freebsd-zfs  (50G)
  138414080  486728328        - free -  (232G)

Now the real stuff - the ZFS restore:

# the original pool had compress=lz4, but I decided to use the FreeBSD default:
zpool create -o altroot=/mnt -O compress=on -O atime=off -m none -f zroot ada0p3
zpool list

  NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
  zroot  49.5G   165K  49.5G        -         -     0%     0%  1.00x    ONLINE  /mnt

# I have mounted backup from NFS server on /nfs
# Just this command (without any mount) is enough to restore ZFS
zcat /nfs/fbsd-zfs-sgate/zfs-backup-send.bin.gz | zfs recv -F zroot
# remove snapshot (dry-run):
zfs destroy -rvn zroot@migrate
# real destroy of all @migrate snapshots
zfs destroy -rv zroot@migrate

Now a few important commands:

# very important - this essential zpool property is NOT copied by send/recv (!):
zpool set bootfs=zroot/ROOT/default zroot

# export before reboot...
zpool export zroot

Very important:

You have to repeat exactly the same zpool set bootfs=... command from the zpool history captured before the backup (!), otherwise the FreeBSD ZFS loader will not know where the root dataset is and will fail with an error like /boot/... not found!
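
Since the metadata backup above included the zpool history output, the exact command can be recovered from that saved file (the file name here is hypothetical):

# find the original bootfs setting in the saved metadata
grep bootfs /mnt/target/00BACKUPS/zpool-history.txt
# expect a line like: zpool set bootfs=zroot/ROOT/default zroot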

Ufff....

ZFS properties

I'm not aware of an official list of permitted values of ZFS properties. However, I can peek here:

bectl (FreeBSD)

On FreeBSD there is the bectl command to manage ZFS Boot Environments:

Example output from my FreeBSD install:

$ bectl list

BE                                Active Mountpoint Space Created
14.1-RELEASE-p4_2024-09-24_162137 -      -          55.3M 2024-09-24 17:06
14.1-RELEASE_2024-09-10_162718    -      -          236M  2024-09-24 17:06
default                           NR     /          3.63G 2024-09-24 17:04

$ zfs get -r type

NAME                                          PROPERTY  VALUE       SOURCE
zroot                                         type      filesystem  -
zroot/ROOT                                    type      filesystem  -
zroot/ROOT/14.1-RELEASE-p4_2024-09-24_162137  type      filesystem  -
zroot/ROOT/14.1-RELEASE_2024-09-10_162718     type      filesystem  -
zroot/ROOT/default                            type      filesystem  -
zroot/ROOT/default@2024-09-10-16:27:18-0      type      snapshot    -
zroot/ROOT/default@2024-09-24-16:21:37-0      type      snapshot    -
zroot/home                                    type      filesystem  -

What is interesting is that there are both filesystem AND snapshot datasets, for example:

NAME                                          PROPERTY  VALUE       SOURCE
zroot/ROOT/14.1-RELEASE-p4_2024-09-24_162137  type      filesystem  -
zroot/ROOT/default@2024-09-24-16:21:37-0      type      snapshot    -
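
Beyond listing, bectl can also create, activate and destroy boot environments; a minimal sketch (the BE name is arbitrary):

# snapshot the current root into a new boot environment
bectl create before-upgrade
# make it the default for the next boot
bectl activate before-upgrade
# or boot it only once (temporary activation)
bectl activate -t before-upgrade
# remove it when no longer needed
bectl destroy before-upgrade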

Resources
