ZFS 0.8.4 Encrypted dataset on rootpool introduces systemd ordering cycle #10356

Closed

deadbeef2000 opened this issue May 22, 2020 · 12 comments

@deadbeef2000

System information

Type                  Version/Name
Distribution Name     Arch
Distribution Version
Linux Kernel          5.4.41-1-lts
Architecture          x86_64
ZFS Version           0.8.4-1
SPL Version           0.8.4-1

Describe the problem you're observing

I recently updated from ZFS 0.8.3 to ZFS 0.8.4.
I am using the zfs-mount-generator, set up following the instructions from the Arch wiki.
I've got one zpool (rootpool) with three datasets: ROOT, ROOT/HOME, and ROOT/SWAP.
rootpool/ROOT is encrypted with a passphrase.
After the upgrade, systemd complains about ordering cycles on bootup.
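
For reference, the zfs-mount-generator setup described in zfs-mount-generator(8) and on the Arch wiki looks roughly like the sketch below; the ZEDLET source path and the refresh trick are assumptions and may differ per distribution.

# Sketch of the usual zfs-mount-generator setup (one cache file per pool).
mkdir -p /etc/zfs/zfs-list.cache
touch /etc/zfs/zfs-list.cache/rootpool
# Enable the ZEDLET that keeps the cache file up to date; the source path
# below is an assumption and differs between distributions.
ln -s /usr/lib/zfs/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d/
systemctl enable --now zfs-zed.service
# A no-op property change makes ZED refresh the cache file:
zfs set canmount=on rootpool/ROOT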

Describe how to reproduce the problem

Use zfs-mount-generator with an encrypted dataset for the root filesystem.

Include any warning/errors/backtraces from the system logs

With zfs-import-scan.service:

May 22 09:15:21 klaus systemd[1]: zfs-import-scan.service: Found ordering cycle on systemd-udev-settle.service/start
May 22 09:15:21 klaus systemd[1]: zfs-import-scan.service: Found dependency on systemd-udev-trigger.service/start
May 22 09:15:21 klaus systemd[1]: zfs-import-scan.service: Found dependency on systemd-udevd-control.socket/start
May 22 09:15:21 klaus systemd[1]: zfs-import-scan.service: Found dependency on -.mount/start
May 22 09:15:21 klaus systemd[1]: zfs-import-scan.service: Found dependency on zfs-import.target/start
May 22 09:15:21 klaus systemd[1]: zfs-import-scan.service: Found dependency on zfs-import-scan.service/start
May 22 09:15:21 klaus systemd[1]: zfs-import-scan.service: Job systemd-udev-settle.service/start deleted to break ordering cycle starting with zfs-import-scan.service/start
May 22 09:15:21 klaus systemd[1]: -.mount: Found ordering cycle on zfs-load-key-rootpool-ROOT.service/start
May 22 09:15:21 klaus systemd[1]: -.mount: Found dependency on systemd-journald.socket/start
May 22 09:15:21 klaus systemd[1]: -.mount: Found dependency on -.mount/start
May 22 09:15:21 klaus systemd[1]: -.mount: Job zfs-load-key-rootpool-ROOT.service/start deleted to break ordering cycle starting with -.mount/start

With zfs-import-cache.service:

May 21 09:03:32 klaus systemd[1]: zfs-import.target: Found ordering cycle on zfs-import-scan.service/start
May 21 09:03:32 klaus systemd[1]: zfs-import.target: Found dependency on cryptsetup.target/start
May 21 09:03:32 klaus systemd[1]: zfs-import.target: Found dependency on systemd-ask-password-wall.path/start
May 21 09:03:32 klaus systemd[1]: zfs-import.target: Found dependency on -.mount/start
May 21 09:03:32 klaus systemd[1]: zfs-import.target: Found dependency on zfs-load-key-rootpool-ROOT.service/start
May 21 09:03:32 klaus systemd[1]: zfs-import.target: Found dependency on zfs-import.target/start
May 21 09:03:32 klaus systemd[1]: zfs-import.target: Job zfs-import-scan.service/start deleted to break ordering cycle starting with zfs-import.target/start
May 21 09:03:32 klaus systemd[1]: zfs-import-cache.service: Found ordering cycle on cryptsetup.target/start
May 21 09:03:32 klaus systemd[1]: zfs-import-cache.service: Found dependency on systemd-ask-password-wall.path/start
May 21 09:03:32 klaus systemd[1]: zfs-import-cache.service: Found dependency on -.mount/start
May 21 09:03:32 klaus systemd[1]: zfs-import-cache.service: Found dependency on zfs-load-key-rootpool-ROOT.service/start
May 21 09:03:32 klaus systemd[1]: zfs-import-cache.service: Found dependency on zfs-import.target/start
May 21 09:03:32 klaus systemd[1]: zfs-import-cache.service: Found dependency on zfs-import-cache.service/start
May 21 09:03:32 klaus systemd[1]: zfs-import-cache.service: Job cryptsetup.target/start deleted to break ordering cycle starting with zfs-import-cache.service/start
May 21 09:03:32 klaus systemd[1]: zfs-import-cache.service: Found ordering cycle on systemd-udev-settle.service/start
May 21 09:03:32 klaus systemd[1]: zfs-import-cache.service: Found dependency on systemd-udev-trigger.service/start
May 21 09:03:32 klaus systemd[1]: zfs-import-cache.service: Found dependency on systemd-journald.socket/start
May 21 09:03:32 klaus systemd[1]: zfs-import-cache.service: Found dependency on -.mount/start
May 21 09:03:32 klaus systemd[1]: zfs-import-cache.service: Found dependency on zfs-load-key-rootpool-ROOT.service/start
May 21 09:03:32 klaus systemd[1]: zfs-import-cache.service: Found dependency on zfs-import.target/start
May 21 09:03:32 klaus systemd[1]: zfs-import-cache.service: Found dependency on zfs-import-cache.service/start
May 21 09:03:32 klaus systemd[1]: zfs-import-cache.service: Job systemd-udev-settle.service/start deleted to break ordering cycle starting with zfs-import-cache.service/start
May 21 09:03:32 klaus systemd[1]: zfs-load-key-rootpool-ROOT.service: Found ordering cycle on systemd-journald.socket/start
May 21 09:03:32 klaus systemd[1]: zfs-load-key-rootpool-ROOT.service: Found dependency on -.mount/start
May 21 09:03:32 klaus systemd[1]: zfs-load-key-rootpool-ROOT.service: Found dependency on zfs-load-key-rootpool-ROOT.service/start
May 21 09:03:32 klaus systemd[1]: zfs-load-key-rootpool-ROOT.service: Job systemd-journald.socket/start deleted to break ordering cycle starting with zfs-load-key-rootpool-ROOT.service/start

How to get it fixed (not properly!)

Edit /usr/lib/systemd/system-generators/zfs-mount-generator
Remove the contents of the after and before variables on lines 113 and 114
Comment out line 207 (after="${after} ${keyloadunit}")

This removes the (actually rather useful) dependencies in order to avoid the ordering cycle with systemd-journald.socket.
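
To see exactly which After=/Before= lines the generator emitted (and therefore what this workaround strips out), the generated units can be dumped directly; a small diagnostic sketch, assuming the unit names from the logs above:

systemctl cat -- -.mount
systemctl cat zfs-load-key-rootpool-ROOT.service
ls /run/systemd/generator/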

@InsanePrawn
Contributor

InsanePrawn commented May 22, 2020

  • What type of key is used for the root dataset?
  • Can you narrow down which item exactly needs to be dropped from the dependency list in which unit file to make it boot?
    The candidate files should be -.mount and zfs-load-key-rootpool-ROOT.service.
    [hint: to override a generated unit completely, copy it from /run/systemd/generator/ to /etc/systemd/system/ and modify the copy; see the sketch just below this list. May need an initcpio rebuild?]
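
A minimal sketch of that override approach, assuming the unit name from the logs above:

# Units in /etc/systemd/system/ take precedence over generator output in
# /run/systemd/generator/, so the edited copy wins.
cp /run/systemd/generator/zfs-load-key-rootpool-ROOT.service /etc/systemd/system/
"$EDITOR" /etc/systemd/system/zfs-load-key-rootpool-ROOT.service
systemctl daemon-reload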

Arch specifics:

  • Which bootloader are you using?
  • Where is your /boot located?
  • Can you post the HOOKS= line from your mkinitcpio.conf?
My laptop currently boots arch configured something like this:

Let me preface this with "This feels like a terrible hack". There we go.

My /boot is on a LUKS partition, the unencrypted EFI partition holds only grub.efi, the rest is on encrypted ZFS. Don't ask. :)
I've had to drop the stock arch zfs[-sd] hooks as they wouldn't successfully load the keys, or sometimes not even import the pool. Instead I chose to use the systemd hook itself. So now we get this mess in mkinitcpio.conf. Sigh.

FILES=(/etc/keys/*.key /etc/zfs/zpool.cache /usr/lib/systemd/system/systemd-udev-settle.service /etc/udev/rules.d/* /etc/systemd/system/{zfs-load-key-zroot.service,sysroot.mount,initrd-switch-root.service.requires/*})

[...]

HOOKS=(base keyboard systemd sd-vconsole autodetect modconf block encrypt filesystems usr fsck)

and a manually created sysroot.mount (slightly modified -.mount)

systemctl cat sysroot.mount

# /etc/systemd/system/sysroot.mount
# Automatically generated by zfs-mount-generator

[Unit]
SourcePath=/etc/zfs/zfs-list.cache/zroot
Documentation=man:zfs-mount-generator(8)

Before=zfs-mount.service initrd-switch-root.service
After=zfs-import.target zfs-load-key-zroot.service
Wants=zfs-import.target zfs-load-key-zroot.service

[Mount]
Where=/sysroot
What=zroot/os/ROOT/default
Type=zfs
Options=defaults,noatime,dev,exec,rw,suid,nomand,zfsutil

aaaand a modified keyload service. The RequiresMountsFor= on the key file caused dependency cycles for me too, IIRC.

systemctl cat zfs-load-key-zroot.service

# /etc/systemd/system/zfs-load-key-zroot.service

[Unit]
Description=Load ZFS key for zroot
#SourcePath=/etc/zfs/zfs-list.cache/zroot
Documentation=man:zfs-mount-generator(8)
DefaultDependencies=no
Wants=systemd-udev-settle.service
After=systemd-udev-settle.service systemd-udev-trigger.service cryptsetup.target
Before=zfs-mount.service initrd-switch-root.service

#RequiresMountsFor='/etc/keys/zroot.key'
#RequiresMountsFor=

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/sh -c '/usr/bin/udevadm settle; /usr/bin/zpool import -N zroot; set -eu;keystatus="$$(/usr/bin/zfs get -H -o value keystatus "zroot")";[ "$$keystatus" = "unavailable" ] || exit 0;/usr/bin/zfs load-key "zroot"'

This is probably not everything someone would need to replicate my setup, but it might help you get on the right track.
I'm not sure myself whether all of these modifications are necessary or the best way to do things; the udev stuff is definitely a big question mark to me right now. But at this point I'm just glad I got a booting laptop and will leave testing the finer details to the brave and/or people with VMs. :)
Seeing as I'm not willing to touch this often, this might have gotten easier to do by now.
Note that my setup will ignore the root= kernel cmdline.

@deadbeef2000
Author

deadbeef2000 commented May 23, 2020

  • Key type: Passphrase, I'm getting prompted by your zfs-load-key-rootpool-ROOT.service unit
  • Changes:
    • zfs-load-key-rootpool-ROOT: Remove After=zfs-import.target
    • -.mount: Remove After=zfs-load-key-rootpool-ROOT

Arch specifics:

  • Bootloader: EFISTUB, created with efibootmgr
  • /boot location: FAT32 EFI partition, containing my kernel and initramfs
  • Initramfs hooks: base udev autodetect modconf block keyboard zfs filesystems

If you need any further information, feel free to ask; it's really nice to see you respond so quickly 👍

@InsanePrawn
Contributor

InsanePrawn commented May 23, 2020

* Changes:
  
  * zfs-load-key-rootpool-ROOT: Remove `After=zfs-import.target`
  * -.mount: Remove `After=zfs-load-key-rootpool-ROOT`

Is this just what worked for you or have you determined these to be the minimal necessary modifications? :)

* Key type: Passphrase, I'm getting prompted by your zfs-load-key-rootpool-ROOT.service unit

Are you sure it's that unit prompting you and not the mkinitcpio zfs hook?
What happens if you systemctl mask -.mount and/or the load-key service?

To my understanding, your boot goes something like this:
The kernel boots the initcpio; the zfs hook unlocks and mounts just(?) the root dataset to /sysroot, /new_root or wherever, still in the initramfs; it then pivot_roots there and starts systemd, which parses the mount and service units, etc.

It seems the biggest problem is that the pool is imported, unlocked and partially mounted before systemd gets involved, which it finds confusing; e.g. zfs-import.target wasn't reached although the pool is clearly imported, etc.
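
As an aside, the ordering edges systemd actually computed for these units can be inspected on the running system; a diagnostic sketch (unit names taken from the logs above):

systemctl show -p After,Before -- -.mount
systemctl show -p After,Before zfs-load-key-rootpool-ROOT.service
systemctl list-dependencies --after -- -.mount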

At this point I'm unsure whether a) this is a bug in your setup, OR b) a mismatch between the generator's assumptions and some setups like the zfs hook, OR c) we can sit back and blame the zfs initcpio hook.

c) could be solved by adding instructions to modify -.mount and the keyload .service to the Arch wiki, but I doubt it's anyone's preferred solution.

Ping @aerusso and @rlaager

@rlaager
Member

rlaager commented May 24, 2020

I'm not immediately seeing the root cause here and I unfortunately don't have a lot of time to dig into this. However, Ubuntu ran into an ordering cycle too and this patch of mine (which I have not yet submitted) fixes it:

--- a/etc/systemd/system-generators/zfs-mount-generator.in
+++ b/etc/systemd/system-generators/zfs-mount-generator.in
@@ -42,6 +42,8 @@
   do_fail "zero or three arguments required"
 fi
 
+pools=$(zpool list -H -o name)
+
 # For ZFSs marked "auto", a dependency is created for local-fs.target. To
 # avoid regressions, this dependency is reduced to "wants" rather than
 # "requires". **THIS MAY CHANGE**
@@ -62,6 +64,7 @@
   set -f
   set -- $1
   dataset="${1}"
+  pool="${dataset%%/*}"
   p_mountpoint="${2}"
   p_canmount="${3}"
   p_atime="${4}"
@@ -77,6 +80,18 @@
   # Minimal pre-requisites to mount a ZFS dataset
   wants="zfs-import.target"
 
+  # If the pool is already imported, zfs-import.target is not needed.  This
+  # avoids a dependency loop on root-on-ZFS systems:
+  # systemd-random-seed.service After (via RequiresMountsFor) var-lib.mount
+  # After zfs-import.target After zfs-import-{cache,scan}.service After
+  # cryptsetup.service After systemd-random-seed.service.
+  for p in $pools ; do
+    if [ "$p" = "$pool" ] ; then
+      wants=""
+      break
+    fi
+  done
+
   # Handle encryption
   if [ -n "${p_encroot}" ] &&
       [ "${p_encroot}" != "-" ] ; then
--- a/etc/systemd/system/zfs-mount.service.in
+++ b/etc/systemd/system/zfs-mount.service.in
@@ -6,7 +6,6 @@
 After=zfs-import.target
 After=systemd-remount-fs.service
 Before=local-fs.target
-Before=systemd-random-seed.service
 After=zfs-load-module.service
 ConditionPathExists=/sys/module/zfs

Does that patch have any effect on this situation?

@deadbeef2000
Author

@rlaager Your patch doesn't fix the problem for me, although it simplifies it. Now there's only systemd-journald.socket in an ordering cycle (take a look at the logs; systemd-journald is conflicting with itself :D)

[  +0.023297] systemd[1]: systemd-journald.socket: Found ordering cycle on -.mount/start
[  +0.000004] systemd[1]: systemd-journald.socket: Found dependency on zfs-load-key-rootpool-ROOT.service/start
[  +0.000003] systemd[1]: systemd-journald.socket: Found dependency on systemd-journald.socket/start
[  +0.000003] systemd[1]: systemd-journald.socket: Job zfs-load-key-rootpool-ROOT.service/start deleted to break ordering cycle starting with systemd-journald.socket/start
[  +0.000362] systemd[1]: systemd-journald.socket: Found ordering cycle on -.mount/start
[  +0.000003] systemd[1]: systemd-journald.socket: Found dependency on zfs-load-key-rootpool-ROOT.service/start
[  +0.000003] systemd[1]: systemd-journald.socket: Found dependency on systemd-journald.socket/start
[  +0.000002] systemd[1]: systemd-journald.socket: Job systemd-journald.socket/start deleted to break ordering cycle starting with systemd-journald.socket/start

@deadbeef2000
Author

@InsanePrawn

Is this just what worked for you or have you determined these to be the minimal necessary modifications? :)

Those are the changes I need to make to stop systemd from ending up in an ordering cycle.

Are you sure it's that unit prompting you and not the mkinitcpio zfs hook?
What happens if you systemctl mask -.mount and/or the load-key service?

I just masked both units and it didn't make any difference (I already applied my solution as mentioned in my first post).
But if I mask the units, given that they are generated at boot time, wouldn't I have to mask the generator script instead?
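
For what it's worth, a generator can be masked much like a unit: per systemd.generator(7), a same-named entry in /etc/systemd/system-generators/ overrides the packaged one, and a symlink to /dev/null masks it entirely. A sketch:

ln -s /dev/null /etc/systemd/system-generators/zfs-mount-generator
systemctl daemon-reload   # generators are re-run on daemon-reload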

To be fair, I'm human and I make mistakes, so it could be a bug with my setup and with all the machines I deployed this particular setup on... I'll let you know if I find anything else that seems a bit odd, and I can share my zfs/zpool properties, boot information, or whatever else you need; just keep asking, as I'd like this issue to be resolved ASAP as well :)

rlaager added a commit to rlaager/zfs that referenced this issue May 31, 2020
zfs-load-key-DATASET.service was gaining an
After=systemd-journald.socket due to its stdout/stderr going to the
journal (which is the default).  systemd-journald.socket has an After
(via RequiresMountsFor=/run/systemd/journal) on -.mount.  If the root
filesystem is encrypted, -.mount gets an After
zfs-load-key-DATASET.service.

By setting stdout and stderr to null on the key load services, we avoid
this loop.

Signed-off-by: Richard Laager <rlaager@wiktel.com>
Closes: openzfs#10356
@rlaager
Member

rlaager commented May 31, 2020

@Schebang I submitted that patch as #10388. I've pushed another commit there that I think should fix your loop. Can you test:

--- a/etc/systemd/system-generators/zfs-mount-generator.in
+++ b/etc/systemd/system-generators/zfs-mount-generator.in
@@ -212,6 +212,10 @@ ${keymountdep}
 [Service]
 Type=oneshot
 RemainAfterExit=yes
+# This avoids a dependency loop involving systemd-journald.socket if this
+# dataset is a parent of the root filesystem.
+StandardOutput=null
+StandardError=null
 ExecStart=${keyloadcmd}
 ExecStop=@sbindir@/zfs unload-key '${dataset}'"   > "${dest_norm}/${keyloadunit}"
     fi
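
For testing on an installed system without patching the generator, the same two lines can likely also be applied via a drop-in, since drop-ins in /etc apply to generator-produced units as well; a sketch, assuming the unit name from the logs in this issue:

# /etc/systemd/system/zfs-load-key-rootpool-ROOT.service.d/no-journal.conf
[Service]
StandardOutput=null
StandardError=null

Then run systemctl daemon-reload and reboot to check whether the journald cycle is gone.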

rlaager mentioned this issue May 31, 2020
@deadbeef2000
Author

@rlaager Sadly, I'm still running into dependency issues:

[  +0.401862] systemd[1]: zfs-import-cache.service: Found ordering cycle on cryptsetup.target/start
[  +0.000004] systemd[1]: zfs-import-cache.service: Found dependency on systemd-ask-password-wall.path/start
[  +0.000003] systemd[1]: zfs-import-cache.service: Found dependency on -.mount/start
[  +0.000003] systemd[1]: zfs-import-cache.service: Found dependency on zfs-import.target/start
[  +0.000002] systemd[1]: zfs-import-cache.service: Found dependency on zfs-import-cache.service/start
[  +0.000003] systemd[1]: zfs-import-cache.service: Job cryptsetup.target/start deleted to break ordering cycle starting with zfs-import-cache.service/start
[  +0.000315] systemd[1]: systemd-udev-settle.service: Found ordering cycle on systemd-udev-trigger.service/start
[  +0.000003] systemd[1]: systemd-udev-settle.service: Found dependency on systemd-udevd-control.socket/start
[  +0.000002] systemd[1]: systemd-udev-settle.service: Found dependency on -.mount/start
[  +0.000003] systemd[1]: systemd-udev-settle.service: Found dependency on zfs-import.target/start
[  +0.000002] systemd[1]: systemd-udev-settle.service: Found dependency on zfs-import-cache.service/start
[  +0.000003] systemd[1]: systemd-udev-settle.service: Found dependency on systemd-udev-settle.service/start
[  +0.000003] systemd[1]: systemd-udev-settle.service: Job systemd-udev-trigger.service/start deleted to break ordering cycle starting with systemd-udev-settle.service/start
[  +0.000251] systemd[1]: systemd-udev-settle.service: Found ordering cycle on systemd-journald.socket/start
[  +0.000002] systemd[1]: systemd-udev-settle.service: Found dependency on -.mount/start
[  +0.000003] systemd[1]: systemd-udev-settle.service: Found dependency on zfs-import.target/start
[  +0.000002] systemd[1]: systemd-udev-settle.service: Found dependency on zfs-import-cache.service/start
[  +0.000002] systemd[1]: systemd-udev-settle.service: Found dependency on systemd-udev-settle.service/start
[  +0.000003] systemd[1]: systemd-udev-settle.service: Job systemd-journald.socket/start deleted to break ordering cycle starting with systemd-udev-settle.service/start

@rlaager
Member

rlaager commented Jun 9, 2020

Is that with both changes applied? The first commit should eliminate -.mount's dependency on zfs-import.target.

@mtippmann

mtippmann commented Jun 9, 2020

I managed to resolve this using this dataset property (I'm on zfs-dkms-git; maybe this is not in 0.8.4):

org.openzfs.systemd:ignore=on|off

# zfs set org.openzfs.systemd:ignore=on rpool/encr/ROOT/arch  

# zfs get org.openzfs.systemd:ignore rpool/encr/ROOT/arch
NAME                  PROPERTY                    VALUE                       SOURCE
rpool/encr/ROOT/arch  org.openzfs.systemd:ignore  on                          local

See the manpage of zfs-mount-generator: https://github.com/openzfs/zfs/blob/master/man/man8/zfs-mount-generator.8.in

Not sure what a general solution would look like.

@deadbeef2000
Author

@rlaager I've applied both of your patches and I'm still getting an awful lot of dependency cycles; maybe I didn't set up my rootpool properly?
I'll attach the output of zpool get all rootpool and zfs get all rootpool/ROOT (apart from the guid properties); maybe it'll help :)

dmesg -H

[  +0.000003] systemd[1]: zfs-import-cache.service: Found dependency on systemd-ask-password-wall.path/start
[  +0.000003] systemd[1]: zfs-import-cache.service: Found dependency on -.mount/start
[  +0.000002] systemd[1]: zfs-import-cache.service: Found dependency on zfs-import.target/start
[  +0.000002] systemd[1]: zfs-import-cache.service: Found dependency on zfs-import-cache.service/start
[  +0.000003] systemd[1]: zfs-import-cache.service: Job cryptsetup.target/start deleted to break ordering cycle starting with zfs-import-cache.service/start
[  +0.000286] systemd[1]: systemd-udev-settle.service: Found ordering cycle on systemd-udev-trigger.service/start
[  +0.000003] systemd[1]: systemd-udev-settle.service: Found dependency on systemd-journald.socket/start
[  +0.000002] systemd[1]: systemd-udev-settle.service: Found dependency on -.mount/start
[  +0.000002] systemd[1]: systemd-udev-settle.service: Found dependency on zfs-import.target/start
[  +0.000002] systemd[1]: systemd-udev-settle.service: Found dependency on zfs-import-cache.service/start
[  +0.000002] systemd[1]: systemd-udev-settle.service: Found dependency on systemd-udev-settle.service/start
[  +0.000002] systemd[1]: systemd-udev-settle.service: Job systemd-udev-trigger.service/start deleted to break ordering cycle starting with systemd-udev-settle.service/start
[  +0.000217] systemd[1]: zfs-import-cache.service: Found ordering cycle on systemd-remount-fs.service/start
[  +0.000002] systemd[1]: zfs-import-cache.service: Found dependency on systemd-journald.socket/start
[  +0.000002] systemd[1]: zfs-import-cache.service: Found dependency on -.mount/start
[  +0.000003] systemd[1]: zfs-import-cache.service: Found dependency on zfs-import.target/start
[  +0.000002] systemd[1]: zfs-import-cache.service: Found dependency on zfs-import-cache.service/start
[  +0.000002] systemd[1]: zfs-import-cache.service: Job systemd-remount-fs.service/start deleted to break ordering cycle starting with zfs-import-cache.service/start
[  +0.000217] systemd[1]: zfs-import-cache.service: Found ordering cycle on systemd-journald.socket/start
[  +0.000003] systemd[1]: zfs-import-cache.service: Found dependency on -.mount/start
[  +0.000002] systemd[1]: zfs-import-cache.service: Found dependency on zfs-import.target/start
[  +0.000002] systemd[1]: zfs-import-cache.service: Found dependency on zfs-import-cache.service/start
[  +0.000002] systemd[1]: zfs-import-cache.service: Job systemd-journald.socket/start deleted to break ordering cycle starting with zfs-import-cache.service/start
[  +0.001494] systemd[1]: Created slice system-getty.slice.
[  +0.000319] systemd[1]: Created slice system-modprobe.slice.
[  +0.000272] systemd[1]: Created slice system-systemd\x2dfsck.slice.
[  +0.000333] systemd[1]: Created slice User and Session Slice.
[  +0.000139] systemd[1]: Reached target Slices.
[  +0.001748] systemd[1]: Listening on Journal Audit Socket.
[  +0.000931] systemd[1]: Listening on udev Kernel Socket.
[  +0.000002] systemd[1]: zfs-import-cache.service: Found dependency on zfs-import-cache.service/start
[  +0.000003] systemd[1]: zfs-import-cache.service: Job cryptsetup.target/start deleted to break ordering cycle starting with zfs-import-cache.service/start
[  +0.000286] systemd[1]: systemd-udev-settle.service: Found ordering cycle on systemd-udev-trigger.service/start
[  +0.000003] systemd[1]: systemd-udev-settle.service: Found dependency on systemd-journald.socket/start
[  +0.000002] systemd[1]: systemd-udev-settle.service: Found dependency on -.mount/start
[  +0.000002] systemd[1]: systemd-udev-settle.service: Found dependency on zfs-import.target/start
[  +0.000002] systemd[1]: systemd-udev-settle.service: Found dependency on zfs-import-cache.service/start
[  +0.000002] systemd[1]: systemd-udev-settle.service: Found dependency on systemd-udev-settle.service/start
[  +0.000002] systemd[1]: systemd-udev-settle.service: Job systemd-udev-trigger.service/start deleted to break ordering cycle starting with systemd-udev-settle.service/start
[  +0.000217] systemd[1]: zfs-import-cache.service: Found ordering cycle on systemd-remount-fs.service/start
[  +0.000002] systemd[1]: zfs-import-cache.service: Found dependency on systemd-journald.socket/start
[  +0.000002] systemd[1]: zfs-import-cache.service: Found dependency on -.mount/start
[  +0.000003] systemd[1]: zfs-import-cache.service: Found dependency on zfs-import.target/start
[  +0.000002] systemd[1]: zfs-import-cache.service: Found dependency on zfs-import-cache.service/start
[  +0.000002] systemd[1]: zfs-import-cache.service: Job systemd-remount-fs.service/start deleted to break ordering cycle starting with zfs-import-cache.service/start
[  +0.000217] systemd[1]: zfs-import-cache.service: Found ordering cycle on systemd-journald.socket/start
[  +0.000003] systemd[1]: zfs-import-cache.service: Found dependency on -.mount/start
[  +0.000002] systemd[1]: zfs-import-cache.service: Found dependency on zfs-import.target/start
[  +0.000002] systemd[1]: zfs-import-cache.service: Found dependency on zfs-import-cache.service/start
[  +0.000002] systemd[1]: zfs-import-cache.service: Job systemd-journald.socket/start deleted to break ordering cycle starting with zfs-import-cache.service/start

zpool get all rootpool

rootpool  capacity                       40%                            -
rootpool  altroot                        -                              default
rootpool  health                         ONLINE                         -
rootpool  version                        -                              default
rootpool  bootfs                         rootpool/ROOT                  local
rootpool  delegation                     on                             default
rootpool  autoreplace                    off                            default
rootpool  cachefile                      -                              default
rootpool  failmode                       wait                           default
rootpool  listsnapshots                  off                            default
rootpool  autoexpand                     off                            default
rootpool  dedupditto                     0                              default
rootpool  dedupratio                     1.00x                          -
rootpool  free                           66.0G                          -
rootpool  allocated                      45.0G                          -
rootpool  readonly                       off                            -
rootpool  ashift                         12                             local
rootpool  comment                        -                              default
rootpool  expandsize                     -                              -
rootpool  freeing                        0                              -
rootpool  fragmentation                  29%                            -
rootpool  leaked                         0                              -
rootpool  multihost                      off                            default
rootpool  checkpoint                     -                              -
rootpool  autotrim                       off                            default
rootpool  feature@async_destroy          enabled                        local
rootpool  feature@empty_bpobj            active                         local
rootpool  feature@lz4_compress           active                         local
rootpool  feature@multi_vdev_crash_dump  enabled                        local
rootpool  feature@spacemap_histogram     active                         local
rootpool  feature@enabled_txg            active                         local
rootpool  feature@hole_birth             active                         local
rootpool  feature@extensible_dataset     active                         local
rootpool  feature@embedded_data          active                         local
rootpool  feature@bookmarks              enabled                        local
rootpool  feature@filesystem_limits      enabled                        local
rootpool  feature@large_blocks           enabled                        local
rootpool  feature@large_dnode            enabled                        local
rootpool  feature@sha512                 enabled                        local
rootpool  feature@skein                  enabled                        local
rootpool  feature@edonr                  enabled                        local
rootpool  feature@userobj_accounting     active                         local
rootpool  feature@encryption             active                         local
rootpool  feature@project_quota          active                         local
rootpool  feature@device_removal         enabled                        local
rootpool  feature@obsolete_counts        enabled                        local
rootpool  feature@zpool_checkpoint       enabled                        local
rootpool  feature@spacemap_v2            active                         local
rootpool  feature@allocation_classes     enabled                        local
rootpool  feature@resilver_defer         enabled                        local
rootpool  feature@bookmark_v2            enabled                        local

zfs get all rootpool/ROOT

rootpool/ROOT  type                   filesystem             -
rootpool/ROOT  creation               Tue Aug 20 14:11 2019  -
rootpool/ROOT  used                   52.6G                  -
rootpool/ROOT  available              54.8G                  -
rootpool/ROOT  referenced             24.9G                  -
rootpool/ROOT  compressratio          1.00x                  -
rootpool/ROOT  mounted                yes                    -
rootpool/ROOT  quota                  none                   default
rootpool/ROOT  reservation            none                   default
rootpool/ROOT  recordsize             128K                   default
rootpool/ROOT  mountpoint             /                      local
rootpool/ROOT  sharenfs               off                    default
rootpool/ROOT  checksum               on                     default
rootpool/ROOT  compression            off                    default
rootpool/ROOT  atime                  on                     default
rootpool/ROOT  devices                on                     default
rootpool/ROOT  exec                   on                     default
rootpool/ROOT  setuid                 on                     default
rootpool/ROOT  readonly               off                    default
rootpool/ROOT  zoned                  off                    default
rootpool/ROOT  snapdir                hidden                 default
rootpool/ROOT  aclinherit             restricted             default
rootpool/ROOT  createtxg              44                     -
rootpool/ROOT  canmount               on                     default
rootpool/ROOT  xattr                  on                     default
rootpool/ROOT  copies                 1                      default
rootpool/ROOT  version                5                      -
rootpool/ROOT  utf8only               off                    -
rootpool/ROOT  normalization          none                   -
rootpool/ROOT  casesensitivity        sensitive              -
rootpool/ROOT  vscan                  off                    default
rootpool/ROOT  nbmand                 off                    default
rootpool/ROOT  sharesmb               off                    default
rootpool/ROOT  refquota               none                   default
rootpool/ROOT  refreservation         none                   default
rootpool/ROOT  primarycache           all                    default
rootpool/ROOT  secondarycache         all                    default
rootpool/ROOT  usedbysnapshots        3.30G                  -
rootpool/ROOT  usedbydataset          24.9G                  -
rootpool/ROOT  usedbychildren         24.5G                  -
rootpool/ROOT  usedbyrefreservation   0B                     -
rootpool/ROOT  logbias                latency                default
rootpool/ROOT  objsetid               515                    -
rootpool/ROOT  dedup                  off                    default
rootpool/ROOT  mlslabel               none                   default
rootpool/ROOT  sync                   standard               default
rootpool/ROOT  dnodesize              legacy                 default
rootpool/ROOT  refcompressratio       1.00x                  -
rootpool/ROOT  written                37.6M                  -
rootpool/ROOT  logicalused            42.8G                  -
rootpool/ROOT  logicalreferenced      23.9G                  -
rootpool/ROOT  volmode                default                default
rootpool/ROOT  filesystem_limit       none                   default
rootpool/ROOT  snapshot_limit         none                   default
rootpool/ROOT  filesystem_count       none                   default
rootpool/ROOT  snapshot_count         none                   default
rootpool/ROOT  snapdev                hidden                 default
rootpool/ROOT  acltype                off                    default
rootpool/ROOT  context                none                   default
rootpool/ROOT  fscontext              none                   default
rootpool/ROOT  defcontext             none                   default
rootpool/ROOT  rootcontext            none                   default
rootpool/ROOT  relatime               on                     temporary
rootpool/ROOT  redundant_metadata     all                    default
rootpool/ROOT  overlay                off                    default
rootpool/ROOT  encryption             aes-256-ccm            -
rootpool/ROOT  keylocation            prompt                 local
rootpool/ROOT  keyformat              passphrase             -
rootpool/ROOT  pbkdf2iters            342K                   -
rootpool/ROOT  encryptionroot         rootpool/ROOT          -
rootpool/ROOT  keystatus              available              -
rootpool/ROOT  special_small_blocks   0                      default
rootpool/ROOT  com.sun:auto-snapshot  true                   local

@deadbeef2000
Author

Alright, I've got myself a pretty NVMe SSD and put a new pool on it. This time I followed all the guides again and didn't run into any problems with the zfs-mount-generator. No idea how it's all working out now, but I can't seem to find any issues anymore. I'll attach the structure of my zpool; after all, that's most likely what makes everything run smoothly now!

zroot                   21.2G   207G      192K  none
zroot/ROOT              2.89G   207G      192K  none
zroot/ROOT/default      2.89G   207G     2.89G  /
zroot/home              1.26G   207G     1.26G  /home
zroot/home/root          336K   207G      336K  /root
zroot/swap              17.0G   224G       92K  -
zroot/tmp               4.86M   207G     4.86M  /tmp
zroot/usr               2.85M   207G      192K  /usr
zroot/usr/local         2.66M   207G     2.66M  /usr/local
zroot/var               5.48M   207G      192K  /var
zroot/var/lib            192K   207G      192K  /var/lib
zroot/var/log           4.88M   207G     4.88M  /var/log
zroot/var/tmp            224K   207G      224K  /var/tmp

If there's anything else you need, don't hesitate to just ask me :)

rlaager added a commit to rlaager/zfs that referenced this issue Jul 30, 2020
rlaager added a commit to rlaager/zfs that referenced this issue Aug 1, 2020
rlaager added a commit to rlaager/zfs that referenced this issue Aug 1, 2020
rlaager added a commit to rlaager/zfs that referenced this issue Aug 1, 2020
rlaager added a commit to rlaager/zfs that referenced this issue Aug 1, 2020
rlaager added a commit to rlaager/zfs that referenced this issue Aug 2, 2020
rlaager added a commit to rlaager/zfs that referenced this issue Aug 18, 2020
behlendorf pushed a commit that referenced this issue Aug 30, 2020
zfs-load-key-DATASET.service was gaining an
After=systemd-journald.socket due to its stdout/stderr going to the
journal (which is the default).  systemd-journald.socket has an After
(via RequiresMountsFor=/run/systemd/journal) on -.mount.  If the root
filesystem is encrypted, -.mount gets an After
zfs-load-key-DATASET.service.

By setting stdout and stderr to null on the key load services, we avoid
this loop.

Reviewed-by: Antonio Russo <antonio.e.russo@gmail.com>
Reviewed-by: InsanePrawn <insane.prawny@gmail.com>
Signed-off-by: Richard Laager <rlaager@wiktel.com>
Closes #10356
Closes #10388
tonyhutter pushed a commit to tonyhutter/zfs that referenced this issue Sep 22, 2020
jsai20 pushed a commit to jsai20/zfs that referenced this issue Mar 30, 2021
sempervictus pushed a commit to sempervictus/zfs that referenced this issue May 31, 2021