Wrong order with systemd-remount-fs.service in zfs-mount.service #205
+1
I just ran into this issue after upgrading my Debian Sid system, and I don't even have
This prevents ZFS filesystems from being mounted on startup. The workaround mentioned above (removing Before=systemd-remount-fs.service) works around it.
If someone has a portable fix for this, it could be applied to the main ZoL tree.
This only seems to affect pkg-zfs, since the regression is introduced by this Debian-specific patch:
Is there any problem with reverting this change?
Is this still a problem?
Sure.
I'm seeing this as well, on Debian jessie (8.5). Sticking an updated service file in /etc/systemd/system overrides the package-installed version, e.g.:
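A minimal sketch of that override, assuming the packaged unit lives at /lib/systemd/system/zfs-mount.service (adjust the path for your distribution):
cp /lib/systemd/system/zfs-mount.service /etc/systemd/system/zfs-mount.service
# drop the problematic ordering from the local copy
sed -i '/^Before=systemd-remount-fs.service/d' /etc/systemd/system/zfs-mount.service
systemctl daemon-reload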
Sabayon 15.0 user here; this boot-order issue bit me hard on the last upgrade. With the wrong boot order, an empty /var/{lib,cache} gets populated during boot, and ZFS then tries to mount on top of the now-filled directories. End result != awesome: I had to boot into a live CD and clear out the directories after modifying the service file per the @ggzengel solution. I also took @ianabc's suggestion and copied an altered service file into my /etc tree to keep this from happening in the future. Now I just have to remember that I did that.
Still a problem on Pop!_OS 19.10: install ZFS, reboot, and I end up with no ZFS pool (and a missing /home).
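In that state, a quick way to check whether it is the same ordering problem and to get the pool back for the current boot (unit names as used elsewhere in this thread):
systemctl --failed
systemctl status zfs-import-scan.service zfs-mount.service
zpool import        # list pools that are visible but not imported
zpool import -a     # import them manually as a stopgap
zfs mount -a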
I just ran into the same problem on a fresh Ubuntu 20.04 installation with a random-key swap partition. After boot either the pool or my swap was missing, about 50/50 each. I looked around, came upon this thread, and came up with the following workaround that won't need patching after each update. It's not the most elegant, but it basically works around the problem: I removed the random-key swap from /etc/fstab and /etc/crypttab and set it up manually right after each boot via a script run from /etc/rc.local.
Steps
- Remove swap from /etc/fstab and /etc/crypttab and check that the machine comes up without swap and with the pool mounted.
- Enable rc.local (it's not there by default; you can also use a systemd service file, but I went with rc.local here). To enable rc.local, create this file with root as owner and make it executable:
/etc/rc.local
#!/bin/sh -e
exit 0
- Reboot and check that it worked with systemctl status rc-local.service (if it's active you can now run stuff on boot via this script).
- Create a small script to mount your random-key swap, for example in /usr/local/sbin:
make_cswap
#!/bin/bash
# set umask for keyfile
umask 0377
# generate keyfile
dd if=/dev/urandom of=/run/cswap.k bs=1k count=512 >/dev/null 2>&1
# setup cryptdevice
cryptsetup --cipher aes-xts-plain64 --key-size 512 --key-file /run/cswap.k open --type plain <SWAP_DEVICE/DISK> scrypt
# mkswap / swapon
mkswap /dev/mapper/scrypt >/dev/null 2>&1
swapon /dev/mapper/scrypt
# wipe keyfile
shred -u /run/cswap.k
- Obviously change the mapper names and device according to your needs (<SWAP_DEVICE/DISK> must be your desired swap partition/LV).
- MAKE SURE YOU SELECT THE CORRECT PARTITION/LV HERE, THE COMMANDS WILL EXECUTE AS ROOT!
- The script generates random data (probably too much, but ah well); the key goes into a tmpfs filesystem, so it is only in RAM as there is no swap yet; then we create a cryptdevice with this random data as the keyfile, create and activate the swap device, and lastly wipe the key.
- Test this script before you add it to rc.local.
- Add the script to rc.local.
- Reboot and check.
This works great for me right now and maybe this can help somebody else to work around this bug/situation.
Out of curiosity, did you test your solution across suspends? In the past I've seen issues with corruption of encrypted swaps when memory is written out during a suspend.
@rmackinnon To me it sounds like you are describing a general issue with encrypted swap. This swap space is no different from any other swap space; I only took away the automated creation/mount and do it myself right at boot. So the availability of swap is delayed by a few seconds, and I assume you don't need swap in the first seconds after boot, so the delayed creation isn't a problem. But if there is a general problem with suspend and encrypted swap, this workaround is probably no different.
@mind-code I want to one-up your script because I like it and want to make it mine 😄
/etc/systemd/system/zfs-swap.service
/root/mount-zfs-swap.sh
#!/bin/bash
set -e

KEY_SIZE=256
KEY_FILE=/tmp/swapkey
CYPHER=aes-xts-plain64
SWAP_DEVICE=<SWAP_DEVICE/DISK>  # should be something like /dev/zvol/<pool>/<vol>
CRYPT_NAME=swap_crypt

if [ "$1" == "start" ]; then
    # generate a throwaway key with restrictive permissions
    umask 0377
    dd if=/dev/urandom of=$KEY_FILE bs=1k count=$KEY_SIZE > /dev/null 2>&1
    # open a plain dm-crypt mapping over the swap device
    cryptsetup --cipher $CYPHER --key-size $KEY_SIZE --key-file $KEY_FILE open --type plain $SWAP_DEVICE $CRYPT_NAME
    # format and activate swap on the mapping
    mkswap /dev/mapper/$CRYPT_NAME > /dev/null 2>&1
    swapon /dev/mapper/$CRYPT_NAME
    # wipe the key file; the open mapping keeps its key in the kernel
    shred -u $KEY_FILE
elif [ "$1" == "stop" ]; then
    swapoff /dev/mapper/$CRYPT_NAME
    cryptsetup close $CRYPT_NAME
else
    echo "Invalid command: $1"
fi
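The zfs-swap.service unit itself isn't shown above, so here is a minimal sketch of what a matching oneshot unit could look like, assuming it simply wraps the script with start/stop; the ordering and install target are assumptions, not the author's original unit:
[Unit]
Description=Encrypted swap on a ZFS zvol (random key)
# assumption: the backing zvol must be available before the script runs
Requires=zfs-mount.service
After=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/root/mount-zfs-swap.sh start
ExecStop=/root/mount-zfs-swap.sh stop

[Install]
WantedBy=multi-user.target
A unit along these lines would be enabled with systemctl daemon-reload followed by systemctl enable --now zfs-swap.service.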
I found an odd little workaround. I'm encrypting swap after encrypting all of our ZFS pools with a remote key that I have to paste in anyway, so mounting on boot is not an issue. What I found is that the dependency loop seems to go away if I set canmount=off for the root filesystem of my ZFS pool. YMMV.
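For reference, toggling that property is a one-liner (the pool name is a placeholder):
zfs set canmount=off <pool>
zfs get canmount <pool>    # verify the current value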
Since I didn't find any commit for this, is this a special undocumented feature of @FransUrbo's?
In zfs-mount.service there is
Before=systemd-remount-fs.service
On boot I get:
It couldn't work because
zfs-import-scan.service -> cryptsetup.target -> systemd-cryptsetup@VG1\x2dswap_crypt.service -> systemd-random-seed.service -> systemd-remount-fs.service
which means:
zfs-import-scan.service -> systemd-remount-fs.service
Now you say:
systemd-remount-fs.service -> zfs-mount.service -> zfs-import-scan.service
which means:
systemd-remount-fs.service -> zfs-import-scan.service
That's the opposite boot order.
After removing Before=systemd-remount-fs.service, all is working as before.
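For anyone who wants to verify the ordering on their own system, the stock systemd tools can show both the declared dependencies and any resulting cycle (unit names as above):
systemctl show -p Before,After zfs-mount.service
systemctl show -p Before,After zfs-import-scan.service
journalctl -b | grep -i 'ordering cycle'
systemd-analyze critical-chain zfs-mount.service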