
coredumps with zfs send/receive with 2.1.7 #14281

Closed
mabod opened this issue Dec 12, 2022 · 1 comment
Labels
Type: Defect Incorrect behavior (e.g. crash, hang)

Comments


mabod commented Dec 12, 2022

I am experiencing coredumps when doing send/receive with the latest 2.1.7 release. This is on EndeavourOS, with the zfs-dkms/zfs-utils PKGBUILD files modified to pull version 2.1.7.

I initially thought that #14252 was also my issue. The fix for that issue is reverting patch c8d2ab0, but that does not solve my problem.

I am getting coredumps when doing send/receive between datasets on two different pools. I am using syncoid for that:

    syncoid --sendoptions="L" --mbuffer-size=512M --no-sync-snap zHome/home zstore/data/BACKUP/rakete_home/home
    NEWEST SNAPSHOT: 2022-12-03--23:23
    Sending incremental zHome/home@2022-11-29--08:05 ... 2022-12-03--23:23 (~ 34 KB):
0,00 B 0:00:00 [0,00 B/s] [>                                                                   ]  0%            
    cannot receive: failed to read from stream
CRITICAL ERROR:  zfs send -L  -I 'zHome/home'@'2022-11-29--08:05' 'zHome/home'@'2022-12-03--23:23' | mbuffer  -q -s 128k -m 512M 2>/dev/null | pv -p -t -e -r -b -s 35264 |  zfs receive  -s -F 'zstore/data/BACKUP/rakete_home/home' 2>&1 failed: 256 at /usr/bin/syncoid line 817.
# zfs get recordsize,compression,encryption zHome/home zstore/data/BACKUP/rakete_home/home
NAME                                 PROPERTY     VALUE           SOURCE
zHome/home                           recordsize   128K            inherited from zHome
zHome/home                           compression  lz4             inherited from zHome
zHome/home                           encryption   off             default
zstore/data/BACKUP/rakete_home/home  recordsize   1M              inherited from zstore
zstore/data/BACKUP/rakete_home/home  compression  zstd            inherited from zstore/data/BACKUP
zstore/data/BACKUP/rakete_home/home  encryption   aes-256-gcm     -

I am not able to do a git bisect because unloading the zfs module does not work: although I exported all pools, it keeps telling me the module is busy.

I am also not able to get a full coredump: "Resource limits disable core dumping for process 9485 (zfs)." I do not know why that is; gnome-shell dumped core yesterday and the day before, and I do have a coredump file for that.
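The "Resource limits disable core dumping" message usually means the core-size resource limit (RLIMIT_CORE) is 0 for the crashing process. A minimal sketch of checking and raising it, assuming a bash-compatible shell; the limit is inherited by child processes, so it must be raised in the shell that launches syncoid:

```shell
# Show the current core-size limit for this shell; 0 disables core dumps
ulimit -c

# Raise it for this shell and everything it spawns,
# then re-run the failing syncoid command from the same shell
ulimit -c unlimited

# Verify the new limit
ulimit -c
```

Note that `ulimit -c unlimited` may fail for an unprivileged user if the hard limit is lower; in that case the hard limit has to be raised in /etc/security/limits.conf or by root.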

Dez 12 11:25:14 rakete kernel: traps: zfs[9485] general protection fault ip:7f50bc70dbd0 sp:7ffce1cd88c0 error:0 in libzfs.so.4.1.0[7f50bc6d2000+43000]
Dez 12 11:25:15 rakete systemd[1]: Created slice Slice /system/systemd-coredump.
Dez 12 11:25:15 rakete systemd[1]: Started Process Core Dump (PID 9489/UID 0).
Dez 12 11:25:15 rakete systemd-coredump[9490]: Resource limits disable core dumping for process 9485 (zfs).
Dez 12 11:25:15 rakete systemd-coredump[9490]: [🡕] Process 9485 (zfs) of user 0 dumped core.
Dez 12 11:25:15 rakete systemd[1]: systemd-coredump@0-9489-0.service: Deactivated successfully.
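If zfs is spawned from a context where a per-shell `ulimit -c` does not apply (e.g. a cron or systemd unit running syncoid), the default limit can instead be raised system-wide through a systemd drop-in. A sketch, assuming systemd-coredump is in use; the drop-in filename is arbitrary:

```ini
# /etc/systemd/system.conf.d/10-core.conf  (hypothetical drop-in path)
# Raise the default RLIMIT_CORE for all services so systemd-coredump
# can capture a full dump when zfs crashes
[Manager]
DefaultLimitCORE=infinity
```

After `systemctl daemon-reexec` (and restarting the affected services), a subsequent crash should appear in `coredumpctl list zfs`, from where the dump can be inspected with gdb.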

Any help is appreciated.

@mabod mabod added the Type: Defect Incorrect behavior (e.g. crash, hang) label Dec 12, 2022

mabod commented Dec 12, 2022

I guess I made a mistake when reverting patch c8d2ab0.

In the meantime I have done more trials, and now it is working. The last few dozen send/receives ran without any issue: no locks, no core dumps.

I am closing this issue.

@mabod mabod closed this as completed Dec 12, 2022