
Crash with 0.6.5.4 #4295

Closed
ronnyegner opened this issue Jan 31, 2016 · 5 comments
Hi,

I recently updated to ZFS/SPL 0.6.5.4. With that version I am seeing frequent and reproducible crashes under moderate load on my system (e.g. copying a few files with a single cp process from one cryptsetup file system to another cryptsetup file system).
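Roughly, the setup involved looks like this; a minimal sketch, where the device, volume, and mount names are illustrative rather than my exact configuration (the stack traces below show dm-crypt submitting writes into zvol_request, consistent with LUKS on top of zvols):

```shell
# Illustrative only: LUKS containers opened on top of zvols, mounted as ext4
cryptsetup open /dev/zvol/pool1/src-vol crypt-src
cryptsetup open /dev/zvol/pool2/dst-vol crypt-dst
mount /dev/mapper/crypt-src /mnt/src
mount /dev/mapper/crypt-dst /mnt/dst

# A single bulk copy between the two is enough to trigger the hang on 0.6.5.4
cp -a /mnt/src/files /mnt/dst/
```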

When this happens, top shows:

top - 18:01:59 up 14 min,  6 users,  load average: 20.82, 10.91, 5.24
Tasks: 1826 total,   7 running, 1819 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.8 us, 55.1 sy,  0.0 ni, 29.9 id, 14.2 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:  65926580 total, 64492332 used,  1434248 free,    97600 buffers
KiB Swap:  8388604 total,        0 used,  8388604 free. 42345516 cached Mem

 1213 root       0 -20       0      0      0 D  22.9  0.0   0:04.49 spl_system_task
12596 root      20   0       0      0      0 R  62.2  0.0   0:15.25 txg_sync
12595 root      20   0       0      0      0 S  59.9  0.0   0:13.08 txg_quiesce
 7779 root      20   0       0      0      0 R  57.9  0.0   0:13.66 txg_sync
12091 root       0 -20       0      0      0 S  23.4  0.0   0:07.46 z_null_iss
 1215 root       0 -20       0      0      0 D  22.7  0.0   0:06.33 spl_system_task
 1241 root       0 -20       0      0      0 R  22.7  0.0   0:05.78 spl_system_task
 1252 root       0 -20       0      0      0 D  22.7  0.0   0:05.76 spl_system_task
 1213 root       0 -20       0      0      0 D  22.4  0.0   0:05.84 spl_system_task
 1218 root       0 -20       0      0      0 D  22.4  0.0   0:06.12 spl_system_task
 1219 root       0 -20       0      0      0 D  22.4  0.0   0:06.31 spl_system_task
 1226 root       0 -20       0      0      0 D  22.4  0.0   0:06.21 spl_system_task
 1229 root       0 -20       0      0      0 D  22.4  0.0   0:06.47 spl_system_task
 1237 root       0 -20       0      0      0 R  22.4  0.0   0:05.70 spl_system_task
 1243 root       0 -20       0      0      0 D  22.4  0.0   0:06.52 spl_system_task
 1245 root       0 -20       0      0      0 D  22.4  0.0   0:05.64 spl_system_task
 1262 root       0 -20       0      0      0 D  22.4  0.0   0:05.57 spl_system_task
 1263 root       0 -20       0      0      0 R  22.4  0.0   0:05.69 spl_system_task

Note that the system here is freshly booted (14 minutes uptime).

I/O then stops completely after about 15 seconds and never resumes. The kernel logs the following stack traces:

Jan 31 17:15:28 homenas kernel: [ 1680.632131] INFO: task txg_sync:17878 blocked for more than 120 seconds.
Jan 31 17:15:28 homenas kernel: [ 1680.632132]       Tainted: PF          O 3.14.43-031443-generic #201505171835
Jan 31 17:15:28 homenas kernel: [ 1680.632133] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 31 17:15:28 homenas kernel: [ 1680.632134] txg_sync        D 0000000000000000     0 17878      2 0x00000000
Jan 31 17:15:28 homenas kernel: [ 1680.632137]  ffff880f45db3ba0 0000000000000002 ffff880fddf1b0b0 ffff880f45db3fd8
Jan 31 17:15:28 homenas kernel: [ 1680.632139]  0000000000013200 0000000000013200 ffff880fe852cb30 ffff880f3ddf8000
Jan 31 17:15:28 homenas kernel: [ 1680.632142]  ffff880f45db3ba0 ffff88103fcf3b08 ffff880f3ddf8000 ffff88075abaa208
Jan 31 17:15:28 homenas kernel: [ 1680.632145] Call Trace:
Jan 31 17:15:28 homenas kernel: [ 1680.632147]  [<ffffffff8175fe79>] schedule+0x29/0x70
Jan 31 17:15:28 homenas kernel: [ 1680.632149]  [<ffffffff8175ff4f>] io_schedule+0x8f/0xd0
Jan 31 17:15:28 homenas kernel: [ 1680.632154]  [<ffffffffa091798f>] cv_wait_common+0x9f/0x120 [spl]
Jan 31 17:15:28 homenas kernel: [ 1680.632156]  [<ffffffff810b4f90>] ? __wake_up_sync+0x20/0x20
Jan 31 17:15:28 homenas kernel: [ 1680.632161]  [<ffffffffa0917a68>] __cv_wait_io+0x18/0x20 [spl]
Jan 31 17:15:28 homenas kernel: [ 1680.632181]  [<ffffffffa09db533>] zio_wait+0x123/0x210 [zfs]
Jan 31 17:15:28 homenas kernel: [ 1680.632197]  [<ffffffffa0966151>] dsl_pool_sync+0xb1/0x470 [zfs]
Jan 31 17:15:28 homenas kernel: [ 1680.632216]  [<ffffffffa09809a5>] spa_sync+0x365/0xb20 [zfs]
Jan 31 17:15:28 homenas kernel: [ 1680.632218]  [<ffffffff810b4868>] ? __wake_up_common+0x58/0x90
Jan 31 17:15:28 homenas kernel: [ 1680.632238]  [<ffffffffa0992bd9>] txg_sync_thread+0x3b9/0x620 [zfs]
Jan 31 17:15:28 homenas kernel: [ 1680.632256]  [<ffffffffa0992820>] ? txg_quiesce_thread+0x3f0/0x3f0 [zfs]
Jan 31 17:15:28 homenas kernel: [ 1680.632261]  [<ffffffffa0912de1>] thread_generic_wrapper+0x71/0x80 [spl]
Jan 31 17:15:28 homenas kernel: [ 1680.632264]  [<ffffffffa0912d70>] ? __thread_exit+0x20/0x20 [spl]
Jan 31 17:15:28 homenas kernel: [ 1680.632267]  [<ffffffff8108fec9>] kthread+0xc9/0xe0
Jan 31 17:15:28 homenas kernel: [ 1680.632269]  [<ffffffff8108fe00>] ? flush_kthread_worker+0xb0/0xb0
Jan 31 17:15:28 homenas kernel: [ 1680.632272]  [<ffffffff8176cfd8>] ret_from_fork+0x58/0x90
Jan 31 17:15:28 homenas kernel: [ 1680.632274]  [<ffffffff8108fe00>] ? flush_kthread_worker+0xb0/0xb0

Jan 31 17:15:28 homenas kernel: [ 1680.631427] INFO: task kworker/4:0:53 blocked for more than 120 seconds.
Jan 31 17:15:28 homenas kernel: [ 1680.631431]       Tainted: PF          O 3.14.43-031443-generic #201505171835
Jan 31 17:15:28 homenas kernel: [ 1680.631432] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 31 17:15:28 homenas kernel: [ 1680.631433] kworker/4:0     D 0000000000000001     0    53      2 0x00000000
Jan 31 17:15:28 homenas kernel: [ 1680.631445] Workqueue: kcryptd kcryptd_crypt [dm_crypt]
Jan 31 17:15:28 homenas kernel: [ 1680.631446]  ffff880fe843dac0 0000000000000002 0000000000000000 ffff880fe843dfd8
Jan 31 17:15:28 homenas kernel: [ 1680.631449]  0000000000013200 0000000000013200 ffff880d2df8e440 ffff880fe840cb30
Jan 31 17:15:28 homenas kernel: [ 1680.631450]  ffff880fe843dad0 ffff880fc8e42b68 ffff880fc8e42a20 ffff880fc8e42b70
Jan 31 17:15:28 homenas kernel: [ 1680.631452] Call Trace:
Jan 31 17:15:28 homenas kernel: [ 1680.631457]  [<ffffffff8175fe79>] schedule+0x29/0x70
Jan 31 17:15:28 homenas kernel: [ 1680.631463]  [<ffffffffa09179d5>] cv_wait_common+0xe5/0x120 [spl]
Jan 31 17:15:28 homenas kernel: [ 1680.631466]  [<ffffffff810b4f90>] ? __wake_up_sync+0x20/0x20
Jan 31 17:15:28 homenas kernel: [ 1680.631470]  [<ffffffffa0917a25>] __cv_wait+0x15/0x20 [spl]
Jan 31 17:15:28 homenas kernel: [ 1680.631498]  [<ffffffffa0991cd3>] txg_wait_open+0xc3/0x110 [zfs]
Jan 31 17:15:28 homenas kernel: [ 1680.631514]  [<ffffffffa094d460>] dmu_tx_wait+0x380/0x390 [zfs]
Jan 31 17:15:28 homenas kernel: [ 1680.631516]  [<ffffffff81762086>] ? mutex_lock+0x16/0x37
Jan 31 17:15:28 homenas kernel: [ 1680.631529]  [<ffffffffa094d50a>] dmu_tx_assign+0x9a/0x510 [zfs]
Jan 31 17:15:28 homenas kernel: [ 1680.631555]  [<ffffffffa09e60c4>] zvol_request+0x204/0x610 [zfs]
Jan 31 17:15:28 homenas kernel: [ 1680.631559]  [<ffffffff8134a96b>] ? generic_make_request_checks+0x19b/0x3b0
Jan 31 17:15:28 homenas kernel: [ 1680.631563]  [<ffffffff8134a5e5>] generic_make_request.part.62+0x75/0xb0
Jan 31 17:15:28 homenas kernel: [ 1680.631566]  [<ffffffff8134abe8>] generic_make_request+0x68/0x70
Jan 31 17:15:28 homenas kernel: [ 1680.631570]  [<ffffffffa07da0b1>] kcryptd_crypt_write_io_submit+0x51/0xd0 [dm_crypt]
Jan 31 17:15:28 homenas kernel: [ 1680.631572]  [<ffffffffa07db67a>] kcryptd_crypt_write_convert+0x11a/0x260 [dm_crypt]
Jan 31 17:15:28 homenas kernel: [ 1680.631575]  [<ffffffff8109a953>] ? finish_task_switch+0x53/0x180
Jan 31 17:15:28 homenas kernel: [ 1680.631577]  [<ffffffffa07db7df>] kcryptd_crypt+0x1f/0x40 [dm_crypt]
Jan 31 17:15:28 homenas kernel: [ 1680.631580]  [<ffffffff81087e1f>] process_one_work+0x17f/0x4c0
Jan 31 17:15:28 homenas kernel: [ 1680.631582]  [<ffffffff81088fbb>] worker_thread+0x11b/0x3d0
Jan 31 17:15:28 homenas kernel: [ 1680.631584]  [<ffffffff81088ea0>] ? manage_workers.isra.21+0x150/0x150
Jan 31 17:15:28 homenas kernel: [ 1680.631586]  [<ffffffff8108fec9>] kthread+0xc9/0xe0
Jan 31 17:15:28 homenas kernel: [ 1680.631587]  [<ffffffff8108fe00>] ? flush_kthread_worker+0xb0/0xb0
Jan 31 17:15:28 homenas kernel: [ 1680.631590]  [<ffffffff8176cfd8>] ret_from_fork+0x58/0x90
Jan 31 17:15:28 homenas kernel: [ 1680.631592]  [<ffffffff8108fe00>] ? flush_kthread_worker+0xb0/0xb0
Jan 31 17:15:28 homenas kernel: [ 1680.631596] INFO: task kworker/11:0:88 blocked for more than 120 seconds.
Jan 31 17:15:28 homenas kernel: [ 1680.631597]       Tainted: PF          O 3.14.43-031443-generic #201505171835
Jan 31 17:15:28 homenas kernel: [ 1680.631598] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 31 17:15:28 homenas kernel: [ 1680.631599] kworker/11:0    D ffffffff818118e0     0    88      2 0x00000000
Jan 31 17:15:28 homenas kernel: [ 1680.631603] Workqueue: kcryptd kcryptd_crypt [dm_crypt]
Jan 31 17:15:28 homenas kernel: [ 1680.631604]  ffff880fe86adb00 0000000000000002 0000000000000246 ffff880fe86adfd8
Jan 31 17:15:28 homenas kernel: [ 1680.631607]  0000000000013200 0000000000013200 ffff880fe8a4e440 ffff880fe86a4b30
Jan 31 17:15:28 homenas kernel: [ 1680.631610]  ffff880fe86adb00 ffff88103fd73b08 ffff880fe86a4b30 ffff880f36235f48
Jan 31 17:15:28 homenas kernel: [ 1680.631613] Call Trace:
Jan 31 17:15:28 homenas kernel: [ 1680.631615]  [<ffffffff8175fe79>] schedule+0x29/0x70
Jan 31 17:15:28 homenas kernel: [ 1680.631617]  [<ffffffff8175ff4f>] io_schedule+0x8f/0xd0
Jan 31 17:15:28 homenas kernel: [ 1680.631622]  [<ffffffffa091798f>] cv_wait_common+0x9f/0x120 [spl]
Jan 31 17:15:28 homenas kernel: [ 1680.631625]  [<ffffffff810b4f90>] ? __wake_up_sync+0x20/0x20
Jan 31 17:15:28 homenas kernel: [ 1680.631629]  [<ffffffffa0917a68>] __cv_wait_io+0x18/0x20 [spl]
Jan 31 17:15:28 homenas kernel: [ 1680.631650]  [<ffffffffa09db533>] zio_wait+0x123/0x210 [zfs]
Jan 31 17:15:28 homenas kernel: [ 1680.631670]  [<ffffffffa09d5221>] zil_commit.part.11+0x451/0x7e0 [zfs]
Jan 31 17:15:28 homenas kernel: [ 1680.631692]  [<ffffffffa09d55c7>] zil_commit+0x17/0x20 [zfs]
Jan 31 17:15:28 homenas kernel: [ 1680.631712]  [<ffffffffa09e61c1>] zvol_request+0x301/0x610 [zfs]
Jan 31 17:15:28 homenas kernel: [ 1680.631715]  [<ffffffff8134a5e5>] generic_make_request.part.62+0x75/0xb0
Jan 31 17:15:28 homenas kernel: [ 1680.631717]  [<ffffffff8134abe8>] generic_make_request+0x68/0x70
Jan 31 17:15:28 homenas kernel: [ 1680.631720]  [<ffffffffa07da0b1>] kcryptd_crypt_write_io_submit+0x51/0xd0 [dm_crypt]
Jan 31 17:15:28 homenas kernel: [ 1680.631724]  [<ffffffffa07db67a>] kcryptd_crypt_write_convert+0x11a/0x260 [dm_crypt]
Jan 31 17:15:28 homenas kernel: [ 1680.631727]  [<ffffffffa07db7df>] kcryptd_crypt+0x1f/0x40 [dm_crypt]
Jan 31 17:15:28 homenas kernel: [ 1680.631729]  [<ffffffff81087e1f>] process_one_work+0x17f/0x4c0
Jan 31 17:15:28 homenas kernel: [ 1680.631732]  [<ffffffff81088fbb>] worker_thread+0x11b/0x3d0
Jan 31 17:15:28 homenas kernel: [ 1680.631734]  [<ffffffff81088ea0>] ? manage_workers.isra.21+0x150/0x150
Jan 31 17:15:28 homenas kernel: [ 1680.631736]  [<ffffffff8108fec9>] kthread+0xc9/0xe0
Jan 31 17:15:28 homenas kernel: [ 1680.631738]  [<ffffffff8108fe00>] ? flush_kthread_worker+0xb0/0xb0
Jan 31 17:15:28 homenas kernel: [ 1680.631741]  [<ffffffff8176cfd8>] ret_from_fork+0x58/0x90
Jan 31 17:15:28 homenas kernel: [ 1680.631743]  [<ffffffff8108fe00>] ? flush_kthread_worker+0xb0/0xb0
Jan 31 17:15:28 homenas kernel: [ 1680.631771] INFO: task kworker/0:2:4432 blocked for more than 120 seconds.
Jan 31 17:15:28 homenas kernel: [ 1680.631772]       Tainted: PF          O 3.14.43-031443-generic #201505171835
Jan 31 17:15:28 homenas kernel: [ 1680.631772] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 31 17:15:28 homenas kernel: [ 1680.631773] kworker/0:2     D 0000000000000000     0  4432      2 0x00000000
Jan 31 17:15:28 homenas kernel: [ 1680.631777] Workqueue: kcryptd kcryptd_crypt [dm_crypt]
Jan 31 17:15:28 homenas kernel: [ 1680.631777]  ffff880fd7e3dac0 0000000000000002 000000009c470000 ffff880fd7e3dfd8
Jan 31 17:15:28 homenas kernel: [ 1680.631779]  0000000000013200 0000000000013200 ffff880fe3ad1910 ffff880fd2946440
Jan 31 17:15:28 homenas kernel: [ 1680.631781]  ffff880fd7e3dad0 ffff880fc8e42b68 ffff880fc8e42a20 ffff880fc8e42b70
Jan 31 17:15:28 homenas kernel: [ 1680.631783] Call Trace:
Jan 31 17:15:28 homenas kernel: [ 1680.631785]  [<ffffffff8175fe79>] schedule+0x29/0x70
Jan 31 17:15:28 homenas kernel: [ 1680.631789]  [<ffffffffa09179d5>] cv_wait_common+0xe5/0x120 [spl]
Jan 31 17:15:28 homenas kernel: [ 1680.631790]  [<ffffffff810b4f90>] ? __wake_up_sync+0x20/0x20
Jan 31 17:15:28 homenas kernel: [ 1680.631794]  [<ffffffffa0917a25>] __cv_wait+0x15/0x20 [spl]
Jan 31 17:15:28 homenas kernel: [ 1680.631815]  [<ffffffffa0991cd3>] txg_wait_open+0xc3/0x110 [zfs]
Jan 31 17:15:28 homenas kernel: [ 1680.631830]  [<ffffffffa094d460>] dmu_tx_wait+0x380/0x390 [zfs]
Jan 31 17:15:28 homenas kernel: [ 1680.631832]  [<ffffffff81762086>] ? mutex_lock+0x16/0x37
Jan 31 17:15:28 homenas kernel: [ 1680.631844]  [<ffffffffa094d50a>] dmu_tx_assign+0x9a/0x510 [zfs]
Jan 31 17:15:28 homenas kernel: [ 1680.631862]  [<ffffffffa09e60c4>] zvol_request+0x204/0x610 [zfs]
Jan 31 17:15:28 homenas kernel: [ 1680.631865]  [<ffffffff8134a96b>] ? generic_make_request_checks+0x19b/0x3b0
Jan 31 17:15:28 homenas kernel: [ 1680.631867]  [<ffffffff8134a5e5>] generic_make_request.part.62+0x75/0xb0
Jan 31 17:15:28 homenas kernel: [ 1680.631869]  [<ffffffff8134abe8>] generic_make_request+0x68/0x70
Jan 31 17:15:28 homenas kernel: [ 1680.631872]  [<ffffffffa07da0b1>] kcryptd_crypt_write_io_submit+0x51/0xd0 [dm_crypt]
Jan 31 17:15:28 homenas kernel: [ 1680.631875]  [<ffffffffa07db67a>] kcryptd_crypt_write_convert+0x11a/0x260 [dm_crypt]
Jan 31 17:15:28 homenas kernel: [ 1680.631878]  [<ffffffff810a5678>] ? vtime_common_task_switch+0x28/0x50
Jan 31 17:15:28 homenas kernel: [ 1680.631881]  [<ffffffffa07db7df>] kcryptd_crypt+0x1f/0x40 [dm_crypt]
Jan 31 17:15:28 homenas kernel: [ 1680.631884]  [<ffffffff81087e1f>] process_one_work+0x17f/0x4c0
Jan 31 17:15:28 homenas kernel: [ 1680.631886]  [<ffffffff81088fbb>] worker_thread+0x11b/0x3d0
Jan 31 17:15:28 homenas kernel: [ 1680.631888]  [<ffffffff81088ea0>] ? manage_workers.isra.21+0x150/0x150
Jan 31 17:15:28 homenas kernel: [ 1680.631890]  [<ffffffff8108fec9>] kthread+0xc9/0xe0
Jan 31 17:15:28 homenas kernel: [ 1680.631892]  [<ffffffff8108fe00>] ? flush_kthread_worker+0xb0/0xb0
Jan 31 17:15:28 homenas kernel: [ 1680.631895]  [<ffffffff8176cfd8>] ret_from_fork+0x58/0x90
Jan 31 17:15:28 homenas kernel: [ 1680.631897]  [<ffffffff8108fe00>] ? flush_kthread_worker+0xb0/0xb0
Jan 31 17:15:28 homenas kernel: [ 1680.632046] INFO: task txg_quiesce:17877 blocked for more than 120 seconds.
Jan 31 17:15:28 homenas kernel: [ 1680.632047]       Tainted: PF          O 3.14.43-031443-generic #201505171835
Jan 31 17:15:28 homenas kernel: [ 1680.632048] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 31 17:15:28 homenas kernel: [ 1680.632050] txg_quiesce     D 0000000000000000     0 17877      2 0x00000000
Jan 31 17:15:28 homenas kernel: [ 1680.632052]  ffff880f3deb1d40 0000000000000002 0000000000000000 ffff880f3deb1fd8
Jan 31 17:15:28 homenas kernel: [ 1680.632054]  0000000000013200 0000000000013200 ffff880aef9e4b30 ffff880d2df8e440
Jan 31 17:15:28 homenas kernel: [ 1680.632056]  ffff880f3deb1d50 ffff880f83abf8d8 ffff880f83abf8a0 ffff880f83abf8e0
Jan 31 17:15:28 homenas kernel: [ 1680.632058] Call Trace:
Jan 31 17:15:28 homenas kernel: [ 1680.632060]  [<ffffffff8175fe79>] schedule+0x29/0x70
Jan 31 17:15:28 homenas kernel: [ 1680.632065]  [<ffffffffa09179d5>] cv_wait_common+0xe5/0x120 [spl]
Jan 31 17:15:28 homenas kernel: [ 1680.632067]  [<ffffffff810b4f90>] ? __wake_up_sync+0x20/0x20
Jan 31 17:15:28 homenas kernel: [ 1680.632071]  [<ffffffffa0917a25>] __cv_wait+0x15/0x20 [spl]
Jan 31 17:15:28 homenas kernel: [ 1680.632093]  [<ffffffffa099270b>] txg_quiesce_thread+0x2db/0x3f0 [zfs]
Jan 31 17:15:28 homenas kernel: [ 1680.632113]  [<ffffffffa0992430>] ? txg_fini+0x2d0/0x2d0 [zfs]
Jan 31 17:15:28 homenas kernel: [ 1680.632116]  [<ffffffffa0912de1>] thread_generic_wrapper+0x71/0x80 [spl]
Jan 31 17:15:28 homenas kernel: [ 1680.632119]  [<ffffffffa0912d70>] ? __thread_exit+0x20/0x20 [spl]
Jan 31 17:15:28 homenas kernel: [ 1680.632122]  [<ffffffff8108fec9>] kthread+0xc9/0xe0
Jan 31 17:15:28 homenas kernel: [ 1680.632124]  [<ffffffff8108fe00>] ? flush_kthread_worker+0xb0/0xb0
Jan 31 17:15:28 homenas kernel: [ 1680.632127]  [<ffffffff8176cfd8>] ret_from_fork+0x58/0x90
Jan 31 17:15:28 homenas kernel: [ 1680.632129]  [<ffffffff8108fe00>] ? flush_kthread_worker+0xb0/0xb0

Jan 31 17:15:28 homenas kernel: [ 1680.632332] INFO: task jbd2/dm-6-8:21694 blocked for more than 120 seconds.
Jan 31 17:15:28 homenas kernel: [ 1680.632333]       Tainted: PF          O 3.14.43-031443-generic #201505171835
Jan 31 17:15:28 homenas kernel: [ 1680.632334] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 31 17:15:28 homenas kernel: [ 1680.632335] jbd2/dm-6-8     D 0000000000000000     0 21694      2 0x00000000
Jan 31 17:15:28 homenas kernel: [ 1680.632338]  ffff880dd085db68 0000000000000002 ffff8808c6c8d100 ffff880dd085dfd8
Jan 31 17:15:28 homenas kernel: [ 1680.632340]  0000000000013200 0000000000013200 ffff880fe86a4b30 ffff880a756db220
Jan 31 17:15:28 homenas kernel: [ 1680.632342]  ffff880dd085db68 ffff88103fd73b08 ffff880a756db220 ffffffff81205910
Jan 31 17:15:28 homenas kernel: [ 1680.632344] Call Trace:
Jan 31 17:15:28 homenas kernel: [ 1680.632347]  [<ffffffff81205910>] ? __wait_on_buffer+0x30/0x30
Jan 31 17:15:28 homenas kernel: [ 1680.632349]  [<ffffffff8175fe79>] schedule+0x29/0x70
Jan 31 17:15:28 homenas kernel: [ 1680.632350]  [<ffffffff8175ff4f>] io_schedule+0x8f/0xd0
Jan 31 17:15:28 homenas kernel: [ 1680.632352]  [<ffffffff8120591e>] sleep_on_buffer+0xe/0x20
Jan 31 17:15:28 homenas kernel: [ 1680.632354]  [<ffffffff81760612>] __wait_on_bit+0x62/0x90
Jan 31 17:15:28 homenas kernel: [ 1680.632355]  [<ffffffff81205910>] ? __wait_on_buffer+0x30/0x30
Jan 31 17:15:28 homenas kernel: [ 1680.632357]  [<ffffffff817606bc>] out_of_line_wait_on_bit+0x7c/0x90
Jan 31 17:15:28 homenas kernel: [ 1680.632359]  [<ffffffff810b5010>] ? wake_atomic_t_function+0x40/0x40
Jan 31 17:15:28 homenas kernel: [ 1680.632360]  [<ffffffff8120590e>] __wait_on_buffer+0x2e/0x30
Jan 31 17:15:28 homenas kernel: [ 1680.632364]  [<ffffffff812a47f9>] jbd2_journal_commit_transaction+0x12d9/0x1480
Jan 31 17:15:28 homenas kernel: [ 1680.632367]  [<ffffffff8101ec59>] ? sched_clock+0x9/0x10
Jan 31 17:15:28 homenas kernel: [ 1680.632371]  [<ffffffff812a8208>] kjournald2+0xb8/0x240
Jan 31 17:15:28 homenas kernel: [ 1680.632373]  [<ffffffff810b4f90>] ? __wake_up_sync+0x20/0x20
Jan 31 17:15:28 homenas kernel: [ 1680.632376]  [<ffffffff812a8150>] ? commit_timeout+0x10/0x10
Jan 31 17:15:28 homenas kernel: [ 1680.632378]  [<ffffffff8108fec9>] kthread+0xc9/0xe0
Jan 31 17:15:28 homenas kernel: [ 1680.632380]  [<ffffffff8108fe00>] ? flush_kthread_worker+0xb0/0xb0
Jan 31 17:15:28 homenas kernel: [ 1680.632382]  [<ffffffff8176cfd8>] ret_from_fork+0x58/0x90
Jan 31 17:15:28 homenas kernel: [ 1680.632384]  [<ffffffff8108fe00>] ? flush_kthread_worker+0xb0/0xb0
Jan 31 17:15:28 homenas kernel: [ 1680.632388] INFO: task jbd2/dm-10-8:21775 blocked for more than 120 seconds.
Jan 31 17:15:28 homenas kernel: [ 1680.632389]       Tainted: PF          O 3.14.43-031443-generic #201505171835
Jan 31 17:15:28 homenas kernel: [ 1680.632390] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 31 17:15:28 homenas kernel: [ 1680.632391] jbd2/dm-10-8    D 0000000000000000     0 21775      2 0x00000000
Jan 31 17:15:28 homenas kernel: [ 1680.632393]  ffff880a2cd51c68 0000000000000002 ffff880a2cd51c18 ffff880a2cd51fd8
Jan 31 17:15:28 homenas kernel: [ 1680.632396]  0000000000013200 0000000000013200 ffff880fe8be3220 ffff880d2ddd8000
Jan 31 17:15:28 homenas kernel: [ 1680.632399]  ffff880a2cd51c78 ffff880a2cd51da8 ffff880fe63afc00 ffff880fd5969824
Jan 31 17:15:28 homenas kernel: [ 1680.632401] Call Trace:
Jan 31 17:15:28 homenas kernel: [ 1680.632404]  [<ffffffff8175fe79>] schedule+0x29/0x70
Jan 31 17:15:28 homenas kernel: [ 1680.632406]  [<ffffffff812a374a>] jbd2_journal_commit_transaction+0x22a/0x1480
Jan 31 17:15:28 homenas kernel: [ 1680.632409]  [<ffffffff810ab541>] ? dequeue_entity+0x181/0x440
Jan 31 17:15:28 homenas kernel: [ 1680.632412]  [<ffffffff8101ec59>] ? sched_clock+0x9/0x10
Jan 31 17:15:28 homenas kernel: [ 1680.632413]  [<ffffffff810a564a>] ? arch_vtime_task_switch+0x8a/0x90
Jan 31 17:15:28 homenas kernel: [ 1680.632415]  [<ffffffff810a568d>] ? vtime_common_task_switch+0x3d/0x50
Jan 31 17:15:28 homenas kernel: [ 1680.632416]  [<ffffffff8109aa28>] ? finish_task_switch+0x128/0x180
Jan 31 17:15:28 homenas kernel: [ 1680.632418]  [<ffffffff810b4f90>] ? __wake_up_sync+0x20/0x20
Jan 31 17:15:28 homenas kernel: [ 1680.632420]  [<ffffffff812a8208>] kjournald2+0xb8/0x240
Jan 31 17:15:28 homenas kernel: [ 1680.632422]  [<ffffffff810b4f90>] ? __wake_up_sync+0x20/0x20
Jan 31 17:15:28 homenas kernel: [ 1680.632424]  [<ffffffff812a8150>] ? commit_timeout+0x10/0x10
Jan 31 17:15:28 homenas kernel: [ 1680.632427]  [<ffffffff8108fec9>] kthread+0xc9/0xe0
Jan 31 17:15:28 homenas kernel: [ 1680.632429]  [<ffffffff8108fe00>] ? flush_kthread_worker+0xb0/0xb0
Jan 31 17:15:28 homenas kernel: [ 1680.632431]  [<ffffffff8176cfd8>] ret_from_fork+0x58/0x90
Jan 31 17:15:28 homenas kernel: [ 1680.632433]  [<ffffffff8108fe00>] ? flush_kthread_worker+0xb0/0xb0
Jan 31 17:15:28 homenas kernel: [ 1680.632435] INFO: task ext4lazyinit:21777 blocked for more than 120 seconds.
Jan 31 17:15:28 homenas kernel: [ 1680.632437]       Tainted: PF          O 3.14.43-031443-generic #201505171835
Jan 31 17:15:28 homenas kernel: [ 1680.632438] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 31 17:15:28 homenas kernel: [ 1680.632439] ext4lazyinit    D 0000000000000000     0 21777      2 0x00000000
Jan 31 17:15:28 homenas kernel: [ 1680.632441]  ffff880a1ac81b48 0000000000000002 000002001ac81b68 ffff880a1ac81fd8
Jan 31 17:15:28 homenas kernel: [ 1680.632443]  0000000000013200 0000000000013200 ffff880d2dddb220 ffff880ef5440000
Jan 31 17:15:28 homenas kernel: [ 1680.632446]  ffffc90016adf040 ffff88103fd73b08 7fffffffffffffff 7fffffffffffffff
Jan 31 17:15:28 homenas kernel: [ 1680.632449] Call Trace:
Jan 31 17:15:28 homenas kernel: [ 1680.632451]  [<ffffffff8175fe79>] schedule+0x29/0x70
Jan 31 17:15:28 homenas kernel: [ 1680.632454]  [<ffffffff8175f17d>] schedule_timeout+0x1bd/0x220
Jan 31 17:15:28 homenas kernel: [ 1680.632457]  [<ffffffff815e20b7>] ? __split_and_process_bio+0xe7/0x150
Jan 31 17:15:28 homenas kernel: [ 1680.632459]  [<ffffffff8101e4a9>] ? read_tsc+0x9/0x20
Jan 31 17:15:28 homenas kernel: [ 1680.632463]  [<ffffffff810d87ac>] ? ktime_get_ts+0x4c/0xe0
Jan 31 17:15:28 homenas kernel: [ 1680.632465]  [<ffffffff81760332>] io_schedule_timeout+0xa2/0x100
Jan 31 17:15:28 homenas kernel: [ 1680.632467]  [<ffffffff81760777>] wait_for_completion_io+0xa7/0x160
Jan 31 17:15:28 homenas kernel: [ 1680.632471]  [<ffffffff810a1ff0>] ? try_to_wake_up+0x210/0x210
Jan 31 17:15:28 homenas kernel: [ 1680.632473]  [<ffffffff81356fc7>] __blkdev_issue_zeroout+0x157/0x180
Jan 31 17:15:28 homenas kernel: [ 1680.632476]  [<ffffffff813570c9>] blkdev_issue_zeroout+0xd9/0xe0
Jan 31 17:15:28 homenas kernel: [ 1680.632478]  [<ffffffff81287863>] ? __ext4_journal_get_write_access+0x43/0x90
Jan 31 17:15:28 homenas kernel: [ 1680.632481]  [<ffffffff8125433f>] ext4_init_inode_table+0x15f/0x370
Jan 31 17:15:28 homenas kernel: [ 1680.632483]  [<ffffffff81269e76>] ext4_run_li_request+0xa6/0x120
Jan 31 17:15:28 homenas kernel: [ 1680.632485]  [<ffffffff81269f98>] ext4_lazyinit_thread+0xa8/0x1c0
Jan 31 17:15:28 homenas kernel: [ 1680.632486]  [<ffffffff81269ef0>] ? ext4_run_li_request+0x120/0x120
Jan 31 17:15:28 homenas kernel: [ 1680.632488]  [<ffffffff8108fec9>] kthread+0xc9/0xe0
Jan 31 17:15:28 homenas kernel: [ 1680.632490]  [<ffffffff8108fe00>] ? flush_kthread_worker+0xb0/0xb0
Jan 31 17:15:28 homenas kernel: [ 1680.632491]  [<ffffffff8176cfd8>] ret_from_fork+0x58/0x90
Jan 31 17:15:28 homenas kernel: [ 1680.632493]  [<ffffffff8108fe00>] ? flush_kthread_worker+0xb0/0xb0
Jan 31 17:15:28 homenas kernel: [ 1680.632494] INFO: task jbd2/dm-11-8:21820 blocked for more than 120 seconds.
Jan 31 17:15:28 homenas kernel: [ 1680.632495]       Tainted: PF          O 3.14.43-031443-generic #201505171835
Jan 31 17:15:28 homenas kernel: [ 1680.632495] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 31 17:15:28 homenas kernel: [ 1680.632496] jbd2/dm-11-8    D ffffffff818118e0     0 21820      2 0x00000000
Jan 31 17:15:28 homenas kernel: [ 1680.632498]  ffff880f8dd3db68 0000000000000002 ffff880ea1547800 ffff880f8dd3dfd8
Jan 31 17:15:28 homenas kernel: [ 1680.632500]  0000000000013200 0000000000013200 ffff880fe8a4cb30 ffff880fd4d61910
Jan 31 17:15:28 homenas kernel: [ 1680.632503]  ffff880f8dd3db68 ffff88103fd53b08 ffff880fd4d61910 ffffffff81205910
Jan 31 17:15:28 homenas kernel: [ 1680.632506] Call Trace:
Jan 31 17:15:28 homenas kernel: [ 1680.632509]  [<ffffffff81205910>] ? __wait_on_buffer+0x30/0x30
Jan 31 17:15:28 homenas kernel: [ 1680.632511]  [<ffffffff8175fe79>] schedule+0x29/0x70
Jan 31 17:15:28 homenas kernel: [ 1680.632513]  [<ffffffff8175ff4f>] io_schedule+0x8f/0xd0
Jan 31 17:15:28 homenas kernel: [ 1680.632515]  [<ffffffff8120591e>] sleep_on_buffer+0xe/0x20
Jan 31 17:15:28 homenas kernel: [ 1680.632517]  [<ffffffff81760612>] __wait_on_bit+0x62/0x90
Jan 31 17:15:28 homenas kernel: [ 1680.632520]  [<ffffffff81205910>] ? __wait_on_buffer+0x30/0x30
Jan 31 17:15:28 homenas kernel: [ 1680.632522]  [<ffffffff817606bc>] out_of_line_wait_on_bit+0x7c/0x90
Jan 31 17:15:28 homenas kernel: [ 1680.632524]  [<ffffffff810b5010>] ? wake_atomic_t_function+0x40/0x40
Jan 31 17:15:28 homenas kernel: [ 1680.632527]  [<ffffffff8120590e>] __wait_on_buffer+0x2e/0x30
Jan 31 17:15:28 homenas kernel: [ 1680.632529]  [<ffffffff812a47f9>] jbd2_journal_commit_transaction+0x12d9/0x1480
Jan 31 17:15:28 homenas kernel: [ 1680.632532]  [<ffffffff8101ec59>] ? sched_clock+0x9/0x10
Jan 31 17:15:28 homenas kernel: [ 1680.632535]  [<ffffffff812a8208>] kjournald2+0xb8/0x240
Jan 31 17:15:28 homenas kernel: [ 1680.632537]  [<ffffffff810b4f90>] ? __wake_up_sync+0x20/0x20
Jan 31 17:15:28 homenas kernel: [ 1680.632540]  [<ffffffff812a8150>] ? commit_timeout+0x10/0x10
Jan 31 17:15:28 homenas kernel: [ 1680.632542]  [<ffffffff8108fec9>] kthread+0xc9/0xe0
Jan 31 17:15:28 homenas kernel: [ 1680.632544]  [<ffffffff8108fe00>] ? flush_kthread_worker+0xb0/0xb0
Jan 31 17:15:28 homenas kernel: [ 1680.632546]  [<ffffffff8176cfd8>] ret_from_fork+0x58/0x90
Jan 31 17:15:28 homenas kernel: [ 1680.632547]  [<ffffffff8108fe00>] ? flush_kthread_worker+0xb0/0xb0

I had to reboot the system.

The system runs Ubuntu 14.04.3 LTS with kernel 3.14.43-031443-generic.

The setup consists of four pools:

NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
mirpool1   856G   189G   667G         -    54%    22%  1.00x  ONLINE  -
pool1     6.31T  2.92T  3.39T         -    27%    46%  1.00x  ONLINE  -
pool2       38T  22.8T  15.2T         -     7%    59%  1.00x  ONLINE  -
pool3     50.8T  18.9T  31.9T         -     8%    37%  1.00x  ONLINE  -

Deduplication is not in use.

The only thing worth mentioning is that pool2 currently has a failed disk, which was "replaced" with a file stored on pool3:

  pool: pool2
 state: ONLINE
  scan: none requested
config:

        NAME                          STATE     READ WRITE CKSUM
        pool2                         ONLINE       0     0     0
          raidz3-0                    ONLINE       0     0     0
            sds                       ONLINE       0     0     0
            sdal                      ONLINE       0     0     0
            sdaj                      ONLINE       0     0     0
            sdaf                      ONLINE       0     0     0
            sdg                       ONLINE       0     0     0
            sdah                      ONLINE       0     0     0
            sdv                       ONLINE       0     0     0
            sdk                       ONLINE       0     0     0
            sdi                       ONLINE       0     0     0
            sda                       ONLINE       0     0     0
            sdu                       ONLINE       0     0     0
            sdag                      ONLINE       0     0     0
            sdak                      ONLINE       0     0     0
            /pool3/pool2-disk/disk01  ONLINE       0     0     0

errors: No known data errors
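For the record, the file-backed replacement was set up along these lines; the exact size and the failed device name below are illustrative, only the file path matches the status output above:

```shell
# Create a sparse file on pool3 sized at least as large as the failed disk
# (size and the sdX device name are illustrative)
truncate -s 4T /pool3/pool2-disk/disk01

# Swap the failed raidz3 member for the file vdev (files must be given
# as absolute paths to zpool replace)
zpool replace pool2 sdX /pool3/pool2-disk/disk01
```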

I was unable to capture the ARC stats during the hang; however, a system running 0.6.5.3 (I downgraded to fix the issue) looks like this:

root@homenas:~# cat /proc/spl/kstat/zfs/arcstats
6 1 0x01 91 4368 496673023208 2010789986814
name                            type data
hits                            4    3095527
misses                          4    710774
demand_data_hits                4    2245740
demand_data_misses              4    146954
demand_metadata_hits            4    732231
demand_metadata_misses          4    325589
prefetch_data_hits              4    113074
prefetch_data_misses            4    235449
prefetch_metadata_hits          4    4482
prefetch_metadata_misses        4    2782
mru_hits                        4    860312
mru_ghost_hits                  4    56538
mfu_hits                        4    2117661
mfu_ghost_hits                  4    71662
deleted                         4    377324
mutex_miss                      4    35
evict_skip                      4    7017003
evict_not_enough                4    43823
evict_l2_cached                 4    0
evict_l2_eligible               4    52409084416
evict_l2_ineligible             4    52639311872
evict_l2_skip                   4    0
hash_elements                   4    126424
hash_elements_max               4    129334
hash_collisions                 4    6459
hash_chains                     4    842
hash_chain_max                  4    2
p                               4    5911566336
c                               4    8589934592
c_min                           4    8389934592
c_max                           4    8589934592
size                            4    8594328736
hdr_size                        4    47986464
data_size                       4    8090550272
metadata_size                   4    434358784
other_size                      4    21433216
anon_size                       4    10944512
anon_evictable_data             4    0
anon_evictable_metadata         4    0
mru_size                        4    5368710656
mru_evictable_data              4    5176426496
mru_evictable_metadata          4    107916800
mru_ghost_size                  4    3238283264
mru_ghost_evictable_data        4    1775501312
mru_ghost_evictable_metadata    4    1462781952
mfu_size                        4    3145253888
mfu_evictable_data              4    2908225536
mfu_evictable_metadata          4    215804928
mfu_ghost_size                  4    5349966336
mfu_ghost_evictable_data        4    2917924864
mfu_ghost_evictable_metadata    4    2432041472
l2_hits                         4    0
l2_misses                       4    0
l2_feeds                        4    0
l2_rw_clash                     4    0
l2_read_bytes                   4    0
l2_write_bytes                  4    0
l2_writes_sent                  4    0
l2_writes_done                  4    0
l2_writes_error                 4    0
l2_writes_lock_retry            4    0
l2_evict_lock_retry             4    0
l2_evict_reading                4    0
l2_evict_l1cached               4    0
l2_free_on_write                4    0
l2_cdata_free_on_write          4    0
l2_abort_lowmem                 4    0
l2_cksum_bad                    4    0
l2_io_error                     4    0
l2_size                         4    0
l2_asize                        4    0
l2_hdr_size                     4    0
l2_compress_successes           4    0
l2_compress_zeros               4    0
l2_compress_failures            4    0
memory_throttle_count           4    0
duplicate_buffers               4    0
duplicate_buffers_size          4    0
duplicate_reads                 4    0
memory_direct_count             4    0
memory_indirect_count           4    0
arc_no_grow                     4    0
arc_tempreserve                 4    0
arc_loaned_bytes                4    0
arc_prune                       4    1366974
arc_meta_used                   4    503778464
arc_meta_limit                  4    2147483648
arc_meta_max                    4    2537925864
arc_meta_min                    4    536870912
arc_need_free                   4    0
arc_sys_free                    4    1054822400
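
As a side note, the dump above follows the usual kstat layout (a two-line header, then `name type data` rows). A small hedged sketch (not part of the issue) for pulling an overall ARC hit rate out of such a dump:

```sh
# A hedged sketch: compute the overall ARC hit rate from an arcstats dump
# in the "name type data" kstat layout shown above. Header lines match
# neither pattern, so awk skips them naturally.
arc_hit_rate() {
    awk '$1 == "hits"   { h = $3 }
         $1 == "misses" { m = $3 }
         END { printf "%.2f%%\n", 100 * h / (h + m) }' "$1"
}

# Usage on a live system:
#   arc_hit_rate /proc/spl/kstat/zfs/arcstats
```

Applied to the numbers above (3095527 hits, 710774 misses) this gives a hit rate of roughly 81%.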

If you need more details please let me know.

@kernelOfTruth (Contributor)

Please post more hardware specs (ECC? how much RAM? processor? hard drive models? etc.).

How is the health of the hard drives?

Are you copying from ext4 to ZFS, or the reverse?

search terms for google:
jbd2/dm __wait_on_buffer schedule io_schedule sleep_on_buffer __wait_on_bit __wait_on_buffer out_of_line_wait_on_bit

referencing:
#2221 zvol/kworker/jbd2 possible deadlock (hung tasks)
https://access.redhat.com/solutions/96783 Recurring jbd2/ext4 deadlock at io_schedule

@ronnyegner (Author)

Hi,

the system is a:

Intel(R) Core(TM) i7-3930K CPU @ 3.20GHz

64 GB memory, no ECC (but a 48-hour memory test ran without problems)

The disk drives are fine according to their SMART values. The drives are 3 and 4 TB models from Seagate, WD and HGST. I doubt this is a "write error problem", as the problem disappears when downgrading to 0.6.5.3 while it can be reproduced almost immediately with 0.6.5.4.

The copy goes from a file system on ZFS to a luks encrypted file system with ext4, e.g.:

cp /pool3/temp/some_file [ZFS file system] /data/misc [ext4 formatted luks file system on pool2]

The system is stable with 0.6.5.3 but can be brought down almost immediately with 0.6.5.4.

@tuxoko (Contributor)

tuxoko commented Feb 1, 2016

@ronnyegner
It's not quite clear from your stack trace what it's waiting for. Could you capture a sysrq-t dump when it happens? Note that the sysrq-t output is quite large and can take a while to produce.
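
For reference, a task-state dump can be triggered like this (assuming sysrq is enabled in the kernel; needs root):

```sh
# Enable all sysrq functions (may already be enabled on your distro).
echo 1 > /proc/sys/kernel/sysrq

# Dump the state of all tasks (sysrq-t); the output lands in the kernel
# log, so collect it afterwards with dmesg or from /var/log/kern.log.
echo t > /proc/sysrq-trigger
```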

@ronnyegner (Author)

@tuxoko: I'll do it tomorrow.

@ronnyegner (Author)

So I tried the whole day to reproduce the issue, without any success so far.
The only thing that has changed is that I have replaced the defective disk in pool2:

BEFORE

 pool: pool2
 state: ONLINE
  scan: none requested
config:

        NAME                          STATE     READ WRITE CKSUM
        pool2                         ONLINE       0     0     0
          raidz3-0                    ONLINE       0     0     0
            sds                       ONLINE       0     0     0
        ...

            sdak                      ONLINE       0     0     0
            /pool3/pool2-disk/disk01  ONLINE       0     0     0

AFTER

 pool: pool2
 state: ONLINE
  scan: none requested
config:

        NAME                          STATE     READ WRITE CKSUM
        pool2                         ONLINE       0     0     0
          raidz3-0                    ONLINE       0     0     0
            sds                       ONLINE       0     0     0
        ...

            sdak                      ONLINE       0     0     0
            sdao                      ONLINE       0     0     0 <=== new disk

I will try for maybe one more day and then close the issue... it seems solved.
