PANIC at arc.c:6120:arc_release(), VERIFY(HDR_EMPTY(hdr)) failed #9897

Status: Closed
Labels: Type: Defect (incorrect behavior, e.g. crash, hang)

softminus opened this issue Jan 27, 2020 · 6 comments

@softminus (author) commented Jan 27, 2020
System information

| Type                 | Version/Name                  |
| -------------------- | ----------------------------- |
| Distribution Name    | Arch Linux                    |
| Distribution Version | N/A, rolling release          |
| Linux Kernel         | 5.5.0-rc7-00086-gf9725d103968 |
| Architecture         | x86_64                        |
| ZFS Version          | 0.8.0-550_ga3403164d          |
| SPL Version          | 0.8.0-550_ga3403164d          |

Describe the problem you're observing

I was saving a file in Sublime Text when ST3 froze and the panic above appeared in dmesg. The system became unusable; I had to use magic SysRq to halt it.

Describe how to reproduce the problem

This is the first time I have encountered this problem, and I am unable to reproduce it. For what it's worth, I compiled ZFS with --enable-debug --enable-debuginfo --enable-debug-kmem.

Include any warning/errors/backtraces from the system logs

[90358.263384] VERIFY(HDR_EMPTY(hdr)) failed
[90358.263386] PANIC at arc.c:6120:arc_release()
[90358.263387] Showing stack for process 5235
[90358.263389] CPU: 0 PID: 5235 Comm: sublime_text Tainted: P           O      5.5.0-rc7HEADD-00086-gf9725d103968 #1
[90358.263389] Hardware name: FUJITSU /D3641-S1, BIOS V5.0.0.13 R1.7.0 for D3641-S1x                     06/05/2019
[90358.263390] Call Trace:
[90358.263395]  dump_stack+0x66/0x90
[90358.263404]  spl_panic+0xef/0x117 [spl]
[90358.263469]  arc_release+0xaf4/0xcc0 [zfs]
[90358.263475]  ? __cv_signal+0x2c/0xb0 [spl]
[90358.263530]  dbuf_undirty.isra.0+0x247/0x6f0 [zfs]
[90358.263563]  dbuf_free_range+0x263/0x680 [zfs]
[90358.263601]  dnode_free_range+0x24e/0xac0 [zfs]
[90358.263635]  dmu_free_long_range_impl+0x2b7/0x4a0 [zfs]
[90358.263673]  dmu_free_long_range+0x70/0xc0 [zfs]
[90358.263726]  ? zfs_rangelock_enter+0xe3/0x160 [zfs]
[90358.263785]  zfs_trunc+0xa8/0x220 [zfs]
[90358.263836]  zfs_freesp+0x107/0x330 [zfs]
[90358.263888]  zfs_setattr+0x10e7/0x2c70 [zfs]
[90358.263933]  ? __raw_spin_unlock+0x5/0x10 [zfs]
[90358.263994]  ? zfs_lookup+0x140/0x3d0 [zfs]
[90358.264046]  zpl_setattr+0x101/0x1c0 [zfs]
[90358.264049]  notify_change+0x2e1/0x450
[90358.264051]  do_truncate+0xaf/0x100
[90358.264102]  ? zpl_open+0x87/0xc0 [zfs]
[90358.264103]  path_openat+0x5cc/0x15a0
[90358.264105]  ? vfs_mknod+0x116/0x1e0
[90358.264148]  ? zfs_refcount_remove_many+0x11e/0x1d0 [zfs]
[90358.264150]  do_filp_open+0xcc/0x140
[90358.264152]  do_sys_open+0x199/0x240
[90358.264154]  do_syscall_64+0x4e/0x150
[90358.264156]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[90358.264158] RIP: 0033:0x7fd5b63331d4
[90358.264159] Code: 24 20 eb 8f 66 90 44 89 54 24 0c e8 66 59 f9 ff 44 8b 54 24 0c 44 89 e2 48 89 ee 41 89 c0 bf 9c ff ff ff b8 01 01 00 00 0f 05 <48> 3d 00 f0 ff ff 77 32 44 89 c7 89 44 24 0c e8 98 59 f9 ff 8b 44
[90358.264159] RSP: 002b:00007ffdf8c73800 EFLAGS: 00000293 ORIG_RAX: 0000000000000101
[90358.264160] RAX: ffffffffffffffda RBX: 0000000005cee530 RCX: 00007fd5b63331d4
[90358.264161] RDX: 0000000000000241 RSI: 00000000081ed7c0 RDI: 00000000ffffff9c
[90358.264161] RBP: 00000000081ed7c0 R08: 0000000000000000 R09: 0000000000000001
[90358.264161] R10: 00000000000001b6 R11: 0000000000000293 R12: 0000000000000241
[90358.264162] R13: 0000000005cee530 R14: 0000000000000001 R15: 0000000006f901f8
[90382.293459] audit: type=1101 audit(1580083608.148:349): pid=680936 uid=1000 auid=1000 ses=1 msg='op=PAM:accounting grantors=pam_unix,pam_permit,pam_time acct="thz" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/5 res=success'
[90382.293575] audit: type=1110 audit(1580083608.148:350): pid=680936 uid=0 auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_unix,pam_permit,pam_env acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/5 res=success'
[90382.294532] audit: type=1105 audit(1580083608.149:351): pid=680936 uid=0 auid=1000 ses=1 msg='op=PAM:session_open grantors=pam_limits,pam_unix,pam_permit acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/5 res=success'
[90383.065227] audit: type=1106 audit(1580083608.920:352): pid=680936 uid=0 auid=1000 ses=1 msg='op=PAM:session_close grantors=pam_limits,pam_unix,pam_permit acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/5 res=success'
[90383.065331] audit: type=1104 audit(1580083608.920:353): pid=680936 uid=0 auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_unix,pam_permit,pam_env acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/5 res=success'
[90563.823079] INFO: task txg_quiesce:609 blocked for more than 122 seconds.
[90563.823081]       Tainted: P           O      5.5.0-rc7HEADD-00086-gf9725d103968 #1
[90563.823082] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[90563.823086] txg_quiesce     D    0   609      2 0x80004000
[90563.823090] Call Trace:
[90563.823117]  ? __schedule+0x2dc/0x760
[90563.823118]  schedule+0x4a/0xb0
[90563.823124]  cv_wait_common+0x17f/0x2d0 [spl]
[90563.823126]  ? wait_woken+0x80/0x80
[90563.823201]  txg_quiesce+0x213/0x2e0 [zfs]
[90563.823255]  txg_quiesce_thread+0xfb/0x250 [zfs]
[90563.823301]  ? txg_quiesce+0x2e0/0x2e0 [zfs]
[90563.823305]  thread_generic_wrapper+0x78/0xb0 [spl]
[90563.823307]  kthread+0xfb/0x130
[90563.823312]  ? IS_ERR+0x10/0x10 [spl]
[90563.823312]  ? kthread_park+0x90/0x90
[90563.823314]  ret_from_fork+0x1f/0x40
[90563.823338] INFO: task sublime_text:5235 blocked for more than 122 seconds.
[90563.823339]       Tainted: P           O      5.5.0-rc7HEADD-00086-gf9725d103968 #1
[90563.823339] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[90563.823340] sublime_text    D    0  5235      1 0x80000084
[90563.823341] Call Trace:
[90563.823343]  ? __schedule+0x2dc/0x760
[90563.823350]  schedule+0x4a/0xb0
[90563.823356]  spl_panic+0x115/0x117 [spl]
[90563.823388]  arc_release+0xaf4/0xcc0 [zfs]
[90563.823392]  ? __cv_signal+0x2c/0xb0 [spl]
[90563.823423]  dbuf_undirty.isra.0+0x247/0x6f0 [zfs]
[90563.823454]  dbuf_free_range+0x263/0x680 [zfs]
[90563.823490]  dnode_free_range+0x24e/0xac0 [zfs]
[90563.823522]  dmu_free_long_range_impl+0x2b7/0x4a0 [zfs]
[90563.823555]  dmu_free_long_range+0x70/0xc0 [zfs]
[90563.823603]  ? zfs_rangelock_enter+0xe3/0x160 [zfs]
[90563.823652]  zfs_trunc+0xa8/0x220 [zfs]
[90563.823701]  zfs_freesp+0x107/0x330 [zfs]
[90563.823750]  zfs_setattr+0x10e7/0x2c70 [zfs]
[90563.823793]  ? __raw_spin_unlock+0x5/0x10 [zfs]
[90563.823841]  ? zfs_lookup+0x140/0x3d0 [zfs]
[90563.823889]  zpl_setattr+0x101/0x1c0 [zfs]
[90563.823892]  notify_change+0x2e1/0x450
[90563.823894]  do_truncate+0xaf/0x100
[90563.823976]  ? zpl_open+0x87/0xc0 [zfs]
[90563.823984]  path_openat+0x5cc/0x15a0
[90563.824002]  ? vfs_mknod+0x116/0x1e0
[90563.824045]  ? zfs_refcount_remove_many+0x11e/0x1d0 [zfs]
[90563.824046]  do_filp_open+0xcc/0x140
[90563.824048]  do_sys_open+0x199/0x240
[90563.824050]  do_syscall_64+0x4e/0x150
[90563.824052]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[90563.824053] RIP: 0033:0x7fd5b63331d4
[90563.824055] Code: Bad RIP value.
[90563.824056] RSP: 002b:00007ffdf8c73800 EFLAGS: 00000293 ORIG_RAX: 0000000000000101
[90563.824057] RAX: ffffffffffffffda RBX: 0000000005cee530 RCX: 00007fd5b63331d4
[90563.824059] RDX: 0000000000000241 RSI: 00000000081ed7c0 RDI: 00000000ffffff9c
[90563.824059] RBP: 00000000081ed7c0 R08: 0000000000000000 R09: 0000000000000001
[90563.824060] R10: 00000000000001b6 R11: 0000000000000293 R12: 0000000000000241
[90563.824060] R13: 0000000005cee530 R14: 0000000000000001 R15: 0000000006f901f8
[90686.703020] INFO: task txg_quiesce:609 blocked for more than 245 seconds.
[90686.703028]       Tainted: P           O      5.5.0-rc7HEADD-00086-gf9725d103968 #1
[90686.703031] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[90686.703057] txg_quiesce     D    0   609      2 0x80004000
[90686.703059] Call Trace:
[90686.703065]  ? __schedule+0x2dc/0x760
[90686.703082]  schedule+0x4a/0xb0
[90686.703087]  cv_wait_common+0x17f/0x2d0 [spl]
[90686.703089]  ? wait_woken+0x80/0x80
[90686.703153]  txg_quiesce+0x213/0x2e0 [zfs]
[90686.703201]  txg_quiesce_thread+0xfb/0x250 [zfs]
[90686.703247]  ? txg_quiesce+0x2e0/0x2e0 [zfs]
[90686.703252]  thread_generic_wrapper+0x78/0xb0 [spl]
[90686.703254]  kthread+0xfb/0x130
[90686.703258]  ? IS_ERR+0x10/0x10 [spl]
[90686.703259]  ? kthread_park+0x90/0x90
[90686.703260]  ret_from_fork+0x1f/0x40
[90686.703284] INFO: task sublime_text:5235 blocked for more than 245 seconds.
[90686.703285]       Tainted: P           O      5.5.0-rc7HEADD-00086-gf9725d103968 #1
[90686.703285] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[90686.703286] sublime_text    D    0  5235      1 0x80000084
[90686.703287] Call Trace:
[90686.703289]  ? __schedule+0x2dc/0x760
[90686.703291]  schedule+0x4a/0xb0
[90686.703295]  spl_panic+0x115/0x117 [spl]
[90686.703337]  arc_release+0xaf4/0xcc0 [zfs]
[90686.703341]  ? __cv_signal+0x2c/0xb0 [spl]
[90686.703372]  dbuf_undirty.isra.0+0x247/0x6f0 [zfs]
[90686.703403]  dbuf_free_range+0x263/0x680 [zfs]
[90686.703439]  dnode_free_range+0x24e/0xac0 [zfs]
[90686.703471]  dmu_free_long_range_impl+0x2b7/0x4a0 [zfs]
[90686.703504]  dmu_free_long_range+0x70/0xc0 [zfs]
[90686.703553]  ? zfs_rangelock_enter+0xe3/0x160 [zfs]
[90686.703601]  zfs_trunc+0xa8/0x220 [zfs]
[90686.703651]  zfs_freesp+0x107/0x330 [zfs]
[90686.703700]  zfs_setattr+0x10e7/0x2c70 [zfs]
[90686.703742]  ? __raw_spin_unlock+0x5/0x10 [zfs]
[90686.703790]  ? zfs_lookup+0x140/0x3d0 [zfs]
[90686.703839]  zpl_setattr+0x101/0x1c0 [zfs]
[90686.703842]  notify_change+0x2e1/0x450
[90686.703844]  do_truncate+0xaf/0x100
[90686.703892]  ? zpl_open+0x87/0xc0 [zfs]
[90686.703894]  path_openat+0x5cc/0x15a0
[90686.703895]  ? vfs_mknod+0x116/0x1e0
[90686.703937]  ? zfs_refcount_remove_many+0x11e/0x1d0 [zfs]
[90686.703945]  do_filp_open+0xcc/0x140
[90686.703964]  do_sys_open+0x199/0x240
[90686.703966]  do_syscall_64+0x4e/0x150
[90686.703967]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[90686.703968] RIP: 0033:0x7fd5b63331d4
[90686.703971] Code: Bad RIP value.
[90686.703971] RSP: 002b:00007ffdf8c73800 EFLAGS: 00000293 ORIG_RAX: 0000000000000101
[90686.703973] RAX: ffffffffffffffda RBX: 0000000005cee530 RCX: 00007fd5b63331d4
[90686.703973] RDX: 0000000000000241 RSI: 00000000081ed7c0 RDI: 00000000ffffff9c
[90686.703973] RBP: 00000000081ed7c0 R08: 0000000000000000 R09: 0000000000000001
[90686.703974] R10: 00000000000001b6 R11: 0000000000000293 R12: 0000000000000241
[90686.703974] R13: 0000000005cee530 R14: 0000000000000001 R15: 0000000006f901f8

@behlendorf added the Type: Defect label on Jan 27, 2020
@softminus (author) commented Jan 28, 2020

Just got it again. If it helps, I have two pools: one of two mirrored 12TB SATA drives, and one of two mirrored 1TB SSDs (one NVMe, one SATA). If anyone has a patch, workaround, diagnostic, or test they want me to try, let me know; I would be glad to help troubleshoot this.

[82827.161019] VERIFY(HDR_EMPTY(hdr)) failed
[82827.161022] PANIC at arc.c:6120:arc_release()
[82827.161022] Showing stack for process 2206
[82827.161024] CPU: 2 PID: 2206 Comm: sublime_text Tainted: P           O      5.5.0chirumiru-00004-g57fd71a6c08b #1
[82827.161024] Hardware name: FUJITSU /D3641-S1, BIOS V5.0.0.13 R1.7.0 for D3641-S1x                     06/05/2019
[82827.161025] Call Trace:
[82827.161030]  dump_stack+0x66/0x90
[82827.161053]  spl_panic+0xef/0x117 [spl]
[82827.161109]  arc_release+0xaf4/0xcc0 [zfs]
[82827.161114]  ? __cv_signal+0x2c/0xb0 [spl]
[82827.161156]  dbuf_undirty.isra.0+0x247/0x6f0 [zfs]
[82827.161188]  dbuf_free_range+0x263/0x680 [zfs]
[82827.161225]  dnode_free_range+0x24e/0xac0 [zfs]
[82827.161258]  dmu_free_long_range_impl+0x2b7/0x4a0 [zfs]
[82827.161307]  dmu_free_long_range+0x70/0xc0 [zfs]
[82827.161357]  ? zfs_rangelock_enter+0xe3/0x160 [zfs]
[82827.161406]  zfs_trunc+0xa8/0x220 [zfs]
[82827.161455]  zfs_freesp+0x107/0x330 [zfs]
[82827.161504]  zfs_setattr+0x10e7/0x2c70 [zfs]
[82827.161547]  ? __raw_spin_unlock+0x5/0x10 [zfs]
[82827.161595]  ? zfs_lookup+0x140/0x3d0 [zfs]
[82827.161644]  zpl_setattr+0x101/0x1c0 [zfs]
[82827.161647]  notify_change+0x2e1/0x450
[82827.161649]  do_truncate+0xaf/0x100
[82827.161697]  ? zpl_open+0x87/0xc0 [zfs]
[82827.161698]  path_openat+0x5d4/0x1590
[82827.161700]  do_filp_open+0xcc/0x140
[82827.161702]  do_sys_open+0x199/0x240
[82827.161704]  do_syscall_64+0x4e/0x150
[82827.161706]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[82827.161707] RIP: 0033:0x7fee3b7481d4
[82827.161708] Code: 24 20 eb 8f 66 90 44 89 54 24 0c e8 66 59 f9 ff 44 8b 54 24 0c 44 89 e2 48 89 ee 41 89 c0 bf 9c ff ff ff b8 01 01 00 00 0f 05 <48> 3d 00 f0 ff ff 77 32 44 89 c7 89 44 24 0c e8 98 59 f9 ff 8b 44
[82827.161708] RSP: 002b:00007ffc352a1210 EFLAGS: 00000293 ORIG_RAX: 0000000000000101
[82827.161709] RAX: ffffffffffffffda RBX: 000000001ef596a0 RCX: 00007fee3b7481d4
[82827.161710] RDX: 0000000000000241 RSI: 000000001a8521b0 RDI: 00000000ffffff9c
[82827.161710] RBP: 000000001a8521b0 R08: 0000000000000000 R09: 0000000000000001
[82827.161710] R10: 00000000000001b6 R11: 0000000000000293 R12: 0000000000000241
[82827.161711] R13: 000000001ef596a0 R14: 0000000000000001 R15: 000000000e7a0bd8

@softminus (author) commented

I can now reliably reproduce it by hitting Control-S (the save keybinding) in my Sublime Text 3 session as rapidly as possible (about twice per second for a few seconds).
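For anyone who wants to script this, below is a hypothetical stand-alone sketch of that access pattern (the loop count and file contents are my own guesses, not from the report). The open flags O_WRONLY|O_CREAT|O_TRUNC (0x241, visible as RDX in the register dumps) and mode 0666 (the 0x1b6 in R10) match the openat calls in the traces, and each iteration exercises the same path_openat() -> do_truncate() -> zfs_trunc() path shown in the backtraces.

```c
/* Hypothetical reproducer sketch, not a confirmed test case: it imitates
 * a rapid save loop, where each "save" reopens the file with O_TRUNC and
 * rewrites it. Point it at a file on the affected dataset. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <file-on-zfs>\n", argv[0]);
		return (1);
	}
	const char *buf = "contents rewritten on every simulated save\n";
	for (int i = 0; i < 10000; i++) {
		/* Truncate and rewrite, like one Ctrl-S in the editor. */
		int fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC, 0666);
		if (fd < 0) {
			perror("open");
			return (1);
		}
		if (write(fd, buf, strlen(buf)) < 0) {
			perror("write");
			return (1);
		}
		close(fd);
	}
	return (0);
}
```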

[15226.587076] VERIFY(HDR_EMPTY(hdr)) failed
[15226.587079] PANIC at arc.c:6120:arc_release()
[15226.587091] Showing stack for process 2432
[15226.587093] CPU: 3 PID: 2432 Comm: sublime_text Tainted: P           O      5.5.0-chirumiru-01160-g059c144582b7 #1
[15226.587093] Hardware name: FUJITSU /D3641-S1, BIOS V5.0.0.13 R1.7.0 for D3641-S1x                     06/05/2019
[15226.587094] Call Trace:
[15226.587099]  dump_stack+0x66/0x90
[15226.587104]  spl_panic+0xef/0x117 [spl]
[15226.587107]  ? _cond_resched+0x15/0x30
[15226.587160]  arc_release+0x1089/0x1170 [zfs]
[15226.587163]  ? avl_find+0x68/0xe0 [zavl]
[15226.587164]  ? _cond_resched+0x15/0x30
[15226.587184]  dbuf_free_range+0x4f7/0xd80 [zfs]
[15226.587207]  dnode_free_range+0x25a/0xae0 [zfs]
[15226.587229]  dmu_free_long_range+0x3e3/0x650 [zfs]
[15226.587261]  zfs_trunc+0x82/0x200 [zfs]
[15226.587311]  zfs_freesp+0xd7/0x4b0 [zfs]
[15226.587367]  ? zfs_zaccess_aces_check+0x22b/0x460 [zfs]
[15226.587398]  zfs_setattr+0xe8f/0x2d00 [zfs]
[15226.587400]  ? mutex_lock+0xe/0x30
[15226.587432]  zpl_setattr+0x109/0x250 [zfs]
[15226.587434]  notify_change+0x2e1/0x450
[15226.587436]  do_truncate+0x88/0xe0
[15226.587437]  path_openat+0x5b7/0x15a0
[15226.587439]  do_filp_open+0xab/0x120
[15226.587441]  do_sys_open+0x199/0x240
[15226.587442]  do_syscall_64+0x4e/0x150
[15226.587443]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[15226.587445] RIP: 0033:0x7fb738ecb1d4
[15226.587446] Code: 24 20 eb 8f 66 90 44 89 54 24 0c e8 66 59 f9 ff 44 8b 54 24 0c 44 89 e2 48 89 ee 41 89 c0 bf 9c ff ff ff b8 01 01 00 00 0f 05 <48> 3d 00 f0 ff ff 77 32 44 89 c7 89 44 24 0c e8 98 59 f9 ff 8b 44
[15226.587446] RSP: 002b:00007fffa54e4af0 EFLAGS: 00000293 ORIG_RAX: 0000000000000101
[15226.587447] RAX: ffffffffffffffda RBX: 00000000024c6190 RCX: 00007fb738ecb1d4
[15226.587448] RDX: 0000000000000241 RSI: 00000000025fd9b0 RDI: 00000000ffffff9c
[15226.587448] RBP: 00000000025fd9b0 R08: 0000000000000000 R09: 0000000000000001
[15226.587448] R10: 00000000000001b6 R11: 0000000000000293 R12: 0000000000000241
[15226.587449] R13: 00000000024c6190 R14: 0000000000000001 R15: 0000000001fdca78

@delphij (contributor) commented Apr 17, 2020

This looks very similar to a panic observed on FreeBSD; I've also filed FreeBSD PR https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=245683 for it.

@stale (bot) commented Apr 17, 2021

This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.

@stale added the Status: Stale label on Apr 17, 2021
@behlendorf (contributor) commented

This issue has not been observed recently; however, it hasn't been specifically addressed either, so it's likely not stale.

@stale removed the Status: Stale label on Apr 17, 2021
@delphij (contributor) commented Apr 28, 2021

I wonder if the VERIFY(HDR_EMPTY(hdr)) was right here in the first place. I sent something similar to other developers some time ago; in summary:

In arc_release(), we basically have:

```c
        /*
         * We don't grab the hash lock prior to this check, because if
         * the buffer's header is in the arc_anon state, it won't be
         * linked into the hash table.
         */
        if (hdr->b_l1hdr.b_state == arc_anon) {
[...]
                ASSERT(HDR_EMPTY(hdr));
[...]
                /*
                 * If the buf is being overridden then it may already
                 * have a hdr that is not empty.
                 */
                buf_discard_identity(hdr);
```

So, in short, we assert HDR_EMPTY(hdr) (i.e. b_dva.dva_word == 0), but the later comment suggests that is not always the case: either the assertion should not be there, or the later buf_discard_identity() call should be removed. Looking at illumos/illumos-gate@dcbf3bd, we probably should have removed the assertion.
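To make the contradiction concrete, here is a toy, self-contained model of that branch. The struct and macro below are simplified stand-ins patterned on the ZFS sources, not the real definitions: a buf that is being overridden still carries an identity, so VERIFY(HDR_EMPTY(hdr)) trips on exactly the state that the buf_discard_identity() a few lines later exists to clean up.

```c
/* Toy model of the arc_anon branch in arc_release(). The types here are
 * simplified stand-ins, not the real ZFS definitions. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
	uint64_t	dva_word[2];
} dva_t;

typedef struct {
	dva_t		b_dva;		/* the buffer's on-disk identity */
	uint64_t	b_birth;
} arc_buf_hdr_t;

/* Mirrors the spirit of the real HDR_EMPTY(): no identity recorded. */
#define	HDR_EMPTY(hdr)	((hdr)->b_dva.dva_word[0] == 0 && \
			(hdr)->b_dva.dva_word[1] == 0)

static void
buf_discard_identity(arc_buf_hdr_t *hdr)
{
	hdr->b_dva.dva_word[0] = 0;
	hdr->b_dva.dva_word[1] = 0;
	hdr->b_birth = 0;
}

int
main(void)
{
	/* An anonymous header whose buf is being overridden: it still
	 * carries an identity, so the assertion would fire right here. */
	arc_buf_hdr_t hdr = { .b_dva = { { 0x1234, 0x5678 } }, .b_birth = 42 };

	printf("HDR_EMPTY before discard: %d\n", HDR_EMPTY(&hdr));	/* 0 */
	buf_discard_identity(&hdr);	/* the step that handles this case */
	printf("HDR_EMPTY after discard:  %d\n", HDR_EMPTY(&hdr));	/* 1 */
	return (0);
}
```

This prints 0 then 1: the header only becomes empty after the discard, which is consistent with dropping the assertion (as the commits below do) rather than dropping the discard.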

rincebrain added a commit to rincebrain/zfs that referenced this issue Jun 15, 2021
Unfortunately, there was an overzealous assertion that was (in pretty
specific circumstances) false, causing failure.

Let's not, and say we did.

Closes: openzfs#9897
Closes: openzfs#12020

Signed-off-by: Rich Ercolani <rincebrain@gmail.com>
tonyhutter pushed a commit to tonyhutter/zfs that referenced this issue Sep 15, 2021
Unfortunately, there was an overzealous assertion that was (in pretty
specific circumstances) false, causing failure.  This assertion was
added in error, so we're removing it.

Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: George Wilson <gwilson@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rich Ercolani <rincebrain@gmail.com>
Closes openzfs#9897
Closes openzfs#12020
Closes openzfs#12246
rincebrain added a commit to rincebrain/zfs that referenced this issue Sep 22, 2021, and datacore-rm pushed commits to DataCoreSoftware/openzfs that referenced this issue Oct 4, 2022 and Oct 13, 2022, all carrying the same commit message as above.