long hung tasks #796
The stacks are just advisory, but they seem to indicate the txg_sync thread was blocked on an I/O request for quite some time. Perhaps one of your drives had a hiccup.
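Not from the original thread, but a minimal sketch of how one might check for such a drive hiccup, assuming the pool names reported later in this issue:

```
# watch per-vdev I/O every 5 seconds; one device with far higher
# service times than its siblings suggests a hardware hiccup
zpool iostat -v bkup0 5

# check that drive's own error counters (device name hypothetical)
smartctl -a /dev/sdX | grep -iE 'reallocated|pending|uncorrect'
```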
I also frequently see lock-ups (sometimes the system seems to have crashed...) with similar traces in the logs. This typically happens under a reasonably heavy load, e.g. when resilvering/scrubbing and/or snapshotting while rsyncing. I thought maybe this was related to ZFS memory issues, but do you think it could be an issue with my disks or eSATA card? I am finding that scrubs fix errors on them from time to time. They are in a TowerRAID eSATA external enclosure driven by an eSATA RAID card (RocketRAID 622, but just running them JBOD), so there are many places for things to go wrong.

I'm running the latest stable Ubuntu PPA as of July 10. Our array is an 8 TB raidz2 with a 32 GB USB stick cache device. The array is about 75% full, with compression and dedup enabled. The system has 8 GB RAM.

I'd very much appreciate help understanding these stack traces, both so I can do something on my end to stop the lockups and so they can contribute to your great work on ZFS. Looking at other bug reports, it looks like this implementation of ZFS has a variety of issues with memory and excessive disk access while under load. Since scrubbing takes so long on reasonably sized arrays, it seems the system should be able to do this and still handle other I/O without locking up. Is there perhaps an underlying issue with prioritizing finite resources? This may seem to fall under "optimization", but because ZFS systems can easily grow large, some amount of optimization may be required for basic functionality.

Here are my kernel logs from the most recent lockup. The system recovered from this one:
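As an aside on the memory question above: with dedup enabled on an 8 GB machine, a commonly suggested mitigation is capping the ARC. A minimal sketch, assuming the ZFS on Linux `zfs_arc_max` module parameter (the 2 GiB value is purely illustrative):

```
# /etc/modprobe.d/zfs.conf — cap the ARC at 2 GiB on module load
options zfs zfs_arc_max=2147483648

# or change it at runtime
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max
```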
@mooinglemur Is this still an issue in 0.6.0-rc11?
I just had this happen this morning, running the most recent zfsonlinux on Ubuntu 12. It completely b0rked my ESXi host, which had a datastore mounted on the zfsonlinux VM. I had to reboot the whole host back onto the previous SAN (OpenIndiana, a Solaris variant), since my wife uses a virtualized Windows XP for telecommuting, and the WAF took a big hit from this. I was migrating datasets between disks when this happened.
Kernel 3.2.0-32-generic #51-Ubuntu SMP, zfs-0.6.0-rc11. I just saw this today; it seemed to make an AFP copy hang for a while, but no ill effects beyond that were observed:
It looks like there are a couple of unrelated issues mentioned here. The original problem appears to have been addressed, and the latter issues look slightly different, but there's not a ton of data to go on. So I'm going to close this issue, and we can open a new one if these problems persist with -rc12 and later.
…penzfs#796)

= Description
This commit allows us to add an object store bucket as a vdev in an existing pool, and it is the first part of the DOSE Migration project.

= Note: Forcing Addition From `zpool add`
Attempting to add an object-store vdev without `-f` yields the following error message:
```
$ sudo zpool add -o object-endpoint=etc.. testpool s3 cloudburst-data-2
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses disk and new vdev is objectstore
```
This is done on purpose for now. Adding an object-store vdev to a pool is an irreversible operation and should be handled with caution.

= Note: Syncing Labels & The Uberblock
When starting from a block-based pool and adding an object-store vdev, there is a point where we have an object-store vdev in our config, but that vdev is not yet accepting allocations, and therefore we cannot sync the config to it. That point is the exact TXG where the vdev is added, and we need to sync its config changes to the labels & uberblock of our block-based vdevs. For this reason, I adjusted all the codepaths under `vdev_config_sync()` to be able to handle the update of the labels and uberblock of the local devices even when there is an object-store vdev. This way, if the next TXG fails, we still have the new vdev somewhere in our config. For all TXGs from that point on, we always sync the object store's config first. This is also the config that we always look at first when opening the pool.

The above changes to `vdev_config_sync()` alter the behavior of existing pure (i.e. non-hybrid) object-store pools so that they occasionally update the labels of their slog devices (i.e. every time we dirty the pool's config). This should not really have any negative effect on existing pure object-store pools. On the contrary, it should keep their labels up to date and potentially fix some extreme corner cases in pool import.

= Note: ZIL Allocations
When the pool is in a hybrid state (i.e. backed by both an object store and block devices) with no slog devices, we could make ZIL allocations fall back to the embedded slog or normal class. I left that functionality as future work. This is not a prerequisite for DOSE migration, as customers are expected to add zettacache (+ slog) devices as the first part of migration, so their VMs will always have at least one slog device by the time the object store vdev is added.

= Note: Storage Pool Checkpoint
Pure object-based pools (i.e. not hybrid ones) do the checkpoint rewinding process in the object agent. This is a different mechanism from the storage pool checkpoint in block-based pools. Until we need those two mechanisms to work well with each other, we avoid any migrations to the object store while a zpool checkpoint is in effect. See the `spa_ld_checkpoint_rewind()` usage in `spa_load_impl()` for more info.

= Note: Ordering of Import Paths
To import a hybrid pool we need to specify two import paths: (1) the path of the local block devices (e.g. `/dev/..etc`) and (2) the name of the bucket in the object store. Unfortunately, given how `zpool_find_import_agent()` is implemented, importing hybrid pools only works if we specify (2) first and then (1), not the opposite. Doing the opposite results in the zpool command hanging (again, this is because of the current XXX in the aforementioned function).
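A sketch of the working ordering described above; the exact flag syntax is an assumption modeled on the `zpool add` example earlier in this commit message, not taken from it:

```
# assumed syntax: the bucket name must come before the local device
# directory, otherwise the zpool command hangs
$ sudo zpool import -o object-endpoint=etc.. -d cloudburst-data-2 -d /dev/disk/by-id testpool
```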
= Note: Testing Losing Power Mid-Addition of the Object Store vdev
I tested that by manually introducing panics like so:
```
diff --git a/module/zfs/spa.c b/module/zfs/spa.c
index 5b55bb275..a82fab841 100644
--- a/module/zfs/spa.c
+++ b/module/zfs/spa.c
@@ -6969,7 +6969,9 @@ spa_vdev_add(spa_t *spa, nvlist_t *nvroot)
 	 * if we lose power at any point in this sequence, ...
 	 * steps will be completed the next time we load ...
 	 */
+	ASSERT(B_FALSE); // <--- panic before config sync
 	(void) spa_vdev_exit(spa, vd, txg, 0);
+	ASSERT(B_FALSE); // <--- panic before config sync
 	mutex_enter(&spa_namespace_lock);
 	spa_config_update(spa, SPA_CONFIG_UPDATE_POOL);
```
Importing the pool after the first panic, we come back without the object store vdev, as expected. Importing the pool after the second panic, we come back with the object store vdev, but we don't allocate from it or change the spa_pool_type until we create its metaslabs on disk.

= Next Steps
I'm planning to implement the augmented device removal logic which evacuates data from the block devices to the object store. Once that is done, I'm planning to work on avoiding the insertion of all the evacuated data into the zettacache.

Signed-off-by: Serapheim Dimitropoulos <serapheim@delphix.com>
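A hypothetical way to verify the two post-panic states described above (commands assumed, not from the commit):

```
# after either injected panic, re-import and inspect the config
$ sudo zpool import testpool
$ zpool status testpool   # panic 1: object-store vdev absent
                          # panic 2: object-store vdev present, not yet allocating
```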
Filing bug at request of ryao.
Kernel 3.4.1-vs2.3.3.4 with 3.4 patch from https://bugs.gentoo.org/show_bug.cgi?id=416685
Using git sources of both spl and zfs.
I rebooted into this kernel, and my scrub continued. I then initiated a zfs destroy -r on a filesystem with a few dozen snapshots, and then an rm -r of a large tree in a different filesystem on the same pool. I got some hung tasks for several minutes, but it eventually unfroze and continued.
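For reference, the workload above as commands (dataset and path names are hypothetical; the pools are the ones shown below):

```
# scrub of bkup0 was already in progress after the reboot
zfs destroy -r bkup0/somefs        # fs with a few dozen snapshots
rm -r /bkup0/otherfs/large-tree    # large tree in another fs on the same pool
```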
pool: bkup0
state: ONLINE
scan: scrub in progress since Sun Jun 17 00:50:01 2012
1.59T scanned out of 10.9T at 1.57M/s, (scan is slow, no estimated time)
0 repaired, 14.58% done
config:
errors: No known data errors
pool: bkup1
state: ONLINE
scan: resilvered 324G in 42h22m with 0 errors on Fri Jun 15 20:11:17 2012
config:
errors: No known data errors
[ 599.525371] INFO: task txg_sync:6809 blocked for more than 120 seconds.
[ 599.525375] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 599.525379] txg_sync D ffff88053fcd2400 0 6809 2 0x00000000
[ 599.525387] ffff8805008ef9e0 0000000000000046 ffff880521e82b60 0000000000012400
[ 599.525395] ffff8805008effd8 ffff8805008ee000 0000000000012400 0000000000004000
[ 599.525401] ffff8805008effd8 0000000000012400 ffff88052a208ec0 ffff880521e82b60
[ 599.525408] Call Trace:
[ 599.525419] [] ? _raw_spin_unlock_irqrestore+0x2f/0x40
[ 599.525427] [] ? try_to_wake_up+0xce/0x2a0
[ 599.525433] [] schedule+0x24/0x70
[ 599.525443] [] __cv_timedwait+0xa7/0x1a0 [spl]
[ 599.525446] [] ? wake_up_bit+0x40/0x40
[ 599.525449] [] __cv_wait+0xe/0x10 [spl]
[ 599.525457] [] zio_wait+0xf3/0x1b0 [zfs]
[ 599.525465] [] dsl_dataset_destroy_sync+0xa7c/0x1080 [zfs]
[ 599.525468] [] ? __cv_timedwait+0xc5/0x1a0 [spl]
[ 599.525477] [] dsl_sync_task_group_sync+0x123/0x3d0 [zfs]
[ 599.525485] [] dsl_pool_sync+0x20b/0x490 [zfs]
[ 599.525496] [] spa_sync+0x3a2/0x10b0 [zfs]
[ 599.525504] [] txg_init+0x4cb/0x9b0 [zfs]
[ 599.525513] [] ? txg_init+0x210/0x9b0 [zfs]
[ 599.525515] [] ? __thread_create+0x310/0x3a0 [spl]
[ 599.525518] [] __thread_create+0x383/0x3a0 [spl]
[ 599.525520] [] ? __thread_create+0x310/0x3a0 [spl]
[ 599.525522] [] kthread+0x96/0xa0
[ 599.525525] [] kernel_thread_helper+0x4/0x10
[ 599.525527] [] ? retint_restore_args+0xe/0xe
[ 599.525529] [] ? kthread_worker_fn+0x140/0x140
[ 599.525531] [] ? gs_change+0xb/0xb
[ 599.525548] INFO: task zfs:10166 blocked for more than 120 seconds.
[ 599.525549] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 599.525550] zfs D ffff88053fc52400 0 10166 10125 0x00000000
[ 599.525552] ffff8804d717fbe8 0000000000000086 ffff880521e824c0 0000000000012400
[ 599.525555] ffff8804d717ffd8 ffff8804d717e000 0000000000012400 0000000000004000
[ 599.525557] ffff8804d717ffd8 0000000000012400 ffff88052a1380c0 ffff880521e824c0
[ 599.525560] Call Trace:
[ 599.525562] [] ? spl_debug_get_subsys+0x51/0x460 [spl]
[ 599.525565] [] ? spl_debug_msg+0x413/0x970 [spl]
[ 599.525567] [] schedule+0x24/0x70
[ 599.525569] [] __cv_timedwait+0xa7/0x1a0 [spl]
[ 599.525571] [] ? wake_up_bit+0x40/0x40
[ 599.525573] [] __cv_wait+0xe/0x10 [spl]
[ 599.525581] [] txg_wait_synced+0xb3/0x190 [zfs]
[ 599.525590] [] dsl_sync_task_group_wait+0x109/0x240 [zfs]
[ 599.525597] [] ? dsl_dataset_disown+0x240/0x240 [zfs]
[ 599.525604] [] ? dsl_destroy_inconsistent+0x1c0/0x1c0 [zfs]
[ 599.525612] [] dsl_sync_task_do+0x54/0x80 [zfs]
[ 599.525620] [] dsl_dataset_destroy+0x132/0x490 [zfs]
[ 599.525627] [] ? dsl_dataset_tryown+0x49/0x120 [zfs]
[ 599.525633] [] dmu_objset_destroy+0x36/0x40 [zfs]
[ 599.525639] [] zfs_unmount_snap+0x253/0x480 [zfs]
[ 599.525646] [] pool_status_check+0x196/0x270 [zfs]
[ 599.525648] [] do_vfs_ioctl+0x96/0x500
[ 599.525650] [] ? alloc_fd+0x47/0x140
[ 599.525652] [] ? trace_hardirqs_off_thunk+0x3a/0x6c
[ 599.525654] [] sys_ioctl+0x4a/0x80
[ 599.525656] [] system_call_fastpath+0x16/0x1b
[ 599.525658] INFO: task rm:10209 blocked for more than 120 seconds.
[ 599.525659] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 599.525660] rm D ffff88053fc52400 0 10209 10168 0x00000000
[ 599.525662] ffff8804dfff5b48 0000000000000086 ffff880515a5b7e0 0000000000012400
[ 599.525664] ffff8804dfff5fd8 ffff8804dfff4000 0000000000012400 0000000000004000
[ 599.525667] ffff8804dfff5fd8 0000000000012400 ffff88052a1380c0 ffff880515a5b7e0
[ 599.525669] Call Trace:
[ 599.525671] [] ? _raw_spin_unlock_irqrestore+0x2f/0x40
[ 599.525673] [] ? try_to_wake_up+0xce/0x2a0
[ 599.525675] [] schedule+0x24/0x70
[ 599.525677] [] __cv_timedwait+0xa7/0x1a0 [spl]
[ 599.525679] [] ? wake_up_bit+0x40/0x40
[ 599.525681] [] __cv_wait+0xe/0x10 [spl]
[ 599.525689] [] txg_wait_open+0x83/0x110 [zfs]
[ 599.525696] [] dmu_tx_wait+0xed/0xf0 [zfs]
[ 599.525703] [] dmu_tx_assign+0x86/0xe90 [zfs]
[ 599.525708] [] ? dmu_buf_rele+0x2b/0x40 [zfs]
[ 599.525714] [] zfs_rmnode+0x127/0x340 [zfs]
[ 599.525720] [] zfs_zinactive+0x8b/0x100 [zfs]
[ 599.525725] [] zfs_inactive+0x66/0x210 [zfs]
[ 599.525729] [] zpl_vap_init+0x603/0x6c0 [zfs]
[ 599.525732] [] evict+0xa6/0x1b0
[ 599.525734] [] iput+0x103/0x210
[ 599.525735] [] do_unlinkat+0x157/0x1c0
[ 599.525738] [] ? sys_newfstatat+0x2e/0x40
[ 599.525740] [] ? trace_hardirqs_on_thunk+0x3a/0x3c
[ 599.525742] [] sys_unlinkat+0x1d/0x40
[ 599.525743] [] system_call_fastpath+0x16/0x1b
[ 839.000966] INFO: task zfs:10227 blocked for more than 120 seconds.
[ 839.000967] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 839.000968] zfs D ffff88053fc92400 0 10227 10211 0x00000000
[ 839.000970] ffff8804bbfc7bf0 0000000000000082 ffff8804f66690c0 0000000000012400
[ 839.000972] ffff8804bbfc7fd8 ffff8804bbfc6000 0000000000012400 0000000000004000
[ 839.000975] ffff8804bbfc7fd8 0000000000012400 ffff88052a1f7520 ffff8804f66690c0
[ 839.000977] Call Trace:
[ 839.000979] [] ? vsnprintf+0x356/0x610
[ 839.000981] [] ? spl_debug_get_subsys+0x51/0x460 [spl]
[ 839.000984] [] ? spl_debug_msg+0x413/0x970 [spl]
[ 839.000992] [] ? spa_meta_objset+0x19/0x40 [zfs]
[ 839.000994] [] schedule+0x24/0x70
[ 839.000996] [] rwsem_down_failed_common+0xc5/0x150
[ 839.000998] [] rwsem_down_read_failed+0x15/0x17
[ 839.000999] [] call_rwsem_down_read_failed+0x14/0x30
[ 839.001001] [] ? down_read+0x12/0x20
[ 839.001009] [] dsl_dir_open_spa+0xb3/0x540 [zfs]
[ 839.001017] [] ? spa_meta_objset+0x19/0x40 [zfs]
[ 839.001019] [] ? avl_find+0x5a/0xa0 [zavl]
[ 839.001026] [] dsl_dataset_hold+0x3b/0x290 [zfs]
[ 839.001029] [] ? __kmalloc+0x13e/0x1c0
[ 839.001035] [] dmu_objset_hold+0x1f/0x1a0 [zfs]
[ 839.001042] [] zfs_secpolicy_smb_acl+0x2f84/0x3b20 [zfs]
[ 839.001048] [] ? pool_status_check+0x61/0x270 [zfs]
[ 839.001054] [] pool_status_check+0x196/0x270 [zfs]
[ 839.001056] [] do_vfs_ioctl+0x96/0x500
[ 839.001058] [] ? trace_hardirqs_off_thunk+0x3a/0x6c
[ 839.001059] [] sys_ioctl+0x4a/0x80
[ 839.001061] [] system_call_fastpath+0x16/0x1b