Zfs incremental recv fails with dataset is busy error #1761

Closed
olw2005 opened this issue Sep 30, 2013 · 26 comments
Labels
Component: ZVOL ZFS Volumes

Comments

olw2005 commented Sep 30, 2013

Dear All,
First let me say, keep up the great work! Happy to see my tax dollars funding something useful.

Brief problem description:
We have two servers set up to send rotating incremental snapshots of their local zvols to each other (zfs send -I). For the most part this works well, but about once a week the zfs recv fails with a 'dataset is busy' error.
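
For context, each side's cron job boils down to roughly the following (a simplified sketch of our script; the snapshot-name variables are illustrative and the real thing has error handling):

    # take the new hourly snapshot on the source zvol
    zfs snapshot pool/detroit@hour-${NOW}

    # send everything between the last replicated snapshot and the new one
    # to the partner server, which applies it to the matching "repl" zvol
    zfs send -I pool/detroit@hour-${LAST} pool/detroit@hour-${NOW} | \
        ssh ctc-san2 zfs recv -F pool/detroitrepl

    # prune the oldest snapshot on both ends afterwards
    zfs destroy pool/detroit@hour-${OLDEST}
    ssh ctc-san2 zfs destroy pool/detroitrepl@hour-${OLDEST}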

Dmesg output is:
Buffer I/O error on device zd16, logical block 0
Buffer I/O error on device zd16, logical block 0
Buffer I/O error on device zd16, logical block 0

Expected dmesg output:
zd16:
GPT:Primary header thinks Alt. header is not at the end of the disk.
GPT:34358689719 != 34359738367
GPT:Alternate GPT header not at the end of the disk.
GPT:34358689719 != 34359738367
GPT: Use GNU Parted to correct GPT errors.
p1
(The zvol is used as the disk for a drbd resource. Drbd uses the last bit of the disk to store its own metadata, presenting a slightly smaller disk to layers above it. Thus the spurious GPT warning is normal / expected.)

After this error, all subsequent zfs recv's on that particular zvol [pool/chicagorepl] fail with the same error. The only way to clear the error is to reboot the server. Meanwhile, the two other zvols [arch/archive and pool/detroit] on the server continue to work fine, creating / sending / destroying snapshots without issue.

Server Configuration / Setup Info:
HP DL380p Gen8, 192 GB RAM, four external disk chassis [48 disks], two SSDs set up as L2ARC cache devices
CentOS 6.4 with recent updates
zfs & spl 0.6.2

[root@dtc-san2 ~]# uname -a
Linux dtc-san2.stc.ricplc.com 2.6.32-358.18.1.el6.x86_64 #1 SMP Wed Aug 28 17:19:38 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

[root@dtc-san2 ~]# cat /etc/modprobe.d/zfs.conf
options zfs zfs_nocacheflush=1 zfs_arc_max=154618822656 zfs_arc_min=1073741824

[root@dtc-san2 ~]# zpool status
pool: arch
state: ONLINE
scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    arch        ONLINE       0     0     0
      raidz2-0  ONLINE       0     0     0
        sdan    ONLINE       0     0     0
        sdao    ONLINE       0     0     0
        sdap    ONLINE       0     0     0
        sdaq    ONLINE       0     0     0
        sdar    ONLINE       0     0     0
        sdas    ONLINE       0     0     0
        sdat    ONLINE       0     0     0
        sdau    ONLINE       0     0     0
        sdav    ONLINE       0     0     0
        sdaw    ONLINE       0     0     0
        sdax    ONLINE       0     0     0
    spares
      sday      AVAIL

errors: No known data errors

pool: pool
state: ONLINE
scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    pool        ONLINE       0     0     0
      raidz2-0  ONLINE       0     0     0
        sdd     ONLINE       0     0     0
        sde     ONLINE       0     0     0
        sdf     ONLINE       0     0     0
        sdg     ONLINE       0     0     0
        sdh     ONLINE       0     0     0
        sdi     ONLINE       0     0     0
        sdj     ONLINE       0     0     0
        sdk     ONLINE       0     0     0
        sdl     ONLINE       0     0     0
        sdm     ONLINE       0     0     0
      raidz2-1  ONLINE       0     0     0
        sdp     ONLINE       0     0     0
        sdq     ONLINE       0     0     0
        sdr     ONLINE       0     0     0
        sds     ONLINE       0     0     0
        sdt     ONLINE       0     0     0
        sdu     ONLINE       0     0     0
        sdv     ONLINE       0     0     0
        sdw     ONLINE       0     0     0
        sdx     ONLINE       0     0     0
        sdy     ONLINE       0     0     0
      raidz2-2  ONLINE       0     0     0
        sdab    ONLINE       0     0     0
        sdac    ONLINE       0     0     0
        sdae    ONLINE       0     0     0
        sdaf    ONLINE       0     0     0
        sdag    ONLINE       0     0     0
        sdah    ONLINE       0     0     0
        sdai    ONLINE       0     0     0
        sdaj    ONLINE       0     0     0
        sdak    ONLINE       0     0     0
    cache
      sdb       ONLINE       0     0     0
      sdc       ONLINE       0     0     0
    spares
      sdn       AVAIL
      sdo       AVAIL
      sdz       AVAIL
      sdaa      AVAIL
      sdal      AVAIL
      sdam      AVAIL

errors: No known data errors

[root@dtc-san2 ~]# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
arch              19.8T  4.20T  65.8K  none
arch/archive      19.8T  4.20T  19.1T  -
pool              20.8T  40.6T  64.5K  none
pool/chicagorepl  5.98T  40.6T  4.77T  -
pool/detroit      14.6T  40.6T  13.8T  -
pool/vdistore      177G  40.6T   177G  -

Abbreviated zpool history:
History for 'arch':
...
2013-09-30.14:16:07 zfs snapshot arch/archive@hour-20130930141601
2013-09-30.14:16:33 zfs send -I arch/archive@hour-20130930131601 arch/archive@hour-20130930141601
2013-09-30.14:28:10 zfs destroy arch/archive@hour-20130930101601
2013-09-30.15:16:07 zfs snapshot arch/archive@hour-20130930151601
2013-09-30.15:16:31 zfs send -I arch/archive@hour-20130930141601 arch/archive@hour-20130930151601

History for 'pool':
...
2013-09-30.12:05:11 zfs recv -F pool/chicagorepl
2013-09-30.12:05:33 zfs destroy pool/chicagorepl@hour-20130930070101
2013-09-30.12:14:13 zfs destroy pool/detroit@hour-20130930064601
2013-09-30.12:14:20 zfs destroy pool/detroit@hour-20130930074601
2013-09-30.12:42:06 zfs snapshot pool/detroit@4hour-20130930124201
2013-09-30.12:42:32 zfs destroy pool/detroit@4hour-20130929124201
2013-09-30.12:46:06 zfs snapshot pool/detroit@hour-20130930124601
2013-09-30.12:46:49 zfs send -I pool/detroit@hour-20130930114601 pool/detroit@hour-20130930124601
2013-09-30.12:54:15 zfs destroy pool/detroit@hour-20130930084601
2013-09-30.13:20:01 zfs recv -F pool/chicagorepl
2013-09-30.13:20:20 zfs destroy pool/chicagorepl@hour-20130930080101
2013-09-30.13:46:07 zfs snapshot pool/detroit@hour-20130930134601
2013-09-30.13:46:53 zfs send -I pool/detroit@hour-20130930124601 pool/detroit@hour-20130930134601
2013-09-30.14:00:29 zfs destroy pool/detroit@hour-20130930094601
2013-09-30.14:01:23 zfs destroy pool/chicagorepl@4hour-20130929125601
----> 2013-09-30.14:02:40 zfs recv -F pool/chicagorepl
2013-09-30.14:46:06 zfs snapshot pool/detroit@hour-20130930144601
2013-09-30.14:46:47 zfs send -I pool/detroit@hour-20130930134601 pool/detroit@hour-20130930144601

The line marked with ----> is where it went south. Note that the subsequent snapshot and send operations on "pool/detroit" and "arch/archive" had no errors...

If I can provide more information, please let me know. I'm an engineer / IT guy, not a developer, but I can read and follow instructions well. =) I have purposefully left the system as-is. It is non-primary storage, so it can be rebooted / recompiled / whatever as needed. Looking forward to hearing from you!

Regards,
Owen L. Wieck
Ricardo, Inc.

olw2005 (Author) commented Sep 30, 2013

Oh, I forgot to mention: the 'dataset is busy' error seems to occur (usually; I'm not sure if it's 100% of the time) when the server is in the middle of a zfs send on another zpool / zvol. In this case, it was running an incremental send of arch/archive to another server when the recv failed on pool/detroitrepl. I've tried to set them up to run at different times, but occasionally they overlap.

olw2005 (Author) commented Oct 1, 2013

A more verbose error message from the receiving side. This is after the 'dataset is busy' error noted above. I manually created an incremental snapshot file (chicago_snap1.snap), copied it over, and attempted to apply it locally on the receive side.
[root@dtc-san2 ~]# cat chicago_snap1.snap | zfs recv -vF pool/chicagorepl
receiving incremental stream of pool/chicago@hour-20130930180101 into pool/chicagorepl@hour-20130930180101
cannot receive incremental stream: dataset is busy
cannot create device links for 'pool/chicagorepl': dataset is busy

dweeezil (Contributor) commented Oct 4, 2013

@olw2005 It would be interesting to see your experiment above without the "-F" option on the receive side. If the zvol on the receive side has been touched in any way since the last receive, the -F causes an automatic rollback to be attempted and that may be what's failing. It would also be interesting to see "zpool history -i" on the receiving side because that will show the underlying operations of an automatic rollback if it was required.
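
Something along these lines on the receiving side would show it (illustrative; substitute your actual stream and dataset names):

    # retry the same stream without -F, so no automatic rollback is attempted
    cat chicago_snap1.snap | zfs recv -v pool/chicagorepl

    # the extended history also records the internal operations
    # (rollbacks, replay_inc_sync, etc.) behind each receive
    zpool history -i pool | tail -n 50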

prakashsurya (Member) commented:

@olw2005 Do you run automatic snapshots on the pool receiving the snapshot?

olw2005 (Author) commented Oct 4, 2013

@dweeezil I have since rebooted the node, so I will try it without -F the next time it errors out. That may take a few days. The -F is there "just in case"; the zvol on the receiving end doesn't get touched at all.

I was not aware of the -i option for history. It produces 38k lines of output, but here is a small subset from around the time of the error (---> indicates when the buffer error appears in the log, the zfs send side fails, and the 'dataset is busy' error shows up for the pool/detroitrepl zvol):

ARCH zpool:
2013-09-30.13:16:37 zfs send -I arch/archive@hour-20130930121601 arch/archive@hour-20130930131601
2013-09-30.13:16:37 [internal user hold txg:2734613] <.send-20546-1> temp = 1 dataset = 20910
2013-09-30.13:32:07 [internal user release txg:2734799] <.send-20546-1> 0 dataset = 20869
2013-09-30.13:32:07 [internal user release txg:2734800] <.send-20546-1> 0 dataset = 20910
2013-09-30.13:33:34 [internal destroy txg:2734818] dataset = 20740
2013-09-30.13:33:39 zfs destroy arch/archive@hour-20130930091601
2013-09-30.14:16:02 [internal snapshot txg:2735327] dataset = 20954
2013-09-30.14:16:07 zfs snapshot arch/archive@hour-20130930141601
2013-09-30.14:16:33 [internal user hold txg:2735334] <.send-62620-1> temp = 1 dataset = 20910
2013-09-30.14:16:33 zfs send -I arch/archive@hour-20130930131601 arch/archive@hour-20130930141601
2013-09-30.14:16:33 [internal user hold txg:2735335] <.send-62620-1> temp = 1 dataset = 20954
2013-09-30.14:26:27 [internal user release txg:2735454] <.send-62620-1> 0 dataset = 20910
2013-09-30.14:26:27 [internal user release txg:2735455] <.send-62620-1> 0 dataset = 20954
2013-09-30.14:28:05 [internal destroy txg:2735475] dataset = 20784
2013-09-30.14:28:10 zfs destroy arch/archive@hour-20130930101601
2013-09-30.15:16:02 [internal snapshot txg:2736050] dataset = 20998
2013-09-30.15:16:07 zfs snapshot arch/archive@hour-20130930151601

POOL zpool:
2013-09-30.13:46:53 zfs send -I pool/detroit@hour-20130930124601 pool/detroit@hour-20130930134601
2013-09-30.13:46:53 [internal user hold txg:4016108] <.send-9318-1> temp = 1 dataset = 5525
2013-09-30.13:57:58 [internal user release txg:4016242] <.send-9318-1> 0 dataset = 5403
2013-09-30.13:57:59 [internal user release txg:4016243] <.send-9318-1> 0 dataset = 5525
2013-09-30.14:00:24 [internal destroy txg:4016273] dataset = 5018
2013-09-30.14:00:29 zfs destroy pool/detroit@hour-20130930094601
2013-09-30.14:01:18 [internal destroy txg:4016284] dataset = 2363
2013-09-30.14:01:23 zfs destroy pool/chicagorepl@4hour-20130929125601
2013-09-30.14:01:38 [internal replay_inc_sync txg:4016288] dataset = 5592
2013-09-30.14:02:39 [internal snapshot txg:4016302] dataset = 5624
2013-09-30.14:02:39 [internal destroy txg:4016303] dataset = 5592
2013-09-30.14:02:39 [internal property set txg:4016303] reservation=0 dataset = 5592
--> 2013-09-30.14:02:40 zfs recv -F pool/chicagorepl
--> 2013-09-30.14:02:40 [internal replay_inc_sync txg:4016304] dataset = 5630
--> 2013-09-30.14:02:52 [internal destroy txg:4016310] dataset = 5630
--> 2013-09-30.14:02:52 [internal property set txg:4016310] reservation=0 dataset = 5630
2013-09-30.14:46:01 [internal snapshot txg:4016831] dataset = 5659
2013-09-30.14:46:06 zfs snapshot pool/detroit@hour-20130930144601
2013-09-30.14:46:47 [internal user hold txg:4016841] <.send-51684-1> temp = 1 dataset = 5525
2013-09-30.14:46:47 zfs send -I pool/detroit@hour-20130930134601 pool/detroit@hour-20130930144601
2013-09-30.14:46:47 [internal user hold txg:4016842] <.send-51684-1> temp = 1 dataset = 5659
2013-09-30.14:58:14 [internal user release txg:4016980] <.send-51684-1> 0 dataset = 5525

Shortly after I restarted the machine on the morning of 1 Oct, the cron job on the other server ran a "zfs send -I" with multiple snapshots, and the server in question hit the same error (buffer I/O in dmesg / /var/log/messages and the 'dataset is busy' error from zfs) partway through the transmission. Interestingly, it did receive and process about 3 of the incremental snapshots before it locked up. Here's the history -i from that timeframe:

ARCH:
2013-10-01.08:18:33 zfs destroy arch/archive@hour-20131001041601
2013-10-01.09:16:01 [internal snapshot txg:2749053] dataset = 12290
2013-10-01.09:16:06 zfs snapshot arch/archive@hour-20131001091601
2013-10-01.09:16:25 [internal user hold txg:2749058] <.send-29280-1> temp = 1 dataset = 21742
2013-10-01.09:16:25 zfs send -I arch/archive@hour-20131001081601 arch/archive@hour-20131001091601
2013-10-01.09:16:25 [internal user hold txg:2749059] <.send-29280-1> temp = 1 dataset = 12290
2013-10-01.09:18:20 [internal user release txg:2749082] <.send-29280-1> 0 dataset = 21742
2013-10-01.09:18:20 [internal user release txg:2749083] <.send-29280-1> 0 dataset = 12290
2013-10-01.09:19:45 [internal destroy txg:2749100] dataset = 21614
2013-10-01.09:19:49 zfs destroy arch/archive@hour-20131001051601
2013-10-01.10:16:02 [internal snapshot txg:2749776] dataset = 12345

POOL:
2013-10-01.08:52:59 zfs destroy pool/detroit@hour-20131001044601
2013-10-01.09:01:33 [internal replay_inc_sync txg:4030107] dataset = 41035
2013-10-01.09:01:49 [internal snapshot txg:4030112] dataset = 41052
2013-10-01.09:01:49 [internal destroy txg:4030113] dataset = 41035
2013-10-01.09:01:49 [internal property set txg:4030113] reservation=0 dataset = 41035
2013-10-01.09:01:50 zfs recv -F pool/chicagorepl
2013-10-01.09:01:50 [internal replay_inc_sync txg:4030114] dataset = 41058
2013-10-01.09:05:07 [internal snapshot txg:4030155] dataset = 41079
2013-10-01.09:05:07 [internal destroy txg:4030156] dataset = 41058
2013-10-01.09:05:07 [internal property set txg:4030156] reservation=0 dataset = 41058
2013-10-01.09:05:10 [internal replay_inc_sync txg:4030157] dataset = 41085
2013-10-01.09:08:22 [internal snapshot txg:4030197] dataset = 41107
2013-10-01.09:08:23 [internal destroy txg:4030198] dataset = 41085
2013-10-01.09:08:23 [internal property set txg:4030198] reservation=0 dataset = 41085
2013-10-01.09:08:23 [internal replay_inc_sync txg:4030199] dataset = 41113
2013-10-01.09:12:01 [internal snapshot txg:4030244] dataset = 41136
2013-10-01.09:12:02 [internal destroy txg:4030245] dataset = 41113
2013-10-01.09:12:02 [internal property set txg:4030245] reservation=0 dataset = 41113
2013-10-01.09:12:02 [internal replay_inc_sync txg:4030246] dataset = 41142
2013-10-01.09:16:37 [internal snapshot txg:4030302] dataset = 41166
2013-10-01.09:16:37 [internal destroy txg:4030303] dataset = 41142
2013-10-01.09:16:37 [internal property set txg:4030303] reservation=0 dataset = 41142
2013-10-01.09:16:38 [internal replay_inc_sync txg:4030304] dataset = 41172
--> 2013-10-01.09:17:02 [internal snapshot txg:4030310] dataset = 41195
--> 2013-10-01.09:17:02 [internal destroy txg:4030311] dataset = 41172
--> 2013-10-01.09:17:02 [internal property set txg:4030311] reservation=0 dataset = 41172
--> 2013-10-01.09:17:02 [internal replay_inc_sync txg:4030312] dataset = 41201
2013-10-01.09:25:25 [internal destroy txg:4030416] dataset = 41201
2013-10-01.09:25:25 [internal property set txg:4030416] reservation=0 dataset = 41201
2013-10-01.09:34:21 [internal replay_inc_sync txg:4030524] dataset = 41231
2013-10-01.09:42:15 [internal destroy txg:4030622] dataset = 41231
2013-10-01.09:42:15 [internal property set txg:4030622] reservation=0 dataset = 41231
2013-10-01.09:46:02 [internal snapshot txg:4030669] dataset = 41257
2013-10-01.09:46:07 zfs snapshot pool/detroit@hour-20131001094601
2013-10-01.09:46:35 [internal user hold txg:4030676] <.send-19041-1> temp = 1 dataset = 40987
2013-10-01.09:46:35 zfs send -I pool/detroit@hour-20131001084601 pool/detroit@hour-20131001094601
2013-10-01.09:46:35 [internal user hold txg:4030677] <.send-19041-1> temp = 1 dataset = 41257
2013-10-01.09:51:20 [internal user release txg:4030735] <.send-19041-1> 0 dataset = 40987
2013-10-01.09:51:20 [internal user release txg:4030736] <.send-19041-1> 0 dataset = 41257
2013-10-01.09:52:59 [internal destroy txg:4030756] dataset = 7023

olw2005 (Author) commented Oct 4, 2013

@prakashsurya Yes, we are running automatic snapshots on both ends. We're doing cross zfs send/recv's between servers in Chicago and Detroit (ctc-san2 and dtc-san2). ctc-san2 contains pool/chicago and pool/detroitrepl. dtc-san2 contains pool/detroit and pool/chicagorepl. (It also contains another zvol on a separate pool, arch/archive.) The non-repl zvols run automated snapshots in conjunction with zfs send / receive via cron. The "repl" zvols are the targets of the send/recv. A little complicated, I know... Make sense?
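
Laid out schematically (just restating the above):

    # ctc-san2 (Chicago)                      dtc-san2 (Detroit)
    # pool/chicago      --- zfs send -I --->  pool/chicagorepl
    # pool/detroitrepl  <--- zfs send -I ---  pool/detroit
    #                                         arch/archive (sent to another box)
    #
    # non-repl zvols: automatic snapshots + zfs send via cron
    # "repl" zvols:   zfs recv targets only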

prakashsurya (Member) commented:

My guess is you're running into the same issue I was: the snapshot creation is causing the zfs receive to fail. Please see: #1590

olw2005 (Author) commented Oct 7, 2013

@prakashsurya Thanks for the response! I actually read that bug report earlier, but assumed it applied to inbound (zfs recv'd) snapshots on the same dataset that was being "zfs sent". To be clear, are you saying it applies to any dataset in the same pool? E.g. a zfs send of pool/zvolA in conjunction with a zfs recv of pool/zvolB.

prakashsurya (Member) commented:

@olw2005 From what I understand, #1590 can be triggered by creating a snapshot on pool/zvolB before or during the receive of pool/zvolA, that is not in the receive stream. So, if you have automatic snapshots occurring on pool/zvolB, then I would not be surprised if you hit that issue. I don't know for certain (I would need to read the code), but my guess is the snapshot is manipulating the on-disk structures such that the incremental receive stream cannot apply cleanly onto pool/zvolB (because pool/zvolA does not contain the same snapshot that was created independently on pool/zvolB).
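
In other words, a sequence roughly like this (names are illustrative) would be enough to trigger it:

    # incremental stream generated from the source dataset
    zfs send -i pool/zvolA@snap1 pool/zvolA@snap2 > inc.zfs

    # meanwhile an automatic snapshot fires on the *target* dataset;
    # it exists only on the receive side, not in the stream
    zfs snapshot pool/zvolB@auto-1

    # the incremental receive may then fail, since the target no longer
    # matches the state the stream expects
    zfs recv -F pool/zvolB < inc.zfs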

olw2005 (Author) commented Oct 16, 2013

Another occurrence, this time on the Chicago server (ctc-san2). Log files and other assorted output are below. I tried doing a manual "zfs recv" without the -F; no joy. Three items of note:

  1. There is no other snapshot activity taking place at the time of the lockup (see output below)
  2. Also of interest: after the "dataset busy" error came up, the zvol refuses new incremental snapshots (with the same error), but snapshot destroys (of our old 4hr snaps) continue to work for it...?!
  3. There were two snapshots in the "zfs send -I" stream, a "4hour" and an "hour". The zfs recv succeeds on the "4hour" snapshot, then fails almost immediately on the "hour" snapshot (with the dataset busy + buffer I/O error) as before. I'm not sure, but I think that may be significant.

I have to reboot the host ctc-san2 now so it can catch up with the Detroit side, but I would appreciate any other suggestions to try.

Assorted Log File spewage:
DETROIT SEND-SIDE TIMESTAMPED LOGS FROM REPLICATION SCRIPT

Tue Oct 15 20:46:01 EDT 2013 -> pool/detroit@hour-20131015204601 Snapshot creation.
Tue Oct 15 20:46:21 EDT 2013 -> Destroying snapshot pool/detroitrepl@4hour-20131014204201 on ctc-san2.stc.ricplc.com
Tue Oct 15 20:46:39 EDT 2013 -> pool/detroit@hour-20131015194601 pool/detroit@hour-20131015204601 Incremental send.
Tue Oct 15 21:00:47 EDT 2013 cannot receive incremental stream: dataset is busy cannot create device links for 'pool/detroitrepl': dataset is busy
Tue Oct 15 21:00:47 EDT 2013 End ------------------------------------------------------------

Tue Oct 15 21:46:01 EDT 2013 -> pool/detroit@hour-20131015214601 Snapshot creation.
Tue Oct 15 21:46:33 EDT 2013 -> pool/detroit@4hour-20131015204201 pool/detroit@hour-20131015214601 Incremental send.
Tue Oct 15 21:48:19 EDT 2013 warning: cannot send 'pool/detroit@hour-20131015214601': Broken pipe ce links for 'pool/detroitrepl': dataset is busy
Tue Oct 15 21:48:19 EDT 2013 End ------------------------------------------------------------

Wed Oct 16 16:42:02 EDT 2013 -> pool/detroit@4hour-20131016164202 Snapshot creation.
Wed Oct 16 16:42:18 EDT 2013 -> Destroying snapshot pool/detroit@4hour-20131015164201 on localhost
Wed Oct 16 16:42:18 EDT 2013 End ------------------------------------------------------------

Wed Oct 16 16:46:01 EDT 2013 -> pool/detroit@hour-20131016164601 Snapshot creation.
Wed Oct 16 16:46:19 EDT 2013 -> Destroying snapshot pool/detroitrepl@4hour-20131015164201 on ctc-san2.stc.ricplc.com
Wed Oct 16 16:46:29 EDT 2013 -> pool/detroit@4hour-20131015204201 pool/detroit@hour-20131016164601 Incremental send.

CHICAGO SEND-SIDE TIMESTAMPED LOGS FROM REPLICATION SCRIPT

Tue Oct 15 20:01:01 CDT 2013 -> pool/chicago@hour-20131015200101 Snapshot creation.
Tue Oct 15 20:01:34 CDT 2013 -> pool/chicago@hour-20131015190101 pool/chicago@hour-20131015200101 Incremental send.
Tue Oct 15 20:02:34 CDT 2013 -> Destroying snapshot pool/chicago@hour-20131015160101 on localhost
Tue Oct 15 20:02:34 CDT 2013 -> Destroying snapshot pool/chicagorepl@hour-20131015160101 on dtc-san2.stc.ricplc.com
Tue Oct 15 20:02:35 CDT 2013 End ------------------------------------------------------------

CTC-SAN2 /var/log/messages -- Snapshot @4hour-20131015204201 was received, detroit@hour-20131015204601 was not.

Oct 15 19:59:19 ctc-san2 kernel: zd16:
Oct 15 19:59:19 ctc-san2 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 15 19:59:19 ctc-san2 kernel: GPT:68717379511 != 68719476735
Oct 15 19:59:19 ctc-san2 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 15 19:59:19 ctc-san2 kernel: GPT:68717379511 != 68719476735
Oct 15 19:59:19 ctc-san2 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 15 19:59:19 ctc-san2 kernel: p1
Oct 15 19:59:19 ctc-san2 kernel: Buffer I/O error on device zd16, logical block 0
Oct 15 19:59:19 ctc-san2 kernel: Buffer I/O error on device zd16, logical block 0
Oct 15 19:59:19 ctc-san2 kernel: Buffer I/O error on device zd16, logical block 0

CTC-SAN2 zpool history -i

2013-10-15.19:46:22 [internal destroy txg:4657566] dataset = 58075
2013-10-15.19:46:28 zfs destroy pool/detroitrepl@4hour-20131014204201
2013-10-15.19:46:42 [internal replay_inc_sync txg:4657571] dataset = 60797
2013-10-15.19:59:19 [internal snapshot txg:4657724] dataset = 60821
2013-10-15.19:59:19 [internal destroy txg:4657725] dataset = 60797
2013-10-15.19:59:19 [internal property set txg:4657725] reservation=0 dataset = 60797
2013-10-15.19:59:19 zfs recv -F pool/detroitrepl
2013-10-15.19:59:19 [internal replay_inc_sync txg:4657726] dataset = 60827
--> 2013-10-15.20:00:46 [internal destroy txg:4657747] dataset = 60827
--> 2013-10-15.20:00:46 [internal property set txg:4657747] reservation=0 dataset = 60827
2013-10-15.20:01:01 [internal snapshot txg:4657751] dataset = 60852
2013-10-15.20:01:06 zfs snapshot pool/chicago@hour-20131015200101

2013-10-15.20:46:36 [internal replay_inc_sync txg:4658302] dataset = 60901
--> 2013-10-15.20:48:17 [internal destroy txg:4658326] dataset = 60901
--> 2013-10-15.20:48:17 [internal property set txg:4658326] reservation=0 dataset = 60901

2013-10-15.21:46:37 [internal replay_inc_sync txg:4659034] dataset = 60996
--> 2013-10-15.21:48:06 [internal destroy txg:4659055] dataset = 60996
--> 2013-10-15.21:48:06 [internal property set txg:4659055] reservation=0 dataset = 60996

2013-10-16.15:46:20 [internal destroy txg:4672181] dataset = 60398
2013-10-16.15:46:25 zfs destroy pool/detroitrepl@4hour-20131015164201
2013-10-16.15:46:32 [internal replay_inc_sync txg:4672184] dataset = 46263

TRYING ZFS RECV MANUALLY (w/o -F)...

[root@dtc-san2 ~]# zfs send -i pool/detroit@4hour-20131015204201 pool/detroit@hour-20131015204601 > detroit_inc_snap1.zfssnap
[root@dtc-san2 ~]# ls -lh detroit_inc_snap1.zfssnap
-rw-r--r-- 1 root root 2.6G Oct 16 16:01 detroit_inc_snap1.zfssnap
[root@dtc-san2 ~]# gzip !$
gzip detroit_inc_snap1.zfssnap
[root@dtc-san2 ~]# date
Wed Oct 16 16:20:01 EDT 2013
You have new mail in /var/spool/mail/root
[root@dtc-san2 ~]# ls -l detroit_inc_snap1.zfssnap.gz
-rw-r--r-- 1 root root 646109695 Oct 16 16:01 detroit_inc_snap1.zfssnap.gz
[root@dtc-san2 ~]# ls -lh !$
ls -lh detroit_inc_snap1.zfssnap.gz
-rw-r--r-- 1 root root 617M Oct 16 16:01 detroit_inc_snap1.zfssnap.gz
[root@dtc-san2 ~]# scp detroit_inc_snap1.zfssnap.gz root@ctc-san2:/root
detroit_inc_snap1.zfssnap.gz 100% 616MB 11.2MB/s 00:55

[root@ctc-san2 ~]# gzip -d detroit_inc_snap1.zfssnap.gz
[root@ctc-san2 ~]# cat detroit_inc_snap1.zfssnap | zfs recv pool/detroitrepl
cannot receive incremental stream: dataset is busy
cannot create device links for 'pool/detroitrepl': dataset is busy
[root@ctc-san2 ~]# date
Wed Oct 16 15:32:17 CDT 2013
[root@ctc-san2 ~]# zpool history -i | tail -n 5
2013-10-16.15:02:58 zfs destroy pool/chicago@hour-20131016110101
2013-10-16.15:30:57 [internal replay_inc_sync txg:4671988] dataset = 46229
2013-10-16.15:32:07 [internal destroy txg:4672005] dataset = 46229
2013-10-16.15:32:07 [internal property set txg:4672005] reservation=0 dataset = 46229

olw2005 (Author) commented Feb 5, 2014

Looks like this is similar to issue #2104, which mentions a fix in the latest master. I just compiled and installed the latest master of spl & zfs from git this morning on one of our secondary nodes. (Note: I ran into the uname-r build problem referenced in another recent issue when trying to compile zfs. [Still running CentOS 6.5.] Fixed by commenting out the four require lines in zfs/scripts/kmodtool. The spl rpms compiled without incident.) Afterward the system survived three consecutive zfs incremental sends (followed by zfs destroy operations on old snapshots) running concurrently with a zfs recv (again followed by zfs destroy of old snaps). Previously this almost certainly would have resulted in a 'dataset busy' lockup and/or a kernel oops. I'd like to wait a few days before declaring it resolved, but it certainly looks encouraging. Well done, gents!
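
For anyone else wanting to try it, the build sequence was roughly the following (from memory, so adjust for your environment; spl has to be built and installed before zfs):

    git clone https://github.com/zfsonlinux/spl.git
    git clone https://github.com/zfsonlinux/zfs.git

    cd spl
    ./autogen.sh && ./configure && make rpm
    rpm -Uvh *.x86_64.rpm

    cd ../zfs
    # workaround for the kmodtool failure: comment out the four require
    # lines in scripts/kmodtool before building
    ./autogen.sh && ./configure && make rpm
    rpm -Uvh *.x86_64.rpm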

behlendorf (Contributor) commented:

@olw2005 Thanks for following up with the positive feedback. OK, if I don't hear anything back in the next few weeks we'll treat this issue as resolved.

pyavdr (Contributor) commented Feb 7, 2014

I got a "cannot receive incremental strem dataset is busy" on my VMwarepool. The pool consists of about 25 zvols, each with 5 snapshots. The zvols are sparse and compression is on. No Dedup. I created a zfs send -R pool@snap > file.zfs . About 1.5 TB. When receiving the file.zfs with zfs recv -vF newpool - i get persistent these "dataset is busy" error with a full stop, but sometimes it restores 3 zvols, sometimes 5. ZFS Version is HEAD today. Kernel 3.11, opensuse 13.1. There is no output in messages or any kernelstack. What can i do to obtain more detailed error details ? zpool history -i - there are no errors mentioned. Any ideas ?

behlendorf (Contributor) commented:

@pyavdr I have an educated guess as to what might be causing the EBUSY errors on ZVOLs. When ZVOLs are created, entries for them are created in sysfs, causing udev to run zvol_id, which briefly takes a hold on the ZVOL, making it busy. It would be helpful to see if this machinery is part of the problem you're seeing.

If you set the zvol_inhibit_dev=1 module option prior to doing the zfs recv, it will prevent the /dev/sysfs entry from being created. You probably only need to do this on the receive side. If this prevents the issue, that gives us some more insight into what the problem is.
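
For example (the runtime toggle assumes the parameter is writable on your build):

    # persistently, for the next time the zfs module loads
    echo "options zfs zvol_inhibit_dev=1" >> /etc/modprobe.d/zfs.conf

    # or at runtime, before starting the receive
    echo 1 > /sys/module/zfs/parameters/zvol_inhibit_dev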

tomposmiko commented:

@pyavdr Did you set only zvol_inhibit_dev=1, or "options zfs zvol_inhibit_dev=1"?

pyavdr (Contributor) commented Feb 8, 2014

After setting "options zvol_inhibit_dev=1" ( thanks to sopmot and Brian) i see zfs recv process running, but after about 160GB it really slows down under the 1 MB/s ( max was 150 MB/s i/o read from disk) and stays there for long time. The zfs process hangs at 12% cpu, with about 5 KB/s i/o from time to time. The command is lzop < file.zfs | zfs recv -vF newpool ( full stream) , so pretty easy. Where file.zfs lays on a single hardisk (1.4TB, compressed, ext4) and pool is a raidz1 with 5 disks. Data is sparse zvols of Vm OS, compressed, no dedup, ZFS vers. is today HEAD. After 80 minutes there is activity again. More then 100 MB/s. After a short period of time it stalls again. Looks like it it runs without any dataset is busy error, but performance is down to 3,5 MB/s for some zvols, others are running at 130 MB/s. There are 10s of minutes with no i/o activitiy. Besides that long times of inactivitiy in runs through and gets a correct finish. @dweeezil Today i got no chance to catch some stack traces, but i will try that again.

dweeezil (Contributor) commented Feb 8, 2014

@pyavdr When the process is blocked, you should try to get some stack traces. Start with cat /proc/<pid>/stack and then also echo w > /proc/sysrq-trigger. The latter's messages should wind up in your syslog. Also, I presume you meant zfs receive -vF newpool? This particular example is receiving a full stream, correct? The previous reports in this thread were related to receiving incremental streams. What version is your ZFS code (try dmesg | grep ZFS:)?

Also, while the process is blocked, you might gather some of the kstats in /proc/spl/kstat/zfs, starting with arcstats. It might be interesting to see whether an echo 3 > /proc/sys/vm/drop_caches gives it a kick once it's hung. You're reading a lot of data from a local ext4 filesystem, which may cause a lot of memory to be eaten up caching the file.
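
Something like the following while it's hung would capture most of what I'm after (assumes a single zfs process; adjust as needed):

    PID=$(pgrep -x zfs)                     # pid of the blocked zfs recv
    for i in 1 2 3 4 5; do
        cat /proc/$PID/stack
        sleep 10
    done > /tmp/recv-stacks.txt

    echo w > /proc/sysrq-trigger            # blocked-task dump goes to syslog
    cat /proc/spl/kstat/zfs/arcstats > /tmp/arcstats.txt
    echo 3 > /proc/sys/vm/drop_caches       # see whether this unwedges it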

pyavdr (Contributor) commented Feb 10, 2014

@dweeezil
I repeated that zfs recv today. The system has 16 GB of memory, 50% free. zfs stalls again; I got some stacks for you. Dropping caches has no effect.

linux-ts3r:~ # cat /proc/4936/stack
[ffffffffa0145515] trace_put_tcd+0x5/0x30 [spl]
[ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa025159f] dsl_dataset_name+0x1f/0x130 [zfs]
[ffffffffa02235f5] dbuf_free_range+0x6a5/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[ffffffffa02235f5] dbuf_free_range+0x6a5/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffff812d8f06] string.isra.5+0x36/0xe0
[ffffffff812da3a6] vsnprintf+0x426/0x6a0
[ffffffff81009e35] read_tsc+0x5/0x20
[ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[ffffffffa02231a9] dbuf_free_range+0x259/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffffa0145515] trace_put_tcd+0x5/0x30 [spl]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[ffffffffa02231bb] dbuf_free_range+0x26b/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffff812d86ca] number.isra.1+0x31a/0x350
[ffffffff812d8f06] string.isra.5+0x36/0xe0
[ffffffff812da195] vsnprintf+0x215/0x6a0
[ffffffff81009e35] read_tsc+0x5/0x20
[ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[ffffffffa02231bb] dbuf_free_range+0x26b/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffff812d8f06] string.isra.5+0x36/0xe0
[ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[ffffffffa025a6f9] dsl_dir_name+0x49/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[ffffffffa02231bb] dbuf_free_range+0x26b/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[ffffffffa02235f5] dbuf_free_range+0x6a5/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffff812d8f06] string.isra.5+0x36/0xe0
[ffffffff812da195] vsnprintf+0x215/0x6a0
[ffffffff81009e35] read_tsc+0x5/0x20
[ffffffffa0145554] trace_lock_tcd+0x14/0x30 [spl]
[ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[ffffffffa025a6cc] dsl_dir_name+0x1c/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[ffffffffa02231bb] dbuf_free_range+0x26b/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[ffffffffa02235f5] dbuf_free_range+0x6a5/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa01456b2] spl_debug_msg+0xc2/0x880 [spl]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[ffffffffa02235f5] dbuf_free_range+0x6a5/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffff812d8f06] string.isra.5+0x36/0xe0
[ffffffff81009e35] read_tsc+0x5/0x20
[ffffffffa0145515] trace_put_tcd+0x5/0x30 [spl]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[ffffffffa02231ed] dbuf_free_range+0x29d/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa014573b] spl_debug_msg+0x14b/0x880 [spl]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[ffffffffa02235f5] dbuf_free_range+0x6a5/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffff812d86ca] number.isra.1+0x31a/0x350
[ffffffff812d8f06] string.isra.5+0x36/0xe0
[ffffffff812da195] vsnprintf+0x215/0x6a0
[ffffffff81009e35] read_tsc+0x5/0x20
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[ffffffffa02235f5] dbuf_free_range+0x6a5/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffff812d8f06] string.isra.5+0x36/0xe0
[ffffffff812da195] vsnprintf+0x215/0x6a0
[ffffffff81009e35] read_tsc+0x5/0x20
[ffffffff810a36e0] do_gettimeofday+0x10/0x50
[ffffffffa0145554] trace_lock_tcd+0x14/0x30 [spl]
[ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa025159f] dsl_dataset_name+0x1f/0x130 [zfs]
[ffffffffa02235f5] dbuf_free_range+0x6a5/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[ffffffffa02235f5] dbuf_free_range+0x6a5/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffff812d86ca] number.isra.1+0x31a/0x350
[ffffffff815b1866] retint_kernel+0x26/0x30
[ffffffff812d9fd1] vsnprintf+0x51/0x6a0
[ffffffff81009e35] read_tsc+0x5/0x20
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[ffffffffa02231bb] dbuf_free_range+0x26b/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[ffffffffa02235f5] dbuf_free_range+0x6a5/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[ffffffffa02231ed] dbuf_free_range+0x29d/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffffa0145515] trace_put_tcd+0x5/0x30 [spl]
[ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[ffffffffa02231a9] dbuf_free_range+0x259/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffff812d86ca] number.isra.1+0x31a/0x350
[ffffffff812da195] vsnprintf+0x215/0x6a0
[ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa025159f] dsl_dataset_name+0x1f/0x130 [zfs]
[ffffffffa02235f5] dbuf_free_range+0x6a5/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffff812d8f06] string.isra.5+0x36/0xe0
[ffffffff81009e35] read_tsc+0x5/0x20
[ffffffffa0145515] trace_put_tcd+0x5/0x30 [spl]
[ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffffa025a6cc] dsl_dir_name+0x1c/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[ffffffffa02231ed] dbuf_free_range+0x29d/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[ffffffff812d86ca] number.isra.1+0x31a/0x350
[ffffffff812d8f06] string.isra.5+0x36/0xe0
[ffffffff812da195] vsnprintf+0x215/0x6a0
[ffffffff81009e35] read_tsc+0x5/0x20
[ffffffff810a36b5] getnstimeofday+0x5/0x20
[ffffffff810a36e0] do_gettimeofday+0x10/0x50
[ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[ffffffff812da6c9] snprintf+0x39/0x40
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[ffffffffa02235f5] dbuf_free_range+0x6a5/0x7e0 [zfs]
[ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[ffffffff811c3dfe] fsnotify+0x24e/0x330
[ffffffff8107bbd4] __wake_up+0x34/0x50
[ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[ffffffff81186044] vfs_write+0x154/0x1e0
[ffffffff81185e24] vfs_read+0x94/0x160
[ffffffff81197a20] SyS_ioctl+0x80/0xa0
[ffffffff811869b3] SyS_write+0x43/0xa0
[ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[ffffffffffffffff] 0xffffffffffffffff

@pyavdr
Contributor

pyavdr commented Feb 10, 2014

linux-ts3r:~ # cat /proc/4936/stack
[<ffffffffa0145515] trace_put_tcd+0x5/0x30 [spl]
[<ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[<ffffffffa025a6f9] dsl_dir_name+0x49/0x90 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffff812da6c9] snprintf+0x39/0x40
[<ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[<ffffffffa02231ed] dbuf_free_range+0x29d/0x7e0 [zfs]
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[<ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[<ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[<ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[<ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[<ffffffff811c3dfe] fsnotify+0x24e/0x330
[<ffffffff8107bbd4] __wake_up+0x34/0x50
[<ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[<ffffffff81186044] vfs_write+0x154/0x1e0
[<ffffffff81185e24] vfs_read+0x94/0x160
[<ffffffff81197a20] SyS_ioctl+0x80/0xa0
[<ffffffff811869b3] SyS_write+0x43/0xa0
[<ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[<ffffffffa0145515] trace_put_tcd+0x5/0x30 [spl]
[<ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[<ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[<ffffffff812da6c9] snprintf+0x39/0x40
[<ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[<ffffffffa02231bb] dbuf_free_range+0x26b/0x7e0 [zfs]
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[<ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[<ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[<ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[<ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[<ffffffff811c3dfe] fsnotify+0x24e/0x330
[<ffffffff8107bbd4] __wake_up+0x34/0x50
[<ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[<ffffffff81186044] vfs_write+0x154/0x1e0
[<ffffffff81185e24] vfs_read+0x94/0x160
[<ffffffff81197a20] SyS_ioctl+0x80/0xa0
[<ffffffff811869b3] SyS_write+0x43/0xa0
[<ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[<ffffffff812d8f06] string.isra.5+0x36/0xe0
[<ffffffff812da195] vsnprintf+0x215/0x6a0
[<ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[<ffffffffa025a6cc] dsl_dir_name+0x1c/0x90 [zfs]
[<ffffffff812da6c9] snprintf+0x39/0x40
[<ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[<ffffffffa02231bb] dbuf_free_range+0x26b/0x7e0 [zfs]
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[<ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[<ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[<ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[<ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[<ffffffff811c3dfe] fsnotify+0x24e/0x330
[<ffffffff8107bbd4] __wake_up+0x34/0x50
[<ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[<ffffffff81186044] vfs_write+0x154/0x1e0
[<ffffffff81185e24] vfs_read+0x94/0x160
[<ffffffff81197a20] SyS_ioctl+0x80/0xa0
[<ffffffff811869b3] SyS_write+0x43/0xa0
[<ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[<ffffffff812d8f06] string.isra.5+0x36/0xe0
[<ffffffff812da195] vsnprintf+0x215/0x6a0
[<ffffffff81009e35] read_tsc+0x5/0x20
[<ffffffff810a36b5] getnstimeofday+0x5/0x20
[<ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[<ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[<ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[<ffffffff812da6c9] snprintf+0x39/0x40
[<ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[<ffffffffa02231ed] dbuf_free_range+0x29d/0x7e0 [zfs]
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[<ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[<ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[<ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[<ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[<ffffffff811c3dfe] fsnotify+0x24e/0x330
[<ffffffff8107bbd4] __wake_up+0x34/0x50
[<ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[<ffffffff81186044] vfs_write+0x154/0x1e0
[<ffffffff81185e24] vfs_read+0x94/0x160
[<ffffffff81197a20] SyS_ioctl+0x80/0xa0
[<ffffffff811869b3] SyS_write+0x43/0xa0
[<ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[<ffffffffa0145515] trace_put_tcd+0x5/0x30 [spl]
[<ffffffff8118ddf8] pipe_wait+0x88/0x90
[<ffffffff810729a0] autoremove_wake_function+0x0/0x30
[<ffffffff8118e717] pipe_read+0x307/0x4f0
[<ffffffff81185887] do_sync_read+0x67/0x90
[<ffffffff81185e24] vfs_read+0x94/0x160
[<ffffffffa0152bdd] vn_rdwr+0x12d/0x470 [spl]
[<ffffffff81185e7e] vfs_read+0xee/0x160
[<ffffffffa02356f0] restore_read+0xa0/0x1e0 [zfs]
[<ffffffffa023602c] restore_write+0x5c/0x140 [zfs]
[<ffffffffa02387e1] dmu_recv_stream+0x461/0xb80 [zfs]
[<ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[<ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[<ffffffff811c3dfe] fsnotify+0x24e/0x330
[<ffffffff8107bbd4] __wake_up+0x34/0x50
[<ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[<ffffffff81186044] vfs_write+0x154/0x1e0
[<ffffffff81185e24] vfs_read+0x94/0x160
[<ffffffff81197a20] SyS_ioctl+0x80/0xa0
[<ffffffff811869b3] SyS_write+0x43/0xa0
[<ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[<ffffffff812d8f06] string.isra.5+0x36/0xe0
[<ffffffff81009e35] read_tsc+0x5/0x20
[<ffffffffa0145515] trace_put_tcd+0x5/0x30 [spl]
[<ffffffff812da6c9] snprintf+0x39/0x40
[<ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[<ffffffffa02231bb] dbuf_free_range+0x26b/0x7e0 [zfs]
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[<ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[<ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[<ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[<ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[<ffffffff811c3dfe] fsnotify+0x24e/0x330
[<ffffffff8107bbd4] __wake_up+0x34/0x50
[<ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[<ffffffff81186044] vfs_write+0x154/0x1e0
[<ffffffff81185e24] vfs_read+0x94/0x160
[<ffffffff81197a20] SyS_ioctl+0x80/0xa0
[<ffffffff811869b3] SyS_write+0x43/0xa0
[<ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[<ffffffffa0145515] trace_put_tcd+0x5/0x30 [spl]
[<ffffffffa0145a58] spl_debug_msg+0x468/0x880 [spl]
[<ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[<ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[<ffffffff812da6c9] snprintf+0x39/0x40
[<ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[<ffffffffa02231bb] dbuf_free_range+0x26b/0x7e0 [zfs]
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[<ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[<ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[<ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[<ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[<ffffffff811c3dfe] fsnotify+0x24e/0x330
[<ffffffff8107bbd4] __wake_up+0x34/0x50
[<ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[<ffffffff81186044] vfs_write+0x154/0x1e0
[<ffffffff81185e24] vfs_read+0x94/0x160
[<ffffffff81197a20] SyS_ioctl+0x80/0xa0
[<ffffffff811869b3] SyS_write+0x43/0xa0
[<ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[<ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[<ffffffffa02235f5] dbuf_free_range+0x6a5/0x7e0 [zfs]
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[<ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[<ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[<ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[<ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[<ffffffff811c3dfe] fsnotify+0x24e/0x330
[<ffffffff8107bbd4] __wake_up+0x34/0x50
[<ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[<ffffffff81186044] vfs_write+0x154/0x1e0
[<ffffffff81185e24] vfs_read+0x94/0x160
[<ffffffff81197a20] SyS_ioctl+0x80/0xa0
[<ffffffff811869b3] SyS_write+0x43/0xa0
[<ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[<ffffffffa0145515] trace_put_tcd+0x5/0x30 [spl]
[<ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[<ffffffffa025a6cc] dsl_dir_name+0x1c/0x90 [zfs]
[<ffffffff812da6c9] snprintf+0x39/0x40
[<ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[<ffffffffa02231ed] dbuf_free_range+0x29d/0x7e0 [zfs]
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[<ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[<ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[<ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[<ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[<ffffffff811c3dfe] fsnotify+0x24e/0x330
[<ffffffff8107bbd4] __wake_up+0x34/0x50
[<ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[<ffffffff81186044] vfs_write+0x154/0x1e0
[<ffffffff81185e24] vfs_read+0x94/0x160
[<ffffffff81197a20] SyS_ioctl+0x80/0xa0
[<ffffffff811869b3] SyS_write+0x43/0xa0
[<ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[<ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[<ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[<ffffffff812da6c9] snprintf+0x39/0x40
[<ffffffffa02231bb] dbuf_free_range+0x26b/0x7e0 [zfs]
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[<ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[<ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[<ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[<ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[<ffffffff811c3dfe] fsnotify+0x24e/0x330
[<ffffffff8107bbd4] __wake_up+0x34/0x50
[<ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[<ffffffff81186044] vfs_write+0x154/0x1e0
[<ffffffff81185e24] vfs_read+0x94/0x160
[<ffffffff81197a20] SyS_ioctl+0x80/0xa0
[<ffffffff811869b3] SyS_write+0x43/0xa0
[<ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[<ffffffff812da195] vsnprintf+0x215/0x6a0
[<ffffffff81009e35] read_tsc+0x5/0x20
[<ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[<ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[<ffffffff812da6c9] snprintf+0x39/0x40
[<ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[<ffffffffa02235f5] dbuf_free_range+0x6a5/0x7e0 [zfs]
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[<ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[<ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[<ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[<ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[<ffffffff811c3dfe] fsnotify+0x24e/0x330
[<ffffffff8107bbd4] __wake_up+0x34/0x50
[<ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[<ffffffff81186044] vfs_write+0x154/0x1e0
[<ffffffff81185e24] vfs_read+0x94/0x160
[<ffffffff81197a20] SyS_ioctl+0x80/0xa0
[<ffffffff811869b3] SyS_write+0x43/0xa0
[<ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[<ffffffff812d8f06] string.isra.5+0x36/0xe0
[<ffffffff810a362f] __getnstimeofday+0x2f/0xb0
[<ffffffffa0145515] trace_put_tcd+0x5/0x30 [spl]
[<ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[<ffffffff812da6c9] snprintf+0x39/0x40
[<ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[<ffffffffa02231a9] dbuf_free_range+0x259/0x7e0 [zfs]
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[<ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[<ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[<ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[<ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[<ffffffff811c3dfe] fsnotify+0x24e/0x330
[<ffffffff8107bbd4] __wake_up+0x34/0x50
[<ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[<ffffffff81186044] vfs_write+0x154/0x1e0
[<ffffffff81185e24] vfs_read+0x94/0x160
[<ffffffff81197a20] SyS_ioctl+0x80/0xa0
[<ffffffff811869b3] SyS_write+0x43/0xa0
[<ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[<ffffffffa025a6cc] dsl_dir_name+0x1c/0x90 [zfs]
[<ffffffff812da6c9] snprintf+0x39/0x40
[<ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[<ffffffffa02231bb] dbuf_free_range+0x26b/0x7e0 [zfs]
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[<ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[<ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[<ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[<ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[<ffffffff811c3dfe] fsnotify+0x24e/0x330
[<ffffffff8107bbd4] __wake_up+0x34/0x50
[<ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[<ffffffff81186044] vfs_write+0x154/0x1e0
[<ffffffff81185e24] vfs_read+0x94/0x160
[<ffffffff81197a20] SyS_ioctl+0x80/0xa0
[<ffffffff811869b3] SyS_write+0x43/0xa0
[<ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[<ffffffff812d8f06] string.isra.5+0x36/0xe0
[<ffffffff812da195] vsnprintf+0x215/0x6a0
[<ffffffff81009e35] read_tsc+0x5/0x20
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[<ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[<ffffffff812da6c9] snprintf+0x39/0x40
[<ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[<ffffffffa02231bb] dbuf_free_range+0x26b/0x7e0 [zfs]
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[<ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[<ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[<ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[<ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[<ffffffff811c3dfe] fsnotify+0x24e/0x330
[<ffffffff8107bbd4] __wake_up+0x34/0x50
[<ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[<ffffffff81186044] vfs_write+0x154/0x1e0
[<ffffffff81185e24] vfs_read+0x94/0x160
[<ffffffff81197a20] SyS_ioctl+0x80/0xa0
[<ffffffff811869b3] SyS_write+0x43/0xa0
[<ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[<ffffffffa0145515] trace_put_tcd+0x5/0x30 [spl]
[<ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[<ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[<ffffffff812da6c9] snprintf+0x39/0x40
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[<ffffffffa02235f5] dbuf_free_range+0x6a5/0x7e0 [zfs]
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[<ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[<ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[<ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[<ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[<ffffffff811c3dfe] fsnotify+0x24e/0x330
[<ffffffff8107bbd4] __wake_up+0x34/0x50
[<ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[<ffffffff81186044] vfs_write+0x154/0x1e0
[<ffffffff81185e24] vfs_read+0x94/0x160
[<ffffffff81197a20] SyS_ioctl+0x80/0xa0
[<ffffffff811869b3] SyS_write+0x43/0xa0
[<ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[<ffffffff812d8f06] string.isra.5+0x36/0xe0
[<ffffffff812da3a6] vsnprintf+0x426/0x6a0
[<ffffffff81009e35] read_tsc+0x5/0x20
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[<ffffffff812da6c9] snprintf+0x39/0x40
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[<ffffffffa02235f5] dbuf_free_range+0x6a5/0x7e0 [zfs]
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[<ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[<ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[<ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[<ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[<ffffffff811c3dfe] fsnotify+0x24e/0x330
[<ffffffff8107bbd4] __wake_up+0x34/0x50
[<ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[<ffffffff81186044] vfs_write+0x154/0x1e0
[<ffffffff81185e24] vfs_read+0x94/0x160
[<ffffffff81197a20] SyS_ioctl+0x80/0xa0
[<ffffffff811869b3] SyS_write+0x43/0xa0
[<ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[<ffffffff812d8f06] string.isra.5+0x36/0xe0
[<ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[<ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[<ffffffff812da6c9] snprintf+0x39/0x40
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa025159f] dsl_dataset_name+0x1f/0x130 [zfs]
[<ffffffffa02235f5] dbuf_free_range+0x6a5/0x7e0 [zfs]
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[<ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[<ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[<ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[<ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[<ffffffff811c3dfe] fsnotify+0x24e/0x330
[<ffffffff8107bbd4] __wake_up+0x34/0x50
[<ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[<ffffffff81186044] vfs_write+0x154/0x1e0
[<ffffffff81185e24] vfs_read+0x94/0x160
[<ffffffff81197a20] SyS_ioctl+0x80/0xa0
[<ffffffff811869b3] SyS_write+0x43/0xa0
[<ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff] 0xffffffffffffffff

linux-ts3r:~ # cat /proc/4936/stack
[<ffffffffa01458ce] spl_debug_msg+0x2de/0x880 [spl]
[<ffffffffa025a708] dsl_dir_name+0x58/0x90 [zfs]
[<ffffffff812da6c9] snprintf+0x39/0x40
[<ffffffffa02515a7] dsl_dataset_name+0x27/0x130 [zfs]
[<ffffffffa0223308] dbuf_free_range+0x3b8/0x7e0 [zfs]
[<ffffffffa01494ad] kmem_free_debug+0x4d/0x1b0 [spl]
[<ffffffffa0248d12] dnode_free_range+0xa82/0x1160 [zfs]
[<ffffffffa0229998] dmu_free_long_range+0x1d8/0x390 [zfs]
[<ffffffffa0238702] dmu_recv_stream+0x382/0xb80 [zfs]
[<ffffffffa02cdca2] zfs_ioc_recv+0x202/0xc70 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa014c8d0] kmem_alloc_debug+0x250/0x420 [spl]
[<ffffffffa02744a8] rrw_exit+0x98/0x350 [zfs]
[<ffffffffa02739bf] refcount_remove_many+0x18f/0x300 [zfs]
[<ffffffffa028e44d] spa_close+0x1d/0xa0 [zfs]
[<ffffffffa02cf7fb] zfsdev_ioctl+0x50b/0x580 [zfs]
[<ffffffff811c3dfe] fsnotify+0x24e/0x330
[<ffffffff8107bbd4] __wake_up+0x34/0x50
[<ffffffff811977bc] do_vfs_ioctl+0x2dc/0x4c0
[<ffffffff81186044] vfs_write+0x154/0x1e0
[<ffffffff81185e24] vfs_read+0x94/0x160
[<ffffffff81197a20] SyS_ioctl+0x80/0xa0
[<ffffffff811869b3] SyS_write+0x43/0xa0
[<ffffffff815b82ed] system_call_fastpath+0x1a/0x1f
[<ffffffffffffffff] 0xffffffffffffffff

@olw2005
Author

olw2005 commented Feb 18, 2014

I can't speak to the above issues, but running the code version(s) outlined in #1795, I am no longer getting the "dataset is busy" error [or the associated kernel panics].
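For anyone checking which module build they are actually running before drawing the same conclusion, the loaded spl/zfs versions can usually be read straight from sysfs (the paths below are an assumption based on standard ZFS on Linux packaging, not something confirmed in this thread):

# Print the version strings of the currently loaded modules (paths assumed)
cat /sys/module/spl/version
cat /sys/module/zfs/version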

@FransUrbo
Contributor

Setting zvol_inhibit_dev=1 works for me.
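For reference, a minimal sketch of making that setting persistent, following the same /etc/modprobe.d convention used earlier in this issue (the file name is an assumption; the parameter is only read when the zfs module loads, so a module reload or reboot is needed for it to take effect):

# /etc/modprobe.d/zfs.conf -- add alongside any existing "options zfs" line
options zfs zvol_inhibit_dev=1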

@behlendorf behlendorf removed this from the 0.6.4 milestone Oct 29, 2014
@behlendorf behlendorf added the Component: ZVOL ZFS Volumes label Oct 29, 2014
@behlendorf
Contributor

Yes. This should be fixed; I'll close it out.

@setaou

setaou commented Aug 17, 2017

I am having the same problem. For now I use zvol_inhibit_dev=1 as a workaround, but it is only acceptable because I don't need to actually use the zvols on the destination server.
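A quick way to confirm the workaround is in effect: when zvol_inhibit_dev=1 is set at module load, the zvol device nodes are simply not created, so neither /dev/zvol/* nor /dev/zd* should exist. The sysfs parameter path below is an assumption based on how the module exposes its parameters:

# Check the parameter value and verify no zvol device nodes were created
cat /sys/module/zfs/parameters/zvol_inhibit_dev
ls /dev/zvol /dev/zd* 2>/dev/null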
