
zfs pool import hangs on [tx->tx_sync_done_cv]


Mark Martinec

Oct 9, 2014, 8:00:28 PM
to freebsd...@freebsd.org
In short, after upgrading to 10.1-BETA3 or -RC1 I ended up with several
zfs pools that can no longer be imported. The zpool import command
(with no arguments) does show all available pools, but trying to
import one just hangs and the command cannot be aborted, although
the rest of the system is still alive and fine:

# zpool import -f <mypool>

Typing ^T just shows an idle process, waiting on [tx->tx_sync_done_cv]:

load: 0.20 cmd: zpool 939 [tx->tx_sync_done_cv] 5723.65r 0.01u 0.02s 0% 8220k
load: 0.16 cmd: zpool 939 [tx->tx_sync_done_cv] 5735.73r 0.01u 0.02s 0% 8220k
load: 0.14 cmd: zpool 939 [tx->tx_sync_done_cv] 5741.83r 0.01u 0.02s 0% 8220k
load: 0.13 cmd: zpool 939 [tx->tx_sync_done_cv] 5749.16r 0.01u 0.02s 0% 8220k

ps shows (on a system rebooted to a LiveCD running FreeBSD-10.1-RC1):

PID TID COMM TDNAME CPU PRI STATE WCHAN
939 100632 zpool - 5 122 sleep tx->tx_s

UID PID PPID CPU PRI NI VSZ RSS MWCHAN STAT TT TIME COMMAND NWCHAN
0 939 801 0 22 0 107732 8236 tx->tx_s D+ v0 0:00.04
zpool import -f -o cachefile=/tmp/zpool.cache -R /tmp/sys0boot sys0boot
fffff8007b0f2a20

# procstat -kk 939

PID TID COMM TDNAME KSTACK
939 100632 zpool - mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d txg_wait_synced+0x85 spa_load+0x1cd1
spa_load_best+0x6f spa_import+0x1ff zfs_ioc_pool_import+0x137
zfsdev_ioctl+0x6f0 devfs_ioctl_f+0x114 kern_ioctl+0x255 sys_ioctl+0x13c
amd64_syscall+0x351 Xfast_syscall+0xfb


Background story: the system where this happened was being kept
at a fairly recent 10-STABLE; the last upgrade was very close to
the BETA3 release. There are a couple of zfs pools there: one on
a mirrored pair of SSDs mostly holding the OS, one on a mirrored
pair of large spindles, and three more small ones (4 GiB each),
mostly for boot redundancy or testing - these small ones are on
old smallish disks. The disks are of different types and attached
to different SATA controllers (LSI and onboard Intel). The pools
were mostly kept up to date with the most recent zpool feature
set throughout their lifetime (some starting their life with 9.0,
some with 10.0).

About two weeks ago, after a reboot to a 10-STABLE of the day,
the small pools became unavailable, but the two regular large
pools were still normal. At first I didn't pay much attention
to that, as these pools were on oldish disks and nonessential
for normal operation, and I blamed potentially flaky hardware.

Today I needed to reboot (for an unrelated reason), and the
machine was no longer able to mount the boot pool.
The first instinct was that the disks were malfunctioning - but ...

Booting it to a FreeBSD-10.1-RC1 LiveCD was successful.
A smartmontools disk test shows no problems. dd is able to read
the whole partitions of each problematic pool. And most importantly,
running 'zdb -e -cc' on each (non-imported) pool churned along
normally and steadily, produced a stats report at the end,
and reported no errors.
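The 'zdb -e -cc' check above can be repeated on any exported (not
imported) pool; the pool name below is a placeholder, not one of the
actual pools:

```shell
# Open the pool in userland without importing it (-e: pool is
# exported) and verify checksums of all metadata and, with the
# second -c, all data blocks as well. "mypool" is a placeholder.
zdb -e -cc mypool
```

Unlike zpool import, zdb walks the pool entirely from userland, which
is presumably why it can finish cleanly even when the in-kernel import
path hangs.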

To prove that the disks are fine, I sacrificed one of the broken
4 GiB GPT partitions holding one of the problematic pools and
did a fresh 10.1-RC1 install on it from a distribution ISO DVD.
The installation went fine, and the system boots and runs fine
from the newly installed OS. Trying to import one of the
remaining old pools hangs the import command as before.

As a final proof, I copied (with dd) one of the broken 4 GiB
partitions to a file on another system (running 10.1-BETA3,
which does not suffer from this problem), made a memory disk
out of this file, then ran zpool import on this pool - and it
hangs there too! So hardware was not the problem: either these
partitions are truly broken (even though zdb -cc says they are
fine), or the new OS is somehow no longer able to import them.
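The dd / memory-disk experiment can be sketched as follows; the device,
file, and pool names are placeholders, not the ones actually used:

```shell
# Copy the suspect 4 GiB partition into a plain file
# (the source device name is a placeholder).
dd if=/dev/ada3p2 of=/var/tmp/pool.img bs=1m

# Attach the image as a vnode-backed memory disk; mdconfig
# prints the name of the new device, e.g. md0.
mdconfig -a -t vnode -f /var/tmp/pool.img

# Point zpool import at a directory containing only the md
# device, so no other disks are scanned during the import.
mkdir /tmp/mddev
ln -s /dev/md0 /tmp/mddev/md0
zpool import -d /tmp/mddev -f mypool   # hangs in [tx->tx_sync_done_cv]
```

Reproducing the hang from a file-backed memory disk on a second machine
is what rules the hardware out.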

Please advise.

I have a copy of the 4 GiB partition as a 400 MB compressed
file, available if somebody is willing to play with it.

I also have a ktrace of the 'zpool import' command. Its last
actions before it hangs are:

939 zpool RET madvise 0
939 zpool CALL madvise(0x80604e000,0x1000,MADV_FREE)
939 zpool RET madvise 0
939 zpool CALL close(0x6)
939 zpool RET close 0
939 zpool CALL ioctl(0x3,0xc0185a05,0x7fffffffbf00)
939 zpool RET ioctl -1 errno 2 No such file or directory
939 zpool CALL madvise(0x802c71000,0x10000,MADV_FREE)
939 zpool RET madvise 0
939 zpool CALL madvise(0x802ca5000,0x1000,MADV_FREE)
939 zpool RET madvise 0
939 zpool CALL ioctl(0x3,0xc0185a06,0x7fffffffbf60)
939 zpool RET ioctl 0
939 zpool CALL ioctl(0x3,0xc0185a06,0x7fffffffbf60)
939 zpool RET ioctl 0
939 zpool CALL stat(0x802c380e0,0x7fffffffbc58)
939 zpool NAMI "/tmp"
939 zpool STRU struct stat {dev=273, ino=2, mode=041777,
nlink=8, uid=0, gid=0, rdev=96, atime=1412866648, stime=1412871393,
ctime=1412871393, birthtime=1412866648, size=512, blksize=32768,
blocks=8, flags=0x0 }
939 zpool RET stat 0
939 zpool CALL ioctl(0x3,0xc0185a02,0x7fffffffbc60)
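A trace like the one above can be captured with ktrace(1) and decoded
with kdump(1); the pool name is a placeholder:

```shell
# Record all syscalls of the import attempt into a trace file.
ktrace -f /tmp/zpool.ktrace zpool import -f mypool

# Since the command hangs, stop tracing from another terminal ...
ktrace -C

# ... then decode the trace; -T prefixes each record with a timestamp.
kdump -T -f /tmp/zpool.ktrace | tail -30
```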


Mark

_______________________________________________
freebsd...@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stabl...@freebsd.org"

Steven Hartland

Oct 9, 2014, 8:07:08 PM
to Mark Martinec, freebsd...@freebsd.org
What does procstat -k -k report?

Mark Martinec

Oct 9, 2014, 8:57:07 PM
to freebsd...@freebsd.org
On 10/10/2014 02:06, Steven Hartland wrote:
> What does procstat -k -k report?

For the hung process, the procstat -k -k output was (as in my previous posting):

# procstat -kk 939

PID TID COMM TDNAME KSTACK
939 100632 zpool - mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d txg_wait_synced+0x85 spa_load+0x1cd1
spa_load_best+0x6f spa_import+0x1ff zfs_ioc_pool_import+0x137
zfsdev_ioctl+0x6f0 devfs_ioctl_f+0x114 kern_ioctl+0x255 sys_ioctl+0x13c
amd64_syscall+0x351 Xfast_syscall+0xfb



For the whole system (-k -k -a) it's below
(I needed to re-do the experiment; the hung PID this time is 2074):

PID TID COMM TDNAME KSTACK

0 100000 kernel swapper mi_switch+0xe1
sleepq_timedwait+0x3a _sleep+0x26e swapper+0x28f btext+0x2c
0 100024 kernel firmware taskq mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100026 kernel thread taskq mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100027 kernel ffs_trim taskq mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100029 kernel kqueue taskq mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100032 kernel acpi_task_0 mi_switch+0xe1
sleepq_wait+0x3a msleep_spin_sbt+0x1a3 taskqueue_thread_loop+0xfd
fork_exit+0x9a fork_trampoline+0xe
0 100033 kernel acpi_task_1 mi_switch+0xe1
sleepq_wait+0x3a msleep_spin_sbt+0x1a3 taskqueue_thread_loop+0xfd
fork_exit+0x9a fork_trampoline+0xe
0 100034 kernel acpi_task_2 mi_switch+0xe1
sleepq_wait+0x3a msleep_spin_sbt+0x1a3 taskqueue_thread_loop+0xfd
fork_exit+0x9a fork_trampoline+0xe
0 100037 kernel mps0 taskq mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100039 kernel em0 que mi_switch+0xe1
sleepq_wait+0x3a msleep_spin_sbt+0x1a3 taskqueue_thread_loop+0xfd
fork_exit+0x9a fork_trampoline+0xe
0 100040 kernel em0 txq mi_switch+0xe1
sleepq_wait+0x3a msleep_spin_sbt+0x1a3 taskqueue_thread_loop+0xfd
fork_exit+0x9a fork_trampoline+0xe
0 100079 kernel em1 taskq mi_switch+0xe1
sleepq_wait+0x3a msleep_spin_sbt+0x1a3 taskqueue_thread_loop+0xfd
fork_exit+0x9a fork_trampoline+0xe
0 100098 kernel mca taskq mi_switch+0xe1
sleepq_wait+0x3a msleep_spin_sbt+0x1a3 taskqueue_thread_loop+0xfd
fork_exit+0x9a fork_trampoline+0xe
0 100099 kernel system_taskq_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100100 kernel system_taskq_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100101 kernel system_taskq_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100102 kernel system_taskq_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100103 kernel system_taskq_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100104 kernel system_taskq_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100105 kernel system_taskq_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100106 kernel system_taskq_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100110 kernel CAM taskq mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100119 kernel zio_null_issue mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100120 kernel zio_null_intr mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100121 kernel zio_read_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100122 kernel zio_read_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100176 kernel zio_read_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100180 kernel zio_read_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100182 kernel zio_read_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100183 kernel zio_read_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100185 kernel zio_read_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100187 kernel zio_read_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100188 kernel zio_read_intr_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100189 kernel zio_read_intr_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100190 kernel zio_read_intr_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100191 kernel zio_read_intr_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100192 kernel zio_read_intr_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100193 kernel zio_read_intr_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100194 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100195 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100196 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100197 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100198 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100199 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100200 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100201 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100202 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100203 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100204 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100205 kernel zio_write_intr_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100206 kernel zio_write_intr_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100207 kernel zio_write_intr_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100208 kernel zio_write_intr_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100209 kernel zio_write_intr_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100210 kernel zio_write_intr_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100211 kernel zio_write_intr_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100212 kernel zio_write_intr_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100213 kernel zio_write_intr_h mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100214 kernel zio_write_intr_h mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100215 kernel zio_write_intr_h mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100216 kernel zio_write_intr_h mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100217 kernel zio_write_intr_h mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100218 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100219 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100220 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100221 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100222 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100223 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100224 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100225 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100226 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100227 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100228 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100229 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100230 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100231 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100232 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100233 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100234 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100235 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100236 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100237 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100238 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100239 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100240 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100241 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100242 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100243 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100244 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100245 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100246 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100247 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100248 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100249 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100250 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100251 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100252 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100253 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100254 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100255 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100256 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100257 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100258 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100259 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100260 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100261 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100262 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100264 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100265 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100266 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100267 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100268 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100269 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100270 kernel zio_null_issue mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100271 kernel zio_null_intr mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100272 kernel zio_read_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100273 kernel zio_read_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100274 kernel zio_read_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100275 kernel zio_read_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100276 kernel zio_read_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100277 kernel zio_read_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100278 kernel zio_read_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100279 kernel zio_read_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100280 kernel zio_read_intr_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100281 kernel zio_read_intr_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100282 kernel zio_read_intr_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100283 kernel zio_read_intr_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100284 kernel zio_read_intr_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100285 kernel zio_read_intr_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100286 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100287 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100288 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100289 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100290 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100291 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100292 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100293 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100294 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100295 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100296 kernel zio_write_issue_ mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100297 kernel zio_write_intr_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100298 kernel zio_write_intr_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100299 kernel zio_write_intr_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100300 kernel zio_write_intr_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100301 kernel zio_write_intr_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100302 kernel zio_write_intr_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100303 kernel zio_write_intr_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100304 kernel zio_write_intr_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100305 kernel zio_write_intr_h mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100306 kernel zio_write_intr_h mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100307 kernel zio_write_intr_h mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100308 kernel zio_write_intr_h mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100309 kernel zio_write_intr_h mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100310 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100311 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100312 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100313 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100314 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100315 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100316 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100317 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100318 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100319 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100320 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100321 kernel zio_free_issue_0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100322 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100323 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100324 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100325 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100326 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100327 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100328 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100329 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100330 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100331 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100332 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100333 kernel zio_free_issue_1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100334 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100335 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100336 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100337 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100338 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100339 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100340 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100341 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100342 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100343 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100344 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100345 kernel zio_free_issue_2 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100346 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100347 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100348 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100349 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100350 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100351 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100352 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100353 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100354 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100355 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100356 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100357 kernel zio_free_issue_3 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100358 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100359 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100360 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100361 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100362 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100363 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100364 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100365 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100366 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100367 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100368 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100369 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100370 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100371 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100372 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100373 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100374 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100375 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100376 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100377 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100378 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100379 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100380 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100381 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100382 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100383 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100384 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100385 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100386 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100387 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100388 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100389 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100390 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100391 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100392 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100393 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100394 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100395 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100396 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100397 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100398 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100399 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100400 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100401 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100402 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100403 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100404 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100405 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100406 kernel zio_free_intr mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100407 kernel zio_claim_issue mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100408 kernel zio_claim_intr mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100409 kernel zio_ioctl_issue mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100410 kernel zio_ioctl_intr mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100412 kernel metaslab_group_t mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100413 kernel metaslab_group_t mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100414 kernel metaslab_group_t mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100415 kernel metaslab_group_t mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100416 kernel zfs_vn_rele_task mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100417 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100418 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100419 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100420 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100423 kernel zil_clean mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100433 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100438 kernel zil_clean mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100439 kernel zil_clean mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100440 kernel zil_clean mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100441 kernel zil_clean mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100442 kernel zil_clean mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100443 kernel zil_clean mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100444 kernel zil_clean mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100445 kernel zil_clean mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100526 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100527 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100528 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100529 kernel zio_free_issue_4 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100530 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100531 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100532 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100533 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100534 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100535 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100536 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100537 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100538 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100539 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100540 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100541 kernel zio_free_issue_5 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100647 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100649 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100651 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100652 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100653 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100654 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100655 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100656 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100657 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100658 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100659 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100660 kernel zio_free_issue_6 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100661 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100662 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100663 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100664 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100665 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100666 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100667 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100668 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100669 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100670 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100671 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 100672 kernel zio_free_issue_7 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 101449 kernel zio_free_intr mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 101450 kernel zio_claim_issue mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 101451 kernel zio_claim_intr mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 101452 kernel zio_ioctl_issue mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 101454 kernel zio_ioctl_intr mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 101527 kernel metaslab_group_t mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 101528 kernel metaslab_group_t mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 101529 kernel metaslab_group_t mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 101530 kernel metaslab_group_t mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
0 101531 kernel zfs_vn_rele_task mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 taskqueue_thread_loop+0xd5 fork_exit+0x9a
fork_trampoline+0xe
1 100002 init - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _sleep+0x27d
kern_wait6+0x5f4 sys_wait4+0x72 amd64_syscall+0x351 Xfast_syscall+0xfb
2 100030 cam doneq0 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 xpt_done_td+0x8e fork_exit+0x9a
fork_trampoline+0xe
2 100031 cam doneq1 mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 xpt_done_td+0x8e fork_exit+0x9a
fork_trampoline+0xe
2 100111 cam scanner mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 xpt_scanner_thread+0x7c fork_exit+0x9a
fork_trampoline+0xe
3 100107 zfskern arc_reclaim_thre mi_switch+0xe1
sleepq_timedwait+0x3a _cv_timedwait_sbt+0x18b arc_reclaim_thread+0x301
fork_exit+0x9a fork_trampoline+0xe
3 100108 zfskern l2arc_feed_threa mi_switch+0xe1
sleepq_timedwait+0x3a _cv_timedwait_sbt+0x18b l2arc_feed_thread+0x16f
fork_exit+0x9a fork_trampoline+0xe
3 100177 zfskern txg_thread_enter mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d txg_quiesce_thread+0x39b fork_exit+0x9a
fork_trampoline+0xe
3 100181 zfskern txg_thread_enter mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d zio_wait+0x5b dsl_pool_sync+0x371
spa_sync+0x530 txg_sync_thread+0x3a6 fork_exit+0x9a fork_trampoline+0xe
3 100411 zfskern trim zroot mi_switch+0xe1
sleepq_timedwait+0x3a _cv_timedwait_sbt+0x18b trim_thread+0x9e
fork_exit+0x9a fork_trampoline+0xe
3 100421 zfskern txg_thread_enter mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d txg_quiesce_thread+0x39b fork_exit+0x9a
fork_trampoline+0xe
3 100422 zfskern txg_thread_enter mi_switch+0xe1
sleepq_timedwait+0x3a _cv_timedwait_sbt+0x18b txg_sync_thread+0x1dc
fork_exit+0x9a fork_trampoline+0xe
3 101526 zfskern trim sys1boot mi_switch+0xe1
sleepq_timedwait+0x3a _cv_timedwait_sbt+0x18b trim_thread+0x9e
fork_exit+0x9a fork_trampoline+0xe
4 100109 sctp_iterator - mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 sctp_iterator_thread+0x69 fork_exit+0x9a
fork_trampoline+0xe
5 100112 enc_daemon0 - mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 enc_daemon+0x140 fork_exit+0x9a
fork_trampoline+0xe
6 100113 pagedaemon - mi_switch+0xe1
sleepq_timedwait+0x3a _sleep+0x26e vm_pageout+0x275 fork_exit+0x9a
fork_trampoline+0xe
7 100114 vmdaemon - mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 vm_daemon+0x91 fork_exit+0x9a
fork_trampoline+0xe
8 100115 pagezero - mi_switch+0xe1
sleepq_timedwait+0x3a _sleep+0x26e vm_pagezero+0x98 fork_exit+0x9a
fork_trampoline+0xe
9 100116 bufdaemon - mi_switch+0xe1
sleepq_timedwait+0x3a _sleep+0x26e buf_daemon+0x78 fork_exit+0x9a
fork_trampoline+0xe
10 100001 audit - mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d audit_worker+0xa3 fork_exit+0x9a
fork_trampoline+0xe
11 100003 idle idle: cpu0 <running>

11 100004 idle idle: cpu1 <running>

11 100005 idle idle: cpu2 <running>

11 100006 idle idle: cpu3 <running>

11 100007 idle idle: cpu4 <running>

11 100008 idle idle: cpu5 <running>

11 100009 idle idle: cpu6 <running>

11 100010 idle idle: cpu7 mi_switch+0xe1
critical_exit+0x7a sched_idletd+0x1d5 fork_exit+0x9a fork_trampoline+0xe
12 100011 intr swi4: clock mi_switch+0xe1
ithread_loop+0x190 fork_exit+0x9a fork_trampoline+0xe
12 100012 intr swi4: clock

12 100013 intr swi4: clock

12 100014 intr swi4: clock

12 100015 intr swi4: clock

12 100016 intr swi4: clock

12 100017 intr swi4: clock

12 100018 intr swi4: clock

12 100019 intr swi1: netisr 0 mi_switch+0xe1
ithread_loop+0x190 fork_exit+0x9a fork_trampoline+0xe
12 100020 intr swi3: vm

12 100028 intr swi5: fast taskq mi_switch+0xe1
ithread_loop+0x190 fork_exit+0x9a fork_trampoline+0xe
12 100035 intr swi6: task queue mi_switch+0xe1
ithread_loop+0x190 fork_exit+0x9a fork_trampoline+0xe
12 100036 intr swi6: Giant task mi_switch+0xe1
ithread_loop+0x190 fork_exit+0x9a fork_trampoline+0xe
12 100038 intr irq256: mps0 mi_switch+0xe1
ithread_loop+0x190 fork_exit+0x9a fork_trampoline+0xe
12 100041 intr irq16: uhci0+

12 100046 intr irq21: uhci1

12 100051 intr irq19: uhci2 uhc

12 100056 intr irq18: em1 ehci0

12 100061 intr irq258: hdac0 mi_switch+0xe1
ithread_loop+0x190 fork_exit+0x9a fork_trampoline+0xe
12 100062 intr irq23: uhci3 ehc mi_switch+0xe1
ithread_loop+0x190 fork_exit+0x9a fork_trampoline+0xe
12 100080 intr irq259: ahci1:ch mi_switch+0xe1
ithread_loop+0x190 fork_exit+0x9a fork_trampoline+0xe
12 100081 intr irq260: ahci1:ch mi_switch+0xe1
ithread_loop+0x190 fork_exit+0x9a fork_trampoline+0xe
12 100082 intr irq261: ahci1:ch

12 100083 intr irq262: ahci1:ch mi_switch+0xe1
ithread_loop+0x190 fork_exit+0x9a fork_trampoline+0xe
12 100084 intr irq263: ahci1:ch mi_switch+0xe1
ithread_loop+0x190 fork_exit+0x9a fork_trampoline+0xe
12 100085 intr irq264: ahci1:ch mi_switch+0xe1
ithread_loop+0x190 fork_exit+0x9a fork_trampoline+0xe
12 100086 intr irq265: ahci1:6

12 100087 intr irq266: ahci1:7

12 100088 intr irq267: ahci1:8

12 100089 intr irq268: ahci1:9

12 100090 intr irq269: ahci1:10

12 100091 intr irq270: ahci1:11

12 100092 intr irq271: ahci1:12

12 100093 intr irq272: ahci1:13

12 100094 intr irq273: ahci1:14

12 100095 intr irq274: ahci1:15

12 100096 intr irq1: atkbd0

12 100097 intr swi0: uart

13 100021 geom g_event mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 g_run_events+0x4d fork_exit+0x9a
fork_trampoline+0xe
13 100022 geom g_up mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 g_io_schedule_up+0xbc g_up_procbody+0x6d
fork_exit+0x9a fork_trampoline+0xe
13 100023 geom g_down mi_switch+0xe1
sleepq_wait+0x3a _sleep+0x287 g_io_schedule_down+0x4c
g_down_procbody+0x6d fork_exit+0x9a fork_trampoline+0xe
14 100025 rand_harvestq - mi_switch+0xe1
sleepq_timedwait+0x3a msleep_spin_sbt+0x191 random_kthread+0x276
fork_exit+0x9a fork_trampoline+0xe
15 100042 usb usbus0 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100043 usb usbus0 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100044 usb usbus0 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100045 usb usbus0 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100047 usb usbus1 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100048 usb usbus1 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100049 usb usbus1 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100050 usb usbus1 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100052 usb usbus2 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100053 usb usbus2 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100054 usb usbus2 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100055 usb usbus2 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100057 usb usbus3 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100058 usb usbus3 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100059 usb usbus3 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100060 usb usbus3 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100063 usb usbus4 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100064 usb usbus4 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100065 usb usbus4 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100066 usb usbus4 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100067 usb usbus5 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100068 usb usbus5 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100069 usb usbus5 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100070 usb usbus5 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100071 usb usbus6 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100072 usb usbus6 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100073 usb usbus6 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100074 usb usbus6 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100075 usb usbus7 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100076 usb usbus7 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100077 usb usbus7 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100078 usb usbus7 mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
15 100457 usb ucom mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d usb_process+0xe0 fork_exit+0x9a
fork_trampoline+0xe
16 100117 syncer - mi_switch+0xe1
sleepq_timedwait+0x3a _cv_timedwait_sbt+0x18b sched_sync+0x6be
fork_exit+0x9a fork_trampoline+0xe
17 100118 vnlru - mi_switch+0xe1
sleepq_timedwait+0x3a _sleep+0x26e vnlru_proc+0x48 fork_exit+0x9a
fork_trampoline+0xe
479 100454 devd - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_timedwait_sig+0x10
_cv_timedwait_sig_sbt+0x18b seltdwait+0xa4 kern_select+0x8fa
sys_select+0x54 amd64_syscall+0x351 Xfast_syscall+0xfb
554 100452 syslogd - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
seltdwait+0xae kern_select+0x8fa sys_select+0x54 amd64_syscall+0x351
Xfast_syscall+0xfb
750 100455 sshd - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
seltdwait+0xae kern_select+0x8fa sys_select+0x54 amd64_syscall+0x351
Xfast_syscall+0xfb
808 100424 getty - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
tty_wait+0x1c ttydisc_read+0x2d4 ttydev_read+0x86 devfs_read_f+0xeb
dofileread+0x95 kern_readv+0x68 sys_read+0x63 amd64_syscall+0x351
Xfast_syscall+0xfb
809 100470 getty - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
tty_wait+0x1c ttydisc_read+0x2d4 ttydev_read+0x86 devfs_read_f+0xeb
dofileread+0x95 kern_readv+0x68 sys_read+0x63 amd64_syscall+0x351
Xfast_syscall+0xfb
810 100471 getty - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
tty_wait+0x1c ttydisc_read+0x2d4 ttydev_read+0x86 devfs_read_f+0xeb
dofileread+0x95 kern_readv+0x68 sys_read+0x63 amd64_syscall+0x351
Xfast_syscall+0xfb
811 100472 getty - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
tty_wait+0x1c ttydisc_read+0x2d4 ttydev_read+0x86 devfs_read_f+0xeb
dofileread+0x95 kern_readv+0x68 sys_read+0x63 amd64_syscall+0x351
Xfast_syscall+0xfb
812 100473 getty - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
tty_wait+0x1c ttydisc_read+0x2d4 ttydev_read+0x86 devfs_read_f+0xeb
dofileread+0x95 kern_readv+0x68 sys_read+0x63 amd64_syscall+0x351
Xfast_syscall+0xfb
813 100474 getty - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
tty_wait+0x1c ttydisc_read+0x2d4 ttydev_read+0x86 devfs_read_f+0xeb
dofileread+0x95 kern_readv+0x68 sys_read+0x63 amd64_syscall+0x351
Xfast_syscall+0xfb
814 100475 getty - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
tty_wait+0x1c ttydisc_read+0x2d4 ttydev_read+0x86 devfs_read_f+0xeb
dofileread+0x95 kern_readv+0x68 sys_read+0x63 amd64_syscall+0x351
Xfast_syscall+0xfb
815 100476 getty - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
tty_wait+0x1c ttydisc_read+0x2d4 ttydev_read+0x86 devfs_read_f+0xeb
dofileread+0x95 kern_readv+0x68 sys_read+0x63 amd64_syscall+0x351
Xfast_syscall+0xfb
1661 100498 sshd - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
seltdwait+0xae sys_poll+0x3a3 amd64_syscall+0x351 Xfast_syscall+0xfb
1664 100514 sshd - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
seltdwait+0xae kern_select+0x8fa sys_select+0x54 amd64_syscall+0x351
Xfast_syscall+0xfb
1665 100425 sh - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _sleep+0x27d
kern_wait6+0x5f4 sys_wait4+0x72 amd64_syscall+0x351 Xfast_syscall+0xfb
2000 100477 su - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _sleep+0x27d
kern_wait6+0x5f4 sys_wait4+0x72 amd64_syscall+0x351 Xfast_syscall+0xfb
2001 100503 csh - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _sleep+0x27d
kern_sigsuspend+0xf4 sys_sigsuspend+0x31 amd64_syscall+0x351
Xfast_syscall+0xfb
2074 100524 zpool - mi_switch+0xe1
sleepq_wait+0x3a _cv_wait+0x16d txg_wait_synced+0x85 spa_load+0x1cd1
spa_load_best+0x6f spa_import+0x1ff zfs_ioc_pool_import+0x137
zfsdev_ioctl+0x6f0 devfs_ioctl_f+0x114 kern_ioctl+0x255 sys_ioctl+0x13c
amd64_syscall+0x351 Xfast_syscall+0xfb
2075 100430 sshd - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
seltdwait+0xae sys_poll+0x3a3 amd64_syscall+0x351 Xfast_syscall+0xfb
2079 100446 sshd - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a
seltdwait+0xae kern_select+0x8fa sys_select+0x54 amd64_syscall+0x351
Xfast_syscall+0xfb
2080 100488 sh - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _sleep+0x27d
kern_wait6+0x5f4 sys_wait4+0x72 amd64_syscall+0x351 Xfast_syscall+0xfb
2081 100517 su - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _sleep+0x27d
kern_wait6+0x5f4 sys_wait4+0x72 amd64_syscall+0x351 Xfast_syscall+0xfb
2082 100465 csh - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _sleep+0x27d
kern_sigsuspend+0xf4 sys_sigsuspend+0x31 amd64_syscall+0x351
Xfast_syscall+0xfb
2085 100451 procstat - <running>

Steven Hartland

Oct 9, 2014, 9:03:14 PM
to Mark Martinec, freebsd...@freebsd.org
Sorry to be a pain but could you attach the full output or link it
somewhere as mail has messed up the formatting :(

----- Original Message -----
From: "Mark Martinec" <Mark.Marti...@ijs.si>
To: <freebsd...@freebsd.org>

Mark Martinec

Oct 9, 2014, 9:10:36 PM
to freebsd...@freebsd.org
On 10/10/2014 03:02, Steven Hartland wrote:
> Sorry to be a pain but could you attach the full output or link it
> somewhere as mail has messed up the formatting :(

(sent the attachment to Steven directly,
if somebody else is interested please let me know)

Mark Martinec

Oct 12, 2014, 4:41:03 PM
to freebsd...@freebsd.org, freeb...@freebsd.org
I have made available an image copy (dd) of my 4 GiB partition
(compressed down to a 384 MiB file), holding one of my
pools (a small BSD root) that can no longer be imported
on 10.1-RC1 or 10.1-BETA3, as described in my first
posting (below):

http://www.ijs.si/usr/mark/bsd/

I would appreciate it if someone could confirm that this pool
(one of several I have with this symptom) causes
'zpool import' to hang on 10.1 or 10-STABLE:

- download the image, then: xz -d sys1boot.img.xz
# mdconfig -f sys1boot.img
# zpool import sys1boot

.. and advise on a solution.

Considering that 'zdb -e -cc' is happy and there was no
other prior trouble with these pools, apart from an upgrade
from 10-STABLE (as of ca. late September) to 10.1-BETA3/-RC1,
I believe these pools are still healthy, just
non-importable. I'm reluctant to upgrade any other system
from 10.0 to 10.1 without finding out what went wrong here.

Mark


On 10/10/2014 03:02, Steven Hartland wrote:
> Sorry to be a pain but could you attach the full output or link it
> somewhere as mail has messed up the formatting :(

Now at
http://www.ijs.si/usr/mark/bsd/procstat-kka-2074.txt

On 2014-10-10 Mark Martinec wrote:
> In short, after upgrading to 10.1-BETA3 or -RC1 I ended up with several
> zfs pools that can no longer be imported. The zpool import command
> (with no arguments) does show all available pools, but trying to
> import one just hangs and the command cannot be aborted, although
> the rest of the system is still alive and fine:
>
> # zpool import -f <mypool>
>
> Typing ^T just shows an idle process, waiting on [tx->tx_sync_done_cv]:
>
> load: 0.20 cmd: zpool 939 [tx->tx_sync_done_cv] 5723.65r 0.01u 0.02s 0% 8220k
> load: 0.16 cmd: zpool 939 [tx->tx_sync_done_cv] 5735.73r 0.01u 0.02s 0% 8220k
> load: 0.14 cmd: zpool 939 [tx->tx_sync_done_cv] 5741.83r 0.01u 0.02s 0% 8220k
> load: 0.13 cmd: zpool 939 [tx->tx_sync_done_cv] 5749.16r 0.01u 0.02s 0% 8220k
>
> ps shows (on a system re-booted to a LiveCD running FreeBSD-10.1-RC1):
>
> PID TID COMM TDNAME CPU PRI STATE WCHAN
> 939 100632 zpool - 5 122 sleep tx->tx_s
> UID PID PPID CPU PRI NI VSZ RSS MWCHAN STAT TT TIME COMMAND
> 0 939 801 0 22 0 107732 8236 tx->tx_s D+ v0 0:00.04
> zpool import -f -o cachefile=/tmp/zpool.cache -R /tmp/sys0boot sys0boot
>
> NWCHAN
> fffff8007b0f2a20
>
> # procstat -kk 939
>
> PID TID COMM TDNAME KSTACK
> 939 100632 zpool - mi_switch+0xe1 sleepq_wait+0x3a _cv_wait+0x16d txg_wait_synced+0x85 spa_load+0x1cd1 spa_load_best+0x6f spa_import+0x1ff zfs_ioc_pool_import+0x137 zfsdev_ioctl+0x6f0 devfs_ioctl_f+0x114 kern_ioctl+0x255 sys_ioctl+0x13c amd64_syscall+0x351 Xfast_syscall+0xfb
>
>

Steven Hartland

Oct 12, 2014, 8:54:43 PM
to Mark Martinec, freebsd...@freebsd.org, freeb...@freebsd.org
I have reproduced the issue with your image. It seems
like the import is waiting for a transaction group to complete.

It looks like it's waiting on the zio from dsl_pool_sync_mos,
which for some reason never returns.

The pool does seem to import happily read-only with:
zpool import -N -f -o readonly=on sys1boot

print dp->dp_tx
$3 = {
tx_cpu = 0xfffffe0004a05000,
tx_sync_lock = {
lock_object = {
lo_name = 0xffffffff815f2941 "tx->tx_sync_lock",
lo_flags = 40960000,
lo_data = 0,
lo_witness = 0x0
},
sx_lock = 1
},
tx_open_txg = 11733519,
tx_quiesced_txg = 0,
tx_syncing_txg = 11733518,
tx_synced_txg = 0,
tx_open_time = 120562502889,
tx_sync_txg_waiting = 11733518,
tx_quiesce_txg_waiting = 11733519,
tx_sync_more_cv = {
cv_description = 0xffffffff815f2953 "tx->tx_sync_more_cv",
cv_waiters = 0
},
tx_sync_done_cv = {
cv_description = 0xffffffff815f2968 "tx->tx_sync_done_cv",
cv_waiters = 1
},
tx_quiesce_more_cv = {
cv_description = 0xffffffff815f297d "tx->tx_quiesce_more_cv",
cv_waiters = 1
},
tx_quiesce_done_cv = {
cv_description = 0xffffffff815f2995 "tx->tx_quiesce_done_cv",
cv_waiters = 0
},
tx_timeout_cv = {
cv_description = 0x0,
cv_waiters = 0
},
tx_exit_cv = {
cv_description = 0xffffffff815f29ad "tx->tx_exit_cv",
cv_waiters = 0
},
tx_threads = 2 '\002',
tx_exiting = 0 '\0',
tx_sync_thread = 0xfffff80041505490,
tx_quiesce_thread = 0xfffff80041509920,
tx_commit_cb_taskq = 0x0
}

Relevant threads:

#0 sched_switch (td=0xfffff800446bc000, newtd=<value optimized out>, flags=<value optimized out>) at
/usr/src/sys/kern/sched_ule.c:1945
#1 0xffffffff806110d9 in mi_switch (flags=260, newtd=0x0) at /usr/src/sys/kern/kern_synch.c:494
#2 0xffffffff8064ddc2 in sleepq_switch (wchan=<value optimized out>, pri=<value optimized out>) at
/usr/src/sys/kern/subr_sleepqueue.c:538
#3 0xffffffff8064dc23 in sleepq_wait (wchan=0xfffff80021f4b220, pri=0) at /usr/src/sys/kern/subr_sleepqueue.c:617
#4 0xffffffff805b7c1a in _cv_wait (cvp=0xfffff80021f4b220, lock=0xfffff80021f4b1b8) at /usr/src/sys/kern/kern_condvar.c:139
#5 0xffffffff8151ec4b in txg_wait_synced (dp=0xfffff80021f4b000, txg=<value optimized out>) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c:631
#6 0xffffffff81510937 in spa_load (spa=0xfffffe0003fe1000, state=SPA_LOAD_IMPORT, type=<value optimized out>, mosconfig=<value
optimized out>) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:2777
#7 0xffffffff8150e5df in spa_load_best (spa=0xfffffe0003fe1000, state=SPA_LOAD_IMPORT, mosconfig=1, max_request=<value optimized
out>, rewind_flags=1) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:2873
#8 0xffffffff8150e023 in spa_import (pool=0xfffffe0002e2a000 "sys1boot", config=0xfffff80044f24ba0, props=0x0, flags=<value
optimized out>) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:4207
#9 0xffffffff81566d78 in zfs_ioc_pool_import (zc=0xfffffe0002e2a000) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c:1594
#10 0xffffffff81563c4f in zfsdev_ioctl (dev=<value optimized out>, zcmd=<value optimized out>, arg=<value optimized out>,
flag=<value optimized out>, td=<value optimized out>) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c:6158
#11 0xffffffff805299eb in devfs_ioctl_f (fp=0xfffff8004f020aa0, com=3222821378, data=0xfffff80038123e60, cred=<value optimized out>,
td=0xfffff800446bc000) at /usr/src/sys/fs/devfs/devfs_vnops.c:757
#12 0xffffffff8066201b in kern_ioctl (td=<value optimized out>, fd=<value optimized out>, com=<value optimized out>) at file.h:320
#13 0xffffffff80661d9c in sys_ioctl (td=0xfffff800446bc000, uap=0xfffffe011e49fa40) at /usr/src/sys/kern/sys_generic.c:702
#14 0xffffffff8088defa in amd64_syscall (td=0xfffff800446bc000, traced=0) at subr_syscall.c:134
#15 0xffffffff80870ecb in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:391


Thread 509 (Thread 100533):
#0 sched_switch (td=0xfffff800446bf000, newtd=<value optimized out>, flags=<value optimized out>) at
/usr/src/sys/kern/sched_ule.c:1945
#1 0xffffffff806110d9 in mi_switch (flags=260, newtd=0x0) at /usr/src/sys/kern/kern_synch.c:494
#2 0xffffffff8064ddc2 in sleepq_switch (wchan=<value optimized out>, pri=<value optimized out>) at
/usr/src/sys/kern/subr_sleepqueue.c:538
#3 0xffffffff8064e323 in sleepq_timedwait (wchan=0xfffff80044e47210, pri=0) at /usr/src/sys/kern/subr_sleepqueue.c:652
#4 0xffffffff805b84f0 in _cv_timedwait_sbt (cvp=0xfffff80044e47210, lock=0xfffff80044e471b8, sbt=<value optimized out>, pr=<value
optimized out>, flags=<value optimized out>) at /usr/src/sys/kern/kern_condvar.c:325
#5 0xffffffff8151f72e in txg_thread_wait (tx=<value optimized out>, cpr=0xfffffe011dc76a00, cv=0xfffff80044e47210, time=<value
optimized out>) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c:249
#6 0xffffffff8151e930 in txg_sync_thread (arg=0xfffff80044e47000) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c:483
#7 0xffffffff805d5bd4 in fork_exit (callout=0xffffffff8151e480 <txg_sync_thread>, arg=0xfffff80044e47000, frame=0xfffffe011dc76ac0)
at /usr/src/sys/kern/kern_fork.c:996
#8 0xffffffff8087111e in fork_trampoline () at /usr/src/sys/amd64/amd64/exception.S:606
#9 0x0000000000000000 in ?? ()

Thread 508 (Thread 100532):
#0 sched_switch (td=0xfffff80048204920, newtd=<value optimized out>, flags=<value optimized out>) at
/usr/src/sys/kern/sched_ule.c:1945
#1 0xffffffff806110d9 in mi_switch (flags=260, newtd=0x0) at /usr/src/sys/kern/kern_synch.c:494
#2 0xffffffff8064ddc2 in sleepq_switch (wchan=<value optimized out>, pri=<value optimized out>) at
/usr/src/sys/kern/subr_sleepqueue.c:538
#3 0xffffffff8064dc23 in sleepq_wait (wchan=0xfffff80044e47230, pri=0) at /usr/src/sys/kern/subr_sleepqueue.c:617
#4 0xffffffff805b7c1a in _cv_wait (cvp=0xfffff80044e47230, lock=0xfffff80044e471b8) at /usr/src/sys/kern/kern_condvar.c:139
#5 0xffffffff8151f73b in txg_thread_wait (tx=<value optimized out>, cpr=0xfffffe011dc6da00, cv=<value optimized out>, time=<value
optimized out>) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c:251
#6 0xffffffff8151e450 in txg_quiesce_thread (arg=0xfffff80044e47000) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c:556
#7 0xffffffff805d5bd4 in fork_exit (callout=0xffffffff8151e030 <txg_quiesce_thread>, arg=0xfffff80044e47000,
frame=0xfffffe011dc6dac0) at /usr/src/sys/kern/kern_fork.c:996
#8 0xffffffff8087111e in fork_trampoline () at /usr/src/sys/amd64/amd64/exception.S:606
#9 0x0000000000000000 in ?? ()


Thread 512 (Thread 100338):
#0 sched_switch (td=0xfffff80041505490, newtd=<value optimized out>, flags=<value optimized out>) at
/usr/src/sys/kern/sched_ule.c:1945
#1 0xffffffff806110d9 in mi_switch (flags=260, newtd=0x0) at /usr/src/sys/kern/kern_synch.c:494
#2 0xffffffff8064ddc2 in sleepq_switch (wchan=<value optimized out>, pri=<value optimized out>) at
/usr/src/sys/kern/subr_sleepqueue.c:538
#3 0xffffffff8064dc23 in sleepq_wait (wchan=0xfffff80044acf320, pri=0) at /usr/src/sys/kern/subr_sleepqueue.c:617
#4 0xffffffff805b7c1a in _cv_wait (cvp=0xfffff80044acf320, lock=0xfffff80044acf300) at /usr/src/sys/kern/kern_condvar.c:139
#5 0xffffffff81547c9b in zio_wait (zio=<value optimized out>) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1442
#6 0xffffffff814f056c in dsl_pool_sync (dp=0xfffff80021f4b000, txg=11733518) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_pool.c:531
#7 0xffffffff81514030 in spa_sync (spa=0xfffffe0003fe1000, txg=11733518) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:6604
#8 0xffffffff8151e6cd in txg_sync_thread (arg=0xfffff80021f4b000) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c:518
#9 0xffffffff805d5bd4 in fork_exit (callout=0xffffffff8151e480 <txg_sync_thread>, arg=0xfffff80021f4b000, frame=0xfffffe011e6fdac0)
at /usr/src/sys/kern/kern_fork.c:996
#10 0xffffffff8087111e in fork_trampoline () at /usr/src/sys/amd64/amd64/exception.S:606
#11 0x0000000000000000 in ?? ()


----- Original Message -----
From: "Mark Martinec" <Mark.Marti...@ijs.si>
To: <freebsd...@freebsd.org>; <freeb...@freebsd.org>
Sent: Sunday, October 12, 2014 9:40 PM
Subject: Re: zfs pool import hangs on [tx->tx_sync_done_cv]


>I made available an image copy (dd) of my 4 GiB partition
> (compressed down to a 384 MiB file), holding one of my
> partitions (a small bsd root) that can no longer be imported
> into a 10.1-RC1 or 10.1-BETA3, as described in my first
> posting (below):
>
> http://www.ijs.si/usr/mark/bsd/
>
> I would appreciate if it can be confirmed that such pool
> (one of several I have with this symptom) causes
> 'zpool import' to hang on 10.1 or 10-STABLE:
>
> - download, xz -d sys1boot.img.xz
> # mdconfig -f sys1boot.img
> # zpool import sys1boot
>
> ... and advise on a solution.

K. Macy

Oct 12, 2014, 11:54:52 PM
to Steven Hartland, Mark Martinec, freeb...@freebsd.org, FreeBSD Stable
A recent quick read of the code would lead me to believe that zio_wait not
returning there means that the zio never reached the zio_done stage. Parent
zios seem to yield in a couple of stages in the pipeline if they have
incomplete children. They determine this by calling zio_wait_for_children
with zio child types and their corresponding wait type. In doing so they
set io_stall to point at the waiter count of the first
non-zero check. This parent I/O will be resumed by the last child zio of
that type and wait state in zio_notify_parent. I'm sure you know all this -
but I wrote it to preface asking for the following fields of the zio being
waited on in dsl_pool_sync_mos: io_stall (i.e., which field in io_children
is pointed to), *io_stall, io_children[*][*], io_child_list (at a first
glance just the addresses). The other alternative is that a re-executing
zio has gotten into a bad place in the state machine, so also io_reexecute.

Thanks.

-K

Steven Hartland

Oct 13, 2014, 4:06:30 AM
to K. Macy, Mark Martinec, freeb...@freebsd.org, FreeBSD Stable
----- Original Message -----
From: "K. Macy" <km...@freebsd.org>

> A recent quick read of the code would lead me to believe that zio_wait not
> returning there means that the zio never reached the zio_done stage. Parent
> zios seem to yield in a couple of stages in the pipeline if they have
> incomplete children. They determine this by calling zio_wait_for_children
> with zio child types and their corresponding wait type. In so doing they
> set the io_stall to the count of the number of waiters of the first
> non-zero check. This parent I/O will be resumed by the last child zio of
> that type and wait state in zio_notify_parent. I'm sure you know all this -
> but I wrote it to preface asking for the following fields of the zio being
> waited on in dsl_pool_sync_mos: io_stall (i.e, which field in io_children
> is pointed to) *io_stall, io_children[*][*], io_child_list (at a first
> glance just the addresses). The other alternative is that it reexecuting
>has gotten in to a bad place in the state machine so io_reexecute.

Yeah, I would have got the zio details, but typically it's "optimised out" by the
compiler, so it will need some effort to track that down, unfortunately :(

Regards
Steve

K. Macy

Oct 13, 2014, 4:12:06 AM
to Steven Hartland, Mark Martinec, freeb...@freebsd.org, FreeBSD Stable
On Mon, Oct 13, 2014 at 1:06 AM, Steven Hartland <kil...@multiplay.co.uk>
wrote:

> ----- Original Message ----- From: "K. Macy" <km...@freebsd.org>
>
> A recent quick read of the code would lead me to believe that zio_wait not
>> returning there means that the zio never reached the zio_done stage.
>> Parent
>> zios seem to yield in a couple of stages in the pipeline if they have
>> incomplete children. They determine this by calling zio_wait_for_children
>> with zio child types and their corresponding wait type. In so doing they
>> set the io_stall to the count of the number of waiters of the first
>> non-zero check. This parent I/O will be resumed by the last child zio of
>> that type and wait state in zio_notify_parent. I'm sure you know all this
>> -
>> but I wrote it to preface asking for the following fields of the zio being
>> waited on in dsl_pool_sync_mos: io_stall (i.e, which field in io_children
>> is pointed to) *io_stall, io_children[*][*], io_child_list (at a first
>> glance just the addresses). The other alternative is that it reexecuting
>> has gotten in to a bad place in the state machine so io_reexecute.
>>
>
> Yer I would have got the zio details but typically its "optimised out" by
> the
> compiler, so will need some effort to track that down unfortunately :(
>

Well, let me know if you can. Re-creating a new 10.x VM is taking a while,
as it's taking me forever to check out the sources.

Things like that need to somehow continue to be accessible.

Cheers.

-K

K. Macy

Oct 13, 2014, 2:40:31 PM
to Steven Hartland, Mark Martinec, freeb...@freebsd.org, FreeBSD Stable
>> Yer I would have got the zio details but typically its "optimised out" by
>> the
>> compiler, so will need some effort to track that down unfortunately :(
>
>
> Well, let me know if you can. Re-creating a new 10.x VM is taking a while as
> it's taking me forever to checkout the sources.
>
> Things like that need to somehow continue to be accessible.


So there is an outstanding child, but it isn't clear what state it's
in because the child pointer in the zio_link isn't valid. It's as if
the memory got re-used for something else. I'm dumping my findings
below on the off-chance that it helps you push this along.




(kgdb) thread 459
[Switching to thread 459 (Thread 101524)]#0 sched_switch
(td=0xfffff80063111000, newtd=<value optimized out>,
flags=<value optimized out>) at
/usr/home/kmacy/devel/svn/10/sys/kern/sched_ule.c:1945
1945 cpuid = PCPU_GET(cpuid);
Current language: auto; currently minimal
(kgdb) bt
#0 sched_switch (td=0xfffff80063111000, newtd=<value optimized out>,
flags=<value optimized out>)
at /usr/home/kmacy/devel/svn/10/sys/kern/sched_ule.c:1945
#1 0xffffffff807aa199 in mi_switch (flags=260, newtd=0x0) at
/usr/home/kmacy/devel/svn/10/sys/kern/kern_synch.c:494
#2 0xffffffff807e6e82 in sleepq_switch (wchan=<value optimized out>,
pri=<value optimized out>)
at /usr/home/kmacy/devel/svn/10/sys/kern/subr_sleepqueue.c:538
#3 0xffffffff807e6ce3 in sleepq_wait (wchan=0xfffff8004ddf4a50, pri=0)
at /usr/home/kmacy/devel/svn/10/sys/kern/subr_sleepqueue.c:617
#4 0xffffffff80750d7a in _cv_wait (cvp=0xfffff8004ddf4a50,
lock=0xfffff8004ddf4a30)
at /usr/home/kmacy/devel/svn/10/sys/kern/kern_condvar.c:139
#5 0xffffffff817d145b in zio_wait (zio=<value optimized out>)
at /usr/home/kmacy/devel/svn/10/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1442
#6 0xffffffff81779d3c in dsl_pool_sync (dp=0xfffff8004d364800, txg=11733518)
at /usr/home/kmacy/devel/svn/10/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_pool.c:531
#7 0xffffffff8179d800 in spa_sync (spa=0xfffffe000372f000, txg=11733518)
at /usr/home/kmacy/devel/svn/10/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:6604
#8 0xffffffff817a7e9d in txg_sync_thread (arg=0xfffff8004d364800)
at /usr/home/kmacy/devel/svn/10/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c:518
#9 0xffffffff8076ed34 in fork_exit (callout=0xffffffff817a7c50
<txg_sync_thread>, arg=0xfffff8004d364800,
frame=0xfffffe012043fac0) at
/usr/home/kmacy/devel/svn/10/sys/kern/kern_fork.c:996
#10 0xffffffff80b96b3e in fork_trampoline () at
/usr/home/kmacy/devel/svn/10/sys/amd64/amd64/exception.S:606
#11 0x0000000000000000 in ?? ()
(kgdb) f 5
#5 0xffffffff817d145b in zio_wait (zio=<value optimized out>)
at /usr/home/kmacy/devel/svn/10/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1442
1442 cv_wait(&zio->io_cv, &zio->io_lock);
(kgdb) disassemble zio_wait
Dump of assembler code for function zio_wait:
0xffffffff817d13c0 <zio_wait+0>: push %rbp
0xffffffff817d13c1 <zio_wait+1>: mov %rsp,%rbp
0xffffffff817d13c4 <zio_wait+4>: push %r15
0xffffffff817d13c6 <zio_wait+6>: push %r14
0xffffffff817d13c8 <zio_wait+8>: push %r12
0xffffffff817d13ca <zio_wait+10>: push %rbx
0xffffffff817d13cb <zio_wait+11>: mov %rdi,%r14
0xffffffff817d13ce <zio_wait+14>: cmpl $0x1,0x254(%r14)
0xffffffff817d13d6 <zio_wait+22>: je 0xffffffff817d13f0 <zio_wait+48>
0xffffffff817d13d8 <zio_wait+24>: mov $0xffffffff81883de9,%rdi
0xffffffff817d13df <zio_wait+31>: mov $0xffffffff81883b20,%rsi
0xffffffff817d13e6 <zio_wait+38>: mov $0x599,%edx
0xffffffff817d13eb <zio_wait+43>: callq 0xffffffff81a19200 <assfail>
0xffffffff817d13f0 <zio_wait+48>: cmpq $0x0,0x2f0(%r14)
0xffffffff817d13f8 <zio_wait+56>: je 0xffffffff817d1412 <zio_wait+82>
0xffffffff817d13fa <zio_wait+58>: mov $0xffffffff8188410c,%rdi
0xffffffff817d1401 <zio_wait+65>: mov $0xffffffff81883b20,%rsi
0xffffffff817d1408 <zio_wait+72>: mov $0x59a,%edx
0xffffffff817d140d <zio_wait+77>: callq 0xffffffff81a19200 <assfail>
0xffffffff817d1412 <zio_wait+82>: mov %gs:0x0,%rax
0xffffffff817d141b <zio_wait+91>: mov %rax,0x2f8(%r14)
0xffffffff817d1422 <zio_wait+98>: mov %r14,%rdi
0xffffffff817d1425 <zio_wait+101>: callq 0xffffffff817d24c0 <zio_execute>
0xffffffff817d142a <zio_wait+106>: lea 0x300(%r14),%r15
0xffffffff817d1431 <zio_wait+113>: xor %esi,%esi
0xffffffff817d1433 <zio_wait+115>: mov $0xffffffff81883b20,%rdx
0xffffffff817d143a <zio_wait+122>: mov $0x5a0,%ecx
0xffffffff817d143f <zio_wait+127>: mov %r15,%rdi
0xffffffff817d1442 <zio_wait+130>: callq 0xffffffff807a8270 <_sx_xlock>
0xffffffff817d1447 <zio_wait+135>: lea 0x320(%r14),%rbx
0xffffffff817d144e <zio_wait+142>: jmp 0xffffffff817d145b <zio_wait+155>
0xffffffff817d1450 <zio_wait+144>: mov %rbx,%rdi
0xffffffff817d1453 <zio_wait+147>: mov %r15,%rsi
0xffffffff817d1456 <zio_wait+150>: callq 0xffffffff80750ba0 <_cv_wait>
0xffffffff817d145b <zio_wait+155>: cmpq $0x0,0x2f0(%r14)
0xffffffff817d1463 <zio_wait+163>: jne 0xffffffff817d1450 <zio_wait+144>
0xffffffff817d1465 <zio_wait+165>: mov $0xffffffff81883b20,%rsi
0xffffffff817d146c <zio_wait+172>: mov $0x5a3,%edx
0xffffffff817d1471 <zio_wait+177>: mov %r15,%rdi
0xffffffff817d1474 <zio_wait+180>: callq 0xffffffff807a8630 <_sx_xunlock>
0xffffffff817d1479 <zio_wait+185>: mov 0x268(%r14),%r12d
0xffffffff817d1480 <zio_wait+192>: lea 0xf0(%r14),%rdi
0xffffffff817d1487 <zio_wait+199>: callq 0xffffffff8172d480 <list_destroy>
0xffffffff817d148c <zio_wait+204>: lea 0x110(%r14),%rdi
0xffffffff817d1493 <zio_wait+211>: callq 0xffffffff8172d480 <list_destroy>
0xffffffff817d1498 <zio_wait+216>: mov %r15,%rdi
0xffffffff817d149b <zio_wait+219>: callq 0xffffffff807a8010 <sx_destroy>
0xffffffff817d14a0 <zio_wait+224>: mov %rbx,%rdi
0xffffffff817d14a3 <zio_wait+227>: callq 0xffffffff80750b50 <cv_destroy>
0xffffffff817d14a8 <zio_wait+232>: mov 0xffffffff818af6e0,%rdi
0xffffffff817d14b0 <zio_wait+240>: mov %r14,%rsi
0xffffffff817d14b3 <zio_wait+243>: callq 0xffffffff81a19400
<kmem_cache_free>
0xffffffff817d14b8 <zio_wait+248>: mov %r12d,%eax
0xffffffff817d14bb <zio_wait+251>: pop %rbx
0xffffffff817d14bc <zio_wait+252>: pop %r12
0xffffffff817d14be <zio_wait+254>: pop %r14
0xffffffff817d14c0 <zio_wait+256>: pop %r15
0xffffffff817d14c2 <zio_wait+258>: pop %rbp
0xffffffff817d14c3 <zio_wait+259>: retq
End of assembler dump.

0xffffffff817d1422 <zio_wait+98>: mov %r14,%rdi
0xffffffff817d1425 <zio_wait+101>: callq 0xffffffff817d24c0 <zio_execute>
0xffffffff817d142a <zio_wait+106>: lea 0x300(%r14),%r15
0xffffffff817d1431 <zio_wait+113>: xor %esi,%esi
0xffffffff817d1433 <zio_wait+115>: mov $0xffffffff81883b20,%rdx
0xffffffff817d143a <zio_wait+122>: mov $0x5a0,%ecx
0xffffffff817d143f <zio_wait+127>: mov %r15,%rdi
0xffffffff817d1442 <zio_wait+130>: callq 0xffffffff807a8270 <_sx_xlock>
0xffffffff817d1447 <zio_wait+135>: lea 0x320(%r14),%rbx
0xffffffff817d144e <zio_wait+142>: jmp 0xffffffff817d145b <zio_wait+155>
0xffffffff817d1450 <zio_wait+144>: mov %rbx,%rdi
0xffffffff817d1453 <zio_wait+147>: mov %r15,%rsi
0xffffffff817d1456 <zio_wait+150>: callq 0xffffffff80750ba0 <_cv_wait>

(kgdb) p *(zio_t *)$r14
$1 = {
io_bookmark = {
zb_objset = 0,
zb_object = 0,
zb_level = 0,
zb_blkid = 0
},
io_prop = {
zp_checksum = ZIO_CHECKSUM_INHERIT,
zp_compress = ZIO_COMPRESS_INHERIT,
zp_type = DMU_OT_NONE,
zp_level = 0 '\0',
zp_copies = 0 '\0',
zp_dedup = 0,
zp_dedup_verify = 0,
zp_nopwrite = 0
},
io_type = ZIO_TYPE_NULL,
io_child_type = ZIO_CHILD_LOGICAL,
io_cmd = 0,
io_priority = ZIO_PRIORITY_NOW,
io_reexecute = 2 '\002',
io_state = "\001\001",
io_txg = 0,
io_spa = 0xfffffe000372f000,
io_bp = 0x0,
io_bp_override = 0x0,
io_bp_copy = {
blk_dva = {{
dva_word = {0, 0}
}, {
dva_word = {0, 0}
}, {
dva_word = {0, 0}
}},
blk_prop = 0,
blk_pad = {0, 0},
blk_phys_birth = 0,
blk_birth = 0,
blk_fill = 0,
blk_cksum = {
zc_word = {0, 0, 0, 0}
}
},
io_parent_list = {
list_size = 48,
list_offset = 16,
list_head = {
list_next = 0xfffff800b435c850,
list_prev = 0xfffff800b435c850
}
},
io_child_list = {
list_size = 48,
list_offset = 32,
list_head = {
list_next = 0xfffff80003585770,
list_prev = 0xfffff80003585770
}
},
io_walk_link = 0x0,
io_logical = 0x0,
io_transform_stack = 0x0,
io_ready = 0,
io_physdone = 0,
io_done = 0,
io_private = 0x0,
io_prev_space_delta = 0,
io_bp_orig = {
blk_dva = {{
dva_word = {0, 0}
}, {
dva_word = {0, 0}
}, {
dva_word = {0, 0}
}},
blk_prop = 0,
blk_pad = {0, 0},
blk_phys_birth = 0,
blk_birth = 0,
blk_fill = 0,
blk_cksum = {
zc_word = {0, 0, 0, 0}
}
},
io_data = 0x0,
io_orig_data = 0x0,
io_size = 0,
io_orig_size = 0,
io_vd = 0x0,
io_vsd = 0x0,
io_vsd_ops = 0x0,
io_offset = 0,
io_timestamp = 0,
io_queue_node = {
avl_child = {0x0, 0x0},
avl_pcb = 0
},
io_flags = 0,
io_stage = ZIO_STAGE_DONE,
io_pipeline = 2162688,
io_orig_flags = 0,
io_orig_stage = ZIO_STAGE_OPEN,
io_orig_pipeline = 2162688,
io_error = 0,
io_child_error = {0, 0, 0, 0},
io_children = {{0, 0}, {0, 0}, {0, 0}, {0, 0}},
io_child_count = 1,
io_phys_children = 0,
io_parent_count = 1,
io_stall = 0x0,
io_gang_leader = 0x0,
io_gang_tree = 0x0,
io_executor = 0xfffff8006314a000,
io_waiter = 0xfffff80063111000,
io_lock = {
lock_object = {
lo_name = 0xffffffff8188655c "zio->io_lock",
lo_flags = 41091072,
lo_data = 0,
lo_witness = 0xfffffe00006eca80
},
sx_lock = 1
},
io_cv = {
cv_description = 0xffffffff8188656a "zio->io_cv",
cv_waiters = 1
},
io_cksum_report = 0x0,
io_ena = 0,
io_tqent = {
tqent_task = {
ta_link = {
stqe_next = 0x0
},
ta_pending = 0,
ta_priority = 0,
ta_func = 0,
ta_context = 0x0
},
tqent_func = 0,
tqent_arg = 0x0
},
io_trim_node = {
avl_child = {0x0, 0x0},
avl_pcb = 0
},
io_trim_link = {
list_next = 0x0,
list_prev = 0x0
}
}


All of its children have called zio_notify_parent on this zio and
are error free:
io_child_error = {0, 0, 0, 0},
io_children = {{0, 0}, {0, 0}, {0, 0}, {0, 0}},

(kgdb) p ((zio_t *)$r14)->io_stall
$5 = (uint64_t *) 0x0

One child has itself not reached the end of zio_done:
io_child_count = 1,

(kgdb) p ((zio_t *)$r14)->io_child_list
$2 = {
list_size = 48,
list_offset = 32,
list_head = {
list_next = 0xfffff80003585770,
list_prev = 0xfffff80003585770
}


(kgdb) p *(zio_link_t *)0xfffff80003585770
$15 = {
zl_parent = 0xfffff8004ddf4850,
zl_child = 0xfffff8004ddf4850,
zl_parent_node = {
list_next = 0xfffff8004dd62398,
list_prev = 0xfffff8004d4d2730
},
zl_child_node = {
list_next = 0x0,
list_prev = 0x0
}
}

I don't understand why the parent and child are the same; of course that
could be part of the problem. That address isn't a valid zio, and the
child node entry doesn't point to anything.

It has a thread executing it (which itself is probably in cv_wait):
io_executor = 0xfffff8006314a000,


(kgdb) p *((struct thread *)0xfffff8006314a000)
$11 = {
td_lock = 0xffffffff81224b30,
td_proc = 0xffffffff814438d8,
td_plist = {
tqe_next = 0xfffff8006312b920,
tqe_prev = 0xfffff8006312a4a0
},
td_runq = {
tqe_next = 0x0,
tqe_prev = 0xffffffff811ef378
},
td_slpq = {
tqe_next = 0x0,
tqe_prev = 0xfffff80020e204c0
},
td_lockq = {
tqe_next = 0x0,
tqe_prev = 0x0
},
td_hash = {
le_next = 0x0,
le_prev = 0xfffffe0000879fc8
},
td_cpuset = 0xfffff800631d2318,
td_sel = 0x0,
td_sleepqueue = 0x0,
td_turnstile = 0xfffff80063128480,
td_rlqe = 0x0,
td_umtxq = 0xfffff8004d7a6380,
td_tid = 100345,
td_sigqueue = {
sq_signals = {
__bits = {0, 0, 0, 0}
},
sq_kill = {
__bits = {0, 0, 0, 0}
},
sq_list = {
tqh_first = 0x0,
tqh_last = 0xfffff8006314a0b8
},
sq_proc = 0xffffffff814438d8,
sq_flags = 1
},
td_lend_user_pri = 255 'ÿ',
td_flags = 4,
td_inhibitors = 2,
td_pflags = 2097152,
td_dupfd = 0,
td_sqqueue = 0,
td_wchan = 0xfffff800b48a2900,
td_wmesg = 0xffffffff80d64d7e "-",
td_lastcpu = 0 '\0',
td_oncpu = 255 'ÿ',
td_owepreempt = 0 '\0',
td_tsqueue = 0 '\0',
td_locks = 0,
td_rw_rlocks = 0,
td_lk_slocks = 0,
td_stopsched = 0,
td_blocked = 0x0,
td_lockname = 0x0,
td_contested = {
lh_first = 0x0
},
td_sleeplocks = 0xffffffff81369470,
---Type <return> to continue, or q <return> to quit---
td_intr_nesting_level = 0,
td_pinned = 0,
td_ucred = 0xfffff800027f1e00,
td_estcpu = 0,
td_slptick = 226829,
td_blktick = 0,
td_swvoltick = 226829,
td_cow = 0,
td_ru = {
ru_utime = {
tv_sec = 0,
tv_usec = 0
},
ru_stime = {
tv_sec = 0,
tv_usec = 0
},
ru_maxrss = 0,
ru_ixrss = 0,
ru_idrss = 0,
ru_isrss = 0,
ru_minflt = 0,
ru_majflt = 0,
ru_nswap = 0,
ru_inblock = 0,
ru_oublock = 0,
ru_msgsnd = 0,
ru_msgrcv = 0,
ru_nsignals = 0,
ru_nvcsw = 6,
ru_nivcsw = 0
},
td_rux = {
rux_runtime = 0,
rux_uticks = 0,
rux_sticks = 0,
rux_iticks = 0,
rux_uu = 0,
rux_su = 0,
rux_tu = 0
},
td_incruntime = 410296,
td_runtime = 410296,
td_pticks = 0,
td_sticks = 0,
td_iticks = 0,
td_uticks = 0,
td_intrval = 0,
td_oldsigmask = {
__bits = {0, 0, 0, 0}
},
td_generation = 6,
td_sigstk = {
ss_sp = 0x0,
ss_size = 0,
ss_flags = 0
},
td_xsig = 0,
td_profil_addr = 0,
td_profil_ticks = 0,
td_name = "zio_write_intr_7\000\000\000",
td_fpop = 0x0,
td_dbgflags = 0,
td_dbgksi = {
ksi_link = {
tqe_next = 0x0,
tqe_prev = 0x0
},
ksi_info = {
si_signo = 0,
si_errno = 0,
si_code = 0,
si_pid = 0,
si_uid = 0,
si_status = 0,
si_addr = 0x0,
si_value = {
sival_int = 0,
sival_ptr = 0x0,
sigval_int = 0,
sigval_ptr = 0x0
},
_reason = {
_fault = {
_trapno = 0
},
_timer = {
_timerid = 0,
_overrun = 0
},
_mesgq = {
_mqd = 0
},
_poll = {
_band = 0
},
__spare__ = {
__spare1__ = 0,
__spare2__ = {0, 0, 0, 0, 0, 0, 0}
}
}
},
ksi_flags = 0,
ksi_sigq = 0x0
},
td_ng_outbound = 0,
td_osd = {
osd_nslots = 0,
osd_slots = 0x0,
osd_next = {
le_next = 0x0,
le_prev = 0x0
}
},
td_map_def_user = 0x0,
td_dbg_forked = 0,
td_vp_reserv = 0,
td_no_sleeping = 0,
td_dom_rr_idx = 0,
td_sigmask = {
__bits = {0, 0, 0, 0}
},
td_rqindex = 21 '\025',
td_base_pri = 84 'T',
td_priority = 84 'T',
td_pri_class = 3 '\003',
td_user_pri = 172 '¬',
td_base_user_pri = 172 '¬',
td_pcb = 0xfffffe012060eb80,
td_state = TDS_INHIBITED,
td_retval = {0, 0},
td_slpcallout = {
c_links = {
le = {
le_next = 0x0,
le_prev = 0x0
},
sle = {
sle_next = 0x0
},
tqe = {
tqe_next = 0x0,
tqe_prev = 0x0
}
},
c_time = 0,
c_precision = 0,
c_arg = 0x0,
c_func = 0,
c_lock = 0x0,
c_flags = 16,
c_cpu = 0
},
td_frame = 0xfffffe012060eac0,
td_kstack_obj = 0xfffff8006314cd00,
td_kstack = 18446741879524470784,
td_kstack_pages = 4,
td_critnest = 1,
td_md = {
md_spinlock_count = 1,
md_saved_flags = 582,
md_spurflt_addr = 0
},
td_sched = 0xfffff8006314a468,
td_ar = 0x0,
td_lprof = {{
lh_first = 0x0
}, {
lh_first = 0x0
}},
td_dtrace = 0xfffff800b437ce00,
td_errno = 0,
td_vnet = 0x0,
td_vnet_lpush = 0x0,
td_intr_frame = 0x0,
td_rfppwait_p = 0x0,
td_ma = 0x0,
td_ma_cnt = 0
}
Interesting bits about the executing thread:

td_name = "zio_write_intr_7\000\000\000",
td_critnest = 1,
td_md = {
md_spinlock_count = 1,
md_saved_flags = 582,
md_spurflt_addr = 0
},

Unfortunately the executor is never cleared, so this is just the last thread
that happened to execute this I/O.

K. Macy

Oct 13, 2014, 2:44:05 PM
to Steven Hartland, Mark Martinec, freeb...@freebsd.org, FreeBSD Stable
>> A recent quick read of the code would lead me to believe that
>
> Yer I would have got the zio details but typically its "optimised out" by
> the
> compiler, so will need some effort to track that down unfortunately :(

FYI it isn't actually optimized out. The debug info accounting just
isn't very good about tracking the values in registers. You'll see in
my last mail that just looking at the assembler makes it pretty
obvious that zio was in %r14. HTH.

Cheers.


-K

Steven Hartland

Oct 13, 2014, 3:02:38 PM
to K. Macy, Mark Martinec, freeb...@freebsd.org, FreeBSD Stable
I believe there's some pool corruption here somewhere, as every once in a while
I trip an ASSERT panic:
panic: solaris assert: size >= SPA_MINBLOCKSIZE || range_tree_space(msp->ms_tree) == 0, file:
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c, line: 1636


#3 0xffffffff80607ed3 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:688
#4 0xffffffff8179321d in assfail (a=<value optimized out>, f=<value optimized out>, l=<value optimized out>) at
/usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:81
#5 0xffffffff81501ecf in metaslab_passivate (msp=0xfffff800091e5800, size=0) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c:1636
#6 0xffffffff81501ca7 in metaslab_group_alloc (mg=0xfffff80044ef7400, psize=512, asize=512, txg=11733518, min_distance=536281088,
dva=<value optimized out>, d=-1) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c:2206
#7 0xffffffff81500c9d in metaslab_alloc_dva (spa=0xfffffe00022bb000, mc=0xfffff800045d0c00, psize=512, dva=0xfffffe000e409640, d=0,
hintdva=<value optimized out>, flags=<value optimized out>)
at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c:2381
#8 0xffffffff81500709 in metaslab_alloc (spa=0xfffffe00022bb000, mc=0xfffff800045d0c00, psize=512, bp=0xfffffe000e409640, ndvas=3,
txg=11733518, hintbp=0x0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c:2586
#9 0xffffffff8154ca8a in zio_dva_allocate (zio=0xfffff80044a57398) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:2444
#10 0xffffffff81548d54 in zio_execute (zio=0xfffff80044a57398) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1407
#11 0xffffffff8154d25f in zio_ready (zio=0xfffff800487d2730) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:3031
#12 0xffffffff81548d54 in zio_execute (zio=0xfffff800487d2730) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1407
#13 0xffffffff80651410 in taskqueue_run_locked (queue=0xfffff80054291000) at /usr/src/sys/kern/subr_taskqueue.c:342
#14 0xffffffff80651dcb in taskqueue_thread_loop (arg=<value optimized out>) at /usr/src/sys/kern/subr_taskqueue.c:563


(kgdb) frame 5
#5 0xffffffff81501ecf in metaslab_passivate (msp=0xfffff800091e5800, size=0) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c:1636
1636 ASSERT(size >= SPA_MINBLOCKSIZE || range_tree_space(msp->ms_tree) == 0);
(kgdb) print size
$5 = 0
(kgdb) print msp->ms_tree
$6 = (range_tree_t *) 0xfffff800091e5400
(kgdb) print *msp->ms_tree
$7 = {
rt_root = {
avl_root = 0xfffff80048fab840,
avl_compar = 0xffffffff81502850 <range_tree_seg_compare>,
avl_offset = 0,
avl_numnodes = 1,
avl_size = 64
},
rt_space = 5632,
rt_ops = 0xffffffff8160b110,
rt_arg = 0xfffff800091e5800,
rt_histogram = {0 <repeats 12 times>, 1, 0 <repeats 51 times>},
rt_lock = 0xfffff800091e5800
}
(kgdb) print *msp
$8 = {
ms_lock = {
lock_object = {
lo_name = 0xffffffff815eb310 "msp->ms_lock",
lo_flags = 40960000,
lo_data = 0,
lo_witness = 0x0
},
sx_lock = 18446735279027798016
},
ms_load_cv = {
cv_description = 0xffffffff815eb31e "msp->ms_load_cv",
cv_waiters = 0
},
ms_sm = 0xfffff8000924bb80,
ms_ops = 0xffffffff8160b100,
ms_id = 119,
ms_start = 3992977408,
ms_size = 33554432,
ms_fragmentation = 90,
ms_alloctree = {0xfffff800091e5000, 0xfffff800091ec000, 0xfffff800091e3c00, 0xfffff800091e3400},
ms_freetree = {0xfffff800091e4c00, 0xfffff800091e4000, 0xfffff800091e4400, 0xfffff800091e3000},
ms_defertree = {0xfffff800091e3800, 0xfffff800091df400},
ms_tree = 0xfffff800091e5400,
ms_condensing = 0,
ms_condense_wanted = 0,
ms_loaded = 1,
ms_loading = 0,
ms_deferspace = 0,
ms_weight = 13835058055282163712,
ms_access_txg = 11733526,
ms_size_tree = {
avl_root = 0xfffff80048fab858,
avl_compar = 0xffffffff81502360 <metaslab_rangesize_compare>,
avl_offset = 24,
avl_numnodes = 1,
avl_size = 64
},
ms_lbas = {0, 0, 0, 0, 0, 0, 0, 0, 0, 4026524672, 4026530816, 0 <repeats 53 times>},
ms_group = 0xfffff80044ef7400,
ms_group_node = {
avl_child = {0x0, 0x0},
avl_pcb = 18446735278773074705
},
ms_txg_node = {
tn_next = {0x0, 0xfffff800091d7b28, 0xfffff800091e2328, 0x0},
tn_member = "\000\001\001"
}
}

Also when it stalls I've seen this in the zfs debug:-
13 39363 :zfs-dprintf dnode.c - dnode_free_range:1655: ds=mos obj=31 blkid=0 nblks=1125899906842624
txg=3078494
13 39363 :zfs-dprintf dbuf.c - dbuf_free_range:816: ds=mos obj=31 start=0 end=0
13 39363 :zfs-dprintf dbuf.c - dbuf_dirty:1133: ds=mos obj=40 lvl=0 blkid=-1 size=140
13 39363 :zfs-dprintf dnode.c - dnode_setdirty:1286: ds=mos obj=40 txg=3078494

nblks looks suspiciously large, and I'm not sure blkid should be -1?

For reference, to see this I'm using:
sysctl debug.zfs_flags=64
dtrace -n 'zfs-dprintf {printf("%s - %s:%d: %s", stringof(arg0), stringof(arg1), arg2, stringof(arg3))}'
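Regarding that nblks value: it is exactly a power of two, which smells more like a sentinel or a wrapped/negative length than a genuine block count. A quick sanity check (plain arithmetic only, not a diagnosis):

```python
# nblks from the zfs-dprintf trace above
nblks = 1125899906842624
assert nblks == 1 << 50  # exactly 2^50 -- suspiciously round for a real range
```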

With all that said the following looks like it might indicate the issue:
zdb -e -m sys1boot

Metaslabs:
vdev 0
metaslabs 127 offset spacemap free
--------------- ------------------- --------------- -------------
metaslab 0 offset 0 spacemap 417 free 0
metaslab 1 offset 2000000 spacemap 24 free 0
metaslab 2 offset 4000000 spacemap 64 free 0
metaslab 3 offset 6000000 spacemap 1094 free 0
metaslab 4 offset 8000000 spacemap 1091 free 0
metaslab 5 offset a000000 spacemap 1093 free 0
metaslab 6 offset c000000 spacemap 1095 free 0
metaslab 7 offset e000000 spacemap 1096 free 0
metaslab 8 offset 10000000 spacemap 1098 free 0
metaslab 9 offset 12000000 spacemap 1097 free 0
metaslab 10 offset 14000000 spacemap 1099 free 0
metaslab 11 offset 16000000 spacemap 1102 free 0
metaslab 12 offset 18000000 spacemap 113 free 0
metaslab 13 offset 1a000000 spacemap 110 free 0
metaslab 14 offset 1c000000 spacemap 115 free 0
metaslab 15 offset 1e000000 spacemap 1103 free 0
metaslab 16 offset 20000000 spacemap 416 free 0
metaslab 17 offset 22000000 spacemap 81 free 0
metaslab 18 offset 24000000 spacemap 57 free 0
metaslab 19 offset 26000000 spacemap 102 free 0
metaslab 20 offset 28000000 spacemap 100 free 0
metaslab 21 offset 2a000000 spacemap 104 free 0
metaslab 22 offset 2c000000 spacemap 125 free 0
metaslab 23 offset 2e000000 spacemap 105 free 0
metaslab 24 offset 30000000 spacemap 108 free 0
metaslab 25 offset 32000000 spacemap 126 free 0
metaslab 26 offset 34000000 spacemap 248 free 0
metaslab 27 offset 36000000 spacemap 55 free 0
metaslab 28 offset 38000000 spacemap 82 free 0
metaslab 29 offset 3a000000 spacemap 149 free 0
metaslab 30 offset 3c000000 spacemap 152 free 0
metaslab 31 offset 3e000000 spacemap 155 free 0
metaslab 32 offset 40000000 spacemap 156 free 0
metaslab 33 offset 42000000 spacemap 72 free 0
metaslab 34 offset 44000000 spacemap 96 free 0
metaslab 35 offset 46000000 spacemap 159 free 0
metaslab 36 offset 48000000 spacemap 158 free 0
metaslab 37 offset 4a000000 spacemap 160 free 0
metaslab 38 offset 4c000000 spacemap 1271 free 0
metaslab 39 offset 4e000000 spacemap 161 free 0
metaslab 40 offset 50000000 spacemap 1273 free 0
metaslab 41 offset 52000000 spacemap 1277 free 0
metaslab 42 offset 54000000 spacemap 35 free 0
metaslab 43 offset 56000000 spacemap 25 free 0
metaslab 44 offset 58000000 spacemap 32 free 0
metaslab 45 offset 5a000000 spacemap 150 free 0
metaslab 46 offset 5c000000 spacemap 151 free 0
metaslab 47 offset 5e000000 spacemap 163 free 0
metaslab 48 offset 60000000 spacemap 415 free 0
metaslab 49 offset 62000000 spacemap 98 free 0
metaslab 50 offset 64000000 spacemap 109 free 0
metaslab 51 offset 66000000 spacemap 171 free 0
metaslab 52 offset 68000000 spacemap 80 free 0
metaslab 53 offset 6a000000 spacemap 134 free 0
metaslab 54 offset 6c000000 spacemap 135 free 0
metaslab 55 offset 6e000000 spacemap 154 free 0
metaslab 56 offset 70000000 spacemap 140 free 0
metaslab 57 offset 72000000 spacemap 141 free 0
metaslab 58 offset 74000000 spacemap 1272 free 0
metaslab 59 offset 76000000 spacemap 138 free 0
metaslab 60 offset 78000000 spacemap 139 free 0
metaslab 61 offset 7a000000 spacemap 18 free 0
metaslab 62 offset 7c000000 spacemap 148 free 0
metaslab 63 offset 7e000000 spacemap 1270 free 0
metaslab 64 offset 80000000 spacemap 114 free 0
metaslab 65 offset 82000000 spacemap 112 free 0
metaslab 66 offset 84000000 spacemap 116 free 0
metaslab 67 offset 86000000 spacemap 164 free 0
metaslab 68 offset 88000000 spacemap 243 free 0
metaslab 69 offset 8a000000 spacemap 128 free 0
metaslab 70 offset 8c000000 spacemap 1101 free 0
metaslab 71 offset 8e000000 spacemap 153 free 0
metaslab 72 offset 90000000 spacemap 120 free 0
metaslab 73 offset 92000000 spacemap 62 free 0
metaslab 74 offset 94000000 spacemap 122 free 0
metaslab 75 offset 96000000 spacemap 131 free 0
metaslab 76 offset 98000000 spacemap 129 free 0
metaslab 77 offset 9a000000 spacemap 157 free 0
metaslab 78 offset 9c000000 spacemap 133 free 0
metaslab 79 offset 9e000000 spacemap 137 free 0
metaslab 80 offset a0000000 spacemap 165 free 0
metaslab 81 offset a2000000 spacemap 168 free 0
metaslab 82 offset a4000000 spacemap 170 free 0
metaslab 83 offset a6000000 spacemap 127 free 0
metaslab 84 offset a8000000 spacemap 180 free 0
metaslab 85 offset aa000000 spacemap 162 free 0
metaslab 86 offset ac000000 spacemap 1100 free 0
metaslab 87 offset ae000000 spacemap 107 free 0
metaslab 88 offset b0000000 spacemap 119 free 0
metaslab 89 offset b2000000 spacemap 61 free 0
metaslab 90 offset b4000000 spacemap 60 free 0
metaslab 91 offset b6000000 spacemap 123 free 0
metaslab 92 offset b8000000 spacemap 130 free 0
metaslab 93 offset ba000000 spacemap 59 free 0
metaslab 94 offset bc000000 spacemap 167 free 0
metaslab 95 offset be000000 spacemap 136 free 0
metaslab 96 offset c0000000 spacemap 144 free 0
metaslab 97 offset c2000000 spacemap 166 free 0
metaslab 98 offset c4000000 spacemap 169 free 0
metaslab 99 offset c6000000 spacemap 58 free 0
metaslab 100 offset c8000000 spacemap 56 free 0
metaslab 101 offset ca000000 spacemap 54 free 0
metaslab 102 offset cc000000 spacemap 53 free 0
metaslab 103 offset ce000000 spacemap 52 free 0
metaslab 104 offset d0000000 spacemap 106 free 0
metaslab 105 offset d2000000 spacemap 51 free 0
metaslab 106 offset d4000000 spacemap 50 free 0
metaslab 107 offset d6000000 spacemap 121 free 0
metaslab 108 offset d8000000 spacemap 124 free 0
metaslab 109 offset da000000 spacemap 49 free 0
metaslab 110 offset dc000000 spacemap 142 free 0
metaslab 111 offset de000000 spacemap 132 free 0
metaslab 112 offset e0000000 spacemap 48 free 0
metaslab 113 offset e2000000 spacemap 47 free 0
metaslab 114 offset e4000000 spacemap 46 free 0
metaslab 115 offset e6000000 spacemap 45 free 0
metaslab 116 offset e8000000 spacemap 143 free 0
metaslab 117 offset ea000000 spacemap 44 free 0
metaslab 118 offset ec000000 spacemap 43 free 177K
metaslab 119 offset ee000000 spacemap 42 free 13.0K
metaslab 120 offset f0000000 spacemap 41 free 0
metaslab 121 offset f2000000 spacemap 179 free 0
metaslab 122 offset f4000000 spacemap 40 free 0
metaslab 123 offset f6000000 spacemap 39 free 0
metaslab 124 offset f8000000 spacemap 38 free 0
metaslab 125 offset fa000000 spacemap 37 free 0
metaslab 126 offset fc000000 spacemap 36 free 0

Only two metaslabs with any free space?
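As a cross-check, the two nonzero free values in the metaslab dump sum to the pool-wide FREE figure that `zpool list` reports (190K), so the zdb view and the pool accounting at least agree with each other:

```python
# only metaslabs 118 and 119 show any free space in the zdb dump above
frees_kib = [177, 13.0]
assert sum(frees_kib) == 190.0  # matches FREE 190K from `zpool list`
```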

zfs list -r sys1boot
NAME USED AVAIL REFER MOUNTPOINT
sys1boot 1.76G 2.08G 11K /sys1boot
sys1boot/ROOT 1.72G 2.08G 1.20G /sys1boot/ROOT


NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
sys1boot 3.97G 3.97G 190K 0% - 99% 1.00x ONLINE -

So basically zpool says it's out of space even though zfs says it's under
half full?

Given this, I'm guessing that on import, when it goes to write an update, it
hits a brick wall and can't; with the zio being flagged as can't-fail, it
seems to get stuck.

Regards
Steve

K. Macy

Oct 13, 2014, 3:14:19 PM
to Steven Hartland, Mark Martinec, freeb...@freebsd.org, FreeBSD Stable
>>> Yer I would have got the zio details but typically its "optimised out" by
>>> the
>>> compiler, so will need some effort to track that down unfortunately :(
>>>
>>
>> Well, let me know if you can. Re-creating a new 10.x VM is taking a while
>> as it's taking me forever to checkout the sources.
>>
>> Things like that need to somehow continue to be accessible.
>
>
> I believe there's some pool corruption here somewhere as every once in a
> while
> I trip and ASSERT panic:
> panic: solaris assert: size >= SPA_MINBLOCKSIZE ||
> range_tree_space(msp->ms_tree) == 0, file:
> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c,
> line: 1636
>


<... snip>

You are correct.

(kgdb) p ((zio_t *)$r14)->io_reexecute
$32 = 2 '\002'
(kgdb) p ((zio_t *)$r14)->io_flags
$33 = 0
(kgdb) p ((zio_t *)$r14)->io_spa->spa_suspended
$34 = 1 '\001'

This means zio_suspend has been called from zio_done:
else if (zio->io_reexecute & ZIO_REEXECUTE_SUSPEND) {
/*
* We'd fail again if we reexecuted now, so suspend
* until conditions improve (e.g. device comes online).
*/
zio_suspend(spa, zio);
}

If failure mode were panic we would have panicked when attempting the import:
void
zio_suspend(spa_t *spa, zio_t *zio)
{
if (spa_get_failmode(spa) == ZIO_FAILURE_MODE_PANIC)
fm_panic("Pool '%s' has encountered an uncorrectable I/O "
"failure and the failure mode property for this pool "
"is set to panic.", spa_name(spa));

Steven Hartland

Oct 13, 2014, 3:14:55 PM
to K. Macy, Mark Martinec, freeb...@freebsd.org, FreeBSD Stable

----- Original Message -----
From: "K. Macy" <km...@freebsd.org>
To: "Steven Hartland" <kil...@multiplay.co.uk>
Cc: "Mark Martinec" <Mark.Marti...@ijs.si>; "freeb...@FreeBSD.org" <freeb...@freebsd.org>; "FreeBSD Stable"
<freebsd...@freebsd.org>
Sent: Monday, October 13, 2014 7:43 PM
Subject: Re: zfs pool import hangs on [tx->tx_sync_done_cv]


Steven Hartland

Oct 13, 2014, 4:10:40 PM
to K. Macy, Mark Martinec, freeb...@freebsd.org, FreeBSD Stable
----- Original Message -----
From: "K. Macy" <km...@freebsd.org>
> You are correct.
>
> (kgdb) p ((zio_t *)$r14)->io_reexecute
> $32 = 2 '\002'
> (kgdb) p ((zio_t *)$r14)->io_flags
> $33 = 0
> (kgdb) p ((zio_t *)$r14)->io_spa->spa_suspended
> $34 = 1 '\001'
>
> This means zio_suspend has been called from zio_done:
> else if (zio->io_reexecute & ZIO_REEXECUTE_SUSPEND) {
> /*
> * We'd fail again if we reexecuted now, so suspend
> * until conditions improve (e.g. device comes online).
> */
> zio_suspend(spa, zio);
> }
>
> If failure mode were panic we would have panicked when attempting the import:
> void
> zio_suspend(spa_t *spa, zio_t *zio)
> {
> if (spa_get_failmode(spa) == ZIO_FAILURE_MODE_PANIC)
> fm_panic("Pool '%s' has encountered an uncorrectable I/O "
> "failure and the failure mode property for this pool "
> "is set to panic.", spa_name(spa));

Yep and forcing that panic I got the following stack:

#0 doadump (textdump=1) at pcpu.h:219
#1 0xffffffff80607977 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:452
#2 0xffffffff80607e85 in vpanic (fmt=<value optimized out>, ap=<value optimized out>) at /usr/src/sys/kern/kern_shutdown.c:759
#3 0xffffffff80607ed3 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:688
#4 0xffffffff81548dfa in zio_suspend (spa=<value optimized out>, zio=<value optimized out>) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1527
#5 0xffffffff8154ec66 in zio_done (zio=<value optimized out>) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:3264
#6 0xffffffff81548d54 in zio_execute (zio=0xfffff80044a0dac8) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1407
#7 0xffffffff8154ebfc in zio_done (zio=0xfffff8004884b398) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:3258
#8 0xffffffff81548d54 in zio_execute (zio=0xfffff8004884b398) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1407
#9 0xffffffff8154ebfc in zio_done (zio=0xfffff80044c0a000) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:3258
#10 0xffffffff81548d54 in zio_execute (zio=0xfffff80044c0a000) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1407
#11 0xffffffff8154ebfc in zio_done (zio=0xfffff80044a2fac8) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:3258
#12 0xffffffff81548d54 in zio_execute (zio=0xfffff80044a2fac8) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1407
#13 0xffffffff8154ebfc in zio_done (zio=0xfffff80044853398) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:3258
#14 0xffffffff81548d54 in zio_execute (zio=0xfffff80044853398) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1407
#15 0xffffffff8154ea2a in zio_done (zio=0xfffff8004877e398) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:3313
#16 0xffffffff81548d54 in zio_execute (zio=0xfffff8004877e398) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1407
#17 0xffffffff8154ea2a in zio_done (zio=0xfffff80044cb0730) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:3313
#18 0xffffffff81548d54 in zio_execute (zio=0xfffff80044cb0730) at
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1407
#19 0xffffffff80651410 in taskqueue_run_locked (queue=0xfffff800488cf400) at /usr/src/sys/kern/subr_taskqueue.c:342
#20 0xffffffff80651dcb in taskqueue_thread_loop (arg=<value optimized out>) at /usr/src/sys/kern/subr_taskqueue.c:563

Along with:
(kgdb) print (*(zio_t *)0xfffff80044853398)->io_error
$20 = 28
(kgdb) print (*(zio_t *)0xfffff80044a2fac8)->io_error
$21 = 28

grep 28 /usr/include/sys/errno.h
#define ENOSPC 28 /* No space left on device */

So the issue is simply that the pool is out of space to perform the import,
as that process, when not read-only, requires space to write to the pool.

The problem with that is that during this process it holds the pool lock, so
any subsequent zpool actions are dead in the water, as they will block
waiting on that lock.

Something to discuss with the OpenZFS guys, but I would say the import
should fail with a no-space error.

So Mark, the mystery is solved: when you upgraded you ran the pool so low
on space that it now can't be imported RW, as that requires a write.
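Given that diagnosis, one possible recovery path (a sketch only, untested against this exact pool; the pool name and mountpoint are examples) is to import the pool read-only, which performs no writes and so cannot hit the ENOSPC-induced suspend, then copy the data off:

```shell
# Read-only import avoids the write that suspends the pool on ENOSPC.
zpool import -o readonly=on -f -R /mnt/rescue sys1boot
# Then e.g. zfs send the datasets to a larger pool before recreating this one.
```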

Regards
Steve

mark

Oct 13, 2014, 4:56:02 PM
to freeb...@freebsd.org, freebsd...@freebsd.org, FreeBSD Stable
On 10/13/2014 22:10, Steven Hartland wrote:
> So the issue is simply the pool is out of space to perform the import
> as that process, when not readonly, requires space to write to the pool.
>
> The problem with that is that during this process it has the pool lock so
> any subsequent zpool actions are dead in the water as they will block
> waiting on that lock.
>
> Something to discuss with the openzfs guys, but I would say the import
> should fail with a no space error.
>
> So Mark the mystery is solved, when you upgraded you ran the pool so low
> on space that it now can't be imported RW as that requires a write.
>
> Regards
> Steve

Thank you both for analysis and effort!

I can't rule out the possibility that my main system pool
on a SSD was low on space at some point in time, but the
three 4 GiB cloned pools (sys1boot and its brothers) were all
created as a zfs send / receive copies of the main / (root)
file system and I haven't noticed anything unusual during
syncing. This syncing was done manually (using zxfer) and
independently from the upgrade on the system - on a steady/quiet
system, when the source file system definitely had sufficient
free space.

The source file system now shows 1.2 GiB of usage shown
by df:
shiny/ROOT 61758388 1271620 60486768 2% /
Seems unlikely that the 1.2 GiB has grown to 4 GiB space
on a cloned filesystem.

Will try to import the main two pools after re-creating
a sane boot pool...

Mark

Steven Hartland

Oct 13, 2014, 5:57:24 PM
to mark, freeb...@freebsd.org, freebsd...@freebsd.org, FreeBSD Stable
Yer zfs list only shows around 2-3GB used too but zpool list
shows the pool is out of space. Can't rule out an accounting
issue though.

Regards
Steve

K. Macy

Oct 13, 2014, 9:16:23 PM
to Steven Hartland, freeb...@freebsd.org, FreeBSD Stable, mark
>> Thank you both for analysis and effort!
>>
>> I can't rule out the possibility that my main system pool
>> on a SSD was low on space at some point in time, but the
>> three 4 GiB cloned pools (sys1boot and its brothers) were all
>> created as a zfs send / receive copies of the main / (root)
>> file system and I haven't noticed anything unusual during
>> syncing. This syncing was done manually (using zxfer) and
>> independently from the upgrade on the system - on a steady/quiet
>> system, when the source file system definitely had sufficient
>> free space.
>>
>> The source file system now shows 1.2 GiB of usage shown
>> by df:
>> shiny/ROOT 61758388 1271620 60486768 2% /
>> Seems unlikely that the 1.2 GiB has grown to 4 GiB space
>> on a cloned filesystem.
>>
>> Will try to import the main two pools after re-creating
>> a sane boot pool...
>
>
> Yer zfs list only shows around 2-3GB used too but zpool list
> shows the pool is out of space. Cant rule out an accounting
> issue though.
>

What is using the extra space in the pool? Is there an unmounted
dataset or snapshot? Do you know how to easily tell? Unlike txg and
zio processing I don't have the luxury of having just read that part
of the codebase.

Thanks.

Mark Martinec

Oct 13, 2014, 9:24:43 PM
to freebsd...@freebsd.org, freeb...@freebsd.org
On 10/14/2014 03:15, K. Macy wrote:
> What is using the extra space in the pool? Is there an unmounted
> dataset or snapshot? Do you know how to easily tell? Unlike txg and
> zio processing I don't have the luxury of having just read that part
> of the codebase.

Most likely the snapshots (regular periodic snapshots).
Changes after upgrading an OS can maybe take an additional 50%
of space (just guessing). Btw, ashift=12.
Still can't see how that would amount to 4 GiB, but it's possible.
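For what it's worth, ashift sets the minimum allocation size (2^ashift bytes), so a pool holding many small blocks can allocate far more than its logical size. This toy arithmetic (illustrative only, not ZFS's real accounting) shows the worst case for 512-byte blocks:

```python
def allocated(lsize, ashift):
    """Round a logical size up to the vdev's minimum allocation of 2**ashift bytes."""
    sector = 1 << ashift
    return -(-lsize // sector) * sector  # ceiling division, then scale back up

assert allocated(512, 12) == 4096  # a 512-byte block costs 4 KiB at ashift=12 (8x)
assert allocated(512, 9) == 512    # no inflation at ashift=9
```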

Mark

K. Macy

Oct 13, 2014, 9:32:48 PM
to Mark Martinec, freeb...@freebsd.org, FreeBSD Stable
On Mon, Oct 13, 2014 at 6:24 PM, Mark Martinec
<Mark.Marti...@ijs.si> wrote:
> On 10/14/2014 03:15, K. Macy wrote:
>>
>> What is using the extra space in the pool? Is there an unmounted
>> dataset or snapshot? Do you know how to easily tell? Unlike txg and
>> zio processing I don't have the luxury of having just read that part
>> of the codebase.
>
>
> Most likely the snapshots (regular periodic snapshots).
> Changes after upgrading an OS can maybe take an additional 50%
> of space (just guessing). Btw, ashift=12.
> Still can't see how that would amount to 4 GiB, but it's possible.
>

Disconcerting. Is this something that others are likely to hit? Should
accounting for writes fail with ENOSPC a bit earlier so that we never
reach a state like this? I.e. non-metadata writes would fail at a lower
threshold than metadata, or if that is already the case, reduce the
threshold further.
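The early-failure threshold being proposed can be sketched as follows (a toy model only, not ZFS's actual accounting; ZFS's real mechanism is its reserved "slop" space, which serves the same purpose of keeping room for metadata and pool operations):

```python
def can_allocate(alloc, size, nbytes, is_metadata, slop_frac=1 / 32):
    """Toy admission check: user data stops short of a slop reserve,
    while metadata needed to complete a txg may still dip into it."""
    slop = size * slop_frac
    limit = size if is_metadata else size - slop
    return alloc + nbytes <= limit

size = 4 << 30               # a 4 GiB pool, like sys1boot
alloc = size - (64 << 20)    # only 64 MiB left
assert not can_allocate(alloc, size, 1 << 20, is_metadata=False)  # data refused
assert can_allocate(alloc, size, 1 << 20, is_metadata=True)       # metadata OK
```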

-K

Steven Hartland

Oct 14, 2014, 4:15:27 AM
to K. Macy, freeb...@freebsd.org, mark, FreeBSD Stable
It's not clear, but I believe it could just be fragmentation even though
it's ashift=9.

I sent the last snapshot to another pool of the same size and it
resulted in:
NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
sys1boot 3.97G 3.97G 190K 0% - 99% 1.00x ONLINE -
sys1copy 3.97G 3.47G 512M 72% - 87% 1.00x ONLINE -

I believe FRAG is 0% as the feature wasn't enabled for the lifetime of
the pool, hence it's simply not showing a valid value.

zfs list -t all -r sys1boot
NAME USED AVAIL REFER MOUNTPOINT
sys1boot 1.76G 2.08G 11K /sys1boot
sys1boot/ROOT 1.72G 2.08G 1.20G /sys1boot/ROOT
sys1boot/ROOT@auto-2014-08-16_04.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-08-17_04.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-08-19_04.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-08-20_04.00 84.5K - 1.19G -
sys1boot/ROOT@auto-2014-08-21_04.00 85.5K - 1.19G -
sys1boot/ROOT@auto-2014-08-22_04.00 85.5K - 1.19G -
sys1boot/ROOT@auto-2014-08-23_04.00 85.5K - 1.19G -
sys1boot/ROOT@auto-2014-08-24_04.00 85.5K - 1.19G -
sys1boot/ROOT@auto-2014-08-26_04.00 85.5K - 1.19G -
sys1boot/ROOT@auto-2014-08-27_04.00 84.5K - 1.19G -
sys1boot/ROOT@auto-2014-08-28_04.00 84.5K - 1.19G -
sys1boot/ROOT@auto-2014-08-29_04.00 128K - 1.19G -
sys1boot/ROOT@auto-2014-08-31_04.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-01_04.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-02_04.00 84.5K - 1.19G -
sys1boot/ROOT@auto-2014-09-03_04.00 85.5K - 1.19G -
sys1boot/ROOT@auto-2014-09-04_04.00 85.5K - 1.19G -
sys1boot/ROOT@auto-2014-09-05_04.00 84.5K - 1.19G -
sys1boot/ROOT@auto-2014-09-07_04.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-08_04.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-09_04.00 84.5K - 1.19G -
sys1boot/ROOT@auto-2014-09-10_00.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-10_04.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-10_08.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-10_12.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-10_16.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-10_20.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-11_00.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-11_04.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-11_08.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-11_12.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-11_16.00 84.5K - 1.19G -
sys1boot/ROOT@auto-2014-09-11_20.00 84.5K - 1.19G -
sys1boot/ROOT@auto-2014-09-12_00.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-12_04.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-12_08.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-12_12.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-12_16.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-12_20.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-13_00.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-13_04.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-13_08.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-13_12.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-13_16.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-13_20.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-14_00.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-14_04.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-14_08.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-14_12.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-14_16.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-14_20.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-15_00.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-15_04.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-15_08.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-15_12.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-15_16.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-15_20.00 85.5K - 1.19G -
sys1boot/ROOT@auto-2014-09-16_00.00 85.5K - 1.19G -
sys1boot/ROOT@auto-2014-09-16_04.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-16_08.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-16_12.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-16_16.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-16_20.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-17_00.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-17_04.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-17_08.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-17_12.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-17_16.00 85.5K - 1.19G -
sys1boot/ROOT@auto-2014-09-17_20.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-18_00.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-18_04.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-18_08.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-18_12.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-18_16.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-18_20.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-18_23.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-19_00.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-19_01.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-19_02.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-19_03.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-19_04.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-19_05.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-19_06.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-19_07.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-19_08.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-19_09.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-19_10.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-19_11.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-19_12.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-19_13.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-19_14.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-19_15.00 1K - 1.19G -
sys1boot/ROOT@auto-2014-09-19_16.00 85.5K - 1.19G -
sys1boot/ROOT@auto-2014-09-19_17.00 85.5K - 1.19G -
sys1boot/ROOT@auto-2014-09-19_18.00 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_18.40 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_18.50 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_19.00 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_19.10 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_19.20 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_19.30 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_19.40 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_19.50 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_20.00 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_20.10 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_20.20 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_20.30 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_20.40 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_20.50 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_21.00 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_21.10 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_21.20 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_21.30 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_21.40 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_21.50 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_22.00 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_22.10 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_22.20 1K - 1.20G -
sys1boot/ROOT@auto-2014-09-19_22.30 0 - 1.20G -

Regards
Steve

Steven Hartland

Oct 14, 2014, 4:25:31 AM
to K. Macy, Mark Martinec, freeb...@freebsd.org, FreeBSD Stable
----- Original Message -----
From: "K. Macy" <km...@freebsd.org>


> On Mon, Oct 13, 2014 at 6:24 PM, Mark Martinec
> <Mark.Marti...@ijs.si> wrote:
>> On 10/14/2014 03:15, K. Macy wrote:
>>>
>>> What is using the extra space in the pool? Is there an unmounted
>>> dataset or snapshot? Do you know how to easily tell? Unlike txg and
>>> zio processing I don't have the luxury of having just read that part
>>> of the codebase.
>>
>>
>> Most likely the snapshots (regular periodic snapshots).
>> Changes after upgrading an OS can maybe take an additional 50%
>> of space (just guessing). Btw, ashift=12.
>> Still can't see how that would amount to 4 GiB, but it's possible.
>>
>
> Disconcerting. Is this something that others are likely to hit? Should
> accounting for writes fail with ENOSPC a bit earlier so that we never
> reach a state like this? I.e. non-metadata writes will fail at a lower
> threshold than data or if that is already the case, reduce the
> threshold further.

I thought I remembered seeing some recent changes in this area, but I can't
find them ATM.

Something to raise on the openzfs list.
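The question quoted above — should writes fail with ENOSPC earlier so a pool never gets wedged this way? — is essentially what later OpenZFS code answers with "slop space": a sliver of the pool held back from user writes so metadata updates (and operations like import and destroy) always have room. A minimal sketch of that reservation, assuming the spa_slop_shift = 5 and 128 MiB floor found in later OpenZFS sources (illustrative constants, not authoritative):

```shell
#!/bin/sh
# Sketch of the OpenZFS "slop space" reservation: user writes start
# failing with ENOSPC while this much space is still free, so that
# metadata updates and pool-level operations keep working.
# Assumed constants: spa_slop_shift = 5, SPA_MINSLOP = 128 MiB.
pool_size=$((4 * 1024 * 1024 * 1024))   # a 4 GiB pool like sys1boot
slop=$((pool_size >> 5))                # ~1/32 (~3.1%) of the pool
min_slop=$((128 * 1024 * 1024))         # floor of 128 MiB
[ "$slop" -lt "$min_slop" ] && slop=$min_slop
echo "$((slop / 1024 / 1024)) MiB held back from user writes"
```

For a 4 GiB pool the shift and the floor coincide at 128 MiB; only pools above 4 GiB reserve more than the floor.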

Regards
Steve

Steven Hartland

Oct 14, 2014, 7:20:38 AM
to Steven Hartland, K. Macy, freeb...@freebsd.org, FreeBSD Stable, mark

----- Original Message -----
From: "Steven Hartland" <kil...@multiplay.co.uk>
To: "K. Macy" <km...@freebsd.org>
Cc: "freeb...@FreeBSD.org" <freeb...@freebsd.org>; "mark" <Mark.M...@ijs.si>; "FreeBSD Stable" <freebsd...@freebsd.org>
Sent: Tuesday, October 14, 2014 9:14 AM
Subject: Re: zpool import hangs when out of space - Was: zfs pool import hangs on [tx->tx_sync_done_cv]



Well, interesting issue: I left this pool alone this morning, literally doing
nothing, and it's now out of space.
zpool list
NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
sys1boot 3.97G 3.97G 190K 0% - 99% 1.00x ONLINE -
sys1copy 3.97G 3.97G 8K 0% - 99% 1.00x ONLINE -

There's something very wrong here as nothing has been accessing the pool.

pool: zfs
state: ONLINE
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
see: http://illumos.org/msg/ZFS-8000-HC
scan: none requested
config:

NAME STATE READ WRITE CKSUM
zfs ONLINE 0 2 0
md1 ONLINE 0 0 0

I tried destroying the pool and even that failed, presumably because
the pool has suspended IO.
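For reference, a hedged sketch of the usual recovery path for a pool suspended on I/O failures (status ZFS-8000-HC), like the "zfs" pool above. The zpool commands are shown as comments; the failmode value is simulated here:

```shell
#!/bin/sh
# Recovery sketch for a suspended pool. On a real system:
#   zpool get -H -o value failmode zfs   # default "wait" suspends I/O
#   zpool clear zfs                      # retry I/O once devices return
#   zpool import -o readonly=on -f zfs   # last resort after a reboot
failmode="wait"   # simulated; read it with zpool get on a real system
case "$failmode" in
  wait)     echo "all I/O suspended until 'zpool clear' succeeds" ;;
  continue) echo "new writes fail with EIO; reads continue if possible" ;;
  panic)    echo "kernel panics on fatal pool I/O errors" ;;
esac
```

With failmode=wait even "zpool destroy" can block behind the suspended I/O, which matches the failed destroy described above.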

Mark Martinec

Oct 14, 2014, 7:27:21 AM
to freeb...@freebsd.org, freebsd...@freebsd.org
On 10/14/2014 10:14, Steven Hartland wrote:

>>> Yer zfs list only shows around 2-3GB used too but zpool list
>>> shows the pool is out of space. Can't rule out an accounting
>>> issue though.
>>
>> What is using the extra space in the pool? Is there an unmounted
>> dataset or snapshot? Do you know how to easily tell? Unlike txg and
>> zio processing I don't have the luxury of having just read that part
>> of the codebase.
>
> It's not clear, but I believe it could just be fragmentation, even though
> it's ashift=9.
>
> I sent the last snapshot to another pool of the same size and it
> resulted in:
> NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH
> ALTROOT
> sys1boot 3.97G 3.97G 190K 0% - 99% 1.00x ONLINE -
> sys1copy 3.97G 3.47G 512M 72% - 87% 1.00x ONLINE -

Yes, that's it! Fragmentation.

> I believe FRAG is 0% as the feature wasn't enabled for the lifetime of
> the pool, hence it's simply not showing a valid value.

Indeed. The pool has a long lifetime and the feature was only recently
made available.
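One way to tell whether FRAG is trustworthy on a given pool is to check the spacemap_histogram feature state: the fragmentation figure is derived from per-metaslab space map histograms that ZFS only starts recording once that feature is active. A simulated sketch (the real query is in the comment):

```shell
#!/bin/sh
# FRAG needs the spacemap_histogram pool feature to be active before the
# column means anything; on a pool predating the feature it is invalid.
# Real query:
#   zpool get -H -o value feature@spacemap_histogram <pool>
state="enabled"   # simulated: disabled | enabled | active
if [ "$state" = "active" ]; then
  echo "FRAG reflects real fragmentation data"
else
  echo "FRAG not meaningful yet (feature state: $state)"
fi
```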

> zfs list -t all -r sys1boot
> NAME USED AVAIL REFER MOUNTPOINT
> sys1boot 1.76G 2.08G 11K /sys1boot
> sys1boot/ROOT 1.72G 2.08G 1.20G /sys1boot/ROOT
> sys1boot/ROOT@auto-2014-08-16_04.00 1K - 1.19G -
> sys1boot/ROOT@auto-2014-08-17_04.00 1K - 1.19G -
> sys1boot/ROOT@auto-2014-08-19_04.00 1K - 1.19G -
> [...]
> sys1boot/ROOT@auto-2014-09-19_22.20 1K - 1.20G -
> sys1boot/ROOT@auto-2014-09-19_22.30 0 - 1.20G -

So snapshots were not consuming much, it was fragmentation.

Thanks!
Mark
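The conclusion above (snapshots were not consuming much) can be checked directly: each snapshot's USED column counts only blocks unique to that snapshot, so summing them bounds the snapshot overhead, and "zfs get usedbysnapshots <dataset>" reports the combined figure in one go. A small sketch, with sample rows standing in for live output:

```shell
#!/bin/sh
# Sum per-snapshot USED values. On a live system the input would come
# from (the -p flag, where available, prints exact byte counts):
#   zfs list -Hp -t snapshot -o used -r sys1boot
printf '1024\n1024\n87552\n0\n' |
awk '{ sum += $1 } END { printf "%d bytes held only by snapshots\n", sum }'
```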

Mark Martinec

Oct 14, 2014, 7:30:50 AM
to freeb...@freebsd.org, freebsd...@freebsd.org
On 10/14/2014 13:19, Steven Hartland wrote:
> Well, interesting issue: I left this pool alone this morning, literally doing
> nothing, and it's now out of space.
> zpool list
> NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH
> ALTROOT
> sys1boot 3.97G 3.97G 190K 0% - 99% 1.00x ONLINE -
> sys1copy 3.97G 3.97G 8K 0% - 99% 1.00x ONLINE -
>
> There's something very wrong here as nothing has been accessing the pool.
>
> pool: zfs
> state: ONLINE
> status: One or more devices are faulted in response to IO failures.
> action: Make sure the affected devices are connected, then run 'zpool
> clear'.
> see: http://illumos.org/msg/ZFS-8000-HC
> scan: none requested
> config:
>
> NAME STATE READ WRITE CKSUM
> zfs ONLINE 0 2 0
> md1 ONLINE 0 0 0
>
> I tried destroying the pool and even that failed, presumably because
> the pool has suspended IO.

That's exactly how trouble started here. Got the
"One or more devices are faulted in response to IO failures"
on all three small cloned boot pools one day, out of the blue.
There was no activity there, except for periodic snapshoting
every 10 minutes.

Mark
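The 10-minute periodic snapshotting Mark mentions could be driven by a cron job along these lines. The dataset name, script path, and schedule are assumptions; the name format matches the auto-YYYY-MM-DD_HH.MM snapshots in the listings earlier in the thread:

```shell
#!/bin/sh
# Hypothetical periodic snapshot script, e.g. installed as
# /usr/local/sbin/autosnap.sh with a crontab entry:
#   */10 * * * * /usr/local/sbin/autosnap.sh sys1boot/ROOT
ds="${1:-sys1boot/ROOT}"
name="auto-$(date +%Y-%m-%d_%H.%M)"
echo "would run: zfs snapshot ${ds}@${name}"
```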

Steven Hartland

Oct 14, 2014, 7:41:10 AM
to Mark Martinec, freeb...@freebsd.org, freebsd...@freebsd.org
Yer, this isn't fragmentation, this is something else. I've started a
thread on the openzfs list to discuss this, as there's something quite
odd going on.

Regards
Steve

Steven Hartland

Oct 15, 2014, 12:52:37 AM
to Steven Hartland, Mark Martinec, freeb...@freebsd.org, freebsd...@freebsd.org

----- Original Message -----
From: "Steven Hartland" <kil...@multiplay.co.uk>
To: "Mark Martinec" <Mark.Marti...@ijs.si>; <freeb...@freebsd.org>; <freebsd...@freebsd.org>
Sent: Tuesday, October 14, 2014 12:40 PM
Subject: Re: zpool import hangs when out of space - Was: zfs pool import hangs on [tx->tx_sync_done_cv]


After bisecting the kernel versions in stable/10 the problem commit
appears to be:
https://svnweb.freebsd.org/base?view=revision&revision=268650

Removing it or using a pool without async_destroy enabled prevents
the leak.
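As a quick triage sketch: a pool on which async_destroy was never enabled is not exposed, while one where it is enabled or active wants a kernel with the fix. The feature state is simulated below; the real query is in the comment:

```shell
#!/bin/sh
# Classify a pool's exposure to the space leak bisected to r268650
# (fixed in r273158), based on the async_destroy feature state:
#   zpool get -H -o value feature@async_destroy <pool>
state="active"   # simulated: disabled | enabled | active
case "$state" in
  disabled)       echo "async_destroy off: not exposed to the leak" ;;
  enabled|active) echo "async_destroy on: run a kernel with the fix" ;;
  *)              echo "unexpected state: $state" ;;
esac
```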

More debugging tomorrow.

Regards
steve

Steven Hartland

Oct 15, 2014, 10:56:46 PM
to Mark Martinec, freeb...@freebsd.org, freebsd...@freebsd.org
Fix for this has now been committed:
https://svnweb.freebsd.org/changeset/base/273158

I'm already talking with re@ to get this into the 10.1 release.

Regards
Steve

Marcus Reid

Oct 16, 2014, 3:55:47 AM
to Steven Hartland, Mark Martinec, freeb...@freebsd.org, freebsd...@freebsd.org
On Thu, Oct 16, 2014 at 03:56:23AM +0100, Steven Hartland wrote:
> Fix for this has now been committed:
> https://svnweb.freebsd.org/changeset/base/273158
>
> I'm already talking with re@ to get this in to the 10.1 release.

Thank you for that. I looked at your thread on the illumos zfs list,
and from what I gather, if you aren't wedged into a state where you have
to import read-only, you don't have to worry about leaked data in your
pool, correct?

I always have a small number of 'deferred free' blocks, somewhere
between 8 and 10:

9 108K 15.5K 108K 12.0K 6.97 0.00 deferred free

Also, if you run 'zdb -bb <pool>' on a live pool, you can get a bunch
of:

leaked space: vdev 0, offset 0x16464c2000, size 1048576
...

and then:

block traversal size 14586265600 != alloc 14667571200 (leaked 81305600)

which I believe is normal and unrelated.

Marcus
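Putting Marcus's zdb figures in perspective: on a live (imported) pool, blocks freed or allocated while "zdb -bb" traverses show up as bogus "leaked" space, so a discrepancy of a fraction of a percent is expected noise. A quick sketch using the numbers above:

```shell
#!/bin/sh
# Quantify the apparent leak reported by zdb -bb on a live pool.
awk 'BEGIN {
    traversed = 14586265600   # "block traversal size" from zdb -bb
    alloc     = 14667571200   # "alloc" from zdb -bb
    printf "%d bytes (%.2f%%) apparently leaked\n", alloc - traversed, 100 * (alloc - traversed) / alloc
}'
```

About half a percent here, consistent with traversal noise rather than a real leak.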

Steven Hartland

Oct 16, 2014, 4:51:16 AM
to Marcus Reid, Mark Martinec, freeb...@freebsd.org, freebsd...@freebsd.org
----- Original Message -----
From: "Marcus Reid" <mar...@blazingdot.com>


> On Thu, Oct 16, 2014 at 03:56:23AM +0100, Steven Hartland wrote:
>> Fix for this has now been committed:
>> https://svnweb.freebsd.org/changeset/base/273158
>>
>> I'm already talking with re@ to get this in to the 10.1 release.
>
> Thank you for that. I looked at your thread on the illumos zfs list,
> and from what I gather, if you aren't wedged into a state where you have
> to import read-only, you don't have to worry about leaked data in your
> pool, correct?
>
> I always have a small number of 'deferred free' blocks that's always
> somewhere between 8 and 10:
>
> 9 108K 15.5K 108K 12.0K 6.97 0.00 deferred free
>
> Also, if you run 'zdb -bb <pool>' on a live pool, you can get a bunch
> of:
>
> leaked space: vdev 0, offset 0x16464c2000, size 1048576
> ...
>
> and then:
>
> block traversal size 14586265600 != alloc 14667571200 (leaked 81305600)
>
> which I believe is normal and unrelated.

Yep, that's normal.

Regards
Steve