
panic: spin lock held too long (RELENG_8 from today)


Mike Tancsa

Jul 7, 2011, 1:59:07 AM
to FreeBSD-STABLE Mailing List
I did a buildworld on this box to bring it up to RELENG_8 for the BIND
fixes. Unfortunately, the formerly solid box (April 13th kernel)
panic'd tonight with

Unread portion of the kernel message buffer:
spin lock 0xc0b1d200 (sched lock 1) held by 0xc5dac8a0 (tid 100107) too long
panic: spin lock held too long
cpuid = 0
Uptime: 13h30m4s
Physical memory: 2035 MB


It's a somewhat busy box, taking in mail as well as backups for a few
servers over NFS. At the time, it would have been receiving about 250Mb/s
inbound on its gigabit interface. Full core.txt file at

http://www.tancsa.com/core-jul8-2011.txt


#0 doadump () at pcpu.h:231
231 pcpu.h: No such file or directory.
in pcpu.h
(kgdb) #0 doadump () at pcpu.h:231
#1 0xc06fd6d3 in boot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:429
#2 0xc06fd937 in panic (fmt=Variable "fmt" is not available.
) at /usr/src/sys/kern/kern_shutdown.c:602
#3 0xc06ed95f in _mtx_lock_spin_failed (m=0x0)
at /usr/src/sys/kern/kern_mutex.c:490
#4 0xc06ed9e5 in _mtx_lock_spin (m=0xc0b1d200, tid=3312388992, opts=0,
file=0x0, line=0) at /usr/src/sys/kern/kern_mutex.c:526
#5 0xc0720254 in sched_add (td=0xc5dac5c0, flags=0)
at /usr/src/sys/kern/sched_ule.c:1119
#6 0xc07203f9 in sched_wakeup (td=0xc5dac5c0)
at /usr/src/sys/kern/sched_ule.c:1950
#7 0xc07061f8 in setrunnable (td=0xc5dac5c0)
at /usr/src/sys/kern/kern_synch.c:499
#8 0xc07362af in sleepq_resume_thread (sq=0xca0da300, td=0xc5dac5c0,
pri=Variable "pri" is not available.
)
at /usr/src/sys/kern/subr_sleepqueue.c:751
#9 0xc0736e18 in sleepq_signal (wchan=0xc5fafe50, flags=1, pri=0, queue=0)
at /usr/src/sys/kern/subr_sleepqueue.c:825
#10 0xc06b6764 in cv_signal (cvp=0xc5fafe50)
at /usr/src/sys/kern/kern_condvar.c:422
#11 0xc08eaa0d in xprt_assignthread (xprt=Variable "xprt" is not available.
) at /usr/src/sys/rpc/svc.c:342
#12 0xc08ec502 in xprt_active (xprt=0xc95d9600) at
/usr/src/sys/rpc/svc.c:378
#13 0xc08ee051 in svc_vc_soupcall (so=0xc6372ce0, arg=0xc95d9600,
waitflag=1)
at /usr/src/sys/rpc/svc_vc.c:747
#14 0xc075bbb1 in sowakeup (so=0xc6372ce0, sb=0xc6372d34)
at /usr/src/sys/kern/uipc_sockbuf.c:191
#15 0xc08447bc in tcp_do_segment (m=0xcaa8d200, th=0xca6aa824,
so=0xc6372ce0,
tp=0xc63b4d20, drop_hdrlen=52, tlen=1448, iptos=0 '\0', ti_locked=2)
at /usr/src/sys/netinet/tcp_input.c:1775
#16 0xc0847930 in tcp_input (m=0xcaa8d200, off0=20)
at /usr/src/sys/netinet/tcp_input.c:1329
#17 0xc07ddaf7 in ip_input (m=0xcaa8d200)
at /usr/src/sys/netinet/ip_input.c:787
#18 0xc07b8859 in netisr_dispatch_src (proto=1, source=0, m=0xcaa8d200)
at /usr/src/sys/net/netisr.c:859
#19 0xc07b8af0 in netisr_dispatch (proto=1, m=0xcaa8d200)
at /usr/src/sys/net/netisr.c:946
#20 0xc07ae5e1 in ether_demux (ifp=0xc56ed800, m=0xcaa8d200)
at /usr/src/sys/net/if_ethersubr.c:894
#21 0xc07aeb5f in ether_input (ifp=0xc56ed800, m=0xcaa8d200)
at /usr/src/sys/net/if_ethersubr.c:753
#22 0xc09977b2 in nfe_int_task (arg=0xc56ff000, pending=1)
at /usr/src/sys/dev/nfe/if_nfe.c:2187
#23 0xc07387ca in taskqueue_run_locked (queue=0xc5702440)
at /usr/src/sys/kern/subr_taskqueue.c:248
#24 0xc073895c in taskqueue_thread_loop (arg=0xc56ff130)
at /usr/src/sys/kern/subr_taskqueue.c:385
#25 0xc06d1027 in fork_exit (callout=0xc07388a0 <taskqueue_thread_loop>,
arg=0xc56ff130, frame=0xc538ed28) at /usr/src/sys/kern/kern_fork.c:861
#26 0xc09a5c24 in fork_trampoline () at
/usr/src/sys/i386/i386/exception.s:275
(kgdb)

--
-------------------
Mike Tancsa, tel +1 519 651 3400
Sentex Communications, mi...@sentex.net
Providing Internet services since 1994 www.sentex.net
Cambridge, Ontario Canada http://www.tancsa.com/
_______________________________________________
freebsd...@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stabl...@freebsd.org"

Andriy Gapon

Jul 7, 2011, 3:40:06 AM
to Mike Tancsa, FreeBSD-STABLE Mailing List
on 07/07/2011 08:55 Mike Tancsa said the following:

> I did a buildworld on this box to bring it up to RELENG_8 for the BIND
> fixes. Unfortunately, the formerly solid box (April 13th kernel)
> panic'd tonight with
>
> Unread portion of the kernel message buffer:
> spin lock 0xc0b1d200 (sched lock 1) held by 0xc5dac8a0 (tid 100107) too long
> panic: spin lock held too long
> cpuid = 0
> Uptime: 13h30m4s
> Physical memory: 2035 MB
>
>
> It's a somewhat busy box, taking in mail as well as backups for a few
> servers over NFS. At the time, it would have been receiving about 250Mb/s
> inbound on its gigabit interface. Full core.txt file at
>
> http://www.tancsa.com/core-jul8-2011.txt

I thought that this was supposed to contain the output of 'thread apply all
bt' in kgdb. Anyway, I think the stack trace for tid 100107 may have some
useful information.


--
Andriy Gapon

Kostik Belousov

Jul 7, 2011, 4:23:44 AM
to Andriy Gapon, FreeBSD-STABLE Mailing List

BTW, we had a similar panic, "spin lock held too long", where the spinlock
was sched lock N, on a busy 8-core box recently upgraded to stable/8.
Unfortunately, the machine hung while dumping core, so the stack trace
for the owner thread was not available.

I was unable to draw any conclusion from the data that was present.
If the situation is reproducible, you could try to revert r221937. This
is pure speculation, though.

Mike Tancsa

Jul 7, 2011, 7:36:53 AM
to Kostik Belousov, FreeBSD-STABLE Mailing List, Andriy Gapon
On 7/7/2011 4:20 AM, Kostik Belousov wrote:
>
> BTW, we had a similar panic, "spin lock held too long", where the spinlock
> was sched lock N, on a busy 8-core box recently upgraded to stable/8.
> Unfortunately, the machine hung while dumping core, so the stack trace
> for the owner thread was not available.
>
> I was unable to draw any conclusion from the data that was present.
> If the situation is reproducible, you could try to revert r221937. This
> is pure speculation, though.

Another crash just now, after 5hrs uptime. I will try and revert r221937
unless there is any extra debugging you want me to add to the kernel
instead?

This is an inbound mail server, so a little disruption is acceptable.

kgdb /usr/obj/usr/src/sys/recycle/kernel.debug vmcore.13
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i386-marcel-freebsd"...

Unread portion of the kernel message buffer:
spin lock 0xc0b1d200 (sched lock 1) held by 0xc5dac2e0 (tid 100109) too long
panic: spin lock held too long
cpuid = 0
Uptime: 5h37m43s
Physical memory: 2035 MB
Dumping 260 MB: 245 229 213 197 181 165 149 133 117 101 85 69 53 37 21 5

Reading symbols from /boot/kernel/amdsbwd.ko...Reading symbols from
/boot/kernel/amdsbwd.ko.symbols...done.
done.
Loaded symbols for /boot/kernel/amdsbwd.ko

#0 doadump () at pcpu.h:231
231 pcpu.h: No such file or directory.
in pcpu.h

(kgdb) bt
#0 doadump () at pcpu.h:231
#1 0xc06fd6d3 in boot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:429
#2 0xc06fd937 in panic (fmt=Variable "fmt" is not available.
) at /usr/src/sys/kern/kern_shutdown.c:602
#3 0xc06ed95f in _mtx_lock_spin_failed (m=0x0)
at /usr/src/sys/kern/kern_mutex.c:490
#4 0xc06ed9e5 in _mtx_lock_spin (m=0xc0b1d200, tid=3312388992, opts=0,
file=0x0, line=0) at /usr/src/sys/kern/kern_mutex.c:526
#5 0xc0720254 in sched_add (td=0xc61892e0, flags=0)
at /usr/src/sys/kern/sched_ule.c:1119
#6 0xc07203f9 in sched_wakeup (td=0xc61892e0)
at /usr/src/sys/kern/sched_ule.c:1950
#7 0xc07061f8 in setrunnable (td=0xc61892e0)
at /usr/src/sys/kern/kern_synch.c:499
#8 0xc07362af in sleepq_resume_thread (sq=0xc55311c0, td=0xc61892e0,
pri=Variable "pri" is not available.
) at /usr/src/sys/kern/subr_sleepqueue.c:751
#9 0xc0736e18 in sleepq_signal (wchan=0xc60386d0, flags=1, pri=0, queue=0)
at /usr/src/sys/kern/subr_sleepqueue.c:825
#10 0xc06b6764 in cv_signal (cvp=0xc60386d0)
at /usr/src/sys/kern/kern_condvar.c:422
#11 0xc08eaa0d in xprt_assignthread (xprt=Variable "xprt" is not available.
) at /usr/src/sys/rpc/svc.c:342
#12 0xc08ec502 in xprt_active (xprt=0xc5db8a00) at /usr/src/sys/rpc/svc.c:378
#13 0xc08ee051 in svc_vc_soupcall (so=0xc618a19c, arg=0xc5db8a00, waitflag=1)
at /usr/src/sys/rpc/svc_vc.c:747
#14 0xc075bbb1 in sowakeup (so=0xc618a19c, sb=0xc618a1f0)
at /usr/src/sys/kern/uipc_sockbuf.c:191
#15 0xc08447bc in tcp_do_segment (m=0xc6567a00, th=0xc6785824, so=0xc618a19c,
tp=0xc617e000, drop_hdrlen=52, tlen=1448, iptos=0 '\0', ti_locked=2)
at /usr/src/sys/netinet/tcp_input.c:1775
#16 0xc0847930 in tcp_input (m=0xc6567a00, off0=20)
at /usr/src/sys/netinet/tcp_input.c:1329
#17 0xc07ddaf7 in ip_input (m=0xc6567a00)
at /usr/src/sys/netinet/ip_input.c:787
#18 0xc07b8859 in netisr_dispatch_src (proto=1, source=0, m=0xc6567a00)
at /usr/src/sys/net/netisr.c:859
#19 0xc07b8af0 in netisr_dispatch (proto=1, m=0xc6567a00)
at /usr/src/sys/net/netisr.c:946
#20 0xc07ae5e1 in ether_demux (ifp=0xc56ed800, m=0xc6567a00)
at /usr/src/sys/net/if_ethersubr.c:894
#21 0xc07aeb5f in ether_input (ifp=0xc56ed800, m=0xc6567a00)
at /usr/src/sys/net/if_ethersubr.c:753
#22 0xc09977b2 in nfe_int_task (arg=0xc56ff000, pending=1)
at /usr/src/sys/dev/nfe/if_nfe.c:2187
#23 0xc07387ca in taskqueue_run_locked (queue=0xc5702440)
at /usr/src/sys/kern/subr_taskqueue.c:248
#24 0xc073895c in taskqueue_thread_loop (arg=0xc56ff130)
at /usr/src/sys/kern/subr_taskqueue.c:385
#25 0xc06d1027 in fork_exit (callout=0xc07388a0 <taskqueue_thread_loop>,
arg=0xc56ff130, frame=0xc538ed28) at /usr/src/sys/kern/kern_fork.c:861
#26 0xc09a5c24 in fork_trampoline () at /usr/src/sys/i386/i386/exception.s:275


Jeremy Chadwick

Jul 7, 2011, 7:44:33 AM
to Mike Tancsa, Kostik Belousov, FreeBSD-STABLE Mailing List, Andriy Gapon

1. info threads
2. Find the index value that matches the tid in question (in the above
spin lock panic, that'd be tid 100109). The index value will be
the first number shown on the left
3. thread {index}
4. bt

If this doesn't work, you can alternatively try (from the beginning)
"thread apply all bt" and provide the output from that. (It will be
quite lengthy, and at this point I think tid 100109 is the one of
interest in this crash, based on what Andriy said earlier.)

--
| Jeremy Chadwick jdc at parodius.com |
| Parodius Networking http://www.parodius.com/ |
| UNIX Systems Administrator Mountain View, CA, US |
| Making life hard for others since 1977. PGP 4BD6C0CB |

Andriy Gapon

Jul 7, 2011, 7:54:15 AM
to Jeremy Chadwick, Kostik Belousov, FreeBSD-STABLE Mailing List
on 07/07/2011 14:41 Jeremy Chadwick said the following:

> 1. info threads
> 2. Find the index value that matches the tid in question (in the above
> spin lock panic, that'd be tid 100109). The index value will be
> the first number shown on the left
> 3. thread {index}

Just in case, in kgdb there is a command 'tid' that does all of the above steps in
one go.

> 4. bt
>
> If this doesn't work, alternatively you can try (from the beginning)
> "thread apply all bt" and provide the output from that. (It will be
> quite lengthy, and at this point I think tid 100109 is the one of
> interest in this crash, based on what Andriy said earlier)


--
Andriy Gapon

Mike Tancsa

Jul 7, 2011, 8:05:35 AM
to Kostik Belousov, FreeBSD-STABLE Mailing List, Andriy Gapon

And the second crash from today

kgdb /usr/obj/usr/src/sys/recycle/kernel.debug vmcore.13
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i386-marcel-freebsd"...

Unread portion of the kernel message buffer:
spin lock 0xc0b1d200 (sched lock 1) held by 0xc5dac2e0 (tid 100109) too long
panic: spin lock held too long
cpuid = 0
Uptime: 5h37m43s
Physical memory: 2035 MB
Dumping 260 MB: 245 229 213 197 181 165 149 133 117 101 85 69 53 37 21 5

Reading symbols from /boot/kernel/amdsbwd.ko...Reading symbols from
/boot/kernel/amdsbwd.ko.symbols...done.
done.
Loaded symbols for /boot/kernel/amdsbwd.ko
#0 doadump () at pcpu.h:231
231 pcpu.h: No such file or directory.
in pcpu.h

(kgdb) tid 100109
[Switching to thread 82 (Thread 100109)]#0 sched_switch (td=0xc5dac2e0,
newtd=0xc553c5c0, flags=260)
at /usr/src/sys/kern/sched_ule.c:1866
1866 cpuid = PCPU_GET(cpuid);
(kgdb) list
1861 /*
1862  * We may return from cpu_switch on a different cpu. However,
1863  * we always return with td_lock pointing to the current cpu's
1864  * run queue lock.
1865  */
1866 cpuid = PCPU_GET(cpuid);
1867 tdq = TDQ_CPU(cpuid);
1868 lock_profile_obtain_lock_success(
1869     &TDQ_LOCKPTR(tdq)->lock_object, 0, 0, __FILE__, __LINE__);
1870 #ifdef HWPMC_HOOKS
(kgdb) p *td
$1 = {td_lock = 0xc0b1d200, td_proc = 0xc5db4000, td_plist = {tqe_next =
0xc5dac5c0, tqe_prev = 0xc5dac008},
td_runq = {tqe_next = 0x0, tqe_prev = 0xc0b1d334}, td_slpq = {tqe_next
= 0x0, tqe_prev = 0xc65d3b00},
td_lockq = {tqe_next = 0x0, tqe_prev = 0xc51f6b38}, td_cpuset =
0xc5533e38, td_sel = 0x0,
td_sleepqueue = 0xc65d3b00, td_turnstile = 0xc63ceb80, td_umtxq =
0xc5d229c0, td_tid = 100109, td_sigqueue = {
sq_signals = {__bits = {0, 0, 0, 0}}, sq_kill = {__bits = {0, 0, 0,
0}}, sq_list = {tqh_first = 0x0,
tqh_last = 0xc5dac340}, sq_proc = 0xc5db4000, sq_flags = 1},
td_flags = 4, td_inhibitors = 0,
td_pflags = 2097152, td_dupfd = 0, td_sqqueue = 0, td_wchan = 0x0,
td_wmesg = 0x0, td_lastcpu = 0 '\0',
td_oncpu = 1 '\001', td_owepreempt = 0 '\0', td_tsqueue = 0 '\0',
td_locks = -291, td_rw_rlocks = 0,
td_lk_slocks = 0, td_blocked = 0x0, td_lockname = 0x0, td_contested =
{lh_first = 0x0}, td_sleeplocks = 0x0,
td_intr_nesting_level = 0, td_pinned = 0, td_ucred = 0xc5538100,
td_estcpu = 0, td_slptick = 0,
td_blktick = 0, td_ru = {ru_utime = {tv_sec = 0, tv_usec = 0},
ru_stime = {tv_sec = 0, tv_usec = 0},
ru_maxrss = 1048, ru_ixrss = 85216, ru_idrss = 3834720, ru_isrss =
681728, ru_minflt = 0, ru_majflt = 0,
ru_nswap = 0, ru_inblock = 82, ru_oublock = 271222, ru_msgsnd =
135625, ru_msgrcv = 2427350,
ru_nsignals = 0, ru_nvcsw = 2076938, ru_nivcsw = 731134},
td_incruntime = 852332612,
td_runtime = 88202475877, td_pticks = 5326, td_sticks = 48, td_iticks
= 0, td_uticks = 0, td_intrval = 0,
td_oldsigmask = {__bits = {0, 0, 0, 0}}, td_sigmask = {__bits = {0, 0,
0, 0}}, td_generation = 2808072,
td_sigstk = {ss_sp = 0x0, ss_size = 0, ss_flags = 0}, td_xsig = 0,
td_profil_addr = 0, td_profil_ticks = 0,
td_name = "nfsd: service\000\000\000\000\000\000", td_fpop = 0x0,
td_dbgflags = 0, td_dbgksi = {ksi_link = {
tqe_next = 0x0, tqe_prev = 0x0}, ksi_info = {si_signo = 0,
si_errno = 0, si_code = 0, si_pid = 0,
si_uid = 0, si_status = 0, si_addr = 0x0, si_value = {sival_int =
0, sival_ptr = 0x0, sigval_int = 0,
sigval_ptr = 0x0}, _reason = {_fault = {_trapno = 0}, _timer =
{_timerid = 0, _overrun = 0}, _mesgq = {
_mqd = 0}, _poll = {_band = 0}, __spare__ = {__spare1__ = 0,
__spare2__ = {0, 0, 0, 0, 0, 0, 0}}}},
ksi_flags = 0, ksi_sigq = 0x0}, td_ng_outbound = 0, td_osd =
{osd_nslots = 0, osd_slots = 0x0, osd_next = {
le_next = 0x0, le_prev = 0x0}}, td_rqindex = 32 ' ', td_base_pri =
160 ' ', td_priority = 128 '\200',
td_pri_class = 3 '\003', td_user_pri = 128 '\200', td_base_user_pri =
128 '\200', td_pcb = 0xe7d14d80,
td_state = TDS_RUNNING, td_retval = {0, 0}, td_slpcallout = {c_links =
{sle = {sle_next = 0xc5d704e0}, tqe = {
tqe_next = 0xc5d704e0, tqe_prev = 0xc55bb1f0}}, c_time =
20246590, c_arg = 0xc5dac2e0,
c_func = 0xc0736bc0 <sleepq_timeout>, c_lock = 0x0, c_flags = 18,
c_cpu = 32}, td_frame = 0xe7d14d28,
td_kstack_obj = 0xc6182088, td_kstack = 3889246208, td_kstack_pages =
2, td_unused1 = 0x0, td_unused2 = 0,
td_unused3 = 0, td_critnest = 1, td_md = {md_spinlock_count = 1,
md_saved_flags = 582},
td_sched = 0xc5dac58c, td_ar = 0x0, td_syscalls = 0, td_lprof =
{{lh_first = 0x0}, {lh_first = 0x0}},
td_dtrace = 0x0, td_errno = 0, td_vnet = 0x0, td_vnet_lpush = 0x0,
td_rux = {rux_runtime = 87350143265,
rux_uticks = 0, rux_sticks = 5278, rux_iticks = 0, rux_uu = 0,
rux_su = 0, rux_tu = 0},
td_map_def_user = 0x0, td_dbg_forked = 0}
(kgdb) p *newtd
$2 = {td_lock = 0xc0b1cb80, td_proc = 0xc553a810, td_plist = {tqe_next =
0xc553c8a0, tqe_prev = 0xc553a818},
td_runq = {tqe_next = 0x0, tqe_prev = 0x0}, td_slpq = {tqe_next = 0x0,
tqe_prev = 0x0}, td_lockq = {
tqe_next = 0x0, tqe_prev = 0x0}, td_cpuset = 0xc5533e38, td_sel =
0x0, td_sleepqueue = 0xc5531e00,
td_turnstile = 0xc553d000, td_umtxq = 0xc5527ac0, td_tid = 100004,
td_sigqueue = {sq_signals = {__bits = {0,
0, 0, 0}}, sq_kill = {__bits = {0, 0, 0, 0}}, sq_list =
{tqh_first = 0x0, tqh_last = 0xc553c620},
sq_proc = 0xc553a810, sq_flags = 1}, td_flags = 262180,
td_inhibitors = 0, td_pflags = 2097152,
td_dupfd = 0, td_sqqueue = 0, td_wchan = 0x0, td_wmesg = 0x0,
td_lastcpu = 0 '\0', td_oncpu = 255 '�',
td_owepreempt = 0 '\0', td_tsqueue = 0 '\0', td_locks = 0,
td_rw_rlocks = 0, td_lk_slocks = 0,
td_blocked = 0x0, td_lockname = 0x0, td_contested = {lh_first = 0x0},
td_sleeplocks = 0x0,
td_intr_nesting_level = 0, td_pinned = 0, td_ucred = 0xc5535600,
td_estcpu = 0, td_slptick = 0,
td_blktick = 0, td_ru = {ru_utime = {tv_sec = 0, tv_usec = 0},
ru_stime = {tv_sec = 0, tv_usec = 0},
ru_maxrss = 0, ru_ixrss = 0, ru_idrss = 0, ru_isrss = 0, ru_minflt =
0, ru_majflt = 0, ru_nswap = 0,
ru_inblock = 0, ru_oublock = 0, ru_msgsnd = 0, ru_msgrcv = 0,
ru_nsignals = 0, ru_nvcsw = 33962290,
ru_nivcsw = 40323696}, td_incruntime = 370201469879, td_runtime =
41685199750119, td_pticks = 2502607,
td_sticks = 22282, td_iticks = 0, td_uticks = 0, td_intrval = 0,
td_oldsigmask = {__bits = {0, 0, 0, 0}},
td_sigmask = {__bits = {0, 0, 0, 0}}, td_generation = 74285986,
td_sigstk = {ss_sp = 0x0, ss_size = 0,
ss_flags = 0}, td_xsig = 0, td_profil_addr = 0, td_profil_ticks = 0,
td_name = "idle: cpu0\000\000\000\000\000\000\000\000\000", td_fpop =
0x0, td_dbgflags = 0, td_dbgksi = {
ksi_link = {tqe_next = 0x0, tqe_prev = 0x0}, ksi_info = {si_signo =
0, si_errno = 0, si_code = 0,
si_pid = 0, si_uid = 0, si_status = 0, si_addr = 0x0, si_value =
{sival_int = 0, sival_ptr = 0x0,
sigval_int = 0, sigval_ptr = 0x0}, _reason = {_fault = {_trapno
= 0}, _timer = {_timerid = 0,
_overrun = 0}, _mesgq = {_mqd = 0}, _poll = {_band = 0},
__spare__ = {__spare1__ = 0, __spare2__ = {
0, 0, 0, 0, 0, 0, 0}}}}, ksi_flags = 0, ksi_sigq = 0x0},
td_ng_outbound = 0, td_osd = {
osd_nslots = 0, osd_slots = 0x0, osd_next = {le_next = 0x0, le_prev
= 0x0}}, td_rqindex = 0 '\0',
td_base_pri = 255 '�', td_priority = 255 '�', td_pri_class = 4 '\004',
td_user_pri = 160 ' ',
td_base_user_pri = 160 ' ', td_pcb = 0xc51e3d80, td_state =
TDS_CAN_RUN, td_retval = {0, 0}, td_slpcallout = {
c_links = {sle = {sle_next = 0x0}, tqe = {tqe_next = 0x0, tqe_prev =
0x0}}, c_time = 0, c_arg = 0x0,
c_func = 0, c_lock = 0x0, c_flags = 16, c_cpu = 0}, td_frame =
0xc51e3d28, td_kstack_obj = 0xc157ddd0,
td_kstack = 3307085824, td_kstack_pages = 2, td_unused1 = 0x0,
td_unused2 = 0, td_unused3 = 0,
td_critnest = 1, td_md = {md_spinlock_count = 1, md_saved_flags =
582}, td_sched = 0xc553c86c, td_ar = 0x0,
td_syscalls = 0, td_lprof = {{lh_first = 0x0}, {lh_first = 0x0}},
td_dtrace = 0x0, td_errno = 0,
td_vnet = 0x0, td_vnet_lpush = 0x0, td_rux = {rux_runtime =
41315105445882, rux_uticks = 0,
rux_sticks = 2480325, rux_iticks = 0, rux_uu = 0, rux_su = 0, rux_tu
= 0}, td_map_def_user = 0x0,
td_dbg_forked = 0}
(kgdb) p *mtx
$3 = {lock_object = {lo_name = 0xc0a3af04 "sleepq chain", lo_flags =
720896, lo_data = 0, lo_witness = 0x0},
mtx_lock = 4}
(kgdb) disassemble
Dump of assembler code for function sched_switch:
0xc07206c0 <sched_switch+0>: push %ebp
0xc07206c1 <sched_switch+1>: mov %esp,%ebp
0xc07206c3 <sched_switch+3>: push %edi
0xc07206c4 <sched_switch+4>: push %esi
0xc07206c5 <sched_switch+5>: push %ebx
0xc07206c6 <sched_switch+6>: sub $0x24,%esp
0xc07206c9 <sched_switch+9>: mov 0x8(%ebp),%esi
0xc07206cc <sched_switch+12>: mov (%esi),%eax
0xc07206ce <sched_switch+14>: mov %fs:0x20,%eax
0xc07206d4 <sched_switch+20>: mov %eax,0xfffffff0(%ebp)
0xc07206d7 <sched_switch+23>: mov %eax,0xffffffec(%ebp)
0xc07206da <sched_switch+26>: imul $0x680,%eax,%eax
0xc07206e0 <sched_switch+32>: lea 0xc0b1cb80(%eax),%edi
0xc07206e6 <sched_switch+38>: mov 0x248(%esi),%ebx
0xc07206ec <sched_switch+44>: mov (%esi),%eax
0xc07206ee <sched_switch+46>: mov %eax,0xffffffe4(%ebp)
0xc07206f1 <sched_switch+49>: mov 0xc0b16e8c,%eax
0xc07206f6 <sched_switch+54>: mov %eax,0x8(%ebx)
0xc07206f9 <sched_switch+57>: movzbl 0x8d(%esi),%eax
0xc0720700 <sched_switch+64>: mov %al,0x8c(%esi)
0xc0720706 <sched_switch+70>: movb $0xff,0x8d(%esi)
0xc072070d <sched_switch+77>: mov 0x10(%ebp),%eax
0xc0720710 <sched_switch+80>: and $0x400,%eax
0xc0720715 <sched_switch+85>: jne 0xc072071e <sched_switch+94>
0xc0720717 <sched_switch+87>: andl $0xfffeffff,0x70(%esi)
0xc072071e <sched_switch+94>: movb $0x0,0x8e(%esi)
0xc0720725 <sched_switch+101>: addw $0x1,0x24(%edi)
0xc072072a <sched_switch+106>: testb $0x20,0x70(%esi)
0xc072072e <sched_switch+110>: je 0xc0720740 <sched_switch+128>
0xc0720730 <sched_switch+112>: movl $0x2,0x1f4(%esi)
0xc072073a <sched_switch+122>: jmp 0xc0720924 <sched_switch+612>
0xc072073f <sched_switch+127>: nop
0xc0720740 <sched_switch+128>: cmpl $0x4,0x1f4(%esi)
0xc0720747 <sched_switch+135>: jne 0xc07208c0 <sched_switch+512>
0xc072074d <sched_switch+141>: cmp $0x1,%eax
0xc0720750 <sched_switch+144>: sbb %edx,%edx
0xc0720752 <sched_switch+146>: and $0xfffffff8,%edx
0xc0720755 <sched_switch+149>: add $0xb,%edx
0xc0720758 <sched_switch+152>: mov %edx,0xffffffe8(%ebp)
0xc072075b <sched_switch+155>: cmpl $0x0,0xac(%esi)
0xc0720762 <sched_switch+162>: jne 0xc0720790 <sched_switch+208>
0xc0720764 <sched_switch+164>: mov 0x28(%esi),%edx
0xc0720767 <sched_switch+167>: movzbl 0x6(%ebx),%ecx
---Type <return> to continue, or q <return> to quit---
0xc072076b <sched_switch+171>: mov %ecx,%eax
0xc072076d <sched_switch+173>: shr $0x5,%al
0xc0720770 <sched_switch+176>: movzbl %al,%eax
0xc0720773 <sched_switch+179>: mov (%edx,%eax,4),%eax
0xc0720776 <sched_switch+182>: and $0x1f,%ecx
0xc0720779 <sched_switch+185>: sar %cl,%eax
0xc072077b <sched_switch+187>: test $0x1,%al
0xc072077d <sched_switch+189>: jne 0xc0720790 <sched_switch+208>
0xc072077f <sched_switch+191>: mov $0x0,%edx
0xc0720784 <sched_switch+196>: mov %esi,%eax
0xc0720786 <sched_switch+198>: call 0xc071ff90 <sched_pickcpu>
0xc072078b <sched_switch+203>: mov %al,0x6(%ebx)
0xc072078e <sched_switch+206>: mov %esi,%esi
0xc0720790 <sched_switch+208>: movzbl 0x6(%ebx),%eax
0xc0720794 <sched_switch+212>: cmp 0xffffffec(%ebp),%eax
0xc0720797 <sched_switch+215>: jne 0xc0720850 <sched_switch+400>
0xc072079d <sched_switch+221>: mov (%esi),%eax
0xc072079f <sched_switch+223>: movzbl 0x1ea(%esi),%ebx
0xc07207a6 <sched_switch+230>: mov 0x248(%esi),%ecx
0xc07207ac <sched_switch+236>: movl $0x3,0x1f4(%esi)
0xc07207b6 <sched_switch+246>: cmpl $0x0,0xac(%esi)
0xc07207bd <sched_switch+253>: jne 0xc07207c8 <sched_switch+264>
0xc07207bf <sched_switch+255>: addl $0x1,0x20(%edi)
0xc07207c3 <sched_switch+259>: orw $0x2,0x4(%ecx)
0xc07207c8 <sched_switch+264>: cmp $0x9f,%bl
0xc07207cb <sched_switch+267>: ja 0xc07207d4 <sched_switch+276>
0xc07207cd <sched_switch+269>: lea 0x2c(%edi),%eax
0xc07207d0 <sched_switch+272>: mov %eax,(%ecx)
0xc07207d2 <sched_switch+274>: jmp 0xc0720833 <sched_switch+371>
0xc07207d4 <sched_switch+276>: cmp $0xdf,%bl
0xc07207d7 <sched_switch+279>: ja 0xc072082b <sched_switch+363>
0xc07207d9 <sched_switch+281>: lea 0x234(%edi),%eax
0xc07207df <sched_switch+287>: mov %eax,(%ecx)
0xc07207e1 <sched_switch+289>: testb $0x18,0xffffffe8(%ebp)
0xc07207e5 <sched_switch+293>: jne 0xc0720806 <sched_switch+326>
0xc07207e7 <sched_switch+295>: movzbl 0x2a(%edi),%edx
0xc07207eb <sched_switch+299>: lea 0x60(%ebx,%edx,1),%eax
0xc07207ef <sched_switch+303>: and $0x3f,%eax
0xc07207f2 <sched_switch+306>: movzbl 0x2b(%edi),%ebx
0xc07207f6 <sched_switch+310>: cmp %dl,%bl
0xc07207f8 <sched_switch+312>: je 0xc072080a <sched_switch+330>
0xc07207fa <sched_switch+314>: cmp %al,%bl
0xc07207fc <sched_switch+316>: jne 0xc072080a <sched_switch+330>
0xc07207fe <sched_switch+318>: sub $0x1,%eax
---Type <return> to continue, or q <return> to quit---
0xc0720801 <sched_switch+321>: and $0x3f,%eax
0xc0720804 <sched_switch+324>: jmp 0xc072080a <sched_switch+330>
0xc0720806 <sched_switch+326>: movzbl 0x2b(%edi),%eax
0xc072080a <sched_switch+330>: movzbl %al,%eax
0xc072080d <sched_switch+333>: mov (%ecx),%edx
0xc072080f <sched_switch+335>: mov 0xffffffe8(%ebp),%ecx
0xc0720812 <sched_switch+338>: mov %ecx,0xc(%esp)
0xc0720816 <sched_switch+342>: mov %eax,0x8(%esp)
0xc072081a <sched_switch+346>: mov %esi,0x4(%esp)
0xc072081e <sched_switch+350>: mov %edx,(%esp)
0xc0720821 <sched_switch+353>: call 0xc07051a0 <runq_add_pri>
0xc0720826 <sched_switch+358>: jmp 0xc0720924 <sched_switch+612>
0xc072082b <sched_switch+363>: lea 0x43c(%edi),%eax
0xc0720831 <sched_switch+369>: mov %eax,(%ecx)
0xc0720833 <sched_switch+371>: mov (%ecx),%eax
0xc0720835 <sched_switch+373>: mov 0xffffffe8(%ebp),%edx
0xc0720838 <sched_switch+376>: mov %edx,0x8(%esp)
0xc072083c <sched_switch+380>: mov %esi,0x4(%esp)
0xc0720840 <sched_switch+384>: mov %eax,(%esp)
0xc0720843 <sched_switch+387>: call 0xc0704dc0 <runq_add>
0xc0720848 <sched_switch+392>: jmp 0xc0720924 <sched_switch+612>
0xc072084d <sched_switch+397>: lea 0x0(%esi),%esi
0xc0720850 <sched_switch+400>: mov 0x248(%esi),%eax
0xc0720856 <sched_switch+406>: movzbl 0x6(%eax),%eax
0xc072085a <sched_switch+410>: imul $0x680,%eax,%eax
0xc0720860 <sched_switch+416>: lea 0xc0b1cb80(%eax),%ebx
0xc0720866 <sched_switch+422>: mov %esi,%edx
0xc0720868 <sched_switch+424>: mov %edi,%eax
0xc072086a <sched_switch+426>: call 0xc071da30 <tdq_load_rem>
0xc072086f <sched_switch+431>: call 0xc09af910 <spinlock_enter>
0xc0720874 <sched_switch+436>: mov %esi,(%esp)
0xc0720877 <sched_switch+439>: call 0xc06edf20 <thread_lock_block>
0xc072087c <sched_switch+444>: mov %edi,%edx
0xc072087e <sched_switch+446>: mov %ebx,%eax
0xc0720880 <sched_switch+448>: call 0xc071e080 <tdq_lock_pair>
0xc0720885 <sched_switch+453>: mov 0xffffffe8(%ebp),%ecx
0xc0720888 <sched_switch+456>: mov %esi,%edx
0xc072088a <sched_switch+458>: mov %ebx,%eax
0xc072088c <sched_switch+460>: call 0xc071ef50 <tdq_add>
0xc0720891 <sched_switch+465>: mov %esi,%edx
0xc0720893 <sched_switch+467>: mov %ebx,%eax
0xc0720895 <sched_switch+469>: call 0xc071e990 <tdq_notify>
0xc072089a <sched_switch+474>: mov 0x8(%ebx),%eax
0xc072089d <sched_switch+477>: test %eax,%eax
---Type <return> to continue, or q <return> to quit---
0xc072089f <sched_switch+479>: je 0xc07208a9 <sched_switch+489>
0xc07208a1 <sched_switch+481>: sub $0x1,%eax
0xc07208a4 <sched_switch+484>: mov %eax,0x8(%ebx)
0xc07208a7 <sched_switch+487>: jmp 0xc07208b1 <sched_switch+497>
0xc07208a9 <sched_switch+489>: mov $0x4,%eax
0xc07208ae <sched_switch+494>: xchg %eax,0x10(%ebx)
0xc07208b1 <sched_switch+497>: call 0xc09afae0 <spinlock_exit>
0xc07208b6 <sched_switch+502>: call 0xc09afae0 <spinlock_exit>
0xc07208bb <sched_switch+507>: mov %ebx,0xffffffe4(%ebp)
0xc07208be <sched_switch+510>: jmp 0xc0720924 <sched_switch+612>
0xc07208c0 <sched_switch+512>: mov %fs:0x0,%ebx
0xc07208c7 <sched_switch+519>: call 0xc09af910 <spinlock_enter>
0xc07208cc <sched_switch+524>: mov $0x4,%eax
0xc07208d1 <sched_switch+529>: lock cmpxchg %ebx,0x10(%edi)
0xc07208d6 <sched_switch+534>: sete %al
0xc07208d9 <sched_switch+537>: test %al,%al
0xc07208db <sched_switch+539>: jne 0xc0720910 <sched_switch+592>
0xc07208dd <sched_switch+541>: mov 0x10(%edi),%eax
0xc07208e0 <sched_switch+544>: cmp %ebx,%eax
0xc07208e2 <sched_switch+546>: jne 0xc07208ea <sched_switch+554>
0xc07208e4 <sched_switch+548>: addl $0x1,0x8(%edi)
0xc07208e8 <sched_switch+552>: jmp 0xc0720910 <sched_switch+592>
0xc07208ea <sched_switch+554>: movl $0x0,0x10(%esp)
0xc07208f2 <sched_switch+562>: movl $0x0,0xc(%esp)
0xc07208fa <sched_switch+570>: movl $0x0,0x8(%esp)
0xc0720902 <sched_switch+578>: mov %ebx,0x4(%esp)
0xc0720906 <sched_switch+582>: mov %edi,(%esp)
0xc0720909 <sched_switch+585>: call 0xc06ed970 <_mtx_lock_spin>
0xc072090e <sched_switch+590>: mov %esi,%esi
0xc0720910 <sched_switch+592>: mov %esi,(%esp)
0xc0720913 <sched_switch+595>: call 0xc06edf20 <thread_lock_block>
0xc0720918 <sched_switch+600>: mov %eax,0xffffffe4(%ebp)
0xc072091b <sched_switch+603>: mov %esi,%edx
0xc072091d <sched_switch+605>: mov %edi,%eax
0xc072091f <sched_switch+607>: call 0xc071da30 <tdq_load_rem>
0xc0720924 <sched_switch+612>: call 0xc0705140 <choosethread>
0xc0720929 <sched_switch+617>: mov %eax,%ebx
0xc072092b <sched_switch+619>: cmp %eax,%esi
0xc072092d <sched_switch+621>: je 0xc07209b1 <sched_switch+753>
0xc0720933 <sched_switch+627>: mov 0x4(%esi),%ecx
0xc0720936 <sched_switch+630>: lock cmpxchg %eax,0x5c(%ecx)
0xc072093b <sched_switch+635>: test $0x800000,%eax
0xc0720940 <sched_switch+640>: je 0xc0720960 <sched_switch+672>
0xc0720942 <sched_switch+642>: mov 0xc0b184dc,%eax
---Type <return> to continue, or q <return> to quit---
0xc0720947 <sched_switch+647>: test %eax,%eax
0xc0720949 <sched_switch+649>: je 0xc0720960 <sched_switch+672>
0xc072094b <sched_switch+651>: movl $0x0,0x8(%esp)
0xc0720953 <sched_switch+659>: movl $0x3,0x4(%esp)
0xc072095b <sched_switch+667>: mov %esi,(%esp)
0xc072095e <sched_switch+670>: call *%eax
0xc0720960 <sched_switch+672>: mov %ebx,0x10(%edi)
0xc0720963 <sched_switch+675>: mov 0xffffffe4(%ebp),%edx
0xc0720966 <sched_switch+678>: mov %edx,0x8(%esp)
0xc072096a <sched_switch+682>: mov %ebx,0x4(%esp)
0xc072096e <sched_switch+686>: mov %esi,(%esp)
0xc0720971 <sched_switch+689>: call 0xc09bcdc4 <cpu_switch>
0xc0720976 <sched_switch+694>: mov %fs:0x20,%eax
0xc072097c <sched_switch+700>: mov %eax,0xfffffff0(%ebp)
0xc072097f <sched_switch+703>: mov %eax,0xffffffec(%ebp)
0xc0720982 <sched_switch+706>: mov 0x4(%esi),%ecx
0xc0720985 <sched_switch+709>: lock cmpxchg %eax,0x5c(%ecx)
0xc072098a <sched_switch+714>: test $0x800000,%eax
0xc072098f <sched_switch+719>: je 0xc07209b6 <sched_switch+758>
0xc0720991 <sched_switch+721>: mov 0xc0b184dc,%eax
0xc0720996 <sched_switch+726>: test %eax,%eax
0xc0720998 <sched_switch+728>: je 0xc07209b6 <sched_switch+758>
0xc072099a <sched_switch+730>: movl $0x0,0x8(%esp)
0xc07209a2 <sched_switch+738>: movl $0x2,0x4(%esp)
0xc07209aa <sched_switch+746>: mov %esi,(%esp)
0xc07209ad <sched_switch+749>: call *%eax
0xc07209af <sched_switch+751>: jmp 0xc07209b6 <sched_switch+758>
0xc07209b1 <sched_switch+753>: mov 0xffffffe4(%ebp),%eax
0xc07209b4 <sched_switch+756>: xchg %eax,(%esi)
0xc07209b6 <sched_switch+758>: movzbl 0xffffffec(%ebp),%edx
0xc07209ba <sched_switch+762>: mov %dl,0x8d(%esi)
0xc07209c0 <sched_switch+768>: add $0x24,%esp
0xc07209c3 <sched_switch+771>: pop %ebx
0xc07209c4 <sched_switch+772>: pop %esi
0xc07209c5 <sched_switch+773>: pop %edi
0xc07209c6 <sched_switch+774>: pop %ebp
0xc07209c7 <sched_switch+775>: ret
End of assembler dump.

Hiroki Sato

Aug 17, 2011, 1:40:22 PM
to mi...@sentex.net, kost...@gmail.com, freebsd...@freebsd.org, a...@freebsd.org
Hi,

Mike Tancsa <mi...@sentex.net> wrote
in <4E15A08C...@sentex.net>:

mi> On 7/7/2011 7:32 AM, Mike Tancsa wrote:
mi> > On 7/7/2011 4:20 AM, Kostik Belousov wrote:
mi> >>
mi> >> BTW, we had a similar panic, "spinlock held too long", the spinlock
mi> >> is the sched lock N, on busy 8-core box recently upgraded to the
mi> >> stable/8. Unfortunately, machine hung dumping core, so the stack trace
mi> >> for the owner thread was not available.
mi> >>
mi> >> I was unable to make any conclusion from the data that was present.
mi> >> If the situation is reproducable, you coulld try to revert r221937. This
mi> >> is pure speculation, though.
mi> >
mi> > Another crash just now after 5hrs uptime. I will try and revert r221937
mi> > unless there is any extra debugging you want me to add to the kernel
mi> > instead ?

I am also suffering from a reproducible panic on an 8-STABLE box, an
NFS server with heavy I/O load. I could not get a kernel dump
because this panic locked up the machine just after it occurred, but
according to the stack trace it was the same as the posted one.
Switching to an 8.2R kernel prevents this panic.

Any progress on the investigation?

--
spin lock 0xffffffff80cb46c0 (sched lock 0) held by 0xffffff01900458c0 (tid 100489) too long


panic: spin lock held too long

cpuid = 1
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2a
kdb_backtrace() at kdb_backtrace+0x37
panic() at panic+0x187
_mtx_lock_spin_failed() at _mtx_lock_spin_failed+0x39
_mtx_lock_spin() at _mtx_lock_spin+0x9e
sched_add() at sched_add+0x117
setrunnable() at setrunnable+0x78
sleepq_signal() at sleepq_signal+0x7a
cv_signal() at cv_signal+0x3b
xprt_active() at xprt_active+0xe3
svc_vc_soupcall() at svc_vc_soupcall+0xc
sowakeup() at sowakeup+0x69
tcp_do_segment() at tcp_do_segment+0x25e7
tcp_input() at tcp_input+0xcdd
ip_input() at ip_input+0xac
netisr_dispatch_src() at netisr_dispatch_src+0x7e
ether_demux() at ether_demux+0x14d
ether_input() at ether_input+0x17d
em_rxeof() at em_rxeof+0x1ca
em_handle_que() at em_handle_que+0x5b
taskqueue_run_locked() at taskqueue_run_locked+0x85
taskqueue_thread_loop() at taskqueue_thread_loop+0x4e
fork_exit() at fork_exit+0x11f
fork_trampoline() at fork_trampoline+0xe
--

-- Hiroki

Chip Camden

Aug 17, 2011, 1:53:09 PM8/17/11
to freebsd...@freebsd.org
Quoth Hiroki Sato on Thursday, 18 August 2011:


I'm also getting similar panics on 8.2-STABLE. Locks up everything and I
have to power off. Once, I happened to be looking at the console when it
happened and copied down the following:

Sleeping thread (tif 100037, pid 0) owns a non-sleepable lock
panic: sleeping thread
cpuid=1

Another time I got:

lock order reversal:
1st 0xffffff000593e330 snaplk (snaplk) @ /usr/src/sys/kern/vfr_vnops.c:296
2nd 0xffffff0005e5d578 ufs (ufs) @ /usr/src/sys/ufs/ffs/ffs_snapshot.c:1587

I didn't copy down the traceback.

These panics seem to hit when I'm doing heavy WAN I/O. I can go for
about a day without one as long as I stay away from the web or even chat.
Last night this system copied a backup of 35GB over the local network
without failing, but as soon as I hopped onto Firefox this morning, down
she went. I don't know if that's coincidence or useful data.

I didn't get to say "Thanks" to Eitan Adler for attempting to help me
with this on Monday night. Thanks, Eitan!

--
O. | Sterling (Chip) Camden | http://camdensoftware.com
.O | ster...@camdensoftware.com | http://chipsquips.com
OOO | 2048R/D6DBAF91 | http://chipstips.com

Mike Tancsa

Aug 17, 2011, 2:27:38 PM8/17/11
to Hiroki Sato, kost...@gmail.com, freebsd...@freebsd.org, a...@freebsd.org
On 8/17/2011 1:38 PM, Hiroki Sato wrote:
> Any progress on the investigation?

Unfortunately, I cannot reproduce it yet with a debugging kernel :(


---Mike

Attilio Rao

Aug 17, 2011, 2:37:56 PM8/17/11
to Hiroki Sato, kost...@gmail.com, freebsd...@freebsd.org, a...@freebsd.org
2011/8/17 Hiroki Sato <h...@freebsd.org>:

> Hi,
>
> Mike Tancsa <mi...@sentex.net> wrote
>  in <4E15A08C...@sentex.net>:
>
> mi> On 7/7/2011 7:32 AM, Mike Tancsa wrote:
> mi> > On 7/7/2011 4:20 AM, Kostik Belousov wrote:
> mi> >>
> mi> >> BTW, we had a similar panic, "spinlock held too long", the spinlock
> mi> >> is the sched lock N, on busy 8-core box recently upgraded to the
> mi> >> stable/8. Unfortunately, machine hung dumping core, so the stack trace
> mi> >> for the owner thread was not available.
> mi> >>
> mi> >> I was unable to make any conclusion from the data that was present.
> mi> >> If the situation is reproducable, you coulld try to revert r221937. This
> mi> >> is pure speculation, though.
> mi> >
> mi> > Another crash just now after 5hrs uptime. I will try and revert r221937
> mi> > unless there is any extra debugging you want me to add to the kernel
> mi> > instead  ?
>
>  I am also suffering from a reproducible panic on an 8-STABLE box, an
>  NFS server with heavy I/O load.  I could not get a kernel dump
>  because this panic locked up the machine just after it occurred, but
>  according to the stack trace it was the same as posted one.
>  Switching to an 8.2R kernel can prevent this panic.
>
>  Any progress on the investigation?

Hiroki,
how easily can you reproduce it?

It would be important to have a DDB textdump with this information:
- bt
- ps
- show allpcpu
- alltrace
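For an unattended machine, that same information can be captured as a
textdump(4) via a ddb(8) script; a rough sketch follows (modeled on the
textdump(4) examples -- it assumes a kernel with options KDB and DDB and a
configured dump device, and the command list may need adjusting):

```shell
# Collect bt/ps/show allpcpu/alltrace automatically at panic time as a
# textdump(4). Run once after boot, e.g. from /etc/rc.local.
ddb script "kdb.enter.panic=textdump set; capture on; bt; ps; show allpcpu; alltrace; capture off; call doadump; reset"
```

After the next panic the captured output ends up in the textdump tarball
written to the dump device, recoverable with savecore(8).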

Alternatively, a coredump from a kernel with the stop-CPU patch, which Andriy can provide.

Thanks,
Attilio


--
Peace can only be achieved by understanding - A. Einstein

Hiroki Sato

Aug 17, 2011, 3:45:22 PM8/17/11
to att...@freebsd.org, kost...@gmail.com, freebsd...@freebsd.org, a...@freebsd.org
Attilio Rao <att...@freebsd.org> wrote
in <CAJ-FndCDOW0_B2MV0LZEo-tp...@mail.gmail.com>:

at> 2011/8/17 Hiroki Sato <h...@freebsd.org>:
at> > Hi,
at> >
at> > Mike Tancsa <mi...@sentex.net> wrote
at> >  in <4E15A08C...@sentex.net>:
at> >
at> > mi> On 7/7/2011 7:32 AM, Mike Tancsa wrote:
at> > mi> > On 7/7/2011 4:20 AM, Kostik Belousov wrote:
at> > mi> >>
at> > mi> >> BTW, we had a similar panic, "spinlock held too long", the spinlock
at> > mi> >> is the sched lock N, on busy 8-core box recently upgraded to the
at> > mi> >> stable/8. Unfortunately, machine hung dumping core, so the stack trace
at> > mi> >> for the owner thread was not available.
at> > mi> >>
at> > mi> >> I was unable to make any conclusion from the data that was present.
at> > mi> >> If the situation is reproducable, you coulld try to revert r221937. This
at> > mi> >> is pure speculation, though.
at> > mi> >
at> > mi> > Another crash just now after 5hrs uptime. I will try and revert r221937
at> > mi> > unless there is any extra debugging you want me to add to the kernel
at> > mi> > instead  ?
at> >
at> >  I am also suffering from a reproducible panic on an 8-STABLE box, an
at> >  NFS server with heavy I/O load.  I could not get a kernel dump
at> >  because this panic locked up the machine just after it occurred, but
at> >  according to the stack trace it was the same as posted one.
at> >  Switching to an 8.2R kernel can prevent this panic.
at> >
at> >  Any progress on the investigation?
at>
at> Hiroki,
at> how easilly can you reproduce it?

It takes 5-10 hours. I installed another kernel for debugging just
now, so I think I will be able to collect more detailed information in
a couple of days.

at> It would be important to have a DDB textdump with these informations:
at> - bt
at> - ps
at> - show allpcpu
at> - alltrace
at>
at> Alternatively, a coredump which has the stop cpu patch which Andryi can provide.

Okay, I will post them once I can get another panic. Thanks!

-- Hiroki

Jeremy Chadwick

Aug 17, 2011, 5:06:46 PM8/17/11
to freebsd...@freebsd.org

No idea, might be relevant to the thread.

> Another time I got:
>
> lock order reversal:
> 1st 0xffffff000593e330 snaplk (snaplk) @ /usr/src/sys/kern/vfr_vnops.c:296
> 2nd 0xffffff0005e5d578 ufs (ufs) @ /usr/src/sys/ufs/ffs/ffs_snapshot.c:1587
>
> I didn't copy down the traceback.

"snaplk" refers to UFS snapshots. The above must have been typed in
manually as well, judging by the typos in the filenames.

Either this is a different problem, or if everyone in this thread is
doing UFS snapshots (dump -L, mksnap_ffs, etc.) and having this problem
happen then I recommend people stop using UFS snapshots. I've ranted
about their unreliability in the past (years upon years ago -- still
seems valid) and just how badly they can "wedge" a system. This is one
of the many (MANY!) reasons why we use rsnapshot/rsync instead. The
atime clobbering issue is the only downside.

I don't see what this has to do with "heavy WAN I/O" unless you're doing
something like dump-over-ssh, in which case see the above paragraph.

> These panics seem to hit when I'm doing heavy WAN I/O. I can go for
> about a day without one as long as I stay away from the web or even chat.
> Last night this system copied a backup of 35GB over the local network
> without failing, but as soon as I hopped onto Firefox this morning, down
> she went. I don't know if that's coincidence or useful data.
>
> I didn't get to say "Thanks" to Eitan Adler for attempting to help me
> with this on Monday night. Thanks, Eitan!

--

| Jeremy Chadwick jdc at parodius.com |
| Parodius Networking http://www.parodius.com/ |
| UNIX Systems Administrator Mountain View, CA, US |
| Making life hard for others since 1977. PGP 4BD6C0CB |


Chip Camden

Aug 17, 2011, 8:02:29 PM8/17/11
to freebsd...@freebsd.org
Quoth Jeremy Chadwick on Wednesday, 17 August 2011:

> >
> > I'm also getting similar panics on 8.2-STABLE. Locks up everything and I
> > have to power off. Once, I happened to be looking at the console when it
> > happened and copied dow the following:
> >
> > Sleeping thread (tif 100037, pid 0) owns a non-sleepable lock
> > panic: sleeping thread
> > cpuid=1
>
> No idea, might be relevant to the thread.
>
> > Another time I got:
> >
> > lock order reversal:
> > 1st 0xffffff000593e330 snaplk (snaplk) @ /usr/src/sys/kern/vfr_vnops.c:296
> > 2nd 0xffffff0005e5d578 ufs (ufs) @ /usr/src/sys/ufs/ffs/ffs_snapshot.c:1587
> >
> > I didn't copy down the traceback.
>
> "snaplk" refers to UFS snapshots. The above must have been typed in
> manually as well, due to some typos in filenames as well.
>
> Either this is a different problem, or if everyone in this thread is
> doing UFS snapshots (dump -L, mksnap_ffs, etc.) and having this problem
> happen then I recommend people stop using UFS snapshots. I've ranted
> about their unreliability in the past (years upon years ago -- still
> seems valid) and just how badly they can "wedge" a system. This is one
> of the many (MANY!) reasons why we use rsnapshot/rsync instead. The
> atime clobbering issue is the only downside.
>

If I'm doing UFS snapshots, I didn't know it. Yes, everything was copied
manually because it only displays on the console and the keyboard does
not respond after that point. So I copied first to paper, then had to
decode my lousy handwriting to put it in an email. Sorry for the scribal
errors.

Hiroki Sato

Aug 17, 2011, 8:18:07 PM8/17/11
to att...@freebsd.org, kost...@gmail.com, freebsd...@freebsd.org, a...@freebsd.org
Hiroki Sato <h...@freebsd.org> wrote
in <20110818.043332.27...@allbsd.org>:

hr> Attilio Rao <att...@freebsd.org> wrote
hr> in <CAJ-FndCDOW0_B2MV0LZEo-tp...@mail.gmail.com>:
hr>
hr> at> 2011/8/17 Hiroki Sato <h...@freebsd.org>:
hr> at> > Hi,
hr> at> >
hr> at> > Mike Tancsa <mi...@sentex.net> wrote
hr> at> >  in <4E15A08C...@sentex.net>:
hr> at> >
hr> at> > mi> On 7/7/2011 7:32 AM, Mike Tancsa wrote:
hr> at> > mi> > On 7/7/2011 4:20 AM, Kostik Belousov wrote:
hr> at> > mi> >>
hr> at> > mi> >> BTW, we had a similar panic, "spinlock held too long", the spinlock
hr> at> > mi> >> is the sched lock N, on busy 8-core box recently upgraded to the
hr> at> > mi> >> stable/8. Unfortunately, machine hung dumping core, so the stack trace
hr> at> > mi> >> for the owner thread was not available.
hr> at> > mi> >>
hr> at> > mi> >> I was unable to make any conclusion from the data that was present.
hr> at> > mi> >> If the situation is reproducable, you coulld try to revert r221937. This
hr> at> > mi> >> is pure speculation, though.
hr> at> > mi> >
hr> at> > mi> > Another crash just now after 5hrs uptime. I will try and revert r221937
hr> at> > mi> > unless there is any extra debugging you want me to add to the kernel
hr> at> > mi> > instead  ?
hr> at> >
hr> at> >  I am also suffering from a reproducible panic on an 8-STABLE box, an
hr> at> >  NFS server with heavy I/O load.  I could not get a kernel dump
hr> at> >  because this panic locked up the machine just after it occurred, but
hr> at> >  according to the stack trace it was the same as posted one.
hr> at> >  Switching to an 8.2R kernel can prevent this panic.
hr> at> >
hr> at> >  Any progress on the investigation?
hr> at>
hr> at> Hiroki,
hr> at> how easilly can you reproduce it?
hr>
hr> It takes 5-10 hours. I installed another kernel for debugging just
hr> now, so I think I will be able to collect more detail information in
hr> a couple of days.
hr>
hr> at> It would be important to have a DDB textdump with these informations:
hr> at> - bt
hr> at> - ps
hr> at> - show allpcpu
hr> at> - alltrace
hr> at>
hr> at> Alternatively, a coredump which has the stop cpu patch which Andryi can provide.
hr>
hr> Okay, I will post them once I can get another panic. Thanks!

I got the panic with a crash dump this time. The results of bt, ps,
allpcpu, and the traces can be found at the following URL:

http://people.allbsd.org/~hrs/FreeBSD/pool-panic_20110818-1.txt

-- Hiroki

Attilio Rao

Aug 17, 2011, 8:37:09 PM8/17/11
to Hiroki Sato, kost...@gmail.com, freebsd...@freebsd.org, a...@freebsd.org
2011/8/18 Hiroki Sato <h...@freebsd.org>:

I'm not sure I understand it; is a corefile also available?
If so, where could I get it (with the relevant sources and kernel.debug)?

Thanks,
Attilio


--
Peace can only be achieved by understanding - A. Einstein

Attilio Rao

Aug 17, 2011, 9:05:44 PM8/17/11
to Hiroki Sato, freebsd...@freebsd.org, ster...@camdensoftware.com, a...@freebsd.org, Nick Esborn, kost...@gmail.com, mdta...@freebsd.org
2011/8/18 Hiroki Sato <h...@freebsd.org>:

Actually, I think I see the bug here.

In callout_cpu_switch(), if a low-priority thread migrating the
callout is preempted after the outgoing CPU's queue lock is dropped
(and is rescheduled much later), we get this problem.

A critical section could be enough to fix this bug, but I think this
should be fully interrupt-safe, so I'd wrap the window with
spinlock_enter()/spinlock_exit(). Fortunately,
callout_cpu_switch() should be called rarely, and we already do
expensive locking operations in the callout code, so we should not
have problems performance-wise.
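The idea is to keep the migrating thread in a no-preemption window while
it holds neither CPU's callout queue lock. A sketch of what the patched
callout_cpu_switch() would look like (reconstructed from the description
and the diff; the surrounding declarations follow stable/8's
sys/kern/kern_timeout.c and may differ slightly from the committed code):

```c
static struct callout_cpu *
callout_cpu_switch(struct callout *c, struct callout_cpu *cc, int new_cpu)
{
	struct callout_cpu *new_cc;

	MPASS(c != NULL && cc != NULL);
	CC_LOCK_ASSERT(cc);

	c->c_cpu = CPUBLOCK;
	/*
	 * Disable interrupts and preemption so the migrating thread
	 * cannot be descheduled between dropping the old CPU's queue
	 * lock and acquiring the new one.
	 */
	spinlock_enter();
	CC_UNLOCK(cc);
	new_cc = CC_CPU(new_cpu);
	CC_LOCK(new_cc);
	spinlock_exit();
	c->c_cpu = new_cpu;
	return (new_cc);
}
```

spinlock_enter() both disables interrupts and pins the thread on its CPU,
which is why it closes the window that a plain critical section might not.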

Can the guys I also CC'ed here try the following patch, with all the
initial kernel options that were leading you to the deadlock? (thus
revert any debugging patch/option you added for the moment):
http://www.freebsd.org/~attilio/callout-fixup.diff

Please note that this patch is for STABLE_8; if you can confirm the
good result I'll commit it to -CURRENT and then backmerge as soon as
possible.

Thanks,
Attilio


--
Peace can only be achieved by understanding - A. Einstein

Jeremy Chadwick

Aug 17, 2011, 9:31:38 PM8/17/11
to freebsd...@freebsd.org
On Wed, Aug 17, 2011 at 05:01:05PM -0700, Chip Camden wrote:
> Quoth Jeremy Chadwick on Wednesday, 17 August 2011:
> > >
> > > I'm also getting similar panics on 8.2-STABLE. Locks up everything and I
> > > have to power off. Once, I happened to be looking at the console when it
> > > happened and copied dow the following:
> > >
> > > Sleeping thread (tif 100037, pid 0) owns a non-sleepable lock
> > > panic: sleeping thread
> > > cpuid=1
> >
> > No idea, might be relevant to the thread.
> >
> > > Another time I got:
> > >
> > > lock order reversal:
> > > 1st 0xffffff000593e330 snaplk (snaplk) @ /usr/src/sys/kern/vfr_vnops.c:296
> > > 2nd 0xffffff0005e5d578 ufs (ufs) @ /usr/src/sys/ufs/ffs/ffs_snapshot.c:1587
> > >
> > > I didn't copy down the traceback.
> >
> > "snaplk" refers to UFS snapshots. The above must have been typed in
> > manually as well, due to some typos in filenames as well.
> >
> > Either this is a different problem, or if everyone in this thread is
> > doing UFS snapshots (dump -L, mksnap_ffs, etc.) and having this problem
> > happen then I recommend people stop using UFS snapshots. I've ranted
> > about their unreliability in the past (years upon years ago -- still
> > seems valid) and just how badly they can "wedge" a system. This is one
> > of the many (MANY!) reasons why we use rsnapshot/rsync instead. The
> > atime clobbering issue is the only downside.
> >
>
> If I'm doing UFS snapshots, I didn't know it.

The backtrace indicates that a UFS snapshot is being made -- which
causes the state to be set to string "snaplk", which is then honoured in
vfs_vnops.c.

You can see for yourself: grep -r snaplk /usr/src/sys.

So yes, I'm inclined to believe something on your system is doing UFS
snapshot generation. Whether or not other people are doing it as well
is a different story.
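One rough way to check whether anything on the box actually creates UFS
snapshots (a sketch -- the .snap directory is the conventional location at
each UFS mount point, and the mount points and periodic paths below are
assumptions for a typical install):

```shell
# Snapshot files conventionally live under .snap at each UFS mount point.
ls -l /.snap /usr/.snap /var/.snap 2>/dev/null
# Look for jobs that take snapshots (dump -L, mksnap_ffs) in cron/periodic.
grep -rE 'dump .*-L|mksnap_ffs' /etc/crontab /etc/periodic 2>/dev/null
```

No output from either command suggests snapshots are not in use.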

> Yes, everything was copied manually because it only displays on the
> console and the keyboard does not respond after that point. So I
> copied first to paper, then had to decode my lousy handwriting to put
> it in an email. Sorry for the scribal errors.

That sounds more or less like what I saw with UFS snapshots: the system
would go catatonic in one way or another. It wouldn't "hard lock" (as
in if you had powered it off, etc.), it would "live lock" (as in the
kernel was wedged or held up/spinning doing something).

I never saw a panic as a result of UFS snapshots, only what I described
here.

TL;DR -- Your system appears to be making UFS snapshots, and that
situation is possibly (likely?) unrelated to the sleeping thread issue
you see that causes a panic.

Chip Camden

Aug 17, 2011, 10:57:39 PM8/17/11
to freebsd...@freebsd.org
Quoth Attilio Rao on Thursday, 18 August 2011:

Thanks, Attilio. I've applied the patch and removed the extra debug
options I had added (though keeping debug symbols). I'll let you know if
I experience any more panics.

Regards,

Hiroki Sato

Aug 18, 2011, 8:29:47 PM8/18/11
to att...@freebsd.org, freebsd...@freebsd.org, ster...@camdensoftware.com, a...@freebsd.org, Nick Esborn, kost...@gmail.com, mdta...@freebsd.org
Chip Camden <ster...@camdensoftware.com> wrote
in <2011081802...@libertas.local.camdensoftware.com>:

st> Quoth Attilio Rao on Thursday, 18 August 2011:
st> > In callout_cpu_switch() if a low priority thread is migrating the
st> > callout and gets preempted after the outcoming cpu queue lock is left
st> > (and scheduled much later) we get this problem.
st> >
st> > In order to fix this bug it could be enough to use a critical section,
st> > but I think this should be really interrupt safe, thus I'd wrap them
st> > up with spinlock_enter()/spinlock_exit(). Fortunately
st> > callout_cpu_switch() should be called rarely and also we already do
st> > expensive locking operations in callout, thus we should not have
st> > problem performance-wise.
st> >
st> > Can the guys I also CC'ed here try the following patch, with all the
st> > initial kernel options that were leading you to the deadlock? (thus
st> > revert any debugging patch/option you added for the moment):
st> > http://www.freebsd.org/~attilio/callout-fixup.diff
st> >
st> > Please note that this patch is for STABLE_8, if you can confirm the
st> > good result I'll commit to -CURRENT and then backmarge as soon as
st> > possible.
st> >
st> > Thanks,
st> > Attilio
st> >
st>
st> Thanks, Attilio. I've applied the patch and removed the extra debug
st> options I had added (though keeping debug symbols). I'll let you know if
st> I experience any more panics.

No panic for 20 hours at this moment, FYI. For my NFS server, I
think another 24 hours would be sufficient to confirm the stability.
I will see how it works...

-- Hiroki

Chip Camden

Aug 18, 2011, 8:39:02 PM8/18/11
to Hiroki Sato, freebsd...@freebsd.org, a...@freebsd.org, att...@freebsd.org, Nick Esborn, kost...@gmail.com, mdta...@freebsd.org
Quoth Hiroki Sato on Friday, 19 August 2011:

Likewise:

$ uptime
5:37PM up 21:45, 5 users, load averages: 0.68, 0.45, 0.63

So far, so good (knocks on head).

Mike Tancsa

Aug 19, 2011, 8:56:49 AM8/19/11
to Hiroki Sato, att...@freebsd.org, kost...@gmail.com, freebsd...@freebsd.org, a...@freebsd.org, Nick Esborn
On 8/18/2011 8:37 PM, Chip Camden wrote:

>> st> Thanks, Attilio. I've applied the patch and removed the extra debug
>> st> options I had added (though keeping debug symbols). I'll let you know if
>> st> I experience any more panics.
>>
>> No panic for 20 hours at this moment, FYI. For my NFS server, I
>> think another 24 hours would be sufficient to confirm the stability.
>> I will see how it works...
>>
>> -- Hiroki
>
> Likewise:
>
> $ uptime
> 5:37PM up 21:45, 5 users, load averages: 0.68, 0.45, 0.63
>
> So far, so good (knocks on head).
>


0(ns4)% uptime
8:55AM up 22:39, 3 users, load averages: 0.01, 0.00, 0.00
0(ns4)%


So far so good for me too

---Mike

--
-------------------
Mike Tancsa, tel +1 519 651 3400
Sentex Communications, mi...@sentex.net
Providing Internet services since 1994 www.sentex.net
Cambridge, Ontario Canada http://www.tancsa.com/

Chip Camden

Aug 19, 2011, 11:07:44 AM8/19/11
to freebsd...@freebsd.org
Quoth Mike Tancsa on Friday, 19 August 2011:

Still up and running here.

8:02AM up 1 day, 12:10, 4 users, load averages: 0.08, 0.26, 0.52

After the panics began, I never went more than 12 hours without one before
applying this patch. I think you nailed it, Attilio. Or at least, you
moved it.

Attilio Rao

Aug 19, 2011, 7:55:15 PM8/19/11
to Mike Tancsa, kost...@gmail.com, Nick Esborn, freebsd...@freebsd.org, a...@freebsd.org
If nobody complains about it earlier, I'll propose the patch to re@ in 8 hours.

Attilio

2011/8/19 Mike Tancsa <mi...@sentex.net>:


> On 8/18/2011 8:37 PM, Chip Camden wrote:
>
>>> st> Thanks, Attilio.  I've applied the patch and removed the extra debug
>>> st> options I had added (though keeping debug symbols).  I'll let you know if
>>> st> I experience any more panics.
>>>
>>>  No panic for 20 hours at this moment, FYI.  For my NFS server, I
>>>  think another 24 hours would be sufficient to confirm the stability.
>>>  I will see how it works...
>>>
>>> -- Hiroki
>>
>> Likewise:
>>
>> $ uptime
>>  5:37PM  up 21:45, 5 users, load averages: 0.68, 0.45, 0.63
>>
>> So far, so good (knocks on head).
>>
>
>
> 0(ns4)% uptime
>  8:55AM  up 22:39, 3 users, load averages: 0.01, 0.00, 0.00
> 0(ns4)%
>
>
> So far so good for me too
>
>        ---Mike
>
> --
> -------------------
> Mike Tancsa, tel +1 519 651 3400
> Sentex Communications, mi...@sentex.net
> Providing Internet services since 1994 www.sentex.net
> Cambridge, Ontario Canada   http://www.tancsa.com/
>

--

Peace can only be achieved by understanding - A. Einstein

Hiroki Sato

Aug 19, 2011, 9:54:11 PM8/19/11
to att...@freebsd.org, kost...@gmail.com, ni...@desert.net, freebsd...@freebsd.org, a...@freebsd.org
Attilio Rao <att...@freebsd.org> wrote
in <CAJ-FndDHmwa+=LNGgU+5MK2Xmtj8kWH...@mail.gmail.com>:

at> If nobody complains about it earlier, I'll propose the patch to re@ in 8 hours.

Running fine for 45 hours so far. Please go ahead!

-- Hiroki

Trent Nelson

Sep 1, 2011, 4:13:56 AM9/1/11
to Attilio Rao, freebsd...@freebsd.org

On Aug 19, 2011, at 7:53 PM, Attilio Rao wrote:

> If nobody complains about it earlier, I'll propose the patch to re@ in 8 hours.

Just a friendly 'me too', for the record. 22 hours of heavy network/disk I/O and no panic yet -- prior to the patch it was a panic orgy.

Any response from re@ on the patch? It didn't appear to be in stable/8 as of yesterday:

[root@flanker/ttypts/0(../src/sys/kern)#] svn diff
Index: kern_timeout.c
===================================================================
--- kern_timeout.c (revision 225280)
+++ kern_timeout.c (working copy)
@@ -268,9 +268,11 @@
CC_LOCK_ASSERT(cc);

c->c_cpu = CPUBLOCK;
+ spinlock_enter();
CC_UNLOCK(cc);
new_cc = CC_CPU(new_cpu);
CC_LOCK(new_cc);
+ spinlock_exit();
c->c_cpu = new_cpu;
return (new_cc);
}


Regards,

Trent.

Attilio Rao

Sep 1, 2011, 6:11:08 AM9/1/11
to Trent Nelson, freebsd...@freebsd.org
2011/9/1 Trent Nelson <tr...@snakebite.org>:

>
> On Aug 19, 2011, at 7:53 PM, Attilio Rao wrote:
>
>> If nobody complains about it earlier, I'll propose the patch to re@ in 8 hours.
>
> Just a friendly 'me too', for the records.  22 hours of heavy network/disk I/O and no panic yet -- prior to the patch it was a panic orgy.
>
> Any response from re@ on the patch?  It didn't appear to be in stable/8 as of yesterday:

It has been committed to STABLE_8 as r225288.

Thanks,
Attilio


--
Peace can only be achieved by understanding - A. Einstein

Attilio Rao

Sep 3, 2011, 6:06:52 AM9/3/11
to Hiroki Sato, freebsd...@freebsd.org, ster...@camdensoftware.com, ni...@desert.net, a...@freebsd.org
This should be enough for someone NFS-aware to look into it.

Were you also able to get a core?

I'll try to look into it in the next few days, in particular the
softclock state.

Attilio

2011/9/3 Hiroki Sato <h...@freebsd.org>:
> Hiroki Sato <h...@freebsd.org> wrote
>  in <20110820.105229.834...@allbsd.org>:


>
> hr> Attilio Rao <att...@freebsd.org> wrote

> hr>   in <CAJ-FndDHmwa+=LNGgU+5MK2Xmtj8kWH...@mail.gmail.com>:
> hr>
> hr> at> If nobody complains about it earlier, I'll propose the patch to re@ in 8 hours.
> hr>
> hr>  Running fine for 45 hours so far.  Please go ahead!
>
>  The NFS server was working fine with no panic for a week, but after
>  that I noticed it sometimes got stuck.  When it occurred, all of
>  processes seemed to stop working though I was able to break it into
>  ddb.  I am still not sure of what triggered it, but this symptom is
>  reproducible within three days now.  Does anyone suffer from this?
>
>  The attached file is a result of show allpcpu, show threads, ps, info
>  thread, and bt for all threads.  I guess all of CPUs became idle due
>  to some deadlock, but how do I debug this?
>
> -- Hiroki
>
> KDB: enter: Break sequence on console
> [thread pid 11 tid 100003 ]
> Stopped at      kdb_enter+0x3b: movq    $0,0x6a4102(%rip)
> db> show allpcpu
> Current CPU: 1
>
> cpuid        = 0
> dynamic pcpu = 0x4b3380
> curthread    = 0xffffff00033fe000: pid 11 "idle: cpu0"
> curpcb       = 0xffffff8000043d10
> fpcurthread  = none
> idlethread   = 0xffffff00033fe000: tid 100004 "idle: cpu0"
> curpmap      = 0xffffffff80d00250
> tssp         = 0xffffffff80d6d200
> commontssp   = 0xffffffff80d6d200
> rsp0         = 0xffffff8000043d10
> gs32p        = 0xffffffff80d6c038
> ldt          = 0xffffffff80d6c078
> tss          = 0xffffffff80d6c068
>
> cpuid        = 1
> dynamic pcpu = 0xffffff807f36b380
> curthread    = 0xffffff00033fe460: pid 11 "idle: cpu1"
> curpcb       = 0xffffff800003ed10
> fpcurthread  = none
> idlethread   = 0xffffff00033fe460: tid 100003 "idle: cpu1"
> curpmap      = 0xffffffff80d00250
> tssp         = 0xffffffff80d6d268
> commontssp   = 0xffffffff80d6d268
> rsp0         = 0xffffff800003ed10
> gs32p        = 0xffffffff80d6c0a0
> ldt          = 0xffffffff80d6c0e0
> tss          = 0xffffffff80d6c0d0
>
> db> show threads
>  100530 (0xffffff000662f8c0)  sched_switch() at sched_switch+0x102
>  100538 (0xffffff004caa28c0)  sched_switch() at sched_switch+0x102
>  100524 (0xffffff004c022460)  sched_switch() at sched_switch+0x102
>  100536 (0xffffff00066898c0)  sched_switch() at sched_switch+0x102
>  100527 (0xffffff004c115460)  sched_switch() at sched_switch+0x102
>  100238 (0xffffff00066848c0)  sched_switch() at sched_switch+0x102
>  100526 (0xffffff004c1158c0)  sched_switch() at sched_switch+0x102
>  100236 (0xffffff0006685460)  sched_switch() at sched_switch+0x102
>  100087 (0xffffff0006284000)  sched_switch() at sched_switch+0x102
>  100242 (0xffffff0006683460)  sched_switch() at sched_switch+0x102
>  100516 (0xffffff002eba2000)  sched_switch() at sched_switch+0x102
>  100515 (0xffffff002eba2460)  sched_switch() at sched_switch+0x102
>  100514 (0xffffff002eba28c0)  sched_switch() at sched_switch+0x102
>  100513 (0xffffff002eba3000)  sched_switch() at sched_switch+0x102
>  100512 (0xffffff002eba3460)  sched_switch() at sched_switch+0x102
>  100511 (0xffffff002eba38c0)  sched_switch() at sched_switch+0x102
>  100510 (0xffffff002eba4000)  sched_switch() at sched_switch+0x102
>  100509 (0xffffff002eba4460)  sched_switch() at sched_switch+0x102
>  100508 (0xffffff002eba48c0)  sched_switch() at sched_switch+0x102
>  100507 (0xffffff002eba5000)  sched_switch() at sched_switch+0x102
>  100506 (0xffffff002eba5460)  sched_switch() at sched_switch+0x102
>  100505 (0xffffff002eb96000)  sched_switch() at sched_switch+0x102
>  100504 (0xffffff002eb96460)  sched_switch() at sched_switch+0x102
>  100503 (0xffffff002eb968c0)  sched_switch() at sched_switch+0x102
>  100502 (0xffffff002eb97000)  sched_switch() at sched_switch+0x102
>  100501 (0xffffff002eb97460)  sched_switch() at sched_switch+0x102
>  100500 (0xffffff002eb978c0)  sched_switch() at sched_switch+0x102
>  100499 (0xffffff002eb99000)  sched_switch() at sched_switch+0x102
>  100498 (0xffffff002eb99460)  sched_switch() at sched_switch+0x102
>  100497 (0xffffff002eb998c0)  sched_switch() at sched_switch+0x102
>  100496 (0xffffff002eb9a000)  sched_switch() at sched_switch+0x102
>  100495 (0xffffff002eb9a460)  sched_switch() at sched_switch+0x102
>  100494 (0xffffff002eb9a8c0)  sched_switch() at sched_switch+0x102
>  100493 (0xffffff002eb9b000)  sched_switch() at sched_switch+0x102
>  100492 (0xffffff002eb9b460)  sched_switch() at sched_switch+0x102
>  100491 (0xffffff002eb9b8c0)  sched_switch() at sched_switch+0x102
>  100490 (0xffffff002eb9d000)  sched_switch() at sched_switch+0x102
>  100489 (0xffffff002eb8f8c0)  sched_switch() at sched_switch+0x102
>  100488 (0xffffff002eb90000)  sched_switch() at sched_switch+0x102
>  100487 (0xffffff002eb90460)  sched_switch() at sched_switch+0x102
>  100486 (0xffffff002eb908c0)  sched_switch() at sched_switch+0x102
>  100485 (0xffffff002eb91000)  sched_switch() at sched_switch+0x102
>  100484 (0xffffff002eb91460)  sched_switch() at sched_switch+0x102
>  100483 (0xffffff002eb918c0)  sched_switch() at sched_switch+0x102
>  100482 (0xffffff002eb92000)  sched_switch() at sched_switch+0x102
>  100481 (0xffffff002eb92460)  sched_switch() at sched_switch+0x102
>  100480 (0xffffff002eb928c0)  sched_switch() at sched_switch+0x102
>  100479 (0xffffff002eb93000)  sched_switch() at sched_switch+0x102
>  100478 (0xffffff002eb93460)  sched_switch() at sched_switch+0x102
>  100477 (0xffffff002eb938c0)  sched_switch() at sched_switch+0x102
>  100476 (0xffffff002eb94000)  sched_switch() at sched_switch+0x102
>  100475 (0xffffff002eb94460)  sched_switch() at sched_switch+0x102
>  100474 (0xffffff002eb948c0)  sched_switch() at sched_switch+0x102
>  100473 (0xffffff002eb84460)  sched_switch() at sched_switch+0x102
>  100472 (0xffffff002eb848c0)  sched_switch() at sched_switch+0x102
>  100471 (0xffffff002eb88000)  sched_switch() at sched_switch+0x102
>  100470 (0xffffff002eb88460)  sched_switch() at sched_switch+0x102
>  100469 (0xffffff002eb888c0)  sched_switch() at sched_switch+0x102
>  100468 (0xffffff002eb89000)  sched_switch() at sched_switch+0x102
>  100467 (0xffffff002eb89460)  sched_switch() at sched_switch+0x102
>  100466 (0xffffff002eb898c0)  sched_switch() at sched_switch+0x102
>  100465 (0xffffff002eb8b000)  sched_switch() at sched_switch+0x102
>  100464 (0xffffff002eb8b460)  sched_switch() at sched_switch+0x102
>  100463 (0xffffff002eb8b8c0)  sched_switch() at sched_switch+0x102
>  100462 (0xffffff002eb8c000)  sched_switch() at sched_switch+0x102
>  100461 (0xffffff002eb8c460)  sched_switch() at sched_switch+0x102
>  100460 (0xffffff002eb8c8c0)  sched_switch() at sched_switch+0x102
>  100459 (0xffffff002eb8f000)  sched_switch() at sched_switch+0x102
>  100458 (0xffffff002eb8f460)  sched_switch() at sched_switch+0x102
>  100457 (0xffffff002eb7f000)  sched_switch() at sched_switch+0x102
>  100456 (0xffffff002eb7f460)  sched_switch() at sched_switch+0x102
>  100455 (0xffffff002eb7f8c0)  sched_switch() at sched_switch+0x102
>  100454 (0xffffff002eb80000)  sched_switch() at sched_switch+0x102
>  100453 (0xffffff002eb80460)  sched_switch() at sched_switch+0x102
>  100452 (0xffffff002eb808c0)  sched_switch() at sched_switch+0x102
>  100451 (0xffffff002eb81000)  sched_switch() at sched_switch+0x102
>  100450 (0xffffff002eb81460)  sched_switch() at sched_switch+0x102
>  100449 (0xffffff002eb818c0)  sched_switch() at sched_switch+0x102
>  100448 (0xffffff002eb82000)  sched_switch() at sched_switch+0x102
>  100447 (0xffffff002eb82460)  sched_switch() at sched_switch+0x102
>  100446 (0xffffff002eb828c0)  sched_switch() at sched_switch+0x102
>  100445 (0xffffff002eb83000)  sched_switch() at sched_switch+0x102
>  100444 (0xffffff002eb83460)  sched_switch() at sched_switch+0x102
>  100443 (0xffffff002eb838c0)  sched_switch() at sched_switch+0x102
>  100442 (0xffffff002eb84000)  sched_switch() at sched_switch+0x102
>  100441 (0xffffff002eb768c0)  sched_switch() at sched_switch+0x102
>  100440 (0xffffff002eb77000)  sched_switch() at sched_switch+0x102
>  100439 (0xffffff002eb77460)  sched_switch() at sched_switch+0x102
>  100438 (0xffffff002eb778c0)  sched_switch() at sched_switch+0x102
>  100437 (0xffffff002eb78000)  sched_switch() at sched_switch+0x102
>  100436 (0xffffff002eb78460)  sched_switch() at sched_switch+0x102
>  100435 (0xffffff002eb788c0)  sched_switch() at sched_switch+0x102
>  100434 (0xffffff002eb7a000)  sched_switch() at sched_switch+0x102
>  100433 (0xffffff002eb7a460)  sched_switch() at sched_switch+0x102
>  100432 (0xffffff002eb7a8c0)  sched_switch() at sched_switch+0x102
>  100431 (0xffffff002eb7b000)  sched_switch() at sched_switch+0x102
>  100430 (0xffffff002eb7b460)  sched_switch() at sched_switch+0x102
>  100429 (0xffffff002eb7b8c0)  sched_switch() at sched_switch+0x102
>  100428 (0xffffff002eb7c000)  sched_switch() at sched_switch+0x102
>  100427 (0xffffff002eb7c460)  sched_switch() at sched_switch+0x102
>  100426 (0xffffff002eb7c8c0)  sched_switch() at sched_switch+0x102
>  100425 (0xffffff002eb71460)  sched_switch() at sched_switch+0x102
>  100424 (0xffffff002eb718c0)  sched_switch() at sched_switch+0x102
>  100423 (0xffffff002eb72000)  sched_switch() at sched_switch+0x102
>  100422 (0xffffff002eb72460)  sched_switch() at sched_switch+0x102
>  100421 (0xffffff002eb728c0)  sched_switch() at sched_switch+0x102
>  100420 (0xffffff002eb73000)  sched_switch() at sched_switch+0x102
>  100419 (0xffffff002eb73460)  sched_switch() at sched_switch+0x102
>  100418 (0xffffff002eb738c0)  sched_switch() at sched_switch+0x102
>  100417 (0xffffff002eb74000)  sched_switch() at sched_switch+0x102
>  100416 (0xffffff002eb74460)  sched_switch() at sched_switch+0x102
>  100415 (0xffffff002eb748c0)  sched_switch() at sched_switch+0x102
>  100414 (0xffffff002eb75000)  sched_switch() at sched_switch+0x102
>  100413 (0xffffff002eb75460)  sched_switch() at sched_switch+0x102
>  100412 (0xffffff002eb758c0)  sched_switch() at sched_switch+0x102
>  100411 (0xffffff002eb76000)  sched_switch() at sched_switch+0x102
>  100410 (0xffffff002eb76460)  sched_switch() at sched_switch+0x102
>  100409 (0xffffff002eb6a000)  sched_switch() at sched_switch+0x102
>  100408 (0xffffff002eb6a460)  sched_switch() at sched_switch+0x102
>  100407 (0xffffff002eb6a8c0)  sched_switch() at sched_switch+0x102
>  100406 (0xffffff002eb6b000)  sched_switch() at sched_switch+0x102
>  100405 (0xffffff002eb6b460)  sched_switch() at sched_switch+0x102
>  100404 (0xffffff002eb6b8c0)  sched_switch() at sched_switch+0x102
>  100403 (0xffffff002eb6d000)  sched_switch() at sched_switch+0x102
>  100402 (0xffffff002eb6d460)  sched_switch() at sched_switch+0x102
>  100401 (0xffffff002eb6d8c0)  sched_switch() at sched_switch+0x102
>  100400 (0xffffff002eb6e000)  sched_switch() at sched_switch+0x102
>  100399 (0xffffff002eb6e460)  sched_switch() at sched_switch+0x102
>  100398 (0xffffff002eb6e8c0)  sched_switch() at sched_switch+0x102
>  100397 (0xffffff002eb6f000)  sched_switch() at sched_switch+0x102
>  100396 (0xffffff002eb6f460)  sched_switch() at sched_switch+0x102
>  100395 (0xffffff002eb6f8c0)  sched_switch() at sched_switch+0x102
>  100394 (0xffffff002eb71000)  sched_switch() at sched_switch+0x102
>  100393 (0xffffff002eb608c0)  sched_switch() at sched_switch+0x102
>  100392 (0xffffff002eb62000)  sched_switch() at sched_switch+0x102
>  100391 (0xffffff002eb62460)  sched_switch() at sched_switch+0x102
>  100390 (0xffffff002eb628c0)  sched_switch() at sched_switch+0x102
>  100389 (0xffffff002eb63000)  sched_switch() at sched_switch+0x102
>  100388 (0xffffff002eb63460)  sched_switch() at sched_switch+0x102
>  100387 (0xffffff002eb638c0)  sched_switch() at sched_switch+0x102
>  100386 (0xffffff002eb64000)  sched_switch() at sched_switch+0x102
>  100385 (0xffffff002eb64460)  sched_switch() at sched_switch+0x102
>  100384 (0xffffff002eb648c0)  sched_switch() at sched_switch+0x102
>  100383 (0xffffff002eb65000)  sched_switch() at sched_switch+0x102
>  100382 (0xffffff002eb65460)  sched_switch() at sched_switch+0x102
>  100381 (0xffffff002eb658c0)  sched_switch() at sched_switch+0x102
>  100380 (0xffffff002eb66000)  sched_switch() at sched_switch+0x102
>  100379 (0xffffff002eb66460)  sched_switch() at sched_switch+0x102
>  100378 (0xffffff002eb668c0)  sched_switch() at sched_switch+0x102
>  100377 (0xffffff002eb58460)  sched_switch() at sched_switch+0x102
>  100376 (0xffffff002eb588c0)  sched_switch() at sched_switch+0x102
>  100375 (0xffffff002eb59000)  sched_switch() at sched_switch+0x102
>  100374 (0xffffff002eb59460)  sched_switch() at sched_switch+0x102
>  100373 (0xffffff002eb598c0)  sched_switch() at sched_switch+0x102
>  100372 (0xffffff002eb5a000)  sched_switch() at sched_switch+0x102
>  100371 (0xffffff002eb5a460)  sched_switch() at sched_switch+0x102
>  100370 (0xffffff002eb5a8c0)  sched_switch() at sched_switch+0x102
>  100369 (0xffffff002eb5c000)  sched_switch() at sched_switch+0x102
>  100368 (0xffffff002eb5c460)  sched_switch() at sched_switch+0x102
>  100367 (0xffffff002eb5c8c0)  sched_switch() at sched_switch+0x102
>  100366 (0xffffff002eb5d000)  sched_switch() at sched_switch+0x102
>  100365 (0xffffff002eb5d460)  sched_switch() at sched_switch+0x102
>  100364 (0xffffff002eb5d8c0)  sched_switch() at sched_switch+0x102
>  100363 (0xffffff002eb60000)  sched_switch() at sched_switch+0x102
>  100362 (0xffffff002eb60460)  sched_switch() at sched_switch+0x102
>  100361 (0xffffff002eb53000)  sched_switch() at sched_switch+0x102
>  100360 (0xffffff002eb53460)  sched_switch() at sched_switch+0x102
>  100359 (0xffffff002eb538c0)  sched_switch() at sched_switch+0x102
>  100358 (0xffffff002eb54000)  sched_switch() at sched_switch+0x102
>  100357 (0xffffff002eb54460)  sched_switch() at sched_switch+0x102
>  100356 (0xffffff002eb548c0)  sched_switch() at sched_switch+0x102
>  100355 (0xffffff002eb55000)  sched_switch() at sched_switch+0x102
>  100354 (0xffffff002eb55460)  sched_switch() at sched_switch+0x102
>  100353 (0xffffff002eb558c0)  sched_switch() at sched_switch+0x102
>  100352 (0xffffff002eb56000)  sched_switch() at sched_switch+0x102
>  100351 (0xffffff002eb56460)  sched_switch() at sched_switch+0x102
>  100350 (0xffffff002eb568c0)  sched_switch() at sched_switch+0x102
>  100349 (0xffffff002eb57000)  sched_switch() at sched_switch+0x102
>  100348 (0xffffff002eb57460)  sched_switch() at sched_switch+0x102
>  100347 (0xffffff002eb578c0)  sched_switch() at sched_switch+0x102
>  100346 (0xffffff002eb58000)  sched_switch() at sched_switch+0x102
>  100345 (0xffffff002eb498c0)  sched_switch() at sched_switch+0x102
>  100344 (0xffffff002eb4a000)  sched_switch() at sched_switch+0x102
>  100343 (0xffffff002eb4a460)  sched_switch() at sched_switch+0x102
>  100342 (0xffffff002eb4a8c0)  sched_switch() at sched_switch+0x102
>  100341 (0xffffff002eb4b000)  sched_switch() at sched_switch+0x102
>  100340 (0xffffff002eb4b460)  sched_switch() at sched_switch+0x102
>  100339 (0xffffff002eb4b8c0)  sched_switch() at sched_switch+0x102
>  100338 (0xffffff002eb4d000)  sched_switch() at sched_switch+0x102
>  100337 (0xffffff002eb4d460)  sched_switch() at sched_switch+0x102
>  100336 (0xffffff002eb4d8c0)  sched_switch() at sched_switch+0x102
>  100335 (0xffffff002eb4e000)  sched_switch() at sched_switch+0x102
>  100334 (0xffffff002eb4e460)  sched_switch() at sched_switch+0x102
>  100333 (0xffffff002eb4e8c0)  sched_switch() at sched_switch+0x102
>  100332 (0xffffff002eb4f000)  sched_switch() at sched_switch+0x102
>  100331 (0xffffff002eb4f460)  sched_switch() at sched_switch+0x102
>  100330 (0xffffff002eb4f8c0)  sched_switch() at sched_switch+0x102
>  100329 (0xffffff002eb41460)  sched_switch() at sched_switch+0x102
>  100328 (0xffffff002eb418c0)  sched_switch() at sched_switch+0x102
>  100327 (0xffffff002eb45000)  sched_switch() at sched_switch+0x102
>  100326 (0xffffff002eb45460)  sched_switch() at sched_switch+0x102
>  100325 (0xffffff002eb458c0)  sched_switch() at sched_switch+0x102
>  100324 (0xffffff002eb46000)  sched_switch() at sched_switch+0x102
>  100323 (0xffffff002eb46460)  sched_switch() at sched_switch+0x102
>  100322 (0xffffff002eb468c0)  sched_switch() at sched_switch+0x102
>  100321 (0xffffff002eb47000)  sched_switch() at sched_switch+0x102
>  100320 (0xffffff002eb47460)  sched_switch() at sched_switch+0x102
>  100319 (0xffffff002eb478c0)  sched_switch() at sched_switch+0x102
>  100318 (0xffffff002eb48000)  sched_switch() at sched_switch+0x102
>  100317 (0xffffff002eb48460)  sched_switch() at sched_switch+0x102
>  100316 (0xffffff002eb488c0)  sched_switch() at sched_switch+0x102
>  100315 (0xffffff002eb49000)  sched_switch() at sched_switch+0x102
>  100314 (0xffffff002eb49460)  sched_switch() at sched_switch+0x102
>  100313 (0xffffff002eb38000)  sched_switch() at sched_switch+0x102
>  100312 (0xffffff002eb38460)  sched_switch() at sched_switch+0x102
>  100311 (0xffffff002eb388c0)  sched_switch() at sched_switch+0x102
>  100310 (0xffffff002eb39000)  sched_switch() at sched_switch+0x102
>  100309 (0xffffff002eb39460)  sched_switch() at sched_switch+0x102
>  100308 (0xffffff002eb398c0)  sched_switch() at sched_switch+0x102
>  100307 (0xffffff002eb3c000)  sched_switch() at sched_switch+0x102
>  100306 (0xffffff002eb3c460)  sched_switch() at sched_switch+0x102
>  100305 (0xffffff002eb3c8c0)  sched_switch() at sched_switch+0x102
>  100304 (0xffffff002eb3d000)  sched_switch() at sched_switch+0x102
>  100303 (0xffffff002eb3d460)  sched_switch() at sched_switch+0x102
>  100302 (0xffffff002eb3d8c0)  sched_switch() at sched_switch+0x102
>  100301 (0xffffff002eb3e000)  sched_switch() at sched_switch+0x102
>  100300 (0xffffff002eb3e460)  sched_switch() at sched_switch+0x102
>  100299 (0xffffff002eb3e8c0)  sched_switch() at sched_switch+0x102
>  100298 (0xffffff002eb41000)  sched_switch() at sched_switch+0x102
>  100297 (0xffffff002eb318c0)  sched_switch() at sched_switch+0x102
>  100296 (0xffffff002eb33000)  sched_switch() at sched_switch+0x102
>  100295 (0xffffff002eb33460)  sched_switch() at sched_switch+0x102
>  100294 (0xffffff002eb338c0)  sched_switch() at sched_switch+0x102
>  100293 (0xffffff002eb34000)  sched_switch() at sched_switch+0x102
>  100292 (0xffffff002eb34460)  sched_switch() at sched_switch+0x102
>  100291 (0xffffff002eb348c0)  sched_switch() at sched_switch+0x102
>  100290 (0xffffff002eb35000)  sched_switch() at sched_switch+0x102
>  100289 (0xffffff002eb35460)  sched_switch() at sched_switch+0x102
>  100288 (0xffffff002eb358c0)  sched_switch() at sched_switch+0x102
>  100287 (0xffffff002eb36000)  sched_switch() at sched_switch+0x102
>  100286 (0xffffff002eb36460)  sched_switch() at sched_switch+0x102
>  100285 (0xffffff002eb368c0)  sched_switch() at sched_switch+0x102
>  100284 (0xffffff002eb37000)  sched_switch() at sched_switch+0x102
>  100283 (0xffffff002eb37460)  sched_switch() at sched_switch+0x102
>  100282 (0xffffff002eb378c0)  sched_switch() at sched_switch+0x102
>  100281 (0xffffff001de40460)  sched_switch() at sched_switch+0x102
>  100280 (0xffffff001de408c0)  sched_switch() at sched_switch+0x102
>  100279 (0xffffff002eb12000)  sched_switch() at sched_switch+0x102
>  100278 (0xffffff002eb12460)  sched_switch() at sched_switch+0x102
>  100277 (0xffffff002eb128c0)  sched_switch() at sched_switch+0x102
>  100276 (0xffffff002eb13000)  sched_switch() at sched_switch+0x102
>  100275 (0xffffff002eb13460)  sched_switch() at sched_switch+0x102
>  100274 (0xffffff002eb138c0)  sched_switch() at sched_switch+0x102
>  100273 (0xffffff002eb15000)  sched_switch() at sched_switch+0x102
>  100272 (0xffffff002eb15460)  sched_switch() at sched_switch+0x102
>  100271 (0xffffff002eb158c0)  sched_switch() at sched_switch+0x102
>  100270 (0xffffff002eb16000)  sched_switch() at sched_switch+0x102
>  100269 (0xffffff002eb16460)  sched_switch() at sched_switch+0x102
>  100268 (0xffffff002eb168c0)  sched_switch() at sched_switch+0x102
>  100267 (0xffffff002eb31000)  sched_switch() at sched_switch+0x102
>  100266 (0xffffff002eb31460)  sched_switch() at sched_switch+0x102
>  100265 (0xffffff001de3b000)  sched_switch() at sched_switch+0x102
>  100264 (0xffffff001de3b460)  sched_switch() at sched_switch+0x102
>  100263 (0xffffff001de3b8c0)  sched_switch() at sched_switch+0x102
>  100262 (0xffffff001de3c000)  sched_switch() at sched_switch+0x102
>  100088 (0xffffff00062838c0)  sched_switch() at sched_switch+0x102
>  100239 (0xffffff0006684460)  sched_switch() at sched_switch+0x102
>  100255 (0xffffff001de3e460)  sched_switch() at sched_switch+0x102
>  100241 (0xffffff00066838c0)  sched_switch() at sched_switch+0x102
>  100232 (0xffffff0003fa0460)  sched_switch() at sched_switch+0x102
>  100254 (0xffffff001de3e8c0)  sched_switch() at sched_switch+0x102
>  100091 (0xffffff00062808c0)  sched_switch() at sched_switch+0x102
>  100261 (0xffffff001de3c460)  sched_switch() at sched_switch+0x102
>  100252 (0xffffff001de3f460)  sched_switch() at sched_switch+0x102
>  100078 (0xffffff0003f9e000)  sched_switch() at sched_switch+0x102
>  100074 (0xffffff0003f9e8c0)  sched_switch() at sched_switch+0x102
>  100073 (0xffffff0003fa0000)  sched_switch() at sched_switch+0x102
>  100072 (0xffffff0003ed5000)  sched_switch() at sched_switch+0x102
>  100071 (0xffffff0003ed5460)  sched_switch() at sched_switch+0x102
>  100070 (0xffffff0003ed58c0)  sched_switch() at sched_switch+0x102
>  100069 (0xffffff0003ed7000)  sched_switch() at sched_switch+0x102
>  100068 (0xffffff0003ed7460)  sched_switch() at sched_switch+0x102
>  100067 (0xffffff0003ed78c0)  sched_switch() at sched_switch+0x102
>  100066 (0xffffff0003ed9000)  sched_switch() at sched_switch+0x102
>  100231 (0xffffff0003fa08c0)  sched_switch() at sched_switch+0x102
>  100230 (0xffffff0006280000)  sched_switch() at sched_switch+0x102
>  100065 (0xffffff0003ed9460)  sched_switch() at sched_switch+0x102
>  100064 (0xffffff0003ed98c0)  sched_switch() at sched_switch+0x102
>  100076 (0xffffff0006285460)  sched_switch() at sched_switch+0x102
>  100075 (0xffffff00062858c0)  sched_switch() at sched_switch+0x102
>  100058 (0xffffff0003c40000)  sched_switch() at sched_switch+0x102
>  100057 (0xffffff0003c40460)  sched_switch() at sched_switch+0x102
>  100056 (0xffffff0003c408c0)  sched_switch() at sched_switch+0x102
>  100055 (0xffffff0003c41000)  sched_switch() at sched_switch+0x102
>  100054 (0xffffff0003c41460)  sched_switch() at sched_switch+0x102
>  100053 (0xffffff0003c418c0)  sched_switch() at sched_switch+0x102
>  100052 (0xffffff0003c42000)  sched_switch() at sched_switch+0x102
>  100051 (0xffffff0003c42460)  sched_switch() at sched_switch+0x102
>  100049 (0xffffff0003900000)  sched_switch() at sched_switch+0x102
>  100048 (0xffffff0003900460)  sched_switch() at sched_switch+0x102
>  100047 (0xffffff00039008c0)  sched_switch() at sched_switch+0x102
>  100046 (0xffffff0003901000)  sched_switch() at sched_switch+0x102
>  100044 (0xffffff00039018c0)  sched_switch() at sched_switch+0x102
>  100043 (0xffffff0003903000)  sched_switch() at sched_switch+0x102
>  100042 (0xffffff0003903460)  sched_switch() at sched_switch+0x102
>  100041 (0xffffff00039038c0)  sched_switch() at sched_switch+0x102
>  100039 (0xffffff00035f1000)  sched_switch() at sched_switch+0x102
>  100038 (0xffffff00035f1460)  sched_switch() at sched_switch+0x102
>  100037 (0xffffff00035f18c0)  sched_switch() at sched_switch+0x102
>  100036 (0xffffff00035f2000)  sched_switch() at sched_switch+0x102
>  100032 (0xffffff00035f3460)  sched_switch() at sched_switch+0x102
>  100031 (0xffffff00035f38c0)  sched_switch() at sched_switch+0x102
>  100029 (0xffffff000357d8c0)  sched_switch() at sched_switch+0x102
>  100028 (0xffffff0003581000)  sched_switch() at sched_switch+0x102
>  100013 (0xffffff00034138c0)  sched_switch() at sched_switch+0x102
>  100011 (0xffffff0003414460)  sched_switch() at sched_switch+0x102
>  100010 (0xffffff00034148c0)  sched_switch() at sched_switch+0x102
>  100009 (0xffffff00033ff460)  sched_switch() at sched_switch+0x102
>  100061 (0xffffff0003eda8c0)  sched_switch() at sched_switch+0x102
>  100060 (0xffffff0003904460)  fork_trampoline() at fork_trampoline
>  100059 (0xffffff00039048c0)  sched_switch() at sched_switch+0x102
>  100050 (0xffffff0003c428c0)  sched_switch() at sched_switch+0x102
>  100045 (0xffffff0003901460)  fork_trampoline() at fork_trampoline
>  100040 (0xffffff0003904000)  sched_switch() at sched_switch+0x102
>  100035 (0xffffff00035f2460)  sched_switch() at sched_switch+0x102
>  100030 (0xffffff000357d460)  sched_switch() at sched_switch+0x102
>  100027 (0xffffff0003581460)  sched_switch() at sched_switch+0x102
>  100024 (0xffffff0003582460)  fork_trampoline() at fork_trampoline
>  100022 (0xffffff000357a000)  sched_switch() at sched_switch+0x102
>  100017 (0xffffff000357b8c0)  sched_switch() at sched_switch+0x102
>  100015 (0xffffff0003413000)  sched_switch() at sched_switch+0x102
>  100014 (0xffffff0003413460)  fork_trampoline() at fork_trampoline
>  100008 (0xffffff00033ff8c0)  fork_trampoline() at fork_trampoline
>  100007 (0xffffff0003410000)  sched_switch() at sched_switch+0x102
>  100006 (0xffffff0003410460)  sched_switch() at sched_switch+0x102
>  100005 (0xffffff00034108c0)  sched_switch() at sched_switch+0x102
>  100004 (0xffffff00033fe000)  cpustop_handler() at cpustop_handler+0x3a
>  100003 (0xffffff00033fe460)  kdb_enter() at kdb_enter+0x3b
>  100002 (0xffffff00033fe8c0)  sched_switch() at sched_switch+0x102
>  100001 (0xffffff00033ff000)  sched_switch() at sched_switch+0x102
>  100250 (0xffffff00066388c0)  sched_switch() at sched_switch+0x102
>  100249 (0xffffff0006639000)  sched_switch() at sched_switch+0x102
>  100248 (0xffffff0006639460)  sched_switch() at sched_switch+0x102
>  100247 (0xffffff00066398c0)  sched_switch() at sched_switch+0x102
>  100246 (0xffffff000663a000)  sched_switch() at sched_switch+0x102
>  100245 (0xffffff000663a460)  sched_switch() at sched_switch+0x102
>  100244 (0xffffff001de40000)  sched_switch() at sched_switch+0x102
>  100243 (0xffffff000663a8c0)  sched_switch() at sched_switch+0x102
>  100229 (0xffffff0006280460)  sched_switch() at sched_switch+0x102
>  100228 (0xffffff0006650460)  sched_switch() at sched_switch+0x102
>  100227 (0xffffff000665a460)  sched_switch() at sched_switch+0x102
>  100226 (0xffffff000667d000)  sched_switch() at sched_switch+0x102
>  100225 (0xffffff000665c460)  sched_switch() at sched_switch+0x102
>  100224 (0xffffff000667f000)  sched_switch() at sched_switch+0x102
>  100223 (0xffffff0006689460)  sched_switch() at sched_switch+0x102
>  100222 (0xffffff0006680000)  sched_switch() at sched_switch+0x102
>  100221 (0xffffff000667d460)  sched_switch() at sched_switch+0x102
>  100220 (0xffffff0006681000)  sched_switch() at sched_switch+0x102
>  100219 (0xffffff000667f8c0)  sched_switch() at sched_switch+0x102
>  100218 (0xffffff0006682000)  sched_switch() at sched_switch+0x102
>  100217 (0xffffff00066808c0)  sched_switch() at sched_switch+0x102
>  100216 (0xffffff0006683000)  sched_switch() at sched_switch+0x102
>  100215 (0xffffff00066818c0)  sched_switch() at sched_switch+0x102
>  100214 (0xffffff0006675460)  sched_switch() at sched_switch+0x102
>  100213 (0xffffff00066828c0)  sched_switch() at sched_switch+0x102
>  100212 (0xffffff0006676460)  sched_switch() at sched_switch+0x102
>  100211 (0xffffff0006675000)  sched_switch() at sched_switch+0x102
>  100210 (0xffffff0006678460)  sched_switch() at sched_switch+0x102
>  100209 (0xffffff00066758c0)  sched_switch() at sched_switch+0x102
>  100208 (0xffffff0006679000)  sched_switch() at sched_switch+0x102
>  100207 (0xffffff00066768c0)  sched_switch() at sched_switch+0x102
>  100206 (0xffffff000667a000)  sched_switch() at sched_switch+0x102
>  100205 (0xffffff00066788c0)  sched_switch() at sched_switch+0x102
>  100204 (0xffffff000666b460)  sched_switch() at sched_switch+0x102
>  100203 (0xffffff00066798c0)  sched_switch() at sched_switch+0x102
>  100202 (0xffffff000666e460)  sched_switch() at sched_switch+0x102
>  100201 (0xffffff000667a460)  sched_switch() at sched_switch+0x102
>  100200 (0xffffff0006671460)  sched_switch() at sched_switch+0x102
>  100199 (0xffffff000666b8c0)  sched_switch() at sched_switch+0x102
>  100198 (0xffffff0006672460)  sched_switch() at sched_switch+0x102
>  100197 (0xffffff000666e8c0)  sched_switch() at sched_switch+0x102
>  100196 (0xffffff0006673460)  sched_switch() at sched_switch+0x102
>  100195 (0xffffff00066718c0)  sched_switch() at sched_switch+0x102
>  100194 (0xffffff0006674460)  sched_switch() at sched_switch+0x102
>  100193 (0xffffff00066728c0)  sched_switch() at sched_switch+0x102
>  100192 (0xffffff0006665460)  sched_switch() at sched_switch+0x102
>  100191 (0xffffff00066738c0)  sched_switch() at sched_switch+0x102
>  100190 (0xffffff0006665000)  sched_switch() at sched_switch+0x102
>  100189 (0xffffff0006666000)  sched_switch() at sched_switch+0x102
>  100188 (0xffffff0006666460)  sched_switch() at sched_switch+0x102
>  100187 (0xffffff0006667000)  sched_switch() at sched_switch+0x102
>  100186 (0xffffff0006667460)  sched_switch() at sched_switch+0x102
>  100185 (0xffffff00066678c0)  sched_switch() at sched_switch+0x102
>  100184 (0xffffff0006669460)  sched_switch() at sched_switch+0x102
>  100183 (0xffffff00066698c0)  sched_switch() at sched_switch+0x102
>  100182 (0xffffff000666a460)  sched_switch() at sched_switch+0x102
>  100181 (0xffffff000666a8c0)  sched_switch() at sched_switch+0x102
>  100180 (0xffffff000665c8c0)  sched_switch() at sched_switch+0x102
>  100179 (0xffffff000665f000)  sched_switch() at sched_switch+0x102
>  100178 (0xffffff000665f8c0)  sched_switch() at sched_switch+0x102
>  100177 (0xffffff0006660000)  sched_switch() at sched_switch+0x102
>  100176 (0xffffff00066608c0)  sched_switch() at sched_switch+0x102
>  100175 (0xffffff0006661000)  sched_switch() at sched_switch+0x102
>  100174 (0xffffff00066618c0)  sched_switch() at sched_switch+0x102
>  100173 (0xffffff0006663000)  sched_switch() at sched_switch+0x102
>  100172 (0xffffff00066638c0)  sched_switch() at sched_switch+0x102
>  100171 (0xffffff0006664000)  sched_switch() at sched_switch+0x102
>  100170 (0xffffff0006664460)  sched_switch() at sched_switch+0x102
>  100169 (0xffffff0006656460)  sched_switch() at sched_switch+0x102
>  100168 (0xffffff00066568c0)  sched_switch() at sched_switch+0x102
>  100167 (0xffffff0006657460)  sched_switch() at sched_switch+0x102
>  100166 (0xffffff00066578c0)  sched_switch() at sched_switch+0x102
>  100165 (0xffffff0006658460)  sched_switch() at sched_switch+0x102
>  100164 (0xffffff00066588c0)  sched_switch() at sched_switch+0x102
>  100163 (0xffffff000665a000)  sched_switch() at sched_switch+0x102
>  100162 (0xffffff000665a8c0)  sched_switch() at sched_switch+0x102
>  100161 (0xffffff000665b000)  sched_switch() at sched_switch+0x102
>  100160 (0xffffff000665b8c0)  sched_switch() at sched_switch+0x102
>  100159 (0xffffff000665c000)  sched_switch() at sched_switch+0x102
>  100158 (0xffffff0006650000)  sched_switch() at sched_switch+0x102
>  100157 (0xffffff00066508c0)  sched_switch() at sched_switch+0x102
>  100156 (0xffffff0006651460)  sched_switch() at sched_switch+0x102
>  100155 (0xffffff0006652000)  sched_switch() at sched_switch+0x102
>  100154 (0xffffff0006651000)  sched_switch() at sched_switch+0x102
>  100153 (0xffffff00066518c0)  sched_switch() at sched_switch+0x102
>  100152 (0xffffff0006652460)  sched_switch() at sched_switch+0x102
>  100151 (0xffffff0006653000)  sched_switch() at sched_switch+0x102
>  100150 (0xffffff0006654000)  sched_switch() at sched_switch+0x102
>  100149 (0xffffff00066548c0)  sched_switch() at sched_switch+0x102
>  100148 (0xffffff00066528c0)  sched_switch() at sched_switch+0x102
>  100147 (0xffffff0006653460)  sched_switch() at sched_switch+0x102
>  100146 (0xffffff00066538c0)  sched_switch() at sched_switch+0x102
>  100145 (0xffffff0006654460)  sched_switch() at sched_switch+0x102
>  100144 (0xffffff0006656000)  sched_switch() at sched_switch+0x102
>  100143 (0xffffff0006646460)  sched_switch() at sched_switch+0x102
>  100142 (0xffffff0006649000)  sched_switch() at sched_switch+0x102
>  100141 (0xffffff0006646000)  sched_switch() at sched_switch+0x102
>  100140 (0xffffff00066468c0)  sched_switch() at sched_switch+0x102
>  100139 (0xffffff0006649460)  sched_switch() at sched_switch+0x102
>  100138 (0xffffff00066498c0)  sched_switch() at sched_switch+0x102
>  100137 (0xffffff000664b000)  sched_switch() at sched_switch+0x102
>  100136 (0xffffff000664b460)  sched_switch() at sched_switch+0x102
>  100135 (0xffffff000664b8c0)  sched_switch() at sched_switch+0x102
>  100134 (0xffffff000664c460)  sched_switch() at sched_switch+0x102
>  100133 (0xffffff000664d460)  sched_switch() at sched_switch+0x102
>  100132 (0xffffff0003f9b000)  sched_switch() at sched_switch+0x102
>  100131 (0xffffff000664c000)  sched_switch() at sched_switch+0x102
>  100130 (0xffffff000664c8c0)  sched_switch() at sched_switch+0x102
>  100129 (0xffffff000664d000)  sched_switch() at sched_switch+0x102
>  100128 (0xffffff000664d8c0)  sched_switch() at sched_switch+0x102
>  100127 (0xffffff0003f9b460)  sched_switch() at sched_switch+0x102
>  100126 (0xffffff0003f9b8c0)  sched_switch() at sched_switch+0x102
>  100125 (0xffffff0003f9c000)  sched_switch() at sched_switch+0x102
>  100124 (0xffffff0006687460)  sched_switch() at sched_switch+0x102
>  100123 (0xffffff00066878c0)  sched_switch() at sched_switch+0x102
>  100122 (0xffffff0006688000)  sched_switch() at sched_switch+0x102
>  100121 (0xffffff0006688460)  sched_switch() at sched_switch+0x102
>  100120 (0xffffff00066888c0)  sched_switch() at sched_switch+0x102
>  100119 (0xffffff0006689000)  sched_switch() at sched_switch+0x102
>  100118 (0xffffff000667d8c0)  sched_switch() at sched_switch+0x102
>  100117 (0xffffff0006680460)  sched_switch() at sched_switch+0x102
>  100116 (0xffffff0006682460)  sched_switch() at sched_switch+0x102
>  100115 (0xffffff000667f460)  sched_switch() at sched_switch+0x102
>  100114 (0xffffff0006676000)  sched_switch() at sched_switch+0x102
>  100113 (0xffffff0006681460)  sched_switch() at sched_switch+0x102
>  100112 (0xffffff0006679460)  sched_switch() at sched_switch+0x102
>  100111 (0xffffff00066748c0)  sched_switch() at sched_switch+0x102
>  100110 (0xffffff000666e000)  sched_switch() at sched_switch+0x102
>  100109 (0xffffff0006678000)  sched_switch() at sched_switch+0x102
>  100108 (0xffffff0006672000)  sched_switch() at sched_switch+0x102
>  100107 (0xffffff000667a8c0)  sched_switch() at sched_switch+0x102
>  100106 (0xffffff0006674000)  sched_switch() at sched_switch+0x102
>  100105 (0xffffff0006671000)  sched_switch() at sched_switch+0x102
>  100104 (0xffffff00066668c0)  sched_switch() at sched_switch+0x102
>  100103 (0xffffff0006673000)  sched_switch() at sched_switch+0x102
>  100102 (0xffffff000666a000)  sched_switch() at sched_switch+0x102
>  100101 (0xffffff00066658c0)  sched_switch() at sched_switch+0x102
>  100100 (0xffffff000665f460)  sched_switch() at sched_switch+0x102
>  100099 (0xffffff0006669000)  sched_switch() at sched_switch+0x102
>  100098 (0xffffff0006661460)  sched_switch() at sched_switch+0x102
>  100097 (0xffffff000666b000)  sched_switch() at sched_switch+0x102
>  100096 (0xffffff00066648c0)  sched_switch() at sched_switch+0x102
>  100095 (0xffffff0006660460)  sched_switch() at sched_switch+0x102
>  100094 (0xffffff0006658000)  sched_switch() at sched_switch+0x102
>  100093 (0xffffff0006663460)  sched_switch() at sched_switch+0x102
>  100092 (0xffffff000665b460)  sched_switch() at sched_switch+0x102
>  100063 (0xffffff0003eda000)  sched_switch() at sched_switch+0x102
>  100062 (0xffffff0003eda460)  sched_switch() at sched_switch+0x102
>  100034 (0xffffff00035f28c0)  sched_switch() at sched_switch+0x102
>  100033 (0xffffff00035f3000)  sched_switch() at sched_switch+0x102
>  100026 (0xffffff00035818c0)  sched_switch() at sched_switch+0x102
>  100025 (0xffffff0003582000)  sched_switch() at sched_switch+0x102
>  100023 (0xffffff00035828c0)  sched_switch() at sched_switch+0x102
>  100021 (0xffffff000357a460)  sched_switch() at sched_switch+0x102
>  100020 (0xffffff000357a8c0)  sched_switch() at sched_switch+0x102
>  100019 (0xffffff000357b000)  sched_switch() at sched_switch+0x102
>  100018 (0xffffff000357b460)  sched_switch() at sched_switch+0x102
>  100016 (0xffffff000357d000)  sched_switch() at sched_switch+0x102
>  100012 (0xffffff0003414000)  sched_switch() at sched_switch+0x102
>  100000 (0xffffffff80cffcf0)  sched_switch() at sched_switch+0x102
>
> db> ps
>  pid  ppid  pgrp   uid   state   wmesg         wchan        cmd
>  5002  2389  5002 20001  SL+     pfault   0xffffffff80d33adc top
>  2389  2388  2389 20001  Ss+     pause    0xffffff004ca3a0a0 tcsh
>  2388  2386  2386 20001  S       vmwait   0xffffffff80d33adc sshd
>  2386  1195  2386     0  Ss      sbwait   0xffffff01ef77fe8c sshd
>  1345     1  1345     0  SLs+    pfault   0xffffffff80d33adc getty
>  1344     1  1344     0  Ss+     ttyin    0xffffff0003ede4a8 getty
>  1248     1  1248     0  ?s                                  cron
>  1227     1  1227    25  ?s                                  sendmail
>  1211     1  1211     0  SLs     pfault   0xffffffff80d33adc sendmail
>  1195     1  1195     0  SLs     pfault   0xffffffff80d33adc sshd
>  1055     1  1055     0  SLs     pfault   0xffffffff80d33adc perl5.10.1
>  1035     1  1035     1  ?s                                  rwhod
>  1005     1  1005     0  SLs     pfault   0xffffffff80d33adc ntpd
>  939     1   939     0  Ss      rpcsvc   0xffffff001de7a4a0 NLM: master
>  933     1   933     0  SLs     pfault   0xffffffff80d33adc rpc.statd
>  927   926   926     0  S       (threaded)                  nfsd
> 100516                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100515                   S       rpcsvc   0xffffff002dba9520 nfsd: service
> 100514                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100513                   D       zio->io_ 0xffffff01187c0320 nfsd: service
> 100512                   D       zfs      0xffffff00b55f17f8 nfsd: service
> 100511                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100510                   D       zfs      0xffffff00b55f17f8 nfsd: service
> 100509                   D       zfs      0xffffff00c495aba8 nfsd: service
> 100508                   D       db->db_c 0xffffff008d7a4950 nfsd: service
> 100507                   S       rpcsvc   0xffffff002d2ba720 nfsd: service
> 100506                   D       tx->tx_q 0xffffff0006abf240 nfsd: service
> 100505                   S       rpcsvc   0xffffff002d728ba0 nfsd: service
> 100504                   D       zio->io_ 0xffffff0185b36320 nfsd: service
> 100503                   S       rpcsvc   0xffffff002d2ba7a0 nfsd: service
> 100502                   S       rpcsvc   0xffffff002d3165a0 nfsd: service
> 100501                   S       rpcsvc   0xffffff002d728c20 nfsd: service
> 100500                   D       tx->tx_q 0xffffff0006abf240 nfsd: service
> 100499                   D       zfsvfs-> 0xffffff001ddc4788 nfsd: service
> 100498                   D       zfs      0xffffff009b810ba8 nfsd: service
> 100497                   S       rpcsvc   0xffffff002d728ca0 nfsd: service
> 100496                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100495                   D       tx->tx_q 0xffffff0006abf240 nfsd: service
> 100494                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100493                   S       rpcsvc   0xffffff002d728d20 nfsd: service
> 100492                   S       rpcsvc   0xffffff002d316620 nfsd: service
> 100491                   S       rpcsvc   0xffffff002d2ba9a0 nfsd: service
> 100490                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100489                   D       zfsvfs-> 0xffffff0011243828 nfsd: service
> 100488                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100487                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100486                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100485                   D       zio->io_ 0xffffff01046c2320 nfsd: service
> 100484                   S       rpcsvc   0xffffff002d316720 nfsd: service
> 100483                   D       zfs      0xffffff00b62b9d80 nfsd: service
> 100482                   S       rpcsvc   0xffffff002d728a20 nfsd: service
> 100481                   S       rpcsvc   0xffffff002d728e20 nfsd: service
> 100480                   S       rpcsvc   0xffffff002d3167a0 nfsd: service
> 100479                   S       rpcsvc   0xffffff002d2baaa0 nfsd: service
> 100478                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100477                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100476                   D       zio->io_ 0xffffff00e076ed70 nfsd: service
> 100475                   D       zfs      0xffffff00b62b9d80 nfsd: service
> 100474                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100473                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100472                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100471                   D       zfs      0xffffff00b62b9d80 nfsd: service
> 100470                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100469                   D       zfs      0xffffff00c495aba8 nfsd: service
> 100468                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100467                   D       zfs      0xffffff00b9501ba8 nfsd: service
> 100466                   S       rpcsvc   0xffffff002d3169a0 nfsd: service
> 100465                   D       zfs      0xffffff00b5979620 nfsd: service
> 100464                   D       zfs      0xffffff00b62b9d80 nfsd: service
> 100463                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100462                   S       rpcsvc   0xffffff002d316920 nfsd: service
> 100461                   D       zfs      0xffffff00c495aba8 nfsd: service
> 100460                   D       zfs      0xffffff00b55f17f8 nfsd: service
> 100459                   S       rpcsvc   0xffffff002d935320 nfsd: service
> 100458                   S       rpcsvc   0xffffff002d318aa0 nfsd: service
> 100457                   D       zfsvfs-> 0xffffff001ddc4788 nfsd: service
> 100456                   S       rpcsvc   0xffffff002d318a20 nfsd: service
> 100455                   D       zfs      0xffffff00b9467098 nfsd: service
> 100454                   S       rpcsvc   0xffffff002d316aa0 nfsd: service
> 100453                   S       rpcsvc   0xffffff002d2bae20 nfsd: service
> 100452                   S       rpcsvc   0xffffff002d3189a0 nfsd: service
> 100451                   D       zfs      0xffffff00c495aba8 nfsd: service
> 100450                   D       zfs      0xffffff00850f77f8 nfsd: service
> 100449                   D       zfs      0xffffff00c495aba8 nfsd: service
> 100448                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100447                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100446                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100445                   S       rpcsvc   0xffffff002d315020 nfsd: service
> 100444                   D       zfs      0xffffff00b62b9d80 nfsd: service
> 100443                   D       zfs      0xffffff00c495aba8 nfsd: service
> 100442                   S       rpcsvc   0xffffff002dfa6720 nfsd: service
> 100441                   S       rpcsvc   0xffffff002d3150a0 nfsd: service
> 100440                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100439                   S       rpcsvc   0xffffff002d9355a0 nfsd: service
> 100438                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100437                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100436                   S       rpcsvc   0xffffff002dba96a0 nfsd: service
> 100435                   D       zfs      0xffffff00c495aba8 nfsd: service
> 100434                   S       rpcsvc   0xffffff002dfa6820 nfsd: service
> 100433                   S       rpcsvc   0xffffff002d935720 nfsd: service
> 100432                   D       zfs      0xffffff00c495aba8 nfsd: service
> 100431                   D       zfs      0xffffff00b55f17f8 nfsd: service
> 100430                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100429                   S       rpcsvc   0xffffff002d315220 nfsd: service
> 100428                   D       zio->io_ 0xffffff01b4551690 nfsd: service
> 100427                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100426                   S       rpcsvc   0xffffff002dbab6a0 nfsd: service
> 100425                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100424                   S       rpcsvc   0xffffff002dba97a0 nfsd: service
> 100423                   D       zio->io_ 0xffffff002d979320 nfsd: service
> 100422                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100421                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100420                   D       zfs      0xffffff00b55f17f8 nfsd: service
> 100419                   S       rpcsvc   0xffffff002dbab620 nfsd: service
> 100418                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100417                   D       zfs      0xffffff00b9501ba8 nfsd: service
> 100416                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100415                   D       zio->io_ 0xffffff01d4611d70 nfsd: service
> 100414                   S       rpcsvc   0xffffff002dbaa6a0 nfsd: service
> 100413                   S       rpcsvc   0xffffff002dba98a0 nfsd: service
> 100412                   D       zfs      0xffffff00bb36eba8 nfsd: service
> 100411                   S       rpcsvc   0xffffff002dbab7a0 nfsd: service
> 100410                   S       rpcsvc   0xffffff002ddbe620 nfsd: service
> 100409                   S       rpcsvc   0xffffff002dba9920 nfsd: service
> 100408                   D       zfs      0xffffff00c495aba8 nfsd: service
> 100407                   S       rpcsvc   0xffffff002dbab820 nfsd: service
> 100406                   D       zfs      0xffffff00850f77f8 nfsd: service
> 100405                   S       rpcsvc   0xffffff002dbab8a0 nfsd: service
> 100404                   D       zio->io_ 0xffffff0028d8da00 nfsd: service
> 100403                   S       rpcsvc   0xffffff002dba9a20 nfsd: service
> 100402                   D       zio->io_ 0xffffff0157902d70 nfsd: service
> 100401                   S       rpcsvc   0xffffff002dba99a0 nfsd: service
> 100400                   D       zfs      0xffffff009b810ba8 nfsd: service
> 100399                   D       zfs      0xffffff00bd5e5448 nfsd: service
> 100398                   S       rpcsvc   0xffffff002dbaa920 nfsd: service
> 100397                   D       zio->io_ 0xffffff002ec82a00 nfsd: service
> 100396                   S       rpcsvc   0xffffff002d935d20 nfsd: service
> 100395                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100394                   D       zfs      0xffffff00c495aba8 nfsd: service
> 100393                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100392                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100391                   D       zfs      0xffffff00b55f17f8 nfsd: service
> 100390                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100389                   D       zfs      0xffffff00bb36eba8 nfsd: service
> 100388                   D       tx->tx_q 0xffffff0006abf240 nfsd: service
> 100387                   D       zfs      0xffffff00bb36eba8 nfsd: service
> 100386                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100385                   S       rpcsvc   0xffffff002dbabaa0 nfsd: service
> 100384                   D       zfs      0xffffff00c1e12620 nfsd: service
> 100383                   D       zfs      0xffffff009b810ba8 nfsd: service
> 100382                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100381                   D       zfs      0xffffff00c495aba8 nfsd: service
> 100380                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100379                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100378                   D       zio->io_ 0xffffff01005ada00 nfsd: service
> 100377                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100376                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100375                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100374                   S       rpcsvc   0xffffff002db1a1a0 nfsd: service
> 100373                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100372                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100371                   D       zfs      0xffffff00850f77f8 nfsd: service
> 100370                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100369                   D       zfs      0xffffff00c1e12620 nfsd: service
> 100368                   D       zfsvfs-> 0xffffff001ddc4788 nfsd: service
> 100367                   D       zfs      0xffffff00b9467098 nfsd: service
> 100366                   S       rpcsvc   0xffffff002db1a2a0 nfsd: service
> 100365                   D       zfs      0xffffff00850f77f8 nfsd: service
> 100364                   S       rpcsvc   0xffffff002dbaad20 nfsd: service
> 100363                   S       rpcsvc   0xffffff002dbabda0 nfsd: service
> 100362                   D       zfs      0xffffff00b9501ba8 nfsd: service
> 100361                   D       zfs      0xffffff00b62b9d80 nfsd: service
> 100360                   S       rpcsvc   0xffffff002dbaaca0 nfsd: service
> 100359                   D       zfs      0xffffff00b55f17f8 nfsd: service
> 100358                   D       zio->io_ 0xffffff013395a320 nfsd: service
> 100357                   D       zfs      0xffffff00bd5e5448 nfsd: service
> 100356                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100355                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100354                   D       zfs      0xffffff00b55f17f8 nfsd: service
> 100353                   S       rpcsvc   0xffffff002dbaa220 nfsd: service
> 100352                   D       zfs      0xffffff00850f77f8 nfsd: service
> 100351                   D       zfs      0xffffff00b55f17f8 nfsd: service
> 100350                   S       rpcsvc   0xffffff002dbab020 nfsd: service
> 100349                   D       zio->io_ 0xffffff00b9edfa00 nfsd: service
> 100348                   D       tx->tx_q 0xffffff0006abf240 nfsd: service
> 100347                   S       rpcsvc   0xffffff002ddbe120 nfsd: service
> 100346                   D       zfs      0xffffff00bb36eba8 nfsd: service
> 100345                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100344                   S       rpcsvc   0xffffff002db1a620 nfsd: service
> 100343                   D       zfs      0xffffff00c495aba8 nfsd: service
> 100342                   S       rpcsvc   0xffffff002dbab120 nfsd: service
> 100341                   D       zio->io_ 0xffffff002c859690 nfsd: service
> 100340                   D       zfs      0xffffff00850f77f8 nfsd: service
> 100339                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100338                   S       rpcsvc   0xffffff002dbab1a0 nfsd: service
> 100337                   D       zfs      0xffffff00c1e12620 nfsd: service
> 100336                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100335                   S       rpcsvc   0xffffff002ca84a20 nfsd: service
> 100334                   D       zio->io_ 0xffffff017b42a690 nfsd: service
> 100333                   D       buf_hash 0xffffffff8107d600 nfsd: service
> 100332                   D       zio->io_ 0xffffff002f6ccd70 nfsd: service
> 100331                   D       zfs      0xffffff009b810ba8 nfsd: service
> 100330                   S       rpcsvc   0xffffff002dbab320 nfsd: service
> 100329                   D       zio->io_ 0xffffff0161a66690 nfsd: service
> 100328                   S       rpcsvc   0xffffff002db1a720 nfsd: service
> 100327                   D       zio->io_ 0xffffff0038042320 nfsd: service
> 100326                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100325                   S       rpcsvc   0xffffff002e1ac7a0 nfsd: service
> 100324                   D       zio->io_ 0xffffff016d9b0690 nfsd: service
> 100323                   D       zio->io_ 0xffffff002a0b7a00 nfsd: service
> 100322                   D       zfs      0xffffff00c31037f8 nfsd: service
> 100321                   D       zio->io_ 0xffffff0205a61320 nfsd: service
> 100320                   S       rpcsvc   0xffffff002dbab3a0 nfsd: service
> 100319                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100318                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100317                   S       rpcsvc   0xffffff002dba8520 nfsd: service
> 100316                   S       rpcsvc   0xffffff002dbab4a0 nfsd: service
> 100315                   S       rpcsvc   0xffffff002e1ac920 nfsd: service
> 100314                   D       zio->io_ 0xffffff018542ed70 nfsd: service
> 100313                   D       zio->io_ 0xffffff01e156f320 nfsd: service
> 100312                   D       zfs      0xffffff00b55f17f8 nfsd: service
> 100311                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100310                   D       zfs      0xffffff017689cd80 nfsd: service
> 100309                   D       zfs      0xffffff00850f77f8 nfsd: service
> 100308                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100307                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100306                   D       tx->tx_q 0xffffff0006abf240 nfsd: service
> 100305                   S       rpcsvc   0xffffff002dba86a0 nfsd: service
> 100304                   D       zio->io_ 0xffffff00b7c48690 nfsd: service
> 100303                   D       zfs      0xffffff00850f77f8 nfsd: service
> 100302                   D       zio->io_ 0xffffff01c7f41690 nfsd: service
> 100301                   D       zio->io_ 0xffffff01f44aed70 nfsd: service
> 100300                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100299                   D       zfs      0xffffff00bb36eba8 nfsd: service
> 100298                   S       rpcsvc   0xffffff002db1aaa0 nfsd: service
> 100297                   D       zfs      0xffffff00bb36eba8 nfsd: service
> 100296                   D       zfs      0xffffff00c495aba8 nfsd: service
> 100295                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100294                   S       rpcsvc   0xffffff002db1ab20 nfsd: service
> 100293                   S       rpcsvc   0xffffff002ddbe8a0 nfsd: service
> 100292                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100291                   S       rpcsvc   0xffffff002dba88a0 nfsd: service
> 100290                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100289                   D       zfs      0xffffff00c495aba8 nfsd: service
> 100288                   D       zfs      0xffffff00c495aba8 nfsd: service
> 100287                   D       zfs      0xffffff00b55f17f8 nfsd: service
> 100286                   D       zfs      0xffffff00bb36eba8 nfsd: service
> 100285                   D       zfs      0xffffff00bb36eba8 nfsd: service
> 100284                   D       zio->io_ 0xffffff0115b28a00 nfsd: service
> 100283                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100282                   D       zfs      0xffffff00b55f17f8 nfsd: service
> 100281                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100280                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100279                   S       rpcsvc   0xffffff002dba89a0 nfsd: service
> 100278                   D       zfs      0xffffff00b62b9d80 nfsd: service
> 100277                   D       zfs      0xffffff00b5979620 nfsd: service
> 100276                   D       zfs      0xffffff00c31037f8 nfsd: service
> 100275                   S       rpcsvc   0xffffff002dba8a20 nfsd: service
> 100274                   D       vq->vq_l 0xffffff0006692600 nfsd: service
> 100273                   S       rpcsvc   0xffffff002ddbeaa0 nfsd: service
> 100272                   S       rpcsvc   0xffffff002dfa6520 nfsd: service
> 100271                   S       rpcsvc   0xffffff002dba8b20 nfsd: service
> 100270                   D       vmwait   0xffffffff80d33adc nfsd: service
> 100269                   S       rpcsvc   0xffffff002ddbeba0 nfsd: service
> 100268                   S       rpcsvc   0xffffff002dfa65a0 nfsd: service
> 100267                   D       zfs      0xffffff00bd5e5448 nfsd: service
> 100266                   D       zfs      0xffffff00bdab0098 nfsd: service
> 100265                   D       zio->io_ 0xffffff0190d08690 nfsd: service
> 100264                   D       zfsvfs-> 0xffffff001ddc4788 nfsd: service
> 100263                   S       rpcsvc   0xffffff002dba8ba0 nfsd: service
> 100262                   D       zfs      0xffffff00b55f17f8 nfsd: service
> 100088                   S       rpcsvc   0xffffff002dba8c20 nfsd: master
>  926     1   926     0  SLs     pfault   0xffffffff80d33adc nfsd
>  924   920   920     0  SL      pfault   0xffffffff80d33adc nfsuserd
>  923   920   920     0  SL      pfault   0xffffffff80d33adc nfsuserd
>  922   920   920     0  SL      pfault   0xffffffff80d33adc nfsuserd
>  921   920   920     0  SL      pfault   0xffffffff80d33adc nfsuserd
>  920     1   920     0  Ss      pause    0xffffff002e9bc0a0 nfsuserd
>  901     1   901     0  Ss      select   0xffffff002d317440 mountd
>  820     1   820     0  SLs     pfault   0xffffffff80d33adc rpcbind
>  795     1   795     0  SLs     pfault   0xffffffff80d33adc syslogd
>   92     0     0     0  SL      mdwait   0xffffff0011141000 [md0]
>   23     0     0     0  SL      sdflush  0xffffffff80d33058 [softdepflush]
>   22     0     0     0  SL      zio->io_ 0xffffff00b49fca00 [syncer]
>   21     0     0     0  SL      vlruwt   0xffffff0003f53470 [vnlru]
>   20     0     0     0  SL      psleep   0xffffffff80d27608 [bufdaemon]
>   19     0     0     0  SL      pgzero   0xffffffff80d34b2c [pagezero]
>   18     0     0     0  SL      psleep   0xffffffff80d33ec8 [vmdaemon]
>   17     0     0     0  SL      psleep   0xffffffff80d33e8c [pagedaemon]
>   16     0     0     0  SL      ccb_scan 0xffffffff80cc46e0 [xpt_thrd]
>   15     0     0     0  SL      waiting_ 0xffffffff80d2d3a0 [sctp_iterator]
>    9     0     0     0  SL      (threaded)                  zfskern
> 100231                   D       vmwait   0xffffffff80d33adc [txg_thread_enter]
> 100230                   D       tx->tx_q 0xffffff0006abf230 [txg_thread_enter]
> 100065                   D       l2arc_fe 0xffffffff81081940 [l2arc_feed_thread]
> 100064                   D       arc_recl 0xffffffff81071aa0 [arc_reclaim_thread]
>   14     0     0     0  SL      (threaded)                  usb
> 100076                   D       -        0xffffff0006226810 [ucom]
> 100075                   D       -        0xffffff0006225c10 [ucom]
> 100058                   D       -        0xffffff800034ae18 [usbus4]
> 100057                   D       -        0xffffff800034adc0 [usbus4]
> 100056                   D       -        0xffffff800034ad68 [usbus4]
> 100055                   D       -        0xffffff800034ad10 [usbus4]
> 100054                   D       -        0xffffff8000341ef0 [usbus3]
> 100053                   D       -        0xffffff8000341e98 [usbus3]
> 100052                   D       -        0xffffff8000341e40 [usbus3]
> 100051                   D       -        0xffffff8000341de8 [usbus3]
> 100049                   D       -        0xffffff8000338ef0 [usbus2]
> 100048                   D       -        0xffffff8000338e98 [usbus2]
> 100047                   D       -        0xffffff8000338e40 [usbus2]
> 100046                   D       -        0xffffff8000338de8 [usbus2]
> 100044                   D       -        0xffffff800032fef0 [usbus1]
> 100043                   D       -        0xffffff800032fe98 [usbus1]
> 100042                   D       -        0xffffff800032fe40 [usbus1]
> 100041                   D       -        0xffffff800032fde8 [usbus1]
> 100039                   D       -        0xffffff8000326ef0 [usbus0]
> 100038                   D       -        0xffffff8000326e98 [usbus0]
> 100037                   D       -        0xffffff8000326e40 [usbus0]
> 100036                   D       -        0xffffff8000326de8 [usbus0]
>    8     0     0     0  SL      idle     0xffffff80002d7300 [mpt_raid1]
>    7     0     0     0  SL      idle     0xffffff80002d7000 [mpt_recovery1]
>    6     0     0     0  SL      idle     0xffffff80002c2300 [mpt_raid0]
>    5     0     0     0  SL      idle     0xffffff80002c2000 [mpt_recovery0]
>   13     0     0     0  SL      -        0xffffffff80d031a4 [yarrow]
>    4     0     0     0  SL      -        0xffffffff80cff748 [g_down]
>    3     0     0     0  SL      -        0xffffffff80cff740 [g_up]
>    2     0     0     0  SL      -        0xffffffff80cff730 [g_event]
>   12     0     0     0  WL      (threaded)                  intr
> 100061                   I                                   [swi0: uart]
> 100060                   I                                   [irq1: atkbd0]
> 100059                   I                                   [irq14: ata0]
> 100050                   I                                   [irq16: uhci3]
> 100045                   I                                   [irq18: uhci2]
> 100040                   I                                   [irq19: uhci1+]
> 100035                   I                                   [irq23: uhci0 ehci0]
> 100030                   I                                   [irq49: mpt1]
> 100027                   I                                   [irq48: mpt0]
> 100024                   I                                   [irq9: acpi0]
> 100022                   I                                   [swi5: +]
> 100017                   I                                   [swi2: cambio]
> 100015                   I                                   [swi6: task queue]
> 100014                   I                                   [swi6: Giant taskq]
> 100008                   I                                   [swi3: vm]
> 100007                   I                                   [swi4: clock]
> 100006                   I                                   [swi4: clock]
> 100005                   I                                   [swi1: netisr 0]
>   11     0     0     0  RL      (threaded)                  idle
> 100004                   Run     CPU 0                       [idle: cpu0]
> 100003                   Run     CPU 1                       [idle: cpu1]
>    1     0     1     0  SLs     wait     0xffffff00033fb8e0 [init]
>   10     0     0     0  SL      audit_wo 0xffffffff80d32390 [audit]
>    0     0     0     0  SLs     (threaded)                  kernel
> 100250                   D       -        0xffffff001d741b00 [zil_clean]
> 100249                   D       -        0xffffff001de7b500 [zil_clean]
> 100248                   D       -        0xffffff001de6b580 [zil_clean]
> 100247                   D       -        0xffffff001de66080 [zil_clean]
> 100246                   D       -        0xffffff001de51e00 [zil_clean]
> 100245                   D       -        0xffffff001de51400 [zil_clean]
> 100244                   D       -        0xffffff001d74e580 [zil_clean]
> 100243                   D       -        0xffffff001de04100 [zil_clean]
> 100229                   D       -        0xffffff0003f14980 [zfs_vn_rele_taskq]
> 100228                   D       -        0xffffff0006301b80 [zio_ioctl_intr]
> 100227                   D       -        0xffffff0006301b00 [zio_ioctl_issue]
> 100226                   D       -        0xffffff0006301a80 [zio_claim_intr]
> 100225                   D       -        0xffffff0006301a00 [zio_claim_issue]
> 100224                   D       -        0xffffff0006301980 [zio_free_intr]
> 100223                   D       -        0xffffff0006301900 [zio_free_issue_99]
> 100222                   D       -        0xffffff0006301900 [zio_free_issue_98]
> 100221                   D       -        0xffffff0006301900 [zio_free_issue_97]
> 100220                   D       -        0xffffff0006301900 [zio_free_issue_96]
> 100219                   D       -        0xffffff0006301900 [zio_free_issue_95]
> 100218                   D       -        0xffffff0006301900 [zio_free_issue_94]
> 100217                   D       -        0xffffff0006301900 [zio_free_issue_93]
> 100216                   D       -        0xffffff0006301900 [zio_free_issue_92]
> 100215                   D       -        0xffffff0006301900 [zio_free_issue_91]
> 100214                   D       -        0xffffff0006301900 [zio_free_issue_90]
> 100213                   D       -        0xffffff0006301900 [zio_free_issue_89]
> 100212                   D       -        0xffffff0006301900 [zio_free_issue_88]
> 100211                   D       -        0xffffff0006301900 [zio_free_issue_87]
> 100210                   D       -        0xffffff0006301900 [zio_free_issue_86]
> 100209                   D       -        0xffffff0006301900 [zio_free_issue_85]
> 100208                   D       -        0xffffff0006301900 [zio_free_issue_84]
> 100207                   D       -        0xffffff0006301900 [zio_free_issue_83]
> 100206                   D       -        0xffffff0006301900 [zio_free_issue_82]
> 100205                   D       -        0xffffff0006301900 [zio_free_issue_81]
> 100204                   D       -        0xffffff0006301900 [zio_free_issue_80]
> 100203                   D       -        0xffffff0006301900 [zio_free_issue_79]
> 100202                   D       -        0xffffff0006301900 [zio_free_issue_78]
> 100201                   D       -        0xffffff0006301900 [zio_free_issue_77]
> 100200                   D       -        0xffffff0006301900 [zio_free_issue_76]
> 100199                   D       -        0xffffff0006301900 [zio_free_issue_75]
> 100198                   D       -        0xffffff0006301900 [zio_free_issue_74]
> 100197                   D       -        0xffffff0006301900 [zio_free_issue_73]
> 100196                   D       -        0xffffff0006301900 [zio_free_issue_72]
> 100195                   D       -        0xffffff0006301900 [zio_free_issue_71]
> 100194                   D       -        0xffffff0006301900 [zio_free_issue_70]
> 100193                   D       -        0xffffff0006301900 [zio_free_issue_69]
> 100192                   D       -        0xffffff0006301900 [zio_free_issue_68]
> 100191                   D       -        0xffffff0006301900 [zio_free_issue_67]
> 100190                   D       -        0xffffff0006301900 [zio_free_issue_66]
> 100189                   D       -        0xffffff0006301900 [zio_free_issue_65]
> 100188                   D       -        0xffffff0006301900 [zio_free_issue_64]
> 100187                   D       -        0xffffff0006301900 [zio_free_issue_63]
> 100186                   D       -        0xffffff0006301900 [zio_free_issue_62]
> 100185                   D       -        0xffffff0006301900 [zio_free_issue_61]
> 100184                   D       -        0xffffff0006301900 [zio_free_issue_60]
> 100183                   D       -        0xffffff0006301900 [zio_free_issue_59]
> 100182                   D       -        0xffffff0006301900 [zio_free_issue_58]
> 100181                   D       -        0xffffff0006301900 [zio_free_issue_57]
> 100180                   D       -        0xffffff0006301900 [zio_free_issue_56]
> 100179                   D       -        0xffffff0006301900 [zio_free_issue_55]
> 100178                   D       -        0xffffff0006301900 [zio_free_issue_54]
> 100177                   D       -        0xffffff0006301900 [zio_free_issue_53]
> 100176                   D       -        0xffffff0006301900 [zio_free_issue_52]
> 100175                   D       -        0xffffff0006301900 [zio_free_issue_51]
> 100174                   D       -        0xffffff0006301900 [zio_free_issue_50]
> 100173                   D       -        0xffffff0006301900 [zio_free_issue_49]
> 100172                   D       -        0xffffff0006301900 [zio_free_issue_48]
> 100171                   D       -        0xffffff0006301900 [zio_free_issue_47]
> 100170                   D       -        0xffffff0006301900 [zio_free_issue_46]
> 100169                   D       -        0xffffff0006301900 [zio_free_issue_45]
> 100168                   D       -        0xffffff0006301900 [zio_free_issue_44]
> 100167                   D       -        0xffffff0006301900 [zio_free_issue_43]
> 100166                   D       -        0xffffff0006301900 [zio_free_issue_42]
> 100165                   D       -        0xffffff0006301900 [zio_free_issue_41]
> 100164                   D       -        0xffffff0006301900 [zio_free_issue_40]
> 100163                   D       -        0xffffff0006301900 [zio_free_issue_39]
> 100162                   D       -        0xffffff0006301900 [zio_free_issue_38]
> 100161                   D       -        0xffffff0006301900 [zio_free_issue_37]
> 100160                   D       -        0xffffff0006301900 [zio_free_issue_36]
> 100159                   D       -        0xffffff0006301900 [zio_free_issue_35]
> 100158                   D       -        0xffffff0006301900 [zio_free_issue_34]
> 100157                   D       -        0xffffff0006301900 [zio_free_issue_33]
> 100156                   D       -        0xffffff0006301900 [zio_free_issue_32]
> 100155                   D       -        0xffffff0006301900 [zio_free_issue_31]
> 100154                   D       -        0xffffff0006301900 [zio_free_issue_30]
> 100153                   D       -        0xffffff0006301900 [zio_free_issue_29]
> 100152                   D       -        0xffffff0006301900 [zio_free_issue_28]
> 100151                   D       -        0xffffff0006301900 [zio_free_issue_27]
> 100150                   D       -        0xffffff0006301900 [zio_free_issue_26]
> 100149                   D       -        0xffffff0006301900 [zio_free_issue_25]
> 100148                   D       -        0xffffff0006301900 [zio_free_issue_24]
> 100147                   D       -        0xffffff0006301900 [zio_free_issue_23]
> 100146                   D       -        0xffffff0006301900 [zio_free_issue_22]
> 100145                   D       -        0xffffff0006301900 [zio_free_issue_21]
> 100144                   D       -        0xffffff0006301900 [zio_free_issue_20]
> 100143                   D       -        0xffffff0006301900 [zio_free_issue_19]
> 100142                   D       -        0xffffff0006301900 [zio_free_issue_18]
> 100141                   D       -        0xffffff0006301900 [zio_free_issue_17]
> 100140                   D       -        0xffffff0006301900 [zio_free_issue_16]
> 100139                   D       -        0xffffff0006301900 [zio_free_issue_15]
> 100138                   D       -        0xffffff0006301900 [zio_free_issue_14]
> 100137                   D       -        0xffffff0006301900 [zio_free_issue_13]
> 100136                   D       -        0xffffff0006301900 [zio_free_issue_12]
> 100135                   D       -        0xffffff0006301900 [zio_free_issue_11]
> 100134                   D       -        0xffffff0006301900 [zio_free_issue_10]
> 100133                   D       -        0xffffff0006301900 [zio_free_issue_9]
> 100132                   D       -        0xffffff0006301900 [zio_free_issue_8]
> 100131                   D       -        0xffffff0006301900 [zio_free_issue_7]
> 100130                   D       -        0xffffff0006301900 [zio_free_issue_6]
> 100129                   D       -        0xffffff0006301900 [zio_free_issue_5]
> 100128                   D       -        0xffffff0006301900 [zio_free_issue_4]
> 100127                   D       -        0xffffff0006301900 [zio_free_issue_3]
> 100126                   D       -        0xffffff0006301900 [zio_free_issue_2]
> 100125                   D       -        0xffffff0006301900 [zio_free_issue_1]
> 100124                   D       -        0xffffff0006301900 [zio_free_issue_0]
> 100123                   D       -        0xffffff0006301880 [zio_write_intr_high]
> 100122                   D       -        0xffffff0006301880 [zio_write_intr_high]
> 100121                   D       -        0xffffff0006301880 [zio_write_intr_high]
> 100120                   D       -        0xffffff0006301880 [zio_write_intr_high]
> 100119                   D       -        0xffffff0006301880 [zio_write_intr_high]
> 100118                   D       -        0xffffff0006301800 [zio_write_intr_7]
> 100117                   D       -        0xffffff0006301800 [zio_write_intr_6]
> 100116                   D       -        0xffffff0006301800 [zio_write_intr_5]
> 100115                   D       -        0xffffff0006301800 [zio_write_intr_4]
> 100114                   D       -        0xffffff0006301800 [zio_write_intr_3]
> 100113                   D       -        0xffffff0006301800 [zio_write_intr_2]
> 100112                   D       -        0xffffff0006301800 [zio_write_intr_1]
> 100111                   D       -        0xffffff0006301800 [zio_write_intr_0]
> 100110                   D       -        0xffffff0006301780 [zio_write_issue_hig]
> 100109                   D       -        0xffffff0006301780 [zio_write_issue_hig]
> 100108                   D       -        0xffffff0006301780 [zio_write_issue_hig]
> 100107                   D       -        0xffffff0006301780 [zio_write_issue_hig]
> 100106                   D       -        0xffffff0006301780 [zio_write_issue_hig]
> 100105                   D       vmwait   0xffffffff80d33adc [zio_write_issue_1]
> 100104                   D       vmwait   0xffffffff80d33adc [zio_write_issue_0]
> 100103                   D       vmwait   0xffffffff80d33adc [zio_read_intr_1]
> 100102                   D       vq->vq_l 0xffffff0006692600 [zio_read_intr_0]
> 100101                   D       -        0xffffff0006301600 [zio_read_issue_7]
> 100100                   D       -        0xffffff0006301600 [zio_read_issue_6]
> 100099                   D       -        0xffffff0006301600 [zio_read_issue_5]
> 100098                   D       -        0xffffff0006301600 [zio_read_issue_4]
> 100097                   D       -        0xffffff0006301600 [zio_read_issue_3]
> 100096                   D       -        0xffffff0006301600 [zio_read_issue_2]
> 100095                   D       -        0xffffff0006301600 [zio_read_issue_1]
> 100094                   D       -        0xffffff0006301600 [zio_read_issue_0]
> 100093                   D       -        0xffffff0006064e00 [zio_null_intr]
> 100092                   D       -        0xffffff0006084000 [zio_null_issue]
> 100063                   D       -        0xffffff0003ee2880 [system_taskq_1]
> 100062                   D       -        0xffffff0003ee2880 [system_taskq_0]
> 100034                   D       -        0xffffff0003620500 [em3 taskq]
> 100033                   D       -        0xffffff0003611000 [em2 taskq]
> 100026                   D       -        0xffffff00035d1d00 [em1 taskq]
> 100025                   D       -        0xffffff00035d0e00 [em0 taskq]
> 100023                   D       -        0xffffff000357f180 [thread taskq]
> 100021                   D       -        0xffffff000357f300 [acpi_task_2]
> 100020                   D       -        0xffffff000357f300 [acpi_task_1]
> 100019                   D       -        0xffffff000357f300 [acpi_task_0]
> 100018                   D       -        0xffffff000357f380 [kqueue taskq]
> 100016                   D       -        0xffffff000352b900 [ffs_trim taskq]
> 100012                   D       -        0xffffff00033f7d00 [firmware taskq]
> 100000                   D       vmwait   0xffffffff80d33adc [swapper]
> db> call doadump
> Dumping 8191 out of 8173 MB:..1%..11%..21%..31%..41%..51%..61%..71%..81%..91%
> Dump complete
> = 0
> db> reset

Kostik Belousov

Sep 3, 2011, 6:49:04 AM
to Attilio Rao, ni...@desert.net, freebsd...@freebsd.org, ster...@camdensoftware.com, a...@freebsd.org
On Sat, Sep 03, 2011 at 12:05:47PM +0200, Attilio Rao wrote:
> This should be enough for someone NFS-aware to look into it.
>
> Were you also able to get a core?
>
> I'll try to look into it in the next days, in particular about the
> softclock state.
>
I am absolutely sure that this is a ZFS deadlock.

Hiroki Sato

Sep 6, 2011, 8:50:01 PM
to att...@freebsd.org, p...@freebsd.org, ster...@camdensoftware.com, a...@freebsd.org, ni...@desert.net, freebsd...@freebsd.org, k...@freebsd.org
Attilio Rao <att...@freebsd.org> wrote
in <CAJ-FndAChGndC=LkZNi7i6mOt+Spw3-O...@mail.gmail.com>:

at> This should be enough for someone NFS-aware to look into it.
at>
at> Were you also able to get a core?

Yes. But as kib@ pointed out, it seems to be a deadlock in ZFS. Some
experiments I did showed that this deadlock can be triggered at least
by doing "rm -rf" against a local directory that has a large number
of files/sub-directories.
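
The reproduction described above can be sketched as a small shell
script. This is only an illustration of the access pattern, not the
poster's actual test: the paths are hypothetical, and on the affected
machines the tree would have to live on a ZFS dataset for the deadlock
to be relevant (a temporary directory is used here just so the sketch
is self-contained).

```shell
#!/bin/sh
# Sketch of the trigger described above: build a directory tree with a
# large number of files/sub-directories, then "rm -rf" it.  On the
# affected systems this would be run on a ZFS dataset; mktemp is used
# here purely to make the example self-contained.
d=$(mktemp -d) || exit 1
for i in 1 2 3 4 5; do
    mkdir "$d/dir$i"
    j=1
    while [ "$j" -le 200 ]; do
        : > "$d/dir$i/f$j"       # create an empty file
        j=$((j + 1))
    done
done
echo "created $(find "$d" -type f | wc -l) files"
rm -rf "$d"                      # the removal is what triggered the hang
```

Scaling the file count up (and running it concurrently with NFS
traffic) would get closer to the load the poster describes.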

Then I updated the kernel to the latest 8-STABLE plus the WITNESS
option, because a fix for the LOR on the spa_config lock had been
committed and tracking locks without WITNESS was hard. The deadlock
can still be triggered after that.

During this investigation a disk had to be replaced, and resilvering
it is now in progress. A deadlock and the forced reboot after it make
recovery of the ZFS datasets take a long time (to commit the logs, I
think), so I will try to reproduce the deadlock and get a core dump
after the resilvering finishes.

If the old kernel and core of the deadlock I reported on Saturday are
still useful for debugging, I can put them somewhere you can access.

-- Hiroki

Hiroki Sato

Sep 9, 2011, 4:11:50 PM
to p...@freebsd.org, m...@freebsd.org, freebsd...@freebsd.org, att...@freebsd.org, k...@freebsd.org
Hiroki Sato <h...@freebsd.org> wrote
in <20110907.094717.227...@allbsd.org>:

hr> During this investigation a disk had to be replaced, and resilvering
hr> it is now in progress. A deadlock and the forced reboot after it make
hr> recovery of the ZFS datasets take a long time (to commit the logs, I
hr> think), so I will try to reproduce the deadlock and get a core dump
hr> after the resilvering finishes.

I think I could reproduce the symptoms. I have no idea whether these
are exactly the same as the ones that occurred on my box before,
because the kernel was replaced with one built with some debugging
options, but they are at least reproducible.

There are two symptoms. One is a panic; the DDB output from when the
panic occurred is the following:

----
Fatal trap 12: page fault while in kernel mode
cpuid = 1; apic id = 01
fault virtual address = 0x100000040
fault code = supervisor read data, page not present
instruction pointer = 0x20:0xffffffff8065b926
stack pointer = 0x28:0xffffff8257b94d70
frame pointer = 0x28:0xffffff8257b94e10
code segment = base 0x0, limit 0xfffff, type 0x1b
= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags = interrupt enabled, resume, IOPL = 0
current process = 992 (nfsd: service)
[thread pid 992 tid 100586 ]
Stopped at witness_checkorder+0x246: movl 0x40(%r13),%ebx

db> bt
Tracing pid 992 tid 100586 td 0xffffff00595d9000
witness_checkorder() at witness_checkorder+0x246
_sx_slock() at _sx_slock+0x35
dmu_bonus_hold() at dmu_bonus_hold+0x57
zfs_zget() at zfs_zget+0x237
zfs_dirent_lock() at zfs_dirent_lock+0x488
zfs_dirlook() at zfs_dirlook+0x69
zfs_lookup() at zfs_lookup+0x26b
zfs_freebsd_lookup() at zfs_freebsd_lookup+0x81
vfs_cache_lookup() at vfs_cache_lookup+0xf0
VOP_LOOKUP_APV() at VOP_LOOKUP_APV+0x40
lookup() at lookup+0x384
nfsvno_namei() at nfsvno_namei+0x268
nfsrvd_lookup() at nfsrvd_lookup+0xd6
nfsrvd_dorpc() at nfsrvd_dorpc+0x745
nfssvc_program() at nfssvc_program+0x447
svc_run_internal() at svc_run_internal+0x51b
svc_thread_start() at svc_thread_start+0xb
fork_exit() at fork_exit+0x11d
fork_trampoline() at fork_trampoline+0xe
--- trap 0xc, rip = 0x8006a031c, rsp = 0x7fffffffe6c8, rbp = 0x6 ---
----

The complete output can be found at:

http://people.allbsd.org/~hrs/zfs_panic_20110909_1/pool-zfs-20110909-1.txt

The other is getting stuck on ZFS access. The kernel keeps running
without a panic, but any access to the ZFS datasets leaves the
accessing program unresponsive. The DDB output can be found at:

http://people.allbsd.org/~hrs/zfs_panic_20110909_2/pool-zfs-20110909-2.txt

The trigger for both was some access to a ZFS dataset from the
NFS clients. Because the access pattern was complex I could not
narrow down the culprit, but it seems timing-dependent, and
simply doing "rm -rf" locally on the server can sometimes trigger
them.

The crash dump and the kernel can be found at the following URLs:

panic:
http://people.allbsd.org/~hrs/zfs_panic_20110909_1/

no panic but unresponsive:
http://people.allbsd.org/~hrs/zfs_panic_20110909_2/

kernel:
http://people.allbsd.org/~hrs/zfs_panic_20110909_kernel/

-- Hiroki

Hiroki Sato

Sep 10, 2011, 4:47:32 PM
to p...@freebsd.org, m...@freebsd.org, freebsd...@freebsd.org, att...@freebsd.org, k...@freebsd.org
Hiroki Sato <h...@freebsd.org> wrote
in <20110910.044841.232...@allbsd.org>:

hr> Hiroki Sato <h...@freebsd.org> wrote
hr> in <20110907.094717.227...@allbsd.org>:
hr>
hr> hr> During this investigation a disk had to be replaced, and resilvering
hr> hr> it is now in progress. A deadlock and the forced reboot after it make
hr> hr> recovery of the ZFS datasets take a long time (to commit the logs, I
hr> hr> think), so I will try to reproduce the deadlock and get a core dump
hr> hr> after the resilvering finishes.
hr>
hr> I think I could reproduce the symptoms. I have no idea whether these
hr> are exactly the same as the ones that occurred on my box before,
hr> because the kernel was replaced with one built with some debugging
hr> options, but they are at least reproducible.
hr>
hr> There are two symptoms. One is a panic; the DDB output from when the
hr> panic occurred is the following:

I am trying vfs.lookup_shared=0 and seeing how it goes. So far the
box seems to endure the kind of high load that quickly caused these
symptoms before.
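
For anyone wanting to try the same experiment: the knob is a regular
sysctl, so it can be toggled at runtime or set as a loader tunable. A
sketch of both (run as root; whether disabling it helps is exactly
what is being tested here, so treat this as an experiment, not a fix):

```shell
# Disable shared vnode lookups for the running kernel:
sysctl vfs.lookup_shared=0

# Or make the setting persistent across reboots via the loader:
echo 'vfs.lookup_shared=0' >> /boot/loader.conf
```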

-- Hiroki

Hiroki Sato

Sep 19, 2011, 11:25:51 PM
to p...@freebsd.org, m...@freebsd.org, freebsd...@freebsd.org, att...@freebsd.org, k...@freebsd.org
Hiroki Sato <h...@freebsd.org> wrote
in <20110911.054601.142...@allbsd.org>:

hr> Hiroki Sato <h...@freebsd.org> wrote
hr> in <20110910.044841.232...@allbsd.org>:
hr>
hr> hr> Hiroki Sato <h...@freebsd.org> wrote
hr> hr> in <20110907.094717.227...@allbsd.org>:
hr> hr>
hr> hr> hr> During this investigation a disk had to be replaced, and resilvering
hr> hr> hr> it is now in progress. A deadlock and the forced reboot after it make
hr> hr> hr> recovery of the ZFS datasets take a long time (to commit the logs, I
hr> hr> hr> think), so I will try to reproduce the deadlock and get a core dump
hr> hr> hr> after the resilvering finishes.
hr> hr>
hr> hr> I think I could reproduce the symptoms. I have no idea whether these
hr> hr> are exactly the same as the ones that occurred on my box before,
hr> hr> because the kernel was replaced with one built with some debugging
hr> hr> options, but they are at least reproducible.
hr> hr>
hr> hr> There are two symptoms. One is a panic; the DDB output from when the
hr> hr> panic occurred is the following:
hr>
hr> I am trying vfs.lookup_shared=0 and seeing how it goes. So far the
hr> box seems to endure the kind of high load that quickly caused these
hr> symptoms before.

The knob made no difference. The same panic or unresponsiveness
still occurs after about 24-32 hours.

-- Hiroki