panic: receive 1: so ADDR, so_type 3, sb_cc 33


syzbot

Dec 15, 2019, 9:32:11 AM
to syzkaller-o...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: a037bd75 whitespace from go fmt + update a comment
git tree: openbsd
console output: https://syzkaller.appspot.com/x/log.txt?x=15cc0a51e00000
kernel config: https://syzkaller.appspot.com/x/.config?x=fe55924c11e64b0a
dashboard link: https://syzkaller.appspot.com/bug?extid=3dbd177568a46236b95f

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+3dbd17...@syzkaller.appspotmail.com

login: panic: receive 1: so 0xfffffd805e4da300, so_type 3, sb_cc 33
Stopped at db_enter+0x18: addq $0x8,%rsp
TID PID UID PRFLAGS PFLAGS CPU COMMAND
*121968 23458 0 0 0 0 dhclient
db_enter() at db_enter+0x18 sys/arch/amd64/amd64/db_interface.c:398
panic(ffffffff8219b1e0) at panic+0x15c sys/kern/subr_prf.c:207
soreceive(fffffd805e4da300,0,ffff80001d376bd8,0,0,ffff80001d376ae4) at soreceive+0x16ac sys/kern/uipc_socket.c:798
soo_read(fffffd8066ff67f8,ffff80001d376bd8,0) at soo_read+0x53 sys/kern/sys_socket.c:70
dofilereadv(ffff8000ffff89e8,6,ffff80001d376bd8,0,ffff80001d376cc0) at dofilereadv+0x1a2 sys/kern/sys_generic.c:236
sys_read(ffff8000ffff89e8,ffff80001d376c70,ffff80001d376cc0) at sys_read+0x83 sys/kern/sys_generic.c:156
syscall(ffff80001d376d40) at syscall+0x507 sys/arch/amd64/amd64/trap.c:555
Xsyscall() at Xsyscall+0x128
end of kernel
end trace frame: 0x7f7ffffe8f50, count: 7
https://www.openbsd.org/ddb.html describes the minimum info required in bug reports.
Insufficient info makes it difficult to find and fix bugs.
ddb>
ddb> set $lines = 0
ddb> set $maxwidth = 0
ddb> show panic
receive 1: so 0xfffffd805e4da300, so_type 3, sb_cc 33
ddb> trace
db_enter() at db_enter+0x18 sys/arch/amd64/amd64/db_interface.c:398
panic(ffffffff8219b1e0) at panic+0x15c sys/kern/subr_prf.c:207
soreceive(fffffd805e4da300,0,ffff80001d376bd8,0,0,ffff80001d376ae4) at soreceive+0x16ac sys/kern/uipc_socket.c:798
soo_read(fffffd8066ff67f8,ffff80001d376bd8,0) at soo_read+0x53 sys/kern/sys_socket.c:70
dofilereadv(ffff8000ffff89e8,6,ffff80001d376bd8,0,ffff80001d376cc0) at dofilereadv+0x1a2 sys/kern/sys_generic.c:236
sys_read(ffff8000ffff89e8,ffff80001d376c70,ffff80001d376cc0) at sys_read+0x83 sys/kern/sys_generic.c:156
syscall(ffff80001d376d40) at syscall+0x507 sys/arch/amd64/amd64/trap.c:555
Xsyscall() at Xsyscall+0x128
end of kernel
end trace frame: 0x7f7ffffe8f50, count: -8
ddb> show registers
rdi 0
rsi 0x1
rbp 0xffff80001d376920
rbx 0xffff80001d3769d0
rdx 0x2
rcx 0
rax 0
r8 0xffff80001d3768e0
r9 0x1
r10 0
r11 0xa10854ef37b80b93
r12 0x3000000008
r13 0xffff80001d376930
r14 0x100
r15 0x1
rip 0xffffffff81f1cb68 db_enter+0x18
cs 0x8
rflags 0x246
rsp 0xffff80001d376910
ss 0x10
db_enter+0x18: addq $0x8,%rsp
ddb> show proc
PROC (dhclient) pid=121968 stat=onproc
flags process=0 proc=0
pri=50, usrpri=50, nice=20
forw=0xffffffffffffffff, list=0xffff8000ffff8c60,0xffff8000ffff93d8
process=0xffff8000ffff66e0 user=0xffff80001d371000,
vmspace=0xfffffd806bc12440
estcpu=0, cpticks=0, pctcpu=0.0
user=0, sys=0, intr=0
ddb> ps
PID TID PPID UID S FLAGS WAIT COMMAND
68445 73834 10460 0 2 0 syz-executor.1
68445 193901 10460 0 2 0x4000000 syz-executor.1
10926 376209 80706 0 3 0x80 nanosleep syz-executor.0
10926 138085 80706 0 3 0x4000080 msgwait syz-executor.0
10926 106298 80706 0 3 0x4000080 fsleep syz-executor.0
14889 467231 0 0 3 0x14200 acct acct
80706 457557 98613 0 3 0x82 nanosleep syz-executor.0
10460 297215 98613 0 3 0x82 nanosleep syz-executor.1
98613 230004 16945 0 3 0x82 thrsleep syz-fuzzer
98613 520368 16945 0 3 0x4000082 thrsleep syz-fuzzer
98613 391747 16945 0 3 0x4000082 thrsleep syz-fuzzer
98613 274729 16945 0 3 0x4000082 thrsleep syz-fuzzer
98613 51524 16945 0 3 0x4000082 thrsleep syz-fuzzer
98613 129662 16945 0 3 0x4000082 kqread syz-fuzzer
98613 492437 16945 0 3 0x4000082 thrsleep syz-fuzzer
16945 43158 4333 0 3 0x10008a pause ksh
4333 12418 31891 0 3 0x92 select sshd
66524 329127 1 0 3 0x100083 ttyin getty
31891 411417 1 0 3 0x80 select sshd
98149 345823 45599 73 3 0x100090 kqread syslogd
45599 272425 1 0 3 0x100082 netio syslogd
15462 6917 1 77 2 0x100010 dhclient
*23458 121968 1 0 7 0 dhclient
54301 216567 0 0 2 0x14200 zerothread
25347 428390 0 0 3 0x14200 aiodoned aiodoned
37616 515308 0 0 3 0x14200 syncer update
31829 242728 0 0 3 0x14200 cleaner cleaner
79327 363958 0 0 3 0x14200 reaper reaper
29434 337709 0 0 3 0x14200 pgdaemon pagedaemon
87606 417835 0 0 3 0x14200 bored crynlk
84304 469171 0 0 3 0x14200 bored crypto
97733 68977 0 0 3 0x40014200 acpi0 acpi0
16564 389672 0 0 3 0x14200 bored softnet
30634 373169 0 0 3 0x14200 bored systqmp
89567 485881 0 0 3 0x14200 bored systq
65476 85463 0 0 3 0x40014200 bored softclock
1175 176737 0 0 3 0x40014200 idle0
99510 461451 0 0 3 0x14200 bored smr
1 228378 0 0 3 0x82 wait init
0 0 -1 0 3 0x10200 scheduler swapper
ddb> show all locks
No such command
ddb> show malloc
Type InUse MemUse HighUse Limit Requests Type Lim
devbuf 9477 6340K 6778K 78643K 10928 0
pcb 13 8K 8K 78643K 63 0
rtable 83 2K 3K 78643K 324 0
ifaddr 50 11K 11K 78643K 78 0
counters 19 16K 16K 78643K 19 0
ioctlops 0 0K 2K 78643K 37 0
iov 0 0K 24K 78643K 27 0
mount 1 1K 1K 78643K 1 0
vnodes 1223 77K 77K 78643K 1335 0
UFS quota 1 32K 32K 78643K 1 0
UFS mount 5 36K 36K 78643K 5 0
shm 2 1K 5K 78643K 5 0
VM map 2 0K 0K 78643K 2 0
sem 12 0K 0K 78643K 22 0
dirhash 12 2K 2K 78643K 12 0
ACPI 1794 195K 288K 78643K 12646 0
file desc 6 17K 25K 78643K 173 0
sigio 0 0K 0K 78643K 2 0
proc 48 38K 54K 78643K 375 0
subproc 32 2K 2K 78643K 34 0
NFS srvsock 1 0K 0K 78643K 1 0
NFS daemon 1 16K 16K 78643K 1 0
in_multi 25 1K 2K 78643K 46 0
ether_multi 1 0K 0K 78643K 1 0
ISOFS mount 1 32K 32K 78643K 1 0
MSDOSFS mount 1 16K 16K 78643K 1 0
ttys 48 212K 212K 78643K 48 0
exec 0 0K 1K 78643K 195 0
pagedep 1 8K 8K 78643K 1 0
inodedep 1 32K 32K 78643K 1 0
newblk 1 0K 0K 78643K 1 0
VM swap 7 26K 26K 78643K 7 0
UVM amap 121 70K 70K 78643K 1240 0
UVM aobj 9 2K 2K 78643K 9 0
memdesc 1 4K 4K 78643K 1 0
crypto data 1 1K 1K 78643K 1 0
ip6_options 0 0K 0K 78643K 46 0
NDP 8 0K 0K 78643K 16 0
temp 100 3022K 3085K 78643K 3658 0
kqueue 0 0K 0K 78643K 2 0
SYN cache 2 16K 16K 78643K 2 0
ddb> show all pools
Name Size Requests Fail Releases Pgreq Pgrel Npage Hiwat Minpg Maxpg Idle
arp 64 7 0 2 1 0 1 1 0 8 0
rtpcb 80 157 0 155 1 0 1 1 0 8 0
rtentry 112 50 0 17 2 0 2 2 0 8 0
unpcb 120 93 0 85 1 0 1 1 0 8 0
syncache 264 6 0 6 2 1 1 1 0 8 1
sackhl 24 1 0 1 1 0 1 1 0 8 1
tcpqe 32 343 0 343 1 0 1 1 0 8 1
tcpcb 544 60 0 56 1 0 1 1 0 8 0
ipq 40 2 0 2 1 0 1 1 0 8 1
ipqe 40 90 0 90 1 0 1 1 0 8 1
inpcb 280 227 0 218 2 0 2 2 0 8 1
nd6 48 5 0 4 1 0 1 1 0 8 0
art_heap8 4096 1 0 0 1 0 1 1 0 8 0
art_heap4 256 237 0 82 14 0 14 14 0 8 2
art_table 32 238 0 82 2 0 2 2 0 8 0
art_node 16 49 0 18 1 0 1 1 0 8 0
sysvmsgpl 40 14 0 10 1 0 1 1 0 8 0
semapl 112 20 0 10 1 0 1 1 0 8 0
shmpl 112 7 0 0 1 0 1 1 0 8 0
dirhash 1024 17 0 0 3 0 3 3 0 8 0
dino1pl 128 1648 0 246 46 0 46 46 0 8 0
ffsino 240 1648 0 246 83 0 83 83 0 8 0
nchpl 144 2098 0 483 60 0 60 60 0 8 0
uvmvnodes 72 1777 0 0 33 0 33 33 0 8 0
vnodes 208 1777 0 0 94 0 94 94 0 8 0
namei 1024 5566 0 5566 1 0 1 1 0 8 1
vcpupl 1984 6 0 0 1 0 1 1 0 8 0
vmpool 528 6 0 0 1 0 1 1 0 8 0
scsiplug 64 1 0 1 1 0 1 1 0 8 1
scxspl 192 6088 0 6088 1 0 1 1 0 8 1
plimitpl 152 24 0 17 1 0 1 1 0 8 0
sigapl 432 344 0 330 2 0 2 2 0 8 0
futexpl 56 4055 0 4054 1 0 1 1 0 8 0
knotepl 112 74 0 55 1 0 1 1 0 8 0
kqueuepl 104 46 0 44 1 0 1 1 0 8 0
pipepl 128 334 0 315 2 0 2 2 0 8 1
fdescpl 424 345 0 330 2 0 2 2 0 8 0
filepl 120 2389 0 2291 5 0 5 5 0 8 2
lockfpl 104 221 0 219 1 0 1 1 0 8 0
lockfspl 48 22 0 20 1 0 1 1 0 8 0
sessionpl 112 17 0 7 1 0 1 1 0 8 0
pgrppl 48 19 0 9 1 0 1 1 0 8 0
ucredpl 96 211 0 204 1 0 1 1 0 8 0
zombiepl 144 330 0 330 1 0 1 1 0 8 1
processpl 872 360 0 330 4 0 4 4 0 8 0
procpl 632 543 0 504 4 0 4 4 0 8 0
sockpl 384 480 0 463 4 0 4 4 0 8 2
mcl64k 65536 15 0 15 1 0 1 1 0 8 1
mcl16k 16384 2 0 2 1 0 1 1 0 8 1
mcl12k 12288 6 0 6 1 0 1 1 0 8 1
mcl9k 9216 1 0 1 1 0 1 1 0 8 1
mcl8k 8192 14 0 14 1 0 1 1 0 8 1
mcl4k 4096 39 0 39 2 1 1 1 0 8 1
mcl2k2 2112 7 0 7 1 0 1 1 0 8 1
mcl2k 2048 67947 0 67882 18 5 13 16 0 8 4
mtagpl 80 18 0 4 2 1 1 1 0 8 0
mbufpl 256 109548 0 109422 29 8 21 23 0 8 5
bufpl 280 6562 0 1330 374 0 374 374 0 8 0
anonpl 16 49583 0 30188 81 1 80 80 0 107 2
amapchunkpl 152 1358 0 1202 8 1 7 8 0 158 0
amappl16 192 1801 0 730 55 0 55 55 0 8 1
amappl15 184 50 0 46 1 0 1 1 0 8 0
amappl14 176 26 0 23 1 0 1 1 0 8 0
amappl13 168 2 0 1 1 0 1 1 0 8 0
amappl12 160 80 0 75 1 0 1 1 0 8 0
amappl11 152 121 0 107 1 0 1 1 0 8 0
amappl10 144 19 0 17 1 0 1 1 0 8 0
amappl9 136 568 0 564 1 0 1 1 0 8 0
amappl8 128 116 0 92 1 0 1 1 0 8 0
amappl7 120 91 0 81 1 0 1 1 0 8 0
amappl6 112 120 0 114 1 0 1 1 0 8 0
amappl5 104 207 0 196 1 0 1 1 0 8 0
amappl4 96 488 0 457 1 0 1 1 0 8 0
amappl3 88 193 0 182 1 0 1 1 0 8 0
amappl2 80 1981 0 1905 3 1 2 3 0 8 0
amappl1 72 15890 0 15450 27 18 9 20 0 8 0
amappl 80 766 0 717 2 0 2 2 0 84 0
dma4096 4096 1 0 1 1 1 0 1 0 8 0
dma256 256 6 0 6 1 1 0 1 0 8 0
dma128 128 253 0 253 1 1 0 1 0 8 0
dma64 64 6 0 6 1 1 0 1 0 8 0
dma32 32 7 0 7 1 1 0 1 0 8 0
dma16 16 18 0 17 1 0 1 1 0 8 0
aobjpl 64 8 0 0 1 0 1 1 0 8 0
uaddrrnd 24 351 0 330 1 0 1 1 0 8 0
uaddrbest 32 2 0 0 1 0 1 1 0 8 0
uaddr 24 351 0 330 1 0 1 1 0 8 0
vmmpekpl 168 6516 0 6488 2 0 2 2 0 8 0
vmmpepl 168 49286 0 46995 136 11 125 126 0 357 25
vmsppl 272 350 0 330 2 0 2 2 0 8 0
pdppl 4096 708 0 666 6 0 6 6 0 8 0
pvpl 32 159659 0 137209 191 0 191 191 0 265 9
pmappl 200 350 0 330 2 0 2 2 0 8 0
extentpl 40 46 0 29 1 0 1 1 0 8 0
phpool 112 165 0 16 5 0 5 5 0 8 0


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

Anton Lindqvist

Dec 15, 2019, 9:46:58 AM
to syzbot, syzkaller-o...@googlegroups.com
#syz dup: panic: receive 1: so ADDR, so_type 3, sb_cc 176