assertion failed: spin_locked(lock) (2)


syzbot

Jan 11, 2021, 11:46:26 PM
to aka...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: d8b15e15 tests/linux: use Akaros's CFLAGS
git tree: akaros
console output: https://syzkaller.appspot.com/x/log.txt?x=13dd8e93500000
kernel config: https://syzkaller.appspot.com/x/.config?x=9b018fab5edd31b3
dashboard link: https://syzkaller.appspot.com/bug?extid=4c7456b5abf5cea9c92e

Unfortunately, I don't have any reproducer for this issue yet.

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+4c7456...@syzkaller.appspotmail.com

kernel panic at kern/src/atomic.c:100, from core 3: assertion failed: spin_locked(lock)
Stack Backtrace on Core 3:
#01 [<0xffffffffc200aa6c>] in backtrace at src/kdebug.c:235
#02 [<0xffffffffc200a205>] in _panic at src/init.c:275
#03 [<0xffffffffc2003d9d>] in spin_unlock at src/atomic.c:100
#04 [< [inline] >] in spin_unlock_irqsave at include/atomic.h:303
#04 [< [inline] >] in alloc_from_arena at src/arena.c:712
#04 [<0xffffffffc20024cd>] in arena_alloc at src/arena.c:842
#05 [< [inline] >] in kmem_cache_grow at src/slab.c:852
#05 [<0xffffffffc205409e>] in __kmem_alloc_from_slab at src/slab.c:608
#06 [<0xffffffffc20548da>] in kmem_cache_alloc at src/slab.c:696
#07 [<0xffffffffc2002497>] in arena_alloc at src/arena.c:839
#08 [<0xffffffffc2046675>] in kpages_alloc at src/page_alloc.c:80
#09 [<0xffffffffc200af5e>] in kmalloc at src/kmalloc.c:65
#10 [<0xffffffffc200b01f>] in kzmalloc at src/kmalloc.c:91
#11 [<0xffffffffc207fa69>] in mntralloc at drivers/dev/mnt.c:1120
#12 [<0xffffffffc207fb73>] in mntflushalloc at drivers/dev/mnt.c:1058
#13 [<0xffffffffc2080010>] in mountio at drivers/dev/mnt.c:854
#14 [<0xffffffffc2080105>] in mountrpc at drivers/dev/mnt.c:783
#15 [<0xffffffffc2080f3b>] in mntwalk at drivers/dev/mnt.c:475
#16 [<0xffffffffc2033bcb>] in walk at src/ns/chan.c:800
#17 [<0xffffffffc2034039>] in __namec_from at src/ns/chan.c:1138
#18 [<0xffffffffc2034c43>] in namec at src/ns/chan.c:1530
#19 [<0xffffffffc2041ddd>] in sysopenat at src/ns/sysfile.c:585
#20 [<0xffffffffc20592de>] in sys_openat at src/syscall.c:1826
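
The first panic fires inside spin_unlock() at src/atomic.c:100 while alloc_from_arena() is dropping the arena lock through spin_unlock_irqsave(), i.e. the lock word is not in the locked state at the moment it is released, which usually points at a double unlock or corruption of the lock. Below is a minimal, self-contained sketch of that assert-on-unlock pattern; the field names and helpers are assumptions for illustration only, not Akaros's actual kern/include/atomic.h or kern/src/atomic.c.

#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct spinlock {
        atomic_int locked;        /* 0 = free, 1 = held (assumed layout) */
};

static bool spin_locked(struct spinlock *lock)
{
        return atomic_load(&lock->locked) != 0;
}

static void spin_lock(struct spinlock *lock)
{
        int expected = 0;

        /* Spin until we move the lock word from 0 (free) to 1 (held). */
        while (!atomic_compare_exchange_weak(&lock->locked, &expected, 1))
                expected = 0;
}

static void spin_unlock(struct spinlock *lock)
{
        /*
         * This is the kind of check the report trips over: releasing a
         * lock that is not currently held fails the assertion.
         */
        assert(spin_locked(lock));
        atomic_store(&lock->locked, 0);
}

int main(void)
{
        struct spinlock lock = { 0 };

        spin_lock(&lock);
        spin_unlock(&lock);        /* fine: the lock was held */
        spin_unlock(&lock);        /* double unlock: the assertion fires */
        return 0;
}
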
04:45:36 executing program 2:
openat$prof_kpctl(0xffffffffffffff9c, &(0x7f0000000040)='/prof/kpctl\x00', 0xfffffffffffffdfb, 0x3, 0x0)
openat$net_tcp_1_local(0xffffffffffffff9c, &(0x7f0000000000)='/net/tcp/1/local\x00', 0x11, 0x1, 0x0)
04:45:36 executing program 1:
openat$proc_self_note(0xffffffffffffff9c, &(0x7f0000000000)='/proc/self/note\x00', 0x10, 0x1, 0x0)
openat$net_udp_0_data(0xffffffffffffff9c, &(0x7f0000000040)='/net/udp/0/data\x00', 0x10, 0x3, 0x0)
kernel panic at kern/src/arena.c:810, from core 0: OOM!
Stack Backtrace on Core 0:
#01 [<0xffffffffc200aa6c>] in backtrace at src/kdebug.c:235
#02 [<0xffffffffc200a205>] in _panic at src/init.c:275
#03 [<0xffffffffc2002d17>] in get_more_resources at src/arena.c:810
#04 [<0xffffffffc200250b>] in arena_alloc at src/arena.c:863
#05 [<0xffffffffc2002cb6>] in get_more_resources at src/arena.c:797
#06 [<0xffffffffc200250b>] in arena_alloc at src/arena.c:863
#07 [<0xffffffffc2046675>] in kpages_alloc at src/page_alloc.c:80
#08 [<0xffffffffc20070d0>] in env_setup_vm at src/env.c:50
#09 [<0xffffffffc204b0fe>] in proc_alloc at src/process.c:381
#10 [<0xffffffffc205a423>] in sys_fork at src/syscall.c:897
#11 [<0xffffffffc205a249>] in syscall at src/syscall.c:2582
#12 [<0xffffffffc205add8>] in run_local_syscall at src/syscall.c:2619
#13 [<0xffffffffc205b319>] in prep_syscalls at src/syscall.c:2639
#14 [<0xffffffffc20b7a92>] in sysenter_callwrapper at arch/x86/trap.c:932
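
The second panic is a straightforward out-of-memory path rather than a locking bug: arena_alloc() cannot satisfy the request from its own segments, falls back to get_more_resources(), which tries to import a span by calling arena_alloc() on the source arena (frames #03-#06), and the base arena has nothing left, so it panics with OOM. A minimal, self-contained sketch of that import-from-source pattern follows; the names, fields, and panic policy are assumptions for illustration only, not the real kern/src/arena.c.

#include <stdio.h>
#include <stdlib.h>

struct arena {
        const char *name;
        struct arena *source;        /* NULL for the base arena */
        size_t free_bytes;           /* crude stand-in for the arena's free segments */
};

static void panic(const char *msg)
{
        fprintf(stderr, "kernel panic: %s\n", msg);
        abort();
}

static void *arena_alloc(struct arena *arena, size_t size);

/* Called when an arena's own free space cannot satisfy a request. */
static void *get_more_resources(struct arena *arena, size_t size)
{
        if (!arena->source)
                panic("OOM!");        /* base arena: nowhere left to import from */
        /* Nested allocation from the source arena. */
        return arena_alloc(arena->source, size);
}

static void *arena_alloc(struct arena *arena, size_t size)
{
        if (arena->free_bytes >= size) {
                arena->free_bytes -= size;
                return malloc(size);        /* placeholder for carving a segment */
        }
        return get_more_resources(arena, size);
}

int main(void)
{
        struct arena base = { "base", NULL, 0 };        /* already exhausted */
        struct arena kpages = { "kpages", &base, 0 };

        arena_alloc(&kpages, 4096);        /* walks the same path and hits OOM */
        return 0;
}
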


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Jun 23, 2021, 10:50:16 AM
to aka...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes have not happened for a while, there is no reproducer, and there has been no activity.