[Android 5.15] BUG: soft lockup in vfork

syzbot

Apr 7, 2024, 9:21:19 PM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 993bed180178 Merge "Merge branch 'android13-5.15' into bra..
git tree: android13-5.15-lts
console output: https://syzkaller.appspot.com/x/log.txt?x=16ea7e99180000
kernel config: https://syzkaller.appspot.com/x/.config?x=49ce29477ba81e8f
dashboard link: https://syzkaller.appspot.com/bug?extid=0bdff378a7f08b8bc71a
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=151a7e75180000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=15171ba1180000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/803c31a2cd6a/disk-993bed18.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/fb3ff89c92a4/vmlinux-993bed18.xz
kernel image: https://storage.googleapis.com/syzbot-assets/b0859874279b/bzImage-993bed18.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+0bdff3...@syzkaller.appspotmail.com

watchdog: BUG: soft lockup - CPU#1 stuck for 143s! [init:1]
Modules linked in:
CPU: 1 PID: 1 Comm: init Not tainted 5.15.148-syzkaller-00718-g993bed180178 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
RIP: 0010:native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
RIP: 0010:arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
RIP: 0010:kvm_wait+0x147/0x180 arch/x86/kernel/kvm.c:918
Code: 4c 89 e8 48 c1 e8 03 42 0f b6 04 20 84 c0 44 8b 74 24 1c 75 34 41 0f b6 45 00 44 38 f0 75 10 66 90 0f 00 2d 5b 03 f3 03 fb f4 <e9> 24 ff ff ff fb e9 1e ff ff ff 44 89 e9 80 e1 07 38 c1 7c a3 4c
RSP: 0018:ffffc90000017140 EFLAGS: 00000246
RAX: 0000000000000001 RBX: 1ffff92000002e2c RCX: 1ffffffff0d1aa9c
RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffff8881f7137ed4
RBP: ffffc900000171f0 R08: dffffc0000000000 R09: ffffed103ee26fdb
R10: 0000000000000000 R11: dffffc0000000001 R12: dffffc0000000000
R13: ffff8881f7137ed4 R14: 0000000000000001 R15: 1ffff92000002e30
FS: 00007fd706af9380(0000) GS:ffff8881f7100000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f669770d130 CR3: 000000010b948000 CR4: 00000000003506a0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<IRQ>
</IRQ>
<TASK>
pv_wait arch/x86/include/asm/paravirt.h:597 [inline]
pv_wait_node kernel/locking/qspinlock_paravirt.h:325 [inline]
__pv_queued_spin_lock_slowpath+0x41b/0xc40 kernel/locking/qspinlock.c:473
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:585 [inline]
queued_spin_lock_slowpath arch/x86/include/asm/qspinlock.h:51 [inline]
queued_spin_lock include/asm-generic/qspinlock.h:85 [inline]
do_raw_spin_lock include/linux/spinlock.h:187 [inline]
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:136 [inline]
_raw_spin_lock_bh+0x139/0x1b0 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xb1/0x2f0 net/core/sock_map.c:937
bpf_prog_a8aaa52f2e199321+0x42/0x3d4
bpf_dispatcher_nop_func include/linux/bpf.h:785 [inline]
__bpf_prog_run include/linux/filter.h:625 [inline]
bpf_prog_run include/linux/filter.h:632 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:1883 [inline]
bpf_trace_run4+0x13f/0x270 kernel/trace/bpf_trace.c:1922
__bpf_trace_mm_page_alloc+0xbf/0xf0 include/trace/events/kmem.h:201
__traceiter_mm_page_alloc+0x3a/0x60 include/trace/events/kmem.h:201
trace_mm_page_alloc include/trace/events/kmem.h:201 [inline]
__alloc_pages+0x3cb/0x8f0 mm/page_alloc.c:5798
__alloc_pages_node include/linux/gfp.h:591 [inline]
alloc_pages_node include/linux/gfp.h:605 [inline]
alloc_pages include/linux/gfp.h:618 [inline]
__get_free_pages+0x10/0x30 mm/page_alloc.c:5813
kasan_populate_vmalloc_pte+0x39/0x130 mm/kasan/shadow.c:266
apply_to_pte_range mm/memory.c:2604 [inline]
apply_to_pmd_range mm/memory.c:2648 [inline]
apply_to_pud_range mm/memory.c:2684 [inline]
apply_to_p4d_range mm/memory.c:2720 [inline]
__apply_to_page_range+0x8dd/0xbe0 mm/memory.c:2754
apply_to_page_range+0x3b/0x50 mm/memory.c:2773
kasan_populate_vmalloc+0x65/0x70 mm/kasan/shadow.c:297
alloc_vmap_area+0x192f/0x1a80 mm/vmalloc.c:1576
__get_vm_area_node+0x158/0x360 mm/vmalloc.c:2439
__vmalloc_node_range+0xe2/0x8d0 mm/vmalloc.c:3051
alloc_thread_stack_node kernel/fork.c:254 [inline]
dup_task_struct+0x416/0xc60 kernel/fork.c:944
copy_process+0x5c4/0x3290 kernel/fork.c:2094
kernel_clone+0x21e/0x9e0 kernel/fork.c:2662
__do_sys_vfork+0xcd/0x130 kernel/fork.c:2750
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7fd706c33a68
Code: 00 48 8d b8 e0 02 00 00 48 89 b8 d8 02 00 00 48 89 b8 e0 02 00 00 b8 11 01 00 00 0f 05 44 89 c0 c3 90 5f b8 3a 00 00 00 0f 05 <57> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 90 43 0f 00 f7 d8 64 89 01 48
RSP: 002b:00007ffd16b501f0 EFLAGS: 00000246 ORIG_RAX: 000000000000003a
RAX: ffffffffffffffda RBX: 0000557d5126aa50 RCX: 00007fd706c33a68
RDX: 0000000000000008 RSI: 0000000000000000 RDI: 00007fd706dbebed
RBP: 00007fd706df9528 R08: 0000000000000007 R09: dd9e00cf133aa7b0
R10: 00007ffd16b50230 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000018 R14: 0000557d4fde1169 R15: 00007fd706e2aa80
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup