BUG: soft lockup in free_work (2)


syzbot

May 16, 2020, 11:00:11 PM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: ac935d22 Add linux-next specific files for 20200415
git tree: linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=1492f294100000
kernel config: https://syzkaller.appspot.com/x/.config?x=bc498783097e9019
dashboard link: https://syzkaller.appspot.com/bug?extid=cfb23b4598344ef942e0
compiler: gcc (GCC) 9.0.0 20181231 (experimental)
CC: [big...@linutronix.de linux-...@vger.kernel.org na...@vmware.com pet...@infradead.org tg...@linutronix.de]

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+cfb23b...@syzkaller.appspotmail.com

watchdog: BUG: soft lockup - CPU#0 stuck for 123s! [kworker/0:0:5]
Modules linked in:
irq event stamp: 5512878
hardirqs last enabled at (5512877): [<ffffffff81007636>] trace_hardirqs_on_thunk+0x1a/0x1c arch/x86/entry/thunk_64.S:41
hardirqs last disabled at (5512878): [<ffffffff81007652>] trace_hardirqs_off_thunk+0x1a/0x1c arch/x86/entry/thunk_64.S:42
softirqs last enabled at (5510042): [<ffffffff880006ef>] __do_softirq+0x6ef/0x9f7 kernel/softirq.c:319
softirqs last disabled at (5509997): [<ffffffff81461a82>] invoke_softirq kernel/softirq.c:373 [inline]
softirqs last disabled at (5509997): [<ffffffff81461a82>] irq_exit+0x192/0x1d0 kernel/softirq.c:413
CPU: 0 PID: 5 Comm: kworker/0:0 Not tainted 5.7.0-rc1-next-20200415-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: events free_work
RIP: 0010:csd_lock_wait kernel/smp.c:109 [inline]
RIP: 0010:smp_call_function_single+0x18d/0x480 kernel/smp.c:311
Code: 00 48 8b 4c 24 08 48 8b 54 24 10 48 8d 74 24 40 8b 7c 24 1c e8 d4 f9 ff ff 41 89 c5 eb 07 e8 7a d5 0a 00 f3 90 44 8b 64 24 58 <31> ff 41 83 e4 01 44 89 e6 e8 05 d7 0a 00 45 85 e4 75 e1 e8 5b d5
RSP: 0018:ffffc90000cbf9a0 EFLAGS: 00000293 ORIG_RAX: ffffffffffffff13
RAX: ffff8880a9598140 RBX: 1ffff92000197f38 RCX: ffffffff81685f2b
RDX: 0000000000000000 RSI: ffffffff81685f16 RDI: 0000000000000005
RBP: ffffc90000cbfa78 R08: ffff8880a9598140 R09: ffffed1015ce7129
R10: ffff8880ae738947 R11: ffffed1015ce7128 R12: 0000000000000003
R13: 0000000000000000 R14: 0000000000000001 R15: 0000000000000040
FS: 0000000000000000(0000) GS:ffff8880ae600000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055f21eb669a8 CR3: 0000000063872000 CR4: 00000000001426f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
smp_call_function_many_cond+0x1a3/0x980 kernel/smp.c:447
smp_call_function_many kernel/smp.c:506 [inline]
smp_call_function+0x40/0x80 kernel/smp.c:528
on_each_cpu+0x2a/0x1e0 kernel/smp.c:628
flush_tlb_kernel_range+0x197/0x250 arch/x86/mm/tlb.c:839
__purge_vmap_area_lazy+0xcc4/0x1f60 mm/vmalloc.c:1329
try_purge_vmap_area_lazy mm/vmalloc.c:1348 [inline]
free_vmap_area_noflush+0x2bc/0x370 mm/vmalloc.c:1384
free_unmap_vmap_area mm/vmalloc.c:1397 [inline]
remove_vm_area+0x1c7/0x230 mm/vmalloc.c:2217
vm_remove_mappings mm/vmalloc.c:2244 [inline]
__vunmap+0x232/0x960 mm/vmalloc.c:2306
free_work+0x58/0x70 mm/vmalloc.c:66
process_one_work+0x965/0x16a0 kernel/workqueue.c:2268
worker_thread+0x96/0xe20 kernel/workqueue.c:2414
kthread+0x388/0x470 kernel/kthread.c:268
ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:352
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 7539 Comm: syz-executor.5 Not tainted 5.7.0-rc1-next-20200415-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:__read_once_size include/linux/compiler.h:232 [inline]
RIP: 0010:trylock_clear_pending kernel/locking/qspinlock_paravirt.h:121 [inline]
RIP: 0010:pv_wait_head_or_lock kernel/locking/qspinlock_paravirt.h:435 [inline]
RIP: 0010:__pv_queued_spin_lock_slowpath+0x3b1/0xb60 kernel/locking/qspinlock.c:508
Code: 83 e3 07 41 be 01 00 00 00 48 b8 00 00 00 00 00 fc ff df 4c 8d 2c 01 eb 0c f3 90 41 83 ec 01 0f 84 ea 04 00 00 41 0f b6 45 00 <38> d8 7f 08 84 c0 0f 85 34 06 00 00 0f b6 45 00 84 c0 75 db be 02
RSP: 0018:ffffc900084bf798 EFLAGS: 00000206
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 1ffffffff134ca0c
RDX: 0000000000000001 RSI: ffffffff8178c0c5 RDI: 0000000000000286
RBP: ffffffff89a65060 R08: ffff888058d98180 R09: fffffbfff186273d
R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000001190
R13: fffffbfff134ca0c R14: 0000000000000001 R15: ffff8880ae738700
FS: 0000000000ff2940(0000) GS:ffff8880ae700000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000000073b138 CR3: 0000000058d9c000 CR4: 00000000001426e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:645 [inline]
queued_spin_lock_slowpath arch/x86/include/asm/qspinlock.h:50 [inline]
queued_spin_lock include/asm-generic/qspinlock.h:81 [inline]
do_raw_spin_lock+0x20d/0x2e0 kernel/locking/spinlock_debug.c:113
spin_lock include/linux/spinlock.h:353 [inline]
alloc_vmap_area+0xb81/0x1e20 mm/vmalloc.c:1152
__get_vm_area_node+0x178/0x3b0 mm/vmalloc.c:2117
__vmalloc_node_range+0xdc/0x7a0 mm/vmalloc.c:2549
__vmalloc_node mm/vmalloc.c:2609 [inline]
__vmalloc_node_flags mm/vmalloc.c:2623 [inline]
vzalloc+0x67/0x80 mm/vmalloc.c:2668
alloc_counters.isra.0+0x50/0x690 net/ipv6/netfilter/ip6_tables.c:816
copy_entries_to_user net/ipv4/netfilter/arp_tables.c:680 [inline]
get_entries net/ipv4/netfilter/arp_tables.c:867 [inline]
do_arpt_get_ctl+0x46a/0x780 net/ipv4/netfilter/arp_tables.c:1489
nf_sockopt net/netfilter/nf_sockopt.c:104 [inline]
nf_getsockopt+0x72/0xd0 net/netfilter/nf_sockopt.c:122
ip_getsockopt net/ipv4/ip_sockglue.c:1576 [inline]
ip_getsockopt+0x165/0x1c0 net/ipv4/ip_sockglue.c:1556
tcp_getsockopt net/ipv4/tcp.c:3782 [inline]
tcp_getsockopt+0x86/0xd0 net/ipv4/tcp.c:3776
__sys_getsockopt+0x14b/0x2e0 net/socket.c:2177
__do_sys_getsockopt net/socket.c:2192 [inline]
__se_sys_getsockopt net/socket.c:2189 [inline]
__x64_sys_getsockopt+0xba/0x150 net/socket.c:2189
do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
entry_SYSCALL_64_after_hwframe+0x49/0xb3
RIP: 0033:0x45f33a
Code: b8 34 01 00 00 0f 05 48 3d 01 f0 ff ff 0f 83 ed 8b fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 49 89 ca b8 37 00 00 00 0f 05 <48> 3d 01 f0 ff ff 0f 83 ca 8b fb ff c3 66 0f 1f 84 00 00 00 00 00
RSP: 002b:00007ffeb57b4f98 EFLAGS: 00000212 ORIG_RAX: 0000000000000037
RAX: ffffffffffffffda RBX: 00007ffeb57b50a0 RCX: 000000000045f33a
RDX: 0000000000000061 RSI: 0000000000000000 RDI: 0000000000000003
RBP: 0000000000000003 R08: 00007ffeb57b4fac R09: 000000000000000a
R10: 00007ffeb57b50a0 R11: 0000000000000212 R12: 0000000000000000
R13: 00007ffeb57b5720 R14: 00000000000982e5 R15: 00007ffeb57b5730


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Jul 11, 2020, 10:53:15 PM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.