INFO: rcu detected stall in __handle_mm_fault


syzbot

Jan 23, 2020, 8:07:11 AM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 244dc268 Merge tag 'drm-fixes-2020-01-19' of git://anongit..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=142373b9e00000
kernel config: https://syzkaller.appspot.com/x/.config?x=d9290aeb7e6cf1c4
dashboard link: https://syzkaller.appspot.com/bug?extid=f1841cd7b4ebed415014
compiler: gcc (GCC) 9.0.0 20181231 (experimental)
CC: [b...@alien8.de h...@zytor.com linux-...@vger.kernel.org mi...@redhat.com tg...@linutronix.de x...@kernel.org]

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+f1841c...@syzkaller.appspotmail.com

rcu: INFO: rcu_preempt self-detected stall on CPU
rcu: 0-...!: (1 ticks this GP) idle=f0a/1/0x4000000000000002 softirq=24323/24323 fqs=0
(t=11213 jiffies g=18537 q=148)
rcu: rcu_preempt kthread starved for 11213 jiffies! g18537 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: RCU grace-period kthread stack dump:
rcu_preempt R running task 29592 10 2 0x80004000
Call Trace:
context_switch kernel/sched/core.c:3385 [inline]
__schedule+0x934/0x1f90 kernel/sched/core.c:4081
schedule+0xdc/0x2b0 kernel/sched/core.c:4155
schedule_timeout+0x486/0xc50 kernel/time/timer.c:1895
rcu_gp_fqs_loop kernel/rcu/tree.c:1661 [inline]
rcu_gp_kthread+0x9b2/0x18d0 kernel/rcu/tree.c:1821
kthread+0x361/0x430 kernel/kthread.c:255
ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:352
NMI backtrace for cpu 0
CPU: 0 PID: 10787 Comm: syz-executor.2 Not tainted 5.5.0-rc6-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
<IRQ>
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x197/0x210 lib/dump_stack.c:118
nmi_cpu_backtrace.cold+0x70/0xb2 lib/nmi_backtrace.c:101
nmi_trigger_cpumask_backtrace+0x23b/0x28b lib/nmi_backtrace.c:62
arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
trigger_single_cpu_backtrace include/linux/nmi.h:164 [inline]
rcu_dump_cpu_stacks+0x183/0x1cf kernel/rcu/tree_stall.h:254
print_cpu_stall kernel/rcu/tree_stall.h:455 [inline]
check_cpu_stall kernel/rcu/tree_stall.h:529 [inline]
rcu_pending kernel/rcu/tree.c:2827 [inline]
rcu_sched_clock_irq.cold+0x509/0xc0d kernel/rcu/tree.c:2271
update_process_times+0x2d/0x70 kernel/time/timer.c:1726
tick_sched_handle+0xa2/0x190 kernel/time/tick-sched.c:171
tick_sched_timer+0x53/0x140 kernel/time/tick-sched.c:1314
__run_hrtimer kernel/time/hrtimer.c:1517 [inline]
__hrtimer_run_queues+0x364/0xe40 kernel/time/hrtimer.c:1579
hrtimer_interrupt+0x314/0x770 kernel/time/hrtimer.c:1641
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1110 [inline]
smp_apic_timer_interrupt+0x160/0x610 arch/x86/kernel/apic/apic.c:1135
apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:829
</IRQ>
RIP: 0010:pv_wait_head_or_lock kernel/locking/qspinlock_paravirt.h:434 [inline]
RIP: 0010:__pv_queued_spin_lock_slowpath+0x3e9/0xc40 kernel/locking/qspinlock.c:507
Code: 41 c6 45 01 01 48 b8 00 00 00 00 00 fc ff df 48 c1 e9 03 41 be 00 80 00 00 83 e3 07 4c 8d 24 01 41 bf 01 00 00 00 eb 0c f3 90 <41> 83 ee 01 0f 84 33 05 00 00 41 0f b6 04 24 38 d8 7f 08 84 c0 0f
RSP: 0000:ffffc900017e7b30 EFLAGS: 00000206 ORIG_RAX: ffffffffffffff13
RAX: 0000000000000003 RBX: 0000000000000000 RCX: 1ffff11012eb016b
RDX: 0000000000000001 RSI: ffffffff817a8b0e RDI: 0000000000000282
RBP: ffffc900017e7c00 R08: ffff88804f734340 R09: fffffbfff165e79f
R10: 0000000000000001 R11: 0000000000000000 R12: ffffed1012eb016b
R13: ffff888097580b58 R14: 0000000000001690 R15: 0000000000000001
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:638 [inline]
queued_spin_lock_slowpath arch/x86/include/asm/qspinlock.h:50 [inline]
queued_spin_lock include/asm-generic/qspinlock.h:81 [inline]
do_raw_spin_lock+0x21d/0x2f0 kernel/locking/spinlock_debug.c:113
__raw_spin_lock include/linux/spinlock_api_smp.h:143 [inline]
_raw_spin_lock+0x37/0x40 kernel/locking/spinlock.c:151
spin_lock include/linux/spinlock.h:338 [inline]
handle_pte_fault mm/memory.c:4007 [inline]
__handle_mm_fault+0x177b/0x3cc0 mm/memory.c:4127
handle_mm_fault+0x3b2/0xa50 mm/memory.c:4164
do_user_addr_fault arch/x86/mm/fault.c:1441 [inline]
__do_page_fault+0x536/0xd80 arch/x86/mm/fault.c:1506
do_page_fault+0x38/0x590 arch/x86/mm/fault.c:1530
page_fault+0x39/0x40 arch/x86/entry/entry_64.S:1203
RIP: 0033:0x40419e
Code: 48 dc ff ff 0f 1f 84 00 00 00 00 00 0f b6 b5 84 00 00 00 bf 61 00 4c 00 31 c0 e8 0d dd ff ff e9 30 fe ff ff 8b 0b 48 83 f8 ff <48> 89 45 78 89 8d 80 00 00 00 0f 85 8d fd ff ff 85 c9 0f 85 85 fd
RSP: 002b:00007ff48c35ec90 EFLAGS: 00010213
RAX: 0000000000000000 RBX: 00007ff48c35f6d4 RCX: 0000000000000000
RDX: 0000000000000001 RSI: 0000000000403ecc RDI: 0000000000000006
RBP: 000000000075bf20 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000ffffffff
R13: 000000000000039d R14: 00000000004c4b76 R15: 000000000075bf2c


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Apr 18, 2020, 8:50:09 AM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while; there is no reproducer and no recent activity.