INFO: rcu detected stall in cleanup_net


syzbot

Jan 21, 2020, 9:17:09 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: dc4ba5be Linux 4.19.97
git tree: linux-4.19.y
console output: https://syzkaller.appspot.com/x/log.txt?x=10596185e00000
kernel config: https://syzkaller.appspot.com/x/.config?x=cc17a984a7e9c2f3
dashboard link: https://syzkaller.appspot.com/bug?extid=725126310a27da0ea8fb
compiler: gcc (GCC) 9.0.0 20181231 (experimental)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+725126...@syzkaller.appspotmail.com

batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
rcu: INFO: rcu_preempt self-detected stall on CPU
rcu: 0-...!: (1 GPs behind) idle=456/1/0x4000000000000002 softirq=517147/517148 fqs=9
rcu: (t=10502 jiffies g=529301 q=1034)
rcu: rcu_preempt kthread starved for 10485 jiffies! g529301 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=1
rcu: RCU grace-period kthread stack dump:
rcu_preempt I29104 10 2 0x80000000
Call Trace:
context_switch kernel/sched/core.c:2826 [inline]
__schedule+0x866/0x1dc0 kernel/sched/core.c:3515
schedule+0x92/0x1c0 kernel/sched/core.c:3559
schedule_timeout+0x4db/0xfc0 kernel/time/timer.c:1806
rcu_gp_kthread+0xd5c/0x2190 kernel/rcu/tree.c:2202
kthread+0x354/0x420 kernel/kthread.c:246
ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:415
NMI backtrace for cpu 0
CPU: 0 PID: 3263 Comm: kworker/u4:4 Not tainted 4.19.97-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: netns cleanup_net
Call Trace:
<IRQ>
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x197/0x210 lib/dump_stack.c:118
nmi_cpu_backtrace.cold+0x63/0xa4 lib/nmi_backtrace.c:101
nmi_trigger_cpumask_backtrace+0x1b0/0x1f8 lib/nmi_backtrace.c:62
arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
trigger_single_cpu_backtrace include/linux/nmi.h:164 [inline]
rcu_dump_cpu_stacks+0x189/0x1d5 kernel/rcu/tree.c:1340
print_cpu_stall kernel/rcu/tree.c:1478 [inline]
check_cpu_stall kernel/rcu/tree.c:1550 [inline]
__rcu_pending kernel/rcu/tree.c:3293 [inline]
rcu_pending kernel/rcu/tree.c:3336 [inline]
rcu_check_callbacks.cold+0x5e3/0xd90 kernel/rcu/tree.c:2682
update_process_times+0x32/0x80 kernel/time/timer.c:1638
tick_sched_handle+0xa2/0x190 kernel/time/tick-sched.c:164
tick_sched_timer+0x47/0x130 kernel/time/tick-sched.c:1274
__run_hrtimer kernel/time/hrtimer.c:1401 [inline]
__hrtimer_run_queues+0x33b/0xdc0 kernel/time/hrtimer.c:1463
hrtimer_interrupt+0x314/0x770 kernel/time/hrtimer.c:1521
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1067 [inline]
smp_apic_timer_interrupt+0x111/0x550 arch/x86/kernel/apic/apic.c:1092
apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:893
</IRQ>
RIP: 0010:should_resched arch/x86/include/asm/preempt.h:99 [inline]
RIP: 0010:__local_bh_enable_ip+0x18e/0x270 kernel/softirq.c:196
Code: 00 00 00 00 fc ff df 48 c1 e8 03 80 3c 10 00 0f 85 df 00 00 00 48 83 3d a7 27 b2 07 00 0f 84 8f 00 00 00 fb 66 0f 1f 44 00 00 <65> 8b 05 8b b9 c1 7e 85 c0 74 7f 5b 41 5c 41 5d 5d c3 80 3d 07 53
RSP: 0018:ffff88805cb8f898 EFLAGS: 00000286 ORIG_RAX: ffffffffffffff13
RAX: 1ffffffff11e4b7b RBX: 0000000000000201 RCX: 1ffff11002913d60
RDX: dffffc0000000000 RSI: ffff88801489eae0 RDI: ffff88801489ea3c
RBP: ffff88805cb8f8b0 R08: ffff88801489e1c0 R09: ffff88801489eb00
R10: 0000000000000000 R11: 0000000000000000 R12: ffffffff873c093a
R13: ffff88801489e1c0 R14: ffff88804c8499c0 R15: dffffc0000000000
__raw_spin_unlock_bh include/linux/spinlock_api_smp.h:176 [inline]
_raw_spin_unlock_bh+0x31/0x40 kernel/locking/spinlock.c:200
spin_unlock_bh include/linux/spinlock.h:374 [inline]
batadv_tt_local_purge+0x27a/0x360 net/batman-adv/translation-table.c:1454
batadv_tt_local_resize_to_mtu+0x98/0x140 net/batman-adv/translation-table.c:4205
batadv_update_min_mtu net/batman-adv/hard-interface.c:637 [inline]
batadv_hardif_deactivate_interface.part.0+0xfb/0x102 net/batman-adv/hard-interface.c:686
batadv_hardif_deactivate_interface net/batman-adv/hard-interface.c:445 [inline]
batadv_hardif_disable_interface.cold+0x4d7/0xbe9 net/batman-adv/hard-interface.c:853
batadv_softif_destroy_netlink+0xad/0x150 net/batman-adv/soft-interface.c:1151
default_device_exit_batch+0x25c/0x410 net/core/dev.c:9778
ops_exit_list.isra.0+0xfc/0x150 net/core/net_namespace.c:156
cleanup_net+0x404/0x960 net/core/net_namespace.c:553
process_one_work+0x989/0x1750 kernel/workqueue.c:2153
worker_thread+0x98/0xe40 kernel/workqueue.c:2296
kthread+0x354/0x420 kernel/kthread.c:246
ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:415
net_ratelimit: 3868 callbacks suppressed
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
net_ratelimit: 4424 callbacks suppressed
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (20160)


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

May 20, 2020, 10:17:12 PM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
No crashes have been observed for a while, and there is no reproducer and no recent activity.