[v5.15] INFO: rcu detected stall in batadv_nc_worker


syzbot

Jun 13, 2023, 4:27:11 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 7349e40704a0 Linux 5.15.116
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=12f90c75280000
kernel config: https://syzkaller.appspot.com/x/.config?x=831c3122ac9c9145
dashboard link: https://syzkaller.appspot.com/bug?extid=42b636738d776704a740
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/8c03c3ad4501/disk-7349e407.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/350c3d79bc87/vmlinux-7349e407.xz
kernel image: https://storage.googleapis.com/syzbot-assets/73a4ed3d5438/bzImage-7349e407.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+42b636...@syzkaller.appspotmail.com

rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: Tasks blocked on level-0 rcu_node (CPUs 0-1): P6072/1:b..l
(detected by 0, t=10502 jiffies, g=138109, q=92)
task:kworker/u4:19 state:R running task stack:20544 pid: 6072 ppid: 2 flags:0x00004000
Workqueue: bat_events batadv_nc_worker
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5026 [inline]
__schedule+0x12c4/0x4590 kernel/sched/core.c:6372
preempt_schedule_irq+0xf7/0x1c0 kernel/sched/core.c:6776
irqentry_exit+0x53/0x80 kernel/entry/common.c:426
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
RIP: 0010:lock_acquire+0x252/0x4f0 kernel/locking/lockdep.c:5626
Code: 2b 00 74 08 4c 89 f7 e8 cc e4 66 00 f6 44 24 61 02 0f 85 84 01 00 00 41 f7 c7 00 02 00 00 74 01 fb 48 c7 44 24 40 0e 36 e0 45 <4b> c7 44 25 00 00 00 00 00 43 c7 44 25 09 00 00 00 00 43 c7 44 25
RSP: 0018:ffffc90002dcfa20 EFLAGS: 00000206
RAX: 0000000000000001 RBX: 1ffff920005b9f50 RCX: 1ffff920005b9ef0
RDX: dffffc0000000000 RSI: ffffffff8a8b0f00 RDI: ffffffff8ad86000
RBP: ffffc90002dcfb70 R08: dffffc0000000000 R09: fffffbfff1f79621
R10: 0000000000000000 R11: dffffc0000000001 R12: 1ffff920005b9f4c
R13: dffffc0000000000 R14: ffffc90002dcfa80 R15: 0000000000000246
rcu_lock_acquire+0x2a/0x30 include/linux/rcupdate.h:269
rcu_read_lock include/linux/rcupdate.h:696 [inline]
batadv_nc_process_nc_paths+0xb8/0x350 net/batman-adv/network-coding.c:691
batadv_nc_worker+0x3d4/0x5b0 net/batman-adv/network-coding.c:732
process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2307
worker_thread+0xaca/0x1280 kernel/workqueue.c:2454
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
</TASK>
rcu: rcu_preempt kthread timer wakeup didn't happen for 10481 jiffies! g138109 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
rcu: Possible timer handling issue on cpu=1 timer-softirq=85349
rcu: rcu_preempt kthread starved for 10482 jiffies! g138109 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=1
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:I stack:25496 pid: 15 ppid: 2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5026 [inline]
__schedule+0x12c4/0x4590 kernel/sched/core.c:6372
schedule+0x11b/0x1f0 kernel/sched/core.c:6455
schedule_timeout+0x1b9/0x300 kernel/time/timer.c:1884
rcu_gp_fqs_loop+0x2af/0xf70 kernel/rcu/tree.c:1959
rcu_gp_kthread+0xa4/0x360 kernel/rcu/tree.c:2132
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
</TASK>
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 23500 Comm: kworker/1:8 Not tainted 5.15.116-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/25/2023
Workqueue: mld mld_ifc_work
RIP: 0010:__list_add include/linux/list.h:73 [inline]
RIP: 0010:list_add_tail include/linux/list.h:100 [inline]
RIP: 0010:list_move_tail include/linux/list.h:228 [inline]
RIP: 0010:fq_pie_qdisc_dequeue+0x495/0xa90 net/sched/sch_fq_pie.c:248
Code: 00 fc ff df 80 3c 08 00 74 08 48 89 df e8 13 2e 60 f9 48 89 2b 48 89 e8 48 c1 e8 03 48 bb 00 00 00 00 00 fc ff df 80 3c 18 00 <74> 08 48 89 ef e8 f1 2d 60 f9 4c 89 75 00 e9 fe fb ff ff e8 43 d2
RSP: 0018:ffffc90002daf470 EFLAGS: 00000246
RAX: 1ffff11008b75c5c RBX: dffffc0000000000 RCX: dffffc0000000000
RDX: 0000000000000000 RSI: ffff888045bae2e0 RDI: ffff88804c658bd0
RBP: ffff888045bae2e0 R08: ffffffff88692394 R09: ffffffff8851a5b8
R10: 0000000000000002 R11: ffff88801eb8bb80 R12: ffff888045bae2f0
R13: ffff888045bae000 R14: ffff88804c658bd0 R15: ffff888045bae2e0
FS: 0000000000000000(0000) GS:ffff8880b9b00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007faf49368718 CR3: 000000005d695000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<TASK>
dequeue_skb net/sched/sch_generic.c:292 [inline]
qdisc_restart net/sched/sch_generic.c:397 [inline]
__qdisc_run+0x253/0x1e90 net/sched/sch_generic.c:415
__dev_xmit_skb net/core/dev.c:3879 [inline]
__dev_queue_xmit+0xf0a/0x3230 net/core/dev.c:4190
neigh_output include/net/neighbour.h:516 [inline]
ip6_finish_output2+0xee4/0x14f0 net/ipv6/ip6_output.c:126
dst_output include/net/dst.h:449 [inline]
NF_HOOK+0x166/0x4f0 include/linux/netfilter.h:307
mld_sendpack+0x70e/0xc10 net/ipv6/mcast.c:1820
mld_send_cr net/ipv6/mcast.c:2121 [inline]
mld_ifc_work+0x7d7/0xc90 net/ipv6/mcast.c:2653
process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2307
worker_thread+0xaca/0x1280 kernel/workqueue.c:2454
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the bug is already fixed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to change the bug's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the bug is a duplicate of another bug, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup