[moderation] [batman?] INFO: rcu detected stall in batadv_tt_purge (3)


syzbot

Dec 5, 2023, 8:41:31 AM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: ac40916a3f72 rtnetlink: introduce nlmsg_new_large and use ..
git tree: net-next
console output: https://syzkaller.appspot.com/x/log.txt?x=161bee48e80000
kernel config: https://syzkaller.appspot.com/x/.config?x=84217b7fc4acdc59
dashboard link: https://syzkaller.appspot.com/bug?extid=ff09673842973f96366b
compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
CC: [a...@unstable.cc b.a.t...@lists.open-mesh.org da...@davemloft.net edum...@google.com ku...@kernel.org linux-...@vger.kernel.org marekl...@neomailbox.ch net...@vger.kernel.org pab...@redhat.com sv...@narfation.org s...@simonwunderlich.de]

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/71a9b42e011c/disk-ac40916a.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/a3587dbf920b/vmlinux-ac40916a.xz
kernel image: https://storage.googleapis.com/syzbot-assets/604e318fd196/bzImage-ac40916a.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+ff0967...@syzkaller.appspotmail.com

rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 0-...!: (1 GPs behind) idle=b6dc/1/0x4000000000000000 softirq=57407/57408 fqs=6
rcu: (detected by 1, t=10502 jiffies, g=97277, q=1013 ncpus=2)
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 5280 Comm: kworker/u4:9 Not tainted 6.7.0-rc1-syzkaller-00268-gac40916a3f72 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/10/2023
Workqueue: bat_events batadv_tt_purge
RIP: 0010:arch_local_irq_restore arch/x86/include/asm/irqflags.h:134 [inline]
RIP: 0010:lock_release+0x3a3/0x690 kernel/locking/lockdep.c:5776
Code: 20 bc cc 8a e8 0e ec 17 09 b8 ff ff ff ff 65 0f c1 05 e9 75 9a 7e 83 f8 01 0f 85 c8 01 00 00 9c 58 f6 c4 02 0f 85 b3 01 00 00 <48> f7 04 24 00 02 00 00 74 01 fb 48 b8 00 00 00 00 00 fc ff df 48
RSP: 0018:ffffc90000007c40 EFLAGS: 00000046
RAX: 0000000000000046 RBX: 49da982df18c1733 RCX: ffffc90000007c90
RDX: 1ffff110049d250e RSI: ffffffff8accbc20 RDI: ffffffff8b2f0e40
RBP: 1ffff92000000f8a R08: 0000000000000000 R09: fffffbfff1e327e2
R10: ffffffff8f193f17 R11: 0000000000000004 R12: 0000000000000004
R13: 0000000000000005 R14: ffff888024e92878 R15: ffff888024e91dc0
FS: 0000000000000000(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f438d0c66e4 CR3: 00000000456d9000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<IRQ>
debug_objects_fill_pool lib/debugobjects.c:616 [inline]
debug_object_activate+0x158/0x490 lib/debugobjects.c:713
debug_hrtimer_activate kernel/time/hrtimer.c:422 [inline]
debug_activate kernel/time/hrtimer.c:477 [inline]
enqueue_hrtimer+0x23/0x310 kernel/time/hrtimer.c:1087
__run_hrtimer kernel/time/hrtimer.c:1705 [inline]
__hrtimer_run_queues+0xa12/0xc20 kernel/time/hrtimer.c:1752
hrtimer_interrupt+0x31b/0x800 kernel/time/hrtimer.c:1814
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1065 [inline]
__sysvec_apic_timer_interrupt+0x105/0x400 arch/x86/kernel/apic/apic.c:1082
sysvec_apic_timer_interrupt+0x90/0xb0 arch/x86/kernel/apic/apic.c:1076
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:645
RIP: 0010:lock_acquire+0x1ef/0x520 kernel/locking/lockdep.c:5721
Code: c1 05 bd 68 9a 7e 83 f8 01 0f 85 b4 02 00 00 9c 58 f6 c4 02 0f 85 9f 02 00 00 48 85 ed 74 01 fb 48 b8 00 00 00 00 00 fc ff df <48> 01 c3 48 c7 03 00 00 00 00 48 c7 43 08 00 00 00 00 48 8b 84 24
RSP: 0018:ffffc9000491fa08 EFLAGS: 00000206
RAX: dffffc0000000000 RBX: 1ffff92000923f43 RCX: ffffffff8167301e
RDX: 0000000000000001 RSI: ffffffff8accbc20 RDI: ffffffff8b2f0e40
RBP: 0000000000000200 R08: 0000000000000000 R09: fffffbfff23e35eb
R10: ffffffff91f1af5f R11: 0000000000000002 R12: 0000000000000001
R13: 0000000000000000 R14: ffff8880432b4dd8 R15: 0000000000000000
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x33/0x40 kernel/locking/spinlock.c:178
spin_lock_bh include/linux/spinlock.h:356 [inline]
batadv_tt_local_purge+0x145/0x3c0 net/batman-adv/translation-table.c:1354
batadv_tt_purge+0x8b/0xbb0 net/batman-adv/translation-table.c:3560
process_one_work+0x886/0x15d0 kernel/workqueue.c:2630
process_scheduled_works kernel/workqueue.c:2703 [inline]
worker_thread+0x8b9/0x1290 kernel/workqueue.c:2784
kthread+0x2c6/0x3a0 kernel/kthread.c:388
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242
</TASK>
rcu: rcu_preempt kthread starved for 10361 jiffies! g97277 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack:27616 pid:17 tgid:17 ppid:2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5376 [inline]
__schedule+0xedb/0x5af0 kernel/sched/core.c:6688
__schedule_loop kernel/sched/core.c:6763 [inline]
schedule+0xe9/0x270 kernel/sched/core.c:6778
schedule_timeout+0x137/0x290 kernel/time/timer.c:2167
rcu_gp_fqs_loop+0x1ec/0xb10 kernel/rcu/tree.c:1631
rcu_gp_kthread+0x24b/0x380 kernel/rcu/tree.c:1830
kthread+0x2c6/0x3a0 kernel/kthread.c:388
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242
</TASK>
rcu: Stack dump where RCU GP kthread last ran:
CPU: 1 PID: 5161 Comm: kworker/1:4 Not tainted 6.7.0-rc1-syzkaller-00268-gac40916a3f72 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/10/2023
Workqueue: events fqdir_free_fn
RIP: 0010:write_comp_data+0x7e/0x80 kernel/kcov.c:262
Code: 00 00 4a 8d 34 dd 28 00 00 00 48 39 f2 72 1b 48 83 c7 01 48 89 38 4c 89 44 30 e0 4c 89 4c 30 e8 4c 89 54 30 f0 4a 89 4c d8 20 <c3> 90 f3 0f 1e fa 48 8b 0c 24 40 0f b6 d6 40 0f b6 f7 31 ff e9 69
RSP: 0018:ffffc9000466fb18 EFLAGS: 00000293
RAX: 0000000000000000 RBX: 0000000000000001 RCX: ffffffff817bf4d3
RDX: ffff88801aaf3b80 RSI: 0000000000000000 RDI: 0000000000000005
RBP: ffffc9000466fc30 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000003 R12: 1ffff920008cdf68
R13: 0000000000000000 R14: 0000000000000001 R15: ffff8880b983d600
FS: 0000000000000000(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000200044c0 CR3: 000000001e8b2000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<IRQ>
</IRQ>
<TASK>
csd_lock_wait kernel/smp.c:311 [inline]
smp_call_function_single+0x203/0x650 kernel/smp.c:650
rcu_barrier+0x284/0x6c0 kernel/rcu/tree.c:4082
fqdir_free_fn+0x32/0x160 net/ipv4/inet_fragment.c:164
process_one_work+0x886/0x15d0 kernel/workqueue.c:2630
process_scheduled_works kernel/workqueue.c:2703 [inline]
worker_thread+0x8b9/0x1290 kernel/workqueue.c:2784
kthread+0x2c6/0x3a0 kernel/kthread.c:388
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Feb 17, 2024, 11:44:15 AM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes have not happened for a while, there is no reproducer, and there has been no activity.