Hello,
syzbot found the following issue on:
HEAD commit: 3f5b4c104b7d Linux 6.6.95
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=11818582580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=747dbf84b0ecd30c
dashboard link: https://syzkaller.appspot.com/bug?extid=33bfc46927a4ecf45b6d
compiler: Debian clang version 20.1.7 (++20250616065708+6146a88f6049-1~exp1~20250616065826.132), Debian LLD 20.1.7
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/421f6e2d0cd1/disk-3f5b4c10.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/90250695b20b/vmlinux-3f5b4c10.xz
kernel image: https://storage.googleapis.com/syzbot-assets/32250e77bce9/bzImage-3f5b4c10.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+33bfc4...@syzkaller.appspotmail.com
============================================
WARNING: possible recursive locking detected
6.6.95-syzkaller #0 Not tainted
--------------------------------------------
ksoftirqd/0/16 is trying to acquire lock:
ffffc900046910d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x1c8/0x5a0 kernel/bpf/ringbuf.c:423
but task is already holding lock:
ffffc900047150d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x1c8/0x5a0 kernel/bpf/ringbuf.c:423
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&rb->spinlock);
lock(&rb->spinlock);
*** DEADLOCK ***
May be due to missing lock nesting notation
4 locks held by ksoftirqd/0/16:
#0: ffffffff8cd2f880 (rcu_callback){....}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#0: ffffffff8cd2f880 (rcu_callback){....}-{0:0}, at: rcu_do_batch kernel/rcu/tree.c:2188 [inline]
#0: ffffffff8cd2f880 (rcu_callback){....}-{0:0}, at: rcu_core+0xc51/0x1720 kernel/rcu/tree.c:2467
#1: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#1: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#1: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2321 [inline]
#1: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run3+0xf4/0x400 kernel/trace/bpf_trace.c:2362
#2: ffffc900047150d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x1c8/0x5a0 kernel/bpf/ringbuf.c:423
#3: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#3: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#3: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2321 [inline]
#3: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0xde/0x3c0 kernel/trace/bpf_trace.c:2361
stack backtrace:
CPU: 0 PID: 16 Comm: ksoftirqd/0 Not tainted 6.6.95-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
check_deadlock kernel/locking/lockdep.c:3062 [inline]
validate_chain kernel/locking/lockdep.c:3856 [inline]
__lock_acquire+0x5d40/0x7c80 kernel/locking/lockdep.c:5137
lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xa8/0xf0 kernel/locking/spinlock.c:162
__bpf_ringbuf_reserve+0x1c8/0x5a0 kernel/bpf/ringbuf.c:423
____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:474 [inline]
bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:466
bpf_prog_fe0ed97373b08409+0x2d/0x4a
bpf_dispatcher_nop_func include/linux/bpf.h:1213 [inline]
__bpf_prog_run include/linux/filter.h:612 [inline]
bpf_prog_run include/linux/filter.h:619 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2322 [inline]
bpf_trace_run2+0x1d1/0x3c0 kernel/trace/bpf_trace.c:2361
__bpf_trace_contention_end+0xdd/0x130 include/trace/events/lock.h:122
trace_contention_end+0xe6/0x110 include/trace/events/lock.h:122
__pv_queued_spin_lock_slowpath+0x7ec/0x9d0 kernel/locking/qspinlock.c:560
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:586 [inline]
queued_spin_lock_slowpath arch/x86/include/asm/qspinlock.h:51 [inline]
queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
do_raw_spin_lock+0x24e/0x2c0 kernel/locking/spinlock_debug.c:115
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
_raw_spin_lock_irqsave+0xb4/0xf0 kernel/locking/spinlock.c:162
__bpf_ringbuf_reserve+0x1c8/0x5a0 kernel/bpf/ringbuf.c:423
____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:474 [inline]
bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:466
bpf_prog_fe0ed97373b08409+0x2d/0x4a
bpf_dispatcher_nop_func include/linux/bpf.h:1213 [inline]
__bpf_prog_run include/linux/filter.h:612 [inline]
bpf_prog_run include/linux/filter.h:619 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2322 [inline]
bpf_trace_run3+0x1e7/0x400 kernel/trace/bpf_trace.c:2362
trace_kmem_cache_free include/trace/events/kmem.h:114 [inline]
kmem_cache_free+0x1e0/0x280 mm/slub.c:3837
rcu_do_batch kernel/rcu/tree.c:2194 [inline]
rcu_core+0xcc4/0x1720 kernel/rcu/tree.c:2467
handle_softirqs+0x280/0x820 kernel/softirq.c:578
run_ksoftirqd+0x9c/0xf0 kernel/softirq.c:950
smpboot_thread_fn+0x635/0xa00 kernel/smpboot.c:164
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
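Reading the trace above: __bpf_ringbuf_reserve() takes rb->spinlock (frame at ringbuf.c:423); because the lock is contended, the paravirt qspinlock slowpath fires the contention_end tracepoint, and a BPF program attached there calls bpf_ringbuf_reserve() on a second ringbuf map, acquiring another lock of the same &rb->spinlock class while the first is still held. A minimal sketch of a program that would recurse this way (hypothetical; the actual syzkaller-generated program is unknown, and there is no reproducer yet) could look like:

```c
// Hypothetical libbpf-style sketch, NOT the fuzzer's actual program.
// Any BPF program attached to tp_btf/contention_end that reserves
// ringbuf space will re-enter __bpf_ringbuf_reserve() whenever a
// ringbuf spinlock is contended, matching the lockdep report.
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 4096);
} rb SEC(".maps");

SEC("tp_btf/contention_end")
int BPF_PROG(on_contention_end, void *lock, int ret)
{
	/* Takes this map's &rb->spinlock while the caller's ringbuf
	 * spinlock (same lock class) is already held. */
	void *e = bpf_ringbuf_reserve(&rb, 8, 0);

	if (e)
		bpf_ringbuf_discard(e, 0);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

Since the two lock instances differ (ffffc900046910d8 vs ffffc900047150d8), this may be a same-class recursion rather than a true self-deadlock, but a single-map variant of the same pattern would deadlock for real.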
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup