Hello,
syzbot found the following issue on:
HEAD commit: ac56c046adf4 Linux 5.15.195
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=107ca614580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=e1bb6d24ef2164eb
dashboard link: https://syzkaller.appspot.com/bug?extid=03cb9448675893965b49
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/098286e78a2b/disk-ac56c046.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/34cbdaabbc45/vmlinux-ac56c046.xz
kernel image: https://storage.googleapis.com/syzbot-assets/ac636b4df380/bzImage-ac56c046.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+03cb94...@syzkaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
rcu_preempt/15 is trying to acquire lock:
ffff8880b903a358 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
but task is already holding lock:
ffffffff8c120b98 (rcu_node_0){-.-.}-{2:2}, at: force_qs_rnp kernel/rcu/tree.c:2646 [inline]
ffffffff8c120b98 (rcu_node_0){-.-.}-{2:2}, at: rcu_gp_fqs kernel/rcu/tree.c:-1 [inline]
ffffffff8c120b98 (rcu_node_0){-.-.}-{2:2}, at: rcu_gp_fqs_loop+0x768/0x11b0 kernel/rcu/tree.c:1986
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #3 (rcu_node_0){-.-.}-{2:2}:
__raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
check_cb_ovld kernel/rcu/tree.c:2974 [inline]
__call_rcu kernel/rcu/tree.c:3025 [inline]
call_rcu+0x312/0x930 kernel/rcu/tree.c:3091
queue_rcu_work+0x81/0x90 kernel/workqueue.c:1788
kfree_rcu_monitor+0x5fe/0x730 kernel/rcu/tree.c:3418
process_one_work+0x863/0x1000 kernel/workqueue.c:2310
worker_thread+0xaa8/0x12a0 kernel/workqueue.c:2457
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
-> #2 (krc.lock){..-.}-{2:2}:
__raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
krc_this_cpu_lock kernel/rcu/tree.c:3203 [inline]
add_ptr_to_bulk_krc_lock kernel/rcu/tree.c:3510 [inline]
kvfree_call_rcu+0x186/0x7c0 kernel/rcu/tree.c:3601
trie_update_elem+0x86e/0xc50 kernel/bpf/lpm_trie.c:396
bpf_map_update_value+0x57d/0x650 kernel/bpf/syscall.c:223
generic_map_update_batch+0x525/0x7c0 kernel/bpf/syscall.c:1430
bpf_map_do_batch+0x466/0x600 kernel/bpf/syscall.c:-1
__sys_bpf+0x601/0x670 kernel/bpf/syscall.c:-1
__do_sys_bpf kernel/bpf/syscall.c:4761 [inline]
__se_sys_bpf kernel/bpf/syscall.c:4759 [inline]
__x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:4759
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
-> #1 (&trie->lock){-.-.}-{2:2}:
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xa4/0xf0 kernel/locking/spinlock.c:162
trie_delete_elem+0x90/0x710 kernel/bpf/lpm_trie.c:467
0xffffffffa002e230
bpf_dispatcher_nop_func include/linux/bpf.h:888 [inline]
__bpf_prog_run include/linux/filter.h:628 [inline]
bpf_prog_run include/linux/filter.h:635 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:1878 [inline]
bpf_trace_run3+0x17e/0x320 kernel/trace/bpf_trace.c:1916
__traceiter_sched_switch+0x83/0xb0 include/trace/events/sched.h:220
trace_sched_switch include/trace/events/sched.h:220 [inline]
__schedule+0x1d71/0x4390 kernel/sched/core.c:6392
schedule+0x11b/0x1e0 kernel/sched/core.c:6478
freezable_schedule include/linux/freezer.h:172 [inline]
futex_wait_queue_me+0x22d/0x440 kernel/futex/core.c:2863
futex_wait+0x202/0x5c0 kernel/futex/core.c:2964
do_futex+0xd1c/0x1240 kernel/futex/core.c:3982
__do_sys_futex kernel/futex/core.c:4059 [inline]
__se_sys_futex+0x3a3/0x430 kernel/futex/core.c:4040
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
-> #0 (&rq->__lock){-.-.}-{2:2}:
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3788 [inline]
__lock_acquire+0x2c33/0x7c60 kernel/locking/lockdep.c:5012
lock_acquire+0x197/0x3f0 kernel/locking/lockdep.c:5623
_raw_spin_lock_nested+0x2e/0x40 kernel/locking/spinlock.c:368
raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
raw_spin_rq_lock kernel/sched/sched.h:1326 [inline]
_raw_spin_rq_lock_irqsave kernel/sched/sched.h:1345 [inline]
resched_cpu+0xd4/0x240 kernel/sched/core.c:994
rcu_implicit_dynticks_qs+0x438/0xc30 kernel/rcu/tree.c:1329
force_qs_rnp kernel/rcu/tree.c:2664 [inline]
rcu_gp_fqs kernel/rcu/tree.c:-1 [inline]
rcu_gp_fqs_loop+0x972/0x11b0 kernel/rcu/tree.c:1986
rcu_gp_kthread+0x98/0x350 kernel/rcu/tree.c:2145
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
other info that might help us debug this:
Chain exists of:
&rq->__lock --> krc.lock --> rcu_node_0
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(rcu_node_0);
                               lock(krc.lock);
                               lock(rcu_node_0);
  lock(&rq->__lock);
*** DEADLOCK ***
1 lock held by rcu_preempt/15:
#0: ffffffff8c120b98 (rcu_node_0){-.-.}-{2:2}, at: force_qs_rnp kernel/rcu/tree.c:2646 [inline]
#0: ffffffff8c120b98 (rcu_node_0){-.-.}-{2:2}, at: rcu_gp_fqs kernel/rcu/tree.c:-1 [inline]
#0: ffffffff8c120b98 (rcu_node_0){-.-.}-{2:2}, at: rcu_gp_fqs_loop+0x768/0x11b0 kernel/rcu/tree.c:1986
stack backtrace:
CPU: 1 PID: 15 Comm: rcu_preempt Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
<TASK>
dump_stack_lvl+0x168/0x230 lib/dump_stack.c:106
check_noncircular+0x274/0x310 kernel/locking/lockdep.c:2133
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3788 [inline]
__lock_acquire+0x2c33/0x7c60 kernel/locking/lockdep.c:5012
lock_acquire+0x197/0x3f0 kernel/locking/lockdep.c:5623
_raw_spin_lock_nested+0x2e/0x40 kernel/locking/spinlock.c:368
raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
raw_spin_rq_lock kernel/sched/sched.h:1326 [inline]
_raw_spin_rq_lock_irqsave kernel/sched/sched.h:1345 [inline]
resched_cpu+0xd4/0x240 kernel/sched/core.c:994
rcu_implicit_dynticks_qs+0x438/0xc30 kernel/rcu/tree.c:1329
force_qs_rnp kernel/rcu/tree.c:2664 [inline]
rcu_gp_fqs kernel/rcu/tree.c:-1 [inline]
rcu_gp_fqs_loop+0x972/0x11b0 kernel/rcu/tree.c:1986
rcu_gp_kthread+0x98/0x350 kernel/rcu/tree.c:2145
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>
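For context (not part of the kernel output above): the rq->__lock --> &trie->lock edge comes from a BPF program attached to the sched_switch tracepoint calling trie_delete_elem() from inside __schedule() (step #1), the &trie->lock --> krc.lock edge from trie_update_elem() calling kvfree_call_rcu() on the map-update syscall path (step #2), and krc.lock --> rcu_node_0 from check_cb_ovld() in call_rcu() (step #3). The cycle closes when the RCU GP kthread calls resched_cpu() while holding the rcu_node lock (step #0). Since there is no reproducer yet, the sketch below is only a guess at the kind of program involved; the map name, key layout and attach point are assumptions, not taken from the report.

/*
 * Hypothetical sketch only; there is no reproducer for this report.
 * An LPM trie map whose elements are deleted from a sched_switch raw
 * tracepoint: trie_delete_elem() (and so trie->lock) then runs while
 * the scheduler holds rq->__lock.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct lpm_key {
	__u32 prefixlen;   /* LPM trie keys must start with prefixlen */
	__u32 data;
};

struct {
	__uint(type, BPF_MAP_TYPE_LPM_TRIE);
	__uint(map_flags, BPF_F_NO_PREALLOC);  /* required for LPM tries */
	__uint(max_entries, 16);
	__type(key, struct lpm_key);
	__type(value, __u32);
} trie SEC(".maps");

SEC("raw_tp/sched_switch")
int on_sched_switch(void *ctx)
{
	struct lpm_key key = { .prefixlen = 32, .data = 0 };

	/* Runs under rq->__lock; trie_delete_elem() takes trie->lock. */
	bpf_map_delete_elem(&trie, &key);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

Any program type that can run with rq->__lock held and touches an LPM trie would create the same first edge; the trie->lock --> krc.lock edge only needs a concurrent map update that frees a node via kvfree_call_rcu(), as in the batch-update path shown in step #2.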
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup