[v6.6] possible deadlock in get_partial_node

Hello,

syzbot found the following issue on:

HEAD commit: 0a805b6ea8cd Linux 6.6.116
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=12b6c114580000
kernel config: https://syzkaller.appspot.com/x/.config?x=12606d4b8832c7e4
dashboard link: https://syzkaller.appspot.com/bug?extid=bce67b76e1f9f819fa30
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/743e52c6a2c2/disk-0a805b6e.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/b7385817c222/vmlinux-0a805b6e.xz
kernel image: https://storage.googleapis.com/syzbot-assets/587fbfb2961d/bzImage-0a805b6e.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+bce67b...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.3.602/7168 is trying to acquire lock:
ffff88801784e6d8 (&n->list_lock){-.-.}-{2:2}, at: get_partial_node+0x36/0x540 mm/slub.c:2301

but task is already holding lock:
ffff88801cb74238 (&trie->lock){-.-.}-{2:2}, at: trie_update_elem+0xca/0xea0 kernel/bpf/lpm_trie.c:335

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&trie->lock){-.-.}-{2:2}:
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xa8/0xf0 kernel/locking/spinlock.c:162
trie_delete_elem+0x96/0x6a0 kernel/bpf/lpm_trie.c:467
0xffffffffa0000a16
bpf_dispatcher_nop_func include/linux/bpf.h:1224 [inline]
__bpf_prog_run include/linux/filter.h:612 [inline]
bpf_prog_run include/linux/filter.h:619 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2322 [inline]
bpf_trace_run2+0x1d1/0x3c0 kernel/trace/bpf_trace.c:2361
__bpf_trace_contention_end+0xdd/0x130 include/trace/events/lock.h:122
trace_contention_end+0xe6/0x110 include/trace/events/lock.h:122
__pv_queued_spin_lock_slowpath+0x7ec/0x9d0 kernel/locking/qspinlock.c:560
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:586 [inline]
queued_spin_lock_slowpath arch/x86/include/asm/qspinlock.h:51 [inline]
queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
do_raw_spin_lock+0x24e/0x2c0 kernel/locking/spinlock_debug.c:115
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
_raw_spin_lock_irqsave+0xb4/0xf0 kernel/locking/spinlock.c:162
__unfreeze_partials+0x7f/0x210 mm/slub.c:2631
put_cpu_partial+0x17c/0x250 mm/slub.c:2743
__slab_free+0x31d/0x410 mm/slub.c:3700
qlink_free mm/kasan/quarantine.c:166 [inline]
qlist_free_all+0x75/0xe0 mm/kasan/quarantine.c:185
kasan_quarantine_reduce+0x143/0x160 mm/kasan/quarantine.c:292
__kasan_slab_alloc+0x22/0x80 mm/kasan/common.c:305
kasan_slab_alloc include/linux/kasan.h:188 [inline]
slab_post_alloc_hook+0x6e/0x4d0 mm/slab.h:767
slab_alloc_node mm/slub.c:3495 [inline]
slab_alloc mm/slub.c:3503 [inline]
__kmem_cache_alloc_lru mm/slub.c:3510 [inline]
kmem_cache_alloc+0x11e/0x2e0 mm/slub.c:3519
kmem_cache_zalloc include/linux/slab.h:711 [inline]
__proc_create+0x370/0x8d0 fs/proc/generic.c:447
proc_create_reg+0x8b/0x110 fs/proc/generic.c:574
proc_create_net_data+0x99/0x1b0 fs/proc/proc_net.c:120
nfs_fs_proc_net_init+0xd2/0x190 fs/nfs/client.c:1417
nfs_net_init+0x226/0x2b0 fs/nfs/inode.c:2456
ops_init+0x397/0x640 net/core/net_namespace.c:139
setup_net+0x3a5/0xa00 net/core/net_namespace.c:343
copy_net_ns+0x36d/0x5e0 net/core/net_namespace.c:520
create_new_namespaces+0x3d3/0x6f0 kernel/nsproxy.c:110
copy_namespaces+0x430/0x4a0 kernel/nsproxy.c:179
copy_process+0x1700/0x3d70 kernel/fork.c:2509
kernel_clone+0x21b/0x840 kernel/fork.c:2914
__do_sys_clone kernel/fork.c:3057 [inline]
__se_sys_clone kernel/fork.c:3041 [inline]
__x64_sys_clone+0x18c/0x1e0 kernel/fork.c:3041
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #0 (&n->list_lock){-.-.}-{2:2}:
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xa8/0xf0 kernel/locking/spinlock.c:162
get_partial_node+0x36/0x540 mm/slub.c:2301
get_partial mm/slub.c:2416 [inline]
___slab_alloc+0x9cd/0x1300 mm/slub.c:3230
__slab_alloc mm/slub.c:3339 [inline]
__slab_alloc_node mm/slub.c:3392 [inline]
slab_alloc_node mm/slub.c:3485 [inline]
__kmem_cache_alloc_node+0x1a2/0x260 mm/slub.c:3534
__do_kmalloc_node mm/slab_common.c:1006 [inline]
__kmalloc_node+0xa4/0x230 mm/slab_common.c:1014
kmalloc_node include/linux/slab.h:620 [inline]
bpf_map_kmalloc_node+0xbc/0x1b0 kernel/bpf/syscall.c:424
lpm_trie_node_alloc kernel/bpf/lpm_trie.c:291 [inline]
trie_update_elem+0x166/0xea0 kernel/bpf/lpm_trie.c:338
bpf_map_update_value+0x660/0x720 kernel/bpf/syscall.c:203
map_update_elem+0x57b/0x700 kernel/bpf/syscall.c:1567
__sys_bpf+0x652/0x800 kernel/bpf/syscall.c:5461
__do_sys_bpf kernel/bpf/syscall.c:5577 [inline]
__se_sys_bpf kernel/bpf/syscall.c:5575 [inline]
__x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:5575
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&trie->lock);
                               lock(&n->list_lock);
                               lock(&trie->lock);
  lock(&n->list_lock);

*** DEADLOCK ***

2 locks held by syz.3.602/7168:
#0: ffffffff8cd2fee0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#0: ffffffff8cd2fee0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#0: ffffffff8cd2fee0 (rcu_read_lock){....}-{1:2}, at: bpf_map_update_value+0x41d/0x720 kernel/bpf/syscall.c:202
#1: ffff88801cb74238 (&trie->lock){-.-.}-{2:2}, at: trie_update_elem+0xca/0xea0 kernel/bpf/lpm_trie.c:335

stack backtrace:
CPU: 0 PID: 7168 Comm: syz.3.602 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
check_noncircular+0x2bd/0x3c0 kernel/locking/lockdep.c:2187
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xa8/0xf0 kernel/locking/spinlock.c:162
get_partial_node+0x36/0x540 mm/slub.c:2301
get_partial mm/slub.c:2416 [inline]
___slab_alloc+0x9cd/0x1300 mm/slub.c:3230
__slab_alloc mm/slub.c:3339 [inline]
__slab_alloc_node mm/slub.c:3392 [inline]
slab_alloc_node mm/slub.c:3485 [inline]
__kmem_cache_alloc_node+0x1a2/0x260 mm/slub.c:3534
__do_kmalloc_node mm/slab_common.c:1006 [inline]
__kmalloc_node+0xa4/0x230 mm/slab_common.c:1014
kmalloc_node include/linux/slab.h:620 [inline]
bpf_map_kmalloc_node+0xbc/0x1b0 kernel/bpf/syscall.c:424
lpm_trie_node_alloc kernel/bpf/lpm_trie.c:291 [inline]
trie_update_elem+0x166/0xea0 kernel/bpf/lpm_trie.c:338
bpf_map_update_value+0x660/0x720 kernel/bpf/syscall.c:203
map_update_elem+0x57b/0x700 kernel/bpf/syscall.c:1567
__sys_bpf+0x652/0x800 kernel/bpf/syscall.c:5461
__do_sys_bpf kernel/bpf/syscall.c:5577 [inline]
__se_sys_bpf kernel/bpf/syscall.c:5575 [inline]
__x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:5575
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fd26b58f6c9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fd2697f6038 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007fd26b7e5fa0 RCX: 00007fd26b58f6c9
RDX: 0000000000000020 RSI: 00002000000004c0 RDI: 0000000000000002
RBP: 00007fd26b611f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fd26b7e6038 R14: 00007fd26b7e5fa0 R15: 00007ffc24d4ed48
</TASK>
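
In short, the two chains form a classic ABBA inversion: trie_update_elem() takes &trie->lock and then allocates through bpf_map_kmalloc_node(), which can reach get_partial_node() and take the SLUB node's &n->list_lock; in the opposite direction, a BPF program attached to the contention_end tracepoint calls trie_delete_elem() (taking &trie->lock) while __unfreeze_partials() already holds &n->list_lock. The user-space sketch below only illustrates that shape: two pthread mutexes stand in for the two spinlocks, the thread and lock names are made up, and it is in no way the kernel code involved.

/*
 * Illustrative sketch of the ABBA inversion reported above.
 * lock_a stands in for &trie->lock, lock_b for &n->list_lock;
 * both names are hypothetical.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* &trie->lock   */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* &n->list_lock */

/* Mirrors trie_update_elem(): A is held, then the allocator wants B. */
static void *update_path(void *arg)
{
	pthread_mutex_lock(&lock_a);   /* spin_lock_irqsave(&trie->lock)      */
	usleep(1000);                  /* widen the race window               */
	pthread_mutex_lock(&lock_b);   /* get_partial_node(): n->list_lock    */
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

/* Mirrors __unfreeze_partials() plus the tracepoint-attached BPF program:
 * B is held, then trie_delete_elem() wants A. */
static void *free_path(void *arg)
{
	pthread_mutex_lock(&lock_b);   /* __unfreeze_partials(): n->list_lock */
	usleep(1000);
	pthread_mutex_lock(&lock_a);   /* trie_delete_elem(): trie->lock      */
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, update_path, NULL);
	pthread_create(&t2, NULL, free_path, NULL);
	pthread_join(t1, NULL);        /* with unlucky timing, never returns  */
	pthread_join(t2, NULL);
	puts("no deadlock this run");
	return 0;
}

Built with -pthread, the two racing threads deadlock on unlucky timing; lockdep flags exactly this cycle in the kernel before it can happen at runtime.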


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup