[v6.6] possible deadlock in htab_lock_bucket

syzbot

Oct 28, 2025, 3:33:22 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 4a243110dc88 Linux 6.6.114
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=179d6258580000
kernel config: https://syzkaller.appspot.com/x/.config?x=12606d4b8832c7e4
dashboard link: https://syzkaller.appspot.com/bug?extid=b41573d719ff7fa9f894
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1487a7e2580000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/92010b707866/disk-4a243110.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/c037a989b43b/vmlinux-4a243110.xz
kernel image: https://storage.googleapis.com/syzbot-assets/69f07438fde1/bzImage-4a243110.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+b41573...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
udevd/5975 is trying to acquire lock:
ffff888026155720 (&htab->lockdep_key#61){....}-{2:2}, at: htab_lock_bucket+0x181/0x300 kernel/bpf/hashtab.c:166

but task is already holding lock:
ffff888026155620 (&htab->lockdep_key#60){....}-{2:2}, at: htab_lock_bucket+0x181/0x300 kernel/bpf/hashtab.c:166

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&htab->lockdep_key#60){....}-{2:2}:
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154
htab_lock_bucket+0x181/0x300 kernel/bpf/hashtab.c:166
htab_lru_map_delete_elem+0x1a4/0x630 kernel/bpf/hashtab.c:1479
bpf_prog_9c142407653414ad+0x52/0x56
bpf_dispatcher_nop_func include/linux/bpf.h:1224 [inline]
__bpf_prog_run include/linux/filter.h:612 [inline]
bpf_prog_run include/linux/filter.h:619 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2322 [inline]
bpf_trace_run2+0x1d1/0x3c0 kernel/trace/bpf_trace.c:2361
__bpf_trace_contention_end+0xdd/0x130 include/trace/events/lock.h:122
__traceiter_contention_end+0x78/0xb0 include/trace/events/lock.h:122
trace_contention_end+0xe6/0x110 include/trace/events/lock.h:122
__pv_queued_spin_lock_slowpath+0x7ec/0x9d0 kernel/locking/qspinlock.c:560
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:586 [inline]
queued_spin_lock_slowpath arch/x86/include/asm/qspinlock.h:51 [inline]
queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
do_raw_spin_lock+0x24e/0x2c0 kernel/locking/spinlock_debug.c:115
htab_lock_bucket+0x181/0x300 kernel/bpf/hashtab.c:166
htab_lru_map_delete_elem+0x1a4/0x630 kernel/bpf/hashtab.c:1479
bpf_prog_9c142407653414ad+0x52/0x56
bpf_dispatcher_nop_func include/linux/bpf.h:1224 [inline]
__bpf_prog_run include/linux/filter.h:612 [inline]
bpf_prog_run include/linux/filter.h:619 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2322 [inline]
bpf_trace_run2+0x1d1/0x3c0 kernel/trace/bpf_trace.c:2361
__bpf_trace_contention_end+0xdd/0x130 include/trace/events/lock.h:122
__traceiter_contention_end+0x78/0xb0 include/trace/events/lock.h:122
trace_contention_end+0xc5/0xe0 include/trace/events/lock.h:122
__mutex_lock_common kernel/locking/mutex.c:612 [inline]
__mutex_lock+0x2fa/0xcc0 kernel/locking/mutex.c:747
nsim_dev_hwstats_traffic_work+0x31/0x190 drivers/net/netdevsim/hwstats.c:47
process_one_work kernel/workqueue.c:2634 [inline]
process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293

-> #0 (&htab->lockdep_key#61){....}-{2:2}:
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154
htab_lock_bucket+0x181/0x300 kernel/bpf/hashtab.c:166
htab_lru_map_delete_elem+0x1a4/0x630 kernel/bpf/hashtab.c:1479
bpf_prog_9c142407653414ad+0x52/0x56
bpf_dispatcher_nop_func include/linux/bpf.h:1224 [inline]
__bpf_prog_run include/linux/filter.h:612 [inline]
bpf_prog_run include/linux/filter.h:619 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2322 [inline]
bpf_trace_run2+0x1d1/0x3c0 kernel/trace/bpf_trace.c:2361
__bpf_trace_contention_end+0xdd/0x130 include/trace/events/lock.h:122
__traceiter_contention_end+0x78/0xb0 include/trace/events/lock.h:122
trace_contention_end+0xe6/0x110 include/trace/events/lock.h:122
__pv_queued_spin_lock_slowpath+0x7ec/0x9d0 kernel/locking/qspinlock.c:560
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:586 [inline]
queued_spin_lock_slowpath arch/x86/include/asm/qspinlock.h:51 [inline]
queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
do_raw_spin_lock+0x24e/0x2c0 kernel/locking/spinlock_debug.c:115
htab_lock_bucket+0x181/0x300 kernel/bpf/hashtab.c:166
htab_lru_map_delete_elem+0x1a4/0x630 kernel/bpf/hashtab.c:1479
bpf_prog_9c142407653414ad+0x52/0x56
bpf_dispatcher_nop_func include/linux/bpf.h:1224 [inline]
__bpf_prog_run include/linux/filter.h:612 [inline]
bpf_prog_run include/linux/filter.h:619 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2322 [inline]
bpf_trace_run2+0x1d1/0x3c0 kernel/trace/bpf_trace.c:2361
__bpf_trace_contention_end+0xdd/0x130 include/trace/events/lock.h:122
__traceiter_contention_end+0x78/0xb0 include/trace/events/lock.h:122
trace_contention_end+0xc5/0xe0 include/trace/events/lock.h:122
__mutex_lock_common kernel/locking/mutex.c:612 [inline]
__mutex_lock+0x2fa/0xcc0 kernel/locking/mutex.c:747
blkdev_get_by_dev+0x121/0x600 block/bdev.c:805
blkdev_open+0x152/0x360 block/fops.c:589
do_dentry_open+0x8c6/0x1500 fs/open.c:929
do_open fs/namei.c:3640 [inline]
path_openat+0x274b/0x3190 fs/namei.c:3797
do_filp_open+0x1c5/0x3d0 fs/namei.c:3824
do_sys_openat2+0x12c/0x1c0 fs/open.c:1419
do_sys_open fs/open.c:1434 [inline]
__do_sys_openat fs/open.c:1450 [inline]
__se_sys_openat fs/open.c:1445 [inline]
__x64_sys_openat+0x139/0x160 fs/open.c:1445
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&htab->lockdep_key#60);
                               lock(&htab->lockdep_key#61);
                               lock(&htab->lockdep_key#60);
  lock(&htab->lockdep_key#61);

*** DEADLOCK ***

4 locks held by udevd/5975:
#0: ffff8881413e24c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:805
#1: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#1: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#1: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2321 [inline]
#1: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0xde/0x3c0 kernel/trace/bpf_trace.c:2361
#2: ffff888026155620 (&htab->lockdep_key#60){....}-{2:2}, at: htab_lock_bucket+0x181/0x300 kernel/bpf/hashtab.c:166
#3: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#3: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#3: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2321 [inline]
#3: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0xde/0x3c0 kernel/trace/bpf_trace.c:2361

stack backtrace:
CPU: 0 PID: 5975 Comm: udevd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
check_noncircular+0x2bd/0x3c0 kernel/locking/lockdep.c:2187
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154
htab_lock_bucket+0x181/0x300 kernel/bpf/hashtab.c:166
htab_lru_map_delete_elem+0x1a4/0x630 kernel/bpf/hashtab.c:1479
bpf_prog_9c142407653414ad+0x52/0x56
bpf_dispatcher_nop_func include/linux/bpf.h:1224 [inline]
__bpf_prog_run include/linux/filter.h:612 [inline]
bpf_prog_run include/linux/filter.h:619 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2322 [inline]
bpf_trace_run2+0x1d1/0x3c0 kernel/trace/bpf_trace.c:2361
__bpf_trace_contention_end+0xdd/0x130 include/trace/events/lock.h:122
__traceiter_contention_end+0x78/0xb0 include/trace/events/lock.h:122
trace_contention_end+0xe6/0x110 include/trace/events/lock.h:122
__pv_queued_spin_lock_slowpath+0x7ec/0x9d0 kernel/locking/qspinlock.c:560
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:586 [inline]
queued_spin_lock_slowpath arch/x86/include/asm/qspinlock.h:51 [inline]
queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
do_raw_spin_lock+0x24e/0x2c0 kernel/locking/spinlock_debug.c:115
htab_lock_bucket+0x181/0x300 kernel/bpf/hashtab.c:166
htab_lru_map_delete_elem+0x1a4/0x630 kernel/bpf/hashtab.c:1479
bpf_prog_9c142407653414ad+0x52/0x56
bpf_dispatcher_nop_func include/linux/bpf.h:1224 [inline]
__bpf_prog_run include/linux/filter.h:612 [inline]
bpf_prog_run include/linux/filter.h:619 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2322 [inline]
bpf_trace_run2+0x1d1/0x3c0 kernel/trace/bpf_trace.c:2361
__bpf_trace_contention_end+0xdd/0x130 include/trace/events/lock.h:122
__traceiter_contention_end+0x78/0xb0 include/trace/events/lock.h:122
trace_contention_end+0xc5/0xe0 include/trace/events/lock.h:122
__mutex_lock_common kernel/locking/mutex.c:612 [inline]
__mutex_lock+0x2fa/0xcc0 kernel/locking/mutex.c:747
blkdev_get_by_dev+0x121/0x600 block/bdev.c:805
blkdev_open+0x152/0x360 block/fops.c:589
do_dentry_open+0x8c6/0x1500 fs/open.c:929
do_open fs/namei.c:3640 [inline]
path_openat+0x274b/0x3190 fs/namei.c:3797
do_filp_open+0x1c5/0x3d0 fs/namei.c:3824
do_sys_openat2+0x12c/0x1c0 fs/open.c:1419
do_sys_open fs/open.c:1434 [inline]
__do_sys_openat fs/open.c:1450 [inline]
__se_sys_openat fs/open.c:1445 [inline]
__x64_sys_openat+0x139/0x160 fs/open.c:1445
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7efced8a7407
Code: 48 89 fa 4c 89 df e8 38 aa 00 00 8b 93 08 03 00 00 59 5e 48 83 f8 fc 74 1a 5b c3 0f 1f 84 00 00 00 00 00 48 8b 44 24 10 0f 05 <5b> c3 0f 1f 80 00 00 00 00 83 e2 39 83 fa 08 75 de e8 23 ff ff ff
RSP: 002b:00007ffe774511f0 EFLAGS: 00000202 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007efcee05b880 RCX: 00007efced8a7407
RDX: 0000000000080000 RSI: 0000564583ae8570 RDI: ffffffffffffff9c
RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 000056456f31995f
R13: 000056456f32a660 R14: 0000000000000000 R15: 00000000ffffffff
</TASK>
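
For triage: the dependency chain above is self-referential. A BPF program
attached to the contention_end tracepoint deletes elements from a BPF LRU
hash map; htab_lock_bucket() (kernel/bpf/hashtab.c:166) takes a per-bucket
raw spinlock, contention on that spinlock fires contention_end again, and
the re-entered program can land on a different bucket (lockdep_key#61 vs
lockdep_key#60), closing the cycle lockdep reports. Below is a minimal
sketch of a program with this shape, assuming a libbpf-style build; the
map name, section name, and key choice are illustrative assumptions, not
taken from the syz reproducer linked above.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* Illustrative sketch only: names are made up; the actual reproducer
 * is at the "syz repro" link in this report. */
struct {
	__uint(type, BPF_MAP_TYPE_LRU_HASH);
	__uint(max_entries, 128);
	__type(key, __u64);
	__type(value, __u64);
} lru SEC(".maps");

SEC("raw_tp/contention_end")
int BPF_PROG(on_contention_end, void *lock, int ret)
{
	__u64 key = (__u64)lock;

	/* htab_lru_map_delete_elem() -> htab_lock_bucket() takes the
	 * bucket's raw spinlock. If that lock is contended, the kernel
	 * fires contention_end again and re-enters this program, which
	 * may then take a different bucket lock: the #60 -> #61
	 * dependency recorded in the splat above. */
	bpf_map_delete_elem(&lru, &key);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

In the trace, the outermost contention comes from udevd blocking on
disk->open_mutex in blkdev_get_by_dev(); any contended lock whose release
fires contention_end would exercise the same path.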


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup