[v6.1] possible deadlock in htab_lock_bucket


syzbot
Apr 17, 2024, 9:40:21 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 6741e066ec76 Linux 6.1.87
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=114dfec7180000
kernel config: https://syzkaller.appspot.com/x/.config?x=3fc2f61bd0ae457
dashboard link: https://syzkaller.appspot.com/bug?extid=44ec6afbbc9b9c66b458
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=130b08e7180000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=161a17b3180000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/665ba05da528/disk-6741e066.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/2ec8afc9ea0a/vmlinux-6741e066.xz
kernel image: https://storage.googleapis.com/syzbot-assets/b0dbfd69f35b/bzImage-6741e066.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+44ec6a...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.1.87-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor242/3557 is trying to acquire lock:
ffff888013f5eca0 (&htab->lockdep_key#13){....}-{2:2}, at: htab_lock_bucket+0x1a0/0x360 kernel/bpf/hashtab.c:166

but task is already holding lock:
ffff88807b96aaa0 (&htab->lockdep_key#14){....}-{2:2}, at: htab_lock_bucket+0x1a0/0x360 kernel/bpf/hashtab.c:166

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&htab->lockdep_key#14){....}-{2:2}:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
htab_lock_bucket+0x1a0/0x360 kernel/bpf/hashtab.c:166
htab_map_delete_elem+0x1d5/0x6b0 kernel/bpf/hashtab.c:1399
bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:596 [inline]
bpf_prog_run include/linux/filter.h:610 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run2+0x361/0x410 kernel/trace/bpf_trace.c:2312
__traceiter_contention_end+0x74/0xa0 include/trace/events/lock.h:122
trace_contention_end+0x14c/0x190 include/trace/events/lock.h:122
__pv_queued_spin_lock_slowpath+0x935/0xc50 kernel/locking/qspinlock.c:560
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:591 [inline]
queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
do_raw_spin_lock+0x269/0x370 kernel/locking/spinlock_debug.c:115
htab_lock_bucket+0x1a0/0x360 kernel/bpf/hashtab.c:166
htab_map_delete_elem+0x1d5/0x6b0 kernel/bpf/hashtab.c:1399
bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:596 [inline]
bpf_prog_run include/linux/filter.h:610 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run2+0x361/0x410 kernel/trace/bpf_trace.c:2312
__traceiter_contention_end+0x74/0xa0 include/trace/events/lock.h:122
trace_contention_end+0x12f/0x170 include/trace/events/lock.h:122
__mutex_lock_common kernel/locking/mutex.c:612 [inline]
__mutex_lock+0x2ed/0xd80 kernel/locking/mutex.c:747
nf_sockopt_find net/netfilter/nf_sockopt.c:67 [inline]
nf_getsockopt+0x32/0x2b0 net/netfilter/nf_sockopt.c:113
ip_getsockopt+0x21a/0x2d0 net/ipv4/ip_sockglue.c:1826
tcp_getsockopt+0x15c/0x1c0 net/ipv4/tcp.c:4445
__sys_getsockopt+0x2b2/0x5d0 net/socket.c:2327
__do_sys_getsockopt net/socket.c:2342 [inline]
__se_sys_getsockopt net/socket.c:2339 [inline]
__x64_sys_getsockopt+0xb1/0xc0 net/socket.c:2339
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #0 (&htab->lockdep_key#13){....}-{2:2}:
check_prev_add kernel/locking/lockdep.c:3090 [inline]
check_prevs_add kernel/locking/lockdep.c:3209 [inline]
validate_chain+0x1661/0x5950 kernel/locking/lockdep.c:3825
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
htab_lock_bucket+0x1a0/0x360 kernel/bpf/hashtab.c:166
htab_map_delete_elem+0x1d5/0x6b0 kernel/bpf/hashtab.c:1399
bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:596 [inline]
bpf_prog_run include/linux/filter.h:610 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run2+0x361/0x410 kernel/trace/bpf_trace.c:2312
__traceiter_contention_end+0x74/0xa0 include/trace/events/lock.h:122
trace_contention_end+0x14c/0x190 include/trace/events/lock.h:122
__pv_queued_spin_lock_slowpath+0x935/0xc50 kernel/locking/qspinlock.c:560
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:591 [inline]
queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
do_raw_spin_lock+0x269/0x370 kernel/locking/spinlock_debug.c:115
htab_lock_bucket+0x1a0/0x360 kernel/bpf/hashtab.c:166
htab_map_delete_elem+0x1d5/0x6b0 kernel/bpf/hashtab.c:1399
bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:596 [inline]
bpf_prog_run include/linux/filter.h:610 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run2+0x361/0x410 kernel/trace/bpf_trace.c:2312
__traceiter_contention_end+0x74/0xa0 include/trace/events/lock.h:122
trace_contention_end+0x12f/0x170 include/trace/events/lock.h:122
__mutex_lock_common kernel/locking/mutex.c:612 [inline]
__mutex_lock+0x2ed/0xd80 kernel/locking/mutex.c:747
xt_find_table_lock+0x48/0x3a0 net/netfilter/x_tables.c:1242
xt_request_find_table_lock+0x22/0xf0 net/netfilter/x_tables.c:1284
get_info net/ipv4/netfilter/ip_tables.c:965 [inline]
do_ipt_get_ctl+0x872/0x1890 net/ipv4/netfilter/ip_tables.c:1661
nf_getsockopt+0x28e/0x2b0 net/netfilter/nf_sockopt.c:116
ip_getsockopt+0x21a/0x2d0 net/ipv4/ip_sockglue.c:1826
tcp_getsockopt+0x15c/0x1c0 net/ipv4/tcp.c:4445
__sys_getsockopt+0x2b2/0x5d0 net/socket.c:2327
__do_sys_getsockopt net/socket.c:2342 [inline]
__se_sys_getsockopt net/socket.c:2339 [inline]
__x64_sys_getsockopt+0xb1/0xc0 net/socket.c:2339
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&htab->lockdep_key#14);
                               lock(&htab->lockdep_key#13);
                               lock(&htab->lockdep_key#14);
  lock(&htab->lockdep_key#13);

*** DEADLOCK ***

4 locks held by syz-executor242/3557:
#0: ffff8880295a8308 (&xt[i].mutex){+.+.}-{3:3}, at: xt_find_table_lock+0x48/0x3a0 net/netfilter/x_tables.c:1242
#1: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
#1: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
#1: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2272 [inline]
#1: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x110/0x410 kernel/trace/bpf_trace.c:2312
#2: ffff88807b96aaa0 (&htab->lockdep_key#14){....}-{2:2}, at: htab_lock_bucket+0x1a0/0x360 kernel/bpf/hashtab.c:166
#3: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
#3: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
#3: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2272 [inline]
#3: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x110/0x410 kernel/trace/bpf_trace.c:2312

stack backtrace:
CPU: 1 PID: 3557 Comm: syz-executor242 Not tainted 6.1.87-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
check_noncircular+0x2fa/0x3b0 kernel/locking/lockdep.c:2170
check_prev_add kernel/locking/lockdep.c:3090 [inline]
check_prevs_add kernel/locking/lockdep.c:3209 [inline]
validate_chain+0x1661/0x5950 kernel/locking/lockdep.c:3825
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
htab_lock_bucket+0x1a0/0x360 kernel/bpf/hashtab.c:166
htab_map_delete_elem+0x1d5/0x6b0 kernel/bpf/hashtab.c:1399
bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:596 [inline]
bpf_prog_run include/linux/filter.h:610 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run2+0x361/0x410 kernel/trace/bpf_trace.c:2312
__traceiter_contention_end+0x74/0xa0 include/trace/events/lock.h:122
trace_contention_end+0x14c/0x190 include/trace/events/lock.h:122
__pv_queued_spin_lock_slowpath+0x935/0xc50 kernel/locking/qspinlock.c:560
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:591 [inline]
queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
do_raw_spin_lock+0x269/0x370 kernel/locking/spinlock_debug.c:115
htab_lock_bucket+0x1a0/0x360 kernel/bpf/hashtab.c:166
htab_map_delete_elem+0x1d5/0x6b0 kernel/bpf/hashtab.c:1399
bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:596 [inline]
bpf_prog_run include/linux/filter.h:610 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run2+0x361/0x410 kernel/trace/bpf_trace.c:2312
__traceiter_contention_end+0x74/0xa0 include/trace/events/lock.h:122
trace_contention_end+0x12f/0x170 include/trace/events/lock.h:122
__mutex_lock_common kernel/locking/mutex.c:612 [inline]
__mutex_lock+0x2ed/0xd80 kernel/locking/mutex.c:747
xt_find_table_lock+0x48/0x3a0 net/netfilter/x_tables.c:1242
xt_request_find_table_lock+0x22/0xf0 net/netfilter/x_tables.c:1284
get_info net/ipv4/netfilter/ip_tables.c:965 [inline]
do_ipt_get_ctl+0x872/0x1890 net/ipv4/netfilter/ip_tables.c:1661
nf_getsockopt+0x28e/0x2b0 net/netfilter/nf_sockopt.c:116
ip_getsockopt+0x21a/0x2d0 net/ipv4/ip_sockglue.c:1826
tcp_getsockopt+0x15c/0x1c0 net/ipv4/tcp.c:4445
__sys_getsockopt+0x2b2/0x5d0 net/socket.c:2327
__do_sys_getsockopt net/socket.c:2342 [inline]
__se_sys_getsockopt net/socket.c:2339 [inline]
__x64_sys_getsockopt+0xb1/0xc0 net/socket.c:2339
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f617160ba4a
Code: c4 c1 e0 1a 0d 00 00 04 00 89 01 e9 e0 fe ff ff e8 0b 05 00 00 66 2e 0f 1f 84 00 00 00 00 00 90 49 89 ca b8 37 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 06 c3 0f 1f 44 00 00 48 c7 c2 b0 ff ff ff f7
RSP: 002b:00007ffcc07e9c48 EFLAGS: 00000242 ORIG_RAX: 0000000000000037
RAX: ffffffffffffffda RBX: 00007ffcc07e9c70 RCX: 00007f617160ba4a
RDX: 0000000000000040 RSI: 0000000000000000 RDI: 0000000000000003
RBP: 0000000000000003 R08: 00007ffcc07e9c6c R09: 00007ffcc07ea176
R10: 00007ffcc07e9c70 R11: 0000000000000242 R12: 00007f6171690440
R13: 00007f6171691f40 R14: 00007ffcc07e9c6c R15: 0000000000000000
</TASK>
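For context: the dependency chain above suggests a BPF program attached to the lock:contention_end tracepoint that deletes from a BPF hash map. Contending on one bucket spinlock fires the tracepoint, and the nested program invocation then takes a second bucket's lock, producing the #14 -> #13 ordering lockdep reports. A minimal sketch of a program of that shape (hypothetical, not the actual syzkaller reproducer; it only runs when loaded and attached in-kernel via libbpf, e.g. built with clang -target bpf):

```c
/* Hypothetical illustration of the pattern implied by the trace,
 * NOT the actual reproducer. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 16);
	__type(key, __u64);
	__type(value, __u64);
} m SEC(".maps");

SEC("tracepoint/lock/contention_end")
int on_contention_end(void *ctx)
{
	__u64 key = 0;

	/* htab_map_delete_elem() -> htab_lock_bucket(): if this bucket
	 * spinlock is contended, lock:contention_end fires again and this
	 * program re-enters on a different bucket, creating the circular
	 * bucket-lock dependency shown in the lockdep report. */
	bpf_map_delete_elem(&m, &key);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```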


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup