[v5.15] possible deadlock in sock_hash_delete_elem


syzbot
Mar 16, 2024, 8:55:29 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: b95c01af2113 Linux 5.15.152
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1261b83a180000
kernel config: https://syzkaller.appspot.com/x/.config?x=b26cb65e5b8ad5c7
dashboard link: https://syzkaller.appspot.com/bug?extid=990f10fde4e43920d8c2
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/2fc98856fcae/disk-b95c01af.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/3186db0dfe08/vmlinux-b95c01af.xz
kernel image: https://storage.googleapis.com/syzbot-assets/0df136a3e808/bzImage-b95c01af.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+990f10...@syzkaller.appspotmail.com

============================================
WARNING: possible recursive locking detected
5.15.152-syzkaller #0 Not tainted
--------------------------------------------
syz-executor.2/13912 is trying to acquire lock:
ffff88801f4378f8 (&htab->buckets[i].lock){+.-.}-{2:2}, at: sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937

but task is already holding lock:
ffff88804d9cd968 (&htab->buckets[i].lock){+.-.}-{2:2}, at: sock_hash_update_common+0x20c/0xa30 net/core/sock_map.c:1005

other info that might help us debug this:
Possible unsafe locking scenario:

CPU0
----
lock(&htab->buckets[i].lock);
lock(&htab->buckets[i].lock);

*** DEADLOCK ***

May be due to missing lock nesting notation

5 locks held by syz-executor.2/13912:
#0: ffff888078046920 (sk_lock-AF_UNIX){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1668 [inline]
#0: ffff888078046920 (sk_lock-AF_UNIX){+.+.}-{0:0}, at: sock_map_sk_acquire net/core/sock_map.c:119 [inline]
#0: ffff888078046920 (sk_lock-AF_UNIX){+.+.}-{0:0}, at: sock_map_update_elem_sys+0x1c8/0x770 net/core/sock_map.c:581
#1: ffffffff8c91f720 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x5/0x30 include/linux/rcupdate.h:268
#2: ffff88804d9cd968 (&htab->buckets[i].lock){+.-.}-{2:2}, at: sock_hash_update_common+0x20c/0xa30 net/core/sock_map.c:1005
#3: ffff88807a542290 (&psock->link_lock){+...}-{2:2}, at: spin_lock_bh include/linux/spinlock.h:368 [inline]
#3: ffff88807a542290 (&psock->link_lock){+...}-{2:2}, at: sock_map_del_link net/core/sock_map.c:147 [inline]
#3: ffff88807a542290 (&psock->link_lock){+...}-{2:2}, at: sock_map_unref+0xcc/0x5d0 net/core/sock_map.c:182
#4: ffffffff8c91f720 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x5/0x30 include/linux/rcupdate.h:268

stack backtrace:
CPU: 0 PID: 13912 Comm: syz-executor.2 Not tainted 5.15.152-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
print_deadlock_bug kernel/locking/lockdep.c:2946 [inline]
check_deadlock kernel/locking/lockdep.c:2989 [inline]
validate_chain+0x46d2/0x5930 kernel/locking/lockdep.c:3775
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5012
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937
bpf_prog_2c29ac5cdc6b1842+0x3a/0xf28
bpf_dispatcher_nop_func include/linux/bpf.h:780 [inline]
__bpf_prog_run include/linux/filter.h:625 [inline]
bpf_prog_run include/linux/filter.h:632 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:1880 [inline]
bpf_trace_run2+0x19e/0x340 kernel/trace/bpf_trace.c:1917
__bpf_trace_kfree+0x6e/0x90 include/trace/events/kmem.h:118
__traceiter_kfree+0x26/0x40 include/trace/events/kmem.h:118
trace_kfree include/trace/events/kmem.h:118 [inline]
kfree+0x22f/0x270 mm/slub.c:4549
sk_psock_free_link include/linux/skmsg.h:422 [inline]
sock_map_del_link net/core/sock_map.c:160 [inline]
sock_map_unref+0x3ac/0x5d0 net/core/sock_map.c:182
sock_hash_update_common+0x911/0xa30 net/core/sock_map.c:1028
sock_map_update_elem_sys+0x485/0x770 net/core/sock_map.c:587
map_update_elem+0x6a0/0x7c0 kernel/bpf/syscall.c:1163
__sys_bpf+0x2fd/0x670 kernel/bpf/syscall.c:4617
__do_sys_bpf kernel/bpf/syscall.c:4733 [inline]
__se_sys_bpf kernel/bpf/syscall.c:4731 [inline]
__x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:4731
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7f042a53eda9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f0428abf0c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007f042a66cf80 RCX: 00007f042a53eda9
RDX: 0000000000000020 RSI: 0000000020000040 RDI: 0000000000000002
RBP: 00007f042a58b47a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f042a66cf80 R15: 00007fff6b93ab08
</TASK>
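The call trace above shows the re-entrancy directly: sock_hash_update_common() takes a bucket spinlock, unlinking the old psock frees a link via kfree(), the kfree tracepoint fires a BPF program (bpf_trace_run2), and that program calls the sockhash delete helper, which tries to take a bucket lock again under the one already held. A minimal sketch of the kind of program involved follows; the map layout, section, and symbol names here are assumptions for illustration, not taken from the report, and it would be built with clang -target bpf against libbpf:

```c
// Sketch only: a tracepoint program on kmem:kfree that deletes from a
// BPF_MAP_TYPE_SOCKHASH. If the traced kfree() happens while a sockhash
// bucket lock is held (as in the trace above), bpf_map_delete_elem()
// re-enters sock_hash_delete_elem() and lockdep reports the deadlock.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_SOCKHASH);
    __uint(max_entries, 64);
    __type(key, __u32);
    __type(value, __u64);
} sock_hash SEC(".maps");        /* hypothetical map name */

SEC("tracepoint/kmem/kfree")
int on_kfree(void *ctx)
{
    __u32 key = 0;

    /* Runs in tracepoint context; deleting from the sockhash here is
     * what recurses into the bucket lock. */
    bpf_map_delete_elem(&sock_hash, &key);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```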


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot
Mar 24, 2024, 6:39:25 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: d7543167affd Linux 6.1.82
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=17d19546180000
kernel config: https://syzkaller.appspot.com/x/.config?x=59059e181681c079
dashboard link: https://syzkaller.appspot.com/bug?extid=e4bf1416ef54504e4c07
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/a2421980b49a/disk-d7543167.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/52a6bb44161f/vmlinux-d7543167.xz
kernel image: https://storage.googleapis.com/syzbot-assets/9b3723bf43a9/bzImage-d7543167.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+e4bf14...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.1.82-syzkaller #0 Not tainted
------------------------------------------------------
syz-fuzzer/11553 is trying to acquire lock:
ffff888074075a18 (&htab->buckets[i].lock){+...}-{2:2}, at: sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932

but task is already holding lock:
ffff8880b982a4d8 (hrtimer_bases.lock){-.-.}-{2:2}, at: lock_hrtimer_base kernel/time/hrtimer.c:173 [inline]
ffff8880b982a4d8 (hrtimer_bases.lock){-.-.}-{2:2}, at: hrtimer_start_range_ns+0xd8/0xc50 kernel/time/hrtimer.c:1297

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (hrtimer_bases.lock){-.-.}-{2:2}:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
lock_hrtimer_base kernel/time/hrtimer.c:173 [inline]
hrtimer_start_range_ns+0xd8/0xc50 kernel/time/hrtimer.c:1297
hrtimer_start include/linux/hrtimer.h:420 [inline]
run_page_cache_worker kernel/rcu/tree.c:3292 [inline]
kvfree_call_rcu+0x72b/0x8c0 kernel/rcu/tree.c:3403
rtnl_register_internal+0x489/0x580 net/core/rtnetlink.c:260
rtnl_register+0x32/0x70 net/core/rtnetlink.c:310
ip_rt_init+0x335/0x3c7 net/ipv4/route.c:3768
ip_init+0xa/0x14 net/ipv4/ip_output.c:1767
inet_init+0x2ae/0x3c0 net/ipv4/af_inet.c:2031
do_one_initcall+0x265/0x8f0 init/main.c:1296
do_initcall_level+0x157/0x207 init/main.c:1369
do_initcalls+0x49/0x86 init/main.c:1385
kernel_init_freeable+0x45c/0x60f init/main.c:1624
kernel_init+0x19/0x290 init/main.c:1512
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307

-> #1 (krc.lock){..-.}-{2:2}:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
krc_this_cpu_lock kernel/rcu/tree.c:2990 [inline]
add_ptr_to_bulk_krc_lock kernel/rcu/tree.c:3310 [inline]
kvfree_call_rcu+0x1b2/0x8c0 kernel/rcu/tree.c:3401
sock_hash_free_elem net/core/sock_map.c:893 [inline]
sock_hash_delete_from_link net/core/sock_map.c:916 [inline]
sock_map_unlink net/core/sock_map.c:1550 [inline]
sock_map_remove_links+0x46f/0x550 net/core/sock_map.c:1562
sock_map_close+0x118/0x2d0 net/core/sock_map.c:1627
unix_release+0x7e/0xc0 net/unix/af_unix.c:1038
__sock_release net/socket.c:654 [inline]
sock_close+0xcd/0x230 net/socket.c:1400
__fput+0x3b7/0x890 fs/file_table.c:320
task_work_run+0x246/0x300 kernel/task_work.c:179
resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
exit_to_user_mode_loop+0xde/0x100 kernel/entry/common.c:171
exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:204
__syscall_exit_to_user_mode_work kernel/entry/common.c:286 [inline]
syscall_exit_to_user_mode+0x60/0x270 kernel/entry/common.c:297
do_syscall_64+0x49/0xb0 arch/x86/entry/common.c:87
entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&htab->buckets[i].lock){+...}-{2:2}:
check_prev_add kernel/locking/lockdep.c:3090 [inline]
check_prevs_add kernel/locking/lockdep.c:3209 [inline]
validate_chain+0x1661/0x5950 kernel/locking/lockdep.c:3825
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932
bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:600 [inline]
bpf_prog_run include/linux/filter.h:607 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run2+0x1fd/0x410 kernel/trace/bpf_trace.c:2312
trace_hrtimer_start include/trace/events/timer.h:202 [inline]
debug_activate kernel/time/hrtimer.c:476 [inline]
enqueue_hrtimer+0x382/0x410 kernel/time/hrtimer.c:1084
__hrtimer_start_range_ns kernel/time/hrtimer.c:1259 [inline]
hrtimer_start_range_ns+0xa9c/0xc50 kernel/time/hrtimer.c:1299
hrtimer_start_expires include/linux/hrtimer.h:434 [inline]
hrtimer_sleeper_start_expires kernel/time/hrtimer.c:1966 [inline]
schedule_hrtimeout_range_clock+0x272/0x480 kernel/time/hrtimer.c:2305
ep_poll fs/eventpoll.c:1884 [inline]
do_epoll_wait+0x1be9/0x1e60 fs/eventpoll.c:2262
do_epoll_pwait+0x56/0x1d0 fs/eventpoll.c:2296
__do_sys_epoll_pwait fs/eventpoll.c:2309 [inline]
__se_sys_epoll_pwait fs/eventpoll.c:2303 [inline]
__x64_sys_epoll_pwait+0x2b4/0x300 fs/eventpoll.c:2303
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd

other info that might help us debug this:

Chain exists of:
  &htab->buckets[i].lock --> krc.lock --> hrtimer_bases.lock

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(hrtimer_bases.lock);
                               lock(krc.lock);
                               lock(hrtimer_bases.lock);
  lock(&htab->buckets[i].lock);

*** DEADLOCK ***

2 locks held by syz-fuzzer/11553:
#0: ffff8880b982a4d8 (hrtimer_bases.lock){-.-.}-{2:2}, at: lock_hrtimer_base kernel/time/hrtimer.c:173 [inline]
#0: ffff8880b982a4d8 (hrtimer_bases.lock){-.-.}-{2:2}, at: hrtimer_start_range_ns+0xd8/0xc50 kernel/time/hrtimer.c:1297
#1: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:319 [inline]
#1: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:760 [inline]
#1: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2272 [inline]
#1: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x110/0x410 kernel/trace/bpf_trace.c:2312

stack backtrace:
CPU: 0 PID: 11553 Comm: syz-fuzzer Not tainted 6.1.82-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
check_noncircular+0x2fa/0x3b0 kernel/locking/lockdep.c:2170
check_prev_add kernel/locking/lockdep.c:3090 [inline]
check_prevs_add kernel/locking/lockdep.c:3209 [inline]
validate_chain+0x1661/0x5950 kernel/locking/lockdep.c:3825
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932
bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:600 [inline]
bpf_prog_run include/linux/filter.h:607 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run2+0x1fd/0x410 kernel/trace/bpf_trace.c:2312
trace_hrtimer_start include/trace/events/timer.h:202 [inline]
debug_activate kernel/time/hrtimer.c:476 [inline]
enqueue_hrtimer+0x382/0x410 kernel/time/hrtimer.c:1084
__hrtimer_start_range_ns kernel/time/hrtimer.c:1259 [inline]
hrtimer_start_range_ns+0xa9c/0xc50 kernel/time/hrtimer.c:1299
hrtimer_start_expires include/linux/hrtimer.h:434 [inline]
hrtimer_sleeper_start_expires kernel/time/hrtimer.c:1966 [inline]
schedule_hrtimeout_range_clock+0x272/0x480 kernel/time/hrtimer.c:2305
ep_poll fs/eventpoll.c:1884 [inline]
do_epoll_wait+0x1be9/0x1e60 fs/eventpoll.c:2262
do_epoll_pwait+0x56/0x1d0 fs/eventpoll.c:2296
__do_sys_epoll_pwait fs/eventpoll.c:2309 [inline]
__se_sys_epoll_pwait fs/eventpoll.c:2303 [inline]
__x64_sys_epoll_pwait+0x2b4/0x300 fs/eventpoll.c:2303
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x40720e
Code: 48 83 ec 38 e8 13 00 00 00 48 83 c4 38 5d c3 cc cc cc cc cc cc cc cc cc cc cc cc cc 49 89 f2 48 89 fa 48 89 ce 48 89 df 0f 05 <48> 3d 01 f0 ff ff 76 15 48 f7 d8 48 89 c1 48 c7 c0 ff ff ff ff 48
RSP: 002b:000000c00b3ab748 EFLAGS: 00000246 ORIG_RAX: 0000000000000119
RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 000000000040720e
RDX: 0000000000000080 RSI: 000000c00b3ab818 RDI: 0000000000000004
RBP: 000000c00b3ab790 R08: 0000000000000000 R09: 0000000000000000
R10: 000000000000004b R11: 0000000000000246 R12: 000000c00b3ab820
R13: 000000c013f3b95c R14: 000000c00addf1e0 R15: 0000000000000000

syzbot
Apr 7, 2024, 1:57:23 AM
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: 9465fef4ae35 Linux 5.15.153
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=128304f3180000
kernel config: https://syzkaller.appspot.com/x/.config?x=176c746ee3348b33
dashboard link: https://syzkaller.appspot.com/bug?extid=990f10fde4e43920d8c2
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=11794da9180000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=150f2fe3180000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/741f29d5f449/disk-9465fef4.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/66645f341114/vmlinux-9465fef4.xz
kernel image: https://storage.googleapis.com/syzbot-assets/f7d21e0c9e19/bzImage-9465fef4.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+990f10...@syzkaller.appspotmail.com

============================================
WARNING: possible recursive locking detected
5.15.153-syzkaller #0 Not tainted
--------------------------------------------
syz-executor155/3500 is trying to acquire lock:
ffff888078a28020 (&htab->buckets[i].lock){+.-.}-{2:2}, at: sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937

but task is already holding lock:
ffff888078a28020 (&htab->buckets[i].lock){+.-.}-{2:2}, at: sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937

other info that might help us debug this:
Possible unsafe locking scenario:

CPU0
----
lock(&htab->buckets[i].lock);
lock(&htab->buckets[i].lock);

*** DEADLOCK ***

May be due to missing lock nesting notation

4 locks held by syz-executor155/3500:
#0: ffffffff8c91f720 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x5/0x30 include/linux/rcupdate.h:311
#1: ffff888078a28020 (&htab->buckets[i].lock){+.-.}-{2:2}, at: sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937
#2: ffff8881472bf290 (&psock->link_lock){+...}-{2:2}, at: spin_lock_bh include/linux/spinlock.h:368 [inline]
#2: ffff8881472bf290 (&psock->link_lock){+...}-{2:2}, at: sock_map_del_link net/core/sock_map.c:147 [inline]
#2: ffff8881472bf290 (&psock->link_lock){+...}-{2:2}, at: sock_map_unref+0xcc/0x5d0 net/core/sock_map.c:182
#3: ffffffff8c91f720 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x5/0x30 include/linux/rcupdate.h:311

stack backtrace:
CPU: 0 PID: 3500 Comm: syz-executor155 Not tainted 5.15.153-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
print_deadlock_bug kernel/locking/lockdep.c:2946 [inline]
check_deadlock kernel/locking/lockdep.c:2989 [inline]
validate_chain+0x46d2/0x5930 kernel/locking/lockdep.c:3775
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5012
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937
bpf_prog_3ffd2c70e20892c6+0x3a/0x104
bpf_dispatcher_nop_func include/linux/bpf.h:785 [inline]
__bpf_prog_run include/linux/filter.h:628 [inline]
bpf_prog_run include/linux/filter.h:635 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:1880 [inline]
bpf_trace_run2+0x19e/0x340 kernel/trace/bpf_trace.c:1917
__bpf_trace_kfree+0x6e/0x90 include/trace/events/kmem.h:118
trace_kfree include/trace/events/kmem.h:118 [inline]
kfree+0x22f/0x270 mm/slub.c:4549
sk_psock_free_link include/linux/skmsg.h:422 [inline]
sock_map_del_link net/core/sock_map.c:160 [inline]
sock_map_unref+0x3ac/0x5d0 net/core/sock_map.c:182
sock_hash_delete_elem+0x273/0x2f0 net/core/sock_map.c:941
bpf_prog_3ffd2c70e20892c6+0x3a/0x104
bpf_dispatcher_nop_func include/linux/bpf.h:785 [inline]
__bpf_prog_run include/linux/filter.h:628 [inline]
bpf_prog_run include/linux/filter.h:635 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:1880 [inline]
bpf_trace_run2+0x19e/0x340 kernel/trace/bpf_trace.c:1917
__bpf_trace_kfree+0x6e/0x90 include/trace/events/kmem.h:118
trace_kfree include/trace/events/kmem.h:118 [inline]
kfree+0x22f/0x270 mm/slub.c:4549
map_update_elem+0x6ab/0x7c0 kernel/bpf/syscall.c:1188
__sys_bpf+0x2fd/0x670 kernel/bpf/syscall.c:4639
__do_sys_bpf kernel/bpf/syscall.c:4755 [inline]
__se_sys_bpf kernel/bpf/syscall.c:4753 [inline]
__x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:4753
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7fad957df5e9
Code: 48 83 c4 28 c3 e8 37 17 00 00 0f 1f 80 00 00 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fff8558fd38 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007fff8558ff08 RCX: 00007fad957df5e9
RDX: 0000000000000020 RSI: 0000000020000c80 RDI: 0000000000000002
RBP: 00007fad95852610 R08: 00007fff8558ff08 R09: 00007fff8558ff08
R10: 00007fff8558ff08 R11: 0000000000000246 R12: 0000000000000001
R13: 00007fff8558fef8 R14: 0000000000000001 R15: 0000000000000001
</TASK>


---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

syzbot
Apr 10, 2024, 11:16:20 PM
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: bf1e3b1cb1e0 Linux 6.1.85
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=161500cb180000
kernel config: https://syzkaller.appspot.com/x/.config?x=d3e21b90946dbbab
dashboard link: https://syzkaller.appspot.com/bug?extid=e4bf1416ef54504e4c07
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=16a2c5bd180000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/686a8153616f/disk-bf1e3b1c.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/1e494c0feb8c/vmlinux-bf1e3b1c.xz
kernel image: https://storage.googleapis.com/syzbot-assets/fa38d0bc0763/bzImage-bf1e3b1c.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+e4bf14...@syzkaller.appspotmail.com

============================================
WARNING: possible recursive locking detected
6.1.85-syzkaller #0 Not tainted
--------------------------------------------
syz-executor.0/3979 is trying to acquire lock:
ffff8880637b9820 (&htab->buckets[i].lock){+.-.}-{2:2}, at: sock_hash_delete_elem+0x177/0x400 net/core/sock_map.c:938

but task is already holding lock:
ffff888075218020 (&htab->buckets[i].lock){+.-.}-{2:2}, at: sock_hash_delete_elem+0x177/0x400 net/core/sock_map.c:938

other info that might help us debug this:
Possible unsafe locking scenario:

CPU0
----
lock(&htab->buckets[i].lock);
lock(&htab->buckets[i].lock);

*** DEADLOCK ***

May be due to missing lock nesting notation

4 locks held by syz-executor.0/3979:
#0: ffff888027f75020 (&child->perf_event_mutex){+.+.}-{3:3}, at: perf_event_exit_task+0xa3/0xb30 kernel/events/core.c:13042
#1: ffffffff8d12ac40 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
#1: ffffffff8d12ac40 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
#1: ffffffff8d12ac40 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2272 [inline]
#1: ffffffff8d12ac40 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x110/0x410 kernel/trace/bpf_trace.c:2312
#2: ffff888075218020 (&htab->buckets[i].lock){+.-.}-{2:2}, at: sock_hash_delete_elem+0x177/0x400 net/core/sock_map.c:938
#3: ffffffff8d12ac40 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
#3: ffffffff8d12ac40 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
#3: ffffffff8d12ac40 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2272 [inline]
#3: ffffffff8d12ac40 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x110/0x410 kernel/trace/bpf_trace.c:2312

stack backtrace:
CPU: 0 PID: 3979 Comm: syz-executor.0 Not tainted 6.1.85-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
print_deadlock_bug kernel/locking/lockdep.c:2983 [inline]
check_deadlock kernel/locking/lockdep.c:3026 [inline]
validate_chain+0x4711/0x5950 kernel/locking/lockdep.c:3812
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0x177/0x400 net/core/sock_map.c:938
bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:603 [inline]
bpf_prog_run include/linux/filter.h:610 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run2+0x1fd/0x410 kernel/trace/bpf_trace.c:2312
__traceiter_contention_end+0x74/0xa0 include/trace/events/lock.h:122
trace_contention_end+0x14c/0x190 include/trace/events/lock.h:122
__pv_queued_spin_lock_slowpath+0x935/0xc50 kernel/locking/qspinlock.c:560
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:591 [inline]
queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
do_raw_spin_lock+0x269/0x370 kernel/locking/spinlock_debug.c:115
sock_hash_delete_elem+0x177/0x400 net/core/sock_map.c:938
bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:603 [inline]
bpf_prog_run include/linux/filter.h:610 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run2+0x1fd/0x410 kernel/trace/bpf_trace.c:2312
__traceiter_contention_end+0x74/0xa0 include/trace/events/lock.h:122
trace_contention_end+0x12f/0x170 include/trace/events/lock.h:122
__mutex_lock_common kernel/locking/mutex.c:612 [inline]
__mutex_lock+0x2ed/0xd80 kernel/locking/mutex.c:747
perf_event_exit_task+0xa3/0xb30 kernel/events/core.c:13042
do_exit+0xa83/0x26a0 kernel/exit.c:878
do_group_exit+0x202/0x2b0 kernel/exit.c:1019
__do_sys_exit_group kernel/exit.c:1030 [inline]
__se_sys_exit_group kernel/exit.c:1028 [inline]
__x64_sys_exit_group+0x3b/0x40 kernel/exit.c:1028
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fb1c787de69
Code: Unable to access opcode bytes at 0x7fb1c787de3f.
RSP: 002b:00007fffb8803ac8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
RAX: ffffffffffffffda RBX: 000000000000001e RCX: 00007fb1c787de69
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: 0000000000000001 R08: 00007fb1c79abf8c R09: 0000000000000000
R10: 0000001b32060000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000001 R15: 0000000000000000

syzbot
Apr 11, 2024, 3:42:19 PM
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: bf1e3b1cb1e0 Linux 6.1.85
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=16b54f83180000
kernel config: https://syzkaller.appspot.com/x/.config?x=d3e21b90946dbbab
dashboard link: https://syzkaller.appspot.com/bug?extid=e4bf1416ef54504e4c07
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=106547bd180000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=139f0eb9180000

Downloadable assets:
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+e4bf14...@syzkaller.appspotmail.com

============================================
WARNING: possible recursive locking detected
6.1.85-syzkaller #0 Not tainted
--------------------------------------------
syz-executor358/3568 is trying to acquire lock:
ffff88807b56e820 (&htab->buckets[i].lock){+...}-{2:2}, at: sock_hash_delete_elem+0x177/0x400 net/core/sock_map.c:938

but task is already holding lock:
ffff88807b4e2820 (&htab->buckets[i].lock){+...}-{2:2}, at: sock_hash_delete_elem+0x177/0x400 net/core/sock_map.c:938

other info that might help us debug this:
Possible unsafe locking scenario:

CPU0
----
lock(&htab->buckets[i].lock);
lock(&htab->buckets[i].lock);

*** DEADLOCK ***

May be due to missing lock nesting notation

4 locks held by syz-executor358/3568:
#0: ffff88807b844f78 (&tsk->futex_exit_mutex){+.+.}-{3:3}, at: futex_cleanup_begin kernel/futex/core.c:1076 [inline]
#0: ffff88807b844f78 (&tsk->futex_exit_mutex){+.+.}-{3:3}, at: futex_exit_release+0x30/0x1e0 kernel/futex/core.c:1128
#1: ffffffff8d12ac40 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
#1: ffffffff8d12ac40 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
#1: ffffffff8d12ac40 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2272 [inline]
#1: ffffffff8d12ac40 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x110/0x410 kernel/trace/bpf_trace.c:2312
#2: ffff88807b4e2820 (&htab->buckets[i].lock){+...}-{2:2}, at: sock_hash_delete_elem+0x177/0x400 net/core/sock_map.c:938
#3: ffffffff8d12ac40 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
#3: ffffffff8d12ac40 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
#3: ffffffff8d12ac40 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2272 [inline]
#3: ffffffff8d12ac40 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x110/0x410 kernel/trace/bpf_trace.c:2312

stack backtrace:
CPU: 0 PID: 3568 Comm: syz-executor358 Not tainted 6.1.85-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
print_deadlock_bug kernel/locking/lockdep.c:2983 [inline]
check_deadlock kernel/locking/lockdep.c:3026 [inline]
validate_chain+0x4711/0x5950 kernel/locking/lockdep.c:3812
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0x177/0x400 net/core/sock_map.c:938
bpf_prog_05fc780d7a5f93f9+0x42/0x46
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:603 [inline]
bpf_prog_run include/linux/filter.h:610 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run2+0x1fd/0x410 kernel/trace/bpf_trace.c:2312
__traceiter_contention_end+0x74/0xa0 include/trace/events/lock.h:122
trace_contention_end+0x14c/0x190 include/trace/events/lock.h:122
__pv_queued_spin_lock_slowpath+0x935/0xc50 kernel/locking/qspinlock.c:560
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:591 [inline]
queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
do_raw_spin_lock+0x269/0x370 kernel/locking/spinlock_debug.c:115
sock_hash_delete_elem+0x177/0x400 net/core/sock_map.c:938
bpf_prog_05fc780d7a5f93f9+0x42/0x46
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:603 [inline]
bpf_prog_run include/linux/filter.h:610 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run2+0x1fd/0x410 kernel/trace/bpf_trace.c:2312
__traceiter_contention_end+0x74/0xa0 include/trace/events/lock.h:122
trace_contention_end+0x12f/0x170 include/trace/events/lock.h:122
__mutex_lock_common kernel/locking/mutex.c:612 [inline]
__mutex_lock+0x2ed/0xd80 kernel/locking/mutex.c:747
futex_cleanup_begin kernel/futex/core.c:1076 [inline]
futex_exit_release+0x30/0x1e0 kernel/futex/core.c:1128
exit_mm_release+0x16/0x30 kernel/fork.c:1505
exit_mm+0xa9/0x300 kernel/exit.c:535
do_exit+0x9f6/0x26a0 kernel/exit.c:856
do_group_exit+0x202/0x2b0 kernel/exit.c:1019
__do_sys_exit_group kernel/exit.c:1030 [inline]
__se_sys_exit_group kernel/exit.c:1028 [inline]
__x64_sys_exit_group+0x3b/0x40 kernel/exit.c:1028
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f642724cf79
Code: 90 49 c7 c0 b8 ff ff ff be e7 00 00 00 ba 3c 00 00 00 eb 12 0f 1f 44 00 00 89 d0 0f 05 48 3d 00 f0 ff ff 77 1c f4 89 f0 0f 05 <48> 3d 00 f0 ff ff 76 e7 f7 d8 64 41 89 00 eb df 0f 1f 80 00 00 00
RSP: 002b:00007fff199c5a78 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f642724cf79
RDX: 000000000000003c RSI: 00000000000000e7 RDI: 0000000000000000
RBP: 00007f64272c82b0 R08: ffffffffffffffb8 R09: 00000000000000a0
R10: 00000000000000a0 R11: 0000000000000246 R12: 00007f64272c82b0
R13: 0000000000000000 R14: 00007f64272c8d20 R15: 00007f642721e110