[v6.1] possible deadlock in __bpf_ringbuf_reserve

syzbot

Mar 8, 2024, 6:13:23 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 61adba85cc40 Linux 6.1.81
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=10793fde180000
kernel config: https://syzkaller.appspot.com/x/.config?x=41dc7343796eb054
dashboard link: https://syzkaller.appspot.com/bug?extid=f85ef54ebfb2b7619272
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=13c4721e180000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=162ba649180000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/15f4df42f73e/disk-61adba85.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/773cd7f79956/vmlinux-61adba85.xz
kernel image: https://storage.googleapis.com/syzbot-assets/6455b0fc2f11/bzImage-61adba85.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+f85ef5...@syzkaller.appspotmail.com

============================================
WARNING: possible recursive locking detected
6.1.81-syzkaller #0 Not tainted
--------------------------------------------
syz-executor262/3548 is trying to acquire lock:
ffffc90003ce90d8 (&rb->spinlock){....}-{2:2}, at: __bpf_ringbuf_reserve+0x20d/0x4f0 kernel/bpf/ringbuf.c:410

but task is already holding lock:
ffffc90003c650d8 (&rb->spinlock){....}-{2:2}, at: __bpf_ringbuf_reserve+0x20d/0x4f0 kernel/bpf/ringbuf.c:410

other info that might help us debug this:
Possible unsafe locking scenario:

CPU0
----
lock(&rb->spinlock);
lock(&rb->spinlock);

*** DEADLOCK ***

May be due to missing lock nesting notation

4 locks held by syz-executor262/3548:
#0: ffff88801f7313f8 (&tsk->futex_exit_mutex){+.+.}-{3:3}, at: futex_cleanup_begin kernel/futex/core.c:1076 [inline]
#0: ffff88801f7313f8 (&tsk->futex_exit_mutex){+.+.}-{3:3}, at: futex_exit_release+0x30/0x1e0 kernel/futex/core.c:1128
#1: ffffffff8d12a840 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:319 [inline]
#1: ffffffff8d12a840 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:760 [inline]
#1: ffffffff8d12a840 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2272 [inline]
#1: ffffffff8d12a840 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x110/0x410 kernel/trace/bpf_trace.c:2312
#2: ffffc90003c650d8 (&rb->spinlock){....}-{2:2}, at: __bpf_ringbuf_reserve+0x20d/0x4f0 kernel/bpf/ringbuf.c:410
#3: ffffffff8d12a840 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:319 [inline]
#3: ffffffff8d12a840 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:760 [inline]
#3: ffffffff8d12a840 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2272 [inline]
#3: ffffffff8d12a840 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x110/0x410 kernel/trace/bpf_trace.c:2312

stack backtrace:
CPU: 0 PID: 3548 Comm: syz-executor262 Not tainted 6.1.81-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
print_deadlock_bug kernel/locking/lockdep.c:2983 [inline]
check_deadlock kernel/locking/lockdep.c:3026 [inline]
validate_chain+0x4711/0x5950 kernel/locking/lockdep.c:3812
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
__bpf_ringbuf_reserve+0x20d/0x4f0 kernel/bpf/ringbuf.c:410
____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:445 [inline]
bpf_ringbuf_reserve+0x58/0x70 kernel/bpf/ringbuf.c:437
bpf_prog_9efe54833449f08e+0x25/0x3f
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:600 [inline]
bpf_prog_run include/linux/filter.h:607 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run2+0x1fd/0x410 kernel/trace/bpf_trace.c:2312
__traceiter_contention_end+0x74/0xa0 include/trace/events/lock.h:122
trace_contention_end+0x14c/0x190 include/trace/events/lock.h:122
__pv_queued_spin_lock_slowpath+0x935/0xc50 kernel/locking/qspinlock.c:560
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:591 [inline]
queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
do_raw_spin_lock+0x269/0x370 kernel/locking/spinlock_debug.c:115
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
_raw_spin_lock_irqsave+0xdd/0x120 kernel/locking/spinlock.c:162
__bpf_ringbuf_reserve+0x20d/0x4f0 kernel/bpf/ringbuf.c:410
____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:445 [inline]
bpf_ringbuf_reserve+0x58/0x70 kernel/bpf/ringbuf.c:437
bpf_prog_9efe54833449f08e+0x25/0x3f
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:600 [inline]
bpf_prog_run include/linux/filter.h:607 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run2+0x1fd/0x410 kernel/trace/bpf_trace.c:2312
__traceiter_contention_end+0x74/0xa0 include/trace/events/lock.h:122
trace_contention_end+0x12f/0x170 include/trace/events/lock.h:122
__mutex_lock_common kernel/locking/mutex.c:612 [inline]
__mutex_lock+0x2ed/0xd80 kernel/locking/mutex.c:747
futex_cleanup_begin kernel/futex/core.c:1076 [inline]
futex_exit_release+0x30/0x1e0 kernel/futex/core.c:1128
exit_mm_release+0x16/0x30 kernel/fork.c:1505
exit_mm+0xa9/0x300 kernel/exit.c:535
do_exit+0x9f6/0x26a0 kernel/exit.c:856
do_group_exit+0x202/0x2b0 kernel/exit.c:1019
__do_sys_exit_group kernel/exit.c:1030 [inline]
__se_sys_exit_group kernel/exit.c:1028 [inline]
__x64_sys_exit_group+0x3b/0x40 kernel/exit.c:1028
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f9821f6ce79
Code: 90 49 c7 c0 b8 ff ff ff be e7 00 00 00 ba 3c 00 00 00 eb 12 0f 1f 44 00 00 89 d0 0f 05 48 3d 00 f0 ff ff 77 1c f4 89 f0 0f 05 <48> 3d 00 f0 ff ff 76 e7 f7 d8 64 41 89 00 eb df 0f 1f 80 00 00 00
RSP: 002b:00007ffcad784b68 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f9821f6ce79
RDX: 000000000000003c RSI: 00000000000000e7 RDI: 0000000000000000
RBP: 00007f9821fe82b0 R08: ffffffffffffffb8 R09: 00000000000000a0
R10: 0000000000000000 R11: 0000000000000246 R12: 00007f9821fe82b0
R13: 0000000000000000 R14: 00007f9821fe8d20 R15: 00007f9821f3e000
</TASK>
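
The call chain above shows the recursion: a BPF program attached to the lock contention_end tracepoint calls bpf_ringbuf_reserve(), which takes rb->spinlock in __bpf_ringbuf_reserve(). When that spinlock itself contends, the slow path fires contention_end again, the same program re-enters bpf_ringbuf_reserve() on the lock-holding task, and lockdep reports the recursive acquisition. A minimal sketch of a program that exercises this path (hypothetical BPF C, not the syzkaller reproducer; the map and program names are made up):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* Hypothetical ring buffer map; the name "rb" is invented for this sketch. */
struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 4096);
} rb SEC(".maps");

SEC("tp_btf/contention_end")
int BPF_PROG(on_contention_end, void *lock, int ret)
{
	__u64 *e;

	/*
	 * bpf_ringbuf_reserve() takes rb->spinlock internally. If the lock
	 * that just finished contending is that same rb->spinlock, this
	 * program re-enters the reserve path while the lock is already
	 * held on this CPU, which is the scenario lockdep flags above.
	 */
	e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);
	if (!e)
		return 0;

	*e = (__u64)(unsigned long)lock;
	bpf_ringbuf_submit(e, 0);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";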


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup