[v6.6] possible deadlock in trie_delete_elem


syzbot

Jun 16, 2025, 9:09:28 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: c2603c511feb Linux 6.6.93
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=144b1e82580000
kernel config: https://syzkaller.appspot.com/x/.config?x=486bade17c8c30b9
dashboard link: https://syzkaller.appspot.com/bug?extid=3293cebb61e068fec0e9
compiler: Debian clang version 20.1.6 (++20250514063057+1e4d39e07757-1~exp1~20250514183223.118), Debian LLD 20.1.6

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/5819843271d0/disk-c2603c51.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/113f268f936d/vmlinux-c2603c51.xz
kernel image: https://storage.googleapis.com/syzbot-assets/493dd7e2863f/bzImage-c2603c51.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+3293ce...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.6.93-syzkaller #0 Not tainted
------------------------------------------------------
syz.1.495/6892 is trying to acquire lock:
ffff88805c071238 (&trie->lock){-.-.}-{2:2}, at: trie_delete_elem+0x96/0x6a0 kernel/bpf/lpm_trie.c:467

but task is already holding lock:
ffffffff9714e068 (&obj_hash[i].lock){-.-.}-{2:2}, at: debug_object_activate+0x6c/0x4b0 lib/debugobjects.c:709

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&obj_hash[i].lock){-.-.}-{2:2}:
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xa8/0xf0 kernel/locking/spinlock.c:162
__debug_check_no_obj_freed lib/debugobjects.c:979 [inline]
debug_check_no_obj_freed+0x13a/0x540 lib/debugobjects.c:1020
slab_free_hook mm/slub.c:1781 [inline]
slab_free_freelist_hook+0xd2/0x1b0 mm/slub.c:1832
slab_free mm/slub.c:3816 [inline]
__kmem_cache_free+0xba/0x1f0 mm/slub.c:3829
trie_update_elem+0x6d1/0xea0 kernel/bpf/lpm_trie.c:444
bpf_map_update_value+0x67c/0x740 kernel/bpf/syscall.c:201
map_update_elem+0x57b/0x700 kernel/bpf/syscall.c:1561
__sys_bpf+0x652/0x800 kernel/bpf/syscall.c:5455
__do_sys_bpf kernel/bpf/syscall.c:5571 [inline]
__se_sys_bpf kernel/bpf/syscall.c:5569 [inline]
__x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:5569
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #0 (&trie->lock){-.-.}-{2:2}:
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xa8/0xf0 kernel/locking/spinlock.c:162
trie_delete_elem+0x96/0x6a0 kernel/bpf/lpm_trie.c:467
bpf_prog_41385012b43a9f2e+0x48/0x4c
bpf_dispatcher_nop_func include/linux/bpf.h:1213 [inline]
__bpf_prog_run include/linux/filter.h:612 [inline]
bpf_prog_run include/linux/filter.h:619 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2322 [inline]
bpf_trace_run2+0x1d1/0x3c0 kernel/trace/bpf_trace.c:2361
__bpf_trace_contention_end+0xdd/0x130 include/trace/events/lock.h:122
trace_contention_end+0xe6/0x110 include/trace/events/lock.h:122
__pv_queued_spin_lock_slowpath+0x7ec/0x9d0 kernel/locking/qspinlock.c:560
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:586 [inline]
queued_spin_lock_slowpath arch/x86/include/asm/qspinlock.h:51 [inline]
queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
do_raw_spin_lock+0x24e/0x2c0 kernel/locking/spinlock_debug.c:115
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
_raw_spin_lock_irqsave+0xb4/0xf0 kernel/locking/spinlock.c:162
debug_object_activate+0x6c/0x4b0 lib/debugobjects.c:709
debug_hrtimer_activate kernel/time/hrtimer.c:450 [inline]
debug_activate kernel/time/hrtimer.c:505 [inline]
enqueue_hrtimer+0x30/0x370 kernel/time/hrtimer.c:1113
__run_hrtimer kernel/time/hrtimer.c:1772 [inline]
__hrtimer_run_queues+0x637/0xc40 kernel/time/hrtimer.c:1819
hrtimer_interrupt+0x3c9/0x9c0 kernel/time/hrtimer.c:1881
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1077 [inline]
__sysvec_apic_timer_interrupt+0xfb/0x3b0 arch/x86/kernel/apic/apic.c:1094
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1088 [inline]
sysvec_apic_timer_interrupt+0x9f/0xc0 arch/x86/kernel/apic/apic.c:1088
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:687
should_resched arch/x86/include/asm/preempt.h:104 [inline]
__local_bh_enable_ip+0x136/0x1c0 kernel/softirq.c:413
spin_unlock_bh include/linux/spinlock.h:396 [inline]
sock_hash_delete_elem+0x27d/0x2e0 net/core/sock_map.c:957
bpf_prog_2c29ac5cdc6b1842+0x42/0x46
bpf_dispatcher_nop_func include/linux/bpf.h:1213 [inline]
__bpf_prog_run include/linux/filter.h:612 [inline]
bpf_prog_run include/linux/filter.h:619 [inline]
bpf_prog_run_pin_on_cpu include/linux/filter.h:636 [inline]
bpf_flow_dissect+0x12f/0x3e0 net/core/flow_dissector.c:1000
bpf_prog_test_run_flow_dissector+0x40d/0x600 net/bpf/test_run.c:1353
bpf_prog_test_run+0x321/0x390 kernel/bpf/syscall.c:4123
__sys_bpf+0x440/0x800 kernel/bpf/syscall.c:5485
__do_sys_bpf kernel/bpf/syscall.c:5571 [inline]
__se_sys_bpf kernel/bpf/syscall.c:5569 [inline]
__x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:5569
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&obj_hash[i].lock);
                               lock(&trie->lock);
                               lock(&obj_hash[i].lock);
  lock(&trie->lock);

*** DEADLOCK ***

4 locks held by syz.1.495/6892:
#0: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#0: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#0: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: bpf_test_timer_enter+0x1a/0x140 net/bpf/test_run.c:39
#1: ffff8880b8e2b958 (hrtimer_bases.lock){-.-.}-{2:2}, at: __run_hrtimer kernel/time/hrtimer.c:1759 [inline]
#1: ffff8880b8e2b958 (hrtimer_bases.lock){-.-.}-{2:2}, at: __hrtimer_run_queues+0x5e3/0xc40 kernel/time/hrtimer.c:1819
#2: ffffffff9714e068 (&obj_hash[i].lock){-.-.}-{2:2}, at: debug_object_activate+0x6c/0x4b0 lib/debugobjects.c:709
#3: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#3: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#3: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2321 [inline]
#3: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0xde/0x3c0 kernel/trace/bpf_trace.c:2361

stack backtrace:
CPU: 0 PID: 6892 Comm: syz.1.495 Not tainted 6.6.93-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
<IRQ>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
check_noncircular+0x2bd/0x3c0 kernel/locking/lockdep.c:2187
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xa8/0xf0 kernel/locking/spinlock.c:162
trie_delete_elem+0x96/0x6a0 kernel/bpf/lpm_trie.c:467
bpf_prog_41385012b43a9f2e+0x48/0x4c
bpf_dispatcher_nop_func include/linux/bpf.h:1213 [inline]
__bpf_prog_run include/linux/filter.h:612 [inline]
bpf_prog_run include/linux/filter.h:619 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2322 [inline]
bpf_trace_run2+0x1d1/0x3c0 kernel/trace/bpf_trace.c:2361
__bpf_trace_contention_end+0xdd/0x130 include/trace/events/lock.h:122
trace_contention_end+0xe6/0x110 include/trace/events/lock.h:122
__pv_queued_spin_lock_slowpath+0x7ec/0x9d0 kernel/locking/qspinlock.c:560
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:586 [inline]
queued_spin_lock_slowpath arch/x86/include/asm/qspinlock.h:51 [inline]
queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
do_raw_spin_lock+0x24e/0x2c0 kernel/locking/spinlock_debug.c:115
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
_raw_spin_lock_irqsave+0xb4/0xf0 kernel/locking/spinlock.c:162
debug_object_activate+0x6c/0x4b0 lib/debugobjects.c:709
debug_hrtimer_activate kernel/time/hrtimer.c:450 [inline]
debug_activate kernel/time/hrtimer.c:505 [inline]
enqueue_hrtimer+0x30/0x370 kernel/time/hrtimer.c:1113
__run_hrtimer kernel/time/hrtimer.c:1772 [inline]
__hrtimer_run_queues+0x637/0xc40 kernel/time/hrtimer.c:1819
hrtimer_interrupt+0x3c9/0x9c0 kernel/time/hrtimer.c:1881
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1077 [inline]
__sysvec_apic_timer_interrupt+0xfb/0x3b0 arch/x86/kernel/apic/apic.c:1094
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1088 [inline]
sysvec_apic_timer_interrupt+0x9f/0xc0 arch/x86/kernel/apic/apic.c:1088
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:687
RIP: 0010:should_resched arch/x86/include/asm/preempt.h:104 [inline]
RIP: 0010:__local_bh_enable_ip+0x136/0x1c0 kernel/softirq.c:413
Code: 8a e8 5e ac 14 09 65 66 8b 05 e6 38 b2 7e 66 85 c0 75 54 bf 01 00 00 00 e8 a7 e2 09 00 e8 82 78 3a 00 fb 65 8b 05 b2 38 b2 7e <85> c0 75 05 e8 91 1a af ff 48 c7 04 24 0e 36 e0 45 4b c7 04 37 00
RSP: 0018:ffffc90004c27a60 EFLAGS: 00000286
RAX: 0000000080000001 RBX: 0000000000000201 RCX: 13c41dd0561a5c00
RDX: dffffc0000000000 RSI: ffffffff8aaab2c0 RDI: ffffffff8afc6780
RBP: ffffc90004c27ae8 R08: ffffffff8e49a76f R09: 1ffffffff1c934ed
R10: dffffc0000000000 R11: fffffbfff1c934ee R12: ffffffff887a0ced
R13: 0000000000000006 R14: dffffc0000000000 R15: 1ffff92000984f4c
spin_unlock_bh include/linux/spinlock.h:396 [inline]
sock_hash_delete_elem+0x27d/0x2e0 net/core/sock_map.c:957
bpf_prog_2c29ac5cdc6b1842+0x42/0x46
bpf_dispatcher_nop_func include/linux/bpf.h:1213 [inline]
__bpf_prog_run include/linux/filter.h:612 [inline]
bpf_prog_run include/linux/filter.h:619 [inline]
bpf_prog_run_pin_on_cpu include/linux/filter.h:636 [inline]
bpf_flow_dissect+0x12f/0x3e0 net/core/flow_dissector.c:1000
bpf_prog_test_run_flow_dissector+0x40d/0x600 net/bpf/test_run.c:1353
bpf_prog_test_run+0x321/0x390 kernel/bpf/syscall.c:4123
__sys_bpf+0x440/0x800 kernel/bpf/syscall.c:5485
__do_sys_bpf kernel/bpf/syscall.c:5571 [inline]
__se_sys_bpf kernel/bpf/syscall.c:5569 [inline]
__x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:5569
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f231398e929
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f231474b038 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007f2313bb5fa0 RCX: 00007f231398e929
RDX: 0000000000000050 RSI: 0000200000000180 RDI: 000000000000000a
RBP: 00007f2313a10b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f2313bb5fa0 R15: 00007ffc909f0558
</TASK>
----------------
Code disassembly (best guess):
0: 8a e8 mov %al,%ch
2: 5e pop %rsi
3: ac lods %ds:(%rsi),%al
4: 14 09 adc $0x9,%al
6: 65 66 8b 05 e6 38 b2 mov %gs:0x7eb238e6(%rip),%ax # 0x7eb238f4
d: 7e
e: 66 85 c0 test %ax,%ax
11: 75 54 jne 0x67
13: bf 01 00 00 00 mov $0x1,%edi
18: e8 a7 e2 09 00 call 0x9e2c4
1d: e8 82 78 3a 00 call 0x3a78a4
22: fb sti
23: 65 8b 05 b2 38 b2 7e mov %gs:0x7eb238b2(%rip),%eax # 0x7eb238dc
* 2a: 85 c0 test %eax,%eax <-- trapping instruction
2c: 75 05 jne 0x33
2e: e8 91 1a af ff call 0xffaf1ac4
33: 48 c7 04 24 0e 36 e0 movq $0x45e0360e,(%rsp)
3a: 45
3b: 4b rex.WXB
3c: c7 .byte 0xc7
3d: 04 37 add $0x37,%al
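
In plain terms, the two stacks above form a lock-order inversion: chain #1 shows trie_update_elem() freeing a node while holding trie->lock, with debugobjects then taking obj_hash[i].lock underneath it; chain #0 shows a BPF program attached to the lock:contention_end tracepoint firing while debug_object_activate() already holds obj_hash[i].lock, and calling trie_delete_elem(), which takes trie->lock. A minimal sketch of the program shape that closes the cycle might look like the following; the map layout, key width, and section name are illustrative assumptions, not taken from the (then-unavailable) reproducer:

```c
// Hypothetical BPF program sketch (not from the report): delete from an
// LPM trie inside the lock:contention_end tracepoint. All names are
// illustrative.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct lpm_key {
	__u32 prefixlen;	/* struct bpf_lpm_trie_key header */
	__u32 data;		/* 4 bytes of prefix data */
};

struct {
	__uint(type, BPF_MAP_TYPE_LPM_TRIE);
	__uint(map_flags, BPF_F_NO_PREALLOC);	/* required for LPM tries */
	__type(key, struct lpm_key);
	__type(value, __u32);
	__uint(max_entries, 16);
} trie SEC(".maps");

SEC("tracepoint/lock/contention_end")
int on_contention_end(void *ctx)
{
	struct lpm_key key = { .prefixlen = 32, .data = 0 };

	/* If the contended lock is obj_hash[i].lock (held by
	 * debug_object_activate() in hrtimer IRQ context, as in the
	 * trace above), this call acquires trie->lock while
	 * obj_hash[i].lock is held -- the inverse of the
	 * trie_update_elem() -> kfree() -> debugobjects order. */
	bpf_map_delete_elem(&trie, &key);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

This is kernel-side code and only compiles against libbpf/kernel headers; it is meant as a reading aid for the dependency chain, not a reproducer.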


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Jun 17, 2025, 8:08:33 AM
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: c2603c511feb Linux 6.6.93
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=10e0c370580000
kernel config: https://syzkaller.appspot.com/x/.config?x=486bade17c8c30b9
dashboard link: https://syzkaller.appspot.com/bug?extid=3293cebb61e068fec0e9
compiler: Debian clang version 20.1.6 (++20250514063057+1e4d39e07757-1~exp1~20250514183223.118), Debian LLD 20.1.6
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=11d79e82580000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=14e0c370580000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/5819843271d0/disk-c2603c51.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/113f268f936d/vmlinux-c2603c51.xz
kernel image: https://storage.googleapis.com/syzbot-assets/493dd7e2863f/bzImage-c2603c51.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+3293ce...@syzkaller.appspotmail.com

============================================
WARNING: possible recursive locking detected
6.6.93-syzkaller #0 Not tainted
--------------------------------------------
syz-executor401/5803 is trying to acquire lock:
ffff88807da89a38 (&trie->lock){..-.}-{2:2}, at: trie_delete_elem+0x96/0x6a0 kernel/bpf/lpm_trie.c:467

but task is already holding lock:
ffff88807da89a38 (&trie->lock){..-.}-{2:2}, at: trie_update_elem+0xca/0xea0 kernel/bpf/lpm_trie.c:335

other info that might help us debug this:
Possible unsafe locking scenario:

CPU0
----
lock(&trie->lock);
lock(&trie->lock);

*** DEADLOCK ***

May be due to missing lock nesting notation

3 locks held by syz-executor401/5803:
#0: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#0: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#0: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: bpf_map_update_value+0x439/0x740 kernel/bpf/syscall.c:200
#1: ffff88807da89a38 (&trie->lock){..-.}-{2:2}, at: trie_update_elem+0xca/0xea0 kernel/bpf/lpm_trie.c:335
#2: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#2: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#2: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2321 [inline]
#2: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run4+0xfd/0x420 kernel/trace/bpf_trace.c:2363

stack backtrace:
CPU: 1 PID: 5803 Comm: syz-executor401 Not tainted 6.6.93-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
check_deadlock kernel/locking/lockdep.c:3062 [inline]
validate_chain kernel/locking/lockdep.c:3856 [inline]
__lock_acquire+0x5d40/0x7c80 kernel/locking/lockdep.c:5137
lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xa8/0xf0 kernel/locking/spinlock.c:162
trie_delete_elem+0x96/0x6a0 kernel/bpf/lpm_trie.c:467
bpf_prog_ae0c3e605f35524c+0x45/0x49
bpf_dispatcher_nop_func include/linux/bpf.h:1213 [inline]
__bpf_prog_run include/linux/filter.h:612 [inline]
bpf_prog_run include/linux/filter.h:619 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2322 [inline]
bpf_trace_run4+0x1f9/0x420 kernel/trace/bpf_trace.c:2363
trace_mm_page_alloc include/trace/events/kmem.h:177 [inline]
__alloc_pages+0x429/0x460 mm/page_alloc.c:4479
alloc_slab_page+0x5d/0x170 mm/slub.c:1876
allocate_slab mm/slub.c:2023 [inline]
new_slab+0x87/0x2e0 mm/slub.c:2076
___slab_alloc+0xc6d/0x12f0 mm/slub.c:3230
__slab_alloc mm/slub.c:3329 [inline]
__slab_alloc_node mm/slub.c:3382 [inline]
slab_alloc_node mm/slub.c:3475 [inline]
__kmem_cache_alloc_node+0x1a2/0x260 mm/slub.c:3524
__do_kmalloc_node mm/slab_common.c:1006 [inline]
__kmalloc_node+0xa4/0x230 mm/slab_common.c:1014
kmalloc_node include/linux/slab.h:620 [inline]
bpf_map_kmalloc_node+0xbc/0x1b0 kernel/bpf/syscall.c:422
lpm_trie_node_alloc kernel/bpf/lpm_trie.c:291 [inline]
trie_update_elem+0x166/0xea0 kernel/bpf/lpm_trie.c:338
bpf_map_update_value+0x67c/0x740 kernel/bpf/syscall.c:201
map_update_elem+0x57b/0x700 kernel/bpf/syscall.c:1561
__sys_bpf+0x652/0x800 kernel/bpf/syscall.c:5455
__do_sys_bpf kernel/bpf/syscall.c:5571 [inline]
__se_sys_bpf kernel/bpf/syscall.c:5569 [inline]
__x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:5569
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fc1d2e37e39
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 c1 17 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffdcd01a2a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 002020786c352d25 RCX: 00007fc1d2e37e39
RDX: 0000000000000020 RSI: 00002000000002c0 RDI: 0000000000000002
RBP: 0000000000000000 R08: 0000000000000006 R09: 0000000000000006
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000001 R15: 0000000000000001
</TASK>
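
The recursive variant in this trace needs no interrupt at all: trie_update_elem() takes trie->lock and then allocates via bpf_map_kmalloc_node(); the allocation falls into the slab slow path, __alloc_pages() fires the kmem:mm_page_alloc tracepoint, and a BPF program attached there deletes from the same trie, spinning on the trie->lock the task already holds. A sketch of that attachment follows; all names and the map layout are illustrative assumptions, not taken from the syzkaller reproducer:

```c
// Hypothetical sketch (not from the report) of the self-deadlock
// trigger: a program on kmem:mm_page_alloc deleting from the same LPM
// trie that is concurrently being updated. All names are illustrative.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct lpm_key {
	__u32 prefixlen;
	__u32 data;
};

struct {
	__uint(type, BPF_MAP_TYPE_LPM_TRIE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, struct lpm_key);
	__type(value, __u32);
	__uint(max_entries, 16);
} trie SEC(".maps");

SEC("tracepoint/kmem/mm_page_alloc")
int on_page_alloc(void *ctx)
{
	struct lpm_key key = { .prefixlen = 32, .data = 0 };

	/* When this page allocation was triggered by trie_update_elem()
	 * on the same trie (which allocates while holding trie->lock),
	 * this delete tries to re-acquire trie->lock on the same CPU:
	 * the AA deadlock lockdep reports above. */
	bpf_map_delete_elem(&trie, &key);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

Again kernel-side code requiring libbpf/kernel headers; a reading aid for the trace, not a substitute for the linked C reproducer.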


---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

syzbot

Aug 29, 2025, 6:23:06 PM
to syzkaller...@googlegroups.com
syzbot suspects this issue could be fixed by backporting the following commit:

commit cdc2e1d9d929d7f7009b3a5edca52388a2b0891f
git tree: upstream
Author: Nathan Chancellor <nat...@kernel.org>
Date: Mon Apr 14 22:00:59 2025 +0000

lib/Kconfig.ubsan: Remove 'default UBSAN' from UBSAN_INTEGER_WRAP

bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=147d5634580000
Please keep in mind that other backports might be required as well.

For information about bisection process see: https://goo.gl/tpsmEJ#bisection