BUG: unable to handle kernel paging request in bpf_prog_ADDR_F

syzbot

Jun 1, 2022, 9:53:23 PM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 7e062cda7d90 Merge tag 'net-next-5.19' of git://git.kernel..
git tree: bpf-next
console output: https://syzkaller.appspot.com/x/log.txt?x=16095913f00000
kernel config: https://syzkaller.appspot.com/x/.config?x=e2c9c27babb4d679
dashboard link: https://syzkaller.appspot.com/bug?extid=b9b16a0f741fb57ab039
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
CC: [and...@kernel.org a...@kernel.org b...@vger.kernel.org dan...@iogearbox.net da...@davemloft.net ha...@kernel.org john.fa...@gmail.com ka...@fb.com kps...@kernel.org ku...@kernel.org linux-...@vger.kernel.org mi...@redhat.com net...@vger.kernel.org ros...@goodmis.org songliu...@fb.com y...@fb.com net...@vger.kernel.org]

Unfortunately, I don't have any reproducer for this issue yet.

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+b9b16a...@syzkaller.appspotmail.com
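
For illustration only (hypothetical subject line and placeholders, not an actual fix), the tag is added as a trailer at the end of the fix commit's message, alongside the other trailers:

  bpf: <one-line summary of the fix>

  <description of the change>

  Reported-by: syzbot+b9b16a...@syzkaller.appspotmail.com
  Signed-off-by: <author>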

BUG: unable to handle page fault for address: ffffffffa0000ecc
#PF: supervisor instruction fetch in kernel mode
#PF: error_code(0x0010) - not-present page
PGD ba8f067 P4D ba8f067 PUD ba90063 PMD 16245067 PTE 0
Oops: 0010 [#1] PREEMPT SMP KASAN
CPU: 0 PID: 12618 Comm: syz-executor.1 Not tainted 5.18.0-syzkaller-04943-g7e062cda7d90 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:bpf_prog_9d4bccaf8ccaf0dc_F+0x0/0xd
Code: Unable to access opcode bytes at RIP 0xffffffffa0000ea2.
RSP: 0018:ffffc90005aef250 EFLAGS: 00010046
RAX: dffffc0000000000 RBX: ffffc90005941000 RCX: 000000000000000c
RDX: 1ffff92000b28206 RSI: ffffc90005941048 RDI: 00000000ffff8880
RBP: ffffc90005aef258 R08: ffffffff8f296018 R09: ffffffff8f29600f
R10: ffffffff8f296017 R11: 0000000000000001 R12: 000001057eee48da
R13: ffff888010e4bb00 R14: ffff888023b6d880 R15: 0000000000000001
FS: 00007fc87b1dd700(0000) GS:ffff8880b9c00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffffffa0000ea2 CR3: 000000006f1fc000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000600
Call Trace:
<TASK>
bpf_dispatcher_nop_func include/linux/bpf.h:869 [inline]
__bpf_prog_run include/linux/filter.h:621 [inline]
bpf_prog_run include/linux/filter.h:635 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2046 [inline]
bpf_trace_run4+0x1d9/0x360 kernel/trace/bpf_trace.c:2085
__bpf_trace_sched_switch+0x115/0x160 include/trace/events/sched.h:222
__traceiter_sched_switch+0x68/0xb0 include/trace/events/sched.h:222
trace_sched_switch include/trace/events/sched.h:222 [inline]
__schedule+0x145b/0x4b30 kernel/sched/core.c:6388
preempt_schedule_common+0x45/0xc0 kernel/sched/core.c:6556
preempt_schedule_thunk+0x16/0x18 arch/x86/entry/thunk_64.S:35
__raw_spin_unlock include/linux/spinlock_api_smp.h:143 [inline]
_raw_spin_unlock+0x36/0x40 kernel/locking/spinlock.c:186
spin_unlock include/linux/spinlock.h:389 [inline]
__cond_resched_lock+0x93/0xe0 kernel/sched/core.c:8233
__purge_vmap_area_lazy+0x976/0x1c50 mm/vmalloc.c:1728
_vm_unmap_aliases.part.0+0x3f0/0x500 mm/vmalloc.c:2127
_vm_unmap_aliases mm/vmalloc.c:2101 [inline]
vm_remove_mappings mm/vmalloc.c:2626 [inline]
__vunmap+0x6d5/0xd30 mm/vmalloc.c:2653
__vfree+0x3c/0xd0 mm/vmalloc.c:2715
vfree+0x5a/0x90 mm/vmalloc.c:2746
bpf_jit_binary_free kernel/bpf/core.c:1080 [inline]
bpf_jit_free+0x21a/0x2b0 kernel/bpf/core.c:1203
jit_subprogs kernel/bpf/verifier.c:13683 [inline]
fixup_call_args kernel/bpf/verifier.c:13712 [inline]
bpf_check+0x71ab/0xbbc0 kernel/bpf/verifier.c:15063
bpf_prog_load+0xfb2/0x2250 kernel/bpf/syscall.c:2575
__sys_bpf+0x11a1/0x5700 kernel/bpf/syscall.c:4917
__do_sys_bpf kernel/bpf/syscall.c:5021 [inline]
__se_sys_bpf kernel/bpf/syscall.c:5019 [inline]
__x64_sys_bpf+0x75/0xb0 kernel/bpf/syscall.c:5019
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x46/0xb0
RIP: 0033:0x7fc87c289109
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fc87b1dd168 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007fc87c39c100 RCX: 00007fc87c289109
RDX: 0000000000000070 RSI: 0000000020000440 RDI: 0000000000000005
RBP: 00007fc87c2e308d R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fffd1291fff R14: 00007fc87b1dd300 R15: 0000000000022000
</TASK>
Modules linked in:
CR2: ffffffffa0000ecc
---[ end trace 0000000000000000 ]---
RIP: 0010:bpf_prog_9d4bccaf8ccaf0dc_F+0x0/0xd
Code: Unable to access opcode bytes at RIP 0xffffffffa0000ea2.
RSP: 0018:ffffc90005aef250 EFLAGS: 00010046
RAX: dffffc0000000000 RBX: ffffc90005941000 RCX: 000000000000000c
RDX: 1ffff92000b28206 RSI: ffffc90005941048 RDI: 00000000ffff8880
RBP: ffffc90005aef258 R08: ffffffff8f296018 R09: ffffffff8f29600f
R10: ffffffff8f296017 R11: 0000000000000001 R12: 000001057eee48da
R13: ffff888010e4bb00 R14: ffff888023b6d880 R15: 0000000000000001
FS: 00007fc87b1dd700(0000) GS:ffff8880b9c00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffffffa0000ea2 CR3: 000000006f1fc000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000600


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Jul 3, 2022, 3:23:15 AM
to syzkaller-upst...@googlegroups.com
Sending this report upstream.