Hello,
syzbot found the following issue on:
HEAD commit: 0966d385830d riscv: Fix auipc+jalr relocation range checks
git tree: git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git fixes
console output: https://syzkaller.appspot.com/x/log.txt?x=1232d49f080000
kernel config: https://syzkaller.appspot.com/x/.config?x=6295d67591064921
dashboard link: https://syzkaller.appspot.com/bug?extid=78dd01e715903e7c0b51
compiler: riscv64-linux-gnu-gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: riscv64
CC: [andre...@igalia.com da...@stgolabs.net dvh...@infradead.org linux-...@vger.kernel.org mi...@redhat.com pet...@infradead.org tg...@linutronix.de]
Unfortunately, I don't have any reproducer for this issue yet.
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+78dd01...@syzkaller.appspotmail.com
==================================================================
BUG: KASAN: null-ptr-deref in check_wait_context kernel/locking/lockdep.c:4700 [inline]
BUG: KASAN: null-ptr-deref in __lock_acquire+0x336/0x333e kernel/locking/lockdep.c:4977
Read of size 1 at addr 00000000000000b8 by task syz-fuzzer/2025
CPU: 1 PID: 2025 Comm: syz-fuzzer Not tainted 5.17.0-rc1-syzkaller-00002-g0966d385830d #0
Hardware name: riscv-virtio,qemu (DT)
Call Trace:
[<ffffffff8000a228>] dump_backtrace+0x2e/0x3c arch/riscv/kernel/stacktrace.c:113
[<ffffffff831668cc>] show_stack+0x34/0x40 arch/riscv/kernel/stacktrace.c:119
[<ffffffff831756ba>] __dump_stack lib/dump_stack.c:88 [inline]
[<ffffffff831756ba>] dump_stack_lvl+0xe4/0x150 lib/dump_stack.c:106
[<ffffffff80474da6>] __kasan_report mm/kasan/report.c:446 [inline]
[<ffffffff80474da6>] kasan_report+0x1de/0x1e0 mm/kasan/report.c:459
[<ffffffff804757da>] check_region_inline mm/kasan/generic.c:183 [inline]
[<ffffffff804757da>] __asan_load1+0x54/0x6c mm/kasan/generic.c:253
[<ffffffff801125b8>] check_wait_context kernel/locking/lockdep.c:4700 [inline]
[<ffffffff801125b8>] __lock_acquire+0x336/0x333e kernel/locking/lockdep.c:4977
[<ffffffff80116582>] lock_acquire.part.0+0x1d0/0x424 kernel/locking/lockdep.c:5639
[<ffffffff8011682a>] lock_acquire+0x54/0x6a kernel/locking/lockdep.c:5612
[<ffffffff800cf2a8>] do_write_seqcount_begin_nested include/linux/seqlock.h:520 [inline]
[<ffffffff800cf2a8>] do_write_seqcount_begin include/linux/seqlock.h:545 [inline]
[<ffffffff800cf2a8>] vtime_task_switch_generic+0x50/0x1f4 kernel/sched/cputime.c:769
[<ffffffff800bdc7a>] vtime_task_switch include/linux/vtime.h:95 [inline]
[<ffffffff800bdc7a>] finish_task_switch.isra.0+0x292/0x420 kernel/sched/core.c:4860
[<ffffffff831a5c8a>] context_switch kernel/sched/core.c:4989 [inline]
[<ffffffff831a5c8a>] __schedule+0x58e/0x118e kernel/sched/core.c:6296
[<ffffffff831a68fe>] schedule+0x74/0x14c kernel/sched/core.c:6369
[<ffffffff8019970a>] freezable_schedule include/linux/freezer.h:172 [inline]
[<ffffffff8019970a>] futex_wait_queue+0xc4/0x1d4 kernel/futex/waitwake.c:355
[<ffffffff8019a338>] futex_wait+0x174/0x2f8 kernel/futex/waitwake.c:656
[<ffffffff80194d3e>] do_futex+0x19c/0x284 kernel/futex/syscalls.c:106
[<ffffffff80194f1e>] __do_sys_futex kernel/futex/syscalls.c:183 [inline]
[<ffffffff80194f1e>] sys_futex+0xf8/0x310 kernel/futex/syscalls.c:164
[<ffffffff80005716>] ret_from_syscall+0x0/0x2
==================================================================
Unable to handle kernel NULL pointer dereference at virtual address 00000000000000b8
Oops [#1]
Modules linked in:
CPU: 1 PID: 2025 Comm: syz-fuzzer Tainted: G B 5.17.0-rc1-syzkaller-00002-g0966d385830d #0
Hardware name: riscv-virtio,qemu (DT)
epc : check_wait_context kernel/locking/lockdep.c:4700 [inline]
epc : __lock_acquire+0x33a/0x333e kernel/locking/lockdep.c:4977
ra : check_wait_context kernel/locking/lockdep.c:4700 [inline]
ra : __lock_acquire+0x336/0x333e kernel/locking/lockdep.c:4977
epc : ffffffff801125bc ra : ffffffff801125b8 sp : ffffaf800eb17570
gp : ffffffff85863ac0 tp : ffffaf800938c8c0 t0 : ffffffff86bcb657
t1 : fffffffef0b0dfa4 t2 : 0000000000000000 s0 : ffffaf800eb176d0
s1 : 0000000000000000 a0 : ffffaf800938d300 a1 : 0000000000000003
a2 : 1ffff5f001271919 a3 : ffffffff831afd3a a4 : 0000000000000000
a5 : ffffaf800938d8c0 a6 : 0000000000f00000 a7 : ffffffff8586fd23
s2 : 0000000000000081 s3 : ffffffff858c4cb0 s4 : 0000000000000000
s5 : ffffaf800938d2d8 s6 : ffffffff858c4ca0 s7 : 00000000000c0000
s8 : ffffaf800938d2e0 s9 : ffffffff800bdc7a s10: 00000000000c0081
s11: ffffaf800938c8c0 t3 : 000000000000003d t4 : fffffffef0b0dfa4
t5 : fffffffef0b0dfa5 t6 : ffffaf800eb16fd8
status: 0000000000000100 badaddr: 00000000000000b8 cause: 000000000000000d
[<ffffffff80116582>] lock_acquire.part.0+0x1d0/0x424 kernel/locking/lockdep.c:5639
[<ffffffff8011682a>] lock_acquire+0x54/0x6a kernel/locking/lockdep.c:5612
[<ffffffff800cf2a8>] do_write_seqcount_begin_nested include/linux/seqlock.h:520 [inline]
[<ffffffff800cf2a8>] do_write_seqcount_begin include/linux/seqlock.h:545 [inline]
[<ffffffff800cf2a8>] vtime_task_switch_generic+0x50/0x1f4 kernel/sched/cputime.c:769
[<ffffffff800bdc7a>] vtime_task_switch include/linux/vtime.h:95 [inline]
[<ffffffff800bdc7a>] finish_task_switch.isra.0+0x292/0x420 kernel/sched/core.c:4860
[<ffffffff831a5c8a>] context_switch kernel/sched/core.c:4989 [inline]
[<ffffffff831a5c8a>] __schedule+0x58e/0x118e kernel/sched/core.c:6296
[<ffffffff831a68fe>] schedule+0x74/0x14c kernel/sched/core.c:6369
[<ffffffff8019970a>] freezable_schedule include/linux/freezer.h:172 [inline]
[<ffffffff8019970a>] futex_wait_queue+0xc4/0x1d4 kernel/futex/waitwake.c:355
[<ffffffff8019a338>] futex_wait+0x174/0x2f8 kernel/futex/waitwake.c:656
[<ffffffff80194d3e>] do_futex+0x19c/0x284 kernel/futex/syscalls.c:106
[<ffffffff80194f1e>] __do_sys_futex kernel/futex/syscalls.c:183 [inline]
[<ffffffff80194f1e>] sys_futex+0xf8/0x310 kernel/futex/syscalls.c:164
[<ffffffff80005716>] ret_from_syscall+0x0/0x2
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.