KASAN: user-memory-access Read in userfaultfd_event_wait_completion

syzbot

Aug 3, 2022, 3:08:22 AM8/3/22
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 0966d385830d riscv: Fix auipc+jalr relocation range checks
git tree: git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git fixes
console output: https://syzkaller.appspot.com/x/log.txt?x=11410e16080000
kernel config: https://syzkaller.appspot.com/x/.config?x=6295d67591064921
dashboard link: https://syzkaller.appspot.com/bug?extid=7cac101e96002eb197ec
compiler: riscv64-linux-gnu-gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: riscv64
CC: [linux-...@vger.kernel.org linux-...@vger.kernel.org vi...@zeniv.linux.org.uk]

Unfortunately, I don't have any reproducer for this issue yet.

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+7cac10...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: user-memory-access in __lock_acquire+0x8ee/0x333e kernel/locking/lockdep.c:4897
Read of size 8 at addr 000000007087f30f by task dhcpcd-run-hook/12972

CPU: 1 PID: 12972 Comm: dhcpcd-run-hook Not tainted 5.17.0-rc1-syzkaller-00002-g0966d385830d #0
Hardware name: riscv-virtio,qemu (DT)
Call Trace:
[<ffffffff8000a228>] dump_backtrace+0x2e/0x3c arch/riscv/kernel/stacktrace.c:113
[<ffffffff831668cc>] show_stack+0x34/0x40 arch/riscv/kernel/stacktrace.c:119
[<ffffffff831756ba>] __dump_stack lib/dump_stack.c:88 [inline]
[<ffffffff831756ba>] dump_stack_lvl+0xe4/0x150 lib/dump_stack.c:106
[<ffffffff80474da6>] __kasan_report mm/kasan/report.c:446 [inline]
[<ffffffff80474da6>] kasan_report+0x1de/0x1e0 mm/kasan/report.c:459
[<ffffffff80475b20>] check_region_inline mm/kasan/generic.c:183 [inline]
[<ffffffff80475b20>] __asan_load8+0x6e/0x96 mm/kasan/generic.c:256
[<ffffffff80112b70>] __lock_acquire+0x8ee/0x333e kernel/locking/lockdep.c:4897
[<ffffffff80116582>] lock_acquire.part.0+0x1d0/0x424 kernel/locking/lockdep.c:5639
[<ffffffff8011682a>] lock_acquire+0x54/0x6a kernel/locking/lockdep.c:5612
[<ffffffff831af9ce>] __raw_spin_lock_irq include/linux/spinlock_api_smp.h:119 [inline]
[<ffffffff831af9ce>] _raw_spin_lock_irq+0x3e/0x5e kernel/locking/spinlock.c:170
[<ffffffff80591f18>] spin_lock_irq include/linux/spinlock.h:374 [inline]
[<ffffffff80591f18>] userfaultfd_event_wait_completion+0xac/0x642 fs/userfaultfd.c:567
[<ffffffff80597f86>] userfaultfd_unmap_complete+0x11c/0x22a fs/userfaultfd.c:837
[<ffffffff803955a6>] vm_mmap_pgoff+0x1d0/0x24e mm/util.c:522
==================================================================
Unable to handle kernel paging request at virtual address 000000007087f30f
Oops [#1]
Modules linked in:

CPU: 1 PID: 12972 Comm: dhcpcd-run-hook Tainted: G B 5.17.0-rc1-syzkaller-00002-g0966d385830d #0
Hardware name: riscv-virtio,qemu (DT)
epc : __lock_acquire+0x8ee/0x333e kernel/locking/lockdep.c:4897
ra : __lock_acquire+0x8ee/0x333e kernel/locking/lockdep.c:4897
epc : ffffffff80112b70 ra : ffffffff80112b70 sp : ffffaf8010683800
gp : ffffffff85863ac0 tp : ffffaf800e4ec8c0 t0 : ffffffff86c085c8
t1 : fffff5ef0b53c90c t2 : 0000000000000000 s0 : ffffaf8010683960
s1 : 0000000000000000 a0 : 0000000000000001 a1 : 0000000000000003
a2 : 1ffff5f001c9d919 a3 : ffffffff831afd3a a4 : 0000000000000000
a5 : ffffaf800e4ed8c0 a6 : 0000000000f00000 a7 : ffffaf805a9e4863
s2 : ffffffff86c1a620 s3 : 0000000000000000 s4 : 0000000000000000
s5 : 000000007087f30f s6 : 000000007087f30f s7 : 0000000000000001
s8 : ffffffff80591f18 s9 : ffffffff80591f18 s10: 0000000000000000
s11: ffffaf800e4ec8c0 t3 : 00000000746e6961 t4 : fffff5ef0b53c90c
t5 : fffff5ef0b53c90d t6 : ffffffff86c085f7
status: 0000000000000100 badaddr: 000000007087f30f cause: 000000000000000d
[<ffffffff80116582>] lock_acquire.part.0+0x1d0/0x424 kernel/locking/lockdep.c:5639
[<ffffffff8011682a>] lock_acquire+0x54/0x6a kernel/locking/lockdep.c:5612
[<ffffffff831af9ce>] __raw_spin_lock_irq include/linux/spinlock_api_smp.h:119 [inline]
[<ffffffff831af9ce>] _raw_spin_lock_irq+0x3e/0x5e kernel/locking/spinlock.c:170
[<ffffffff80591f18>] spin_lock_irq include/linux/spinlock.h:374 [inline]
[<ffffffff80591f18>] userfaultfd_event_wait_completion+0xac/0x642 fs/userfaultfd.c:567
[<ffffffff80597f86>] userfaultfd_unmap_complete+0x11c/0x22a fs/userfaultfd.c:837
[<ffffffff803955a6>] vm_mmap_pgoff+0x1d0/0x24e mm/util.c:522


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Oct 28, 2022, 2:57:32 AM10/28/22
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.