KASAN: out-of-bounds Read in unwind_next_frame

syzbot

Apr 27, 2020, 12:09:13 AM4/27/20
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: ab502651 ANDROID: gki_defconfig: enable CONFIG_PM_DEVFREQ_..
git tree: android-5.4
console output: https://syzkaller.appspot.com/x/log.txt?x=12a76490100000
kernel config: https://syzkaller.appspot.com/x/.config?x=5ca90a2020718dd5
dashboard link: https://syzkaller.appspot.com/bug?extid=fa556ea76b0b21118d75
compiler: Android (6032204 based on r370808) clang version 10.0.1 (https://android.googlesource.com/toolchain/llvm-project 6e765c10313d15c02ab29977a82938f66742c3a9)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+fa556e...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: out-of-bounds in user_mode arch/x86/include/asm/ptrace.h:131 [inline]
BUG: KASAN: out-of-bounds in unwind_next_frame+0x194/0x2230 arch/x86/kernel/unwind_orc.c:395
Read of size 8 at addr ffff888152aa6780 by task syz-executor.3/20491

CPU: 0 PID: 20491 Comm: syz-executor.3 Not tainted 5.4.35-syzkaller-00685-gab5026515199 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x14a/0x1ce lib/dump_stack.c:118
print_address_description+0x93/0x620 mm/kasan/report.c:374
__kasan_report+0x16d/0x1e0 mm/kasan/report.c:506
kasan_report+0x34/0x60 mm/kasan/common.c:634
user_mode arch/x86/include/asm/ptrace.h:131 [inline]
unwind_next_frame+0x194/0x2230 arch/x86/kernel/unwind_orc.c:395
arch_stack_walk+0xf4/0x120 arch/x86/kernel/stacktrace.c:25
stack_trace_save_tsk+0x2e7/0x490 kernel/stacktrace.c:151
proc_pid_stack+0x12f/0x1f0 fs/proc/base.c:455
proc_single_show+0xd3/0x130 fs/proc/base.c:757
seq_read+0x4aa/0xd30 fs/seq_file.c:229
do_loop_readv_writev fs/read_write.c:717 [inline]
do_iter_read+0x43b/0x550 fs/read_write.c:938
vfs_readv fs/read_write.c:1000 [inline]
do_preadv+0x213/0x350 fs/read_write.c:1092
do_syscall_64+0xcb/0x150 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x45c829
Code: 0d b7 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 db b6 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007f77fb2c3c78 EFLAGS: 00000246 ORIG_RAX: 0000000000000127
RAX: ffffffffffffffda RBX: 00000000004fa1c0 RCX: 000000000045c829
RDX: 000000000000037d RSI: 0000000020000500 RDI: 000000000000000c
RBP: 000000000078c0e0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000ffffffff
R13: 000000000000085c R14: 00000000004cb1c7 R15: 00007f77fb2c46d4

The buggy address belongs to the page:
page:ffffea00054aa980 refcount:0 mapcount:0 mapping:0000000000000000 index:0x0
flags: 0x8000000000000000()
raw: 8000000000000000 dead000000000100 dead000000000122 0000000000000000
raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected

Memory state around the buggy address:
ffff888152aa6680: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ffff888152aa6700: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffff888152aa6780: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
^
ffff888152aa6800: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ffff888152aa6880: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
==================================================================
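
For context on the path shown in the call trace: the faulting read starts from a preadv() on a task's /proc/<pid>/stack file, which drives proc_pid_stack() into stack_trace_save_tsk() and the x86 ORC unwinder, where unwind_next_frame() trips KASAN while evaluating user_mode() on a saved pt_regs pointer. The sketch below only mirrors that user-space entry point; it is NOT a reproducer (syzbot has none for this crash), the target PID is a placeholder, reading /proc/<pid>/stack typically requires appropriate privileges, and actually hitting the bad read presumably depends on racing with the target task's kernel stack while it is running.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	char path[64], buf[4096];
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	/* Target PID is a placeholder; pass the PID of a busy task. */
	pid_t target = argc > 1 ? (pid_t)atoi(argv[1]) : getpid();

	snprintf(path, sizeof(path), "/proc/%d/stack", (int)target);
	int fd = open(path, O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* preadv() matches the do_preadv() frame in the report; the read
	 * lands in seq_read() -> proc_pid_stack() -> stack_trace_save_tsk(),
	 * which walks the target task's kernel stack with the ORC unwinder. */
	ssize_t n = preadv(fd, &iov, 1, 0);
	if (n > 0)
		fwrite(buf, 1, (size_t)n, stdout);

	close(fd);
	return 0;
}

In other words, the trigger is an ordinary read of a procfs stack file; the out-of-bounds access happens entirely inside the kernel's remote stack walk, not in the reading process itself.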


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Jan 13, 2022, 12:16:17 AM1/13/22
to syzkaller-a...@googlegroups.com
Auto-closing this bug as obsolete.
No crashes have been observed for a while, there is no reproducer, and there has been no recent activity.