KASAN: stack-out-of-bounds Read in iov_iter_revert


syzbot

Aug 19, 2020, 1:37:16 PM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 010ff9a0 FROMLIST: tty: serial: qcom_geni_serial: Drop __i..
git tree: android-5.4
console output: https://syzkaller.appspot.com/x/log.txt?x=144dfa72900000
kernel config: https://syzkaller.appspot.com/x/.config?x=71d36a7b70f701e3
dashboard link: https://syzkaller.appspot.com/bug?extid=c77ec3164d96c0a029c7
compiler: Android (6032204 based on r370808) clang version 10.0.1 (https://android.googlesource.com/toolchain/llvm-project 6e765c10313d15c02ab29977a82938f66742c3a9)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=11f0d97a900000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1380e37a900000

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c77ec3...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: stack-out-of-bounds in iov_iter_revert+0x249/0xa60 lib/iov_iter.c:1100
Read of size 8 at addr ffff8881c44bfcb8 by task syz-executor215/344

CPU: 0 PID: 344 Comm: syz-executor215 Not tainted 5.4.59-syzkaller-00504-g010ff9a0f65f #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x14a/0x1ce lib/dump_stack.c:118
print_address_description+0x93/0x620 mm/kasan/report.c:374
__kasan_report+0x16d/0x1e0 mm/kasan/report.c:506
kasan_report+0x36/0x60 mm/kasan/common.c:634
iov_iter_revert+0x249/0xa60 lib/iov_iter.c:1100
generic_file_read_iter+0x1dd5/0x20b0 mm/filemap.c:2308
fuse_cache_read_iter fs/fuse/file.c:1018 [inline]
fuse_file_read_iter+0x3ec/0x4e0 fs/fuse/file.c:1576
call_read_iter include/linux/fs.h:1965 [inline]
new_sync_read fs/read_write.c:414 [inline]
__vfs_read+0x59a/0x710 fs/read_write.c:427
vfs_read+0x166/0x380 fs/read_write.c:461
ksys_read+0x18c/0x2c0 fs/read_write.c:590
do_syscall_64+0xcb/0x150 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x446889
Code: e8 5c bb 02 00 48 83 c4 18 c3 0f 1f 80 00 00 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 8b 0e fc ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007fd76a67ad98 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
RAX: ffffffffffffffda RBX: 00000000006e0c48 RCX: 0000000000446889
RDX: 00000000200041e0 RSI: 00000000200021c0 RDI: 0000000000000005
RBP: 00000000006e0c40 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000006e0c4c
R13: 0000000020006380 R14: 00000000004b1100 R15: 00000000004af0f8

The buggy address belongs to the page:
page:ffffea0007112fc0 refcount:0 mapcount:0 mapping:0000000000000000 index:0x0
flags: 0x8000000000000000()
raw: 8000000000000000 0000000000000000 ffffea0007112fc8 0000000000000000
raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected

addr ffff8881c44bfcb8 is located in stack of task syz-executor215/344 at offset 24 in frame:
__vfs_read+0x0/0x710 fs/read_write.c:39

this frame has 3 objects:
[32, 48) 'iov.i'
[64, 112) 'kiocb.i'
[144, 184) 'iter.i'
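(Editor's note, not part of the original report.) The buggy address is at offset 24 in the frame, which lies below the first object [32, 48) 'iov.i' — i.e. in the KASAN redzone that precedes it. A minimal sketch of that arithmetic, using the object table above:

```python
# Frame objects reported by KASAN: (start, end, name), byte offsets
# relative to the start of the __vfs_read stack frame.
objects = [(32, 48, "iov.i"), (64, 112, "kiocb.i"), (144, 184, "iter.i")]

def locate(offset):
    """Return the name of the object containing `offset`,
    or None if the offset falls in a redzone between/around objects."""
    for start, end, name in objects:
        if start <= offset < end:
            return name
    return None

# Offset 24 (the reported bad read) sits before 'iov.i' at [32, 48),
# so it lands in the redzone in front of the first object.
print(locate(24))   # redzone -> None
print(locate(32))   # first byte of 'iov.i'
```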

Memory state around the buggy address:
ffff8881c44bfb80: 00 00 00 00 00 00 00 00 00 f3 f3 f3 f3 f3 f3 f3
ffff8881c44bfc00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffff8881c44bfc80: 00 00 00 00 f1 f1 f1 f1 00 00 f2 f2 00 00 00 00
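(Editor's note, not part of the original report.) Each KASAN shadow byte covers 8 bytes of real memory; for stack memory, 00 means all 8 bytes are addressable, while f1/f2/f3 mark the left, middle, and right stack redzones (per Documentation/dev-tools/kasan.rst). A small sketch that maps the faulting address onto the marked shadow row above:

```python
# Shadow-byte meanings for KASAN stack instrumentation
# (values from Documentation/dev-tools/kasan.rst).
MEANING = {
    0x00: "all 8 bytes addressable",
    0xF1: "stack left redzone",
    0xF2: "stack mid redzone",
    0xF3: "stack right redzone",
}

# The marked row from the report, covering ffff8881c44bfc80..cff:
row_base = 0xFFFF8881C44BFC80
shadow = [0x00, 0x00, 0x00, 0x00, 0xF1, 0xF1, 0xF1, 0xF1,
          0x00, 0x00, 0xF2, 0xF2, 0x00, 0x00, 0x00, 0x00]

bad_addr = 0xFFFF8881C44BFCB8
idx = (bad_addr - row_base) // 8   # one shadow byte per 8 memory bytes
print(hex(shadow[idx]), "->", MEANING[shadow[idx]])
# The bad read maps to shadow byte 7 (0xf1): a stack left redzone,
# consistent with the redzone hit in front of 'iov.i'.
```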



---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
syzbot can test patches for this issue, for details see:
https://goo.gl/tpsmEJ#testing-patches

syzbot

Apr 16, 2023, 5:56:37 PM
to syzkaller-a...@googlegroups.com
Auto-closing this bug as obsolete.
No recent activity; existing reproducers no longer trigger the issue.