KASAN: use-after-free Read in __schedule


syzbot

Mar 1, 2021, 8:25:19 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 2d19be46 Linux 4.19.177
git tree: linux-4.19.y
console output: https://syzkaller.appspot.com/x/log.txt?x=17b74e46d00000
kernel config: https://syzkaller.appspot.com/x/.config?x=6a1a8f0ba6627eb7
dashboard link: https://syzkaller.appspot.com/bug?extid=85ca27c917d81cd1287d

Unfortunately, I don't have any reproducer for this issue yet.

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+85ca27...@syzkaller.appspotmail.com

kasan: CONFIG_KASAN_INLINE enabled
kasan: GPF could be caused by NULL-ptr deref or user memory access
==================================================================
BUG: KASAN: use-after-free in schedule_debug kernel/sched/core.c:3329 [inline]
BUG: KASAN: use-after-free in __schedule+0x1ae3/0x2040 kernel/sched/core.c:3439
Read of size 8 at addr ffff8880465d0000 by task syz-executor.5/16762

CPU: 1 PID: 16762 Comm: syz-executor.5 Not tainted 4.19.177-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x1fc/0x2ef lib/dump_stack.c:118
print_address_description.cold+0x54/0x219 mm/kasan/report.c:256
kasan_report_error.cold+0x8a/0x1b9 mm/kasan/report.c:354
kasan_report mm/kasan/report.c:412 [inline]
__asan_report_load8_noabort+0x88/0x90 mm/kasan/report.c:433
schedule_debug kernel/sched/core.c:3329 [inline]
__schedule+0x1ae3/0x2040 kernel/sched/core.c:3439
preempt_schedule_notrace+0x92/0x110 kernel/sched/core.c:3715
___preempt_schedule_notrace+0x16/0x2e
rcu_is_watching+0x96/0xc0 kernel/rcu/tree.c:1026
rcu_read_unlock include/linux/rcupdate.h:677 [inline]
ext4_get_group_desc+0x2de/0x4e0 fs/ext4/balloc.c:284
recently_deleted fs/ext4/ialloc.c:681 [inline]
find_inode_bit+0x1a0/0x520 fs/ext4/ialloc.c:725
__ext4_new_inode+0x160c/0x5a20 fs/ext4/ialloc.c:917
ext4_symlink+0x3f5/0xc00 fs/ext4/namei.c:3177
vfs_symlink+0x453/0x6c0 fs/namei.c:4129
do_symlinkat+0x258/0x2c0 fs/namei.c:4156
do_syscall_64+0xf9/0x620 arch/x86/entry/common.c:293
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x465807
Code: 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 b8 58 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffd986f7228 EFLAGS: 00000206 ORIG_RAX: 0000000000000058
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 0000000000465807
RDX: 00007ffd986f7317 RSI: 00000000004bcdbd RDI: 00007ffd986f7300
RBP: 0000000000000000 R08: 0000000000000000 R09: 00007ffd986f70c0
R10: 00007ffd986f6f77 R11: 0000000000000206 R12: 0000000000000001
R13: 0000000000000000 R14: 0000000000000001 R15: 00007ffd986f7300

The buggy address belongs to the page:
page:ffffea0001197400 count:0 mapcount:0 mapping:0000000000000000 index:0x0
flags: 0xfff00000000000()
raw: 00fff00000000000 ffffea000118f248 ffff8880ba12e9f8 0000000000000000
raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected

Memory state around the buggy address:
ffff8880465cff00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
ffff8880465cff80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff8880465d0000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
^
ffff8880465d0080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
ffff8880465d0100: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
==================================================================


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Jun 29, 2021, 9:25:11 AM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.