KASAN: use-after-free Read in cpuacct_charge

syzbot
Apr 13, 2019, 8:00:36 PM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 54068d61 Merge 4.9.122 into android-4.9
git tree: android-4.9
console output: https://syzkaller.appspot.com/x/log.txt?x=1664bb9a400000
kernel config: https://syzkaller.appspot.com/x/.config?x=c7451be69185755b
dashboard link: https://syzkaller.appspot.com/bug?extid=b40b97213e321fef022e
compiler: gcc (GCC) 8.0.1 20180413 (experimental)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=14373186400000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1326a516400000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+b40b97...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: use-after-free in task_css include/linux/cgroup.h:455 [inline]
BUG: KASAN: use-after-free in task_ca kernel/sched/cpuacct.c:53 [inline]
BUG: KASAN: use-after-free in cpuacct_charge+0x328/0x360 kernel/sched/cpuacct.c:359
Read of size 8 at addr ffff8801c45042e0 by task syz-executor156/3816

CPU: 0 PID: 3816 Comm: syz-executor156 Not tainted 4.9.122-g54068d6 #78
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
ffff8801d91b7a08 ffffffff81eb8829 ffffea0007114100 ffff8801c45042e0
0000000000000000 ffff8801c45042e0 0000000000000000 ffff8801d91b7a40
ffffffff8156b6be ffff8801c45042e0 0000000000000008 0000000000000000
Call Trace:
[<ffffffff81eb8829>] __dump_stack lib/dump_stack.c:15 [inline]
[<ffffffff81eb8829>] dump_stack+0xc1/0x128 lib/dump_stack.c:51
[<ffffffff8156b6be>] print_address_description+0x6c/0x234 mm/kasan/report.c:256
[<ffffffff8156bac8>] kasan_report_error mm/kasan/report.c:355 [inline]
[<ffffffff8156bac8>] kasan_report.cold.6+0x242/0x2fe mm/kasan/report.c:412
[<ffffffff8153f304>] __asan_report_load8_noabort+0x14/0x20 mm/kasan/report.c:433
[<ffffffff8122d848>] task_css include/linux/cgroup.h:455 [inline]
[<ffffffff8122d848>] task_ca kernel/sched/cpuacct.c:53 [inline]
[<ffffffff8122d848>] cpuacct_charge+0x328/0x360 kernel/sched/cpuacct.c:359
[<ffffffff811e82ab>] update_curr+0x28b/0x680 kernel/sched/fair.c:874
[<ffffffff811f57f3>] dequeue_entity kernel/sched/fair.c:3721 [inline]
[<ffffffff811f57f3>] dequeue_task_fair+0xe3/0x1000 kernel/sched/fair.c:4903
[<ffffffff811ccf0b>] dequeue_task kernel/sched/core.c:782 [inline]
[<ffffffff811ccf0b>] deactivate_task+0xfb/0x2e0 kernel/sched/core.c:798
[<ffffffff839f0101>] __schedule+0x981/0x1bd0 kernel/sched/core.c:3458
[<ffffffff839f13cf>] schedule+0x7f/0x1b0 kernel/sched/core.c:3553
[<ffffffff839fe325>] freezable_schedule include/linux/freezer.h:171 [inline]
[<ffffffff839fe325>] do_nanosleep+0x1f5/0x4d0 kernel/time/hrtimer.c:1497
[<ffffffff812a85a0>] hrtimer_nanosleep+0x210/0x540 kernel/time/hrtimer.c:1566
[<ffffffff812a899c>] SYSC_nanosleep kernel/time/hrtimer.c:1604 [inline]
[<ffffffff812a899c>] SyS_nanosleep+0xcc/0x120 kernel/time/hrtimer.c:1593
[<ffffffff81006316>] do_syscall_64+0x1a6/0x490 arch/x86/entry/common.c:282
[<ffffffff83a00cd3>] entry_SYSCALL_64_after_swapgs+0x5d/0xdb
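
For context, the faulting load is the css_set dereference made while charging CPU time to the task's cpuacct group: update_curr() calls cpuacct_charge(), which resolves the task's cpuacct via task_ca()/task_css(), and that walk reads through tsk->cgroups, the css_set KASAN says has already been freed. A simplified sketch of that 4.9-era path, paraphrased from kernel/sched/cpuacct.c and include/linux/cgroup.h rather than quoted verbatim (field names and the per-cpu accounting details may differ in the android-4.9 tree):

/* Paraphrased sketch, not verbatim android-4.9 source. */
static inline struct cpuacct *task_ca(struct task_struct *tsk)
{
	/*
	 * task_css() loads tsk->cgroups (an RCU-protected css_set) and then
	 * cset->subsys[cpuacct_cgrp_id]; per the report, that css_set has
	 * already been freed, so this is the 8-byte use-after-free read.
	 */
	return css_ca(task_css(tsk, cpuacct_cgrp_id));
}

void cpuacct_charge(struct task_struct *tsk, u64 cputime)
{
	struct cpuacct *ca;

	rcu_read_lock();
	/* Walk up the cpuacct hierarchy, charging cputime at each level. */
	for (ca = task_ca(tsk); ca; ca = parent_ca(ca))
		*this_cpu_ptr(ca->cpuusage) += cputime;
	rcu_read_unlock();
}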

Allocated by task 6358:
save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:57
save_stack+0x43/0xd0 mm/kasan/kasan.c:505
set_track mm/kasan/kasan.c:517 [inline]
kasan_kmalloc+0xc7/0xe0 mm/kasan/kasan.c:609
kmem_cache_alloc_trace+0xfd/0x2b0 mm/slub.c:2742
kmalloc include/linux/slab.h:490 [inline]
kzalloc include/linux/slab.h:636 [inline]
find_css_set kernel/cgroup.c:1087 [inline]
cgroup_migrate_prepare_dst+0x779/0x1810 kernel/cgroup.c:2723
cgroup_update_dfl_csses kernel/cgroup.c:3103 [inline]
cgroup_apply_control+0x35f/0x650 kernel/cgroup.c:3357
cgroup_subtree_control_write+0x9d2/0xf40 kernel/cgroup.c:3490
cgroup_file_write+0x10d/0x550 kernel/cgroup.c:3517
kernfs_fop_write+0x2ae/0x460 fs/kernfs/file.c:316
__vfs_write+0x115/0x580 fs/read_write.c:507
vfs_write+0x187/0x530 fs/read_write.c:557
SYSC_write fs/read_write.c:604 [inline]
SyS_write+0xd9/0x1c0 fs/read_write.c:596
do_syscall_64+0x1a6/0x490 arch/x86/entry/common.c:282
entry_SYSCALL_64_after_swapgs+0x5d/0xdb
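
The object being read is a css_set: the allocation stack shows it being kzalloc()'d in find_css_set() while a write to a cgroup's subtree_control file rebuilds the css_sets on the default hierarchy. A minimal sketch of that allocation site, paraphrased from kernel/cgroup.c with the linking and hashing steps elided; in 4.9 struct css_set is sized such that it is served from the kmalloc-512 cache, consistent with the slab information further below:

/* Paraphrased sketch of the allocation site, not verbatim kernel/cgroup.c. */
static struct css_set *find_css_set(struct css_set *old_cset,
				    struct cgroup *cgrp)
{
	struct css_set *cset;

	/* The object KASAN later flags: allocated from kmalloc-512. */
	cset = kzalloc(sizeof(*cset), GFP_KERNEL);
	if (!cset)
		return NULL;

	/* ... initialize refcount, copy subsys[] pointers, hash the set ... */
	return cset;
}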

Freed by task 0:
save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:57
save_stack+0x43/0xd0 mm/kasan/kasan.c:505
set_track mm/kasan/kasan.c:517 [inline]
kasan_slab_free+0x72/0xc0 mm/kasan/kasan.c:582
slab_free_hook mm/slub.c:1355 [inline]
slab_free_freelist_hook mm/slub.c:1377 [inline]
slab_free mm/slub.c:2958 [inline]
kfree+0xfb/0x310 mm/slub.c:3878
__rcu_reclaim kernel/rcu/rcu.h:113 [inline]
rcu_do_batch kernel/rcu/tree.c:2789 [inline]
invoke_rcu_callbacks kernel/rcu/tree.c:3053 [inline]
__rcu_process_callbacks kernel/rcu/tree.c:3020 [inline]
rcu_process_callbacks+0x9d5/0x12b0 kernel/rcu/tree.c:3037
__do_softirq+0x210/0x940 kernel/softirq.c:288
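
"Freed by task 0" means the kfree() ran from an RCU callback in softirq context rather than from any particular process: css_set teardown defers the free with call_rcu(), and that callback is what rcu_process_callbacks()/__do_softirq() eventually runs. The generic shape of the pattern is sketched below with a hypothetical struct name (illustrative only, not the actual cgroup code):

#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Hypothetical object mirroring how an RCU-protected structure such as
 * a css_set is retired; the real code lives in kernel/cgroup.c. */
struct demo_cset {
	struct rcu_head rcu_head;
	/* ... payload ... */
};

static void demo_cset_free_rcu(struct rcu_head *head)
{
	/* Runs from the RCU softirq, which is why KASAN attributes the
	 * free to task 0. */
	kfree(container_of(head, struct demo_cset, rcu_head));
}

static void demo_cset_put(struct demo_cset *cset)
{
	/* Defer the kfree() until a grace period elapses, i.e. until all
	 * pre-existing rcu_read_lock() readers have finished. */
	call_rcu(&cset->rcu_head, demo_cset_free_rcu);
}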

The buggy address belongs to the object at ffff8801c4504280
which belongs to the cache kmalloc-512 of size 512
The buggy address is located 96 bytes inside of
512-byte region [ffff8801c4504280, ffff8801c4504480)
The buggy address belongs to the page:
page:ffffea0007114100 count:1 mapcount:0 mapping: (null) index:0x0
compound_mapcount: 0
flags: 0x8000000000004080(slab|head)
page dumped because: kasan: bad access detected

Memory state around the buggy address:
ffff8801c4504180: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff8801c4504200: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> ffff8801c4504280: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff8801c4504300: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff8801c4504380: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
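
For reference, the shadow bytes in the dump above correspond to KASAN's poison values from mm/kasan/kasan.h (4.9-era constants): the row marked ">" is entirely 0xfb, i.e. the whole 512-byte object has been freed, and the surrounding 0xfc bytes are slab redzones.

/* From mm/kasan/kasan.h (4.9-era); the values seen in the dump above. */
#define KASAN_KMALLOC_REDZONE	0xFC	/* "fc": redzone around a kmalloc object */
#define KASAN_KMALLOC_FREE	0xFB	/* "fb": object was kfree()d and poisoned */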


---
This report was generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
syzbot can test patches for this bug, for details see:
https://goo.gl/tpsmEJ#testing-patches