KASAN: stack-out-of-bounds Read in profile_pc


syzbot
May 26, 2021, 10:31:17 PM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 1d3dcc20 ANDROID: dm-user: Fix build warnings
git tree: android12-5.4
console output: https://syzkaller.appspot.com/x/log.txt?x=1109e5cfd00000
kernel config: https://syzkaller.appspot.com/x/.config?x=8fac8bc1c995d734
dashboard link: https://syzkaller.appspot.com/bug?extid=0ca27feeb396418459ae
compiler: Debian clang version 11.0.1-2

Unfortunately, I don't have any reproducer for this issue yet.

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+0ca27f...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: stack-out-of-bounds in profile_pc+0xa4/0xe0 arch/x86/kernel/time.c:42
Read of size 8 at addr ffff8881e719f4a0 by task systemd-udevd/152

CPU: 1 PID: 152 Comm: systemd-udevd Not tainted 5.4.121-syzkaller-00751-g1d3dcc209600 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
<IRQ>
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x1d8/0x24e lib/dump_stack.c:118
print_address_description+0x9b/0x650 mm/kasan/report.c:384
__kasan_report+0x182/0x260 mm/kasan/report.c:516
kasan_report+0x30/0x60 mm/kasan/common.c:641
profile_pc+0xa4/0xe0 arch/x86/kernel/time.c:42
profile_tick+0xb2/0xf0 kernel/profile.c:408
tick_sched_handle kernel/time/tick-sched.c:172 [inline]
tick_sched_timer+0x268/0x410 kernel/time/tick-sched.c:1296
__run_hrtimer+0x187/0x7b0 kernel/time/hrtimer.c:1535
__hrtimer_run_queues kernel/time/hrtimer.c:1597 [inline]
hrtimer_interrupt+0x582/0x1170 kernel/time/hrtimer.c:1659
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1123 [inline]
smp_apic_timer_interrupt+0x109/0x420 arch/x86/kernel/apic/apic.c:1148
apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:831
</IRQ>
RIP: 0010:__raw_spin_lock include/linux/spinlock_api_smp.h:141 [inline]
RIP: 0010:_raw_spin_lock+0x64/0x1b0 kernel/locking/spinlock.c:151
Code: 8a b5 41 48 c7 44 24 08 8b 93 71 85 48 c7 44 24 10 40 4d 46 84 48 89 e3 48 c1 eb 03 48 b8 f1 f1 f1 f1 04 f3 f3 f3 4a 89 04 23 <bf> 01 00 00 00 e8 b2 3b 00 fd 4d 89 fe 49 c1 ee 03 43 8a 04 26 84
RSP: 0018:ffff8881e719f4a0 EFLAGS: 00000a02 ORIG_RAX: ffffffffffffff13
RAX: f3f3f304f1f1f1f1 RBX: 1ffff1103ce33e94 RCX: 0000000000000000
RDX: ffff8881e7190fc0 RSI: 000000004f820e04 RDI: ffff8881ecc0e8d8
RBP: ffff8881e719f530 R08: ffffffff819e75e9 R09: ffff8881e719f780
R10: ffffed103ce33ef2 R11: 0000000000000000 R12: dffffc0000000000
R13: ffff8881ecc0e8d8 R14: ffff8881ecc0e8a0 R15: ffff8881e719f4c0
spin_lock include/linux/spinlock.h:338 [inline]
__d_lookup+0xfe/0x510 fs/dcache.c:2374
lookup_fast+0x12b/0xfd0 fs/namei.c:1694
walk_component+0x145/0x960 fs/namei.c:1881
link_path_walk+0x62b/0x14b0 fs/namei.c:2210
path_lookupat+0xc8/0xa40 fs/namei.c:2392
filename_lookup+0x223/0x6a0 fs/namei.c:2423
user_path_at include/linux/namei.h:49 [inline]
vfs_statx fs/stat.c:187 [inline]
vfs_lstat include/linux/fs.h:3356 [inline]
__do_sys_newlstat fs/stat.c:354 [inline]
__se_sys_newlstat+0xde/0x860 fs/stat.c:348
do_syscall_64+0xcb/0x1e0 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x7fc2acb6d335
Code: 69 db 2b 00 64 c7 00 16 00 00 00 b8 ff ff ff ff c3 0f 1f 40 00 83 ff 01 48 89 f0 77 30 48 89 c7 48 89 d6 b8 06 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 03 f3 c3 90 48 8b 15 31 db 2b 00 f7 d8 64 89
RSP: 002b:00007fff76ea9658 EFLAGS: 00000246 ORIG_RAX: 0000000000000006
RAX: ffffffffffffffda RBX: 0000556bfe934a70 RCX: 00007fc2acb6d335
RDX: 00007fff76ea9690 RSI: 00007fff76ea9690 RDI: 0000556bfe933a70
RBP: 00007fff76ea9750 R08: 00007fc2ace2c1c8 R09: 0000000000001010
R10: 0000000000000020 R11: 0000000000000246 R12: 0000556bfe933a70
R13: 0000556bfe933a8a R14: 0000556bfe92feb5 R15: 0000556bfe92feba

The buggy address belongs to the page:
page:ffffea00079c67c0 refcount:0 mapcount:0 mapping:0000000000000000 index:0x0
flags: 0x8000000000000000()
raw: 8000000000000000 ffffea00079c67c8 ffffea00079c67c8 0000000000000000
raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0x400dc0(GFP_KERNEL_ACCOUNT|__GFP_ZERO)
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook mm/page_alloc.c:2165 [inline]
prep_new_page+0x19a/0x380 mm/page_alloc.c:2171
get_page_from_freelist+0x550/0x8b0 mm/page_alloc.c:3794
__alloc_pages_nodemask+0x3a2/0x880 mm/page_alloc.c:4855
__alloc_pages include/linux/gfp.h:503 [inline]
__alloc_pages_node include/linux/gfp.h:516 [inline]
alloc_pages_node include/linux/gfp.h:530 [inline]
alloc_thread_stack_node kernel/fork.c:259 [inline]
dup_task_struct kernel/fork.c:875 [inline]
copy_process+0x605/0x5630 kernel/fork.c:1877
_do_fork+0x18f/0x900 kernel/fork.c:2391
__do_sys_clone kernel/fork.c:2549 [inline]
__se_sys_clone kernel/fork.c:2530 [inline]
__x64_sys_clone+0x25b/0x2c0 kernel/fork.c:2530
do_syscall_64+0xcb/0x1e0 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x44/0xa9
page_owner free stack trace missing

addr ffff8881e719f4a0 is located in stack of task systemd-udevd/152 at offset 0 in frame:
_raw_spin_lock+0x0/0x1b0 arch/x86/include/asm/atomic.h:200

this frame has 1 object:
[32, 36) 'val.i.i.i'

Memory state around the buggy address:
ffff8881e719f380: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ffff8881e719f400: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffff8881e719f480: 00 00 00 00 f1 f1 f1 f1 04 f3 f3 f3 00 00 00 00
^
ffff8881e719f500: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ffff8881e719f580: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
==================================================================


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot
May 27, 2021, 12:01:24 AM
to syzkaller-a...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: 1d3dcc20 ANDROID: dm-user: Fix build warnings
git tree: android12-5.4
console output: https://syzkaller.appspot.com/x/log.txt?x=1757ce13d00000
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=15a5d6cbd00000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=169382d3d00000

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+0ca27f...@syzkaller.appspotmail.com

BUG: KASAN: stack-out-of-bounds in profile_pc+0xa4/0xe0 arch/x86/kernel/time.c:42
Read of size 8 at addr ffff8881e69c6f00 by task sshd/337

CPU: 1 PID: 337 Comm: sshd Not tainted 5.4.121-syzkaller-00751-g1d3dcc209600 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
<IRQ>
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x1d8/0x24e lib/dump_stack.c:118
print_address_description+0x9b/0x650 mm/kasan/report.c:384
__kasan_report+0x182/0x260 mm/kasan/report.c:516
kasan_report+0x30/0x60 mm/kasan/common.c:641
profile_pc+0xa4/0xe0 arch/x86/kernel/time.c:42
profile_tick+0xb2/0xf0 kernel/profile.c:408
tick_sched_handle kernel/time/tick-sched.c:172 [inline]
tick_sched_timer+0x268/0x410 kernel/time/tick-sched.c:1296
__run_hrtimer+0x187/0x7b0 kernel/time/hrtimer.c:1535
__hrtimer_run_queues kernel/time/hrtimer.c:1597 [inline]
hrtimer_interrupt+0x582/0x1170 kernel/time/hrtimer.c:1659
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1123 [inline]
smp_apic_timer_interrupt+0x109/0x420 arch/x86/kernel/apic/apic.c:1148
apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:831
</IRQ>
RIP: 0010:arch_atomic_try_cmpxchg arch/x86/include/asm/atomic.h:200 [inline]
RIP: 0010:atomic_try_cmpxchg include/asm-generic/atomic-instrumented.h:695 [inline]
RIP: 0010:queued_spin_lock include/asm-generic/qspinlock.h:78 [inline]
RIP: 0010:do_raw_spin_lock include/linux/spinlock.h:181 [inline]
RIP: 0010:__raw_spin_lock include/linux/spinlock_api_smp.h:143 [inline]
RIP: 0010:_raw_spin_lock+0xbe/0x1b0 kernel/locking/spinlock.c:151
Code: 4d fd 4c 89 ff be 04 00 00 00 e8 2d 84 4d fd 43 8a 04 26 84 c0 0f 85 a9 00 00 00 8b 44 24 20 b9 01 00 00 00 f0 41 0f b1 4d 00 <75> 33 48 c7 04 24 0e 36 e0 45 49 c7 04 1c 00 00 00 00 65 48 8b 04
RSP: 0018:ffff8881e69c6f00 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
RAX: 0000000000000000 RBX: 1ffff1103cd38de0 RCX: 0000000000000001
RDX: 0000000000000001 RSI: 0000000000000004 RDI: ffff8881e69c6f20
RBP: ffff8881e69c6fa0 R08: dffffc0000000000 R09: 0000000000000003
R10: ffffed103cd38de5 R11: 0000000000000004 R12: dffffc0000000000
R13: ffff8881e985bac4 R14: 1ffff1103cd38de4 R15: ffff8881e69c6f20
spin_lock include/linux/spinlock.h:338 [inline]
ptr_ring_produce include/linux/ptr_ring.h:127 [inline]
skb_array_produce include/linux/skb_array.h:44 [inline]
pfifo_fast_enqueue+0xa4/0x580 net/sched/sch_generic.c:631
__dev_xmit_skb net/core/dev.c:3393 [inline]
__dev_queue_xmit+0xa68/0x26b0 net/core/dev.c:3747
neigh_hh_output include/net/neighbour.h:500 [inline]
neigh_output include/net/neighbour.h:509 [inline]
ip_finish_output2+0xbfb/0x1830 net/ipv4/ip_output.c:229
NF_HOOK_COND include/linux/netfilter.h:294 [inline]
ip_output+0x1a9/0x3a0 net/ipv4/ip_output.c:433
dst_output include/net/dst.h:444 [inline]
ip_local_out net/ipv4/ip_output.c:126 [inline]
__ip_queue_xmit+0xee6/0x17f0 net/ipv4/ip_output.c:533
__tcp_transmit_skb+0x1cae/0x3ad0 net/ipv4/tcp_output.c:1179
tcp_transmit_skb net/ipv4/tcp_output.c:1195 [inline]
tcp_write_xmit+0x165a/0x8250 net/ipv4/tcp_output.c:2459
__tcp_push_pending_frames+0x8f/0x300 net/ipv4/tcp_output.c:2640
tcp_sendmsg_locked+0x32c4/0x4080 net/ipv4/tcp.c:1415
tcp_sendmsg+0x2c/0x40 net/ipv4/tcp.c:1445
sock_sendmsg_nosec net/socket.c:638 [inline]
sock_sendmsg net/socket.c:658 [inline]
sock_write_iter+0x330/0x450 net/socket.c:990
call_write_iter include/linux/fs.h:1971 [inline]
new_sync_write fs/read_write.c:483 [inline]
__vfs_write+0x5ec/0x780 fs/read_write.c:496
vfs_write+0x212/0x4e0 fs/read_write.c:558
ksys_write+0x186/0x2b0 fs/read_write.c:611
do_syscall_64+0xcb/0x1e0 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x7f3ec61c1970
Code: 73 01 c3 48 8b 0d 28 d5 2b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 0f 1f 44 00 00 83 3d 99 2d 2c 00 00 75 10 b8 01 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 31 c3 48 83 ec 08 e8 7e 9b 01 00 48 89 04 24
RSP: 002b:00007fff17169128 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000000034 RCX: 00007f3ec61c1970
RDX: 0000000000000034 RSI: 000055c3fd84ed50 RDI: 0000000000000003
RBP: 000055c3fd84ae90 R08: 00007fff171dc080 R09: 00007fff171dc118
R10: 000000000000096e R11: 0000000000000246 R12: 0000000000000001
R13: 00007fff171691bf R14: 000055c3fd17dbe7 R15: 0000000000000003

The buggy address belongs to the page:
page:ffffea00079a7180 refcount:0 mapcount:0 mapping:0000000000000000 index:0x0
flags: 0x8000000000000000()
raw: 8000000000000000 0000000000000000 ffffea00079a7188 0000000000000000
raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0x400dc0(GFP_KERNEL_ACCOUNT|__GFP_ZERO)
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook mm/page_alloc.c:2165 [inline]
prep_new_page+0x19a/0x380 mm/page_alloc.c:2171
get_page_from_freelist+0x550/0x8b0 mm/page_alloc.c:3794
__alloc_pages_nodemask+0x3a2/0x880 mm/page_alloc.c:4855
__alloc_pages include/linux/gfp.h:503 [inline]
__alloc_pages_node include/linux/gfp.h:516 [inline]
alloc_pages_node include/linux/gfp.h:530 [inline]
alloc_thread_stack_node kernel/fork.c:259 [inline]
dup_task_struct kernel/fork.c:875 [inline]
copy_process+0x605/0x5630 kernel/fork.c:1877
_do_fork+0x18f/0x900 kernel/fork.c:2391
__do_sys_clone kernel/fork.c:2549 [inline]
__se_sys_clone kernel/fork.c:2530 [inline]
__x64_sys_clone+0x25b/0x2c0 kernel/fork.c:2530
do_syscall_64+0xcb/0x1e0 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x44/0xa9
page last free stack trace:
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1176 [inline]
__free_pages_ok+0xc60/0xd80 mm/page_alloc.c:1438
free_the_page mm/page_alloc.c:4917 [inline]
__free_pages+0x8f/0x250 mm/page_alloc.c:4923
__free_slab+0x237/0x2f0 mm/slub.c:1766
free_slab mm/slub.c:1781 [inline]
discard_slab mm/slub.c:1787 [inline]
unfreeze_partials+0x14f/0x180 mm/slub.c:2279
put_cpu_partial+0xb5/0x150 mm/slub.c:2315
__slab_free mm/slub.c:2963 [inline]
do_slab_free mm/slub.c:3060 [inline]
___cache_free+0x352/0x4e0 mm/slub.c:3079
qlist_free_all mm/kasan/quarantine.c:167 [inline]
quarantine_reduce+0x17a/0x1e0 mm/kasan/quarantine.c:260
__kasan_kmalloc+0x43/0x1e0 mm/kasan/common.c:495
slab_post_alloc_hook mm/slab.h:584 [inline]
slab_alloc_node mm/slub.c:2821 [inline]
slab_alloc mm/slub.c:2829 [inline]
kmem_cache_alloc+0x115/0x290 mm/slub.c:2834
sock_alloc_inode+0x17/0xb0 net/socket.c:239
alloc_inode fs/inode.c:232 [inline]
new_inode_pseudo+0x61/0x220 fs/inode.c:928
sock_alloc net/socket.c:559 [inline]
__sys_accept4+0x227/0x9f0 net/socket.c:1725
__do_sys_accept net/socket.c:1796 [inline]
__se_sys_accept net/socket.c:1793 [inline]
__x64_sys_accept+0x79/0x90 net/socket.c:1793
do_syscall_64+0xcb/0x1e0 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x44/0xa9

addr ffff8881e69c6f00 is located in stack of task sshd/337 at offset 0 in frame:
_raw_spin_lock+0x0/0x1b0 arch/x86/include/asm/atomic.h:200

this frame has 1 object:
[32, 36) 'val.i.i.i'

Memory state around the buggy address:
ffff8881e69c6e00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ffff8881e69c6e80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffff8881e69c6f00: f1 f1 f1 f1 04 f3 f3 f3 00 00 00 00 00 00 00 00
^
ffff8881e69c6f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ffff8881e69c7000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
==================================================================
