[mm?] KASAN: slab-out-of-bounds Read in mas_slot_locked

syzbot

Mar 31, 2023, 12:44:06 AM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 950b879b7f02 riscv: Fixup race condition on PG_dcache_clea..
git tree: git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git fixes
console output: https://syzkaller.appspot.com/x/log.txt?x=156114f5c80000
kernel config: https://syzkaller.appspot.com/x/.config?x=ecebece1b90c0342
dashboard link: https://syzkaller.appspot.com/bug?extid=cc9ff5470465290688e4
compiler: riscv64-linux-gnu-gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: riscv64
CC: ak...@linux-foundation.org linux-...@vger.kernel.org linu...@kvack.org

Unfortunately, I don't have any reproducer for this issue yet.

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+cc9ff5...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: slab-out-of-bounds in mas_slot_locked.isra.0+0x82/0x12a lib/maple_tree.c:826
Read of size 8 at addr ff60000013543500 by task syz-executor.0/4452

CPU: 1 PID: 4452 Comm: syz-executor.0 Tainted: G W 6.2.0-rc1-syzkaller #0
Hardware name: riscv-virtio,qemu (DT)
Call Trace:
[<ffffffff8000b9ea>] dump_backtrace+0x2e/0x3c arch/riscv/kernel/stacktrace.c:121
[<ffffffff83402b96>] show_stack+0x34/0x40 arch/riscv/kernel/stacktrace.c:127
[<ffffffff83442726>] __dump_stack lib/dump_stack.c:88 [inline]
[<ffffffff83442726>] dump_stack_lvl+0xe0/0x14c lib/dump_stack.c:106
[<ffffffff83409674>] print_address_description mm/kasan/report.c:306 [inline]
[<ffffffff83409674>] print_report+0x1e4/0x4c0 mm/kasan/report.c:417
[<ffffffff804ead14>] kasan_report+0xb8/0xe6 mm/kasan/report.c:517
[<ffffffff804ebea4>] check_region_inline mm/kasan/generic.c:183 [inline]
[<ffffffff804ebea4>] __asan_load8+0x7e/0xa6 mm/kasan/generic.c:256
[<ffffffff833beea0>] mas_slot_locked.isra.0+0x82/0x12a lib/maple_tree.c:826
[<ffffffff833bf37e>] mas_store_b_node+0x436/0x5d2 lib/maple_tree.c:2159
[<ffffffff833da0f0>] mas_wr_bnode+0x14c/0x14c8 lib/maple_tree.c:4324
[<ffffffff833db670>] mas_wr_modify+0x204/0x964 lib/maple_tree.c:4368
[<ffffffff833dc8a4>] mas_wr_store_entry.isra.0+0x390/0x854 lib/maple_tree.c:4406
[<ffffffff833dd802>] mas_store_prealloc+0xd4/0x15c lib/maple_tree.c:5706
[<ffffffff804559ca>] do_mas_align_munmap+0x794/0xba8 mm/mmap.c:2424
[<ffffffff80456004>] do_mas_munmap+0x19a/0x230 mm/mmap.c:2498
[<ffffffff804563e2>] do_munmap+0xc2/0xf4 mm/mmap.c:2512
[<ffffffff80461a74>] mremap_to mm/mremap.c:826 [inline]
[<ffffffff80461a74>] __do_sys_mremap+0xe02/0xf06 mm/mremap.c:972
[<ffffffff80461c1c>] sys_mremap+0x32/0x44 mm/mremap.c:889
[<ffffffff80005ff6>] ret_from_syscall+0x0/0x2
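
For context on the faulting read: mas_slot_locked() boils down to an rcu_dereference_protected() of a single pointer-sized entry in a node's slot array, so an offset at or past the node's slot count becomes an 8-byte load beyond the 256-byte maple_node. A minimal sketch of that failure mode, using a hypothetical flattened node layout (the real struct maple_node in lib/maple_tree.c is a union with per-type slot counts):

#include <linux/rcupdate.h>

/* Hypothetical flat node; 32 * 8 == 256 bytes, matching the
 * maple_node cache size in the report. The real node types have
 * fewer slots plus pivot/metadata fields. */
#define TOY_SLOTS 32

struct toy_node {
	void __rcu *slots[TOY_SLOTS];
};

/* Sketch of the locked slot read: nothing here bounds-checks
 * 'offset', so offset >= TOY_SLOTS turns into an out-of-bounds
 * 8-byte load, as in the report (read at object base + 0x100). */
static void *toy_slot_locked(struct toy_node *node, unsigned char offset)
{
	return rcu_dereference_protected(node->slots[offset], true);
}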

Allocated by task 2892:
stack_trace_save+0xa6/0xd8 kernel/stacktrace.c:122
kasan_save_stack+0x2c/0x5a mm/kasan/common.c:45
kasan_set_track+0x1a/0x26 mm/kasan/common.c:52
kasan_save_alloc_info+0x1a/0x24 mm/kasan/generic.c:507
__kasan_slab_alloc+0x7a/0x80 mm/kasan/common.c:325
kasan_slab_alloc include/linux/kasan.h:201 [inline]
slab_post_alloc_hook mm/slab.h:761 [inline]
kmem_cache_alloc_bulk+0x244/0x5c2 mm/slub.c:4033
mt_alloc_bulk lib/maple_tree.c:157 [inline]
mas_alloc_nodes+0x26c/0x54c lib/maple_tree.c:1256
mas_node_count_gfp+0xe6/0xe8 lib/maple_tree.c:1315
mas_node_count lib/maple_tree.c:1329 [inline]
mas_expected_entries+0xc2/0x148 lib/maple_tree.c:5828
dup_mmap+0x2a6/0xa3e kernel/fork.c:616
dup_mm kernel/fork.c:1548 [inline]
copy_mm kernel/fork.c:1597 [inline]
copy_process+0x26da/0x4068 kernel/fork.c:2266
kernel_clone+0xee/0x914 kernel/fork.c:2681
__do_sys_clone+0xec/0x120 kernel/fork.c:2822
sys_clone+0x32/0x44 kernel/fork.c:2790
ret_from_syscall+0x0/0x2
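
The allocation path above is the maple tree's node preallocation: mt_alloc_bulk() hands out an array of 256-byte maple_node objects from the slab cache in one kmem_cache_alloc_bulk() call. A rough sketch of that pattern, with a hypothetical wrapper name:

#include <linux/slab.h>

/* Hypothetical helper mirroring the mt_alloc_bulk() call site in
 * the trace: kmem_cache_alloc_bulk() fills 'nodes' with 'count'
 * objects from the cache and returns the number allocated
 * (0 on failure; SLUB's bulk path is all-or-nothing). */
static int toy_prealloc_nodes(struct kmem_cache *cache, gfp_t gfp,
			      size_t count, void **nodes)
{
	int allocated = kmem_cache_alloc_bulk(cache, gfp, count, nodes);

	return allocated ? allocated : -ENOMEM;
}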

The buggy address belongs to the object at ff60000013543400
which belongs to the cache maple_node of size 256
The buggy address is located 0 bytes to the right of
256-byte region [ff60000013543400, ff60000013543500)
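
The arithmetic behind "0 bytes to the right": the faulting address equals the object base plus the object size, i.e. the very first byte past the end of the node. A quick standalone check using the numbers from this report:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t fault = 0xff60000013543500ULL; /* faulting address */
	uint64_t base  = 0xff60000013543400ULL; /* maple_node object base */
	uint64_t size  = 256;                   /* maple_node cache object size */

	/* offset == 256 == size, so the read starts 0 bytes past the
	 * end of the object, in the slab redzone. */
	printf("offset: %llu, bytes past end: %llu\n",
	       (unsigned long long)(fault - base),
	       (unsigned long long)(fault - base - size));
	return 0;
}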

The buggy address belongs to the physical page:
page:ff1c0000024dd080 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x93742
head:ff1c0000024dd080 order:1 compound_mapcount:0 subpages_mapcount:0 compound_pincount:0
flags: 0xffe000000010200(slab|head|node=0|zone=0|lastcpupid=0x7ff)
raw: 0ffe000000010200 ff6000000820ddc0 0000000000000100 0000000000000122
raw: 0000000000000000 0000000000100010 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 1, migratetype Unmovable, gfp_mask 0x1d20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_HARDWALL), pid 4082, tgid 4082 (syz-executor.0), ts 1691943972400, free_ts 1687047820300
__set_page_owner+0x32/0x182 mm/page_owner.c:190
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook+0xf8/0x11a mm/page_alloc.c:2524
prep_new_page mm/page_alloc.c:2531 [inline]
get_page_from_freelist+0xc0e/0x1118 mm/page_alloc.c:4283
__alloc_pages+0x1b0/0x165a mm/page_alloc.c:5549
alloc_pages+0x132/0x25e mm/mempolicy.c:2286
alloc_slab_page mm/slub.c:1851 [inline]
allocate_slab mm/slub.c:1998 [inline]
new_slab+0x270/0x382 mm/slub.c:2051
___slab_alloc+0x57e/0xaa6 mm/slub.c:3193
__kmem_cache_alloc_bulk mm/slub.c:3951 [inline]
kmem_cache_alloc_bulk+0x33c/0x5c2 mm/slub.c:4026
mt_alloc_bulk lib/maple_tree.c:157 [inline]
mas_alloc_nodes+0x26c/0x54c lib/maple_tree.c:1256
mas_node_count_gfp lib/maple_tree.c:1315 [inline]
mas_preallocate+0x14a/0x226 lib/maple_tree.c:5724
__vma_adjust+0x12c/0xf22 mm/mmap.c:715
vma_adjust include/linux/mm.h:2793 [inline]
__split_vma+0x32c/0x334 mm/mmap.c:2233
split_vma+0x68/0x8c mm/mmap.c:2269
mprotect_fixup+0x328/0x438 mm/mprotect.c:620
do_mprotect_pkey.constprop.0+0x3aa/0x63c mm/mprotect.c:785
__do_sys_mprotect mm/mprotect.c:812 [inline]
sys_mprotect+0x26/0x3c mm/mprotect.c:809
page last free stack trace:
__reset_page_owner+0x4a/0xf8 mm/page_owner.c:148
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1446 [inline]
free_pcp_prepare+0x254/0x48e mm/page_alloc.c:1496
free_unref_page_prepare mm/page_alloc.c:3369 [inline]
free_unref_page+0x60/0x2ae mm/page_alloc.c:3464
free_the_page mm/page_alloc.c:750 [inline]
__free_pages+0xd6/0x106 mm/page_alloc.c:5635
free_pages.part.0+0xd8/0x13a mm/page_alloc.c:5646
free_pages+0xe/0x18 mm/page_alloc.c:5643
kasan_depopulate_vmalloc_pte+0x46/0x64 mm/kasan/shadow.c:372
apply_to_pte_range mm/memory.c:2600 [inline]
apply_to_pmd_range mm/memory.c:2644 [inline]
apply_to_pud_range mm/memory.c:2680 [inline]
apply_to_p4d_range mm/memory.c:2716 [inline]
__apply_to_page_range+0x90c/0x12b4 mm/memory.c:2750
apply_to_existing_page_range+0x34/0x46 mm/memory.c:2783
kasan_release_vmalloc+0x80/0x96 mm/kasan/shadow.c:486
__purge_vmap_area_lazy+0x636/0x1330 mm/vmalloc.c:1776
drain_vmap_area_work+0x3c/0x7e mm/vmalloc.c:1809
process_one_work+0x660/0x102e kernel/workqueue.c:2289
worker_thread+0x362/0x878 kernel/workqueue.c:2436
kthread+0x19c/0x1f8 kernel/kthread.c:376
ret_from_exception+0x0/0x1a arch/riscv/kernel/entry.S:249

Memory state around the buggy address:
ff60000013543400: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ff60000013543480: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ff60000013543500: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
^
ff60000013543580: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
ff60000013543600: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
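
Reading the shadow dump: each shadow byte covers 8 bytes of real memory; 00 means all 8 bytes are addressable, fc is slab redzone (which the flagged access landed in), and fb/fa mark freed objects further to the right. A sketch of the address-to-shadow mapping under the generic KASAN scheme, with the arch-specific shadow offset left as a placeholder parameter:

/* Generic KASAN keeps one shadow byte per 8 bytes of memory:
 *   shadow = (addr >> 3) + shadow_offset
 * 'shadow_offset' stands in for the arch/config-specific constant.
 * Legend for the dump above: 00 = fully addressable, fc = slab
 * redzone, fb = freed object, fa = freed object with free-track
 * metadata. */
static unsigned long toy_mem_to_shadow(unsigned long addr,
				       unsigned long shadow_offset)
{
	return (addr >> 3) + shadow_offset;
}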


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Jun 25, 2023, 12:40:44 AM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.