[v6.1] KASAN: out-of-bounds Read in leaf_copy_items_entirely


syzbot

Jun 24, 2023, 12:16:43 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: e84a4e368abe Linux 6.1.35
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=124fa03b280000
kernel config: https://syzkaller.appspot.com/x/.config?x=a69b5c9de715622a
dashboard link: https://syzkaller.appspot.com/bug?extid=bf5eb4c6e3d8a2f28fff
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=16da81af280000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=15ca78c7280000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/3c7fedd1a86d/disk-e84a4e36.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/8b34c6296ed7/vmlinux-e84a4e36.xz
kernel image: https://storage.googleapis.com/syzbot-assets/a88164798cc2/bzImage-e84a4e36.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/d6a4fba5a382/mount_0.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+bf5eb4...@syzkaller.appspotmail.com

REISERFS (device loop0): Created .reiserfs_priv - reserved for xattr storage.
==================================================================
BUG: KASAN: out-of-bounds in leaf_copy_items_entirely+0xac8/0xee0 fs/reiserfs/lbalance.c:384
Read of size 18446744073709500467 at addr ffff888079a5d000 by task syz-executor336/3538

CPU: 1 PID: 3538 Comm: syz-executor336 Not tainted 6.1.35-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/27/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
print_address_description mm/kasan/report.c:284 [inline]
print_report+0x15f/0x4f0 mm/kasan/report.c:395
kasan_report+0x136/0x160 mm/kasan/report.c:495
kasan_check_range+0x27f/0x290 mm/kasan/generic.c:189
memcpy+0x25/0x60 mm/kasan/shadow.c:65
leaf_copy_items_entirely+0xac8/0xee0 fs/reiserfs/lbalance.c:384
leaf_copy_items fs/reiserfs/lbalance.c:610 [inline]
leaf_move_items+0xfd4/0x28a0 fs/reiserfs/lbalance.c:726
balance_leaf_new_nodes_paste_whole fs/reiserfs/do_balan.c:1162 [inline]
balance_leaf_new_nodes_paste fs/reiserfs/do_balan.c:1215 [inline]
balance_leaf_new_nodes fs/reiserfs/do_balan.c:1246 [inline]
balance_leaf+0x6515/0x12510 fs/reiserfs/do_balan.c:1450
do_balance+0x309/0x8f0 fs/reiserfs/do_balan.c:1888
reiserfs_paste_into_item+0x73b/0x880 fs/reiserfs/stree.c:2159
reiserfs_get_block+0x2259/0x5150 fs/reiserfs/inode.c:1069
__block_write_begin_int+0x544/0x1a30 fs/buffer.c:1991
reiserfs_write_begin+0x249/0x510 fs/reiserfs/inode.c:2775
generic_cont_expand_simple+0x187/0x2a0 fs/buffer.c:2347
reiserfs_setattr+0x606/0x11c0 fs/reiserfs/inode.c:3305
notify_change+0xdcd/0x1080 fs/attr.c:482
do_truncate+0x21c/0x300 fs/open.c:65
do_sys_ftruncate+0x2e2/0x380 fs/open.c:193
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f13a444e859
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 51 14 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 c0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe39b98438 EFLAGS: 00000246 ORIG_RAX: 000000000000004d
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f13a444e859
RDX: 00007f13a444e859 RSI: 0000000002007fff RDI: 0000000000000004
RBP: 00007f13a440e0f0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007f13a440e180
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
</TASK>

The buggy address belongs to the physical page:
page:ffffea0001e69740 refcount:2 mapcount:0 mapping:ffff8880128af5f8 index:0x213 pfn:0x79a5d
memcg:ffff888140148000
aops:def_blk_aops ino:700000
flags: 0xfff38000002052(referenced|lru|workingset|private|node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff38000002052 ffffea0001c2c7c8 ffffea0001c2c808 ffff8880128af5f8
raw: 0000000000000213 ffff8880704043a0 00000002ffffffff ffff888140148000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Movable, gfp_mask 0x148c48(GFP_NOFS|__GFP_NOFAIL|__GFP_COMP|__GFP_HARDWALL|__GFP_MOVABLE), pid 3538, tgid 3538 (syz-executor336), ts 52491197749, free_ts 52349239906
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook+0x18d/0x1b0 mm/page_alloc.c:2533
prep_new_page mm/page_alloc.c:2540 [inline]
get_page_from_freelist+0x32ed/0x3480 mm/page_alloc.c:4292
__alloc_pages+0x28d/0x770 mm/page_alloc.c:5559
folio_alloc+0x1a/0x50 mm/mempolicy.c:2290
filemap_alloc_folio+0xda/0x4f0 mm/filemap.c:971
__filemap_get_folio+0x711/0xe30 mm/filemap.c:1965
pagecache_get_page+0x28/0x250 mm/folio-compat.c:110
find_or_create_page include/linux/pagemap.h:613 [inline]
grow_dev_page fs/buffer.c:946 [inline]
grow_buffers fs/buffer.c:1011 [inline]
__getblk_slow fs/buffer.c:1038 [inline]
__getblk_gfp+0x211/0xa20 fs/buffer.c:1333
sb_getblk include/linux/buffer_head.h:356 [inline]
search_by_key+0x460/0x4b60 fs/reiserfs/stree.c:672
reiserfs_read_locked_inode+0x23c/0x2950 fs/reiserfs/inode.c:1549
reiserfs_fill_super+0x135f/0x2620 fs/reiserfs/super.c:2071
mount_bdev+0x2c9/0x3f0 fs/super.c:1423
legacy_get_tree+0xeb/0x180 fs/fs_context.c:610
vfs_get_tree+0x88/0x270 fs/super.c:1553
do_new_mount+0x28b/0xae0 fs/namespace.c:3040
do_mount fs/namespace.c:3383 [inline]
__do_sys_mount fs/namespace.c:3591 [inline]
__se_sys_mount+0x2d5/0x3c0 fs/namespace.c:3568
page last free stack trace:
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1460 [inline]
free_pcp_prepare mm/page_alloc.c:1510 [inline]
free_unref_page_prepare+0xf63/0x1120 mm/page_alloc.c:3388
free_unref_page_list+0x107/0x810 mm/page_alloc.c:3530
release_pages+0x2836/0x2b40 mm/swap.c:1055
tlb_batch_pages_flush mm/mmu_gather.c:59 [inline]
tlb_flush_mmu_free mm/mmu_gather.c:254 [inline]
tlb_flush_mmu+0xfc/0x210 mm/mmu_gather.c:261
tlb_finish_mmu+0xce/0x1f0 mm/mmu_gather.c:361
exit_mmap+0x3c3/0x9f0 mm/mmap.c:3139
__mmput+0x115/0x3c0 kernel/fork.c:1191
exec_mmap+0x4fa/0x5b0 fs/exec.c:1035
begin_new_exec+0x7ac/0x1030 fs/exec.c:1294
load_elf_binary+0x945/0x2750 fs/binfmt_elf.c:1002
search_binary_handler fs/exec.c:1727 [inline]
exec_binprm fs/exec.c:1768 [inline]
bprm_execve+0x8ff/0x1820 fs/exec.c:1837
do_execveat_common+0x580/0x720 fs/exec.c:1942
do_execve fs/exec.c:2016 [inline]
__do_sys_execve fs/exec.c:2092 [inline]
__se_sys_execve fs/exec.c:2087 [inline]
__x64_sys_execve+0x8e/0xa0 fs/exec.c:2087
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

Memory state around the buggy address:
ffff888079a5cf00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ffff888079a5cf80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffff888079a5d000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
                   ^
ffff888079a5d080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ffff888079a5d100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
==================================================================
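
A note on the reported size: 18446744073709500467 is exactly (size_t)-51149, i.e. a negative copy length that wrapped around when converted to memcpy()'s unsigned size argument. The likely shape of the bug is therefore an unvalidated on-disk length from the crafted image underflowing during leaf balancing before reaching the memcpy() at fs/reiserfs/lbalance.c:384. The sketch below uses hypothetical names (copy_item_body, item_len, already_copied), not the actual reiserfs code, and only illustrates the wrap-around pattern:

#include <stdio.h>
#include <string.h>

/*
 * Hypothetical sketch only: item_len and already_copied are
 * illustration names, not the real fields used at
 * fs/reiserfs/lbalance.c:384.
 */
static void copy_item_body(char *dst, const char *src,
                           int item_len, int already_copied)
{
        /* A crafted image can make already_copied exceed item_len. */
        int remaining = item_len - already_copied;  /* may go negative */

        /*
         * memcpy() takes a size_t length, so a negative int wraps to
         * a value near 2^64; KASAN then flags the wild read.
         */
        memcpy(dst, src, remaining);
}

int main(void)
{
        char src[8] = "abcdefg", dst[8];

        /* Benign call: remaining = 4, copies 4 bytes. */
        copy_item_body(dst, src, 8, 4);

        /* The wrap-around that produces the size in the report:
         * (size_t)-51149 == 18446744073709500467. */
        int remaining = -51149;
        printf("%zu\n", (size_t)remaining);
        return 0;
}

The benign call in main() only demonstrates the arithmetic; the actual fix would presumably validate the untrusted on-disk lengths before the subtraction ever reaches memcpy().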


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the bug is already fixed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

If you want to change the bug's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the bug is a duplicate of another bug, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup