[moderation] [iomap?] KASAN: use-after-free Read in iomap_read_inline_data (2)


syzbot

Nov 12, 2025, 1:44:30 PM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: e811c33b1f13 Merge tag 'drm-fixes-2025-11-08' of https://g..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=1556bbcd980000
kernel config: https://syzkaller.appspot.com/x/.config?x=929790bc044e87d7
dashboard link: https://syzkaller.appspot.com/bug?extid=2e05261efde0f86b80c8
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
CC: [bra...@kernel.org djw...@kernel.org linux-...@vger.kernel.org linux-...@vger.kernel.org linu...@vger.kernel.org]

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/d900f083ada3/non_bootable_disk-e811c33b.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/9a1852775b7f/vmlinux-e811c33b.xz
kernel image: https://storage.googleapis.com/syzbot-assets/d2f7ca742771/bzImage-e811c33b.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+2e0526...@syzkaller.appspotmail.com

loop0: detected capacity change from 0 to 32768
gfs2: fsid=syz:syz: Trying to join cluster "lock_nolock", "syz:syz"
gfs2: fsid=syz:syz: Now mounting FS (format 1801)...
gfs2: fsid=syz:syz.0: journal 0 mapped with 1 extents in 2ms
gfs2: fsid=syz:syz.0: first mount done, others may mount
==================================================================
BUG: KASAN: use-after-free in folio_fill_tail include/linux/highmem.h:598 [inline]
BUG: KASAN: use-after-free in iomap_read_inline_data+0x6dd/0xbb0 fs/iomap/buffered-io.c:318
Read of size 280 at addr ffff88800db4d8e8 by task syz.0.0/5323

CPU: 0 UID: 0 PID: 5323 Comm: syz.0.0 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
<TASK>
dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
print_address_description mm/kasan/report.c:378 [inline]
print_report+0xca/0x240 mm/kasan/report.c:482
kasan_report+0x118/0x150 mm/kasan/report.c:595
check_region_inline mm/kasan/generic.c:-1 [inline]
kasan_check_range+0x2b0/0x2c0 mm/kasan/generic.c:200
__asan_memcpy+0x29/0x70 mm/kasan/shadow.c:105
folio_fill_tail include/linux/highmem.h:598 [inline]
iomap_read_inline_data+0x6dd/0xbb0 fs/iomap/buffered-io.c:318
iomap_write_begin_inline fs/iomap/buffered-io.c:805 [inline]
iomap_write_begin+0xa05/0x1c70 fs/iomap/buffered-io.c:858
iomap_write_iter fs/iomap/buffered-io.c:990 [inline]
iomap_file_buffered_write+0x441/0x9b0 fs/iomap/buffered-io.c:1071
gfs2_file_buffered_write+0x4ed/0x880 fs/gfs2/file.c:1061
gfs2_file_write_iter+0xbc6/0x1100 fs/gfs2/file.c:1141
iter_file_splice_write+0x975/0x10e0 fs/splice.c:738
do_splice_from fs/splice.c:938 [inline]
direct_splice_actor+0x101/0x160 fs/splice.c:1161
splice_direct_to_actor+0x5a8/0xcc0 fs/splice.c:1105
do_splice_direct_actor fs/splice.c:1204 [inline]
do_splice_direct+0x181/0x270 fs/splice.c:1230
do_sendfile+0x4da/0x7e0 fs/read_write.c:1370
__do_sys_sendfile64 fs/read_write.c:1431 [inline]
__se_sys_sendfile64+0x13e/0x190 fs/read_write.c:1417
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f1cbbb8f6c9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f1cbcac9038 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 00007f1cbbde5fa0 RCX: 00007f1cbbb8f6c9
RDX: 0000000000000000 RSI: 0000000000000004 RDI: 0000000000000004
RBP: 00007f1cbbc11f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0001000200201005 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f1cbbde6038 R14: 00007f1cbbde5fa0 R15: 00007ffc4958a428
</TASK>
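
For context on the trace above: the faulting memcpy() is the inline-data copy that iomap_read_inline_data() performs through folio_fill_tail(), so the 280-byte read is taken from iomap->inline_data and written into the folio tail. Below is a minimal sketch of that copy path, not the code in fs/iomap/buffered-io.c; the function name read_inline_data_sketch() and its parameter list are illustrative only.

#include <linux/highmem.h>
#include <linux/iomap.h>
#include <linux/string.h>

/*
 * Illustrative sketch only (not the kernel source): the folio tail is
 * filled from the iomap's inline data and then zero-padded.  KASAN flags
 * the memcpy() source because iomap->inline_data points into a page that
 * has already been freed by the time the buffered write reaches this copy.
 */
static void read_inline_data_sketch(struct folio *folio, size_t poff,
				    const struct iomap *iomap, size_t size)
{
	void *dst = kmap_local_folio(folio, poff);

	memcpy(dst, iomap->inline_data, size);	/* <-- use-after-free read */
	memset(dst + size, 0, folio_size(folio) - poff - size);
	kunmap_local(dst);
}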

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x928 pfn:0xdb4d
flags: 0xfff00000000000(node=0|zone=1|lastcpupid=0x7ff)
page_type: f0(buddy)
raw: 00fff00000000000 ffffea0000452fc8 ffff88802fffbd00 0000000000000000
raw: 0000000000000928 0000000000000000 00000000f0000000 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as freed
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x48c40(GFP_NOFS|__GFP_NOFAIL|__GFP_COMP), pid 5323, tgid 5322 (syz.0.0), ts 87189287586, free_ts 87571644129
set_page_owner include/linux/page_owner.h:32 [inline]
post_alloc_hook+0x240/0x2a0 mm/page_alloc.c:1850
prep_new_page mm/page_alloc.c:1858 [inline]
get_page_from_freelist+0x2365/0x2440 mm/page_alloc.c:3884
__alloc_frozen_pages_noprof+0x181/0x370 mm/page_alloc.c:5183
alloc_pages_mpol+0x232/0x4a0 mm/mempolicy.c:2416
alloc_frozen_pages_noprof mm/mempolicy.c:2487 [inline]
alloc_pages_noprof+0xa9/0x190 mm/mempolicy.c:2507
folio_alloc_noprof+0x1e/0x30 mm/mempolicy.c:2517
filemap_alloc_folio_noprof+0xdf/0x470 mm/filemap.c:1020
__filemap_get_folio+0x3f2/0xaf0 mm/filemap.c:2012
gfs2_getbuf+0x17e/0x6d0 fs/gfs2/meta_io.c:144
gfs2_meta_new+0x31/0x160 fs/gfs2/meta_io.c:196
init_dinode+0x75/0xa70 fs/gfs2/inode.c:566
gfs2_create_inode+0x10c5/0x1560 fs/gfs2/inode.c:848
gfs2_atomic_open+0x116/0x200 fs/gfs2/inode.c:1387
atomic_open fs/namei.c:3656 [inline]
lookup_open fs/namei.c:3767 [inline]
open_last_lookups fs/namei.c:3895 [inline]
path_openat+0xf66/0x3830 fs/namei.c:4131
do_filp_open+0x1fa/0x410 fs/namei.c:4161
do_sys_openat2+0x121/0x1c0 fs/open.c:1437
page last free pid 73 tgid 73 stack trace:
reset_page_owner include/linux/page_owner.h:25 [inline]
free_pages_prepare mm/page_alloc.c:1394 [inline]
free_unref_folios+0xdb3/0x14f0 mm/page_alloc.c:2963
shrink_folio_list+0x44ab/0x4c70 mm/vmscan.c:1638
evict_folios+0x471e/0x57c0 mm/vmscan.c:4745
try_to_shrink_lruvec+0x8a3/0xb50 mm/vmscan.c:4908
shrink_one+0x21b/0x7c0 mm/vmscan.c:4953
shrink_many mm/vmscan.c:5016 [inline]
lru_gen_shrink_node mm/vmscan.c:5094 [inline]
shrink_node+0x315d/0x3780 mm/vmscan.c:6081
kswapd_shrink_node mm/vmscan.c:6941 [inline]
balance_pgdat mm/vmscan.c:7124 [inline]
kswapd+0x147c/0x2800 mm/vmscan.c:7389
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
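
Reading the two page_owner stacks together: the page was allocated as a gfs2 metadata buffer for the freshly created dinode (gfs2_getbuf() via gfs2_create_inode()) and was later freed by kswapd's reclaim before the sendfile()-driven buffered write copied the inline data out of it. As a rough illustration of why that matters, the generic IOMAP_INLINE convention has the filesystem point iomap->inline_data at its in-core copy of the inline data, often inside such a metadata buffer. The sketch below shows only that pattern and is not gfs2's actual mapping code; example_stuffed_iomap() and struct example_dinode are hypothetical names.

#include <linux/buffer_head.h>
#include <linux/iomap.h>

/* Hypothetical on-disk inode header, for illustration only. */
struct example_dinode { char header[128]; };

/*
 * Generic IOMAP_INLINE pattern (illustrative only): inline_data is made to
 * point into the metadata buffer that holds the on-disk inode.  If that
 * buffer's page can be reclaimed while the iomap is still in use for a
 * write, the later folio_fill_tail() copy reads freed memory, matching the
 * KASAN report above.
 */
static void example_stuffed_iomap(struct iomap *iomap,
				  struct buffer_head *dibh, loff_t dsize)
{
	iomap->type = IOMAP_INLINE;
	iomap->offset = 0;
	iomap->length = dsize;
	iomap->inline_data = dibh->b_data + sizeof(struct example_dinode);
}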

Memory state around the buggy address:
ffff88800db4d780: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
ffff88800db4d800: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
>ffff88800db4d880: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
^
ffff88800db4d900: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
ffff88800db4d980: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
==================================================================


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

Aleksandr Nogikh

Nov 12, 2025, 2:01:15 PM
to syzbot, syzkaller-upst...@googlegroups.com
#syz set subsystems: gfs2