[v6.6] KASAN: slab-use-after-free Read in iomap_finish_ioend


syzbot
Jun 17, 2025, 6:07:35 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: c2603c511feb Linux 6.6.93
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=152b4370580000
kernel config: https://syzkaller.appspot.com/x/.config?x=486bade17c8c30b9
dashboard link: https://syzkaller.appspot.com/bug?extid=9a6793fafb32d754dd91
compiler: Debian clang version 20.1.6 (++20250514063057+1e4d39e07757-1~exp1~20250514183223.118), Debian LLD 20.1.6

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/8754f950a6e7/disk-c2603c51.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/3b19332dbf63/vmlinux-c2603c51.xz
kernel image: https://storage.googleapis.com/syzbot-assets/cb245e836038/bzImage-c2603c51.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+9a6793...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: slab-use-after-free in iomap_finish_ioend+0x586/0x620 fs/iomap/buffered-io.c:1517
Read of size 8 at addr ffff888078f3cfa0 by task kworker/1:4/5834

CPU: 1 PID: 5834 Comm: kworker/1:4 Not tainted 6.6.93-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Workqueue: xfs-conv/loop4 xfs_end_io
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
print_address_description mm/kasan/report.c:364 [inline]
print_report+0xac/0x230 mm/kasan/report.c:475
kasan_report+0x151/0x180 mm/kasan/report.c:588
iomap_finish_ioend+0x586/0x620 fs/iomap/buffered-io.c:1517
iomap_finish_ioends+0x117/0x2b0 fs/iomap/buffered-io.c:1541
xfs_end_ioend+0x367/0x470 fs/xfs/xfs_aops.c:136
xfs_end_io+0x254/0x2d0 fs/xfs/xfs_aops.c:173
process_one_work kernel/workqueue.c:2634 [inline]
process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
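
For anyone triaging: the faulting read is iomap_finish_ioend() dereferencing state reached through ioend->io_inode from the deferred xfs_end_io work item, after the backing xfs_inode had already been reclaimed and RCU-freed (see the "Freed by" and "Last potentially related work creation" stacks below). The sketch below is a minimal userspace model of that lifetime race, not kernel code; all names (inode_obj, pending_ioend, completion_worker) are hypothetical. Build with cc -pthread; with -fsanitize=address it reports the analogous heap-use-after-free:

/*
 * Hypothetical model of the race (NOT kernel code): a queued completion
 * holds a raw pointer to an object whose owner frees it first.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct inode_obj {                /* stands in for struct xfs_inode */
	long i_size;
};

struct pending_ioend {            /* stands in for struct iomap_ioend */
	struct inode_obj *io_inode;   /* raw pointer, no reference taken */
};

static void *completion_worker(void *arg)  /* plays the xfs_end_io work */
{
	struct pending_ioend *ioend = arg;

	sleep(1);                     /* completion runs later... */
	/* ...by which time io_inode points at freed memory: */
	printf("i_size = %ld\n", ioend->io_inode->i_size);
	free(ioend);
	return NULL;
}

int main(void)
{
	struct inode_obj *inode = calloc(1, sizeof(*inode));
	struct pending_ioend *ioend = malloc(sizeof(*ioend));
	pthread_t t;

	ioend->io_inode = inode;      /* queue the completion */
	pthread_create(&t, NULL, completion_worker, ioend);

	free(inode);                  /* "unmount reclaim": inode freed while
	                               * the completion is still queued */
	pthread_join(t, NULL);
	return 0;
}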

Allocated by task 12545:
kasan_save_stack mm/kasan/common.c:45 [inline]
kasan_set_track+0x4e/0x70 mm/kasan/common.c:52
__kasan_slab_alloc+0x6c/0x80 mm/kasan/common.c:328
kasan_slab_alloc include/linux/kasan.h:188 [inline]
slab_post_alloc_hook+0x6e/0x4d0 mm/slab.h:767
slab_alloc_node mm/slub.c:3485 [inline]
slab_alloc mm/slub.c:3493 [inline]
__kmem_cache_alloc_lru mm/slub.c:3500 [inline]
kmem_cache_alloc_lru+0x115/0x2e0 mm/slub.c:3516
alloc_inode_sb include/linux/fs.h:2946 [inline]
xfs_inode_alloc+0x80/0x6c0 fs/xfs/xfs_icache.c:81
xfs_iget_cache_miss fs/xfs/xfs_icache.c:611 [inline]
xfs_iget+0xa94/0x2db0 fs/xfs/xfs_icache.c:777
xfs_lookup+0x250/0x400 fs/xfs/xfs_inode.c:669
xfs_vn_lookup+0x119/0x1e0 fs/xfs/xfs_iops.c:304
lookup_open fs/namei.c:3466 [inline]
open_last_lookups fs/namei.c:3556 [inline]
path_openat+0x10b8/0x3190 fs/namei.c:3786
do_filp_open+0x1c5/0x3d0 fs/namei.c:3816
do_sys_openat2+0x12c/0x1c0 fs/open.c:1419
do_sys_open fs/open.c:1434 [inline]
__do_sys_openat fs/open.c:1450 [inline]
__se_sys_openat fs/open.c:1445 [inline]
__x64_sys_openat+0x139/0x160 fs/open.c:1445
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

Freed by task 1102:
kasan_save_stack mm/kasan/common.c:45 [inline]
kasan_set_track+0x4e/0x70 mm/kasan/common.c:52
kasan_save_free_info+0x2e/0x50 mm/kasan/generic.c:522
____kasan_slab_free+0x126/0x1e0 mm/kasan/common.c:236
kasan_slab_free include/linux/kasan.h:164 [inline]
slab_free_hook mm/slub.c:1806 [inline]
slab_free_freelist_hook+0x130/0x1b0 mm/slub.c:1832
slab_free mm/slub.c:3816 [inline]
kmem_cache_free+0xf8/0x280 mm/slub.c:3838
rcu_do_batch kernel/rcu/tree.c:2190 [inline]
rcu_core+0xcc4/0x1720 kernel/rcu/tree.c:2463
handle_softirqs+0x280/0x820 kernel/softirq.c:578
do_softirq+0xed/0x180 kernel/softirq.c:479
__local_bh_enable_ip+0x178/0x1c0 kernel/softirq.c:406
spin_unlock_bh include/linux/spinlock.h:396 [inline]
batadv_nc_purge_paths+0x311/0x3a0 net/batman-adv/network-coding.c:471
batadv_nc_worker+0x328/0x610 net/batman-adv/network-coding.c:720
process_one_work kernel/workqueue.c:2634 [inline]
process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293

Last potentially related work creation:
kasan_save_stack+0x3e/0x60 mm/kasan/common.c:45
__kasan_record_aux_stack+0xaf/0xc0 mm/kasan/generic.c:492
__call_rcu_common kernel/rcu/tree.c:2713 [inline]
call_rcu+0x14f/0x920 kernel/rcu/tree.c:2829
__xfs_inode_free fs/xfs/xfs_icache.c:161 [inline]
xfs_reclaim_inode fs/xfs/xfs_icache.c:942 [inline]
xfs_icwalk_process_inode fs/xfs/xfs_icache.c:1632 [inline]
xfs_icwalk_ag+0x138d/0x1a80 fs/xfs/xfs_icache.c:1714
xfs_icwalk fs/xfs/xfs_icache.c:1763 [inline]
xfs_reclaim_inodes+0x18c/0x260 fs/xfs/xfs_icache.c:975
xfs_unmount_flush_inodes+0xaf/0xc0 fs/xfs/xfs_mount.c:595
xfs_unmountfs+0xc4/0x270 fs/xfs/xfs_mount.c:1075
xfs_fs_put_super+0x65/0x140 fs/xfs/xfs_super.c:1142
generic_shutdown_super+0x134/0x2b0 fs/super.c:693
kill_block_super+0x44/0x90 fs/super.c:1660
xfs_kill_sb+0x15/0x50 fs/xfs/xfs_super.c:2032
deactivate_locked_super+0x97/0x100 fs/super.c:481
cleanup_mnt+0x429/0x4c0 fs/namespace.c:1250
task_work_run+0x1ce/0x250 kernel/task_work.c:239
resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
exit_to_user_mode_loop+0xe6/0x110 kernel/entry/common.c:177
exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:210
__syscall_exit_to_user_mode_work kernel/entry/common.c:291 [inline]
syscall_exit_to_user_mode+0x1a/0x50 kernel/entry/common.c:302
do_syscall_64+0x61/0xb0 arch/x86/entry/common.c:87
entry_SYSCALL_64_after_hwframe+0x68/0xd2
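
Reading the two stacks above together: "Freed by task 1102" is the RCU callback batch (rcu_do_batch) running from softirq context on top of an unrelated batman-adv worker, so the batadv frames are incidental. The free itself was scheduled here: xfs_reclaim_inode(), reached from xfs_unmountfs(), handed the inode to call_rcu() via __xfs_inode_free(), and the callback later performed the kmem_cache_free() seen in the free stack. In other words, the filesystem was torn down while a writeback completion for one of its inodes was still outstanding.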

Second to last potentially related work creation:
kasan_save_stack+0x3e/0x60 mm/kasan/common.c:45
__kasan_record_aux_stack+0xaf/0xc0 mm/kasan/generic.c:492
insert_work+0x3d/0x310 kernel/workqueue.c:1651
__queue_work+0xc39/0x1020 kernel/workqueue.c:1800
queue_work_on+0x121/0x1e0 kernel/workqueue.c:1835
queue_work include/linux/workqueue.h:562 [inline]
xfs_end_bio+0xf4/0x200 fs/xfs/xfs_aops.c:188
iomap_submit_ioend fs/iomap/buffered-io.c:1656 [inline]
iomap_writepages+0x1ba/0x210 fs/iomap/buffered-io.c:2004
xfs_vm_writepages+0x103/0x160 fs/xfs/xfs_aops.c:480
do_writepages+0x3a2/0x600 mm/page-writeback.c:2575
filemap_fdatawrite_wbc+0x122/0x180 mm/filemap.c:390
__filemap_fdatawrite_range mm/filemap.c:423 [inline]
file_write_and_wait_range+0x171/0x240 mm/filemap.c:781
xfs_file_fsync+0x197/0x9c0 fs/xfs/xfs_file.c:144
generic_write_sync include/linux/fs.h:2651 [inline]
xfs_file_buffered_write+0x854/0x940 fs/xfs/xfs_file.c:816
do_iter_readv_writev fs/read_write.c:-1 [inline]
do_iter_write+0x79a/0xc70 fs/read_write.c:860
vfs_writev fs/read_write.c:933 [inline]
do_pwritev+0x205/0x340 fs/read_write.c:1030
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
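
And this stack is where that pending completion came from: xfs_end_bio() queued the ioend for deferred processing on the xfs-conv workqueue named in the crash header ("Workqueue: xfs-conv/loop4 xfs_end_io"). Note that in this kernel the pending-ioend list and the work item that drains it live inside struct xfs_inode itself (i_ioend_list/i_ioend_work, if memory serves), so a completion still queued or running when the inode is freed dereferences the freed object, which is exactly what KASAN reports above.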

The buggy address belongs to the object at ffff888078f3cd80
which belongs to the cache xfs_inode of size 1808
The buggy address is located 544 bytes inside of
freed 1808-byte region [ffff888078f3cd80, ffff888078f3d490)
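
The reported offsets are self-consistent: 0xffff888078f3cfa0 - 0xffff888078f3cd80 = 0x220 = 544 bytes, and 0xffff888078f3cd80 + 1808 (0x710) = 0xffff888078f3d490, matching the region end above. A trivial standalone check (plain C, no kernel assumptions):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t obj   = 0xffff888078f3cd80;  /* object start (from report) */
	uint64_t fault = 0xffff888078f3cfa0;  /* faulting address */

	printf("offset = %llu bytes\n",
	       (unsigned long long)(fault - obj));    /* prints 544 */
	printf("region end = 0x%llx\n",
	       (unsigned long long)(obj + 1808));     /* 0xffff888078f3d490 */
	return 0;
}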

The buggy address belongs to the physical page:
page:ffffea0001e3ce00 refcount:1 mapcount:0 mapping:0000000000000000 index:0xffff888078f38000 pfn:0x78f38
head:ffffea0001e3ce00 order:3 entire_mapcount:0 nr_pages_mapped:0 pincount:0
memcg:ffff8880774b6c01
flags: 0xfff00000000840(slab|head|node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xffffffff()
raw: 00fff00000000840 ffff888018b7e000 dead000000000122 0000000000000000
raw: ffff888078f38000 000000008010000c 00000001ffffffff ffff8880774b6c01
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Reclaimable, gfp_mask 0x1d20d0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_HARDWALL|__GFP_RECLAIMABLE), pid 6695, tgid 6694 (syz.3.253), ts 177978057226, free_ts 119696100925
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook+0x1cd/0x210 mm/page_alloc.c:1554
prep_new_page mm/page_alloc.c:1561 [inline]
get_page_from_freelist+0x195c/0x19f0 mm/page_alloc.c:3191
__alloc_pages+0x1e3/0x460 mm/page_alloc.c:4457
alloc_slab_page+0x5d/0x170 mm/slub.c:1876
allocate_slab mm/slub.c:2023 [inline]
new_slab+0x87/0x2e0 mm/slub.c:2076
___slab_alloc+0xc6d/0x12f0 mm/slub.c:3230
__slab_alloc mm/slub.c:3329 [inline]
__slab_alloc_node mm/slub.c:3382 [inline]
slab_alloc_node mm/slub.c:3475 [inline]
slab_alloc mm/slub.c:3493 [inline]
__kmem_cache_alloc_lru mm/slub.c:3500 [inline]
kmem_cache_alloc_lru+0x1ae/0x2e0 mm/slub.c:3516
alloc_inode_sb include/linux/fs.h:2946 [inline]
xfs_inode_alloc+0x80/0x6c0 fs/xfs/xfs_icache.c:81
xfs_iget_cache_miss fs/xfs/xfs_icache.c:611 [inline]
xfs_iget+0xa94/0x2db0 fs/xfs/xfs_icache.c:777
xfs_mountfs+0xdac/0x1d20 fs/xfs/xfs_mount.c:851
xfs_fs_fill_super+0x112f/0x13a0 fs/xfs/xfs_super.c:1738
get_tree_bdev+0x3e4/0x510 fs/super.c:1591
vfs_get_tree+0x8c/0x280 fs/super.c:1764
do_new_mount+0x24b/0xa40 fs/namespace.c:3355
do_mount fs/namespace.c:3695 [inline]
__do_sys_mount fs/namespace.c:3904 [inline]
__se_sys_mount+0x2da/0x3c0 fs/namespace.c:3881
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
page last free stack trace:
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1154 [inline]
free_unref_page_prepare+0x7ce/0x8e0 mm/page_alloc.c:2336
free_unref_page+0x32/0x2e0 mm/page_alloc.c:2429
discard_slab mm/slub.c:2122 [inline]
__unfreeze_partials+0x1cf/0x210 mm/slub.c:2662
put_cpu_partial+0x17c/0x250 mm/slub.c:2738
__slab_free+0x31d/0x410 mm/slub.c:3686
qlink_free mm/kasan/quarantine.c:166 [inline]
qlist_free_all+0x75/0xe0 mm/kasan/quarantine.c:185
kasan_quarantine_reduce+0x143/0x160 mm/kasan/quarantine.c:292
__kasan_slab_alloc+0x22/0x80 mm/kasan/common.c:305
kasan_slab_alloc include/linux/kasan.h:188 [inline]
slab_post_alloc_hook+0x6e/0x4d0 mm/slab.h:767
slab_alloc_node mm/slub.c:3485 [inline]
slab_alloc mm/slub.c:3493 [inline]
__kmem_cache_alloc_lru mm/slub.c:3500 [inline]
kmem_cache_alloc+0x11e/0x2e0 mm/slub.c:3509
vm_area_dup+0x27/0x270 kernel/fork.c:501
dup_mmap kernel/fork.c:711 [inline]
dup_mm kernel/fork.c:1692 [inline]
copy_mm+0xc08/0x1c20 kernel/fork.c:1741
copy_process+0x16d3/0x3d70 kernel/fork.c:2506
kernel_clone+0x21b/0x840 kernel/fork.c:2914
__do_sys_clone kernel/fork.c:3057 [inline]
__se_sys_clone kernel/fork.c:3041 [inline]
__x64_sys_clone+0x18c/0x1e0 kernel/fork.c:3041
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81

Memory state around the buggy address:
ffff888078f3ce80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff888078f3cf00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff888078f3cf80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff888078f3d000: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff888078f3d080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
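
For reference when reading the shadow dump: in generic KASAN, the 0xfb shadow byte marks freed slab memory (KASAN_SLAB_FREE), so the entire neighbourhood of the access is a freed slab object, consistent with the slab-use-after-free verdict.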


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot
Sep 25, 2025, 6:08:19 AM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes have not happened for a while, there is no reproducer, and there has been no activity.