[v6.6] KASAN: slab-use-after-free Read in xfs_inode_item_push


syzbot

Jan 27, 2026, 8:32:30 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: cbb31f77b879 Linux 6.6.121
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=164cb1b2580000
kernel config: https://syzkaller.appspot.com/x/.config?x=2a950bf7c0bff9f9
dashboard link: https://syzkaller.appspot.com/bug?extid=652af2b3c5569c4ab63c
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/6436cbae3604/disk-cbb31f77.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/8f3a3d318f99/vmlinux-cbb31f77.xz
kernel image: https://storage.googleapis.com/syzbot-assets/84920a2a012f/bzImage-cbb31f77.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+652af2...@syzkaller.appspotmail.com
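For example (illustrative only; the subject, Fixes: hash, and sign-off below are placeholders, not a real fix), the tag goes with the other trailers at the end of the fix commit's changelog:

    xfs: <placeholder subject describing the fix>

    <placeholder changelog text>

    Fixes: <placeholder commit hash> ("<placeholder subject>")
    Reported-by: syzbot+652af2...@syzkaller.appspotmail.com
    Signed-off-by: <placeholder name and email>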

==================================================================
BUG: KASAN: slab-use-after-free in xfs_inode_item_push+0x28a/0x2e0 fs/xfs/xfs_inode_item.c:776
Read of size 8 at addr ffff88805ebe87e0 by task xfsaild/loop5/10155

CPU: 1 PID: 10155 Comm: xfsaild/loop5 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
<TASK>
dump_stack_lvl+0x18c/0x250 lib/dump_stack.c:106
print_address_description mm/kasan/report.c:364 [inline]
print_report+0xa8/0x210 mm/kasan/report.c:468
kasan_report+0x117/0x150 mm/kasan/report.c:581
xfs_inode_item_push+0x28a/0x2e0 fs/xfs/xfs_inode_item.c:776
xfsaild_push_item fs/xfs/xfs_trans_ail.c:414 [inline]
xfsaild_push fs/xfs/xfs_trans_ail.c:486 [inline]
xfsaild+0xc84/0x25e0 fs/xfs/xfs_trans_ail.c:671
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>

Allocated by task 10120:
kasan_save_stack mm/kasan/common.c:46 [inline]
kasan_set_track+0x4e/0x70 mm/kasan/common.c:53
__kasan_slab_alloc+0x6c/0x80 mm/kasan/common.c:329
kasan_slab_alloc include/linux/kasan.h:188 [inline]
slab_post_alloc_hook+0x6e/0x4b0 mm/slab.h:767
slab_alloc_node mm/slub.c:3495 [inline]
slab_alloc mm/slub.c:3503 [inline]
__kmem_cache_alloc_lru mm/slub.c:3510 [inline]
kmem_cache_alloc+0x11a/0x2d0 mm/slub.c:3519
kmem_cache_zalloc include/linux/slab.h:711 [inline]
xfs_inode_item_init+0x33/0xc0 fs/xfs/xfs_inode_item.c:871
xfs_trans_ijoin+0xd8/0x120 fs/xfs/libxfs/xfs_trans_inode.c:36
xfs_trans_alloc_ichange+0xd8/0x530 fs/xfs/xfs_trans.c:1297
xfs_ioctl_setattr_get_trans fs/xfs/xfs_ioctl.c:1220 [inline]
xfs_fileattr_set+0x5e7/0x17d0 fs/xfs/xfs_ioctl.c:1367
vfs_fileattr_set+0x85d/0xb10 fs/ioctl.c:697
ioctl_setflags fs/ioctl.c:729 [inline]
do_vfs_ioctl+0x1449/0x1cc0 fs/ioctl.c:840
__do_sys_ioctl fs/ioctl.c:869 [inline]
__se_sys_ioctl+0x83/0x170 fs/ioctl.c:857
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x55/0xa0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2

Freed by task 3:
kasan_save_stack mm/kasan/common.c:46 [inline]
kasan_set_track+0x4e/0x70 mm/kasan/common.c:53
kasan_save_free_info+0x2e/0x50 mm/kasan/generic.c:522
____kasan_slab_free+0x126/0x1e0 mm/kasan/common.c:237
kasan_slab_free include/linux/kasan.h:164 [inline]
slab_free_hook mm/slub.c:1811 [inline]
slab_free_freelist_hook+0x130/0x1a0 mm/slub.c:1837
slab_free mm/slub.c:3830 [inline]
kmem_cache_free+0xf8/0x270 mm/slub.c:3852
xfs_inode_free_callback+0x14f/0x1c0 fs/xfs/xfs_icache.c:145
rcu_do_batch kernel/rcu/tree.c:2194 [inline]
rcu_core+0xcfb/0x1770 kernel/rcu/tree.c:2467
handle_softirqs+0x280/0x820 kernel/softirq.c:578
__do_softirq kernel/softirq.c:612 [inline]
invoke_softirq kernel/softirq.c:452 [inline]
__irq_exit_rcu+0xd3/0x190 kernel/softirq.c:661
irq_exit_rcu+0x9/0x20 kernel/softirq.c:673
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1088 [inline]
sysvec_apic_timer_interrupt+0xa4/0xc0 arch/x86/kernel/apic/apic.c:1088
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:687

The buggy address belongs to the object at ffff88805ebe87b0
which belongs to the cache xfs_ili of size 264
The buggy address is located 48 bytes inside of
freed 264-byte region [ffff88805ebe87b0, ffff88805ebe88b8)

The buggy address belongs to the physical page:
page:ffffea00017afa00 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x5ebe8
flags: 0xfff00000000800(slab|node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xffffffff()
raw: 00fff00000000800 ffff888018fbe280 dead000000000122 0000000000000000
raw: 0000000000000000 00000000800c000c 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Reclaimable, gfp_mask 0x112c50(GFP_NOFS|__GFP_NOWARN|__GFP_NORETRY|__GFP_HARDWALL|__GFP_RECLAIMABLE), pid 5914, tgid 5913 (syz.0.26), ts 95807326330, free_ts 27432984022
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook+0x1c1/0x200 mm/page_alloc.c:1554
prep_new_page mm/page_alloc.c:1561 [inline]
get_page_from_freelist+0x1951/0x19e0 mm/page_alloc.c:3191
__alloc_pages+0x1f0/0x460 mm/page_alloc.c:4457
alloc_slab_page+0x5d/0x160 mm/slub.c:1881
allocate_slab mm/slub.c:2028 [inline]
new_slab+0x87/0x2d0 mm/slub.c:2081
___slab_alloc+0xc5d/0x12f0 mm/slub.c:3253
__slab_alloc mm/slub.c:3339 [inline]
__slab_alloc_node mm/slub.c:3392 [inline]
slab_alloc_node mm/slub.c:3485 [inline]
slab_alloc mm/slub.c:3503 [inline]
__kmem_cache_alloc_lru mm/slub.c:3510 [inline]
kmem_cache_alloc+0x1b3/0x2d0 mm/slub.c:3519
kmem_cache_zalloc include/linux/slab.h:711 [inline]
xfs_inode_item_init+0x33/0xc0 fs/xfs/xfs_inode_item.c:871
xfs_trans_ijoin+0xd8/0x120 fs/xfs/libxfs/xfs_trans_inode.c:36
xfs_init_new_inode+0xacb/0xd70 fs/xfs/xfs_inode.c:901
xfs_qm_qino_alloc+0x4a4/0x990 fs/xfs/xfs_qm.c:790
xfs_qm_init_quotainos+0x4a9/0x6d0 fs/xfs/xfs_qm.c:1576
xfs_qm_init_quotainfo+0x11b/0x11a0 fs/xfs/xfs_qm.c:643
xfs_qm_mount_quotas+0xa0/0x600 fs/xfs/xfs_qm.c:1460
xfs_mountfs+0x165b/0x1d40 fs/xfs/xfs_mount.c:962
xfs_fs_fill_super+0x112f/0x13a0 fs/xfs/xfs_super.c:1747
page last free stack trace:
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1154 [inline]
free_unref_page_prepare+0x7b2/0x8c0 mm/page_alloc.c:2336
free_unref_page+0x32/0x2e0 mm/page_alloc.c:2429
free_contig_range+0xa1/0x150 mm/page_alloc.c:6369
destroy_args+0x80/0x850 mm/debug_vm_pgtable.c:1015
debug_vm_pgtable+0x411/0x440 mm/debug_vm_pgtable.c:1400
do_one_initcall+0x242/0x790 init/main.c:1250
do_initcall_level+0x137/0x1f0 init/main.c:1312
do_initcalls+0x69/0xd0 init/main.c:1328
kernel_init_freeable+0x3ed/0x580 init/main.c:1565
kernel_init+0x1d/0x1c0 init/main.c:1455
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293

Memory state around the buggy address:
ffff88805ebe8680: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88805ebe8700: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fc fc
>ffff88805ebe8780: fc fc fc fc fc fc fa fb fb fb fb fb fb fb fb fb
^
ffff88805ebe8800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88805ebe8880: fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc fb
==================================================================
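As context for this class of report: the push worker is reading 8 bytes from an xfs_inode_log_item whose backing slab object was already freed from the RCU callback xfs_inode_free_callback, i.e. a read-after-free through a stale pointer still reachable from the AIL. The sketch below is not the XFS code; it is a minimal, self-contained userspace analogue of that freed-object read (all names are invented for illustration), which AddressSanitizer flags as heap-use-after-free much as KASAN does above.

/* Build: cc -g -fsanitize=address uaf-demo.c -o uaf-demo && ./uaf-demo
 * Illustrative only: "log_item" and "push_item" are made-up names, not XFS APIs.
 */
#include <stdio.h>
#include <stdlib.h>

struct log_item {
	unsigned long li_flags;
	void *li_owner;        /* stands in for the field the push path dereferences */
};

/* Analogue of a push routine reading a field of an item it was handed earlier. */
static void push_item(struct log_item *lip)
{
	/* If lip was freed in the meantime, this 8-byte read is the
	 * "use-after-free Read of size 8" a sanitizer reports. */
	printf("owner = %p\n", lip->li_owner);
}

int main(void)
{
	struct log_item *lip = calloc(1, sizeof(*lip));
	if (!lip)
		return 1;

	free(lip);        /* analogue of the RCU callback freeing the object   */
	push_item(lip);   /* analogue of xfsaild pushing the now-stale pointer */
	return 0;
}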


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup