Hello,
syzbot found the following issue on:
HEAD commit:    cd9b81672742 Linux 6.1.161
git tree:       linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=12b14802580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=f0605c5af04d7603
dashboard link: https://syzkaller.appspot.com/bug?extid=1cbccab0531bf8c11371
compiler:       Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image:   https://storage.googleapis.com/syzbot-assets/7c7d6fd2ef9f/disk-cd9b8167.raw.xz
vmlinux:      https://storage.googleapis.com/syzbot-assets/544b7fd5d4e0/vmlinux-cd9b8167.xz
kernel image: https://storage.googleapis.com/syzbot-assets/3454f19d3753/bzImage-cd9b8167.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+1cbcca...@syzkaller.appspotmail.com
loop7: detected capacity change from 0 to 1024
==================================================================
BUG: KASAN: use-after-free in hfsplus_btree_open+0x92e/0xd30 fs/hfsplus/btree.c:155
Read of size 4 at addr ffff8880714e196c by task syz.7.8566/26823
CPU: 1 PID: 26823 Comm: syz.7.8566 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
<TASK>
dump_stack_lvl+0x188/0x24e lib/dump_stack.c:106
print_address_description mm/kasan/report.c:316 [inline]
print_report+0xa8/0x210 mm/kasan/report.c:420
kasan_report+0x10b/0x140 mm/kasan/report.c:524
hfsplus_btree_open+0x92e/0xd30 fs/hfsplus/btree.c:155
hfsplus_fill_super+0xa67/0x1e70 fs/hfsplus/super.c:486
mount_bdev+0x287/0x3c0 fs/super.c:1443
legacy_get_tree+0xe6/0x180 fs/fs_context.c:632
vfs_get_tree+0x88/0x270 fs/super.c:1573
do_new_mount+0x24a/0xa40 fs/namespace.c:3078
do_mount fs/namespace.c:3421 [inline]
__do_sys_mount fs/namespace.c:3629 [inline]
__se_sys_mount+0x2e3/0x3d0 fs/namespace.c:3606
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f270439c14a
Code: 48 c7 c2 e8 ff ff ff f7 d8 64 89 02 b8 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f27051d7e58 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007f27051d7ee0 RCX: 00007f270439c14a
RDX: 0000200000000080 RSI: 0000200000000100 RDI: 00007f27051d7ea0
RBP: 0000200000000080 R08: 00007f27051d7ee0 R09: 0000000000800002
R10: 0000000000800002 R11: 0000000000000246 R12: 0000200000000100
R13: 00007f27051d7ea0 R14: 00000000000006b3 R15: 0000200000000500
</TASK>
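As a quick cross-check on the register dump above: ORIG_RAX 0xa5 is syscall number 165, which is mount(2) on x86-64, consistent with the mount path shown in the call trace.

```python
# ORIG_RAX from the register dump; 165 is __NR_mount on x86-64.
orig_rax = 0x00000000000000a5
print(orig_rax)  # 165
```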
Allocated by task 18458:
kasan_save_stack mm/kasan/common.c:46 [inline]
kasan_set_track+0x4b/0x70 mm/kasan/common.c:53
__kasan_slab_alloc+0x6b/0x80 mm/kasan/common.c:329
kasan_slab_alloc include/linux/kasan.h:201 [inline]
slab_post_alloc_hook+0x4b/0x480 mm/slab.h:737
slab_alloc_node mm/slub.c:3359 [inline]
slab_alloc mm/slub.c:3367 [inline]
__kmem_cache_alloc_lru mm/slub.c:3374 [inline]
kmem_cache_alloc_lru+0x11a/0x2e0 mm/slub.c:3390
alloc_inode_sb include/linux/fs.h:3245 [inline]
f2fs_alloc_inode+0x151/0x610 fs/f2fs/super.c:1419
alloc_inode fs/inode.c:261 [inline]
iget_locked+0x1a9/0x830 fs/inode.c:1373
f2fs_iget+0x52/0x4b30 fs/f2fs/inode.c:489
f2fs_fill_super+0x3a83/0x6b40 fs/f2fs/super.c:4312
mount_bdev+0x287/0x3c0 fs/super.c:1443
legacy_get_tree+0xe6/0x180 fs/fs_context.c:632
vfs_get_tree+0x88/0x270 fs/super.c:1573
do_new_mount+0x24a/0xa40 fs/namespace.c:3078
do_mount fs/namespace.c:3421 [inline]
__do_sys_mount fs/namespace.c:3629 [inline]
__se_sys_mount+0x2e3/0x3d0 fs/namespace.c:3606
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2
Last potentially related work creation:
kasan_save_stack+0x3a/0x60 mm/kasan/common.c:46
__kasan_record_aux_stack+0xb2/0xc0 mm/kasan/generic.c:486
call_rcu+0x14f/0x990 kernel/rcu/tree.c:2849
destroy_inode fs/inode.c:316 [inline]
evict+0x834/0x8d0 fs/inode.c:720
f2fs_fill_super+0x52b9/0x6b40 fs/f2fs/super.c:4618
mount_bdev+0x287/0x3c0 fs/super.c:1443
legacy_get_tree+0xe6/0x180 fs/fs_context.c:632
vfs_get_tree+0x88/0x270 fs/super.c:1573
do_new_mount+0x24a/0xa40 fs/namespace.c:3078
do_mount fs/namespace.c:3421 [inline]
__do_sys_mount fs/namespace.c:3629 [inline]
__se_sys_mount+0x2e3/0x3d0 fs/namespace.c:3606
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2
Second to last potentially related work creation:
kasan_save_stack+0x3a/0x60 mm/kasan/common.c:46
__kasan_record_aux_stack+0xb2/0xc0 mm/kasan/generic.c:486
call_rcu+0x14f/0x990 kernel/rcu/tree.c:2849
destroy_inode fs/inode.c:316 [inline]
evict+0x834/0x8d0 fs/inode.c:720
f2fs_put_super+0x679/0xbf0 fs/f2fs/super.c:1653
generic_shutdown_super+0x130/0x340 fs/super.c:501
kill_block_super+0x7c/0xe0 fs/super.c:1470
kill_f2fs_super+0x309/0x3c0 fs/f2fs/super.c:4694
deactivate_locked_super+0x93/0xf0 fs/super.c:332
cleanup_mnt+0x42c/0x4b0 fs/namespace.c:1191
task_work_run+0x1d0/0x260 kernel/task_work.c:203
resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
exit_to_user_mode_loop+0xe6/0x110 kernel/entry/common.c:177
exit_to_user_mode_prepare+0xee/0x180 kernel/entry/common.c:210
__syscall_exit_to_user_mode_work kernel/entry/common.c:292 [inline]
syscall_exit_to_user_mode+0x16/0x40 kernel/entry/common.c:303
do_syscall_64+0x58/0xa0 arch/x86/entry/common.c:82
entry_SYSCALL_64_after_hwframe+0x68/0xd2
The buggy address belongs to the object at ffff8880714e1210
which belongs to the cache f2fs_inode_cache of size 2184
The buggy address is located 1884 bytes inside of
2184-byte region [ffff8880714e1210, ffff8880714e1a98)
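The offset arithmetic in the lines above can be verified directly from the addresses KASAN prints (a reader-side sanity check, not part of the report):

```python
# Check that the faulting address falls 1884 bytes into the
# 2184-byte f2fs_inode_cache object, as the report states.
obj_start = 0xffff8880714e1210
obj_end   = 0xffff8880714e1a98
fault     = 0xffff8880714e196c

obj_size = obj_end - obj_start
offset   = fault - obj_start

print(obj_size)  # 2184
print(offset)    # 1884
```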
The buggy address belongs to the physical page:
page:ffffea0001c53800 refcount:1 mapcount:0 mapping:0000000000000000 index:0xffff8880714e5a50 pfn:0x714e0
head:ffffea0001c53800 order:3 compound_mapcount:0 compound_pincount:0
memcg:ffff88805542f201
flags: 0xfff00000010200(slab|head|node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000010200 0000000000000000 dead000000000122 ffff88801fa99dc0
raw: ffff8880714e5a50 00000000800e0006 00000001ffffffff ffff88805542f201
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Reclaimable, gfp_mask 0x1d2050(__GFP_IO|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_HARDWALL|__GFP_RECLAIMABLE), pid 4797, tgid 4796 (syz.0.171), ts 98915041985, free_ts 98061468498
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook+0x173/0x1a0 mm/page_alloc.c:2532
prep_new_page mm/page_alloc.c:2539 [inline]
get_page_from_freelist+0x1a1e/0x1ab0 mm/page_alloc.c:4328
__alloc_pages+0x1ec/0x4f0 mm/page_alloc.c:5614
alloc_slab_page+0x5d/0x160 mm/slub.c:1799
allocate_slab mm/slub.c:1944 [inline]
new_slab+0x87/0x2c0 mm/slub.c:1997
___slab_alloc+0xbc6/0x1240 mm/slub.c:3154
__slab_alloc mm/slub.c:3240 [inline]
slab_alloc_node mm/slub.c:3325 [inline]
slab_alloc mm/slub.c:3367 [inline]
__kmem_cache_alloc_lru mm/slub.c:3374 [inline]
kmem_cache_alloc_lru+0x1ae/0x2e0 mm/slub.c:3390
alloc_inode_sb include/linux/fs.h:3245 [inline]
f2fs_alloc_inode+0x151/0x610 fs/f2fs/super.c:1419
alloc_inode fs/inode.c:261 [inline]
iget_locked+0x1a9/0x830 fs/inode.c:1373
f2fs_iget+0x52/0x4b30 fs/f2fs/inode.c:489
f2fs_fill_super+0x4487/0x6b40 fs/f2fs/super.c:4414
mount_bdev+0x287/0x3c0 fs/super.c:1443
legacy_get_tree+0xe6/0x180 fs/fs_context.c:632
vfs_get_tree+0x88/0x270 fs/super.c:1573
do_new_mount+0x24a/0xa40 fs/namespace.c:3078
do_mount fs/namespace.c:3421 [inline]
__do_sys_mount fs/namespace.c:3629 [inline]
__se_sys_mount+0x2e3/0x3d0 fs/namespace.c:3606
page last free stack trace:
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1459 [inline]
free_pcp_prepare mm/page_alloc.c:1509 [inline]
free_unref_page_prepare+0x8b4/0x9a0 mm/page_alloc.c:3384
free_unref_page+0x2e/0x3f0 mm/page_alloc.c:3479
free_slab mm/slub.c:2036 [inline]
discard_slab mm/slub.c:2042 [inline]
__unfreeze_partials+0x1a5/0x200 mm/slub.c:2591
put_cpu_partial+0x17c/0x250 mm/slub.c:2667
qlink_free mm/kasan/quarantine.c:168 [inline]
qlist_free_all+0x76/0xe0 mm/kasan/quarantine.c:187
kasan_quarantine_reduce+0x144/0x160 mm/kasan/quarantine.c:294
__kasan_slab_alloc+0x1e/0x80 mm/kasan/common.c:306
kasan_slab_alloc include/linux/kasan.h:201 [inline]
slab_post_alloc_hook+0x4b/0x480 mm/slab.h:737
slab_alloc_node mm/slub.c:3359 [inline]
kmem_cache_alloc_node+0x14d/0x320 mm/slub.c:3404
__alloc_skb+0xfc/0x7e0 net/core/skbuff.c:505
alloc_skb include/linux/skbuff.h:1271 [inline]
alloc_skb_with_frags+0xa7/0x710 net/core/skbuff.c:6164
sock_alloc_send_pskb+0x87f/0x9a0 net/core/sock.c:2755
unix_dgram_sendmsg+0x539/0x16e0 net/unix/af_unix.c:1945
sock_sendmsg_nosec net/socket.c:718 [inline]
__sock_sendmsg net/socket.c:730 [inline]
____sys_sendmsg+0x5be/0x970 net/socket.c:2518
___sys_sendmsg+0x2a2/0x360 net/socket.c:2572
__sys_sendmmsg+0x2c3/0x510 net/socket.c:2658
Memory state around the buggy address:
 ffff8880714e1800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8880714e1880: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff8880714e1900: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                                          ^
 ffff8880714e1980: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8880714e1a00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
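To read the shadow dump above: generic KASAN keeps one shadow byte per 8-byte granule of memory, and 0xfb marks a freed slab object, which matches the use-after-free verdict. The caret position on the ">" row can be recomputed from the faulting address (a sketch of the shadow-granule arithmetic, not syzbot output):

```python
# One shadow byte covers an 8-byte granule; find which "fb" on the
# marked dump row corresponds to the faulting address.
fault     = 0xffff8880714e196c
row_start = 0xffff8880714e1900  # address of the ">" row in the dump

granule = (fault - row_start) // 8
print(granule)  # 13 -> the 14th shadow byte on that row
```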
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup