Hello,
syzbot found the following issue on:
HEAD commit: 68efe5a6c16a Linux 5.15.197
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=16caaa3a580000
kernel config: https://syzkaller.appspot.com/x/.config?x=7e6ed99963d6ee1d
dashboard link: https://syzkaller.appspot.com/bug?extid=f898b1c69dd9e471ab17
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/900f9b9bd850/disk-68efe5a6.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/1e089a5019a6/vmlinux-68efe5a6.xz
kernel image: https://storage.googleapis.com/syzbot-assets/b319f477b907/bzImage-68efe5a6.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+f898b1...@syzkaller.appspotmail.com
==================================================================
BUG: KASAN: use-after-free in ocfs2_fault+0xd3/0x3c0 fs/ocfs2/mmap.c:41
Read of size 8 at addr ffff888059d890a0 by task syz.0.50/4590
CPU: 0 PID: 4590 Comm: syz.0.50 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
<TASK>
dump_stack_lvl+0x168/0x230 lib/dump_stack.c:106
print_address_description+0x60/0x2d0 mm/kasan/report.c:248
__kasan_report mm/kasan/report.c:434 [inline]
kasan_report+0xdf/0x130 mm/kasan/report.c:451
ocfs2_fault+0xd3/0x3c0 fs/ocfs2/mmap.c:41
__do_fault+0x141/0x330 mm/memory.c:3928
do_read_fault mm/memory.c:4264 [inline]
do_fault mm/memory.c:4392 [inline]
handle_pte_fault mm/memory.c:4650 [inline]
__handle_mm_fault mm/memory.c:4785 [inline]
handle_mm_fault+0x2946/0x43b0 mm/memory.c:4883
faultin_page mm/gup.c:976 [inline]
__get_user_pages+0x93e/0x11c0 mm/gup.c:1197
populate_vma_page_range+0x213/0x290 mm/gup.c:1529
__mm_populate+0x26f/0x3a0 mm/gup.c:1638
mm_populate include/linux/mm.h:2646 [inline]
vm_mmap_pgoff+0x203/0x2b0 mm/util.c:556
ksys_mmap_pgoff+0x542/0x780 mm/mmap.c:1635
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7fd09226a749
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fd0904d1038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007fd0924c0fa0 RCX: 00007fd09226a749
RDX: 0000000000000002 RSI: 0000000000b36000 RDI: 0000200000000000
RBP: 00007fd0922eef91 R08: 0000000000000007 R09: 0000000000000000
R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fd0924c1038 R14: 00007fd0924c0fa0 R15: 00007ffd2eaa7948
</TASK>
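
For context, here is my best-effort paraphrase of the fault handler named above, based on the 5.15 fs/ocfs2/mmap.c sources from memory (not copied verbatim from this tree, so treat exact line numbers and argument details as approximate):

static vm_fault_t ocfs2_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;	/* can be stale by the time we trace */
	sigset_t oldset;
	vm_fault_t ret;

	ocfs2_block_signals(&oldset);
	ret = filemap_fault(vmf);
	ocfs2_unblock_signals(&oldset);

	/* The tracepoint argument dereferences the saved vma pointer
	 * (vma->vm_file->...); an 8-byte read of a freed vm_area_struct
	 * around here would match the KASAN hit above. */
	trace_ocfs2_fault(OCFS2_I(vma->vm_file->f_mapping->host)->ip_blkno,
			  vma, vmf->page, vmf->pgoff);
	return ret;
}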
Allocated by task 4590:
kasan_save_stack mm/kasan/common.c:38 [inline]
kasan_set_track mm/kasan/common.c:46 [inline]
set_alloc_info mm/kasan/common.c:434 [inline]
__kasan_slab_alloc+0x9c/0xd0 mm/kasan/common.c:467
kasan_slab_alloc include/linux/kasan.h:254 [inline]
slab_post_alloc_hook+0x4c/0x380 mm/slab.h:519
slab_alloc_node mm/slub.c:3225 [inline]
slab_alloc mm/slub.c:3233 [inline]
kmem_cache_alloc+0x100/0x290 mm/slub.c:3238
vm_area_alloc+0x20/0xe0 kernel/fork.c:350
__mmap_region mm/mmap.c:1782 [inline]
mmap_region+0xac7/0x1660 mm/mmap.c:2933
do_mmap+0x81f/0xea0 mm/mmap.c:1586
vm_mmap_pgoff+0x1b2/0x2b0 mm/util.c:551
ksys_mmap_pgoff+0x542/0x780 mm/mmap.c:1635
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
Freed by task 4592:
kasan_save_stack mm/kasan/common.c:38 [inline]
kasan_set_track+0x4b/0x70 mm/kasan/common.c:46
kasan_set_free_info+0x1f/0x40 mm/kasan/generic.c:360
____kasan_slab_free+0xd5/0x110 mm/kasan/common.c:366
kasan_slab_free include/linux/kasan.h:230 [inline]
slab_free_hook mm/slub.c:1710 [inline]
slab_free_freelist_hook+0xea/0x170 mm/slub.c:1736
slab_free mm/slub.c:3504 [inline]
kmem_cache_free+0x8f/0x210 mm/slub.c:3520
remove_vma mm/mmap.c:188 [inline]
remove_vma_list mm/mmap.c:2628 [inline]
__do_munmap+0xc54/0xdc0 mm/mmap.c:2902
do_munmap mm/mmap.c:2910 [inline]
munmap_vma_range mm/mmap.c:603 [inline]
__mmap_region mm/mmap.c:1757 [inline]
mmap_region+0x8bb/0x1660 mm/mmap.c:2933
do_mmap+0x81f/0xea0 mm/mmap.c:1586
vm_mmap_pgoff+0x1b2/0x2b0 mm/util.c:551
ksys_mmap_pgoff+0x542/0x780 mm/mmap.c:1635
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
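
There is no reproducer yet, so the following is only an illustration of the syscall-level interleaving the two traces suggest: task A mmap()s with MAP_POPULATE and starts faulting pages in via __mm_populate(), while task B mmap()s over the same range with MAP_FIXED, which unmaps the old VMA and frees its vm_area_struct while task A's fault path may still hold a pointer to it. Everything below (file path, address, length, prot and flags) is hypothetical, loosely modeled on the register dump above; it is not a reproducer.

#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>

#define ADDR ((void *)0x200000000000UL)	/* hypothetical fixed address (RDI above) */
#define LEN  0xb36000UL			/* hypothetical length (RSI above) */

static int fd;				/* fd on a file on an ocfs2 mount (assumed) */

static void *task_b(void *arg)
{
	/* Maps over the same range: munmap_vma_range()/remove_vma() free the
	 * original vm_area_struct, as in the "Freed by task 4592" trace. */
	mmap(ADDR, LEN, PROT_READ | PROT_WRITE,
	     MAP_SHARED | MAP_FIXED | MAP_POPULATE, fd, 0);
	return NULL;
}

int main(void)
{
	pthread_t t;

	fd = open("/mnt/ocfs2/testfile", O_RDWR);	/* hypothetical path */
	if (fd < 0)
		return 1;
	pthread_create(&t, NULL, task_b, NULL);

	/* Task A: MAP_POPULATE makes vm_mmap_pgoff() call mm_populate() and
	 * fault the pages in through ocfs2_fault(), as in the first trace. */
	mmap(ADDR, LEN, PROT_READ | PROT_WRITE,
	     MAP_SHARED | MAP_FIXED | MAP_POPULATE, fd, 0);

	pthread_join(t, NULL);
	return 0;
}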
The buggy address belongs to the object at ffff888059d89000
which belongs to the cache vm_area_struct of size 200
The buggy address is located 160 bytes inside of
200-byte region [ffff888059d89000, ffff888059d890c8)
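(Sanity check on the offsets: 0xffff888059d890a0 - 0xffff888059d89000 = 0xa0 = 160, i.e. an 8-byte read of a pointer-sized field 160 bytes into the 200-byte vm_area_struct; exactly which field sits at that offset depends on the config, so I won't guess here.)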
The buggy address belongs to the page:
page:ffffea0001676240 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x59d89
memcg:ffff8880239de301
flags: 0xfff00000000200(slab|node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000000200 0000000000000000 dead000000000122 ffff888140007a00
raw: 0000000000000000 00000000800f000f 00000001ffffffff ffff8880239de301
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x112cc0(GFP_USER|__GFP_NOWARN|__GFP_NORETRY), pid 4590, ts 75160712029, free_ts 75160571279
prep_new_page mm/page_alloc.c:2426 [inline]
get_page_from_freelist+0x1b77/0x1c60 mm/page_alloc.c:4192
__alloc_pages+0x1e1/0x470 mm/page_alloc.c:5487
alloc_slab_page mm/slub.c:1780 [inline]
allocate_slab mm/slub.c:1917 [inline]
new_slab+0xc0/0x4b0 mm/slub.c:1980
___slab_alloc+0x81e/0xdf0 mm/slub.c:3013
__slab_alloc mm/slub.c:3100 [inline]
slab_alloc_node mm/slub.c:3191 [inline]
slab_alloc mm/slub.c:3233 [inline]
kmem_cache_alloc+0x195/0x290 mm/slub.c:3238
vm_area_alloc+0x20/0xe0 kernel/fork.c:350
__mmap_region mm/mmap.c:1782 [inline]
mmap_region+0xac7/0x1660 mm/mmap.c:2933
do_mmap+0x81f/0xea0 mm/mmap.c:1586
vm_mmap_pgoff+0x1b2/0x2b0 mm/util.c:551
ksys_mmap_pgoff+0x542/0x780 mm/mmap.c:1635
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
page last free stack trace:
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1340 [inline]
free_pcp_prepare mm/page_alloc.c:1391 [inline]
free_unref_page_prepare+0x637/0x6c0 mm/page_alloc.c:3317
free_unref_page_list+0x122/0x7e0 mm/page_alloc.c:3433
release_pages+0x184b/0x1bb0 mm/swap.c:963
tlb_batch_pages_flush mm/mmu_gather.c:49 [inline]
tlb_flush_mmu_free mm/mmu_gather.c:240 [inline]
tlb_flush_mmu mm/mmu_gather.c:247 [inline]
tlb_finish_mmu+0x164/0x2e0 mm/mmu_gather.c:338
unmap_region+0x315/0x360 mm/mmap.c:2669
__do_munmap+0x9d3/0xdc0 mm/mmap.c:2899
do_munmap mm/mmap.c:2910 [inline]
munmap_vma_range mm/mmap.c:603 [inline]
__mmap_region mm/mmap.c:1757 [inline]
mmap_region+0x8bb/0x1660 mm/mmap.c:2933
do_mmap+0x81f/0xea0 mm/mmap.c:1586
vm_mmap_pgoff+0x1b2/0x2b0 mm/util.c:551
ksys_mmap_pgoff+0x542/0x780 mm/mmap.c:1635
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
Memory state around the buggy address:
ffff888059d88f80: fb fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
ffff888059d89000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff888059d89080: fb fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc
^
ffff888059d89100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
ffff888059d89180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup