[v6.6] KASAN: slab-use-after-free Read in ocfs2_fault


syzbot
Sep 17, 2025, 3:37:38 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 60a9e718726f Linux 6.6.106
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=15059f62580000
kernel config: https://syzkaller.appspot.com/x/.config?x=12606d4b8832c7e4
dashboard link: https://syzkaller.appspot.com/bug?extid=7bf539c666eded139bd7
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/eca27e056a5a/disk-60a9e718.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/bc64d4eeb7f6/vmlinux-60a9e718.xz
kernel image: https://storage.googleapis.com/syzbot-assets/a345041561ac/bzImage-60a9e718.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+7bf539...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: slab-use-after-free in ocfs2_fault+0xd7/0x3d0 fs/ocfs2/mmap.c:41
Read of size 8 at addr ffff88806b02d788 by task syz.3.83/6378

CPU: 1 PID: 6378 Comm: syz.3.83 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
print_address_description mm/kasan/report.c:364 [inline]
print_report+0xac/0x220 mm/kasan/report.c:468
kasan_report+0x117/0x150 mm/kasan/report.c:581
ocfs2_fault+0xd7/0x3d0 fs/ocfs2/mmap.c:41
__do_fault+0x13b/0x4e0 mm/memory.c:4243
do_read_fault mm/memory.c:4616 [inline]
do_fault mm/memory.c:4753 [inline]
do_pte_missing mm/memory.c:3688 [inline]
handle_pte_fault mm/memory.c:5025 [inline]
__handle_mm_fault mm/memory.c:5166 [inline]
handle_mm_fault+0x3886/0x4920 mm/memory.c:5331
faultin_page mm/gup.c:868 [inline]
__get_user_pages+0x5ea/0x1470 mm/gup.c:1167
populate_vma_page_range+0x2b6/0x370 mm/gup.c:1593
__mm_populate+0x24c/0x380 mm/gup.c:1696
mm_populate include/linux/mm.h:3328 [inline]
vm_mmap_pgoff+0x2e7/0x400 mm/util.c:561
ksys_mmap_pgoff+0x520/0x700 mm/mmap.c:1431
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f6d1538eba9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f6d16205038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f6d155d6090 RCX: 00007f6d1538eba9
RDX: 00000000027ffff7 RSI: 0000000000600000 RDI: 0000200000000000
RBP: 00007f6d15411e19 R08: 000000000000000c R09: 0000000000000000
R10: 0000000004012011 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f6d155d6128 R14: 00007f6d155d6090 R15: 00007fffea8fcdc8
</TASK>

Allocated by task 6378:
kasan_save_stack mm/kasan/common.c:45 [inline]
kasan_set_track+0x4e/0x70 mm/kasan/common.c:52
__kasan_slab_alloc+0x6c/0x80 mm/kasan/common.c:328
kasan_slab_alloc include/linux/kasan.h:188 [inline]
slab_post_alloc_hook+0x6e/0x4d0 mm/slab.h:767
slab_alloc_node mm/slub.c:3495 [inline]
slab_alloc mm/slub.c:3503 [inline]
__kmem_cache_alloc_lru mm/slub.c:3510 [inline]
kmem_cache_alloc+0x11e/0x2e0 mm/slub.c:3519
vm_area_alloc+0x24/0x1d0 kernel/fork.c:486
__mmap_region mm/mmap.c:2770 [inline]
mmap_region+0xc8a/0x2020 mm/mmap.c:2941
do_mmap+0x92f/0x10a0 mm/mmap.c:1385
vm_mmap_pgoff+0x1c0/0x400 mm/util.c:556
ksys_mmap_pgoff+0x520/0x700 mm/mmap.c:1431
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

Freed by task 6391:
kasan_save_stack mm/kasan/common.c:45 [inline]
kasan_set_track+0x4e/0x70 mm/kasan/common.c:52
kasan_save_free_info+0x2e/0x50 mm/kasan/generic.c:522
____kasan_slab_free+0x126/0x1e0 mm/kasan/common.c:236
kasan_slab_free include/linux/kasan.h:164 [inline]
slab_free_hook mm/slub.c:1811 [inline]
slab_free_freelist_hook+0x130/0x1b0 mm/slub.c:1837
slab_free mm/slub.c:3830 [inline]
kmem_cache_free+0xf8/0x280 mm/slub.c:3852
rcu_do_batch kernel/rcu/tree.c:2194 [inline]
rcu_core+0xcc4/0x1720 kernel/rcu/tree.c:2467
handle_softirqs+0x280/0x820 kernel/softirq.c:578
__do_softirq kernel/softirq.c:612 [inline]
invoke_softirq kernel/softirq.c:452 [inline]
__irq_exit_rcu+0xc7/0x190 kernel/softirq.c:661
irq_exit_rcu+0x9/0x20 kernel/softirq.c:673
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1088 [inline]
sysvec_apic_timer_interrupt+0xa4/0xc0 arch/x86/kernel/apic/apic.c:1088
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:687

Last potentially related work creation:
kasan_save_stack+0x3e/0x60 mm/kasan/common.c:45
__kasan_record_aux_stack+0xaf/0xc0 mm/kasan/generic.c:492
__call_rcu_common kernel/rcu/tree.c:2721 [inline]
call_rcu+0x158/0x930 kernel/rcu/tree.c:2837
remove_vma mm/mmap.c:148 [inline]
remove_mt mm/mmap.c:2323 [inline]
do_vmi_align_munmap+0x1403/0x1660 mm/mmap.c:2596
do_vmi_munmap+0x252/0x2d0 mm/mmap.c:2660
__vm_munmap+0x193/0x3c0 mm/mmap.c:2961
__do_sys_munmap mm/mmap.c:2978 [inline]
__se_sys_munmap mm/mmap.c:2975 [inline]
__x64_sys_munmap+0x60/0x70 mm/mmap.c:2975
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

The buggy address belongs to the object at ffff88806b02d700
which belongs to the cache vm_area_struct of size 192
The buggy address is located 136 bytes inside of
freed 192-byte region [ffff88806b02d700, ffff88806b02d7c0)

The buggy address belongs to the physical page:
page:ffffea0001ac0b40 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x6b02d
memcg:ffff888075921101
flags: 0xfff00000000800(slab|node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xffffffff()
raw: 00fff00000000800 ffff888019a4cb40 dead000000000122 0000000000000000
raw: 0000000000000000 0000000000100010 00000001ffffffff ffff888075921101
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x112cc0(GFP_USER|__GFP_NOWARN|__GFP_NORETRY), pid 6395, tgid 6395 (sed), ts 106721451261, free_ts 106671261377
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook+0x1cd/0x210 mm/page_alloc.c:1554
prep_new_page mm/page_alloc.c:1561 [inline]
get_page_from_freelist+0x195c/0x19f0 mm/page_alloc.c:3191
__alloc_pages+0x1e3/0x460 mm/page_alloc.c:4457
alloc_slab_page+0x5d/0x170 mm/slub.c:1881
allocate_slab mm/slub.c:2028 [inline]
new_slab+0x87/0x2e0 mm/slub.c:2081
___slab_alloc+0xc6d/0x1300 mm/slub.c:3253
__slab_alloc mm/slub.c:3339 [inline]
__slab_alloc_node mm/slub.c:3392 [inline]
slab_alloc_node mm/slub.c:3485 [inline]
slab_alloc mm/slub.c:3503 [inline]
__kmem_cache_alloc_lru mm/slub.c:3510 [inline]
kmem_cache_alloc+0x1b7/0x2e0 mm/slub.c:3519
vm_area_dup+0x27/0x270 kernel/fork.c:501
__split_vma+0x19f/0xc00 mm/mmap.c:2373
mprotect_fixup+0xaad/0xc90 mm/mprotect.c:650
do_mprotect_pkey+0x76e/0xc30 mm/mprotect.c:819
__do_sys_mprotect mm/mprotect.c:840 [inline]
__se_sys_mprotect mm/mprotect.c:837 [inline]
__x64_sys_mprotect+0x80/0x90 mm/mprotect.c:837
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
page last free stack trace:
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1154 [inline]
free_unref_page_prepare+0x7ce/0x8e0 mm/page_alloc.c:2336
free_unref_page_list+0xbe/0x860 mm/page_alloc.c:2475
release_pages+0x1fa0/0x2220 mm/swap.c:1022
tlb_batch_pages_flush mm/mmu_gather.c:98 [inline]
tlb_flush_mmu_free mm/mmu_gather.c:293 [inline]
tlb_flush_mmu+0x368/0x4f0 mm/mmu_gather.c:300
tlb_finish_mmu+0xc3/0x1d0 mm/mmu_gather.c:392
exit_mmap+0x3f0/0xb50 mm/mmap.c:3315
__mmput+0x118/0x3c0 kernel/fork.c:1355
exec_mmap+0x584/0x640 fs/exec.c:1036
begin_new_exec+0xa2e/0x1eb0 fs/exec.c:1294
load_elf_binary+0x9a0/0x2700 fs/binfmt_elf.c:1027
search_binary_handler fs/exec.c:1775 [inline]
exec_binprm fs/exec.c:1817 [inline]
bprm_execve+0xaeb/0x16f0 fs/exec.c:1892
do_execveat_common+0x51b/0x6c0 fs/exec.c:1998
do_execve fs/exec.c:2072 [inline]
__do_sys_execve fs/exec.c:2148 [inline]
__se_sys_execve fs/exec.c:2143 [inline]
__x64_sys_execve+0x92/0xa0 fs/exec.c:2143
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

Memory state around the buggy address:
ffff88806b02d680: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
ffff88806b02d700: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff88806b02d780: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
^
ffff88806b02d800: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88806b02d880: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
==================================================================
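
The read is at fs/ocfs2/mmap.c:41, 136 bytes into a freed `vm_area_struct`, with the free queued by `call_rcu()` from a concurrent `munmap()` (see the "Last potentially related work creation" stack). A plausible reading of the report, sketched below as a paraphrase of the 6.6-era `ocfs2_fault()` (not verbatim source): `filemap_fault()` may drop `mmap_lock` and return `VM_FAULT_RETRY`, after which a racing `munmap()` can free the VMA via RCU, so the cached `vma` pointer dereferenced afterwards points into freed slab memory.

```c
/* Sketch of the suspect pattern in fs/ocfs2/mmap.c (paraphrased, not
 * verbatim 6.6 source; offsets and the exact tracepoint call are
 * assumptions based on the KASAN stacks above). */
static vm_fault_t ocfs2_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;	/* cached before the fault */
	sigset_t oldset;
	vm_fault_t ret;

	ocfs2_block_signals(&oldset);
	ret = filemap_fault(vmf);		/* may release mmap_lock and
						 * return VM_FAULT_RETRY */
	ocfs2_unblock_signals(&oldset);

	/* If the lock was dropped, a concurrent munmap() can have freed
	 * the VMA via call_rcu() by now; this dereference of the stale
	 * vma pointer would be the KASAN-reported use-after-free read. */
	trace_ocfs2_fault(OCFS2_I(file_inode(vma->vm_file))->ip_blkno,
			  vma, vmf->page, vmf->pgoff);
	return ret;
}
```

Under that interpretation, a fix would avoid touching `vma` after `filemap_fault()` returns (for example, by capturing any needed fields before the call, or skipping the trace on `VM_FAULT_RETRY`); this is an inference from the stacks, not a confirmed root cause.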


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot
Sep 17, 2025, 5:04:26 PM
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: 60a9e718726f Linux 6.6.106
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=15f48712580000
kernel config: https://syzkaller.appspot.com/x/.config?x=12606d4b8832c7e4
dashboard link: https://syzkaller.appspot.com/bug?extid=7bf539c666eded139bd7
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=13e94c7c580000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=10256f62580000
mounted in repro: https://storage.googleapis.com/syzbot-assets/795f05c0648b/mount_0.gz
fsck result: OK (log: https://syzkaller.appspot.com/x/fsck.log?x=1453c534580000)

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+7bf539...@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: slab-use-after-free in ocfs2_fault+0xd7/0x3d0 fs/ocfs2/mmap.c:41
Read of size 8 at addr ffff888077010d88 by task syz.3.29/6065

CPU: 0 PID: 6065 Comm: syz.3.29 Not tainted syzkaller #0
RIP: 0033:0x7faa7418eba9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007faa74fb4038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007faa743d5fa0 RCX: 00007faa7418eba9
RDX: 00000000027ffff7 RSI: 0000000000600000 RDI: 0000200000000000
RBP: 00007faa74211e19 R08: 0000000000000004 R09: 0000000000000000
R10: 0000000004012011 R11: 0000000000000246 R12: 0000000000000000
R13: 00007faa743d6038 R14: 00007faa743d5fa0 R15: 00007ffed3d1b8e8
</TASK>

Allocated by task 6065:
kasan_save_stack mm/kasan/common.c:45 [inline]
kasan_set_track+0x4e/0x70 mm/kasan/common.c:52
__kasan_slab_alloc+0x6c/0x80 mm/kasan/common.c:328
kasan_slab_alloc include/linux/kasan.h:188 [inline]
slab_post_alloc_hook+0x6e/0x4d0 mm/slab.h:767
slab_alloc_node mm/slub.c:3495 [inline]
slab_alloc mm/slub.c:3503 [inline]
__kmem_cache_alloc_lru mm/slub.c:3510 [inline]
kmem_cache_alloc+0x11e/0x2e0 mm/slub.c:3519
vm_area_alloc+0x24/0x1d0 kernel/fork.c:486
__mmap_region mm/mmap.c:2770 [inline]
mmap_region+0xc8a/0x2020 mm/mmap.c:2941
do_mmap+0x92f/0x10a0 mm/mmap.c:1385
vm_mmap_pgoff+0x1c0/0x400 mm/util.c:556
ksys_mmap_pgoff+0x520/0x700 mm/mmap.c:1431
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

Freed by task 16:
kasan_save_stack mm/kasan/common.c:45 [inline]
kasan_set_track+0x4e/0x70 mm/kasan/common.c:52
kasan_save_free_info+0x2e/0x50 mm/kasan/generic.c:522
____kasan_slab_free+0x126/0x1e0 mm/kasan/common.c:236
kasan_slab_free include/linux/kasan.h:164 [inline]
slab_free_hook mm/slub.c:1811 [inline]
slab_free_freelist_hook+0x130/0x1b0 mm/slub.c:1837
slab_free mm/slub.c:3830 [inline]
kmem_cache_free+0xf8/0x280 mm/slub.c:3852
rcu_do_batch kernel/rcu/tree.c:2194 [inline]
rcu_core+0xcc4/0x1720 kernel/rcu/tree.c:2467
handle_softirqs+0x280/0x820 kernel/softirq.c:578
run_ksoftirqd+0x9c/0xf0 kernel/softirq.c:950
smpboot_thread_fn+0x635/0xa00 kernel/smpboot.c:164
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293

Last potentially related work creation:
kasan_save_stack+0x3e/0x60 mm/kasan/common.c:45
__kasan_record_aux_stack+0xaf/0xc0 mm/kasan/generic.c:492
__call_rcu_common kernel/rcu/tree.c:2721 [inline]
call_rcu+0x158/0x930 kernel/rcu/tree.c:2837
remove_vma mm/mmap.c:148 [inline]
remove_mt mm/mmap.c:2323 [inline]
do_vmi_align_munmap+0x1403/0x1660 mm/mmap.c:2596
do_vmi_munmap+0x252/0x2d0 mm/mmap.c:2660
__vm_munmap+0x193/0x3c0 mm/mmap.c:2961
__do_sys_munmap mm/mmap.c:2978 [inline]
__se_sys_munmap mm/mmap.c:2975 [inline]
__x64_sys_munmap+0x60/0x70 mm/mmap.c:2975
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

The buggy address belongs to the object at ffff888077010d00
which belongs to the cache vm_area_struct of size 192
The buggy address is located 136 bytes inside of
freed 192-byte region [ffff888077010d00, ffff888077010dc0)

The buggy address belongs to the physical page:
page:ffffea0001dc0400 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x77010
memcg:ffff88807f36e801
flags: 0xfff00000000800(slab|node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xffffffff()
raw: 00fff00000000800 ffff888019a4cb40 dead000000000122 0000000000000000
raw: 0000000000000000 0000000000100010 00000001ffffffff ffff88807f36e801
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x112cc0(GFP_USER|__GFP_NOWARN|__GFP_NORETRY), pid 5947, tgid 5947 (udevd), ts 114448487869, free_ts 113625430157
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook+0x1cd/0x210 mm/page_alloc.c:1554
prep_new_page mm/page_alloc.c:1561 [inline]
get_page_from_freelist+0x195c/0x19f0 mm/page_alloc.c:3191
__alloc_pages+0x1e3/0x460 mm/page_alloc.c:4457
alloc_slab_page+0x5d/0x170 mm/slub.c:1881
allocate_slab mm/slub.c:2028 [inline]
new_slab+0x87/0x2e0 mm/slub.c:2081
___slab_alloc+0xc6d/0x1300 mm/slub.c:3253
__slab_alloc mm/slub.c:3339 [inline]
__slab_alloc_node mm/slub.c:3392 [inline]
slab_alloc_node mm/slub.c:3485 [inline]
slab_alloc mm/slub.c:3503 [inline]
__kmem_cache_alloc_lru mm/slub.c:3510 [inline]
kmem_cache_alloc+0x1b7/0x2e0 mm/slub.c:3519
vm_area_dup+0x27/0x270 kernel/fork.c:501
__split_vma+0x19f/0xc00 mm/mmap.c:2373
do_vmi_align_munmap+0x377/0x1660 mm/mmap.c:2514
do_vmi_munmap+0x252/0x2d0 mm/mmap.c:2660
__vm_munmap+0x193/0x3c0 mm/mmap.c:2961
__do_sys_munmap mm/mmap.c:2978 [inline]
__se_sys_munmap mm/mmap.c:2975 [inline]
__x64_sys_munmap+0x60/0x70 mm/mmap.c:2975
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
page last free stack trace:
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1154 [inline]
free_unref_page_prepare+0x7ce/0x8e0 mm/page_alloc.c:2336
free_unref_page_list+0xbe/0x860 mm/page_alloc.c:2475
release_pages+0x1fa0/0x2220 mm/swap.c:1022
__folio_batch_release+0x71/0xe0 mm/swap.c:1042
folio_batch_release include/linux/pagevec.h:83 [inline]
truncate_inode_pages_range+0x358/0xf00 mm/truncate.c:371
kill_bdev block/bdev.c:76 [inline]
blkdev_flush_mapping+0x132/0x290 block/bdev.c:632
blkdev_put_whole block/bdev.c:663 [inline]
blkdev_put+0x498/0x760 block/bdev.c:941
blkdev_release+0x84/0x90 block/fops.c:604
__fput+0x234/0x970 fs/file_table.c:384
__do_sys_close fs/open.c:1571 [inline]
__se_sys_close+0x15f/0x220 fs/open.c:1556
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

Memory state around the buggy address:
ffff888077010c80: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
ffff888077010d00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff888077010d80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
^
ffff888077010e00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff888077010e80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
==================================================================


---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.