Hello,
syzbot found the following issue on:
HEAD commit: 1e89a1be4fe9 Linux 6.6.117
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=164c5612580000
kernel config: https://syzkaller.appspot.com/x/.config?x=12606d4b8832c7e4
dashboard link: https://syzkaller.appspot.com/bug?extid=8f19aec650b306c0309d
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/ad3647e47b66/disk-1e89a1be.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/d6f7ba94aea7/vmlinux-1e89a1be.xz
kernel image: https://storage.googleapis.com/syzbot-assets/e08af2290355/bzImage-1e89a1be.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+8f19ae...@syzkaller.appspotmail.com
BUG: spinlock bad magic on CPU#0, jfsCommit/112
==================================================================
BUG: KASAN: slab-out-of-bounds in string_nocheck lib/vsprintf.c:645 [inline]
BUG: KASAN: slab-out-of-bounds in string+0x223/0x2b0 lib/vsprintf.c:727
Read of size 1 at addr ffff88805b59f868 by task jfsCommit/112
CPU: 0 PID: 112 Comm: jfsCommit Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
print_address_description mm/kasan/report.c:364 [inline]
print_report+0xac/0x220 mm/kasan/report.c:468
kasan_report+0x117/0x150 mm/kasan/report.c:581
string_nocheck lib/vsprintf.c:645 [inline]
string+0x223/0x2b0 lib/vsprintf.c:727
vsnprintf+0xe52/0x1a40 lib/vsprintf.c:2823
vprintk_store+0x3c7/0xc70 kernel/printk/printk.c:2226
vprintk_emit+0x11f/0x600 kernel/printk/printk.c:2322
_printk+0xd0/0x110 kernel/printk/printk.c:2366
spin_dump+0x101/0x1a0 kernel/locking/spinlock_debug.c:63
spin_bug kernel/locking/spinlock_debug.c:77 [inline]
debug_spin_lock_before kernel/locking/spinlock_debug.c:85 [inline]
do_raw_spin_lock+0x1c6/0x2c0 kernel/locking/spinlock_debug.c:114
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
_raw_spin_lock_irqsave+0xb4/0xf0 kernel/locking/spinlock.c:162
__wake_up_common_lock kernel/sched/wait.c:137 [inline]
__wake_up+0xf8/0x190 kernel/sched/wait.c:160
unlock_metapage fs/jfs/jfs_metapage.c:38 [inline]
release_metapage+0xc5/0x870 fs/jfs/jfs_metapage.c:765
xtTruncate+0xe65/0x2dc0 fs/jfs/jfs_xtree.c:-1
jfs_free_zero_link+0x33b/0x490 fs/jfs/namei.c:758
jfs_evict_inode+0x35d/0x440 fs/jfs/inode.c:159
evict+0x486/0x870 fs/inode.c:705
txLazyCommit fs/jfs/jfs_txnmgr.c:2665 [inline]
jfs_lazycommit+0x42b/0xa60 fs/jfs/jfs_txnmgr.c:2733
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
Allocated by task 6124:
kasan_save_stack mm/kasan/common.c:45 [inline]
kasan_set_track+0x4e/0x70 mm/kasan/common.c:52
__kasan_slab_alloc+0x6c/0x80 mm/kasan/common.c:328
kasan_slab_alloc include/linux/kasan.h:188 [inline]
slab_post_alloc_hook+0x6e/0x4d0 mm/slab.h:767
slab_alloc_node mm/slub.c:3495 [inline]
slab_alloc mm/slub.c:3503 [inline]
__kmem_cache_alloc_lru mm/slub.c:3510 [inline]
kmem_cache_alloc_lru+0x115/0x2e0 mm/slub.c:3526
alloc_inode_sb include/linux/fs.h:2946 [inline]
jfs_alloc_inode+0x28/0x60 fs/jfs/super.c:105
alloc_inode fs/inode.c:261 [inline]
iget_locked+0x1ad/0x840 fs/inode.c:1359
jfs_iget+0x24/0x440 fs/jfs/inode.c:29
jfs_lookup+0x1c6/0x380 fs/jfs/namei.c:1467
__lookup_slow+0x281/0x3b0 fs/namei.c:1702
lookup_slow+0x53/0x70 fs/namei.c:1719
walk_component+0x2be/0x3f0 fs/namei.c:2010
lookup_last fs/namei.c:2467 [inline]
path_lookupat+0x169/0x440 fs/namei.c:2491
filename_lookup+0x1f4/0x510 fs/namei.c:2520
user_path_at_empty+0x42/0x60 fs/namei.c:2917
user_path_at include/linux/namei.h:57 [inline]
ksys_umount fs/namespace.c:1921 [inline]
__do_sys_umount fs/namespace.c:1929 [inline]
__se_sys_umount fs/namespace.c:1927 [inline]
__x64_sys_umount+0xf5/0x170 fs/namespace.c:1927
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
The buggy address belongs to the object at ffff88805b59ef00
which belongs to the cache jfs_ip of size 2240
The buggy address is located 168 bytes to the right of
allocated 2240-byte region [ffff88805b59ef00, ffff88805b59f7c0)
The buggy address belongs to the physical page:
page:ffffea00016d6600 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x5b598
head:ffffea00016d6600 order:3 entire_mapcount:0 nr_pages_mapped:0 pincount:0
memcg:ffff88802e317201
flags: 0xfff00000000840(slab|head|node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xffffffff()
raw: 00fff00000000840 ffff888018f5d780 dead000000000122 0000000000000000
raw: 0000000000000000 00000000800d000d 00000001ffffffff ffff88802e317201
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Reclaimable, gfp_mask 0x1d2050(__GFP_IO|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_HARDWALL|__GFP_RECLAIMABLE), pid 6029, tgid 6028 (syz.2.45), ts 87887303313, free_ts 18656114039
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook+0x1cd/0x210 mm/page_alloc.c:1554
prep_new_page mm/page_alloc.c:1561 [inline]
get_page_from_freelist+0x195c/0x19f0 mm/page_alloc.c:3191
__alloc_pages+0x1e3/0x460 mm/page_alloc.c:4457
alloc_slab_page+0x5d/0x170 mm/slub.c:1881
allocate_slab mm/slub.c:2028 [inline]
new_slab+0x87/0x2e0 mm/slub.c:2081
___slab_alloc+0xc6d/0x1300 mm/slub.c:3253
__slab_alloc mm/slub.c:3339 [inline]
__slab_alloc_node mm/slub.c:3392 [inline]
slab_alloc_node mm/slub.c:3485 [inline]
slab_alloc mm/slub.c:3503 [inline]
__kmem_cache_alloc_lru mm/slub.c:3510 [inline]
kmem_cache_alloc_lru+0x1ae/0x2e0 mm/slub.c:3526
alloc_inode_sb include/linux/fs.h:2946 [inline]
jfs_alloc_inode+0x28/0x60 fs/jfs/super.c:105
alloc_inode fs/inode.c:261 [inline]
new_inode_pseudo+0x63/0x1d0 fs/inode.c:1049
new_inode+0x22/0x1b0 fs/inode.c:1075
jfs_fill_super+0x396/0xac0 fs/jfs/super.c:544
mount_bdev+0x22b/0x2d0 fs/super.c:1643
legacy_get_tree+0xea/0x180 fs/fs_context.c:662
vfs_get_tree+0x8c/0x280 fs/super.c:1764
do_new_mount+0x24b/0xa40 fs/namespace.c:3386
do_mount fs/namespace.c:3726 [inline]
__do_sys_mount fs/namespace.c:3935 [inline]
__se_sys_mount+0x2da/0x3c0 fs/namespace.c:3912
page last free stack trace:
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1154 [inline]
free_unref_page_prepare+0x7ce/0x8e0 mm/page_alloc.c:2336
free_unref_page+0x32/0x2e0 mm/page_alloc.c:2429
free_contig_range+0xa1/0x160 mm/page_alloc.c:6369
destroy_args+0x80/0x850 mm/debug_vm_pgtable.c:1015
debug_vm_pgtable+0x3cc/0x410 mm/debug_vm_pgtable.c:1400
do_one_initcall+0x1fd/0x750 init/main.c:1250
do_initcall_level+0x137/0x1f0 init/main.c:1312
do_initcalls+0x69/0xd0 init/main.c:1328
kernel_init_freeable+0x3d2/0x570 init/main.c:1565
kernel_init+0x1d/0x1c0 init/main.c:1455
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
Memory state around the buggy address:
ffff88805b59f700: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ffff88805b59f780: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
>ffff88805b59f800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
^
ffff88805b59f880: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
ffff88805b59f900: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See: https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup