[v5.15] WARNING in hfsplus_ext_write_extent (2)


syzbot

Nov 19, 2025, 1:16:26 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: cc5ec8769306 Linux 5.15.196
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=10fd9658580000
kernel config: https://syzkaller.appspot.com/x/.config?x=e1bb6d24ef2164eb
dashboard link: https://syzkaller.appspot.com/bug?extid=75e7f7f1301e65a7cea3
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/c71c660545b2/disk-cc5ec876.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/4f011826cca6/vmlinux-cc5ec876.xz
kernel image: https://storage.googleapis.com/syzbot-assets/8ccd1a2c3f8c/bzImage-cc5ec876.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+75e7f7...@syzkaller.appspotmail.com

hfsplus: b-tree write err: -5, ino 4
------------[ cut here ]------------
DEBUG_LOCKS_WARN_ON(lock->magic != lock)
WARNING: CPU: 0 PID: 4796 at kernel/locking/mutex.c:575 __mutex_lock_common+0x18fa/0x2390 kernel/locking/mutex.c:-1
Modules linked in:
CPU: 0 PID: 4796 Comm: kworker/u4:5 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Workqueue: writeback wb_workfn (flush-7:2)
RIP: 0010:__mutex_lock_common+0x18fa/0x2390 kernel/locking/mutex.c:575
Code: 00 48 8d bc 24 e0 00 00 00 0f 85 47 e8 ff ff 48 c7 c7 60 0b 0b 8a 48 c7 c6 a0 0b 0b 8a e8 6e ac ec ff 48 8d bc 24 e0 00 00 00 <0f> 0b e9 25 e8 ff ff e8 ea 7e 4e f7 e9 df fb ff ff 0f 0b e9 2d f0
RSP: 0018:ffffc90004b171a0 EFLAGS: 00010246
RAX: 07a41d47b2d0e900 RBX: ffffffff822902e7 RCX: ffff888079f0d940
RDX: 0000000000000000 RSI: 0000000080000000 RDI: ffffc90004b17280
RBP: ffffc90004b17340 R08: dffffc0000000000 R09: ffffed1017204f2c
R10: ffffed1017204f2c R11: 1ffff11017204f2b R12: 0000000000000000
R13: 0000000000000000 R14: ffff88805b212840 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffff8880b9000000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055731860cfb0 CR3: 0000000062b35000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
hfsplus_ext_write_extent+0x87/0x200 fs/hfsplus/extents.c:149
hfsplus_write_inode+0x1e/0x5b0 fs/hfsplus/super.c:167
write_inode fs/fs-writeback.c:1505 [inline]
__writeback_single_inode+0x6c3/0xda0 fs/fs-writeback.c:1715
writeback_sb_inodes+0x9fe/0x1610 fs/fs-writeback.c:1940
__writeback_inodes_wb+0x12a/0x3f0 fs/fs-writeback.c:2011
wb_writeback+0x455/0xb90 fs/fs-writeback.c:2116
wb_check_start_all fs/fs-writeback.c:2238 [inline]
wb_do_writeback fs/fs-writeback.c:2264 [inline]
wb_workfn+0x8dd/0xe60 fs/fs-writeback.c:2298
process_one_work+0x863/0x1000 kernel/workqueue.c:2310
worker_thread+0xaa8/0x12a0 kernel/workqueue.c:2457
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup