Hello,
syzbot found the following issue on:
HEAD commit: 68efe5a6c16a Linux 5.15.197
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=14d9d9b4580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=7e6ed99963d6ee1d
dashboard link: https://syzkaller.appspot.com/bug?extid=cb579f957ed80bdc1a09
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/900f9b9bd850/disk-68efe5a6.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/1e089a5019a6/vmlinux-68efe5a6.xz
kernel image: https://storage.googleapis.com/syzbot-assets/b319f477b907/bzImage-68efe5a6.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+cb579f...@syzkaller.appspotmail.com
JBD2: Ignoring recovery information on journal
ocfs2: Mounting device (7,9) on (node local, slot 0) with ordered data mode.
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.9.214/6007 is trying to acquire lock:
ffff8880251ec098 (&new->rf_sem){+.+.}-{3:3}, at: __ocfs2_lock_refcount_tree fs/ocfs2/refcounttree.c:428 [inline]
ffff8880251ec098 (&new->rf_sem){+.+.}-{3:3}, at: ocfs2_lock_refcount_tree+0x1c0/0x980 fs/ocfs2/refcounttree.c:463
but task is already holding lock:
ffff88805fe68660 (&ocfs2_file_ip_alloc_sem_key){++++}-{3:3}, at: ocfs2_inode_lock_for_extent_tree+0x72/0x190 fs/ocfs2/file.c:2211
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (&ocfs2_file_ip_alloc_sem_key){++++}-{3:3}:
down_read+0x44/0x2e0 kernel/locking/rwsem.c:1498
ocfs2_read_virt_blocks+0x23f/0x8a0 fs/ocfs2/extent_map.c:984
ocfs2_read_dir_block fs/ocfs2/dir.c:508 [inline]
ocfs2_find_entry_el fs/ocfs2/dir.c:715 [inline]
ocfs2_find_entry+0x3d1/0x1f90 fs/ocfs2/dir.c:1091
ocfs2_find_files_on_disk+0xdb/0x2f0 fs/ocfs2/dir.c:1995
ocfs2_lookup_ino_from_name+0x4f/0xf0 fs/ocfs2/dir.c:2017
_ocfs2_get_system_file_inode fs/ocfs2/sysfile.c:136 [inline]
ocfs2_get_system_file_inode+0x319/0x760 fs/ocfs2/sysfile.c:112
ocfs2_init_global_system_inodes+0x316/0x650 fs/ocfs2/super.c:458
ocfs2_initialize_super fs/ocfs2/super.c:2279 [inline]
ocfs2_fill_super+0x3dbf/0x4d80 fs/ocfs2/super.c:995
mount_bdev+0x287/0x3c0 fs/super.c:1400
legacy_get_tree+0xe6/0x180 fs/fs_context.c:611
vfs_get_tree+0x88/0x270 fs/super.c:1530
do_new_mount+0x24a/0xa40 fs/namespace.c:3034
do_mount fs/namespace.c:3377 [inline]
__do_sys_mount fs/namespace.c:3585 [inline]
__se_sys_mount+0x2d6/0x3c0 fs/namespace.c:3562
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
-> #1 (&osb->system_file_mutex){+.+.}-{3:3}:
__mutex_lock_common+0x1eb/0x2390 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
ocfs2_get_system_file_inode+0x1b5/0x760 fs/ocfs2/sysfile.c:101
ocfs2_reserve_suballoc_bits+0x139/0x4350 fs/ocfs2/suballoc.c:776
ocfs2_reserve_new_metadata_blocks+0x400/0x940 fs/ocfs2/suballoc.c:978
ocfs2_add_refcount_flag+0x36e/0xd80 fs/ocfs2/refcounttree.c:3711
ocfs2_reflink_remap_extent fs/ocfs2/refcounttree.c:4591 [inline]
ocfs2_reflink_remap_blocks+0xd2c/0x1930 fs/ocfs2/refcounttree.c:4718
ocfs2_remap_file_range+0x4aa/0x720 fs/ocfs2/file.c:2706
vfs_copy_file_range+0xce7/0x1470 fs/read_write.c:1510
__do_sys_copy_file_range fs/read_write.c:1588 [inline]
__se_sys_copy_file_range+0x31d/0x480 fs/read_write.c:1551
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
-> #0 (&new->rf_sem){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3788 [inline]
__lock_acquire+0x2c33/0x7c60 kernel/locking/lockdep.c:5012
lock_acquire+0x197/0x3f0 kernel/locking/lockdep.c:5623
down_write+0x38/0x60 kernel/locking/rwsem.c:1551
__ocfs2_lock_refcount_tree fs/ocfs2/refcounttree.c:428 [inline]
ocfs2_lock_refcount_tree+0x1c0/0x980 fs/ocfs2/refcounttree.c:463
ocfs2_refcount_cow_hunk fs/ocfs2/refcounttree.c:3443 [inline]
ocfs2_refcount_cow+0x56e/0xc50 fs/ocfs2/refcounttree.c:3504
ocfs2_prepare_inode_for_write fs/ocfs2/file.c:2343 [inline]
ocfs2_file_write_iter+0xdfd/0x1cf0 fs/ocfs2/file.c:2454
do_iter_readv_writev+0x497/0x600 fs/read_write.c:-1
do_iter_write+0x205/0x7b0 fs/read_write.c:855
vfs_writev fs/read_write.c:928 [inline]
do_pwritev+0x204/0x340 fs/read_write.c:1025
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
other info that might help us debug this:
Chain exists of:
&new->rf_sem --> &osb->system_file_mutex --> &ocfs2_file_ip_alloc_sem_key
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(&ocfs2_file_ip_alloc_sem_key);
                               lock(&osb->system_file_mutex);
                               lock(&ocfs2_file_ip_alloc_sem_key);
  lock(&new->rf_sem);
*** DEADLOCK ***
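For illustration only (not part of the syzbot report): the cycle lockdep flags above can be modeled outside the kernel with a small dependency-graph checker in the spirit of lockdep. The lock names are simplified stand-ins for &new->rf_sem, &osb->system_file_mutex, and &ocfs2_file_ip_alloc_sem_key; the three acquire() calls mirror the mount, reflink, and write paths in the chain above.

```python
# Minimal sketch of lockdep-style cycle detection. Each acquire() records
# "new was taken while held were held" and reports a cycle if one forms.
class LockOrderChecker:
    def __init__(self):
        self.edges = {}  # lock -> set of locks acquired while holding it

    def acquire(self, held, new):
        """Record the acquisition; return the cycle as a list if one exists."""
        for h in held:
            self.edges.setdefault(h, set()).add(new)
        for h in held:
            # A path new -> ... -> h means h -> new just closed a loop.
            path = self._find_path(new, h)
            if path:
                return path + [new]
        return None

    def _find_path(self, src, dst, seen=None):
        if seen is None:
            seen = set()
        if src == dst:
            return [src]
        seen.add(src)
        for nxt in self.edges.get(src, ()):
            if nxt not in seen:
                rest = self._find_path(nxt, dst, seen)
                if rest:
                    return [src] + rest
        return None

chk = LockOrderChecker()
# -> #2 (mount): ip_alloc_sem taken under system_file_mutex -- no cycle yet.
assert chk.acquire(["system_file_mutex"], "ip_alloc_sem") is None
# -> #1 (reflink): system_file_mutex taken under rf_sem -- still no cycle.
assert chk.acquire(["rf_sem"], "system_file_mutex") is None
# -> #0 (write): rf_sem taken under ip_alloc_sem -- closes the loop.
cycle = chk.acquire(["ip_alloc_sem"], "rf_sem")
print(cycle)  # ['rf_sem', 'system_file_mutex', 'ip_alloc_sem', 'rf_sem']
```

This is just a sketch of why the three stacks above constitute a potential (not yet observed) deadlock: no single task takes all three locks, but the pairwise orders form a ring.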
3 locks held by syz.9.214/6007:
#0: ffff888026a04460 (sb_writers#14){.+.+}-{0:0}, at: vfs_writev fs/read_write.c:927 [inline]
#0: ffff888026a04460 (sb_writers#14){.+.+}-{0:0}, at: do_pwritev+0x1f2/0x340 fs/read_write.c:1025
#1: ffff88805fe689c8 (&sb->s_type->i_mutex_key#21){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:787 [inline]
#1: ffff88805fe689c8 (&sb->s_type->i_mutex_key#21){+.+.}-{3:3}, at: ocfs2_file_write_iter+0x401/0x1cf0 fs/ocfs2/file.c:2402
#2: ffff88805fe68660 (&ocfs2_file_ip_alloc_sem_key){++++}-{3:3}, at: ocfs2_inode_lock_for_extent_tree+0x72/0x190 fs/ocfs2/file.c:2211
stack backtrace:
CPU: 0 PID: 6007 Comm: syz.9.214 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
<TASK>
dump_stack_lvl+0x168/0x230 lib/dump_stack.c:106
check_noncircular+0x274/0x310 kernel/locking/lockdep.c:2133
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3788 [inline]
__lock_acquire+0x2c33/0x7c60 kernel/locking/lockdep.c:5012
lock_acquire+0x197/0x3f0 kernel/locking/lockdep.c:5623
down_write+0x38/0x60 kernel/locking/rwsem.c:1551
__ocfs2_lock_refcount_tree fs/ocfs2/refcounttree.c:428 [inline]
ocfs2_lock_refcount_tree+0x1c0/0x980 fs/ocfs2/refcounttree.c:463
ocfs2_refcount_cow_hunk fs/ocfs2/refcounttree.c:3443 [inline]
ocfs2_refcount_cow+0x56e/0xc50 fs/ocfs2/refcounttree.c:3504
ocfs2_prepare_inode_for_write fs/ocfs2/file.c:2343 [inline]
ocfs2_file_write_iter+0xdfd/0x1cf0 fs/ocfs2/file.c:2454
do_iter_readv_writev+0x497/0x600 fs/read_write.c:-1
do_iter_write+0x205/0x7b0 fs/read_write.c:855
vfs_writev fs/read_write.c:928 [inline]
do_pwritev+0x204/0x340 fs/read_write.c:1025
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f6ffddd4749
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f6ffc03b038 EFLAGS: 00000246 ORIG_RAX: 0000000000000148
RAX: ffffffffffffffda RBX: 00007f6ffe02afa0 RCX: 00007f6ffddd4749
RDX: 0000000000000001 RSI: 0000200000000400 RDI: 0000000000000004
RBP: 00007f6ffde58f91 R08: 0000000000000002 R09: 0000000000000005
R10: 0000000000000004 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f6ffe02b038 R14: 00007f6ffe02afa0 R15: 00007ffde8c4c3f8
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup