Hello,
syzbot found the following issue on:
HEAD commit: 9760bf04666d Linux 6.6.135
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=13cbf4ce580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=c5b35c4db8465904
dashboard link: https://syzkaller.appspot.com/bug?extid=13cbdee13e74c1b07ed4
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/a685913d05a7/disk-9760bf04.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/a3c1d21d4bca/vmlinux-9760bf04.xz
kernel image: https://storage.googleapis.com/syzbot-assets/09178887ef0d/bzImage-9760bf04.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+13cbde...@syzkaller.appspotmail.com
REISERFS (device loop0): Using tea hash to sort names
REISERFS (device loop0): Created .reiserfs_priv - reserved for xattr storage.
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.0.1224/8897 is trying to acquire lock:
ffff8880192a3090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock+0x79/0xd0 fs/reiserfs/lock.c:27
but task is already holding lock:
ffff888077671030 (&type->i_mutex_dir_key#22/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:839 [inline]
ffff888077671030 (&type->i_mutex_dir_key#22/1){+.+.}-{3:3}, at: filename_create+0x20c/0x480 fs/namei.c:3890
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (&type->i_mutex_dir_key#22/1){+.+.}-{3:3}:
down_write_nested+0x9e/0x200 kernel/locking/rwsem.c:1689
inode_lock_nested include/linux/fs.h:839 [inline]
filename_create+0x20c/0x480 fs/namei.c:3890
do_symlinkat+0xc5/0x400 fs/namei.c:4500
__do_sys_symlinkat fs/namei.c:4523 [inline]
__se_sys_symlinkat fs/namei.c:4520 [inline]
__x64_sys_symlinkat+0x99/0xb0 fs/namei.c:4520
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x55/0xa0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2
-> #1 (sb_writers#28){.+.+}-{0:0}:
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
__sb_start_write include/linux/fs.h:1633 [inline]
sb_start_write+0x4d/0x1c0 include/linux/fs.h:1708
mnt_want_write_file+0x63/0x200 fs/namespace.c:456
reiserfs_ioctl+0x112/0x2d0 fs/reiserfs/ioctl.c:103
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:871 [inline]
__se_sys_ioctl+0xfd/0x170 fs/ioctl.c:857
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x55/0xa0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2
-> #0 (&sbi->lock){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2df1/0x7d40 kernel/locking/lockdep.c:5137
lock_acquire+0x19e/0x420 kernel/locking/lockdep.c:5754
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x136/0xcc0 kernel/locking/mutex.c:747
reiserfs_write_lock+0x79/0xd0 fs/reiserfs/lock.c:27
reiserfs_lookup+0x183/0x580 fs/reiserfs/namei.c:364
lookup_one_qstr_excl+0x112/0x250 fs/namei.c:1617
filename_create+0x23e/0x480 fs/namei.c:3891
do_symlinkat+0xc5/0x400 fs/namei.c:4500
__do_sys_symlinkat fs/namei.c:4523 [inline]
__se_sys_symlinkat fs/namei.c:4520 [inline]
__x64_sys_symlinkat+0x99/0xb0 fs/namei.c:4520
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x55/0xa0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2
other info that might help us debug this:
Chain exists of:
&sbi->lock --> sb_writers#28 --> &type->i_mutex_dir_key#22/1
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(&type->i_mutex_dir_key#22/1);
                               lock(sb_writers#28);
                               lock(&type->i_mutex_dir_key#22/1);
  lock(&sbi->lock);
*** DEADLOCK ***
2 locks held by syz.0.1224/8897:
#0: ffff88807d7a4418 (sb_writers#28){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:412
#1: ffff888077671030 (&type->i_mutex_dir_key#22/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:839 [inline]
#1: ffff888077671030 (&type->i_mutex_dir_key#22/1){+.+.}-{3:3}, at: filename_create+0x20c/0x480 fs/namei.c:3890
stack backtrace:
CPU: 0 PID: 8897 Comm: syz.0.1224 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
<TASK>
dump_stack_lvl+0x18c/0x250 lib/dump_stack.c:106
check_noncircular+0x2fc/0x400 kernel/locking/lockdep.c:2187
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2df1/0x7d40 kernel/locking/lockdep.c:5137
lock_acquire+0x19e/0x420 kernel/locking/lockdep.c:5754
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x136/0xcc0 kernel/locking/mutex.c:747
reiserfs_write_lock+0x79/0xd0 fs/reiserfs/lock.c:27
reiserfs_lookup+0x183/0x580 fs/reiserfs/namei.c:364
lookup_one_qstr_excl+0x112/0x250 fs/namei.c:1617
filename_create+0x23e/0x480 fs/namei.c:3891
do_symlinkat+0xc5/0x400 fs/namei.c:4500
__do_sys_symlinkat fs/namei.c:4523 [inline]
__se_sys_symlinkat fs/namei.c:4520 [inline]
__x64_sys_symlinkat+0x99/0xb0 fs/namei.c:4520
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x55/0xa0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fdb7359c819
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fdb74428028 EFLAGS: 00000246 ORIG_RAX: 000000000000010a
RAX: ffffffffffffffda RBX: 00007fdb73815fa0 RCX: 00007fdb7359c819
RDX: 00002000000005c0 RSI: ffffffffffffff9c RDI: 0000200000000700
RBP: 00007fdb73632c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fdb73816038 R14: 00007fdb73815fa0 R15: 00007ffc68724828
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup