[v6.1] possible deadlock in filename_create (2)

syzbot

Mar 2, 2024, 4:45:21 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: a3eb3a74aa8c Linux 6.1.80
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=13a07526180000
kernel config: https://syzkaller.appspot.com/x/.config?x=40fd5f1c69352c2d
dashboard link: https://syzkaller.appspot.com/bug?extid=6002e4561fb277938809
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: arm64
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=173da7aa180000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1666fbac180000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/fcf176340788/disk-a3eb3a74.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/31d4ae9bb2ff/vmlinux-a3eb3a74.xz
kernel image: https://storage.googleapis.com/syzbot-assets/cb907876b80a/Image-a3eb3a74.gz.xz
mounted in repro #1: https://storage.googleapis.com/syzbot-assets/6d424c994076/mount_0.gz
mounted in repro #2: https://storage.googleapis.com/syzbot-assets/59545b170213/mount_3.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+6002e4...@syzkaller.appspotmail.com

REISERFS (device loop3): Using tea hash to sort names
REISERFS (device loop3): Created .reiserfs_priv - reserved for xattr storage.
======================================================
WARNING: possible circular locking dependency detected
6.1.80-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor344/6785 is trying to acquire lock:
ffff0000e2596640 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:793 [inline]
ffff0000e2596640 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: filename_create+0x204/0x468 fs/namei.c:3878

but task is already holding lock:
ffff0000dbe14460 (sb_writers#8){.+.+}-{0:0}, at: mnt_want_write+0x44/0x9c fs/namespace.c:393

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (sb_writers#8){.+.+}-{0:0}:
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
__sb_start_write include/linux/fs.h:1877 [inline]
sb_start_write+0x7c/0x308 include/linux/fs.h:1952
mnt_want_write_file+0x64/0x1e8 fs/namespace.c:437
reiserfs_ioctl+0x184/0x454 fs/reiserfs/ioctl.c:103
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl fs/ioctl.c:856 [inline]
__arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:856
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:585

-> #1 (&sbi->lock){+.+.}-{3:3}:
__mutex_lock_common+0x190/0x21a0 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x38/0x44 kernel/locking/mutex.c:799
reiserfs_write_lock+0x7c/0xe8 fs/reiserfs/lock.c:27
reiserfs_lookup+0x130/0x3c4 fs/reiserfs/namei.c:364
lookup_one_qstr_excl+0x108/0x230 fs/namei.c:1605
filename_create+0x230/0x468 fs/namei.c:3879
do_mkdirat+0xac/0x510 fs/namei.c:4123
__do_sys_mkdirat fs/namei.c:4148 [inline]
__se_sys_mkdirat fs/namei.c:4146 [inline]
__arm64_sys_mkdirat+0x90/0xa8 fs/namei.c:4146
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:585

-> #0 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3090 [inline]
check_prevs_add kernel/locking/lockdep.c:3209 [inline]
validate_chain kernel/locking/lockdep.c:3825 [inline]
__lock_acquire+0x3338/0x7680 kernel/locking/lockdep.c:5049
lock_acquire+0x26c/0x7cc kernel/locking/lockdep.c:5662
down_write_nested+0x64/0x94 kernel/locking/rwsem.c:1689
inode_lock_nested include/linux/fs.h:793 [inline]
filename_create+0x204/0x468 fs/namei.c:3878
do_mkdirat+0xac/0x510 fs/namei.c:4123
__do_sys_mkdirat fs/namei.c:4148 [inline]
__se_sys_mkdirat fs/namei.c:4146 [inline]
__arm64_sys_mkdirat+0x90/0xa8 fs/namei.c:4146
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:585

other info that might help us debug this:

Chain exists of:
&type->i_mutex_dir_key#6/1 --> &sbi->lock --> sb_writers#8

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(sb_writers#8);
                               lock(&sbi->lock);
                               lock(sb_writers#8);
  lock(&type->i_mutex_dir_key#6/1);

*** DEADLOCK ***

1 lock held by syz-executor344/6785:
#0: ffff0000dbe14460 (sb_writers#8){.+.+}-{0:0}, at: mnt_want_write+0x44/0x9c fs/namespace.c:393

stack backtrace:
CPU: 0 PID: 6785 Comm: syz-executor344 Not tainted 6.1.80-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Call trace:
dump_backtrace+0x1c8/0x1f4 arch/arm64/kernel/stacktrace.c:158
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:165
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x108/0x170 lib/dump_stack.c:106
dump_stack+0x1c/0x58 lib/dump_stack.c:113
print_circular_bug+0x150/0x1b8 kernel/locking/lockdep.c:2048
check_noncircular+0x2cc/0x378 kernel/locking/lockdep.c:2170
check_prev_add kernel/locking/lockdep.c:3090 [inline]
check_prevs_add kernel/locking/lockdep.c:3209 [inline]
validate_chain kernel/locking/lockdep.c:3825 [inline]
__lock_acquire+0x3338/0x7680 kernel/locking/lockdep.c:5049
lock_acquire+0x26c/0x7cc kernel/locking/lockdep.c:5662
down_write_nested+0x64/0x94 kernel/locking/rwsem.c:1689
inode_lock_nested include/linux/fs.h:793 [inline]
filename_create+0x204/0x468 fs/namei.c:3878
do_mkdirat+0xac/0x510 fs/namei.c:4123
__do_sys_mkdirat fs/namei.c:4148 [inline]
__se_sys_mkdirat fs/namei.c:4146 [inline]
__arm64_sys_mkdirat+0x90/0xa8 fs/namei.c:4146
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:585
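
To make the cycle easier to see: the chain reported above is sb_writers#8 -> &type->i_mutex_dir_key#6/1 -> &sbi->lock -> sb_writers#8. The mkdirat path takes sb_writers (mnt_want_write), then the directory inode lock in filename_create, then the per-sb reiserfs write lock in reiserfs_lookup; the reiserfs_ioctl path ends up calling mnt_want_write_file (sb_writers) with the reiserfs write lock already held. The sketch below is an illustration only, not kernel code: a minimal userspace program with pthread mutexes named after the locks in the report (sb_writers is really a per-cpu rwsem taken for read, simplified here to a plain mutex), showing how the two orderings can wedge against each other.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-ins for the three locks in the lockdep chain (names borrowed from the report). */
static pthread_mutex_t sb_writers  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t i_mutex_dir = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t sbi_lock    = PTHREAD_MUTEX_INITIALIZER;

/* Mimics do_mkdirat -> filename_create -> reiserfs_lookup. */
static void *mkdirat_path(void *arg)
{
	pthread_mutex_lock(&sb_writers);   /* mnt_want_write() */
	pthread_mutex_lock(&i_mutex_dir);  /* inode_lock_nested() in filename_create() */
	sleep(1);                          /* widen the race window for the demo */
	pthread_mutex_lock(&sbi_lock);     /* reiserfs_write_lock() in reiserfs_lookup() */
	pthread_mutex_unlock(&sbi_lock);
	pthread_mutex_unlock(&i_mutex_dir);
	pthread_mutex_unlock(&sb_writers);
	return NULL;
}

/* Mimics the reiserfs_ioctl path: sb_writers taken while the write lock is held. */
static void *ioctl_path(void *arg)
{
	pthread_mutex_lock(&sbi_lock);     /* reiserfs write lock held first */
	sleep(1);                          /* widen the race window for the demo */
	pthread_mutex_lock(&sb_writers);   /* mnt_want_write_file() */
	pthread_mutex_unlock(&sb_writers);
	pthread_mutex_unlock(&sbi_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, mkdirat_path, NULL);
	pthread_create(&b, NULL, ioctl_path, NULL);
	pthread_join(a, NULL);             /* with this timing, neither join returns */
	pthread_join(b, NULL);
	puts("no deadlock this run");
	return 0;
}

With the sleeps in place, thread A ends up holding sb_writers and waiting for sbi_lock while thread B holds sbi_lock and waits for sb_writers, which is exactly the circular dependency lockdep flags above.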


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup