Hello,
syzbot found the following issue on:
HEAD commit: c16c81c81336 Linux 5.15.178
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=11c98f24580000
kernel config: https://syzkaller.appspot.com/x/.config?x=3ca28fba9b2e5c5
dashboard link: https://syzkaller.appspot.com/bug?extid=b56327bfdb9a4b46222a
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: arm64
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/3f2947e5e4dc/disk-c16c81c8.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/7034800fdaa8/vmlinux-c16c81c8.xz
kernel image: https://storage.googleapis.com/syzbot-assets/42337be7c213/Image-c16c81c8.gz.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+b56327...@syzkaller.appspotmail.com
BTRFS info (device loop6): enabling ssd optimizations
======================================================
WARNING: possible circular locking dependency detected
5.15.178-syzkaller #0 Not tainted
------------------------------------------------------
5�/5195 is trying to acquire lock:
ffff0000d7bf0650 (sb_internal#4){.+.+}-{0:0}, at: btrfs_start_transaction+0x34/0x44 fs/btrfs/transaction.c:777
but task is already holding lock:
ffff0000e925ad50 (&type->i_mutex_dir_key#10){++++}-{3:3}, at: inode_lock include/linux/fs.h:789 [inline]
ffff0000e925ad50 (&type->i_mutex_dir_key#10){++++}-{3:3}, at: vfs_fileattr_set+0x110/0xad4 fs/ioctl.c:685
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (&type->i_mutex_dir_key#10){++++}-{3:3}:
down_read+0xc0/0x398 kernel/locking/rwsem.c:1498
inode_lock_shared include/linux/fs.h:799 [inline]
lookup_slow+0x50/0x84 fs/namei.c:1679
walk_component+0x394/0x4cc fs/namei.c:1976
lookup_last fs/namei.c:2431 [inline]
path_lookupat+0x13c/0x3d0 fs/namei.c:2455
filename_lookup+0x1c4/0x4c8 fs/namei.c:2484
kern_path+0x4c/0x194 fs/namei.c:2582
lookup_bdev+0xc0/0x25c block/bdev.c:979
device_matched fs/btrfs/volumes.c:568 [inline]
btrfs_free_stale_devices+0x658/0x9ec fs/btrfs/volumes.c:608
btrfs_forget_devices+0x5c/0x98 fs/btrfs/volumes.c:1431
btrfs_control_ioctl+0x12c/0x248 fs/btrfs/super.c:2451
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:874 [inline]
__se_sys_ioctl fs/ioctl.c:860 [inline]
__arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:860
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:608
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:626
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584
-> #1 (&fs_devs->device_list_mutex){+.+.}-{3:3}:
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xa4/0xf8 kernel/locking/mutex.c:743
insert_dev_extents fs/btrfs/block-group.c:2395 [inline]
btrfs_create_pending_block_groups+0x490/0xebc fs/btrfs/block-group.c:2445
__btrfs_end_transaction+0x13c/0x610 fs/btrfs/transaction.c:1014
btrfs_end_transaction+0x24/0x34 fs/btrfs/transaction.c:1050
flush_space+0x458/0xc94 fs/btrfs/space-info.c:674
btrfs_async_reclaim_data_space+0xec/0x37c fs/btrfs/space-info.c:1169
process_one_work+0x790/0x11b8 kernel/workqueue.c:2310
worker_thread+0x910/0x1034 kernel/workqueue.c:2457
kthread+0x37c/0x45c kernel/kthread.c:334
ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:870
-> #0 (sb_internal#4){.+.+}-{0:0}:
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3788 [inline]
__lock_acquire+0x32d4/0x7638 kernel/locking/lockdep.c:5012
lock_acquire+0x240/0x77c kernel/locking/lockdep.c:5623
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
__sb_start_write include/linux/fs.h:1813 [inline]
sb_start_intwrite include/linux/fs.h:1930 [inline]
start_transaction+0x644/0x1480 fs/btrfs/transaction.c:678
btrfs_start_transaction+0x34/0x44 fs/btrfs/transaction.c:777
btrfs_fileattr_set+0x4dc/0x9b8 fs/btrfs/ioctl.c:331
vfs_fileattr_set+0x70c/0xad4 fs/ioctl.c:700
do_vfs_ioctl+0x1634/0x2a38
__do_sys_ioctl fs/ioctl.c:872 [inline]
__se_sys_ioctl fs/ioctl.c:860 [inline]
__arm64_sys_ioctl+0xe4/0x1c8 fs/ioctl.c:860
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:608
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:626
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584
other info that might help us debug this:
Chain exists of:
sb_internal#4 --> &fs_devs->device_list_mutex --> &type->i_mutex_dir_key#10
Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&type->i_mutex_dir_key#10);
                               lock(&fs_devs->device_list_mutex);
                               lock(&type->i_mutex_dir_key#10);
  lock(sb_internal#4);
*** DEADLOCK ***
2 locks held by 5�/5195:
#0: ffff0000d7bf0460 (sb_writers#14){.+.+}-{0:0}, at: mnt_want_write_file+0x64/0x1e8 fs/namespace.c:421
#1: ffff0000e925ad50 (&type->i_mutex_dir_key#10){++++}-{3:3}, at: inode_lock include/linux/fs.h:789 [inline]
#1: ffff0000e925ad50 (&type->i_mutex_dir_key#10){++++}-{3:3}, at: vfs_fileattr_set+0x110/0xad4 fs/ioctl.c:685
stack backtrace:
CPU: 1 PID: 5195 Comm: 5� Not tainted 5.15.178-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Call trace:
dump_backtrace+0x0/0x530 arch/arm64/kernel/stacktrace.c:152
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:216
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x108/0x170 lib/dump_stack.c:106
dump_stack+0x1c/0x58 lib/dump_stack.c:113
print_circular_bug+0x150/0x1b8 kernel/locking/lockdep.c:2011
check_noncircular+0x2cc/0x378 kernel/locking/lockdep.c:2133
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3788 [inline]
__lock_acquire+0x32d4/0x7638 kernel/locking/lockdep.c:5012
lock_acquire+0x240/0x77c kernel/locking/lockdep.c:5623
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
__sb_start_write include/linux/fs.h:1813 [inline]
sb_start_intwrite include/linux/fs.h:1930 [inline]
start_transaction+0x644/0x1480 fs/btrfs/transaction.c:678
btrfs_start_transaction+0x34/0x44 fs/btrfs/transaction.c:777
btrfs_fileattr_set+0x4dc/0x9b8 fs/btrfs/ioctl.c:331
vfs_fileattr_set+0x70c/0xad4 fs/ioctl.c:700
do_vfs_ioctl+0x1634/0x2a38
__do_sys_ioctl fs/ioctl.c:872 [inline]
__se_sys_ioctl fs/ioctl.c:860 [inline]
__arm64_sys_ioctl+0xe4/0x1c8 fs/ioctl.c:860
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:608
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:626
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup