Hello,
syzbot found the following issue on:
HEAD commit:    60a9e718726f Linux 6.6.106
git tree:       linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=11468712580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=12606d4b8832c7e4
dashboard link: https://syzkaller.appspot.com/bug?extid=000400bb77ad91b66624
compiler:       Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image:   https://storage.googleapis.com/syzbot-assets/eca27e056a5a/disk-60a9e718.raw.xz
vmlinux:      https://storage.googleapis.com/syzbot-assets/bc64d4eeb7f6/vmlinux-60a9e718.xz
kernel image: https://storage.googleapis.com/syzbot-assets/a345041561ac/bzImage-60a9e718.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+000400bb77ad91b66624@syzkaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.2.656/8572 is trying to acquire lock:
ffff8880216e54c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_put+0xff/0x760 block/bdev.c:927
but task is already holding lock:
ffff888021617b60 (&lo->lo_mutex){+.+.}-{3:3}, at: loop_set_block_size+0x7c/0x480 drivers/block/loop.c:1490
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (&lo->lo_mutex){+.+.}-{3:3}:
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
lo_release+0xb9/0x200 drivers/block/loop.c:1753
blkdev_put+0x5bd/0x760 block/bdev.c:-1
blkdev_release+0x84/0x90 block/fops.c:604
__fput+0x234/0x970 fs/file_table.c:384
__do_sys_close fs/open.c:1571 [inline]
__se_sys_close+0x15f/0x220 fs/open.c:1556
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
-> #0 (&disk->open_mutex){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
blkdev_put+0xff/0x760 block/bdev.c:927
blkdev_release+0x84/0x90 block/fops.c:604
__fput+0x234/0x970 fs/file_table.c:384
task_work_run+0x1ce/0x250 kernel/task_work.c:239
exit_task_work include/linux/task_work.h:43 [inline]
do_exit+0x90b/0x23c0 kernel/exit.c:883
do_group_exit+0x21b/0x2d0 kernel/exit.c:1024
get_signal+0x12fc/0x1400 kernel/signal.c:2902
arch_do_signal_or_restart+0x96/0x780 arch/x86/kernel/signal.c:310
exit_to_user_mode_loop+0x70/0x110 kernel/entry/common.c:174
exit_to_user_mode_prepare+0xf6/0x180 kernel/entry/common.c:210
__syscall_exit_to_user_mode_work kernel/entry/common.c:291 [inline]
syscall_exit_to_user_mode+0x1a/0x50 kernel/entry/common.c:302
do_syscall_64+0x61/0xb0 arch/x86/entry/common.c:87
entry_SYSCALL_64_after_hwframe+0x68/0xd2
other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&lo->lo_mutex);
                               lock(&disk->open_mutex);
                               lock(&lo->lo_mutex);
  lock(&disk->open_mutex);

 *** DEADLOCK ***
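
In userspace terms, the scenario above is a plain ABBA inversion. A minimal sketch, assuming nothing beyond the two lock-order chains lockdep printed, with pthread mutexes standing in for lo->lo_mutex and disk->open_mutex (compile with -pthread; it deadlocks by construction):

/*
 * Illustrative only: the real paths are loop_set_block_size() holding
 * lo_mutex on one side, and blkdev_put() -> lo_release() on the other.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lo_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t open_mutex = PTHREAD_MUTEX_INITIALIZER;

static void *cpu0(void *arg)    /* lo_mutex, then open_mutex */
{
        pthread_mutex_lock(&lo_mutex);
        sleep(1);                               /* widen the race window */
        pthread_mutex_lock(&open_mutex);        /* blocks: cpu1 holds it */
        pthread_mutex_unlock(&open_mutex);
        pthread_mutex_unlock(&lo_mutex);
        return NULL;
}

static void *cpu1(void *arg)    /* open_mutex, then lo_mutex */
{
        pthread_mutex_lock(&open_mutex);
        sleep(1);
        pthread_mutex_lock(&lo_mutex);          /* blocks: cpu0 holds it -> deadlock */
        pthread_mutex_unlock(&lo_mutex);
        pthread_mutex_unlock(&open_mutex);
        return NULL;
}

int main(void)
{
        pthread_t t0, t1;

        pthread_create(&t0, NULL, cpu0, NULL);
        pthread_create(&t1, NULL, cpu1, NULL);
        pthread_join(t0, NULL);         /* never returns: both threads are stuck */
        pthread_join(t1, NULL);
        puts("unreachable with the sleeps in place");
        return 0;
}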
1 lock held by syz.2.656/8572:
#0: ffff888021617b60 (&lo->lo_mutex){+.+.}-{3:3}, at: loop_set_block_size+0x7c/0x480 drivers/block/loop.c:1490
stack backtrace:
CPU: 1 PID: 8572 Comm: syz.2.656 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
check_noncircular+0x2bd/0x3c0 kernel/locking/lockdep.c:2187
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
blkdev_put+0xff/0x760 block/bdev.c:927
blkdev_release+0x84/0x90 block/fops.c:604
__fput+0x234/0x970 fs/file_table.c:384
task_work_run+0x1ce/0x250 kernel/task_work.c:239
exit_task_work include/linux/task_work.h:43 [inline]
do_exit+0x90b/0x23c0 kernel/exit.c:883
do_group_exit+0x21b/0x2d0 kernel/exit.c:1024
get_signal+0x12fc/0x1400 kernel/signal.c:2902
arch_do_signal_or_restart+0x96/0x780 arch/x86/kernel/signal.c:310
exit_to_user_mode_loop+0x70/0x110 kernel/entry/common.c:174
exit_to_user_mode_prepare+0xf6/0x180 kernel/entry/common.c:210
__syscall_exit_to_user_mode_work kernel/entry/common.c:291 [inline]
syscall_exit_to_user_mode+0x1a/0x50 kernel/entry/common.c:302
do_syscall_64+0x61/0xb0 arch/x86/entry/common.c:87
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fec79f8eba9
Code: Unable to access opcode bytes at 0x7fec79f8eb7f.
RSP: 002b:00007fec7ad7f038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffea RBX: 00007fec7a1d5fa0 RCX: 00007fec79f8eba9
RDX: 0000000000000000 RSI: 0000000000004c09 RDI: 0000000000000008
RBP: 00007fec7a011e19 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fec7a1d6038 R14: 00007fec7a1d5fa0 R15: 00007ffe13dff3a8
</TASK>
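
For orientation, the register dump decodes to the ioctl side of the inversion: ORIG_RAX is 0x10 (__NR_ioctl on x86_64) and RSI is 0x4c09, i.e. LOOP_SET_BLOCK_SIZE from <linux/loop.h>. The sketch below is not a reproducer (syzbot reports none yet); it only names the two userspace entry points behind the two chains, with /dev/loop0 and the 512-byte block size as placeholders:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/loop.h>

int main(void)
{
        int fd = open("/dev/loop0", O_RDWR);    /* placeholder loop device */

        if (fd < 0)
                return 1;

        /* Chain #0 side: LOOP_SET_BLOCK_SIZE takes lo->lo_mutex
         * (loop.c:1490); the splat shows disk->open_mutex then being
         * requested via blkdev_put() while lo_mutex is still held. */
        ioctl(fd, LOOP_SET_BLOCK_SIZE, 512);

        /* Chain #1 side: close() -> blkdev_release() -> blkdev_put()
         * takes disk->open_mutex, then lo_release() takes lo->lo_mutex. */
        close(fd);
        return 0;
}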
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup