[v6.1] possible deadlock in get_super


syzbot

Mar 4, 2025, 11:25:28 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 3a8358583626 Linux 6.1.129
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1360d8b7980000
kernel config: https://syzkaller.appspot.com/x/.config?x=ff93ecc085d8436e
dashboard link: https://syzkaller.appspot.com/bug?extid=44c7ee7d1f6c040d2451
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: arm64

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/3cc3985223b7/disk-3a835858.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/e40398fb298b/vmlinux-3a835858.xz
kernel image: https://storage.googleapis.com/syzbot-assets/40469708dc9a/Image-3a835858.gz.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+44c7ee...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.1.129-syzkaller #0 Not tainted
------------------------------------------------------
syz.8.295/6273 is trying to acquire lock:
ffff0000d502c0e0 (&type->s_umount_key#75){++++}-{3:3}, at: get_super+0x100/0x1f0 fs/super.c:828

but task is already holding lock:
ffff0000ce783998 (&nbd->config_lock){+.+.}-{3:3}, at: nbd_ioctl+0x128/0xc40 drivers/block/nbd.c:1535

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&nbd->config_lock){+.+.}-{3:3}:
__mutex_lock_common+0x190/0x21a0 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x38/0x44 kernel/locking/mutex.c:799
refcount_dec_and_mutex_lock+0x40/0x158 lib/refcount.c:118
nbd_config_put+0x3c/0x6c8 drivers/block/nbd.c:1320
nbd_release+0xf8/0x130 drivers/block/nbd.c:1624
blkdev_put+0x4e8/0x6e0
blkdev_close+0x58/0x94 block/fops.c:514
__fput+0x1c8/0x7c8 fs/file_table.c:320
____fput+0x20/0x30 fs/file_table.c:348
task_work_run+0x240/0x2f0 kernel/task_work.c:203
resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
do_notify_resume+0x2080/0x2cb8 arch/arm64/kernel/signal.c:1132
prepare_exit_to_user_mode arch/arm64/kernel/entry-common.c:137 [inline]
exit_to_user_mode arch/arm64/kernel/entry-common.c:142 [inline]
el0_svc+0x9c/0x168 arch/arm64/kernel/entry-common.c:638
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:585

-> #1 (&disk->open_mutex){+.+.}-{3:3}:
__mutex_lock_common+0x190/0x21a0 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x38/0x44 kernel/locking/mutex.c:799
blkdev_put+0xec/0x6e0 block/bdev.c:913
release_journal_dev fs/reiserfs/journal.c:2594 [inline]
free_journal_ram+0x308/0x374 fs/reiserfs/journal.c:1896
do_journal_release+0x2f8/0x454 fs/reiserfs/journal.c:1960
journal_release+0x2c/0x40 fs/reiserfs/journal.c:1971
reiserfs_put_super+0x204/0x444 fs/reiserfs/super.c:616
generic_shutdown_super+0x130/0x328 fs/super.c:501
kill_block_super+0x70/0xdc fs/super.c:1470
reiserfs_kill_sb+0x134/0x14c fs/reiserfs/super.c:570
deactivate_locked_super+0xac/0x124 fs/super.c:332
deactivate_super+0xf0/0x110 fs/super.c:363
cleanup_mnt+0x394/0x41c fs/namespace.c:1186
__cleanup_mnt+0x20/0x30 fs/namespace.c:1193
task_work_run+0x240/0x2f0 kernel/task_work.c:203
resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
do_notify_resume+0x2080/0x2cb8 arch/arm64/kernel/signal.c:1132
prepare_exit_to_user_mode arch/arm64/kernel/entry-common.c:137 [inline]
exit_to_user_mode arch/arm64/kernel/entry-common.c:142 [inline]
el0_svc+0x9c/0x168 arch/arm64/kernel/entry-common.c:638
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:585

-> #0 (&type->s_umount_key#75){++++}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3090 [inline]
check_prevs_add kernel/locking/lockdep.c:3209 [inline]
validate_chain kernel/locking/lockdep.c:3825 [inline]
__lock_acquire+0x3338/0x7680 kernel/locking/lockdep.c:5049
lock_acquire+0x26c/0x7cc kernel/locking/lockdep.c:5662
down_read+0x64/0x308 kernel/locking/rwsem.c:1520
get_super+0x100/0x1f0 fs/super.c:828
__invalidate_device+0x28/0x108 block/bdev.c:1005
nbd_clear_sock_ioctl drivers/block/nbd.c:1455 [inline]
__nbd_ioctl drivers/block/nbd.c:1482 [inline]
nbd_ioctl+0x2e8/0xc40 drivers/block/nbd.c:1542
blkdev_ioctl+0x408/0xb40 block/ioctl.c:620
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl fs/ioctl.c:856 [inline]
__arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:856
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2bc arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:140
do_el0_svc+0x58/0x13c arch/arm64/kernel/syscall.c:204
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:585

other info that might help us debug this:

Chain exists of:
&type->s_umount_key#75 --> &disk->open_mutex --> &nbd->config_lock

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&nbd->config_lock);
                               lock(&disk->open_mutex);
                               lock(&nbd->config_lock);
  lock(&type->s_umount_key#75);

*** DEADLOCK ***

1 lock held by syz.8.295/6273:
#0: ffff0000ce783998 (&nbd->config_lock){+.+.}-{3:3}, at: nbd_ioctl+0x128/0xc40 drivers/block/nbd.c:1535

stack backtrace:
CPU: 1 PID: 6273 Comm: syz.8.295 Not tainted 6.1.129-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Call trace:
dump_backtrace+0x1c8/0x1f4 arch/arm64/kernel/stacktrace.c:158
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:165
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x108/0x170 lib/dump_stack.c:106
dump_stack+0x1c/0x58 lib/dump_stack.c:113
print_circular_bug+0x150/0x1b8 kernel/locking/lockdep.c:2048
check_noncircular+0x2cc/0x378 kernel/locking/lockdep.c:2170
check_prev_add kernel/locking/lockdep.c:3090 [inline]
check_prevs_add kernel/locking/lockdep.c:3209 [inline]
validate_chain kernel/locking/lockdep.c:3825 [inline]
__lock_acquire+0x3338/0x7680 kernel/locking/lockdep.c:5049
lock_acquire+0x26c/0x7cc kernel/locking/lockdep.c:5662
down_read+0x64/0x308 kernel/locking/rwsem.c:1520
get_super+0x100/0x1f0 fs/super.c:828
__invalidate_device+0x28/0x108 block/bdev.c:1005
nbd_clear_sock_ioctl drivers/block/nbd.c:1455 [inline]
__nbd_ioctl drivers/block/nbd.c:1482 [inline]
nbd_ioctl+0x2e8/0xc40 drivers/block/nbd.c:1542
blkdev_ioctl+0x408/0xb40 block/ioctl.c:620
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl fs/ioctl.c:856 [inline]
__arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:856
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2bc arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:140
do_el0_svc+0x58/0x13c arch/arm64/kernel/syscall.c:204
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:585


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Jun 12, 2025, 12:25:36 PM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes have not happened for a while; there is no reproducer and no recent activity.