Hello,
syzbot found the following issue on:
HEAD commit: f6e38ae624cf Linux 6.1.158
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=13f05a58580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=68aa5a3af1cb953a
dashboard link: https://syzkaller.appspot.com/bug?extid=b10fd08ab62d15b5160c
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
userspace arch: arm64
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image:   https://storage.googleapis.com/syzbot-assets/c1bd671a9def/disk-f6e38ae6.raw.xz
vmlinux:      https://storage.googleapis.com/syzbot-assets/fa0af998ea40/vmlinux-f6e38ae6.xz
kernel image: https://storage.googleapis.com/syzbot-assets/e5512d873524/Image-f6e38ae6.gz.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+b10fd08ab62d15b5160c@syzkaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.5.78/4841 is trying to acquire lock:
ffff0000d5b64948 (&mm->mmap_lock){++++}-{3:3}, at: __might_fault+0x9c/0x124 mm/memory.c:5851
but task is already holding lock:
ffff0000f4116090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock+0x7c/0xe8 fs/reiserfs/lock.c:27
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (&sbi->lock){+.+.}-{3:3}:
__mutex_lock_common+0x190/0x1f38 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x38/0x44 kernel/locking/mutex.c:799
reiserfs_write_lock+0x7c/0xe8 fs/reiserfs/lock.c:27
map_block_for_writepage fs/reiserfs/inode.c:2388 [inline]
reiserfs_write_full_page fs/reiserfs/inode.c:2585 [inline]
reiserfs_writepage+0x76c/0x2a28 fs/reiserfs/inode.c:2737
writeout mm/migrate.c:888 [inline]
fallback_migrate_folio mm/migrate.c:912 [inline]
move_to_new_folio+0x4b0/0xb5c mm/migrate.c:961
__migrate_folio_move mm/migrate.c:1199 [inline]
migrate_folio_move mm/migrate.c:1297 [inline]
migrate_pages_batch mm/migrate.c:1639 [inline]
migrate_pages+0x259c/0x3bbc mm/migrate.c:1843
do_mbind mm/mempolicy.c:1334 [inline]
kernel_mbind mm/mempolicy.c:1481 [inline]
__do_sys_mbind mm/mempolicy.c:1558 [inline]
__se_sys_mbind mm/mempolicy.c:1554 [inline]
__arm64_sys_mbind+0x640/0x7e8 mm/mempolicy.c:1554
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2bc arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:140
do_el0_svc+0x58/0x13c arch/arm64/kernel/syscall.c:204
el0_svc+0x58/0x138 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:585
-> #0 (&mm->mmap_lock){++++}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3090 [inline]
check_prevs_add kernel/locking/lockdep.c:3209 [inline]
validate_chain kernel/locking/lockdep.c:3825 [inline]
__lock_acquire+0x293c/0x6544 kernel/locking/lockdep.c:5049
lock_acquire+0x20c/0x644 kernel/locking/lockdep.c:5662
__might_fault+0xc4/0x124 mm/memory.c:5852
reiserfs_ioctl+0x140/0x450 fs/reiserfs/ioctl.c:96
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl fs/ioctl.c:856 [inline]
__arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:856
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2bc arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:140
do_el0_svc+0x58/0x13c arch/arm64/kernel/syscall.c:204
el0_svc+0x58/0x138 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:585
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(&sbi->lock);
                               lock(&mm->mmap_lock);
                               lock(&sbi->lock);
  lock(&mm->mmap_lock);
*** DEADLOCK ***
1 lock held by syz.5.78/4841:
#0: ffff0000f4116090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock+0x7c/0xe8 fs/reiserfs/lock.c:27
stack backtrace:
CPU: 1 PID: 4841 Comm: syz.5.78 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/03/2025
Call trace:
dump_backtrace+0x1c8/0x1f4 arch/arm64/kernel/stacktrace.c:158
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:165
__dump_stack+0x30/0x40 lib/dump_stack.c:88
dump_stack_lvl+0xf8/0x160 lib/dump_stack.c:106
dump_stack+0x1c/0x5c lib/dump_stack.c:113
print_circular_bug+0x148/0x1b0 kernel/locking/lockdep.c:2048
check_noncircular+0x240/0x2d4 kernel/locking/lockdep.c:2170
check_prev_add kernel/locking/lockdep.c:3090 [inline]
check_prevs_add kernel/locking/lockdep.c:3209 [inline]
validate_chain kernel/locking/lockdep.c:3825 [inline]
__lock_acquire+0x293c/0x6544 kernel/locking/lockdep.c:5049
lock_acquire+0x20c/0x644 kernel/locking/lockdep.c:5662
__might_fault+0xc4/0x124 mm/memory.c:5852
reiserfs_ioctl+0x140/0x450 fs/reiserfs/ioctl.c:96
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl fs/ioctl.c:856 [inline]
__arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:856
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2bc arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:140
do_el0_svc+0x58/0x13c arch/arm64/kernel/syscall.c:204
el0_svc+0x58/0x138 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:585
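
For readers new to lockdep's two-column scenario: this is a classic AB-BA
inversion. The ioctl path takes &sbi->lock and may then take &mm->mmap_lock
via a page fault in copy_to_user(), while the mbind()-driven migration path
was seen taking &sbi->lock with &mm->mmap_lock already held. Below is a
minimal, self-contained userspace sketch of the same inversion, with pthread
mutexes standing in for the two kernel locks. It is illustrative only; the
function names and sleep() timing are assumptions for the demo, not the
actual kernel code paths.

/*
 * Hypothetical userspace analogue of the AB-BA inversion above.
 * Thread A plays the role of reiserfs_ioctl() on CPU0 (takes the
 * "sbi" lock, then the "mmap" lock); thread B plays the mbind()
 * migration/writeback path on CPU1 (opposite order).
 * Build with: cc -pthread aba-demo.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t sbi_lock  = PTHREAD_MUTEX_INITIALIZER; /* stand-in for &sbi->lock */
static pthread_mutex_t mmap_lock = PTHREAD_MUTEX_INITIALIZER; /* stand-in for &mm->mmap_lock */

static void *ioctl_path(void *arg)        /* CPU0 in the scenario */
{
    (void)arg;
    pthread_mutex_lock(&sbi_lock);        /* reiserfs_write_lock() */
    sleep(1);                             /* widen the race window */
    pthread_mutex_lock(&mmap_lock);       /* copy_to_user() faults -> mmap_lock */
    pthread_mutex_unlock(&mmap_lock);
    pthread_mutex_unlock(&sbi_lock);
    return NULL;
}

static void *migrate_path(void *arg)      /* CPU1 in the scenario */
{
    (void)arg;
    pthread_mutex_lock(&mmap_lock);       /* mbind() path holds mmap_lock */
    sleep(1);
    pthread_mutex_lock(&sbi_lock);        /* reiserfs_writepage() -> reiserfs_write_lock() */
    pthread_mutex_unlock(&sbi_lock);
    pthread_mutex_unlock(&mmap_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, ioctl_path, NULL);
    pthread_create(&b, NULL, migrate_path, NULL);
    pthread_join(a, NULL);                /* blocks forever: each thread waits on the other's lock */
    pthread_join(b, NULL);
    puts("unreachable while both threads race");
    return 0;
}

The standard cure is a single global lock order; for a pattern like this,
that usually means releasing the filesystem lock around any access to user
memory, so the fault path can never nest &mm->mmap_lock inside &sbi->lock.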
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup