Hello,
syzbot found the following issue on:
HEAD commit: 4a243110dc88 Linux 6.6.114
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1466fb04580000
kernel config: https://syzkaller.appspot.com/x/.config?x=12606d4b8832c7e4
dashboard link: https://syzkaller.appspot.com/bug?extid=1fdcfe1e229e0f2b423b
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/1950ac2cd960/disk-4a243110.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/d7dccd93693b/vmlinux-4a243110.xz
kernel image: https://storage.googleapis.com/syzbot-assets/6f93496e2b47/bzImage-4a243110.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+1fdcfe...@syzkaller.appspotmail.com
loop0: detected capacity change from 0 to 4096
ntfs3: loop0: Different NTFS sector size (2048) and media sector size (512).
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.0.3542/21802 is trying to acquire lock:
ffff888051f78540 (mapping.invalidate_lock#10){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:859 [inline]
ffff888051f78540 (mapping.invalidate_lock#10){++++}-{3:3}, at: filemap_fault+0x5db/0x15a0 mm/filemap.c:3330
but task is already holding lock:
ffff88807b69baa0 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_lock include/linux/mmap_lock.h:146 [inline]
ffff88807b69baa0 (&mm->mmap_lock){++++}-{3:3}, at: __mm_populate+0x16f/0x380 mm/gup.c:1675
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (&mm->mmap_lock){++++}-{3:3}:
__might_fault+0xc6/0x120 mm/memory.c:5946
_copy_to_user+0x2a/0xa0 lib/usercopy.c:36
copy_to_user include/linux/uaccess.h:191 [inline]
fiemap_fill_next_extent+0x1c1/0x390 fs/ioctl.c:145
ni_fiemap+0x7e6/0xbe0 fs/ntfs3/frecord.c:2038
ntfs_fiemap+0xdb/0x130 fs/ntfs3/file.c:1216
ioctl_fiemap fs/ioctl.c:220 [inline]
do_vfs_ioctl+0x140c/0x1bb0 fs/ioctl.c:811
__do_sys_ioctl fs/ioctl.c:869 [inline]
__se_sys_ioctl+0x83/0x170 fs/ioctl.c:857
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
-> #1 (&ni->ni_lock#2/5){+.+.}-{3:3}:
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
ni_lock fs/ntfs3/ntfs_fs.h:1119 [inline]
ntfs_fallocate+0x7b2/0xfd0 fs/ntfs3/file.c:557
vfs_fallocate+0x58e/0x700 fs/open.c:324
ksys_fallocate fs/open.c:347 [inline]
__do_sys_fallocate fs/open.c:355 [inline]
__se_sys_fallocate fs/open.c:353 [inline]
__x64_sys_fallocate+0xc1/0x110 fs/open.c:353
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
-> #0 (mapping.invalidate_lock#10){++++}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
down_read+0x46/0x2e0 kernel/locking/rwsem.c:1520
filemap_invalidate_lock_shared include/linux/fs.h:859 [inline]
filemap_fault+0x5db/0x15a0 mm/filemap.c:3330
__do_fault+0x13b/0x4e0 mm/memory.c:4243
do_read_fault mm/memory.c:4616 [inline]
do_fault mm/memory.c:4753 [inline]
do_pte_missing mm/memory.c:3688 [inline]
handle_pte_fault mm/memory.c:5025 [inline]
__handle_mm_fault mm/memory.c:5166 [inline]
handle_mm_fault+0x3886/0x4920 mm/memory.c:5331
faultin_page mm/gup.c:868 [inline]
__get_user_pages+0x5ea/0x1470 mm/gup.c:1167
populate_vma_page_range+0x2b6/0x370 mm/gup.c:1593
__mm_populate+0x24c/0x380 mm/gup.c:1696
mm_populate include/linux/mm.h:3383 [inline]
vm_mmap_pgoff+0x2e7/0x400 mm/util.c:561
ksys_mmap_pgoff+0x520/0x700 mm/mmap.c:1431
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
other info that might help us debug this:
Chain exists of:
mapping.invalidate_lock#10 --> &ni->ni_lock#2/5 --> &mm->mmap_lock
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  rlock(&mm->mmap_lock);
                               lock(&ni->ni_lock#2/5);
                               lock(&mm->mmap_lock);
  rlock(mapping.invalidate_lock#10);
*** DEADLOCK ***
1 lock held by syz.0.3542/21802:
#0: ffff88807b69baa0 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_lock include/linux/mmap_lock.h:146 [inline]
#0: ffff88807b69baa0 (&mm->mmap_lock){++++}-{3:3}, at: __mm_populate+0x16f/0x380 mm/gup.c:1675
stack backtrace:
CPU: 1 PID: 21802 Comm: syz.0.3542 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
check_noncircular+0x2bd/0x3c0 kernel/locking/lockdep.c:2187
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
down_read+0x46/0x2e0 kernel/locking/rwsem.c:1520
filemap_invalidate_lock_shared include/linux/fs.h:859 [inline]
filemap_fault+0x5db/0x15a0 mm/filemap.c:3330
__do_fault+0x13b/0x4e0 mm/memory.c:4243
do_read_fault mm/memory.c:4616 [inline]
do_fault mm/memory.c:4753 [inline]
do_pte_missing mm/memory.c:3688 [inline]
handle_pte_fault mm/memory.c:5025 [inline]
__handle_mm_fault mm/memory.c:5166 [inline]
handle_mm_fault+0x3886/0x4920 mm/memory.c:5331
faultin_page mm/gup.c:868 [inline]
__get_user_pages+0x5ea/0x1470 mm/gup.c:1167
populate_vma_page_range+0x2b6/0x370 mm/gup.c:1593
__mm_populate+0x24c/0x380 mm/gup.c:1696
mm_populate include/linux/mm.h:3383 [inline]
vm_mmap_pgoff+0x2e7/0x400 mm/util.c:561
ksys_mmap_pgoff+0x520/0x700 mm/mmap.c:1431
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f401958efc9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f401a4d2038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f40197e5fa0 RCX: 00007f401958efc9
RDX: 0000000000000002 RSI: 0000000000b36000 RDI: 0000200000000000
RBP: 00007f4019611f91 R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f40197e6038 R14: 00007f40197e5fa0 R15: 00007ffffafd2a28
</TASK>
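
For reference, all three paths in the dependency chain are reachable via ordinary syscalls on the same ntfs3 file: FS_IOC_FIEMAP ends up in copy_to_user() while ni_lock is held (-> #2), fallocate() takes mapping.invalidate_lock and then ni_lock (-> #1), and an mmap() with MAP_POPULATE pre-faults pages under mmap_lock and then takes invalidate_lock shared in filemap_fault() (-> #0). The sketch below only touches those paths sequentially; it is not a reproducer (none is available yet), and the file path, the fallocate mode, and the sizes are assumptions.

/*
 * Illustrative sketch only -- NOT a reproducer. The mount point, the
 * fallocate mode, and the sizes are assumptions; the actual deadlock
 * would also need the three paths racing from separate tasks against
 * the same ntfs3 file.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/falloc.h>
#include <linux/fiemap.h>
#include <linux/fs.h>

int main(void)
{
	int fd = open("/mnt/ntfs3/file", O_RDWR);	/* hypothetical path */
	if (fd < 0)
		return 1;

	/* -> #2: ioctl_fiemap() -> ni_fiemap() -> fiemap_fill_next_extent()
	 * copies extent data to the user buffer while ni_lock is held, so a
	 * fault on that buffer takes mmap_lock under ni_lock. */
	struct fiemap *fm = calloc(1, sizeof(*fm) + sizeof(struct fiemap_extent));
	if (fm) {
		fm->fm_length = ~0ULL;
		fm->fm_extent_count = 1;
		ioctl(fd, FS_IOC_FIEMAP, fm);
		free(fm);
	}

	/* -> #1: ntfs_fallocate() takes mapping.invalidate_lock and then
	 * ni_lock (punch-hole mode is an assumption, not from the report). */
	fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, 4096);

	/* -> #0: MAP_POPULATE pre-faults pages under mmap_lock, and
	 * filemap_fault() then takes invalidate_lock shared. */
	void *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED | MAP_POPULATE, fd, 0);
	if (p != MAP_FAILED)
		munmap(p, 4096);

	close(fd);
	return 0;
}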
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup