[v5.15] possible deadlock in __ntfs_clear_inode (2)

From: syzbot
To: syzkaller...@googlegroups.com

Hello,

syzbot found the following issue on:

HEAD commit: 91d48252ad4b Linux 5.15.202
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=127f18d2580000
kernel config: https://syzkaller.appspot.com/x/.config?x=353ae28c40b35af5
dashboard link: https://syzkaller.appspot.com/bug?extid=cb5baba8b1ba555b6092
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/74e75dfcb812/disk-91d48252.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/bfa72aab00f2/vmlinux-91d48252.xz
kernel image: https://storage.googleapis.com/syzbot-assets/47ea72d1c7dc/bzImage-91d48252.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+cb5bab...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
kswapd0/254 is trying to acquire lock:
ffff88805495c300 (&rl->lock){++++}-{3:3}, at: __ntfs_clear_inode+0x32/0x1e0 fs/ntfs/inode.c:2189

but task is already holding lock:
ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: 0x1

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (fs_reclaim){+.+.}-{0:0}:
__fs_reclaim_acquire mm/page_alloc.c:4580 [inline]
fs_reclaim_acquire+0x6d/0x100 mm/page_alloc.c:4594
prepare_alloc_pages+0x15a/0x5f0 mm/page_alloc.c:5272
__alloc_pages+0x11b/0x480 mm/page_alloc.c:5490
__page_cache_alloc+0xce/0x440 mm/filemap.c:1022
do_read_cache_page+0x1da/0x1030 mm/filemap.c:3457
read_mapping_page include/linux/pagemap.h:515 [inline]
ntfs_map_page+0x24/0x390 fs/ntfs/aops.h:75
map_mft_record_page fs/ntfs/mft.c:73 [inline]
map_mft_record+0x1c9/0x620 fs/ntfs/mft.c:156
ntfs_read_locked_inode+0x1ae/0x4de0 fs/ntfs/inode.c:550
ntfs_iget+0x108/0x1a0 fs/ntfs/inode.c:177
ntfs_lookup+0x24f/0xc40 fs/ntfs/namei.c:117
__lookup_slow+0x29d/0x410 fs/namei.c:1671
lookup_slow+0x53/0x70 fs/namei.c:1688
walk_component+0x319/0x460 fs/namei.c:1984
link_path_walk+0x665/0xd70 fs/namei.c:-1
path_openat+0x28d/0x2fa0 fs/namei.c:3746
do_filp_open+0x1e2/0x410 fs/namei.c:3777
do_sys_openat2+0x150/0x4b0 fs/open.c:1255
do_sys_open fs/open.c:1271 [inline]
__do_sys_openat fs/open.c:1287 [inline]
__se_sys_openat fs/open.c:1282 [inline]
__x64_sys_openat+0x135/0x160 fs/open.c:1282
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0

-> #1 (&ni->mrec_lock){+.+.}-{3:3}:
__mutex_lock_common+0x1e3/0x2400 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
map_mft_record+0x4e/0x620 fs/ntfs/mft.c:154
ntfs_truncate+0x280/0x27c0 fs/ntfs/inode.c:2383
ntfs_truncate_vfs fs/ntfs/inode.c:2862 [inline]
ntfs_setattr+0x2bc/0x3a0 fs/ntfs/inode.c:2914
notify_change+0xbcd/0xee0 fs/attr.c:505
do_truncate+0x1ac/0x240 fs/open.c:65
vfs_truncate+0x262/0x2f0 fs/open.c:111
do_sys_truncate+0xf2/0x1c0 fs/open.c:134
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0

-> #0 (&rl->lock){++++}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3788 [inline]
__lock_acquire+0x2c42/0x7d10 kernel/locking/lockdep.c:5012
lock_acquire+0x19e/0x400 kernel/locking/lockdep.c:5623
down_write+0x38/0x60 kernel/locking/rwsem.c:1551
__ntfs_clear_inode+0x32/0x1e0 fs/ntfs/inode.c:2189
ntfs_evict_big_inode+0x2c4/0x4a0 fs/ntfs/inode.c:2278
evict+0x4c9/0x8d0 fs/inode.c:647
dispose_list fs/inode.c:680 [inline]
prune_icache_sb+0x220/0x2d0 fs/inode.c:879
super_cache_scan+0x343/0x440 fs/super.c:107
do_shrink_slab+0x510/0xd00 mm/vmscan.c:765
shrink_slab_memcg mm/vmscan.c:834 [inline]
shrink_slab+0x450/0x7a0 mm/vmscan.c:913
shrink_node_memcgs mm/vmscan.c:2958 [inline]
shrink_node+0x110c/0x2610 mm/vmscan.c:3079
kswapd_shrink_node mm/vmscan.c:3821 [inline]
balance_pgdat+0xe92/0x1a10 mm/vmscan.c:4012
kswapd+0x7dd/0xdd0 mm/vmscan.c:4271
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287

other info that might help us debug this:

Chain exists of:
&rl->lock --> &ni->mrec_lock --> fs_reclaim

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&ni->mrec_lock);
                               lock(fs_reclaim);
  lock(&rl->lock);

*** DEADLOCK ***

3 locks held by kswapd0/254:
#0: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: 0x1
#1: ffffffff8c3bbfd0 (shrinker_rwsem){++++}-{3:3}, at: shrink_slab_memcg mm/vmscan.c:807 [inline]
#1: ffffffff8c3bbfd0 (shrinker_rwsem){++++}-{3:3}, at: shrink_slab+0x22f/0x7a0 mm/vmscan.c:913
#2: ffff8880240820e0 (&type->s_umount_key#53){++++}-{3:3}, at: trylock_super fs/super.c:418 [inline]
#2: ffff8880240820e0 (&type->s_umount_key#53){++++}-{3:3}, at: super_cache_scan+0x70/0x440 fs/super.c:80

stack backtrace:
CPU: 1 PID: 254 Comm: kswapd0 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
<TASK>
dump_stack_lvl+0x188/0x250 lib/dump_stack.c:106
check_noncircular+0x296/0x330 kernel/locking/lockdep.c:2133
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3788 [inline]
__lock_acquire+0x2c42/0x7d10 kernel/locking/lockdep.c:5012
lock_acquire+0x19e/0x400 kernel/locking/lockdep.c:5623
down_write+0x38/0x60 kernel/locking/rwsem.c:1551
__ntfs_clear_inode+0x32/0x1e0 fs/ntfs/inode.c:2189
ntfs_evict_big_inode+0x2c4/0x4a0 fs/ntfs/inode.c:2278
evict+0x4c9/0x8d0 fs/inode.c:647
dispose_list fs/inode.c:680 [inline]
prune_icache_sb+0x220/0x2d0 fs/inode.c:879
super_cache_scan+0x343/0x440 fs/super.c:107
do_shrink_slab+0x510/0xd00 mm/vmscan.c:765
shrink_slab_memcg mm/vmscan.c:834 [inline]
shrink_slab+0x450/0x7a0 mm/vmscan.c:913
shrink_node_memcgs mm/vmscan.c:2958 [inline]
shrink_node+0x110c/0x2610 mm/vmscan.c:3079
kswapd_shrink_node mm/vmscan.c:3821 [inline]
balance_pgdat+0xe92/0x1a10 mm/vmscan.c:4012
kswapd+0x7dd/0xdd0 mm/vmscan.c:4271
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup