[moderation] [fs?] possible deadlock in fsnotify_destroy_marks (2)

syzbot

Feb 26, 2024, 2:19:17 AM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 39133352cbed Merge tag 'for-linus' of git://git.kernel.org..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=162de1d0180000
kernel config: https://syzkaller.appspot.com/x/.config?x=67f463bd6c2a0273
dashboard link: https://syzkaller.appspot.com/bug?extid=1db1c99d9f675fcae3f2
compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: i386
CC: [amir...@gmail.com ja...@suse.cz linux-...@vger.kernel.org linux-...@vger.kernel.org]

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/7bc7510fe41f/non_bootable_disk-39133352.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/e46af53e02fa/vmlinux-39133352.xz
kernel image: https://storage.googleapis.com/syzbot-assets/6ca804d5c1db/bzImage-39133352.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+1db1c9...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.8.0-rc5-syzkaller-00029-g39133352cbed #0 Not tainted
------------------------------------------------------
kswapd0/109 is trying to acquire lock:
ffff888029dc0130 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_group_lock include/linux/fsnotify_backend.h:266 [inline]
ffff888029dc0130 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_destroy_mark fs/notify/mark.c:496 [inline]
ffff888029dc0130 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_destroy_marks+0x149/0x4a0 fs/notify/mark.c:818

but task is already holding lock:
ffffffff8d720500 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x160/0x1a90 mm/vmscan.c:6771

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
__fs_reclaim_acquire mm/page_alloc.c:3692 [inline]
fs_reclaim_acquire+0x104/0x150 mm/page_alloc.c:3706
might_alloc include/linux/sched/mm.h:303 [inline]
slab_pre_alloc_hook mm/slub.c:3761 [inline]
slab_alloc_node mm/slub.c:3842 [inline]
kmem_cache_alloc+0x4f/0x320 mm/slub.c:3867
inotify_new_watch fs/notify/inotify/inotify_user.c:599 [inline]
inotify_update_watch+0x527/0xc10 fs/notify/inotify/inotify_user.c:647
__do_sys_inotify_add_watch fs/notify/inotify/inotify_user.c:786 [inline]
__se_sys_inotify_add_watch fs/notify/inotify/inotify_user.c:729 [inline]
__x64_sys_inotify_add_watch+0x2e9/0x380 fs/notify/inotify/inotify_user.c:729
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xd5/0x270 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x6f/0x77

-> #0 (&group->mark_mutex){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x244f/0x3b40 kernel/locking/lockdep.c:5137
lock_acquire kernel/locking/lockdep.c:5754 [inline]
lock_acquire+0x1ae/0x520 kernel/locking/lockdep.c:5719
__mutex_lock_common kernel/locking/mutex.c:608 [inline]
__mutex_lock+0x175/0x9d0 kernel/locking/mutex.c:752
fsnotify_group_lock include/linux/fsnotify_backend.h:266 [inline]
fsnotify_destroy_mark fs/notify/mark.c:496 [inline]
fsnotify_destroy_marks+0x149/0x4a0 fs/notify/mark.c:818
fsnotify_inoderemove include/linux/fsnotify.h:233 [inline]
dentry_unlink_inode+0x38f/0x440 fs/dcache.c:396
__dentry_kill+0x1d0/0x600 fs/dcache.c:603
shrink_kill fs/dcache.c:1048 [inline]
shrink_dentry_list+0x140/0x5d0 fs/dcache.c:1075
prune_dcache_sb+0xeb/0x150 fs/dcache.c:1156
super_cache_scan+0x32a/0x550 fs/super.c:221
do_shrink_slab+0x426/0x1120 mm/shrinker.c:435
shrink_slab_memcg mm/shrinker.c:548 [inline]
shrink_slab+0xa87/0x1310 mm/shrinker.c:626
shrink_one+0x493/0x7b0 mm/vmscan.c:4767
shrink_many mm/vmscan.c:4828 [inline]
lru_gen_shrink_node mm/vmscan.c:4929 [inline]
shrink_node+0x21d0/0x3790 mm/vmscan.c:5888
kswapd_shrink_node mm/vmscan.c:6693 [inline]
balance_pgdat+0x9d2/0x1a90 mm/vmscan.c:6883
kswapd+0x5be/0xc00 mm/vmscan.c:7143
kthread+0x2c6/0x3b0 kernel/kthread.c:388
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1b/0x30 arch/x86/entry/entry_64.S:242

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&group->mark_mutex);
                               lock(fs_reclaim);
  lock(&group->mark_mutex);

*** DEADLOCK ***
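
Restated as code, the inversion above amounts to the two paths sketched
below. This is an illustrative sketch with simplified stand-in names, not a
literal excerpt of the kernel sources:

#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/fsnotify_backend.h>

/* CPU1 (chain #1): inotify_add_watch(2) takes group->mark_mutex and then
 * performs a GFP_KERNEL allocation, which may enter direct reclaim, i.e.
 * "acquire" fs_reclaim while mark_mutex is held. */
static int add_watch_path(struct fsnotify_group *group,
			  struct kmem_cache *cachep)
{
	void *mark;

	mutex_lock(&group->mark_mutex);
	mark = kmem_cache_alloc(cachep, GFP_KERNEL); /* fs_reclaim under mark_mutex */
	/* ... initialize and attach the mark ... */
	mutex_unlock(&group->mark_mutex);
	return mark ? 0 : -ENOMEM;
}

/* CPU0 (chain #0): kswapd is already inside fs_reclaim; the superblock
 * shrinker kills a dentry, the inode is dropped, and fsnotify mark
 * teardown takes group->mark_mutex, i.e. mark_mutex under fs_reclaim. */
static void reclaim_path(struct fsnotify_group *group)
{
	mutex_lock(&group->mark_mutex);   /* mark_mutex under fs_reclaim */
	/* ... fsnotify_destroy_mark() teardown ... */
	mutex_unlock(&group->mark_mutex);
}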

2 locks held by kswapd0/109:
#0: ffffffff8d720500 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x160/0x1a90 mm/vmscan.c:6771
#1: ffff88801d5600e0 (&type->s_umount_key#52){++++}-{3:3}, at: super_trylock_shared fs/super.c:566 [inline]
#1: ffff88801d5600e0 (&type->s_umount_key#52){++++}-{3:3}, at: super_cache_scan+0x96/0x550 fs/super.c:196

stack backtrace:
CPU: 0 PID: 109 Comm: kswapd0 Not tainted 6.8.0-rc5-syzkaller-00029-g39133352cbed #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xd9/0x1b0 lib/dump_stack.c:106
check_noncircular+0x31b/0x400 kernel/locking/lockdep.c:2187
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x244f/0x3b40 kernel/locking/lockdep.c:5137
lock_acquire kernel/locking/lockdep.c:5754 [inline]
lock_acquire+0x1ae/0x520 kernel/locking/lockdep.c:5719
__mutex_lock_common kernel/locking/mutex.c:608 [inline]
__mutex_lock+0x175/0x9d0 kernel/locking/mutex.c:752
fsnotify_group_lock include/linux/fsnotify_backend.h:266 [inline]
fsnotify_destroy_mark fs/notify/mark.c:496 [inline]
fsnotify_destroy_marks+0x149/0x4a0 fs/notify/mark.c:818
fsnotify_inoderemove include/linux/fsnotify.h:233 [inline]
dentry_unlink_inode+0x38f/0x440 fs/dcache.c:396
__dentry_kill+0x1d0/0x600 fs/dcache.c:603
shrink_kill fs/dcache.c:1048 [inline]
shrink_dentry_list+0x140/0x5d0 fs/dcache.c:1075
prune_dcache_sb+0xeb/0x150 fs/dcache.c:1156
super_cache_scan+0x32a/0x550 fs/super.c:221
do_shrink_slab+0x426/0x1120 mm/shrinker.c:435
shrink_slab_memcg mm/shrinker.c:548 [inline]
shrink_slab+0xa87/0x1310 mm/shrinker.c:626
shrink_one+0x493/0x7b0 mm/vmscan.c:4767
shrink_many mm/vmscan.c:4828 [inline]
lru_gen_shrink_node mm/vmscan.c:4929 [inline]
shrink_node+0x21d0/0x3790 mm/vmscan.c:5888
kswapd_shrink_node mm/vmscan.c:6693 [inline]
balance_pgdat+0x9d2/0x1a90 mm/vmscan.c:6883
kswapd+0x5be/0xc00 mm/vmscan.c:7143
kthread+0x2c6/0x3b0 kernel/kthread.c:388
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1b/0x30 arch/x86/entry/entry_64.S:242
</TASK>
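
Not a verdict on the correct fix, only a triage note: cycles of this shape
are commonly broken by making sure that allocations performed while
mark_mutex is held cannot recurse into filesystem reclaim, for example by
wrapping them in a memalloc_nofs_save()/memalloc_nofs_restore() scope. The
helper below is a hypothetical sketch of that pattern, not the actual
inotify code; whether this is the appropriate fix here is for the fsnotify
maintainers to judge:

#include <linux/sched/mm.h>   /* memalloc_nofs_save()/memalloc_nofs_restore() */
#include <linux/slab.h>
#include <linux/fsnotify_backend.h>

/* Hypothetical helper: do the mark allocation inside a NOFS scope so that
 * direct reclaim triggered by this allocation cannot call back into
 * filesystem shrinkers (and therefore cannot retake mark_mutex). */
static void *alloc_mark_under_group_lock(struct fsnotify_group *group,
					 struct kmem_cache *cachep)
{
	unsigned int nofs_flags;
	void *mark;

	mutex_lock(&group->mark_mutex);
	nofs_flags = memalloc_nofs_save();   /* begin GFP_NOFS scope */
	mark = kmem_cache_alloc(cachep, GFP_KERNEL);
	memalloc_nofs_restore(nofs_flags);   /* end GFP_NOFS scope */
	/* ... initialize and attach the mark, or free it on error ... */
	mutex_unlock(&group->mark_mutex);
	return mark;
}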


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup