[syzbot] [bcachefs?] possible deadlock in bch2_clear_folio_bits


syzbot

to kent.ov...@linux.dev, linux-b...@vger.kernel.org, linux-...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: e5f0a698b34e Linux 6.17
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=102a06e2580000
kernel config: https://syzkaller.appspot.com/x/.config?x=bf99f2510ef92ba5
dashboard link: https://syzkaller.appspot.com/bug?extid=5438c0a9fffa875ff01e
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/d900f083ada3/non_bootable_disk-e5f0a698.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/c024c1509688/vmlinux-e5f0a698.xz
kernel image: https://storage.googleapis.com/syzbot-assets/28d9bfec58a3/bzImage-e5f0a698.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+5438c0...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
kswapd1/78 is trying to acquire lock:
ffff88805299d558 (&inode->ei_quota_lock){+.+.}-{4:4}, at: bch2_i_sectors_acct fs/bcachefs/fs-io.h:137 [inline]
ffff88805299d558 (&inode->ei_quota_lock){+.+.}-{4:4}, at: bch2_clear_folio_bits+0x506/0x830 fs/bcachefs/fs-io-pagecache.c:513

but task is already holding lock:
ffffffff8e4419a0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:7012 [inline]
ffffffff8e4419a0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0x951/0x2830 mm/vmscan.c:7386

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
__fs_reclaim_acquire mm/page_alloc.c:4234 [inline]
fs_reclaim_acquire+0x72/0x100 mm/page_alloc.c:4248
might_alloc include/linux/sched/mm.h:318 [inline]
slab_pre_alloc_hook mm/slub.c:4142 [inline]
slab_alloc_node mm/slub.c:4220 [inline]
__kmalloc_cache_noprof+0x41/0x3d0 mm/slub.c:4402
kmalloc_noprof include/linux/slab.h:905 [inline]
kzalloc_noprof include/linux/slab.h:1039 [inline]
genradix_alloc_node include/linux/generic-radix-tree.h:101 [inline]
__genradix_ptr_alloc+0x199/0x4a0 lib/generic-radix-tree.c:44
bch2_quota_transfer+0x300/0xa50 fs/bcachefs/quota.c:343
bch2_fs_quota_transfer+0x27d/0x4f0 fs/bcachefs/fs.c:183
bch2_set_projid fs/bcachefs/fs.h:166 [inline]
bch2_fileattr_set+0x515/0x6f0 fs/bcachefs/fs.c:1728
vfs_fileattr_set+0x92f/0xb90 fs/file_attr.c:298
ioctl_fssetxattr+0x1ed/0x270 fs/file_attr.c:372
do_vfs_ioctl+0x81d/0x1430 fs/ioctl.c:567
__do_sys_ioctl fs/ioctl.c:596 [inline]
__se_sys_ioctl+0x82/0x170 fs/ioctl.c:584
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&inode->ei_quota_lock){+.+.}-{4:4}:
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
__lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
__mutex_lock_common kernel/locking/mutex.c:598 [inline]
__mutex_lock+0x187/0x1350 kernel/locking/mutex.c:760
bch2_i_sectors_acct fs/bcachefs/fs-io.h:137 [inline]
bch2_clear_folio_bits+0x506/0x830 fs/bcachefs/fs-io-pagecache.c:513
bch2_release_folio+0xf7/0x150 fs/bcachefs/fs-io-pagecache.c:672
shrink_folio_list+0x20ac/0x4cd0 mm/vmscan.c:1518
evict_folios+0x471e/0x57c0 mm/vmscan.c:4744
try_to_shrink_lruvec+0x8a3/0xb50 mm/vmscan.c:4907
shrink_one+0x21b/0x7c0 mm/vmscan.c:4952
shrink_many mm/vmscan.c:5015 [inline]
lru_gen_shrink_node mm/vmscan.c:5093 [inline]
shrink_node+0x314e/0x3760 mm/vmscan.c:6078
kswapd_shrink_node mm/vmscan.c:6938 [inline]
balance_pgdat mm/vmscan.c:7121 [inline]
kswapd+0x147c/0x2830 mm/vmscan.c:7386
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x439/0x7d0 arch/x86/kernel/process.c:148
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
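
For readers skimming the trace, the two dependency chains condense to a classic AB-BA inversion. The sketch below is illustrative kernel-style pseudocode built only from the symbol names in the trace above, not the actual bcachefs source:

```c
/* Order #1 (ioctl_fssetxattr path):  ei_quota_lock -> fs_reclaim
 * bch2_fs_quota_transfer() allocates while holding the quota mutex. */
mutex_lock(&inode->ei_quota_lock);
p = kzalloc(size, GFP_KERNEL);    /* __genradix_ptr_alloc(): GFP_KERNEL
                                   * may enter direct reclaim, i.e. take
                                   * the fs_reclaim lockdep map */
mutex_unlock(&inode->ei_quota_lock);

/* Order #0 (kswapd path):  fs_reclaim -> ei_quota_lock
 * kswapd already holds fs_reclaim when shrink_folio_list() calls
 * bch2_release_folio() -> bch2_clear_folio_bits() ->
 * bch2_i_sectors_acct(), which takes: */
mutex_lock(&inode->ei_quota_lock);  /* can block forever if order #1
                                     * is mid-allocation and waiting
                                     * on reclaim */
```

The two orders are each fine in isolation; lockdep flags the cycle once it has seen both.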

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&inode->ei_quota_lock);
                               lock(fs_reclaim);
  lock(&inode->ei_quota_lock);

*** DEADLOCK ***

1 lock held by kswapd1/78:
#0: ffffffff8e4419a0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:7012 [inline]
#0: ffffffff8e4419a0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0x951/0x2830 mm/vmscan.c:7386
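
One conventional way to break this kind of fs_reclaim inversion is to scope allocations made under the mutex as GFP_NOFS via memalloc_nofs_save(), so they can no longer recurse into filesystem reclaim; whether that (versus, say, preallocating before taking the lock) is the right fix here is a maintainer call. A hedged sketch, with a hypothetical helper standing in for the __genradix_ptr_alloc() call seen in the trace:

```c
/* Hypothetical sketch only, not the actual bcachefs fix. */
#include <linux/sched/mm.h>   /* memalloc_nofs_save/restore */

static int quota_transfer_nofs(struct bch_inode_info *inode)
{
	unsigned int nofs_flags;
	int ret;

	mutex_lock(&inode->ei_quota_lock);

	/* All allocations in this section now behave as GFP_NOFS and
	 * cannot take the fs_reclaim map, removing the
	 * ei_quota_lock -> fs_reclaim edge from chain #1. */
	nofs_flags = memalloc_nofs_save();

	ret = do_quota_genradix_alloc(inode);  /* hypothetical stand-in for
						* the __genradix_ptr_alloc()
						* call in bch2_quota_transfer() */

	memalloc_nofs_restore(nofs_flags);
	mutex_unlock(&inode->ei_quota_lock);
	return ret;
}
```

The scoped-NOFS API is generally preferred over sprinkling GFP_NOFS on individual call sites, since it also covers allocations made by callees that the caller cannot annotate directly.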

stack backtrace:
CPU: 0 UID: 0 PID: 78 Comm: kswapd1 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
<TASK>
dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
print_circular_bug+0x2ee/0x310 kernel/locking/lockdep.c:2043
check_noncircular+0x134/0x160 kernel/locking/lockdep.c:2175
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
__lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
__mutex_lock_common kernel/locking/mutex.c:598 [inline]
__mutex_lock+0x187/0x1350 kernel/locking/mutex.c:760
bch2_i_sectors_acct fs/bcachefs/fs-io.h:137 [inline]
bch2_clear_folio_bits+0x506/0x830 fs/bcachefs/fs-io-pagecache.c:513
bch2_release_folio+0xf7/0x150 fs/bcachefs/fs-io-pagecache.c:672
shrink_folio_list+0x20ac/0x4cd0 mm/vmscan.c:1518
evict_folios+0x471e/0x57c0 mm/vmscan.c:4744
try_to_shrink_lruvec+0x8a3/0xb50 mm/vmscan.c:4907
shrink_one+0x21b/0x7c0 mm/vmscan.c:4952
shrink_many mm/vmscan.c:5015 [inline]
lru_gen_shrink_node mm/vmscan.c:5093 [inline]
shrink_node+0x314e/0x3760 mm/vmscan.c:6078
kswapd_shrink_node mm/vmscan.c:6938 [inline]
balance_pgdat mm/vmscan.c:7121 [inline]
kswapd+0x147c/0x2830 mm/vmscan.c:7386
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x439/0x7d0 arch/x86/kernel/process.c:148
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup