[v5.15] possible deadlock in hfs_extend_file (3)


syzbot
Mar 13, 2024, 1:11:22 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 574362648507 Linux 5.15.151
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=131d1646180000
kernel config: https://syzkaller.appspot.com/x/.config?x=6c9a42d9e3519ca9
dashboard link: https://syzkaller.appspot.com/bug?extid=033048aa2a56eacb2789
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/f00d4062000b/disk-57436264.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/3a74c2b6ca62/vmlinux-57436264.xz
kernel image: https://storage.googleapis.com/syzbot-assets/93bd706dc219/bzImage-57436264.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+033048...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
5.15.151-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.4/13323 is trying to acquire lock:
ffff88805313dbf8 (&HFS_I(tree->inode)->extents_lock){+.+.}-{3:3}, at: hfs_extend_file+0xfb/0x1440 fs/hfs/extent.c:397

but task is already holding lock:
ffff8880806ca0b0 (&tree->tree_lock/1){+.+.}-{3:3}, at: hfs_find_init+0x16a/0x1e0

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&tree->tree_lock/1){+.+.}-{3:3}:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__mutex_lock_common+0x1da/0x25a0 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
hfs_find_init+0x16a/0x1e0
hfs_ext_read_extent fs/hfs/extent.c:200 [inline]
hfs_extend_file+0x317/0x1440 fs/hfs/extent.c:401
hfs_bmap_reserve+0xd5/0x3f0 fs/hfs/btree.c:231
hfs_cat_create+0x1e7/0xa60 fs/hfs/catalog.c:104
hfs_create+0x62/0xd0 fs/hfs/dir.c:202
lookup_open fs/namei.c:3462 [inline]
open_last_lookups fs/namei.c:3532 [inline]
path_openat+0x12f6/0x2f20 fs/namei.c:3739
do_filp_open+0x21c/0x460 fs/namei.c:3769
file_open_name fs/open.c:1156 [inline]
filp_open+0x25d/0x2c0 fs/open.c:1176
do_coredump+0x2549/0x31e0 fs/coredump.c:767
get_signal+0xc06/0x14e0 kernel/signal.c:2875
arch_do_signal_or_restart+0xc3/0x1890 arch/x86/kernel/signal.c:867
handle_signal_work kernel/entry/common.c:148 [inline]
exit_to_user_mode_loop+0x97/0x130 kernel/entry/common.c:172
exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:208
irqentry_exit_to_user_mode+0x5/0x40 kernel/entry/common.c:314
exc_page_fault+0x342/0x740 arch/x86/mm/fault.c:1544
asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:568

-> #0 (&HFS_I(tree->inode)->extents_lock){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain+0x1649/0x5930 kernel/locking/lockdep.c:3788
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5012
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__mutex_lock_common+0x1da/0x25a0 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
hfs_extend_file+0xfb/0x1440 fs/hfs/extent.c:397
hfs_bmap_reserve+0xd5/0x3f0 fs/hfs/btree.c:231
__hfs_ext_write_extent+0x22e/0x4f0 fs/hfs/extent.c:121
__hfs_ext_cache_extent+0x6a/0x990 fs/hfs/extent.c:174
hfs_ext_read_extent fs/hfs/extent.c:202 [inline]
hfs_extend_file+0x340/0x1440 fs/hfs/extent.c:401
hfs_get_block+0x3e0/0xb60 fs/hfs/extent.c:353
__block_write_begin_int+0x60b/0x1650 fs/buffer.c:2012
__block_write_begin fs/buffer.c:2062 [inline]
block_write_begin fs/buffer.c:2122 [inline]
cont_write_begin+0x5d6/0x840 fs/buffer.c:2471
hfs_write_begin+0x92/0xd0 fs/hfs/inode.c:59
cont_expand_zero fs/buffer.c:2398 [inline]
cont_write_begin+0x2ad/0x840 fs/buffer.c:2461
hfs_write_begin+0x92/0xd0 fs/hfs/inode.c:59
hfs_file_truncate+0x1ed/0xa20 fs/hfs/extent.c:494
hfs_inode_setattr+0x45d/0x6a0 fs/hfs/inode.c:654
notify_change+0xc6d/0xf50 fs/attr.c:505
do_truncate+0x21c/0x300 fs/open.c:65
do_sys_ftruncate+0x2eb/0x390 fs/open.c:193
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&tree->tree_lock/1);
                               lock(&HFS_I(tree->inode)->extents_lock);
                               lock(&tree->tree_lock/1);
  lock(&HFS_I(tree->inode)->extents_lock);

*** DEADLOCK ***
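
[Editor's note] The scenario above is a classic ABBA inversion: one path takes an inode's extents_lock and then the B-tree's tree_lock (chain #1, hfs_extend_file -> hfs_find_init), while the other path, servicing the extents B-tree's own inode, takes them in the opposite order (chain #0, hfs_find_init ... hfs_bmap_reserve -> hfs_extend_file(tree->inode)). A minimal, self-contained user-space sketch of that pattern follows; the pthread mutexes are illustrative stand-ins for the HFS locks, not the actual kernel code:

#include <pthread.h>

/* Illustrative stand-ins for &HFS_I(inode)->extents_lock and &tree->tree_lock. */
static pthread_mutex_t extents_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t tree_lock    = PTHREAD_MUTEX_INITIALIZER;

/* Path A, cf. chain #1: hfs_extend_file() holds extents_lock, then
 * hfs_ext_read_extent() -> hfs_find_init() takes tree_lock. */
static void *path_a(void *unused)
{
	pthread_mutex_lock(&extents_lock);
	pthread_mutex_lock(&tree_lock);
	/* ... B-tree lookup ... */
	pthread_mutex_unlock(&tree_lock);
	pthread_mutex_unlock(&extents_lock);
	return NULL;
}

/* Path B, cf. chain #0: hfs_find_init() holds tree_lock, then
 * hfs_bmap_reserve() -> hfs_extend_file(tree->inode) takes extents_lock. */
static void *path_b(void *unused)
{
	pthread_mutex_lock(&tree_lock);
	pthread_mutex_lock(&extents_lock);
	/* ... reserve B-tree nodes ... */
	pthread_mutex_unlock(&extents_lock);
	pthread_mutex_unlock(&tree_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	/* Running both paths concurrently can wedge: each thread may grab its
	 * first mutex and then wait forever for the other's. */
	pthread_create(&a, NULL, path_a, NULL);
	pthread_create(&b, NULL, path_b, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}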

4 locks held by syz-executor.4/13323:
#0: ffff88807e2ec460 (sb_writers#13){.+.+}-{0:0}, at: do_sys_ftruncate+0x25a/0x390 fs/open.c:190
#1: ffff88805313f128 (&sb->s_type->i_mutex_key#21){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:789 [inline]
#1: ffff88805313f128 (&sb->s_type->i_mutex_key#21){+.+.}-{3:3}, at: do_truncate+0x208/0x300 fs/open.c:63
#2: ffff88805313ef78 (&HFS_I(inode)->extents_lock#2){+.+.}-{3:3}, at: hfs_extend_file+0xfb/0x1440 fs/hfs/extent.c:397
#3: ffff8880806ca0b0 (&tree->tree_lock/1){+.+.}-{3:3}, at: hfs_find_init+0x16a/0x1e0

stack backtrace:
CPU: 1 PID: 13323 Comm: syz-executor.4 Not tainted 5.15.151-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
check_noncircular+0x2f8/0x3b0 kernel/locking/lockdep.c:2133
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain+0x1649/0x5930 kernel/locking/lockdep.c:3788
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5012
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__mutex_lock_common+0x1da/0x25a0 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
hfs_extend_file+0xfb/0x1440 fs/hfs/extent.c:397
hfs_bmap_reserve+0xd5/0x3f0 fs/hfs/btree.c:231
__hfs_ext_write_extent+0x22e/0x4f0 fs/hfs/extent.c:121
__hfs_ext_cache_extent+0x6a/0x990 fs/hfs/extent.c:174
hfs_ext_read_extent fs/hfs/extent.c:202 [inline]
hfs_extend_file+0x340/0x1440 fs/hfs/extent.c:401
hfs_get_block+0x3e0/0xb60 fs/hfs/extent.c:353
__block_write_begin_int+0x60b/0x1650 fs/buffer.c:2012
__block_write_begin fs/buffer.c:2062 [inline]
block_write_begin fs/buffer.c:2122 [inline]
cont_write_begin+0x5d6/0x840 fs/buffer.c:2471
hfs_write_begin+0x92/0xd0 fs/hfs/inode.c:59
cont_expand_zero fs/buffer.c:2398 [inline]
cont_write_begin+0x2ad/0x840 fs/buffer.c:2461
hfs_write_begin+0x92/0xd0 fs/hfs/inode.c:59
hfs_file_truncate+0x1ed/0xa20 fs/hfs/extent.c:494
hfs_inode_setattr+0x45d/0x6a0 fs/hfs/inode.c:654
notify_change+0xc6d/0xf50 fs/attr.c:505
do_truncate+0x21c/0x300 fs/open.c:65
do_sys_ftruncate+0x2eb/0x390 fs/open.c:193
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7f8a59992da9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f8a57f130c8 EFLAGS: 00000246 ORIG_RAX: 000000000000004d
RAX: ffffffffffffffda RBX: 00007f8a59ac0f80 RCX: 00007f8a59992da9
RDX: 0000000000000000 RSI: 0000000002007fff RDI: 0000000000000004
RBP: 00007f8a599df47a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f8a59ac0f80 R15: 00007ffda0dc8288
</TASK>
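
[Editor's note] A side note on the "&tree->tree_lock/1" notation in the report: the "/1" suffix means the mutex was acquired with lockdep subclass 1, typically via mutex_lock_nested(); in this trace that subclass-1 acquisition happens in hfs_find_init(). A hedged illustration of how the annotation arises, using a hypothetical mutex rather than the actual HFS fields:

#include <linux/mutex.h>

/* my_lock is hypothetical; only the lockdep annotation style matters here. */
static DEFINE_MUTEX(my_lock);

static void take_plain(void)
{
	mutex_lock(&my_lock);           /* lockdep reports this class as "my_lock" */
	/* ... */
	mutex_unlock(&my_lock);
}

static void take_nested(void)
{
	mutex_lock_nested(&my_lock, 1); /* lockdep reports this as "my_lock/1" */
	/* ... */
	mutex_unlock(&my_lock);
}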


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup