possible deadlock in hfsplus_file_extend


syzbot

Nov 26, 2022, 5:00:38 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 3f8a27f9e27b Linux 4.19.211
git tree: linux-4.19.y
console output: https://syzkaller.appspot.com/x/log.txt?x=11b5a205880000
kernel config: https://syzkaller.appspot.com/x/.config?x=9b9277b418617afe
dashboard link: https://syzkaller.appspot.com/bug?extid=3a4301a9b1b8b62c34e3
compiler: gcc version 10.2.1 20210110 (Debian 10.2.1-6)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=123a779b880000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1515bbed880000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/98c0bdb4abb3/disk-3f8a27f9.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/ea228ff02669/vmlinux-3f8a27f9.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/b25629ab364e/mount_0.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+3a4301...@syzkaller.appspotmail.com

IPVS: ftp: loaded support on port[0] = 21
======================================================
WARNING: possible circular locking dependency detected
4.19.211-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor176/8140 is trying to acquire lock:
00000000bb4b6777 (&HFSPLUS_I(inode)->extents_lock){+.+.}, at: hfsplus_file_extend+0x1bb/0xf40 fs/hfsplus/extents.c:457

but task is already holding lock:
000000008525b6a7 (&tree->tree_lock){+.+.}, at: hfsplus_find_init+0x1b7/0x220 fs/hfsplus/bfind.c:30

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&tree->tree_lock){+.+.}:
hfsplus_file_truncate+0xde7/0x1040 fs/hfsplus/extents.c:595
hfsplus_delete_inode+0x18d/0x220 fs/hfsplus/inode.c:419
hfsplus_unlink+0x595/0x820 fs/hfsplus/dir.c:405
vfs_unlink+0x27d/0x4e0 fs/namei.c:4002
do_unlinkat+0x3b8/0x660 fs/namei.c:4065
__do_sys_unlinkat fs/namei.c:4107 [inline]
__se_sys_unlinkat fs/namei.c:4099 [inline]
__x64_sys_unlinkat+0xbd/0x120 fs/namei.c:4099
do_syscall_64+0xf9/0x620 arch/x86/entry/common.c:293
entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #0 (&HFSPLUS_I(inode)->extents_lock){+.+.}:
__mutex_lock_common kernel/locking/mutex.c:937 [inline]
__mutex_lock+0xd7/0x1190 kernel/locking/mutex.c:1078
hfsplus_file_extend+0x1bb/0xf40 fs/hfsplus/extents.c:457
hfsplus_bmap_reserve+0x298/0x440 fs/hfsplus/btree.c:357
hfsplus_rename_cat+0x272/0x1490 fs/hfsplus/catalog.c:456
hfsplus_unlink+0x49c/0x820 fs/hfsplus/dir.c:376
vfs_unlink+0x27d/0x4e0 fs/namei.c:4002
do_unlinkat+0x3b8/0x660 fs/namei.c:4065
do_coredump+0x1f9c/0x2d60 fs/coredump.c:687
get_signal+0xed9/0x1f70 kernel/signal.c:2583
do_signal+0x8f/0x1670 arch/x86/kernel/signal.c:799
exit_to_usermode_loop+0x204/0x2a0 arch/x86/entry/common.c:163
prepare_exit_to_usermode+0x277/0x2d0 arch/x86/entry/common.c:198
retint_user+0x8/0x18

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&tree->tree_lock);
                               lock(&HFSPLUS_I(inode)->extents_lock);
                               lock(&tree->tree_lock);
  lock(&HFSPLUS_I(inode)->extents_lock);

*** DEADLOCK ***

5 locks held by syz-executor176/8140:
#0: 0000000078040b6d (sb_writers#11){.+.+}, at: sb_start_write include/linux/fs.h:1579 [inline]
#0: 0000000078040b6d (sb_writers#11){.+.+}, at: mnt_want_write+0x3a/0xb0 fs/namespace.c:360
#1: 000000004d7e9a04 (&type->i_mutex_dir_key#7/1){+.+.}, at: inode_lock_nested include/linux/fs.h:783 [inline]
#1: 000000004d7e9a04 (&type->i_mutex_dir_key#7/1){+.+.}, at: do_unlinkat+0x27d/0x660 fs/namei.c:4051
#2: 0000000018664581 (&sb->s_type->i_mutex_key#18){+.+.}, at: inode_lock include/linux/fs.h:748 [inline]
#2: 0000000018664581 (&sb->s_type->i_mutex_key#18){+.+.}, at: vfs_unlink+0xca/0x4e0 fs/namei.c:3993
#3: 00000000b34336f2 (&sbi->vh_mutex){+.+.}, at: hfsplus_unlink+0x140/0x820 fs/hfsplus/dir.c:370
#4: 000000008525b6a7 (&tree->tree_lock){+.+.}, at: hfsplus_find_init+0x1b7/0x220 fs/hfsplus/bfind.c:30

stack backtrace:
CPU: 1 PID: 8140 Comm: syz-executor176 Not tainted 4.19.211-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x1fc/0x2ef lib/dump_stack.c:118
print_circular_bug.constprop.0.cold+0x2d7/0x41e kernel/locking/lockdep.c:1222
check_prev_add kernel/locking/lockdep.c:1866 [inline]
check_prevs_add kernel/locking/lockdep.c:1979 [inline]
validate_chain kernel/locking/lockdep.c:2420 [inline]
__lock_acquire+0x30c9/0x3ff0 kernel/locking/lockdep.c:3416
lock_acquire+0x170/0x3c0 kernel/locking/lockdep.c:3908
__mutex_lock_common kernel/locking/mutex.c:937 [inline]
__mutex_lock+0xd7/0x1190 kernel/locking/mutex.c:1078
hfsplus_file_extend+0x1bb/0xf40 fs/hfsplus/extents.c:457
hfsplus_bmap_reserve+0x298/0x440 fs/hfsplus/btree.c:357
hfsplus_rename_cat+0x272/0x1490 fs/hfsplus/catalog.c:456
hfsplus_unlink+0x49c/0x820 fs/hfsplus/dir.c:376
vfs_unlink+0x27d/0x4e0 fs/namei.c:4002
do_unlinkat+0x3b8/0x660 fs/namei.c:4065
do_coredump+0x1f9c/0x2d60 fs/coredump.c:687
get_signal+0xed9/0x1f70 kernel/signal.c:2583
do_signal+0x8f/0x1670 arch/x86/kernel/signal.c:799
exit_to_usermode_loop+0x204/0x2a0 arch/x86/entry/common.c:163
prepare_exit_to_usermode+0x277/0x2d0 arch/x86/entry/common.c:198
retint_user+0x8/0x18
RIP: 0033: (null)
Code: Bad RIP value.
RSP: 002b:0000000020000048 EFLAGS: 00010217
RAX: 0000000000000000 RBX: 0000000000000003 RCX: 00007fdcf1fabb29
RDX: 0000000000000000 RSI: 0000000020000040 RDI: 0000000000000080
RBP: 00007ffff7555890 R08: 0000000000000000 R09: 0000000000000003
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffff7555880 R14: 00007ffff7555870 R15: 0000000000000000

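The "Possible unsafe locking scenario" above is a classic AB-BA inversion: the unlink path (chain #0) holds tree_lock and then takes extents_lock, while the truncate path (chain #1) holds extents_lock and then takes tree_lock. The following is a minimal userspace C sketch of the same pattern, for illustration only; the mutex names mirror the two kernel locks, but nothing here is actual hfsplus code.

/*
 * Minimal userspace sketch of the AB-BA lock inversion reported above.
 * NOT hfsplus code: only the acquisition order is mimicked.
 * Build with: gcc -Wall -pthread abba.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t tree_lock    = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t extents_lock = PTHREAD_MUTEX_INITIALIZER;

/* Mirrors chain #0: the unlink path holds tree_lock (taken in
 * hfsplus_find_init) when hfsplus_file_extend takes extents_lock. */
static void *unlink_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&tree_lock);
	sleep(1);                        /* widen the race window */
	pthread_mutex_lock(&extents_lock);
	puts("unlink path took tree_lock then extents_lock");
	pthread_mutex_unlock(&extents_lock);
	pthread_mutex_unlock(&tree_lock);
	return NULL;
}

/* Mirrors chain #1: hfsplus_file_truncate holds extents_lock and then
 * takes tree_lock through the btree search code. */
static void *truncate_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&extents_lock);
	sleep(1);
	pthread_mutex_lock(&tree_lock);
	puts("truncate path took extents_lock then tree_lock");
	pthread_mutex_unlock(&tree_lock);
	pthread_mutex_unlock(&extents_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, unlink_path, NULL);
	pthread_create(&b, NULL, truncate_path, NULL);
	pthread_join(a, NULL);           /* with the sleeps this typically hangs */
	pthread_join(b, NULL);
	return 0;
}

With the sleeps in place the two threads reliably interleave and both block in pthread_mutex_lock, which is the same state the two kernel tasks would reach in __mutex_lock.
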

---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
syzbot can test patches for this issue, for details see:
https://goo.gl/tpsmEJ#testing-patches
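For reference, per the testing-patches documentation linked above, a fix candidate is tested by replying to this thread with the patch inline or attached and a command line of the form:

#syz test: <git tree URL> <branch-or-commit>

For this report that would be the linux-4.19.y branch named in the header; assuming it refers to the stable tree (the URL is not stated in the report), e.g.:

#syz test: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git linux-4.19.y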