[v6.1] possible deadlock in hfsplus_find_init


syzbot
Mar 15, 2023, 7:24:47 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 6449a0ba6843 Linux 6.1.19
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=13120c3ac80000
kernel config: https://syzkaller.appspot.com/x/.config?x=75eadb21ef1208e4
dashboard link: https://syzkaller.appspot.com/bug?extid=ca6703aab96175fc4f6f
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/dc227ecd3e21/disk-6449a0ba.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/1d08e21b50c2/vmlinux-6449a0ba.xz
kernel image: https://storage.googleapis.com/syzbot-assets/71a43f2c4d2c/Image-6449a0ba.gz.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+ca6703...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.1.19-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.1/8341 is trying to acquire lock:
ffff0000d7a3db48 (&mm->mmap_lock){++++}-{3:3}, at: __might_fault+0x9c/0x124 mm/memory.c:5640

but task is already holding lock:
ffff00010dd1c0b0 (&tree->tree_lock){+.+.}-{3:3}, at: hfsplus_find_init+0x144/0x1bc

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #3 (&tree->tree_lock){+.+.}-{3:3}:
__mutex_lock_common+0x190/0x21a0 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x38/0x44 kernel/locking/mutex.c:799
hfsplus_file_truncate+0x6d0/0x9b8 fs/hfsplus/extents.c:595
hfsplus_setattr+0x18c/0x25c fs/hfsplus/inode.c:269
notify_change+0xc24/0xec0 fs/attr.c:482
do_truncate+0x1c0/0x28c fs/open.c:65
vfs_truncate+0x2c4/0x36c fs/open.c:111
do_sys_truncate+0xec/0x1b4 fs/open.c:134
__do_sys_truncate fs/open.c:146 [inline]
__se_sys_truncate fs/open.c:144 [inline]
__arm64_sys_truncate+0x5c/0x70 fs/open.c:144
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581

-> #2 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}:
__mutex_lock_common+0x190/0x21a0 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x38/0x44 kernel/locking/mutex.c:799
hfsplus_get_block+0x2c4/0x1168 fs/hfsplus/extents.c:260
block_read_full_folio+0x2f4/0x98c fs/buffer.c:2271
hfsplus_read_folio+0x28/0x38 fs/hfsplus/inode.c:28
read_pages+0x4c0/0x6a0 mm/readahead.c:181
page_cache_ra_unbounded+0x46c/0x58c mm/readahead.c:270
do_page_cache_ra mm/readahead.c:300 [inline]
page_cache_ra_order+0x7fc/0x994 mm/readahead.c:560
ondemand_readahead+0x5f8/0xb04 mm/readahead.c:682
page_cache_async_ra+0x1b0/0x1cc mm/readahead.c:731
filemap_readahead mm/filemap.c:2557 [inline]
filemap_get_pages mm/filemap.c:2598 [inline]
filemap_read+0x7e0/0x2260 mm/filemap.c:2676
generic_file_read_iter+0xa0/0x450 mm/filemap.c:2822
call_read_iter include/linux/fs.h:2199 [inline]
new_sync_read fs/read_write.c:389 [inline]
vfs_read+0x5bc/0x8ac fs/read_write.c:470
ksys_read+0x15c/0x26c fs/read_write.c:613
__do_sys_read fs/read_write.c:623 [inline]
__se_sys_read fs/read_write.c:621 [inline]
__arm64_sys_read+0x7c/0x90 fs/read_write.c:621
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581

-> #1 (mapping.invalidate_lock#4){.+.+}-{3:3}:
down_read+0x5c/0x78 kernel/locking/rwsem.c:1520
filemap_invalidate_lock_shared include/linux/fs.h:811 [inline]
filemap_fault+0x58c/0xf7c mm/filemap.c:3128
__do_fault+0x11c/0x3d8 mm/memory.c:4198
do_read_fault mm/memory.c:4549 [inline]
do_fault mm/memory.c:4678 [inline]
handle_pte_fault mm/memory.c:4950 [inline]
__handle_mm_fault mm/memory.c:5092 [inline]
handle_mm_fault+0x1f20/0x3d18 mm/memory.c:5213
__do_page_fault arch/arm64/mm/fault.c:512 [inline]
do_page_fault+0x634/0xac4 arch/arm64/mm/fault.c:612
do_translation_fault+0x94/0xc8 arch/arm64/mm/fault.c:695
do_mem_abort+0x74/0x200 arch/arm64/mm/fault.c:831
el1_abort+0x3c/0x5c arch/arm64/kernel/entry-common.c:367
el1h_64_sync_handler+0x60/0xac arch/arm64/kernel/entry-common.c:427
el1h_64_sync+0x64/0x68 arch/arm64/kernel/entry.S:576
do_strncpy_from_user lib/strncpy_from_user.c:41 [inline]
strncpy_from_user+0x224/0x54c lib/strncpy_from_user.c:139
getname_flags+0x104/0x480 fs/namei.c:150
getname+0x28/0x38 fs/namei.c:218
do_sys_openat2+0xd4/0x3d8 fs/open.c:1304
do_sys_open fs/open.c:1326 [inline]
__do_sys_openat fs/open.c:1342 [inline]
__se_sys_openat fs/open.c:1337 [inline]
__arm64_sys_openat+0x1f0/0x240 fs/open.c:1337
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581

-> #0 (&mm->mmap_lock){++++}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3098 [inline]
check_prevs_add kernel/locking/lockdep.c:3217 [inline]
validate_chain kernel/locking/lockdep.c:3832 [inline]
__lock_acquire+0x3338/0x764c kernel/locking/lockdep.c:5056
lock_acquire+0x300/0x8e4 kernel/locking/lockdep.c:5669
__might_fault+0xc4/0x124 mm/memory.c:5641
filldir64+0x2d4/0x948 fs/readdir.c:335
dir_emit_dot include/linux/fs.h:3583 [inline]
hfsplus_readdir+0x398/0xf28 fs/hfsplus/dir.c:159
iterate_dir+0x1f4/0x4e4
__do_sys_getdents64 fs/readdir.c:369 [inline]
__se_sys_getdents64 fs/readdir.c:354 [inline]
__arm64_sys_getdents64+0x1c4/0x4a0 fs/readdir.c:354
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581

other info that might help us debug this:

Chain exists of:
&mm->mmap_lock --> &HFSPLUS_I(inode)->extents_lock --> &tree->tree_lock

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&tree->tree_lock);
                               lock(&HFSPLUS_I(inode)->extents_lock);
                               lock(&tree->tree_lock);
  lock(&mm->mmap_lock);

*** DEADLOCK ***
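
To see the inversion concretely: collapsing the intermediate invalidate_lock and extents_lock links, the report boils down to two locks taken in opposite orders by two tasks. Below is a minimal userspace reduction of that ABBA pattern, with pthread mutexes standing in for the kernel locks (names and structure are illustrative only, not the hfsplus code itself); build with gcc -pthread.

#include <pthread.h>

static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER; /* stands in for &tree->tree_lock */
static pthread_mutex_t mmap_lock = PTHREAD_MUTEX_INITIALIZER; /* stands in for &mm->mmap_lock */

/* getdents64 path: hfsplus_find_init() takes tree_lock, then filldir64()
 * may fault on the user buffer and need mmap_lock. */
static void *readdir_path(void *arg)
{
	pthread_mutex_lock(&tree_lock);
	pthread_mutex_lock(&mmap_lock);
	pthread_mutex_unlock(&mmap_lock);
	pthread_mutex_unlock(&tree_lock);
	return arg;
}

/* page-fault path: mmap_lock is already held, and the fault eventually
 * reaches hfsplus_get_block() -> ... -> tree_lock. */
static void *fault_path(void *arg)
{
	pthread_mutex_lock(&mmap_lock);
	pthread_mutex_lock(&tree_lock);
	pthread_mutex_unlock(&tree_lock);
	pthread_mutex_unlock(&mmap_lock);
	return arg;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, readdir_path, NULL);
	pthread_create(&b, NULL, fault_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0; /* may never get here: each thread can wait on the other's lock */
}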

3 locks held by syz-executor.1/8341:
#0: ffff0000ce6c9c68 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xd8/0x104 fs/file.c:1046
#1: ffff00010d5324c0 (&type->i_mutex_dir_key#14){++++}-{3:3}, at: iterate_dir+0x108/0x4e4 fs/readdir.c:55
#2: ffff00010dd1c0b0 (&tree->tree_lock){+.+.}-{3:3}, at: hfsplus_find_init+0x144/0x1bc

stack backtrace:
CPU: 1 PID: 8341 Comm: syz-executor.1 Not tainted 6.1.19-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
Call trace:
dump_backtrace+0x1c8/0x1f4 arch/arm64/kernel/stacktrace.c:158
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:165
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x108/0x170 lib/dump_stack.c:106
dump_stack+0x1c/0x5c lib/dump_stack.c:113
print_circular_bug+0x150/0x1b8 kernel/locking/lockdep.c:2056
check_noncircular+0x2cc/0x378 kernel/locking/lockdep.c:2178
check_prev_add kernel/locking/lockdep.c:3098 [inline]
check_prevs_add kernel/locking/lockdep.c:3217 [inline]
validate_chain kernel/locking/lockdep.c:3832 [inline]
__lock_acquire+0x3338/0x764c kernel/locking/lockdep.c:5056
lock_acquire+0x300/0x8e4 kernel/locking/lockdep.c:5669
__might_fault+0xc4/0x124 mm/memory.c:5641
filldir64+0x2d4/0x948 fs/readdir.c:335
dir_emit_dot include/linux/fs.h:3583 [inline]
hfsplus_readdir+0x398/0xf28 fs/hfsplus/dir.c:159
iterate_dir+0x1f4/0x4e4
__do_sys_getdents64 fs/readdir.c:369 [inline]
__se_sys_getdents64 fs/readdir.c:354 [inline]
__arm64_sys_getdents64+0x1c4/0x4a0 fs/readdir.c:354
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581
hfsplus: bad catalog entry type
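
For reference, hfsplus_find_init() (the acquisition site in the "holding" lines above) takes the per-B-tree mutex before every tree search. The following is a simplified paraphrase of fs/hfsplus/bfind.c, sketched from memory, so details may differ from the actual 6.1 source:

int hfsplus_find_init(struct hfs_btree *tree, struct hfs_find_data *fd)
{
	fd->tree = tree;
	fd->bnode = NULL;
	fd->search_key = kmalloc(tree->max_key_len * 2 + 4, GFP_KERNEL);
	if (!fd->search_key)
		return -ENOMEM;
	fd->key = (void *)fd->search_key + tree->max_key_len + 2;

	/* One lockdep subclass per B-tree: CATALOG_BTREE_MUTEX (0),
	 * EXTENTS_BTREE_MUTEX (1), ATTR_BTREE_MUTEX (2). The "/1" suffix
	 * in the second report below is this subclass number. */
	switch (tree->cnid) {
	case HFSPLUS_CAT_CNID:
		mutex_lock_nested(&tree->tree_lock, CATALOG_BTREE_MUTEX);
		break;
	case HFSPLUS_EXT_CNID:
		mutex_lock_nested(&tree->tree_lock, EXTENTS_BTREE_MUTEX);
		break;
	case HFSPLUS_ATTR_CNID:
		mutex_lock_nested(&tree->tree_lock, ATTR_BTREE_MUTEX);
		break;
	}
	return 0;
}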


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot
May 28, 2023, 10:17:52 AM
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: a343b0dd87b4 Linux 6.1.30
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=118ae789280000
kernel config: https://syzkaller.appspot.com/x/.config?x=5265a3c898f3cbbb
dashboard link: https://syzkaller.appspot.com/bug?extid=ca6703aab96175fc4f6f
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=115e1925280000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1454f83e280000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/195d974b1f1c/disk-a343b0dd.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/ea41850547fb/vmlinux-a343b0dd.xz
kernel image: https://storage.googleapis.com/syzbot-assets/13ec9e70ad28/bzImage-a343b0dd.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/d0721b53a6cc/mount_0.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+ca6703...@syzkaller.appspotmail.com

============================================
WARNING: possible recursive locking detected
6.1.30-syzkaller #0 Not tainted
--------------------------------------------
kworker/u4:1/11 is trying to acquire lock:
ffff88807b65c0b0 (&tree->tree_lock/1){+.+.}-{3:3}, at: hfsplus_find_init+0x146/0x1c0

but task is already holding lock:
ffff88807b65c0b0 (&tree->tree_lock/1){+.+.}-{3:3}, at: hfsplus_find_init+0x146/0x1c0

other info that might help us debug this:
Possible unsafe locking scenario:

       CPU0
       ----
  lock(&tree->tree_lock/1);
  lock(&tree->tree_lock/1);

*** DEADLOCK ***

May be due to missing lock nesting notation

5 locks held by kworker/u4:1/11:
#0: ffff888015686938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
#1: ffffc90000107d20 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
#2: ffff88807deea988 (&hip->extents_lock){+.+.}-{3:3}, at: hfsplus_ext_write_extent+0x8a/0x1f0 fs/hfsplus/extents.c:149
#3: ffff88807b65c0b0 (&tree->tree_lock/1){+.+.}-{3:3}, at: hfsplus_find_init+0x146/0x1c0
#4: ffff88807dee8108 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_extend+0x1d2/0x1b10 fs/hfsplus/extents.c:457

stack backtrace:
CPU: 0 PID: 11 Comm: kworker/u4:1 Not tainted 6.1.30-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/25/2023
Workqueue: writeback wb_workfn (flush-7:0)
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
print_deadlock_bug kernel/locking/lockdep.c:2991 [inline]
check_deadlock kernel/locking/lockdep.c:3034 [inline]
validate_chain+0x4726/0x58e0 kernel/locking/lockdep.c:3819
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5056
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
__mutex_lock_common+0x1d4/0x2520 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:799
hfsplus_find_init+0x146/0x1c0
hfsplus_ext_read_extent fs/hfsplus/extents.c:216 [inline]
hfsplus_file_extend+0x40a/0x1b10 fs/hfsplus/extents.c:461
hfsplus_bmap_reserve+0x101/0x4e0 fs/hfsplus/btree.c:358
__hfsplus_ext_write_extent+0x2a4/0x5b0 fs/hfsplus/extents.c:104
hfsplus_ext_write_extent_locked fs/hfsplus/extents.c:139 [inline]
hfsplus_ext_write_extent+0x166/0x1f0 fs/hfsplus/extents.c:150
hfsplus_write_inode+0x1e/0x5c0 fs/hfsplus/super.c:154
write_inode fs/fs-writeback.c:1443 [inline]
__writeback_single_inode+0x67d/0x11e0 fs/fs-writeback.c:1655
writeback_sb_inodes+0xc21/0x1ac0 fs/fs-writeback.c:1881
wb_writeback+0x49d/0xe10 fs/fs-writeback.c:2055
wb_do_writeback fs/fs-writeback.c:2198 [inline]
wb_workfn+0x427/0x1020 fs/fs-writeback.c:2238
process_one_work+0x8aa/0x11f0 kernel/workqueue.c:2289
worker_thread+0xa5f/0x1210 kernel/workqueue.c:2436
kthread+0x26e/0x300 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
</TASK>
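
The trace above re-enters hfsplus_find_init() on the extents B-tree while hfsplus_ext_write_extent() already holds that tree's tree_lock; both acquisitions use the same lockdep subclass (the "/1"), so lockdep flags the recursion, and with a non-recursive kernel mutex the second acquisition would simply block forever. A minimal userspace reduction of that self-deadlock (hypothetical names; a pthread mutex in place of the kernel mutex):

#include <pthread.h>

static pthread_mutex_t ext_tree_lock = PTHREAD_MUTEX_INITIALIZER;

int main(void)
{
	/* hfsplus_ext_write_extent -> hfsplus_find_init: first acquisition */
	pthread_mutex_lock(&ext_tree_lock);

	/* ... -> hfsplus_bmap_reserve -> hfsplus_file_extend ->
	 * hfsplus_ext_read_extent -> hfsplus_find_init: same lock again;
	 * a default (non-recursive) mutex never returns from this call. */
	pthread_mutex_lock(&ext_tree_lock);

	return 0;
}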


---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.