[v6.1] possible deadlock in hfsplus_get_block


syzbot

Mar 13, 2023, 10:59:05 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 6449a0ba6843 Linux 6.1.19
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=16e9afdac80000
kernel config: https://syzkaller.appspot.com/x/.config?x=75eadb21ef1208e4
dashboard link: https://syzkaller.appspot.com/bug?extid=4b92cf592cc55a929db6
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/dc227ecd3e21/disk-6449a0ba.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/1d08e21b50c2/vmlinux-6449a0ba.xz
kernel image: https://storage.googleapis.com/syzbot-assets/71a43f2c4d2c/Image-6449a0ba.gz.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+4b92cf...@syzkaller.appspotmail.com

loop1: detected capacity change from 0 to 1024
======================================================
WARNING: possible circular locking dependency detected
6.1.19-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.1/10690 is trying to acquire lock:
ffff0001105e4488 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_get_block+0x2c4/0x1168 fs/hfsplus/extents.c:260

but task is already holding lock:
ffff0001105e4820 (mapping.invalidate_lock#3){.+.+}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:811 [inline]
ffff0001105e4820 (mapping.invalidate_lock#3){.+.+}-{3:3}, at: page_cache_ra_unbounded+0xc8/0x58c mm/readahead.c:226

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #3 (mapping.invalidate_lock#3){.+.+}-{3:3}:
down_read+0x5c/0x78 kernel/locking/rwsem.c:1520
filemap_invalidate_lock_shared include/linux/fs.h:811 [inline]
filemap_fault+0x58c/0xf7c mm/filemap.c:3128
__do_fault+0x11c/0x3d8 mm/memory.c:4198
do_read_fault mm/memory.c:4549 [inline]
do_fault mm/memory.c:4678 [inline]
handle_pte_fault mm/memory.c:4950 [inline]
__handle_mm_fault mm/memory.c:5092 [inline]
handle_mm_fault+0x1f20/0x3d18 mm/memory.c:5213
__do_page_fault arch/arm64/mm/fault.c:512 [inline]
do_page_fault+0x634/0xac4 arch/arm64/mm/fault.c:612
do_translation_fault+0x94/0xc8 arch/arm64/mm/fault.c:695
do_mem_abort+0x74/0x200 arch/arm64/mm/fault.c:831
el1_abort+0x3c/0x5c arch/arm64/kernel/entry-common.c:367
el1h_64_sync_handler+0x60/0xac arch/arm64/kernel/entry-common.c:427
el1h_64_sync+0x64/0x68 arch/arm64/kernel/entry.S:576
do_strncpy_from_user lib/strncpy_from_user.c:41 [inline]
strncpy_from_user+0x224/0x54c lib/strncpy_from_user.c:139
getname_flags+0x104/0x480 fs/namei.c:150
getname+0x28/0x38 fs/namei.c:218
do_sys_openat2+0xd4/0x3d8 fs/open.c:1304
do_sys_open fs/open.c:1326 [inline]
__do_sys_openat fs/open.c:1342 [inline]
__se_sys_openat fs/open.c:1337 [inline]
__arm64_sys_openat+0x1f0/0x240 fs/open.c:1337
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581

-> #2 (&mm->mmap_lock){++++}-{3:3}:
__might_fault+0xc4/0x124 mm/memory.c:5641
filldir64+0x2d4/0x948 fs/readdir.c:335
dir_emit_dot include/linux/fs.h:3583 [inline]
hfsplus_readdir+0x398/0xf28 fs/hfsplus/dir.c:159
iterate_dir+0x1f4/0x4e4
__do_sys_getdents64 fs/readdir.c:369 [inline]
__se_sys_getdents64 fs/readdir.c:354 [inline]
__arm64_sys_getdents64+0x1c4/0x4a0 fs/readdir.c:354
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581

-> #1 (&tree->tree_lock){+.+.}-{3:3}:
__mutex_lock_common+0x190/0x21a0 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x38/0x44 kernel/locking/mutex.c:799
hfsplus_file_truncate+0x6d0/0x9b8 fs/hfsplus/extents.c:595
hfsplus_setattr+0x18c/0x25c fs/hfsplus/inode.c:269
notify_change+0xc24/0xec0 fs/attr.c:482
do_truncate+0x1c0/0x28c fs/open.c:65
vfs_truncate+0x2c4/0x36c fs/open.c:111
do_sys_truncate+0xec/0x1b4 fs/open.c:134
__do_sys_truncate fs/open.c:146 [inline]
__se_sys_truncate fs/open.c:144 [inline]
__arm64_sys_truncate+0x5c/0x70 fs/open.c:144
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581

-> #0 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3098 [inline]
check_prevs_add kernel/locking/lockdep.c:3217 [inline]
validate_chain kernel/locking/lockdep.c:3832 [inline]
__lock_acquire+0x3338/0x764c kernel/locking/lockdep.c:5056
lock_acquire+0x300/0x8e4 kernel/locking/lockdep.c:5669
__mutex_lock_common+0x190/0x21a0 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x38/0x44 kernel/locking/mutex.c:799
hfsplus_get_block+0x2c4/0x1168 fs/hfsplus/extents.c:260
block_read_full_folio+0x2f4/0x98c fs/buffer.c:2271
hfsplus_read_folio+0x28/0x38 fs/hfsplus/inode.c:28
read_pages+0x4c0/0x6a0 mm/readahead.c:181
page_cache_ra_unbounded+0x46c/0x58c mm/readahead.c:270
do_page_cache_ra mm/readahead.c:300 [inline]
page_cache_ra_order+0x7fc/0x994 mm/readahead.c:560
ondemand_readahead+0x5f8/0xb04 mm/readahead.c:682
page_cache_async_ra+0x1b0/0x1cc mm/readahead.c:731
filemap_readahead mm/filemap.c:2557 [inline]
filemap_get_pages mm/filemap.c:2598 [inline]
filemap_read+0x7e0/0x2260 mm/filemap.c:2676
generic_file_read_iter+0xa0/0x450 mm/filemap.c:2822
call_read_iter include/linux/fs.h:2199 [inline]
new_sync_read fs/read_write.c:389 [inline]
vfs_read+0x5bc/0x8ac fs/read_write.c:470
ksys_read+0x15c/0x26c fs/read_write.c:613
__do_sys_read fs/read_write.c:623 [inline]
__se_sys_read fs/read_write.c:621 [inline]
__arm64_sys_read+0x7c/0x90 fs/read_write.c:621
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581

other info that might help us debug this:

Chain exists of:
&HFSPLUS_I(inode)->extents_lock --> &mm->mmap_lock --> mapping.invalidate_lock#3

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(mapping.invalidate_lock#3);
                               lock(&mm->mmap_lock);
                               lock(mapping.invalidate_lock#3);
  lock(&HFSPLUS_I(inode)->extents_lock);

*** DEADLOCK ***
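To make the cycle concrete: the four-lock chain (extents_lock -> tree_lock -> mmap_lock -> invalidate_lock -> extents_lock) compresses to a classic ABBA inversion between the read path, which takes invalidate_lock and then extents_lock inside hfsplus_get_block, and the truncate path, which reaches the same two classes in the opposite order. A minimal userspace sketch of that interleaving using pthreads follows; the names mirror the report, but this is illustrative only, not hfsplus code:

/* Build with: cc -pthread abba.c */
#include <pthread.h>

static pthread_mutex_t invalidate_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t extents_lock = PTHREAD_MUTEX_INITIALIZER;

/* Mirrors read() -> readahead -> hfsplus_get_block(). */
static void *reader(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&invalidate_lock);
	pthread_mutex_lock(&extents_lock);
	pthread_mutex_unlock(&extents_lock);
	pthread_mutex_unlock(&invalidate_lock);
	return NULL;
}

/* Mirrors truncate() -> hfsplus_file_truncate() plus the page-fault
 * edges that eventually reach invalidate_lock. */
static void *truncater(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&extents_lock);
	pthread_mutex_lock(&invalidate_lock);
	pthread_mutex_unlock(&invalidate_lock);
	pthread_mutex_unlock(&extents_lock);
	return NULL;
}

int main(void)
{
	/* Loop until an unlucky interleaving deadlocks both threads. */
	for (;;) {
		pthread_t a, b;

		pthread_create(&a, NULL, reader, NULL);
		pthread_create(&b, NULL, truncater, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);
	}
}

Any schedule where the reader wins invalidate_lock while the truncater already holds extents_lock leaves both joins blocked forever, which is exactly the hang lockdep predicts above.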

2 locks held by syz-executor.1/10690:
#0: ffff0000cfb9efe8 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xd8/0x104 fs/file.c:1046
#1: ffff0001105e4820 (mapping.invalidate_lock#3){.+.+}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:811 [inline]
#1: ffff0001105e4820 (mapping.invalidate_lock#3){.+.+}-{3:3}, at: page_cache_ra_unbounded+0xc8/0x58c mm/readahead.c:226

stack backtrace:
CPU: 1 PID: 10690 Comm: syz-executor.1 Not tainted 6.1.19-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
Call trace:
dump_backtrace+0x1c8/0x1f4 arch/arm64/kernel/stacktrace.c:158
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:165
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x108/0x170 lib/dump_stack.c:106
dump_stack+0x1c/0x5c lib/dump_stack.c:113
print_circular_bug+0x150/0x1b8 kernel/locking/lockdep.c:2056
check_noncircular+0x2cc/0x378 kernel/locking/lockdep.c:2178
check_prev_add kernel/locking/lockdep.c:3098 [inline]
check_prevs_add kernel/locking/lockdep.c:3217 [inline]
validate_chain kernel/locking/lockdep.c:3832 [inline]
__lock_acquire+0x3338/0x764c kernel/locking/lockdep.c:5056
lock_acquire+0x300/0x8e4 kernel/locking/lockdep.c:5669
__mutex_lock_common+0x190/0x21a0 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x38/0x44 kernel/locking/mutex.c:799
hfsplus_get_block+0x2c4/0x1168 fs/hfsplus/extents.c:260
block_read_full_folio+0x2f4/0x98c fs/buffer.c:2271
hfsplus_read_folio+0x28/0x38 fs/hfsplus/inode.c:28
read_pages+0x4c0/0x6a0 mm/readahead.c:181
page_cache_ra_unbounded+0x46c/0x58c mm/readahead.c:270
do_page_cache_ra mm/readahead.c:300 [inline]
page_cache_ra_order+0x7fc/0x994 mm/readahead.c:560
ondemand_readahead+0x5f8/0xb04 mm/readahead.c:682
page_cache_async_ra+0x1b0/0x1cc mm/readahead.c:731
filemap_readahead mm/filemap.c:2557 [inline]
filemap_get_pages mm/filemap.c:2598 [inline]
filemap_read+0x7e0/0x2260 mm/filemap.c:2676
generic_file_read_iter+0xa0/0x450 mm/filemap.c:2822
call_read_iter include/linux/fs.h:2199 [inline]
new_sync_read fs/read_write.c:389 [inline]
vfs_read+0x5bc/0x8ac fs/read_write.c:470
ksys_read+0x15c/0x26c fs/read_write.c:613
__do_sys_read fs/read_write.c:623 [inline]
__se_sys_read fs/read_write.c:621 [inline]
__arm64_sys_read+0x7c/0x90 fs/read_write.c:621
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Mar 14, 2023, 8:03:04 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 2ddbd0f967b3 Linux 5.15.102
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=14a1ed1ac80000
kernel config: https://syzkaller.appspot.com/x/.config?x=d6af46e4bd7d6a2f
dashboard link: https://syzkaller.appspot.com/bug?extid=b46585854af80cb191c5
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/d46a989959b6/disk-2ddbd0f9.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/4d06a9b2ddaf/vmlinux-2ddbd0f9.xz
kernel image: https://storage.googleapis.com/syzbot-assets/0921009430c0/Image-2ddbd0f9.gz.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+b46585...@syzkaller.appspotmail.com

loop2: detected capacity change from 0 to 1024
======================================================
WARNING: possible circular locking dependency detected
5.15.102-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.2/6135 is trying to acquire lock:
ffff0000e2a9b048 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_get_block+0x2c4/0x1194 fs/hfsplus/extents.c:260

but task is already holding lock:
ffff000111ea20b0 (&tree->tree_lock){+.+.}-{3:3}, at: hfsplus_find_init+0x144/0x1bc

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&tree->tree_lock){+.+.}-{3:3}:
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xa4/0xf8 kernel/locking/mutex.c:743
hfsplus_file_truncate+0x6d4/0x9cc fs/hfsplus/extents.c:595
hfsplus_setattr+0x18c/0x25c fs/hfsplus/inode.c:267
notify_change+0xae4/0xd80 fs/attr.c:426
do_truncate+0x1bc/0x288 fs/open.c:65
do_sys_ftruncate+0x288/0x31c fs/open.c:193
__do_sys_ftruncate fs/open.c:204 [inline]
__se_sys_ftruncate fs/open.c:202 [inline]
__arm64_sys_ftruncate+0x60/0x74 fs/open.c:202
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 <unknown>:584

-> #0 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3787 [inline]
__lock_acquire+0x32cc/0x7620 kernel/locking/lockdep.c:5011
lock_acquire+0x2c0/0x89c kernel/locking/lockdep.c:5622
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xa4/0xf8 kernel/locking/mutex.c:743
hfsplus_get_block+0x2c4/0x1194 fs/hfsplus/extents.c:260
block_read_full_page+0x2a0/0xc4c fs/buffer.c:2290
hfsplus_readpage+0x28/0x38 fs/hfsplus/inode.c:28
do_read_cache_page+0x60c/0x950
read_cache_page+0x68/0x84 mm/filemap.c:3565
read_mapping_page include/linux/pagemap.h:515 [inline]
__hfs_bnode_create+0x3f0/0x864 fs/hfsplus/bnode.c:447
hfsplus_bnode_find+0x200/0xcb0 fs/hfsplus/bnode.c:497
hfsplus_brec_find+0x134/0x4a0 fs/hfsplus/bfind.c:183
hfsplus_brec_read+0x38/0x128 fs/hfsplus/bfind.c:222
hfsplus_find_cat+0x140/0x4a0 fs/hfsplus/catalog.c:202
hfsplus_iget+0x354/0x570 fs/hfsplus/super.c:82
hfsplus_fill_super+0x9c4/0x167c fs/hfsplus/super.c:503
mount_bdev+0x26c/0x368 fs/super.c:1369
hfsplus_mount+0x44/0x58 fs/hfsplus/super.c:641
legacy_get_tree+0xd4/0x16c fs/fs_context.c:610
vfs_get_tree+0x90/0x274 fs/super.c:1499
do_new_mount+0x25c/0x8c8 fs/namespace.c:2994
path_mount+0x590/0x104c fs/namespace.c:3324
do_mount fs/namespace.c:3337 [inline]
__do_sys_mount fs/namespace.c:3545 [inline]
__se_sys_mount fs/namespace.c:3522 [inline]
__arm64_sys_mount+0x510/0x5e0 fs/namespace.c:3522
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 <unknown>:584

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&tree->tree_lock);
                               lock(&HFSPLUS_I(inode)->extents_lock);
                               lock(&tree->tree_lock);
  lock(&HFSPLUS_I(inode)->extents_lock);

*** DEADLOCK ***
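Same disease with fewer hops on 5.15: mount holds tree_lock (hfsplus_find_init) and then needs extents_lock to page in a B-tree node, while truncate holds extents_lock and then takes tree_lock. The textbook cure for an ABBA report is a single global acquisition order funneled through helpers, so no call site can create the reverse edge. A userspace sketch of that discipline follows; the helper names are invented here, and whether "extents before tree" is the right order for hfsplus is precisely what an actual fix would have to establish:

/* Build with: cc -pthread order.c -- illustrative only, not a patch. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t extents_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;

/* Every path that needs both locks goes through here: extents first. */
static void lock_extents_then_tree(void)
{
	pthread_mutex_lock(&extents_lock);
	pthread_mutex_lock(&tree_lock);
}

static void unlock_tree_then_extents(void)
{
	pthread_mutex_unlock(&tree_lock);
	pthread_mutex_unlock(&extents_lock);
}

static void *worker(void *arg)
{
	(void)arg;
	/* Both the "truncate" side and the "mount" side use the same
	 * order, so the cycle lockdep reported can never form. */
	lock_extents_then_tree();
	unlock_tree_then_extents();
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, worker, NULL);
	pthread_create(&b, NULL, worker, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	puts("no deadlock: both threads used one lock order");
	return 0;
}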

2 locks held by syz-executor.2/6135:
#0: ffff000111ed60e0 (&type->s_umount_key#60/1){+.+.}-{3:3}, at: alloc_super+0x1b8/0x844 fs/super.c:229
#1: ffff000111ea20b0 (&tree->tree_lock){+.+.}-{3:3}, at: hfsplus_find_init+0x144/0x1bc

stack backtrace:
CPU: 1 PID: 6135 Comm: syz-executor.2 Not tainted 5.15.102-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
Call trace:
dump_backtrace+0x0/0x530 arch/arm64/kernel/stacktrace.c:152
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:216
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x108/0x170 lib/dump_stack.c:106
dump_stack+0x1c/0x58 lib/dump_stack.c:113
print_circular_bug+0x150/0x1b8 kernel/locking/lockdep.c:2011
check_noncircular+0x2cc/0x378 kernel/locking/lockdep.c:2133
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3787 [inline]
__lock_acquire+0x32cc/0x7620 kernel/locking/lockdep.c:5011
lock_acquire+0x2c0/0x89c kernel/locking/lockdep.c:5622
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xa4/0xf8 kernel/locking/mutex.c:743
hfsplus_get_block+0x2c4/0x1194 fs/hfsplus/extents.c:260
block_read_full_page+0x2a0/0xc4c fs/buffer.c:2290
hfsplus_readpage+0x28/0x38 fs/hfsplus/inode.c:28
do_read_cache_page+0x60c/0x950
read_cache_page+0x68/0x84 mm/filemap.c:3565
read_mapping_page include/linux/pagemap.h:515 [inline]
__hfs_bnode_create+0x3f0/0x864 fs/hfsplus/bnode.c:447
hfsplus_bnode_find+0x200/0xcb0 fs/hfsplus/bnode.c:497
hfsplus_brec_find+0x134/0x4a0 fs/hfsplus/bfind.c:183
hfsplus_brec_read+0x38/0x128 fs/hfsplus/bfind.c:222
hfsplus_find_cat+0x140/0x4a0 fs/hfsplus/catalog.c:202
hfsplus_iget+0x354/0x570 fs/hfsplus/super.c:82
hfsplus_fill_super+0x9c4/0x167c fs/hfsplus/super.c:503
mount_bdev+0x26c/0x368 fs/super.c:1369
hfsplus_mount+0x44/0x58 fs/hfsplus/super.c:641
legacy_get_tree+0xd4/0x16c fs/fs_context.c:610
vfs_get_tree+0x90/0x274 fs/super.c:1499
do_new_mount+0x25c/0x8c8 fs/namespace.c:2994
path_mount+0x590/0x104c fs/namespace.c:3324
do_mount fs/namespace.c:3337 [inline]
__do_sys_mount fs/namespace.c:3545 [inline]
__se_sys_mount fs/namespace.c:3522 [inline]
__arm64_sys_mount+0x510/0x5e0 fs/namespace.c:3522
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 <unknown>:584
hfsplus: failed to load root directory

syzbot

May 5, 2023, 10:00:08 AM
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: 8a7f2a5c5aa1 Linux 5.15.110
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=14bff870280000
kernel config: https://syzkaller.appspot.com/x/.config?x=7e93d602da27af41
dashboard link: https://syzkaller.appspot.com/bug?extid=b46585854af80cb191c5
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=12e11622280000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=15763070280000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/16bea75b636d/disk-8a7f2a5c.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/3b169e33dcf2/vmlinux-8a7f2a5c.xz
kernel image: https://storage.googleapis.com/syzbot-assets/190d08a00950/Image-8a7f2a5c.gz.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/d41ccf64efd6/mount_0.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+b46585...@syzkaller.appspotmail.com

=======================================================
WARNING: The mand mount option has been deprecated and
         and is ignored by this kernel. Remove the mand
         option from the mount to silence this warning.
=======================================================
============================================
WARNING: possible recursive locking detected
5.15.110-syzkaller #0 Not tainted
--------------------------------------------
syz-executor228/3967 is trying to acquire lock:
ffff0000c9c89548 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_get_block+0x2c4/0x1194 fs/hfsplus/extents.c:260

but task is already holding lock:
ffff0000c9c8a988 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_truncate+0x254/0x9cc fs/hfsplus/extents.c:576

other info that might help us debug this:
Possible unsafe locking scenario:

       CPU0
       ----
  lock(&HFSPLUS_I(inode)->extents_lock);
  lock(&HFSPLUS_I(inode)->extents_lock);

*** DEADLOCK ***

May be due to missing lock nesting notation
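Note that the two runtime addresses differ (ffff0000c9c8a988 held, ffff0000c9c89548 wanted): the task holds the extents_lock of the file being truncated, and hfsplus_block_free's read of the allocation-file bitmap then takes the extents_lock of that second inode, so lockdep sees the same class acquired twice with no nesting annotation, as the line above says. Reduced to a single lock, the pattern is just a non-recursive mutex taken twice; with an error-checking pthread mutex the second acquisition returns EDEADLK instead of hanging. A self-contained sketch (illustrative only, not hfsplus code):

/* Build with: cc -pthread recursive.c */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	pthread_mutex_t extents_lock;
	pthread_mutexattr_t attr;
	int err;

	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
	pthread_mutex_init(&extents_lock, &attr);

	pthread_mutex_lock(&extents_lock);	/* "hfsplus_file_truncate" */
	err = pthread_mutex_lock(&extents_lock);/* "hfsplus_get_block" */
	if (err)
		printf("second lock: %s\n", strerror(err)); /* EDEADLK */

	pthread_mutex_unlock(&extents_lock);
	pthread_mutex_destroy(&extents_lock);
	pthread_mutexattr_destroy(&attr);
	return 0;
}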

4 locks held by syz-executor228/3967:
#0: ffff0000c9ab2460 (sb_writers#8){.+.+}-{0:0}, at: mnt_want_write+0x44/0x9c fs/namespace.c:377
#1: ffff0000c9c8ab80 (&sb->s_type->i_mutex_key#17){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:787 [inline]
#1: ffff0000c9c8ab80 (&sb->s_type->i_mutex_key#17){+.+.}-{3:3}, at: do_truncate+0x1ac/0x28c fs/open.c:63
#2: ffff0000c9c8a988 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_truncate+0x254/0x9cc fs/hfsplus/extents.c:576
#3: ffff0000dd09f8f8 (&sbi->alloc_mutex){+.+.}-{3:3}, at: hfsplus_block_free+0xcc/0x514 fs/hfsplus/bitmap.c:182

stack backtrace:
CPU: 0 PID: 3967 Comm: syz-executor228 Not tainted 5.15.110-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/14/2023
Call trace:
dump_backtrace+0x0/0x530 arch/arm64/kernel/stacktrace.c:152
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:216
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x108/0x170 lib/dump_stack.c:106
dump_stack+0x1c/0x58 lib/dump_stack.c:113
__lock_acquire+0x62b4/0x7620 kernel/locking/lockdep.c:5011
lock_acquire+0x240/0x77c kernel/locking/lockdep.c:5622
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xa4/0xf8 kernel/locking/mutex.c:743
hfsplus_get_block+0x2c4/0x1194 fs/hfsplus/extents.c:260
block_read_full_page+0x2a0/0xc4c fs/buffer.c:2290
hfsplus_readpage+0x28/0x38 fs/hfsplus/inode.c:28
do_read_cache_page+0x60c/0x950
read_cache_page+0x68/0x84 mm/filemap.c:3565
read_mapping_page include/linux/pagemap.h:515 [inline]
hfsplus_block_free+0x120/0x514 fs/hfsplus/bitmap.c:185
hfsplus_free_extents+0x148/0x8d4 fs/hfsplus/extents.c:363
hfsplus_file_truncate+0x69c/0x9cc fs/hfsplus/extents.c:591
hfsplus_setattr+0x18c/0x25c fs/hfsplus/inode.c:267
notify_change+0xac4/0xd60 fs/attr.c:488
do_truncate+0x1c0/0x28c fs/open.c:65
vfs_truncate+0x2e0/0x388 fs/open.c:111
do_sys_truncate+0xec/0x1b4 fs/open.c:134
__do_sys_truncate fs/open.c:146 [inline]
__se_sys_truncate fs/open.c:144 [inline]
__arm64_sys_truncate+0x5c/0x70 fs/open.c:144
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584
hfsplus: unable to mark blocks free: error -5
hfsplus: can't free extent


---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

syzbot

May 7, 2023, 12:52:56 AM
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: ca48fc16c493 Linux 6.1.27
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=14f572b8280000
kernel config: https://syzkaller.appspot.com/x/.config?x=aea4bb7802570997
dashboard link: https://syzkaller.appspot.com/bug?extid=4b92cf592cc55a929db6
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1526cc46280000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=11d75fd2280000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/ec11c1903c52/disk-ca48fc16.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/8ce41c1ad391/vmlinux-ca48fc16.xz
kernel image: https://storage.googleapis.com/syzbot-assets/affba5631cad/Image-ca48fc16.gz.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/451b1bb2b279/mount_0.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+4b92cf...@syzkaller.appspotmail.com

=======================================================
WARNING: The mand mount option has been deprecated and
         and is ignored by this kernel. Remove the mand
         option from the mount to silence this warning.
=======================================================
============================================
WARNING: possible recursive locking detected
6.1.27-syzkaller #0 Not tainted
--------------------------------------------
syz-executor137/4223 is trying to acquire lock:
ffff0000d7dc9548 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_get_block+0x2c4/0x1168 fs/hfsplus/extents.c:260

but task is already holding lock:
ffff0000d7dca988 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_truncate+0x250/0x9b8 fs/hfsplus/extents.c:576

other info that might help us debug this:
Possible unsafe locking scenario:

       CPU0
       ----
  lock(&HFSPLUS_I(inode)->extents_lock);
  lock(&HFSPLUS_I(inode)->extents_lock);

*** DEADLOCK ***

May be due to missing lock nesting notation

4 locks held by syz-executor137/4223:
#0: ffff0000d8050460 (sb_writers#8){.+.+}-{0:0}, at: mnt_want_write+0x44/0x9c fs/namespace.c:393
#1: ffff0000d7dcab80 (&sb->s_type->i_mutex_key#17){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
#1: ffff0000d7dcab80 (&sb->s_type->i_mutex_key#17){+.+.}-{3:3}, at: do_truncate+0x1ac/0x28c fs/open.c:63
#2: ffff0000d7dca988 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_truncate+0x250/0x9b8 fs/hfsplus/extents.c:576
#3: ffff0000d4b6f8f8 (&sbi->alloc_mutex){+.+.}-{3:3}, at: hfsplus_block_free+0xcc/0x4b0 fs/hfsplus/bitmap.c:182

stack backtrace:
CPU: 1 PID: 4223 Comm: syz-executor137 Not tainted 6.1.27-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/14/2023
Call trace:
dump_backtrace+0x1c8/0x1f4 arch/arm64/kernel/stacktrace.c:158
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:165
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x108/0x170 lib/dump_stack.c:106
dump_stack+0x1c/0x5c lib/dump_stack.c:113
__lock_acquire+0x6310/0x764c kernel/locking/lockdep.c:5056
lock_acquire+0x26c/0x7cc kernel/locking/lockdep.c:5669
__mutex_lock_common+0x190/0x21a0 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x38/0x44 kernel/locking/mutex.c:799
hfsplus_get_block+0x2c4/0x1168 fs/hfsplus/extents.c:260
block_read_full_folio+0x2f4/0x98c fs/buffer.c:2271
hfsplus_read_folio+0x28/0x38 fs/hfsplus/inode.c:28
filemap_read_folio+0x14c/0x39c mm/filemap.c:2407
do_read_cache_folio+0x24c/0x544 mm/filemap.c:3535
do_read_cache_page mm/filemap.c:3577 [inline]
read_cache_page+0x6c/0x180 mm/filemap.c:3586
read_mapping_page include/linux/pagemap.h:756 [inline]
hfsplus_block_free+0x11c/0x4b0 fs/hfsplus/bitmap.c:185
hfsplus_free_extents+0x148/0x8d4 fs/hfsplus/extents.c:363
hfsplus_file_truncate+0x698/0x9b8 fs/hfsplus/extents.c:591
hfsplus_setattr+0x18c/0x25c fs/hfsplus/inode.c:269
notify_change+0xc24/0xec0 fs/attr.c:482
do_truncate+0x1c0/0x28c fs/open.c:65
vfs_truncate+0x2c4/0x36c fs/open.c:111
do_sys_truncate+0xec/0x1b4 fs/open.c:134
__do_sys_truncate fs/open.c:146 [inline]
__se_sys_truncate fs/open.c:144 [inline]
__arm64_sys_truncate+0x5c/0x70 fs/open.c:144
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581