[v6.1] possible deadlock in hfs_find_init

syzbot

Mar 20, 2023, 13:38:44
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 7eaef76fbc46 Linux 6.1.20
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=13af4802c80000
kernel config: https://syzkaller.appspot.com/x/.config?x=28c36fe4d02f8c88
dashboard link: https://syzkaller.appspot.com/bug?extid=6cc76a2d7d5627cfdabc
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/610a00ba4375/disk-7eaef76f.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/57c1310f9a30/vmlinux-7eaef76f.xz
kernel image: https://storage.googleapis.com/syzbot-assets/81999f717d3b/bzImage-7eaef76f.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+6cc76a...@syzkaller.appspotmail.com

loop0: detected capacity change from 0 to 64
======================================================
WARNING: possible circular locking dependency detected
6.1.20-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.0/21978 is trying to acquire lock:
ffff88807a3060b0 (&tree->tree_lock/1){+.+.}-{3:3}, at: hfs_find_init+0x16a/0x1e0

but task is already holding lock:
ffff888054a78778 (&HFS_I(tree->inode)->extents_lock){+.+.}-{3:3}, at: hfs_get_block+0x26b/0xb60 fs/hfs/extent.c:365

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&HFS_I(tree->inode)->extents_lock){+.+.}-{3:3}:
lock_acquire+0x23a/0x630 kernel/locking/lockdep.c:5669
__mutex_lock_common+0x1d4/0x2520 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:799
hfs_extend_file+0xfb/0x1440 fs/hfs/extent.c:397
hfs_bmap_reserve+0xd5/0x3f0 fs/hfs/btree.c:234
__hfs_ext_write_extent+0x22e/0x4f0 fs/hfs/extent.c:121
__hfs_ext_cache_extent+0x6a/0x990 fs/hfs/extent.c:174
hfs_ext_read_extent fs/hfs/extent.c:202 [inline]
hfs_extend_file+0x340/0x1440 fs/hfs/extent.c:401
hfs_get_block+0x3e0/0xb60 fs/hfs/extent.c:353
__block_write_begin_int+0x544/0x1a30 fs/buffer.c:1991
__block_write_begin fs/buffer.c:2041 [inline]
block_write_begin+0x98/0x1f0 fs/buffer.c:2102
cont_write_begin+0x63f/0x880 fs/buffer.c:2456
hfs_write_begin+0x86/0xd0 fs/hfs/inode.c:58
generic_perform_write+0x2fc/0x5e0 mm/filemap.c:3754
__generic_file_write_iter+0x176/0x400 mm/filemap.c:3882
generic_file_write_iter+0xab/0x310 mm/filemap.c:3914
call_write_iter include/linux/fs.h:2205 [inline]
new_sync_write fs/read_write.c:491 [inline]
vfs_write+0x7ae/0xba0 fs/read_write.c:584
ksys_write+0x19c/0x2c0 fs/read_write.c:637
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&tree->tree_lock/1){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3098 [inline]
check_prevs_add kernel/locking/lockdep.c:3217 [inline]
validate_chain+0x1667/0x58e0 kernel/locking/lockdep.c:3832
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5056
lock_acquire+0x23a/0x630 kernel/locking/lockdep.c:5669
__mutex_lock_common+0x1d4/0x2520 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:799
hfs_find_init+0x16a/0x1e0
hfs_ext_read_extent fs/hfs/extent.c:200 [inline]
hfs_get_block+0x4f0/0xb60 fs/hfs/extent.c:366
block_read_full_folio+0x403/0xf60 fs/buffer.c:2271
filemap_read_folio+0x199/0x780 mm/filemap.c:2407
do_read_cache_folio+0x2ee/0x810 mm/filemap.c:3535
do_read_cache_page+0x32/0x220 mm/filemap.c:3577
read_mapping_page include/linux/pagemap.h:756 [inline]
hfs_btree_open+0x507/0xf20 fs/hfs/btree.c:78
hfs_mdb_get+0x14c3/0x21b0 fs/hfs/mdb.c:204
hfs_fill_super+0x100c/0x1730 fs/hfs/super.c:406
mount_bdev+0x26d/0x3a0 fs/super.c:1414
legacy_get_tree+0xeb/0x180 fs/fs_context.c:610
vfs_get_tree+0x88/0x270 fs/super.c:1544
do_new_mount+0x28b/0xad0 fs/namespace.c:3040
do_mount fs/namespace.c:3383 [inline]
__do_sys_mount fs/namespace.c:3591 [inline]
__se_sys_mount+0x2d5/0x3c0 fs/namespace.c:3568
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&HFS_I(tree->inode)->extents_lock);
                               lock(&tree->tree_lock/1);
                               lock(&HFS_I(tree->inode)->extents_lock);
  lock(&tree->tree_lock/1);

*** DEADLOCK ***

2 locks held by syz-executor.0/21978:
#0: ffff8880748a60e0 (&type->s_umount_key#53/1){+.+.}-{3:3}, at: alloc_super+0x217/0x930 fs/super.c:228
#1: ffff888054a78778 (&HFS_I(tree->inode)->extents_lock){+.+.}-{3:3}, at: hfs_get_block+0x26b/0xb60 fs/hfs/extent.c:365

stack backtrace:
CPU: 1 PID: 21978 Comm: syz-executor.0 Not tainted 6.1.20-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
check_noncircular+0x2fa/0x3b0 kernel/locking/lockdep.c:2178
check_prev_add kernel/locking/lockdep.c:3098 [inline]
check_prevs_add kernel/locking/lockdep.c:3217 [inline]
validate_chain+0x1667/0x58e0 kernel/locking/lockdep.c:3832
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5056
lock_acquire+0x23a/0x630 kernel/locking/lockdep.c:5669
__mutex_lock_common+0x1d4/0x2520 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:799
hfs_find_init+0x16a/0x1e0
hfs_ext_read_extent fs/hfs/extent.c:200 [inline]
hfs_get_block+0x4f0/0xb60 fs/hfs/extent.c:366
block_read_full_folio+0x403/0xf60 fs/buffer.c:2271
filemap_read_folio+0x199/0x780 mm/filemap.c:2407
do_read_cache_folio+0x2ee/0x810 mm/filemap.c:3535
do_read_cache_page+0x32/0x220 mm/filemap.c:3577
read_mapping_page include/linux/pagemap.h:756 [inline]
hfs_btree_open+0x507/0xf20 fs/hfs/btree.c:78
hfs_mdb_get+0x14c3/0x21b0 fs/hfs/mdb.c:204
hfs_fill_super+0x100c/0x1730 fs/hfs/super.c:406
mount_bdev+0x26d/0x3a0 fs/super.c:1414
legacy_get_tree+0xeb/0x180 fs/fs_context.c:610
vfs_get_tree+0x88/0x270 fs/super.c:1544
do_new_mount+0x28b/0xad0 fs/namespace.c:3040
do_mount fs/namespace.c:3383 [inline]
__do_sys_mount fs/namespace.c:3591 [inline]
__se_sys_mount+0x2d5/0x3c0 fs/namespace.c:3568
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7fe9e188d62a
Code: 48 c7 c2 b8 ff ff ff f7 d8 64 89 02 b8 ff ff ff ff eb d2 e8 b8 04 00 00 0f 1f 84 00 00 00 00 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fe9e2669f88 EFLAGS: 00000202 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 000000000000028f RCX: 00007fe9e188d62a
RDX: 00000000200000c0 RSI: 0000000020000140 RDI: 00007fe9e2669fe0
RBP: 00007fe9e266a020 R08: 00007fe9e266a020 R09: 000000000000840a
R10: 000000000000840a R11: 0000000000000202 R12: 00000000200000c0
R13: 0000000020000140 R14: 00007fe9e2669fe0 R15: 00000000200004c0
</TASK>
hfs: unable to open catalog tree


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Mar 22, 2023, 19:14:49
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 115472395b0a Linux 5.15.104
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=16c9a089c80000
kernel config: https://syzkaller.appspot.com/x/.config?x=e597b110d58e7b4
dashboard link: https://syzkaller.appspot.com/bug?extid=5ec6d29e9352c6f10dc7
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/76798ca1c9b6/disk-11547239.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/3b608633c8f5/vmlinux-11547239.xz
kernel image: https://storage.googleapis.com/syzbot-assets/8836fafb618b/Image-11547239.gz.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+5ec6d2...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
5.15.104-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.5/8016 is trying to acquire lock:
ffff0000cb1980b0 (&tree->tree_lock#2/1){+.+.}-{3:3}, at: hfs_find_init+0x148/0x1c8

but task is already holding lock:
ffff0000c7c44878 (&HFS_I(tree->inode)->extents_lock){+.+.}-{3:3}, at: hfs_extend_file+0xe4/0x10e4 fs/hfs/extent.c:397

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&HFS_I(tree->inode)->extents_lock){+.+.}-{3:3}:
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xa4/0xf8 kernel/locking/mutex.c:743
hfs_extend_file+0xe4/0x10e4 fs/hfs/extent.c:397
hfs_bmap_reserve+0xd0/0x3b4 fs/hfs/btree.c:231
__hfs_ext_write_extent+0x1a0/0x468 fs/hfs/extent.c:121
__hfs_ext_cache_extent+0x84/0x754 fs/hfs/extent.c:174
hfs_ext_read_extent fs/hfs/extent.c:202 [inline]
hfs_extend_file+0x278/0x10e4 fs/hfs/extent.c:401
hfs_get_block+0x3ac/0x9fc fs/hfs/extent.c:353
__block_write_begin_int+0x3ec/0x1608 fs/buffer.c:2012
__block_write_begin fs/buffer.c:2062 [inline]
block_write_begin fs/buffer.c:2122 [inline]
cont_write_begin+0x538/0x710 fs/buffer.c:2471
hfs_write_begin+0xa8/0xf8 fs/hfs/inode.c:59
generic_perform_write+0x24c/0x520 mm/filemap.c:3776
__generic_file_write_iter+0x230/0x454 mm/filemap.c:3903
generic_file_write_iter+0xb4/0x1b8 mm/filemap.c:3935
call_write_iter include/linux/fs.h:2103 [inline]
new_sync_write fs/read_write.c:507 [inline]
vfs_write+0x87c/0xb3c fs/read_write.c:594
ksys_write+0x15c/0x26c fs/read_write.c:647
__do_sys_write fs/read_write.c:659 [inline]
__se_sys_write fs/read_write.c:656 [inline]
__arm64_sys_write+0x7c/0x90 fs/read_write.c:656
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584

-> #0 (&tree->tree_lock#2/1){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3787 [inline]
__lock_acquire+0x32cc/0x7620 kernel/locking/lockdep.c:5011
lock_acquire+0x240/0x77c kernel/locking/lockdep.c:5622
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xa4/0xf8 kernel/locking/mutex.c:743
hfs_find_init+0x148/0x1c8
hfs_ext_read_extent fs/hfs/extent.c:200 [inline]
hfs_extend_file+0x24c/0x10e4 fs/hfs/extent.c:401
hfs_bmap_reserve+0xd0/0x3b4 fs/hfs/btree.c:231
hfs_cat_create+0x1bc/0x844 fs/hfs/catalog.c:104
hfs_create+0x70/0xe4 fs/hfs/dir.c:202
lookup_open fs/namei.c:3392 [inline]
open_last_lookups fs/namei.c:3462 [inline]
path_openat+0xec0/0x26f0 fs/namei.c:3669
do_filp_open+0x1a8/0x3b4 fs/namei.c:3699
do_sys_openat2+0x128/0x3d8 fs/open.c:1211
do_sys_open fs/open.c:1227 [inline]
__do_sys_openat fs/open.c:1243 [inline]
__se_sys_openat fs/open.c:1238 [inline]
__arm64_sys_openat+0x1f0/0x240 fs/open.c:1238
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&HFS_I(tree->inode)->extents_lock);
                               lock(&tree->tree_lock#2/1);
                               lock(&HFS_I(tree->inode)->extents_lock);
  lock(&tree->tree_lock#2/1);

*** DEADLOCK ***

4 locks held by syz-executor.5/8016:
#0: ffff0000da9fa460 (sb_writers#18){.+.+}-{0:0}, at: mnt_want_write+0x44/0x9c fs/namespace.c:377
#1: ffff0000c7c45728 (&type->i_mutex_dir_key#14){++++}-{3:3}, at: inode_lock include/linux/fs.h:787 [inline]
#1: ffff0000c7c45728 (&type->i_mutex_dir_key#14){++++}-{3:3}, at: open_last_lookups fs/namei.c:3459 [inline]
#1: ffff0000c7c45728 (&type->i_mutex_dir_key#14){++++}-{3:3}, at: path_openat+0x63c/0x26f0 fs/namei.c:3669
#2: ffff00011b1660b0 (&tree->tree_lock#2){+.+.}-{3:3}, at: hfs_find_init+0x148/0x1c8
#3: ffff0000c7c44878 (&HFS_I(tree->inode)->extents_lock){+.+.}-{3:3}, at: hfs_extend_file+0xe4/0x10e4 fs/hfs/extent.c:397

stack backtrace:
CPU: 1 PID: 8016 Comm: syz-executor.5 Not tainted 5.15.104-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
Call trace:
dump_backtrace+0x0/0x530 arch/arm64/kernel/stacktrace.c:152
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:216
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x108/0x170 lib/dump_stack.c:106
dump_stack+0x1c/0x58 lib/dump_stack.c:113
print_circular_bug+0x150/0x1b8 kernel/locking/lockdep.c:2011
check_noncircular+0x2cc/0x378 kernel/locking/lockdep.c:2133
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3787 [inline]
__lock_acquire+0x32cc/0x7620 kernel/locking/lockdep.c:5011
lock_acquire+0x240/0x77c kernel/locking/lockdep.c:5622
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xa4/0xf8 kernel/locking/mutex.c:743
hfs_find_init+0x148/0x1c8
hfs_ext_read_extent fs/hfs/extent.c:200 [inline]
hfs_extend_file+0x24c/0x10e4 fs/hfs/extent.c:401
hfs_bmap_reserve+0xd0/0x3b4 fs/hfs/btree.c:231
hfs_cat_create+0x1bc/0x844 fs/hfs/catalog.c:104
hfs_create+0x70/0xe4 fs/hfs/dir.c:202
lookup_open fs/namei.c:3392 [inline]
open_last_lookups fs/namei.c:3462 [inline]
path_openat+0xec0/0x26f0 fs/namei.c:3669
do_filp_open+0x1a8/0x3b4 fs/namei.c:3699
do_sys_openat2+0x128/0x3d8 fs/open.c:1211
do_sys_open fs/open.c:1227 [inline]
__do_sys_openat fs/open.c:1243 [inline]
__se_sys_openat fs/open.c:1238 [inline]
__arm64_sys_openat+0x1f0/0x240 fs/open.c:1238
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584
hfs: request for non-existent node 16777216 in B*Tree
hfs: request for non-existent node 16777216 in B*Tree

syzbot

Jun 16, 2023, 23:19:08
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: ca87e77a2ef8 Linux 6.1.34
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=14924b17280000
kernel config: https://syzkaller.appspot.com/x/.config?x=143044f84cdceac2
dashboard link: https://syzkaller.appspot.com/bug?extid=6cc76a2d7d5627cfdabc
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=10ce5137280000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=14879d17280000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/1141c37ce351/disk-ca87e77a.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/de1fca0d0bb4/vmlinux-ca87e77a.xz
kernel image: https://storage.googleapis.com/syzbot-assets/c3417b70e0bf/Image-ca87e77a.gz.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/99e3adedecb8/mount_0.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+6cc76a...@syzkaller.appspotmail.com

loop0: detected capacity change from 0 to 64
============================================
WARNING: possible recursive locking detected
6.1.34-syzkaller #0 Not tainted
--------------------------------------------
syz-executor234/4216 is trying to acquire lock:
ffff0000de89c0b0 (&tree->tree_lock/1){+.+.}-{3:3}, at: hfs_find_init+0x148/0x1c8

but task is already holding lock:
ffff0000de89c0b0 (&tree->tree_lock/1){+.+.}-{3:3}, at: hfs_find_init+0x148/0x1c8

other info that might help us debug this:
Possible unsafe locking scenario:

       CPU0
       ----
  lock(&tree->tree_lock/1);
  lock(&tree->tree_lock/1);

*** DEADLOCK ***

May be due to missing lock nesting notation

5 locks held by syz-executor234/4216:
#0: ffff0000d4e1e460 (sb_writers#8){.+.+}-{0:0}, at: vfs_write+0x244/0x914 fs/read_write.c:580
#1: ffff0000d8239628 (&sb->s_type->i_mutex_key#17){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
#1: ffff0000d8239628 (&sb->s_type->i_mutex_key#17){+.+.}-{3:3}, at: generic_file_write_iter+0x88/0x2b4 mm/filemap.c:3911
#2: ffff0000d8239478 (&HFS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfs_extend_file+0xe4/0x1130 fs/hfs/extent.c:397
#3: ffff0000de89c0b0 (&tree->tree_lock/1){+.+.}-{3:3}, at: hfs_find_init+0x148/0x1c8
#4: ffff0000d82380f8 (&HFS_I(tree->inode)->extents_lock){+.+.}-{3:3}, at: hfs_extend_file+0xe4/0x1130 fs/hfs/extent.c:397

stack backtrace:
CPU: 1 PID: 4216 Comm: syz-executor234 Not tainted 6.1.34-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/25/2023
Call trace:
dump_backtrace+0x1c8/0x1f4 arch/arm64/kernel/stacktrace.c:158
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:165
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x108/0x170 lib/dump_stack.c:106
dump_stack+0x1c/0x58 lib/dump_stack.c:113
__lock_acquire+0x6310/0x764c kernel/locking/lockdep.c:5056
lock_acquire+0x26c/0x7cc kernel/locking/lockdep.c:5669
__mutex_lock_common+0x190/0x21a0 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x38/0x44 kernel/locking/mutex.c:799
hfs_find_init+0x148/0x1c8
hfs_ext_read_extent fs/hfs/extent.c:200 [inline]
hfs_extend_file+0x270/0x1130 fs/hfs/extent.c:401
hfs_bmap_reserve+0xd0/0x3b4 fs/hfs/btree.c:234
__hfs_ext_write_extent+0x1a0/0x468 fs/hfs/extent.c:121
__hfs_ext_cache_extent+0x84/0x754 fs/hfs/extent.c:174
hfs_ext_read_extent fs/hfs/extent.c:202 [inline]
hfs_extend_file+0x29c/0x1130 fs/hfs/extent.c:401
hfs_get_block+0x3b8/0x9e0 fs/hfs/extent.c:353
__block_write_begin_int+0x340/0x13b4 fs/buffer.c:1991
__block_write_begin fs/buffer.c:2041 [inline]
block_write_begin fs/buffer.c:2102 [inline]
cont_write_begin+0x5c0/0x7d8 fs/buffer.c:2456
hfs_write_begin+0x98/0xe4 fs/hfs/inode.c:58
generic_perform_write+0x278/0x55c mm/filemap.c:3754
__generic_file_write_iter+0x168/0x388 mm/filemap.c:3882
generic_file_write_iter+0xb8/0x2b4 mm/filemap.c:3914
call_write_iter include/linux/fs.h:2205 [inline]
new_sync_write fs/read_write.c:491 [inline]
vfs_write+0x610/0x914 fs/read_write.c:584
ksys_write+0x15c/0x26c fs/read_write.c:637
__do_sys_write fs/read_write.c:649 [inline]
__se_sys_write fs/read_write.c:646 [inline]
__arm64_sys_write+0x7c/0x90 fs/read_write.c:646
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581


---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

syzbot

Jun 16, 2023, 23:50:52
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: 471e639e59d1 Linux 5.15.117
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=127bf2cf280000
kernel config: https://syzkaller.appspot.com/x/.config?x=eeb4064efec7aa39
dashboard link: https://syzkaller.appspot.com/bug?extid=5ec6d29e9352c6f10dc7
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=128e85f7280000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=17ad46e3280000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/2e7359ecba67/disk-471e639e.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/ef6d17e44bc3/vmlinux-471e639e.xz
kernel image: https://storage.googleapis.com/syzbot-assets/99b68dbd7e00/Image-471e639e.gz.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/f5a8b1a5d310/mount_0.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+5ec6d2...@syzkaller.appspotmail.com

loop0: detected capacity change from 0 to 64
============================================
WARNING: possible recursive locking detected
5.15.117-syzkaller #0 Not tainted
--------------------------------------------
syz-executor280/3960 is trying to acquire lock:
ffff0000c94000b0 (&tree->tree_lock/1){+.+.}-{3:3}, at: hfs_find_init+0x148/0x1c8

but task is already holding lock:
ffff0000c94000b0 (&tree->tree_lock/1){+.+.}-{3:3}, at: hfs_find_init+0x148/0x1c8

other info that might help us debug this:
Possible unsafe locking scenario:

       CPU0
       ----
  lock(&tree->tree_lock/1);
  lock(&tree->tree_lock/1);

*** DEADLOCK ***

May be due to missing lock nesting notation

5 locks held by syz-executor280/3960:
#0: ffff0000c1ee6460 (sb_writers#8){.+.+}-{0:0}, at: vfs_write+0x228/0xb3c fs/read_write.c:590
#1: ffff0000c2269628 (&sb->s_type->i_mutex_key#17){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:787 [inline]
#1: ffff0000c2269628 (&sb->s_type->i_mutex_key#17){+.+.}-{3:3}, at: generic_file_write_iter+0x84/0x1b8 mm/filemap.c:3932
#2: ffff0000c2269478 (&HFS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfs_extend_file+0xe4/0x10e4 fs/hfs/extent.c:397
#3: ffff0000c94000b0 (&tree->tree_lock/1){+.+.}-{3:3}, at: hfs_find_init+0x148/0x1c8
#4: ffff0000c22680f8 (&HFS_I(tree->inode)->extents_lock){+.+.}-{3:3}, at: hfs_extend_file+0xe4/0x10e4 fs/hfs/extent.c:397

stack backtrace:
CPU: 0 PID: 3960 Comm: syz-executor280 Not tainted 5.15.117-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/25/2023
Call trace:
dump_backtrace+0x0/0x530 arch/arm64/kernel/stacktrace.c:152
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:216
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x108/0x170 lib/dump_stack.c:106
dump_stack+0x1c/0x58 lib/dump_stack.c:113
__lock_acquire+0x62b4/0x7620 kernel/locking/lockdep.c:5011
lock_acquire+0x240/0x77c kernel/locking/lockdep.c:5622
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xa4/0xf8 kernel/locking/mutex.c:743
hfs_find_init+0x148/0x1c8
hfs_ext_read_extent fs/hfs/extent.c:200 [inline]
hfs_extend_file+0x24c/0x10e4 fs/hfs/extent.c:401
hfs_bmap_reserve+0xd0/0x3b4 fs/hfs/btree.c:231