[v5.15] possible deadlock in hfsplus_block_free (2)

syzbot

Jun 7, 2024, 7:28:28 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: c61bd26ae81a Linux 5.15.160
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=13001932980000
kernel config: https://syzkaller.appspot.com/x/.config?x=6a313cb27403a960
dashboard link: https://syzkaller.appspot.com/bug?extid=e472c8ce19be27f2a396
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: arm64

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/c5c43c69147f/disk-c61bd26a.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/3e9c98d00e66/vmlinux-c61bd26a.xz
kernel image: https://storage.googleapis.com/syzbot-assets/e9da759b078f/Image-c61bd26a.gz.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+e472c8...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
5.15.160-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.0/6682 is trying to acquire lock:
ffff0000db3aa8f8 (&sbi->alloc_mutex){+.+.}-{3:3}, at: hfsplus_block_free+0xcc/0x514 fs/hfsplus/bitmap.c:182

but task is already holding lock:
ffff0000c1506648 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_truncate+0x254/0x9cc fs/hfsplus/extents.c:576

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}:
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xa4/0xf8 kernel/locking/mutex.c:743
hfsplus_get_block+0x2c4/0x1194 fs/hfsplus/extents.c:260
block_read_full_page+0x2a0/0xc4c fs/buffer.c:2290
hfsplus_readpage+0x28/0x38 fs/hfsplus/inode.c:28
do_read_cache_page+0x60c/0x950
read_cache_page+0x68/0x84 mm/filemap.c:3574
read_mapping_page include/linux/pagemap.h:515 [inline]
hfsplus_block_allocate+0xe0/0x800 fs/hfsplus/bitmap.c:37
hfsplus_file_extend+0x770/0x14e0 fs/hfsplus/extents.c:468
hfsplus_get_block+0x398/0x1194 fs/hfsplus/extents.c:245
__block_write_begin_int+0x3ec/0x1608 fs/buffer.c:2012
__block_write_begin fs/buffer.c:2062 [inline]
block_write_begin fs/buffer.c:2122 [inline]
cont_write_begin+0x538/0x710 fs/buffer.c:2471
hfsplus_write_begin+0xa8/0xf8 fs/hfsplus/inode.c:53
pagecache_write_begin+0xa0/0xc0 mm/filemap.c:3608
__page_symlink+0x140/0x2b4 fs/namei.c:5177
page_symlink+0x88/0xac fs/namei.c:5200
hfsplus_symlink+0xc0/0x224 fs/hfsplus/dir.c:449
vfs_symlink+0x244/0x3a8 fs/namei.c:4429
do_symlinkat+0x364/0x6b0 fs/namei.c:4458
__do_sys_symlinkat fs/namei.c:4475 [inline]
__se_sys_symlinkat fs/namei.c:4472 [inline]
__arm64_sys_symlinkat+0xa4/0xbc fs/namei.c:4472
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:608
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:626
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584

-> #0 (&sbi->alloc_mutex){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3788 [inline]
__lock_acquire+0x32d4/0x7638 kernel/locking/lockdep.c:5012
lock_acquire+0x240/0x77c kernel/locking/lockdep.c:5623
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xa4/0xf8 kernel/locking/mutex.c:743
hfsplus_block_free+0xcc/0x514 fs/hfsplus/bitmap.c:182
hfsplus_free_extents+0x148/0x8d4 fs/hfsplus/extents.c:363
hfsplus_file_truncate+0x69c/0x9cc fs/hfsplus/extents.c:591
hfsplus_setattr+0x18c/0x25c fs/hfsplus/inode.c:267
notify_change+0xa34/0xcf8 fs/attr.c:505
do_truncate+0x1c0/0x28c fs/open.c:65
handle_truncate fs/namei.c:3265 [inline]
do_open fs/namei.c:3612 [inline]
path_openat+0x20c4/0x26cc fs/namei.c:3742
do_filp_open+0x1a8/0x3b4 fs/namei.c:3769
do_sys_openat2+0x128/0x3d8 fs/open.c:1253
do_sys_open fs/open.c:1269 [inline]
__do_sys_openat fs/open.c:1285 [inline]
__se_sys_openat fs/open.c:1280 [inline]
__arm64_sys_openat+0x1f0/0x240 fs/open.c:1280
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:608
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:626
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&HFSPLUS_I(inode)->extents_lock);
                               lock(&sbi->alloc_mutex);
                               lock(&HFSPLUS_I(inode)->extents_lock);
  lock(&sbi->alloc_mutex);

*** DEADLOCK ***

3 locks held by syz-executor.0/6682:
#0: ffff0000d00e6460 (sb_writers#14){.+.+}-{0:0}, at: mnt_want_write+0x44/0x9c fs/namespace.c:377
#1: ffff0000c1506840 (&sb->s_type->i_mutex_key#22){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:789 [inline]
#1: ffff0000c1506840 (&sb->s_type->i_mutex_key#22){+.+.}-{3:3}, at: do_truncate+0x1ac/0x28c fs/open.c:63
#2: ffff0000c1506648 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_truncate+0x254/0x9cc fs/hfsplus/extents.c:576

stack backtrace:
CPU: 1 PID: 6682 Comm: syz-executor.0 Not tainted 5.15.160-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Call trace:
dump_backtrace+0x0/0x530 arch/arm64/kernel/stacktrace.c:152
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:216
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x108/0x170 lib/dump_stack.c:106
dump_stack+0x1c/0x58 lib/dump_stack.c:113
print_circular_bug+0x150/0x1b8 kernel/locking/lockdep.c:2011
check_noncircular+0x2cc/0x378 kernel/locking/lockdep.c:2133
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3788 [inline]
__lock_acquire+0x32d4/0x7638 kernel/locking/lockdep.c:5012
lock_acquire+0x240/0x77c kernel/locking/lockdep.c:5623
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xa4/0xf8 kernel/locking/mutex.c:743
hfsplus_block_free+0xcc/0x514 fs/hfsplus/bitmap.c:182
hfsplus_free_extents+0x148/0x8d4 fs/hfsplus/extents.c:363
hfsplus_file_truncate+0x69c/0x9cc fs/hfsplus/extents.c:591
hfsplus_setattr+0x18c/0x25c fs/hfsplus/inode.c:267
notify_change+0xa34/0xcf8 fs/attr.c:505
do_truncate+0x1c0/0x28c fs/open.c:65
handle_truncate fs/namei.c:3265 [inline]
do_open fs/namei.c:3612 [inline]
path_openat+0x20c4/0x26cc fs/namei.c:3742
do_filp_open+0x1a8/0x3b4 fs/namei.c:3769
do_sys_openat2+0x128/0x3d8 fs/open.c:1253
do_sys_open fs/open.c:1269 [inline]
__do_sys_openat fs/open.c:1285 [inline]
__se_sys_openat fs/open.c:1280 [inline]
__arm64_sys_openat+0x1f0/0x240 fs/open.c:1280
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:608
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:626
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584
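
The two chains above reduce to an ABBA inversion on the &sbi->alloc_mutex / &HFSPLUS_I(inode)->extents_lock pair: chain #1 (the symlink write) takes alloc_mutex in hfsplus_block_allocate() and then, via readpage on the allocation file, that inode's extents_lock, while chain #0 (the truncate) takes a file's extents_lock first and alloc_mutex second in hfsplus_block_free(). Below is a minimal userspace sketch of that pattern, assuming nothing beyond POSIX threads; the mutexes and function names only stand in for the kernel locks and call paths named in the report, this is not hfsplus code.

#include <pthread.h>
#include <stdio.h>

/* Plain pthread mutexes standing in for the kernel locks of the same names. */
static pthread_mutex_t extents_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t alloc_mutex  = PTHREAD_MUTEX_INITIALIZER;

/* Mirrors chain #0: hfsplus_file_truncate() holds extents_lock, then
 * hfsplus_free_extents() -> hfsplus_block_free() wants alloc_mutex. */
static void *truncate_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&extents_lock);
	pthread_mutex_lock(&alloc_mutex);
	puts("truncate path: got extents_lock then alloc_mutex");
	pthread_mutex_unlock(&alloc_mutex);
	pthread_mutex_unlock(&extents_lock);
	return NULL;
}

/* Mirrors chain #1: hfsplus_block_allocate() holds alloc_mutex, then the
 * readpage on the allocation file takes extents_lock in hfsplus_get_block(). */
static void *alloc_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&alloc_mutex);
	pthread_mutex_lock(&extents_lock);
	puts("alloc path: got alloc_mutex then extents_lock");
	pthread_mutex_unlock(&extents_lock);
	pthread_mutex_unlock(&alloc_mutex);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	/* With unlucky interleaving each thread ends up holding one mutex and
	 * blocking on the other forever, which is the cycle lockdep reports. */
	pthread_create(&a, NULL, truncate_path, NULL);
	pthread_create(&b, NULL, alloc_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Since every hfsplus inode's extents_lock belongs to the same lock class, lockdep treats the two acquisition orders as a cycle even though chain #1 locks the allocation file's inode and chain #0 locks the file being truncated; hitting the deadlock needs both paths racing, as in the CPU0/CPU1 scenario above.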


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Jun 7, 2024, 7:36:26 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 88690811da69 Linux 6.1.92
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=12784a6a980000
kernel config: https://syzkaller.appspot.com/x/.config?x=f084fbeeff2de042
dashboard link: https://syzkaller.appspot.com/bug?extid=ffef020b8300527d0d56
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: arm64

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/9040ec940045/disk-88690811.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/dc70128611fd/vmlinux-88690811.xz
kernel image: https://storage.googleapis.com/syzbot-assets/f05abc0b618b/Image-88690811.gz.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+ffef02...@syzkaller.appspotmail.com

loop1: detected capacity change from 0 to 1024
======================================================
WARNING: possible circular locking dependency detected
6.1.92-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.1/6672 is trying to acquire lock:
ffff0000d65090f8 (&sbi->alloc_mutex){+.+.}-{3:3}, at: hfsplus_block_free+0xcc/0x4b0 fs/hfsplus/bitmap.c:182

but task is already holding lock:
ffff0000f39e3048 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_truncate+0x250/0x9b8 fs/hfsplus/extents.c:576

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}:
__mutex_lock_common+0x190/0x21a0 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x38/0x44 kernel/locking/mutex.c:799
hfsplus_get_block+0x2c4/0x1168 fs/hfsplus/extents.c:260
block_read_full_folio+0x2f4/0x98c fs/buffer.c:2271
hfsplus_read_folio+0x28/0x38 fs/hfsplus/inode.c:28
filemap_read_folio+0x14c/0x39c mm/filemap.c:2461
do_read_cache_folio+0x24c/0x544 mm/filemap.c:3598
do_read_cache_page mm/filemap.c:3640 [inline]
read_cache_page+0x6c/0x180 mm/filemap.c:3649
read_mapping_page include/linux/pagemap.h:791 [inline]
hfsplus_block_allocate+0xe0/0x818 fs/hfsplus/bitmap.c:37
hfsplus_file_extend+0x770/0x14cc fs/hfsplus/extents.c:468
hfsplus_get_block+0x398/0x1168 fs/hfsplus/extents.c:245
__block_write_begin_int+0x340/0x13b4 fs/buffer.c:1991
__block_write_begin fs/buffer.c:2041 [inline]
block_write_begin fs/buffer.c:2102 [inline]
cont_write_begin+0x5c0/0x7d8 fs/buffer.c:2456
hfsplus_write_begin+0x98/0xe4 fs/hfsplus/inode.c:52
page_symlink+0x278/0x4a4 fs/namei.c:5220
hfsplus_symlink+0xc0/0x224 fs/hfsplus/dir.c:449
vfs_symlink+0x244/0x3a8 fs/namei.c:4473
do_symlinkat+0x1bc/0x45c fs/namei.c:4502
__do_sys_symlinkat fs/namei.c:4519 [inline]
__se_sys_symlinkat fs/namei.c:4516 [inline]
__arm64_sys_symlinkat+0xa4/0xbc fs/namei.c:4516
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:585

-> #0 (&sbi->alloc_mutex){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3090 [inline]
check_prevs_add kernel/locking/lockdep.c:3209 [inline]
validate_chain kernel/locking/lockdep.c:3825 [inline]
__lock_acquire+0x3338/0x7680 kernel/locking/lockdep.c:5049
lock_acquire+0x26c/0x7cc kernel/locking/lockdep.c:5662
__mutex_lock_common+0x190/0x21a0 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x38/0x44 kernel/locking/mutex.c:799
hfsplus_block_free+0xcc/0x4b0 fs/hfsplus/bitmap.c:182
hfsplus_free_extents+0x148/0x8d4 fs/hfsplus/extents.c:363
hfsplus_file_truncate+0x698/0x9b8 fs/hfsplus/extents.c:591
hfsplus_setattr+0x18c/0x25c fs/hfsplus/inode.c:269
notify_change+0xb58/0xe1c fs/attr.c:499
do_truncate+0x1c0/0x28c fs/open.c:65
vfs_truncate+0x2c4/0x36c fs/open.c:111
do_sys_truncate+0xec/0x1b4 fs/open.c:134
__do_sys_truncate fs/open.c:146 [inline]
__se_sys_truncate fs/open.c:144 [inline]
__arm64_sys_truncate+0x5c/0x70 fs/open.c:144
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:585

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&HFSPLUS_I(inode)->extents_lock);
                               lock(&sbi->alloc_mutex);
                               lock(&HFSPLUS_I(inode)->extents_lock);
  lock(&sbi->alloc_mutex);

*** DEADLOCK ***

3 locks held by syz-executor.1/6672:
#0: ffff0000d2b5e460 (sb_writers#11){.+.+}-{0:0}, at: mnt_want_write+0x44/0x9c fs/namespace.c:393
#1: ffff0000f39e3240 (&sb->s_type->i_mutex_key#28){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:758 [inline]
#1: ffff0000f39e3240 (&sb->s_type->i_mutex_key#28){+.+.}-{3:3}, at: do_truncate+0x1ac/0x28c fs/open.c:63
#2: ffff0000f39e3048 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_truncate+0x250/0x9b8 fs/hfsplus/extents.c:576

stack backtrace:
CPU: 0 PID: 6672 Comm: syz-executor.1 Not tainted 6.1.92-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Call trace:
dump_backtrace+0x1c8/0x1f4 arch/arm64/kernel/stacktrace.c:158
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:165
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x108/0x170 lib/dump_stack.c:106
dump_stack+0x1c/0x5c lib/dump_stack.c:113
print_circular_bug+0x150/0x1b8 kernel/locking/lockdep.c:2048
check_noncircular+0x2cc/0x378 kernel/locking/lockdep.c:2170
check_prev_add kernel/locking/lockdep.c:3090 [inline]
check_prevs_add kernel/locking/lockdep.c:3209 [inline]
validate_chain kernel/locking/lockdep.c:3825 [inline]
__lock_acquire+0x3338/0x7680 kernel/locking/lockdep.c:5049
lock_acquire+0x26c/0x7cc kernel/locking/lockdep.c:5662
__mutex_lock_common+0x190/0x21a0 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x38/0x44 kernel/locking/mutex.c:799
hfsplus_block_free+0xcc/0x4b0 fs/hfsplus/bitmap.c:182
hfsplus_free_extents+0x148/0x8d4 fs/hfsplus/extents.c:363
hfsplus_file_truncate+0x698/0x9b8 fs/hfsplus/extents.c:591
hfsplus_setattr+0x18c/0x25c fs/hfsplus/inode.c:269
notify_change+0xb58/0xe1c fs/attr.c:499
do_truncate+0x1c0/0x28c fs/open.c:65
vfs_truncate+0x2c4/0x36c fs/open.c:111
do_sys_truncate+0xec/0x1b4 fs/open.c:134
__do_sys_truncate fs/open.c:146 [inline]
__se_sys_truncate fs/open.c:144 [inline]
__arm64_sys_truncate+0x5c/0x70 fs/open.c:144
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:585