[v5.15] possible deadlock in ext4_bmap

syzbot

Mar 7, 2023, 11:42:45 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: d9b4a0c83a2d Linux 5.15.98
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1566bb54c80000
kernel config: https://syzkaller.appspot.com/x/.config?x=2f8d9515b973b23b
dashboard link: https://syzkaller.appspot.com/bug?extid=5bc75b63c58b40e14b5b
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/be34e23aaa99/disk-d9b4a0c8.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/b475bbb4fab3/vmlinux-d9b4a0c8.xz
kernel image: https://storage.googleapis.com/syzbot-assets/7c1abc451dda/bzImage-d9b4a0c8.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+5bc75b...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
5.15.98-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.5/3577 is trying to acquire lock:
ffff88801ca683f0 (&sb->s_type->i_mutex_key#9){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:797 [inline]
ffff88801ca683f0 (&sb->s_type->i_mutex_key#9){++++}-{3:3}, at: ext4_bmap+0x4b/0x410 fs/ext4/inode.c:3159

but task is already holding lock:
ffff88814b0d23f8 (&journal->j_checkpoint_mutex){+.+.}-{3:3}, at: jbd2_journal_flush+0x31c/0xc90 fs/jbd2/journal.c:2474

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #3 (&journal->j_checkpoint_mutex){+.+.}-{3:3}:
lock_acquire+0x1f6/0x560 kernel/locking/lockdep.c:5622
__mutex_lock_common+0x1da/0x25a0 kernel/locking/mutex.c:596
mutex_lock_io_nested+0x45/0x60 kernel/locking/mutex.c:777
jbd2_journal_flush+0x290/0xc90 fs/jbd2/journal.c:2464
ext4_change_inode_journal_flag+0x1de/0x6e0 fs/ext4/inode.c:6039
ext4_ioctl_setflags fs/ext4/ioctl.c:447 [inline]
ext4_fileattr_set+0xe6e/0x17d0 fs/ext4/ioctl.c:762
vfs_fileattr_set+0x8ee/0xd30 fs/ioctl.c:700
ioctl_setflags fs/ioctl.c:732 [inline]
do_vfs_ioctl+0x1d85/0x2b70 fs/ioctl.c:843
__do_sys_ioctl fs/ioctl.c:872 [inline]
__se_sys_ioctl+0x81/0x160 fs/ioctl.c:860
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

-> #2 (&journal->j_barrier){+.+.}-{3:3}:
lock_acquire+0x1f6/0x560 kernel/locking/lockdep.c:5622
__mutex_lock_common+0x1da/0x25a0 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
jbd2_journal_lock_updates+0x4a9/0x580 fs/jbd2/transaction.c:895
ext4_change_inode_journal_flag+0x1a8/0x6e0 fs/ext4/inode.c:6026
ext4_ioctl_setflags fs/ext4/ioctl.c:447 [inline]
ext4_fileattr_set+0xe6e/0x17d0 fs/ext4/ioctl.c:762
vfs_fileattr_set+0x8ee/0xd30 fs/ioctl.c:700
ioctl_setflags fs/ioctl.c:732 [inline]
do_vfs_ioctl+0x1d85/0x2b70 fs/ioctl.c:843
__do_sys_ioctl fs/ioctl.c:872 [inline]
__se_sys_ioctl+0x81/0x160 fs/ioctl.c:860
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

-> #1 (&sbi->s_writepages_rwsem){++++}-{0:0}:
lock_acquire+0x1f6/0x560 kernel/locking/lockdep.c:5622
percpu_down_read+0x46/0x190 include/linux/percpu-rwsem.h:51
ext4_writepages+0x1f6/0x3eb0 fs/ext4/inode.c:2687
do_writepages+0x481/0x730 mm/page-writeback.c:2364
filemap_fdatawrite_wbc+0x1d6/0x230 mm/filemap.c:400
__filemap_fdatawrite_range mm/filemap.c:433 [inline]
filemap_write_and_wait_range+0x19e/0x280 mm/filemap.c:704
__iomap_dio_rw+0x897/0x1f40 fs/iomap/direct-io.c:557
iomap_dio_rw+0x38/0x80 fs/iomap/direct-io.c:672
ext4_dio_write_iter fs/ext4/file.c:574 [inline]
ext4_file_write_iter+0x15af/0x1990 fs/ext4/file.c:685
call_write_iter include/linux/fs.h:2101 [inline]
new_sync_write fs/read_write.c:507 [inline]
vfs_write+0xacf/0xe50 fs/read_write.c:594
ksys_write+0x1a2/0x2c0 fs/read_write.c:647
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

-> #0 (&sb->s_type->i_mutex_key#9){++++}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain+0x1646/0x58b0 kernel/locking/lockdep.c:3787
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5011
lock_acquire+0x1f6/0x560 kernel/locking/lockdep.c:5622
down_read+0x3b/0x50 kernel/locking/rwsem.c:1480
inode_lock_shared include/linux/fs.h:797 [inline]
ext4_bmap+0x4b/0x410 fs/ext4/inode.c:3159
bmap+0xa1/0xd0 fs/inode.c:1714
jbd2_journal_bmap fs/jbd2/journal.c:978 [inline]
__jbd2_journal_erase fs/jbd2/journal.c:1786 [inline]
jbd2_journal_flush+0x7a2/0xc90 fs/jbd2/journal.c:2492
ext4_ioctl_checkpoint fs/ext4/ioctl.c:847 [inline]
__ext4_ioctl fs/ext4/ioctl.c:1265 [inline]
ext4_ioctl+0x335b/0x5db0 fs/ext4/ioctl.c:1277
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:874 [inline]
__se_sys_ioctl+0xf1/0x160 fs/ioctl.c:860
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

other info that might help us debug this:

Chain exists of:
&sb->s_type->i_mutex_key#9 --> &journal->j_barrier --> &journal->j_checkpoint_mutex

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&journal->j_checkpoint_mutex);
                               lock(&journal->j_barrier);
                               lock(&journal->j_checkpoint_mutex);
  lock(&sb->s_type->i_mutex_key#9);

*** DEADLOCK ***

2 locks held by syz-executor.5/3577:
#0: ffff88814b0d2170 (&journal->j_barrier){+.+.}-{3:3}, at: jbd2_journal_lock_updates+0x4a9/0x580 fs/jbd2/transaction.c:895
#1: ffff88814b0d23f8 (&journal->j_checkpoint_mutex){+.+.}-{3:3}, at: jbd2_journal_flush+0x31c/0xc90 fs/jbd2/journal.c:2474

stack backtrace:
CPU: 0 PID: 3577 Comm: syz-executor.5 Not tainted 5.15.98-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
check_noncircular+0x2f8/0x3b0 kernel/locking/lockdep.c:2133
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain+0x1646/0x58b0 kernel/locking/lockdep.c:3787
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5011
lock_acquire+0x1f6/0x560 kernel/locking/lockdep.c:5622
down_read+0x3b/0x50 kernel/locking/rwsem.c:1480
inode_lock_shared include/linux/fs.h:797 [inline]
ext4_bmap+0x4b/0x410 fs/ext4/inode.c:3159
bmap+0xa1/0xd0 fs/inode.c:1714
jbd2_journal_bmap fs/jbd2/journal.c:978 [inline]
__jbd2_journal_erase fs/jbd2/journal.c:1786 [inline]
jbd2_journal_flush+0x7a2/0xc90 fs/jbd2/journal.c:2492
ext4_ioctl_checkpoint fs/ext4/ioctl.c:847 [inline]
__ext4_ioctl fs/ext4/ioctl.c:1265 [inline]
ext4_ioctl+0x335b/0x5db0 fs/ext4/ioctl.c:1277
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:874 [inline]
__se_sys_ioctl+0xf1/0x160 fs/ioctl.c:860
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7fdb587580f9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fdb56ca9168 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007fdb58878050 RCX: 00007fdb587580f9
RDX: 00000000200005c0 RSI: 000000004004662b RDI: 0000000000000006
RBP: 00007fdb587b3ae9 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffe8c2e8a2f R14: 00007fdb56ca9300 R15: 0000000000022000
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
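
To summarize the cycle lockdep describes above: the EXT4_IOC_CHECKPOINT path ends up wanting an inode's i_rwsem shared (via __jbd2_journal_erase() -> jbd2_journal_bmap() -> bmap() -> ext4_bmap()) while it already holds journal->j_barrier and journal->j_checkpoint_mutex, whereas the earlier chain entries show the same journal locks being taken on the FS_IOC_SETFLAGS / journal-mode-toggle path, which runs with an i_rwsem of the same lock class already held. The following is only an abstract user-space model of that ordering, not kernel code; lockdep tracks lock classes, so the two i_rwsem acquisitions need not be on the same inode, and the model collapses them into one lock.

/*
 * Abstract model of the lock cycle in the report above.
 * Lock names mirror the kernel objects; the ordering is taken from the
 * lockdep chain, everything else is illustrative.
 *
 * Build: cc -o lockcycle lockcycle.c -lpthread
 * Running it normally wedges both threads against each other, which is
 * exactly the "Possible unsafe locking scenario" box above.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t i_rwsem = PTHREAD_RWLOCK_INITIALIZER;          /* inode->i_rwsem */
static pthread_mutex_t j_barrier = PTHREAD_MUTEX_INITIALIZER;          /* journal->j_barrier */
static pthread_mutex_t j_checkpoint_mutex = PTHREAD_MUTEX_INITIALIZER; /* journal->j_checkpoint_mutex */

/* EXT4_IOC_CHECKPOINT path: jbd2_journal_flush() -> __jbd2_journal_erase() -> ext4_bmap() */
static void *checkpoint_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&j_barrier);           /* jbd2_journal_lock_updates() */
	pthread_mutex_lock(&j_checkpoint_mutex);  /* jbd2_journal_flush() */
	sleep(1);                                 /* widen the race window */
	pthread_rwlock_rdlock(&i_rwsem);          /* ext4_bmap(): inode_lock_shared() */
	puts("checkpoint path completed");
	pthread_rwlock_unlock(&i_rwsem);
	pthread_mutex_unlock(&j_checkpoint_mutex);
	pthread_mutex_unlock(&j_barrier);
	return NULL;
}

/* FS_IOC_SETFLAGS path: the VFS holds i_rwsem, then ext4_change_inode_journal_flag() flushes the journal */
static void *setflags_path(void *arg)
{
	(void)arg;
	pthread_rwlock_wrlock(&i_rwsem);          /* inode_lock() around the ioctl */
	sleep(1);
	pthread_mutex_lock(&j_barrier);           /* jbd2_journal_lock_updates() */
	pthread_mutex_lock(&j_checkpoint_mutex);  /* jbd2_journal_flush() */
	puts("setflags path completed");
	pthread_mutex_unlock(&j_checkpoint_mutex);
	pthread_mutex_unlock(&j_barrier);
	pthread_rwlock_unlock(&i_rwsem);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, checkpoint_path, NULL);
	pthread_create(&b, NULL, setflags_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}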

syzbot

Mar 7, 2023, 12:07:47 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 42616e0f09fb Linux 6.1.15
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=17babf4cc80000
kernel config: https://syzkaller.appspot.com/x/.config?x=690b9ff41783cd73
dashboard link: https://syzkaller.appspot.com/bug?extid=1e10f17d0fa3e43ce77c
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/7aca0b4cb788/disk-42616e0f.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/458e15d5fc53/vmlinux-42616e0f.xz
kernel image: https://storage.googleapis.com/syzbot-assets/7d3a81dd294e/bzImage-42616e0f.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+1e10f1...@syzkaller.appspotmail.com

Scheduler tracepoints stat_sleep, stat_iowait, stat_blocked and stat_runtime require the kernel parameter schedstats=enable or kernel.sched_schedstats=1
======================================================
WARNING: possible circular locking dependency detected
6.1.15-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.5/11629 is trying to acquire lock:
ffff888140ee0400 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:766 [inline]
ffff888140ee0400 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: ext4_bmap+0x4b/0x410 fs/ext4/inode.c:3171

but task is already holding lock:
ffff8880295f23f8 (&journal->j_checkpoint_mutex){+.+.}-{3:3}, at: jbd2_journal_flush+0x323/0xc40 fs/jbd2/journal.c:2474

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&journal->j_checkpoint_mutex){+.+.}-{3:3}:
lock_acquire+0x231/0x620 kernel/locking/lockdep.c:5668
__mutex_lock_common+0x1d4/0x2520 kernel/locking/mutex.c:603
mutex_lock_io_nested+0x43/0x60 kernel/locking/mutex.c:833
__jbd2_log_wait_for_space+0x213/0x760 fs/jbd2/checkpoint.c:110
add_transaction_credits+0x94c/0xc00 fs/jbd2/transaction.c:298
start_this_handle+0x747/0x1640 fs/jbd2/transaction.c:422
jbd2__journal_start+0x2d1/0x5c0 fs/jbd2/transaction.c:520
__ext4_journal_start_sb+0x206/0x4e0 fs/ext4/ext4_jbd2.c:105
__ext4_journal_start fs/ext4/ext4_jbd2.h:326 [inline]
ext4_dirty_inode+0x8b/0x100 fs/ext4/inode.c:6037
__mark_inode_dirty+0x3d9/0x1220 fs/fs-writeback.c:2408
mark_inode_dirty include/linux/fs.h:2481 [inline]
generic_write_end+0x180/0x1d0 fs/buffer.c:2184
ext4_da_write_end+0x836/0xba0 fs/ext4/inode.c:3103
generic_perform_write+0x3e9/0x5e0 mm/filemap.c:3765
ext4_buffered_write_iter+0x122/0x3a0 fs/ext4/file.c:285
ext4_file_write_iter+0x1d2/0x18f0
call_write_iter include/linux/fs.h:2205 [inline]
new_sync_write fs/read_write.c:491 [inline]
vfs_write+0x7ae/0xba0 fs/read_write.c:584
ksys_write+0x19c/0x2c0 fs/read_write.c:637
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&sb->s_type->i_mutex_key#8){++++}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3097 [inline]
check_prevs_add kernel/locking/lockdep.c:3216 [inline]
validate_chain+0x1667/0x58e0 kernel/locking/lockdep.c:3831
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5055
lock_acquire+0x231/0x620 kernel/locking/lockdep.c:5668
down_read+0x39/0x50 kernel/locking/rwsem.c:1509
inode_lock_shared include/linux/fs.h:766 [inline]
ext4_bmap+0x4b/0x410 fs/ext4/inode.c:3171
bmap+0xa1/0xd0 fs/inode.c:1798
jbd2_journal_bmap fs/jbd2/journal.c:977 [inline]
__jbd2_journal_erase fs/jbd2/journal.c:1789 [inline]
jbd2_journal_flush+0x5b5/0xc40 fs/jbd2/journal.c:2492
ext4_ioctl_checkpoint fs/ext4/ioctl.c:1082 [inline]
__ext4_ioctl fs/ext4/ioctl.c:1590 [inline]
ext4_ioctl+0x3a7d/0x61c0 fs/ext4/ioctl.c:1610
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl+0xf1/0x160 fs/ioctl.c:856
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&journal->j_checkpoint_mutex);
                               lock(&sb->s_type->i_mutex_key#8);
                               lock(&journal->j_checkpoint_mutex);
  lock(&sb->s_type->i_mutex_key#8);

*** DEADLOCK ***

2 locks held by syz-executor.5/11629:
#0: ffff8880295f2170 (&journal->j_barrier){+.+.}-{3:3}, at: jbd2_journal_lock_updates+0x2a8/0x370 fs/jbd2/transaction.c:904
#1: ffff8880295f23f8 (&journal->j_checkpoint_mutex){+.+.}-{3:3}, at: jbd2_journal_flush+0x323/0xc40 fs/jbd2/journal.c:2474

stack backtrace:
CPU: 0 PID: 11629 Comm: syz-executor.5 Not tainted 6.1.15-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
check_noncircular+0x2fa/0x3b0 kernel/locking/lockdep.c:2177
check_prev_add kernel/locking/lockdep.c:3097 [inline]
check_prevs_add kernel/locking/lockdep.c:3216 [inline]
validate_chain+0x1667/0x58e0 kernel/locking/lockdep.c:3831
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5055
lock_acquire+0x231/0x620 kernel/locking/lockdep.c:5668
down_read+0x39/0x50 kernel/locking/rwsem.c:1509
inode_lock_shared include/linux/fs.h:766 [inline]
ext4_bmap+0x4b/0x410 fs/ext4/inode.c:3171
bmap+0xa1/0xd0 fs/inode.c:1798
jbd2_journal_bmap fs/jbd2/journal.c:977 [inline]
__jbd2_journal_erase fs/jbd2/journal.c:1789 [inline]
jbd2_journal_flush+0x5b5/0xc40 fs/jbd2/journal.c:2492
ext4_ioctl_checkpoint fs/ext4/ioctl.c:1082 [inline]
__ext4_ioctl fs/ext4/ioctl.c:1590 [inline]
ext4_ioctl+0x3a7d/0x61c0 fs/ext4/ioctl.c:1610
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl+0xf1/0x160 fs/ioctl.c:856
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f3f0408c0f9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f3f04e8a168 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f3f041abf80 RCX: 00007f3f0408c0f9
RDX: 00000000200005c0 RSI: 000000004004662b RDI: 0000000000000005
RBP: 00007f3f040e7ae9 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffc4eb5c6df R14: 00007f3f04e8a300 R15: 0000000000022000

syzbot

Mar 7, 2023, 8:37:41 PM
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: d9b4a0c83a2d Linux 5.15.98
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=10a8ca92c80000
kernel config: https://syzkaller.appspot.com/x/.config?x=b57cfa804330c3b7
dashboard link: https://syzkaller.appspot.com/bug?extid=5bc75b63c58b40e14b5b
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=15a65dbcc80000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=11650dbcc80000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/8088989394e3/disk-d9b4a0c8.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/2651d6753959/vmlinux-d9b4a0c8.xz
kernel image: https://storage.googleapis.com/syzbot-assets/f3fa3f994f9a/Image-d9b4a0c8.gz.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+5bc75b...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
5.15.98-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor189/4053 is trying to acquire lock:
ffff0000ccb403f0 (&sb->s_type->i_mutex_key#9){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:797 [inline]
ffff0000ccb403f0 (&sb->s_type->i_mutex_key#9){++++}-{3:3}, at: ext4_bmap+0x58/0x36c fs/ext4/inode.c:3159

but task is already holding lock:
ffff0000d3b903f8 (&journal->j_checkpoint_mutex){+.+.}-{3:3}, at: jbd2_journal_flush+0x28c/0xaa0 fs/jbd2/journal.c:2474

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #3 (&journal->j_checkpoint_mutex){+.+.}-{3:3}:
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
mutex_lock_io_nested+0xcc/0x12c kernel/locking/mutex.c:777
jbd2_journal_flush+0x210/0xaa0 fs/jbd2/journal.c:2464
ext4_ioctl_checkpoint fs/ext4/ioctl.c:847 [inline]
__ext4_ioctl fs/ext4/ioctl.c:1265 [inline]
ext4_ioctl+0x3448/0x675c fs/ext4/ioctl.c:1277
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:874 [inline]
__se_sys_ioctl fs/ioctl.c:860 [inline]
__arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:860
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 <unknown>:584

-> #2 (&journal->j_barrier){+.+.}-{3:3}:
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xa4/0xf8 kernel/locking/mutex.c:743
jbd2_journal_lock_updates+0x3f0/0x4b4 fs/jbd2/transaction.c:895
ext4_change_inode_journal_flag+0x15c/0x648 fs/ext4/inode.c:6026
ext4_ioctl_setflags fs/ext4/ioctl.c:447 [inline]
ext4_fileattr_set+0xb7c/0x12e0 fs/ext4/ioctl.c:762
vfs_fileattr_set+0x708/0xad0 fs/ioctl.c:700
do_vfs_ioctl+0x1634/0x2a38
__do_sys_ioctl fs/ioctl.c:872 [inline]
__se_sys_ioctl fs/ioctl.c:860 [inline]
__arm64_sys_ioctl+0xe4/0x1c8 fs/ioctl.c:860
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 <unknown>:584

-> #1 (&sbi->s_writepages_rwsem){++++}-{0:0}:
percpu_down_write+0xd8/0x3b0 kernel/locking/percpu-rwsem.c:217
ext4_ind_migrate+0x170/0x58c fs/ext4/migrate.c:625
ext4_ioctl_setflags fs/ext4/ioctl.c:456 [inline]
ext4_fileattr_set+0xbf0/0x12e0 fs/ext4/ioctl.c:762
vfs_fileattr_set+0x708/0xad0 fs/ioctl.c:700
do_vfs_ioctl+0x1634/0x2a38
__do_sys_ioctl fs/ioctl.c:872 [inline]
__se_sys_ioctl fs/ioctl.c:860 [inline]
__arm64_sys_ioctl+0xe4/0x1c8 fs/ioctl.c:860
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 <unknown>:584

-> #0 (&sb->s_type->i_mutex_key#9){++++}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3787 [inline]
__lock_acquire+0x32cc/0x7620 kernel/locking/lockdep.c:5011
lock_acquire+0x2b8/0x894 kernel/locking/lockdep.c:5622
down_read+0xbc/0x11c kernel/locking/rwsem.c:1480
inode_lock_shared include/linux/fs.h:797 [inline]
ext4_bmap+0x58/0x36c fs/ext4/inode.c:3159
bmap+0xa8/0xe8 fs/inode.c:1714
jbd2_journal_bmap fs/jbd2/journal.c:978 [inline]
__jbd2_journal_erase fs/jbd2/journal.c:1786 [inline]
jbd2_journal_flush+0x63c/0xaa0 fs/jbd2/journal.c:2492
ext4_ioctl_checkpoint fs/ext4/ioctl.c:847 [inline]
__ext4_ioctl fs/ext4/ioctl.c:1265 [inline]
ext4_ioctl+0x3448/0x675c fs/ext4/ioctl.c:1277
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:874 [inline]
__se_sys_ioctl fs/ioctl.c:860 [inline]
__arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:860
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 <unknown>:584

other info that might help us debug this:

Chain exists of:
&sb->s_type->i_mutex_key#9 --> &journal->j_barrier --> &journal->j_checkpoint_mutex

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&journal->j_checkpoint_mutex);
                               lock(&journal->j_barrier);
                               lock(&journal->j_checkpoint_mutex);
  lock(&sb->s_type->i_mutex_key#9);

*** DEADLOCK ***

2 locks held by syz-executor189/4053:
#0: ffff0000d3b90170 (&journal->j_barrier){+.+.}-{3:3}, at: jbd2_journal_lock_updates+0x3f0/0x4b4 fs/jbd2/transaction.c:895
#1: ffff0000d3b903f8 (&journal->j_checkpoint_mutex){+.+.}-{3:3}, at: jbd2_journal_flush+0x28c/0xaa0 fs/jbd2/journal.c:2474

stack backtrace:
CPU: 0 PID: 4053 Comm: syz-executor189 Not tainted 5.15.98-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
Call trace:
dump_backtrace+0x0/0x530 arch/arm64/kernel/stacktrace.c:152
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:216
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x108/0x170 lib/dump_stack.c:106
dump_stack+0x1c/0x58 lib/dump_stack.c:113
print_circular_bug+0x150/0x1b8 kernel/locking/lockdep.c:2011
check_noncircular+0x2cc/0x378 kernel/locking/lockdep.c:2133
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3787 [inline]
__lock_acquire+0x32cc/0x7620 kernel/locking/lockdep.c:5011
lock_acquire+0x2b8/0x894 kernel/locking/lockdep.c:5622
down_read+0xbc/0x11c kernel/locking/rwsem.c:1480
inode_lock_shared include/linux/fs.h:797 [inline]
ext4_bmap+0x58/0x36c fs/ext4/inode.c:3159
bmap+0xa8/0xe8 fs/inode.c:1714
jbd2_journal_bmap fs/jbd2/journal.c:978 [inline]
__jbd2_journal_erase fs/jbd2/journal.c:1786 [inline]
jbd2_journal_flush+0x63c/0xaa0 fs/jbd2/journal.c:2492
ext4_ioctl_checkpoint fs/ext4/ioctl.c:847 [inline]
__ext4_ioctl fs/ext4/ioctl.c:1265 [inline]
ext4_ioctl+0x3448/0x675c fs/ext4/ioctl.c:1277
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:874 [inline]
__se_sys_ioctl fs/ioctl.c:860 [inline]
__arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:860
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 <unknown>:584
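
The reproducer ties the traces to a concrete pair of operations: an EXT4_IOC_CHECKPOINT ioctl whose __jbd2_journal_erase() frames indicate a discard/zeroout flag, racing against an FS_IOC_SETFLAGS ioctl that toggles the journalled-data flag and therefore goes through ext4_change_inode_journal_flag(). The sketch below is a rough illustration of those two calls and is not the syzbot reproducer (see the repro links above for the real one); it needs root and an already-mounted scratch ext4 filesystem, error handling is omitted, and the EXT4_IOC_CHECKPOINT definitions are hand-copied from the kernel sources (the ioctl number matches the 0x4004662b seen in the x86 register dumps earlier in this thread).

/*
 * Rough illustration only; build with: cc -o trigger trigger.c -lpthread
 * Usage: ./trigger /path/to/ext4/mountpoint   (as root)
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/fs.h>      /* FS_IOC_GETFLAGS, FS_IOC_SETFLAGS, FS_JOURNAL_DATA_FL */
#include <linux/types.h>

/* Assumed values, hand-copied from fs/ext4/ext4.h. */
#define EXT4_IOC_CHECKPOINT              _IOW('f', 43, __u32)
#define EXT4_IOC_CHECKPOINT_FLAG_ZEROOUT 0x2

static const char *mnt;

/* Repeatedly checkpoint the journal with the zeroout flag, which takes
 * j_barrier/j_checkpoint_mutex and then walks the journal via bmap(). */
static void *checkpoint_thread(void *arg)
{
	__u32 flags = EXT4_IOC_CHECKPOINT_FLAG_ZEROOUT;
	int fd = open(mnt, O_RDONLY);

	(void)arg;
	for (;;)
		ioctl(fd, EXT4_IOC_CHECKPOINT, &flags);
	return NULL;
}

/* Repeatedly toggle the journalled-data flag on a file, which holds the
 * file's i_rwsem while flushing the journal. */
static void *setflags_thread(void *arg)
{
	char path[4096];
	int fd, flags;

	(void)arg;
	snprintf(path, sizeof(path), "%s/file0", mnt);
	fd = open(path, O_RDWR | O_CREAT, 0600);

	for (;;) {
		ioctl(fd, FS_IOC_GETFLAGS, &flags);
		flags ^= FS_JOURNAL_DATA_FL;
		ioctl(fd, FS_IOC_SETFLAGS, &flags);
	}
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t a, b;

	mnt = argc > 1 ? argv[1] : "/mnt/scratch";
	pthread_create(&a, NULL, checkpoint_thread, NULL);
	pthread_create(&b, NULL, setflags_thread, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}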

syzbot

Mar 9, 2023, 5:33:41 PM
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: 42616e0f09fb Linux 6.1.15
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1117f592c80000
kernel config: https://syzkaller.appspot.com/x/.config?x=650737f7e9682672
dashboard link: https://syzkaller.appspot.com/bug?extid=1e10f17d0fa3e43ce77c
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=10ddc4dac80000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=13cba40cc80000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/f10713d1fd0f/disk-42616e0f.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/5a1307bb774e/vmlinux-42616e0f.xz
kernel image: https://storage.googleapis.com/syzbot-assets/388238a30fe4/Image-42616e0f.gz.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+1e10f1...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.1.15-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor142/4314 is trying to acquire lock:
ffff0000c0548400 (&sb->s_type->i_mutex_key#9){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:766 [inline]
ffff0000c0548400 (&sb->s_type->i_mutex_key#9){++++}-{3:3}, at: ext4_bmap+0x58/0x35c fs/ext4/inode.c:3171

but task is already holding lock:
ffff0000d5e1c3f8 (&journal->j_checkpoint_mutex){+.+.}-{3:3}, at: jbd2_journal_flush+0x28c/0xa60 fs/jbd2/journal.c:2474

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #3 (&journal->j_checkpoint_mutex){+.+.}-{3:3}:
__mutex_lock_common+0x190/0x21a0 kernel/locking/mutex.c:603
mutex_lock_io_nested+0x6c/0x88 kernel/locking/mutex.c:833
jbd2_journal_flush+0x210/0xa60 fs/jbd2/journal.c:2464
ext4_ioctl_checkpoint fs/ext4/ioctl.c:1082 [inline]
__ext4_ioctl fs/ext4/ioctl.c:1590 [inline]
ext4_ioctl+0x38b8/0x6ef0 fs/ext4/ioctl.c:1610
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl fs/ioctl.c:856 [inline]
__arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:856
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581

-> #2 (&journal->j_barrier){+.+.}-{3:3}:
__mutex_lock_common+0x190/0x21a0 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x38/0x44 kernel/locking/mutex.c:799
jbd2_journal_lock_updates+0x260/0x324 fs/jbd2/transaction.c:904
ext4_change_inode_journal_flag+0x15c/0x614 fs/ext4/inode.c:6088
ext4_ioctl_setflags fs/ext4/ioctl.c:687 [inline]
ext4_fileattr_set+0xb58/0x12c8 fs/ext4/ioctl.c:1004
vfs_fileattr_set+0x708/0xad0 fs/ioctl.c:696
do_vfs_ioctl+0x14cc/0x26f8
__do_sys_ioctl fs/ioctl.c:868 [inline]
__se_sys_ioctl fs/ioctl.c:856 [inline]
__arm64_sys_ioctl+0xe4/0x1c8 fs/ioctl.c:856
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581

-> #1 (&sbi->s_writepages_rwsem){++++}-{0:0}:
percpu_down_write+0x78/0x320 kernel/locking/percpu-rwsem.c:227
ext4_ind_migrate+0x174/0x6e0 fs/ext4/migrate.c:624
ext4_ioctl_setflags fs/ext4/ioctl.c:696 [inline]
ext4_fileattr_set+0xbcc/0x12c8 fs/ext4/ioctl.c:1004
vfs_fileattr_set+0x708/0xad0 fs/ioctl.c:696
do_vfs_ioctl+0x14cc/0x26f8
__do_sys_ioctl fs/ioctl.c:868 [inline]
__se_sys_ioctl fs/ioctl.c:856 [inline]
__arm64_sys_ioctl+0xe4/0x1c8 fs/ioctl.c:856
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581

-> #0 (&sb->s_type->i_mutex_key#9){++++}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3097 [inline]
check_prevs_add kernel/locking/lockdep.c:3216 [inline]
validate_chain kernel/locking/lockdep.c:3831 [inline]
__lock_acquire+0x3338/0x764c kernel/locking/lockdep.c:5055
lock_acquire+0x2f8/0x8dc kernel/locking/lockdep.c:5668
down_read+0x5c/0x78 kernel/locking/rwsem.c:1509
inode_lock_shared include/linux/fs.h:766 [inline]
ext4_bmap+0x58/0x35c fs/ext4/inode.c:3171
bmap+0xa8/0xe8 fs/inode.c:1798
jbd2_journal_bmap fs/jbd2/journal.c:977 [inline]
__jbd2_journal_erase fs/jbd2/journal.c:1789 [inline]
jbd2_journal_flush+0x4c0/0xa60 fs/jbd2/journal.c:2492
ext4_ioctl_checkpoint fs/ext4/ioctl.c:1082 [inline]
__ext4_ioctl fs/ext4/ioctl.c:1590 [inline]
ext4_ioctl+0x38b8/0x6ef0 fs/ext4/ioctl.c:1610
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl fs/ioctl.c:856 [inline]
__arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:856
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581

other info that might help us debug this:

Chain exists of:
&sb->s_type->i_mutex_key#9 --> &journal->j_barrier --> &journal->j_checkpoint_mutex

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&journal->j_checkpoint_mutex);
                               lock(&journal->j_barrier);
                               lock(&journal->j_checkpoint_mutex);
  lock(&sb->s_type->i_mutex_key#9);

*** DEADLOCK ***

2 locks held by syz-executor142/4314:
#0: ffff0000d5e1c170 (&journal->j_barrier){+.+.}-{3:3}, at: jbd2_journal_lock_updates+0x260/0x324 fs/jbd2/transaction.c:904
#1: ffff0000d5e1c3f8 (&journal->j_checkpoint_mutex){+.+.}-{3:3}, at: jbd2_journal_flush+0x28c/0xa60 fs/jbd2/journal.c:2474

stack backtrace:
CPU: 0 PID: 4314 Comm: syz-executor142 Not tainted 6.1.15-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
Call trace:
dump_backtrace+0x1c8/0x1f4 arch/arm64/kernel/stacktrace.c:158
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:165
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x108/0x170 lib/dump_stack.c:106
dump_stack+0x1c/0x58 lib/dump_stack.c:113
print_circular_bug+0x150/0x1b8 kernel/locking/lockdep.c:2055
check_noncircular+0x2cc/0x378 kernel/locking/lockdep.c:2177
check_prev_add kernel/locking/lockdep.c:3097 [inline]
check_prevs_add kernel/locking/lockdep.c:3216 [inline]
validate_chain kernel/locking/lockdep.c:3831 [inline]
__lock_acquire+0x3338/0x764c kernel/locking/lockdep.c:5055
lock_acquire+0x2f8/0x8dc kernel/locking/lockdep.c:5668
down_read+0x5c/0x78 kernel/locking/rwsem.c:1509
inode_lock_shared include/linux/fs.h:766 [inline]
ext4_bmap+0x58/0x35c fs/ext4/inode.c:3171
bmap+0xa8/0xe8 fs/inode.c:1798
jbd2_journal_bmap fs/jbd2/journal.c:977 [inline]
__jbd2_journal_erase fs/jbd2/journal.c:1789 [inline]
jbd2_journal_flush+0x4c0/0xa60 fs/jbd2/journal.c:2492
ext4_ioctl_checkpoint fs/ext4/ioctl.c:1082 [inline]
__ext4_ioctl fs/ext4/ioctl.c:1590 [inline]
ext4_ioctl+0x38b8/0x6ef0 fs/ext4/ioctl.c:1610
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl fs/ioctl.c:856 [inline]
__arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:856
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581

syzbot

Sep 15, 2023, 6:04:39 AM
to syzkaller...@googlegroups.com
syzbot suspects this issue could be fixed by backporting the following commit:

commit 62913ae96de747091c4dacd06d158e7729c1a76d
git tree: upstream
Author: Theodore Ts'o <ty...@mit.edu>
Date: Wed Mar 8 04:15:49 2023 +0000

ext4, jbd2: add an optimized bmap for the journal inode

bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=11ef42f8680000
kernel config: https://syzkaller.appspot.com/x/.config?x=ac04a15f4a80e9d0
dashboard link: https://syzkaller.appspot.com/bug?extid=1e10f17d0fa3e43ce77c
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=129b89c6c80000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=17bedaecc80000


Please keep in mind that other backports might be required as well.

For information about bisection process see: https://goo.gl/tpsmEJ#bisection
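
The commit title points at the shape of the fix: the journal inode's block mapping never changes while the filesystem is mounted, so ext4 can hand jbd2 a dedicated bmap helper that translates journal blocks with ext4_map_blocks() directly instead of going through the generic bmap() -> ext4_bmap() path, and therefore without taking i_rwsem; that removes the j_checkpoint_mutex -> i_rwsem edge which closes the cycle above. The fragment below is only a sketch of that idea based on the commit title, not the actual patch; the hook name and signature are assumptions.

/*
 * Sketch only (would live on the ext4 side, next to the other journal
 * setup code): map one journal block without inode_lock_shared().
 * The filesystem-supplied bmap hook and its signature are assumptions.
 */
static int ext4_journal_bmap(journal_t *journal, sector_t *block)
{
	struct ext4_map_blocks map;
	int ret;

	if (journal->j_inode == NULL)
		return 0;

	map.m_lblk = *block;
	map.m_len = 1;
	ret = ext4_map_blocks(NULL, journal->j_inode, &map, 0);
	if (ret <= 0)
		return ret ? ret : -EFSCORRUPTED;

	*block = map.m_pblk;
	return 0;
}

/*
 * jbd2_journal_bmap() would then prefer such a filesystem-supplied hook
 * when one is registered, falling back to bmap(journal->j_inode, ...)
 * otherwise.
 */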