[v5.15] possible deadlock in __jbd2_log_wait_for_space


syzbot

Mar 7, 2023, 12:44:49 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: d9b4a0c83a2d Linux 5.15.98
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=135e4754c80000
kernel config: https://syzkaller.appspot.com/x/.config?x=2f8d9515b973b23b
dashboard link: https://syzkaller.appspot.com/bug?extid=385d1b32404207ed55d6
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/037cabbd3313/disk-d9b4a0c8.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/9967e551eb34/vmlinux-d9b4a0c8.xz
kernel image: https://storage.googleapis.com/syzbot-assets/a050c7a4fd99/bzImage-d9b4a0c8.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+385d1b...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
5.15.98-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.4/28805 is trying to acquire lock:
ffff88814aab03f8 (&journal->j_checkpoint_mutex){+.+.}-{3:3}, at: __jbd2_log_wait_for_space+0x213/0x760 fs/jbd2/checkpoint.c:110

but task is already holding lock:
ffff888026d967a0 (&sb->s_type->i_mutex_key#9){++++}-{3:3}, at: inode_lock include/linux/fs.h:787 [inline]
ffff888026d967a0 (&sb->s_type->i_mutex_key#9){++++}-{3:3}, at: ext4_dio_write_iter fs/ext4/file.c:510 [inline]
ffff888026d967a0 (&sb->s_type->i_mutex_key#9){++++}-{3:3}, at: ext4_file_write_iter+0x5c4/0x1990 fs/ext4/file.c:685

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&sb->s_type->i_mutex_key#9){++++}-{3:3}:
lock_acquire+0x1f6/0x560 kernel/locking/lockdep.c:5622
down_read+0x3b/0x50 kernel/locking/rwsem.c:1480
inode_lock_shared include/linux/fs.h:797 [inline]
ext4_bmap+0x4b/0x410 fs/ext4/inode.c:3159
bmap+0xa1/0xd0 fs/inode.c:1714
jbd2_journal_bmap fs/jbd2/journal.c:978 [inline]
__jbd2_journal_erase fs/jbd2/journal.c:1786 [inline]
jbd2_journal_flush+0x7a2/0xc90 fs/jbd2/journal.c:2492
ext4_ioctl_checkpoint fs/ext4/ioctl.c:847 [inline]
__ext4_ioctl fs/ext4/ioctl.c:1265 [inline]
ext4_ioctl+0x335b/0x5db0 fs/ext4/ioctl.c:1277
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:874 [inline]
__se_sys_ioctl+0xf1/0x160 fs/ioctl.c:860
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

-> #0 (&journal->j_checkpoint_mutex){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain+0x1646/0x58b0 kernel/locking/lockdep.c:3787
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5011
lock_acquire+0x1f6/0x560 kernel/locking/lockdep.c:5622
__mutex_lock_common+0x1da/0x25a0 kernel/locking/mutex.c:596
mutex_lock_io_nested+0x45/0x60 kernel/locking/mutex.c:777
__jbd2_log_wait_for_space+0x213/0x760 fs/jbd2/checkpoint.c:110
add_transaction_credits+0x950/0xc00 fs/jbd2/transaction.c:303
start_this_handle+0x747/0x1590 fs/jbd2/transaction.c:427
jbd2__journal_start+0x2d1/0x5c0 fs/jbd2/transaction.c:525
__ext4_journal_start_sb+0x1eb/0x440 fs/ext4/ext4_jbd2.c:105
__ext4_journal_start fs/ext4/ext4_jbd2.h:326 [inline]
ext4_handle_inode_extension+0x1a7/0x8b0 fs/ext4/file.c:325
ext4_dio_write_iter fs/ext4/file.c:581 [inline]
ext4_file_write_iter+0x15ed/0x1990 fs/ext4/file.c:685
do_iter_readv_writev+0x594/0x7a0
do_iter_write+0x1ea/0x760 fs/read_write.c:855
iter_file_splice_write+0x806/0xfa0 fs/splice.c:689
do_splice_from fs/splice.c:767 [inline]
direct_splice_actor+0xe3/0x1c0 fs/splice.c:936
splice_direct_to_actor+0x500/0xc10 fs/splice.c:891
do_splice_direct+0x285/0x3d0 fs/splice.c:979
do_sendfile+0x625/0xff0 fs/read_write.c:1249
__do_sys_sendfile64 fs/read_write.c:1317 [inline]
__se_sys_sendfile64+0x178/0x1e0 fs/read_write.c:1303
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

other info that might help us debug this:

Possible unsafe locking scenario:

CPU0 CPU1
---- ----
lock(&sb->s_type->i_mutex_key#9);
lock(&journal->j_checkpoint_mutex);
lock(&sb->s_type->i_mutex_key#9);
lock(&journal->j_checkpoint_mutex);

*** DEADLOCK ***

2 locks held by syz-executor.4/28805:
#0: ffff88814aaac460 (sb_writers#5){.+.+}-{0:0}, at: do_sendfile+0x600/0xff0 fs/read_write.c:1248
#1: ffff888026d967a0 (&sb->s_type->i_mutex_key#9){++++}-{3:3}, at: inode_lock include/linux/fs.h:787 [inline]
#1: ffff888026d967a0 (&sb->s_type->i_mutex_key#9){++++}-{3:3}, at: ext4_dio_write_iter fs/ext4/file.c:510 [inline]
#1: ffff888026d967a0 (&sb->s_type->i_mutex_key#9){++++}-{3:3}, at: ext4_file_write_iter+0x5c4/0x1990 fs/ext4/file.c:685

stack backtrace:
CPU: 0 PID: 28805 Comm: syz-executor.4 Not tainted 5.15.98-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
check_noncircular+0x2f8/0x3b0 kernel/locking/lockdep.c:2133
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain+0x1646/0x58b0 kernel/locking/lockdep.c:3787
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5011
lock_acquire+0x1f6/0x560 kernel/locking/lockdep.c:5622
__mutex_lock_common+0x1da/0x25a0 kernel/locking/mutex.c:596
mutex_lock_io_nested+0x45/0x60 kernel/locking/mutex.c:777
__jbd2_log_wait_for_space+0x213/0x760 fs/jbd2/checkpoint.c:110
add_transaction_credits+0x950/0xc00 fs/jbd2/transaction.c:303
start_this_handle+0x747/0x1590 fs/jbd2/transaction.c:427
jbd2__journal_start+0x2d1/0x5c0 fs/jbd2/transaction.c:525
__ext4_journal_start_sb+0x1eb/0x440 fs/ext4/ext4_jbd2.c:105
__ext4_journal_start fs/ext4/ext4_jbd2.h:326 [inline]
ext4_handle_inode_extension+0x1a7/0x8b0 fs/ext4/file.c:325
ext4_dio_write_iter fs/ext4/file.c:581 [inline]
ext4_file_write_iter+0x15ed/0x1990 fs/ext4/file.c:685
do_iter_readv_writev+0x594/0x7a0
do_iter_write+0x1ea/0x760 fs/read_write.c:855
iter_file_splice_write+0x806/0xfa0 fs/splice.c:689
do_splice_from fs/splice.c:767 [inline]
direct_splice_actor+0xe3/0x1c0 fs/splice.c:936
splice_direct_to_actor+0x500/0xc10 fs/splice.c:891
do_splice_direct+0x285/0x3d0 fs/splice.c:979
do_sendfile+0x625/0xff0 fs/read_write.c:1249
__do_sys_sendfile64 fs/read_write.c:1317 [inline]
__se_sys_sendfile64+0x178/0x1e0 fs/read_write.c:1303
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7fd8084cc0f9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fd806a3e168 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 00007fd8085ebf80 RCX: 00007fd8084cc0f9
RDX: 0000000000000000 RSI: 0000000000000003 RDI: 0000000000000004
RBP: 00007fd808527ae9 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000f03afffe R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffddf65b3ff R14: 00007fd806a3e300 R15: 0000000000022000
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Mar 7, 2023, 1:21:41 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 42616e0f09fb Linux 6.1.15
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=107a245cc80000
kernel config: https://syzkaller.appspot.com/x/.config?x=690b9ff41783cd73
dashboard link: https://syzkaller.appspot.com/bug?extid=c5b10e098e75430412f1
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/db869f2ed2bd/disk-42616e0f.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/37951bbe5829/vmlinux-42616e0f.xz
kernel image: https://storage.googleapis.com/syzbot-assets/23aa1a75ce0f/bzImage-42616e0f.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c5b10e...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.1.15-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.2/31132 is trying to acquire lock:
ffff88807e2623f8 (&journal->j_checkpoint_mutex){+.+.}-{3:3}, at: __jbd2_log_wait_for_space+0x213/0x760 fs/jbd2/checkpoint.c:110

but task is already holding lock:
ffff88804c0b2218 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
ffff88804c0b2218 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: ext4_buffered_write_iter+0xaf/0x3a0 fs/ext4/file.c:279

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&sb->s_type->i_mutex_key#8){++++}-{3:3}:
lock_acquire+0x231/0x620 kernel/locking/lockdep.c:5668
down_read+0x39/0x50 kernel/locking/rwsem.c:1509
inode_lock_shared include/linux/fs.h:766 [inline]
ext4_bmap+0x4b/0x410 fs/ext4/inode.c:3171
bmap+0xa1/0xd0 fs/inode.c:1798
jbd2_journal_bmap fs/jbd2/journal.c:977 [inline]
__jbd2_journal_erase fs/jbd2/journal.c:1789 [inline]
jbd2_journal_flush+0x5b5/0xc40 fs/jbd2/journal.c:2492
ext4_ioctl_checkpoint fs/ext4/ioctl.c:1082 [inline]
__ext4_ioctl fs/ext4/ioctl.c:1590 [inline]
ext4_ioctl+0x3a7d/0x61c0 fs/ext4/ioctl.c:1610
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl+0xf1/0x160 fs/ioctl.c:856
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&journal->j_checkpoint_mutex){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3097 [inline]
check_prevs_add kernel/locking/lockdep.c:3216 [inline]
validate_chain+0x1667/0x58e0 kernel/locking/lockdep.c:3831
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5055
lock_acquire+0x231/0x620 kernel/locking/lockdep.c:5668
__mutex_lock_common+0x1d4/0x2520 kernel/locking/mutex.c:603
mutex_lock_io_nested+0x43/0x60 kernel/locking/mutex.c:833
__jbd2_log_wait_for_space+0x213/0x760 fs/jbd2/checkpoint.c:110
add_transaction_credits+0x94c/0xc00 fs/jbd2/transaction.c:298
start_this_handle+0x747/0x1640 fs/jbd2/transaction.c:422
jbd2__journal_start+0x2d1/0x5c0 fs/jbd2/transaction.c:520
__ext4_journal_start_sb+0x206/0x4e0 fs/ext4/ext4_jbd2.c:105
__ext4_journal_start fs/ext4/ext4_jbd2.h:326 [inline]
ext4_dirty_inode+0x8b/0x100 fs/ext4/inode.c:6037
__mark_inode_dirty+0x3d9/0x1220 fs/fs-writeback.c:2408
mark_inode_dirty include/linux/fs.h:2481 [inline]
generic_write_end+0x180/0x1d0 fs/buffer.c:2184
ext4_da_write_end+0x836/0xba0 fs/ext4/inode.c:3103
generic_perform_write+0x3e9/0x5e0 mm/filemap.c:3765
ext4_buffered_write_iter+0x122/0x3a0 fs/ext4/file.c:285
ext4_file_write_iter+0x1d2/0x18f0
do_iter_write+0x6e6/0xc50 fs/read_write.c:861
iter_file_splice_write+0x806/0xfa0 fs/splice.c:686
do_splice_from fs/splice.c:764 [inline]
direct_splice_actor+0xe3/0x1c0 fs/splice.c:931
splice_direct_to_actor+0x4c0/0xbd0 fs/splice.c:886
do_splice_direct+0x27f/0x3c0 fs/splice.c:974
do_sendfile+0x61c/0xff0 fs/read_write.c:1255
__do_sys_sendfile64 fs/read_write.c:1317 [inline]
__se_sys_sendfile64+0xfc/0x1e0 fs/read_write.c:1309
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

other info that might help us debug this:

Possible unsafe locking scenario:

CPU0 CPU1
---- ----
lock(&sb->s_type->i_mutex_key#8);
lock(&journal->j_checkpoint_mutex);
lock(&sb->s_type->i_mutex_key#8);
lock(&journal->j_checkpoint_mutex);

*** DEADLOCK ***

2 locks held by syz-executor.2/31132:
#0: ffff88807e25e460 (sb_writers#4){.+.+}-{0:0}, at: do_sendfile+0x5f7/0xff0 fs/read_write.c:1254
#1: ffff88804c0b2218 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
#1: ffff88804c0b2218 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: ext4_buffered_write_iter+0xaf/0x3a0 fs/ext4/file.c:279

stack backtrace:
CPU: 1 PID: 31132 Comm: syz-executor.2 Not tainted 6.1.15-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
check_noncircular+0x2fa/0x3b0 kernel/locking/lockdep.c:2177
check_prev_add kernel/locking/lockdep.c:3097 [inline]
check_prevs_add kernel/locking/lockdep.c:3216 [inline]
validate_chain+0x1667/0x58e0 kernel/locking/lockdep.c:3831
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5055
lock_acquire+0x231/0x620 kernel/locking/lockdep.c:5668
__mutex_lock_common+0x1d4/0x2520 kernel/locking/mutex.c:603
mutex_lock_io_nested+0x43/0x60 kernel/locking/mutex.c:833
__jbd2_log_wait_for_space+0x213/0x760 fs/jbd2/checkpoint.c:110
add_transaction_credits+0x94c/0xc00 fs/jbd2/transaction.c:298
start_this_handle+0x747/0x1640 fs/jbd2/transaction.c:422
jbd2__journal_start+0x2d1/0x5c0 fs/jbd2/transaction.c:520
__ext4_journal_start_sb+0x206/0x4e0 fs/ext4/ext4_jbd2.c:105
__ext4_journal_start fs/ext4/ext4_jbd2.h:326 [inline]
ext4_dirty_inode+0x8b/0x100 fs/ext4/inode.c:6037
__mark_inode_dirty+0x3d9/0x1220 fs/fs-writeback.c:2408
mark_inode_dirty include/linux/fs.h:2481 [inline]
generic_write_end+0x180/0x1d0 fs/buffer.c:2184
ext4_da_write_end+0x836/0xba0 fs/ext4/inode.c:3103
generic_perform_write+0x3e9/0x5e0 mm/filemap.c:3765
ext4_buffered_write_iter+0x122/0x3a0 fs/ext4/file.c:285
ext4_file_write_iter+0x1d2/0x18f0
do_iter_write+0x6e6/0xc50 fs/read_write.c:861
iter_file_splice_write+0x806/0xfa0 fs/splice.c:686
do_splice_from fs/splice.c:764 [inline]
direct_splice_actor+0xe3/0x1c0 fs/splice.c:931
splice_direct_to_actor+0x4c0/0xbd0 fs/splice.c:886
do_splice_direct+0x27f/0x3c0 fs/splice.c:974
do_sendfile+0x61c/0xff0 fs/read_write.c:1255
__do_sys_sendfile64 fs/read_write.c:1317 [inline]
__se_sys_sendfile64+0xfc/0x1e0 fs/read_write.c:1309
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f805428c0f9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f80550d9168 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 00007f80543abf80 RCX: 00007f805428c0f9
RDX: 0000000020000100 RSI: 0000000000000004 RDI: 0000000000000004
RBP: 00007f80542e7ae9 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000006b26 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffec3c85dff R14: 00007f80550d9300 R15: 0000000000022000

syzbot

Aug 14, 2023, 8:11:06 PM
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: 1321ab403b38 Linux 6.1.45
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=15140437a80000
kernel config: https://syzkaller.appspot.com/x/.config?x=5afeea223ff7d6fa
dashboard link: https://syzkaller.appspot.com/bug?extid=c5b10e098e75430412f1
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=13438a07a80000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=12d89b73a80000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/df0f614deffc/disk-1321ab40.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/df0f1be30831/vmlinux-1321ab40.xz
kernel image: https://storage.googleapis.com/syzbot-assets/9e190f335ac9/bzImage-1321ab40.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c5b10e...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.1.45-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor266/3583 is trying to acquire lock:
ffff88807f7523f8 (&journal->j_checkpoint_mutex){+.+.}-{3:3}, at: __jbd2_log_wait_for_space+0x213/0x760 fs/jbd2/checkpoint.c:88

but task is already holding lock:
ffff88807092ca38 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
ffff88807092ca38 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: ext4_buffered_write_iter+0xaf/0x3a0 fs/ext4/file.c:279

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&sb->s_type->i_mutex_key#8){++++}-{3:3}:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
down_read+0x43/0x2e0 kernel/locking/rwsem.c:1520
inode_lock_shared include/linux/fs.h:766 [inline]
ext4_bmap+0x4b/0x410 fs/ext4/inode.c:3164
bmap+0xa1/0xd0 fs/inode.c:1840
jbd2_journal_bmap fs/jbd2/journal.c:977 [inline]
__jbd2_journal_erase fs/jbd2/journal.c:1789 [inline]
jbd2_journal_flush+0x5b5/0xc40 fs/jbd2/journal.c:2492
ext4_ioctl_checkpoint fs/ext4/ioctl.c:1086 [inline]
__ext4_ioctl fs/ext4/ioctl.c:1594 [inline]
ext4_ioctl+0x3986/0x5f60 fs/ext4/ioctl.c:1614
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl+0xf1/0x160 fs/ioctl.c:856
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&journal->j_checkpoint_mutex){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3098 [inline]
check_prevs_add kernel/locking/lockdep.c:3217 [inline]
validate_chain+0x1667/0x58e0 kernel/locking/lockdep.c:3832
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5056
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
__mutex_lock_common+0x1d4/0x2520 kernel/locking/mutex.c:603
mutex_lock_io_nested+0x43/0x60 kernel/locking/mutex.c:833
__jbd2_log_wait_for_space+0x213/0x760 fs/jbd2/checkpoint.c:88
add_transaction_credits+0x94c/0xc00 fs/jbd2/transaction.c:298
start_this_handle+0x747/0x1640 fs/jbd2/transaction.c:422
jbd2__journal_start+0x2d1/0x5c0 fs/jbd2/transaction.c:520
__ext4_journal_start_sb+0x19b/0x410 fs/ext4/ext4_jbd2.c:105
__ext4_journal_start fs/ext4/ext4_jbd2.h:326 [inline]
ext4_dirty_inode+0x8b/0x100 fs/ext4/inode.c:6057
__mark_inode_dirty+0x331/0xf80 fs/fs-writeback.c:2411
generic_update_time fs/inode.c:1901 [inline]
inode_update_time fs/inode.c:1914 [inline]
__file_update_time+0x221/0x240 fs/inode.c:2102
file_modified_flags+0x3e1/0x480 fs/inode.c:2175
ext4_write_checks+0x24a/0x2c0 fs/ext4/file.c:264
ext4_buffered_write_iter+0xbd/0x3a0 fs/ext4/file.c:280
ext4_file_write_iter+0x1d2/0x18f0
__kernel_write_iter+0x2ff/0x710 fs/read_write.c:517
dump_emit_page fs/coredump.c:881 [inline]
dump_user_range+0x43d/0x8e0 fs/coredump.c:908
elf_core_dump+0x3cff/0x45b0 fs/binfmt_elf.c:2312
do_coredump+0x18b7/0x2700 fs/coredump.c:755
get_signal+0x1454/0x17d0 kernel/signal.c:2848
arch_do_signal_or_restart+0xb0/0x1a10 arch/x86/kernel/signal.c:871
exit_to_user_mode_loop+0x6a/0x100 kernel/entry/common.c:168
exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:204
irqentry_exit_to_user_mode+0x5/0x30 kernel/entry/common.c:310
exc_page_fault+0x1c0/0x660 arch/x86/mm/fault.c:1530
asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:570

other info that might help us debug this:

Possible unsafe locking scenario:

CPU0 CPU1
---- ----
lock(&sb->s_type->i_mutex_key#8);
lock(&journal->j_checkpoint_mutex);
lock(&sb->s_type->i_mutex_key#8);
lock(&journal->j_checkpoint_mutex);

*** DEADLOCK ***

2 locks held by syz-executor266/3583:
#0: ffff88807f74e460 (sb_writers#4){.+.+}-{0:0}, at: do_coredump+0x1892/0x2700 fs/coredump.c:754
#1: ffff88807092ca38 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
#1: ffff88807092ca38 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: ext4_buffered_write_iter+0xaf/0x3a0 fs/ext4/file.c:279

stack backtrace:
CPU: 0 PID: 3583 Comm: syz-executor266 Not tainted 6.1.45-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/26/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
check_noncircular+0x2fa/0x3b0 kernel/locking/lockdep.c:2178
check_prev_add kernel/locking/lockdep.c:3098 [inline]
check_prevs_add kernel/locking/lockdep.c:3217 [inline]
validate_chain+0x1667/0x58e0 kernel/locking/lockdep.c:3832
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5056
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
__mutex_lock_common+0x1d4/0x2520 kernel/locking/mutex.c:603
mutex_lock_io_nested+0x43/0x60 kernel/locking/mutex.c:833
__jbd2_log_wait_for_space+0x213/0x760 fs/jbd2/checkpoint.c:88
add_transaction_credits+0x94c/0xc00 fs/jbd2/transaction.c:298
start_this_handle+0x747/0x1640 fs/jbd2/transaction.c:422
jbd2__journal_start+0x2d1/0x5c0 fs/jbd2/transaction.c:520
__ext4_journal_start_sb+0x19b/0x410 fs/ext4/ext4_jbd2.c:105
__ext4_journal_start fs/ext4/ext4_jbd2.h:326 [inline]
ext4_dirty_inode+0x8b/0x100 fs/ext4/inode.c:6057
__mark_inode_dirty+0x331/0xf80 fs/fs-writeback.c:2411
generic_update_time fs/inode.c:1901 [inline]
inode_update_time fs/inode.c:1914 [inline]
__file_update_time+0x221/0x240 fs/inode.c:2102
file_modified_flags+0x3e1/0x480 fs/inode.c:2175
ext4_write_checks+0x24a/0x2c0 fs/ext4/file.c:264
ext4_buffered_write_iter+0xbd/0x3a0 fs/ext4/file.c:280
ext4_file_write_iter+0x1d2/0x18f0
__kernel_write_iter+0x2ff/0x710 fs/read_write.c:517
dump_emit_page fs/coredump.c:881 [inline]
dump_user_range+0x43d/0x8e0 fs/coredump.c:908
elf_core_dump+0x3cff/0x45b0 fs/binfmt_elf.c:2312
do_coredump+0x18b7/0x2700 fs/coredump.c:755
get_signal+0x1454/0x17d0 kernel/signal.c:2848
arch_do_signal_or_restart+0xb0/0x1a10 arch/x86/kernel/signal.c:871
exit_to_user_mode_loop+0x6a/0x100 kernel/entry/common.c:168
exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:204
irqentry_exit_to_user_mode+0x5/0x30 kernel/entry/common.c:310
exc_page_fault+0x1c0/0x660 arch/x86/mm/fault.c:1530
asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:570
RIP: 0033:0x0
Code: Unable to access opcode bytes at 0xffffffffffffffd6.
RSP: 002b:0000000020000448 EFLAGS: 00010217
RAX: 0000000000000000 RBX: 00007f9b6511d3c8 RCX: 00007f9b65095e79
RDX: 0000000000000000 RSI: 0000000020000440 RDI: 0000000000080400
RBP: 00007f9b6511d3c0 R08: 0000000000000000 R09: 00007f9b650536c0
R10: 0000000000000000 R11: 0000000000000246 R12: 00007f9b650ea198
R13: 00007f9b6511d3cc R14: 0030656c69662f2e R15: 00007fff488d9798
</TASK>


---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

syzbot

Sep 7, 2023, 4:31:37 PM
to syzkaller...@googlegroups.com
syzbot suspects this issue could be fixed by backporting the following commit:

commit 62913ae96de747091c4dacd06d158e7729c1a76d
git tree: upstream
Author: Theodore Ts'o <ty...@mit.edu>
Date: Wed Mar 8 04:15:49 2023 +0000

ext4, jbd2: add an optimized bmap for the journal inode

bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=10ceb2c8680000
Please keep in mind that other backports might be required as well.

For information about bisection process see: https://goo.gl/tpsmEJ#bisection

syzbot

Dec 9, 2023, 1:49:25 AM
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: 8a1d809b0545 Linux 5.15.142
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=147b496ce80000
kernel config: https://syzkaller.appspot.com/x/.config?x=ee92f7141049e8f2
dashboard link: https://syzkaller.appspot.com/bug?extid=385d1b32404207ed55d6
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=12d9a112e80000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=10fa707ae80000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/8965895d90e2/disk-8a1d809b.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/f8d2c2cd9799/vmlinux-8a1d809b.xz
kernel image: https://storage.googleapis.com/syzbot-assets/4fa328be23e7/bzImage-8a1d809b.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+385d1b...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
5.15.142-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor228/3513 is trying to acquire lock:
ffff88814b5343f8 (&journal->j_checkpoint_mutex){+.+.}-{3:3}, at: __jbd2_log_wait_for_space+0x213/0x760 fs/jbd2/checkpoint.c:71

but task is already holding lock:
ffff88806e2eb5c8 (&sb->s_type->i_mutex_key#9){++++}-{3:3}, at: inode_lock include/linux/fs.h:787 [inline]
ffff88806e2eb5c8 (&sb->s_type->i_mutex_key#9){++++}-{3:3}, at: vfs_unlink+0xe0/0x5f0 fs/namei.c:4198

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&sb->s_type->i_mutex_key#9){++++}-{3:3}:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
down_read+0x45/0x2e0 kernel/locking/rwsem.c:1488
inode_lock_shared include/linux/fs.h:797 [inline]
ext4_bmap+0x4b/0x410 fs/ext4/inode.c:3152
bmap+0xa1/0xd0 fs/inode.c:1756
jbd2_journal_bmap fs/jbd2/journal.c:980 [inline]
__jbd2_journal_erase fs/jbd2/journal.c:1788 [inline]
jbd2_journal_flush+0x7a2/0xc90 fs/jbd2/journal.c:2494
ext4_ioctl_checkpoint fs/ext4/ioctl.c:849 [inline]
__ext4_ioctl fs/ext4/ioctl.c:1267 [inline]
ext4_ioctl+0x3249/0x5b80 fs/ext4/ioctl.c:1276
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:874 [inline]
__se_sys_ioctl+0xf1/0x160 fs/ioctl.c:860
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

-> #0 (&journal->j_checkpoint_mutex){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain+0x1649/0x5930 kernel/locking/lockdep.c:3788
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5012
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__mutex_lock_common+0x1da/0x25a0 kernel/locking/mutex.c:596
mutex_lock_io_nested+0x45/0x60 kernel/locking/mutex.c:777
__jbd2_log_wait_for_space+0x213/0x760 fs/jbd2/checkpoint.c:71
add_transaction_credits+0x950/0xc00 fs/jbd2/transaction.c:299
start_this_handle+0x747/0x1570 fs/jbd2/transaction.c:423
jbd2__journal_start+0x2d1/0x5c0 fs/jbd2/transaction.c:521
__ext4_journal_start_sb+0x175/0x370 fs/ext4/ext4_jbd2.c:105
__ext4_journal_start fs/ext4/ext4_jbd2.h:326 [inline]
__ext4_unlink+0x3a7/0xae0 fs/ext4/namei.c:3265
ext4_unlink+0x1a9/0x530 fs/ext4/namei.c:3324
vfs_unlink+0x359/0x5f0 fs/namei.c:4209
do_unlinkat+0x49d/0x940 fs/namei.c:4277
__do_sys_unlink fs/namei.c:4325 [inline]
__se_sys_unlink fs/namei.c:4323 [inline]
__x64_sys_unlink+0x45/0x50 fs/namei.c:4323
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

other info that might help us debug this:

Possible unsafe locking scenario:

CPU0 CPU1
---- ----
lock(&sb->s_type->i_mutex_key#9);
lock(&journal->j_checkpoint_mutex);
lock(&sb->s_type->i_mutex_key#9);
lock(&journal->j_checkpoint_mutex);

*** DEADLOCK ***

3 locks held by syz-executor228/3513:
#0: ffff88814b530460 (sb_writers#5){.+.+}-{0:0}, at: mnt_want_write+0x3b/0x80 fs/namespace.c:377
#1: ffff88806e2e8de8 (&type->i_mutex_dir_key#4/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:822 [inline]
#1: ffff88806e2e8de8 (&type->i_mutex_dir_key#4/1){+.+.}-{3:3}, at: do_unlinkat+0x260/0x940 fs/namei.c:4260
#2: ffff88806e2eb5c8 (&sb->s_type->i_mutex_key#9){++++}-{3:3}, at: inode_lock include/linux/fs.h:787 [inline]
#2: ffff88806e2eb5c8 (&sb->s_type->i_mutex_key#9){++++}-{3:3}, at: vfs_unlink+0xe0/0x5f0 fs/namei.c:4198

stack backtrace:
CPU: 1 PID: 3513 Comm: syz-executor228 Not tainted 5.15.142-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/10/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
check_noncircular+0x2f8/0x3b0 kernel/locking/lockdep.c:2133
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain+0x1649/0x5930 kernel/locking/lockdep.c:3788
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5012
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__mutex_lock_common+0x1da/0x25a0 kernel/locking/mutex.c:596
mutex_lock_io_nested+0x45/0x60 kernel/locking/mutex.c:777
__jbd2_log_wait_for_space+0x213/0x760 fs/jbd2/checkpoint.c:71
add_transaction_credits+0x950/0xc00 fs/jbd2/transaction.c:299
start_this_handle+0x747/0x1570 fs/jbd2/transaction.c:423
jbd2__journal_start+0x2d1/0x5c0 fs/jbd2/transaction.c:521
__ext4_journal_start_sb+0x175/0x370 fs/ext4/ext4_jbd2.c:105
__ext4_journal_start fs/ext4/ext4_jbd2.h:326 [inline]
__ext4_unlink+0x3a7/0xae0 fs/ext4/namei.c:3265
ext4_unlink+0x1a9/0x530 fs/ext4/namei.c:3324
vfs_unlink+0x359/0x5f0 fs/namei.c:4209
do_unlinkat+0x49d/0x940 fs/namei.c:4277
__do_sys_unlink fs/namei.c:4325 [inline]
__se_sys_unlink fs/namei.c:4323 [inline]
__x64_sys_unlink+0x45/0x50 fs/namei.c:4323
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7fe41e0559e7
Code: 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 b8 57 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe13339168 EFLAGS: 00000206 ORIG_RAX: 0000000000000057
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fe41e0559e7
RDX: 00007ffe13339190 RSI: 00007ffe13339220 RDI: 00007ffe13339220
RBP: 00007ffe13339220 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000100 R11: 0000000000000206 R12: 00007ffe1333a2d0
R13: 0000555555656700 R14: 000000000001ae23 R15: 00007ffe1333a2d0