Hello,
syzbot found the following issue on:
HEAD commit: 147338df3487 Linux 6.6.108
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=132fc092580000
kernel config: https://syzkaller.appspot.com/x/.config?x=12606d4b8832c7e4
dashboard link: https://syzkaller.appspot.com/bug?extid=f3f0bf20e7f3735ca720
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/23d0a7436789/disk-147338df.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/5658c8bd0cce/vmlinux-147338df.xz
kernel image: https://storage.googleapis.com/syzbot-assets/be243abccdbe/bzImage-147338df.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+f3f0bf...@syzkaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.4.583/7970 is trying to acquire lock:
ffff8880685f6a20 (&mm->mmap_lock){++++}-{3:3}, at: internal_get_user_pages_fast+0x204/0x2730 mm/gup.c:3195
but task is already holding lock:
ffff88805a47c010 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
ffff88805a47c010 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: ext4_dio_write_iter fs/ext4/file.c:530 [inline]
ffff88805a47c010 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: ext4_file_write_iter+0x60f/0x1870 fs/ext4/file.c:696
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #5 (&sb->s_type->i_mutex_key#8){++++}-{3:3}:
down_read+0x46/0x2e0 kernel/locking/rwsem.c:1520
inode_lock_shared include/linux/fs.h:814 [inline]
ext4_bmap+0x4e/0x260 fs/ext4/inode.c:3139
bmap+0xa6/0xe0 fs/inode.c:1871
jbd2_journal_init_inode+0x87/0x3d0 fs/jbd2/journal.c:1711
ext4_open_inode_journal fs/ext4/super.c:5861 [inline]
ext4_load_journal fs/ext4/super.c:6020 [inline]
ext4_load_and_init_journal+0x315/0x2100 fs/ext4/super.c:4925
__ext4_fill_super fs/ext4/super.c:5398 [inline]
ext4_fill_super+0x4198/0x66c0 fs/ext4/super.c:5731
get_tree_bdev+0x3e4/0x510 fs/super.c:1591
vfs_get_tree+0x8c/0x280 fs/super.c:1764
do_new_mount+0x24b/0xa40 fs/namespace.c:3377
init_mount+0xd2/0x120 fs/init.c:25
do_mount_root+0x97/0x230 init/do_mounts.c:166
mount_root_generic+0x195/0x3c0 init/do_mounts.c:205
prepare_namespace+0xc2/0x100 init/do_mounts.c:489
kernel_init_freeable+0x413/0x570 init/main.c:1566
kernel_init+0x1d/0x1c0 init/main.c:1443
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
-> #4 (&type->s_umount_key#32){++++}-{3:3}:
down_read+0x46/0x2e0 kernel/locking/rwsem.c:1520
__super_lock fs/super.c:58 [inline]
super_lock+0x167/0x360 fs/super.c:117
super_lock_shared fs/super.c:146 [inline]
super_lock_shared_active fs/super.c:1442 [inline]
fs_bdev_sync+0xa4/0x170 fs/super.c:1477
blkdev_flushbuf block/ioctl.c:375 [inline]
blkdev_common_ioctl+0x880/0x23d0 block/ioctl.c:505
blkdev_ioctl+0x4eb/0x6f0 block/ioctl.c:627
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:871 [inline]
__se_sys_ioctl+0xfd/0x170 fs/ioctl.c:857
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
-> #3 (&bdev->bd_holder_lock){+.+.}-{3:3}:
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
bd_finish_claiming+0x22f/0x3f0 block/bdev.c:568
blkdev_get_by_dev+0x45c/0x600 block/bdev.c:801
bdev_open_by_dev+0x77/0x100 block/bdev.c:842
setup_bdev_super+0x59/0x660 fs/super.c:1496
mount_bdev+0x1dd/0x2d0 fs/super.c:1640
legacy_get_tree+0xea/0x180 fs/fs_context.c:662
vfs_get_tree+0x8c/0x280 fs/super.c:1764
do_new_mount+0x24b/0xa40 fs/namespace.c:3377
init_mount+0xd2/0x120 fs/init.c:25
do_mount_root+0x97/0x230 init/do_mounts.c:166
mount_root_generic+0x195/0x3c0 init/do_mounts.c:205
prepare_namespace+0xc2/0x100 init/do_mounts.c:489
kernel_init_freeable+0x413/0x570 init/main.c:1566
kernel_init+0x1d/0x1c0 init/main.c:1443
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
-> #2 (bdev_lock){+.+.}-{3:3}:
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
bd_prepare_to_claim+0x1ba/0x480 block/bdev.c:510
truncate_bdev_range+0x4e/0x260 block/bdev.c:105
blkdev_fallocate+0x3ff/0x670 block/fops.c:792
vfs_fallocate+0x58e/0x700 fs/open.c:324
madvise_remove mm/madvise.c:1007 [inline]
madvise_vma_behavior mm/madvise.c:1031 [inline]
madvise_walk_vmas mm/madvise.c:1266 [inline]
do_madvise+0x15fe/0x3710 mm/madvise.c:1446
__do_sys_madvise mm/madvise.c:1459 [inline]
__se_sys_madvise mm/madvise.c:1457 [inline]
__x64_sys_madvise+0xa6/0xc0 mm/madvise.c:1457
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
-> #1 (mapping.invalidate_lock#2){++++}-{3:3}:
down_read+0x46/0x2e0 kernel/locking/rwsem.c:1520
filemap_invalidate_lock_shared include/linux/fs.h:859 [inline]
filemap_fault+0x5db/0x15a0 mm/filemap.c:3330
__do_fault+0x13b/0x4e0 mm/memory.c:4243
do_read_fault mm/memory.c:4616 [inline]
do_fault mm/memory.c:4753 [inline]
do_pte_missing mm/memory.c:3688 [inline]
handle_pte_fault mm/memory.c:5025 [inline]
__handle_mm_fault mm/memory.c:5166 [inline]
handle_mm_fault+0x3886/0x4920 mm/memory.c:5331
faultin_page mm/gup.c:868 [inline]
__get_user_pages+0x5ea/0x1470 mm/gup.c:1167
populate_vma_page_range+0x2b6/0x370 mm/gup.c:1593
__mm_populate+0x24c/0x380 mm/gup.c:1696
mm_populate include/linux/mm.h:3328 [inline]
vm_mmap_pgoff+0x2e7/0x400 mm/util.c:561
ksys_mmap_pgoff+0x520/0x700 mm/mmap.c:1431
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
-> #0 (&mm->mmap_lock){++++}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
internal_get_user_pages_fast+0x21d/0x2730 mm/gup.c:3195
iov_iter_extract_user_pages lib/iov_iter.c:1785 [inline]
iov_iter_extract_pages+0x393/0x790 lib/iov_iter.c:1848
__bio_iov_iter_get_pages block/bio.c:1265 [inline]
bio_iov_iter_get_pages+0x597/0x15f0 block/bio.c:1343
iomap_dio_bio_iter+0xb27/0x1680 fs/iomap/direct-io.c:387
iomap_dio_iter fs/iomap/direct-io.c:-1 [inline]
__iomap_dio_rw+0xe06/0x1c40 fs/iomap/direct-io.c:659
iomap_dio_rw+0x45/0xa0 fs/iomap/direct-io.c:748
ext4_dio_write_iter fs/ext4/file.c:577 [inline]
ext4_file_write_iter+0x13ff/0x1870 fs/ext4/file.c:696
do_iter_readv_writev fs/read_write.c:-1 [inline]
do_iter_write+0x79a/0xc70 fs/read_write.c:860
vfs_writev fs/read_write.c:933 [inline]
do_writev+0x252/0x410 fs/read_write.c:976
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
other info that might help us debug this:
Chain exists of:
&mm->mmap_lock --> &type->s_umount_key#32 --> &sb->s_type->i_mutex_key#8
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(&sb->s_type->i_mutex_key#8);
                               lock(&type->s_umount_key#32);
                               lock(&sb->s_type->i_mutex_key#8);
  rlock(&mm->mmap_lock);
*** DEADLOCK ***
3 locks held by syz.4.583/7970:
#0: ffff88802fe02fc8 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0x2a3/0x330 fs/file.c:1040
#1: ffff888055372418 (sb_writers#4){.+.+}-{0:0}, at: vfs_writev fs/read_write.c:932 [inline]
#1: ffff888055372418 (sb_writers#4){.+.+}-{0:0}, at: do_writev+0x236/0x410 fs/read_write.c:976
#2: ffff88805a47c010 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
#2: ffff88805a47c010 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: ext4_dio_write_iter fs/ext4/file.c:530 [inline]
#2: ffff88805a47c010 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: ext4_file_write_iter+0x60f/0x1870 fs/ext4/file.c:696
stack backtrace:
CPU: 0 PID: 7970 Comm: syz.4.583 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
check_noncircular+0x2bd/0x3c0 kernel/locking/lockdep.c:2187
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
internal_get_user_pages_fast+0x21d/0x2730 mm/gup.c:3195
iov_iter_extract_user_pages lib/iov_iter.c:1785 [inline]
iov_iter_extract_pages+0x393/0x790 lib/iov_iter.c:1848
__bio_iov_iter_get_pages block/bio.c:1265 [inline]
bio_iov_iter_get_pages+0x597/0x15f0 block/bio.c:1343
iomap_dio_bio_iter+0xb27/0x1680 fs/iomap/direct-io.c:387
iomap_dio_iter fs/iomap/direct-io.c:-1 [inline]
__iomap_dio_rw+0xe06/0x1c40 fs/iomap/direct-io.c:659
iomap_dio_rw+0x45/0xa0 fs/iomap/direct-io.c:748
ext4_dio_write_iter fs/ext4/file.c:577 [inline]
ext4_file_write_iter+0x13ff/0x1870 fs/ext4/file.c:696
do_iter_readv_writev fs/read_write.c:-1 [inline]
do_iter_write+0x79a/0xc70 fs/read_write.c:860
vfs_writev fs/read_write.c:933 [inline]
do_writev+0x252/0x410 fs/read_write.c:976
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f21bb18eec9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f21bbf98038 EFLAGS: 00000246 ORIG_RAX: 0000000000000014
RAX: ffffffffffffffda RBX: 00007f21bb3e5fa0 RCX: 00007f21bb18eec9
RDX: 000000000000001f RSI: 0000200000000140 RDI: 0000000000000004
RBP: 00007f21bb211f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f21bb3e6038 R14: 00007f21bb3e5fa0 R15: 00007ffc7a63e4c8
</TASK>
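For readers less familiar with lockdep output: the splat above reports a cycle in the recorded lock-acquisition graph (mmap_lock --> s_umount --> i_mutex from earlier paths, plus the new i_mutex --> mmap_lock edge from the ext4 DIO write faulting in user pages). A minimal, purely illustrative sketch of that kind of cycle detection — the lock names here are stand-ins for the kernel lock classes, and the real logic lives in kernel/locking/lockdep.c — might look like:

```python
class LockOrderTracker:
    """Toy model of lockdep-style lock-order tracking (illustrative only)."""

    def __init__(self):
        self.edges = {}  # lock -> set of locks ever acquired while it was held
        self.held = []   # acquisition stack for the current "task"

    def acquire(self, lock):
        # Record an ordering edge from every currently held lock to this one.
        for h in self.held:
            self.edges.setdefault(h, set()).add(lock)
        # If the new lock can already reach a held lock in the graph, the
        # dependency graph contains a cycle: a possible deadlock.
        if self._reaches(lock, set(self.held)):
            raise RuntimeError(
                "possible circular locking dependency detected: %s" % lock)
        self.held.append(lock)

    def release(self, lock):
        self.held.remove(lock)

    def _reaches(self, start, targets):
        # Iterative DFS over recorded ordering edges.
        seen, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n in targets:
                return True
            if n in seen:
                continue
            seen.add(n)
            stack.extend(self.edges.get(n, ()))
        return False


tr = LockOrderTracker()
# Orderings recorded by earlier code paths (mount, ioctl, fallocate, fault):
for a, b in [("mmap_lock", "s_umount"), ("s_umount", "i_mutex")]:
    tr.acquire(a)
    tr.acquire(b)
    tr.release(b)
    tr.release(a)

# The DIO write path then takes the inode lock and tries to pin user pages:
tr.acquire("i_mutex")
try:
    tr.acquire("mmap_lock")  # closes the cycle -> flagged
except RuntimeError as e:
    print(e)
```

This only models the cycle check; real lockdep also tracks read/write lock modes, interrupt contexts, and lock classes rather than individual instances.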
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See: https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup