[v6.6] possible deadlock in bd_prepare_to_claim


syzbot

Jul 16, 2025, 8:57:31 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 9247f4e6573a Linux 6.6.98
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=12b7e58c580000
kernel config: https://syzkaller.appspot.com/x/.config?x=cfe840f14e117c98
dashboard link: https://syzkaller.appspot.com/bug?extid=c6b56897158c5958d596
compiler: Debian clang version 20.1.7 (++20250616065708+6146a88f6049-1~exp1~20250616065826.132), Debian LLD 20.1.7

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/3855f6db0ca8/disk-9247f4e6.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/fe3d6afeb3a6/vmlinux-9247f4e6.xz
kernel image: https://storage.googleapis.com/syzbot-assets/7d8784621ac6/bzImage-9247f4e6.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c6b568...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.6.98-syzkaller #0 Not tainted
------------------------------------------------------
syz.9.2189/13360 is trying to acquire lock:
ffffffff8ca648e8 (bdev_lock){+.+.}-{3:3}, at: bd_prepare_to_claim+0x1ba/0x480 block/bdev.c:510

but task is already holding lock:
ffff88801fc870c0 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_invalidate_lock include/linux/fs.h:849 [inline]
ffff88801fc870c0 (mapping.invalidate_lock#2){++++}-{3:3}, at: blkdev_fallocate+0x214/0x670 block/fops.c:774

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #4 (mapping.invalidate_lock#2){++++}-{3:3}:
down_read+0x46/0x2e0 kernel/locking/rwsem.c:1520
filemap_invalidate_lock_shared include/linux/fs.h:859 [inline]
page_cache_ra_unbounded+0xdc/0x770 mm/readahead.c:225
page_cache_sync_readahead include/linux/pagemap.h:1325 [inline]
ext4_readdir+0xb71/0x39d0 fs/ext4/dir.c:197
iterate_dir+0x1c2/0x580 fs/readdir.c:106
__do_sys_getdents64 fs/readdir.c:405 [inline]
__se_sys_getdents64+0xe9/0x260 fs/readdir.c:390
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #3 (&type->i_mutex_dir_key#3){++++}-{3:3}:
down_write+0x97/0x1f0 kernel/locking/rwsem.c:1573
inode_lock include/linux/fs.h:804 [inline]
ext4_process_orphan+0x187/0x300 fs/ext4/orphan.c:337
ext4_orphan_cleanup+0xbd4/0x1400 fs/ext4/orphan.c:474
__ext4_fill_super fs/ext4/super.c:5606 [inline]
ext4_fill_super+0x5d47/0x6620 fs/ext4/super.c:5729
get_tree_bdev+0x3e4/0x510 fs/super.c:1591
vfs_get_tree+0x8c/0x280 fs/super.c:1764
do_new_mount+0x24b/0xa40 fs/namespace.c:3366
do_mount fs/namespace.c:3706 [inline]
__do_sys_mount fs/namespace.c:3915 [inline]
__se_sys_mount+0x2da/0x3c0 fs/namespace.c:3892
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #2 (&type->s_umount_key#32){++++}-{3:3}:
down_read+0x46/0x2e0 kernel/locking/rwsem.c:1520
__super_lock fs/super.c:58 [inline]
super_lock+0x167/0x360 fs/super.c:117
super_lock_shared fs/super.c:146 [inline]
super_lock_shared_active fs/super.c:1442 [inline]
fs_bdev_sync+0xa4/0x170 fs/super.c:1477
blkdev_flushbuf block/ioctl.c:375 [inline]
blkdev_common_ioctl+0x880/0x23d0 block/ioctl.c:505
blkdev_ioctl+0x4eb/0x6f0 block/ioctl.c:627
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:871 [inline]
__se_sys_ioctl+0xfd/0x170 fs/ioctl.c:857
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #1 (&bdev->bd_holder_lock){+.+.}-{3:3}:
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
bd_finish_claiming+0x22f/0x3f0 block/bdev.c:568
blkdev_get_by_dev+0x45c/0x600 block/bdev.c:801
bdev_open_by_dev+0x77/0x100 block/bdev.c:842
setup_bdev_super+0x59/0x660 fs/super.c:1496
mount_bdev+0x1dd/0x2d0 fs/super.c:1640
legacy_get_tree+0xea/0x180 fs/fs_context.c:662
vfs_get_tree+0x8c/0x280 fs/super.c:1764
do_new_mount+0x24b/0xa40 fs/namespace.c:3366
init_mount+0xd2/0x120 fs/init.c:25
do_mount_root+0x97/0x230 init/do_mounts.c:166
mount_root_generic+0x195/0x3c0 init/do_mounts.c:205
prepare_namespace+0xc2/0x100 init/do_mounts.c:489
kernel_init_freeable+0x413/0x570 init/main.c:1566
kernel_init+0x1d/0x1c0 init/main.c:1443
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293

-> #0 (bdev_lock){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
bd_prepare_to_claim+0x1ba/0x480 block/bdev.c:510
truncate_bdev_range+0x4e/0x260 block/bdev.c:105
blkdev_fallocate+0x3ff/0x670 block/fops.c:792
vfs_fallocate+0x58e/0x700 fs/open.c:324
madvise_remove mm/madvise.c:1007 [inline]
madvise_vma_behavior mm/madvise.c:1031 [inline]
madvise_walk_vmas mm/madvise.c:1266 [inline]
do_madvise+0x15fe/0x3710 mm/madvise.c:1446
__do_sys_madvise mm/madvise.c:1459 [inline]
__se_sys_madvise mm/madvise.c:1457 [inline]
__x64_sys_madvise+0xa6/0xc0 mm/madvise.c:1457
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

other info that might help us debug this:

Chain exists of:
bdev_lock --> &type->i_mutex_dir_key#3 --> mapping.invalidate_lock#2

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(mapping.invalidate_lock#2);
                               lock(&type->i_mutex_dir_key#3);
                               lock(mapping.invalidate_lock#2);
  lock(bdev_lock);

*** DEADLOCK ***

1 lock held by syz.9.2189/13360:
#0: ffff88801fc870c0 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_invalidate_lock include/linux/fs.h:849 [inline]
#0: ffff88801fc870c0 (mapping.invalidate_lock#2){++++}-{3:3}, at: blkdev_fallocate+0x214/0x670 block/fops.c:774

stack backtrace:
CPU: 0 PID: 13360 Comm: syz.9.2189 Not tainted 6.6.98-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
check_noncircular+0x2bd/0x3c0 kernel/locking/lockdep.c:2187
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
bd_prepare_to_claim+0x1ba/0x480 block/bdev.c:510
truncate_bdev_range+0x4e/0x260 block/bdev.c:105
blkdev_fallocate+0x3ff/0x670 block/fops.c:792
vfs_fallocate+0x58e/0x700 fs/open.c:324
madvise_remove mm/madvise.c:1007 [inline]
madvise_vma_behavior mm/madvise.c:1031 [inline]
madvise_walk_vmas mm/madvise.c:1266 [inline]
do_madvise+0x15fe/0x3710 mm/madvise.c:1446
__do_sys_madvise mm/madvise.c:1459 [inline]
__se_sys_madvise mm/madvise.c:1457 [inline]
__x64_sys_madvise+0xa6/0xc0 mm/madvise.c:1457
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fc3ddf8e929
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fc3ded81038 EFLAGS: 00000246 ORIG_RAX: 000000000000001c
RAX: ffffffffffffffda RBX: 00007fc3de1b5fa0 RCX: 00007fc3ddf8e929
RDX: 0000000000000009 RSI: 0000000000600002 RDI: 0000200000000000
RBP: 00007fc3de010ca1 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007fc3de1b5fa0 R15: 00007fff2d7ebc18
</TASK>
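The dependency chain lockdep prints above is a directed graph in which an edge "A --> B" means B was acquired while A was held; the report fires because the new blkdev_fallocate edge closes a cycle. As a rough user-space illustration of what check_noncircular() does (not kernel code; lock names are simplified from the report), the cycle can be found with a plain DFS:

```python
def find_cycle(edges, start):
    """DFS from `start`; return a cycle path if one exists, else None."""
    path = []
    visiting = set()

    def dfs(node):
        if node in visiting:  # back-edge: a cycle closes here
            return path[path.index(node):] + [node]
        visiting.add(node)
        path.append(node)
        for nxt in edges.get(node, []):
            cyc = dfs(nxt)
            if cyc:
                return cyc
        path.pop()
        visiting.discard(node)
        return None

    return dfs(start)

# Edges taken from the report: "A --> B" = B acquired while A was held.
deps = {
    "bdev_lock": ["i_mutex_dir_key"],                # mount/orphan-cleanup path
    "i_mutex_dir_key": ["mapping.invalidate_lock"],  # ext4_readdir readahead
    "mapping.invalidate_lock": ["bdev_lock"],        # new edge: blkdev_fallocate
}

print(" --> ".join(find_cycle(deps, "bdev_lock")))
```

Dropping any one edge (e.g. not taking bdev_lock under the invalidate lock in blkdev_fallocate) makes find_cycle return None, which is what a fix for this report has to achieve.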


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Dec 14, 2025, 3:34:26 AM
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: 5fa4793a2d2d Linux 6.6.119
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1766fe1a580000
kernel config: https://syzkaller.appspot.com/x/.config?x=691a6769a86ac817
dashboard link: https://syzkaller.appspot.com/bug?extid=c6b56897158c5958d596
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=15520d92580000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=14dbf1b4580000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/63699875f1dd/disk-5fa4793a.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/8506652fcb6f/vmlinux-5fa4793a.xz
kernel image: https://storage.googleapis.com/syzbot-assets/1b30ceed1710/bzImage-5fa4793a.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/984adb7b340f/mount_0.gz
fsck result: failed (log: https://syzkaller.appspot.com/x/fsck.log?x=10dbf1b4580000)

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c6b568...@syzkaller.appspotmail.com

syz.0.17[5920]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
loop0: detected capacity change from 0 to 128
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.0.17/5920 is trying to acquire lock:
ffffffff8ca648e8 (bdev_lock){+.+.}-{3:3}, at: bd_prepare_to_claim+0x1ba/0x480 block/bdev.c:527

but task is already holding lock:
ffff888148c8a040 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_invalidate_lock include/linux/fs.h:849 [inline]
ffff888148c8a040 (mapping.invalidate_lock){++++}-{3:3}, at: blkdev_fallocate+0x22b/0x6a0 block/fops.c:789

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #4 (mapping.invalidate_lock){++++}-{3:3}:
down_write+0x97/0x1f0 kernel/locking/rwsem.c:1573
filemap_invalidate_lock include/linux/fs.h:849 [inline]
set_blocksize+0x249/0x4b0 block/bdev.c:161
sb_set_blocksize block/bdev.c:178 [inline]
sb_min_blocksize+0xbe/0x190 block/bdev.c:194
ext4_load_super fs/ext4/super.c:5042 [inline]
__ext4_fill_super fs/ext4/super.c:5251 [inline]
ext4_fill_super+0x6df/0x66c0 fs/ext4/super.c:5724
get_tree_bdev+0x3e4/0x510 fs/super.c:1591
vfs_get_tree+0x8c/0x280 fs/super.c:1764
do_new_mount+0x24b/0xa40 fs/namespace.c:3386
init_mount+0xd2/0x120 fs/init.c:25
do_mount_root+0x97/0x230 init/do_mounts.c:166
mount_root_generic+0x195/0x3c0 init/do_mounts.c:205
prepare_namespace+0xc2/0x100 init/do_mounts.c:489
kernel_init_freeable+0x413/0x570 init/main.c:1578
kernel_init+0x1d/0x1c0 init/main.c:1455
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293

-> #3 (&sb->s_type->i_mutex_key#8){++++}-{3:3}:
down_write+0x97/0x1f0 kernel/locking/rwsem.c:1573
inode_lock include/linux/fs.h:804 [inline]
set_blocksize+0x201/0x4b0 block/bdev.c:160
sb_set_blocksize block/bdev.c:178 [inline]
sb_min_blocksize+0xbe/0x190 block/bdev.c:194
fat_fill_super+0x1b21/0x4c00 fs/fat/inode.c:1644
mount_bdev+0x22b/0x2d0 fs/super.c:1643
legacy_get_tree+0xea/0x180 fs/fs_context.c:662
vfs_get_tree+0x8c/0x280 fs/super.c:1764
do_new_mount+0x24b/0xa40 fs/namespace.c:3386
do_mount fs/namespace.c:3726 [inline]
__do_sys_mount fs/namespace.c:3935 [inline]
__se_sys_mount+0x2da/0x3c0 fs/namespace.c:3912
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #2 (&type->s_umount_key#56){++++}-{3:3}:
down_read+0x46/0x2e0 kernel/locking/rwsem.c:1520
__super_lock fs/super.c:58 [inline]
super_lock+0x167/0x360 fs/super.c:117
super_lock_shared fs/super.c:146 [inline]
super_lock_shared_active fs/super.c:1442 [inline]
fs_bdev_sync+0xa4/0x170 fs/super.c:1477
blkdev_flushbuf block/ioctl.c:381 [inline]
blkdev_common_ioctl+0x881/0x2460 block/ioctl.c:511
blkdev_ioctl+0x4eb/0x6f0 block/ioctl.c:633
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:871 [inline]
__se_sys_ioctl+0xfd/0x170 fs/ioctl.c:857
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #1 (&bdev->bd_holder_lock){+.+.}-{3:3}:
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
bd_finish_claiming+0x22f/0x3f0 block/bdev.c:585
blkdev_get_by_dev+0x45c/0x600 block/bdev.c:818
bdev_open_by_dev+0x77/0x100 block/bdev.c:859
setup_bdev_super+0x59/0x660 fs/super.c:1496
mount_bdev+0x1dd/0x2d0 fs/super.c:1640
legacy_get_tree+0xea/0x180 fs/fs_context.c:662
vfs_get_tree+0x8c/0x280 fs/super.c:1764
do_new_mount+0x24b/0xa40 fs/namespace.c:3386
init_mount+0xd2/0x120 fs/init.c:25
do_mount_root+0x97/0x230 init/do_mounts.c:166
mount_root_generic+0x195/0x3c0 init/do_mounts.c:205
prepare_namespace+0xc2/0x100 init/do_mounts.c:489
kernel_init_freeable+0x413/0x570 init/main.c:1578
kernel_init+0x1d/0x1c0 init/main.c:1455
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293

-> #0 (bdev_lock){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
bd_prepare_to_claim+0x1ba/0x480 block/bdev.c:527
truncate_bdev_range+0x4e/0x260 block/bdev.c:105
blkdev_fallocate+0x50d/0x6a0 block/fops.c:798
vfs_fallocate+0x58e/0x700 fs/open.c:324
ksys_fallocate fs/open.c:347 [inline]
__do_sys_fallocate fs/open.c:355 [inline]
__se_sys_fallocate fs/open.c:353 [inline]
__x64_sys_fallocate+0xc1/0x110 fs/open.c:353
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2

other info that might help us debug this:

Chain exists of:
bdev_lock --> &sb->s_type->i_mutex_key#8 --> mapping.invalidate_lock

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(mapping.invalidate_lock);
                               lock(&sb->s_type->i_mutex_key#8);
                               lock(mapping.invalidate_lock);
  lock(bdev_lock);

*** DEADLOCK ***

2 locks held by syz.0.17/5920:
#0: ffff888148c89eb0 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
#0: ffff888148c89eb0 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: blkdev_fallocate+0x205/0x6a0 block/fops.c:788
#1: ffff888148c8a040 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_invalidate_lock include/linux/fs.h:849 [inline]
#1: ffff888148c8a040 (mapping.invalidate_lock){++++}-{3:3}, at: blkdev_fallocate+0x22b/0x6a0 block/fops.c:789

stack backtrace:
CPU: 0 PID: 5920 Comm: syz.0.17 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
check_noncircular+0x2bd/0x3c0 kernel/locking/lockdep.c:2187
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
bd_prepare_to_claim+0x1ba/0x480 block/bdev.c:527
truncate_bdev_range+0x4e/0x260 block/bdev.c:105
blkdev_fallocate+0x50d/0x6a0 block/fops.c:798
vfs_fallocate+0x58e/0x700 fs/open.c:324
ksys_fallocate fs/open.c:347 [inline]
__do_sys_fallocate fs/open.c:355 [inline]
__se_sys_fallocate fs/open.c:353 [inline]
__x64_sys_fallocate+0xc1/0x110 fs/open.c:353
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f4f3af8f749
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffffdad1288 EFLAGS: 00000246 ORIG_RAX: 000000000000011d
RAX: ffffffffffffffda RBX: 00007f4f3b1e5fa0 RCX: 00007f4f3af8f749
RDX: 0000000000004000 RSI: 0000000000000010 RDI: 0000000000000004
RBP: 00007f4f3b013f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000004000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f4f3b1e5fa0 R14: 00007f4f3b1e5fa0 R15: 0000000000000004
</TASK>
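Both reports describe the same classic lock-order inversion: one path takes bdev_lock before mapping.invalidate_lock, the other takes them in the opposite order. The standard remedy is a single global acquisition order. A minimal user-space analogue (plain Python threading locks standing in for the kernel locks; purely illustrative) looks like:

```python
import threading

invalidate_lock = threading.Lock()
bdev_lock = threading.Lock()

# Global order mirroring what the existing dependency chain requires:
# bdev_lock is always taken before mapping.invalidate_lock, never after.
ORDER = [bdev_lock, invalidate_lock]

def in_global_order(*locks):
    """Return the requested locks sorted by the global order, so every
    thread acquires them in the same sequence and no ABBA cycle can form."""
    return sorted(locks, key=ORDER.index)

results = []

def worker(name):
    held = in_global_order(invalidate_lock, bdev_lock)
    for lk in held:            # acquire in global order
        lk.acquire()
    results.append(name)
    for lk in reversed(held):  # release in reverse order
        lk.release()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))
```

All four workers complete because both locks are always taken in the same order; if one worker hard-coded the reverse order instead, the two threads could each hold one lock while waiting on the other, which is exactly the scenario lockdep flags above.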


---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.