[v5.15] possible deadlock in __loop_clr_fd


syzbot

Mar 9, 2023, 9:25:53 AM3/9/23
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: d9b4a0c83a2d Linux 5.15.98
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=12a33824c80000
kernel config: https://syzkaller.appspot.com/x/.config?x=2f8d9515b973b23b
dashboard link: https://syzkaller.appspot.com/bug?extid=a94f42fb4f5d3739ef4a
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/037cabbd3313/disk-d9b4a0c8.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/9967e551eb34/vmlinux-d9b4a0c8.xz
kernel image: https://storage.googleapis.com/syzbot-assets/a050c7a4fd99/bzImage-d9b4a0c8.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+a94f42...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
5.15.98-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.0/3624 is trying to acquire lock:
ffff8880788c1138 ((wq_completion)loop0){+.+.}-{0:0}, at: flush_workqueue+0x154/0x1610 kernel/workqueue.c:2826

but task is already holding lock:
ffff88807d780468 (&lo->lo_mutex){+.+.}-{3:3}, at: __loop_clr_fd+0xa9/0xbe0 drivers/block/loop.c:1348

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #7 (&lo->lo_mutex){+.+.}-{3:3}:
lock_acquire+0x1f6/0x560 kernel/locking/lockdep.c:5622
__mutex_lock_common+0x1da/0x25a0 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_killable_nested+0x17/0x20 kernel/locking/mutex.c:758
lo_open+0x68/0x100 drivers/block/loop.c:2038
blkdev_get_whole+0x94/0x390 block/bdev.c:669
blkdev_get_by_dev+0x2b2/0xa50 block/bdev.c:824
blkdev_open+0x138/0x2d0 block/fops.c:448
do_dentry_open+0x807/0xfb0 fs/open.c:826
do_open fs/namei.c:3480 [inline]
path_openat+0x26c3/0x2ed0 fs/namei.c:3615
do_filp_open+0x21c/0x460 fs/namei.c:3642
do_sys_openat2+0x13b/0x500 fs/open.c:1211
do_sys_open fs/open.c:1227 [inline]
__do_sys_openat fs/open.c:1243 [inline]
__se_sys_openat fs/open.c:1238 [inline]
__x64_sys_openat+0x243/0x290 fs/open.c:1238
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

-> #6 (&disk->open_mutex){+.+.}-{3:3}:
lock_acquire+0x1f6/0x560 kernel/locking/lockdep.c:5622
__mutex_lock_common+0x1da/0x25a0 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
blkdev_get_by_dev+0x14d/0xa50 block/bdev.c:817
swsusp_check+0xb1/0x2c0 kernel/power/swap.c:1526
software_resume+0xc6/0x3c0 kernel/power/hibernate.c:977
resume_store+0xe3/0x130 kernel/power/hibernate.c:1179
kernfs_fop_write_iter+0x3a2/0x4f0 fs/kernfs/file.c:296
call_write_iter include/linux/fs.h:2101 [inline]
new_sync_write fs/read_write.c:507 [inline]
vfs_write+0xacf/0xe50 fs/read_write.c:594
ksys_write+0x1a2/0x2c0 fs/read_write.c:647
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

-> #5 (system_transition_mutex/1){+.+.}-{3:3}:
lock_acquire+0x1f6/0x560 kernel/locking/lockdep.c:5622
__mutex_lock_common+0x1da/0x25a0 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
software_resume+0x7c/0x3c0 kernel/power/hibernate.c:932
resume_store+0xe3/0x130 kernel/power/hibernate.c:1179
kernfs_fop_write_iter+0x3a2/0x4f0 fs/kernfs/file.c:296
call_write_iter include/linux/fs.h:2101 [inline]
new_sync_write fs/read_write.c:507 [inline]
vfs_write+0xacf/0xe50 fs/read_write.c:594
ksys_write+0x1a2/0x2c0 fs/read_write.c:647
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

-> #4 (&of->mutex){+.+.}-{3:3}:
lock_acquire+0x1f6/0x560 kernel/locking/lockdep.c:5622
__mutex_lock_common+0x1da/0x25a0 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
kernfs_seq_start+0x50/0x3b0 fs/kernfs/file.c:112
seq_read_iter+0x3d0/0xd10 fs/seq_file.c:225
call_read_iter include/linux/fs.h:2095 [inline]
new_sync_read fs/read_write.c:404 [inline]
vfs_read+0xa9f/0xe10 fs/read_write.c:485
ksys_read+0x1a2/0x2c0 fs/read_write.c:623
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

-> #3 (&p->lock){+.+.}-{3:3}:
lock_acquire+0x1f6/0x560 kernel/locking/lockdep.c:5622
__mutex_lock_common+0x1da/0x25a0 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
seq_read_iter+0xae/0xd10 fs/seq_file.c:182
proc_reg_read_iter+0x1b7/0x280 fs/proc/inode.c:300
call_read_iter include/linux/fs.h:2095 [inline]
generic_file_splice_read+0x4ad/0x790 fs/splice.c:311
do_splice_to fs/splice.c:796 [inline]
splice_direct_to_actor+0x448/0xc10 fs/splice.c:870
do_splice_direct+0x285/0x3d0 fs/splice.c:979
do_sendfile+0x625/0xff0 fs/read_write.c:1249
__do_sys_sendfile64 fs/read_write.c:1317 [inline]
__se_sys_sendfile64+0x178/0x1e0 fs/read_write.c:1303
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

-> #2 (sb_writers#3){.+.+}-{0:0}:
lock_acquire+0x1f6/0x560 kernel/locking/lockdep.c:5622
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
__sb_start_write include/linux/fs.h:1742 [inline]
sb_start_write include/linux/fs.h:1812 [inline]
file_start_write include/linux/fs.h:2964 [inline]
lo_write_bvec+0x1a3/0x710 drivers/block/loop.c:315
lo_write_simple drivers/block/loop.c:338 [inline]
do_req_filebacked drivers/block/loop.c:656 [inline]
loop_handle_cmd drivers/block/loop.c:2208 [inline]
loop_process_work+0x209c/0x2b30 drivers/block/loop.c:2248
process_one_work+0x8e6/0x1230 kernel/workqueue.c:2306
worker_thread+0xaca/0x1280 kernel/workqueue.c:2453
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 <unknown>:298

-> #1 ((work_completion)(&lo->rootcg_work)){+.+.}-{0:0}:
lock_acquire+0x1f6/0x560 kernel/locking/lockdep.c:5622
process_one_work+0x7ee/0x1230 kernel/workqueue.c:2282
worker_thread+0xaca/0x1280 kernel/workqueue.c:2453
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 <unknown>:298

-> #0 ((wq_completion)loop0){+.+.}-{0:0}:
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain+0x1646/0x58b0 kernel/locking/lockdep.c:3787
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5011
lock_acquire+0x1f6/0x560 kernel/locking/lockdep.c:5622
flush_workqueue+0x170/0x1610 kernel/workqueue.c:2826
drain_workqueue+0xc5/0x390 kernel/workqueue.c:2991
destroy_workqueue+0x7b/0xae0 kernel/workqueue.c:4426
__loop_clr_fd+0x241/0xbe0 drivers/block/loop.c:1366
blkdev_put_whole block/bdev.c:692 [inline]
blkdev_put+0x455/0x790 block/bdev.c:954
deactivate_locked_super+0xa0/0x110 fs/super.c:335
cleanup_mnt+0x44e/0x500 fs/namespace.c:1143
task_work_run+0x129/0x1a0 kernel/task_work.c:164
tracehook_notify_resume include/linux/tracehook.h:189 [inline]
exit_to_user_mode_loop+0x106/0x130 kernel/entry/common.c:175
exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:207
__syscall_exit_to_user_mode_work kernel/entry/common.c:289 [inline]
syscall_exit_to_user_mode+0x5d/0x2a0 kernel/entry/common.c:300
do_syscall_64+0x49/0xb0 arch/x86/entry/common.c:86
entry_SYSCALL_64_after_hwframe+0x61/0xcb

other info that might help us debug this:

Chain exists of:
(wq_completion)loop0 --> &disk->open_mutex --> &lo->lo_mutex

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&lo->lo_mutex);
                               lock(&disk->open_mutex);
                               lock(&lo->lo_mutex);
  lock((wq_completion)loop0);

*** DEADLOCK ***

2 locks held by syz-executor.0/3624:
#0: ffff88807caaf918 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_put+0xfb/0x790 block/bdev.c:912
#1: ffff88807d780468 (&lo->lo_mutex){+.+.}-{3:3}, at: __loop_clr_fd+0xa9/0xbe0 drivers/block/loop.c:1348

stack backtrace:
CPU: 1 PID: 3624 Comm: syz-executor.0 Not tainted 5.15.98-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
check_noncircular+0x2f8/0x3b0 kernel/locking/lockdep.c:2133
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain+0x1646/0x58b0 kernel/locking/lockdep.c:3787
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5011
lock_acquire+0x1f6/0x560 kernel/locking/lockdep.c:5622
flush_workqueue+0x170/0x1610 kernel/workqueue.c:2826
drain_workqueue+0xc5/0x390 kernel/workqueue.c:2991
destroy_workqueue+0x7b/0xae0 kernel/workqueue.c:4426
__loop_clr_fd+0x241/0xbe0 drivers/block/loop.c:1366
blkdev_put_whole block/bdev.c:692 [inline]
blkdev_put+0x455/0x790 block/bdev.c:954
deactivate_locked_super+0xa0/0x110 fs/super.c:335
cleanup_mnt+0x44e/0x500 fs/namespace.c:1143
task_work_run+0x129/0x1a0 kernel/task_work.c:164
tracehook_notify_resume include/linux/tracehook.h:189 [inline]
exit_to_user_mode_loop+0x106/0x130 kernel/entry/common.c:175
exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:207
__syscall_exit_to_user_mode_work kernel/entry/common.c:289 [inline]
syscall_exit_to_user_mode+0x5d/0x2a0 kernel/entry/common.c:300
do_syscall_64+0x49/0xb0 arch/x86/entry/common.c:86
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7f987922b567
Code: ff ff ff f7 d8 64 89 01 48 83 c8 ff c3 66 0f 1f 44 00 00 31 f6 e9 09 00 00 00 66 0f 1f 84 00 00 00 00 00 b8 a6 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe19b87f58 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f987922b567
RDX: 00007ffe19b8802c RSI: 000000000000000a RDI: 00007ffe19b88020
RBP: 00007ffe19b88020 R08: 00000000ffffffff R09: 00007ffe19b87df0
R10: 000055555740a873 R11: 0000000000000246 R12: 00007f9879284b24
R13: 00007ffe19b890e0 R14: 000055555740a810 R15: 00007ffe19b89120
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Jun 5, 2023, 6:49:47 PM6/5/23
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: d7af3e5ba454 Linux 5.15.115
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=10288745280000
kernel config: https://syzkaller.appspot.com/x/.config?x=fc49fb9fb40e8b87
dashboard link: https://syzkaller.appspot.com/bug?extid=a94f42fb4f5d3739ef4a
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=168301b5280000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/648d6fb5d654/disk-d7af3e5b.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/a693f550ea71/vmlinux-d7af3e5b.xz
kernel image: https://storage.googleapis.com/syzbot-assets/09afa713a569/Image-d7af3e5b.gz.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+a94f42...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
5.15.115-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.5/3989 is trying to acquire lock:
ffff0000dd483138 ((wq_completion)loop5){+.+.}-{0:0}, at: flush_workqueue+0x120/0x11c4 kernel/workqueue.c:2827

but task is already holding lock:
ffff0000cb507468 (&lo->lo_mutex){+.+.}-{3:3}, at: __loop_clr_fd+0xa8/0x9b8 drivers/block/loop.c:1365

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #8 (&lo->lo_mutex){+.+.}-{3:3}:
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_killable_nested+0xa4/0xf8 kernel/locking/mutex.c:758
lo_open+0x6c/0x14c drivers/block/loop.c:2055
blkdev_get_whole+0x94/0x344 block/bdev.c:669
blkdev_get_by_dev+0x238/0x89c block/bdev.c:824
blkdev_open+0x10c/0x274 block/fops.c:463
do_dentry_open+0x780/0xed8 fs/open.c:826
vfs_open+0x7c/0x90 fs/open.c:956
do_open fs/namei.c:3538 [inline]
path_openat+0x1f28/0x26f0 fs/namei.c:3672
do_filp_open+0x1a8/0x3b4 fs/namei.c:3699
do_sys_openat2+0x128/0x3d8 fs/open.c:1211
do_sys_open fs/open.c:1227 [inline]
__do_sys_openat fs/open.c:1243 [inline]
__se_sys_openat fs/open.c:1238 [inline]
__arm64_sys_openat+0x1f0/0x240 fs/open.c:1238
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584

-> #7 (&disk->open_mutex){+.+.}-{3:3}:
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xa4/0xf8 kernel/locking/mutex.c:743
bd_register_pending_holders+0x44/0x2d4 block/holder.c:161
device_add_disk+0x440/0xaa0 block/genhd.c:484
add_disk include/linux/genhd.h:212 [inline]
md_alloc+0x6bc/0xa80 drivers/md/md.c:5723
md_probe+0x78/0x94 drivers/md/md.c:5753
blk_request_module+0x184/0x1a8 block/genhd.c:684
blkdev_get_no_open+0x4c/0x178 block/bdev.c:740
blkdev_get_by_dev+0x8c/0x89c block/bdev.c:804
swsusp_check+0xb8/0x2dc kernel/power/swap.c:1526
software_resume+0xe8/0x410 kernel/power/hibernate.c:977
resume_store+0xe4/0x12c kernel/power/hibernate.c:1179
kobj_attr_store+0x6c/0x90 lib/kobject.c:864
sysfs_kf_write+0x200/0x280 fs/sysfs/file.c:139
kernfs_fop_write_iter+0x334/0x48c fs/kernfs/file.c:296
call_write_iter include/linux/fs.h:2103 [inline]
new_sync_write fs/read_write.c:507 [inline]
vfs_write+0x87c/0xb3c fs/read_write.c:594
ksys_write+0x15c/0x26c fs/read_write.c:647
__do_sys_write fs/read_write.c:659 [inline]
__se_sys_write fs/read_write.c:656 [inline]
__arm64_sys_write+0x7c/0x90 fs/read_write.c:656
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584

-> #6 (disks_mutex){+.+.}-{3:3}:
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xa4/0xf8 kernel/locking/mutex.c:743
md_alloc+0x58/0xa80 drivers/md/md.c:5664
md_probe+0x78/0x94 drivers/md/md.c:5753
blk_request_module+0x184/0x1a8 block/genhd.c:684
blkdev_get_no_open+0x4c/0x178 block/bdev.c:740
blkdev_get_by_dev+0x8c/0x89c block/bdev.c:804
swsusp_check+0xb8/0x2dc kernel/power/swap.c:1526
software_resume+0xe8/0x410 kernel/power/hibernate.c:977
resume_store+0xe4/0x12c kernel/power/hibernate.c:1179
kobj_attr_store+0x6c/0x90 lib/kobject.c:864
sysfs_kf_write+0x200/0x280 fs/sysfs/file.c:139
kernfs_fop_write_iter+0x334/0x48c fs/kernfs/file.c:296
call_write_iter include/linux/fs.h:2103 [inline]
new_sync_write fs/read_write.c:507 [inline]
vfs_write+0x87c/0xb3c fs/read_write.c:594
ksys_write+0x15c/0x26c fs/read_write.c:647
__do_sys_write fs/read_write.c:659 [inline]
__se_sys_write fs/read_write.c:656 [inline]
__arm64_sys_write+0x7c/0x90 fs/read_write.c:656
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584

-> #5 (major_names_lock){+.+.}-{3:3}:
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xa4/0xf8 kernel/locking/mutex.c:743
blk_request_module+0x3c/0x1a8 block/genhd.c:681
blkdev_get_no_open+0x4c/0x178 block/bdev.c:740
blkdev_get_by_dev+0x8c/0x89c block/bdev.c:804
swsusp_check+0xb8/0x2dc kernel/power/swap.c:1526
software_resume+0xe8/0x410 kernel/power/hibernate.c:977
resume_store+0xe4/0x12c kernel/power/hibernate.c:1179
kobj_attr_store+0x6c/0x90 lib/kobject.c:864
sysfs_kf_write+0x200/0x280 fs/sysfs/file.c:139
kernfs_fop_write_iter+0x334/0x48c fs/kernfs/file.c:296
call_write_iter include/linux/fs.h:2103 [inline]
new_sync_write fs/read_write.c:507 [inline]
vfs_write+0x87c/0xb3c fs/read_write.c:594
ksys_write+0x15c/0x26c fs/read_write.c:647
__do_sys_write fs/read_write.c:659 [inline]
__se_sys_write fs/read_write.c:656 [inline]
__arm64_sys_write+0x7c/0x90 fs/read_write.c:656
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584

-> #4 (system_transition_mutex/1){+.+.}-{3:3}:
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xa4/0xf8 kernel/locking/mutex.c:743
software_resume+0x9c/0x410 kernel/power/hibernate.c:932
resume_store+0xe4/0x12c kernel/power/hibernate.c:1179
kobj_attr_store+0x6c/0x90 lib/kobject.c:864
sysfs_kf_write+0x200/0x280 fs/sysfs/file.c:139
kernfs_fop_write_iter+0x334/0x48c fs/kernfs/file.c:296
call_write_iter include/linux/fs.h:2103 [inline]
new_sync_write fs/read_write.c:507 [inline]
vfs_write+0x87c/0xb3c fs/read_write.c:594
ksys_write+0x15c/0x26c fs/read_write.c:647
__do_sys_write fs/read_write.c:659 [inline]
__se_sys_write fs/read_write.c:656 [inline]
__arm64_sys_write+0x7c/0x90 fs/read_write.c:656
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584

-> #3 (&of->mutex){+.+.}-{3:3}:
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xa4/0xf8 kernel/locking/mutex.c:743
kernfs_seq_start+0x58/0x3a0 fs/kernfs/file.c:112
seq_read_iter+0x378/0xc44 fs/seq_file.c:225
kernfs_fop_read_iter+0x140/0x50c fs/kernfs/file.c:241
call_read_iter include/linux/fs.h:2097 [inline]
new_sync_read fs/read_write.c:404 [inline]
vfs_read+0x86c/0xb10 fs/read_write.c:485
ksys_read+0x15c/0x26c fs/read_write.c:623
__do_sys_read fs/read_write.c:633 [inline]
__se_sys_read fs/read_write.c:631 [inline]
__arm64_sys_read+0x7c/0x90 fs/read_write.c:631
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584

-> #2 (&p->lock){+.+.}-{3:3}:
__mutex_lock_common+0x194/0x2154 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xa4/0xf8 kernel/locking/mutex.c:743
seq_read_iter+0xac/0xc44 fs/seq_file.c:182
kernfs_fop_read_iter+0x140/0x50c fs/kernfs/file.c:241
do_iter_readv_writev+0x420/0x5f8
do_iter_read+0x1c4/0x67c fs/read_write.c:790
vfs_iter_read+0x88/0xac fs/read_write.c:832
lo_read_simple drivers/block/loop.c:392 [inline]
do_req_filebacked drivers/block/loop.c:663 [inline]
loop_handle_cmd drivers/block/loop.c:2234 [inline]
loop_process_work+0x16b0/0x2790 drivers/block/loop.c:2274
loop_workfn+0x54/0x68 drivers/block/loop.c:2298
process_one_work+0x790/0x11b8 kernel/workqueue.c:2307
worker_thread+0x910/0x1034 kernel/workqueue.c:2454
kthread+0x37c/0x45c kernel/kthread.c:319
ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:870

-> #1 ((work_completion)(&worker->work)){+.+.}-{0:0}:
process_one_work+0x6d4/0x11b8 kernel/workqueue.c:2283
worker_thread+0x910/0x1034 kernel/workqueue.c:2454
kthread+0x37c/0x45c kernel/kthread.c:319
ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:870

-> #0 ((wq_completion)loop5){+.+.}-{0:0}:
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3787 [inline]
__lock_acquire+0x32cc/0x7620 kernel/locking/lockdep.c:5011
lock_acquire+0x240/0x77c kernel/locking/lockdep.c:5622
flush_workqueue+0x14c/0x11c4 kernel/workqueue.c:2827
drain_workqueue+0xb8/0x32c kernel/workqueue.c:2992
destroy_workqueue+0x80/0xa34 kernel/workqueue.c:4427
__loop_clr_fd+0x1c0/0x9b8 drivers/block/loop.c:1383
loop_clr_fd drivers/block/loop.c:1509 [inline]
lo_ioctl+0xe74/0x20d0 drivers/block/loop.c:1865
blkdev_ioctl+0x3d8/0xbd0 block/ioctl.c:601
block_ioctl+0xa8/0x114 block/fops.c:493
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:874 [inline]
__se_sys_ioctl fs/ioctl.c:860 [inline]
__arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:860
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584

other info that might help us debug this:

Chain exists of:
(wq_completion)loop5 --> &disk->open_mutex --> &lo->lo_mutex

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&lo->lo_mutex);
                               lock(&disk->open_mutex);
                               lock(&lo->lo_mutex);
  lock((wq_completion)loop5);

*** DEADLOCK ***

1 lock held by syz-executor.5/3989:
#0: ffff0000cb507468 (&lo->lo_mutex){+.+.}-{3:3}, at: __loop_clr_fd+0xa8/0x9b8 drivers/block/loop.c:1365

stack backtrace:
CPU: 0 PID: 3989 Comm: syz-executor.5 Not tainted 5.15.115-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/28/2023
Call trace:
dump_backtrace+0x0/0x530 arch/arm64/kernel/stacktrace.c:152
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:216
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x108/0x170 lib/dump_stack.c:106
dump_stack+0x1c/0x58 lib/dump_stack.c:113
print_circular_bug+0x150/0x1b8 kernel/locking/lockdep.c:2011
check_noncircular+0x2cc/0x378 kernel/locking/lockdep.c:2133
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3787 [inline]
__lock_acquire+0x32cc/0x7620 kernel/locking/lockdep.c:5011
lock_acquire+0x240/0x77c kernel/locking/lockdep.c:5622
flush_workqueue+0x14c/0x11c4 kernel/workqueue.c:2827
drain_workqueue+0xb8/0x32c kernel/workqueue.c:2992
destroy_workqueue+0x80/0xa34 kernel/workqueue.c:4427
__loop_clr_fd+0x1c0/0x9b8 drivers/block/loop.c:1383
loop_clr_fd drivers/block/loop.c:1509 [inline]
lo_ioctl+0xe74/0x20d0 drivers/block/loop.c:1865
blkdev_ioctl+0x3d8/0xbd0 block/ioctl.c:601
block_ioctl+0xa8/0x114 block/fops.c:493
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:874 [inline]
__se_sys_ioctl fs/ioctl.c:860 [inline]
__arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:860
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584
Dev loop5: unable to read RDB block 8
loop5: unable to read partition table
loop5: partition table beyond EOD, truncated


---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

syzbot

Aug 24, 2023, 9:44:51 PM8/24/23
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: f6f7927ac664 Linux 5.15.127
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=10dec0c0680000
kernel config: https://syzkaller.appspot.com/x/.config?x=ea39ec6ccd2c5d32
dashboard link: https://syzkaller.appspot.com/bug?extid=a94f42fb4f5d3739ef4a
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=140c6790680000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=11e91adfa80000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/e290f46611c7/disk-f6f7927a.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/64ee0ddb7c8c/vmlinux-f6f7927a.xz
kernel image: https://storage.googleapis.com/syzbot-assets/3c32675f93c1/bzImage-f6f7927a.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/d406912ff680/mount_3.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+a94f42...@syzkaller.appspotmail.com

F2FS-fs (loop0): invalid crc value
F2FS-fs (loop0): Found nat_bits in checkpoint
F2FS-fs (loop0): Try to recover 1th superblock, ret: 0
F2FS-fs (loop0): Mounted with checkpoint version = 753bd00b
======================================================
WARNING: possible circular locking dependency detected
5.15.127-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor428/3487 is trying to acquire lock:
ffff8880240c0938 ((wq_completion)loop0){+.+.}-{0:0}, at: flush_workqueue+0x154/0x1610 kernel/workqueue.c:2830

but task is already holding lock:
ffff8881473f8468 (&lo->lo_mutex){+.+.}-{3:3}, at: __loop_clr_fd+0xa9/0xbe0 drivers/block/loop.c:1365

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #7 (&lo->lo_mutex){+.+.}-{3:3}:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5622
__mutex_lock_common+0x1da/0x25a0 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_killable_nested+0x17/0x20 kernel/locking/mutex.c:758
lo_open+0x68/0x100 drivers/block/loop.c:2055
blkdev_get_whole+0x94/0x390 block/bdev.c:669
blkdev_get_by_dev+0x2b2/0xa50 block/bdev.c:824
blkdev_open+0x138/0x2d0 block/fops.c:463
do_dentry_open+0x807/0xfb0 fs/open.c:826
do_open fs/namei.c:3538 [inline]
path_openat+0x2702/0x2f20 fs/namei.c:3672
do_filp_open+0x21c/0x460 fs/namei.c:3699
do_sys_openat2+0x13b/0x500 fs/open.c:1211
do_sys_open fs/open.c:1227 [inline]
__do_sys_openat fs/open.c:1243 [inline]
__se_sys_openat fs/open.c:1238 [inline]
__x64_sys_openat+0x243/0x290 fs/open.c:1238
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

-> #6 (&disk->open_mutex){+.+.}-{3:3}:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5622
__mutex_lock_common+0x1da/0x25a0 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
blkdev_get_by_dev+0x14d/0xa50 block/bdev.c:817
swsusp_check+0xb1/0x2c0 kernel/power/swap.c:1526
software_resume+0xc6/0x3c0 kernel/power/hibernate.c:977
resume_store+0xe3/0x130 kernel/power/hibernate.c:1179
kernfs_fop_write_iter+0x3a2/0x4f0 fs/kernfs/file.c:296
call_write_iter include/linux/fs.h:2103 [inline]
new_sync_write fs/read_write.c:507 [inline]
vfs_write+0xacf/0xe50 fs/read_write.c:594
ksys_write+0x1a2/0x2c0 fs/read_write.c:647
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

-> #5 (system_transition_mutex/1){+.+.}-{3:3}:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5622
__mutex_lock_common+0x1da/0x25a0 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
software_resume+0x7c/0x3c0 kernel/power/hibernate.c:932
resume_store+0xe3/0x130 kernel/power/hibernate.c:1179
kernfs_fop_write_iter+0x3a2/0x4f0 fs/kernfs/file.c:296
call_write_iter include/linux/fs.h:2103 [inline]
new_sync_write fs/read_write.c:507 [inline]
vfs_write+0xacf/0xe50 fs/read_write.c:594
ksys_write+0x1a2/0x2c0 fs/read_write.c:647
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

-> #4 (&of->mutex){+.+.}-{3:3}:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5622
__mutex_lock_common+0x1da/0x25a0 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
kernfs_seq_start+0x50/0x3b0 fs/kernfs/file.c:112
seq_read_iter+0x3d0/0xd10 fs/seq_file.c:225
call_read_iter include/linux/fs.h:2097 [inline]
new_sync_read fs/read_write.c:404 [inline]
vfs_read+0xa9f/0xe10 fs/read_write.c:485
ksys_read+0x1a2/0x2c0 fs/read_write.c:623
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

-> #3 (&p->lock){+.+.}-{3:3}:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5622
__mutex_lock_common+0x1da/0x25a0 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
seq_read_iter+0xae/0xd10 fs/seq_file.c:182
proc_reg_read_iter+0x1b7/0x280 fs/proc/inode.c:300
call_read_iter include/linux/fs.h:2097 [inline]
generic_file_splice_read+0x4ad/0x790 fs/splice.c:311
do_splice_to fs/splice.c:796 [inline]
splice_direct_to_actor+0x448/0xc10 fs/splice.c:870
do_splice_direct+0x285/0x3d0 fs/splice.c:979
do_sendfile+0x625/0xff0 fs/read_write.c:1249
__do_sys_sendfile64 fs/read_write.c:1317 [inline]
__se_sys_sendfile64+0x178/0x1e0 fs/read_write.c:1303
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

-> #2 (sb_writers#3){.+.+}-{0:0}:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5622
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
__sb_start_write include/linux/fs.h:1742 [inline]
sb_start_write include/linux/fs.h:1812 [inline]
file_start_write include/linux/fs.h:2966 [inline]
lo_write_bvec+0x1a3/0x740 drivers/block/loop.c:315
lo_write_simple drivers/block/loop.c:338 [inline]
do_req_filebacked drivers/block/loop.c:656 [inline]
loop_handle_cmd drivers/block/loop.c:2234 [inline]
loop_process_work+0x2309/0x2af0 drivers/block/loop.c:2274
process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298

-> #1 ((work_completion)(&lo->rootcg_work)){+.+.}-{0:0}:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5622
process_one_work+0x7f1/0x10c0 kernel/workqueue.c:2286
worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298

-> #0 ((wq_completion)loop0){+.+.}-{0:0}:
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain+0x1646/0x58b0 kernel/locking/lockdep.c:3787
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5011
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5622
flush_workqueue+0x170/0x1610 kernel/workqueue.c:2830
drain_workqueue+0xc5/0x390 kernel/workqueue.c:2995
destroy_workqueue+0x7b/0xae0 kernel/workqueue.c:4430
__loop_clr_fd+0x241/0xbe0 drivers/block/loop.c:1383
blkdev_put_whole block/bdev.c:692 [inline]
blkdev_put+0x455/0x790 block/bdev.c:954
kill_f2fs_super+0x2ff/0x3c0 fs/f2fs/super.c:4501
deactivate_locked_super+0xa0/0x110 fs/super.c:335
cleanup_mnt+0x44e/0x500 fs/namespace.c:1143
task_work_run+0x129/0x1a0 kernel/task_work.c:164
exit_task_work include/linux/task_work.h:32 [inline]
do_exit+0x6a3/0x2480 kernel/exit.c:872
do_group_exit+0x144/0x310 kernel/exit.c:994
__do_sys_exit_group kernel/exit.c:1005 [inline]
__se_sys_exit_group kernel/exit.c:1003 [inline]
__x64_sys_exit_group+0x3b/0x40 kernel/exit.c:1003
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

other info that might help us debug this:

Chain exists of:
(wq_completion)loop0 --> &disk->open_mutex --> &lo->lo_mutex

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&lo->lo_mutex);
                               lock(&disk->open_mutex);
                               lock(&lo->lo_mutex);
  lock((wq_completion)loop0);

*** DEADLOCK ***

2 locks held by syz-executor428/3487:
#0: ffff88801b604118 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_put+0xfb/0x790 block/bdev.c:912
#1: ffff8881473f8468 (&lo->lo_mutex){+.+.}-{3:3}, at: __loop_clr_fd+0xa9/0xbe0 drivers/block/loop.c:1365

stack backtrace:
CPU: 0 PID: 3487 Comm: syz-executor428 Not tainted 5.15.127-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/26/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
check_noncircular+0x2f8/0x3b0 kernel/locking/lockdep.c:2133
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain+0x1646/0x58b0 kernel/locking/lockdep.c:3787
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5011
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5622
flush_workqueue+0x170/0x1610 kernel/workqueue.c:2830
drain_workqueue+0xc5/0x390 kernel/workqueue.c:2995
destroy_workqueue+0x7b/0xae0 kernel/workqueue.c:4430
__loop_clr_fd+0x241/0xbe0 drivers/block/loop.c:1383
blkdev_put_whole block/bdev.c:692 [inline]
blkdev_put+0x455/0x790 block/bdev.c:954
kill_f2fs_super+0x2ff/0x3c0 fs/f2fs/super.c:4501
deactivate_locked_super+0xa0/0x110 fs/super.c:335
cleanup_mnt+0x44e/0x500 fs/namespace.c:1143
task_work_run+0x129/0x1a0 kernel/task_work.c:164
exit_task_work include/linux/task_work.h:32 [inline]
do_exit+0x6a3/0x2480 kernel/exit.c:872
do_group_exit+0x144/0x310 kernel/exit.c:994
__do_sys_exit_group kernel/exit.c:1005 [inline]
__se_sys_exit_group kernel/exit.c:1003 [inline]
__x64_sys_exit_group+0x3b/0x40 kernel/exit.c:1003
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7f15bf4b5b49
Code: Unable to access opcode bytes at RIP 0x7f15bf4b5b1f.
RSP: 002b:00007ffe6d1339b8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f15bf4b5b49
RDX: 000000000000003c RSI: 00000000000000e7 RDI: 0000000000000001
RBP: 00007f15bf5393d0 R08: ffffffffffffffb8 R09: 0000555555e4e378
R10: 0000000000000003 R11: 0000000000000246 R12: 00007f15bf5393d0
R13: 0000000000000000 R14: 00007f15bf53a140 R15: 00007f15bf483e20
</TASK>

syzbot
Oct 9, 2023, 7:55:32 PM
to syzkaller...@googlegroups.com
syzbot suspects this issue could be fixed by backporting the following commit:

commit fbdee71bb5d8d054e1bdb5af4c540f2cb86fe296
git tree: upstream
Author: Christoph Hellwig <h...@lst.de>
Date: Tue Jan 4 07:16:47 2022 +0000

block: deprecate autoloading based on dev_t

bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=1747a57e680000
Please keep in mind that other backports might be required as well.

For information about bisection process see: https://goo.gl/tpsmEJ#bisection