Hello,
syzbot found the following issue on:
HEAD commit: 98f47d0e9b8c Linux 5.15.184
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=13bcadf4580000
kernel config: https://syzkaller.appspot.com/x/.config?x=9eb2b5a65dfc4761
dashboard link: https://syzkaller.appspot.com/bug?extid=5cf071d639967d79b221
compiler: Debian clang version 20.1.6 (++20250514063057+1e4d39e07757-1~exp1~20250514183223.118), Debian LLD 20.1.6
userspace arch: arm64
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/d2845fb3af6c/disk-98f47d0e.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/1f12b743be24/vmlinux-98f47d0e.xz
kernel image: https://storage.googleapis.com/syzbot-assets/8f178b57ea38/Image-98f47d0e.gz.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+5cf071...@syzkaller.appspotmail.com
INFO: task udevd:5852 blocked for more than 143 seconds.
Not tainted 5.15.184-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:udevd state:D stack: 0 pid: 5852 ppid: 3643 flags:0x00000004
Call trace:
__switch_to+0x2f4/0x558 arch/arm64/kernel/process.c:521
context_switch kernel/sched/core.c:5030 [inline]
__schedule+0xe00/0x1c0c kernel/sched/core.c:6376
schedule+0x11c/0x1c8 kernel/sched/core.c:6459
schedule_preempt_disabled+0x18/0x2c kernel/sched/core.c:6518
__mutex_lock_common+0xa9c/0x1edc kernel/locking/mutex.c:669
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xac/0x11c kernel/locking/mutex.c:743
blkdev_get_by_dev+0x120/0x874 block/bdev.c:820
blkdev_open+0x108/0x27c block/fops.c:466
do_dentry_open+0x760/0xebc fs/open.c:826
vfs_open+0x7c/0x90 fs/open.c:956
do_open fs/namei.c:3608 [inline]
path_openat+0x1f80/0x26e4 fs/namei.c:3742
do_filp_open+0x164/0x330 fs/namei.c:3769
do_sys_openat2+0x128/0x3d8 fs/open.c:1253
do_sys_open fs/open.c:1269 [inline]
__do_sys_openat fs/open.c:1285 [inline]
__se_sys_openat fs/open.c:1280 [inline]
__arm64_sys_openat+0x120/0x154 fs/open.c:1280
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x78/0x1e0 arch/arm64/kernel/entry-common.c:608
el0t_64_sync_handler+0xcc/0xe4 arch/arm64/kernel/entry-common.c:626
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584
INFO: task syz.4.425:6043 blocked for more than 143 seconds.
Not tainted 5.15.184-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.425 state:D stack: 0 pid: 6043 ppid: 4030 flags:0x00000001
Call trace:
__switch_to+0x2f4/0x558 arch/arm64/kernel/process.c:521
context_switch kernel/sched/core.c:5030 [inline]
__schedule+0xe00/0x1c0c kernel/sched/core.c:6376
schedule+0x11c/0x1c8 kernel/sched/core.c:6459
io_schedule+0x84/0x160 kernel/sched/core.c:8484
wait_on_page_bit_common+0x61c/0xa74 mm/filemap.c:1356
wait_on_page_bit mm/filemap.c:1417 [inline]
wait_on_page_locked include/linux/pagemap.h:688 [inline]
wait_on_page_read mm/filemap.c:3437 [inline]
do_read_cache_page+0x748/0x8f8 mm/filemap.c:3480
read_cache_page+0x68/0x88 mm/filemap.c:3574
read_mapping_page include/linux/pagemap.h:515 [inline]
read_part_sector+0xe8/0x4c4 block/partitions/core.c:731
adfspart_check_ICS+0xbc/0x560 block/partitions/acorn.c:360
check_partition block/partitions/core.c:148 [inline]
blk_add_partitions block/partitions/core.c:616 [inline]
bdev_disk_changed+0x7fc/0x1378 block/partitions/core.c:702
blkdev_get_whole+0x2a4/0x344 block/bdev.c:682
blkdev_get_by_dev+0x224/0x874 block/bdev.c:827
blkdev_open+0x108/0x27c block/fops.c:466
do_dentry_open+0x760/0xebc fs/open.c:826
vfs_open+0x7c/0x90 fs/open.c:956
do_open fs/namei.c:3608 [inline]
path_openat+0x1f80/0x26e4 fs/namei.c:3742
do_filp_open+0x164/0x330 fs/namei.c:3769
do_sys_openat2+0x128/0x3d8 fs/open.c:1253
do_sys_open fs/open.c:1269 [inline]
__do_sys_openat fs/open.c:1285 [inline]
__se_sys_openat fs/open.c:1280 [inline]
__arm64_sys_openat+0x120/0x154 fs/open.c:1280
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x78/0x1e0 arch/arm64/kernel/entry-common.c:608
el0t_64_sync_handler+0xcc/0xe4 arch/arm64/kernel/entry-common.c:626
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584
INFO: task syz.4.425:6047 blocked for more than 143 seconds.
Not tainted 5.15.184-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.425 state:D stack: 0 pid: 6047 ppid: 4030 flags:0x00000009
Call trace:
__switch_to+0x2f4/0x558 arch/arm64/kernel/process.c:521
context_switch kernel/sched/core.c:5030 [inline]
__schedule+0xe00/0x1c0c kernel/sched/core.c:6376
schedule+0x11c/0x1c8 kernel/sched/core.c:6459
schedule_preempt_disabled+0x18/0x2c kernel/sched/core.c:6518
__mutex_lock_common+0xa9c/0x1edc kernel/locking/mutex.c:669
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0xac/0x11c kernel/locking/mutex.c:743
blkdev_put+0xdc/0x6ac block/bdev.c:915
blkdev_close+0x74/0xb0 block/fops.c:478
__fput+0x1c0/0x7f8 fs/file_table.c:311
____fput+0x20/0x30 fs/file_table.c:339
task_work_run+0x12c/0x1e0 kernel/task_work.c:188
tracehook_notify_resume include/linux/tracehook.h:189 [inline]
do_notify_resume+0x24b4/0x3128 arch/arm64/kernel/signal.c:949
prepare_exit_to_user_mode arch/arm64/kernel/entry-common.c:133 [inline]
exit_to_user_mode arch/arm64/kernel/entry-common.c:138 [inline]
el0_svc+0xf0/0x1e0 arch/arm64/kernel/entry-common.c:609
el0t_64_sync_handler+0xcc/0xe4 arch/arm64/kernel/entry-common.c:626
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584
Showing all locks held in the system:
1 lock held by khungtaskd/27:
#0: ffff8000143211e0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0xc/0x44 include/linux/rcupdate.h:311
3 locks held by kworker/u4:4/346:
#0: ffff0000c0029138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x678/0x1140 kernel/workqueue.c:2283
#1: ffff80001f5b7c00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x6b8/0x1140 kernel/workqueue.c:2285
#2: ffff800016278168 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:72
1 lock held by udevd/3643:
2 locks held by getty/3813:
#0: ffff0000d3b63098 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x40/0x50 drivers/tty/tty_ldsem.c:340
#1: ffff80001b7802e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x2f0/0xf6c drivers/tty/n_tty.c:2158
2 locks held by kworker/1:3/4068:
#0: ffff0000c0021938 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x678/0x1140 kernel/workqueue.c:2283
#1: ffff80001f327c00 ((work_completion)(&rew.rew_work)){+.+.}-{0:0}, at: process_one_work+0x6b8/0x1140 kernel/workqueue.c:2285
2 locks held by kworker/1:5/4070:
#0: ffff0000c0020938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x678/0x1140 kernel/workqueue.c:2283
#1: ffff80001f367c00 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x6b8/0x1140 kernel/workqueue.c:2285
2 locks held by kworker/u4:9/4453:
1 lock held by udevd/5852:
#0: ffff0000cbeb4918 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x120/0x874 block/bdev.c:820
1 lock held by syz.4.425/6043:
#0: ffff0000cbeb4918 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x120/0x874 block/bdev.c:820
1 lock held by syz.4.425/6047:
#0: ffff0000cbeb4918 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_put+0xdc/0x6ac block/bdev.c:915
2 locks held by syz.0.874/7885:
#0: ffff800016278168 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:72
#1: ffff800014325ca8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
#1: ffff800014325ca8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x2f0/0x5e0 kernel/rcu/tree_exp.h:845
1 lock held by syz.7.876/7892:
#0: ffff800016278168 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:72
3 locks held by syz.6.879/7900:
2 locks held by syz.5.877/7901:
#0: ffff8000162d3470 (cb_lock){++++}-{3:3}, at: genl_rcv+0x28/0x50 net/netlink/genetlink.c:802
#1: ffff800016278168 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:72
1 lock held by syz.8.878/7902:
#0: ffff800016278168 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:72
2 locks held by dhcpcd/7909:
#0: ffff0000eb9d4120 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1694 [inline]
#0: ffff0000eb9d4120 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x4c/0xb68 net/packet/af_packet.c:3213
#1: ffff800014325ca8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:322 [inline]
#1: ffff800014325ca8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x248/0x5e0 kernel/rcu/tree_exp.h:845
1 lock held by dhcpcd/7910:
#0: ffff0000ce976120 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1694 [inline]
#0: ffff0000ce976120 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x4c/0xb68 net/packet/af_packet.c:3213
=============================================
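Triage note (my reading of the traces and lock list above, so treat it as a sketch rather than a diagnosis): this does not look like an ABBA deadlock. pid 6043 takes &disk->open_mutex in blkdev_get_by_dev(), then rescans the partition table and never returns from read_part_sector() because the page read it is waiting on in wait_on_page_bit_common() never completes. udevd:5852 and pid 6047 are then simply queued behind that same mutex in blkdev_get_by_dev() and blkdev_put(). Below is a minimal userspace illustration of that shape; it is not kernel code, and the helper names (fake_read_part_sector, the "opener" thread roles) are made up for this sketch. If the reading is right, the question for a fix is why the partition-scan read never completes, not the mutex ordering itself.

/*
 * Userspace sketch of the hang shape in this report (NOT kernel code).
 * One thread grabs a mutex standing in for disk->open_mutex and then
 * blocks forever on "I/O", so every later opener/closer that needs the
 * same mutex hangs behind it.  Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t open_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Hypothetical stand-in for read_part_sector(): a read that never completes. */
static void fake_read_part_sector(void)
{
	pause();	/* like wait_on_page_bit_common() with no I/O completion */
}

/* Role of syz.4.425:6043: first opener, rescans partitions under the mutex. */
static void *first_opener(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&open_mutex);	/* blkdev_get_by_dev() */
	fake_read_part_sector();		/* bdev_disk_changed() path */
	pthread_mutex_unlock(&open_mutex);	/* never reached */
	return NULL;
}

/* Role of udevd:5852 and syz.4.425:6047: just needs the same mutex. */
static void *later_opener(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&open_mutex);	/* blkdev_get_by_dev()/blkdev_put() */
	pthread_mutex_unlock(&open_mutex);
	return NULL;
}

int main(void)
{
	pthread_t a, b, c;

	pthread_create(&a, NULL, first_opener, NULL);
	sleep(1);	/* let the first opener take the mutex */
	pthread_create(&b, NULL, later_opener, NULL);
	pthread_create(&c, NULL, later_opener, NULL);

	sleep(5);
	printf("later openers are still blocked, matching the hung tasks above\n");
	return 0;
}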
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup