Hello,
syzbot found the following issue on:
HEAD commit: 1c700860e8bc Linux 5.15.185
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=113319d4580000
kernel config: https://syzkaller.appspot.com/x/.config?x=1ea6d61094f2bc7
dashboard link: https://syzkaller.appspot.com/bug?extid=08c48a5997d421e7cdc3
compiler: Debian clang version 20.1.6 (++20250514063057+1e4d39e07757-1~exp1~20250514183223.118), Debian LLD 20.1.6
userspace arch: arm64
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/5b3869563672/disk-1c700860.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/382d7f427d53/vmlinux-1c700860.xz
kernel image: https://storage.googleapis.com/syzbot-assets/c344c3ce2e27/Image-1c700860.gz.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+08c48a...@syzkaller.appspotmail.com
INFO: task syz.4.1396:9077 blocked for more than 144 seconds.
Not tainted 5.15.185-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.1396 state:D stack: 0 pid: 9077 ppid: 4034 flags:0x00000009
Call trace:
__switch_to+0x2f4/0x558 arch/arm64/kernel/process.c:521
context_switch kernel/sched/core.c:5030 [inline]
__schedule+0xe00/0x1c0c kernel/sched/core.c:6376
schedule+0x11c/0x1c8 kernel/sched/core.c:6459
schedule_timeout+0xb4/0x2c8 kernel/time/timer.c:1890
do_wait_for_common+0x1fc/0x35c kernel/sched/completion.c:85
__wait_for_common kernel/sched/completion.c:106 [inline]
wait_for_common kernel/sched/completion.c:117 [inline]
wait_for_completion+0x48/0x60 kernel/sched/completion.c:138
__flush_work+0x128/0x1bc kernel/workqueue.c:3094
__cancel_work_timer+0x2ec/0x448 kernel/workqueue.c:3181
cancel_work_sync+0x24/0x38 kernel/workqueue.c:3217
p9_conn_destroy net/9p/trans_fd.c:911 [inline]
p9_fd_close+0x200/0x398 net/9p/trans_fd.c:946
p9_client_create+0x840/0xd08 net/9p/client.c:1082
v9fs_session_init+0x18c/0x139c fs/9p/v9fs.c:409
v9fs_mount+0x88/0x758 fs/9p/vfs_super.c:126
legacy_get_tree+0xd4/0x16c fs/fs_context.c:611
vfs_get_tree+0x90/0x274 fs/super.c:1530
do_new_mount+0x228/0x810 fs/namespace.c:3010
path_mount+0x5b4/0x1000 fs/namespace.c:3340
do_mount fs/namespace.c:3353 [inline]
__do_sys_mount fs/namespace.c:3561 [inline]
__se_sys_mount fs/namespace.c:3538 [inline]
__arm64_sys_mount+0x514/0x5e4 fs/namespace.c:3538
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x78/0x1e0 arch/arm64/kernel/entry-common.c:608
el0t_64_sync_handler+0xcc/0xe4 arch/arm64/kernel/entry-common.c:626
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584
Showing all locks held in the system:
2 locks held by kworker/u4:0/9:
3 locks held by kworker/0:1/13:
#0: ffff0000c0020938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x678/0x1140 kernel/workqueue.c:2283
#1: ffff80001b307c00 ((work_completion)(&map->work)){+.+.}-{0:0}, at: process_one_work+0x6b8/0x1140 kernel/workqueue.c:2285
#2: ffff0001a10fc918 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested kernel/sched/core.c:475 [inline]
#2: ffff0001a10fc918 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock kernel/sched/sched.h:1326 [inline]
#2: ffff0001a10fc918 (&rq->__lock){-.-.}-{2:2}, at: rq_lock kernel/sched/sched.h:1621 [inline]
#2: ffff0001a10fc918 (&rq->__lock){-.-.}-{2:2}, at: __schedule+0x310/0x1c0c kernel/sched/core.c:6290
1 lock held by khungtaskd/27:
#0: ffff8000143311e0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0xc/0x44 include/linux/rcupdate.h:311
3 locks held by kworker/0:1H/149:
#0: ffff0001a10fc918 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested kernel/sched/core.c:475 [inline]
#0: ffff0001a10fc918 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock kernel/sched/sched.h:1326 [inline]
#0: ffff0001a10fc918 (&rq->__lock){-.-.}-{2:2}, at: rq_lock kernel/sched/sched.h:1621 [inline]
#0: ffff0001a10fc918 (&rq->__lock){-.-.}-{2:2}, at: __schedule+0x310/0x1c0c kernel/sched/core.c:6290
#1: ffff0001a10e9c48 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x414/0x68c kernel/sched/psi.c:891
#2: ffff8000143311e0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x10/0x4c include/linux/rcupdate.h:311
2 locks held by udevd/3642:
2 locks held by getty/3792:
#0: ffff0000d381a098 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x40/0x50 drivers/tty/tty_ldsem.c:340
#1: ffff80001b7902e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x2f0/0xf6c drivers/tty/n_tty.c:2158
2 locks held by kworker/1:7/4112:
#0: ffff0000c0020938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x678/0x1140 kernel/workqueue.c:2283
#1: ffff80001f687c00 ((work_completion)(&m->rq)){+.+.}-{0:0}, at: process_one_work+0x6b8/0x1140 kernel/workqueue.c:2285
2 locks held by kworker/u4:7/4113:
#0: ffff0000c0029138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x678/0x1140 kernel/workqueue.c:2283
#1: ffff80001f587c00 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work+0x6b8/0x1140 kernel/workqueue.c:2285
2 locks held by kworker/0:7/4265:
#0: ffff0000c0020938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x678/0x1140 kernel/workqueue.c:2283
#1: ffff800020437c00 (key_gc_work){+.+.}-{0:0}, at: process_one_work+0x6b8/0x1140 kernel/workqueue.c:2285
1 lock held by syz-executor/6151:
2 locks held by syz.3.1984/11703:
1 lock held by modprobe/11714:
=============================================
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup