Hello,
syzbot found the following issue on:
HEAD commit:    3b29299e5f60 Linux 6.1.22
git tree:       linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=154a3da5c80000
kernel config:  https://syzkaller.appspot.com/x/.config?x=bbb9a1f6f7f5a1d9
dashboard link: https://syzkaller.appspot.com/bug?extid=6543f8265dc75e2a9639
compiler:       Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/2affbd06cbfd/disk-3b29299e.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/8b22d1baf827/vmlinux-3b29299e.xz
kernel image: https://storage.googleapis.com/syzbot-assets/d5e3891c88bf/Image-3b29299e.gz.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+6543f8...@syzkaller.appspotmail.com
INFO: task syz-executor.4:32241 blocked for more than 143 seconds.
Not tainted 6.1.22-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.4 state:D stack:0 pid:32241 ppid:4354 flags:0x00000001
Call trace:
__switch_to+0x320/0x754 arch/arm64/kernel/process.c:553
context_switch kernel/sched/core.c:5241 [inline]
__schedule+0xee4/0x1c98 kernel/sched/core.c:6554
schedule+0xc4/0x170 kernel/sched/core.c:6630
rwsem_down_write_slowpath+0xc80/0x156c kernel/locking/rwsem.c:1189
__down_write_common kernel/locking/rwsem.c:1314 [inline]
__down_write kernel/locking/rwsem.c:1323 [inline]
down_write+0x84/0x88 kernel/locking/rwsem.c:1574
inode_lock include/linux/fs.h:756 [inline]
open_last_lookups fs/namei.c:3478 [inline]
path_openat+0x5ec/0x2548 fs/namei.c:3711
do_filp_open+0x1bc/0x3cc fs/namei.c:3741
do_sys_openat2+0x128/0x3d8 fs/open.c:1310
do_sys_open fs/open.c:1326 [inline]
__do_sys_openat fs/open.c:1342 [inline]
__se_sys_openat fs/open.c:1337 [inline]
__arm64_sys_openat+0x1f0/0x240 fs/open.c:1337
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581
Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
#0: ffff800015754a30 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x44/0xcf4 kernel/rcu/tasks.h:510
1 lock held by rcu_tasks_trace/13:
#0: ffff800015755230 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x44/0xcf4 kernel/rcu/tasks.h:510
1 lock held by khungtaskd/28:
#0: ffff800015754860 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0xc/0x44 include/linux/rcupdate.h:305
2 locks held by getty/3987:
#0: ffff0000d5a88098 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x3c/0x4c drivers/tty/tty_ldsem.c:340
#1: ffff80001ca002f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x414/0x1210 drivers/tty/n_tty.c:2177
1 lock held by syz-executor.2/4338:
#0: ffff800015759e38 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:292 [inline]
#0: ffff800015759e38 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3e0/0x768 kernel/rcu/tree_exp.h:948
1 lock held by syz-executor.3/4342:
1 lock held by syz-executor.5/4357:
#0: ffff800015759e38 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
#0: ffff800015759e38 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x394/0x768 kernel/rcu/tree_exp.h:948
3 locks held by kworker/u4:18/29381:
5 locks held by syz-executor.4/32229:
2 locks held by syz-executor.4/32241:
#0: ffff0000d5dfc460 (sb_writers#27){.+.+}-{0:0}, at: mnt_want_write+0x44/0x9c fs/namespace.c:393
#1: ffff00014b2eb5e8 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
#1: ffff00014b2eb5e8 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: open_last_lookups fs/namei.c:3478 [inline]
#1: ffff00014b2eb5e8 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: path_openat+0x5ec/0x2548 fs/namei.c:3711
2 locks held by kworker/1:1/32375:
#0: ffff0000c0021d38 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x664/0x1404 kernel/workqueue.c:2262
#1: ffff80002b4f7c20 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x6a8/0x1404 kernel/workqueue.c:2264
6 locks held by syz-executor.1/800:
2 locks held by syz-executor.1/803:
#0: ffff0000cc64a460 (sb_writers#27){.+.+}-{0:0}, at: mnt_want_write+0x44/0x9c fs/namespace.c:393
#1: ffff00015203a2b0 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
#1: ffff00015203a2b0 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: open_last_lookups fs/namei.c:3478 [inline]
#1: ffff00015203a2b0 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: path_openat+0x5ec/0x2548 fs/namei.c:3711
1 lock held by syz-executor.4/2231:
#0: ffff0001b45c8598 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested kernel/sched/core.c:537 [inline]
#0: ffff0001b45c8598 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock kernel/sched/sched.h:1354 [inline]
#0: ffff0001b45c8598 (&rq->__lock){-.-.}-{2:2}, at: rq_lock kernel/sched/sched.h:1644 [inline]
#0: ffff0001b45c8598 (&rq->__lock){-.-.}-{2:2}, at: __schedule+0x2c4/0x1c98 kernel/sched/core.c:6471
=============================================
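Editor's note on the detector that produced the report above: the hint "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" refers to the standard hung-task sysctl, which is only present on kernels built with CONFIG_DETECT_HUNG_TASK. A minimal sketch for inspecting it (the guard is an assumption for kernels or environments lacking the knob):

```shell
# Sketch: read the hung-task timeout the report mentions.
# Assumes a standard Linux sysctl path; absent when
# CONFIG_DETECT_HUNG_TASK is disabled, hence the guard.
knob=/proc/sys/kernel/hung_task_timeout_secs

if [ -r "$knob" ]; then
    # A value of 0 disables the check, as the report's own hint says.
    printf 'hung_task_timeout_secs=%s\n' "$(cat "$knob")"
else
    printf 'hung-task detector knob not present on this kernel\n'
fi
```

Writing 0 to the knob silences these warnings but hides real lockups, so leaving the default in place is preferable while the underlying rwsem stall is being debugged.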
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.