[v6.1] INFO: task hung in path_openat


syzbot

Apr 4, 2023, 3:33:49 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 3b29299e5f60 Linux 6.1.22
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=154a3da5c80000
kernel config: https://syzkaller.appspot.com/x/.config?x=bbb9a1f6f7f5a1d9
dashboard link: https://syzkaller.appspot.com/bug?extid=6543f8265dc75e2a9639
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/2affbd06cbfd/disk-3b29299e.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/8b22d1baf827/vmlinux-3b29299e.xz
kernel image: https://storage.googleapis.com/syzbot-assets/d5e3891c88bf/Image-3b29299e.gz.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+6543f8...@syzkaller.appspotmail.com

INFO: task syz-executor.4:32241 blocked for more than 143 seconds.
Not tainted 6.1.22-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.4 state:D stack:0 pid:32241 ppid:4354 flags:0x00000001
Call trace:
__switch_to+0x320/0x754 arch/arm64/kernel/process.c:553
context_switch kernel/sched/core.c:5241 [inline]
__schedule+0xee4/0x1c98 kernel/sched/core.c:6554
schedule+0xc4/0x170 kernel/sched/core.c:6630
rwsem_down_write_slowpath+0xc80/0x156c kernel/locking/rwsem.c:1189
__down_write_common kernel/locking/rwsem.c:1314 [inline]
__down_write kernel/locking/rwsem.c:1323 [inline]
down_write+0x84/0x88 kernel/locking/rwsem.c:1574
inode_lock include/linux/fs.h:756 [inline]
open_last_lookups fs/namei.c:3478 [inline]
path_openat+0x5ec/0x2548 fs/namei.c:3711
do_filp_open+0x1bc/0x3cc fs/namei.c:3741
do_sys_openat2+0x128/0x3d8 fs/open.c:1310
do_sys_open fs/open.c:1326 [inline]
__do_sys_openat fs/open.c:1342 [inline]
__se_sys_openat fs/open.c:1337 [inline]
__arm64_sys_openat+0x1f0/0x240 fs/open.c:1337
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581
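
For context, the inode_lock() frame the task is stuck under is simply a write acquisition of the parent directory's rw_semaphore. A minimal sketch of the helper named in the trace (paraphrasing include/linux/fs.h, not a verbatim copy of it):

	/* open_last_lookups() takes the parent directory's lock before the
	 * final-component lookup/create; the rwsem_down_write_slowpath()
	 * frame above is this down_write() waiting for some other holder
	 * of the same &inode->i_rwsem to release it. */
	static inline void inode_lock(struct inode *inode)
	{
		down_write(&inode->i_rwsem);
	}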

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
#0: ffff800015754a30 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x44/0xcf4 kernel/rcu/tasks.h:510
1 lock held by rcu_tasks_trace/13:
#0: ffff800015755230 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x44/0xcf4 kernel/rcu/tasks.h:510
1 lock held by khungtaskd/28:
#0: ffff800015754860 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0xc/0x44 include/linux/rcupdate.h:305
2 locks held by getty/3987:
#0: ffff0000d5a88098 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x3c/0x4c drivers/tty/tty_ldsem.c:340
#1: ffff80001ca002f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x414/0x1210 drivers/tty/n_tty.c:2177
1 lock held by syz-executor.2/4338:
#0: ffff800015759e38 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:292 [inline]
#0: ffff800015759e38 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3e0/0x768 kernel/rcu/tree_exp.h:948
1 lock held by syz-executor.3/4342:
1 lock held by syz-executor.5/4357:
#0: ffff800015759e38 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
#0: ffff800015759e38 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x394/0x768 kernel/rcu/tree_exp.h:948
3 locks held by kworker/u4:18/29381:
5 locks held by syz-executor.4/32229:
2 locks held by syz-executor.4/32241:
#0: ffff0000d5dfc460 (sb_writers#27){.+.+}-{0:0}, at: mnt_want_write+0x44/0x9c fs/namespace.c:393
#1: ffff00014b2eb5e8 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
#1: ffff00014b2eb5e8 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: open_last_lookups fs/namei.c:3478 [inline]
#1: ffff00014b2eb5e8 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: path_openat+0x5ec/0x2548 fs/namei.c:3711
2 locks held by kworker/1:1/32375:
#0: ffff0000c0021d38 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x664/0x1404 kernel/workqueue.c:2262
#1: ffff80002b4f7c20 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x6a8/0x1404 kernel/workqueue.c:2264
6 locks held by syz-executor.1/800:
2 locks held by syz-executor.1/803:
#0: ffff0000cc64a460 (sb_writers#27){.+.+}-{0:0}, at: mnt_want_write+0x44/0x9c fs/namespace.c:393
#1: ffff00015203a2b0 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
#1: ffff00015203a2b0 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: open_last_lookups fs/namei.c:3478 [inline]
#1: ffff00015203a2b0 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: path_openat+0x5ec/0x2548 fs/namei.c:3711
1 lock held by syz-executor.4/2231:
#0: ffff0001b45c8598 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested kernel/sched/core.c:537 [inline]
#0: ffff0001b45c8598 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock kernel/sched/sched.h:1354 [inline]
#0: ffff0001b45c8598 (&rq->__lock){-.-.}-{2:2}, at: rq_lock kernel/sched/sched.h:1644 [inline]
#0: ffff0001b45c8598 (&rq->__lock){-.-.}-{2:2}, at: __schedule+0x2c4/0x1c98 kernel/sched/core.c:6471

=============================================



---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Apr 11, 2023, 9:28:56 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: d86dfc4d95cd Linux 5.15.106
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=15b80a0fc80000
kernel config: https://syzkaller.appspot.com/x/.config?x=639d55ab480652c5
dashboard link: https://syzkaller.appspot.com/bug?extid=b5d549d467bbe6809a64
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/b2a94107dd69/disk-d86dfc4d.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/398f8d288cb9/vmlinux-d86dfc4d.xz
kernel image: https://storage.googleapis.com/syzbot-assets/9b790c7e7c8c/Image-d86dfc4d.gz.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+b5d549...@syzkaller.appspotmail.com

INFO: task syz-executor.4:13274 blocked for more than 143 seconds.
Not tainted 5.15.106-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.4 state:D stack: 0 pid:13274 ppid: 10648 flags:0x00000001
Call trace:
__switch_to+0x308/0x5e8 arch/arm64/kernel/process.c:518
context_switch kernel/sched/core.c:5026 [inline]
__schedule+0xf10/0x1e38 kernel/sched/core.c:6372
schedule+0x11c/0x1c8 kernel/sched/core.c:6455
rwsem_down_write_slowpath+0xca8/0x1340 kernel/locking/rwsem.c:1157
__down_write_common kernel/locking/rwsem.c:1284 [inline]
__down_write kernel/locking/rwsem.c:1293 [inline]
down_write+0x25c/0x260 kernel/locking/rwsem.c:1542
inode_lock include/linux/fs.h:787 [inline]
open_last_lookups fs/namei.c:3459 [inline]
path_openat+0x63c/0x26f0 fs/namei.c:3669
do_filp_open+0x1a8/0x3b4 fs/namei.c:3699
do_sys_openat2+0x128/0x3d8 fs/open.c:1211
do_sys_open fs/open.c:1227 [inline]
__do_sys_openat fs/open.c:1243 [inline]
__se_sys_openat fs/open.c:1238 [inline]
__arm64_sys_openat+0x1f0/0x240 fs/open.c:1238
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584

Showing all locks held in the system:
1 lock held by khungtaskd/27:
#0: ffff800014aa1660 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0xc/0x44 include/linux/rcupdate.h:268
3 locks held by kworker/u4:6/1624:
2 locks held by kworker/1:3/3551:
#0: ffff0000c0020d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x66c/0x11b8 kernel/workqueue.c:2279
#1: ffff80001f407c00 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x6ac/0x11b8 kernel/workqueue.c:2281
1 lock held by udevd/3587:
2 locks held by getty/3736:
#0: ffff0000d3846098 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x40/0x50 drivers/tty/tty_ldsem.c:340
#1: ffff80001a27e2e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x414/0x1200 drivers/tty/n_tty.c:2147
2 locks held by syz-fuzzer/4816:
7 locks held by syz-executor.4/13271:
2 locks held by syz-executor.4/13274:
#0: ffff000126670460 (sb_writers#13){.+.+}-{0:0}, at: mnt_want_write+0x44/0x9c fs/namespace.c:377
#1: ffff0001210c22b0 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: inode_lock include/linux/fs.h:787 [inline]
#1: ffff0001210c22b0 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: open_last_lookups fs/namei.c:3459 [inline]
#1: ffff0001210c22b0 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: path_openat+0x63c/0x26f0 fs/namei.c:3669
5 locks held by syz-executor.5/13355:
1 lock held by syz-executor.5/13358:
#0: ffff00012c063c50 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: inode_lock include/linux/fs.h:787 [inline]
#0: ffff00012c063c50 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: open_last_lookups fs/namei.c:3459 [inline]
#0: ffff00012c063c50 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: path_openat+0x63c/0x26f0 fs/namei.c:3669
1 lock held by udevd/14381:
3 locks held by kworker/u4:2/15083:
4 locks held by syz-executor.1/15167:
6 locks held by syz-executor.2/15177:

syzbot

Jun 18, 2023, 10:42:49 PM
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: 471e639e59d1 Linux 5.15.117
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=113f25bb280000
kernel config: https://syzkaller.appspot.com/x/.config?x=6390289bcc1381aa
dashboard link: https://syzkaller.appspot.com/bug?extid=b5d549d467bbe6809a64
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=14949187280000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=12f40bef280000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/145b7f137d72/disk-471e639e.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/b5bd2fcb945c/vmlinux-471e639e.xz
kernel image: https://storage.googleapis.com/syzbot-assets/794584032e0f/bzImage-471e639e.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+b5d549...@syzkaller.appspotmail.com

INFO: task syz-executor135:6569 blocked for more than 143 seconds.
Not tainted 5.15.117-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor135 state:D stack:25600 pid: 6569 ppid: 3534 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5026 [inline]
__schedule+0x12c4/0x4590 kernel/sched/core.c:6372
schedule+0x11b/0x1f0 kernel/sched/core.c:6455
rwsem_down_write_slowpath+0xebb/0x15c0 kernel/locking/rwsem.c:1157
__down_write_common kernel/locking/rwsem.c:1284 [inline]
__down_write kernel/locking/rwsem.c:1293 [inline]
down_write+0x164/0x170 kernel/locking/rwsem.c:1542
inode_lock include/linux/fs.h:787 [inline]
open_last_lookups fs/namei.c:3459 [inline]
path_openat+0x824/0x2f20 fs/namei.c:3669
do_filp_open+0x21c/0x460 fs/namei.c:3699
file_open_name fs/open.c:1156 [inline]
filp_open+0x25d/0x2c0 fs/open.c:1176
do_coredump+0x2549/0x31e0 fs/coredump.c:767
get_signal+0xc06/0x14e0 kernel/signal.c:2875
arch_do_signal_or_restart+0xc3/0x1890 arch/x86/kernel/signal.c:865
handle_signal_work kernel/entry/common.c:148 [inline]
exit_to_user_mode_loop+0x97/0x130 kernel/entry/common.c:172
exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:208
irqentry_exit_to_user_mode+0x5/0x30 kernel/entry/common.c:314
exc_page_fault+0x342/0x740 arch/x86/mm/fault.c:1544
asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:568
RIP: 0033:0x0
RSP: 002b:0000000020000008 EFLAGS: 00010217
RAX: 0000000000000000 RBX: 0000000000000003 RCX: 00007f85fbdedc19
RDX: 0000000000000000 RSI: 0000000020000000 RDI: 0000000000000600
RBP: 0000000000000000 R08: 0000000020000100 R09: 000000a800000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000000697a1
R13: 00007ffebac2a8d0 R14: 00007ffebac2a8c0 R15: 00007ffebac2a8b0
</TASK>
INFO: task syz-executor135:6657 blocked for more than 146 seconds.
Not tainted 5.15.117-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor135 state:D stack:22336 pid: 6657 ppid: 3534 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5026 [inline]
__schedule+0x12c4/0x4590 kernel/sched/core.c:6372
schedule+0x11b/0x1f0 kernel/sched/core.c:6455
wb_wait_for_completion+0x164/0x290 fs/fs-writeback.c:191
__writeback_inodes_sb_nr+0x2ce/0x370 fs/fs-writeback.c:2662
try_to_writeback_inodes_sb+0x94/0xb0 fs/fs-writeback.c:2710
ext4_nonda_switch fs/ext4/inode.c:2943 [inline]
ext4_da_write_begin+0x228/0xb60 fs/ext4/inode.c:2970
generic_perform_write+0x2bf/0x5b0 mm/filemap.c:3776
ext4_buffered_write_iter+0x227/0x360 fs/ext4/file.c:268
ext4_file_write_iter+0x87c/0x1990
__kernel_write+0x5b1/0xa60 fs/read_write.c:539
__dump_emit+0x264/0x3a0 fs/coredump.c:875
dump_user_range+0x91/0x320 fs/coredump.c:949
elf_core_dump+0x3c7d/0x4570 fs/binfmt_elf.c:2285
do_coredump+0x1852/0x31e0 fs/coredump.c:826
get_signal+0xc06/0x14e0 kernel/signal.c:2875
arch_do_signal_or_restart+0xc3/0x1890 arch/x86/kernel/signal.c:865
handle_signal_work kernel/entry/common.c:148 [inline]
exit_to_user_mode_loop+0x97/0x130 kernel/entry/common.c:172
exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:208
irqentry_exit_to_user_mode+0x5/0x30 kernel/entry/common.c:314
exc_page_fault+0x342/0x740 arch/x86/mm/fault.c:1544
asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:568
RIP: 0033:0x0
RSP: 002b:0000000020000308 EFLAGS: 00010217
RAX: 0000000000000000 RBX: 00000000000f4240 RCX: 00007f85fbdedc19
RDX: 0000000000000000 RSI: 0000000020000300 RDI: 0000000000400000
RBP: 0000000000000000 R08: 0000000020000480 R09: 000000a800000000
R10: 0000000000000000 R11: 0000000000000246 R12: 000000000006987e
R13: 00007ffebac2a8d0 R14: 00007ffebac2a8c0 R15: 00007ffebac2a8b0
</TASK>
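
The second hung task is the core-dump writer itself: ext4's buffered write path checks whether it should fall back from delayed allocation because free space is running low, and as part of that check it kicks superblock-wide writeback and waits for it to finish, which is the wb_wait_for_completion() frame above. Roughly, as a simplified paraphrase of the 5.15-era fs/ext4/inode.c logic (not a verbatim copy; details may differ):

	static int ext4_nonda_switch(struct super_block *sb)
	{
		struct ext4_sb_info *sbi = EXT4_SB(sb);
		s64 free_clusters, dirty_clusters;

		free_clusters =
			percpu_counter_read_positive(&sbi->s_freeclusters_counter);
		dirty_clusters =
			percpu_counter_read_positive(&sbi->s_dirtyclusters_counter);

		/* Start pushing delalloc data once half of the remaining free
		 * clusters are already reserved for dirty pages; the trace above
		 * is parked in try_to_writeback_inodes_sb() waiting for this
		 * writeback to complete. */
		if (dirty_clusters && free_clusters < 2 * dirty_clusters)
			try_to_writeback_inodes_sb(sb, WB_REASON_FS_FREE_SPACE);

		/* A true return switches this write to the non-delalloc path. */
		return free_clusters < dirty_clusters + EXT4_FREECLUSTERS_WATERMARK;
	}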
INFO: task syz-executor135:6953 blocked for more than 149 seconds.
Not tainted 5.15.117-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor135 state:D stack:25472 pid: 6953 ppid: 3534 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5026 [inline]
__schedule+0x12c4/0x4590 kernel/sched/core.c:6372
schedule+0x11b/0x1f0 kernel/sched/core.c:6455
rwsem_down_write_slowpath+0xebb/0x15c0 kernel/locking/rwsem.c:1157
__down_write_common kernel/locking/rwsem.c:1284 [inline]
__down_write kernel/locking/rwsem.c:1293 [inline]
down_write+0x164/0x170 kernel/locking/rwsem.c:1542
inode_lock include/linux/fs.h:787 [inline]
open_last_lookups fs/namei.c:3459 [inline]
path_openat+0x824/0x2f20 fs/namei.c:3669
do_filp_open+0x21c/0x460 fs/namei.c:3699


---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.
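
For example, a reply of the following form (the repository URL and branch here are only placeholders for whichever tree and commit you want tested) would ask syzbot to run the reproducer against the tip of a stable branch:

#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git linux-5.15.y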

syzbot

Oct 23, 2023, 8:08:54 AM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.

syzbot

Nov 9, 2023, 3:56:21 PM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
No recent activity, existing reproducers are no longer triggering the issue.