[v5.15] INFO: task hung in xfs_buf_item_unpin

syzbot

Apr 12, 2023, 5:34:42 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: d86dfc4d95cd Linux 5.15.106
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=12123317c80000
kernel config: https://syzkaller.appspot.com/x/.config?x=639d55ab480652c5
dashboard link: https://syzkaller.appspot.com/bug?extid=4eb75e297b6d0240b632
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/b2a94107dd69/disk-d86dfc4d.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/398f8d288cb9/vmlinux-d86dfc4d.xz
kernel image: https://storage.googleapis.com/syzbot-assets/9b790c7e7c8c/Image-d86dfc4d.gz.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+4eb75e...@syzkaller.appspotmail.com

INFO: task kworker/0:1H:149 blocked for more than 145 seconds.
Not tainted 5.15.106-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:1H state:D stack: 0 pid: 149 ppid: 2 flags:0x00000008
Workqueue: xfs-log/loop4 xlog_ioend_work
Call trace:
__switch_to+0x308/0x5e8 arch/arm64/kernel/process.c:518
context_switch kernel/sched/core.c:5026 [inline]
__schedule+0xf10/0x1e38 kernel/sched/core.c:6372
schedule+0x11c/0x1c8 kernel/sched/core.c:6455
schedule_timeout+0xb8/0x344 kernel/time/timer.c:1860
__down_common+0x2a4/0x4e0 kernel/locking/semaphore.c:224
__down+0x18/0x24 kernel/locking/semaphore.c:241
down+0x98/0xec kernel/locking/semaphore.c:62
xfs_buf_lock+0x1f8/0x798 fs/xfs/xfs_buf.c:1080
xfs_buf_item_unpin+0x1ec/0x808 fs/xfs/xfs_buf_item.c:546
xfs_trans_committed_bulk+0x2b0/0x70c fs/xfs/xfs_trans.c:780
xlog_cil_committed+0x22c/0xc20 fs/xfs/xfs_log_cil.c:631
xlog_cil_process_committed+0x11c/0x174 fs/xfs/xfs_log_cil.c:659
xlog_state_shutdown_callbacks+0x23c/0x324 fs/xfs/xfs_log.c:516
xlog_force_shutdown+0x1a8/0x208 fs/xfs/xfs_log.c:3896
xfs_do_force_shutdown+0x118/0x5e8 fs/xfs/xfs_fsops.c:529
xlog_ioend_work+0xc0/0x114 fs/xfs/xfs_log.c:1364
process_one_work+0x790/0x11b8 kernel/workqueue.c:2306
worker_thread+0x910/0x1034 kernel/workqueue.c:2453
kthread+0x37c/0x45c kernel/kthread.c:319
ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:870
INFO: task syz-executor.4:5004 blocked for more than 145 seconds.
Not tainted 5.15.106-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.4 state:D stack: 0 pid: 5004 ppid: 4108 flags:0x00000009
Call trace:
__switch_to+0x308/0x5e8 arch/arm64/kernel/process.c:518
context_switch kernel/sched/core.c:5026 [inline]
__schedule+0xf10/0x1e38 kernel/sched/core.c:6372
schedule+0x11c/0x1c8 kernel/sched/core.c:6455
xlog_wait+0x154/0x1d0 fs/xfs/xfs_log_priv.h:625
xlog_wait_on_iclog+0x39c/0x684 fs/xfs/xfs_log.c:885
xlog_force_lsn+0x5d4/0x7bc fs/xfs/xfs_log.c:3423
xfs_log_force_seq+0x310/0x81c fs/xfs/xfs_log.c:3460
__xfs_trans_commit+0x8cc/0xe98 fs/xfs/xfs_trans.c:890
xfs_trans_commit+0x24/0x34 fs/xfs/xfs_trans.c:925
xfs_sync_sb_buf+0x150/0x1ec fs/xfs/libxfs/xfs_sb.c:1100
xfs_ioc_setlabel fs/xfs/xfs_ioctl.c:1875 [inline]
xfs_file_ioctl+0x1ef4/0x297c fs/xfs/xfs_ioctl.c:1964
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:874 [inline]
__se_sys_ioctl fs/ioctl.c:860 [inline]
__arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:860
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584

Showing all locks held in the system:
1 lock held by khungtaskd/27:
#0: ffff800014aa1660 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0xc/0x44 include/linux/rcupdate.h:268
2 locks held by kworker/0:1H/149:
#0: ffff0000d57c7938 ((wq_completion)xfs-log/loop4){+.+.}-{0:0}, at: process_one_work+0x66c/0x11b8 kernel/workqueue.c:2279
#1: ffff80001a417c00 ((work_completion)(&iclog->ic_end_io_work)){+.+.}-{0:0}, at: process_one_work+0x6ac/0x11b8 kernel/workqueue.c:2281
1 lock held by udevd/3584:
2 locks held by getty/3733:
#0: ffff0000d246e098 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x40/0x50 drivers/tty/tty_ldsem.c:340
#1: ffff80001a34b2e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x414/0x1200 drivers/tty/n_tty.c:2147
1 lock held by syz-executor.5/4086:
#0: ffff800014aa5be8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
#0: ffff800014aa5be8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x320/0x660 kernel/rcu/tree_exp.h:840
2 locks held by kworker/u4:12/4285:
#0: ffff0000c0029138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x66c/0x11b8 kernel/workqueue.c:2279
#1: ffff80001e957c00 ((reaper_work).work){+.+.}-{0:0}, at: process_one_work+0x6ac/0x11b8 kernel/workqueue.c:2281
1 lock held by syz-executor.4/5004:
#0: ffff000101204460 (sb_writers#14){.+.+}-{0:0}, at: mnt_want_write_file+0x64/0x1e8 fs/namespace.c:421
2 locks held by kworker/0:12/5682:
#0: ffff0000c0021d38 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x66c/0x11b8 kernel/workqueue.c:2279
#1: ffff800021df7c00 ((work_completion)(&rew.rew_work)){+.+.}-{0:0}, at: process_one_work+0x6ac/0x11b8 kernel/workqueue.c:2281
1 lock held by syz-executor.3/7415:
1 lock held by syz-executor.2/7419:

=============================================



---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Aug 21, 2023, 7:02:46 PM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes have not happened for a while, and there is no reproducer and no activity.