[v6.1] INFO: task hung in xfs_buf_item_unpin

syzbot
Mar 13, 2023, 11:47:51 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 6449a0ba6843 Linux 6.1.19
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1687dffac80000
kernel config: https://syzkaller.appspot.com/x/.config?x=75eadb21ef1208e4
dashboard link: https://syzkaller.appspot.com/bug?extid=666446d4aba279647315
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/dc227ecd3e21/disk-6449a0ba.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/1d08e21b50c2/vmlinux-6449a0ba.xz
kernel image: https://storage.googleapis.com/syzbot-assets/71a43f2c4d2c/Image-6449a0ba.gz.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+666446...@syzkaller.appspotmail.com
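
For reference, the tag is an ordinary trailer line at the end of the fix's
commit message; a minimal, hypothetical layout (the reporter address is left
elided exactly as given above) could look like:

    xfs: <short subject describing the fix>

    <explanation of the change>

    Reported-by: syzbot+666446...@syzkaller.appspotmail.com
    Signed-off-by: <developer>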

INFO: task kworker/0:1H:51 blocked for more than 143 seconds.
Not tainted 6.1.19-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:1H state:D stack:0 pid:51 ppid:2 flags:0x00000008
Workqueue: xfs-log/loop5 xlog_ioend_work
Call trace:
__switch_to+0x320/0x754 arch/arm64/kernel/process.c:553
context_switch kernel/sched/core.c:5238 [inline]
__schedule+0xf9c/0x1d84 kernel/sched/core.c:6551
schedule+0xc4/0x170 kernel/sched/core.c:6627
schedule_timeout+0xb8/0x344 kernel/time/timer.c:1911
___down_common+0x2a4/0x4f0 kernel/locking/semaphore.c:225
__down_common+0x17c/0x7cc kernel/locking/semaphore.c:246
__down+0x18/0x24 kernel/locking/semaphore.c:254
down+0x94/0xe8 kernel/locking/semaphore.c:63
xfs_buf_lock+0x284/0xaa8 fs/xfs/xfs_buf.c:1120
xfs_buf_item_unpin+0x2e4/0xc58 fs/xfs/xfs_buf_item.c:547
xfs_trans_committed_bulk+0x2d8/0x73c fs/xfs/xfs_trans.c:806
xlog_cil_committed+0x210/0xf18 fs/xfs/xfs_log_cil.c:795
xlog_cil_process_committed+0x11c/0x174 fs/xfs/xfs_log_cil.c:823
xlog_state_shutdown_callbacks+0x23c/0x324 fs/xfs/xfs_log.c:538
xlog_force_shutdown+0x29c/0x350 fs/xfs/xfs_log.c:3821
xlog_ioend_work+0xa8/0xf8 fs/xfs/xfs_log.c:1402
process_one_work+0x868/0x16f4 kernel/workqueue.c:2289
worker_thread+0x8e4/0xfec kernel/workqueue.c:2436
kthread+0x24c/0x2d4 kernel/kthread.c:376
ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:860
INFO: task syz-executor.5:13740 blocked for more than 143 seconds.
Not tainted 6.1.19-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.5 state:D stack:0 pid:13740 ppid:4346 flags:0x00000009
Call trace:
__switch_to+0x320/0x754 arch/arm64/kernel/process.c:553
context_switch kernel/sched/core.c:5238 [inline]
__schedule+0xf9c/0x1d84 kernel/sched/core.c:6551
schedule+0xc4/0x170 kernel/sched/core.c:6627
xlog_wait+0x154/0x1d0 fs/xfs/xfs_log_priv.h:617
xlog_wait_on_iclog+0x438/0x830 fs/xfs/xfs_log.c:907
xlog_force_lsn+0x710/0x9c4 fs/xfs/xfs_log.c:3356
xfs_log_force_seq+0x218/0x50c fs/xfs/xfs_log.c:3393
__xfs_trans_commit+0xb30/0x1240 fs/xfs/xfs_trans.c:1014
xfs_trans_commit+0x24/0x34 fs/xfs/xfs_trans.c:1049
xfs_sync_sb_buf+0x150/0x1ec fs/xfs/libxfs/xfs_sb.c:1108
xfs_ioc_setlabel fs/xfs/xfs_ioctl.c:1805 [inline]
xfs_file_ioctl+0x1ca8/0x2680 fs/xfs/xfs_ioctl.c:1903
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl fs/ioctl.c:856 [inline]
__arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:856
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x218 arch/arm64/kernel/syscall.c:206
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
#0: ffff800015905db0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x44/0xcf4 kernel/rcu/tasks.h:510
1 lock held by rcu_tasks_trace/13:
#0: ffff8000159065b0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x44/0xcf4 kernel/rcu/tasks.h:510
1 lock held by khungtaskd/28:
#0: ffff800015905be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0xc/0x44 include/linux/rcupdate.h:305
2 locks held by kworker/0:1H/51:
#0: ffff0001159ba538 ((wq_completion)xfs-log/loop5){+.+.}-{0:0}, at: process_one_work+0x664/0x16f4 kernel/workqueue.c:2262
#1: ffff80001b247c20 ((work_completion)(&iclog->ic_end_io_work)){+.+.}-{0:0}, at: process_one_work+0x6a8/0x16f4 kernel/workqueue.c:2264
2 locks held by getty/3987:
#0: ffff0000d3bfe098 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x3c/0x4c drivers/tty/tty_ldsem.c:340
#1: ffff80001bbc02f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x414/0x1210 drivers/tty/n_tty.c:2177
2 locks held by kworker/u4:12/5253:
2 locks held by kworker/u4:14/5295:
#0: ffff0000c0029138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x664/0x16f4 kernel/workqueue.c:2262
#1: ffff8000202d7c20 ((work_completion)(&map->work)){+.+.}-{0:0}, at: process_one_work+0x6a8/0x16f4 kernel/workqueue.c:2264
1 lock held by syz-executor.5/13740:
#0: ffff00012c77e460 (sb_writers#17){.+.+}-{0:0}, at: mnt_want_write_file+0x64/0x1e8 fs/namespace.c:437
4 locks held by kworker/u4:16/16726:

=============================================
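
Reading the two traces together suggests an ABBA-style cycle: the setlabel
ioctl path (xfs_ioc_setlabel -> xfs_sync_sb_buf) appears to hold the
superblock buffer locked across the transaction commit and then sleeps in
xlog_wait_on_iclog(), while the xfs-log worker running the shutdown callbacks
blocks in xfs_buf_item_unpin() -> xfs_buf_lock() on that same buffer, so the
log I/O completion the ioctl path is waiting for can never be delivered. The
sketch below is not XFS code; it is a minimal user-space model of that shape,
with all names being stand-ins chosen for this illustration:

/* Illustrative model only, not kernel code: two threads reproduce the
 * apparent cycle from the traces above.
 *   "ioctl" thread: takes the buffer lock, then waits for "log I/O done".
 *   "log worker" thread: must take the same buffer lock before it can
 *   signal "log I/O done".
 * Build with: cc -pthread abba_model.c -o abba_model
 */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t buf_lock;      /* stands in for the xfs_buf lock semaphore */
static sem_t log_io_done;   /* stands in for the iclog completion wait  */

static void *ioctl_path(void *arg)
{
    (void)arg;
    sem_wait(&buf_lock);    /* buffer locked for the transaction        */
    puts("ioctl: holding buffer, waiting for log force to complete");
    sem_wait(&log_io_done); /* never signalled: the worker is stuck     */
    sem_post(&buf_lock);
    return NULL;
}

static void *log_worker(void *arg)
{
    (void)arg;
    puts("worker: shutdown callbacks need the buffer lock");
    sem_wait(&buf_lock);    /* blocks: the ioctl thread holds the lock  */
    sem_post(&log_io_done); /* would have woken the ioctl thread        */
    sem_post(&buf_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    sem_init(&buf_lock, 0, 1);
    sem_init(&log_io_done, 0, 0);
    pthread_create(&a, NULL, ioctl_path, NULL);
    sleep(1);               /* let the ioctl thread win the buffer lock */
    pthread_create(&b, NULL, log_worker, NULL);
    sleep(2);
    puts("both threads are still blocked; the cycle never resolves");
    return 0;
}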



---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot
Oct 11, 2023, 5:07:46 AM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while; there is no reproducer and no recent activity.