[v6.1] INFO: task hung in xfs_buf_item_unpin (2)


syzbot

Aug 22, 2025, 3:02:27 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 0bc96de781b4 Linux 6.1.148
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=17c82a34580000
kernel config: https://syzkaller.appspot.com/x/.config?x=5c8a5866886424a8
dashboard link: https://syzkaller.appspot.com/bug?extid=140ba3fddd5e22a27d02
compiler: Debian clang version 20.1.7 (++20250616065708+6146a88f6049-1~exp1~20250616065826.132), Debian LLD 20.1.7

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/04c58ffc4a8a/disk-0bc96de7.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/01cad248945e/vmlinux-0bc96de7.xz
kernel image: https://storage.googleapis.com/syzbot-assets/fc5395191c77/bzImage-0bc96de7.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+140ba3...@syzkaller.appspotmail.com

INFO: task kworker/u4:7:4803 blocked for more than 143 seconds.
Not tainted 6.1.148-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:7 state:D stack:24640 pid:4803 ppid:2 flags:0x00004000
Workqueue: xfs-cil/loop2 xlog_cil_push_work
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5244 [inline]
__schedule+0x10ec/0x40b0 kernel/sched/core.c:6561
schedule+0xb9/0x180 kernel/sched/core.c:6637
schedule_timeout+0x97/0x280 kernel/time/timer.c:1941
___down_common kernel/locking/semaphore.c:229 [inline]
__down_common+0x2e7/0x700 kernel/locking/semaphore.c:250
down+0x7c/0xd0 kernel/locking/semaphore.c:64
xfs_buf_lock+0x163/0x560 fs/xfs/xfs_buf.c:1120
xfs_buf_item_unpin+0x1c7/0x770 fs/xfs/xfs_buf_item.c:582
xfs_trans_committed_bulk+0x333/0x7f0 fs/xfs/xfs_trans.c:808
xlog_cil_committed+0x26c/0xe60 fs/xfs/xfs_log_cil.c:795
xlog_cil_push_work+0x1ec3/0x2490 fs/xfs/xfs_log_cil.c:1405
process_one_work+0x898/0x1160 kernel/workqueue.c:2292
worker_thread+0xaa2/0x1250 kernel/workqueue.c:2439
kthread+0x29d/0x330 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>
INFO: task syz.2.324:5391 blocked for more than 144 seconds.
Not tainted 6.1.148-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.324 state:D stack:26672 pid:5391 ppid:4268 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5244 [inline]
__schedule+0x10ec/0x40b0 kernel/sched/core.c:6561
schedule+0xb9/0x180 kernel/sched/core.c:6637
schedule_timeout+0x97/0x280 kernel/time/timer.c:1941
do_wait_for_common kernel/sched/completion.c:85 [inline]
__wait_for_common kernel/sched/completion.c:106 [inline]
wait_for_common kernel/sched/completion.c:117 [inline]
wait_for_completion+0x2b9/0x590 kernel/sched/completion.c:138
__flush_workqueue+0x634/0x1380 kernel/workqueue.c:2864
xlog_cil_push_now fs/xfs/xfs_log_cil.c:1521 [inline]
xlog_cil_force_seq+0x227/0x8b0 fs/xfs/xfs_log_cil.c:1723
xfs_log_force_seq+0x18c/0x420 fs/xfs/xfs_log.c:3370
__xfs_trans_commit+0x959/0xe00 fs/xfs/xfs_trans.c:1013
xfs_sync_sb_buf+0xe7/0x180 fs/xfs/libxfs/xfs_sb.c:1162
xfs_ioc_setlabel fs/xfs/xfs_ioctl.c:1827 [inline]
xfs_file_ioctl+0x1290/0x1590 fs/xfs/xfs_ioctl.c:1925
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl+0xfa/0x170 fs/ioctl.c:856
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fafc9b8ebe9
RSP: 002b:00007fafca945038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007fafc9db6090 RCX: 00007fafc9b8ebe9
RDX: 0000200000000100 RSI: 0000000041009432 RDI: 0000000000000004
RBP: 00007fafc9c11e19 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fafc9db6128 R14: 00007fafc9db6090 R15: 00007ffe83620988
</TASK>

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
#0: ffffffff8cb2b770 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
#0: ffffffff8cb2bf90 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by khungtaskd/27:
#0: ffffffff8cb2ade0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
#0: ffffffff8cb2ade0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
#0: ffffffff8cb2ade0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6513
2 locks held by getty/4028:
#0: ffff8880304ae098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
#1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x41b/0x1380 drivers/tty/n_tty.c:2198
2 locks held by kworker/0:9/4556:
#0: ffff88801bbf1938 ((wq_completion)xfs-sync/loop2){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc9000537fd00 ((work_completion)(&(&log->l_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
2 locks held by kworker/u4:7/4803:
#0: ffff888027160938 ((wq_completion)xfs-cil/loop2){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc9000de5fd00 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
2 locks held by kworker/1:17/4882:
#0: ffff888017472138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc9000e83fd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
1 lock held by syz.2.324/5391:
#0: ffff888024664460 (sb_writers#14){.+.+}-{0:0}, at: mnt_want_write_file+0x5c/0x200 fs/namespace.c:437
2 locks held by syz-executor/8075:
#0: ffff88805558e0e0 (&type->s_umount_key#94){+.+.}-{3:3}, at: deactivate_super+0xa0/0xd0 fs/super.c:362
#1: ffffffff8cb30ab8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:291 [inline]
#1: ffffffff8cb30ab8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x455/0x830 kernel/rcu/tree_exp.h:962

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 27 Comm: khungtaskd Not tainted 6.1.148-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Call Trace:
<TASK>
dump_stack_lvl+0x168/0x22e lib/dump_stack.c:106
nmi_cpu_backtrace+0x3f4/0x470 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x1d4/0x450 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
watchdog+0xeee/0xf30 kernel/hung_task.c:377
kthread+0x29d/0x330 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 3619 Comm: syslogd Not tainted 6.1.148-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
RIP: 0010:__skb_try_recv_from_queue+0x220/0x730 net/core/datagram.c:-1
Code: ff ff 48 89 df e8 50 5d b2 f9 e9 3b ff ff ff 89 d9 80 e1 07 38 c1 7c 88 48 89 df e8 1a 5d b2 f9 e9 7b ff ff ff e8 a0 40 62 f9 <45> 31 e4 e9 7e 03 00 00 e8 93 40 62 f9 49 83 c4 10 4c 89 e0 48 c1
RSP: 0018:ffffc900032678f0 EFLAGS: 00000093
RAX: ffffffff881e7dc0 RBX: ffff88807e7ba1b8 RCX: ffff88807d848000
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: 0000000000000000 R08: ffffc900032679dc R09: ffffc90003267b00
R10: fffff5200064cf04 R11: 1ffff9200064cf04 R12: ffff88807e7ba1b8
R13: ffffc90003267b40 R14: ffffc90003267b00 R15: dffffc0000000000
FS: 00007fa076e6ac80(0000) GS:ffff8880b8f00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fea81383ad8 CR3: 0000000030288000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
__skb_try_recv_datagram+0x16b/0x4d0 net/core/datagram.c:263
__unix_dgram_recvmsg+0x20d/0xd70 net/unix/af_unix.c:2451
sock_recvmsg_nosec net/socket.c:1022 [inline]
sock_recvmsg net/socket.c:1040 [inline]
sock_read_iter+0x2bf/0x370 net/socket.c:1121
call_read_iter include/linux/fs.h:2259 [inline]
new_sync_read fs/read_write.c:389 [inline]
vfs_read+0x434/0x920 fs/read_write.c:470
ksys_read+0x143/0x240 fs/read_write.c:613
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fa076fba407
Code: 48 89 fa 4c 89 df e8 38 aa 00 00 8b 93 08 03 00 00 59 5e 48 83 f8 fc 74 1a 5b c3 0f 1f 84 00 00 00 00 00 48 8b 44 24 10 0f 05 <5b> c3 0f 1f 80 00 00 00 00 83 e2 39 83 fa 08 75 de e8 23 ff ff ff
RSP: 002b:00007ffcd63eb680 EFLAGS: 00000202 ORIG_RAX: 0000000000000000
RAX: ffffffffffffffda RBX: 00007fa076e6ac80 RCX: 00007fa076fba407
RDX: 00000000000000ff RSI: 000055e9f62bb950 RDI: 0000000000000000
RBP: 000055e9f62bb910 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 000055e9f62bb9a3
R13: 0000000000000000 R14: 000055e9f62bb950 R15: 000055e9bc0a6d98
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Aug 22, 2025, 7:02:38 PM
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: 0bc96de781b4 Linux 6.1.148
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=127e8062580000
kernel config: https://syzkaller.appspot.com/x/.config?x=5c8a5866886424a8
dashboard link: https://syzkaller.appspot.com/bug?extid=140ba3fddd5e22a27d02
compiler: Debian clang version 20.1.7 (++20250616065708+6146a88f6049-1~exp1~20250616065826.132), Debian LLD 20.1.7
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=167e8062580000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=17a2aa34580000
mounted in repro: https://storage.googleapis.com/syzbot-assets/424d2e283abe/mount_0.gz
fsck result: failed (log: https://syzkaller.appspot.com/x/fsck.log?x=15665062580000)

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+140ba3...@syzkaller.appspotmail.com

INFO: task syz.3.39:4742 blocked for more than 143 seconds.
Not tainted 6.1.148-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.39 state:D stack:24608 pid:4742 ppid:4394 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5244 [inline]
__schedule+0x10ec/0x40b0 kernel/sched/core.c:6561
schedule+0xb9/0x180 kernel/sched/core.c:6637
schedule_timeout+0x97/0x280 kernel/time/timer.c:1941
___down_common kernel/locking/semaphore.c:229 [inline]
__down_common+0x2e7/0x700 kernel/locking/semaphore.c:250
down+0x7c/0xd0 kernel/locking/semaphore.c:64
xfs_buf_lock+0x163/0x560 fs/xfs/xfs_buf.c:1120
xfs_buf_item_unpin+0x1c7/0x770 fs/xfs/xfs_buf_item.c:582
xfs_trans_committed_bulk+0x333/0x7f0 fs/xfs/xfs_trans.c:808
xlog_cil_committed+0x26c/0xe60 fs/xfs/xfs_log_cil.c:795
xlog_cil_process_committed+0x155/0x1a0 fs/xfs/xfs_log_cil.c:823
xlog_state_shutdown_callbacks+0x266/0x360 fs/xfs/xfs_log.c:538
xlog_force_shutdown+0x2c5/0x320 fs/xfs/xfs_log.c:3802
xfs_do_force_shutdown+0x27d/0x660 fs/xfs/xfs_fsops.c:540
xfs_fs_goingdown+0x6d/0x150 fs/xfs/xfs_fsops.c:-1
xfs_file_ioctl+0x1031/0x1590 fs/xfs/xfs_ioctl.c:2132
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl+0xfa/0x170 fs/ioctl.c:856
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f32a258ebe9
RSP: 002b:00007f32a3472038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f32a27b5fa0 RCX: 00007f32a258ebe9
RDX: 0000200000000080 RSI: 000000008004587d RDI: 0000000000000005
RBP: 00007f32a2611e19 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f32a27b6038 R14: 00007f32a27b5fa0 R15: 00007ffc00e44428
</TASK>
INFO: task syz.3.39:4794 blocked for more than 144 seconds.
Not tainted 6.1.148-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.39 state:D stack:26784 pid:4794 ppid:4394 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5244 [inline]
__schedule+0x10ec/0x40b0 kernel/sched/core.c:6561
schedule+0xb9/0x180 kernel/sched/core.c:6637
xlog_wait fs/xfs/xfs_log_priv.h:617 [inline]
xlog_wait_on_iclog+0x497/0x730 fs/xfs/xfs_log.c:890
xlog_force_lsn+0x557/0x9d0 fs/xfs/xfs_log.c:3337
__xfs_trans_commit+0x959/0xe00 fs/xfs/xfs_trans.c:1013
xfs_sync_sb_buf+0xe7/0x180 fs/xfs/libxfs/xfs_sb.c:1162
xfs_ioc_setlabel fs/xfs/xfs_ioctl.c:1827 [inline]
xfs_file_ioctl+0x1290/0x1590 fs/xfs/xfs_ioctl.c:1925
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl+0xfa/0x170 fs/ioctl.c:856
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f32a258ebe9
RSP: 002b:00007f32a3451038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f32a27b6090 RCX: 00007f32a258ebe9
RDX: 0000200000000100 RSI: 0000000041009432 RDI: 0000000000000004
RBP: 00007f32a2611e19 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f32a27b6128 R14: 00007f32a27b6090 R15: 00007ffc00e44428
</TASK>

Showing all locks held in the system:
2 locks held by kworker/u4:1/11:
1 lock held by rcu_tasks_kthre/12:
#0: ffffffff8cb2b770 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
#0: ffffffff8cb2bf90 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
2 locks held by kworker/1:1/26:
#0: ffff888024018538 ((wq_completion)xfs-sync/loop0){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc90000a2fd00 ((work_completion)(&(&log->l_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
1 lock held by khungtaskd/28:
#0: ffffffff8cb2ade0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
#0: ffffffff8cb2ade0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
#0: ffffffff8cb2ade0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6513
2 locks held by kworker/u4:3/46:
#0: ffff88814478f938 ((wq_completion)xfs-cil/loop0){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc90000b77d00 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
2 locks held by kworker/u4:4/1011:
2 locks held by getty/4028:
#0: ffff88814cd83098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
#1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x41b/0x1380 drivers/tty/n_tty.c:2198
2 locks held by kworker/1:15/4351:
#0: ffff88801fee1d38 ((wq_completion)xfs-sync/loop7){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc900032a7d00 ((work_completion)(&(&log->l_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
2 locks held by kworker/1:18/4354:
#0: ffff888017472138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc900032e7d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
2 locks held by kworker/0:8/4457:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc90003657d00 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
2 locks held by kworker/u4:8/4612:
1 lock held by udevd/4698:
1 lock held by syz.3.39/4794:
#0: ffff88807a1f0460 (sb_writers#13){.+.+}-{0:0}, at: mnt_want_write_file+0x5c/0x200 fs/namespace.c:437
1 lock held by syz.1.66/5109:
#0: ffff88807acba460 (sb_writers#13){.+.+}-{0:0}, at: mnt_want_write_file+0x5c/0x200 fs/namespace.c:437
1 lock held by syz.4.80/5266:
#0: ffff88805a89a460 (sb_writers#13){.+.+}-{0:0}, at: mnt_want_write_file+0x5c/0x200 fs/namespace.c:437
1 lock held by syz-executor/5358:
#0: ffff88805c7980e0 (&type->s_umount_key#53){+.+.}-{3:3}, at: deactivate_super+0xa0/0xd0 fs/super.c:362
1 lock held by syz.5.112/5634:
#0: ffff88807405a460 (sb_writers#13){.+.+}-{0:0}, at: mnt_want_write_file+0x5c/0x200 fs/namespace.c:437
1 lock held by syz.0.122/5755:
#0: ffff88805783e460 (sb_writers#13){.+.+}-{0:0}, at: mnt_want_write_file+0x5c/0x200 fs/namespace.c:437
2 locks held by syz-executor/5878:
#0: ffff8880589d60e0 (&type->s_umount_key#53){+.+.}-{3:3}, at: deactivate_super+0xa0/0xd0 fs/super.c:362
#1: ffffffff8cb30ab8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
#1: ffffffff8cb30ab8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x346/0x830 kernel/rcu/tree_exp.h:962
2 locks held by syz-executor/6012:
#0: ffff88801f6720e0 (&type->s_umount_key#53){+.+.}-{3:3}, at: deactivate_super+0xa0/0xd0 fs/super.c:362
#1: ffffffff8cb30ab8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
#1: ffffffff8cb30ab8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x346/0x830 kernel/rcu/tree_exp.h:962
2 locks held by kworker/u4:10/6015:
#0: ffff888074add138 ((wq_completion)xfs-cil/loop7){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc900046b7d00 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
1 lock held by syz.2.151/6086:
#0: ffff888069a8a460 (sb_writers#13){.+.+}-{0:0}, at: mnt_want_write_file+0x5c/0x200 fs/namespace.c:437
1 lock held by syz.7.210/6736:
#0: ffff8880284ba460 (sb_writers#13){.+.+}-{0:0}, at: mnt_want_write_file+0x5c/0x200 fs/namespace.c:437
2 locks held by syz.1.334/7584:

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted 6.1.148-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Call Trace:
<TASK>
dump_stack_lvl+0x168/0x22e lib/dump_stack.c:106
nmi_cpu_backtrace+0x3f4/0x470 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x1d4/0x450 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
watchdog+0xeee/0xf30 kernel/hung_task.c:377
kthread+0x29d/0x330 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 11 Comm: kworker/u4:1 Not tainted 6.1.148-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Workqueue: events_unbound nsim_dev_trap_report_work
RIP: 0010:pvclock_clocksource_read+0x6a/0x760 arch/x86/kernel/pvclock.c:68
Code: 84 24 90 00 00 00 48 89 4c 24 50 48 c1 e9 03 48 89 8c 24 88 00 00 00 49 8d 49 03 4c 89 c8 48 c1 e8 03 48 89 84 24 80 00 00 00 <48> 89 4c 24 48 48 c1 e9 03 48 89 4c 24 78 48 89 f0 48 c1 e8 03 48
RSP: 0018:ffffc90000107460 EFLAGS: 00000a02
RAX: 1ffffffff1f3e60b RBX: ffffc90000107580 RCX: ffffffff8f9f305b
RDX: 1ffffffff1f3e608 RSI: ffffffff8f9f305c RDI: ffffffff8f9f3040
RBP: ffffc900001075e8 R08: ffffffff8f9f3048 R09: ffffffff8f9f3058
R10: ffffffff8f9f3050 R11: ffffffff8f9f3043 R12: 000000000000000b
R13: dffffc0000000000 R14: 1ffffffff1f3e608 R15: ffff888019c5ee30
FS: 0000000000000000(0000) GS:ffff8880b8f00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fe9653fe000 CR3: 0000000056141000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
kvm_clock_read arch/x86/kernel/kvmclock.c:79 [inline]
kvm_sched_clock_read+0x14/0x40 arch/x86/kernel/kvmclock.c:91
sched_clock_cpu+0x6e/0x250 kernel/sched/clock.c:369
local_clock include/linux/sched/clock.h:84 [inline]
__set_page_owner_handle+0x1a9/0x3c0 mm/page_owner.c:174
__set_page_owner+0x41/0x60 mm/page_owner.c:195
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook+0x173/0x1a0 mm/page_alloc.c:2532
prep_new_page mm/page_alloc.c:2539 [inline]
get_page_from_freelist+0x1a26/0x1ac0 mm/page_alloc.c:4328
__alloc_pages+0x1df/0x4e0 mm/page_alloc.c:5614
alloc_slab_page+0x5d/0x160 mm/slub.c:1794
allocate_slab mm/slub.c:1939 [inline]
new_slab+0x87/0x2c0 mm/slub.c:1992
___slab_alloc+0xbc6/0x1220 mm/slub.c:3180
__slab_alloc mm/slub.c:3279 [inline]
slab_alloc_node mm/slub.c:3364 [inline]
__kmem_cache_alloc_node+0x1a0/0x260 mm/slub.c:3437
__do_kmalloc_node mm/slab_common.c:935 [inline]
__kmalloc_node_track_caller+0x9e/0x230 mm/slab_common.c:956
kmalloc_reserve net/core/skbuff.c:446 [inline]
__alloc_skb+0x22a/0x7e0 net/core/skbuff.c:515
alloc_skb include/linux/skbuff.h:1271 [inline]
nsim_dev_trap_skb_build drivers/net/netdevsim/dev.c:748 [inline]
nsim_dev_trap_report drivers/net/netdevsim/dev.c:805 [inline]
nsim_dev_trap_report_work+0x28f/0xaf0 drivers/net/netdevsim/dev.c:851
process_one_work+0x898/0x1160 kernel/workqueue.c:2292
worker_thread+0xaa2/0x1250 kernel/workqueue.c:2439
kthread+0x29d/0x330 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>


---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.