[v6.1] INFO: task hung in xfs_ilock (2)


syzbot

Mar 22, 2025, 1:42:27 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 344a09659766 Linux 6.1.131
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=15113004580000
kernel config: https://syzkaller.appspot.com/x/.config?x=14d5dbae75afa499
dashboard link: https://syzkaller.appspot.com/bug?extid=241ffda03886fa32f7fa
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: arm64

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/8c94453bbad3/disk-344a0965.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/6bf26518c790/vmlinux-344a0965.xz
kernel image: https://storage.googleapis.com/syzbot-assets/424da9a470eb/Image-344a0965.gz.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+241ffd...@syzkaller.appspotmail.com

INFO: task syz.2.119:5004 blocked for more than 143 seconds.
Not tainted 6.1.131-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.119 state:D stack:0 pid:5004 ppid:4304 flags:0x00000009
Call trace:
__switch_to+0x308/0x598 arch/arm64/kernel/process.c:553
context_switch kernel/sched/core.c:5243 [inline]
__schedule+0xef4/0x1d44 kernel/sched/core.c:6560
schedule+0xc4/0x170 kernel/sched/core.c:6636
schedule_preempt_disabled+0x18/0x2c kernel/sched/core.c:6695
rwsem_down_read_slowpath+0x534/0x858 kernel/locking/rwsem.c:1094
__down_read_common kernel/locking/rwsem.c:1261 [inline]
__down_read kernel/locking/rwsem.c:1274 [inline]
down_read_nested+0xb0/0x30c kernel/locking/rwsem.c:1646
xfs_ilock+0x1e0/0x4e4 fs/xfs/xfs_inode.c:206
xfs_ilock_for_write_fault fs/xfs/xfs_file.c:244 [inline]
__xfs_filemap_fault+0x43c/0xe0c fs/xfs/xfs_file.c:1363
xfs_filemap_page_mkwrite+0x28/0x38 fs/xfs/xfs_file.c:1420
do_page_mkwrite+0x144/0x37c mm/memory.c:3011
wp_page_shared+0x148/0x550 mm/memory.c:3360
do_wp_page+0xcbc/0xf44 mm/memory.c:3510
handle_pte_fault mm/memory.c:5049 [inline]
__handle_mm_fault mm/memory.c:5173 [inline]
handle_mm_fault+0x19a4/0x3d38 mm/memory.c:5294
faultin_page mm/gup.c:1026 [inline]
__get_user_pages+0x3b0/0x968 mm/gup.c:1250
faultin_vma_page_range+0x1d8/0x274 mm/gup.c:1670
madvise_populate mm/madvise.c:928 [inline]
madvise_vma_behavior mm/madvise.c:1037 [inline]
madvise_walk_vmas mm/madvise.c:1259 [inline]
do_madvise+0x1234/0x2f78 mm/madvise.c:1438
__do_sys_madvise mm/madvise.c:1451 [inline]
__se_sys_madvise mm/madvise.c:1449 [inline]
__arm64_sys_madvise+0xa4/0xc0 mm/madvise.c:1449
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2bc arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:140
do_el0_svc+0x58/0x13c arch/arm64/kernel/syscall.c:204
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:585

Showing all locks held in the system:
3 locks held by kworker/u4:1/11:
#0: ffff0000c2e4d938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x6bc/0x1484 kernel/workqueue.c:2265
#1: ffff80001d2e7c20 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x6fc/0x1484 kernel/workqueue.c:2267
#2: ffff0000c94980e0 (&type->s_umount_key#54){++++}-{3:3}, at: trylock_super+0x28/0xf8 fs/super.c:415
1 lock held by rcu_tasks_kthre/12:
#0: ffff800015cd79b0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x44/0xcf4 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
#0: ffff800015cd81b0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x44/0xcf4 kernel/rcu/tasks.h:517
1 lock held by khungtaskd/28:
#0: ffff800015cd77e0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0xc/0x44 include/linux/rcupdate.h:349
2 locks held by kworker/u4:3/55:
#0: ffff0000c0029138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x6bc/0x1484 kernel/workqueue.c:2265
#1: ffff80001d8e7c20 (connector_reaper_work){+.+.}-{0:0}, at: process_one_work+0x6fc/0x1484 kernel/workqueue.c:2267
2 locks held by getty/4055:
#0: ffff0000d6892098 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x3c/0x4c drivers/tty/tty_ldsem.c:340
#1: ffff80001d9102f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x414/0x1214 drivers/tty/n_tty.c:2198
2 locks held by kworker/u4:6/4358:
#0: ffff0000c0029138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x6bc/0x1484 kernel/workqueue.c:2265
#1: ffff8000216d7c20 ((reaper_work).work){+.+.}-{0:0}, at: process_one_work+0x6fc/0x1484 kernel/workqueue.c:2267
3 locks held by kworker/1:10/4462:
#0: ffff0000c0020938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x6bc/0x1484 kernel/workqueue.c:2265
#1: ffff8000229a7c20 (fqdir_free_work){+.+.}-{0:0}, at: process_one_work+0x6fc/0x1484 kernel/workqueue.c:2267
#2: ffff800015cdccc0 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x58/0x5c4 kernel/rcu/tree.c:4019
5 locks held by kworker/0:17/4595:
3 locks held by syz.2.119/5004:
#0: ffff0000d4419348 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_lock+0x28/0x74 include/linux/mmap_lock.h:117
#1: ffff0000c9498558 (sb_pagefaults#2){.+.+}-{0:0}, at: xfs_filemap_page_mkwrite+0x28/0x38 fs/xfs/xfs_file.c:1420
#2: ffff0000e2b404d8 (mapping.invalidate_lock#3){++++}-{3:3}, at: xfs_ilock+0x1e0/0x4e4 fs/xfs/xfs_inode.c:206
5 locks held by syz.2.119/5027:
#0: ffff0000c9498460 (sb_writers#16){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:3015 [inline]
#0: ffff0000c9498460 (sb_writers#16){.+.+}-{0:0}, at: vfs_fallocate+0x404/0x5b4 fs/open.c:322
#1: ffff0000e2b40338 (&sb->s_type->i_mutex_key#24){+.+.}-{3:3}, at: xfs_ilock+0x148/0x4e4 fs/xfs/xfs_inode.c:195
#2: ffff0000e2b404d8 (mapping.invalidate_lock#3){++++}-{3:3}, at: xfs_ilock+0x1b0/0x4e4 fs/xfs/xfs_inode.c:203
#3: ffff0000c9498650 (sb_internal#3){.+.+}-{0:0}, at: xfs_bmapi_convert_delalloc+0x21c/0x10d4 fs/xfs/libxfs/xfs_bmap.c:4507
#4: ffff0000e2b40118 (&xfs_nondir_ilock_class){++++}-{3:3}, at: mrupdate_nested fs/xfs/mrlock.h:36 [inline]
#4: ffff0000e2b40118 (&xfs_nondir_ilock_class){++++}-{3:3}, at: xfs_ilock+0x218/0x4e4 fs/xfs/xfs_inode.c:211
3 locks held by kworker/u4:25/5740:
#0: ffff0000c0845138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x6bc/0x1484 kernel/workqueue.c:2265
#1: ffff800024b67c20 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x6fc/0x1484 kernel/workqueue.c:2267
#2: ffff80001817d750 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x13c/0xaec net/core/net_namespace.c:594
1 lock held by syz.1.794/7424:
3 locks held by syz.4.796/7426:
2 locks held by syz.0.801/7438:

=============================================



---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Mar 23, 2025, 2:21:17 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 0c935c049b5c Linux 5.15.179
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1322c43f980000
kernel config: https://syzkaller.appspot.com/x/.config?x=bcb6af887426ce59
dashboard link: https://syzkaller.appspot.com/bug?extid=cbf779bf5941deb878da
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/75b060834e1e/disk-0c935c04.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/d42b06ae465d/vmlinux-0c935c04.xz
kernel image: https://storage.googleapis.com/syzbot-assets/d5217676cd35/bzImage-0c935c04.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+cbf779...@syzkaller.appspotmail.com

INFO: task syz.3.113:4741 blocked for more than 143 seconds.
Not tainted 5.15.179-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.113 state:D stack:25600 pid: 4741 ppid: 4172 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5029 [inline]
__schedule+0x12c4/0x45b0 kernel/sched/core.c:6375
schedule+0x11b/0x1f0 kernel/sched/core.c:6458
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6517
rwsem_down_read_slowpath+0x605/0xb40 kernel/locking/rwsem.c:1055
__down_read_common kernel/locking/rwsem.c:1239 [inline]
__down_read kernel/locking/rwsem.c:1252 [inline]
down_read_nested+0x9c/0x2f0 kernel/locking/rwsem.c:1624
xfs_ilock+0x1c0/0x390 fs/xfs/xfs_inode.c:195
__xfs_filemap_fault+0x3ec/0x950 fs/xfs/xfs_file.c:1345
do_page_mkwrite+0x1a9/0x440 mm/memory.c:2922
wp_page_shared+0x179/0x690 mm/memory.c:3259
handle_pte_fault mm/memory.c:4668 [inline]
__handle_mm_fault mm/memory.c:4785 [inline]
handle_mm_fault+0x2a3d/0x5960 mm/memory.c:4883
faultin_page mm/gup.c:976 [inline]
__get_user_pages+0x4ed/0x11d0 mm/gup.c:1197
faultin_vma_page_range+0x21f/0x2a0 mm/gup.c:1590
madvise_populate mm/madvise.c:856 [inline]
madvise_vma mm/madvise.c:1004 [inline]
do_madvise+0xbca/0x3470 mm/madvise.c:1207
__do_sys_madvise mm/madvise.c:1233 [inline]
__se_sys_madvise mm/madvise.c:1231 [inline]
__x64_sys_madvise+0xa1/0xb0 mm/madvise.c:1231
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f1cad9e1169
RSP: 002b:00007f1cab829038 EFLAGS: 00000246 ORIG_RAX: 000000000000001c
RAX: ffffffffffffffda RBX: 00007f1cadbfa080 RCX: 00007f1cad9e1169
RDX: 0000000000000017 RSI: 0000000000600000 RDI: 0000200000000000
RBP: 00007f1cada622a0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f1cadbfa080 R15: 00007ffe03e9d5a8
</TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/27:
#0: ffffffff8cb1f4e0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
2 locks held by kworker/u4:4/1278:
1 lock held by udevd/3546:
#0: ffff8880b8f3a318 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
2 locks held by dhcpcd/3837:
#0: ffffffff8dc2b968 (vlan_ioctl_mutex){+.+.}-{3:3}, at: sock_ioctl+0x526/0x770 net/socket.c:1219
#1: ffffffff8dc43ac8 (rtnl_mutex){+.+.}-{3:3}, at: vlan_ioctl_handler+0x114/0x9a0 net/8021q/vlan.c:555
2 locks held by getty/3930:
#0: ffff88814c7d7098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:252
#1: ffffc900025ae2e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6af/0x1db0 drivers/tty/n_tty.c:2158
3 locks held by kworker/1:3/4158:
#0: ffff88802bbc7d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc90002e5fd20 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
#2: ffffffff8dc43ac8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_dad_work+0xd8/0x16f0 net/ipv6/addrconf.c:4113
3 locks held by kworker/1:4/4214:
#0: ffff888141fbcd38 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc900045ffd20 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
#2: ffff8880248fd220 (&dev->mutex){....}-{3:3}, at: device_lock include/linux/device.h:760 [inline]
#2: ffff8880248fd220 (&dev->mutex){....}-{3:3}, at: hub_event+0x208/0x54c0 drivers/usb/core/hub.c:5781
2 locks held by kworker/1:7/4244:
#0: ffff888017472138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc9000466fd20 ((work_completion)(&rew.rew_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/0:7/4421:
5 locks held by syz.3.113/4719:
#0: ffff888079e50460 (sb_writers#21){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:3042 [inline]
#0: ffff888079e50460 (sb_writers#21){.+.+}-{0:0}, at: vfs_fallocate+0x4bd/0x6b0 fs/open.c:307
#1: ffff88805f8a1980 (&sb->s_type->i_mutex_key#30){+.+.}-{3:3}, at: xfs_ilock+0xec/0x390 fs/xfs/xfs_inode.c:184
#2: ffff88805f8a1b20 (mapping.invalidate_lock#4){++++}-{3:3}, at: xfs_ilock+0x17d/0x390 fs/xfs/xfs_inode.c:192
#3: ffff888079e50650 (sb_internal#2){.+.+}-{0:0}, at: xfs_bmapi_convert_delalloc+0x20f/0x1180 fs/xfs/libxfs/xfs_bmap.c:4573
#4: ffff88805f8a1768 (&xfs_nondir_ilock_class){++++}-{3:3}, at: xfs_bmapi_convert_delalloc+0x23c/0x1180 fs/xfs/libxfs/xfs_bmap.c:4578
3 locks held by syz.3.113/4741:
#0: ffff88802c41f128 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_lock include/linux/mmap_lock.h:117 [inline]
#0: ffff88802c41f128 (&mm->mmap_lock){++++}-{3:3}, at: do_madvise+0x37f/0x3470 mm/madvise.c:1174
#1: ffff888079e50558 (sb_pagefaults#3){.+.+}-{0:0}, at: do_page_mkwrite+0x1a9/0x440 mm/memory.c:2922
#2: ffff88805f8a1b20 (mapping.invalidate_lock#4){++++}-{3:3}, at: xfs_ilock+0x1c0/0x390 fs/xfs/xfs_inode.c:195
3 locks held by kworker/u4:12/4973:
#0: ffff888019bd8938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc9000374fd20 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
#2: ffff888079e500e0 (&type->s_umount_key#100){++++}-{3:3}, at: trylock_super+0x1b/0xf0 fs/super.c:418
3 locks held by kworker/u4:14/5145:
5 locks held by kworker/u4:20/5749:
#0: ffff8880175cd938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc900039dfd20 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
#2: ffffffff8dc37e10 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x166/0xc90 net/core/net_namespace.c:572
#3: ffff8880775313e8 (&wg->device_update_lock){+.+.}-{3:3}, at: wg_destruct+0x10c/0x2f0 drivers/net/wireguard/device.c:233
#4: ffffffff8cb23aa8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:322 [inline]
#4: ffffffff8cb23aa8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x350/0x740 kernel/rcu/tree_exp.h:845
3 locks held by kworker/0:16/5825:
#0: ffff88802bbc7d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc9000386fd20 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
#2: ffffffff8dc43ac8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_dad_work+0xd8/0x16f0 net/ipv6/addrconf.c:4113
3 locks held by kworker/0:17/5826:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc9000387fd20 (ser_release_work){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
#2: ffffffff8dc43ac8 (rtnl_mutex){+.+.}-{3:3}, at: ser_release+0x137/0x240 drivers/net/caif/caif_serial.c:307
1 lock held by syz-executor/6434:
#0: ffffffff8dc43ac8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
#0: ffffffff8dc43ac8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x956/0xef0 net/core/rtnetlink.c:5644
1 lock held by syz.0.638/6711:
2 locks held by syz.7.636/6708:
#0: ffff88805f92c410 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:787 [inline]
#0: ffff88805f92c410 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:648 [inline]
#0: ffff88805f92c410 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x98/0x230 net/socket.c:1336
#1: ffffffff8cb23aa8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:322 [inline]
#1: ffffffff8cb23aa8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x350/0x740 kernel/rcu/tree_exp.h:845

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 27 Comm: khungtaskd Not tainted 5.15.179-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2d0 lib/dump_stack.c:106
nmi_cpu_backtrace+0x46a/0x4a0 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x181/0x2a0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:212 [inline]
watchdog+0xe72/0xeb0 kernel/hung_task.c:369
kthread+0x3f6/0x4f0 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 4216 Comm: kworker/0:6 Not tainted 5.15.179-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Workqueue: xfs-buf/loop3 xfs_buf_ioend_work
RIP: 0010:__preempt_count_add kernel/rcu/tree.c:1119 [inline]
RIP: 0010:rcu_is_watching+0x4/0xa0 kernel/rcu/tree.c:1122
Code: 5d 41 5e 41 5f 5d c3 e8 2a b2 d4 08 41 f7 c4 00 02 00 00 75 b4 eb b3 e8 0a b2 d4 08 66 2e 0f 1f 84 00 00 00 00 00 41 57 41 56 <53> 65 ff 05 7c 22 97 7e e8 af c5 d4 08 89 c3 83 f8 08 73 72 49 bf
RSP: 0018:ffffc90000007c28 EFLAGS: 00000057
RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffffff81630798
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff8e0a9fa8
RBP: ffffc90000007d70 R08: dffffc0000000000 R09: fffffbfff1c153f6
R10: 0000000000000000 R11: dffffc0000000001 R12: 1ffff92000000f94
R13: ffffffff816fe471 R14: ffffc90000007da0 R15: dffffc0000000000
FS: 0000000000000000(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f6df6f8ed58 CR3: 000000002b604000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000097 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<IRQ>
trace_lock_release include/trace/events/lock.h:58 [inline]
lock_release+0xb9/0x9a0 kernel/locking/lockdep.c:5634
seqcount_lockdep_reader_access+0x10b/0x220 include/linux/seqlock.h:104
ktime_get+0x31/0x270 kernel/time/timekeeping.c:829
clockevents_program_event+0xe1/0x310 kernel/time/clockevents.c:326
hrtimer_interrupt+0x546/0x980 kernel/time/hrtimer.c:1827
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1097 [inline]
__sysvec_apic_timer_interrupt+0x13b/0x4b0 arch/x86/kernel/apic/apic.c:1114
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1108 [inline]
sysvec_apic_timer_interrupt+0x9b/0xc0 arch/x86/kernel/apic/apic.c:1108
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:676
RIP: 0010:console_trylock_spinning+0x36b/0x3f0 kernel/printk/printk.c:1909
Code: 0f 84 75 ff ff ff e8 14 1a 1a 00 fb 31 db eb 48 e8 0a 1a 1a 00 e8 95 7c d9 08 4d 85 ed 74 cd e8 fb 19 1a 00 fb bb 01 00 00 00 <48> c7 c7 a0 bd 9f 8c 31 f6 ba 01 00 00 00 31 c9 41 b8 01 00 00 00
RSP: 0018:ffffc9000461f740 EFLAGS: 00000293
RAX: ffffffff816682c5 RBX: 0000000000000001 RCX: ffff88806157bb80
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffc9000461f810 R08: ffffffff8166827e R09: fffffbfff2131e4c
R10: 0000000000000000 R11: dffffc0000000001 R12: 1ffff920008c3ee8
R13: 0000000000000200 R14: 0000000000000046 R15: dffffc0000000000
vprintk_emit+0xa6/0x150 kernel/printk/printk.c:2273
_printk+0xd1/0x120 kernel/printk/printk.c:2299
print_hex_dump+0x1a2/0x250 lib/hexdump.c:285
xfs_hex_dump+0x39/0x50 fs/xfs/xfs_message.c:118
xfs_buf_verifier_error+0x1bc/0x290 fs/xfs/xfs_error.c:418
xfs_agf_read_verify+0x1f8/0x2a0 fs/xfs/libxfs/xfs_alloc.c:2955
xfs_buf_ioend+0x26a/0x6e0 fs/xfs/xfs_buf.c:1263
process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
kthread+0x3f6/0x4f0 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>

syzbot

Jul 16, 2025, 10:17:22 PM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.

syzbot

Sep 20, 2025, 5:00:24 PM
to syzkaller...@googlegroups.com