[syzbot] Monthly xfs report (Oct 2025)


syzbot

Oct 29, 2025, 5:50:25 AM
to c...@kernel.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, syzkall...@googlegroups.com
Hello xfs maintainers/developers,

This is a 31-day syzbot report for the xfs subsystem.
All related reports/information can be found at:
https://syzkaller.appspot.com/upstream/s/xfs

During the period, 1 new issue was detected and 0 were fixed.
In total, 16 issues are still open and 27 have already been fixed.

Some of the still happening issues:

Ref Crashes Repro Title
<1> 2779 Yes INFO: task hung in sync_inodes_sb (5)
https://syzkaller.appspot.com/bug?extid=30476ec1b6dc84471133
<2> 254 Yes KASAN: slab-use-after-free Read in xfs_inode_item_push
https://syzkaller.appspot.com/bug?extid=1a28995e12fd13faa44e
<3> 108 Yes INFO: task hung in xfs_buf_item_unpin (2)
https://syzkaller.appspot.com/bug?extid=837bcd54843dd6262f2f
<4> 53 Yes INFO: task hung in vfs_setxattr (7)
https://syzkaller.appspot.com/bug?extid=3d0a18cd22695979a7c6
<5> 17 Yes KASAN: slab-use-after-free Read in xfs_buf_rele (4)
https://syzkaller.appspot.com/bug?extid=0391d34e801643e2809b
<6> 13 Yes KASAN: slab-out-of-bounds Read in xlog_cksum
https://syzkaller.appspot.com/bug?extid=9f6d080dece587cfdd4c
<7> 8 Yes INFO: task hung in xfs_buf_get_map
https://syzkaller.appspot.com/bug?extid=d74d844bdcee0902b28a
<8> 3 Yes INFO: task hung in xlog_force_lsn (2)
https://syzkaller.appspot.com/bug?extid=c27dee924f3271489c82
<9> 1 Yes INFO: task hung in xfs_file_fsync
https://syzkaller.appspot.com/bug?extid=9bc8c0586b39708784d9

---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

To disable reminders for individual bugs, reply with the following command:
#syz set <Ref> no-reminders

To change a bug's subsystems, reply with:
#syz set <Ref> subsystems: new-subsystem

You may send multiple commands in a single email message.
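For example, to stop reminders for the xfs_buf_rele bug above, a reply whose body contains only the line below would suffice (the Ref number is taken from the table in this report):

```
#syz set <5> no-reminders
```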

Christoph Hellwig

Oct 30, 2025, 3:11:39 AM
to syzbot+0391d3...@syzkaller.appspotmail.com, linu...@vger.kernel.org, syzkall...@googlegroups.com
#syz test git://git.infradead.org/users/hch/xfs.git xfs-buf-hash

syzbot

Oct 30, 2025, 3:42:05 AM
to h...@infradead.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot tried to test the proposed patch but the build/boot failed:

failed to checkout kernel repo git://git.infradead.org/users/hch/xfs.git/xfs-buf-hash: failed to run ["git" "fetch" "--force" "679bdfc056221ae86d16104d6de6223afaafa4b7" "xfs-buf-hash"]: exit status 128


Tested on:

commit: [unknown]
git tree: git://git.infradead.org/users/hch/xfs.git xfs-buf-hash
kernel config: https://syzkaller.appspot.com/x/.config?x=8c5ac3d8b8abfcb
dashboard link: https://syzkaller.appspot.com/bug?extid=0391d34e801643e2809b
compiler:
userspace arch: arm64

Note: no patches were applied.

Christoph Hellwig

Oct 30, 2025, 4:01:45 AM
to syzbot+0391d3...@syzkaller.appspotmail.com, linu...@vger.kernel.org, syzkall...@googlegroups.com
#syz test git://git.infradead.org/users/hch/misc.git xfs-buf-hash

syzbot

Oct 30, 2025, 4:47:06 AM
to h...@infradead.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
BUG: MAX_LOCKDEP_CHAINS too low!

BUG: MAX_LOCKDEP_CHAINS too low!
turning off the locking correctness validator.
CPU: 1 UID: 0 PID: 2577 Comm: kworker/u8:7 Not tainted syzkaller #0 PREEMPT
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/30/2025
Workqueue: xfs-cil/loop0 xlog_cil_push_work
Call trace:
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:499 (C)
__dump_stack+0x30/0x40 lib/dump_stack.c:94
dump_stack_lvl+0xd8/0x12c lib/dump_stack.c:120
dump_stack+0x1c/0x28 lib/dump_stack.c:129
add_chain_cache kernel/locking/lockdep.c:-1 [inline]
lookup_chain_cache_add kernel/locking/lockdep.c:3855 [inline]
validate_chain kernel/locking/lockdep.c:3876 [inline]
__lock_acquire+0xf9c/0x30a4 kernel/locking/lockdep.c:5237
lock_acquire+0x14c/0x2e0 kernel/locking/lockdep.c:5868
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0x5c/0x7c kernel/locking/spinlock.c:162
__wake_up_common_lock kernel/sched/wait.c:124 [inline]
__wake_up+0x40/0x1a8 kernel/sched/wait.c:146
xlog_cil_set_ctx_write_state+0x2a8/0x310 fs/xfs/xfs_log_cil.c:997
xlog_write+0x1fc/0xe94 fs/xfs/xfs_log.c:2252
xlog_cil_write_commit_record fs/xfs/xfs_log_cil.c:1118 [inline]
xlog_cil_push_work+0x19ec/0x1f74 fs/xfs/xfs_log_cil.c:1434
process_one_work+0x7e8/0x155c kernel/workqueue.c:3236
process_scheduled_works kernel/workqueue.c:3319 [inline]
worker_thread+0x958/0xed8 kernel/workqueue.c:3400
kthread+0x5fc/0x75c kernel/kthread.c:463
ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:844


Tested on:

commit: af1722bb xfs: switch (back) to a per-buftarg buffer hash
git tree: git://git.infradead.org/users/hch/misc.git xfs-buf-hash
console output: https://syzkaller.appspot.com/x/log.txt?x=1110bfe2580000
kernel config: https://syzkaller.appspot.com/x/.config?x=39f8a155475bc42d
dashboard link: https://syzkaller.appspot.com/bug?extid=0391d34e801643e2809b
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8

Christoph Hellwig

1:06 AM (13 hours ago)
to syzbot+0391d3...@syzkaller.appspotmail.com, linu...@vger.kernel.org, syzkall...@googlegroups.com

syzbot

2:38 AM (12 hours ago)
to h...@infradead.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
BUG: MAX_LOCKDEP_CHAINS too low!

BUG: MAX_LOCKDEP_CHAINS too low!
turning off the locking correctness validator.
CPU: 0 UID: 0 PID: 1610 Comm: kworker/u8:6 Not tainted syzkaller #0 PREEMPT
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/03/2025
Workqueue: xfs_iwalk-13497 xfs_pwork_work
Call trace:
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:499 (C)
__dump_stack+0x30/0x40 lib/dump_stack.c:94
dump_stack_lvl+0xd8/0x12c lib/dump_stack.c:120
dump_stack+0x1c/0x28 lib/dump_stack.c:129
add_chain_cache kernel/locking/lockdep.c:-1 [inline]
lookup_chain_cache_add kernel/locking/lockdep.c:3855 [inline]
validate_chain kernel/locking/lockdep.c:3876 [inline]
__lock_acquire+0xf9c/0x30a4 kernel/locking/lockdep.c:5237
lock_acquire+0x140/0x2e0 kernel/locking/lockdep.c:5868
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0x5c/0x7c kernel/locking/spinlock.c:162
debug_object_activate+0x7c/0x460 lib/debugobjects.c:818
debug_timer_activate kernel/time/timer.c:793 [inline]
__mod_timer+0x8c4/0xd00 kernel/time/timer.c:1124
add_timer_global+0x88/0xc0 kernel/time/timer.c:1283
__queue_delayed_work+0x218/0x2c8 kernel/workqueue.c:2520
queue_delayed_work_on+0xe4/0x194 kernel/workqueue.c:2555
queue_delayed_work include/linux/workqueue.h:684 [inline]
xfs_reclaim_work_queue+0x154/0x244 fs/xfs/xfs_icache.c:211
xfs_perag_set_inode_tag+0x19c/0x4bc fs/xfs/xfs_icache.c:263
xfs_inodegc_set_reclaimable+0x1e0/0x444 fs/xfs/xfs_icache.c:1917
xfs_inode_mark_reclaimable+0x2c8/0x10f8 fs/xfs/xfs_icache.c:2252
xfs_fs_destroy_inode+0x2fc/0x618 fs/xfs/xfs_super.c:712
destroy_inode fs/inode.c:396 [inline]
evict+0x7cc/0xa74 fs/inode.c:861
iput_final fs/inode.c:1954 [inline]
iput+0xc54/0xfdc fs/inode.c:2006
xfs_irele+0xd0/0x2ac fs/xfs/xfs_inode.c:2662
xfs_qm_dqusage_adjust+0x4f4/0x5b0 fs/xfs/xfs_qm.c:1411
xfs_iwalk_ag_recs+0x404/0x7c8 fs/xfs/xfs_iwalk.c:209
xfs_iwalk_run_callbacks+0x1c0/0x3e8 fs/xfs/xfs_iwalk.c:370
xfs_iwalk_ag+0x6ac/0x82c fs/xfs/xfs_iwalk.c:473
xfs_iwalk_ag_work+0xf8/0x1a0 fs/xfs/xfs_iwalk.c:620
xfs_pwork_work+0x80/0x1a4 fs/xfs/xfs_pwork.c:47
process_one_work+0x7c0/0x1558 kernel/workqueue.c:3257
process_scheduled_works kernel/workqueue.c:3340 [inline]
worker_thread+0x958/0xed8 kernel/workqueue.c:3421
kthread+0x5fc/0x75c kernel/kthread.c:463
ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:844


Tested on:

commit: 855e81db xfs: switch (back) to a per-buftarg buffer hash
git tree: git://git.infradead.org/users/hch/xfs.git xfs-buf-hash
console output: https://syzkaller.appspot.com/x/log.txt?x=162bb63a580000
kernel config: https://syzkaller.appspot.com/x/.config?x=1707867b02964a26

Christoph Hellwig

2:44 AM (12 hours ago)
to syzbot+0391d3...@syzkaller.appspotmail.com, linu...@vger.kernel.org, syzkall...@googlegroups.com

syzbot

3:34 AM (11 hours ago)
to h...@infradead.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
BUG: MAX_LOCKDEP_KEYS too low!

BUG: MAX_LOCKDEP_KEYS too low!
turning off the locking correctness validator.
CPU: 1 UID: 0 PID: 7123 Comm: syz-executor Not tainted syzkaller #0 PREEMPT
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/03/2025
Call trace:
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:499 (C)
__dump_stack+0x30/0x40 lib/dump_stack.c:94
dump_stack_lvl+0xd8/0x12c lib/dump_stack.c:120
dump_stack+0x1c/0x28 lib/dump_stack.c:129
register_lock_class+0x310/0x348 kernel/locking/lockdep.c:1332
__lock_acquire+0xbc/0x30a4 kernel/locking/lockdep.c:5112
lock_acquire+0x140/0x2e0 kernel/locking/lockdep.c:5868
touch_wq_lockdep_map+0xa8/0x164 kernel/workqueue.c:3940
__flush_workqueue+0xfc/0x109c kernel/workqueue.c:3982
drain_workqueue+0xa4/0x310 kernel/workqueue.c:4146
destroy_workqueue+0xb4/0xd90 kernel/workqueue.c:5903
xfs_destroy_mount_workqueues+0xac/0xdc fs/xfs/xfs_super.c:649
xfs_fs_put_super+0x128/0x144 fs/xfs/xfs_super.c:1262
generic_shutdown_super+0x12c/0x2b8 fs/super.c:643
kill_block_super+0x44/0x90 fs/super.c:1722
xfs_kill_sb+0x20/0x58 fs/xfs/xfs_super.c:2297
deactivate_locked_super+0xc4/0x12c fs/super.c:474
deactivate_super+0xe0/0x100 fs/super.c:507
cleanup_mnt+0x31c/0x3ac fs/namespace.c:1318
__cleanup_mnt+0x20/0x30 fs/namespace.c:1325
task_work_run+0x1dc/0x260 kernel/task_work.c:233
resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
__exit_to_user_mode_loop kernel/entry/common.c:44 [inline]
exit_to_user_mode_loop+0x10c/0x18c kernel/entry/common.c:75
__exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
exit_to_user_mode_prepare_legacy include/linux/irq-entry-common.h:242 [inline]
arm64_exit_to_user_mode arch/arm64/kernel/entry-common.c:81 [inline]
el0_svc+0x17c/0x26c arch/arm64/kernel/entry-common.c:725
el0t_64_sync_handler+0x84/0x12c arch/arm64/kernel/entry-common.c:743
el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:596
XFS (loop0): Unmounting Filesystem c496e05e-540d-4c72-b591-04d79d8b4eeb
[previous line repeated 35 more times in the console output]


Tested on:

commit: 3e548540 increase LOCKDEP_CHAINS_BITS
git tree: git://git.infradead.org/users/hch/xfs.git xfs-buf-hash
console output: https://syzkaller.appspot.com/x/log.txt?x=101b0d22580000
kernel config: https://syzkaller.appspot.com/x/.config?x=6c6138f827b10ea4
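For context, MAX_LOCKDEP_CHAINS is sized by the LOCKDEP_CHAINS_BITS Kconfig option (the limit is 1 << CONFIG_LOCKDEP_CHAINS_BITS, default 16, defined in lib/Kconfig.debug), so the "too low" condition can be pushed back by raising that value in the test kernel config. A sketch of such a tweak follows; the exact value used in the commit above is not shown in this thread:

```
# .config fragment (hypothetical value): quadruple the number of
# lock chain slots relative to the default of 1 << 16.
CONFIG_LOCKDEP_CHAINS_BITS=18
```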

Christoph Hellwig

3:37 AM (11 hours ago)
to syzbot, h...@infradead.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, syzkall...@googlegroups.com
So I'm not sure what this test does that it always triggers the lockdep
keys, but that makes it impossible to validate the original xfs report.

Is there a way to force running syzbot reproducers without lockdep?

Note that I've also had it running locally for quite a while, and even
with lockdep enabled I'm somehow not hitting the lockdep splat.
Although that is using my normal debug config and not the provided
one.
---end quoted text---

Aleksandr Nogikh

3:53 AM (11 hours ago)
to Christoph Hellwig, syzbot, linux-...@vger.kernel.org, linu...@vger.kernel.org, syzkall...@googlegroups.com, Dmitry Vyukov
On Mon, Jan 19, 2026 at 9:37 AM Christoph Hellwig <h...@infradead.org> wrote:
>
> So I'm not sure what this test does that it always triggers the lockdep
> keys, but that makes it impossible to validate the original xfs report.
>
> Is there a way to force running syzbot reproducers without lockdep?

Not directly, but you could explicitly modify lockdep's Kconfig in
your test patch to disable lockdep entirely.
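
One minimal way to do that is a one-line change to lib/Kconfig.debug that makes the lockdep-enabling options unselectable, so the provided config can no longer turn them on. This is a sketch only, assuming the current mainline Kconfig layout, and is not the patch that was actually sent in this thread:

```diff
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
 config PROVE_LOCKING
 	bool "Lock debugging: prove locking correctness"
-	depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
+	depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT && BROKEN
```

Note that crudely disabling lockdep can leave other debug options referencing lockdep internals, which is consistent with the build failure reported later in this thread.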

>
> Note that I've also had it running locally for quite a while, and even
> with lockdep enabled I'm somehow not hitting the lockdep splat.
> Although that is using my normal debug config and not the provided
> one.

Hmm, yes, that sounds weird.

I wonder if it's because we run the reproducers in threaded mode when
handling #syz test commands on the syzbot side, which leads to even
more syscalls being executed in parallel. Or the system just got lucky
once when it was generating the reproducer - overall, "BUG:
MAX_LOCKDEP_KEYS too low!" [1] seems to be a popular sink for
different reproducers on our side :(

[1] https://syzkaller.appspot.com/bug?extid=a70a6358abd2c3f9550f

Christoph Hellwig

4:03 AM (10 hours ago)
to Aleksandr Nogikh, Christoph Hellwig, syzbot, linux-...@vger.kernel.org, linu...@vger.kernel.org, syzkall...@googlegroups.com, Dmitry Vyukov
On Mon, Jan 19, 2026 at 09:53:18AM +0100, Aleksandr Nogikh wrote:
> On Mon, Jan 19, 2026 at 9:37 AM Christoph Hellwig <h...@infradead.org> wrote:
> >
> > So I'm not sure what this test does that it always triggers the lockdep
> > keys, but that makes it impossible to validate the original xfs report.
> >
> > Is there a way to force running syzbot reproducers without lockdep?
>
> Not directly, but you could explicitly modify lockdep's Kconfig in
> your test patch to disable lockdep entirely.

Alright, I'll give it a try.

Christoph Hellwig

4:03 AM (10 hours ago)
to syzbot+0391d3...@syzkaller.appspotmail.com, linu...@vger.kernel.org, syzkall...@googlegroups.com

syzbot

4:29 AM (10 hours ago)
to h...@infradead.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot tried to test the proposed patch but the build/boot failed:

./include/linux/srcu.h:197:2: error: call to undeclared function 'lock_sync'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
./include/linux/semaphore.h:52:28: error: field designator 'name' does not refer to any field in type 'struct lockdep_map'
./include/linux/semaphore.h:52:28: error: field designator 'wait_type_inner' does not refer to any field in type 'struct lockdep_map'


Tested on:

commit: 9f73447f disable lockdep
git tree: git://git.infradead.org/users/hch/xfs.git xfs-buf-hash
kernel config: https://syzkaller.appspot.com/x/.config?x=8c5ac3d8b8abfcb

Christoph Hellwig

9:45 AM (5 hours ago)
to syzbot+0391d3...@syzkaller.appspotmail.com, linu...@vger.kernel.org, syzkall...@googlegroups.com

syzbot

10:17 AM (4 hours ago)
to h...@infradead.org, linux-...@vger.kernel.org, linu...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch and the reproducer did not trigger any issue:

Reported-by: syzbot+0391d3...@syzkaller.appspotmail.com
Tested-by: syzbot+0391d3...@syzkaller.appspotmail.com

Tested on:

commit: 5dc79b07 disable lockdep
git tree: git://git.infradead.org/users/hch/xfs.git xfs-buf-hash
console output: https://syzkaller.appspot.com/x/log.txt?x=16604bfc580000
kernel config: https://syzkaller.appspot.com/x/.config?x=3433733714e92ec3
dashboard link: https://syzkaller.appspot.com/bug?extid=0391d34e801643e2809b
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
userspace arch: arm64

Note: no patches were applied.
Note: testing is done by a robot and is best-effort only.