[v6.6] INFO: task hung in do_journal_end

syzbot

Jan 15, 2026, 9:03:22 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: c596736dadab Linux 6.6.120
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1172e39a580000
kernel config: https://syzkaller.appspot.com/x/.config?x=691a6769a86ac817
dashboard link: https://syzkaller.appspot.com/bug?extid=fffc4f0b5a2308216b09
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=168ff052580000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=178c339a580000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/855c94eb3eef/disk-c596736d.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/b7510b30b774/vmlinux-c596736d.xz
kernel image: https://storage.googleapis.com/syzbot-assets/3ce7fe4f6991/bzImage-c596736d.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/39312f8a2efa/mount_0.gz
fsck result: failed (log: https://syzkaller.appspot.com/x/fsck.log?x=138c339a580000)

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+fffc4f...@syzkaller.appspotmail.com

INFO: task syz.0.20:5940 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.20 state:D stack:24840 pid:5940 ppid:5879 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5381 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6700
schedule+0xbd/0x170 kernel/sched/core.c:6774
schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6833
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0x6b7/0xcc0 kernel/locking/mutex.c:747
reiserfs_write_lock_nested+0x60/0xd0 fs/reiserfs/lock.c:78
reiserfs_mutex_lock_safe fs/reiserfs/reiserfs.h:814 [inline]
lock_journal fs/reiserfs/journal.c:534 [inline]
do_journal_end+0x3ba/0x4860 fs/reiserfs/journal.c:4032
reiserfs_create+0x5ec/0x680 fs/reiserfs/namei.c:693
lookup_open fs/namei.c:3496 [inline]
open_last_lookups fs/namei.c:3564 [inline]
path_openat+0x1277/0x3190 fs/namei.c:3794
do_filp_open+0x1c5/0x3d0 fs/namei.c:3824
do_sys_openat2+0x12c/0x1c0 fs/open.c:1421
do_sys_open fs/open.c:1436 [inline]
__do_sys_openat fs/open.c:1452 [inline]
__se_sys_openat fs/open.c:1447 [inline]
__x64_sys_openat+0x139/0x160 fs/open.c:1447
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f61b1f8f749
RSP: 002b:00007f61b2d54038 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f61b21e6090 RCX: 00007f61b1f8f749
RDX: 000000000000275a RSI: 0000200000000140 RDI: ffffffffffffff9c
RBP: 00007f61b2013f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f61b21e6128 R14: 00007f61b21e6090 R15: 00007ffd811075c8
</TASK>
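
[Editor's note] For orientation: the blocked task is in lock_journal(), which takes the journal mutex through reiserfs_mutex_lock_safe(). That helper drops the per-superblock reiserfs write lock (&sbi->lock), sleeps on the target mutex, then re-takes the write lock. The sketch below is a condensed paraphrase of that pattern from fs/reiserfs/reiserfs.h and fs/reiserfs/journal.c, not the verbatim 6.6 source; exact details may differ.

    /*
     * Condensed paraphrase of the path the blocked task is on
     * (reiserfs_mutex_lock_safe() and lock_journal()); not verbatim source.
     */
    static inline void reiserfs_mutex_lock_safe(struct mutex *m,
                                                struct super_block *s)
    {
            int depth;

            depth = reiserfs_write_unlock_nested(s); /* drop &sbi->lock   */
            mutex_lock(m);                           /* may block here    */
            reiserfs_write_lock_nested(s, depth);    /* re-take &sbi->lock */
    }

    static inline void lock_journal(struct super_block *sb)
    {
            /* j_mutex is the &journal->j_mutex seen in the lock dump below */
            reiserfs_mutex_lock_safe(&SB_JOURNAL(sb)->j_mutex, sb);
    }

So the task in do_journal_end() already owns &journal->j_mutex and is now waiting to re-acquire &sbi->lock, which the lock dump below shows held by several reiserfs workers.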

Showing all locks held in the system:
3 locks held by kworker/0:0/8:
#0: ffff88801ef06d38 ((wq_completion)reiserfs/loop2){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff88801ef06d38 ((wq_completion)reiserfs/loop2){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc900000d7d00 ((work_completion)(&(&journal->j_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc900000d7d00 ((work_completion)(&(&journal->j_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffff88807ab7c090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock+0x79/0xd0 fs/reiserfs/lock.c:27
3 locks held by kworker/u4:1/12:
#0: ffff8880b8e3c018 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:559
#1: ffff8880182ee418 (&p->pi_lock){-.-.}-{2:2}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffff8880182ee418 (&p->pi_lock){-.-.}-{2:2}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffff8880b8e29598 (&base->lock){-.-.}-{2:2}, at: lock_timer_base+0x123/0x270 kernel/time/timer.c:999
4 locks held by kworker/1:0/23:
#0: ffff888017871138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017871138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc900001d7d00 ((work_completion)(&(&sbi->old_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc900001d7d00 ((work_completion)(&(&sbi->old_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffff88807c86c0e0 (&type->s_umount_key#25){++++}-{3:3}, at: flush_old_commits+0xcc/0x2f0 fs/reiserfs/super.c:97
#3: ffff88807ab7c090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock+0x79/0xd0 fs/reiserfs/lock.c:27
4 locks held by kworker/1:1/27:
#0: ffff888031930938 ((wq_completion)reiserfs/loop1){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888031930938 ((wq_completion)reiserfs/loop1){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90000a2fd00 ((work_completion)(&(&journal->j_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90000a2fd00 ((work_completion)(&(&journal->j_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffff88823bd16e90 (&jl->j_commit_mutex){+.+.}-{3:3}, at: reiserfs_mutex_lock_safe fs/reiserfs/reiserfs.h:813 [inline]
#2: ffff88823bd16e90 (&jl->j_commit_mutex){+.+.}-{3:3}, at: flush_commit_list+0x6c8/0x1d80 fs/reiserfs/journal.c:1007
#3: ffff88807d0f7090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock_nested+0x60/0xd0 fs/reiserfs/lock.c:78
1 lock held by khungtaskd/29:
#0: ffffffff8cd2ffa0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#0: ffffffff8cd2ffa0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#0: ffffffff8cd2ffa0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
4 locks held by kworker/0:2/787:
#0: ffff888017871138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017871138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90003757d00 ((work_completion)(&(&sbi->old_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90003757d00 ((work_completion)(&(&sbi->old_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffff88802c51e0e0 (&type->s_umount_key#25){++++}-{3:3}, at: flush_old_commits+0xcc/0x2f0 fs/reiserfs/super.c:97
#3: ffff888030795090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock+0x79/0xd0 fs/reiserfs/lock.c:27
3 locks held by kworker/u4:6/1075:
2 locks held by kworker/0:3/5154:
#0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc900033a7d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc900033a7d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
2 locks held by getty/5529:
#0: ffff88814cc5c0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc9000326e2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x425/0x1380 drivers/tty/n_tty.c:2217
4 locks held by kworker/1:3/5805:
#0: ffff888017871138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017871138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc9000461fd00 ((work_completion)(&(&sbi->old_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc9000461fd00 ((work_completion)(&(&sbi->old_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffff88807bdb80e0 (&type->s_umount_key#25){++++}-{3:3}, at: flush_old_commits+0xcc/0x2f0 fs/reiserfs/super.c:97
#3: ffff88807d0f7090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock+0x79/0xd0 fs/reiserfs/lock.c:27
3 locks held by syz.0.20/5938:
4 locks held by syz.0.20/5940:
#0: ffff88802c51e418 (sb_writers#13){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:412
#1: ffff88805f66c530 (&type->i_mutex_dir_key#8){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
#1: ffff88805f66c530 (&type->i_mutex_dir_key#8){+.+.}-{3:3}, at: open_last_lookups fs/namei.c:3561 [inline]
#1: ffff88805f66c530 (&type->i_mutex_dir_key#8){+.+.}-{3:3}, at: path_openat+0x7c6/0x3190 fs/namei.c:3794
#2: ffffc9000343e0f0 (&journal->j_mutex){+.+.}-{3:3}, at: reiserfs_mutex_lock_safe fs/reiserfs/reiserfs.h:813 [inline]
#2: ffffc9000343e0f0 (&journal->j_mutex){+.+.}-{3:3}, at: lock_journal fs/reiserfs/journal.c:534 [inline]
#2: ffffc9000343e0f0 (&journal->j_mutex){+.+.}-{3:3}, at: do_journal_end+0x3b0/0x4860 fs/reiserfs/journal.c:4032
#3: ffff888030795090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock_nested+0x60/0xd0 fs/reiserfs/lock.c:78
3 locks held by syz.1.39/6043:
3 locks held by syz.1.39/6045:
#0: ffff88807bdb8418 (sb_writers#13){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:412
#1: ffff88805f669030 (&type->i_mutex_dir_key#8){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
#1: ffff88805f669030 (&type->i_mutex_dir_key#8){+.+.}-{3:3}, at: open_last_lookups fs/namei.c:3561 [inline]
#1: ffff88805f669030 (&type->i_mutex_dir_key#8){+.+.}-{3:3}, at: path_openat+0x7c6/0x3190 fs/namei.c:3794
#2: ffff88807d0f7090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock_nested+0x60/0xd0 fs/reiserfs/lock.c:78
3 locks held by syz.2.60/6155:
5 locks held by udevd/6161:
2 locks held by syz.3.76/6249:

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 29 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
nmi_cpu_backtrace+0x39b/0x3d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x2f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
watchdog+0xf41/0xf80 kernel/hung_task.c:379
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 6043 Comm: syz.1.39 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
RIP: 0010:get_current arch/x86/include/asm/current.h:41 [inline]
RIP: 0010:__sanitizer_cov_trace_pc+0x8/0x60 kernel/kcov.c:215
Code: 00 00 f3 0f 1e fa 53 48 89 fb e8 13 00 00 00 48 8b 3d 1c 91 c4 0c 48 89 de 5b e9 13 9f 56 00 cc cc cc f3 0f 1e fa 48 8b 04 24 <65> 48 8b 0d b0 0a 7e 7e 65 8b 15 b1 0a 7e 7e 81 e2 00 01 ff 00 74
RSP: 0018:ffffc900037f6b48 EFLAGS: 00000246
RAX: ffffffff8228094c RBX: 0000000000000000 RCX: dffffc0000000000
RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff88807b95c030
RBP: ffffc900037f6ca0 R08: 0000000000000000 R09: 00000000ffffffff
R10: 0000000000000070 R11: 0000000000000000 R12: ffff88807b95c030
R13: 0000000000000000 R14: ffffffff8ceced60 R15: 1ffff1100f72b806
FS: 00007f62b1eac6c0(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000020000001f900 CR3: 000000002fef0000 CR4: 00000000003506f0
Call Trace:
<TASK>
sd_part_size+0xc/0x30 fs/reiserfs/item_ops.c:81
get_num_ver+0x4a7/0xfe0 fs/reiserfs/fix_node.c:475
ip_check_balance fs/reiserfs/fix_node.c:1523 [inline]
check_balance fs/reiserfs/fix_node.c:2083 [inline]
fix_nodes+0x2854/0x82e0 fs/reiserfs/fix_node.c:2636
reiserfs_paste_into_item+0x5ce/0x7f0 fs/reiserfs/stree.c:2128
reiserfs_get_block+0x1bd3/0x3ed0 fs/reiserfs/inode.c:1069
__block_write_begin_int+0x566/0x1ad0 fs/buffer.c:2124
reiserfs_write_begin+0x20a/0x4c0 fs/reiserfs/inode.c:2771
generic_perform_write+0x2fb/0x5b0 mm/filemap.c:4031
generic_file_write_iter+0xaf/0x2e0 mm/filemap.c:4152
call_write_iter include/linux/fs.h:2018 [inline]
new_sync_write fs/read_write.c:491 [inline]
vfs_write+0x43b/0x940 fs/read_write.c:584
ksys_pwrite64 fs/read_write.c:699 [inline]
__do_sys_pwrite64 fs/read_write.c:709 [inline]
__se_sys_pwrite64 fs/read_write.c:706 [inline]
__x64_sys_pwrite64+0x195/0x220 fs/read_write.c:706
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f62b0f8f749
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f62b1eac038 EFLAGS: 00000246 ORIG_RAX: 0000000000000012
RAX: ffffffffffffffda RBX: 00007f62b11e5fa0 RCX: 00007f62b0f8f749
RDX: 0000000000000001 RSI: 000020000001f900 RDI: 0000000000000005
RBP: 00007f62b1013f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000008000c63 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f62b11e6038 R14: 00007f62b11e5fa0 R15: 00007fff337cf978
</TASK>
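
[Editor's note] Most entries in the lock dump are the per-superblock reiserfs write lock (&sbi->lock), a mutex with manual owner/depth tracking so it can be dropped and re-taken across sleeps. A rough sketch of that scheme, paraphrased from fs/reiserfs/lock.c (not the verbatim 6.6 source, and assuming the usual REISERFS_SB() helper):

    /* Rough paraphrase of fs/reiserfs/lock.c; not the verbatim 6.6 source. */
    void reiserfs_write_lock(struct super_block *s)
    {
            struct reiserfs_sb_info *sb_i = REISERFS_SB(s);

            if (sb_i->lock_owner != current) {
                    mutex_lock(&sb_i->lock);
                    sb_i->lock_owner = current;
            }
            /* Only the owning task touches the depth, so no extra locking. */
            sb_i->lock_depth++;
    }

    /* Re-acquire after reiserfs_write_unlock_nested() dropped the lock. */
    void reiserfs_write_lock_nested(struct super_block *s, int depth)
    {
            struct reiserfs_sb_info *sb_i = REISERFS_SB(s);

            if (depth == -1)        /* the lock was not held to begin with */
                    return;

            mutex_lock(&sb_i->lock);
            sb_i->lock_owner = current;
            sb_i->lock_depth = depth;
    }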


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup