[v6.1] INFO: task hung in reiserfs_lookup (2)

syzbot
Dec 10, 2025, 5:19:28 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 50cbba13faa2 Linux 6.1.159
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=178961c2580000
kernel config: https://syzkaller.appspot.com/x/.config?x=6a5cbd268406bf43
dashboard link: https://syzkaller.appspot.com/bug?extid=7fb6df3f10d7ad44f222
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/50ceb00b1b39/disk-50cbba13.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/ebb8883b9a29/vmlinux-50cbba13.xz
kernel image: https://storage.googleapis.com/syzbot-assets/7edc2a795f03/bzImage-50cbba13.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+7fb6df...@syzkaller.appspotmail.com

INFO: task syz.6.1791:12713 blocked for more than 147 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.6.1791 state:D stack:26600 pid:12713 ppid:8459 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5244 [inline]
__schedule+0x10ec/0x40b0 kernel/sched/core.c:6561
schedule+0xb9/0x180 kernel/sched/core.c:6637
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6696
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0x555/0xaf0 kernel/locking/mutex.c:747
reiserfs_write_lock+0x75/0xd0 fs/reiserfs/lock.c:27
reiserfs_lookup+0x137/0x420 fs/reiserfs/namei.c:364
__lookup_slow+0x27d/0x3a0 fs/namei.c:1698
lookup_slow+0x53/0x70 fs/namei.c:1715
walk_component fs/namei.c:2006 [inline]
link_path_walk+0x928/0xe50 fs/namei.c:2333
path_openat+0x276/0x2e70 fs/namei.c:3787
do_filp_open+0x1c1/0x3c0 fs/namei.c:3818
do_sys_openat2+0x142/0x490 fs/open.c:1320
do_sys_open fs/open.c:1336 [inline]
__do_sys_open fs/open.c:1344 [inline]
__se_sys_open fs/open.c:1340 [inline]
__x64_sys_open+0x11b/0x140 fs/open.c:1340
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fe59258f749
RSP: 002b:00007fe593487038 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
RAX: ffffffffffffffda RBX: 00007fe5927e6090 RCX: 00007fe59258f749
RDX: 0000000000000162 RSI: 00000000001a1342 RDI: 0000200000000000
RBP: 00007fe592613f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fe5927e6128 R14: 00007fe5927e6090 R15: 00007ffe81799cc8
</TASK>
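For context on the blocked frame above: reiserfs_write_lock() (fs/reiserfs/lock.c:27 in the trace) implements ReiserFS's per-superblock "big lock" as a single mutex with an owner field and a nesting counter, so one task can re-enter it while all others block, which is exactly where syz.6.1791 is stuck while the kworker holds &sbi->lock. A minimal userspace sketch of that recursive-acquisition pattern (NOT kernel code; the struct and function names here are illustrative stand-ins, and pthreads replaces the kernel's task/mutex primitives):

```c
#include <assert.h>
#include <pthread.h>

/*
 * Userspace sketch of the fs/reiserfs/lock.c pattern: the mutex is
 * taken only on the first acquisition by a task; nested calls from
 * the owner just bump a depth counter.
 */
struct sketch_sbi {
	pthread_mutex_t lock;   /* stands in for sbi->lock */
	pthread_t owner;        /* stands in for sbi->lock_owner */
	int has_owner;          /* kernel compares a task pointer instead */
	int lock_depth;         /* nesting depth within the owning task */
};

void sketch_write_lock(struct sketch_sbi *s)
{
	/* Only the first acquisition by this thread takes the mutex... */
	if (!s->has_owner || !pthread_equal(s->owner, pthread_self())) {
		pthread_mutex_lock(&s->lock);
		s->owner = pthread_self();
		s->has_owner = 1;
	}
	/* ...re-entry by the owner only increments the depth. */
	s->lock_depth++;
}

void sketch_write_unlock(struct sketch_sbi *s)
{
	/* The mutex is released only when the outermost hold is dropped. */
	if (--s->lock_depth == 0) {
		s->has_owner = 0;
		pthread_mutex_unlock(&s->lock);
	}
}
```

The consequence visible in this report: any path that enters reiserfs_lookup() while another task holds the superblock lock sits in uninterruptible sleep on that one mutex, so a long-held lock (here, via flush_old_commits on the events_long worker) shows up as a hung task rather than a lockdep splat.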

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
#0: ffffffff8c92bab0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
#0: ffffffff8c92c2d0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
4 locks held by kworker/1:1/26:
#0: ffff888017471138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc90000a1fd00 ((work_completion)(&(&sbi->old_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#2: ffff888074ff40e0 (&type->s_umount_key#101){++++}-{3:3}, at: flush_old_commits+0xc8/0x2f0 fs/reiserfs/super.c:97
#3: ffff8880562a1090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock+0x75/0xd0 fs/reiserfs/lock.c:27
1 lock held by khungtaskd/28:
#0: ffffffff8c92b120 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
#0: ffffffff8c92b120 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
#0: ffffffff8c92b120 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6513
2 locks held by getty/4030:
#0: ffff88814d818098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
#1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x41b/0x1380 drivers/tty/n_tty.c:2198
2 locks held by kworker/0:17/5751:
#0: ffff888017472138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc90005a1fd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
3 locks held by kworker/u4:1/8922:
#0: ffff888017479138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc90003b6fd00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#2: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xa/0x50 net/core/link_watch.c:263
4 locks held by kworker/u4:6/8957:
#0: ffff888017616938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc900044e7d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#2: ffffffff8db2e6d0 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x132/0xb80 net/core/net_namespace.c:594
#3: ffffffff8c930cc0 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x48/0x600 kernel/rcu/tree.c:4023
3 locks held by kworker/1:3/11884:
#0: ffff88814cb3a938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc90004477d00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#2: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x15/0x30 net/ipv6/addrconf.c:4672
2 locks held by syz.6.1791/12713:
#0: ffff8880545602e0 (&type->i_mutex_dir_key#19){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:768 [inline]
#0: ffff8880545602e0 (&type->i_mutex_dir_key#19){++++}-{3:3}, at: lookup_slow+0x46/0x70 fs/namei.c:1714
#1: ffff8880562a1090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock+0x75/0xd0 fs/reiserfs/lock.c:27
6 locks held by syz.6.1791/12714:
2 locks held by syz-executor/13848:
#0: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x740/0xed0 net/core/rtnetlink.c:6147
#1: ffffffff8c930df8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:291 [inline]
#1: ffffffff8c930df8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x455/0x830 kernel/rcu/tree_exp.h:962
4 locks held by syz.7.1988/13915:
#0: ffff8880956650b8 (&hdev->req_lock){+.+.}-{3:3}, at: hci_dev_do_close net/bluetooth/hci_core.c:508 [inline]
#0: ffff8880956650b8 (&hdev->req_lock){+.+.}-{3:3}, at: hci_unregister_dev+0x1fa/0x4f0 net/bluetooth/hci_core.c:2705
#1: ffff888095664078 (&hdev->lock){+.+.}-{3:3}, at: hci_dev_close_sync+0x458/0xf40 net/bluetooth/hci_sync.c:5233
#2: ffffffff8dc98f48 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:1826 [inline]
#2: ffffffff8dc98f48 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_conn_hash_flush+0xac/0x290 net/bluetooth/hci_conn.c:2504
#3: ffffffff8c930df8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
#3: ffffffff8c930df8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x346/0x830 kernel/rcu/tree_exp.h:962
1 lock held by syz-executor/13985:
#0: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x740/0xed0 net/core/rtnetlink.c:6147

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 28 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
<TASK>
dump_stack_lvl+0x168/0x22e lib/dump_stack.c:106
nmi_cpu_backtrace+0x3f4/0x470 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x1d4/0x450 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
watchdog+0xeee/0xf30 kernel/hung_task.c:377
kthread+0x29d/0x330 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0 skipped: idling at native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
NMI backtrace for cpu 0 skipped: idling at arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
NMI backtrace for cpu 0 skipped: idling at default_idle+0xb/0x10 arch/x86/kernel/process.c:741


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup