Hello,
syzbot found the following issue on:
HEAD commit: c596736dadab Linux 6.6.120
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=17542dfc580000
kernel config: https://syzkaller.appspot.com/x/.config?x=691a6769a86ac817
dashboard link: https://syzkaller.appspot.com/bug?extid=3eab5a7edb67c0092ac0
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/855c94eb3eef/disk-c596736d.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/b7510b30b774/vmlinux-c596736d.xz
kernel image: https://storage.googleapis.com/syzbot-assets/3ce7fe4f6991/bzImage-c596736d.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+3eab5a...@syzkaller.appspotmail.com
INFO: task syz.8.1703:14245 blocked for more than 144 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.8.1703 state:D stack:25064 pid:14245 ppid:12634 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5381 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6700
schedule+0xbd/0x170 kernel/sched/core.c:6774
schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6833
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0x6b7/0xcc0 kernel/locking/mutex.c:747
reiserfs_write_lock_nested+0x60/0xd0 fs/reiserfs/lock.c:78
reiserfs_paste_into_item+0x1b4/0x7f0 fs/reiserfs/stree.c:2111
reiserfs_add_entry+0x978/0xd90 fs/reiserfs/namei.c:565
reiserfs_create+0x53f/0x680 fs/reiserfs/namei.c:676
lookup_open fs/namei.c:3496 [inline]
open_last_lookups fs/namei.c:3564 [inline]
path_openat+0x1277/0x3190 fs/namei.c:3794
do_filp_open+0x1c5/0x3d0 fs/namei.c:3824
do_sys_openat2+0x12c/0x1c0 fs/open.c:1421
do_sys_open fs/open.c:1436 [inline]
__do_sys_openat fs/open.c:1452 [inline]
__se_sys_openat fs/open.c:1447 [inline]
__x64_sys_openat+0x139/0x160 fs/open.c:1447
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fef5818f749
RSP: 002b:00007fef563f6038 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007fef583e6090 RCX: 00007fef5818f749
RDX: 000000000000275a RSI: 0000200000000140 RDI: ffffffffffffff9c
RBP: 00007fef58213f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fef583e6128 R14: 00007fef583e6090 R15: 00007fff52a105d8
</TASK>
Showing all locks held in the system:
4 locks held by kworker/1:0/23:
#0: ffff888017871138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017871138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc900001d7d00 ((work_completion)(&(&sbi->old_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc900001d7d00 ((work_completion)(&(&sbi->old_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffff888025c420e0 (&type->s_umount_key#25){++++}-{3:3}, at: flush_old_commits+0xcc/0x2f0 fs/reiserfs/super.c:97
#3: ffff88807efc8090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock+0x79/0xd0 fs/reiserfs/lock.c:27
1 lock held by khungtaskd/28:
#0: ffffffff8cd2ffa0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#0: ffffffff8cd2ffa0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#0: ffffffff8cd2ffa0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
2 locks held by kworker/0:2/786:
#0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90003b27d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90003b27d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
2 locks held by getty/5523:
#0: ffff88814c6c30a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc900015c72f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x425/0x1380 drivers/tty/n_tty.c:2217
5 locks held by kworker/u4:3/11909:
3 locks held by syz.8.1703/14235:
3 locks held by syz.8.1703/14245:
#0: ffff888025c42418 (sb_writers#34){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:412
#1: ffff888052f8dfb0 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
#1: ffff888052f8dfb0 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: open_last_lookups fs/namei.c:3561 [inline]
#1: ffff888052f8dfb0 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: path_openat+0x7c6/0x3190 fs/namei.c:3794
#2: ffff88807efc8090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock_nested+0x60/0xd0 fs/reiserfs/lock.c:78
2 locks held by syz.3.1832/15002:
#0: ffff888025c420e0 (&type->s_umount_key#25){++++}-{3:3}, at: __super_lock fs/super.c:58 [inline]
#0: ffff888025c420e0 (&type->s_umount_key#25){++++}-{3:3}, at: super_lock+0x167/0x360 fs/super.c:117
#1: ffff88807efc8090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock+0x79/0xd0 fs/reiserfs/lock.c:27
1 lock held by dhcpcd/15868:
#0: ffff888052d14420 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
#0: ffff888052d14420 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:658 [inline]
#0: ffff888052d14420 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x9b/0x230 net/socket.c:1420
1 lock held by dhcpcd/15869:
#0: ffff888052d12c20 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
#0: ffff888052d12c20 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:658 [inline]
#0: ffff888052d12c20 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x9b/0x230 net/socket.c:1420
1 lock held by dhcpcd/15870:
#0: ffff888049ef1420 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
#0: ffff888049ef1420 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:658 [inline]
#0: ffff888049ef1420 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x9b/0x230 net/socket.c:1420
1 lock held by dhcpcd/15871:
#0: ffff88805da27420 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
#0: ffff88805da27420 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:658 [inline]
#0: ffff88805da27420 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x9b/0x230 net/socket.c:1420
1 lock held by dhcpcd/15872:
#0: ffff88805da20820 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
#0: ffff88805da20820 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:658 [inline]
#0: ffff88805da20820 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x9b/0x230 net/socket.c:1420
2 locks held by dhcpcd/15873:
#0: ffff88805da24420 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
#0: ffff88805da24420 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:658 [inline]
#0: ffff88805da24420 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x9b/0x230 net/socket.c:1420
#1: ffffffff8cd35978 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
#1: ffffffff8cd35978 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x360/0x830 kernel/rcu/tree_exp.h:1004
2 locks held by dhcpcd/15874:
#0: ffff88805da22c20 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
#0: ffff88805da22c20 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:658 [inline]
#0: ffff88805da22c20 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x9b/0x230 net/socket.c:1420
#1: ffffffff8cd35978 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
#1: ffffffff8cd35978 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x360/0x830 kernel/rcu/tree_exp.h:1004
1 lock held by syz.5.1961/15884:
#0: ffff888052cbbe20 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
#0: ffff888052cbbe20 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:658 [inline]
#0: ffff888052cbbe20 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x9b/0x230 net/socket.c:1420
3 locks held by dhcpcd-run-hook/15894:
=============================================
NMI backtrace for cpu 1
CPU: 1 PID: 28 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
nmi_cpu_backtrace+0x39b/0x3d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x2f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
watchdog+0xf41/0xf80 kernel/hung_task.c:379
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 15895 Comm: dhcpcd-run-hook Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
RIP: 0010:__lock_is_held kernel/locking/lockdep.c:5492 [inline]
RIP: 0010:lock_is_held_type+0xc1/0x190 kernel/locking/lockdep.c:5825
Code: 31 e4 49 83 fc 31 73 24 4c 89 ff 4c 89 f6 e8 26 02 00 00 85 c0 75 2a 49 ff c4 48 63 83 d8 0a 00 00 49 83 c7 28 49 39 c4 7c d8 <eb> 11 48 c7 c7 e0 d6 bf 8c 4c 89 e6 e8 de 3f ea f9 eb cb 31 ed eb
RSP: 0018:ffffc900056d7640 EFLAGS: 00000046
RAX: 0000000000000005 RBX: ffff888040423c00 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffffffff8cd59a78 RDI: ffff888040424780
RBP: 00000000ffffffff R08: dffffc0000000000 R09: 1ffffffff21b2ca0
R10: dffffc0000000000 R11: fffffbfff21b2ca1 R12: 0000000000000005
R13: 0000000000000246 R14: ffffffff8cd59a78 R15: ffff8880404247a8
FS: 00007f7bb38fcc80(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f7bb3b607e8 CR3: 000000005187a000 CR4: 00000000003506f0
Call Trace:
<TASK>
lock_is_held include/linux/lockdep.h:288 [inline]
task_css include/linux/cgroup.h:436 [inline]
mem_cgroup_from_task+0x77/0x110 mm/memcontrol.c:1038
get_obj_cgroup_from_current+0x167/0x280 mm/memcontrol.c:3062
memcg_slab_pre_alloc_hook mm/slab.h:492 [inline]
slab_pre_alloc_hook+0x95/0x310 mm/slab.h:719
slab_alloc_node mm/slub.c:3477 [inline]
slab_alloc mm/slub.c:3503 [inline]
__kmem_cache_alloc_lru mm/slub.c:3510 [inline]
kmem_cache_alloc+0x5a/0x2e0 mm/slub.c:3519
anon_vma_chain_alloc mm/rmap.c:142 [inline]
anon_vma_clone+0xb8/0x4e0 mm/rmap.c:289
anon_vma_fork+0x79/0x500 mm/rmap.c:352
dup_mmap kernel/fork.c:733 [inline]
dup_mm kernel/fork.c:1692 [inline]
copy_mm+0xd87/0x1ca0 kernel/fork.c:1741
copy_process+0x16d3/0x3d70 kernel/fork.c:2506
kernel_clone+0x21b/0x840 kernel/fork.c:2914
__do_sys_clone kernel/fork.c:3057 [inline]
__se_sys_clone kernel/fork.c:3041 [inline]
__x64_sys_clone+0x18c/0x1e0 kernel/fork.c:3041
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f7bb3a96636
Code: 89 df e8 6d e8 f6 ff 45 31 c0 31 d2 31 f6 64 48 8b 04 25 10 00 00 00 bf 11 00 20 01 4c 8d 90 d0 02 00 00 b8 38 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 52 89 c5 85 c0 75 31 64 48 8b 04 25 10 00 00
RSP: 002b:00007ffcf40a3020 EFLAGS: 00000246 ORIG_RAX: 0000000000000038
RAX: ffffffffffffffda RBX: 00007ffcf40a3028 RCX: 00007f7bb3a96636
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000001200011
RBP: 000055581a658e60 R08: 0000000000000000 R09: 0000000000000000
R10: 00007f7bb38fcf50 R11: 0000000000000246 R12: 000055581a65af70
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup