[v6.6] INFO: task hung in console_callback

From: syzbot
Date: Nov 30, 2025, 3:20:23 PM
To: syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 1e89a1be4fe9 Linux 6.6.117
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=16744192580000
kernel config: https://syzkaller.appspot.com/x/.config?x=12606d4b8832c7e4
dashboard link: https://syzkaller.appspot.com/bug?extid=04ff68f8e65440927612
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=109b7cb4580000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=117a7912580000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/ad3647e47b66/disk-1e89a1be.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/d6f7ba94aea7/vmlinux-1e89a1be.xz
kernel image: https://storage.googleapis.com/syzbot-assets/e08af2290355/bzImage-1e89a1be.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/db1581c3f3a6/mount_1.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+04ff68...@syzkaller.appspotmail.com

INFO: task kworker/1:0:23 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:0 state:D stack:26088 pid:23 ppid:2 flags:0x00004000
Workqueue: events console_callback

Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
schedule_timeout+0x9b/0x280 kernel/time/timer.c:2143
___down_common kernel/locking/semaphore.c:229 [inline]
__down_common+0x308/0x640 kernel/locking/semaphore.c:250
down+0x80/0xd0 kernel/locking/semaphore.c:64
console_lock+0x145/0x1b0 kernel/printk/printk.c:2686
console_callback+0x6a/0x430 drivers/tty/vt/vt.c:2933
process_one_work kernel/workqueue.c:2634 [inline]
process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
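
[Context for the trace above, not part of the original log: the worker is parked in down() on the global console semaphore, an uninterruptible sleep with no timeout, which is why the task sits in state D long enough to trip the hung-task watchdog. A minimal paraphrase of the two functions named in the trace, based on the v6.6 sources and simplified, not a verbatim excerpt:]

/* kernel/printk/printk.c (simplified): console_lock() takes the
 * global console semaphore with a plain down(), i.e. an
 * uninterruptible, untimed sleep. */
void console_lock(void)
{
	might_sleep();
	down_console_sem();	/* wraps down(&console_sem) */
	console_locked = 1;
	console_may_schedule = 1;
}

/* drivers/tty/vt/vt.c (simplified): the VT housekeeping work item.
 * If some other task acquires console_sem and never releases it,
 * this work blocks forever in the down() above, matching the trace. */
static void console_callback(struct work_struct *ignored)
{
	console_lock();
	/* ... cursor, scrolling, and blanking housekeeping elided ... */
	console_unlock();
}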

Showing all locks held in the system:
2 locks held by kworker/1:0/23:
#0: ffff888017870938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017870938 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc900001d7d00 (console_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc900001d7d00 (console_work){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
1 lock held by khungtaskd/28:
#0: ffffffff8cd2fee0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#0: ffffffff8cd2fee0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#0: ffffffff8cd2fee0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
3 locks held by kworker/u4:2/41:
#0: ffff88802bc81538 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff88802bc81538 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90000b1fd00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90000b1fd00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffffffff8dfbc5c8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_dad_work+0xd0/0x14e0 net/ipv6/addrconf.c:4158
3 locks held by kworker/u4:6/3435:
#0: ffff888017871538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017871538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc9000c747d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc9000c747d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffffffff8dfbc5c8 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:286
5 locks held by kworker/u4:7/3454:
#0: ffff888017873938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017873938 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc9000c937d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc9000c937d00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffffffff8dfaf790 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x136/0xb90 net/core/net_namespace.c:606
#3: ffffffff8dfbc5c8 (rtnl_mutex){+.+.}-{3:3}, at: default_device_exit_batch+0xe9/0xa60 net/core/dev.c:11606
#4: ffffffff8cd358b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:292 [inline]
#4: ffffffff8cd358b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x448/0x830 kernel/rcu/tree_exp.h:1004
1 lock held by udevd/5157:
#0: ffff88814179d4c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:805
2 locks held by getty/5542:
#0: ffff88814dd480a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc9000326e2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x425/0x1380 drivers/tty/n_tty.c:2217
2 locks held by kworker/1:3/5821:
#0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc900045afd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc900045afd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
3 locks held by kworker/1:7/6007:
#0: ffff888017870938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017870938 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90003677d00 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90003677d00 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffffffff8dfbc5c8 (rtnl_mutex){+.+.}-{3:3}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
5 locks held by syz.5.84/6326:
1 lock held by syz.7.79/6328:

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 28 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
nmi_cpu_backtrace+0x39b/0x3d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x2f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
watchdog+0xf41/0xf80 kernel/hung_task.c:379
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 787 Comm: kworker/0:2 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Workqueue: events_power_efficient neigh_periodic_work
RIP: 0010:mark_lock+0x0/0x320 kernel/locking/lockdep.c:4639
Code: 38 c1 7c 90 48 c7 c7 f0 c2 4a 8e e8 2a 4a 75 00 4c 89 f7 41 89 d9 e9 79 ff ff ff 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 <55> 41 57 41 56 41 55 41 54 53 50 89 d5 49 89 f6 48 89 fb 49 bd 00
RSP: 0018:ffffc90003c479d8 EFLAGS: 00000006
RAX: 0000000000000000 RBX: ffff888020dd0ad8 RCX: ffffffff8167c5a4
RDX: 0000000000000006 RSI: ffff888020dd0ae0 RDI: ffff888020dd0000
RBP: ffffc90003c47a88 R08: ffffffff90da95bf R09: 1ffffffff21b52b7
R10: dffffc0000000000 R11: fffffbfff21b52b8 R12: ffff888020dd0b00
R13: 0000000000000000 R14: 1ffff110041ba15b R15: ffff888020dd0ae0
FS: 0000000000000000(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055bbf13d9820 CR3: 000000002b386000 CR4: 00000000003506f0
Call Trace:
<TASK>
mark_held_locks kernel/locking/lockdep.c:4274 [inline]
__trace_hardirqs_on_caller kernel/locking/lockdep.c:4300 [inline]
lockdep_hardirqs_on_prepare+0x369/0x760 kernel/locking/lockdep.c:4359
trace_hardirqs_on+0x28/0x40 kernel/trace/trace_preemptirq.c:61
__local_bh_enable_ip+0x12e/0x1c0 kernel/softirq.c:411
neigh_periodic_work+0xb53/0xd70 net/core/neighbour.c:1022
process_one_work kernel/workqueue.c:2634 [inline]
process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.
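[For instance, a test request against the tree this report was found on might look like the line below; the repository URL is an assumption, as this report only names the linux-6.6.y tree:]
#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git linux-6.6.y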

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup