Hello,
syzbot found the following issue on:
HEAD commit:    f6e38ae624cf Linux 6.1.158
git tree:       linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=106828b4580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=7eb38bd5021fec61
dashboard link: https://syzkaller.appspot.com/bug?extid=49263cba201c47ac0c58
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image:   https://storage.googleapis.com/syzbot-assets/0d8305ca1c94/disk-f6e38ae6.raw.xz
vmlinux:      https://storage.googleapis.com/syzbot-assets/8bc85383c07b/vmlinux-f6e38ae6.xz
kernel image: https://storage.googleapis.com/syzbot-assets/d337081bce56/bzImage-f6e38ae6.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+49263cba201c47ac0c58@syzkaller.appspotmail.com
INFO: task kworker/1:6:20970 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:6 state:D stack:24864 pid:20970 ppid:2 flags:0x00004000
Workqueue: events_power_efficient reg_check_chans_work
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5244 [inline]
__schedule+0x10ec/0x40b0 kernel/sched/core.c:6561
schedule+0xb9/0x180 kernel/sched/core.c:6637
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6696
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0x555/0xaf0 kernel/locking/mutex.c:747
reg_check_chans_work+0x8b/0xd80 net/wireless/reg.c:2499
process_one_work+0x898/0x1160 kernel/workqueue.c:2292
worker_thread+0xaa2/0x1250 kernel/workqueue.c:2439
kthread+0x29d/0x330 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>
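
For context on the blocked path: the worker is stuck acquiring rtnl_mutex at the very top of reg_check_chans_work(). A simplified sketch of that function's shape in net/wireless/reg.c (abbreviated from the 6.1 source; the loop body is illustrative, not verbatim):

/* Simplified sketch of net/wireless/reg.c:reg_check_chans_work().
 * The hung worker above never gets past the rtnl_lock() call
 * (reg.c:2499 in the trace); the rest of the body is abbreviated.
 */
static void reg_check_chans_work(struct work_struct *work)
{
	struct cfg80211_registered_device *rdev;

	pr_debug("Verifying active interfaces after reg change\n");
	rtnl_lock();		/* blocks here per the trace above */

	list_for_each_entry(rdev, &cfg80211_rdev_list, list)
		reg_leave_invalid_chans(&rdev->wiphy);

	rtnl_unlock();
}

So the real question is who owns rtnl_mutex; the lock dump below answers that.
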
Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
#0: ffffffff8cb2b630 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
#0: ffffffff8cb2be50 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by rcu_preempt/16:
#0: ffff8880b8f3aad8 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:537
2 locks held by kworker/1:1/27:
#0: ffff888017472138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc90000a3fd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
1 lock held by khungtaskd/28:
#0: ffffffff8cb2aca0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
#0: ffffffff8cb2aca0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
#0: ffffffff8cb2aca0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6513
1 lock held by klogd/3626:
2 locks held by getty/4030:
#0: ffff88814e230098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
#1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x41b/0x1380 drivers/tty/n_tty.c:2198
2 locks held by syz-executor/4253:
3 locks held by kworker/1:3/4306:
#0: ffff88814d529538 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc900044f7d00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#2: ffffffff8dd416e8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x15/0x30 net/ipv6/addrconf.c:4672
3 locks held by kworker/1:8/11634:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc9000c6ffd00 (deferred_process_work){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#2: ffffffff8dd416e8 (rtnl_mutex){+.+.}-{3:3}, at: switchdev_deferred_process_work+0xa/0x20 net/switchdev/switchdev.c:104
5 locks held by kworker/u4:3/15923:
#0: ffff888017616938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc9000518fd00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#2: ffffffff8dd34a10 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x132/0xb80 net/core/net_namespace.c:594
#3: ffff88805334b2f8 (&devlink->lock_key#18){+.+.}-{3:3}, at: devlink_pernet_pre_exit+0xf8/0x270 net/devlink/leftover.c:12500
#4: ffffffff8dd416e8 (rtnl_mutex){+.+.}-{3:3}, at: devlink_nl_port_fill+0x298/0x910 net/devlink/leftover.c:1276
3 locks held by kworker/u4:11/15941:
#0: ffff888017479138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc9000523fd00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#2: ffffffff8dd416e8 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xa/0x50 net/core/link_watch.c:263
3 locks held by kworker/0:5/20927:
#0: ffff88814d529538 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc90003547d00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#2: ffffffff8dd416e8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x15/0x30 net/ipv6/addrconf.c:4672
3 locks held by kworker/1:6/20970:
#0: ffff888017471938 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#1: ffffc900037f7d00 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
#2: ffffffff8dd416e8 (rtnl_mutex){+.+.}-{3:3}, at: reg_check_chans_work+0x8b/0xd80 net/wireless/reg.c:2499
2 locks held by syz.4.5814/22690:
#0: ffffffff8dd416e8 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:698 [inline]
#0: ffffffff8dd416e8 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x3d/0x1b0 drivers/net/tun.c:3492
#1: ffffffff8cb30978 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:291 [inline]
#1: ffffffff8cb30978 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x455/0x830 kernel/rcu/tree_exp.h:962
=============================================
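
Reading the dump: every blocked worker (ipv6 addrconf, switchdev, linkwatch, devlink cleanup, and the hung reg_check_chans_work) is queued on rtnl_mutex, whose apparent owner is syz.4.5814/22690 in the tun_chr_close() -> tun_detach() path. That task in turn holds rcu_state.exp_mutex and is waiting inside synchronize_rcu_expedited(), most plausibly reached via synchronize_net() somewhere under device teardown, since that helper picks the expedited variant whenever rtnl is held. Its shape as of 6.1, quoted from memory, so treat it as a sketch:

/* net/core/dev.c:synchronize_net(), shape as of 6.1 (from memory).
 * With rtnl_mutex held in the tun teardown path, the expedited
 * variant is chosen, which matches the rtnl_mutex ->
 * rcu_state.exp_mutex pairing the dump shows for syz.4.5814.
 */
void synchronize_net(void)
{
	might_sleep();
	if (rtnl_is_locked())
		synchronize_rcu_expedited();
	else
		synchronize_rcu();
}

If that expedited grace period stalls (e.g. behind a CPU stuck in a long non-preemptible section), everything queued on rtnl_mutex stalls with it, which is consistent with the 143-second hung-task report above.
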
NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
<TASK>
dump_stack_lvl+0x168/0x22e lib/dump_stack.c:106
nmi_cpu_backtrace+0x3f4/0x470 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x1d4/0x450 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
watchdog+0xeee/0xf30 kernel/hung_task.c:377
kthread+0x29d/0x330 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 15908 Comm: kworker/u4:0 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Workqueue: bat_events batadv_nc_worker
RIP: 0010:hlock_class kernel/locking/lockdep.c:228 [inline]
RIP: 0010:check_wait_context kernel/locking/lockdep.c:4724 [inline]
RIP: 0010:__lock_acquire+0x5eb/0x7c50 kernel/locking/lockdep.c:4999
Code: 07 89 c3 81 e3 ff 1f 00 00 c1 e8 03 25 f8 03 00 00 48 8d b8 40 22 ae 90 be 08 00 00 00 e8 9d 03 6e 00 48 0f a3 1d e5 1e 4b 0f <73> 1a 48 8d 04 5b c1 e0 06 48 8d 98 00 a1 46 90 49 b8 00 00 00 00
RSP: 0018:ffffc9000513f780 EFLAGS: 00000057
RAX: 0000000000000001 RBX: 00000000000006c9 RCX: ffffffff81630353
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff90ae2318
RBP: ffffc9000513f9d0 R08: dffffc0000000000 R09: fffffbfff215c464
R10: fffffbfff215c464 R11: 1ffffffff215c463 R12: ffff8880610c3b80
R13: 00000000000000d8 R14: 0000000000000002 R15: ffff8880610c46d0
FS: 0000000000000000(0000) GS:ffff8880b8f00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f6a275e5fa8 CR3: 000000000c88e000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000600
Call Trace:
<TASK>
lock_acquire+0x1b4/0x490 kernel/locking/lockdep.c:5662
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x32/0x50 kernel/locking/spinlock.c:178
spin_lock_bh include/linux/spinlock.h:356 [inline]
batadv_nc_purge_paths+0xe7/0x3b0 net/batman-adv/network-coding.c:442
batadv_nc_worker+0x365/0x600 net/batman-adv/network-coding.c:722
process_one_work+0x898/0x1160 kernel/workqueue.c:2292
worker_thread+0xaa2/0x1250 kernel/workqueue.c:2439
kthread+0x29d/0x330 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup