[v6.6] INFO: task hung in ppp_release

syzbot

Apr 10, 2026, 8:01:37 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 80de0a958133 Linux 6.6.133
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=12105cd2580000
kernel config: https://syzkaller.appspot.com/x/.config?x=c5b35c4db8465904
dashboard link: https://syzkaller.appspot.com/bug?extid=34dcaf7711b15c4100e3
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/3a04f1ef21aa/disk-80de0a95.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/081f2fcfce9b/vmlinux-80de0a95.xz
kernel image: https://storage.googleapis.com/syzbot-assets/3ef5904d5301/bzImage-80de0a95.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+34dcaf...@syzkaller.appspotmail.com

INFO: task syz.0.1988:13106 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.1988 state:D stack:27016 pid:13106 ppid:10306 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5381 [inline]
__schedule+0x1553/0x45a0 kernel/sched/core.c:6700
schedule+0xbd/0x170 kernel/sched/core.c:6774
schedule_timeout+0xc1/0x2d0 kernel/time/timer.c:2144
do_wait_for_common kernel/sched/completion.c:95 [inline]
__wait_for_common kernel/sched/completion.c:116 [inline]
wait_for_common kernel/sched/completion.c:127 [inline]
wait_for_completion+0x2cb/0x5b0 kernel/sched/completion.c:148
__flush_work+0x913/0xaa0 kernel/workqueue.c:3471
flush_all_backlogs net/core/dev.c:6008 [inline]
unregister_netdevice_many_notify+0x824/0x1900 net/core/dev.c:11094
unregister_netdevice_many net/core/dev.c:11168 [inline]
unregister_netdevice_queue+0x32c/0x370 net/core/dev.c:11048
unregister_netdevice include/linux/netdevice.h:3137 [inline]
ppp_release+0xf0/0x1f0 drivers/net/ppp/ppp_generic.c:420
__fput+0x234/0x970 fs/file_table.c:384
task_work_run+0x1d4/0x260 kernel/task_work.c:245
resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
exit_to_user_mode_loop+0xe6/0x110 kernel/entry/common.c:177
exit_to_user_mode_prepare+0xee/0x180 kernel/entry/common.c:210
__syscall_exit_to_user_mode_work kernel/entry/common.c:291 [inline]
syscall_exit_to_user_mode+0x1a/0x50 kernel/entry/common.c:302
do_syscall_64+0x61/0xa0 arch/x86/entry/common.c:82
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f230f79c819
RSP: 002b:00007ffc8c4b23b8 EFLAGS: 00000246 ORIG_RAX: 00000000000001b4
RAX: 0000000000000000 RBX: 00007f230fa17da0 RCX: 00007f230f79c819
RDX: 0000000000000000 RSI: 000000000000001e RDI: 0000000000000003
RBP: 00007f230fa17da0 R08: 00007f230fa16038 R09: 0000000000000000
R10: 00000000003ffc8c R11: 0000000000000246 R12: 0000000000064a53
R13: 00007f230fa1609c R14: 00000000000647c5 R15: 00007f230fa16090
</TASK>

Showing all locks held in the system:
7 locks held by kworker/0:0/8:
2 locks held by kworker/1:0/23:
#0: ffff888017c72538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#0: ffff888017c72538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
#1: ffffc900001d7d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#1: ffffc900001d7d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
3 locks held by kworker/1:1/28:
#0: ffff888017c71d38 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#0: ffff888017c71d38 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
#1: ffffc90000a4fd00 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#1: ffffc90000a4fd00 ((reg_check_chans).work){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
#2: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: reg_check_chans_work+0x92/0xd90 net/wireless/reg.c:2463
1 lock held by khungtaskd/29:
#0: ffffffff8d1320a0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#0: ffffffff8d1320a0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#0: ffffffff8d1320a0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
2 locks held by getty/5527:
#0: ffff8880317f90a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc9000328b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x433/0x1390 drivers/tty/n_tty.c:2217
3 locks held by kworker/u4:14/6757:
#0: ffff88802c778938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#0: ffff88802c778938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
#1: ffffc900053c7d00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#1: ffffc900053c7d00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
#2: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x19/0x30 net/ipv6/addrconf.c:4718
2 locks held by kworker/u4:20/6763:
#0: ffff888017c71538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#0: ffff888017c71538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
#1: ffffc9000553fd00 (connector_reaper_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#1: ffffc9000553fd00 (connector_reaper_work){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
2 locks held by kworker/u4:39/6786:
#0: ffff888017c71538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#0: ffff888017c71538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
#1: ffffc9000b71fd00 ((reaper_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#1: ffffc9000b71fd00 ((reaper_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
1 lock held by syz.3.1983/13090:
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: ppp_release+0x8a/0x1f0 drivers/net/ppp/ppp_generic.c:418
1 lock held by syz.1.1986/13103:
#0: ffffffff8d1880e8 (event_mutex){+.+.}-{3:3}, at: perf_trace_destroy+0x2e/0x140 kernel/trace/trace_event_perf.c:239
2 locks held by syz.0.1988/13106:
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: ppp_release+0x8a/0x1f0 drivers/net/ppp/ppp_generic.c:418
#1: ffffffff8cfcd270 (cpu_hotplug_lock){++++}-{0:0}, at: flush_all_backlogs net/core/dev.c:5992 [inline]
#1: ffffffff8cfcd270 (cpu_hotplug_lock){++++}-{0:0}, at: unregister_netdevice_many_notify+0x59c/0x1900 net/core/dev.c:11094
2 locks held by syz.2.1989/13115:
#0: ffffffff9731ac30 (&pmus_srcu){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:116 [inline]
#0: ffffffff9731ac30 (&pmus_srcu){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:215 [inline]
#0: ffffffff9731ac30 (&pmus_srcu){.+.+}-{0:0}, at: perf_init_event kernel/events/core.c:11890 [inline]
#0: ffffffff9731ac30 (&pmus_srcu){.+.+}-{0:0}, at: perf_event_alloc+0xc06/0x21b0 kernel/events/core.c:12211
#1: ffffffff8d1880e8 (event_mutex){+.+.}-{3:3}, at: perf_trace_init+0x50/0x2d0 kernel/trace/trace_event_perf.c:221
2 locks held by syz.2.1989/13116:
#0: ffffffff9731ac30 (&pmus_srcu){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:116 [inline]
#0: ffffffff9731ac30 (&pmus_srcu){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:215 [inline]
#0: ffffffff9731ac30 (&pmus_srcu){.+.+}-{0:0}, at: perf_init_event kernel/events/core.c:11890 [inline]
#0: ffffffff9731ac30 (&pmus_srcu){.+.+}-{0:0}, at: perf_event_alloc+0xc06/0x21b0 kernel/events/core.c:12211
#1: ffffffff8d1880e8 (event_mutex){+.+.}-{3:3}, at: perf_trace_init+0x50/0x2d0 kernel/trace/trace_event_perf.c:221
2 locks held by syz.2.1989/13117:
#0: ffffffff9731ac30 (&pmus_srcu){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:116 [inline]
#0: ffffffff9731ac30 (&pmus_srcu){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:215 [inline]
#0: ffffffff9731ac30 (&pmus_srcu){.+.+}-{0:0}, at: perf_init_event kernel/events/core.c:11890 [inline]
#0: ffffffff9731ac30 (&pmus_srcu){.+.+}-{0:0}, at: perf_event_alloc+0xc06/0x21b0 kernel/events/core.c:12211
#1: ffffffff8d1880e8 (event_mutex){+.+.}-{3:3}, at: perf_trace_init+0x50/0x2d0 kernel/trace/trace_event_perf.c:221
1 lock held by syz-executor/13122:
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x811/0xfa0 net/core/rtnetlink.c:6472
1 lock held by syz-executor/13128:
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x811/0xfa0 net/core/rtnetlink.c:6472
1 lock held by syz-executor/13130:
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x811/0xfa0 net/core/rtnetlink.c:6472
1 lock held by syz-executor/13138:
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x811/0xfa0 net/core/rtnetlink.c:6472
1 lock held by syz-executor/13144:
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x811/0xfa0 net/core/rtnetlink.c:6472
1 lock held by syz-executor/13151:
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x811/0xfa0 net/core/rtnetlink.c:6472
1 lock held by syz-executor/13153:
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x811/0xfa0 net/core/rtnetlink.c:6472
1 lock held by syz-executor/13156:
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x811/0xfa0 net/core/rtnetlink.c:6472
1 lock held by syz-executor/13167:
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x811/0xfa0 net/core/rtnetlink.c:6472
1 lock held by syz-executor/13174:
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x811/0xfa0 net/core/rtnetlink.c:6472
1 lock held by syz-executor/13177:
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x811/0xfa0 net/core/rtnetlink.c:6472
1 lock held by syz-executor/13182:
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8e3c21c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x811/0xfa0 net/core/rtnetlink.c:6472
2 locks held by dhcpcd/13188:
#0: ffff88807b47ee20 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
#0: ffff88807b47ee20 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:658 [inline]
#0: ffff88807b47ee20 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x9b/0x230 net/socket.c:1420
#1: ffffffff8d137a78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:292 [inline]
#1: ffffffff8d137a78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x306/0x880 kernel/rcu/tree_exp.h:1004
2 locks held by dhcpcd/13189:
#0: ffff88807b479a20 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
#0: ffff88807b479a20 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:658 [inline]
#0: ffff88807b479a20 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x9b/0x230 net/socket.c:1420
#1: ffffffff8d137a78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
#1: ffffffff8d137a78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3da/0x880 kernel/rcu/tree_exp.h:1004
1 lock held by dhcpcd/13191:
#0: ffff88805a664130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1780 [inline]
#0: ffff88805a664130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcc0 net/packet/af_packet.c:3259

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 29 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
<TASK>
dump_stack_lvl+0x18c/0x250 lib/dump_stack.c:106
nmi_cpu_backtrace+0x3a6/0x3e0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x2f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
watchdog+0xf3d/0xf80 kernel/hung_task.c:379
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 8 Comm: kworker/0:0 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Workqueue: wg-crypt-wg1 wg_packet_tx_worker
RIP: 0010:ip6t_do_table+0x119/0x1510 net/ipv6/netfilter/ip6_tables.c:267
Code: 00 74 08 48 89 df e8 06 21 8d f8 48 89 9c 24 b8 00 00 00 49 8b 46 08 48 85 c0 48 c7 c1 20 89 d1 8b 48 0f 44 c1 48 89 44 24 68 <49> 8d 5e 10 48 89 d8 48 c1 e8 03 48 89 84 24 a0 00 00 00 42 80 3c
RSP: 0018:ffffc90000006ca0 EFLAGS: 00000286
RAX: ffff888078218000 RBX: ffffc90000006f68 RCX: ffffffff8bd18920
RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffc90000006e10
RBP: ffffc90000006ea0 R08: ffffc90000006e0f R09: 0000000000000000
R10: ffffc90000006df0 R11: fffff52000000dc2 R12: 0000000000000002
R13: dffffc0000000000 R14: ffffc90000006f60 R15: ffffc90000006df0
FS: 0000000000000000(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fc67b64edd5 CR3: 000000000cf32000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000600
Call Trace:
<IRQ>
nf_hook_entry_hookfn include/linux/netfilter.h:144 [inline]
nf_hook_slow+0xbd/0x200 net/netfilter/core.c:626
nf_hook include/linux/netfilter.h:259 [inline]
NF_HOOK+0x594/0x700 include/linux/netfilter.h:302
br_nf_forward_ip+0xcc1/0x1110 net/bridge/br_netfilter_hooks.c:754
nf_hook_entry_hookfn include/linux/netfilter.h:144 [inline]
nf_hook_slow+0xbd/0x200 net/netfilter/core.c:626
nf_hook include/linux/netfilter.h:259 [inline]
NF_HOOK+0x23e/0x3e0 include/linux/netfilter.h:302
__br_forward+0x433/0x610 net/bridge/br_forward.c:115
deliver_clone net/bridge/br_forward.c:131 [inline]
maybe_deliver+0xb5/0x150 net/bridge/br_forward.c:191
br_flood+0x31b/0x670 net/bridge/br_forward.c:237
br_handle_frame_finish+0x13c5/0x18f0 net/bridge/br_input.c:215
br_nf_hook_thresh+0x3cd/0x4a0 net/bridge/br_netfilter_hooks.c:1184
br_nf_pre_routing_finish_ipv6+0x9dc/0xd00 net/bridge/br_netfilter_ipv6.c:-1
NF_HOOK include/linux/netfilter.h:304 [inline]
br_nf_pre_routing_ipv6+0x349/0x6b0 net/bridge/br_netfilter_ipv6.c:184
nf_hook_entry_hookfn include/linux/netfilter.h:144 [inline]
nf_hook_bridge_pre net/bridge/br_input.c:277 [inline]
br_handle_frame+0x1245/0x14d0 net/bridge/br_input.c:424
__netif_receive_skb_core+0xfab/0x3af0 net/core/dev.c:5528
__netif_receive_skb_one_core net/core/dev.c:5632 [inline]
__netif_receive_skb+0x74/0x290 net/core/dev.c:5748
process_backlog+0x391/0x6f0 net/core/dev.c:6076
__napi_poll+0xc0/0x460 net/core/dev.c:6638
napi_poll net/core/dev.c:6705 [inline]
net_rx_action+0x616/0xc40 net/core/dev.c:6841
handle_softirqs+0x280/0x820 kernel/softirq.c:578
do_softirq+0xfa/0x1a0 kernel/softirq.c:479
</IRQ>
<TASK>
__local_bh_enable_ip+0x184/0x1c0 kernel/softirq.c:406
wg_socket_send_skb_to_peer+0x16b/0x1c0 drivers/net/wireguard/socket.c:184
wg_packet_create_data_done drivers/net/wireguard/send.c:251 [inline]
wg_packet_tx_worker+0x1c8/0x7c0 drivers/net/wireguard/send.c:276
process_one_work kernel/workqueue.c:2653 [inline]
process_scheduled_works+0xa5d/0x15d0 kernel/workqueue.c:2730
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2811
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup