Hello,
syzbot found the following issue on:
HEAD commit: 8e8fc038cad5 Linux 6.1.168
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=12eed8ce580000
kernel config: https://syzkaller.appspot.com/x/.config?x=f0605c5af04d7603
dashboard link: https://syzkaller.appspot.com/bug?extid=cbdbbcb231cdbc606959
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/6dfc76304daa/disk-8e8fc038.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/cc3f335da424/vmlinux-8e8fc038.xz
kernel image: https://storage.googleapis.com/syzbot-assets/52204dcd4a1b/bzImage-8e8fc038.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+cbdbbc...@syzkaller.appspotmail.com
INFO: task kworker/0:6:14976 blocked for more than 144 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:6 state:D stack:25168 pid:14976 ppid:2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5245 [inline]
__schedule+0x11d1/0x40e0 kernel/sched/core.c:6562
schedule+0xb9/0x180 kernel/sched/core.c:6638
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6697
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0x562/0xaf0 kernel/locking/mutex.c:747
ovs_lock net/openvswitch/datapath.c:107 [inline]
ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
process_one_work+0x8a2/0x1160 kernel/workqueue.c:2292
worker_thread+0xaa2/0x1270 kernel/workqueue.c:2439
kthread+0x29d/0x330 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>
Showing all locks held in the system:
3 locks held by kworker/0:0/7:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc900000c7d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
1 lock held by rcu_tasks_kthre/12:
#0: ffffffff8cb2df30 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
#0: ffffffff8cb2e750 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by khungtaskd/28:
#0: ffffffff8cb2d5a0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
#0: ffffffff8cb2d5a0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
#0: ffffffff8cb2d5a0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6513
3 locks held by kworker/1:3/3653:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc90002f87d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
2 locks held by getty/4029:
#0: ffff88814d092098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
#1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x429/0x1390 drivers/tty/n_tty.c:2198
2 locks held by kworker/0:3/4275:
#0: ffff888017472138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc90003f37d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
3 locks held by kworker/1:4/4312:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc90004537d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
3 locks held by kworker/1:5/4314:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc90004547d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
3 locks held by kworker/0:7/4345:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc900045b7d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
3 locks held by kworker/0:8/4348:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc90004577d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
5 locks held by kworker/u4:21/5039:
#0: ffff888017616938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc9000e47fd00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8dd3af10 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x148/0xba0 net/core/net_namespace.c:594
#3: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#3: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_exit_net+0xe1/0x7a0 net/openvswitch/datapath.c:2653
#4: ffffffff8cb33140 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x48/0x600 kernel/rcu/tree.c:4023
3 locks held by kworker/1:6/9059:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc9000546fd00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
3 locks held by kworker/0:1/13602:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc90003207d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
3 locks held by kworker/1:9/14166:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc90003637d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
3 locks held by kworker/0:4/14176:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc900038ffd00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
3 locks held by kworker/1:11/14245:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc90003d9fd00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
3 locks held by kworker/1:12/14249:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc90003e57d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
3 locks held by kworker/1:13/14922:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc90003dd7d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
3 locks held by kworker/0:6/14976:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc90003687d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
3 locks held by kworker/1:15/15158:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc90004657d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
3 locks held by kworker/1:16/15204:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc9000479fd00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
3 locks held by kworker/0:9/15606:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc90003eb7d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
3 locks held by kworker/0:10/15709:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc9000488fd00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
3 locks held by syz-executor/15900:
#0: ffff88802a91d468 (&hdev->req_lock){+.+.}-{3:3}, at: hci_dev_do_close net/bluetooth/hci_core.c:526 [inline]
#0: ffff88802a91d468 (&hdev->req_lock){+.+.}-{3:3}, at: hci_unregister_dev+0x20e/0x500 net/bluetooth/hci_core.c:2731
#1: ffff88802a91c428 (&hdev->lock){+.+.}-{3:3}, at: hci_dev_close_sync+0x45b/0xf40 net/bluetooth/hci_sync.c:5234
#2: ffffffff8dea6268 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:1829 [inline]
#2: ffffffff8dea6268 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_conn_hash_flush+0xac/0x290 net/bluetooth/hci_conn.c:2504
3 locks held by kworker/0:11/16031:
3 locks held by kworker/0:12/16036:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc90003697d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
3 locks held by kworker/0:13/16101:
#0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#1: ffffc90003397d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
#2: ffffffff8e05fbe8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2b/0xd0 net/openvswitch/datapath.c:2504
1 lock held by syz.9.3451/16710:
#0: ffffffff8cb33278 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
#0: ffffffff8cb33278 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3c0/0x890 kernel/rcu/tree_exp.h:962
2 locks held by syz.2.3452/16719:
#0: ffffffff8cb82668 (event_mutex){+.+.}-{3:3}, at: perf_trace_destroy+0x2a/0x140 kernel/trace/trace_event_perf.c:239
#1: ffffffff8cb33278 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:291 [inline]
#1: ffffffff8cb33278 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x2ec/0x890 kernel/rcu/tree_exp.h:962
2 locks held by syz.7.3453/16729:
#0: ffffffff96c51168 (&pmus_srcu){....}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
#0: ffffffff96c51168 (&pmus_srcu){....}-{0:0}, at: srcu_read_lock include/linux/srcu.h:165 [inline]
#0: ffffffff96c51168 (&pmus_srcu){....}-{0:0}, at: perf_init_event kernel/events/core.c:11664 [inline]
#0: ffffffff96c51168 (&pmus_srcu){....}-{0:0}, at: perf_event_alloc+0xbe2/0x21b0 kernel/events/core.c:11990
#1: ffffffff8cb82668 (event_mutex){+.+.}-{3:3}, at: perf_trace_init+0x4c/0x2d0 kernel/trace/trace_event_perf.c:221
=============================================
NMI backtrace for cpu 1
CPU: 1 PID: 28 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
<TASK>
dump_stack_lvl+0x188/0x24e lib/dump_stack.c:106
nmi_cpu_backtrace+0x3e6/0x460 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x1d4/0x450 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
watchdog+0xeee/0xf30 kernel/hung_task.c:377
kthread+0x29d/0x330 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 15 Comm: ksoftirqd/0 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
RIP: 0010:__srcu_read_lock+0x2/0x80 kernel/rcu/srcutree.c:635
Code: 76 fe ff ff 44 89 f9 80 e1 07 80 c1 03 38 c1 0f 8c 24 ff ff ff 4c 89 ff e8 5b 77 67 00 e9 17 ff ff ff 66 0f 1f 44 00 00 41 57 <41> 56 53 48 89 fb 49 bf 00 00 00 00 00 fc ff df 48 81 c7 80 01 00
RSP: 0018:ffffc90000146898 EFLAGS: 00000082
RAX: 0000000080000300 RBX: 0000000000000000 RCX: ffffffff8188e20d
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff8cb742e0
RBP: ffffc90000146970 R08: ffffffff8e1ff26f R09: 1ffffffff1c3fe4d
R10: dffffc0000000000 R11: fffffbfff1c3fe4e R12: dffffc0000000000
R13: 0000000000000000 R14: ffffffff814f6627 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f4704af0d58 CR3: 000000008c7bc000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000600
Call Trace:
<TASK>
srcu_read_lock_notrace include/linux/srcu.h:175 [inline]
trace_irq_disable_rcuidle+0x98/0x140 include/trace/events/preemptirq.h:36
__local_bh_enable_ip+0xd7/0x1c0 kernel/softirq.c:403
local_bh_enable include/linux/bottom_half.h:33 [inline]
ip6_pol_route+0xe37/0x12a0 net/ipv6/route.c:2298
pol_lookup_func include/net/ip6_fib.h:579 [inline]
fib6_rule_lookup+0x208/0x5d0 net/ipv6/fib6_rules.c:116
ip6_route_input_lookup net/ipv6/route.c:2329 [inline]
ip6_route_input+0x725/0xa40 net/ipv6/route.c:2625
ip6_rcv_finish+0x13f/0x230 net/ipv6/ip6_input.c:77
ip_sabotage_in+0x1f0/0x270 net/bridge/br_netfilter_hooks.c:1001
nf_hook_entry_hookfn include/linux/netfilter.h:142 [inline]
nf_hook_slow+0xb9/0x200 net/netfilter/core.c:614
nf_hook include/linux/netfilter.h:257 [inline]
NF_HOOK+0x219/0x3b0 include/linux/netfilter.h:300
__netif_receive_skb_one_core net/core/dev.c:5619 [inline]
__netif_receive_skb+0xcc/0x290 net/core/dev.c:5733
netif_receive_skb_internal net/core/dev.c:5819 [inline]
netif_receive_skb+0x1d4/0x830 net/core/dev.c:5878
NF_HOOK+0x9a/0x390 include/linux/netfilter.h:302
br_handle_frame_finish+0x1263/0x16f0 net/bridge/br_input.c:204
br_nf_hook_thresh+0x3c9/0x4a0 net/bridge/br_netfilter_hooks.c:1184
br_nf_pre_routing_finish_ipv6+0x9da/0xd00 net/bridge/br_netfilter_ipv6.c:-1
NF_HOOK include/linux/netfilter.h:302 [inline]
br_nf_pre_routing_ipv6+0x345/0x6b0 net/bridge/br_netfilter_ipv6.c:243
nf_hook_entry_hookfn include/linux/netfilter.h:142 [inline]
nf_hook_bridge_pre net/bridge/br_input.c:260 [inline]
br_handle_frame+0x1167/0x13c0 net/bridge/br_input.c:406
__netif_receive_skb_core+0x1004/0x38f0 net/core/dev.c:5513
__netif_receive_skb_one_core net/core/dev.c:5617 [inline]
__netif_receive_skb+0x74/0x290 net/core/dev.c:5733
process_backlog+0x38d/0x6f0 net/core/dev.c:6061
__napi_poll+0xc0/0x460 net/core/dev.c:6628
napi_poll net/core/dev.c:6695 [inline]
net_rx_action+0x5dd/0xb20 net/core/dev.c:6809
handle_softirqs+0x2a1/0x930 kernel/softirq.c:596
run_ksoftirqd+0xa4/0x100 kernel/softirq.c:968
smpboot_thread_fn+0x64a/0xa40 kernel/smpboot.c:164
kthread+0x29d/0x330 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup