Hello,
syzbot found the following issue on:
HEAD commit: 5fa4793a2d2d Linux 6.6.119
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=102db31a580000
kernel config: https://syzkaller.appspot.com/x/.config?x=691a6769a86ac817
dashboard link: https://syzkaller.appspot.com/bug?extid=d9f3e3e4778c146a77fe
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/857fe583cdec/disk-5fa4793a.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/ed4d3c1402bc/vmlinux-5fa4793a.xz
kernel image: https://storage.googleapis.com/syzbot-assets/40296d968c3d/bzImage-5fa4793a.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+d9f3e3...@syzkaller.appspotmail.com
INFO: task kworker/u4:2:42 blocked for more than 142 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:2 state:D stack:22376 pid:42 ppid:2 flags:0x00004000
Workqueue: ipv6_addrconf addrconf_verify_work
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6832
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0x6b7/0xcc0 kernel/locking/mutex.c:747
addrconf_verify_work+0x19/0x30 net/ipv6/addrconf.c:4700
process_one_work kernel/workqueue.c:2634 [inline]
process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
INFO: task kworker/u4:21:8358 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:21 state:D stack:22376 pid:8358 ppid:2 flags:0x00004000
Workqueue: events_unbound linkwatch_event
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6832
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0x6b7/0xcc0 kernel/locking/mutex.c:747
linkwatch_event+0xe/0x60 net/core/link_watch.c:286
process_one_work kernel/workqueue.c:2634 [inline]
process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
INFO: task syz-executor:12058 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor state:D stack:25320 pid:12058 ppid:1 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6832
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0x6b7/0xcc0 kernel/locking/mutex.c:747
rtnl_lock net/core/rtnetlink.c:78 [inline]
rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6469
netlink_rcv_skb+0x216/0x480 net/netlink/af_netlink.c:2545
netlink_unicast_kernel net/netlink/af_netlink.c:1320 [inline]
netlink_unicast+0x751/0x8d0 net/netlink/af_netlink.c:1346
netlink_sendmsg+0x8c1/0xbe0 net/netlink/af_netlink.c:1894
sock_sendmsg_nosec net/socket.c:730 [inline]
__sock_sendmsg net/socket.c:745 [inline]
__sys_sendto+0x46a/0x620 net/socket.c:2201
__do_sys_sendto net/socket.c:2213 [inline]
__se_sys_sendto net/socket.c:2209 [inline]
__x64_sys_sendto+0xde/0xf0 net/socket.c:2209
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f34d6d915dc
RSP: 002b:00007ffc900dce20 EFLAGS: 00000293 ORIG_RAX: 000000000000002c
RAX: ffffffffffffffda RBX: 00007f34d7b14620 RCX: 00007f34d6d915dc
RDX: 0000000000000028 RSI: 00007f34d7b14670 RDI: 0000000000000003
RBP: 0000000000000000 R08: 00007ffc900dce74 R09: 000000000000000c
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000003
R13: 0000000000000000 R14: 00007f34d7b14670 R15: 0000000000000000
</TASK>
INFO: task syz-executor:12060 blocked for more than 144 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor state:D stack:25320 pid:12060 ppid:1 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6832
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0x6b7/0xcc0 kernel/locking/mutex.c:747
rtnl_lock net/core/rtnetlink.c:78 [inline]
rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6469
netlink_rcv_skb+0x216/0x480 net/netlink/af_netlink.c:2545
netlink_unicast_kernel net/netlink/af_netlink.c:1320 [inline]
netlink_unicast+0x751/0x8d0 net/netlink/af_netlink.c:1346
netlink_sendmsg+0x8c1/0xbe0 net/netlink/af_netlink.c:1894
sock_sendmsg_nosec net/socket.c:730 [inline]
__sock_sendmsg net/socket.c:745 [inline]
__sys_sendto+0x46a/0x620 net/socket.c:2201
__do_sys_sendto net/socket.c:2213 [inline]
__se_sys_sendto net/socket.c:2209 [inline]
__x64_sys_sendto+0xde/0xf0 net/socket.c:2209
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f233e5915dc
RSP: 002b:00007ffc251640b0 EFLAGS: 00000293 ORIG_RAX: 000000000000002c
RAX: ffffffffffffffda RBX: 00007f233f314620 RCX: 00007f233e5915dc
RDX: 0000000000000028 RSI: 00007f233f314670 RDI: 0000000000000003
RBP: 0000000000000000 R08: 00007ffc25164104 R09: 000000000000000c
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000003
R13: 0000000000000000 R14: 00007f233f314670 R15: 0000000000000000
</TASK>
INFO: task syz-executor:12062 blocked for more than 144 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor state:D stack:25320 pid:12062 ppid:1 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6832
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0x6b7/0xcc0 kernel/locking/mutex.c:747
rtnl_lock net/core/rtnetlink.c:78 [inline]
rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6469
netlink_rcv_skb+0x216/0x480 net/netlink/af_netlink.c:2545
netlink_unicast_kernel net/netlink/af_netlink.c:1320 [inline]
netlink_unicast+0x751/0x8d0 net/netlink/af_netlink.c:1346
netlink_sendmsg+0x8c1/0xbe0 net/netlink/af_netlink.c:1894
sock_sendmsg_nosec net/socket.c:730 [inline]
__sock_sendmsg net/socket.c:745 [inline]
__sys_sendto+0x46a/0x620 net/socket.c:2201
__do_sys_sendto net/socket.c:2213 [inline]
__se_sys_sendto net/socket.c:2209 [inline]
__x64_sys_sendto+0xde/0xf0 net/socket.c:2209
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7ff19e1915dc
RSP: 002b:00007fffc4abad40 EFLAGS: 00000293 ORIG_RAX: 000000000000002c
RAX: ffffffffffffffda RBX: 00007ff19ef14620 RCX: 00007ff19e1915dc
RDX: 0000000000000028 RSI: 00007ff19ef14670 RDI: 0000000000000003
RBP: 0000000000000000 R08: 00007fffc4abad94 R09: 000000000000000c
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000003
R13: 0000000000000000 R14: 00007ff19ef14670 R15: 0000000000000000
</TASK>
Showing all locks held in the system:
3 locks held by kworker/1:1/28:
#0: ffff888017871d38 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017871d38 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90000a4fd00 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90000a4fd00 ((reg_check_chans).work){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: reg_check_chans_work+0x91/0xd70 net/wireless/reg.c:2463
1 lock held by khungtaskd/29:
#0: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#0: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#0: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
3 locks held by kworker/u4:2/42:
#0: ffff88802ca28138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff88802ca28138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90000b2fd00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90000b2fd00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x19/0x30 net/ipv6/addrconf.c:4700
2 locks held by getty/5524:
#0: ffff88814ca790a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc9000328b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x425/0x1380 drivers/tty/n_tty.c:2217
3 locks held by kworker/0:3/5750:
#0: ffff888017870938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017870938 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90004297d00 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90004297d00 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
5 locks held by kworker/u5:2/5759:
#0: ffff888028123d38 ((wq_completion)hci7){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888028123d38 ((wq_completion)hci7){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc900042f7d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc900042f7d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffff88807a834e70 (&hdev->req_lock){+.+.}-{3:3}, at: hci_cmd_sync_work+0x1d4/0x390 net/bluetooth/hci_sync.c:326
#3: ffff88807a8340b8 (&hdev->lock){+.+.}-{3:3}, at: hci_abort_conn_sync+0x1f7/0xdc0 net/bluetooth/hci_sync.c:5658
#4: ffffffff8e1225c8 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_connect_cfm include/net/bluetooth/hci_core.h:1996 [inline]
#4: ffffffff8e1225c8 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_conn_failed+0x165/0x300 net/bluetooth/hci_conn.c:1251
5 locks held by kworker/u5:3/5763:
#0: ffff88802121fd38 ((wq_completion)hci4){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff88802121fd38 ((wq_completion)hci4){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90004437d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90004437d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffff888026884e70 (&hdev->req_lock){+.+.}-{3:3}, at: hci_cmd_sync_work+0x1d4/0x390 net/bluetooth/hci_sync.c:326
#3: ffff8880268840b8 (&hdev->lock){+.+.}-{3:3}, at: hci_abort_conn_sync+0x1f7/0xdc0 net/bluetooth/hci_sync.c:5658
#4: ffffffff8e1225c8 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_connect_cfm include/net/bluetooth/hci_core.h:1996 [inline]
#4: ffffffff8e1225c8 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_conn_failed+0x165/0x300 net/bluetooth/hci_conn.c:1251
6 locks held by kworker/u5:4/5765:
#0: ffff888063c81d38 ((wq_completion)hci5){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888063c81d38 ((wq_completion)hci5){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90004447d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90004447d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffff88805faace70 (&hdev->req_lock){+.+.}-{3:3}, at: hci_cmd_sync_work+0x1d4/0x390 net/bluetooth/hci_sync.c:326
#3: ffff88805faac0b8 (&hdev->lock){+.+.}-{3:3}, at: hci_abort_conn_sync+0x1f7/0xdc0 net/bluetooth/hci_sync.c:5658
#4: ffffffff8e1225c8 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_connect_cfm include/net/bluetooth/hci_core.h:1996 [inline]
#4: ffffffff8e1225c8 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_conn_failed+0x165/0x300 net/bluetooth/hci_conn.c:1251
#5: ffff888058c7eb38 (&conn->lock#2){+.+.}-{3:3}, at: l2cap_conn_del+0x70/0x660 net/bluetooth/l2cap_core.c:1763
5 locks held by kworker/u5:6/5770:
#0: ffff888063c82138 ((wq_completion)hci6){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888063c82138 ((wq_completion)hci6){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90004427d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90004427d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffff88802e1b4e70 (&hdev->req_lock){+.+.}-{3:3}, at: hci_cmd_sync_work+0x1d4/0x390 net/bluetooth/hci_sync.c:326
#3: ffff88802e1b40b8 (&hdev->lock){+.+.}-{3:3}, at: hci_abort_conn_sync+0x1f7/0xdc0 net/bluetooth/hci_sync.c:5658
#4: ffffffff8e1225c8 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_connect_cfm include/net/bluetooth/hci_core.h:1996 [inline]
#4: ffffffff8e1225c8 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_conn_failed+0x165/0x300 net/bluetooth/hci_conn.c:1251
2 locks held by kworker/1:5/5824:
#0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc9000489fd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc9000489fd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
3 locks held by kworker/u4:21/8358:
#0: ffff888017871538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017871538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90003af7d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90003af7d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:286
2 locks held by syz.5.1756/12006:
#0: ffffffff8dfa8610 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x351/0x5e0 net/core/net_namespace.c:516
#1: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: setup_net+0x6a3/0xa00 net/core/net_namespace.c:365
1 lock held by syz.3.1766/12038:
#0: ffffffff8cd85ea8 (event_mutex){+.+.}-{3:3}, at: perf_trace_destroy+0x2e/0x140 kernel/trace/trace_event_perf.c:239
1 lock held by syz.4.1767/12041:
#0: ffffffff8cd358f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
#0: ffffffff8cd358f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x360/0x830 kernel/rcu/tree_exp.h:1004
5 locks held by syz.6.1768/12044:
2 locks held by syz-executor/12050:
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6469
#1: ffffffff8cd358f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:292 [inline]
#1: ffffffff8cd358f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x448/0x830 kernel/rcu/tree_exp.h:1004
1 lock held by syz-executor/12058:
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6469
1 lock held by syz-executor/12060:
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6469
1 lock held by syz-executor/12062:
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6469
1 lock held by syz-executor/12065:
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6469
1 lock held by syz-executor/12070:
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6469
1 lock held by syz-executor/12072:
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6469
1 lock held by syz-executor/12074:
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6469
1 lock held by syz-executor/12077:
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6469
1 lock held by syz-executor/12080:
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6469
1 lock held by syz-executor/12084:
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6469
1 lock held by syz-executor/12086:
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
#0: ffffffff8dfb5448 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6469
=============================================
NMI backtrace for cpu 1
CPU: 1 PID: 29 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
nmi_cpu_backtrace+0x39b/0x3d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x2f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
watchdog+0xf41/0xf80 kernel/hung_task.c:379
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 12044 Comm: syz.6.1768 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
RIP: 0010:rcu_dynticks_curr_cpu_in_eqs include/linux/context_tracking.h:122 [inline]
RIP: 0010:rcu_is_watching+0x3a/0xb0 kernel/rcu/tree.c:700
Code: e8 8b f2 f7 08 89 c3 83 f8 08 73 60 49 bf 00 00 00 00 00 fc ff df 4c 8d 34 dd 30 8a 7c 8c 4c 89 f0 48 c1 e8 03 42 80 3c 38 00 <74> 08 4c 89 f7 e8 ac 64 6d 00 48 c7 c3 28 6b 03 00 49 03 1e 48 89
RSP: 0018:ffffc900000073f0 EFLAGS: 00000046
RAX: 1ffffffff18f9146 RBX: 0000000000000000 RCX: 68e427dc61943b00
RDX: ffff888020f95a00 RSI: ffffffff8afc6f60 RDI: ffffffff8afc6f20
RBP: ffffc90000007570 R08: dffffc0000000000 R09: 1ffffffff21b28a0
R10: dffffc0000000000 R11: fffffbfff21b28a1 R12: ffffc900000075c0
R13: dffffc0000000000 R14: ffffffff8c7c8a30 R15: dffffc0000000000
FS: 00007f3d052fc6c0(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055556e9f85c8 CR3: 000000005c2a4000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000200000000300 DR6: 00000000ffff0ff0 DR7: 0000000000000600
Call Trace:
<IRQ>
rcu_read_lock_held_common kernel/rcu/update.c:108 [inline]
rcu_read_lock_held+0x15/0x40 kernel/rcu/update.c:348
__perf_output_begin kernel/events/ring_buffer.c:170 [inline]
perf_output_begin_forward+0x1a8/0xa20 kernel/events/ring_buffer.c:273
__perf_event_output kernel/events/core.c:7975 [inline]
perf_event_output_forward+0x22b/0x3a0 kernel/events/core.c:7993
__perf_event_overflow+0x447/0x630 kernel/events/core.c:9718
perf_swevent_hrtimer+0x3bc/0x530 kernel/events/core.c:11188
__run_hrtimer kernel/time/hrtimer.c:1750 [inline]
__hrtimer_run_queues+0x4df/0xc40 kernel/time/hrtimer.c:1814
hrtimer_interrupt+0x3c9/0x9c0 kernel/time/hrtimer.c:1876
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1077 [inline]
__sysvec_apic_timer_interrupt+0xfb/0x3b0 arch/x86/kernel/apic/apic.c:1094
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1088 [inline]
sysvec_apic_timer_interrupt+0x51/0xc0 arch/x86/kernel/apic/apic.c:1088
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:687
RIP: 0010:__raw_spin_unlock_irq include/linux/spinlock_api_smp.h:160 [inline]
RIP: 0010:_raw_spin_unlock_irq+0x29/0x50 kernel/locking/spinlock.c:202
Code: 00 f3 0f 1e fa 53 48 89 fb 48 83 c7 18 48 8b 74 24 08 e8 6a 5f f6 f6 48 89 df e8 82 31 f7 f6 e8 5d d8 1a f7 fb bf 01 00 00 00 <e8> 52 53 ea f6 65 8b 05 43 9a 92 75 85 c0 74 02 5b c3 e8 20 80 8f
RSP: 0018:ffffc90000007cb0 EFLAGS: 00000282
RAX: 68e427dc61943b00 RBX: ffff8880b8e29580 RCX: 68e427dc61943b00
RDX: dffffc0000000000 RSI: ffffffff8aaabce0 RDI: 0000000000000001
RBP: ffffc90000007e10 R08: ffffffff90d945ff R09: 1ffffffff21b28bf
R10: dffffc0000000000 R11: fffffbfff21b28c0 R12: ffff8880b8e29580
R13: 1ffff110171c52b8 R14: ffff88807c6e7208 R15: ffffc90000007d60
expire_timers kernel/time/timer.c:1751 [inline]
__run_timers+0x51e/0x7d0 kernel/time/timer.c:2023
run_timer_softirq+0x67/0xf0 kernel/time/timer.c:2036
handle_softirqs+0x280/0x820 kernel/softirq.c:578
__do_softirq kernel/softirq.c:612 [inline]
invoke_softirq kernel/softirq.c:452 [inline]
__irq_exit_rcu+0xc7/0x190 kernel/softirq.c:661
irq_exit_rcu+0x9/0x20 kernel/softirq.c:673
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1088 [inline]
sysvec_apic_timer_interrupt+0xa4/0xc0 arch/x86/kernel/apic/apic.c:1088
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:687
RIP: 0010:finish_task_switch+0x26a/0x920 kernel/sched/core.c:5254
Code: 0f 84 37 01 00 00 48 85 db 0f 85 56 01 00 00 0f 1f 44 00 00 4c 8b 75 d0 4c 89 e7 e8 80 ca 14 09 e8 4b a4 2f 00 fb 4c 8b 65 c0 <49> 8d bc 24 f8 15 00 00 48 89 f8 48 c1 e8 03 42 0f b6 04 28 84 c0
RSP: 0018:ffffc900053e7938 EFLAGS: 00000282
RAX: 68e427dc61943b00 RBX: 0000000000000000 RCX: 68e427dc61943b00
RDX: dffffc0000000000 RSI: ffffffff8aaabce0 RDI: ffffffff8afc6f80
RBP: ffffc900053e7990 R08: ffffffff90d945ff R09: 1ffffffff21b28bf
R10: dffffc0000000000 R11: fffffbfff21b28c0 R12: ffff888020f95a00
R13: dffffc0000000000 R14: ffff88807d800000 R15: ffff8880b8e3cac8
context_switch kernel/sched/core.c:5383 [inline]
__schedule+0x14da/0x44d0 kernel/sched/core.c:6699
preempt_schedule_common+0x82/0xc0 kernel/sched/core.c:6866
preempt_schedule+0xab/0xc0 kernel/sched/core.c:6890
preempt_schedule_thunk+0x1a/0x30 arch/x86/entry/thunk_64.S:45
__raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:152 [inline]
_raw_spin_unlock_irqrestore+0xfa/0x110 kernel/locking/spinlock.c:194
__do_sys_perf_event_open kernel/events/core.c:12916 [inline]
__se_sys_perf_event_open+0x1802/0x1c20 kernel/events/core.c:12567
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f3d0438f749
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f3d052fc038 EFLAGS: 00000246 ORIG_RAX: 000000000000012a
RAX: ffffffffffffffda RBX: 00007f3d045e5fa0 RCX: 00007f3d0438f749
RDX: ffffffffffffffff RSI: 0000000000000000 RDI: 0000200000000140
RBP: 00007f3d04413f91 R08: 0000000000000002 R09: 0000000000000000
R10: ffffffffffffffff R11: 0000000000000246 R12: 0000000000000000
R13: 00007f3d045e6038 R14: 00007f3d045e5fa0 R15: 00007ffc9897b788
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup