OpenResty: a single worker occasionally spikes to 100% CPU


renguang wu

Jun 18, 2024, 4:00:22 AM
to openresty
openresty: the problem persists after upgrading from 1.19.3 to 1.21.4.2. So far the only clue comes from the flame graph, which points to ssl_shutdown; there are no other errors in the logs, and I cannot reproduce the issue. Does anyone have ideas on how to track this down?

high_cpu_usage.png

renguang wu

Jul 14, 2024, 11:13:18 PM
to openresty
openresty: 1.21.4.2

worker_processes 100;
worker_cpu_affinity auto;

It looks like the time is going into the kernel function _raw_read_unlock_irqrestore. Could having this many workers cause heavy CPU contention? Could any expert help analyze this?
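As a hedged aside (my own suggestion, not from the thread): 100 workers on a machine with fewer CPU cores oversubscribes the CPUs and can amplify kernel-side lock contention. The usual nginx baseline is one worker per core:

```nginx
# A common baseline (an assumption to test, not the poster's verified fix):
# size the worker pool to the number of CPU cores and pin each worker
# to its own core.
worker_processes auto;
worker_cpu_affinity auto;
```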

Snipaste_2024-07-15_11-09-11.png

Junlong Li

Jul 15, 2024, 4:06:41 AM
to open...@googlegroups.com
Could you post the complete flame graph in SVG format?

--
--
This mail is from the "openresty" list, dedicated to technical discussion!
Subscribe: send an empty mail to openresty...@googlegroups.com
Post: send mail to open...@googlegroups.com
Unsubscribe: send mail to openresty+...@googlegroups.com
Archive: http://groups.google.com/group/openresty
Site: http://openresty.org/
Repo: https://github.com/agentzh/ngx_openresty
Tutorials: http://openresty.org/download/agentzh-nginx-tutorials-zhcn.html
---
You received this message because you are subscribed to the "openresty" group on Google Groups.
To unsubscribe from this group and stop receiving its emails, send an email to openresty+...@googlegroups.com
To view this discussion on the web, visit https://groups.google.com/d/msgid/openresty/28a9c346-b750-4726-b264-d44c1c840f46n%40googlegroups.com

417132187

Jul 15, 2024, 4:06:54 AM
to 'Junlong Li' via openresty

xue

Jul 15, 2024, 5:54:57 AM
to openresty
Are there a lot of Lua scripts running, or none at all?


On Monday, July 15, 2024 at 16:06:54 UTC+8, 417132187 wrote:

renguang wu

Jul 23, 2024, 9:59:14 PM
to openresty
The flame graphs are attached; this is running under APISIX 2.13.1, with no modifications made to OpenResty.
11749-c-on-cpu.svg
11749-lua.svg

Junlong li

Jul 23, 2024, 11:13:05 PM
to openresty
From this flame graph, it looks like the overhead of stap itself. Does the CPU hit 100% when you are not profiling?

renguang wu

Jul 24, 2024, 3:13:15 AM
to openresty
The flame graph was captured while the CPU was at 100%; normally it sits around 20-30%.

renguang wu

Jul 29, 2024, 8:41:54 AM
to openresty
Stepping in with gdb, it is stuck at the underlying system call inside ngx_ssl_shutdown:


(gdb) l
73 in ../sysdeps/unix/syscall-template.S
(gdb) bt
#0  0x00007fe6189cd45a in epoll_ctl () at ../sysdeps/unix/syscall-template.S:78
#1  0x0000564db6a7bfcb in ngx_epoll_add_event (ev=0x7fe59c733918, event=<optimized out>, flags=<optimized out>) at src/event/modules/ngx_epoll_module.c:639
#2  0x0000564db6a720be in ngx_handle_read_event (rev=<optimized out>, flags=flags@entry=0) at src/event/ngx_event.c:317
#3  0x0000564db6a8379a in ngx_ssl_shutdown (c=0x7fe59cdaab60) at src/event/ngx_event_openssl.c:3752
#4  ngx_ssl_shutdown (c=0x7fe59cdaab60) at src/event/ngx_event_openssl.c:3566
#5  0x0000564db6a839e9 in ngx_ssl_shutdown_handler (ev=<optimized out>) at src/event/ngx_event_openssl.c:3862
#6  0x0000564db6a7c4f7 in ngx_epoll_process_events (cycle=<optimized out>, timer=<optimized out>, flags=1) at src/event/modules/ngx_epoll_module.c:1001
#7  0x0000564db6a71f69 in ngx_process_events_and_timers (cycle=cycle@entry=0x564db881db00) at src/event/ngx_event.c:262
#8  0x0000564db6a7a430 in ngx_worker_process_cycle (cycle=cycle@entry=0x564db881db00, data=data@entry=0xd) at src/os/unix/ngx_process_cycle.c:825
#9  0x0000564db6a78d0b in ngx_spawn_process (cycle=cycle@entry=0x564db881db00, proc=proc@entry=0x564db6a7a380 <ngx_worker_process_cycle>, data=data@entry=0xd, name=name@entry=0x564db6bed44d "worker process",
    respawn=respawn@entry=-4) at src/os/unix/ngx_process.c:199
#10 0x0000564db6a7a974 in ngx_start_worker_processes (cycle=cycle@entry=0x564db881db00, n=100, type=type@entry=-4) at src/os/unix/ngx_process_cycle.c:396
#11 0x0000564db6a7b31c in ngx_master_process_cycle (cycle=0x564db881db00) at src/os/unix/ngx_process_cycle.c:247
#12 0x0000564db6a508ea in main (argc=<optimized out>, argv=<optimized out>) at src/core/nginx.c:392
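As an illustrative aside (my own sketch, not nginx code or a confirmed diagnosis): the backtrace ends in epoll_ctl called from ngx_handle_read_event during SSL shutdown, which is consistent with the event loop repeatedly re-arming and re-waking on the same connection. The minimal Linux-only example below shows the general failure mode with level-triggered epoll: a readable fd that is never drained wakes the loop on every poll, so a real event loop in this state burns 100% CPU.

```python
import os
import select

# A readable fd that is never drained keeps a level-triggered epoll
# loop waking on every iteration -- the general shape of a spinning
# event loop (Linux only: select.epoll).
r, w = os.pipe()
ep = select.epoll()
ep.register(r, select.EPOLLIN)  # level-triggered by default

os.write(w, b"x")  # one byte written, never read: fd stays readable

wakeups = 0
for _ in range(5):
    if ep.poll(timeout=0):  # returns an event immediately each time
        wakeups += 1

print(wakeups)  # every iteration woke up; in a real loop this spins the CPU

ep.close()
os.close(r)
os.close(w)
```

The fix in a real server is to either drain/close the fd or deregister it once the shutdown is complete, so the readiness condition goes away.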