Rundeck CPU peak causes unresponsive API and GUI; Rundeck 3.2.2-20200204


S Ruen

Feb 17, 2022, 6:39:18 AM2/17/22
to rundeck-discuss
Dear everyone,

we had two issues: A. a CPU peak, and B. a broken Rundeck GUI,
on Rundeck 3.2.2-20200204
-----
Issue scenario

A. CPU Peak issue 

around 19:40 we had a CPU peak from our java-rundeck process, which made the following services unresponsive:
1. Rundeck GUI
2. Rundeck API
*** but the job scheduler was still working, as we saw new entries arriving in the rundeck.execution log

we tried to
1. restart the rundeckd service
2. reboot the server
but neither method solved the CPU peak: after the Rundeck service started again, CPU usage rose and java-rundeck consumed all cores again, and after a few minutes the GUI and API became unresponsive again.
cpu_peak.PNG
so to resolve the incident we had to roll back our Rundeck VM to the latest snapshot that did not show the CPU peak.

B. Rundeck GUI broken

after the CPU issue was gone, we encountered a broken GUI (the CSS/script redirect paths were not correct, even though we had already set grails.serverURL).
The issue was solved after we updated /etc/rundeck/profile with -Dserver.useForwardHeaders=true and -Dserver.contextPath=/rundeck,
following this link: https://github.com/rundeck/rundeck/issues/3851
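For reference, the change looked roughly like this (a sketch only; the exact RDECK_JVM line depends on your installation, and /rundeck is our context path):

```shell
# /etc/rundeck/profile (excerpt, sketch) - appending the two flags to the
# JVM options; adjust -Dserver.contextPath to match your own deployment
RDECK_JVM="$RDECK_JVM -Dserver.useForwardHeaders=true -Dserver.contextPath=/rundeck"
```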
----
System info

hardware info
:cpu : xeon 4 core
:memory : 20 GB
-
rundeck info
:rundeck version : 3.2.2-20200204
:rundeck Xms : 512MB
:rundeck Xmx : 4GB
:Backend database : mysql
-----
Our investigation

we received the CPU peak alarm around 19:40

around 19:20 a user was trying to kill a job; the user said they couldn't kill it, so they kept retrying.
we also checked many Rundeck log files and the database log for the same time window, and found something suspicious in 2 logs

1. rundeck.access
before the GUI and API became unresponsive, rundeck.access showed a lot of activity:
web.requests "GET /rundeck/execution/ajaxExecState/5495379"
web.requests "POST /rundeck/execution/cancelExecution"
* probably the user trying to kill the job
* this project uses node_executor: openssh

2. service.log
there is an error (attached as service_error.txt):
org.eclipse.jetty.io.RuntimeIOException: org.eclipse.jetty.io.EofException
org.eclipse.jetty.io.EofException: null
java.io.IOException: Broken pipe

-----
FYI
the attached files contain
1. a thread dump taken right after rebooting the Rundeck service, before the GUI became unresponsive
2. service.log from before the CPU peak
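For anyone debugging a similar peak, this is roughly how a hot thread from `top -H` can be matched against a jstack thread dump (a sketch; jstack ships with the JDK, and the thread id below is a hypothetical placeholder, not one from our incident):

```shell
# Sketch: match a hot thread from `top -H` to a jstack thread dump.
#   top -b -H -n 1 -p <rundeck-jvm-pid>     # per-thread CPU; note the hot TID
#   jstack <rundeck-jvm-pid> > threads.txt  # capture the dump
# top prints decimal thread ids, jstack prints them in hex as nid=0x...:
TID=12345                        # hypothetical hot thread id from top -H
NID=$(printf 'nid=0x%x' "$TID")
echo "$NID"                      # search this token in the thread dump
```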
-----

Any suggestions on the cause of 1. the CPU issue and 2. the broken Rundeck GUI would be much appreciated

Best Regards,
Benz
metric_thread_primary.txt
service_error.txt

rac...@rundeck.com

Feb 17, 2022, 7:25:03 AM2/17/22
to rundeck-discuss

Hi Benz,

Regarding the first issue, you're probably facing this issue. You can follow this guide to tune and assign more resources to Rundeck; also try the latest Rundeck version, as the gap between Rundeck 3.2 and 3.4 is huge. About the second issue, that's the correct way to configure Rundeck when the instance is behind a proxy server. As advice, avoid editing the profile file directly; add those configurations in the rundeck file instead, take a look.
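Following that advice, the overrides can live in the service's defaults file rather than in profile; a sketch, assuming an RPM-based install (on Debian-based systems the file is /etc/default/rundeckd):

```shell
# /etc/sysconfig/rundeckd (sketch) - extra JVM options set here survive
# package upgrades, unlike direct edits to /etc/rundeck/profile
RDECK_JVM_OPTS="-Dserver.useForwardHeaders=true -Dserver.contextPath=/rundeck"
```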

Regards.

S Ruen

Feb 17, 2022, 8:38:04 AM2/17/22
to rundeck-discuss
Dear rundeck team,

we sincerely appreciate your help and information.

issue A: for now we are increasing rundeck Xmx from 4 GB to 8 GB, which we hope will solve this issue for a long time.
*** FYI: we have about 500 job executions per hour
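The heap change, as it might look in the JVM options (a sketch using the values from this thread; where the flags live depends on the install):

```shell
# sketch: raise the Rundeck heap ceiling from 4 GB to 8 GB, keeping
# the 512 MB initial heap mentioned earlier in this thread
RDECK_JVM_OPTS="-Xms512m -Xmx8g"
```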

issue B: we will move our customizations from /etc/rundeck/profile into /etc/sysconfig/rundeckd, following the guide's suggestion.
*** FYI: our Rundeck app runs behind nginx (a reverse proxy), but before we hit this CPU peak we could access the app without -Dserver.useForwardHeaders=true and -Dserver.contextPath=/rundeck (we had only set grails.serverURL), so as you mentioned our configuration for running behind a proxy was probably insufficient.

Again we are grateful for your help.

Best Regards,
Benz


On Thursday, February 17, 2022 at 7:25:03 PM UTC+7, rac...@rundeck.com wrote: