Purge history from Rundeck due to extremely slow UI


Jennifer Fountain

Sep 18, 2014, 10:10:29 AM
to rundeck...@googlegroups.com
My UI is running extremely slowly, and it appears to happen when there is a lot of execution/log history. I am running MySQL as the backend DB. Which tables should I truncate to remove some history? The two I think it should be are base_report and execution, but I wanted confirmation that that is correct.

Thanks!

Alex Honor

Sep 18, 2014, 11:37:44 AM
to rundeck...@googlegroups.com
Hi Jennifer,

You can use the bulk delete function in the GUI or API.
Performance might also be improved by indexing the database, in case you need to retain a lot of history.
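For the API route, building a bulk-delete request (the bulk-delete endpoint is available from API v12) can be sketched as below. This is only a hedged illustration, not Alex's exact method: the base URL, token, and execution ids are placeholders, and the request is constructed but not actually sent.

```python
import json
import urllib.request

# Sketch only: build a POST to Rundeck's bulk execution delete endpoint
# (API v12+). Token and ids below are placeholders.
def build_bulk_delete_request(base_url, token, execution_ids):
    payload = json.dumps({"ids": execution_ids}).encode()
    return urllib.request.Request(
        f"{base_url}/api/12/executions/delete",
        data=payload,
        method="POST",
        headers={
            "X-RunDeck-Auth-Token": token,
            "Content-Type": "application/json",
            "Accept": "application/json",
        },
    )

req = build_bulk_delete_request("http://localhost:4440", "<your API token>", [101, 102, 103])
# urllib.request.urlopen(req)  # uncomment to actually send the request
```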

Thanks




--

Alex Honor

[SimplifyOps, Inc | a...@simplifyops.com ]

Be sure to comment and vote on Rundeck Feature Development!

Jennifer Fountain

Sep 18, 2014, 2:38:23 PM
to rundeck...@googlegroups.com
I was thinking about indexes; thanks for the info. That worked for us!

Carey Rogers

Sep 24, 2014, 12:36:41 PM
to rundeck...@googlegroups.com
Would you be willing to share the SQL you executed for creating these indexes? 

I think it would be useful for many of us to know the tables that need to be altered.

Thanks,
Carey

David Newman

Jan 5, 2015, 2:35:11 PM
to rundeck...@googlegroups.com
I've been looking for the details on these indexes, but I can't find them. Does anyone have them? I am suffering from a slow UI as well, with 100k+ executions. I can work out the SQL, so the table and column names would be most helpful.

Thanks
-Dave

cyco...@hotmail.com

Jun 4, 2015, 8:23:43 PM
to rundeck...@googlegroups.com
We had a similar problem, with around 650,000 executions in the log history, two jobs having around 300,000 each. We are using the MySQL backend.
Here is what we did.
1. Tried the Python "delete old logs" script https://github.com/danifr/danifr-rundeck/blob/master/py_scripts/deleteoldlogs.py but it ran out of memory trying to deal with 300K logs.
2. Tried the bulk delete API, but it looked like it was taking about 4 seconds per execution and was doing it all in one MySQL transaction, so it was going to take too long and had a high chance of failure.
3. Used the basic, non-invasive MySQL debugging tool pt-query-digest with tcpdump (https://www.percona.com/doc/percona-toolkit/2.2/pt-query-digest.html) and figured out the following indexes needed to be added to improve deletion performance. After adding the indexes, each individual execution log deletion went from taking 4 seconds to around 0.1 s:
use rundeck;
ALTER TABLE `base_report` ADD INDEX `jc_exec_id` (`jc_exec_id`);
ALTER TABLE `base_report` ADD INDEX `class` (`class`);
ALTER TABLE `base_report` ADD INDEX `version` (`version`);
ALTER TABLE `execution` ADD INDEX `retry_execution_id` (`retry_execution_id`);
ALTER TABLE `execution` ADD INDEX `version` (`version`);
4. While this improved the bulk deletion rate, it was still going to take a good number of hours to delete all the execution logs, and it could still fail at any point, at which point nothing would have been deleted.
5. So finally we used a simple bash command to iterate over N execution logs and delete them one at a time; our two big 300K job logs took about 4 hours each. Just run the command over and over until you are happy with how many are left (it deletes the oldest first):
# Run on the Rundeck server
for i in `mysql -u root -N -p -B -e "select jc_exec_id from base_report where report_id like '%<your job group>/<your job name>%' limit 10000;" rundeck`; do echo "Deleting $i"; curl -X DELETE http://localhost:4440/api/12/execution/$i -H "X-RunDeck-Auth-Token: <your API token>" --connect-timeout 10 -m 10; done
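The same batch-and-delete idea can be sketched in Python. This is a hedged illustration of the loop's logic only: the fetch and delete callables are injected stand-ins, so the batching can be shown without a live MySQL or Rundeck instance. In real use they would wrap the SELECT ... LIMIT query and the DELETE /api/12/execution/<id> call above.

```python
# Hedged sketch of the batch-and-delete loop: fetch a bounded batch of
# execution ids, delete them one at a time, repeat until nothing is left.
def delete_in_batches(fetch_batch, delete_one, batch_size=10000):
    deleted = 0
    while True:
        batch = fetch_batch(batch_size)  # e.g. SELECT jc_exec_id ... LIMIT batch_size
        if not batch:
            break
        for exec_id in batch:
            delete_one(exec_id)          # e.g. DELETE /api/12/execution/<id>
            deleted += 1
    return deleted

# Demo with an in-memory stand-in for the base_report table:
pending = list(range(25))

def fake_fetch(limit):
    return pending[:limit]

def fake_delete(exec_id):
    pending.remove(exec_id)

print(delete_in_batches(fake_fetch, fake_delete, batch_size=10))  # prints 25
```

Running it in small batches like this keeps each unit of work short, which is exactly why it avoids the long-transaction failure mode described in step 2.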

NOTE: Make sure you have given your API user permission to delete executions. This is done by editing /etc/rundeck/apitoken.aclpolicy and adding 'delete_execution' to the 'project' ACL:
description: API Application level access control
context:
  application: 'rundeck'
for:
  resource:
    - equals:
        kind: system
      allow: [read] # allow read of system info
  project:
    - match:
        name: '.*'
      allow: [read,delete_execution] # allow view and execution deletion of all projects
  storage:
    - match:
        path: '(keys|keys/.*)'
      allow: '*' # allow all access to manage stored keys
by:
  group: api_token_group

Finally, we have set up a daily Rundeck job which runs the "delete old logs" Python script, so we should not have this problem again.

Erick Franco

Sep 10, 2015, 2:16:49 PM
to rundeck-discuss
The reason the bulk delete is still slow after adding the index is that the jc_exec_id column is defined as varchar(255), while the query does something like "delete from base_report where jc_exec_id = 1111". Notice that it compares the column against an unquoted number, which forces an implicit type conversion and makes the index unusable.

You also need to convert jc_exec_id to an int; then it should run like a madman.
ALTER TABLE `base_report` MODIFY COLUMN `jc_exec_id` int;

Thibault Richard

Jan 19, 2017, 10:28:07 AM
to rundeck-discuss
You can use the script https://gist.github.com/unicolet/af648a97163ce6b44645 to do the cleanup.

Luca Busin

Mar 28, 2017, 9:21:52 PM
to rundeck-discuss
I have extended that script to also take care of the workflow, workflow_step and workflow_workflow_step tables, otherwise those will keep growing.
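For reference, that extra cleanup might look like the hedged sketch below. The column names (execution.workflow_id, scheduled_execution.workflow_id, workflow_workflow_step.workflow_commands_id, workflow_workflow_step.workflow_step_id) are assumptions about the Rundeck 2.x MySQL schema, not taken from the script; verify them against your own database and take a backup before running anything like this.

```sql
-- SKETCH ONLY: column names below are assumed, verify against your schema.
-- Remove workflows no longer referenced by any execution or job,
-- then the step rows that hung off them.
DELETE FROM workflow_workflow_step
 WHERE workflow_commands_id NOT IN (SELECT workflow_id FROM execution)
   AND workflow_commands_id NOT IN (SELECT workflow_id FROM scheduled_execution);
DELETE FROM workflow_step
 WHERE id NOT IN (SELECT workflow_step_id FROM workflow_workflow_step);
DELETE FROM workflow
 WHERE id NOT IN (SELECT workflow_id FROM execution)
   AND id NOT IN (SELECT workflow_id FROM scheduled_execution);
```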
