Run Per-Node Jobs on Rundeck Server


Dave Woodman

Sep 29, 2015, 10:59:10 AM
to rundeck-discuss
Problem:
I wish to be able to run a job (script) on the Rundeck server based on matched nodes, one execution per node, preferably on separate threads.

If I want to run the script locally ("Execute locally" radio button selected), then I cannot match nodes; if I can match nodes ("Dispatch to Nodes" selected), then the server will try to dispatch the scripts.

Use case:
I wish to use the Rundeck server to provision virtual machines. The nodes to be provisioned are defined resources, but they obviously will not be available until they have been provisioned.

Current workaround:
I currently specify the nodes to be provisioned a second time, as a job option. This requires that the nodes be defined in two places, and it also means that the script runs sequentially rather than in parallel, so it is undesirable on both counts.

Environment:
Rundeck 2.5.3 via RPM on CentOS 7.1, OpenJDK 1.8.0_60

Sam

Sep 30, 2015, 1:53:37 AM
to rundeck-discuss
Can you explain what the provisioner script actually does? Say, for example, it launches instances based on parameters you pass - then you only need to run the script on the local machine (the Rundeck server), and in that case the parallel-thread scenario is ruled out.
The other option I see is to pass the to-be-provisioned nodes via the URL as an option. Since you are building the logic in your provisioner script, there is not much Rundeck can do here other than take the parameters and pass them to your script to run.
If you want to run this in parallel, you need a list of matched nodes on which the provisioner script can run, but breaking out the nodes to be launched across each matched node would be a challenge for you.

Dave Woodman

Sep 30, 2015, 2:29:24 AM
to rundeck-discuss

Indeed this is pretty much what happens - I have one script that actually creates the VMs (from cloning) based on the list of required machines passed to it, and then configures them. Another script will start them and further scripts will kick off applications. The last script, of course, is run as a distributed job.

I understand the restriction on parallel running at the host (a shame it has to be so, though!). It would be good, however, if there were a way to get at the resource list directly and filter it to determine the target machine list. An option provider seems to be my only choice here.

Thanks for your answer - I might explore some other options. I'll post if I get anything not-too-ugly to work.
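For the option-provider route, Rundeck can populate a job option from a remote URL that returns a JSON array of values. A minimal sketch of producing that JSON from the resource list, assuming a much-simplified flat resources.yaml layout (a real resource model file may be structured differently, and the file path here is just a placeholder):

```shell
#!/bin/sh
# Sketch: turn a (simplified, assumed) resources.yaml into the JSON array
# an option provider URL must return, e.g. ["vm01","vm02"].
RESOURCES="${1:-/tmp/resources-demo.yaml}"
cat > "$RESOURCES" <<'EOF'
vm01:
  hostname: vm01.example.com
vm02:
  hostname: vm02.example.com
EOF

# Top-level keys (the node names) become the option values.
names=$(sed -n 's/^\([^ :][^:]*\):$/\1/p' "$RESOURCES")
json="["
first=1
for n in $names; do
  [ "$first" -eq 1 ] || json="$json,"
  json="$json\"$n\""
  first=0
done
json="$json]"
echo "$json"   # e.g. ["vm01","vm02"]
```

Serving this output from a URL the server can reach would let the job's option be populated from the same resource definitions, instead of maintaining the node list in two places.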


Sam

Sep 30, 2015, 3:02:17 AM
to rundeck-discuss
For your use case,  you could break out the workflow like so -
  • Job takes the parameters - a list of required machines
  • Job has multiple tasks - task 1 runs the script which provisions, task 2 calls the script which starts up the nodes
  • Task 3 calls another job that runs as a parallel distributed job and kicks off the applications. This is a bit tricky, but the key is to pass the list of launched nodes in an option, use that option in your node filter, and define a thread count in that job so it runs as a true distributed job
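Task 3 above could be driven through Rundeck's run-job API. A minimal sketch, assuming placeholder values for the server URL, API token, job UUID, and node names, and a child job whose node filter references ${option.nodes}; the API version number may differ with your Rundeck release, and the curl command is only echoed here, not executed:

```shell
#!/bin/sh
# Hypothetical values: substitute your own server URL, API token, and job UUID.
RD_URL="http://rundeck.example.com:4440"
RD_TOKEN="REPLACE-ME"
JOB_ID="JOB-UUID"
NODE_LIST="vm01,vm02,vm03"   # the freshly launched nodes

# The run-job endpoint takes job options via argString; the child job's
# node filter would then reference ${option.nodes}. Built and echoed
# here rather than executed.
CMD="curl -s -X POST -H 'X-Rundeck-Auth-Token: $RD_TOKEN' --data-urlencode 'argString=-nodes $NODE_LIST' $RD_URL/api/13/job/$JOB_ID/run"
echo "$CMD"
```
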
Hope that helps and works for you :)

Dave Woodman

Sep 30, 2015, 4:11:09 AM
to rundeck-discuss
That is pretty much how I am doing things at present.

I might try something along the lines of:

The first job issues API calls (a quick piece of curl here, perhaps) to execute the provisioning per node, and then polls to verify that all the jobs have completed. I'll make it general-purpose by passing the UUID of the subjob as an option.

As long as the Multiple Executions flag is set, this stands a chance of doing what I want.
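That per-node dispatch-and-poll idea might be sketched like this, with the HTTP call stubbed out so nothing is actually executed; SUBJOB-UUID, the server URL, and the node names are all placeholders:

```shell
#!/bin/sh
# Sketch only: SUBJOB and RD_URL are placeholders for the real values.
SUBJOB="SUBJOB-UUID"
RD_URL="http://rundeck.example.com:4440"

# Wrapper so the HTTP call sits in one place; a real version would be
# roughly: curl -s -H "X-Rundeck-Auth-Token: $RD_TOKEN" "$@"
rd_api() { echo "API-CALL: $*"; }

# One execution per matched node, each passing the node name as an option.
# This relies on the subjob having Multiple Executions enabled.
dispatched=""
for node in vm01 vm02 vm03; do
  rd_api -X POST --data-urlencode "argString=-node $node" \
    "$RD_URL/api/13/job/$SUBJOB/run"
  dispatched="$dispatched $node"
done

# A real script would then poll the execution endpoint for each returned
# execution id until none of them reports a "running" status.
```
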

Sam

Sep 30, 2015, 4:36:31 AM
to rundeck-discuss
Yes, that would work. Also, an exit code from your script can trigger the next job if your job workflow runs one step after another.

Espen Blikstad

Oct 2, 2015, 4:03:44 AM
to rundeck-discuss
I have just created a job running a "Local command" step with "Dispatch to Nodes" selected. The command actually runs locally on the Rundeck server, once for each node selected for the job, and I use the RD_NODE_HOSTNAME environment variable to target the remote node in the script the "local command" executes. Works like a charm.

My job is a restart job for Windows nodes: it gets the uptime using winrs (the SSH equivalent on Windows), executes a restart command, then uses winrs to get the uptime again, and finishes when it gets a different uptime value (meaning the server is up and running again). This job wouldn't run properly on the remote node itself.
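Espen's uptime-compare loop, run as a local step, might look roughly like this. The winrs call is replaced by a hypothetical get_uptime stub (winrs only exists on Windows), so everything except the RD_NODE_HOSTNAME handling is an assumption for the sketch:

```shell
#!/bin/sh
# RD_NODE_HOSTNAME is exported by Rundeck for each dispatched node;
# the fallback value here is only for running the sketch standalone.
NODE="${RD_NODE_HOSTNAME:-winhost01}"

# Hypothetical wrapper; a real version would be something like:
#   winrs -r:"$NODE" "net statistics server"
get_uptime() { date +%s; }

before=$(get_uptime)
# The restart itself would be issued here, e.g. winrs -r:"$NODE" "shutdown /r /t 0"
after=$before
tries=0
while [ "$after" = "$before" ] && [ "$tries" -lt 30 ]; do
  sleep 1            # poll interval shortened for the sketch
  after=$(get_uptime)
  tries=$((tries + 1))
done
echo "node $NODE reports a new uptime; it is back up"
```
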

Perhaps "Dispatch to Nodes" could be renamed to "Run for each node"? Whether the "command" is run remotely or locally depends on the job step.

Regards,
Espen Blikstad