Re: Passing Output Parameters Between Jobs?

phamby

Oct 20, 2012, 5:28:41 PM
to rundeck-discuss
I'd be interested to see if anyone else has tackled this problem in a
different manner. I'd like to chain together a couple of shell and
Perl scripts passing the output from one to the other. Using a JSON
store appears to be an option, but for me, writing a wrapper script
would probably be an easier route.
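For what it's worth, the wrapper route can be as small as one script that runs the pieces in order and hands the output along, so Rundeck only ever sees a single script step. A rough sketch (step1.sh and step2.pl are placeholder names):

#!/bin/bash
# Hypothetical wrapper: run the shell piece, capture what it prints, then
# feed that value to the Perl piece on its command line.
set -e
first_result=$(./step1.sh "$1")
./step2.pl --input "$first_result"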

-Paul

On Oct 18, 8:48 am, Soumyak Bhattacharyya
<bhattacharyya.soum...@gmail.com> wrote:
> Hi Brian,
>
> This is just an idea for the problem statement you mentioned .....
>
> Jobs can receive their parameters from a JSON store ..... i.e. a URL, say of
> the form http://localhost:8080/jsonstore/options.json
>
> Now as a last step of a job, if it is able to post the outcome to the json
> store, the following job can extract from it.
>
> I must say that I have not experimented with this option, but I will do so and
> confirm.
>
> Regards
> Soumyak
>
> On Friday, October 12, 2012 8:46:23 AM UTC+5:30, Brian Clowers wrote:
>
> > RunDeckers:
>
> > I am struggling on how to pass the output of one job onto another.  After
> > searching the documentation and forum using terms I would consider relevant
> > I still have not found an example that explains how to make this happen
> > either via the command line or via the webUI.  If someone has a simple
> > example I would greatly appreciate the help.
>
> > Ideally it would be nice to have the output of one function pass a string
> > variable pointing to another path for a continuation of execution.  Granted,
> > this example could be done in a single job; I was looking for something
> > simple to demonstrate the principle.
>
> > For example some pseudo code for a workflow would be
> >     User Input Option: user_option = 2
> >     Job 1:
> >              Square Input:
> >                     return ${user_option}*${user_option} as output1
>
> >    Job 2:
> >             Replicate New Value:
> >                    for i in range(${output1}):
> >                          print i*i
>
> > Please Help,
>
> > Brian

Andrew Steady

Nov 7, 2012, 12:07:09 PM
to rundeck...@googlegroups.com
Hi,

But this is not useful when the subsequent job is a sudo job requiring secondary password authentication - such jobs cannot be a script, only a single command. In that situation the second job must execute as a single command line, and it's difficult to fetch values from a file all in one go. It also means passing sensitive information around, and I don't want it lying on the file system - I'd prefer to pass it in memory and discard it.

Andy

On Wed, Nov 7, 2012 at 4:30 PM, Quentin Hartman <qhar...@gmail.com> wrote:
Assuming the jobs are executing on the same machine, you could have the first job write its output to an intermediate file which is then read by the second job to get its options.
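In its simplest form that pattern looks something like this, reusing Brian's square/replicate example (a rough sketch only; the file path is arbitrary and the RD_OPTION_* naming assumes the usual environment variables that Rundeck script steps receive for job options):

#!/bin/bash
# Step/Job 1: square the user option and park the result in a scratch file.
echo $(( RD_OPTION_USER_OPTION * RD_OPTION_USER_OPTION )) > /tmp/job1-output.txt

#!/bin/bash
# Step/Job 2: read the value back and use it as this job's input.
output1=$(cat /tmp/job1-output.txt)
for i in $(seq 1 "$output1"); do
    echo $(( i * i ))
done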

Greg Schueler

Nov 7, 2012, 2:14:00 PM
to rundeck...@googlegroups.com
Hi all,

This is a feature request that has come up before and clearly would be useful.  In the current model, scripts and commands are executed in separate processes, with the standard output/error streams recorded into a log file. This doesn't really give us access to any variable state within the process that you might want to return as output, unless it is printed into the output stream and Rundeck parses this somehow.  I don't really like that mechanism, although it could work: it is a bit fragile as it requires the script author to know exactly how to format the data, and depends on the script not producing output that Rundeck might spuriously interpret as output data.

Essentially we are trying to tack on function call return values to a simple command & outputstream model.

Here's one idea: We could introduce something like "function" steps.  These could e.g. be a bash script (perhaps that defines a "rundeck_output()" function that returns a value, or perhaps the script simply sets certain environment variables like RD_OUT_X).  Rundeck could then execute the script with a bit of generated code appended.  The generated script code would use a mechanism to get output values back, which is a little less fragile because Rundeck can control how the output values are passed back, such as specially formatted output on the stdout.  (It would be useful to use named pipes, however this would only work on the local (rundeck server) node.  Any remote node connection will only give us stdout/stderr.)
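To make that concrete, the job author's part of such a "function" step might just set variables, and the generated tail appended by Rundeck would echo them back in a marker format that only Rundeck looks for. Purely illustrative, since none of this exists yet and the marker is made up:

#!/bin/bash
# --- the job author's script: do the work, set output variables ---
RD_OUT_IPADDRESS="10.0.0.12"
RD_OUT_HOSTNAME="web01"

# --- generated code Rundeck could append before running the script ---
# Emit every RD_OUT_* variable on stdout in a format Rundeck controls,
# so it can parse the values back out of the log stream.
for var in "${!RD_OUT_@}"; do
    echo "#RUNDECK-OUTPUT# ${var#RD_OUT_}=${!var}"
done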

thoughts?

Andrew Steady

Nov 9, 2012, 4:53:46 AM
to rundeck...@googlegroups.com
Hi,

I think you are on the right track.

Some requirements I came across:
- Parent job needs to call another sub-job with parameters returned by a preceding child job invocation
- Parent job needs to call another sub-job in a loop with parameters returned by a preceding invocation of a sibling job
- Any job needs to call any other job with parameters it dynamically calculated during its invocation
- Need to see password values passed through from parent job to child (I saw an issue for this but am not sure if it's in 144?)

One useful thing would be if:
- Sibling jobs were able to change the value of an option, and that change in value were reflected in the scope of the parent job, so that when the next child job is invoked those options have the updated value

- The looping item begs more questions. Perhaps it's just going too far to have certain Rundeck jobs configured as loops, where multi-value parameters returned by other jobs are used to parameterize each execution (and to know how many times to loop). Of course a script can invoke rd-jobs to start a job and pass params on the command line, but this is async (you don't know if it finished or with what result). So...

I wrote a script, which I place in /usr/bin, to help with this issue. It is called rd-job-sync and it starts a Rundeck job and waits until it is finished before returning. That way, in a Rundeck script job I can execute other jobs in a synchronous manner, passing them parameters dynamically created as I loop. It also features time-out and kill functionality. I've attached the script (as txt) in case it helps anyone. I invoke it like this from a script executed in Rundeck:

`rd-job-sync <jobName> <targetNode> <timeOutInSecs> "-param1Name $param1Value -param2Name $param2Value"` 
RESULT=$? 

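The looping case described above then becomes straightforward inside a Rundeck script step. Roughly like this (the job name, node name, timeout and the $vm_ip_list variable are all placeholders; rd-job-sync is the attached script):

#!/bin/bash
# Run the child job once per dynamically computed value, waiting for each
# run to finish before starting the next.
for vm_ip in $vm_ip_list; do
    rd-job-sync "bootstrap-image" "webnode01" 600 "-ipaddress $vm_ip"
    RESULT=$?
    if [ $RESULT -ne 0 ]; then
        echo "bootstrap of $vm_ip failed or timed out (exit $RESULT)" >&2
        exit "$RESULT"
    fi
done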

I also wrote another generic job to SCP multiple source-target pairs, which I think is something missing at the moment. I attached that job XML as txt too.

Rundeck rocks.

Andy
Attachments: copyfiles.txt, rd-job-sync.txt

Greg Schueler

Apr 15, 2013, 9:28:02 PM
to rundeck...@googlegroups.com
Hi Kelly, 

What you describe meshes with our plan for the internals of this feature: there is a shared context that workflow steps would be able to modify, and each Step operates with an identity defined by the step (and possible sub-step) number, and the node. Variables that are produced by steps would be stored in that context within a namespace produced by the identity. Adding scopes is a good idea, and it would allow a way to make explicit how the variables are shared.

As far as *producing* the variables+values from a step, we have this idea for the feature:  Essentially we will add a new plugin type that can filter the output from a step and then write data to the context.  It is up to the plugin and the job to define how to convert output from a step into variable values, but a couple examples would be: a plugin that is configured with a Regex, and captures "key=value" as variable output.  Another example is a plugin that can parse JSON and uses that data to capture some of the values. Step plugins would also be enhanced to allow plugin authors to add variables to the context in code, instead of relying on the filter plugins.
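As a concrete illustration of the first example (hypothetical, since the plugin type does not exist yet): a step's script would print plain key=value lines in whatever shape the filter plugin's regex is configured to match, and the plugin would lift the captures into the shared context:

#!/bin/bash
# Step script: emit values using a marker the (proposed) filter plugin is
# configured to capture, e.g. with a regex like ^RUNDECK:DATA:([^=]+)=(.*)$
echo "RUNDECK:DATA:ipaddress=10.0.0.12"
echo "RUNDECK:DATA:hostname=web01"
# A later step could then reference the captured values through whatever
# context variable naming the feature ends up using, e.g. ${data.ipaddress}.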

Passing values from one child job back to the parent is a bit simpler.  We would allow Jobs to define "exported" variables, taken from the final execution context, and then a parent job would also be able to inject those back into the running context when the child job is done.


On Sun, Apr 14, 2013 at 9:29 PM, Kelly Shkuratoff <kshku...@gmail.com> wrote:
I was thinking about this recently as well; we might have some good cases for wanting to pass parameters between jobs or threads. Something I've seen done elsewhere is a writable shared context which is inherited by child jobs from the parent, so they can post key values back for other threads or jobs to see. It can require a bit of careful thought: if you have a lot of child threads executing you'd need to manage uniqueness of results in the context yourself, but that can be done by prepending keys with meaningful IDs (job execution ID, thread ID, etc.). You can also add scopes to the context variables, such that when you write to it you write at the local/thread scope, or the global (parent) scope.

Kelly

Brice LOUIS

Oct 9, 2013, 12:37:06 PM
to rundeck...@googlegroups.com
Thanks for the work done on Rundeck.

What would be cool is to get the exit code of the batch back to the original job.
I mean:
we launch a batch file which executes Java code. The exit of the batch gives us a code, which must be interpreted by RD. With that interpretation of the exit code, we could trigger the next batch or not.

In the same way, another useful thing would be to be able, with the error handler for a batch, to rerun a batch, script or other step one or more times, until it succeeds or until a number of tries has been reached.

For example, the server on which the job must complete takes a long time to reboot, so the step comes back in error, and then the whole job ends in error. That is bad luck, just because of a timeout while the server reboots.
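Until something like that exists, a small wrapper inside the step can approximate the retry behaviour (a rough sketch; the command, the number of tries and the sleep interval are placeholders):

#!/bin/bash
# Retry the real command a few times before letting the step fail, e.g. to
# ride out a node that is still rebooting.
max_tries=5
for try in $(seq 1 "$max_tries"); do
    if ./the-real-command.sh; then
        exit 0
    fi
    echo "attempt $try of $max_tries failed, waiting before retrying..." >&2
    sleep 60
done
exit 1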

Thanks for your feedback if I have got something wrong.

nidkil

Jan 16, 2014, 2:07:21 PM
to rundeck...@googlegroups.com
Hi Greg,

From your post I understand you guys are working on a feature to pass/share parameters between jobs. Is there a release date planned?

Regards, nidkil

Levent Tutar

Mar 9, 2014, 3:23:11 AM
to rundeck...@googlegroups.com
Hi Greg,

I have been using Rundeck since last week. After my own tool selection for orchestration, it came out as the best. I would also like to use the result of one job in another.

Job 1: Create image: Create a virtual machine and assign its IP address to the variable ipaddress_of_the_vm
Job 2: Bootstrap image: Copy the Puppet agent to ipaddress_of_the_vm
Job 3: Register the Puppet agent with the Puppet master
etc.

I need some results between jobs and some unique variables in order to logically link the jobs to each other. To do this, I created one job called orchestrator that calls the other jobs.
But again, I am missing the value of the variable at this point.

I am also not sure whether I can realize this by having one job and the rest as steps. I also could not find how to pass/share/assign parameters between steps; assigning options happens at the beginning of the job and not later.

Using JSON as a repository between jobs/steps is mentioned, but again you have to make sure that each job writes to a unique file, otherwise you will have concurrency problems. Making sure that the related job reads the correct JSON file will also be a problem. I also could not find a way to pass identifiers/correlation IDs down to the other jobs.

Any ideas/suggestions until you have this feature are welcome.

Kind regards,

Levent

Levent Tutar

Mar 9, 2014, 7:14:16 AM
to rundeck...@googlegroups.com
Hi,

I tried to add my comment to this post this morning. I do not think that it went OK. I will try again.

After some research into orchestration servers, we decided to use Rundeck. Thank you for providing it; in my opinion, it is the best.

We identified several jobs/steps that we would like to orchestrate.

1) Create a virtual image. Capture its IP address.

2) Copy the Puppet agent to this IP address.

3) Install the Puppet agent at this IP address.

4) Register the Puppet agent with the Puppet master.

etc.

Is it correct that until a new feature is implemented, I can only use a JSON store to pass parameters like the IP address between jobs?
I define options in my jobs during creation, but I cannot assign values to these options since I do not know them yet; they are only known at run time.

If I had one job and the rest as steps, could I then pass values between steps?

How can I solve concurrency issues? If I hardcode the JSON file name, then I cannot run the jobs simultaneously; they may overwrite each other's values.

Can somebody help us with these questions? This would help us begin with a good design and implementation.

What is, at this moment, the best way to pass values between jobs and steps?

I am using rundeck 2.0.1.

Kind regards,

Levent

Greg Schueler

Mar 10, 2014, 12:28:36 PM
to Levent Tutar, rundeck...@googlegroups.com
Hi Levent,

Passing data between steps is on our roadmap but is not currently implemented.  You would have to use another mechanism to store intermediate data.

You can use the execution ID as an identifier for the current workflow execution, and you could use the job name or ID to distinguish between jobs: http://rundeck.org/docs/manual/jobs.html#context-variables
--
Greg Schueler

Levent Tutar

Mar 10, 2014, 2:28:14 PM
to rundeck...@googlegroups.com, Levent Tutar

Hi Greg,

Thank you. I also saw the context variables this morning, and it is working. I am passing the job.id of the master job to the jobs that are defined as steps. This way, they can all refer to the same JSON file to write and retrieve variables.
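For anyone following along, that setup looks roughly like this (a sketch only: the file path, the handoff_id option name and the jq dependency are assumptions, and the RD_OPTION_* environment variables are how Rundeck script steps normally see job options). The parent job passes ${job.execid} (or ${job.id}) to each child job reference as an option, e.g. -handoff_id ${job.execid}, and the children use it to agree on one store file:

#!/bin/bash
# Child job 1: create the VM and record its address in the shared store.
STORE="/tmp/rundeck-handoff-${RD_OPTION_HANDOFF_ID}.json"
ip_address=$(create_vm)   # placeholder for whatever actually builds the image
printf '{"ipaddress_of_the_vm": "%s"}\n' "$ip_address" > "$STORE"

#!/bin/bash
# Child job 2: read the value back and bootstrap the machine.
STORE="/tmp/rundeck-handoff-${RD_OPTION_HANDOFF_ID}.json"
ip_address=$(jq -r '.ipaddress_of_the_vm' "$STORE")
scp puppet-agent.tar.gz "root@${ip_address}:/tmp/"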

Kind regards,

Levent

kas...@web.de

Sep 9, 2014, 6:51:57 AM
to rundeck...@googlegroups.com, gr...@dtosolutions.com
Hi Greg,
I want to use this to pass the output of the first Job (curl http://somehost/generatevalue) to the next Job as an option variable. The second step would then call the next URL with the generated value, e.g.: http://somehost/passoption?option=${prevjob.mygeneratedvalue}
Is this possible yet - if so, how?
Thanks in advance,
Regards Chris