Starting a new workflow within a running workflow using the same engine


Idan Moyal

Oct 20, 2013, 7:44:56 AM
to openwfe...@googlegroups.com
Hi,

According to this page: http://ruote.rubyforge.org/part/engine_participant.html, it is possible to launch workflows using the EngineParticipant.

Is it possible to use the participant to launch workflows on the same engine?
If so, the example on that page describes how to pass Ruote::FsStorage info to the engine; can I use an existing Ruote::HashStorage instance instead?


Thanks,
Idan

John Mettraux

Oct 20, 2013, 6:14:40 PM
to openwfe...@googlegroups.com

On Sun, Oct 20, 2013 at 04:44:56AM -0700, Idan Moyal wrote:
>
> According to this page:
> http://ruote.rubyforge.org/part/engine_participant.html
> It is possible to launch workflows using the EngineParticipant.
>
> Is it possible to use the participant to launch workflows on the same
> engine?

Hello Idan,

well, you usually do that by simply using "subprocess" as in

```ruby
Ruote.define do
  subprocess 'do_that_other_thing'
end
```

> If so, the example on that page describes how to pass Ruote::FsStorage info
> to the engine; can I use an existing Ruote::HashStorage instance instead?

No, there is currently no way to pass an existing Ruote::HashStorage. If you
need that, I can add it to master.
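(For context, the registration on that doc page hands the participant the info needed to build its own storage, rather than a live storage instance; roughly like this, with the option keys recalled from memory and thus approximate:)

```ruby
# Approximate registration of an engine participant: it receives a storage
# *class* plus construction arguments, not an existing storage object.
dashboard.register_participant(
  'engine_x',
  Ruote::EngineParticipant,
  'storage_class' => Ruote::FsStorage,
  'storage_path'  => 'ruote/storage',
  'storage_args'  => 'other_engine_work')
```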

Another way, though:

you can set :engine to nil and that'll run the subprocess locally:

```ruby
Ruote.define do
  subprocess 'do_this_other_thing', :engine => 'engine_x'
  subprocess 'do_that_other_thing', :engine => nil
end
```

which is equivalent to:

```ruby
Ruote.define do
  do_this_other_thing :engine => 'engine_x'
  do_that_other_thing :engine => nil
end
```

which is equivalent to:

```ruby
Ruote.define do
  do_this_other_thing :engine => 'engine_x'
  do_that_other_thing
end
```

Perhaps you prefer to use:

```ruby
Ruote.define do
  engine_x :pdef => 'do_this_other_thing'
  engine_self :pdef => 'do_that_other_thing'
end
```

In that case, tell me and I'll add a way to pass an existing storage when
registering an engine participant.


Kind regards,

--
John Mettraux - http://lambda.io/jmettraux

Idan Moyal

Oct 21, 2013, 3:15:26 AM
to openwfe...@googlegroups.com
Thanks for the answer.

What would you suggest if I need to launch an independent workflow from my "main" workflow?
AFAIK sub processes are bound to their containing workflow.

I guess I can always write a participant for launching workflows on the current engine.


Idan

John Mettraux

Oct 21, 2013, 3:43:11 AM
to openwfe...@googlegroups.com

On Mon, Oct 21, 2013 at 12:15:26AM -0700, Idan Moyal wrote:
>
> What would you suggest if I need to launch an independent workflow from my
> "main" workflow?
> AFAIK sub processes are bound to their containing workflow.
>
> I guess I can always write a participant for launching workflows on the
> current engine.

Hello again,

yes, you can do that.

You could also do

```ruby
subprocess 'x', :forget => true
```

The subprocess would have the same wfid as the parent process, but it would
execute on its own.
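A forgotten subprocess in context might look roughly like this (participant and subprocess names are placeholders):

```ruby
Ruote.define 'main' do

  define 'side_flow' do
    do_the_side_work                         # placeholder participant
  end

  subprocess 'side_flow', :forget => true    # branch keeps running on its own
  do_the_main_work                           # main flow proceeds immediately
end
```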

```ruby
subprocess 'x', :new => true
```

That might be interesting. I'm keeping it for ruote 3.0.


Best regards,

Idan Moyal

Oct 21, 2013, 4:42:06 AM
to openwfe...@googlegroups.com
Thanks again,

What would happen to the sub process if the parent workflow ends? And what if I call dashboard.wait_for with the wfid?
Would that be enough to achieve what I'm looking for? Running a workflow regardless of whether the parent workflow ends (it would probably end before the sub process); I'd like to know when the parent process ends successfully and keep the sub process running...

Perhaps launching the workflow from a participant is more suitable for that case?


Idan

John Mettraux

Oct 21, 2013, 4:55:53 AM
to openwfe...@googlegroups.com

On Mon, Oct 21, 2013 at 01:42:06AM -0700, Idan Moyal wrote:
>
> What would happen to the sub process if the parent workflow ends?

The sub process should go on.

> And what if I call dashboard.wait_for with the wfid?

wait_for will return when the parent workflow terminates.

But please remember that Dashboard#wait_for is for testing environments, not
production ones. If you have more than one worker, wait_for in worker 1
doesn't see actions processed in worker 2.
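In a test it typically looks something like this (wfid being what Dashboard#launch returned):

```ruby
wfid = dashboard.launch(pdef)

dashboard.wait_for(wfid)   # blocks until that process terminates (or errors)
```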

> Would that be enough to achieve what I'm looking for? Running a workflow
> regardless of whether the parent workflow ends (it would probably end
> before the sub process); I'd like to know when the parent process ends
> successfully and keep the sub process running...

Sounds like a real, independent process might be better. You can pass it the
parent wfid when launching it (in a dedicated participant) so that the new
process may query the state of the parent process via the dashboard.
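A rough sketch of such a dedicated participant (class, participant and field names are made up, and it assumes the dashboard instance is reachable from the participant, here via a global):

```ruby
class IndependentLauncher
  include Ruote::LocalParticipant

  SECOND_PDEF = Ruote.define 'second_flow' do
    do_the_follow_up_work              # placeholder participant
  end

  def on_workitem
    # launch the independent flow, handing it the wfid of the triggering flow
    $dashboard.launch(SECOND_PDEF, 'parent_wfid' => workitem.wfid)
    reply                              # let the "parent" flow resume at once
  end
end

$dashboard.register_participant 'launch_second_flow', IndependentLauncher

# later, inside a participant of the second flow, the parent's state can be
# queried; #process returns nil once the parent process has gone
ps = $dashboard.process(workitem.fields['parent_wfid'])
```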

> Perhaps launching the workflow from a participant is more suitable for that
> case?

Maybe you could rephrase the question at a higher level (business case >
ruote implementation case).


Best regards,

John

Idan Moyal

Nov 13, 2013, 10:59:18 AM
to openwfe...@googlegroups.com
Hi again,

Sorry for the delay :/

I eventually decided to start the 2nd workflow once the "parent" workflow finishes its execution.
As I understand it, it would be best to do this using a ProcessObserver.

Is there a way to specify a ProcessObserver instance instead of a class? I'd like to be able to interact with that instance to verify my workflows are executed successfully.
Perhaps there's some other best practice for achieving this?

Thanks,
Idan

John Mettraux

Nov 13, 2013, 4:55:17 PM
to openwfe...@googlegroups.com

On Wed, Nov 13, 2013 at 07:59:18AM -0800, Idan Moyal wrote:
>
> I eventually decided to start the 2nd workflow once the "parent" workflow
> finishes its execution.
> As I understand it, it would be best to do this using a ProcessObserver.

Hello,

not necessarily. The "parent" workflow could simply start (fire and forget)
the second workflow on its own before terminating.
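For instance (participant names are placeholders; launch_second_flow would be a dedicated launching participant along the lines sketched earlier):

```ruby
Ruote.define 'main' do
  do_the_main_work
  launch_second_flow   # fire and forget the independent flow, then terminate
end
```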


> Is there a way to specify a ProcessObserver instance instead of a class?
> I'd like to be able to interact with that instance to verify my
> workflows are executed successfully.
> Perhaps there's some other best practice for achieving this?

Yes, you can pass an instance (anything that responds to #on_msg and/or
#on_pre_msg, or a class whose instances sport those methods).
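For illustration, such an observer instance might look roughly like this (class, field and service names are made up, the exact add_service signature may differ, and it assumes the 'terminated' msg carries the wfid of the terminating process):

```ruby
class SecondFlowLauncher

  SECOND_PDEF = Ruote.define 'second_flow' do
    do_the_follow_up_work              # placeholder participant
  end

  attr_reader :launched                # lets tests inspect what was started

  def initialize(dashboard)
    @dashboard = dashboard
    @launched = []
  end

  def on_msg(msg)
    return unless msg['action'] == 'terminated'
    return if @launched.include?(msg['wfid'])   # ignore our own flows ending

    @launched << @dashboard.launch(SECOND_PDEF, 'parent_wfid' => msg['wfid'])
  end
end

launcher = SecondFlowLauncher.new(dashboard)
dashboard.add_service('second_flow_launcher', launcher)
```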

If you have a multi-worker setup, the different observers won't see the same
messages (for example, a flow terminates in a single worker). But that's OK for
your "observe and react" scenario (it's not OK for "hey, I want all observers
to see the whole movie" scenarios).

(Sorry, mostly a reminder to make the email thread more complete for future
readers.)