How to exec and wait for child process to finish

Hai Vu

Sep 10, 2008, 11:48:23 PM

Hi all,
I am working on a project where the master script spawns a number
of child scripts in this manner:
    set childPID [exec ... ... &]
The number of child processes could be anywhere from 1 to 500 (or even
more). Is there a way for the master script to wait for all the child
processes to complete before doing something?

At this point, I am tempted to put all the children's process IDs into
a list, then have a loop in which I exec the ps command to see if the
process has finished. Besides being a big hack, there is also the
performance problem of spawning subprocesses repeatedly. Please let
me know if you have any suggestions.
Thanks.

Arjen Markus

Sep 11, 2008, 2:42:41 AM

On 11 sep, 05:48, Hai Vu <wuh...@gmail.com> wrote:
> At this point, I am tempted to put all the children's process IDs into
> a list, then have a loop in which I exec the ps command to see if the
> process has finished. Besides being a big hack, there is also the
> performance problem of spawning subprocesses repeatedly. Please let
> me know if you have any suggestions.
> Thanks.

I normally use [open] and [fileevent] to monitor whether an external
process is still running. But with 500 subprocesses, that might
be a bit much.

Whether repeatedly calling ps becomes a performance bottleneck
depends on how long your child processes run. Suppose the average
run time is 5 seconds. Then it makes very little sense to run ps
every 0.1 seconds; once every 5 or even every 10 seconds would
provide the granularity you require. If they run for a much shorter
time, then it probably makes sense to call ps once every second
(about the granularity that is useful to a human user anyway).

The output from each run of "ps -u user" may reveal that more than
one process has completed - you just need to examine the whole list.
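
A minimal sketch of that polling loop (assuming the pids from
[exec ... &] were collected in a list ::pids; the names here are
invented, and the exact ps options vary by platform):

proc checkChildren {} {
    # one ps call per poll covers every child at once
    set running {}
    foreach line [split [exec ps -u $::env(USER) -o pid=] "\n"] {
        lappend running [string trim $line]
    }
    set alive {}
    foreach pid $::pids {
        if {[lsearch -exact $running $pid] >= 0} {
            lappend alive $pid
        }
    }
    set ::pids $alive
    if {[llength $::pids] == 0} {
        set ::allDone 1           ;# wakes up the vwait below
    } else {
        after 5000 checkChildren  ;# poll again in 5 seconds
    }
}

checkChildren
vwait ::allDone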

Regards,

Arjen

Ralf Fassel

Sep 11, 2008, 3:20:23 AM

* Hai Vu <wuh...@gmail.com>

| The number of child processes could be anywhere from 1 to 500 (or even
| more).

Do you mean 500 separate exec's or one exec starting 500 subprocesses?

| Is there a way for the master script to wait for all the child
| processes to complete before doing something?

Don't start them in the background? Then 'exec' does all the magic for
you. If you need the script to stay responsive during the wait, use the
pipe version of 'open' and 'fileevent' to monitor the subprocess.
Obtaining 500 fds could be a problem, though. Since you mention 'ps'
this sounds like Unix, so it should be possible to raise the process
limit on open fds.
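
A sketch of that pipe approach (one channel per child; the proc names
and the counter are invented for illustration):

set ::openCount 0

proc launch {cmd} {
    set chan [open "|$cmd" r]
    fconfigure $chan -blocking 0
    fileevent $chan readable [list onOutput $chan]
    incr ::openCount
}

proc onOutput {chan} {
    if {[eof $chan]} {
        catch {close $chan}           ;# reaps the child
        if {[incr ::openCount -1] == 0} {
            set ::allDone 1           ;# the last pipe has closed
        }
    } else {
        gets $chan line               ;# drain output so EOF can arrive
    }
}

launch "child arg arg"
launch "child arg arg"
vwait ::allDone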

| At this point, I am tempted to put all the children's process IDs into
| a list, then have a loop in which I exec the ps command to see if the
| process has finished. Besides being a big hack, there is also the
| performance problem of spawning subprocesses repeatedly.

The kill(2) system call on Unix (and OpenProcess() on Windows) can
tell whether a process is still running. I *think* TclX has it as a
Tcl command. Coding it yourself as a tiny Tcl extension is a SMOP, if
you have a compiler at hand...
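
If TclX's [kill] passes signal 0 straight through to kill(2) - an
assumption I have not verified - the liveness test is a one-liner:

package require Tclx

# Sketch only: assumes [kill] accepts signal 0 the way kill(2) does,
# i.e. nothing is delivered, it merely checks that the pid exists.
proc isAlive {pid} {
    expr {![catch {kill 0 $pid}]}
}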

HTH
R'

Uwe Klein

Sep 11, 2008, 4:14:15 AM

Ralf Fassel wrote:

> The kill(2) system call on Unix (and OpenProcess() on Windows) can
> tell whether a process is still running. I *think* tclX has it as TCL
> command. Coding it yourself as a tiny TCL extension is a SMOP, if you
> have a compiler at hand...

Using TclX or Expect you can either [trap] the death of a child
(SIGCHLD) with some signal action, or simply [wait] for it.

uwe


Colin Macleod

Sep 11, 2008, 5:02:22 AM

On 11 Sep, 04:48, Hai Vu <wuh...@gmail.com> wrote:
> Hi all,
> I am working on a project where the master script spawns a number
> of child scripts in this manner:
>     set childPID [exec ... ... &]
> The number of child processes could be anywhere from 1 to 500 (or even
> more). Is there a way for the master script to wait for all the child
> processes to complete before doing something?

If you have TclX or can install it, you can set up a signal handler to
run when each child process dies, remove its pid from your list, and
check if the list is now empty. Here's an example (adapted from other
code and not retested, so it may need fixing):

package require Tclx

proc child_died {} {
    signal -restart trap SIGCHLD child_died
    while {![catch {wait -nohang} stat] && $stat ne {}} {
        # schedule cleanup for later, too risky to run in signal handler
        after idle [list child_gone $stat]
    }
}

signal -restart trap SIGCHLD child_died

proc child_gone stat {
    global pidlist
    lassign $stat pid how code
    puts "CHILD PID $pid FINISHED"
    # remove pid from pid list:
    set pidlist [lindex [intersect3 $pidlist $pid] 0]
    if {[llength $pidlist]} return
    puts "ALL CHILDREN FINISHED"
    # now do final processing
}

# Start children
....
set childPIDs [exec ... ... &]
# note that exec can return multiple pids
foreach pid $childPIDs {lappend pidlist $pid}
....

vwait forever

Alexandre Ferrieux

Sep 11, 2008, 12:08:45 PM

Since you're mentioning ps, I'll assume you're on unix.
Then, you can spawn an "sh", tell it to launch each and every child
with an "&", and after all this, tell it to wait for all of them with
the zero-argument "wait":

set ff [open "|sh 2>@ stderr" r+]
fconfigure $ff -translation binary -buffering line
...
puts $ff "child arg arg... &"
puts $ff "child arg arg... &"
puts $ff "child arg arg... &"
...
puts $ff "wait;exit"
gets $ff line ;# <-- unlocks only after all are done

Of course you can replace the blocking [gets] with a fileevent if
you have a running event loop.
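
For instance (a sketch; shDone is an invented name):

proc shDone {ff} {
    gets $ff line             ;# completes once the shell has exited
    fileevent $ff readable {}
    set ::allDone 1
}
fileevent $ff readable [list shDone $ff]
vwait ::allDone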

-Alex

Glenn Jackman

Sep 11, 2008, 12:22:30 PM

At 2008-09-11 12:08PM, "Alexandre Ferrieux" wrote:
> Since you're mentioning ps, I'll assume you're on unix.
> Then, you can spawn an "sh", tell it to launch each and every child
> with an "&", and after all this, tell it to wait for all of them with
> the zero-argument "wait":
>
> set ff [open "|sh 2>@ stderr" r+]
> fconfigure $ff -translation binary -buffering line
> puts $ff "child arg arg... &"
> ...
> puts $ff "wait;exit"
> gets $ff line ;# <-- unlocks only after all are done

Don't know about the OP, but that looks like a nice "right tool for the
job" "glue language" kind of solution.

--
Glenn Jackman
Write a wise saying and your name will live forever. -- Anonymous

Hai Vu

Sep 11, 2008, 4:37:16 PM

I would like to thank Colin for an excellent solution. I was able to
get it to work with a couple of minor changes to my liking. One change
I made: instead of
    vwait forever
I coded it as:
    set allChildrenTerminated 0
    # time out after an hour ([after] needs a literal number of
    # milliseconds, so the multiplication goes through [expr]):
    after [expr {60*60*1000}] set allChildrenTerminated 1
    vwait allChildrenTerminated
    # code to check to see if we have any hanging processes
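
If the children finish early, the pending timer still fires an hour
later; a small sketch of one way to avoid that, using the token that
[after] returns (untested):

    set timerId [after [expr {60*60*1000}] set allChildrenTerminated 1]
    vwait allChildrenTerminated
    after cancel $timerId   ;# harmless if the timer already fired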