
Nonblocking execution of multiple processes in parallel


Alexandru

Apr 13, 2019, 5:48:32 AM
Hi,

Say I want my Tcl/Tk app to execute multiple external long-lasting processes in parallel. I would want to use twapi to send a batch of tasks to the system (which is Windows in my case) in async mode, wait for the processes to finish one by one, and send new tasks until all tasks are finished.

What is the best way to do this, if the requirement is that the UI should remain responsive?

Should I use:
1. Threads, or
2. Is it also possible to achieve the same thing faster by using event-based programming?
3. Or should I even use coroutines (which, I must say, I don't fully understand yet)?

Many thanks.
Alexandru

Harald Oehlmann

Apr 13, 2019, 6:16:25 AM
If the long-lasting processes are not blocking the interpreter, you may
go with events. Coroutines and events both run in the main interpreter,
so a blocking call blocks them too.
You will need threads (or even subprocesses, if the called routines are
not thread-safe) if the called routines are blocking.

Harald

Alexandru

Apr 13, 2019, 6:23:21 AM
Thanks Harald for the fast help.
Since the long-lasting processes are external (a different executable), I guess they are not blocking the interpreter. So you are saying in this case I should use events. But how exactly do I add an event that will fire when the external process finishes? Using twapi I get a process ID in async mode. Until now I enter a while loop and check every 100 ms whether the process still exists or not. But this is not event-based...

Harald Oehlmann

Apr 13, 2019, 6:26:55 AM
If you do the checking with an after event, it is event-based...

Harald Oehlmann

Apr 13, 2019, 6:28:04 AM
On 13.04.2019 at 12:23, Alexandru wrote:
If those are external commands, you may use open with a pipe to execute
them. Then you set a fileevent readable handler to observe any results.

Alexandru

Apr 13, 2019, 6:38:33 AM
Thanks, I'll check that.

Alexandru

Apr 13, 2019, 6:41:02 AM
But, for example, the "after 100" command blocks for 100 ms. I would like to not block at all.

Harald Oehlmann

Apr 13, 2019, 6:47:47 AM
after 100 cmd
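[Editor's note] A minimal sketch (my addition, not from the thread) contrasting the two forms of after that Harald is pointing at; `::done` is an arbitrary variable name chosen for the illustration, and the vwait is only needed in plain tclsh, since wish already runs the event loop:

```tcl
# Blocking form: the command itself sleeps for 100 ms and,
# while it sleeps, the application services no events.
after 100

# Event-based form: returns immediately and merely schedules
# the script; it fires ~100 ms later from the event loop.
after 100 {set ::done 1}

# Enter the event loop until ::done is written by the timer.
vwait ::done
puts "timer fired without blocking the event loop"
```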

Alexandru

Apr 13, 2019, 6:57:15 AM
You just made my day! I wasn't aware of this subtle difference.

Alexandru

Apr 13, 2019, 9:59:58 AM
I was just doing some tests with "exec ... &" and I see that the executable I'm calling still blocks although I use the "&" at the end of exec. With another executable I see no blocking. Can it be that the blocking depends on the executable?

Harald Oehlmann

Apr 13, 2019, 10:24:47 AM
Please look at open with a pipe.

Harald

Alexandru

Apr 13, 2019, 10:27:40 AM
I was just doing that. But still, the documentation of exec says: "If the last arg is '&' then the pipeline will be executed in background. In this case the exec command will return a list whose elements are the process identifiers for all of the subprocesses in the pipeline." It looks like I found an executable for which this is not true and the process is not run in the background.

Alexandru

Apr 13, 2019, 10:33:41 AM
Okay, I tried to use the "open" command with the pipe symbol and it still blocks:

set code [catch {set pid [open "|$executable" r]} err]

Rich

Apr 13, 2019, 12:06:13 PM
Without showing us, exactly, the syntax you used, all we can say is "it
is not supposed to work that way".

Rich

Apr 13, 2019, 12:11:25 PM
What is the name of the executable?

You can also simplify the above to:

set code [catch {open "|$executable" r} fd err]

And you'll get the file descriptor (open returns file descriptors) in
the fd variable, or if you receive an error, then an error message in
'fd' instead.

But the above should not block (unless you are using a different
definition of 'block' from what we are using):

$ rlwrap tclsh
% set code [catch {open "|/usr/bin/sleep 10m" r} pid err]
0
% set code
0
% set pid
file5
% set err
-code 0 -level 0
%

Robert Heller

Apr 13, 2019, 12:35:25 PM
At Sat, 13 Apr 2019 07:33:38 -0700 (PDT) Alexandru <alexandr...@meshparts.de> wrote:

> set code [catch {set pid [open "|$executable" r]} err]
The above line is wrong (in too many ways).

This is what *I* would do:

(assumes the event loop is running, either this is in wish (or else "package
require Tk" has happened) OR there is a "vwait forever" at the bottom of the
script or something like that)

proc readPipe {fp} {
    if {[gets $fp line] >= 0} {
        ## do something with line
    } else {
        catch {close $fp}
    }
}

if {[catch {open "|$executable" r} fp]} {
    error "Failed to open pipe (|$executable): $fp"
} else {
    fileevent $fp readable [list readPipe $fp]
}

Question: Does the "blocking" process read from its input stream?

>
>

--
Robert Heller -- 978-544-6933
Deepwoods Software -- Custom Software Services
http://www.deepsoft.com/ -- Linux Administration Services
hel...@deepsoft.com -- Webhosting Services

Alexandru

Apr 13, 2019, 12:41:32 PM
On Saturday, 13 April 2019 18:11:25 UTC+2, Rich wrote:
> What is the name of the executable?

gmsh.exe

Alexandru

Apr 13, 2019, 12:44:30 PM
Could you name at least a few reasons why it's wrong?

>
> This is what *I* would do:
> [pipe/fileevent example snipped]

Okay, I'll try this too.

>
> Question: Does the "blocking" process read from its input stream?

I don't know what you mean. The executable is called with additional parameters which tell it to read from a file, process the input and then write another file as output.

Alexandru

Apr 13, 2019, 4:17:30 PM
On Saturday, 13 April 2019 18:35:25 UTC+2, Robert Heller wrote:
I did exactly as in your code, but the process is still blocking.
As I already said, it looks like it's only that specific process (gmsh.exe) where I get the problem.

briang

Apr 13, 2019, 5:18:23 PM
If gmsh.exe is a Windows program, it will not have any std channels for the pipe to connect to. It has to be a console program.

-Brian

Alexandru

Apr 13, 2019, 8:24:52 PM
On Saturday, 13 April 2019 23:18:23 UTC+2, briang wrote:
> If gmsh.exe is a Windows program, it will not have and std channels for the pipe to connect to. It has to be a console program.
>
> -Brian

Okay, so "open" will not work. Thanks for the tip.
But how about exec? It should be able to run in the background, right?

Alexandru

Apr 13, 2019, 9:12:24 PM
I managed to solve the problem. Though I still don't understand the true reason, I'll try to explain what I know.

After executing the external process with "exec ... &" I had a call to a waiting procedure, that waited for the process to finish:

proc ProcessWaitToFinish {pid {cycle 100}} {
    if {[::twapi::process_exists $pid]} {
        after $cycle [list ProcessWaitToFinish $pid $cycle]
    }
}

set pid [exec gmsh.exe &]
ProcessWaitToFinish $pid

The above construction did not work, and I realized it was because ProcessWaitToFinish did not block the further execution of the program. What I don't understand is why I could not see this.

The corrected version looks like this:

proc ProcessWaitToFinish {pid {cycle 100}} {
    if {[::twapi::process_exists $pid]} {
        after $cycle [list ProcessWaitToFinish $pid $cycle]
    } else {
        uplevel 1 {set pid ""}
    }
}

set pid [exec gmsh.exe &]
ProcessWaitToFinish $pid
vwait pid

Robert Heller

Apr 13, 2019, 9:39:01 PM
The "after" command needs the event loop running to work properly (in the form
you are using -- "after time script"). Normally plain tclsh *does not run the
event loop* -- everything runs synchronously. Wish *always* runs the event loop
(all of the GUI code depends on the event loop -- everything is asynchronously
event driven).

When you run a plain CLI tclsh program that needs the event loop, you need to
enter the event loop. The way you do this is by using the vwait command,
which changes the program from a synchronous program to an asynchronous one.
Which you discovered.

Alexandru

Apr 13, 2019, 9:48:49 PM
I thought so too from the beginning. The app is already running in wish and I explicitly enter the event loop with "vwait forever" at the end of the code. But obviously that was not enough.

Rich

Apr 13, 2019, 11:38:42 PM
What do you mean by "the process is still blocking"?

1) Which process are you referencing? (pick one):
1a) the Tcl process?
1b) the gmsh.exe process?
1c) some other process?

2) What do you mean by "blocking"?
2a) the Tcl script stops executing any additional code.
2b) the gmsh.exe process never starts
2c) the gmsh.exe process starts, but then locks up and accepts no
user input
2d) some other meaning (if you choose this one, explain the "other
meaning")

Robert Heller

Apr 13, 2019, 11:49:49 PM
If the application is running from wish, it is already running the event loop.

I don't understand what *exactly* you want to do.

This code:

proc ProcessWaitToFinish {pid {cycle 100}} {
    if {[::twapi::process_exists $pid]} {
        after $cycle [list ProcessWaitToFinish $pid $cycle]
    } else {
        uplevel 1 {set pid ""}
    }
}

set pid [exec gmsh.exe &]
ProcessWaitToFinish $pid
vwait pid

*Effectively* does the same as:

exec gmsh.exe

*Except* the event loop is running, allowing events to be processed.

It *sounds* like you want to run a program, *wait for it to finish*, but allow
events to be serviced in the meantime. Is this correct? It is not really about
blocking/nonblocking.

If so, what you really want to be doing is this:

proc ProcessWaitToFinish {pidname {cycle 100}} {
    upvar $pidname pid
    if {[::twapi::process_exists $pid]} {
        after $cycle [list ProcessWaitToFinish pid $cycle]
    } else {
        set pid 0
    }
}

set pid [exec gmsh.exe &]
ProcessWaitToFinish pid
vwait pid


Actually, a better way would be to do things the OO way using snit (this
avoids hardcoded global variables):

package require snit

snit::stringtype ExecutableFile -regexp {^.+\.exe$}
snit::type runProcess {
    variable pid 0
    option -executable -readonly yes -type ExecutableFile
    constructor {args} {
        $self configurelist $args
    }
    method run {} {
        set pid [exec $options(-executable) &]
        $self _waitToFinish
        vwait [myvar pid]
    }
    method _waitToFinish {{cycle 100}} {
        if {[::twapi::process_exists $pid]} {
            after $cycle [mymethod _waitToFinish $cycle]
        } else {
            set pid 0
        }
    }
}

Then:

runProcess create gmsh -executable gmsh.exe

gmsh run

Note: coded this way, you can run the program as many times as you need.

Alexandru

Apr 14, 2019, 4:42:40 AM
On Sunday, 14 April 2019 05:38:42 UTC+2, Rich wrote:
> Alexandru <alexandr...@meshparts.de> wrote:
> > On Saturday, 13 April 2019 18:35:25 UTC+2, Robert Heller wrote:
> >> This is what *I* would do:
> >> [pipe/fileevent example snipped]
> >
> > I did exactly as in your code but the process is still blocking.
>
> What do you mean by "the process is still blocking"?
>
> 1) Which process are you referencing? (pick one):
> 1a) the Tcl process?
> 1b) the gmsh.exe process?
> 1c) some other process?

I guess it's 1b. While the problem is now solved, the problem was that during the execution of 1b the Tk GUI did not react to any user input.

>
> 2) What do you mean by "blocking"?
> 2a) the Tcl script stops executing any additional code.
> 2b) the gmsh.exe process never starts
> 2c) the gmsh.exe process starts, but then locks up and accepts no
> user input
> 2d) some other meaning (if you choose this one, explain the "other
> meaning")

See answer to question 1.

Alexandru

Apr 14, 2019, 4:52:10 AM
While I find your method more elegant than mine because it's using upvar instead of uplevel and it's more general, I think there is no big difference and the result will be the same.

>
> Actually, a better way would be to do things the OO way using snit (this
> avoids hardcoded global variables):
>
> [snit example snipped]
>
> Then:
>
> runProcess create gmsh -executable gmsh.exe
That's exactly what I ultimately want to achieve (see OP): run multiple executables in parallel and wait for them to finish, then send new ones as soon as they finish. I could do this with threads but I was just wondering if event-based programming will also work. Must I use snit? I imagined my original proc should also work when multiple executions run in parallel(?)

Alexandru

Apr 14, 2019, 4:55:53 AM
Is it better to write, in the above proc:

after $cycle [list ProcessWaitToFinish $pidname $cycle]


Eric

Apr 14, 2019, 5:56:42 AM
Alexandru, I am using twapi to do such things:

set res [twapi::create_process "" -cmdline $cmdline -returnhandles 1]
lassign $res pid tid hproc hthread
twapi::close_handle $hthread
twapi::wait_on_handle $hproc -executeonce 1 -async ProcessEnded

Eric

Alexandru

Apr 14, 2019, 8:33:48 AM
Oh, that's very nice! Thanks. I was also wondering if this is possible with twapi but couldn't find anything in the docs.

Many thanks!
Alexandru

Robert Heller

Apr 14, 2019, 9:00:51 AM
At Sun, 14 Apr 2019 01:52:07 -0700 (PDT) Alexandru <alexandr...@meshparts.de> wrote:

> While I find your method more elegant than mine because it's using upvar
> instead of uplevel and it's more general, I think there is no big difference
> and the result will be the same.

*Except* your version depends on a hardwired global variable name. It will
*break* if you, for example, do this:

set otherpid [exec foo.exe &]
ProcessWaitToFinish $otherpid
vwait otherpid


>
> That's exactly what I ultimately want to achieve (see OP): run multiple
> executables in parallel and wait for them to finish, then send new ones as
> soon as they finish. I could do this with threads but I was just wondering
> if event-based programming will also work. Must I use snit? I imagined my
> original proc should also work when multiple executions run in parallel(?)

Actually it will break, since you will have hardwired the global variable name
(pid). *My* first version passes the name of the variable that is being
waited on, and my SNIT version uses a class instance variable.

If you want to run multiple processes in parallel, you will need to separate
the waits from the exe starts:

package require snit

snit::stringtype ExecutableFile -regexp {^.+\.exe$}
snit::type runProcess {
    variable pid 0
    option -executable -readonly yes -type ExecutableFile
    constructor {args} {
        $self configurelist $args
    }
    method run {} {
        set pid [exec $options(-executable) &]
    }
    method wait {} {
        $self _waitToFinish
        vwait [myvar pid]
    }
    method _waitToFinish {{cycle 100}} {
        if {[::twapi::process_exists $pid]} {
            after $cycle [mymethod _waitToFinish $cycle]
        } else {
            set pid 0
        }
    }
}

Then:

# Start gmsh.exe
runProcess create gmsh -executable gmsh.exe
gmsh run
# Start other.exe
runProcess create other -executable other.exe
other run
# Start another.exe
runProcess create another -executable another.exe
another run

# Wait for gmsh.exe to finish (other.exe and another.exe will also be running)
# while we are waiting for it to complete, Tk events are being handled.
gmsh wait
# gmsh.exe finished, now wait for other.exe to finish (it might already be done)
other wait
# other.exe is done, now wait for another.exe
another wait
# all processes are now done.

Note with this code, one can restart the processes later, should you need to.


Robert Heller

Apr 14, 2019, 9:00:51 AM
Not really.

Alexandru

unread,
Apr 14, 2019, 9:49:46 AM4/14/19
to
Yes, that's why I wrote that your procedure is more general. But I don't think my procedure will break, since I don't use a global variable; the variable is wrapped inside another procedure.
That's a lot of stuff to chew on, since I'm not so familiar with OO or snit.

Rich

Apr 14, 2019, 9:52:53 AM
Ok, then that was explained elsewhere where someone pointed out (after
you posted some example code [1]) that you were utilizing the blocking
version of 'after' to do your busy waiting.

This blocks (even if the event loop is running):

after 1000

And the above is clearly documented as 'blocking' of the Tcl/Tk event
loop:

after ms
    Ms must be an integer giving a time in milliseconds. The command
    sleeps for ms milliseconds and then returns. *While the command is
    sleeping the application does not respond to events.*

This version places a 'job' onto the event loop and returns
immediately to the calling code:

after ms ?script script script ...?

In one of the other sub threads someone pointed these two out.

[1] This is an excellent example of why posting more, useful,
information initially will generally result in a much more useful and
comprehensive result. This thread has been going on for several days,
and has branched out into multiple, somewhat disparate, sub-threads.
Some of which ended up heading down dead-end paths. Had you posted a
better question initially, with more specific details, including
specific code that created the situation of your question, the rest of
us on the group would likely have pointed you to your answer much more
quickly.

Alexandru

Apr 14, 2019, 10:20:18 AM
While this answer elevated my knowledge about Tcl, I actually encountered another problem after fixing the above. The problem was that I never used vwait after the procedure ProcessWaitToFinish.
>
> In one of the other sub threads someone pointed these two out.
>
> [1] This is an excellent example of why posting more, useful,
> information initially will generally result in a much more useful and
> comprehensive result. This thread has been going on for several days,
> and has branched out into multiple, somewhat disparate, sub-threads.
> Some of which ended up heading down dead-end paths. Had you posted a
> better question initially, with more specific details, including
> specific code that created the situation of your question, the rest of
> us on the group would likely have pointed you to your answer much more
> quickly.

I admit, the questions deviated from the original question. I could have opened another question.

Technically, the thread has been going for less than a day. I hope this is bearable. Thank you for all the help. I hope that one day Tcl will get to be as strong as the mainstream languages of today, and that will not be possible without the help of you, the experts.

briang

Apr 14, 2019, 11:51:46 AM
Hi Alexandru,

You should use the method described above for handling multiple parallel worker processes, and discard the other approaches discussed in this thread. The use of looping timeout values (100 ms) will not scale with many parallel processes: the Tk app will end up consuming CPU time polling all the threads of execution. The proper way to manage this is to let the OS signal the Tk app when the processes terminate, which is exactly what the code above does.

I would also recommend using coroutines to manage this, but this topic is probably best discussed on a separate thread.

-Brian

Robert Heller

Apr 14, 2019, 11:56:00 AM
No, you *have* to use a global variable for vwait (read the man page
*carefully*). (The "myvar" procedure in the SNIT code creates the proper
namespace blather for the pseudo-local instance variable in the snit class
object instance.) So your code is dependent on a hardwired global variable. My
version passes the *name* of the global variable that will be vwait'ed on. The
difference is upvar vs uplevel. They are more different than you think.

Alexandru

Apr 14, 2019, 12:36:38 PM
Indeed. I was already using a global variable. Thanks for pointing that out.

Alexandru

Apr 14, 2019, 12:38:42 PM
Hi Brian, that's a good point and that's the reason why I just love twapi. I will consider implementing the twapi methods instead.

Alexandru

Apr 14, 2019, 2:25:55 PM
On Sunday, 14 April 2019 11:56:42 UTC+2, Eric wrote:
I tried the above code with slight modifications, but twapi waits forever for the process to finish, although it actually finishes in a few seconds:

set cmdline "\"[auto_execok wscript.exe]\" \"[file join [pwd] $modelfilename.vbs]\""
set res [list]
set pid ""
set code [catch {set res [twapi::create_process "" -cmdline $cmdline -returnhandles 1]} err]
if {[llength $res] == 4} {
    lassign $res pid tid hproc hthread
    twapi::wait_on_handle $hproc -executeonce 1 -async {set ::pid ""}
    vwait ::pid
}

two...@gmail.com

Apr 14, 2019, 3:14:37 PM
On Sunday, April 14, 2019 at 11:25:55 AM UTC-7, Alexandru wrote:

> twapi::wait_on_handle $hproc -executeonce 1 -async {set ::pid ""}
> vwait ::pid
> }

Just a guess, but the script following -async is called with 2 appended arguments. At first blush, this would seem to cause an error with the [set ::pid ""] command, for having extra parameters. So, I don't think ::pid is actually getting set. I would think you'd get an error delivered somehow though.
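[Editor's note] A sketch of the command-prefix style this implies (my addition, not from the thread): pass a proc name to -async and let the proc accept the appended arguments. That two arguments are appended comes from the post above; the argument names `handle` and `status` are my guesses for illustration only.

```tcl
# Hypothetical handler for twapi::wait_on_handle -async.
# twapi appends two arguments to the callback; their exact
# meaning is an assumption here.
proc ProcessEnded {handle status} {
    # Unblock the vwait in the caller.
    set ::pid ""
}

# Usage would then be (hproc as obtained from create_process):
#   twapi::wait_on_handle $hproc -executeonce 1 -async ProcessEnded
#   vwait ::pid
```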

Alexandru

Apr 14, 2019, 3:49:27 PM
Very good intuition! That was exactly the cause. And no, there was no error delivered. I remembered I already had the same problem with another twapi command, "begin_filesystem_monitor". I will ask Ashok if this behavior is intended.

briang

Apr 14, 2019, 4:14:44 PM
It is almost certainly intended. Often callback options are documented as "-option script", but using the term "script" is a bad idea. Callbacks should always be in the form of a function with 0 or more arguments, "-option cmd". It's cleaner, and it's faster. Ideally, documentation of callbacks would provide illustrative expected arguments for the function.

In this particular situation, parameters are necessary to return information, it's the only path back to the program. Of particular importance, the returned values will indicate success or failure of the process.

But if you do want to stick to the script, try: {set ::pid "" ;#}

-Brian

Alexandru

Apr 14, 2019, 4:18:31 PM
I was writing about the intended silence of the callback (no errors got through).
I understand very well the reason for the appended args.
BTW: Nice trick {set ::pid "" ;#}

briang

Apr 14, 2019, 4:38:04 PM
The reason it's silent is that there is no easy way to report the error when evaluating the "script". It could try calling bgerror, if it exists, but that may not be the best course in all cases. Writing something to stdout or stderr, especially on Windows, can be a problem as well.

Ralf Fassel

unread,
Apr 15, 2019, 9:53:55 AM
to
* Alexandru <alexandr...@meshparts.de>
| Since the long lasting processes are external (different executable) I
| guess they are not blocking the interpreter. So you are saying in this
| case I should use events. But how exactly will I add an event that
| will fire when the external process finishes?

I'd just open the process via a pipe and let fileevent handle the
process exit.

set fd [open "|process" "r+"]
fileevent $fd readable [list handle_process $fd]
fconfigure $fd -blocking 0

proc handle_process fd {
    # read any process output, handle it if necessary
    set output [read $fd]
    if {[eof $fd]} {
        # process has exited
        if {[catch {close $fd} err]} {
            # check ::errorCode for abnormal process exit
        } else {
            # regular process exit
        }
    }
}


HTH
R'
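
Building on Ralf's pipe idea, the original requirement (keep a batch of external jobs running, start a new one whenever one finishes, UI stays responsive) can be sketched as a small event-driven pool. The pool size, command list, and helper names below are illustrative, not from the thread:

```tcl
# Event-driven pool: run at most $maxjobs external commands at once,
# starting the next queued command whenever one finishes.
set maxjobs 4
set queue   {}   ;# list of command lines still to start
set running 0

proc launch_next {} {
    global queue running maxjobs
    while {$running < $maxjobs && [llength $queue] > 0} {
        set cmd   [lindex $queue 0]
        set queue [lrange $queue 1 end]
        set fd [open "|$cmd" r]
        fconfigure $fd -blocking 0
        fileevent $fd readable [list on_output $fd]
        incr running
    }
}

proc on_output {fd} {
    global running
    set data [read $fd]      ;# collect whatever the job printed
    if {[eof $fd]} {
        fconfigure $fd -blocking 1   ;# so close reports exit errors
        if {[catch {close $fd} err]} {
            # inspect ::errorCode for abnormal process exit
        }
        incr running -1
        launch_next                  ;# keep the pool full
    }
}

# Fill the queue with command lines, then:
# launch_next
# vwait forever   ;# or rely on the Tk event loop
```

Everything runs in the main interpreter's event loop, so the Tk UI stays responsive without threads.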

Alexandru

unread,
Apr 15, 2019, 12:32:34 PM
to
Hi Ralf, in the meantime I decided to give it a try with twapi. It works fine in one case but not in another. The failing case is when I simply add the options "-stdchannels [list $fid1 $fid2 stderr] -inherithandles 1" to the ::twapi::wait_on_handle function. No idea why... Perhaps Ashok can help.

Not working means here that ::twapi::wait_on_handle does not wait for the process to finish.

two...@gmail.com

unread,
Apr 15, 2019, 2:35:39 PM
to
On Monday, April 15, 2019 at 6:53:55 AM UTC-7, Ralf Fassel wrote:
>
> fconfigure $fd -blocking 0
>
.. snip ..
> if {[catch {close $fd} err]} {
> # check ::errorCode for abnormal process exit
> } else {
> # regular process exit
> }

Excellent example. One slight issue: with -blocking 0, the
close will not catch an error. The fileevent will trigger
on the process exit, though, and data can be read back from
the child's stdout. So I guess on an error exit, the child
would have to write something there to communicate it.

Gerald Lester

unread,
Apr 15, 2019, 7:47:28 PM
to
On 4/13/19 4:48 AM, Alexandru wrote:
> Hi,
>
> Say I want my Tcl/Tk app to execute multiple external long-lasting processes in parallel. I would want to use twapi to send a batch of multiple tasks to the system (which is Windows in my case) in async mode and wait for the processes to finish one by one and send some new tasks until all tasks are finished.
>
> What is the best way to do this, if the requirement is the UI should remain responsive?
>
> Should I use:
> 1. Threads or
> 2. Is it also possible to achieve the same thing faster by using event based programming.
> 3. Or should I even use the coroutines (which I must say I don't fully understand until now).

I see others have helped you solve your problem, but I have a couple of
observations:

1) A process is not a thread, nor a co-routine. You can spawn off
multiple sub-processes which can then communicate back to you in various
ways.

2) Multi-processing and multi-tasking are useful if (a) what you are
doing is compute intensive and you have several processors/cores or (b)
you are using a library that is forcing you to do blocking I/O.

3) Event driven programming is useful when you have events happening
that require very little processing to deal with.

4) If you have to ask about co-routines, they are not for you.

5) 2, 3 and 4 can all be mixed and used together if need be.

6) Please learn the terms and use their exact meaning; it will make
getting help faster and less frustrating.


--
+----------------------------------------------------------------------+
| Gerald W. Lester, President, KNG Consulting LLC |
| Email: Gerald...@kng-consulting.net |
+----------------------------------------------------------------------+

Ralf Fassel

unread,
Apr 16, 2019, 8:35:13 AM
to
* two...@gmail.com
| On Monday, April 15, 2019 at 6:53:55 AM UTC-7, Ralf Fassel wrote:
| >
| > fconfigure $fd -blocking 0
| >
| .. snip ..
| > if {[catch {close $fd} err]} {
| > # check ::errorCode for abnormal process exit
| > } else {
| > # regular process exit
| > }
>
| Excellent example. One slight issue, with -blocking 0, the
| close will not catch an error.

Ah, ok, hadn't thought about that... So probably reconfigure the fd to
-blocking 1 before close (though I had the impression that I get errors
reported even with -blocking 0; not 100% sure).

R'
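
The fix being proposed here — re-enable blocking right before close so the exit status surfaces — would look like this in the readable handler (a sketch; the fd name matches the earlier example):

```tcl
if {[eof $fd]} {
    # switch back to blocking so [close] raises on abnormal exit
    fconfigure $fd -blocking 1
    if {[catch {close $fd} err]} {
        # ::errorCode is e.g. {CHILDSTATUS <pid> <exitcode>}
    } else {
        # clean exit (status 0)
    }
}
```
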

Harald Oehlmann

unread,
Apr 16, 2019, 8:44:23 AM
to
At least, I can tell from the socket code (not the open-pipe code used here):

- close is always blocking, even with "-blocking 0". If there are bytes
left to write, they are written synchronously before the close process is
initiated and completed. Any error that may have happened earlier is
reported by the close.
- any error will be reported by close in non-blocking mode too.

Harald

two...@gmail.com

unread,
Apr 16, 2019, 12:21:34 PM
to
On Tuesday, April 16, 2019 at 5:35:13 AM UTC-7, Ralf Fassel wrote:

> reconfigure the fd
> -blocking 1 before close

Good idea, I tested it (on Windows) and it works.
Another snippet for my toolbox. Thanks.