I'm developing an application that currently makes extensive use of
bgexec--it runs various commands in the background to keep the GUI
responsive. I'm researching alternatives to this approach, and am
wondering if the "fork" command in TclX or "thread" would provide
roughly equivalent functionality: sending a command off to a subprocess
while keeping the GUI from blocking/becoming unresponsive. I'm also
interested in which approach (assuming both might be viable) is simpler
to implement.
Any advice is appreciated.
--
Cheers,
Kevin Walzer, PhD
WordTech Software - "Tame the Terminal"
http://www.wordtech-software.com
sw at wordtech-software.com
> Hello,
> I'm developing an application that currently makes extensive use of
> bgexec--it runs various commands in the background to keep the GUI
> responsive. I'm researching alternatives to this approach, and am
> wondering if the "fork" command in TclX or "thread" would provide
> roughly equivalent functionality: sending a command off to a subprocess
> while keeping the GUI from blocking/becoming unresponsive. I'm also
> interested in which approach (assuming both might be viable) is simpler
> to implement.
> Any advice is appreciated.
I suppose it depends on what you are doing with those processes, how
much interaction and control you need over them. Why are you
researching alternatives? What does bgexec not do for you?
--
MKS
Threads are useful if you are running stuff in Tcl and want it
to be concurrent. If you are dealing with external processes,
then use [exec ... &] if you don't care about output, and
[open | ...] if you do. Do you really need the extra
functionality that bgexec provides?
--
Jeff Hobbs, The Tcl Guy
http://www.ActiveState.com/, a division of Sophos
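A minimal sketch of the two no-extension options Jeff describes (the command names here are placeholders, not from the original post):

```tcl
# 1. Fire and forget: the trailing & backgrounds the process, exec
#    returns immediately, and output is discarded (or redirected).
exec myprog arg1 arg2 > /dev/null &

# 2. Command pipeline: open the process as a channel when you do care
#    about its output.  Note that a plain blocking [read] like this
#    will stall the GUI until the process exits; to stay responsive,
#    make the channel non-blocking and read it from a fileevent.
set chan [open "| myprog arg1 arg2" r]
set output [read $chan]
close $chan
```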
Here's a code snippet that I find helpful to run something from a GUI:
namespace eval runcommand {}

proc ::runcommand::runcommand {cmd} {
    variable standarderror
    variable standardout
    set standardout ""
    set standarderror ""
    # Open the command as a non-blocking, unbuffered pipe.
    set fl [open "| $cmd" r]
    fconfigure $fl -blocking 0
    fconfigure $fl -buffering none
    ::runcommand::runcommandloop $fl
    # Re-enter the event loop until the reader signals completion,
    # so the GUI stays responsive while the command runs.
    vwait ::runcommand::finished
    return "Output:\n$standardout\nErrors:\n$standarderror"
}

proc ::runcommand::runcommandloop {fl} {
    variable standarderror
    variable standardout
    append standardout [read $fl]
    if { [eof $fl] } {
        # Switch back to blocking so [close] reports the exit status
        # and anything the child wrote to stderr.
        fconfigure $fl -blocking 1
        catch { close $fl } err
        append standarderror $err
        after 1 [list set ::runcommand::finished 1]
        return
    }
    # Poll for more output again in 100 ms.
    after 100 [list ::runcommand::runcommandloop $fl]
}

set err [runcommand::runcommand "make aaaa"]
puts $err
--
David N. Welton
- http://www.dedasys.com/davidw/
Linux, Open Source Consulting
- http://www.dedasys.com/
Use pipes and fileevent.
uwe
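uwe's suggestion, sketched: rather than polling with [after] as in the snippet above, let the event loop invoke a reader only when the pipe actually has data. The proc name and command are illustrative:

```tcl
# Called by the event loop whenever $chan has data (or hits EOF).
proc onReadable {chan} {
    append ::output [read $chan]
    if {[eof $chan]} {
        fileevent $chan readable {}
        # Blocking close reports the child's exit status, if any.
        fconfigure $chan -blocking 1
        catch {close $chan} ::errs
        set ::done 1
    }
}

set chan [open "| make all" r]
fconfigure $chan -blocking 0
fileevent $chan readable [list onReadable $chan]
vwait ::done
puts $::output
```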
I'm trying to reduce the dependency of my program on binary extensions
and move it more into a "pure" Tcl/Tk environment that is extended by
script packages only.
The program was using a separate tcl script run as a helper tool via
tclsh. After a bit of tweaking, fileevent and pipes work fine, so that
is the way I will go.
Thanks to all for their advice.
> > What did trigger your interest in replacing bgexec?
> I'm trying to reduce the dependency of my program on binary extensions
> and move it more into a "pure" Tcl/Tk environment that is extended by
> script packages only.
Then do you realize that the two specific extensions you asked about,
TclX and Thread, are binary?
> The program was using a separate tcl script run as a helper tool via
> tclsh. After a bit of tweaking, fileevent and pipes work fine, so that
> is the way I will go.
That would be the no-binary-extension method. It really only fails to
do what you want (not hold up the GUI) if your executed programs have a
LOT of output to handle/process, such that you get stuck sucking the
read end of the pipe for so long that you don't re-enter the event loop.
--
MKS
Heh ;) This reminds me of a port-forwarding tool I hacked up once:
I had it non-blockingly [read] all available data from one socket
and then write it to the other. It worked well until data came in
faster than the script could swallow it, at which point it hung
completely, allocating more and more memory for the string.
The solution was to limit the read chunk to 65536 bytes (or whatever),
so a single [read] could no longer pile up unbounded data.
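A sketch of that fix, assuming $in and $out are already-open non-blocking channels (the proc name is made up here):

```tcl
# Copy data from $in to $out without letting any single [read]
# buffer an unbounded amount of data in memory.
proc copyChunk {in out} {
    # Read at most 64 KB per event-loop pass.
    set data [read $in 65536]
    puts -nonewline $out $data
    if {[eof $in]} {
        fileevent $in readable {}
        catch {close $in}
        catch {close $out}
    }
}
fileevent $in readable [list copyChunk $in $out]
```

In current Tcl, [fcopy] (with its -size option) can also do this kind of throttled channel-to-channel copy for you.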