
Monitor changing file size


Kevin Walzer

May 17, 2008, 2:45:55 PM

I'm trying to keep my GUI from blocking by writing data from a
long-running process to a file via "exec mycmd > file.txt &", then
reading the file when it's done. Here's an example:

button .b -text "Locate" -command [list exec locate txt > ~/Desktop/textsearch.txt &]

pack .b

The only way I can think of to figure out when the process is complete,
however, is to monitor the file size every x milliseconds, and read the
file when the size no longer changes. I can't quite grok how to do this.
Can anyone point me in the right direction?

(I know pipes are often used for this kind of problem, but for various
reasons, I want to use the exec & mechanism.)
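
To make it concrete, this is roughly the shape of thing I'm imagining,
though I'm not sure it's sound (untested sketch; the 500 ms interval is
arbitrary, and a command that pauses mid-write could fool the check):

# Poll the file size every 500 ms; read the file once the size has
# stopped changing between two consecutive checks.
proc poll_size {path last} {
    set size -1
    if {[file exists $path]} {
        set size [file size $path]
    }
    if {$size >= 0 && $size == $last} {
        set fd [open $path r]
        set data [read $fd]
        close $fd
        # ... use $data ...
    } else {
        after 500 [list poll_size $path $size]
    }
}

button .b -text "Locate" -command {
    exec locate txt > ~/Desktop/textsearch.txt &
    after 500 [list poll_size ~/Desktop/textsearch.txt -1]
}
pack .b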

TIA,
Kevin
--
Kevin Walzer
Code by Kevin
http://www.codebykevin.com

Bezoar

May 18, 2008, 5:20:29 PM

exec will return a list of PIDs of all the processes involved in a
pipeline. You need only monitor the last PID in the list; when that PID
no longer exists in the process table, the pipeline has completed.
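
For example (untested; using the locate command from your post just for
illustration):

# exec ... & returns the PIDs of the background pipeline; keep the last one
set pids [exec locate txt > ~/Desktop/textsearch.txt &]
set pid  [lindex $pids end]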

On Linux/Unix you need only invoke ps -ef | grep -w $pid. The final
grep will either fail or succeed; use -w to make sure you don't match
the pid inside another pid (e.g. pid 234 inside 4234). Alternatively,
many OSes expose a filesystem that uses the PIDs as directory names for
process inspection. You can check for the existence of a directory
instead; this avoids an expensive fork and exec and lowers the chance
of blocking. For example, on Linux the proc filesystem is provided by
the kernel, so the existence of /proc/234 means the process is still
running.
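
Putting the two together, something like this should work on Linux
(untested sketch; the 500 ms interval and the done-handling are just
placeholders):

# Poll /proc/$pid until the process disappears, then run $donecmd
proc wait_for_pid {pid donecmd} {
    if {[file exists /proc/$pid]} {
        after 500 [list wait_for_pid $pid $donecmd]
    } else {
        uplevel #0 $donecmd
    }
}

set pids [exec locate txt > ~/Desktop/textsearch.txt &]
wait_for_pid [lindex $pids end] {
    set fd [open ~/Desktop/textsearch.txt r]
    set data [read $fd]
    close $fd
    # ... use $data ...
}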

Carl

Alexandre Ferrieux

May 18, 2008, 5:35:54 PM

Well, maybe you should explain your "various reasons" because pipes
really rock here:

set pi [open "|sh -c {locate ... > textsearch.txt} 2>@ stderr" r]
fileevent $pi readable child_done
proc child_done {} {
    close $::pi
    ...
}

The idea is that the pipe need not carry the high traffic of the
'locate' command. As you can see, the redirection to your text file is
still there. However, the stdout of the sh process stays open the whole
time locate is running, and is closed only when the shell exits. At
that point you'll get an EOF on the other side, and the pipe will
become readable for the first (and last) time. The [close] takes care
of doing one quick waitpid(), clearing the exit report from the kernel
(and avoiding a zombie).
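
Hooked up to your button it could look like this (rough, untested
sketch; substitute your real locate arguments and output path):

proc start_search {} {
    set ::pi [open "|sh -c {locate txt > ~/Desktop/textsearch.txt} 2>@ stderr" r]
    fileevent $::pi readable child_done
}

proc child_done {} {
    close $::pi                  ;# quick waitpid(), no zombie
    set fd [open ~/Desktop/textsearch.txt r]
    set data [read $fd]
    close $fd
    # ... use $data ...
}

button .b -text "Locate" -command start_search
pack .b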

-Alex

Bezoar

May 18, 2008, 5:37:17 PM

I also seem to recall that an extension was announced recently that has
to do with having Linux send an event to your program when a file event
occurs: tcl_inotify. I don't know if it's portable to other OSes.
Just another thought: you could use your script as a go-between for
your program. Use exec and redirect the stdin and stdout to a file
descriptor or two. Use fileevent handlers to read in the data and write
it to your file, or throw it away; more importantly, the handler should
check for EOF. When it detects EOF, increment a global variable that
you are vwaiting on. (Tk and after scripts still work while you are
vwaiting.)
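
A rough, untested sketch of that approach, here using a command
pipeline via open (which gives a channel that fileevent can watch); the
file names and the locate command are just placeholders:

set ::done 0
set out  [open ~/Desktop/textsearch.txt w]
set pipe [open "|locate txt" r]
fconfigure $pipe -blocking 0
fileevent $pipe readable [list copy_out $pipe $out]

# Copy whatever is available to the file; on EOF, close up and set the flag
proc copy_out {pipe out} {
    puts -nonewline $out [read $pipe]
    if {[eof $pipe]} {
        fconfigure $pipe -blocking 1   ;# so close waits for the child
        close $pipe
        close $out
        set ::done 1
    }
}

vwait ::done
# the file is complete here; Tk events and [after] scripts keep firing
# during the vwait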

If you can use TclX or Expect, you can set up a signal handler to catch
the SIGCHLD signal. The handler can increment a global vwaited
variable, or you can poll if you need to.
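
With TclX, roughly (untested; SIGCHLD fires for any child that exits,
so this assumes only one background job at a time):

package require Tclx

set ::childDone 0
signal trap SIGCHLD {set ::childDone 1}

set pids [exec locate txt > ~/Desktop/textsearch.txt &]
vwait ::childDone
# the background command has exited; read the file here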

Lots of ways to do it. Of course some of this stuff will not work on
Windows. You did not mention your OS so this may not help. But it will
give you some ideas.

Carl
