I am not aware of any means to SIMULTANEOUSLY redirect STDOUT _AND_ STDERR
of an external program (e.g. spawned with backticks or "system") _UNDER_WIN32_.
The only way I know of is to redirect each to a file of its own, and read these two files back
after the program is finished:
$rc = system("program >C:\Temp\$uniquename1 2>C:\Temp\$uniquename2");
This is unfortunate because first of all this needs temporary files, which are always a
certain hassle to create (one has to find a unique name first) and to cleanly remove
(even if the user presses ctrl-C), and second this makes it impossible to have a
Tk window which shows the output of the external program _AS_IT_HAPPENS_.
Does anybody know of a way?
What I want to do is to call "make" externally and to show its output in a Perl/Tk
window (moreover, I want to write the output to a log file).
The order of output on STDOUT and STDERR as it normally shows when you
run the external program (here: make) on the command line should not change,
i.e., when output on STDERR and STDOUT is intertwined it should stay so,
that is, there shouldn't be first all outputs on STDERR and then all outputs on
STDOUT (or vice-versa), because that way important context information
would be lost.
Thank you very much for your help.
Best regards,
Steffen Beyer
>I am not aware of any means to SIMULTANEOUSLY redirect STDOUT _AND_ STDERR
>of an external program (e.g. spawned with backticks or "system")
>_UNDER_WIN32_.
>
>The only way I know of is to redirect both to a file each, and read these
>two files back
>after the program is finished:
>
>$rc = system("program >C:\Temp\$uniquename1 2>C:\Temp\$uniquename2");
Instead of spawning your process with system, try Win32::Job. By default,
the child process shares the parent's standard output and standard error,
but you can also redirect STDOUT and STDERR to your own filehandles.
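A rough sketch of what that might look like (untested, and "nmake" plus
the log filename are just placeholders):

use Win32::Job;

open my $log, '>', 'build.log' or die "Cannot open log: $!";

my $job = Win32::Job->new or die $^E;
# Point both of the child's streams at the same log filehandle.
$job->spawn($ENV{COMSPEC}, 'cmd /c nmake',
            { stdout => $log, stderr => $log });
$job->run(600);    # wait up to 600 seconds for the job to finish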
Sébastien Nadeau
Technicien en informatique
Bibliothèque de l'Université Laval
> The order of output on STDOUT and STDERR as it normally shows when you
> run the external program (here: make) on the command line should not change,
> i.e., when output on STDERR and SDTOUT is intertwined it should stay so,
> that is, there shouldn't be first all outputs on STDERR and then all outputs on
> STDOUT (or vice-versa), because that way important context information
> would be lost.
Is there any chance that this behavior is the result of the program
(make in this case) deciding that it is not running with TTY control and
making the decision to not flush its STDOUT? In other words, does this
work with any pipe at all? Many processes will change their output
behavior based on their perception of writing to a TTY vs a pipe.
The redirection issues are most likely due to limitations of the win32
"shell". You could replace them with a perl script that did the
redirections instead, I assume.
run_make
#!/usr/bin/perl
# Duplicate STDERR onto file descriptor 1 (STDOUT), so that both
# streams come out merged on a single handle.
open(STDERR, '>&1') or die "Cannot dup STDERR onto STDOUT: $!";
exec 'make';
Obviously you'd want to put in stuff for handling arguments.
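For instance, replacing the exec line with

exec 'make', @ARGV;

would pass the wrapper's own arguments straight through to make.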
I'm running with 'cygwin' perl at the moment. I'm imagining that it has
behavior closer to what UNIX does under 'sh'.
--
Darren Dunham ddu...@taos.com
Unix System Administrator Taos - The SysAdmin Company
Got some Dr Pepper? San Francisco, CA bay area
< This line left intentionally blank to confuse you. >
: I am not aware of any means to SIMULTANEOUSLY redirect STDOUT _AND_ STDERR
: of an external program (e.g. spawned with backticks or "system") _UNDER_WIN32_.
: The only way I know of is to redirect each to a file of its own, and read these two files back
: after the program is finished:
: $rc = system("program >C:\\Temp\\$uniquename1 2>C:\\Temp\\$uniquename2");
Don't know if this helps, but on NT you can use the 2>&1 notation, so for
example
dir XXXXXXXXXX 2>&1 | more
correctly intermingles the `File Not Found' with the rest of the output in
more.
IPC::Open3
--
$;=qq qJ,krleahciPhueerarsintoitq;sub __{0 &&
my$__;s ee substr$;,$,&&++$__%$,--,1,qq;;;ee;
$__>2&&&__}$,=22+$;=~y yiy y;__ while$;;print
Steffen Beyer <Steffe...@de.bosch.com> wrote:
: Dear Perl experts,
: I am not aware of any means to SIMULTANEOUSLY redirect STDOUT _AND_ STDERR
: of an external program (e.g. spawned with backticks or "system")
: _UNDER_WIN32_.
You can either get cmd.exe to redirect stdout and stderr both on to stdout,
or I think you can run it with open3 and use select.
: The only way I know of is to redirect each to a file of its own, and read
: these two files back after the program is finished:
: $rc = system("program >C:\\Temp\\$uniquename1 2>C:\\Temp\\$uniquename2");
: This is unfortunate because first of all this needs temporary files,
: which are always a certain hassle to create (one has to find a unique
: name first) and to cleanly remove (even if the user presses ctrl-C), and
: second this makes it impossible to have a Tk window which shows the
: output of the external program _AS_IT_HAPPENS_.
: Does anybody know of a way?
$rc = system("program > foo.txt 2>&1");
You'll have to make sure buffering doesn't bite you there, but that does
what you want. It looks just like how you'd do it under Unix. Works
in Windows 2000 and anything newer, and probably in NT4. Can't speak for
98/ME, etc. I suspect they don't work.
If you want to know all about all the nifty tricks that Windows NT4 and
later's CMD.EXE shell can do, I'll recommend "Windows NT Shell Scripting"
by Tim Hill, published by New Riders Press. It covers all the things
you can get the shell to do for you, including this. (As well as some
nifty tricks like batch file library functions.)
(NT4 and later's CMD.EXE is a 'real' shell, IMO. It can do calculations,
comparisons, parse text files, use backticks, manage stdin and stdout,
use local variables in batch files, and return them from those functions,
run pipelines, all sorts of good things. The syntax is horrible so as not
to break old batch files, but it is there. Buy the book or see CMD /?
which should have it all there. Almost like a man page. Can you imagine?)
: What I want to do is to call "make" externally and to show its output in
: a Perl/Tk window (moreover, I want to write the output to a log file).
If you can run the program with 2>&1, then you can just use open to catch
the output, and be done with it.
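Something like this (a sketch; "command" stands for whatever you run):

# Let the shell merge STDERR into STDOUT, then read the combined stream.
open(FH, "command 2>&1 |") or die "Cannot start command: $!";
while (<FH>) {
    print;    # or append to the log file / Tk widget as it arrives
}
close(FH);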
If you need both separately, you're probably best to go ahead and use
IPC::Open3, and use a select loop. There's a pretty good sample in the Perl
Cookbook, which I believe will work in Win32. All the needed modules seem
to exist, but I haven't tried them all in Win32.
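For reference, the Cookbook-style skeleton looks roughly like this
(fine under Unix; as noted elsewhere in this thread, select() on pipes
is broken under Win32):

use IPC::Open3;
use IO::Select;
use Symbol 'gensym';

my ($wtr, $rdr, $err) = (gensym, gensym, gensym);
my $pid = open3($wtr, $rdr, $err, 'make');
close $wtr;                        # we send the child no input

my $sel = IO::Select->new($rdr, $err);
while ($sel->count) {
    for my $fh ($sel->can_read) {
        my $n = sysread($fh, my $buf, 4096);
        if (!$n) {                 # 0 == EOF, undef == error
            $sel->remove($fh);
            next;
        }
        print $buf;                # or log it / show it in the Tk window
    }
}
waitpid($pid, 0);                  # reap the child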
Are you sure make uses both output and error meaningfully? I thought all
it sent to stdout was the program ID banner, which you can ignore
pretty safely. That may just be nmake, if you're using another, though.
And I may be misremembering; it's been a long time since I did that.
: The order of output on STDOUT and STDERR as it normally shows when you
: run the external program (here: make) on the command line should not change,
: i.e., when output on STDERR and STDOUT is intertwined it should stay so,
: that is, there shouldn't be first all outputs on STDERR and then all
: outputs on STDOUT (or vice-versa), because that way important context
: information would be lost.
Again, make sure the program is using unbuffered IO properly; a Perl script
usually buffers STDOUT but not STDERR, so even if they're both redirected
to the same stream, you get STDERR then STDOUT. I suspect make does it
right, but if you get the wrong output, it may be the program you're running,
and not Perl or the OS in how you're catching it.
If it is doing it right, 2>&1 should do that for you, and you won't have to
use the modules. In this case, getting the system shell to do it for you
is probably the right idea.
Hope that helps!
--
Louis Erickson - wwo...@rdwarf.com - http://www.rdwarf.com/~wwonko/
Intolerance is the last defense of the insecure.
> >$rc = system("program >C:\Temp\$uniquename1 2>C:\Temp\$uniquename2");
>
> Instead of spawning your process with system, try Win32::Job. By default,
> the child process shares the parent's standard output and standard error,
> but you can also redirect STDOUT and STDERR to your own filehandles.
If this is possible, why not move this functionality to Open3.pm, so
things will magically "start to work"? Or if Open3 already can do
this, why use Win-specific solution?
Puzzled,
Ilya
> > Instead of spawning your process with system, try Win32::Job. By default,
> > the child process shares the parent's standard output and standard error,
> > but you can also redirect STDOUT and STDERR to your own filehandles.
>
>If this is possible, why not move this functionality to Open3.pm, so
>things will magically "start to work"? Or if Open3 already can do
>this, why use Win-specific solution?
>
>Puzzled,
>Ilya
I'm used to spawning processes this way in C++, so I immediately looked
for an equivalent Perl module. I quickly browsed the Win32 modules and
found Win32::Job, which looked like a satisfying way to do exactly
what Steffen wanted. I didn't go any further.
If open3 can do the same thing as easily, then I guess open3 should be
used. But can it? As far as I looked, there are a lot of IO buffering
issues you must consider when using it.
Sébastien
I do not understand the SIMULTANEOUSLY part of what you are saying,
because I have successfully redirected STDOUT and STDERR to the same
file, with perl5.001 on Windows2000. In my case, I was happy with
getting it into a file, but I really had no problem with the redirection
as such. After setting up the redirection, I then execute some internal
logic (with print and warn commands) and some system() calls, and _all_
output goes to the same file.
I use this (and I found all needed from my old Camel book; this doesn't
even need perl5):
# Try to ensure stdout and stderr are in sync.
select(STDERR); $| = 1;
select(STDOUT); $| = 1;
# Redirect STDOUT and STDERR to (same) log file.
sub redirect {
    my $log = shift;
    print "$0: Output redirected to $log\n";
    # ++$err (pre-increment) so that even the first failure yields a
    # true value and the warn actually fires.
    open(OLDOUT, ">&STDOUT") or
        ++$err and warn "$0: Warning: Cannot save stdout: $^E";
    open(OLDERR, ">&STDERR") or
        ++$err and warn "$0: Warning: Cannot save stderr: $^E";
    open(STDOUT, ">>$log") or
        ++$err and warn "$0: Warning: Cannot redirect stdout: $^E";
    open(STDERR, ">&STDOUT") or
        ++$err and warn "$0: Warning: Cannot redirect stderr: $^E";
}

# Restore the redirected STDOUT and STDERR.
sub restore {
    my $log = shift;
    print "$0: Restoring output from $log\n";
    close(STDOUT) or
        ++$err and warn "$0: Warning: Cannot close stdout: $^E";
    close(STDERR) or
        ++$err and warn "$0: Warning: Cannot close stderr: $^E";
    open(STDOUT, ">&OLDOUT") or
        ++$err and warn "$0: Warning: Cannot restore stdout: $^E";
    open(STDERR, ">&OLDERR") or
        ++$err and warn "$0: Warning: Cannot restore stderr: $^E";
}
Note the ">>$log" in the above; I needed to append to the same file on
different runs.
Heini
Actually, the *same exact* IO buffering issues exist with Win32::Job,
the only difference is that Win32::Job chooses not to document them.
Heini
I hadn't tried system("command 1>file 2>&1") yet, maybe this works
without losing the correct order of messages on STDOUT and STDERR,
indeed, even under Win32. (Even though STDOUT is usually buffered and
STDERR usually isn't, as someone pointed out)
Problem is, though, that this way I can't get at the output AS IT HAPPENS.
I would like to have the output of the external program scroll by on my
Perl/Tk window IN REAL TIME...
So I think I probably should try IPC::Open3 with select or IO::Select,
using the recipe in The Perl Cookbook as someone suggested.
Someone offline suggested to me to use IPC::Run. I had a look at the
documentation and found it unsuitable for someone new on the subject
(looks like it's more a reference manual). At least I didn't understand
a word it was talking about. What the heck is a "harness" in the context
of that module, and why on earth must I "pump"?! And why mustn't I on
some occasions?
Does anybody know of some introductory documentation for this module,
or are there other similar modules around (besides Win32::Job)?
Someone wrote that "the same IO buffering issues exist with Win32::Job,
and that Win32::Job only chooses not to document them".
What are the differences between the solution with Open3/select, IPC::Run
and Win32::Job?
Has anybody used any of these yet, what would you recommend?
Thanks a lot for all your help!
P.S.: Louis Erickson <wwo...@rdwarf.com> wrote: "Can you please try
and keep your lines less than 80 columns? Replying in a sensible way was
a pain." I'm terribly sorry for that, because I know how it is, but I'm on a
new job here and I have to use M$ Outlook to write my messages, and
I don't know how to (easily) confine the window / the lines to 80 columns...
Especially since it uses a proportional font... :(
Does anybody know how to configure M$ Outlook Express 5 accordingly?
Thanks!!
Best regards,
Steffen Beyer
: I hadn't tried system("command 1>file 2>&1") yet, maybe this works
: without losing the correct order of messages on STDOUT and STDERR,
: indeed, even under Win32. (Even though STDOUT is usually buffered and
: STDERR usually isn't, as someone pointed out)
: Problem is, though, that this way I can't get at the output AS IT HAPPENS.
: I would like to have the output of the external program scroll by on my
: Perl/Tk window IN REAL TIME...
Did you say that? I thought you wanted output to a file.
You might try:
open(FH, "command 2&>1|");
That should have the shell merge stderr and stdout, and then let Perl pick
that up instead of sending it to the file... I have no idea if the buffering
issue will raise its ugly head, though.
<snip comments about different ways>
: Has anybody used any of these yet, what would you recommend?
I've used IPC::Open3 and it seems to work okay. It would be my suggestion
after getting the shell to flatten everything to one stream. It has the
chance of working on other platforms, which is often interesting if not
actually needed all the time.
: Thanks a lot for all your help!
Good luck. You're asking nontrivial questions.
: P.S.: Louis Erickson <wwo...@rdwarf.com> wrote: "Can you please try
: and keep your lines less than 80 columns? Replying in a sensible way was
: a pain." I'm terribly sorry for that, because I know how it is, but
But you were careful, and listened, and I appreciate that. Thank you!
: I'm on a new job here
New jobs are good... =)
: and I have to use M$ Outlook to write my messages, and
Outlook isn't so good. Bleh. Maybe download Free Agent or another Windows
newsreader? (Never used a Windows newsreader, so I'm guessing what's
out there.)
: I don't know how to (easily) confine the window / the lines to 80 columns...
: Especially since it uses a proportional font... :(
: Does anybody know how to configure M$ Outlook Express 5 accordingly?
Tools/Options/Read/Fonts... Select "Courier New" as your proportional font,
and keep a text file with eight copies of ---------| in it on your desktop
so you can paste in a guide line, or size the window to that line.
Or be conservative and bear the jokes about being on a C64. =)
--
Louis Erickson - wwo...@rdwarf.com - http://www.rdwarf.com/~wwonko/
While having never invented a sin, I'm trying to perfect several.
Doing so will keep the order of messages the same as they were actually
printed to the underlying filedescriptors.
If buffering causes printf() or whatever to not immediately print to the
underlying filedescriptor -- well, there isn't much which can be done to
get the messages in the right order.
You can (possibly, on some OS's, with some programs) trick the other
program into *not* buffering its output, by making its STDOUT go to a
pseudo-terminal instead of a pipe or file, but I'm not sure if that can
be done on windows.
> Problem is, though, that this way I can't get at the output AS IT
> HAPPENS. I would like to have the output of the external program
> scroll by on my Perl/Tk window IN REAL TIME...
>
> So I think I probably should try IPC::Open3 with select or IO::Select,
> using the recipe in The Perl Cookbook as someone suggested.
Beware: Under windows, IO::Select only works properly on sockets -- it
won't work right with pipes.
Also, the Tk documentation for Tk::fileevent contains a hideous mistake:
it implies that you can read data in the callback function using read()
or <> (aka readline) -- this is a very *wrong* thing to try, as it will
frequently lead to deadlock. Use only sysread() for reading data in
such a callback; also, call sysread precisely *once* in the callback,
never more or less often than that.
(Also, it suggests using print() in conjunction with 'writeable'
callbacks... This too is wrong, but less harmful, due to OS level
buffering, which is independent from program level (stdio) buffering...
if you really need a 'writeable' callback, you should be using the
syswrite function.)
(In spite of ->fileevent using select() internally, which doesn't work
with pipes on windows, Tk::fileevent is still the best way to deal with
data from an external process; however, you need to make sure that you
create the connections to/from the external process with sockets instead
of pipes).
> Someone offline suggested to me to use IPC::Run. I had a look at the
> documentation and found it unsuitable for someone new on the subject
> (looks like it's more a reference manual). At least I didn't
> understand a word it was talking about. What the heck is a "harness"
> in the context of that module, and why on earth must I "pump"?! And
> why mustn't I on some occasions?
A harness is an object which contains a process identifier, the
filehandles to/from it, and references to the variables that data for
the process is taken from and that data from the process is put into.
You must "pump" it because perl can't magically move data from your
string variable into the handle into the other process, and because perl
can't magically move data from the handles from the other process into
your string variables.
Each time you "pump" the harness, perl will send any data from the
string you've set up for input to the process, and, in addition, check
the output handles from the harness for the presence of data, and if
it's there, read stuff from those handles into your variables.
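A minimal harness-and-pump sketch along the lines of the IPC::Run
synopsis ("some_program" is a placeholder):

use IPC::Run qw(start pump finish);

my ($in, $out, $err) = ('', '', '');
my $h = start ['some_program'], \$in, \$out, \$err;

$in = "data for the child\n";
pump $h while length $in;    # feed $in to the child, collecting output
finish $h;                   # wait for exit and flush remaining output
print "stdout was: $out";
print "stderr was: $err";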
As to when/why you "mustn't" call pump on some occasions: if you have
not supplied any input to the other process, and if that other process
is not going to output anything *until* you supply some input, then
obviously calling pump will put both your process and the other one into
deadlock.
> Does anybody know of some introductory documentation for this module,
> or are there other similar modules around (besides Win32::Job)?
>
> Someone wrote that "the same IO buffering issues exist with
> Win32::Job, and that Win32::Job only chooses not to document them".
>
> What are the differences between the solution with Open3/select,
> IPC::Run and Win32::Job?
IPC::Run will create sockets to communicate with the child process,
instead of pipes. select() works correctly on sockets (though not, under
Windows, on pipes). Because of this, IPC::Run works fine on both *nix
and on windows.
If you want to use IPC::Run and Tk::fileevent together, consider
something like this:
use IPC::Run qw(start);

sub mycb {
    my ($fh, $fh_name, $harness, $hcount) = @_;
    my $n = sysread( $fh, my $buf, 8192 );
    if( $n ) {
        $buf =~ s/\n?\z/\n/;
        print "${fh_name}: " . localtime() . ": $buf";
        return;
    }
    warn "${fh_name}: $!" if not defined $n;
    $mw->fileevent( $fh, 'readable', '' );    # unregister this callback
    close $fh;
    # if we've closed both stdout and stderr from the
    # process, then
    $harness->finish if --$$hcount == 0;
}

my $harness = start( ["some external program"],
    "<",      \undef,
    ">pipe",  \*OUT,
    "2>pipe", \*ERR,
);
my $count = 2;    # to know when to call $harness->finish
for my $foo ( [\*OUT, "STDOUT"], [\*ERR, "STDERR"] ) {
    my $callback = [ \&mycb, @$foo, $harness, \$count ];
    $mw->fileevent( $foo->[0], 'readable', $callback );
}
__END__

[untested]
IPC::Open3 will create pipes to communicate with the child process,
which means that you can't use select/IO::Select/Tk::fileevent with the
handles it creates.
Win32::Job also will create pipes to communicate with the child process,
which means that you can't use select/IO::Select/Tk::fileevent with the
handles it creates. (The advantages of Win32::Job are the stuff about
starting a new job group, or starting without a window, or starting the
window minimized, etc.. But for IO, there's *no* advantage over
IPC::Open3).
> Has anybody used any of these yet, what would you recommend?
I've used IPC::Open3, but not the others.
--
$a=24;split//,240513;s/\B/ => /for@@=qw(ac ab bc ba cb ca
);{push(@b,$a),($a-=6)^=1 for 2..$a/6x--$|;print "$@[$a%6
]\n";((6<=($a-=6))?$a+=$_[$a%6]-$a%6:($a=pop @b))&&redo;}
> open(FH, "command 2&>1|");
>
> That should have the shell merge stderr and stdout
On "normal" systems the 2>&1 redirection would be done by Perl, not by
shell. Do not know whether the win32 implementation of do_open() is
smart enough for that...
Ilya
:> open(FH, "command 2&>1|");
:>
:> That should have the shell merge stderr and stdout
: On "normal" systems the 2>&1 redirection would be done by Perl, not by
: shell. Do not know whether the win32 implementation of do_open() is
: smart enough for that...
The documentation suggested otherwise - is this new to 5.8 or is the
documentation in perlipc merely missing this?
I checked there, and it doesn't say. The documentation for backticks and
for system() both explicitly say that any redirection is handled by the
shell, and I had assumed that 2>&1 is redirection. The documentation for
backticks specifically says 2>&1 is handled by the shell, while system()
merely says that any redirection is handled by the shell.
I assumed that open would have worked the same way as those other two
functions in this regard. Was I wrong to? Does anyone know why this is
different?
Should the docs be updated?
I have to say that this surprises me. Perl usually doesn't.
However, I decided to check - using 2>&1 under win32 with open() does capture
the output, but buffering issues may still make things come out in the
wrong order, and you may not be able to fix that from the command line.
Open3 might capture it.
--
Louis Erickson - wwo...@rdwarf.com - http://www.rdwarf.com/~wwonko/
Travel important today; Internal Revenue men arrive tomorrow.
I think it should be in 5.6.
> I assumed that open would have worked the same way as those other two
> functions in this regard.
open() works the same.
> Should the docs be updated?
I do not know. At the moment of my pulling the plug from p5p, I had
an almost finished patch which would also unplug the shell from the
simplest quoting jobs, as in
open q(foo 'bar foo' "baz dra" 2>&1 );
I do not think overdocumenting such details (which are very
implementation-dependent) is a good thing in the long run. Especially
since Win32 port breaks the documented behaviour a lot...
Ilya
[Resume: Problem is to run an external program
(nmake.exe or the like calling a C compiler on
the command line) from a Perl script under Win32
(currently Windows 2000) and to write both its
STDOUT and STDERR output into a Tk::Text widget
and a log file. It is important that the output
scrolls by on the screen in real time (because the
compiler run takes about 10 minutes, and users
should be able to see something in the meantime)
and that the order of STDERR and STDOUT messages
isn't altered - some of the compiler's output
may be on STDERR, some on STDOUT, and the relative
position of both may be crucial for locating
possible errors in the code being compiled.]
> However, I decided to check - using 2>&1 under win32 with open() does capture
> the output, but buffering issues may still make things come out in the
> wrong order, and you may not be able to fix that from the command line.
> Open3 might capture it.
Unfortunately it doesn't.
I tried various possible solutions including
$pid = open(FH, "command 2>&1 |");
Tk::ExecuteCommand (which uses the above "open", internally)
IPC::Open3 and select() (however, select() doesn't work on pipes
under Win32 according to perlport(1))
BTW, IPC::Open3 uses the following command to spawn
the external process:
$pid = eval { system 1, @_ }; # 1 == P_NOWAIT
The funny thing about this special "system()" call
is that I couldn't find this documented anywhere.
What does "system()" with a numeric first parameter
do?!?! Note that it's NOT the "indirect object"
syntax (note the comma after the "1"!).
I tried all these possibilities above, and found out
the following:
IPC::Open3 and select() do not work under Win32 as
expected. IPC::Open3 uses the above "system()" call,
which internally uses "exec()", which uses the "fork()"
emulation under Win32, according to the docs - which
means that a new Perl interpreter (thread) is started,
not a true new external process.
However, external programs eventually DO get started -
I haven't understood yet how, though. (Can somebody
enlighten me?)
According to perlport(1), select() doesn't work on
pipes under Win32 (and VMS). This was confirmed by
my tests: The small example script I'd devised did
spawn the external program, but I could never get
at the actual output. select() kept returning -1
and $! contained "Bad file descriptor".
The very same script however DID work flawlessly under
Unix (FreeBSD), which I tested at home yesterday night.
BIG drawback however: IPC::Open3 and select() _DO_
change the order of output on STDOUT and STDERR due
to buffering issues. If STDOUT is left buffered in
the external program, then you get first all output
on STDERR and then all output on STDOUT, which is
unacceptable in my case.
The same applies to the solution using
$pid = open(FH, "command 2>&1 |"); (which
also works under Win32!).
In other words, _ALL_ depends on the buffering of
STDOUT in the external program - no matter which
solution is used!
Since I intend to start an external program I can't
change (such as "nmake.exe" and a given C compiler),
the big question is, how can I make it write its
output to STDOUT unbuffered?!
I haven't tried IPC::Run yet, which is said to use
sockets instead of pipes (which select() is documented
to work with under Win32), because I found the manpage
very confusing, and even the example given by someone
in this thread still eludes me.
But I am afraid it could suffer from the same buffering
issues as the other solutions...
I also haven't had a very close look at Win32::Job and
Win32::Process yet - can somebody say something about
them?
Thanks a lot for all your help!
Best regards,
Steffen Beyer
> BTW, IPC::Open3 uses the following command to spawn
> the external process:
>
> $pid = eval { system 1, @_ }; # 1 == P_NOWAIT
>
> The funny thing about this special "system()" call
> is that I couldn't find this documented anywhere.
> What does "system()" with a numeric first parameter
> do?!?! Note that it's NOT the "indirect object"
> syntax (note the comma after the "1"!).
What documentation there is is hidden in perlport.
Search for "system LIST".
--
Paul Johnson - pa...@pjcj.net
http://www.pjcj.net
> The same applies to the solution using
> $pid = open(FH, "command 2>&1 |"); (which
> also works under Win32!).
> In other words, _ALL_ depends on the buffering of
> STDOUT in the external program - no matter which
> solution is used!
Yup. I think I mentioned that might be the real problem earlier...
> Since I intend to start an external program I can't
> change (such as "nmake.exe" and a given C compiler),
> the big question is, how can I make it write its
> output to STDOUT unbuffered?!
I don't believe there is a general solution to this. That process is in
control of the buffering of its filehandles. You can't directly change
them.
However, rather than wanting to explicitly disable the buffering, you're
just trying to recreate what the program does when it's run
interactively. I would think you have to convince the process that it
is not talking to a pipe, but to a TTY. It probably checks for that
when deciding whether to flush STDOUT.
Expect has to deal with those issues. I believe it can create the TTY
that the program might require. If so, you might be able to use it as
some glue here. I haven't heard of any particular module which would do
the TTY stuff by itself.
While this is true, select() *does* work on sockets.
With sufficient cleverness, you can make pairs of sockets which are
connected to each other (perl5.8 uses this to emulate socketpair).
Then, use shutdown() to make each handle one-way, in the appropriate
direction. Then, save your old STD{IN,OUT,ERR}, dup three of the new
sockets over those, spawn the external process, and restore the saved
STD{IN,OUT,ERR}.
Assuming that you're using perl5.8, you could do:
use IPC::Open3;

if( $^O eq "MSWin32" ) {
    require Socket;
    my ($domain, $type, $proto) = do {
        package Socket;
        AF_UNIX(), SOCK_STREAM(), PF_UNSPEC();
    };
    no warnings 'redefine';
    *IPC::Open3::xpipe = sub {
        socketpair($_[0], $_[1], $domain, $type, $proto) and
        shutdown($_[0], 1) and
        shutdown($_[1], 0) or Carp::croak(
            "$IPC::Open3::Me: pipe($_[0], $_[1]) failed: $!"
        );
    };
}
[untested]
The emulated socketpair would result in two network sockets (ignore
that AF_UNIX argument up there; it's required, but ignored), which
*should* work properly with select().
If you're not using 5.8, then *maybe* something like this will work:
use IPC::Open3;

if( $^O eq "MSWin32" ) {
    require IO::Socket::INET;
    no warnings 'redefine';
    my ($dom, $type, $proto) = do {
        package Socket;
        PF_INET(), SOCK_STREAM(),
            scalar getprotobyname("tcp");
    };
    *IPC::Open3::xpipe = sub {
        {
            my $listen = IO::Socket::INET->new() or last;
            $listen->bind( 0, Socket::INADDR_LOOPBACK() ) and
            $listen->listen(1) and
            my $n = $listen->sockname or last;
            socket( $_[0], $dom, $type, $proto ) and
            socket( $_[1], $dom, $type, $proto ) and
            connect( $_[0], $n ) and
            accept ( $_[1], $listen ) and
            shutdown($_[0], 1) and
            shutdown($_[1], 0) and
            return 1;
        }
        # clean up any half-made handles before croaking
        { local $!; fileno $_ and close $_ for @_[0,1] }
        Carp::croak(
            "$IPC::Open3::Me: pipe($_[0], $_[1]) failed: $!"
        );
    };
}
[snip]
> I tried all these possibilities above, and found out
> the following:
>
> IPC::Open3 and select() do not work under Win32 as
> expected. IPC::Open3 uses the above "system()" call,
> which internally uses "exec()", which uses the "fork()"
> emulation under Win32, according to the docs - which
> means that a new Perl interpreter (thread) is started,
> not a true new external process.
No. This is untrue.
system(1, "Blah")
Does NOT do a fork followed by an exec. It internally uses the Windows
CreateProcess function to start the new process.
[snip]
> BIG drawback however: IPC::Open3 and select() _DO_
> change the order of output on STDOUT and STDERR due
> to buffering issues.
Not directly.
When your child process detects that its output is a pipe, rather than
a terminal, it alters its own output buffering.
You can use IO::Pty to avoid this issue.
> If STDOUT is left buffered in the external program,
> then you get first all output on STDERR and then
> all output on STDOUT, which is unacceptable in my case.
Doubtful.
More likely, you get STDERR as it's produced, and STDOUT a few thousand
bytes at a time, depending on the child process's internal buffering.
Of course, if your child process produces only a few thousand bytes
total, it may appear to be as you describe.
> The same applies to the solution using
> $pid = open(FH, "command 2>&1 |"); (which
> also works under Win32!).
>
> In other words, _ALL_ depends on the buffering of
> STDOUT in the external program - no matter which
> solution is used!
>
> Since I intend to start an external program I can't
> change (such as "nmake.exe" and a given C compiler),
> the big question is, how can I make it write its
> output to STDOUT unbuffered?!
You can't *force* it to, magically, but if you know how it decided when
to buffer and when not to buffer, you can try and supply that
environment, whatever it is. On *nix, the decision is usually made
based on whether or not stdout is a tty.
> I haven't tried IPC::Run yet, which is said to use
> sockets instead of pipes (which select() is documented
> to work with under Win32), because I found the manpage
> very confusing, and even the example given by someone
> in this thread still eludes me.
What, my solution?
How about a simpler version of that:
my $harness = start( ["some external program"],
    "<",      \undef,
    ">pipe",  \*OUT,
    "2>pipe", \*ERR,
);
$mw->fileevent( \*OUT, 'readable', \&out_callback );
$mw->fileevent( \*ERR, 'readable', \&err_callback );
Just make sure that out_callback and err_callback read from their
respective handles using sysread() ... don't use any of <>, or read(),
or readline(), since those three functions can potentially lead to deadlock.
(Also, the eof() function does the same kind of buffering as <>, read(),
and readline() do, so it, too might lead to deadlock... so don't use
it. Oh, and if you use eof() on a data stream you're using sysread()
on, you'll probably get corrupted or missing data.)
Don't call sysread() more than once in the callback, or else you can get
a deadlock.
Don't *forget* to call sysread() in the callback, or else Tk will call
that callback repeatedly, until you *do* read something.
Once both OUT and ERR have reached EOF, then the child process will have
exited. (Or at least, waiting for it to exit after that's happened,
likely won't result in a deadlock)
When the child process exits, you should call $harness->finish, to avoid
leaking memory.
Don't call $harness->finish before *both* handles reach EOF, or else
there's a possibility of going into deadlock.
> But I am afraid it could suffer from the same buffering
> issues as the other solutions...
Hmm, well, for *nix, you should be able to convince the child process
not to buffer by making its output handles into ttys:
my $harness = start( ["some external program"],
    "<",     \undef,
    ">tty",  \*OUT,
    "2>tty", \*ERR,
);
$mw->fileevent( \*OUT, 'readable', \&out_callback );
$mw->fileevent( \*ERR, 'readable', \&err_callback );
But I dunno how you'd convince the child process of that on windows,
since it doesn't have the concept of a pseudo-tty, AFAIK.
> I also haven't had a very close look at Win32::Job and
> Win32::Process yet - can somebody say something about
> them?
Win32::Job has the *exact same* buffering problems that IPC::Open3 has.
Its advantages are that you've got these "job group" things, which make
it easy to wait for a group of processes to finish, and you can start
processes minimized or without a window, or whatever.
And Win32::Process offers you *only* the ability to create a new
process, but it doesn't do anything at all for you with regard to
making handles to/from the child process. (You can make it so that the
child process inherits your own handles (or doesn't), but it won't
create io handles for you).
>I also haven't had a very close look at Win32::Job and
>Win32::Process yet - can somebody say something about
>them?
I've been around this entire track quite a few times and,
in my experience, Win32::Process has proven to be the most
robust of the various options in some demanding production
apps.
It gives you at least a partial entry directly into the
native Windows API CreateProcess() routine, and bypasses
many of the CMD.EXE/COMMAND.COM "quirks" with long file
names etc.
If you want "robust", Win32::Process is the way to go, IMO.
It takes a little more work than say, Open3(), but you
can obtain a good deal more control over your child and
the communications with it.
--
|~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~|
| Malcolm Hoar "The more I practice, the luckier I get". |
| ma...@malch.com Gary Player. |
| http://www.malch.com/ Shpx gur PQN. |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I posted earlier a correction to my own posting, and then realized that
the correction was wrong. Then, I noticed a small problem with my script
and ended up testing it more thoroughly. Now the results surprise me so
much I have to come back to this thread.
In my script, I have been using system() calls instead of open pipes or
any of the other stuff. I have managed to get the output of the commands
I execute with the system() calls to the same output file that I use as
STDOUT&STDERR in my perl script. I had done that without stopping to
think of it until now.
I hope someone could explain what is happening here.
This is my setup
================
I have done the testing with ActiveState perl 5.6.1 on Windows2000
(but also with perl5.001, and the results are the same).
Find the test script at end (ends with the "exit;" line).
1) The test script redirects both stdout and stderr to the same log
file:
open(STDOUT, ">>$log");
open(STDERR, ">>&STDOUT");
2) The script executes several system() calls:
system($command); # No redirection here!
By parametrization, the same script executes the system calls with an
additional redirection:
system("$command 1>>$log"); # Same log file as STDOUT&STDERR
system("$command 2>>$log"); # Same log file as STDOUT&STDERR
system("$command 1>>$out"); # Separate log file
system("$command 2>>$err"); # Separate log file
I wrote a batch script to run all these:
perl -s redirect.pl -log=log
perl -s redirect.pl -log=log -out=log
perl -s redirect.pl -log=log -err=log
perl -s redirect.pl -log=log -out=out
perl -s redirect.pl -log=log -err=err
(I also had " 1>> %0.log 2>>%0.err" redirection on top of the perl
command, but that always produced empty log files so I skip it here
for clarity.)
Test results
============
1) perl -s redirect.pl -log=log
Without the second redirection, all system() calls succeeded, and the
output from the executed commands appeared in the same log file. When
the script was executed in Windows Task Scheduler, some _error_ messages
were missing from the output file, but none of the normal output. Why?
2) perl -s redirect.pl -log=log -out=log
3) perl -s redirect.pl -log=log -err=log
Both behaved the same way: all system() calls failed with message "The
process cannot access the file because it is being used by another
process."
4) perl -s redirect.pl -log=log -out=out
5) perl -s redirect.pl -log=log -err=err
Both behaved the same way.
a) When the script was executed on command line, only the first system()
call succeeded, and all other failed.
b) When the script was executed via Windows Task Scheduler, more of the
commands succeeded, but still a majority failed. The succeeding commands
varied on different test rounds. Why?
(If I had both "-out=out -err=err" on the command line, only one
file -- the first -- was created.)
Heini
use strict;
require 5.001;

select(STDERR); $| = 1;
select(STDOUT); $| = 1;

unless ($::log) {
    print "
Usage:
    perl -s $0 -log=LOGFILENAME [OTHEROPTIONS] [COMMAND]...

Options:
    -log=LOGFILENAME  redirect stdout and stderr to LOGFILENAME.log
    -out=OUTFILENAME  add ' >>OUTFILENAME.log' to system() calls
    -err=ERRFILENAME  add ' 2>>ERRFILENAME.log' to system() calls
    -ctl=CTLFILENAME  report redirection failures to CTLFILENAME.log
";
    exit 1;
}

# Command line options.
my $redirect = '';
$redirect .= " 1>>$::out.log-$$" if $::out;
$redirect .= " 2>>$::err.log-$$" if $::err;
my $log = "$::log.log-$$";
open LOG, ">$::ctl.log-$$" if $::ctl;

sub warning {
    my $text = "$0: Warning: " . shift;
    if ($::ctl) {
        print LOG $text;
    } else {
        warn $text;
    }
}

# Redirect output.
open(OLDOUT, ">>&STDOUT") or warning "Cannot save stdout: $^E";
open(OLDERR, ">>&STDERR") or warning "Cannot save stderr: $^E";
open(STDOUT, ">>$log")    or warning "Cannot redirect stdout: $^E";
open(STDERR, ">>&STDOUT") or warning "Cannot redirect stderr: $^E";
select(STDERR); $| = 1;
select(STDOUT); $| = 1;

# Execute commands.
my @err = ();
# "@ARGV ? @ARGV : (...)" rather than "@ARGV || (...)": || would put
# @ARGV in scalar context and make the loop iterate over a count.
foreach (@ARGV ? @ARGV : ('cleartool lsview ccvobpdm_view',
                          'cleartool startview ccvobpdm_view',
                          'cleartool cd M:/ccvobpdm_view/toy_source/tumble',
                          'cleartool lsview ccvobpdm_view',
                          'net stop albd',
                          'net stop lockmgr',
                          'net stop cccredmgr',
                          'kill view_server.exe',
                          'net start albd',
                          'net start lockmgr',
                          'net start cccredmgr',
                          'cleartool lsview ccvobpdm_view',
                          'cleartool startview ccvobpdm_view',
                          'cleartool cd M:/ccvobpdm_view/toy_source/tumble',
                          'cleartool lsview ccvobpdm_view',
                         )) {
    print "# $_\n";
    system("$_$redirect");
    push @err, "$_\n" if $? >> 8;
}
warning "Failed with:\n @err" if @err;

open(STDOUT, ">>&OLDOUT") or warning "Cannot restore stdout: $^E";
open(STDERR, ">>&OLDERR") or warning "Cannot restore stderr: $^E";

exit;
> > IPC::Open3 and select() (however, select() doesn't work on pipes
> > under Win32 according to perlport(1))
>
> While this is true, select() *does* work on sockets.
It's good to have the confirmation from someone who actually tried
this (?), not just from the documentation (as cited by me further
below in my previous posting).
> With sufficient cleverness, you can make pairs of sockets which are
> connected to each other (perl5.8 uses this to emulate socketpair).
BTW, an important question:
Do processes started with Win32::Process also inherit their parent's
pipes, sockets etc. just as forked processes do under Unix???
> Then, use shutdown() to make each handle one-way, in the appropriate
> direction. Then, save your old STD{IN,OUT,ERR}, dup three of the new
> sockets over those, spawn the external process, and restore the saved
> STD{IN,OUT,ERR}.
>
> Assuming that you're using perl5.8, you could do:
>
> use IPC::Open3;
> if( $^O eq "MSWin32" ) {
> require Socket;
> my ($domain, $type, $proto) = do { package Socket;
> AF_UNIX(), SOCK_STREAM(), PF_UNSPEC();
> };
> no warnings 'redefine';
> *IPC::Open3::xpipe = sub {
> socketpair($_[0], $_[1], $domain, $type, $proto) and
> shutdown($_[0], 1) and
> shutdown($_[1], 0) or Carp::croak(
> "$IPC::Open3::Me: pipe($_[0], $_[1]) failed: $!"
> );
> };
> }
>
> [untested]
Yes, I'm using Perl 5.8.0 (a native build, that is, out-of-the-box).
I'll probably have to do it this way, then, if I understand it right.
What about redirecting the output like this:
system( 1, "command 1>%TEMP%\\command_$$.out 2>%TEMP%\\command_$$.err" );
and then checking for new output at the end of these files,
just like "tail(1)" does?
(BTW, how does "tail" do that?)
Any opinions if this has chances to work (reasonably) reliably?
> The emulated socketpair would result in two network sockets, (ignore
> that AF_UNIX argument up there; it's required, but ignored) which
> *should* work properly with select().
Ok.
> If you're not using 5.8, then *maybe* something like this will work:
[skipped because I do use Perl 5.8.0]
> system(1, "Blah")
>
> Does NOT do a fork followed by an exec. It internally uses the Windows
> CreateProcess function to start the new process.
But isn't the fork() emulation done using exec() which in turn
uses CreateProcess? That's how I interpreted the documentation,
at least. Did I get that wrong?
> When your child process detects that it's output is a pipe, rather than
> a terminal, it alters its own output buffering.
>
> You can use IO::Pty to avoid this issue.
Also under Win32?!?
(That'd be great!)
> > If STDOUT is left buffered in the external program,
> > then you get first all output on STDERR and then
> > all output on STDOUT, which is unacceptable in my case.
>
> Doubtful.
> More likely, you get STDERR as it's produced, and STDOUT a few thousand
> bytes at a time, depending on the child process's internal buffering.
>
> Of course, if your child process only produces only a few thousand bytes
> total, it may appear to be as you describe.
Yes, my test program doesn't produce very much output,
this could be the explanation.
However, I also get this behaviour on programs with lots
of output when I redirect them using the shell:
command 2>&1 | more
> You can't *force* it to, magically, but if you know how it decided when
Of course I can't force it, but I hoped there might be a
commandline option for that, or some way similar to what
you can do with "tty" under Unix, or something else.
Is there anything like that?
> to buffer and when not to buffer, you can try and supply that
> environment, whatever it is. On *nix, the decision is usually made
> based on whether or not stdout is a tty.
The Windows command shell seems to do the same; the following
works under Unix and Win32 alike:
unless ((-t STDOUT) && (open(MORE, "| more"))) {
    unless (open(MORE, ">-")) {
        die "$self: can't open STDOUT: $!\n";
    }
}
print MORE @lots_of_stuff;
close(MORE);
> > I haven't tried IPC::Run yet, which is said to use
> > sockets instead of pipes (which select() is documented
> > to work with under Win32), because I found the manpage
> > very confusing, and even the example given by someone
> > in this thread still eludes me.
>
> What, my solution?
>
> How about a simpler version of that:
>
> my $harness = start ["some external program"],
> "<", \undef
> ">pipe", \*OUT,
> "2>pipe", \*ERR,
> );
> $mw->fileevent( \*OUT, \&out_callback, "readable" );
> $mw->fileevent( \*ERR, \&err_callback, "readable" );
Ah, this is much clearer to me!
Thanks a lot!
> Just make sure that out_callback and err_callback read from their
> respective handles using sysread() ... don't use any of <>, or read(),
> or readline(), since three functions can potentially lead to deadlock.
Ok. I already did so, I read this somewhere in the docs.
> (Also, the eof() function does the same kind of buffering as <>, read(),
> and readline() do, so it, too might lead to deadlock... so don't use
> it. Oh, and if you use eof() on a data stream you're using sysread()
> on, you'll probably get corrupted or missing data.)
Ok.
But how can I actually find out whether my child has finished
sending things?
> Don't call sysread() more than once in the callback, or else you can get
> a deadlock.
Yes, I learned that from experience already... :-)
> Don't *forget* to call sysread() in the callback, or else Tk will call
> that callback repeatedly, until you *do* read something.
Ok.
> Once both OUT and ERR have reached EOF, then the child process will have
> exited. (Or at least, waiting for it to exit after that's happened,
> likely won't result in a deadlock)
How do I find out?
Does it mean the file handle reached EOF when I get a "readable" fileevent
and then sysread() returns zero bytes?
> When the child process exits, you should call $harness->finish, to avoid
> leaking memory.
Not much of a trouble here, as the program I'm writing will usually
be one-shot (or at most a few shots) only, but good to know to do it
right from the beginning, in order not to stumble over this later...
> Don't call $harness->finish before *both* handles reach EOF, or else
> there's a possiblity of going into deadlock.
Ok, but again, how can I detect EOF without using eof()?
> > But I am afraid it could suffer from the same buffering
> > issues as the other solutions...
>
> Hmm, well, for *nix, you should be able to convince the child process
> not to buffer by making it's output handles into ttys:
>
> my $harness = start ["some external program"],
> "<", \undef
> ">tty", \*OUT,
> "2>tty", \*ERR,
> );
> $mw->fileevent( \*OUT, \&out_callback, "readable" );
> $mw->fileevent( \*ERR, \&err_callback, "readable" );
>
> But I dunno about how you'd convince the child process of that on
> windows, since it doesn't have the concept of a psuedo-tty, AFAIK.
What a pity.
Windows really is sort of a pain...
> > I also haven't had a very close look at Win32::Job and
> > Win32::Process yet - can somebody say something about
> > them?
>
> Win32::Job has the *exact same* buffering problems that IPC::Open3 has.
> It's advantages are that you've got these "job group" things, which make
> waiting for a group of processes to finish, and you can start processes
> minimized or without a window, or whatever.
Ah, I see.
> And Win32::Process offers you *only* the ability to create a new
> process, but it doesn't do anything at all for you wrt to making handles
> to/from the child process. (You can make it so that the child process
> inherits your own handles (or doesn't), but it won't create io handles
> for you).
How can I do that?
This seems to answer my question from the beginning whether Windows
processes do inherit from their parents - so just to confirm: Do they?
Thanks to all of you for your tremendous help!
Best regards,
Steffen
That depends on whether or not you pass a true value as the fourth
argument to Win32::Process::Create().
If so, then yes. If not, then I'm not sure what it gets (closed
handles? open handles to the "nul" file? A new console window, with
handles open to/from it? Something else?).
[snip]
> What about redirecting the output like this:
>
> system(1,"command 1>%TEMP%\\command_$$.out 2>%TEMP%\\command_$$.err");
>
> and then checking for new output at the end of these files,
I suppose that you could, but if you do that, then you can probably just
forget about trying to keep straight the ordering of data (whether a
string printed to stdout was before or after a string printed to
stderr).
> just like "tail(1)" does?
> (BTW, how does "tail" do that?)
AFAIK, all it does is read or seek to the end of the file, then
alternately sleep() and stat() the file, seeing if it has grown since the
last time it was stat()ed.
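In Perl, that polling loop might look something like this (a sketch;
$file is whatever file you're watching, untested under Win32):

open(my $fh, '<', $file) or die "Cannot open $file: $!";
seek($fh, 0, 2);                  # 2 == SEEK_END
my $pos = tell($fh);
while (1) {
    sleep(1);
    my $size = -s $file;
    next unless $size > $pos;     # nothing new yet
    read($fh, my $buf, $size - $pos);
    print $buf;
    $pos = $size;
}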
There are, of course, various platform-dependent means of receiving
notification when a file changes size, but I'm not going to do the
research for you.
This isn't really a perl problem (though there is a perl solution --
File::Tail).
[snip]
> > system(1, "Blah")
> >
> > Does NOT do a fork followed by an exec. It internally uses the
> > Windows CreateProcess function to start the new process.
>
> But isn't the fork() emulation done using exec() which in turn
> uses CreateProcess? That's how I interpreted the documentation,
> at least. Did I get that wrong?
Yes. Very.
The fork() emulation is done with threads, not processes.
First, the perl interpreter gets copied (everything -- all package
variables, lexical variables, the stack, *everything*). Then, a new
thread is started, and it sets its 'current interpreter' variable to
that copied interpreter. Then, the new interpreter (in the new thread)
continues running from the same point as the old one was.
When you run system(1, "blah"), I *believe* (but am not sure) that perl
actually calls CreateProcess (that is, the same windows library function
that &Win32::Process::Create() calls).
> > When your child process detects that it's output is a pipe, rather
> > than a terminal, it alters its own output buffering.
> >
> > You can use IO::Pty to avoid this issue.
>
> Also under Win32?!?
> (That'd be great!)
Alas, no. Not as far as I know. Windows (or at least, win95) does not
have any way of creating a pty.
[snip]
> > You can't *force* it to, magically, but if you know how it decided
> > when
>
> Of course I can't force it, but I hoped there might be a
> commandline option for that, or some way similar to what
> you can do with "tty" under Unix, or something else.
> Is there anything like that?
If your other program is a perl program, you can set the $| variable to
a true value.
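(That is, put $| = 1; near the top of the child script -- or,
equivalently: use IO::Handle; STDOUT->autoflush(1);)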
[snip]
When sysread() returns 0, then that signals an EOF condition.
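So in a fileevent callback it might look like this (a sketch; $mw and
$text stand for the main window and a Tk::Text widget):

sub out_callback {
    my $n = sysread(OUT, my $buf, 4096);
    if (defined $n and $n == 0) {              # EOF reached
        $mw->fileevent(\*OUT, 'readable', ''); # stop watching the handle
        close(OUT);
        return;
    }
    $text->insert('end', $buf) if $n;          # show output as it happens
}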
> > Don't call sysread() more than once in the callback, or else you can
> > get a deadlock.
>
> Yes, I learned that from experience already... :-)
>
> > Don't *forget* to call sysread() in the callback, or else Tk will
> > call that callback repeatedly, until you *do* read something.
>
> Ok.
>
> > Once both OUT and ERR have reached EOF, then the child process will
> > have exited. (Or at least, waiting for it to exit after that's
> > happened, likely won't result in a deadlock)
>
> How do I find out?
> Does it mean the file handle reached EOF when I get a "readable"
> fileevent and then sysread() returns zero bytes?
Yes. Gee, it sounds like you've been reading from "perldoc -f sysread"
:)
> > When the child process exits, you should call $harness->finish, to
> > avoid leaking memory.
>
> Not much of a trouble here, as the program I'm writing will usually
> be one-shot (or at most a few shots) only, but good to know to do it
> right from the beginning, in order not to stumble over this later...
>
> > Don't call $harness->finish before *both* handles reach EOF, or else
> > there's a possiblity of going into deadlock.
>
> Ok, but again, how can I detect EOF without using eof()?
See above.
[snip]
> > > I also haven't had a very close look at Win32::Job and
> > > Win32::Process yet - can somebody say something about
> > > them?
> >
> > Win32::Job has the *exact same* buffering problems that IPC::Open3
> > has. It's advantages are that you've got these "job group" things,
> > which make waiting for a group of processes to finish, and you can
> > start processes minimized or without a window, or whatever.
>
> Ah, I see.
>
> > And Win32::Process offers you *only* the ability to create a new
> > process, but it doesn't do anything at all for you wrt to making
> > handles to/from the child process. (You can make it so that the
> > child process inherits your own handles (or doesn't), but it won't
> > create io handles for you).
>
> How can I do that?
As an example:
open( SAVEIN, "<&STDIN" );
open( SAVEOUT, ">&STDOUT" );
open( SAVEERR, ">&STDERR" );
open( STDIN, "<", "somefile.txt" );
open( STDOUT, ">", "foo.$$.out.txt" );
open( STDERR, ">", "foo.$$.err.txt" );
Win32::Process::Create( my ($obj),
    "C:/path/to/command",
    "command and args",
    1,    # <-- this tells the new process to inherit handles.
    NORMAL_PRIORITY_CLASS,
    "."
) or die $^E;
open( STDIN, "<&SAVEIN" );
open( STDOUT, ">&SAVEOUT" );
open( STDERR, ">&SAVEERR" );
close( SAVEIN );
close( SAVEOUT );
close( SAVEERR );
Obviously, in a real program, you would *probably* make the child
process's handles point to sockets (to make them usable with select or
IO::Select or Tk::fileevent) or pipes (if you don't need to do stuff
like that), rather than to plain files.
> This seems to answer my question from the beginning whether Windows
> processes do inherit from their parents - so just to confirm: Do they?
Sometimes :)
> > Do processes under Win32 also inherit their parent's
> > pipes, sockets etc.???
> That depends on whether or not you pass a true value as the fourth
> argument to Win32::Process.
Ah! Great!
> If so, then yes. If not, then I'm not sure what it gets (closed
> handles? open handles to the "nul" file? A new console window, with
> handles open to/from it? Something else?).
I suppose a new MS-DOS Box window?
Anyway, this doesn't matter in my case,
as I am not interested in letting this happen. :-)
> I suppose that you could, but if you do that, then you can probably just
> forget about trying to keep straight the ordering of data [...].
Yes, this is the risk, of course.
I thought I could watch the file in a tight loop (but yes,
who says the OS doesn't preempt me) and thus see the output
as it happens...
> AFAIK, all it does is read or seek to the end of the file, then
> alternatly sleep() and stat() the file, seeing if it has grown since the
> last time it was stat()ed.
Ah, yes, I had forgotten about stat()! :-)
> There are, of course, various platform-dependent means of receiving
> notification when a file changes size, but I'm not going to do the
> research for you.
Of course not.
> This isn't really a perl problem (though there is a perl solution --
> File::Tail).
Ah, thanks a lot, that was exactly what I wanted to know!!
> The fork() emulation is done with threads, not processes.
> [...]
> When you run system(1, "blah"), I *believe* (but am not sure) that perl
> actually calls ProcessCreate (that is, the same windows library function
> that &Win32::Process::Create() calls).
Ah, so in this case system() DOESN'T fork() (or emulate to fork()),
as the documentation says, but actually runs the external process?
> > > You can use IO::Pty to avoid this issue.
> > Also under Win32?!?
>
> Alas, no. Not as far as I know. Windows (or at least, win95) does not
> have any way of creating a pty.
Sometimes Windows really annoys me... :-(
> If your other program is a perl program, you can set the $| variable to
> a true value.
Unfortunately it's not. It's nmake.exe and a C compiler.
> When sysread() returns 0, then that signals an EOF condition.
Ok, perfect!
> > Does it mean the file handle reached EOF when I get a "readable"
> > fileevent and then sysread() returns zero bytes?
>
> Yes. Gee, it sounds like you've been reading from "perldoc -f sysread"
> :)
I did, but now that I re-read to confirm, I am ashamed to discover
that it's actually written there ("0 at end of file, or undef if
there was an error") - I just overlooked that part! <blush>
Sorry!
> As an example:
>
> open( SAVEIN, "<&SAVEIN" );
> open( SAVEOUT, ">&STDOUT" );
> open( SAVEERR, ">&STDERR" );
> open( STDIN, "<", "somefile.txt" );
> open( STDOUT, ">", "foo.$$.out.txt" );
> open( STDERR, ">", "foo.$$.err.txt" );
> Win32::Process::Create( my ($obj),
> "C:/path/to/command",
> "command and args",
> 1, # <-- this tells the new process to inherit handles.
> NORMAL_PRIORITY_CLASS,
> "."
> ) or die $^E;
> open( STDIN, "<&SAVEIN" );
> open( STDOUT, ">&SAVEOUT" );
> open( STDERR, ">&SAVEERR" );
> close( SAVEIN );
> close( SAVEOUT );
> close( SAVEERR );
So mixing this with the other solution with socketpair
you showed me should do the trick?
I.e., putting in the Win32::Process::Create() call
instead of system() (or fork() or whatever) in your
solution with socketpair()?
> > Do processes inherit from their parents?
> Sometimes :)
Since I can control it (using that "TRUE" 4th parameter you explained
above), it's as good as "anytime you want"! :-)
Thanks again for spelling things out for me!!
Best regards,
Steffen
Well, the docs for system() do say that it forks, but AFAIK, those docs
were written before perl got ported to windows.
On Windows, system() neither fork()s nor pseudo-fork()s nor exec()s;
instead it does some special windows-specific thing.
Also, any *other* platform which doesn't have fork and exec will, for
system, do some platform-specific thing. (There's usually a way to do
*something* like system(), even if a platform doesn't grok the concepts
of fork/exec).
I'm not sure what's done on Windows for system() -- it probably merely
calls the C runtime system() function.
When I added support for spawn()ing on OS/2, I explicitly decided not
to change the docs of system(), the documentation of system() being
"so out-of-touch with reality". Later the support of spawn() was
ported to other platforms, but the porters did not touch the docs -
probably for the same reason.
> On Windows, system() neither fork() nor psuedo-fork()s nor exec()s;
> instead it does some special windows-specific thing.
Quite the opposite. Contemporary OSes support spawn()ing processes.
spawn() maps more-or-less one-to-one to the semantic of system().
Legacy OSes (like *nix) do not have spawn(). As a consequence, one
needs to use some special *nix-specific thing to implement system().
> Also, any *other* platform which doesn't have fork and exec will, for
> system, do some platform-specific thing. (There's usually a way to do
> *something* like system(), even if a platform doesn't grok the concepts
> of fork/exec).
Even on platforms which have fork()/exec(), who would use them for
system() if the platform supports spawn() too? If you are interested,
look into the sources to see which hoola-hoops one needs to go through
to get a reasonable error code from system() on exec/fork
architectures.
Hope this helps,
Ilya
P.S. spawn() is similar to exec, but it starts a child process and
returns the pid or -1.
The spawn() family of functions is non-POSIX.
AFAIK, fork and exec are.
It's quite conceivable that on different systems, spawn behaves in
different ways (even if those systems are both POSIX compliant). OTOH,
if you use fork and exec (hoola-hoops notwithstanding), you can be
reasonably assured of consistent behavior.
> The spwan() family of functions are non-posix.
> AFAIK, fork and exec are.
Too bad for POSIX. :-( BTW, there is no way to make exec() POSIX if
it is going to do something useful too.
> It's quite concievable that on different systems, spawn behaves in
> different ways (even if those systems are both posix compliant). OTOH,
> if you use fork and exec (hoola-hoops notwithstanding), you can be
> reasonably assured of consistant behavior.
There may be no consistency wrt system(). Just look at the name. ;-)
There are too many variables to consider. Is the system supporting
#!? Is Perl emulating the support for #!? Is the system adding an
executable extension? What kinds of executable extensions? What are
the rules when the executable extension is added and when it is not?
What kind of process should the system start - console one or GUI?
Should the GUI window be minimized, maximized or "normal"? Should the
GUI window get the focus?
[These are just a few which immediately come to mind.]
Hope this helps,
Ilya
> P.S. spawn() is similar to exec, but it starts a child process and
> returns the pid or -1.
In which cases does it return -1?
I had the problem that my
$pid = open(FH, "command 2>&1 |");
actually succeeded (I was able to
read the expected data with <FH>),
but "close()" returned -1 nevertheless
in "$?" (and "$!" contained "No child processes").
(Windows 2000, native build of Perl 5.8.0 with MS VC++ 6.0)
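Reduced to its essentials, what I was doing is (with "command" as a
placeholder):

my $pid = open(FH, "command 2>&1 |") or die "Cannot start command: $!";
print while <FH>;
close(FH) or warn $! ? "Error closing pipe: $!"
                     : "Child exit status: " . ($? >> 8);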
Thanks!
Best regards,
Steffen
Actually, there are several kinds of spawn() [distinguished by a
flag]. What I described is an async spawn(); e.g., one done for
system(1, @ARGV), or pipe open. The sync one (used for the "standard"
system()) returns the exit code of the child or -1.
> In which cases does it return -1?
??? When the operation fails. (*This* is a major advantage over
fork()/exec(), which typically returns success no matter what is the
actual result.)
> (Windows 2000, native build of Perl 5.8.0 with MS VC++ 6.0)
What you describe is a failure of close() on Win32 after a
*successful* async spawn(). Keep in mind that close() is very similar
on contemporary/legacy architectures. But I have very little
knowledge of bugs in Win32...
Yours,
Ilya
POSIX specifies an optional posix_spawn() and posix_spawnp(). The
stated reason is that realtime systems with no virtual memory or other
address munging abilities could have difficulty implementing fork()
efficiently. One of the goals in designing them was that they should
be able to replace at least 50% of typical executions of fork.