Failed pipe open always exits?


6si...@polari.online.com

Jun 24, 1992, 2:24:51 PM
I'm writing a script that I want to send its output through a pager
if output is to a tty and it can start the pager, otherwise to STDOUT.
I tried the following:

$pager = $ENV{'PAGER'} || 'more';

open (PAGER, "| $pager") || open (PAGER, '>-');

write output to PAGER filehandle...

close (PAGER);

If the PAGER environment variable isn't set or is set to the name of
a program, the script works as expected. However, if the PAGER
environment variable is set to something that can't be exec'd, the
open prints an error and the script exits!

If I change the "| $pager" to "> $pager", it works as expected. If
the file $pager can't be opened for writing, the first open fails,
the second open works, and the output is dumped to STDOUT.

This occurs using both perl 4.019 and perl 4.034. I've also tried
wrapping things in evals, and the only change is that perl reports
the failure is in an eval :-).

Comments?
--
Brian L. Matthews b...@6sceng.UUCP

Tom Christiansen

Jun 25, 1992, 9:01:00 AM
From the keyboard of 6si...@polari.online.com:
:I'm writing a script that I want to send its output through a pager

The answer here is that you can't expect to know if your open worked
the way you're doing it! From the FAQ:

33) Why doesn't open return an error when a pipe open fails?

These statements:

open(TOPIPE, "|bogus_command") || die ...
open(FROMPIPE, "bogus_command|") || die ...

will not fail just for lack of the bogus_command. They'll only
fail if the fork to run them fails, which is seldom at best.

If you're writing to the TOPIPE, you'll get a SIGPIPE if the child
exits prematurely or doesn't run. If you are reading from the
FROMPIPE, you need to check the close() to see what happened.
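[A sketch of that close() check, using the modern three-argument open and
lexical filehandles that postdate the perl4 of this thread; the command
name is deliberately bogus, and 2>/dev/null silences the shell's own
complaint:]

```perl
#!/usr/bin/perl
# Sketch: a failed command on a read pipe is only visible at close().
use strict;
use warnings;

open(my $fh, '-|', 'no_such_command_xyz 2>/dev/null')
    or die "fork failed: $!";
my @lines = <$fh>;   # reading just sees EOF if the exec failed
close($fh);          # returns false when the child failed...
warn "command failed, exit status $?\n" if $?;   # ...and $? has the status
```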

If you want an answer sooner than pipe buffering might otherwise
afford you, you can do something like this:

$kid = open (PIPE, "bogus_command |"); # XXX: check defined($kid)
(kill 0, $kid) || die "bogus_command failed";

This works fine if bogus_command doesn't have shell metas in it, but
if it does, the shell may well not have exited before the kill 0. You
could always introduce a delay:

$kid = open (PIPE, "bogus_command </dev/null |");
sleep 1;
(kill 0, $kid) || die "bogus_command failed";

but this is sometimes undesirable, and in any event does not guarantee
correct behavior. But it seems slightly better than nothing.

Similar tricks can be played with writable pipes if you don't wish to
catch the SIGPIPE.
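[One such trick, sketched in modern Perl syntax: note the SIGPIPE in a
handler instead of letting it kill the script, then check close(). The
command name is deliberately bogus and the sleep is only there to give
the doomed child time to exit:]

```perl
#!/usr/bin/perl
# Sketch: detect a failed command on a writable pipe via SIGPIPE + close().
use strict;
use warnings;

my $sigpipe = 0;
$SIG{PIPE} = sub { $sigpipe = 1 };    # note the broken pipe, don't die

open(my $pipe, '|-', 'no_such_command_xyz 2>/dev/null')
    or die "fork failed: $!";
sleep 1;                              # let the doomed child exit first
print $pipe "some output\n";          # may raise SIGPIPE here or at close
my $ok = close($pipe);                # false if the child failed
warn "command failed\n" if $sigpipe or !$ok;
```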

Maybe Larry will fix this in perl5. (Hint hint :-)

--tom
--
There are probably better ways to do that, but it would make the parser
more complex. I do, occasionally, struggle feebly against complexity... :-)
--Larry Wall in 78...@jpl-devvax.JPL.NASA.GOV
Tom Christiansen tch...@convex.com convex!tchrist

Michael Cook

Jun 25, 1992, 10:44:22 AM
Tom Christiansen <tch...@convex.COM> writes:

> open(TOPIPE, "|bogus_command") || die ...
> open(FROMPIPE, "bogus_command|") || die ...

>Maybe Larry will fix this in perl5. (Hint hint :-)

How? It seems that the problem is with the UNIX fork/exec concept. What
could Perl do to overcome the problem?

Michael.

Tom Christiansen

Jun 25, 1992, 3:04:06 PM
From the keyboard of mc...@mrc.dev.cdx.mot.com (Michael Cook):

The problem is that people *expect* perl's open() to fail if it
can't exec the program. Dealing with this is left as an
exercise to the implementor. :-)

--tom
--
Tom Christiansen tch...@convex.com convex!tchrist

A formal parsing algorithm should not always be used.
-- D. Gries

6si...@polari.online.com

Jun 29, 1992, 2:41:58 AM
In article <1992Jun25.1...@news.eng.convex.com> tch...@convex.COM (Tom Christiansen) writes:
|The answer here is that you can't expect to know if your open worked
|the way you're doing it!

Sigh. Of course if I'd stopped to think about how opening a pipe to
or from a command must be implemented, I would have realized that open
can't know if the exec worked or not.

However, even given that open can't report the failure, the child
perl always prints a warning if the exec fails. It seems quite
unacceptable for a script to be spitting out cryptic (to the average
user) error messages that the script can't suppress (I suppose I
could do a pipe and a fork myself and twiddle with stderr before
doing the open but that sounds hard :-)), so I'd like to suggest
only printing a warning on -w:

*** orig/util.c Sat Jun 27 14:51:23 1992
--- util.c Sat Jun 27 14:51:51 1992
***************
*** 1464,1466 ****
do_exec(cmd); /* may or may not use the shell */
! warn("Can't exec \"%s\": %s", cmd, strerror(errno));
_exit(1);
--- 1464,1467 ----
do_exec(cmd); /* may or may not use the shell */
! if (dowarn)
! warn("Can't exec \"%s\": %s", cmd, strerror(errno));
_exit(1);

Larry Wall

Jun 29, 1992, 3:07:18 PM
In article <1992Jun25.1...@news.eng.convex.com> tch...@convex.COM (Tom Christiansen) writes:
: The problem is that people *expect* perl's open() to fail if it
: can't exec the program. Dealing with this is left as an
: exercise to the implementor. :-)

You think you can so easily push that hot button, eh? Most of my hot
buttons are harder to push than that, unless I help you. :-)

I agree it's a pity that Unix doesn't allow Perl to predict the future.
On the other hand, not many parents can afford to twiddle their thumbs
waiting for their kids to get their act together...

Larry

Chip Salzenberg

Jun 30, 1992, 9:14:39 AM
According to lw...@netlabs.com (Larry Wall):

>I agree it's a pity that Unix doesn't allow Perl to predict the future.

What about the old close-on-exec trick? It can be used to report
accurately whether the child's exec() worked or not. It should be
reasonable at least for open(,"x|") and open(,"|x").

Brief recap:
Parent creates a pipe before forking the child.
Parent closes pipe[1] and does a blocking read() on pipe[0].
Child closes pipe[0] and turns on the close-on-exec flag of pipe[1].
Now the magic: if the child exec() succeeds, the parent's read() returns
zero (EOF). If the child exec() fails, the child writes errno to the
pipe and exits; so the parent read() returns non-zero, and it gets the
exec() errno as a bonus.
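[The recap above can be sketched in modern Perl; Fcntl and POSIX are
standard modules, spawn_checked is only an illustrative name, and the
unbuffered syswrite matters because _exit() skips stdio flushing:]

```perl
#!/usr/bin/perl
# Sketch of the close-on-exec trick: the parent learns reliably
# whether the child's exec() worked, and gets its errno if not.
use strict;
use warnings;
use Fcntl qw(F_SETFD FD_CLOEXEC);
use POSIX qw(_exit);

sub spawn_checked {
    my @cmd = @_;
    pipe(my $rd, my $wr) or die "pipe: $!";
    defined(my $pid = fork) or die "fork: $!";
    if ($pid == 0) {                        # child
        close $rd;
        fcntl($wr, F_SETFD, FD_CLOEXEC);    # fd vanishes if exec succeeds
        no warnings 'exec';
        exec { $cmd[0] } @cmd;              # list form: no shell involved
        syswrite($wr, 0 + $!);              # exec failed: ship errno back
        _exit(1);
    }
    close $wr;                              # parent keeps only the read end
    my $errno = <$rd>;                      # EOF here means the exec worked
    close $rd;
    if (defined $errno && length $errno) {
        waitpid($pid, 0);
        die "exec @cmd failed: errno $errno\n";
    }
    return $pid;                            # caller should waitpid() later
}
```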
--
Chip Salzenberg at Teltronics/TCT <ch...@tct.com>, <7371...@compuserve.com>
"Do Rush place weird subliminal backmasked messages in their songs to
compel unwilling geeks to commit evil .sig atrocities?" -- Dean Engelhardt

Larry Wall

Jun 30, 1992, 3:43:11 PM
In article <1992Jun29.064158.13115@polari> 6si...@polari.online.com writes:
: However, even given that open can't report the failure, the child
: perl always prints a warning if the exec fails. It seems quite
: unacceptable for a script to be spitting out cryptic (to the average
: user) error messages that the script can't suppress

The alternatives are even more unacceptable, unfortunately. "Silence
is too often mistaken for consent."

: (I suppose I
: could do a pipe and a fork myself and twiddle with stderr before
: doing the open but that sounds hard :-)), so I'd like to suggest
: only printing a warning on -w:

Alas, people don't run with -w often enough. And let's not forget that
stderr was invented for printing out warnings and such, after all.
Complaints about cryptic messages are valid. Complaints about the
presence of unexpected messages on stderr are not. Applications that
don't want their pretty screens messed up must capture stderr in some
fashion or other.

However, in this case it's not all that hard to get more control over
this particular error message. You needn't fiddle STDERR--the
automatic message only comes out on very-high-level popen-style
failures. Just use the lower level constructs by changing

$pid = open(PIPE, "| my_command my_arg");
die "Fork failure" unless defined $pid;

to

$pid = open(PIPE, "|-");
die "Fork failure" unless defined $pid;
if (!$pid) { # I'm the child.
    exec "my_command my_arg";
    die "This is not a cryptic error message";
}

One could encapsulate this in a subroutine if clutter is perceived.
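[Such an encapsulation might look like this sketch; open_pipe_to is an
illustrative name, and the lexical filehandles and list-form exec shown
here postdate the perl4 of this thread:]

```perl
#!/usr/bin/perl
# Sketch: fork via open("|-") and exec in the child, so the script
# prints its own message on failure instead of perl's automatic one.
use strict;
use warnings;
use POSIX qw(_exit);

sub open_pipe_to {
    my @cmd = @_;
    my $pid = open(my $fh, '|-');      # "|-": fork; child reads our writes
    die "Fork failure: $!" unless defined $pid;
    if (!$pid) {                       # child: becomes the command
        no warnings 'exec';
        exec { $cmd[0] } @cmd;
        print STDERR "can't run $cmd[0]: $!\n";   # our message, not perl's
        _exit(1);
    }
    return $fh;                        # parent writes to the command
}

my $fh = open_pipe_to('cat');          # 'cat' as a stand-in pager
print $fh "paged output\n";
close $fh or warn "child failed\n";
```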

Larry

Michael Cook

Jun 30, 1992, 4:14:35 PM
ch...@tct.com (Chip Salzenberg) writes:

>According to lw...@netlabs.com (Larry Wall):
>>I agree it's a pity that Unix doesn't allow Perl to predict the future.

>What about the old close-on-exec trick? It can be used to report
>accurately whether the child's exec() worked or not. It should be
>reasonable at least for open(,"x|") and open(,"|x").

>Brief recap:
> Parent creates a pipe and before forking child.
> Parent closes pipe[1] and does a blocking read() on pipe[0].
> Child closes pipe[0] and turns on the close-on-exec flag of pipe[1].
> Now the magic: if the child exec() succeeds, the parent's read() returns
> zero (EOF). If the child exec() fails, the child writes errno to the
> pipe and exits; so the parent read() returns non-zero, and it gets the
> exec() errno as a bonus.

Interesting trick. But it doesn't work for open(FH, "|bogus;"), because
Perl's exec (of /bin/sh -c "bogus;") would succeed, but then sh's exec (of
"bogus") would fail.

E. Tye McQueen

Jul 1, 1992, 7:12:09 PM
mc...@mrc.dev.cdx.mot.com (Michael Cook) writes:
)ch...@tct.com (Chip Salzenberg) writes:
)>According to lw...@netlabs.com (Larry Wall):
)>>I agree it's a pity that Unix doesn't allow Perl to predict the future.
)
)>What about the old close-on-exec trick?
)
)Interesting trick. But it doesn't work for open(FH, "|bogus;"), because
)Perl's exec (of /bin/sh -c "bogus;") would succeed, but then sh's exec (of
)"bogus") would fail.

I would love for this to work in cases where the dreaded /bin/sh is not
used to interpret the arguments. In fact, whether it gets incorporated
into Perl or not I will soon have popen(handle,"r"/"w",pgm,arg,...)
written which will always do this and never call /bin/sh.

I find that invoking /bin/sh to parse open() command strings is almost
always to redirect in/output or to clumsily handle quoting things that
you don't want /bin/sh to mess up. I much prefer passing my arguments
as an array and not having to worry about spaces and shell metacharacters.
And redirecting in/output is easy in Perl. Plus you don't have all those
/bin/sh processes hanging around.
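[A sketch of that approach in modern Perl: pass the command as a list so
no shell ever parses it, and do the redirection in Perl before the exec.
The echo arguments and the tempfile are only illustrative:]

```perl
#!/usr/bin/perl
# Sketch: list-form exec (no /bin/sh) plus output redirection in Perl.
use strict;
use warnings;
use File::Temp qw(tempfile);
use POSIX qw(_exit);

my @cmd = ('echo', 'hello; $HOME *');        # metacharacters stay literal
my (undef, $file) = tempfile(UNLINK => 0);   # stand-in for a real path

defined(my $pid = fork) or die "fork: $!";
if ($pid == 0) {
    open(STDOUT, '>', $file) or _exit(1);    # the redirection, sans shell
    no warnings 'exec';
    exec { $cmd[0] } @cmd;                   # list form: never calls /bin/sh
    _exit(1);
}
waitpid($pid, 0);
# $file now holds the literal line: hello; $HOME *
```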

In fact, one of our customers wrote a /bin/sh wrapper that exec's w/o
forking when called as "sh -c 'cmd'" and exec's /bin/sh otherwise just so
they wouldn't have so many worse-than-useless /bin/sh's clogging things up.

Larry, any plans to teach Perl how to handle in/output redirection without
resorting to /bin/sh? Its easy enough to write in Perl so it may not be
worth adding (like better-than-csh globbing).

--
t...@spillman.com Tye McQueen, E.
----------------------------------------------------------
Nothing is obvious unless you are overlooking something.
----------------------------------------------------------

Larry Wall

Jul 2, 1992, 6:20:39 PM
In article <1992Jul01.2...@spillman.uucp> t...@spillman.uucp (E. Tye McQueen) writes:
: Larry, any plans to teach Perl how to handle in/output redirection without
: resorting to /bin/sh? It's easy enough to write in Perl so it may not be
: worth adding (like better-than-csh globbing).

Down that path lies madness.

On the other hand, the road to hell is paved with melting snowballs.

Larry
