
Re: [PATCH] perlipc.pod Revamp


Shlomi Fish

Nov 1, 2010, 1:28:45 PM
to Paul Johnson, perl-docu...@perl.org, perl5-...@perl.org
Hi Paul,

thanks for your email.

On Monday 01 November 2010 16:47:50 Paul Johnson wrote:
> On Thu, Oct 28, 2010 at 06:18:01PM +0200, Shlomi Fish wrote:
> > This is a new version of the patch for revamping perlipc.pod now that the
> > whitespace normalisation patch was committed.
>
> Hello Shlomi,
>
> [ Apologies to all for the length of this post. If you don't want to
> read it all, please skip to section 5 at the bottom. ]
>
> I find myself uneasy with the changes you are proposing in this patch.
> I have given the matter a little thought and would like to explain why
> that is so.
>
> We have a document describing the way in which contributed modules are
> to be managed within the perl core. This is perlpolicy.pod. Please
> take a moment read this document if you are not familiar with it.

I've read it now.

> The
> relevant section is entitled "CONTRIBUTED MODULES" with the sub-heading
> "A Social Contract about Artistic Control".
>
> Although this document does not specifically mention documentation, I
> believe that many of the principles defined therein are applicable to
> documentation as well as code and perhaps even more so to code within
> documentation.

Actually, it does mention documentation under "MAINTENANCE BRANCHES" saying:

{{{
· Documentation updates are acceptable.
}}}

I also disagree that we should treat the core documentation (which exists only
in the core) in the same way that we treat dual-life modules.

>
> Writing documentation is hard. Writing good documentation is even
> harder. With that in mind, I am grateful to anyone to attempts such a
> task. However, one of the problems Perl now faces, in my opinion, is
> not a dearth of documentation, but rather an abundance of documents
> which are losing cohesion. Most aspects of the language are documented,
> if only you can find the correct location. This is not a simple problem
> to solve, and it gets slightly harder every time someone comes with a
> well intentioned and locally useful documentation patch, adding more
> details or examples to some section which they had found less than
> clear.

OK, do you think my proposed series-of-patches alleviates this problem?

>
> However, that is not the problem you are trying to solve. I believe
> that you are wanting to modernise the code examples to demonstrate best
> practices and (perhaps) to make them into standalone snippets that can
> be copied and pasted into users' code.

Right.

>
> Whilst this, on the surface, might be considered a laudable goal, I do
> have a number of concerns, not specifically about the changes you have
> made (although many of them do concern me), but more generally about the
> direction in which this takes us. Let me try to explain some of my
> concerns.
>
> 1. I'm not convinced that we would be able to get a consensus on what
> the best practices are that we would like to promote. For example,
> I would consider some of the changes you have made to be changes for
> the worse, as would perlstyle, which is probably the closest thing
> we have to a definition here, and which is a fairly relaxed
> document. We certainly don't want to get anywhere near to edit wars
> over style.

Well, someone with authority will commit the changes to perlipc.pod in the
style that they see fit, and this will stand until perlipc.pod requires
further updates. As it stands, my patch corrects many style and
best-practices issues based on the sources here:

http://perl-begin.org/tutorials/bad-elements/

>
> 2. Best practices change over time. When this document was first
> written it didn't contain anything which, at the time, would have
> been considered bad practice. (I'm not sure it does even now,
> though I suspect that even Tom would write the code differently
> nowadays.) This means that we would periodically need to update the
> code in the examples, possibly using constructs recently added to
> the language. I'm not suggesting this is a particularly difficult
> problem to solve should we want to, I'm more questioning whether we
> want to.

Well, we should strive to update the document as much as our resources allow.
I volunteered some of my time to bring perlipc up to 5.12.0 (and hopefully
5.14.0 and above) standards. I don't rule out that it will need future
updates, but it would still be better $X years from now than the current
perlipc, which is no longer adequate *now*. So in the interests of the present
and the future, you should apply my patch or a similar one.

>
> 3. And why should examples in the documents have to demonstrate best
> practices anyway? The purpose of most examples is not to explain
> how to program Perl in general, but to describe some particular
> aspect of the language. It could be considered that any extraneous
> code detracts from that goal. That is why I would consider it
> unnecessary to declare and initialise all the variables used, for
> example. Similarly, the addition of extra blank lines, which may be
> useful in code itself, could simply prove distracting in a code
> example where it might be more useful to be able to see all the
> code on the same page as the text describing it.
>

The problem is that people often come to us for help with bad Perl code that
they lifted from, or wrote based on, a bad (and likely old) source. We don't
need more badly written Perl code out there, and if we have the opportunity to
fix it for future generations to see, then it ought to be done. And we need to
make sure that the example code found in the Perl 5 core documentation also
demonstrates best practices, because if the fire has caught the cedars, what
will the moss on the wall say?

> 4. It might even be considered advantageous to have numerous code
> styles sprinkled around the official perl docs. There are certainly
> numerous Perl styles in the core, in CPAN modules and in the vast
> majority of codebases where more than a couple of people have worked
> on the code. Why not in the documentation?

I don't mind having a small amount of style variation (see
http://www.joelonsoftware.com/articles/Wrong.html ), but we should not
sacrifice best practices in the core Perl 5 documentation. Naturally, if some
of our blocks are:

[code]
if (COND) {
.
.
.
}
[/code]

And some are:

[code]
if (COND)
{
.
.
.
}
[/code]

Then it would still be OK.

>
> 5. My greatest concern, though, is that if someone has gone to the
> effort to write a document, then they should have the right to
> determine the style they will use not only in writing the text, but
> in writing the code examples too. Obviously if the code is wrong,
> or is rendered incorrect by subsequent changes to the language, then
> it needs to be updated. Otherwise, I would be wary of making code
> changes and would, in the spirit of perlpolicy.pod, prefer that such
> changes only be made with the blessing of the original author, where
> that is feasible. In this case I think this should be feasible and,
> if you have such a blessing, then I withdraw all my concerns about
> this particular patch, although my general concerns still stand.

Well, I hereby ask the original author of perlipc.pod (Tom Christiansen, I
believe) if he approves of such changes. Regardless of that, the documentation
was written many years ago, and we should expect it to be kept up to date; it
is licensed under a free-and-open licence which permits everybody to make such
changes. And in our case, I believe they are necessary, and other people
who've commented on my submissions seem to think so too.

Whatever "perldoc perlpolicy" says about the dual-life modules is not very
relevant to perlipc.pod.

Also see what I've written about it on my blog (for a different context) in a
post titled "Changing the Seldon Plan":

http://community.livejournal.com/shlomif_tech/37969.html

(Gabor Szabo has written a follow-up to this here:
http://szabgab.com/blog/2009/11/1259431123.html ).

To sum up, we should not be afraid of modernising, fixing or refactoring old
open-source code, because this yields many short-term and long-term benefits,
and we should not treat FOSS code as holy.

> So, in summary, whilst I'm very thankful for the changes you have made
> fixing mistakes, I fear that applying the stylistic changes would set a
> dangerous precedent.

How would it be dangerous? I didn't throw away the examples, and I've
preserved their spirit. I didn't even rewrite them completely from scratch,
but rather revamped them incrementally. I believe the spirit of the document
and most of its contents are preserved.

Regards,

Shlomi Fish

--
-----------------------------------------------------------------
Shlomi Fish http://www.shlomifish.org/
"The Human Hacking Field Guide" - http://shlom.in/hhfg

<rindolf> She's a hot chick. But she smokes.
<go|dfish> She can smoke as long as she's smokin'.

Please reply to list if it's a mailing list post - http://shlom.in/reply .

Shlomi Fish

Nov 7, 2010, 2:23:54 PM
to perl5-...@perl.org, perl-docu...@perl.org, Tom Christiansen
OK, now replying in detail.

On Monday 01 November 2010 19:49:54 Tom Christiansen wrote:
> Thank you for your edits. I agree with most of them.
>
> A lot of these are just personal style things. It's nearly not worth
> commenting on. My own style *has* gotten more regular since I wrote that,
> but not much looks too seriously different from how I'd write things today.
> This is mostly quibbling, although there are a few genuine items that
> require addressing.
>
> I haven't read the replies yet.
>
> >+ our $shucks = 0;
>
> I'm always nervous about initializing a global through our().
> You should only do so once, but nothing guarantees that you do.
> I'd never do such in an example, because it's too fragile.
>

Well, without this our() declaration, the example won't pass "use strict;", so
I had to add it. Would "my" be better here? "my" should work, I think, but I
wasn't sure about it.

> > sub catch_zap {
> >
> > my $signame = shift;
> >
> >+
> >
> > $shucks++;
> >
> >+
> >
> > die "Somebody sent me a SIG$signame";
> >
> > }
> >
> >+
> >
> > $SIG{INT} = 'catch_zap'; # could fail in modules
> > $SIG{INT} = \&catch_zap; # best strategy
>
> I don't much mind the vertical whitespace following the function,
> but I see no reason for it within the function. The more
> vertical whitespace, the more you dilute the code density of what
> can be immediately apprehended at a glance. Sometimes you want
> to do that, but sometimes you want *not* to do it.
>
> Remembering that these code snippets need to fit comfortably on a
> manpage, which is a paged document, I suspect that this case
> belongs to the latter group, not to the former.
>

Very well, I removed it in my copy on github.

> >@@ -45,11 +48,21 @@ system, or you can retrieve them from the Config
> >module. Set up an
> >
> > indexed by name to get the number:
> > use Config;
> >
> >- defined $Config{sig_name} || die "No sigs?";
> >- foreach $name (split(' ', $Config{sig_name})) {
> >- $signo{$name} = $i;
> >- $signame[$i] = $name;
> >- $i++;
> >+
> >+ if (!defined $Config{sig_name})
> >+ {
>
> See, more spurious vertical whitespace. If you must make this
> a block, it should be clustered. And I suspect that boolean
> truth would suffice.

OK, I'll reconsider.

>
> >+ die "No sigs?";
> >+ }
> >+
> >+ my (%signo, @signame);
> >+
> >+ my $index = 0;
> >+
> >+ foreach my $name (split(' ', $Config{sig_name})) {
>
> I never write "foreach my" because my brain never inserts the
> English word "of" before the "my"; I always write "for my".

I don't have a problem with that (people say "for each child in my family",
etc.), and generally the Perl custom is to use "foreach" for iterating over a
list, and "for" for either C-style for-loops or iterating over a range.

>
> >+ $signo{$name} = $index;
> >+ $signame[$index] = $name;
>
> These days I would work harder to vertically align the equal signs
> between adjacent assignments.

Fixed.

>
> >+
>
> More diluting vertical whitespace.
>
> >+ $index++;
> >
> > }
> >
> > So to check whether signal 17 and SIGALRM were the same, do just this:
> >@@ -79,8 +92,9 @@ values are "inherited" by functions called from within
> >that block.)
> >
> > sub precious {
> >
> > local $SIG{INT} = 'IGNORE';
>
> I'd now write that "IGNORE". My current *personal* style is to
> use double quotes on all strings I am not trying to have *not*
> interpolate. I find it easier to read because the glyph is
> wider, so can't ever look like a backquote no matter the font.

Hmmm. Damian's PBP recommends using single-quotes for strings that don't have
interpolated variables/etc., and double quotes when you need interpolation and
other features of ""/qq{}.

> > sub more_functions {
> >
> > # interrupts still ignored, for now...
>
> I might say SIGINTS instead of interrupts, which I
> tend to think of as hardware things.

Fixed in my copy.

>
> > }
> >
> >@@ -116,7 +130,7 @@ You may be able to determine the cause of failure
> >using C<%!>.
> >
> > You might also want to employ anonymous functions for simple signal
> >
> > handlers:
> >- $SIG{INT} = sub { die "\nOutta here!\n" };
> >+ $SIG{INT} = sub { die "\nOutta here!\n"; };
>
> Oh, no. I would no more put a semicolon at the end of a statement whose
> close brace immediately follows on the same line than I would put a comma
> after the last list element right before a close paren on the same line,
> and for just the same reason.
>
> I would also never omit them if the closing punctuation is on
> a separate line.

Well, this is bikeshedding. I think always adding a semicolon at the end of
every statement is good style, and is safer if you add stuff later on. PBP
recommends it, I believe.

> >+
> >
> > cleanup_child($pid, $?);
> >
> > };
>
> That's a great deal of vertical whitespace. I can't see what
> it usefully adds. It looks like double-spaced text.
>

Removed to make the example shorter.

> >+
> >+ $children{$pid}=1;
> >
> > # ...
> > system($command);
> > # ...
> >
> >+
>
> Why the vertical whitespace here? Why swap the other
> "# ..." with a blank line, but not these?
>

I swapped it with the comment "I'm the child - do something".

> > }
> >
> > }
> >
> >@@ -199,12 +226,16 @@ using longjmp() or throw() in other languages.
> >
> > Here's an example:
> > eval {
> >
> >- local $SIG{ALRM} = sub { die "alarm clock restart" };
> >+
> >+ local $SIG{ALRM} = sub { die "alarm clock restart"; };
>
> No new semicolon.
>

OK, who gets to veto this?

> > If the operation being timed out is system() or qx(), this technique
> > is liable to generate zombies. If this matters to you, you'll
> >
> >@@ -246,16 +277,18 @@ info to show that it works and should be replaced
> >with the real code.
> >
> > $|=1;
>
> Spaces around the equals sign.
>
Done.

> > my $c = 0;
> > while (++$c) {
> > sleep 2;
> > print "$c\n";
>
> I'm not proud of having used a single letter for a variable name.
> In fact, I don't care for it here.

Changed to $count.

I integrated all your commentary into:

[code]
sub _create_named_pipe
{
    my ($path) = @_;    # path of the fifo to create

    local $ENV{PATH} = $ENV{PATH}
        . join "", map { ":$_" } qw(/etc /usr/etc /sbin/ /usr/sbin);

    if (system('mknod', $path, 'p') != 0) {
        if (system('mkfifo', $path) != 0) {
            die "mk{nod,fifo} $path failed - $?";
        }
    }

    # Return success.
    return 1;
}
[/code]

The brace on a separate line is my own personal style and does not belong in
perl*.pod.

> >+ }
> >+ }
> >+
> >+ # Return success.
> >+ return 1;
> >
> > }
> >
> >@@ -313,22 +357,26 @@ from that file, the reading program will block and
> >your program will
> >
> > supply the new signature. We'll use the pipe-checking file test B<-p>
> > to find out whether anyone (or anything) has accidentally removed our
> > fifo.
> >
> >- chdir; # go home
> >- $FIFO = '.signature';
> >+ use POSIX qw();
> >+
> >+ chdir; # Go home
> >+ my $fifo_filename = '.signature';
> >
> > while (1) {
> >
> >- unless (-p $FIFO) {
> >- unlink $FIFO;
> >- require POSIX;
> >- POSIX::mkfifo($FIFO, 0700)
> >- or die "can't mkfifo $FIFO: $!";
>
> The goal was not to load the POSIX module unconditionally.
> That has been lost.
>

Why should we not load POSIX always? Is this a micro-optimisation?

> >- print FIFO "John Smith (smith\@host.org)\n", `fortune -s`;
> >- close FIFO;
> >- sleep 2; # to avoid dup signals
> >+ # Next line blocks until there's a reader.
> >+ open (my $fifo_fh, ">", $fifo_filename)
> >+ or die "can't write ${fifo_filename}: $!";
> >+ print {$fifo_fh} "John Smith (smith\@host.org)\n", `fortune -s`;
>
> Why in the world has that become a dative block?
>

PBP recommends (and I agree) against writing

print $fifo_fh "Hello\n";

Because it is too easy to confuse with:

print $fifo_fh, "Hello\n";

And recommends writing:

print {$fifo_fh} "Hello\n";

instead.

> >+ close $fifo_fh;
>
> That should test its return value.
>
> close($fifo_fh) || die "can't close $fifo_fh:

That would interpolate $fifo_fh there.

I changed it to:

close($fifo_fh) or die "can't close \$fifo_fh: $?";

> >+
> >+ sleep 2; # To avoid duplicate signals
>
> Wish I knew what I had been thinking.
>

Should I just remove it?

> >-N.B. If a signal of any given type fires multiple times during an opcode
> >+N.B.: if a signal of any given type fires multiple times during an opcode
>
> I'd try to find something less Latinate. Plus the style guide says not
> to start sentence with "note that", anyway.
>

Changed to "One should note that"

> > (such as from a fine-grained timer), the handler for that signal will
> > only be called once after the opcode completes, and all the other
> > instances will be discarded. Furthermore, if your system's signal queue
> >
> >@@ -397,7 +445,7 @@ breaks into IO operations like C<read> (used to
> >implement Perls
> >
> > E<lt>E<gt> operator). On older Perls the handler was called
>
> I'd write that as C<< <> >> now.

Done, thanks.

>
> > immediately (and as C<read> is not "unsafe" this worked well). With
> > the "deferred" scheme the handler is not called immediately, and if
> >
> >-Perl is using system's C<stdio> library that library may re-start the
> >+Perl is using the system's C<stdio> library, that library may re-start
> >the
>
> I don't think I'd hyphenate restart.
>

Hyphen removed.

> > =back
> >
> >@@ -470,24 +519,27 @@ C<"unsafe"> (a new feature since Perl 5.8.1).
> >
> > Perl's basic open() statement can also be used for unidirectional
>
> It's not a statement. It's a function.

Changed.

>
> > interprocess communication by either appending or prepending a pipe
> >
> >-symbol to the second argument to open(). Here's how to start
> >-something up in a child process you intend to write to:
> >+symbol to the second argument to open(), or by using the three-args form
> >+with the C<"|-"> or C<"-|"> modes (which won't work on Windows systems.)
> >+. Here's how to start something up in a child process you intend to write
> >to:
> >
> >- open(SPOOLER, "| cat -v | lpr -h 2>/dev/null")
> >- || die "can't fork: $!";
> >+ open (my $spooler, "| cat -v | lpr -h 2>/dev/null")
> >+ or die "can't fork: $!";
>
> That's unnecessary.

What is and why?

> >+ close ($spooler) or die "bad spool: $! $?";
>
> That should be
>
> close($spooler) || die "close to spooler pipe failed: $? $!";
>
> But that's still awkward.

>
> > And here's how to start up a child process you intend to read from:
> >- open(STATUS, "netstat -an 2>&1 |")
> >- || die "can't fork: $!";
> >- while (<STATUS>) {
> >+ open (my $status, "netstat -an 2>&1 |")
> >+ or die "can't fork: $!";
>
> Please don't make it all look like the same stuff. The "||"
> is used for a reason.

Why do you find it preferable to "or"? "or" has an ultra-low precedence and
so is harder to misuse.

>
> >+
> >+ while (<$status>) {
> >
> > next if /^(tcp|udp)/;
> > print;
> >
> > }
> >
> >- close STATUS || die "bad netstat: $! $?";
>
> That's actually a bad thing. Looks like a perl4 vestige.
>
> >+
> >+ close ($status) or die "bad netstat: $! $?";
>
> No space.
>
> close($status) || die "bad netstat: $! $?";
>
> All the || should line up to the same column in that example.
> That's why it's strangely indented earlier.
>

More bikeshedding.

> > =head2 Filehandles
> >
> >@@ -546,7 +598,8 @@ child process cannot outlive the parent.
> >
> > =head2 Background Processes
> >
> >-You can run a command in the background with:
> >+Assuming your shell supports that, you can run a command in the
> >background
> >
> >+with:
> > system("cmd &");
> >
> >@@ -569,8 +622,8 @@ output doesn't wind up on the user's terminal).
> >
> > sub daemonize {
> >
> > chdir '/' or die "Can't chdir to /: $!";
> >
> >- open STDIN, '/dev/null' or die "Can't read /dev/null: $!";
> >- open STDOUT, '>/dev/null'
> >+ open STDIN, '<', '/dev/null' or die "Can't read /dev/null:
> >$!";
>
> Nope. Make all the || die align. AND DO NOT OMIT THE PARENS AROUND
> THE FUNCTION CALL!!

Fixed the alignment. There were no parentheses around open.


> >+ close($kid_to_write) or warn "kid exited $?";
>
> No "or". I absolutely never ever use it. That way I never
> get confused as the recent CPAN bug report shows that people do.

Which bug report?

>
> > } else { # child
> >
> > ($EUID, $EGID) = ($UID, $GID); # suid progs only
> >
> >- open (FILE, "> /safe/file")
> >- || die "can't open /safe/file: $!";
> >+ open (my $file, ">", "/safe/file")
>
> Why? There is 100.00000000000% chance that that will behave.
> It's a constant string of known content.

Still, we should not encourage such potentially bad idioms. Here it is
harmless, but people can learn that it's OK to do ">$myfile" too.

> > # add error processing as above
> >
> >- $pid = open(KID_TO_READ, "-|");
> >+ $pid = open($kid_to_read, "-|");
>
> Since we're getting our knickers in a snit now,
> I wonder where that was declared?

It wasn't - thanks for spotting it. Fixed now.


> >+ # NOT REACHED
> >
> > }
> >
> > And here's a safe pipe open for writing:
> > # add error processing as above
> >
> >- $pid = open(KID_TO_WRITE, "|-");
> >+ $pid = open($kid_to_write, "|-");
> >
> > $SIG{PIPE} = sub { die "whoops, $program pipe broke" };
> >
> > if ($pid) { # parent
> >
> > for (@data) {
> >
> >- print KID_TO_WRITE;
> >+ print $kid_to_write;
>
> And *HERE* is where we have a problem, Houston. That is
> now incorrect!! This is the price of political correctness
> gone MAD!
>
> % perl -le '$fh = \*STDOUT; print $fh'
> GLOB(0x7e0e6020)
>
> SEE??? That's why you use handles that Perl knows are handles.
>

Fixed.
> > defined $pid or die "fork failed; $!";
>
> Bad semicolon. And other things.
>
> die "fork failed: $!" unless defined $pid;
>
> > if ($pid) {
> >
> > if (my $sub_pid = fork()) {
> >
> >- close WRITER;
> >+ close $writer;
>
> close($writer) || die ....
>

Fixed.

> >-In the above, the true parent does not want to write to the WRITER
>
> In the above *what*? In the code above, I bet.
>

Fixed.

> > if ($pid) {
> >
> >- close READER;
> >+ close $reader;
> >
> > if (my $sub_pid = fork()) {
> >
> >- close WRITER;
> >+ close $writer;
> >
> > }
> > else {
> >
> >- # write to WRITER...
> >+ # write to $writer...
> >
> > exit;
> >
> > }
> >
> >- # write to WRITER...
> >+ # write to $writer...
> >
> > }
> > else {
> >
> >- open STDIN, "<&READER";
> >- close WRITER;
> >+ open STDIN, "<&", $reader;
> >+ close $writer;
> >
> > # do something...
> > exit;
> >
> > }
> >
> >@@ -733,11 +787,11 @@ open() which sets one file descriptor to another, as
> >
> >below:
> > Since Perl 5.8.0, you can also use the list form of C<open> for pipes :
> > the syntax
> >
> >- open KID_PS, "-|", "ps", "aux" or die $!;
> >+ open my $kid_ps, "-|", "ps", "aux" or die $!;
>
> That looks just terrible I'm sure I would never have
> written such a nasty thing.
>
> open(my $kid_ps, "-|", qw[ ps aux ])
>
> || die "pipe from ps failed: $!";

Corrected.

>


> > forks the ps(1) command (without spawning a shell, as there are more than
> > three arguments to open()), and reads its standard output via the
> >
> >-C<KID_PS> filehandle. The corresponding syntax to write to command
> >+C<$kid_ps> filehandle. The corresponding syntax to write to command
> >
> > pipes (with C<"|-"> in place of C<"-|">) is also implemented.
> >
> > Note that these operations are full Unix forks, which means they may not
> > be
> >
> >@@ -775,7 +829,7 @@ While this works reasonably well for unidirectional
> >communication, what
> >
> > about bidirectional communication? The obvious thing you'd like to do
> >
> > doesn't actually work:
> >- open(PROG_FOR_READING_AND_WRITING, "| some program |")
> >+ open (my $prog_for_reading_and_writing, "| some program |")
> >
> > and if you forget to use the C<use warnings> pragma or the B<-w> flag,
> >
> > then you'll miss out entirely on the diagnostic message:
> >@@ -813,27 +867,32 @@ commands are designed to operate over pipes, so this
> >seldom works
> >
> > unless you yourself wrote the program on the other end of the
> > double-ended pipe.
> >
> >-A solution to this is the nonstandard F<Comm.pl> library. It uses
> >-pseudo-ttys to make your program behave more reasonably:
> >+A solution to this is the non-core IPC-Run CPAN distribution (
> >+L<http://search.cpan.org/dist/IPC-Run/> )
>
> How come that's not written as an IPC::Run module?
>

What do you mean?

> >+
> >+ use IPC::Run qw(run timeout);
> >+
> >+ my ($in, $out, $err);
> >+ run ['cat', '-n'], \$in, \$out, \$err, timeout(10)
> >+ or die "Cannot run cat: $?";
>
> That's just terrible. Use this:
>
> run([qw(cat -n)], \($in, $out, $err), timeout(10))
>
> || die "couldn't run cat -n program: $?";
>
> Better yet, use an array so you don't have the same
> info in two places:
>
> @cat_args = qw[cat -n];
>
> run(\(@cat_args, $in, $out, $err) => timeout(10))
>
> || die "couldn't run @cat_args program: $?";

Fixed to an extent.

> >
> >- require 'Comm.pl';
> >- $ph = open_proc('cat -n');
> >
> > for (1..10) {
>
> Spaces around operators.

Fixed.

> >also
> >+addresses this kind of thing, but for Unix systems only. This module
> >requires +two other modules from CPAN: IO::Pty and IO::Stty. It sets up a
> >+pseudo-terminal to interact with programs that insist on using talking to
>
> Don't think that needs hyphenation.
>
> Don't think using belongs there.
>

Fixed the "using". I think that "pseudoterminal" looks weird.

> >+the terminal device driver. If your system is amongst those supported,
> >this
>
> In my own speech, I more often use "among" when the following
> word begins with a consonant sound. I'd probably say "amongst
> others" but "among those". That's a tad precious, perhaps even
> silly: too much like "my nose and mine eyes".
>

Fixed to among.

> >+may be your best bet.
> >
> > =head2 Bidirectional Communication with Yourself
> >
> >@@ -1095,7 +1154,7 @@ go back to service a new client.
> >
> > if (! defined($pid = fork)) {
> >
> > logmsg "cannot fork: $!";
> > return;
> >
> >- }
> >+ }
> >
> > elsif ($pid) {
> >
> > logmsg "begat $pid";
> > return; # I'm the parent
> >
> >@@ -1125,7 +1184,7 @@ to be reported. However the introduction of safe
> >signals (see
> >
> > L</Deferred Signals (Safe Signals)> above) in Perl 5.7.3 means that
> > accept() may also be interrupted when the process receives a signal.
> > This typically happens when one of the forked sub-processes exits and
>
> Don't think that needs hyphenation.
>

I think it does. Not fixed yet.

Regards,

Shlomi Fish

--
-----------------------------------------------------------------
Shlomi Fish http://www.shlomifish.org/

Why I Love Perl - http://shlom.in/joy-of-perl

Shlomi Fish

Nov 7, 2010, 6:18:47 PM
to perl5-...@perl.org, Paul Johnson, Jesse Vincent, perl-docu...@perl.org, Tom Christiansen
On Sunday 07 November 2010 22:37:44 Paul Johnson wrote:
> On Sun, Nov 07, 2010 at 03:32:12PM -0500, Jesse Vincent wrote:

> > On Sun, Nov 07, 2010 at 09:23:54PM +0200, Shlomi Fish wrote:
> > > > > eval {
> > > > >
> > > > >- local $SIG{ALRM} = sub { die "alarm clock restart" };
> > > > >+
> > > > >+ local $SIG{ALRM} = sub { die "alarm clock restart"; };
> > > >
> > > > No new semicolon.
> > >
> > > OK, who gets to veto this?
> >
> > That may well be me. I'm not a fan of those ;s.
>
> Or how about Larry?
>
> See perlstyle.pod.

I see. I removed these semicolons here:

https://github.com/shlomif/perl/tree/perlipc-revamp

Thanks!

Regards,

Shlomi Fish

--
-----------------------------------------------------------------
Shlomi Fish http://www.shlomifish.org/

Parody of "The Fountainhead" - http://shlom.in/towtf

Shlomi Fish

Nov 8, 2010, 5:18:21 AM
to perl5-...@perl.org, Abigail, perl-docu...@perl.org, Tom Christiansen
On Monday 08 November 2010 01:48:50 Abigail wrote:
> On Sun, Nov 07, 2010 at 09:23:54PM +0200, Shlomi Fish wrote:
> > Hmmm. Damian's PBP recommends using single-quotes for strings that don't
> > have interpolated variables/etc., and double quotes when you need
> > interpolation and other features of ""/qq{}.
>
> [ Snipped other references to PBP ]
>
>
> Can we please stop treating PBP as gospel?
>

I didn't treat PBP (the book "Perl Best Practices") as gospel. I mentioned
some recommendations out of it because I agree with them and because they make
sense. I disagree with PBP on many points, and wouldn't use it as the basis
for passing criticism on code.

Regards,

Shlomi Fish

--
-----------------------------------------------------------------
Shlomi Fish http://www.shlomifish.org/

The Case for File Swapping - http://shlom.in/file-swap

Shlomi Fish

Nov 8, 2010, 9:14:14 AM
to perl5-...@perl.org, demerphq, perl-docu...@perl.org, Tom Christiansen
On Monday 08 November 2010 15:45:02 demerphq wrote:

> On 7 November 2010 20:23, Shlomi Fish <shl...@iglu.org.il> wrote:
> >> > } else { # child
> >> >
> >> > ($EUID, $EGID) = ($UID, $GID); # suid progs only
> >> >
> >> >- open (FILE, "> /safe/file")
> >> >- || die "can't open /safe/file: $!";
> >> >+ open (my $file, ">", "/safe/file")
> >>
> >> Why? There is 100.00000000000% chance that that will behave.
> >> It's a constant string of known content.
> >
> > Still, we should not encourage such potentially bad idioms. Here it is
> > harmless, but people can learn that it's OK to do ">$myfile" too.
>
> If the intent of these changes is to demonstrate a better style, then
> I would expect the filename to be stored in a var so that it isn't
> duplicated in the error message.

Good point.

>
> Also, im not sure that i agree that changing open(FILE, ... ) to
> open(my $file, ...) is very sensible, in that to me it is not clear
> what "$file" is (in my coding style such a var would almost always be
> a file /name/), whereas FILE is very clearly a handle, accordingly I
> would tend to use var names like $out_fh for a write handle and $in_fh
> for a read handle, in particular I almost always suffix the var with
> '_fh' or something to make it clear that it is not the name of a file
> but a handle to one.

Another good point. On this web page, I (not Damian) recommend against calling
variables "$file" due to ambiguity between file handle and file name:

http://perl-begin.org/tutorials/bad-elements/#calling-variables-file

As a result, I changed it to this code:

    my $safe_filename = "/safe/file";
    open (my $safe_fh, ">", $safe_filename)
        or die "can't open ${safe_filename}: $!";
    while (<STDIN>) {
        print {$safe_fh} $_; # child's STDIN is parent's $kid_to_write
    }
    exit; # don't forget this

These changes can be found here:

http://github.com/shlomif/perl/tree/perlipc-revamp

Regards,

Shlomi Fish

>
> IOW, (respecting Tom's style expectations):
>
> my $safe_file = "/safe/file";
> open(my $safe_fh, ">", $safe_file) || die "Failed to open '$safe_file'
> for writing: $!";
>
> Cheers,
> yves

--
-----------------------------------------------------------------
Shlomi Fish http://www.shlomifish.org/

"The Human Hacking Field Guide" - http://shlom.in/hhfg

<rindolf> She's a hot chick. But she smokes.

Tom Christiansen

Nov 8, 2010, 10:27:13 AM11/8/10
to Shlomi Fish, perl5-...@perl.org, demerphq, perl-docu...@perl.org
What the heck is this:

Section 5 of the F<modules> file is devoted to "Networking, Device Control
(modems), and Interprocess Communication", and contains numerous unbundled
modules numerous networking modules, Chat and Expect operations, CGI
programming, DCE, FTP, IPC, NNTP, Proxy, Ptty, RPC, SNMP, SMTP, Telnet,
Threads, and ToolTalk--just to name a few.

What "modules" file is this talking about?

--tom

Tom Christiansen

Nov 8, 2010, 1:19:59 PM11/8/10
to Perl5 Porters Mailing List, demerphq, Shlomi Fish, perl-docu...@perl.org, Abigail, Jesse Vincent
Yves wrote:

> If the intent of these changes is to demonstrate a better style, then
> I would expect the filename to be stored in a var so that it isn't
> duplicated in the error message.

Quite.

> Also, im not sure that i agree that changing open(FILE, ... ) to
> open(my $file, ...) is very sensible,

Ayup.

> in that to me it is not clear what "$file" is (in my coding style such
> a var would almost always be a file /name/), whereas FILE is very
> clearly a handle, accordingly I would tend to use var names like
> $out_fh for a write handle and $in_fh for a read handle, in particular
> I almost always suffix the var with '_fh' or something to make it
> clear that it is not the name of a file but a handle to one.

Yes, that also.

> IOW, (respecting Tom's style expectations):

***Thank you.***

> my $safe_file = "/safe/file";
> open(my $safe_fh, ">", $safe_file)
> || die "Failed to open '$safe_file' for writing: $!";

Sure.

> Cheers,
> yves

Looking through perlipc to make little fixes, it quickly became clear to me
that it had been patched with code by people with a Perl coding style, and
sometimes with an English language style, both very different to my own.

This made it seem like an old patchwork tattercloth that didn't fit
together very well. It was confusing and detracted from the overall
message. I've tried to fix all this. I started with a recent git pull
and edited. The result is something that is once again internally
self-consistent. Here are the sizes:

% diff -buw perl-git/pod/perlipc.pod ~ | wc
1937 12917 85335

% ls -l ~/perlipc.pod
76 -rw-r--r-- 1 tchrist staff 73749 Nov 8 11:01 /home/tchrist/perlipc.pod

Given that, I don't see much reason to send diffs larger in size
than the original file itself, but if people would like that, I
can. For now, I'm sending the complete revision in toto.

As for using PBP like some code-Bible, I have to side with Abigail
on this matter--and also with Damian himself, who wrote me that:

Not my espoused style, but so many people forget that PBP
was--at its heart--a plea for code to be written in *any*
consistent style, consciously and rationally chosen to meet
one's own needs. No-one, I'm sure, would ever accuse you of
failing to do that.

If Damian himself does not condemn me, then let he who is without
sin cast the first stone.

--tom

=head1 NAME

perlipc - Perl interprocess communication (signals, fifos, pipes, safe subprocesses, sockets, and semaphores)

=head1 DESCRIPTION

The basic IPC facilities of Perl are built out of the good old Unix
signals, named pipes, pipe opens, the Berkeley socket routines, and SysV
IPC calls. Each is used in slightly different situations.

=head1 Signals

Perl uses a simple signal handling model: the %SIG hash contains names
or references of user-installed signal handlers. These handlers will
be called with an argument which is the name of the signal that
triggered it. A signal may be generated intentionally from a
particular keyboard sequence like control-C or control-Z, sent to you
from another process, or triggered automatically by the kernel when
special events transpire, like a child process exiting, your own process
running out of stack space, or hitting a process file-size limit.

For example, to trap an interrupt signal, set up a handler like this:

    our $shucks;

    sub catch_zap {
        my $signame = shift;
        $shucks++;
        die "Somebody sent me a SIG$signame";
    }

    $SIG{INT} = __PACKAGE__ . "::catch_zap";
    $SIG{INT} = \&catch_zap;  # best strategy

Prior to Perl 5.7.3 it was necessary to do as little as you possibly
could in your handler; notice how all we do is set a global variable
and then raise an exception. That's because on most systems,
libraries are not re-entrant; particularly, memory allocation and I/O
routines are not. That meant that doing nearly I<anything> in your
handler could in theory trigger a memory fault and subsequent core
dump - see L</Deferred Signals (Safe Signals)> below.

The names of the signals are the ones listed out by C<kill -l> on your
system, or you can retrieve them from the Config module. Set up an
@signame list indexed by number to get the name and a %signo hash table
indexed by name to get the number:

    use Config;
    defined($Config{sig_name}) || die "No sigs?";
    my (@signame, %signo);
    my $i = 0;
    foreach my $name (split(" ", $Config{sig_name})) {
        $signo{$name} = $i;
        $signame[$i] = $name;
        $i++;
    }

So to check whether signal 17 and SIGALRM were the same, do just this:

    print "signal #17 = $signame[17]\n";
    if ($signo{ALRM}) {
        print "SIGALRM is $signo{ALRM}\n";
    }

You may also choose to assign the strings C<"IGNORE"> or C<"DEFAULT"> as
the handler, in which case Perl will try to discard the signal or do the
default thing.

On most Unix platforms, the C<CHLD> (sometimes also known as C<CLD>) signal
has special behavior with respect to a value of C<"IGNORE">.
Setting C<$SIG{CHLD}> to C<"IGNORE"> on such a platform has the effect of
not creating zombie processes when the parent process fails to C<wait()>
on its child processes (i.e., child processes are automatically reaped).
Calling C<wait()> with C<$SIG{CHLD}> set to C<"IGNORE"> usually returns
C<-1> on such platforms.
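For instance, here is a minimal sketch of that auto-reaping behavior; as
noted above, the exact semantics are platform-dependent, so portable code
should not rely on either result:

```perl
# Sketch: on platforms with the special CHLD/"IGNORE" behavior,
# exited children are reaped automatically, so a later waitpid()
# finds nothing to collect and returns -1.
$SIG{CHLD} = "IGNORE";

my $pid = fork();
defined($pid) || die "can't fork: $!";
exit 0 if $pid == 0;     # child does nothing and exits

sleep 1;                 # give the child a moment to exit
my $reaped = waitpid($pid, 0);
print "waitpid returned $reaped\n";   # usually -1 on such platforms
```

On a platform without this behavior, waitpid() would instead return the
child's PID and its status in C<$?>.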

Some signals can be neither trapped nor ignored, such as the KILL and STOP
(but not the TSTP) signals. One strategy for temporarily ignoring signals
is to use a local() on that hash element, automatically restoring a
previous value once your block is exited. Remember that values created by
the dynamically-scoped local() are "inherited" by functions called from
within their caller's scope.

    sub precious {
        local $SIG{INT} = "IGNORE";
        more_functions();
    }

    sub more_functions {
        # interrupts still ignored, for now...
    }

Sending a signal to a negative process ID means that you send the signal
to the entire Unix process group. This code sends a hang-up signal to all
processes in the current process group, and also sets $SIG{HUP} to C<"IGNORE">
so it doesn't kill itself:

    # block scope for local
    {
        local $SIG{HUP} = "IGNORE";
        kill HUP => -$$;
        # snazzy writing of: kill("HUP", -$$)
    }

Another interesting signal to send is signal number zero. This doesn't
actually affect a child process, but instead checks whether it's alive
or has changed its UID.

    unless (kill 0 => $kid_pid) {
        warn "something wicked happened to $kid_pid";
    }

When directed at a process whose UID is not identical to that
of the sending process, signal number zero may fail because
you lack permission to send the signal, even though the process is alive.
You may be able to determine the cause of failure using C<%!>.

    unless (kill(0 => $pid) || $!{EPERM}) {
        warn "$pid looks dead";
    }

You might also want to employ anonymous functions for simple signal
handlers:

    $SIG{INT} = sub { die "\nOutta here!\n" };

But that will be problematic for the more complicated handlers that need
to reinstall themselves. Because Perl's signal mechanism is currently
based on the signal(3) function from the C library, you may sometimes be so
unfortunate as to run on systems where that function is "broken"; that
is, it behaves in the old unreliable SysV way rather than the newer, more
reasonable BSD and POSIX fashion. So you'll see defensive people writing
signal handlers like this:

    sub REAPER {
        $waitedpid = wait;
        # loathe SysV: it makes us not only reinstate
        # the handler, but place it after the wait
        $SIG{CHLD} = \&REAPER;
    }
    $SIG{CHLD} = \&REAPER;
    # now do something that forks...

or better still:

    use POSIX ":sys_wait_h";
    sub REAPER {
        my $child;
        # If a second child dies while in the signal handler caused by the
        # first death, we won't get another signal. So must loop here else
        # we will leave the unreaped child as a zombie. And the next time
        # two children die we get another zombie. And so on.
        while (($child = waitpid(-1, WNOHANG)) > 0) {
            $Kid_Status{$child} = $?;
        }
        $SIG{CHLD} = \&REAPER;  # still loathe SysV
    }
    $SIG{CHLD} = \&REAPER;
    # do something that forks...

Be careful: qx(), system(), and some modules for calling external commands
do a fork(), then wait() for the result. Thus, your signal handler
(C<&REAPER> in the example) will be called. Because wait() was already
called by system() or qx(), the wait() in the signal handler will see no
more zombies and will therefore block.

The best way to prevent this issue is to use waitpid(), as in the following
example:

    use POSIX ":sys_wait_h";  # for nonblocking read

    my %children;

    $SIG{CHLD} = sub {
        # don't change $! and $? outside handler
        local ($!, $?);
        my $pid = waitpid(-1, WNOHANG);
        return if $pid == -1;
        return unless defined $children{$pid};
        delete $children{$pid};
        cleanup_child($pid, $?);
    };

    while (1) {
        my $pid = fork();
        die "cannot fork" unless defined $pid;
        if ($pid == 0) {
            # ...
            exit 0;
        } else {
            $children{$pid} = 1;
            # ...
            system($command);
            # ...
        }
    }

Signal handling is also used for timeouts in Unix. While safely
protected within an C<eval{}> block, you set a signal handler to trap
alarm signals and then schedule to have one delivered to you in some
number of seconds. Then try your blocking operation, clearing the alarm
when it's done but not before you've exited your C<eval{}> block. If it
goes off, you'll use die() to jump out of the block, much as you might
using longjmp() or throw() in other languages.

Here's an example:

    my $ALARM_EXCEPTION = "alarm clock restart";
    eval {
        local $SIG{ALRM} = sub { die $ALARM_EXCEPTION };
        alarm 10;
        flock(FH, 2)    # blocking write lock
            || die "cannot flock: $!";
        alarm 0;
    };
    if ($@ && $@ !~ quotemeta($ALARM_EXCEPTION)) { die }

If the operation being timed out is system() or qx(), this technique
is liable to generate zombies. If this matters to you, you'll
need to do your own fork() and exec(), and kill the errant child process.

For more complex signal handling, you might see the standard POSIX
module. Lamentably, this is almost entirely undocumented, but
the F<t/lib/posix.t> file from the Perl source distribution has some
examples in it.

=head2 Handling the SIGHUP Signal in Daemons

A process that usually starts when the system boots and shuts down
when the system is shut down is called a daemon (Disk And Execution
MONitor). If a daemon process has a configuration file which is
modified after the process has been started, there should be a way to
tell that process to reread its configuration file without stopping
the process. Many daemons provide this mechanism using a C<SIGHUP>
signal handler. When you want to tell the daemon to reread the file,
simply send it the C<SIGHUP> signal.

Not all platforms automatically reinstall their (native) signal
handlers after a signal delivery. This means that the handler works
only the first time the signal is sent. The solution to this problem
is to use C<POSIX> signal handlers if available; their behavior
is well-defined.

The following example implements a simple daemon, which restarts
itself every time the C<SIGHUP> signal is received. The actual code is
located in the subroutine C<code()>, which just prints some debugging
info to show that it works; it should be replaced with the real code.

    #!/usr/bin/perl -w

    use POSIX ();
    use FindBin ();
    use File::Basename ();
    use File::Spec::Functions;

    $| = 1;

    # make the daemon cross-platform, so exec always calls the script
    # itself with the right path, no matter how the script was invoked.
    my $script = File::Basename::basename($0);
    my $SELF   = catfile($FindBin::Bin, $script);

    # POSIX unmasks the sigprocmask properly
    my $sigset = POSIX::SigSet->new();
    my $action = POSIX::SigAction->new("sigHUP_handler",
                                       $sigset,
                                       &POSIX::SA_NODEFER);
    POSIX::sigaction(&POSIX::SIGHUP, $action);

    sub sigHUP_handler {
        print "got SIGHUP\n";
        exec($SELF, @ARGV) || die "$0: couldn't restart: $!";
    }

    code();

    sub code {
        print "PID: $$\n";
        print "ARGV: @ARGV\n";
        my $count = 0;
        while (++$count) {
            sleep 2;
            print "$count\n";
        }
    }


=head1 Named Pipes

A named pipe (often referred to as a FIFO) is an old Unix IPC
mechanism for processes communicating on the same machine. It works
just like regular anonymous pipes, except that the
processes rendezvous using a filename and need not be related.

To create a named pipe, use the C<POSIX::mkfifo()> function.

    use POSIX qw(mkfifo);
    mkfifo($path, 0700) || die "mkfifo $path failed: $!";

You can also use the Unix command mknod(1), or on some
systems, mkfifo(1). These may not be in your normal path, though.

    # system return val is backwards, so && not ||
    #
    $ENV{PATH} .= ":/etc:/usr/etc";
    if (   system("mknod",  $path, "p")
        && system("mkfifo", $path) )
    {
        die "mk{nod,fifo} $path failed";
    }


A fifo is convenient when you want to connect a process to an unrelated
one. When you open a fifo, the program will block until there's something
on the other end.

For example, let's say you'd like to have your F<.signature> file be a
named pipe that has a Perl program on the other end. Now every time any
program (like a mailer, news reader, finger program, etc.) tries to read
from that file, the reading program will read the new signature from your
program. We'll use the pipe-checking file-test operator, B<-p>, to find
out whether anyone (or anything) has accidentally removed our fifo.

    chdir();  # go home
    my $FIFO = ".signature";

    while (1) {
        unless (-p $FIFO) {
            unlink $FIFO;   # discard any failure, will catch later
            require POSIX;  # delayed loading of heavy module
            POSIX::mkfifo($FIFO, 0700)
                || die "can't mkfifo $FIFO: $!";
        }

        # next line blocks till there's a reader
        open (FIFO, "> $FIFO") || die "can't open $FIFO: $!";
        print FIFO "John Smith (smith\@host.org)\n", `fortune -s`;
        close(FIFO)            || die "can't close $FIFO: $!";
        sleep 2;               # to avoid dup signals
    }

=head2 Deferred Signals (Safe Signals)

Before Perl 5.7.3, installing Perl code to deal with signals exposed you to
danger from two things. First, few system library functions are
re-entrant. If the signal interrupts while Perl is executing one function
(like malloc(3) or printf(3)), and your signal handler then calls the same
function again, you could get unpredictable behavior--often, a core dump.
Second, Perl isn't itself re-entrant at the lowest levels. If the signal
interrupts Perl while Perl is changing its own internal data structures,
similarly unpredictable behavior may result.

There were two things you could do, knowing this: be paranoid or be
pragmatic. The paranoid approach was to do as little as possible in your
signal handler. Set an existing integer variable that already has a
value, and return. This doesn't help you if you're in a slow system call,
which will just restart. That means you have to C<die> to longjmp(3) out
of the handler. Even this is a little cavalier for the true paranoiac,
who avoids C<die> in a handler because the system I<is> out to get you.
The pragmatic approach was to say "I know the risks, but prefer the
convenience", and to do anything you wanted in your signal handler,
and be prepared to clean up core dumps now and again.

Perl 5.7.3 and later avoid these problems by "deferring" signals. That is,
when the signal is delivered to the process by the system (to the C code
that implements Perl) a flag is set, and the handler returns immediately.
Then at strategic "safe" points in the Perl interpreter (e.g. when it is
about to execute a new opcode) the flags are checked and the Perl level
handler from %SIG is executed. The "deferred" scheme allows much more
flexibility in the coding of signal handlers, as we know the Perl
interpreter is in a safe state and that we are not in a system library
function when the handler is called. However, the implementation does
differ from previous Perls in the following ways:

=over 4

=item Long-running opcodes

As the Perl interpreter looks at signal flags only when it is about
to execute a new opcode, a signal that arrives during a long-running
opcode (e.g. a regular expression operation on a very large string) will
not be seen until the current opcode completes.
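A tiny demonstration of that safe-point dispatch, assuming a Unix
platform where SIGUSR1 is available: the handler does not run inside the
C-level delivery itself, but it has run by the time the next Perl
statement executes.

```perl
my $got = 0;
$SIG{USR1} = sub { $got++ };   # plain Perl handler; dispatch is deferred

kill USR1 => $$;               # raise the signal against ourselves

# The C-level handler only set a flag during kill(); the Perl handler
# then ran at the next safe point, so by now $got has been incremented.
print "handler ran $got time(s)\n";
```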

If a signal of any given type fires multiple times during an opcode
(such as from a fine-grained timer), the handler for that signal will
be called only once, after the opcode completes; all other
instances will be discarded. Furthermore, if your system's signal queue
gets flooded to the point that there are signals that have been raised
but not yet caught (and thus not deferred) at the time an opcode
completes, those signals may well be caught and deferred during
subsequent opcodes, with sometimes surprising results. For example, you
may see alarms delivered even after calling C<alarm(0)> as the latter
stops the raising of alarms but does not cancel the delivery of alarms
raised but not yet caught. Do not depend on the behaviors described in
this paragraph as they are side effects of the current implementation and
may change in future versions of Perl.

=item Interrupting IO

When a signal is delivered (e.g., SIGINT from a control-C) the operating
system breaks into IO operations like I<read>(2), which is used to
implement Perl's readline() function, the C<< <> >> operator. On older
Perls the handler was called immediately (and as C<read> is not "unsafe",
this worked well). With the "deferred" scheme the handler is I<not> called
immediately, and if Perl is using system's C<stdio> library that library
may restart the C<read> without returning to Perl to give it a chance to
call the %SIG handler. If this happens on your system the solution is to
use C<:perlio> layer to do IO--at least on those handles that you want to
be able to break into with signals. (The C<:perlio> layer checks the signal
flags and calls %SIG handlers before resuming IO operation.)

The default in Perl 5.7.3 and later is to automatically use
the C<:perlio> layer.

Some networking library functions like gethostbyname() are known to have
their own implementations of timeouts which may conflict with your
timeouts. If you have problems with such functions, try using the POSIX
sigaction() function, which bypasses Perl safe signals. Be warned that
this does subject you to possible memory corruption, as described above.

Instead of setting C<$SIG{ALRM}>:

    local $SIG{ALRM} = sub { die "alarm" };

try something like the following:

    use POSIX qw(SIGALRM);
    POSIX::sigaction(SIGALRM,
                     POSIX::SigAction->new(sub { die "alarm" }))
        || die "Error setting SIGALRM handler: $!\n";

Another way to disable the safe signal behavior locally is to use
the C<Perl::Unsafe::Signals> module from CPAN, which affects
all signals.

=item Restartable system calls

On systems that supported it, older versions of Perl used the
SA_RESTART flag when installing %SIG handlers. This meant that
restartable system calls would continue rather than returning when
a signal arrived. In order to deliver deferred signals promptly,
Perl 5.7.3 and later do I<not> use SA_RESTART. Consequently,
restartable system calls can fail (with $! set to C<EINTR>) in places
where they previously would have succeeded.

The default C<:perlio> layer retries C<read>, C<write>
and C<close> as described above; interrupted C<wait> and
C<waitpid> calls will always be retried.
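If you need to cope with C<EINTR> yourself, for example around a raw
sysread() loop, the usual idiom is simply to retry the call when
C<$!{EINTR}> is set. A sketch, using a scratch file made up purely for
illustration:

```perl
use Errno ();   # makes the %! keys such as EINTR explicitly available

my $path = "/tmp/perlipc-eintr-demo.$$";   # illustrative scratch path
open(my $out, ">", $path) || die "can't write $path: $!";
print {$out} "some data\n";
close($out)               || die "can't close $path: $!";

open(my $in, "<", $path)  || die "can't read $path: $!";
my $buf = "";
my $n;
do {
    # retry the call if a signal interrupted it
    $n = sysread($in, $buf, 8192, length $buf);
} while (!defined($n) && $!{EINTR});
defined($n) || die "sysread failed: $!";
close($in);
unlink $path;

print "read back: $buf";
```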

=item Signals as "faults"

Certain signals like SEGV, ILL, and BUS are generated by virtual memory
addressing errors and similar "faults". These are normally fatal: there is
little a Perl-level handler can do with them. So Perl now delivers them
immediately rather than attempting to defer them.

=item Signals triggered by operating system state

On some operating systems certain signal handlers are supposed to "do
something" before returning. One example can be CHLD or CLD, which
indicates a child process has completed. On some operating systems the
signal handler is expected to C<wait> for the completed child
process. On such systems the deferred signal scheme will not work for
those signals: it does not do the C<wait>. Again the failure will
look like a loop as the operating system will reissue the signal because
there are completed child processes that have not yet been C<wait>ed for.

=back

If you want the old signal behavior back despite possible
memory corruption, set the environment variable C<PERL_SIGNALS> to
C<"unsafe">. This feature first appeared in Perl 5.8.1.

=head1 Using open() for IPC

Perl's basic open() statement can also be used for unidirectional
interprocess communication by either appending or prepending a pipe
symbol to the second argument to open(). Here's how to start
something up in a child process you intend to write to:

    open(SPOOLER, "| cat -v | lpr -h 2>/dev/null")
        || die "can't fork: $!";
    local $SIG{PIPE} = sub { die "spooler pipe broke" };
    print SPOOLER "stuff\n";
    close SPOOLER || die "bad spool: $! $?";

And here's how to start up a child process you intend to read from:

    open(STATUS, "netstat -an 2>&1 |")
        || die "can't fork: $!";
    while (<STATUS>) {
        next if /^(tcp|udp)/;
        print;
    }
    close STATUS || die "bad netstat: $! $?";

If one can be sure that a particular program is a Perl script expecting
filenames in @ARGV, the clever programmer can write something like this:

    % program f1 "cmd1|" - f2 "cmd2|" f3 < tmpfile

and no matter which sort of shell it's called from, the Perl program will
read from the file F<f1>, the process F<cmd1>, standard input (F<tmpfile>
in this case), the F<f2> file, the F<cmd2> command, and finally the F<f3>
file. Pretty nifty, eh?

You might notice that you could use backticks for much the
same effect as opening a pipe for reading:

    print grep { !/^(tcp|udp)/ } `netstat -an 2>&1`;
    die "bad netstatus ($?)" if $?;

While this is true on the surface, it's much more efficient to process the
file one line or record at a time because then you don't have to read the
whole thing into memory at once. It also gives you finer control of the
whole process, letting you kill off the child process early if you'd like.

Be careful to check the return values from both open() and close(). If
you're I<writing> to a pipe, you should also trap SIGPIPE. Otherwise,
think of what happens when you start up a pipe to a command that doesn't
exist: the open() will in all likelihood succeed (it only reflects the
fork()'s success), but then your output will fail--spectacularly. Perl
can't know whether the command worked, because your command is actually
running in a separate process whose exec() might have failed. Therefore,
while readers of bogus commands return just a quick EOF, writers
to bogus command will get hit with a signal, which they'd best be prepared
to handle. Consider:

    open(FH, "|bogus") || die "can't fork: $!";
    print FH "bang\n"; # neither necessary nor sufficient
                       # to check print retval!
    close(FH)          || die "can't close: $!";

The reason for not checking the return value from print() is because of
pipe buffering; physical writes are delayed. That won't blow up until the
close, and it will blow up with a SIGPIPE. To catch it, you could use
this:

    $SIG{PIPE} = "IGNORE";
    open(FH, "|bogus") || die "can't fork: $!";
    print FH "bang\n";
    close(FH)          || die "can't close: status=$?";

=head2 Filehandles

Both the main process and any child processes it forks share the same
STDIN, STDOUT, and STDERR filehandles. If both processes try to access
them at once, strange things can happen. You may also want to close
or reopen the filehandles for the child. You can get around this by
opening your pipe with open(), but on some systems this means that the
child process cannot outlive the parent.

=head2 Background Processes

You can run a command in the background with:

    system("cmd &");

The command's STDOUT and STDERR (and possibly STDIN, depending on your
shell) will be the same as the parent's. You won't need to catch
SIGCHLD because of the double-fork taking place; see below for details.

=head2 Complete Dissociation of Child from Parent

In some cases (starting server processes, for instance) you'll want to
completely dissociate the child process from the parent. This is
often called daemonization. A well-behaved daemon will also chdir()
to the root directory so it doesn't prevent unmounting the filesystem
containing the directory from which it was launched, and redirect its
standard file descriptors from and to F</dev/null> so that random
output doesn't wind up on the user's terminal.

    use POSIX "setsid";

    sub daemonize {
        chdir("/")                  || die "can't chdir to /: $!";
        open(STDIN,  "< /dev/null") || die "can't read /dev/null: $!";
        open(STDOUT, "> /dev/null") || die "can't write to /dev/null: $!";
        defined(my $pid = fork())   || die "can't fork: $!";
        exit if $pid;               # non-zero now means I am the parent
        (setsid() != -1)            || die "Can't start a new session: $!";
        open(STDERR, ">&STDOUT")    || die "can't dup stdout: $!";
    }

The fork() has to come before the setsid() to ensure you aren't a
process group leader; the setsid() will fail if you are. If your
system doesn't have the setsid() function, open F</dev/tty> and use the
C<TIOCNOTTY> ioctl() on it instead. See tty(4) for details.

Non-Unix users should check their C<< I<Your_OS>::Process >> module for
other possible solutions.

=head2 Safe Pipe Opens

Another interesting approach to IPC is making your single program go
multiprocess and communicate between--or even amongst--yourselves. The
open() function will accept a file argument of either C<"-|"> or C<"|-">
to do a very interesting thing: it forks a child connected to the
filehandle you've opened. The child is running the same program as the
parent. This is useful for safely opening a file when running under an
assumed UID or GID, for example. If you open a pipe I<to> minus, you can
write to the filehandle you opened and your kid will find it in I<his>
STDIN. If you open a pipe I<from> minus, you can read from the filehandle
you opened whatever your kid writes to I<his> STDOUT.

    use English qw[ -no_match_vars ];
    my $PRECIOUS = "/path/to/some/safe/file";
    my $sleep_count;
    my $pid;

    do {
        $pid = open(KID_TO_WRITE, "|-");
        unless (defined $pid) {
            warn "cannot fork: $!";
            die "bailing out" if $sleep_count++ > 6;
            sleep 10;
        }
    } until defined $pid;

    if ($pid) {              # I am the parent
        print KID_TO_WRITE @some_data;
        close(KID_TO_WRITE)  || warn "kid exited $?";
    } else {                 # I am the child
        # drop permissions in setuid and/or setgid programs:
        ($EUID, $EGID) = ($UID, $GID);
        open (OUTFILE, "> $PRECIOUS")
            || die "can't open $PRECIOUS: $!";
        while (<STDIN>) {
            print OUTFILE;   # child's STDIN is parent's KID_TO_WRITE
        }
        close(OUTFILE)       || die "can't close $PRECIOUS: $!";
        exit(0);             # don't forget this!!
    }

Another common use for this construct is when you need to execute
something without the shell's interference. With system(), it's
straightforward, but you can't use a pipe open or backticks safely.
That's because there's no way to stop the shell from getting its hands on
your arguments. Instead, use lower-level control to call exec() directly.

Here's a safe backtick or pipe open for read:

    my $pid = open(KID_TO_READ, "-|");
    defined($pid) || die "can't fork: $!";

    if ($pid) {     # parent
        while (<KID_TO_READ>) {
            # do something interesting
        }
        close(KID_TO_READ) || warn "kid exited $?";
    } else {        # child
        ($EUID, $EGID) = ($UID, $GID);  # suid only
        exec($program, @options, @args)
            || die "can't exec program: $!";
        # NOTREACHED
    }

And here's a safe pipe open for writing:

    my $pid = open(KID_TO_WRITE, "|-");
    defined($pid) || die "can't fork: $!";

    $SIG{PIPE} = sub { die "whoops, $program pipe broke" };

    if ($pid) {     # parent
        print KID_TO_WRITE @data;
        close(KID_TO_WRITE) || warn "kid exited $?";
    } else {        # child
        ($EUID, $EGID) = ($UID, $GID);
        exec($program, @options, @args)
            || die "can't exec program: $!";
        # NOTREACHED
    }

It is very easy to dead-lock a process using this form of open(), or
indeed with any use of pipe() with multiple subprocesses. The
example above is "safe" because it is simple and calls exec(). See
L</"Avoiding Pipe Deadlocks"> for general safety principles, but there
are extra gotchas with Safe Pipe Opens.

In particular, if you opened the pipe using C<open FH, "|-">, then you
cannot simply use close() in the parent process to close an unwanted
writer. Consider this code:

    my $pid = open(WRITER, "|-");  # fork open a kid
    defined($pid) || die "first fork failed: $!";
    if ($pid) {
        if (my $sub_pid = fork()) {
            defined($sub_pid) || die "second fork failed: $!";
            close(WRITER)     || die "couldn't close WRITER: $!";
            # now do something else...
        }
        else {
            # first write to WRITER
            # ...
            # then when finished
            close(WRITER)     || die "couldn't close WRITER: $!";
            exit(0);
        }
    }
    else {
        # first do something with STDIN, then
        exit(0);
    }

In the example above, the true parent does not want to write to the WRITER
filehandle, so it closes it. However, because WRITER was opened using
C<open FH, "|-">, it has a special behavior: closing it calls
waitpid() (see L<perlfunc/waitpid>), which waits for the subprocess
to exit. If the child process ends up waiting for something happening
in the section marked "do something else", you have deadlock.

This can also be a problem with intermediate subprocesses in more
complicated code, which will call waitpid() on all open filehandles
during global destruction--in no predictable order.

To solve this, you must manually use pipe(), fork(), and the form of
open() which sets one file descriptor to another, as shown below:

    pipe(READER, WRITER) || die "pipe failed: $!";
    $pid = fork();
    defined($pid)        || die "first fork failed: $!";
    if ($pid) {
        close READER;
        if (my $sub_pid = fork()) {
            defined($sub_pid) || die "second fork failed: $!";
            close(WRITER)     || die "can't close WRITER: $!";
        }
        else {
            # write to WRITER...
            # ...
            # then when finished
            close(WRITER)     || die "can't close WRITER: $!";
            exit(0);
        }
        # write to WRITER...
    }
    else {
        open(STDIN, "<&READER") || die "can't reopen STDIN: $!";
        close(WRITER)           || die "can't close WRITER: $!";
        # do something...
        exit(0);
    }

Since Perl 5.8.0, you can also use the list form of C<open> for pipes.
This is preferred when you wish to avoid having the shell interpret
metacharacters that may be in your command string.

So for example, instead of using:

    open(PS_PIPE, "ps aux|") || die "can't open ps pipe: $!";

One would use either of these:

    open(PS_PIPE, "-|", "ps", "aux")
        || die "can't open ps pipe: $!";

    @ps_args = qw[ ps aux ];
    open(PS_PIPE, "-|", @ps_args)
        || die "can't open @ps_args|: $!";

Because there are more than three arguments to open(), Perl forks the
ps(1) command I<without> spawning a shell, and reads its standard output
via the C<PS_PIPE> filehandle. The corresponding syntax to I<write> to
command pipes is to use C<"|-"> in place of C<"-|">.

This was admittedly a rather silly example, because you're using string
literals whose content is perfectly safe. There is therefore no cause to
resort to the harder-to-read, multi-argument form of pipe open(). However,
whenever you cannot be assured that the program arguments are free of shell
metacharacters, the fancier form of open() should be used. For example:

    @grep_args = ("egrep", "-i", $some_pattern, @many_files);
    open(GREP_PIPE, "-|", @grep_args)
        || die "can't open @grep_args|: $!";

Here the multi-argument form of pipe open() is preferred because the
pattern and indeed even the filenames themselves might hold metacharacters.

Be aware that these operations are full Unix forks, which means they may
not be correctly implemented on all alien systems. Additionally, these are
not true multithreading. To learn more about threading, see the F<modules>
file mentioned below in the SEE ALSO section.

=head2 Avoiding Pipe Deadlocks

Whenever you have more than one subprocess, you must be careful that each
closes whichever half of any pipes created for interprocess communication
that it is not using. This is because any child process reading from the
pipe and expecting an EOF will never receive it, and therefore never exit.
A single process closing a pipe is not enough to close it; the last
process with the pipe open must close it before a reader sees EOF.

Certain built-in Unix features help prevent this most of the time. For
instance, filehandles have a "close on exec" flag, which is set I<en masse>
under control of the C<$^F> variable. This is so any filehandles you
didn't explicitly route to the STDIN, STDOUT or STDERR of a child
I<program> will be automatically closed.

Always explicitly and immediately call close() on the writable end of any
pipe, unless that process is actually writing to it. Even if you don't
explicitly call close(), Perl will still close() all filehandles during
global destruction. As previously discussed, if those filehandles have
been opened with Safe Pipe Open, this will result in calling waitpid(),
which may again deadlock.
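
Putting the rule into practice, here is a minimal, hypothetical sketch
(not one of the servers discussed elsewhere in this document): the parent
closes the write end it isn't using, so its read loop sees EOF as soon as
the child exits.

```perl
#!/usr/bin/perl -w
# deadlock avoidance: the parent closes the write end it isn't using
use strict;

pipe(my $reader, my $writer) || die "pipe: $!";

my $pid = fork();
defined($pid) || die "fork: $!";

if ($pid == 0) {                  # child: writes one line, then exits
    close($reader) || die "child close reader: $!";
    print {$writer} "hello from the child\n";
    close($writer) || die "child close writer: $!";
    exit(0);
}

close($writer) || die "parent close writer: $!";   # the crucial step
my @got = <$reader>;    # returns once the child's close() delivers EOF
close($reader);
waitpid($pid, 0);
print "parent read: @got";
```

Had the parent skipped that close(), the read from C<$reader> would block
forever: the parent's own open copy of C<$writer> keeps the pipe from ever
delivering EOF.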

=head2 Bidirectional Communication with Another Process

While this works reasonably well for unidirectional communication, what
about bidirectional communication? The most obvious approach doesn't work:

# THIS DOES NOT WORK!!
open(PROG_FOR_READING_AND_WRITING, "| some program |")

If you forget to C<use warnings>, you'll miss out entirely on the
helpful diagnostic message:

Can't do bidirectional pipe at -e line 1.

If you really want to, you can use the standard open2() from the
C<IPC::Open2> module to catch both ends. There's also an open3() in
C<IPC::Open3> for tridirectional I/O so you can also catch your child's
STDERR, but doing so would then require an awkward select() loop and
wouldn't allow you to use normal Perl input operations.

If you look at its source, you'll see that open2() uses low-level
primitives like the pipe() and exec() syscalls to create all the
connections. Although it might have been more efficient by using
socketpair(), this would have been even less portable than it already
is. The open2() and open3() functions are unlikely to work anywhere
except on a Unix system, or at least one purporting to be POSIX-compliant.

=for TODO
Hold on, is this even true? First it says that socketpair() is avoided
for portability, but then it says it probably won't work except on
Unixy systems anyway. Which one of those is true?

Here's an example of using open2():

use FileHandle;
use IPC::Open2;
$pid = open2(*Reader, *Writer, "cat -un");
print Writer "stuff\n";
$got = <Reader>;

The problem with this is that buffering is really going to ruin your
day. Even though your C<Writer> filehandle is auto-flushed so the process
on the other end gets your data in a timely manner, you can't usually do
anything to force that process to give its data to you in a similarly quick
fashion. In this special case, we could actually do so, because we gave
I<cat> a B<-u> flag to make it unbuffered. But very few commands are
designed to operate over pipes, so this seldom works unless you yourself
wrote the program on the other end of the double-ended pipe.

A solution to this is to use a library which uses pseudo-ttys to make your
program behave more reasonably. This way you don't have to have control
over the source code of the program you're using. The C<Expect> module
from CPAN also addresses this kind of thing. This module requires two
other modules from CPAN, C<IO::Pty> and C<IO::Stty>. It sets up a pseudo
terminal to interact with programs that insist on talking to the terminal
device driver. If your system is supported, this may be your best bet.

=head2 Bidirectional Communication with Yourself

If you want, you may make low-level pipe() and fork() syscalls to stitch
this together by hand. This example only talks to itself, but you could
reopen the appropriate handles to STDIN and STDOUT and call other processes.
(The following example lacks proper error checking.)

#!/usr/bin/perl -w
# pipe1 - bidirectional communication using two pipe pairs
# designed for the socketpair-challenged
use IO::Handle; # thousands of lines just for autoflush :-(
pipe(PARENT_RDR, CHILD_WTR); # XXX: check failure?
pipe(CHILD_RDR, PARENT_WTR); # XXX: check failure?
CHILD_WTR->autoflush(1);
PARENT_WTR->autoflush(1);

if ($pid = fork()) {
close PARENT_RDR;
close PARENT_WTR;
print CHILD_WTR "Parent Pid $$ is sending this\n";
chomp($line = <CHILD_RDR>);
print "Parent Pid $$ just read this: `$line'\n";
close CHILD_RDR; close CHILD_WTR;
waitpid($pid, 0);
} else {
die "cannot fork: $!" unless defined $pid;
close CHILD_RDR;
close CHILD_WTR;
chomp($line = <PARENT_RDR>);
print "Child Pid $$ just read this: `$line'\n";
print PARENT_WTR "Child Pid $$ is sending this\n";
close PARENT_RDR;
close PARENT_WTR;
exit(0);
}

But you don't actually have to make two pipe calls. If you
have the socketpair() system call, it will do this all for you.

#!/usr/bin/perl -w
# pipe2 - bidirectional communication using socketpair
# "the best ones always go both ways"

use Socket;
use IO::Handle; # thousands of lines just for autoflush :-(

# We say AF_UNIX because although *_LOCAL is the
# POSIX 1003.1g form of the constant, many machines
# still don't have it.
socketpair(CHILD, PARENT, AF_UNIX, SOCK_STREAM, PF_UNSPEC)
|| die "socketpair: $!";

CHILD->autoflush(1);
PARENT->autoflush(1);

if ($pid = fork()) {
close PARENT;
print CHILD "Parent Pid $$ is sending this\n";
chomp($line = <CHILD>);
print "Parent Pid $$ just read this: `$line'\n";
close CHILD;
waitpid($pid, 0);
} else {
die "cannot fork: $!" unless defined $pid;
close CHILD;
chomp($line = <PARENT>);
print "Child Pid $$ just read this: '$line'\n";
print PARENT "Child Pid $$ is sending this\n";
close PARENT;
exit(0);
}

=head1 Sockets: Client/Server Communication

While not entirely limited to Unix-derived operating systems (e.g., WinSock
on PCs provides socket support, as do some VMS libraries), you might not have
sockets on your system, in which case this section probably isn't going to
do you much good. With sockets, you can do both virtual circuits like TCP
streams and datagrams like UDP packets. You may be able to do even more
depending on your system.

The Perl functions for dealing with sockets have the same names as
the corresponding system calls in C, but their arguments tend to differ
for two reasons. First, Perl filehandles work differently than C file
descriptors. Second, Perl already knows the length of its strings, so you
don't need to pass that information.

One of the major problems with ancient, antemillennial socket code in Perl
was that it used hard-coded values for some of the constants, which
severely hurt portability. If you ever see code that does anything like
explicitly setting C<$AF_INET = 2>, you know you're in for big trouble.
An immeasurably superior approach is to use the C<Socket> module, which more
reliably grants access to the various constants and functions you'll need.
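
As a small illustration (not part of any server below), the C<Socket>
module supplies both the constants and the helpers that pack and unpack
socket addresses:

```perl
#!/usr/bin/perl -w
# use Socket's constants and helpers instead of hard-coded numbers
use strict;
use Socket;

my $packed_ip = inet_aton("127.0.0.1");        # 4 packed bytes, not a string
my $sockaddr  = sockaddr_in(2345, $packed_ip); # pack port + address together

# in list context, sockaddr_in() unpacks instead:
my ($port, $ip) = sockaddr_in($sockaddr);
print "port=$port host=", inet_ntoa($ip), "\n";
print "AF_INET happens to be ", AF_INET, " on this system\n";  # never hard-code it
```

The same pair of calls, C<sockaddr_in()> to pack and C<sockaddr_in()> in
list context to unpack, appears throughout the clients and servers below.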

If you're not writing a server/client for an existing protocol like
NNTP or SMTP, you should give some thought to how your server will
know when the client has finished talking, and vice-versa. Most
protocols are based on one-line messages and responses (so one party
knows the other has finished when a "\n" is received) or multi-line
messages and responses that end with a period on an empty line
("\n.\n" terminates a message/response).
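
To make the second convention concrete, here is a hypothetical reader for
dot-terminated messages; a Perl 5.8 in-memory filehandle stands in for the
socket:

```perl
#!/usr/bin/perl -w
# parse a multi-line response where "\n.\n" ends the message
use strict;

# an in-memory handle (open on a scalar ref, Perl 5.8+) plays the socket
my $wire = "line one\nline two\n.\nleftover for the next message\n";
open(my $sock, "<", \$wire) || die "can't open in-memory handle: $!";

my @message;
while (my $line = <$sock>) {
    last if $line eq ".\n";   # a period on a line by itself: end of message
    push @message, $line;
}
print "message was ", scalar(@message), " lines long\n";
```

A real protocol reader would also unescape any leading doubled periods,
but the framing loop itself is no more complicated than this.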

=head2 Internet Line Terminators

The Internet line terminator is "\015\012". Under ASCII variants of
Unix, that could usually be written as "\r\n", but under other systems,
"\r\n" might at times be "\015\015\012", "\012\012\015", or something
completely different. The standards specify writing "\015\012" to be
conformant (be strict in what you provide), but they also recommend
accepting a lone "\012" on input (be lenient in what you require).
We haven't always been very good about that in the code in this manpage,
but unless you're on a Mac from way back in its pre-Unix dark ages, you'll
probably be ok.
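
Rather than sprinkling "\015\012" literals everywhere, you can let the
C<Socket> module name the terminators for you; this small sketch assumes a
Socket recent enough to offer the C<:crlf> export tag:

```perl
#!/usr/bin/perl -w
# symbolic line terminators from the Socket module's :crlf export tag
use strict;
use Socket qw(:DEFAULT :crlf);   # imports CR, LF, CRLF and $CR, $LF, $CRLF

print "conformant terminator: ", unpack("H*", $CRLF), "\n";   # 0d0a

# be strict in what you provide...
my $request = "HEAD / HTTP/1.0" . $CRLF . $CRLF;

# ...and lenient in what you accept:
for my $reply ("200 OK\015\012", "200 OK\012") {
    (my $clean = $reply) =~ s/$CR?$LF\z//;   # strip CRLF or a lone LF
    print "cleaned to '$clean'\n";
}
```

Both replies above clean down to the same C<"200 OK">, which is exactly the
leniency the standards recommend.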

=head2 Internet TCP Clients and Servers

Use Internet-domain sockets when you want to do client-server
communication that might extend to machines outside of your own system.

Here's a sample TCP client using Internet-domain sockets:

#!/usr/bin/perl -w
use strict;
use Socket;
my ($remote, $port, $iaddr, $paddr, $proto, $line);

$remote = shift || "localhost";
$port = shift || 2345; # random port
if ($port =~ /\D/) { $port = getservbyname($port, "tcp") }
die "No port" unless $port;
$iaddr = inet_aton($remote) || die "no host: $remote";
$paddr = sockaddr_in($port, $iaddr);

$proto = getprotobyname("tcp");
socket(SOCK, PF_INET, SOCK_STREAM, $proto) || die "socket: $!";
connect(SOCK, $paddr) || die "connect: $!";
while ($line = <SOCK>) {
print $line;
}

close (SOCK) || die "close: $!";
exit(0);

And here's a corresponding server to go along with it. We'll
leave the address as C<INADDR_ANY> so that the kernel can choose
the appropriate interface on multihomed hosts. If you want to sit
on a particular interface (like the external side of a gateway
or firewall machine), fill this in with your real address instead.

#!/usr/bin/perl -Tw
use strict;
BEGIN { $ENV{PATH} = "/usr/bin:/bin" }
use Socket;
use Carp;
my $EOL = "\015\012";

sub logmsg { print "$0 $$: @_ at ", scalar localtime(), "\n" }

my $port = shift || 2345;
die "invalid port" unless $port =~ /^ \d+ $/x;

my $proto = getprotobyname("tcp");

socket(Server, PF_INET, SOCK_STREAM, $proto) || die "socket: $!";
setsockopt(Server, SOL_SOCKET, SO_REUSEADDR, pack("l", 1))
|| die "setsockopt: $!";
bind(Server, sockaddr_in($port, INADDR_ANY)) || die "bind: $!";
listen(Server, SOMAXCONN) || die "listen: $!";

logmsg "server started on port $port";

my $paddr;

$SIG{CHLD} = \&REAPER;

for ( ; $paddr = accept(Client, Server); close Client) {
my($port, $iaddr) = sockaddr_in($paddr);
my $name = gethostbyaddr($iaddr, AF_INET);

logmsg "connection from $name [",
inet_ntoa($iaddr),
"] at port $port";

print Client "Hello there, $name, it's now ",
scalar localtime(), $EOL;
}

And here's a multithreaded version. It's multithreaded in that
like most typical servers, it spawns (fork()s) a slave server to
handle the client request so that the master server can quickly
go back to service a new client.

#!/usr/bin/perl -Tw
use strict;
BEGIN { $ENV{PATH} = "/usr/bin:/bin" }
use Socket;
use Carp;
my $EOL = "\015\012";

sub spawn; # forward declaration
sub logmsg { print "$0 $$: @_ at ", scalar localtime(), "\n" }

my $port = shift || 2345;
die "invalid port" unless $port =~ /^ \d+ $/x;

my $proto = getprotobyname("tcp");

socket(Server, PF_INET, SOCK_STREAM, $proto) || die "socket: $!";
setsockopt(Server, SOL_SOCKET, SO_REUSEADDR, pack("l", 1))
|| die "setsockopt: $!";
bind(Server, sockaddr_in($port, INADDR_ANY)) || die "bind: $!";
listen(Server, SOMAXCONN) || die "listen: $!";

logmsg "server started on port $port";

my $waitedpid = 0;
my $paddr;

use POSIX ":sys_wait_h";
use Errno;

sub REAPER {
local $!; # don't let waitpid() overwrite current error
while ((my $pid = waitpid(-1, WNOHANG)) > 0 && WIFEXITED($?)) {
logmsg "reaped $pid" . ($? ? " with exit $?" : "");
}
$SIG{CHLD} = \&REAPER; # loathe SysV
}

$SIG{CHLD} = \&REAPER;

while (1) {
$paddr = accept(Client, Server) || do {
# try again if accept() returned because got a signal
next if $!{EINTR};
die "accept: $!";
};
my ($port, $iaddr) = sockaddr_in($paddr);
my $name = gethostbyaddr($iaddr, AF_INET);

logmsg "connection from $name [",
inet_ntoa($iaddr),
"] at port $port";

spawn sub {
$| = 1;
print "Hello there, $name, it's now ", scalar localtime(), $EOL;
exec "/usr/games/fortune" # XXX: "wrong" line terminators
or confess "can't exec fortune: $!";
};
close Client;
}

sub spawn {
my $coderef = shift;

unless (@_ == 0 && $coderef && ref($coderef) eq "CODE") {
confess "usage: spawn CODEREF";
}

my $pid;
unless (defined($pid = fork())) {
logmsg "cannot fork: $!";
return;
}
elsif ($pid) {
logmsg "begat $pid";
return; # I'm the parent
}

# else I'm the child -- go spawn

open(STDIN, "<&Client") || die "can't dup client to stdin";
open(STDOUT, ">&Client") || die "can't dup client to stdout";
## open(STDERR, ">&STDOUT") || die "can't dup stdout to stderr";
exit($coderef->());
}

This server takes the trouble to clone off a child version via fork()
for each incoming request. That way it can handle many requests at
once, which you might not always want. Even if you don't fork(), the
listen() will allow that many pending connections. Forking servers
have to be particularly careful about cleaning up their dead children
(called "zombies" in Unix parlance), because otherwise you'll quickly
fill up your process table. The REAPER subroutine is used here to
call waitpid() for any child processes that have finished, thereby
ensuring that they terminate cleanly and don't join the ranks of the
living dead.

Within the while loop we call accept() and check to see if it returns
a false value. This would normally indicate a system error that needs
to be reported. However, the introduction of safe signals (see
L</Deferred Signals (Safe Signals)> above) in Perl 5.7.3 means that
accept() might also be interrupted when the process receives a signal.
This typically happens when one of the forked subprocesses exits and
notifies the parent process with a CHLD signal.

If accept() is interrupted by a signal, $! will be set to EINTR.
If this happens, we can safely continue to the next iteration of
the loop and another call to accept(). It is important that your
signal handling code not modify the value of $!, or else this test
will likely fail. In the REAPER subroutine we create a local version
of $! before calling waitpid(). When waitpid() sets $! to ECHILD as
it inevitably does when it has no more children waiting, it
updates the local copy and leaves the original unchanged.
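
The C<local $!> dance is easy to demonstrate in isolation; this toy sketch
(hypothetical names, no real signals involved) shows the caller's errno
surviving a handler that behaves like REAPER:

```perl
#!/usr/bin/perl -w
# why REAPER says "local $!": keep waitpid() from clobbering accept()'s errno
use strict;
use Errno qw(EINTR ECHILD);

sub toy_reaper {
    local $!;       # errno is saved on entry and restored on exit
    $! = ECHILD;    # what waitpid() leaves behind with no children left
}

$! = EINTR;         # pretend accept() was just interrupted by SIGCHLD
toy_reaper();
my $still_eintr = $!{EINTR} ? 1 : 0;   # check before anything else runs
print "after the handler, errno is still EINTR: ",
      ($still_eintr ? "yes" : "no"), "\n";
```

Without the C<local $!> line, the EINTR test in the accept() loop would see
ECHILD instead and the server would die for no good reason.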

You should use the B<-T> flag to enable taint checking (see L<perlsec>)
even if you aren't running setuid or setgid. This is always a good idea
for servers or any program run on behalf of someone else (like CGI
scripts), because it lessens the chances that people from the outside will
be able to compromise your system.

Let's look at another TCP client. This one connects to the TCP "time"
service on a number of different machines and shows how far their clocks
differ from the system on which it's being run:

#!/usr/bin/perl -w
use strict;
use Socket;

my $SECS_OF_70_YEARS = 2208988800;
sub ctime { scalar localtime(shift() || time()) }

my $iaddr = gethostbyname("localhost");
my $proto = getprotobyname("tcp");
my $port = getservbyname("time", "tcp");
my $paddr = sockaddr_in(0, $iaddr);
my($host);

$| = 1;
printf "%-24s %8s %s\n", "localhost", 0, ctime();

foreach $host (@ARGV) {
printf "%-24s ", $host;
my $hisiaddr = inet_aton($host) || die "unknown host";
my $hispaddr = sockaddr_in($port, $hisiaddr);
socket(SOCKET, PF_INET, SOCK_STREAM, $proto)
|| die "socket: $!";
connect(SOCKET, $hispaddr) || die "connect: $!";
my $rtime = pack("C4", ());
read(SOCKET, $rtime, 4);
close(SOCKET);
my $histime = unpack("N", $rtime) - $SECS_OF_70_YEARS;
printf "%8d %s\n", $histime - time(), ctime($histime);
}

=head2 Unix-Domain TCP Clients and Servers

That's fine for Internet-domain clients and servers, but what about local
communications? While you can use the same setup, sometimes you don't
want to. Unix-domain sockets are local to the current host, and are often
used internally to implement pipes. Unlike Internet domain sockets, Unix
domain sockets can show up in the file system with an ls(1) listing.

% ls -l /dev/log
srw-rw-rw- 1 root 0 Oct 31 07:23 /dev/log

You can test for these with Perl's B<-S> file test:

unless (-S "/dev/log") {
die "something's wicked with the log system";
}

Here's a sample Unix-domain client:

#!/usr/bin/perl -w
use Socket;
use strict;
my ($rendezvous, $line);

$rendezvous = shift || "catsock";
socket(SOCK, PF_UNIX, SOCK_STREAM, 0) || die "socket: $!";
connect(SOCK, sockaddr_un($rendezvous)) || die "connect: $!";
while (defined($line = <SOCK>)) {
print $line;
}
exit(0);

And here's a corresponding server. You don't have to worry about silly
network terminators here because Unix domain sockets are guaranteed
to be on the localhost, and thus everything works right.

#!/usr/bin/perl -Tw
use strict;
use Socket;
use Carp;

BEGIN { $ENV{PATH} = "/usr/bin:/bin" }
sub spawn; # forward declaration
sub logmsg { print "$0 $$: @_ at ", scalar localtime(), "\n" }

my $NAME = "catsock";
my $uaddr = sockaddr_un($NAME);
my $proto = getprotobyname("tcp");

socket(Server, PF_UNIX, SOCK_STREAM, 0) || die "socket: $!";
unlink($NAME);
bind (Server, $uaddr) || die "bind: $!";
listen(Server, SOMAXCONN) || die "listen: $!";

logmsg "server started on $NAME";

my $waitedpid;

use POSIX ":sys_wait_h";
sub REAPER {
my $child;
while (($waitedpid = waitpid(-1, WNOHANG)) > 0) {
logmsg "reaped $waitedpid" . ($? ? " with exit $?" : "");
}
$SIG{CHLD} = \&REAPER; # loathe SysV
}

$SIG{CHLD} = \&REAPER;

for ( $waitedpid = 0;
accept(Client, Server) || $waitedpid;
$waitedpid = 0, close Client)
{
next if $waitedpid;
logmsg "connection on $NAME";
spawn sub {
print "Hello there, it's now ", scalar localtime(), "\n";
exec("/usr/games/fortune") || die "can't exec fortune: $!";
};
}

sub spawn {
my $coderef = shift();

unless (@_ == 0 && $coderef && ref($coderef) eq "CODE") {
confess "usage: spawn CODEREF";
}

my $pid;
unless (defined($pid = fork())) {
logmsg "cannot fork: $!";
return;
}
elsif ($pid) {
logmsg "begat $pid";
return; # I'm the parent
}

else {
# I'm the child -- go spawn
}

open(STDIN, "<&Client") || die "can't dup client to stdin";
open(STDOUT, ">&Client") || die "can't dup client to stdout";
## open(STDERR, ">&STDOUT") || die "can't dup stdout to stderr";
exit($coderef->());
}

As you see, it's remarkably similar to the Internet domain TCP server, so
much so, in fact, that we've omitted several duplicate functions--spawn(),
logmsg(), ctime(), and REAPER()--which are the same as in the other server.

So why would you ever want to use a Unix domain socket instead of a
simpler named pipe? Because a named pipe doesn't give you sessions. You
can't tell one process's data from another's. With socket programming,
you get a separate session for each client; that's why accept() takes two
arguments.

For example, let's say that you have a long-running database server daemon
that you want folks to be able to access from the Web, but only
if they go through a CGI interface. You'd have a small, simple CGI
program that does whatever checks and logging you feel like, and then acts
as a Unix-domain client and connects to your private server.

=head1 TCP Clients with IO::Socket

For those preferring a higher-level interface to socket programming, the
IO::Socket module provides an object-oriented approach. IO::Socket has
been included in the standard Perl distribution ever since Perl 5.004. If
you're running an earlier version of Perl (in which case, how are you
reading this manpage?), just fetch IO::Socket from CPAN, where you'll also
find modules providing easy interfaces to the following systems: DNS, FTP,
Ident (RFC 931), NIS and NISPlus, NNTP, Ping, POP3, SMTP, SNMP, SSLeay,
Telnet, and Time--to name just a few.

=head2 A Simple Client

Here's a client that creates a TCP connection to the "daytime"
service at port 13 of the host name "localhost" and prints out everything
that the server there cares to provide.

#!/usr/bin/perl -w
use IO::Socket;
$remote = IO::Socket::INET->new(
Proto => "tcp",
PeerAddr => "localhost",
PeerPort => "daytime(13)",
)
|| die "can't connect to daytime service on localhost";
while (<$remote>) { print }

When you run this program, you should get something back that
looks like this:

Wed May 14 08:40:46 MDT 1997

Here are what those parameters to the new() constructor mean:

=over 4

=item C<Proto>

This is which protocol to use. In this case, the socket handle returned
will be connected to a TCP socket, because we want a stream-oriented
connection, that is, one that acts pretty much like a plain old file.
Not all sockets are of this type. For example, the UDP protocol
can be used to make a datagram socket, used for message-passing.

=item C<PeerAddr>

This is the name or Internet address of the remote host the server is
running on. We could have specified a longer name like C<"www.perl.com">,
or an address like C<"207.171.7.72">. For demonstration purposes, we've
used the special hostname C<"localhost">, which should always mean the
current machine you're running on. The corresponding Internet address
for localhost is C<"127.0.0.1">, if you'd rather use that.

=item C<PeerPort>

This is the service name or port number we'd like to connect to.
We could have gotten away with using just C<"daytime"> on systems with a
well-configured system services file,[FOOTNOTE: The system services file
is found in I</etc/services> under Unixy systems.] but here we've specified the
port number (13) in parentheses. Using just the number would have also
worked, but numeric literals make careful programmers nervous.

=back

Notice how the return value from the C<new> constructor is used as
a filehandle in the C<while> loop? That's what's called an I<indirect
filehandle>, a scalar variable containing a filehandle. You can use
it the same way you would a normal filehandle. For example, you
can read one line from it this way:

$line = <$handle>;

read all remaining lines from it this way:

@lines = <$handle>;

and send a line of data to it this way:

print $handle "some data\n";

=head2 A Webget Client

Here's a simple client that takes a remote host to fetch a document
from, and then a list of files to get from that host. This is a
more interesting client than the previous one because it first sends
something to the server before fetching the server's response.

#!/usr/bin/perl -w
use IO::Socket;
unless (@ARGV > 1) { die "usage: $0 host url ..." }
$host = shift(@ARGV);
$EOL = "\015\012";
$BLANK = $EOL x 2;
for my $document (@ARGV) {
$remote = IO::Socket::INET->new( Proto => "tcp",
PeerAddr => $host,
PeerPort => "http(80)",
) || die "cannot connect to httpd on $host";
$remote->autoflush(1);
print $remote "GET $document HTTP/1.0" . $BLANK;
while ( <$remote> ) { print }
close $remote;
}

The web server handling the HTTP service is assumed to be at
its standard port, number 80. If the server you're trying to
connect to is at a different port, like 1080 or 8080, specify it
with the named-parameter pair, C<< PeerPort => 8080 >>. The C<autoflush>
method is used on the socket because otherwise the system would buffer
up the output we sent it. (If you're on a prehistoric Mac, you'll also
need to change every C<"\n"> in your code that sends data over the network
to be a C<"\015\012"> instead.)

Connecting to the server is only the first part of the process: once you
have the connection, you have to use the server's language. Each server
on the network has its own little command language that it expects as
input. The string that we send to the server starting with "GET" is in
HTTP syntax. In this case, we simply request each specified document.
Yes, we really are making a new connection for each document, even though
it's the same host. That's the way you always used to have to speak HTTP.
Recent versions of web browsers may request that the remote server leave
the connection open a little while, but the server doesn't have to honor
such a request.

Here's an example of running that program, which we'll call I<webget>:

% webget www.perl.com /guanaco.html
HTTP/1.1 404 File Not Found
Date: Thu, 08 May 1997 18:02:32 GMT
Server: Apache/1.2b6
Connection: close
Content-type: text/html

<HEAD><TITLE>404 File Not Found</TITLE></HEAD>
<BODY><H1>File Not Found</H1>
The requested URL /guanaco.html was not found on this server.<P>
</BODY>

Ok, so that's not very interesting, because it didn't find that
particular document. But a long response wouldn't have fit on this page.

For a more featurful version of this program, you should look to
the I<lwp-request> program included with the LWP modules from CPAN.

=head2 Interactive Client with IO::Socket

Well, that's all fine if you want to send one command and get one answer,
but what about setting up something fully interactive, somewhat like
the way I<telnet> works? That way you can type a line, get the answer,
type a line, get the answer, etc.

This client is more complicated than the two we've done so far, but if
you're on a system that supports the powerful C<fork> call, the solution
isn't that rough. Once you've made the connection to whatever service
you'd like to chat with, call C<fork> to clone your process. Each of
these two identical processes has a very simple job to do: the parent
copies everything from the socket to standard output, while the child
simultaneously copies everything from standard input to the socket.
To accomplish the same thing using just one process would be I<much>
harder, because it's easier to code two processes to do one thing than it
is to code one process to do two things. (This keep-it-simple principle
is one of the cornerstones of the Unix philosophy, and of good software
engineering as well, which is probably why it's spread to other systems.)

Here's the code:

#!/usr/bin/perl -w
use strict;
use IO::Socket;
my ($host, $port, $kidpid, $handle, $line);

unless (@ARGV == 2) { die "usage: $0 host port" }
($host, $port) = @ARGV;

# create a tcp connection to the specified host and port
$handle = IO::Socket::INET->new(Proto => "tcp",
PeerAddr => $host,
PeerPort => $port)
|| die "can't connect to port $port on $host: $!";

$handle->autoflush(1); # so output gets there right away
print STDERR "[Connected to $host:$port]\n";

# split the program into two processes, identical twins
die "can't fork: $!" unless defined($kidpid = fork());

# the if{} block runs only in the parent process
if ($kidpid) {
# copy the socket to standard output
while (defined ($line = <$handle>)) {
print STDOUT $line;
}
kill("TERM", $kidpid); # send SIGTERM to child
}
# the else{} block runs only in the child process
else {
# copy standard input to the socket
while (defined ($line = <STDIN>)) {
print $handle $line;
}
exit(0); # just in case
}

The C<kill> function in the parent's C<if> block is there to send a
signal to our child process, currently running in the C<else> block,
as soon as the remote server has closed its end of the connection.

If the remote server sends data a byte at time, and you need that
data immediately without waiting for a newline (which might not happen),
you may wish to replace the C<while> loop in the parent with the
following:

my $byte;
while (sysread($handle, $byte, 1) == 1) {
print STDOUT $byte;
}

Making a system call for each byte you want to read is not very efficient
(to put it mildly) but is the simplest to explain and works reasonably
well.

=head1 TCP Servers with IO::Socket

As always, setting up a server is a little more involved than running a client.
The model is that the server creates a special kind of socket that
does nothing but listen on a particular port for incoming connections.
It does this by calling the C<< IO::Socket::INET->new() >> method with
slightly different arguments than the client did.

=over 4

=item Proto

This is which protocol to use. Like our clients, we'll
still specify C<"tcp"> here.

=item LocalPort

We specify a local
port in the C<LocalPort> argument, which we didn't do for the client.
This is the service name or port number for which you want to be the
server. (Under Unix, ports under 1024 are restricted to the
superuser.) In our sample, we'll use port 9000, but you can use
any port that's not currently in use on your system. If you try
to use one already in use, you'll get an "Address already in use"
message. Under Unix, the C<netstat -a> command will show
which services currently have servers.

=item Listen

The C<Listen> parameter is set to the maximum number of
pending connections we can accept until we turn away incoming clients.
Think of it as a call-waiting queue for your telephone.
The low-level Socket module has a special symbol for the system maximum, which
is SOMAXCONN.

=item Reuse

The C<Reuse> parameter is needed so that we can restart our server
manually without waiting a few minutes to allow system buffers to
clear out.

=back

Once the generic server socket has been created using the parameters
listed above, the server then waits for a new client to connect
to it. The server blocks in the C<accept> method, which eventually accepts a
bidirectional connection from the remote client. (Make sure to autoflush
this handle to circumvent buffering.)

To add to user-friendliness, our server prompts the user for commands.
Most servers don't do this. Because of the prompt without a newline,
you'll have to use the C<sysread> variant of the interactive client above.

This server accepts one of five different commands, sending output back to
the client. Unlike most network servers, this one handles only one
incoming client at a time. Multithreaded servers are covered in
Chapter 6 of the Camel.

Here's the code.

#!/usr/bin/perl -w
use IO::Socket;
use Net::hostent; # for OOish version of gethostbyaddr

$PORT = 9000; # pick something not in use

$server = IO::Socket::INET->new( Proto => "tcp",
LocalPort => $PORT,
Listen => SOMAXCONN,
Reuse => 1);

die "can't setup server" unless $server;
print "[Server $0 accepting clients]\n";

while ($client = $server->accept()) {
$client->autoflush(1);
print $client "Welcome to $0; type help for command list.\n";
$hostinfo = gethostbyaddr($client->peeraddr);
printf "[Connect from %s]\n", $hostinfo ? $hostinfo->name : $client->peerhost;
print $client "Command? ";
while ( <$client>) {
next unless /\S/; # blank line
if (/quit|exit/i) { last }
elsif (/date|time/i) { printf $client "%s\n", scalar localtime() }
elsif (/who/i ) { print $client `who 2>&1` }
elsif (/cookie/i ) { print $client `/usr/games/fortune 2>&1` }
elsif (/motd/i ) { print $client `cat /etc/motd 2>&1` }
else {
print $client "Commands: quit date who cookie motd\n";
}
} continue {
print $client "Command? ";
}
close $client;
}

=head1 UDP: Message Passing

Another kind of client-server setup is one that uses not connections, but
messages. UDP communications involve much lower overhead but also provide
less reliability, as there are no promises that messages will arrive at
all, let alone in order and unmangled. Still, UDP offers some advantages
over TCP, including being able to "broadcast" or "multicast" to a whole
bunch of destination hosts at once (usually on your local subnet). If you
find yourself overly concerned about reliability and start building checks
into your message system, then you probably should use just TCP to start
with.

UDP datagrams are I<not> a bytestream and should not be treated as such.
This makes using I/O mechanisms with internal buffering like stdio (i.e.
print() and friends) especially cumbersome. Use syswrite(), or better
send(), as in the example below.
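
As a self-contained sketch of that point (hypothetical, loopback-only, and
assuming your system can resolve the "udp" protocol name), here is send()
and recv() treating one datagram as one whole message:

```perl
#!/usr/bin/perl -w
# datagrams are discrete messages: send()/recv() over the loopback
use strict;
use Socket;

my $udp = getprotobyname("udp") || die "no udp protocol entry";

socket(my $rx, PF_INET, SOCK_DGRAM, $udp) || die "socket: $!";
bind($rx, sockaddr_in(0, INADDR_LOOPBACK)) || die "bind: $!";  # port 0: kernel picks
my ($port) = sockaddr_in(getsockname($rx));   # recover the chosen port

socket(my $tx, PF_INET, SOCK_DGRAM, $udp) || die "socket: $!";
defined(send($tx, "ping", 0, sockaddr_in($port, INADDR_LOOPBACK)))
    || die "send: $!";

my $msg = "";
defined(recv($rx, $msg, 64, 0)) || die "recv: $!";
print "one whole datagram: '$msg'\n";
```

Note that no connect(), listen(), or accept() appears anywhere: each send()
is addressed individually, which is what makes the multicast-style fan-out
in the program below possible.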

Here's a UDP program similar to the sample Internet TCP client given
earlier. However, instead of checking one host at a time, the UDP version
will check many of them asynchronously by simulating a multicast and then
using select() to do a timed-out wait for I/O. To do something similar
with TCP, you'd have to use a different socket handle for each host.

    #!/usr/bin/perl -w
    use strict;
    use Socket;
    use Sys::Hostname;

    my ( $count, $hisiaddr, $hispaddr, $histime,
         $host, $iaddr, $paddr, $port, $proto,
         $rin, $rout, $rtime, $SECS_OF_70_YEARS );

    $SECS_OF_70_YEARS = 2_208_988_800;

    $iaddr = gethostbyname(hostname());
    $proto = getprotobyname("udp");
    $port  = getservbyname("time", "udp");
    $paddr = sockaddr_in(0, $iaddr); # 0 means let kernel pick

    socket(SOCKET, PF_INET, SOCK_DGRAM, $proto) || die "socket: $!";
    bind(SOCKET, $paddr)                        || die "bind: $!";

    $| = 1;
    printf "%-12s %8s %s\n", "localhost", 0, scalar localtime();
    $count = 0;
    for $host (@ARGV) {
        $count++;
        $hisiaddr = inet_aton($host)    || die "unknown host";
        $hispaddr = sockaddr_in($port, $hisiaddr);
        defined(send(SOCKET, 0, 0, $hispaddr)) || die "send $host: $!";
    }

    $rin = "";
    vec($rin, fileno(SOCKET), 1) = 1;

    # timeout after 10.0 seconds
    while ($count && select($rout = $rin, undef, undef, 10.0)) {
        $rtime = "";
        $hispaddr = recv(SOCKET, $rtime, 4, 0) || die "recv: $!";
        ($port, $hisiaddr) = sockaddr_in($hispaddr);
        $host = gethostbyaddr($hisiaddr, AF_INET);
        $histime = unpack("N", $rtime) - $SECS_OF_70_YEARS;
        printf "%-12s ", $host;
        printf "%8d %s\n", $histime - time(), scalar localtime($histime);
        $count--;
    }

This example does not include any retries and may consequently fail to
contact a reachable host. The most prominent reason for this is congestion
of the queues on the sending host if the number of hosts to contact is
sufficiently large.

=head1 SysV IPC

While System V IPC isn't so widely used as sockets, it still has some
interesting uses. However, you cannot use SysV IPC or Berkeley mmap() to
have a variable shared amongst several processes. That's because Perl
would reallocate your string when you weren't wanting it to. You might
look into the C<IPC::Shareable> or C<threads::shared> modules for that.
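
For instance, here is a minimal sketch using C<threads::shared>; it
assumes a perl built with ithreads support, and C<IPC::Shareable>
offers a similar facility between real processes:

```perl
use strict;
use warnings;
use threads;
use threads::shared;

my $counter :shared = 0;    # one scalar visible to every thread

my @workers = map {
    threads->create(sub {
        for (1 .. 1000) {
            lock($counter);     # serialize access to the shared scalar
            $counter++;
        }
    });
} 1 .. 4;

$_->join() for @workers;
print "counter = $counter\n";   # 4000: no updates lost
```

Without the lock() the increments would race and the final count would
usually come up short.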

Here's a small example showing shared memory usage.

    use IPC::SysV qw(IPC_PRIVATE IPC_RMID S_IRUSR S_IWUSR);

    $size = 2000;
    $id = shmget(IPC_PRIVATE, $size, S_IRUSR | S_IWUSR);
    defined($id)                    || die "shmget: $!";
    print "shm key $id\n";

    $message = "Message #1";
    shmwrite($id, $message, 0, 60)  || die "shmwrite: $!";
    print "wrote: '$message'\n";
    shmread($id, $buff, 0, 60)      || die "shmread: $!";
    print "read : '$buff'\n";

    # the buffer of shmread is zero-character end-padded.
    substr($buff, index($buff, "\0")) = "";
    print "un" unless $buff eq $message;
    print "swell\n";

    print "deleting shm $id\n";
    shmctl($id, IPC_RMID, 0)        || die "shmctl: $!";

Here's an example of a semaphore:

    use IPC::SysV qw(IPC_CREAT);

    $IPC_KEY = 1234;
    $id = semget($IPC_KEY, 10, 0666 | IPC_CREAT);
    defined($id)                    || die "semget: $!";
    print "sem id $id\n";

Put this code in a separate file to be run in more than one process.
Call the file F<take>:

    # create a semaphore

    $IPC_KEY = 1234;
    $id = semget($IPC_KEY, 0, 0);
    defined($id)                    || die "semget: $!";

    $semnum  = 0;
    $semflag = 0;

    # "take" semaphore
    # wait for semaphore to be zero
    $semop = 0;
    $opstring1 = pack("s!s!s!", $semnum, $semop, $semflag);

    # Increment the semaphore count
    $semop = 1;
    $opstring2 = pack("s!s!s!", $semnum, $semop, $semflag);
    $opstring  = $opstring1 . $opstring2;

    semop($id, $opstring)           || die "semop: $!";

Put this code in a separate file to be run in more than one process.
Call this file F<give>:

# "give" the semaphore
# run this in the original process and you will see
# that the second process continues

$IPC_KEY = 1234;
$id = semget($IPC_KEY, 0, 0);
die unless defined($id);

$semnum = 0;
$semflag = 0;

# Decrement the semaphore count
$semop = -1;
$opstring = pack("s!s!s!", $semnum, $semop, $semflag);

semop($id, $opstring) || die "semop: $!";

The SysV IPC code above was written long ago, and it's definitely
clunky looking. For a more modern look, see the IPC::SysV module
which is included with Perl starting from Perl 5.005.
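
For comparison, the take/give pair above might be written with the
C<IPC::Semaphore> class that ships in the IPC::SysV distribution; this
is an illustrative sketch only, and the details vary by platform:

```perl
use strict;
use warnings;
use IPC::SysV qw(IPC_PRIVATE IPC_CREAT S_IRUSR S_IWUSR);
use IPC::Semaphore;

# one semaphore in the set, created fresh rather than by well-known key
my $sem = IPC::Semaphore->new(IPC_PRIVATE, 1,
                              S_IRUSR | S_IWUSR | IPC_CREAT)
    || die "semaphore create: $!";

$sem->setval(0, 1);      # mark it "available"

$sem->op(0, -1, 0);      # take: blocks until it can decrement
# ... critical section ...
$sem->op(0,  1, 0);      # give: increment it back

$sem->remove();          # delete the set when done
```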

A small example demonstrating SysV message queues:

    use IPC::SysV qw(IPC_PRIVATE IPC_RMID IPC_CREAT S_IRUSR S_IWUSR);

    my $id = msgget(IPC_PRIVATE, IPC_CREAT | S_IRUSR | S_IWUSR);
    defined($id)                || die "msgget failed: $!";

    my $sent      = "message";
    my $type_sent = 1234;

    msgsnd($id, pack("l! a*", $type_sent, $sent), 0)
                                || die "msgsnd failed: $!";

    msgrcv($id, my $rcvd_buf, 60, 0, 0)
                                || die "msgrcv failed: $!";

    my($type_rcvd, $rcvd) = unpack("l! a*", $rcvd_buf);

    if ($rcvd eq $sent) {
        print "okay\n";
    } else {
        print "not okay\n";
    }

    msgctl($id, IPC_RMID, 0)    || die "msgctl failed: $!\n";

=head1 NOTES

Most of these routines quietly but politely return C<undef> when they
fail instead of causing your program to die right then and there due to
an uncaught exception. (Actually, some of the new I<Socket> conversion
functions do croak() on bad arguments.) It is therefore essential to
check return values from these functions. Always begin your socket
programs this way for optimal success, and don't forget to add the B<-T>
taint-checking flag to the C<#!> line for servers:

    #!/usr/bin/perl -Tw
    use strict;
    use sigtrap;
    use Socket;

=head1 BUGS

These routines all create system-specific portability problems. As noted
elsewhere, Perl is at the mercy of your C libraries for much of its system
behavior. It's probably safest to assume broken SysV semantics for
signals and to stick with simple TCP and UDP socket operations; e.g., don't
try to pass open file descriptors over a local UDP datagram socket if you
want your code to stand a chance of being portable.

=head1 AUTHOR

Tom Christiansen, with occasional vestiges of Larry Wall's original
version and suggestions from the Perl Porters.

=head1 SEE ALSO

There's a lot more to networking than this, but this should get you
started.

For intrepid programmers, the indispensable textbook is I<Unix Network
Programming, 2nd Edition, Volume 1> by W. Richard Stevens (published by
Prentice-Hall). Most books on networking address the subject from the
perspective of a C programmer; translation to Perl is left as an exercise
for the reader.

The IO::Socket(3) manpage describes the object library, and the Socket(3)
manpage describes the low-level interface to sockets. Besides the obvious
functions in L<perlfunc>, you should also check out the F<modules> file at
your nearest CPAN site, especially
L<http://www.cpan.org/modules/00modlist.long.html#ID5_Networking_>.
See L<perlmodlib> or best yet, the F<Perl FAQ> for a description
of what CPAN is and where to get it if the previous link doesn't work
for you.

Section 5 of CPAN's F<modules> file is devoted to "Networking, Device
Control (modems), and Interprocess Communication", and contains numerous
unbundled modules: networking modules, Chat and Expect operations,
CGI programming, DCE, FTP, IPC, NNTP, Proxy, Ptty, RPC, SNMP, SMTP, Telnet,
Threads, and ToolTalk--to name just a few.

Tom Christiansen

Nov 9, 2010, 9:44:41 AM11/9/10
to Father Chrysostomos, perl5-...@perl.org, perl-docu...@perl.org
On 09 Nov 2010 06:02:55 GMT you wrote:

>> For now, I'm sending the complete revision in toto.

> Applied as cf21866, with some tpyo corretcions.

Thanks very muhc. I think it's a better document now,
even if only marginally.

=======
TYPOS
=======

Even though my last step was to spell-check it, a few typos still
made it through. Here are four that I'm aware of:

1. "Be careful qx(), system(), and some modules for calling external commands"
   -- insert ":" right after "careful"

2. "exit if $pid; # non-zero now mean I am the paren"
   -- change "mean" to "means"

3. "resort to the harder-to-readm, multi-argument form of pipe open(). However,"
   -- delete that "m"

4. "Chapter 6 of the Camel."
   -- That should now be "16", not "6".

========
POLICY
========

That last one snuck past me because I always quickly avert my eyes from
such self-mentions. I'm *terribly* queasy about the least hint of what
could be perceived as pecunious self-promotion. I justify it in this
case solely because it might also help out Larry, who otherwise receives
so very little for his lifetime of work on Perl; it's the least we can do.

Yet I'm bothered because it seems somewhat unfair to readers to point them
only at Camel:16 when Cookbook:14-18 covers that material in much richer
detail. But I cannot do that, given that I can no longer justify it in the
same way: Larry has only an intro in PCB, which had its notional origin in
material I split off from v1 Camel when making v2. Even perlipc itself I
wrote as dry-run for Camel v3, just like many other standard pods.

Not providing better references seems unfair to other authors (not me)
whose own works might quite reasonably be referenced here and elsewhere
in the standard Perl documentation, but that is a huge can of worms that
we long ago decided not to open, as evinced by perlbook(1).

So I feel it must be left as is--or rather, as amended to "16"--without
the Cookbook reference, even though this really is too bad, especially
when the full source code is downloadable for free.

The current policy should stand. I don't want to open that up.
It's just not worth the bother, or the risk. Oh well.

============
CODE FIXES
============

Beyond unifying the code back into my own style where it had become
internally inconsistent and generally tightening it up in many
places, I also improved the code's error checking, naming of
identifiers, and comments.

In places I reduced code complexity by factoring out some of the deeply
nested indentation. I provided an additional multi-arg pipe example to
better explain the problem of shell metachars, the guts of which is:

    @grep_args = ("egrep", "-i", $some_pattern, @many_files);

===============
ENGLISH FIXES
===============

Besides trivial changes like normalizing spelling and to a lesser extent
fonting, I smoothed out the phrasing in quite a few places, including
inlining some of the many parenthetical statements (which can be pretty
distracting otherwise (don't you think?)).

I reduced the number of explicit mentions of Unix where this made sense to
do. I amended statements about the Mac so that it was clear that these
applied only to pre-Darwin releases.

I mentioned IPC::Shareable and threads::shared where appropriate.
I excised mention of Comm.pl, as it is likely older than many of
the readers, and far less healthy.

One of my TODOs I left intact:

=for TODO
Hold on, is this even true? First it says that socketpair() is avoided
for portability, but then it says it probably won't work except on
Unixy systems anyway. Which one of those is true?

That follows these two sentences:

Although it might have been more efficient by using socketpair(), this
would have been even less portable than it already is. The open2() and
open3() functions are unlikely to work anywhere except on a Unix
system, or at least one purporting POSIX compliance.

Are *both* those two statements really still needed?

I've been led to believe that open2/3 do work on Microsoft systems,
but that they do so there for reasons other than MS's not-useful
letter-of-the-law POSIX compliance.

Do they work on VMS? What non-Unix systems apart from Microsoft and
VMS do people still use? What about iOS?

thanks again,

--tom

Tom Christiansen

Nov 9, 2010, 10:28:02 AM11/9/10
to demerphq, Father Chrysostomos, perl5-...@perl.org, perl-docu...@perl.org
> Personally I really wish you had kept the changes to use lexically
> scoped filehandles.

That's fine. I'm certainly not *against* lexically scoped indirect
filehandles--except for all the extra syllababbles of English it takes
to *mention* them:(--provided it doesn't complicate things or introduce
bugs, both of which at one point occurred.

I didn't introduce them because I was attempting a minimal edit. You
will note that there are still examples there that don't do error
checking, either, nor which are necessarily use strict compliant. If
you clutter up the logic too much, it weakens the point. Look at
perlfunc, for example. Those are not use strict compliant either, and it
would not be good to make them so.

I nearly always use lexical handles in non-trivial programs, although in
truth I seldom do so in trivial ones. By trivial, I mean those that don't
even have subroutines, or very few. Occasionally I use a global handle
because I open it in one function and use it in another. These aren't big
programs, though, nor split up into modules. I figure for those, it's no
worse using a direct handle than it is using a direct subroutine call. In
large programs with more structure and thus indirection, I do use indirect
handles and indirect subroutine calls (read: methods). In short ones,
I often do not.

There is, however, one issue that lexical filehandles seem especially prone
to. People claim it is a feature than they get automatically closed for
you. I am not entirely certain I agree. In particular, I don't approve of
error checking being omitted when they get implicitly closed due to scoping
or whatnot. That causes errors to be lost, which means the program is
buggy. But the same thing happens with global filehandles, too, including
pre-defined ones like STDOUT. How many people trouble to write this:

    END {
        close(STDOUT) || die "can't close STDOUT: $!";
    }

You really should, you know. But next to no one does.

I did add six more explicit close() calls, and a lot more error-checking
for the rest of them, too, which had often neglected it. I feel this is
far more important than arguing about mere lexical-vs-global, since merely
changing a global to a lexical does absolutely nothing to address the
underlying correctness bug.
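
Concretely, the kind of check I mean looks like this (a throwaway
sketch against a temp file, not code from the patch itself):

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

my ($out, $file) = tempfile(UNLINK => 1);
print {$out} "some data\n"     or die "can't write $file: $!";
close($out)                    or die "can't close $file on write: $!";

# switching the global to a lexical changes nothing here: the close
# still needs checking, or a deferred write error is silently lost
open(my $in, "<", $file)       or die "can't open $file: $!";
my @lines = <$in>;
close($in)                     or die "can't close $file on read: $!";
print "read ", scalar(@lines), " line(s)\n";
```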

Wouldn't you agree?

--tom

Tom Christiansen

Nov 9, 2010, 11:39:30 AM11/9/10
to Abigail, demerphq, Father Chrysostomos, perl5-...@perl.org, perl-docu...@perl.org
Abigail wrote:

> Before 5.6, lexical scoped file handles were possible, but everyone
> used globs, and the world didn't stop turning. Collision was so
> infrequent, no one wanted to type the few extra keystrokes.

Plus the ALL_CAPPED names really stood out, further decreasing
the chance of collision in programs with just one author. It
just never happened. 3rd-party modules are a different story,
and we were always rather more careful there.

> It's only when autovivifying handles came available everyone
> jumped on the "lexical filehandle" bandwagon.

Indeed.

> It's not worth having a style fight over for.

It's also quite a big job with negligible return value.

I tried to count up uses of lexical vs global handles in the pods.
I didn't consider those that were too hard to easily grep, like
print, printf, stat, and such. Here are the numbers:

                   Capped  $dollared  Ratio

      pods 5.13.3     399        111  3.6 to 1
      cookbook v2     359        215  1.7 to 1
      camel v3+       891        163  5.5 to 1

Not all those that are dollared are lexicals, and not all those that are
capped are not localized. But as a rough measure, it works well enough.

The v3 Camel didn't have autovivving filehandles, as Abigail
points out, which accounts for its somewhat larger ratio.

I figure just leave them be, as they really aren't harming anything,
and would be quite a pain to change: mere mindless busywork. I can
think of many, *much* better uses of that time and effort.

--tom

% tcgrep '^\s[^#]*\b(open(dir)?|eof|binmode|getc|get(peername|sock(name|opt))|listen|write|fcntl|ioctl|flock|fileno|close(dir)?|read(line|dir)?|rewinddir|say|seek(dir)?|send|setsockopt|recv|sys(open|read|write|seek)|pipe|connect|accept|socket(pair)?|shutdown|tell(dir)?)\b\s*([({]\s*)?\$' *.pod | wc -l

% tcgrep '^\s[^#]*\b(open(dir)?|eof|binmode|getc|get(peername|sock(name|opt))|listen|write|fcntl|ioctl|flock|fileno|close(dir)?|read(line|dir)?|rewinddir|say|seek(dir)?|send|setsockopt|recv|sys(open|read|write|seek)|pipe|connect|accept|socket(pair)?|shutdown|tell(dir)?)\b\s*([({]\s*)?\p{Lu}' *.pod | wc -l

Tom Christiansen

Nov 9, 2010, 1:20:51 PM11/9/10
to brian d foy, perl5-...@perl.org, perl-docu...@perl.org
> Ultimately it's your call, but I think the Cookbook references should
> be in there.

Thanks, Brian.

I still feel that I must recuse myself from making that call.

--tom

brian d foy

Nov 9, 2010, 1:15:08 PM11/9/10
to perl5-...@perl.org, perl-docu...@perl.org
In article <21672.1289313881@chthon>, Tom Christiansen
<tch...@perl.com> wrote:

> That last one snuck past me because I always quickly avert my eyes from
> such self-mentions. I'm *terribly* queasy about the least hint of what
> could be perceived as pecunious self-promotion.

If you weren't prolific, it would be a much easier situation. However,
you have your row to hoe by not only writing a lot of the core docs,
but most of the best selling books. I'd hate to bow to the
anti-capitalists just because they want to pretend that humans don't
need money to live, especially since any real money you might get
wouldn't bring you close to minimum wages for all the free writing
you've done.

Not everyone will be happy, but the Perl books are just another way of
us providing good information about Perl. The money might have been big
15 years ago, but I don't think any reasonable person expects that now.
We've created Perl information for a variety of sources and channels so
it's available in many forms to suit more tastes and preferences.

> Yet I'm bothered because it seems somewhat unfair to readers to point them
> only at Camel:16 when Cookbook:14-18 covers that material in much richer
> detail.

Ultimately it's your call, but I think the Cookbook references should
be in there. You only need a sentence or two. My own policy is to point
people to external references as long as the documentation isn't
deferring to them. If the books enhance rather than replace what's in
core, I consider that to be fair. We shouldn't liberally pepper the
docs with every external reference that might apply, but in this case,
it's topical and appropriate.

I think it's particularly *unfair* to ignore any and all commercial
interests in the docs, in some misguided effort to lead people to form
a fake world where they don't see what is in common practice among
actual developers. For instance, we don't ignore ActiveState or
IndigoPerl, Windows, Mac OS X, many, many payware editors, the list of
Perl books in perlfaq2 (and now perlbook), Github, and many other
things. Ignoring these ignores a lot of the world of Perl.

I'd hate to lose out on providing valuable information to many people
because a few have a particular political bent that would lead them to
believe you were only in it for the money.

brian d foy

Nov 9, 2010, 4:43:21 PM11/9/10
to perl5-...@perl.org, perl-docu...@perl.org
In article <22527.1289230033@chthon>, Tom Christiansen
<tch...@perl.com> wrote:

This is the old, dead list that Tim Bunce, et alia, used to keep. It's
officially dead and we're pretending that it never existed. Any
reference to it should disappear.

Paul Fenwick

Nov 9, 2010, 6:48:28 PM11/9/10
to Paul Johnson, Abigail, demerphq, Tom Christiansen, Father Chrysostomos, perl5-...@perl.org, perl-docu...@perl.org
Salutations, and apologies for the giant reply-all.

Paul Johnson wrote:

> No. But it might be if autodie could be persuaded to work on an implicit
> close when a lexical filehandle goes out of scope. That is, if these two
> snippets functioned identically:
>
> $ perl5.12.2 -Mautodie -E 'open my $f, ">", "/dev/full"; print $f 23'
> $ perl5.12.2 -Mautodie -E 'open my $f, ">", "/dev/full"; print $f 23; close $f'
>
> Anyone know what it would take to get that to work?

The most straightforward way I can think of would be to have autodie turn
the filehandle into an object before returning it. The object can then
check for a successful close when it falls out of scope.

Fatal::_one_invocation is the most likely place for such code to be
inserted, it already has special cases for system() and flock().

The problem is that this isn't lexical scope, which autodie otherwise
strictly adheres to. You could quite happily return the new filehandle, and
it will still check for a successful close when it's finally destroyed, even
if autodie is no longer in effect. That's definitely action from a distance.
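
A back-of-the-envelope sketch of the idea--purely hypothetical, this is
not how autodie is implemented, and CheckedHandle is a made-up name:

```perl
use strict;
use warnings;

package CheckedHandle;

sub wrap {      # bless a plain handle into a close-checking wrapper
    my ($class, $fh) = @_;
    return bless { fh => $fh, closed => 0 }, $class;
}

sub handle { $_[0]{fh} }

sub close {     # explicit close, remembered so DESTROY won't repeat it
    my $self = shift;
    $self->{closed} = 1;
    return CORE::close($self->{fh});
}

sub DESTROY {   # implicit close when the wrapper leaves scope
    my $self = shift;
    return if $self->{closed};
    CORE::close($self->{fh})
        or warn "implicit close failed: $!";  # can't safely die in DESTROY
}

package main;

{
    open(my $raw, ">", "/tmp/demo.$$") or die "open: $!";
    my $h = CheckedHandle->wrap($raw);
    print { $h->handle } "hello\n" or die "print: $!";
}   # $h goes out of scope; DESTROY checks the implicit close
unlink "/tmp/demo.$$";
```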

p5p - any opinions on this either way?

Best wishes,

Paul

--
Paul Fenwick <p...@perltraining.com.au> | http://perltraining.com.au/
Director of Training | Ph: +61 3 9354 6001
Perl Training Australia | Fax: +61 3 9354 2681

Leon Timmermans

Nov 9, 2010, 9:55:52 PM11/9/10
to Tom Christiansen, demerphq, Father Chrysostomos, perl5-...@perl.org, perl-docu...@perl.org
On Tue, Nov 9, 2010 at 4:28 PM, Tom Christiansen <tch...@perl.com> wrote:
> I nearly always use lexical handles in non-trivial programs, although in
> trust I seldom do so in trivial ones.  By trivial, I mean those that don't
> even have subroutines, or very few.

I'm pretty sure everyone on this list is able to make such judgment,
but what about the novices? I'd rather have them default to using
indirect filehandles until they grok the difference.

> In particular, I don't approve of
> error checking being omitted when they get implicitly closed due to scoping
> or whatnot.

I agree with you in theory, but in practice I think it doesn't matter
most of the time: most of the time people don't check the return
values of their print() calls, making the point of checking close() a
bit moot IMHO. Also, closing a valid read-only filedescriptor can't
even generate an error AFAIK.

Leon

Tom Christiansen

Nov 9, 2010, 9:58:43 PM11/9/10
to Leon Timmermans, demerphq, Father Chrysostomos, perl5-...@perl.org, perl-docu...@perl.org
> I agree with you in theory, but in practice I think it doesn't matter
> most of the time: most of the time people don't check the return
> values of their print() calls, making the point of checking close() a
> bit moot IMHO.

It is neither necessary nor sufficient to check the return value
from print to detect an error in print.

> Also, closing a valid read-only filedescriptor can't
> even generate an error AFAIK.

Certainly it can!!

--tom

Leon Timmermans

Nov 9, 2010, 10:28:54 PM11/9/10
to Tom Christiansen, demerphq, Father Chrysostomos, perl5-...@perl.org, perl-docu...@perl.org
On Wed, Nov 10, 2010 at 3:58 AM, Tom Christiansen <tch...@perl.com> wrote:
> It is neither necessary nor sufficient to check the return value
> from print to detect an error in print.

I agree it's not sufficient, but I don't agree it's not necessary.
Just imagine a program waiting for a reply to a question that never
reached the other side of a pipe. IMO not checking print's return
value can cause worse bugs than not checking close because it's much
more likely to affect the flow of the program.

> Certainly it can!!

Enlighten me :-)

Leon

Tom Christiansen

Nov 9, 2010, 11:10:38 PM11/9/10
to Leon Timmermans, demerphq, Father Chrysostomos, perl5-...@perl.org, perl-docu...@perl.org

You can't trust print(). Technically, you can't trust write(2),
either. That's why you need to call fsync(2). To my knowledge,
only vi(1) does.

>> Certainly it can!!

>Enlighten me :-)

The most common read(2) failure is EIO, but you can also get EINTR and
EAGAIN. NFS failures may or may not fall under EIO; I believe those may
give ETIMEDOUT or ECONNRESET, which can also happen on a regular socket.
And of course you can always get ENOBUFS, which is a real bummer.

So any of those will stickily set the ferror flag on the buffer,
which will show up in the close. Even close(2) itself can
fail, including through EINTR or once again, through EIO
from a previously uncommitted write(2) having its own trouble.

I have seen many and perhaps all of those.

ALWAYS TEST ANYTHING THAT CAN RETURN AN ERROR. ALWAYS!!!

--tom

Abigail

Nov 10, 2010, 2:37:18 AM11/10/10
to Leon Timmermans, Tom Christiansen, demerphq, Father Chrysostomos, perl5-...@perl.org, perl-docu...@perl.org
On Wed, Nov 10, 2010 at 03:55:52AM +0100, Leon Timmermans wrote:
> On Tue, Nov 9, 2010 at 4:28 PM, Tom Christiansen <tch...@perl.com> wrote:
> > I nearly always use lexical handles in non-trivial programs, although in
> > trust I seldom do so in trivial ones.  By trivial, I mean those that don't
> > even have subroutines, or very few.
>
> I'm pretty sure everyone on this list is able to make such judgment,
> but what about the novices? I'd rather have them default to using
> indirect filehandles until they grok the difference.

I'd rather teach novices lexical, autovivifying handles first.

But regardless. I find "let's shield novices from XXX" not an argument
why something should not (or should) be in the documentation. They
aren't American teenagers who'll be scarred for life if they see a
nipple before they can have a beer.

Abigail

Tom Christiansen

Nov 10, 2010, 9:53:30 AM11/10/10
to Leon Timmermans, demerphq, Father Chrysostomos, perl5-...@perl.org, perl-docu...@perl.org
I wrote, quoting Leon:

>>>> most of the time people don't check the return values of their
>>>> print() calls, making the point of checking close() a bit moot
>>>> IMHO. Also, closing a valid read-only filedescriptor can't even
>>>> generate an error AFAIK.

>>> Certainly it can!!

>> Enlighten me :-)

I was gentle last time; it's worse than I previously wrote.

> The most common read(2) failure is EIO, but you can also get EINTR
> and EAGAIN. NFS failures may or may not fall under EIO; I believe
> those may give ETIMEDOUT or ECONNRESET, which can also happen on a
> regular socket.

Sun also documents close(2) as producing ENOLINK if the file descriptor
argument represents a remote machine whose link (whatever that exactly
means) is no longer active (or that).

http://docs.sun.com/app/docs/doc/816-5167/close-2?l=en&a=view

> And of course you can always get ENOBUFS, which is a real bummer.

This has *definitely* happened to me. It's really hard to get control
of your system again when this starts happening, but that doesn't mean
you should pretend it isn't there.

> So any of those will stick[i]ly set the ferror flag on the buffer,
> which will show up in the close.

This is the important point about checking print status. Because of
buffering, you cannot trust the return value of print. It does not
mean the data made it to the file. It doesn't even mean it made it
out of your buffer into the kernel's. This is why checking print is
*not sufficient* to determine the success of a print.

However, streams are set up so that their error flag gets set on any
failed I/O operation, and this sticky flag carries through until the
ultimate close, whose return reflects that. You can inspect the flag
with ferror, and you can clear it with clearerr. This is why checking
print is *not necessary* to determine the success of a print.

http://docs.sun.com/app/docs/doc/816-5167/read-2?l=en&a=view
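
Perl exposes the same stream state through IO::Handle, whose error()
and clearerr() methods correspond to ferror(3) and clearerr(3);
roughly:

```perl
use strict;
use warnings;
use IO::Handle;

open(my $fh, "<", $0) or die "open $0: $!";   # read this very script
while (<$fh>) { }       # any failed read sets the sticky error flag

if ($fh->error) {       # a la ferror(3)
    warn "an earlier read on $0 failed";
    $fh->clearerr;      # a la clearerr(3)
}
close($fh) or die "close $0: $!";   # deferred errors surface here too
```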

Furthermore, Perl's close can be applied to things other than disk files.

The more obvious case is the way close on a popen()ed handle maps a
nonzero exit status from the waited for child pid into a close failure.

As for not being able to trust write(2): because their filesystem seems
more prone to it than the others, the Linux close(2) manpage reminds
you of something that has always been true but more often ignored:

Not checking the return value of close() is a common but
nevertheless serious programming error. It is quite possible that
errors on a previous write(2) operation are first reported at the
final close(). Not checking the return value when closing the file
may lead to silent loss of data. This can especially be observed
with NFS and with disk quota.

A successful close does not guarantee that the data has been
successfully saved to disk, as the kernel defers writes. It is
not common for a filesystem to flush the buffers when the stream is
closed. If you need to be sure that the data is physically stored
use fsync(2). (It will depend on the disk hardware at this point.)

That's why vi(1) calls fsync(2)[*] when it's all done. One *can* do
this in Perl, with IO::Handle->sync, although I don't know who if anyone
does so.

*[aka fsync(3), fsync(3C), and related to fdatasync(2)]

One can also use O_SYNC with the open(2), that is, with Perl's sysopen.
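
In Perl terms the two approaches look roughly like this (a sketch;
sync() may be unimplemented on some platforms):

```perl
use strict;
use warnings;
use Fcntl qw(O_WRONLY O_CREAT O_SYNC);
use IO::Handle;

my $file = "/tmp/precious.$$";

# belt: flush and fsync on demand before closing
open(my $fh, ">", $file)          or die "open: $!";
print {$fh} "must not be lost\n"  or die "print: $!";
$fh->flush                        or die "flush: $!";
$fh->sync                         or die "fsync: $!";  # IO::Handle->sync
close($fh)                        or die "close: $!";

# suspenders: synchronous writes from the start via O_SYNC
sysopen(my $sfh, $file, O_WRONLY | O_CREAT | O_SYNC)
    or die "sysopen: $!";
syswrite($sfh, "also precious\n") or die "syswrite: $!";
close($sfh)                       or die "close: $!";

unlink $file or die "unlink: $!";
```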

> Even close(2) itself can fail, including through EINTR or once
> again, through EIO [from] a previously uncommitted write(2)
> having its own trouble.

Or a previous read(2), which *easily* incurs EIOs.

Not counting things like EBADF, EFAULT, EISDIR, or EINVAL, there
are still *many* possible failure modes. This is not exhaustive:

    read:  EINTR, EIO, ENXIO, EAGAIN, EWOULDBLOCK, ENOBUFS, EDEADLK,
           EBADMSG, ENOLCK, ENOLINK, EOVERFLOW, ETIMEDOUT, ECONNRESET

    write: EINTR, EIO, ENXIO, EAGAIN, EWOULDBLOCK, ENOBUFS, EDEADLK,
           EDQUOT, EFBIG, ENOLCK, ENOSPC, ENOSR, ERANGE, ENETDOWN,
           ENETUNREACH, EPIPE, EDESTADDRREQ

    fsync: EINTR, EIO, EROFS, ENOSPC, ETIMEDOUT, plus: "If a queued
           I/O operation fails, fsync() may fail with any of the
           errors defined for read(2) or write(2)."

    close: EINTR, EIO, ENOLINK, plus any lingering read/write errors
           from the previous three lists.

Those are merged lists gathered from OpenBSD, Sun, Apple, and Linux
(only), and I still may have missed some.

One key point here is that even if you pretend nothing else matters--
utterly foolish though that would be--the pesky EIO is *always* a
possibility of ruining your day. Bad disk, anybody? *That* never
happens, eh? :(

So Leon, are you "enlightened"? Still think you can *ever* safely
ignore close errors, even on files "merely" opened O_RDONLY? :)

--tom

Leon Timmermans

Nov 10, 2010, 7:08:02 PM11/10/10
to Tom Christiansen, demerphq, Father Chrysostomos, perl5-...@perl.org, perl-docu...@perl.org
On Wed, Nov 10, 2010 at 3:53 PM, Tom Christiansen <tch...@perl.com> wrote:
> However, streams are set up so that their error flag gets set on any
> failed I/O operation, and this sticky flag carries through until the
> ultimate close, whose return reflects that.

I think PerlIO is currently not checking the ferror bit on close/flush
for read-only handles, it's surprisingly non-trivial to properly test
though. I don't know if this behavior is by accident or on purpose.

>> Even close(2) itself can fail, including through EINTR or once
>> again, through EIO [from] a previously uncommitted write(2)
>> having its own trouble.

PerlIO will handle EINTR actually, at least for read, write and close.
open doesn't though.

> One key point here is that even if you pretend nothing else matters--
> utterly foolish though that would be--the pesky EIO is *always* a
> possibility of ruining your day. Bad disk, anybody? *That* never
> happens, eh? :(

I still have a hard time imagining how closing a read-only filehandle
can cause EIOs, but I believe you if you say they happen. I've heard
weirder things.

> So Leon, are you "enlightened"? Still think you can *ever* safely
> ignore close errors, even on files "merely" opened O_RDONLY? :)

For most trivial programs I still wouldn't bother too much about it,
though you are right it's important for anything serious.

Leon

Tom Christiansen

Nov 10, 2010, 7:50:52 PM11/10/10
to Leon Timmermans, demerphq, Father Chrysostomos, perl5-...@perl.org, perl-docu...@perl.org
of "Thu, 11 Nov 2010 01: 08:02 +0100." <AANLkTikY36YAudqj2=2RDrPKSRxKoUb...@mail.gmail.com>
References: <18061.1289240399@chthon> <201011090602...@lists-nntp.develooper.com> <21672.1289313881@chthon> <deme...@gmail.com> <AANLkTimn_HrTZneMH2zqx...@mail.gmail.com> <29825.1289316482@chthon> <faw...@gmail.com> <AANLkTikxY3qYBzP+MyBJS...@mail.gmail.com> <9570.1289357923@chthon> <AANLkTinii_EXqOirNQ53R...@mail.gmail.com> <tchrist@chthon> <25267.1289362238@chthon> <9217.1289400810@chthon> <AANLkTikY36YAudqj2=2RDrPKSRxKoUb...@mail.gmail.com>
Mime-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
X-Mailer: nmh v1.3 && nvi v1.79 (duh!)
Date: Wed, 10 Nov 2010 17:50:52 -0700
Message-ID: <32724.1289436652@chthon>
From: Tom Christiansen <tchrist@chthon>

> I still have a hard time imagining how closing a read-only filehandle
> can cause EIOs, but I believe you if you say they happen. I've heard
> weirder things.

============

    CLOSE(2)           Linux Programmer’s Manual           CLOSE(2)

    ERRORS
        EBADF  fd isn’t a valid open file descriptor.
        EINTR  The close() call was interrupted by a signal; see signal(7).
        EIO    An I/O error occurred.

============

    SunOS 5.10         Last change: 18 Oct 2005            close(2)

    ERRORS

    The close() function will fail if:

        EBADF    The fildes argument is not a valid file
                 descriptor.

        EINTR    The close() function was interrupted by a
                 signal.

        ENOLINK  The fildes argument is on a remote machine
                 and the link to that machine is no longer
                 active.

        ENOSPC   There was no free space remaining on the
                 device containing the file.

    The close() function may fail if:

        EIO      An I/O error occurred while reading from or
                 writing to the file system.

============

Other manpages say that lingering EIO errors from a previous read(2)
and/or write(2) can cause close(2) to do the same. You can kinda
read that in the last EIO description above, if you squint enough.

Here's another problem:

    while (<STDIN>) {
        ...
    }
    exit(0);

Nobody *EVER* bothers to check that when readline returned undef it did so
*without setting errno*. A program run via

    perl script < /bad/disk/file

can get EIO and nobody ever notices. Bad. Very bad. Very very bad.

This kinda thing needs to be autodied or autocarped. It's too important
either to miss or to expect everybody to always do right.

We should help people. It's like the while(<>) getting a defined().
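
In the meantime, a defensive version of that loop can consult the
sticky error flag through IO::Handle instead of trusting a possibly
stale errno (a sketch):

```perl
use strict;
use warnings;
use IO::Handle;

while (defined(my $line = <STDIN>)) {
    # ... process $line ...
}
STDIN->error                              # a la ferror(3)
    and die "error reading STDIN: $!";    # undef meant error, not EOF
close(STDIN) || die "can't close STDIN: $!";
```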

--tom
