Here is the backtrace from the coredump. This is most likely the main thread.
There were no other processes listed in top (this is one of those Linuxes
where threads don't get their own pids anymore ;-( )
#0 0xffffe002 in ?? ()
#1 0x4001a8f1 in Perl_ithread_join (my_perl=0x8192838, obj=0x82987bc)
at threads.xs:567
#2 0x4001b3df in XS_threads_join (my_perl=0x8192838, cv=0x8212044)
at threads.xs:685
#3 0x080dde73 in Perl_pp_entersub (my_perl=0x8192838) at pp_hot.c:2840
#4 0x080bb7bf in Perl_runops_debug (my_perl=0x8192838) at dump.c:1438
#5 0x08065225 in S_run_body (my_perl=0x8192838, oldscope=1) at perl.c:1857
#6 0x08064c8f in perl_run (my_perl=0x8192838) at perl.c:1776
#7 0x0805fd37 in main (argc=2, argv=0xbfffe664, env=0xbfffe670)
at perlmain.c:86
#8 0x42015574 in __libc_start_main () from /lib/tls/libc.so.6
Some machine info:
$ cat /proc/version
Linux version 2.4.20-8 (bhco...@porky.devel.redhat.com) (gcc
version 3.2.2 20030222 (Red Hat Linux 3.2.2-5)) #1 Thu Mar 13
17:54:28 EST 2003
$ cat /etc/redhat-release
Red Hat Linux release 9 (Shrike)
$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 15
model : 2
model name : Intel(R) Pentium(R) 4 CPU 2.80GHz
stepping : 7
cpu MHz : 2799.569
cache size : 512 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr
pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm
bogomips : 5583.66
Liz
Elizabeth Mattijsen wrote:
> Ok, with the new knowledge of building debugging versions in a separate
> install tree, the ext/threads/shared/t/wait test is _still_ hanging on
> my test box.
I'm seeing the same thing on stock RH9, which looks like what you're using
as well.
Here's the test output by itself and an strace, in case it helps.
--Geoff
ok 21 - cond_timedwait [simple]: obtained initial lock
ok 22 - cond_timedwait [simple]: child before lock
ok 23 - cond_timedwait [simple]: child obtained lock
mmap2(NULL, 2097152, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1,
0) = 0x4132e000
munmap(0x4132e000, 860160) = 0
munmap(0x41500000, 188416) = 0
mprotect(0x41400000, 32768, PROT_READ|PROT_WRITE) = 0
futex(0x40b0cd08, FUTEX_WAIT, 17417, NULL) = 0
brk(0) = 0x83c8000
brk(0) = 0x83c8000
brk(0x838e000) = 0x838e000
brk(0) = 0x838e000
ioctl(0, SNDCTL_TMR_TIMEBASE, {B38400 opost isig icanon echo ...}) = 0
_llseek(0, 0, 0xbfffdb60, SEEK_CUR) = -1 ESPIPE (Illegal seek)
ioctl(1, SNDCTL_TMR_TIMEBASE, {B38400 opost isig icanon echo ...}) = 0
_llseek(1, 0, 0xbfffdb60, SEEK_CUR) = -1 ESPIPE (Illegal seek)
ioctl(2, SNDCTL_TMR_TIMEBASE, 0xbfffdb20) = -1 ENOTTY (Inappropriate ioctl
for device)
_llseek(2, 0, [66207], SEEK_CUR) = 0
clone(child_stack=0x40b0c8d0,
flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID|CLONE_DETACHED,
[17419], {entry_number:6, base_addr:0x40b0ccc0, limit:1048575, seg_32bit:1,
contents:0, read_exec_only:0, limit_in_pages:1, seg_not_present:0,
useable:1}) = 17419
futex(0x40b0cd08, FUTEX_WAIT, 17419, NULL <unfinished ...>
Whereas the tests pass for me on RH9 [not sure how stock this is]
Summary of my perl5 (revision 5.0 version 8 subversion 3) configuration:
Platform:
osname=linux, osvers=2.4.22, archname=i686-linux-thread-multi
uname='linux rum 2.4.22 #2 smp wed jan 7 19:00:14 gmt 2004 i686 i686 i386 gnulinux '
config_args='-des -Dusethreads -Dprefix=/perl/perl-5.8.3-RC1 -Doptimize=-g -Dusedevel -Uinstallusrbinperl -Uversiononly -DDEBUGGING'
hint=recommended, useposix=true, d_sigaction=define
usethreads=define use5005threads=undef useithreads=define usemultiplicity=define
useperlio=define d_sfio=undef uselargefiles=define usesocks=undef
use64bitint=undef use64bitall=undef uselongdouble=undef
usemymalloc=n, bincompat5005=undef
Compiler:
cc='cc', ccflags ='-D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm',
optimize='-g',
cppflags='-D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -I/usr/include/gdbm'
ccversion='', gccversion='3.2.2 20030222 (Red Hat Linux 3.2.2-5)', gccosandvers=''
intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234
d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
alignbytes=4, prototype=define
Linker and Libraries:
ld='cc', ldflags =' -L/usr/local/lib'
libpth=/usr/local/lib /lib /usr/lib
libs=-lnsl -lgdbm -ldb -ldl -lm -lcrypt -lutil -lpthread -lc
perllibs=-lnsl -ldl -lm -lcrypt -lutil -lpthread -lc
libc=/lib/libc-2.3.2.so, so=so, useshrplib=false, libperl=libperl.a
gnulibc_version='2.3.2'
Dynamic Linking:
dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-rdynamic'
cccdlflags='-fpic', lddlflags='-shared -L/usr/local/lib'
Characteristics of this binary (from libperl):
Compile-time options: DEBUGGING MULTIPLICITY USE_ITHREADS USE_LARGE_FILES PERL_IMPLICIT_CONTEXT
Locally applied patches:
MAINT22085
Built under linux
Compiled at Jan 9 2004 19:43:53
@INC:
lib
/perl/perl-5.8.3-RC1/lib/5.8.3/i686-linux-thread-multi
/perl/perl-5.8.3-RC1/lib/5.8.3
/perl/perl-5.8.3-RC1/lib/site_perl/5.8.3/i686-linux-thread-multi
/perl/perl-5.8.3-RC1/lib/site_perl/5.8.3
/perl/perl-5.8.3-RC1/lib/site_perl
.
Nicholas Clark
I think you're losing the race. From wait.t:
### N.B.: RACE! If $timeout is very soon and/or we are unlucky, we
### might timeout on the cond_timedwait before the signaller
### thread even attempts lock()ing.
### Upshot: $thr->join() never completes, because signaller is
### stuck attempting to lock the mutex we regained after waiting.
As far as the route through wait.t goes for $test =~ /simple/:
threads->create(\&ctw, 5)->join;
sub ctw($) {
    my $to = shift;
    ## which lock to obtain in this scope?
    $test =~ /twain/ ? lock($lock) : lock($cond);
    ok(1,1, "$test: obtained initial lock");
    my $thr = threads->create(\&signaller);
    ### N.B.: RACE! If $timeout is very soon and/or we are unlucky, we
    ### might timeout on the cond_timedwait before the signaller
    ### thread even attempts lock()ing.
    ### Upshot: $thr->join() never completes, because signaller is
    ### stuck attempting to lock the mutex we regained after waiting.
    my $ok = 0;
    $ok=cond_timedwait($cond, time() + $to); # simple
    print "# back from cond_timedwait; join()ing\n";
    $thr->join;
    ok(5,$ok, "$test: condition obtained");
}
sub signaller {
    ok(2,1,"$test: child before lock");
    $test =~ /twain/ ? lock($lock) : lock($cond);
    ok(3,1,"$test: child obtained lock");
    cond_signal($cond);
    ok(4,1,"$test: child signalled condition");
}
I can't see how to avoid the race, even by bailing out (in the sense that
failing tests are better than hanging tests).
Trying to put a "bail out now" flag after the cond_timedwait and testing that
in &signaller() before attempting the lock would just create a new race
(I think), with the possibility of $bail_out getting set fractionally after
the signaller had tested it.
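For concreteness, here is a minimal sketch of that rejected flag idea (the
$give_up variable and sub name are hypothetical, not part of the real wait.t):

use threads;
use threads::shared;

my $cond    : shared;
my $give_up : shared = 0;   # hypothetical bail-out flag

sub signaller_with_bailout {
    return if $give_up;     # check the flag before locking...
    # ...but the waiter can time out and set $give_up right here,
    # between the check and the lock, so the signaller still ends up
    # blocked on the mutex the waiter has just re-acquired.
    lock($cond);
    cond_signal($cond);
}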
Nicholas Clark
$ rpm -q glibc
glibc-2.3.2-27.9.7
(I'm going to bed now)
Nicholas Clark
> > ok 23 - cond_timedwait [simple]: child obtained lock
>
> I think you're losing the race. From wait.t:
>
> ### N.B.: RACE! If $timeout is very soon and/or we are unlucky, we
> ### might timeout on the cond_timedwait before the signaller
> ### thread even attempts lock()ing.
> ### Upshot: $thr->join() never completes, because signaller is
> ### stuck attempting to lock the mutex we regained after waiting.
However, we know that the signaller thread _has_ obtained the lock, since we
see Geoffrey's "ok 23."
Despite the menacing shoutcaps in the comment, it's hard to lose the race
referred to above. You'd need extreme load, and it'd look something like
this:
t waiter signaller
- --------------------------- ---------------------------------
0 lock($cond) n/a
1 spawn signaller [initializing? not executing?]
2 unlock($cond)/wait (atomic) [?]
3 [await signal] [?]
...5 wallclock seconds pass...
4 timeout, relock($cond) [?]
5 attempt join signaller... attempt lock($cond)...
HANG! HANG!
In practice, the signaller's lock is successfully obtained around t3, as soon
as the parent releases the mutex by timed_wait()ing. (And that mutex prevents
a race in the other direction, wherein the signaller completes before the
parent is ready and waiting.)
-Mike
> > Whereas the tests pass for me on RH9 [not sure how stock this is]
>
> $ rpm -q glibc
> glibc-2.3.2-27.9.7
You've got a stock update or two, and I think that's the answer for RH9.
Liz graciously retried the failing tests with NPTL disabled (defaulting to the
older LinuxThreads implementation), and the tests no longer failed. Her glibc
package is older than the one referenced above. I trust/hope Geoffrey will
report the same.
RH's glibc-2.3.2-27.9, which does _not_ fail for Nick, introduces fixes for
bugs in both pthread_cond_wait and NPTL pthread_cond_timedwait. [0] As for
me, I can't reproduce the wait.t failure on RH9 with that more recent glibc,
with or without NPTL.
So, RH9ers (and RHEL users, too, I think) have two options, AFAICT: update
glibc or disable NPTL for threaded perl. [1] I've no idea how to easily
detect the presence of an older NPTL for overriding during "make test".
[0] https://rhn.redhat.com/errata/RHBA-2003-136.html
[1] No changes to compilation are required. NPTL can be suppressed at runtime
on a per-process basis via LD_ASSUME_KERNEL=2.4.1, or at boot time
systemwide -- see your RH9 release notes.
-Mike
> >ok 21 - cond_timedwait [simple]: obtained initial lock
> >ok 22 - cond_timedwait [simple]: child before lock
> >ok 23 - cond_timedwait [simple]: child obtained lock
[....]
> >futex(0x40b0cd08, FUTEX_WAIT, 17419, NULL <unfinished ...>
(Liz, thanks for the forward.)
Does the same failure occur with NPTL disabled -- that is, "export
LD_ASSUME_KERNEL=2.4.1; ./run_the_test"?
Can I get an strace of the behavior of all threads in the test? Meantime,
I'll look for a similar environment in which to reproduce.
-Mike
That would only solve the problem during testing. As expected,
setting the environment variable inside the test-script, even
_before_ loading threads.pm, does not solve the problem (probably
because glibc doesn't see Perl's environment changes).
However, one could think of some way of having the test-file
re-execute itself with an exec() or a system() with the
appropriate environment variable set if this condition is seen (at
least for the test-suite).
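For instance, a minimal sketch of that re-exec idea (the PERL_WAIT_T_REEXEC
guard variable is hypothetical, and this version re-runs unconditionally on
Linux rather than only when the bad NPTL is detected):

BEGIN {
    if ($^O eq 'linux' && !$ENV{PERL_WAIT_T_REEXEC}) {
        # Ask the dynamic linker for LinuxThreads instead of NPTL,
        # then re-run this script so the setting takes effect.
        $ENV{LD_ASSUME_KERNEL}   = '2.4.1';
        $ENV{PERL_WAIT_T_REEXEC} = 1;
        exec($^X, $0, @ARGV) or die "re-exec failed: $!";
    }
}
use threads;
use threads::shared;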
Maybe some way could be devised to check from Perl which threads
implementation is being used underneath, and maybe that would give
enough info to check for this condition?
Liz
Yes. Good point. It didn't quite make sense to me, because if the race
were lost, this should also have been printed:
print "# back from cond_timedwait; join()ing\n";
> Despite the menacing shoutcaps in the comment, it's hard to lose the race
> referred to above. You'd need extreme load, and it'd look something like
> this:
Does the lock really need to be held for the join? Can't you avoid the
race entirely with the appended change?
[which adds a nested scope, and unlocks the locked variable before the join]
Also it would seem useful to me to replace that "back from cond_timedwait"
print with another ok, so that the harness diagnostics have a better
chance of showing where things hung.
Nicholas Clark
--- ext/threads/shared/t/wait.t.orig Wed Dec 17 17:24:47 2003
+++ ext/threads/shared/t/wait.t Sat Jan 10 12:57:55 2004
@@ -100,25 +100,24 @@ SYNC_SHARED: {
sub ctw($) {
my $to = shift;
+ my $ok = 0;
+ my $thr;
+ {
+ ## which lock to obtain in this scope?
+ $test =~ /twain/ ? lock($lock) : lock($cond);
+ ok(1,1, "$test: obtained initial lock");
- ## which lock to obtain in this scope?
- $test =~ /twain/ ? lock($lock) : lock($cond);
- ok(1,1, "$test: obtained initial lock");
+ $thr = threads->create(\&signaller);
- my $thr = threads->create(\&signaller);
- ### N.B.: RACE! If $timeout is very soon and/or we are unlucky, we
- ### might timeout on the cond_timedwait before the signaller
- ### thread even attempts lock()ing.
- ### Upshot: $thr->join() never completes, because signaller is
- ### stuck attempting to lock the mutex we regained after waiting.
- my $ok = 0;
- for ($test) {
- $ok=cond_timedwait($cond, time() + $to), last if /simple/;
- $ok=cond_timedwait($cond, time() + $to, $cond), last if /repeat/;
- $ok=cond_timedwait($cond, time() + $to, $lock), last if /twain/;
- die "$test: unknown test\n";
+ for ($test) {
+ $ok=cond_timedwait($cond, time() + $to), last if /simple/;
+ $ok=cond_timedwait($cond, time() + $to, $cond), last if /repeat/;
+ $ok=cond_timedwait($cond, time() + $to, $lock), last if /twain/;
+ die "$test: unknown test\n";
+ }
+ print "# back from cond_timedwait; join()ing\n";
}
- print "# back from cond_timedwait; join()ing\n";
+
> > Also it would seem useful to me to replace that "back from cond_timedwait"
> > print with another ok, so that the harness diagnostics have a better
> > chance of showing where things hung.
>
> I'd agree. IIRC, a number of test routines would benefit from these changes.
> I'll happily update wait.t, but I won't have the opportunity to do so until
> tomorrow.
Tomorrow (or early next week) would be fine as far as getting it into 5.8.3
goes. I'm not sure how many affected systems are trying (and failing) to
run tests from the latest rsync.
If it's OK, I think I'd prefer you to make a patch rather than me just
apply what I'd suggested, as you may spot other things to improve.
[but don't feel obliged to :-)]
Nicholas Clark
> Does the lock really need to be held for the join? Can't you avoid the
> race entirely with the appended change?
> [which adds a nested scope, and unlocks the locked variable before the join]
Yes -- that'd turn the deadlock into an ordinary test failure. This won't
address the underlying problem with vanilla RH9 and wait.t (an older NPTL, I
think, and not the highly unlikely loss of this race in nature), but your
change is a definite improvement.
> Also it would seem useful to me to replace that "back from cond_timedwait"
> print with another ok, so that the harness diagnostics have a better
> chance of showing where things hung.
I'd agree. IIRC, a number of test routines would benefit from these changes.
I'll happily update wait.t, but I won't have the opportunity to do so until
tomorrow.
-Mike
> That would only solve the problem during testing. As expected,
> setting the environment variable inside the test-script, even
> _before_ loading threads.pm, does not solve the problem (probably
> because glibc doesn't see Perl's environment changes).
Right -- the NPTL v. LinuxThreads runtime decision is a dynamic linker
instruction, so it needs to be made before the interpreter is loaded.
> Maybe some way should be devised so that you could check from Perl
> which threads implementation is being used underneath Perl and maybe
> that would give enough info to check for this condition?
I think that would be ideal; there appears to be code to determine NPTLishness
at runtime [0]. If, beyond that, we could determine the NPTL version (I'm not
sure which confstr to look for), then we'd know whether or not to export the
LD_ASSUME_KERNEL envvar for subsequent perl runs.
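For what it's worth, here is one hedged way to ask glibc which thread library
is in use, shelling out to getconf(1) rather than calling confstr() directly
(GNU_LIBPTHREAD_VERSION is a glibc extension, so this only means anything on
Linux/glibc; treat it as a sketch):

my $impl = '';
if ($^O eq 'linux') {
    chomp($impl = `getconf GNU_LIBPTHREAD_VERSION 2>/dev/null` || '');
}
if ($impl =~ /^NPTL/) {
    # NPTL in use; on an unpatched RH9 glibc this is the buggy one, so
    # child perls could be pointed back at LinuxThreads:
    $ENV{LD_ASSUME_KERNEL} = '2.4.1';
}
print "# pthread implementation: ", ($impl || 'unknown'), "\n";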
Easier (less palatable?) would be merely to declare "caveat executor" -- the
older NPTL has a bug, it's since been fixed, and that's that. (I know that
admins can't simply upgrade glibc at the drop of a hat, and that the older
NPTL is widely deployed thanks to RH9.)
[0] https://listman.redhat.com/archives/phil-list/2003-April/msg00036.html
-Mike
I could live with that. However, we do need a way for the test just
to fail (with the appropriate pointers to updating glibc or setting
LD_ASSUME_KERNEL=2.4.1) in that case rather than hang.
Liz
In this case, I think this is the way to go. The particular bad glibc
also had other problems, iirc. Perl can't be responsible for fixing
all operating system glitches -- especially since the vendor has fixed
them.
-R
I hate to be a party-pooper, but this patch doesn't work ;-(. It
hangs after test 26:
ok 26 - cond_timedwait [simple]: child obtained lock
(silence)
Liz
I'm afraid the latter is the case... maybe the test can be wrapped
in an alarm()ed eval?
Liz
With LD_ASSUME_KERNEL=2.4.1, all tests pass (as before).
I thought the point of the changes was that it wouldn't hang even
when this was not specified?
Liz
I believe not yet observed. But someone will manage it. :-(
> > b) buggy underlying pthread_cond_timedwait() &c.
> > - un-updated RH9 glibc (specifically NPTL)
> >
> >(a) and (b) will still produce failure, of course, but the test suite won't
> >hang and diagnostics will be easier.
> >
> >-Mike
>
> I hate to be a party-pooper, but this patch doesn't work ;-(. It
> hangs after test 26:
Given that it will fix (a), should I test and apply it as is?
And we work from here?
Nicholas Clark
> If it's OK, I think I'd prefer you to make a patch rather than me just
> apply what I'd suggested, as you may spot other things to improve.
Patch for wait.t attached (thanks for the suggestions!), sporting more ok()s
and "unlocked" joins for all waiter+signaller tests. No deadlock in the cases
of:
a) the exceedingly slow signaller thread
- not yet observed in nature?
b) buggy underlying pthread_cond_timedwait() &c.
> I hate to be a party-pooper, but this patch doesn't work ;-(. It
> hangs after test 26:
>
> ok 26 - cond_timedwait [simple]: child obtained lock
> (silence)
Even with ``LD_ASSUME_KERNEL=2.4.1'' ?
-Mike
> I hate to be a party-pooper, but this patch doesn't work ;-(. It
> hangs after test 26:
Oh, wait; I see what you were saying.
Perhaps the old NPTL bug is deeper than I'd suspected -- I'd thought the
corresponding cond_timedwait() completed and the deadlock was in perl's
join(), but perhaps the hang is way down in pthread_cond_wait...
-Mike
> I'm afraid the latter is the case... maybe the test can be wrapped
> in an alarm()ed eval?
I'm not sure of the semantics. Only one thread would get the signal;
moreover, the pthread_cond_*wait functions aren't EINTRuptable per se --
they're cancellation points, of which ithreads has no notion. (Where's my
Butenhof book?)
Maybe as a separate process we could alarm the main thread and wait() for
failure? I'm unsure of the feasibility; even then, I'd worry that the right
signals-and-threads mixture for NPTL would be the wrong prescription for
other systems' thread libraries.
-Mike
> Given that it will fix (a), should I test and apply it as is?
Yes.
> And we work from here?
Hopefully. :-) Liz has broached a solution for (b).
-Mike
Maybe you could fork() on non-Win32 systems and have the parent kill
the child after X seconds when it has not returned?
One approach I use with Benchmark::Thread::Size is to start a
separate process (script) with open() and wait for the child to write
something back to indicate it's done. You could wrap that into a
timed eval, couldn't you?
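A hedged sketch of that fork-and-kill watchdog (run_with_timeout,
do_cond_timedwait_tests and the 30-second budget are illustrative names and
values, not from any actual patch):

use strict;
use warnings;

sub run_with_timeout {
    my ($code, $patience) = @_;
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {            # child: run the possibly-hanging test body
        $code->();
        exit 0;
    }
    my $timed_out = 0;
    local $SIG{ALRM} = sub {    # parent: lose patience and kill the child
        $timed_out = 1;
        kill 'KILL', $pid;
        waitpid $pid, 0;        # reap the killed child
    };
    alarm $patience;
    waitpid $pid, 0;            # interrupted by the alarm if the child hangs
    alarm 0;
    return $timed_out ? 0 : ($? == 0);
}

# e.g. run_with_timeout(sub { do_cond_timedwait_tests() }, 30)
#   or print "not ok - timed out (buggy NPTL? try LD_ASSUME_KERNEL=2.4.1)\n";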
Liz
Thanks, applied (22115)
On Sun, Jan 11, 2004 at 05:37:46PM -0600, Mike Pomraning wrote:
> they're cancellation points, of which ithreads has no notion. (Where's my
> Butenhof book?)
I don't know, but mine's on top of my dictionary, just to the right of me.
Nicholas Clark
Elizabeth Mattijsen wrote:
> At 16:41 -0600 1/11/04, Mike Pomraning wrote:
>
>> On Sun, 11 Jan 2004, Elizabeth Mattijsen wrote:
>>
>>> I hate to be a party-pooper, but this patch doesn't work ;-(. It
>>> hangs after test 26:
>>>
>>> ok 26 - cond_timedwait [simple]: child obtained lock
>>
>> > (silence)
>> Even with ``LD_ASSUME_KERNEL=2.4.1'' ?
>
>
> With LD_ASSUME_KERNEL=2.4.1, all tests pass (as before).
Sorry for checking late...
Just to concur, LD_ASSUME_KERNEL=2.4.1 works for me on all counts, and 22115
hangs on 26 without it.
--Geoff
> Maybe you could fork() on non-Win32 systems and have the parent kill
> the child after X seconds when it has not returned?
Fork-n-fail away! ;-) The attached runs each cond_* test set under a
separate, alarmed child process with 90 seconds of patience.
I've only simulated the deadlock, of course, and I haven't tried this on win32
at all. This approach is a little dicier than I like -- the fork() perhaps
copying some locked internal mutex (if I recall the book to Nick's right
correctly :-)) -- but at forktime there's only one thread in play.
If this doesn't work, I'd be in favor of simply alarm()ing wait.t to allow the
other tests to proceed.
-Mike
I think 90 seconds is a bit too much to wait for. I would think 30
seconds is _really_ enough.
>I've only simulated the deadlock, of course, and I haven't tried this on win32
>at all. This approach is a little dicier than I like -- the fork() perhaps
>copying some locked internal mutex (if I recall the book to Nick's right
>correctly :-)) -- but at forktime there's only one thread in play.
>If this doesn't work, I'd be in favor of simply alarm()ing wait.t to allow the
>other tests to proceed.
The patch works. You just have to have enough patience. I didn't,
the first time. ;-)
One remark, though. Maybe a diag() should be added which explains
why the test most likely failed. Similar to the DB message you get
when testing with Mac OS X (which still has a buggy Berkeley DB).
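Something along these lines, perhaps ($timed_out is a hypothetical flag and
the wording is only a suggestion):

if ($timed_out) {
    print STDERR <<"EOT";
# wait.t timed out.  This usually indicates a buggy pthread_cond_timedwait(),
# e.g. the NPTL shipped with unpatched Red Hat 9 glibc.  Either update glibc
# or run the tests with LD_ASSUME_KERNEL=2.4.1 to fall back to LinuxThreads.
EOT
}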
Liz
I've applied this to blead. I'll merge it into maint if Steve Hay's blead
smoke on Win32 is happy.
However, I've tried building blead on Compaq's VMS testdrive and I get
ext/threads/shared/t/wait............FAILED at test 6
I've not investigated further, as the tests are still running, I don't
know the VMS-speak for ./perl (how do I run the perl I just built?) and
I'm seeing quite a lot of failures that I didn't expect, so I'm not sure
if the testdrive machine is causing the failures rather than the test
itself. The important part is that it doesn't cause a hang for me on VMS.
The print code in wait.t looks OK (no print "not "; print "ok\n")
so I'm not sure where the problem may lie. It may be simplest to skip
the fork on VMS, given that VMS has one of the best threads
implementations around and so is unlikely to need this bug-placating band-aid.
If someone with VMS-fu (it doesn't have to be Craig) is able to work
out how to make the test happy - possibly by just testing an edit of this
line:
*forko = ($^O =~ /^dos|os2|mswin32|netware$/i) # Not on DOSish platforms
I'd be grateful.
Nicholas Clark
FWIW smokes started with this patch on blead (HP-UX 10.20 and Cygwin-1.5.5)
and on 5.8.3 (HP-UX 10.20, manually applied the patch)
If they don't hang, reports (with the upcoming T::S-1.19) are to be expected
tomorrow
--
H.Merijn Brand Amsterdam Perl Mongers (http://amsterdam.pm.org/)
using perl-5.6.1, 5.8.0, & 5.9.x, and 806 on HP-UX 10.20 & 11.00, 11i,
AIX 4.3, SuSE 8.2, and Win2k. http://www.cmve.net/~merijn/
http://archives.develooper.com/daily...@perl.org/ per...@perl.org
send smoke reports to: smokers...@perl.org, QA: http://qa.perl.org
And while whoever does that is at it, change it to apply both anchors to all
the alternates:
/^(?:dos|os2|mswin32|netware)$/i)
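A quick self-contained demonstration of why that matters: without the
grouping, the '^' binds only to 'dos' and the '$' only to 'netware', so an OS
name that merely contains 'os2' would slip through (the sample strings below
are made up):

for my $os (qw(MSWin32 os2 vms linux some-os2-lookalike)) {
    my $old = ($os =~ /^dos|os2|mswin32|netware$/i)     ? 1 : 0;
    my $new = ($os =~ /^(?:dos|os2|mswin32|netware)$/i) ? 1 : 0;
    printf "%-20s old=%d new=%d\n", $os, $old, $new;
}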