Looking for balanced critique of Perl


Don Libes

Jun 25, 1990, 6:43:26 PM
I'm looking for a balanced, well-educated critique of Perl. What I've
read in magazines (and certainly Usenet) has been, let's say,
"overenthusiastic" and is difficult to swallow because it never
mentions anything bad about Perl. It would be interesting to hear,
say, language-designers', compiler-writers', and real computer
scientists' opinions about Perl rather than just those of people who
don't have much else to compare it to. I've posted this to
comp.unix.wizards because I want to reach people who have rejected or
ignored Perl.

Don't get me wrong. I've been using Perl for a large project, and am
impressed by its power. Yet there are problems as well. It's hard
to put my finger on them, but there is circumstantial evidence.

For instance, I've been using Perl for several months...every so often
when I have a problem, I'll do my usual thing - cruise through
neighboring offices asking for help. Well, I have been unable to find
a single other person here at NIST using Perl. None of the system
administrators here are interested in it, yet they are well-connected
to the net, have the latest tools otherwise, and seem to know what
they are doing.

Maybe they see Perl as just "yet another tool", and they know plenty
already. Maybe they recognize it as "another goddamn language" that
they will have to spend years mastering. It's hard to tell. ("I
wasted all that time learning "sh", "csh", "ksh", "awk", C, and now
you're telling me I should forget them and learn another?")

I get the feeling it is very hard to master Perl. Any time I read a
posting in comp.lang.perl, someone will follow up with a shorter,
more correct solution, and this iterates until finally Larry himself
tops everyone else. Is Larry the only one who can write Perl well?

Do many people use more than 20% of the functions? It reminds me a
lot of PL/1 in that there are just so many ways to do things. It's
difficult to know, prima facie, what is the most efficient.

Another difficulty that I have run into is that the manual is just too
skimpy (and yet intimidating at the same time). I'm used to looking
at source to resolve questions, but it is just too turgid and I don't
have that much time. The first couple of times I used Perl, I was
ready to deep six the whole thing because it was so hard finding
anything in the manual.

Another different kind of information that the manual doesn't provide
at all is at what point Perl is inappropriate. On the scale of large
code? Efficient code? Etc. It's really hard to know whether Perl is
appropriate for a project before starting out and trying it.

I'm trying to be reasonable here. I'm not berating Perl because it
has bugs, is still in development, or for some other irrelevant
reason. I'm more interested in the essential nature of Perl.

Rather than retorts to all these remarks and questions, I'm actually
more interested in something that has actually been published. So if
you could give me pointers, I'd appreciate it more.

Oh, by the way, I'm looking for this information because I'm writing a
section of a paper that necessarily demands a philosophical comparison
between Perl and other tools. I'd like to do it in a balanced way.

Thanks.

Don Libes li...@cme.nist.gov ...!uunet!cme-durer!libes

Stephane Payrard

Jun 26, 1990, 1:37:18 AM

Perl is becoming one of my favorite tools.
A few months ago, I sent a mail singing its praises.
Today, I will discuss some of its limitations.


Perl has a clear limitation: its inability to deal properly with
complex data structures. I guess that limitation has a historical
explanation: perl was originally intended to extract information from
text files.
Nevertheless, with its ability to do system calls (and soon
library calls), one is tempted to use perl where one would have
otherwise used C. As of today, if you need complex data structures,
forget it.

More precisely, Perl so far
1/ has no clean way of allocating, deallocating, and
accessing data in memory (i.e., no pointers)
2/ has no composite datatype equivalent to the C struct
3/ is not object oriented


1 and 2 can be emulated in a very dirty way using strings with pack() and
unpack().
Now that perl is used in programs that manipulate data structures,
using such kludges is very painful and unnatural.
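
For readers who haven't seen the kludge, here is a minimal sketch. The
record layout and field names are invented for illustration: pack()
flattens the fields into one string, and unpack() recovers them by
position only.

```perl
# Emulating a C struct with pack/unpack -- the "dirty" way.
# Hypothetical layout: a 14-byte name, an integer id, a double score.
$fmt = "A14 i d";

# pack() flattens the fields into one binary string...
$rec = pack($fmt, "payrard", 138, 3.14);

# ...and unpack() recovers them -- by position, not by name.
($name, $id, $score) = unpack($fmt, $rec);

print "$name $id $score\n";
```

This is exactly the pain point: nothing ties the format string to the
field names, so changing one silently breaks the other.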

I guess the main problem in implementing such features is to find room
in an already cluttered "syntactic space" for:

declaring types
defining typed objects
dereferencing pointed-to objects (^ does not seem to be used yet as a unary operator)
accessing elements of structures.

Anyway, I wonder if perl is really designed for writing programs long
enough to require splitting across many files, or for dealing with
complex data structures.

stef

--
Stephane Payrard -- st...@sun.com -- (415) 336 3726
Sun Microsystems -- 2550 Garcia Avenue -- M/S 10-09 -- Mountain View CA 94043
room number: 138


Tom Neff

Jun 26, 1990, 3:43:01 AM
In article <48...@muffin.cme.nist.gov> li...@cme.nist.gov (Don Libes) writes:
>I'm looking for a balanced well-educated critique of Perl.

What you get out of Perl is proportional to what you put into it.
Naturally this drives some people up the wall! Others are ecstatic.

Like the Korn Shell, Perl is a 'better mousetrap' language, which means
it faces an uphill battle to prove why anyone really needs it. Also
like ksh, it's addictive: once you get used to having it you wonder how
you ever endured the old days.

Perl is also at the stage where many (if not most) people who want to
use it have to build it themselves. This gives users a vested interest
in the language if they succeed, but it's also a barrier to wider
acceptance. If a few vendors started shipping Perl binaries with their
OS releases, it'd become a standard in months... but it would also tend
to freeze development and/or enshrine bugs (cf. sendmail).

It will help if Larry releases the 'rn in Perl' he is rumored to be
working on, since many more people will encounter the language and be
tempted to learn about it.

>I get the feeling it is very hard to master Perl. Any time I read a
>posting in comp.lang.perl, someone will follow up with a shorter,
>more correct solution, and this iterates until finally Larry himself
>tops everyone else. Is Larry the only one who can write Perl well?

This just shows the richness of the language. In most of the
terser-than-thou Perl hacker dialogues in comp.lang.perl, ALL of the
entries are written "well." (Except Randal's winners, which are
invariably egregiously unfair abuses of the language :-) As the manual
points out, Perl emphasizes functionality over elegance, so if a script
works properly, and is maintainable and reasonably efficient, it doesn't
need "improving" -- except for fun, which is what we see a lot of here.

>Do many people use more than 20% of the functions?

As a "kitchen sink" language Perl has more doodads than one person
normally needs. But different users embrace different subsets.
Everything in it is useful to SOMEONE. C's external link-time
libraries and Bourne's casual subprogram invocations aren't available
as central language elements, so Perl must compromise on what things
are and aren't included -- hence the doodads.

> It reminds me a
>lot of PL/1 in that there are just so many ways to do things. It's
>difficult to know, prima facie, what is the most efficient.

Actually the most bewildering choices in Perl have nothing to do with
the Swiss-Army-knife kit of system functions, but rather with the
wide-open freedom of control structures, variable/array manipulation and
interpretation, and the mix of competing features from awk, sed and
Bourne shell. In this Perl reflects its polyglot heritage: the
ingenious attempt to make things easy for coders migrating from those
other languages succeeds, at the cost of a slight identity crisis for
native Perl developers! Which predecessor to imitate - or none?

>Another difficulty that I have run into is that the manual is just too
>skimpy (and yet intimidating at the same time). I'm used to looking
>at source to resolve questions, but it is just too turgid and I don't
>have that much time. The first couple of times I used Perl, I was
>ready to deep six the whole thing because it was so hard finding
>anything in the manual.

This is something of a drawback. The Reference Cards help. I extracted
a syntax-only cheat sheet for my own use. Perl really needs a thick
manual that goes into EVERYTHING in detail, but someone else will
probably have to do it.

>Another different kind of information that the manual doesn't provide
>at all is at what point Perl is inappropriate. On the scale of large
>code? Efficient code? Etc. It's really hard to know whether Perl is
>appropriate for a project before starting out and trying it.

As long as you aren't saddled with a large latency cost in loading the
interpreter (I used to be before I chmod +t'd it on my V/386), I find
there are few size limitations (large or small) on Perl's usefulness.
If there *were* limitations they would vary from system to system,
so the manual really can't help!

Obviously if the task is to print "HI" a Perl script would be
overkill... unless this was, say, a SUB-script to be chosen from among
a dozen possibilities and executed from a running Perl program, in which
case Perl's built in #! recognizer would save time! (I think.)

In practice, portability is a much bigger constraint on Perl's usefulness
than program size or complexity. If I am writing an install script to
be shipped with a set of software, I cannot assume the recipient has Perl
available so I naturally use Bourne. On the other hand Perl has been
successfully ported to a surprising array of platforms.

>I'm trying to be reasonable here. I'm not berating Perl because it
>has bugs, is still in development, or for some other irrelevant
>reason. I'm more interested in the essential nature of Perl.

If you write your own Perl scripts and read comp.lang.perl, you know
the essential nature of Perl -- probably better than the average
computer journalist does. Rob Kolstad has a very nice three-parter in
UNIX REVIEW's current issues which should do a lot to turn people on to
the language, but everything in it is old hat by c.l.p standards.

>Oh, by the way, I'm looking for this information because I'm writing a
>section of a paper that necessarily demands a philosophical comparison
>between Perl and other tools. I'd like to do it in a balanced way.

Best wishes on the paper. IMHO 'balance' is something others cannot
supply -- it has to come from within.
--
When considering "victim's rights," remember || vv Tom Neff
that an innocent defendant is also a victim. ^^ || tn...@bfmny0.BFM.COM

Larry Wall

Jun 26, 1990, 3:22:38 PM
In article <48...@muffin.cme.nist.gov> li...@cme.nist.gov (Don Libes) writes:
: I'm looking for a balanced, well-educated critique of Perl. What I've
: read in magazines (and certainly Usenet) has been, let's say,
: "overenthusiastic" and is difficult to swallow because it never
: mentions anything bad about Perl. It would be interesting to hear,
: say, language-designers', compiler-writers', and real computer
: scientists' opinions about Perl rather than just those of people who
: don't have much else to compare it to. I've posted this to
: comp.unix.wizards because I want to reach people who have rejected or
: ignored Perl.

I hope you'll let me say what I think is wrong with Perl--I don't claim
to be a "real computer scientist", but I think I qualify as a language-
designer and compiler writer... :-)

In a sense, nobody has anything out there to compare Perl to, because
it's not aimed at the same ecological niche as any other language. On
the other hand, you can compare it with lots of things because Perl
borrows so heavily from Unix tradition. It's not for nothing that
one of the things Perl stands for is "Pathologically Eclectic Rubbish
Lister". Perl is, in fact, a kitchen-sink language. On purpose.

I agree that there's some "hype" out there for Perl (some of it even put
there by me), but you don't expect the salesman to tell you what's wrong
with his product, do you? Even if he knows what's wrong with it.

Anyway, taking off the sales hat (or is that the drug pusher hat?), and
putting on the C.S. hat...

The basic problem with Perl is that it's not about complex data structures.
Just as spreadsheet programs take a single data structure and try to
cram the whole world into it, so too Perl takes a few simple data structures
and drives them into the ground. This is both a strength and a weakness,
depending on the complexity and structure of the problem. Perl grew out
of text processing, and to the extent that your problem matches that kind
of a problem, it maps easily onto Perl. You can write object-oriented
programs in Perl, but you'll find that the methods of your objects are pieces
of program, or names of subroutines. Likewise you can program recursive
data structures, but you'll find that you're using symbolic pointers rather
than numeric. In some ways, this is as it should be. But it does take a
hit on performance.
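
A sketch of what a symbolic pointer looks like in practice (the node
names and values below are invented): the "pointer" is just a string
holding part of another variable's name, followed with eval.

```perl
# A two-node linked list built on symbolic "pointers": each next-field
# holds the *name suffix* of the next node, not a machine address.
$val_a = "first";  $next_a = "b";
$val_b = "second"; $next_b = "";

# Walk the list by constructing variable names from strings.
@out = ();
for ($node = "a"; $node ne ""; ) {
    push(@out, eval "\$val_$node");
    $node = eval "\$next_$node";
}
print join(" -> ", @out), "\n";
```

Every hop costs an eval and a symbol-table lookup, which is the
performance hit mentioned above.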

The basic underlying fault of Perl is that there isn't a really good way
of building composite structures, or of making one variable refer to a
piece of another variable, without giving an operational definition of it.

The basic composite type of Perl is simply a list of values. Any other
composite value has to be translated into and out of this format. On the
other hand, the only simple type is the scalar variable, so any objects
you want to remember have to be encoded as strings.

There are two primary ways of mapping between lists and strings--you can use
pack/unpack to translate a structure with fixed offsets, or you can
use split/join to translate a variable-length, field-separated string.
The pack/unpack pair has the advantage that it uses the same format to do
either, and that format is simply a string, so in a sense it's a kind of
first-class type. Except that the names by which you will refer to the
fields aren't part of the type, just the positions in the list. And the
fact that you have to explicitly do the pack/unpack or split/join.
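
The split/join half of that mapping, sketched with an invented
colon-separated record:

```perl
# A variable-length, field-separated record: split/join territory.
$line = "libes:NIST:cme-durer";

@fields = split(/:/, $line);    # string -> list of values
$site = $fields[1];             # fields are addressed by position

# join() is the exact inverse, given the same separator.
$rebuilt = join(":", @fields);
print "$site\n";
```

As with pack/unpack, the fields have positions but no names, and the
translation has to be done explicitly at every boundary.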

: Don't get me wrong. I've been using Perl for a large project, and am
: impressed by its power. Yet there are problems as well. It's hard
: to put my finger on them, but there is circumstantial evidence.
:
: For instance, I've been using Perl for several months...every so often
: when I have a problem, I'll do my usual thing - cruise through
: neighboring offices asking for help. Well, I have been unable to find
: a single other person here at NIST using Perl. None of the system
: administrators here are interested in it, yet they are well-connected
: to the net, have the latest tools otherwise, and seem to know what
: they are doing.

This may partly be because of antipathy, and partly because of apathy.
But I think the main reason is simply that Perl isn't actually necessary.
After all, we got along without it fine for nearly 20 years. Perl has
never been intended as an exclusive thing--rather Perl is making a
statement about pluralism.

Pluralism is the opposite of, or at least somewhat antagonistic to,
minimalism. The minimalist says that there's one right way to do things.
And often there is, if you're only optimizing for one quality, such as
speed or memory usage. On the other hand, I don't believe in only
optimizing for speed, or memory, or readability, or writability, or
learnability.

One of the ideas in Perl is controlled growth. Unlike the language we
call Unix, which has experienced uncontrolled growth, to the extent that
we can't write a portable shell script without effort measured in megatonnage,
Perl is structured such that I can add new keywords without (normally)
impacting older scripts. I'm well aware that what may seem unimportant
today may be a required feature in 3 years. If such a feature is added to
Perl, at the minimum I want new scripts that use the feature to be able to
test the version and patchlevel of the current Perl to see which features
they can use. I was gratified the other day when somebody told me that
they brought some Perl version 1 scripts up under Perl version 3 and they
ran without a hitch, despite the major changes since then.

: Maybe they see Perl as just "yet another tool", and they know plenty
: already. Maybe they recognize it as "another goddamn language" that
: they will have to spend years mastering. It's hard to tell. ("I
: wasted all that time learning "sh", "csh", "ksh", "awk", C, and now
: you're telling me I should forget them and learn another?")

Since Perl is about pluralism, it's providing another way (possibly several
ways) to do the same thing. So in a sense, the time spent learning the
other things isn't wasted, because much of your experience comes right
across. There can, of course, be destructive interference as well.

In particular, I hope they see Perl as just "yet another tool". Or at
least, as a mechanism for producing more tools easily. I don't want Perl to
take over the world. Not in that sense.

: I get the feeling it is very hard to master Perl. Any time I read a
: posting in comp.lang.perl, someone will follow up with a shorter,
: more correct solution, and this iterates until finally Larry himself
: tops everyone else. Is Larry the only one who can write Perl well?

More often, I say, "Well, you could do it like this, or you could do it
like that, or you could do it this other way." Occasionally I say,
"I think this is the most efficient way" or "I think this is the
most succinct way", but anything that gets the job done before the
boss fires you is "correct". When I say "This is how I'd write it",
it's not meant to be a moral judgement--merely an aesthetic one.

When it comes to computers, we all grew up in the Depression. We're all
still stuffing bytes and cycles under the mattress. Computer Science
naturally started with the mathematicians, but it's becoming more
like one of the humanities. So there's a natural progression from
languages that are very concise to languages that are more expressive
but also more difficult to learn. I am writing in English because I
can express myself better in English than in other languages. It took
me a long time to learn English. English contains lots of redundancy,
and there are many ways to say a given thing, with various shades of
meaning. Many of them are "correct". I can be creative with English,
because I have the freedom to choose from more than a minimalistic set
of primitives.

I suspect that language design is heading this direction. In a sense,
the problem with Perl is not that it is too complicated or hard to learn,
but that perhaps it is not expressive enough for the effort you put into
learning it. Then again, maybe it is. Your call. Some people are
excited about Perl because, despite its obvious faults, it lets them
get creative.

: Do many people use more than 20% of the functions? It reminds me a
: lot of PL/1 in that there are just so many ways to do things. It's
: difficult to know, prima facie, what is the most efficient.

Efficient at what?

And do I use more than 20% of the words in English? Does it matter?

I think we should distinguish the number of syntactic structures from the
number of "words" that can fill the slots in those syntactic structures.
Perl has lots of "words" in that sense. But the syntax isn't all that
much more complicated than C, certainly not more than about twice as
complicated. I personally think C's syntax is a little too simple (heresy!)
for certain kinds of problems, and forces a lot of extra overloading onto
subroutine calls. Just look at the setjmp()/longjmp() fiasco, or the
various ways of doing multi-threading.

In many ways Perl is a simple language. Much simpler than the language
called Unix.

: Another difficulty that I have run into is that the manual is just too
: skimpy (and yet intimidating at the same time). I'm used to looking
: at source to resolve questions, but it is just too turgid and I don't
: have that much time. The first couple of times I used Perl, I was
: ready to deep six the whole thing because it was so hard finding
: anything in the manual.

Agreed, the manual is lousy. We're working on the book, and we hope
it won't be quite so lousy... :-)

The main problem with the manual is that it doesn't present Perl "small-end
first". You get this welter of detail that isn't important intermixed
with what you need to know to start out. That's what we're trying to
sort out in the book. And there will be LOTS of examples.

(However, I do have a complaint against people that don't know how to
use the / key on a manual page--presuming their pager knows about the
/ key. With many of the questions that people ask in comp.lang.perl,
I just search through the man page using the very keyword they used,
and find the thing right there in the manual. People really don't know
how to use computers yet. Sigh.)

: Another different kind of information that the manual doesn't provide
: at all is at what point Perl is inappropriate. On the scale of large
: code? Efficient code? Etc. It's really hard to know whether Perl is
: appropriate for a project before starting out and trying it.

Unfortunately, that's true of lots of things. I can tell you that if
you want lots of complicated data structures, you don't want to use
Perl. That's what C and C++ are for. There's a time for everything
under the sun.

On the other hand, Perl can deal with some amount of complexity. The reason
that Perl got the capability of dealing with binary data between version 2
and version 3 was not that I thought Perl would be a great language for
doing binary transmogrification (it isn't), but because there are a lot
of problems out there that are 95% text processing and 5% binary. Such
as setting up an Internet server. Adding the binary capability roughly
doubled the number of problems that Perl was useful for.

: I'm trying to be reasonable here. I'm not berating Perl because it
: has bugs, is still in development, or for some other irrelevant
: reason.
: I'm more interested in the essential nature of Perl.

I appreciate that. Much of the essential nature of Perl springs from
my particular biases, so I thought you'd be interested in some of the
philosophical and linguistic underpinnings as I see them. My training
is more linguistic than comp sci-ish, and it shows.

There are many things I'd do differently if I were designing Perl from
scratch. It would probably be a little more object oriented. Filehandles
and their associated magical variables would probably be abstract types
of some sort. I don't like the way the use of $`, $&, $' and $<digit>
impact the efficiency of the language. I'd probably consider some kind
of copy-on-write semantics like many versions of BASIC use. The subroutine
linkage is currently somewhat problematical in how efficiently it can
be implemented. And of course there are historical artifacts that wouldn't
be there.
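
For the record, these are the variables in question (the sample string
is invented): after every successful match Perl must save the prematch,
match, and postmatch text whether or not the script ever looks at them,
and that bookkeeping is the efficiency cost.

```perl
$_ = "hello world";
if (/wor/) {
    # $` is the text before the match, $& the match itself,
    # $' the text after it.
    print "pre=<$`> match=<$&> post=<$'>\n";
}
```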

: Rather than retorts to all these remarks and questions, I'm actually
: more interested in something that has actually been published. So if
: you could give me pointers, I'd appreciate it more.

I'm not aware of anything like that that has been published. Probably
no one who cares is willing to admit they know the language... :-)

: Oh, by the way, I'm looking for this information because I'm writing a
: section of a paper that necessarily demands a philosophical comparison
: between Perl and other tools. I'd like to do it in a balanced way.

Best of luck--I'd certainly like a copy. I might figure out how to do
it right.

Larry Wall
lw...@jpl-devvax.jpl.nasa.gov

Mark Lawrence

Jun 26, 1990, 3:03:09 PM
[I sent this via e-mail and then thought that the comments might be of
general interest]

Don,

I saw your post in comp.lang.perl and wanted to share our (admittedly
limited) experience with Perl.

Being fairly novice to UNIX (I'm the senior UNIX user in-house, having
used it since 1986; others are much less comfortable with it), basic
capabilities that experienced folks might take for granted (effective use
of regexps, awk, sed, sophisticated use of the shell and so forth) have
come very hard to us. Perl sort of tied everything together in one place,
gave all
these things a sense of cohesiveness, and now we understand a lot more
about the features we discover in awk, sed, shell and the like that Perl
obviously derived from. Incidentally, we use Perl to write a lot of the code
that makes up the core of an application that I'm the project manager for.
It involves data management (because the application deals with a lot of
data from various sources) and generating code to model structures,
initialize maps and so forth is a very straightforward job with perl (as it
probably would be with a combination of shell, awk and sed, but as I say --
it took perl to put it all together for us).

The documentation ain't great, but I found that a single serious
read-through of the notorious man page gave me enough to get going pretty
well. At present, I think it is weakest in the area of packages and how
to use them effectively. The reference cards that Vromans put together are
an invaluable help. Of course, Schwartz and Wall claim that a book is in
the works, and we'll probably purchase multiple copies when it becomes
available.

The text-oriented-ness of Perl seems really logical to us and having all the
capabilities in one tool seems like it should be a performance win.
Actually, the original reason I got interested in it was because awk didn't
have a debugger (except: bailing out at line n :-) and perl did.

In summary, our experience with Perl has been fairly positive. Obtuse code
*can* be written in Perl, but then, I've seen some obtuse shell/awk/sed
scripts, too. Certainly, Larry seems to be able to top anybody in terms of
reducing an algorithm to the tersest and most efficient set of statements,
but then, he wrote it. Doesn't bother me. I get done what needs to get done.
--
ma...@DRD.Com uunet!apctrc!drd!mark (Mark Lawrence) (918) 743-3013

Tom Christiansen

Jun 26, 1990, 11:30:21 PM

I suspect that most readers here have already read things I've posted
extolling the virtues of perl programming over shell programming, so
I'll try to skip such scintillating remarks. On the darker side, I
honestly do maintain that there are several areas in which perl is weak
and therefore a sub-optimal pick as the programming tool of choice.

Interacting with binary data is cumbersome and error-prone, albeit
feasible. I say cumbersome and error-prone because even if you set
things up to automagically rebuild the perl version of <sys/acct.h>
when the C version is updated (I do), you've probably got an $acct_t
variable somewhere to serve as the format for pack/unpack conversions,
and this WON'T get automagically rebuilt. So you lose and there's
nothing to warn you of this.

I'm not entirely convinced that socket-level networking is really most
appropriately done in perl, although I've written some programs of the
order of 500 lines that do appear easier in perl. There are no
facilities for RPC calls. I'm not sure there ought to be, either.

I don't know that I'd be thrilled to see Xlib built into perl, and
while I know Larry's adding curses, or at least providing the ability
to do so, I wonder how well this will work out. I'm concerned about
efficiency and ease of coding of these things. Will the ability to
patch in your own C functions cause people to turn from C in cases
where this is not honestly merited?

I also wonder how well perl scales to very large applications. My
largest single perl program (man) is itself a bit over 1300 lines long,
not a long program as programs go, but due to the frequency with which
it is run and the annoyance factor of having to wait a couple of seconds
for the parse to complete each time, I've undumped the script into an
a.out, at which point it does beat the C version of the man program (and
does a lot more, too.) But I'm sure there must be a point of diminishing
returns.

I've also had plenty of experiences with bugs, although to his credit I
must admit that Larry's been a lot more responsive in this arena than
any software vendor I've ever had dealings with, even though THEY were
getting paid for maintenance. Still, sometimes you encounter a nasty
bug and get a core dump or wrong answer and spend hours isolating it to
prove to yourself it's not your own fault. Sometimes even when I'm
convinced it's not, it really is, such as a sprintf() problem I had
with a %-1000s field or some such similar nonsense.

The bug that bites me worst right now is that sometimes in large
programs, function calls under the debugger mysteriously return 1
rather than the value they are supposed to return. This problem
evaporates when subjected to intense scrutiny: if run NOT under the
debugger, or reduced to a small test case, all works well.

One of the criticisms that one can make of perl is that it's prone to
obfuscation, even more so than C. The regular expressions can easily
become illegible, and with the ability to eval newly generated code on
the fly, all things are possible. Of course, much of the guilt lies
on the individual programmer for poor coding and commenting habits, but
nonetheless there seems something in the language that too easily
lends itself to obfuscation.

Don Libes, the original poster, mentions that most of what he's read
in magazines and on Usenet has been over-enthusiastic, with little
criticism to the contrary. Well, if you've read Kolstad's UNIX REVIEW
articles of the past three months (inspired/derived to a certain extent
from my USENIX tutorials), you'll see that Rob has in several places
been less than fawningly complimentary. He mentions that it's a
kitchen-sink language, perhaps a little feature heavy. He speaks of
the daunting, information-dense man "page". He asks how you are
supposed to just "know" that to access the aliases DBM database you
have to concatenate a null byte, as in $aliases{'kolstad'."\000"}.
(This latter actually makes sense when you figure it out, but I won't
try to explain it here.) So he's at least trying to acknowledge some
of the difficulties people may have with it.
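
The null-byte business is easy to demonstrate without a live aliases
database; the hash below just stands in for the DBM file, and the alias
value is made up. Sendmail, being a C program, stores each key with its
terminating NUL, so a lookup without the extra byte misses.

```perl
# Stand-in for the aliases DBM file; the key carries a trailing NUL
# because that's how the C code that wrote it terminates strings.
%aliases = ("kolstad\000" => "kolstad\@somewhere.edu");

$plain = defined($aliases{"kolstad"})          ? "hit" : "miss";
$nul   = defined($aliases{"kolstad" . "\000"}) ? "hit" : "miss";
print "$plain $nul\n";
```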

[Don Libes also ponders what "real computer scientists" have to say about
the language. Well, what's it take to be a "real computer scientist"? Do
O/S people count, or only language designers and compiler writers? Do you
need certain degrees, publications, or world renown?]

It's true that I was the first here to use perl; I grabbed it when the
first version came out. But unlike Don Libes's site, there are quite a lot
of people using perl here. Some of these are for projects purely in perl,
some are as auxiliary tools for major projects involving C and C++, while
others are for automated software test scripts or system administration
purposes. It was originally for purposes of system management that it was
first appreciated, but in the last year or so many others have embraced it
as well. I don't really know how many perl programmers we have here now:
it's well over a dozen, maybe two, and the number continues to grow weekly.

So in answer to Don's question, yes, I do think that other people than
Larry can program in perl. I might amend that to say that the answer is a
qualified yes. The qualification is that I don't believe anyone can
program quite so effectively in perl as can Larry. He of course
understands not just some but each and every one of the semi- and
undocumented nuances of the language. I think I'm pretty good at
programming in perl, but most of what I do still comes out looking
like C with embedded sed. Larry takes a problem, looks at it a different
way, and often comes up with something two orders of magnitude simpler and
faster because of his intimate acquaintance with the language. It's only
now and then that I come up with something that doesn't look very C-like,
as in:
next if $sections && !grep($mandir =~ /man$_/, @sections);
and even then I feel somewhat guilty about it. :-)
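[Ed. note: to unpack that one-liner for the non-Perlers reading along,
here is a toy recreation with invented data: keep a man directory only
when one of the requested sections matches its name.]

```perl
# grep() used in a boolean context: does any requested section
# number appear at the end of this directory's name?
@sections = (1, 8);
@mandirs  = ('/usr/man/man1', '/usr/man/man3', '/usr/man/man8');
foreach $mandir (@mandirs) {
    next if @sections && !grep($mandir =~ /man$_$/, @sections);
    push(@keep, $mandir);
}
print join(' ', @keep), "\n";   # /usr/man/man1 /usr/man/man8
```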

I hope that most of the subtleties of the language will be outlined in
that fabled tome, the perl book he and Randal are working on. I'm
especially interested in matters of efficiency and optimization. Larry
often writes things with big multi-line evals, and I'd like to have a
better grasp on why this is so often so important for getting the promised
'faster-than-sed' performance. I think that this book has the potential
for making perl more accessible to the general public.

One final concern still makes me wonder, and it is not a new one: just
where is this thing called perl going? Toward what is it evolving? Will it
reach a point in its evolution when it is "done"? I hope so, but let it
not be at the hands of some maiming standards committee. Let it be the
handiwork of just one craftsman, one vision.

I'd like to be fair and optimistic without an undue quantity of zeal
fueling my discussions. I, too, am very interested to hear what others
who've used this tool long enough to have a balanced view of it have to
say. I've heard, and myself written, plenty of the good, and I, too,
would appreciate hearing the darker experiences people have had about it.

There is no ultimate answer to anything, let alone programming. But for
what it was designed for, perl is a refreshing and pleasant change of
pace. I'm reminded of how very painful it was, around a decade ago on a
little Z-80 running CP/M with only an assembler, to generate any program
at all. When I finally got a C compiler, it was such a refreshing
pleasure that I cranked out a new tool on nearly a daily basis. (Of
course, some may argue that the pleasure was like that of finally
stopping banging your head against the wall. :-)

I will dare to suggest that some of the bad experiences people may have
had with perl stem from trying to use the wrong tool for the job, but I
don't know that for sure. All I know is that for much of the quotidian
toil that faces the tool builder and the system administrator, who often
have to whip together a passably functioning piece of software in nothing
at all resembling the normal, well-deliberated process of planned software
development, perl is a true blessing. It is in my sincere and considered
opinion the most significant piece of general-purpose software to hit the
software community since awk, and in that respect far exceeds awk's humble
ambitions.

--tom
--

Tom Christiansen {uunet,uiucdcs,sun}!convex!tchrist
Convex Computer Corporation tch...@convex.COM
"EMACS belongs in <sys/errno.h>: Editor too big!"

Randal Schwartz
Jun 27, 1990, 1:14:49 PM

In article <103...@convex.convex.com>, tchrist@convex (Tom Christiansen) writes:
| I hope that most of the subtleties of the language will be outlined in
| that fabled tome, the perl book he and Randal are working on.

I know that from our present work on the book, what you will probably
see is 250 pages in the tone of the 67 page manpage, so count on about
4 times the number of examples, and a few larger programs documented,
along with some "philosophy" and "religion" regarding programming in
Perl effectively. To get everything you ask for, and to document the
features that Larry will most certainly add while the book is at the
printer, you will probably have to get "The Book, version 2.0."
Sorry... it's the nature of the biz.

We are definitely aiming to capture as much of the incremental wisdom
of comp.lang.perl, my coding, and Larry's intimate knowledge of Perl
as we can, but it is really difficult to document a moving target.

Buy the book! Yell at us for not having any examples of
"Perl-assisted honeydanber UUCP management" or "how to eval an
arbitrary expression inside a quoted string" or "when do I use $\"!
But 250 pages is certainly better than 67 pages. If you are a regular
reader of comp.lang.perl, don't expect any great revelations... just a
broadening of your understanding.

Just another Perl-book hacker,
--
/=Randal L. Schwartz, Stonehenge Consulting Services (503)777-0095 ==========\
| on contract to Intel's iWarp project, Beaverton, Oregon, USA, Sol III |
| mer...@iwarp.intel.com ...!any-MX-mailer-like-uunet!iwarp.intel.com!merlyn |
\=Cute Quote: "Welcome to Portland, Oregon, home of the California Raisins!"=/

John Nall
Jun 27, 1990, 2:02:18 PM

In article <1990Jun27....@iwarp.intel.com> mer...@iwarp.intel.com (Randal Schwartz) writes:
>...

>Buy the book! Yell at us for not having any examples of
>"Perl-assisted honeydanber UUCP management" or "how to eval an
>arbitrary expression inside a quoted string" or "when do I use $\"!
>But 250 pages is certainly better than 67 pages. If you are a regular
>reader of comp.lang.perl, don't expect any great revelations... just a
>broadening of your understanding.
>
>Just another Perl-book hacker,

So SELL the book already! (Know why the Perl programmer got divorced? He
never did anything - just told his wife how great it was GONNA be!!)

Oh...all of the above is covered by :-)

Just another waiting-for-godot book purchaser.

--
John W. Nall | Supercomputation Computations Research Institute
na...@sun8.scri.fsu.edu | Florida State University, Tallahassee, FL 32306
"They said it couldn't be done/they said nobody could do it/
But he tried the thing that couldn't be done!/He tried - and he couldn't do it"

Marc Evans
Jun 27, 1990, 1:23:15 PM

In article <15...@bfmny0.BFM.COM>, tn...@bfmny0.BFM.COM (Tom Neff) writes:
|> If a few vendors started shipping Perl binaries with their
|> OS releases, it'd become a standard in months..

You may be glad to know that I am doing all that I can to get DEC to provide
perl on the unsupported tape that comes with ULTRIX. This tape has historically
contained many publicly available programs (sources and/or binaries), including
GNU stuff and even Larrys' rn. I'll let you know when everthing is in place...

- Marc
===========================================================================
Marc Evans - WB1GRH - ev...@decvax.DEC.COM | Synergytics (603)635-8876
Unix and X Software Contractor | 21 Hinds Ln, Pelham, NH 03076
===========================================================================

Marc Evans
Jun 27, 1990, 1:37:07 PM

In article <84...@jpl-devvax.JPL.NASA.GOV>, lw...@jpl-devvax.JPL.NASA.GOV

(Larry Wall) writes:
|> (However, I do have a complaint against people that don't know how to
|> use the / key on a manual page--presuming their pager knows about the
|> / key. With many of the questions that people ask in comp.lang.perl,
|> I just search through the man page using the very keyword they used,
|> and find the thing right there in the manual. People really don't know
|> how to use computers yet. Sigh.)

There is a midnight effort inside DEC ULTRIXland to convert the manual page
for perl to DEC's bookreader format (kind of a hypertext reader with lots
of cross referencing). The / and ? mechanisms of more/less are great, but
hypermedia is a whole lot quicker (IMHO).

Tony Olekshy
Jun 28, 1990, 6:55:47 AM

In <STEF.90Ju...@zweig.sun>, st...@zweig.sun (Stephane Payrard) writes:
-
- Perl is becoming one of my favorite tools.
- Today, I will discuss some of its limitations.
-
- ... more precisely, Perl so far
- 1/ has no clean way of: allocating, deallocating,
- accessing data in memory (ie: no pointer)

Optimizations left as an exercise to the reader:

    $Memory{&UniqueKey($datum)} = $datum;
    push(@Pointers, &UniqueKey($datum));
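
[Ed. note: &UniqueKey is left undefined above; one possible definition,
purely a guess at what was intended, hands out a fresh integer per
distinct datum, which behaves enough like a pointer for this purpose:]

```perl
# Hypothetical &UniqueKey: a counter per distinct datum.  The integer
# it returns can be stored, passed around, and later "dereferenced"
# through %Memory, as in the fragment above.
sub UniqueKey {
    local($datum) = @_;
    $KeyOf{$datum} = ++$NextKey unless defined $KeyOf{$datum};
    return $KeyOf{$datum};
}

$datum = "some datum";
$Memory{&UniqueKey($datum)} = $datum;
push(@Pointers, &UniqueKey($datum));
print $Memory{$Pointers[0]}, "\n";   # some datum
```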

Stop thinking so low level; that's not what perl is for (IMHO)...

$C = "-?\\d+\\.?\\d*"; # Coordinate.
$P = "p:($C):($C):"; # Point.
$L = "l:($P)($P)b:($C):"; # Line.
$S = "s:(($L)+)"; # Line String.

sub GetLines    # Return list of spatial lines in $Polygon.
{
    local($Polygon) = $_[0]; local(@Out);
    die "Bad Polygon" unless $Polygon =~ s/^$S$/\1/;
    while ($Polygon =~ s/^($L)//) { push(@Out, $1); }
    return (@Out);
}

Now that's a data structure (and don't laugh, it works, with a
GKS child, and makes a great prototyping workbench -- so there ;-).
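[Ed. note: for anyone who wants to see that grammar go, here is a
self-contained run with sample data made up for the occasion: two
line segments packed into one "s:" line string.]

```perl
$C = "-?\\d+\\.?\\d*";      # Coordinate.
$P = "p:($C):($C):";        # Point.
$L = "l:($P)($P)b:($C):";   # Line.
$S = "s:(($L)+)";           # Line String.

sub GetLines {              # Return list of spatial lines in $_[0].
    local($Polygon) = $_[0]; local(@Out);
    die "Bad Polygon" unless $Polygon =~ s/^$S$/$1/;
    while ($Polygon =~ s/^($L)//) { push(@Out, $1); }
    return (@Out);
}

@lines = &GetLines("s:" . "l:p:0:0:p:1:0:b:0:" . "l:p:1:0:p:1:1:b:0:");
print scalar(@lines), "\n";   # 2
print $lines[0], "\n";        # l:p:0:0:p:1:0:b:0:
```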

--
Yours etc., Tony Olekshy. Internet: tony%o...@CS.UAlberta.CA
BITNET: tony%o...@UALTAMTS.BITNET
uucp: alberta!oha!tony

Leo de Wit
Jun 28, 1990, 5:21:32 PM

In article <84...@jpl-devvax.JPL.NASA.GOV> lw...@jpl-devvax.JPL.NASA.GOV (Larry Wall) writes:
[stuff left out...]

|(However, I do have a complaint against people that don't know how to
|use the / key on a manual page--presuming their pager knows about the
|/ key. With many of the questions that people ask in comp.lang.perl,
|I just search through the man page using the very keyword they used,
|and find the thing right there in the manual. People really don't know
|how to use computers yet. Sigh.)

Unfortunately, the very keyword you're looking for is often underlined,
or typed over multiple times, so it contains embedded backspaces (and
possible underscores or repetitions) in the manual text. I had a few
bad experiences with that lately. There really should be an option of
the pager to compare modulo underlining/fat printing.

Leo.

Peter da Silva
Jun 28, 1990, 12:13:05 PM

In article <103...@convex.convex.com> tch...@convex.COM (Tom Christiansen) writes:
> I don't know that I'd be thrilled to see Xlib built into perl, and
> while I know Larry's adding curses, or at least providing the ability
> to do so, I wonder how well this will work out. I'm concerned about
> efficiency and ease of coding of these things. Will the ability to
> patch in your own C functions cause people to turn from C in cases
> where this is not honestly merited?

One thing I have found useful is John Ousterhout's TCL: Tool Command
Language. It's designed to add an extension language to various tools
and (at least in the original, and in Karl Lehenbauer's AmigaTCL version)
uses an RPC mechanism to communicate between separate programs. This way
no individual program becomes a kitchen-sink.

I have published, to the net, a version of my "browse" directory browser
with a TCL interface. It's a nice clean language (sort of like a text-
oriented Lisp), and adding extensions to it is amazingly easy. Here's
a section of my browse.rc:

proc key_'K' {} {
    browse message {Edit key }
    set key [get key]
    set func key_[get keyname $key]
    set file [get env HOME]/.function
    if { [length [info procs $func] ] != 0 } {
        set def [list proc $func {} [info body $func]]
    } else {
        set def [list proc $func {} { ... }]
    }
    print $def\n $file
    browse message !vi $file
    browse shell [concat vi $file]
    source $file
}

proc key_'F' {} {
    set func [get response {Edit function }]
    if { [length $func chars] == 0 } {
        return
    }
    set file [get env HOME]/.function
    if { [length [info procs $func] ] != 0 } {
        set def [list proc $func {} [info body $func]]
    } else {
        set def [list proc $func {} { ... }]
    }
    print $def\n $file
    browse message !vi $file
    browse shell [concat vi $file]
    source $file
}

proc key_'d' {} {
    if { [string compare d [get key -d-]] == 0 } {
        set file [get file .]
        set prompt [concat Delete $file {? }]
        if { [string match {[yY]} [get key $prompt]] } {
            if { ![eval [concat browse delete $file]] } {
                perror
            }
        }
    }
}
--
Peter da Silva. `-_-'
+1 713 274 5180.
<pe...@ficc.ferranti.com>

Joe Garvey
Jun 28, 1990, 4:17:36 PM

In article <1990Jun27....@iwarp.intel.com>, mer...@iwarp.intel.com (Randal Schwartz) writes:
> I know that from our present work on the book, what you will probably
> see is 250 pages in the tone of the 67 page manpage, so count on about
> 4 times the number of examples, and a few larger programs documented,
> along with some "philosophy" and "religion" regarding programming in
> Perl effectively. To get everything you ask for, and to document the
> features that Larry will most certainly add while the book is at the
> printer, you will probably have to get "The Book, version 2.0."
> Sorry... it's the nature of the biz.

> Buy the book! Yell at us for not having any examples of
> "Perl-assisted honeydanber UUCP management" or "how to eval an
> arbitrary expression inside a quoted string" or "when do I use $\"!
> But 250 pages is certainly better than 67 pages. If you are a regular
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Not if it isn't well organized and thought out. Despite the terseness
of the man page (a time honored unix tradition :-)), it does a "complete"
job, and is well organized. No, there aren't scads of examples. No, it
doesn't start you off with just enough information to get you started
if you haven't used perl before... it dives right in. The book should
be targeted (in my opinion) to help those others who can't get started
in perl from the man page. I hope one of the objectives of the book is
to increase the numbers of perl users... to mainstream perl. I hope
that eventually Sun, HP, SCO, OSF will offer perl as part of their
standard unix.

Write an important program, give it away. Make money off the book, and
the movie. :-)

--

Joe Garvey UUCP: {apple,backbone}!versatc!mips!cmic!garvey
California Microwave Internet: garvey%cm...@mips.com
990 Almanor Ave HP Desk: garvey (cm...@mips.com) / hp1900/ux
Sunnyvale, Ca, 94086

Larry Wall
Jun 29, 1990, 2:11:37 AM

In article <3...@cmic.UUCP> gar...@cmic.UUCP (Joe Garvey) writes:
: Not if it isn't well organized and thought out. Despite the terseness

: of the man page (a time honored unix tradition :-)), it does a "complete"
: job, and is well organized. No, there aren't scads of examples. No, it
: doesn't start you off with just enough information to get you started
: if you haven't used perl before... it dives right in. The book should
: be targeted (in my opinion) to help those others who can't get started
: in perl from the man page. I hope one of the objectives of the book is
: to increase the numbers of perl users... to mainstream perl. I hope
: that eventually Sun, HP, SCO, OSF will offer perl as part of their
: standard unix.

That is exactly where the book is targeted. It starts off with very much
of a hold-your-hand tutorial, giving just what you need to know to
get started, and then has some longer examples of things you might do,
and then has everything else arranged so that you can look things up
as you need them.

Most of the examples will NOT be for system administration, since we
think that most system administrators got to be one because they knew
how to read manual pages already. Contrariwise, we think the typical
SA will be able to take the general examples and whip up something
of their own. After all, they'll do that anyway, can't stop 'em...

: Write an important program, give it away. Make money off the book, and
: the movie. :-)

I dunno, I dream in Perl sometimes...don't suppose that counts...

Larry

Dan Stromberg
Jun 29, 1990, 8:16:53 AM

In article <8...@ehviea.ine.philips.nl>, l...@ehviea.ine.philips.nl (Leo de Wit) writes:
> Unfortunately, the very keyword you're looking for is often underlined,
> or typed over multiple times, so it contains embedded backspaces (and
> possible underscores or repetitions) in the manual text. I had a few
> bad experiences with that lately. There really should be an option of
> the pager to compare modulo underlining/fat printing.
>
> Leo.

I don't know if this is possible on all systems, but:

man ls | col -b | pg

seems to work nicely for me on a couple different Sys V machines.

- Dan Stromberg ...!tut.cis.ohio-state.edu!uccba!ucqais!dstrombe

Craig Ruff
Jun 29, 1990, 10:31:21 AM

In article <SUA...@xds13.ferranti.com> pe...@ficc.ferranti.com (Peter da Silva) writes:
>One thing I have found useful is John Ousterhout's TCL: Tool Command
>Language. ...

I used TCL as part of a library on a project, and it turned out to be useful.
However, I would have liked to use a subroutine callable version of perl
instead! Then I wouldn't have had to add all sorts of additional functions
to TCL.
--
Craig Ruff NCAR cr...@ncar.ucar.edu
(303) 497-1211 P.O. Box 3000
Boulder, CO 80307

Randal Schwartz
Jun 29, 1990, 11:14:58 AM

In article <3...@cmic.UUCP>, garvey@cmic (Joe Garvey) writes:
| Write an important program, give it away. Make money off the book, and
| the movie. :-)

Shhh! You weren't supposed to tell them about the *movie* yet...

"Perl Three Dot Oh: The Syntax Strikes Back!"

Sigh.

Barton E. Schaefer
Jun 29, 1990, 1:57:02 PM

In article <8...@ehviea.ine.philips.nl> l...@ehviea.UUCP (Leo de Wit) writes:
} In article <84...@jpl-devvax.JPL.NASA.GOV> lw...@jpl-devvax.JPL.NASA.GOV (Larry Wall) writes:
} [stuff left out...]
} |With many of the questions that people ask in comp.lang.perl,
} |I just search through the man page using the very keyword they used,
} |and find the thing right there in the manual.
}
} Unfortunately, the very keyword you're looking for is often underlined,
} or typed over multiple times, so it contains embedded backspaces (and
} possible underscores or repetitions) in the manual text. I had a few
} bad experiences with that lately. There really should be an option of
} the pager to compare modulo underlining/fat printing.

I have taken to using

man perl | less -i

Searches in the "less" pager, at least in more recent versions, will match
underlined or overstruck text when the ignore-case option is used.
--
Bart Schaefer scha...@cse.ogi.edu

Leo de Wit
Jun 29, 1990, 2:06:18 PM


Yep, works here too.

Normally, using man(1) in the UCB universe (on a Pyramid), I get it for
nothing, because the output is piped through ul(1); lately I did
something like att man curses|more (without the 'col' or 'ul'), which
caused my problem. Well, I guess that's what you get if you want the
best of both worlds 8-).

Also thanks to John Merritt, who gave me the 'ul' suggestion (mail to
him bounced).

Leo.

Peter da Silva
Jul 1, 1990, 8:39:25 PM

In article <8...@ehviea.ine.philips.nl> l...@ehviea.UUCP (Leo de Wit) writes:
> Unfortunately, the very keyword you're looking for is often underlined,
> or typed over multiple times, so it contains embedded backspaces (and
> possible underscores or repetitions) in the manual text.

What I do is run it through a program I wrote called "strike" that converts
this: _^Hu_^Hn_^Hd_^He_^Hr_^Hl_^Hi_^Hn_^He into
this: _________^M underline. It's much nicer on the printer, and you can
do searches on it...
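
[Ed. note: not having Peter's actual "strike" source (which, as he says,
emits a separate underline row), here is a simpler variant in the spirit
of col -b that just strips the overstrike sequences so searches match:]

```perl
# Collapse nroff overstrikes: "_^Hc" (underlining) and "c^Hc"
# (boldfacing) both become the bare character c.
sub unstrike {
    local($_) = @_;
    1 while s/_\010(.)/$1/;      # underlining
    1 while s/(.)\010\1/$1/;     # boldfacing
    return $_;
}
print &unstrike("_\010u_\010n_\010d_\010e_\010r"), "\n";   # under
```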

Gary Benson
Jul 2, 1990, 6:12:31 PM

The person who originally posted the request for pros and cons about perl
asked for things that had been published, but this forum seems to be the
major place where perl topics are published!

My experiences and observations may be of interest, since I am not a
programmer, but I have had to learn a little about sed and awk and shell
programming, and now perl, out of necessity to support our group, a
Technical Publications department. I have written a number of filters that
clean up common problems in text files. For example, our current typesetting
equipment requires one space to separate sentences, not two, as most people
learned in typing class. So sed was the logical choice. I did a bit of that
kind of thing on and off -- it is not really my job, but the need was there,
and we did not have regular access to programming expertise. Occasionally we
have had to extract and rearrange some information from a database, and so
I've learned a little bit about awk. Then, we developed a system to
"centralize" our department archives of manuals in print, and so I learned
enough about shell programming to automate that process somewhat.
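[Ed. note: the sentence-spacing fix Gary describes takes only a line or
two of Perl; this is an invented sketch, not Fluke's actual filter:]

```perl
# Squeeze the two spaces typists put after sentence punctuation
# down to the single space the typesetter wants.
sub one_space {
    local($_) = @_;
    s/([.!?])  +/$1 /g;
    return $_;
}
print &one_space("First.  Second.  Third."), "\n";   # First. Second. Third.
```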

Then one day we decided that we had enough information to ask a "real
programmer" to attack a problem we had been facing for a long time. The work
of inserting typesetting codes into a document is tedious, boring, and
error-prone. When the people who had been doing typesetting were no longer
in the group, or were doing other work, we decided to write a software
requirement for a program that would scan a text file and based on
structural clues (like the word CAUTION centered on a line by itself), would
insert the appropriate typesetting codes for font changes, bolding,
centering, and so on.

Our original hope was that perhaps 90% of a document could be auto-coded by
such clues, leaving the remaining 10% for hand work, which we thought could
be done by someone without typesetting expertise. A programmer here at Fluke
by the name of Corey Satten had wanted to tackle this problem for some time,
but we had never organized the boring, tedious requirements. When we did,
back in April of 1988, Corey had been looking at perl and was itching to try
it out. Seeing our requirements, he determined that perl would be a good
language choice, and in less than a week, his perl script easily met our 90%
target.

When he left the company, it fell to me to maintain the script he left us;
I have been able to permute it into over a dozen variations, covering all
the differing formats we use for publishing our manuals. Along the way, I
have managed to add a bit more functionality, and only one hurdle keeps us
from 99% automatic coding of "clear text files". We are in the process of
purchasing an electronic page-makeup package and our perl script will become
the front end, interfacing on the input side with the files Technical Writers
provide, and it will generate SGML coding for the page makeup program.

We have come much further than we ever anticipated we would back in the
spring of 1988. I am certain that our success is in large measure due to
the many tools that perl brings under the same umbrella. Our only
alternative would be a shell script calling an awk script, a few lines of
sed, multiple intermediate temporary files, and all-in-all, a generally
ugly, hard-to-maintain, twistingly interactive group of programs.

From my perspective, perl is much easier to understand and learn than sed,
awk, grep, and shell programming. Because it is one program, there are no
syntax discontinuities that used to drive me up a wall (Hmmm, that sounds
odd for some reason). Sure, as others have pointed out, the syntax may be
weird and difficult at times, but at least it is cohesive. In fact, I have
even translated my old sed filters into perl, a painless process using s2p.
Last year, we were preparing to publish a manual that required a new feature
in the typesetting program, so we asked for help from an engineering group
who we knew would need the feature in one of their upcoming manuals.
They wrote an awk script, then used a2p to translate it to perl. It fit into
the larger perl script as a module in need of only minor tweaking. It worked
like a charm.

Our programs are 200 to 400 lines long: not daunting by any means, but
substantial, at least to me, whose shell scripts were usually about
one-tenth that length. It may be true that there are too many ways to do
things in perl, but for me, that is a decided plus, since I know that all I
really have to do is find ONE of them, and it is going to fly. Too often it
has seemed that there is only one way to accomplish a task, and that way was
hidden in some tricky corner.

The problem I have is simply in seeing an analog between what I am trying to
accomplish and any particular feature of the language. But this is a failing
in me, not the language. For example, I totally ignored the "system" command
for a long time because its name didn't seem to apply to anything I needed
to do ... after all, I wasn't concerned with system tasks, setuid and
password files and all that... Having heard that perl is quite well suited
to system administration kinds of jobs, I always just passed over "system",
assuming it applied only to that kind of programming. How wrong I was! After
I learned about the power of the system call, the world opened for me,
literally. I finally saw that this command made my entire repertoire of
system commands available, just as if I was popping out to the shell for a
while.
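
[Ed. note: the discovery Gary describes boils down to two mechanisms;
this is a generic illustration, not his script:]

```perl
# system() hands a command line to the shell; backticks run a command
# too, but capture its standard output instead of letting it print.
$status = system("true");             # status packed like wait(): 0 here
print "exit: ", $status >> 8, "\n";   # exit: 0
$listing = `ls /tmp`;                 # output captured into $listing
```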

The major drawback that non-programmers like myself face is that the manual
is written at a pretty high level; the descriptions and examples assume you
already know, for example, what a system call is. All indications are that
the book will be written at the same level, but will have many more examples
and will guide the reader up a less steep learning curve. I'm all for that -
I have read the entire manual 7 times, and there are still great gaping
areas that I read and remain clueless about. Again, this may not be a failing of
the manual, but of this reader.

Occasionally, the discussion in this group turns to "perl as a job
requirement". Today was the first day of work for a contract programmer whom
we hired specifically to program in perl. I have more studying to do before
I will be up to the task we have set him, but I know for certain that the
language will bend flexibly to his will. We interviewed over a dozen
programmers for this position, and the number one requirement was a
familiarity with perl.

I am also pleased to note that last week, our corporate Software Technology
Group announced that perl would become "officially supported software"
effective immediately. Of course, I am all for official support, because as
I tell my boss, I need all the gurus I can get! Perl now takes its place
here at Fluke in /usr/local, alongside awk and sed and grep; it is no longer
"user supported" in /usr/public. The significance of this comes through
clearly in a remark I heard only a few weeks ago, to the effect that
"software is not given corporate support just because it's cool". It is good
to learn that support is not withheld BECAUSE of coolness, either!

I realize that most of what I have written here is anecdotal and may not
provide much insight. Then again, all perspectives have merit and everything
I am learning is proving that Tom Christiansen is correct in characterizing
perl as a significant and important contribution.

--
Gary Benson -=[ S M I L E R ]=- -_-_-...@fluke.tc.com_-_-_-_-_-_-_-_-_-

Those who mourn for "USENET like it was" should remember the original design
estimates of maximum traffic volume: 2 articles/day. -Steven Bellovin

James L. Logan
Jul 3, 1990, 6:07:16 PM

In article <8...@ehviea.ine.philips.nl> l...@ehviea.UUCP (Leo de Wit) writes:
# In article <84...@jpl-devvax.JPL.NASA.GOV> lw...@jpl-devvax.JPL.NASA.GOV
# (Larry Wall) writes:
# | [ . . . ] People really don't know
# |how to use computers yet. Sigh.)
#
# [ . . . ] There really should be an option of
# the pager to compare modulo underlining/fat printing.

Use the public-domain pager called "less". It can be configured
to ignore underlining, boldfacing, etc. In fact, I use it to
scan the perl man pages myself.

Just another happy hacker,
-Jim
--
James Logan UUCP: uunet!inpnms!logan
Data General Telecommunications Inet: lo...@rockville.dg.com
2098 Gaither Road Phone: (301) 590-3198
Rockville, MD 20850

Joe Wells
Jul 5, 1990, 9:43:20 PM

In article <6...@inpnms.ROCKVILLE.DG.COM> lo...@rockville.dg.com (James L. Logan) writes:
> In article <8...@ehviea.ine.philips.nl> l...@ehviea.UUCP (Leo de Wit) writes:
> # In article <84...@jpl-devvax.JPL.NASA.GOV> lw...@jpl-devvax.JPL.NASA.GOV
> # (Larry Wall) writes:
> # | [ . . . ] People really don't know
> # |how to use computers yet. Sigh.)
> #
> # [ . . . ] There really should be an option of
> # the pager to compare modulo underlining/fat printing.
> 
> Use the public-domain pager called "less". It can be configured
> to ignore underlining, boldfacing, etc. In fact, I use it to
> scan the perl man pages myself.

I like to look at the man page from inside GNU Emacs (where I can use
find-tag to jump to the relevant source code with the touch of a key). So
I use the Emacs function nuke-nroff-bs to clean up the man page. I've
also got a version of nuke-nroff-bs that also correctly strips all types
of man page headers and footers, if anyone wants one.

On a separate issue, does anyone know where less version 123 is archived?
I have a copy I can email to people, but I'd prefer to refer people to a
convenient archive. I looked for one a few months ago, but I couldn't
find less version 123 anywhere.

--
Joe Wells <j...@uswest.com>

Peter da Silva
Jul 5, 1990, 5:32:37 PM

In article <78...@ncar.ucar.edu> cr...@handies.UCAR.EDU (Craig Ruff) writes:
> In article <SUA...@xds13.ferranti.com> pe...@ficc.ferranti.com (Peter da Silva) writes:
> >One thing I have found useful is John Ousterhout's TCL: Tool Command
> >Language. ...

> I used TCL as part of a library on a project, and it turned out to be useful.
> However, I would have liked to use a subroutine callable version of perl
> instead! Then I wouldn't have had to add all sorts of additional functions
> to TCL.

Yes, TCL is sort of short in the subroutines department, but I think it makes
a better extension language than, say, perl (or REXX, for that matter) because
it's such a clean language... like a cross between lisp and awk. This makes
it relatively easy to operate on programs as data... something I'd hate to
have to do with (say) an algol-like language.

I think I'd really prefer a postscript core to the language. Anyone know how
to get hold of the author of the Gosling postscript? He doesn't seem to be
the Emacs Gosling, and the address in the docco is defunct.
