
can lisp do what perl does easily?


; ; ; h e l m e r . . .

Mar 27, 2000, 3:00:00 AM
I have been slowly learning lisp over the past year and have had someone
mention to me that I should learn perl, for jobs etc.

If I learn lisp well, will I be able to do what people do with perl? I
know that we are not exactly comparing apples to apples since perl is a
'scripting' language. My heart is really into lisp; looking for lispers'
opinions.

--
; ; ; h e l m e r . . .


Sent via Deja.com http://www.deja.com/
Before you buy.

Joe Marshall

Mar 27, 2000, 3:00:00 AM
; ; ; h e l m e r . . . <assem...@t-three.com> writes:

> I have been slowly learning lisp over the past year and have had someone
> mention to me that I should learn perl, for jobs etc.
>
> If I learn lisp well, will I be able to do what people do with perl? I
> know that we are not exactly comparing apples to apples since perl is a
> 'scripting' language. My heart is really into lisp; looking for lispers'
> opinions.

Depends on what you want to do with your life. The largest
`advantage' that Perl has over Lisp is the huge number of CGI scripts
written in Perl. If you want to write and maintain scripts, it is
unlikely that you will find many written in Lisp, and if you write
scripts in Lisp, you may find it hard to get people to adopt them.

Of course, if you really want to make money scripting, then Visual
Basic is the way to go.

On the other hand, if you really like computers, and if you find Lisp
fun and entertaining, you will most likely find Perl (and VB) to be
rather dull, poorly thought out, and painful to use. You will find
that any program you write in Perl could have been written faster and
easier in Lisp, and if it had been, it would have actually worked.

But since you're already asking this in comp.lang.lisp, you already
know what you *want* to do, so just *do* it. Then figure out how to
get paid for it. You'll be happier.

--
~jrm


Espen Vestre

Mar 27, 2000, 3:00:00 AM
Joe Marshall <jmar...@alum.mit.edu> writes:

> Depends on what you want to do with your life. The largest
> `advantage' that Perl has over Lisp is the huge number of CGI scripts
> written in Perl. If you want to write and maintain scripts, it is
> unlikely that you will find many written in Lisp, and if you write
> scripts in Lisp, you may find it hard to get people to adopt them.

if you already know lisp (and thus already know at least one other
language), there's nothing wrong with learning perl, though. Perl is
pretty impressive in its text-processing speed, and is quite a useful
tool as long as you don't start to write real programs with it.

Ignore the members of the perl-worshipping community who might say
otherwise and let perl do what it's good at: Quick (& dirty) scripting.
Perl is a good substitute for awk, sed and sh, but not for lisp!

--
(espen)

Matt Curtin

Mar 27, 2000, 3:00:00 AM
>>>>> On Mon, 27 Mar 2000 03:41:04 GMT,
>>>>> ; ; ; h e l m e r . . . <assem...@t-three.com> said:

h> If I learn lisp well, will I be able to do what people do with
h> perl? I know that we are not exactly comparing apples to apples
h> since perl is a 'scripting' language.

That Perl has a reputation as a "scripting" language is irrelevant,
especially given the way that the Perl community defines "scripting":
essentially it's writing a program that you'll throw away. So you can
have C scripts and Perl programs. The issue is whether you're going
to put it into production or whether you're going to cobble the thing
together in your home directory, run it a time or two, and be done
with it.

The simple answer to your question is "yes". Common Lisp is a
very capable language. At the risk of being flamed by Perl's
detractors, I'll brave a pronouncement that Perl is also generally a
capable language.

Perl's primary advantage for many people is the huge library of
ready-to-use modules available from the Comprehensive Perl Archive
Network (CPAN). Those modules provide interfaces to just about every
sort of system you could ever imagine needing to talk to. Thus, in
many cases, the job of the Perl programmer is simply to work on The
Problem at hand, providing the logic and other glue to get everything
talking nicely.

Lisp also has a nice body of code available for inspection, but it
tends to be harder to find (because it isn't well centralized or
cataloged anywhere AFAIK), more academic, and otherwise focused on
areas about which people aren't presently highly excited. Add to that
the sorts of baggage that accompany many Lisp programs whose source is
available for inspection, and there are some serious burdens that
might provide some encouragement for new folks to look elsewhere.
Almost everything in CPAN is freely available, almost always under the
same conditions as Perl itself.

Some things that would help Lisp in its comparisons with Perl:
o A centralized archive of freely-usable code for doing real jobs,
especially things related to databases, various network protocols,
HTML generation and analysis, etc.
o A simple, standardized mechanism for handling GUIs portably. Tk is
the de facto standard for GUIs in Perl. It works reasonably well
and is highly portable. Unfortunately, a lot of people write code
that quickly turns into a mess when Tk is involved, and it tends to
be hard to debug. It'd be nice to learn something from these
lessons.
o Standardized support for text-whackage a la Perl's patterns (as
they're properly called, since they're actually regular expression
extensions). I know that CLISP offers regexp support, but not all
Lisps do.
o Easier deployment. Some Lisps have solved this problem, but I
haven't seen any free Lisps that will let you build a system that
can be readily installed somewhere else. This is also a problem
for Perl, but Perl is pretty much everywhere that Unix is.

People who will defend Lisp on many of these counts will say "get a
good commercial environment". That's fine and dandy, but if I, as a
fan of Lisp, am not willing to plonk down some ridiculously huge
amount of money on a "good commercial environment", why should we
expect anyone to do that? I don't even know how much money we're
talking about here; several months ago, I mailed Franz to ask about
pricing. I never heard anything more than an auto-ACK.

I'm willing to live without support. I think the last time I used
vendor support for anything was in 1994, when I was a system
administrator on an AIX system rebuilding a filesystem for the first
time. I can't remember ever using any vendor support any other time.
CMU CL, as it turns out, fulfills my needs very well. I have only a
few gripes about it:
o motifd dies a lot on Solaris (maybe elsewhere, too, I don't know).
o I can crash the Lisp by throwing incredibly huge numbers at it. It
goes off into foreign-function-land to handle the bignums, and
exits immediately.
o I don't seem to be able to compile the code into any sort of useful
form for people who don't have CMU CL.

I will not share in the Perl-bashing that many Lispers enjoy, as Perl
that cannot be read is almost always the fault of the programmer, not
of Perl itself. People who are not familiar with the "Unix tools"
find Perl's syntax strange and annoying. Understanding sed, awk,
troff, C, shell, and friends, I can tell you that I find Perl's syntax
to be quite intuitive. Usually.

Nevertheless, Perl does some things very poorly, some of these being
things that Lisp does very well:
o Getting along with programs written in other languages. It's
obviously no big deal to talk to other programs over an inet or
af_unix socket, but if you need to use a library that's implemented
in something like C, it can be nasty. XS isn't pretty. Getting
complex data types between the C library and Perl can be painful.
o Perl's threads are broken. Period. OTOH, I don't see much talk of
threads in Lisp, either.
o There isn't a good Perl compiler that will let you dump bytecode
("parse trees", you'll hear them called in Perl circles) or object
code. This forces the compilation step at startup time, thus
introducing some latency that can be annoying if you've got a very
big program.
o Perl cannot (easily) be used interactively. One fakes it with the
debugger. This is kind of annoying, as I like to write code by
testing code snippets interactively and then adding them to my
source file as I go.
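
To make that last point concrete, here is a minimal sketch of the
interactive style I mean, at a Common Lisp listener (the function and
the session are illustrative, not from any real program):

CL-USER> (defun strip-comment (line)
           ;; drop everything from the first #\; onward
           (subseq line 0 (or (position #\; line) (length line))))
STRIP-COMMENT
CL-USER> (strip-comment "x := 1 ; set x")
"x := 1 "

You grow the snippet at the prompt until it behaves, then paste it
into the source file.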

So whether Perl or Lisp will work better for you will depend on the
problem at hand and its criteria for success. If you know Lisp well,
you should be able to do essentially any job. If you know Perl well,
you should be able to do essentially any job. Each has its strengths
and weaknesses. It's your job as a programmer to use the strengths
and avoid the weaknesses.

--
Matt Curtin cmcu...@interhack.net http://www.interhack.net/people/cmcurtin/

Lieven Marchand

Mar 27, 2000, 3:00:00 AM
; ; ; h e l m e r . . . <assem...@t-three.com> writes:

> If I learn lisp well, will I be able to do what people do with perl? I
> know that we are not exactly comparing apples to apples since perl is a
> 'scripting' language. My heart is really into lisp; looking for lispers'
> opinions.

You could learn both.

One of the advantages of perl is the huge CPAN archive that has a
solution to almost any problem of the kind "I need to talk to service
<foo> with protocol <bar>". Especially for one-of-a-kind tasks I often
use a small perl script to get the data and write them out in a
lisp-friendly format, and then I use CL to do the rest of the work. This
works especially well for proof-of-concept things. When I find I use
one of these things regularly, I can always write an interface module
in CL.
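
The CL side of that workflow can be as simple as this (a sketch; the
names are made up, and it assumes the perl script printed one readable
list per record):

(defun read-records (pathname)
  "Collect every s-expression in the file named PATHNAME."
  (with-open-file (in pathname)
    (loop for record = (read in nil nil)  ; second NIL is returned at EOF
          while record
          collect record)))

;; (read-records "data.lisp") => (("alice" 42) ("bob" 17))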

--
Lieven Marchand <m...@bewoner.dma.be>
If there are aliens, they play Go. -- Lasker

Tom Breton

Mar 27, 2000, 3:00:00 AM
; ; ; h e l m e r . . . <assem...@t-three.com> writes:

> I have been slowly learning lisp over the past year and have had someone
> mention to me that I should learn perl, for jobs etc.
>
> If I learn lisp well, will I be able to do what people do with perl?

They each can do anything the other one does. It's just a question of
what sort of hoops you have to jump thru to make them do it.

WRT elegance, ease of use, and niceness to work with, ISTM Lisp is the
clear winner. From personal experience, when I write Perl, I am
constantly reminded of how much easier it would be to write Lisp.

WRT code base and cachet among employers, Perl is the winner, and let
me say that that's a little unfortunate.

WRT speed, it depends completely on which implementation you're using,
but that said, it's easier to get a fast, slim Perl than a fast, slim
Lisp IMO.

--
Tom Breton, http://world.std.com/~tob
Not using "gh" since 1997. http://world.std.com/~tob/ugh-free.html
Rethink some Lisp features, http://world.std.com/~tob/rethink-lisp/index.html

Tom Breton

Mar 27, 2000, 3:00:00 AM
Matt Curtin <cmcu...@interhack.net> writes:

> >>>>> On Mon, 27 Mar 2000 03:41:04 GMT,
> >>>>> ; ; ; h e l m e r . . . <assem...@t-three.com> said:
>
> h> If I learn lisp well, will I be able to do what people do with
> h> perl? I know that we are not exactly comparing apples to apples
> h> since perl is a 'scripting' language.

I agree very much with what Matt said. There are just a few tiny things.


>
> Some things that would help Lisp in its comparisons with Perl:
> o A centralized archive of freely-usable code for doing real jobs,
> especially things related to databases, various network protocols,
> HTML generation and analysis, etc.

Definitely. There are some beginnings of that, eg CLOCC and the
Codex.

[snip more good ones]

> o Standardized support for text-whackage a la Perl's patterns (as
> they're properly called, since they're actually regular expression
> extensions). I know that CLISP offers regexp support, but not all
> Lisps do.

Now here's something I hope Lisp doesn't acquire. Too often, eg with
format strings, file paths, the loop facility, Lisp has forgotten its
own elegance and grabbed at some byzantine, syntax-heavy notation just
because other languages used it.

Lisp has (not part of the X3J13 standard) an alternative to regexes:
s-regexes, where instead of "^abc$" it's (sequence bol "abc" eol). I
know which one I prefer to work with.
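
To see how little machinery the s-expression form needs, here is a toy
translator from it down to string notation (it handles only the
operators in the example above; real s-regex implementations are much
richer):

(defun compile-sregex (form)
  "Translate a toy s-regex into the equivalent string notation."
  (etypecase form
    (string form)                     ; literal text
    (symbol (ecase form
              (bol "^")               ; beginning of line
              (eol "$")))             ; end of line
    (cons (ecase (first form)
            (sequence
             (apply #'concatenate 'string
                    (mapcar #'compile-sregex (rest form))))))))

;; (compile-sregex '(sequence bol "abc" eol)) => "^abc$"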


> People who will defend Lisp on many of these counts will say "get a
> good commercial environment". That's fine and dandy, but if I, as a
> fan of Lisp, am not willing to plonk down some ridiculously huge
> amount of money on a "good commercial environment", why should we
> expect anyone to do that? I don't even know how much money we're
> talking about here; several months ago, I mailed Franz to ask about
> pricing. I never heard anything more than an auto-ACK.

On this ng, you're sure to get a few catcalls over that, so let me
pre-emptively say, you're absolutely rite.


> I will not share in the Perl-bashing that many Lispers enjoy, as Perl
> that cannot be read is almost always the fault of the programmer, not
> of Perl itself. People who are not familiar with the "Unix tools"
> find Perl's syntax strange and annoying. Understanding sed, awk,
> troff, C, shell, and friends, I can tell you that I find Perl's syntax
> to be quite intuitive. Usually.

Well, intuitive because familiar. But I'd keep the word "annoying".
Figuring out how many $'s are needed, whether you should change $ to
@, whether lists have flattened sublists, and so forth, sorry, all
that's a royal pain even if it saves typing a few characters. IMO of
course.

> o Perl cannot (easily) be used interactively. One fakes it with the
> debugger. This is kind of annoying, as I like to write code by
> testing code snippets interactively and then adding them to my
> source file as I go.

Ouch. I remember that now.

> So whether Perl or Lisp will work better for you will depend on the
> problem at hand and its criteria for success. If you know Lisp well,
> you should be able to do essentially any job. If you know Perl well,
> you should be able to do essentially any job. Each has its strengths
> and weaknesses. It's your job as a programmer to use the strengths
> and avoid the weaknesses.

--

Erik Winkels

Mar 28, 2000, 3:00:00 AM
Matt Curtin <cmcu...@interhack.net> writes:

> Perl's primary advantage for many people is the huge library of
> ready-to-use modules available from the Comprehensive Perl Archive
> Network (CPAN). [...]
>
> Lisp also has a nice body of code available for inspection, but it
> tends to be harder to find (because it isn't well centralized or
> cataloged anywhere AFAIK),

There was a thread a few months ago about a CPAN for Lisp (CLAN).

See: http://www.deja.com/[ST_rn=ps]/viewthread.xp?AN=539647115&search=thread&svcclass=dnyr&ST=PS&CONTEXT=954196843.318832649&HIT_CONTEXT=954196843.318832649&HIT_NUM=0&recnum=%3c3811c3dd.6159887@judy%3e%231/1&group=comp.lang.lisp&frpage=getdoc.xp&back=clarinet

And especially: http://www.deja.com/[ST_rn=ps]/getdoc.xp?AN=539647115&search=thread&CONTEXT=954196843.318832649&HIT_CONTEXT=954196843.318832649&hitnum=1

Does anyone know whether anything came from that? Are there people
behind the scenes working on it?

Erik Naggum

Mar 28, 2000, 3:00:00 AM
* ; ; ; h e l m e r . . . <assem...@t-three.com>

| I have been slowly learning lisp over the past year and have had someone
| mention to me that I should learn perl, for jobs etc.

the unemployed programmer had a problem. "I know", said the programmer,
"I'll just learn perl." the unemployed programmer now had two problems.

having a job is not unimportant, but if knowing perl is a requirement for
a particular job, consider another one before taking that one. this is
true even if you know perl very well. life is too long to be an expert
at harmful things, including such evilness as C++ and perl.

I once studied perl enough to read perl code and spot bugs in other
people's programs (but later gained the wisdom that this was not an
accomplishment -- spotting a bug in a perl program is like spotting the
dog that brought the fleas), but I don't write in it and I don't ever
plan to use it for anything (part of my new position is quality assurance
for the systems I'm inheriting responsibility for, and part of any
serious QA is removing perl code the same way you go over a dilapidated
building you inherit to remove chewing gum and duct tape and fix whatever
was kept together for real). also, very much unlike any other language I
have ever studied, perl has failed to stick to memory, a phenomenon that
has actually puzzled me, but I guess there are some things that are so
gross you just have to forget, or it'll destroy something within you. perl
is the first such thing I have known.

this is your brain. this is perl. this is your brain on perl. any
questions?

| If I learn lisp well, will I be able to do what people do with perl?

no, you won't. however, there is a very important clue to be had from
this: what people do with perl is wrong. perl makes a whole lot of tasks
easy to do, but if you look closely, you will see that those tasks are
fundamentally braindamaged, and should never have been initiated. perl
is perhaps the best example I can think of for a theory I have on the
ills of optimization and the design choices people make. most people,
when faced with a problem, will not investigate the cause of the problem,
but will instead want to solve it because the problem is actually in the
way of something more important than figuring out why something suddenly
got in their way out of nowhere. if you are a programmer, you may reach
for perl at this point, and perl can remove your problem. happy, you go
on, but find another problem blocking your way, requiring more perl --
the perl programmer who veers off the road into the forest will get out
of his car and cut down each and every tree that blocks his progress,
then drive a few meters and repeat the whole process. whether he gets
where he wanted to go or not is immaterial -- a perl programmer will
happily keep moving forward and look busy. getting a perl programmer
back on the road is a managerial responsibility, and it can be very hard:
the perl programmer is very good at solving his own problems and at
assuring you that he's on the right track -- he looks like any other programmer
who is stuck, and this happens to all of us, but the perl programmer is
very different in one crucial capacity: the tool is causing the problems,
and unlike other programmers who discover the cause of the problem sooner
or later and try something else, perl is rewarding the programmer with a
very strong sense of control and accomplishment that a perl programmer
does _not_ try something else.

it's not that perl programmers are idiots, it's that the language rewards
idiotic behavior in a way that no other language or tool has ever done,
and on top of it, it punishes conscientiousness and quality craftsmanship
-- put simply: you can commit any dirty hack in a few minutes in perl,
but you can't write an elegant, maintainable program that becomes an
asset to both you and your employer; you can make something work, but you
can't really figure out its complete set of failure modes and conditions
of failure. (how do you tell when a regexp has a false positive match?)

a person's behavior is shaped by the rewards and the punishment he has
received while not thinking about his own actions. few people habitually
engage in the introspection necessary to break out of this "social
programming" or decide to ignore the signals that other people send them,
so this is a powerful mechanism for programming the unthinking masses.
rewarding idiotic behavior and punishing smart behavior effectively
brainwashes people, destroying their value systems and their trust in
their own understanding and appreciation of the world they live in, but
if you're very good at it, you can create a new world for them in which
all of this makes sense.

to really destroy any useful concepts of how software is supposed to work
together, for instance, the best possible way is to ridicule the simple
and straightforward concepts inherent in Lisp's read and print syntax,
then ridicule the overly complex and entangled concepts in stuff like IDL
and CORBA, which does basically the same thing as Lisp's simple syntax,
and then hail the randomness of various programs that output junk data,
because you can easily massage the data into the randomness that some
other program accepts as input. instead of having syntax-driven data
sharing between programs, you have code-driven glue between programs, and
because you are a brainwashed perl idiot, this is an improvement, mostly to
your job security. and once you start down this path, every move forward
is a lot cheaper than any actual improvements to the system that would
_obviate_ the need for more glue code. however, if you never start down
this path, you have a chance of making relevant and important changes.

that's why, if you learn Lisp and become a good programmer, you will
never want to do what people do with perl. as such a good programmer,
one in five managers will notice that you solve problems differently and
will want to hire you to clean up after the perl programmers he used to
be mortally afraid of firing, and you can push any language you want at
this point -- just make sure you can find more programmers he can hire
who know it and always keep your code well-documented and readable -- you
do _not_ want to make any other programming language appear as random as
perl to any manager. perl is already a "necessary evil", but still evil,
while other languages don't have the "necessary" label, so if you screw
up, it will hurt other programmers, too. this problem can always be
minimized by simply being good at what you do. few perl programmers are
actually good at anything but getting perl to solve their _immediate_
problems, so you have an incredible advantage if you're a good Lisper.

I'll concede, however, that it is very important to be able to understand
what perl programmers do. if you don't understand what they are talking
about, you won't understand what they are actually trying to accomplish
with all the incredibly braindamaged uses of hash tables and syntactic
sadomasochism, and you won't be able to see through their charades and
"just one more hack, and I'll be there" lies.

here's a simple rule to use on perl programmers. if a solution is clean
and complete, it will immediately look like a tremendous amount of work
to a perl programmer, which it will: writing code that does the right
thing in perl is incredibly arduous. this is the only positive use for
perl programmers. like a really bad horror movie, where the evil guys
have no redeeming qualities whatsoever and will hate anything beautiful
or good, a true perl programmer will have a strong emotional reaction to
any really good solution: there's no way he can improve on it with his
perl hackery, and the very existence of his expertise is threatened.

then there are good programmers who know and use perl for some tasks, but
more than anything else know when _not_ to use it. they are _very_ rare.

#:Erik

Pierre R. Mai

Mar 28, 2000, 3:00:00 AM
Tom Breton <t...@world.std.com> writes:

> WRT speed, it depends completely on which implementation you're using,
> but that said, it's easier to get a fast, slim Perl than a fast, slim
> Lisp IMO.

Is it? I'd gamble that CLISP and ECL/ECLS are probably both
faster and slimmer than a current 5.00x Perl. OTOH I haven't done any
benchmarks to back-up that claim (excluding that silly micro-bench on
start-up times), nor am I likely to do this in the near future, since
I'm not currently in the scripting language business...

Regs, Pierre.

--
Pierre Mai <pm...@acm.org> PGP and GPG keys at your nearest Keyserver
"One smaller motivation which, in part, stems from altruism is Microsoft-
bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]

Tom Breton

Mar 28, 2000, 3:00:00 AM
pm...@acm.org (Pierre R. Mai) writes:

> Tom Breton <t...@world.std.com> writes:
>
> > WRT speed, it depends completely on which implementation you're using,
> > but that said, it's easier to get a fast, slim Perl than a fast, slim
> > Lisp IMO.
>
> Is it? I'd gamble that CLISP and ECL/ECLS are probably both
> faster and slimmer than a current 5.00x Perl.

I've used both CLISP and Perl 5.00{4,5}, and it didn't seem that way
to me. Granted, I used them for very different projects, so Perl may
have had an advantage. The Perl system was much easier to find and
install. RedHat bundled it, so I basically just pushed a button.
CLISP has rpms too but they were so nonstandard I ended up just
building it from source.

Christopher Browne

Mar 28, 2000, 3:00:00 AM
Centuries ago, Nostradamus foresaw a time when Erik Winkels would say:

>Does anyone know whether anything came from that? Are there people
>behind the scenes working on it?

Take a look at <http://sourceforge.net>, and search for Common Lisp.

Several projects should pop up...
--
Rules of the Evil Overlord #77. "I will design fortress hallways with
no alcoves or protruding structural supports which intruders could use
for cover in a firefight."
<http://www.eviloverlord.com/lists/overlord.html>
cbbr...@hex.net - - <http://www.hex.net/~cbbrowne/lisp.html>

Andrew K. Wolven

Mar 28, 2000, 3:00:00 AM

Tom Breton wrote:

> Now here's something I hope Lisp doesn't acquire. Too often, eg with
> format strings, file paths, the loop facility, Lisp has forgotten its
> own elegance and grabbed at some byzantine, syntax-heavy notation just
> because other languages used it.

True, format strings suck. (sheesh, might as well use shtml or something)
File paths seem to be an endless pain in the ass.
But loop is cool. I have gotten work done with it.

loopy,
AKW


William Deakin

Mar 28, 2000, 3:00:00 AM
Tom Breton wrote:

> The Perl system was much easier to find and install. RedHat bundled it, so I
> basically just pushed a button. CLISP has rpms too but they were so nonstandard
> I ended up just building it from source.

Do you think that this is a serious comparison between lisp and perl?

For example: Under Linux the Debian clisp and cmucl installations are as seamless
as the perl installation. Also, under Solaris 2.6 I experienced problems
installing perl whilst I found the installation of cmucl and clisp binaries to be
straightforward.

Best Regards,

:) will


Tim Bradshaw

Mar 28, 2000, 3:00:00 AM
* William Deakin wrote:

> For example: Under Linux the Debian clisp and cmucl installations are as seamless
> as the perl installation. Also, under Solaris 2.6 I experienced problems
> installing perl whilst I found the installation of cmucl and clisp binaries to be
> straightforward.

Too right. I upgraded from perl 5.00mumble to perl 5.00(incf mumble),
half our perl stuff we got from CPAN just broke, and I haven't yet
found time to mend it. Version hell.

--tim


Espen Vestre

Mar 28, 2000, 3:00:00 AM
Tim Bradshaw <t...@cley.com> writes:

> Too right. I upgraded from perl 5.00mumble to perl 5.00(incf mumble),
> half our perl stuff we got from CPAN just broke, and I haven't yet
> found time to mend it. Version hell.

How true. I think there are quite a lot of large websites that haven't
yet upgraded from 5.003, since the entire *language* changed in the
.004 and .005 versions. The result: loads of cgi-scripts that start with
a #!/usr/bin/perl5.004 (or .005). Horrifying.

--
(espen)

Barry Margolin

Mar 28, 2000, 3:00:00 AM
In article <31631935...@naggum.no>, Erik Naggum <er...@naggum.no> wrote:
>* ; ; ; h e l m e r . . . <assem...@t-three.com>
>| I have been slowly learning lisp over the past year and have had someone
>| mention to me that I should learn perl, for jobs etc.
>
> the unemployed programmer had a problem. "I know", said the programmer,
> "I'll just learn perl." the unemployed programmer now had two problems.
>
> having a job is not unimportant, but if knowing perl is a requirement for
> a particular job, consider another one before taking that one. this is
> true even if you know perl very well. life is too long to be an expert
> at harmful things, including such evilness as C++ and perl.

While it's easy to say that when you're talking about a "particular job", I
don't think it's right to be so cavalier about this. If you have a more
popular skill, it expands your choice of employers. If you're interested
in the web industry and know Perl, you can get a job just about anywhere.
If you know Lisp, job opportunities are much more scarce. It may be that
those jobs will be more interesting, since they're likely to be more
enlightened companies, but finding them may be difficult.

And if you have multiple skills (e.g. you know both Lisp *and* Perl) then
you have even more choice *and* your range of skills will make you more
attractive to all potential employers. Basically, I think programmers
should try to be familiar with all the popular languages. It's fine to
have a preference, but adaptability is an important strength. Being a
language or OS snob is not going to improve your life.

--
Barry Margolin, bar...@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Fernando D. Mato Mira

Mar 28, 2000, 3:00:00 AM
Barry Margolin wrote:

> have a preference, but adaptability is an important strength. Being a
> language or OS snob is not going to improve your life.

You can quit and start making movies ;)

--
Fernando D. Mato Mira
Real-Time SW Eng & Networking
Advanced Systems Engineering Division
CSEM
Jaquet-Droz 1 email: matomira AT acm DOT org
CH-2007 Neuchatel tel: +41 (32) 720-5157
Switzerland FAX: +41 (32) 720-5720

www.csem.ch www.vrai.com ligwww.epfl.ch/matomira.html


Tom Breton

Mar 28, 2000, 3:00:00 AM

Have you tried series? I admit, I'm just starting to use it myself so
I can't say too much, but it seems as expressive as loop, and is nice
in other ways.

Tom Breton

Mar 28, 2000, 3:00:00 AM
William Deakin <wi...@pindar.com> writes:

> Tom Breton wrote:
>
> > The Perl system was much easier to find and install. RedHat bundled it, so I
> > basically just pushed a button. CLISP has rpms too but they were so nonstandard
> > I ended up just building it from source.
>
> Do you think that this is a serious comparison between lisp and perl?

Since it was a comparison of easy availability, yes.

> For example: Under Linux the Debian clisp and cmucl installations are as seamless
> as the perl installation. Also, under Solaris 2.6 I experienced problems
> installing perl whilst I found the installation of cmucl and clisp binaries to be
> straightforward.

Obviously those systems' mileage varied.

David Hanley

Mar 28, 2000, 3:00:00 AM

To answer tersley: yes, you can write hard-to-read,
poorly structured code in lisp, too. It's just not
the only option. :)

dave


David Hanley

Mar 28, 2000, 3:00:00 AM

"Andrew K. Wolven" wrote:

> Tom Breton wrote:
>
> > Now here's something I hope Lisp doesn't acquire. Too often, eg with
> > format strings, file paths, the loop facility, Lisp has forgotten its
> > own elegance and grabbed at some byzantine, syntax-heavy notation just
> > because other languages used it.
>
> True, format strings suck. (sheesh, might as well use shtml or something)
> File paths seem to be an endless pain in the ass.
> But loop is cool. I have gotten work done with it.

Actually, I've considered writing a format-string generating macro, the
idea being that it would look something like the following:

(format nil (format-string "employee " STRING " makes " (MONEY (places 6 2))
                           " per year")
        emp-name emp-salary)
=> "employee jeff makes $50,000.00 per year"

Of course, there are so many ways to do the same thing. You could write
a print function which takes an arbitrary number of strings and pastes
them together, and write functions like (MONEY ...) to format data
properly.

I opted to just learn (format ...) a bit better. That way others can
(hopefully) read my code.

As for loop, well, I use it sometimes, but I usually feel ashamed afterwards.

dave


Kragen Sitaker

Mar 29, 2000, 3:00:00 AM
Very interesting article, Erik. As always :)

In article <31631935...@naggum.no>, Erik Naggum <er...@naggum.no> wrote:

> also, very much unlike any other language I
> have ever studied, perl has failed to stick to memory, a phenomenon that
> has actually puzzled me, but I guess there are some things that are so
> gross you just have to forget, or it'll destroy something within you. perl
> is the first such thing I have known.

Perl is so large and complex that it makes Common Lisp, COBOL, and C++
look small and simple by comparison. Large and complex things are hard
to memorize.

I just refer to the manual a lot.

> it's not that perl programmers are idiots, it's that the language rewards
> idiotic behavior in a way that no other language or tool has ever done,
> and on top of it, it punishes conscientiousness and quality craftsmanship
> -- put simply: you can commit any dirty hack in a few minutes in perl,
> but you can't write an elegant, maintainable program that becomes an
> asset to both you and your employer;

CGI.pm is a counterexample, IMHO.

Can you give some concrete examples of how Perl rewards idiotic
behavior and punishes conscientiousness? I must be so brainwashed that
it's not obvious to me.

> you can make something work, but you can't really figure out its
> complete set of failure modes and conditions of failure. (how do
> you tell when a regexp has a false positive match?)

This is a serious criticism, and one that I agree with to some extent.
I tend to think the power of Perl's hard-to-predict features outweigh
their difficulty of prediction.

I'd be interested to see some examples of short Perl snippets that had
subtle failure modes and a shorter (or quicker to read and write)
Common Lisp snippet that performed the same function, without the
subtle failure modes. I can think of a few --- there's one in perldoc
-f open. :)

> and once you start down this path [of stupid data formats], every
> move forward is a lot cheaper than any actual improvements to the
> system that would _obviate_ the need for more glue code. however,
> if you never start down this path, you have a chance of making
> relevant and important changes.

There are a lot of systems I talk to that have stupid data formats, and
it doesn't matter how much I want to fix them; I can't.

Perl is better than anything else I know at handling stupid data
formats reliably and effortlessly.

> few perl programmers are actually good at anything but getting perl
> to solve their _immediate_ problems, so you have an incredible
> advantage if you're a good Lisper.

I think you mean "are not actually good", not "are actually good".

Most Perl programmers are not skilled programmers. Perl makes it
possible for them to do things they couldn't have done by hand, and
makes it possible for them to do things more reliably and quickly than
they could have done them by hand. It does not turn them into
competent programmers.

Getting something useful out of Lisp requires that you be at least a
minimally competent programmer, so there are few Lisp programmers who
are not at least minimally competent.
--
<kra...@pobox.com> Kragen Sitaker <http://www.pobox.com/~kragen/>
The Internet stock bubble didn't burst on 1999-11-08. Hurrah!
<URL:http://www.pobox.com/~kragen/bubble.html>
The power didn't go out on 2000-01-01 either. :)

Tom Breton

Mar 29, 2000, 3:00:00 AM
David Hanley <d...@ncgr.org> writes:

> "Andrew K. Wolven" wrote:
>
> > Tom Breton wrote:
> >
> > > Now here's something I hope Lisp doesn't acquire. Too often, eg with
> > > format strings, file paths, the loop facility, Lisp has forgotten its
> > > own elegance and grabbed at some byzantine, syntax-heavy notation just
> > > because other languages used it.
> >
> > True, format strings suck. (sheesh, might as well use shtml or something)
> > File paths seem to be an endless pain in the ass.
> > But loop is cool. I have gotten work done with it.
>
> Actually, I've considered writing a format-string generating macro, with
> the idea of looking something like the following:
>
> (format nil (format-string "employee " STRING " makes " (MONEY (places 6 2))
>                            " per year")
>         emp-name emp-salary)
> => "employee jeff makes $50,000.00 per year"

For my rtest package, whose formatting functions had to work for both
Common Lisp and Elisp, I just used the argument names directly, rather
than positionally. Since there was no case where I tried to use the
same format string on many instances of data, it worked out easily.
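
Something in this spirit, though this is a made-up illustration and
not the actual rtest code:

(defun expand-template (template bindings)
  "Replace each symbol in TEMPLATE with its value in the BINDINGS alist."
  (apply #'concatenate 'string
         (mapcar (lambda (part)
                   (if (stringp part)
                       part           ; literal text passes through
                       (princ-to-string (cdr (assoc part bindings)))))
                 template)))

;; (expand-template '("employee " name " makes " salary " per year")
;;                  '((name . "jeff") (salary . 50000)))
;; => "employee jeff makes 50000 per year"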

Christopher Browne

Mar 29, 2000, 3:00:00 AM
Centuries ago, Nostradamus foresaw a time when Tom Breton would say:

>William Deakin <wi...@pindar.com> writes:
>
>> Tom Breton wrote:
>>
>> > The Perl system was much easier to find and install. RedHat bundled it, so I
>> > basically just pushed a button. CLISP has rpms too but they were so nonstandard
>> > I ended up just building it from source.
>>
>> Do you think that this is a serious comparison between lisp and perl?
>
>Since it was a comparison of easy availability, yes.
>
>> For example: Under Linux the Debian clisp and cmucl installations are as seamless
>> as the perl installation. Also, under Solaris 2.6 I experienced problems
>> installing perl whilst I found the installation of cmucl and clisp binaries to be
>> straightforward.
>
>Obviously those systems' mileage varied.

This parallels the performance benchmarks for "null scripts;" the net
results are not necessarily meaningful except for establishing minor
claims.

But.

It nicely establishes that it is *NOT* fair to say that Perl is
"simple" to configure and install whilst CL (whether in CLISP or CMUCL
incarnations) is "complex."

To the contrary, Perl is _rather_ hairy, and the problems that there
have occasionally been with Debian make it manifestly clear that this
is so. It is not clear that CL (in *any* incarnation) is more
"hairy;" I'd have no problem with the contention that all that's
"hairy" about it is peoples' _perceptions_ of the complexity of CL...
--
Why do scientists call it research when looking for something new?
cbbr...@hex.net- <http://www.ntlug.org/~cbbrowne/lisp.html>

William Deakin

Mar 29, 2000, 3:00:00 AM
Tom Breton wrote:

> Will writes:
>
> > Tom Breton wrote:
> >
> > > The Perl system was much easier to find and install. RedHat bundled it, so I
> > > basically just pushed a button. CLISP has rpms too but they were so nonstandard
> > > I ended up just building it from source.
> >
> > Do you think that this is a serious comparison between lisp and perl?
>
> Since it was a comparison of easy availability, yes.

If I understand you correctly, the only readily available source of perl and lisp is in
the RedHat RPM format. And more than this, this only includes RPM's that are bundled
with the RedHat distribution! My mistake.

Thanks for clearing this up,

:) will

William Deakin

Mar 29, 2000, 3:00:00 AM
Christopher Browne wrote:

> It nicely establishes that it is *NOT* fair to say that Perl is "simple" to configure
> and install whilst CL (whether in CLISP or CMUCL incarnations) is "complex."

I disagree. I would say that to install and configure CL is in fact easier in both the
cases of CLISP and CMUCL (more straightforward, robust, less sensitive to non-standard
Linux/Solaris installations [1]) than the comparable perl installation. Particularly
when Apache/modperl/perlDBI or any of the libraries/modules required to get any
sensible work done are factored in.

Cheers,

:) will

[1] I will not bore you with the hair-pulling details of my `oh dear I've got an untidy
old C++ library in /usr/local/lib and a new C++ library in /usr/lib that caused dynamic
linking to fail' story.


William Deakin

Mar 29, 2000, 3:00:00 AM
David Hanley wrote:

> To answer tersley,

I must have missed the posting from tersley. My newsreader is giving me
jip. Who he?

;) will


see.signature

Mar 29, 2000, 3:00:00 AM
On Wed, 29 Mar 2000 10:00:31 GMT, William Deakin <wi...@pindar.com> wrote:

>If I understand you correctly, the only readily available source of perl and lisp is in
>the RedHat RPM format. And more than this, this only includes RPM's that are bundled
>with the RedHat distribution! My mistake.
>

Please also have a look at debian .deb files, which include perl, clisp,
cmucl, gcl, xlispstat and different schemes.

Marc


--
------------------------------------------------------------------------------
email: marc dot hoffmann at users dot whh dot wau dot nl
------------------------------------------------------------------------------

William Deakin

Mar 29, 2000, 3:00:00 AM
"see.signature" wrote:

> Will wrote:
>
> >If I understand you correctly, the only readily available source of perl and lisp is in
> >the RedHat RPM format. And more than this, this only includes RPM's that are bundled
> >with the RedHat distribution! My mistake.
>
> Please also have a look at debian .deb files, which include perl, clisp,
> cmucl, gcl, xlispstat and different schemes.

Yes, (as my four-year-old nephew would say) I *know* that. I'm afraid you have misunderstood
what I was trying to say.

Sorry,

;) will


Andrew K. Wolven

Mar 29, 2000, 3:00:00 AM

Tom Breton wrote:

>
> Have you tried series? I admit, I'm just starting to use it myself so
> I can't say too much, but it seems as expressive as loop, and is nice
> in other ways.

Direct translation of C code into Lisp:
;;; Algorithm A1.6, page 36, The NURBS Book
(defun horner2 (a n m u0 v0)
  "Computes a point on a power basis surface."
  (loop with b = (make-array (1+ n))
        for i from 0 to n
        do (let ((ith-row-of-a
                   (make-array (1+ m)
                               :displaced-to a
                               :displaced-index-offset (* (1+ m) i))))
             (setf (aref b i)
                   (horner1 ith-row-of-a m v0)))
        finally (return (horner1 b n u0))))

series equivalent:
(defun my-horner2 (reverse-A u v)
  (flet ((b (series) (my-horner1 series v)))
    (my-horner1 (#Mb reverse-A) u)))

These may be totally broken.
I am still dense with series. I will have to get back to you on this issue when
I have a chance to get back to this stuff in a few months.

That flet in the series example was giving me a compiler warning and was running
much slower than the C translation.

My goal is to take the algorithms in the book:
Have a 'C' version, (maybe NLib)
A direct lisp translation (as best I can) of the 'C' version,
And a series version.

That way I can race. (I still have a lot to learn about optimization, though)
;)

A NURBS curve is a kind of series; why not have lisp s-expressions for math
expressions, right?

If I remember correctly series macroexpands to loop.

Loop is still a very useful macro, I think.
Do you like parentheses or do you like lisp?

AKW


Raymond Toy

Mar 29, 2000, 3:00:00 AM
>>>>> "Andrew" == Andrew K Wolven <awo...@redfernlane.org> writes:


Andrew> If I remember correctly series macroexpands to loop.

No. series macroexpands into a much lower level than that, consisting
of tagbody's, go's, etc.

Ray

Tim Moore

Mar 29, 2000, 3:00:00 AM
On Wed, 29 Mar 2000, Kragen Sitaker wrote:

> Very interesting article, Erik. As always :)
>
> In article <31631935...@naggum.no>, Erik Naggum <er...@naggum.no> wrote:

> > also, very much unlike any other language I
> > have ever studied, perl has failed to stick to memory, a phenomenon that
> > has actually puzzled me, but I guess there are some things that are so
> > gross you just have to forget, or it'll destroy something within you. perl
> > is the first such thing I have known.
>

> Perl is so large and complex that it makes Common Lisp, COBOL, and C++
> look small and simple by comparison. Large and complex things are hard
> to memorize.
>
> I just refer to the manual a lot.

Me too, kind of like when I was programming in Common Lisp.

> > it's not that perl programmers are idiots, it's that the language rewards
> > idiotic behavior in a way that no other language or tool has ever done,
> > and on top of it, it punishes conscientiousness and quality craftsmanship
> > -- put simply: you can commit any dirty hack in a few minutes in perl,
> but you can't write an elegant, maintainable program that becomes an
> > asset to both you and your employer;
>

> CGI.pm is a counterexample, IMHO.

Or any other code by Lincoln Stein. The book "Writing Apache Modules with
Perl and C" was a complete revelation for me: it's a well-written
exposition of a useful Perl module coupled with excellent examples of
well-written code in a clear style. Plus, it does a great job of
explaining the hairy inner workings of Apache.

> > you can make something work, but you can't really figure out its
> > complete set of failure modes and conditions of failure. (how do
> > you tell when a regexp has a false positive match?)
>

> This is a serious criticism, and one that I agree with to some extent.
> I tend to think the power of Perl's hard-to-predict features outweigh
> their difficulty of prediction.

It would seem to be an endemic problem with regular expression based
search in any language. Perhaps one can criticize Perl for encouraging
one to use regexps for everything.

> Perl is better than anything else I know at handling stupid data
> formats reliably and effortlessly.

As Kragen said, there's a lot of stupid data out there, for one reason
or another. For example: SQL databases. What a pain in the ass. But the
reality is that they are the only practical solution for reliable access
to huge amounts of data, and Perl's DBI module makes it very easy to get
data in and out of (e.g.) Oracle and do useful stuff with it. The fact
that I need to do this doesn't a priori mean that I'm eking out a
miserable existence in some slime pit.

As I've gained more experience with Perl it strikes me that it resembles
Lisp in many ways, albeit Lisp as channeled by an awk script on acid.

Consider that perl has:
lists as a ubiquitous data structure
first class functions
closures
lexical and dynamic binding
(a weak form of) garbage collection
a reasonable package system
an object system that, even though it's a great big hack, is very flexible
eval
...

It all seems eerily familiar. So, even though I might rather be
programming in Lisp and do worry occasionally about my mortal soul, many
of the lessons I learned as a Lisp hacker are directly applicable in Perl.
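
For instance, the classic closure demonstration transfers almost
verbatim: Perl's lexical `my' variables plus anonymous subs express
exactly what CL writes as

(defun make-counter ()
  "Return a fresh closure that counts its own calls."
  (let ((n 0))
    (lambda () (incf n))))

;; (let ((c (make-counter)))
;;   (list (funcall c) (funcall c) (funcall c)))  => (1 2 3)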

Tim

David Hanley

Mar 29, 2000, 3:00:00 AM

William Deakin wrote:

> David Hanley wrote:
>
> > To answer tersley,
>
> I must have missed the posting from tersley.

Understandable. It was incredibly short.

dave


Tom Breton

Mar 29, 2000, 3:00:00 AM
kra...@dnaco.net (Kragen Sitaker) writes:

> Getting something useful out of Lisp requires that you be at least a
> minimally competent programmer, so there are few Lisp programmers who
> are not at least minimally competent.

I suggest that Elisp is a counterexample. Plenty of people who
otherwise don't program have written .emacs files.

Tom Breton

Mar 29, 2000, 3:00:00 AM
William Deakin <wi...@pindar.com> writes:

> Tom Breton wrote:
>
> > Will writes:
> >
> > > Tom Breton wrote:
> > >
> > > > The Perl system was much easier to find and install. RedHat bundled it, so I
> > > > basically just pushed a button. CLISP has rpms too but they were so nonstandard
> > > > I ended up just building it from source.
> > >
> > > Do you think that this is a serious comparison between lisp and perl?
> >
> > Since it was a comparison of easy availability, yes.
>

> If I understand you correctly, the only readily available source of perl and lisp is in
> the RedHat RPM format. And more than this, this only includes RPM's that are bundled
> with the RedHat distribution! My mistake.
>

> Thanks for clearing this up,

It's too bad that you need to employ sarcasm for this tiny point. I
merely pointed out that it's easier to get a good Perl. You obviously
hated hearing that, but I can't help that.

To answer your sarcasm, Redhat is the largest distributor of Linux.
Having push-button install there counts for a lot.

But this is getting more and more like a language war, so I probably
won't answer again.

Christopher Browne

Mar 30, 2000, 3:00:00 AM
Centuries ago, Nostradamus foresaw a time when William Deakin would say:

>Christopher Browne wrote:
>
>> It nicely establishes that it is *NOT* fair to say that Perl is "simple" to configure
>> and install whilst CL (whether in CLISP or CMUCL incarnations) are "complex."
>
>I disagree. I would say that to install and configure CL is infact easier in both the
>cases of CLISP and CMUCL (more straightforward, robust, less sensitive to non-standard
>Linux/Solaris installations [1]) that the comperable perl installation. Particularly
>when Apache/modperl/perlDBI or any of the libraries/modules, required to get any
>sensible work done, are factored in.

Hum? Disagree?

I wrote, parsed into Lisp-like predicates:

(not (fair-p (simpler? (ease-of-install 'perl) (ease-of-install 'cl))))
:-)

You seem to think that I wrote the equivalent to:
(simpler? (ease-of-install 'perl) (ease-of-install 'cl))

Which is the exact *opposite* to what I said. That "not" in the
expression corresponds nicely to the "*NOT*" that is pretty prominent
in the sentence.
--
If you're sending someone some Styrofoam, what do you pack it in?
cbbr...@hex.net- <http://www.hex.net/~cbbrowne/lisp.html>

Mark-Jason Dominus

Mar 30, 2000, 3:00:00 AM
In article <8btgtf$lnr$0...@216.39.145.192>,
Tim Moore <mo...@herschel.bricoworks.com> wrote:
>As I've gained more experience with Perl it strikes me that it resembles
>Lisp in many ways, albeit Lisp as channeled by an awk script on acid.
>
>Consider that perl has:
>lists as a ubiquitous data structure
>first class functions
>closures
>lexical and dynamic binding
>(a weak form of) garbage collection
>a reasonable package system
>an object system that, even though it's a great big hack, is very flexible
>eval
>...
>
>It all seems eerily familiar.

I have been slowly coming to the same realization myself over the past
two years.

Most Perl programmers seem to come out of the C world, but Perl is
actually much more like Lisp than it is like C. Somewhere near the
beginning of Norvig's `Paradigms of Artificial Intelligence
Programming', there is a list of the eight important and unusual
features of Lisp. Perl shares seven of these. (A few moments'
thought will reveal the identity of the eighth.)

This has an interesting implication, which is that Perl programmers
are not using Perl effectively. Very few come from a Lisp background,
and they don't know what to do with closures even though they have
them. They are going around writing C programs in Perl. This is not
as ineffective as writing C programs in Lisp, but nevertheless they
could be doing much better.

>many of the lessons I learned as a Lisp hacker are directly
>applicable in Perl.

Yes, just so! And in fact I'm presently at work on a book whose main
goal is to introduce those same lessons to Perl programmers.


Kragen Sitaker

Mar 30, 2000, 3:00:00 AM
In article <m3g0t9b...@world.std.com>,
Tom Breton <t...@world.std.com> wrote:
>kra...@dnaco.net (Kragen Sitaker) writes:
>> Getting something useful out of Lisp requires that you be at least a
>> minimally competent programmer, so there are few Lisp programmers who
>> are not at least minimally competent.
>
>I suggest that Elisp is a counterexample. Plenty of people who
>otherwise don't program have written .emacs files.

Good point. I still think the rule usually holds, though.

William Deakin

Mar 30, 2000, 3:00:00 AM
Christopher Browne wrote:

> Hum? Disagree?

Damn. I did not read what you posted correctly [1]

> Which is the exact *opposite* to what I said. That "not" in the
> expression corresponds nicely to the "*NOT*" that is pretty prominent
> in the sentence.

Another case of read the message carefully (I have a long history of failing exams by not
answering the questions). Anyway, if I had read the message correctly I would say we were
`in violent agreement.'
Please accept my humble apologies,

:( will

[1] I think Fernando Mato Mira was correct yesterday, I *should* learn to read.

William Deakin

Mar 30, 2000, 3:00:00 AM
Tom Breton wrote:

> It's too bad that you need to employ sarcasm for this tiny point.

Would you prefer me to make large points using sarcasm? Anyway, what is wrong with sarcasm?

> I merely pointed out that it's easier to get a good Perl.

What you communicated was something like `a good cl is hard to get/install, perl is easy' and I
think this is wrong. I don't think I was the only person who interpreted your postings
this way.

> You obviously hated hearing that, but I can't help that.

I didn't hate hearing that. Although it didn't fill me with rapturous joy either.

> Having push-button install there counts for a lot.

Are you serious? I would say that having a push-button install/upgrade that then breaks (in
subtle and mysterious ways, whose wonders are to behold) the existing installation and modules
does *not* count for a lot. Do you work using MS kit much? Lots of push-button installs
there.

> ...this is getting more and more like a language war, so I probably won't answer again.

Fine. Whatever. I agree.

Best Regards,

:) will


Christopher Browne

Mar 30, 2000, 3:00:00 AM
Centuries ago, Nostradamus foresaw a time when William Deakin would say:
>Christopher Browne wrote:
>
>> Hum? Disagree?
>
>Damn. I did not readwhat you posted correctly [1]

No biggie. Apparently I didn't joke enough about this; I thought that
recoding the expression in pseudo-Lisp would be a clear joke, but apparently
it wasn't clear enough...

Making it clear:

Whilst there may be some *perceptions* out there that deploying CL is
"tough," there are graver challenges in deploying Perl.

In some cases (and Red Hat RPM's would be a good example of this), the
tremendous amount of effort going into Perl "packages" and the dearth
of effort going into Lisp equivalents has the result of it *appearing*
easier to deploy Perl. If the *tiniest* bit of additional effort went
into packages for CLISP or CMU-CL, the situation would very likely
reverse itself.

Perl is quite amazing in the amount of effort that it goes through to
"autoconf" itself to find a *huge* amount of information about the
system it is being installed on; that effort is fairly frightening...
--
I called that number and they said whom the Lord loveth he chasteneth.
cbbr...@hex.net - - <http://www.hex.net/~cbbrowne/lsf.html>

William Deakin

Mar 30, 2000, 3:00:00 AM
Christopher Browne wrote:

> No biggie. Apparently I didn't joke enough about this; I thought that
> recoding the expression in pseudo-Lisp would a clear joke, but apparently
> it wasn't clear enough...

Doh! [I'm suffering from a sense of humour failure today involving a small
15-week-old child at 2am, 4am, 6am... it was a relief to come to work today :]

Cheers,

:) will

Joe Marshall

Mar 30, 2000, 3:00:00 AM

[Erik's superb rant against perl elided]

Erik gets a lot of `hate mail' for his blunt and acrid remarks, but
not enough appreciation for his insightful essays like this one.

Well said, Erik. Posts like this one are a joy to read and right on
the money.

--
~jrm

Fernando D. Mato Mira

Mar 30, 2000, 3:00:00 AM
"Andrew K. Wolven" wrote:

> series equivalent:
> (defun my-horner2 (reverse-A u v)
>   (flet ((b (series) (my-horner1 series v)))
>     (my-horner1 (#Mb reverse-A) u)))

1. While theoretically possible, Series does not currently support high-order
series.

2. Series does not currently support definition of local series functions as by
flet or labels.

3. When you define an optimizable series function, you should add
(declare (optimizable-series-function)) as appropriate.

4. How did you generate reverse-A? By (scan (reverse <some-list>)) or by
reversing an array? If you plan on transducing a series into its reversed
counterpart, you'll need buffering in between. There's no way out.
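To make point 3 concrete, here is a minimal sketch of such a declaration
(assuming the SERIES package is loaded; the function itself is only
illustrative):

(defun scale-series (numbers factor)
  ;; Without this declaration, Series falls back to a slower,
  ;; closure-based evaluation instead of fusing the computation
  ;; into a loop.
  (declare (optimizable-series-function))
  (map-fn 'number (lambda (x) (* x factor)) numbers))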

--
Fernando D. Mato Mira
Real-Time SW Eng & Networking
Advanced Systems Engineering Division
CSEM
Jaquet-Droz 1 email: matomira AT acm DOT org
CH-2007 Neuchatel tel: +41 (32) 720-5157
Switzerland FAX: +41 (32) 720-5720

www.csem.ch www.vrai.com ligwww.epfl.ch/matomira.html


Rahul Jain

unread,
Mar 30, 2000, 3:00:00 AM3/30/00
to
In article <m3g0t9b...@world.std.com> posted on Wednesday, March 29, 2000 4:40 PM, Tom Breton <t...@world.std.com> wrote:
> kra...@dnaco.net (Kragen Sitaker) writes:
>
>> Getting something useful out of Lisp requires that you be at least a
>> minimally competent programmer, so there are few Lisp programmers who
>> are not at least minimally competent.
>
> I suggest that Elisp is a counterexample. Plenty of people who
> otherwise don't program have written .emacs files.
>

Most .emacs files I've seen are not programming at all. They are simply
multiple assignments of parameters, in order to customize the emacs
environment. Of course, a programmer's .emacs file will likely contain
numerous snippets of actual code.
Note that not all emacs users are programmers... particularly not lisp
programmers.
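For illustration, the non-programmer sort of .emacs really is nothing but
settings - a sketch (these are ordinary Emacs variables, picked only as an
example):

;; Pure customization, no programming:
(setq inhibit-startup-message t)   ; skip the splash screen
(setq make-backup-files nil)       ; no foo~ backup files
(setq visible-bell t)              ; flash instead of beeping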

--
-> -\-=-=-=-=-=-=-=-=-=-/^\-=-=-=<*><*>=-=-=-/^\-=-=-=-=-=-=-=-=-=-/- <-
-> -/-=-=-=-=-=-=-=-=-=/ { Rahul -<>- Jain } \=-=-=-=-=-=-=-=-=-\- <-
-> -\- "I never could get the hang of Thursdays." - HHGTTG by DNA -/- <-
-> -/- http://photino.sid.rice.edu/ -=- mailto:rahul...@usa.net -\- <-
|--|--------|--------------|----|-------------|------|---------|-----|-|
Version 11.423.999.210020101.23.50110101.042
(c)1996-2000, All rights reserved. Disclaimer available upon request.


Kragen Sitaker

unread,
Mar 31, 2000, 3:00:00 AM3/31/00
to
In article <p6RE4.1100$75.2...@ptah.visi.com>,
David Thornley <thor...@visi.com> wrote:
>I think the problem is that there are fewer underlying principles in Perl
>than in any other language I'm familiar with. Every so often, I run into
>something that makes me think that Larry Wall's breakfast must have
>disagreed with him that day.

Perl is not designed around underlying principles, I think. Unless you
count very general principles like, "There's more than one way to do
it" and "easy things should be easy, and hard things should be possible".

>I think it could be better done. The power of the features is impressive,
>but when you're using that much power in a language like Perl you're
>not likely to feel comfortably in control. To put it another way,
>Perl isn't necessarily a bad language because of this, but it could have
>been done better.

Do you think it has been done better in Lisp?

If so, where should I read about it?

If not, how would you do it better?

>>Perl is better than anything else I know at handling stupid data
>>formats reliably and effortlessly.
>

>If they fit into Perl's idea of a regular expression, which most do.
>Common Lisp is also excellent, but that's mostly because it's excellent
>at so many things.

Not everything that can be parsed with a Perl pattern should be, even
in Perl. But Perl's patterns often do 75% of the work of handling more
complex data formats. There, however, they cease to be reliable or
effortless, and in those situations they occasionally make Erik's
criticism look mild.

Hashes are a nice help, too, as are variable interpolation in strings,
pack, unpack, and sprintf.
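For comparison, rough CL counterparts of two of those (just a sketch):
hash tables where Perl has hashes, and FORMAT where Perl has sprintf or
string interpolation:

(let ((counts (make-hash-table :test #'equal)))
  (incf (gethash "flange" counts 0))   ; default 0, like autovivification
  (format nil "seen ~D flange~:P"
          (gethash "flange" counts)))
;; => "seen 1 flange"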

Kragen Sitaker

unread,
Mar 31, 2000, 3:00:00 AM3/31/00
to
In article <38e2c271.72bc$1...@news.op.net>,

Mark-Jason Dominus <m...@plover.com> wrote:
>This has an interesting implication, which is that Perl programmers
>are not using Perl effectively. Very few come from a Lisp background,
>and they don't know what to do with closures even though they have
>them.

You can't do as much with them as you can in Lisp, unfortunately. In
particular, recursive or mutually recursive closures are data
structures containing circular references, which break Perl's primitive
garbage collector.
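For contrast, the same cycle is harmless under a tracing collector. A
minimal CL sketch of the mutually recursive case:

(defun make-even/odd-testers ()
  ;; The two closures refer to each other, forming a reference cycle.
  ;; Any CL garbage collector reclaims both once they are unreachable;
  ;; a pure reference counter never would.
  (let (even? odd?)
    (setf even? (lambda (n) (or (zerop n) (funcall odd? (1- n))))
          odd?  (lambda (n) (and (plusp n) (funcall even? (1- n)))))
    (values even? odd?)))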

>Yes, just so! And in fact I'm presently at work on a book whose main
>goal is to introduce those same lessons to Perl programmers.

I assure you I will purchase it the day it is released :)

Kragen Sitaker

unread,
Mar 31, 2000, 3:00:00 AM3/31/00
to
In article <slrn8e6mvo....@knuth.brownes.org>,

Christopher Browne <cbbr...@hex.net> wrote:
>Perl is quite amazing in the amount of effort that it goes through to
>"autoconf" itself to find a *huge* amount of information about the
>system it is being installed on; that effort is fairly frightening...

This has a lot to do with Perl providing, as part of the language, an
interface to nearly all of the standard Unix system calls, and quite a
number that aren't very standard, as well as some other facilities like
dbm files, dynamic module loading, gethostent(), etc.

In order to do this correctly, it must (a) figure out what libraries
things are located in; (b) figure out what facilities aren't there; (c)
figure out which variants of particular facilities are present.

It also thinks it has to compile on very primitive systems lacking
things like an ANSI C compiler, a decent shell, #! support, filenames
longer than 14 characters, or ASCII. (The Configure script mentions
PDP-11 support when it's asking you about memory models.)

There are pros and cons to this approach.

Does scsh check fewer things and support as many platforms?

Still, I'm mystified by why Configure wants to know where awk, comm,
and pg are. It points to some scary makefiles.

Thom Goodsell

unread,
Mar 31, 2000, 3:00:00 AM3/31/00
to
Rahul Jain wrote:
>
> In article <m3g0t9b...@world.std.com> posted on Wednesday, March
> 29, 2000 4:40 PM, Tom Breton <t...@world.std.com> wrote:
> > I suggest that Elisp is a counterexample. Plenty of people who
> > otherwise don't program have written .emacs files.
> >
>
> Most .emacs files I've seen are not programming at all. They are simply
> multiple assignments of parameters, in order to customize the emacs
> environment. Of course, a programmer's .emacs file will likely contain
> numerous snippets of actual code.
> Note that not all emacs users are programmers... particularly not lisp
> programmers.

Agreed. I *am* a Lisp programmer, but my .emacs file has been almost
entirely cut and pasted from other people's .emacs files. The only thing
I've done is twiddle a few settings.

Thom Goodsell

Scientist t...@cra.com
Charles River Analytics (617) 491-3474 x574
Cambridge, MA, USA http://www.cra.com/

Mark-Jason Dominus

unread,
Mar 31, 2000, 3:00:00 AM3/31/00
to
In article <l2TE4.13817$3g5.1...@tw11.nn.bcandid.com>,

Kragen Sitaker <kra...@dnaco.net> wrote:
>You can't do as much with them as you can in Lisp, unfortunately. In
>particular, recursive or mutually recursive closures are data
>structures containing circular references, which break Perl's primitive
>garbage collector.

Yes, but the crappy garbage collector has an upside. People used to
say that garbage collection was impractical and inefficient. Rather
than patiently explain to them that their notion of garbage collection
was thirty years out of date, I would just point out that Perl has an
incredibly rotten garbage collector, maybe the worst possible garbage
collector, and that it is still a gigantic success. Perl's GC is so
bad that it stands as a sort of reductio ad absurdum to arguments that
say that GC is a bad choice, because it succeeds in spite of its
immense badness. ``Look,'' I would say. ``If Perl is this successful
with such a rotten garbage collector, imagine how much greater it would
be if its GC were actually state of the art.''

This argument is not as useful as it was in the past now that I can
just point at Java. I don't particularly like Java, but it may have
the beneficial effect that people finally shut up about garbage
collection.

In the 1960s there were big language wars about recursion; people
would tell you that recursion was unnecessary (because it can always
be simulated with iterative methods) and that it is inefficient. This
is true in some sense, but totally misses the value of recursion. But
you can't explain the value of recursion to a programmer who knows
only FORTRAN; you are not going to be able to get past his ignorance.
He is going to reason that FORTRAN does not have recursion, many large
projects are implemented in Fortran, and therefore recursion is
unnecessary for large projects. With the advent of C and Pascal,
industrial and commercial programming languages finally had recursion,
and I think the result is that now if you tried to promulgate a
general-purpose programming language without recursion, you would be
laughed at, and programmers who still don't understand recursion are
considered woefully ignorant, and maybe pitiful relics of the past.

I like to think that the advent of Java has had the same effect on
garbage collection. You used to see people saying the same things
about garbage collection that they said about recursion. It isn't
necessary, it is inefficient, it is only available in ivory-tower
languages that are not suited for doing real work, blah blah blah.
Nobody is going to be able to say this any more now that we have had
Java. I would not be surprised if in twenty years garbage collection
is in the mainstream the way recursion is now, and the idea of a
GC-less general-purpose programming language is laughable.

I have a fantasy that the same thing will somehow happen to closures.
I think it's appalling that people are still promulgating
general-purpose programming languages without lexical closure and
first-class functions. One of the reasons I am writing my book is to
try to help put a stop to this.

Anyway, there is some hope that Perl might someday get a better
garbage collector. Several people are talking about it, and Brad Kuhn
told me a few months ago that some guy he knew in Cincinnati was
looking into putting in the Boehm garbage collector. Every time
someone appears on the perl developers' list with some idea that
relies on any specific GC semantics, Sarathy warns them that they
cannot do that because Perl might someday get a better garbage
collector. Certainly a lot of people would like to see the
reference counting go away.


Tim Bradshaw

unread,
Mar 31, 2000, 3:00:00 AM3/31/00
to
* Mark-Jason Dominus wrote:
> You used to see people saying the same things
> about garbage collection that they said about recursion. It isn't
> necessary, it is inefficient, it is only available in ivory-tower
> languages that are not suited for doing real work, blah blah blah.

Surely Java is near-conclusive proof of this point?

--tim

David Hanley

unread,
Mar 31, 2000, 3:00:00 AM3/31/00
to

David Thornley wrote:

> >Perl is so large and complex that it makes Common Lisp, COBOL, and C++
> >look small and simple by comparison. Large and complex things are hard
> >to memorize.
> >

> Yup. On the flip side, you can do useful things with a small part of
> the language. I've never touched the object system (it looks like
> something I'm not going to enjoy), and have ignored several other things.
> I've written some useful stuff.

That's surely true. OTOH, a large language with lots of features means
that it may be hard to read and modify someone else's code, because they
did something with features you are not as familiar with, or that interact
in some odd way.

I think that's where CL is a big win. The library is very large, but there
are just a few underlying unifying principles. It's a lot easier to understand
a few unfamiliar functions than unfamiliar language behavioral issues.
(not contradicting you, just elaborating my ideas on this)

dave


Jonathan Coupe

unread,
Apr 3, 2000, 3:00:00 AM4/3/00
to

Tim Bradshaw <t...@cley.com> wrote in message
news:ey3r9cr...@cley.com...

I'm not pro or anti Java (it's not suitable for the work I do yet, for sure),
but the people I've spoken to who have used it for real-world projects cite
memory leaks as being one of the biggest project killers. It's certainly the
first GC language I've seen for which people actually spend money on memory
leak detection tools. (Of course, this may also be a reflection of Java's
popularity - I've certainly used Eiffel and Lisp compilers that leaked
memory.)

Jonathan

Robert Monfera

unread,
Apr 5, 2000, 3:00:00 AM4/5/00
to

Tim Moore wrote:

> As Kragen said, there's a lot of stupid data out there, for one reason
> or another. For example: SQL databases. What a pain in the ass.

SQL has quite solid fundamentals and it should not be dismissed on the
basis of its syntax - it's quite easy to generate SQL strings from
classes or query definition objects.
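As a minimal sketch of that point (the function name is hypothetical),
building the SQL text from Lisp data instead of by hand:

(defun sql-select (table columns &key where)
  ;; Assemble a SELECT statement from a simple query description.
  (format nil "SELECT ~{~A~^, ~} FROM ~A~@[ WHERE ~A~];"
          columns table where))

;; (sql-select "Employee" '("name" "salary") :where "dept_id = 42")
;; => "SELECT name, salary FROM Employee WHERE dept_id = 42;"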

Robert

Frank A. Adrian

unread,
Apr 5, 2000, 3:00:00 AM4/5/00
to

Robert Monfera <mon...@fisec.com> wrote in message
news:38EB0520...@fisec.com...
>Tim Moore wrote:

Tim was right when he said that SQL databases are a pain in the ass. And I
don't think he was referring simply to syntax issues. Yes, it's easy to
generate SQL strings (having been employed to do this in the past, I can
attest to this), but why do I even need a separate language and access
methodology? The real issue is whether the RDB model was necessary. It was
a fairly neat enhancement over hierarchical data bases and allowed more
flexible slicing of data but, at its core, the RDB model is simply a
collection of objects joined on the relations between them, with optional
data slicing and transaction control thrown in. Was the construction of a
new model along with the idea of separating the objects using this model from
all other in-memory objects really a good idea? Or would work on the VM
systems in place at the time to provide persistent object spaces with
transparently maintained, use-based indices have been better? More to the
point, would it be better now? The division in access methods and identity
between in-memory and on-disk objects makes construction of software much
more difficult than if the distinction did not exist. To make it worse, the
current implementations of the RDB model revel in pushing the users' noses
into these distinctions. I believe that Henry Baker had a really good paper
a few years back on this topic (Don't hold me to this statement, because
it's a hazy recollection with no reference at this point).

faa

Philip Lijnzaad

unread,
Apr 6, 2000, 3:00:00 AM4/6/00
to
On Wed, 5 Apr 2000 23:41:49 -0700,
"Frank" == Frank A Adrian <fad...@uswest.net> writes:

Frank> why do I even need a separate language and access methodology?

for fully general and high-level ad-hoc querying, specifying the what, not
the how. For easily working with sets of things, rather than having to spell
it all out.

Frank> The real issue is whether the RDB model was necessary.

from which point of view? Is Lisp necessary?

Frank> at its core, the RDB model is simply a collection of objects

what are your objects, rows or columns? If the former, certainly not; if the
latter, yes (but this view is less common)

Frank> joined on the relations between them, with optional data slicing and
Frank> transaction control thrown in.

(transactions are essential to any data management, whether relational or
not)

Frank> Was the construction of a new model along with the idea of separating
Frank> the objects used this model from all other in-memory objects really a
Frank> good idea?

yes, most certainly.

Frank> Or would work on the VM systems in place at the time to provide
Frank> persistent object spaces with transparently maintained, use-based
Frank> indices have been better?

?

Frank> More to the point, would it be better now?

Probably not. Look at the rather sorry state of the OODBMS field. OODBMses
have been touted as the next silver bullet for about 15 years now. They still
are just a very small niche, for a number of reasons.

Frank> The division in access methods and identity between in-memory and
Frank> on-disk objects makes construction of software much more difficult
Frank> than if the distinction did not exist.

yes, that's true if you're talking about one application with just a little
data. If you're talking about Gigabytes worth of complex information, chances
that this data will be needed in unforeseen ways are much higher. At this
point, OO databases simply become too rigid. Is there an accepted notion of
what constitutes a view in OODM? Set-level manipulations? Schema evolution?
Query language? And not wholly unimportantly, are these things available in
current implementations, in a not entirely unstandardized way?

Frank> To make it worse, the current implementations of the RDB model revel
Frank> in pushing the users' noses into these distinctions.

?

Frank> I believe that Henry Baker had a really good paper a few years back on
Frank> this topic (Don't hold me to this statement, because it's a hazy
Frank> recollection with no reference at this point).

I'd be very interested in a proper reference to this, knowning Baker's high
quality writings.

Philip
--
Not getting what you want is sometimes a wonderful stroke of luck.
-----------------------------------------------------------------------------
Philip Lijnzaad, lijn...@ebi.ac.uk | European Bioinformatics Institute,rm A2-24
+44 (0)1223 49 4639 | Wellcome Trust Genome Campus, Hinxton
+44 (0)1223 49 4468 (fax) | Cambridgeshire CB10 1SD, GREAT BRITAIN
PGP fingerprint: E1 03 BF 80 94 61 B6 FC 50 3D 1F 64 40 75 FB 53

Christopher C Stacy

unread,
Apr 6, 2000, 3:00:00 AM4/6/00
to
>>>>> On 06 Apr 2000 13:06:08 +0100, Philip Lijnzaad ("Philip") writes:

Philip> yes, that's true if you're talking about one application with just a little
Philip> data. If you're talking about Gigabytes worth of complex information, chances
Philip> that this data will be needed in unforseen ways are much higher. At this
Philip> point, OO database simply become too rigid. Is there an accepted notion of
Philip> what constitutes a view in OODM? Set-level manipulations? Schema evolution?
Philip> Query language? And not wholly unimportantly, are these things available in
Philip> current implementations, in an not entirely unstandardized way?

If you are looking for a standardized interface to a standardized model
of databases, then you should stick with RDBMS and SQL for now.
This discussion is about (relatively) new ways of doing things, and, yes,
all the concepts and issues that you mentioned are dealt with in
object-oriented database systems. (Certainly there is nothing in RDBMS
that magically solves those problems, either, and in fact I have found,
for example, schema evolution to be fairly weak in systems such as Oracle.)
I don't know why you would characterize OODBMS as only being suitable
for small or simple applications -- it has been successfully employed
in gigabyte databases and complex applications.

Seth Gordon

unread,
Apr 6, 2000, 3:00:00 AM4/6/00
to
"Frank A. Adrian" wrote:

> I believe that Henry Baker had a really good paper
> a few years back on this topic (Don't hold me to this statement, because
> it's a hazy recollection with no reference at this point).

"Relational Databases", a letter to the _ACM Forum,_ October 15, 1991:
ftp://ftp.netcom.com/pub/hb/hbaker/letters/CACM-RelationalDatabases.html

--
perl -le"for(@w=(q[dm='r 0rJaa,u0cksthe';dc=967150;dz=~s/d/substrdm,\
(di+=dc%2?4:1)%=16,1ordi-2?'no':'Perl h'/e whiledc>>=1;printdz]))\
{s/d/chr(36)/eg;eval;}#In Windows type this all on 1 line w/o '\'s"
== seth gordon == sgo...@kenan.com == standard disclaimer ==
== documentation group, kenan systems corp., cambridge, ma ==

Frank A. Adrian

unread,
Apr 6, 2000, 3:00:00 AM4/6/00
to
Philip Lijnzaad <lijn...@ebi.ac.uk> wrote in message
news:u71z4jz...@o2-3.ebi.ac.uk...

> On Wed, 5 Apr 2000 23:41:49 -0700,
> "Frank" == Frank A Adrian <fad...@uswest.net> writes:
>
> Frank> why do I even need a separate language and access methodology?
>
> for fully general and high-level ad-hoc querying, specifying the what, not
> the how. For easily working with sets of things, rather than having to spell
> it all out.

Which could have been done in language extensions, not requiring an entirely
new syntax to learn. Lisp does a fine job of working with collections of
things - including high-level, ad hoc queries via sequence functions.
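For instance, a sketch with rows as plists - roughly "SELECT name FROM
employees WHERE salary > 800":

(defparameter *employees*
  '((:name "Donna" :salary 800) (:name "Bert" :salary 900)))

(mapcar (lambda (row) (getf row :name))
        (remove-if-not (lambda (row) (> (getf row :salary) 800))
                       *employees*))
;; => ("Bert")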

> Frank> The real issue is whether the RDB model was necessary.
>
> from which point of view? Is Lisp necessary?

Well, in the end, are computers necessary? Of course not. But if the
purpose of computing is getting actual work done with a minimum of effort,
the more models you have, the more clutter you have.

> Frank> at its core, the RDB model is simply a collection of objects
>
> what are your objects, rows or columns? If the former, certainly not; if
> the latter, yes (but this view is less common)

Either. Data slicing and joining of objects did not have to be combined
with distinctions between in- and out-of-memory storage. Nor does one have
to invent a new syntax.

> Frank> joined on the relations between them, with optional data slicing and
> Frank> transaction control thrown in.
>
> (transactions are essential to any data management, whether relational or
> not)
> not)

For complex applications, yes. But early versions of SQL did not provide
for anything sophisticated in the way of transaction control. Even today's
implementations leave something to be desired (Can you lock single records
in most, or do most use page-locking, still?).

> Frank> Was the construction of a new model along with the idea of separating
> Frank> the objects using this model from all other in-memory objects really a
> Frank> good idea?
>
> yes, most certainly.

We'll have to differ on this one.

> Frank> Or would work on the VM systems in place at the time to provide
> Frank> persistent object spaces with transparently maintained, use-based
> Frank> indices have been better?
>
>?

Let me explain. My take on the issue is that an RDB is little more than a
hopped-up, user-controlled VM system used to select and page in (possibly
partial) objects from a large data-storage space and, after the objects are
(possibly) updated, to page them out again. I'm not so much against the idea
of storing these things on disk, but the explicit user control (with a new
language, no less) strikes me as convoluted.

> Frank> More to the point, would it be better now?
>
> Probably not. Look at the rather sorry state of the OODBMS field. OODBMSes
> have been touted as the next silver bullet for about 15 years now. They still
> are just a very small niche, for a number of reasons.

Please enlighten us with the reasons. As far as I can tell, the main reason
that OODBMS's have not caught on is inertia, poor integration with current
RDB systems (but you know whose fault I think that is), and performance
issues because they haven't had 25+ years to work out their problems in this
area.

> Frank> The division in access methods and identity between in-memory and
> Frank> on-disk objects makes construction of software much more difficult
> Frank> than if the distinction did not exist.
>

> yes, that's true if you're talking about one application with just a little
> data. If you're talking about Gigabytes worth of complex information, chances
> that this data will be needed in unforeseen ways are much higher.

That's true. But again, I don't see how making the USER make a distinction
between in- and out-of-memory storage helps things.

> At this point, OO databases simply become too rigid.

Now it's my turn to go "?".

> Is there an accepted notion of what constitutes a view in OODM? Set-level
> manipulations? Schema evolution? Query language? And not wholly
> unimportantly, are these things available in current implementations, in a
> not entirely unstandardized way?

Yes, standardization is an issue. But most OODB's have perfectly adequate
data models, views, and ad-hoc query methods.

>
> Frank> To make it worse, the current implementations of the RDB model revel
> Frank> in pushing the users' noses into these distinctions.
>
> ?

I see I was unclear. Why do I have to explicitly tell a system where
indices go to get reasonable performance? The systems already have tools to
monitor retrieval patterns and feed the information back to me. Most have
advisory systems that recommend index placement. Why put me in the loop at
all? The answer, I fear, is (a) because most programmers are control freaks
who care more about feeling in control than getting work done (the C
syndrome), (b) efficiency for the tough cases (never mind that 90% of the
uses aren't in tough cases - everyone has to pay), or (c) it's always been
done that way.

> Frank> I believe that Henry Baker had a really good paper a few years back on
> Frank> this topic (Don't hold me to this statement, because it's a hazy
> Frank> recollection with no reference at this point).
>
> I'd be very interested in a proper reference to this, knowing Baker's high
> quality writings.

Somebody has provided it in another article. I'm glad my memory isn't
slipping too much.

William Deakin

unread,
Apr 6, 2000, 3:00:00 AM4/6/00
to
Frank A. Adrian wrote:

> Can you lock single records in most, or do most use page-locking, still?

Oracle and Sybase give you the choice of table, page or row locking.

> ...and performance issues...

This is the real killer. If you have an online transaction system, like
telephone-based sales order processing, the usual requirements are that you want
(1) the telephonist to be able to see how many purple flanges are available within
about 3 seconds, (2) to be able to order the last 23 flanges and make sure nobody
else grabs these 23 flanges whilst you are getting the credit card details of
the customer, and (3) if a mad axeman cuts through the power cable of the box
immediately after the transaction is committed, then there is still a permanent
record of the transaction having taken place.

Best Regards,

:) will


Philip Lijnzaad

unread,
Apr 7, 2000, 3:00:00 AM4/7/00
to
OK,

this looks set to become a long-winded and misplaced thread, but I'll just
add this: Baker's main objection appears to be that hierarchical data is
cumbersome to work with in RDBMSs. It is, if you use the common
employee(emp_id, boss_id, ...) schema. Vendor extensions (such as Oracle's
CONNECT BY PRIOR, and similar constructs in Sybase and DB/2) alleviate this
to some extent, and I believe something along those lines is now part of the
(generally not fully implemented) SQL92 standard.

However, if you use the nested-set model invented by Joe Celko, the pain goes
away: it allows fast and general retrieval (and aggregate functions etc.), in
one query, of complete trees. For some reason, this nested set model isn't
yet very well-known, but in my limited experience, it simply renders the
"trees are tricky" argument against relational databases obsolete.

Details of the model are described in Celko (1999), "SQL for
Smarties", 2nd ed., Morgan Kaufmann.

Philip Lijnzaad

unread,
Apr 7, 2000, 3:00:00 AM4/7/00
to
On Thu, 6 Apr 2000 08:05:29 -0700,
"Frank" == Frank A Adrian <fad...@uswest.net> writes:

Frank> Which could have been done in language extensions, not requiring an
Frank> entirely new syntax to learn.

I personally don't mind having to use one general (though imperfect) query
language (SQL) when querying a host of database engines from within a host of
different languages.

Frank> Well, in the end, are computers necessary? Of course not. But if the
Frank> purpose of computing is getting actual work done with a minimum of effort,

exactly that is what the relational model offers you: you specify what you
want, and the database figures out how to do it.

Frank> the more models you have, the more clutter you have.

what kind of models do you mean here, data access models? I would argue that
a completely normalized data model (whether implemented as relational tables
or as object-oriented classes or what have you) is a prerequisite for any
serious work. I agree that accessing tables when all you really wanted
were objects presents a barrier, but the benefits of having tables,
constraints, views etc. to juggle with, to me, outweigh the drawback of this
impedance mismatch.

Frank> at its core, the RDB model is simply a collection of objects
>>
>> what are your objects, rows or columns? If the former, certainly not; if
>> the latter, yes (but this view is less common)

Frank> Either. Data slicing and joining of objects

what is a joined object? If you have tables Department(name, dept_id) and
Employee(name, dept_id), usually they are said to hold instances of objects
of classes Department and Employee. What is the data type of the 'things'
(objects?) returned from

SELECT d.name, e.name
FROM Department as d, Employee as e
WHERE d.dept_id = e.dept_id;

? This is the whole problem with trying to impose OO onto a relational
system. For exactly this reason, Chris Date and others argue that row ==
object is simply wrong. See H. Darwen, C.J. Date. The Third Manifesto. SIGMOD
Record, 24(1), 1995, pp.39-49. This seems to be borne out by the recent
object-relational extensions (cartridges, datablades, modules) offered by the
big commercial vendors (and, I believe, PostgreSQL).

Frank> did not have to be
Frank> combined with distinctions between in- and out-of-memory storage.

that's a different matter, and one that is not really visible. Indeed, most
big databases have extensive caching machinery built into them, in order to
not have to access disks again.

Frank> Even today's implementations leave something to be desired (Can you
Frank> lock single records in most, or do most use page-locking, still?).

yes, Oracle, Informix, DB/2, and even the lowly Postgres do row-level locking.

Frank> Or would work on the VM systems in place at the time to provide
Frank> persistent object spaces with transparently maintained, use-based
Frank> indices have been better?

>> ?

Frank> Let me explain. My take on the issue is that an RDB is little more
Frank> than a hopped up, user-controlled, VM system used to select and
Frank> page-in (possibly partial) objects from a large data-storage space
Frank> and, after the objects are (possibly) updated, to page them out again.

ah, but it's again the question whether all you ever need is single access to one
type of object at any one time. Sticking with OO, you're practically forced
to.

Frank> I'm not so much against the idea of storing these things on disk, but
Frank> the explicit user control (with a new language, no less) strikes me as
Frank> convoluted.

I see relational modeling as just a type of linear algebra (and the whole
field is called relational algebra for exactly that reason). If you're only
allowed to work with base tables (or, for that matter, objects), all you can
do is move along one dimension at a time, then go to another one, etc. Using
the full power of relational calculus, you can work with any volume in any
subspace, of any dimension and shape. That's quite a bit more general than
pointer chasing, I think.

Frank> Please enlighten us with the reasons.

see above. Add: views, lack of standards (again, after all these years), lack
of ad-hoc querying, lack of provisions for schema evolution, lack of
interactive data manipulation (this is really important).

Frank> As far as I can tell, the main
Frank> reason that OODBMS's have not caught on is inertia, poor integration
Frank> with current RDB systems (but you know whose fault I think that is),

I suppose you blame the RDBMS vendors, but I'd say that if OODBMSes were such a
good idea, and (therefore?) a good opportunity to make yet more money by
selling OODBMS add-ons, then they would at least have tried to do this. So
far, that hasn't happened. What has happened is that the big vendors have
tried to extend the relational model along the lines of 'database columns can
contain complex objects', rather than the 'rows = objects', by adding
cartridges, datablades etc.

Frank> and performance issues because they haven't had 25+ years to work out
Frank> their problems in this area.

No, they have had 15 years, and their market penetration has plateaued (sorry,
no reference, I remember seeing a slide in a sales talk by Informix ...)

Frank> The division in access methods and identity between in-memory and
Frank> on-disk objects makes construction of software much more difficult
Frank> than if the distinction did not exist.

no, that's not true. Again, if you want to be transactional, you have to play
by the access rules. That is, open a transaction, get your stuff from wherever
it is, do something useful, then commit. You don't care if this came from
disk or not. The 'get your stuff from wherever it is' step is always
necessary, because your previous version may have been touched by another
transaction in the meantime.

>> At this point, OO databases simply become too rigid.
>>

Frank> Now it's my turn to go "?".

In RDBMSs, it's trivial to add a column to a table, to rename tables (sadly,
renaming a column is often not possible), to replace a table by a view
(==query) of the same name but showing different things, to add, remove or
change a constraint, and there are probably a few that I have left out. This
is essential in any production environment. All these things are usually done
by simple interactive commands, no need to write programs for this.

Frank> I see I was unclear. Why do I have to explicitly tell a system where
Frank> indices go to get reasonable performance? The systems already have
Frank> tools to monitor retrieval patterns and feed the information back to
Frank> me. Most have advisory systems that recommend index placement. Why
Frank> put me in the loop at all? The answer, I fear, is (a) because most
Frank> programmers are control freaks who care more about feeling in control
Frank> than getting work done (the C syndrome), (b) efficiency for the tough
Frank> cases (never mind that 90% of the uses aren't in tough cases -
Frank> everyone has to pay), or (c) it's always been done that way.

is this such a problem? In most cases, you create indexes (on primary and
foreign keys and fields that are likely to be queried often) once, then
forget about them, until a performance bottleneck shows up. This can hardly
be held against the relational model, I think.

Cheers,
Philip

Seth Gordon

unread,
Apr 7, 2000, 3:00:00 AM4/7/00
to
Philip Lijnzaad wrote:

> ...Details of the model are described in Celko (1999), "SQL for
> Smarties", 2nd ed., Morgan Kaufmann.

Is this book as good as the title would imply?

Robert Monfera

unread,
Apr 7, 2000, 3:00:00 AM4/7/00
to

Yes, quite a good one - I mainly bought it for the hierarchical model,
but the other chapters were very informative too.

The great thing about the hierarchy representation is that it is a
non-trivial solution that relies on set operations, in line with the
foundations of SQL. Now we can easily store Lisp programs in a DBMS,
one cons per record!
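Taking the joke half-seriously: the numbering that would produce those
records is a single depth-first walk (a sketch):

(defun nested-set-rows (tree)
  ;; Assign nested-set (lft rgt) pairs to every node of a Lisp tree;
  ;; each node's interval encloses the intervals of everything below it.
  (let ((n 0) (rows '()))
    (labels ((walk (node)
               (let ((lft (incf n)))
                 (when (consp node)
                   (mapc #'walk node))
                 (push (list node lft (incf n)) rows))))
      (walk tree)
      (nreverse rows))))

;; (nested-set-rows '(a b)) => ((A 2 3) (B 4 5) ((A B) 1 6))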

Robert

Robert Monfera

unread,
Apr 7, 2000, 3:00:00 AM4/7/00
to

Please consider answers below more as a defense of the relational model
rather than an argument for choosing an RDBMS over an OODBMS. Data
access should be transparent from the application side, and it requires
less effort to achieve it with an OODBMS. The value of relational
concepts is more general than the realm of RDBMS implementations (dare I
say universal?), for example object oriented design owes it quite a
bit. We should separate the concepts from their implementations,
understand their merits and integrate them into our thinking rather than
dismiss them because of their age and less than ideal
implementations.

Frank A. Adrian wrote:

> Data slicing and joining of objects did not have to be combined
> with distinctons between in- and out-of-memory storage. Nor does one have
> to invent a new syntax.

This is right. The relational model itself and the SQL standards do not say
anything about it, and implementations make very heavy use of memory,
as well as facilitate buffering at various levels and in-memory tables
or databases. It is an implementation detail.

> But early versions of SQL did not provide
> for anything sophisticated in the way of transaction control.

This is strange. People like Codd were fairly clear on integrity
constraints, and they imply atomicity of operations. If you speak of
implementation, maybe someone has recollections on early DB2 features.

> Even todays
> implementations leave something to be desired (Can you lock single records
> in most, or do most use page-locking, still?).

These days even Microsoft's product supports row-level locking, besides
DB2, Oracle and a number of other implementations.

> > Frank> Was the construction of a new model along with the idea of
> > Frank> separating the objects using this model from all other in-memory
> > Frank> objects really a good idea?
> >
> > yes, most certainly.
>
> We'll have to differ on this one.

First, the idea was not the separation of objects into in-memory and
out-of-memory objects. It is similar to Common Lisp in that the
specification does not say where and how an object should be stored.
The idea was to give a sound, standardized way of representing and
manipulating information. Disk access is just practical to do if there
are large amounts of data or information is not to be lost upon a crash.

The real separation is that an enterprise is theoretically able to
construct a data model utilized by multiple, heterogeneous applications.
If you were to implement a new analytical or reporting tool, would you
wish that the data be locked in other proprietary application systems,
or you would prefer having standardized access to a database?

In practice, it is a great MOP exercise to integrate the concept of CLOS
class with SQL's table or view, and there are existing implementations
that do this.
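As a tiny flavor of that exercise - without the MOP machinery a real
mapping would use, and with entirely made-up names - one can at least
derive DDL from a class-like description:

(defmacro define-table-class (name &rest slot-specs)
  ;; Each spec is (slot-name sql-type): define a plain CLOS class and
  ;; return a matching CREATE TABLE string.
  `(progn
     (defclass ,name ()
       ,(mapcar #'first slot-specs))
     (format nil "CREATE TABLE ~A (~{~A ~A~^, ~});"
             ',name
             (list ,@(loop for (slot type) in slot-specs
                           collect `',slot
                           collect type)))))

;; (define-table-class employee (name "CHAR(10)") (salary "DECIMAL(6,2)"))
;; => "CREATE TABLE EMPLOYEE (NAME CHAR(10), SALARY DECIMAL(6,2));"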

> > Frank> Or would work on the VM systems in place at the time to provide
> > Frank> persistent object spaces with transparently maintained, use-based
> > Frank> indices have been better?
>

> Let me explain. My take on the issue is that an RDB is little more than a
> hopped-up, user-controlled VM system used to select and page in (possibly
> partial) objects from a large data-storage space and, after the objects
> are (possibly) updated, to page them out again.

You don't seem to appreciate the benefits of the relational model - it
does not even want to be a VM model. Think of recent in-memory SQL
database implementations, which don't even use the disk except for maybe
saving the image or writing a journal.

The relational model has much in common with Lisp philosophy:

- It is self-descriptive, so that you can gain metadata like table names,
relations and constraints _inside_ the system. It is analogous to
CLOS and the metaobject protocol in Lisp.

- It separates the notion of "how" from "what", the same way a Lisp
array or a hash table may well reside on the disk if the implementation
wants to do so. The SQL language itself is declarative rather than
procedural. You can even call it functional.

- It purports to facilitate and do the "right thing" - via achieving
higher levels of normal forms, the data model becomes clearer and
more disciplined. There should be a lot of similarity between entity
relationship diagrams no matter if you got there via the relational way
or the OOD way.

- Both support the concept of atomic operations that ensure a consistent
state: compare transaction commit/rollback with unwind-protect, for
example (see the sketch below).
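To make the comparison concrete, a sketch - the begin/commit/rollback
functions are hypothetical stand-ins for whatever the database interface
actually provides:

(defmacro with-transaction ((db) &body body)
  ;; Commit on normal exit; roll back on any non-local exit.
  (let ((done (gensym "DONE")))
    `(progn
       (begin-transaction ,db)
       (let ((,done nil))
         (unwind-protect
              (multiple-value-prog1 (progn ,@body)
                (commit-transaction ,db)
                (setf ,done t))
           (unless ,done (rollback-transaction ,db)))))))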

If the value of SQL concepts isn't recognised, then one is going to
reimplement half of the relational model, poorly.

> I'm not so much against the idea
> of storing these things on disk, but the explicit user control (with a new
> language, no less) strikes me as convoluted.

SQL cannot exactly be called a new language. (I address explicit user
control further down.)

> As far as I can tell, the main
> reason that OODBMS's have not caught on is inertia, poor integration with
> current RDB systems (but you know whose fault I think that is), and
> performance issues because they haven't had 25+ years to work out their
> problems in this area.

That 25+ years of experience plus the previous years of internal
research at IBM must have accounted for something conceptually solid,
but probably even they had to put in an effort to convince business to
use the then new relational model.

> Why do I have to explicitly tell a system where
> indices go to get reasonable performance. The systems already have tools
> to monitor retrieval patterns and feed the information back to me. Most
> have advisory systems that recommend index placement. Why put me in the
> loop at all?

You are talking about implementation, not concepts. Implementations
purporting to meet your demands for transparency already go the extra mile
to analyze and transform queries, optimize access paths etc.

A sufficiently smart RDBMS implementation should convert the heaps of
statistics into index creations etc. I would not be surprised to see MS
moving into that direction, trying to make things "easy" for users.

Declarations, similar to ones in Lisp, are still needed so that the
system knows you want to optimize for throughput or response time etc.
Even an index creation can be perceived as a declaration, telling the
system about expected patterns of use. How are you going to do it with
an OODBMS? Don't you have to create collections like hash tables or
trees to get adequate access performance?
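In Lisp terms the parallel might be (a sketch): picking a representation
up front is itself the "index creation":

;; Expecting keyed lookups?  That choice is your CREATE INDEX:
(defparameter *orders-by-id* (make-hash-table :test #'eql))

;; Expecting appends plus sequential scans?  A different declaration:
(defparameter *order-log* (make-array 0 :adjustable t :fill-pointer 0))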

> The answer, I fear, is (a) becuase most programmers are control freaks
> who care more about feeling in control than getting work done

So are we control freaks when we put objects in a hash table rather than
in a list*? The sufficiently smart DBMS will be as elusive as the
sufficiently smart compiler or the sufficiently smart OODBMS. RDBMS
systems still perform a huge amount of optimization, on which web
resources are available.

[* There is a cute package in the repository that allows you to create
tables in Lisp without having to worry about their representation - later
you can declare it to be a hash table, an array or something else.]

In any case, it's pleasing to know that RDBMS statistics are collected
into relational tables themselves, so good ideas can be implemented in
Lisp to automate index creation.

Robert

Will Deakin

unread,
Apr 7, 2000, 3:00:00 AM4/7/00
to
Robert Monfera wrote:
>A sufficiently smart RDBMS implementation should convert the heaps of
>statistics into index creations etc. I would not be surprised to see MS
>moving into that direction, trying to make things "easy" for users.

Oracle can already do (does?) something like this. Using statistics calculated
from the data, the SQL execution engine selects an optimum method of returning
data. This does not include index creation as such, but it does select whether
(or which) indices are used.

Best Regards,

:) will


Christopher Browne

unread,
Apr 8, 2000, 3:00:00 AM4/8/00
to
Centuries ago, Nostradamus foresaw a time when Philip Lijnzaad would say:

>However, if you use the nested-set model invented by Joe Celko, the pain goes
>away: it allows fast and general retrieval (and aggregate functions etc.), in
>one query, of complete trees. For some reason, this nested set model isn't
>yet very well-known, but in my limited experience, it simply renders the
>"trees are tricky" argument against relational databases obsolete.
>
>Details of the model are described in Celko (1999), "SQL for
>Smarties", 2nd ed., Morgan-Kaufman.

Looks interesting.

For those that would prefer not to buy the book just to get a general
idea of what it's about (and since this is comp.lang.lisp, that's
rather likely!) the basics were described in his "SQL for Smarties"
column in 1996, and may be found on the web at:
<http://www.dbmsmag.com/9603d06.html>
<http://www.dbmsmag.com/9604d06.html>
<http://www.dbmsmag.com/9605d06.html>

I think I'm going to have to think about this one harder before
drawing conclusions... The "Nested Set Model" looks like something
that would be a *natural* extension to maintain automagically via the
"Object-Relational" systems.
--
MICROS~1: Where do you want to go today? Linux: Been there, done
that.
cbbr...@hex.net- <http://www.hex.net/~cbbrowne/lsf.html>

Christopher Browne

unread,
Apr 8, 2000, 3:00:00 AM4/8/00
to
Centuries ago, Nostradamus foresaw a time when Robert Monfera would say:

>Please consider answers below more as a defense of the relational model
>rather than an argument for choosing an RDBMS over an OODBMS. Data
>access should be transparent from the application side, and it requires
>less effort to achieve it with an OODBMS. The value of relational
>concepts is more general than the realm of RDBMS implementations (dare I
>say universal?), for example object oriented design owes it quite a
>bit. We should separate the concepts from their implementations,
>understand their merits and integrate them into our thinking rather than
>dismiss them because of their age and less than ideal
>implementations.

It's hard *not* to focus on implementations, particularly when it's
implementations that people will actually see.

>Frank A. Adrian wrote:
>
>> Data slicing and joining of objects did not have to be combined
>> with distinctons between in- and out-of-memory storage. Nor does one have
>> to invent a new syntax.
>
>This is right. The relational model itself and the SQL standards do not say
>anything about it, and implementations make very heavy use of memory,
>as well as facilitate buffering at various levels and in-memory tables
>or databases. It is an implementation detail.

Right.

>> But early versions of SQL did not provide
>> for anything sophisticated in the way of transaction control.
>
>This is strange. People like Codd were fairly clear on integrity
>constraints, and they imply atomicity of operations. If you speak of
>implementation, maybe someone has recollections on early DB2 features.

The over-arching problem is that if you look at Codd's "Relational
Principles," the bulk of them are largely ignored in the SQL
implementations.

>> Even today's implementations leave something to be desired (Can you
>> lock single records in most, or do most use page-locking, still?).
>
>These days even Microsoft's product supports row-level locking, besides
>DB2, Oracle and a number of other implementations.

Unfortunately, designing applications in the presence of multiple
possible locking policies makes the task complex. And bad programmers
compound this... More anon...

>> > Frank> Was the construction of a new model along with the idea of
>> > Frank> separating the objects used this model from all other in-memory
>> > Frank> objects really a good idea?
>> >
>> > yes, most certainly.
>>
>> We'll have to differ on this one.
>
>First, the idea was not the separation of objects into in-memory and
>out-of-memory objects. It is similar to Common Lisp in that the
>specification does not say where and how an object should be stored.
>The idea was to give a sound, standardized way of representing and
>manipulating information. Disk access is just practical to do if there
>are large amounts of data or information is not to be lost upon a crash.

In the early days, you didn't have enough memory to store the whole
DB. Actually, you *still* don't have enough RAM to store the *whole*
DB, but today the memory space is likely large enough to at least hold
substantial "working sets." From whence cometh things like
Caché...

>The real separation is that an enterprise is theoretically able to
>construct a data model utilized by multiple, heterogenous applications.
>If you were to implement a new analytical or reporting tool, would you
>wish that the data be locked in other proprietary application systems,
>or you would prefer having standardized access to a database?
>
>In practice, it is a great MOP exercise to integrate the concept of CLOS
>class with SQL's table or view, and there are existing implementations
>that do this.

Interesting.

>> > Frank> Or would work on the VM systems in place at the time to provide
>> > Frank> persistent object spaces with transparently maintained, use-based
>> > Frank> indices have been better?
>>
>> Let me explain. My take on the issue is that an RDB is little more than a
>> hopped-up, user-controlled VM system used to select and page in (possibly
>> partial) objects from a large data-storage space and, after the objects
>> are (possibly) updated, to page them out again.
>
>You don't seem to appreciate the benefits of the relational model - it
>does not even want to be a VM model. Think of recent in-memory SQL
>database implementations, which don't even use the disk except for maybe
>saving the image or writing a journal.

Indeed. Caché and TimesTen come to mind. So long as transaction
logs/journals are being pushed to disk forthwith, there's no forcible
need for the database to be treated as being "essentially on disk."

The web site surrounding <http://www.ispras.ru/~knizhnik/fastdb.html>
has several C++-based DBMSes, including:
a) FastDB, which has the DB in memory
b) GigaBASE, which extends FastDB to use paging.

Both offer transaction logging so that the data is always safely on
disk. (I don't know the quality of the implementation of this system,
but it is "open source," so you can at least look at the
implementation to glean ideas.)

>The relational model has much in common with Lisp philosophy:
>
>- It is self-descriptive, so that you can gain metadata like table names,
>relations and constraints _inside_ the system. It is analogous to
>CLOS and the metaobject protocol in Lisp.

With the problem that each SQL DBMS implements the metadata somewhat
differently.

>- It separates the notion of "how" from "what", the same way a Lisp
>array or a hash table may well reside on the disk if the implementation
>wants to do so. The SQL language itself is declarative rather than
>procedural. You can even call it functional.

Albeit with insufficient transparency. (One of those "Codd
Principles" that got lost along the way...)

>- It purports to facilitate and do the "right thing" - via achieving
>higher levels of normal forms, the data model becomes clearer and
>more disciplined. There should be a lot of similarity between entity
>relationship diagrams no matter if you got there via the relational way
>or the OOD way.

"More disciplined" is probably right; "clearer" is not always clear
:-).

>- Both support the concept of atomic operations that ensure a consistent
>state: compare transaction commit/rollback with unwind-protect, for
>example.

Interesting thought...

>If the value of SQL concepts isn't recognised, then one is going to
>reimplement half of the relational model, poorly.

That seems to be true for all of [SQL concepts, Common Lisp, UNIX].
Those that don't familiarize themselves with the implementations of
the past tend to make *BAD* mistakes that many have made before.

>> I'm not so much against the idea
>> of storing these things on disk, but the explicit user control (with a new
>> language, no less) strikes me as convoluted.
>
>SQL cannot exactly be called a new language. (I address explicit user
>control further down.)
>
>> As far as I can tell, the main
>> reason that OODBMS's have not caught on is inertia, poor integration with
>> current RDB systems (but you know whose fault I think that is), and
>> performance issues because they haven't had 25+ years to work out their
>> problems in this area.
>
>That 25+ years of experience plus the previous years of internal
>research at IBM must have accounted for something conceptually solid,
>but probably even they had to put in an effort to convince business to
>use the then new relational model.

Wasn't MRDS a Multics thing? :-)

>> Why do I have to explicitly tell a system where
>> indices go to get reasonable performance? The systems already have tools
>> to monitor retrieval patterns and feed the information back to me. Most
>> have advisory systems that recommend index placement. Why put me in the
>> loop at all?
>
>You are talking about implementation, not concepts. Implementations
>purporting to meet your demands for transparency already go the extra mile
>to analyze and transform queries, optimize access paths etc.
>
>A sufficiently smart RDBMS implementation should convert the heaps of
>statistics into index creations etc. I would not be surprised to see MS
>moving into that direction, trying to make things "easy" for users.
>
>Declarations, similar to ones in Lisp, are still needed so that the
>system knows you want to optimize for throughput or response time etc.
>Even an index creation can be perceived as a declaration, telling the
>system about expected patterns of use. How are you going to do it with
>an OODBMS? Don't you have to create collections like hash tables or
>trees to get adequate access performance?

Unfortunately, they all interact. If you start with a bad set of
algorithm choices, you're likely to get bad results.

If I plan to use hash tables, then I'll use different algorithms and
different "connections of data" than I would if I intended to use
sorted tables.

That's me; I at least have enough background to consider multiple data
structures.

[Here's the "anon" referred to earlier...]

But programmers that are Basically Clueless about this will wind up
creating code that assumes a particular data model underneath. The
"Real Programmers Don't Use Pascal" essay characterized this by the
notion that:
"FORTRAN programmers can write FORTRAN code in *any* language."

Unfortunately, what is more true than this is that "Bad programmers
can write Bad code in *any* language."

The one advantage that Lisp has is that it downright *scares* the bad
programmers, and so they don't even bother trying. [Mind you,
university students that get forced into doing a bit of Lisp for badly
framed courses *also* wind up writing awful Lisp...]
--
Rules of the Evil Overlord #18. "My undercover agents will not have
tattoos identifying them as members of my organization, nor will they
be required to wear military boots or adhere to any other dress
codes."
<http://www.eviloverlord.com/lists/overlord.html>
cbbr...@ntlug.org- <http://www.ntlug.org/~cbbrowne/lsf.html>

Reini Urban

unread,
Apr 8, 2000, 3:00:00 AM4/8/00
to
Kragen Sitaker wrote:
>Most Perl programmers are not skilled programmers. Perl makes it
>possible for them to do things they couldn't have done by hand, and
>makes it possible for them to do things more reliably and quickly than
>they could have done them by hand. It does not turn them into
>competent programmers.

>
>Getting something useful out of Lisp requires that you be at least a
>minimally competent programmer, so there are few Lisp programmers who
>are not at least minimally competent.

wrong.
To stretch the elisp point: AutoLISP programmers prefer AutoLISP over
Visual Basic because it is even simpler. You can only do less.

=> lisp is one of the easiest languages to learn.
--
Reini Urban
http://xarch.tu-graz.ac.at/autocad/news/faq/autolisp.html

Robert Monfera

unread,
Apr 9, 2000, 3:00:00 AM4/9/00
to

Christopher Browne wrote:

> But programmers that are Basically Clueless about this will wind up
> creating code that assumes a particular data model underneath.

This is what's great about spreadsheets - there is a uniform
representation of data and no choices have to be made, so the Basically
Clueless programmer (maybe a seasoned business person) does not have to
(isn't allowed to) make a decision. Thus programming would be easier if
it started with a bare spreadsheet (VisiCalc, 1-2-3, Excel style) and
evolved into more and more disciplined forms (through something like
Lotus Improv), enabling type and performance-related declarations on
regions and their automatic inference (like keeping subarrays always
sorted upon insertion to help binary search, or using a hash table). The
"macros" (written in CL with some utilities*, of course) would use
representation-independent functions, and at least the next, more
talented maintainer would not have to immediately change code describing
business logic; he would be free to review declarations first. This
way you have a system that merges the benefits of spreadsheets (ubiquity
and rapid prototyping), OOP and databases (modeling and discipline), GUI
(most interfaces are described as widgets in a grid), speed (native
compilation) and the abstraction power of Common Lisp (for superusers
and wizards, that is). The key is smooth, unobtrusive, evolutionary
advancement of the model.

* CL utilities would include a uniform tabular data structure upwards
compatible with lists, arrays and classes, and algorithms operating on
it, specialized with compiler macros for transparent speed. It would
sport heterogeneous multidimensional arrays (multidimensional hashing,
too) and sparse arrays.

Robert

Joe Celko

unread,
Apr 10, 2000, 3:00:00 AM4/10/00
to

>> Baker's main objection appears to be that hierarchical data is
cumbersome to work with in RDBMSs. It is, if you use the common
employee(emp_id, boss_id, ...) schema. Vendor extensions (such as
Oracle's CONNECT BY PRIOR, and similar constructs in Sybase and DB2)
alleviate this to some extent, and I believe something along those
lines is now part of the (generally not fully implemented) SQL92
standard. <<

No, there is no such construct in SQL-92, but there is a WITH operator
in SQL-99 that can be used for recursion. Only DB2 has it.
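
For illustration, a sketch of that recursive form against the
employee(emp_id, boss_id, ...) schema quoted above (the standard
spells it WITH RECURSIVE, while DB2 omits the RECURSIVE keyword; the
:myboss host variable is illustrative):

WITH RECURSIVE Subordinates (emp_id) AS
(SELECT emp_id -- the direct reports of one boss
 FROM employee
 WHERE boss_id = :myboss
 UNION ALL
 SELECT E.emp_id -- then their reports, one level at a time
 FROM employee AS E, Subordinates AS S
 WHERE E.boss_id = S.emp_id)
SELECT emp_id FROM Subordinates;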

>> However, if you use the nested-set model invented by Joe Celko, the
pain goes away: it allows fast and general retrieval (and aggregate
functions etc.), in one query, of complete trees. For some reason, this
nested set model isn't yet very well-known, but in my limited
experience, it simply renders the "trees are tricky" argument against
relational databases obsolete. <<

Before I cut and paste my stock reply on the nested set model, let me
remark that SQL is a set oriented language. The minute you start
drawing boxes-and-arrows diagrams to solve a problem, you are thinking
in terms of procedural control and list traversals. Instead, draw set
diagrams -- circles with elements inside. This is not a formal rule,
but a pretty good heuristic.

Another way of representing trees is to show them as nested sets.
Since SQL is a set oriented language, this is a better model than the
usual adjacency list approach you see in most text books. Let us
define a simple Personnel table like this, ignoring the left (lft) and
right (rgt) columns for now. This problem is always given with a
column for the employee and one for his boss in the textbooks:

This is a bad example, since it combines Personnel and Organizational
Chart data into one table. I did it this way to keep the post small;
imagine that the names are job titles if you want to be picky.

CREATE TABLE Personnel
(emp CHAR(10) PRIMARY KEY,
 boss CHAR(10), -- this column is unneeded & denormalizes the table
 salary DECIMAL(6,2) NOT NULL,
 lft INTEGER NOT NULL,
 rgt INTEGER NOT NULL);

Personnel
emp     boss     salary  lft  rgt
=================================
Albert  NULL    1000.00    1   12
Bert    Albert   900.00    2    3
Chuck   Albert   900.00    4   11
Donna   Chuck    800.00    5    6
Eddie   Chuck    700.00    7    8
Fred    Chuck    600.00    9   10

which would look like this as a directed graph:

                Albert (1,12)
                /           \
        Bert (2,3)      Chuck (4,11)
                       /      |      \
             Donna (5,6) Eddie (7,8) Fred (9,10)

This (without the lft and rgt columns) is called the adjacency list
model, after the graph theory technique of the same name; the pairs of
nodes are adjacent to each other. The problem with the adjacency list
model is that the boss and employee columns are the same kind of thing
(i.e. names of personnel), and therefore should be shown in only one
column in a normalized table. To prove that this is not normalized,
assume that "Chuck" changes his name to "Charles"; you have to change
his name in both columns and several places. The defining
characteristic of a normalized table is that you have one fact, one
place, one time.
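
A sketch of the update anomaly this causes (the statements are
illustrative):

UPDATE Personnel
SET emp = 'Charles'
WHERE emp = 'Chuck'; -- once where he appears as an employee

UPDATE Personnel
SET boss = 'Charles'
WHERE boss = 'Chuck'; -- and again in every row where he is the boss

Skip either statement and the table contradicts itself.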

What we should have is one table for the organizational chart and one
for the personnel, so that you can separate people from their
positions. Ignore that and let me use the names as if they are
employee identifiers.

To show a tree as nested sets, replace the nodes with ovals, then nest
subordinate ovals inside each other. The root will be the largest oval
and will contain every other node. The leaf nodes will be the
innermost ovals with nothing else inside them and the nesting will show
the hierarchical relationship. The rgt and lft columns (I cannot use
the reserved words LEFT and RIGHT in SQL) are what show the nesting.

If that mental model does not work, then imagine a little worm crawling
anti-clockwise along the tree. Every time he gets to the left or right
side of a node, he numbers it. The worm stops when he gets all the way
around the tree and back to the top.

This is a natural way to model a parts explosion, since a final
assembly is made of physically nested assemblies that finally break
down into separate parts.

At this point, the boss column is both redundant and denormalized, so
it can be dropped. Also, note that the tree structure can be kept in
one table and all the information about a node can be put in a second
table and they can be joined on employee number for queries.
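
A sketch of that two-table separation (the OrgChart table and its
column names are illustrative, not from the original design):

CREATE TABLE OrgChart
(emp CHAR(10) PRIMARY KEY, -- the node itself
 lft INTEGER NOT NULL,
 rgt INTEGER NOT NULL);

-- everything else about a person stays in Personnel;
-- queries join the two tables on the employee identifier:
SELECT O.lft, O.rgt, P.salary
FROM OrgChart AS O, Personnel AS P
WHERE O.emp = P.emp;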

To convert the graph into a nested set model, send the worm described
above on its trip: starting at the root, it makes a complete circuit
of the tree, and every time it passes the left or right side of a node
it writes down its counter and increments it. Each node thus gets two
numbers, one for the left side and one for the right. Computer Science
majors will recognize this as a modified preorder tree traversal
algorithm.

Finally, drop the unneeded Personnel.boss column, which used to
represent the edges of the graph.

This has some predictable results that we can use for building
queries. The root is always (lft = 1, rgt = 2 * (SELECT COUNT(*)
FROM Personnel)); leaf nodes always have (lft + 1 = rgt); subtrees
are defined by the BETWEEN predicate; etc. Here are two common queries
which can be used to build others:

1. An employee and all their Supervisors, no matter how deep the tree.

SELECT P2.*
FROM Personnel AS P1, Personnel AS P2
WHERE P1.lft BETWEEN P2.lft AND P2.rgt
AND P1.emp = :myemployee;

2. The employee and all subordinates. There is a nice symmetry here.

SELECT P2.*
FROM Personnel AS P1, Personnel AS P2
WHERE P1.lft BETWEEN P2.lft AND P2.rgt
AND P2.emp = :myemployee;

3. Add a GROUP BY and aggregate functions to these basic queries and
you have hierarchical reports. For example, the total salaries which
each employee controls:

SELECT P2.emp, SUM(P1.salary)
FROM Personnel AS P1, Personnel AS P2
WHERE P1.lft BETWEEN P2.lft AND P2.rgt
GROUP BY P2.emp;

4. The level of each node in the hierarchy is

SELECT COUNT(P2.emp) AS level, P1.emp
FROM Personnel AS P1, Personnel AS P2
WHERE P1.lft BETWEEN P2.lft AND P2.rgt
GROUP BY P1.emp, P1.lft
ORDER BY P1.lft;

This will print out the indented listing of the tree structure.
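
Two more one-liners fall out of the predictable results noted earlier
(a sketch):

-- leaf nodes: nothing nested inside them
SELECT emp
FROM Personnel
WHERE lft + 1 = rgt;

-- the root: its numbering starts the circuit
SELECT emp
FROM Personnel
WHERE lft = 1;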

Nested set models will be two to three orders of magnitude faster than
the adjacency list model.

For details, see chapter 29 in my book JOE CELKO'S SQL FOR SMARTIES
second edition (Morgan-Kaufmann, 1999).

--CELKO--
Joe Celko, SQL and Database Consultant



Arvid Grøtting

unread,
Apr 10, 2000, 3:00:00 AM4/10/00
to
Joe Celko <71062...@compuserve.com> writes:

[_very_ nice model elided]

> Nested set models will be two to three orders of magnitude faster than
> the adjacency list model.

...for queries.

For a large nested set, how expensive are insert, update and delete
operations? How does one ensure insert/delete/update integrity? What
happens (to the speed of the database) when multiple clients attempt
to insert, update or delete at the same time?
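
For scale, here is a sketch of the standard insertion technique (not
from Celko's post; :bossrgt is the rgt value of the new node's parent,
read beforehand, and :newemp/:newsalary are illustrative): every
lft/rgt value to the right of the gap must shift, so one insert
touches O(n) rows and effectively serializes writers.

-- open a two-number gap just inside the parent's rgt
UPDATE Personnel
SET lft = CASE WHEN lft > :bossrgt THEN lft + 2 ELSE lft END,
    rgt = CASE WHEN rgt >= :bossrgt THEN rgt + 2 ELSE rgt END
WHERE rgt >= :bossrgt;

-- the new rightmost child slots into the gap
INSERT INTO Personnel (emp, salary, lft, rgt)
VALUES (:newemp, :newsalary, :bossrgt, :bossrgt + 1);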

--

Arvid

Rahul Jain

unread,
Apr 10, 2000, 3:00:00 AM4/10/00
to
In article <38ef4e4a.20693105@judy> posted on Saturday, April 8, 2000
10:17 AM, rur...@x-ray.at (Reini Urban) wrote:

> => lisp is one of the easiest languages to learn.

A day to learn, a lifetime to master. Like Unix.
That's why I love them both :)

Well ok... unix takes more than a day to learn :)
but the issue is the power and flexibility, both of which are
rampant in both lisp and unix.

--
-> -\-=-=-=-=-=-=-=-=-=-/^\-=-=-=<*><*>=-=-=-/^\-=-=-=-=-=-=-=-=-=-/- <-
-> -/-=-=-=-=-=-=-=-=-=/ { Rahul -<>- Jain } \=-=-=-=-=-=-=-=-=-\- <-
-> -\- "I never could get the hang of Thursdays." - HHGTTG by DNA -/- <-
-> -/- http://photino.sid.rice.edu/ -=- mailto:rahul...@usa.net -\- <-
|--|--------|--------------|----|-------------|------|---------|-----|-|
Version 11.423.999.210020101.23.50110101.042
(c)1996-2000, All rights reserved. Disclaimer available upon request.


Christopher Browne

unread,
Apr 11, 2000, 3:00:00 AM4/11/00
to
Centuries ago, Nostradamus foresaw a time when Robert Monfera would say:
>Christopher Browne wrote:
>> But programmers that are Basically Clueless about this will wind up
>> creating code that assumes a particular data model underneath.
>
>This is what's great about spreadsheets - there is a uniform
>representation of data and no choices have to be made, so the Basically
>Clueless programmer (maybe a seasoned business person) does not have to
>(isn't allowed to) make a decision.

Interesting; I've some other opinions on the demerits of spreadsheets
<http://www.hex.net/~cbbrowne/spreadsheets.html>.

>Thus programming would be easier if
>it started with a bare spreadsheet (VisiCalc, 1-2-3, Excel style) and
>evolved into more and more disciplined forms (through something like
>Lotus Improv), enabling type and performance-related declarations on
>regions and their automatic inference (like keeping subarrays always
>sorted upon insertion to help binary search or use a hash table). The
>"macros" (written with CL with some utilities*, of course) would use
>representation-independent functions, and at least the next, more
>talented maintainer would not have to immediately change code describing
>business logic, and he would be free to review declarations first. This
>way you have a system that merges the benefits of spreadsheets (ubiquity
>and rapid prototyping), OOP and databases (modeling and discipline), GUI
>(most interfaces are described as widgets in a grid), speed (native
>compilation) and the abstraction power of Common Lisp (for superusers
>and wizards, that is). The key is smooth, unobtrusive, evolutionary
>advancement of the model.
>
>* CL utilities would include a uniform tabular data structure being
>upwards compatible with lists, arrays and classes, and algorithms
>operating on it, specialized with compiler macros for transparent
>speed. It would sport heterogeneous multidimensional arrays
>(multidimensional hashing, too) and sparse arrays.

If things evolved in that direction, that would be a good thing.

On The Other Hand, when Lotus made spreadsheets "programmable," they
came up with the Lotus Command Language, and people actually wrote
applications using the combination of that and the "macro recording"
system, which is roughly equivalent to programming GNU Emacs using
*its* "macro recorder."

Summary of the Summary: When there's no visible language, people wind
up writing programs that Really Suck Bad. As in, "Suck So Bad that
they could pull matter out of a black hole."
--
Spontaneous Order
"The Austrian Economists discovered that control and chaos were
actually on the same side, and that neither is a viable way to get
smart."
-- Mark Miller
cbbr...@hex.net- <http://www.hex.net/~cbbrowne/lsf.html>

Kragen Sitaker

unread,
Apr 21, 2000, 3:00:00 AM4/21/00
to
In article <38e4b35f.3c5e$3...@news.op.net>,
Mark-Jason Dominus <m...@plover.com> wrote:
>In article <l2TE4.13817$3g5.1...@tw11.nn.bcandid.com>,
>Kragen Sitaker <kra...@dnaco.net> wrote:
>immense badness. ``Look,'' I would say. ``If Perl's garbage
>collection is so great, imagine how much greater it would be if it
>were actually state of the art.''

I'm not sure --- Perl's garbage collector does have the advantages of
being reliable, comprehensible, and deterministic, as far as it goes
(which does not include cyclic data structures). These are the
advantages ascribed to manual allocation.

>In the 1960s there were big language wars about recursion; people
>would tell you that recursion was unnecessary (because it can always
>be simulated with iterative methods) and that it is inefficient.

I wrote a tail-recursive routine in C recently. A friend of mine tried
to optimize it by converting it to an iterative routine. It got
slower. I haven't investigated why yet. :)

>I would not be surprised if in twenty years garbage collection
>is in the mainstream the way recursion is now, and the idea of a
>GC-less general-purpose programming language is laughable.

I think you are right.

>Anyway, there is some hope that Perl might someday get a better
>garbage collector. Several people are talking about it, and Brad Kuhn
>told me a few months ago that some guy he knew in Cincinnati was
>looking into putting in the Boehm garbage collector.

That would be me, and "looking into" is much too serious a term to
describe what I did. As I recall it, I said, "Wouldn't it be nice if
someone linked the Boehm GC into Perl and stripped out all the
refcounting cruft?"; Brad said, "You should do that," and I responded,
"Well, maybe I will." Or something to that effect.

> Every time
>someone appears on the perl developers' list with some idea that
>relies on any specific GC semantics, Sarathy warns them that they
>cannot do that because Perl might someday get a better garbage
>collector. Certainly a lot of people would like to see the
>reference counting go away.

It will break DESTROY: with reference counting gone, destructors would
no longer run at a predictable moment.
--
<kra...@pobox.com> Kragen Sitaker <http://www.pobox.com/~kragen/>
The Internet stock bubble didn't burst on 1999-11-08. Hurrah!
<URL:http://www.pobox.com/~kragen/bubble.html>
The power didn't go out on 2000-01-01 either. :)

Bruce Tobin

unread,
Apr 21, 2000, 3:00:00 AM4/21/00
to

"Tim Bradshaw" <t...@cley.com> wrote in message
news:ey3r9cr...@cley.com...
> * Mark-Jason Dominus wrote:
> > You used to see people saying the same things
> > about garbage collection that they said about recursion. It isn't
> > necessary, it is inefficient, it is only available in ivory-tower
> > languages that are not suited for doing real work, blah blah blah.
>
> Surely Java is near-conclusive proof of this point?


Have you clocked a Java implementation lately? HotSpot is unbelievably
fast.


Scott Ribe

unread,
Apr 21, 2000, 3:00:00 AM4/21/00
to
> "Tim Bradshaw" <t...@cley.com> wrote in message
> news:ey3r9cr...@cley.com...
> > Surely Java is near-conclusive proof of this point?
>

How do you single out one feature of a language to blame for performance
problems? Do you have any idea how much time a "typical" (whatever that
would mean to you) Java program spends in garbage collection?

Of course you don't. I knew the answer to THAT question before I asked
it ;-) Garbage collection is most likely NOT the reason that Java is so
painfully slow!

David Bakhash

unread,
Apr 21, 2000, 3:00:00 AM4/21/00
to
"Bruce Tobin" <bto...@columbus.rr.com> writes:

> Have you clocked a Java implementation lately? HotSpot is unbelievably
> fast.

No. I personally haven't, but I visited the pages about it and read
the hype. Have you actually tried it? Is it really comparable to
C++ applications written similarly?

dave

Jon S Anthony

unread,
Apr 21, 2000, 3:00:00 AM4/21/00
to
Bruce Tobin wrote:
>
> "Tim Bradshaw" <t...@cley.com> wrote in message
> news:ey3r9cr...@cley.com...
> > * Mark-Jason Dominus wrote:
> > > You used to see people saying the same things
> > > about garbage collection that they said about recursion. It isn't
> > > necessary, it is inefficient, it is only available in ivory-tower
> > > languages that are not suited for doing real work, blah blah blah.
> >
> > Surely Java is near-conclusive proof of this point?
>
> Have you clocked a Java implementation lately? HotSpot is unbelievably
> fast.

Yes. On multiple platforms. It sucks. You can't blame GC for this.
You can blame the incredibly piss-poor implementations from Sun.

/Jon

--
Jon Anthony
Synquiry Technologies, Ltd. Belmont, MA 02478, 617.484.3383
"Nightmares - Ha! The way my life's been going lately,
Who'd notice?" -- Londo Mollari

Jon S Anthony

unread,
Apr 21, 2000, 3:00:00 AM4/21/00
to
David Bakhash wrote:

>
> "Bruce Tobin" <bto...@columbus.rr.com> writes:
>
> > Have you clocked a Java implementation lately? HotSpot is unbelievably
> > fast.
>
> no. I personally havn't but I visited the pages about it, and read
> the hype. Have you actually tried it?

Answering for myself: yes.

> is it really comparable to
> C++ applications written similarly?

No. More to the point, it is nowhere near comparable to CL (which _is_
comparable to C++).

David Combs

unread,
Apr 24, 2000, 3:00:00 AM4/24/00
to
In article <hu%L4.6300$q8.10...@news-east.usenetserver.com>,
Kragen Sitaker <kra...@dnaco.net> wrote:
>In article <38e4b35f.3c5e$3...@news.op.net>,
>Mark-Jason Dominus <m...@plover.com> wrote:
>>In article <l2TE4.13817$3g5.1...@tw11.nn.bcandid.com>,
>>Kragen Sitaker <kra...@dnaco.net> wrote:
>>immense badness. ``Look,'' I would say. ``If Perl's garbage
>>collection is so great, imagine how much greater it would be if it
>>were actually state of the art.''
>
SNIP

>
>>I would not be surprised if in twenty years garbage collection
>>is in the mainstream the way recursion is now, and the idea of a
>>GC-less general-purpose programming language is laughable.
>

If this newsgroup existed 20 years ago (did ANY newsgroups
exist then?), we would have seen that prediction THEN TOO.

Maybe THIRTY years ago.

Don't hold your breath!

----

The language I use, MAINSAIL (MAchine INdependent SAIL (Stanford A.I.
Language)), from long, long, long ago, has always had "real" gc.

(I've been using it since 1982, first on the DEC 2060, then
cross-compiled to the VAX 780, then (finally!) to the Sun-3, and now
SPARC.)

----


I do agree: how can anyone even CONSIDER using a non-gc
language? I don't use C++, but I have read lots of books
about it (e.g. the annotated reference manual), and about patterns
too, and it seems to me that most of this need for wrappers
within wrappers within ... within patterns within patterns
comes from the god damn destructors and the question of who is
responsible for getting rid of things. HORRIBLE!


Just my opinion, because I don't (and won't) use the language!

David

Rob Warnock

unread,
Apr 24, 2000, 3:00:00 AM4/24/00
to
David Combs <dkc...@netcom.com> wrote:
+---------------
| If this newsgroup existed 20 years ago (did ANY newsgroups exist them?)...
+---------------

Yes, just. From <URL:http://www.faqs.org/faqs/usenet/software/part1/>:

Usenet came into being in late 1979, shortly after the release of
V7 Unix with UUCP... At the beginning of 1980 the network consisted
of ["unc" and "duke"] and "phs" (another machine at Duke), and was
described at the January Usenix conference. ...[with] further
modifications, [] this became the "A" news release.

Other than Unix itself, the Usenet software is possibly one of the great
successes of open-source software. By 1981, it was in *very* wide use, and
the completely rewritten "B" News was out. See the FAQ more very many more
details (such as dates for NNTP, "C" News, & INN).

[As an interesting aside, also by 1981, several companies (including my
employer at the time) were already using netnews heavily for internal
communications among engineering & test groups.]


-Rob

-----
Rob Warnock, 41L-955 rp...@sgi.com
Applied Networking http://reality.sgi.com/rpw3/
Silicon Graphics, Inc. Phone: 650-933-1673
1600 Amphitheatre Pkwy. PP-ASEL-IA
Mountain View, CA 94043

Flemming Gram Christensen

unread,
Apr 24, 2000, 3:00:00 AM4/24/00
to
Jon S Anthony <j...@synquiry.com> writes:

> No. More to the point, it is no where near comparable to CL (which _is_
> comparable to C++).
>

I am very interested in this. Do you have any pointers to
articles that measure this?


Regards
Flemming

Paolo Amoroso

unread,
Apr 24, 2000, 3:00:00 AM4/24/00
to
On 24 Apr 2000 03:02:56 GMT, rp...@rigden.engr.sgi.com (Rob Warnock) wrote:

> [As an interesting aside, also by 1981, several companies (including my
> employer at the time) were already using netnews heavily for internal
> communications among engineering & test groups.]

Your employer's marketing guys missed the opportunity to be the first to
call such a thing an "intranet" :)


Paolo
--
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/

Jon S Anthony

unread,
Apr 24, 2000, 3:00:00 AM4/24/00
to

I'm speaking of various alternative implementations, in several
languages, that we've done here for two significant areas of
functionality in our application. These were done not as a "study" but
more along the lines of "the school of hard knocks".

Rob Warnock

unread,
Apr 25, 2000, 3:00:00 AM4/25/00
to
Paolo Amoroso <amo...@mclink.it> wrote:
+---------------
| rp...@rigden.engr.sgi.com (Rob Warnock) wrote:
| > by 1981 ... were already using netnews heavily for internal
| > communications among engineering & test groups.
|
| Your employer's marketing guys missed the opportunity to be the first to
| call such a thing an "intranet" :)
+---------------

Yeah, well, I never liked that neologism anyway...
Calling it a "private internet" (lower-case "i")
seemed quite adequate to me.
