O'Reilly subjugated to the Lisp juggernaut (well, almost ;-)


Kenny Tilton

Jan 3, 2004, 12:37:30 PM

Sebastian Stern wrote:

> Great things are afoot. O'Reilly's usual anti-devotion to anything Lisp
> notwithstanding, I just saw on Paul Graham's website that O'Reilly
> will publish Paul Graham's third book in May 2004, called
> 'Hackers and Painters', collecting the essays on his webpage, and
> more. This is good news indeed.
>
> As a totally unrelated side note, the Road to Lisp Survey seems to be
> broken. It only lists a small fraction of all the respondents. I hope
> it's not that 3l33t anonymous CLiki deleter again.

I just noticed that. Checking the page source, it seems the /(...)
cross-referencing mechanism is broken in two ways: it returns the first
line of a page instead of the page link, and it finds at most seven or
so matches.

Ping Mr. Barlow? Or the ALU site maintainers? Or...?

kt

--
http://tilton-technology.com

Why Lisp? http://alu.cliki.net/RtL%20Highlight%20Film

Your Project Here! http://alu.cliki.net/Industry%20Application

Christophe Rhodes

Jan 3, 2004, 12:58:53 PM

Kenny Tilton <kti...@nyc.rr.com> writes:

> Ping Mr. Barlow? Or the ALU site maintainers? Or...?

Pinging Mr Barlow is probably best accomplished by e-mail, as I
suspect his time for USENET is limited.

Christophe
--
http://www-jcsu.jesus.cam.ac.uk/~csr21/ +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%") (pprint #36rJesusCollegeCambridge)

Kenny Tilton

Jan 3, 2004, 11:13:44 PM

Sebastian Stern wrote:

> Great things are afoot. O'Reilly's usual anti-devotion to anything Lisp
> notwithstanding, I just saw on Paul Graham's website that O'Reilly
> will publish Paul Graham's third book in May 2004, called
> 'Hackers and Painters', collecting the essays on his webpage, and
> more. This is good news indeed.

I forgot to jump up and down with glee and then fall over laughing at
this development. :)

Sashank Varma

Jan 5, 2004, 1:41:09 PM

In article <ad7d32de.04010...@posting.google.com>,
sste...@yahoo.com (Sebastian Stern) wrote:

> Great things are afoot. O'Reilly's usual anti-devotion to anything Lisp
> notwithstanding, I just saw on Paul Graham's website that O'Reilly
> will publish Paul Graham's third book in May 2004, called
> 'Hackers and Painters', collecting the essays on his webpage, and
> more. This is good news indeed.

Graham's essays are much-discussed on Slashdot, whose
members claim to buy O'Reilly books by the cartful.
Pretty crafty decision...

Ryan Kaulakis

Jan 7, 2004, 5:58:15 PM

Does this perhaps mean that O'Reilly will begin publishing books on Lisp
now? I'd love to see a book that does the same thing for CL as "Learning
Perl" did for Perl.

Edi Weitz

Jan 7, 2004, 1:45:51 PM

On Wed, 07 Jan 2004 17:58:15 -0500, Ryan Kaulakis <rmk...@psu.edu> wrote:

> Does this perhaps mean that O'Reilly will begin publishing books on
> Lisp now?

I doubt it. My forecast is that Paul Graham and O'Reilly together (for
different reasons) will make sure that this is not seen as a book
about Lisp but rather as something much more general. O'Reilly will
still have the "We don't publish books about Lisp and TeX" note on
their website.

> I'd love to see a book that does the same thing for CL as "Learning
> Perl" did for Perl.

If it doesn't have to be O'Reilly, I think that Peter's book[1] looks
like it'll be able to take that place in the Lisp world of the 21st
century.

Edi.

[1] <http://www.gigamonkeys.com/book/>

Tayssir John Gabbour

Jan 7, 2004, 6:38:30 PM

Edi Weitz <e...@agharta.de> wrote in message news:<m3ptdvk...@bird.agharta.de>...

> I doubt it. My forecast is that Paul Graham and O'Reilly together (for
> different reasons) will make sure that this is not seen as a book
> about Lisp but rather as something much more general. O'Reilly will
> still have the "We don't publish books about Lisp and TeX" note on
> their website.

It's probably just the observation that the Lisp community already has
good docs, and O'Reilly is a bit predictable. They seem mostly to look
for an open-source implementation that leaves a documentation vacuum
they can fill, or a market where the highbrow publishers don't really
compete and the lowbrow ones have bad reputations.

Come to think of it, I should start keeping tabs on Apress's forums.
http://forums.apress.com/
They make O'Reilly look pretty conservative. Must be exciting to work
there.

Karl A. Krueger

Jan 7, 2004, 8:51:57 PM

Tayssir John Gabbour <tayss...@yahoo.com> wrote:
> It's probably just the observation that the Lisp community already has
> good docs, and O'Reilly is a bit predictable. They seem mostly to look
> for an open-source implementation that leaves a documentation vacuum
> they can fill, or a market where the highbrow publishers don't really
> compete and the lowbrow ones have bad reputations.

That's a little unfair to O'Reilly. They've also published a number of
editions of open-source documentation -- that is, documentation that is
freely redistributable online as well as being documentation for open-
source projects. The Python Cookbook, the Linux Network Administrator's
Guide, and a few others fall into this category.

As someone with a shelf full of O'Reilly books, who's still learning CL,
the tome I would like to buy from them would be, in the O'Reilly style,
entitled _Programming cirCLe_ -- and would come with the cirCLe CD,
first edition. (Of course, I would buy that from someone other than
O'Reilly too!)

A useful CL book for me would not duplicate information I can find in
the HyperSpec or CLTL2. It would not tell me how the language is put
together, but rather how I can put together programs in the language.
It would cover useful open-source libraries, like the ones included on
the CD -- SBCL sockets and threads, araneida, uncommonsql, cl-ppcre,
clx; as well as UFFI and ASDF.
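
For instance, the kind of snippet such a book might open a CL-PPCRE
chapter with (just a sketch -- SCAN-TO-STRINGS is CL-PPCRE's real entry
point, but how the library gets loaded depends on your setup):

  (asdf:operate 'asdf:load-op :cl-ppcre)  ; or however you load systems
  (cl-ppcre:scan-to-strings "(\\w+)@(\\w+)" "user@example.edu")
  ;; => "user@example", #("user" "example")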

--
Karl A. Krueger <kkru...@example.edu>
Woods Hole Oceanographic Institution
Email address is spamtrapped. s/example/whoi/
"Outlook not so good." -- Magic 8-Ball Software Reviews

Tayssir John Gabbour

Jan 8, 2004, 10:40:49 AM

"Karl A. Krueger" <kkru...@example.edu> wrote in message news:<btid3r$a80$1...@baldur.whoi.edu>...

> Tayssir John Gabbour <tayss...@yahoo.com> wrote:
> > It's probably just the observation that the Lisp community already has
> > good docs, and O'Reilly is a bit predictable. They seem mostly to look
> > for an open-source implementation that leaves a documentation vacuum
> > they can fill, or a market where the highbrow publishers don't really
> > compete and the lowbrow ones have bad reputations.
>
> That's a little unfair to O'Reilly. They've also published a number of
> editions of open-source documentation -- that is, documentation which is
> freely redistributable online as well as being documentation for open-
> source projects.

Hmm, though I said "a bit predictable," I guess you're right. Let's
pretend I also said that though they're really smart and experimental,
they do have some organization. I don't think they're ashamed to put
out Lisp/TeX books, just that they must get lots of proposals for
them, and they don't know how to do much with those markets. I think
it's possibly meaningful that Apress is releasing 3 Lisp books, not
just one that will sort of be ignored in stores.

Of course, I'm just learning about what the big computer publishers
do, so it's just my 2c... (and I visited the O'Reilly branch in my
city and noticed how happy the people working there seemed, so I
don't intend to badmouth them.)

Madhu

Jan 10, 2004, 5:16:54 AM

Helu
* Edi Weitz in <m3ptdvk...@bird.agharta.de> :

| On Wed, 07 Jan 2004 17:58:15 -0500, Ryan Kaulakis <rmk...@psu.edu> wrote:
|
|> Does this perhaps mean that O'Reilly will begin publishing books on
|> Lisp now? I'd love to see a book that does the same thing for CL
|> as "Learning Perl" did for Perl.
|
| I doubt it. My forecast is that Paul Graham and O'Reilly together (for
| different reasons) will make sure that this is not seen as a book
| about Lisp but rather as something much more general. O'Reilly will
| still have the "We don't publish books about Lisp and TeX" note on
| their website.

Not sure if their reasons have to be entirely different:

If Mr. O'Reilly wishes to cater to his existing clientele by selling
them books on Mr. Graham's next new language (which would be better
than all earlier languages) ...

... and do for that language what "Learning Perl" did for Perl ...

--
Regards
Madhu :->

Damien Kick

Jan 13, 2004, 1:00:40 PM

"Karl A. Krueger" <kkru...@example.edu> writes:

> A useful CL book for me would not duplicate information I can find
> in the HyperSpec or CLTL2. It would not tell me how the language is
> put together, but rather how I can put together programs in the
> language. It would cover useful open-source libraries, like the
> ones included on the CD -- SBCL sockets and threads, araneida,
> uncommonsql, cl-ppcre, clx; as well as UFFI and ASDF.

For example, I half-remember a c.l.l article from someone about how
they frequently create a KILLER-APP-USER package when diddling with
stuff from KILLER-APP interactively, as opposed to simply playing with
KILLER-APP in CL-USER. It was, to me, a novel way to think about using
packages in the REPL, because until that time I had only thought of
packages as something of a CL version of, for example, C++ namespaces;
i.e., one uses them rather statically. The description of what packages
are can be found in the HyperSpec, but this kind of practical,
this-is-how-one-might-use-the-feature advice would be great to find in
a book for newbies.
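
Something like this, with KILLER-APP standing in for whatever
hypothetical library is being explored:

  (defpackage #:killer-app-user
    (:use #:cl #:killer-app))
  (in-package #:killer-app-user)
  ;; KILLER-APP's exported symbols are now directly visible at the
  ;; REPL, and experimental definitions stay out of CL-USER.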

I would also love to see the Lisp equivalent of something like
_Large-Scale C++ Software Design_; i.e., a book that treats the subject
of developing large-scale applications with Common Lisp, though I would
imagine that it is the many idiosyncrasies of C++ and the relative
comprehensiveness of CL that have produced the need for the former and
the lack of the latter. However, I would imagine that such a beast
would definitely need to mention packages, DEFSYSTEM, ASDF, good source
control for CL, etc. How does one organize the layout of Lisp code
into files? How does this relate to the use of something like DEFSYSTEM
or ASDF and packages? How does this relate to compile-time
dependencies, load-time dependencies, etc.? What about cyclical
dependencies and their interaction with testability; is there a unique
CL angle to this subject, or is it really the more general design
principle that less coupling leads to better testability? Etc., etc.,
etc.
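
Even a minimal ASDF system definition makes those questions concrete
(all names here are hypothetical):

  (asdf:defsystem #:killer-app
    :depends-on (#:cl-ppcre)         ; a load-time library dependency
    :components ((:file "packages")  ; DEFPACKAGE forms come first
                 (:file "utils" :depends-on ("packages"))
                 (:file "main"  :depends-on ("utils"))))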

Erik Naggum

Jan 14, 2004, 2:33:27 AM

* Damien Kick

| I would also love to see the Lisp equivalent of something like
| _Large-Scale C++ Software Design_; i.e., a book that treats the
| subject of developing large-scale applications with Common Lisp,
| though I would imagine that it is the many idiosyncrasies of C++ and
| the relative comprehensiveness of CL that have produced the need for
| the former and the lack of the latter.

The most important difference would be that a large-scale Common
Lisp project would increasingly be written in an application-domain
language, while a large-scale C++ project would continue to be
written in C++.

| [...] is there a unique CL angle to this subject, or is it really the
| more general design principle that less coupling leads to better
| testability?

There is a unique Common Lisp angle which I believe only Scheme will
share with it: Instead of working in the language delivered to you
by some vendor, you will generally build your own development system
in possibly multiple stages and then compile the application at some
point in this multi-stage build process, and it might not even be
the last step that produces the application as users will see it.
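
A toy illustration of the direction (every name here is made up): an
early stage defines a construct, and later stages are written in terms
of it rather than in bare Common Lisp:

  (defvar *states* (make-hash-table))

  (defmacro define-state (name &body transitions)
    ;; one construct of a small application-domain language
    `(setf (gethash ',name *states*)
           ',(loop for (event target) in transitions
                   collect (cons event target))))

  (define-state idle    (start running))
  (define-state running (stop  idle))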

If you want to do anything remotely similar to this in any other
environment, regardless of how tightly coupled it is with the core
language, you will have to resort to a multitude of tools that are
loosely coupled instead of closely integrated. Brilliant inventions
like the m4 macro processor under Unix (enter Gary Larson-mode and
visualize the /rejected/ m1, m2, and m3) demonstrate what happens to
large-scale projects like sendmail. Common Lisp itself grew out of
several large projects (even by today's standards), and the various
Lisp machines demonstrated what happened in the Lisp world when they
needed to grow their own tools.

Many people complain about the difficulty of integrating Common Lisp
with other programming languages, but in a large-scale project, you
will have different people working in different languages to talk to
the different parts of the real world that it would be even more
work to include in the Common Lisp world, and they are free to build
the interface any way they like. Small-scale projects face tougher
restrictions in this regard because the same programmers will work
in the different languages and the kind of scaffolding that you set
up when building something that cannot support itself until it nears
completion will appear to consume a lot of resources that some naïve
people think ought not to be consumed, and so they use Perl or some
other horrible crock, instead.

In many important ways, Common Lisp becomes a different language
when the project becomes large enough, but those who have found
Common Lisp to be to their liking in small projects will need a bit
of attitude readjustment to cope with the products of other people's
small projects in their large project. Common Lisp works so well on
a small scale that many people strongly resist the differences in a
large-scale project, and much of Common Lisp's perception problem is
that it works too well on a small scale. It is self-evident to any
working brain that C++ does not work well on a small scale, so it is
easy to dupe oneself into believing that it is because it works so
well on a large scale, but there is no credible evidence that it in
fact does work on a large scale. That people /make/ something work
can never be evidence of anything. The Lisp world has an incredibly
strong proof-of-concept advantage in the Lisp machines, which died
for a lot of reasons unrelated to the scalability of the languages;
a careful reading of the history will reveal that they died because
they were /too successful/. (I have found that the worst possible
thing you could do wrong in this world is to give people something
that is more powerful than they are prepared to understand. Back
when I believed that SGML was a brilliant idea, I did not understand
that the people who were the intended users were completely unable
to understand it, and that only those who were stupid enough not to
realize it in time would continue to work with it, and so they sat
there with their excellent document production system with a clever
markup system and thought it had to be useful for something grander,
and now we have XML, a non-solution to a non-problem so brilliant
that m4 no longer seems like a prank. We really need Gary Larson-
style cartoons on the history of computer science.)

The only other large-scale projects that one should look at when
trying to see the Common Lisp way, is entire programming systems,
like the complete Debian GNU/Linux development system. It is not
the individual languages or tools or programs that make it up, but
a general sense of the entire system. Common Lisp is how such a
system would be if it were done right. (Those who think Windows is
a usable system may of course feel free to believe that it is also
an instance of a large-scale project, but the distance to how it
would be if done right may just be too large to visualize.)

--
Erik Naggum | Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.


Frode Vatvedt Fjeld

Jan 14, 2004, 2:16:11 PM

Hi Erik, nice to see you back here.

Erik Naggum <er...@naggum.no> writes:

> [..] The Lisp world has an incredibly strong proof-of-concept
> advantage in the Lisp machines, which died for a lot of reasons
> unrelated to the scalability of the languages; a careful reading of
> the history will reveal that they died because they were /too
> successful/. [..]

Could you expand on this observation? I mean, how were they too
successful, and how did this cause their deaths?

--
Frode Vatvedt Fjeld

Christopher Browne

Jan 14, 2004, 9:24:39 PM

In an attempt to throw the authorities off his trail, Erik Naggum <er...@naggum.no> transmitted:

> There is a unique Common Lisp angle which I believe only Scheme will
> share with it: Instead of working in the language delivered to you
> by some vendor, you will generally build your own development system
> in possibly multiple stages and then compile the application at some
> point in this multi-stage build process, and it might not even be
> the last step that produces the application as users will see it.

They used to do this sort of thing in Forth.

In Lisp, you define a set of data structures (STRUCTs or DEFCLASSes),
functions and possibly macros to describe the application's
characteristics, thereby creating something of an
"application-specific language."

In Forth, you would create WORDs implementing the appropriate data
structures, WORDs describing application functions, and possibly
custom control structures, resulting in [quelle surprise] an
"application-specific language" -- albeit one looking like Forth, not
Lisp. (And people would get their noses rather out of joint if you
proposed using floating-point numbers, or breaking whatever their
doctrines about Right Forth Usage were...)

Welcome back, by the way.
--
output = reverse("moc.enworbbc" "@" "enworbbc")
http://www3.sympatico.ca/cbbrowne/lsf.html
Rules of the Evil Overlord #49. "If I learn the whereabouts of the one
artifact which can destroy me, I will not send all my troops out to
seize it. Instead I will send them out to seize something else and
quietly put a Want-Ad in the local paper."
<http://www.eviloverlord.com/>

Erik Naggum

Jan 17, 2004, 10:17:14 PM

* Frode Vatvedt Fjeld

| Could you expand on this observation? I mean, how were they too
| successful, and how did this cause their deaths?

Being too successful is one of life's biggest risks, but a risk that
has received relatively little attention, primarily because the too
successful simply die off after a very brief period, leaving few, if
any, traces of their existence.

What makes natural selection work is that regardless of which random
factor constitutes the crucial advantage that allows breeding, a large
number of incidental factors are inherited from both parents at random
but without perfect fidelity, so even after a long chain of successful
breeding of the advantageous factor, all sorts of incidental factors
show variation, which means that when the conditions that made the
advantageous factor advantageous change, there will be a large number
of previously non-winning individuals who are suddenly better adapted
than the previously winning individuals. Over time, conditions always
change, so various factors are selected for, and over sufficient time,
a large number of advantageous factors are present in the population.

If, however, one factor is too successful, it will continue to be the
winning factor regardless of the variation in the other factors over
the range of variation in the conditions, and therefore will stifle
the development of other advantageous factors until the conditions
change sufficiently that it no longer is the winning factor. At this
point, the whole population is ill prepared for the change, and may
well perish entirely if the winning factor accidentally becomes the
matching factor for a disease or a predator.

For human optimization of winning factors, we have another problem:
The more we optimize a particular solution for a particular condition,
the more costly it will be to acquire the same optimized match for a
changed condition, for we will not tolerate that somebody else just
happens to be better at it while we perish. Therefore, as conditions
change and competition drives us to optimize, people will voluntarily
become too successful in the sense that they resist change and work to
maintain the advantage by presenting the necessary adaptation as a cost
that they cannot afford.

The Lisp Machines were heavily optimized for their particular (if not
peculiar) conditions and were effectively much more dependent on those
conditions than less optimized solutions, which could replace parts of
the system without incurring large development costs to regain the
advantages. The tight coupling between software and hardware became a
problem when cheaper and faster hardware arrived: maintaining the
advantages of the proprietary hardware, which was, after all, developed
under intense pressure to make the software run fast enough, would have
required massive development effort.

Software developers know better than most people how destructive to
the core design intense optimization pressure can be, and how the cost
of each increase in performance rises. We still run software that was
designed several decades ago, and although the optimization criteria
of modern Intel processors are vastly different from those of early
processors, we find that most optimizers of Intel code still optimize
for processors from the early to mid-1990s.

Optimization is generally detrimental to future success, but it is the
only way to accomplish present success in competition with others who
are equally interested in short-term results. In fact, when just one
of the competitors becomes interested in short-term results and hopes
to profit sufficiently to offset the risk of future profits, it takes
more guts than most people can muster to stick with marathon runners
as others rush to support and profit from sprinters. It doesn't take
a genius to figure out that optimizing for short-term profit will be
the death of long-term profitability, but people have made short-term
decisions for decades now and they still wonder why the future is less
bright and much less certain.

In the Lisp Machine case, being too successful meant that they failed
to adapt in time when the external conditions changed. Nothing in the
success of the Lisp Machines indicated that they were on the wrong
track, quite the contrary, until they were eclipsed by much cheaper
hardware that took advantage of a few of their incidental features and
dropped the crucial features because of the cost. Depending too much
on their relatively few winning factors and focusing too much on their
development made it harder for other factors to evolve properly at the
same time, and when these other factors were suddenly advantageous in
the market, the previous winning factors became liabilities.

Put another way, a company that produces one excellent product has a
much, much smaller chance of winning in the long run than one that has
a lot of crappy products that each manages to have a minor advantage
over its competitors. When the crapware producer par excellence keeps
whining about "innovation", they really mean that their advantage over
their competition is materially insignificant and that the only way
they can maintain an advantage at all is by competing with themselves,
i.e., the previous version of each product. Over time, however, this
process necessarily produces high quality products in a large number
of areas, but only as long as their /competitors/ are better than they
are at every single one of them some of the time. When they actually
win over their competitors, as a permanent condition, they, too, will
be too successful and will keep doing what made them successful, which
by the very nature of life, is not what will make them successful in
the future, for /which/ of many incidental factors turned out to be
the winning factor under some conditions is not only unpredictable,
but entirely random. All you know is that /some/ of your incidental
factors /may/ turn out to be advantageous, but once you have found one
of them, it is time to nourish all the /other/ incidental factors, for
that is what your present and future competition is doing. The old
adage that if you find something that works, you should do more of it,
is sound for an individual in a non-competitive environment, but it is
extremely dangerous in a competitive environment, where you won only
because you did something that the previous winner did /not/ do. So,
if you keep doing what made you successful, you will be too successful
in a very short time, and then you just vanish when a competitor gains
ground, like the Lisp Machines or like Digital Research.

0 new messages