
Is Lisp a Blub?


Don Geddis

Jul 1, 2007, 11:38:37 PM
Some thoughts spurred -- I am ashamed to admit -- by our local spammer JH.

Probably most of us are familiar with Paul Graham's hypothetical "Blub"
language, as described in the section "The Blub Paradox" in the middle of
http://paulgraham.com/avg.html
It was a hypothetical example to show how some programmers, only used to
languages less powerful than Lisp, might not appreciate the power of Lisp,
simply because they don't understand those features of Lisp that their own
favorite language lacks.

The core of the argument is here:

Blub falls right in the middle of the abstractness continuum. It is
not the most powerful language, but it is more powerful than Cobol or
machine language.

And in fact, our hypothetical Blub programmer wouldn't use either of
them. Of course he wouldn't program in machine language. That's what
compilers are for. And as for Cobol, he doesn't know how anyone can
get anything done with it. It doesn't even have x (Blub feature of
your choice).

As long as our hypothetical Blub programmer is looking down the power
continuum, he knows he's looking down. Languages less powerful than
Blub are obviously less powerful, because they're missing some
feature he's used to. But when our hypothetical Blub programmer looks
in the other direction, up the power continuum, he doesn't realize
he's looking up. What he sees are merely weird languages. He probably
considers them about equivalent in power to Blub, but with all this
other hairy stuff thrown in as well. Blub is good enough for him,
because he thinks in Blub.

When we switch to the point of view of a programmer using any of the
languages higher up the power continuum, however, we find that he in
turn looks down upon Blub. How can you get anything done in Blub? It
doesn't even have y.

By induction, the only programmers in a position to see all the
differences in power between the various languages are those who
understand the most powerful one.

Along the same lines, John McCarthy apparently once said something like:
Lisp seems to be a lucky discovery of a local maximum in the space of
programming languages.
Also paraphrased as:
Lisp is a kind of local maximum in programming language quality: not
necessarily the best thing possible, but hard to change in any simple
way without making it worse.

OK, so now the question. Common Lisp was formalized quite some time ago;
ANSI CL a bit later, but without many radical changes. Are there tools now
known in the abstract space of programming language design that "ought" to be
part of Common Lisp? That, if there had been an ANSI CL 2 effort, would
surely have been strongly suggested to get added to the language?

I'd like to distinguish this from libraries or specific applications, which
are easy enough to layer on top of existing ANSI CL. Of course everybody can
name sockets, or web programming (XML parsers, HTTP servers). I'm thinking
of generic ideas in the description of computation, which apply across a wide
range of applications. Things like "iteration" or "recursion" or "CLOS".

There are some things clearly on the edge. All the existing trig functions
(SIN, COS, TAN) are highly useful to some small set of programmers, and
generally ignored by the rest. They could have easily been an add-on
library. (On the other hand, the basic numeric tower in Lisp, the automatic
conversion of numbers between data types, is far more fundamental to the
language.)

Regular expression matching, which is missing from the ANSI CL standard, is
probably of this "edge" class and would have been included in a CL2. Maybe
multiprocessing too.
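For example, this is the kind of thing a CL2 might have blessed; the
sketch below uses Edi Weitz's CL-PPCRE, a portable add-on library, not
anything in ANSI CL:

  ;; SCAN-TO-STRINGS returns the whole match plus the register captures.
  (cl-ppcre:scan-to-strings "(\\d+)-(\\d+)" "pages 12-34")
  ;; => "12-34", #("12" "34")

  ;; Replace every run of whitespace with a single space.
  (cl-ppcre:regex-replace-all "\\s+" "too   many   spaces" " ")
  ;; => "too many spaces"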

So I'm wondering about language expressions that are so fundamental to
describing computation, that they "ought" to be part of the vocabulary of any
advanced programmer.

The previously mentioned spammer has suggested that pattern-matching function
definition is one such thing. As best I can tell, this seems to be sort of
related to CLOS generic methods, with different code called for different
argument lists, although the power of the expressions to choose the different
code that is called is greater with pattern-matching than with CLOS.
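To make the comparison concrete, here is a small sketch in plain ANSI
CL; the COND version stands in for what a pattern-matching definition
form would let you write declaratively:

  ;; CLOS picks code by the classes of the arguments:
  (defclass circle () ((radius :initarg :radius :reader radius)))
  (defclass square () ((side   :initarg :side   :reader side)))
  (defgeneric area (shape))
  (defmethod area ((s circle)) (* pi (radius s) (radius s)))
  (defmethod area ((s square)) (* (side s) (side s)))

  ;; Pattern matching also picks code by the *structure* of an argument,
  ;; which CLOS dispatch cannot express; written out by hand:
  (defun len (x)
    (cond ((null x) 0)                      ; pattern: the empty list
          ((consp x) (1+ (len (cdr x))))))  ; pattern: (head . tail)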

Static typing is another interesting corner case. Lots of pros and cons.
Qi seems to claim a Lisp-like language with "optional" static typing. Would
that be something that "should" be in a CL2?

And the pattern-matching programming idea got me thinking of unification. At
one time, people touted Prolog as the future of programming languages. Very,
very different from Lisp, and hugely useful when it applies. I think Prolog
failed as a general purpose programming language, but should that technology
be part of the bag of tricks available to every Lisp programmer? Unification
and inference is probably a generalization of the pattern-matching idea.
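For a taste of what that would mean in practice, here is a minimal
unifier in plain CL, in the spirit of Norvig's PAIP (no occurs check;
the ?-prefix convention for variables is just my choice here):

  (defun var-p (x)
    ;; A logic variable is a symbol whose name starts with ?.
    (and (symbolp x) (char= (char (symbol-name x) 0) #\?)))

  (defun unify (x y &optional (bindings '()))
    (cond ((eq bindings 'fail) 'fail)
          ((equal x y) bindings)
          ((var-p x) (unify-var x y bindings))
          ((var-p y) (unify-var y x bindings))
          ((and (consp x) (consp y))
           (unify (cdr x) (cdr y) (unify (car x) (car y) bindings)))
          (t 'fail)))

  (defun unify-var (var val bindings)
    (let ((known (assoc var bindings)))
      (if known
          (unify (cdr known) val bindings)
          (cons (cons var val) bindings))))

  ;; (unify '(f ?x (g ?y)) '(f 1 (g 2)))  =>  ((?Y . 2) (?X . 1))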

Just curious if anyone here has thoughts on these possible additions to a
core (new) Lisp, or perhaps other similar topics that ought to be included.
ITERATE? SERIES? Partial evaluation? Call/CC?

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ d...@geddis.org
In bed the other night my girlfriend asked "if you could know exactly when and
where you would die, would you want to?" I said "no". She said, "ok, then
forget it". -- Steven Wright

Tamas Papp

Jul 2, 2007, 1:32:13 AM
Don Geddis <d...@geddis.org> writes:

> Just curious if anyone here has thoughts on these possible additions to a
> core (new) Lisp, or perhaps other similar topics that ought to be included.
> ITERATE? SERIES? Partial evaluation? Call/CC?

What's the difference between having feature X "included" and loading
a library that implements it? Lisp is very easy to extend.

Sure, if the library doesn't exist yet, you have to write it, but that
would also have to happen for it to be "included".

Tamas

Slobodan Blazeski

Jul 2, 2007, 3:24:27 AM
On Jul 2, 5:38 am, Don Geddis <d...@geddis.org> wrote:
> Some thoughts spurred -- I am ashamed to admit -- by our local spammer JH.

It's better to ignore the trolls completely; you'll learn nothing from
them except the bad smell of feeling dirty. Not to mention the waste of time.

Google a little and you'll find that, in the current state, that's
impossible.

It's already there: remember DECLARE, THE, FIXNUM, etc.
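For example:

  ;; Optional type declarations have been in CL all along; whether the
  ;; compiler checks them or simply trusts them depends on SAFETY.
  (defun add-fixnums (a b)
    (declare (type fixnum a b)
             (optimize (speed 3) (safety 1)))
    (the fixnum (+ a b)))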


>
> And the pattern-matching programming idea got me thinking of unification. At
> one time, people touted Prolog as the future of programming languages. Very,
> very different from Lisp, and hugely useful when it applies. I think Prolog
> failed as a general purpose programming language, but should that technology
> be part of the bag of tricks available to every Lisp programmer? Unification
> and inference is probably a generalization of the pattern-matching idea.
>
> Just curious if anyone here has thoughts on these possible additions to a
> core (new) Lisp, or perhaps other similar topics that ought to be included.
> ITERATE? SERIES? Partial evaluation? Call/CC?

There are many libraries that are relatively easy to find and
install: iterate, series, cells, Qi.... For most of the other things
mentioned, a library already exists and is ready to use, or at least
code exists; as for call/cc, the feature itself is questionable: some
like it, some believe it isn't worth having. So you already have the
most powerful toolbox of any language; you only need to build with it.
The only thing I really believe Lisp is missing is Erlang/Termite-style
concurrency, but time will tell.
For now we only need more apps written in it.
>
> -- Don
> _______________________________________________________________________________

philip....@gmail.com

Jul 2, 2007, 4:58:08 AM
Don Geddis wrote:
> Some thoughts spurred -- I am ashamed to admit -- by our local spammer JH.

> Just curious if anyone here has thoughts on these possible additions to a
> core (new) Lisp, or perhaps other similar topics that ought to be included.

I think it's good to keep asking this question but I suspect everyone
wants different things.

For me, Common Lisp occupies an interesting middle ground. The
standard is much more than just some EBNF, but much less than, say,
Java's, which is both a language standard and an extensive framework of
standard libraries.

I'm not sure how easy it is to add stuff into that core without
breaking things or upsetting a lot of people (look at R6RS), but maybe
identifying a set of generally useful and portable libraries as
'recommended extensions' is a good place to start (a bit like Edi
Weitz's Starter Pack)?

But I'm sure people will hate that idea too :-)

Phil
http://phil.nullable.eu/

Dan Bensen

Jul 2, 2007, 5:59:26 AM
Don Geddis wrote:
> Maybe multiprocessing too.
In the Mundus Novus Bellus[1] of multicore concurrency, I've been
wondering how CL's mapping functions and binding forms will fare,
since they're all guaranteed in the spec to be eval'd in order.
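For example, the spec pins down the order here, and that guarantee is
exactly what a parallel MAPCAR would have to give up:

  ;; The function is applied to 1, then 2, then 3, so the side effects
  ;; below reliably print in that order.
  (mapcar (lambda (x) (print x) (* x x)) '(1 2 3))
  ;; prints 1 2 3, returns (1 4 9)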

> The previously mentioned spammer has suggested that pattern-matching
> function definition is one such thing.

ML-style pattern matching seems to be pretty easy.

> Call/CC?
Haskellers seem to think they're higher on the power spectrum than Lisp.
Apparently, continuations can be implemented in monads.

[1] (Thanks for the idea, MT :))

--
Dan
www.prairienet.org/~dsb/

Pascal Costanza

Jul 2, 2007, 10:27:35 AM
Don Geddis wrote:
> Some thoughts spurred -- I am ashamed to admit -- by our local spammer JH.
>
> Probably most of us are familiar with Paul Graham's hypothetical "Blub"
> language, as described in the section "The Blub Paradox" in the middle of
> http://paulgraham.com/avg.html
> It was a hypothetical example to show how some programmers, only used to
> languages less powerful than Lisp, might not appreciate the power of Lisp,
> simply because they don't understand those features of Lisp that their own
> favorite language lacks.

This is one of Paul Graham's worst text fragments, IMHO (the rest of
"Beating the Averages is quite good, though). You can only relate to it
when you think that you're "up in the power continuum." However, you may
simply not know that others are even "higher" in the "power continuum."
There are essentially two possibilities: Either another language is
worse than the one you prefer, or you simply don't understand it, in
which case it simply _appears_ to be worse to you. This effectively
means that, if you think that another language is worse than the one you
prefer, you simply cannot draw any conclusions. [This is an
exaggeration, of course, things are different if you know the involved
languages very closely.]

Bjarne Stroustrup's remarks about language comparisons apply here:

"I also worry about a phenomenon I have repeatedly observed in honest
attempts at language comparisons. The authors try hard to be impartial,
but are hopelessly biased by focusing on a single application, a single
style of programming, or a single culture among programmers. Worse, when
one language is significantly better known than others, a subtle shift
in perspective occurs: Flaws in the well-known language are deemed minor
and simple workarounds are presented, whereas similar flaws in other
languages are deemed fundamental. Often, the workarounds commonly used
in the less-well-known languages are simply unknown to the people doing
the comparison or deemed unsatisfactory because they would be unworkable
in the more familiar language."

See http://www.research.att.com/~bs/bs_faq.html#compare

In contrast, I find Paul Graham's ideas about "Blub" counter-enlightening.

Also: There is nothing special about being "up in the power continuum."
Just because most languages suck nowadays doesn't mean that you're
special because you happen to know a language that doesn't suck that
much. The ideas behind Lisp are only mind-blowing when compared to more
mainstream languages. Two hundred years from now, though, these ideas will be
regarded as trivial.


Pascal

--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/

Sacha

Jul 2, 2007, 11:34:36 AM
Pascal Costanza wrote:
> Don Geddis wrote:
[...]

Also, it is important to repeat again and again that the design space of
computer languages is multi-dimensional. There's not a single line from
"sucks" to "superior abstraction".

From concise/obfuscated to verbose/readable, from imperative to
functional, from dynamic typing/exploratory to static typing/static
checks... pick a point in all these dimensions (and others), and the
rest of the language will flow from these.

The local extremum quote from McCarthy says it all.

Sacha

Chris Rathman

Jul 2, 2007, 12:11:36 PM
Re: Unification...

An interesting programming language along that particular line is Oz,
one of the few languages built around the Prolog model of
unification. I can't really say how well it stacks up against Lisp, but
I did do a bit of translation of Friedman and Felleisen for anyone
interested in a more involved Scheme/Oz comparison:

http://www.codepoetics.com/wiki/index.php?title=Topics:TRS_in_other_languages:Oz

It's also worth mentioning Kanren here as well, as it is basically
Scheme with unification. And speaking of which, the best source for
O'Caml and Scheme would probably be Oleg Kiselyov, as he's a leading
expert on both languages. Of course, Common Lisp (which appears to be
the main subject of comp.lang.lisp) and Scheme are two different
languages. And sometimes the closer the languages are to each other,
the more inflammatory the rhetoric can get. But then I'm one that
likes to see the similarity in programming languages, as much as I
like to see their differences.

I was wondering whether there has been some work on unification and/or
futures in Lisp. (Alice ML draws on the work on futures in
MultiLisp.)

Thanks,
Chris Rathman


On Jul 1, 10:38 pm, Don Geddis <d...@geddis.org> wrote:
>
> And the pattern-matching programming idea got me thinking of unification. At
> one time, people touted Prolog as the future of programming languages. Very,
> very different from Lisp, and hugely useful when it applies. I think Prolog
> failed as a general purpose programming language, but should that technology
> be part of the bag of tricks available to every Lisp programmer? Unification
> and inference is probably a generalization of the pattern-matching idea.
>

Reinier Zwitserloot

Jul 2, 2007, 12:39:59 PM
Of course Lisp is Blub. All languages are Blub given the right
context. Certain programming language features are mutually exclusive,
and simplicity is itself a worthy feature, which leads to the
conclusion that the perfect language, the one that incorporates all
features of all other languages, does not exist.

On Jul 2, 5:38 am, Don Geddis <d...@geddis.org> wrote:

> Some thoughts spurred -- I am ashamed to admit -- by our local spammer JH.
>
> Probably most of us are familiar with Paul Graham's hypothetical "Blub"

> language, as described in the section "The Blub Paradox" in the middle of http://paulgraham.com/avg.html

cwar...@gmail.com

Jul 2, 2007, 1:00:53 PM
On Jul 2, 2:24 am, Slobodan Blazeski <slobodan.blaze...@gmail.com>
wrote:

> On Jul 2, 5:38 am, Don Geddis <d...@geddis.org> wrote:
>
> > Some thoughts spurred -- I am ashamed to admit -- by our local spammer JH.
>
> It's better to ignore the trolls completely; you'll learn nothing from
> them except the bad smell of feeling dirty. Not to mention the waste of time.

I'm not really interested in the main post because my suggestions
would require far too much clarification, but this is an interesting
quote. Yes, trolls are annoying. Yes, most of the time they have no
idea what they're talking about. No, you shouldn't just ignore them.
At least, not in the way suggested here. Ignore them _after_ you
disprove what they're saying. Don't let the fact that they're a troll
prevent you from exploring the issue. (I also like to keep an
extremely open mind when I'm considering other viewpoints. I've
learned that from years of unknowingly being wrong. In my defense,
nobody was ever able to provide proof that I was wrong--I always ended
up doing that myself.)

Anyway, that's just my $0.02.

Don Geddis

Jul 2, 2007, 12:21:25 PM
Pascal Costanza <p...@p-cos.net> wrote on Mon, 02 Jul 2007:
> This is one of Paul Graham's worst text fragments, IMHO (the rest of "Beating
> the Averages" is quite good, though). You can only relate to it when you think
> that you're "up in the power continuum." However, you may simply not know
> that others are even "higher" in the "power continuum." There are essentially
> two possibilities: Either another language is worse than the one you prefer,
> or you simply don't understand it, in which case it simply _appears_ to be
> worse to you. This effectively means that, if you think that another language
> is worse than the one you prefer, you simply cannot draw any
> conclusions.

I understand this perspective, and to a large extent, agree with it.

Let me try to express it a different way. If you talk to somebody that has
only programmed in C, and try to explain the benefits of having a built-in
garbage collector, they often just "don't get it". Don't understand how it
would aid programmer productivity. Similarly, a Java-only programmer might
not get the power of macros.

In both cases, most Lispers respond with a "higher power" argument: these are
tools that are useful in almost EVERY domain (not just one limited
application), and if you can't appreciate it, it probably means you don't
understand it.

So, I'm interested in avoiding this C/Java limited vision, but coming from
a Lisp background instead. I agree completely with you that Graham provides
no constructive way to tell the difference. He doesn't aid you at all in
understanding where you are in the middle of the power spectrum of programming
languages.

Lispers often believe themselves to be "at the top", or at least nearby.
I'm looking for pointers to generic programming language concepts that might
possibly indicate a higher-power language, but which are missing from Lisp.
Pattern-matching programming is at least a valid candidate. Prolog-style
unification and inference probably is too.

Is it so absurd to try to look beyond Lisp at programming in the abstract?

> In contrast, I find Paul Graham's ideas about "Blub" counter-enlightening.

Upon reflection, I agree with your criticism. But at least he brought up
the topic :-).

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ d...@geddis.org

Eagles may soar, free and proud, but weasels never get sucked into jet engines.

Don Geddis

Jul 2, 2007, 12:29:06 PM
Tamas Papp <tkp...@gmail.com> wrote on Mon, 02 Jul 2007:
> What's the difference between having feature X "included" and loading
> a library that implements it? Lisp is very easy to extend.

Think about that in terms of garbage collection and macros. If Lisp had
begun without them, could they be merely added later as a "library"?

CLOS is probably on the border. In ANSI CL, it kind of is just bolted on at
the end, much as a library that will always be loaded.

So what's the difference? To me, it seems that there are some things you
expect every Lisp programmer to be aware of, to be capable of using in their
own code, and to immediately recognize if they read code written by others.
Things like GC and macros permeate code written for every application area.
This isn't like numerical analysis, or sockets, or something, where if you
aren't doing that kind of programming, you wouldn't necessarily use those
libraries.

But regular expressions, or prolog-style inference, could appear in any piece
of code at any time. I suspect they aren't more common in Lisp because they
aren't a standard part of the language, not because they aren't useful to the
programmer.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ d...@geddis.org

I can please only one person per day.
Today is not your day.
Tomorrow isn't looking good either.
-- DNRC Motto

Pascal Bourguignon

Jul 2, 2007, 1:44:17 PM
Don Geddis <d...@geddis.org> writes:
> Lispers often believe themselves to be "at the top", or at least nearby.
> I'm looking for pointers to generic programming language concepts that might
> possibly indicate a higher-power language, but which are missing from Lisp.
> Pattern-matching programming is at least a valid candidate. Prolog-style
> unification and inference probably is too.
>
> Is it so absurd to try to look beyond Lisp at programming in the abstract?
>
>> In contrast, I find Paul Graham's ideas about "Blub" counter-enlightening.
>
> Upon reflection, I agree with your criticism. But at least he brought up
> the topic :-).

The point is whether a feature needs specific support from the core of
the lisp system (e.g. EVAL/COMPILE), or whether you can integrate it
smoothly with user code alone.

It happens that most features just don't call for any change to
EVAL/COMPILE, and can be integrated as smoothly as wanted (using
macros or reader macros).

That's also the reason why lisp programmers can confidently say that
they're using the top-notch programming language: it's very rare to
find a high-level feature (*) in another programming language that cannot
be integrated into lisp with just user-level code.

(*) continuations are considered a low-level feature, therefore we
don't have any scheme envy.
--
__Pascal Bourguignon__ http://www.informatimago.com/

NOTE: The most fundamental particles in this product are held
together by a "gluing" force about which little is currently known
and whose adhesive power can therefore not be permanently
guaranteed.

Pascal Costanza

Jul 2, 2007, 2:22:31 PM
Don Geddis wrote:
> Pascal Costanza <p...@p-cos.net> wrote on Mon, 02 Jul 2007:
>> This is one of Paul Graham's worst text fragments, IMHO (the rest of "Beating
>> the Averages" is quite good, though). You can only relate to it when you think
>> that you're "up in the power continuum." However, you may simply not know
>> that others are even "higher" in the "power continuum." There are essentially
>> two possibilities: Either another language is worse than the one you prefer,
>> or you simply don't understand it, in which case it simply _appears_ to be
>> worse to you. This effectively means that, if you think that another language
>> is worse than the one you prefer, you simply cannot draw any
>> conclusions.
>
> I understand this perspective, and to a large extent, agree with it.
>
> Let me try to express it a different way. If you talk to somebody that has
> only programmed in C, and try to explain the benefits of having a built-in
> garbage collector, they often just "don't get it".

It seems to be the case that for ultra-large-scale programs, garbage
collection is not feasible anymore. (That was also the message in Jans
Aasman's talk at ILC'07.)

> Don't understand how it
> would aid programmer productivity. Similarly, a Java-only programmer might
> not get the power of macros.

Gregor Kiczales most certainly knows about the power of macros and
argues against them: http://www.ddj.com/dept/windows/184415142 (Note
that I disagree with his position, it's just to note that views may
differ even when knowing about the benefits.)

> In both cases, most Lispers respond with a "higher power" argument: these are
> tools that are useful in almost EVERY domain (not just one limited
> application), and if you can't appreciate it, it probably means you don't
> understand it.

The problem here is that you seem to confuse "higher power" with
"better". Not every increase in expressive power makes a language
better. For example, Common Lisp forbids changing source code by way of
side effects (a practice that was apparently common in previous Lisp
dialects). That's clearly a decrease in expressive power, but generally
considered an improvement of the language.

If it were that simple, that more expressive power equals better
language, then we could just look for the most expressive language and
be done with it. Unfortunately, this is not so.

This means that the decision for or against a language is
ultimately a subjective one, and depends highly on personal preferences.
For example, some of my colleagues definitely have a very good idea
about the expressive power of Lisp and Scheme dialects, but prefer to
work in Smalltalk because of its minimal set of basic language
constructs and _because_ there is no straightforward way to extend the
language (not in spite of that fact). They have good reasons for that
(to which I don't agree, but this doesn't prevent me from acknowledging
their reasons).

> So, I'm interested in avoiding this C/Java limited vision, but coming from
> a Lisp background instead. I agree completely with you that Graham provides
> no constructive way to tell the difference. He doesn't aid you at all in
> understanding where you are in the middle of the power spectrum of programming
> languages.

Yes, and it's important to understand that this is a dangerous position
to take. You should be able to explain your preferences, and not just
say "I know better than thou" without giving any good reasons. The
latter is just condescending.

We have all had the experience of being wrong about some of our choices
in the past. It is not so unlikely that we are still wrong, at least in
some regards.

Computer science is only a couple of decades old. It would be a great
surprise if we already knew everything there is to know about it.

> Lispers often believe themselves to be "at the top", or at least nearby.
> I'm looking for pointers to generic programming language concepts that might
> possibly indicate a higher-power language, but which are missing from Lisp.
> Pattern-matching programming is at least a valid candidate. Prolog-style
> unification and inference probably is too.
>
> Is it so absurd to try to look beyond Lisp at programming in the abstract?

No, many of us are doing this all the time.

jimb...@gmail.com

Jul 2, 2007, 2:33:00 PM
On Jul 2, 3:24 am, Slobodan Blazeski <slobodan.blaze...@gmail.com>
wrote:

> There are many libraries that are relatively easy to find and
> install: iterate, series, cells, Qi.... For most of the other things
> mentioned, a library already exists and is ready to use, or at least
> code exists; as for call/cc, the feature itself is questionable: some
> like it, some believe it isn't worth having. So you already have the
> most powerful toolbox of any language; you only need to build with it.
> The only thing I really believe Lisp is missing is Erlang/Termite-style
> concurrency, but time will tell.
> For now we only need more apps written in it.

I work at a university, and just out of curiosity I sat in on a talk
given by young Apple employees to CS students considering applying
there for internships or jobs.

In response to a question about how to present oneself in an
interview at Apple, one of the speakers replied (paraphrasing), "Whatever
you do, don't say Mac OS X is perfect. If you can't find anything
wrong with it, how could you possibly have any chance of making it
better?"

I think that would be a healthy attitude to take towards Lisp, or,
frankly, anything else you like. If you think you have found any
piece of utopia on this Earth, you are probably either deceiving
yourself or need to raise your standards.

Now, to your credit, you do mention the concurrency thing. But do you
really think there is nothing else that could make Lisp better?

And with the reliance on the idea that "you can always add that as a
library" you come close to the Godwin's Law of Referencing Turing
Completeness [1].

To avoid hypocrisy, here are some criticisms I made about Lisp:

http://programming.reddit.com/info/1z8b8/comments/c1zbmr

These are more criticisms of Common Lisp culture and community, and I
think most of the weaknesses of Common Lisp reside there as opposed to
something in or not in the text of the standard itself.

-jimbo

[1] http://programming.reddit.com/info/21vs0/comments/c022160

Chris Rathman

Jul 2, 2007, 2:36:30 PM
I didn't come away with the impression that Kiczales was against
macros. More like he was saying that macros can be overused and
abused, and that they must be used judiciously. Anyhow, it's an interesting
article in that it draws a comparison between C# and Java tags and
macro facilities, though I suspect the Lisp community would consider
them seriously hobbled. :-)

Chris

On Jul 2, 1:22 pm, Pascal Costanza <p...@p-cos.net> wrote:
>
> Gregor Kiczales most certainly knows about the power of macros and
> argues against them: http://www.ddj.com/dept/windows/184415142 (Note
> that I disagree with his position, it's just to note that views may
> differ even when knowing about the benefits.)
>

Tamas Papp

Jul 2, 2007, 2:47:23 PM
Pascal Costanza <p...@p-cos.net> writes:

> Don Geddis wrote:
>> Is it so absurd to try to look beyond Lisp at programming in the abstract?
>
> No, many of us are doing this all the time.

To qualify: time is a finite resource. You can spend it on
exploration of new things, or "exploitation" (operations research
jargon, sorry), which in this case means learning Lisp really well.
Spending zero amount of time on exploration is possibly foolish,
because the marginal benefit of even a small amount of exploration is
large. But spending a lot or all the time exploring is also
suboptimal, since we wouldn't get anything done.

I am raising this point because many people (and trolls) come to this
list urging lispers to try their pet language and take offense when
they don't. c.l.l seems to get a heavy load of this kind of post,
compared to, say, comp.lang.fortran. I wonder why.

Tamas

Pillsy

Jul 2, 2007, 2:59:58 PM
On Jul 2, 2:33 pm, jimbo...@gmail.com wrote:
[...]

> Now, to your credit, you do mention the concurrency thing. But do you
> really think there is nothing else that could make Lisp better?

For me (and, I gather from your reddit post, you), there are many
things that could make Lisp better, but relatively few of them are in
the realm of supporting something like "concurrency". It's all
straightforward stuff like more libraries and a better set of options
for GUIs.

Of course, "straightforward" and "trivial" are completely different
things, which is why Lisp still has these problems.

Cheers,
Pillsy

jimb...@gmail.com

Jul 2, 2007, 3:04:23 PM
On Jul 2, 4:58 am, philip.armit...@gmail.com wrote:
> I'm not sure how easy it is to add stuff into that core without
> breaking things or upsetting a lot of people (look at R6RS), but maybe
> identifying a set of generally useful and portable libraries as
> 'recommended extensions' is a good place to start (a bit like Edi
> Weitz's Starter Pack)?

I've been trying to push the meme of making the Starter Pack a standard
part of every Lisp distribution every chance I get [1], especially for
the free Lisps. Lisp in a Box plus the Starter Pack, prepackaged and
double-clickable, would be a brilliant introduction to Common Lisp,
provided you know Emacs or are willing to learn.

-jimbo

[1] Standard disclaimer: yes, I should be doing more to actually make
this happen. It's on the list of "this would be a great thing to do
if I found the time."

Edi Weitz

Jul 2, 2007, 3:05:20 PM
On Mon, 02 Jul 2007 20:22:31 +0200, Pascal Costanza <p...@p-cos.net> wrote:

> It seems to be the case that for ultra-large-scale programs, garbage
> collection is not feasible anymore.

It also seems not everybody agrees on this:

http://lukego.livejournal.com/4773.html

Edi.

--

Lisp is not dead, it just smells funny.

Real email: (replace (subseq "spam...@agharta.de" 5) "edi")

philip....@gmail.com

Jul 2, 2007, 3:19:25 PM
On Jul 2, 8:04 pm, jimbo...@gmail.com wrote:

> the free Lisps. Lisp in a Box plus Starter Pack prepackaged and
> double clickable would be a brilliant introduction to Common Lisp,
> provided you know Emacs or are willing to learn.


I agree apart from Lisp in a Box...but I'm not exactly impartial ;-)

-- Phil
http://phil.nullable.eu/

Mark Hoemmen

Jul 2, 2007, 3:19:42 PM
Don Geddis wrote:
> OK, so now the question. Common Lisp was formalized quite some time ago;
> ANSI CL a bit later, but without many radical changes. Are there tools now
> known in the abstract space of programming language design that "ought" to be
> part of Common Lisp? That, if there had been an ANSI CL 2 effort, would
> surely have been strongly suggested to get added to the language?
>
> I'd like to distinguish this from libraries or specific applications, which
> are easy enough to layer on top of existing ANSI CL. Of course everybody can
> name sockets, or web programming (XML parsers, HTTP servers). I'm thinking
> of generic ideas in the description of computation, which apply across a wide
> range of applications. Things like "iteration" or "recursion" or "CLOS".

Maybe it's more natural to let programming languages evolve on their
own, as hardware and applications evolve. For example, who could have
seen in the late 1950's that parallelism would become both important for
performance, and difficult to reason about in code? If they had tried
to formulate parallel language constructs back then, it's almost certain
that they wouldn't have hit the right level of abstraction.

There are some things I'd like to see made part of a CL implementation,
but these are not so much language features:

1. More control over the garbage collector in a multithreaded
environment -- in particular, being able to "pin" array data without
interrupting the GC or synchronizing threads;

2. Guarantees of bit-level compatibility with C / Fortran arrays for
certain array types.

> There are some things clearly on the edge. All the existing trig functions
> (SIN, COS, TAN) are highly useful to some small set of programmers, and
> generally ignored by the rest. They could have easily been an add-on
> library. (On the other hand, the basic numeric tower in Lisp, the automatic
> conversion of numbers between data types, is far more fundamental to the
> language.)

It's interesting that you point this out, because the "basic numeric
tower" isn't quite as tower-like as you think: there are all kinds of
floating-point numbers that people want (e.g., arbitrary or controllable
precision, "exact real" types, decimal arithmetic), and the relations
between them don't always fit into a hierarchy. (For example, sometimes
you _want_ sqrt(-1) to be an error, because otherwise it invalidates
things like the Cholesky factorization.)

This suggests an interesting language feature: more control over the
relationships between numeric types, what conversions between them are
legal, how they round, etc. A kind of "CLOS for number properties."
This power would revolutionize numerical analysis -- the hard-core IEEE
754 types would worship whoever set something like this up.
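You can sketch the flavor of it with plain CLOS even today; everything
below is my invention, not any standard protocol:

  ;; User-controlled coercion rules between numeric "kinds".
  (defgeneric coerce-number (value kind)
    (:documentation "Convert VALUE to KIND, or refuse loudly."))

  (defmethod coerce-number ((v integer) (kind (eql :double)))
    (float v 1d0))

  (defmethod coerce-number ((v real) (kind (eql :exact-rational)))
    (rational v))

  ;; Make a conversion an error instead of a silent promotion --
  ;; e.g. when sqrt(-1) turning up means your matrix wasn't SPD:
  (defmethod coerce-number ((v complex) (kind (eql :real)))
    (error "Refusing to treat ~S as a real number." v))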

mfh

Mark Hoemmen

Jul 2, 2007, 3:23:57 PM
Dan Bensen wrote:
> Don Geddis wrote:
> > Maybe multiprocessing too.
> In the Mundus Novus Bellus[1] of multicore concurrency, I've been
> wondering how CL's mapping functions and binding forms will fare,
> since they're all guaranteed in the spec to be eval'd in order.

The IEEE 754 types are having a discussion about similar issues
involving bitwise reproducibility, especially on parallel platforms.
There seems to be a general consensus to provide two modes: slow and
(somewhat) reproducible (on one machine, i.e., ./a.out called twice
returns the same answer), and fast and not necessarily reproducible.

Perhaps a CL implementation could offer the same choice of modes -- it
defaults to the standard, but allows relaxation if the user accepts the
possible consequences.

> [1] (Thanks for the idea, MT :))

Ha, I push this stuff too hard... ;-P

mfh

jimb...@gmail.com

Jul 2, 2007, 3:33:42 PM
On Jul 2, 3:19 pm, philip.armit...@gmail.com wrote:
> I agree apart from Lisp in a Box...but I'm not exactly impartial ;-)

Aha! I had browsed your ABLE page before, but did not remember the
name of the author :).

Well, by bundling a Starter Pack you would be way ahead of the Lisp in
a Box and SLIME distributions.

I'm going to stick with Aquamacs with bundled SLIME + SBCL on my Mac,
but I think ABLE + CLISP or SBCL + Starter Pack could be a great way
to introduce programmers who don't know Emacs to Lisp.

Good luck with ABLE. Looks like a neat project.

-jimbo

Pascal Costanza

Jul 2, 2007, 3:37:11 PM
Tamas Papp wrote:
> Pascal Costanza <p...@p-cos.net> writes:
>
>> Don Geddis wrote:
>>> Is it so absurd to try to look beyond Lisp at programming in the abstract?
>> No, many of us are doing this all the time.
>
> To qualify: time is a finite resource. You can spend it on
> exploration of new things, or "exploitation" (operations research
> jargon, sorry), which in this case means learning Lisp really well.
> Spending zero amount of time on exploration is possibly foolish,
> because the marginal benefit of even a small amount of exploration is
> large. But spending a lot or all the time exploring is also
> suboptimal, since we wouldn't get anything done.

There are quite a few libraries with well-working language extensions
for Common Lisp. So we have a rather healthy situation in that regard.

Of course, things could always be improved, but that's obvious.

> I am raising this point because many people (and trolls) come to this
> list urging lispers to try their pet language and take offense when
> they don't. c.l.l seems to get a heavy load of this kind of posts,
> compared to, say, comp.lang.fortran. I wonder why.

They take ANSI Common Lisp as the final word on Common Lisp, when it's
clearly just the starting point.

They are also used to the notion that some higher authority is typically
in charge of deciding what gets in and out of a language. This is not
the case for Common Lisp. This has both advantages and disadvantages.
Some people seem to focus on the disadvantages, but forget about the
advantages (like: an extension can be optimal for a particular problem
domain without having to be compatible with everything else, which
typically restricts what you can and cannot do in a language extension).

Pascal Costanza

Jul 2, 2007, 3:39:59 PM
Edi Weitz wrote:
> On Mon, 02 Jul 2007 20:22:31 +0200, Pascal Costanza <p...@p-cos.net> wrote:
>
>> It seems to be the case that for ultra-large-scale programs, garbage
>> collection is not feasible anymore.
>
> It also seems not everybody agrees on this:
>
> http://lukego.livejournal.com/4773.html

The fact that there is disagreement proves my point.

But more to the point: I recall Guy Steele making similar statements,
something like: if you have to deal with terabytes of data, garbage
collection becomes infeasible because a program starts to spend more
time on collecting garbage than on anything else. I may be
misremembering and he may have said something different, but I think it
was close to that. Unfortunately, I don't remember where I got this from...

Christopher Browne

Jul 2, 2007, 3:50:27 PM
Mark Hoemmen <mark.h...@gmail.com> writes:
> Don Geddis wrote:
>> OK, so now the question. Common Lisp was formalized quite some time ago;
>> ANSI CL a bit later, but without many radical changes. Are there tools now
>> known in the abstract space of programming language design that "ought" to be
>> part of Common Lisp? That, if there had been an ANSI CL 2 effort, would
>> surely have been strongly suggested to get added to the language?
>> I'd like to distinguish this from libraries or specific
>> applications, which
>> are easy enough to layer on top of existing ANSI CL. Of course everybody can
>> name sockets, or web programming (XML parsers, HTTP servers). I'm thinking
>> of generic ideas in the description of computation, which apply across a wide
>> range of applications. Things like "iteration" or "recursion" or "CLOS".
>
> Maybe it's more natural to let programming languages evolve on their
> own, as hardware and applications evolve. For example, who could
> have seen in the late 1950's that parallelism would become both
> important for performance, and difficult to reason about in code?
> If they had tried to formulate parallel language constructs back
> then, it's almost certain that they wouldn't have hit the right
> level of abstraction.

There's a problem with *that*: Common Lisp is in a phase where it is
difficult to "make it evolve", since changing the standard would
require reopening a process that seems impractical to reopen.

I don't think that XML parsing is well enough definable (in a
sufficiently standard way) to make that something highly worth "adding
to the language," whether as a 'native construct' or as a library.
There are too many well-known viable models out there (DOM, SAX, to
mention two) for there to be a clear *one* thing to add.

It seems plausible that having quasi-standard mechanisms for pattern
matching (which pretty well all of the functional languages have
gotten *very* keen on, of late) and Prolog-style unification might be
useful additions. If that could be handled via libraries that would
have a small impact on the 'language design,' I suppose so much the
better...

Improving parallelism capabilities seems like a tougher one, in that
it might require, for support of "threading-like" models, deeper
changes to language semantics.
--
output = reverse("ofni.sesabatadxunil" "@" "enworbbc")
http://linuxfinances.info/info/lisp.html
"If your parents never had children, chances are you won't either."
-- Dick Cavett

Pascal Costanza

Jul 2, 2007, 3:53:23 PM
Chris Rathman wrote:
> I didn't come away with the impression that Kiczales was against
> macros. More like he was saying that macros can be overused and
> abused, and that they must be used judiciously. Anyhow, interesting
> article in that it draws the comparison between C# and Java tags and
> macro facilities. Though I suspect the Lisp community would consider
> them as seriously hobbled. :-)

Sure. :)

Here is another data point, where someone (Gilad Bracha) argues against
macros (more specifically, against the inclusion of macros in Java):
http://www.artima.com/weblogs/viewpost.jsp?thread=5246

That is more outspoken than Kiczales's position.

Again, note that I don't share those views. It's just to show that
people who are aware of what macros are can be against their use, for
what they consider to be good reasons.

And to make this absolutely clear: This is in the discussion of a text
by Paul Graham which seems to suggest that there is a clear hierarchy of
programming languages in terms of better and worse in an objective
sense. That's the part I dispute. Of course, I definitely have my
subjective opinions. ;)


Rainer Joswig

Jul 2, 2007, 4:12:01 PM
In article <5esu37F...@mid.individual.net>,
Pascal Costanza <p...@p-cos.net> wrote:

> Don Geddis wrote:
> > Pascal Costanza <p...@p-cos.net> wrote on Mon, 02 Jul 2007:
> >> This is one of Paul Graham's worst text fragments, IMHO (the rest of "Beating
> >> the Averages" is quite good, though). You can only relate to it when you think
> >> that you're "up in the power continuum." However, you may simply not know
> >> that others are even "higher" in the "power continuum." There are essentially
> >> two possibilities: Either another language is worse than the one you prefer,
> >> or you simply don't understand it, in which case it simply _appears_ to be
> >> worse to you. This effectively means that, if you think that another language
> >> is worse than the one you prefer, you simply cannot draw any
> >> conclusions.
> >
> > I understand this perspective, and to a large extent, agree with it.
> >
> > Let me try to express it a different way. If you talk to somebody that has
> > only programmed in C, and try to explain the benefits of having a built-in
> > garbage collector, they often just "don't get it".
>
> It seems to be the case that for ultra-large-scale programs, garbage
> collection is not feasible anymore. (That was also the message in Jans
> Aasman's talk at ILC'07.)

For a lot of software, naive consing and naive garbage collection were
never feasible.

Reducing consing has a long history. Just check out the manual
of the MIT Lisp Machine, specifically the chapters
on memory management and areas. That material is probably more than
25 years old. I think it is easy to see that garbage collection
is not THE single method of memory management. Even
"garbage collection": which one? Once you write any program
that has particular demands on memory management, you see
that 'simple' garbage collection won't do.
It has nothing to do with 'ultra-large scale'.

Say you have a 3D modeller for complex models.
Do you want to see GCs during interaction?

You have a disk driver. GCs during disk read and write?

There are lots of situations where the user doesn't want to
see delays. There are also lots of situations where
consing reduces system performance in ways that are
not easy to see (reduced throughput in networking or
when writing to disk, ...).

A few examples:

* a TCP stack. Packets as objects. Plain memory management?
Probably not, you want fast allocation and you want
them to be wired in memory.

Cost of varying access times?

* 24/32-bit (or deeper) bitmaps needing tens or
hundreds of megabytes of space for image processing.
Do you really want to reallocate them each time you need a new one?
The Lisp Machine had special datatypes for bitmaps
and special storage areas for them.

Cost of reinitialization? Cost of getting fresh memory?

* large numbers of pointers in data structures. Think
objects pointing to other objects. Do you really want
them to be scattered across your address space?

How about locality of data?

* (Almost?) no production-quality Lisp has a concurrent GC.

What about (near) real-time demands? How do you
bound the time used for incremental GC?

* You may need more data than fits into main memory.

How do you make sure that the GC doesn't create
bad paging behavior?

In web applications you can sometimes get away with
many small GCs. Or you can pre-allocate all data structures.
"On the Internet, nobody knows you are a dog." ;-)
Users will see nothing, or only effects similar to
other factors contributing to latency. So a simple
app might not be a problem. These are the same apps
people write in languages like Python, Ruby, ...,
which are often one or two orders of magnitude slower
and have relatively primitive memory management
(reference counting and up).

But when you start using CLOS and you have lots of
changing domain data - say, you cons lots of memory at
page-creation time and so on - then you want to introduce
simple user-level optimizations (stack allocation,
memoization, resources, destructive operations, ...).
If that doesn't help, then it gets tricky.
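As a small example of the user-level kind: DYNAMIC-EXTENT is standard
CL, though an implementation is free to ignore the hint:

  ;; Ask for the temporary list to be stack-allocated, so its cons
  ;; cells never reach the heap and never burden the GC:
  (defun sum-window (a b c)
    (let ((tmp (list a b c)))
      (declare (dynamic-extent tmp))
      (reduce #'+ tmp)))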

--
http://lispm.dyndns.org

Tamas Papp

Jul 2, 2007, 4:27:57 PM
Mark Hoemmen <mark.h...@gmail.com> writes:

> There are some things I'd like to see made part of a CL
> implementation, but these are not so much language features:
>
> 1. More control over the garbage collector in a multithreaded
> environment -- in particular, being able to "pin" array data without
> interrupting the GC or synchronizing threads;
>
> 2. Guarantees of bit-level compatibility with C / Fortran arrays for
> certain array types.

Yes! Exactly what I need. Scheme is ahead of CL regarding these
features.

Tamas

Rainer Joswig

Jul 2, 2007, 4:32:21 PM
In article <1183401180.7...@c77g2000hse.googlegroups.com>,
jimb...@gmail.com wrote:

I find your remarks very unfriendly. Your posting
suggests that you might yourself be socially challenged.

I've been here at comp.lang.lisp a long time,
and I have seen a lot of very friendly
people (plus the usual dose of idiots). Even though the
Lisp community is very diverse, there are lots of helpful
people here. Ask a question and you will often get very good answers.

If somebody trolls, wants to cheat on school or
university tests, or asks questions whose
answers can easily be googled or found in a FAQ,
then they might get an unwanted reply. Sometimes
there are more specific mailing lists or newsgroups
one should use to find answers. People will get pointers.

comp.lang.lisp has improved its signal-to-noise ratio.
I'd like it to get better, not worse.

Sometimes Lisp attracts people with very strange ideas.
They may look at Lisp as one of the few tools
able to express their ideas in computational
terms. At one point, programming gene-like algorithms
with mutations must have looked strange. But then
Koza came along and published a book about it, using Lisp
for his research. That was the start of a new field
in computer science. Or Chaitin's work on
algorithmic information theory. Or Hofstadter's books.
All with strange Lisp stuff.

But fun and interesting.

Sorry, but your remarks are unfriendly and I will
killfile you.

> -jimbo
>
> [1] http://programming.reddit.com/info/21vs0/comments/c022160

--
http://lispm.dyndns.org

Daniel Barlow

Jul 2, 2007, 5:17:36 PM
Christopher Browne wrote:
> I don't think that XML parsing is well enough definable (in a
> sufficiently standard way) to make that something highly worth "adding
> to the language," whether as a 'native construct' or as a library.
> There are too many well-known viable models out there (DOM, SAX, to
> mention two) for there to be a clear *one* thing to add.

Which is why, IMO, it is not evidence for "lisp is a blub": lispers can
easily identify when they need to parse XML and how, and get a library
that does it.

> It seems plausible that having quasi-standard mechanisms for pattern
> matching (which pretty well all of the functional languages have
> gotten *very* keen on, of late) and Prolog-style unification might be
> useful additions. If that could be handled via libraries that would
> have a small impact on the 'language design,' I suppose so much the
> better...

OK, my only experience of pattern matching is from using O'Caml and its
predecessor Caml Light about 12 years ago (one of those ironies of life,
the Caml language family was a strong influence in pushing me to learn
Lisp), but as I experienced it then it seemed to be about centralising
decision making ("glorified switch statement") whereas OO is more about
decentralising it. AOP decentralises it even further with its talk of
"cross-cutting concerns" - you can now also make decisions based on
categories that aren't modelled in the class hierarchy. So, unless
that's not what we're talking about any more when we say "pattern
matching", it doesn't seem like anything _new_, just a swing back
towards having more of the logic in the same place. Of course, that's
exactly what a blub programmer would say, and maybe languages with
pattern-matching support have different tradeoffs that make that style
more elegant and expressive than the decentralised model.

> Improving parallelism capabilities seems like a tougher one, in that
> it might require, for support of "threading-like" models, deeper
> changes to language semantics.

I wouldn't be at all surprised if this is the Next Big Thing(tm). Not
because I understand anything about it, but just because I have a mental
image of the Superior language advocate saying "you just do foo and bar
and then baz happens, and it's trivially re-entrant and won't deadlock
because of the quux guarantees", and the Blub programmer going "that's
not special, you just need mutexes here and here and this is a
producer-consumer pattern and I could do it in an afternoon. Nobody
gets race conditions if they've thought about the problem at all, and
your way must introduce overhead."

The more I think about it the more I tend to the opinion that Blub is
more about the attitude of the programmer than the facilities provided
by the language, and the attitude is typified by a willingness to do
$some_hard_problem by hand instead of looking for a way to make the
computer handle the bookkeeping.

* subroutines: "it's not hard to keep a stack of the PC location so you
know where to jump back to"
* interactive development: "you should be forced to program with punched
cards, then you'd learn to get it right the first time"
* structured programming: "it's all just syntactic sugar over GOTO"
* GC: "a crutch for lazy programmers who can't keep track of their own
allocations"
* OO: "I can do that, it's just structs with function pointers in them"
* macros: "how often would you want to do that? anyway, you can use the
preprocessor"

I'd like to hope that Lisp programmers are less susceptible to this than
most programmers, just because we do have a language that has given us
the tools to invent new ways of abstracting stuff, and maybe that helps
keep our brains alert. But maybe I'm kidding myself ...


-dan

jimb...@gmail.com

Jul 2, 2007, 5:30:45 PM
On Jul 2, 4:32 pm, Rainer Joswig <jos...@lisp.de> wrote:
> Sorry, but your remarks are unfriendly and I will
> killfile you.

Well, that's too bad, because I wasn't singling you out personally and
have found many of your comments useful and interesting; it's a
shame you won't know I feel that way, because you won't be reading
this.

For anyone still reading: in other reddit comments I mention that the
reason for c.l.l.'s quick reaction of labeling a comment a troll is
the large number of real trolls that come through this group. I think
because of that there is sometimes an overreaction of declaring
someone a troll when they have a legitimate criticism of Lisp, or an
honest question that unfortunately resembles a question or comment
that was a troll in the past.

Beyond that, there seems to be a paranoia about adding anything in any
way to "Lisp". Notice the quotation marks, because I don't care if
the existing Common Lisp standard ever officially changes, but I think
"Lisp" could benefit from more de facto standardization: picking some
core functionality and convincing the various Lisp implementors to
bundle it, through social pressure and by pitching in to help.

Elsewhere in the same comments thread as my offending remark, someone
asks why Lisp has no literal syntax for hash tables. I pointed out
that Lisp syntax has literals for anything tree-like, and that Lisp
prefers consistency of syntax (enabling powerful macros, etc.) to the
special-case syntaxes of Perl, Python, Ruby, etc.

Well, that did make me curious why Common Lisp does not have a reader
syntax for hash tables. In the process, I discovered hash tables are
one of the few things that lack a reader syntax (numbers, strings,
lists, vectors, structs all covered; did I miss any?). Investigating
more, I found a long, involved c.l.l. thread from 1999 discussing this
very issue. The conversation revolved around "what about the edge
cases?" You might not be able to re-establish notions of identity
when you read your hash table back in, thus breaking semantics.

Well, so what? It's an edge case. Document the limitations somewhere
and move on. (Which several people advocated.)
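
To make the idea concrete, here is a minimal sketch of what such a
reader syntax could look like; the #{ ... } notation and the reader
function's name are hypothetical, not standard CL, and the identity
caveats above still apply:

  (defun hash-literal-reader (stream subchar arg)
    (declare (ignore subchar arg))
    ;; Relies on #\} acting as a delimiter; arranged just below.
    (let ((pairs (read-delimited-list #\} stream t)))
      `(let ((ht (make-hash-table :test #'equal)))
         (loop for (k v) on (list ,@pairs) by #'cddr
               do (setf (gethash k ht) v))
         ht)))

  (set-dispatch-macro-character #\# #\{ #'hash-literal-reader)
  (set-macro-character #\} (get-macro-character #\)) nil)

After that, #{"a" 1 "b" 2} reads as a form that builds an EQUAL hash
table with two entries. The code is the easy part; as the 1999 thread
showed, agreeing on one notation is the hard part.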

But there is no one who can just say "let's just make a decision and
move on" for Lisp. No Matz, Guido, Larry, or Stroustrup. The spec is
great. But the fear of establishing any kind of consensus for
anything not in the spec is a shortcoming of the Lisp community, in my
opinion.

My criticism goes to Common Lisp culture and community as a whole, not
to any one individual. It is good to have individuals in the
community that say "wait a sec, what about..." before blindly pushing
forward with a new idea. But when those sentiments overwhelm the
community as a whole and change, thus progress, becomes impossible, I
consider that a broken culture.

So please don't think I'm picking out individuals here as the problem
with Lisp. But I do think the sum of Lisp culture is holding Lisp
back from its full potential at this juncture.

-jimbo

jimb...@gmail.com

unread,
Jul 2, 2007, 5:44:36 PM7/2/07
to
On Jul 2, 3:50 pm, Christopher Browne <cbbro...@mail.libertyrms.com>
wrote:

> I don't think that XML parsing is well enough definable (in a
> sufficiently standard way) to make that something highly worth "adding
> to the language," whether as a 'native construct' or as a library.
> There are too many well-known viable models out there (DOM, SAX, to
> mention two) for there to be a clear *one* thing to add.

Then pick one or two and call it a day.

Java and Python and other languages with large standard libraries have
many XML libraries available, but they still pick one or a few to
include in the standard distribution by default. This means that
every programmer has SOME way of dealing with XML without thinking
about where to find good libraries, whether a library works with their
distribution, whether a library is any good, and downloading and
installing it. The programmer still has the option of downloading a
library that better suits his needs if necessary.

I will now take this opportunity to push my personal meme regarding
this topic: convince the Lisp implementors to all bundle Edi Weitz'
Starter Pack. See my disclaimer from earlier in this thread.

Hmm, maybe I should organize an email campaign to all of the Common
Lisp implementors out there.

-jimbo

Holger Schauer

unread,
Jul 2, 2007, 5:58:19 PM7/2/07
to
On 5052 September 1993, Don Geddis wrote:
> OK, so now the question. Common Lisp was formalized quite some time ago;
> ANSI CL a bit later, but without many radical changes. Are there tools now
> known in the abstract space of programming language design that "ought" to be
> part of Common Lisp? That, if there had been an ANSI CL 2 effort, would
> surely have been strongly suggested to get added to the language?

Don, a lot of people have already pointed out why the question itself
is questionable. I won't do that, but I still won't answer your
question either, which probably also hints that there is some basic
problem with it.

> I'd like to distinguish this from libraries or specific applications, which
> are easy enough to layer on top of existing ANSI CL.

I believe that this restriction is too strong. A lot of this stuff at
least started out as a CL application or library.

> Things like "iteration" or "recursion" or "CLOS".

Well, iteration and recursion aren't really things to be taken from
the language specification. And CLOS is implemented/implementable in
CL.

> Regular expression matching, which is missing from the ANSI CL standard, is
> probably of this "edge" class and would have been included in a CL2. Maybe
> multiprocessing too.

But this is not a sharp boundary -- take PPCRE, which is a CL
library. Multiprocessing, on the other hand, is very likely not
implementable without considerable effort if the CL implementation
doesn't provide at least basic support.
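
For instance, with CL-PPCRE loaded (Edi Weitz's portable regex
library), a quick sketch of typical use:

  ;; Returns the whole match plus a vector of register captures.
  (cl-ppcre:scan-to-strings "(\\d+)\\.(\\d+)" "version 1.42")
  ;; => "1.42", #("1" "42")

No implementation support needed -- it is plain CL all the way down.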

> And the pattern-matching programming idea got me thinking of
> unification.

You do know that there are (I think several) libraries implementing
unification in CL?
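
For anyone who hasn't seen one, here is a minimal sketch of
first-order unification in plain CL -- illustrative only, not any
particular library's API, and with no occurs check:

  ;; Variables are symbols whose names start with ?, e.g. ?X.
  (defun var-p (x)
    (and (symbolp x)
         (plusp (length (symbol-name x)))
         (char= (char (symbol-name x) 0) #\?)))

  (defun unify (x y &optional bindings)
    (cond ((eq bindings 'fail) 'fail)
          ((eql x y) bindings)
          ((var-p x) (unify-var x y bindings))
          ((var-p y) (unify-var y x bindings))
          ((and (consp x) (consp y))
           (unify (cdr x) (cdr y)
                  (unify (car x) (car y) bindings)))
          (t 'fail)))

  (defun unify-var (var val bindings)
    (let ((binding (assoc var bindings)))
      (if binding
          (unify (cdr binding) val bindings)
          (acons var val bindings))))

  ;; (unify '(f ?x (g ?y)) '(f a (g b)))  =>  ((?Y . B) (?X . A))

PAIP, for one, develops a unifier along these lines.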

> At one time, people touted Prolog as the future of programming
> languages. Very, very different from Lisp, and hugely useful when
> it applies. I think Prolog failed as a general purpose programming
> language, but should that technology be part of the bag of tricks
> available to every Lisp programmer? Unification and inference is
> probably a generalization of the pattern-matching idea.

JFTR, Mercury, which started out as a better Prolog, includes a lot of
features typically offered by functional languages. Still, it hasn't
really attracted a lot of new users. And for inference, there are
myriad inference systems implemented in Lisp.

What I want to say is that this stuff is already available.

Holger

--
--- http://hillview.bugwriter.net/ ---
"Time flies Time crowls Like an insect Up and down the walls.
The conspiracy of silence ought to revolutionise My thought."
-- Magazine, "The light pours out of me"

jayessay

unread,
Jul 2, 2007, 6:20:05 PM7/2/07
to
Edi Weitz <spam...@agharta.de> writes:

> On Mon, 02 Jul 2007 20:22:31 +0200, Pascal Costanza <p...@p-cos.net> wrote:
>
> > It seems to be the case that for ultra-large-scale programs, garbage
> > collection is not feasible anymore.
>
> It also seems not everybody agrees on this:
>
> http://lukego.livejournal.com/4773.html

And if you follow it through, Foderaro also backs away from that
claim. Which is a good thing since the claim makes no sense and I've
always considered JF a very savvy guy.

What you can say about this is something like "free-wheeling _consing_
in 'ultra-large-scale' <whatever that is...> programs is a foolish
(and potentially disastrous) practice."[1] Of course, that's not
controversial or even particularly interesting, and so doesn't lend
itself to the subsequent marketing spin...


/Jon

1. Oddly enough (given the context here) there is unnecessary consing
in the low-level btree write code for applications that need to
handle very large low-level buffering. I'm sure this only affects
applications that are "at least six sigmas out", and so is, last I
discussed with JF, a low priority item. Similar discussions
resulted in the analogous situation being completely resolved on
the read side.

--
'j' - a n t h o n y at romeo/charley/november com

Rainer Joswig

unread,
Jul 2, 2007, 6:25:29 PM7/2/07
to
In article <yxzejjq...@elendil.holgi.priv>,
Holger Schauer <Holger....@gmx.de> wrote:

> On 5052 September 1993, Don Geddis wrote:
> > OK, so now the question. Common Lisp was formalized quite some time ago;
> > ANSI CL a bit later, but without many radical changes. Are there tools now
> > known in the abstract space of programming language design that "ought" to be
> > part of Common Lisp? That, if there had been an ANSI CL 2 effort, would
> > surely have been strongly suggested to get added to the language?
>
> Don, a lot of people have already pointed out why the question itself
> is questionable. I won't do that, but I still won't answer your
> question either, which probably also hints that there is some basic
> problem with it.
>
> > I'd like to distinguish this from libraries or specific applications, which
> > are easy enough to layer on top of existing ANSI CL.
>
> I believe that this restriction is too strong. A lot of this stuff at
> least started out as a CL application or library.
>
> > Things like "iteration" or "recursion" or "CLOS".
>
> Well, iteration and recursion aren't really things to be taken from
> the language specification. And CLOS is implemented/implementable in
> CL.

Not without system-dependent stuff.

--
http://lispm.dyndns.org

Daniel Barlow

unread,
Jul 2, 2007, 6:31:20 PM7/2/07
to
Holger Schauer wrote:
> You do know that there are (I think several) libraries implementing
> unification in CL?

There are also libraries/preprocessors/other tools implementing GC
for C, AOP for Java, and so on. I think the more important question is
not whether you /can/ do $foo in Lisp, but whether the typical Lisp
programmer would want to, or know when to.


-dan

Alan Crowe

unread,
Jul 2, 2007, 8:33:46 PM7/2/07
to
Don Geddis <d...@geddis.org> writes:
> OK, so now the question. Common Lisp was formalized quite some time ago;
> ANSI CL a bit later, but without many radical changes. Are there tools now
> known in the abstract space of programming language design that "ought" to be
> part of Common Lisp? That, if there had been an ANSI CL 2 effort, would
> surely have been strongly suggested to get added to the language?
>
> Regular expression matching ...
>
> Static typing is another interesting corner case....
>
> And the pattern-matching programming idea got me thinking of unification...
>
> So I'm wondering about language expressions that are so fundamental to
> describing computation, that they "ought" to be part of the vocabulary of any
> advanced programmer.
>

The concept of a computer programming language is pretty
much played out. The progression through machine code,
assembly, C, Lisp has traversed a conceptual space all the
way to its boundary. It is time to strike out in a new
direction.

The two issues are:

1) Programming is the boundary activity, separating the
manual from the automatic.

2) The conflict between writing clear, simple code and
writing efficient code.

Think of the document flow in an organisation. There are
discussions with customers. Requirements are
analysed. Software products are proposed and discussed. Lots
of documents are written and rewritten.

Think about a change to the requirements. The author of the
software product proposal responds to the change by
thinking through its implications and manually amending his
proposal.

Contrast the relationship between source code and assembler
listing with the relationship between requirements and
proposal. When the source code changes, the programmer
merely reruns the compiler. The production of documents
``below'' the level of the computer programming language has
been automated.

A computer programming language marks a boundary between the
manual and the automatic and the goal of ``improving'' a
computer programming language is to shift this boundary
higher up the business, automating what was previously
manual.

Let me make this more concrete with an admittedly
hypothetical example. A finance company must respond to tax
changes by changing the programs that calculate tax. It
wants to obtain the taxation statutes in machine readable
form and run a program that extracts the new rates and
updates the code. If the tax changes are just adjustments to
numerical parameters, this is within the grasp of current
technology; but when, for example, Gordon Brown splits the
tax on cars into bands according to fuel efficiency, the
code base must change more radically.

The finance company wants to "program" its computers in a tax
law programming language. Realistically I'm imagining an
extra level of complication. There is a hairy program,
perhaps in Common Lisp, that compiles TLPL (my hypothetical
Tax Law Programming Language) into Common Lisp. This is the
standard idea of a domain specific programming language and
Common Lisp, with its defmacro, is about as good as it gets.

Attempts to think about improving computer programming
languages falter because they conceive of the issue in terms
of writing programs, but the goal is the opposite of
this. The goal is to NOT write programs. Programming in the
conventional sense is to be automated away. To put it in
boring terms, programming's future is to split into three. One
part is writing compilers for languages closer to the
business needs of the customers who pay the bills. Another
part is using those languages. The third part is ``paddling
up waterfalls''. A waterfall model with the compiler writers
presenting the completed language to those who must program
in it is implausible.

So far I have articulated my vision in familiar language. Instead
of application programmers writing applications, we have
compiler writers writing compilers, application programmers
writing applications in application-specific languages, and
some guys paddling up waterfalls. My vision is vague.

It is also incoherent as written. The term "compiler writer"
still has its old meaning, writing programs that turn C or
CL into assembler. I should not be reusing it to mean
something else. Much the same problem applies to my use of
the phrase "application programmer". It is only the job
description "waterfall paddler" that is honest in its
novelty and vagueness.

To see the difficulties, think about Ruby on Rails. Is it
created by compiler writers? Not really. What do power users
do with it? I don't think it is being leveraged to build
more elaborate websites. I think that canny programmers are
using it to spend less time coding and more time thinking
about how to make money.

Let's call the three roles:

1) Business programmer: he writes in the domain-specific
language, but spends more time understanding customers
than he does understanding computers.

2) Transformation programmer: like a compiler writer, except
that Common Lisp is the object language, not the source
language.

3) Waterfall paddler: my lack of explicitness here reveals a
major weakness in my thinking. This is a post to a
newsgroup; you haven't paid $100 for a book :-)

The Business Programmer is writing clear, simple code. Perhaps
the naive interpretation of the code is as an inefficient
algorithm that runs too slowly, but the business programmer
has no time to spend rewriting it to run faster. He is out
of the office talking to customers. When he comes back he
will change it to keep up with a fast changing world, not to
fix efficiency problems.

What about the Transformation Programmer? Does he rewrite
the code himself to make it run faster? Absolutely
not. ``Programming'' = ``writing code'' is what we are trying to
eliminate, or more precisely automate. Remember that
tomorrow the Business Programmer will change key Business
Declarations. All the effort put into manual tuning will be
lost. The Transformation Programmer aims to come up with a
transformation that turns today's Business Declarations into
efficient code, but is not so tightly tuned that the
transformation breaks tomorrow. The waterfall paddler tries
to optimise a tricky trade-off. The transformations must be
general enough to survive tomorrow's changes, but if this
vision is to be practical they must be done in a way that
keeps well away from attempting to build a general artificial
intelligence. Someone must distinguish the likely changes
from the unlikely changes in a way that permits a trade-off:
being less prepared for unlikely changes buys being more
prepared for likely changes.

Technology and Implementation
-----------------------------

I've recast the problem. It is no longer about creating
better, fixed languages for writing applications. It is
about better tools for writing optimising
compilers. Especially about having more flexible tools so
that an efficiency problem is not solved by changing the
source, but by adding extra transformations to the compiler
on an as-needed basis.

How might this be possible? The work that strikes me as
exciting and mysterious is that of Futamura.

http://www.cubiclemuses.com/cm/blog/archives/000419.html

If the transformation programmer's tool chest includes a
reasonably powerful language (RPL) that comes with a partial
evaluator and a native code compiler, he uses it to write an
interpreter for the business language. The first Futamura
projection -- partially evaluating the interpreter with
respect to the business declarations -- then yields, for
free, RPL code that can be compiled to native code.

I think that this kind of staged approach covers the issues
of pattern matching and static typing.

It is easy enough to write code that matches data structures
at run time. So you can write an interpreter for a pattern
matching language. You can even write macros that expand to
calls to a pattern matching routine, so you can add pattern
matching to a compiled language and only take the
interpretive performance hit in code that actually uses the
pattern matching. If you write the pattern matcher in a
language for which you have a partial evaluator, you can
partially evaluate the pattern matching code with the
pattern list at macro expansion time and get full
compilation.
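
A minimal sketch of that staging idea in CL -- IF-MATCH and
COMPILE-PATTERN are hypothetical names, handling only
symbols-as-variables and literal trees:

  ;; The helper runs at macro-expansion time; in a compiled file it
  ;; would need to be wrapped in EVAL-WHEN.
  (defun compile-pattern (pat in then else)
    (cond ((null pat)    `(if (null ,in) ,then ,else))
          ((symbolp pat) `(let ((,pat ,in)) ,then))      ; variable
          ((consp pat)
           (let ((a (gensym)) (d (gensym)))
             `(if (consp ,in)
                  (let ((,a (car ,in)) (,d (cdr ,in)))
                    ,(compile-pattern (car pat) a
                       (compile-pattern (cdr pat) d then else)
                       else))
                  ,else)))
          (t `(if (eql ,in ',pat) ,then ,else))))        ; literal

  (defmacro if-match (pattern input then &optional else)
    (let ((in (gensym "IN")))
      `(let ((,in ,input))
         ,(compile-pattern pattern in then else))))

So (if-match (x y) '(1 2) (+ x y) :no-match) expands into plain
CONSP/CAR/CDR tests and LETs: the matching "interpreter" has been
specialised away at macro-expansion time. (The ELSE form is
duplicated in the expansion; a real version would bind it to a local
function first.)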

I think that something similar happens with static
typing. Much of the type checking overhead is removed by
partial evaluation, because the types are independent of the
actual data. One can even regard the elimination of type
checks as the criterion for whether the code is well typed.

Notice that if the Business Programmer wants a more
expressive type system or more elaborate pattern matching,
the Transformation Programmer is in a position to offer this
because he is hacking an interpreter for which he has a
partial evaluator.

There are interesting issues with regular expressions. My
Perl book gives the example of matching

"aaaaab" against /a*a*a*a*a*[b]/

and warns that leaving off the "b" and just matching against
"aaaaa" will take too long to fail. This is the kind of
conflict between clarity and efficiency that makes
programming an all-consuming technical activity. It does
strike me that there is something wrong here. Regular
expressions are a subset of context-free languages, and the
Earley algorithm parses context-free grammars in at worst
cubic time, so I don't understand Larry Wall's hint that
these kinds of expression take exponential time.

I've come across this issue before with a computer
programming language called REFAL. It is very attractive,
because the built-in pattern matching subsumes most
looping. Unfortunately it does not live up to its promise
because there are various ways that clear code can be unbearably
inefficient. The manual has to go into some detail of how
certain obvious ways of matching have quadratic performance
and how to write non-obvious code to get linear performance.

In future the Business Programmer will not obfuscate the
Business Declarations to obtain performance, he will get the
Transformation Programmer to fix the regular expression
matching engine (either with a better algorithm or by adding
special case code to handle common troublesome
situations). Transformations from quadratic to linear will
be done by adding transformations to the optimiser, not
obfuscating the source code. (The source code might well
need declarations saying that linear performance is needed in
one parameter of a function but not in another. It might
even need to include a hint about how the transformation is
to be done.)

The big obstacle to this vision is that deep within the
partial evaluator lurks a theorem prover, and it is the
strength of this theorem prover that determines whether the
partial evaluation of an interpreter with source is actually
compiling.

I'm intrigued by what I have read of ACL2. It seems that it
is not very smart "out of the box". You ask it to prove a
theorem and it cannot manage it. You ask it to prove a well
chosen lemma and it manages the lemma. Now the lemma is
added to its stock of theorems, and when you return to the
main theorem it is able to prove it automatically. One
feature of this kind of dialogue is that you cannot break it
by accidentally teaching it a false lemma. You can only build
its stock of theorems incrementally by getting it to prove them
for itself and working up.
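
The classic beginner's example of that dialogue, sketched in ACL2's
applicative subset of CL (whether the intermediate lemma is really
needed depends on the ACL2 version and the books already loaded):

  (defun rev (x)
    (if (endp x)
        nil
        (append (rev (cdr x)) (list (car x)))))

  ;; Asked directly, REV-REV typically fails.  Prove this well-chosen
  ;; lemma first (possibly after the associativity of APPEND) ...
  (defthm rev-of-append
    (equal (rev (append x y))
           (append (rev y) (rev x))))

  ;; ... and now the main theorem goes through automatically.
  (defthm rev-rev
    (implies (true-listp x)
             (equal (rev (rev x)) x)))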

What fascinates me is that I'm not looking for artificial
intelligence. I'm only expecting to automate processes that
are currently routinely done manually. I'm not expecting
automated tools to come up with a program transformation
that the Transformation Programmer would not have done
himself in the old days. I'm imagining that the
Transformation Programmer doesn't write transformations in
the style of defmacro function bodies. Instead he comes up
with a series of lemmas that get the ACL2 like theorem
prover inside the partial evaluator up to the point that it
can carry out the transformation. There is then a good
chance that when the Business Declarations change, the
theorem prover will be able to prove the trivial variant
needed to optimise the new code all on its own.

So here is a radical vision for what computer programming
looks like in 2027.

The Business Programmer writes clean, clear code. Probably
declarative code. It might look a lot like Prolog, but
actually it will be very different. Naively written Prolog
often triggers the backtracking search producing
exponentially awful programs. The Business Programmer will
not personally have to worry about this and will never
obfuscate his code for efficiency.

The necessary transformations are never done manually. The
Transformation Programmer engages in a Socratic dialog with
the theorem prover inside the partial evaluator, leading it
to see how to transform the code to make it efficient.

Source control means both the Business Declarations and the
Lemma List. Today creative engineers must consider the
software/hardware trade-off. In future the Waterfall Paddler
must design the domain specific language by trading off the
complexity of the Business Declarations against the length
of the Lemma List and the difficulty of constructing it.

Now I can explain why I am not very interested in improving
Common Lisp. It is not that I think that Common Lisp is the
last word in computer programming languages. Far from it. I
have the opposite concern. I look forward to radical
developments. The tool chain underpinning future languages
might well be written first time around in Common Lisp, but
I look forward to those languages being very different
indeed. For example the semantics of partial evaluation
would be part of the specification. If the Lemma List is to
be part of the program source, then the specification of the
language would have to include minimum capabilities for the
theorem prover inside the partial evaluator and say how to
dialogue with it. That is not sounding like anything one can
reach by further development of Common Lisp.

Is Common Lisp a Blub? It will be.

Alan Crowe
Edinburgh
Scotland

Ken Tilton

unread,
Jul 2, 2007, 9:21:02 PM7/2/07
to

jimb...@gmail.com wrote:
> On Jul 2, 4:32 pm, Rainer Joswig <jos...@lisp.de> wrote:
>
>>Sorry, but your remarks are unfriendly and I will
>>killfile you.
>
>
> Well, that's too bad because I wasn't singling you out personally. I
> have found many of your comments useful and interesting, and it's a
> shame you won't know I feel that way because you won't be reading
> this.
>
> For anyone still reading, in other reddit comments I mention that the
> reason for c.l.l.'s quick reaction of labeling a comment a troll is

> due to ...

...our being smarter than frickin losers like you? Name one serious
poster labelled a troll who is still posting. Oh. None? Fine, STFU and
go find a newsgroup that won't laugh your sorry *ss into oblivion.

For anyone still reading, that is.

hth,kenny

Wade Humeniuk

unread,
Jul 2, 2007, 11:52:06 PM7/2/07
to
Pascal Costanza <p...@p-cos.net> writes:

> Also: There is nothing special about being "up in the power
> continuum." Just because most languages suck nowadays doesn't mean
> that you're special because you happen to know a language that doesn't
> suck that much. The ideas behind Lisp are only mind blowing when
> compared to more mainstream languages. In 200 years from now, though,
> these ideas will be regarded as trivial.


The 200-year assertion is interesting. Since modern-day languages
(English compared to, say, Phoenician) are not that different from
their 5000-year-old ancestors, how can that assertion be made?
Computer languages (so far) are about human expression and
communication. People try to "tell" a machine to do something, and
machines, being thick as a brick, do not cooperate very well. This is
compounded by people not really knowing what they are saying, and you
end up with hands waving around in grand style.

I suppose the next step is programs that are more introspective, that
know what they are doing. Then instead of just "knowing" what they
are supposed to do, they also know and discover what they are "not"
supposed to do. It's the negative-space knowledge that programs lack.
You even see that with programs now: ones that have better error
checking and detection have better running behaviour. That is a
simple example, but things like "search" can be greatly improved by the
program being critical of its own operations and learning what not to
do.

Wade

Duane Rettig

unread,
Jul 3, 2007, 1:05:55 AM7/3/07
to
Pascal Costanza <p...@p-cos.net> writes:
>
> It seems to be the case that for ultra-large-scale programs, garbage
> collection is not feasible anymore. (That was also the message in Jans
> Aasman's talk at ILC'07.)

It's unfortunate that that is what his talk appeared to emphasize.
I've read other posts on this thread, as well as following the links
through Edi Weitz's answer, and on to John Foderaro's conversation on
the sbcl mailing list (which explains better what our position and
practice is than a one-hour presentation that was too short to get
everything said that Jans wanted to say). Perhaps the blasphemous
statement was a bit of hyperbole that he made at the beginning of the
talk, which was intended more to inspire thought than to state a "new
direction". To expand on it further: consider that the term "garbage
collector" isn't even defined in the Ansi Spec, and it is used in very
few places (mostly to describe implementation-dependent actions that
functions may take involving gcs). No assumptions are made about a
garbage collector in CL, and in fact there need not be one (a
perfectly legal garbage collector might be to spawn a new lisp and
die, thus automatically removing non-ephemeral garbage very quickly).
There are no rules made in CL about manual memory management,
resourcing, or reduction of consing, though I frankly have no idea why
some Lispers seem to treat these concepts as ugly or even anti-Lisp -
on the contrary: Common Lisp promotes diversity and multiple
solutions, and that is what makes it so hard to promote or even
describe Common Lisp, and it is what makes Common Lisp so special.
There is also a mistaken notion that gc is magic, and takes no time
when it is generational. The fallacy in this notion is due to the
fact that it is so often true: when the garbage generated is truly
ephemeral, the gc time tends to be ignorable. But what do you do when
your garbage is not ephemeral? The memory-management strategy then
becomes one of decisions made on how to deal with the data that
becomes medium-lived, and the choice that one might make to simply
generate less garbage should not be a choice that is shunned. I think
that this is what Jans was trying to get people to think about in his
talk.
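
For the curious, here is a minimal sketch of what "resourcing" looks
like -- illustrative only, not Allegro CL's actual API:

  ;; A trivial pool: reuse buffers instead of consing fresh ones, so
  ;; medium-lived data need never become garbage at all.  (A real
  ;; pool would match sizes to requests and be thread-safe.)
  (defvar *free-buffers* '())

  (defun acquire-buffer (&optional (size 4096))
    (or (pop *free-buffers*)
        (make-array size :element-type '(unsigned-byte 8))))

  (defun release-buffer (buffer)
    (push buffer *free-buffers*)
    (values))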

>> Don't understand how it
>> would aid programmer productivity. Similarly, a Java-only programmer might
>> not get the power of macros.
>
> Gregor Kiczales most certainly knows about the power of macros and
> argues against them: http://www.ddj.com/dept/windows/184415142 (Note
> that I disagree with his position, it's just to note that views may
> differ even when knowing about the benefits.)

Chris Rathman already disagreed with your assessment, and although I
usually don't say "me too", I will do so here, only because it ties in
to what is being said about GC: I often find it to be the case that
Lispers will latch onto a concept - "X is the greatest thing in Lisp"
and then promote the use of that concept ad nauseum - when a more
experienced Lisper (such as Gregor; I'm sure you'll agree that he
has more Lisp experience than you or me combined :-) comes along and
says "well, it's not the only thing", we tend to view it as a
statement of rejection of that concept rather than a call for wisdom.

For my own experience, I got a taste of the good - and the bad - of
macros when I first started learning Lisp, in 1984 when I ported Franz
Lisp to the IBM 370 architecture while I was at Amdahl; included with
the distribution tape was an in-memory database program called pearl -
it was an exercise in extreme macrology. I found it fascinating at
first, because _everything_ was a macro; there were extremely few
actual functions in the package. But I found two problems with the
system: First, it was impossible to debug, and second, it killed the
mainframe's caches due to the extreme spread and non-locality of the
code. It was then and there that I learned to _balance_ use of
macrology and out-of-line code, and, as with any other great concept,
to use it in moderation, where it was necessary, rather than where it
was possible.

Well, I just looked back on my response, and it was much too long and
rambling. Sorry.

--
Duane Rettig du...@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182

Matthew Swank

unread,
Jul 3, 2007, 1:26:45 AM7/3/07
to
On Tue, 03 Jul 2007 01:33:46 +0100, Alan Crowe wrote:

> The work that stikes me as exciting and mysterious is that of Futamura.

Whenever I see references to Futamura, I always have to read the
reference two or three times. This is because I invariably see the text
as a reference to the *Futurama* projection.

However, you can see why this mistake is an easy one to make: Futurama is
also prominent in programming language theory.

Matt

--
"You do not really understand something unless you
can explain it to your grandmother." - Albert Einstein.

Pascal Costanza

unread,
Jul 3, 2007, 4:14:15 AM7/3/07
to
Duane Rettig wrote:
> Pascal Costanza <p...@p-cos.net> writes:
>> It seems to be the case that for ultra-large-scale programs, garbage
>> collection is not feasible anymore. (That was also the message in Jans
>> Aasman's talk at ILC'07.)
>
> It's unfortunate that that is what his talk appeard to emphasize.

My statement was obviously too dense. Sorry for that. If I understand
correctly, what seems to be feasible is to divide storage into different
regions which are handled differently. Some regions can be nicely
garbage collected, while others need a more manual approach, due to the
size and longevity of the data to be stored. My understanding was that
Jans's talk was about similar ideas. I have never understood it as
saying that we should get rid of garbage collection altogether.

>>> Don't understand how it
>>> would aid programmer productivity. Similarly, a Java-only programmer might
>>> not get the power of macros.
>> Gregor Kiczales most certainly knows about the power of macros and
>> argues against them: http://www.ddj.com/dept/windows/184415142 (Note
>> that I disagree with his position, it's just to note that views may
>> differ even when knowing about the benefits.)
>
> Chris Rathman already disagreed with your assessment, and although I
> usually don't say "me too", I will do so here, only because it ties in
> to what is being said about GC: I often find it to be the case that
> Lispers will latch onto a concept - "X is the greatest thing in Lisp"
> and then promote the use of that concept ad nauseum - when a more
> experienced Lisper (such as Gregor; I'm sure you'll agree that he
> has more Lisp experience than you or me combined :-) comes along and
> says "well, it's not the only thing", we tend to view it as a
> statement of rejection of that concept rather than a call for wisdom.

I think the subtext of Gregor's article is important here: It basically
tries to suggest that annotations + AOP are good enough. (While they in
fact only provide a fraction of what macros can do. At least that's how
I understand his article. It's also interesting to read his follow-up
article in that regard - see http://www.ddj.com/dept/architect/184415205
- to quote: "In April, I summarized attributes, explaining that they
introduce into C# the basic syntactic hook required to enable Lisp-style
macros.")

> For my own experience, I got a taste of the good - and the bad - of
> macros when I first started learning Lisp, in 1984 when I ported Franz
> Lisp to the IBM 370 architecture while I was at Amdahl; included with
> the distribution tape was an in-memory database program called pearl -
> it was an exercise in extreme macrology. I found it fascinating at
> first, because _everything_ was a macro; there were extremely few
> actual functions in the package. But I found two problems with the
> system: First, it was impossible to debug, and second, it killed the
> mainframe's caches due to the extreme spread and non-locality of the
> code. It was then and there that I learned to _balance_ use of
> macrology and out-of-line code, and, as with any other great concept,
> to use it in moderation, where it was necessary, rather than where it
> was possible.

Yes, these are very wise insights, but not good reasons to drop macros
altogether and replace them with something much weaker, IMHO.

Pascal Costanza

unread,
Jul 3, 2007, 4:18:33 AM7/3/07
to
Wade Humeniuk wrote:
> Pascal Costanza <p...@p-cos.net> writes:
>
>> Also: There is nothing special about being "up in the power
>> continuum." Just because most languages suck nowadays doesn't mean
>> that you're special because you happen to know a language that doesn't
>> suck that much. The ideas behind Lisp are only mind blowing when
>> compared to more mainstream languages. In 200 years from now, though,
>> these ideas will be regarded as trivial.
>
> The 200-year assertion is interesting. Since modern-day languages
> (English compared to, say, Phoenician) are not that different from
> their 5000-year-old ancestors, how can that assertion be made?
> Computer languages (so far) are about human expression and
> communication. People try to "tell" a machine to do something, and
> machines, being thick as a brick, do not cooperate very well. This is
> compounded by people not really knowing what they are saying, and you
> end up with hands waving around in grand style.

...but we are not doing science about natural languages in the same way
as we do science about programming languages. Except for very few
examples (like Esperanto), we don't try to come up with new natural
languages / language constructs and try to study in what ways they make
things more or less convenient.

My hope is that, like in other sciences and engineering disciplines,
there will be shifts in perspectives.

> I suppose the next step are programs that are more introspective, that
> know what they are doing. Then instead of just "knowing" what they
> are supposed to do they also know and discover what they are "not"
> supposed to do. Its the negative space knowledge that programs lack.
> You even see that with programs now, ones that have better error
> checking and detection have better running behaviour. That is a
> simple example but things like "search" can be greatly improved by the
> program being critical of its own operations and learning what not to
> do.

Interesting.

Wade Humeniuk

unread,
Jul 3, 2007, 9:35:04 AM7/3/07
to
Pascal Costanza <p...@p-cos.net> writes:
>
>> I suppose the next step is programs that are more introspective, that
>> know what they are doing. Then instead of just "knowing" what they
>> are supposed to do, they also know and discover what they are "not"
>> supposed to do. It's the negative-space knowledge that programs lack.
>> You even see that with programs now: ones that have better error
>> checking and detection have better running behaviour. That is a
>> simple example, but things like "search" can be greatly improved by the
>> program being critical of its own operations and learning what not to
>> do.
>
> Interesting.
>
>

Marvin Minsky has a good draft chapter on this kind of
thing.

http://web.media.mit.edu/~minsky/E5/eb5.html

and another thesis, (which has some Lisp in it).

http://web.media.mit.edu/~push/push-thesis.pdf

Wade

Christopher Browne

unread,
Jul 3, 2007, 9:53:29 AM7/3/07
to
Daniel Barlow <d...@coruskate.net> writes:

> Christopher Browne wrote:
>> Improving parallelism capabilities seems like a tougher one, in that
>> it might require, for support of "threading-like" models, deeper
>> changes to language semantics.
>
> I wouldn't be at all surprised if this is the Next Big Thing(tm).

A lot of people have been wanting "inherent parallelism" to be the
Next Big Thing(tm) for a long time now.

So long, indeed, that ConnectionMachine Lisp* (a Lisp variation
intended to harness certain sorts of highly parallel machines) came
and went at least 15 years ago.

In the *recent* past, the fact that Erlang uses FP to minimize
concurrency issues, along with a threading model allowing huge numbers
of concurrent processes, has brought the idea back toward the
forefront; the Stackless Python folks have taken a different tack on
it.

Of course, neither of those recent things diminishes the fact that
there have been failed attempts to build better ways to program
'clustered' systems more or less continually over the last 20 years...

Long and short of it: I don't think we have the 'silver bullet' on
parallelism yet. Unfortunately, CL seems a bit less friendly to "fine
grained parallelism" than more primitive languages. Whether that
means that CL needs to be "fixed" or whether it should be applied to
its favoured domains is less clear...
--
output = reverse("moc.enworbbc" "@" "enworbbc")
http://linuxdatabases.info/info/linuxxian.html
Howe's Law:
Everyone has a scheme that will not work.

Slobodan Blazeski

unread,
Jul 3, 2007, 10:01:08 AM7/3/07
to
On Jul 2, 7:00 pm, "cwarre...@gmail.com" <cwarre...@gmail.com> wrote:
> On Jul 2, 2:24 am, Slobodan Blazeski <slobodan.blaze...@gmail.com>
> wrote:
>
> > On Jul 2, 5:38 am, Don Geddis <d...@geddis.org> wrote:
>
> > > Some thoughts spurred -- I am ashamed to admit -- by our local spammer JH.
>
> > It's better to ignore the trolls completely; you'll learn nothing from
> > them except the bad smell of feeling dirty. Not to mention the waste of time.
>
> I'm not really interested in the main post because my suggestions
> would require far too much clarification, but this is an interesting
> quote. Yes, trolls are annoying. Yes, most of the time they have no
> idea what they're talking about. No, you shouldn't just ignore them.
> At least, not in the way suggested here. Ignore them _after_ you
> disprove what they're saying. Don't use the fact that they're a troll
> prevent you from exploring the issue. (I also like to keep an
> extremely open mind when I'm considering other viewpoints. I've
> learned that from years of unknowingly being wrong. In my defense,
> nobody was ever able to provide proof that I was wrong--I always ended
> up doing that myself.)
>
> Anyway, that's just my $0.02.

The trolls, or rather in this case a spammer, are an utter waste of
time. You can't teach someone stubborn enough not to want to learn a
little bit of Lisp programming what the hell we're talking about in
this newsgroup. Besides, a lot of people who want to read about Lisp
topics end up reading rehashes of the same BS from trolls and
spammers. So I will continue to ignore trolls and spammers no matter
what they say; you have a right to shape your opinion as you please.


Slobodan Blazeski

jimb...@gmail.com

unread,
Jul 3, 2007, 10:06:01 AM7/3/07
to
On Jul 2, 9:21 pm, Ken Tilton <kennytil...@optonline.net> wrote:
> ...our being smarter than frickin losers like you? Name one serious
> poster labelled a troll who is still posting. Oh. None? Fine, STFU and
> go find a newsgroup that won't laugh your sorry *ss into oblivion.

Isn't that kind of the point? Someone has a question, they get
labeled a troll, they stop posting because it's not worth the hassle
and find some other way to learn Lisp, or just go on happily
programming in Ruby on Rails or whatever.

Note: I'm not denying that there is A LOT of trolling on c.l.l. I am
claiming that occasionally real questions and comments get caught up
in the troll net.

OK, random hit on searching "not a troll" in c.l.l.

http://groups.google.com/group/comp.lang.lisp/msg/c21597a2cc4f7a7b

Random third party after poster A labels poster B a troll:

"I disagree very strongly. This person has done nothing but ask
reasonable
questions in a reasonable manner. That is not a troll. It is not his
fault
that he walks a path this group has seen many times before."

Not hard at all to find other examples like this.

-jimbo

Slobodan Blazeski

unread,
Jul 3, 2007, 10:38:19 AM7/3/07
to
On Jul 2, 8:33 pm, jimbo...@gmail.com wrote:
> On Jul 2, 3:24 am, Slobodan Blazeski <slobodan.blaze...@gmail.com>

> wrote:
>
> > There's many libraries that are relatively easy to find and
> > install,iterate, series, cells, qi....For a lot of the other mentioned
> > there is a library already ready to use or code exist or as in call/cc
> > feature is questionable, some like it some believe it doesn't worth.
> > So you already have a most powerfull toolbox from any other
> > lanaguages, you only need to build with it.
> > The only thing that I really believe that lisp is missing is erlang/
> > termite like concurrency, but time will tell.
> > For now we only need more apps written in it.
>
> I work at a university, and just out of curiosity I sat in on a talk
> given by young Apple employees to CS students considering applying
> there for internships or jobs.
>
> In response to a question about how to present one's self in an
> interview at Apple, one of the speakers replied (paraphrase) "Whatever
> you do, don't say Mac OS X is perfect.

Yes, it shares a similar problem with Lisp: it lacks consumer software,
since all the DESKTOP software is written for Windows only.

> If you can't find anything
> wrong with it, how could you possibly have any chance of making it
> better?"
>
> I think that would be a healthy attitude to take towards Lisp, or,
> frankly, anything else you like.

Agreed.


> If you think you have found any
> piece of utopia on this Earth, you are probably either deceiving
> yourself or need to raise your standards.
>

I didn't say Common Lisp is perfect, but it is the best thing (language
+ implementation + libraries + community + literature) I've found after
going through C, C++, BASIC, Pascal/Delphi, C#, T-SQL and some
Python, Java, PHP, OCaml & Erlang.


> Now, to your credit, you do mention the concurrency thing. But do you
> really think there is nothing else that could make Lisp better?
>

AllegroCache in a Lisp that I could afford ($2000 at most).

> And with the reliance on the idea that "you can always add that as a
> library" you come close to the Godwin's Law of Referencing Turing
> Completeness [1].
>

I prefer an embedded database as a library.


> To avoid hypocrisy, here are some criticisms I made about Lisp:
>
> http://programming.reddit.com/info/1z8b8/comments/c1zbmr

1 I think the main place to find out about Lisp is comp.lang.lisp, and
a lot of the long time posters there are seriously socially
challenged.

Name names, or apologize for the above. Otherwise I'll take this as an
insult to a lot of people I hold in high esteem.

--People newly interested in Lisp can often be called a troll when
they ask for comparisons between Lisp and other languages, for
example.

Asking the same question every week, from the same person who doesn't
even show that he understands at least a little Lisp so as to continue
the conversation, is trolling in my not-so-humble opinion, but YMMV.

2. Too many Lisps to choose from. This is both good and bad. The good
is that Common Lisp is available on a lot of platforms with a lot of
different performance profiles and different built in libraries. The
bad news is that it is difficult for people new to Lisp to pick one to
start using and that a Common Lisp library may or may not support the
Lisp you picked.

Lispers are spoiled for choice; that's a very bad thing. The good thing
is that any Lisp whose development didn't stop in 1986 will do
for a newbie. Afterwards they'll be able to choose what suits them
best.

3. Batteries not included. This is not a problem with commercial Lisp,
their batteries are included and generally pretty good. But the free
Lisps do not include the breadth of functionality in, say, Python,
Perl or Ruby. That leaves it to the new Lisper to evaluate and choose
libraries on his own with no prior experience with the language to
draw on.

Ever heard of the Starter Pack, or Lisp in a Box?

4. Setting up the development environment. Again, this is only a
problem for free Lisps, the commercial Lisps have good, integrated
editors and IDEs ready to go. The free Lisps mostly rely on SLIME and
emacs. This is not so bad if you know and like Emacs (my situation),
but even then there are a few steps to get started, like downloading
SLIME and your free Lisp separately, installing both, and configuring
them to work together.

You use:
a) a free commercial Lisp like Allegro Express or LW Personal;
b) Lisp in a Box, which comes with everything you need.

5.
Not wanting to try new things. Ruby, Python and Perl are at least
syntactically similar to C and Java and most programmers probably
already know at least one of these. Lisp is different, but different
for good reasons (read what Paul Graham has to say about parentheses
and Lisp macros). I think Lisp should improve on 1 - 4, but I believe
Lisp should not try to appeal to programmers who don't want to learn
something new. That will make Lisp worse, not better, in my opinion.

Agreed.

>
> These are more criticisms of Common Lisp culture and community, and I
> think most of the weaknesses of Common Lisp reside there as opposed to
> something in or not in the text of the standard itself.
>

> -jimbo
>
> [1]http://programming.reddit.com/info/21vs0/comments/c022160


Slobodan Blazeski

unread,
Jul 3, 2007, 10:48:40 AM7/3/07
to

So you would prefer a dictator?

Slobodan Blazeski

Ken Tilton

unread,
Jul 3, 2007, 10:59:59 AM7/3/07
to

jimb...@gmail.com wrote:
> On Jul 2, 9:21 pm, Ken Tilton <kennytil...@optonline.net> wrote:
>
>>...our being smarter than frickin losers like you? Name one serious
>>poster labelled a troll who is still posting. Oh. None? Fine, STFU and
>>go find a newsgroup that won't laugh your sorry *ss into oblivion.
>
>
> Isn't that kind of the point?

Yeah, it was a completely ironic and deliberately self-referential post
meant to amuse and edify. How did I know it would be lost on a dolt like
you?

hth,kt

jimb...@gmail.com

unread,
Jul 3, 2007, 11:22:20 AM7/3/07
to
On Jul 3, 10:48 am, Slobodan Blazeski <slobodan.blaze...@gmail.com>
wrote:

> So you would prefer a dictator ?

For a programming language, a dictator certainly has its uses. Of
course, it all depends on the quality, taste, and judgement of the
dictator in question.

I think a dictator would solve some of the cultural problems of Common
Lisp. Maybe there is a better solution to be had from a mechanism for
forming consensus, but I do not see any recent successes in this
regard.

-jimbo

jimb...@gmail.com

unread,
Jul 3, 2007, 11:28:14 AM7/3/07
to
On Jul 3, 10:38 am, Slobodan Blazeski <slobodan.blaze...@gmail.com>
wrote:

> 1 I think the main place to find out about Lisp is comp.lang.lisp, and
> a lot of the long time posters there are seriously socially
> challenged.
>
> Names or an apology about above. Else I'll take this as an insult
> for a lot of people I have in high esteem.

I apologize for the words "socially challenged". That was uncivil,
ad hominem, not useful, and detracts from the point I was trying to
make. I am sorry.

I stand by the claim that legitimate questions, comments and
criticisms have been labeled trolls on comp.lang.lisp. I cited an
example in a reply to Ken elsewhere. I believe I can find others.
Searching for "not a troll" in this group reveals a lot of meta
discussion about who is a troll and who is not.

-jimbo

jimb...@gmail.com

unread,
Jul 3, 2007, 12:10:41 PM7/3/07
to
On Jul 3, 10:38 am, Slobodan Blazeski <slobodan.blaze...@gmail.com>
wrote:

> Yes, it shares a similar problem with Lisp: it lacks consumer software,
> since all the DESKTOP software is written for Windows only.

I think there are cultural similarities, as well. Throughout the 90s,
the Mac really was better than Windows. However, Windows was getting
significantly better, and the Mac was not.

I think the overall sense of the general public was that the Mac might
be a little bit better, but not better enough to overcome the cultural
issues of more applications for Windows, more friends who knew Windows
and could give them advice, more places to buy a Windows PC, bigger
variety of supported hardware, etc.

This left the supporters of the Macintosh in the position of defending
things that seemed more and more esoteric, even though they were real
benefits. For example, the Mac had excellent meta-data support built
into the file system. This made it easy to track what applications
could open which files, which application created a file, which icon
to show, application resources bundled with the application, and all
kinds of other useful things. Windows relied on people typing the
correct file extension, and if the user made a mistake, everything
fell apart.

In Mac OS X, Apple decided to favor file extensions over the meta data
in the resource fork for associating files with applications. Many
long time supporters of the Mac were upset at Apple for deliberately
choosing the technically inferior solution.

However, the reasons for Apple's decision were cultural, not
technical. Windows users expected to associate a file with an
application through a file extension. They did not know anything
about resource forks. So when a Mac user gave a file to a Windows
user, the Windows user could not easily open it without the expected file
extension. By making the Mac more like Windows, Apple made the choice
of buying a Mac easier.

Now, I'm not saying Lisp should become more like other languages just
to fit in. But I do think that there is a perception that even if
Lisp is better in certain ways than other languages, those other
languages are improving and Lisp is not. The perception might even be
wrong. But even the perception itself is a liability for Lisp
acceptance.

In short, sometimes the cultural outweighs the technical.

> > If you think you have found any
> > piece of utopia on this Earth, you are probably either deceiving
> > yourself or need to raise your standards.
>
> I didn't say Common Lisp is perfect, but it is the best thing (language
> + implementation + libraries + community + literature) I've found after
> going through C, C++, BASIC, Pascal/Delphi, C#, T-SQL and some
> Python, Java, PHP, OCaml & Erlang.

I would agree here, too. Lisp is not perfect, just the best so far
(of the languages I know).

> > Now, to your credit, you do mention the concurrency thing. But do you
> > really think there is nothing else that could make Lisp better?
>
> AllegroCache in a Lisp that I could afford ($2000 at most).

Agreed, and note that I would consider this (pricing) a cultural issue
as well. There was a recent, extended thread on making a commercial
Lisp open source. Price and open source offerings definitely affect
perceptions of Common Lisp.

> Lispers are spoiled for choice; that's a very bad thing. The good thing
> is that any Lisp whose development didn't stop in 1986 will do
> for a newbie. Afterwards they'll be able to choose what suits them
> best.

Choice is a barrier to entry for a newbie. By definition, a newbie
does not have the information needed to make a good choice. So
the newbie can spend time evaluating the major Lisp distributions, or
just pick Python or Ruby and start hacking. There are multiple
distributions of those too, but not as many, and there is a default
distribution that new programmers will be steered towards.

Also, please note I clearly said this is a good and a bad thing. The
availability of Common Lisps with different tradeoffs is a good thing
for someone knowledgable about Lisp.

> Ever heard of the Starter Pack, or Lisp in a Box?

Yes! My preferred solution would be the Starter Pack and Lisp in a
Box, packaged together with either SBCL or CLISP as a double-clickable
package. The only barrier then is knowing Emacs, but I think Phil
Armitage will have that covered, based on conversation elsewhere in
this thread :).

This one thing would do more than anything else I can think of for
erasing barriers to entry for new Lisp programmers.

> You use:
> a) a free commercial Lisp like Allegro Express or LW Personal;
> b) Lisp in a Box, which comes with everything you need.

I tried the Lisp in a Box with Allegro express. I was using it for a
homework project and immediately blew through the Allegro memory limit
because the assignment required processing a lot of data. So I found
AquaMacs (for Mac OS X), which had SLIME pre-installed, and got it
working with SBCL. (This is now my favorite development environment,
by the way, better than any IDE I've ever used.) Not that big of a
deal, but adds just a little bit more of a barrier to entry.

I suppose this could be subsumed under item 2., though.

-jimbo

jimb...@gmail.com

unread,
Jul 3, 2007, 12:15:09 PM7/3/07
to
On Jul 3, 10:59 am, Ken Tilton <kennytil...@optonline.net> wrote:
> Yeah, it was a completely ironic and deliberately self-referential post
> meant to amuse and edify. How did I know it would be lost on a dolt like
> you?

OK, I'll try harder in the future to keep up.

-jimbo

Duane Rettig

unread,
Jul 3, 2007, 12:38:35 PM7/3/07
to
Pascal Costanza <p...@p-cos.net> writes:

> Duane Rettig wrote:
>> Pascal Costanza <p...@p-cos.net> writes:
>>> It seems to be the case that for ultra-large-scale programs, garbage
>>> collection is not feasible anymore. (That was also the message in Jans
>>> Aasman's talk at ILC'07.)
>> It's unfortunate that that is what his talk appeard to emphasize.
>
> My statement was obviously too dense. Sorry for that. If I understand
> correctly, what seems to be feasible is to divide storage into
> different regions which are handled differently. Some regions can be
> nicely garbage collected, while others need a more manual approach,
> due to the size and longevity of the data to be stored.

Yes. When it comes down to it, memory has always been (and will
likely be for a long time, barring some inventions) a linear resource,
and dividing that linear resource into "the right size" segments for a
task is always going to be an issue. Right now, we have a new
breathing space, having expanded into the world of 64-bit addresses,
and thus the segments of memory allocation are less likely to run into
each other like they were starting to do for 32-bits, but that will
not last forever, and truly scalable programs will quickly use up this
memory space which will still have to be managed carefully.

> My understanding was that Jans's talk was about similar ideas. I have
> never understood it as saying that we should get rid of garbage
> collection altogether.

That's a relief, but it's not surprising. My comment was not aimed at
you, per se, but at your statement, which you have now retracted but
which I have heard others make as well (and perhaps they meant it when
they said it).

> I think the subtext of Gregor's article is important here: It
> basically tries to suggest that annotations + AOP are good
> enough. (While they in fact only provide a fraction of what macros can
> do. At least that's how I understand his article. It's also
> interesting to read his follow-up article in that regard - see
> http://www.ddj.com/dept/architect/184415205 - to quote: "In April, I
> summarized attributes, explaining that they introduce into C# the
> basic syntactic hook required to enable Lisp-style macros.")

Well, doesn't this suggest to you, then, that either he was wrong in his
implication that annotations + AOP are good enough, or that he changed
his mind, or that your understanding of his emphasis was not quite
right? Why would he be looking toward providing Lisp-style macros
unless he thought what was available was _not_ good enough? I don't
personally know what he is thinking, so I don't know which of the
three is true. And it doesn't matter much, but I have a gut feeling
that in his career as a language-improvement researcher, he would tend
to _never_ call something good enough, unless it is only for the
moment.

>> It was then and there that I learned to _balance_ use of
>> macrology and out-of-line code, and, as with any other great concept,
>> to use it in moderation, where it was necessary, rather than where it
>> was possible.
>
> Yes, these are very wise insights, but not good reasons to drop macros
> altogether and replace them with something much weaker, IMHO.

Since I never advocated dropping macros, I am assuming that you are
referring to Gregor's move out of Lisp and into other languages with
AOP. It is interesting that many old Lispers move away from Lisp and
try their hand at something else - many of the Dylan inventors were
ex-Lispers who tried to fix the "problem" of Lisp's syntax (which
turned out not to be a problem at all); others try to invent new
ways of programming altogether. Gregor seems to have moved into
an area that is more productive: taking existing languages and
retrofitting them with Lisp-like qualities. It is clear that these
retrofits will be weaker; the moment someone discovers a hybrid that
is better than Lisp, we'll probably all start moving toward that
next-generation language... But the retrofits clearly make the other
languages stronger, even if not as strong as Lisp.

Ken Tilton

Jul 3, 2007, 2:26:24 PM

I was thinking "catch up". You come into one of the better NGs on UseNet
where honest noobs get about the best help you can imagine and lecture
us on our behavior? After also doing so in a public forum?

And you think /we/ are socially challenged?!!!!!!!!!!!!!

I'd further LMAO at your premise that I am personally responsible for
Lisp's small mindshare, but I am also looking forward to thus getting
credit for Lisp's booming popularity and other languages copying Lisp
all the time. Kenny made us do it!

Well, they hate it when I feed the trolls, and I better leave something
for the dogs....

kxo

Mark Hoemmen

Jul 3, 2007, 2:33:14 PM
Christopher Browne wrote:
> A lot of people have been wanting "inherent parallelism" to be the
> Next Big Thing(tm) for a long time now.

That kind of died in the 90's (or perhaps earlier). The compiler people
worked on it and found out that it was a Hard Problem (tm).

> So long, indeed, that ConnectionMachine Lisp* (a Lisp variation
> intended to harness certain sorts of highly parallel machines) came
> and went at least 15 years ago.

Picky point -- CMLisp was sort of the "Omega Point" of *Lisp; it was
never implemented in full. *Lisp was considered an intermediate
solution. It also doesn't do anything like "inherent parallelism" --
all the parallelism is quite explicit (all the parallel operations have
different names, parallel data isn't garbage-collected and must be
explicitly freed, etc.).

> Of course, neither of those recent things diminish the fact that there
> have been failed attempts to build better ways to program 'clustered'
> systems more or less continually over the last 20 years...

How do you define "failed"? Certainly no satisfactory dominant paradigm
has emerged, but many of them had their successes (and some continue to
have successes!).

> Long and short of it: I don't think we have the 'silver bullet' on
> parallelism yet. Unfortunately, CL seems a bit less friendly to "fine
> grained parallelism" than more primitive languages.

How "fine-grained" is "fine-grained"? Do you mean SIMD instructions,
like SSE? Do you mean data parallelism, like OpenMP?

You're right that there is no "silver bullet," and maybe searching for
one is the wrong approach. I was at a conference recently that pointed
out three different parallel C variants, each of which had an advantage
for expressing a particular sort of parallelism: UPC for SPMD-style
computations, Cilk for task trees, and some other dialect whose name I
can't recall for distributed stuff. For example, Cilk makes task trees
easy but sacrifices some performance and clarity for data-parallel
operations (you have to spawn processes recursively to add a long vector
to another long vector in parallel). Maybe it's better to keep the
language as simple as possible and tailor it to a certain class of
applications.
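The spawn-recursively shape being described looks roughly like this (a
serial CL sketch with hypothetical names; the two recursive calls are
where a Cilk-style runtime would spawn tasks):

(defun vadd (a b out start end)
  "Divide-and-conquer vector add: OUT[i] = A[i] + B[i] on [START,END)."
  (if (< (- end start) 1024)
      ;; Small enough: do the work directly.
      (loop for i from start below end
            do (setf (aref out i) (+ (aref a i) (aref b i))))
      ;; Otherwise split in half; a task-parallel runtime would
      ;; spawn both recursive calls instead of running them serially.
      (let ((mid (floor (+ start end) 2)))
        (vadd a b out start mid)
        (vadd a b out mid end)))
  out)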

mfh


André Thieme

Jul 3, 2007, 3:31:49 PM
Don Geddis wrote:
> Tamas Papp <tkp...@gmail.com> wrote on Mon, 02 Jul 2007:
>> What's the difference between having feature X "included" and loading
>> a library that implements it? Lisp is very easy to extend.
>
> Think about that in terms of garbage collection and macros. If Lisp had
> begun without them, could they be merely added later as a "library"?

Yes.

jos...@corporate-world.lisp.de

Jul 3, 2007, 3:35:20 PM
On 3 Jul., 20:33, Mark Hoemmen <mark.hoem...@gmail.com> wrote:

> Picky point -- CMLisp was sort of the "Omega Point" of *Lisp; it was
> never implemented in full. *Lisp was considered an intermediate
> solution. It also doesn't do anything like "inherent parallelism" --
> all the parallelism is quite explicit (all the parallel operations have
> different names, parallel data isn't garbage-collected and must be
> explicitly freed, etc.).

The Connection Machine was attached to a front-end machine. The Lisp
was running on the front end; there was no Lisp on the Connection
Machine, as I understand it. The front-end Lisp had instructions for
communicating with the Connection Machine, running programs on it (a
SIMD machine), and working with data on it. Thus the operations were
explicit, since they were executed on the Connection Machine, not in
the host's Lisp. The processors on the first Connection Machine were
tiny, but many (up to 65536).

But there were also more than a dozen other attempts to add support
for parallel execution to Common Lisp, or to write parallel languages
on top of Common Lisp.


Dimiter "malkia" Stanev

Jul 3, 2007, 3:54:04 PM
to jos...@corporate-world.lisp.de
Don't know much about the Connection Machine, but could this effort be
revived using the IBM Cell-based machines, for example the Sony
PlayStation 3?

Right now there is Linux for it, and you can talk to the 6 SPUs (only 6
of the 8 total are available). Each SPU has 256 KB of memory, so not
much Lisp can live there (unless it's interpreted, or some other tricks
are used).

But it could be used with the main Lisp running on the PPU (the dual
PowerPC chip), where you have about 200 MB of memory, generating code
for the SPUs.

Btw, is anyone thinking of working with Lisp on the PS3, trying to tap
the SPU (SPE) Cell power? I was able to get SLIME plus any of
OpenMCL, SBCL and Allegro to work on it, but that was all. Didn't do
much more than that, as I myself am still learning the language.

Jon Harrop

Jul 3, 2007, 4:26:14 PM
Daniel Barlow wrote:
> Holger Schauer wrote:
>> You do know that there are (I think several) libraries implementing
>> unification in CL?
>
> As are there libraries/preprocessors/other tools implementing GC for C,
> AOP for Java, and so on. I think the more important question is not
> whether you /can/ do $foo in Lisp, but whether the typical Lisp
> programmer would want to/know when to.

Exactly.

--
Dr Jon D Harrop, Flying Frog Consultancy
The OCaml Journal
http://www.ffconsultancy.com/products/ocaml_journal/?usenet

Mark H.

Jul 3, 2007, 4:47:21 PM
On Jul 3, 12:55 pm, "Dimiter \"malkia\" Stanev" <mal...@mac.com>
wrote:

> Don't know much about the Connection Machine, but could this effort be
> revived using the IBM Cell-based machines, for example the Sony
> PlayStation 3?
>
> Right now there is Linux for it, and you can talk to the 6 SPUs (only 6
> of the 8 total are available). Each SPU has 256 KB of memory, so not much
> Lisp can live there (unless it's interpreted, or some other tricks are used).

You'll probably want to build a "Lisp for Cell" on the existing Cell
API. See for example:

http://www.bsc.es/projects/deepcomputing/linuxoncell/ -> Documentation

http://www-01.ibm.com/chips/techlib/techlib.nsf/techdocs/FC857AE550F7EB83872571A80061F788

http://www-01.ibm.com/chips/techlib/techlib.nsf/techdocs/30B3520C93F437AB87257060006FFE5E

Cell's parallel model is quite different from the CM model used by
*Lisp: Cell has 128-bit SIMD vector instructions, shared-memory
thread-based parallelism, and an explicitly managed memory hierarchy
in the SPEs, whereas *Lisp treats the parallel machine like a remote,
massively SIMD thing that operates on configurable grids of data.

> Btw, is anyone thinking of working with lisp on the PS3 trying to
> isolate the SPU (SPE) CELL power? I was able to get slime + any of
> OpenMCL, SBCL and Allegro to work on it, but that was all. Didn't do
> much more than that, as I myself am still learning the language.

Maybe a good starting point would be hacking a Lisp -> C compiler,
like ECL?

mfh

Jon Harrop

Jul 3, 2007, 4:53:34 PM
Rainer Joswig wrote:
> Say, you have a 3d Modeller for complex models.
> Do you want to see GCs during interaction?

This is exactly the kind of work that we do, and OCaml's incremental GC
or .NET's concurrent GC handles it very well.

Jon Harrop

Jul 3, 2007, 4:56:31 PM
Dan Bensen wrote:
> ML-style pattern matching seems to be pretty easy.

Then you'll be able to post a shorter and faster Lisp implementation of the
symbolic simplifier benchmark we discussed recently.

Jon Harrop

Jul 3, 2007, 5:11:56 PM
Tamas Papp wrote:
> What's the difference between having feature X "included" and loading
> a library that implements it?

Performance, reliability and uniformity. Try writing a correct optimizing
pattern-match compiler, for example. Then try getting as many people to
use it as Haskell and OCaml have done.

> Lisp is very easy to extend.

No more than the next language.

> Sure, if the library doesn't exist yet, you have to write it...

If you get that far: Greenspun.

Most Lispers don't get that far though. They fall for the hype and believe
that everything invented in all other languages since Lisp is both obvious
and easily retrofitted onto Lisp.
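For a sense of scale on the pattern-match point: the naive 10% of the
job really is a short macro. Below is a minimal, non-optimizing sketch
(hypothetical names, not any particular library): plain symbols bind,
quoted forms and other literals match by value, and conses recurse.
Exhaustiveness checking, redundancy warnings and decision-tree
compilation, the parts being argued about here, are exactly what this
sketch does not attempt.

(defun pattern-code (pat val)
  "Return (values TESTS BINDINGS) for matching PAT against the form VAL."
  (cond ((null pat) (values `((null ,val)) '()))
        ((and (symbolp pat) (not (keywordp pat)))
         (values '() `((,pat ,val))))        ; variable: always matches
        ((atom pat) (values `((eql ,val ',pat)) '()))
        ((eq (first pat) 'quote)
         (values `((equal ,val ,pat)) '()))  ; quoted literal
        (t (multiple-value-bind (car-tests car-binds)
               (pattern-code (car pat) `(car ,val))
             (multiple-value-bind (cdr-tests cdr-binds)
                 (pattern-code (cdr pat) `(cdr ,val))
               (values `((consp ,val) ,@car-tests ,@cdr-tests)
                       (append car-binds cdr-binds)))))))

(defmacro match (expr &body clauses)
  (let ((v (gensym "VAL")))
    `(let ((,v ,expr))
       (cond ,@(loop for (pat . body) in clauses
                     collect (multiple-value-bind (tests binds)
                                 (pattern-code pat v)
                               `((and ,@tests)
                                 (let ,binds ,@body))))))))

;; (match '(+ 1 2)
;;   (('+ a b) (list 'sum a b))
;;   (x        x))
;; => (SUM 1 2)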

jimb...@gmail.com

Jul 3, 2007, 5:21:09 PM
On Jul 3, 2:26 pm, Ken Tilton <kennytil...@optonline.net> wrote:
> I'd further LMAO at your premise that I am personally responsible for
> Lisp's small mindshare, but I am also looking forward to thus getting
> credit for Lisp's booming popularity and other languages copying Lisp
> all the time. Kenny made us do it!

Looking forward to it. I'll be sure to give you all the credit.

-jimbo

Jon Harrop

Jul 3, 2007, 5:34:08 PM
Don Geddis wrote:
> Probably most of us are familiar with Paul Graham's hypothetical "Blub"
> language, as described in the section "The Blub Paradox" in the middle of
> http://paulgraham.com/avg.html
> It was a hypothetical example to show how some programmers, only used to
> languages less powerful than Lisp, might not appreciate the power of Lisp,
> simply because they don't understand those features of Lisp that their own
> favorite language lacks.

Blub doesn't exist. It is based upon the assumption that programming
language power is a one-dimensional function when it clearly is not.

Problem solving efficiency is the real holy grail. This is even more
complicated because it is also a function of what you are doing.

So people should ask "is Lisp the most efficient language to solve my
problems?".

I personally cannot think of a single application that Common Lisp is well
suited to. Indeed, I constantly post example programs written in various
languages that cannot be translated into Lisp without becoming long,
obfuscated and slow. I cannot remember ever having seen an example where a
problem can be solved more succinctly, elegantly and efficiently in Lisp.

Dan Bensen

Jul 3, 2007, 6:01:19 PM
> Dan Bensen wrote:
>> ML-style pattern matching seems to be pretty easy.

Jon Harrop wrote:
> Then you'll be able to post a shorter and faster Lisp implementation
> of the symbolic simplifier benchmark we discussed recently.

Could you point me to it please?

--
Dan
www.prairienet.org/~dsb/

evins...@gmail.com

Jul 3, 2007, 6:10:30 PM
On Jul 3, 10:38 am, Duane Rettig <d...@franz.com> wrote:

Hi, Duane!

[snip]

> Since I never advocated dropping macros, I am assuming that you are
> referring to Gregor's move out of Lisp and into other languages with
> AOP. It is interesting that many old Lispers move away from Lisp and
> try their hand at something else - many of the Dylan inventors were
> ex-lispers, who tried to fix the "problem" of Lisp's syntax (which
> turned out not to be a problem at all) - others try to invent new
> ways of programming altogether.

Just a small nit to pick here: I don't think this is a very good
description of what happened with Dylan's syntax. Dylan was a language
with s-expression syntax and a working implementation for a while
before the syntax changed. It was prospective users, most of them in
Apple but not on the Dylan team, who provided the pressure for a
different syntax. The language designers were obeying their mandate to
take input from the people who were supposed to be their customers,
and a bunch of those customers wanted a different syntax.

Originally, the language was supposed to support either syntax, but in
the fullness of time s-expressions fell away. At the time, I didn't
think syntax was a big deal, but I've changed my mind. I think the
change in syntax was a mistake. It changed Dylan from the language
that I preferred to use to one that I look at once a year or so with
mild curiosity.


André Thieme

Jul 3, 2007, 6:10:19 PM
Jon Harrop wrote:

> I personally cannot think of a single application that Common Lisp is well
> suited to. Indeed, I constantly post example programs written in various
> languages that cannot be translated into Lisp without becoming long,
> obfuscated and slow. I cannot remember ever having seen an example where a
> problem can be solved more succinctly, elegantly and efficiently in Lisp.

Try it the other way around.
Translate cl-ppcre to OCaml. Take the source code for AllegroServe and
translate it to F#.
Write an OCaml program that takes OCaml code at runtime, inserts it
into the middle of some other function f, compiles f, and runs f without
restarting the program.
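For reference, the Lisp half of that last challenge is only a few
lines. Here is a sketch using nothing beyond ANSI CL (F and
SPLICE-INTO-F are hypothetical names): build the new definition as
data, COMPILE it, and install it in the running image.

(defun f (x) (* x 2))

(defun splice-into-f (middle-form)
  "Rebuild F with MIDDLE-FORM spliced into its body, compile the new
definition, and install it; callers pick it up on their next call."
  (setf (symbol-function 'f)
        (compile nil `(lambda (x)
                        (let ((x ,middle-form))
                          (* x 2))))))

;; (f 10)                      => 20
;; (splice-into-f '(+ x 100))
;; (f 10)                      => 220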

Matthias Buelow

Jul 3, 2007, 6:37:00 PM
Jon Harrop wrote:

> Try writing a correct optimizing pattern match compiler, for example.

Have you?

André Thieme

Jul 3, 2007, 6:40:59 PM
Jon Harrop wrote:

> I personally cannot think of a single application that Common Lisp is well
> suited to. Indeed, I constantly post example programs written in various
> languages that cannot be translated into Lisp without becoming long,
> obfuscated and slow. I cannot remember ever having seen an example where a
> problem can be solved more succinctly, elegantly and efficiently in Lisp.

Oh, and Wade is still waiting for your F# version of the game
"Concentration".
Wade Humeniuk wrote on December 13 of last year:
>
http://groups.google.de/group/comp.lang.lisp/tree/browse_frm/thread/70e6afa9012cbd5c/0871cddcf502af47?rnum=231&hl=de&_done=%2Fgroup%2Fcomp.lang.lisp%2Fbrowse_frm%2Fthread%2F70e6afa9012cbd5c%2F4050caff2ebed840%3Ftvc%3D1%26hl%3Dde%26#doc_79d6592dbc65996d

And when he asked whether you wanted to write a version in F#, you
answered:
Absolutely. While you're at it, I'll try implementing it in F#... :-)


It took Wade around 8 hours. Your counter is now at 4700 hours, and you
still are not done. I suppose you've realized that it can't be written
as quickly as in Lisp.

Dimiter "malkia" Stanev

Jul 3, 2007, 6:41:17 PM
to Jon Harrop
Somehow I don't agree with you. I'm still learning the language,
although I have 15+ years of experience with the likes of C/C++, Basic
and Pascal.

I was able to translate the LZO (lzo1x, to be more precise) decompression
algorithm to Common Lisp, and it was only 2.4 times slower (LispWorks
Professional 5.02/x86 on an HP xw8400 workstation running Windows XP). It
came down to finding out (with much disassembling) which options are
good, and testing. I can imagine other Lisp systems might do better
(SBCL, CMUCL), given more specifics (optimization flags, or other
things). The unoptimized version is 24 times slower.

Now, it was slower because in "C" on x86 you can cheat and copy 4 bytes
at a time, even when they are not aligned, and that's what the clever
LZO "C" code is doing (Markus Oberhumer [1] is really a kind of genius;
his code is some of the most efficient I've ever seen). The Lisp version
was copying byte by byte, as you can't lie to the system about it
(although vendor-specific extensions might provide such tricks).

Now, I did make some mistakes at the beginning, for example declaring the
type of the byte array to be "(simple-array (unsigned-byte 8))" instead
of "(simple-array (unsigned-byte 8) 1)" or "(simple-array (unsigned-byte
8) (*))", but this was due to my inexperience; now I know where that's
needed and where not (or I believe so).
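For anyone wondering what that declaration buys, here is a sketch of
the kind of inner loop in question (not Dimiter's attached code), with
the element type and rank pinned down so the compiler can open-code
AREF:

(defun copy-bytes (src src-start dst dst-start count)
  (declare (optimize (speed 3) (safety 0))
           (type (simple-array (unsigned-byte 8) (*)) src dst)
           (type fixnum src-start dst-start count))
  ;; With the rank-1 declaration the compiler knows the array layout;
  ;; with only (simple-array (unsigned-byte 8)) it must allow any rank.
  (loop for i of-type fixnum below count
        do (setf (aref dst (+ dst-start i))
                 (aref src (+ src-start i))))
  dst)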

I've attached the code for your reference. It's not finished, so you
won't be able to run it. It was based on the Java version [2] of the
decompressor (the "C" one is full of C macros, and after preprocessing
on my machine it uses the 4-byte unaligned-copy trick which, as I've
said, works only on x86 machines, and only "correctly" in C).

[1]
http://www.oberhumer.com/opensource/lzo/

[2]
http://www.oberhumer.com/opensource/lzo/download/LZO-v1/java-lzo-1.00.tar.gz

lzo-decompress.lisp

Rainer Joswig

Jul 3, 2007, 6:47:31 PM

I was just thinking about it today: is there an HTML version of
the first Dylan Interim Reference Manual? The one with
Lisp syntax/semantics?

This one:
http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/lang/others/dylan/doc/manual/0.html

--


http://lispm.dyndns.org/

Ken Tilton

Jul 3, 2007, 7:12:55 PM

You should see the acceptance speech. Modest, self-deprecating...

kt

Sacha

Jul 3, 2007, 7:19:24 PM
Jon Harrop wrote:
> Don Geddis wrote:
>> Probably most of us are familiar with Paul Graham's hypothetical "Blub"
>> language, as described in the section "The Blub Paradox" in the middle of
>> http://paulgraham.com/avg.html
[...]

> So people should ask "is Lisp the most efficient language to solve my
> problems?".
>
> I personally cannot think of a single application that Common Lisp is well
> suited to. Indeed, I constantly post example programs written in various
> languages that cannot be translated into Lisp without becoming long,
> obfuscated and slow. I cannot remember ever having seen an example where a
> problem can be solved more succinctly, elegantly and efficiently in Lisp.
>

Looks like it's time for you to stop working on toy example programs.

Sacha

Jon Harrop

Jul 3, 2007, 8:12:32 PM

Yes.

Jon Harrop

Jul 3, 2007, 8:43:35 PM
André Thieme wrote:
> Translate cl-ppcre to OCaml.

<=> Str module or libpcre-ocaml

> Take the source code for AllegroServe

<=> Ocsigen

But translating a solution from the language it was designed for to another
language is not the same as solving a problem.

There are many such bad examples already out there so there is no need to
create more. Look at this translation of SICP into statically typed
languages, for example:

http://www.codepoetics.com/wiki/index.php?title=Topics:SICP_in_other_languages

(WARNING: totally impractical ML code!)

If you want a useful comparison, find a problem and get good coders to
solve it in each language. This is exactly what I did for the ray tracer,
but you might like to find problems that Lisp is better suited to.

> Write an OCaml program that takes OCaml code during runtime and inserts
> it into the middle of some other function f, compiles f and runs f without
> restarting the program.

Higher-order function.

Jon Harrop

Jul 3, 2007, 9:03:55 PM
Dan Bensen wrote:
> Jon Harrop wrote:
> > Then you'll be able to post a shorter and faster Lisp implementation
> > of the symbolic simplifier benchmark we discussed recently.
>
> Could you point me to it please?

http://groups.google.com/group/comp.lang.functional/msg/bffeb5a3948a3a1a

The problem is to simplify symbolic expressions by applying the following
rewrite rules from the leaves up:

rational n + rational m -> rational(n + m)
rational n * rational m -> rational(n * m)
symbol x -> symbol x
0+f -> f
f+0 -> f
0*f -> 0
f*0 -> 0
1*f -> f
f*1 -> f
a+(b+c) -> (a+b)+c
a*(b*c) -> (a*b)*c

This can be written in OCaml as:

let rec ( +: ) f g = match f, g with
| `Int n, `Int m -> `Int (n +/ m)
| `Int (Int 0), e | e, `Int (Int 0) -> e
| f, `Add(g, h) -> f +: g +: h
| f, g -> `Add(f, g)

let rec ( *: ) f g = match f, g with
| `Int n, `Int m -> `Int (n */ m)
| `Int (Int 0), e | e, `Int (Int 0) -> `Int (Int 0)
| `Int (Int 1), e | e, `Int (Int 1) -> e
| f, `Mul(g, h) -> f *: g *: h
| f, g -> `Mul(f, g)

let rec simplify = function
| `Int _ | `Var _ as f -> f
| `Add (f, g) -> simplify f +: simplify g
| `Mul (f, g) -> simplify f *: simplify g

Several people came up with different approaches in Lisp.

André Thieme came up with quite an elegant solution that is 100x slower
than the OCaml:

(defun simplify (a)
  (if (atom a)
      a
      (destructuring-bind (op x y) a
        (let* ((f (simplify x))
               (g (simplify y))
               (nf (numberp f))
               (ng (numberp g))
               (+? (eq '+ op))
               (*? (eq '* op)))
          (cond
            ((and +? nf ng) (+ f g))
            ((and +? nf (zerop f)) g)
            ((and +? ng (zerop g)) f)
            ((and (listp g) (eq op (first g)))
             (destructuring-bind (op2 u v) g
               (declare (ignore op2))
               (simplify `(,op (,op ,f ,u) ,v))))
            ((and *? nf ng) (* f g))
            ((and *? (or (and nf (zerop f))
                         (and ng (zerop g))))
             0)
            ((and *? nf (= 1 f)) g)
            ((and *? ng (= 1 g)) f)
            (t `(,op ,f ,g)))))))
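For orientation, a sample reduction with the version above (this call is
not from the original post):

(simplify '(* 1 (+ x 0)))  ; => X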

Nathan Froyd compiled the pattern match down into Lisp by hand to greatly
improve performance at the cost of obfuscation, but the result is still
many times slower than the OCaml:

(defun simplify-no-redundant-checks (xexpr)
  (declare (optimize (speed 3)))
  (if (atom xexpr)
      xexpr
      (let ((op (first xexpr))
            (z (second xexpr))
            (y (third xexpr)))
        (let* ((f (simplify-no-redundant-checks z))
               (g (simplify-no-redundant-checks y))
               (nf (numberp f))
               (ng (numberp g)))
          (tagbody
           START
             (if (eq '+ op) (go OPTIMIZE-PLUS) (go TEST-MULTIPLY))
           OPTIMIZE-PLUS
             (when (and nf ng)
               (return-from simplify-no-redundant-checks (+ f g)))
           TEST-PLUS-ZEROS
             (when (eql f 0) (return-from simplify-no-redundant-checks g))
             (when (eql g 0) (return-from simplify-no-redundant-checks f))
             (go REARRANGE-EXPR)
           TEST-MULTIPLY
             (unless (eq '* op) (go REARRANGE-EXPR))
           OPTIMIZE-MULTIPLY
             (when (and nf ng)
               (return-from simplify-no-redundant-checks (* f g)))
           TEST-MULTIPLY-ZEROS-AND-ONES
             (when (or (eql f 0) (eql g 0))
               (return-from simplify-no-redundant-checks 0))
             (when (eql f 1) (return-from simplify-no-redundant-checks g))
             (when (eql g 1) (return-from simplify-no-redundant-checks f))
           REARRANGE-EXPR
             (when (and (listp g) (eq op (first g)))
               (let ((op2 (first g))
                     (u (second g))
                     (v (third g)))
                 (declare (ignore op2))
                 (return-from simplify-no-redundant-checks
                   (simplify-no-redundant-checks (list op (list op f u) v)))))
           MAYBE-CONS-EXPR
             (if (and (eq f z) (eq g y))
                 (return-from simplify-no-redundant-checks xexpr)
                 (return-from simplify-no-redundant-checks
                   (list op f g))))))))

The fastest Lisp implementation to date was written by Pascal Costanza. It
is also quite elegant but still 3x slower than the OCaml. More
interestingly, it avoids s-exprs:

(defstruct add x y)
(defstruct mul x y)

(defgeneric simplify-add (x y)
  (declare (optimize (speed 3)))
  (:method ((x number) (y number)) (+ x y))
  (:method ((x (eql 0)) y) y)
  (:method (x (y (eql 0))) x)
  (:method (x (y add))
    (simplify-add (simplify-add x (add-x y)) (add-y y)))
  (:method (x y) (make-add :x x :y y)))

(defgeneric simplify-mul (x y)
  (declare (optimize (speed 3)))
  (:method ((x number) (y number)) (* x y))
  (:method ((x (eql 0)) y) 0)
  (:method (x (y (eql 0))) 0)
  (:method ((x (eql 1)) y) y)
  (:method (x (y (eql 1))) x)
  (:method (x (y mul))
    (simplify-mul (simplify-mul x (mul-x y)) (mul-y y)))
  (:method (x y) (make-mul :x x :y y)))

(defgeneric simplify (exp)
  (declare (optimize (speed 3)))
  (:method (exp) exp)
  (:method ((exp add))
    (simplify-add (simplify (add-x exp)) (simplify (add-y exp))))
  (:method ((exp mul))
    (simplify-mul (simplify (mul-x exp)) (simplify (mul-y exp)))))
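And the same sample reduction against the struct representation, for
comparison (again not from the original post):

(simplify (make-mul :x 1 :y (make-add :x 'x :y 0)))  ; => X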

Duane Rettig

Jul 3, 2007, 9:17:26 PM
"mi...@evins.net" <evins...@gmail.com> writes:

> On Jul 3, 10:38 am, Duane Rettig <d...@franz.com> wrote:
>
> Hi, Duane!

Hi, Mikel. Hope you're well.

> [snip]
>
>> Since I never advocated dropping macros, I am assuming that you are
>> referring to Gregor's move out of Lisp and into other languages with
>> AOP. It is interesting that many old Lispers move away from Lisp and
>> try their hand at something else - many of the Dylan inventors were
>> ex-lispers, who tried to fix the "problem" of Lisp's syntax (which
>> turned out not to be a problem at all) - others try to invent new
>> ways of programming altogether.
>
> Just a small nit to pick here: I don't think this is a very good
> description of what happened with Dylan's syntax. Dylan was a language
> with s-expression syntax and a working implementation for a while
> before the syntax changed. It was prospective users, most of them in
> Apple but not on the Dylan team, who provided the pressure for a
> different syntax. The language designers were obeying their mandate to
> take input from the people who were supposed to be their customers,
> and a bunch of those customers wanted a different syntax.

Well, you were in a position to know, being an Apple insider. But
when two of us from Franz paid Apple a visit in February of 1995, with a
strong interest in taking over the product before Apple dumped it, we
had the distinct impression that the people we talked to were not at
all interested in bringing back the old syntax. They gave us the
impression that the syntax had been changed for good, that it was
part of the plan, that there was no going back, and that they viewed
the new direction as precisely the right one. We also questioned them
on the state of the macro facility, and they were fairly cavalier
about it; they seemed to view it as an add-on to the language rather
than a fundamental element of it. We went away with a sense that
their team and ours were going in completely different directions, and
we declined to take the project.

I don't know what the internal vendor/customer dynamics were, and as a
developer I understand the need to satisfy a customer (especially a
powerful internal organization), but I also believe that it is a
developer's responsibility to defend technically the direction he is
going in. So whether the developers changed their minds midstream, or
were just bootstrapping from Lisp syntax into an already-conceived
syntax change that had been preplanned, in effect the two are one and
the same. I can also buy that they were forced _into_ the C-like
syntax, but I don't at all buy that they were forced _out_ of the Lisp
syntax: Lisp (and Dylan, presumably) has always been powerful enough to
support both styles, if the will to do so is there.

> Originally, the language was supposed to support either syntax, but in
> the fullness of of time s-expressions fell away. At the time, I didn't
> think syntax was a big deal, but I've changed my mind. I think the
> change in syntax was a mistake. It changed Dylan from the language
> that I preferred to use to one that I look at once a year or so with
> mild curiosity.

Agreed. And to be fair to the times, back then everybody was trying
to keep warm in the midst of the AI/Lisp winter, and they had definite
ideas about what should be burned in order to do so. We even had
factions within Franz (all no longer with us) that tried to remove the
L word from our literature. Fortunately, they did not succeed.

--
Duane Rettig du...@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182

Ken Tilton

Jul 4, 2007, 12:01:39 AM

jimb...@gmail.com wrote:
> On Jul 3, 10:38 am, Slobodan Blazeski <slobodan.blaze...@gmail.com>
> wrote:
>
>>1 I think the main place to find out about Lisp is comp.lang.lisp, and
>>a lot of the long time posters there are seriously socially
>>challenged.
>>
>>Names or an apology about above. Else I'll take this as an insult
>>for a lot of people I have in high esteem.
>
>
> I apologize for the words "socially challenged". That was uncivil, ad-
> hominem, not useful, and detracts from the point I was trying to
> make. I am sorry.
>
> I stand by ...

Jeez, you do not even know how to admit you were wrong. You try getting
credit for it and in the next sentence repeat the original stupidity. We
have a pool and a pond; the pond would be good for you.

kxo

evins...@gmail.com

Jul 4, 2007, 1:22:31 AM
On Jul 3, 7:17 pm, Duane Rettig <d...@franz.com> wrote:

> "m...@evins.net" <evins.mi...@gmail.com> writes:
> > On Jul 3, 10:38 am, Duane Rettig <d...@franz.com> wrote:
>
> > Hi, Duane!
>
> Hi, Mikel. Hope you're well.

As well as can be expected. :-)

Nothing in this account surprises me, but 1995 is pretty late in the
game. The language-design meetings at which syntax was discussed
happened something like three or four years earlier, when the Lisp
folks had not yet been thoroughly ghettoized within Apple.

We had a series of meetings back then in which Apple Cambridge folks
(Andrew Shalit, Jeremy Jones, Gail Zacharias, Jeff Piazza, etc. etc.)
would fly out and meet with a bunch of folks from ATG and talk about
what the language should be like. Feature after feature went in, no
problem, but the non-lispers kept asking over and over and over, "can
we get rid of the Lisp syntax?"

And eventually, designers started saying, "well, maybe, sure; what if
we add another syntax that you like better? We could support them
both, no problem." Naturally, this idea met with an enthusiastic
response from the we-hate-lisp-syntax crowd, and I said, "what the
heck? It's just syntax; why not, if it makes them happy?" And, to be
fair, the idea of languages with alternate surface syntaxes was pretty
popular back then; I wonder how many people remember that there used
to be, for example, Japanese and Italian syntaxes for AppleScript.

A couple years later the "infix syntax," as it was known, was in, but
the major projects that had been committed to Dylan were being
encouraged by management to abandon it for various reasons, and the
people who had pushed hardest for the infix syntax were sure not using
it much, if at all. By 1994 Dylan had become a non-issue in Cupertino;
nobody would have taken seriously a proposal to use it for a project,
and those projects that had been using it were told to stop (my own
project was told, "great job! you guys really showed what Dylan can
do! Now stop that and start using C++ right now."). By 1995, anything
to do with Dylan was surely in shutting-down mode, dragged along by
whatever promises and deals Apple had made before, when people who
were enthusiastic about it had more influence in the company.

By the way, the language was originally called "Ralph", and Apple paid
a large sum of money to some firm to come up with a better name for
it. All of us who were users, designers, or implementors were asked to
comment on the names, which varied from bad to awful. I remember
posting comments asking us to stay away from names that sounded like
economy cars, Japanese movie monsters, or classic bugs from CS 101. I
wish I could repeat some of them for amusement value, but maybe Apple
regards the contents of those discussions as trade secrets, I don't
know.

> I don't know what the internal vendor/customer dynamics were, and as a
> developer I understand the need to satisfy a customer (especially a
> powerful, internal organization) but I also believe that it is a
> developer's responsibility to defend technically the direction that he
> is going - and so whether or not the developers changed their minds
> midstream, or whether they were just bootstrapping from lisp syntax
> into the already-conceived syntax change that had been preplanned, in
> effect they are one and the same. I can also buy that they were
> forced _into_ the C-like syntax, but I don't at all buy that they were
> forced _out_ of the lisp syntax - lisp (and Dylan, presumably) has
> always been powerful enough to support both styles, if the will to do
> so is there.

The pressure to make it less lisp-like was relentless. It was a topic
at every single meeting attended by non-Lisp programmmers. If I were
of a cynical frame of mind I might speculate that the anti-Lisp crowd
used the addition of the infix syntax as a lever to get rid of the
hated Lisp syntax: "Why do we need to support and document 2 syntaxes?
We have a perfectly good one that ISN'T LISP. Isn't one enough? Isn't
it less work and expense to support and document just the one?"

> > Originally, the language was supposed to support either syntax, but in
> > the fullness of of time s-expressions fell away. At the time, I didn't
> > think syntax was a big deal, but I've changed my mind. I think the
> > change in syntax was a mistake. It changed Dylan from the language
> > that I preferred to use to one that I look at once a year or so with
> > mild curiosity.
>
> Agreed. And to be fair to the times, back then everybody was trying
> to keep warm in the midst of the AI/Lisp winter, and they had definite
> ideas about what should be burned in order to do so. We even had
> factions within Franz (all no longer with us) that tried to remove the
> L word from our literature. Fortunately, they did not succeed.

Hooray for that. I miss the old Dylan. I keep thinking I'll get around
to writing an implementation of a language like it, but with some
updated libraries and things. I've gone so far as to write a couple of
interpreters and a compiler for subsets of it. My exciting health
issues put a stop to that (and almost everything else) for a while,
but it's not completely out of the realm of possibility that I'll
eventually produce something along those lines, if only to please
myself. As a language qua language, the old s-expression Dylan is
still my favorite.


evins...@gmail.com

Jul 4, 2007, 1:29:51 AM
On Jul 3, 4:47 pm, Rainer Joswig <jos...@lisp.de> wrote:

> On 2007-07-04 00:10:30 +0200, "m...@evins.net" <evins.mi...@gmail.com> said:

> > Originally, the language was supposed to support either syntax, but in
> > the fullness of of time s-expressions fell away. At the time, I didn't
> > think syntax was a big deal, but I've changed my mind. I think the
> > change in syntax was a mistake. It changed Dylan from the language
> > that I preferred to use to one that I look at once a year or so with
> > mild curiosity.
>
> I was just thinking about it today: is there a HTML version of
> the first Dylan Interim Reference Manual? The one with
> Lisp syntax/semantics?
>

> This one: http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/lang/others/dyl...

There sure is, because I have it (along with a stack of paper copies,
stored 1200 miles from where I now live). But I don't remember where I
got it, and don't know where you can find it.

Apple Computer claims the copyright; maybe if you could find the right
person there you could wangle permission to reproduce it, in which
case I could send you a copy of what I have. My past attempts to get
Apple to give permission to reproduce the Bauhaus code (the OS code we
were writing in Dylan) met with no success. Basically I think folks at
Apple are not interested in spending the time to find out whether
allowing copies is a good idea or not.


Pascal Costanza

Jul 4, 2007, 3:43:14 AM
mi...@evins.net wrote:

> I miss the old Dylan. I keep thinking I'll get around
> to writing an implementation of a language like it, but with some
> updated libraries and things. I've gone so far as to write a couple of
> interpreters and a compiler for subsets of it. My exciting health
> issues put a stop to that (and almost everything else) for a while,
> but it's not cmpletely out of the realm of possibility that I'll
> evnetually produce something along those lines, if only to please
> myself. As a language qua language, the old s-expression Dylan is
> still my favorite.

I think this could even have some real impact. R6RS Scheme seems to be
turning into a real mess (as far as I can tell), so a neater
Lisp-1 dialect could be interesting to some people. (Just speculating,
but still...)

Pascal

--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/

Matthew D Swank

Jul 4, 2007, 3:48:37 AM
On Tue, 03 Jul 2007 22:22:31 -0700, mi...@evins.net wrote:

> I miss the old Dylan. I keep thinking I'll get around
> to writing an implementation of a language like it, but with some
> updated libraries and things. I've gone so far as to write a couple of
> interpreters and a compiler for subsets of it. My exciting health
> issues put a stop to that (and almost everything else) for a while,
> but it's not cmpletely out of the realm of possibility that I'll
> evnetually produce something along those lines, if only to please
> myself.

Well, there is Goo: http://people.csail.mit.edu/jrb/goo/ though it seems
kind of comatose at this point.

> As a language qua language, the old s-expression Dylan is still my
> favorite.

I suppose s-expression syntax is not something modern Dylan users want
to embrace. Do any current Dylan implementors still read this list?

On the other hand, if s-expression Dylan was a language a Lisper wanted to
use, it probably wouldn't be too hard to write a preprocessor using the
pretty printer in LTD: http://norvig.com/ltd/doc/ltd.html. Given Dylan's
more traditional write/compile cycle, s-expressions -> Dylan would just be
another stage. Good Old Fashioned Interactive Development (tm) would
require more work of course.

Matt


--
"You do not really understand something unless you
can explain it to your grandmother." — Albert Einstein.

evins...@gmail.com

Jul 4, 2007, 4:33:25 AM
On Jul 4, 1:48 am, Matthew D Swank <akopa-is-very-much-like-my-mail-addr...@c.net> wrote:

> On Tue, 03 Jul 2007 22:22:31 -0700, m...@evins.net wrote:
> > I miss the old Dylan. I keep thinking I'll get around
> > to writing an implementation of a language like it, but with some
> > updated libraries and things. I've gone so far as to write a couple of
> > interpreters and a compiler for subsets of it. My exciting health
> > issues put a stop to that (and almost everything else) for a while,
> > but it's not cmpletely out of the realm of possibility that I'll
> > evnetually produce something along those lines, if only to please
> > myself.
>
> Well, there is Goo: http://people.csail.mit.edu/jrb/goo/ though it seems
> kind of comatose at this point.
>
> > As a language qua language, the old s-expression Dylan is still my
> > favorite.
>
> I suppose s-expression syntax is not something modern Dylan users want
> to embrace. Do any current Dylan implementors still read this list?
>
> On the other hand, if s-expression Dylan was a language a Lisper wanted to
> use, it probably wouldn't be too hard to write a preprocessor using the
> pretty printer in LTD: http://norvig.com/ltd/doc/ltd.html. Given Dylan's
> more traditional write/compile cycle, s-expressions -> Dylan would just be
> another stage. Good Old Fashioned Interactive Development (tm) would
> require more work of course.

There is a small but nonzero chance that I will eventually finish an
implementation of a (circa 1992) Dylan-like language that works in the
familiar way that Lisp programmers expect, with a REPL and incremental
compilation and so on.

There is zero chance that I will ever even start an implementation
that doesn't work that way, because I have not the slightest interest
in it.

Apple's original Dylan implementation, when the language was still
called "Ralph", was a set of extensions to MCL. In fact, an entire MCL
environment was available, with Common Lisp editors and listeners,
alongside the Dylan editors and listeners. The compiled Dylan code ran
on a different processor--an ARM board was ribbon-cabled to our Mac
development platforms--but aside from that, interacting with Dylan was
scarcely different from interacting with Common Lisp.

The "more traditional write/compile cycle" you refer to is just one
more reason I don't care to use current Dylan implementations.


Nicolas Neuss

Jul 4, 2007, 4:46:59 AM
André Thieme <address.good.un...@justmail.de> writes:

> Oh, and Wade is still waiting for your F# version of the game
> "Concentration".
> Wade Humeniuk wrote on December 13 of last year:
> >
> http://groups.google.de/group/comp.lang.lisp/tree/browse_frm/thread/70e6afa9012cbd5c/0871cddcf502af47?rnum=231&hl=de&_done=%2Fgroup%2Fcomp.lang.lisp%2Fbrowse_frm%2Fthread%2F70e6afa9012cbd5c%2F4050caff2ebed840%3Ftvc%3D1%26hl%3Dde%26#doc_79d6592dbc65996d
>
> And you answered to his question if you want to write a version in F#:
> Absolutely. While you're at it, I'll try implementing it in F#... :-)
>
> It took Wade around 8 hours.
> Your counter is now at 4700 hours and you still are not done.
> You realized that it can't be written as fast as in Lisp I suppose.

Yes, I remember that, too. Probably we have run into the problem that F#
and OCaml are not Turing-complete, so Concentration games simply cannot
be written :-)

Nicolas

--
- I've heard that Jon Harrop has written a new book.
- Oh? What title? Is it "The Final Countdown: F# against Ocaml"?
- No. It aspires to Knuth's seminal work and is called "The Art of
Usenet Spamming".

Slobodan Blazeski

Jul 4, 2007, 4:48:38 AM
On Jul 3, 8:26 pm, Ken Tilton <kennytil...@optonline.net> wrote:
> jimbo...@gmail.com wrote:
> > On Jul 3, 10:59 am, Ken Tilton <kennytil...@optonline.net> wrote:
>
> >>Yeah, it was a completely ironic and deliberately self-referential post
> >>meant to amuse and edify. How did I know it would be lost on a dolt like
> >>you?
>
> > OK, I'll try harder in the future to keep up.
>
> I was thinking "catch up". You come into one of the better NGs on UseNet

=>(one-of-the-better-NGs-on-UseNet-p comp.lang.lisp)
nil
=>(best-ng-on-the-usenet-p comp.lang.lisp)
t

> where honest noobs get about the best help you can imagine and lecture
> us on our behavior? After also doing so in a public forum?
>
> And you think /we/ are socially challenged?!!!!!!!!!!!!!


>
> I'd further LMAO at your premise that I am personally responsible for
> Lisp's small mindshare, but I am also looking forward to thus getting
> credit for Lisp's booming popularity and other languages copying Lisp
> all the time. Kenny made us do it!
>

> Well, they hate it when I feed the trolls, and I better leave something
> for the dogs....
>
> kxo


Slobodan Blazeski

Jul 4, 2007, 5:54:15 AM
On Jul 3, 5:28 pm, jimbo...@gmail.com wrote:
> On Jul 3, 10:38 am, Slobodan Blazeski <slobodan.blaze...@gmail.com>
> wrote:
>
> > 1 I think the main place to find out about Lisp is comp.lang.lisp, and
> > a lot of the long time posters there are seriously socially
> > challenged.
>
> > Names or an apology about above. Else I'll take this as an insult
> > for a lot of people I have in high esteem.
>
> I apologize for the words "socially challenged". That was uncivil, ad-
> hominem, not useful, and detracts from the point I was trying to
> make. I am sorry.

>
> I stand by the claim that legitimate questions, comments and
> criticisms have been labeled trolls on comp.lang.lisp. I cited an
> example in a reply to Ken elsewhere. I believe I can find others.
> Searching for "not a troll" in this group reveals a lot of meta
> discussion about who is a troll and who is not.
>
> -jimbo


An apology is an acknowledgment expressing regret or asking pardon for a
fault or offense. The above looks to me more like mocking than apology.
If your claim were only this:
> I stand by the claim that legitimate questions, comments and
> criticisms have been labeled trolls on comp.lang.lisp.
I would agree: there are bad people, and good people having a bad day.
But your original claim was:
...a lot of the long time posters there are seriously socially
challenged.
something I find complete rubbish.
So cool your head a little and think about what you wrote. And before
you write another reply to Ken, think a little about what he has done
for Lisp (Cells, Cello, Celtk, Cells-Gtk...) and compare it with your
contributions. We don't have elders here, but some respect is a
question of good manners.

I wish you happy lisping

Slobodan Blazeski


Jon Harrop

Jul 4, 2007, 6:06:13 AM
Nicolas Neuss wrote:
> Yes. I remember that, too. Probably we have run into the problem that F#
> and Ocaml are not Turing complete and concentration games simply cannot be
> written :-)

http://coding.derkeiler.com/Archive/Lisp/comp.lang.lisp/2005-08/msg01865.html

Ken Tilton

Jul 4, 2007, 6:22:44 AM

Slobodan Blazeski wrote:
> On Jul 3, 8:26 pm, Ken Tilton <kennytil...@optonline.net> wrote:
>
>>jimbo...@gmail.com wrote:
>>
>>>On Jul 3, 10:59 am, Ken Tilton <kennytil...@optonline.net> wrote:
>>
>>>>Yeah, it was a completely ironic and deliberately self-referential post
>>>>meant to amuse and edify. How did I know it would be lost on a dolt like
>>>>you?
>>
>>>OK, I'll try harder in the future to keep up.
>>
>>I was thinking "catch up". You come into one of the better NGs on UseNet
>
>
> =>(one-of-the-better-NGs-on-UseNet-p comp.lang.lisp)
> nil
> -=>(best-ng-on-the-usenet-p comp.lang.lisp)
> t
>

What part of set theory do you not understand?

hth,kt
