
cautious question (about languages)


Friedrich Dominicus

Jul 28, 1999, 3:00:00 AM
I have tried to find out myself and checked some Scheme and Lisp pages.
But although I'm sure this question has been asked a hundred times
before, I wasn't able to figure it out.

It's about Scheme and Common Lisp. Does it make sense to learn one or
the other, or both, or just one of them?

My background: I'm an Eiffel programmer, and I think I know quite a bit
about C/C++, and I refuse to program a line of C++. So Common Lisp
seems to me a bit like C++. Is that a correct impression? Someone
compared the Scheme standard to the CL standard and this was quite
frightening. Scheme comes along with around 50 pages, CL with 1100
pages. So does that mean Scheme is lean and mean while CL is big and
complex but more powerful?

On other pages I read that CL is very good for prototyping, and it
seems possible to add some type-checking to CL functions. But I don't
have to. So maybe I could use CL for scripting as well as for
larger-scale programming.

I have downloaded diverse Common Lisp systems and I have the Scheme
Shell. And of course I would like to do scripting in one of the two
languages. Does it make sense to use CL for such an area, or is Scheme
a better choice here?

I have started doing some scsh programming. And even though I had a
hard time, the elegance of Scheme begins to shine, IMO. And I have
three books around which use Scheme, so I have to admit I'm starting to
like it more and more. But a look at CL made me start thinking again.
CL seems to have far more support for OO programming, and Eiffel is an
OO language, so I might like CL more.

It would be nice if some of you gave me your thoughts about that.

Regards
Friedrich

Kent M Pitman

Jul 28, 1999, 3:00:00 AM
Friedrich Dominicus <Friedrich...@inka.de> writes:

> I have try to find out myself and checked some Scheme and Lisp pages.
> But althought I'm sure this question was asked a hundred times before, I
> wasn't able to figure it out.
>
> It's about Scheme and Common Lisp. Does it make sense to learn one or
> the other, or both, or just one of it?

The two languages are quite different in terms of how they are used,
what programming styles they promote, what community of users they
attract, the capabilities they offer, and the overall language focus.

You ask an extraordinarily subjective question, but you offer enough
about yourself to answer in a way that is probably best for you. Please
do not repeat this answer to anyone else; send them instead back to this
group and let them get their own answer based on their own needs.

> ... the elegance of Scheme begins to shine ...

If elegance is what you seek in a language, my take is that you're
likely better off with Scheme. That's not because Scheme is a better
(or worse) language. It's because it was mostly designed by and for
people who, when faced with a decision between a choice of doing
something "a practical way" or "an elegant way", chose the elegant
way. CL is more concerned with practicality than elegance in the case
where the two compete. Often CL is elegant anyway, but sometimes it
is not. Often Scheme is practical anyway, but sometimes it is not.
Scheme is about as free of any commercial constraint as a programming
language can be.
We all want elegant, and we all want practical. But sometimes push
comes to shove and we have to decide between them. The difference
between the two languages shows itself in ways that stem from the
resolution of conflicts. Languages are like political parties; you
should choose the one that is maintained by people who think like
you do and care about the things you care about. It sounds, based on
your brief summary, like that would be the Scheme crowd.

Of course, my opinion is just that of one person. Others might view
the situation differently. Good luck making your choice.

If in the end you're not sure, I'm sure you won't be harmed by learning
both. But please don't hold them to the same standards. Languages
should be judged by what they seek to do, not by their sameness to
each other.

Erik Naggum

Jul 28, 1999, 3:00:00 AM
* Friedrich Dominicus <Friedrich...@inka.de>

| It's about Scheme and Common Lisp. Does it make sense to learn one or
| the other, or both, or just one of it?

if you learn Scheme, you need to learn both. if you learn Common Lisp,
you don't need to learn Scheme.

| My background: I'm an Eiffel-Programmer, and I think I know quite a bit
| about C/C++ and I refuse to do programming a line of C++. So Common Lisp
| seems to me a bit like C++. Is that a correct impression?

no. Common Lisp is not a bit like C++.

| Someone compared Scheme Standard to CL standard and this was quite
| frightning.

this "fear" might mean that Merriam-Webster's Third New International
Dictionary of the English Language is frightening to a kid and that a
1000-phrase book for tourists is the best solution to teaching kids to
write? I have personally looked at huge reference tomes with a sort of
"wow! somebody did all that work and organized it for me!" attitude. I
also consider it frightening that anyone would sit down and actually
attempt to design a language and yet leave so much work to its users as
Scheme does. the consequence is that Scheme is a much bigger language
than Common Lisp in practice: Scheme is not a language you can use out of
the box, so you have to know at least a few implementations and each has
a million functions, all non-standard and slightly different in most
ways. in Scheme, purity rules, so every time someone sees what he
considers to be impure, he goes off to write his own, creating yet more
incompatible Scheme implementations and yet more unreadable Scheme code.
irony has never been quite so strong as in the "lean and mean design" of
Scheme. Common Lisp has a "just do it" quality to it that I like a lot,
especially since it's much, much harder to get a large language beautiful
than to make a small language beautiful, and Common Lisp is beautiful.

| Scheme comes along with around 50 pages CL with 1100 pages. So does that
| means Scheme (lean and mean) CL (big and complex) but more powerful?

it means Scheme was designed to prove something and Common Lisp was
designed to build something.

scripting has to fit into a much larger scheme of things (pun intended),
and Scheme is easier to force into other schemes of things because it
doesn't carry enough weight of its own to be a burden to anyone: in other
words, if used for scripting, Scheme is a thin veneer of syntax on top of
a different system's semantics. for some, this seems to be sufficient.

| CL seems to have far more support for OO-programming and Eiffel is an
| OO-languae so I might like CL more.

I think you should approach new languages as "a programmer", not "an
Eiffel programmer".

#:Erik
--
suppose we blasted all politicians into space.
would the SETI project find even one of them?

Marco Antoniotti

Jul 28, 1999, 3:00:00 AM

Friedrich Dominicus <Friedrich...@inka.de> writes:

> I have try to find out myself and checked some Scheme and Lisp pages.
> But althought I'm sure this question was asked a hundred times before, I
> wasn't able to figure it out.
>

> It's about Scheme and Common Lisp. Does it make sense to learn one or
> the other, or both, or just one of it?

I'd go with CL. Scheme is nice, but it lacks a "wide" standard.

>
> My background: I'm an Eiffel-Programmer, and I think I know quite a bit
> about C/C++ and I refuse to do programming a line of C++. So Common Lisp
> seems to me a bit like C++. Is that a correct impression?

Get away from here! :)


> Someone
> compared Scheme Standard to CL standard and this was quite
> frightning.

If you refer to the 903 pages of the latest Stroustrup's book on C++
you are right. Anyway, the "standards" for CL and Scheme are rather
different in content. Let's consider multi-dimensional arrays. They
account for a big chunk of the CL standard. They account for 0 pages of
the Scheme standard R5RS. That is because multi-dimensional arrays
are not in the standard. Let's talk about records (structures). They
account for another big chunk of the CL standard, and they account for
0 pages of the R5RS of Scheme: once again they are not in the
standard. I can go on and on and on... I feel like the pink rabbit. :)

So, comparing the CL standard to R5RS (the Scheme standard) is unfair
to both. They do not cover the same material.

> Scheme comes along with around 50 pages CL with 1100 pages. So does that
> means Scheme (lean and mean) CL (big and complex) but more powerful?

No, it does not. It means that your CL program will be more easily
portable across different implementations. E.g. suppose you use CL
and write

(defstruct point
  (x 0 :type (mod 1024))
  (y 0 :type (mod 1024)))

and then go ahead and write your program using this definition. Now
you are sure that this program will run in other CL implementations as
well.

Suppose, then, that by chance you write the same (non-standard)
program in Scheme implementation X, which does support DEFSTRUCT, and
then try to run it in another Scheme implementation Y, which does not.
You have - at the very least - to make sure that you can have the
correct DEFSTRUCT library installed in implementation Y. Which of
course means, since implementation Y already supports DEFINE-RECORD,
that your footprint starts to blow up. You get the idea.
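
To illustrate the portability point, the constructor and accessors
generated by the DEFSTRUCT above are themselves part of the ANSI
standard, so code like this runs unchanged in any conforming CL (the
variable name below is just for illustration):

```lisp
;; DEFSTRUCT automatically defines MAKE-POINT, POINT-X, POINT-Y, etc.,
;; all specified by the standard, hence portable.
(defvar *p* (make-point :x 3 :y 4))
(point-x *p*)          ; => 3
(setf (point-y *p*) 9) ; SETF on the generated accessors is standard too
```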


> On other pages I read the CL is very good for prototyping and it seems
> possible to add some type-checking to CL Functions. But I don't have to.
> So maybe I could use CL for scritping as for larger scale
> programming.

The term "scripting" has always eluded me :)

> I have downloaded diverse Common Lisp systems and I have the Scheme
> Shell. And of course I would like to to scripting in one of the two
> languages. Does it make sense to use CL for such an area or is Scheme a
> better choice here?
>
> I have started with doing some scsh-programming. And even thought I had
> a hard time, the elegance of Scheme begins to shine IMO. And I have
> three books around which uses Scheme so I have to admit I'm starting to
> like it more and more. But a look onto CL gave let me start thinking

> again. CL seems to have far more support for OO-programming and Eiffel
> is an OO-languae so I migth like CL more.

AFAIK CL has the same support for OO as Eiffel and then some. If some
feature is missing, you can build it in CL (alright, let's not talk
too much about type-checking - we can always invite the ML crowd for
this). If some CL feature is missing in Eiffel you cannot (that is as
much as I know of Eiffel). :)

Some CL compilers also do a decent job at "non lethal" :) type
checking.

Hope it helps

Cheers

--
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa

Fernando Mato Mira

Jul 28, 1999, 3:00:00 AM
Kent M Pitman wrote:

> resolution of conflicts. Languages are like political parties; you
> should choose the one that is maintained by people who think like
> you do and care about the things you care about. It sounds, based on
> your brief summary like that would be the Scheme crowd.

But the OO thing pushes him to the CL side, so..

But now that one can do FUN, instead of #'FUN,
(lambda ..) instead of #'(lambda ..)

((lambda ..) ..) instead of (funcall (lambda ..) ..)

what's left are warts like (funcall fun ..) and naming inconsistencies
[no real prob with the latter, just define nullp (or null?, pair? (or cons?)),
etc]
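
A minimal sketch of the points above (standard CL; the function name
is made up):

```lisp
;; CL keeps functions and values in separate namespaces, hence #' and
;; FUNCALL when a function travels through a variable.
(defun fun (x) (* x 2))
(funcall #'fun 3)                 ; => 6
(funcall (lambda (x) (+ x 1)) 3)  ; => 4
((lambda (x) (+ x 1)) 3)          ; => 4, a lambda form may head a call
(let ((f #'fun)) (funcall f 5))   ; => 10, the (funcall fun ..) "wart"
```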

Saying (CAR NIL) and (CDR NIL) are wrong is just a game of words, and
whether Lisp or Scheme is right about #f is one of the great mysteries
of the universe..

If you read "The Little Schemer", only on a couple of occasions does
the ugliness show.

The package system is constantly under attack, and while some Schemes have
nice module systems (eg: MzScheme), there's no standard.

You can write beautiful code in CL. Maybe it's not "perfect", but it's
not so far from that. Scheme is small, so you can get all of it
quickly. Get your basic Lisp style up and running.
Then look at CL. You can learn things you never dreamed of while doing
Smalltalk, Eiffel, etc. (maybe tinyclos would do too, but I'm not sure.
Certainly tinyclos is nicer, but it's the son of CLOS, after all).
You'll find several books on CLOS. Will you find _any_ for the Scheme
object systems (practically all of which cannot even touch CLOS)?

BTW, don't let _anybody_ talk you into believing that CL is to Scheme
as C++ is to C; that's plainly an extremely inaccurate and dangerous
analogy.

Welcome to schizophrenia ;-)

[PS: I have an Eiffel background, am charmed by Scheme and purely
functional languages, but I'm happy with CL (sometimes I'll wonder
"what's next", but I'll look more to FP than to Scheme). It's easy to
think about Scheme being `dirty' then, too.
I really hate bad stuff, including C++, and sometimes I surprise myself
when I do some design using templates that is `cool', or by liking some
well-written software like ACE, that makes things `neat' (it's a
delusion, but we all know that ;->).
So maybe you can take CL. That's your call.]


Fernando Mato Mira

Jul 28, 1999, 3:00:00 AM
Erik Naggum wrote:

[stuff]

I forgot that if you do CL, you'll have to take a speed reading course and buy
some asbestos underwear ;-) [Erik is cool]


Pierre R. Mai

Jul 28, 1999, 3:00:00 AM
Friedrich Dominicus <Friedrich...@inka.de> writes:

> It's about Scheme and Common Lisp. Does it make sense to learn one or
> the other, or both, or just one of it?
>

> My background: I'm an Eiffel-Programmer, and I think I know quite a bit
> about C/C++ and I refuse to do programming a line of C++. So Common Lisp
> seems to me a bit like C++. Is that a correct impression? Someone

As others have pointed out, C++ and CL aren't similar at all, neither
in complexity (C++ is much more complex than CL!) nor in the reasons
for the sizes of their standards. C++ is a very complex language in
itself, even without any library. CL OTOH is quite a simple language
at its core (similar in simplicity to Scheme), but it includes a large
library (which isn't complex in itself either). And the CL standard is
rather more readable and helpful (even to non-language-lawyers) than
the C++ standard.

> compared Scheme Standard to CL standard and this was quite frightning.
> Scheme comes along with around 50 pages CL with 1100 pages. So does that
> means Scheme (lean and mean) CL (big and complex) but more powerful?

Since you are an Eiffel programmer, I'd ask you whether you found
Eiffel's "one way to loop has to be enough for everyone" approach
lean, mean _and_ _practical_? If you did, I'd say you might be happy
with Scheme, which isn't quite as restricted as the "RISC" language
called Eiffel, and since R5RS has macros, you can at least build
yourself new loops if you so desire. Of course all those loops will
be non-standard, so they will be difficult to read for others.

I OTOH found the "one loop fits all" attitude extremely non-practical
(in effect, Eiffel was the first language I ever used where I really
needed its ability to automatically check variants and invariants in
loops, since I regularly made mistakes in writing those loops. I
normally make mistakes of that sort in other languages about once
every 5 years or so).

In effect I'd be careful to distinguish between unnecessary complexity
(which should be elided, but which makes up a large part of C++, where
the complexity seems necessary, but only because of underlying design
decisions which are ill-matched to its current uses), and necessary
complexity or size, which should be welcomed, since they make your
life as a programmer easier, IMHO.

> three books around which uses Scheme so I have to admit I'm starting to
> like it more and more. But a look onto CL gave let me start thinking
> again. CL seems to have far more support for OO-programming and Eiffel
> is an OO-languae so I migth like CL more.

Well, there are 100 OO subsystems for Scheme (no, wait a minute, now
there are 101, no 102 ;), so you can certainly do OO programming in
Scheme. Not portably though.

Also, CL doesn't make as much of its object-orientedness as
mainstream languages like Eiffel, C++, etc. do. The pure OO part of
CL (i.e. CLOS) isn't fundamental to CL. The basic idea of OO (which
most often gets lost in mainstream languages, i.e. that OO isn't
about classes or methods, any more than astronomy is about optical
telescopes) OTOH is central to Lisp (in both CL and Scheme's guises).

Try to stop thinking in mainstream terms, it will help you avoid
confusion...

Regs, Pierre.

--
Pierre Mai <pm...@acm.org> PGP and GPG keys at your nearest Keyserver
"One smaller motivation which, in part, stems from altruism is Microsoft-
bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]

Friedrich Dominicus

Jul 28, 1999, 3:00:00 AM
Erik Naggum wrote:
>
> * Friedrich Dominicus <Friedrich...@inka.de>

> | It's about Scheme and Common Lisp. Does it make sense to learn one or
> | the other, or both, or just one of it?
>
> if you learn Scheme, you need to learn both. if you learn Common Lisp,
> you don't need to learn Scheme.

That's a clear opinion, thanks.


>
> | My background: I'm an Eiffel-Programmer, and I think I know quite a bit
> | about C/C++ and I refuse to do programming a line of C++. So Common Lisp
> | seems to me a bit like C++. Is that a correct impression?
>

> no. Common Lisp is not a bit like C++.

Maybe that was stated too lazily. I should know better. Let's state it
another way: I think I can like C, but I can't like the OO stuff above
it in C++. So is Scheme more like C, and CL in this sense more like
C++?


>
> | Someone compared Scheme Standard to CL standard and this was quite
> | frightning.
>

> this "fear" might mean that Merriam-Webster's Third New International
> Dictionary of the English Language are frightening to a kid and that a
> 1000-phrase book for tourists is the best solution to teaching kids to
> write?

You mean possibly not ;-)


> I have personally looked at huge reference tomes with a sort of
> "wow! somebody did all that work and organized it for me!" attitude.

Oh I guess that's a good way.

> I
> also consider it frightening that anyone would sit down and actually
> attempt to design a language and yet leave so much work to its users as
> Scheme does. the consequence is that Scheme is a much bigger language
> than Common Lisp in practice: Scheme is not a language you can use out of
> the box, so you have to know at least a few implementations and each has
> a million functions, all non-standard and slightly different in most
> ways. in Scheme, purity rules, so every time someone sees what he
> considers to be impure, he goes off to write his own, creating yet more
> incompatible Scheme implementations and yet more unreadable Scheme code.
> irony has never been quite so strong as in the "lean and mean design" of
> Scheme. Common Lisp has a "just do it" quality to it that I like a lot,
> especially since it's much, much harder to get a large language beautiful
> than to make a small language beautiful, and Common Lisp is beautiful.

In short, Scheme implementations conform to their standard but extend
far beyond it for real programming, while CL is good enough to have it
all standardized? I bet this is harshly simplified, but that's what I
read out of the above.


>
> | Scheme comes along with around 50 pages CL with 1100 pages. So does that
> | means Scheme (lean and mean) CL (big and complex) but more powerful?
>

> it means Scheme was designed to prove something and Common Lisp was
> designed to build something.

Oh I think I want to build some things ;-)


>
> | CL seems to have far more support for OO-programming and Eiffel is an
> | OO-languae so I might like CL more.
>
> I think you should approach new languages as "a programmer", not "an
> Eiffel programmer".

Sorry, I can't be just a programmer. I come from a context, and this
context means that I'm very used to doing Eiffel programming, though I
started with some imperative languages. So I'm completely biased, but
possibly I'm able to see advantages of FP even if I don't know much
about it. I guess if you had to learn a new language which is
Lisp-like, you would have your bias too.

But nevertheless thanks for taking the time to answer.

Regards
Friedrich

Friedrich Dominicus

Jul 28, 1999, 3:00:00 AM
Marco Antoniotti wrote:
>
> Friedrich Dominicus <Friedrich...@inka.de> writes:
>
> > I have try to find out myself and checked some Scheme and Lisp pages.
> > But althought I'm sure this question was asked a hundred times before, I
> > wasn't able to figure it out.
> >
> > It's about Scheme and Common Lisp. Does it make sense to learn one or
> > the other, or both, or just one of it?
>
> I'd go with CL. Scheme is nice, but it lack a "wide" standard.
>
> >
> > My background: I'm an Eiffel-Programmer, and I think I know quite a bit
> > about C/C++ and I refuse to do programming a line of C++. So Common Lisp
> > seems to me a bit like C++. Is that a correct impression?
>
> Get away from here! :)

Oh, now I like this group very much, don't throw me out please ;-)
Sorry, it was lazily formulated. It was in the sense of C (small and
relatively consistent) and C++ (a nightmare because of complexity
added by all the virtual stuff) ;-)

>
> > Someone
> > compared Scheme Standard to CL standard and this was quite
> > frightning.
>

> If you refer to the 903 pages of the latest Stroustroup's book on C++
> you are right. Anyway, the "standards" for CL and Scheme are rather
> different in content. Let's consider multi-dimensional arrays. They
> account for a big chunk of the CL standard. The account for 0 pages of
> the Scheme standard R5RS. That is because multi-dimensional arrays
> are not in the standard. Let's talk about records (structures). They
> account for another big chunk of the CL standard and they account for
> 0 pages of the R5RS of Scheme: once again they are not in the
> standard. I can go on and on and on... I feel like the pink rabbit. :)

Just curious: what is this pink rabbit? But thanks for pointing out why
things are like they are. I dared to download the CL HyperSpec stuff,
and I bet some looks into it will show whether it's organized so that
it's really helpful. I bet it is; others have stated that they found it
one of the best documents around.

>
> So, comparing the CL standard to R5RS (the Scheme standard) is unfair
> to both. They do not cover the same material.
>

> > Scheme comes along with around 50 pages CL with 1100 pages. So does that
> > means Scheme (lean and mean) CL (big and complex) but more powerful?
>

> Not it does not. It means that your CL program will be more easily
> portable across different implementations. E.g. suppose you use CL
> and write
>
> (defstruct point
> (x 0 :type (mod 1024))
> (y 0 :type (mod 1024)))
>
> and then go ahead and write your program using this definition. Now
> you are sure that this program will run in other CL implementations as
> well.
>
> Suppose, thet, by chance you write the same (non standard) program in
> the Scheme implementation X which does support DEFSTRUCT and then try
> to run it in another Scheme implementation Y, which does not. You
> have - at the very least - make sure that you can have the correct
> DEFSTRUCT library installed in implementation Y. Which of course
> means that, since implementation Y already supports DEFINE-RECORD,
> that your footprint starts to blow up. You get the idea.

Thanks, I didn't know that. I've downloaded the Scheme Shell and
DrScheme; both seem to be quite good for learning.

>
> > On other pages I read the CL is very good for prototyping and it seems
> > possible to add some type-checking to CL Functions. But I don't have to.
> > So maybe I could use CL for scritping as for larger scale
> > programming.
>
> The term "scripting" has always eluded me :)

Why that? What is negative about it? IMO it's quite positive to have a
language which scales nicely. I can't say that of Eiffel.


> Hope it helps

Yes, thanks for taking the time to answer. It seems the tendency in
this group is towards learning CL, which is not a surprise because we
are in c.l.l ;-)

Regards
Friedrich

Friedrich Dominicus

Jul 28, 1999, 3:00:00 AM
Pierre R. Mai wrote:
>
> Friedrich Dominicus <Friedrich...@inka.de> writes:
>
> > It's about Scheme and Common Lisp. Does it make sense to learn one or
> > the other, or both, or just one of it?
> >
> > My background: I'm an Eiffel-Programmer, and I think I know quite a bit
> > about C/C++ and I refuse to do programming a line of C++. So Common Lisp
> > seems to me a bit like C++. Is that a correct impression? Someone
>
> As others have pointed out, C++ and CL aren't similar at all, neither
> in complexity (C++ is much more complex than CL!) nor in the reasons
> for the sizes of their standards. C++ is a very complex language in
> itself, even without any library. CL OTOH is quite a simple language
> at it's core (similar in simplicity to Scheme), but it includes a large
> library (which isn't complex in itself either). And the CL standard is
> rather more readable and helpful (even to non-language-lawyers) than
> the C++ standard.

This was pointed out in other replies. So I guess this fear is not so
bad after all.


>
> Since you are an Eiffel programmer, I'd ask you whether you found
> Eiffel's "one way to loop has to be enough for everyone" approach
> lean, mean _and_ _practical_? If you did, I'd say you might be happy
> with Scheme, which isn't quite as restricted as the "RISC" language
> called Eiffel, and since R5RS has macros, you can at least built
> yourself new loops if you so desire. Of course all those loops will
> be non-standard, so they will be difficult to read for others.

I'm quite a fan of Eiffel, especially its idea of Design-by-Contract,
and I like the static type-checking, and no, I don't have any problems
with the one loop. But because I think I've done quite a bit of OO
programming, I may be better off with CL.


>
> In effect I'd be careful to distinguish between unnecessary complexity
> (which should be elided, but which makes up large part of C++, where
> the complexity seems necessary, but only because of underlying design
> decisions which are ill matched to it's current uses), and necessary
> complexity or size, which should be welcomed, since they make your
> life as a programmer easier, IMHO.
>
> > three books around which uses Scheme so I have to admit I'm starting to
> > like it more and more. But a look onto CL gave let me start thinking
> > again. CL seems to have far more support for OO-programming and Eiffel
> > is an OO-languae so I migth like CL more.
>
> Well, there are 100 OO subsystems for Scheme (no, wait a minute, now
> there are 101, no 102 ;), so you can certainly do OO programming in
> Scheme. Not portably though.

How often do you change your Common Lisp programming environment?

>
> Try to stop thinking in mainstream terms, it will help you avoid
> confusion...

I do think that I'm not thinking in mainstream terms. I like it if
things are done consistently, and in this respect I like Eiffel very
much. But I also like C and Python, and as told before, Scheme
solutions look quite interesting.

But what I like most about Eiffel is its idea of Design-by-Contract.
Wouldn't that be something that would be nice for CL too?

Regards
Friedrich

Martin Rodgers

Jul 28, 1999, 3:00:00 AM
In article <379EF170...@inka.de>, Friedrich...@inka.de
says...

> So is Scheme more like C and CL in this sense more than C++?

No. It's more like this: Common Lisp is to Scheme as C is to Pascal.
They're different languages with different histories & cultures.

Another perspective looks like this: CL is to Smalltalk as Smalltalk
is to C. Now substitute C++ for C (or Pascal for C) and Scheme for CL.
That's how I'd begin to explain it to someone totally unfamiliar with
CL and Scheme. As we zoom in closer, some details emerge. At some
point you have a clear enough picture for you to make a choice.

I would put it another way for someone familiar with other tools.
As you know Eiffel, I might substitute Eiffel for C. The problem with
all of this is that it's like saying an elephant is a bit like a
hippo. While it helps give a vague feeling for what these animals are
like, it doesn't help you choose an animal for a zoo.

So, unless you're satisfied with some simple advice ("learn Common
Lisp"), you're probably going to have to do a lot of reading. I learned
CL long before I even heard of Scheme, so I may be heavily biased.
OTOH, that might be a significant point. You decide.
--
Please note: my email address is munged; You can never browse enough
"There are no limits." -- ad copy for Hellraiser

Fernando Mato Mira

Jul 28, 1999, 3:00:00 AM
Friedrich Dominicus wrote:

> Thanks I didn't know that. I've downloaded the Scheme Schell and
> DrScheme both seem to be quite good for learning.

DrScheme is probably the best environment to learn Lisp. After all, that's its
purpose!

> > The term "scripting" has always eluded me :)
> Why that what is negative with it. IMO it's quite positive to have a
> language which scales nicely. I can't tell this from Eiffel.

Because Unix-minded people need `scripts' and `real programs'. Lisp `scripts'
are just real programs.

Fernando Mato Mira

Jul 28, 1999, 3:00:00 AM
Friedrich Dominicus wrote:

> How often do you change you Common Lisp programming environment?

Some people never, some people often. Typical reasons for switching:

1. Thread-safe implementation.
2. Native thread support.
3. Real-time garbage collection
4. Overall efficiency
5. floating-point efficiency
6. CLOS efficiency
7. Facilitated C++ interfacing
8. Footprint
9. Royalties
10. No redundant GC (when combining with Java)
11. Platform (including JVM).
12. Support
13. Vendor-specific add-ons


William Deakin

Jul 28, 1999, 3:00:00 AM

Friedrich Dominicus wrote:

> .... It seems the tendency in this group is in learning CL what is not a
> suprise because we are in c.l.l ...

What do they say in comp.lang.scheme?

;-) will

Fernando Mato Mira

Jul 28, 1999, 3:00:00 AM
Friedrich Dominicus wrote:

> Maybe stated to lazy. I should know better. Let's state it another way.
> I thinkI can like C but I can't like the OO-stuff above it in C++. So
> is Scheme more like C and CL in this sense more than C++?

Then you are OK. C is a portable assembler, which makes sense; while
C++ is an extension of that with even more emphasis on being a solution
for the development of applications.

No matter the base differences between Scheme and CL, nothing much
comes into play when you go into the OO arena. You might not like
the `lack' of encapsulation in CLOS, but there's a different mechanism
for that in CL (packages). If you want classic OO encapsulation,
you might find it in some classic OO system for Scheme, but then,
classic OO is wrong. That's the main conclusion you should come to if
you take Dylan, Cecil, or CLOS. These are the main points of interest
on your journey:

1. Syntax
2. Reflection
3. Sentence Oriented Programming

Marco Antoniotti

Jul 28, 1999, 3:00:00 AM

Friedrich Dominicus <Friedrich...@inka.de> writes:

> > Well, there are 100 OO subsystems for Scheme (no, wait a minute, now
> > there are 101, no 102 ;), so you can certainly do OO programming in
> > Scheme. Not portably though.
>

> How often do you change you Common Lisp programming environment?

The point is that if you change from CMUCL to Harlequin LW (or
whatever), you know that your system will work with a far greater
probability than when switching from one Scheme environment to another.

> > Try to stop thinking in mainstream terms, it will help you avoid
> > confusion...
>
> I do think that I'm not thinking in mainstream terms. I like it if
> things are done consistently, and in this aspect I like Eiffel very much.
> But I also like C and Python, and as I said before, Scheme solutions look
> quite interesting.
>
> But what I like most about Eiffel is its idea of Design-by-Contract.
> Wouldn't that be something that would be nice for CL too?

What exactly is "Design by Contract"?

Stig Hemmer

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
[Eiffel]

Marco Antoniotti <mar...@copernico.parades.rm.cnr.it> writes:
> What exactly is "Design by Contract"?

The DbC school of system design makes heavy use of pre-conditions,
post-conditions, invariants and so on.

When using a DbC system, all of these are given to the system in a way
that it can understand and check: at compile time if possible, at
run time if not.

An Eiffel person might give more detail, but these are the essentials.

Stig Hemmer,
Jack of a Few Trades.


Fernando Mato Mira

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to

Stig Hemmer wrote:

Then she would have to learn the MOP and implement a new method
combination (a macro layer over :before, :after, and :around methods
should work, but not if you're already using those..)


Kent M Pitman

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
Fernando Mato Mira <mato...@iname.com> writes:

Replying not specifically to you, Fernando, but to some specific things
you said that triggered me to want to add some general points
in this discussion. Your words included, semi-out-of-context, to help
understand what triggered these thoughts.

> But now that one can do FUN, instead of #'FUN,
> (lambda ..) instead of #'(lambda ..)
>
> ((lambda ..) ..) instead of (funcall (lambda ..) ..)
>
> what's left are warts like (funcall fun ..) and naming inconsistencies
> [no real prob with the latter, just define nullp (or null?, pair?
> (or cons?)), etc]

I will continue to defend FUNCALL as a non-wart. I think it's the
other way around. At the very minimum, you should understand that
there is an equally strong aesthetic theory by which (F X) to call a
variable F is UNaesthetic. The Scheme community, in keeping with
its one-namespace view, also often has a one-aesthetic chauvinism
as well. The issue that divides us is less the technical decision for
"a single namespace" and more the political decision for "single
decisions". My favorite quote in this area is from an unknown source
which I would happily and gratefully identify if I could: "There are
two kinds of people in the world: people who think there are two kinds
of people in the world and people who do not."

I also think a lack of political inclusivity is a wart. We know very
little about the human brain, probably, compared to what we might
know. But among the things we do know is that it is capable of
accommodating a huge amount of what the Scheme community would call
"unaesthetic" complexity. That is, the kinds of natural languages
people devise are full of special cases, and we have special brain
hardware that makes our use of them quite efficient. To build languages
that do not take good advantage of our ability to do this is to
waste our personal processor power. All the claims that are made about
what languages are learnable or not are unscientific anecdotes
based on claims of how hard it was for teachers who had a bias toward
certain presentations, or whose presentations differed in huge ways
(not just namespace), retrospectively and unscientifically gauging reasons
and surely boiling them down to reasons that do not satisfy any definition
of science I know. My experience is that CL is easy to teach to anyone
who has not been fed propaganda from other languages about how languages
"should be" and who does not balk merely because he/she has been
trained to balk. Just as the whole LISP family is easy to teach to anyone
who has not been trained to balk at non-infix. The degree to which
the aesthetics by which Lisp is often externally judged are born with
you is questionable at best. People do have an inborn aesthetics, but
I think they often unlearn their intuitions about it when faced with their
own inability to articulate it and a pushy person who purports (sometimes
without foundation) to be able to articulate what they think better
than they themselves can. People LEARN to be bothered by two namespaces
because they are TAUGHT to think namespaces don't matter. But namespaces
are ROUTINELY used in natural language, and people accommodate them very well.
Almost no word in any naturally arising human language has only a single
definition, and yet people do not rise in revolt at this "lack of elegance".
They resolve it by context. If you want to do SCIENCE about what is and
is not "natural", the data awaits, and it does not come from languages
that are made and taught by their makers to captive audiences over a five
to twenty year span.

Human-derived attempts to make unambiguous languages have gone
nowhere. English, which is full of contextual nuance and blunt
construction, has won out over the regularities of Latin and Greek and the
pseudo-regularities of languages like Spanish and German partly, at
least, because people don't have patience with aesthetics when they
stand in the way of practicality. They want to ELECT practicality
when it suits their need, but they don't want to have it REQUIRED
of them.

> You can write beautiful code in CL. Maybe it's not "perfect", but it's not so
> far from that. Scheme is small, so you can get all of it quickly.

Short languages make long programs, I claim.
Long languages make short programs.
The more pre-defined stuff you can draw on, and the richer
the available set of words in a given context, the fewer words
you must say to get across your meaning. Languages that syntactically
isolate you from the space of things you would say at a given moment
in a given program make the program bigger, either as you say more
baroque things or as you rewrite your program to avoid saying them.
In CL, if you have written
In CL, if you have written


(defun foo (list) ... <cursor>

and you want to now access the LIST function, you can do it with only local
editing. You can ALSO access the LIST variable with only local editing.

In Scheme, the decision to bind the variable LIST is a [premature]
decision not to use the function LIST in the program you are writing.
To access the outer binding requires rewriting the program. This is
usually "compensated" for by never using the obvious names in
programs, preferring variables like "LST". This makes programs
inelegant, too [that is, by someone else's (my) metric, metrics of
elegance not being uniquely determined], because it doubles the size
of the local set of terms, in effect always requiring different
names for local bindings than for globally defined names.

CL allows the decision about which LIST to access to be locally
administered, and the "price" (to some) or "feature" (to others) of
this is that the kind of use is case-marked. (FUNCALL f x) identifies
the UNUSUAL (in CL) and less syntactically optimized case of calling a
variable, in order to allow (f x) to mean a call to the thing
familiarly known as F without fear that F has been locally bound.

This is a simple trade-off. Scheme assumes you will be passing around
so many functions that you will have at least an equally likely chance
of wanting to call a variable named F as a globally defined function
named F, and so it assumes you want to optimize this case. Most CL
programmers I know would gag if functional args were passed around
with enough frequency to justify optimizing this case. Likewise, as
arguments, it's not common to pass the contents of globally defined
function names in CL, so the case of passing one is pessimized
slightly in order to make it clear that (f x) is passing a local name
[you don't have to look to be sure if the binding list is on a
previous screen; you always know x means "some local x" and that the
user would use #'x to denote a global function by this name].
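As a small concrete sketch of this trade-off (FOO and APPLY-TWICE are
made-up names for illustration):

(defun foo (list)               ; LIST is bound as a parameter, yet LIST in
  (list (first list) list))     ; operator position is still the CL:LIST function

(defun apply-twice (f x)        ; calling the function stored in the
  (funcall f (funcall f x)))    ; variable F requires FUNCALL

;; (foo '(1 2))         => (1 (1 2))
;; (apply-twice #'1+ 3) => 5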

BOTH languages are doing Statistics and Politics, though, not science.
That's not bad--it's what's called for; but it is not fair to say that
we have made a "political outcome" and Scheme has made a "scientific
outcome". A maxim I've made up for myself on this is the following:
"There are no political answers, only political questions." (Something
I just this moment thought of: It follows from strong typing that this
must be true, right? Both arms of the IF have to have the same return
type.) Both languages are probably doing right by their user
base. Each one is aesthetic within the context of what it is trying to
promote. Each one is political in that it has an agenda of a
programming style that it thinks is useful. Loosely, and I'm
exaggerating slightly for visual effect, so please don't be a stickler
here: Scheme: tolerate, but don't prefer, non-functions. Lisp:
tolerate, but don't prefer, functions. But what really burns me is
the oft-made claim by the Scheme community that their statistical
trade-off makes so much sense that it can be referred to without
adverbial qualification as "aesthetic" as if there were no other kind
of aesthetic worth mentioning.

I don't think Scheme is more aesthetic than Lisp, but I always send people
who want "aesthetics" toward the Scheme community, because I think
Scheme is a magnet for people who use the word "aesthetic" as an
unqualified term, and because I just tire of talking to people with that
kind of intolerant attitude. I suppose that makes me intolerant.
One never gets anything for free--the cost of being around tolerant folks
is to be intolerant. Hmm.


Matthias Hölzl

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
Friedrich Dominicus wrote:

> But what I like most about Eiffel is its idea of Design-by-Contract.
> Wouldn't that be something that would be nice for CL too?

Yes, see <http://www.gauss.muc.de/tools/dbc/dbc-intro.html>

Regards

Matthias

Friedrich Dominicus

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to

first answer: Learn both. Scheme will let you more easily grasp the
"spirit" of Lisp-Programming ;-)

Regards
Friedrich

Friedrich Dominicus

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
>
> What exactly is "Design by Contract"?

Let me give an example. The all so well known factorial. Written in an
imperative style to show some extra assertions
factorial (n: INTEGER): INTEGER is
      require
         n_positive: n >= 0
      local
         i: INTEGER
      do
         Result := 1
         -- holds for n = 0
         from
            i := n
         invariant
            i_positive_and_less_than_n: i >= 0 and then i <= n
         variant
            i_decreases: i
         until
            i <= 1
         loop
            Result := Result * i;
            i := i - 1
         end -- from
      end -- factorial

The interesting part is the require clause, in which I state clearly
that I expect n to be a non-negative number. And because loops can be
problematic, you have the invariant part, which must hold for every
iteration, and the variant part, which must be a decreasing number that
stays non-negative. Now you may say that isn't a big thing. But if you
have an ancestor class and some descendants, the assertions are
inherited too.

These assertions can be extracted from the source and give a nice
specification. And indeed you can use Eiffel for specifying your problem.

I guess the conditions look familiar to LISPers because you are used to
declarative programming. That is what they are: they don't state how
something is done but what has to hold. But this is quite what FP
programming is about, I guess.

If an assertion does not hold, an exception is raised, which you can
handle in a rescue part.
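For comparison, a rough Common Lisp analogue of the same checks can be
written with plain ASSERT (a sketch only; it does not capture Eiffel's
inheritance of assertions):

(defun factorial (n)
  ;; precondition, corresponding to the require clause
  (assert (and (integerp n) (>= n 0)) (n)
          "factorial: n must be a non-negative integer, got ~S" n)
  (let ((result 1))
    (loop for i downfrom n above 1 do
          ;; loop invariant: 0 <= i <= n
          (assert (<= 0 i n))
          (setf result (* result i)))
    result))

;; (factorial 5) => 120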

Does that example answer your question?

Regards
Friedrich

Erik Naggum

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
* Friedrich Dominicus <Friedrich...@inka.de>

| first answer: Learn both. Scheme will let you more easily grasp the
| "spirit" of Lisp-Programming ;-)

this reminds me of something. Scheme people will go out of their way to
claim that Scheme is a Lisp. this is like communist dictatorships which
refer to themselves as "democracies" and people who kill doctors and bomb
medical clinics and call themselves "pro-life", as if hijacking a nice
word can make up for the facts. I just wish Scheme people could be proud
of what they have instead of being so childish and immature that they can
only enjoy their little scheming games if they can usurp somebody else's
terminology.

but, hey, my first objection to GUILE was that it was a name I'd expect
from politicians too self-loathing to be proud of what they did, and
Schemer and Conniver have similar connotations with me¹, strengthened
after looking at what Scheme people do when you say that Scheme should be
good enough on its own not to need the Lisp label: they keep the label so
they can blame Common Lisp for everything when Scheme fails. you see,
"Scheme" is what they call their language when they succeed. when they
fail, they call it "Lisp".
for as long as I have been watching Scheme people, they have contributed
much more to the bad name that "Lisp" has than to the good name that they
think "Scheme" has by unloading all their misery on the "Lisp" name. so
look out for people who object to Common Lisp because they once got
burned on a Scheme project, or learned Scheme from a bad teacher who
insisted that Scheme is the only true Lisp. scheming bastards, indeed.

(this article contains 240% of the RDA of cynicism. please limit your
intake if you feel irritated. prolonged exposure may also cause anger.)

#:Erik
-------
¹ like FORTH was called just that because the IBM system on which the
author worked had only five letters available in compiler names, not the
six he needed to name his fourth language, SCHEME comes from a similar
limit of six characters unable to hold the name "SCHEMER", which was the
sequel to "CONNIVER", which followed the normal, boring name "PLANNER" --
all projects within the AI community.

Friedrich Dominicus

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to

I'm thinking that I like Lisp-like syntax more and more. It's a pity
that braces can keep someone from learning new things. But this was true
for me. Ashes on my head ;-)

Regards
Friedrich

Rainer Joswig

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
In article <sfw9081...@world.std.com>, Kent M Pitman <pit...@world.std.com> wrote:

> The two languages are quite different in terms of how they are used,
> what programming styles they promote, what community of users they
> attract, the capabilities they offer, and the overall language focus.

Scheme tries to grow up. People are using Scheme for
systems programming. MIT Scheme is quite a large environment.
Some Scheme systems are now running on bare machines.
Scheme libraries now offer power similar to Common Lisp's.
For experiments, Scheme often has the advantage that
it is easier to move around (-> you need fewer people).


What the Common Lisp community needs (IMHO, -> see the "MIT List"),
is a small Common Lisp core (incl. threads and networking).
The small core then could be moved around (different platforms,
distributed computing, web stuff, new compilers, ...) much faster.
It also could be the kernel for a new operating system.

Lieven Marchand

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
Friedrich Dominicus <Friedrich...@inka.de> writes:

> But what I like most about Eiffel is its idea of Design-by-Contract.
> Wouldn't that be something that would be nice for CL too?
>

You can just do it yourself in CLOS. If you check dejanews for the
previous Design-by-Contract discussion in c.l.l., you'll find a post
by someone who implemented DbC as a method combination for CLOS.
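The simplest version of that idea attaches a precondition as a :before
method (a sketch; WITHDRAW and the check are hypothetical names):

(defgeneric withdraw (account amount))

;; Precondition: runs before every primary method, and is inherited
;; by methods specialized on subclasses of ACCOUNT's class.
(defmethod withdraw :before (account amount)
  (assert (plusp amount) (amount)
          "withdraw: amount must be positive, got ~S" amount))

A full DbC layer would also handle postconditions (:after methods) and
class invariants (:around methods), which is roughly what a dedicated
method combination buys you.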

--
Lieven Marchand <m...@bewoner.dma.be>
If there are aliens, they play Go. -- Lasker

Robert Monfera

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to

Friedrich Dominicus wrote:
> How often do you change your Common Lisp programming environment?

I guess far less often than Schemers change theirs?
(Nothing beats CLOS and MOP by the way.)

Robert

Robert Monfera

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to

Friedrich Dominicus wrote:
>
> >
> > What exactly is "Design by Contract"?
>

> Let me give an example...

Search the newsgroup or the ALU pages, there is a DBC implementation out
there for Common Lisp.
Robert

Kent M Pitman

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
jos...@lavielle.com (Rainer Joswig) writes:

> Scheme tries to grow up. People are using Scheme for
> systems programming. MIT Scheme is quite a large environment.

I have a lot of respect for the scheme vendors generally, who have
worked hard to be more than what the Scheme standard is. The
language designers, I think, often hold them back. As a language,
Scheme stagnated long ago, barely able to change because everyone's
differing notion of what's elegant or necessary or important drags
the language in a different direction such that "consensus" isn't
very easy on any kind of change. Any individual vendor, through
extensions they just decide for themselves, does better.

CL has done slightly better, IMO, because the members come to the
table with an understanding that concessions to practicality will be
in the language proper. And so the language is able to evolve, even
if slowly. The slowness reflects less an unwillingness of the
participants to change than the slowness of the bodies which
bless the change as "official"; that is, slowness is dictated more by
ANSI and NCITS than by CL people per se. A bit of it is dictated by
budget, too. But part of the reason budget is an issue is that
ANSI and NCITS also make the process hugely expensive in ways that
some might argue are not needed. Then again, the absence of the
procedural mechanisms ANSI and NCITS provide might leave the committee
deadlocked like the Scheme committee, with no way to force motion forward.

The world is complicated.

Rainer Joswig

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
In article <sfwd7xc...@world.std.com>, Kent M Pitman <pit...@world.std.com> wrote:

> jos...@lavielle.com (Rainer Joswig) writes:
>
> > Scheme tries to grow up. People are using Scheme for
> > systems programming. MIT Scheme is quite a large environment.
>
> I have a lot of respect for the scheme vendors generally, who have
> worked hard to be more than what the Scheme standard is.

Scheme users are currently discussing various extensions
to Scheme. See:

http://srfi.schemers.org/

Scheme Requests for Implementation

The "Scheme Requests for Implementation" (SRFI) process is a new
approach to helping Scheme users to write portable and yet
useful code. It is a forum for people interested in coordinating
libraries and other additions to the Scheme language
between implementations.

> CL has done slightly better, IMO, because the members come to the
> table with an understanding that concessions to practicality will be
> in the language proper.

Yes, and we owe that to people like you who have put
a lot of work into it - we are standing on your shoulders. :-)
Now we have to add some weight.

> some might argue are not needed. Then again, the absence of the
> procedural mechanisms ANSI and NCITS provide might leave the committee
> deadlocked like the Scheme committee, with no way to force motion forward.

As I understand Scheme currently has IEEE, ANSI and ISO standards.

Jeffrey Mark Siskind

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
> > How often do you change your Common Lisp programming environment?
>
> Some people never, some people often. Typical reasons for switching:
>
> 1. Thread-safe implementation.
> 2. Native thread support.
> 3. Real-time garbage collection
> 4. Overall efficiency
> 5. floating-point efficiency
> 6. CLOS efficiency
> 7. Facilitated C++ interfacing
> 8. Footprint
> 9. Royalties
> 10. No redundant GC (when combining with Java)
> 11. Platform (including JVM).
> 12. Support
> 13. Vendor-specific add-ons

You forget the most important one:

14. The vendor going out of business or ceasing to market the product or the
university losing the grant to continue development.

The list is long:

1. Symbolics
2. TI
3. LMI
4. Xerox
5. Gold Hill
6. Lucid
7. Harlequin
8. ExperCommonLisp
9. Procyon
10. Star Sapphire
11. WCL
12. Chestnut
13. GCL
14. CMUCL
15. Poplog
16. MCL

I may have missed a few.

Jeff (http://www.neci.nj.nec.com/homepages/qobi)

Fernando Mato Mira

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
Kent M Pitman wrote:

> I will continue to defend FUNCALL as a non-wart. I think it's the
> other way around. At the very minimuym, you should understand that

Sure, ((lambda (x) x) 1) is a wart in the CL context (unless maybe if you
say "search in operator space" as I did before; but then, one should at
least be able to specify anonymous macros as well. And still, why can I
drop FUNCALL for that but not for bindings?).

One expression. `Prettier' and ugly at the same time.

But I assume the reason is compatibility (a noble goal, at least for CL ;->) and
not fancy.

Some might say even when making it better you make it worse. Oh, well..


Fernando Mato Mira

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
Jeffrey Mark Siskind wrote:

> You forget the most important one:
>
> 14. The vendor going out of business or ceasing to market the product or the
> university losing the grant to continue development.

I hid that under 12 ;->

But you reminded me of a very important one:

15. Verification


Fernando Mato Mira

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
Friedrich Dominicus wrote:

> I'm thinking that I like Lisp-like syntax more and more. It's a pity
> that braces can keep someone from learning new things. But this was true
> for me. Ashes on my head ;-)

Tastes GOOD!!
Who wants a mirror? I don't need it.. ;-)


Christopher R. Barry

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
jos...@lavielle.com (Rainer Joswig) writes:

> What the Common Lisp community needs (IMHO, -> see the "MIT List"),
> is a small Common Lisp core (incl. threads and networking).
> The small core then could be moved around (different platforms,
> distributed computing, web stuff, new compilers, ...) much faster.
> It also could be the kernel for a new operating system.

Richard Gabriel in the Worse Is Better paper, as well as John Mallery
in a semi-recent post to the X3J13 list, both call for a core Lisp.
(Mallery seemed to want everything under the sun in Lisp in that
post--numerous ridiculously difficult and unrealistic things.) In my
view, there's more rationale against a core Lisp than for one, and I don't
see how one would really help the community and vendors.

You already can write a Common Lisp in Common Lisp and only have to
bootstrap a small subset of Common Lisp to define and compile the
rest.[1] What good does standardizing what this core should be really
do? It takes away flexibility and optimization opportunities from the
vendors.

Christopher

1. Though black magic and weird circular dependencies -- things like
functions that implement only enough functionality to compile exactly
what they need before being redefined and recompiled by the very
functions they helped compile, as you see so much of in CMU CL -- make
testing out changes and experimenting _impossible_. I'd like to help
out with the CMU CL effort, but the damn thing is so frustratingly
impossible to rebuild if you change or modify anything interesting, and
I just don't have time right now to learn every last in and out of how
it compiles itself so I can get it to rebuild after
interesting/important modifications.

Juanma Barranquero

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Wed, 28 Jul 1999 19:08:51 +0200, Matthias Hölzl
<hoe...@informatik.uni-muenchen.de> wrote:

>Yes, see <http://www.gauss.muc.de/tools/dbc/dbc-intro.html>

BTW,

I tried to use it with ACL 5.0, but it didn't work.

/L/e/k/t/u

-----BEGIN PGP SIGNATURE-----
Version: PGPfreeware 6.0.2i

iQA/AwUBN58yz/4C0a0jUw5YEQIHtwCeOMGKEXXmHQjDiRtxbQpR8Rn+eckAn2Ys
9fEoyHVpuWUgZ3z5qQ76o2Pu
=EExD
-----END PGP SIGNATURE-----


Gareth McCaughan

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
Rainer Joswig wrote:

> Yes, and we owe that to people like you who have put
> a lot of work into it - we are standing on your shoulders. :-)
> Now we have to add some weight.

Ouch!

--
Gareth McCaughan Gareth.M...@pobox.com
sig under construction

Pierre R. Mai

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
jos...@lavielle.com (Rainer Joswig) writes:

> What the Common Lisp community needs (IMHO, -> see the "MIT List"),
> is a small Common Lisp core (incl. threads and networking).
> The small core then could be moved around (different platforms,
> distributed computing, web stuff, new compilers, ...) much faster.
> It also could be the kernel for a new operating system.

Well, there is already the possibility of sub-setting the existing
standard. The problem here might be that those who want to sub-set
can't agree on a common sub-set. It might be advantageous to discuss
such a common sub-set in a sub-group of the current standardization
process. OTOH, I don't see who _of the implementors_ would be
interested in such a sub-set. Those who have existing
implementations probably don't need a sub-set, or they would have
created one already.

That would only leave those implementors which _don't_ have an
existing (fairly complete) implementation. I don't see many of those
around. And without an implementor of the sub-set, what good would
the sub-set do? And more importantly, who would design it? Without a
specific audience and implementor in mind, I don't see how a good
sub-set could be created. IIRC, similar reasons were the cause of
the current situation, where sub-sets are allowed but no standard
sub-sets were defined. See the corresponding write-up (which, IIRC,
is included in the HyperSpec).

Regs, Pierre.

--
Pierre Mai <pm...@acm.org> PGP and GPG keys at your nearest Keyserver
"One smaller motivation which, in part, stems from altruism is Microsoft-
bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]

Christopher Browne

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
On Wed, 28 Jul 1999 19:06:34 +0200, Friedrich Dominicus
<Friedrich...@inka.de> wrote:
>William Deakin wrote:
>> Friedrich Dominicus wrote:
>> > .... It seems the tendency in this group is in learning CL, which
>> > is not a surprise because we are in c.l.l ...

>>
>> What do they say in comp.lang.scheme?
>
>first answer: Learn both. Scheme will let you more easily grasp the
>"spirit" of Lisp-Programming;-)

If part of your desire is to be able to quickly implement your own
version, Scheme is not unlike FORTH in being a language that is almost
trivial to reimplement.

In that process, you will learn some things about abstraction that
will be valuable.

CL aficionados will, unsurprisingly, encourage you to then move on as
quickly as possible to a more full-featured dialect. Which is not an
unreasonable suggestion. The unfortunate thing about Scheme is that
it doesn't include all the rich functionality Common Lisp has. Folks
that try to cobble additional functionality onto Scheme tend to
recreate in non-interoperable ways things that CL already has...
--
debugging, v:
Removing the needles from the haystack.
cbbr...@ntlug.org- <http://www.hex.net/~cbbrowne/langlisp.html>

Kent M Pitman

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
Fernando Mato Mira <mato...@iname.com> writes:

> Kent M Pitman wrote:
>
> > I will continue to defend FUNCALL as a non-wart. I think it's the
> > other way around. At the very minimuym, you should understand that
>

> Sure ((lambda (x) x) 1) is a wart in the CL context

I don't know why. First, CL is about tolerance, so multiple paradigms
have a place because there are multiple kinds of people. Second, this
expression is a very old tradition, and I see no wart in tradition any
more than I would see in a finely crafted 19th century table. Third,
I don't see anything conceptually inconsistent about this. I was taught
that the car of a form is for the name of a function, and (lambda (x) x)
is one of the ways to name the identity function. Just as for Joe you
can say "Joe" or "that guy standing over there"; in this case you can
say IDENTITY or (LAMBDA (X) X). I read #'(LAMBDA (X) X) as meaning
"the function whose name is (lambda (x) x)" and I read
(LAMBDA (X) X) as "the macro expression that expands to the function
whose name is (LAMBDA (X) X)". I don't find it troubling to have both.
Nor inelegant.
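The readings can be put side by side (all three forms below evaluate
to 1):

(identity 1)                    ; the identity function, by its familiar name
((lambda (x) x) 1)              ; a lambda expression in operator position
(funcall #'(lambda (x) x) 1)    ; the function object, called via FUNCALL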

No more "inelegant" than it is to live in a society that has two or ten
major religions. I would find it "inelegant" to live in a society that
forced everyone to pray to one religion just because that was
administratively simpler for some bureaucrat who could just as easily
ignore the religions that don't affect him personally and leave them to
be used by those who do desire them.

> (unless maybe if
> say "search in operator space" as I did before,

I did not understand this.

> but then, one should at least be able to specify anonymous macros as
> well.

I don't have any philosophical objection to this, though I've only used
them about half a dozen times in my entire career. For that frequency
of use, doing

   (macrolet ((do-it () ...))
     (do-it))

is adequately anonymous for me. I'm not sure why this relates to the
elegance of the other issues though--this seems orthogonal to me, but
maybe you can tie it in.

> But still, why can I remove FUNCALL for that but not for
> bindings?).

I did not understand this either.

> One expression. `Prettier' and ugly at the same time.

Nor this.

> But I assume the reason is compatibility (a noble goal, at least for
> CL ;->) and not fancy.

Nothing that I could discern in the above is a design decision made
for compatibility reasons, other than "compatibility with the desires
of the user community", which is not the classical meaning of
compatibility. I don't think the decision to have FUNCALL or FUNCTION
or two namespaces is motivated by backward compatibility. There are
reasons of presentational clarity that call for the use of these
separations. There are pragmatic reasons. And there are even some
efficiency arguments for the distinction. I would design the language
this way even if there were no compatibility concerns. That's what I've
been trying to say. There are different underlying sensibilities at work
here. I do not find the absence of FUNCALL aesthetic at all. I do not
find it simpler.

> Some might say even when making it better you make it worse. Oh, well..

Some might and do say this. I am not asserting they don't. I am saying
there are multiple sentiments, and my believing it's aesthetic doesn't
mean others must think it aesthetic. But I do think others should not refer
to it as unconditionally unaesthetic. I think they should understand that
their belief that something is either aesthetic or unaesthetic is a personal
choice, not necessarily uniquely determined and not necessarily shared
with [some or all] others.

I am for TOLERANCE of multiple sentiments. I am for allowing people to
like what they like without being hassled about it. I am really tired of
having people focus on what is NOT something they like; CL provides multiple
ways of doing things, and it's possible for people to focus on the things
they do find aesthetic. I am AGAINST giving people a hard time for
liking different things than others like, as long as those people don't
try to tell the others they should change, or that their view is inferior.

It's not been a good day for me. I'm sorry if this makes me sound a bit
stressed here. But I still wanted to make these comments. I apologize
in advance for the tone.

Rainer Joswig

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
In article <yq7oggw...@qobi.nj.nec.com>, Jeffrey Mark Siskind <qo...@research.nj.nec.com> wrote:

> 14. The vendor going out of business or ceasing to market the product or the
> university losing the grant to continue development.
>

> The list is long:
>
> 1. Symbolics
> 2. TI
> 3. LMI
> 4. Xerox
> 5. Gold Hill
> 6. Lucid
> 7. Harlequin
> 8. ExperCommonLisp
> 9. Procyon
> 10. Star Sapphire
> 11. WCL
> 12. Chestnut
> 13. GCL
> 14. CMUCL
> 15. Poplog
> 16. MCL
>

I think this list dangerously mixes different things. I'm
not really happy with it, sorry. So that nobody gets a wrong
*impression* about it: MCL, for example, is still with us, and it
is supported and ...

Rainer Joswig

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
In article <87wvvkfp...@2xtreme.net>, cba...@2xtreme.net (Christopher R. Barry) wrote:

> Richard Gabriel in the Worse Is Better paper as well as John Mallery
> in a semi-recent post to the X3J13 list both call for a core Lisp.
> (Mallery seemed to want everything under the sun in Lisp in that
> post--numerous ridiculously difficult and unrealistic things.)

(with-irony
Yeah, I thought you had a qualified comment on that.)

> In my
> view, there's less rationale for a core Lisp than against and I don't
> see how one would really help the community and vendors.

That you don't see it doesn't mean that others don't.

A few possible applications:

- you could run it on bare PCs
- you could run it in switches
- you could run it as networking nodes running web servers, ...
- you could use it on board of a space ship
- you could put it on robots
- you could put it in a game console
- ...


> You already can write a Common Lisp in Common Lisp and only have to
> bootstrap a small subset of Common Lisp to define and compile the
> rest.[1] What good does standardizing what this core should be really
> do? It takes away flexibility and optimization oportunities from the
> vendors.
>
> Christopher
>
> 1. Though black magic and weird circular dependencies like things like
> functions that only implement enough functionality to compile exactly
> what they need to before being redefined and recompiled by the
> functions they helped compile like you see so much of in CMU CL make
> testing out changes and experimenting _impossible_. I'd like to help
> out with the CMU CL effort but the damn thing is so frustratingly
> impossible to rebuild

(with-iteration
How about a smaller core Lisp, that would be easier
to build and move around?)

> if you change or modify anything interesting and
> I just don't have time right now to learn every last in and out of how
> it compiles itself so I can get it to rebuild after
> interesting/important modifications.

Just what I said.

Marco Antoniotti

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to

jos...@lavielle.com (Rainer Joswig) writes:

> In article <sfwd7xc...@world.std.com>, Kent M Pitman <pit...@world.std.com> wrote:
>
> > jos...@lavielle.com (Rainer Joswig) writes:
> >
> > > Scheme tries to grow up. People are using Scheme for
> > > systems programming. MIT Scheme is quite a large environment.
> >
> > I have a lot of respect for the scheme vendors generally, who have
> > worked hard to be more than what the Scheme standard is.
>
> Scheme users are currently discussing various extensions
> to Scheme. See:
>
> http://srfi.schemers.org/
>
> Scheme Requests for Implementation
>
> The "Scheme Requests for Implementation" (SRFI) process is a new
> approach to helping Scheme users to write portable and yet
> useful code. It is a forum for people interested in coordinating
> libraries and other additions to the Scheme language
> between implementations.

As a matter of fact I believe that the SRFI is a rather good mechanism
for adding things to the language in a "regular" way. Of course, there
is so much more to add to Scheme that a lot of work needs to be done
there. Re-inventing the wheel takes time :)

I would favor something like a CLAN ("Common Lisp Added Newness" :) -
for lack of a better acronym). With clear coding and documentation
standards that would invite people to provide implementations for
relatively small features/packages given an appropriate specification
first. An integrated REGEX (I know there are many such packages
around) would be an example of the size of the projects to be part of
this effort. I.e. an implementation of CLIM would not be within the
scope of this enterprise. Have a look at the SRFIs to have an idea.

>
> > CL has done slightly better, IMO, because the members come to the
> > table with an understanding that concessions to practicality will be
> > in the language proper.
>

> Yes, and we owe that to persons like you who had put
> a lot of work into that - we are standing on your shoulders. :-)
> Now we have to add some weight.
>

> > some might argue are not needed. Then again, the absence of the
> > procedural mechanisms ANSI and NCITS provide might leave the committee
> > deadlocked like the Scheme committee, with no way to force motion forward.
>
> As I understand Scheme currently has IEEE, ANSI and ISO standards.

Yes. Maybe. They all lack multidimensional arrays and
records/structures. It is easy to get standards down to 50 pages if
you leave out useful things.

Cheers

--
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa

Marco Antoniotti

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to

Friedrich Dominicus <Friedrich...@inka.de> writes:

> >
> > What exactly is "Design by Contract"?
>

> Let me give an example. The all so well known factorial. Written in an
> imperative style to show some extra assertions
> factorial(n: INTEGER): INTEGER is
> require
> n_positiv: n >= 0
> local
> i : INTEGER
> do
> Result :=1
> -- holds for n = 0
> from
> i:= n
> invariant
> i_positive_and_less_than_n: i >=0 and then i <= n;
> variant
> i_decreases: i
> until
> i <=1
> loop
> Result := Result * i;
> i := i-1
> end; -- from
>
> end; -- factorial


(defmethod factorial ((n integer) &optional (acc 1))
  (if (zerop n)
      acc
      (factorial (1- n) (* n acc))))

(defmethod factorial :before ((n integer) &optional (acc 1))
  (when (minusp n)
    (error 'arithmetic-error
           :operation 'factorial
           :operands (list n))))

Not quite the same, but similar. Now that I remember I have
downloaded some macrology to have DBC in Common Lisp.

> The interesting point is require, in which I state clearly that I
> expect n to be a positive number. And because loops will be problematic
> you have the invariant part, which must hold for every iteration, and the
> variant part, which must be a decreasing number but always positive. Now
> you may say that isn't a big thing. But if you have an ancestor and some
> descendants the assertions are inherited too.
>
> These assertions can be extracted from the source and give a nice
> specification. And indeed you can use Eiffel for specifying your problem.
>
> I guess the conditions look familiar to Lispers because you are used to
> doing declarative programming. That is what they are: they don't state how
> something is done but what has to hold. But this is quite what FP
> programming is about, I guess.
>
> If these assertions do not hold, an exception is raised, which you can
> handle in a rescue part.
>
> Does that example answer your question?

Yes it does. It is a nice feature. Not too difficult to implement in
CL, apart from the LOOP invariants. I suppose Exceptions are
objectified in Eiffel (as in CL and Java) aren't they?
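
Such DBC macrology is not part of standard CL; the following is only a
minimal sketch of what it could look like. The macro name DEFUN/CONTRACT,
the *CHECK-CONTRACTS* switch, and the RESULT anaphor in the :ensure clause
are all hypothetical names invented here for illustration:

```lisp
;; Hypothetical, minimal design-by-contract macro for Common Lisp.
;; Contracts are checked at run time and can be disabled globally,
;; much as Eiffel lets you turn assertion monitoring on and off.
(defvar *check-contracts* t
  "When NIL, all contract checks are skipped.")

(defmacro defun/contract (name lambda-list (&key (require t) (ensure t))
                          &body body)
  "Define NAME with a :require precondition over the arguments and an
:ensure postcondition that may refer to the anaphoric variable RESULT."
  `(defun ,name ,lambda-list
     (when *check-contracts*
       (unless ,require
         (error "Precondition ~S failed in ~S." ',require ',name)))
     (let ((result (progn ,@body)))
       (when *check-contracts*
         (unless ,ensure
           (error "Postcondition ~S failed in ~S." ',ensure ',name)))
       result)))

;; Factorial with DBC-style assertions:
(defun/contract factorial (n)
    (:require (and (integerp n) (>= n 0))
     :ensure (>= result 1))
  (if (zerop n)
      1
      (* n (factorial (1- n)))))
```

(factorial 5) evaluates to 120, while (factorial -1) signals the
precondition error. Inheriting assertions along a class hierarchy, as
Eiffel does, would need additional machinery on top of this, e.g. the
:before/:after method combination used in the defmethod version above.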

Friedrich Dominicus

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
Marco Antoniotti wrote:

> >
> > Does that example answer your question?
>
> Yes it does. It is a nice feature. Not too difficult to implement in
> CL, apart from the LOOP invariants. I suppose Exceptions are
> objectified in Eiffel (as in CL and Java) aren't they?


No, unfortunately not.


OK, if it's not too difficult. But please remember assertions are
inherited too, and you can turn them on or off as you like. So I bet
there's a bit of work to do ;-)


Regards
Friedrich

Marco Antoniotti

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to

Kent M Pitman <pit...@world.std.com> writes:

Ahem!

> Human-derived attempts to make unambiguous languages have gone
> nowhere. English, which is full of contextual nuance and blunt
> construction, has won over the regularities of Latin and Greek and the
> pseudo-regularities of languages like Spanish and German partly, at
> least, because people don't have patience with aesthetics when they
> stand in the way of practicality. They want to ELECT practicality
> when it suits their need, but they don't want to be REQUIRED
> practicality.

At a certain point in history Latin had won over Greek and a plethora
of other languages in the Mediterranean basin. "La Divina Commedia"
is from the 13th century :) German had won over Latin in the mid-1800s
for scientific communication and "War and Peace" begins with
"En fin..." :)

I am sure I could gang up with Vassil Nikolov and come up with a

(defpackage "LATIN" (:use))

That would allow you to program in Latin :)

Languages are strange. History has strange ways of defining what's
better. Sects and cults can keep on going proclaiming the faith.
People who write on C.L.L. are such.

And for the Italians over here (it's an inside joke, sorry) :) .....
I propose a name for an Italian Common Lisp user group:
Rifondazione Comulispa!

Fernando Mato Mira

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
Kent M Pitman wrote:

> Fernando Mato Mira <mato...@iname.com> writes:
>
> > Kent M Pitman wrote:
> >
> > > I will continue to defend FUNCALL as a non-wart. I think it's the
> > > other way around. At the very minimuym, you should understand that
> >
> > Sure ((lambda (x) x) 1) is a wart in the CL context
>

> I don't see anything conceptually inconsistent about this. I was taught
> that the car of a form is for the name of a function, and (lambda (x) x)
> is one of the ways to name the identity function. Just as for Joe you

> > (unless maybe if


> > say "search in operator space" as I did before,
>
> I did not understand this.

What you just said.

> > But still, why can I remove FUNCALL for that but not for
> > bindings?).
>
> I did not understand this either.

It's OK to do:

  (funcall (lambda (x) x) 1)   -->   ((lambda (x) x) 1)

but not:

  (flet ((f ...))              -->   (flet ((f ...))
    (funcall f))                       (f))
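
For readers following along, a small sketch of the two-namespace point at
issue, in standard CL, evaluable at any REPL:

```lisp
;; A literal lambda may appear directly in operator position:
((lambda (x) (* x x)) 3)        ; => 9

;; A local function introduced by FLET is called by name:
(flet ((f (x) (* x x)))
  (f 3))                        ; => 9

;; But a function object held in a *variable* binding must be FUNCALLed,
;; because CL keeps separate namespaces for variables and functions:
(let ((f (lambda (x) (* x x))))
  (funcall f 3))                ; => 9
```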

>
> > One expression. `Prettier' and ugly at the same time.

See above.

> > But I assume the reason is compatibility (a noble goal, at least for
> > CL ;->) and not fancy.

[FUNCALL defense]

I was talking about funcall elision for compatibility with Scheme, not about
having FUNCALL for CL backward compatibility.

> It's not been a good day for me. I'm sorry if this makes me sound a bit
> stressed here. But I still wanted to make these comments. I apologize
> in advance for the tone.

I think it's worse to irritate/waste people's time by not expressing oneself
more clearly
(what's the emoticon for "reflective" or "self-referent"?
[No need to reply for any potential `does not apply here', I'm guilty enough of
that] ;->)


Fernando Mato Mira

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
Rainer Joswig wrote:

> In article <87wvvkfp...@2xtreme.net>, cba...@2xtreme.net (Christopher R. Barry) wrote:
>
> (with-iteration
> How about a smaller core Lisp, that would be easier
> to build and move around?)

In order for Marco to be able to continue flagging the FTPL, one would have to look at those Old
World things (shocking! ;-)).
But then, he is an Euro, so that shouldn't be a problem. Or did he get "assimilated"? ;-)


Marco Antoniotti

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to

Fernando Mato Mira <mato...@iname.com> writes:

Of course I got assimilated. I'd like a date with 7! :)

Tim Bradshaw

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
jos...@lavielle.com (Rainer Joswig) writes:

> A few possible applications:
>
> - you could run it on bare PCs
> - you could run it in switches
> - you could run it as networking nodes running web servers, ...
> - you could use it on board of a space ship
> - you could put it on robots
> - you could put it in a game console
> - ...
>

I realised another thing about this. CL, as it stands, is probably
marginal on small devices -- a palm pilot for instance will probably
only just run something like a small CL system. So, superficially, it
looks like a sensible thing to produce a shrunken CL with less
functionality. But in the year or so it takes to design that, the
memory available on all these tiny devices has doubled, so the old,
bloated, CL would have fit. So, really, I don't see an argument for a
core CL based around a requirement for small devices.

There may be other reasons -- ease of porting perhaps? However I
think even this turns out not to be that good an argument, since there
are at least 2 pretty portable public CLs -- gcl and clisp -- and one
apparently very portable commercial offering -- Franz Allegro.

--tim

Erik Naggum

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
* Tim Bradshaw <t...@tfeb.org>

| I realised another thing about this. CL, as it stands, is probably
| marginal on small devices -- a palm pilot for instance will probably only
| just run something like a small CL system. So, superficially, it looks
| like a sensible thing to produce a shrunken CL with less functionality.
| But in the year or so it takes to design that, the memory available on
| all these tiny devices has doubled, so the old, bloated, CL would have
| fit. So, really, I don't see an argument for a core CL based around a
| requirement for small devices.

a handheld computer with Common Lisp would need an editor that would be a
little different from what we're used to. perhaps it doesn't make much
sense to deal with textual input and then reading it the normal way? the
discussion about context-sensitive structure editors may be more useful
when dealing with handhelds than with keyboard-based computers. in other
words, the complexity of reading input may be significantly reduced in
one end, while we pay for the interface on the other end. I don't see
the point in a quick compiler -- leave those to the development tools to
be used on other computers.

something like CLISP would probably be more beneficial on a handheld than
a full-blown native compiler. on a handheld computer, there would be a
definite need for fully dynamic CLOS support, as the "program" would be
started once and the objects would essentially live forever.

| There may be other reasons -- ease of porting perhaps? However I think
| even this turns out not to be that good an argument, since there are at
| least 2 pretty portable public CLs -- gcl and clisp -- and one apparently
| very portable commercial offering -- Franz Allegro.

I don't see any purpose in a Core Lisp at all. whether you should allow
the tree shaking before rather than after application development doesn't
seem like a question worth answering at this time.

#:Erik
--
suppose we blasted all politicians into space.
would the SETI project find even one of them?

Erik Winkels

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
Tim Bradshaw <t...@tfeb.org> writes:
> So, superficially, it looks like a sensible thing to produce a
> shrunken CL with less functionality. But in the year or so it
> takes to design that, the memory available on all these tiny
> devices has doubled, so the old, bloated, CL would have fit.

True, no doubt, for the small devices of the moment. However in
the next few years there will probably be a new generation of small
devices, wrist pilots perhaps, that will start out with little memory
again.


Paolo Amoroso

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
On Wed, 28 Jul 1999 14:14:17 +0200, Friedrich Dominicus
<Friedrich...@inka.de> wrote:

> Marco Antoniotti wrote:
> > standard. I can go on and on and on... I feel like the pink rabbit. :)
>
> just curious what is a this pink rabbit? But thanks for pointing out why
^^^^^^^^^^^
Maybe something like a guinea pig ("cavia", "animale da laboratorio" in
Italian)?


Paolo
--
Paolo Amoroso <amo...@mclink.it>

Erik Naggum

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
* Erik Winkels <aer...@xs4all.nl>

| True, no doubt, for the small devices of the moment. However in the next
| few years there will probably be a new generation of small devices, wrist
| pilots perhaps, that will start out with little memory again.

isn't it odd how designers keep making that mistake? I have this mental
picture of C programmers who do char input_line[128] and think nobody
could ever want longer lines than that, whenever I see some new gizmo
that comes out with a ridiculous amount of memory. my Nokia GSM (EU)
cell phone, for instance, can hold 5 messages in memory, while my Sanyo
PCS (US) cell phone can hold a whopping 30. like, wow. the messages are
160 characters long with GSM SMS, and 100 characters with PCS messaging.
they both can hold 100 phone numbers with names. whee! I'm so excited.

"640K should be enough for everyone" is probably the most brilliant thing
the man has said (at least if we judge by his book "The Road Ahead"), but
there's something in the engineering culture that just doesn't quite get
this idea that people want to be relieved of remembering accurately, and
there's no limit to what people don't want to remember. whenever I call
directory assistance, for instance, chances are that I will call again
the next time I need the same number unless I write it down, but all of
this stuff is already in electronic form, so why can't the stupid
telephone just record it? sigh. some technologies are so lame.

Tim Bradshaw

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
Erik Winkels <aer...@xs4all.nl> writes:


> True, no doubt, for the small devices of the moment. However in
> the next few years there will probably be a new generation of small
> devices, wrist pilots perhaps, that will start out with little memory
> again.

Well, I'll happily admit to not knowing too much about the
technologies, but it seems to me that, although these devices may have
only tiny RAM, they will probably be able to have plenty of ROM (or
flashable ROM preferably). So, so long as most of your system can sit
in ROM (and I can't see a reason why most of a CL system should not be
in ROM), you're probably OK. I think that ROM is very low power and
pretty physically tiny. A more likely limit is perhaps address space
in the CPU if they have less-than-32 bit cpus, but I think most things
are now 32?

Anyway, I don't really see the tiny-device argument being a good
reason for a CL subset. There may be other good reasons.

--tim

Rainer Joswig

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to

I find this example particularly ugly.

- trivialities are expanded

- the compiler does not care a thing about the assertions at
compile time.
would the compiler help you to find the error of calling (factorial -3) ?
You **could** use the subrange (integer 0) as a type for n,
but then it is already required.

- it fails to capture the pattern of iteration or recursion in any
way; instead you are writing invariants for the
same recursive/iterative patterns in different algorithms
over and over

- if you run it with assertions on, the code will be dead slow,
so you turn off assertions

- it fails to integrate with the error system, the
hints that are providable to a user/developer in case of
an error are little above zero.
How can I continue?

- the loop syntax is as bloated as Common Lisp's

> > These assertions can be extracted from the source and give a nice
> > specification. And indeed you can use Eiffel for specifying your problem.

But then I'd rather use a logic language or a theorem prover.
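
To illustrate the subrange-type remark above: in CL one can declare n as
(integer 0), and a type-inferring compiler (CMUCL's, for instance) may then
flag a call like (factorial -3) at compile time. Whether and how the
declaration is enforced is implementation-dependent:

```lisp
;; Declare factorial's domain as the non-negative integers.
(declaim (ftype (function ((integer 0)) (integer 1)) factorial))

(defun factorial (n)
  (declare (type (integer 0) n))
  (if (zerop n)
      1
      (* n (factorial (1- n)))))

;; (factorial -3) can now be rejected at compile time by a compiler that
;; does type inference; at high safety settings an implementation may
;; instead signal a TYPE-ERROR at run time.
```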

Rainer Joswig

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
In article <nkjlnbz...@tfeb.org>, Tim Bradshaw <t...@tfeb.org> wrote:

> I realised another thing about this. CL, as it stands, is probably
> marginal on small devices -- a palm pilot for instance will probably
> only just run something like a small CL system.

Apple's Newton was a system which was very close to a Common Lisp
environment. Some features of the Newton OS:

- Garbage Collection
- tagged data
- byte code interpreter and compiler builtin (in ROM)
- code and data is tiny, you have your prototype
objects in ROM and at runtime only the differences
are in RAM.
Applications were only a fraction in size of what you have on other
platforms - still they were fully object-oriented and dynamic.
- optional native code compilation
- dynamic objects
- dynamic typing

> There may be other reasons -- ease of porting perhaps? However I
> think even this turns out not to be that good an argument, since there
> are at least 2 pretty portable public CLs -- gcl and clisp

GCL and Clisp (as nice as they are) fail to support crucial
things like an advanced GC, threads and performance.

> -- and one
> apparently very portable commercial offering -- Franz Allegro.

"Portable" is relative - portable across Unix systems perhaps?

How big is a working Franz Allegro lisp image? Last time I looked,
ACL generated really large code on RISC machines
(has this changed?).


Reasons for a Core Lisp are:

- small footprint is still an advantage on various devices
(imagine placing a Lisp system in an arm of a robot)

- it's much easier to understand

- it's much easier to port

- it's much easier to experiment with extensions and changes

- faster startup time

- small means also that the kernel fits into current cache sizes
(I guess the boost MCL has got from the PowerPC G3 processor
is especially because it has a small footprint and the G3 has a
nice second level cache architecture)

- you might be able to experiment with different GC strategies

(remember that for example G2 which is popular
in process control and simulation is written
in a non-consing version of Common Lisp).

- it might be a good teaching vehicle

- it could be the base for an OS kernel

- you should be able to run thousands of Lisp threads on a single machine
(-> a web server, file server, ...)

There were projects in this direction already.
Minima was a minimal Lisp system from Symbolics
(the opposite of Genera, so to speak). You could have
embedded systems running Minima. Symbolics
even once built special hardware for a customer:
http://kogs-www.informatik.uni-hamburg.de/~moeller/symbolics-info/zora/zora.html
See L, etc.


or to quote "Modernizing Common Lisp: Recommended Extensions", John Mallery, ...:

"The most important recommendation with the most leverage is the
recommendation to develop a PORTABLE BASE LANGUAGE FOR COMMON LISP.
This strategic move provides:

* An opportunity to reengineer the core language;
* A minimal base to port to new platforms;
* Opportunities for new hardware and compiler efforts
(base-to-platform & portable-to-base);
* Increased sharing of higher-level code."

...

"* CL Portable Base Language: Develop a base language to which
all extant Common Lisp implementations can be targeted. This
language should include locatives and basic OS operations like
processes, file I/O, networking. This base layer will
increase portability of lisp applications and enable work to
improve performance with specialized hardware or compilers.
This layer should make efforts to support real-time and
parallel dialects. Experimental, Very High Impact, Examples,
Implementations, Hard. KR, RL, JM"

Tim Bradshaw

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
Erik Naggum <er...@naggum.no> writes:

> * Erik Winkels <aer...@xs4all.nl>


> isn't it odd how designers keep making that mistake?

Well, to give them their due it is not always a mistake. For very
small devices, things like power consumption and physical size can
often result in fairly draconian limits on things like memory.

For instance, my HP calculator came with 128k of memory, which really
wasn't enough. I bought another 1.x Mb for it, which *is* enough (at
least, the limit is now the addressability of the processor), but it's
had bad effects on battery life -- it now requires batteries every
month or so, which is a pain compared to the 6 months or so before.
Fortunately the new HP calculators come with loads of (presumably
lower-power) internal memory and flashable ROM.

--tim

Kent M Pitman

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
Marco Antoniotti <mar...@copernico.parades.rm.cnr.it> writes:

> Kent M Pitman <pit...@world.std.com> writes:
>
> Ahem!
>
> > Human-derived attempts to make unambiguous languages have gone
> > nowhere. English, which is full of contextual nuance and blunt
> > construction, has won over the regularities of Latin and Greek and the
> > pseudo-regularities of languages like Spanish and German partly, at
> > least, because people don't have patience with aesthetics when they
> > stand in the way of practicality. They want to ELECT practicality
> > when it suits their need, but they don't want to be REQUIRED
> > practicality.
>
> At a certain point in history Latin had won over Greek and a plethora
> of other languages in the Mediterranean basin. "La Divina Commedia"
> is from the 13th century :) German had won over Latin in the mid 800
> hundred for scientific communication and "War and Peace" begins with
> "En fin..." :)

I got some private mail about this, too. There are political and other
factors involved here.

But even if you take any one of those languages and ignore the
inter-language fighting, you can see that strong/regular language
features fall in favor of weaker ones. That is, people's natural
inclination is to make lots of variations for pragmatic reasons,
especially ones that favor pronunciation, not to try to distill the
language to a microscopic and easy-to-teach core. Often the
pronunciation accommodations make the language harder to learn, not
easier, but presumably the reason people do it is that they'd rather
have a language they can use easily than one they can learn easily,
because they are more equipped to learn it in spite of irregularity
than they are to speak it in spite of regularity.

Marco Antoniotti

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to

Kent M Pitman <pit...@world.std.com> writes:

> Marco Antoniotti <mar...@copernico.parades.rm.cnr.it> writes:
>
> > At a certain point in history Latin had won over Greek and a plethora
> > of other languages in the Mediterranean basin. "La Divina Commedia"
> > is from the 13th century :) German had won over Latin in the mid 800
> > hundred for scientific communication and "War and Peace" begins with
> > "En fin..." :)
>
> I got some private mail about this, too. There are political and other
> factors involved here.
>
> But even if you take any one of those languages and ignore the
> inter-language fighting, you can see that strong/regular language
> features fall in favor of weaker ones. That is, people's natural
> inclination is to make lots of variations for pragmatic reasons,
> especially ones that favor pronunciation, not to try to distill the
> language to a microscopic and easy-to-teach core. Often the
> pronunciation accommodations make the language harder to learn, not
> easier, but presumably the reason people do it is that they'd rather
> have a language they can use easily than one they can learn easily,
> because they are more equipped to learn it in spite of irregularity
> than they are to speak it in spite of regularity.

You are stating the obvious: there is *no* relationship whatsoever
between written and spoken English. :)

Spelling contests in Italian or German are nonsense :)

Marco Antoniotti

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to

jos...@lavielle.com (Rainer Joswig) writes:

> In article <lwlnbzl...@copernico.parades.rm.cnr.it>, Marco Antoniotti <mar...@copernico.parades.rm.cnr.it> wrote:
>
> > Friedrich Dominicus <Friedrich...@inka.de> writes:
> >
> > > >
> > > > What exactly is "Design by Contract"?
> > >
> > > Let me give an example. The all so well known factorial. Written in an
> > > imperative style to show some extra assertions
> > > factorial(n: INTEGER): INTEGER is

...

> - it fails to integrate with the error system, the
> hints that are providable to a user/developer in case of
> an error are little above zero.
> How can I continue?

This is pretty ugly indeed. Exceptions there are not objectified à la CL
(which was the first language to have this feature? Maybe CLU?)

> - the loop syntax is as bloated as Common Lisp's

Impossible! No language construct can be as bloated as CL LOOP. :)

>
> > > These assertions can be extracted from the source and give a nice
> > > specification. And indeed you can use Eiffel for specifying your problem.
>
> But then I'd rather use a logic language or a theorem prover.

This is a very good point indeed. Algorithm and Implementation
verification are things that are done better with some tools of the
Theorem Prover (Model Checker) class. This is particularly true when
you want to set up assertions regarding very complex interactions
among subsystems (distributed algorithms and real-time scheduling
problems come to mind).

The problem then becomes: when do I stop having a compiler to play
with and start having a Theorem Prover?

Certainly, things like type inference are very close to this grey area.

Erik Naggum

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
* Rainer Joswig

| How big is a working Franz Allegro lisp image?

on my system, the free system memory decreases by 740K when I start up
the second Allegro CL 5.0.1. the similar process with CLISP requires
1108K of fresh memory. it is very hard on my system to measure the exact
memory consumption of a process except for the fresh memory it grabs.

| Last time I looked, ACL generated really large code on RISC machines (has
| this changed?).

it's impossible to tell since you don't give any clue when that last time
was, what "really large code" means, or which RISC processor you're
talking about. I _could_ say "yes, it's much, much better now" and you
wouldn't know what I had answered, but anyone careless enough to believe
your question was meaningful would probably believe my answer, too. that
is to say, I don't believe people are actually interested in performance
information from others in a free forum -- even if people are honest,
they are way too sloppy to produce meaningful data to base business
decisions on, and anyone with an agenda will "win" by being selectively
dishonest, as much comparative "marketing" and campaigning is already.

| Reasons for a Core Lisps are:
|
| - small footprint is still an advantage on various devices
| (imaging placing a Lisp system in any arm of a robot)

as others have indicated, ROM is cheaper than RAM.

| - it's much easier to understand

this is actually not so. a Core Lisp would need more macrology and more
higher-level code to support good programming styles, and would suffer
like Scheme when small communities of people develop incompatible
"libraries" of extensions. agreeing on a large base serves a larger
community. we cannot afford to let a thousand flowers bloom when the
soil only supports a hundred -- we'll all vanish and the weed will take
over completely.

| - it's much easier to port

you don't port the Lisp code, you port the compiler and run-time system.
if you're even moderately smart, the run-time system is written largely
in Lisp and what you really need is a proto-Lisp, not a Core Lisp, but
you wouldn't want anyone to actually program in the proto-Lisp besides
the engineers who boot up a Common Lisp system.

| - it's much easier to experiment with extensions and changes

this is wrong -- tweaking something is easier than building from scratch.

| - faster startup time

this is wrong -- startup time is unaffected by total system size.

| - small means also that the kernel fits into current cache sizes
| (I guess the boost MCL has got from the PowerPC G3 processor
| is especially because it has a small footprint and the G3 has a
| nice second level cache architecture)

what use is this when you need space for all the user code that makes
code worth writing, again?

| - you might be able to experiment with different GC strategies

unless you by "Core Lisp" mean a proto-Lisp that lives below the real
Lisp, this does not seem to be a valid argument.

| - it might be a good teaching vehicle

we've been there before. some students prefer to know that as they learn
more and more, a lot of work has already been done for them, while other
students prefer to be able to learn it all in a short time and go on to
build stuff from scratch. e.g., you could easily teach medicine in a
year if you wanted to produce doctors faster, but they would still need
seven years to be trustable in any critical situation where they would be
called upon. society would have to respond to one-year doctors with a
lot more bureaucrazy and each doctor's skills would need to be charted
with much more detail than we do today. so you would get more doctors
through the system, at phenomenal increases in total system costs. the
same is true of any other complex system that is taught in stages.



| - it could be the base for an OS kernel

nothing prevents you from doing this already. you don't need somebody
else to define a Core Lisp for you first, in other words. just do it.

| - you should be able to run thousands of Lisp threads on a single machine
| (-> a web server, file server, ...)

this does not relate to system size, only _incremental_ process size. a
bigger base system will generally have smaller incremental sizes than a
small base system, where each thread needs to set up its own environment.

I wonder which agenda John Mallery actually has -- what he says doesn't
seem to be terribly consistent. neither do your arguments, Rainer. in
brief, it looks like you guys want to destabilize the agreement that has
produced what we have today, for no good reason except that insufficient
catering to individual egos has taken place up to this point.

haven't various people tried to produce a Core English already? how well
did those projects go? more importantly, why isn't there anything
available in Core English except the designer's teaching materials? I'd
say the evidence is clear: people don't want to be artificially limited
by some minimalist committee.

Core Lisp is a mistake, but it will be a serious drain on the resources
available in the Common Lisp community. language designer wannabes and
language redesigners should go elsewhere.

Lars Marius Garshol

Jul 29, 1999, 3:00:00 AM

* Rainer Joswig

|
| "* CL Portable Base Language: Develop a base language to which all
| extant Common Lisp implementations can be targeted. This language
| should include locatives and basic OS operations like processes,
| file I/O, networking.

Does anyone know what the chances are that this will actually happen?
Is it controversial in any way that this should happen? Is there
interest?

--Lars M.

Marco Antoniotti

Jul 29, 1999, 3:00:00 AM

jos...@lavielle.com (Rainer Joswig) writes:

> GCL and Clisp (as nice as they are) fail to support crucial
> things like an advanced GC, threads and performance.

I am not all that in favor of Threads in Lisp as long as there is no
standard set in stone. (Yes. CLIM docs have one. How widespread is it?)

Let me comment on the following.

> Reasons for a Core Lisps are:
>
> - small footprint is still an advantage on various devices
> (imagine placing a Lisp system in any arm of a robot)

Good reason

> - it's much easier to understand

Not necessarily

> - it's much easier to port

Not necessarily

> - it's much easier to experiment with extensions and changes

What "extensions"? I would disagree with this. Extensions should be
seen only in conformance with the ANSI standard.

...


>
> There were projects in this direction already.
> Minima was a mininal Lisp system from Symbolics
> (the opposite of Genera, so to speak). You could have
> embedded systems running Minima. Symbolics
> even once built special hardware for a customer:
> http://kogs-www.informatik.uni-hamburg.de/~moeller/symbolics-info/zora/zora.html
> See L, etc.
>
>
> or to quote "Modernizing Common Lisp: Recommended Extensions", John Mallery, ...:
>
> "The most important recommendation with the most leverage is the
> recommendation to develop a PORTABLE BASE LANGUAGE FOR COMMON LISP.
> This strategic move provides:
>
> * An opportunity to reengineer the core language;
> * A minimal base to port to new platforms;
> * Opportunities for new hardware and compiler efforts
> (base-to-platform & portable-to-base);
> * Increased sharing of higher-level code."
>
> ...
>

> "* CL Portable Base Language: Develop a base language to which
> all extant Common Lisp implementations can be targeted. This
> language should include locatives and basic OS operations like
> processes, file I/O, networking.

I beg to differ. All of these 'desiderata' are not part of a 'core'.
They are desirable features of a full-blown system programming
language. Many of these could be *added* to CL as it is, by providing
an appropriate set of packages (i.e. many of them).

> This base layer will
> increase portability of lisp applications and enable work to
> improve performance with specialized hardware or compilers.
> This layer should make efforts to support real-time and
> parallel dialects. Experimental, Very High Impact, Examples,
> Implementations, Hard. KR, RL, JM"

If we consider Java as a sort of hindsight on many CL design decisions
(as I think it is), we see that the story does not go this way, when
it comes to embedded systems (read: memory constrained - that's the name of
the game).

The latest "small" JVM from Sun and Motorola (the KVM) runs in about
80k with a given set of libraries. It runs on the Palm Pilot. But
its main characteristic is that it *drops* many of the features of the
JDK that - by similitude - according to Mallery should go in the
'core' CL. I do not know what kind of GC or threading they provide in
the KVM, but certainly they sacrificed portability of code. An
application for the KVM/Pilot will most likely not run under a generic
JVM higher in Sun's hierarchy of Java architectures.

So the story is very simple. You need to *drop* parts of the
language. Not requiring *extra* things like networking.

Apart from this there are trends in embedded systems which seem
counterintuitive to the software engineer. An embedded device is never
(at least until now) built according to *what it should do*. The
cycle is more like:

Hardware engineer along with manager:
"Let's put 16 bytes and a third of memory on the chip, which will cost 4
USD, now let's cram the very complex RANDOM number generator on it.
If we don't do it this way then somebody else will and they will sell
their system to the system integrator"

Software engineer:
"But for .25 USD more we can have 472 bytes and four fifths of memory
and then I will be able to add PLUS to the system."

Hardware engineer along with manager:
"Are you mad?"

Of course, if you then count the number of chips in a car you'll be
astounded (upper tier BMWs have about 20). But the realization that
having more memory and less chips may be more cost effective in terms
of 'time to market' and reusability is not there yet. After all,
capitalism is all about quarterly reports. As Quark said in Rule of
Acquisition #X, "Long term benefits are always less important than
short term profits". :)

BTW, these are personal opinions which do not reflect those of my
colleagues and bosses :)

So, the story is the following. If you want to have a (Common) Lisp
running on an embedded system today, you need to shoot for a <100K
"processor" which should include some form of GC. Anything different
would not work. The core language should run with these constraints in
mind and be specified in such a way as to have the full CL runnable on
it, once the memory constraints are lifted.

(end-of-ranting)

Fernando Mato Mira

Jul 29, 1999, 3:00:00 AM
Tim Bradshaw wrote:

> Erik Winkels <aer...@xs4all.nl> writes:
>
> > True, no doubt, for the small devices of the moment. However in
> > the next few years there will probably be a new generation of small
> > devices, wrist pilots perhaps, that will start out with little memory
> > again.
>
> Well, I'll happily admit to not knowing too much about the
> technologies, but it seems to me that, although these devices may have
> only tiny RAM, they will probably be able to have plenty of ROM (or
> flashable ROM preferably). So, so long as most of your system can sit
> in ROM (and I can't see a reason why most of a CL system should not be
> in ROM), you're probably OK. I think that ROM is very low power and
> pretty physically tiny. A more likely limit is perhaps address space
> in the CPU if they have less-than-32 bit cpus, but I think most things
> are now 32?

Reducing chip count is a big issue. How many years still until you can have
DSP, analog wireless and audio frontend, display controller, and say,
4Mwords of PROM and 4Mwords of RAM in a single chip?

What's the minimum you need for a full CL?

Chuck Fry

Jul 29, 1999, 3:00:00 AM
In article <31422405...@naggum.no>, Erik Naggum <er...@naggum.no> wrote:
>* Erik Winkels <aer...@xs4all.nl>

>| True, no doubt, for the small devices of the moment. However in the next
>| few years there will probably be a new generation of small devices, wrist
>| pilots perhaps, that will start out with little memory again.
>
> isn't it odd how designers keep making that mistake?

As Tim Bradshaw pointed out, this is rarely a mistake. There may be
very valid price, power, and size constraints that restrict the memory
size of a portable device.

In space flight applications, for instance, memory that can withstand
the environmental radiation is relatively expensive, slow, power-hungry,
and not very dense, and requires expensive and power-hungry error
correction hardware. And the more mass you fly, the bigger the rocket
needed to push it into space. Given these constraints, it was a miracle
we got 128 MB on the New Millennium Deep Space One (recently renamed
Space Technologies One). I don't think we can count on that kind of
memory being available on future flights.

> "640K should be enough for everyone" is probably the most brilliant thing
> the man has said (at least if we judge by his book "The Road Ahead"), but
> there's something in the engineering culture that just doesn't quite get
> this idea that people want to be relieved of remembering accurately, and

> there's no limit to what people don't want to remember.

Aside from the constraints mentioned above, there is the problem of
keying that memory so that facts stored in it can be recalled quickly
and easily, how you deal with records that share the same key, etc.
It's more than just a resource issue, it's a user interface issue.
(This is most true in the context of small portable devices with minimal
input hardware, e.g. phones and palmtop assistants, less so in desktop
computer software.)

> whenever I call
> directory assistance, for instance, chances are that I will call again
> the next time I need the same number unless I write it down, but all of
> this stuff is already in electronic form, so why can't the stupid
> telephone just record it? sigh. some technologies are so lame.

So why don't you go design such an intelligent telephone? No one is
stopping you.

-- Chuck, again speaking for himself
--
Chuck Fry -- Jack of all trades, master of none
chu...@chucko.com (text only please) chuc...@home.com (MIME enabled)
Lisp bigot, mountain biker, car nut, sometime guitarist and photographer
The addresses above are real. All spammers will be reported to their ISPs.

Chuck Fry

Jul 29, 1999, 3:00:00 AM
In article <nkjhfmn...@tfeb.org>, Tim Bradshaw <t...@tfeb.org> wrote:
>Well, I'll happily admit to not knowing too much about the
>technologies, but it seems to me that, although these devices may have
>only tiny RAM, they will probably be able to have plenty of ROM (or
>flashable ROM preferably).

See my other article in this thread. Mask ROM (*really* read-only
memory) is cheapest in large quantity applications, always has been, and
always will be. Flash ROM adds cost in several ways: first, it is
inherently more expensive; second, a programming interface (meaning more
components) must be provided, increasing component count and space
consumption; third, the flash ROM and its programming interface draw
more power.

>Anyway, I don't really see the tiny-device argument being a good
>reason for a CL subset. There may be other good reasons.

Engineering is the art of doing for a dime what any damned fool can do
for a dollar. IMHO there will *always* be good reasons for a reduced CL
subset.

I don't know why, but the comment seen in some Lisp Machine source
files:
;;; This file is in the cold load.
always gives me a warm and fuzzy feeling. :-)

-- Chuck

Chuck Fry

Jul 29, 1999, 3:00:00 AM
In article <wk7lnjx...@ifi.uio.no>,
Lars Marius Garshol <lar...@ifi.uio.no> wrote:
>
>* Rainer Joswig

>|
>| "* CL Portable Base Language: Develop a base language to which all
>| extant Common Lisp implementations can be targeted. This language
>| should include locatives and basic OS operations like processes,
>| file I/O, networking.
>
>Does anyone know what the chances are that this will actually happen?

I have no idea.

>Is it controversial in any way that this should happen? Is there
>interest?

Yes, it's controversial -- look at the discussion in this thread
already.

Is there interest? Yes, I'm interested! I think there are many good
reasons for a standardized core Lisp, which I don't have time to
enumerate right now.

Raymond Toy

Jul 29, 1999, 3:00:00 AM
>>>>> "Chuck" == Chuck Fry <chu...@best.com> writes:

Chuck> In article <31422405...@naggum.no>, Erik Naggum <er...@naggum.no> wrote:
>> whenever I call
>> directory assistance, for instance, chances are that I will call again
>> the next time I need the same number unless I write it down, but all of
>> this stuff is already in electronic form, so why can't the stupid
>> telephone just record it? sigh. some technologies are so lame.

Chuck> So why don't you go design such an intelligent telephone? No one is
Chuck> stopping you.

My Sony CDMA phone keeps a list of the last few numbers dialed. I'm
sure there's a way to add one of those numbers to your personal phone
list.

Ray

Fernando Mato Mira

Jul 29, 1999, 3:00:00 AM
Fernando Mato Mira wrote:

> Reducing chip count is a big issue. How many years still until you can have
> DSP, analog wireless and audio frontend, display controller, and say,
> 4Mwords of PROM and 4Mwords of RAM in a single chip?

Here's a more interesting question: would you need full CL to program
microrobots?
(Low-level motion control might be hardware-based).

Or maybe having a dual-purpose CL-1/CL-7 spec is a better idea, instead of a
Core Lisp? (you know what I mean).


Lars Marius Garshol

Jul 29, 1999, 3:00:00 AM

* Lars Marius Garshol
|
| [CL Portable Base Language]

| Is it controversial in any way that this should happen? Is there
| interest?

* Chuck Fry


|
| Yes, it's controversial -- look at the discussion in this thread
| already.

That discussion seems to be about Core Lisp, not CL Portable Base
Language. (OK, Marco mentioned it, but nobody else seems to have.)



| Is there interest? Yes, I'm interested! I think there are many
| good reasons for a standardized core Lisp, which I don't have time
| to enumerate right now.

I'm not sure if I care about core Lisp, but I do care about the PBL.

--Lars M.

Tim Bradshaw

Jul 29, 1999, 3:00:00 AM
jos...@lavielle.com (Rainer Joswig) writes:

> > -- and one
> > apparently very portable commercial offering -- Franz Allegro.
>
> "Portable" is relative - portable across Unix systems perhaps?
>

Unix and Windows 9x and NT. I get the impression that they can port
to anything pretty much including embedded-type systems, but Duane or
someone should speak to that.

--tim

Tim Bradshaw

Jul 29, 1999, 3:00:00 AM
chu...@best.com (Chuck Fry) writes:


> In space flight applications, for instance, memory that can withstand
> the environmental radiation is relatively expensive, slow, power-hungry,
> and not very dense, and requires expensive and power-hungry error
> correction hardware. And the more mass you fly, the bigger the rocket
> needed to push it into space. Given these constraints, it was a miracle
> we got 128 MB on the New Millenium Deep Space One (recently renamed
> Space Technologies One). I don't think we can count on that kind of
> memory being available on future flights.

I'd be interested in knowing if these constraints apply to ROM -- or
was most of that 128Mb ROM anyway? I kind of assume that the same
things that make it hard (impossible...) for programs to write to ROM
also make it hard for radiation to write to it, so it should be a good
bet for bad environments. OTOH it is perhaps not a good bet because
you don't always have the opportunity to upgrade the thing when you
find a bug in the ROM and it's on Mars or something.

Presumably you could do some clever thing of having a whole load of
(hardened) ROM in there and shadowing bits of it (possibly at language
level) with patches in the much scarcer RAM. Lisp would be a really
cool language for experimenting with that as it's all about
redefinability in some sense.
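That patching idea can be sketched in a few lines of Lisp (a toy illustration with made-up names, nothing from any real flight system): route calls through a RAM-resident patch table that shadows the "ROM" definition without touching it.

```lisp
;; "ROM" definitions are plain functions; calls go through a patch
;; table, so a RAM patch can shadow a ROM entry without rewriting it.
(defvar *patches* (make-hash-table))     ; lives in scarce RAM

(defun rom-call (name &rest args)
  (apply (or (gethash name *patches*)    ; patched in RAM?
             (symbol-function name))     ; fall back to the "ROM" copy
         args))

(defun scale (x) (* x 10))               ; imagine this one is in ROM

(defun patch (name fn)
  (setf (gethash name *patches*) fn))

;; (rom-call 'scale 4)                   ; => 40, the ROM version
;; (patch 'scale (lambda (x) (* x 12)))
;; (rom-call 'scale 4)                   ; => 48, shadowed by the patch
```

The indirection costs one hash lookup per call, which is the usual price of being able to fix a bug without reflashing anything.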

Do people play with having lots of cheapo errory memory and then
really, really good error correction, or is that too expensive in some
way?

--tim (Whose most recent experience of anything like this was
observing people react in horror when it became apparent that
the 1k ROM for an 1802-based system was just going to have to
change to 2k with resultant nightmare of finding a 2k
suitably-hardened part.)

Friedrich Dominicus

Jul 29, 1999, 3:00:00 AM
Rainer Joswig wrote:
>
> In article <lwlnbzl...@copernico.parades.rm.cnr.it>, Marco Antoniotti <mar...@copernico.parades.rm.cnr.it> wrote:
>
> > Friedrich Dominicus <Friedrich...@inka.de> writes:
> >
> > > >
> > > > What exactly is "Design by Contract"?
> > >
> > > Let me give an example. The all so well known factorial. Written in an
> > > imperative style to show some extra assertions
> > > factorial(n: INTEGER): INTEGER is
> > > require
> > > n_positiv: n >= 0
> > > local
> > > i : INTEGER
> > > do
> > > Result :=1
> > > -- holds for n = 0
> > > from
> > > i:= n
> > > invariant
> > > i_positive_and_less_than_n: i >=0 and then i <= n;
> > > variant
> > > i_decreases: i
> > > until
> > > i <=1
> > > loop
> > > Result := Result * i;
> > > i := i-1
> > > end; -- from
> > >
> > > end; -- faculty
>
> I find this example particular ugly.
>
> - trivialities are expanded

It's an example. You may not find it useful in this context but let the
package grow and you'll be very happy.

>
> - the compiler does not care a thing about the assertions at
> compile time.
> would the compiler help you to find the error of calling (factorial -3) ?
> You **could** use the subrange (integer 0) as a type for n,
> but then it is already required.

Yes you're right.

>
> - it fails to capture the pattern of iteration or recursion in any,
> instead you are writing invariants for the
> same recursive/iterative patterns in different algorithms
> over and over

That's true.


>
> - if you run it with assertions on, the code will be dead slow,
> so you turn off assertions

No not necessarily.


>
> - it fails to integrate with the error system, the
> hints that are providable to a user/developer in case of
> an error are little above zero.
> How can I continue?

It integrates very nicely with the EXCEPTION handling. You can easily
check if some piece are violated. But you insist that trivialities are
expanded so I won't do that.

>
> - the loop syntax is as bloated as Common Lisp's

Why is it bloated? It's as good or bad as others.


>
> > > These assertions can be extracted from the source and give a nice
> > > specification. And indeed you can use Eiffel for specifying your problem.
>
> But then I'd rather use a logic language or a theorem prover.

That's up to you.

Regards
Friedrich

Robert Monfera

Jul 29, 1999, 3:00:00 AM
Tim Bradshaw wrote:
...

> Presumably you could do some clever thing of having a whole load of
> (hardened) ROM in there and shadowing bits of it (possibly at language
> level) with patches in the much scarcer RAM. Lisp would be a really
> cool language for experimenting with that as it's all about
> redefinability in some sense.

Yes, this mechanism was implemented in the Apple Newton - the system is in
the ROM, and if a slot of a pre-delivered object changes, the object in
the ROM is referenced and only the amended slot is stored in the RAM. This
is greatly helped by the fact that the OO system of NewtonScript is
prototype-based, but this should also be straightforward using the MOP. It
is amazing how much functionality could fit in so little RAM.
-Robert
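Roughly, that lookup could look like this in Lisp (a simplified sketch with invented names; NewtonScript's real mechanism is richer): a RAM object stores only its amended slots and reads everything else through its ROM prototype.

```lisp
;; Prototype-style slot lookup: RAM objects store only amended
;; slots; unchanged slots are read from the ROM prototype.
(defstruct obj slots proto)         ; slots: alist, proto: obj or nil

(defun slot (o name)
  (let ((hit (assoc name (obj-slots o))))
    (cond (hit (cdr hit))
          ((obj-proto o) (slot (obj-proto o) name))
          (t (error "No slot ~S" name)))))

(defun set-slot (o name value)      ; writes always land in RAM
  (push (cons name value) (obj-slots o)))

;; (defvar *rom* (make-obj :slots '((title . "Notes") (size . 12))))
;; (defvar *ram* (make-obj :slots nil :proto *rom*))
;; (slot *ram* 'size)               ; => 12, read through to "ROM"
;; (set-slot *ram* 'size 14)
;; (slot *ram* 'size)               ; => 14, the amended slot in RAM
```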

Pierre R. Mai

Jul 29, 1999, 3:00:00 AM
Friedrich Dominicus <Friedrich...@inka.de> writes:

> > - it fails to integrate with the error system, the
> > hints that are providable to a user/developer in case of
> > an error are little above zero.
> > How can I continue?
> It integrates very nicely with the EXCEPTION handling. You can easily
> check if some piece are violated. But you insist that trivialities are
> expanded so I won't do that.

The problem I found with Eiffel is its extremely limited notion of
EXCEPTION handling. In effect, I can find out that "some piece" was
violated somewhere, but I will find it extremely difficult to find
out what exactly went wrong, where things turned sour, and what
possibilities of handling this exist. Although most other languages
suffer from weaknesses in their exception handling facilities, I
found Eiffel's capabilities in this regard among the weakest. Given
Eiffel's exception handling, the most you can do most of the time is
to try to shutdown safely (and even this can get complicated, if you
don't really know what happened). That is IMHO not enough in today's
complex environments.

To be fair to Eiffel though, I've heard that work is underway to
improve Eiffel's exception handling capabilities. And most other
languages don't get this stuff quite right either. I'd hope that more
would read Kent M. Pitman's paper on "Exceptional Situations in Lisp",
(http://world.std.com/~pitman/Papers/Exceptional-Situations-1990.html).

In effect, CL's condition system is the first exception handling
mechanism I've come across, that really gives me confidence in
handling exceptional situations in a way that allows me to be
confident of successful operation in the face of errors. With most
other languages and error handling mechanisms, I've ended up dumping
core on many occasions.
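A minimal sketch of what that looks like in practice (an invented example, using only standard CL operators): the handler picks a restart established lower down, so the computation is repaired and continues instead of unwinding.

```lisp
;; Sketch: a parser offers restarts at the point of error; a handler
;; higher up chooses one, so processing continues instead of aborting.
(define-condition malformed-entry (error)
  ((line :initarg :line :reader malformed-line)))

(defun parse-entry (line)
  (or (ignore-errors (parse-integer line))
      (restart-case (error 'malformed-entry :line line)
        (use-value (v) v)        ; caller may supply a replacement
        (skip-entry () nil))))   ; or drop the entry entirely

(defun parse-all (lines)
  (handler-bind ((malformed-entry
                   (lambda (c)
                     (declare (ignore c))
                     (invoke-restart 'skip-entry))))
    (remove nil (mapcar #'parse-entry lines))))

;; (parse-all '("1" "oops" "3")) => (1 3)
```

The decision of *how* to recover lives in the caller, while the *means* of recovery live at the error site, which is the separation most other exception systems lack.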

> > - the loop syntax is as bloated as Common Lisp's
> Why is it bloated? It's as good or bad as others.

I don't find Eiffel's loop bloated, either. I do find Eiffel's
insistence on only one looping construct a bloody nuisance,
though ;)

But enough of Eiffel in c.l.l, I say, let's hear more about Lisp...

Regs, Pierre.

--
Pierre Mai <pm...@acm.org> PGP and GPG keys at your nearest Keyserver
"One smaller motivation which, in part, stems from altruism is Microsoft-
bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]

Christopher R. Barry

Jul 29, 1999, 3:00:00 AM
pm...@acm.org (Pierre R. Mai) writes:

> In effect, CL's condition system is the first exception handling
> mechanism I've come across, that really gives me confidence in
> handling exceptional situations in a way that allows me to be
> confident of successful operation in the face of errors. With most
> other languages and error handling mechanisms, I've ended up dumping
> core on many ocassions.

In my opinion, Java also has really good, robust exception
handling. It's hard for me to say whether CL's or Java's is better
since I haven't yet been in a situation where I've really felt the
limitations of either. (Both have been very satisfactory, in other
words.)

Christopher

Lieven Marchand

Jul 29, 1999, 3:00:00 AM
Kent M Pitman <pit...@world.std.com> writes:

> But even if you take any one of those languages and ignore the
> inter-language fighting, you can see that strong/regular language
> features fall in favor of weaker ones.

I'm not a professional linguist but I strongly doubt you can
generalize that. When you take the history of a language family over a
few thousand years you can see whole shifts from analytical to
inflected and back again. It's true that for Indo-European languages
the shift has been to losing the 8 original cases (although Old
Lithuanian even picked up some additional ones from the Finno-Ugric
group) in favor of more analytical features but in general natural
languages are also ecologies in which tradeoffs are made. The
conjugation of verbs in French is very complex which has led to the
disappearance of a lot of tenses in normal speech and a plethora of
tenses that are nowadays regarded as literary or affected. On the
other hand, Turkish has even more tenses but its verbs are very
regular.

--
Lieven Marchand <m...@bewoner.dma.be>
If there are aliens, they play Go. -- Lasker

Lieven Marchand

Jul 29, 1999, 3:00:00 AM
chu...@best.com (Chuck Fry) writes:

> I don't know why, but the comment seen in some Lisp Machine source
> files:
> ;;; This file is in the cold load.
> always gives me a warm and fuzzy feeling. :-)

Could you explain what this means for those who haven't seen a Lisp
Machine?

Rainer Joswig

Jul 29, 1999, 3:00:00 AM

> > - if you run it with assertions on, the code will be dead slow,
> > so you turn off assertions
>
> No not necessarily.

But "probably"?

> > - it fails to integrate with the error system, the
> > hints that are providable to a user/developer in case of
> > an error are little above zero.
> > How can I continue?
> It integrates very nicely with the EXCEPTION handling. You can easily
> check if some piece are violated. But you insist that trivialities are
> expanded so I won't do that.

The thing is, that in Common Lisp I declare the places
of restart (not everything is a useful restart),
I declare the handlers, and when the error happens
an error object will be generated. This error
object then should contain the necessary information
that is needed to describe the situation.
Forms like ASSERT and CHECK-TYPE give you an
indication of what went wrong and what you
can do about it to get out of that problem.
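For instance (a hypothetical sketch, and the factorial is deliberately the same example as Friedrich's): CHECK-TYPE signals a correctable TYPE-ERROR with a STORE-VALUE restart, so a handler, or the user at the debugger, can fix the offending place and let execution continue.

```lisp
(defun factorial (n)
  ;; Signals a correctable TYPE-ERROR; the STORE-VALUE restart
  ;; lets a handler (or the user in the debugger) replace N.
  (check-type n (integer 0) "a non-negative integer")
  (if (zerop n) 1 (* n (factorial (1- n)))))

;; A handler can repair the argument and let execution continue:
(handler-bind ((type-error
                 (lambda (c)
                   (declare (ignore c))
                   (invoke-restart 'store-value 0))))
  (factorial -3))   ; N is replaced by 0, result is 1
```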

> > - the loop syntax is as bloated as Common Lisp's
> Why is it bloated? It's as good or bad as others.

It fails to support orthogonal functionality in different
syntactic constructs. Instead everything is one big
syntactic construct - something that is difficult to analyze
(in Common Lisp, LOOP will be expanded to smaller
parts, but it may be really difficult to reconstruct what
those parts' purpose was when you are looking at them
in a debugger) and understand.

> > > > These assertions can be extracted from the source and give a nice
> > > > specification. And indeed you can use Eiffel for specifying your problem.
> >
> > But then I'd rather use a logic language or a theorem prover.
> That's up to you.

See for example:

Applicative Common Lisp:
http://www.cs.utexas.edu/users/moore/acl2/v2-3/acl2-doc.html
ACL2 is both a programming language in which you can model computer
systems and a tool to help you prove properties of those models.

Rainer Joswig

Jul 29, 1999, 3:00:00 AM

Hmm, it's surely nice that they now have it "portable".
But it took them quite some time to do so.
Earlier ACL versions were different between Windows and Unix.
Having ported it to Unix and Windows doesn't make it
"very portable", IMHO.

Stig E. Sandø

Jul 29, 1999, 3:00:00 AM
>| - small footprint is still an advantage on various devices
>| (imaging placing a Lisp system in any arm of a robot)
>
> as others have indicated, ROM is cheaper than RAM.

This sort of leads to something I pondered upon a few months back when I
played with genetic programming and some robot simulations. Inspired by
the toy robots made by LEGO I hoped there existed a somewhat affordable
(toy) robot option which could run a Lisp system. Unfortunately the
"marketed" robots were either too low on ROM and RAM and using some form
of BASIC or some (even more) crippled version of C which would take too
long to find weird things in when porting. It would have been fun to
have some toy robots try out some of the more successful evolved
code. Anyone know of an easy way to get hold of stuff/code for such a
robot?

------------------------------------------------------------------
Stig Erik Sandoe Institute of Informatics, University of Bergen
st...@ii.uib.no http://www.ii.uib.no/~stig/

Rainer Joswig

Jul 29, 1999, 3:00:00 AM
In article <31422483...@naggum.no>, Erik Naggum <er...@naggum.no> wrote:

> * Rainer Joswig
> | How big is a working Franz Allegro lisp image?
>
> on my system, the free system memory decreases by 740K when I start up
> the second Allegro CL 5.0.1. the similar process with CLISP requires
> 1108K of fresh memory. it is very hard on my system to measure the exact
> memory consumption of a process except for the fresh memory it grabs.

You have not answered my question.

> | Reasons for a Core Lisps are:
> |
> | - small footprint is still an advantage on various devices
> | (imagine placing a Lisp system in any arm of a robot)
>
> as others have indicated, ROM is cheaper than RAM.

Unfortunately it's easy for me to put my software in RAM,
not in ROM.

> | - it's much easier to understand
>
> this is actually not so. a Core Lisp would need more macrology and more
> higher-level code to support good programming styles,

A user would use the usual CL on top of that.

> | - it's much easier to port
>
> you don't port the Lisp code, you port the compiler and run-time system.
> if you're even moderately smart, the run-time system is written largely
> in Lisp and what you really need is a proto-Lisp, not a Core Lisp, but
> you wouldn't want anyone to actually program in the proto-Lisp besides
> the engineers who boot up a Common Lisp system.

True, but this doesn't contradict what I'm saying.

> | - it's much easier to experiment with extensions and changes
>
> this is wrong -- tweaking something is easier than building from scratch.

Tweaking something small should be easier than tweaking something
large.

> | - faster startup time
>
> this is wrong -- startup time is unaffected by total system size.

This is wrong. Sure, startup time is affected
by total system size. You need to be careful about that
at initialization time and load time. Is the code still in cache, etc.
Many systems now have very fast cache systems (for example
the backside cache of the G3), taking advantage of that
is not unreasonable. You might have to deal with
non-locality of code and data, ...



> | - small means also that the kernel fits into current cache sizes
> | (I guess the boost MCL has got from the PowerPC G3 processor
> | is especially because it has a small footprint and the G3 has a
> | nice second level cache architecture)
>
> what use is this when you need space for all the user code that makes
> code worth writing, again?

The use of that is that a large part of your program might
use routines from your kernel. Additionally runtime services
like GC would surely benefit if they could stay in cache.

> | - it might be a good teaching vehicle
>
> we've been there before.

Still you seem to respond only to one possible dimension of it.

> | - you should be able to run thousands of Lisp threads on a single machine
> | (-> a web server, file server, ...)
>
> this does not relate to system size, only _incremental_ process size.

Sure, but it also relates to things like startup time, scheduling
overhead, time to respond to requests, etc.

> a
> bigger base system will generally have smaller incremental sizes than a
> small base system, where each thread needs to set up its own environment.

yes, still you get stuff like increased non-locality. The
Lisp Machine, for example, introduced things like
areas and reordering of objects in (incrementally) saved images.

> I wonder which agenda John Mallery actually has -- what he says doesn't
> seem to be terribly consistent. neither do your arguments, Rainer. in
> brief, it looks like you guys want to destabilize the agreement that has
> produced what we have today, for no good reason except that insufficient
> catering to individual egos have taken place up to this point.

Sure, go on, Erik. Make a fool of yourself by blaming other people.
It's a well-known tactic of yours to mix in personal attacks.

> haven't various people tried to produce a Core English already?

What has this to do with the topic I was discussing?

> Core Lisp is a mistake, but it will be a serious drain on the resources
> available in the Common Lisp community. language designer wannabes and
> language redesigners should go elsewhere.

Erik, you finally made it into my kill file. Not that I expect you to
care, but I don't feel that urgent a need to read your pointless
rantings anymore. Better try to impress beginners with your excursions
in Lisp and how the world is according to you.

Chuck Fry

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
In article <nkjaesf...@tfeb.org>, Tim Bradshaw <t...@tfeb.org> wrote:
>chu...@best.com (Chuck Fry) writes:
>> In space flight applications, for instance, memory that can withstand
>> the environmental radiation is relatively expensive, slow, power-hungry,
>> and not very dense, and requires expensive and power-hungry error
>> correction hardware. And the more mass you fly, the bigger the rocket
>> needed to push it into space. Given these constraints, it was a miracle
we got 128 MB on the New Millennium Deep Space One (recently renamed
>> Space Technologies One). I don't think we can count on that kind of
>> memory being available on future flights.
>
>I'd be interested in knowing if these constraints apply to ROM -- or
>was most of that 128Mb ROM anyway? I kind of assume that the same
>things that make it hard (impossible...) for programs to write to ROM
>also make it hard for radiation to write to it, so it should be a good
>bet for bad environments. OTOH it is perhaps not a good bet because
>you don't always have the opportunity to upgrade the thing when you
>find a bug in the ROM and it's on Mars or something.

Programs were stored in EEROM ("flash" memory), in a "solid state disk"
configuration. It's not economically viable to use mask ROM for a
one-of-a-kind application, and the team wanted to be able to update the
software in flight. I'm not clear on how radiation-resistant flash
memory is; I think it's better than dynamic RAM, but I don't know by how
much.

There used to be a ROM technology that used fuses, which clearly would
have been more robust against radiation, but it was not reprogrammable,
and extremely power-hungry.

You can't use real disk in flight, because it tends to spin the
spacecraft.

>Do people play with having lots of cheapo errory memory and then
>really, really good error correction, or is that too expensive in some
>way?

It adds complexity and cost, and slows access time. Few terrestrial PCs
even have *parity* on memory any more.

-- Chuck, again speaking only for himself

William Tanksley

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
On Thu, 29 Jul 1999 22:13:50 +0200, Rainer Joswig wrote:
>In article <37A08867...@inka.de>, Friedrich...@inka.de wrote:

>> > - if you run it with assertions on, the code will be dead slow,
>> > so you turn off assertions

>> No, not necessarily.

>But "probably"?

Not after they've caught even a single bug for you. Running with
assertions is WAY faster than singlestepping through code.
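
[For what it's worth, Common Lisp offers both ends of this trade-off in the
standard language. A minimal sketch; whether a compiler actually elides such
checks under low safety settings is implementation-dependent, not guaranteed:]

```lisp
;;; ASSERT and CHECK-TYPE are cheap enough to leave enabled while
;;; debugging; under (optimize (safety 0)) an implementation MAY elide
;;; them, but the standard does not require it.
(defun safe-div (x y)
  (check-type x number)
  (assert (not (zerop y)) (y) "Divisor ~S must be non-zero." y)
  (/ x y))
```

[(safe-div 10 4) returns 5/2, while (safe-div 10 0) signals a correctable
error instead of a raw division trap -- usually far cheaper than
single-stepping to find the bad call.]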

>It fails to support orthogonal functionality in different
>syntactic constructs. Instead everything is one big
>syntactic construct - something that is difficult to analyze
>(in Common Lisp, LOOP will be expanded into smaller
>parts, but it may be really difficult to reconstruct what
>those parts' purpose was when you are looking at them
>in a debugger) and understand.

Eiffel's loop is worse -- it's very bulky, and it really only supports one
type of loop. It does, however, support loop invariants, which fits with
the purpose of the language. You can't have everything :).
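
[Rainer's point about LOOP expansions is easy to see for yourself. A small
sketch; the exact expansion is implementation-specific:]

```lisp
;;; MACROEXPAND-1 on a LOOP form shows why the expansion is hard to map
;;; back to its source: implementations typically produce a TAGBODY full
;;; of gensym'd variables and GO labels.
(pprint (macroexpand-1
         '(loop for i from 1 to 3
                collect (* i i))))
;; Output varies by implementation; the gensym'd temporaries and jump
;; labels are what the debugger shows you instead of FOR and COLLECT.
```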

>> > > > These assertions can be extracted from the source and give a nice
>> > > > specification. And indeed you can use Eiffel for specifying your problem.

>> > But then I'd rather use a logic language or a theorem prover.
>> That's up to you.

You're right there -- a logic language would be a FAR better way to
express invariants, postconditions, and preconditions. Eiffel was merely
using a convenient form; Meyer himself said as much.

--
-William "Billy" Tanksley

Chuck Fry

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
In article <m31zdrg...@localhost.localdomain>,

Lieven Marchand <m...@bewoner.dma.be> wrote:
>chu...@best.com (Chuck Fry) writes:
>
>> I don't know why, but the comment seen in some Lisp Machine source
>> files:
>> ;;; This file is in the cold load.
>> always gives me a warm and fuzzy feeling. :-)
>
>Could you explain what this means for those who haven't seen a Lisp
>Machine?

At Symbolics, at least, the "cold load" was the core Lisp system, the
kernel if you will. The cold load was the first part of the Lisp system
to be loaded at startup, and lacked much of the sophistication of the
full running system (e.g. virtual memory, window system, most
object-oriented facilities, etc.). Yet the cold load had a debugger,
and could type to the screen through a basic character-at-a-time
interface.

-- Chuck

Martin Rodgers

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
In article <joswig-2907...@194.163.195.67>,
jos...@lavielle.com says...

> Erik, you finally made it into my kill file. Not that I expect you to
> care, but I don't feel that urgent a need to read your pointless
> rantings anymore. Better try to impress beginners with your excursions
> in Lisp and how the world is according to you.

Good grief, not you too. ;)

Anyway, I was playing with an "ultra core Lisp" about 5 years ago,
using a crude compiler for an unbelievably minimal subset of Scheme
that produced even more crude C code and used a simple compacting GC.
I got some curiously satisfying results on a 20 MHz 386.

So this thread is fascinating me.
--
Please note: my email address is munged; You can never browse enough
"There are no limits." -- tagline for Hellraiser

Chuck Fry

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
In article <31422483...@naggum.no>, Erik Naggum <er...@naggum.no> wrote:
>* Rainer Joswig
>| Reasons for a Core Lisps are:
>|
>| - small footprint is still an advantage on various devices
>| (imaging placing a Lisp system in any arm of a robot)
>
> as others have indicated, ROM is cheaper than RAM.

But ROM is not free in any sense. And some applications will always be
cost-, size-, or power-sensitive.

>| - it's much easier to understand
>
> this is actually not so.

This depends very much on the language specification, and upon the
implementation.

>| - it's much easier to port
>
> you don't port the Lisp code, you port the compiler and run-time system.

I agree with Erik so far...

> if you're even moderately smart, the run-time system is written largely
> in Lisp and what you really need is a proto-Lisp, not a Core Lisp, but
> you wouldn't want anyone to actually program in the proto-Lisp besides
> the engineers who boot up a Common Lisp system.

... and this is where I disagree. What's the difference between
"proto-Lisp" and "Core Lisp"? No one has so much as posted a
hypothetical description of a Core Lisp, so where is your basis for
comparison?

>| - it's much easier to experiment with extensions and changes
>
> this is wrong -- tweaking something is easier than building from scratch.

I don't see how Rainer's comment precludes tweaking.

>| - faster startup time
>
> this is wrong -- startup time is unaffected by total system size.

Again, since no spec has been made available for comparison, I don't see
how one can draw a conclusion either way.

>| - small means also that the kernel fits into current cache sizes
>| (I guess the boost MCL has got from the PowerPC G3 processor
>| is especially because it has a small footprint and the G3 has a
>| nice second level cache architecture)
>
> what use is this when you need space for all the user code that makes
> code worth writing, again?

In general I agree with Erik, though I think the hope is that Core Lisp
+ extensions + application <= cache size.

>| - you might be able to experiment with different GC strategies
>
> unless you by "Core Lisp" mean a proto-Lisp that lives below the real
> Lisp, this does not seem to be a valid argument.

Again, since no one has spec'd a Core Lisp, this can't be answered.

>| - it might be a good teaching vehicle
>

> we've been there before. [...]

I'm not going to comment on this.

>| - it could be the base for an OS kernel
>
> nothing prevents you from doing this already. you don't need somebody
> else to define a Core Lisp for you first, in other words. just do it.

I think you'd have to design a Core Lisp very carefully to be able to
build an OS kernel in it. Most commercial CLs, as they exist today,
wouldn't work. (One notable exception being Symbolics Common Lisp.)

>| - you should be able to run thousands of Lisp threads on a single machine
>| (-> a web server, file server, ...)
>

> this does not relate to system size, only _incremental_ process size. a


> bigger base system will generally have smaller incremental sizes than a
> small base system, where each thread needs to set up its own environment.

Actually this is more a function of the multitasking strategy and
implementation, and as such is not really answerable in the absence of a
spec for same.

> haven't various people tried to produce a Core English already?

What does that have to do with programming languages?

> Core Lisp is a mistake, but it will be a serious drain on the resources
> available in the Common Lisp community. language designer wannabes and
> language redesigners should go elsewhere.

I strongly disagree. To me, Core Lisp is the heart of a layered Lisp
system that can be bulked up to full ANSI CL (or beyond) or pared down
to just the essentials, as the application requires. Few applications
use all the features provided by ANSI CL, so why burden all applications
with them?

Users have been screaming for a core + libraries architecture for years.
It's time we gave it to them.

-- Chuck, definitely not speaking for any Lisp vendors!

Tim Bradshaw

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
* Rainer Joswig wrote:

> - you could run it on bare PCs
> - you could run it in switches
> - you could run it as networking nodes running web servers, ...
> - you could use it on board of a space ship
> - you could put it on robots
> - you could put it in a game console
> - ...

But full CL is already small enough to do at least the space ship
part, and I'd guess all the others too (game consoles, for instance
are really big machines now). CL is really a small system these days.

There may be other reasons for a small core, but these are probably
not very good ones.

--tim

Chuck Fry

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
In article <joswig-2907...@194.163.195.67>,
Rainer Joswig <jos...@lavielle.com> wrote:
>How big is a working Franz Allegro lisp image? Last time I looked,
>ACL generated really large code on RISC machines
>(has this changed?).

In general, RISC trades compactness of object code for the ability to
execute that code very quickly. The reasons are obvious: the typical
3-register fixed-length instruction format with 16 or more registers
from which to choose, in combination with a limit of one relatively
simple operation per instruction.

Contrast this with the x86 architecture, which is accumulator-based with
a small number of registers, but whose instructions are variable length,
and thus require more logic to decode. The x86 wins on code density for
many simple code fragments, but the limited number of registers requires
much saving and restoring when computations are more involved.

Also note that a lot of Allegro's perceived "bloat" in small functions
is type-checking and other overhead, which (when appropriate to the
task) can be reduced or eliminated with the right declarations and good
data structure design.

-- Chuck, longtime Allegro user and student of CPU architecture

Chuck Fry

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
In article <joswig-3007...@194.163.195.67>,
Rainer Joswig <jos...@lavielle.com> wrote:

>In article <7nqj8g$5ih$1...@shell5.ba.best.com>, chu...@best.com (Chuck Fry) wrote:
>
>> In general I agree with Erik, though I think the hope is that Core Lisp
>> + extensions + application <= cache size.
>
>It would just be sufficient to ensure that the most-used
>parts of a system fit into the cache: for example,
>code that runs very often (like incremental GC,
>memory allocation routines, the thread scheduler, the base
>CL library, ...)

... interrupt service routines...

>and data that might be used very often (prototype objects,
>resource pools, symbol tables, ...).

Add: the most frequently used levels of the generational allocation
pool.

-- Chuck

Reini Urban

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
Fernando Mato Mira <mato...@iname.com> wrote:
>The package system is constantly under attack, and while some Schemes have
>nice module systems (eg: MzScheme), there's no standard.

"nice" is different. For me the MzScheme/MrEd/DrScheme module system
(the "units") is one of the most powerful ones, but "horrible" to read
and understand.
CL packages and defsystem are IMHO much simpler and of comparable
functionality.

Just this example for the module (better "library") COMPAT.SS:
Files: compat.ss, compatu.ss, compatr.ss, compats.ss
Requires: functios.ss
Opened form requires: functiou.ss
Syntactic forms only: compatm.ss
Signature: mzlib:compat^
Unit: mzlib:compat@, imports mzlib:function^

whenever I see such weird characters as "@" or "^" I get totally
frightened. Please, not another Perl!

--
Reini

Chuck Fry

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
In article <lwr9lr1...@copernico.parades.rm.cnr.it>,

Marco Antoniotti <mar...@copernico.parades.rm.cnr.it> wrote:
>jos...@lavielle.com (Rainer Joswig) writes:
>> or to quote "Modernizing Common Lisp: Recommended Extensions", John Mallery, ...:
>>
>> "The most important recommendation with the most leverage is the
>> recommendation to develop a PORTABLE BASE LANGUAGE FOR COMMON LISP.
>> This strategic move provides:
>>
>> * An opportunity to reengineer the core language;
>> * A minimal base to port to new platforms;
>> * Opportunities for new hardware and compiler efforts
>> (base-to-platform & portable-to-base);
>> * Increased sharing of higher-level code."
>>
>> ...

>>
>> "* CL Portable Base Language: Develop a base language to which
>> all extant Common Lisp implementations can be targeted. This
>> language should include locatives and basic OS operations like
>> processes, file I/O, networking.
>
>I beg to differ. All of these 'desiderata' are not part of a 'core'.
>They are desirable features of a full blown system programming
>language. Many of these could be *added* to CL as it is, by providing
>an appropriate set of packages (i.e. many of them).

Marco is right, these are desirable features of a full blown system
programming language. Consider the source. This recommendation comes
from people with substantial experience with Lisp *as a system
programming language*.

Why should anyone want any less from a language? If you want less than
a full blown system programming language, use Java.

Without seeing a detailed proposal, I don't think it's fair to comment
on the necessity of supporting file I/O, processes, etc. in the base
Lisp. There are a range of underlying multitasking/multiprocessing and
file system implementations in the world today, and I don't think a
one-size-fits-all approach will work for all of them. If you assume a
typical workstation or PC environment, there may be enough commonality
to get away with this.

>> This base layer will
>> increase portability of lisp applications and enable work to
>> improve performance with specialized hardware or compilers.
>> This layer should make efforts to support real-time and
>> parallel dialects. Experimental, Very High Impact, Examples,
>> Implementations, Hard. KR, RL, JM"
>
>If we consider Java as a sort of hindsight on many CL design decisions
>(as I think it is), we see that the story does not go this way, when
>it comes to embedded systems (read: memory constrained - that's the name of
>the game).

But small embedded systems, while an important application domain, are
not the only game in town.

>The latest "small" JVM from Sun and Motorola (the KVM) runs in about
>80k with a given set of libraries. It runs on the Palm Pilot. But
>its main characteristic is that it *drops* many of the features of the
>JDK that - by similitude - according to Mallery should go in the
>'core' CL.

But again, look at the environment in which it's meant to run. That's
appropriate for a system with one user, extremely limited I/O, limited
memory, and no file system to speak of.

>So the story is very simple. You need to *drop* parts of the
>language. Not requiring *extra* things like networking.

[... much deleted ...]

>So, the story is the following. If you want to have a (Common) Lisp
>running on an embedded system today, you need to shoot for a <100K
>"processor" which should include some form of GC. Anything different
>would not work. The core language should run with these constraints in
>mind and be specified in such a way as to have the full CL runnable on
>it, once the memory constraints are lifted.

I think the conflict here is that Marco is thinking of a Lisp that would
run *exclusively* in embedded-system-like environments, while the MIT
team is probably thinking about richer environments. I don't think you
can satisfy both requirements with a single implementation. But I could
be wrong.

Frank A. Adrian

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to

Marco Antoniotti <mar...@copernico.parades.rm.cnr.it> wrote in message
news:lwbtcv6...@copernico.parades.rm.cnr.it...
> Of course I got assimilated. I'd like a date with 7! :)

You mean "38 of D"?

Duane Rettig

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to

I will answer both of these together:

> In article <nkjd7xb...@tfeb.org>, Tim Bradshaw <t...@tfeb.org> wrote:
>
> > jos...@lavielle.com (Rainer Joswig) writes:
> >

> > > > -- and one
> > > > apparently very portable commercial offering -- Franz Allegro.
> > >
> > > "Portable" is relative - portable across Unix systems perhaps?
> > >
> >
> > Unix and Windows 9x and NT. I get the impression that they can port
> > to anything pretty much including embedded-type systems, but Duane or
> > someone should speak to that.

There are two aspects of a port that one must worry about:

1. The architecture. This is usually not a problem for byte-code
lisps, and is an extra task for us. However, we have the porting
effort for new architectures down to a relatively short time; it used
to take over a year to add a new architecture to our fold, but now it
is a matter of weeks. Optimizations can follow, but correctness comes
quickly.

2. The OS interface. This is the major problem for all lisps, and
it tends to level the playing field. (Lisp is not the only language
that is affected, though; even that great high-level assembler, C,
sometimes bleeds some OS differences through to the user.) C is usually
otherwise portable because it is well understood and is the major
language that almost every hardware vendor provides for their product.

Porting to embedded systems will mostly depend on the system/monitor
interface, but should not be too hard. Instead, it is more a question
of the demand for such a thing; most of the ports we've done we've either
been paid for or we've decided it was good business based on the general
nature of the architecture; embedded systems are less general, and though
we've been approached for ports to them, I can only remember once when
we were actually offered money to do the port.

Our lisp is definitely not as portable, even theoretically, as one that
is written in a different language such as C, or one that compiles to
a virtual lisp machine that executes byte-codes, due to the necessary
architecture port. However, we have seen problems with byte-code lisps
(e.g. emacs/xemacs, which used to generate portable byte codes but no longer
do so). It is up to each vendor to exercise control over the
implementations they keep and the versions they run on. I believe
that we are very competitive in that area: For example, I have 16 rlogins
to different machines at all times in my xemacs; when I run a script that
makes our lisp it runs on 19 different machines at once, including an
x86/NT and an Alpha/NT machine over various rsh packages. Thus, I keep
my development in-sync at (almost) all times.

This results in a different kind of portability; one where the user sees
identical results on all ports of the same version of Allegro on all
architectures [Disclaimer: this is the goal, not always attainable].
Over time our support staff has had a harder time being able to tell
the difference when customers send in bug reports. And, as a measure
of the success of the ports, the number of times in which it makes a
difference what architecture the problem is on has become smaller over
time, as well.


jos...@lavielle.com (Rainer Joswig) writes:

> Hmm, it's surely nice that they now have it "portable".
> But it took them quite some time to do so.

Yes, it did.

> Earlier ACL versions were different between Windows and Unix.

This is exactly why it took so long. The two versions were night
and day. One was CLtL1 + CLOS, and the other was post-CLtL2. One
was multithreaded and one was not. We took the unix lisp, ported to
the x86 (the linux version was a side-effect of this), and then
ported to NT and put CG on top for the NT port. It was definitely
our hardest port.

> Having ported it to Unix and Windows doesn't make it
> "very portable", IMHO.

That is correct. However, we've learned a lot from doing all of
these ports, and so the next challenges become much easier each
time. We had also had experience in other operating systems, like
DEC/VMS and IBM VM/360, and though these didn't work out business-wise,
they contributed to the portability of our o/s interface. We were
also on Crays and Amdahls, but since Unicos and UTS are both unix, they
don't qualify as different.

--
Duane Rettig Franz Inc. http://www.franz.com/ (www)
1995 University Ave Suite 275 Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253 du...@Franz.COM (internet)

Duane Rettig

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
jos...@lavielle.com (Rainer Joswig) writes:

> In article <4so67v...@beta.franz.com>, Duane Rettig <du...@franz.com> wrote:
>
> > > Having ported it to Unix and Windows doesn't make it
> > > "very portable", IMHO.
> >
> > That is correct. However, we've learned a lot from doing all of
> > these ports, and so the next challenges become much easier each
> > time. We had also had experience in other operating systems, like
> > DEC/VMS and IBM VM/360, and though these didn't work out business-wise,
> > they contributed to the portability of our o/s interface. We were
> > also on Crays and Amdahls, but since Unicos and UTS are both unix, they
> > don't qualify as different.
>

> Could you comment on the "difficulty" of porting it to say
> Be OS or MacOS X?

I honestly don't know BeOS. From what I understand, it is a fully
preemptive multitasking os like unix, so it might not be hard. I
hadn't looked at it lately ever since it lost the bid to replace
Rhapsody, about the same time Jobs came back to Apple and shut the
hardware OEMs down.

MacOS X server is just BSD4.x on top of the Mach kernel.

> I was quite surprised to see Franz supporting Linux PPC -
> was that one of the "few weeks" ports?

Actually, it was only a few days :-) There were a few trivial changes
to make it compatible with the MkLinux port, which was a few-weeks
port. That might not be a fair statement, however, since the actual
time span was over a year; we actually started the port on the DR 2.1
release of MKlinux, when Apple was still "supporting" it; we had to
report some of the same bugs that we had reported to linux/x86 a year
or so before that. In manpower scale, though, it was not hard.

> I found it
> a bit amazing that you could afford the effort to do the
> port - it's a good sign, though.

Perhaps we are a little more portable than you might think! And I
had forgotten about another port in my last post: remember Allegro
CL 3.0 on NeXT? To the extent that Mach is any different than unix,
we do have experience on that one, too. And though we didn't have
a window-interface to NeXT, note that the MacOS X Yellow Box is really
Rhapsody, which is ... which is really NeXTstep...

Raffael Cavallaro

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
In article <37a124e5...@news.mclink.it>, amo...@mclink.it (Paolo
Amoroso) wrote:

>> Marco Antoniotti wrote:
>> > standard. I can go on and on and on... I feel like the pink rabbit. :)
>>
>> just curious what is a this pink rabbit? But thanks for pointing out why
> ^^^^^^^^^^^
>Maybe something like a guinea pig ("cavia", "animale da laboratorio" in
>Italian)?


I think what the original poster meant was something like:

"I can go on and on and on.... I feel like the Energizer Bunny. :)"

However, he called the Energizer Bunny the "pink rabbit," and this was lost
on some of us who don't watch as many TV commercials as I do.

Raf

--

Raffael Cavallaro, Ph.D.
raf...@mediaone.net

Joerg-Cyril Hoehle

unread,
Jul 29, 1999, 3:00:00 AM7/29/99
to
Kent M Pitman <pit...@world.std.com> writes:
> This is a simple trade-off. Scheme assumes you will be passing so
> many functions that you will have at least an equi-likely chance of
> wanting to call a variable named F as a globally defined function
> named F, and so it assumes you want to optimize this case. Most CL
> programmers I know would gag if functional args were passed around
> with enough frequency to justify optimizing this case. Likewise, as
> arguments, it's not common to pass the contents of globally defined
> function names in CL, so the case of passing one is pessimized
> slightly in order to make it clear that (f x) is passing a local name

Do you mean that CL programmers use fewer HOF (higher-order
function) techniques than Schemers? What about all the Lispers fighting
together for the cause of LAMBDA against the evil C etc. crowd? :-)

Or rather that more is defined at top-level, without too deep nesting,
for reasons you exposed another time, mostly being able to dynamically
overwrite functions etc. individually?

Do you mean that the style in CL is like in C, where you
don't pass many function pointers around, mostly primitive integers
and constructed structs, except for the obvious sort()
parameterization? Or like in OO, where you mostly manipulate
structs/records/objects/subjects as well? (E.g. few Smalltalkers seem
to pass block closures around, the usual exception being the sorter
for a collection.)

> outcome". A maxim I've made up for myself on this is the following:
> "There are no political answers, only political questions." (Something
> I just this moment thought of: It follows from strong typing that this
> must be true, right? Both arms of the IF have to have the same return
> type.)
Or the return type is the union of the two types
for the languages that allow this.

Regards,
Jorg Hohle
Telekom Research Center -- SW-Reliability

Rainer Joswig

unread,
Jul 30, 1999, 3:00:00 AM7/30/99
to
In article <7nqj8g$5ih$1...@shell5.ba.best.com>, chu...@best.com (Chuck Fry) wrote:

> In general I agree with Erik, though I think the hope is that Core Lisp
> + extensions + application <= cache size.

It would just be sufficient to ensure that the most-used
parts of a system fit into the cache: for example,
code that runs very often (like incremental GC,
memory allocation routines, the thread scheduler, the base
CL library, ...)

and data that might be used very often (prototype objects,
resource pools, symbol tables, ...).

> >| - it might be a good teaching vehicle


> >
> > we've been there before. [...]
>
> I'm not going to comment on this.

The idea is that you want to give students a system
they can understand in one or two semesters and on
which they can base useful work - like experimenting
with multiple processors, distribution or whole program
analysis.

Similar goals have been followed by Scheme (as a teaching
vehicle for introductory computer science) or
by Oberon (which runs on raw PCs.)
