Best combination of {hardware / lisp implementation / operating system}

Jules F. Grosse

Oct 22, 2002, 3:13:58 PM
What is the best combination of {hardware / Lisp implementation /
operating system} generally available today for a Lisp environment?

One could wonder what "best" means here. I would say: how
efficiently the Lisp implementation uses the hardware to boost the
performance of Lisp programs. Classic example: I consider
Genera/Lisp Machines an excellent combination (since they were made
for one another, of course, but you get my point).

This is a flame-prone topic, I understand, but I think a good one.

As for the possible combinations, I would say (but not restricted to):

hardware: alpha, sun, hp, risc, ibm, intel, amd, powerpc, etc.
lisp implementation: cmucl, sbcl, clisp, allegro, lispworks, corman,
mcl, openmcl
operating system: the usual ones for the hardware platforms above

tia

Will Deakin

Oct 22, 2002, 4:20:24 PM
Jules F. Grosse wrote:
> One could wonder what "best" means here. I would say: how efficiently
> the Lisp implementation uses the hardware to boost the performance of
> Lisp programs.
Yes, one could. For me this is a strange definition of "best" and
reminds me of Michael Jackson's use of the word `bad'.

I would suggest that there are (at least) three components to this: how
`good' the lisp implementation is at generating native binary code, how
`good' the operating system is at providing an environment for running
this binary, and finally how `good' the hardware is that executes the
whole three-ring circus.

> On the possible combinations I would say (but not restricted to):
>
> hardware: alpha, sun, hp, risc, ibm, intel, amd, powerpc, etc.

Hmmm. To be really picky, I think you have conflated three different
things here: OS vendor (sun, hp, ibm); processor or hardware
manufacturer (alpha, intel, amd, powerpc); and generic processor type
(risc). So, for example, if you include risc I would have expected to
see cisc too.

FWIW, to answer the OS and hardware question: for the work I do I have
always found the sun/sparc/solaris combination to be robust, stable
and well supported. I also have good experiences of ibm/rs6000/aix and
(much more limited) positive experience, IIRC, of hp/powerpc/hp-ux. But
then again, if you pay the money this is no more than you would expect.
However, I could say the same of OEM pc bits/amd/linux.

With regard to lisp implementation, all have benefits and issues.
However, having spent a rather torrid day moving the contents of deleted
tables from an oracle database in the US to one in India -- because of a
(suspected) bug -- I would humbly suggest that rather than worrying
about the implementation, you write portable code on an implementation
you feel comfortable with (be that for price, ease of use or whatever),
concentrate on getting that right, and only at that point worry about
the `best' implementation. You can always move later: if, say, what you
have written turns out to be really floating-point mad, cons-tastic or
integer-bound, get an implementation that is good at that and can be
measured to be so.
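The "measured to be so" part is cheap in Common Lisp, since CL:TIME is part of the standard. A minimal sketch (the function and the count are purely illustrative):

```lisp
;; A deliberately cons-heavy toy function, for illustration only.
(defun consy (n)
  (let ((acc '()))
    (dotimes (i n (length acc))
      (push (list i i) acc))))

;; CL:TIME prints elapsed time and, on most implementations, bytes
;; consed -- enough to tell whether your code is cons-tastic,
;; floating-point mad or integer-bound before you shop around.
(time (consy 100000))
```

Run the same portable form on each implementation you are considering, on the workload you actually care about, and compare the reports.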

Hope this is of some little help,

:)w

Frank A. Adrian

Oct 22, 2002, 10:16:35 PM
Jules F. Grosse wrote:

> What is the best combination of {hardware / Lisp implementation /
> operating system} generally available today for a Lisp environment?
>
> One could wonder what "best" means here. I would say: how efficiently
> the Lisp implementation uses the hardware to boost the performance of
> Lisp programs. Classic example: I consider Genera/Lisp Machines an
> excellent combination (since they were made for one another, of course,
> but you get my point).

Then I would say that Open Genera on an Alpha system would be your best bet.

faa


Dave Bakhash

Oct 23, 2002, 1:01:47 PM
jlsg...@netscape.net (Jules F. Grosse) writes:

> What is the best combination of {hardware / Lisp implementation /
> operating system} generally available today for a Lisp environment?

It's a silly question (since "best" is not a straightforward metric),
but all things considered (including price, performance, flexibility,
etc.), here's my vote, FWIW:

hardware: PC x86
Common Lisp implementation: Xanalys LispWorks
OS: Linux

The hardware is cheap; Linux is fast and free; LispWorks under Linux is
very good and affordable, supports ODBC, CORBA, and lots more; and there
are no runtime licenses.

Ng Pheng Siong

Oct 23, 2002, 9:23:44 PM
According to Jules F. Grosse <jlsg...@netscape.net>:

> What is the best combination of {hardware / Lisp implementation /
> operating system} generally available today for a Lisp environment?

As always, it depends on your situation:

1. Start-up on a shoestring.

2. Start-up with $12mil venture capital. ;-)

3. Employed person or student play-playing in your copious free time.

4. Employed person trying to sneak Lisp into the shop.

5. What OS you're familiar with.

6. What Lisp implementations you're familiar with.

7. The application you are building; its delivery and threat models.

Etc. etc.

I'm (1). I decided on (5) right off the bat. Chose (6) after a short
eval (when I was a rank newbie... there: 12mil VC dollars riding on a
newbie's decision ;-). I deal with (7) daily.


--
Ng Pheng Siong <ng...@netmemetic.com> * http://www.netmemetic.com

Tim Bradshaw

Oct 24, 2002, 9:10:22 AM
* Jules F Grosse wrote:
> What is the best combination of {hardware / Lisp implementation /
> operating system} generally available today for a Lisp environment?

> One could wonder what "best" means here. I would say: how efficiently
> the Lisp implementation uses the hardware to boost the performance of
> Lisp programs. Classic example: I consider Genera/Lisp Machines an
> excellent combination (since they were made for one another, of course,
> but you get my point).

Well, this turns out to be wrong. The Lisp machines did have special
support, but (1) it turns out to be possible to write compilers which
will produce perfectly decent code without requiring HW support, and
(2) because it costs a *huge* amount of money (billions of dollars a
year) to produce processors which perform well, unless you have enough
market share to spend these billions of dollars every year, you *must*
target the processors on which this money is being spent. Even in the
late 80s the LispMs were lagging other systems in terms of
performance; by now the situation is hopeless.

Similarly, producing and maintaining an OS costs a huge amount of
money - probably also billions of dollars a year. Unlike hardware,
it's rather easy to conceal this cost unfortunately. So, say, Linux is
`free' which actually means that there's a huge accounting scandal
where hundreds of thousands of students are slaving away on the thing
but not getting paid, and tens of thousands of employees are also
`borrowing' time from their employers to work on it which is getting
misaccounted for. But if you look at companies that do maintain
commercial OSs - Microsoft, Sun, et al. - you'll soon see that they cost
lots of real money. Despite the idiot `gift economy' stuff that
people spewed out in the dot-com years, there is no such thing as a
free OS, lunch, chip, editor, lisp implementation or whatever -
someone is paying and my guess is that the `free' systems cost about
the same as the `commercial' systems if the accounting is done
correctly (of course, it never will be done correctly). So you need
to target one of the existing OSs. Fortunately, it turns out that
they're not too bad - despite all the Unix-haters stuff, Unix has
turned into quite a decent OS by now, and it supports Lisp quite
nicely. Maybe even Windows is OK (certainly, it benefits from a
standard GUI, which all the Unixes seem to be racing to duplicate
while pretending to `innovate': I wish either of GNOME or KDE was half
as innovative as my 10-year-old tvtwm setup...).

The underlying point behind all this is: modern lisp systems on
modern off-the-shelf hardware perform at maybe 50% of bummed C
code. If the Lisp code is similarly bummed (probably taking
advantage of implementation-specific stuff) you can probably get better
than that - say 70-100%. So the *best possible* gain from spending huge
money is a factor of 2. Alternatively you can just do nothing, and in
6 months the thing will be twice as fast anyway.
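The "implementation-specific stuff" here mostly means type and optimization declarations, which let a native-code CL compiler generate arithmetic close to C's. A hedged sketch (the function names are made up, and how much the declarations buy varies by implementation):

```lisp
;; Generic version: full safety, generic arithmetic, boxed numbers.
(defun sum-squares (v)
  (reduce #'+ v :key (lambda (x) (* x x))))

;; "Bummed" version: these declarations allow a native-code compiler
;; to open-code unboxed double-float arithmetic on a specialized array.
(defun sum-squares-fast (v)
  (declare (optimize (speed 3) (safety 0))
           (type (simple-array double-float (*)) v))
  (let ((acc 0d0))
    (declare (type double-float acc))
    (dotimes (i (length v) acc)
      (incf acc (* (aref v i) (aref v i))))))
```

The cost, of course, is that you have traded away safety and genericity for the factor-of-2 Tim describes.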

--tim

Pascal Costanza

Oct 24, 2002, 9:34:11 AM
Tim Bradshaw wrote:
> * Jules F Grosse wrote:
>
>>What is the best combination of {hardware / Lisp implementation /
>>operating system} generally available today for a Lisp environment?
>
>>One could wonder what "best" means here. I would say: how efficiently
>>the Lisp implementation uses the hardware to boost the performance of
>>Lisp programs. Classic example: I consider Genera/Lisp Machines an
>>excellent combination (since they were made for one another, of course,
>>but you get my point).

> Well, this turns out to be wrong.

[...]

> So you need
> to target one of the existing OSs. Fortunately, it turns out that
> they're not too bad - despite all the Unix-haters stuff, Unix has
> turned into quite a decent OS by now, and it supports Lisp quite
> nicely. Maybe even Windows is OK (certainly, it benefits from a
> standard GUI, which all the Unixes seem to be racing to duplicate
> while pretending to `innovate': I wish either of GNOME or KDE was half
> as innovative as my 10-year-old tvtwm setup...).

You have forgotten to include Mac OS X in your list, which is currently
the best OS available, IMHO. It's based on Unix (BSD + Mach kernel), runs
X11 applications, runs "classic" Mac applications, runs "real" OS X
applications, has one of the best Java implementations, and even
Microsoft Office for OS X is better than the Windows version. ;) It's
already supported by Allegro Common Lisp, and Macintosh Common Lisp is
just around the corner. Then, there are also some "free" Common Lisps
available for Mac OS X. So Mac OS X offers plenty of value.


Pascal

--
Pascal Costanza University of Bonn
mailto:cost...@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Immanuel Litzroth

Oct 24, 2002, 9:57:46 AM
>>>>> "Pascal" == Pascal Costanza <cost...@web.de> writes:

Pascal> Tim Bradshaw wrote:
>> So you need to target one of the existing OSs. Fortunately, it
>> turns out that they're not too bad - despite all the
>> Unix-haters stuff, Unix has turned into quite a decent OS by
>> now, and it supports Lisp quite nicely. Maybe even Windows is
>> OK (certainly, it benefits from a standard GUI, which all the
>> Unixes seem to be racing to duplicate while pretending to
>> `innovate': I wish either of GNOME or KDE was half as
>> innovative as my 10-year-old tvtwm setup...).

Pascal> You have forgotten to include Mac OS X in your list which
Pascal> is currently the best OS available, IMHO. It's based on
Pascal> Unix (BSD+Mach Kernel), runs X11 applications, runs
Pascal> "classic" Mac applications, runs "real" OS X applications,
Pascal> has one of the best Java implementations, and even
Pascal> Microsoft Office for OS X is better than the Windows
Pascal> version. ;) It's already supported by Allegro Common Lisp,
Pascal> and Macintosh Common Lisp is just around the corner. Then,
Pascal> there are also some "free" Common Lisps available for Mac
Pascal> OS X. So Mac OS X offers plenty of value.

I work on MacOSX daily and beg to differ. The integration between the
graphical system and the command-line unix system is very bad; starting
up the classic system makes the whole system unstable; it is very slow;
and programming it is difficult because of the complexity of its
subsystems and their interaction, and the dearth of documentation. It
offers little or nothing in comparison to an out-of-the-box suse or
redhat system. I can send you a list of major & minor annoyances,
beginning with the fact that you can't start command-line executables
from the finder.
Immanuel


Jules F. Grosse

Oct 24, 2002, 10:05:39 AM
>
> As always, it depends on your situation:
>

Actually, I don't have an OS preference, but I would tend towards
UNIX-like ones. But if there is something that is GREAT on Windows
(or MacOSX or MacOS), then I could consider using that.

Regarding costs, I wouldn't worry about this right now. If there is
an EXCEPTIONAL option that is more expensive than all the others, I
may be able to purchase it.

There is no definite application, so I want to consider a general
case.

thanks

Jules F. Grosse

Oct 24, 2002, 10:10:00 AM
> I would suggest that there are (at least) three components to this: how
> `good' is the lisp implementation at generating native binary code, how
> `good' is the operating system at provinding an environment for running
> this binary and finally how `good' is the hardware that executes the
> whole three-ring circus.

Actually, that's what I meant, but you were clearer in the explanation.
Thanks.

> Hmmm. To be really picky, I think you have conflated three different
> things here: OS vendor (sun, hp, ibm); processor or hardware

Yes, you are right. I was lazy on that point, but apparently you got
the idea.

>
> With regard to lisp implementation, all have benefits and issues.
> However, having spent a rather torid day moving the contents of deleted

Those issues are what interest me. To give you a specific point: is
cmucl on x86 worse than cmucl on sparc? Does clisp on Windows
perform better than clisp on linux? And so on.

From what I see, there isn't an exact answer on this issue. Well, one
could deduce this from the number of excellent options
(cmucl, sbcl, clisp, allegro, lispworks, scl, etc.) out there. But
surely there are some good points and bad points to be observed about
them.

thanks for your time

Jules F. Grosse

Oct 24, 2002, 10:11:34 AM
>
> Then I would say that Open Genera on an Alpha system would be your best bet.
>

Maybe, but given that Genera isn't supported anymore and that the
Alpha line has been discontinued, this is really a dangerous option
in terms of support, right?

thanks

Joe Marshall

Oct 24, 2002, 10:26:13 AM
Tim Bradshaw <t...@cley.com> writes:

> * Jules F Grosse wrote:
> > What is the best combination of {hardware / Lisp implementation /
> > operating system} generally available today for a Lisp environment?
>
> > One could wonder what "best" means here. I would say: how efficiently
> > the Lisp implementation uses the hardware to boost the performance of
> > Lisp programs. Classic example: I consider Genera/Lisp Machines an
> > excellent combination (since they were made for one another, of course,
> > but you get my point).
>
> Well, this turns out to be wrong. The Lisp machines did have special
> support, but (1) it turns out to be possible to write compilers which
> will produce perfectly decent code without requiring HW support, and
> (2) because it costs a *huge* amount of money (billions of dollars a
> year) to produce processors which perform well, unless you have enough
> market share to spend these billions of dollars every year, you *must*
> target the processors on which this money is being spent. Even in the
> late 80s the LispMs were lagging other systems in terms of
> performance, by now the situation is hopeless.

Wait a sec! The LMI K-machine cost about $1 million to develop and
ran at about 13 million instructions per second. The TAK benchmark
(the only one I remember off hand) with full safety completed in .03
seconds. This was in 1986. Lisp on stock hardware did not catch up
to this level of performance until the late 90s.

True, LMI's other products were dogs (the LMI Lambda took seven
seconds to complete TAK), but performance is not the main predictor
of what people will buy.
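For readers who don't remember it, TAK is the tiny triply-recursive function from Gabriel's benchmark suite; it measures function-call and fixnum-comparison cost and almost nothing else:

```lisp
(defun tak (x y z)
  (if (not (< y x))
      z
      (tak (tak (1- x) y z)
           (tak (1- y) z x)
           (tak (1- z) x y))))

;; The standard benchmark call:
(tak 18 12 6)  ; => 7
```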

> The underlying point behind all this is this: modern lisp systems on
> modern off-the-shelf hardware perform, maybe 50% as well as bummed C
> code. Probably if the Lisp code is similarly bummed (probably taking
> advantage of implementation-specific stuff) you can get better than
> that - say 70-100%. So the *best possible* gain from spending huge
> money is a factor of 2.

Maybe a bit more than 2 (but certainly much less than 100), but it
*does* cost a *lot*.

> Alternatively you can just do nothing, and in
> 6 months the thing will be twice as fast anyway.

And it's free. It's that zero in the denominator that gives such a
huge ratio of performance to price.

Tim Bradshaw

Oct 24, 2002, 11:04:46 AM
* Joe Marshall wrote:

> Wait a sec! The LMI K-machine cost about $1 million to develop and
> ran at about 13 million instructions per second. The TAK benchmark
> (the only one I remember off hand) with full safety completed in .03
> seconds. This was in 1986. Lisp on stock hardware did not catch up
> to this level of performance until the late 90s.

Yes, but could you actually *buy* a K machine? I was under the
impression that you couldn't. I'm not trying to be negative about it
(from what I've read it was a very interesting system), but it's
important to compare like with like - in particular you need to look
at the actual cost of the system to end users complete with OS and so
on. This was especially true in the 80s where there was much more
room for throwing money at a single-CPU system to make it run faster
(unlike now, where you can buy a 2.xGHz cpu for a few hundred dollars,
but building a 10GHz CPU with comparable CPI would cost you billions).

Secondly, if the K machine had been commercially produced, how fast
would it have run C? In particular, could some variant of the tricks
that made Lisp run very fast on it have been used to make C run very
fast? If not, why not?

As I said above, I really don't want to be negative about the K
machine - I've read only a tiny description of it, and I'm not in a
position to make any judgements. But from what I have read it looks
like the classic 80s RISC win applied to Lisp - fast clock, all
instructions complete in one clock (or fixed clocks) enabling
pipelining, optimistic execution with later backing out, load-store
architecture (?), parallelism in the HW to do things like type checks
&c &c. Obviously you know much more about this than I do!

So it looks, to me, like the K machine was really about the only case
where someone actually did the sums on performance rather than
ritually reproduced the kind of hardware that had seemed reasonable in
the 70s.

(Of course, this leaves me the inconvenient problem of explaining why
it took so long to realize these wins for stock hardware Lisps. I'll
just punt on that...)

--tim

Duane Rettig

Oct 24, 2002, 1:00:00 PM
Tim Bradshaw <t...@cley.com> writes:

> * Joe Marshall wrote:
>
> > Wait a sec! The LMI K-machine cost about $1 million to develop and
> > ran at about 13 million instructions per second. The TAK benchmark
> > (the only one I remember off hand) with full safety completed in .03
> > seconds. This was in 1986. Lisp on stock hardware did not catch up
> > to this level of performance until the late 90s.

Joe, this is incorrect. My earliest on-line (CVS) records show that
the 1993 tak benchmarks for sun3 were .05 (both run time and real time)
and for sparc were .03 (run time and real time). I'd have to dig out
some archives for earlier results, but in my memory we did most of the
super-optimizations right after Gabriel's book ("Performance and
Evaluation of Lisp Systems", MIT Press, 1985) came out, in the mid to
late 80's, and _not_ the late 90's.

Of course, it was also the late 80's to early 90's when we really started
realizing that Gabriel's benchmarks (aka the "Stanford benchmarks") did
not represent real applications and in some cases are actually detrimental
to total system performance. So a second wave of optimizations took place
in the early 90's geared more toward total system performance. These
optimizations are ongoing.

[Tim Bradshaw <t...@cley.com> writes:]

> (Of course, this leaves me the inconvenient problem of explaining why
> it took so long to realize these wins for stock hardware Lisps. I'll
> just punt on that...)

Punt away; the time was a decade off anyway.

The reason why we didn't optimize CL until even as late as the mid to
late 80's is that people did not demand such performance strongly
until then. At that time, Lisp was "Big and slow", and even Lisp
proponents had bought into that lie, thus making it artificially the
truth.

Of course, very likely the first fastest Lisp in the world was the
first port I did of Franz Lisp to the Amdahl 580 in 1984. It's
probably not fair, though, for the same reason as for the K machine...

--
Duane Rettig du...@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182

Tim Bradshaw

Oct 24, 2002, 2:14:20 PM
* Duane Rettig wrote:

> Joe, this is incorrect. My earliest on-line (CVS) records show that
> the 1993 tak benchmarks for sun3 were .05 (both run time and real time)
> and for sparc were .03 (run time and real time). I'd have to dig out
> some archives for earlier results, but in my memory we did most of the
> super-optimizations right after Gabriel's book ("performance and Evaluation
> of Lisp Systems", MIT Press, 1985) came out, in the mid to late 80's
> and _not_ the late 90's.

If those are correct (which I'm sure they are!) then you must have (or
could have) been equalling the K machine by ~1990 - there were some
68k HP boxes which were really a lot faster than any of the Sun3s in
1989-90 as I remember, and I don't think that Sun3s got any faster
after that time frame. And presumably if you ran on any of the
high-performance RISC machines (such as: anything but SPARC...) back
then you could have done a lot better.

So I feel comforted by that (:-).

> Of course, it was also the late 80's to early 90's when we really started
> realizing that Gabriel's benchmarks (aka the "Stanford benchmarks") did
> not represent real applications and in some cases are actually detrimental
> to total system performance. So a second wave of optimizations took place
> in the early 90's geared more toward total system performance. These
> optimizations are ongoing.

Whenever I look at the Gabriel benchmarks and try to compare them with
what the code I write does, I feel pretty doubtful that they measure
anything at all other than performance on the Gabriel benchmarks...

--tim

Joe Marshall

Oct 24, 2002, 2:48:12 PM
Duane Rettig <du...@franz.com> writes:

> Tim Bradshaw <t...@cley.com> writes:
>
> > * Joe Marshall wrote:
> >
> > > Wait a sec! The LMI K-machine cost about $1 million to develop and
> > > ran at about 13 million instructions per second. The TAK benchmark
> > > (the only one I remember off hand) with full safety completed in .03
> > > seconds. This was in 1986. Lisp on stock hardware did not catch up
> > > to this level of performance until the late 90s.
>
> Joe, this is incorrect. My earliest on-line (CVS) records show that
> the 1993 tak benchmarks for sun3 were .05 (both run time and real time)
> and for sparc were .03 (run time and real time). I'd have to dig out
> some archives for earlier results, but in my memory we did most of the
> super-optimizations right after Gabriel's book ("performance and Evaluation
> of Lisp Systems", MIT Press, 1985) came out, in the mid to late 80's
> and _not_ the late 90's.

Ok. I just googled around to find some results and could have gotten
some lame ones.

At http://www-eksl.cs.umass.edu/~westy/benchmark/bench1.html
They report a time of 0.334 for Allegro 4.3b on a Sparc10 (Feb 96)

At http://www.computists.com/crs/crs11n17.html
They report a time of 0.030 for CMUCL (unknown platform, probably wintel) (May 2001)

At http://www.eligis.com/benchmarks.html
They report a time of 0.020 for ISLISP (400 MHz PII) after 2000

In any case, I was refuting Tim Bradshaw's claims that

  ``(2) because it costs a *huge* amount of money (billions of
  dollars a year) to produce processors which perform well, unless
  you have enough market share to spend these billions of dollars
  every year, you *must* target the processors on which this money
  is being spent.''

I believe that processors that perform *well* (i.e. a year or two
behind the bleeding edge) can be created for orders of magnitude less
money than billions of dollars. Of course this is still millions of
dollars, but it is within the grasp of a smaller company.

and
``Even in the late 80s the LispMs were lagging other systems in
terms of performance, by now the situation is hopeless.''

The LMI Lambda notwithstanding, LispMs were definitely *not* lagging
in terms of performance. However, performance at Lisp doesn't make a
whole lot of difference if you want to run NFS under UNIX.

I also don't think the situation is hopeless (although I doubt the
wisdom of trying to sell lisp-specific hardware. There are easier
ways to go broke.)

> Of course, it was also the late 80's to early 90's when we really started
> realizing that Gabriel's benchmarks (aka the "Stanford benchmarks") did
> not represent real applications and in some cases are actually detrimental
> to total system performance. So a second wave of optimizations took place
> in the early 90's geared more toward total system performance. These
> optimizations are ongoing.

Oh, of course. Gabriel's benchmarks are a lot like IQ tests: they
seem to measure *something* and there's a rough correlation between
them and `performance'.

> [Tim Bradshaw <t...@cley.com> writes:]
>
> > (Of course, this leaves me the inconvenient problem of explaining why
> > it took so long to realize these wins for stock hardware Lisps. I'll
> > just punt on that...)
>
> Punt away; the time was a decade off anyway.
>
> The reason why we didn't optimize CL until even as late as the mid to
> late 80's is that people did not demand such performance strongly
> until then.

Exactly so.

Tim Bradshaw

Oct 24, 2002, 3:01:35 PM
* Joe Marshall wrote:

> ``(2) because it costs a *huge* amount of money (billions of
> dollars a year) to produce processors which perform well, unless
> you have enough market share to spend these billions of dollars
> every year, you *must* target the processors on which this money
> is being spent.''

> I believe that processors that perform *well* (i.e. a year or two
> behind the bleeding edge) can be created for orders of magnitude less
> money than billions of dollars. Of course this is still millions of
> dollars, but it is within the grasp of a smaller company.

I think that this may have been true in the 80s but it's likely not
true today. In the 80s you could build more-or-less competitive
processors out of more-or-less commodity chips, or if you couldn't you
had to arrange to fab a few mildly special chips. Nowadays in order
to build last year's bleeding edge processor you have to stump up for
last year's fab. If you're lucky, you will be able to get Intel to
fab your chip for you on whatever they are no longer using for this
year's chip...

I'd probably agree with an order of magnitude or so of variation (so
hundreds of millions to produce something not quite bleeding edge - my
guess is that this is what, say, Sun or DEC spend (spent, in the DEC
case)). But I really doubt you could do anything significant for 10s of
millions.
This is one of the reasons why there are so very few high-performance
processors around...

> The LMI Lambda notwithstanding, LispMs were definitely *not* lagging
> in terms of performance. However, performance at Lisp doesn't make a
> whole lot of difference if you want to run NFS under UNIX.

Maybe things were different in the US, but by ~1989 the commodity
hardware that you could get in the UK was just eating the Lisp
machines alive, *especially* in terms of value for money - you could
buy *three* really seriously configured (`AI configuration' they
called it) Suns for the cost of a symbolics.

--tim

Thomas F. Burdick

Oct 24, 2002, 3:25:03 PM
Immanuel Litzroth <imma...@enfocus.be> writes:

Are you running 10.2? Maybe you are, but my experience has been quite
different. Classic crashes often (mostly from MS applications, which
isn't surprising), but it only brings the classic environment down
with it, everything else is fine. My system has been up for 23 days
(I rebooted it when I decided to use the Finder after all, restored
the symlink, and restarted to make sure I didn't screw anything up).
The interaction with Carbon stuff can be weird sometimes, but for the
most part, the integration is really good. Do you have the developer
documentation installed? It's certainly not perfect (I wouldn't
consider running a really important server on it -- I'm sticking to
Solaris for that), but for an end-user OS, it's the best I've seen.

> offers little or nothing in comparison to an out of the box suse or
> redhat system. I can send you a list of major & minor annoyances
> beginning with the fact that you can't start commandline executables
> from the finder.

Huh, I rarely use the Finder. Getting slightly back on topic, ".app"s
are Really Really Cool for languages like Smalltalk or Lisp. From
POSIX, it looks like a directory tree, but from the Finder, it looks
like an application that you can click on to launch. You can put your
VM, core file, and fasls together in one package. Wonderful. Does
anyone know if ACL or OpenMCL can make .apps? Or if MCL will be able
to? I've been using CocoaSqueak and loving it, especially the way I
can stick everything in the dot-app and send it to someone -- it's
like a "stand-alone" executable delivery system, without the downsides.
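For anyone who hasn't seen one, a .app is just a directory tree with a conventional shape; a Lisp delivery might look something like this (all names below are illustrative, not any particular implementation's layout):

```
MyLispApp.app/
  Contents/
    Info.plist        <- bundle metadata: identifier, executable name
    MacOS/
      MyLispApp       <- the launched executable, e.g. the Lisp VM
    Resources/
      myapp.image     <- saved Lisp image / core file
      fasls/          <- compiled files loaded at startup
```

The Finder treats the whole tree as one double-clickable application, while from the shell it is an ordinary directory.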

(setf ns:*gushing* nil)

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'

Will Deakin

Oct 24, 2002, 3:29:35 PM
Joe Marshall wrote:
> I believe that processors that perform *well* (i.e. a year or two
> behind the bleeding edge) can be created for orders of magnitude less
> money than billions of dollars. Of course this is still millions of
> dollars, but it within the grasp of a smaller company.
Since you are quibbling, may I declare open season on the quibble. I
would argue with `*orders* of magnitude less than billions of dollars.'
My best guess would be no more than an order of magnitude. Also, I would
suggest that a company with millions -- if not tens of millions -- is
not that small, really. (I am always amazed at how `small' `big'
companies like British Airways or the Allied Irish bank really are...)

> I also don't think the situation is hopeless (although I doubt the
> wisdom of trying to sell lisp-specific hardware. There are easier
> ways to go broke.)

Sure. I could also imagine more fun ones too ;)

:)w


Barry Margolin

Oct 24, 2002, 3:30:54 PM
In article <ey3n0p3...@cley.com>, Tim Bradshaw <t...@cley.com> wrote:

>* Joe Marshall wrote:
>> The LMI Lambda notwithstanding, LispMs were definitely *not* lagging
>> in terms of performance. However, performance at Lisp doesn't make a
>> whole lot of difference if you want to run NFS under UNIX.
>
>Maybe things were different in the US, but by ~1989 the commodity
>hardware that you could get in the UK was just eating the Lisp
>machines alive, *especially* in terms of value for money - you could
>buy *three* really seriously configured (`AI configuration' they
>called it) Suns for the cost of a symbolics.

It was pretty much the same over here. When Sun came out with the
SPARC-based systems, it was hard to justify any more Symbolics purchases at
Thinking Machines. By the time Symbolics came out with the Ivory machines,
which brought them a little closer, our management no longer considered
them a serious option. I expect that the situation was similar at many
other Symbolics customers, which is why Ivory didn't save them from having
to declare bankruptcy.

--
Barry Margolin, bar...@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Duane Rettig

Oct 24, 2002, 4:00:01 PM
Joe Marshall <j...@ccs.neu.edu> writes:

> Ok. I just googled around to find some results and could have gotten
> some lame ones.
>
> At http://www-eksl.cs.umass.edu/~westy/benchmark/bench1.html
> They report a time of 0.334 for Allegro 4.3b on a Sparc10 (Feb 96)

My first reaction was "It's probably being done wrong", and to suggest
that you try the same benchmark source on today's machines (to get similar
results). But after having looked at the other results, which seem
consistent with the results I have, it looks as though this was one of
the versions of the benchmarks which did a "10X" tak (i.e. run the tak
benchmark 10 times in order to compensate for the increasingly
noisy readings due to the speed of machines and software). Still, my
recommendation would tend to be "try the same test now". The .03 number
you're seeing on K machines is a 1X result, and the .7 is probably a 10X
result with not all possible optimization levels triggered.

Benchmarking is a black art.
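
For concreteness, the tak function these numbers refer to is tiny; a minimal sketch in Common Lisp, assuming the traditional Gabriel arguments of 18, 12 and 6 (the "10X" variant mentioned above simply repeats the call ten times):

```lisp
;; The classic Gabriel/Takeuchi TAK benchmark; (tak 18 12 6) returns 7.
(declaim (optimize (speed 3) (safety 0)))

(defun tak (x y z)
  (if (not (< y x))
      z
      (tak (tak (1- x) y z)
           (tak (1- y) z x)
           (tak (1- z) x y))))

;; A 1X timing run; the "10X" variant wraps this in (dotimes (i 10) ...).
(time (tak 18 12 6))
```

Whether the optimization declarations are actually honored varies by implementation, which is part of why the published figures are so hard to compare.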

> In any case, I was refuting Tim Bradshaws's claims that
>
> ``(2) because it costs a *huge* amount of money (billions of
> dollars a year) to produce processors which perform well, unless
> you have enough market share to spend these billions of dollars
> every year, you *must* target the processors on which this money
> is being spent.''
>
> I believe that processors that perform *well* (i.e. a year or two
> behind the bleeding edge) can be created for orders of magnitude less
> money than billions of dollars. Of course this is still millions of
> dollars, but it is within the grasp of a smaller company.
>
> and
> ``Even in the late 80s the LispMs were lagging other systems in
> terms of performance, by now the situation is hopeless.''
>
> The LMI Lambda notwithstanding, LispMs were definitely *not* lagging
> in terms of performance. However, performance at Lisp doesn't make a
> whole lot of difference if you want to run NFS under UNIX.
>
> I also don't think the situation is hopeless (although I doubt the
> wisdom of trying to sell lisp-specific hardware. There are easier
> ways to go broke.)

I agree that performance by itself is not an indication of success
(look at Alphas). It is instead the prospect of gaining or losing
customer base which justifies or not the manpower commitment which
Tim mentions. I do believe that at the time Lisp Machines were being
developed, Moore's law was well established, and a good business plan
for a LM company would have been able to see the dismal projections
as to how hard it would be to keep up (and thus to keep the customer
base).

> > Of course, it was also the late 80's to early 90's when we really started
> > realizing that Gabriel's benchmarks (aka the "Stanford benchmarks") did
> > not represent real applications and in some cases are actually detrimental
> > to total system performance. So a second wave of optimizations took place
> > in the early 90's geared more toward total system performance. These
> > optimizations are ongoing.
>
> Oh, of course. Gabriel's benchmarks are a lot like IQ tests: they
> seem to measure *something* and there's a rough correlation between
> them and `performance'.

And, as with the cases with idiot savants and absent-minded professors,
there are local measurements that correlate not at all. Benchmarking,
like IQ testing, is a black art.

Barry Margolin

Oct 24, 2002, 4:24:16 PM
In article <4y98nv...@beta.franz.com>,

Duane Rettig <du...@franz.com> wrote:
>I agree that performance by itself is not an indication of success
>(look at Alphas). It is instead the prospect of gaining or losing
>customer base which justifies or not the manpower commitment which
>Tim mentions. I do believe that at the time Lisp Machines were being
>developed, Moore's law was well established, and a good business plan
>for a LM company would have been able to see the dismal projections
>as to how hard it would be to keep up (and thus to keep the customer
>base).

The same issue killed us at Thinking Machines. We eventually switched to
commodity processors (the Connection Machine CM-5 was based on SPARC rather
than the custom, single-bit processors we used in the CM-1 and CM-2), but
still the high cost of the proprietary interconnect architecture made it a
difficult sell (we also had competition from Intel, which was practically
giving away their parallel system by subsidizing it with the profits from
commodity systems).

Like Symbolics, what really set us apart was our software. There was
significant dissent among management about whether we should be a hardware
or a software company. Sales people understand how to sell boxes, so they
won, and that spelled the downfall of both companies.

Greg Neumann

Oct 24, 2002, 4:47:26 PM
Immanuel Litzroth <imma...@enfocus.be> wrote in message news:<m2fzuw0...@enfocus.be>...

> redhat system. I can send you a list of major & minor annoyances
> beginning with the fact that you can't start commandline executables
> from the finder.

Are you absolutely sure? I recall there was a .cmd or .command
extension you could give these apps that allowed them to be executed
via double-click.

Is this what you're talking about? It would require wrapping your app
in a shell script.
http://216.239.51.100/search?q=cache:0m2OEVtDuBkC:www.osxfaq.com/Tutorials/LearningCenter/HowTo/Startup/index.ws+%22.command%22+extension+command+line+applications+macos+x&hl=en&ie=UTF-8
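
For what it's worth, such a wrapper can be a plain shell script: give it a .command extension and the executable bit, and the Finder will hand it to Terminal when double-clicked. A minimal sketch (the file name and echoed text are purely illustrative):

```shell
#!/bin/sh
# hello.command -- a double-clickable command-line script.
# The .command extension plus the executable bit are what let the
# Finder launch it in a Terminal window; replace the echo with your
# real program (e.g. an exec of the app you want to wrap).
echo "Hello from the command line"
```

After `chmod +x hello.command`, double-clicking the file in the Finder opens a Terminal window running it.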

MacOS X is a weird conversation. There are many apps for which it's
the natural choice, but it helps to be the early-adopter type. Many
are waiting till 2004 when they have their processor supply issues
ironed out.

Greg Neumann

Christopher Browne

Oct 24, 2002, 4:47:57 PM
In an attempt to throw the authorities off his trail, Joe Marshall <j...@ccs.neu.edu> transmitted:

> I believe that processors that perform *well* (i.e. a year or two
> behind the bleeding edge) can be created for orders of magnitude
> less money than billions of dollars. Of course this is still
> millions of dollars, but it is within the grasp of a smaller company.

But this essentially points back to an argument I /regularly/ point
out concerning the costs of non-commodity systems.

You can get StrongARM and MIPS and PPC and other CPUs that are pretty
nifty, pretty fast, and which, in any kind of quantity, are quite
cheap.

Linux runs very nicely on any of these architectures; it sure would be
neat to be able to build a cheap MIPS box. A little microcode later
and it might be /quite/ slick as a Lisp Machine.

Unfortunately, while there may be $25 MIPS chips and $25 StrongARM
chips, I defy you to find motherboards costing less than $1500. That
price point seems to be a "magic" minimum price for "evaluation
boards" for these sorts of architectures.

The fact that (seemingly) every other Asian electronic fab plant makes
motherboards means that the equivalent for AMD or Intel CPUs costs a
mere $100, and that's /not/ associated with a 15:1 performance
reduction.

Designing a custom Lisp CPU points to building and selling the whole
set of hardware: motherboard, CPU, and, if you're /not/ lucky, your
own video and I/O hardware. (If you don't have an IA-32 emulator on
your CPU, it will be really problematic to make use of commodity I/O
hardware that expects to have some BIOS boot process...)
--
(concatenate 'string "chris" "@cbbrowne.com")
http://cbbrowne.com/info/nonrdbms.html
Lisp stopped itself
FEP Command:

Joe Marshall

Oct 24, 2002, 4:49:37 PM
Will Deakin <aniso...@hotmail.com> writes:

> Joe Marshall wrote:
> > I believe that processors that perform *well* (i.e. a year or two
> > behind the bleeding edge) can be created for orders of magnitude less
> > money than billions of dollars. Of course this is still millions of
> > dollars, but it is within the grasp of a smaller company.
>
> Since you are quibbling, may I declare open season on the quibble.

Declare away.

> I would argue with `*orders* of magnitude less than billions of
> dollars.' My best guess would be no more than an order of
> magnitude.

I think you could develop a decent processor for tens of
millions. I think that counts as `orders' less than `billions'.

> Also, I would suggest that a company with millions -- if
> not tens of millions -- is not that small, really. (I am always amazed
> at how `small' `big' companies like British Airways or the Allied
> Irish bank, really are...)

A ten-person company can easily consume a million bucks in a year. A
200 person company is considered by many people (not myself, though)
to be `small'. I think a team on the order of dozens of people can
design a decent processor.

Barry Margolin

Oct 24, 2002, 5:09:41 PM
In article <vg3rsg...@ccs.neu.edu>, Joe Marshall <j...@ccs.neu.edu> wrote:
>A ten-person company can easily consume a million bucks in a year. A
>200 person company is considered by many people (not myself, though)
>to be `small'. I think a team on the order of dozens of people can
>design a decent processor.

Of course, "big" and "small" are relative to the industry. At our peak,
Thinking Machines had around 500 employees; but since our competitors were
Intel and Cray, we were tiny by comparison. The design team for the CM-5
was a couple dozen engineers, but we also needed software developers, sales
and marketing people, management, and support staff. The system
administration group (where I worked) had 8 people at its max.

Christopher Browne

Oct 24, 2002, 5:18:37 PM

Chuck Moore, of Forth fame, has done this several times.

He's designed and built a couple of 16 bit "Forth chips," a 32 bit one,
and, in the "wow, that's odd" category, built his own combination
Forth implementation/IA-32 OS/Electronic CAD system called OKAD that
was then used to design a 21 bit embedded processor called the uP21.
(There seem to be a bunch of 21 bit processors built by him...)


--
(concatenate 'string "chris" "@cbbrowne.com")

http://cbbrowne.com/info/emacs.html
The next person to mention spaghetti stacks to me is going to have his
head knocked off. -- Bill Conrad

Tim Bradshaw

Oct 24, 2002, 5:34:19 PM
* Joe Marshall wrote:

> A ten-person company can easily consume a million bucks in a year. A
> 200 person company is considered by many people (not myself, though)
> to be `small'.

Given that that's only 100,000 per person, I'd not be at all
surprised! A 10 person company that uses significant capital
equipment (not PCs, but machine tools or something) would be lucky to
be run that cheaply I'd think.

> I think a team on the order of dozens of people can design a decent
> processor.

Probably, although they seem to be bigger than that now, as processors
have got far more complex (I bet Intel's team is *much* bigger,
although they struggle against hideous historical obstacles, and seem
to be building plenty more with Itanic, presumably to keep themselves
comfortable...). But the question isn't really whether you can design
the thing, it's whether you can *build* it.

--tim


Bruce Hoult

Oct 24, 2002, 5:35:12 PM
In article <m2fzuw0...@enfocus.be>,
Immanuel Litzroth <imma...@enfocus.be> wrote:

> I work on MacOSX daily and beg to differ. The integration between the
> graphical system and the commandline unix system is very bad

How so? The cli "Open" command starts GUI stuff as if you'd
double-clicked on the object named. You can start cli stuff from the
GUI using a simple wrapper bundle (the format of which is abundantly
documented).

> starting up the classic system makes the whole system unstable,

Not that I've seen. I rebooted my four year old G3/266 PowerBook
yesterday because of installing updated system software. I checked the
uptime first. 35 days, during which time it's been
slept/woken/slept/woken continually, and had Classic running (mostly MPW
and Hypercard) the entire time.


> it is very slow

OK, it's not speedy on a 266 MHz machine, but then what is these days?
Even with a 1.8 GHz (OK, 1.5) PC next to it I still find I prefer to do
many things on the 266 MHz Mac.


> and programming it is difficult because of the complexity of its
> subsystems and their interaction and the dearth of documentation.

If you pick one subsystem (classic Mac APIs, or Cocoa, or Java, or
POSIX) then there are few problems. Mixing different APIs is indeed
pretty poorly documented, though that has started coming through more
quickly recently, especially since the last developer's conference.


> It offers little or nothing in comparison to an out of the box suse or
> redhat system.

That's palpably not true -- and I'm running RedHat 8.0 on the previously
mentioned Athlon 1800+.

> I can send you a list of major & minor annoyances beginning with the
> fact that you can't start commandline executables from the finder.

Which is false. If you can't figure out how to make the relevant bundle
directory structure then go and get something like DropScript
(http://www.versiontracker.com/redir.fcgi/kind=1&db=mac&id=10459/DropScript-0.5.dmg)
to do it for you.

-- Bruce

Tim Bradshaw

Oct 24, 2002, 5:45:54 PM
* Barry Margolin wrote:

> It was pretty much the same over here. When Sun came out with the
> SPARC-based systems, it was hard to justify any more Symbolics purchases at
> Thinking Machines. By the time Symbolics came out with the Ivory machines,
> which brought them a little closer, our management no longer considered
> them a serious option. I expect that the situation was similar at many
> other Symbolics customers, which is why Ivory didn't save them from having
> to declare bankruptcy.

I don't have any figures to hand (or even not to hand), but I think
that - at least in terms of price/performance if not in terms of
absolute performance - this was true in the UK for the later 68k Suns.
3/180s and their kin were particularly nice, I think - cheap enough to
buy, and there was a hi-res screen / lots of memory config which was
really nice. I'm not sure if these machines actually beat the
Symbolics boxes (which at that stage would have been 36xxs for xx
being 50 or 70 or 4x I think?) in raw single-user performance, but in
price/performance they definitely did. Some of this may have been an
artifact of Symbolics having only a very small market share in the UK
anyway, and Sun pricing aggressively to drive them out (which they
succeeded in doing!), so it may not have been universal...

--tim

Will Deakin

Oct 24, 2002, 6:11:22 PM
Joe Marshall wrote:
> Will Deakin <aniso...@hotmail.com> writes:
>>I would argue with `*orders* of magnitude less than billions of
>>dollars.' My best guess would be no more than an order of
>>magnitude.
> I think you could develop a decent processor for tens of
> millions.
Sure. When I was at Manchester I got drinking with a couple of blokes
working in the comp.sci department who were involved in post-grad chip
burning. IIRC they were banging out runs of tens of chips in a facility
that cost about £5-10M sterling.

> I think it counts as `orders' less than `billions'.

Sure. (I think this is a mistranslation on my part -- I tend to read
`orders' as `more than two.'[1])

>>Also, I would suggest that a company with millions -- if
>>not tens of millions -- is not that small, really.

> A ten-person company can easily consume a million bucks in a year. A
> 200 person company is considered by many people (not myself, though)
> to be `small'. I think a team on the order of dozens of people can
> design a decent processor.

I agree with all of this.

However, in support of Tim's point, on the basis of -- what little I
know -- manufacturing and with a type of mythical man month analysis:
If you want to design, burn and manufacture chips I think you will need
a team of, say, ten cool chip designers. To test, make and put in small
cardboard boxes you will need, say, three times as many people. To
document, flog and market this you will need about another three times
again. On this basis you will need, say, 100+ people[2]. I would then
guess that this would cost at least $10M p.a. and to make it worthwhile
you should be looking at a turnover of at least $50-$100M. But I am also
happy to accept that I am talking through my pants. Be those US or UK.

:)w

[1] I was about 8 or 9 when I had an argument that turned into a fight
with my brother about whether `a couple' was more, equal to or less
than `a few.' :)
[2] ...this is where it gets *really* vague...

Edi Weitz

Oct 24, 2002, 6:26:02 PM
t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> [Mac OS X is] certainly not perfect (I wouldn't consider running a


> really important server on it -- I'm sticking to Solaris for that),
> but for an end-user OS, it's the best I've seen.

Let me take this chance to ask Mac OS Lispers a couple of questions.

Background: I've been using Macs in the 90s and have fond memories -
but that was back when I was a mathematician at the
university and didn't care much about programming. I've
since switched to FreeBSD for servers and Linux for my
laptops. After having read all the marketing hype about
Mac OS X I was one of the first to order a PowerBook G4 in
Germany and was so disappointed with 10.0 that I sold the
laptop a mere three months later and returned to my
Thinkpads. Now everybody tells me that 10.2 is really
quite mature and I should give it another try. I might
have the chance to get a used but almost new iBook and I'm
considering buying it.

What I'd like to know from people who've been using Lisp on Mac OS X
is:

1. How do the currently available implementations (I think that's
CLISP, OpenMCL, AllegroCL) compare? Which one do you prefer and
why? (Stability, correctness, OS integration, ...)

(I have no plans to use Classic so MCL is not an option - yet.)

2. What about speed? I don't mean PowerPC compared to x86 but rather
Lisp compared to other languages on the same platform. On Linux
I've often seen CMUCL and also LispWorks and AllegroCL to be on par
with C/C++ programs. Is this also the case on OS X?

3. How good is Emacs on Mac OS X? I've heard current CVS sources will
build an Aqua version - is anybody actually using it and can
recommend it?

4. Do you think a G3 700MHz iBook is fast enough for OS X? (Mainly
development, mail, writing, browsing - no fancy multimedia stuff.)

5. Other comments are welcome as well.

Sorry for the slightly dumb questions but I try to avoid the fiasco I
had last time. If you think I'm trying to start a flame war about
operating systems or implementations you can mail me privately... :)

Thanks in advance,
Edi.

Tim Bradshaw

Oct 24, 2002, 6:53:26 PM
* Will Deakin wrote:
> Sure. When I was at Manchester I got drinking with a couple of blokes
> working in the comp.sci department who were involved in post-grad chip
> burning. IIRC they were banging out runs of tens of chips in a
> facility that cost about L5-10M sterling.

Yes, but what kind of chip - it really does matter. Things like
transistor counts have gone up a lot and feature sizes have gone down
a lot for microprocessors. Both of these make *making* the things
*very* expensive, especially if you want to get a good enough yield
that you can sell them for anything like reasonable money.

I'm willing to believe that you can make a processor that's 1 or 2
years behind the bleeding edge for an order of magnitude less than you
can make the bleeding edge chips for, and you can probably make it for
reasonable unit cost, since you can use the fabs that are coming off
the back of the bleeding-edge chips. But assuming you're going for the
desktop market or above (and I don't think we're talking about chips
for PDAs or embedded applications here), you now have a fairly hard
job selling it. In order for it to be competitive with the current
bleeding edge chips for Lisp you have to get 2-4 times the performance
that you can get for a Lisp system on the bleeding edge chips. And it
won't run Windows, so you are looking at getting a decent chunk of the
Linux market, or starting a new proprietary-architecture server
company. Don't ask me to invest in this.

The first machine I ran was a minicomputer made up of an off-the-shelf
bitsliced processor and some random logic. They were a bit slower
than a VAX, but lots cheaper, and they were made by a company of maybe
10-20 people. Other than the boards, the physical hardware including
wiring, backplane, eproms and so on, and the software I don't think
there was any custom logic in those machines (there may have been, but
not much). They were almost literally made in someone's back room.
The second generation of machines this company made used Fairchild
clippers, and a fair number of PLAs I think. They were quite
competitive performance-wise, but rather expensive compared to the
systems Sun was making at the time, and the clipper was a mistake
because they were ages late and scarce at best. They never made a
third generation because they couldn't compete with people like Sun
and MIPS who were designing their own processors and lots of custom
logic chips. Since that time the cost of designing a competitive box
has gone up really a lot unless you want to just clone what someone
else has done, including using all their custom logic (like processors
&c). A whole world of small computer makers (and some large ones) has
vanished in the last 10-15 years, and I think that cost is why.

--tim

Pascal Costanza

Oct 24, 2002, 9:06:06 PM
Edi Weitz wrote:
> t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> Let me take this chance to ask Mac OS Lispers a couple of questions.
>

> After having read all the marketing hype about
> Mac OS X I was one of the first to order a PowerBook G4 in
> Germany and was so disappointed with 10.0 that I sold the
> laptop a mere three months later and returned to my
> Thinkpads. Now everybody tells me that 10.2 is really
> quite mature and I should give it another try.

I switched from Wintel to Apple in April and started with Mac OS X 10.1.4.
I was very pleased from the beginning. Long-time Mac enthusiasts told me
that 10.0 was really extremely immature. 10.2 is just great.

> What I'd like to know from people who've been using Lisp on Mac OS X
> is:
>
> 1. How do the currently available implementations (I think that's
> CLISP, OpenMCL, AllegroCL) compare? Which one do you prefer and
> why? (Stability, correctness, OS integration, ...)
>
> (I have no plans to use Classic so MCL is not an option - yet.)

Since I don't like Emacs I have decided to use MCL. They are going to
announce MCL 5.0 at the Lisp Conference, so this will be the first
Common Lisp with an IDE for Mac OS X as far as I know.

I don't know the non-IDE Lisps.

> 2. What about speed? I don't mean PowerPC compared to x86 but rather
> Lisp compared to other languages on the same platform. On Linux
> I've often seen CMUCL and also LispWorks and AllegroCL to be on par
> with C/C++ programs. Is this also the case on OS X?

MCL "feels" quite fast. ;-) If you'd post a benchmark I would be willing
to check this for you.

> 3. How good is Emacs on Mac OS X? I've heard current CVS sources will
> build an Aqua version - is anybody actually using it and can
> recommend it?

From what I have seen, the Aqua Emacs is not mature yet. It seems to
lack several essential features. However, you can install an X server
and run Xemacs. There's a window manager for X called OroborOSX that
tries to emulate the Aqua look. You might want to take a look at
http://fink.sourceforge.net

> 4. Do you think a G3 700MHz iBook is fast enough for OS X? (Mainly
> development, mail, writing, browsing - no fancy multimedia stuff.)

Yep, definitely. I use a G3 600MHz iBook and it's quite fast. I run MCL,
the Mail app, Mozilla, OmniWeb, Office, LaTeX and even the fancy
multimedia stuff like Quicktime, RealOne and the somewhat buggy Windows
Media Player.

> 5. Other comments are welcome as well.

I haven't yet found a convincing and inexpensive backup solution for Mac
OS X. Other than that I don't really know why you shouldn't switch. Even
Ellen Feiss did it. ;-)


Pascal

--
Given any rule, however ‘fundamental’ or ‘necessary’ for science, there
are always circumstances when it is advisable not only to ignore the
rule, but to adopt its opposite. - Paul Feyerabend

Erik Naggum

Oct 24, 2002, 9:14:44 PM
* Pascal Costanza <cost...@web.de>

| Since I don't like Emacs I have decided to use MCL.

Could you quickly summarize the reasons you do not like Emacs?
Is it specific to Emacs on a particular platform or general?

--
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.

Pascal Costanza

Oct 24, 2002, 9:31:13 PM
Erik Naggum wrote:
> * Pascal Costanza <cost...@web.de>
> | Since I don't like Emacs I have decided to use MCL.
>
> Could you quickly summarize the reasons you do not like Emacs?

I could, but it wouldn't make sense. My reasons are very subjective and
I haven't taken the time yet to form a better informed opinion.

I hope I haven't given the impression that I want to discourage anyone
from learning Emacs. That wasn't my intention.

Thomas F. Burdick

Oct 24, 2002, 10:07:17 PM
Pascal Costanza <cost...@web.de> writes:

> Edi Weitz wrote:
> > t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
>
> > Let me take this chance to ask Mac OS Lispers a couple of questions.
> >
> > After having read all the marketing hype about
> > Mac OS X I was one of the first to order a PowerBook G4 in
> > Germany and was so disappointed with 10.0 that I sold the
> > laptop a mere three months later and returned to my
> > Thinkpads. Now everybody tells me that 10.2 is really
> > quite mature and I should give it another try.
>
> I switched from Wintel to Apple in April and started with Mac OS X 10.1.4.
> I was very pleased from the beginning. Long-time Mac enthusiasts told me
> that 10.0 was really extremely immature. 10.2 is just great.

To put this in perspective, only loonies get version point-zero of an
Apple OS. 10.0 was even worse than normal. I didn't hear anyone who
was happy with 10.0 as their primary OS, but I've heard very few
negative experiences with 10.2, and mostly from Mac users, whose
complaints sound mind-bogglingly nit-picky to a Unix user (and this
nit-pickiness is probably why it's as nice as it is). So, while it
might bug Mac users, that's a ringing endorsement for Unix or (god
help them) Windows users.

> > What I'd like to know from people who've been using Lisp on Mac OS X
> > is:
> >
> > 1. How do the currently available implementations (I think that's
> > CLISP, OpenMCL, AllegroCL) compare? Which one do you prefer and
> > why? (Stability, correctness, OS integration, ...)
> >
> > (I have no plans to use Classic so MCL is not an option - yet.)

I've played with MCL under Classic, and it's quite nice -- I can see
why it has the reputation it does. CLISP is CLISP, on every platform.
OpenMCL looks pretty nice, and I've played with it some. I haven't
gotten to check out the Objective-C bridge, but I want to sometime
soon. The Cocoa API is really lovely from Smalltalk, and I'd imagine
it's similarly pleasant from Lisp. I haven't gotten to look at ACL
yet. I've been doing a lot of real work with my Lisping time, which
has cut into my available playing-around time. SBCL should hopefully
make it to OS X in November sometime, when I've got time to do the
port. If someone wants to beat me to it, though, I wouldn't complain!

> > 3. How good is Emacs on Mac OS X? I've heard current CVS sources will
> > build an Aqua version - is anybody actually using it and can
> > recommend it?
>
> From what I have seen, the Aqua Emacs is not mature yet. It seems to
> lack several essential features. However, you can install an X server
> and run Xemacs. There's a window manager for X called OroborOSX that
> tries to emulate the Aqua look. You might want to take a look at
> http://fink.sourceforge.net

I'm running GNU Emacs 21, compiled from source, under X11. All your
X11 windows show up in the same "X" application on the dock, which
would be annoying if I ran much more than Emacs and rxvt under X. As
far as I can tell, Emacs 21 w/ X is more stable under OS X than it is
under Solaris, where I sometimes get weird lockups and a stack so
screwed up I can't figure anything out with gdb. This is good because
you can't get Emacs 20 to compile on OS X. The Aqua version is
unstable, but you can build for Carbon, which should work fine,
although I haven't tried. I have an iBook, so I like being able to
display remotely to get a larger screen and better keyboard. I'm
really looking forward to getting Hemlock, though.

I've tried to get fink working a couple of times, then decided that
I'm perfectly able to compile from source myself. I'm not sure I'd
want several different package systems on my system.

> > 4. Do you think a G3 700MHz iBook is fast enough for OS X? (Mainly
> > development, mail, writing, browsing - no fancy multimedia stuff.)

That's plenty fast, just be sure you get enough memory. 128MB is the
minimum so you really want >= 256MB.

> > 5. Other comments are welcome as well.
>
> I haven't yet found a convincing and inexpensive backup solution for Mac
> OS X. Other than that I don't really know why you shouldn't switch. Even
> Ellen Feiss did it. ;-)

You got something against tar and cron?
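
The tar-and-cron approach is, for what it's worth, only a few lines; a minimal sketch, with illustrative paths and bearing in mind the resource-fork caveat raised elsewhere in the thread:

```shell
#!/bin/sh
# backup-home.sh -- nightly dated tarball of a directory.
# Caveat: plain tar archives only the data forks; HFS+ resource forks
# and Finder info are not preserved.
SRC="${1:-$HOME}"
DEST="${2:-/Volumes/Backup}"
STAMP=$(date +%Y%m%d)
tar czf "$DEST/backup-$STAMP.tar.gz" -C "$(dirname "$SRC")" "$(basename "$SRC")"
```

A crontab entry such as `0 3 * * * /path/to/backup-home.sh` (the path is hypothetical) would run it nightly at 3 a.m.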

Pascal Costanza

Oct 24, 2002, 10:55:39 PM
Thomas F. Burdick wrote:
> Pascal Costanza <cost...@web.de> writes:

>>I haven't yet found a convincing and inexpensive backup solution for Mac
>>OS X.

> You got something against tar and cron?

tar doesn't handle the Mac OS X file system correctly. A backup solution
for Mac OS X needs to be able to deal with long file names, resource
forks and finder information at the same time. Tools ported from the
Unix world generally get the resource forks and/or the finder
information wrong, and tools ported from Mac OS 9 are usually not able
to deal with long file names and sometimes even with the file names
themselves, because OS 9 and OS X use different character sets for file
names.

To put it mildly, this area needs some consolidation. To put it
differently, I would be happy if you could point me to a good solution -
I haven't found any yet. The best I have found so far is PsyncX at
http://sourceforge.net/projects/psyncx

Erik Naggum

unread,
Oct 24, 2002, 10:58:23 PM10/24/02
to
* Pascal Costanza <cost...@web.de>

| I could, but it wouldn't make sense. My reasons are very subjective and I
| haven't taken the time yet to form a better informed opinion.

Most of the time, subjective reasons are perfectly acceptable for the
individual who has to make the choices, and to state them as such is
quite unproblematic. They just need to be distinguished from more
universalizable reasons. I had hoped you had reasons that could be
classified as fixable or personal, and we could perhaps do something
about the fixable ones, but your answer is as complete as it gets.

| I hope I haven't given the impression to discourage anyone to learn
| Emacs.

Not initially in my view, but certainly not anymore.

Chris Beggy

unread,
Oct 24, 2002, 11:26:30 PM10/24/02
to
Christopher Browne <cbbr...@acm.org> writes:

> Linux runs very nicely on any of these architectures; it sure would be
> neat to be able to build a cheap MIPS box. A little microcode later
> and it might be /quite/ slick as a Lisp Machine.

Think Sony Playstation2...

Chris

Raffael Cavallaro

unread,
Oct 25, 2002, 12:22:40 AM10/25/02
to
Edi Weitz <e...@agharta.de> wrote in message news:<8765vrpj...@bird.agharta.de>...

> 1. How do the currently available implementations (I think that's
> CLISP, OpenMCL, AllegroCL) compare? Which one do you prefer and
> why? (Stability, correctness, OS integration, ...)
>
> (I have no plans to use Classic so MCL is not an option - yet.)

OpenMCL is much faster than CLISP since everything in OpenMCL is
compiled to native PPC code, not bytecode (or, slower still,
interpreted). I believe it's more ANSI compliant wrt CLOS as well.

>
> 2. What about speed? I don't mean PowerPC compared to x86 but rather
> Lisp compared to other languages on the same platform. On Linux
> I've often seen CMUCL and also LispWorks and AllegroCL to be on par
> with C/C++ programs. Is this also the case on OS X?
>

OpenMCL is very fast - faster than c++ for recursive function call
intensive benchmarks like tak, and just as fast on an array access
benchmark like the Sieve of Eratosthenes. BTW, OpenMCL has the same
compiler core as MCL. You can even get a crude Cocoa IDE to run, but
it slows things down significantly (~25%) relative to the command
line.


> 3. How good is Emacs on Mac OS X? I've heard current CVS sources will
> build an Aqua version - is anybody actually using it and can
> recommend it?

I'll see your OS flamewar, and raise you an editor flamewar! ;^) Just
kidding. I don't use Emacs, but I have run it and it seems to be the
same ugly kitchen sink under MacOS X as it is under Linux, etc. Of
course, if you actually *like* Emacs, then, yes, it's available for
MacOS X. I think there may be some sort of integration for OpenMCL as
well.


>
> 4. Do you think a G3 700MHz iBook is fast enough for OS X? (Mainly
> development, mail, writing, browsing - no fancy multimedia stuff.)

Yes. I got one for my daughter a couple of months ago precisely
because it does Quartz Extreme (i.e., OpenGL video card acceleration
of all GUI screen compositing/drawing, which is an issue, because
MacOS X uses transparency for everything.) In fact, you can even use
it for the fancy multimedia stuff too (listen to mp3s, rip CDs, watch
movies, while doing the other stuff, with no hiccups).

>
> 5. Other comments are welcome as well.

I would use OpenMCL while waiting for MCL native for MacOS X to ship
since you don't want to use Classic (I agree here). You could also
contact Digitool and ask to be a beta tester for MCL native for MacOS
X - that way you'll hit the ground running even faster when MCL native
for OS X ships.

>
> Sorry for the slightly dumb questions but I try to avoid the fiasco I
> had last time.

Yes, MacOS X 10.0 was not quite ready for prime time in some ways, but
10.2 on a 700 MHz iBook is very usable - I run it all the time when I
don't want to be tied to my desk (I just borrow my daughter's iBook -
the Airport wireless net is nice too).

> Thanks in advance,
> Edi.

You're welcome.

Christopher Browne

unread,
Oct 25, 2002, 12:59:13 AM10/25/02
to
Oops! Chris Beggy <chr...@kippona.com> was seen spray-painting on a wall:

.. which may have really nice stereo hardware, and some interesting
DSP hardware, but is /really/ lean on CPU power, RAM, and disk space,
and more or less totally non-expandable.

By the time you add the "run Linux on it" additions, it's about the
price of a "real computer," and the result is a pretty wimpy system.
I know; I've seen one in action. It's not totally worthless, but it's
no "Lisp Machine-would-be"...
--
(concatenate 'string "cbbrowne" "@cbbrowne.com")
http://www.ntlug.org/~cbbrowne/sap.html
What should you do when you see an endangered animal that is eating an
endangered plant?

Vassil Nikolov

unread,
Oct 25, 2002, 1:18:29 AM10/25/02
to
On 24 Oct 2002 22:34:19 +0100, Tim Bradshaw <t...@cley.com> said:

[...]
TB> Itanic

Was this spelling intentional?

---Vassil.

--
Non-googlable is googlable.

Espen Vestre

unread,
Oct 25, 2002, 3:02:29 AM10/25/02
to
t...@fallingrocks.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> Apple OS. 10.0 was even worse than normal. I didn't hear anyone who
> was happy with 10.0 as their primary OS, but I've heard very few

I was happy with 10.0 (but not as my primary OS) - compared to e.g. the
first versions of Solaris 2 or *.0 (1)-versions of Cisco IOS, it was
rock solid ;-)
--
(espen)

Espen Vestre

unread,
Oct 25, 2002, 3:12:26 AM10/25/02
to
greg_n...@yahoo.com (Greg Neumann) writes:

you don't have to put a .command extension on shell scripts; you can
just tell OS X that you want them to be opened by Terminal (but you
do need to wrap programs up in a shell script to make that work, I think).
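The wrapper Espen mentions can be as small as this. A sketch, with echo standing in for the real program (e.g. a Lisp image); saved with a .command extension, or associated with Terminal by hand, Finder will open it in a Terminal window:

```shell
#!/bin/sh
# hello.command -- a double-clickable Terminal wrapper.
# `echo` is a placeholder for the program actually being wrapped.
cd "${HOME:-/tmp}"    # scripts opened by Terminal don't start in $HOME
echo "running from $(pwd)"
```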
--
(espen)

Rob Warnock

unread,
Oct 25, 2002, 4:38:45 AM10/25/02
to
Vassil Nikolov <vnik...@poboxes.com> wrote:
+---------------

| Tim Bradshaw <t...@cley.com> said:
| TB> Itanic
|
| Was this spelling intentional?
+---------------

Almost certainly, since in much of the world these days
the term "Itanic" is used as a sardonic pun and comment
on the perceived [by those using the term] probable future
of the Itanium Processor Family. As in: "This *has* to
succeed! We *cannot* fail! This *will* succee...*CRASH!*
Oops, where did that iceberg come from?"


-Rob

-----
Rob Warnock, PP-ASEL-IA <rp...@rpw3.org>
627 26th Avenue <URL:http://www.rpw3.org/>
San Mateo, CA 94403 (650)572-2607

Christopher C. Stacy

unread,
Oct 25, 2002, 6:50:11 AM10/25/02
to
Symbolics had a high-performance RISC chip that could beat the SPARC family.
It also had the ability to move Ivory (the processor that you may be more
familiar with) into newer processes without much cost. Unfortunately,
they fired basically all of that staff prior to releasing those products.

The lossage at Symbolics was not about hardware performance.
It was about management vision, direction, and understanding the market.

Having made large real estate investments at the worst possible time
was the proximate cause of the company's demise.

George Demmy

unread,
Oct 25, 2002, 8:26:49 AM10/25/02
to
Edi Weitz <e...@agharta.de> writes:
> Let me take a this chance to ask Mac OS Lispers a couple of questions.
>
> 3. How good is Emacs on Mac OS X? I've heard current CVS sources will
> build an Aqua version - is anybody actually using it and can
> recommend it?

My wife and I purchased an eMac with OS X 10.1 for "around the house
stuff". My background is unixy *but* I'm a devoted emacs user. I've
been using control-next-to-a keyboards and emacs meta next to the space
bar from way back. I ran into a wall of cognitive dissonance with the
default mac key bindings. Meta is "pretty important" in the emacs
world, and dragging my thumb off of the space bar onto the adjacent
key is as natural as breathing after doing it for years. Now, that
spot is occupied by the Apple command key -- a "pretty important" key
for Apple folks. Of course, Apple has gone along with everyone else in
moving the caps lock over where the control key belongs (control key
is "pretty important" in the emacs world...), so that one-two punch
dazed me a bit. The point of all of this is that you might consider
getting some software to rebind the keys if you've developed any
habits that might not translate well. Since I share this computer with
my wife (a professional Mac user, btw), I *didn't* rebind the keys,
and I prefer using my old toshiba (as in 167MHz w/ 80MB ram) laptop
running linux as an emacs launcher with the keys in the right
spot. I'm considering getting a different keyboard so that we might
share the machine, and I can get an opportunity to see how emacs dances on a
G4...

>
> Thanks in advance,
> Edi.

Good luck!
--
George Demmy

Joe Marshall

unread,
Oct 25, 2002, 9:07:28 AM10/25/02
to
Tim Bradshaw <t...@cley.com> writes:

> In order for it to be competitive with the current
> bleeding edge chips for Lisp you have to get 2-4 times the performance
> that you can get for a Lisp system on the bleeding edge chips.

At *least* that.

> And it won't run Windows, so you are looking at getting a decent
> chunk of the Linux market, or starting a new proprietary-
> architecture server company. Don't ask me to invest in this.

I agree completely. I wouldn't invest money in something like this.

On the other hand, I would invest a lot of time (and someone else's
money) in it because it's something that I'd love to do.

Lars Brinkhoff

unread,
Oct 25, 2002, 9:25:17 AM10/25/02
to
George Demmy <gde...@layton-graphics.com> writes:
> My wife and I purchased an eMac with OS X 10.1 for "around the house
> stuff". My background is unixy *but* I'm a devoted emacs user. I've
> been using control-next-to-a keyboards and emacs meta next to the
> space bar from way back. I ran into a wall of cognitive dissonance
> with the default mac key bindings.

In a similar situation, I very quickly replaced the Apple keyboard
with a Happy Hacking keyboard. Works like a charm.

--
Lars Brinkhoff http://lars.nocrew.org/ Linux, GCC, PDP-10,
Brinkhoff Consulting http://www.brinkhoff.se/ HTTP programming

Alain Picard

unread,
Oct 25, 2002, 9:31:07 AM10/25/02
to
Edi Weitz <e...@agharta.de> asks:

> 3. How good is Emacs on Mac OS X? I've heard current CVS sources will
> build an Aqua version - is anybody actually using it and can
> recommend it?

The native aqua version (of GNU emacs 21) still crashes occasionally.
If you run the X version under XDarwin, you're fine.

You _have_ to use uControl to remap the caps-lock to control,
or you go totally carpal and insane.


> 4. Do you think a G3 700MHz iBook is fast enough for OS X? (Mainly
> development, mail, writing, browsing - no fancy multimedia stuff.)

I have a 500MHz iBook. To me, OS X 10.2 feels slow as molasses.
My iBook dual boots into linux; it feels at *least* 3 times faster
under linux. Of course, you lose the beautiful Display PDF...
it's your call. I _hate_ waiting. Did I mention it's SLOW?

> 5. Other comments are welcome as well.

I was very excited at the prospect of a "user friendly unix"; but
found it took me weeks to find out where the hell Apple had put
everything (short summary; there's this thing called "netinfo manager"
which is a big DB of what you'd normally find in the zillion text
files in /etc on a "normal" unix system).

It looks dazzlingly beautiful, no doubt, but a hard core unixer
always wants his _own_ key bindings, window manager, etc etc, which
are all difficult or impossible to get under os X.

To me, it basically feels more like a system for "users" rather than
developers. It does a good job of being a nice, stable box(*) if you
don't try to do anything fancy.

Biggest drawback is it doesn't run CMUCL. :-/

All in all, I keep using it anyway (how I got to have an iBook is a
long story) but I'd trade it in for an x86 laptop running linux on
equally sleek hardware(+). [The iBook hardware is _gorgeous_, though, I
have to admit. The powerbook even more so. Just astounding.]


(*) It still hangs the finder forever if you unplug the laptop
from the lan with mounted remote servers... I guess linux
does that too, but on linux there isn't a central "finder"
app which renders the box useless when it hangs. You can
always spawn a new shell and kill -9 the locked PID.
If anyone knows how to restart a locked finder, I'd love to know.

(+) Let's face it---what I really want is that sexy 980g,
VHS-cassette-sized Sony thing, but who's got that kind of cash!?
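The kill-the-hung-process recipe from footnote (*) can be scripted. A self-contained sketch: Finder is the intended target (name="Finder"), but here a throwaway background sleep stands in, so nothing real gets killed; whether 10.2 relaunches a killed Finder automatically is exactly the open question above.

```shell
#!/bin/sh
# Find a hung process by command name and kill -9 it, as one would
# from a fresh shell for a locked Finder. A disposable `sleep` child
# stands in for Finder so the sketch is safe to run anywhere.
sleep 300 &
name="sleep"
pid=$(pgrep -P $$ -x "$name" | head -n 1)   # only look at our own children
if [ -n "$pid" ]; then
    kill -9 "$pid"
    wait "$pid" 2>/dev/null   # reap it; exit status reflects the signal
    echo "killed $name (pid $pid)"
fi
```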

George Demmy

unread,
Oct 25, 2002, 9:42:19 AM10/25/02
to
Lars Brinkhoff <lars...@nocrew.org> writes:

I'm drooling. Thanks for the tip! Can you "hot swap" the keyboards?

Hej då
--
George Demmy

Raymond Toy

unread,
Oct 25, 2002, 9:39:37 AM10/25/02
to
>>>>> "George" == George Demmy <gde...@layton-graphics.com> writes:

George> for Apple folks. Of course, Apple has gone along with everyone else in
George> moving the caps lock over where the control key belongs (control key
George> is "pretty important" in the emacs world...), so that one-two punch
George> dazed me a bit. The point of all of this is that you might consider
George> getting some software to rebind the keys if you've developed any
George> habits that might not translate well. Since I share this computer with

AFAIK, you can't rebind the keys. Well, you can, but it doesn't work
like you would want. I tried with xkeycaps. The capslock key now
becomes a control-lock key. Every key thereafter is a control key.
You have to hit control-lock again to turn it off.

But maybe a standard PC USB keyboard will work better. I don't know
of any Mac software to rebind the keys, though....

Ray

Raymond Toy

unread,
Oct 25, 2002, 9:35:23 AM10/25/02
to
>>>>> "Thomas" == Thomas F Burdick <t...@fallingrocks.OCF.Berkeley.EDU> writes:

Thomas> To put this in perspective, only loonies get version point-zero of an
Thomas> Apple OS. 10.0 was even worse than normal. I didn't hear anyone who
Thomas> was happy with 10.0 as their primary OS, but I've heard very few
Thomas> negative experiences with 10.2, and mostly from Mac users, whose
Thomas> complaints sound mind-bogglingly nit-picky to a Unix user (and this
Thomas> nit-pickiness is probably why it's as nice as it is). So, while it

I was quite happy with 10.0 (my first Mac) for what it was used for.
10.1 was much better. 10.2 has become worse. My wife is quite
upset with me because she can't use Apple's Mail.app to send mail in
Korean that her friends can read. (It seems it uses iso-2022-kr
encoding instead of euc-kr). It can't talk to the printer on my
Linux box anymore. Yuck.

Thomas> I've played with MCL under Classic, and it's quite nice -- I can see
Thomas> why it has the reputation it does. CLISP is CLISP, on every platform.

Clisp on OS X is missing the FFI (because no one has ported the
RS-6000 FFI to powerpc?).

>> > 4. Do you think a G3 700MHz iBook is fast enough for OS X? (Mainly
>> > development, mail, writing, browsing - no fancy multimedia stuff.)

Thomas> That's plenty fast; just be sure you get enough memory. 128MB is the
Thomas> minimum, so you really want >= 256MB.

My iMac 400 MHz is more than adequate for mail, writing, browsing.
Development is ok, especially compared to the 300 MHz sparc at
work. :-)

Ray

Lars Brinkhoff

unread,
Oct 25, 2002, 9:55:18 AM10/25/02
to
George Demmy <gde...@layton-graphics.com> writes:
> Lars Brinkhoff <lars...@nocrew.org> writes:
> > George Demmy <gde...@layton-graphics.com> writes:
> > > My wife and I purchased an eMac ... I ran into a wall of
> > > cognitive dissonance with the default mac key bindings.
> > In a similar situation, I very quickly replaced the Apple keyboard
> > with a Happy Hacking keyboard. Works like a charm.

I should have mentioned that I run Linux on the Mac, so I'm not as
sure that it works well in MacOS, but I see no reason why not.

> I'm drooling. Thanks for the tip! Can you "hot swap" the keyboards?

Sure. In Linux, anyway.

Russell McManus

unread,
Oct 25, 2002, 10:09:25 AM10/25/02
to

I bet you can plug in a happy hacking USB keyboard and achieve emacs
happiness, then put it in a drawer when your wife wants to use the
machine.

-russ

Michael Livshin

unread,
Oct 25, 2002, 10:11:44 AM10/25/02
to
Lars Brinkhoff <lars...@nocrew.org> writes:

> George Demmy <gde...@layton-graphics.com> writes:
>> My wife and I purchased an eMac with OS X 10.1 for "around the house
>> stuff". My background is unixy *but* I'm a devoted emacs user. I've
>> been using control-next-to-a keyboards and emacs meta next to the
>> space bar from way back. I ran into a wall of cognitive dissonance
>> with the default mac key bindings.
>
> In a similar situation, I very quickly replaced the Apple keyboard
> with a Happy Hacking keyboard. Works like a charm.

couldn't you just remap the keys? I know X allows you to do that,
dunno about Aqua.

I use a regular keyboard, but it has two controls (who needs the
CapsLock thing, anyway?), Meta instead of Alt and Alt instead of the
window key. Thank (symbol-value '*deity*) for xmodmap.
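The arrangement just described (CapsLock demoted to a second Control, Alt serving as Meta) might look roughly like this as an ~/.Xmodmap fragment. A sketch only: keysym names vary by keyboard, so verify with xev first.

```
! ~/.Xmodmap -- CapsLock becomes a second Control, Alt acts as Meta.
! Keysym names vary by keyboard; check yours with xev before relying
! on this.
remove lock = Caps_Lock
keysym Caps_Lock = Control_L
add control = Control_L
keysym Alt_L = Meta_L
```

Loading it with "xmodmap ~/.Xmodmap" from a session startup file makes it stick across logins.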

--
All ITS machines now have hardware for a new machine instruction --
CIZ
Clear If Zero.
Please update your programs.

Will Deakin

unread,
Oct 25, 2002, 10:29:23 AM10/25/02
to
Tim Bradshaw wrote:

> * I wrote:
>>Sure. When I was at Manchester I got drinking with a couple of blokes
>>working in the comp.sci department who were involved in post-grad chip
>>burning. IIRC they were banging out runs of tens of chips in a
>>facility that cost about £5-10M sterling.
> Yes, but what kind of chip - it really does matter. Things like
> transistor counts have gone up a lot and feature sizes have gone down
> a lot for microprocessors. Both of these make *making* the things
> *very* expensive, especially if you want to get a good enough yield
> that you can sell them for anything like reasonable money.
Sure. Please don't take my hazy, drink-addled ramblings as a statement
of fact. Also this was about 8-9 years ago.

From what I understood they were making CPU chips with strange clock
arrangements: different parts of the chip -- say the cache -- ran at a
constant factor faster than, say, the FPU. It was alleged to be about
as powerful as the 386, which was fairly current at the time but
probably not bleeding edge. Also, this was by no means a
production-scale manufacturing process.

> Since that time the cost of designing a competitive box
> has gone up really a lot unless you want to just clone what someone
> else has done, including using all their custom logic (like processors
> &c). A whole world of small computer makers (and some large ones) has
> vanished in the last 10-15 years, and I think that cost is why.

Yes. Things *have* changed a lot. (I still remember playing with stuff
like the ZX81, Dragon, Apricot and Acorn...)

:)w

Chris Beggy

unread,
Oct 25, 2002, 11:52:36 AM10/25/02