
Best combination of {hardware / lisp implementation / operating system}


Jules F. Grosse

Oct 22, 2002, 3:13:58 PM

What is the best combination of {hardware / Lisp implementation /
operating system} generally available today for a Lisp environment?

One could wonder what "best" means here -- I would say it is how
efficiently the Lisp implementation uses the hardware to boost the
performance of Lisp programs. Classical example: I consider
Genera/Lisp Machines an excellent combination (since they were made
for one another, of course, but you get my point).

This is a flammable topic, I understand, but I think a good one.

As for the possible combinations, I would say (but not restricted to):

hardware: alpha, sun, hp, risc, ibm, intel, amd, powerpc, etc.
lisp implementation: cmucl, sbcl, clisp, allegro, lispworks, corman,
mcl, openmcl
operating system: the usual ones for the hardware platforms above

tia

Will Deakin

Oct 22, 2002, 4:20:24 PM

Jules F. Grosse wrote:
> One could wonder what "best" means here -- I would say it is how
> efficiently the Lisp implementation uses the hardware to boost the
> performance of Lisp programs.
Yes one could. For me this is a strange definition of "best" and reminds
me of the use by Michael Jackson of the word `bad.'

I would suggest that there are (at least) three components to this: how
`good' the lisp implementation is at generating native binary code, how
`good' the operating system is at providing an environment for running
this binary, and finally how `good' the hardware is that executes the
whole three-ring circus.

> On the possible combinations I would say (but not restricted to):
>
> hardware: alpha, sun, hp, risc, ibm, intel, amd, powerpc, etc.

Hmmm. To be really picky, I think you have conflated three different
things here: OS vendor (sun, hp, ibm); processor or hardware
manufacturer (alpha, intel, amd, powerpc); and generic processor type
(risc.) So, for example, if you include risc I would have expected to
see cisc too.

FWIW, to answer the OS and hardware question: for the work I do I have
always found the sun/sparc/solaris combination to be robust, stable
and well supported. I also have good experiences of ibm/rs6000/aix and
(much more limited) positive experience, IIRC, of hp/powerpc/hp-ux. But
then again, if you pay the money this is no more than you would expect.
However, I could say the same of OEM pc bits/amd/linux.

With regard to lisp implementations, all have benefits and issues.
However, having spent a rather torrid day moving the contents of deleted
tables from an oracle database in the US to one in India -- because of a
(suspected) bug -- I would humbly suggest that rather than worrying
about the implementation, you write portable code on an implementation you
feel comfortable with (be that for price, ease of use or whatever), look
at getting that right, and only at that point worry about the `best'
implementation. You can always move later: if, say, what you have written
turns out to be really floating-point mad, cons-tastic or integer-bound,
get an implementation that is good at that and can be measured to be so.

Hope this is of some little help,

:)w

Frank A. Adrian

Oct 22, 2002, 10:16:35 PM

Jules F. Grosse wrote:

> What is the best combination of {hardware / Lisp implementation /
> operating system} generally available today for a Lisp environment?
>
> One could wonder what "best" means here -- I would say it is how
> efficiently the Lisp implementation uses the hardware to boost the
> performance of Lisp programs. Classical example: I consider
> Genera/Lisp Machines an excellent combination (since they were made
> for one another, of course, but you get my point).

Then I would say that Open Genera on an Alpha system would be your best bet.

faa


Dave Bakhash

Oct 23, 2002, 1:01:47 PM

jlsg...@netscape.net (Jules F. Grosse) writes:

> What is the best combination of {hardware / Lisp implementation /
> operating system} generally available today for a Lisp environment?

It's a silly question (since "best" is not a straightforward metric),
but all things considered (including price, performance, flexibility,
etc.), here's my vote, FWIW:

hardware: PC x86
Common Lisp implementation: Xanalys LispWorks
OS: Linux

the hardware is cheap; Linux is fast and free; LispWorks under Linux is
very good and affordable, supports ODBC, Corba, and lots more; no
runtime licenses.

Ng Pheng Siong

Oct 23, 2002, 9:23:44 PM

According to Jules F. Grosse <jlsg...@netscape.net>:

> What is the best combination of {hardware / Lisp implementation /
> operating system} generally available today for a Lisp environment?

As always, it depends on your situation:

1. Start-up on a shoestring.

2. Start-up with $12mil venture capital. ;-)

3. Employed person or student play-playing in your copious free time.

4. Employed person trying to sneak Lisp into the shop.

5. What OS you're familiar with.

6. What Lisp implementations you're familiar with.

7. The application you are building; its delivery and threat models.

Etc. etc.

I'm (1). I decided on (5) right off the bat. Chose (6) after a short eval
(when I was a rank newbie... there: 12mil VC dollars riding on a newbie's
decision. ;-) I deal with (7) daily.


--
Ng Pheng Siong <ng...@netmemetic.com> * http://www.netmemetic.com

Tim Bradshaw

Oct 24, 2002, 9:10:22 AM

* Jules F Grosse wrote:
> What is the best combination of {hardware / Lisp implementation /
> operating system} generally available today for a Lisp environment?

> One could wonder what "best" means here -- I would say it is how
> efficiently the Lisp implementation uses the hardware to boost the
> performance of Lisp programs. Classical example: I consider
> Genera/Lisp Machines an excellent combination (since they were made
> for one another, of course, but you get my point).

Well, this turns out to be wrong. The Lisp machines did have special
support, but (1) it turns out to be possible to write compilers which
will produce perfectly decent code without requiring HW support, and
(2) because it costs a *huge* amount of money (billions of dollars a
year) to produce processors which perform well, unless you have enough
market share to spend these billions of dollars every year, you *must*
target the processors on which this money is being spent. Even in the
late 80s the LispMs were lagging other systems in terms of
performance; by now the situation is hopeless.

Similarly, producing and maintaining an OS costs a huge amount of
money - probably also billions of dollars a year. Unlike hardware,
it's rather easy to conceal this cost unfortunately. So, say, Linux is
`free' which actually means that there's a huge accounting scandal
where hundreds of thousands of students are slaving away on the thing
but not getting paid, and tens of thousands of employees are also
`borrowing' time from their employers to work on it which is getting
misaccounted for. But if you look at companies that do maintain
commercial OSs - Microsoft, Sun, et al. - you'll soon see that they cost
lots of real money. Despite the idiot `gift economy' stuff that
people spewed out in the dot-com years, there is no such thing as a
free OS, lunch, chip, editor, lisp implementation or whatever -
someone is paying and my guess is that the `free' systems cost about
the same as the `commercial' systems if the accounting is done
correctly (of course, it never will be done correctly). So you need
to target one of the existing OSs. Fortunately, it turns out that
they're not too bad - despite all the Unix-haters stuff, Unix has
turned into quite a decent OS by now, and it supports Lisp quite
nicely. Maybe even Windows is OK (certainly, it benefits from a
standard GUI, which all the Unixes seem to be racing to duplicate
while pretending to `innovate': I wish either of GNOME or KDE was half
as innovative as my 10-year-old tvtwm setup...).

The underlying point behind all this is this: modern lisp systems on
modern off-the-shelf hardware perform maybe 50% as well as bummed C
code. Probably if the Lisp code is similarly bummed (probably taking
advantage of implementation-specific stuff) you can get better than
that - say 70-100%. So the *best possible* gain from spending huge
money is a factor of 2. Alternatively you can just do nothing, and in
6 months the thing will be twice as fast anyway.

--tim

Pascal Costanza

Oct 24, 2002, 9:34:11 AM

Tim Bradshaw wrote:
> * Jules F Grosse wrote:
>
>>What is the best combination of {hardware / Lisp implementation /
>>operating system} generally available today for a Lisp environment?
>
>>One could wonder what "best" means here -- I would say it is how
>>efficiently the Lisp implementation uses the hardware to boost the
>>performance of Lisp programs. Classical example: I consider
>>Genera/Lisp Machines an excellent combination (since they were made
>>for one another, of course, but you get my point).

> Well, this turns out to be wrong.

[...]

> So you need
> to target one of the existing OSs. Fortunately, it turns out that
> they're not too bad - despite all the Unix-haters stuff, Unix has
> turned into quite a decent OS by now, and it supports Lisp quite
> nicely. Maybe even Windows is OK (certainly, it benefits from a
> standard GUI, which all the Unixes seem to be racing to duplicate
> while pretending to `innovate': I wish either of GNOME or KDE was half
> as innovative as my 10-year-old tvtwm setup...).

You have forgotten to include Mac OS X in your list which is currently
the best OS available, IMHO. It's based on Unix (BSD+Mach Kernel), runs
X11 applications, runs "classic" Mac applications, runs "real" OS X
applications, has one of the best Java implementations, and even
Microsoft Office for OS X is better than the Windows version. ;) It's
already supported by Allegro Common Lisp, and Macintosh Common Lisp is
just around the corner. Then, there are also some "free" Common Lisps
available for Mac OS X. So Mac OS X offers plenty of value.


Pascal

--
Pascal Costanza University of Bonn
mailto:cost...@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Immanuel Litzroth

Oct 24, 2002, 9:57:46 AM

>>>>> "Pascal" == Pascal Costanza <cost...@web.de> writes:

Pascal> Tim Bradshaw wrote:
>> So you need to target one of the existing OSs. Fortunately, it
>> turns out that they're not too bad - despite all the
>> Unix-haters stuff, Unix has turned into quite a decent OS by
>> now, and it supports Lisp quite nicely. Maybe even Windows is
>> OK (certainly, it benefits from a standard GUI, which all the
>> Unixes seem to be racing to duplicate while pretending to
>> `innovate': I wish either of GNOME or KDE was half as
>> innovative as my 10-year-old tvtwm setup...).

Pascal> You have forgotten to include Mac OS X in your list which
Pascal> is currently the best OS available, IMHO. It's based on
Pascal> Unix (BSD+Mach Kernel), runs X11 applications, runs
Pascal> "classic" Mac applications, runs "real" OS X applications,
Pascal> has one of the best Java implementations, and even
Pascal> Microsoft Office for OS X is better than the Windows
Pascal> version. ;) It's already supported by Allegro Common Lisp,
Pascal> and Macintosh Common Lisp is just around the corner. Then,
Pascal> there are also some "free" Common Lisp's available for Mac
Pascal> OS X. So Mac OS X offers plenty of value.

I work on MacOSX daily and beg to differ. The integration between the
graphical system and the commandline unix system is very bad, starting
up the classic system makes the whole system unstable, it is very slow,
and programming it is difficult because of the complexity of its
subsystems and their interaction and the dearth of documentation. It
offers little or nothing in comparison to an out-of-the-box suse or
redhat system. I can send you a list of major & minor annoyances,
beginning with the fact that you can't start commandline executables
from the finder.
Immanuel


Jules F. Grosse

Oct 24, 2002, 10:05:39 AM

>
> As always, it depends on your situation:
>

Actually, I don't have an OS preference, but I would tend towards
UNIX-like ones. But if there is something that is GREAT on Windows
(or MacOSX or MacOS), then I could consider using that.

Regarding costs, I wouldn't worry about this right now. If there is
an EXCEPTIONAL option that is more expensive than all the others, I
may be able to purchase it.

There is no definite application, so I want to consider a general
case.

thanks

Jules F. Grosse

Oct 24, 2002, 10:10:00 AM

> I would suggest that there are (at least) three components to this: how
> `good' is the lisp implementation at generating native binary code, how
> `good' is the operating system at provinding an environment for running
> this binary and finally how `good' is the hardware that executes the
> whole three-ring circus.

Actually, that's what I meant, but you were clearer in the explanation.
Thanks.

> Hmmm. To be really picky, I think you have conflated three different
> things here: OS vendor (sun, hp, ibm); processor or hardware

Yes, you are right. I was lazy on that point, but apparently you got
the idea.

>
> With regard to lisp implementation, all have benefits and issues.
> However, having spent a rather torid day moving the contents of deleted

Those are the issues that interest me. To give you a specific point: is
cmucl on x86 worse than cmucl on sparc? Does clisp on Windows
perform better than clisp on linux? And so on.

From what I see, there isn't an exact answer to this question. Well, one
could deduce that from the number of excellent options
(cmucl, sbcl, clisp, allegro, lispworks, scl, etc.) out there. But
surely there are some good points and bad points to be observed about
them.

thanks for your time

Jules F. Grosse

Oct 24, 2002, 10:11:34 AM

>
> Then I would say that Open Genera on an Alpha system would be your best bet.
>

Maybe, but given that Genera isn't supported anymore and that the
Alpha line has been discontinued, this is really a dangerous option
in terms of support, right?

thanks

Joe Marshall

Oct 24, 2002, 10:26:13 AM

Tim Bradshaw <t...@cley.com> writes:

> * Jules F Grosse wrote:
> > What is the best combination of {hardware / Lisp implementation /
> > operating system} generally available today for a Lisp environment?
>
> > One could wonder what "best" means here -- I would say it is how
> > efficiently the Lisp implementation uses the hardware to boost the
> > performance of Lisp programs. Classical example: I consider
> > Genera/Lisp Machines an excellent combination (since they were made
> > for one another, of course, but you get my point).
>
> Well, this turns out to be wrong. The Lisp machines did have special
> support, but (1) it turns out to be possible to write compilers which
> will produce perfectly decent code without requiring HW support, and
> (2) because it costs a *huge* amount of money (billions of dollars a
> year) to produce processors which perform well, unless you have enough
> market share to spend these billions of dollars every year, you *must*
> target the processors on which this money is being spent. Even in the
> late 80s the LispMs were lagging other systems in terms of
> performance, by now the situation is hopeless.

Wait a sec! The LMI K-machine cost about $1 million to develop and
ran at about 13 million instructions per second. The TAK benchmark
(the only one I remember off hand) with full safety completed in .03
seconds. This was in 1986. Lisp on stock hardware did not catch up
to this level of performance until the late 90s.
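
For reference, TAK here is the tiny Gabriel benchmark; a minimal Common
Lisp sketch of it, with the canonical (tak 18 12 6) call that the quoted
times refer to, looks roughly like this:

  ;; The Gabriel TAK benchmark; (tak 18 12 6) returns 7.
  (defun tak (x y z)
    (if (not (< y x))
        z
        (tak (tak (1- x) y z)
             (tak (1- y) z x)
             (tak (1- z) x y))))

  ;; Time a single canonical call.
  (time (tak 18 12 6))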

True, LMI's other products were dogs (the LMI Lambda took seven
seconds to complete TAK), but performance is not the main predictor
of what people will buy.

> The underlying point behind all this is this: modern lisp systems on
> modern off-the-shelf hardware perform, maybe 50% as well as bummed C
> code. Probably if the Lisp code is similarly bummed (probably taking
> advantage of implementation-specific stuff) you can get better than
> that - say 70-100%. So the *best possible* gain from spending huge
> money is a factor of 2.

Maybe a bit more than 2 (but certainly much less than 100), but it
*does* cost a *lot*.

> Alternatively you can just do nothing, and in
> 6 months the thing will be twice as fast anyway.

And it's free. It's that zero in the denominator that gives such a
huge ratio of performance to price.

Tim Bradshaw

Oct 24, 2002, 11:04:46 AM

* Joe Marshall wrote:

> Wait a sec! The LMI K-machine cost about $1 million to develop and
> ran at about 13 million instructions per second. The TAK benchmark
> (the only one I remember off hand) with full safety completed in .03
> seconds. This was in 1986. Lisp on stock hardware did not catch up
> to this level of performance until the late 90s.

Yes, but could you actually *buy* a K machine? I was under the
impression that you couldn't. I'm not trying to be negative about it
(from what I've read it was a very interesting system), but it's
important to compare like with like - in particular you need to look
at the actual cost of the system to end users complete with OS and so
on. This was especially true in the 80s where there was much more
room for throwing money at a single-CPU system to make it run faster
(unlike now, where you can buy a 2.xGHz cpu for a few hundred dollars,
but building a 10GHz CPU with comparable CPI would cost you billions).

Secondly, if the K machine had been commercially produced, how fast
would it have run C? In particular, could some variant of the tricks
that made Lisp run very fast on it have been used to make C run very
fast? If not, why not?

As I said above, I really don't want to be negative about the K
machine - I've read only a tiny description of it, and I'm not in a
position to make any judgements. But from what I have read it looks
like the classic 80s RISC win applied to Lisp - fast clock, all
instructions complete in one clock (or fixed clocks) enabling
pipelining, optimistic execution with later backing out, load-store
architecture (?), parallelism in the HW to do things like type checks
&c &c. Obviously you know much more about this than I do!

So it looks, to me, like the K machine was really about the only case
where someone actually did the sums on performance rather than
ritually reproducing the kind of hardware that had seemed reasonable in
the 70s.

(Of course, this leaves me the inconvenient problem of explaining why
it took so long to realize these wins for stock hardware Lisps. I'll
just punt on that...)

--tim

Duane Rettig

Oct 24, 2002, 1:00:00 PM

Tim Bradshaw <t...@cley.com> writes:

> * Joe Marshall wrote:
>
> > Wait a sec! The LMI K-machine cost about $1 million to develop and
> > ran at about 13 million instructions per second. The TAK benchmark
> > (the only one I remember off hand) with full safety completed in .03
> > seconds. This was in 1986. Lisp on stock hardware did not catch up
> > to this level of performance until the late 90s.

Joe, this is incorrect. My earliest on-line (CVS) records show that
the 1993 tak benchmarks for sun3 were .05 (both run time and real time)
and for sparc were .03 (run time and real time). I'd have to dig out
some archives for earlier results, but in my memory we did most of the
super-optimizations right after Gabriel's book ("Performance and Evaluation
of Lisp Systems", MIT Press, 1985) came out, in the mid to late 80's
and _not_ the late 90's.

Of course, it was also the late 80's to early 90's when we really started
realizing that Gabriel's benchmarks (aka the "Stanford benchmarks") did
not represent real applications and in some cases are actually detrimental
to total system performance. So a second wave of optimizations took place
in the early 90's geared more toward total system performance. These
optimizations are ongoing.

[Tim Bradshaw <t...@cley.com> writes:]

> (Of course, this leaves me the inconvenient problem of explaining why
> it took so long to realize these wins for stock hardware Lisps. I'll
> just punt on that...)

Punt away; the time was a decade off anyway.

The reason why we didn't optimize CL until even as late as the mid to
late 80's is that people did not demand such performance strongly
until then. At that time, Lisp was "Big and slow", and even Lisp
proponents had bought into that lie, thus making it artificially the
truth.

Of course, very likely the first fastest Lisp in the world was the
first port I did of Franz Lisp to the Amdahl 580 in 1984. It's
probably not fair, though, for the same reason as for the K machine...

--
Duane Rettig du...@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182

Tim Bradshaw

Oct 24, 2002, 2:14:20 PM

* Duane Rettig wrote:

> Joe, this is incorrect. My earliest on-line (CVS) records show that
> the 1993 tak benchmarks for sun3 were .05 (both run time and real time)
> and for sparc were .03 (run time and real time). I'd have to dig out
> some archives for earlier results, but in my memory we did most of the
> super-optimizations right after Gabriel's book ("performance and Evaluation
> of Lisp Systems", MIT Press, 1985) came out, in the mid to late 80's
> and _not_ the late 90's.

If those are correct (which I'm sure they are!) then you must have (or
could have) been equalling the K machine by ~1990 - there were some
68k HP boxes which were really a lot faster than any of the Sun3s in
1989-90 as I remember, and I don't think that Sun3s got any faster
after that time frame. And presumably if you ran on any of the
high-performance RISC machines (such as: anything but SPARC...) back
then you could have done a lot better.

So I feel comforted by that (:-).

> Of course, it was also the late 80's to early 90's when we really started
> realizing that Gabriel's benchmarks (aka the "Stanford benchmarks") did
> not represent real applications and in some cases are actually detrimental
> to total system performance. So a second wave of optimizations took place
> in the early 90's geared more toward total system performance. These
> optimizations are ongoing.

Whenever I look at the Gabriel benchmarks and try and compare them to
what code I write does, I feel pretty uncomfortable that they measure
anything at all other than performance on the Gabriel benchmarks...

--tim

Joe Marshall

Oct 24, 2002, 2:48:12 PM

Duane Rettig <du...@franz.com> writes:

> Tim Bradshaw <t...@cley.com> writes:
>
> > * Joe Marshall wrote:
> >
> > > Wait a sec! The LMI K-machine cost about $1 million to develop and
> > > ran at about 13 million instructions per second. The TAK benchmark
> > > (the only one I remember off hand) with full safety completed in .03
> > > seconds. This was in 1986. Lisp on stock hardware did not catch up
> > > to this level of performance until the late 90s.
>
> Joe, this is incorrect. My earliest on-line (CVS) records show that
> the 1993 tak benchmarks for sun3 were .05 (both run time and real time)
> and for sparc were .03 (run time and real time). I'd have to dig out
> some archives for earlier results, but in my memory we did most of the
> super-optimizations right after Gabriel's book ("performance and Evaluation
> of Lisp Systems", MIT Press, 1985) came out, in the mid to late 80's
> and _not_ the late 90's.

Ok. I just googled around to find some results and could have gotten
some lame ones.

At http://www-eksl.cs.umass.edu/~westy/benchmark/bench1.html
They report a time of 0.334 for Allegro 4.3b on a Sparc10 (Feb 96)

At http://www.computists.com/crs/crs11n17.html
They report a time of 0.030 for CMUCLc (unknown platform, probably wintel) (May 2001)

At http://www.eligis.com/benchmarks.html
They report a time of 0.020 for ISLISP (400 MHz PII) after 2000

In any case, I was refuting Tim Bradshaw's claims that

``(2) because it costs a *huge* amount of money (billions of
dollars a year) to produce processors which perform well, unless
you have enough market share to spend these billions of dollars
every year, you *must* target the processors on which this money
is being spent.''

I believe that processors that perform *well* (i.e. a year or two
behind the bleeding edge) can be created for orders of magnitude less
money than billions of dollars. Of course this is still millions of
dollars, but it is within the grasp of a smaller company.

and
``Even in the late 80s the LispMs were lagging other systems in
terms of performance, by now the situation is hopeless.''

The LMI Lambda notwithstanding, LispMs were definitely *not* lagging
in terms of performance. However, performance at Lisp doesn't make a
whole lot of difference if you want to run NFS under UNIX.

I also don't think the situation is hopeless (although I doubt the
wisdom of trying to sell lisp-specific hardware. There are easier
ways to go broke.)

> Of course, it was also the late 80's to early 90's when we really started
> realizing that Gabriel's benchmarks (aka the "Stanford benchmarks") did
> not represent real applications and in some cases are actually detrimental
> to total system performance. So a second wave of optimizations took place
> in the early 90's geared more toward total system performance. These
> optimizations are ongoing.

Oh, of course. Gabriel's benchmarks are a lot like IQ tests: they
seem to measure *something* and there's a rough correlation between
them and `performance'.

> [Tim Bradshaw <t...@cley.com> writes:]
>
> > (Of course, this leaves me the inconvenient problem of explaining why
> > it took so long to realize these wins for stock hardware Lisps. I'll
> > just punt on that...)
>
> Punt away; the time was a decade off anyway.
>
> The reason why we didn't optimize CL until even as late as the mid to
> late 80's is that people did not demand such performance strongly
> until then.

Exactly so.

Tim Bradshaw

Oct 24, 2002, 3:01:35 PM

* Joe Marshall wrote:

> ``(2) because it costs a *huge* amount of money (billions of
> dollars a year) to produce processors which perform well, unless
> you have enough market share to spend these billions of dollars
> every year, you *must* target the processors on which this money
> is being spent.''

> I believe that processors that perform *well* (i.e. a year or two
> behind the bleeding edge) can be created for orders of magnitude less
> money than billions of dollars. Of course this is still millions of
> dollars, but it within the grasp of a smaller company.

I think that this may have been true in the 80s but it's likely not
true today. In the 80s you could build more-or-less competitive
processors out of more-or-less commodity chips, or if you couldn't you
had to arrange to fab a few mildly special chips. Nowadays in order
to build last year's bleeding edge processor you have to stump up for
last year's fab. If you're lucky, you will be able to get Intel to
fab your chip for you on whatever they are no longer using for this
year's chip...

I'd probably agree with an order of magnitude or so of variation (so hundreds
of millions to produce something not quite bleeding edge - my guess is
that this is what, say, Sun or DEC spend (spent in the DEC case)). But
I really doubt you could do anything significant for 10s of millions.
This is one of the reasons why there are so very few high-performance
processors around...

> The LMI Lambda notwithstanding, LispMs were definitely *not* lagging
> in terms of performance. However, performance at Lisp doesn't make a
> whole lot of difference if you want to run NFS under UNIX.

Maybe things were different in the US, but by ~1989 the commodity
hardware that you could get in the UK was just eating the Lisp
machines alive, *especially* in terms of value for money - you could
buy *three* really seriously configured (`AI configuration' they
called it) Suns for the cost of a symbolics.

--tim

Thomas F. Burdick

Oct 24, 2002, 3:25:03 PM

Immanuel Litzroth <imma...@enfocus.be> writes:

Are you running 10.2? Maybe you are, but my experience has been quite
different. Classic crashes often (mostly from MS applications, which
isn't surprising), but it only brings the classic environment down
with it, everything else is fine. My system has been up for 23 days
(I rebooted it when I decided to use the Finder after all, restored
the symlink, and restarted to make sure I didn't screw anything up).
The interaction with Carbon stuff can be weird sometimes, but for the
most part, the integration is really good. Do you have the developer
documentation installed? It's certainly not perfect (I wouldn't
consider running a really important server on it -- I'm sticking to
Solaris for that), but for an end-user OS, it's the best I've seen.

> offers little or nothing in comparison to an out of the box suse or
> redhat system. I can send you a list of major & minor annoyances
> beginning with the fact that you can't start commandline executables
> from the finder.

Huh, I rarely use the Finder. Getting slightly back on topic, ".app"s
are Really Really Cool for languages like Smalltalk or Lisp. From
POSIX, it looks like a directory tree, but from the Finder, it looks
like an application that you can click on to launch. You can put your
VM, core file, and fasl's together in one package. Wonderful. Does
anyone know if ACL or OpenMCL can make .app's? Or if MCL will be able
to? I've been using CocoaSqueak and loving it, especially the way I
can stick everything in the dot-app and send it to someone -- it's
like a "stand-alone" executable delivery system, without the downsides.

(setf ns:*gushing* nil)

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'

Will Deakin

Oct 24, 2002, 3:29:35 PM

Joe Marshall wrote:
> I believe that processors that perform *well* (i.e. a year or two
> behind the bleeding edge) can be created for orders of magnitude less
> money than billions of dollars. Of course this is still millions of
> dollars, but it within the grasp of a smaller company.
Since you are quibbling, may I declare open season on the quibble. I
would argue with `*orders* of magnitude less than billions of dollars.'
My best guess would be no more than an order of magnitude. Also, I would
suggest that a company with millions -- if not tens of millions -- is
not that small, really. (I am always amazed at how `small' `big'
companies like British Airways or the Allied Irish bank really are...)

> I also don't think the situation is hopeless (although I doubt the
> wisdom of trying to sell lisp-specific hardware. There are easier
> ways to go broke.)

Sure. I could also imagine more fun ones too ;)

:)w


Barry Margolin

Oct 24, 2002, 3:30:54 PM

In article <ey3n0p3...@cley.com>, Tim Bradshaw <t...@cley.com> wrote:

>* Joe Marshall wrote:
>> The LMI Lambda notwithstanding, LispMs were definitely *not* lagging
>> in terms of performance. However, performance at Lisp doesn't make a
>> whole lot of difference if you want to run NFS under UNIX.
>
>Maybe things were different in the US, but by ~1989 the commodity
>hardware that you could get in the UK was just eating the Lisp
>machines alive, *especially* in terms of value for money - you could
>buy *three* really seriously configured (`AI configuration' they
>called it) Suns for the cost of a symbolics.

It was pretty much the same over here. When Sun came out with the
SPARC-based systems, it was hard to justify any more Symbolics purchases at
Thinking Machines. By the time Symbolics came out with the Ivory machines,
which brought them a little closer, our management no longer considered
them a serious option. I expect that the situation was similar at many
other Symbolics customers, which is why Ivory didn't save them from having
to declare bankruptcy.

--
Barry Margolin, bar...@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Duane Rettig

Oct 24, 2002, 4:00:01 PM

Joe Marshall <j...@ccs.neu.edu> writes:

> Ok. I just googled around to find some results and could have gotten
> some lame ones.
>
> At http://www-eksl.cs.umass.edu/~westy/benchmark/bench1.html
> They report a time of 0.334 for Allegro 4.3b on a Sparc10 (Feb 96)

My first reaction was "It's probably being done wrong", and to suggest
that you try the same benchmark source on today's machines (to get similar
results). But after having looked at the other results, which seem
consistent with the results I have, it looks as though this was one of
the versions of the benchmarks which did a "10X" tak (i.e. run the tak
benchmark 10 times in order to compensate for the increasingly
noisy readings due to the speed of machines and software). Still, my
recommendation would tend to be "try the same test now". The .03 number
you're seeing on K machines is a 1X result, and the .7 is probably a 10X
result with not all possible optimization levels triggered.

Benchmarking is a black-art.
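
A minimal sketch of the "10X" idea in portable CL, assuming the usual
TAK definition quoted earlier in the thread (the function name here is
just illustrative): repeat the canonical call and divide, so the timer's
granularity stops dominating the reading.

  (defun time-tak (&optional (repeats 10))
    ;; Returns seconds per (tak 18 12 6) call, averaged over REPEATS
    ;; runs, using the standard internal run-time clock.
    (let ((start (get-internal-run-time)))
      (dotimes (i repeats)
        (tak 18 12 6))
      (/ (float (- (get-internal-run-time) start))
         (* repeats internal-time-units-per-second))))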

> In any case, I was refuting Tim Bradshaws's claims that
>
> ``(2) because it costs a *huge* amount of money (billions of
> dollars a year) to produce processors which perform well, unless
> you have enough market share to spend these billions of dollars
> every year, you *must* target the processors on which this money
> is being spent.''
>
> I believe that processors that perform *well* (i.e. a year or two
> behind the bleeding edge) can be created for orders of magnitude less
> money than billions of dollars. Of course this is still millions of
> dollars, but it within the grasp of a smaller company.
>
> and
> ``Even in the late 80s the LispMs were lagging other systems in
> terms of performance, by now the situation is hopeless.''
>
> The LMI Lambda notwithstanding, LispMs were definitely *not* lagging
> in terms of performance. However, performance at Lisp doesn't make a
> whole lot of difference if you want to run NFS under UNIX.
>
> I also don't think the situation is hopeless (although I doubt the
> wisdom of trying to sell lisp-specific hardware. There are easier
> ways to go broke.)

I agree that performance by itself is not an indication of success
(look at Alphas). It is instead the prospect of gaining or losing
customer base which does or does not justify the manpower commitment
Tim mentions. I do believe that at the time Lisp Machines were being
developed, Moore's law was well established, and a good business plan
for a LM company would have been able to see the dismal projections
as to how hard it would be to keep up (and thus to keep the customer
base).

> > Of course, it was also the late 80's to early 90's when we really started
> > realizing that Gabriel's benchmarks (aka the "Stanford benchmarks") did
> > not represent real applications and in some cases are actually detrimental
> > to total system performance. So a second wave of optimizations took place
> > in the early 90's geared more toward total system performance. These
> > optimizations are ongoing.
>
> Oh, of course. Gabriel's benchmarks are a lot like IQ tests: they
> seem to measure *something* and there's a rough correlation between
> them and `performance'.

And, as with the cases with idiot savants and absent-minded professors,
there are local measurements that correlate not at all. Benchmarking,
like IQ testing, is a black-art.

Barry Margolin

Oct 24, 2002, 4:24:16 PM

In article <4y98nv...@beta.franz.com>,

Duane Rettig <du...@franz.com> wrote:
>I agree that performance by itself is not an indication of success
>(look at Alphas). It is instead the prospect of gaining or losing
>customer base which justifies or not the manpower commitment which
>Tim mentions. I do believe that at the time Lisp Machines were being
>developed, Moore's law was well established, and a good business plan
>for a LM company would have been able to see the dismal projections
>as to how hard it would be to keep up (and thus to keep the customer
>base).

The same issue killed us at Thinking Machines. We eventually switched to
commodity processors (the Connection Machine CM-5 was based on SPARC rather
than the custom, single-bit processors we used in the CM-1 and CM-2), but
still the high cost of the proprietary interconnect architecture made it a
difficult sell (we also had competition from Intel, which was practically
giving away their parallel system by subsidizing it with the profits from
commodity systems).

Like Symbolics, what really set us apart was our software. There was
significant dissent among management about whether we should be a hardware
or a software company. Sales people understand how to sell boxes, so they
won, and that spelled the downfall of both companies.

Greg Neumann

Oct 24, 2002, 4:47:26 PM

Immanuel Litzroth <imma...@enfocus.be> wrote in message news:<m2fzuw0...@enfocus.be>...

> redhat system. I can send you a list of major & minor annoyances
> beginning with the fact that you can't start commandline executables
> from the finder.

Are you absolutely sure? I recall there was a .cmd or .command
extension you could give these apps that allowed them to be executed
via double-click.

Is this what you're talking about? It would require wrapping your app
in a shell script.
http://216.239.51.100/search?q=cache:0m2OEVtDuBkC:www.osxfaq.com/Tutorials/LearningCenter/HowTo/Startup/index.ws+%22.command%22+extension+command+line+applications+macos+x&hl=en&ie=UTF-8

MacOS X makes for a weird conversation. There are many apps for which it's
the natural choice, but it helps to be the early-adopter type. Many
are waiting till 2004, when they'll have their processor supply issues
ironed out.

Greg Neumann

Christopher Browne

Oct 24, 2002, 4:47:57 PM

In an attempt to throw the authorities off his trail, Joe Marshall <j...@ccs.neu.edu> transmitted:

> I believe that processors that perform *well* (i.e. a year or two
> behind the bleeding edge) can be created for orders of magnitude
> less money than billions of dollars. Of course this is still
> millions of dollars, but it within the grasp of a smaller company.

But this essentially points back to an argument I /regularly/ point
out concerning the costs of non-commodity systems.

You can get StrongARM and MIPS and PPC and other CPUs that are pretty
nifty, pretty fast, and which, in any kind of quantity, are quite
cheap.

Linux runs very nicely on any of these architectures; it sure would be
neat to be able to build a cheap MIPS box. A little microcode later
and it might be /quite/ slick as a Lisp Machine.

Unfortunately, while there may be $25 MIPS chips and $25 StrongARM
chips, I defy you to find motherboards costing less than $1500. That
price point seems to be a "magic" minimum price for "evaluation
boards" for these sorts of architectures.

The fact that (seemingly) every other Asian electronic fab plant makes
motherboards means that the equivalent for AMD or Intel CPUs costs a
mere $100, and that's /not/ associated with a 15:1 performance
reduction.

Designing a custom Lisp CPU points to building and selling the whole
set of hardware: motherboard, CPU, and, if you're /not/ lucky, your
own video and I/O hardware. (If you don't have an IA-32 emulator on
your CPU, it will be really problematic to make use of commodity I/O
hardware that expects to have some BIOS boot process...)
--
(concatenate 'string "chris" "@cbbrowne.com")
http://cbbrowne.com/info/nonrdbms.html
Lisp stoppped itself
FEP Command:

Joe Marshall

Oct 24, 2002, 4:49:37 PM

Will Deakin <aniso...@hotmail.com> writes:

> Joe Marshall wrote:
> > I believe that processors that perform *well* (i.e. a year or two
> > behind the bleeding edge) can be created for orders of magnitude less
> > money than billions of dollars. Of course this is still millions of
> > dollars, but it within the grasp of a smaller company.
>
> Since you are quibling, may I declare open season on the quibble.

Declare away.

> I would argue with `*orders* of magnitude less than billions of
> dollars.' My best guess would be no more than an order of
> magnitude.

I think you could develop a decent processor for something in the tens of
millions. I think that counts as `orders' less than `billions'.

> Also, I would suggest that a company with millions -- if
> not tens of millions -- is not that small, really. (I am always amazed
> at how `small' `big' companies like British Airways or the Allied
> Irish bank, really are...)

A ten-person company can easily consume a million bucks in a year. A
200 person company is considered by many people (not myself, though)
to be `small'. I think a team on the order of dozens of people can
design a decent processor.

Barry Margolin

Oct 24, 2002, 5:09:41 PM

In article <vg3rsg...@ccs.neu.edu>, Joe Marshall <j...@ccs.neu.edu> wrote:
>A ten-person company can easily consume a million bucks in a year. A
>200 person company is considered by many people (not myself, though)
>to be `small'. I think a team on the order of dozens of people can
>design a decent processor.

Of course, "big" and "small" are relative to the industry. At our peak,
Thinking Machines had around 500 employees; but since our competitors were
Intel and Cray, we were tiny by comparison. The design team for the CM-5
was a couple dozen engineers, but we also needed software developers, sales
and marketing people, management, and support staff. The system
administration group (where I worked) had 8 people at its max.

Christopher Browne

Oct 24, 2002, 5:18:37 PM


Chuck Moore, of Forth fame, has done this several times.

He's designed and built a couple of 16 bit "Forth chips," a 32 bit one,
and, in the "wow, that's odd" category, built his own combination
Forth implementation/IA-32 OS/Electronic CAD system called OKAD that
was then used to design a 21 bit embedded processor called the uP21.
(There seem to be a bunch of 21 bit processors built by him...)


--
(concatenate 'string "chris" "@cbbrowne.com")

http://cbbrowne.com/info/emacs.html
The next person to mention spaghetti stacks to me is going to have his
head knocked off. -- Bill Conrad

Tim Bradshaw

Oct 24, 2002, 5:34:19 PM

* Joe Marshall wrote:

> A ten-person company can easily consume a million bucks in a year. A
> 200 person company is considered by many people (not myself, though)
> to be `small'.

Given that that's only 100,000 per person, I'd not be at all
surprised! A 10 person company that uses significant capital
equipment (not PCs, but machine tools or something) would be lucky to
be run that cheaply, I'd think.

> I think a team on the order of dozens of people can design a decent
> processor.

Probably, although they seem to be bigger than that now, as processors
have got far more complex (I bet Intel's team is *much* bigger,
although they struggle against hideous historical obstacles, and seem
to be building plenty more with Itanic, presumably to keep themselves
comfortable...). But the question isn't really whether you can design
the thing, it's whether you can *build* it.

--tim


Bruce Hoult

Oct 24, 2002, 5:35:12 PM

In article <m2fzuw0...@enfocus.be>,
Immanuel Litzroth <imma...@enfocus.be> wrote:

> I work on MacOSX daily and beg to differ. The integration between the
> graphical system and the commandline unix system is very bad

How so? The cli "Open" command starts GUI stuff as if you'd
double-clicked on the object named. You can start cli stuff from the
GUI using a simple wrapper bundle (the format of which is abundantly
documented).

> starting up the classic system makes the whole system unstable,

Not that I've seen. I rebooted my four year old G3/266 PowerBook
yesterday because of installing updated system software. I checked the
uptime first. 35 days, during which time it's been
slept/woken/slept/woken continually, and had Classic running (mostly MPW
and Hypercard) the entire time.


> it is very slow

OK, it's not speedy on a 266 MHz machine, but then what is these days?
Even with a 1.8 GHz (OK, 1.5) PC next to it I still find I prefer to do
many things on the 266 MHz Mac.


> and programming it is difficult because of the complexity of it's
> subsystems and their interaction and the dearth of documentation.

If you pick one subsystem (classic Mac APIs, or Cocoa, or Java, or
POSIX) then there are few problems. Mixing different APIs is indeed
pretty poorly documented, though that has started coming through more
quickly recently, especially since the last developer's conference.


> It offers little or nothing in comparison to an out of the box suse or
> redhat system.

That's palpably not true -- and I'm running RedHat 8.0 on the previously
mentioned Athlon 1800+.

> I can send you a list of major & minor annoyances beginning with the
> fact that you can't start commandline executables from the finder.

Which is false. If you can't figure out how to make the relevant bundle
directory structure then go and get something like DropScript
(http://www.versiontracker.com/redir.fcgi/kind=1&db=mac&id=10459/DropScri
pt-0.5.dmg) to do it for you.

-- Bruce

Tim Bradshaw

Oct 24, 2002, 5:45:54 PM

* Barry Margolin wrote:

> It was pretty much the same over here. When Sun came out with the
> SPARC-based systems, it was hard to justify any more Symbolics purchases at
> Thinking Machines. By the time Symbolics came out with the Ivory machines,
> which brought them a little closer, our management no longer considered
> them a serious option. I expect that the situation was similar at many
> other Symbolics customers, which is why Ivory didn't save them from having
> to declare bankruptcy.

I don't have any figures to hand (or even not to hand), but I think
that - at least in terms of price/performance if not in terms of
absolute performance - this was true in the UK for the later 68k Suns.
3/180s and their kin were particularly nice, I think - cheap enough to
buy, and there was a hi-res screen / lots of memory config which was
really nice. I'm not sure if these machines actually beat the
Symbolics boxes (which at that stage would have been 36xxs for xx
being 50 or 70 or 4x I think?) in raw single-user performance, but in
price/performance they definitely did. Some of this may have been an
artifact of Symbolics having only a very small market share in the UK
anyway, and Sun pricing aggressively to drive them out (which they
succeeded in doing!), so it may not have been universal...

--tim

Will Deakin

Oct 24, 2002, 6:11:22 PM

Joe Marshall wrote:
> Will Deakin <aniso...@hotmail.com> writes:
>>I would argue with `*orders* of magnitude less than billions of
>>dollars.' My best guess would be no more than an order of
>>magnitude.
> I think you could develop a decent processor for in the tens of
> millions.
Sure. When I was at Manchester I got drinking with a couple of blokes
working in the comp.sci department who were involved in post-grad chip
burning. IIRC they were banging out runs of tens of chips in a facility
that cost about £5-10M sterling.

> I think it counts as `orders' less than `billions'.

Sure. (I think this is a mistranslation on my part -- I tend to read
`orders' as `more than two.'[1])

>>Also, I would suggest that a company with millions -- if
>>not tens of millions -- is not that small, really.

> A ten-person company can easily consume a million bucks in a year. A
> 200 person company is considered by many people (not myself, though)
> to be `small'. I think a team on the order of dozens of people can
> design a decent processor.

I agree with all of this.

However, in support of Tim's point, on the basis of what little I
know of manufacturing, and with a type of mythical-man-month analysis:
if you want to design, burn and manufacture chips I think you will need
a team of, say, ten cool chip designers. To test, make and put them in small
cardboard boxes you will need, say, three times as many people. To
document, flog and market this you will need about another three times
again. On this basis you will need, say, 100+ people[2]. I would then
guess that this would cost at least $10M p.a., and to make it worthwhile
you should be looking at a turnover of at least $50-$100M. But I am also
happy to accept that I am talking through my pants. Be those US or UK.

:)w

[1] I was about 8 or 9 when I had an argument that turned into a fight
with my brother about whether `a couple' was more than, equal to or less
than `a few.' :)
[2] ...this is where it gets *really* vague...

Edi Weitz

Oct 24, 2002, 6:26:02 PM

t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> [Mac OS X is] certainly not perfect (I wouldn't consider running a
> really important server on it -- I'm sticking to Solaris for that),
> but for an end-user OS, it's the best I've seen.

Let me take this chance to ask Mac OS Lispers a couple of questions.

Background: I've been using Macs in the 90s and have fond memories -
but that was back when I was a mathematician at the
university and didn't care much about programming. I've
since switched to FreeBSD for servers and Linux for my
laptops. After having read all the marketing hype about
Mac OS X I was one of the first to order a PowerBook G4 in
Germany and was so disappointed with 10.0 that I sold the
laptop a mere three months later and returned to my
Thinkpads. Now everybody tells me that 10.2 is really
quite mature and I should give it another try. I might
have the chance to get a used but almost new iBook and I'm
considering buying it.

What I'd like to know from people who've been using Lisp on Mac OS X
is:

1. How do the currently available implementations (I think that's
CLISP, OpenMCL, AllegroCL) compare? Which one do you prefer and
why? (Stability, correctness, OS integration, ...)

(I have no plans to use Classic so MCL is not an option - yet.)

2. What about speed? I don't mean PowerPC compared to x86 but rather
Lisp compared to other languages on the same platform. On Linux
I've often seen CMUCL and also LispWorks and AllegroCL to be on par
with C/C++ programs. Is this also the case on OS X?

3. How good is Emacs on Mac OS X? I've heard current CVS sources will
build an Aqua version - is anybody actually using it and can
recommend it?

4. Do you think a G3 700MHz iBook is fast enough for OS X? (Mainly
development, mail, writing, browsing - no fancy multimedia stuff.)

5. Other comments are welcome as well.

Sorry for the slightly dumb questions but I try to avoid the fiasco I
had last time. If you think I'm trying to start a flame war about
operating systems or implementations you can mail me privately... :)

Thanks in advance,
Edi.

Tim Bradshaw

Oct 24, 2002, 6:53:26 PM

* Will Deakin wrote:
> Sure. When I was at Manchester I got drinking with a couple of blokes
> working in the comp.sci department who were involved in post-grad chip
> burning. IIRC they were banging out runs of tens of chips in a
> facility that cost about L5-10M sterling.

Yes, but what kind of chip - it really does matter. Things like
transistor counts have gone up a lot and feature sizes have gone down
a lot for microprocessors. Both of these make *making* the things
*very* expensive, especially if you want to get a good enough yield
that you can sell them for anything like reasonable money.

I'm willing to believe that you can make a processor that's 1 or 2
years behind the bleeding edge for an order of magnitude less than you
can make the bleeding edge chips for, and you can probably make it for
reasonable unit cost, since you can use the fabs that are coming off
the back of the bleeding-edge chips. But assuming you're going for the
desktop market or above (and I don't think we're talking about chips
for PDAs or embedded applications here), you now have a fairly hard
job selling it. In order for it to be competitive with the current
bleeding edge chips for Lisp you have to get 2-4 times the performance
that you can get for a Lisp system on the bleeding edge chips. And it
won't run Windows, so you are looking at getting a decent chunk of the
Linux market, or starting a new proprietary-architecture server
company. Don't ask me to invest in this.

The first machine I ran was a minicomputer made up of an off-the-shelf
bitsliced processor and some random logic. They were a bit slower
than a VAX, but lots cheaper, and they were made by a company of maybe
10-20 people. Other than the boards, the physical hardware including
wiring, backplane, eproms and so on, and the software I don't think
there was any custom logic in those machines (there may have been, but
not much). They were almost literally made in someone's back room.
The second generation of machines this company made used Fairchild
clippers, and a fair number of PLAs I think. They were quite
competitive performance-wise, but rather expensive compared to the
systems Sun was making at the time, and the clipper was a mistake
because they were ages late and scarce at best. They never made a
third generation because they couldn't compete with people like Sun
and MIPS who were designing their own processors and lots of custom
logic chips. Since that time the cost of designing a competitive box
has gone up really a lot unless you want to just clone what someone
else has done, including using all their custom logic (like processors
&c). A whole world of small computer makers (and some large ones) has
vanished in the last 10-15 years, and I think that cost is why.

--tim

Pascal Costanza

Oct 24, 2002, 9:06:06 PM

Edi Weitz wrote:
> t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> Let me take a this chance to ask Mac OS Lispers a couple of questions.
>

> After having read all the marketing hype about
> Mac OS X I was one of the first to order a PowerBook G4 in
> Germany and was so disappointed with 10.0 that I sold the
> laptop a mere three months later and returned to my
> Thinkpads. Now everybody tells me that 10.2 is really
> quite mature and I should give it another try.

I switched from Wintel to Apple in April and started with Mac OS X 10.1.4.
I was very pleased from the beginning. Long-time Mac enthusiasts told me
that 10.0 was really extremely immature. 10.2 is just great.

> What I'd like to know from people who've been using Lisp on Mac OS X
> is:
>
> 1. How do the currently available implementations (I think that's
> CLISP, OpenMCL, AllegroCL) compare? Which one do you prefer and
> why? (Stability, correctness, OS integration, ...)
>
> (I have no plans to use Classic so MCL is not an option - yet.)

Since I don't like Emacs I have decided to use MCL. They are going to
announce MCL 5.0 at the Lisp Conference, so this will be the first
Common Lisp with an IDE for Mac OS X as far as I know.

I don't know the non-IDE Lisps.

> 2. What about speed? I don't mean PowerPC compared to x86 but rather
> Lisp compared to other languages on the same platform. On Linux
> I've often seen CMUCL and also LispWorks and AllegroCL to be on par
> with C/C++ programs. Is this also the case on OS X?

MCL "feels" quite fast. ;-) If you'd post a benchmark I would be willing
to check this for you.

> 3. How good is Emacs on Mac OS X? I've heard current CVS sources will
> build an Aqua version - is anybody actually using it and can
> recommend it?

From what I have seen, the Aqua Emacs is not mature yet. It seems to
lack several essential features. However, you can install an X server
and run Xemacs. There's a window manager for X called OroborOSX that
tries to emulate the Aqua look. You might want to take a look at
http://fink.sourceforge.net

> 4. Do you think a G3 700MHz iBook is fast enough for OS X? (Mainly
> development, mail, writing, browsing - no fancy multimedia stuff.)

Yep, definitely. I use a G3 600MHz iBook and it's quite fast. I run MCL,
the Mail app, Mozilla, OmniWeb, Office, LaTeX and even the fancy
multimedia stuff like Quicktime, RealOne and the somewhat buggy Windows
Media Player.

> 5. Other comments are welcome as well.

I haven't yet found a convincing and inexpensive backup solution for Mac
OS X. Other than that I don't really know why you shouldn't switch. Even
Ellen Feiss did it. ;-)


Pascal

--
Given any rule, however ‘fundamental’ or ‘necessary’ for science, there
are always circumstances when it is advisable not only to ignore the
rule, but to adopt its opposite. - Paul Feyerabend

Erik Naggum

unread,
Oct 24, 2002, 9:14:44 PM10/24/02
to
* Pascal Costanza <cost...@web.de>

| Since I don't like Emacs I have decided to use MCL.

Could you quickly summarize the reasons you do not like Emacs?
Is it specific to Emacs on a particular platform or general?

--
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.

Pascal Costanza

unread,
Oct 24, 2002, 9:31:13 PM10/24/02
to
Erik Naggum wrote:
> * Pascal Costanza <cost...@web.de>
> | Since I don't like Emacs I have decided to use MCL.
>
> Could you quickly summarize the reasons you do not like Emacs?

I could, but it wouldn't make sense. My reasons are very subjective and
I haven't taken the time yet to form a better informed opinion.

I hope I haven't given the impression that I want to discourage anyone
from learning Emacs. This wasn't my intention.

Thomas F. Burdick

unread,
Oct 24, 2002, 10:07:17 PM10/24/02
to
Pascal Costanza <cost...@web.de> writes:

> Edi Weitz wrote:
> > t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
>
> > Let me take this chance to ask Mac OS Lispers a couple of questions.
> >
> > After having read all the marketing hype about
> > Mac OS X I was one of the first to order a PowerBook G4 in
> > Germany and was so disappointed with 10.0 that I sold the
> > laptop a mere three months later and returned to my
> > Thinkpads. Now everybody tells me that 10.2 is really
> > quite mature and I should give it another try.
>
> I switched from Wintel to Apple in April and started with Mac OS X 10.1.4.
> I was very pleased from the beginning. Long-time Mac enthusiasts told me
> that 10.0 was really extremely immature. 10.2 is just great.

To put this in perspective, only loonies get version point-zero of an
Apple OS. 10.0 was even worse than normal. I didn't hear anyone who
was happy with 10.0 as their primary OS, but I've heard very few
negative experiences with 10.2, and mostly from Mac users, whose
complaints sound mind-bogglingly nit-picky to a Unix user (and this
nit-pickiness is probably why it's as nice as it is). So, while it
might bug Mac users, that's a ringing endorsement for Unix or (god
help them) Windows users.

> > What I'd like to know from people who've been using Lisp on Mac OS X
> > is:
> >
> > 1. How do the currently available implementations (I think that's
> > CLISP, OpenMCL, AllegroCL) compare? Which one do you prefer and
> > why? (Stability, correctness, OS integration, ...)
> >
> > (I have no plans to use Classic so MCL is not an option - yet.)

I've played with MCL under Classic, and it's quite nice -- I can see
why it has the reputation it does. CLISP is CLISP, on every platform.
OpenMCL looks pretty nice, and I've played with it some. I haven't
gotten to check out the Objective-C bridge, but I want to sometime
soon. The Cocoa API is really lovely from Smalltalk, and I'd imagine
it's similarly pleasant from Lisp. I haven't gotten to look at ACL
yet. I've been doing a lot of real work with my Lisping time, which
has cut into my available playing-around time. SBCL should hopefully
make it to OS X in November sometime, when I've got time to do the
port. If someone wants to beat me to it, though, I wouldn't complain!

> > 3. How good is Emacs on Mac OS X? I've heard current CVS sources will
> > build an Aqua version - is anybody actually using it and can
> > recommend it?
>
> From what I have seen, the Aqua Emacs is not mature yet. It seems to
> lack several essential features. However, you can install an X server
> and run Xemacs. There's a window manager for X called OroborOSX that
> tries to emulate the Aqua look. You might want to take a look at
> http://fink.sourceforge.net

I'm running GNU Emacs 21, compiled from source, under X11. All your
X11 windows show up in the same "X" application on the dock, which
would be annoying if I ran much more than Emacs and rxvt under X. As
far as I can tell, Emacs 21 w/ X is more stable under OS X than it is
under Solaris, where I sometimes get weird lockups and a stack so
screwed up I can't figure anything out with gdb. This is good because
you can't get Emacs 20 to compile on OS X. The Aqua version is
unstable, but you can build for Carbon, which should work fine,
although I haven't tried. I have an iBook, so I like being able to
display remotely to get a larger screen and better keyboard. I'm
really looking forward to getting Hemlock, though.

I've tried to get fink working a couple of times, then decided that
I'm perfectly able to compile from source myself. I'm not sure I'd
want several different package systems on my system.

> > 4. Do you think a G3 700MHz iBook is fast enough for OS X? (Mainly
> > development, mail, writing, browsing - no fancy multimedia stuff.)

That's plenty fast, just be sure you get enough memory. 128MB is the
minimum so you really want >= 256MB.

> > 5. Other comments are welcome as well.
>
> I haven't yet found a convincing and inexpensive backup solution for Mac
> OS X. Other than that I don't really know why you shouldn't switch. Even
> Ellen Feiss did it. ;-)

You got something against tar and cron?

Pascal Costanza

unread,
Oct 24, 2002, 10:55:39 PM10/24/02
to
Thomas F. Burdick wrote:
> Pascal Costanza <cost...@web.de> writes:

>>I haven't yet found a convincing and inexpensive backup solution for Mac
>>OS X.

> You got something against tar and cron?

tar doesn't handle the Mac OS X file system correctly. A backup solution
for Mac OS X needs to be able to deal with long file names, resource
forks and finder information at the same time. Tools ported from the
Unix world generally get the resource forks and/or the finder
information wrong, and tools ported from Mac OS 9 are usually not able
to deal with long file names and sometimes even with the file names
themselves, because OS 9 and OS X use different character sets for file
names.

To put it mildly, this area needs some consolidation. To put it
differently, I would be happy if you could point me to a good solution -
I haven't found any yet. The best I have found so far is PsyncX at
http://sourceforge.net/projects/psyncx

Erik Naggum

unread,
Oct 24, 2002, 10:58:23 PM10/24/02
to
* Pascal Costanza <cost...@web.de>

| I could, but it wouldn't make sense. My reasons are very subjective and I
| haven't taken the time yet to form a better informed opinion.

Most of the time, subjective reasons are perfectly acceptable for the
individual who has to make the choices, and to state them as such is
quite unproblematic. They just need to be distinguished from more
universalizable reasons. I had hoped you had reasons that could be
classified as fixable or personal, and we could perhaps do something
about the fixable ones, but your answer is as complete as it gets.

| I hope I haven't given the impression that I want to discourage anyone
| from learning Emacs.

Not initially in my view, but certainly not anymore.

Chris Beggy

unread,
Oct 24, 2002, 11:26:30 PM10/24/02
to
Christopher Browne <cbbr...@acm.org> writes:

> Linux runs very nicely on any of these architectures; it sure would be
> neat to be able to build a cheap MIPS box. A little microcode later
> and it might be /quite/ slick as a Lisp Machine.

Think Sony Playstation2...

Chris

Raffael Cavallaro

unread,
Oct 25, 2002, 12:22:40 AM10/25/02
to
Edi Weitz <e...@agharta.de> wrote in message news:<8765vrpj...@bird.agharta.de>...

> 1. How do the currently available implementations (I think that's
> CLISP, OpenMCL, AllegroCL) compare? Which one do you prefer and
> why? (Stability, correctness, OS integration, ...)
>
> (I have no plans to use Classic so MCL is not an option - yet.)

OpenMCL is much faster than CLISP since everything in OpenMCL is
compiled to native PPC code, not bytecode (or, slower still,
interpreted). I believe it's more ANSI compliant wrt CLOS as well.
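
[A quick, portable way to see for yourself what a given implementation
does with a definition -- a minimal sketch using only standard CL
operators. SQUARE is just a made-up example here, and what DISASSEMBLE
prints (PPC assembly in OpenMCL, bytecode in CLISP) is entirely
implementation-dependent:]

;; Define a trivial function, make sure it is compiled, and look at the result.
(defun square (x)
  (* x x))

(compile 'square)               ; harmless in Lisps that always compile
(compiled-function-p #'square)  ; => T once compiled (byte- or native-compiled)
(disassemble 'square)           ; native PPC instructions vs. bytecode shows up here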

>
> 2. What about speed? I don't mean PowerPC compared to x86 but rather
> Lisp compared to other languages on the same platform. On Linux
> I've often seen CMUCL and also LispWorks and AllegroCL to be on par
> with C/C++ programs. Is this also the case on OS X?
>

OpenMCL is very fast - faster than C++ for recursive-function-call-intensive
benchmarks like tak, and just as fast on an array-access benchmark like the
Sieve of Eratosthenes. BTW, OpenMCL has the same
compiler core as MCL. You can even get a crude Cocoa IDE to run, but
it slows things down significantly (~25%) relative to the command
line.
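
[For reference, a minimal sketch of the Takeuchi function as it is usually
benchmarked -- portable CL, not necessarily the exact code behind the
comparison above. The (18 12 6) arguments are the traditional ones from
Gabriel's suite; absolute timings will of course vary with machine,
implementation and declarations:]

;; Heavily recursive, no consing -- it mostly measures function-call overhead.
(defun tak (x y z)
  (declare (fixnum x y z) (optimize (speed 3) (safety 0)))
  (if (not (< y x))
      z
      (tak (tak (1- x) y z)
           (tak (1- y) z x)
           (tak (1- z) x y))))

;; (time (tak 18 12 6)) => 7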


> 3. How good is Emacs on Mac OS X? I've heard current CVS sources will
> build an Aqua version - is anybody actually using it and can
> recommend it?

I'll see your OS flamewar, and raise you an editor flamewar! ;^) Just
kidding. I don't use Emacs, but I have run it and it seems to be the
same ugly kitchen sink under MacOS X as it is under Linux, etc. Of
course, if you actually *like* Emacs, then, yes, it's available for
MacOS X. I think there may be some sort of integration for OpenMCL as
well.


>
> 4. Do you think a G3 700MHz iBook is fast enough for OS X? (Mainly
> development, mail, writing, browsing - no fancy multimedia stuff.)

Yes. I got one for my daughter a couple of months ago precisely
because it does Quartz Extreme (i.e., OpenGL video card acceleration
of all GUI screen compositing/drawing, which is an issue, because
MacOS X uses transparency for everything.) In fact, you can even use
it for the fancy multimedia stuff too (listen to mp3s, rip CDs, watch
movies, while doing the other stuff, with no hiccups).

>
> 5. Other comments are welcome as well.

I would use OpenMCL while waiting for MCL native for MacOS X to ship
since you don't want to use Classic (I agree here). You could also
contact Digitool and ask to be a beta tester for MCL native for MacOS
X -- that way you'll hit the ground running even faster when MCL native
for OS X ships.

>
> Sorry for the slightly dumb questions but I try to avoid the fiasco I
> had last time.

Yes, MacOS X 10.0 was not quite ready for prime time in some ways, but
10.2 on a 700 MHz iBook is very usable - I run it all the time when I
don't want to be tied to my desk (I just borrow my daughter's iBook -
the Airport wireless net is nice too).

> Thanks in advance,
> Edi.

You're welcome.

Christopher Browne

unread,
Oct 25, 2002, 12:59:13 AM10/25/02
to
Oops! Chris Beggy <chr...@kippona.com> was seen spray-painting on a wall:

.. which may have really nice stereo hardware, and some interesting
DSP hardware, but is /really/ lean on CPU power, RAM, and disk space,
and more or less totally non-expandable.

By the time you add the "run Linux on it" additions, it's about the
price of a "real computer," and the result is a pretty wimpy system.
I know; I've seen one in action. It's not totally worthless, but it's
no "Lisp Machine-would-be"...
--
(concatenate 'string "cbbrowne" "@cbbrowne.com")
http://www.ntlug.org/~cbbrowne/sap.html
What should you do when you see an endangered animal that is eating an
endangered plant?

Vassil Nikolov

unread,
Oct 25, 2002, 1:18:29 AM10/25/02
to
On 24 Oct 2002 22:34:19 +0100, Tim Bradshaw <t...@cley.com> said:

[...]
TB> Itanic

Was this spelling intentional?

---Vassil.

--
Non-googlable is googlable.

Espen Vestre

unread,
Oct 25, 2002, 3:02:29 AM10/25/02
to
t...@fallingrocks.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> Apple OS. 10.0 was even worse than normal. I didn't hear anyone who
> was happy with 10.0 as their primary OS, but I've heard very few

I was happy with 10.0 (but not as my primary OS) - compared to e.g. the
first versions of Solaris 2 or *.0 (1)-versions of Cisco IOS, it was
rock solid ;-)
--
(espen)

Espen Vestre

unread,
Oct 25, 2002, 3:12:26 AM10/25/02
to
greg_n...@yahoo.com (Greg Neumann) writes:

you don't have to put a .command extension on shell scripts; you can
just tell OS X that you want them to be opened by Terminal (but you
do need to wrap programs up in a shell script to make that work, I think).
--
(espen)

Rob Warnock

unread,
Oct 25, 2002, 4:38:45 AM10/25/02
to
Vassil Nikolov <vnik...@poboxes.com> wrote:
+---------------

| Tim Bradshaw <t...@cley.com> said:
| TB> Itanic
|
| Was this spelling intentional?
+---------------

Almost certainly, since in much of the world these days
the term "Itanic" is used as a sardonic pun and comment
on the perceived [by those using the term] probable future
of the Itanium Processor Family. As in: "This *has* to
succeed! We *cannot* fail! This *will* succee...*CRASH!*
Oops, where did that iceberg come from?"


-Rob

-----
Rob Warnock, PP-ASEL-IA <rp...@rpw3.org>
627 26th Avenue <URL:http://www.rpw3.org/>
San Mateo, CA 94403 (650)572-2607

Christopher C. Stacy

unread,
Oct 25, 2002, 6:50:11 AM10/25/02
to
Symbolics had a high-performance RISC chip that could beat the SPARC family.
It also had the ability to move Ivory (the processor that you may be more
familiar with) into newer processes without much cost. Unfortunately,
they fired basically all of that staff prior to releasing those products.

The lossage at Symbolics was not about hardware performance.
It was about management vision, direction, and understanding the market.

Having made large real estate investments at the worst possible time
was the proximate cause of the company's demise.

George Demmy

unread,
Oct 25, 2002, 8:26:49 AM10/25/02
to
Edi Weitz <e...@agharta.de> writes:
> Let me take this chance to ask Mac OS Lispers a couple of questions.
>
> 3. How good is Emacs on Mac OS X? I've heard current CVS sources will
> build an Aqua version - is anybody actually using it and can
> recommend it?

My wife and I purchased an eMac with OS X 10.1 for "around the house
stuff". My background is unixy *but* I'm a devoted emacs user. I've
been using control-next-to-a keyboards and emacs meta next to the space
bar from way back. I ran into a wall of cognitive dissonance with the
default mac key bindings. Meta is "pretty important" in the emacs
world, and dragging my thumb off of the space bar onto the adjacent
key is as natural as breathing after doing it for years. Now, that
spot is occupied by the Apple command key -- a "pretty important" key
for Apple folks. Of course, Apple has gone along with everyone else in
moving the caps lock over where the control key belongs (control key
is "pretty important" in the emacs world...), so that one-two punch
dazed me a bit. The point of all of this is that you might consider
getting some software to rebind the keys if you've developed any
habits that might not translate well. Since I share this computer with
my wife (a professional Mac user, btw), I *didn't* rebind the keys,
and I prefer using my old Toshiba (as in 167MHz w/ 80MB RAM) laptop
running linux as an emacs launcher with the keys in the right
spot. I'm considering getting a different keyboard so that we might share
the machine, and I can get an opportunity to see how emacs dances on a
G4...

>
> Thanks in advance,
> Edi.

Good luck!
--
George Demmy

Joe Marshall

unread,
Oct 25, 2002, 9:07:28 AM10/25/02
to
Tim Bradshaw <t...@cley.com> writes:

> In order for it to be competitive with the current
> bleeding edge chips for Lisp you have to get 2-4 times the performance
> that you can get for a Lisp system on the bleeding edge chips.

At *least* that.

> And it won't run Windows, so you are looking at getting a decent
> chunk of the Linux market, or starting a new proprietary-
> architecture server company. Don't ask me to invest in this.

I agree completely. I wouldn't invest money in something like this.

On the other hand, I would invest a lot of time (and someone else's
money) in it because it's something that I'd love to do.

Lars Brinkhoff

unread,
Oct 25, 2002, 9:25:17 AM10/25/02
to
George Demmy <gde...@layton-graphics.com> writes:
> My wife and I purchased an eMac with OS X 10.1 for "around the house
> stuff". My background is unixy *but* I'm a devoted emacs user. I've
> been using control-next-to-a keyboards and emacs meta next to the
> space bar from way back. I ran into a wall of cognitive dissonance
> with the default mac key bindings.

In a similar situation, I very quickly replaced the Apple keyboard
with a Happy Hacking keyboard. Works like a charm.

--
Lars Brinkhoff http://lars.nocrew.org/ Linux, GCC, PDP-10,
Brinkhoff Consulting http://www.brinkhoff.se/ HTTP programming

Alain Picard

unread,
Oct 25, 2002, 9:31:07 AM10/25/02
to
Edi Weitz <e...@agharta.de> asks:

> 3. How good is Emacs on Mac OS X? I've heard current CVS sources will
> build an Aqua version - is anybody actually using it and can
> recommend it?

The native aqua version (of GNU emacs 21) still crashes occasionally.
If you run the X version under XDarwin, you're fine.

You _have_ to use uControl to remap the caps-lock to control,
or you go totally carpal and insane.


> 4. Do you think a G3 700MHz iBook is fast enough for OS X? (Mainly
> development, mail, writing, browsing - no fancy multimedia stuff.)

I have a 500MHz iBook. To me, OS X 10.2 feels slow as molasses.
My iBook dual boots into linux; it feels at *least* 3 times faster
under linux. Of course, you lose the beautiful display PDF...
it's your call. I _hate_ waiting. Did I mention it's SLOW?

> 5. Other comments are welcome as well.

I was very excited at the prospect of a "user-friendly unix", but
found it took me weeks to find out where the hell Apple had put
everything (short summary: there's this thing called "netinfo manager"
which is a big DB of what you'd normally find in the zillion text
files in /etc on a "normal" unix system).

It looks dazzlingly beautiful, no doubt, but a hard-core unixer
always wants his _own_ key bindings, window manager, etc., which
are all difficult or impossible to get under OS X.

To me, it basically feels more like a system for "users" rather than
developers. It does a good job of being a nice, stable box(*) if you
don't try to do anything fancy.

Biggest drawback is it doesn't run CMUCL. :-/

All in all, I keep using it anyway (how I got to have an iBook is a
long story) but I'd trade it in for an x86 laptop running linux on
equally sleek hardware(+). [The iBook hardware is _gorgeous_, though, I
have to admit. The powerbook even more so. Just astounding.]


(*) It still hangs the finder forever if you unplug the laptop
from the lan with mounted remote servers... I guess linux
does that too, but on linux there isn't a central "finder"
app which renders the box useless when it hangs. You can
always spawn a new shell and kill -9 the locked PID.
If anyone knows how to restart a locked finder, I'd love to know.

(+) Let's face it---what I really want is that sexy 980g VHS-cassette-sized
Sony thing, but who's got that kind of cash!?

George Demmy

unread,
Oct 25, 2002, 9:42:19 AM10/25/02
to
Lars Brinkhoff <lars...@nocrew.org> writes:

I'm drooling. Thanks for the tip! Can you "hot swap" the keyboards?

Hej då
--
George Demmy

Raymond Toy

unread,
Oct 25, 2002, 9:39:37 AM10/25/02
to
>>>>> "George" == George Demmy <gde...@layton-graphics.com> writes:

George> for Apple folks. Of course, Apple has gone along with everyone else in
George> moving the caps lock over where the control key belongs (control key
George> is "pretty important" in the emacs world...), so that one-two punch
George> dazed me a bit. The point of all of this is that you might consider
George> getting some software to rebind the keys if you've developed any
George> habits that might not translate well. Since I share this computer with

AFAIK, you can't rebind the keys. Well, you can, but it doesn't work
like you would want. I tried with xkeycaps. The capslock key now
becomes a control-lock key. Every key thereafter is a control key.
You have to hit control-lock again to turn it off.

But maybe a standard PC USB keyboard will work, and this might work
better. But I don't know of any Mac software to rebind the keys....

Ray

Raymond Toy

unread,
Oct 25, 2002, 9:35:23 AM10/25/02
to
>>>>> "Thomas" == Thomas F Burdick <t...@fallingrocks.OCF.Berkeley.EDU> writes:

Thomas> To put this in perspective, only loonies get version point-zero of an
Thomas> Apple OS. 10.0 was even worse than normal. I didn't hear anyone who
Thomas> was happy with 10.0 as their primary OS, but I've heard very few
Thomas> negative experiences with 10.2, and mostly from Mac users, whose
Thomas> complaints sound mind-bogglingly nit-picky to a Unix user (and this
Thomas> nit-pickiness is probably why it's as nice as it is). So, while it

I was quite happy with 10.0 (my first Mac) for what it was used for.
10.1 was much better. 10.2 has become worse. My wife is a quite
upset with me because she can't use Apple's Mail.app to send mail in
Korean that her friends can read. (It seems it uses iso-2022-kr
encoding instead of euc-kr). It can't talk to the printer on my
Linux box anymore. Yuck.

Thomas> I've played with MCL under Classic, and it's quite nice -- I can see
Thomas> why it has the reputation it does. CLISP is CLISP, on every platform.

Clisp on OS X is missing the FFI (because no one has ported the
RS-6000 FFI to powerpc?).

>> > 4. Do you think a G3 700MHz iBook is fast enough for OS X? (Mainly
>> > development, mail, writing, browsing - no fancy multimedia stuff.)

Thomas> That's plenty fast, just be sure you get enough memory. 128MB is the
Thomas> minimum so you really want >= 256MB.

My iMac 400 MHz is more than adequate for mail, writing, browsing.
Development is ok, especially compared to the 300 MHz sparc at
work. :-)

Ray

Lars Brinkhoff

unread,
Oct 25, 2002, 9:55:18 AM10/25/02
to
George Demmy <gde...@layton-graphics.com> writes:
> Lars Brinkhoff <lars...@nocrew.org> writes:
> > George Demmy <gde...@layton-graphics.com> writes:
> > > My wife and I purchased an eMac ... I ran into a wall of

> > > cognitive dissonance with the default mac key bindings.
> > In a similar situation, I very quickly replaced the Apple keyboard
> > with a Happy Hacking keyboard. Works like a charm.

I should have mentioned that I run Linux on the Mac, so I'm not as
sure that it works well in MacOS, but I see no reason why not.

> I'm drooling. Thanks for the tip! Can you "hot swap" the keyboards?

Sure. In Linux, anyway.

Russell McManus

unread,
Oct 25, 2002, 10:09:25 AM10/25/02
to

I bet you can plug in a Happy Hacking USB keyboard and achieve emacs
happiness, then put it in a drawer when your wife wants to use the
machine.

-russ

Michael Livshin

unread,
Oct 25, 2002, 10:11:44 AM10/25/02
to
Lars Brinkhoff <lars...@nocrew.org> writes:

> George Demmy <gde...@layton-graphics.com> writes:
>> My wife and I purchased an eMac with OS X 10.1 for "around the house
>> stuff". My background is unixy *but* I'm a devoted emacs user. I've
>> been using control-next-to-a keyboards and emacs meta next to the
>> space bar from way back. I ran into a wall of cognitive dissonance
>> with the default mac key bindings.
>
> In a similar situation, I very quickly replaced the Apple keyboard
> with a Happy Hacking keyboard. Works like a charm.

couldn't you just remap the keys? I know X allows you to do that,
dunno about Aqua.

I use a regular keyboard, but it has two controls (who needs the
CapsLock thing, anyway?), Meta instead of Alt and Alt instead of the
window key. thank (symbol-value '*deity*) for xmodmap.

--
All ITS machines now have hardware for a new machine instruction --
CIZ
Clear If Zero.
Please update your programs.

Will Deakin

unread,
Oct 25, 2002, 10:29:23 AM10/25/02
to
Tim Bradshaw wrote:

> * I wrote:
>>Sure. When I was at Manchester I got drinking with a couple of blokes
>>working in the comp.sci department who were involved in post-grad chip
>>burning. IIRC they were banging out runs of tens of chips in a
>>facility that cost about £5-10M sterling.
> Yes, but what kind of chip - it really does matter. Things like
> transistor counts have gone up a lot and feature sizes have gone down
> a lot for microprocessors. Both of these make *making* the things
> *very* expensive, especially if you want to get a good enough yield
> that you can sell them for anything like reasonable money.
Sure. Please don't take my hazy, drink-addled ramblings as a statement
of fact. Also, this was about 8-9 years ago.

From what I understood they were making CPU chips with strange clock
cycles: different parts of the chip -- say the cache -- ran at a
constant factor faster than, say, the FPU. From what I understood it was
alleged to be about as powerful as the 386, which was fairly current at
the time but probably not bleeding edge. Also, this was by no means an
industrial manufacturing process.

> Since that time the cost of designing a competitive box
> has gone up really a lot unless you want to just clone what someone
> else has done, including using all their custom logic (like processors
> &c). A whole world of small computer makers (and some large ones) has
> vanished in the last 10-15 years, and I think that cost is why.

Yes. Things *have* changed a lot. (I still remember playing with stuff
like the ZX81, Dragon, Apricot and Acorn...)

:)w

Chris Beggy

unread,
Oct 25, 2002, 11:52:36 AM10/25/02
to
Christopher Browne <cbbr...@acm.org> writes:

> Oops! Chris Beggy <chr...@kippona.com> was seen spray-painting on a wall:
>> Christopher Browne <cbbr...@acm.org> writes:
>>
>>> Linux runs very nicely on any of these architectures; it sure would be
>>> neat to be able to build a cheap MIPS box. A little microcode later
>>> and it might be /quite/ slick as a Lisp Machine.
>>
>> Think Sony Playstation2...
>
> .. which may have really nice stereo hardware, and some interesting
> DSP hardware, but is /really/ lean on CPU power, RAM, and disk space,
> and more or less totally non-expandable.

It's a cheap, mass-produced MIPS card.

Chris

Tim Moore

unread,
Oct 25, 2002, 1:23:16 PM10/25/02
to
Edi Weitz <e...@agharta.de> wrote in message news:<8765vrpj...@bird.agharta.de>...
>>
> Let me take this chance to ask Mac OS Lispers a couple of questions.
>

FWIW, I'm using OpenMCL + XDarwin + OroborOSX for all my McCLIM
development these days, on a 700MHz iBook.

> 1. How do the currently available implementations (I think that's
> CLISP, OpenMCL, AllegroCL) compare? Which one do you prefer and
> why? (Stability, correctness, OS integration, ...)

I can't compare OpenMCL to other Mac implementations but I can say it
holds its own against CMUCL on x86. The compiler is very fast, and
various "industrial strength" features like processes and the
interface to the OS feel quite mature. On the other hand, basic user
documentation is pretty scarce, the compiler is not nearly as
ambitious as CMUCL's, and I find it much harder to debug code, get
meaningful backtraces, etc. To put that in perspective, though, I'm a
CMUCL developer and have only recently started using OpenMCL.
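
[Whatever the implementation, the portable knobs below give its debugger
more to work with; how readable the resulting backtrace is -- which is the
complaint above -- is still up to the implementation. MIDDLE and TOP are
just made-up examples:]

;; Ask for maximum debug info and turn off aggressive optimization.
(declaim (optimize (debug 3) (safety 3) (speed 0)))

(defun middle (x) (/ 10 x))
(defun top (x) (middle x))

(trace middle)   ; standard TRACE prints each call and return value
;; (top 0)       ; signals DIVISION-BY-ZERO; the debugger/backtrace you land
;;               ; in depends on the implementation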

As others have said, the bridge to Objective-C and Cocoa is very cool
and has a ton of potential.


>
> (I have no plans to use Classic so MCL is not an option - yet.)
>

> 2. What about speed? I don't mean PowerPC compared to x86 but rather
> Lisp compared to other languages on the same platform. On Linux
> I've often seen CMUCL and also LispWorks and AllegroCL to be on par
> with C/C++ programs. Is this also the case on OS X?

Hard to say, but OpenMCL does have a long history on the Mac.


>
> 3. How good is Emacs on Mac OS X? I've heard current CVS sources will
> build an Aqua version - is anybody actually using it and can
> recommend it?

I use the X Windows version of Emacs 21. At the moment X is a weak
link in my Mac development chain. XDarwin, the port of XFree86, seems
reasonably solid but quite slow. I use it with OroborOSX, a window
manager that attempts to integrate X windows with the rest of the GUI.
The results are attractive, but there are some annoying bugs, mostly with
OroborOSX seeming to lose track of keyboard focus and window stacking
order. I'm hoping the situation will improve.


>
> 4. Do you think a G3 700MHz iBook is fast enough for OS X? (Mainly
> development, mail, writing, browsing - no fancy multimedia stuff.)

OS X runs very nicely on that machine, though I'm looking to upgrade
my 256M of memory to 640M. As I mentioned, XDarwin performance is not
good. On the bright side, that motivated me to use pointer motion
hints in McCLIM :)

Tim

Lars Brinkhoff

unread,
Oct 25, 2002, 1:45:36 PM10/25/02
to
Michael Livshin <use...@cmm.kakpryg.net> writes:
> Lars Brinkhoff <lars...@nocrew.org> writes:
> > In a similar situation, I very quickly replaced the Apple keyboard
> > with a Happy Hacking keyboard. Works like a charm.
> couldn't you just remap the keys? I know X allows you to do that,
> dunno about Aqua.

Sure, but I already used Happy Hacking keyboards with other computers,
so I wanted the same for my Mac.

Thomas F. Burdick

unread,
Oct 25, 2002, 2:15:03 PM10/25/02
to
Pascal Costanza <cost...@web.de> writes:

> Thomas F. Burdick wrote:
>
> > You got something against tar and cron?
>
> tar doesn't handle the Mac OS X file system correctly. A backup solution
> for Mac OS X needs to be able to deal with long file names, resource
> forks and finder information at the same time. Tools ported from the
> Unix world generally get the resource forks and/or the finder
> information wrong, and tools ported from Mac OS 9 are usually not able
> to deal with long file names and sometimes even with the file names
> themselves, because OS 9 and OS X use different character sets for file
> names.

Crud, you're right. I hadn't thought of resource forks (I admit to
using OS X primarily as a kind of odd Unix). For my files that need
backing up, tar is good enough.

> To put it mildly, this area needs some consolidation. To put it
> differently, I would be happy if you could point me to a good solution -
> I haven't found any yet. The best I have found so far is PsyncX at
> http://sourceforge.net/projects/psyncx

You could also try making disk images of what you need with hdiutil.

Thomas F. Burdick

unread,
Oct 25, 2002, 2:30:53 PM10/25/02
to
Alain Picard <apicard+die...@optushome.com.au> writes:

> I have a 500Mhz iBook. To me, OS X 10.2 feels slow as molasses.
> My iBook dual boots into linux; it feels at *least* 3 times faster
> under linux. Of course, you lose the beautiful display PDF...
> it's your call. I _hate_ waiting. Did I mention it's SLOW?

For Unix people, remember, it *is* Unix under there. The Finder, the
window manager, etc., are all just normal Unix processes. They get
run by the startup scripts. You know how to use Emacs or vi -- go
edit the damn things. Don't want the Finder? Move it and make a
symlink to the terminal app. You don't have to run the pretty Display
PDF slowness, if you really don't want to -- you can use the
command-line. Hell, it's Mach, you don't even need to run the dynamic
pager, if you're crazy. Follow the boot process, and elide what you
don't want. Much to some Mac users' horror, it really is just
OpenStep made to look like the Mac.

* ns:*gushing*
T
* #|It keeps doing that|# (setf ns:*gushing* nil)
NIL

Michael Sullivan

unread,
Oct 25, 2002, 2:46:34 PM10/25/02
to
Tim Bradshaw <t...@cley.com> wrote:
> * Duane Rettig wrote:

> > Joe, this is incorrect. My earliest on-line (CVS) records show that
> > the 1993 tak benchmarks for sun3 were .05 (both run time and real time)
> > and for sparc were .03 (run time and real time). I'd have to dig out
> > some archives for earlier results, but in my memory we did most of the
> > super-optimizations right after Gabriel's book ("Performance and Evaluation
> > of Lisp Systems", MIT Press, 1985) came out, in the mid to late 80's
> > and _not_ the late 90's.

> If those are correct (which I'm sure they are!) then you must have (or
> could have) been equalling the K machine by ~1990 - there were some
> 68k HP boxes which were really a lot faster than any of the Sun3s in
> 1989-90 as I remember, and I don't think that Sun3s got any faster
> after that time frame.

IIRC, Sun 3s were already obsoleted by 1990. When I was at Rochester,
our undergrad computer group inherited a couple of Sun 3s that the CS
department couldn't sell (asking less than $2K) after they were replaced
by SPARCs/SGIs. That was in the fall of '89. I don't think they were
making 3s any more after that.


Michael

--
Michael Sullivan
Business Card Express of CT Thermographers to the Trade
Cheshire, CT mic...@bcect.com

Michael Sullivan

unread,
Oct 25, 2002, 2:46:33 PM10/25/02
to
Raymond Toy <t...@rtp.ericsson.se> wrote:

> >>>>> "George" == George Demmy <gde...@layton-graphics.com> writes:
>
> George> for Apple folks. Of course, Apple has gone along with everyone

> George> else in moving the caps lock over where the control key
> George> belongs (control key is "pretty important" in the emacs
> George> world...), so that one-two punch dazed me a bit. The point of
> George> all of this is that you might consider getting some software
> George> to rebind the keys if you've developed any habits that might
> George> not translate well. Since I share this computer with


>
> AFAIK, you can't rebind the keys. Well, you can, but it doesn't work
> like you would want. I tried with xkeycaps. The capslock key now
> becomes a control-lock key. Every key thereafter is a control key.
> You have to hit control-lock again to turn it off.
>
> But maybe a standard PC USB keyboard will work, and this might work
> better. But I don't know of any Mac software to rebind the keys....

Rebinding the keys in classic MacOS is a simple matter of copying and
editing a KCHR resource and then installing it in the system. I still
haven't moved over to X, so I'm not sure exactly how it works there, but
I would assume that if it doesn't work like OS 9, it'll probably be
about the same as in NextStep.

Either way, it should be a pretty simple hack if nobody else has done it
yet.

I agree with the caps lock problem. I wish that was a toggle in the
software, rather than in the keyboard itself. Well, not personally
because I've long since adjusted to the mac keyboard, but it *should* be
that way.

Thomas A. Russ

unread,
Oct 25, 2002, 12:06:29 PM10/25/02
to
Edi Weitz <e...@agharta.de> writes:

> Now everybody tells me that 10.2 is really
> quite mature and I should give it another try. I might
> have the chance to get a used but almost new iBook and I'm
> considering to buy it.

Well, 10.2 is a good improvement, but there are still a few small
glitches in it. Printing support is much improved versus 10.1, but
still not quite as flexible as in the 9.x OS.

If you really want to run OS X, though, I think that you want to get a
G4-based machine rather than a G3-based one. OS X has various
optimizations that exploit the Altivec engine on the G4, so it is a lot
snappier on those platforms.

> What I'd like to know from people who've been using Lisp on Mac OS X
> is:


>
> 1. How do the currently available implementations (I think that's
> CLISP, OpenMCL, AllegroCL) compare? Which one do you prefer and
> why? (Stability, correctness, OS integration, ...)
>

> (I have no plans to use Classic so MCL is not an option - yet.)

Well, I only use MCL, so I can't quite help you here. I've run OpenMCL,
but didn't like the lack of an integrated editor. With the addition of
some user-contributed code and a few other minor improvements
contributed by myself and others, I get pretty good integration. The
only place where integration isn't complete is in the support for long
file names.

Digitool has announced that they will make an announcement about MCL 5 at
the Lisp conference. (Is that a meta-announcement?)

> 2. What about speed? I don't mean PowerPC compared to x86 but rather
> Lisp compared to other languages on the same platform. On Linux
> I've often seen CMUCL and also LispWorks and AllegroCL to be on par
> with C/C++ programs. Is this also the case on OS X?

I have no real comparison data on Lisp versus C++.

> 3. How good is Emacs on Mac OS X? I've heard current CVS sources will
> build an Aqua version - is anybody actually using it and can
> recommend it?

I use it. The CVS sources will build an Aqua version. It did take me
several tries to get it to succeed, but now that it is built, it works
fine. I wish someone would make a Mac installer package available. It
is easy to generate one from the sources, but I don't have a good place
to put it.

> 4. Do you think a G3 700MHz iBook is fast enough for OS X? (Mainly
> development, mail, writing, browsing - no fancy multimedia stuff.)

OS X will run on it. I do run OS X on a 350MHz G3 at home. It really
does better on a G4, though.

> 5. Other comments are welcome as well.
>

> Sorry for the slightly dumb questions but I try to avoid the fiasco I

> had last time. If you think I'm trying to start a flame war about
> operating systems or implementations you can mail me privately... :)
>
> Thanks in advance,
> Edi.

--
Thomas A. Russ, USC/Information Sciences Institute t...@isi.edu

Brad Miller

unread,
Oct 25, 2002, 5:50:15 PM10/25/02
to

"Pascal Costanza" <cost...@web.de> wrote in message
news:apa6t0$csv$1...@newsreader2.netcologne.de...
> Erik Naggum wrote:
> > * Pascal Costanza <cost...@web.de>
> > | Since I don't like Emacs I have decided to use MCL.
> >
> > Could you quickly summarize the reasons you do not like Emacs?
>
> I could, but it wouldn't make sense. My reasons are very subjective and
> I haven't taken the time yet to form a better informed opinion.

It's interesting particularly because FRED (MCL's editor) stands for "FRED
Resembles Emacs Deliberately", i.e., it's an EMACS clone. So I'm curious
what it is you didn't like about Emacs but did like about FRED.

>
> I hope I haven't given the impression that I want to discourage anyone
> from learning Emacs. This wasn't my intention.
>
>
> Pascal
>
> --
> Given any rule, however ‘fundamental’ or ‘necessary’ for science, there
> are always circumstances when it is advisable not only to ignore the
> rule, but to adopt its opposite. - Paul Feyerabend
>


Pascal Costanza

unread,
Oct 25, 2002, 6:41:48 PM10/25/02
to
Brad Miller wrote:
> "Pascal Costanza" <cost...@web.de> wrote in message
> news:apa6t0$csv$1...@newsreader2.netcologne.de...
>
>>Erik Naggum wrote:
>>
>>>* Pascal Costanza <cost...@web.de>
>>>| Since I don't like Emacs I have decided to use MCL.
>>>
>>> Could you quickly summarize the reasons you do not like Emacs?
>>
>>I could, but it wouldn't make sense. My reasons are very subjective and
>>I haven't taken the time yet to form a better informed opinion.
>
> It's interesting particularly because FRED (MCL's editor) stands for "FRED
> Resembles Emacs Deliberately", i.e., it's an EMACS clone. So I'm curious
> what it is you didn't like about Emacs but did like about FRED.

FRED supports both Emacs key bindings and the usual Apple key bindings
at the same time. I just don't make use of the Emacs keystrokes, but use
purely what I have learned from other text editors. For example, I
prefer to use the mouse for several purposes rather than learn some of
the Emacs keystrokes that seem counter-intuitive to me.

However, let me stress again that this is a purely subjective and not
well-informed opinion. I know some people who strongly prefer Emacs and
it seems to me that they have very good reasons. I would be happy if
someone could point me to a document that explains the "philosophy"
behind, or gives a good mental model for, the Emacs key bindings.

Christopher Browne

unread,
Oct 25, 2002, 7:43:15 PM10/25/02
to

That it is, but it is not purchasable as a mass-produced MIPS card.
You have to buy the whole PlayStation unit around it.

And as the components are all soldered into place, it is pretty much
non-expandable.

It demonstrates that the only way you can get "cheap" MIPS hardware is
if someone sets up a production run that leads to them making hundreds
of thousands of the units. And they probably /aren't/ profitable to
any significant degree; Sony more than likely expects to make the
/real/ money selling you PS2 games...
--
(reverse (concatenate 'string "moc.enworbbc@" "enworbbc"))
http://cbbrowne.com/info/sap.html
"One often contradicts an opinion when what is uncongenial is really
the tone in which it was conveyed." -- Nietzsche

Thien-Thi Nguyen

unread,
Oct 25, 2002, 7:47:11 PM10/25/02
to
Pascal Costanza <cost...@web.de> writes:

> explains the "philosophy" behind, or gives a good mental model for,
> the Emacs key bindings.

simple: the keybindings shipped are chosen by the emacs programmers.
the keybindings you use are chosen by .... YOU!

thi

Robert Swindells

unread,
Oct 25, 2002, 8:01:43 PM10/25/02
to
Will Deakin <aniso...@hotmail.com> wrote in message news:<apbkdj$vag$1...@newsreaderg1.core.theplanet.net>...

> Tim Bradshaw wrote:
> > * I wrote:
> >>Sure. When I was at Manchester I got drinking with a couple of blokes
> >>working in the comp.sci department who were involved in post-grad chip
> >>burning. IIRC they were banging out runs of tens of chips in a
> >>facility that cost about £5-10M sterling.
> > Yes, but what kind of chip - it really does matter. Things like
> > transistor counts have gone up a lot and feature sizes have gone down
> > a lot for microprocessors. Both of these make *making* the things
> > *very* expensive, especially if you want to get a good enough yield
> > that you can sell them for anything like reasonable money.

> Sure. Please don't take my hazy, drink-addled ramblings as a statement
> of fact. Also, this was about 8-9 years ago.
>
> From what I understood they were making CPU chips with strange clock
> cycles: different parts of the chip -- say the cache -- ran at a
> constant factor faster than, say, the FPU. From what I understood it was
> alleged to be about as powerful as the 386, which was fairly current at
> the time but probably not bleeding edge. Also, this was by no means an
> industrial manufacturing process.

The project would have been Amulet. They were building an asynchronous
implementation of ARM.

Don't forget that ARM themselves are fabless, and that Steve Furber
is the prof at Manchester who was running this project.

I wouldn't be surprised if the chips were being made by the same people
that make commercial prototypes for ARM Ltd.

I don't think that this proves one way or another that you can just walk
in off the street and easily get a chip fabbed in a recent process.

Robert Swindells

Louis Theran

unread,
Oct 25, 2002, 8:11:26 PM10/25/02
to
Raymond Toy <t...@rtp.ericsson.se> wrote in message
> AFAIK, you can't rebind the keys.

There's a program that lets you do pretty much arbitrary key remapping for OS X.

http://www.gnufoo.org/ucontrol/ucontrol.html

^L

Alain Picard

unread,
Oct 25, 2002, 10:08:03 PM10/25/02
to
mic...@bcect.com (Michael Sullivan) writes:

> >
> > AFAIK, you can't rebind the keys. Well, you can, but it doesn't work
> > like you would want. I tried with xkeycaps. The capslock key now
> > becomes a control-lock key. Every key thereafter is a control key.
> > You have to hit control-lock again to turn it off.

Check out uControl.

Advance Australia Dear

unread,
Oct 26, 2002, 1:22:24 AM10/26/02
to
Dave Bakhash <ca...@alum.mit.edu> , an acolyte of Cthulu the True God
wrote:


>
>the hardware is cheap; Linux is fast and free; LispWorks under Linux is
>very good and affordable

900 $ for the professional edition ?

Will Deakin

unread,
Oct 26, 2002, 3:04:43 AM10/26/02
to
Robert Swindells wrote:
> The project would have been Amulet. They were building an asynchronous
> implementation of ARM.
Thanks for this. (It is good to see I wasn't completely making it up...)

> Don't forget that ARM themselves are fabless, and that Steve Furbur
> is the prof at Manchester who was running this project.

Sure. I really wouldn't (didn't) know. I was a post-grad in the
Schuster Labs across the car park. IIRC the `discussions' were held
round the back of the John Rylands at the Duce and had involved lots
of Guinness.

> I don't think that this proves one way or another that you can just walk
> in off the street and easily get a chip fabbed in a recent process.

Fine. The point I was making is that for an amount x -- less than a
billion and more than a million $ -- you *can* burn chips. Not in an
industrial capacity, and I had forgotten how long ago this was...

:)w

Will Deakin

unread,
Oct 26, 2002, 4:06:02 AM10/26/02
to
Advance Australia Dear wrote:

> Dave Bakhash wrote:
>>the hardware is cheap; Linux is fast and free; LispWorks under Linux is
>>very good and affordable
> 900 $ for the professional edition ?
(sigh). Is $900 really so much?

It depends who you are: if I were a plumber paid to go round to houses
to fix leaky pipes or install heating systems, I would say $900 was
really, really cheap for my most important tool. Particularly since I
could phone the manufacturer up and ask for help if stuff was going
wrong, and they would really try to help me. However, if I were a DIY
enthusiast who had recently moved house and was fitting a new sink in
his kitchen, this would probably be too much.

Where in the spectrum of pipe related employment do you think you sit?

:)w

Ng Pheng Siong

unread,
Oct 26, 2002, 4:42:14 AM10/26/02
to
According to Erik Naggum <er...@naggum.no>:

> * Pascal Costanza <cost...@web.de>
> | Since I don't like Emacs I have decided to use MCL.
>
> Could you quickly summarize the reasons you do not like Emacs?
> Is it specific to Emacs on a particular platform or general?

I'm not Pascal, and I don't not like Emacs, but if I may:

I've been using vim for many years. I started using Emacs when I began
learning Lisp some months ago. After a while, my left thumb, with which I
press Meta, which == Alt next to the space bar, feels funny. I've never had
fingering problems with vim.

Are there any key-mapping or keyboard suggestions I can try? Does anyone
program Lisp using something like Dragon NaturallySpeaking? ;-)

(Someone else mentioned a 250 USD ergonomic keyboard a while ago.)

Thanks. Cheers.

--
Ng Pheng Siong <ng...@netmemetic.com> * http://www.netmemetic.com

Pascal Costanza

unread,
Oct 26, 2002, 6:47:28 AM10/26/02
to

This doesn't sound too bad. ;-) However, it's a very terse summary,
isn't it? ;)

For example, why is the meta key called "meta"? There's metaphysics,
meta object protocol, meta data, meta this and meta that. ;) In which
sense is the meta key meta?

There are several standard key bindings that start with ctrl-x or ctrl-k
(and then another key), if I remember correctly. What kinds or sets of
functions do these "prefixes" denote, respectively? Are there some good
mnemonics?

Perhaps these are historical accidents. So where can I read about the
history of Emacs?

For example, I have learned far more about Common Lisp from historical
descriptions and "classic" texts than from any technical tutorial. I
expect this also to be the case for Emacs.

Christopher C. Stacy

unread,
Oct 26, 2002, 7:02:31 AM10/26/02
to
>>>>> On Sat, 26 Oct 2002 12:47:28 +0200, Pascal Costanza ("Pascal") writes:

Pascal> Thien-Thi Nguyen wrote:
>> Pascal Costanza <cost...@web.de> writes:
>>
>>> explains the "philosophy" behind, or gives a good mental model for,
>>> the Emacs key bindings.
>> simple: the keybindings shipped are chosen by the emacs programmers.
>> the keybindings you use are chosen by .... YOU!

Pascal> This doesn't sound too bad. ;-) However, it's a very terse summary,
Pascal> isn't it. ;)

Pascal> For example, why is the meta key called "meta"? There's
Pascal> metaphysics, meta object protocol, meta data, meta this and
Pascal> meta that. ;) In which sense is the meta key meta?

It is used to chord commands that are an abstraction level higher
than the commands chorded by the CONTROL key.

Pascal> There are several standard key bindings that start with
Pascal> ctrl-x or ctrl-k (and then another key), if I remember
Pascal> correctly. What kind or sets of functions do these "prefixes"
Pascal> denote respectively? Are there some good mnemonics?

Ctrl-x dispatches the eXtended commands - the ones that are typed
with two characters, rather than a single character.

Ctrl-k is for Killing a line.

The commands are generally abbreviations for English words.
Control-e goes to the "end" of a line.
Meta-e goes to the "end" of a sentence.
Control-Meta-e goes to the end of a sexpr.

Control-a goes to the start of the line. Why not Control-s?
Because "s" was taken by "Search", and "b" was already taken
for "backwards". It's a little arbitrary, but not hard to learn.

I would suggest running the interactive tutorial that is automatically
offered when Emacs starts up, then maybe reading the Emacs manual.

Pascal> Perhaps these are historical accidents.
Pascal> So where can I read about the history of Emacs?

Read "Theory and Practice of Text Editors: A Cookbook for an Emacs",
which was Craig Finseth's undergraduate thesis at MIT circa 1980.

Pascal> For example, I have learned far more about Common Lisp from
Pascal> historical descriptions and "classic" texts than from any
Pascal> technical tutorial. I expect this also to be the case for Emacs.

Being familiar with the line-oriented editors for the operating
systems of the early 1970s, and also with the idea of a stream
oriented context editor (an application on APL*PLUS), but never
having seen such a thing as a "display editor", it took someone
about 10 minutes to teach me how to use Emacs. My point being:
There's no great mystery involved!

Erik Naggum

unread,
Oct 26, 2002, 10:59:25 AM10/26/02
to
* Ng Pheng Siong

| Are there any key-mapping or keyboard suggestions I can try? Any one
| programs Lisp using something like Dragon Naturally Speaking? ;-)

Emacs actually comes with a built-in Emacs Aptitude Test. Do you remap
your keyboard or the Emacs keybindings before the chords and sequences it
comes with by default have wreaked havoc with your hands? If you do not
do anything to make Emacs more convenient for yourself, you may not have
the prerequisite aptitude to use it productively.

I have a "standard" PC keyboard (except with tons of function keys) and
have moved the digits off the normal keypad and onto the numeric keypad
and moved all the shifted symbols down to unshifted, and then physically
moved the column that used to read 6YHN and all columns right of it one
column right. The vacated holes in they keyboard are filled with whatever
left-over keys you have, and then you tell your keyboard driver og X or
whatever that you have placed the parens {} [] () and <> (unshifted and
shifted) in what used to be the 6YHN colum. This move some incredibly
frequently used keys to between your index fingers instead of terrorizing
your right little finger. Also, return and backspace are now one column
closer and also strain the little finger much less. You also get a little
more space between your hands, which some of those ergonomic stunts seem
to argue is a good thing. Also, with Unicode, typing becomes much more
fun than remembering those TeX monstrosities, but the downside is that you
need a lot more keys. So AltGr is also one column closer to your right
thumb and painless to use even with other keys on the right hand. Then
the left thumb can be used for space, too, or the Alt key that you can use
to teach Emacs to use much shorter commands. Also, remap the useless CAPS
key to something useful, like window manager shortcuts. The really good
thing about this keyboard layout is that your computer is /secure/ even if
someone should break in and get console access.

I have a system-wide Xmodmap file because the keyboard driver stuff under
Linux is so goddamned "i18nized" it has become impossible to customize to
get predictable results, plus even Debian nukes those files on upgrades.
Let me know if you would like to see it.

--
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.

Ng Pheng Siong

unread,
Oct 26, 2002, 12:25:39 PM10/26/02
to
According to Erik Naggum <er...@naggum.no>:
> Emacs actually comes with a built-in Emacs Aptitude Test. Do you remap
> your keyboard or the Emacs keybindings before the chords and sequences it
> comes with by default have wreaked havoc with your hands? If you do not
> do anything to make Emacs more convenient for yourself, you may not have
> the prerequisite aptitude to use it productively.

Heh heh. Well said.

> I have a system-wide Xmodmap file because the keyboard driver stuff under
> Linux is so goddamned "i18nized" it has become impossible to customize to
> get predictable results, plus even Debian nukes those files on upgrades.
> Let me know if you would like to see it.

Yes, please. TIA!

Paul Wallich

unread,
Oct 26, 2002, 12:36:29 PM10/26/02
to
In article <apdid8$6mh$1...@knossos.btinternet.com>,
Will Deakin <aniso...@hotmail.com> wrote:

In woodworking, there's an adage that every (class of) serious tool
costs $1,500. You can buy 3 $500 tablesaws over the course of your
career, or one $1500 saw. Maybe buying 500 $3 screwdrivers over the
course of 30 years is pushing the example a bit, but the idea is the
same. It's a pity that there's no medium-term market in programmer
productivity where capital for these sorts of things could be raised
(instead, people have to invest in themselves, and are often cash
constrained.) Total cost of ownership can be a real pain all round.

paul

Andreas Eder

unread,
Oct 26, 2002, 1:14:13 PM10/26/02
to
Erik Naggum <er...@naggum.no> writes:

> I have a system-wide Xmodmap file because the keyboard driver stuff under
> Linux is so goddamned "i18nized" it has become impossible to customize to
> get predictable results, plus even Debian nukes those files on upgrades.
> Let me know if you would like to see it.

Yes, I would very much like to see it! Could you post it here?

Thanks,

'Andreas
--
Wherever I lay my .emacs, there's my $HOME.

Thien-Thi Nguyen

unread,
Oct 26, 2002, 2:46:17 PM10/26/02
to
Pascal Costanza <cost...@web.de> writes:

> This doesn't sound too bad. ;-) However, it's a very terse summary,
> isn't it. ;)

that is the best i can do when reporting the essence of a philosophy.
luckily, others have shared some next-level-up info. (ok, back to lurking.)

thi

Eduardo Muñoz

unread,
Oct 26, 2002, 4:06:53 PM10/26/02
to
Erik Naggum <er...@naggum.no> writes:

> I have a "standard" PC keyboard (except with tons of function keys) and
> have moved the digits off the normal keypad and onto the numeric keypad
> and moved all the shifted symbols down to unshifted, and then physically
> moved the column that used to read 6YHN and all columns right of it one
> column right. The vacated holes in the keyboard are filled with whatever
> left-over keys you have, and then you tell your keyboard driver or X or
> whatever that you have placed the parens {} [] () and <> (unshifted and
> shifted) in what used to be the 6YHN column. This moves some incredibly
> frequently used keys to between your index fingers instead of terrorizing
> your right little finger.

I mapped those to M-5 <-> M-8 like this:

(defmacro definsert (string &optional back-chars)
`(lambda () (interactive) (insert ,string) (backward-char (or ,back-chars 0))))

(global-set-key [?\M-5] (definsert "[]" 1))
....

> [...] So AltGr is also one column closer to your right


> thumb and painless to use even with other keys on the right hand. Then
> the left thumb can be used for space, too, or the Alt key that you can use
> to teach Emacs to use much shorter commands.

I have an IBM Model-M¹ and can't reach AltGr with
my right thumb, so I use the right pinky (sp?)
finger. I feel that this is quite comfortable but
then I can't type [,{, etc so I made the above
key-bindings. BTW, what keyboard do you use?

I picked mine from my company computer's
junkyard. Like new after an ammonia bath :)


¹ http://www.3m3718.com/modelm.php

--

Eduardo Muñoz

Advance Australia Dear

unread,
Oct 26, 2002, 9:59:43 PM10/26/02
to
Paul Wallich <p...@panix.com>, an acolyte of Cthulu the True God
wrote:

>>>>the hardware is cheap; Linux is fast and free; LispWorks under Linux is
>>>>very good and affordable
>>> 900 $ for the professional edition ?
>>(sigh). Is $900 really so much?
>>
>>It depends who you are

Strictly DIY :-)

Although I am remunerated adequately for what I do at work, I find it
difficult to justify US $ 900 = AU $ 1800 for a purely hobbyist tool.


Marc Spitzer

unread,
Oct 26, 2002, 10:30:55 PM10/26/02
to
Advance Australia Dear <fundamenta...@yahoo.com> wrote in
news:iuhmru8nerbq8s5m2...@4ax.com:

cmucl is free. That new one that came out (Scieneer, sp?) is $250 for a
licence.

marc

Zachary Beane

unread,
Oct 27, 2002, 8:09:47 AM10/27/02
to

It still occasionally forgets the state and acts like sticky ctrl,
especially when coming out of sleep. I've gotten in the habit of
toggling ctrl a few times if things seem to get weird. Ugh.

Zach

Erik Naggum

unread,
Oct 27, 2002, 9:09:58 AM10/27/02
to
* Advance Australia Dear <fundamenta...@yahoo.com>

| Although I am remunerated adequately for what I do at work, I find it
| difficult to justify US $ 900 = AU $ 1800 for a purely hobbyist tool.

Very few hobbies are inexpensive. The majority of hobbies cost /far/
more than computing and programming. One might be tempted to ask what it
is about computers that makes some people so unwilling to pay for anything
and who think that "hobby" is an adequate excuse when almost any hobby
they could take up instead would be more expensive. In fact, there is
nothing about computers. The attitude is that of the /masses/ who are
unwilling to do something seriously. /Real/ hobbyists are enthusiasts who
spare no expense: they seek ways to make more money so they can play with
their hobby. What we have in the computer industry is not hobbyists, but
we did have hobbyists in the past. Today, we have bored dabblers who see
the computer as frivolous entertainment, as puzzles, as meaningless toys.
And on the Net, we have people who think that /not/ being serious about
their very own interests should be a mark of distinction.

This is what happens when computers become mass-marketed goods: The
masses actually get ahold of them and apply "mass ethics" to it: Being
serious about something carries the potential that you will stand out
from the crowd, and the masses know what standing out does to people
with an interest in computers and programming: They become geeks and
nerds, they become socially maladjusted because the masses cannot tolerate
that someone becomes good at or is even interested in something that has
an intellectual bent. Therefore, a true hobbyist would easily spend ten
times as much as the computer "hobbyist" would /not/ spend on his "hobby"
because it is no hobby at all. It is the absence of interest that marks
the man of the masses who has unfortunately bought a computer and wants
to play with it for free, without monetary or intellectual investment,
for fear of not being a member of the masses if he did either.

What /hobbies/ do people have that easily cost more than USD 900? One of
my hobbies is marksmanship, target shooting with .22LR and .38 pistols.
It would be hard to even /begin/ with that hobby for less than USD 900.
Another hobby is literature. Unless I want to read mass market editions
of interesting books, I have to shell out hard cash for hardcover. Right
now, Amazon.com tells me that I could make more than USD 900 if I sold my
past purchases as used books through them. I have taken up movies on DVD
as another hobby, but have decided against spending several thousand
dollars on a home entertainment center like many people do. Still, at
USD 25 or more per acquisition, USD 900 does not last long. Then I have
a cat, but she is not the hobby she could have been with cat shows and
everything -- I do think it is nuts to spend thousands of dollars every
year on such a hobby. But she has to maintain good health and deserves
the best treatment any living thing can get, and this is not free at all.
I would be /very/ unhappy if she started to need USD 900 a year in vet
bills, not because of the money, but because that would mean she was
living on overtime.

One of the things I do /not/ have because I want to spend money on valuable
things is a car. A car costs about four times as much to purchase and own
in Norway as in the United States, and almost all of the additional cost
is taxation, which I think has already taken too big a chunk out of my
life. We have a bonus system on insurance that makes it very expensive
to start to own a car, and if you are a very good customer, it often pays
to cover minor repair work from your own wallet compared to a reduced
bonus. USD 900 would therefore cover a fender-bender. USD 900 would
also buy you less than 25 gallons of gasoline. So I decided against this
necessity because I value many other things higher than a car. My rent
is also very low for the Oslo market and I have not bought a new computer
since 600 MHz was fast, but I saved up money to purchase the gigabyte of
memory and up the total disk space to 350G. And then I spend a lot more
time in the library than I used to because I want to save money on books
I would probably only read once and which would force me to move if I had
to purchase more bookshelves for them. Budgeting is a balancing act and
I probably refuse to spend money on many things that other people consider
bare necessities, just like the car and the house I do not own.

But you find it difficult to "justify" USD 900 on your hobby. Amazing.
How much does your Internet connection cost you? With a commercially
hosted server and digital cable TV with a megabit Internet connection,
USD 900 gives me about 9 weeks' worth of full service, but this probably
does not count, because I do consider it a business expense.

So what would you /really/ have to give up to get USD 900 for your hobby?
Or do you make the purchase tomorrow morning now that you find some
difficulty in /not/ justifying the expenditure?

Steve Long

unread,
Oct 27, 2002, 12:16:22 PM10/27/02
to
On 10/24/02 2:26 PM, in article 8765vrpj...@bird.agharta.de, "Edi
Weitz" <e...@agharta.de> wrote:

> t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
>
>> [Mac OS X is] certainly not perfect (I wouldn't consider running a
>> really important server on it -- I'm sticking to Solaris for that),
>> but for an end-user OS, it's the best I've seen.
>
> Let me take a this chance to ask Mac OS Lispers a couple of questions.
>
> Background: I've been using Macs in the 90s and have fond memories -
> but that was back when I was a mathematician at the
> university and didn't care much about programming. I've
> since switched to FreeBSD for servers and Linux for my
> laptops. After having read all the marketing hype about
> Mac OS X I was one of the first to order a PowerBook G4 in
> Germany and was so disappointed with 10.0 that I sold the
> laptop a mere three months later and returned to my
> Thinkpads. Now everybody tells me that 10.2 is really
> quite mature and I should give it another try. I might
> have the chance to get a used but almost new iBook and I'm
> considering buying it.
>
> What I'd like to know from people who've been using Lisp on Mac OS X
> is:
>
> 1. How do the currently available implementations (I think that's
> CLISP, OpenMCL, AllegroCL) compare? Which one do you prefer and
> why? (Stability, correctness, OS integration, ...)
>
> (I have no plans to use Classic so MCL is not an option - yet.)
>

> 2. What about speed? I don't mean PowerPC compared to x86 but rather
> Lisp compared to other languages on the same platform. On Linux
> I've often seen CMUCL and also LispWorks and AllegroCL to be on par
> with C/C++ programs. Is this also the case on OS X?
>

> 3. How good is Emacs on Mac OS X? I've heard current CVS sources will
> build an Aqua version - is anybody actually using it and can
> recommend it?
>

> 4. Do you think a G3 700MHz iBook is fast enough for OS X? (Mainly
> development, mail, writing, browsing - no fancy multimedia stuff.)
>

> 5. Other comments are welcome as well.
>
> Sorry for the slightly dumb questions but I try to avoid the fiasco I
> had last time. If you think I'm trying to start a flame war about
> operating systems or implementations you can mail me privately... :)
>
> Thanks in advance,
> Edi.

I can only recommend Allegro CL and Emacs as the tools of choice because
that's what is provided by my employer. Emacs is an incredibly flexible and
robust tool for typing your source code. Its Lisp-mode is second-to-none for
auto-formatting text as you work. I would love (i.e., pay) to have a version
that works the same way on my Mac.

While I use Allegro CL (via ICAD) for work, I very often use MCL for
developing fundamental Lisp tools on a 500MHz Mac in Classic Mode (OS 10.1
is the boot enviro). This means I'm running a run-time environment on top of
a Classic on top of a Unix-based OS. It still performs the thousands of
geometric analyses and extensive string-manipulations faster than the Java
equivalent. I used to use PowerLisp under OS9, but it would not support the
hierarchical object system I was developing.

slong

Thomas F. Burdick

unread,
Oct 27, 2002, 1:40:08 PM10/27/02
to
Erik Naggum <er...@naggum.no> writes:

> * Advance Australia Dear <fundamenta...@yahoo.com>
> | Although I am remunerated adequately for what I do at work, I find it
> | difficult to justify US $ 900 = AU $ 1800 for a purely hobbyist tool.

If you're trying to imply that LW costs $1800, that's misleading.
Translating that number to Yen doesn't make LW suddenly absurdly
expensive. Presumably people in the Australian middle class have
around twice as many AU$ as their American equivalents would have US$.

> Very few hobbies are inexpensive. The majority of hobbies cost /far/
> more than computing and programming. One might be tempted to ask what it
> is about computers that makes some people so unwilling to pay for anything
> and who think that "hobby" is an adequate excuse when almost any hobby
> they could take up instead would be more expensive. In fact, there is
> nothing about computers. The attitude is that of the /masses/ who are
> unwilling to do something seriously. /Real/ hobbyists are enthusiasts who
> spare no expense: they seek ways to make more money so they can play with
> their hobby. What we have in the computer industry is not hobbyists, but
> we did have hobbyists in the past. Today, we have bored dabblers who see
> the computer as frivolous entertainment, as puzzles, as meaningless toys.
> And on the Net, we have people who think that /not/ being serious about
> their very own interests should be a mark of distinction.

I find it pretty amusing that this thread is going on alongside a
thread about the best Lisp setup for OS X, which I am quite happy to
pay for[*]. And this thread is about a Lisp for Linux, which means
the people using it have a computer with an OS they got for free or
almost-free -- meaning they have about $200 more money left over for
the development tools. Some people don't have budgets that allow them
to save this amount of money in anything like a reasonable amount of
time -- but most poor folks don't complain that their computers are
missing bells and whistles compared to a bourgeois. We're talking
about the *best* Lisp system, not the cheapest adequate system. $900
for "the best" is a damn good deal.

[*] Man, it'd been a long time since I'd paid more than $20 for an OS.
It's not that I haven't been willing to, for an OS I considered worth
it. But with Linux and FreeBSD being almost-free, the OSes I wanted
the most only cost $20 or so. It's nice to be running something on my
desktop that I value.

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'

Michael Sullivan

unread,
Oct 27, 2002, 4:08:09 PM10/27/02
to
Pascal Costanza <cost...@web.de> wrote:

> Thomas F. Burdick wrote:
> > Pascal Costanza <cost...@web.de> writes:
>

> >>I haven't yet found a convincing and inexpensive backup solution for Mac
> >>OS X.

[resource forks & long filenames. You say tomayto, I say tomahto.]

> To put it mildly, this area needs some consolidation. To put it
> differently, I would be happy if you could point me to a good solution -
> I haven't found any yet. The best I have found so far is PsyncX at
> http://sourceforge.net/projects/psyncx

There's a utility called Carbon Copy Cloner that most of the mac admins
I know are using, but I haven't tried it yet so I can't give a personal
recommendation. I know it handles resource forks and file metadata, but
I'm not so sure about long filenames (everyone I'm talking to is
concerned about the mac migration direction).


Michael

Will Deakin

unread,
Oct 27, 2002, 5:44:25 PM10/27/02
to
Advance Australia Dear wrote:
> Strictly DIY :-)
There are a lot of people in the same boat. However, it could be seen as bad
form to denigrate those who make industrial machine tools...

> Although I am remunerated adequately for what I do at work, I find it
> difficult to justify US $ 900 = AU $ 1800 for a purely hobbyist tool.

Hmmm. I think you've misunderstood my analogy.

When I moved into my current home I plumbed in a new sink in the kitchen
which has given me good service for four plus years. To do this I had to
buy and use tools that a plumber would use. These were *not* hobbyist
tools but since I am unlikely to fit sinks every day I did not buy the
most hardwearing -- partly because the reason for doing the job was that
I was brassic. Equally, I did not buy the hobbyist ones -- if I
understand what you mean by this -- but ones supplied by a builders'
merchant that a full-time plumber could get reasonable use from.

As I have learnt from bitter experience in a whole range of things: if
you are serious about doing something spend the money to get the tools
to do the job.

Will

Erik Naggum

unread,
Oct 27, 2002, 7:50:10 PM10/27/02
to
* Advance Australia Dear

| Although I am remunerated adequately for what I do at work, I find it
| difficult to justify US $ 900 = AU $ 1800 for a purely hobbyist tool.

* Thomas F. Burdick


| If you're trying to imply that LW costs $1800, that's misleading.

It was not misleading to people who have become used to dealing with multiple
currencies, but to those who are not, please note that $ is /meaningless/
outside of the United States. That is, there is no telling which currency
people refer to with this symbol. Those who have become used to dealing with
multiple currencies also favor using ISO 4217 to refer to them. ISO 4217
is based on ISO 3166, which, with the exception of the country that does
not believe it is the world's number one most backward technological country
and the country that thinks it is the number one most advanced country,
namely the UK and the US, respectively, is the top-level domain name.
That is, if you know the top-level domain name of the country, you know
the first two letters of their international standard currency code and
the third is usually pretty easy to intuit, but if not, looking it up is
not hard given this simple guideline. The UK is of course GB.

The international standard currency code for U.S. dollars is USD. The
international standard currency code for Australian dollars is AUD. If
you want to write a currency in standard notation, the currency code
prefixes the amount: USD 900.00 or AUD 1800.00. Like so many other
things that normally require international cooperation, like bombing a
country back from the bronze age to the stone age, the United States
is among the hardest countries to convert to standards not of their own
making. It is partly because of the stubbornness of the symbolic value
of "$" that the European Union needed their own stupid currency symbol
and screwed up character sets to boot. The Euro is better known as EUR,
however, and there /is/ a .EU top-level domain, too. The age-old and
very good question "If I want to call Europe, who do I call?" also has an
answer these days.
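
Since this is a Lisp group, here is a throwaway Common Lisp sketch of the
notation just described -- code first, then the amount. The function name
is made up purely for illustration:

;; Print an amount in the ISO 4217 style: currency code, space, amount.
(defun format-amount (code amount)
  (format nil "~A ~,2F" code amount))

;; (format-amount "USD" 900)  => "USD 900.00"
;; (format-amount "AUD" 1800) => "AUD 1800.00"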

| Some people don't have budgets that allow them to save this amount of
| money in anything like a reasonable amount of time -- but most poor folks
| don't complain that their computers are missing bells and whistles
| compared to a bourgeois. We're talking about the *best* Lisp system, not
| the cheapest adequate system. $900 for "the best" is a damn good deal.

Matter of fact, I am unable to grasp why so many stinking poor people
want to use the commercial Common Lisp environments and come here to
complain about it, alongside their inevitable whining that they are not
/really/ interested in Common Lisp, that it is just a "hobby", and that they
would not use Common Lisp for anything productive if they did /get/ their
hands on the expensive environments. If I attempt to understand it, I
only come up with evil and destructive reasons like people who do not
want something that they cannot get to /exist/ to begin with. There are
a lot of those people around; I would not expect them to become interested
in computer programming and Common Lisp in particular, however.

Raffael Cavallaro

unread,
Oct 28, 2002, 9:08:41 AM10/28/02
to
m...@panix.com (Michael Sullivan) wrote in message news:<1fkpqac.1li2vm313htrzpN%m...@panix.com>...

> Pascal Costanza <cost...@web.de> wrote:
>
> > Thomas F. Burdick wrote:
> > > Pascal Costanza <cost...@web.de> writes:
>
> > >>I haven't yet found a convincing and inexpensive backup solution for Mac
> > >>OS X.
>
> [resource forks & long filenames. You say tomayto, I say tomahto.]
>
> > To put it mildly, this area needs some consolidation. To put it
> > differently, I would be happy if you could point me to a good solution -
> > I haven't found any yet. The best I have found so far is PsyncX at
> > http://sourceforge.net/projects/psyncx
>
> There's a utility called Carbon Copy Cloner that most of the mac admins
> I know are using, but I haven't tried it yet so I can't give a personal
> recommendation.

Carbon Copy Cloner is really an AppleScript that calls ditto. You use
ditto from the command line in the form:

ditto -v -rsrcFork /Path/to/sourcedirectory /Path/to/destinationdirectory

(or an individual file instead, of course). That's all one line, in case
your newsreader breaks it. The -v flag is verbose. See
"man ditto" for the full set of options.

I use ditto from the command line for backups. I've also used PsyncX.
Both work quite well.

There's also CpMac, which also preserves resource forks, but might be
more familiar to those used to *nix cp.
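
And since this is comp.lang.lisp: if you wanted to drive the same backup
from a Lisp listener, something along these lines would probably do it,
assuming your implementation provides run-program the way OpenMCL's
ccl:run-program does (a sketch only -- the name backup-with-ditto is made
up, and you should check your implementation's run-program arguments
before relying on it):

;; Sketch: shell out to ditto, preserving resource forks.
;; ccl:run-program takes the program name and a list of argument strings.
(defun backup-with-ditto (source destination)
  (ccl:run-program "ditto"
                   (list "-v" "-rsrcFork" source destination)))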

You should note that the real stated purpose of Carbon Copy Cloner is
to produce a bootable clone of an existing installation. Its author
has determined which files are necessary for MacOS X to boot, and
makes sure to copy those correctly, along with your data.
Unfortunately, although it correctly copies all the files, I've never
gotten it to produce a bootable FireWire clone. The OS thinks that
it's a bootable drive, and lets me choose it as a boot volume, but the
system just hangs. I think this is a defect in Apple's FireWire
implementation, not a problem with CCC. Others have had problems with
bootable FireWire clones, but cloning to an IDE drive seems to work
just fine.

P.S. For general MacOS X info like this, I've yet to find a better
resource than MacOS X Hints <http://www.macosxhints.com>, and it's
searchable too.

Joe Marshall

unread,
Oct 28, 2002, 9:36:32 AM10/28/02
to
Erik Naggum <er...@naggum.no> writes:

> I have a "standard" PC keyboard (except with tons of function keys) and
> have moved the digits off the normal keypad and onto the numeric keypad
> and moved all the shifted symbols down to unshifted, and then physically
> moved the column that used to read 6YHN and all columns right of it one
> column right. The vacated holes in the keyboard are filled with whatever
> left-over keys you have, and then you tell your keyboard driver or X or
> whatever that you have placed the parens {} [] () and <> (unshifted and
> shifted) in what used to be the 6YHN column.

Great idea! I've remapped a number of things including moving the
parens to a more reasonable position, but I'll have to try this.
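
For the Emacs-only part of this, a minimal sketch (assuming a reasonably
recent GNU Emacs; this is not from Erik's Xmodmap setup) is to swap the
parens with the brackets at the input level:

;; Sketch: make the unshifted bracket keys produce parens and vice versa,
;; inside Emacs only, via the keyboard translation table.
(keyboard-translate ?\( ?\[)
(keyboard-translate ?\[ ?\()
(keyboard-translate ?\) ?\])
(keyboard-translate ?\] ?\))

An Xmodmap-level remapping like the one described above still wins outside
Emacs, of course.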

> The really good thing about this keyboard layout is that your
> computer is /secure/ even if someone should break in and get
> console access.

Either that, or your kids become proficient in touch-typing at a very
early age.

Raymond Toy

unread,
Oct 28, 2002, 12:00:09 PM10/28/02
to
>>>>> "Joe" == Joe Marshall <j...@ccs.neu.edu> writes:

Joe> Erik Naggum <er...@naggum.no> writes:
>> I have a "standard" PC keyboard (except with tons of function keys) and
>> have moved the digits off the normal keypad and onto the numeric keypad
>> and moved all the shifted symbols down to unshifted, and then physically
>> moved the column that used to read 6YHN and all columns right of it one
>> column right. The vacated holes in the keyboard are filled with whatever
>> left-over keys you have, and then you tell your keyboard driver or X or
>> whatever that you have placed the parens {} [] () and <> (unshifted and
>> shifted) in what used to be the 6YHN column.

Joe> Great idea! I've remapped a number of things including moving the
Joe> parens to a more reasonable position, but I'll have to try this.

Long ago I taught myself how to use a Dvorak layout over a weekend.
Was getting pretty good. Then I went to the lab at school where the
keyboards weren't Dvorak. I couldn't type at all.

I don't massively munge keyboards anymore....

Ray

Jules F. Grosse

unread,
Oct 28, 2002, 12:11:49 PM10/28/02
to
Thank you all for your answers. I guess the answer to my question is:
there isn't a combination that is clearly better than all others.

In fact, for every combination of hardware/operating system there is
an excellent choice. Which is good! ;)

Erik Naggum

unread,
Oct 28, 2002, 12:48:29 PM10/28/02
to
* Raymond Toy <t...@rtp.ericsson.se>

| Long ago I taught myself how to use a Dvorak layout over a weekend.
| Was getting pretty good. Then I went to the lab at school where the
| keyboards weren't Dvorak. I couldn't type at all.

Well, over here in this backward country, the geniuses who never believed
that leaving good enough alone was an option have already reworked the keyboard
so anyone accustomed to US layout will go from fast touch-typing to hunt-
and-peck anyway, so there is little point in /not/ optimizing the hell
out of one's own keyboard layout. So, sometimes, this "localization"
crap can actually be of benefit to users.

Besides, I have one of those cordless keyboards. That should mean I can
take it with me everywhere I go. (Like the remote control that Mr. Chance
took with him into the real world in «Being There» with Peter Sellers in
the lead role.)
