Is LISP suited for neural networks?
The old question: is it slow (since it was not designed with matrix algebra
in mind, as it says in <http://www-2.cs.cmu.edu/~mmv/15-381/spring97/prog4.html>)?
How does it compare with C/C++ for the task?
TIA
Regs,
HL
There's no reason that a Lisp compiler can't generate very efficient
numerical code, equalling or surpassing that of FORTRAN or C.
This was first demonstrated around 1977.
Lisp was originally invented for AI researchers for programming
things like constraint systems and neural networks.
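For a concrete flavor (a minimal sketch, not any particular system's code),
the numerical core of one feedforward layer might look like this in CL: a
weight matrix applied to an input vector, squashed by a sigmoid:

(defun sigmoid (x)
  (/ 1d0 (+ 1d0 (exp (- x)))))

(defun layer-forward (weights inputs)
  ;; WEIGHTS is an MxN array of double-floats, INPUTS a vector of
  ;; length N; returns the vector of squashed weighted sums.
  (let* ((m (array-dimension weights 0))
         (n (array-dimension weights 1))
         (outputs (make-array m :element-type 'double-float)))
    (dotimes (i m outputs)
      (let ((sum 0d0))
        (dotimes (j n)
          (incf sum (* (aref weights i j) (aref inputs j))))
        (setf (aref outputs i) (sigmoid sum))))))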
Transliterating the implementation of an algorithm from some
other language into Lisp may not be the best way to do something.
Since your homework assignment begins by handing you some C source
code, and says that it's your starting point, and since the instructor
recommends not using Lisp, and since neither he nor you seem to be
experienced Lisp programmers, maybe you should take his advice and use C.
Otherwise, I'd recommend writing it in Lisp; that's what I would use!
We have a CL program which includes a 200-line self-organizing map
(aka Kohonen map) implementation. I would say it is.
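For a feel of what such an implementation looks like, here is a minimal
sketch of the two core operations (an illustration only, not the Cyberell
code; representing the map as a vector of weight vectors is my assumption):

(defun best-matching-unit (nodes input)
  ;; Return the index of the node whose weight vector is nearest INPUT.
  (let ((best 0)
        (best-dist most-positive-double-float))
    (dotimes (i (length nodes) best)
      (let ((w (aref nodes i))
            (d 0d0))
        (dotimes (k (length input))
          (let ((diff (- (aref input k) (aref w k))))
            (incf d (* diff diff))))
        (when (< d best-dist)
          (setf best-dist d
                best i))))))

(defun update-node (node input learning-rate)
  ;; Move NODE's weights a fraction LEARNING-RATE toward INPUT.
  (dotimes (k (length node) node)
    (incf (aref node k)
          (* learning-rate (- (aref input k) (aref node k))))))

Training visits each input, finds the best-matching unit, and updates it
(and, in a full implementation, its grid neighbours with a decaying rate).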
--
Ola Rinta-Koski o...@cyberell.com
Cyberell Oy +358 41 467 2502
Rauhankatu 8 C, FIN-00170 Helsinki, FINLAND www.cyberell.com
:)w
Languages do not compare. Code written in them does not compare. Code
compiled with a particular compiler on a particular platform _might_
compare. You have not compared languages when you compared the execution
times of your code, only _your_ competence in writing such code.
Exceptionally fast code can be written in any language. The question is
where the "line of convenience is" drawn. Common Lisp (and other Lisps
in the past) have made it convenient to stop coding when the function
performed its job _correctly_. C and C++ have made it convenient to stop
coding when the function performed its job _quickly_. Unless you are
willing to continue coding past the "line of convenience" to quick Common
Lisp and correct C/C++ code, you are comparing apple-tree flowers and
rotting oranges for edibility.
The only interesting speed factor for _languages_ (as opposed to code
written in them) is how much they slow down the programmer on his journey
from problem to solution.
///
--
Including INTERCAL, TECO, and TRAC? :-)
--
(reverse (concatenate 'string "gro.gultn@" "enworbbc"))
http://www3.sympatico.ca/cbbrowne/sap.html
"I've seen estimates that 10% of all IDs in the US are phony. At least
one-fourth of the president's own family has been known to use phony
IDs." -- Bruce Schneier CRYPTO-GRAM, December 15, 2001
| Including INTERCAL, TECO, and TRAC? :-)
Any fast code in those languages would be exceptional. :)
///
--
Are you kidding? Teco is one of those things like APL or PostScript.
Very fast. Runs tightly optimized code fragments, coming up only briefly
for more instructions, which require absolute minimum processing to interpret.
That doesn't mean you'd want to (oh, say... :-)) calculate Pi using
TECO... Or, more seriously, write ray-tracing software using TECO.
--
(reverse (concatenate 'string "gro.gultn@" "enworbbc"))
http://www.ntlug.org/~cbbrowne/teco.html
Babbage's Rule: "No man's cipher is worth looking at unless the
inventor has himself solved a very difficult cipher" (The Codebreakers
by Kahn, 2nd ed, pg 765)
Yes, despite the current outpouring of hatred from one of our regular
hatemongers and from the new frog-eating vermin, I was kidding.
///
--
>Hello-
In its original life, CMUCL was not only used for neural network
implementation but kinda optimized for it, because the head of the
CMUCL development team happened to be an active researcher in neural
networks and used CMUCL for it.
Martin
--
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <crac...@bik-gmbh.de> http://www.bik-gmbh.de/~cracauer/
FreeBSD - where you want to go. Today. http://www.freebsd.org/
I just find it funny that:
the hard, symbolic Lisp
is also
the soft, connectionist Lisp
what directed you to this course of study?
hard -vs- soft flames to followup in comp.ai :-)
Michael> the hard, symbolic Lisp
Michael> is also
Michael> the soft, connectionist Lisp
Michael> what directed you to this course of study?
Michael> hard -vs- soft flames to followup in comp.ai :-)
I don't have any idea what you're trying to say there.
> Kent M Pitman <pit...@world.std.com> writes:
>
> > Erik Naggum <er...@naggum.net> writes:
> > ...
> > Are you kidding? Teco is one of those things like APL or PostScript.
> > Very fast. Runs tightly optimized code fragments, coming up only
> > briefly for more instructions, which require absolute minimum
> > processing to interpret.
>
> That doesn't mean you'd want to (oh, say... :-)) calculate Pi using
> TECO... Or, more seriously, write ray-tracing software using TECO.
Actually, there's a TECO library that ran in Teco-based Emacs which
implemented LOGO. Gene Cicarelli (whose name I'm probably misspelling)
wrote it. It's called SLUGGO. It uses integer approximations to sines
and cosines in order to establish a way of interpreting turtle motions in
terms of a rectangular grid of characters, into which it deposits dots or
other characters. I suppose with a little imagination, you could have sluggo
do a somewhat crude (ok, very crude) approximation to ray tracing.
Btw, for all I know, someone could come up with some cute algorithm
for dealing with computation of Pi's digits in a similar way. Leaves
an interesting philosophical question: We chide people for trying to
"program Fortran in CL" (that is, for coming to CL with a FORTRAN [or
Scheme or C++] attitude and then expecting things to work without some
accommodation to a new paradigm). Is it then fair for Teco to chide
people for not expressing things "textually", or is that "too much
contortion" to be considered "mere accommodation of paradigm"? Because
if the paradigm were accomodated, perhaps the language would be fast enough.
And, related, is a language that is capable of doing a fast computation but
that you don't know how to request that fast computation of, still fast?
Some of these questions are relevant again to how we, and others,
perceive Lisp.
I would be very interested in your version (I would try it on AL CL).
At the silly "computer language shootout" page there is also a
matrix-matrix-multiplication version written for CMUCL; but honestly:
does anybody understand what this version does?
For kicks, I implemented one myself in AL CL. Without any declarations
(only some compiler settings) the AL CL version is about 10 times slower
(on a 1024x1024 array and the same "pseudo-code" implementation) than a
Clean or C version.
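For reference, a declared version would look something like this (a sketch
only; these particular declarations are my assumption, not the shootout code):

(defun matmul (a b c n)
  ;; Multiply the NxN matrices A and B, storing the result in C.
  (declare (type (simple-array double-float (* *)) a b c)
           (type fixnum n)
           (optimize (speed 3) (safety 0)))
  (dotimes (i n c)
    (dotimes (j n)
      (let ((sum 0d0))
        (declare (type double-float sum))
        (dotimes (k n)
          (incf sum (* (aref a i k) (aref b k j))))
        (setf (aref c i j) sum)))))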
S. Gonzi
[The gcc is not a compiler which produces terrific code, and neither is
Microsoft Visual C++; a good C compiler will give you another boost of a
factor of 2]
That's nothing. I have solid knowledge that microsoft windows was
written in INTERCAL.
dave
> [The gcc is not a compiler which produces terrific code, and neither is
> Microsoft Visual C++; a good C compiler will give you another boost of a
> factor of 2]
As a general rule of thumb when comparing C code on x86 platforms,
yes. But gcc sometimes produces good code. You really have to take
it on a case-by-case basis, and *look* at the code produced. I
mention this only because I hope you're not quietly interpreting this
as "CMUCL does matrix multiplication at 125% the speed of gcc, but
250% of a good C compiler". That *may* be, but this could also be one
of those times when gcc produces decent code. You'll have to check
for yourself to know.
--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'
>At the silly "computer language shootout" page there is also a
>matrix-matrix-multiplication version written for CMUCL; but honestly:
>does anybody understand what this version does?
Hmmm. I think I can see what it is getting at...
> a good C compiler will give you another boost of a factor of 2
I've done some timings using gcc and Sun Forte C and maybe I'm doing something
wrong since I've seen maybe 50% speed up or maybe less, but I'm not sure I've
seen a 100% speed up.
Sorry to end this message on such a down note,
:)w
> [The gcc is not a compiler which produces terrific code, and neither is
> Microsoft Visual C++; a good C compiler will give you another boost of a
> factor of 2]
You should qualify a statement like "gcc is not a compiler which
produces terrific code" with a lot of details like which version, for
which architecture, with what settings, at what time, compared with
what other compiler etc.
gcc has traditionally been designed with RISC systems in mind (a lot
of interchangeable general purpose registers and an orthogonal
instruction set), and the ia32 architecture is about the worst case
for it. On some other architectures gcc has been one of the best
compilers and has effectively killed a number of competitors. Even for
ia32 in the latest gcc releases a lot of work has been done to make
gcc competitive, but some optimisations aren't on by default and have
to be selected by various -m flags.
--
Lieven Marchand <m...@wyrd.be>
She says, "Honey, you're a Bastard of great proportion."
He says, "Darling, I plead guilty to that sin."
Cowboy Junkies -- A few simple words
> Siegfried Gonzi <siegfri...@kfunigraz.ac.at> writes:
>
> > [The gcc is not a compiler which produces terrific code, and neither is
> > Microsoft Visual C++; a good C compiler will give you another boost of a
> > factor of 2]
>
> You should qualify a statement like "gcc is not a compiler which
> produces terrific code" with a lot of details like which version, for
> which architecture, with what settings, at what time, compared with
> what other compiler etc
I do not have any "scientifically-approved" results to show. But I had a
conversation (by email) with the developer of Yorick (Yorick is a matrix
language and includes some kind of Lisp-like lists; no wonder David M. holds a
PhD from MIT).
Yorick is Unix native (but there exist ports to Macintosh, Linux and
Windows).
The developer wrote me that gcc delivers code which is about 2 times
slower than that of a good C compiler. I had the conversation (2 years ago)
because I could not believe that Clean was that fast (compared to Yorick).
But he also wrote me that he would in no way throw away the gcc compiler,
because he appreciates that it is free.
S. Gonzi
> > a good C compiler will give you another boost of a factor of 2
> I've done some timings using gcc and Sun Forte C and maybe I'm doing something
> wrong since I've seen maybe 50% speed up or maybe less, but I'm not sure I've
> seen a 100% speed up.
>
> Sorry to end this message on such a down note,
I have written elsewhere that such mini-benchmarks are hardly ever good
gauges of performance.
On my old Macintosh I once wrote a simulation program (in Fourier optics) in
Yorick (about 1000 lines of code). It performed well on smaller grids (256x256)
on my 100MHz Macintosh, but on bigger problems (1024x1024) it completely sucked
(assigning 100MB of virtual RAM did not help either). The program was heavily
based on FFTs; even though I could do an FFT of a 1024x1024 array (as a
mini-benchmark) on my old Mac (with Yorick), it was never possible to do it in
a larger program, where the dependencies are much more complex.
But I would like to see a larger Common Lisp program which scrutinizes Common
Lisp's behavior and garbage collector when it comes to bigger problems (I know
only the so-called pseudoknot benchmark).
There is one point on which I think Graham is right, when he writes that
Lisp's declarations are hard to learn because there is no common sense
about when/where/what to declare.
S. Gonzi
As a newbie working through most of the classic exercises, I have noticed
that a lot of them (fib, !, ToH) should really only allow positive-integer
arguments (for correctness, not speed).
I thought this would be perfect for "my first macro", something like:
(defmacro positive-integer  ;;; or something, I'm still working on
  (and (integer) (< 0 ?)))  ;;; declarations, never mind macros
and positive-fixnum, positive-bignum, etc...
But then, looking around at the built-in functions GCD & LCM (on CMUCL),
DESCRIBE stated:
Its result type is:
UNSIGNED-BYTE
even though (gcd 65536 32768)
32768 ;;; more than a byte, right ?!?
so then, does UNSIGNED-BYTE promote (correct term?) to
positive-<bigger types>?
> Its result type is:
> UNSIGNED-BYTE
>
> even though (gcd 65536 32768)
> 32768 ;;; more than a byte, right ?!?
Wrong.
(typep 32768 'unsigned-byte)
=> T
The concept of a "byte" is more flexible in Lisp than in C.
If you mean an eight-bit byte, you need to say that explicitly:
(typep 32768 '(unsigned-byte 8))
=> NIL
Without such a size restriction, the unsigned-byte type includes
all nonnegative integers.
The set of positive integers, which excludes 0, can be specified
as (integer (0) *), for example.
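For what it's worth, the type the original poster seemed to be after can
be defined directly (a minimal sketch using DEFTYPE rather than a macro):

(deftype positive-integer ()
  ;; The exclusive bound (0) gives the integers >= 1;
  ;; (integer 1 *) would be equivalent.
  '(integer (0) *))

(defun factorial (n)
  (declare (type positive-integer n))
  (if (= n 1) 1 (* n (factorial (- n 1)))))

(typep 0 'positive-integer)
=> NIL
(typep 32768 'positive-integer)
=> T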
So based on some conversation you had two years ago you believe
some poorly expressed opinion?
The GNU compiler targets a lot of architectures. You won't find any
*single* compiler that produces faster code than GCC for *every*
architecture that GCC supports. It is likely that for some given
architectures, you will find a compiler that is better than GCC on
those architectures, but that doesn't support others. Speed can only
be compared on platforms that are exactly alike.
In recent years, there has been a lot of development going into the
GNU compiler collection, so any statements that don't mention a specific
version, down to the last digit, and the architecture being targeted,
not to mention the actual member of the CPU family, are useless.
It's quite possible for two pieces of generated code A and B to have the
property that A runs faster than B on some processor in the architecture
family, whereas B runs faster than A on some other processor in that
family.
>But he also wrote me that he would in no way throw away the gcc compiler,
>because he appreciates that it is free.
There are other considerations besides speed of generated code and
licensing. Quality of diagnostics, standard compliance, architecture
support and portability, properties of arithmetic (particularly
floating point) and the presence of useful extensions.
If there are some language extensions you need, it behooves you to use the
same compiler to target as many of your needed architectures as possible.
Code *size* could matter in embedded work, as could execution speed
on older processors, rather than the latest, hottest family members.
For instance, if you are targeting an embedded system that runs on a 16
MHz 80386, maybe you're not concerned about Pentium III optimizations.
What an unfortunate name. Wasn't Microsoft's reviled 'Bob'
partly implemented in Lisp? This bit of guilt-by-association
has not helped Lisp's name in the larger community.
Perhaps Mr. Betz will consider changing the name of his
system to 'Robert', 'Robby', 'Billy-Bob', etc.
Actually, my "Bob" language predates the Microsoft version of "Bob".
> Perhaps Mr. Betz will consider changing the name of his
> system to 'Robert', 'Robby', 'Billy-Bob', etc.
There was a roast beef sandwich place near where I went to school that was
called "Bill and Bob's". The same people opened an upscale restaurant and
called it "William and Robert's". Maybe "Robert" (or "Rob") wouldn't be such
a bad idea. :-)
>>There is an interesting article by David Betz (author of XLisp and
>>XScheme) on Bob, a language for embedded systems, at
>>http://www.ddjembedded.com/resources/articles/2002/0202g/0202g.htm
>
>What an unfortunate name. .
I think that it is a pun on Dylan
"israel r t" <isra...@optushome.com.au> wrote in message
news:lmmf3us7d3kmj2lvq...@4ax.com...
Sure. I missed that.
>even though I could do an FFT of a 1024x1024 array (as a mini-benchmark)
>on my old Mac (with Yorick), it was never possible to do it in a larger
>program, where the dependencies are much more complex.
Alas, poor Yorick...[1]
>But I would like to see a larger Common Lisp program which scrutinizes
>Common Lisp's behavior and garbage collector when it comes to bigger
>problems (I know only the so-called pseudoknot benchmark).
>There is one point on which I think Graham is right, when he writes that
>Lisp's declarations are hard to learn because there is no common sense
>about when/where/what to declare.
FWIW here is a trick. Try compiling the code using a high speed compiler setting
with as many compilers as you can get your hands on. I have lispworks, cmucl,
franz and clisp immediately to hand under linux. This incidentally gives you a
fair idea about how portable your code is. See what the compilers whinge about.
For example: cmucl (in particular) produces copious amounts of warnings about
types; the sketch below shows the kind of thing. Optimise these. Maybe try
running the code through a profiler and work out where the speed up can be
achieved. Anyway, since it is 12th Night I must dismantle my Christmas tree.
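To illustrate, here is the sort of fragment (hypothetical, not from any
program discussed here) that makes cmucl whinge, and the declarations
that quieten it:

(declaim (optimize (speed 3)))

;; Compiled as-is, cmucl emits efficiency notes about generic arithmetic
;; and generic array access in this function...
(defun sum-squares (v)
  (let ((s 0d0))
    (dotimes (i (length v) s)
      (incf s (* (aref v i) (aref v i))))))

;; ...which type declarations silence:
(defun sum-squares/decl (v)
  (declare (type (simple-array double-float (*)) v))
  (let ((s 0d0))
    (declare (type double-float s))
    (dotimes (i (length v) s)
      (incf s (* (aref v i) (aref v i))))))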
`A presto'
:)w
[1] Sorry. Couldn't resist.
> On Sat, 05 Jan 2002 16:17:24 -0600, sashan...@vanderbilt.edu
> (Sashank Varma) wrote:
>>> XScheme) on Bob, a language for embedded systems, at
>> What an unfortunate name. .
> I think that it is a pun on Dylan
So what's wrong with "Thomas"?
-kzm
--
If I haven't seen further, it is by standing in the footprints of giants
> The GNU compiler targets a lot of architectures. You won't find any
> *single* compiler that produces faster code than GCC for *every*
> architecture that GCC supports. It is likely that for some given
> architectures, you will find a compiler that is better than GCC on
> those architectures, but that doesn't support others. Speed can only
> be compared on platforms that are exactly alike.
This is certainly true. However, I don't think I've ever used a
machine (both RISC and CISC if those terms mean much any more), where
gcc was competitive with the leading compilers in terms of
performance. There almost certainly are some - apart from anything
else, there are almost certainly systems where gcc is the *only* C
compiler.
But basically people don't care that much about performance any more,
outside the macho contingent. A far better reason for using gcc is
that it's bug-compatible with itself. This really matters for a
language like C++ where the language definition has historically been
very unstable and compilers even more so - if you want to develop
multiplatform C++ systems gcc is a real win because you only have to
fight it once (well, only once for each version).
--tim
http://www.dreamsongs.com/BiologicalFramings.html
The second is a conference within a conference at OOPSLA 2002, called
Onward!: Seeking New Paradigms and New Thinking. There is a separate program
committee for it and we expect to have our own keynote etc. The idea is to
try to get some larger vision type papers, but backed up with some initial
work if possible. The URL is
http://oopsla.acm.org/2n_onward.html
-rpg-
It's taken already. By some DEC compiler, IIRC (compiling Dylan?).
david rush
--
Scheme: Making Turing Machines obsolete.
-- Anton van Straaten (the Scheme Marketing Dept from c.l.s)
>> So what's wrong with "Thomas"?
>It's taken already. By some DEC compiler, IIRC (compiling Dylan?).
See "Thomas: Compiler for a Dylan-like language."
at :
http://www-2.cs.cmu.edu/afs/cs/project/ai-repository/ai/lang/others/dylan/impl/thomas/0.html
BTW, Thomas is written in and emits Scheme.
That's why they keep buying faster and faster computers.
> That's why they keep buying faster and faster computers.
That's the macho contingent I mentioned (young males who need to show
off and want to play quake really fast).
--tim
Word up, most people try to buy a pretty fast computer, so they
*don't* need to buy one for a long time. Pretty much everyone I know
who's bought a new computer over the last year or two had a computer
that was at least 6 years old when they replaced it.
Yes, this is pretty much what I do. I get the fastest machine I think
I can get (often traded off with it needs to have feature x where
`feature x' might be `reasonably cheap & small laptop with decent
battery life' or something else that constrains performance) and then
stick with it until it is unviable or dies, which seems to be 6-10
years for desktops (which make their way down to be DNS servers or
something) and much less so far for laptops which get broken. On
thing that has changed is that almost any machine I can get now is
fast enough to work on and will remain so for this 6-10 year period as
far as I can see. This was not always true - early-mid 90s machines
were *never* fast enough, not even when they were new, and when
processors got fast enough memory (and sometimes disk) stayed too dear
to get enough, and then too expensive to upgrade later. Now it's easy
to get even a laptop with 3-400Mb of memory and stupid disk, and
desktops with 512M-2Gb and *really* stupid disk.
But I think this is quite different than the kind of `I must have the
latest graphics card and 2GHz cpu so I can play quake & boast down the
pub' thing that I was associating with the macho contingent.
--tim
> But I think this is quite different than the kind of `I must have the
> latest graphics card and 2GHz cpu so I can play quake & boast down the
> pub' thing...
If they were truly macho they would forswear the pub -- to boast
on the web on a site called something like my.ssTM.IZ.fztst!
This is because they've spent all their money on ridiculously
overpriced memory and the latest motherboard that they can
overclock, and on half a dozen ultra-expensive 0.5THz cpu's to
allow for the couple they fry whilst "tuning up".
Enough, before I start going on about how they all live with
their mum...
:)w
[snip]
> But I think this is quite different than the kind of `I must have the
> latest graphics card and 2GHz cpu so I can play quake & boast down the
> pub' thing that I was associating with the macho contingent.
You don't need a new machine for quake for sure, but I recently
replaced my motherboard, CPU, graphics card and memory mainly so I
can play Return To Castle Wolfenstein with reasonable FPS values.
What is so ridiculous about that? If you don't want to play RTCW
(or are such a bad player that FPS doesn't matter for you ;-), fine,
stick with your old machine, then. But why does it bother you if
others don't? It's fun, that's all.
Regards,
--
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."
PGP key ID 0x42B32FC9
>But why does it bother you if others don't? It's fun, that's all.
Yes. Sure, it's all horses for courses. As an ageing git with a young family my
opportunity for such entertainment is almost non-existent -- so maybe I am
suffering from FPS envy.
:)w
>>You don't need a new machine for quake for sure, but I recently
>>replaced my motherboard, CPU, graphics card and memory mainly so I
>>can play Return To Castle Wolfenstein with reasonable FPS values.
> Ah, but do you still live with your parents? :)
No, I swear :-)
>>But why does it bother you if others don't? It's fun, that's all.
>
> Yes. Sure, it all horses for courses. As an ageing git with a young
> family my opportunity for such entertainment is almost non-existent --
> so maybe I am suffering from FPS envy.
Oh, it's easy: Just don't sleep at night ;-|
> You don't need a new machine for quake for sure, but I recently
> replaced my motherboard, CPU, graphics card and memory mainly so I
> can play Return To Castle Wolfenstein with reasonable FPS values.
> What is so ridiculous about that? If you don't want to play RTCW
> (or are such a bad player that FPS doesn't matter for you ;-), fine,
> stick with your old machine, then. But why does it bother you if
> others don't? It's fun, that's all.
Sure, it's the kind of fun that young males enjoy, there's nothing
wrong with it. Did I say there was?
--tim
Recap:
Tim: But basically people don't care that much about performance
any more
Some Dude: [sarcastically] That's why they keep buying faster and
faster computers.
Tim: [ But that's the macho contingent ]
Me: [ Word up, no doubt ]
Okay, so I ain't saying you're the same as the macho contingent, but
your reasons are similar. You want to play a game; it's not because
you need performance for more reasonable computer activities (I think
everyone agrees that games are an odd application for general-purpose
computers).
And, yeah, I go to my homeboy's place to play the PS2, I love it, I
just don't front trying to make my general-purpose computer into a
game machine (which would be fronting if I were gonna try).
Actually this is kind of interesting. I guess everyone knows this but
me, but I read an article the other day about games, and a really
successful PC game might sell 100,000 copies, while a successful PS2
game will sell 2-3 million. So the games-playing market for PCs is
actually really small (this is why the xbox exists of course). So
people buying state-of-the-art PCs for games are, I guess, a small
minority.
--tim
Uh, not me. If the computer is general purpose, why would playing
games on it be odd?
I don't agree, so obviously not everyone. What /is/ a reasonable
computer activity? Using a database? If CVS doesn't count as a
database, I think using one would be pretty unreasonable for me,
as I can easily keep track of everything I need with vi and awk :-)
As a side effect of upgrading, all my Lisp systems are /noticably/
faster, too, BTW. Or a word processor? The only thing I use such
a thing for, if LaTeX and Emacs don't count, is reading emails from
people who don't know any better.
> Actually this is kind of interesting. I guess everyone knows this but
> me, but I read an article the other day about games, and a really
> successful PC game might sell 100,000 copies, while a successful PS2
> game will sell 2-3 million. So the games-playing market for PCs is
> actually really small (this is why the xbox exists of course). So
> people buying state-of-the-art PCs for games are, I guess, a small
> minority.
Maybe so, but you don't see any PS2 users on public game servers or
LAN parties either, so maybe it isn't the same thing, after all.
Those can't be the numbers of copies sold worldwide. A really
successful PC game also sells into the millions of copies. For
example, the really, really successful game Diablo2 has apparently sold
four million copies worldwide now, while its expansion pack has broken the
one million mark:
http://www.blizzard.com/press/010726.shtml
(OK, so it's a press release. Take the numbers with a grain of
salt please ;-)
> So the games-playing market for PCs is actually really small
Not really. Besides, the balance is always shifting between the PCs
and the consoles; at the moment it's in favour of the consoles.
It's a bit like Lisp and USENET, the imminent death of the PC as a
gaming platform has been announced for years.
Erik.
--
"please realize that the Common Lisp community is more than 40 years old.
collectively, the community has already been where every clueless newbie
will be going for the next three years." -- #:Erik (Erik Naggum) in CLL
The first part of the article you respond to wasn't a quote from me
BTW.
--tim
> Those can't be the numbers of copies sold worldwide. A really
> successful PC game also sells into the millions of copies. For
> example, the really, really successful game Diablo2 has apparently sold
> four million copies worldwide now, while its expansion pack has broken the
> one million mark:
OK, I found the quote I had. It's from December 2001's Computer (the
IEEE Computer Society magazine). They say that a PC game that sells
100,000 copies is considered by analysts to be a major hit, while
Final Fantasy X for the PS2 sold 2.4 million *in Japan* (my italics).
There's then an extended discussion of this, one of the significant
issues being that PCs are a nightmare for game developers because they
change every few months while consoles are stable for years.
> It's a bit like Lisp and USENET, the imminent death of the PC as a
> gaming platform has been announced for years.
Oh, I didn't say it was dying. I just think it's pretty small beer cf
the console market.
--tim
> But I think this is quite different than the kind of `I must have the
> latest graphics card and 2GHz cpu so I can play quake & boast down the
> pub' thing that I was associating with the macho contingent.
Well, for me it would be much more fun to run these space simulators:
Eagle Lander (est est est)
http://www.wright-flyer.net/desertaviation/eagle/
Orbiter
http://www.orbitersim.com
Paolo
--
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://web.mclink.it/amoroso/ency/README
[http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/]