I received the following message from Tom Knight at MIT:
>
Several people have asked about the naughty bits of the original MIT
lisp machine architecture. I've put my master's thesis (1979) on line
for those of you with a generous non-critical spirit to take a look
at. I will duck all arrows, but praise will be gratefully received.
There are genuine logic diagrams for those of you who recall Schottky
TTL logic or who want to know how hard it was to do anything back in
the bad old days.
General features include a 32 bit word, 180ns cycle, 3-stage pipe,
bypass logic, barrel shifter, single cycle arbitrary field
extract/deposit, and a "dispatch" instruction which did an extracted
field multi-way table lookup branch in a single cycle. Microcode PC
push down stack, top-of-stack cache, and an ability to variablize
microcode instructions on-the-fly are also interesting features.
Branch delay slots appeared here also.
Best, tk
http://www.ai.mit.edu/people/tk/tk-sm-79.pdf
>
and then I received this letter:
> I meant to post that last message instead of mailing it.
>
(remove #\space "tk @ ai . mit . edu") ;; In his letter he wanted
replies.
;; I hope including this code is OK.
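As an aside, the "dispatch" instruction he describes -- extract a bit
field and branch through a table indexed by it, all in one cycle --
can be modeled in a couple of lines of Lisp. This is only my
illustration of the idea, not the CADR's actual semantics:
;; Rough model of the CADR dispatch: pull an arbitrary field out of
;; a word and jump through a table indexed by it.
(defun dispatch (word position size table)
  (funcall (aref table (ldb (byte size position) word))))
;; e.g. a 4-way branch on a 2-bit tag in bits 30-31 (the handler
;; names are made up):
;; (dispatch word 30 2 (vector #'do-fixnum #'do-cons #'do-symbol #'do-misc))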
I am pushing for a Lisp Architecture because I think ANSI CL/Scheme
would make an excellent machine/micro code. (Don't flame; I don't know
how machine and micro code differ.) I think this for the following
reasons:
1.) Programs + Data are in same format.
2.) Programs are portable to different platforms.
3.) ANSI CL/IEEE Scheme allows more abstractions than machine
language.
4.) Macros make it easy to write code generators/translators (see the
sketch after this list).
5.) Debugging is interactive.
6.) Standards exist.
7.) It is easier to learn two HLLs, even if they are difficult, than
to learn a separate machine language for every CPU.
8.) Memory Support is built in.
9.) Lisp has been used as a machine language. Read the paper linked
above.
10.) Lots of packages are available for Lisp; not many can run on all
machine languages.
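To make point 4 concrete, here is a minimal sketch of a macro acting
as a code generator. The macro and its name are made up purely for
illustration:
;; WITH-FIELD generates field-extraction code at compile time.
(defmacro with-field ((var word position size) &body body)
  `(let ((,var (ldb (byte ,size ,position) ,word)))
     ,@body))
;; (with-field (tag instruction 30 2) (case tag ...)) expands into
;; plain LDB code, so the "translator" costs nothing at run time.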
C is the Universal Assembly Language; make Lisp the Universal Machine
Language.
Lisp is more readable than machine language.
0000011101010101010101001 or (+ 1 1) ;; 1 + 1, you decide.
NOTE: I have not yet found out the status of the software, and whether
it is in the public domain. I'll post another message when I do. But I
was given permission to provide the link to Dr. Knight's paper.
PS
It should make it possible to write an emulator for a Lispm and for a
Lisp-based machine language. :) Lisp-like machine codes make me a
happy coder.
Opening to a random page (41) I found a discussion of the stack cache
and lack of memory cache:
"Of the useful cycles, 6.6% read data from the stack buffer, and 4.7%
write such data. ... In contrast, about 6.6% of the cycles initiate
main memory references. ... A cache mechanism would perform well on
the instruction and forwarding pointer references, but would likely
perform poorly on the random references to list structures. Assuming
a 75% hit rate on a cache, and an average saving of 3 cycles per
cache hit, installation of a cache on this processor would improve
performance by a little less than 15%. With speeds of main memory
going down, the 3 cycle saving figure is generous today, and likely
will continue to be reduced."
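The arithmetic behind "a little less than 15%" checks out against the
quoted figures:
;; 75% hit rate x 6.6% of cycles starting a memory reference
;; x 3 cycles saved per hit:
(* 0.75 0.066 3) ;; => 0.1485, i.e. just under 15% of all cycles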
I'm trying to decide whether the last sentence has come true or
was completely wrong. Measured in nanoseconds memory is indeed
faster today. Measured in cycles, you can build a cache the size
of the lisp machine's main memory with single cycle access but
you can't get RAM that delivers results in less than 100 cycles.
The total memory reference rate (18%) seems a bit low to me.
--
John Carr (j...@mit.edu)
The Lisp Compiler could be programmed into EPROMS along with the
Microcode--this would make it easier to update the machine. (Just
pop out one EPROM and pop a new one in.)
I might even try this as a School project.
I've looked at LiSP, EoPL, SICP, and Peter Norvig's book -- it prob.
won't be fast, but trying it will be fun.
What CHIP lang. should I use? VHDL, etc. I am new at chip design.
Are there any tutorials?
I think the arch. will be stack-based. (I might just write the
code in a Chip Lang. and post it online, for other people to play
with, for simulations, and for fun. :) )
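To make "stack-based" concrete, here is a toy sketch of how (+ 1 1)
might compile to stack code and run. Purely illustrative; a real
design would pin down word size, tags, and so on:
;; Toy stack machine: (+ 1 1) compiles to (push 1) (push 1) (add).
(defun run (program &optional stack)
  (dolist (insn program (car stack))
    (ecase (car insn)
      (push (push (cadr insn) stack))
      (add  (push (+ (pop stack) (pop stack)) stack)))))
;; (run '((push 1) (push 1) (add))) => 2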
> What CHIP lang. should I use? VHDL, etc. I am new at chip design.
VHDL and Verilog are the two standards. I know that most FPGA-type hardware
will work with VHDL input.
> Are there any tutorials?
I know there are books. I have one with a CD that has software that would
translate a VHDL design to various Cypress Semiconductor FPGA's. The main
problem is that each of the different vendors has its own translation /
optimization tools that are usually bundled only with their development
systems. Also, I'm not sure of any public domain simulators that you can
use to debug your VHDL.
> I think the arch. will be stack-based. (I might just write the
> code in a Chip Lang. and post it online, for other people to play
> with, for simulations, and for fun. :) )
Why go with a stack-based architecture? If you ever want the system to be
used as a basis for future work or have faster implementations, you need to
plan for the future. A stack becomes a bottleneck for implementations that
use instruction-level parallelism without a huge amount of register
renaming logic under the covers. I'd start with a simple register-based
machine that had support for simultaneous tag and arithmetic / memory
operations and a fairly quick tag / overflow trap mechanism. Of
course, it is your machine, so do what you think is best.
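In software terms the idea might look like the sketch below, using a
made-up 2-bit low-tag scheme, purely for illustration. A real machine
would do the add and both tag checks in the same cycle and trap on a
mismatch:
(defconstant +fixnum-tag+ 0)
(defun tag-of (word) (ldb (byte 2 0) word)) ;; hypothetical tagging
(defun tagged-add (x y)
  (if (and (= (tag-of x) +fixnum-tag+)
           (= (tag-of y) +fixnum-tag+))
      (+ x y) ;; fast path: tags are 00, so the raw add stays tagged
      (error "tag trap: software handles the non-fixnum cases")))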
faa
Isn't there a VHDL frontend to GCC? I don't know how far along it is, since I use
Verilog only, and Icarus Verilog (under GPL) with GtkWave is sufficiently
powerful to debug my Verilog.
>> I think the arch. will be stack-based. (I might just write the
>> code in a Chip Lang. and post it online, for other people to play
>> with, for simulations, and for fun. :) )
>
> Why go with a stack-based architecture? If you ever want the system to be
> used as a basis for future work or have faster implementations, you need
> to plan for the future.
My advice is *not* to plan for the future when not necessary. If you find
that you want to have a faster implementation, look at what you have
learned from your slower implementation and redo the architecture. This is
especially important when you lack the knowledge to make a solid plan.
Prepare to throw one away.
Furthermore, the architecture in question here is definitely a high-level
architecture to execute high-level languages (Lisp/Scheme), so binary
compatibility is not an issue.
--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
I think I might go with that if I could better understand what a
Data-Flow arch. is.
Or I might make an arch. that can execute the byte codes a GNU
Scheme/Lisp produces. (I'll add machinery for run-time type checking
and garbage collection, because those are the features of the language
that stock hardware does not support.)
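For the type-checking machinery, one common approach (an assumption on
my part; I don't know what the GNU byte codes actually use) is a
low-tag word layout, so the hardware can test a value's type in
parallel with using it, and the GC can tell pointers from immediates:
;; Illustrative 2-bit low-tag layout:
;;   ...xx00  fixnum, value in the upper bits
;;   ...xx01  pointer to a cons
;;   ...xx10  pointer to some other heap object
;;   ...xx11  immediate (character, boolean, ...)
(defun tag-bits (word) (ldb (byte 2 0) word))
(defun fixnum-value (word) (ash word -2))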
I might put it on OpenCores, but I am a programmer, not a hardware buff
(yet), and I need tools to write the chip and test it.
Is there any good free language?
Is there any chip design tool written in Scheme/Lisp? I think
Symbolics may have made one.
PS
Anyone who wants to get involved please respond.
> Lisp could be used for a Machine Language for a machine that
> only processed Lisp and communicated to the users computer with a
> CL-HTTP like interface.
>
> The Lisp Compiler could be programmed into EPROMS along with the
> Microcode--this would make it easier to update the machine. (Just
> pop out one EPROM and pop a new one in.)
>
> I might even try this as a School project.
>
> I've looked at LiSP, EoPL, SICP, and Peter Norvig's book -- it prob.
> won't be fast, but trying it will be fun.
>
> What CHIP lang. should I use? VHDL, etc. I am new at chip design.
>
> Are there any tutorials?
Everybody puts soft processors in FPGAs these days. Making a simple one
is relatively easy, but don't expect it to run at 2.2GHz. Current soft
cores run at about 100MHz. You can get free VHDL compiler chains from
the FPGA vendors like Xilinx and Altera. They are generally limited to
300K-gate FPGAs, but this should be enough.
If you don't want to make a PCB you can find lots of prototyping boards to
get started.
You should ask in comp.arch.fpga.
Marc
> http://www.ai.mit.edu/people/tk/tk-sm-79.pdf
>>
Have you looked at these papers from MIT?
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-514.pdf
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-528.pdf
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-559.pdf
They give some details about the Lisp machine processor (CADR) and the
Scheme-79 chip designed by Guy Steele.
Emmanuel
> I think the arch. will be stack-based. (I might just write the
> code in a Chip Lang. and post it online, for other people to play
> with, for simulations, and for fun. :) )
You may wish to examine these papers for alternative approaches to
lisp hardware.
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-514.pdf
http://www.swiss.ai.mit.edu/~mhwu/scheme86/scheme86-home.html
http://home.attbi.com/~prunesquallor/kmachine.htm
Interesting. This sort of implies that FPGA chips now are capable
of handily exceeding the performance of the famous Symbolics Ivory
chip.
Bear
Stack computers:
http://www.ece.cmu.edu/~koopman/stack_computers/index.html
Graph reduction architecture:
http://www.ece.cmu.edu/~koopman/tigre/index.html
--
George Morrison
Aberdeen, Scotland
> Graph reduction architecture:
> http://www.ece.cmu.edu/~koopman/tigre/index.html
Cool stuff!
In this vein, you should check out Alan Bawden's PhD thesis:
`Implementing Distributed Systems Using Linear Naming'
formerly titled
`Linear Graph Reduction: Confronting the Cost of Naming'
at
ftp://publications.ai.mit.edu/ai-publications/1500-1999/AITR-1627.ps
> What CHIP lang. should I use? VHDL, etc. I am new at chip design.
there is a woefully incomplete "scheme-based" hdl at:
http://www.glug.org/people/ttn/software/thud/
i don't recommend it for anything, although feedback is welcome. probably
best would be to use Aubrey Jaffer's SIMSYNCH, if you don't mind scheme.
thi
Some stuff by Henry Baker:
http://home.pipeline.com/~hbaker1/LinearLisp.html
http://home.pipeline.com/~hbaker1/ForthStack.html
Dataflow architecture. Again, the world is yet to catch up with the
early 1980s. A dataflow machine would have contained many processors
(a prototype I saw at Manchester Uni had about a dozen). If a
processor was free, it could accept input data. When all its inputs
were available, it would execute an instruction and produce output,
which then found its way to whatever processors were ready to accept
it. Data was generally tagged, which was necessary for aligning it
while executing recursive functions, etc. They were to have played a
major part in the Japanese Fifth Generation Project.
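A toy model of that firing rule (a sketch only, nothing like the real
Manchester design): a node fires once all of its inputs have arrived,
and the result is delivered to its consumers:
;; Real machines tagged each token to match it to the right
;; invocation; this sketch ignores that (and input ordering).
(defstruct node op (arity 2) (inputs '()) (consumers '()))
(defun deliver (node value)
  (push value (node-inputs node))
  (when (= (length (node-inputs node)) (node-arity node))
    (let ((result (apply (node-op node) (node-inputs node))))
      (setf (node-inputs node) '())
      (dolist (c (node-consumers node))
        (deliver c result)))))
;; (let ((add (make-node :op #'+)))
;;   (deliver add 1)
;;   (deliver add 1)) ;; the node fires, computing 2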
I'm still actively interested in dataflow machines, having first
(I think) learned of the idea from Stoyan Kableshkov, a friend and
colleague at Burroughs, with whom I have long since lost touch.
He designed one and wrote a book about them.
Le Hibou
--
In any large organization, mediocrity is almost by definition
an overwhelming phenomenon; the systematic disqualification
of competence, however, is the managers' own invention, for
the sad consequences of which they should bear the full blame.
-- Edsger W. Dijkstra, 1986.