Bronze Age Lisp design part 4 - conclusion

Oto Havle

Sep 14, 2013, 5:08:23 AM
to kl...@googlegroups.com
Hi all,

this is the last post in a series about design choices
in Bronze Age Lisp, an interpreter of the Kernel Language
(https://bitbucket.org/havleoto/bronze-age-lisp).

The most visible difference between Klisp and Bronze Age Lisp is the
implementation language. Klisp is implemented in standard C; Bronze Age
Lisp is implemented in x86 assembly. Standard C helps with portability
and reuse of third-party code: in Klisp, a pre-existing C library is
used to implement arbitrary-sized integers and rational numbers,
whereas in Bronze Age Lisp I implemented arbitrary-sized integers in
assembly myself, poorly. Standard C also offers some functionality that
assembly lacks, e.g. string handling and file I/O.

On the other hand, the C language restricts control flow between large
blocks of code to stack-oriented function calls. Klisp is therefore
forced to implement tail calls and continuations using a trampoline.
Integrating primitives written in C with the garbage collector is also
a problem: explicit management of GC roots is necessary.

In Bronze Age Lisp, tail calls are just indirect jumps. The native
stack and all general-purpose registers are GC roots, so explicit
management of GC roots in primitives is not necessary. Some features
are therefore actually easier to implement in assembly, and the
implementation is cleaner and less prone to errors.

There is one more difference between Klisp and Bronze Age Lisp I
would like to point out. In Klisp, all built-in combiners are
implemented in C. The source code of Klisp does not contain any lisp
code (except tests and demos), so you can build Klisp with a C
toolchain alone.

In Bronze Age Lisp, many built-in combiners are implemented in lisp and
interpreted. The build process depends on having a working Kernel
interpreter. I use Klisp for bootstrapping (although Bronze Age Lisp,
once built, can build another copy of itself).


To conclude this series of posts, I would like to add a few words about
the relationship between a compiler and an interpreter.

The purpose of the techniques described in this series of posts is to
improve the speed of the interpreter. But one may ask whether
interpreters matter at all: compiled programs always run faster than
interpreted ones, the interpreter overhead cannot be completely
eliminated, and if no compiler is available, execution will be slow
anyway.

Unrestricted use of fexprs and eval makes it impossible in principle to
fully compile all Kernel Language programs. The compiler, however
sophisticated, will give up at some point and leave (residualise) part
of the program to be interpreted. The compiler will either generate a
(possibly specialized) interpreter in the course of compilation, or
integrate the compiled program with a pre-existing interpreter.
Therefore, the efficiency of the interpreter and the simplicity of its
internal interfaces are still of interest.

Thank you for your attention,

Oto Havle.

Andres Navarro

Sep 14, 2013, 2:54:22 PM
to kl...@googlegroups.com

I followed this series of articles with interest.

Some of the ideas you mention here will probably be incorporated into
klisp itself in the future. Some I had already considered; many others
I discovered just now. There are many important issues raised
throughout this series of mails.

I encourage anyone else working on their own implementation to discuss
any implementation ideas or doubts here, so we can share and discuss
different strategies, both for Kernel-exclusive features and regular
ones as well.

Regards,
Andres Navarro
