On 2019-01-31, luserdroog <
mij...@yahoo.com> wrote:
> I was reading in the documentation for PicoLisp and came across
> a statement that PicoLisp does not have macros /because/ it does
> not compile. "No compiler -- no macros!"
>
> Is this "true" generally? Are there no pure interpreters which
> implement macros? Is it weird or difficult or clunky for those
> that do if it's rare?
There are Lisps that both interpret and compile, and that consistently
support macros in both situations.
There are Lisps that were AST-interpreted and had macros, before
becoming compiled. For instance GNU Emacs Lisp had macros before
Zawinski developed the byte compiler.
My TXR Lisp had macros for quite a number of years before I developed a
compiler and VM for it. The interpreter is critically required by the
implementation; the compiler won't bootstrap without it. And the
compiler is written using the full macro system, so it cannot be
interpreted without macros.
CLISP is a CL implementation that has an interpreter. It also bootstraps
using that interpreter. Functions are interpreted by default and must be
explicitly compiled, or COMPILE-FILE must be used. Its LOAD function
has a compiling mode.
Implementing an expansion pass in an interpreter, even though
it may be motivated by macros, will prove beneficial.
For instance, the following diagnostic comes from the code walker that
performs macro expansion:
This is the TXR Lisp interactive listener of TXR 208.
Quit with :quit or Ctrl-D on empty line. Ctrl-X ? for cheatsheet.
1> (defun a (x)
(+ x y))
** warning: (expr-1:2) unbound variable y
The code isn't compiled; we didn't do (compile 'a).
In an interpreter without macros, there would be no expansion pass, and
so the opportunity for this kind of static check wouldn't arise.
The opportunity arises from the fact that the macro system supports
local macros (both symbol and operator macros), and so the expander
performs a detailed code walk that passes down information about the
lexical scope. It knows that x has a binding, but y doesn't.
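The idea can be sketched in Python (not TXR's actual code; the AST
representation and the `walk` function here are invented for
illustration): the expander threads a set of lexically bound names
through the walk, and flags any reference it can't resolve.

```python
warnings = []

def walk(form, env):
    """Walk a tiny Lisp-like AST (tuples and strings), recording a
    warning for every variable reference not bound in `env`."""
    if isinstance(form, str):                   # a variable reference
        if form not in env:
            warnings.append(f"unbound variable {form}")
        return form
    if isinstance(form, tuple) and form and form[0] == "lambda":
        _, params, body = form                  # params extend the scope
        return ("lambda", params, walk(body, env | set(params)))
    if isinstance(form, tuple):                 # a call: walk all parts
        return tuple(walk(f, env) for f in form)
    return form                                 # a literal

# (lambda (x) (+ x y)): x is bound by the lambda, y is free.
# We seed the environment with "+" standing in for a global binding.
walk(("lambda", ("x",), ("+", "x", "y")), {"+"})
print(warnings)   # ['unbound variable y']
```

The point is that the warning falls out of information the expander
already has to maintain in order to expand local macros correctly.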
The expander can perform some small optimizations also, like reducing
progn forms that contain trivial side-effect-free forms:
2> (expand '(progn 1 2 3 4 5))
5
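That reduction is simple to express. Here is a Python sketch of the
idea under the same toy AST assumptions as above (`reduce_progn` is an
invented name, not a TXR function): forms before the last are dropped
when they are trivially side-effect-free.

```python
def is_trivial(f):
    """Number literals have no side effects in this toy AST."""
    return isinstance(f, (int, float))

def reduce_progn(form):
    """Reduce (progn ...) by discarding trivial non-final forms;
    if only one form survives, return it bare."""
    if not (isinstance(form, tuple) and form and form[0] == "progn"):
        return form
    body = form[1:]
    kept = [f for f in body[:-1] if not is_trivial(f)] + list(body[-1:])
    if len(kept) == 1:
        return kept[0]
    return ("progn",) + tuple(kept)

print(reduce_progn(("progn", 1, 2, 3, 4, 5)))   # 5
```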
An expander can easily do simple source-level optimizations like
arithmetic strength reduction, constant folding and dead code
elimination, which do benefit interpretation. Of course, compilation is
best for speed, but if all you have is interpretation, faster is better
than slower, nonetheless.
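Constant folding, for instance, is a short recursive rewrite over the
source. A minimal Python sketch (again an illustration, not TXR's
implementation):

```python
import operator

# Operators the folder knows to be pure, two-argument arithmetic.
FOLDABLE = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def fold(form):
    """Recursively fold constant arithmetic in a tuple-based AST."""
    if isinstance(form, tuple) and form:
        op = form[0]
        args = tuple(fold(a) for a in form[1:])
        if (op in FOLDABLE and len(args) == 2
                and all(isinstance(a, (int, float)) for a in args)):
            return FOLDABLE[op](*args)          # compute at expansion time
        return (op,) + args
    return form

# (+ (* 2 3) (- 10 4)) folds all the way down to a literal:
print(fold(("+", ("*", 2, 3), ("-", 10, 4))))   # 12
```

An interpreter then evaluates the literal 12 directly, paying the
arithmetic cost once at expansion time rather than on every execution.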
Macros are superior to fexprs for extending the language, because
a reduction from some new syntax to an existing syntax that is known
to be well implemented is easier to validate than a new fexpr.
We can inspect what the macro is doing without running the
program which uses the macro.
A fexpr can use another fexpr as a subroutine, thereby providing
syntactic sugar around that target fexpr; but that has to be debugged by
running instances of the syntax.
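To make the contrast concrete: a macro is just a function from source
to source, so its output can be examined directly. A Python sketch
(the `when_macro` name and the tuple AST are invented for
illustration):

```python
def when_macro(test, *body):
    """Expand (when test body...) into (if test (progn body...) nil)."""
    return ("if", test, ("progn",) + body, "nil")

# We validate the expansion by inspecting it; nothing is executed.
print(when_macro(("oddp", "n"), ("print", "n")))
# ('if', ('oddp', 'n'), ('progn', ('print', 'n')), 'nil')
```

A fexpr offers no analogous artifact to inspect: its behavior only
exists while an instance of the syntax is being evaluated.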
I would argue that if a fexpr massages its input syntax and calls
another fexpr, that fexpr "wants" to be a macro. If the fexpr massages the
input syntax in a fixed way, it's basically wasting time doing the
equivalent of macro expansion on each call. Such a situation shows that
even in a fexpr-based environment, macros would in fact be beneficial;
a macro can then be thought of as a staged pre-computation of what
would be a fexpr.
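The cost difference can be sketched in Python (a toy model, not any
real fexpr implementation; `massage` stands in for the fixed syntax
transformation):

```python
rewrites = 0

def massage(form):
    """The fixed rewrite a fexpr would redo on every call:
    (when test body...) -> (if test body...)."""
    global rewrites
    rewrites += 1
    return ("if", form[1]) + form[2:]

src = ("when", True, 42)

# fexpr-style: the same rewrite happens on every invocation.
for _ in range(1000):
    massage(src)
print(rewrites)            # 1000

# macro-style: stage the rewrite once, then reuse the expansion.
rewrites = 0
expansion = massage(src)
for _ in range(1000):
    pass                   # an interpreter would evaluate `expansion`
print(rewrites)            # 1
```

Staging the rewrite ahead of evaluation is exactly what macro
expansion does; the fexpr that performs a fixed rewrite per call is
repeating that work a thousand times over.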