
newLISP is simple, terse, and well documented


simple.lan...@gmail.com

Jan 18, 2009, 3:17:28 PM
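
A Look at newLISP:
page 1: http://www.newmobilecomputing.com/story/20728/A_Look_at_newLISP
page 2: http://www.newmobilecomputing.com/story/20728/A_Look_at_newLISP/page2/
page 3: http://www.newmobilecomputing.com/story/20728/A_Look_at_newLISP/page3/

newLISP Home Page: http://www.newlisp.org/

Wikipedia article about newLISP: http://en.wikipedia.org/wiki/NewLISP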

Tamas K Papp

Jan 18, 2009, 3:32:03 PM

"In fact, there are several notable language-related differences which
bring newLISP much closer to Scheme than Common LISP. However, even in
comparison to Scheme, the dominating aspect of newLISP is
simplification. Just one example: newLISP replaces the various
equality operators in Common LISP and Scheme with a single
one. Therefore, you no longer need to remember the difference between
equal, eql, eq, equalp, =, string=, string-equal, char= and char-eq --
which is really how things should be."

This is where I stopped reading... Cf. http://www.nhplace.com/kent/PS/
EQUAL.html

It is not clear what newLISP is fixing, or how it improves on CL. One
can easily define another equality operator in CL, and in fact
implement all of the fantastic new "features" of newLISP. The fact
that nobody does so should be a hint.
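
To be concrete, here is a rough sketch of such a catch-all comparison
in portable CL (the name and the dispatch order are mine, nothing
newLISP-specific about it):

(defun equal* (a b)
  "One equality operator to rule them all: dispatch on the type of A."
  (typecase a
    (string    (and (stringp b) (string= a b)))
    (character (and (characterp b) (char= a b)))
    (number    (and (numberp b) (= a b)))
    (t         (equal a b))))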

Tamas

Raffael Cavallaro

Jan 18, 2009, 4:05:50 PM
On 2009-01-18 15:32:03 -0500, Tamas K Papp <tkp...@gmail.com> said:

> It is not clear what newLISP is fixing, or how it improves on CL.

My impression of newLISP has always been that it "simplifies" common
lisp by punting on things that common lisp takes the trouble to deal
with thoughtfully. For example, newlisp integer operations can overflow
because the language has "simplified" all that silly fixnum/bignum
stuff. All a real programmer needs is fixed length integers that wrap,
right?
--
Raffael Cavallaro, Ph.D.

Tamas K Papp

Jan 18, 2009, 4:17:57 PM

Well, another way to look at it is that you get a built in RNG for all
the arithmetic you do, at no extra cost :-)

Tamas

Cor Gest

Jan 18, 2009, 4:17:52 PM
Some entity, AKA Tamas K Papp <tkp...@gmail.com>,
wrote this mindboggling stuff:
(selectively-snipped-or-not-p)


> It is not clear what newLISP is fixing, or how it improves on CL.

fixing stupidity is a laudible cause

Cor
--
It is YOUR right to restrict MY rights, but ONLY in YOUR house
If YOU try that in MY house, YOU better be prepared to be kicked out
The only difference between GOD and me is that GOD has mercy
My other cheek is a .40 JHP

jos...@corporate-world.lisp.de

Jan 18, 2009, 4:35:34 PM
On 18 Jan., 21:32, Tamas K Papp <tkp...@gmail.com> wrote:
> On Sun, 18 Jan 2009 12:17:28 -0800, simple.language.yahoo wrote:
> > A Look at newLISP:
> > page 1:http://www.newmobilecomputing.com/story/20728/A_Look_at_newLISP
> > page 2:
> >http://www.newmobilecomputing.com/story/20728/A_Look_at_newLISP/page2/
> > page 3:
> >http://www.newmobilecomputing.com/story/20728/A_Look_at_newLISP/page3/
>
> > newLISP Home Page:http://www.newlisp.org/
>
> > Wikipedia article about newLISP:http://en.wikipedia.org/wiki/NewLISP
>
> "In fact, there are several notable language-related differences which
> bring newLISP much closer to Scheme than Common LISP.

http://www.newlisp.org/index.cgi?page=Differences_to_Other_LISPs

Does not sound much like Scheme.

WalterGR

Jan 18, 2009, 5:29:05 PM
On Jan 18, 1:17 pm, Cor Gest <c...@clsnet.nl> wrote:
> Some entity, AKA Tamas K Papp <tkp...@gmail.com>,
> wrote this mindboggling stuff:
> (selectively-snipped-or-not-p)
>
> > It is not clear what newLISP is fixing, or how it improves on CL.
>
> fixing stupidity is a laudible cause

Did you mean "laudable"?

WalterGR

andrew.n...@gmail.com

Jan 18, 2009, 6:14:08 PM
Raffael Cavallaro wrote:

> For example, newlisp integer operations can overflow
> because the language has "simplified" all that silly
> fixnum/bignum stuff.

The largest positive integer you can have in newLISP is 9 223 372 036
854 775 807, and the largest negative integer is -9 223 372 036 854
775 808.

Alberto Riva

Jan 18, 2009, 6:49:10 PM

Or "laughable"?

Alberto

William James

Jan 18, 2009, 6:56:06 PM
simple.lan...@gmail.com wrote:

My suggestions for NewLisp:

1. Become lexically scoped.
2. Add hash tables (associative arrays).

Raffael Cavallaro

Jan 18, 2009, 7:46:32 PM
On 2009-01-18 18:14:08 -0500, andrew.n...@gmail.com said:

> The largest positive integer you can have in newLISP is 9 223 372 036
> 854 775 807, and the largest negative integer is -9 223 372 036 854
> 775 808.

And no one *ever* needs integers larger than that, so common lisp's
automatic overflow detection and bignums are pointless, right? For
example, the thousandth Fibonacci number is much smaller than newlisp's
integers, right?

Ooops...

(defun fib (n &optional (b 1))
  (declare
   (ftype (function (integer fixnum) integer) fib)
   (optimize (speed 3) (safety 0) (compilation-speed 0) (space 0) (debug 0)))
  #+lispworks (declare (optimize (fixnum-safety 0)))
  (if (= 0 b)                           ;calculate lucas numbers
      (case n
        ((0) 2)
        ((1) 1)
        (otherwise
         (if (evenp n)
             (let* ((k (/ n 2)) (l (fib k 0)))
               (+ (* l l) (if (evenp k) -2 2)))
             (let* ((k (1- n)) (l (fib k 0)) (f (fib k 1)))
               (/ (+ (* 5 f) l) 2)))))
      (case n                           ;calculate fibonacci numbers
        ((0) 0)
        ((1) 1)
        (otherwise
         (if (evenp n)
             (let* ((k (/ n 2)) (l (fib k 0)) (f (fib k 1)))
               (* f l))
             (let* ((k (1- n)) (l (fib k 0)) (f (fib k 1)))
               (/ (+ f l) 2)))))))

* (fib 1000)

43466557686937456435688527675040625802564660517371780402481729089536555417949051890403879840079255169295922593080322634775209689623239873322471161642996440906533187938298969649928516003704476137795166849228875
*


they're called "bignums" for a reason ;^)

--
Raffael Cavallaro, Ph.D.

Pascal J. Bourguignon

Jan 18, 2009, 8:12:09 PM
Raffael Cavallaro <raffaelc...@pas.espam.s.il.vous.plait.mac.com> writes:

> On 2009-01-18 18:14:08 -0500, andrew.n...@gmail.com said:
>
>> The largest positive integer you can have in newLISP is 9 223 372 036
>> 854 775 807, and the largest negative integer is -9 223 372 036 854
>> 775 808.
>
> An no one *ever* needs integers larger than that so common lisp's
> automatic overflow detection and bignums are pointless, right? For
> example, the thousandth fibonacci number is much smaller than newlisps
> integers, right?
>
> Ooops...
>

> * (fib 1000)
>
> 43466557686937456435688527675040625802564660517371780402481729089536555417949051890403879840079255169295922593080322634775209689623239873322471161642996440906533187938298969649928516003704476137795166849228875
> *
>
> they're called "bignums" for a reason ;^)

Who needs a number that big? Even the Zimbabwe needs only 15 digits.

--
__Pascal Bourguignon__

Majorinc Kazimir

Jan 18, 2009, 8:21:16 PM
Tamas K Papp wrote:

>
> It is not clear what newLISP is fixing, or how it improves on CL.

In "code=data" area, Newlisp offers several improvements
over CL and Scheme:

(1) Unrestricted eval, i.e. eval can access
local variables. CL and Scheme evals cannot do that.
Eval is, in my opinion, the single most important feature
for code=data. (A small CL-side sketch of this difference
follows the benchmark link below.)

(2) Fexprs, i.e. "first-class" macros. Newlisp macros
can be generated during runtime, passed as arguments
to other macros or functions, applied like functions
and so forth.

(3) Newlisp has support for dynamic and static scope.
Local variables are dynamically scoped. The namespaces,
called "contexts" are statically scoped. That allows
significantly more expressive functions than in
dialects with statically scoped local variables.

(4) Functions are lambda expressions, not results of
the evaluation of lambda expressions. That means a
program can analyze and modify functions during
runtime. And the same is the case for macros.

(5) Newlisp is an interpreter. That means that if you want
speed, you are in the same league as Python or Perl.
But if your code contains a lot of evals, Newlisp is
faster than many other Lisps.

Here are some benchmarks I did:

http://kazimirmajorinc.blogspot.com/2008/12/speed-of-newlisp-eval-test-v100.html
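
To make point (1) concrete from the CL side (a small sketch; the
contrast claimed above is that the analogous Newlisp form simply
returns 2, because local variables are dynamic):

(let ((x 1))
  (eval '(+ x 1)))    ; CL: error, X is unbound -- EVAL works in the
                      ; null lexical environment

(defvar *x*)
(let ((*x* 1))
  (eval '(+ *x* 1)))  ; => 2, dynamic bindings are visible to EVAL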

So, for those who like Lisp because of code=data,
Newlisp is a promising choice. Of course, the languages
have many other different aspects, and not
all are in favor of Newlisp. But it would be too
ambitious to summarize all differences in one post.

The majority of people use it in some form of "scripting."


Another language that might be interesting to look at
is Pico Lisp. It maintains a similar spirit to Newlisp.

Raffael Cavallaro

Jan 18, 2009, 9:25:08 PM
On 2009-01-18 20:12:09 -0500, p...@informatimago.com (Pascal J.
Bourguignon) said:

> Who needs a number that big? Even the Zimbabwe needs only 15 digits.

Apologies - I didn't realize that lisp numerics were only supposed to
be used for financial calculations. ;^)
--
Raffael Cavallaro, Ph.D.

Majorinc Kazimir

Jan 18, 2009, 9:26:50 PM
Raffael Cavallaro wrote:

> An no one *ever* needs integers larger than that so common lisp's
> automatic overflow detection and bignums are pointless, right? For
> example, the thousandth fibonacci number is much smaller than newlisps
> integers, right?

Newlisp has interface to GMP library.

http://www.newlisp.org/code/modules/gmp.lsp.html

Mike

Jan 18, 2009, 9:42:59 PM

Rather than a quasi-lisp that defines 'defun' as 'de', is there a
simple lisp that can be used as a static binary for system
administration and configuration instead of using cfengine(8)?
I like what Mark Burgess has put together (met him once too, briefly),
and the people around him. That application will not work in my
environment and I prefer to use lisp (not all of Common Lisp nor
the twists of Scheme, just simple LISP, and not all the baggage of
tcl (though I like tcl also)).

It has been odd enough for me to get accustomed to (first) and
(rest) instead of (car) and (cdr).

I want something that I can use cross platform, windows, mac, linux,
unix, to manage my boxes. I want something that I can compile from
source into a static binary so I can manipulate the registry on
windows or plists on mac. And I want to use lisp for this.

Mike

Andrew Reilly

Jan 18, 2009, 10:26:25 PM
On Sun, 18 Jan 2009 21:25:08 -0500, Raffael Cavallaro wrote:

> Apologies - I didn't realize that lisp numerics were only supposed to be
> used for financial calculations. ;^)

Sure, but outside of demonstrations of Fibonacci sequences, and perhaps
some crypto libraries (that know deterministically that they'll be using
large numbers, and can be coded accordingly), who realistically gets
mileage from the smooth transition to bignums? I was wondering that
myself, recently. Certainly with 32-bit machine integers and 30- or 31-
bit fixnums there could be some problems, but I think that I could live
quite happily within the limits of a 62- or 63-bit fixnum on a 64-bit
system. The caveat would have to be that overflow trapped: silent wrong
answers are unacceptable, of course. It certainly seems that most
compilers have "assume fixnums for integers" options on their performance
knobs, and I would guess that *most* programs work just fine with that
setting...

Cheers,

--
Andrew

Rob Warnock

Jan 18, 2009, 11:00:58 PM
Mike <mi...@mikee.ath.cx> wrote:
+---------------

| Rather than a quasi-lisp that defines 'defun' as 'de', is there a
| simple lisp that can be used as a static binary for system
| administration and and configuration ... ?
| ... I prefer to use lisp (not all of Common Lisp nor
| the twists of Scheme, just simple LISP ...)
...

| I want something that I can use cross platform, windows, mac, linux,
| unix, to manage my boxes. I want something that I can compile from
| source into a static binary so I can manipulate the registry on
| windows or plists on mac. And I want to use lisp for this.
+---------------

There's always Lisp500:

http://www.modeemi.fi/~chery/lisp500/

Of course, it's slow, has rather opaque source, and is missing a few bits
[though has quite a lot, for its size!], but it's a single source file
of only ~500 lines [plus a 5.8 K line init file in Lisp, which could
presumably be hacked into the source as a string]. And its GC is at
least stable enough to handle (ACKERMANN 3 9)! ;-}


-Rob

-----
Rob Warnock <rp...@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607

Tamas K Papp

Jan 18, 2009, 11:24:09 PM
On Mon, 19 Jan 2009 02:21:16 +0100, Majorinc Kazimir wrote:

> Tamas K Papp wrote:
>
>
>> It is not clear what newLISP is fixing, or how it improves on CL.
>
> In "code=data" area, Newlisp offers several improvements over CL and
> Scheme:
>
> (1) Unrestricted eval, i.e. eval can access to
> local variables. CL and Scheme evals cannot do that. Eval is, in my
> opinion, single most important feature for code=data.

I don't think Lisp programmers use eval that much, directly. But if
you wanted some parametrized eval that uses local variables, it would
be possible to do it in CL.
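
For instance, something along these lines (just a sketch) gives you an
eval that sees whichever "locals" you choose, by binding them
dynamically around the call:

(defun eval-with-bindings (form names values)
  "Evaluate FORM with NAMES dynamically bound to VALUES."
  (progv names values
    (eval `(locally (declare (special ,@names)) ,form))))

(eval-with-bindings '(+ x y) '(x y) '(1 2))  ; => 3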

> (3) Newlisp has support for dynamic and static scope.
> Local variables are dynamically scoped. The namespaces, called

Ouch. I don't find pervasive dynamic scope an improvement over
lexical scope. Quite the opposite.

> (4) The Functions are lambda-expressions, not results of
> the evaluation of lambda expressions. That means, program can
> analyze and modify functions during runtime. And the same is the
> case for macros.

Which again is a trivial improvement that you can easily replicate in
CL, should the need arise -- but it does not, in practice. If I found
myself directly manipulating S-expressions that create functions
during runtime, I would be pretty sure I am doing something wrong, and
I should be using closures (and hide the machinery with macros).

> (5) Newlisp is an intepreter. That means, if you want
> speed, you are in the same league as Python or Perl. But, if your
> code contains lot of evals - Newlisp is faster than many other
> Lisps.
>
> Here are some benchmarks i did:
>
> http://kazimirmajorinc.blogspot.com/2008/12/
> speed-of-newlisp-eval-test-v100.html

But why would your code contain a lot of evals? I don't see the
point. Here you try to sell a weakness (no compiler) as a feature --
nice try.

> So, for those who like Lisp because of code=data, Newlisp is promising
> choice. Of course, the languages have many other different aspects, and

Again, you repeat this code=data thing as a buzzword and try to
portray newLisp in a good light. But CL already treats code as data
--- what do you think macros are doing?

> Another language that might be interesting to look at is Pico Lisp. It
> maintains similar spirit as Newlisp.

You mean making brain-dead changes to CL? Or "simplifying" the
language by clearing up things like the Great and Horrible Multitude
of Functions that Test for Equality? Thanks, I think I will pass.

McCarthy has remarked that Lisp is a local optimum in the space of
programming languages. Maybe CL is a local optimum in the space of
Lisps, too. At least I frequently find that "improved" dialects
arise from grave misunderstandings and are actually much
worse than CL. The exceptions that come to mind are Qi (esp. version
2) and maybe Clojure (of which I know very little).

Tamas

Cor Gest

Jan 18, 2009, 11:34:36 PM

Some entity, AKA WalterGR <walt...@gmail.com>,

wrote this mindboggling stuff:
(selectively-snipped-or-not-p)

>> > It is not clear what newLISP is fixing, or how it improves on CL.
>>
>> fixing stupidity is a laudible cause
>
> Did you mean "laudable"?

why would you have people burn in cars ?

Cor Gest

Jan 18, 2009, 11:35:43 PM
Some entity, AKA Alberto Riva <ar...@nospam.ufl.edu>,

wrote this mindboggling stuff:
(selectively-snipped-or-not-p)


>>>> It is not clear what newLISP is fixing, or how it improves on CL.
>>> fixing stupidity is a laudible cause
>>
>> Did you mean "laudable"?
>
> Or "laughable"?

Ah, yes it is funny indeed.

Pascal Costanza

Jan 19, 2009, 5:17:59 AM

640 KB ought to be enough for anybody.


Pascal

--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/

Alexander Lehmann

Jan 19, 2009, 5:36:29 AM
Pascal Costanza wrote:
> 640 KB ought to be enough for anybody.

I second that :-) Besides, don't "we all" praise the extensibility and most
notably the flexibility of Lisp? So why does one compare apples and oranges by
saying newLisp is better than Lisp? IMO, both newLisp and Lisp have their
rightful places, so why state that one improves on the other?

jos...@corporate-world.lisp.de

Jan 19, 2009, 5:57:56 AM
On 19 Jan., 11:17, Pascal Costanza <p...@p-cos.net> wrote:
> Andrew Reilly wrote:
> > On Sun, 18 Jan 2009 21:25:08 -0500, Raffael Cavallaro wrote:
>
> >> Apologies - I didn't realize that lisp numerics were only supposed to be
> >> used for financial calculations.  ;^)
>
> > Sure, but outside of demonstrations of fibbonaci sequences, and perhaps
> > some crypto libraries (that know deterministically that they'll be using
> > large numbers, and can be coded accordingly), who realistically gets
> > milage from the smooth transition to bignums?  I was wondering that
> > myself, recently.  Certainly with 32-bit machine integers and 30 or 31
> > bit fixnums there could be some problems, but I think that I could live
> > quite happily within the limits of a 62 or 63 bit fixnum on a 64-bit
> > system.  The caveat would have to be that overflow trapped: silent wrong
> > answers are unacceptable, of course.  It certainly seems that most
> > compilers have "assume fixnums for integers" options on their performance
> > knobs, and I would guess that *most* programs work just fine with that
> > setting...
>
> 640 KB ought to be enough for anybody.

Which would be 3200 KB in total.

Kaz Kylheku

Jan 19, 2009, 6:52:05 AM
On 2009-01-18, simple.lan...@gmail.com <simple.lan...@gmail.com> wrote, via Google Groups:
> A Look at newLISP:

ROFL!

I was disconnected from the machine where I normally read news,
so I was browsing the newsgroup using Google Groups.

I noticed that someone has been giving multiple low Google Groups star ratings
to all of the intelligent criticisms posted by our comp.lang.lisp regulars in
this thread.

Is that you doing that? Do you actually have friends helping, or is it all sock
puppet accounts?

Do you people think anyone gives a flying leap about newLISP /or/ Google Groups
ratings?

Yeah, you're really socking it to the infidels who dare criticize newLISP!
One star on Google Groups, three times over again---take THAT, you
nay-saying scumbags!


Here is what I think.

newLISP is a retarded heap of dung worked on by clueless monkeys.

As such, we shouldn't care about it, except that it is proactively harmful,
because it damages the view toward the Lisp family of languages from
the perspective of any Lisp newcomer who may come across it as a point of first
contact.

Nobody should ever come into contact with newLISP, but newbies are more
susceptible to the damage it may inflict. I'm sure I could poke around in
newLISP without any permanent braindamage, but I fear for the vulnerable
noob.

The name may be ``new''LISP, but it's anything but new. It's something that's
not even state of the art for 1960. It represents an almost fifty-year
backwards leap in progress.

The author should write a lengthy apology for newLISP, and move on.

I don't know of a better way to put it: newLISP is to Lisp what Jack
Skellington's concept of Santa Claus is to the real Santa Claus.

Note that Jack let Santa Claus go and repented for being an asshole.

Will Klutz do the same one day?

Kaz Kylheku

Jan 19, 2009, 7:01:31 AM
On 2009-01-19, Andrew Reilly <andrew-...@areilly.bpc-users.org> wrote:
> On Sun, 18 Jan 2009 21:25:08 -0500, Raffael Cavallaro wrote:
>
>> Apologies - I didn't realize that lisp numerics were only supposed to be
>> used for financial calculations. ;^)
>
> Sure, but outside of demonstrations of fibbonaci sequences, and perhaps
> some crypto libraries (that know deterministically that they'll be using
> large numbers, and can be coded accordingly), who realistically gets
> milage from the smooth transition to bignums? I was wondering that
> myself, recently. Certainly with 32-bit machine integers and 30 or 31
> bit fixnums there could be some problems, but I think that I could live
> quite happily within the limits of a 62 or 63 bit fixnum on a 64-bit
> system. The caveat would have to be that overflow trapped: silent wrong
> answers are unacceptable, of course.

What if overflow trapped, substituted the correct bignum value, and continued?

> It certainly seems that most
> compilers have "assume fixnums for integers" options on their performance
> knobs, and I would guess that *most* programs work just fine with that
> setting...

It's not ``assume fixnums for integers''. It's ``assume that these
particular objects are fixnums''.

This means that a symbol, string or cons will be treated as an integer if you
pass it in.
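
Roughly this sort of per-function promise, sketchily:

(defun add-fixnums (x y)
  (declare (fixnum x y)
           (optimize (speed 3) (safety 0)))
  ;; with safety 0 the compiler may trust the declaration blindly,
  ;; so passing in a string or a cons here is on your head
  (the fixnum (+ x y)))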

Kaz Kylheku

Jan 19, 2009, 7:08:18 AM
On 2009-01-19, Majorinc Kazimir <fa...@email.address> wrote:

This example suggests that a special + function is required to work
with these bignums, and that they are represented as character strings.

Why on Earth are you staining your credibility by defending this pitiful
garbage?

Matthias Buelow

Jan 19, 2009, 8:21:33 AM
Andrew Reilly wrote:

> Sure, but outside of demonstrations of fibbonaci sequences, and perhaps
> some crypto libraries (that know deterministically that they'll be using
> large numbers, and can be coded accordingly), who realistically gets
> milage from the smooth transition to bignums

It doesn't matter; if only fixnums are available, then that's a
(somewhat arbitrarily defined) restriction that doesn't have to be
there, that's enough reason to hate it.

John Thingstad

Jan 19, 2009, 9:38:33 AM
On Mon, 19 Jan 2009 04:26:25 +0100, Andrew Reilly
<andrew-...@areilly.bpc-users.org> wrote:

To me it is more about being able to work at the 'right level' of abstraction.
Is it acceptable to put artificial limits into general libraries? When
trying to find large primes you easily have thousands of digits. To
use Mathematica as an example, I like the program's ability to find the
appropriate algorithm automatically 'under the hood', letting me focus on
the math rather than numeric implementation issues.

--------------
John Thingstad

Dan Weinreb

Jan 19, 2009, 10:23:44 AM
On Jan 18, 8:21 pm, Majorinc Kazimir <fa...@email.address> wrote:
> Tamas K Papp wrote:
>
> In "code=data" area, Newlisp offers several improvements
> over CL and Scheme:

I would be interested to see some examples of what
problems you'd want to solve that work out better
in Newlisp than they do in, say, Common Lisp.

The way you have defined the language means that
you cannot write a "compiler" in the sense of
nearly all existing Common Lisp compilers. In
order to be able to create and "apply" macros
during program execution, it's hard to see any
other way to implement Newlisp other than as
a classic Lisp interpreter, actually manipulating
things at the "S-expression" level.

There's nothing wrong with that per se, as long
as there's some important benefit to be gained
from it. There are many useful applications of
programming languages that do not particularly
require a lot of speed, so that doesn't bother
me.

The interesting thing is whether Newlisp makes
it easier to solve real-world problems. Just
saying "it's good for code=data" is too abstract
a claim without real examples to back it up.
They need to be examples of problems that would
be hard to solve in Common Lisp but are easier
to solve in Newlisp.

By the way, I'm sure Newlisp has cleaned up a
lot, but that's not particularly interesting
since it's so easy to see how to do that
if you don't need to retain compatibility.
Any new Lisp without back-compatibility
requirements would be far cleaner than CL.

In short, what problem is it that Newlisp
was created to solve?

Thank you!

-- Dan

Tamas K Papp

Jan 19, 2009, 10:39:10 AM
On Mon, 19 Jan 2009 07:23:44 -0800, Dan Weinreb wrote:

> don't need to retain compatibility. Any new Lisp without
> back-compatibility requirements would be far cleaner than CL.

Why is "cleanliness" a goal? Scheme is "clean", but I don't see that
buying anything (except for those writing implementations). It is
good to have standardized solutions for common problems, so that
people understand each other's code. In case one is not satisfied
with the existing solutions in CL, it is always possible to roll one's
own (cf iterate vs loop).

I think that zealously striving for a "clean" and "minimal" language
for its own sake just misses the point. The article quoted by the OP
thinks that = as a replacement for eq, eql, equal, =, etc is somehow a
good thing, but fails to elaborate.

Tamas

Alex Mizrahi

Jan 19, 2009, 12:00:11 PM
PJB> Who needs a number that big? Even the Zimbabwe needs only 15 digits.

http://en.wikipedia.org/wiki/Hyperinflation#Examples_of_hyperinflation

Hungary
...
The highest denomination in mid-1946 was 100,000,000,000,000,000,000 pengo.
...
The overall impact of hyperinflation: On 18 August, 1946
400,000,000,000,000,000,000,000,000,000 or 4×10^29 (four hundred octillion
[ short scale ] ) pengo became 1 forint.


Mark Wooding

Jan 19, 2009, 12:06:59 PM
Majorinc Kazimir <fa...@email.address> writes:

> (1) Unrestricted eval, i.e. eval can access to local variables. CL and
> Scheme evals cannot do that.

Indeed. It's a tradeoff between efficient compilation and EVAL: if the
compiler is given the necessary freedom to arrange a function's
environment in the most efficient manner, then either EVAL will be
horrifically complicated or just won't be able to `see' lexically bound
variables.

Emacs Lisp does this, by the way.

> Eval is, in my opinion, single most important feature for
> code=data.

Ahh. I think I'd put efficient and comprehensible macros and ready
availability of COMPILE rather higher.

> (2) Fexprs, i.e. "the first class" macros.

The `F' stands for `funny'. Interestingly, fexprs were the cause of
many of the problems that macros were introduced to fix, most notably
the fact that the very existence of fexprs the compiler doesn't have
built-in knowledge about makes it nearly impossible to compile correct
code.

The MACLISP interpreter used `fsubrs' -- built-in functions which
received an unevaluated argument list -- to implement what we'd now call
the various special forms. The compiler needed to have special
knowledge of all of these fsubrs in order to compile programs properly
-- and the implementations needed to be kept in step between the
interpreter -- in PDP10 assembler -- and the compiler, written in
MACLISP.

> Newlisp macros can be generated during runtime, passed as
> arguments to other macros or functions, applied like functions and
> so forth.

That's nice. Err... why?

The idea of APPLYing a macro is strange, since APPLY, by definition,
passes a list of /evaluated/ arguments to the function.

> (3) Newlisp has support for dynamic and static scope. Local variables
> are dynamically scoped. The namespaces, called "contexts" are
> statically scoped. That allows significantly more expressive
> functions than in dialects with statically scoped local variables.

Of course, CL has both anyway, with lexical scoping as the (correct)
default. Proper lexical scoping is, of course, the right solution to
the famous historical `funargs' problem.

For better or for worse, Emacs Lisp is entirely dynamically scoped. It
looks to me (though I didn't look hard, so I might be mistaken) that one
can somewhat emulate newLISP's contexts by messing with Emacs Lisp
obarrays. Not that it's a lot of fun. (I speculate that Emacs Lisp
systems run more code from more disparate places than newLISP systems,
and they seem to get by pretty well without any of this namespace
malarkey.)

> (4) The Functions are lambda-expressions, not results of the
> evaluation of lambda expressions. That means, program can analyze
> and modify functions during runtime. And the same is the case for
> macros.

Oh, heavens. Look, if you want Emacs Lisp, you know where to find it.
But it really isn't very new.

> (5) Newlisp is an intepreter. That means, if you want speed, you are
> in the same league as Python or Perl. But, if your code contains
> lot of evals - Newlisp is faster than many other Lisps.

You don't have to be mad to write programs which make heavy use of EVAL
but, no, wait, you do.

> Here are some benchmarks i did:

OK, let's just put the issue of whether the benchmark is useful to one
side for now.

I compiled newLISP v.10.0.1 from source, and ran it. I got these
answers:

421
170
409
446

which I assume are milliseconds. (This should help to calibrate my
machine against yours.)

I hacked the Common Lisp example code to make it be Emacs Lisp, which
involved replacing `setf' by `setq', and writing a `time' macro.

(defmacro time (form)
  `(let (before after carry diff func)
     (setq func (byte-compile `(lambda () ,',form)))
     (setq before (get-internal-run-time))
     (funcall func)
     (setq after (get-internal-run-time))
     (setq diff (time-subtract after before))
     (message "%d.%06d" (cadr diff) (caddr diff))))

(time (do ((i 0 (+ i 1)))
          ((= i 1000000))
        (progn (setq x 0)
               (setq x (+ x 1)))))

(time (do ((i 0 (+ i 1)))
          ((= i 1000000))
        '(progn (setq x 0)
                (setq x (+ x 1)))))

(time (do ((i 0 (+ i 1)))
          ((= i 1000000))
        (eval '(progn (setq x 0)
                      (setq x (+ x 1))))))

(time (do ((i 0 (+ i 1)))
          ((= i 1000000))
        (progn (setq q (eval 0))
               (setq q (+ x 1)))))

I got these answers:

0.076005
0.032002
0.388024
0.120007

which are /all/ better than I got from newLISP, by factors between 1.05
and 5.54.

The moral of the story is: if you're an EVAL-crazed idiot, or just enjoy
the slightly musty aroma of old Lisp systems, Emacs Lisp is just better
at it than the new boy.

Did I mention that Emacs Lisp has a byte-compiler?

> So, for those who like Lisp because of code=data, Newlisp is promising
> choice. Of course, the languages have many other different aspects,
> and not all are in favor of Newlisp. But it would be too ambitious to
> summarize all differences in one post.

Except that Emacs Lisp is more mature, is faster, addresses your
perverted `code=data' notions, ...

Oh, I'm told that it comes with a very powerful system for reading news
and mail (in fact, several different mail handling systems), and even
comes with a half-decent text editor.

-- [mdw]

Alex Mizrahi

Jan 19, 2009, 12:44:36 PM
sly> newLISP Home Page: http://www.newlisp.org/

http://www.newlisp.org/MemoryManagement.html

"newLISP follows a one reference only (ORO) rule. ... As a result, each
newLISP object only requires one reference."

Most modern programming languages (including languages of the Lisp family)
allow objects to be referenced freely, so programming in newLISP is very
different. For example, the ORO rule forbids creating a graph of arbitrary
structure in newLISP -- it is not possible to create a circular list, I kid
you not. [1] Thus, I think newLISP is not good for general purpose
programming.

[1]:
http://www.alh.net/newlisp/phpbb/viewtopic.php?t=458&sid=07b356982378072a89a4af46bddf5f4b
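
For contrast, building (and safely printing) a circular list in CL is a
couple of lines (a small sketch):

(let ((l (list 1 2 3)))
  (setf (cdr (last l)) l)       ; tie the tail back to the head
  (let ((*print-circle* t))
    (prin1 l)))                 ; prints #1=(1 2 3 . #1#)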


Kaz Kylheku

Jan 19, 2009, 1:02:34 PM
On 2009-01-19, Tamas K Papp <tkp...@gmail.com> wrote:
> On Mon, 19 Jan 2009 07:23:44 -0800, Dan Weinreb wrote:
>
>> don't need to retain compatibility. Any new Lisp without
>> back-compatibility requirements would be far cleaner than CL.
>
> Why is "cleanliness" a goal?

So that you could enter the temple, filthy infidel.

jos...@corporate-world.lisp.de

Jan 19, 2009, 1:29:18 PM
On 19 Jan., 16:39, Tamas K Papp <tkp...@gmail.com> wrote:
> On Mon, 19 Jan 2009 07:23:44 -0800, Dan Weinreb wrote:
> > don't need to retain compatibility. Any new Lisp without
> > back-compatibility requirements would be far cleaner than CL.
>
> Why is "cleanliness" a goal?

Why not? If you drive a car, do you want the instruments nicely lined
up, nicely visible, easy to use, no visual distractions, etc.?
I'd say a well-designed cockpit helps the user. Simple,
clean methods are also easier to remember and lead to
less cognitive effort.

There is a cleaner dialect of CL just inside ANSI CL.
ISLisp also is a bit 'cleaner'.

'clean' does not mean that it has less power; it means
that the naming is regular, interfaces look similar,
mechanisms look similar, there are not twenty ways
to do the same thing (just a few), facilities don't overlap
in functionality while being different to use (structures
and classes), widely different interpretations
by implementations are not tolerated, etc.

Even if I like CL (which I do), I don't have to believe
that the CL design is finished. But pragmatics says
I take the language like it is and like implementations
have extended it.

But:

* CLOS should be more widely used (conditions, pathnames, streams, i/o,
readtables, ...)
* get rid of structures, provide a simple DEFCLASS* macro (a rough
sketch follows below), make SLOT-VALUE as fast as structure access
* make streams extensible
* get rid of LOOP and replace it with ITERATE
* steal the collection stuff from Dylan
* all built-in functions (minus primitive type-specialized functions)
should be generic functions

etc etc.
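
(By DEFCLASS* I mean roughly this kind of shorthand -- only a sketch,
the name and expansion are illustrative, not an existing library:)

(defmacro defclass* (name superclasses &rest slot-names)
  "Each bare slot name gets an :initarg and an accessor of the same name."
  `(defclass ,name ,superclasses
     ,(mapcar (lambda (slot)
                `(,slot :initarg ,(intern (symbol-name slot) :keyword)
                        :accessor ,slot))
              slot-names)))

;; (defclass* point () x y)
;; ==  (defclass point ()
;;       ((x :initarg :x :accessor x)
;;        (y :initarg :y :accessor y)))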

Kaz Kylheku

Jan 19, 2009, 1:43:32 PM
On 2009-01-19, Dan Weinreb <d...@alum.mit.edu> wrote:
> On Jan 18, 8:21 pm, Majorinc Kazimir <fa...@email.address> wrote:
>> Tamas K Papp wrote:
>>
>> In "code=data" area, Newlisp offers several improvements
>> over CL and Scheme:
>
> I would be interested to see some examples of what
> problems you'd want to solve that work out better
> in Newlisp than they do in, say, Common Lisp.

The things that work out in newlisp are the completely naive
newbie nonsense that has good reasons not to work.

Q: ``I have a function (lambda (x y) (+ x y)). I tried

(setq f (lambda (x y) (+ x y)))
(third f)

expecting (+ X Y), but it didn't work. Waah!''

A: ``U SHLD BE USING NEWLISP D00D!''

> The way you have defined the language means that
> you cannot write a "compiler" in the sense of
> nearly all existing Common Lisp compilers.

If you wrote a compiler for newLISP, and it actually achieved a bit of speed,
it would expose the performance weakness of the ``copy-nearly-everything''
memory management scheme.

That would make the author look like an (even bigger) fool, for having
sung odes in praise of the memory management scheme and its performance.

> In
> order to be able to create and "apply" macros
> during program execution, it's hard to see any
> other way to implement Newlisp other than as
> a classic Lisp interpreter, actually manipulating
> things at the "S-expression" level.

What ``classic''? Lisp has been compiled since 1961, which was probably before
Klutz was born.

> There's nothing wrong with that per se, as long
> as there's some important benefit to be gained
> from it. There are many useful applications of
> programming languages that do not particularly
> require a lot of speed, so that doesn't bother
> me.

Since you have access to the evaluator at run-time, there is no strict
separation between compile time and run time. If you want compiled code to
expand a macro during its run time, you can do that. This doesn't have to be
elevated into a fully-fledged language restriction.

> The interesting thing is whether Newlisp makes
> it easier to solve real-world problems. Just
> saying "it's good for code=data" is too abstract

It's not good for code=data, because this is just a misunderstanding. It's
actually ``source code=structured data''. Code in general /isn't/ data;
not in all of its representations. If you want only one representation for
code, the source code, then you're making a mistake which is on
the same level as ``source code = characters in a file''.

> a claim without real examples to back it up.
> They need to be examples of problems that would
> be hard to solve in Common Lisp but are easier
> to solve in Newlisp.

In the GNU bash, you can eval code in a function, and that eval has access to
local variables.

I took ``advantage'' of this in a script recently.

But it was in a situation where I wanted a local macro. I didn't /want/ to
eval, I wanted to transform some syntax occurring in the function (just once
when the function definition is processed).

The eval was just hack to solve the problem, because of a lack of the proper
tools.

The vast majority of eval use is like this.

See, by the way, how I can admit to having worked on a Bash script, without
having to treat my cognitive dissonance by becoming some kind of
shell scripting advocating nutjob.

It's okay to use something /and/ admit that it's a piece of shit, at the same
time.

If someone wants to script something with newLISP, that's one thing. Just
don't come into comp.lang.lisp to be the whipping boy for it, and destroy your
credibility in the process, you know what I mean?

The true professional doesn't need to feel that every tool he is working
with is the best, because his pride is in the work, not in the tools.

> By the way, I'm sure Newlisp has cleaned up a
> lot, but that's not particularly interesting
> since it's so easy to see how to do that
> if you don't need to retain compatibility.
> Any new Lisp without back-compatibility
> requirements would be far cleaner than CL.

newLISP is anything but cleaner. In the areas of common functionality, newLISP
is a mess.

Proper garbage collection is cleaner than some ad-hoc scheme where some things
are copied and some things are allowed to be references.

Fixnums and bignums being subtyped from the same type and smoothly
interoperating is cleaner than (GMP:+ "1234..." "456...").

Lexical closures are cleaner than the context nonsense.

Macros are cleaner than fexprs.

Cons cells are cleaner than the corresponding contortion in newLISP.

List structure that has complete freedom to be circular if need be
is cleaner than one with restrictions. Restrictions are dirt because
they are something extra that needs to be said in the specification
(``by the way, you can't do this, this, nor that''), and though they
might save some implementation complexity in the language, working
around them adds complexity to programs.

Dynamic variables are cleaner in Common Lisp because they don't have to be
endowed with bizarre responsibilities in order to emulate lexical variables.


What is dirty and what is clean has to be judged through the proper experience.

Here is a very good and relevant Joel on Software article about this.

Making Wrong Code Look Wrong

http://www.joelonsoftware.com/articles/Wrong.html

Here he tells the story of how he worked in a bakery, and had to
learn to see what it means for something to be dirty.

jos...@corporate-world.lisp.de

Jan 19, 2009, 1:49:35 PM
On 19 Jan., 19:43, Kaz Kylheku <kkylh...@gmail.com> wrote:

...

> Q: ``I have a function (lambda (x y) (+ x y)). I tried
>
>     (setq f (lambda (x y) (+ x y)))
>     (third f)
>
>     expecting (+ X Y), but it didn't work. Waah!''
>
> A:  ``U SHLD BE USING NEWLISP D00D!''

No, you can still use Common Lisp:

CL-USER 23 > (setq f (lambda (x y) (+ x y)))
#<anonymous interpreted function 406000086C>

CL-USER 24 > (third (function-lambda-expression f))
(+ X Y)

...


Tamas K Papp

Jan 19, 2009, 1:55:46 PM
On Mon, 19 Jan 2009 10:29:18 -0800, jos...@corporate-world.lisp.de wrote:

> On 19 Jan., 16:39, Tamas K Papp <tkp...@gmail.com> wrote:
>> On Mon, 19 Jan 2009 07:23:44 -0800, Dan Weinreb wrote:
>> > don't need to retain compatibility. Any new Lisp without
>> > back-compatibility requirements would be far cleaner than CL.
>>
>> Why is "cleanliness" a goal?
>
> Why not? If you drive a car do you want the instruments nicely lined up,
> nicely visible, easy to use, no visual distractions, etc? I'd say a good
> designed cockpit helps the user. Simple, clean methods are also easier
> to remember and lead to less cognitive effort.

That I can agree with in general. But my perception is that some
language designs have taken "cleanliness" too far, at the cost of the
usability of the language (Scheme comes to mind).

> Even if I like CL (which I do), I don't have to believe that the CL
> design is finished. But pragmatics says I take the language like it is
> and like implementations have extended it.

Indeed, but that is not how newLISP was started. Its designers didn't
have anything specific in mind that they wanted to improve on, just
some general fuzzy ideas (like "code=data", as if CL didn't have
that). You can see the result for yourself.

I am not saying CL cannot be improved. Just that it is pretty hard to
improve on, and takes a lot of experience and conscious effort.

Tamas

Kaz Kylheku

Jan 19, 2009, 2:08:23 PM
On 2009-01-19, jos...@corporate-world.lisp.de <jos...@corporate-world.lisp.de> wrote:
> On 19 Jan., 16:39, Tamas K Papp <tkp...@gmail.com> wrote:
>> On Mon, 19 Jan 2009 07:23:44 -0800, Dan Weinreb wrote:
>> > don't need to retain compatibility. Any new Lisp without
>> > back-compatibility requirements would be far cleaner than CL.
>>
>> Why is "cleanliness" a goal?
>
> Why not? If you drive a car do you want the instruments nicely lined
> up, nicely visible, easy to use, no visual distractions, etc?

What I'm ``driving'' when I write code is not the programming language,
but the program.

As far as driving goes, the instrument panel of consumer vehicles
is compromised for idiots, so it makes a bad example.

Gauges such as vacuum and oil pressure are missing.

If there is an oil pressure problem, there is only an ``idiot light'' that
comes on, when it may be too late. ``You may have lost all your oil! You have
ten seconds to pull over and stop the engine!''

jos...@corporate-world.lisp.de

Jan 19, 2009, 2:14:38 PM
On 19 Jan., 19:55, Tamas K Papp <tkp...@gmail.com> wrote:
> On Mon, 19 Jan 2009 10:29:18 -0800, jos...@corporate-world.lisp.de wrote:
> > On 19 Jan., 16:39, Tamas K Papp <tkp...@gmail.com> wrote:
> >> On Mon, 19 Jan 2009 07:23:44 -0800, Dan Weinreb wrote:
> >> > don't need to retain compatibility. Any new Lisp without
> >> > back-compatibility requirements would be far cleaner than CL.
>
> >> Why is "cleanliness" a goal?
>
> > Why not? If you drive a car do you want the instruments nicely lined up,
> > nicely visible, easy to use, no visual distractions, etc? I'd say a good
> > designed cockpit helps the user. Simple, clean methods are also easier
> > to remember and lead to less cognitive effort.
>
> That I can agree with in general.  But my perception is that some
> language designs have taken "cleanliness" too far, at the cost of the
> usability of the language (Scheme comes to mind).

Scheme R4RS is perfectly usable. R5RS even more. For teaching,
writing algorithms, etc. It gets even more useful with added libraries
(see SLIB, SRFI). Generally I like Common Lisp more for application
development.

>
> > Even if I like CL (which I do), I don't have to believe that the CL
> > design is finished. But pragmatics says I take the language like it is
> > and like implementations have extended it.
>
> Indeed, but that is not how newLISP was started.  Its designers didn't
> have anything specific in mind that they wanted to improve on, just
> some general fuzzy ideas (like "code=data", as if CL didn't have
> that).  You can see the result for yourself.
>
> I am not saying CL cannot be improved.  Just that it is pretty hard to
> improve on, and takes a lot of experience and conscious effort.

Luckily there is now experience about some extensions (streams,
conditions in
CLOS, MOP, etc.). In several areas one would just write down what some
of the implementations are doing anyway.

>
> Tamas

java...@gmail.com

Jan 19, 2009, 2:14:52 PM
simple.language.ya...@gmail.com:
> A Look at newLISP:
> B SeeThatItsPERFECT
> C UseItAndForgetTheREST

newLISP seems perfect, the definite answer to all Commonly Lisped
questions.
But the name is simply awful.
Lispers hate camelCase.
Please call it new-lisp

thank-you-very-much

daniel....@excite.com

Jan 19, 2009, 2:43:32 PM
On Jan 18, 10:26 pm, Andrew Reilly <andrew-newsp...@areilly.bpc-
users.org> wrote:

> Sure, but outside of demonstrations of fibbonaci sequences, and perhaps
> some crypto libraries (that know deterministically that they'll be using
> large numbers, and can be coded accordingly), who realistically gets
> milage from the smooth transition to bignums?

> --
> Andrew


One of my main uses of Lisp is to simulate the timing of systems that will
be implemented on embedded chips, and compare with ideal timing over the
long term.

I implement the algorithm going on the chip, which will be keeping the
values within "normal" integer size limits, like 32 bits and also 64 bits.

Then I implement a timing measure using the regular Lisp numerical stack,
where I freely accumulate fractions of seconds.

What happens is that I get a lot of rationals built of giant bignums.
These rationals are entirely accurate - there is no rounding or truncation
error. I can allow the simulations to run for long periods and get
validation of the accuracy of timing of the embedded algorithm. For
instance, if a mistake in the embedded algorithm will drop a least
significant bit once a year, I pick that up easily.

So I get really valuable mileage out of bignums such as above with the
Fibonacci example, even bigger, just used as parts of rationals. In the
end I might convert to a real for final reporting, but the zero loss in
accumulation is the most important.

How about numerical estimation of solutions to differential equations?
For instance finite element analysis? Any application that stands to
suffer from round-off error can benefit from bignums & rationals.
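
The effect is easy to see in a toy sketch -- accumulate a tenth of a
second a million times, once exactly and once in single-float:

(let ((exact 0)
      (approx 0.0f0))
  (dotimes (i 1000000)
    (incf exact 1/10)
    (incf approx 0.1f0))
  (values exact approx))
;; => 100000 (exactly), and a single-float that has drifted visibly
;;    away from 100000.0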

Dimiter "malkia" Stanev

Jan 19, 2009, 5:04:39 PM
> McCarthy has remarked that Lisp is a local optimum in the space of
> programming languages. Maybe CL is a local optimum in the space of
> Lisps, too. At least I frequently find that "improved" dialects
> frequently arise from grave misunderstandings and are actually much
> worse than CL. The exceptions that come to mind are Qi (esp. version
> 2) and maybe Clojure (of which I know very little).

Well said... Heh, So CL is really LOL! Nice! LOL as in Locally Optimum
Lisp. No that can't be :)

But yes, it's the middle ground! I love it

simple.lan...@gmail.com

Jan 19, 2009, 6:23:12 PM
The arguments used in this thread are not new. They were used over
2000 years ago when ancient Egyptian scribes were improving their
hieroglyphic alphabet. Some of the hieroglyphs represented consonants,
so they could be used the same way as phonetic letters. The idea of
using phonetic hieroglyphs ONLY was a total insanity according to the
scribes, because hieroglyphs representing words were much more terse;
they saved time and parchment. A professional scribe knew over 5000
hieroglyphs. From his point of view, someone who knew only the
phonetic hieroglyphs was an illiterate idiot wasting parchment.

The ancient Greeks also used a fairly complex syllabic script
(called Linear B) before the Dorian invasion destroyed their civilization.
After the invasion there were no professional scribes left, so a new
script "for illiterate idiots" had to be invented. The new script was
entirely phonetic, and so simple that anyone could learn it in a few
days. All modern Europeans use a modified version of this script.

In my opinion, Common Lisp resembles the ancient hieroglyphs, as
well as cars with manual transmission (for professional drivers), and
obsolete Unix computers (for computer professionals)... Common Lisp
can do the work efficiently, but is too complex for part-time
programmers. Most programmers alive today are part-time programmers.
Many of them spend as much time writing AutoLISP programs as driving a
car, but they do not consider themselves professional programmers or
professional drivers. Computer programming is just one of many skills
they need in everyday life. It seems that the newLISP is simple enough
and terse enough for billions of part-time programmers. It is more
terse (clean) than Scheme and more mature than Pico Lisp.

Kaz Kylheku

Jan 19, 2009, 6:58:22 PM
On 2009-01-19, Tamas K Papp <tkp...@gmail.com> wrote:
> McCarthy has remarked that Lisp is a local optimum in the space of
> programming languages.

Damn, I somehow read that as OPIUM on the first scan! :)

Raffael Cavallaro

Jan 19, 2009, 7:03:47 PM
On 2009-01-19 18:23:12 -0500, simple.lan...@gmail.com said:

> It seems that the newLISP is simple enough
> and terse enough for billions of part-time programmers. It is more
> terse (clean) than Scheme and more mature than Pico Lisp.

And more broken than either.

If you want small and clean use scheme. There are plenty of
implementations including those specifically designed for scripting
(such as gauche).

newlisp is just an ill conceived jumble which foists restrictions on
the user to make the language implementor's job easier. This priority
is backwards.

--
Raffael Cavallaro, Ph.D.

Tamas K Papp

Jan 19, 2009, 7:04:03 PM
On Mon, 19 Jan 2009 15:23:12 -0800, simple.language.yahoo wrote:

> In my opinion the Common Lisp resembles the ancient hieroglyphs, as well
> as cars with manual transmission (for professional drivers), and

Yeah, whatever.

You still haven't shown any actual code to back up your points, yet
you hope to make up for that by bullshitting in metaphors.

I still don't know why eval should be used so extensively. I still
haven't seen any examples where manipulating the S-expressions in
lambda forms is an advantage, not something hairy and error-prone.
Etc.

I don't care about your stories. You are discussing a fricking
programming language, support your arguments with code or get lost.

Tamas

Majorinc Kazimir

Jan 19, 2009, 7:05:50 PM
Alex Mizrahi wrote:
> sly> newLISP Home Page: http://www.newlisp.org/
>
> http://www.newlisp.org/MemoryManagement.html
>
> "newLISP follows a one reference only (ORO) rule. ... As a result, each
> newLISP object only requires one reference."
>
> Most modern programming languages (including languages of Lisp family) allow
> to reference objects freely,
> so programming in newLISP is very different. For example, ORO rule forbids
> creating a graph of
> arbitary structure in newLISP -- it is not possible to create a circular
> list, I kid you not. [1] Thus, I think
> newLISP is not good for a general purpose programming.

It is easy to get around it, really.
My favorite way is - using symbols.

(set 'x (list 1 'x))
(set 'y (list 1 'y))
(set 'z (list 3 'z))

Symbols can be generated during
runtime. Symbols in Newlisp are
like generalized memory addresses.

Bruce Stephens

Jan 19, 2009, 7:12:55 PM
Tamas K Papp <tkp...@gmail.com> writes:

[...]

> I still don't know why eval should be used so extensively.

It isn't, is it? (Except in an implicit sort of way. I mean a lisp
system which didn't evaluate anything would probably not be very
useful. I mess about with elisp quite a bit, and I don't recall
seeing eval very often, and I'm fairly sure I haven't written any code
using it. It's the kind of thing one uses interactively: eval-defun,
for example.)

> I still haven't seen any examples where manipulating the
> S-expressions in lambda forms is an advantage, not something hairy
> and error-prone. Etc.

That strikes me as a false dichotomy. Macros that manipulate sexprs
can be hairy and error-prone to write, but an advantage when they've
been written. When written carefully they can be quite safe to
use---that's the advantage of sexprs. Nevertheless, writing them
isn't (in my limited experience) especially common, but when you need
it, it's useful to be able to have it.

[...]

Majorinc Kazimir

Jan 19, 2009, 7:33:55 PM
Mark Wooding wrote:
> Majorinc Kazimir <fa...@email.address> writes:
>
>> (1) Unrestricted eval, i.e. eval can access to local variables. CL and
>> Scheme evals cannot do that.
>
> Indeed. It's a tradeoff between efficient compilation and EVAL: if the
> compiler is given the necessary freedom to arrange a function's
> environment in the most efficient manner, then either EVAL will be
> horrifically complicated or just won't be able to `see' lexically bound
> variables.

So, CL chose an option that leads to speed, and Newlisp
chose the option that leads to expressive power. That is
exactly what I said.

>
> Emacs Lisp does this, by the way.
>
>> Eval is, in my opinion, single most important feature for
>> code=data.
>
> Ahh. I think I'd put efficient and comprehensible macros and ready
> availability of COMPILE rather higher.

So, you value speed more than expressive power. OK. I do not.

>
>> (2) Fexprs, i.e. "the first class" macros.
>
> The `F' stands for `funny'. Interestingly, fexprs were the cause of
> many of the problems that macros were introduced to fix, most notably
> the fact that very existence of fexprs the compiler doesn't have
> built-in knowledge about makes it nearly impossible to compile correct
> code.

So, CL chose the path that leads to speed, Newlisp
chose the path that leads to expressive power.


>> (3) Newlisp has support for dynamic and static scope. Local variables
>> are dynamically scoped. The namespaces, called "contexts" are
>> statically scoped. That allows significantly more expressive
>> functions than in dialects with statically scoped local variables.
>
> Of course, CL has both anyway, with lexical scoping as the (correct)
> default. Proper lexical scoping is, of course, the right solution to
> the famous historical `funargs' problem.

So, CL chose the path that leads to what? Safety? Newlisp
chose the path that leads to expressive power.

The funarg problem is, however, not a problem in Newlisp practice.

There are a few reasons for that. One reason is that people
simply use the support for lexical scope existing in the language.

Another reason is that the funarg problem is not really that
hard - even if one decides to use only dynamic scope.

>> (4) The Functions are lambda-expressions, not results of the
>> evaluation of lambda expressions. That means, program can analyze
>> and modify functions during runtime. And the same is the case for
>> macros.
>
> Oh, heavens. Look, if you want Emacs Lisp, you know where to find it.
> But it really isn't very new.

So, where are we now? You gave up on CL and you are trying to
compare Newlisp with Emacs Lisp. Then go back to fexprs vs macros,
Newlisp has fexprs, Emacs has macros.

>
>> (5) Newlisp is an intepreter. That means, if you want speed, you are
>> in the same league as Python or Perl. But, if your code contains
>> lot of evals - Newlisp is faster than many other Lisps.
>
> You don't have to be mad to write programs which make heavy use of EVAL
> but, no, wait, you do.

Mad? Yeah right pal, I have to go, good luck to you ... :)

Matthew D Swank

Jan 19, 2009, 7:42:13 PM
On Mon, 19 Jan 2009 10:29:18 -0800, jos...@corporate-world.lisp.de wrote:
...

> * all built-in function (minus primitive type specialized functions)
> functions should be generic functions
>
> etc etc.
>
>

Should funcall _and_ apply both be generic functions?

Matt

--
Communicate! It can't make things any worse.

Andrew Reilly

Jan 19, 2009, 7:47:49 PM

That's a good answer. Thanks. In fact, it was pretty much exactly my
own reaction, during my first encounter with lisp and scheme, and my
first year of coding in scheme. More recently, though, I've been
thinking about implementation issues (I guess I'm not alone in wanting to
write my own, having read "the art of the interpreter"...) and it just
seems that the cost of that capability is quite high. Perhaps not in the
grand scheme of things, and not when coding at a high level on file or
GUI-related things, but it's got to hurt when you try to write tight
computational kernels or low-level "bit-bashing" controller code. Maybe
having a switch to force the issue when you know you want to, and having
a fully flexible and abstract default is the right answer after all...

Cheers,

--
Andrew

Tamas K Papp

Jan 19, 2009, 7:59:43 PM
On Tue, 20 Jan 2009 01:33:55 +0100, Majorinc Kazimir wrote:

> Mark Wooding wrote:
>
> So, CL chose an option that leads to speed, and Newlisp chose the option
> that leads to expressive power. That is exactly what I said.
>

>> Ahh. I think I'd put efficient and comprehensible macros and ready
>> availability of COMPILE rather higher.
>
> So, you value speed more than expressive power. OK. I do not.
>

>> The `F' stands for `funny'. Interestingly, fexprs were the cause of
>> many of the problems that macros were introduced to fix, most notably
>> the fact that very existence of fexprs the compiler doesn't have
>> built-in knowledge about makes it nearly impossible to compile correct
>> code.
>
> So, CL chose the path that leads to speed, Newlisp chose the path that
> leads to expressive power.
>

>> Of course, CL has both anyway, with lexical scoping as the (correct)
>> default. Proper lexical scoping is, of course, the right solution to
>> the famous historical `funargs' problem.
>
> So, CL chose the path that leads to what? Safety? Newlisp chose the path
> that leads to expressive power.

You keep repeating "expressive power", but that doesn't make it true.
Examples, please? I would like to see that enormous amount of
expressive power that the modifications buy you.

Expressive power. Expressive power. Expressive power. Wow man, just
saying it makes me feel super-powerful. Ok, back to work, let's see
those examples of extreme expressive expressive expressive power.

Tamas

Kaz Kylheku

unread,
Jan 19, 2009, 8:56:40 PM1/19/09
to
On 2009-01-20, Andrew Reilly <andrew-...@areilly.bpc-users.org> wrote:
> On Mon, 19 Jan 2009 14:21:33 +0100, Matthias Buelow wrote:
>
>> Andrew Reilly wrote:
>>
>>> Sure, but outside of demonstrations of fibbonaci sequences, and perhaps
>>> some crypto libraries (that know deterministically that they'll be
>>> using large numbers, and can be coded accordingly), who realistically
>>> gets milage from the smooth transition to bignums
>>
>> It doesn't matter; if only fixnums are available, then that's a
>> (somewhat arbitrarily defined) restriction that doesn't have to be
>> there, that's enough reason to hate it.
>
> That's a good answer. Thanks. In fact, it was pretty much exactly my
> own reaction, during my first encounter with lisp and scheme, and my
> first year of coding in scheme. More recently, though, I've been
> thinking about implementation issues (I guess I'm not alone in wanting to
> write my own, having read "the art of the interpreter"...) and it just
> seems that the cost of that capability is quite high.

The cost of that capability is buried in dynamic typing. A value coming into
your function can already be anything whatsoever: string, symbol, vector,
struct, fixnum, real, etc.

So unless you have the assurance of a declaration, and safety is set to zero,
you /have/ to look at the type.

You /have/ to handle overflow, unless you're designing some unreliable
programming language that allows overflows situations to silently proceed
with wrong answers.

In terms of efficiency, what does it matter that your integer overflow handler
takes the extra step of producing a bignum?

Even if you don't make the bignum, you still have to check the type, to make sure
the operands to the arithmetic operation are numbers, still have to handle
overflow, still have to handle combinations of operands that are mixtures of
floating point numbers, etc.

> Perhaps not in the
> grand scheme of things, and not when coding at a high level on file or
> GUI-related things, but it's got to hurt when you try to write tight
> computational kernels or low-level "bit-bashing" controller code.

The question is, what spectacular thing could you do by removing bignums from
the dynamic language, which would eliminate this alleged hurt?

The right tool is not removing the support for the bignum type, but dropping
down to manifest typing over well-defined sections of code.
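
For what it's worth, a minimal sketch of that "well-defined section"
idea (SUM-SMALL-INTS is a made-up name, nothing from the thread; the
point is only the declarations):

(defun sum-small-ints (vec)
  ;; Inside this function the programmer, not the runtime, takes
  ;; responsibility for keeping everything in fixnum range.
  (declare (type (simple-array fixnum (*)) vec)
           (optimize (speed 3) (safety 0)))
  (let ((sum 0))
    (declare (type fixnum sum))
    (loop for x across vec do (incf sum x))
    sum))

The rest of the program keeps generic integers and bignums; only this
section drops down to manifest typing.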

Kaz Kylheku

unread,
Jan 19, 2009, 10:46:48 PM1/19/09
to
On 2009-01-20, Majorinc Kazimir <fa...@email.address> wrote:
> Mark Wooding wrote:
>> Majorinc Kazimir <fa...@email.address> writes:
>>
>>> (1) Unrestricted eval, i.e. eval can access to local variables. CL and
>>> Scheme evals cannot do that.
>>
>> Indeed. It's a tradeoff between efficient compilation and EVAL: if the
>> compiler is given the necessary freedom to arrange a function's
>> environment in the most efficient manner, then either EVAL will be
>> horrifically complicated or just won't be able to `see' lexically bound
>> variables.
>
> So, CL chose an option that leads to speed, and Newlisp
> chose the option that leads to expressive power.

s/expressive/excrement/

99% of the time when you think you need eval with lexical access, it's because
you're a Lisp newbie who hasn't figured out APPLY, macros, local functions,
etc.

Recently I used eval-with-access-to-locals in a Bash script. I did that
because Bash doesn't have macros. Actually no, the problem could have been
solved with a local function. Not having local functions, or macros,
I had to fall back on eval. Take the code that should have been put into
a local function body, as a character string, and evaluate it.

Yeah, bash has excrement power too!

newLISP isn't even a language, just an implementation.

There are Common Lisps that have access to lexical environment as an extension.
E.g. look at environments in Allegro CL.

It's just not specified in the ANSI standard.

newLISP doesn't have an ANSI standard. Its behavior is whatever the current
release of the code does.

Comparing the newLISP to the language described in an ANSI standard is idiotic.
Why don't you compare newLISP to various CL implementations, in turn?

Compare language to language, implementation to implementation.

Man, Lisp /must/ be getting popular now. Two Kazimirs in the newsgroup.
Too bad one of them is drooling dummy, pushing newLISP.

Majorinc Kazimir

unread,
Jan 20, 2009, 3:34:42 AM1/20/09
to
Dan Weinreb wrote:

> The interesting thing is whether Newlisp makes
> it easier to solve real-world problems. Just
> saying "it's good for code=data" is too abstract

> a claim without real examples to back it up.
>
> They need to be examples of problems that would
> be hard to solve in Common Lisp but are easier
> to solve in Newlisp.

> In short, what problem is it that Newlisp
> was created to solve?
>
> Thank you!

The author, Lutz Mueller, wanted a simpler Lisp, useful for "small
programming", i.e. what is today called scripting. He does
not market Newlisp as an especially powerful language;
quite the contrary, he presents it as a language simple and
fun to use and learn, with good "built-in" support for
different "real world" needs like web protocols. The majority
of discussions on the forum are about such issues - how to use
Newlisp for a webserver, creating some graphics, etc.

But, although it was not the main intention of the author,
Newlisp has the features I mentioned, partly because modern
computers are much better, so there was far less pressure for
various optimizations.


> I would be interested to see some examples of what
> problems you'd want to solve that work out better
> in Newlisp than they do in, say, Common Lisp.

Example for functions identical to their definitions:
-----------------------------------------------
Once I wanted to "infect" SOME functions with extra
code during runtime, and later to clean that extra
code from functions. Wraping around function was not
enough, code had to be inserted in function definitions.

It had to be done WITHOUT knowledge and cooperation of
infected function, these had to be ordinary,
user defined functions, not purposely prepared to be infected.

It turned to be easy in Newlisp, because functions are
just lambda-expressions, they can be processed like
any list.


Example for unrestricted eval:
-----------------------------

What do macros actually do? They behave *like*
functions that produce code and then evaluate that
code in the caller's environment once again.

Sure, macros have other specifics: they
are not functions, they are not evaluated but
expanded, etc., but those two things interest me
now: code generation and evaluation in the caller's
environment.

Now, what if one wants a code transformation, but
macros are not suitable (1) because they are
not first class, (2) because the code transformation
is not known at macro expansion time, or, especially
interesting, (3) because they are just too big
to be used for every small code transformation
one can imagine? People do not define functions
for every expression they evaluate, so why should
they define macros for every code transformation
they do?!

OK, in that case, code transformations can be done
without macros, using normal Lisp expressions,
possibly functions - but the resulting code has to
be EVALUATED ONCE MORE in the caller's environment,
explicitly. And Newlisp has exactly such an eval,
an eval able to access local variables.

More specifically, let's say one needs a macro m
which has to be determined at runtime.
In CL, such a macro cannot be defined (without
eval). But it can be replaced with a function f
that does exactly the same code transformation.
So, instead of the impossible

(m expr1 ... exprn)

it is possible to write

(eval (f 'expr1 ... 'exprn))

The functional version works, because functions
are first-class citizens. The macro version
doesn't. But an eval able to access local
variables is needed.
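
A minimal Common Lisp sketch of that pattern (WRAP-WITH-ECHO is a
made-up name; note that CL's eval uses a null lexical environment,
so this sketch deliberately involves no local variables):

(defun wrap-with-echo (expr)
  ;; an ordinary, first-class function doing a code transformation
  `(progn (format t "evaluating ~s~%" ',expr) ,expr))

;; (eval (wrap-with-echo '(+ 1 2)))
;; prints "evaluating (+ 1 2)" and returns 3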

If one takes a look at Newlisp code, it is
full of evals. Some of these evals could
be replaced with macros - but many of those
would be ultra-small "define once - use once"
macros; it is just more trouble to define a
macro than to apply eval directly to the
result of some expression. Some evals couldn't
be replaced with macros at all.


Example for dynamic scope:
--------------------------
Define IF as function.

(set 'IF (lambda()
           (eval
             ((args)
              (find (true? (eval (first (args))))
                    '(* true nil))))))

(let ((x 2)(y 3))
  (IF '(< x y)
      '(println x " is less than " y)
      '(println x " is not less than " y))

  (IF '(> x y)
      '(println x " is greater than " y)
      '(println x " is not greater than " y)))

It works in Newlisp.

This particular definition of IF is complicated,
because I did it as a game - IF is not only a function,
it is also defined without built-in operators
like IF and COND, and also without using any variable.

Many other - less extravagant - definitions are possible.

More generally, whenever one can define a macro,
one can define an equivalent function as well.

Example for first-class macros
--------------------------------------------
An experienced Lisper has probably noticed that he sometimes
writes macros only because it is impossible to write an
equivalent function. Also, that he sometimes writes
functions although macros would be syntactically better,
because he needs first-class citizens.

In Newlisp, macros are fexprs: whenever one can write a
function, one can write a macro that does the same
and vice versa. Both have the advantages and
disadvantages resulting from their syntax and semantics.
Because of that, it is possible to define two functions:

function-to-macro
macro-to-function

For example, after

(set 'for-function (macro-to-function for))
(set 'append-macro (function-to-macro append))

are evaluated, the expressions

(for-function '(i 1 10) '(println i))
(append-macro (1 2 3) (4 5 6))

are valid and behave just like one might expect.

OK?


Pascal Costanza

unread,
Jan 20, 2009, 4:36:56 AM1/20/09
to
Matthew D Swank wrote:
> On Mon, 19 Jan 2009 10:29:18 -0800, jos...@corporate-world.lisp.de wrote:
> ...
>
>> * all built-in function (minus primitive type specialized functions)
>> functions should be generic functions
>>
>> etc etc.
>>
>
> Should funcall _and_ apply both be generic functions?

If a CL implementation supports the CLOS MOP, that's already the case.


Pascal

--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/

Matthias Buelow

unread,
Jan 20, 2009, 7:51:53 AM1/20/09
to
jos...@corporate-world.lisp.de wrote:

> * CLOS should be wider used (conditions, pathnames, streams, i/o,
> readtables, ...)
> * get rid of structures, provide a simple DEFCLASS* macro,
> make SLOT-VALUE as fast as structure access

> * all built-in function (minus primitive type specialized functions)
> functions
> should be generic functions

Sounds very C++-ish. I don't think it's a good idea to OO-ize the basic
language. That's too low an abstraction layer.
Matthias Buelow

unread,
Jan 20, 2009, 8:00:39 AM1/20/09
to
Majorinc Kazimir wrote:

> Experienced Lisper probably noted that he sometimes
> write macros only because it is impossible to write
> equivalent function.

I thought that was the only reason to use macros.

> Also, that he sometimes write
> functions although macros would be syntactically better,

When would that be the case?

> In Newlisp, macros are fexprs, whenever one can write
> the function, he can write macro that does the same
> and vice versa.

Fish<->bicycle.

jos...@corporate-world.lisp.de

unread,
Jan 20, 2009, 8:57:50 AM1/20/09
to

No, it's more like the language ZetaLisp from which CL was mostly
derived by simplifying. ZetaLisp and the software written
in it were heavily object-oriented, using Flavors. Common Lisp
got designed in version 1 without an object system - just
because there was not a single one that was agreed on
at that time and there were several different ones researched/
implemented.
Some were using Flavors. Xerox was using LOOPS. LMI
was using Object Lisp. HP was proposing Common Objects,
Intellicorp used Units, Coral was using Object Lisp, Germans
had some other proposals, there were systems in France, ...

So we got CLtL1 without an object system, though almost everyone was
already using some quite heavily.

Then CLOS got developed and added to CL. CLOS was designed in such a
way that many projects could move their code to CLOS by extending
CLOS with the necessary facilities.

But unfortunately Common Lisp the language was not redesigned to
a state that would represent 'the reality' - meaning all
major implementations of CL and the languages that CL
was mostly based on, were using objects (Flavors, LOOPS,
Object Lisp, Common Objects, Units, New Flavors, CommonLOOPS, ...).
CLtL1 was mostly an accident. CLtL1 is a dent in time and space.
Unfortunately it shaped ANSI CL too much.

See this message from net.lang.lisp, 1985:
http://groups.google.com/group/net.lang.lisp/msg/73eab605311fdba2

Dylan then simplified CL by using a CLOS
derivative everywhere possible. Most CL implementations have
CLOS-based implementations of important parts - unfortunately
ANSI CL only specifies a non CLOS interface. So I can't subclass
stream in ANSI CL. Most CL implementations use something like
gray streams. Why should I use READ-CHAR when it calls
a generic function anyway? I could either get rid of READ-CHAR
and use the generic function STREAM-READ-CHAR directly

See:
http://www.sbcl.org/manual/Character-input-stream-methods.html#Character-input-stream-methods

or have READ-CHAR as the generic function.

There is a layer of interface that can be removed without any loss
of functionality. It would make the language more extensible - in a
way that most implementations provide anyway. Implementations
provide extensible streams and applications are using them. Today.
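
A minimal sketch of what that looks like in practice, assuming
SBCL's SB-GRAY package (the class and generic-function names below
are SBCL-specific, not ANSI CL):

(defclass upcasing-stream (sb-gray:fundamental-character-input-stream)
  ((source :initarg :source :reader source)))

(defmethod sb-gray:stream-read-char ((stream upcasing-stream))
  ;; READ-CHAR on this stream ends up calling this method.
  (let ((ch (read-char (source stream) nil :eof)))
    (if (eq ch :eof) :eof (char-upcase ch))))

;; (read-char (make-instance 'upcasing-stream
;;                           :source (make-string-input-stream "abc")))
;; => #\A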

DESCRIBE calls DESCRIBE-OBJECT. Just get rid of DESCRIBE and replace
it with the generic function DESCRIBE-OBJECT renamed.

We have efficient maths in CL so it should possibly not use CLOS,
but we don't have efficient streams anyway, since they are implemented
using CLOS (or with CLOS tacked on some internal implementation
strategy).

The design needs to be understood based on its history. There
are opportunities to simplify CL AND adding flexibility and power.
Many applications already are developed with some kind
of extended CL (non-portable, or via portability libs).


Majorinc Kazimir

unread,
Jan 20, 2009, 9:17:21 AM1/20/09
to
Matthias Buelow wrote:

> Majorinc Kazimir wrote:
>
>> Experienced Lisper probably noted that he sometimes
>> write macros only because it is impossible to write
>> equivalent function.
>
> I thought that was the only reason to use macros.

Sometimes there are other reasons: prettier
syntax OR some speed-up due to macroexpansion. For
example, if you note that you use function calls only
with constant arguments, (f 3), (f 7), but never
(f x), it *could* be useful to rewrite f as a macro. Or
(g '(+ x 1)), (g '(- x 1)) but never (g (+ x 2)). Not
always, but typically yes.

OK?
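
A minimal Common Lisp sketch of the constant-argument case (the
macro is called f as above; F-FN is a made-up name for the function
version):

(defun f-fn (n)                ; ordinary function, computed on every call
  (loop for i from 1 to n sum (* i i)))

(defmacro f (n)                ; same computation, folded at expansion time
  (loop for i from 1 to n sum (* i i)))

;; (f 3)    expands to the literal 14
;; (f-fn 3) computes 14 at run time; (f-fn x) works, (f x) does not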

>
>> Also, that he sometimes write
>> functions although macros would be syntactically better,
>
> When would that be the case?

Macros are typically syntactically better, because
there are no extra apostrophes. Typically, macros cannot
be used instead of functions in CL. Sometimes they can,
like g from the previous example.

Mark Wooding

unread,
Jan 20, 2009, 11:15:00 AM1/20/09
to
Majorinc Kazimir <fa...@email.address> writes:

> So, CL chose an option that leads to speed, and Newlisp chose the
> option that leads to expressive power. That is exactly what I said.

Err..., no, MACLISP chose speed and Common Lisp followed.

I've got an Interlisp manual from 1973 in front of me, and it looks as
if Interlisp would happily compile functions which called fexprs, though
you had to warn it in advance which symbols denoted fexprs. Since many
fexprs actually want only some of their arguments unevaluated, they'll
call EVAL on the others, and drop back into the interpreter.
Fortunately the compiler had macros as well, so you could write a macro
version of your fexpr for the compiler to chew on. A 1985 manual for
Interlisp-D explains that the interpreter also honours macros, though
the behaviour differs if a symbol has both function and macro
definitions: the interpreter prefers the symbol's function cell while
the compiler prefers the macro.

I think this demonstrates that the move from fexprs to macros wasn't
just limited to the MACLISP community, and that Common Lisp's loss of
fexprs was probably viewed as leaving behind a historical misstep rather
than sacrificing critical expressiveness. I stand ready to be corrected
on this by people who were actually there, though.

Besides, briefly comparing the lengths of the trivial programs from your
`benchmark' -- discussion of which you have avoided, I note -- tells me
everything I need to know about the comparative expressive power of
Common Lisp (or even Emacs Lisp) and newLISP.

Besides, you brought up the issue of speed yourself, with your bogus
benchmark. If you're not interested in speed, why did you bring the
question up in the first place? Note also that the newLISP website
claims `fast' as an important feature: `Friendly --- fast --- small' is
the entire first bullet point. `Slower than Emacs Lisp' doesn't sound
quite so good, does it?

(Note the correct capitalization of `newLISP', by the way. It's strange
that you advocate it, but type its name incorrectly throughout.)

[scoping]

> So, CL chose the path that leads to what? Safety? Newlisp
> chose the path that leads to expressive power.

CL followed Scheme down the path to comprehensible programs which can be
built out of modules understood in isolation (lexical scope allows the
reader to prove to himself that various kinds of side effects are
impossible -- even, by CL rules, if the code invokes EVAL), and which
make use of closures.

> The funarg is however, not the problem is Newlisp practice.

Presumably because newLISP has, for some reason, shied away from the
expressive power of proper functional abstraction.

> So, where are we now? You gave up from CL and you are trying to
> compare Newlisp with Emacs Lisp. Then go back to fexprs vs macros,
> Newlisp has fexprs, Emacs has macros.

I prefer Common Lisp to Emacs Lisp, because CL has proper closures,
lexical scoping, bignums, keyword arguments, CLOS, and many other good
things. Emacs Lisp is stuck in a timewarp, because it follows the
traditional MACLISP interpreter semantics (rather than the compiler),
but it's tolerable. If you like those traditional semantics, then I've
shown that Emacs Lisp is quite considerably faster than newLISP -- at
running your own benchmark! -- leaving newLISP without an obvious niche.
After all, it doesn't even have a built-in text editor.

Hmmm. Extensive use of eval. Dynamic scoping. Code as data.
Non-sharing of lists. Don't care too much about performance. I've got
the very thing! comp.lang.tcl is right over there.

-- [mdw]

Matthias Buelow

unread,
Jan 20, 2009, 11:58:29 AM1/20/09
to
Majorinc Kazimir wrote:

> For
> example, if you note that you use function calls only
> with constant arguments, (f 3), (f 7), but never
> (f x) it *could* be useful to rewrite f as macro.

Sorry, I don't understand this example.

> Or
> (g '(+ x 1)), (g '(- x 1)) but never (g (+ x 2)). Not
> always, but typically yes.
>
> OK?

Nope.

> Macros are typically syntactically better, because
> there are no extra apostrophes.

What's "syntactically better"? Generally, if I see something like (g (+
x 2)) I expect that g is called with the result of evaluating (+ x 2)
and not getting the unevaluated list. Macros should be used sparingly
because of considerations like these.

> Typically, macros cannot
> be used instead of functions in CL. Sometimes they can, g
> from previous example.

(lambda (x) (your-macro x))
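
For instance, a minimal sketch (DOUBLE is a made-up macro): the
macro itself is not a first-class value, but the wrapping lambda
is, so it can be passed to higher-order functions.

(defmacro double (x) `(* 2 ,x))

(mapcar (lambda (x) (double x)) '(1 2 3))   ; => (2 4 6)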

Mark Wooding

unread,
Jan 20, 2009, 12:12:10 PM1/20/09
to
Majorinc Kazimir <fa...@email.address> writes:

> Now, what if one wants code transformation, but macros are not
> suitable (1) because they are not the first class, (2) code
> transformation is not known in macro expansion time, or especially
> interesting (3) because they are just too big to be used for every
> small code transformation one can imagine?

In what possible sense can a macro be `too big'?

> People do not define functions for every expression they evaluate, why
> should they define macros for every code transformation they do?!

Err... Is there some problem I don't understand with MACROLET?

> More specifically, lets say one need macro m which has to be
> determined during runtime.

[etc.]

Your `examples' presume that one is in the situation where one needs the
feature in question: it's a circular argument. I'd like some actual
/examples/ -- practical, and concrete -- which demonstrate /why/ one
might find oneself in this situation in the first place.

> If one takes a look on Newlisp code, it is full of evals. Some of
> these evals could be replaced with macros - but many of those would be
> ultrasmall "define once - use once" macros, it is just more trouble
> defining macros than applying eval directly on the result of some
> expression.

Common Lisp gives you two choices here. Either MACROLET, which is ideal
for one-shot macros (which replace boring repetitive code), or #.(...)
to hack the right code together at read-time.
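
A minimal sketch of the one-shot MACROLET case (the POINT struct
and the SCALE macro are made up for the example):

(defstruct point x y)

(defun scale-point (point factor)
  ;; SCALE exists only inside this one function body.
  (macrolet ((scale (slot)
               `(setf (,slot point) (* (,slot point) factor))))
    (scale point-x)
    (scale point-y)
    point))

;; (scale-point (make-point :x 1 :y 2) 10) => #S(POINT :X 10 :Y 20)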

> Some evals couldn't be replaced with macros at all.

Again, examples please, not circular arguments or hand-wavy abstract
nonsense.

> Define IF as function.

But why would one want to?

-- [mdw]

Xah Lee

unread,
Jan 20, 2009, 12:36:47 PM1/20/09
to

Very well put.

Xah
http://xahlee.org/


Mark Wooding

unread,
Jan 20, 2009, 12:50:19 PM1/20/09
to
Majorinc Kazimir <fa...@email.address> writes:

> Matthias Buelow wrote:
>> Majorinc Kazimir wrote:
>>
>>> Experienced Lisper probably noted that he sometimes write macros
>>> only because it is impossible to write equivalent function.
>>
>> I thought that was the only reason to use macros.

The reason to use a macro is to make use of syntactic abstraction.
Maybe that's a restatement of what Majorinc Kazimir said, but I just
can't tell.

> Sometimes there are other reasons, because of prettier syntax

Isn't this covered by `impossible to write equivalent function'?

> OR some speed up due to macroexpansion.

Use DECLAIM INLINE for this.
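
A minimal sketch (SQUARE is made up): the function stays a
first-class function, but calls may be compiled inline, which
covers the "macro for speed" case.

(declaim (inline square))
(defun square (x) (* x x))

(defun sum-of-squares (a b)
  (+ (square a) (square b)))   ; SQUARE can be expanded inline here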

-- [mdw]

Alexander Schreiber

unread,
Jan 20, 2009, 3:24:25 PM1/20/09
to
Pascal J. Bourguignon <p...@informatimago.com> wrote:
> Raffael Cavallaro <raffaelc...@pas.espam.s.il.vous.plait.mac.com> writes:
>
>> On 2009-01-18 18:14:08 -0500, andrew.n...@gmail.com said:
>>
>>> The largest positive integer you can have in newLISP is 9 223 372 036
>>> 854 775 807, and the largest negative integer is -9 223 372 036 854
>>> 775 808.
>>
>> An no one *ever* needs integers larger than that so common lisp's
>> automatic overflow detection and bignums are pointless, right? For
>> example, the thousandth fibonacci number is much smaller than newlisps
>> integers, right?
>>
>> Ooops...
>>
>> * (fib 1000)
>>
>> 43466557686937456435688527675040625802564660517371780402481729089536555417949051890403879840079255169295922593080322634775209689623239873322471161642996440906533187938298969649928516003704476137795166849228875
>> *
>>
>> they're
>>
>> called "bignums" for a reason ;^)
>
> Who needs a number that big? Even the Zimbabwe needs only 15 digits.

For instance people who need to do very, very precise calculations?
Finance is not the only candidate for that. And I've seen enough people
using float for that *shudder*

Regards,
Alex.
--
"Opportunity is missed by most people because it is dressed in overalls and
looks like work." -- Thomas A. Edison

Alex Mizrahi

unread,
Jan 21, 2009, 12:16:54 PM1/21/09
to
MK> It is easy to get around it, really.
MK> My favorite way is - using symbols.

MK> (set 'x (list 1 'x))
MK> (set 'y (list 1 'y))
MK> (set 'z (list 3 'z))

you mean

(set 'x (list 1 'y))

maybe?

this is pretty ugly, if you ask me.


Alex Mizrahi

unread,
Jan 21, 2009, 12:34:44 PM1/21/09
to
sly> obsolete Unix computers (for computer professionals)... Common Lisp
sly> can do the work efficiently, but is too complex for part-time
sly> programmers.

what is complex about Common Lisp and how exactly does newLISP fix this?
i find CL to be quite simple in its core. there are lots of functions in
the standard library, some of them being weird, but you do not need to
learn them all.

on the other hand, newLISP is quite complicated due to ORO restrictions,
as one needs workarounds like symbols and contexts to implement even
basic stuff. and this stuff is actually pretty complex and not good for
the "part-time" programmers you worry about.

here's an example from newLISP documentation

-----
To avoid passing data objects by value copy, they can be enclosed in context
objects and passed by reference. The following code snippet shows reference
passing using a namespace default functor:

(define (modify data value)
(push value data))

(set 'reflist:reflist '(b c d e f g))

(modify reflist 'a)

reflist:reflist => (a b c d e f g)
-----
wtf is "namespace default functor" and why one has to use them to implement
a relatively simple function?

it looks like newLISP features were structured around its interpreter
implementation -- whatever
was easier to implement was included. this might be simplier for ones who
write interpreters,
but it is not good for ones who are trying to learn it.

you should be ashamed for supporting this shit, i suggest you doing a
sepukku
to defend your reputation.


jos...@corporate-world.lisp.de

unread,
Jan 21, 2009, 12:50:31 PM1/21/09
to
On 21 Jan., 18:34, "Alex Mizrahi" <udode...@users.sourceforge.net>
wrote:

ROTFL

Majorinc Kazimir

unread,
Jan 21, 2009, 10:01:41 PM1/21/09
to
Alex Mizrahi wrote:
>
> what is complex about Common Lisp and how exactly does newLISP fix this?
> i find CL being quite simple in it's core. there are lots of functions in
> standard
> library, some of them being weird, but you do not need to learn them all.

It is code=data, eval and macros. CL/Scheme macros are
complex. Some CL users repeat all the time "if you use
eval, that means you're a n00b that didn't l3arn
h0w to use macros!"

Newlisp users use eval if it solves their problem. Their
eval understands local variables, so it is far more
frequently used.

And what if one WANTS macros? Newlisp macros are easier to
understand, more powerful (CL-ers claim there is no use
for the additional power) and easier to write in practice.

> it looks like newLISP features were structured around
> its interpreter implementation -- whatever
> was easier to implement was included. this might be
> simpler for the ones who write interpreters,
> but it is not good for the ones who are trying to learn it.

The method might be true, but not the result. This is why:

CL tries to serve two masters: to give programmers almost
as much power as interpreted Lisps, at almost as much
speed as Pascal (if one does not use eval). It is complicated
- complicated to design, implement, and use.

Newlisp is designed to be simple. Like Perl, Python, Ruby,
and Basic, it doesn't try to compete with C. That allowed
significant simplifications, and some of these also increase
expressive power. But Newlisp is primarily advertised as
simple to learn and use and good enough for many
'scripting' purposes.

>
> you should be ashamed for supporting this shit, i suggest you doing a
> sepukku
> to defend your reputation.
>

Yeah right ...

In many areas the CL and Scheme standards and implementations
are equal to or better than Newlisp. In many areas, they are
just different. But, if one puts *easy to learn and use*
and powerful *code=data* at the top of his priorities, Newlisp
is an excellent choice.


Raffael Cavallaro

unread,
Jan 22, 2009, 12:47:46 AM1/22/09
to
On 2009-01-21 22:01:41 -0500, Majorinc Kazimir <fa...@email.address> said:

> Newlisp is designed to be simple

...minded?

;^)
--
Raffael Cavallaro, Ph.D.

jos...@corporate-world.lisp.de

unread,
Jan 22, 2009, 1:53:02 AM1/22/09
to
On Jan 22, 4:01 am, Majorinc Kazimir <fa...@email.address> wrote:
> Alex Mizrahi wrote:
>
> > what is complex about Common Lisp and how exactly does newLISP fix this?
> > i find CL being quite simple in it's core. there are lots of functions in
> > standard
> > library, some of them being weird, but you do not need to learn them all.
>
> It is code=data, eval and macros. CL/Scheme macros are
> complex. Some CL users repeat all the time "if you use
> eval, that means that you're n00b that didn't l3arn3d
> h0w to use macros!"

One says that EVAL is often just not needed and that it makes the code
slower than it needs to be. In many cases it is better
to generate code at compile-time and have it compiled than generate
code at runtime. Using EVAL where not needed is
like driving and not shifting into a higher gear.
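
A minimal sketch of the difference, with made-up names: both build
the same adder, but the macro version is generated and compiled at
compile time, while the EVAL version builds and evaluates a fresh
form at run time.

(defmacro defadder (name n)
  `(defun ,name (x) (+ x ,n)))   ; code generated at compile time

(defadder add-5 5)

(defun make-adder-via-eval (n)
  (eval `(lambda (x) (+ x ,n)))) ; code generated at run time

;; (add-5 10)                           => 15
;; (funcall (make-adder-via-eval 5) 10) => 15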

> Newlisp users use eval if it solves their problem.

Which problem? Any practical problem? What kind
of problems are solved by Newlisp users this
way?

> Their
> eval understands local variables, so it is far more
> frequently used.

Stuff like that has been removed earlier in the history of Lisp,
because it makes the understanding of code harder - for
the human and the compiler.

Reason:

In a lexical Lisp I can see what code uses (reads/writes/binds) the
variables.
In a Lisp that provides unrestricted EVAL with access
to local variables, many pieces of code can set the local variables
and it is
not obvious when and where it happens. It provides more
flexibility at the cost of rendering any non-trivial
amount of code much harder to understand.

> And what if one WANTS macros? Newlisp macros are easier to
> understand, more powerful (CL-ers claim no use
> of additional power) and easier to write in practice.

Because they are easier to use? Do you have a more specific reason?

For example I can take a piece of code in Common Lisp
and have it macro expanded and look at it. I can trace/debug
the macro expansion in isolation. Sounds easier
than a FEXPR which at runtime does code generation, where
code generation is mixed with execution.

>
> > it looks like newLISP features were structured around
> > its interpreter implementation -- whatever was easier
> > to implement was included. this might be simpler for
> > the ones who write interpreters, but it is not good
> > for the ones who are trying to learn it.
>
> Method might be true, but not the result. This is why:
>
> CL tries to serve two masters, to give programmers almost
> as much power as interpreted Lisps, at almost as much
> speed as Pascal (if he does not use eval.) It is complicated
>   - complicated to design, implement, use.

Common Lisp provides support for interpretation and compilation.
What it did was remove the semantic differences
between them. For example, before Common Lisp it was often the case
that the Lisp implementation was using dynamic binding
in the interpreter and lexical binding for compiled code.
Common Lisp demands that the interpreter also uses
lexical binding. It makes Lisp follow a simpler execution
model (more in line with lambda calculus).

> Newlisp is designed to be simple.

I was using some Lisps like early Scheme implementations that
look quite a bit simpler to me.

> Like Perl, Python, Ruby,
> Basic, it doesn't try to compete with C. It allowed
> significant simplifications, and some of these also increase
> expressive power. But Newlisp is primarily advertised as
> simple to learn and use and good enough for many
> 'scripting' purposes."

I think a simple Lisp in the Scheme tradition is just
as easy to use for scripting and better understandable.
Actually a former co-worker used scsh (the scheme shell)
for a while and then moved on to use GNU CLISP for
scripting purposes and liked CLISP a lot.

> > you should be ashamed for supporting this shit, i suggest you doing a
> > sepukku
> > to defend your reputation.
>
> Yeah right ...
>
> In many areas CL and Scheme standards and implementations
> are equal or better than Newlisp. In many areas, they are
> just different. But, if one puts *easy to learn and use*
> and powerful *code=data* on top of his priorities, Newlisp
> is excelent choice.

I would think that real newbies are much better off starting with
Scheme (R5RS), since it has clearer semantics and is
more powerful at the same time.

Kaz Kylheku

unread,
Jan 22, 2009, 2:11:08 AM1/22/09
to
On 2009-01-22, Majorinc Kazimir <fa...@email.address> wrote:
> Alex Mizrahi wrote:
>>
>> what is complex about Common Lisp and how exactly does newLISP fix this?
>> i find CL being quite simple in it's core. there are lots of functions in
>> standard
>> library, some of them being weird, but you do not need to learn them all.
>
> It is code=data, eval and macros. CL/Scheme macros are
> complex. Some CL users repeat all the time "if you use
> eval, that means that you're n00b that didn't l3arn3d
> h0w to use macros!"

You've evidently chosen to suffer a language just because in it, eval behaves
in a particular way. No matter what anyone says, you just keep harping about
``code=data'' and ``eval that understands locals''.

You don't seem to realize that you can write an interpreter in Common Lisp for
a dialect which has the eval that you want. Writing a Lisp interpreter
in Lisp is a student exercise. You have things like reading, printing,
representation of symbols and memory management taken care of already.

Bad technology like some crummy evaluator can easily be hacked with the good
technology in a few evenings. In a week, you could cobble together something
that blows newLISP out of the water.

And, lastly, note that Lisp has dynamic variables. If you want eval to
understand local variables, use those!
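
A minimal sketch, using nothing newLISP-specific: a special
(dynamic) variable is visible to EVAL, which is the effect newLISP
gets by making all variables dynamic.

(defvar *x*)                 ; proclaim *X* special

(let ((*x* 42))
  (eval '(* *x* 2)))         ; => 84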

We could say that just like Common Lisp, newLISP has an eval which doesn't
give you access to lexical variables. But Common Lisp /has/ lexical variables.

newLISP's access to local variables is easy to implement because they are
dynamic.

If newLISP had real lexical variables, the author of newLISP would probably
work out some excuse why his eval cannot access them. Put a newbie
perspective on it, like it's for ease of use or something. Or he'd point
to real Lisps and claim that they don't do it either.

Slobodan Blazeski

unread,
Jan 22, 2009, 3:57:23 AM1/22/09
to
On Jan 22, 6:47 am, Raffael Cavallaro

ROFL. Instant classic.
thank you
bobi

Tamas K Papp

unread,
Jan 22, 2009, 7:35:15 AM1/22/09
to
On Thu, 22 Jan 2009 04:01:41 +0100, Majorinc Kazimir wrote:

> It is code=data, eval and macros. CL/Scheme macros are complex. Some CL
> users repeat all the time "if you use eval, that means that you're n00b
> that didn't l3arn3d h0w to use macros!"

More abstract handwaving. Still no code. Must be a cultural trait
for the newLISP community.

Tamas

Majorinc Kazimir

unread,
Jan 22, 2009, 10:49:12 AM1/22/09
to
jos...@corporate-world.lisp.de wrote:

>
>> Newlisp users use eval if it solves their problem.
>
> Which problem? Any practical problem? What kind
> of problems are solved by Newlisp users this
> way?
>

Any problem - eval is widely used in Newlisp.
Check my blog for about one hundred
uses of eval.

>
> In a lexical Lisp I can see what code uses (reads/writes/binds) the
> variables.
> In a Lisp that provides unrestricted EVAL with access
> to local variables, many pieces of code can set the local variables
> and it is
> not obvious when and where it happens. It provides more
> flexibility at the cost of rendering any non-trivial
> amount of code to be much harder to understand.

If unrestricted eval exists, it doesn't mean you
must use it. If you do not like it, don't use it.
Lisp is not supposed to be a police-state language.

>
>> And what if one WANTS macros? Newlisp macros are easier to
>> understand, more powerful (CL-ers claim no use
>> of additional power) and easier to write in practice.
>
> Because they are easier to use? Do you have a more specific reason?

Yes. CL macros construct CODE that will be inserted
into the original source and evaluated at runtime. Generally,
it is harder to write CL macros than functions; it is
one more level of abstraction. Not always, but generally.

Newlisp macros are just functions that do not evaluate
their arguments. They do not necessarily construct any code,
they just evaluate at runtime. Newlisp macros are no harder
to write than functions.

Here is an example of a Newlisp macro:

(set 'at-least-two
     (lambda-macro()
       (let ((c 0)
             (L $args))
         (dolist (i L (= c 2))
           (println i)
           (if (eval i)
               (inc c)))
         (>= c 2))))

(println (at-least-two (= 1 1)
                       (= 3 2)
                       (= 2 2)
                       (= 4 4)))

(exit)

at-least-two returns true if at least two arguments
evaluate to true, and nil otherwise. It has to be a
macro because it is "lazy." As you can see, it
really looks like a function.

Note: it is not the most terse or "intelligent"
macro I can do here, it is written to demonstrate
fundamental difference between CL and Newlisp.

One can, however, write CL-style macros in
Newlisp, if that is what one wants.

> For example I can take a piece of code in Common Lisp
> and have it macro expanded and look at it. I can trace/debug
> the macro expansion in isolation. Sounds easier
> than a FEXPR which at runtime does code generation, where
> code generation is mixed with execution.

You can test it like any other function. No special
treatment required. I have unit tests for my macros
and my functions, so each time I change a macro and
load the library, the tests report "Macro 27 is bad" if
I did something wrong.

>
> Common Lisp provides support for interpretation and compilation.
> What it did was removing the semantic differences
> between those. For example before Common Lisp it was often the case
> that the Lisp implementation was using dynamic binding
> in the interpreter and lexical binding for compiled code.

True, it was strange idea.

>
> I think a simple Lisp in the Scheme tradition is just
> as easy to use for scripting and better understandable.
> Actually a former co-worker used scsh (the scheme shell)
> for a while and then moved on to use GNU CLISP for
> scripting purposes and liked CLISP a lot.

I do not want to put anyone down. CLISP is a nice,
mature implementation, with a lot of invested work,
without bugs, good bignums, etc. It was my
first Lisp. If the authors read this: thanks, guys.

Nevertheless, macros are the problem.

These are the four rules that, to the best of my knowledge,
completely describe the syntax and semantics of Newlisp macros:
--------------------------------------
1) How do I define macros?
Just like functions, but use define-macro instead of define.

2) How do I define anonymous macros?
Just like functions, but use the keyword
lambda-macro instead of lambda.

3) And how does such a macro work?
(m expr1 ...exprn)=(f expr1 ... exprn)

where m and f are a macro and a function defined
in the same way, differing only in the keyword.

4) How do I test whether an object is a macro?
(macro? object).

-------------------------------------
If you try to explain CL macro syntax and semantics,
including the whole concept of macroexpansion time vs compile
time vs runtime and the concept of the non-first-class
citizen, and then macroexpand, macroexpand-1, macrolet,
macro-function and *macroexpand-hook*, you'll certainly need
much more space and time.

Finally, about ease of use - challenge me. There are
people who have used CL for 20 years, a whole community around
it, lots of scientific papers, etc. Lispers are proud of
macros writing macros writing macros, etc. Give me the
hardest, most sophisticated problem Lispers have solved
with a macro, and I'll solve it in Newlisp, so we can
discuss the solutions. As long as the problem is in the
technique, not in the lack of an algorithm.


Majorinc Kazimir

unread,
Jan 22, 2009, 11:17:52 AM1/22/09
to
Typo. It should be:

Alex Mizrahi

unread,
Jan 22, 2009, 11:44:28 AM1/22/09
to
??>> what is complex about Common Lisp and how exactly does newLISP fix
??>> this? i find CL being quite simple in it's core. there are lots of
??>> functions in standard library, some of them being weird, but you do
??>> not need to learn them all.

MK> It is code=data, eval and macros. CL/Scheme macros are
MK> complex.

eval and macros are rarely used even by professional users
-- it is not something you do each day. if we are speaking about
non-professional ones, most of them do not need it at all,
and for the ones who do need it, eval is there.

MK> Some CL users repeat all the time "if you use eval, that means that
MK> you're n00b that didn't l3arn3d h0w to use macros!"

and what is wrong with it? eval is inferior for most purposes, so if
you want to write good code, do not use it. still, if we are speaking
about "part-time programmers", they do not really need to write
good code, only code that works.

MK> Newlisp users use eval if it solves their problem.

and CL users can use eval to solve their problems, so how
is it different? do they need a blessing to use eval?

MK> Their eval understands local variables,

by making all variables special? this is _very_ bad, as it
breaks closures. closures are much more important
than eval, as they allow solving most problems that would
otherwise need eval in a much more elegant and safe way.
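
a minimal sketch (make-counter is a made-up example): the closure
captures COUNT directly, where the eval-based style would have to
build and re-evaluate an expression referring to it.

(defun make-counter ()
  (let ((count 0))
    (lambda () (incf count))))

;; (let ((c (make-counter)))
;;   (funcall c)
;;   (funcall c))   ; => 2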

MK> so it is far more frequently used.

if local variables in eval are such a big deal, is it that hard
to declare the variables as special with defvar?

or, if you insist on "local variables", we can fix it with a simple macro
like this:

(defmacro dlet ((a b) &body body)
  `(let ((,a ,b))
     (declare (special ,a))
     ,@body))

CL-USER> (dlet (x 2)
           (dlet (y 3)
             (eval '(+ x y))))
5


huh, eval sees local variables now! will this macro make CL easier to use?

MK> Method might be true, but not the result.

are you sure you're not biased about the result? if you know the language
inside-out, it might seem simple and logical. but it is not like that to
external observers. if i compare newLISP to Scheme, i do not have a reason
to be biased toward either of these languages, as i do not use either of
them. yet, Scheme appears well-designed to me, while newLISP appears to
be weird.

and i've seen lots of examples like this. JavaScript and PHP are similar in
their role, style and heavy use of associative arrays, but they are very
different in their nature -- JavaScript was mostly designed as a language,
and PHP was designed as an interpreter (whatever was easier to code for
Rasmus Lerdorf). and the results are different -- JS is a pretty nice and
solid language (pretty similar to Lisp/Scheme, by the way, in everything
except syntax), while PHP is weird. JS is evolving by extension, PHP changes
all the time (in version 5, for example, they totally changed object
assignment/identity semantics, because the initial semantics were weird.)

by the way, i find PHP and newLISP similar in many ways -- both are
interpreter-centric, both do not support closures but some ad-hoc weirdness
instead (a new version of PHP supports closures in some weird way, though),
both have weird assignment and reference semantics and a half-baked memory
management model.

MK> This is why:

MK> CL tries to serve two masters, to give programmers almost
MK> as much power as interpreted Lisps, at almost as much
MK> speed as Pascal (if he does not use eval.) It is complicated
MK> - complicated to design, implement, use.

if you think that CL is complicated in order to make it fast, and that
eval is deprecated for the same reason, you're wrong. it is simply
cleaner to do it with macros and closures, and, incidentally, it is also
faster. Common Lisp is not complicated in its core. there are additional
features -- such as type declarations -- that make it quite complicated,
but these features are optional. (for example, i'm not really good at
type declarations even though i consider myself a professional CL
programmer).

MK> Newlisp is designed to be simple.

it does not look like that. if you want to make it simple, remove all the
shit like ORO, contexts and "default functors". that makes the
_implementation_ simple, not the language!

a good example of a language which is really simple is JavaScript -- it
has very clean semantics with almost no questionable parts, everything
serves its purpose. and it is very easy to learn, lots of people have
mastered it without any problems.

MK> it doesn't try to compete with C.

then why is ORO memory management important? i have the impression
that it is made that way to avoid GC overhead and its unpredictability.
if you look at it from the programmer's point of view, GC is strictly
superior to ORO, as GC works totally transparently without imposing any
limitations.

if you worry about GC overhead, then it means that it is newLISP that
serves two masters. but it made the wrong compromises, it seems --
Common Lisp almost never sacrifices the programmer's convenience
for performance reasons, but newLISP did sacrifice transparency in
a most critical part.

MK> In many areas CL and Scheme standards and implementations
MK> are equal or better than Newlisp. In many areas, they are
MK> just different. But, if one puts *easy to learn and use*
MK> and powerful *code=data* on top of his priorities, Newlisp
MK> is excelent choice.

indeed, CL was not designed to be easy to learn. but Scheme was.
you haven't said anything about why newLISP is better than Scheme,
and i think that as a language Scheme is strictly superior to newLISP,
by leaps and bounds.


Matthias Buelow

unread,
Jan 22, 2009, 1:04:06 PM1/22/09
to
Majorinc Kazimir wrote:

> Any problem - eval is widely used in Newlisp.
> Check my blog for about one hundred of
> uses of eval.

Eval is like GOTO in other languages (or GO in CL) -- it exists, it is
useful in some situations but shouldn't be overused.

Kaz Kylheku

unread,
Jan 22, 2009, 2:48:33 PM1/22/09
to
On 2009-01-22, Majorinc Kazimir <fa...@email.address> wrote:
> jos...@corporate-world.lisp.de wrote:
>
>
>
>>
>>> Newlisp users use eval if it solves their problem.
>>
>> Which problem? Any practical problem? What kind
>> of problems are solved by Newlisp users this
>> way?
>>
>
> Any problem - eval is widely used in Newlisp.
> Check my blog for about one hundred of
> uses of eval.

Fair enough, why should you repost stuff here? Let's go to the blog.

Currently, the very first thing we see on your blog is some completely moronic
code that defines IF as a function.

;; Code from KM's blog:


(set 'IF (lambda()
           (eval
             ((args)
              (- 5 (length
                     (string
                       (true?
                         (eval (first (args)))))))))))

Here we see something completely laughable. The result of evaluating
the guard expression must first be normalized to a boolean using TRUE?

This is a useless stupidity that is eliminated in Lisp with the
concept of the generalized boolean.

Next, it's converted to a string, whose length is used to control evaluation!

Let me guess, the boolean produces either TRUE (symbol name four letters long)
or FALSE (five letters long).

Good grief! What is going on? Has the aristocracy of stupidity now been
democratized, so you have to actively run for election to take office as the
crowned prince of morons?

You've taken Lisp back to before the day when McCarthy invented IF.

Now the sane Lisp approach. No eval, no character processing of the printed
representation of atoms, no quoting of arguments to the resulting form:

;; for example, we have COND but not IF

(defmacro if (antecedent consequent &optional alternate)
  `(cond (,antecedent ,consequent)
         (t ,alternate)))

Shall we keep going?

Majorinc Kazimir

unread,
Jan 22, 2009, 3:49:57 PM1/22/09
to
Alex Mizrahi wrote:

>
> or, if you insist on "local variables", we can fix it with a simple macro
> like this:
>
> (defmacro dlet ((a b) &body body)
> `(let ((,a ,b))
> (declare (special ,a))
> ,@body))
>
> CL-USER> (dlet (x 2)
> (dlet (y 3)
> (eval '(+ x y))))
> 5
>
>
> huh, eval sees local variables now! will this macro make CL easier to use?
>

The macro loses first-class status, and the first definition
is only a bit heavy, but otherwise that's it. I must accept
90% of this point.

> MK> Method might be true, but not the result.
>

> if I compare newLISP to Scheme, i do not have reason to be biased toward
> any of these languages, as i do not use any of them. yet, Scheme appears
> well-designed to me, while newLISP appears to be weird.

But I switched from Scheme to Newlisp, so I should be anti-biased.
Eval, macros and mutable functions in Newlisp were the main reasons.

>
> by the way, i find PHP and newLISP similar in many ways -- both are
> interpreter-centric,
> both do not support closures but some ad-hoc weirdness instead (new version
> of PHP supports
> closures in some weird way, though), both have weird assignments and
> refereces semantics
> and half-baked memory management model.

In Newlisp, functions are lists and you can store any kind of data
in any way in them. So, if you need closures, you can have them.

>
> MK> This is why:
>
> MK> CL tries to serve two masters, to give programmers almost
> MK> as much power as interpreted Lisps, at almost as much
> MK> speed as Pascal (if he does not use eval.) It is complicated
> MK> - complicated to design, implement, use.
>
> if you think that CL is complicated to make it fast and eval is deprecated
> for same reason, you're wrong. it is just more clean way to do it with
> macros and closures, and, incidentally, it is also a faster way.

CL macros are not clean; they are not first-class
citizens, and one cannot even be sure whether they exist
or not. Now you see it, now you don't.

Closures are clean, but they are just an accidental
functional data type, objects turned inside out.
One can find dozens of such constructs floating around.

Eval is the essence of code = data; it just cannot
be different than it is.

Another advocate of eval:

http://blogs.oreilly.com/digitalmedia/2004/12/lisp-is-better-than-xml-but-wo.html

> Common Lisp
> is not complicated in its core. there are additional features -- such as
> type
> declarations -- that make it quite complicated, but these features are
> optional.
> (for example, i'm not really a good at type declarations despite i consider
> myself a professional CL programmer).
>
> MK> Newlisp is designed to be simple.
>
> it does not look like that. if you want to make it simple, remove all shit
> like ORO, contexts and "default functors". it makes _implementation_ simple,
> not the language!

I guess the designer was pragmatic about ORO, but what he did
is important. ORO is in between GC and manual memory management.

I like ORO because I usually feel GC is a heavy monster that does
something behind my back, and I do not trust it. ORO gives
more flexibility to me, and it appears that other users are
also satisfied. So, I think it was a step in a good direction.

ORO symbols are like generalized memory addresses. One cannot
store one object at two addresses. But one can store the address
of the object in 100 places. Once one realizes that, ORO is
natural.

> if you worry about GC overhead, then it means that it is newLISP who
> serves two masters. but it made wrong compromises, it seems --

Obviously it does. It cannot completely neglect speed.
However, speed has a much lower priority, so, if
we are not in a nitpicking mood, we should agree.


Kaz Kylheku

unread,
Jan 22, 2009, 4:18:42 PM1/22/09
to
On 2009-01-22, Majorinc Kazimir <fa...@email.address> wrote:
> Alex Mizrahi wrote:
>
>>
>> or, if you insist on "local variables", we can fix it with a simple macro
>> like this:
>>
>> (defmacro dlet ((a b) &body body)
>> `(let ((,a ,b))
>> (declare (special ,a))
>> ,@body))
>>
>> CL-USER> (dlet (x 2)
>> (dlet (y 3)
>> (eval '(+ x y))))
>> 5
>>
>>
>> huh, eval sees local variables now! will this macro make CL easier to use?
>>
>
> Macro loses first class, but the first definition,
> is only bit heavy, otherwise that's it.

The idiotic processing of a symbol as a string in your newLISP implementation
of IF on your blog is what is heavy!!!

/You/ are heavy.

(In some East European languages, the word for ``heavy'' is applied to someone
in a way that in English we say someone is ``dense'' or ``thick'').

A heavy bohunk has landed and the newsgroup is still shaking.

> I must accept 90% of this point.

Accepting 90% upfront is a good start, but you must work harder on accepting
the remaining 10%. It's okay, you have lots of time.

Fact is that newLISP has access in eval to local variables because they are
dynamic. In exactly the same way, Common Lisp also has access to dynamic
variables under eval.

newLISP eval doesn't in fact let you access lexical variables in eval, because
lexical variables do not exist in newLISP. Common Lisp doesn't allow it for a
different reason, namely that eval takes place in a null lexical environment.

(It is recognized that the semantics of evaluation depends on a choice of
environment and that other choices are possible in principle).

The proper way to extend eval is to give it an environment parameter, so it
is not restricted to the null lexical environment. Then you can correctly do
things like this:

(defun my-eval-wrapper (expr environment)
  (let ((lexical-variable 43))
    (extension:eval-with-environment expr environment)))

(let ((lexical-variable 42))
  (my-eval-wrapper 'lexical-variable (extension:current-lexical-env)))

See, now the call to EVAL-WITH-ENVIRONMENT takes place in our wrapper function
MY-EVAL-WRAPPER. The lexical environment there has three bindings: EXPR,
ENVIRONMENT and LEXICAL-VARIABLE. But this is a different LEXICAL-VARIABLE from
the one in the expression!!!

The eval will still resolve to the correct LEXICAL-VARIABLE, because the
environment is properly passed through the wrapper down into
EVAL-WITH-ENVIRONMENT. So the result will be that 42 is returned:

(let ((lexical-variable 42))
  (my-eval-wrapper 'lexical-variable (extension:current-lexical-env)))

-> 42 ;; not 43!

That's the right way to do it. Which is precisely why newLISP doesn't do it
this way. The design principle in newLISP is: ``the wrong way or the
highway''.

Tamas K Papp

unread,
Jan 22, 2009, 4:52:04 PM1/22/09
to
On Thu, 22 Jan 2009 21:49:57 +0100, Majorinc Kazimir wrote:

> In Newlisp, functions are lists and you can store any kind of data on
> any way in those. So, if you need closures, you can have it.

Ouch. You clearly don't grok what closures are.

And of course you can have closures in any language if you fight hard
for it, eg you can implement CL in C and then "have" closures in the
latter. Thinking about it, reimplementing CL in C is a much cleaner
way to get closures than struggling with the brain-dead newLISP.

The mere suggestion that one should emulate closures by fiddling with
lambda forms as lists reeks.

> CL macros are not clean, they are not the first class citizens, one
> cannot even be sure whether they exist or not. Now you see it, now you
> dont.

CL has this thing called a language specification. Have a look at it,
you are in for a surprise. Among other things, it will answer a lot
of questions you might have (or should have, given your level of
ignorance) about macros and how they work.

> Closures are clean, but they are just accidental functional data type,
> objects turned inside out. One can find dozens of such constructs
> floating around.

Closures are more fundamental and versatile than objects.

> Eval is essence of code = data, it just cannot be different than it is.

Repeating your meaningless mantra won't make it true.

> speed has much lower level of priority, so, if we are not in nitpicking
> mood, we should agree.

It is true that speed is less important than programming convenience,
elegance and readability but newLisp doesn't buy you any of the latter
so the point is moot. You are sacrificing speed for nothing.

Tamas

Rainer Joswig

unread,
Jan 22, 2009, 5:10:11 PM1/22/09
to
In article <gla4i3$pd3$1...@ss408.t-com.hr>,
Majorinc Kazimir <fa...@email.address> wrote:


I find this to be one of the examples of why fexprs are bad.

I want the code to look like:

(let ((counter 0))
  (block nil

    (when (= 1 1)
      (incf counter))
    (when (= counter 2)
      (return t))

    ... ; repeat above for each expression

    nil ; default value
    ))

I write a macro to generate the code:

(defmacro at-least-two (&rest expressions)
  (let ((c (gensym)))
    `(let ((,c 0))
       (block nil
         ,@(loop for expression in expressions
                 collect `(when ,expression (incf ,c))
                 collect `(when (= ,c 2) (return t)))
         nil))))


Now I can look at the code:

CL-USER 62 > (pprint (macroexpand-1 '(at-least-two (= 1 1)
                                                    (= 3 2)
                                                    (= 2 2)
                                                    (= 4 4))))

(LET ((#:G14496 0))
  (BLOCK NIL
    (WHEN (= 1 1) (INCF #:G14496))
    (WHEN (= #:G14496 2) (RETURN T))
    (WHEN (= 3 2) (INCF #:G14496))
    (WHEN (= #:G14496 2) (RETURN T))
    (WHEN (= 2 2) (INCF #:G14496))
    (WHEN (= #:G14496 2) (RETURN T))
    (WHEN (= 4 4) (INCF #:G14496))
    (WHEN (= #:G14496 2) (RETURN T))
    NIL))

Heh, I can see the source and it looks like my sketch above.

The macro generates totally simple, easy-to-understand
code that can be compiled just fine.

There is no need to use EVAL in any way.

The compiler checks that it is valid code as an added bonus.
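
For what it's worth, a call site then reads just like a function call, but
the arguments are evaluated lazily, left to right, stopping as soon as two
of them are true (a hypothetical continuation of the listener session):

CL-USER 63 > (at-least-two (= 1 2) (evenp 4) (> 3 1) (error "never reached"))
T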

>
> at-least-two returns true if at least two arguments
> evaluate to true, and nil otherwise. It has to be a
> macro because it is "lazy." As you can see, it
> really looks like a function.
>
> Note: it is not the most terse or "intelligent"
> macro I can do here, it is written to demonstrate
> the fundamental difference between CL and Newlisp.

That's the style of the Lisps of the '60s, '70s and the
early '80s. I was using FEXPRs 20 years ago.
I don't miss them...

> These are four rules that, to the best of my knowledge,
> completely describe the syntax and semantics of Newlisp macros:
> --------------------------------------
> 1) How do I define macros?
> Just like functions, but use define-macro instead of define.
>
> 2) How do I define anonymous macros?
> Just like functions, but use the keyword
> lambda-macro instead of lambda.
>
> 3) And how does such a macro work?
> (m expr1 ... exprn) = (f 'expr1 ... 'exprn)
>
> where m and f are a macro and a function defined
> in the same way, differing only in the keyword.
>
> 4) How do I test whether an object is a macro?
> (macro? object).
>
> -------------------------------------
> If you try to explain CL macro syntax and semantics,
> including the whole concept of macroexpansion time vs. compile
> time vs. runtime and the concept of the non-first-class
> citizen, and then macroexpand, macroexpand-1, macrolet,
> macro-function and the macroexpand hook, you'll certainly need
> much more space and time.

The problem is not learning the concept and
machinery of a macro. The problem is understanding written
code. Understanding code with lexical binding
and a separate macro phase is easier than understanding
code that uses fexprs, since fexprs allow
all kinds of non-local effects that only
appear at runtime. Most of the time this is not
needed. Learn to use lexical binding and macros
once, and that should help you write better code
for the rest of your life.

>
> Finally, about ease of use: challenge me. There are
> people who have used CL for 20 years, a whole community around
> it, lots of scientific papers, etc. Lispers are proud of
> macros writing macros writing macros, etc. Give me the
> hardest, most sophisticated problem Lispers have solved
> with a macro, and I'll solve it in Newlisp, so we can
> discuss the solutions. As long as the problem is in the
> technique, not in the lack of an algorithm.

Heh, you can do all kinds of stuff with fexprs; you
just can't compile them in any useful way
and you just can't control the non-locality
of the effects. If I write in Lisp

(lambda (some-function)
  (let ((a 10))
    ; do something
    (funcall some-function)
    ; do something
    ))

I know that some-function cannot change the value
of a. Only code in the lexical environment can change it.
I can hand out functions f that change a and make those
available to other functions g. Still, these functions f
have to have local lexical access to a.

Allowing non-lexical access to a is evil because
it makes program understanding much harder.
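
A small sketch of that point (the names are made up): only code written
inside A's lexical scope can produce functions that touch A; a callee
cannot reach in from the outside.

(defun make-holder ()
  (let ((a 10))
    (values (lambda () a)                  ; reader, written inside A's scope
            (lambda (new) (setf a new))))) ; writer, also written inside

(multiple-value-bind (get set) (make-holder)
  (funcall set 42)
  (funcall get))                           ; => 42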

--
http://lispm.dyndns.org/

Rainer Joswig

unread,
Jan 22, 2009, 5:35:38 PM1/22/09
to
In article <87vdsakl8...@metalzone.distorted.org.uk>,
Mark Wooding <m...@distorted.org.uk> wrote:

> Majorinc Kazimir <fa...@email.address> writes:
>
> > So, CL chose an option that leads to speed, and Newlisp chose the
> > option that leads to expressive power. That is exactly what I said.
>
> Err..., no, MACLISP chose speed and Common Lisp followed.

Scheme chose correctness and Common Lisp followed.

FEXPRs are a runtime feature that makes debugging harder and
makes compilation next to useless.


If we have

(foo (some-code ...))

and foo gets it as literal code (s-expressions),
then it can do anything with it at runtime.
We can't tell what it does just by looking at the source code, and
we can't run an expander, because there is none.

If we have a compiled Lisp implementation, the compiler
can, for example, check the code for basic correctness
(syntax, ...), replace constant code with its
result, etc., BEFORE runtime. Since Lisp switched
to lexical semantics and easy-to-use compilers,
I usually don't load code as a file into Lisp;
I compile it first and make the warnings
and errors of the compiler go away. Having code
that compiles cleanly without warnings and errors
is already a first step towards working code.
That gets rid of syntax errors, typos, expansion errors,
...
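
As a rough illustration (WITH-DOUBLED is a made-up macro): an ill-formed
use of a macro is caught when the file is compiled, whereas a fexpr
receiving the same ill-formed text can only fail when it happens to run.

(defmacro with-doubled (var value &body body)
  `(let ((,var (* 2 ,value))) ,@body))

;; In a file being compiled:
;;   (with-doubled 42 x (print x))
;; expands to (LET ((42 (* 2 X))) ...), which the compiler rejects
;; before runtime -- 42 is not a valid variable name.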

--
http://lispm.dyndns.org/

Kaz Kylheku

unread,
Jan 22, 2009, 6:37:38 PM1/22/09
to
On 2009-01-22, Rainer Joswig <jos...@lisp.de> wrote:
> In article <87vdsakl8...@metalzone.distorted.org.uk>,
> Mark Wooding <m...@distorted.org.uk> wrote:
>
>> Majorinc Kazimir <fa...@email.address> writes:
>>
>> > So, CL chose an option that leads to speed, and Newlisp chose the
>> > option that leads to expressive power. That is exactly what I said.
>>
>> Err..., no, MACLISP chose speed and Common Lisp followed.
>
> Scheme chose correctness and Common Lisp followed.
>
> FEXPRs are a runtime feature that makes debugging harder and
> makes compilation next to useless.

Kent Pitman, _Special Forms in Lisp_, Conclusions:

``It should be clear from this discussion that FEXPR's are only safe
if no part of their ``argument list'' is to be evaluated, and even
then only when there is a declaration available in the environment
in which they appear. Using FEXPR's to define control primitives will
be prone to failure due to problems of evaluation context and due to
their potential for confusing program-manipulating programs such as
compilers and macro packages.''

``MACRO's on the other hand, offer a more straightforward and reliable
approach to all of the things which we have said should be required
of a mechanism for defining special forms.''

``It is widely held among members of the MIT Lisp community that FEXPR,
NLAMBDA, and related concepts could be omitted from the Lisp language
with no loss of generality and little loss of expressive power, and that
doing so would make a general improvement in the quality and reliability
of program-manipulating programs.''

[Pitman, 1980]

The following paper looks like it promises a contrasting viewpoint, but
I don't have access to it:

_Sometimes an FEXPR is better than a macro_, Z. Lichtman, ACM SIGART
Bulletin, Issue 97, July 1986.

We /do/ have the abstract, at least, which suggests that the reason a FEXPR is
sometimes better has to do with efficiency, not expressiveness --- ``despite
its negative aspects''! Even the author of this paper defending fexprs
recognizes that there are negative aspects:

``Common Lisp, which is becoming THE Lisp standard, does not support
call by text (FEXPR mechanism in Mac/Franz Lisp). This effect can
be obtained using macros. Based on the experience of converting an
OPS5 implementation from Franz Lisp to Common Lisp, it is argued that
sometimes call by text is needed for efficiency, despite its negative
aspects. In the case of languages embedded in a Lisp system, using
the macro alternative for call by text can cause macro expansion at
execution time. This leads to a somewhat less efficient implementation
of the embedded language.''

[Z. Lichtman, 1986]

Without having the full text of the paper, we can nevertheless easily
conjecture what the Abstract might be talking about. How would macro-expansion
happen at execution time? You'd have to EVAL or COMPILE some form that is a
macro call, or contains one. Sure, doing the macro expansion could be a waste
of time, if the expansion is only evaluated once (or, more generally, a small
number of times). The expanded text of the macro might run no faster than the
original form being interpreted by the FEXPR, plus you pay for the cost of
the macroexpansion, so it could be the case that no matter how many times the
expansion is re-evaluated, it never pays back the cost of macroexpansion.

Macros are often written with the assumption that expansion time is not
important, and so it could spend many cycles and do a lot of consing; the
efficiency of the code that the macro generates is important. Given that you
are EVAL'ing this code already, an interpretive alternative to macroexpansion
(call by source code) may be faster.
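
A hedged sketch of that cost (COUNTED-MACRO is invented; whether an
implementation caches the expansion of a form handed to EVAL is
implementation-dependent):

(defvar *expansions* 0)

(defmacro counted-macro (x)
  (incf *expansions*)              ; work done at expansion time
  `(* ,x ,x))

(loop repeat 3
      do (eval '(counted-macro 5)))  ; a simple evaluator expands the macro
                                     ; again on every EVAL

*expansions*                       ; typically => 3 here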

Indeed, a fexpr behaves a lot like an extension to an /interpreter/ for handling
a special form, whereas a macro is an extension to a /compiler/ for handling a
special form. So, in a sense, if code is purely interpreted, it doesn't always
make sense to be invoking a /compiler/ extension for the handling of some
special form.

Sometimes interpreting can even be outright faster than compiling. This
happens, for instance, when some straightforward compiled code generates a loop
that exceeds the size of a small cache on the processor, whereas encoding the
algorithm as a tiny interpreter plus a compact piece of data behaving as code,
allows everything to fit into the cache.

But where is the paper that argues that fexprs are sometimes better than macros
because they are more flexible and powerful than macros? Or easier to use,
learn, etc? It sounds like Mr. Majorinc is confident enough that he should be
able to set about writing that paper. I'm looking forward to it!

Rob Warnock

unread,
Jan 22, 2009, 9:43:19 PM1/22/09
to
Tamas K Papp <tkp...@gmail.com> wrote:
+---------------

| And of course you can have closures in any language if you fight hard
| for it, e.g. you can implement CL in C and then "have" closures in the
| latter. Thinking about it, reimplementing CL in C is a much cleaner
| way to get closures than struggling with the brain-dead newLISP.
+---------------

Yup, viz.:

$ cat ctest.c
#include <stdio.h>

typedef unsigned long lispobj;  /* Or whatever your generic type is */

typedef struct closure_s {
    lispobj (*func)(lispobj *, ...);
    lispobj *env;
} closure_t, *closure_p;

#define funcall_closure(closure, args...) \
    (((closure)->func)((closure)->env, ## args))

#define CFUNCP lispobj (*)(lispobj *, ...)

lispobj
foo(lispobj *env, lispobj x, lispobj y)
{
    return (env[0] * x) + (env[1] * y);
}

int
main(void)
{
    lispobj e[2] = {3, 17};              /* Create a small environment */
    closure_t c = {(CFUNCP)&foo, &e[0]}; /* Close FOO over it. */

    printf("%lu\n", funcall_closure(&c, 1, 1000));
    return 0;
}
$ make ctest
cc -O -pipe ctest.c -o ctest
$ ./ctest
17003
$

I've been playing these sorts of games for years, with great success! ;-}

+---------------


| The mere suggestion that one should emulate closures
| by fiddling with lambda forms as lists reeks.

+---------------

Indeed!


-Rob

-----
Rob Warnock <rp...@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607

Kaz Kylheku

unread,
Jan 22, 2009, 11:45:50 PM1/22/09
to
On 2009-01-23, Rob Warnock <rp...@rpw3.org> wrote:
> $ ./ctest
> 17003
> $
>
> I've been playing these sorts of games for years, with great success! ;-}

Who's winning today, and what is the score?

Majorinc Kazimir

unread,
Jan 23, 2009, 12:33:41 AM1/23/09
to
Tamas K Papp wrote:
> On Thu, 22 Jan 2009 21:49:57 +0100, Majorinc Kazimir wrote:
>
>> In Newlisp, functions are lists and you can store any kind of data in
>> any way in those. So, if you need closures, you can have them.
>
> Ouch. You clearly don't grok what closures are.

Closures? Move on ...

>> CL macros are not clean, they are not first-class citizens, one
>> cannot even be sure whether they exist or not. Now you see it, now you
>> don't.
>
> CL has this thing called a language specification. Have a look at it,
> you are in for a surprise. Among other things, it will answer a lot
> of questions you might have (or should have, given your level of
> ignorance) about macros and how they work.

What, are you trying to pretend that you have first-class
macros in CL? And to replace them with reading manuals?

Didn't work.


Pascal J. Bourguignon

unread,
Jan 23, 2009, 3:28:27 AM1/23/09
to
Kaz Kylheku <kkyl...@gmail.com> writes:


The embedded languages should do minimal compilation themselves. Then
macros wouldn't have to be expanded at run-time.


> Macros are often written with the assumption that expansion time is not
> important, and so it could spend many cycles and do a lot of consing; the
> efficiency of the code that the macro generates is important. Given that you
> are EVAL'ing this code already, an interpretive alternative to macroexpansion
> (call by source code) may be faster.
>
> Indeed, a fexpr behaves a lot like an extension to an /interpreter/ for handling
> a special form, whereas a macro is an extension to a /compiler/ for handling a
> special form. So, in a sense, if code is purely interpreted, it doesn't always
> make sense to be invoking a /compiler/ extension for the handling of some
> special form.
>
> Sometimes interpreting can even be outright faster than compiling. This
> happens, for instance, when some straightforward compiled code generates a loop
> that exceeds the size of a small cache on the processor, whereas encoding the
> algorithm as a tiny interpreter plus a compact piece of data behaving as code,
> allows everything to fit into the cache.
>
> But where is the paper that argues that fexprs are sometimes better than macros
> because they are more flexible and powerful than macros? Or easier to use,
> learn, etc? It sounds like Mr. Majorinc is confident enough that he should be
> able to set about writing that paper. I'm looking forward to it!

We're waiting...

--
__Pascal Bourguignon__

Rob Warnock

unread,
Jan 23, 2009, 5:23:30 AM1/23/09
to
Kaz Kylheku <kkyl...@gmail.com> wrote:
+---------------
+---------------

Seriously, having even ad-hoc closures in my "C toolbox" has been
a big win for me.

- There are some standard C library functions that are a lot less
convenient if you *don't* have them [e.g., the "ftw(3)/nftw(3)"
file tree walkers come to mind, or the "twalk(3)" library].

- It makes writing your own polymorphic/generic higher-order functions
*much* easier.

- It makes writing GUI event callback functions easier.

Yes, you can fake it by requiring all your higher-order functions
to take both a "callback" function pointer and an opaque cookie
(which is passed to the "callback"), but you can't depend on all
the library functions you want to use doing this.

Helmut Eller

unread,
Jan 23, 2009, 5:35:11 AM1/23/09
to
* Tamas K Papp [2009-01-22 22:52+0100] writes:

> The mere suggestion that one should emulate closures by fiddling with
> lambda forms as lists reeks.

Why? It seems to me that that is just the same trick as using the
symbol nil as the empty list.

Helmut

Kaz Kylheku

unread,
Jan 23, 2009, 1:39:14 PM1/23/09
to

Look at that, an apparent Schemer pipes up to criticize NIL.

Don't you have something better to do, like look for places in your
code where you accidentally wrote list rather than (not (null? list)),
or (car list) instead of (if (not (null? list)) (car list))?

:)

Helmut Eller

unread,
Jan 23, 2009, 2:07:58 PM1/23/09
to

I'm not a Schemer; I'm an Emacs Lisp guy. And I find it amusing that
both Schemers and CLers fail to write decent Emacs Lisp code exactly
because they prefer those other "obviously better" languages and can't
make the needed mental switch.

Helmut.

Thomas F. Burdick

unread,
Jan 23, 2009, 3:16:46 PM1/23/09
to
> >> * Tamas K Papp [2009-01-22 22:52+0100] writes:
> >>> The mere suggestion that one should emulate closures by fiddling with
> >>> lambda forms as lists reeks.

> > On 2009-01-23, Helmut Eller <eller.hel...@gmail.com> wrote:
> >> Why?  It seems to me that that is just the same trick as using the
> >> symbol nil as empty list.

> * Kaz Kylheku [2009-01-23 19:39+0100] writes:
> > Look at that, an apparent Schemer pipes up to criticize NIL.

Wow, whatever Kaz is smoking ... keep that shit away from me! Helmut
is "an apparent Schemer", that's really classic.

> > Don't you have something better to do, like look for places in your
> > code where you accidentally wrote list rather than (not (null? list)),
> > or (car list) instead of (if (not (null? list)) (car list))?

On 23 jan, 20:07, Helmut Eller <eller.hel...@gmail.com> wrote:
> I'm not a Schemer; I'm an Emacs Lisp guy.  And I find it amusing that
> both Schemers and CLers fail to write decent Emacs Lisp code exactly
> because they prefer those other "obviously better" languages and can't
> make the needed mental switch.

Actually, I think it's an occupational hazard of Lisp languages in
general. It's so easy to mold your environment into something more
familiar that you risk doing so before you get a sense of style in
your new environment. But that doesn't mean that you never get a sense
of taste, just that the more you can adapt the environment to your
will, the longer it takes. Myself, I write elisp with a clear CL
accent, but correctly and in good style.

And yeah, funcallable lists can do a lot of things that closures can,
and some that they can't. If you throw in buffer-local variables,
you've got a very nice environment. But you really feel that they're
not closures when you look at something like cl-ppcre. Compiled
networks of closures ... well, they require closures to get the real
benefit.
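
Roughly the idea, sketched in CL with invented names: each node of a
matcher is compiled once into a closure that captures the next closure in
the chain, which is the style CL-PPCRE uses.

(defun make-char-matcher (char next)
  (lambda (string pos)
    (and (< pos (length string))
         (char= (char string pos) char)
         (funcall next string (1+ pos)))))

(defun compile-literal (literal)
  (let ((matcher (lambda (string pos)       ; end of the chain:
                   (declare (ignore string))
                   pos)))                   ; report the match position
    (loop for ch across (reverse literal)
          do (setf matcher (make-char-matcher ch matcher)))
    matcher))

;; (funcall (compile-literal "ab") "abc" 0) => 2
;; (funcall (compile-literal "ab") "xbc" 0) => NIL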

Thomas F. Burdick

unread,
Jan 23, 2009, 3:23:13 PM1/23/09
to
As a response to a whole subsection of this thread: I bet you can
indeed compile a Lisp dialect with fexprs. With the same semantics,
and with very good performance.

Rainer has the right argument here: it's the semantics that are the
problem.

D Herring

unread,
Jan 23, 2009, 4:44:02 PM1/23/09
to

I downloaded the paper. Email me if you need a copy, but it wasn't
much to read. He cites Pitman's paper as a reference.

Lichtman's paper was based on experience with
2. Forgy, C.L. OPS5 Users Manual, CMU-CS-81-135, Dept.
of Computer Science, Carnegie-Mellon University,
Pittsburgh, PA, 1981.

He demonstrated two cases where OPS5 ran 7-15% slower without FEXPRs.
- OPS5 loaded plain-text definitions of its functions for compilation;
this was slowed by macro expansions.
- OPS5 passed all parameters by text; this sometimes triggered a
"call-by-text" macroexpansion at runtime.

The conclusion:
``
We showed that in some situations of languages embedded within
Lisp, doing call by text through a macro call, instead of using an
explicit call by text mechanism, causes macro expansion at runtime
which is an undesired overhead.
Therefore, in spite of some dangerous aspects of explicit call
by text mechanisms such as FEXPR, &quote and nlambda, it should be
reconsidered whether such an explicit mechanism be added to Common
Lisp, for the sake of efficiency when call by text is required at run
time.''

To me, it sounds like OPS5 could have used some refactoring or a
better compiler (e.g. save fasls). Call-by-text will always be slow,
macroexpansions or no. I don't believe the above problems affect
other languages built on lisp, such as [Open]Axiom/FriCAS. Even toy
CAS's such as Fateman's MockMMA make an effort to replace text with
symbols and structures ASAP.

For a better FEXPR, I would recommend TCL -- EIAS (everything is a string).

- Daniel

Timofei Shatrov

unread,
Jan 24, 2009, 6:05:28 AM1/24/09
to
On Fri, 23 Jan 2009 18:39:14 +0000 (UTC), Kaz Kylheku <kkyl...@gmail.com> tried
to confuse everyone with this message:

That would be "lst", not "list".

--
|Don't believe this - you're not worthless ,gr---------.ru
|It's us against millions and we can't take them all... | ue il |
|But we can take them on! | @ma |
| (A Wilhelm Scream - The Rip) |______________|
