
More: Very large reals...


Deepak Goel

Feb 22, 2002, 10:20:29 AM
hello
(i posted a similar question recently.. )


consider this question:

Lisp will eval for you:
(expt 2.222 2.222),
(expt 100 100) as well as
(expt 101 101),

but NOT

(expt 100.5 100.5)

Going by your responses to the last question, a suggested solution may
be:

(expt 201/2 201/2)

, but give it a try on your lisp, to see that the latter doesn't work
either. (this is because an operation on 2 rationals has resulted in
an irrational here...).. And for irrationals, lisp tries to convert
them to floats and then butts out if the inbuilt float limitations do
not allow it.. Isn't it bizarre that (expt 201 201) is okay, but
(expt 201/2 201/2) can't be calculated?


i guess the only solution is the roundabout way of working with logs.
In principle, even that can exceed the bounds if large enough..
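for instance, something like this (an untested sketch; the exact digits
you get back depend on your implementation's float precision):

(let* ((lg (* 100.5 (log 100.5 10)))   ; log base 10 of 100.5^100.5
       (e  (floor lg))                 ; the (possibly huge) decimal exponent
       (m  (expt 10 (- lg e))))        ; a mantissa between 1 and 10
  (values m e))                        ; => roughly 1.65 and 201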


wish things were 'cleaner'... yes, i did see the point, that lisp
leaves the 'floats' and its limitations to the machine.. but then
wouldn't it be nice if one had another data-type, say "RREAL" [1]---while
decimal-precision could still be finite, it should allow the expt part
of such a "RREAL" to get arbitrarily large..


[1] this is just a newbie trying to find a relevant package that does
his job. This is NOT a request to CHANGE the language! lisp is
awesome, as are the (mostly silent...) majority on the ng, but judging
from the gurus fighting, some insecure people will interpret even an
innocuous question of a newbie as an 'attack' on lisp.. heavens lord!

Deepak <http://www.glue.umd.edu/~deego>
--
Got root beer?

Raymond Toy

Feb 22, 2002, 11:36:27 AM
>>>>> "Deepak" == Deepak Goel <de...@glue.umd.edu> writes:

Deepak> , but give it a try on your lisp, to see that the latter doesn't work
Deepak> either. (this is because an operation on 2 rationals has resulted in
Deepak> an irrational here...).. And for irrationals, lisp tries to convert
Deepak> them to floats and then butts out if the inbuilt float limitations do
Deepak> not allow it.. Isn't it bizarre that (expt 201 201) is okay, but
Deepak> (expt 201/2 201/2) can't be calculated?

Because 201/2 raised to the 201/2 power eventually requires a square
root, which has no rational solution. And the language says that in
this case, the result should be a single-float.
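You can see the boundary at the REPL (transcript from memory; the
printed digits may differ slightly in your Lisp):

> (expt 201/2 4)        ; integer power: exact rational
1632240801/16
> (expt 201/2 1/2)      ; non-integer rational power: single-float approximation
10.024969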

Ray

Nils Goesche

Feb 22, 2002, 11:57:58 AM
In article <ap3vgcp...@fosters.umd.edu>, Deepak Goel wrote:
> consider this question:
>
> Lisp will eval for you:
> (expt 2.222 2.222),
> (expt 100 100) as well as
> (expt 101 101),
>
> but NOT
>
> (expt 100.5 100.5)
>
> Going by your responses to the last question, a suggested solution may
> be:
>
> (expt 201/2 201/2)
>
> , but give it a try on your lisp, to see that the latter doesn't work
> either. (this is because an operation on 2 rationals has resulted in
> an irrational here...).. And for irrationals, lisp tries to convert
> them to floats and then butts out if the inbuilt float limitations do
> not allow it.. Isn't it bizarre that (expt 201 201) is okay, but
> (expt 201/2 201/2) can't be calculated?

What is bizarre about it? The whole point of using rationals is that
computation with them is exact. You can't represent an /irrational/
number by a rational, because... that's why we call them irrational
:-) I hope you are aware of the fact that the ``limitations'' you
speak of are built into the very concept of ``floats'', it is not
something Lisp introduced.

> i guess the only solution is the roundabout way of working with logs.
> In principle, even that can exceed the bounds if large enough..

Sure. It's one possible workaround that might work in your situation.
Nobody can tell, because you didn't explain why you think you need
such precision, and I hold the claim that if you think you really
need it you are most probably mistaken :-)

> wish things were 'cleaner'... yes, i did see the point, that lisp
> leaves the 'floats' and its limitations to the machine.. but then
> wouldn't it be nice if one had another data-type, say "RREAL" [1]---while
> decimal-precision could still be finite, it should allow the expt part
> of such a "RREAL" to get arbitrarily large..

One could certainly build such a thing, maybe people have although I
have never heard of such a thing. Why is it you think that people
don't use it?

> [1] this is just a newbie trying to find a relevant package that does
> his job. This is NOT a request to CHANGE the language! lisp is
> awesome, as are the (mostly silent...) majority on the ng, but judging
> from the gurus fighting, some insecure people will interpret even an
> innocuous question of a newbie as an 'attack' on lisp.. heavens lord!

Don't worry about them. The titans seem to enjoy fighting each other
because it helps them to keep in shape. They usually don't bother
crushing lowly underlings like us in the course :-)

Regards,
--
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x42B32FC9

Wade Humeniuk

Feb 22, 2002, 12:10:18 PM

"Deepak Goel" <de...@glue.umd.edu> wrote in message
news:ap3vgcp...@fosters.umd.edu...

>
> i guess the only solution is the roundabout way of working with logs.
> In principle, even that can exceed the bounds if large enough..
>
>
> wish things were 'cleaner'... yes, i did see the point, that lisp
> leaves the 'floats' and its limitations to the machine.. but then
> wouldn't it be nice if one had another data-type, say "RREAL" [1]---while
> decimal-precision could still be finite, it should allow the expt part
> of such a "RREAL" to get arbitrarily large..

Not being facetious, but what is wrong with having the answer be (expt
201/2 201/2)? This answer is exact. You can do whole calculations getting
precise symbolic answers. If you need an approximation at the end of the
calculation, then this would be better.

(expt 201/2 201/2) would eval to (expt 201/2 201/2)

(* (expt 201/2 10) (expt 201/2 201/2)) ---> (expt 201/2 221/2)
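
A toy sketch of the kind of thing I mean (throwaway names of my own,
nowhere near a real computer-algebra system):

(defun sym-expt (base power)
  "Keep BASE^POWER around in unevaluated 'noun form'."
  (list 'expt base power))

(defun sym-* (x y)
  "Multiply two noun forms, merging powers of the same base."
  (if (and (eq (first x) 'expt) (eq (first y) 'expt)
           (eql (second x) (second y)))
      (list 'expt (second x) (+ (third x) (third y)))
      (list '* x y)))

;; (sym-* (sym-expt 201/2 10) (sym-expt 201/2 201/2)) => (EXPT 201/2 221/2)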

Wade


Roland Kaufmann

Feb 22, 2002, 12:08:09 PM

To calculate (expt 100.5 100.5)
you can (in addition to the logarithm-based methods suggested already)
use rational arithmetic (assuming that you will need ;-) only an
approximation to the irrational result:

> (isqrt (floor (expt 201/2 201)))
16507800303323340337571484403104940541623178104893086641071371478023\
49720792903936774411401317382740507585660507430122529624669724273290\
236960001903788005693959319215387930656145357337789457629163068052

or use CLISP's long floats, which have adjustable mantissa length and a
huge exponent range:

> MOST-POSITIVE-LONG-FLOAT
8.8080652584198167656L646456992

> (expt 100.5l0 100.5l0)
1.6507800303323340439L201

Note that the last 3 digits are incorrect. Increase the mantissa
length (in binary digits)

> (setf (long-float-digits) 128)
128

to get a better approximation.

> (expt 100.5l0 100.5l0)
1.65078003033233403375714844031049405522L201

Hope this helps.
Roland Kaufmann

Nils Goesche

Feb 22, 2002, 12:16:52 PM
In article <a55tal$4vauh$1...@ID-125440.news.dfncis.de>, Nils Goesche wrote:
> Why is it you think that people don't use it?

English is hard. What I mean is: People don't use arbitrary precision
floats. What do you think might be the reason for that?

Deepak Goel

Feb 22, 2002, 12:42:48 PM
>
> Sure. It's one possible workaround that might work in your situation.
> Nobody can tell, because you didn't explain why you think you need
> such precision

no, i don't need precision. All i was asking was that lisp not give
me *excl::infinity* for an answer, when i know that writing down (a
rational approximation to the) answer is entirely within lisp's
capabilities, as has been demonstrated by another reply..


But thanks, your answer did help.

Kent M Pitman

Feb 22, 2002, 2:00:39 PM
Deepak Goel <de...@glue.umd.edu> writes:

> > Sure. It's one possible workaround that might work in your situation.
> > Nobody can tell, because you didn't explain why you think you need
> > such precision
>
> no, i don't need precision. All i was asking was that lisp not give
> me *excl::infinity* for an answer, when i know that writing down (a
> rational approximation to the) answer is entirely within lisp's
> capabilities, as has been demonstrated by another reply..

There is no case that I know of other than error by an end-user programmer
that allows a rational to come back from a computation and not be an exact
result.

Scheme, I believe, allows inexact rationals and exact floats. While
conceptually there's some justification for this, I think the problem is
that it fights the hardware.

I'm a little surprised that a conforming IEEE program is permitted to
return *excl::infinity* (as you call it) for something that is not
infinite. Surely it must bug the numerical analysts not to be able to
distinguish "infinity" (or even "various classes of infinity") from
"gosh that's a really big number and I didn't bring much note paper".
But I've steered mostly clear of floats because I consider them
utterly black magic to be dealt with only by skilled experts, not mere
mortals like myself. I use them only very, very occasionally usually
in interactive computations, never with big exponents, and always
expecting to get lousy precision.

Edward Jason Riedy

Feb 22, 2002, 3:36:24 PM
[This is long and not necessarily related to comp.lang.lisp, but
INF v. OVF and floating-point "black magic" need a bit of
explanation.]

And Kent M Pitman writes:
-
- I'm a little surprised that a conforming IEEE program is permitted to
- return *excl::infinity* (as you call it) for something that is not
- infinite. Surely it must bug the numerical analysts not to be able to
- distinguish "infinity" (or even "various classes of infinity") from
- "gosh that's a really big number and I didn't bring much note paper".

One alternative is to have an OVF that's "between" the largest
representable number and INF. This was proposed once upon a
time, and it occasionally re-surfaces. The original was from
the DEC VAX folks, if I have my history correct. The most
recent recurrence was from Steele at Java One last year.
Another alternative is not to have INF at all, simply OVF.
That will fall into the same argument below. I'm not aware of
other workable finite-precision alternatives.

Why isn't there an OVF? Well, what's 1.0 / OVF? There are two
reasonable answers: 0.0 and NaN. The former case (0.0) follows
the idea of not generating spurious NaNs. Continuing in that
direction gives arithmetic rules leaving OVF and INF practically
indistinguishable. A finite number divided by OVF would give a
signed zero, finite plus or minus OVF would give plus or minus
OVF, etc. OVF and INF end up behaving the same way, so including
both would be more confusing.

So what about 1.0 / OVF becoming a NaN? In this case, you'll
be flooded with spurious NaNs. Continued fractions that
converge perfectly well when 1.0 / (OVF or INF) == 0.0 will
suddenly return NaNs. There are other series expansions
that die similarly. NaN is just not a useful value for 1.0 /
OVF. Similar cases occur for finite +/- OVF, etc. The
spurious NaNs wreak havoc on formulae that are transliterated
into programs directly.

Ok, there is more possibility for 1.0 / OVF: An UNF "underflow
value". That has a host of its own problems, most of which are
more severe than the algebraic issues above. UNF is problematical
enough to deserve its own dismissal, but this message is going to
be far too long already.

So why not call INF OVF? INF was introduced as an algebraic
completion for division. The traditional symbol for that
is the nifty double-loop infinity sign. I suppose it _could_
be re-named OVF, but I tend to think that change would confuse
more people than it would help.

And there _is_ a way to distinguish the two uses of INF. A
"pure" infinity can only be produced through division by zero,
which signals a condition. Overflows signal their own conditions.
Programmers are starting to get tools where they can _use_ these
things portably... (C99 requires these tools on platforms that
purport to support IEEE754, while CL only allows them. The
upcoming Fortran revision has some provisions in it, too.)

Dr. Kahan has taken to calling the division by zero signal
"infinite result from finite operands". That would be a more
accurate renaming, but who wants to say that all the time?

Oh, and my office mate just pointed out the real reason why most
numerical analysts aren't bugged... They almost always assume
no overflow or underflow. ;)

- But I've steered mostly clear of floats because I consider them
- utterly black magic to be dealt with only by skilled experts, not mere
- mortals like myself. I use them only very, very occasionally usually
- in interactive computations, never with big exponents, and always
- expecting to get lousy precision.

This is exactly the situation IEEE754 was trying to solve, and it
does when people don't look too hard at it. ;)

The balance between exponents and precision is intended to give
people more precision than they think they need for numbers in the
exponent's range. Extra precision yields good results; it shrinks
the area of floating-point "oddness" drastically.

Some good rules of thumb, courtesy of Dr. Kahan, paraphrased by
me:
1) Store your known quantities as narrowly as you can.
2) Compute intermediate results as widely as you can afford.
3) Treat derived properties as intermediate results.

There's some art in applying these, obviously. The known
quantities are often things passed between major modules or
temporal stages in your code. Intermediate results are both
the result of a + b in a + b + c (or b+c) and those named
quantities you're using for formula simplification.
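
In Lisp terms, rules 1 and 2 might look something like this (a toy of
my own, not from any library):

;; known quantities stored narrow (single-floats), the intermediate
;; accumulation done wide (double-float), narrowed only at the end
;; if the caller really wants that
(defun dot-product (xs ys)
  (let ((acc 0d0))
    (map nil (lambda (x y)
               (incf acc (* (float x 1d0) (float y 1d0))))
         xs ys)
    acc))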

Derived properties are tricky. Consider rotating a cube.
Your known properties are the vertices of the unrotated cube and
the rotation parameters. The rotation matrix could be seen as
a derived property, as can the vertices of the rotated cube. If
you're incrementally rotating the derived vertices without adding
all the rotations, you'll want to keep them in higher precision
until some time phase when you clamp them down.

Not a perfect example, but it's also not a clear-cut principle.
As with all rules of thumb, it takes a bit of experience to
figure out where and how to apply them. I don't have enough to
produce guidelines.

One of the main conceptual hurdles is that IEEE754 floating point
was designed to _model_ the real number system, not _be_ the
real number system. Mathematicians don't need NaNs, signed
zeros, overflows, or underflows. Those are artifacts of limited
precision and range. IEEE754 provides an algebraically complete
model of the real numbers. That was the goal guiding behavioral
choices.

The difference between the _model_ v. _be_ views comes up most
often with signed zeros. People who insist that 754 is meant to
_be_ the real numbers pretend that signed zeros are actually
infinitesimal numbers just above or below zero. The intent is
simply that signed zeros are zero with an algebraic sign. The
behavior is sometimes guided by non-standard analysis (don't
read too much into the name; it's just analysis with some
formal infinitesimal elements), but only because that produces
good results for many practical calculations.

Jason
--

Kent M Pitman

Feb 22, 2002, 3:39:18 PM
"Wade Humeniuk" <hume...@cadvision.com> writes:

You want Macsyma.

Not only does it have the ability to carry around expressions in "noun form"
(inhibiting evaluation and dealing with them symbolically) but it also has
bigfloats, which use pairs of integers (because it was written long before
Common Lisp added the rational datatype) to manage an arbitrary, user-chosen
level of precision for large pseudo floating point approximations.

Once again, Lisp is strong enough to allow you to embed solutions to its
perceived weaknesses within itself...
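
For flavor only, a crude sketch of the pair-of-integers idea (decimal
base for readability; this is not Macsyma's actual representation, and
the names are invented):

(defstruct bigfloatish significand exponent)  ; value = significand * 10^exponent

(defun bf* (a b &key (digits 20))
  "Multiply two BIGFLOATISH values, rounding the significand back to DIGITS decimal digits."
  (let* ((sig  (* (bigfloatish-significand a) (bigfloatish-significand b)))
         (exp  (+ (bigfloatish-exponent a) (bigfloatish-exponent b)))
         (drop (max 0 (- (length (format nil "~D" (abs sig))) digits))))
    (make-bigfloatish :significand (round sig (expt 10 drop))
                      :exponent    (+ exp drop))))

The exponent is an ordinary integer and can grow without bound, which is
exactly the property being asked for upthread.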

Erik Naggum

Feb 22, 2002, 4:03:14 PM
* Deepak Goel <de...@glue.umd.edu>

| Going by your responses to the last question, a suggested solution may be:
|
| (expt 201/2 201/2)
|
| but give it a try on your lisp, to see that the latter doesn't work
| either. (this is because an operation on 2 rationals has resulted in
| an irrational here...).. And for irrationals, lisp tries to convert
| them to floats and then butts out if the inbuilt float limitations do
| not allow it.. Isn't it bizarre that (expt 201 201) is okay, but
| (expt 201/2 201/2) can't be calculated?
|
|
| i guess the only solution is the roundabout way of working with logs.

Well, for the case where you have an integral exponent of 1/2, you can
play with this

(let* ((integral (expt 201/2 201)))
  (/ (isqrt (numerator integral))
     (isqrt (denominator integral))))

Note that this might not provide an exceptionally accurate result. :)

There are algorithms to compute the integer part of (expt m 1/n) for most
integral values of n.
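
E.g., a completely naive bisection will do (an untested sketch, nothing clever):

(defun iroot (m n)
  "Integer part of the Nth (N >= 1) root of the non-negative integer M."
  (do ((lo 0) (hi (1+ m)))
      ((<= (- hi lo) 1) lo)
    (let ((mid (floor (+ lo hi) 2)))
      (if (<= (expt mid n) m)
          (setf lo mid)
          (setf hi mid)))))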

| [1] this is just a newbie trying to find a relevant package that does his
| job. This is NOT a request to CHANGE the language! lisp is awesome, as
| are the (mostly silent...) majority on the ng, but judging from the gurus
| fighting, some insecure people will interpret even an innocuous question
| of a newbie as an 'attack' on lisp.. heavens lord!

What nonsense, but it is a kind of self-fulfilling nonsense, because you
piss people off and tend to push them over on the defensive with your own
attitude, here. Keep such inflammatory comments to yourself. You have a
job to do: Learn Common Lisp. Just stick to it, and you will get help.
Ignore the likes of Erann Gat whose job it seems to be to make people
stop using Common Lisp unless it is somehow magically "enhanced" to his
satisfaction. The rest of us object to this nay-saying and just want to do
our job: use Common Lisp productively in our own careers and/or for fun.

///
--
In a fight against something, the fight has value, victory has none.
In a fight for something, the fight is a loss, victory merely relief.

Duane Rettig

Feb 22, 2002, 5:00:01 PM
Kent M Pitman <pit...@world.std.com> writes:

> Deepak Goel <de...@glue.umd.edu> writes:
>
> > no, i don't need precision. All i was asking was that lisp not give
> > me *excl::infinity* for an answer, when i know that writing down (a
> > rational approximation to the) answer is entirely within lisp's
> > capabilities, as has been demonstrated by another reply..
>

> I'm a little surprised that a conforming IEEE program is permitted to
> return *excl::infinity* (as you call it) for something that is not
> infinite.

Such surprise is probably based on the assumption that floating point
mirrors mathematical concepts. It doesn't :-)

Deepak is probably referring to the infinity values that we provide
in Allegro CL, excl::*infinity-single*, excl::*infinity-double*,
excl::*negative-infinity-single*, and excl::*negative-infinity-double*.

> Surely it must bug the numerical analysts not to be able to
> distinguish "infinity" (or even "various classes of infinity") from
> "gosh that's a really big number and I didn't bring much note paper".

But that is precisely how infinities are described in IEEE-754: for
example, +Infinity is sometimes described as "Positive Overflow".
As for bugging numerical analysts, on the contrary I think that
such distinctions are really what makes them tick. What would
distinguish a numerical analyst from a mathematician if not almost
solely how computer math differs from theoretical math?

> But I've steered mostly clear of floats because I consider them
> utterly black magic to be dealt with only by skilled experts, not mere
> mortals like myself. I use them only very, very occasionally usually
> in interactive computations, never with big exponents, and always
> expecting to get lousy precision.

Precisely (pun intended). Floats are inexact, can't be compared, and
are hard to understand. They give the impression that they can represent
irrational numbers, but they can't. Their only saving grace is that
they are _fast_.

--
Duane Rettig Franz Inc. http://www.franz.com/ (www)
1995 University Ave Suite 275 Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253 du...@Franz.COM (internet)

Harald Hanche-Olsen

Feb 22, 2002, 5:08:04 PM
+ Kent M Pitman <pit...@world.std.com>:

| You want Macsyma.
|
| Not only does it have the ability to carry around expressions in
| "noun form" (inhibiting evaluation and dealing with them
| symbolically) but it also has bigfloats, which use pairs of integers
| (because it was written long before Common Lisp added the rational
| datatype) to manage an arbitrary, user-chosen level of precision for
| large pseudo floating point approximations.

Do you think it would have been done differently today? If you do
exact arithmetic on rationals for too long, you might very likely end
up with truly humongous numerators and denominators. So there is a
need to limit the precision. And Macsyma's pairs of integers (which I
assume represent a mantissa [1] and an exponent) seems to me an easier
representation to work with in that respect.

(Hmmm... It just occurred to me that one possible way to limit the
precision of rationals might be to simply chop off the same number of
bits at the right end of numerator and denominator. But that would
still not be good enough for very small numbers, which would have a
moderate numerator and a giant denominator.)
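
(For what it's worth, a more conventional way to cap a rational's
precision is to round it to a fixed power-of-two denominator. An
untested sketch, which still shares the small-number problem noted
above:

  (defun cap-rational (r bits)
    "Round the rational R to the nearest multiple of 2^-BITS."
    (/ (round (* r (expt 2 bits))) (expt 2 bits)))

This is of course just fixed-point arithmetic in disguise.)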

[1] Wrong word? Webster's defines "mantissa" as the part of a
logarithm to the right of the decimal point, which is not quite right.
But then Webster's is often not a good authority for scientific or
technical terms.

--
* Harald Hanche-Olsen <URL:http://www.math.ntnu.no/~hanche/>
- Yes it works in practice - but does it work in theory?

Barry Margolin

Feb 22, 2002, 5:19:29 PM
In article <ap3vgcp...@fosters.umd.edu>,

Deepak Goel <de...@glue.umd.edu> wrote:
>And for irrationals, lisp tries to convert
>them to floats and then butts out if the inbuilt float limitations do
>not allow it..

Since irrationals have infinite mantissas when represented numerically,
there's no easy alternative; you'd have to use something like continued
fractions or symbolic algebra. If you want symbolic algebra, get Macsyma
or Mathematica; this is beyond the scope of something that's just a
programming language.

>wish things were 'cleaner'... yes, i did see the point, that lisp
>leaves the 'floats' and its limitations to the machine.. but then
>wouldn't it be nice if one had another data-type, say "RREAL" [1]---while
>decimal-precision could still be finite, it should allow the expt part
>of such a "RREAL" to get arbitrarily large..

Since allowing the exponent to get arbitrarily large wouldn't solve the
(expt 201/2 201/2) problem, what's your point?

--
Barry Margolin, bar...@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Lieven Marchand

Feb 22, 2002, 12:49:59 PM
Deepak Goel <de...@glue.umd.edu> writes:

> wish things were 'cleaner'... yes, i did see the point, that lisp
> leaves the 'floats' and its limitations to the machine.. but then
> wouldn't it be nice if one had another data-type, say "RREAL" [1]---while
> decimal-precision could still be finite, it should allow the expt part
> of such a "RREAL" to get arbitrarily large..

If you haven't done this already, you should read the Goldberg paper
cited earlier in this thread. This would be a very bad idea for any
realistic calculations you want to do with these RREALs.

The combination of choice of base[1], size of mantissa in bits and
size of exponent in bits has to be carefully designed to get
meaningful results. Floating point numbers are far more spread out
at the ends of their range than in the middle, and the resulting loss
of precision in a function like sine can be large.

[1] Old IBM mainframes used 16 as base instead of 2 like IEEE floats,
which worsened a lot of floating point artefacts.

--
Lieven Marchand <m...@wyrd.be>
She says, "Honey, you're a Bastard of great proportion."
He says, "Darling, I plead guilty to that sin."
Cowboy Junkies -- A few simple words

Wade Humeniuk

Feb 22, 2002, 6:05:22 PM

"Kent M Pitman" <pit...@world.std.com> wrote in message
news:sfwbseh...@shell01.TheWorld.com...
>
> You want Macsyma.


Indeed Macsyma is a solution. But I am wondering what Deepak Goel is doing
using such large numbers in calculations? (Still infinitely less than
infinity, though.) If Deepak has a specific calculation in mind, maybe the
calculation can be reordered so these large numeric calculations can be
avoided.

Stuff can also be discarded, like..
(- (expt 201/2 201/2) 1000000) is, for all practical purposes, (expt 201/2 201/2).

There are all sorts of ways around numerical calculations, but they have to
be hand built.

Break out your numerical methods books!

Wade


Edward Jason Riedy

Feb 22, 2002, 7:07:48 PM
And Duane Rettig writes:
-
- Such surprise is probably based on the assumption that floating point
- mirrors mathematical concepts. It doesn't :-)

- Precisely (pun intended). Floats are inexact, can't be compared, and
- are hard to understand.

- They give the impression that they can represent irrational numbers,
- but they can't.

Floating point numbers don't give impressions. Poorly written
descriptions of floating-point systems give lousy impressions,
like those above. The points below may not be worded in the
best possible way, but they are correct.

1) Floating-point arithmetic is built on mathematical concepts.
There is no mirroring. There _is_ modelling of one mathematical
system (requiring infinite resources) by another (bounded by
finite resources).

2) Comparisons between floating-point numbers are excruciatingly
well-defined. That section of the standard needs no important
content modification. We're moving the suggested (and mostly
ignored) language bindings and fixing some terminology. That's
the one section no one wants to change at all.

You may be referring to the oft-repeated superstition about
equality and floating-point numbers. Many people believe that
each floating-point number represents a range of real numbers,
and thus they don't think equality means anything. But each
(finite) floating-point number corresponds to exactly one
real number. Equality is perfectly well defined. And it is
very useful.

Sometimes you may _want_ a result only within some tolerance
factor, for economic or numerical reasons. That's fine, but it
doesn't invalidate equality. (And yes, comparisons are very
confusing in interval arithmetic, but interval arithmetic is not
basic floating-point arithmetic.)

3) That single real value is a _rational_ value. There is no
serious floating-point format where it is anything _but_ a
rational value. That's the floating part. For IEEE754, a finite,
floating-point value consists of a sign bit s, an integer
significand n, and an integer exponent e. Its numerical value
is
(-1)^s * (n / 2^k) * 2^e,
where k is the factor needed to normalize things. This is very
rational, with no illusions of magically representing irrationals. (A
REPL line after point 4 below makes this concrete.)

4) With that description, floating point values _are_ exact.
Each finite value corresponds to exactly one rational value.
Computations may yield values outside the representable set, and
_that_ is the source of inexactness. The default IEEE rules produce
the representable value closest to the exact value. You can switch
definitions of "closest" by changing rounding modes.
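
To see (3) concretely at a Lisp prompt (transcript typed from memory,
but the rational shown is the well-known exact value of the double
nearest 1/10):

> (rational 0.1d0)
3602879701896397/36028797018963968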

Jason
--

Nils Goesche

Feb 22, 2002, 7:10:56 PM
In article <pcod6yx...@thoth.math.ntnu.no>, Harald Hanche-Olsen wrote:
> (Hmmm... It just occurred to me that one possible way to limit the
> precision of rationals might be to simply chop off the same number of
> bits at the right end of numerator and denominator. But that would
> still not be good enough for very small numbers, which would have a
> moderate numerator and a giant denominator.)

If you plan to chop'em off you should use floats right away, I reckon.

Edward Jason Riedy

Feb 22, 2002, 7:15:09 PM
And Harald Hanche-Olsen writes:
-
- If you do exact arithmetic on rationals for too long, you might very
- likely end up with truly humongous numerators and denominators. So
- there is a need to limit the precision.

Mathematica pulls a trick like you suggest sometimes. It _sucks_.

You think you're working symbolically, but somewhere in the middle
it sneaks in a numerical evaluation. Without warning (to my
knowledge, which is limited to beating my head against it about
three times a year).

So you think you're getting an exact, symbolic result. But you're
not. With the right user interface, you may be able to do better.

- [1] Wrong word? Webster's defines "mantissa" as the part of a
- logarithm to the right of the decimal point, which is not quite right.

I think many people have settled on "significand".

Jason
--

Duane Rettig

Feb 22, 2002, 10:00:00 PM
e...@cs.berkeley.edu (Edward Jason Riedy) writes:

> And Duane Rettig writes:
> -
> - Such surprise is probably based on the assumption that floating point
> - mirrors mathematical concepts. It doesn't :-)
>
> - Precisely (pun intended). Floats are inexact, can't be compared, and
> - are hard to understand.
>
> - They give the impression that they can represent irrational numbers,
> - but they can't.
>
> Floating point numbers don't give impressions. Poorly written
> descriptions of floating-point systems give lousy impressions,
> like those above. The points below may not be the worded in the
> best possible way, but they are correct.

When I first read your response, especially this first paragraph,
I thought you were disagreeing with me. But upon rereading it,
it seems as though you are trying to expand upon what I said.

> 1) Floating-point arithmetic is built on mathematical concepts.
> There is no mirroring. There _is_ modelling of one mathematical
> system (requiring infinite resources) by another (bounded by
> finite resources).

Precisely what I said.

> 2) Comparisons between floating-point numbers are excruciatingly
> well-defined. That section of the standard needs no important
> content modification. We're moving the suggested (and mostly
> ignored) language bindings and fixing some terminology. That's
> the one section no one wants to change at all.
>
> You may be referring to the oft-repeated superstition about
> equality and floating-point numbers. Many people believe that
> each floating-point number represent a range of real numbers,
> and thus they don't think equality means anything. But each
> (finite) floating-point number corresponds to exactly one
> real number. Equality is perfectly well defined. And it is
> very useful.

No, I'm referring to the age-old rule of Fortran to never write
a DO loop with floating point numbers as loop variables. You
may end up with a non-terminating loop.
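
The Lisp rendition of the trap (a made-up example, not from any manual,
and bounded here so that it actually terminates):

(loop for x = 0.0d0 then (+ x 0.1d0)
      for i below 20
      thereis (= x 1.0d0))
;; => NIL; x steps right past 1.0d0 without ever being = to it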

> Sometimes you may _want_ a result only within some tolerance
> factor, for economic or numerical reasons. That's fine, but it
> doesn't invalidate equality. (And yes, comparisons are very
> confusing in interval arithmetic, but interval arithmetic is not
> basic floating-point arithmetic.)

Agreed, but irrelevant to my post.

> 3) That single real value is a _rational_ value. There is no
> serious floating-point format where it is anything _but_ a
> rational value. That's the floating part. For IEEE754, a finite,
> floating-point value consists of a sign bit s, an integer
> significand n, and an integer exponent e. It's numerical value
> is
> (-1)^s * (n / 2^k) * 2^e,
> where k is the factor needed to normalize things. This is very
> rational, with no illusions of magically representing irrationals.

If you are heavily into floating point and its issues, then such
concepts are clear, and there are no illusions. However, if you
are just a CL programmer, you might be under the false impression
that the constant PI is an irrational (knowing fully that it is
defined as the best long-float approximation to pi).

> 4) With that description, floating point values _are_ exact.
> Each finite value corresponds to exactly one rational value.

Again, precisely what I said. But your definition of "exact" is
circular, because it deals with a mapping of floats onto floats,
rather than the one-to-many mapping of floats onto reals (which
includes rationals not representable with the given precision, as
well as the irrationals). Given any two adjacent floats, there are
an infinite number of reals between them, both rational and irrational,
which can't be represented.

> Computations may yield values outside the representable set, and
> _that_ is the source of inexactness. The default IEEE rules produce
> the representable value closest to the exact value. You can switch
> definitions of "closest" by changing rounding modes.

Yet again, precisely what I said in fewer words: "floats are inexact".

Edward Jason Riedy

Feb 23, 2002, 1:19:54 AM
And Duane Rettig writes:
-
- When I first read your response, especially this first paragraph,
- I thought you were disagreeing with me. But upon rereading it,
- it seems as though you are trying to expand upon what I said.

Your first impression was correct. I guess I didn't express
my point well enough.

- No, I'm referring to the age-old rule of Fortran to never write
- a DO loop with floating point numbers as loop variables. You
- may end up with a non-terminating loop.

That is equivalent to advising someone never to write a DO loop
over even integers. If you test against equality with an odd
integer, your loop will never terminate.

And the age-old rule derives more from _binary_ than it does
from inexactness in operations. People wanted to add 0.1 ten
times and get 1.0 exactly. In a decimal floating-point system,
no problem. But 0.1 isn't exactly representable in the binary
schemes. The problem would still have occurred with decimal
arithmetic, but not often enough to merit an age-old rule.
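
The standard demonstration, at a Lisp prompt (output from memory; the
printed digits may vary slightly by implementation):

> (loop repeat 10 sum 0.1d0)
0.9999999999999999d0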

More recently, rules like that have come from compilers using
random precision on the IA32 architecture. One side of the
comparison is flushed to a narrow cell in memory, while the
other is kept to 80 bits. This problem would occur in
integers if a compiler pretended a 64-bit integer register
was 32 bits wide, so it's not a fundamental floating-point
problem.

- If you are heavily into floating point and its issues, then such
- concepts are clear, and there are no illusions. However, if you
- are just a CL programmer, you might be under the false impression
- that the constant PI is an irrational (knowing fully that it is
- defined as the best long-float approximation to pi).

Only if someone misled you about the nature of floating point,
and computers in general. I realize that few people bother
explaining the important points, and that language designers
habitually mislead people through mis-naming types. Admittedly,
the culture of scare-mongering cultivated by some numerical
analysts doesn't help.

But that shouldn't override common sense. A full expansion of
an irrational number requires infinite resources. A computer has
finite resources. If someone believes pi is treated symbolically,
well, that's a different issue.

- > 4) With that description, floating point values _are_ exact.
- > Each finite value corresponds to exactly one rational value.
-
- Again, precisely what I said. But your definition of "exact" is
- circular, because it deals with a mapping of floats onto floats,
- rather than the one-to-many mapping of floats onto reals

NO! There is _NO_ inherent one-to-many mapping. At all.
None. That is my point.

Again, a given finite floating-point number has a numerical
value equal to _precisely_one_ real number. _Not_ many.

You're thinking of a "mapping" that does not exist. There
is no mapping from floats to all reals. None. Get rid of
that notion, and you're half-way there. There is not even a
mapping from all floats to the reals. There is no NaN in
real arithmetic.

Given a rounding mode, you can convert any real to a float
abstractly. However, not all reals are amenable to being
converted in a fixed, finite amount of space. You could
always have .500...1 with just enough digits not to be
representable in the space you have (assuming round-to-
nearest).

In other words, thinking of a mapping from "all reals" to
floats (or back again) is silly. Numerical analysis needs
no such mapping. Proofs at this low level use the
correspondence between finite floating-point numbers and the
rationals (-1)^s (n/2^k) 2^e. See Peter Markstein's IA64 book
for some examples.

(That's the numerical analysis book you'll most likely find in a
book store. Cody's often has a copy. Mueller's elementary
function book is more general, but you'd probably find it less
interesting. Analysis at the perturbation theory level follows
a different path altogether, chosing to model floating-point
arithmetic with real arithmetic. That's a slight amusing role
reversal. ;) )

- > Computations may yield values outside the representable set, and
- > _that_ is the source of inexactness. The default IEEE rules produce
- > the representable value closest to the exact value. You can switch
- > definitions of "closest" by changing rounding modes.
-
- Yet again, precisely what I said in fewer words: "floats are inexact".

No. A floating-point number corresponds exactly to one real
number. Operations on floats may produce inexact answers.
_MAY_. They don't always.

So "floats are inexact" is an incorrect and misleading assertion.
"Floating-point arithmetic is not real arithmetic" is a correct
assertion. I consider "floats are inexact" equivalent to "Lisp
is only lists". It's somewhat nonsensical.

Floating-point arithmetic _models_ real arithmetic. It is not
real arithmetic in itself. Just as physics models the real
world but is not the world itself. People have no difficulties
understanding that a physical simulation of a bridge is not a
bridge.

Once people understand that floating point arithmetic is a
mathematical simulation of real arithmetic, they generally
figure out that it isn't real arithmetic itself. Sometimes it
takes knocking out a few misconceptions on the way, though.

Jason, who really wishes languages would stop calling the
types "real"...
--

Duane Rettig

Feb 23, 2002, 12:00:01 PM
e...@cs.berkeley.edu (Edward Jason Riedy) writes:

> And Duane Rettig writes:
> -
> - When I first read your response, especially this first paragraph,
> - I thought you were disagreeing with me. But upon rereading it,
> - it seems as though you are trying to expand upon what I said.
>
> Your first impression was correct. I guess I didn't express
> my point well enough.

Perhaps. But you've gone and done it again - I had a good laugh
reading this article of yours below. I even slept on it, and
it was still funny this morning. I'm sorry; you're probably
taking this very seriously, and that in and of itself is what
is so funny to me. I almost didn't answer the body of this
article, except for your sig, which explains everything. But then
I decided that there were still a couple of interesting points
to be made, so I am answering your whole article.

Note: you are coming across as being very sensitive to word usage;
perhaps if I had used the word "accurate" instead of "exact", it
might have made a difference.

> - No, I'm referring to the age-old rule of Fortran to never write
> - a DO loop with floating point numbers as loop variables. You
> - may end up with a non-terminating loop.
>
> That is equivalent to advising someone never to write a DO loop
> over even integers. If you test against equality with an odd
> integer, your loop will never terminate.

No, because when you iterate with integers, there is no surprise
if you skip over your loop end; that is simply a bug. But why
should I explain it again, when you so beautifully illustrate
my point in your next paragraph? Witness:

> And the age-old rule derives more from _binary_ than it does
> from inexactness in operations. People wanted to add 0.1 ten
> times and get 1.0 exactly. In a decimal floating-point system,
> no problem. But 0.1 isn't exactly representable in the binary
> schemes. The problem would still have occured on a decimal
> arithmetic, but not often enough to merit an age-old rule.

Nevertheless, the rule is there. I don't know if _every_ Fortran
manual has such a rule, but yesterday I did a web search to
verify that the caveat was still there, and sure enough, the
first online Fortran manual I came to had the warning against
using floats in DO loops.

> More recently, rules like that have come from compilers using
> random precision on the IA32 architecture. One side of the
> comparison is flushed to a narrow cell in memory, while the
> other is kept to 80 bits. This problem would occur in
> integers if a compiler pretended a 64-bit integer register
> was 32 bits wide, so it's not a fundamental floating-point
> problem.

Ah, yes, too much precision is a Bad Thing. But here, you're
not talking about accuracy, but about repeatability. Repeatability
is Good in computer programming, and that is of course the reason why
it is Not Good to for floats to use too many bits in their
computations.

> - If you are heavily into floating point and its issues, then such
> - concepts are clear, and there are no illusions. However, if you
> - are just a CL programmer, you might be under the false impression
> - that the constant PI is an irrational (knowing fully that it is
> - defined as the best long-float approximation to pi).
>
> Only if someone mislead you about the nature of floating point,
> and computers in general. I realize that few people bother
> explaining the important points, and that language designers
> habitually mislead people through mis-naming types. Admittedly,
> the culture of scare-mongering cultivated by some numerical
> analysts doesn't help.
>
> But that shouldn't over-ride common sense. A full expansion of
> an irrational number requires infinte resources. A computer has
> finite resources. If someone believes pi is treated symbolically,
> well, that's a different issue.

Of course.

> - > 4) With that description, floating point values _are_ exact.
> - > Each finite value corresponds to exactly one rational value.
> -
> - Again, precisely what I said. But your definition of "exact" is
> - circular, because it deals with a mapping of floats onto floats,
> - rather than the one-to-many mapping of floats onto reals
>
> NO! There is _NO_ inherent one-to-many mapping. At all.
> None. That is my point.

Such violent agreement. You really should reread my statement,
and understand why your whole paragraph (except for the "NO!")
is exactly what I said.

> Again, a given finite floating-point number has a numerical
> value equal to _precisely_one_ real number. _Not_ many.

Precisely what I said.

> You're thinking of a "mapping" that does not exist. There
> is no mapping from floats to all reals. None. Get rid of
> that notion, and you're half-way there. There is not even a
> mapping from all floats to the reals. There is no NaN in
> real arithmetic.

Again, precisely what I said (although I do like the twist
in the opposite direction with NaNs [for other c.l.l. readers,
NaN means "Not a Number"], which is a range of floats which
don't represent actual numbers).

> Given a rounding mode, you can convert any real to a float
> abstractly. However, not all reals are amenable to being
> converted in a fixed, finite amount of space. You could
> always have .500...1 with just enough digits not to be
> representable in the space you have (assuming round-to-
> nearest).

Gets back to my original statement: "Floats are inexact" (or,
if you prefer, "Floats are inaccurate"). In other words, if
you want to represent a real number, then given the vastness
of the continuum of reals, the probability of being able to
represent such a real number with a float value accurately is
very small.

> In other words, thinking of a mapping from "all reals" to
> floats (or back again) is silly. Numerical analysis needs
> no such mapping. Proofs at this low level use the
> correspondance between finite floating-point numbers and the
> rationals (-1)^s (n/2^k) 2^e. See Peter Markstein's IA64 book
> for some examples.
>
> (That's the numerical analysis book you'll most likely find in a
> book store. Cody's often has a copy. Mueller's elementary
> function book is more general, but you'd probably find it less
> interesting. Analysis at the perturbation theory level follows
> a different path altogether, chosing to model floating-point
> arithmetic with real arithmetic. That's a slight amusing role
> reversal. ;) )
>
> - > Computations may yield values outside the representable set, and
> - > _that_ is the source of inexactness. The default IEEE rules produce
> - > the representable value closest to the exact value. You can switch
> - > definitions of "closest" by changing rounding modes.
> -
> - Yet again, precisely what I said in fewer words: "floats are inexact".
>
> No. A floating-point number corresponds exactly to one real
> number. Operations on floats may produce inexact answers.
> _MAY_. They don't always.

This is the funniest passage of all. Here, in the _same_ paragraph,
you are contradicting yourself and proving my point. Floats are
inexact.

> So "floats are inexact" is an incorrect and misleading assertion.

Only if you take "exact" to mean "repeatable" or "rigorous". That
was not the meaning I was using. Perhaps if I had used the word
"accurate" it would not have touched such a nerve...

> "Floating-point arithmetic is not real arithmetic" is a correct
> assertion. I consider "floats are inexact" equivalent to "Lisp
> is only lists". It's somewhat nonsensical.

Again, substituting the word "inaccurate" for "inexact", we get the
statement "floats are inaccurate". Which is a true but incomplete
statement (because floats can indeed be accurate for some real
values).

Your statement "Lisp is only lists" is obviously false, as of
course you are implying, but consider instead the statement "Lisp
is lists". Now the latter statement is indeed true, but incomplete.
A more complete statement would be "Lisp is lists and a whole lot
of other things".

So, in summary, I disagree with your consideration, but would have
agreed with you if you had considered "floats are inexact" equivalent
to "Lisp is lists".

> Floating-point arithmetic _models_ real arithmetic. It is not
> real arithmetic in itself. Just as physics models the real
> world but is not the world itself. People have no difficulties
> understanding that a physical simulation of a bridge is not a
> bridge.
>
> Once people understand that floating point arithmetic is a
> mathematical simulation of real arithmetic, they generally
> figure out that it isn't real arithmetic itself. Sometimes it
> takes knockng out a few misconceptions on the way, though.

Good points, both. But until you actually knock out those
misconceptions, you have to deal with people having those
misconceptions. If you reread my previous two articles,
you'll find that that is really what I was saying.

> Jason, who really wishes languages would stop calling the
> types "real"...

Yes, this does explain everything. Don't try to change the
way people use language; instead, try to understand how they
use it, and you'll be a lot happier. The two oldest programming
languages use "real" for floating point types, and that's not
going to change. Just deal with it.

Bradley J Lucier

Feb 23, 2002, 12:35:43 PM
In article <41yfc9...@beta.franz.com>,

Duane Rettig <du...@franz.com> wrote:
>e...@cs.berkeley.edu (Edward Jason Riedy) writes:

Pitman says he knows almost nothing about FP, and decides to stop
talking about it.

You say you know almost nothing about FP, and seem to take it as
an incentive to post nonsense.

>> - No, I'm referring to the age-old rule of Fortran to never write
>> - a DO loop with floating point numbers as loop variables. You
>> - may end up with a non-terminating loop.
>>
>> That is equivalent to advising someone never to write a DO loop
>> over even integers. If you test against equality with an odd
>> integer, your loop will never terminate.
>
>No, because when you iterate with integers, there is no surprise
>if you skip over your loop end; that is simply a bug.

Yes, *and it's simply a bug to expect .1 added 10 times to equal 1.*
How do bugs/not bugs depend on your level of surprise? Does that
mean that newbies never write buggy programs because they're always
surprised? What nonsense.

>> And the age-old rule derives more from _binary_ than it does
>> from inexactness in operations. People wanted to add 0.1 ten
>> times and get 1.0 exactly. In a decimal floating-point system,
>> no problem. But 0.1 isn't exactly representable in the binary
>> schemes. The problem would still have occured on a decimal
>> arithmetic, but not often enough to merit an age-old rule.
>
>Nevertheless, the rule is there. I don't know if _every_ Fortran
>manual has such a rule, but yesterday I did a web search to
>verify that the caveat was still there, and sure enough, the
>first online Fortran manual I came to had the warning against
>using floats in DO loops.

So you claim that nonsense, if it comes up first in a Google search, == truth.
Which is, in itself, more nonsense, but now at the meta level.

>> More recently, rules like that have come from compilers using
>> random precision on the IA32 architecture. One side of the
>> comparison is flushed to a narrow cell in memory, while the
>> other is kept to 80 bits. This problem would occur in
>> integers if a compiler pretended a 64-bit integer register
>> was 32 bits wide, so it's not a fundamental floating-point
>> problem.
>
>Ah, yes, too much precision is a Bad Thing. But here, you're
>not talking about accuracy, but about repeatability. Repeatability
>is Good in computer programming, and that is of course the reason why
>it is Not Good to for floats to use too many bits in their
>computations.

Gee, one might think from your comments that if you repeat a computation
on IA32, you'll get different results.

I find it hilarious the way you translate "extended double precision"
to "too many bits". Those technical terms are always too hard, it's
better to use simple terms that people who don't know FP can use in
their next usenet posting.

>Gets back to my original statement: "Floats are inexact" (or,
>if you prefer, "Floats are inaccurate"). In other words, if
>you want to represent a real number, then given the vastness
>of the continuum of reals, the probability of being able to
>represent such a real number with a float value accurately is
>very small.

Yet, in practice, people do this *every day*! Conspiracy? Voodoo?
News at 11!

Brad


Deepak Goel

Feb 23, 2002, 12:44:21 PM
>
> Since allowing the exponent to get arbitrarily large wouldn't solve the
> (expt 201/2 201/2) problem, what's your point?

IMHO, it would. An answer like 2.333445454 E 20000 to the above
expression would be okay.

Of course, one may counter that it is not exact, but then what lisp
*already* returns for (expt 2 1/3) isn't exact either.


Thanks to all who followed up. In response to a post by Wade: no,
there is no problem that i am trying to solve... the minor overflow
problems i was having initially could easily be solved by using
doubles or long-floats etc. as demonstrated by another FUP here. i
guess all i was looking for was a way (package?) to ask lisp to never
return *infinity* for an answer --- if the answer does turn out to
have a huge expt-part, it would be nice if, instead of returning
infinity, it could somehow switch to some other form of representation
that does not suffer from limitations on the expt-part, which would:

Round off the mantissa at some number of decimal places (of course..),
but still represent the entire expt part.

because that seems to me, IMHO, to be the right way....
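
something like this is the kind of thing i mean (just a sketch, untested;
whether the overflow is actually signalled as a floating-point-overflow
condition, rather than coming back as an infinity or some other error,
is implementation-dependent):

(defun expt-or-parts (base power)
  "Like EXPT, but if the float result overflows, return two values
instead: a mantissa in [1,10) and the decimal exponent."
  (handler-case (expt base power)
    (floating-point-overflow ()
      (let* ((lg (* power (log base 10)))
             (e  (floor lg)))
        (values (expt 10 (- lg e)) e)))))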

Edward Jason Riedy

Feb 23, 2002, 3:10:03 PM
And Duane Rettig writes:
-
- I'm sorry; you're probably taking this very seriously, and that in
- and of itself is what is so funny to me.

Glad I could amuse.

My objection is about the phrase "floats are inexact". Floats
are data. Outside of a context, they are themselves.

- Note: you are coming across as being very sensitive to word usage;
- perhaps if I had used the word "accurate" instead of "exact", it
- might have made a difference.

Accuracy comes from producing results that are close to some
measurable value. This is a property of an entire computation,
not an individual operation. It is also a property of how the
computed value is used. Accuracy does not exist without a
context.

Exactness is a property of a single operation. If no compromises
are made due to limited precision and range, an operation's result
is exact. Hence, the inexact condition in IEEE754. This is a
firm, historically well-established definition.

Some people confuse accuracy with precision or the count of
digits in a number. Dr. Kahan has a wonderful example of why
these are different concepts: "The moon is made of blue
cheese." That's a rather precise statement (you could argue
that there are many blue cheeses), but it is not accurate.

- > NO! There is _NO_ inherent one-to-many mapping. At all.
- > None. That is my point.
-
- Such violent agreement. You really should reread my statement,
- and understand why your whole paragraph (except for the "NO!")
- is exactly what I said.

We must have some fundamental miscommunication here. I did
re-read your paragraph. Here it is again:


- Again, precisely what I said. But your definition of "exact" is
- circular, because it deals with a mapping of floats onto floats,

- rather than the one-to-many mapping of floats onto reals (which
- includes rationals not representable with the given precision, as
- well as the irrationals). Given any two adjacent floats, there are
- an infinite number of reals between them, both rational and irrational,
- which can't be represented.

Note "the one-to-many mapping of floats onto reals". That means
there is a mapping of one float to many reals, and that each real
is mapped to from some float.

My assertion is that such a mapping is nonsense. Each numerical
floating-point value maps to exactly one extended real. Period.

And the infinite reals _are_ represented, but not exactly.

One of these directions _is_ exact, and that's very important.
Floating-point numbers are not fuzzy or floating in a sea of
uncertainty. Each floating-point number corresponds exactly to
one real number.

The assertion that "floats are inexact" implies some fuzziness in
the data. This leads most people to believe that after some
number of operations, fuzziness accumulates, and naturally floats
no longer mean anything. That implication is misleading. It
gives rise to the idea that floating-point arithmetic is
inherently untrustworthy or black magic.

I've seen people follow this line of reasoning, and it always
starts from the idea that "floats are inexact". Declaring that
they are necessarily inaccurate is worse.

- > Again, a given finite floating-point number has a numerical
- > value equal to _precisely_one_ real number. _Not_ many.
-
- Precisely what I said.

Perhaps it is what you meant, but it is not what you said.

- In other words, if you want to represent a real number, then
- given the vastness of the continuum of reals, the probability
- of being able to represent such a real number with a float value
- accurately is very small.

No need to be vague. The probability is zero. Floating-point
values form a discrete set, and that has measure zero. The
reals have infinite measure. Even in 754, 0.0 / Inf == 0.0. ;)

However, you only look at "all reals" once, on input. Once you're
in the floating-point arithmetic system, you can only produce a
finite number of results from any operation.

- > - Yet again, precisely what I said in fewer words: "floats are inexact".

- >
- > No. A floating-point number corresponds exactly to one real
- > number. Operations on floats may produce inexact answers.
- > _MAY_. They don't always.
-
- This is the funniest passage of all. Here, in the _same_ paragraph,
- you are contradicting yourself and proving my point. Floats are
- inexact.

Ah, ok. You need to separate the data from the operations. Each
numerical datum corresponds exactly to one real number. The result
of an operation on some data also corresponds to exactly one real
number (when numeric), but that number may not be the same as if
you had been in real or exact arithmetic.

So the floating-point operation's result may not be exact. Floats
are exact in that each numerical float corresponds to exactly one
real number. They may be inexact only in the sense that the result
of an operation carried out with limited resources can differ from
the mathematically exact result. There is no contradiction.
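
A small illustration of that separation (a sketch; the exact values
shown assume IEEE double-floats with round-to-nearest):

    ;; The float sum is rounded once, but the result still denotes
    ;; exactly one rational:
    (rational (+ 0.1d0 0.2d0))
    ;; => 1351079888211149/4503599627370496

    ;; The exact sum of the two operands' values is a different
    ;; rational -- the difference is the single rounding error:
    (+ (rational 0.1d0) (rational 0.2d0))
    ;; => 10808639105689191/36028797018963968

    (= (rational (+ 0.1d0 0.2d0))
       (+ (rational 0.1d0) (rational 0.2d0)))
    ;; => NIL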

- Again, substituting the word "inaccurate" for "inexact", we get the
- statement "floats are inaccurate". Which is a true but incomplete
- statement (because floats can indeed be accurate for some real
- values).

Now you've assigned a partial context to accuracy. Binary
floats are as accurate a representation as possible given their
form. Decimal would likely be perceived as more accurate.

But floats in themselves are neither accurate nor inaccurate.

- The two oldest programming languages use "real" for floating
- point types, and that's not going to change. Just deal with it.

Um, Common Lisp defines FLOAT, and REAL exists as a superclass
to relate RATIONAL and FLOAT. Both clisp and sbcl return
SINGLE-FLOAT for (TYPE-OF 1.2).
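
For example (a quick sketch; TYPE-OF's answer is implementation-dependent,
but the subtype relationships are required by the standard):

    (type-of 1.2)               ; => SINGLE-FLOAT (clisp, sbcl)
    (typep 1.2 'real)           ; => T
    (typep 1/2 'real)           ; => T  -- rationals are REALs too
    (subtypep 'float 'real)     ; => T, T
    (subtypep 'rational 'real)  ; => T, T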

So there's hope yet.

Jason
--

Harald Hanche-Olsen

unread,
Feb 23, 2002, 8:46:35 AM2/23/02
to
+ Nils Goesche <car...@cartan.de>:

| In article <pcod6yx...@thoth.math.ntnu.no>, Harald Hanche-Olsen wrote:
| > (Hmmm... It just occurred to me that one possible way to limit the
| > precision of rationals might be to simply chop off the same number
| > of bits at the right end of numerator and denominator. But that
| > would still not be good enough for very small numbers, which would
| > have a moderate numerator and a giant denominator.)
|
| If you plan to chop'em off you should use floats right away, I reckon.

My thought exactly. But what we're discussing here is the situation
when you want more precision than the built-in floats can provide, like
a hundred decimal digits or more. Kent's post could almost be taken
to suggest the use of rationals for this purpose, which is why I
asked. (Yeah, I know Kent is always very precise in his choice of
words. He did not outright suggest it, which is why I wrote those
weasely words "could almost be taken to". No intention to put words
into Kent's mouth.)
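
(Just to make the chopping idea concrete, here is a throwaway sketch.
CHOP-RATIONAL is a made-up name, and this only illustrates the scheme
and its weakness for very small numbers; it is not a serious proposal.)

    (defun chop-rational (r bits)
      ;; Illustrative sketch only: drop BITS low-order bits from the
      ;; numerator and denominator of R.  If chopping would zero the
      ;; denominator, return R unchanged.
      (let ((n (ash (numerator r) (- bits)))
            (d (ash (denominator r) (- bits))))
        (if (zerop d) r (/ n d))))

    (chop-rational 355/113 4)           ; => 22/7  (ratio roughly preserved)
    (chop-rational 3/100000000000000 8) ; => 0     (moderate numerator, giant
                                        ;           denominator: chopped away)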

Edward Jason Riedy

unread,
Feb 23, 2002, 3:54:46 PM2/23/02
to
And Bradley J Lucier writes:
-
- >No, because when you iterate with integers, there is no surprise
- >if you skip over your loop end; that is simply a bug.
-
- Yes, *and it's simply a bug to expect .1 added 10 times to equal 1.*
- How do bugs/not bugs depend on your level of surprise? Does that
- mean that newbies never write buggy programs because they're always
- surprised? What nonsense.

You know this, but I'll repeat it anyways:

When you start by assuming that floating-point arithmetic is
somehow strange or untrustworthy, lots of things are no longer
your fault, so they can't be bugs.

The bugginess isn't the nonsense. It's the assumption that's
incorrect. In this case, the assumption is buoyed by the
difference between program text (0.1 in decimal) and reality
(a rounded binary representation).
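
The gap is easy to demonstrate (a sketch; the printed sum assumes IEEE
double-floats and round-to-nearest):

    ;; Ten additions of the double nearest 1/10.  Every intermediate
    ;; result is a perfectly definite double, yet the final value is
    ;; not 1:
    (loop repeat 10 sum 0.1d0)   ; => 0.9999999999999999d0 (typically)

    ;; The same loop in exact rational arithmetic:
    (loop repeat 10 sum 1/10)    ; => 1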

- So you claim that nonsense, if it comes up first in a Google
- search, == truth.

The current Fortran draft codifies that rule. I think it may
have been codified in Fortran 90 or 95, but I'm not sure. So
yes, it is truth. ;)

- Gee, one might think from your comments that if you repeat a computation
- on IA32, you'll get different results.

With some dynamic compilers, you may. It has happened to various
Java implementations in the past, and it may well have happened to
Lisps and MLs... I'm not sure if Franz Lisp gathers stats and
recompiles, but I'd imagine they've investigated it at some point.

- I find it hilarious the way you translate "extended double precision"
- to "too many bits".

You have to admit that it's a very common translation, especially
when compilers sometimes pretend that double extended is the same as
double. You've seen the bug reports for gcc (and flames).

When a programmer has requested double precision and expects strict
evaluation, it _is_ too many bits. Most languages don't give a
programmer any way to control the expression evaluation, so...

For Lisp and other mostly dynamically typed languages, the only
reasonable evaluation discipline is strict evaluation (as far as I
can see). So sneaking in extra bits can be painful. (If anyone
has ideas on how to support widest-needed evaluation in a
dynamically typed language, _please_ contact me.)

- Yet, in practice, people do this *every day*! Conspiracy? Voodoo?

Just poor definitions. Accuracy needs a context. Given a
format, IEEE754-style floating point numbers _do_ accurately
represent reals. Is it the most accurate possible in the given
number of bits? I don't know.

Some people would argue that it isn't as accurate as a decimal
arithmetic from the context of everyday usage. I generally
agree. If we had fast decimal floating-point, I doubt if
people would consider FP really unusual.

We may get it someday, but in the mean time we should describe
_why_ people still receive accurate results. The reason is
partly that the majority of computations appear to need
far less precision than is available.

It's also because there is no fuzziness. IEEE754 floating-point
arithmetic is a strict system with relatively simple rules. The
numeric values have an exact mapping to extended reals. Although
the results of operations may not be exactly the same as if the
operations had occurred in real arithmetic, the result still has
an exact mapping to the extended reals.
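
Concretely (another small sketch, again assuming IEEE doubles):

    ;; 1/3 has no exact binary representation, but the division still
    ;; produces one definite value ...
    (/ 1.0d0 3.0d0)
    ;; ... whose exact rational value RATIONAL reports:
    (rational (/ 1.0d0 3.0d0))
    ;; => 6004799503160661/18014398509481984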

(Yes, you know this, but I'm trying to drive that point into
common consciousness. ;) )

Jason
--

Duane Rettig

unread,
Feb 23, 2002, 5:00:01 PM2/23/02
to
b...@cs.purdue.edu (Bradley J Lucier) writes:

> In article <41yfc9...@beta.franz.com>,
> Duane Rettig <du...@franz.com> wrote:
> >e...@cs.berkeley.edu (Edward Jason Riedy) writes:
>
> Pitman says he knows almost nothing about FP, and decides to stop
> talking about it.
>
> You say you know almost nothing about FP, and seem to take it as
> an incentive to post nonsense.

Excuse me? Please show me where I said anything about knowing almost
nothing about FP.

Duane Rettig

unread,
Feb 23, 2002, 5:00:01 PM2/23/02
to

One more try:

e...@cs.berkeley.edu (Edward Jason Riedy) writes:

> We must have some fundamental miscommunication here. I did
> re-read your paragraph. Here it is again:
> - Again, precisely what I said. But your definition of "exact" is
> - circular, because it deals with a mapping of floats onto floats,
> - rather than the one-to-many mapping of floats onto reals (which
> - includes rationals not representable with the given precision, as
> - well as the irrationals). Given any two adjacent floats, there are
> - an infinite number of reals between them, both rational and irrational,
> - which can't be represented.
>
> Note "the one-to-many mapping of floats onto reals". That means
> there is a mapping of one float to many reals, and that each real
> is mapped to from some float.
>
> My assertion is that such a mapping is nonsense. Each numerical
> floating-point value maps to exactly one extended real. Period.

Can you _really_ read that statement and conclude that I _believe_
that there is a one-to-many mapping of floats to reals? On the
contrary, my statement was worded so that one would draw the conclusion
that such a mapping _is_ nonsense.

Erann Gat

unread,
Feb 24, 2002, 1:18:57 AM2/24/02
to
In article <4wux39...@beta.franz.com>, Duane Rettig <du...@franz.com> wrote:

> b...@cs.purdue.edu (Bradley J Lucier) writes:
>
> > In article <41yfc9...@beta.franz.com>,
> > Duane Rettig <du...@franz.com> wrote:
> > >e...@cs.berkeley.edu (Edward Jason Riedy) writes:
> >
> > Pitman says he knows almost nothing about FP, and decides to stop
> > talking about it.
> >
> > You say you know almost nothing about FP, and seem to take it as
> > an incentive to post nonsense.
>
> Excuse me? Please show me where I said anything about knowing almost
> nothing about FP.

It's good to know I'm not the only one who is occasionally misunderstood
around here! :-)

E.

Erann Gat

unread,
Feb 24, 2002, 1:37:39 AM2/24/02
to
In article <32234005...@naggum.net>, Erik Naggum <er...@naggum.net> wrote:

> Ignore the likes of Erann Gat whose job it seems to be to make people
> stop using Common Lisp unless it is somehow magically "enhanced" to his
> satisfaction.

Sigh. I was really hoping to end this, but I cannot allow this libel to
go unchallenged.

It is not my "job" nor is it my intention to dissuade anyone from using
Lisp. The more people use Lisp the better. It is my intention to point
out that from where I sit it appears that not many people are using Lisp,
and some people who would like to can't. It is my intention to engage in
some speculation about why this might be and what might be done about it.
And it is my intention to try to find like-minded people who would also
see this as a problem and who are interested in working together to do
something about it. (It is becoming increasingly clear that I won't find
them here.)

> You have a job to do: Learn Common Lisp.

On this point we agree.

E.

Erik Naggum

unread,
Feb 24, 2002, 3:20:53 AM2/24/02
to
* Erann Gat

| Sigh. I was really hoping to end this, but I cannot allow this libel to
| go unchallenged.

It was a fair summary of your position. That you do not like it when you
see what you do with other people's eyes does not make it "libel".

| (It is becoming increasingly clear that I won't find them here.)

Let me get this straight. You think that Common Lisp needs to improve
before it can be used for certain tasks, so you go use Python. However,
_other_ people should listen to your whining about how you had to choose
Python for those tasks and spend the energy to improve Common Lisp --
instead of just using Python like you did?

The premises that would need to be added to this mess to make it appear
to make sense are unfit for print.

Bijan Parsia

unread,
Feb 24, 2002, 9:52:59 AM2/24/02
to
On Sat, 23 Feb 2002, Erann Gat wrote:
[snip]

> It is not my "job" nor is it my intention to dissuade anyone from using
> Lisp. The more people use Lisp the better. It is my intention to point
> out that from where I sit it appears that not many people are using Lisp,
> and some people who would like to can't.

I thought there was the more subtle point that Lisp isn't merely a
non-mainstream choice, but an anti-mainstream choice, at least in the
sense that one faces nigh uniform HUGE resistance to using it in a *lot*
of contexts, even ones where it's pretty obvious that it would do a lot
of good.

FWIW, you can still get that for Python. I have. But it's much easier,
when encountering such blockage to say, "But, Python *is* a mainstream,
standard, sensible, safe choice."

> It is my intention to engage in
> some speculation about why this might be and what might be done about it.

Now *this* is a minefield, and rightly so. I don't know if you recognize
*how fraught* such speculation is, especially in fora where folks come in
regularly saying "If only Lisp had Foo like C++, Visual Basic, wha wha
wha." If you mean to be an effective political leader (and surely, your
intentions translate into an intention to be a political leader), it would
be good, IMHO, if you handled that a bit better (on c.l.lisp, at least).

> And it is my intention to try to find like-minded people who would also
> see this as a problem and who are interested in working together to do
> something about it.

Is the "like-minded"ness more that those who see lisp being
anti-mainstream as a problem and would like to do something about it? Or
must they also share your temperament, and feel relaxed about the way you
put things, etc.?

> (It is becoming increasingly clear that I won't find
> them here.)

If the latter, then yes, it seems so. At least, not as many as you may
have hoped and not without those who are anti-minded (in the latter
sense).

------

Also, FWIW, I don't think adding functionality or convenience is
either necessary or sufficient for getting past the
anti-mainstreamness. Java started with almost no functionality and little
convenience. There are plenty of people who find Python seriously lacking
in its libraries and some who reject it for those reasons and some who
embrace it in spite of those.

It seems to me that given the supposed Lispiness of Python, that Python
folks would be as natural a constituency for becoming Lispers/promoting Lisp
as the reverse. One way to win them over is to have a fast, complete,
powerful, portable implementation of Python in Lisp (like Jython is for
Java). Some of these second implementations have failed in the Python
community (e.g., Vyper in O'Caml, but I think that was due to the
ambitions of the implementer and his failure to get a practical,
everyday-code working version out), so it's not a promise.

The other "tractor app" I standardly wave around is XML processing, in
particular XSLT. XSLT processors are regularly evaluated in the trade
press (like XML.com, with the implementation languages discussed). The
really nice thing about a Lisp based processor is the potential smoothness
of integration between XML and sexpr (see SXML for examples).

I often wonder if it's not simple lack of *exposure*. There will always be
folks who won't like Common Lisp. But if you can expose more folks, in
lights that let them get a small grip quickly (e.g., modifying something
in a domain they understand well), I suspect you'll pull in more
people. (SK8 being implemented in MCL *certainly* helped dispel my Scheme
scorn.)

Cheers,
Bijan Parsia.

Erann Gat

unread,
Feb 24, 2002, 2:01:54 PM2/24/02
to
In article <32235276...@naggum.net>, Erik Naggum <er...@naggum.net> wrote:

> * Erann Gat
> | Sigh. I was really hoping to end this, but I cannot allow this libel to
> | go unchallenged.
>
> It was a fair summary of your position.

No, it was not. It was a gross distortion of my position. And it's not
the first time you have engaged in such distortion. It is a tactic you
commonly employ when you cannot defend your position on its merits.

> | (It is becoming increasingly clear that I won't find them here.)
>
> Let me get this straight. You think that Common Lisp needs to improve
> before it can be used for certain tasks, so you go use Python.

No. I have said this time and time again. How many more times will you
make me repeat it? Just because I have chosen to use Python (for one
class of tasks) does not mean that I want to convince other people to use
Python. Ironically, the Python people say exactly the same thing that you
do, but in reverse. They think I'm trying to dissuade people from using
Python in favor of Lisp. You're both wrong.

> However,
> _other_ people should listen to your whining about how you had to choose
> Python for those tasks and spend the energy to improve Common Lisp --
> instead of just using Python like you did?

I did not "have" to choose Python, I chose Python (for a very specific
task) of my own free will. No one held a gun to my head. It's not a
difficult concept to comprehend: just because I make a choice it does not
mean I advocate that choice for others.

E.

Erann Gat

unread,
Feb 24, 2002, 2:46:30 PM2/24/02
to
In article
<Pine.A41.4.21L1.02022...@login5.isis.unc.edu>, Bijan
Parsia <bpa...@email.unc.edu> wrote:

> On Sat, 23 Feb 2002, Erann Gat wrote:
> [snip]
>
> > It is not my "job" nor is it my intention to dissuade anyone from using
> > Lisp. The more people use Lisp the better. It is my intention to point
> > out that from where I sit it appears that not many people are using Lisp,
> > and some people who would like to can't.
>
> I thought there was the more subtle point that Lisp isn't merely a
> non-mainstream choice, but an anti-mainstream choice, at least in the
> sense that one faces nigh uniform HUGE resistance to using it in a *lot*
> of contexts, even ones where it's pretty obvious that it would do a lot
> of good.
>
> FWIW, you can still get that for Python. I have. But it's much easier,
> when encountering such blockage to say, "But, Python *is* a mainstream,
> standard, sensible, safe choice."

Yes, I think I agree with that (though I'm not sure I fully understand
your distinction between non-mainstream and anti-mainstream). Ironically,
I think Python is actually a much less "safe" language (in the technical
sense) than Lisp because of its syntactically significant white space.
(This position has made me a pariah in the Python community. I just can't
seem to help but piss some people off. Oh well, tough.)

> > It is my intention to engage in
> > some speculation about why this might be and what might be done about it.
>
> Now *this* is a minefield, and rightly so. I don't know if you recognize
> *how fraught* such speculation is, especially in fora where folks come in
> regularly saying "If only Lisp had Foo like C++, Visual Basic, wha wha
> wha." If you mean to be an effective political leader (and surely, your
> intentions translate into an intention to be a political leader), it would
> be good, IMHO, if you handled that a bit better (on c.l.lisp, at least).

All else being equal I'd rather not be a political leader. I would much
rather be doing technical work and leave the politics to others. I don't
really enjoy politics, and I'm not particularly good at it. However, I'm
not so naive that I think politics can be made to just go away. My most
optimistic hope is that I will find people who *do* enjoy politics and who
are good at it to do that work for me. I've actually had a fair amount of
success over the years because JPL has a lot of good politicians whom I
have "hired" to be my supervisors. :-)

I really wish that the *real* leaders of the Lisp community, like Kent and
Duane and Paul Graham and Peter Norvig and Dick Gabriel, and even Erik,
would lead this community in what I perceived to be a productive direction
that I could both contribute to and benefit from. Alas, what I perceive
is that this is not happening. Paul and Peter are off doing their own
thing, Kent is taking a laissez-faire approach, and Erik seems intent on
transforming Lisp from a programming language into a religion.

I believe in the maxim that one should lead, follow, or get out of the
way. I tried following, that worked for a long time, but it doesn't
today, at least not for me, and apparently not for a lot of people. So
I'm going to try leading for a while. If that doesn't work, I'll get out
of the way.

> > And it is my intention to try to find like-minded people who would also
> > see this as a problem and who are interested in working together to do
> > something about it.
>
> Is the "like-minded"ness more that those who see lisp being
> anti-mainstream as a problem and would like to do something about it? Or
> must they also share your temperament, and feel relaxed about the way you
> put things, etc.?

Yes to the first question, no to the second.

> > (It is becoming increasingly clear that I won't find
> > them here.)
>
> If the latter, then yes, it seems so. At least, not as many as you may
> have hoped and not without those who are anti-minded (in the latter
> sense).

Hm, I didn't quite follow that.

> Also, FWIW, I don't think adding functionality or convenience is the
> either necessary or sufficient to getting past the
> anti-mainstreamness. Java started with almost no-functionality and little
> convenience. There are plenty of people who find Python seriously lacking
> in its libraries and some who reject it for those reasons and some who
> embrace it in spite of those.

Yes, I agree. That's why one of the things I think needs to be done is to
hide Common Lisp (or something like it) under the guise of a new name and
surface syntax with a simple mapping to S-expressions underneath. BTW, I
state this as a proposed solution to a problem, not as gospel that has to
be taken as axiomatic. If someone has a better idea of how to solve the
problem (described in some more detail below) I'm listening.

[snip]

> I often wonder if it's not simple lack of *exposure*. There will always be
> folks who won't like Common Lisp. But if you can expose more folks, in
> lights that let them get a small grip quickly (e.g., modifying something
> in a domain they understand well), I suspect you'll pull in more
> people. (SK8 being implemented in MCL *certainly* helped dispell my Scheme
> scorn.)

Yes. Here's a theory I've developed over the years: the digital world has
become so complicated that it's impossible for any one person to keep up.
People therefore rely on the opinions of others at least as guides to
where they should focus their attention and often as guides to actual
decisions. In such a climate there is a positive-feedback effect where
ideas get accepted as truth completely irrespective of actual facts. One
influential "consultant" says something, enough people embody that opinion
into their worldview, and suddenly "everybody just knows" that X is true.
This has happened to languages like Lisp and Dylan and Eiffel: everybody
just knows that no one uses them, they aren't good for anything (except
that Lisp is good for AI), and so the vast majority of people don't even
bother to learn about them. Interestingly, the Lisp community is a little
island of this same phenomenon happening, but anchored to a different
"local maximum". In the Lisp community, Lisp is the ultimate language,
all language innovations that can possibly happen have already happened,
and Lisp incorporates them all, so there is no point in learning any
language other than Lisp. I am caricaturing, of course, and most people
don't take these extreme positions in their pure form. But (putting on my
scientist hat for a moment) this theory actually makes a testable
prediction: that most people who like Lisp don't know much about other
languages, and most people who like other languages don't know much about
Lisp. (Here's a data point: when Paul Graham started designing Arc he
didn't know Python.)

E.

Tim Bradshaw

unread,
Feb 24, 2002, 4:01:20 PM2/24/02
to
* Erann Gat wrote:
> In the Lisp community, Lisp is the ultimate language,
> all language innovations that can possibly happen have already happened,
> and Lisp incorporates them all, so there is no point in learning any
> language other than Lisp. I am caricaturing, of course, and most people
> don't take these extreme positions in their pure form. But (putting on my
> scientist hat for a moment) this theory actually makes a testable
> prediction: that most people who like Lisp don't know much about other
> languages, and most people who like other languages don't know much about
> Lisp. (Here's a data point: when Paul Graham started designing Arc he
> didn't know Python.)

This is something that could actually be tested (modulo enormous
self-selection/sampling bias) and although I'm not sure of the
usefulness of the result I'd bet (small) money that the opposite of
your caricature is true: Lisp people know *more* languages well than
other people.

For me, I'd be comfortable in CL, scheme (but frustrated) other Lisps,
perl, C (somewhat rusty now), python (haven't written significant
systems in it), some C++ (very rusty, gave up (went back to C) when it
became obviously insane), prolog, fortran (rusty), smalltalk (very
rusty) many Unixoid scripting languages (the usual bag of sh, awk, sed
&c &c). Read manuals but not used significantly: Java (could use
language, not familiar with 98% of library), TCL (actually gave this
up in disgust). Really a long time ago: BCPL, Basic (several),
assembler (several, not big-machine), Forth. I must have missed some
stuff here.

I have no idea how that compares with most people.

--tim

Wade Humeniuk

unread,
Feb 24, 2002, 4:53:30 PM2/24/02
to
"Erann Gat" <g...@jpl.nasa.gov> wrote in message
news:gat-240202...@192.168.1.50...

> bother to learn about them. Interestingly, the Lisp community is a little
> island of this same phenomenon happening, but anchored to a different
> "local maximum". In the Lisp community, Lisp is the ultimate language,
> all language innovations that can possibly happen have already happened,
> and Lisp incorporates them all, so there is no point in learning any
> language other than Lisp. I am caricaturing, of course, and most people
> don't take these extreme positions in their pure form. But (putting on my
> scientist hat for a moment) this theory actually makes a testable
> prediction: that most people who like Lisp don't know much about other
> languages, and most people who like other languages don't know much about
> Lisp. (Here's a data point: when Paul Graham started designing Arc he
> didn't know Python.)

We accept that innovations and technologies in other areas reach plateaus.
Examples from aviation include the B-52 and the Boeing 747. At the
beginning of aviation there were lots of innovations; now all commercial
airliners look approximately the same. This pattern is holding for
programming languages. Lots of innovation in the 50s, 60s and 70s. Then a
plateauing. Programming languages seem governed by physical laws just
like other things. How old is computer programming now? I think the
advances ahead now will be able to build upon what exists, not through
innovation.

The best example of how the programming community has not advanced is the
efforts to replace the US air traffic control system. With all the new
programming languages and technological advances they have dismally failed
to replace the systems, systems done long ago, with more archaic tools and
understanding, BUT, with great intelligence. The techniques developed in
the past, including the AI boom are well documented but many programmers
think they can ignore them and move on. If Lisp did not exist someone would
invent it. Lisp was created by an insight into the nature of programming
languages by its creators. There is something very fundamental about its
structure.

Yes, Lisp incorporates most programming styles. That is why I chose it. It
also has history, which means it has been shaped by actual use. How many
times did I have to write list libraries in C? Had to do resource
management (reference counting) in C++? It was a waste of time.

Knowing a programming language does not make you a good programmer.

Wade


Bulent Murtezaoglu

unread,
Feb 24, 2002, 5:26:06 PM2/24/02
to
>>>>> "TimB" == Tim Bradshaw <t...@cley.com> writes:

TimB> [...] I'd bet (small) money
TimB> that the opposite is true to your caricature: Lisp people
TimB> know *more* languages well than other people.

I'd bet the same way. I only claim to know CL (- CLOS/MOP expertise)
and C (ok tcl/tk too). But I have gotten paid (by employers who are
still happy to have employed me) to do 4GL's, COBOL, sh, basic, pascal
etc. also. I have also produced substantial amounts of code in
scheme, Fortran, Modula-II, SR (some experimental Algol/Pascal variant
for parallel stuff), MIPS and Z80 assembler, AMD 2900 bit-slice code, C++
and probably a gazillion other things I have forgotten. I have read
the Java manuals much like Tim, and decided its strength was mostly
marketing. I actually looked into Python, and yeah I'd prefer it over
Java but I have found no use for it just yet. I don't program full
time so unless somebody hires me (despite my claims of ignorance) to
fix a broken Python project I prefer to be just aware of it (ie it
didn't excite me). I think you'll find that credible people who claim
to know and like lisp either have academic CS backgrounds or are very
disciplined and gifted self-learners (I am the former). It would
therefore stand to reason that those folks usually are not from the "I
read the learn XX in 21 days, therefore I know XX now" crowd and have had
non-trivial exposure to other languages.

I claim ignorance (and rightly so) of a lot of stuff that I know more
about than most people who claim to be experts in them. I know I am
not the only lisper who is like that (KMP and yourself come to my
mind). This is just one more instance of worse is better: you don't
need optimality for success nor do you need expertise to be
productive. No news there, then.

cheers,

BM

Bulent Murtezaoglu

unread,
Feb 24, 2002, 5:36:12 PM2/24/02
to
>>>>> "EG" == Erann Gat <g...@jpl.nasa.gov> writes:
[...]
EG> I really wish that the *real* leaders of the Lisp community,
EG> like Kent and Duane and Paul Graham and Peter Norvig and Dick
EG> Gabriel, and even Erik, would lead this community in what I
EG> perceived to be a productive direction that I could both
EG> contribute to and benefit from. [...]

OK, I have read most of your postings and still do not understand what
you would percieve to be a productive direction. In the parts of this
particular posting I have deleted you seem to indicate 'hiding' s-exps
would be beneficial. I also know that lack of solid DB connectivity
libraries in MCL was a factor in your choosing Python. What else?

[...]
EG> Yes. Here's a theory I've developed over the years: the
EG> digital world has become so complicated that it's impossible
EG> for any one person to keep up.

OK, I doubt this happened recently though.

EG> People therefore rely on the
EG> opinions of others at least as guides to where they should
EG> focus their attention and often as guides to actual decisions.

Sure, this is no different than other fields.

EG> In such a climate there is a positive-feedback effect where
EG> ideas get accepted as truth completely irrespective of actual
EG> facts. One influential "consultant" says something, enough
EG> people embody that opinion into their worldview, and suddenly
EG> "everybody just knows" that X is true. [...]

I think it takes more than one influential consultant. I have been
that influential consultant occasionally in the past. The influence
usually does not stick. What does happen is that people get into a
senseless mindset that entails fear of looking stupid in front of
someone they perceive as 'very smart' and carefully parrot stuff you
uttered back to you. I think magazines, ill-informed[ing] web pages,
slashdot or even TV is more influential than any single person. I
think Erik's cognitive dissonance explanations are on target here with
a slight twist: people seem to read variants of 'learn XXX in 21 days
while acting like a dummy' and then think that they actually have
learned something. This conviction about 'knowing' something is tough
to challenge especially when livelihoods are based on this
misrepresentation. One sees this at all levels, people 'know' how to
set up network services when in fact they know how to monkey with
files until something just happens to limp along, they 'know' how to
write database backed programs because they have got 10 instances of
the same buggy program to work under a load that doesn't show the
corruption bug, they know how to use C except they couldn't explain
pointers to you or themselves etc. etc. You correctly point out that
getting Lisp accepted is a social/political problem. You'll find the
same problem wrt. C vs. C++ (VC++ monkeys will resist C even for Unix
networking code), anything vs. Java either because management sent
their monkeys to Java classes and they think they have Java experts or
that marketing thinks a server based on Java stuff is better than
anything else (both the now-defunct Ars Digita and Vignette did this
in the web/app server market. google should be able to dig up public
whinings from their engineers).

No doubt CL is not flawless, but social/political/cultural problems
masquerading as technical ones should not be in that list if we're to get
anywhere in this thread (reals? what reals?).

cheers,

BM

Bijan Parsia

unread,
Feb 24, 2002, 6:09:44 PM2/24/02
to
On Sun, 24 Feb 2002, Erann Gat wrote:

> In article
> <Pine.A41.4.21L1.02022...@login5.isis.unc.edu>, Bijan
> Parsia <bpa...@email.unc.edu> wrote:
>
> > On Sat, 23 Feb 2002, Erann Gat wrote:
[snip]

> Yes, I think I agree with that (though I'm not sure I fully understand
> your distinction between non-mainstream and anti-mainstream).

Bad terms, but reasonable concepts, I think. One can be non-mainstream
simply by being obscure or specialized. The *only* "social stigma" of
making a non-mainstream choice is that it is out of the mainstream. An
anti-mainstream choice (ugh, need a better term) is one that the
"mainstream" actively condemns. The social stigma includes
(supposedly) having made a *bad*, non-conventional choice (or rather,
a *clearly* stupid one). People think you especially perverse, rather than
just a bit eccentric.

> Ironically,
> I think Python is actually a much less "safe" language (in the technical
> sense) than Lisp because of its syntactically significant white space.

Never been a problem for me; I doubt Python folks agree.

> (This position has made me a pariah in the Python community. I just can't
> seem to help but piss some people off. Oh well, tough.)

See? :)

I don't want to argue the Python whitespace thing but that's *exactly* the
kind of battle that is, imho, both futile and counterproductive. And for,
perhaps, good reason. I would *worry* about your "tough" reaction, that
is, it seems to me a sign of missing something important about that
community.

Since the whitespace issue *hasn't* discernibly damaged Python or its
community, that seems an *especially* bad place to start improving it
:) Since significant whitespace is an *enjoyed feature* for many in that
community, who've had to fight this battle a billion times, it would
take *extraordinary* measures to keep raising it from being a total waste
of time for all involved.

[snip]


> All else being equal I'd rather not be a political leader. I would much
> rather be doing technical work and leave the politics to others. I don't
> really enjoy politics, and I'm not particularly good at it.

Yes, I'd agree with that :) (The Python example sorta shows.)

> However, I'm
> not so naive that I think politics can be made to just go away. My most
> optimistic hope is that I will find people who *do* enjoy politics and who
> are good at it to do that work for me. I've actually had a fair amount of
> success over the years because JPL has a lot of good politicians whom I
> have "hired" to be my supervisors. :-)

But this suggests that unless you can *find* such to rally behind, you may
want to brush up your own political leadership skills. Otherwise, you may
end up in a pretty frustrating situation.

I notice that in debates you tend to externalize the difficulties of those
debates, i.e., people misreading you, double standards, folks resistent to
change or uninterested in helping, Erik's tactics or personality,
etc. Even if these externalizations are largely correct, it doesn't seem
that confronting them head on has made much headway. Perhaps there are
other ways to deal?

> I really wish that the *real* leaders of the Lisp community, like Kent and
> Duane and Paul Graham and Peter Norvig and Dick Gabriel, and even Erik,

Actually, while these are all significant personages in the community, I
don't think any are leaders, *per se*. Graham seems to be trying to be, a
bit. I don't know if there *are* "general" leaders (many people take the
initiative for specific projects).

> would lead this community in what I perceived to be a productive direction
> that I could both contribute to and benefit from. Alas, what I perceive
> is that this is not happening. Paul and Peter are off doing their own
> thing, Kent is taking a laissez-faire approach, and Erik seems intent on
> transforming Lisp from a programming language into a religion.

I don't share your perception of the latter two. At all.

> > > And it is my intention to try to find like-minded people who would also
> > > see this as a problem and who are interested in working together to do
> > > something about it.
> >
> > Is the "like-minded"ness more that those who see lisp being
> > anti-mainstream as a problem and would like to do something about it? Or
> > must they also share your temperament, and feel relaxed about the way you
> > put things, etc.?
>
> Yes to the first question, no to the second.

Then I think there are many such (though I may be wrong about it). OTOH,
there are vested interests in Lisp as it is that can comfortably be
satisfied with some varient of the *status quo*. I suspect that people
with such interests may well prefer Lisp to become more mainstream *but*
not at the sacrifice of their vested interests.

Take Arc. Let's suppose, for a moment, that Arc (in its finished form) is
all the proper blend of Lisp and Python you, personally, could desire (not
just technically, but politically). It seems pretty clear that Arc won't
help people using a *large* number of Common Lisp's features (start with
CLOS :)). Arc doesn't seem to be as close to CL as Dylan. (What's wrong
with Dylan as your "popular" Lisp, btw?) Thus, people who want to use
something substantially like *Common* Lisp are not going to be remotely
helped by Arc. Indeed, they might be hurt by it.


> > > (It is becoming increasingly clear that I won't find
> > > them here.)
> >
> > If the latter, then yes, it seems so. At least, not as many as you may
> > have hoped and not without those who are anti-minded (in the latter
> > sense).
>
> Hm, I didn't quite follow that.

Hm. Maybe I don't either :)

I think you *will* find people who are interested in Lisp becoming more
mainstream, but many of them are also interested in Lisp being Lisp. So
they want to be careful in their tradeoffs.

> > Also, FWIW, I don't think adding functionality or convenience is
> > either necessary or sufficient for getting past the
> > anti-mainstreamness. Java started with almost no functionality and little
> > convenience. There are plenty of people who find Python seriously lacking
> > in its libraries and some who reject it for those reasons and some who
> > embrace it in spite of those.
>
> Yes, I agree. That's why one of the things I think needs to be done is to
> hide Common Lisp (or something like it) under the guise of a new name and
> surface syntax with a simple mapping to S-expressions underneath.

Er..Hmm. Ok, I guess I see how you got here from what I wrote :)

Personally, I don't think that a new name and surface syntax are necessary
or sufficient. See Dylan.

> BTW, I
> state this as a proposed solution to a problem,

Which problem? Parenthephobia? Anti-lisp bias?

> not as gospel that has to
> be taken as axiomatic. If someone has a better idea of how to solve the
> problem (described in some more detail below) I'm listening.

I think simple applications, that fill an important niche well, that are
easy to understand while demonstrating nifty CL features, that may be
*used* without hacking, but are easy to get into for hacking, can bring
people in. If someone's using a CL-based XSLT processor every day and it
beats out the others in convenience, features, and/or performance, then
that person is going to be inclined to think well of Common Lisp. If
adding a bit of functionality is as easy as (or easier than) tweaking
settings in Emacs, then they'll be inclined to explore more programming in
Common Lisp.

AllegroServe and mod_lisp strike me as such apps, at least in principle.

[snip]


> bother to learn about them. Interestingly, the Lisp community is a little
> island of this same phenomenon happening, but anchored to a different
> "local maximum". In the Lisp community, Lisp is the ultimate language,
> all language innovations that can possibly happen have already happened,
> and Lisp incorporates them all, so there is no point in learning any
> language other than Lisp.

I don't see that *at all*. That seems to be something an "expert" said was
true :) Worse, what you're saying may have that effect on others!

Aside from the people posting about their language experience, I see Kent
and Erik (to take two) both discussing Java and what it got right. People
seem reasonably open to my interjections of Smalltalk :) (Indeed, I
recognized several Smalltalkers 'round here.)

> I am caricaturing, of course, and most people
> don't take these extreme positions in their pure form. But (putting on my
> scientist hat for a moment) this theory actually makes a testable
> prediction: that most people who like Lisp don't know much about other
> languages, and most people who like other languages don't know much about
> Lisp.

I agree it's testable, but I don't think it's true ;)

> (Here's a data point: when Paul Graham started designing Arc he
> didn't know Python.)

He didn't know Python *at all*? Wow, that seems a bit odd. Especially if
you're going to sit down to design a language. Indeed, I kinda hope that
before *I* sat down to design a language (that I intended to catch on) I'd
survey the current state of things, just to make sure it wasn't easier to
fix something up (or just adopt it).

Cheers,
Bijan Parsia.

Erann Gat

unread,
Feb 24, 2002, 6:34:27 PM2/24/02
to
In article <ey3r8na...@cley.com>, Tim Bradshaw <t...@cley.com> wrote:

> * Erann Gat wrote:
> > In the Lisp community, Lisp is the ultimate language,
> > all language innovations that can possibly happen have already happened,
> > and Lisp incorporates them all, so there is no point in learning any
> > language other than Lisp. I am caricaturing, of course, and most people
> > don't take these extreme positions in their pure form. But (putting on my
> > scientist hat for a moment) this theory actually makes a testable
> > prediction: that most people who like Lisp don't know much about other
> > languages, and most people who like other languages don't know much about
> > Lisp. (Here's a data point: when Paul Graham started designing Arc he
> > didn't know Python.)
>
> This is something that could actually be tested (modulo enormous
> self-selection /sampling bias) and although I'm not sure of the
> usefulness of the result I'd bet (small) money that the opposite is
> true to your caricature: Lisp people know *more* languages well than
> other people.

Yeah, I'd want to keep my wager on this small as well (which is to say
that you may very well be right -- I don't know).

> For me, I'd be comfortable in CL, scheme (but frustrated) other Lisps,
> perl, C (somewhat rusty now), python (haven't written significant
> systems in it), some C++ (very rusty, gave up (went back to C) when it
> became obviously insane), prolog, fortran (rusty), smalltalk (very
> rusty) many Unixoid scripting languages (the usual bag of sh, awk, sed
> &c &c). Read manuals but not used significantly: Java (could use
> language, not familiar with 98% of library), TCL (actually gave this
> up in disgust). Really a long time ago: BCPL, Basic (several),
> assembler (several, not big-machine), Forth. I must have missed some
> stuff here.
>
> I have no idea how that compares with most people.

Part of the problem obviously is that it's hard to say when someone
"knows" a language. At one point in my life I wrote maybe 100 lines of
Forth that actually did something useful. Does that mean I know Forth?
Another problem is that languages change over time. I thought I knew C++,
but the language that was C++ when I thought I knew it is different in
significant ways from the language that is C++ today. The same is true
for Lisp and Python too. I thought I knew Lisp pretty well, but I keep
learning new things about it even after twenty years.

But what people actually know is not nearly as important as what they
think they know (I think) :-)

E.

Software Scavenger

unread,
Feb 24, 2002, 7:39:38 PM2/24/02
to
g...@jpl.nasa.gov (Erann Gat) wrote in message news:<gat-240202...@192.168.1.50>...

> Yes, I agree. That's why one of the things I think needs to be done is to
> hide Common Lisp (or something like it) under the guise of a new name and
> surface syntax with a simple mapping to S-expressions underneath. BTW, I
> state this as a proposed solution to a problem, not as gospel that has to

"Needs to be done" is such a common phrase. You can hear it in any
bar, any day. But those who say it, sit there, relaxing, all agreeing
with each other, or fighting with each other when the mood strikes,
with no real care for actually doing what needs to be done. In Common
Lisp there is no need for such common language as "needs to be done".
The Common Lisp way is to say "here is my trial-balloon implementation
of what I think would make things so much better, and here is why I
think it would make them so much better."

In Common Lisp it's almost as easy to implement something as to think
of it. The process of implementing it consists mostly of clarifying
your thoughts, till they eventually reach the point where people might
take them seriously. That point is when you have something
interesting to show them for discussion.

Tim Bradshaw

unread,
Feb 24, 2002, 5:24:42 PM2/24/02
to
* Wade Humeniuk wrote:
> Knowing a programming language does not make you a good programmer.

But I think that having practical[1] exposure to a fair number helps a
lot, which is why I think the question Erann asked is interesting (if
hard to get any real data about).

--tim

Footnotes:
[1] In other words not some 10-minute presentation in a CS degree.

Patrick

unread,
Feb 24, 2002, 10:30:56 PM2/24/02
to

"Erann Gat" <g...@jpl.nasa.gov> wrote in message
news:gat-240202...@192.168.1.50...

> [...] Here's a theory I've developed over the years: the digital world has
> become so complicated that it's impossible for any one person to keep up.
> People therefore rely on the opinions of others at least as guides to
> where they should focus their attention and often as guides to actual
> decisions. In such a climate there is a positive-feedback effect [...]

People collude to stay happy about the choices they've made, even at
the expense of dismissing what may be better choices.

That's an unfortunate but understandable aspect of the mainstream of IT. As
Java and .NET become more comprehensive and stable, I think this attitude is
likely to become more entrenched. As you point out, no programmer can know
everything about everything. We face a choice of being "jack of all trades,
master of none", or carefully selecting a few of the best languages and
striving for genuine expertise over many years.

Learning languages is easy. Learning a "platform" these days is not. Anybody
who commits to fully mastering one or two mainstream programming "platforms"
these days is undertaking a task that's large enough to be considered a form
of specialisation.

There is a huge psychological investment in specialisation because being
committed to something requires turning a temporarily blind eye to
alternatives; otherwise there can be no focus. You should not be too
surprised when you encounter this blind eye in your travels. It's
inevitable.

The real question (IMO) is: what is this _worth_ to you? Is it worth your
time, money and energy to try to change the perceptions of those who have a
strong psychological and financial interest in maintaining them? Or should
you take advantage of what you know, through your own investigation and
experience, to be superior (and trust other intelligent individuals to do
likewise, as they most certainly will - albeit in small numbers)?

Erann, you seem to have been pretty successful (and happy) doing the latter
for most of your life. Your choice of Lisp might have resulted in a few
wrangles over the years, but I think the energy you've wasted there is
_nothing_ compared to the energy you'll waste in trying to make Lisp
desirable to the mainstream.

Of course, this is only one man's idle speculation -- and I hate to be
negative. If you really want to continue your crusade, and really believe it
can succeed, and really believe that this kind of "success" is actually what
you want, I humbly suggest that you start with a proof of concept rather
than with politics.

Maybe you could join Paul Graham's effort to make a Lisp that's optimised
for the stuff you're currently doing in Python? (I, for one, wish him all
the best with this project. I think it will result in a mutually beneficial
relationship with the more stable and conservative CL community). Or maybe
you could devote some time to figuring out how Lisp (preferably Common Lisp)
could leverage Java and/or .NET's facilities for mainstream tasks, and thus
level the playing field. If you help to remove the mainstream technical
objections _first_, then you'll have a better insight into the reasons why
most programmers don't use Lisp.

Tim Bradshaw

unread,
Feb 25, 2002, 3:20:31 AM2/25/02
to
* Erann Gat wrote:
> Part of the problem obivously is that it's hard to say when someone
> "knows" a language. At one point in my life I wrote maybe 100 lines of
> Forth that actually did something useful. Does that mean I know Forth?

I think that's hard, and lots of people will claim to know things they
don't really, or not claim to know things they do really depending on
how much they want the job.

One could have some simple criteria like `been paid to program in' but
this is obviously bad - people are paid to program in all sorts of
languages which they can't program in, and should people like Linus
Torvalds be disqualified since he didn't get paid for writing Linux
(well maybe he did, but you know what I mean)?

> But what people actually know is not nearly as important as what they
> think they know (I think) :-)

Ah, I think that's dangerous. Because one way your contention could
be true is if Lisp people *think* they know that all these other
languages are no good but actually are wrong. So they'd falsely
report that they know all about lots of other languages when in fact
they have missed all the interesting stuff. Actually, your contention
kind of has to be something like that, because I think rather few
people would say `I don't know about any of that stuff, and I don't
care if it's better than Lisp'[1]. For instance I look at XML with
horror as a kind of hugely bloated but somehow less functional version
of s-expressions (and I just read an article on web services this
morning and now I'm really depressed), and I think I have some
knowledge of the area which might justify my feelings about it (used
to work for an SGML company before the web hype), but maybe I'm just
missing the point.

--tim

Footnotes:
[1] Actually, *I* might. However my reasoning is perhaps a bit more
complex: I'm sure there are good things I don't know about, but
I'm also sure that I need to earn my living and sleep and eat and
have some spare time to read, make photographs, play the guitar
and so on, and I'm happy to trade missing these possible good
things for being able to have a life.

Erik Naggum

unread,
Feb 25, 2002, 5:49:38 AM2/25/02
to
* Erann Gat

| I really wish that the *real* leaders of the Lisp community, like Kent
| and Duane and Paul Graham and Peter Norvig and Dick Gabriel, and even
| Erik, would lead this community in what I perceived to be a productive
| direction that I could both contribute to and benefit from.

The obnoxious follower is in other words looking for the right leaders to
follow ...

| Alas, what I perceive is that this is not happening.

... and faults his leaders for not being "followable".

| Erik seems intent on transforming Lisp from a programming language into a
| religion.

What was that gripe about gross distortions of your views, again? Are
you _really_ so goddamn unintelligent as you prefer to portray yourself?

You are longer on insult than I am, Erann Gat, and _way_ shorter on
contents. Your thinking here is just as vacuous as that insult is.



| I believe in the maxim that one should lead, follow, or get out of the
| way. I tried following, that worked for a long time, but it doesn't
| today, at least not for me, and apparently not for a lot of people. So
| I'm going to try leading for a while. If that doesn't work, I'll get out
| of the way.

I honestly believe you are in the way and should get the hell out of it.

Popularity contests are not fought in newsgroups. The only people who
voice their opinions are those who have a contrary view to yours, and
when that happens, you go bananas and think "religion" and what-not and
claim that it is somebody else's fault that you cannot get what you want.

The argument made by Patrick that you externalize your own problems seems
right on target.

| Yes, I agree. That's why one of the things I think needs to be done is
| to hide Common Lisp (or something like it) under the guise of a new name
| and surface syntax with a simple mapping to S-expressions underneath.

And the DYLAN experience was not enough to discourage this nonsense?

It is _not_ the s-expression syntax that ticks people off, it is the
total lack of familiarity with the _words_, the very unusual way that the
words have meaning while the syntax has (almost) none, and the fact that
you have to know _each_ word to grasp the meaning of a function that
someone else wrote. This is _not_ a problem: What is a problem is that
people want to be able to do something "useful" without knowing squat
about their tools or languages. I mean, when people have picked up K&R
and typed in the "hello, world" codelet and it runs, that is rewarding to
a rank beginner. It is _not_ rewarding to a seasoned programmer. It is
a huge mistake to offer Common Lisp to beginners. (Scheme even more so.)

| Here's a theory I've developed over the years: the digital world has
| become so complicated that it's impossible for any one person to keep up.

What does this mean? "My circle of friends has become so large that I
can no longer keep up" means that you are no longer able to obtain
information about what happens to people you care about, miss out on
deaths in their families, new or passing pets, new or passing lovers,
people moving, etc. If this is the meaning, it has _never_ been possible
for any one person to keep up with the "digital world", wherever the hell
the boundaries for _that_ is, so the criticism is _completely_ vacuous.

| People therefore rely on the opinions of others at least as guides to
| where they should focus their attention and often as guides to actual
| decisions.

And this was a change from _what_? There is absolutely no merit to the
"therefore". People have always done this, too. Local communities of
people who do things certain ways is the way human society has evolved.
There is _nothing_ peculiar about the digital world in this regard.

| In such a climate there is a positive-feedback effect where ideas get
| accepted as truth completely irrespective of actual facts.

This is true in all possible climates. Nothing special, nothing new.

| One influential "consultant" says something, enough people embody that
| opinion into their worldview, and suddenly "everybody just knows" that X
| is true.

Sure, until the next influential "consultant" comes along.

| This has happened to languages like Lisp and Dylan and Eiffel: everybody
| just knows that no one uses them, they aren't good for anything (except
| that Lisp is good for AI), and so the vast majority of people don't even
| bother to learn about them.

This line of reasoning is not empirically supported. If the majority of
people do not bother to learn Common Lisp, Dylan, or Eiffel, it is _not_
because they know that _nobody_ uses them. It is because the people they
hang out with, the people they compete with for jobs, etc, do not use it.
If a job offer comes up that requires a skill, a lot of people are smart
enough to figure out that they could get it if they had that skill, and
go learn it. If some entrepreneurial guy discovers something unusual
that he thinks will offer him an edge over his competition, he _will_ use
it to that effect.



| Interestingly, the Lisp community is a little island of this same
| phenomenon happening, but anchored to a different "local maximum".

But how the hell is this _different_ from anything? _Every_ community is
anchored to its own local maximum. Every community is different. There
is no global maximum. What is considered the most popular thing on earth
depends on who you ask.

The only problem we seem to have is that Common Lisp is not even the most
popular language in its very own community, thanks to people like you.

| In the Lisp community, Lisp is the ultimate language, all language
| innovations that can possibly happen have already happened, and Lisp
| incorporates them all, so there is no point in learning any language
| other than Lisp.

What a load of crap. If this is one of your premises, no wonder you have to
be a fucking obnoxious pest in this forum and fight what you think is
wrong, like you fight all other meaningless things you think are wrong.

| I am caricaturing, of course, and most people don't take these extreme
| positions in their pure form.

I challenge you to find that it is present in even one person regarding
even one programming language innovation. The insulting attribution of
stupidity on such a massive scale to a whole community is how you get
dethroned as a leader, not how you gain followers.

| But (putting on my scientist hat for a moment) this theory actually makes
| a testable prediction: that most people who like Lisp don't know much
| about other languages, and most people who like other languages don't
| know much about Lisp.

With a scientist hat, you would at least have called it a hypothesis.

But, good, at last a way to make you realize that you are wrong. Not
only empirically, but also logically: Common Lisp is not only hard to
find, it is, by your very own testimony, unpopular and good for nothing,
so one cannot escape wondering how people first discover it. It is _so_
very unlikely that people pick up a Common Lisp book in the computer
section of their favorite bookstore or library and say "hey, I want to
learn about computers and this language!". (Well, I actually discovered
computers and Basic that way from my favorite library as a kid because it
had classified computers right next to mathematics, but the likelihood
that it be Common Lisp is and will remain very close to zero.)

The average discoverer of Common Lisp _must_ therefore have been exposed
to at least several other languages, have found that computer programming
is rewarding and also have been _dissatisfied_ with what he was already
familiar with, or he would simply have stuck with that "tradition". In
fact, I discovered Lisp by accident and was unable to use it for anything
for years, but the ideas it had presented somehow "fit" how I thought.

Being so obnoxious as to list a lot of languages is counter-productive,
but the languages that I have taken an interest in _after_ I learned
Common Lisp appear to be even more important to defeat your stupid line
of argument than those I knew before, and those include: Java, Dylan, Ada
95, JavaScript, Python, SQL, and Perl. In all cases, I have looked for
stuff that I could learn from and incorporate into my own thinking. In a
lot of cases, I have determined, after a significant period of study,
that switching to those languages as my "mainstay" would be a very bad
idea. Java is too big for me and too much of a moving target
and also seems to depend very much on "living" in a Java community, but
that means a lot of _really_ ignorant people who know _only_ Java, and
very little about computer programming. If I cannot become filthy rich
programming Common Lisp, I no longer have the stomach (literally) to try
to become filthy rich _programming_. This is why I am preparing to
change career to law over the next few years, because, as I joke: there
are two _really_ suspicious professionals: an old programmer, and a young
lawyer.

| (Here's a data point: when Paul Graham started designing Arc he didn't
| know Python.)

That is not a data point for your hypothesis.

Raymond Toy

unread,
Feb 25, 2002, 9:29:11 AM2/25/02
to
>>>>> "Duane" == Duane Rettig <du...@franz.com> writes:

Duane> e...@cs.berkeley.edu (Edward Jason Riedy) writes:
>> And Duane Rettig writes:
>> And the age-old rule derives more from _binary_ than it does
>> from inexactness in operations. People wanted to add 0.1 ten
>> times and get 1.0 exactly. In a decimal floating-point system,
>> no problem. But 0.1 isn't exactly representable in the binary
>> schemes. The problem would still have occured on a decimal
>> arithmetic, but not often enough to merit an age-old rule.

Duane> Nevertheless, the rule is there. I don't know if _every_ Fortran
Duane> manual has such a rule, but yesterday I did a web search to
Duane> verify that the caveat was still there, and sure enough, the
Duane> first online Fortran manual I came to had the warning against
Duane> using floats in DO loops.

Perhaps that was true long ago for, say, Fortran IV and earlier, but
Fortran 77 does not have this problem. An integral iteration count is
computed from the start, end, and step values of the loop. This
iteration count is used to control the loop end condition.

See www.fortran.com and click on the information link to get to the
F77 standard.

Presumably, later standards continue this.
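
To make the contrast concrete, here is a minimal Common Lisp sketch of
both behaviours (f77-trip-count is just an ad hoc name for this post,
and the formula in its comment is a paraphrase of the F77 rule, not a
quotation of the standard):

  ;; Accumulating 0.1 and testing against 1.0 is unreliable: 0.1 has no
  ;; exact binary representation, so the sum lands near 1.0 but usually
  ;; not exactly on it.
  (let ((x 0.0d0))
    (dotimes (i 10)
      (incf x 0.1d0))
    (= x 1.0d0))                        ; typically NIL

  ;; F77-style control: compute an integer trip count once, roughly
  ;; MAX(INT((end - start + step) / step), 0), and run the loop that
  ;; many times, so rounding in the loop variable cannot change the
  ;; iteration count.
  (defun f77-trip-count (start end step)
    (max (truncate (/ (+ (- end start) step) step)) 0))

  (f77-trip-count 0.1d0 1.0d0 0.1d0)    ; => 10 (an integer trip count)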

Ray

Fred Gilham

unread,
Feb 25, 2002, 11:58:50 AM2/25/02
to

Erann Gat writes:

> But (putting on my scientist hat for a moment) this theory actually
> makes a testable prediction: that most people who like Lisp don't
> know much about other languages, and most people who like other
> languages don't know much about Lisp.


I've written significant (for-pay) programs in BASIC, 6809 assembly,
Honeywell DPS-6 assembly, C, p-best (a production system language),
COBOL, php (ouch!!!), Java, perl, and Lisp. I've programmed a lot in
Logo for volunteer work I've done in schools.

I've written at least one program in Fortran, Ada, Pascal, Forth,
Smalltalk, 6502 assembly, 8080 assembly, 68000 assembly,
etc. etc. etc.

I used to be a `language junkie', installing and trying out every new
language that came along.

Do I count as a data point?

I like Lisp.

Erann, you have enormous prestige in my eyes from the Lisp work you
did at JPL. In my mind, that work earns you a hearing.

But it doesn't earn you agreement. I simply disagree with your
current views on Lisp. Someone who wants to lead needs followers.
But a follower has to think the leader will lead him where he wants to
go. I don't think you'll do that for me.

Thinking about my programming experiences, the following occurs to me:

There's a feeling of power I get when I program in assembly, the sense
that I'm the complete master of the machine. The strange thing is
that the only programming language that gave me this same feeling is
Lisp. I know it's not the same, but somehow there's the sense that
Lisp lets me get my ideas across to the machine. I don't think I'm a
master programmer. But I think Lisp lets me reach most of my
potential.

People have already discussed this issue, but the term I like to use
is `mental overhead'. Lisp minimizes the mental overhead of
programming. At the opposite end is something like C++, where one
puts in a lot of mental effort that doesn't directly address the
problem. This is not to say that C++ programmers are stupid, in fact
almost exactly the opposite. They are very clever --- they have to
be! I'm thinking about an article I read about the constructor
problem of mixin-based programming in C++. The cleverness is consumed
by dealing with artificial problems, problems that are caused by the
formalism they express their ideas in. Lisp, after one gets over a
few conceptual hurdles, is almost transparent.

Here's a quotation from Kaz Kylheku that captures this thought:

I can't escape the sensation that I have already been thinking in
Lisp all my programming career, but forcing the ideas into the
constraints of bad languages, which explode those ideas into a
bewildering array of details, most of which are workarounds for
the language.

--
-Fred Gilham gil...@csl.sri.com
"In America, we have a two-party system. There is the stupid
party. And there is the evil party. I am proud to be a member of the
stupid party. Periodically, the two parties get together and do
something that is both stupid and evil. This is called --
bipartisanship." --Republican congressional staffer

Erann Gat

unread,
Feb 25, 2002, 12:46:38 PM2/25/02
to
In article <87u1s6f...@nkapi.internal>, Bulent Murtezaoglu
<b...@acm.org> wrote:

> OK, I have read most of your postings and still do not understand what
> you would perceive to be a productive direction. In the parts of this
> particular posting I have deleted you seem to indicate 'hiding' s-exps
> would be beneficial. I also know that lack of solid DB connectivity
> libraries in MCL was a factor in your choosing Python. What else?

That's a fair question. I'm working on a paper that will hopefully give a
more cogent answer than a bunch of newsgroup postings can. Stay tuned.

E.

Erann Gat

unread,
Feb 25, 2002, 1:09:04 PM2/25/02
to
In article
<Pine.A41.4.21L1.02022...@login3.isis.unc.edu>, Bijan
Parsia <bpa...@email.unc.edu> wrote:

> But this suggests that unless you can *find* such to rally behind, you may
> want to brush up your own political leadership skills. Otherwise, you may
> end up in a pretty frustrating situation.

I'm working on it.

> I notice that in debates you tend to externalize the difficulties of those
> debates, i.e., people misreading you, double standards, folks resistant to
> change or uninterested in helping, Erik's tactics or personality,
> etc. Even if these externalizations are largely correct, it doesn't seem
> that confronting them head on has made much headway. Perhaps there are
> other ways to deal?

I'm open to suggestions.

> (What's wrong with Dylan as your "popular" Lisp, btw?)

Perhaps nothing, but I've bandied Dylan and Eiffel about a bit, and my
perception is that they aren't "new" enough any more.

> Thus, people who want to use
> something substantially like *Common* Lisp are not going to be remotely
> helped by Arc. Indeed, they might be hurt by it.

Right. That's why one of my criteria has always been
backwards-compatibility with Common Lisp. Don't forget, I'm one of those
people who wants to use Common Lisp. The backwards-compatibility might
not necessarily be 100% though (I think it would be good to do a little
housecleaning. How many people really want to port CL applications to
TOPS20 file systems?)

> Personally, I don't think that a new name and surface syntax are necessary
> or sufficent. See Dylan.

Certainly not sufficient. I think the jury is out on necessary.

> > BTW, I
> > state this as a proposed solution to a problem,
>
> Which problem? Parenthephobia? Anti-lisp bias?

Yes.

> I don't see that *at all*. That seems to be something an "expert" said was
> true :) Worse, you're saying may have that effect on others!

I may well be projecting too much of my own mindset on others, but I
really don't see how a reasonable person could see me as advocating
knowing Lisp and nothing else at this point. A few people have come away
with the opposite impression, so let me say again: nothing I have said
should in any way be construed as advocating not learning or using Lisp.

> Aside from the people posting about their language experience, I see Kent
> and Erik (to take two) both discussing Java and what it got right. People
> seem reasonably open to my interjections of Smalltalk :) (Indeed, I
> recognized several Smalltalkers 'round here.)

Yes, but how often does a C++ advocate get a warm reception?

> > I am caricaturing, of course, and most people
> > don't take these extreme positions in their pure form. But (putting on my
> > scientist hat for a moment) this theory actually makes a testable
> > prediction: that most people who like Lisp don't know much about other
> > languages, and most people who like other languages don't know much about
> > Lisp.
>
> I agree it's testable, but I don't think it's true ;)
>
> > (Here's a data point: when Paul Graham started designing Arc he
> > didn't know Python.)
>
> He didn't know Python *at all*? Wow, that seems a bit odd.

Well, he knew *something* about it. This was a while ago and I don't want
to misrepresent Paul, but I'm pretty sure he said he had never used it.

E.

Erann Gat

unread,
Feb 25, 2002, 2:01:05 PM2/25/02
to
In article <32236229...@naggum.net>, Erik Naggum <er...@naggum.net> wrote:

> | Alas, what I perceive is that this is not happening.
>
> ... and faults his leaders for not being "followable".

No, I don't "fault" them for anything. I simply observe that the
directions in which they are leading do not seem likely to solve the
problems I am having.

> | Erik seems intent on transforming Lisp from a programming language into a
> | religion.
>
> What was that gripe about gross distortions of your views, again?

"Seems". To me. That is a statement about my perceptions, not about your
views. Your representations of my views come with no such disclaimers.
You simply state them as facts.

While I'm at it, here's what gives me the impression: 1) you wrote a long
post a while ago on the topic "what I want from my common lisp vendor" or
something like that. In it you said that everyone should stand up and
proclaim "I love Common Lisp." That feels like religion to me. 2) you
say that one of the great things about Common Lisp is that people can
change the language to suit their needs, but when someone actually does so
in a way of which you disapprove (like when John wrote his if* macro, or I
wrote my bind macro) you attack them for it. Maybe that's demagoguery
instead of religion, but I really don't want to split that kind of
semantic hair.

> | In the Lisp community, Lisp is the ultimate language, all language
> | innovations that can possibly happen have already happened, and Lisp
> | incorporates them all, so there is no point in learning any language
> | other than Lisp.
>
> What a load of crap. If this is one of your premises, no wonder you have to
> be a fucking obnoxious pest in this forum and fight what you think is
> wrong, like you fight all other meaningless things you think are wrong.
>
> | I am caricaturing, of course, and most people don't take these extreme
> | positions in their pure form.
>
> I challenge you to find that it is present in even one person regarding
> even one programming language innovation. The insulting attribution of
> stupidity on such a massive scale to a whole community is how you get
> dethroned as a leader, not how you gain followers.

I accept your challenge. The claim about language innovations was in fact
not mine, it was Wade's.

Wade Humeniuk wrote:

> Bringing it back to Lisp. I wonder if there are some people who think that
> we have to "solve" the solution to finding a better Lisp. There have been
> no computer language innovations for a LONG time, Common Lisp pretty well
> has them all.

Now, what was that point you were trying to make again?

> | But (putting on my scientist hat for a moment) this theory actually makes
> | a testable prediction: that most people who like Lisp don't know much
> | about other languages, and most people who like other languages don't
> | know much about Lisp.
>
> With a scientist hat, you would at least have called it a hypothesis.

OK. Hypothesis it is.

> But. good, at last a way to make you realize that you are wrong. Not
> only empirically, but also logically: Common Lisp is not only hard to
> find, it is, by your very own testimony, unpopular and good for nothing,

Another gross distortion. Unpopular, yes. Good for nothing, no.

> so one cannot escape wondering how people first discover it.

It was once popular. Many people discovered it then. Also, some
microcosms (like JPL) have Common Lisp advocates who promote the
language. Sometimes their efforts result in news coverage that gets the
word out to other people.

> It is _so_
> very unlikely that people pick up a Common Lisp book in the computer
> section of their favorite bookstore or library and say "hey, I want to
> learn about computers and this language!". (Well, I actually discovered
> computers and Basic that way from my favorite library as a kid because it
> had classified computers right next to mathematics, but the likelihood
> that it be Common Lisp is and will remain very close to zero.)

That is a self-fulfilling prophecy. There is no inherent reason why
Common Lisp should not be the language that people use to first discover
computers. But if enough people believe that it won't be then in fact it
won't be.

> The average discoverer of Common Lisp _must_ therefore have been exposed
> to at least several other languages, have found that computer programming
> is rewarding and also have been _dissatisfied_ with what he was already
> familiar with, or he would simply have stuck with that "tradition". In
> fact, I discovered Lisp by accident and was unable to use it for anything
> for years, but the ideas it had presented somehow "fit" how I thought.

That depends on what the demographics are. If the CL community has a lot
of young people who started programming after AI winter started then you
may be right. But if it consists mostly of older people who started CL in
the '80s, then your argument fails. This, too, is an empirical question.

> Being so obnoxious as to list a lot of languages is counter-productive,

Indeed (though I note that you proceed to do so anyway). Anecdotes prove
nothing.

> This is why I am preparing to
> change career to law over the next few years, because, as I joke: there
> are two _really_ suspicious professionals: an old programmer, and a young
> lawyer.

I wish you the best of luck in your new career.

E.

Wade Humeniuk

unread,
Feb 25, 2002, 5:17:15 PM2/25/02
to

"Erann Gat" <g...@jpl.nasa.gov> wrote in message
news:gat-250202...@192.168.1.50...

> In article <32236229...@naggum.net>, Erik Naggum <er...@naggum.net> wrote:
> I accept your challenge. The claim about language innovations was in fact
> not mine, it was Wade's.
>
> Wade Humeniuk wrote:
>
> > Bringing it back to Lisp. I wonder if there are some people who think that
> > we have to "solve" the solution to finding a better Lisp. There have been
> > no computer language innovations for a LONG time, Common Lisp pretty well
> > has them all.

And I stand by my assertion. If you need an authoritative reference, I was
at an OO programming conference in (I believe) 1991. Richard Gabriel was on a
panel about programming standardization (he was part of the CL
standardization then). Someone asked him if there had been any programming
innovations lately. He thought for a bit and answered, no, not for at least
10 years (and then referred to something about denotational semantics).
That was 10 years ago. Has something happened since?

The response that monads are new I just take with a grain of salt. I am
not sure CL needs them, since it is not a pure functional language.

Wade


Tim Bradshaw

unread,
Feb 25, 2002, 5:45:34 PM2/25/02
to
* Wade Humeniuk wrote:

> The response that Monads are new, I just take with a grain of salt. I am
> not sure CL needs them since it is not pure functional language.

I guess the interesting question would be: have any programming
innovations achieved widespread and long-term use in the last n years,
where n might be 20?

I think the answer is actually yes. The innovation that I think has
made a huge difference is the widespread acceptance of languages which
are almost entirely made up of glue for sticking together the disparate
semi-working systems which are all we can manage to build. Perl is
the most obviously successful one, but tcl, python, probably Java
count too.

I guess you could call this an anti-innovation: the realisation that
the computing world is basically a collection of huge piles of manure
of various kinds and the only important characteristic of a language
is that it has a wide variety of shovels so the programmer can remain
only neck-deep in the stuff by shoveling it as fast as possible onto
other programmers' heads.

And CL is terribly lacking in shovels, because it hasn't understood
that shoveling manure all over people is the only important thing: CL
people still think that it might be possible to actually communicate
in a higher level way instead of flinging manure at competitors.

This is really a misunderstanding of human nature of the worst kind.
Although Lisp represents it in perhaps its purest form, it's widely
seen elsewhere: look at the endless, repeated, attempts to replace
manure with structure - SGML, CORBA, XML. None of them nearly so well
thought out as CL, but all, equally, doomed.

It's manure all the way from here on in, and we are going to need more
shovels if we're going to keep our heads above it.

--tim

Damn, too much coffee again


Erik Naggum

unread,
Feb 25, 2002, 6:27:03 PM2/25/02
to
* Erann Gat

| Erik seems intent on transforming Lisp from a programming language into a
| religion.

* Erik Naggum


> What was that gripe about gross distortions of your views, again?

* Erann Gat


| "Seems". To me. That is a statement about my perceptions, not about your
| views. Your representations of my views come with no such disclaimers.
| You simply state them as facts.

Really? Your rabid dishonesty about me is really getting on my nerves.
Why is it so important to you to lie, misrepresent, defame, and hurt me?
You are a very bad person, Erann Gat. To show you just what you went
completely nuts about last time, here is your _own_ quoted text in the
article with message-ID <gat-230202...@192.168.1.50>:

In article <32234005...@naggum.net>, Erik Naggum <er...@naggum.net> wrote:

> Ignore the likes of Erann Gat whose job it seems to be to make people
> stop using Common Lisp unless it is somehow magically "enhanced" to his
> satisfaction.

Sigh. I was really hoping to end this, but I cannot allow this libel to
go unchallenged.

I responded that it was a fair summary, you said it was a gross
distortion and went nuts, but I ACTUALLY SAID "IT SEEMS TO ME", SO YOU
SHOULD HAVE GIVEN ME THE EXACT SAME LATITUDE THAT YOU DEMAND FOR YOURSELF.

| I wish you the best of luck in your new career.

Thanks. The likelihood of finding fewer and less bad guys than you in a
legal practice is one of the reasons. I am so profoundly tired of bad
people like yourself and of having no functional way to respond to your
evil. It is in part vicious, destructive, unprincipled characters like
yourself who opened my eyes to the need for legal infrastructure, people
who take _personal_ revenge upon those who do nothing more than speak
their mind, who make a suggestion they do not like, etc. In truth, we
have no way to get rid of people who have to seek personal revenge, but
at least some education in the one way our society has found to deal with
the apparently "human" tendency to irrationally hostile responses and
vigilante justice and lynch mobs seems to help.

Larry Clapp

unread,
Feb 25, 2002, 11:55:11 PM2/25/02
to
In article <gat-240202...@192.168.1.50>, Erann Gat wrote:
> But (putting on my scientist hat for a moment) this theory actually makes a
> testable prediction: that most people who like Lisp don't know much about
> other languages, and most people who like other languages don't know much
> about Lisp. (Here's a data point: when Paul Graham started designing Arc he
> didn't know Python.)

I like Lisp, though I don't know it very well (yet :). I know a goodly amount
of C, Perl, and ksh. I wrote a lot of Turbo Pascal for a while. I used to
know BASIC, but BASIC has changed almost beyond recognition since I programmed
in it. I've read a bit about Python, Java, APL, FORTH ("Forth"? not sure),
and some others.

I don't know what, if anything, this proves, but I thought I'd throw in my "me
too!". :) Does this count as an "anecdote" or a "data point"? Can enough of
the former turn them into the latter? :)

-- Larry

Erann Gat

unread,
Feb 25, 2002, 10:56:00 PM2/25/02
to
In article <32236684...@naggum.net>, Erik Naggum <er...@naggum.net> wrote:

> * Erann Gat
> | Erik seems intent on transforming Lisp from a programming language into a
> | religion.
>
> * Erik Naggum
> > What was that gripe about gross distortions of your views, again?
>
> * Erann Gat
> | "Seems". To me. That is a statement about my perceptions, not about your
> | views. Your representations of my views come with no such disclaimers.
> | You simply state them as facts.
>
> Really?

Yes. Really. To cite but one example from among many:

http://groups.google.com/groups?selm=3223192332962598%40naggum.net

> Erann Gat does not want to enhance Common Lisp, he wants to feel better.

[snip]

> Your rabid dishonesty about me is really getting on my nerves.

Likewise. What do you propose we do about it?

> Why is it so important to you to lie, misrepresent, defame, and hurt me?

Why is it so important to you to blow everything out of proportion?

> > Ignore the likes of Erann Gat whose job it seems to be to make people
> > stop using Common Lisp unless it is somehow magically "enhanced" to his
> > satisfaction.
>
> Sigh. I was really hoping to end this, but I cannot allow this libel to
> go unchallenged.
>
> I responded that it was a fair summary, you said it was a gross
> distortion and went nuts, but I ACTUALLY SAID "IT SEEMS TO ME",

No, you said "it seems to be", but I don't want to split hairs here. The
relevant point, it seems to *me*, is that we both did pretty much
exactly the same thing.

> SO YOU
> SHOULD HAVE GIVEN ME THE EXACT SAME LATITUDE THAT YOU DEMAND FOR YOURSELF.

My sentiments exactly. Isn't it amazing how much we agree?

E.

Wade Humeniuk

unread,
Feb 26, 2002, 10:30:58 AM2/26/02
to

"Tim Bradshaw" <t...@cley.com> wrote in message
news:ey3r8n9...@cley.com...

> * Wade Humeniuk wrote:
>
> And CL is terribly lacking in shovels, because it hasn't understood
> that shoveling manure all over people is the only important thing: CL
> people still think that it might be possible to actually communicate
> in a higher level way instead of flinging manure at competitors.
>
> This is really a misunderstanding of human nature of the worst kind.
> Although Lisp represents it in perhaps its purest form, it's widely
> seen elsewhere: look at the endless, repeated, attempts to replace
> manure with structure - SGML, CORBA, XML. None of them nearly so well
> thought out as CL, but all, equally, doomed.
>
> It's manure all the way from here on in, and we are going to need more
> shovels if we're going to keep our heads above it.

Ahhh.... .NET!

The important thing may be to keep the manure stacked in neat little piles
and not strewn all over the place. Maybe the programming equivalent of Dung
Beetles! Repackage it all and bury it.

Wade


Erik Naggum

unread,
Feb 26, 2002, 10:35:11 AM2/26/02
to
* Erann Gat

| Isn't it amazing how much we agree?

Yes, it is quite amazing that we agree that everything is your fault and
that the only way you will ever stop harassing me is finding someone else
to blame for all your problems.

Tim Bradshaw

unread,
Feb 26, 2002, 11:39:31 AM2/26/02
to
* Wade Humeniuk wrote:

> Ahhh.... .NET!

Don't talk to me about .NET... Isn't it just *obviously* an attempt
by MS to destroy Java since embrace-and-incompatibly-extend failed?
What else could they be doing? They have no interest in standards -
being a monopoly the only thing that counts for them is market share
in every sector, and they seem to have simply bought the entire US
legal system (and I presume Europe's too) and so no longer need to
worry about these pesky legal issues.

> The important thing may be to keep the manure stacked in neat little piles
> and not strewn all over the place. Maybe the programming equivalent of Dung
> Beetles! Repackage it all and bury it.

BIGGER SHOVELS, that's what you need: STEAM shovels that can fling the
stuff thousands of yards with pinpoint accuracy.

--tim

Gareth McCaughan

unread,
Feb 26, 2002, 8:44:25 PM2/26/02
to
Tim Bradshaw wrote:

[Erann Gat claimed that Lispers know relatively little about
other languages.]


> This is something that could actually be tested (modulo enormous
> self-selection/sampling bias) and although I'm not sure of the
> usefulness of the result I'd bet (small) money that the opposite of
> your caricature is true: Lisp people know *more* languages well than
> other people.
>
> For me, I'd be comfortable in CL, scheme (but frustrated) other Lisps,
> perl, C (somewhat rusty now), python (haven't written significant
> systems in it), some C++ (very rusty, gave up (went back to C) when it
> became obviously insane), prolog, fortran (rusty), smalltalk (very
> rusty) many Unixoid scripting languages (the usual bag of sh, awk, sed
> &c &c). Read manuals but not used significantly: Java (could use
> language, not familiar with 98% of library), TCL (actually gave this
> up in disgust). Really a long time ago: BCPL, Basic (several),
> assembler (several, not big-machine), Forth. I must have missed some
> stuff here.
>
> I have no idea how that compares with most people.

Deltas from your list:

+ Not rusty in C or C++.
+ Have written quite substantial things in Python.
- I've never written any Prolog, though in some theoretical
sense I know the language.
- Smalltalk worse than rusty.
+ BCPL, BASIC and some assemblers probably more recent than
you.
+ "Read manuals but not used significantly" would apply also
to Haskell, Modula-3.
- I wouldn't go so far as "comfortable" for other Lisps.
+ I've done (alas) some Visual Basic.

(That probably looks like a lot of deltas, but the overall
picture is very similar.)

--
Gareth McCaughan Gareth.M...@pobox.com
.sig under construc

Gareth McCaughan

unread,
Feb 26, 2002, 8:39:08 PM2/26/02
to
Erann Gat wrote:

> But (putting on my
> scientist hat for a moment) this theory actually makes a testable
> prediction: that most people who like Lisp don't know much about other
> languages, and most people who like other languages don't know much about
> Lisp. (Here's a data point: when Paul Graham started designing Arc he
> didn't know Python.)

Here's another data point. I like Lisp, and I know quite a lot
about quite a lot of other languages. *My* prediction is that
most Lispers know more about other languages than typical
users of (say) C++ or Java.

Gareth McCaughan

unread,
Feb 26, 2002, 8:58:57 PM2/26/02
to
Fred Gilham wrote:

> People have already discussed this issue, but the term I like to use
> is `mental overhead'. Lisp minimizes the mental overhead of
> programming. At the opposite end is something like C++, where one
> puts in a lot of mental effort that doesn't directly address the
> problem. This is not to say that C++ programmers are stupid, in fact
> almost exactly the opposite. They are very clever --- they have to
> be! I'm thinking about an article I read about the constructor
> problem of mixin-based programming in C++. The cleverness is consumed
> by dealing with artificial problems, problems that are caused by the
> formalism they express their ideas in. Lisp, after one gets over a
> few conceptual hurdles, is almost transparent.
>
> Here's a quotation from Kaz Kylheku that captures this thought:
>
> I can't escape the sensation that I have already been thinking in
> Lisp all my programming career, but forcing the ideas into the
> constraints of bad languages, which explode those ideas into a
> bewildering array of details, most of which are workarounds for
> the language.

One project of mine, which I might possibly get started and
do 10% of some day if I find some spare time, is to go through
a good C++ book (something like Alexandrescu's "Modern C++
design") and do in CL all the things the book explains how
to do in C++. I think the results would be enlightening, and
much more embarrassing for C++ than for CL.

Example: Alexandrescu's book has a chapter about "typelists",
which are basically a way to use the C++ template system to
manipulate lists of types. You can use them to build machinery
for testing whether something belongs to one of a fixed set of
types, or for automatically generating a piece of class hierarchy
for you. This is all very ingenious and elegant, and it's
very impressive that someone managed to do it in C++. But in
Common Lisp it's *trivial*.
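
For instance, the "does this object belong to one of a fixed set of
types" part needs no machinery at all. A rough sketch (typep-any is
just a throwaway name for this post):

  ;; Membership in a fixed set of types -- the kind of test C++
  ;; typelists are often used to build.
  (defun typep-any (object types)
    (some (lambda (type) (typep object type)) types))

  (typep-any 42 '(string integer symbol))    ; => T
  (typep-any 4.2 '(string integer symbol))   ; => NIL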

(I think it likely that some things would come out of this
that genuinely do work better in C++. No shame in that.)

Paolo Amoroso

unread,
Feb 28, 2002, 9:58:58 AM2/28/02
to
On Sun, 24 Feb 2002 11:46:30 -0800, g...@jpl.nasa.gov (Erann Gat) wrote:

> I really wish that the *real* leaders of the Lisp community, like Kent and
> Duane and Paul Graham and Peter Norvig and Dick Gabriel, and even Erik,
> would lead this community in what I perceived to be a productive direction
> that I could both contribute to and benefit from. Alas, what I perceive
> is that this is not happening. Paul and Peter are off doing their own
> thing, Kent is taking a laissez-faire approach, and Erik seems intent on
> transforming Lisp from a programming language into a religion.

Are you sure you are still keeping track of what really goes on in the Lisp
world? You only mention a few "leaders", all of whom I personally respect.
But you seem to forget that several lesser-known programmers[*] do
continually move Lisp in a productive direction with their sweat and code
(I hope no blood :) May I suggest that you subscribe to a few development
mailing lists, and get flooded with CVS commit logs?


Paolo

[*] Finding their names is left as a--highly recommended--exercise. CLiki
is a good starting point.
--
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://www.paoloamoroso.it/ency/README
[http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/]
