
Big Numbers


Software Scavenger

Oct 16, 2001, 4:22:51 AM
If you wanted to find out whether (1- (expt 2 961)) is or isn't a
prime number, would you use Lisp?

Thomas F. Burdick

Oct 16, 2001, 4:39:21 AM
cubic...@mailandnews.com (Software Scavenger) writes:

> If you wanted to find out whether (1- (expt 2 961)) is or isn't a
> prime number, would you use Lisp?

Yes. (what forum are you asking this on again?)

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'

Jochen Schmidt

Oct 16, 2001, 5:11:26 AM
Software Scavenger wrote:

> If you wanted to find out whether (1- (expt 2 961)) is or isn't a
> prime number, would you use Lisp?

It's not a prime.
I tested it with the Miller-Rabin algorithm, implemented in ANSI Common
Lisp, in about 0.3 seconds - that's fast enough for me ;-).
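
For readers who want to reproduce the check, a Miller-Rabin test can be
sketched in portable Common Lisp along the following lines. This is only an
illustration, not the implementation referred to above; EXPT-MOD and
MILLER-RABIN-P are made-up names, and the whole thing leans on CL's built-in
bignums.

(defun expt-mod (base exponent modulus)
  "Modular exponentiation by repeated squaring."
  (let ((result 1)
        (b (mod base modulus))
        (e exponent))
    (loop while (plusp e)
          do (when (oddp e)
               (setf result (mod (* result b) modulus)))
             (setf b (mod (* b b) modulus)
                   e (ash e -1)))
    result))

(defun miller-rabin-p (n &optional (rounds 20))
  "Return true if N is probably prime, NIL if N is definitely composite."
  (cond ((< n 2) nil)
        ((< n 4) t)                        ; 2 and 3 are prime
        ((evenp n) nil)
        (t
         ;; Write n-1 as 2^s * d with d odd.
         (let ((d (1- n)) (s 0))
           (loop while (evenp d) do (setf d (ash d -1)) (incf s))
           (loop repeat rounds
                 for a = (+ 2 (random (- n 3)))   ; random witness in [2, n-2]
                 always (let ((x (expt-mod a d n)))
                          (or (= x 1) (= x (1- n))
                              (loop repeat (1- s)
                                    do (setf x (mod (* x x) n))
                                    thereis (= x (1- n))))))))))

(miller-rabin-p (1- (expt 2 961))) should return NIL in well under a second,
in agreement with the result above.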

ciao,
Jochen

--
http://www.dataheaven.de

Janis Dzerins

Oct 16, 2001, 5:54:32 AM
cubic...@mailandnews.com (Software Scavenger) writes:

> If you wanted to find out whether (1- (expt 2 961)) is or isn't a
> prime number, would you use Lisp?

Where else does the form evaluate to a number?

--
Janis Dzerins

Eat shit -- billions of flies can't be wrong.

Tord Kallqvist Romstad

Oct 16, 2001, 7:21:01 AM
cubic...@mailandnews.com (Software Scavenger) writes:

> If you wanted to find out whether (1- (expt 2 961)) is or isn't a
> prime number, would you use Lisp?

No. It is easy to see that it is not a prime, without using a
computer at all. Because 961 = 31 * 31, your number cannot possibly
be a prime (a number of the form 2^n - 1 can only be prime if n is
prime).

Lisp should be a good tool for looking for Mersenne primes (primes of
the form 2^n - 1) in general, though. I just tried this in Macintosh
Common Lisp, running on a PowerBook G3 400MHz:

;; (This is the Lucas-Lehmer test for Mersenne numbers.)
(defun mersenne-prime-p (p)
  "Determines whether 2^p - 1 is a prime, assuming that p is prime."
  (let ((m (- (expt 2 p) 1)))
    (do ((i 3 (+ i 1))
         (s 4 (mod (- (* s s) 2) m)))
        ((= i (+ p 1)) (= s 0)))))

(defun prime-p (number)
  "Determines whether NUMBER is prime."
  (do ((i 2 (+ i 1)))
      ((> i (sqrt number)) t)
    (when (= (mod number i) 0)
      (return-from prime-p nil))))

(defun find-mersenne-primes (max)
  "Lists all Mersenne primes less than 2^max - 1."
  (dotimes (i (- max 3))
    (let ((n (+ i 3)))
      (when (and (prime-p n) (mersenne-prime-p n))
        (format t "2^~a - 1 is a prime.~%" n)))))


? (time (find-mersenne-primes 2000))
2^3 - 1 is a prime
2^5 - 1 is a prime
2^7 - 1 is a prime
2^13 - 1 is a prime
2^17 - 1 is a prime
2^19 - 1 is a prime
2^31 - 1 is a prime
2^61 - 1 is a prime
2^89 - 1 is a prime
2^107 - 1 is a prime
2^127 - 1 is a prime
2^521 - 1 is a prime
2^607 - 1 is a prime
2^1279 - 1 is a prime
(FIND-MERSENNE-PRIMES 2000) took 118,932 milliseconds (118.932
seconds) to run.

--
Tord Romstad

Kent M Pitman

Oct 16, 2001, 7:36:36 AM
cubic...@mailandnews.com (Software Scavenger) writes:

> If you wanted to find out whether (1- (expt 2 961)) is or isn't a
> prime number, would you use Lisp?

Let's rephrase this question the way all the people replying to it are
hearing it, in case it helps you answer your own question:

If you wanted to find out whether <very large integer expression> is or
isn't a prime number, would you use <one of the only languages in the
known universe that doesn't think it can randomly mod/truncate/lobotomize
your big numbers back down to little ones as a 'favor' to you>?

High on the list of things Lisp offers that most other languages botch is
the idea that (+ x 1) for any integer x should return a number bigger than
x in all cases. It seems like such a small point, but it's often quite
useful.
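
A quick REPL sketch of that point (the exact fixnum limit is
implementation-dependent; the values below are from one 64-bit Lisp):

? most-positive-fixnum
4611686018427387903
? (+ most-positive-fixnum 1)
4611686018427387904        ; silently promoted to a bignum, no wraparound
? (typep (+ most-positive-fixnum 1) 'bignum)
T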

Barry Margolin

Oct 17, 2001, 2:53:15 PM
In article <sfwwv1v...@world.std.com>,

Kent M Pitman <pit...@world.std.com> wrote:
>cubic...@mailandnews.com (Software Scavenger) writes:
>
>> If you wanted to find out whether (1- (expt 2 961)) is or isn't a
>> prime number, would you use Lisp?
>
>Let's rephrase this question the way all the people replying to it are
>hearing it, in case it helps you answer your own question:
>
> If you wanted to find out whether <very large integer expression> is or
> isn't a prime number, would you use <one of the only languages in the
>known universe that doesn't think it can randomly mod/truncate/lobotomize
> your big numbers back down to little ones as a 'favor' to you>?

I don't think the above expression would be the deciding factor in my
choice of language. There are bignum libraries available for most
programming languages. If I had an application written in language X and I
needed to add the ability to check for Mersenne primes to it, I would
simply link in that library rather than rewrite the whole thing in Lisp
just because it has built-in bignum support. Even if the application
wasn't already written, I would look at the entire application, and other
factors like availability of support personnel, to decide which language to
use.

--
Barry Margolin, bar...@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Erik Naggum

Oct 17, 2001, 7:02:34 PM
* Barry Margolin <bar...@genuity.net>

| There are bignum libraries available for most programming languages. If
| I had an application written in language X and I needed to add the
| ability to check for Mersenne primes to it, I would simply link in that
| library rather than rewrite the whole thing in Lisp just because it has
| built-in bignum support.

CLISP's bignum library was recently made available as C/C++ classes. I
have not looked at it, but there is no way to make the regular integers
in C/C++ overflow properly and spontaneously create a new type, so you
would have to make _some_ changes to your source code to use bignums, not
to mention what you have to do when bignums are to be used in a call to
some other function that expects a regular integer. Having several
_disjoint_ integer types is not a huge win as far as language features go.

///
--

Barry Margolin

Oct 17, 2001, 7:46:08 PM

You're certainly correct if what you're doing is adding bignum support
throughout an entire application that previously handled limited-precision
numbers.

But if you're just adding Mersenne prime-checking to an existing
application that wasn't using bignums, it seems safe to assume that the
application otherwise doesn't need to deal with big numbers. The Mersenne
subroutine would convert the number to bignum, call the functions in the
library, and return a boolean. With proper modularity, few changes should
be required to the rest of the application.

If you're implementing an application from scratch and it needs to use
bignums extensively (e.g. most crypto applications), using a bignum library
is not much harder than using any other classes for other specialized data
types. It seems unlikely that the results of these bignum calculations are
going to be needed as arguments to system libraries that expect ordinary
integers (e.g. there's no reason to use one of these numbers as the buffer
size in a call to read()); about the only place where you're likely to want
to be able to use either kind of integer interchangeably would be in
formatted I/O, and most OO languages (including C++) make it relatively
easy to do this.

Is it nicer to have seamless integration between fixnum and bignum? Sure
it is. But is it necessary? I don't think so. Any competent programmer
who can't deal with the issues of converting at the appropriate interfaces
doesn't deserve the title "competent programmer". I can't imagine this
being the deciding factor in choosing a language. I don't even consider
bignum support to be a defining feature of Lisp -- does Emacs Lisp not
deserve to be called Lisp just because it doesn't have bignums (and what
about Emacs 18 and earlier, which didn't have floats, either)? It's a nice
feature to have, but how many applications *really* need it? I suppose
it's needed in many implementations because their data representation
steals some bits, so an immediate fixnum might be only 24 or 28 bits rather
than the full 32 bits supported by the hardware, and in the early days of
Lisp, immediate fixnums were probably only 16 or 18 bits on some machines;
once you take the step of unboxing some numbers, it's not a big leap to
support bignums.

Macsyma was written in Lisp, so it got its bignum support for free.
Mathematica is a similar application that I presume was written in C or
Pascal. Do you think that C's lack of built-in bignums made a significant
difference (i.e. more than a percent or two) in the difficulty of
implementing Mathematica? I think Lisp is a far better language for
implementing this type of application because of its better support for
complex webs of data structures, *not* because of bignums; that's just the
cherry on top.

Marco Antoniotti

Oct 17, 2001, 8:05:42 PM

Barry Margolin <bar...@genuity.net> writes:

...

> Mathematica is a similar application that I presume was written in C or
> Pascal. Do you think that C's lack of built-in bignums made a significant
> difference (i.e. more than a percent or two) in the difficulty of
> implementing Mathematica? I think Lisp is a far better language for
> implementing this type of application because of its better support for
> complex webs of data structures, *not* because of bignums; that's just the
> cherry on top.

It has been a number of years, but I remember that - at least then -
interacting with the Mathematica kernel at a low level (i.e. C) pretty
much exposed its "Lisp Nature". :)

Every sufficiently complex application.....

Cheers

--
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group tel. +1 - 212 - 998 3488
719 Broadway 12th Floor fax +1 - 212 - 995 4122
New York, NY 10003, USA http://bioinformatics.cat.nyu.edu
"Hello New York! We'll do what we can!"
Bill Murray in `Ghostbusters'.

Erik Naggum

Oct 17, 2001, 10:12:39 PM
* Barry Margolin

| Any competent programmer who can't deal with the issues of converting at
| the appropriate interfaces doesn't deserve the title "competent
| programmer".

Ignoring for purposes of the discussion the general absence of competence
among programmers, the most common error in statically typed languages
that have hardware-representation-oriented, disjoint integer types is to
specify too narrow a type, including the increasingly annoying case of
the widest hardware type not being wide enough, as witness the need to
deal with 64-bit file sizes on traditional 32-bit operating systems, not
to mention the many 16-bit limitations that continue to annoy the Unix
world, or the 8-bit limitation on character representation...

There are many ways to specify integer ranges in various languages that
make programmers choose between wasting space and getting wrong results
_quietly_ after overflows, instead of ignoring space issues and getting
the right results always. Some languages elect to raise exceptions upon
overflow, but what are you going to do about it? (We have no concept of
"infinity" for integers.)

| I can't imagine this being the deciding factor in choosing a language.

I can imagine it. It does not take more than two languages that differ
only in their bignum support, and considering the proliferation of both
languages and implementations-called-languages, this situation will come
up if it has not already.

| It's a nice feature to have, but how many applications *really* need it?

That depends entirely on how big your integer is.

| Do you think that C's lack of built-in bignums made a significant
| difference (i.e. more than a percent or two) in the difficulty of
| implementing Mathematica?

I have no idea, but considering the work required to get fast bignums,
leaving it to people who know how to do it seems like a good idea.

| I think Lisp is a far better language for implementing this type of
| application because of its better support for complex webs of data
| structures, *not* because of bignums; that's just the cherry on top.

You could say the same about the support for complex numbers, too. There
are a _lot_ of these cherries on top. Take away too many, and you no
longer have a cherry-topped language. I mean, even Scheme got all of
this number stuff right. And if Scheme has it, it cannot be removed
without making the language less than minimal, now can it?

///
--
Norway is now run by a priest from the fundamentalist Christian People's
Party, the fifth largest party representing one eighth of the electorate.

Barry Margolin

Oct 18, 2001, 11:21:34 AM
In article <32123599...@naggum.net>, Erik Naggum <er...@naggum.net> wrote:
>* Barry Margolin
>| I can't imagine this being the deciding factor in choosing a language.
>
> I can imagine it. It does not take more than two languages that differ
> only in their bignum support, and considering the proliferation of both
> languages and implementations-called-languages, this situation will come
> up if it has not already.

I thought we were discussing choosing languages, not implementations.
Certainly if I had to choose between Emacs Lisp and Common Lisp, and the
application were heavily numeric, CL's better numeric support (bignums,
rationals, and imaginaries) would probably be the reason.

But when deciding between Lisp and C++, for instance, bignums would be a
pretty low priority criterion.

>| It's a nice feature to have, but how many applications *really* need it?
>
> That depends entirely on how big your integer is.
>
>| Do you think that C's lack of built-in bignums made a significant
>| difference (i.e. more than a percent or two) in the difficulty of
>| implementing Mathematica?
>
> I have no idea, but considering the work required to get fast bignums,
> leaving it to people who know how to do it seems like a good idea.

It seems like someone writing an application like Mathematica would be
expected to know how to do it. Compiler writers would be expected to have
expertise in parsing, code generation, and the like; mathematical expertise
would not necessarily be their forte.

>| I think Lisp is a far better language for implementing this type of
>| application because of its better support for complex webs of data
>| structures, *not* because of bignums; that's just the cherry on top.
>
> You could say the same about the support for complex numbers, too. There
> are a _lot_ of these cherries on top. Take away too many, and you no
> longer have a cherry-topped language. I mean, even Scheme got all of
> this number stuff right. And if Scheme has it, it cannot be removed
> without making the language less than minimal, now can it?

Yes, I agree that the large number of cherries is a big plus on the side of
Lisp. That's why we think it's a great language in general. If you can
get away with it, you would use it for almost everything.

But I interpreted the original question as whether built-in bignum support,
considered all by itself, would be sufficient reason to choose Lisp for a
particular application. When choosing languages, I don't think any single
feature should be a deciding factor. You have to look at the whole
language, as well as the way the application will be developed, deployed,
and maintained, to make this choice. Lisp may be great, but C/C++ (and
Java, these days) programmers are a dime a dozen (and worth every penny).

Jochen Schmidt

Oct 18, 2001, 11:45:45 AM
Barry Margolin wrote:

> In article <32123599...@naggum.net>, Erik Naggum <er...@naggum.net>
> wrote:
>>* Barry Margolin
>>| I can't imagine this being the deciding factor in choosing a language.
>>
>> I can imagine it. It does not take more than two languages that differ
>> only in their bignum support, and considering the proliferation of both
>> languages and implementations-called-languages, this situation will come
>> up if it has not already.
>
> I thought we were discussing choosing languages, not implementations.
> Certainly if I had to choose between Emacs Lisp and Common Lisp, and the
> application were heavily numeric, CL's better numeric support (bignums,
> rationals, and imaginaries) would probably be the reason.
>
> But when deciding between Lisp and C++, for instance, bignums would be a
> pretty low priority criterion.

I don't think so - I've read through thousands of lines of crypto code
written in C and C++ and found it _very_ unreadable, even when classes and
operator overloading are used in C++. By contrast, I was very surprised at
how small and readable an implementation of RSA looks in Common Lisp.

The only things I miss for writing crypto code in CL are fast
implementations of arithmetic in several important modular rings like Z32
(arithmetic modulo 2^32, for fast DES, SHA, MD5, Blowfish...) and a fast
implementation of modular exponentiation. I have a reasonably fast portable
implementation of EXPT-MOD, but to be really fast it would have to be
implemented on a lower (system-dependent) layer.
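
As a rough illustration of what is meant here, the bit operations themselves
are easy to write portably with LDB; what is missing is implementation
support to compile them down to raw 32-bit machine arithmetic. MOD32+ and
ROTL32 below are made-up names, just a sketch:

(declaim (inline mod32+ rotl32))

(defun mod32+ (a b)
  "Addition in the ring of integers modulo 2^32."
  (ldb (byte 32 0) (+ a b)))

(defun rotl32 (x n)
  "Rotate the 32-bit word X left by N bits (0 < N < 32), as in MD5 or SHA-1."
  (ldb (byte 32 0) (logior (ash x n) (ash x (- n 32)))))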

I wanted to propose EXPT-MOD here in c.l.l for inclusion in the
"common-lisp-utilities" (like PARTITION/SPLIT-SEQUENCE),
encouraging the vendors to offer an implementation that is faster than the
portable code.

>>| I think Lisp is a far better language for implementing this type of
>>| application because of its better support for complex webs of data
>>| structures, *not* because of bignums; that's just the cherry on top.
>>
>> You could say the same about the support for complex numbers, too.
>> There
>> are a _lot_ of these cherries on top. Take away too many, and you no
>> longer have a cherry-topped language. I mean, even Scheme got all of
>> this number stuff right. And if Scheme has it, it cannot be removed
>> without making the language less than minimal, now can it?
>
> Yes, I agree that the large number of cherries is a big plus on the side
> of
> Lisp. That's why we think it's a great language in general. If you can
> get away with it, you would use it for almost everything.
>
> But I interpreted the original question as whether built-in bignum
> support, considered all by itself, would be sufficient reason to choose
> Lisp for a
> particular application. When choosing languages, I don't think any single
> feature should be a deciding factor. You have to look at the whole
> language, as well as the way the application will be developed, deployed,
> and maintained, to make this choice. Lisp may be great, but C/C++ (and
> Java, these days) programmers are a dime a dozen (and worth every penny).

I think there is simply no reason for a language not to have built-in
support for arbitrarily long integers today. The fact that the Java (and C#!)
designers failed on this point only shows how short-sightedly they think... ;-)

Erik Naggum

Oct 18, 2001, 2:06:18 PM
* Erik Naggum

| I can imagine it. It does not take more than two languages that differ
| only in their bignum support, and considering the proliferation of both
| languages and implementations-called-languages, this situation will come
| up if it has not already.

* Barry Margolin


| I thought we were discussing choosing languages, not implementations.

"Implementations-called-languages" was a reference to the many languages
that have only one implementation of themselves, barely meriting the
"language" label. I have no idea what you thought it meant, but it seems
you grew hostile because of it.

| But I interpreted the original question as whether built-in bignum
| support, considered all by itself, would be sufficient reason to choose
| Lisp for a particular application.

That seems like an unwarranted interpretation -- it is obviously so silly
it would cause people to become hostile if they indeed meant such a thing.

I interpreted it as a _necessary_ condition, not at all _sufficient_, or
if you really have to: a sufficient reason to _reject_ a language, but
not choose it.

Lieven Marchand

Oct 18, 2001, 1:06:19 PM
Barry Margolin <bar...@genuity.net> writes:

> It seems like someone writing an application like Mathematica would be
> expected to know how to do it. Compiler writers would be expected to have
> expertise in parsing, code generation, and the like; mathematical expertise
> would not necessarily be their forte.

For this reason I would not expect a typical CL implementation to be
very efficient for bignums. It's certainly not required for a lot of
common code. I seem to recall an ACM article by JonL White detailing
measurements on a compiler, showing that most integers were fixnums, that
the second most common case had the size of two fixnums, and that the
remaining cases were negligible.

CMUCL's implementation, last time I looked at it, used the
multiply-by-chunk-and-add algorithm (O(n^2)), not even one of the speedups
through divide and conquer. Compare this with the implementations used for
things like the record pi calculations (some billions of decimals), where a
total of 5 multiplication algorithms are used and the switching points
between them have to be carefully tuned for each machine. I don't think CL
vendors perceive the speed of their bignum implementation as that important
to their customers.
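
For the curious, the simplest of those divide-and-conquer speedups,
Karatsuba's method, can be sketched in a few lines over ordinary CL integers;
a real bignum library would of course work on digit vectors and choose the
algorithm by operand size. KARATSUBA and its THRESHOLD are made up for the
sketch, and the base case simply defers to CL's own multiplier:

(defun karatsuba (x y &optional (threshold 512))
  "Multiply non-negative integers X and Y by Karatsuba's method."
  (if (or (< (integer-length x) threshold)
          (< (integer-length y) threshold))
      (* x y)
      (let* ((n  (ash (max (integer-length x) (integer-length y)) -1))
             (x1 (ash x (- n)))  (x0 (ldb (byte n 0) x))
             (y1 (ash y (- n)))  (y0 (ldb (byte n 0) y))
             (z2 (karatsuba x1 y1 threshold))
             (z0 (karatsuba x0 y0 threshold))
             (z1 (- (karatsuba (+ x1 x0) (+ y1 y0) threshold) z2 z0)))
        ;; x*y = z2*2^(2n) + z1*2^n + z0
        (+ (ash z2 (* 2 n)) (ash z1 n) z0))))

Three half-size multiplications instead of four is what brings the cost below
O(n^2).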

--
Lieven Marchand <m...@wyrd.be>
She says, "Honey, you're a Bastard of great proportion."
He says, "Darling, I plead guilty to that sin."
Cowboy Junkies -- A few simple words

Kaz Kylheku

Oct 18, 2001, 4:23:11 PM
In article <vskz7.5$Ki7.1969@burlma1-snr2>, Barry Margolin wrote:
>In article <sfwwv1v...@world.std.com>,
>Kent M Pitman <pit...@world.std.com> wrote:
>>cubic...@mailandnews.com (Software Scavenger) writes:
>>
>>> If you wanted to find out whether (1- (expt 2 961)) is or isn't a
>>> prime number, would you use Lisp?
>>
>>Let's rephrase this question the way all the people replying to it are
>>hearing it, in case it helps you answer your own question:
>>
>> If you wanted to find out whether <very large integer expression> is or
>> isn't a prime number, would you use <one of the only languages in the
>> known universe that doesn't think it can randomly mod/truncate/lobotomize
>> your big numbers back down to little ones as a 'favor' to you>?
>
>I don't think the above expression would be the deciding factor in my
>choice of language. There are bignum libraries available for most
>programming languages.

But they don't smoothly integrate into most other languages. Instead,
they are lame add-on gadgets, and it shows.

The idea in Lisp is not to have an additional type, but rather integers
which approximate mathematical integers within the limits of the available
storage. The division between fixnum and bignum exists simply for
pragmatic reasons; the programmer doesn't have to be aware of it except
when trying to optimize.
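
A small sketch of that split in Common Lisp terms: the declarations below are
only there if you are chasing fixnum-only speed, and the names are made up;
without them the same arithmetic simply keeps working when values outgrow
fixnums.

(defun sum-small (xs)
  "Sum a list of integers known to stay within fixnum range."
  (declare (optimize (speed 3)))
  (let ((acc 0))
    (declare (fixnum acc))           ; a promise to the compiler, nothing more
    (dolist (x xs acc)
      (incf acc (the fixnum x)))))

(reduce #'+ (list (expt 2 100) (expt 2 200)))  ; just works; result is a bignum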

If we add on bignums to some language like C++, that does not cause its
existing integers to behave like mathematical integers; we only gain
the ability to do some special bignum processing. The rest of the code
will still bomb out when it hits some CPU-and-compiler-imposed limit like
32767 or 2147483647.

Most C and C++ programmers I have come across don't know how to select
an appropriate standard C and C++ integral type for a given situation.
They believe in stupid myths, such as short int being required to be
exactly 16 bits wide, and long being exactly 32 bits wide. Some invent
machine-oriented type names like int16 and int32, hoping that future
maintainers will always be able to assign each of these to a type which
has the property suggested by the name.

Other Stone Age languages are not immune to these problems, despite
bogus claims of safety. Part of the reason for the Ariane rocket crash
was that its control program---written in Ada---used a 16 bit
representation for a real-world dynamic quantity which in that
implementation of the rocket didn't fit into the range, causing a floating
point to integer conversion exception.

PKY

Oct 19, 2001, 1:33:16 AM
Barry Margolin <bar...@genuity.net> wrote in message news:<2sCz7.9$ji2.936@burlma1-snr2>...

> In article <32123599...@naggum.net>, Erik Naggum <er...@naggum.net> wrote:
> >* Barry Margolin
> >| I can't imagine this being the deciding factor in choosing a language.
> >
> > I can imagine it. It does not take more than two languages that differ
> > only in their bignum support, and considering the proliferation of both
> > languages and implementations-called-languages, this situation will come
> > up if it has not already.

> It seems like someone writing an application like Mathematica would be
> expected to know how to do it. Compiler writers would be expected to have
> expertise in parsing, code generation, and the like; mathematical expertise
> would not necessarily be their forte.

Except when writing a compiler for Lisp and dealing with float-consing
optimizations, fixnum overflow optimizations, bignum shift optimizations,
and bignum log* operations. There it helps to understand both assembler
and a little bit of arithmetic.

It also comes to mind that having a certain kind of understanding of a
program's time complexity is more math than computer science.

And that makes a big difference in CL between (optimize (speed 3)) and
compilation-time measures.


> But I interpreted the original question as whether built-in bignum support,
> considered all by itself, would be sufficient reason to choose Lisp for a
> particular application. When choosing languages, I don't think any single
> feature should be a deciding factor. You have to look at the whole
> language, as well as the way the application will be developed, deployed,
> and maintained, to make this choice. Lisp may be great, but C/C++ (and
> Java, these days) programmers are a dime a dozen (and worth every penny).

CL is great; with C++ and assembler you can do most of the tricks you can
do in CL, but Java as a language sucks big time (no macros, no templates;
if you have the same kind of piece of code, you write it out again every
time).

PKY

Roger Corman

Oct 19, 2001, 4:21:11 AM
On Thu, 18 Oct 2001 17:45:45 +0200, Jochen Schmidt <j...@dataheaven.de> wrote:
>
>I think there is simply no reason for a language not to have built-in
>support for arbitrarily long integers today. The fact that the Java (and C#!)
>designers failed on this point only shows how short-sightedly they think... ;-)
>
>ciao,
>Jochen

Oh, but Java supplies it in a standard library: BigInteger, and BigDecimal
(which is a wrapper on BigInteger).

Unfortunately performance is not great. We had some code running on fast Solaris
servers that brought the 4 CPUs to their knees because somebody was initializing
a variable to:

new BigDecimal(Double.MAX_VALUE)

Each call to this took 125 milliseconds of pure CPU usage!

The performance of BigInteger and BigDecimal (which are comparable) is very bad
when the numbers get bigger than builtin longs, and Sun's JDK implementations
are so bad that several commercial vendors have built replacements (not drop-in
unfortunately--you have to modify all your source files to point to the correct
package).

Here are some timings for the Sun JDK 1.3, ArciMath (a commercial replacement),
and the equivalent code in Corman Lisp:

call new BigDecimal(Double.MAX_VALUE)   (1000x)
  Under Windows, JDK 1.3            19.25   seconds
  Under Solaris, JDK 1.3           125.5    seconds
  Windows, ArciMath BigDecimal        .266  seconds
  Solaris, ArciMath BigDecimal        .984  seconds
  Corman Lisp 1.5                     .004  seconds

call new BigDecimal(1000000000.0)   (1000x)
  Under Windows, JDK 1.3            40      milliseconds
  Under Solaris, JDK 1.3            70      milliseconds
  Windows, ArciMath BigDecimal      10.2    milliseconds
  Solaris, ArciMath BigDecimal      40      milliseconds
  Corman Lisp 1.5                    1.9    milliseconds

Add this to the "Lisp is slow" standard response. Besides noticing how bad
the Java performance is in comparison (this is HotSpot, generally a fast VM),
also notice how much slower the SPARC server is than a 1 GHz Windows
workstation. I think the SPARCstation is 500 MHz (or so).

Roger

Lieven Marchand

Oct 19, 2001, 4:24:26 PM
k...@ashi.footprints.net (Kaz Kylheku) writes:

> Other Stone Age languages are not immune to these problems, despite
> bogus claims of safety. Part of the reason for the Ariane rocket crash
> was that its control program---written in Ada---used a 16 bit
> representation for a real-world dynamic quantity which in that
> implementation of the rocket didn't fit into the range, causing a floating
> point to integer conversion exception.

As part of a language community that is the victim of a lot of
unfounded myths, we shouldn't make the same errors towards other
languages. Whether or not you want to call Ada a Stone Age language,
it has excellent support for integer ranges, it supports separate
representation clauses to detail how you want your types to be laid
out in memory and it certainly has a far better thought out model of
integers, fixed and floating point than most languages. The Ariane
incident was caused by a decision to reuse a component for the Ariane
4 in the Ariane 5 control program without testing. Since the Ariane 5
could go faster than the Ariane 4, a variable went out of range, which
properly generated an exception; the exception wasn't handled, which made
the control program decide to terminate the flight.

Dieter Menszner

Oct 20, 2001, 1:19:33 AM
Lieven Marchand wrote:

> As part of a language community that is the victim of a lot of
> unfounded myths, we shouldn't make the same errors towards other
> languages. Whether or not you want to call Ada a Stone Age language,
> it has excellent support for integer ranges, it supports separate
> representation clauses to detail how you want your types to be laid
> out in memory and it certainly has a far better thought out model of
> integers, fixed and floating point than most languages. The Ariane
> incident was caused by a decision to reuse a component for the Ariane
> 4 in the Ariane 5 control program without testing. Since the Ariane 5
> could go faster than the Ariane 4, a variable went out of range, which
> properly generated an exception that wasn't handled which made the
> control program decide to terminate the flight.
>

You are right, of course.

In the Ada textbook literature you find statements like: Ada's strong
typing catches *most* errors and therefore it is superior to lesser
languages like C...

I have some personal experience with software written in Ada and with Ada
adorers. The software I had to maintain e.g. produced constraint errors
whenever the incoming data didn't fit into the datatypes. Which is not
completely wrong. But the exception handler routines didn't know how to
deal with the situation and just terminated the program.

Just to state the obvious: semantic errors are not caught by
strong typing. This is of course self-evident, but some Ada
programmers seem to have problems with this.
"The program crashed: this is not possible in Ada" :-)

--
Dieter Menszner

Erik Naggum

Oct 20, 2001, 5:18:01 PM
* Dieter Menszner
| The [Ada] software I had to maintain e.g. produced constraint errors
| whenever the incoming data didn't fit into the datatypes. Which is not
| completely wrong. But the exception handler routines didn't know how
| to deal with the situation and just terminated the program.

It suddenly dawned on me that this illustrates that the static typing
people have yet to figure out what input to programs is all about.
Neither user nor program input is statically typed. Look at all the
messy coding you have to engage in to "convert" user input to the
_expected_ type in most languages, and how the most convenient way of
dealing with these things is to default to _strings_ that are converted
on demand to the expected type for the expression in which they are used.
A major "feature" of several languages is precisely that they achieve
"dynamic types" through string representation and type conversions. The
same goes for that XML abomination, which is barely able to express the
"structure" of the input, but not any typed values other than "string".
Several database interfaces also choose "string" as their "dynamic type"
representation.

Now, you may be able to control types statically in carefully controlled
environments. Source code and compilation in a system environment that
will never, ever change (it is instead _replaced_ in toto) is perhaps
such a case. Everything else, _every_ input source to a program, is a
source of problems (if you pardon the pun). This is why it is wrong for
integers parsed from an input source to be limited in size by hardware.
This is why it is wrong for a program to make its expectations explicit
before it knows what it is dealing with. This is why Common Lisp's read
is the right interface to deal with user input: It returns objects read
from the character stream input source, parsed according to their _own_
inherent syntactic structure. The standard read function is too weak to
deal with all it needs to deal with to be usable beyond the expectation
to receive Common Lisp objects as input, but that is an implementation
restriction that can be lifted, not an issue with the interface as such.
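
A small sketch of that interface in action (binding *read-eval* to nil is the
precaution discussed further below; the exact type names printed will vary by
implementation):

(with-input-from-string (in "(1- (expt 2 961)) 3.14 \"a string\" foo")
  (let ((*read-eval* nil))               ; never evaluate #. forms from input
    (loop for object = (read in nil in)  ; the stream itself marks end of input
          until (eq object in)
          collect (type-of object))))
;; => (CONS SINGLE-FLOAT (SIMPLE-ARRAY CHARACTER (8)) SYMBOL) or similar:
;;    each object arrives already carrying its own type.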

How humans deal with the unexpected defines an important part of their
personality. Some cannot deal with the unexpected at all, and work very
hard to force the world into being what they expect it to be. This can
go really bad when they try to force people to be what they expect them
to be, such as by treating people not as people, but as manifestations of
one's expectations about them. It appears that few people are able to
work like read does: Deal with whatever there is they are exposed to.
Most of the friction on USENET can be condensed to a failure to deal with
people on their own terms -- as witness, all the moralists who claim that
others should behave some different way and all those who reject _every_
notion of being told how to behave, even if they are actually not told.

The enormous number of security problems found in programs written in C,
with its amazingly unintelligent fixed buffer size design, shows us that
even people one would expect (!) to be able to remember that buffer
overruns are _the_ major cause of problems, still cannot bring themselves
to write code without using that horribly broken design paradigm because
they are so heavily influenced and consequently fooled by static typing.

One might argue that static typing is not necessarily synonymous with
disjoint hardware-oriented types, but the main reason people want static
typing is the perceived ability to produce more efficient machine code
from having the programmer over-specify type information. This also
means that the value of a type specification is lost if it is a general
type like "integer" instead of a specific number of bits, such as would
be the case if the language supported arbitrarily wide integers. Since
there is so little value to these people in using any type in the type
hierarchy above the hardware-supported types, this naturally leads to
preferring _disjoint_ types, indeed _only_ supporting disjoint types.
This is probably a good idea within a very restricted domain, that in
which all "possible" values can be known a priori. A priori knowledge
has great appeal to some people, but they seem to get flustered when
facing the unknown, the uncertain, or the unexpected. This observation
leads me to believe that those who want static typing want it for a far
more important reason than ease of compiler design, neat type theories,
and more efficient code: It presents an easy-to-understand universe with
very few exceptional situations and where everything is somehow in order.
Such is not the universe we live in, nor should we attempt to make the
one we live in like that.

In order to make input from the external world, which must be treated as
unknowable a priori, work at all, the only way you can survive is to use
some form of dynamic typing and types with no upper limit to precision.
The way Common Lisp does this with its reader is still geared towards an
expectation that it will receive Common Lisp input, but the core ideas of
the syntax of Common Lisp are fundamentally _correct_: Objects identify
their types syntactically, usually with a very low syntactic overhead,
and the reader returns the appropriate type object after parsing it,
potentially having dealt with any anomalies and errors in the input.

I think the reader in the Common Lisp system is extremely under-utilized,
probably because of the large number of "security problems" related to
its current design and implementation, such as remembering to turn off
*read-eval* and the inability to control the behavior of what would be
interned as symbols. If it were better understood as the _right_ way to
deal with user input (as opposed to _expecting_ certain kinds of input),
it would be used more and would probably also be much improved, but
as long as people stick to the statically-typed _model_ of doing user
input, they will both work too hard on it, getting worse results, and their
code will have to be modified every time they make a small change to the
syntax or the type system or their expectations. Notice how the Common
Lisp _compiler_ is a lot more adaptable than that because it uses read to
get the source code and objects into the internal form it can deal with.
The same should _ideally_ apply to any Common Lisp application.

///
--
Norway is now run by a priest from the fundamentalist Christian People's
Party, the fifth largest party representing one eighth of the electorate.

--
The purpose of computing is insight, not numbers. -- Richard Hamming

Ray Blaak

Oct 22, 2001, 2:28:25 AM
Erik Naggum <er...@naggum.net> writes:
> * Dieter Menszner
> | The [Ada] software I had to maintain e.g. produced constraint errors
> | whenever the incoming data didn't fit into the datatypes. Which is not
> | completely wrong. But the exception handler routines didn't know how
> | to deal with the situation and just terminated the program.

But at least the errors were detected at the source, rather than silently
"succeeding" and corrupting the application later in a way that is impossible
to debug. But it is only really fair to compare Ada against C/C++. Lisp is in
another universe. In terms of evolutionary development we have C => C++ => Ada
=> Lisp, even if the historical timeline is not the same.

> It suddenly dawned on me that this illustrates that the static typing
> people have yet to figure out what input to programs is all about.
> Neither user nor program input is statically typed.

Please do not tar all static typing freaks with the same brush. Static *typing*
has nothing at all to do with static *behaviour*, and those who program as if
they do indeed deserve all the application bugs and crashes they get.

Instead, static typing is about describing one's abstractions in such a way so
that the language environment can detect a useful amount of errors and
inconsistencies in their usage before execution (i.e. statically).

But this does not mean that one's abstractions cannot still be flexible and
adaptable. There is nothing inconsistent with having *both* static typing and
Lisp's dynamic flexibility with regards to data inputs (e.g. arbitrary
precision, self-describing objects, etc.).

Good programmers do the right thing according to the circumstances, and dealing
with raw inputs intelligently is just so necessary and basic. If the Ada
program above was in an environment where termination could not be tolerated, then
its behaviour of crashing with bad input would have been criminal.

> The enormous number of security problems found in programs written in C,
> with its amazingly unintelligent fixed buffer size design, shows us that
> even people one would expect (!) to be able to remember that buffer
> overruns is _the_ major cause of problems, still cannot bring themselves
> to write code without using that horribly broken design paradigm because
> they are so heavily influenced and consequently fooled by static typing.

It is simpler than that. Buffer overruns are a sign of shitty programming,
plain and simple, as opposed to being fooled by static typing. One's
abstractions must be designed according to the problem domain at hand. If the very
nature of the domain leads to unbounded and unknown inputs, the design must
take that into account in a reasonable way instead of croaking, no matter what
language is being used.

Now, which language is better at supporting this flexibility is not being
disagreed with here.

> In order to make input from the external world, which must be treated as
> unknowable a priori, work at all, the only way you can survive is to use
> some form of dynamic typing and types with no upper limit to precision.

Yes. And this dynamism can still be realized in static typing environments,
although I suppose your point is made by the fact that *useful* static typing
environments have an element of dynamism to them, even if only via OO with
inheritance and polymorphism.

Alternatively, a program could choose to glean from arbitrary inputs only those
it recognized, and reject others in a reasonable way. I want my bank's web site
to be very very picky about the inputs it receives from the Internet, for
example.

In general, static vs dynamic typing does not need to be an all or nothing
approach. One can have a "continuum" of typing, where one can be as flexible or
restrictive as required (Dylan has some nice abilities that demonstrate this).

--
Cheers, The Rhythm is around me,
The Rhythm has control.
Ray Blaak The Rhythm is inside me,
bl...@telus.net The Rhythm has my soul.

Harald Hanche-Olsen

Oct 23, 2001, 2:44:57 PM
+ Erik Naggum <er...@naggum.net>:

| The purpose of computing is insight, not numbers. -- Richard Hamming

I don't usually post a followup to someone's signature, but here the
temptation proved too great. I managed to locate a precise reference
for this quote recently, and this is what I found:

The attitude adopted in this book is that while we expect to get
numbers out of the machine, we also expect to take action based on
them, and, therefore we need to understand thoroughly what numbers
may, or may not, mean. To cite the author's favorite motto,
``The purpose of computing is insight, not numbers,''
although some people claim,
``The purpose of computing numbers is not yet in sight.''
There is an innate risk in computing because ``to compute is to
sample, and one then enters the domain of statistics with all its
uncertainties.''

-- Richard W. Hamming, Introduction to applied numerical analysis,
McGraw-Hill 1971, p.31.

--
* Harald Hanche-Olsen <URL:http://www.math.ntnu.no/~hanche/>
- Yes it works in practice - but does it work in theory?
