
More numeric issues in Maple and Mathematica


Dr F Carter

Aug 17, 1998
>If you could be more specific as to what you tested, it would
> be more useful.
>
> The right way to do numerical calculation in machine precision
> in either Maple or Mathematica is to call numerical subroutine
> libraries written in other languages (typically C, these days).

This is basically what was suggested when I first had problems with the
Maple numerics: "use something else", although those people were
suggesting MATLAB. I suppose that was because Maple does not link to C in
an easy way. But I really want to be able to stay within one system,
which is why I switched to Mathematica. I may yet find problems with
this; so far it's certainly better than trying to hack together a hybrid
of Maple, MATLAB, and C libraries.

> What you see now may be replaced.

How do you mean?

> Higher precision is a different matter ... the libraries are
> not so well recognized though Bailey's programs (used by
> neither Maple or Mathematica, as far as I know) are pretty
> good.
>
> From my own observations, the design of Mathematica's numerical
> code (significance arithmetic) is deeply flawed, and can lead
> to dreadful over or underestimates of the error in the answer.
> You may have been relatively lucky.

I suppose that if you understand the algorithms that they have used then
it must be easy to exploit the areas where they don't work so well, but for
a bunch of practical problems I tried, I couldn't fault it.

Here are a couple of simple examples that I tried of Maple and
Mathematica handling numerical rounding errors.
Take a classic ill-conditioned matrix m, with entries 1/(x+y-1) for x and y
both from 1 to 10 (the 10x10 Hilbert matrix).
Now do some basic tests comparing operations on the numerical version
and the exact version.
In Maple
det(evalf(m));
-.1128605090e-50
evalf(det(m));
.2164179226e-52

These are different by a factor of 100.

In Mathematica:

Det[N[m]]
2.164405264621389`*^-53
N[Det[m]]
2.164179226431492`*^-53
Agreeing to 3 decimal places. So simplistically speaking the errors are
100000 times smaller.

Now looking at higher precision arithmetic, the Maple answer gets no
better:
> det(evalf(m,50));
-.1128605090e-50
> evalf(det(m),50);
.21641792264314918690605949836507259090507621790197e-52

While Mathematica reports only correct digits, and quite a lot of them.

Det[N[m,50]]
2.164179226431491869060594983650725909`*^-53
N[Det[m],50]
2.1641792264314918690605949836507259090507621790197`*^-53

If you compute the machine-precision inverse in both systems, Mathematica warns
you about catastrophic loss of precision and returns a bad answer, but
Maple just reports a wrong answer without warning.
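
For readers who want to reproduce the comparison outside either system, here
is a small Python sketch (added for illustration, not part of the original
post; standard library only, names chosen just for this example). It
contrasts a determinant computed from the already-rounded entries with the
exact rational determinant rounded once at the end.

from fractions import Fraction

def det(a):
    # Determinant by Gaussian elimination with partial pivoting;
    # works for float entries and for exact Fraction entries alike.
    a = [row[:] for row in a]
    n = len(a)
    d = 1
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if a[p][k] == 0:
            return 0
        if p != k:
            a[k], a[p] = a[p], a[k]
            d = -d
        d = d * a[k][k]
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] = a[i][j] - f * a[k][j]
    return d

n = 10
exact = [[Fraction(1, x + y - 1) for y in range(1, n + 1)] for x in range(1, n + 1)]
rounded = [[float(e) for e in row] for row in exact]

print(det(rounded))        # determinant of the already-rounded matrix
print(float(det(exact)))   # exact determinant, rounded once at the end

With IEEE doubles the first printed value is typically good to only a few
leading digits, while the second is the correctly rounded exact answer, which
is the same pattern the Maple and Mathematica sessions above show.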

Another example: differences between numbers that are very similar, like
this approximation for Sqrt[2]:
approx:=96845919575610633161/68480406462161287469;

As you might expect, in machine arithmetic both fail to detect any
difference.
evalf(approx-sqrt(2));
0
N[approx-Sqrt[2]]
0.
Increasing the working digits to 20 doesn't help Maple but allows
Mathematica to report quite a few digits of the answer.
evalf(approx-sqrt(2),20);
0
N[approx-Sqrt[2],20]
-7.5391493663691237037*^-41
Even going up to 50 digits in Maple is not enough, although it is
interesting to note the way the rounding works. Here the last digit is
not rounded up even though the next digit is >5:
evalf(approx-sqrt(2),50);
-.753914936e-40
but in this example it IS rounded up even though the next digit is <5:
evalf(approx-sqrt(2),51);
-.7539149367e-40
So it seems that the last digit in Maple numerics should always be
considered at least +/- 1, and should really be ignored, as it can be
completely wrong, e.g. the last digit here:

evalf(1-approx / sqrt(2), 45);
.53306e-40
N[1 - approx/Sqrt[2], 20]
5.3309836413378703703*^-41
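
The same cancellation is easy to reproduce with ordinary doubles and a
software decimal type; here is a Python sketch (added for illustration, not
from the original post, standard library only).

import math
from fractions import Fraction
from decimal import Decimal, getcontext

approx = Fraction(96845919575610633161, 68480406462161287469)

# Machine precision: both operands round to the same double, so the
# difference comes out as exactly 0.0.
print(float(approx) - math.sqrt(2))

# 50 working digits: the difference (about -7.54e-41) becomes visible,
# though only its leading digits are trustworthy, since each operand
# was itself rounded to 50 digits first.
getcontext().prec = 50
diff = Decimal(approx.numerator) / Decimal(approx.denominator) - Decimal(2).sqrt()
print(diff)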

Charles Krug

Aug 17, 1998, to fca...@dircon.co.uk
Dr F Carter wrote:

> Here are a couple of simple examples that I tried of Maple and
> Mathematica handling numerical rounding errors:
> If you take a classic ill conditioned matrix. 1/(x+y-1) with x and y
> both from 1 to 10.
> Now do some basic tests comparing operations on the numerical version
> and the exact version
> In Maple
> det(evalf(m));
> -.1128605090e-50
> evalf(det(m));
> .2164179226e-52

Interesting. I just tried this example on the Student Edition of Maple V,
Version 4.00c, and got matching answers both times. This is running under WinNT
on a 200MHz Pentium.

.2164179226e-52

Has something changed in more recent versions?

--
Charles Krug, Jr.
Application Engineer
Pentek Corp
1 Park Way
Upper Saddle River, NJ 07458

Richard J. Fateman

Aug 17, 1998
In article <35D85B...@dircon.co.uk>,
Dr F Carter <fca...@dircon.co.uk> wrote:
....
Dr. Carter gives more examples..

I mentioned that the numerics in the languages Mathematica and
Maple are prone to change because they are not necessarily the
best available now, they are not very completely specified,
and therefore they COULD be changed later, as they have been in
the past. How much reliance should you have on a routine
that has changed substantially in each of the last 3 versions?
How much reliance should you have on routines that the
vendor (apparently with pride) proclaims consist of 500 pages
of secret code (NDSolve in Mathematica)?

My concern is that anything built upon a system that allows x>y, x==y,
x=!=y to all be true but x<=y to be false, is likely to have some
surprises.

>I suppose that if you understand the algorithms that they have used then
>it must be easy to exploit the areas where they don't work so well but for
>a bunch of practical problems I tried, I couldn't fault it.

It is fine to try any software on "practical" cases by (for example)
re-running it in much higher precision. It is not, however, a useful
endorsement of that software. An observation that it worked fine on
straightforward cases cannot be used as a serious measure of how good
such software is.

If you tested an aircraft that way you would conclude that it works
fine since it seems to work for normal flying (clear skies, daylight).
A good numerical package will be safer for the naive user even when
venturing out under "stormy skies."

>Here are a couple of simple examples that I tried of Maple and
>Mathematica handling numerical rounding errors:
>If you take a classic ill conditioned matrix. 1/(x+y-1) with x and y
>both from 1 to 10.
>Now do some basic tests comparing operations on the numerical version
>and the exact version
>In Maple
>det(evalf(m));
>-.1128605090e-50
>evalf(det(m));
>.2164179226e-52
>

>These are different by a factor of 100.

My guess is that Maple should replace its determinant program with one
that accumulates using higher precision. And/or it may be using 10
decimal digits in its arithmetic (depends on your version of Maple),
instead of the 16 or so in Mathematica. On either system, doing the
computation exactly and then doing one rounding is going to be
better than doing a pile of arithmetic, much
of which is hardly precise enough for the problem.
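
The benefit of accumulating in higher precision shows up even on a toy sum;
here is a Python sketch of the idea (added for illustration, not the code of
either system), using the standard decimal module as the wider accumulator.

from decimal import Decimal, getcontext

xs = [0.1] * 10   # ten copies of the double nearest to 1/10

# Accumulating in doubles: each partial sum is rounded to 53 bits.
print(sum(xs))                              # 0.9999999999999999

# Accumulating in 30-digit decimals and rounding once at the end gives
# the correctly rounded sum of the actual inputs.
getcontext().prec = 30
print(float(sum(Decimal(x) for x in xs)))   # 1.0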

>In Mathematica:
>
>Det[N[m]]
>2.164405264621389`*^-53
>N[Det[m]]
>2.164179226431492`*^-53
>Agreeing to 3 decimal places. So simplistically speaking the errors are
>100000 times smaller.

If you examine the 50x50 matrix determinant, you find that
Mathematica gets about 1.3x10^(-1466) for the "right" answer
and 3.9x10^(-680) for the "wrong" answer. So it is wrong by about
786 orders of magnitude. What to conclude? You have to know
what precision you are using. It is also kind of tough to do this
calculation without numeric underflow.
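
For a sense of scale, a brief Python check of the underflow point (IEEE
doubles; added for illustration): a quantity of order 10^-1466 is far below
anything a machine float can hold.

import sys

print(sys.float_info.min)   # about 2.2e-308, the smallest normal double
print(1e-308 * 1e-200)      # 0.0: anything of order 10^-1466 underflows to zero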

>
>Now looking at higher precision arithmatic. The Maple answer gets no
>better
>> det(evalf(m,50));
> -.1128605090e-50
>> evalf(det(m),50);
> .21641792264314918690605949836507259090507621790197e-52
>

I think that's because you didn't REALLY do the calculation
in higher precision in Maple.

It seems to me you only converted the coefficients to floats of 50 digits.
You then computed the determinant in ordinary floating point.

Maybe you want to try Digits=50
and then do the computation.

I suspect that most of the other complaints with Maple are
also manifestations of not setting Digits. This is not
to say that Maple numerics could not also be overhauled.

But the Maple folks should defend themselves. (Why don't they??)

--
Richard J. Fateman
fat...@cs.berkeley.edu http://http.cs.berkeley.edu/~fateman/

Daniel Lichtblau

Aug 18, 1998, to Richard J. Fateman
Sorry to enter this thread so late, I was away from news for a couple of
weeks. Let me comment on some of the issues raised by RJF about numeric
computation in Mathematica.


Richard J. Fateman wrote:
>From my own observations, the design of Mathematica's numerical
>code (significance arithmetic) is deeply flawed, and can lead
>to dreadful over or underestimates of the error in the answer.
>You may have been relatively lucky.

[from a later RJF post in this thread]
>There is a folk theorem concerning Significance Arithmetic
>(What Mathematica does) that says one can construct examples
>such that the "significance" will be either vastly overstated
>or understated, and thus, while you may be comforted by
>Mathematica telling you "This answer is accurate to 6 places"
>it may be wrong!

I would be curious to see an example of the former, which is clearly the
more serious sort of problem.

Bear in mind that many numerical routines in Mathematica, e.g.
FindMinimum, are not at heart using blind significance arithmetic. They
can be adjusted by use of documented options such as AccuracyGoal,
WorkingPrecision, and the like. Automatic settings are well and good,
but no system is an oracle (that is, there is no free lunch). Some
amount of hand tuning on the part of the user will be inevitable, if one
requires results that are beyond the reach of the default settings.

Significance arithmetic per se is used primarily in arithmetic and
numeric function evaluation. To give an example from my own area, it is
the arithmetic engine behind computation of numeric Groebner bases. This
in turn we are using to advantage in solving numeric polynomial systems
(this is in our development version).


[from a yet later post in the thread by RJF]


> I mentioned that the numerics in the languages Mathematica and
> Maple are prone to change because they are not necessarily the
> best available now, they are not very completely specificed
> and therefore they COULD be changed later. As they have in
> the past. How much reliance should you have on a routine
> that has changed substantially in each of the last 3 versions?

Me, I am not surprised to see code that is upgraded more than once over a
long period of time. Specifically which Mathematica numerical routine(s)
do you believe to be unstable? To what numerical codes do you compare
them?


> How much reliance should you have on routines that the
> vendor (apparently with pride) proclaims consist of 500 pages
> of secret code (NDSolve in Mathematica).

I believe the size is measured in double-spaced printed pages, with
not-insignificant margins. It includes preprocessing code as well as
algorithms to handle stiff and non-stiff systems, in both machine-number
and bignum flavors. While we are beginning to make progress in a more
"object-oriented" style that may cut down on code redundancy, there is
a lot of room for improvement in this regard.

> ...


>My guess is that Maple should replace its determinant program with
>one that accumulates using higher precision. And/or it may be
>using 10 decimal digits in its arithmetic (depends on your version
>of Maple), instead of the 16 or so in Mathematica. On either system,
>doing the computation exactly and then doing one rounding is going
>to be better than doing a pile of arithmetic, much of which is
>hardly precise enough for the problem.

This need not be true in general. It depends on how you compute the exact
determinant, on the condition of the matrix, and so on. More generally,
it need not be the case that numericizing an exact result will be more
accurate than using approximate methods from the start. A standard
example would be numeric root extraction of a quartic vs. numerical
evaluation of an exact radical solution.

I guess I ought to note that numeric determinants are virtually never
what one actually wants to compute....


Daniel Lichtblau
Wolfram Research

Edwin Clark

Aug 18, 1998, to Charles Krug

It seems you get better accuracy in Maple V R4 by
setting the "environment" variable Digits:

In fact, the help page for evalf seems to imply that unless
Digits is changed its default value is 10. I suppose this
means even if evalf(x,n) is used for n > 10?

Here's the example in question done both ways:

m:=matrix(10,10, (i,j)->1/(i+j-1)):
Digits := 50
> det(map(x->evalf(x),evalm(m)));
> evalf(det(m));
-.1128605090 10^(-50)
.2164179226 10^(-52)
> det(map(x->evalf(x,50),evalm(m)));
> evalf(det(m),50);
-.1128605090 10^(-50)
.21641792264314918690605949836507259090507621790197 10^(-52)
>
> Digits:=50;
Digits := 50
>
> det(map(x->evalf(x),evalm(m))); evalf(det(m));
.21641792264314918690605949836507259090928610126368 10^(-52)
.21641792264314918690605949836507259090507621790197 10^(-52)
>

The moral seems to be that you really don't get much
accuracy by selecting a high value for n in evalf(x,n).

--Edwin Clark


On Mon, 17 Aug 1998, Charles Krug wrote:

> Dr F Carter wrote:
>
> > Here are a couple of simple examples that I tried of Maple and
> > Mathematica handling numerical rounding errors:
> > If you take a classic ill conditioned matrix. 1/(x+y-1) with x and y
> > both from 1 to 10.
> > Now do some basic tests comparing operations on the numerical version
> > and the exact version
> > In Maple
> > det(evalf(m));
> > -.1128605090e-50
> > evalf(det(m));
> > .2164179226e-52
>

Richard J. Fateman

Aug 19, 1998
In article <35D9FB36...@wolfram.com>,
Daniel Lichtblau <da...@wolfram.com> wrote:


>[from a later RJF post in this thread]
>>There is a folk theorem concerning Significance Arithmetic
>>(What Mathematica does) that says one can construct examples
>>such that the "significance" will be either vastly overstated
>>or understated, and thus, while you may be comforted by
>>Mathematica telling you "This answer is accurate to 6 places"
>>it may be wrong!
>
>I would be curious to see an example of the former, which is clearly the
>more serious sort of problem.
>

Mathematica's significance arithmetic implementation must show at
least one of the two flaws. But it might show both.
The theorem does not require that both flaws are present. Once
one error over-estimate is discovered, that operation can be
repeated to make the over-estimate arbitrarily large. Ditto
in the other direction. Mathematica mixes its significance
arithmetic with machine arithmetic, which probably provides
plenty of overly optimistic results. Finding a flaw in the
non-machine (arbitrary precision) arithmetic would take some searching.
How much would you pay me for an example?

I agree that Mathematica gives you the option to set up
stuff if you know what you are doing. But the concepts
it uses "Precision" and "Accuracy" which it makes up.

I think they correlate roughly to log-base-10 of relative error
and log-base-10 of absolute error, terms known to
numerical analysis.

Robert Israel

Aug 19, 1998
In article <Pine.GSO.3.96.980818...@tarski.math.usf.edu>, Edwin Clark <ecl...@math.usf.edu> writes:

|> It seems you get better accuracy in Maple V R4 by
|> setting the "environment" variable Digits:
|>
|> In fact, the help page for evalf seems to imply that unless
|> Digits is changed its default value is 10. I suppose this
|> means even if evalf(x,n) is used for n > 10?

Not quite: evalf(x,n) uses n digits, not Digits digits.
However, automatic simplification is applied to x before evalf gets
hold of it, and automatic simplification includes floating-point
arithmetic which is performed at the current setting of Digits.

So, for example:

> Digits:= 10:
evalf(1/3., 20);
.3333333333

This was performed with 10 digits, not 20, because automatic simplification
did it first. But:

> d:= 3.:
evalf(1/d, 20);
.33333333333333333333

Automatic simplification comes before variable evaluation, so this time the
arithmetic was done by evalf with 20 digits.
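
The same kind of trap exists in any system where some arithmetic happens at
the prevailing working precision before you get a chance to raise it. Here is
a rough Python analogue using the standard decimal module (added purely as an
illustration of the general effect, not a model of Maple's evaluator):

from decimal import Decimal, getcontext

getcontext().prec = 10
x = Decimal(1) / Decimal(3)   # rounded to 10 digits right here

getcontext().prec = 20
print(x * 3)                  # 0.9999999999: raising precision later cannot
                              # recover digits that were already rounded away

print(Decimal(1) / Decimal(3) * 3)   # 0.99999999999999999999 at 20 digits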

Now in your case:

|> m:=matrix(10,10, (i,j)->1/(i+j-1)):
|> Digits := 50

I assume you mean Digits := 10.

|> > det(map(x->evalf(x),evalm(m)));
|> > evalf(det(m));
|> -.1128605090 10^(-50)
|> .2164179226 10^(-52)

In the first case, you have det of a matrix containing floats, so the numerical
determinant routine `linalg/det/float` is used. I believe this is a straightforward implementation of the Gaussian elimination method with
partial pivoting.

In the second case, det(m) is evaluated using exact rational arithmetic (the
answer is 1/46206893947914691316295628839036278726983680000000000) and then
evalf is applied to that. The result is the correct 10-digit rounding
of the exact answer.

|> > det(map(x->evalf(x,50),evalm(m)));
|> > evalf(det(m),50);
|> -.1128605090 10^(-50)
|> .21641792264314918690605949836507259090507621790197 10^(-52)

In the first case, the entries are rounded to 50-digit floats, and then
det is applied (using Digits = 10). One of the first steps of `linalg/det/float`
applies evalf to every entry of the matrix, so the result is the same as
with 10-digit floats. In the second case, the result is the correct
50-digit rounding of the exact rational answer.



|> > Digits:=50;
|> Digits := 50

|> > det(map(x->evalf(x),evalm(m))); evalf(det(m));
|> .21641792264314918690605949836507259090928610126368 10^(-52)

In the first case, the det is performed using 50 digit arithmetic on 50-digit
floats, and the result is reasonably good.

|> .21641792264314918690605949836507259090507621790197 10^(-52)

Again, this is the correct 50-digit rounding of the exact answer.



|> The moral seems to be that you really don't get much
|> accuracy by selecting a high value for n in evalf(x,n).

I would rather say that the moral is you should understand what you're asking
Maple to do. If you want to use 50-digit arithmetic on 50-digit data, you
could say (with Digits still at 10)

> evalf(det(map(evalf, m)), 50);
.21641792264314918690605949836507259090928610126368 10^(-52)

Robert Israel isr...@math.ubc.ca
Department of Mathematics http://www.math.ubc.ca/~israel
University of British Columbia
Vancouver, BC, Canada V6T 1Z2

Richard J. Fateman

Aug 20, 1998
In article <6rd6si$euo$1...@agate.berkeley.edu>,

Richard J. Fateman <fat...@peoplesparc.cs.berkeley.edu> wrote:
>In article <35D9FB36...@wolfram.com>,
>Daniel Lichtblau <da...@wolfram.com> wrote:

It's probably bad form to follow up on one's own posting,
but I was leaving my office and didn't really edit the last
response; I think it was probably decipherable. But
here's a question: how bad is it if your system, starting
with 20 decimal places correct, and using 20 decimal
places of arithmetic, gives you an answer that it
says has 5 places correct, when in fact 11 are valid?

With a bunch more operations it may even give you an answer
that it says has no places valid, when it is still good to
6 places or more...

raising a number to a power N takes about log[2](n) multiplies,
and so one should be able to compute this in 66 operations:
( N[1+10*^-19,25])^(10*^19) .

The answer, about 2.6*10^43, is computed correctly to about
11 places, but Mathematica thinks it is ok to only 5, and any
attempt to combine it with more numbers will appear to
be kind of clumsy. Some more operations can easily
run the precision down to "negative correct digits" in
which case it is by default "clamped" to 0 correct digits.
Short of deliberately re-setting the precision of numbers,
in a way that requires some substantial subtlety,
this leads to inevitable loss of precision in further calculations.

Computation with numbers of 0 digits of "Precision" may lead down
some strange paths.

Daniel Lichtblau

Aug 21, 1998, to Richard J. Fateman
Richard J. Fateman wrote:
>
> ... But here's a question: how bad is it if your system,

> starting with 20 decimal places correct, and using 20
> decimal places of arithmetic, gives you an answer that it
> says has 5 places correct, when in fact 11 are valid?
>
> With a bunch more operations it may even give you an answer
> that it says has no places valid, when it is still good to
> 6 places or more...
>
> raising a number to a power N takes about log[2](n) multiplies,
> and so one should be able to compute this in 66 operations:
> ( N[1+10*^-19,25])^(10*^19) .
>
> The answer, about 2.6*10^43, is computed correctly to about
> 11 places, but Mathematica thinks it is ok to only 5, and any
> attempt to combine it with more numbers will appear to
> be kind of clumsy. Some more operations can easily
> run the precision down to "negative correct digits" in
> which case it is by default "clamped" to 0 correct digits.
> Short of deliberately re-setting the precision of numbers,
> in a way that requires some substantial subtlety,
> this leads to inevitable loss of precision in further calculations.
>
> Computation with numbers of 0 digits of "Precision" may lead down
> some strange paths.
>
> --
> Richard J. Fateman
> fat...@cs.berkeley.edu http://http.cs.berkeley.edu/~fateman/


Below is a reply by Mark Sofroniou of Wolfram Research
(ma...@wolfram.com). As is typical for August, he has news feed problems
and all the sys-admins in his part of the world have apparently left for
vacation. I am therefore posting it on his behalf. I will pass any
relevant news group replies on to him, but feel free to e-mail him
directly.

Daniel Lichtblau
Wolfram Research

---------------------------------


Richard Fateman's example overlooks the fact that Mathematica's
arbitrary precision numbers actually represent very narrow
intervals and possess guard digits. These guard digits can
sometimes combine to give us a more accurate answer than we
perhaps expected, but since they are not significant digits
we would not be correct in taking them into account. Guard
digits are used to ensure correct rounding and are discussed
in the context of IEEE arithmetic in:

I. Koren, Computer Arithmetic Algorithms, Prentice Hall,
New Jersey, 1993.

Mathematica's significance arithmetic model assumes that a
number x contains an intrinsic error interval, of width
10^-Accuracy[x]. The model assumes that a number lies at
the center of its defining interval and that errors combine
in an independent fashion.

In accordance with these assumptions, the boundary between
significant and insignificant digits in Richard Fateman's
example is exactly where it should be, as the following
example illustrates.

Take the exact base and exponent.

In[1]:= x = 1 + 1*^-19;

In[2]:= e = 1*^19;

Now numericise the base so that it has 25 significant
decimal digits.

In[3]:= x1 = N[x, 25]

Out[3]= 1.000000000000000000100000

Now let us take a number which is exactly the same through
the significant digits, but has the leading insignificant
binary bit changed. (Recall that internally Mathematica uses
a binary representation but this is converted to decimal on output.)

In[4]:= x2 = x1 + 2^-83

Out[4]= 1.000000000000000000100000

This number prints the same since all the significant digits
are the same. (You can use the undocumented function $NumberBits
to see that only the leading insignificant binary bit has changed.)

Now let's exponentiate the two values and compare the results.

In[5]:= y1 = x1^e

Out[5]= 2.71828

In[6]:= y2 = x2^e

Out[6]= 2.71828

The numbers have the same number of significant digits.
Now use SetPrecision to effectively shift the boundary between
the good and the bad (guard) digits.

In[7]:= SetPrecision[ y1, 10 ]

Out[7]= 2.718281828

In[8]:= SetPrecision[ y2, 10 ]

Out[8]= 2.718284639

Only 6 decimal digits correspond. So we see that an error in the
insignificant digits in the input manifested itself as an error
of a certain magnitude in the output. The number of significant
digits in the result correctly reflects the forward error
propagated by Power from the set of input numbers that lie in
the interval defined by In[3].
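
To see in miniature why the reported precision drops the way it does, here is
a toy Python model of significance-style error tracking (a sketch of the
general idea, added for illustration; it is not Wolfram's implementation): a
value carries an estimated absolute error, and multiplication propagates it
to first order.

import math

class Sig:
    # A value together with an estimated absolute error.
    def __init__(self, value, err):
        self.value = value
        self.err = err

    def __mul__(self, other):
        # First-order propagation: d(ab) ~ |b| da + |a| db.
        return Sig(self.value * other.value,
                   abs(other.value) * self.err + abs(self.value) * other.err)

    def precision(self):
        # Decimal digits believed significant (log10 of value/error).
        return math.log10(abs(self.value) / self.err)

x = Sig(1.0001, 0.5e-10)         # a number known to about 10 significant digits
for _ in range(20):
    x = x * x                    # overall effect: x -> x^(2^20)

print(x.value)                   # about 3.4e45
print(round(x.precision(), 1))   # about 4.3: each squaring costs ~log10(2) digits

Under this toy model an exponent near 10^19, i.e. roughly 63 squarings, costs
about 63*log10(2), or 19 digits, which is essentially the drop from the 25
input digits to the handful of significant digits reported above.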

If you don't need (or like) the error tracking used in Mathematica's
significance arithmetic, you can use fixed precision arithmetic
instead.

Here all computations are performed using fixed precision
arithmetic of 25 decimal digits.

In[9]:= Block[{$MinPrecision = 25, $MaxPrecision = 25}, x1^e ]

Out[9]= 2.718281828459028577298326

How many of these digits are correct and how can we check them?

Ascertaining how many digits are correct is a problem with fixed
precision number systems. It is a common misconception that
simply repeating the computation with higher precision allows
one to determine which digits are correct. An example of Rump
shows that a computation with single, double and quadruple
precision gives the same wrong result (not even the sign is correct).
For details, see:

S. M. Rump, Algorithm for verified inclusions - theory and practice,
in Reliability in Computing, R. E. Moore ed., Academic press,
San Diego, pp. 109--126, 1988.
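
For anyone who wants to try Rump's example, here is a Python sketch of it
(added for illustration, assuming the usual formulation of the expression
with a = 77617 and b = 33096):

from fractions import Fraction

a, b = 77617.0, 33096.0
f_float = (333.75*b**6 + a*a*(11*a*a*b*b - b**6 - 121*b**4 - 2)
           + 5.5*b**8 + a/(2*b))
print(f_float)          # wildly wrong: the huge terms cancel catastrophically,
                        # so the double result is dominated by rounding error

a, b = Fraction(77617), Fraction(33096)
f_exact = (Fraction(1335, 4)*b**6 + a*a*(11*a*a*b*b - b**6 - 121*b**4 - 2)
           + Fraction(11, 2)*b**8 + a/(2*b))
print(float(f_exact))   # about -0.8273960599, the true value (it equals a/(2b) - 2)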

I hope you now see why the arithmetic model is working the way
it is in this example.

Mark Sofroniou

Richard J. Fateman

Aug 21, 1998
In article <35DDB329...@wolfram.com>,
Daniel Lichtblau <da...@wolfram.com> wrote:

>Below is a reply by Mark Sofroniou of Wolfram Research

>Richard Fateman's example overlooks the fact that Mathematica's
>arbitrary precision numbers actually represent very narrow
>intervals and possess guard digits. These guard digits can
>sometimes combine to give us a more accurate answer than we
>perhaps expected, but since they are not significant digits
>we would not be correct in taking them into account. Guard
>digits are used to ensure correct rounding and are discussed
>in the context of IEEE arithmetic in:
>
>I. Koren, Computer Arithmetic Algorithms, Prentice Hall,
>New Jersey, 1993.

Actually, the IEEE 754 (binary) arithmetic model can be implemented with
1 guard BIT, and does not require a pile of decimal digits.
It is also convenient to have a second bit ("sticky bit") to
get the rounding-to-nearest-even correct.

If you want to, you can implement the idea of the IEEE 854
floating-point arithmetic standard in arbitrary precision arithmetic,
even in radix 10 (used by Maple), or radix 2 (used by Macsyma), or
radix 2 internally and then confusingly translated into radix 10 for
output (used by Mathematica).

>
>Mathematica's significance arithmetic model assumes that a
>number x contains an intrinsic error interval, of width
>10^-Accuracy[x].

Yes, and the quarrel I have is right here at the very beginning. If
I write

2.0

I mean 2.000000000000000000000000000000000000000000000...

Mathematica's significance model says 2.0 is an interval from
1.950000000000000000000000000000000000...1
to
2.049999999999999999999999999999999999...

>The model assumes that a number lies at
>the center of its defining interval and that errors combine
>in an independent fashion.

So in fact 2.0-2.0 could be between -0.1 and 0.1, about, in
Mathematica's model.

In my model, the answer is 0. In my model, if I want to
implement interval arithmetic, I can talk about the INTERVAL
(1.95 < x < 2.5). In Mathematica's model, it is impossible
to talk about such intervals because
1.95 is any number between
1.945 and 1.955.

Why do I say impossible? Certainly a numerical expert
could figure out a way around this, but we struck up
this discussion on the grounds that a good model should
be provided for NON experts.

In Mathematica's model, let us create
a two-decimal-digit 2, FuzzyTwo=SetPrecision[2,2].

Compute z = FuzzyTwo - 2; the result is 0*10^(-2). Hm.
We know for sure that FuzzyTwo is in (1.95,2.05), so that
z must be in the open interval (-0.05,0.05).

which of the following relationships should be true?

Col 1                  Col 2

z<0                    z==0
z>0                    z<=0
z<0.01                 z>=0
z>0.001                z<0.03
z>=0.01
(-0.01<z<0.03)         (-0.01<=z<0.03)
(-0.02<z<0.03)         (0.01<z<0.03)

See the bottom of this message for the answers.


Now we can, in fact, create intervals in
Mathematica. Here is one: w= Interval[{FuzzyTwo,FuzzyTwo}]

How wide is this interval? r=Max[w]-Min[w]. Sorry you
asked. Mathematica gives 0*10^-1. If you delve further,
by asking InputForm[r], you get 0.0625`0.198 which says
that it thinks the answer is 0.0625 but not to enough
precision to say anything much. The true width is WIDER,
namely just a hair under 0.2.


>In accordance with these assumptions, the boundary between
>significant and insignificant digits in Richard Fateman's
>example is exactly where it should be, as the following
>example illustrates.

I have no doubt that Mathematica's significance arithmetic
follows the algorithm set out by Mark. The problem is,
it is implementing significance arithmetic, and then justifying
its results by assuming that the last digit you typed could
be off by "half a digit", and that everything beyond that
is conspiring to lead you astray.
Furthermore, (and this is confounded by the interplay between
the internal binary and external decimal number system),
Mathematica cannot really tell you how many digits you have,
and for some numbers, when you ask for their InputForm,
the system refuses to print them out without changing their
precision, and refuses to read them back in!


>Ascertaining how many digits are correct is a problem with fixed
>precision number systems.

This is true. It is also a problem with significance arithmetic!
Mark's calculation demonstrates that clearly!

>It is a common misconception that
>simply repeating the computation with higher precision allows
>one to determine which digits are correct.

I agree that this can be shown to be wrong; there are few
simple solutions to proving accuracy of complicated numerical
calculations. If you want to be conservative, correctly implemented
interval arithmetic will help.

For many kinds of problems, however, re-doing with even a few extra bits of
precision, ASSUMING YOU DON'T JUST TOSS THEM AWAY BECAUSE YOU CAN'T
PROVE THEM CORRECT can be an important confirmation.

.........
And now the answer to our quiz


In Mathematica 3.0, everything in column 2 is true
and everything in column 1 is false. Thus, although z is alleged
to be between 0.01 and 0.03, it is not greater than 0.001,
and also z is equal to 0.0.

Daniel Lichtblau

Aug 26, 1998, to Richard J. Fateman
Sorry, I could not forgo the obvious pun. But bignum arithmetic with two
significant digits is, for practical purposes, fairly obscure territory.

For most practical purposes if you want to do useful work with
significance arithmetic, it is best to have several significant digits.
Further comments are interspersed below.


Richard J. Fateman wrote:
>
> >Mathematica's significance arithmetic model assumes that a
> >number x contains an intrinsic error interval, of width
> >10^-Accuracy[x].
>
> Yes, and the quarrel I have is right here at the very beginning. If
> I write
>
> 2.0
>
> I mean 2.000000000000000000000000000000000000000000000...
>
> Mathematica's significance model says 2.0 is an interval from
> 1.950000000000000000000000000000000000...1
> to
> 2.049999999999999999999999999999999999...
>

Actually documented behavior is that 2.0 will be a machine number. You
need to use something like SetPrecision (as you do below). I just wanted
to clarify this for those less familiar with Mathematica semantics.


> >The model assumes that a number lies at
> >the center of its defining interval and that errors combine
> >in an independent fashion.
>
> So in fact 2.0-2.0 could be between -0.1 and 0.1, about, in
> Mathematica's model.
>
> In my model, the answer is 0. In my model, if I want to
> implement interval arithmetic, I can talk about the INTERVAL
> (1.95 < x < 2.5). In Mathematica's model, it is impossible
> to talk about such intervals because
> 1.95 is any number between
> 1.945 and 1.955.
>

You can use exact intervals, e.g. Interval[{39/20, 41/20}]. When you use
imprecise intervals then of course your interval bounds will be fuzzy.
And if you want to use these then it follows that the more precise the
bounds, the less fuzzy (I guess that's a tautology). More on this
presently....

This certainly is true for the exact interval analog. For fuzzy
intervals it is a different matter. The arithmetic operation of
subtraction, in conjunction with low precision values, gives pessimistic
results. To some extent this is misbehavior in our handling of fuzzy
intervals. For example, if you look at w, -w, w-2, and 2-w, you will see that the
outward rounding fuzz doubles for one arithmetic operation and triples
for two.

Let me address the issue of why the bounds of w are what they are rather
than, say, 1.95 and 2.05. First note that there is no specification for
how much outward rounding will be done for fuzzy interval bounds. To see
in base 2 how much we round you can do as below.

In[82]:= $NumberBits[FuzzyTwo] // InputForm
Out[82]//InputForm=
{1, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0},
 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, 16}

In[83]:= $NumberBits[w[[1,1]]] // InputForm
Out[83]//InputForm=
{1, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0},
 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, 16}

In[84]:= $NumberBits[w[[1,2]]] // InputForm
Out[84]//InputForm=
{1, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0},
 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, 16}

You see that in this case there are several "good" bits beyond the lone
'1' in FuzzyTwo, and it is the penultimate good bit that changes in
rounding outward these interval bounds. So we are using interval bounds
that are justified by the precision of FuzzyTwo. This is not the same
thing as using 1.95 or 2.05, and there is really no reason to assume
that the values you expect would (or should) in fact be used.


> >In accordance with these assumptions, the boundary between
> >significant and insignificant digits in Richard Fateman's
> >example is exactly where it should be, as the following
> >example illustrates.
>
> I have no doubt that Mathematica's significance arithmetic
> follows the algorithm set out by Mark. The problem is,
> it is implementing significance arithmetic, and then justifying
> its results by assuming that the last digit you typed could
> be off by "half a digit", and that everything beyond that
> is conspiring to lead you astray.
> Furthermore, (and this is confounded by the interplay between
> the internal binary and external decimal number system),
> Mathematica cannot really tell you how many digits you have,
> and for some numbers, when you ask for their InputForm,
> the system refuses to print them out without changing their
> precision, and refuses to read them back in!
>
> >Ascertaining how many digits are correct is a problem with fixed
> >precision number systems.
>
> This is true. It is also a problem with significance arithmetic!
> Mark's calculation demonstrates that clearly!

I think Mark's example amply demonstrated why 5 and not 11 digits is the
thing to use (in the Mathematica significance arithmetic model) for the
result of computing (N[1+10*^-19,25])^(10*^19).

More to the point, significance arithmetic often provides an excellent
warning that catastrophic cancellation has occurred or for other reasons
precision has degraded to the point where further work is useless. Fixed
precision does not by itself give any hint of this, hence trouble must
be detected by other means. For some algorithms, e.g. LU factorization,
such means are well-understood e.g. use the estimated condition number
of the matrix. For others, e.g. numerical Groebner bases, the problem is
not so well understood, and significance arithmetic provides one useful
way to handle it.
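
For the matrix that started this thread, the condition-number diagnostic is a
one-liner; a Python sketch (added for illustration, assuming NumPy is
available):

import numpy as np

H = np.array([[1.0 / (i + j - 1) for j in range(1, 11)] for i in range(1, 11)])
print(np.linalg.cond(H))   # roughly 1.6e13: expect to lose ~13 of the ~16
                           # digits a double carries, which matches the
                           # determinant discrepancies earlier in the thread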


Daniel Lichtblau
Wolfram Research

Daniel Lichtblau

Aug 26, 1998, to Richard J. Fateman
While I find myself in agreement with several of your points, I think a
few words are in order on the ones below.


>I don't like to do arithmetic with a model in which there
>are surprises, or in which I have to look at the bit patterns
>to figure out what is up. A clean mathematical model is helpful.

While I am not certain, I suspect the Mathematica fuzzy interval model
is clean. It is woefully undocumented in the manual. I am not sure
whether it is documented elsewhere, for that matter. But what I saw of
it from an admittedly small set of examples looked sensible, if unduly
pessimistic in boundary fuzz.


>>I think Mark's example amply demonstrated why 5 and not 11 digits
>>is the thing to use (in the Mathematica significance arithmetic
>>model) for the result of computing (N[1+10*^-19,25])^(10*^19).

>This is merely a trace of what computation is done.

Mark gave a detailed demonstration that if you alter the first
non-significant bit in the base, then the sixth digit in the result will
change. This is how significance arithmetic operates.


>I have checked in the INSPEC database and found in the period
>1994-98 some 38 references to the phrase "significance arithmetic"

I applaud your diligence in checking the literature references (though I
believe there may be other names used besides "significance
arithmetic"). I also realize that there is not a great body of
literature for this subject.

I understand that you may not like this model of arithmetic. I am aware
that it is underdocumented. I certainly understand why your example
involving FuzzyTwo was mysterious. In part this is due to a weak
implementation of fuzzy interval rounding (the fleas on the fleas). I
saw nothing to suggest that significance arithmetic per se was to blame.


>So there is evidence that significance arithmetic is a crock,
>unsuitable for serious computation, although perhaps ok for
>a few computations on low-precision numbers as is done for
>signal processing.

I see no evidence to support this claim. Certainly a dearth of
literature does not do so. Me, I find it of most use on
higher-than-machine-precision numbers. For an example that we've been
batting around in-house today, significance arithmetic as used in
approximate Groebner basis computation is quite useful in obtaining an
approximate implicitization of the parametric curve in 't' given by

polys = {x - (110*t^2 - 495*t^3 + 1320*t^4 - 2772*t^5 + 5082*t^6 -
7590*t^7 + 8085*t^8 - 5555*t^9 + 2189*t^10 - 374*t^11),
y - (22*t - 110*t^2 + 330*t^3 - 1848*t^5 + 3696*t^6 - 3300*t^7 +
1650*t^8 - 550*t^9 + 88*t^10 + 22*t^11)};

For anyone who wants to try this, I include at the bottom the
Mathematica InputForm of the result in machine precision.

>>For others, e.g. numerical Groebner bases, the problem is
>>not so well understood, and significance arithmetic provides
>>one useful way to handle it.

>Perhaps a reference to a paper here would help.

The only relevant reference I have seen would be
"Groebner bases in Mathematica 3.0" by myself, The Mathematica Journal
6(4), pp 81-88 (1996). See page 84. I am sorry I do not have an
independent authority to cite. Moreover there is but half a page on this
particular matter in that reference. I may revisit the topic in more
detail if and when I write up some related work on numeric solving of
zero dimensional polynomial systems.


Daniel Lichtblau
Wolfram Research

-----------------------------------
The approximate implicit polynomial:

1.9755241341074337*^30*x^3 - 6.727318523257048*^29*x^4 +
1.7532450201952422*^29*x^5 - 1.9546920343445204*^28*x^6 +
1.1399349892760117*^27*x^7 - 5.59875421058122*^24*x^8 -
1.2429603327366183*^19*x^9 - 1.0164234986939*^13*x^10 + 1.*x^11 -
8.692306190072709*^30*x*y + 5.893911893041806*^29*x^2*y -
4.795734635718977*^29*x^3*y + 8.903161466828359*^28*x^4*y -
7.338342416113781*^27*x^5*y + 7.902345733554497*^26*x^6*y +
1.9599828164202066*^25*x^7*y + 2.6099824016747053*^20*x^8*y +
1.27452861472893*^14*x^9*y + 187.*x^10*y + 2.5049279569909745*^30*x*y^2 -
6.842819971479819*^29*x^2*y^2 - 1.6547193279211299*^27*x^3*y^2 +
8.602932367935953*^26*x^4*y^2 + 9.401631069041822*^26*x^5*y^2 -
1.1094895962821045*^25*x^6*y^2 - 1.8214901061908903*^20*x^7*y^2 -
2.2343813922786098*^15*x^8*y^2 + 15895.*x^9*y^2 +
1.9755241341074337*^30*y^3 - 5.726698953848964*^28*x*y^3 -
6.632137423990659*^28*x^2*y^3 + 7.983697731560627*^27*x^3*y^3 +
7.192353937808919*^25*x^4*y^3 - 2.3981033273722571*^24*x^5*y^3 -
4.686041402570199*^21*x^6*y^3 + 1.954083636552604*^16*x^7*y^3 +
810645.*x^8*y^3 - 7.542077488018118*^28*y^4 +
4.426693389121433*^27*x*y^4 +
1.384380693482555*^27*x^2*y^4 - 1.379331847614209*^25*x^3*y^4 -
2.8577115668884237*^23*x^4*y^4 + 1.2482393144980266*^22*x^5*y^4 -
1.4762413541282003*^17*x^6*y^4 + 2.756193*^7*x^7*y^4 +
1.4716396624404689*^27*y^5 - 1.9984143934730087*^25*x*y^5 +
6.036942014700848*^24*x^2*y^5 - 4.616142255981953*^22*x^3*y^5 -
2.974991894286421*^21*x^4*y^5 + 7.518707154996777*^17*x^5*y^5 +
6.55973934*^8*x^6*y^5 - 8.807931697950434*^24*y^6 +
1.978557162017415*^24*x*y^6 - 7.98734858552133*^22*x^2*y^6 +
1.2664351278033396*^20*x^3*y^6 - 2.4239665946904914*^18*x^4*y^6 +
1.1151556878*^10*x^5*y^6 - 1.9218508992577886*^22*y^7 -
5.453957778698034*^21*x*y^7 + 3.807721720859103*^19*x^2*y^7 +
3.266295798875459*^18*x^3*y^7 + 1.3541176209*^11*x^4*y^7 +
5.723721450720245*^20*y^8 - 3.903047812945855*^19*x*y^8 -
6.956919162554195*^17*x^2*y^8 + 1.150999977765*^12*x^3*y^8 -
1.5332814370874296*^18*y^9 + 2.057483800091193*^17*x*y^9 +
6.522333207335*^12*x^2*y^9 - 6.226037704234758*^15*y^10 +
2.2175932904939*^13*x*y^10 + 3.4271896307633*^13*y^11

Richard J. Fateman

Aug 27, 1998
In article <35E434C5...@wolfram.com>,
Daniel Lichtblau <da...@wolfram.com> wrote:

>You can use exact intervals, e.g. Interval[{39/20, 41/20}].

Yes, of course one could do this, but it rapidly loses value.
Each time you do an operation on such intervals, the rational
numbers used to describe the endpoints become "larger" (that
is, their numerators and denominators have more digits). What
you end up with are monstrosities like
the 10th power of the interval above, which is

Out[3]= Interval[{8140406085191601/10240000000000, 13422659310152401/10240000000000}]

When you use floats (say, double floats correctly
rounded), your complexity is bounded by the double-float
representation. In this case you get something like

Interval[{794.962, 1310.81}],
and it doesn't get any worse if you compute the 100th power. Not
so for the rational stuff. A plausible approach is to
find a crude "small" approximation for the rational endpoints,
rounding appropriately, but frankly, that is what floating
point numbers are.
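
The endpoint growth is easy to see concretely with the same interval; a small
Python check (added for illustration, standard library fractions):

from fractions import Fraction

lo, hi = Fraction(39, 20), Fraction(41, 20)
print(lo**10, hi**10)
# 8140406085191601/10240000000000 13422659310152401/10240000000000:
# the exact endpoints keep growing with every operation

print(float(lo**10), float(hi**10))
# approximately 794.96 and 1310.81: float endpoints stay one word each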

>When you use imprecise intervals then of course your interval bounds
>will be fuzzy.

I think that this is a kind of infinite regress with regard to
errors.

"Dogs have fleas, and fleas have fleas,
with smaller fleas upon them.
Those fleas have fleas and THEIR fleas have fleas
and so on, ad infinitum."


>And if you want to use these then it follows that the more precise the
>bounds, the less fuzzy (I guess that's a tautology).

Not so. When one talks about intervals, [1/2,3/4] is more
precise than [100/201, 301/400], so precise bounds do not
defuzzify.

<snip>


>This is not the same
>thing as using 1.95 or 2.05, and there is really no reason to assume
>that the values you expect would (or should) in fact be used.

I don't like to do arithmetic with a model in which there
are surprises, or in which I have to look at the bit patterns
to figure out what is up. A clean mathematical model is helpful.

>I think Mark's example amply demonstrated why 5 and not 11 digits is the
>thing to use (in the Mathematica significance arithmetic model) for the
>result of computing (N[1+10*^-19,25])^(10*^19).

This is merely a trace of what computation is done.


>


>More to the point, significance arithmetic often provides an excellent
>warning that catastrophic cancellation has occurred or for other reasons
>precision has degraded to the point where further work is useless.

If this is the case, one would expect that
there is a literature defending this assertion.

I have checked in the INSPEC database and found in the period
1994-98 some 38 references to the phrase "significance arithmetic"

and these papers appear to be (with very few exceptions)
on topics having to do with signal processing or fuzzy arithmetic,
[one paper, in ISSAC 96, appears to be mischaracterized].

In the period from 1969-79 there were 25 papers, including the
"seminal" papers by Metropolis and Rota, two distinguished
mathematicians. But they abandoned this line, and the papers from
1980-84 show this. There were only 9 papers during these years, and,
by the way, I wrote one of them (critical of the idea).

So there is evidence that significance arithmetic is a crock, unsuitable
for serious computation, although perhaps ok for a few computations
on low-precision numbers as is done for signal processing.

>Fixed
>precision does not by itself give any hint of this, hence trouble must
>be detected by other means.

Yep.

>For some algorithms, e.g. LU factorization,
>such means are well-understood e.g. use the estimated condition number
>of the matrix.

Yep.

>For others, e.g. numerical Groebner bases, the problem is
>not so well understood, and significance arithmetic provides one useful
>way to handle it.

Perhaps a reference to a paper here would help.

Even if someone is seriously using GB for finding approximate
solutions to multivariate polynomial equations, and in doing so has
found significance arithmetic useful, I am still unconvinced that this
is a reason to impose it on the everyday computations of a naive user
of Mathematica who is likely to be more puzzled than pleased by an
arithmetic model that contradicts common sense.

Regards

Richard J. Fateman

Aug 27, 1998
In article <35E4D33B...@wolfram.com>,

Daniel Lichtblau <da...@wolfram.com> wrote:
>While I find myself in agreement with several of your points, I think a
>few words are in order on the ones below.

Thanks

also thanks for your example, which I hope people will look at.

I thought I'd show a short example in Mathematica of how significance
arithmetic doesn't work unless you mislead the system.

Here is a simple function si (for SquarerootIteration) to
compute the square root of a positive number x,
given an initial non-zero guess g.

si[g_, x_] := If[goodenough[g, x], g, si[(g + x/g)/2, x]]

This program improves the guess until some criterion is satisfied.

Here's a possible criterion:

goodenough[g_, x_] := Abs[g^2 - x] < 10^(-Precision[x])

This says, stop when you can't tell the difference between g^2 and x
TO THE PRECISION OF X, the thing you are computing the square root
of.

Clearly the precision of the initial guess is irrelevant, because
it is not even required to be accurate.

si[3.1,SetPrecision[9,100]] does not give you the answer 3 to 100
places, but the answer 3 to 16 places.

si[1,SetPrecision[9,100]] gives you the answer 3 to 100 places even
though the initial guess is worse!

si[3.1,9] does not terminate, since 9 is infinitely precise.

si[SetPrecision[3.1,20] ,SetPrecision[9,100]]
si[SetPrecision[3.1,100],SetPrecision[9,20]]

each give the same (20 digit) answer.
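
For contrast, here is a fixed-precision version of the same iteration using
Python's decimal module (a sketch added for illustration, not a translation
of the code above): the stopping test is tied to an explicitly chosen working
precision, so the precision of the initial guess cannot limit how many digits
come back.

from decimal import Decimal, getcontext

def sqrt_iter(guess, x, digits):
    # Newton iteration for sqrt(x), run at a fixed working precision.
    getcontext().prec = digits
    g, x = Decimal(guess), Decimal(x)
    tol = Decimal(10) ** (1 - digits)   # "can't tell g^2 from x at this precision"
    while abs(g * g - x) > tol * x:
        g = (g + x / g) / 2
    return g

print(sqrt_iter("3.1", 9, 16))   # 3.000000000000000 (16 digits)
print(sqrt_iter(1, 9, 50))       # 3.0000...0 to about 50 digits; the sloppy
                                 # guess does not limit the answer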


>
>Mark gave a detailed demonstration that if you alter the first
>non-significant bit in the base, then the sixth digit in the result will
>change. This is how significance arithmetic operates.

Yep.


>I applaud your diligence in checking the literature references (though I
>believe there may be other names used besides "significance
>arithmetic").

I am willing to run another search if you have another phrase
in mind.

>I also realize that there is not a great body of
>literature for this subject.
>
>I understand that you may not like this model of arithmetic. I am aware
>that it is underdocumented.

No, I think the model is completely documented in the papers by Metropolis
and Rota. It may be underdocumented in the Mathematica material :)


>>So there is evidence that significance arithmetic is a crock,
>>unsuitable for serious computation, although perhaps ok for
>>a few computations on low-precision numbers as is done for
>>signal processing.
>
>I see no evidence to support this claim.

See above, the square root program. I believe I could come
up with a more sophisticated zero-finding program in which
significance arithmetic applied naively fails to find zeros
because the iteration loses so much precision in calculating
the function that it either converges to an incorrect
place, or is unable to find the zero point to a reasonable
accuracy because of unjustified sloppy precision.

These problems could be patched up by judicious use of
SetPrecision, but the point of advocates of significance arithmetic
is that it does such things "automatically" and it does not
really give you a good program!

And the point of critics of significance arithmetic is that it is just
a mediocre overall naive model. It is not appropriate when computers
are capable of doing millions of arithmetic operations per second,
rather than 2 or 3, as one might do in a physics lab with
low-precision measurements.

My guess is that the numerical qualities of numerical GB calculations
can be studied and optimized in ways that are analogous to those used for
Gaussian elimination (by choosing pivots), and that if this
calculation is at all interesting, other bounds can and will
be derived that are more useful, and can be computed faster,
than by naive significance arithmetic.

Richard J. Fateman

Aug 28, 1998
In article <6s663l$jhl$1...@merope.saaf.se>,
Paul Schlyter <pau...@merope.saaf.se> wrote:

>Let's choose 1.0

>> and the base of the floating-point system,
>Binary, or decimal (BCD)
>> and the quality of the rounding,
>Let's assume the rounding is perfect.
>Let's do this in a 20-digit BCD representation:
>1.0 / 3.0 ==> 0.33333333333333333333 correctly rounded
>Multiply by 3 ==> 0.99999999999999999999 correctly rounded

>> (fateman) but I won't argue the point.

Well, if you insist!
You can certainly make up arithmetic systems that mess this
up, but in standard IEEE 754 binary floating point arithmetic
with round-to-nearest-even, 3.0*(1.0/3.0) as well as 3.0d0*(1.0d0/3.0d0)
come out as 1.0 and 1.0d0 respectively.

(and if you chose x=6.0, then dividing by 3 and multiplying
by 3 is fine, so there are obviously many cases in which division by
3 is exact. )
Other things that you might not expect in IEEE 754 arithmetic are
5.0*(7.0/5.0) = 7.0
7.0*(5.0/7.0) = 5.0
and other similar small examples.
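
Since Python floats are IEEE 754 doubles, these claims take one line each to
check (added for illustration):

print(3.0 * (1.0 / 3.0) == 1.0)   # True: the rounding errors cancel exactly
print(5.0 * (7.0 / 5.0) == 7.0)   # True
print(7.0 * (5.0 / 7.0) == 5.0)   # True
print(6.0 / 3.0 * 3.0 == 6.0)     # True: 6/3 is exact, so this one is trivial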

>> I don't understand why this would be the case, unless you are
>> allowing rational representations to have more bits available.
>
>No -- the rational expression often require LESS bits.

Usually computer representations are expressed in sizes that
are even multiples of words, 8, 16, 32, 64 bit quantities.
If you are counting how many printed decimal digits are needed
to represent a decimal or rational number, then of course
1/3 as a rational number expresses an infinitely long decimal
number, unless you use a notation like:

0.3 (with a dot over the 3), where the dot means "repeated"

Which is more compact?

>The point is that the rational representation often require
>LESS precision! Consider for instance
>
> 355/113

>
>which is an approximation to PI good to seven decimal digits
>of precision -- yet it requires only six digits (three for the
>numerator and three for the denominator) to represent 355/113
>in rational form.

To me (though you are welcome to differ on this)
the relevant computer size issue is that you are storing
two integers, each in n bits, for 2n bits total. Thus if n=16,
you have 32 bits total. Ignoring the sign of the number, you
can now represent numbers as large as 2^17-1 = 131071 and as small as
1/(2^17-1), and you will have between 6 and 7 decimal digits
of precision.

If you took the same 32 bits as a floating point number,
you would be able to represent pi as about
3.1415927 (and this representation can represent much larger and smaller
numbers, as well as negative ones too).
whereas your representation (a notoriously compact representation
for pi that corresponds to a nice coincidence in the continued
fraction expansion for pi)
gives
3.141592920353982...

Since pi is
3.141592654

the float wins in pi-precision, though just barely. You could come
up with a better continuant given 16 bit numbers that should beat
this precision, but you would still have a much restricted range
compared to floats.
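
A quick Python check of the comparison (added for illustration; pi is
squeezed into a 32-bit float via the struct module):

import math, struct

pi32 = struct.unpack('f', struct.pack('f', math.pi))[0]   # pi rounded to 32-bit float

print(abs(355 / 113 - math.pi))   # about 2.7e-7: error of the rational approximation
print(abs(pi32 - math.pi))        # about 8.7e-8: the 32-bit float is a bit closer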


If one insisted on representing 355 and 113 as 32-bit integers,
then of course 64 bits would give a double-float and much higher
precision. Most modern computers do 32 bit arithmetic as fast
as 16 bit arithmetic, I think, so this might be more plausible.

If one insisted on representing 355 and 113 as pairs of strings
of decimal digits, then any operations on them would require
transformation to some other form and any efficiency would
be illusory.

(Basically the argument must be that in n bits you can store
2^n different bit patterns at most, and you are free to
choose what they mean, but you can't get more information
packaged up by merely using n/2 bits in a numerator
and n/2 bits in a denominator. {The argument is slightly
more elaborate if you include sign bit, exponent field, etc.})

>Where is FPFUN available?

Oops, I meant to type MPFUN... Bailey's code is described at
http://science.nas.nasa.gov/Groups/AAA/db.webpage/mpdist/mpdist.html.

Regards.
