Jan 30, 2001, 12:51:45 PM

Maybe another example would help.

Consider

test= N[10^30,20]

this is 10^30 to 20 decimal places in Mathematica.

compute logtest= Log[test]. You get 69.0775527898213705205

Mathematica thinks it is accurate to 20 decimal places.

Now forcibly convert it to 35 places:

SetPrecision[logtest, 35]

69.07755278982137052053974364053093

All these digits are correct, too. So Mathematica's

intuition here is wrong.

As most of us know, Log changes rather slowly. Mathematica

doesn't know this, even though it could in principle figure

this out by taking a derivative. So a carefully

written program would likely have to do this:

(a) compute the proper error bounds, perhaps based on

other formulas, derivatives, etc.

(b) disable/reset the precision and/or accuracy

of some (all?) default numerical results from Mathematica

to the correct ones.
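
For this example, the derivative-based bound in (a) is simple: since d(log x) = dx/x, the absolute error of the logarithm equals the relative error of its argument. A rough Python illustration of that arithmetic (not Mathematica's mechanism):

```python
import math

# First-order error propagation for log: |d(log x)| = |dx|/x,
# i.e. the absolute error of log(x) equals the relative error of x.
x = 10.0**30
rel_err = 0.5e-20          # ~20 significant digits known in x
abs_err_log = rel_err      # absolute error bound for log(x)

# log(10^30) ~ 69.08, so the log is good to ~20 digits after the
# decimal point -- about 22 correct significant digits in total.
correct_sig_digits = math.log10(abs(math.log(x))) - math.log10(abs_err_log)
```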

Will Mathematica tell you that more digits are correct

than are justified? I haven't fiddled enough with the

latest Mathematica to find any examples, and it could

be that it is now sufficiently conservative that this

phenomenon is gone.

RJF

PS, is there an alternative?

Yes: allow the user to set the precision carried in each number,

and don't get in the way of the user who (separately) computes

the accuracy, or relative error, in the numbers computed.

Macsyma's bigfloat package does this.

I think that merging interval arithmetic into the bigfloats

of Mathematica was a mistake, and even if they get all the

bugs out, it will still not be a good design.

Note that if the user wants it, there is an implementation

of interval arithmetic in Mathematica.

Jan 30, 2001, 6:51:35 PM

Richard Fateman wrote:

>

> Maybe another example would help.

> Consider

>

> test= N[10^30,20]

>

> this is 10^30 to 20 decimal places in Mathematica.

>

> compute logtest= Log[test]. You get 69.0775527898213705205

>

> Mathematica thinks it is accurate to 20 decimal places.

>

> Now forcibly convert it to 35 places:

> SetPrecision[logtest, 35]

>

> 69.07755278982137052053974364053093

>

> All these digits are correct, too. So Mathematica's

> intuition here is wrong.

Not so. I'll explain why presently.

> As most of us know, Log changes rather slowly. Mathematica

> doesn't know this, even though it could in principle figure

> this out by taking a derivative.

There is a good mathematical reason why this statement is wrong. I give

it at the bottom. Right now I'll analyze in some detail this particular

example.

Yes, you can "force" more correct digits, in effect by exposing the use

of so-called guard digits in significance arithmetic.

In[17]:= ff = N[10^30,20];

In[18]:= fflog = Log[ff]

Out[18]= 69.0775527898213705205

Are there some correct hidden digits?

In[24]:= gg = SetPrecision[fflog,35]

Out[24]= 69.077552789821370520539743572492123

Let's check against validated digits computed to higher precision.

In[25]:= hh = Log[N[10^30, 35]]

Out[25]= 69.0775527898213705205397436405309262

Yes there were. It seems there were six (I am using the development

version of Mathematica, which may be using fewer guard digits than the

version you tried this on). So where did these unannounced correct

digits come from?

First let's look at the exact value of the input in base 2. I'll write

it as a list for ease of eyeballing against the next variation.

In[16]:= IntegerDigits[10^30, 2] // InputForm

Out[16]//InputForm=

{1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1,
 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1,
 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0,
 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}

In[19]:= InputForm[kk = $NumberBits[ff]]

Out[19]//InputForm=

{1, {1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1,
     1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1,
     0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0},
 {1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
  0},
 100}

It seems that our 20-digit-precision value has some correct nonzero

hidden bits. So what happens if we only use the "correct" bits?

In[51]:= goodbits = kk[[2]];

In[52]:= glen = Length[goodbits];

In[53]:= expon = Last[kk];

In[54]:= mm = 2^Range[-1,-glen,-1] . goodbits * 2^expon

Out[54]= 999999999999999999990336323584

In[56]:= mmlog = Log[N[mm,40]]

Out[56]= 69.077552789821370520530079964114926228033

This agrees with fflog only to 20 places. Bottom line is yes, you might

have more correct digits in your log, but whether and how many will

depend on internals involving guard digits that may not all be correct.

The algorithm was not failing to work hard enough or be smart enough in

its error estimate; it was refusing to certify as "correct" those

possibly incorrect guard digits (that happened to be just fine for

N[10^30,20] and, to an extent, for its logarithm).
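
As a cross-check outside Mathematica: 10^30 occupies 100 bits, and the $NumberBits output above lists 66 bits as known-good, so keeping only those bits amounts to truncating the bottom 34. A Python sketch using the standard decimal module (an independent reconstruction, not the In[51]-In[54] code):

```python
from decimal import Decimal, getcontext

# Keep only the 66 "good" bits of 10^30 (which has 100 bits in all):
# truncate the low 100 - 66 = 34 bits.
n = 10**30
mm = (n >> 34) << 34       # 999999999999999999990336323584, as in Out[54]

getcontext().prec = 40
log_mm   = Decimal(mm).ln()
log_full = Decimal(n).ln()
# The two logarithms agree to about 20 decimal places and then part
# ways, matching Out[56] against the fully validated Out[25].
```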

> So a carefully

> written program would likely have to do this:

> (a) compute the proper error bounds, perhaps based on

> other formulas, derivatives, etc.

> (b) disabling/resetting the precision and/or accuracy

> of some (all?) default numerical results from mathematica

> to the correct ones.

> [....]

This is simply not right. Here is the essence of the problem with your

example. You have a number n to p digits (base 10, say) of precision,

and you take its logarithm. How good can that logarithm be? Well, the

accuracy, measured in the same base, can be no better than p. This is

mathematics, not Mathematica. By the way, what is the accuracy (and

precision) of our result?

In[21]:= {Accuracy[fflog], Precision[fflog]}

Out[21]= {20., 21.8393}

So the result we give is in fact as good as it gets, and no better

(which would be a bug).
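
The Out[21] figures follow from the standard relation between the two measures: Accuracy is minus the base-10 log of the absolute error, Precision minus the base-10 log of the relative error, so Precision[x] = Accuracy[x] + Log[10, Abs[x]]. A quick numerical check (in Python, purely for illustration):

```python
import math

# Precision tracks relative error, Accuracy tracks absolute error,
# so Precision[x] == Accuracy[x] + log10(|x|).
fflog = 69.07755278982137
accuracy = 20.0
precision = accuracy + math.log10(abs(fflog))   # ~21.8393, as in Out[21]
```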

Daniel Lichtblau

Wolfram Research

Jan 31, 2001, 11:19:03 AM

My example was too simple. Try this.

ni[a_]:=a- (a^2-4)/(2*a)

This is a Newton iteration that will, given an

approximation to the square root of 4 (namely 2),

provide a better approximation. We use Mathematica's

Nest to repeat this iteration.

Nest[ni,3.0,20] or Nest[ni,3.0,100] provides the floating

point number 2.

Nest[ni,3.00000000000000000,40] answers 2.0000

Nest[ni,3.00000000000000000,50] answers 2.0

Nest[ni,3.00000000000000000,56] answers 0.

SetPrecision[%,10] gives 2.000000000

So it computed the answer but lies about it.

Convergent iterative methods must be reprogrammed to

untangle the automatic "accuracy".

If we write ni[a_]:= 2/a+a/2 we get a different result.

"Automatic numerical analysis" works somewhat better here.
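
For comparison, here is what the fixed-precision behavior looks like: in ordinary IEEE double precision (a rough analogue of Nest[ni, 3.0, n] on machine numbers), both forms of the step converge to 2 and stay there:

```python
# Two algebraically equivalent Newton steps for sqrt(4), iterated
# in IEEE double precision.
def ni_sub(a):
    return a - (a * a - 4) / (2 * a)

def ni_sym(a):
    return 2 / a + a / 2

a1 = a2 = 3.0
for _ in range(100):
    a1, a2 = ni_sub(a1), ni_sym(a2)
# Both land on exactly 2.0: quadratic convergence reaches the fixed
# point within a handful of steps, and 2.0 maps to itself thereafter.
```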

RJF

Jan 31, 2001, 11:33:10 AM

Hello,

Richard Fateman <fat...@cs.berkeley.edu> writes:

You didn't tell us your version; here it works correctly:

4.0.2.0 for AIX.

Good luck

Alois

Jan 31, 2001, 12:46:54 PM

Richard Fateman wrote:

Let me quote from your previous post to explain this:

> Note that if the user wants it, there is an implementation

> of interval arithmetic in Mathematica.

In[1]:= ni[a_] := a - (a^2 - 4)/(2*a);

In[2]:= three = Interval[3. + 10^-17];

In[3]:= Nest[ni, three, 40] // InputForm

Out[3]//InputForm= Interval[{1.9982060871169038, 2.001793510752921}]

In[4]:= Nest[ni, three, 50] // InputForm

Out[4]//InputForm= Interval[{-3.60636370053441, 5.722982429090986}]

In[5]:= ni[a_] := 2/a + a/2;

In[6]:= Nest[ni, three, 40] // InputForm

Out[6]//InputForm= Interval[{1.9999999999999647, 2.000000000000036}]

In[7]:= Nest[ni, three, 50] // InputForm

Out[7]//InputForm= Interval[{1.9999999999999558, 2.000000000000045}]

An assumption of interval arithmetic is that errors combine

in an independent fashion. The above example is well known

in the interval arithmetic community where it is referred

to as the dependency problem.
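
The dependency effect is easy to reproduce outside Mathematica. Below is a deliberately minimal Python interval class (illustration only, with none of the outward rounding a real interval package would use): because each occurrence of `a` is treated as an independent range, the form with three occurrences blows up while the two-occurrence form stays tight, mirroring Out[3]/Out[4] versus Out[6]/Out[7].

```python
class Interval:
    """Minimal interval arithmetic; every operand is treated as an
    independent range, which is exactly the dependency problem."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))

    def __truediv__(self, o):
        assert o.lo > 0  # sufficient for this example
        return self * Interval(1 / o.hi, 1 / o.lo)

    @property
    def width(self):
        return self.hi - self.lo

def const(c):
    return Interval(c, c)

def ni_sub(a):   # a - (a^2 - 4)/(2 a): 'a' appears three times
    return a - (a * a - const(4)) / (const(2) * a)

def ni_sym(a):   # 2/a + a/2: 'a' appears twice, better conditioned
    return const(2) / a + a / const(2)

x = y = Interval(3.0 - 1e-15, 3.0 + 1e-15)
for _ in range(40):
    x, y = ni_sub(x), ni_sym(y)
# x.width has grown by roughly 2x per step; y.width has stayed tiny.
```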

Essentially since significance arithmetic is adaptive in its

error estimate it behaves in the same manner as interval arithmetic,

where rearranging an expression can give different results - just

as IEEE arithmetic gives different results, but without any

claim about how the result relates to the original problem.

The issue you raise seems to be, once again, that significance

arithmetic does not work like fixed precision floating point

arithmetic. It is not supposed to.

Mark Sofroniou,

Research and Development,

Wolfram Research.

Jan 31, 2001, 8:33:23 PM

Mark Sofroniou wrote:

<snip>

> An assumption of interval arithmetic is that errors combine

> in an independent fashion. The above example is well known

> in the interval arithmetic community where it is referred

> to as the dependency problem.

Certainly. But people doing arithmetic in Mathematica may not

realize that when they start with 3.000000000000000000

they are doing IEEE arithmetic but when they write

3.00000000000000000

they are doing a kind of interval arithmetic in which

convergent iterations end up with a result that looks like 0.

instead of 2.

>

> Essentially since significance arithmetic is adaptive in its

> error estimate it behaves in the same manner as interval arithmetic,

> where rearranging an expression can give different results

True

>- just

> as IEEE arithmetic gives different results, but without any

> claim about how the result relates to the original problem.

IEEE arithmetic does not have this precision inconsistency. Doing

arithmetic in different orders can give different results, though

here it doesn't. The principal claim with regard to the design

of IEEE binary floating point arithmetic is that it makes the

writing of (additional) programs to track errors somewhat easier,

and to a large extent machine independent. It doesn't claim

to magically derive the error bounds. Mathematica claims to do

so, but doesn't. Is this a bug or a feature? (By the way,

if it could do it RIGHT, it would be remarkable, and would

indeed be a feature.)

>

> The issue you raise seems to be, once again, that significance

> arithmetic does not work like fixed precision floating point

> arithmetic. It is not supposed to.

True, but saying that "it is supposed to work that way even

though it may surprise you," is essentially saying

"We declare this bug to be a feature".

RJF
