
True==False, 0=1, up is down, etc in Mathematica


Richard J. Fateman

Jul 29, 1998
In trying to renew my acquaintance
with the Mathematica 3.0 floating-point model,
I came across this set of "facts".

m=SetPrecision[123.0,5]
m1=m+1/10
m3=m+1/10000000000000000

m==m1==m3 is True

m1 >  m   is True     m3 >  m   is False
m1 >= m   is True     m3 >= m   is True
m1 == m   is True     m3 == m   is True
m1 =!= m  is True     m3 =!= m  is False
m1 === m  is False    m3 === m  is True
m1 <= m   is False    m3 <= m   is True
m1 <  m   is False    m3 <  m   is False
m - m1 == 0 is False  m - m3 == 0 is True

m1^2-m^2==24.61 is True
m1^2-m^2==30 is True
m1^2==m^2 is True

Now you may think that can only happen because I used
SetPrecision and it would never happen to you because
you never use SetPrecision... but

Explore the numbers n=123. and n1= n+10^(-13), and you
will find that n==n1 is true but n1<=n is false.
n==n1 is true but n1-n==0 is false.
n==n1 is true but n=!=n1 is also true.

InputForm[n1] is 123. but
InputForm[123.] is 122.999999999999
and n1-123. is 9.9x10^-14
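
The n/n1 experiment above can be roughly replicated outside Mathematica in
plain IEEE-754 double arithmetic (here a Python sketch; the printed values
assume standard doubles, where 1e-13 is about 7 ulps of 123.0):

```python
# Replicate the n / n1 experiment in IEEE-754 doubles (Python floats).
# The names n and n1 mirror the post above.
n = 123.0
n1 = n + 1e-13      # about 7 ulps of 123.0, so n1 rounds to a strictly larger double

print(n1 > n)       # True: n1 is representably greater than n
print(n1 - n)       # about 9.95e-14, matching the 9.9x10^-14 reported above

# Plain Python's == is an exact bit comparison, so doubles alone give:
print(n1 == n)      # False -- the paradoxes need a tolerant Equal on top
```

The point of the comparison: the double arithmetic itself is unsurprising
here; the anomalies come from layering a tolerant equality over it.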


It is easy to prove from one or two of these items
that 0=1, true=false, black=white, etc.

These "features" have been reported to sup...@wri.com, and
some date back at least to versions 1.0 and 2.0 discussed
in my

``Review of Mathematica,'' J. Symbolic Comp. 13 no. 5, May 1992.

but they are startling enough that maybe others would
like to either play with them, or maybe give some excuse
for how such inconsistencies can be explained.

--
Richard J. Fateman
fat...@cs.berkeley.edu http://http.cs.berkeley.edu/~fateman/

Daniel Lichtblau

Jul 29, 1998, to Richard J. Fateman


As you are soon co-teaching a workshop on floating-point arithmetic, I
imagine you wish to understand the rationale behind the results you
show. I am excepting InputForm[123.] not giving 123., as that is of
course a bug (fixed in our development version).

The results you find using SetPrecision are all explained by semantics
of Equal. Quoting the reference manual p. 1078, "Approximate numbers are
considered equal if they differ in at most their last two decimal
digits."

Here is seemingly anomalous behavior. First I replicate one of your
machine-arithmetic findings:

In[62]:= nmach = 123. ;

In[63]:= nmachplus = nmach + 10^(-14);

In[64]:= nmachplus==nmach
Out[64]= True

In[65]:= nmachplus - nmach == 0
Out[65]= False

In[66]:= InputForm[nmachplus - nmach]
Out[66]//InputForm= 1.4210854715202004*^-14

Fine so far, all in accordance with documented semantics. Out[64] is
explained by the fact that nmachplus agrees with nmach to within the
last two places.

In[67]:= InputForm[nmachplus]
Out[67]//InputForm= 123.00000000000001
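
The documented Equal rule (slop in the last eight binary digits) can be
mimicked for doubles with a relative tolerance of 2**-45 (8 bits of slop
out of a 53-bit significand). This is a sketch of the documented rule,
not Wolfram's actual implementation, and `mma_like_equal` is a made-up
name:

```python
def mma_like_equal(a: float, b: float) -> bool:
    """Toy model of Equal on machine reals: treat a and b as equal if
    they differ only in roughly the last 8 of the 53 significand bits
    of a double, i.e. within a relative tolerance of 2**-45."""
    tol = 2.0 ** -45 * max(abs(a), abs(b))
    return abs(a - b) <= tol

nmach = 123.0
nmachplus = nmach + 10.0 ** -14

print(mma_like_equal(nmachplus, nmach))        # True, like Out[64]
print(mma_like_equal(nmachplus - nmach, 0.0))  # False, like Out[65]
```

Note why the second call is False: the tolerance scales with the
magnitudes of the operands, and nmachplus - nmach is itself tiny, so the
slop around it is tiny too.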

Now redo this in significance arithmetic, using $MachinePrecision digits
of precision.

In[81]:= nbig = SetPrecision[123, $MachinePrecision];

In[82]:= nbigplus = nbig + 10^(-14);

In[83]:= nbigplus==nbig
Out[83]= True

In[84]:= nbigplus - nbig == 0
Out[84]= True

But why is Out[84] True? This is because nbigplus - nbig agrees with
zero in all of its significant digits (it has no precision, but is
accurate to 14 places to the right of the decimal point).

In[85]:= nbigplus - nbig // InputForm
Out[85]//InputForm= 1.`-0.4363*^-14
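
The significance-arithmetic behavior can be caricatured with a
value-plus-absolute-error pair: subtraction accumulates the error bars,
and a result counts as equal to zero once its error bar swamps its
value. This is only a crude sketch (real significance arithmetic
propagates error differently), and `SigReal` is an invented name:

```python
class SigReal:
    """Toy significance-arithmetic number: a value plus an absolute
    error bar. A caricature for illustration, not Mathematica's model."""
    def __init__(self, value, error):
        self.value = value
        self.error = error

    def __sub__(self, other):
        # Absolute errors add under subtraction (worst case).
        return SigReal(self.value - other.value, self.error + other.error)

    def equals(self, other):
        # Equal when the difference fits inside the combined error bars.
        return abs(self.value - other.value) <= self.error + other.error

# 123 carried to roughly 16 digits: absolute error about 123 * 10**-16
nbig = SigReal(123.0, 123.0 * 1e-16)
nbigplus = SigReal(123.0 + 1e-14, 123.0 * 1e-16)

diff = nbigplus - nbig          # value ~1e-14, error ~2.5e-14

print(nbigplus.equals(nbig))    # True: within the error bars
print(diff.equals(SigReal(0.0, 0.0)))  # True: error bar swallows the value
```

The second True is the analogue of Out[84]: the difference is nonzero,
but its uncertainty exceeds its magnitude, so it is indistinguishable
from zero in this model.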

One moral is that you are working with two floating point models, not
one. They are generally referred to as machine vs. significance
arithmetic.

The machine arithmetic results are closely related to a topic I expect
you will cover in your workshop. Languages typically define machine
epsilon for, say, double precision floats, to be the smallest such float
that, when added to one, gives a result bigger than one; typically
this is about 2*10^(-16). This number is MUCH larger than the smallest
normalized positive float. This distinction is analogous to the
different behaviors of comparing a nonzero value x to y vs comparing x-y
to zero.
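
That distinction can be checked directly in any IEEE-754 environment;
here is a sketch in Python, where sys.float_info exposes both constants
for doubles:

```python
import sys

eps = sys.float_info.epsilon    # ~2.22e-16: gap between 1.0 and the next double
tiny = sys.float_info.min       # ~2.23e-308: smallest normalized positive double

print(1.0 + eps > 1.0)          # True: eps is visible next to 1.0
print(1.0 + tiny > 1.0)         # False: tiny vanishes when added to 1.0
print(tiny > 0.0)               # True: yet tiny is nonzero on its own
```

This is the same asymmetry as comparing x to y versus comparing x - y
to zero: a quantity negligible next to 1 (or 123) is still perfectly
representable, and nonzero, on its own.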

I am curious to know why you might consider the behaviors noted (other
than the obvious bug) to be "startling". Specifically, what results did
you expect? They all follow documented semantics so far as I could tell.


Daniel Lichtblau
(not speaking for) Wolfram Research

Richard J. Fateman

Jul 30, 1998
In article <35BFEF6E...@wolfram.com>,
Daniel Lichtblau <da...@wolfram.com> wrote:
>
>The results you find using SetPrecision are all explained by semantics
>of Equal. Quoting the reference manual p. 1078, "Approximate numbers are
>considered equal if they differ in at most their last two decimal
>digits."

I don't use the written manual, but the on-line help I found
suggested that when 2 numbers are compared, they are equal if
their difference is less than the uncertainty in the most-uncertain
participant in the arithmetic. This did not explain the anomalies,
which I repeat below.


>
>One moral is that you are working with two floating point models, not
>one. They are generally referred to as machine vs. significance
>arithmetic.

Is it a feature that you have a model (significance arithmetic) that is
different from, and incompatible with, the machine arithmetic?


>I am curious to know why you might consider the behaviors noted (other
>than the obvious bug) to be "startling". Specifically, what results did
>you expect? They all follow documented semantics so far as I could tell.


I did not expect to have two real numeric values in Mathematica, call
them x and y, such that x>y, x==y and x=!=y are all true.

Most people would also view x<=y as (x<y) or (x==y) and thus
x==y implies x<y.

Are you saying that this is intentional as well as documented?

Daniel Lichtblau

Jul 30, 1998, to Richard J. Fateman
Richard J. Fateman wrote:
>
> In article <35BFEF6E...@wolfram.com>,
> Daniel Lichtblau <da...@wolfram.com> wrote:
> >
> >The results you find using SetPrecision are all explained by semantics
> >of Equal. Quoting the reference manual p. 1078, "Approximate numbers are
> >considered equal if they differ in at most their last two decimal
> >digits."
>
> I don't use the written manual, but the on-line help I found
> suggested that when 2 numbers are compared, they are equal if
> their difference is less than the uncertainty in the most-uncertain
> participant in the arithmetic. This did not explain the anomalies,
> which I repeat below.
> >

I checked the on-line dox for Equal. One of the bulleted items says:
"Approximate numbers are considered equal if they differ in at most
their last eight binary digits (roughly their last two decimal digits)."

I suspect you were looking at SameQ ('===').


> >One moral is that you are working with two floating point models, not
> >one. They are generally referred to as machine vs. significance
> >arithmetic.
>
> Is it a feature that you have a model (significance arithmetic) that is
> different and incompatible with the machine arithmetic?
>

They have different semantics, and that is by design. I do not see this
as an incompatibility.


> >I am curious to know why you might consider the behaviors noted (other
> >than the obvious bug) to be "startling". Specifically, what results did
> >you expect? They all follow documented semantics so far as I could tell.
>
> I did not expect to have two real numeric values in Mathematica, call
> them x and y, such that x>y, x==y and x=!=y are all true.

I suppose that might seem unintuitive to one unfamiliar with the
semantics. But I think it makes good sense. Equal and SameQ have
different standards of comparison, the former in essence allowing a byte
of slop. As for Equal vs Greater see my remark at the end.


> Most people would also view x<=y as (x<y) or (x==y) and thus
> x==y implies x<y.

This is not clear to me. Maybe you mistyped somewhere?


> Are you saying that this is intentional as well as documented?

The fact that Equal allows for slop certainly implies that one can have
both x==y and x>y. And the semantics of Equal are by design.


Daniel Lichtblau
(still not speaking for) Wolfram Research
