
Apr 27, 2004, 5:13:06 AM

Here is my code in Mathematica 5:

h1=1+(y-x)^2;

h2=1+(2/3-x^2)^2;

b=h2/(h1+h2);

f=x(1-x)^2;

s=f*D[b,y];

g[y_]:=NIntegrate[s,{x,0,1}];

FindRoot[g[y]==0,{y,0.35,0.45}]

The output is

NIntegrate::inum: Integrand H is not numerical at {x} = {0.5`}.

FindRoot::cvmit: Failed to converge to the requested accuracy or precision within 100 iterations.

and it gives me y->0.45, the upper boundary. However, when I plot g[y] I

can see that the solution is somewhere around y=0.4. Using Solve instead of

FindRoot does not give me any solutions.

What am I doing wrong here? Thank you in advance. Mukhtar Bekkali

Apr 28, 2004, 7:06:33 AM

Hi,

and what about:

h1 = 1 + (y - x)^2;

h2 = 1 + (2/3 - x^2)^2;

b = h2/(h1 + h2);

f = x(1 - x)^2;

s = f*D[b, y];

g[yy_?NumericQ] := NIntegrate[Evaluate[s /. y -> yy], {x, 0, 1}]

FindRoot[g[y] == 0, {y, 0.35, 0.45}]

Regards

Jens

"Mukhtar Bekkali" <mbek...@iastate.edu> schrieb im Newsbeitrag

news:c6l872$ire$1...@smc.vnet.net...

Apr 28, 2004, 7:12:47 AM

Hi Mukhtar,

Attributes[NIntegrate]

{HoldAll, Protected}

So s is not evaluated until later.

Your code would look much cleaner if you kept the arguments of your functions explicit:

h1[x_, y_] = 1 + (y - x)^2;

h2[x_] = 1 + (2/3 - x^2)^2;

b[x_, y_] = h2[x]/(h1[x, y] + h2[x]);

s[x_, y_] = FullSimplify[x*(1 - x)^2*Evaluate[D[b[x, y], y]]];

g[y_] := NIntegrate[s[x, y], {x, 0, 1}]

Now, unlike in your code, Plot[g[y], {y, 0.2, 0.8}] plots without proliferating a bunch of errors. Furthermore, it produces the correct result for FindRoot:

FindRoot[g[y], {y, 0.35, 0.45}, WorkingPrecision -> 16]

<< Error messages NIntegrate::ploss suppressed >>

{y -> 0.3993706824894055714387173794`16.}

If those messages annoy you, like they do for me,

you can work with the interpolated function

ig = Interpolation[Table[{y, g[y]}, {y, 0.2, 0.8, 0.005}], InterpolationOrder -> 5];

Then the result is spit out without errors.

FindRoot[ig[y], {y, 0.35, 0.45}, WorkingPrecision -> 16]

{y -> 0.3993706824894056324716472443`16.}

So now I've got a question for the group.

It seems to me like a general problem: having a function f[x] defined via an integral (or Sum, or Product),

f[x] = Integrate[f[x, y], {y, domain}],

I can plot the function and see its zero approximately, but when fed to FindRoot, I get a bunch of errors.

I do not understand why, but that aside, the only clean

way out is to interpolate f[x] first, and then use FindRoot.
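Another route, used in Jens's reply above, is to restrict the argument pattern so the integral is evaluated only for numeric input; a minimal sketch, assuming the s[x, y] defined above (the name gNum is just for illustration):

```mathematica
(* Sketch only: the ?NumericQ test stops FindRoot from handing a
   symbolic y to NIntegrate, so no interpolation step is needed. *)
gNum[y_?NumericQ] := NIntegrate[s[x, y], {x, 0, 1}]

FindRoot[gNum[y] == 0, {y, 0.35, 0.45}]
```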

If anybody on the list has a better workaround, please share :).

Your insight would be greatly appreciated.

Regards,

Sasha

Apr 28, 2004, 7:13:48 AM

Mukhtar Bekkali wrote:

>Here is my code in Mathematica 5:

>

>h1=1+(y-x)^2;

>h2=1+(2/3-x^2)^2;

>b=h2/(h1+h2);

>f=x(1-x)^2;

>s=f*D[b,y];

>g[y_]:=NIntegrate[s,{x,0,1}];

>FindRoot[g[y]==0,{y,0.35,0.45}]

>

>The output is

>

>NIntegrate::inum: Integrand H is not numerical at {x} = {0.5`}.

>FindRoot::cvmit: Failed to converge to the requested accuracy or precision

>within 100 iteration

>

>and it gives me y->0.45 or the upper boundary. However, when I plot g[y] I

>can see that the solution is somewhere around y=0.4. Using Solve instead of

>FindRoot does not give me any solutions.

>

>What am I doing wrong here? Thank you in advance. Mukhtar Bekkali

>

>

>

You can try this:

In[158]:=

Clear[f, g, s, y, x]

h1 = 1 + (y - x)^2;

h2 = 1 + (2/3 - x^2)^2;

b = h2/(h1 + h2);

f = x*(1 - x)^2;

s = f*D[b, y];

g[s_, (y_)?NumericQ] := NIntegrate[s, {x, 0, 1}];

FindRoot[g[s, y] == 0, {y, 0.35, 0.45}]

Out[165]=

{y -> 0.3993706824894056}

The reason why NIntegrate gives "inum" messages is explained at

http://support.wolfram.com/mathematica/mathematics/numerics/nsumerror.html

Anton Antonov

Wolfram Research, Inc.


Apr 29, 2004, 3:23:08 AM

I think that with all these nice workarounds it went unnoticed that

there *was* something wrong with Mukhtar's example. Consider:

In[1]:=

Module[

{y = Cos[x], f},

f[x_] := y;

FindRoot[f[x], {x, 1, 2},

Compiled -> True, MaxIterations -> 15,

StepMonitor :> Print["step: ", {x, f[x]}]]

]

Out[1]=

{x -> 1.5708}

In[2]:=

Module[

{y = Cos[x], f},

f[x_?NumericQ] := y;

FindRoot[f[x], {x, 1, 2},

Compiled -> True, MaxIterations -> 15,

StepMonitor :> Print["step: ", {x, f[x]}],

EvaluationMonitor -> 0]

]

FindRoot::cvmit: Failed to converge to the requested

accuracy or precision within 15 iterations.

Out[2]=

{x -> 2.}

This is similar to the issue that was discussed at

http://forums.wolfram.com/mathgroup/archive/2003/Dec/msg00401.html

Just as with the Plot examples, f[1] here is not numerical because x

in Cos[x] will remain unbound. But if we evaluate Block[{x=1}, f[x]],

then the result will be numerical, and this is how FindRoot works. And

then there's the question of how it will interact with Compiled->True.

But here we have a new twist, because, according to Trace, FindRoot

does obtain numerical values evaluating f[x] in both cases, so it's

not clear what happens in the second example, when we use

f[x_?NumericQ].

Oh, and EvaluationMonitor/StepMonitor won't help us here, because they

perform true delayed evaluation, while what FindRoot seems to do is to

evaluate the first argument once and then to perform all computations

on what it obtains (why does it have HoldAll attribute then? Simple:

to add extra confusion if FindRoot is used inside Module or With and

there is a name conflict). As an additional bonus, if you omit

EvaluationMonitor in the second example, then StepMonitor won't print

anything either. Lovely.

Maxim Rytin

m...@prontomail.com

Apr 29, 2004, 3:24:09 AM

the following seems to provide the correct solution.

but, you will get a warning that NIntegrate is having trouble because the

integral is approximately zero.

i think that the code in the original posting may have applied the

Replacement Rule for the y before applying the D function.

exprGen=

Simplify[

With[{h1 = 1 + (yVar - xVar)^2,

h2 = 1 + (2/3 - xVar^2)^2},

With[{b = h2/(h1 + h2),f = xVar(1 - xVar)^2},

expr=(f*D[b, yVar])]]]

g[yArg_?NumberQ]:=

NIntegrate[(expr)/.yVar->yArg,{xVar,0,1}]

FindRoot[g[y] == 0, {y, 0.35, 0.45}]

"Jens-Peer Kuska" <ku...@informatik.uni-leipzig.de>

To: math...@smc.vnet.net

Subject: Re: FindRoot cannot find obvious solution

04/28/2004 06:56 AM

Hi,

and what about:

h1 = 1 + (y - x)^2;

h2 = 1 + (2/3 - x^2)^2;

b = h2/(h1 + h2);

f = x(1 - x)^2;

s = f*D[b, y];

g[yy_?NumericQ] := NIntegrate[Evaluate[s /. y -> yy], {x, 0, 1}]

FindRoot[g[y] == 0, {y, 0.35, 0.45}]

Regards

Jens

"Mukhtar Bekkali" <mbek...@iastate.edu> schrieb im Newsbeitrag

news:c6l872$ire$1...@smc.vnet.net...

> Here is my code in Mathematica 5:

>

> h1=1+(y-x)^2;

> h2=1+(2/3-x^2)^2;

> b=h2/(h1+h2);

> f=x(1-x)^2;

> s=f*D[b,y];

> g[y_]:=NIntegrate[s,{x,0,1}];

> FindRoot[g[y]==0,{y,0.35,0.45}]

>

> The output is

>

> NIntegrate::inum: Integrand H is not numerical at {x} = {0.5`}.

> FindRoot::cvmit: Failed to converge to the requested accuracy or precision

Apr 29, 2004, 7:53:41 PM

On 29 Apr 2004, at 16:05, Maxim wrote:

> I think that with all these nice workarounds it went unnoticed that

> there *was* something wrong with Mukhtar's example. Consider:

>

> In[1]:=

> Module[

> {y = Cos[x], f},

> f[x_] := y;

> FindRoot[f[x], {x, 1, 2},

> Compiled -> True, MaxIterations -> 15,

> StepMonitor :> Print["step: ", {x, f[x]}]]

> ]

>

> Out[1]=

> {x -> 1.5708}

>

> In[2]:=

> Module[

> {y = Cos[x], f},

> f[x_?NumericQ] := y;

> FindRoot[f[x], {x, 1, 2},

> Compiled -> True, MaxIterations -> 15,

> StepMonitor :> Print["step: ", {x, f[x]}],

> EvaluationMonitor -> 0]

> ]

>

>

> FindRoot::cvmit: Failed to converge to the requested

> accuracy or precision within 15 iterations.

>

> Out[2]=

> {x -> 2.}

>

> This is similar to the issue that was discussed at

>

> http://forums.wolfram.com/mathgroup/archive/2003/Dec/msg00401.html

>

> Just as with the Plot examples, f[1] here is not numerical because x

> in Cos[x] will remain unbound. But if we evaluate Block[{x=1}, f[x]],

> then the result will be numerical, and this is how FindRoot works. And

> then there's the question of how it will interact with Compiled->True.

>

> But here we have a new twist, because, according to Trace, FindRoot

> does obtain numerical values evaluating f[x] in both cases, so it's

> not clear what happens in the second example, when we use

> f[x_?NumericQ].

>

> Oh, and EvaluationMonitor/StepMonitor won't help us here, because they

> perform true delayed evaluation, while what FindRoot seems to do is to

> evaluate the first argument once and then to perform all computations

> on what it obtains (why does it have HoldAll attribute then? Simple:

> to add extra confusion if FindRoot is used inside Module or With and

> there is a name conflict). As an additional bonus, if you omit

> EvaluationMonitor in the second example, then StepMonitor won't print

> anything either. Lovely.

>

> Maxim Rytin

> m...@prontomail.com

>

>

>

I really fail to see what point you are trying to make. Certainly, your use of Module has nothing to do with the problem, which is entirely due to the setting Compiled -> True. If you set Compiled -> False, which is only natural since the function defined in this way obviously can't be compiled, then everything, including StepMonitor, works exactly as it should. Why on earth do you keep insisting that Mathematica should behave correctly when you give her nonsensical input to work on?

y = Cos[x];

f[(z_)?NumericQ] := y;

FindRoot[f[x], {x, 1, 2}, Compiled -> False,

MaxIterations -> 15, StepMonitor :>

Print["step: ", {x, f[x]}]]

"step: "{2., -0.4161468365471424}

"step: "{1.5649043758915777, 0.005891916813448233}

"step: "{1.570978574535018, -0.00018224773911250164}

"step: "{1.570796325773051, 1.0218455583390503*^-9}

"step: "{1.5707963267948966, 6.123233995736766*^-17}

{x -> 1.5707963267948966}

Andrzej Kozlowski

Chiba, Japan

http://www.mimuw.edu.pl/~akoz/

May 8, 2004, 1:46:47 AM

The recommendation to simply set "EvaluateNumericalFunctionArgument"

to False ( http://support.wolfram.com/mathematica/mathematics/numerics/nsumerror.html

) can be a bad idea. The reason is that with the default settings for

FindRoot the jacobian is still pre-evaluated. Hence it may happen that

we are evaluating one function and using the derivative of an entirely

different function:


In[1]:=

Developer`SetSystemOptions["EvaluateNumericalFunctionArgument" -> False];

Module[{f = If[TrueQ[# > 0], #, -#] &},

FindRoot[f[x] == 2, {x, 1}]

]

FindRoot::lstol: The line search decreased the step size to within

tolerance specified by AccuracyGoal and PrecisionGoal but was

unable to find a sufficient decrease in the merit function.

You may need more than MachinePrecision digits of working

precision to meet these tolerances.

Out[2]=

{x -> 1.}

One way is to use RuleDelayed for Jacobian:

In[3]:=

Developer`SetSystemOptions["EvaluateNumericalFunctionArgument" -> False];

Module[{f = If[TrueQ[# > 0], #, -#] &},

FindRoot[f[x] == 2, {x, 1}, Jacobian :> {{f'[x]}}]

]

Out[4]=

{x -> 2.}

This method isn't guaranteed to work either, because the first step in

evaluation of f'[x] is to evaluate f', and the question of how

Mathematica does that is equally unclear:

In[5]:=

Module[{f, g},

f = If[TrueQ[# > 0], #, -#] &;

g[x_] := If[TrueQ[x > 0], x, -x];

{f', g'}

]

Out[5]=

{If[TrueQ[#1 > 0], 1, -1] &, -1 &}

The output for g' agrees with what the documentation says: g' is

evaluated as Evaluate[D[g[#],#]]& (actually the reference for

Derivative says D[f[#]&,{#,n}], which doesn't make any sense. So I

assume it was supposed to mean Evaluate[D[f[#],{#,n}]]&). But f' is

evaluated differently, because there are special rules for the

derivatives of pure functions (or maybe specifically for control

structures like If).

Because of this, the safest way is still to use f[x_?NumericQ], even

with "EvaluateNumericalFunctionArgument"->False, to block the

evaluation of f'...and then to hope that it doesn't give some

unexpected effect in combination with Compiled->True (which is the

default setting for all numerical functions that use Compiled).

Maxim Rytin

m...@prontomail.com

May 9, 2004, 3:08:12 AM

On 8 May 2004, at 14:24, Maxim wrote:

>

> Because of this, the safest way is still to use f[x_?NumericQ], even

> with "EvaluateNumericalFunctionArgument"->False, to block the

> evaluation of f'...and then to hope that it doesn't give some

> unexpected effect in combination with Compiled->True (which is the

> default setting for all numerical functions that use Compiled).

>

There seems to be no point in using "EvaluateNumericalFunctionArgument"->False together with f[x_?NumericQ], since with the first option you lose all the performance benefits of the new approach in Mathematica 5.0 (which you can see in the examples on the Wolfram support page). And secondly: I have not observed any problems with using f[x_?NumericQ] except when one chooses to do something unnatural like:

Module[

{y = Cos[x], f},

f[x_?NumericQ] := y;

FindRoot[f[x], {x, 1, 2},

Compiled -> True, MaxIterations -> 15,

StepMonitor :> Print["step: ", {x, f[x]}],

EvaluationMonitor -> 0]

]

Just changing it to

Module[

{ f},

f[x_?NumericQ] := Cos[x];

FindRoot[f[x], {x, 1, 2},

Compiled -> True, MaxIterations -> 15,

StepMonitor :> Print["step: ", {x, f[x]}]]

]

makes everything work perfectly. This seems to be the case with almost

every "problem" you have reported for as long as I can remember. (To

deal with the remaining ones, like why 1`2 == 2 gives True, etc,

requires only careful reading of the documentation).

Andrzej

> The recommendation to simply set "EvaluateNumericalFunctionArgument"

> to False (

> http://support.wolfram.com/mathematica/mathematics/numerics/

> nsumerror.html

> ) can be a bad idea. The reason is that with the default settings for

> FindRoot the jacobian is still pre-evaluated. Hence it may happen that

> we are evaluating one function and using the derivative of an entirely

> different function:

>

> In[1]:=

> Developer`SetSystemOptions["EvaluateNumericalFunctionArgument" ->

> False];

> Module[{f = If[TrueQ[# > 0], #, -#] &},

> FindRoot[f[x] == 2, {x, 1}]

> ]

>

> FindRoot::lstol: The line search decreased the step size to within

> tolerance specified by AccuracyGoal and PrecisionGoal but was

> unable to find a sufficient decrease in the merit function.

> You may need more than MachinePrecision digits of working

> precision to meet these tolerances.

>

> Out[2]=

> {x -> 1.}

>

> One way is to use RuleDelayed for Jacobian:

>

> In[3]:=

> Developer`SetSystemOptions["EvaluateNumericalFunctionArgument" ->

> False];

> Module[{f = If[TrueQ[# > 0], #, -#] &},

> FindRoot[f[x] == 2, {x, 1}, Jacobian :> {{f'[x]}}]

> ]

>

> Out[4]=

> {x -> 2.}

>

May 11, 2004, 5:37:52 AM

Andrzej Kozlowski <ak...@mimuw.edu.pl> wrote in message news:<c7klcs$2kn$1...@smc.vnet.net>...

> makes everything work perfectly. This seems to be the case with almost

> every "problem" you have reported for as long as I can remember. (To

> deal with the remaining ones, like why 1`2 == 2 gives True, etc,

> requires only careful reading of the documentation).

Actually this was discussed in a different thread (

http://forums.wolfram.com/mathgroup/archive/2004/Apr/msg00615.html ),

so I'll paste your answer from there:

--------paste----------

Andrzej Kozlowski wrote:

Besides it is well known that Mathematica's high precision arithmetic

model, based on approximate interval arithmetic, works well only for

numbers whose uncertainty is much smaller than the number itself. This

is the case in all applications I have ever heard of. For very low

precision numbers like your examples, it is known to give excessively

pessimistic or wrong answers, but so what?

---------end-----------

The statement that arbitrary-precision arithmetic in Mathematica is based on interval arithmetic is inaccurate; perhaps we might say that since we have an estimate for the error, we also implicitly have an interval, but this just leads to confusion with true interval arithmetic, which is implemented for Interval objects.

Instead, arbitrary-precision computations in Mathematica use

significance arithmetic and the notion of condition number to keep

track of precision. For example, squaring a number can lose one binary

digit, so after calculating the square Mathematica subtracts Log[10,2]

from the precision of the input -- it doesn't use Interval to

represent bignums and this squaring isn't performed on any intervals.

Next, the suggestion that the uncertainty should be much smaller

(exactly how much would that be?) than the number itself seems to be

rather vacuous -- if the uncertainty and the number are of the same

order, that would mean that no digits are known for certain, that is,

the precision is zero.
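This bookkeeping can be observed directly. A small illustrative check (the exact printed precisions may vary slightly across versions; the values in the comments are approximate):

```mathematica
(* Squaring roughly doubles the relative error, so significance
   arithmetic charges about Log[10, 2] ~ 0.3 decimal digits. *)
x = N[Sqrt[2], 20];   (* a 20-digit approximate number *)
Precision[x]          (* about 20 *)
Precision[x^2]        (* about 20 - Log[10, 2], i.e. roughly 19.7 *)
```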

I wouldn't say that true interval arithmetic is 'known' to give wrong

answers -- that would totally defy the purpose of intervals; the whole

point of using them for approximate calculations is to obtain bounds

which are certain to encapsulate the possible error (so 'pessimistic'

and 'wrong' are more like antonyms -- pessimistic means that those

bounds might be too wide).

But now, as far as I understand from your last post, you changed your

mind and instead of saying that those examples just gave meaningless

output due to low precision (which doesn't seem like a wise thing for

Interval to do anyway), you claim that, after reading documentation,

you can offer some explanation other than "so what?". Then please

elaborate on why 1`2==2.2, 1`2==2.3 and

IntervalMemberQ[Sin[Interval[1`2]],0] work the way they do. I'd like

to hear the explanation, that's why I was asking the question in the

first place.

Maxim Rytin

m...@prontomail.com

May 13, 2004, 12:15:38 AM

On 11 May 2004, at 18:20, Maxim wrote:

> Andrzej Kozlowski <ak...@mimuw.edu.pl> wrote in message

> news:<c7klcs$2kn$1...@smc.vnet.net>...

>

>> makes everything work perfectly. This seems to be the case with almost

>> every "problem" you have reported for as long as I can remember. (To

>> deal with the remaining ones, like why 1`2 == 2 gives True, etc,

>> requires only careful reading of the documentation).

>

I wrote "approximate" interval arithmetic. Mathematica's model of significance arithmetic is an approximation to interval arithmetic. Mathematica uses an approximate model because real interval arithmetic implemented via Interval is far too slow for everyday use. An approximate model rather naturally suffers from weaknesses: in Mathematica's case one of them is dealing with low precision numbers. However, if you need to do that you can always use true interval arithmetic. As for your other question I will just cite from the documentation for Equal:

Approximate numbers are considered equal if they differ in at most

their last eight binary digits (roughly their last two decimal digits).

Andrzej

May 14, 2004, 12:30:49 AM

On 13 May 2004, at 23:26, Daniel Lichtblau wrote:

>

> The issue of 1`2==2.2 is a bit dicey. I believe the most useful

> interpretation is roughly as Andrzej suggests, that the model breaks

> down at sufficiently low precision. Yes, there are "contradictions" is

> the logic that lead to the equality above, but to me they are not

> terribly compelling.

>

>

> Daniel

>

>

At least in this respect Mathematica's behaviour agrees with the documentation: since

RealDigits[1`2,2]

{{1,0,0,0,0,0,0},1}

has only 7 digits it does "differ in at most last eight binary digits"

from 2.

This is a bit more tricky

1.`2 == 0

True

Sin[1`2]==0

False

The key point is clearly the fact that here Sin slightly increases

precision:

Precision[Sin[1`2]]

2.1924

Looking at RealDigits[Sin[1`2],2] is not quite convincing, since

Mathematica still returns only 7 digits:

RealDigits[Sin[1`2],2]

{{1,1,0,1,1,0,0},0}

but I suspect that the statement in the documentation about "last eight

binary digits" is also only approximate and that this is actually a

borderline case (just over the borderline).

In any case, these are clearly pathologies of significance arithmetic

and without importance.

May 14, 2004, 12:32:51 AM

Andrzej Kozlowski <ak...@mimuw.edu.pl> wrote in message news:<c7uspa$q7s$1...@smc.vnet.net>...

In other words, using true interval arithmetic with low precision

numbers is safe? But that was exactly what I did in the examples with

Interval:

IntervalMemberQ[Interval[1`2],0]

IntervalMemberQ[Sin[Interval[1`2]],0]

How does your explanation clarify this inconsistency? As to Equal,

let's count digits:

In[1]:=

RealDigits[1`2, 2, 8, 1]

RealDigits[1`2, 2, 9, 1]

Out[1]=

{{0,1,0,0,0,0,0,0},2}

Out[2]=

{{0,1,0,0,0,0,0,0,0},2}

There's some uncertainty as to how many digits to take: by default

RealDigits returns 7 digits, but we can ask it to give us 8 digits;

all the next ones will be Indeterminate. I added a leading zero just for

alignment with RealDigits[2.2,2] and RealDigits[2.3,2], which give

In[3]:=

RealDigits[2.2,2]

RealDigits[2.3,2]

Out[3]=

{{1,0,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,1,0},2}

Out[4]=

{{1,0,0,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0},2}

If the eight-digits rule applies here, 1`2 is either equal to anything

(eight digits is all we know) or unequal both to 2.2 and 2.3 by the

leading digit. Perhaps some more 'careful reading of documentation' is

required?

Maxim Rytin

m...@prontomail.com

May 14, 2004, 9:10:21 PM

On 14 May 2004, at 13:12, Maxim wrote:

> Andrzej Kozlowski <ak...@mimuw.edu.pl> wrote in message

> news:<c7uspa$q7s$1...@smc.vnet.net>...

> In other words, using true interval arithmetic with low precision

> numbers is safe? But that was exactly what I did in the examples with

> Interval:

>

> IntervalMemberQ[Interval[1`2],0]

> IntervalMemberQ[Sin[Interval[1`2]],0]

>

> How does your explanation clarify this inconsistency?

You are not actually using "Interval arithmetic" here. At best, you are

using a mixture of Interval arithmetic and significance arithmetic. You

can certainly make accurate error estimates using interval arithmetic

with exact numbers.

IntervalMemberQ[Interval[{1-1/100,1+1/100}],0]

False

and so on.

I have already answered the other question in another posting. This

happens because Sin increases Precision (and by the way, it's not yet

another "paradox"). No inconsistency.

As I wrote in my other posting, the actual rule is probably based on

Precision and not exactly the number of digits, which only roughly

corresponds to it. The exact rule for identifying "close" approximate

numbers is not given because nobody needs it.

In any case, as I wrote before, significance arithmetic does not work

well with low precision numbers, and that is the price one pays for it being manageably fast. If you need error estimates for low precision computations, use (exact!) interval arithmetic.

May 15, 2004, 4:06:23 AM

Since my other posting somehow did not make it or at least I have not

seen it, I thought I should clarify just this one point, though I consider all this exchange totally without any practical significance.

>> But that was exactly what I did in the examples with

>> Interval:

>>

>> IntervalMemberQ[Interval[1`2],0]

>> IntervalMemberQ[Sin[Interval[1`2]],0]

>>

>> How does your explanation clarify this inconsistency?

>

>

Firstly, as I wrote, this has nothing to do with interval arithmetic and even nothing to do with Interval.

It is simply this:

1`2==0

True

Sin[1`2]==Sin[0]

False

Why is that? Well, the answer is here:

Precision[1`2]

1.9999999999999998

Precision[Sin[1`2]]

2.1924023244417263

Sin increases Precision and 2 digits of precision seem to be the

borderline for this problem. I can't tell with complete confidence

exactly what the "identifying" rule is, but I do not need to know nor

particularly want to know.

Andrzej

On 15 May 2004, at 09:58, Andrzej Kozlowski wrote:

>

> On 14 May 2004, at 13:12, Maxim wrote:

>

>> Andrzej Kozlowski <ak...@mimuw.edu.pl> wrote in message

>> news:<c7uspa$q7s$1...@smc.vnet.net>...

>> In other words, using true interval arithmetic with low precision

>> numbers is safe? But that was exactly what I did in the examples with

>> Interval:

>>

>> IntervalMemberQ[Interval[1`2],0]

>> IntervalMemberQ[Sin[Interval[1`2]],0]

>>

May 17, 2004, 3:29:38 AM

Andrzej Kozlowski <ak...@mimuw.edu.pl> wrote in message news:<c83qlt$li3$1...@smc.vnet.net>...

> On 14 May 2004, at 13:12, Maxim wrote:

>

> > In other words, using true interval arithmetic with low precision

> > numbers is safe? But that was exactly what I did in the examples with

> > Interval:

> >

> > IntervalMemberQ[Interval[1`2],0]

> > IntervalMemberQ[Sin[Interval[1`2]],0]

> >

> > How does your explanation clarify this inconsistency?

>

>

> You are not actually using "Interval arithmetic" here. At best,you are

> using a mixture of Interval arithmetic and significance arithmetic. You

> can certainly make accurate error estimates using interval arithmetic

> with exact numbers.

>

>

> IntervalMemberQ[Interval[{1-1/100,1+1/100}],0]

> False

> and so on.

>

> I have already answered the other question in another posting. This

> happens because Sin increases Precision (and by the way, it's not yet

> another "paradox"). No inconsistency.

>


But nobody says that Interval[Sin[1`2]] and Sin[Interval[1`2]] should

be identical; it's no surprise that Sin[Interval[{0,Pi}]] is not

Interval[{Sin[0],Sin[Pi]}].

Here's what the question was about (it seems that I have to start with

that). We create Interval[1`2]; numbers -0.29 and 2.2 belong to the

interval, therefore taking Sin of the interval we must obtain an

interval which includes at least the range from -0.28 to 1.0;

otherwise it would be like squaring Interval[{0,2}] and still getting

Interval[{0,2}] as the answer.

Instead, Sin[Interval[1`2]] returns an interval which includes (by

IntervalMemberQ) only numbers from 0.12 to 1.0.

It's quite plausible that, while the interval bounds are rounded

outwards as they should be, the increased precision 'eats up' this

rounding and in the end we get an interval which is narrower than

necessary. But to say that there is no inconsistency in the strict

mathematical sense is incorrect.
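By contrast, with exact rational endpoints the enclosure is rigorous. A small check using the approximate range read off above (the endpoints -29/100 and 22/10 are taken from the membership tests; the result should be True, since the range of Sin over this interval contains 0):

```mathematica
(* Exact interval arithmetic: Sin over an interval containing 0
   must yield an interval containing 0. *)
IntervalMemberQ[Sin[Interval[{-29/100, 22/10}]], 0]
```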

We've been through that before: if equality is determined by the

precision of the number (that would be explanation #4 or so by now?),

then only numbers in the range from 1-0.01 to 1+0.01 should be

considered as equal to 1`2.

Maxim Rytin

m...@prontomail.com

May 18, 2004, 4:26:39 AM

On 17 May 2004, at 16:21, Maxim wrote:

> Andrzej Kozlowski <ak...@mimuw.edu.pl> wrote in message

> news:<c83qlt$li3$1...@smc.vnet.net>...

>> On 14 May 2004, at 13:12, Maxim wrote:

>>

>>> In other words, using true interval arithmetic with low precision

>>> numbers is safe? But that was exactly what I did in the examples with

>>> Interval:

>>>

>>> IntervalMemberQ[Interval[1`2],0]

>>> IntervalMemberQ[Sin[Interval[1`2]],0]

>>>

>>> How does your explanation clarify this inconsistency?

>>

>>

>> You are not actually using "Interval arithmetic" here. At best,you are

>> using a mixture of Interval arithmetic and significance arithmetic.

>> You

>> can certainly make accurate error estimates using interval arithmetic

>> with exact numbers.

>>

>>

>> IntervalMemberQ[Interval[{1-1/100,1+1/100}],0]

>> False

>> and so on.

>>

>> I have already answered the other question in another posting. This

>> happens because Sin increases Precision (and by the way, it's not yet

>> another "paradox"). No inconsistency.

>>

>

> But nobody says that Interval[Sin[1`2]] and Sin[Interval[1`2]] should

> be identical; it's no surprise that Sin[Interval[{0,Pi}]] is not

> Interval[{Sin[0],Sin[Pi]}].

>

> Here's what the question was about (it seems that I have to start with

> that). We create Interval[1`2]; numbers -0.29 and 2.2 belong to the

> interval, therefore taking Sin of the interval we must obtain an

> interval which includes at least the range from -0.28 to 1.0;

> otherwise it would be like squaring Interval[{0,2}] and still getting

> Interval[{0,2}] as the answer.

>

> Instead, Sin[Interval[1`2]] returns an interval which includes (by

> IntervalMemberQ) only numbers from 0.12 to 1.0.

>

> It's quite plausible that, while the interval bounds are rounded

> outwards as they should be, the increased precision 'eats up' this

> rounding and in the end we get an interval which is narrower than

> necessary. But to say that there is no inconsistency in the strict

> mathematical sense is incorrect.

>

> We've been through that before: if equality is determined by the

> precision of the number (that would be explanation #4 or so by now?),

> then only numbers in the range from 1-0.01 to 1+0.01 should be

> considered as equal to 1`2.

>

> Maxim Rytin

> m...@prontomail.com

>

>

>

This argument is clearly not getting anywhere, and as it concerns matters of so little significance that nobody else is likely to pay attention to them, this is definitely my last response.

I never wrote that equality is determined by the precision of the number in the sense of interval arithmetic, because significance arithmetic only approximates interval arithmetic. And by interval arithmetic I mean what everybody except you means by interval arithmetic: exact interval arithmetic. I do not know what the hybrid one implemented in Mathematica is for; it seems to me just a byproduct of something.

The actual situation is this:

1. Mathematica has a fully implemented exact interval arithmetic. It is fully logically self-consistent (barring bugs), even though such arithmetic must be in certain ways quite unlike ordinary arithmetic. It is in fact "pessimistic" in the sense I described in another thread, which you started by claiming that a perfectly sensible answer was wrong. (Since you seem to know very little about these matters I suggest looking at page 97 of "Structure and Interpretation of Computer Programs" by Abelson and Sussman and solving problem 2.16. I would certainly be interested in seeing your solution.) In any case, exact interval arithmetic can be used to give bounds on errors when working with low precision numbers. Fortunately most normal people do not do this very often, so the slowness is not a terrible problem.

2. Mathematica also has significance arithmetic, which is an approximation to interval arithmetic. It is fast and works well in the sort of situations that most people find themselves in most of the time. It achieves this speed by "cutting corners", in such a way that computations with low precision numbers are generally very unreliable and can give paradoxical results. In particular it produces the effect you have re-discovered. (If you had bothered to check the archives you would have discovered that exactly the same discussion, including the example with 1`2, was held here already a few years ago.) I believe that what I have written roughly corresponds to what actually happens, but in any case it does not matter at all. The reason why it does not matter is that

1. You have been told that significance arithmetic is unreliable in

such cases

2. As Richard Fateman pointed out the last time this issue was

discussed (at least in my memory), questions like:

1`2 == 2

do not make sense and there is never any reason to ask them.

3. If you can devise a system of arithmetic that has the advantages of

significance arithmetic without its defects, then I am sure everyone

will be impressed and maybe even Wolfram will want to hire you.
