
Points sampled by NIntegrate


andrew....@gmail.com
Jun 23, 2006, 4:45:07 AM

Hi,

[I apologise for the length of this post; I have tried to state my
problem as clearly as possible, and the result is long.]

I am trying to understand how NIntegrate decides at which points to
sample a function when estimating its improper integral over, e.g.,
{1, Infinity}. Please consider the following code:

sampledPoints = {};
f[r_?NumericQ] := (AppendTo[sampledPoints, r]; 1/(r^2 + Sin[r]))
NIntegrate[f[r], {r, 1, Infinity}, PrecisionGoal -> 4]
Take[Sort[sampledPoints], -10]
Length[sampledPoints]

The function f is just 1/(r^2 + Sin[r]), except that it keeps track of
the points at which it is evaluated, in a list called sampledPoints.
After NIntegrate is called on f over {1, Infinity}, the last two lines
return the 10 largest values of r at which f[r] was evaluated, and the
total number of evaluations of f[r]. In the case above, the following
are returned:

{69.3345, 85.2695, 125.67, 130.17, 170.539, 251.341, 341.078, 502.682,
1005.36, 2010.73}

and

99.

Now consider the following code, which is identical except that the
PrecisionGoal option to NIntegrate has been increased from 4 to 5:

sampledPoints = {};
f[r_?NumericQ] := (AppendTo[sampledPoints, r]; 1/(r^2 + Sin[r]))
NIntegrate[f[r], {r, 1, Infinity}, PrecisionGoal -> 5]
Take[Sort[sampledPoints], -10]
Length[sampledPoints]

The following are now returned by the last two lines:

{1005.363620876827, 1518.2201755137608, 2010.727241753654,
 20094.148354998237, 34180.18350952692, 6.859835603121333*^7,
 1.066219395284406*^10, 1.931379671660648*^19, 2.22745842812734*^55,
 8.429342764501086*^109}

and

176.

To me, sampling 176 points instead of 99 seems perfectly reasonable;
but I can't understand the presence of such large numbers as ~10^110 in
the list of sampled points of f. Can anyone explain why they suddenly
emerge as the PrecisionGoal increases?

Now, since 1/(r^2 + Sin[r]) is (I assume) no more expensive to compute
at ~10^110 than at ~1, these enormous sample values do not prevent
NIntegrate from converging rapidly. However, I wish to integrate
elements of a class of functions that _are_ more expensive to evaluate
at large values of their argument.

The functions I am interested in integrating over {1, Infinity} go to
zero as their argument goes to Infinity (as they must, if their
integrals are to converge), and do so monotonically. Their values at
~10^100 might as well be zero. As a hack, I could define them to be
zero for arguments larger than some cutoff (as sketched below); or I
could use only low values of PrecisionGoal in NIntegrate; but I would
prefer to understand what is making NIntegrate suddenly choose to
sample the integrand at very large values of the argument.
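
Such a cutoff hack might look like the following sketch (the 10^6
threshold here is an arbitrary illustrative choice, which would need to
be matched to the desired precision goal):

fCutoff[r_?NumericQ] := If[r > 10^6, 0., 1/(r^2 + Sin[r])] (* arbitrary cutoff *)
NIntegrate[fCutoff[r], {r, 1, Infinity}, PrecisionGoal -> 5]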

Thanks for any help.

Cheers,

Andrew.

antononcube
Jun 26, 2006, 12:34:56 AM

NIntegrate transforms (on the fly) the integrand

1/(r^2 + Sin[r])

into

1/((1 - r)^2*(1/(1 - r)^2 + Sin[1/(1 - r)]))

using this transformation

{rules, jac} = {{r -> 1/(1 - r)}, 1/(1 - r)^2};

I.e., let

g[r_] := Evaluate[(1/(r^2 + Sin[r]) /. rules) jac]

then

In[22]:= NIntegrate[g[r], {r, 0, 1}]
Out[22]= 0.816047
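
As a quick consistency check, integrating the untransformed integrand
directly over {1, Infinity} gives the same value:

NIntegrate[1/(r^2 + Sin[r]), {r, 1, Infinity}] (* about 0.81605, matching Out[22] *)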

If we plot g[r], we see that it oscillates around the value 1, more and
more rapidly as r approaches 1. This can be seen from these plots:

Plot[g[r], {r, 0, 1}, PlotPoints -> 1000]
Plot[g[r], {r, 0.9, 1}, PlotPoints -> 1000]
Plot[g[r], {r, 0.99, 1}, PlotPoints -> 1000]

We can also plot the sampling points used by NIntegrate, with these
commands:

samplingPoints =
  Reap[NIntegrate[g[r], {r, 0, 1}, EvaluationMonitor :> Sow[r]]][[2, 1]];
Length[samplingPoints]
ListPlot[Transpose[{samplingPoints, Range[Length[samplingPoints]]}]]

(Here I used EvaluationMonitor, and Sow and Reap, instead of AppendTo
inside the integrand.)
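
To quantify how the sampling points concentrate near r = 1, one could
also count how many of them fall in the last one percent of the
interval (the 0.99 threshold is just an illustrative choice):

Count[samplingPoints, p_ /; p > 0.99]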

From the last plot, one can see how NIntegrate's global adaptive
algorithm samples the integrand.

When the precision goal is increased from 4 to 5, NIntegrate's global
adaptive algorithm needs a deeper recursion level of interval
bisection. When this recursion level reaches the value given to the
option SingularityDepth (4 by default), the singularity handling
mechanism kicks in. The singularity handling transformation has the
property of clustering the sampling points around the suspected
singular point (see
http://library.wolfram.com/infocenter/Conferences/5365/). This is why
we get such large values for precision goal 5.
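
To see where numbers like ~10^110 come from: under the transformation
above, a sampling point at distance 10^-110 from the endpoint 1 of the
transformed interval corresponds to an original argument of 10^110
(the exponent is chosen to match the largest sampled value quoted
earlier):

1/(1 - r) /. r -> 1 - 10^-110 (* exact arithmetic gives 10^110 *)
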
For this integrand, though, the reason for the slower convergence is
not a singularity at 1 but the oscillatory nature of the integrand
around 1, so the singularity handling is not necessary. If we prevent
the singularity handling by giving a large value to SingularityDepth,
we get the result with a smaller number of sampling points, which also
have smaller magnitudes:

In[66]:= sampledPoints = {};
f[r_?NumericQ] := (AppendTo[sampledPoints, r]; 1/(r^2 + Sin[r]))
NIntegrate[f[r], {r, 1, Infinity}, SingularityDepth -> 1000,
  PrecisionGoal -> 5]
Take[Sort[sampledPoints], -10]
Length[sampledPoints]

Out[68]= 0.816048

Out[69]= {138.669, 170.539, 251.341, 260.339, 341.078, 502.682,
682.156, 1005.36, 2010.73, 4021.45}

Out[70]= 143


Anton Antonov,
Wolfram Research, Inc.

Andrew Moylan
Jun 27, 2006, 3:32:10 AM

Ah, OK, I understand what is happening now. Thanks for your clear
explanation, Anton.

Cheers,

Andrew
