
A neat infinite series sum


Dave L. Renfro

May 25, 2010, 10:01:58 AM
A few days ago I posted an essay in another group, and
today it occurred to me that the essay might also be of
interest to some sci.math readers. The essay's intended
audience was U.S. high school calculus teachers.

----------------------------------------------------------
----------------------------------------------------------

This morning I came across an interesting infinite series
sum. The series belongs to a fairly well known type of
series, so it doesn't immediately look as if it had been
designed to produce the result it does, which is to
converge to a number that agrees with 1 to more than
1,000,000 decimal places. That is, the sum of the series
differs from 1 by less than 10^(-10^6).

Here is the specific result:

Let N be the least integer such that (ln ln ln ln N) > 1,
that is, the least integer greater than e^e^e^e. For the
record, e^e^e^e is roughly 10^(1,656,520), a number with a
little over 1.65 million digits.
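
If you want to check that magnitude yourself, here is a quick
Python sketch (standard library only; my own addition, not part
of the original problem) that computes log10 of e^e^e^e by
working with logarithms, since the number itself is far too
large to store as an ordinary floating-point number.

import math

# log10(e^e^e^e) = (e^e^e) * log10(e), and e^e^e = exp(exp(e)) is
# only about 3.8 million, so it fits comfortably in a float.
e_to_e = math.exp(math.e)          # e^e, about 15.154
e_to_e_to_e = math.exp(e_to_e)     # e^e^e, about 3,814,279
log10_tower = e_to_e_to_e * math.log10(math.e)

print("e^e^e          =", e_to_e_to_e)
print("log10(e^e^e^e) =", log10_tower)    # about 1,656,520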

Then the sum from n = N to infinity of

1 / [n * (ln n) * (ln ln n) * (ln ln ln n) * (ln ln ln ln n)^2]

lies between 1 - 10^(-1.5 x 10^6) and 1 + 10^(-1.5 x 10^6).

I saw this in "Canadian Mathematical Bulletin" 8 #1 (1965),
pp. 113-115, in the "Solutions to Problems" section.
The problem was posed by John D. Dixon and the published
solution is by S. Spital. Although I came across this in
a library bound copy of the journal, the journal issue
is freely available on the internet at:

http://books.google.com/books?id=b4gHogA6iLQC

Click "Read this magazine", enter '113' into the empty
box just right of "Contents" near the top of the screen,
then push "Enter" on your keyboard.

What I intend to do in this post is give a detailed discussion
of the result in a way that should be understandable by most
good BC calculus students (certainly anyone scoring a 5 on
the test, and probably even a 4). It's probably appropriate
for most BC classes during the sequences/series part of the
course when presented by a teacher (assuming time isn't an
issue), since it's much easier to follow someone showing
the details in person on a blackboard than to follow the
details by reading an ASCII post.

Recall the standard p-series:

SUM 1/n^p

This converges for p > 1 and diverges for p <= 1.
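
As a quick illustration, here is a little Python sketch
(standard library only; the cutoff of 10^6 terms is an
arbitrary choice) showing how the partial sums behave for
a few values of p.

import math

def partial_sum(p, terms):
    # Partial sum of 1/n^p for n = 1, 2, ..., terms.
    return sum(1.0 / n**p for n in range(1, terms + 1))

for p in (2.0, 1.1, 1.0, 0.9):
    print("p =", p, "  partial sum of 10^6 terms:", partial_sum(p, 10**6))

print("pi^2/6 =", math.pi**2 / 6)

# For p = 2 the partial sum is already very close to pi^2/6 = 1.6449...,
# for p = 1.1 the series converges but far more slowly, and for p = 1
# and p = 0.9 the partial sums grow without bound, although a million
# terms only hints at that.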

There is a variant on the p-series scale of series that
leads to a much finer convergence/divergence scale. Being
a finer scale, it has a wider applicability for comparison
test purposes than the p-series. I used to call this
particular finer scale the "logarithmic p-series", but
a more useful term is probably the logarithmic (p,q)-series:

SUM 1 / [n^p * (ln n)^q]

Of course, to avoid undefined terms, we need to begin
the summation at n = 2 or at some integer greater than 2.

The n^p term is so dominant that, regardless of q, the
series converges if p > 1 and diverges if p < 1. It's
only in the case of p = 1 that the value of q matters.
The series converges if p = 1 and q > 1, and the series
diverges if p = 1 and q <= 1. Thus, the logarithmic
(1,q)-series provides for a hierarchy (parameterized by q)
of convergent and divergent series that all roughly
fit into the harmonic series classification. You can
prove the convergence and divergence results for p = 1
by using the integral test. Use the substitution u = (ln x)
to evaluate the integral. The convergence and divergence
results for p < 1 or p > 1 can be proved by comparison
with an appropriate p-series. For example, suppose p = 0.9
and q = 50. By "bumping up" the value of p a tiny bit
(but staying below p = 1), say to p = 0.91, we can omit
the (ln n)^50 factor in the denominator and still have
smaller terms (and a divergent series, since we'll have
a p-series with p < 1), since for all sufficiently large
values of n we have n^(0.01) > (ln n)^50. Why? Because
the limit as n --> infinity of [n^(0.01) / (ln n)^50]
is infinity. (Even just having this limit exist and be
greater than 1 would be enough to get the inequality
we want.) One way to see this is to apply L'Hopital's
rule 50 times. (You'll see an obvious pattern after the
first couple of times.) You can also make the variable
change m = (ln n), which changes things to the limit as
m --> infinity of (e^m)^0.01 / m^50 = e^(0.01 * m) / m^50,
and then apply L'Hopital's rule 50 times. In general,
for each value of p less than 1, or for each value of p
greater than 1, we can replace the value of p with another
value of p that is closer to 1 (while still being less
than 1 or greater than 1, respectively), then toss out
the (ln n)^q term, and get a more strongly convergent or
a more strongly divergent p-series against which the given
logarithmic (p,q)-series can be successfully compared.
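
Incidentally, the crossover point beyond which n^(0.01) stays
ahead of (ln n)^50 is itself astronomically large. The little
Python sketch below (standard library only) estimates it by
working with m = (ln n), that is, by solving 0.01*m = 50*(ln m)
with a fixed-point iteration.

import math

# Solve 0.01*m = 50*ln(m), i.e. m = 5000*ln(m), for its larger root.
m = 5000.0
for _ in range(50):
    m = 5000.0 * math.log(m)

# m is ln(n) at the crossover, so log10(n) = m / ln(10).
print("ln(n) at crossover    :", m)                   # about 54,500
print("log10(n) at crossover :", m / math.log(10))    # about 23,700

So the comparison only "kicks in" somewhere around n = 10^(23,700),
which is a nice reminder of how slowly (ln n)^50 loses the race.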

The p = q = 1 convergence/divergence boundary for the
logarithmic (p,q)-series can itself be refined. Consider

SUM 1 / [n^p * (ln n)^q * (ln ln n)^r]

I'll call this the logarithmic (p,q,r)-series.

In order for all the terms to be defined, the summation
has to begin at a value of n such that (ln n) > 0, so the
summation index has to begin at 2 or at some integer
greater than 2.

The n^p term is so dominant that, regardless of q and r,
this converges if p > 1 and diverges if p < 1. Also, in
the case where p = 1, the n^p * (ln n)^q = n * (ln n)^q term
is so dominant that, regardless of r, the series converges
if q > 1 and diverges if q < 1. It's only in the case of
p = q = 1 that the value of r matters. The series converges
if p = q = 1 and r > 1, and the series diverges if p = q = 1
and r <= 1. The logarithmic (1,1,r)-series provides for a
hierarchy (parameterized by r) of convergent and divergent
series that all roughly fit into the logarithmic (1,1)-series
classification. You can prove the convergence and divergence
results for p = q = 1 and varying r by using the integral
test. Use the substitution u = (ln ln x) to evaluate the
integral. Using this substitution -- and note that
du/dx = (d/dx)(ln x) / (ln x) = 1 / [x * (ln x)] -- you'll
get an integral in the variable u that itself can be evaluated
using the substitution v = (ln u). The convergence and
divergence results for p < 1 or p > 1 (with q and r varying),
and for p = 1, q < 1 (with r varying) or p = 1, q > 1
(with r varying), can be proved by comparison with appropriate
logarithmic (p,q)-series. The details are messy, so I'll
skip them, but the idea is similar to the "bumping up" and
"bumping down" idea I went through above.

The obvious pattern continues, giving finer and finer
hierarchies of convergence/divergence series, such as the
logarithmic (p,q,r,s)-series, the logarithmic (p,q,r,s,t)-series,
and so on. You can often find these "generalized p-series"
in advanced calculus texts, undergraduate real analysis texts,
and very old (1800's) calculus texts.

These generalized p-series are notorious for being able to
converge or diverge at very slow rates.

Here are some examples.

The harmonic series (p-series with p = 1) diverges, but
to reach 9.788 we need about 10,000 terms, to reach 23.603
we need about 10 billion terms, and to reach 230.835
we need about 10^100 terms.
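
Those figures are easy to reproduce from the standard estimate
that H_n, the n-th harmonic partial sum, is approximately
(ln n) + gamma, where gamma = 0.5772... is the Euler-Mascheroni
constant. A quick Python sketch:

import math

gamma = 0.5772156649015329    # Euler-Mascheroni constant

# H_n is approximately ln(n) + gamma, with an error of roughly 1/(2n).
for k in (4, 10, 100):
    approx = k * math.log(10.0) + gamma    # ln(10^k) + gamma
    print("about 10^%d terms give a partial sum near %.3f" % (k, approx))

# These land near 9.788, 23.603, and 230.8, in line with the figures
# quoted above.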

The logarithmic (1,1)-series diverges, but to reach 20 we need
about 10^(10^8) terms (that's 1 followed by 100 million zeros).

The logarithmic (1,1,1)-series diverges, but to reach 20 we
need about 10 ^ [10 ^ (2,000,000)] terms and to reach 100
we need about 10 ^ [10 ^ (10^41)] terms. The logarithmic
(1,1,2)-series converges but, to get within 0.005 of the
sum, we would need about 10 ^ (3.14 x 10^86)
terms.

For the results in the last two paragraphs, see:

Ralph P. Boas, "Partial sums of infinite series, and how they
grow", American Mathematical Monthly 84 #4 (April 1977), 237-258.
http://mathdl.maa.org/images/upload_library/22/Ford/Boas.pdf
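
For the logarithmic (1,1)-series, a rough version of the same
estimate comes from the integral approximation: the sum of
1/(n * ln n) from n = 2 to N is roughly (ln ln N) - (ln ln 2).
The Python sketch below solves this for the N that gives a
partial sum of 20. It ignores the correction terms, which shift
the answer noticeably, so it only gives the right ballpark, but
that ballpark is consistent with the 10^(10^8) figure quoted above.

import math

target = 20.0

# Integral approximation: ln(ln N) - ln(ln 2) = target. Solve for N
# in stages, keeping everything in logarithms.
lnlnN = target + math.log(math.log(2.0))
lnN = math.exp(lnlnN)
log10N = lnN / math.log(10.0)

print("ln(ln N) =", lnlnN)       # about 19.6
print("log10(N) =", log10N)      # about 1.46 x 10^8, i.e. N is roughly
                                 # 1 followed by 146 million zeros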

The series at hand is the convergent logarithmic (1,1,1,1,2)-series,
with an initial index value that is a little greater than is
otherwise absolutely necessary. Thus, while (ln ln ln ln n)
is defined as long as (ln ln ln n) > 0, or n > e^e = 15.15426...
(i.e. n >= 16), for the series we will be investigating
the initial index value is the least integer such that
(ln ln ln ln n) > 1, which is roughly 10^(1,656,520).

In what follows, let (ln_2 x) = (ln ln x),
(ln_3 x) = (ln ln ln x), and so on. Also, let N be the
least integer n such that (ln_4 n) > 1 and let

f(x) = 1 / [x * (ln x) * (ln_2 x) * (ln_3 x) * (ln_4 x)^2]

Then the series we're considering can be expressed as

SUM[f(n), n=N to infinity].

For completeness, we'll show this series really converges.
Also, the results we get in doing so will be useful in
proving the value of SUM[f(n), n=N to infinity] differs
from 1 by less than 10^(-1.5 x 10^6).

The terms of SUM[f(n), n=N to infinity] are positive. Also,
the sequence of terms is strictly decreasing. To see the
second assertion, note that each of the denominator factors
is strictly increasing, hence their product is strictly
increasing, hence the reciprocal of their product is
strictly decreasing. Therefore, we can apply the integral
test.

To apply the integral test, we need to evaluate the
integral from x = N to infinity of f(x).

Let u = (ln_4 x). Then using the chain rule we get

du/dx = 1/(ln_3 x) * (d/dx)(ln_3 x)

du/dx = 1/(ln_3 x) * 1/(ln_2 x) * (d/dx)(ln_2 x)

du/dx = 1/(ln_3 x) * 1/(ln_2 x) * 1/(ln x) * (d/dx)(ln x)

du/dx = 1/(ln_3 x) * 1/(ln_2 x) * 1/(ln x) * 1/x

Therefore, the integrand becomes just 1/u^2 = u^(-2),
hence an indefinite integral is

-u^(-1) = -1/u = -1/(ln_4 x),

and hence the improper definite integral is

limit as M --> oo of {-1/(ln_4 x), x=N to x=M}

limit as M --> oo of {-1/(ln_4 M) - [-1/(ln_4 N)]}

0 - [-1/(ln_4 N)]

1/(ln_4 N),

which is finite. Indeed, this value is between 0 and 1,
since N was chosen so that (ln_4 N) > 1.

Thus, by the integral test, the series converges.
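
As a side check, the antiderivative -1/(ln_4 x) can be tested
numerically on a range where ordinary floating-point numbers
suffice. The Python sketch below (standard library only; the
interval from 10^7 to 10^8 is just a convenient choice) compares
a Simpson's rule value of the integral of f with the difference
of the antiderivative at the two endpoints.

import math

def ln_k(x, k):
    # k-fold iterated natural logarithm: ln_1 = ln, ln_2 = ln ln, etc.
    for _ in range(k):
        x = math.log(x)
    return x

def f(x):
    return 1.0 / (x * ln_k(x, 1) * ln_k(x, 2) * ln_k(x, 3) * ln_k(x, 4)**2)

a, b, n = 1.0e7, 1.0e8, 100000    # n must be even for Simpson's rule

h = (b - a) / n
simpson = f(a) + f(b)
simpson += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))
simpson += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))
simpson *= h / 3.0

exact = 1.0 / ln_k(a, 4) - 1.0 / ln_k(b, 4)

print("Simpson's rule value      :", simpson)
print("antiderivative difference :", exact)
# Both values come out near 30.2 and agree to many decimal places.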

However, the integral test tells us more than this. As one
can see from an appropriate diagram (see the file titled
"integral-test.pdf" at [1]), we have

[1] http://mathforum.org/kb/message.jspa?messageID=7002409

f(M) + INTEGRAL[f(x), N to M]

less than

SUM[f(n), n=N to n=M]

less than

f(N) + INTEGRAL[f(x), N to M],

and hence, taking M --> infinity throughout the inequality,
we get

0 + 1/(ln_4 N) < SUM[f(n), n=N to infinity] < f(N) + 1/(ln_4 N),

or

1/(ln_4 N) < SUM[f(n), n=N to infinity] < f(N) + 1/(ln_4 N).
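
The same sandwich is easy to see in a case where everything can
be computed directly. Here is a Python check (standard library
only) using 1/x^2 in place of f, with N = 10: the tail sum is
pi^2/6 minus the first nine terms, and the improper integral
from N to infinity is 1/N.

import math

def f(x):
    return 1.0 / x**2

N = 10

# Tail sum from n = N to infinity: pi^2/6 minus the first N - 1 terms.
tail = math.pi**2 / 6 - sum(f(n) for n in range(1, N))

integral = 1.0 / N    # integral of 1/x^2 from x = N to infinity

print(integral, "<", tail, "<", f(N) + integral)
# Prints 0.1 < 0.10516... < 0.11, which is the inequality above with
# 1/x^2 playing the role of f.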

The result we want to show will come from appropriate
estimates for the values of 1/(ln_4 N) and f(N). We will
first find an appropriate estimate for f(N).

In the upcoming calculations we will find it useful to let

x* = e^e^e^e.

Thus, x* is the value of x such that (ln_4 x) = 1. Note that
(ln_4 x) is a one-to-one function, so only one such value of
x exists.

Because 1 = (ln_4 x*) < (ln_4 N)

[recall N is the least integer such that (ln_4 N) > 1],

we have x* < N

[this is because (ln_4 x) is an increasing function],

and hence f(x*) > f(N)

[this is because f(x) is a decreasing function].

That is, we have

0 < f(N) < f(x*) = 1 / (e^e^e^e * e^e^e * e^e * e * 1^2)

< 10^(-1.5 x 10^6)

[the last inequality holds because the denominator is larger
than e^e^e^e alone, and e^e^e^e > 10^(1,656,520) > 10^(1.5 x 10^6)],

or

0 < f(N) < 10^(-1.5 x 10^6).
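
Since e^e^e^e is far too large to handle directly, one way to
check that last inequality on a computer is to work with log10
of f(x*). In log10 terms the denominator contributes
(e^e^e + e^e + e + 1) * log10(e), and the claim is that this
exceeds 1.5 x 10^6. A short Python sketch (standard library only):

import math

e_to_e = math.exp(math.e)           # e^e,   about 15.154
e_to_e_to_e = math.exp(e_to_e)      # e^e^e, about 3,814,279

# f(x*) = 1 / (e^e^e^e * e^e^e * e^e * e * 1^2), so
# log10 f(x*) = -(e^e^e + e^e + e + 1) * log10(e).
log10_f_xstar = -(e_to_e_to_e + e_to_e + math.e + 1.0) * math.log10(math.e)

print("log10 f(x*) =", log10_f_xstar)                        # about -1,656,528
print("f(x*) < 10^(-1.5 x 10^6)?", log10_f_xstar < -1.5e6)   # True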

Now that we have f(N) tightly estimated, we turn our attention
to finding an appropriate estimate for 1/(ln_4 N).

To do this we will use the Mean Value Theorem.

Note that

(d/dx)[1 / (ln_4 x)] = (d/dx) of (ln_4 x)^(-1)

is equal to

-(ln_4 x)^(-2) * (d/dx)(ln_4 x),

which is equal to [note we computed (d/dx)(ln_4 x) above]

-(ln_4 x)^(-2) * 1/(ln_3 x) * 1/(ln_2 x) * 1/(ln x) * 1/x

However, since this last expression is equal to -f(x), we have

(d/dx)[1 / (ln_4 x)] = -f(x).
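
This identity is easy to spot-check numerically at a point where
all the quantities are ordinary floating-point numbers. For
(ln_4 x) to be positive we need x > e^e^e, which is about 3.8
million, so x = 10^7 is a convenient (and arbitrary) choice.
Here is a small Python sketch (the helper ln_k is my own notation)
comparing a central difference quotient of 1/(ln_4 x) with -f(x).

import math

def ln_k(x, k):
    # k-fold iterated natural logarithm: ln_1 = ln, ln_2 = ln ln, etc.
    for _ in range(k):
        x = math.log(x)
    return x

def f(x):
    return 1.0 / (x * ln_k(x, 1) * ln_k(x, 2) * ln_k(x, 3) * ln_k(x, 4)**2)

def g(x):
    return 1.0 / ln_k(x, 4)

x, h = 1.0e7, 1.0

# Central difference approximation to (d/dx)[1/(ln_4 x)] at x = 10^7.
difference_quotient = (g(x + h) - g(x - h)) / (2.0 * h)

print("difference quotient:", difference_quotient)
print("-f(x)              :", -f(x))
# The two printed values should agree to several significant figures.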

By the Mean Value Theorem for the interval [x*, N], there
exists c such that x* < c < N and

-f(c) = [ 1/(ln_4 N) - 1/(ln_4 x*) ] / (N - x*)

1/(ln_4 x*) - 1/(ln_4 N) = (N - x*)f(c)

1 - 1/(ln_4 N) = (N - x*)f(c)

[recall (ln_4 x*) = 1].

Since N - x* < 1

[this because (ln_4 x*) = 1 and N is the least integer n such
that (ln_4 n) > 1]

and

f(c) < f(x*)

[this because x* < c and f(x) is a decreasing function],

we get [the left = is copied from above]

1 - 1/(ln_4 N) = (N - x*)f(c) < 1*f(c) = f(c) < f(x*).

But f(x*) < 10^(-1.5 x 10^6), so we get

1 - 1/(ln_4 N) < 10^(-1.5 x 10^6)

or

1 - 10^(-1.5 x 10^6) < 1/(ln_4 N) < 1,

where the right < comes from (ln_4 N) > 1.

To recap, from the integral test we got

1/(ln_4 N) < SUM < f(N) + 1/(ln_4 N)

and we then obtained

0 < f(N) < 10^(-1.5 x 10^6).

Thus, we get

1 - 10^(-1.5 x 10^6) < 1/(ln_4 N) < SUM

and

SUM < f(N) + 1/(ln_4 N) < 10^(-1.5 x 10^6) + 1,

which implies

1 - 10^(-1.5 x 10^6) < SUM 1 + 10^(-1.5 x 10^6),

the result we wanted to show.

Dave L. Renfro

Dave L. Renfro

May 25, 2010, 10:11:10 AM
Dave L. Renfro wrote (in part):

> which implies
>
> 1 - 10^(-1.5 x 10^6) < SUM 1 + 10^(-1.5 x 10^6),
>
> the result we wanted to show.

Darn, as much as I proofread this the other day, there's
still a typo! Not only that, but it's right at the end,
where I should have noticed it then (or just now when I
reposted this essay -- well, I guess I did notice it just
now, when I reposted it, although it would have been better
to have noticed it _before_ I selected "Send").

The inequality should be

1 - 10^(-1.5 x 10^6) < SUM < 1 + 10^(-1.5 x 10^6)

Dave L. Renfro
