
Logarithm of repeated exponential


mike3

Jul 13, 2009, 8:52:22 PM
Hi.

I noticed this.

log(3) ~ 1.098612288668109691395245237
log(log(3^3)) ~ 1.192660116284808707569579569
log(log(log(3^3^3))) ~ 1.220795907132767865324020633
log(log(log(log(3^3^3^3)))) ~ 1.221729301870251716316203810
(calculated indirectly via identity log(x^y) = y log(x).)
log(log(log(log(log(3^3^3^3^3))))) ~ 1.221729301870251827504003124
(calculated indirectly via the identity log(log(x^x^y)) = y log(x) + log(log(x)).)

It seems to be stabilizing on some weird value, around 1.2217293. What
is this? And we seem to run out of log identities here making it
infeasible to compute further approximations.

Has this been examined before?

Primes?

Jul 13, 2009, 9:01:38 PM

Very interesting growth rate:

1.2217293^2
1.49262248247849^2
2.22792187520025^2
4.963635881995797^2
24.637681169036195^2
607.0153333870808^2
368467.61496702884^2
(1.3576838327949062*10^11)^2
(1.8433053898326667*10^22)^2
(3.3977747601861594*10^44)^2
(1.1544873320958113*10^89)^2
(1.332840999969704*10^178)^2
(1.7764651312002406671837964276*10^356)^2
3.15582836237029*10^712

I.N. Galidakis

Jul 14, 2009, 3:09:10 AM

Very likely not. This is probably a tough problem in tetration. I am not sure
that what you are seeing as a "solution" is in fact a solution.

Here's a rough heuristic that a value such as "~1.2217293" may not be a
solution. Call your solution "a" and denote iterated exponentiation using
tetration. For example:

3^3^...^3 = 3^^n (n-terms)

and

x^x^...^x^y = (x,y)^^n (n-x's and on top a y)

Denote iterated composition of the logarithm function as:

log^{(n)}(x) = log(log(log(...log(x)))) (n-iterates)

What you are saying is that the following exists:

lim_{n->oo} log^{(n-1)}(3^^n)

Call the value a. If this value exists, then for every epsilon > 0, there exists
n_0 sufficiently large, such that for all n > n_0:

|3^^n - (e,a)^^{n-1}| < epsilon

Writing this out explicitly, we have:

|3^3^...^3 (n-3's) - e^e^...^e^a (n-1-e's)| < epsilon

The towers have exactly the same height, because you are composing n-1 logs with
a height n tower of 3's. Add the a at the top and they have the same height.

From the above, it immediately follows that

a > e, (because 3^^n > e^^n).

In fact, it looks to me like the sequence defined by taking a to be the solution of:

3^^n - (e,a)^^{n-1} = 0, n\in N,

will be monotone increasing in a, which I think shows that a cannot exist.
If anyone sees anything different, turn on the blow-horn.
--
Ioannis

I.N. Galidakis

Jul 14, 2009, 3:28:26 AM
I.N. Galidakis wrote:
[snip]

> In fact, it looks to me like the sequence defined by taking a to be the solution of:
>
> 3^^n - (e,a)^^{n-1} = 0, n\in N,
>
> will be monotone increasing in a,

make the above, actually "unbounded".

master1729

Jul 14, 2009, 7:21:11 AM
mike3 wrote :

Yes and no.

It occurred on the tetration forum.

I don't believe/recall that it has a limit, but it keeps wobbling around 1.22172.

It's interesting, though.

I.N. Galidakis

Jul 14, 2009, 10:21:44 AM

Here's what I find a clearer argument, which I think is a bit more direct. Use
the terminology of my prior post. I.e.:

x^x^...^x = x^^n (n-x's)
x^x^...^x^y = (x,y)^^n (n-x's and top y)

log(log(...log(x))) = log^{(n)}(x) (n iterates of log)

You've defined the following sequence:

b(n) = log^{(n-1)}(3^^n), n\in N.

First, show that it's monotone increasing, by induction:

b(n) = log^{(n-1)}(3^^n)
= log^{(n-2)}(log(3^^n))
= log^{(n-2)}(log(e^{log(3)*3^^{n-1}}))
= log^{(n-2)}(log(3)*3^^{n-1}) (1)

log(3) > 1, =>

(1) > log^{(n-2)}(3^^{n-1}) = b(n-1)

and b(n) is monotone increasing. That's not enough by itself, because b(n) may
be bounded. Let's show that it's not:

Give me M > 0, large. Can we find n_0, such that for all n > n_0, we have:

b(n) > M?

Equivalently, can we find n_0, such that for all n > n_0, we have:

3^^n > (e,M)^^{n-1} ?

Decompose M as M = (e,x)^^k, some k\in N and some 1 < x < e (Convince yourself
that we can always do that).

The question now becomes: Can we find n_0, such that for all n > n_0, we have:

3^^n > (e,(e,x)^^k)^^{n-1} ?

Note that (e,(e,x)^^k)^^{n-1} = (e,x)^^{n+k-1}. The question now amounts to:

Can we find n_0, such that for all n > n_0:

3^^n > (e,x)^^{n+k-1} ?

Well, yes, we can. 3^x grows faster than e^x, so eventually, a tower of
sufficiently many n-3's will overtake a tower of n+k-1 e's, for fixed k\in N and
with a small x on top (convince yourself that we can do that, too.)

Hence, for any large M > 0, such an n_0 exists, and it follows that b(n) is
unbounded.

The above, along with the fact that b(n) is monotone increasing, implies that
lim_{n->oo}b(n) does not exist.

I think that the above is correct, but note that this is fringe area, so the
usual cautions for typos and random nonsense apply :-)

> Has this been examined before?

--
Ioannis

G. A. Edgar

Jul 14, 2009, 11:38:48 AM
In article
<20f2d3ea-1e87-45dc...@i18g2000pro.googlegroups.com>,
mike3 <mike...@yahoo.com> wrote:

How about using transseries to examine this?

[Plug ... "Transseries for beginners" http://arxiv.org/abs/0801.487 ]

Now, 3^x = exp(ax) where a = log(3), let's try general a > 0 to start
with. We can put a = log(3) at the end if we want to get back to this
case.

Write
f_0(x) = x,
f_1(x) = log(exp(ax)),
f_2(x) = log(log(exp(a exp(ax))))
f_3(x) = log(log(log(exp(a exp(a exp(ax))))))
and so on, so that
f_n(x) = log(f_{n-1}(exp(ax)))

If f_n converges, we expect the limit function f to satisfy
f(x) = log(f(exp(ax)))
Well, there is a unique transseries solution of this ...

f(x) = a*x + log(a) + log(a)/a*exp(-a*x)
- (1/2)*log(a)^2/a^2*exp(-2*a*x)
+ (1/3)*log(a)^3/a^3*exp(-3*a*x)
- (1/4)*log(a)^4/a^4*exp(-4*a*x)
+ (1/5)*log(a)^5/a^5*exp(-5*a*x)
- (1/6)*log(a)^6/a^6*exp(-6*a*x)
+ (1/7)*log(a)^7/a^7*exp(-7*a*x)
+o(exp(-7*a*x))

(This will come from a transseries fixed point theorem.)
Warning: the transseries expansion has terms beyond all of these
involving monomials exp(-a*exp(a*x)) and such things... it may or may
not actually converge for a given x . This transseries is not
grid-based. Its support has order-type omega^omega, and has infinite
exponential height. So it is beyond that "Beginners" paper!

--
G. A. Edgar http://www.math.ohio-state.edu/~edgar/

G. A. Edgar

Jul 14, 2009, 2:31:36 PM
>
> f(x) = a*x + log(a) + log(a)/a*exp(-a*x)
> - (1/2)*log(a)^2/a^2*exp(-2*a*x)
> + (1/3)*log(a)^3/a^3*exp(-3*a*x)
> - (1/4)*log(a)^4/a^4*exp(-4*a*x)
> + (1/5)*log(a)^5/a^5*exp(-5*a*x)
> - (1/6)*log(a)^6/a^6*exp(-6*a*x)
> + (1/7)*log(a)^7/a^7*exp(-7*a*x)
> +o(exp(-7*a*x))
>

Doing more steps, putting x=1, a=log(3), I get the value
1.224651242175237

mike3

Jul 14, 2009, 4:35:52 PM
On Jul 14, 9:38 am, "G. A. Edgar" <ed...@math.ohio-state.edu.invalid>
wrote:
> In article
> <20f2d3ea-1e87-45dc-86b2-a917f89e9...@i18g2000pro.googlegroups.com>,

What does that bit mean, terms beyond those involving exp(-a exp(ax))? You
mean it doesn't continue the "obvious" pattern shown?

I.N. Galidakis

Jul 14, 2009, 5:06:36 PM

If I recall correctly, Galathaea had mentioned a particular alternating sum by
Ramanujan:

f(x) = e^x - e^e^x + e^e^e^x - e^e^e^e^x +... (+/-)(e,x)^^n...

I cannot find the google post, so I hope I am not mistyping it. She mentioned
that Ramanujan showed in his notebooks that f(x) was entire and (I think)
monotone increasing in x.

Can those transseries study the growth of this f(x)?
--
Ioannis

W^3

Jul 15, 2009, 12:30:48 AM

Here's a fairly elementary proof that this sequence converges (super
fast). Let L_n denote the n-fold logarithm, and let x^{n} denote the
n-fold tower exponential. Your sequence is then a_n = L_n(3^{n}).

Claim 1: a_n is strictly increasing: L_(n+1)(3^{n+1}) =
L_n(L(3^{n+1})) = L_n(3^{n}*L_1(3)) > L_n(3^{n}) (using L_1(3) > 1 and
the fact that L_n is strictly increasing).

Claim 2: a_(n+1) - a_n < 1/[e^{1}*e^{2}*...*e^{n-1}] (which shows a_n
converges rapidly). Proof: From above a_(n+1) = L_n(3^{n}*L_1(3)) <
L_n(2*3^{n}). By the mean value theorem, L_n(2*3^{n}) - L_n(3^{n}) =
3^{n}*L_n'(c) for some c between 3^{n} and 2*3^{n}. Now L_n'(c) =
1/[L_(n-1)(c)*L_(n-2)(c)*...*L_1(c)*c], and c > 3^{n} > e^{n}. Because
L_k(e^{n}) = e^{n-k}, we're done.
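As a quick numerical check of Claim 2 against the digits in the opening post: a_4 - a_3 ≈ 1.221729302 - 1.220795907 ≈ 9.3*10^-4, which is below 1/[e^{1}*e^{2}] = 1/(e*e^e) ≈ 2.4*10^-2; and a_5 - a_4 ≈ 1.1*10^-13, far below 1/[e^{1}*e^{2}*e^{3}] ≈ 6.4*10^-9.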

G. A. Edgar

Jul 15, 2009, 9:42:55 AM
In article <1247605602.636114@athprx04>, I.N. Galidakis
<morp...@olympus.mons> wrote:

>
> If I recall correctly, Galathaea had mentioned a particular alternating sum by
> Ramanujan:
>
> f(x) = e^x - e^e^x + e^e^e^x - e^e^e^e^x +... (+/-)(e,x)^^n...
>
> I cannot find the google post, so I hope I am not mistyping it. She mentioned
> that Ramanujan showed in his notebooks that it f(x) was entire and (I think)
> monotone increasing in x.
>
> Can those transseries study the growth of this f(x)?

No, this one is not a transseries.

G. A. Edgar

Jul 15, 2009, 9:43:16 AM
In article
<335ce062-9a40-42bf...@b15g2000yqd.googlegroups.com>,
mike3 <mike...@yahoo.com> wrote:

It does continue the obvious pattern, infinitely many terms, but that
is not the end. There are more terms, asymptotically smaller than all
of those. Terms involving exp(-a*exp(a*x)), terms involving
exp(-a*exp(a*exp(a*x))), and so on.

master1729

Jul 15, 2009, 10:00:48 AM
Ioannis wrote :

Interesting remark.

But I believe it was Gottfried Helms who mentioned it!?

At least he did on his website and - if not mistaken - also on the tetration forum.

I don't think f(x) is entire; e.g., compare:
y = 2 - 2^2 + 2^2^2 - 2^2^2^2 + 2^2^2^2^2 seems to diverge strongly, and the result seems close to the last computed term, which has lim +/- oo.

Maybe you (Edgar) meant base eta = e^(1/e) instead of e, or something.

Or use a summability method.


regards
tommy1729

master1729

Jul 15, 2009, 10:08:51 AM
World Wide Wade wrote :

> In article
> <20f2d3ea-1e87-45dc...@i18g2000pro.goog

I agree.

And so does Ioannis.


>
> Claim 2: a_(n+1) - a_n < 1/[e^{1}*e^{2}*...*e^{n-1}]
> (which shows a_n
> converges rapidly). Proof: From above a_(n+1) =
> L_n(3^{n}*L_1(3)) <
> L_n(2*3^{n}). By the mean value theorem, L_n(2*3^{n})
> - L_n(3^{n}) =
> 3^{n}*L_n'(c) for some c between 3^{n} and 2*3^{n}.
> Now L_n'(c) =
> 1/[L_(n-1)(c)*L_(n-2)(c)*...*L_1(c)*c], and c > 3^{n}
> > e^{n}. Because
> L_k(e^{n}) = e^{n-k}, we're done.

This confuses me?

What about Ioannis' post then?

http://mathforum.org/kb/message.jspa?messageID=6783965&tstart=0

You cannot both be correct.

No offense, but at first sight I'm betting on Ioannis :)


regards

tommy1729

I.N. Galidakis

Jul 15, 2009, 10:54:39 AM

I don't understand Wade's argument, either, but that doesn't mean he is wrong.
Wade is almost never wrong, but see below.

> what about Ioannis post then ?
>
> http://mathforum.org/kb/message.jspa?messageID=6783965&tstart=0
>
> you cannot both be correct.
>
> No offense but on first sight ; Im betting on Ioannis :)

I cannot find fault with my argument either, but I wouldn't bet on me. In any
case, there's another bit which I find strange, so let's see if someone can
explain it.

Clearly:

log[3](x) < log(x) (i.e., the log base-3 function lies below the natural log
function).

Now iterate n-1 times:

log^{(n-1)}[3](x) < log^{(n-1)}(x)

The above is of course supported by Maple. For n=4, for example:

> f:=x->log[3](x);
> g:=x->log(x);
> with(plots):
> p1:=plot((f@@3)(x),x=1..10,color=red):
> p2:=plot((g@@3)(x),x=1..10,color=green):
> display({p1,p2});

Image (red is base 3 log iterate):

http://misc.virtualcomposer2000.com/log3e.gif

Hence:

log^{(n-1)}[3](3^^n) < log^{(n-1)}(3^^n), for all n\in N.

But the left term is just 3, for all n\in N. So if the sequence indeed converges
to a, then, taking limits, we have:

3 <= a.

We have two operators here: One is the power function 3^x and the other is
log(x). I find it highly suspicious that n-1 iterates of log cancel the growth
of n iterates of 3^x, which grows faster than the inverse of log, which is e^x.
Something is very weird, here.

What _I think_ is happening, is that all of us are "correct" more or less, and
this series is probably not convergent "conventionally" (whatever that may mean,
anyway), but is nevertheless convergent using stronger methods, in the same way
some series are divergent using conventional methods, but convergent using, for
example, Euler, Abel or Cesàro summations.

This is as much as I understand.

> regards
>
> tommy1729
--
Ioannis

mike3

Jul 15, 2009, 2:27:49 PM
On Jul 15, 7:43 am, "G. A. Edgar" <ed...@math.ohio-state.edu.invalid>
wrote:
> In article
> <335ce062-9a40-42bf-9b78-cfc70e364...@b15g2000yqd.googlegroups.com>,

So is it like some sort of "series of series" then? What do you mean by its
"support" having "order type omega^omega"? I couldn't get access to the paper
you linked to. There was some error.

mike3

Jul 15, 2009, 2:28:00 PM
On Jul 15, 8:54 am, "I.N. Galidakis" <morph...@olympus.mons> wrote:
> master1729 wrote:
> > World Wide Wade wrote :
>
> >> In article
> >> <20f2d3ea-1e87-45dc-86b2-a917f89e9...@i18g2000pro.goog
> >> legroups.com>,

First off it's n iterates of log(x) on n iterates of 3^x, applied to
one, i.e.

a_n = log^n(exp_3^n(1))

because we have

log(3)
log(log(3^3))
log(log(log(3^3^3)))
etc.

> What _I think_ is happening, is that all of us are "correct" more or less, and
> this series is probably not convergent "conventionally" (whatever that may mean,
> anyway), but is nevertheless convergent using stronger methods, in the same way
> some series are divergent using conventional methods, but convergent using, for
> example, Euler, Abel or Cessaro summations.
>
> This is as much as I understand.
>

But then how does one explain the apparent "conventional" convergence of the
sequence seen in the post I gave?

> > regards
>
> > tommy1729
>
> --
> Ioannis

I.N. Galidakis

Jul 15, 2009, 2:50:52 PM
mike3 wrote:
[snip]

>> We have two operators here: One is the power function 3^x and the other is
>> log(x). I find it highly suspicious that n-1 iterates of log cancel the
>> growth of n iterates of 3^x, which grows faster than the inverse of log,
>> which is e^x. Something is very weird, here.
>>
>
> First off it's n iterates of log(x) on n iterates of 3^x, applied to
> one, i.e.
>
> a_n = log^n(exp_3^n(1))
>
> because we have
>
> log(3)
> log(log(3^3))
> log(log(log(3^3^3)))
> etc.

Doesn't change the argument. The 3^x operator should still "win", because it
grows faster than the inverse of log, which is e^x.

>> What _I think_ is happening, is that all of us are "correct" more or less,
>> and this series is probably not convergent "conventionally" (whatever that
>> may mean, anyway), but is nevertheless convergent using stronger methods, in
>> the same way some series are divergent using conventional methods, but
>> convergent using, for example, Euler, Abel or Cessaro summations.
>>
>> This is as much as I understand.
>>
>
> But then how does one explain the apparent "conventional" convergence
> of the
> sequence seen in the post I give?

I have no idea. All I know is that when I compose n logs with n 3^x's, the
latter should eventually win, because it grows faster than the inverse of the
previous.

It is conceivable that a certain sequence may "appear" to converge if you
calculate a few terms, but it may not converge as a whole. Gottfried has
provided many examples of series which do not sum conventionally, and only do so
using Euler summation. I am pretty sure that if you calculated a few terms of
his examples, the series will look like it is converging towards something, but
this says nothing about the total summand, which may not exist using
"conventional" summation methods.
--
Ioannis

G. A. Edgar

Jul 15, 2009, 2:58:55 PM
In article
<b3c79276-61ac-4ad0...@r2g2000yqm.googlegroups.com>,
mike3 <mike...@yahoo.com> wrote:

>
> So is it like some sort of "series of series" then? What do you mean
> by its
> "support" having "order type omega^omega"? I couldn't get access to
> the paper
> you linked to. There was some error.

lost the last digit somewhere http://arxiv.org/abs/0801.4877

master1729

Jul 15, 2009, 4:55:09 PM
mike 3 wrote :

I think the apparent convergence occurs because it diverges very slowly.

For instance, like growing at a rate O( slog(sqrt(log(n))) ),
where slog is the superlogarithm.

Intuition might confuse that with convergence.

regards

tommy1729

mike3

Jul 15, 2009, 5:34:04 PM
On Jul 15, 12:50 pm, "I.N. Galidakis" <morph...@olympus.mons> wrote:
> mike3 wrote:
>
> [snip]
>
>
>
> >> We have two operators here: One is the power function 3^x and the other is
> >> log(x). I find it highly suspicious that n-1 iterates of log cancel the
> >> growth of n iterates of 3^x, which grows faster than the inverse of log,
> >> which is e^x. Something is very weird, here.
>
> > First off it's n iterates of log(x) on n iterates of 3^x, applied to
> > one, i.e.
>
> > a_n = log^n(exp_3^n(1))
>
> > because we have
>
> > log(3)
> > log(log(3^3))
> > log(log(log(3^3^3)))
> > etc.
>
> Doesn't change the argument. The 3^x operator should still "win", because it
> grows faster than the inverse of log, which is e^x.
>

I guess so, but it was just a small nitpick more than anything else.

> >> What _I think_ is happening, is that all of us are "correct" more or less,
> >> and this series is probably not convergent "conventionally" (whatever that
> >> may mean, anyway), but is nevertheless convergent using stronger methods, in
> >> the same way some series are divergent using conventional methods, but
> >> convergent using, for example, Euler, Abel or Cessaro summations.
>
> >> This is as much as I understand.
>
> > But then how does one explain the apparent "conventional" convergence
> > of the
> > sequence seen in the post I give?
>
> I have no idea. All I know is that when I compose n logs with n 3^x's, the
> latter should eventually win, because it grows faster than the inverse of the
> previous.
>

This seems to be a common denominator in a lot of tetration problems: they're
really, really hard. What is it about tetration that makes it so hard to work
with, anyway? (E.g. consider all the constructions for analytic tetration
solutions... it seems hopelessly difficult if not impossible to relate them to
any "known" functions, and very weird equations often occur.)

> It is conceivable that a certain sequence may "appear" to converge if you
> calculate a few terms, but it may not converge as a whole. Gottfried has
> provided many examples of series which do not sum conventionally, and only do so
> using Euler summation. I am pretty sure that if you calculated a few terms of
> his examples, the series will look like it is converging towards something, but
> this says nothing about the total summand, which may not exist using
> "conventional" summation methods.

So it is possible it may like "level off" for a while then start
ramping up again?

W^3

Jul 15, 2009, 5:58:28 PM
In article <1247669680.848240@athprx04>,
"I.N. Galidakis" <morp...@olympus.mons> wrote:

The sequence is log^{n}(3^^n), leading to 1 < a. Nothing strange there.

> We have two operators here: One is the power function 3^x and the other is
> log(x). I find it highly suspicious that n-1 iterates of log cancel the
> growth
> of n iterates of 3^x, which grows faster than the inverse of log, which is
> e^x.
> Something is very weird, here.

Not really. log^{n+1}(3^^(n+1)) = log^{n}(3^^n*log(3)). So to get to
the next term, you dilate by a fixed amount in the previous term. The
continued iteration of logs should imply the effect of that fixed
dilate -> 0 as n -> oo.

> What _I think_ is happening, is that all of us are "correct" more or less,
> and
> this series is probably not convergent "conventionally" (whatever that may
> mean,
> anyway),

No, as I showed, it converges extremely rapidly in the conventional
sense, as the numerical evidence suggests. In fact log^{n}(x^^n)
converges for any x >= e (with the same rapid convergence).

The estimate a_(n+1) - a_n < 1/[e^{1}*e^{2}*...*e^{n-1}] (from my
previous post, using my notation now) was obtained using the mean
value theorem. Is that causing trouble for you, or is it the
convergence of a_n from this estimate that is bothering you?

rancid moth

Jul 15, 2009, 10:03:09 PM
G. A. Edgar wrote:
>> f(x) = a*x + log(a) + log(a)/a*exp(-a*x)
>> - (1/2)*log(a)^2/a^2*exp(-2*a*x)
>> + (1/3)*log(a)^3/a^3*exp(-3*a*x)
>> - (1/4)*log(a)^4/a^4*exp(-4*a*x)
>> + (1/5)*log(a)^5/a^5*exp(-5*a*x)
>> - (1/6)*log(a)^6/a^6*exp(-6*a*x)
>> + (1/7)*log(a)^7/a^7*exp(-7*a*x)
>> +o(exp(-7*a*x))
>>
>
> Doing more steps, putting x=1, a=log(3), I get the value
> 1.224651242175237

I don't understand this derivation (i'm still reading the very interesting
paper) - but why not take this series all the way?

sum(n=1,oo) (-1)^(n+1) / n * (k*exp(-a*x))^n = ln(1+k*exp(-a*x)) which in
this case is allowed since ln(ln(3))/(3*ln(3)) < 1.

so letting a=ln(3), x=1

f(1) = ln(ln(27ln(3)))

rancid moth

Jul 15, 2009, 10:08:37 PM

I must be doing something wrong here: by that series above

f(1) = ln(ln(ln(3^3^3))) - but that's obviously not the original object.


rancid moth

Jul 15, 2009, 10:47:02 PM

sorry read the rest of your post - and found the answer why.


I.N. Galidakis

Jul 15, 2009, 11:53:53 PM
W^3 wrote:
[snip]

>> Hence:
>>
>> log^{(n-1)}[3](3^^n) < log^{(n-1)}(3^^n), for all n\in N.
>>
>> But the left term is just 3, for all n\in N. So if the sequence indeed
>> converges
>> to a, then, taking limits, we have:
>>
>> 3 <= a.
>
> The sequence is log^{n}(3^^n), leading to 1 < a. Nothing strange there.

You are right. That was a typo of mine.

>> We have two operators here: One is the power function 3^x and the other is
>> log(x). I find it highly suspicious that n-1 iterates of log cancel the
>> growth
>> of n iterates of 3^x, which grows faster than the inverse of log, which is
>> e^x.
>> Something is very weird, here.
>
> Not really. log^{n+1}(3^^(n+1)) = log^{n}(3^^n*log(3)). So to get to
> the next term, you dilate by a fixed amount in the previous term. The
> continued interation of logs should imply the effect of that fixed
> dilate -> 0 as n -> oo.

I am sorry, I cannot see why the "dilate" term as you say -> 0.

>> What _I think_ is happening, is that all of us are "correct" more or less,
>> and
>> this series is probably not convergent "conventionally" (whatever that may
>> mean,
>> anyway),
>
> No, as I showed, it converges extremely rapidly in the conventional
> sense, as the numerical evidence suggests. In fact log^{n}(x^^n)
> converges for any x >= e (with the same rapid convergence).

I am sorry, I don't agree. I think your claim above is wrong, because as I said,
if y > e, then the growth of y^x is greater than the growth of the inverse of
log, so their iterated composition cannot possibly tend to a limit. The iterates
of y^x should win and the whole thing should go to the attractor +oo.

> The estimate a_(n+1) - a_n < 1/[e^{1}*e^{2}*...*e^{n-1}] (from my
> previous post, using my notation now) was obtained using the mean
> value theorem. Is that causing trouble for you, or is it the
> convergence of a_n from this estimate that is bothering you?

I am sorry, I just cannot follow your proof details. Not clear enough for me.
This may be due to my intelligence being somewhat limited :-)

It is fairly clear I think that a tower of n 3's eventually wins over a tower of
n+k e's, for ANY fixed k\in N. In fact, a tower of n y's, with y > e will
eventually win over a tower of n+k e's for any fixed k\in N.

A weaker statement is possible:

Let x,y > e^(1/e) and x < y. Then E n_0: A n > n_0: y^^n > x^^{n+k} eventually,
for any FIXED k\in N. Towers with larger bases win, even if they are shorter.

The statement that this sequence converges, is equivalent to: 3^^n <= e^^{n+k}
for fixed k and for all n\in N, which is wrong.
--
Ioannis

I.N. Galidakis

Jul 16, 2009, 3:34:23 AM
mike3 wrote:
[snip]

> This seems to be a common denominator in a lot of tetration problems:
> they're
> really, really hard. What is it about tetration that makes it so hard
> to work with,
> anyway? (E.g. consider all the constructions for analytic tetration
> solutions... it
> seems very hopelessly difficult if not impossible to relate them to
> any "known"
> functions and very weird equations often occur.)

I have no idea. My intuition says, "whenever there's fringe area, weird things
happen". The "finge area" here happens to be the hyper4 operator.

>> It is conceivable that a certain sequence may "appear" to converge if you
>> calculate a few terms, but it may not converge as a whole. Gottfried has
>> provided many examples of series which do not sum conventionally, and only
>> do so using Euler summation. I am pretty sure that if you calculated a few
>> terms of his examples, the series will look like it is converging towards
>> something, but this says nothing about the total summand, which may not
>> exist using "conventional" summation methods.
>
> So it is possible it may like "level off" for a while then start
> ramping up again?

I don't know, but it is certainly possible for individual terms to exist, and
the limit to not exist, like with the sequence a(n) = (-1)^n. Or, for example,
consider a sine-like vibration times an unbounded sequence, like
a(n)=n^2*sin(1/n), starting with small amplitude and gradually increasing its
amplitude. Individual terms will eventually oscillate very wildly and in an
unbounded way, so the limit may not exist. The sequence may be suitably chosen
however, so that initial terms may "appear" to be very close to each other,
suggesting the existence of a limit which may not exist. Something like: a(n) =
([1/epsilon(n)])^2*sin(epsilon(n)), with [] denoting integer part and for
epsilon(n) -> 0.
--
Ioannis

Gottfried Helms

Jul 16, 2009, 5:39:04 AM
On 16.07.2009 09:34, I.N. Galidakis wrote:

> mike3 wrote:
>> So it is possible it may like "level off" for a while then start
>> ramping up again?
>
> I don't know, but it is certainly possible for indidvidual terms to exist, and
> the limit to not exist, like with the sequence a(n) = (-1)^n. Or, for example,
> consider a sine-like vibration times an unbounded sequence, like
> a(n)=n^2*sin(1/n), starting with small amplitude and gradually increasing its
> amplitude. Individual terms will eventually oscillate very wildly and in an
> unbound way, so the limit may not exist. The sequence may be suitably chosen
> however, so that initial terms may "appear" to be very close to each other,
> suggesting the existence of a limit which may not exist. Something like: a(n) =
> ([1/epsilon(n)])^2*sin(epsilon(n)), with [] denoting integer part and for
> epsilon(n) -> 0.

Remember also the partial evaluation of superroots, which I posted
some months ago and did not overcome a certain paradox concerning
a limit:

(just wildly copy&paste from
http://sci.tech-archive.net/Archive/sci.math/2009-02/msg00286.html)

> "What is the infinite tetraroot of 3?"
>
> x = 3
> x^x = 3
> x^x^x = 3
> x^x^x^x = 3
>
> all have unique solutions. The sequence of solutions to the above
> equations yields a strictly decreasing sequence, bounded below by
> e^(1/e), hence it has a limit.
>
>
> The above is OK, but the next sentence needs revision.
>
> But the limit can't exceed e^(1/e)

Just recall this discussion. It looks like a "tunnel effect" in quantum
physics: the partial evaluation with increasing n converges, but does not
arrive, with always x^^n > e^(1/e) at any finite number of steps; but for the
infinite case we would formulate that x^^oo = 3, where then -astonishingly-
x = 3^(1/3) < e^(1/e)...

Some things in tetration, with limits involved, appear to be
not always and not completely familiar, somehow ... ;)

Gottfried

G. A. Edgar

Jul 16, 2009, 8:28:58 AM
In article <h3m4b6$d0n$1...@news-01.bur.connect.com.au>, rancid moth
<ranci...@yahoo.com> wrote:

> rancid moth wrote:
> > rancid moth wrote:
> >> G. A. Edgar wrote:
> >>>> f(x) = a*x + log(a) + log(a)/a*exp(-a*x)
> >>>> - (1/2)*log(a)^2/a^2*exp(-2*a*x)
> >>>> + (1/3)*log(a)^3/a^3*exp(-3*a*x)
> >>>> - (1/4)*log(a)^4/a^4*exp(-4*a*x)
> >>>> + (1/5)*log(a)^5/a^5*exp(-5*a*x)
> >>>> - (1/6)*log(a)^6/a^6*exp(-6*a*x)
> >>>> + (1/7)*log(a)^7/a^7*exp(-7*a*x)
> >>>> +o(exp(-7*a*x))
> >>>>
> >>>
> >>> Doing more steps, putting x=1, a=log(3), I get the value
> >>> 1.224651242175237
> >>
> >> I don't understand this derivation (i'm still reading the very
> >> interesting paper) - but why not take this series all the way?
> >>
> >> sum(n=1,oo) (-1)^(n+1) / n * (k*exp(-a*x))^n = ln(1+k*exp(-a*x))
> >> which in this case is allowed since ln(ln(3)/(3*ln(3)) < 1.
> >>

So far that is order type omega.
Write mu[1] for exp(-a*x); these terms have coefficients times powers of
mu[1].

Beyond all of those... we have terms involving mu[2] = exp(-a*exp(a*x)),

mu[2] * ( -ln(a)^2*mu[1]^2 -ln(a)^8*mu[1]^8- ln(a)^4*mu[1]^4
+ ln(a)^7*mu[1]^7 + ln(a)^3*mu[1]^3 + ln(a)^5*mu[1]^5
+ ln(a)*mu[1] + ln(a)^9*mu[1]^9 - ln(a)^6*mu[1]^6 ...)

+ mu[2]^2 * ( (1/2)*ln(a)^7*mu[1]^6 + ln(a)^3*mu[1]^3
+ (1/2)*ln(a)^9*mu[1]^8 - (1/2)*ln(a)^10*mu[1]^9
- (3/2)*ln(a)^4*mu[1]^4 + 2*ln(a)^5*mu[1]^5 + 4*ln(a)^9*mu[1]^9
+ (1/2)*ln(a)^3*mu[1]^2 - (7/2)*ln(a)^8*mu[1]^8 + (1/2)*ln(a)^5*mu[1]^4
- (1/2)*mu[1]*ln(a)^2 - (1/2)*ln(a)^6*mu[1]^5 - (1/2)*ln(a)^4*mu[1]^3
- (1/2)*ln(a)^8*mu[1]^7 - (5/2)*ln(a)^6*mu[1]^6 + 3*ln(a)^7*mu[1]^7
- (1/2)*ln(a)^2*mu[1]^2 ...)

and so on [sorry, I just copied them from Maple, I didn't sort them]

That's order type omega^2.

Beyond all those, we have terms involving mu[3] =
exp(-a*exp(a*exp(a*x)))

That's order type omega^3.

continue.... .... ....

Here is the Maple worksheet:
http://mapleoracles.maplesoft.com:8080/maplenet/primes/worksheet/85_LogIter.mw

Increase N for more terms

W^3

Jul 16, 2009, 4:31:49 PM
In article <1247716434.991592@athprx03>,
"I.N. Galidakis" <morp...@olympus.mons> wrote:

> W^3 wrote:
> [snip]
>
> >> Hence:
> >>
> >> log^{(n-1)}[3](3^^n) < log^{(n-1)}(3^^n), for all n\in N.
> >>
> >> But the left term is just 3, for all n\in N. So if the sequence indeed
> >> converges
> >> to a, then, taking limits, we have:
> >>
> >> 3 <= a.
> >
> > The sequence is log^{n}(3^^n), leading to 1 < a. Nothing strange there.
>
> You are right. That was a typo of mine.
>
> >> We have two operators here: One is the power function 3^x and the other is
> >> log(x). I find it highly suspicious that n-1 iterates of log cancel the
> >> growth
> >> of n iterates of 3^x, which grows faster than the inverse of log, which is
> >> e^x.
> >> Something is very weird, here.
> >
> > Not really. log^{n+1}(3^^(n+1)) = log^{n}(3^^n*log(3)). So to get to
> > the next term, you dilate by a fixed amount in the previous term. The
> > continued interation of logs should imply the effect of that fixed
> > dilate -> 0 as n -> oo.
>
> I am sorry, I cannot see why the "dilate" term as you say -> 0.

It's a simple idea. If f grows slowly enough and d > 0 is fixed, then
f(d*x) - f(x) -> 0 as x -> oo. For example, log(log(2x)) - log(log(x))
-> 0 as x -> oo. So we would expect L_n(2x) - L_n(x) to be very small
for large n and large x. And a_(n+1) - a_n = L_n(L_1(3)*3^^n) -
L_n(3^^n) is the same kind of thing. This is all heuristic at this
point.
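For a concrete illustration of that heuristic: at x = 10^6, log(log(2x)) - log(log(x)) ≈ 2.6747 - 2.6257 ≈ 0.049, while at x = 10^100 the difference has already dropped to about 0.003.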

> >> What _I think_ is happening, is that all of us are "correct" more or less,
> >> and
> >> this series is probably not convergent "conventionally" (whatever that may
> >> mean,
> >> anyway),
> >
> > No, as I showed, it converges extremely rapidly in the conventional
> > sense, as the numerical evidence suggests. In fact log^{n}(x^^n)
> > converges for any x >= e (with the same rapid convergence).
>
> I am sorry, I don't agree. I think your claim above is wrong, because as I
> said,
> if y > e, then the growth of y^x is greater than the growth of the inverse of
> log, so their iterated composition cannot possibly tend to a limit. The
> iterates
> of y^x should win and the whole thing should go to the attractor +oo.

"Should" is not too convincing.

> > The estimate a_(n+1) - a_n < 1/[e^{1}*e^{2}*...*e^{n-1}] (from my
> > previous post, using my notation now) was obtained using the mean
> > value theorem. Is that causing trouble for you, or is it the
> > convergence of a_n from this estimate that is bothering you?

> I am sorry, I just cannot follow your proof details. Not clear enough for me.
> This may be due to my intelligence being somewhat limited :-)

Let me add some details. a_(n+1) - a_n = L_n(L_1(3)*3^^n) - L_n(3^^n)
= L_n'(c)(L_1(3)*3^^n - 3^^n) for some c between 3^^n and L_1(3)*3^^n.
This is just the mean value theorem. An easy induction argument shows
L_n'(c) = 1/[L_(n-1)(c)*L_(n-2)(c)*...*L_1(c)*c]. So we have

a_(n+1) - a_n =

(L_1(3) - 1)*3^^n/[L_(n-1)(c)*L_(n-2)(c)*...*L_1(c)*c].

Now L_1(3) - 1 < 1 and c > 3^^n > e^^n. Therefore 3^^n/c < 1, hence
the last expression above is less than

1/[L_(n-1)(c)*L_(n-2)(c)*...*L_1(c)] <

1/[L_(n-1)(e^^n)*L_(n-2)(e^^n)*...*L_1(e^^n)].

Clearly L_(n-k)(e^^n) = e^^k, so a_(n+1) - a_n <
1/[e^^1*e^^2*...*e^^(n-1)].

Does that help?

master1729

Jul 16, 2009, 6:27:42 PM
OK, this seems like a race between bases.


But WWW, when I compare the race between

1.6 ^^ (n + k)

1.7 ^^ n

1.7 wins the race!!

(tested on small positive integer values of k)

And both 1.6 and 1.7 are bigger than eta!


Regards

Tommy1729

master1729

Jul 16, 2009, 6:20:12 PM
Helms wrote :

Yes, that was a classic :)

But I even had similar arguments with (guess who :p) Ullrich about limits, and it did not involve tetration.

I'm not going into details here and don't want to insult Ullrich; he is quite good at limits, actually.

Regards

Tommy1729

Raymond Manzoni

Jul 16, 2009, 8:17:11 PM
mike3 wrote:

> Hi.
>
> I noticed this.
>
> log(3) ~ 1.098612288668109691395245237
> log(log(3^3)) ~ 1.192660116284808707569579569
> log(log(log(3^3^3))) ~ 1.220795907132767865324020633
> log(log(log(log(3^3^3^3)))) ~ 1.221729301870251716316203810
> (calculated indirectly via identity log(x^y) = y log(x).)
> log(log(log(log(log(3^3^3^3^3))))) ~ 1.221729301870251827504003124
> (calculated indirectly via identity log(log(x^x^y)) = y log(x) + log
> (log(x)).)
>
> It seems to be stabilizing on some weird value, around 1.2217293. What
> is this? And we seem to run out of log identities here making it
> infeasible to compute further approximations.
>
> Has this been examined before?


Let's try too!

LP2= log(log(3^3))
= log(3*log(3))
= log(3)+log(log(3))
= l+m (if l=log(3) and m=log(log(3)) )

LP3= log(log(log(3^3^3)))
= log(log(3^3*log(3)))
= log(3*log(3)*(1+log(log(3))/(3*log(3))))
= log(3)+log(log(3))+log(1+log(log(3))/(3*log(3)))
= l+m+log(1+m/(3*l))

LP4= log(log(log(log(3^3^3^3))))
= log(log(3^3*log(3)+log(log(3))))
= log(log(3^3*log(3)*(1+log(log(3))/(3^3*log(3)))))
= log(3*log(3)+log(log(3))+log(1+log(log(3))/(3^3*log(3))))
=log(3)+log(log(3))+log(1+(log(log(3))+log(1+log(log(3))/(3^3*log(3))))/(3*log(3)))
=l+m+log(1+[m+log(1+m/(3^3*l))]/(3*l))

LP5= log(log(log(log(log(3^3^3^3^3)))))
=log(3)+log(log(3))+log(1+(log(log(3))+log(1+(log(log(3))+log(1+log(log(3))/(3^3^3*log(3))))/(3^3*log(3))))/(3*log(3)))
=l+m+log(1+[m+log(1+[m+log(1+m/(3^3^3*l))]/(3^3*l))]/(3*l))

Clearly stated : at each iteration the most internal 'm' (right) is
replaced by [m+log(1+m/(3^3^...^3*l))]

And we get (as you supposed) very fast convergence to
1.2217293018702518275040031244123312415494118045699
(LP5's first billions of digits should be exact because of the error
of order m/(3^3^3^3*log(3)) )
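A small sketch of that replacement rule, assuming Python with the mpmath library (the names tower and LP are just illustrative), which reproduces LP2 through LP5:

from mpmath import mp, mpf, log

mp.dps = 50

l, m = log(3), log(log(3))       # l = log(3), m = log(log(3))

def tower(k):                    # 3^^k; only needed here for k <= 3
    t = mpf(3)
    for _ in range(k - 1):
        t = mpf(3) ** t
    return t

def LP(n):                       # l + m + log(1+[m+...]/(3*l)), nested n-2 times
    B = m                        # innermost bracket
    for k in range(n - 2, 0, -1):
        B = m + log(1 + B / (tower(k) * l))
    return l + B

for n in range(2, 6):
    print(n, LP(n))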


Hoping it helped,
Raymond

rancid moth

Jul 16, 2009, 9:55:17 PM
[cut]

> 1/[L_(n-1)(c)*L_(n-2)(c)*...*L_1(c)] <
>
> 1/[L_(n-1)(e^^n)*L_(n-2)(e^^n)*...*L_1(e^^n)].
>
> Clearly L_(n-k)(e^^n) = e^^k, so a_(n+1) - a_n <
> 1/[e^^1*e^^2*...*e^^(n-1)].
>
> Does that help?
>

[cut]

It helped me. I was stuck on 1/[L(n-1)...] for a while. I think I've
managed to scrape together a proof - it's slightly different from what you have
but basically the same idea.

Letting exp(exp(...n-1 times...exp(a_n) ..)) = 3^{n} for some a_n
told me intuitively that, since the base of the final exponent on either
side becomes so large, an increasingly smaller change in a_n will be
required for the two to equate.

So let

ln(a_n) = L_(n)(3^{n})

-->

a_n = L_(n-1)(3^{n}) then using ln(1+x) < x, for example, on

a_5 - a_4 = ln( 1 + ln( 1 + ln( 1 + ln(ln(3)) )/L_1(3^{4}) ) / L_2(3^{4}) )
/ L_3(3^{4}) ) < ln(ln(3))/[L_3(3^{4}) L_2(3^{4}) L_1(3^{4})]

--> carrying over to general n,

a_(n+1) - a_n < ln(ln(3)) / [ L_(n-1)(3^{n})....L_1(3^{n})]

then since we know L_(n-1)(3^{n})<L_1(3^{n})

a_(n+1) - a_n < ln(ln(3)) / L_(n-1)(3^{n}) = ln(ln(3))/ 3^{n-1}ln(3) <
1/3^{n-1}

if it's right, it avoids the mean value theorem.


W^3

Jul 17, 2009, 12:35:47 AM
In article <1247716434.991592@athprx03>,
"I.N. Galidakis" <morp...@olympus.mons> wrote:

[cut]

> It is fairly clear I think that a tower of n 3's eventually wins over a tower
> of
> n+k e's, for ANY fixed k\in N. In fact, a tower of n y's, with y > e will
> eventually win over a tower of n+k e's for any fixed k\in N.

No, in fact e^^(n+1) > 3^^n for all n. Much more is true:
e^^(n+1)/3^^n > e^^1*e^^2*...*e^^(n-1) for large n. I'll give some
details if anyone's interested.

rancid moth

Jul 17, 2009, 1:00:29 AM
rancid moth wrote:
> [cut]
>
>> 1/[L_(n-1)(c)*L_(n-2)(c)*...*L_1(c)] <
>>
>> 1/[L_(n-1)(e^^n)*L_(n-2)(e^^n)*...*L_1(e^^n)].
>>
>> Clearly L_(n-k)(e^^n) = e^^k, so a_(n+1) - a_n <
>> 1/[e^^1*e^^2*...*e^^(n-1)].
>>
>> Does that help?
>>
> [cut]
>
> It helped me. i was stuck on 1/[L(n-1)...] for a while. I think i've
> manage to scrape together a proof - it's slightly different to what
> you have but, basically the same idea.
>
> From letting exp(exp(...n-1 times...exp(a_n) ..)) = 3^{n} for some
> a_n, intuitively told me that since the base of the final exponant,
> on either side becomes so large, an increasingly smaller change in
> a_n will be required for the two to equate.
>
> So let
>
> ln(a_n) = L_(n)(3^{n})
>
> -->
>
> a_n = L_(n-1)(3^{n}) then using ln(1+x) < x, for example, on
>
> a_5 - a_4 = ln( 1 + ln( 1 + ln( 1 + ln(ln(3)) )/L_1(3^{4}) ) /
> L_2(3^{4}) ) / L_3(3^{4}) ) < ln(ln(3))/[L_3(3^{4}) L_2(3^{4})
> L_1(3^{4})]

there's an extra bracket in there (it's hard to sort out on a green screen):

a_5 - a_4 = ln( 1 + ln( 1 + ln( 1 + ln(ln(3))/L_1(3^{4}) ) /L_2(3^{4}) ) /

L_3(3^{4}) ) < ln(ln(3))/[L_3(3^{4}) L_2(3^{4}) L_1(3^{4})]

that's better.

I.N. Galidakis

Jul 17, 2009, 5:09:09 AM
W^3 wrote:
> In article <1247716434.991592@athprx03>,
> "I.N. Galidakis" <morp...@olympus.mons> wrote:
>
> [cut]
>
>> It is fairly clear I think that a tower of n 3's eventually wins over a tower
>> of
>> n+k e's, for ANY fixed k\in N. In fact, a tower of n y's, with y > e will
>> eventually win over a tower of n+k e's for any fixed k\in N.
>
> No, in fact e^^(n+1) > 3^^n for all n. Much more is true:
> e^^(n+1)/3^^n > e^^1*e^^2*...*e^^(n-1) for large n. I'll give some
> details if anyone's interested.

If you can prove that, then I concede, and Mike3's original sequence converges,
because that's a counterexample to my proposition.

Can you prove this without using the mean-value theorem? I am not sure I
understand the MVT for sequences.
--
Ioannis

W^3

Jul 17, 2009, 3:26:38 PM
In article <1247821751.985711@athprx04>,
"I.N. Galidakis" <morp...@olympus.mons> wrote:

I can prove it using the results and ideas in my previous posts; I'll
do that in another post. You won't like it unless you understand the
earlier stuff.

As for the earlier stuff, the "MVT for sequences" is just the plain
old MVT. The MVT says f(b) - f(a) = f'(c)(b - a) for some c between a
and b. In my earlier posts I am applying this to f(x) = L_n(x), with b
= L_1(3)*3^^n, a = 3^^n. So

L_n(L_1(3)*3^^n) - L_n(3^^n)

= L_n'(c)(L_1(3)*3^^n - 3^^n)

for some c between 3^^n and L_1(3)*3^^n. Now you need a formula for
L_n'(c). We have L_n(x) = L_1(L_(n-1)(x)), so the chain rule gives
L_n'(x) = L_1'(L_(n-1)(x))*L_(n-1)'(x) = (1/L_(n-1)(x))*L_(n-1)'(x).
Now induct to get the formula

L_n'(x) = 1/[L_(n-1)(x)*L_(n-2)(x)*...*L_1(x)*x].
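As a small sanity check of that formula: L_2(x) = log(log(x)), so L_2'(x) = 1/(log(x)*x), which is the n = 2 case; and differentiating L_3(x) = log(L_2(x)) gives L_3'(x) = L_2'(x)/L_2(x) = 1/(L_2(x)*L_1(x)*x), the n = 3 case.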

I.N. Galidakis

Jul 17, 2009, 10:40:43 PM
W^3 wrote:
> In article <1247821751.985711@athprx04>,
> "I.N. Galidakis" <morp...@olympus.mons> wrote:
>
>> W^3 wrote:
>>> In article <1247716434.991592@athprx03>,
>>> "I.N. Galidakis" <morp...@olympus.mons> wrote:
>>>
>>> [cut]
>>>
>>>> It is fairly clear I think that a tower of n 3's eventually wins over a
>>>> tower
>>>> of
>>>> n+k e's, for ANY fixed k\in N. In fact, a tower of n y's, with y > e will
>>>> eventually win over a tower of n+k e's for any fixed k\in N.
>>>
>>> No, in fact e^^(n+1) > 3^^n for all n. Much more is true:
>>> e^^(n+1)/3^^n > e^^1*e^^2*...*e^^(n-1) for large n. I'll give some
>>> details if anyone's interested.
>>
>> If you can prove that, then I concede, and Mike3's original sequence
>> converges,
>> because that's a counterexample to my proposition.
>>
>> Can you prove this without using the mean-value theorem? I am not sure I
>> understand the MVT for sequences.
>
> I can prove it using the results and ideas in my previous posts; I'll
> do that in another post. You won't like it unless you understand the
> earlier stuff.

I'll give it a try, regardless.

> As for the earlier stuff, the "MVT for sequences" is just the plain
> old MVT. The MVT says f(b) - f(a) = f'(c)(b - a) for some c between a
> and b. In my earlier posts I am applying this to f(x) = L_n(x), with b
> = L_1(3)*3^^n, a = 3^^n. So
>
> L_n(L_1(3)*3^^n) - L_n(3^^n)
>
> = L_n'(c)(L_1(3)*3^^n - 3^^n)
>
> for some c between 3^^n and L_1(3)*3^^n. Now you need a formula for
> L_n'(c). We have L_n(x) = L_1(L_(n-1)(x)), so the chain rule gives
> L_n'(x) = L_1'(L_(n-1)(x))*L_(n-1)'(x) = (1/L_(n-1)(x))*L_(n-1)'(x).
> Now induct to get the formula
>
> L_n'(x) = 1/[L_(n-1)(x)*L_(n-2)(x)*...*L_1(x)*x].

OK. I see the validity of the MVT for sequences and your result. Please proceed
with the proof of your claim that 3^^n <= e^^{n+1}, for all n\in N.
--
Ioannis

W^3

Jul 17, 2009, 11:03:24 PM
In article <aderamey.addw-090...@news.individual.net>,
W^3 <aderam...@comcast.net> wrote:

> In article <1247716434.991592@athprx03>,
> "I.N. Galidakis" <morp...@olympus.mons> wrote:
>
> [cut]
>
> > It is fairly clear I think that a tower of n 3's eventually wins over a
> > tower
> > of
> > n+k e's, for ANY fixed k\in N. In fact, a tower of n y's, with y > e will
> > eventually win over a tower of n+k e's for any fixed k\in N.
>
> No, in fact e^^(n+1) > 3^^n for all n. Much more is true:
> e^^(n+1)/3^^n > e^^1*e^^2*...*e^^(n-1) for large n. I'll give some
> details if anyone's interested.

Here's the proof. With a_n = L_n(3^^n), the work in earlier posts shows

(1) a_(n+1) - a_n < (L_1(3) - 1)/[e^^1*e^^2*...*e^^(n-1)]

< (.1)/[e^^1*e^^2*...*e^^(n-1)]

for n > 1; earlier I used L_1(3) - 1 < 1 and carried on with the 1 to
keep things simple; now I use the better estimate log(3) < 1.1 which
gives the .1 factor above.

Let A = lim a_n. Then

A = a_2 + (a_3 - a_2) + ... + (a_(n+1) - a_n) + ...

Now a_2 = log(3log(3)) < log(3) + log(1.1) < 1.1 + .1 = 1.2 (using
log(1+x) <= x). Use (1) to estimate the other terms. We get

A < 1.2 + (.1)[1/e^^1 + 1/(e^^1*e^^2) + ...] < 1.3,

since the series in brackets is dominated by 1/2 + 1/2^2 + ... = 1.
Note each a_n < 1.3 as well, as this sequence is strictly increasing.

We can now say L_n(e^^(n+1)) = e > 1.3 > L_n(3^^n) for each n. Thus

(2) L_n(e^^(n+1)) - L_n(3^^n) > e - 1.3 > 1

for all n. This of course implies e^^(n+1) > 3^^n for all n.

To get e^^(n+1) >>> 3^^n, use the MVT again, letting e^^(n+1) = y_n,
3^^n = x_n. By (2),

L_n(y_n) - L_n(x_n) = L_n'(c_n)(y_n - x_n) > 1

for all n. So

y_n/x_n > 1 + 1/(L_n'(c_n)*x_n).

= 1 + L_(n-1)(c_n)*...*L_1(c_n)*c_n/x_n

> L_(n-1)(x_n)*...*L_1(x_n)

> L_(n-1)(e^^n)*...*L_1(e^^n)

= e^^1*e^^2*...*e^^(n-1).

In other words, e^^(n+1)/3^^n > e^^1*e^^2*...*e^^(n-1) for all n.
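A small numerical check of the first claim: for n = 2, e^^3/3^^2 = e^(e^e)/27 ≈ 3,814,279/27 ≈ 1.4*10^5, which indeed exceeds e^^1 = e; for n = 3, e^^4/3^^3 = e^(e^(e^e))/7625597484987 is astronomically larger than e^^1*e^^2 = e*e^e ≈ 41.2.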

I.N. Galidakis

Jul 18, 2009, 2:04:56 AM

I can see it. Therefore I agree that the limit exists afterall. This inequality
(3^^n << e^^{n+1}) has been an excellent intellectual morcel :-)

Many thanks to all who particiated,
--
Ioannis

Martin Eisenberg

Jul 18, 2009, 6:18:50 AM
W^3 wrote:

> To get e^^(n+1) >>> 3^^n, use the MVT again, letting e^^(n+1)) =
> y_n, 3^^n = x_n. By (2),
>
> L_n(y_n) - L_n(x_n) = L_n'(c_n)(y_n - x_n) > 1
>
> for all n. So
>
> y_n/x_n > 1 + 1/(L_n'(c_n)*x_n).
>
> = 1 + L_(n-1)(c_n)*...*L_1(c_n)*c_n/x_n
>
> > L_(n-1)(x_n)*...*L_1(x_n)

Can you please clarify the last step for me?


Martin

--
Quidquid latine scriptum est, altum videtur.

W^3

Jul 18, 2009, 2:51:34 PM
In article <h3s7ia$6rr$2...@news.eternal-september.org>,
Martin Eisenberg <martin.e...@udo.edu> wrote:

> W^3 wrote:
>
> > To get e^^(n+1) >>> 3^^n, use the MVT again, letting e^^(n+1)) =
> > y_n, 3^^n = x_n. By (2),
> >
> > L_n(y_n) - L_n(x_n) = L_n'(c_n)(y_n - x_n) > 1
> >
> > for all n. So
> >
> > y_n/x_n > 1 + 1/(L_n'(c_n)*x_n).
> >
> > = 1 + L_(n-1)(c_n)*...*L_1(c_n)*c_n/x_n
> >
> > > L_(n-1)(x_n)*...*L_1(x_n)
>
> Can you please clarify the last step for me?

I dropped the 1, then used x_n < c_n to get c_n/x_n > 1 and L_k(c_n) >
L_k(x_n) for each k (these functions are strictly increasing*). So the
last line is < the preceding line.

*Some care needs to be taken with the domains of the L_n's and their
regions of positivity. Clearly L_1(x) is defined for x > 0, and is
positive for x > 1. For n > 1, L_n(x) is defined for x > e^^(n-2) and
is positive for x > e^^(n-1). So everything's OK.

> Martin

master1729

Jul 18, 2009, 6:26:36 PM
mike 3 wrote :

> Hi.
>
> I noticed this.
>
> log(3) ~ 1.098612288668109691395245237
> log(log(3^3)) ~ 1.192660116284808707569579569
> log(log(log(3^3^3))) ~ 1.220795907132767865324020633
> log(log(log(log(3^3^3^3)))) ~
> 1.221729301870251716316203810
> (calculated indirectly via identity log(x^y) = y
> log(x).)
> log(log(log(log(log(3^3^3^3^3))))) ~
> 1.221729301870251827504003124
> (calculated indirectly via identity log(log(x^x^y)) =
> y log(x) + log
> (log(x)).)
>
> It seems to be stabilizing on some weird value,
> around 1.2217293. What
> is this? And we seem to run out of log identities
> here making it
> infeasible to compute further approximations.
>
> Has this been examined before?

OK, a note:

The basic question is whether the sequence converges.

My note is about the importance of the bases.

Let's take base 1 = e and base 2 = x, which is real and > e.

Now there is a "race" between the bases, and the question is who "wins":

lim n -> oo

e^^(n+1) 'vs' x^^n

where 'vs' denotes > or = or <.

Note that '=' leads to x winning,

since y = e^^(n+1) = x^^n => x^y > e^y.

Thus '=' does not occur (the race is won or lost).

So we have a simple structure: for x too small, x loses;

for x large enough, x wins.

So all that matters for x is its size:

x > q or x < q.

Which brings us to q: what value does q have?

Following W^3, it seems q needs to be larger than 3.

I'm not sure q has a closed-form expression; maybe a careful study of Lambert W, W^3's proof and others' work will give a solution...

I have little time for research at the moment, but I will give a brute estimate, which might even be quite exact.

e^e $$ x

take elog on both sides : e $$ elog(x)

e^e^e $$ x^x

take elog on both sides : e^e $$ elog(x) * x

take elog again : e $$ elog(x) + elog(elog(x))

Keep increasing the tower height and taking elogs:

e^e $$ elog(x^x) + elog(elog(x^x))

= e^e $$ elog(x) * x + elog(elog(x) * x)

= e^e $$ elog(x) * x + elog(elog(x)) + elog(x)

= e^e^e $$ elog(x^x) * x^x + elog(elog(x^x) * x^x)

= e^e^e $$ elog(x) * x * x^x + elog(elog(x) * x * x^x)

= e^e^e $$ elog(x) * x * x^x + elog(elog(x)) + elog(x) + elog(x) * x

= e^e^e $$ elog(x)[1 + x + x^(x+1)] + elog(elog(x))

Now replace '$$' with '=' and solve for real x > e:

=> q estimated around q = x = 6.6568558380496

In the limit,

e^^(n+1) grows slower than 6.6568558380496^^n.


I think this has never been posted on sci.math.

I cannot take full credit for tetration research, not even in this thread, since many others gave interesting conjectures, proofs, ideas or good attempts.

Despite still being a minority, I must say...

But since I never saw this, and the idea of a boundary for bases (q) might look counterintuitive and somewhat new, I posted this "note".

There is still work to do, of course: a strong proof, a closed form, etc.

(I have no more digits for 6.6568558380496 because I used paper and pencil and I don't have much time.)


Regards

"the master"

tommy1729

" WHY Mr Andersson , why ? Why do you go on ? " Agent Smith

*********************************************************

I.N. Galidakis

Jul 18, 2009, 7:35:47 PM


I _think_ that what Wade's proof shows is the following:

Given e^(1/e) < x < y, then there exists a\in R: y^^n ~ (x,a)^^n, or explicitly:

y^y^...^y (n-y's) ~ x^x^...^x^a (n-x's)

In other words, both towers will diverge to +oo, but we can make a "slight"
adjustment on the top of the tower with the smaller base, to make it grow
roughly at the same rate as the tower with the larger base. This very
adjustment, seems to be the limit, hence:

lim_{n->oo}log^{(n)}[x](y^^n) = a

It looks to me, like this holds for ANY e^(1/e) < x < y.
--
Ioannis

W^3

Jul 18, 2009, 8:43:41 PM
In article
<11432143.1110.12479560...@nitrogen.mathforum.org>,
master1729 <tomm...@gmail.com> wrote:

Actually e^^(n+1) <<< (e^e)^^n. To see this recall that for any x > e,
L_n(x^^n) is strictly increasing with n. So for n > 1,

L_n(e^^(n+1)) = e < e + 1 = L_2((e^e)^^2) <= L_n((e^e)^^n).

Thus e^^(n+1) < (e^e)^^n for all n > 1, and you can get <<< using the
mean value theorem as in previous posts.

master1729

Jul 19, 2009, 5:44:49 PM
W^3 wrote :

> In article
> <11432143.1110.1247956026866.JavaMail.jakarta@nitrogen

Yes, I know.

I almost posted it myself, but I assumed you already knew, as you do.

e^e is overkill; notice this is much bigger than 6.656...

My special thanks to W^3 for valuable posts in a tetration thread.

I wonder if someone still has a question not yet asked or answered.

(6.656... is still 'mysterious'; perhaps consider base-6.656 logs of e^e^... or elogs of 6.656...^^)


Regards

Tommy1729

" I know it does , I've seen it " Smith

Lew

Jul 23, 2009, 5:15:21 PM
Here is a simple proof of the convergence of the sequence that is
based on a simple proof that (as noted by W^3) 3^^n < e^^(n+1).
3^^n < (1/2) e^^(n+1) can be easily proved by induction.
It uses 3^(x/2) < (e^x)/2 which is true if x > log(4)/log(e^2/3) =
1.538.
The boundedness of the sequence easily follows from log^^n 3^^n <
log^^n e^^(n+1) = e.

Consequently the sequence converges because it is increasing (using
log 3 > 1, as shown by W^3) and it is bounded.
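To spell out the induction sketched above: the base case is 3^^1 = 3 < (1/2) e^^2 = e^e/2 ≈ 7.58, and if 3^^n < (1/2) e^^(n+1), then 3^^(n+1) = 3^(3^^n) < 3^((1/2) e^^(n+1)) < (1/2) e^(e^^(n+1)) = (1/2) e^^(n+2), using 3^(x/2) < (e^x)/2 with x = e^^(n+1) > 1.538.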


Robert Munafo

Jul 28, 2009, 8:35:34 PM
I have a program called "Hypercalc" that effectively implements level-index arithmetic (search for "symmetric level-index") to within the limits of N-digit roundoff error, where N is as much as 300.

Using this, it is fairly easy to see that mike3's proposed limit of 1.2217293018702... is accurate.

First, here is 3^^n for the first 7 values of n:

C1 = 3^3
R1 = 27
C2 = 3^3^3
R2 = 7625597484987
C3 = 3^3^3^3
R3 = 1.35 x 10 ^ 3638334640024
C4 = 3^3^3^3^3
R4 = 10 ^ ( 6.46 x 10 ^ 3638334640023 )
C5 = 3^3^3^3^3^3
R5 = 10 ^ [ 10 ^ ( 6.46 x 10 ^ 3638334640023 ) ]
C6 = 3^3^3^3^3^3^3
R6 = 3 PT ( 6.46 x 10 ^ 3638334640023 )

and note that "3 PT" means "3 powers of ten".

Defining "log" as the natural logarithm, here are the first five steps in the series:

C12 = ln(3)
R12 = 1.0986122886681
C13 = ln(ln(3^3))
R13 = 1.1926601162848
C14 = ln(ln(ln(3^3^3)))
R14 = 1.2207959071327
C15 = ln(ln(ln(ln(3^3^3^3))))
R15 = 1.2217293018702
C16 = ln(ln(ln(ln(ln(3^3^3^3^3)))))
R16 = 1.2217293018702

To be more formal about the assertion that there is a limit, consider the following relationships (these are different from the initial problem, by having an extra "^x"):

ln(3^x) = ln(3)*x

ln(ln(3^3^x)) = ln(ln(3)*3^x) = ln(ln(3)) + ln(3)*x

ln[ln(ln(3^3^3^x))] = ln[ln(ln(3) * 3^3^x)] = ln[ln(ln(3)) + ln(3)*3^x]

as x gets larger, approaching infinity, ln(ln(3)) + ln(3)*3^x is almost exactly equal to ln(3)*3^x, so we have ln[ln(3)*3^x] which is ln(ln(3)) + ln(3)*x. Note that this is the same as the expression for ln(ln(3^3^x)).

The same type of thing happens with ln{ln[ln(ln(3^3^3^3^x))]}.

Going back to the original problem, let "x" stand for "3^3^3^3" and put an extra "ln(ln(ln(ln(...)))))" around the whole thing, and you can see that series has a limit.

Now do the same thing, but let "x" stand for "3^3^3^3^3" and put an extra "ln(ln(ln(ln(ln(...)))))" around each term. Again, there is clearly a limit and this limit will differ only a tiny bit from the previous limit.

Now do the same thing yet again with x = "3^3^3^3^3^3" and "ln(ln(ln(ln(ln(ln(...)))))))", and there is another limit that differs by an even lesser amount.

Hypercalc is a Perl script available here:

http://www.mrob.com/pub/perl/hypercalc.txt

- Robert Munafo
