Aug 2, 2007, 4:51:15 PM

I understand that, due to its nature as a piecewise constructed

function, it'll be hard to find "the" solution to extending tetration

to the reals.

Obviously, I can stick any old function f(x) on the interval [-1,0],

so long as the endpoints evaluate to 0 and 1, respectively. Any such

function can then be exponentiated iteratively to the right to fill

out the rest of the function (and a logarithm of the interval to take

us down to -2). But some functions must be better than others, right?

The typical answer I've seen, often with a heck of a lot of obtuse

math (e.g., Cauchy sequences, etc.) to support it as a "good" answer,

is:

f(z, x) =

{

frac(x) if x in [-1,0),

power(z, f(z,x-1)) if x >= 0,

log_z(f(z, x+1)) if x in (-2, -1)

}
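To make the recipe concrete, here is a minimal Python sketch of this piecewise construction (the function name and endpoint handling are my choices; frac(x) equals x+1 on [-1,0)):

```python
import math

def tetrate_naive(z, x):
    """First-order piecewise tetration: linear seed on [-1, 0),
    iterated exponentials above, iterated logarithms below."""
    if -1.0 <= x < 0.0:
        return x + 1.0                         # frac(x) on [-1, 0)
    if x >= 0.0:
        return z ** tetrate_naive(z, x - 1.0)  # exponentiate to the right
    return math.log(tetrate_naive(z, x + 1.0), z)  # -2 < x < -1

print(tetrate_naive(2.0, 3.0))    # 2^^3 = 16
print(tetrate_naive(2.0, -1.5))   # log_2(0.5) = -1
```

At integer x this reproduces the usual tower values exactly; between integers it is the linear seed pushed through the recursion.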

Often it'll be written like the following, which can mask its

equivalence (for x>=0) to the former if you're not paying attention:

f(z, x) =

{

power(z, frac(x)) if x in [0,1),

power(z, f(z,x-1)) if x >= 1,

}

This equation works fine if z is e, and in fact, unless you need a

good deal of precision, it's really great for z=e. Nice and simple.

Good ballpark answers.

But what about z=10, or z=e^e ~= 15.15, or z=1.5?

In those cases, graphing the equation should be sufficient to tell you

that this equation sucks for values of z substantially different from

e.

So I did a bit of thinking about it, and figured that the idea of a

linear approximation over an interval of unit length is actually not

too bad. The problem is, you have to choose your interval carefully.

As an example, take:

z~=1.76322283435189671 (z^z = z^^2 = e, for the curious).

When graphed, you'll notice that at x=0, the slope suddenly drops. It

was rising with slope 1, then you get a "bow" from x=0 to x=1, because

the slope drops to ~0.56714329 (i.e., ln(z)) at x=0, then rises to

~1.5416553 (i.e., e*ln(z)), then another "bow" from x=1 to x=2, and so

on.

However, try using a linear approximation in the interval [0,1], and

you'll notice that all the bows disappear, and that the equation now

looks "right" when graphed.

f(z=1.76322..., x) =

{

1+(z-1)*x if x in [0,1],

power(z, f(z,x-1)) if x > 1,

log_z(f(z, x+1)) if x in (-2, 0)

}

What we have done is taken a good first order approximation of "the" function z^^x, and made it a good *second* order approximation. The first derivative is now continuous.

Try the graph for f(z,x)=z^^x, z=1.60107515392372877932706476774783545 (z^(z^z) = z^^3 = e, for the curious)

To save you the painful legwork, notice that if you use a linear

approximation for the interval x=[1,2], you once again get a smooth

function:

f(z=1.601075..., x) =

{

(2*z-z^z) + (z^z-z)*x if x in [1,2],

power(z, f(z,x-1)) if x > 2,

log_z(f(z, x+1)) if x in (-2, 1)

}
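The smoothing claim is easy to check numerically. A short Python sketch (my naming; note the linear piece multiplies x itself, not frac(x)) verifies both that f(3) = e and that the one-sided slopes agree at the interval's edge:

```python
import math

Z = 1.60107515392372877932706476774783545   # z^(z^z) = z^^3 = e

def f(x):
    if 1.0 <= x <= 2.0:
        return (2*Z - Z**Z) + (Z**Z - Z) * x   # linear on [1, 2]
    if x > 2.0:
        return Z ** f(x - 1.0)
    return math.log(f(x + 1.0), Z)             # -2 < x < 1

print(f(3.0))   # ~2.71828..., since z^^3 = e

# One-sided secant slopes at x = 2: the "bow" is gone.
h = 1e-6
print((f(2.0) - f(2.0 - h)) / h, (f(2.0 + h) - f(2.0)) / h)
```

Both printed slopes come out near 0.5235; the match is no accident, since z^^3 = e forces z^z * ln(z) = 1, which is exactly the condition for the slopes to agree at x = 2.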

Here's a more interesting example. Take e^e. You should be able to

confirm that linear approximation over the interval [-1.5, -0.5] gives

a smooth first derivative. The linear approximation is given by

f(z,x)=2/e + (2/e)x in that interval, or piecewise construction

elsewhere.

One final interesting note: for any given z>e^(1/e), there is an

interval [a-1, a] over which linear interpolation will give a

continuous first derivative. Furthermore, by trial and error I have

consistently found that f(a+1) is always equal to e. This is trivially

verified for the examples I gave above (z = e, 1.76322... (z^^2=e),

1.601075... (z^^3=e), and e^e). But here's an example not using some

trivial construction of e:

For z=2:

f(z=2.0, x) =

{

1 + 0.91392866794406579311242690122307732*x if x in

[-0.51561313654288695275487576262653

, 0.48438686345711304724512423737347],

power(z, f(z,x-1)) if x > 0.48438686345711304724512423737347,

log_z(f(z, x+1)) if x in (-2, -0.51561313654288695275487576262653)

}

The derivative is continuous, and more interestingly, f(2,

1.48438686345711304724512423737347) = e, as promised.

Bear in mind that this is only a second order approximation, but it's

still a nice one.

Finally, here's an approximation for z=10:

For z=10:

f(z=10.0, x) =

{

0.7965101706027150385281614202715 + 0.7965101706027150385281614202715*x if x in

[-1.45475337549723503548411067918

, -0.45475337549723503548411067918],

power(z, f(z,x-1)) if x > -0.45475337549723503548411067918,

log_z(f(z, x+1)) if x in (-2, -1.45475337549723503548411067918)

}
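Here's a quick numerical check of this z = 10 definition in Python (my naming; again the linear piece uses x itself rather than frac(x)), confirming the f(a+1) = e claim:

```python
import math

A = -0.45475337549723503548411067918    # right endpoint of the linear interval
C = 0.7965101706027150385281614202715   # intercept and slope coincide here

def f10(x):
    if A - 1.0 <= x <= A:
        return C + C * x                 # linear on [A-1, A]
    if x > A:
        return 10.0 ** f10(x - 1.0)
    return math.log10(f10(x + 1.0))      # -2 < x < A-1

print(f10(0.0), f10(1.0))   # 1.0 10.0 -- integer points still behave
print(f10(A + 1.0))         # ~2.718281828... = e, as promised
```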

I have often seen the complaint that using the first formula (at the top

of this post), f(0.99)~=9.772, f(1.00)=10.0, f(1.01)=10.551. As you

can see, the two secants are not even remotely parallel.

With my version, you get ~9.590, 10.0, and ~10.435, respectively. The

secants are not quite parallel, but they're much closer, and the

difference is attributable to the length of the secant line. Using a

smaller interval, we see:

f(10, 0.999)= 9.957897

f(10, 1.000)=10.0

f(10, 1.001)= 10.04236

Here, the secant rises are 0.042103 and 0.042358, respectively (slopes of about 42.1 and 42.4 over the 0.001 step).

Ah, but of course this was to be expected. What about values around

the endpoints of the interval I defined (including integer offsets)?

f(10, 0.545147...)= 2.7177833324664...

f(10, 0.545247...)= 2.7182818284590...

f(10, 0.545347...)= 2.7187804616140...

Here, the secant rises are 0.000498496 and 0.000498633, respectively, over the 0.0001 step, so the slopes are nearly equal. Also notice that the central value came out almost

exactly equal to e. With sufficient precision on the coefficients (which can be calculated with methods similar to Newton's method), the value will

converge on e. I'm still working out why, but there it is. At any

rate, we now have a continuous function with a continuous first

derivative, and a look at the graph shows a much more intuitive "fit"

than the naive first order approximation discussion at the top.

Obviously I don't expect "the" tetration formula to have an entire

unit interval that is linear. The second and higher derivatives are

discontinuous.

But without resorting to finding higher order polynomial functions, or

more complex equations involving things like sinh, this makes a very

nice "quick and dirty" approximating function.

Aug 2, 2007, 6:34:55 PM

On Aug 2, 1:51 pm, jayd...@gmail.com wrote:

> Here's a more interesting example. Take e^e. You should be able to

> confirm that linear approximation over the interval [-1.5, -0.5] gives

> a smooth first derivative. The linear approximation is given by

> f(x,n)=2/e + (2/e)x in that interval, or piecewise construction

> elsewhere.

>

> One final interesting note: for any given z>e^(1/e), there is an

> interval [a-1, a] over which linear interpolation will give a

> continuous first derivative. [snip]
I forgot to mention that for z > e, I've found a formula to get an

exact value for a. Once I have a, I haven't figured out an exact

method to retrieve the coefficients for the linear equation, so I

resort to numerical methods to find those.

For z > e, a = 1/(ln(ln(z))+1)-1

For z=10, this gives

-0.4547533754972350354841106791797486379895982525...
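In Python this formula is a one-liner; as a sketch (function name mine), note it also reproduces a = 0 for z = e, i.e. the usual [-1, 0] interval:

```python
import math

def a_for(z):
    """Right endpoint a of the smooth linear interval [a-1, a], for z > e."""
    return 1.0 / (math.log(math.log(z)) + 1.0) - 1.0

print(a_for(10.0))      # ~-0.45475337549723503...
print(a_for(math.e))    # 0.0 -- recovers the interval [-1, 0] used for base e
```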

See below:

> For z=10:

> f(z=10.0, x) =

> {

> 0.7965101706027150385281614202715 +

> 0.7965101706027150385281614202715*frac(x) if x in

> [-1.45475337549723503548411067918

> , -0.45475337549723503548411067918],

> power(z, f(z,x-1)) if x > -0.45475337549723503548411067918,

> log_z(f(z, x+1)) if x in (-2, -1.45475337549723503548411067918)

>

> }

By the way, there were several spots in my first post where I used x

and n instead of z and x, respectively. Apologies for the confusion,

I hope it's clear what I meant.

Aug 3, 2007, 12:56:55 AM

On Aug 2, 3:34 pm, jayd...@gmail.com wrote:

> On Aug 2, 1:51 pm, jayd...@gmail.com wrote:

>

> > Here's a more interesting example. Take e^e. You should be able to

> > confirm that linear approximation over the interval [-1.5, -0.5] gives

> > a smooth first derivative. The linear approximation is given by

> > f(x,n)=2/e + (2/e)x in that interval, or piecewise construction

> > elsewhere.

>

> > One final interesting note: for any given z>e^(1/e), there is an

> > interval [a-1, a] over which linear interpolation will give a

> > continuous first derivative. [snip]

>

> I forgot to mention that for z > e, I've found a formula to get an

> exact value for a. Once I have a, I haven't figured out an exact

> method to retrieve the coefficients for the linear equation, so I

> resort to numerical methods to find those.

I can't believe I didn't see it! I'm going to change my definition of

a, for soon-to-be-obvious reasons. If f(z,a)=e, then f(z,a-1)=log_z(e)

and f(z,a-2)=log_z(log_z(e)).

Finding a is then pretty simple. Since we're using a linear function

for the interval [a-2,a-1], and we know that there is an integer value

in that interval (unless a is an integer), all we need to do is find

the value n such that log_z(log_z(e)) < f(z,n) < log_z(e). In other

words, keep exponentiating z (0, 1, z, z^z, z^(z^z), etc.) until you

find a value between log_z(e) and log_z(log_z(e)).

Having found n, you now find a by linear interpolation between the two endpoint values:

a = n + 1 + (log_z(e)-z^^n)/(log_z(e)-log_z(log_z(e)))

Now we have two points, <a-2, log_z(log_z(e))>, <a-1, log_z(e)>.

Linear interpolation between, iterated exponentiation to the right,

iterated logarithms to the left. Voila! A continuous function with

continuous first derivative.
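A Python sketch of this search (my naming; it returns the a of the revised definition, with f(z,a) = e):

```python
import math

def find_a(z, max_iter=100):
    """Find a with f(z, a) = e: iterate 0, 1, z, z^z, ... until a tower
    value lands strictly between log_z(log_z(e)) and log_z(e), then
    interpolate linearly between those two endpoint values."""
    L1 = math.log(math.e, z)        # log_z(e)
    L2 = math.log(L1, z)            # log_z(log_z(e))
    n, v = -1, 0.0                  # z^^(-1) = 0
    for _ in range(max_iter):
        if L2 < v < L1:
            return (n + 1) + (L1 - v) / (L1 - L2)
        n, v = n + 1, z ** v        # next tower value
    raise ValueError("a appears to be an integer (e.g. z = e); "
                     "handle that boundary case separately")

print(find_a(10.0))   # ~0.545247, i.e. 1 + the a quoted earlier for z = 10
print(find_a(2.0))    # ~1.484387, matching f(2, 1.48438686...) = e
```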

On an interesting sidenote, as z approaches e^(1/e) from above, a

tends towards infinity. I hypothesize that by using a linear function between the points <n, z^^n> and <n+1, z^^(n+1)> (with z=e^(1/e)), and using iterated logarithms to the left, then as n tends towards infinity, this will become an infinitely differentiable function that is "the" solution for z^^x for z=e^(1/e). If I'm right, I'm certain this fact is already known, and I'd be interested to see where it's been discussed. Perhaps I can gain more insight by studying the work of others who have gone down this path.

Aug 3, 2007, 4:17:55 AM

On 02.08.2007 22:51, jay...@gmail.com wrote:

> However, try using a linear approximation in the interval [0,1], and

> you'll notice that all the bows disappear, and that the equation now

> looks "right" when graphed.

>

> f(z=1.76322..., x) =

> {

> 1+(z-1)*frac(x) if x in [0,1],

> power(z, f(z,x-1)) if n > 1,

> log_z(f(z, x+1)) if n in (-2, 0)

> }

>

>

> What we have done is taken a good first order approximation of "the"

> function x^^n, and made it a good *second* order approximation. The

> first derivative is now continuous.

>

> Try the graph for f(z,n)=z^^x, z=1.60107515392372877932706476774783545

> (z^(z^z) = z^^3 = e, for the curious)

>

I tried my matrix approach as described in earlier threads, using the eigensystem decomposition and then exponentiating the eigenvalues. That is, using your z as s in my notation:

s = z = 1.60107515392372877932706476774783545

Bs = dV(log(s))* B

where B is the previously described constant matrix,

and Bs works as operator for a vector V(x) containing

the successive powers of x V(x) = [1,x,x^2,x^3,...]

from which then

V(1)~ * Bs = V(s)

V(1)~ * Bs^2 = V(s^s) = V(s^^2)

V(1)~ * Bs^3 = V(s^s^s) = V(s^^3)

....

Using the eigensystem of Bs,

Bs = Qs * Ds * Qs^-1

gives then for the desired operation

V(1)~ * Bs^x = V(s^^x)~

the equation for the actual numerical approximation

V(1)~ * Qs * Ds^x * Qs^-1 = V(s^^x) ~

where we can do elementwise exponentiation of Ds and

keep everything else constant.

Since using a finite submatrix of Bs (of dimension 20, 24 or 28) is only a rough approximation, I give a table whose entries are the results at these three dimensions.

The table is

x s^^x (dim=20) s^^x (dim=24) s^^x (dim=28)

-------------------------------------------------------

1 1.60107515392 1.60107515392 1.60107515392

1.1 1.65421216949 1.65421216924 1.65421216919

1.2 1.70681266678 1.70681266636 1.70681266628

1.3 1.75901977647 1.75901977595 1.75901977587

1.4 1.81097069279 1.81097069225 1.81097069216

1.5 1.86279825502 1.86279825451 1.86279825443

1.6 1.91463239939 1.91463239895 1.91463239889

1.7 1.96660151842 1.96660151809 1.96660151804

1.8 2.01883375998 2.01883375976 2.01883375973

1.9 2.07145829533 2.07145829522 2.07145829521

2 2.12460658362 2.12460658362 2.12460658362

2.1 2.17841365921 2.17841365929 2.17841365929

2.2 2.23301946840 2.23301946852 2.23301946852

2.3 2.28857028340 2.28857028351 2.28857028350

2.4 2.34522022304 2.34522022310 2.34522022307

2.5 2.40313291318 2.40313291315 2.40313291310

2.6 2.46248332386 2.46248332372 2.46248332365

2.7 2.52345982592 2.52345982568 2.52345982560

2.8 2.58626651726 2.58626651700 2.58626651691

2.9 2.65112587825 2.65112587820 2.65112587814

3 2.71828182777 2.71828182846 2.71828182846

------------------------------------------------------

and dim=24 and dim=28 already agree to the 8th to 10th digit.

I cannot yet describe the numerical conditions in full detail, but I think the theoretical approach of using the eigenvalue decomposition (or an approximation to it) is the best and most coherent way to define the continuous version of tetration.
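For readers who want to experiment, here is a small Python/NumPy sketch of this operator approach (my naming and conventions; I build the transpose of the Bs above, and plain double precision limits the attainable accuracy compared with the table):

```python
import numpy as np
from math import factorial, e

S = 1.60107515392372877932706476774783545   # s with s^(s^s) = e
N = 20                                      # truncation dimension ("dim")

# Carleman-type matrix of x -> s^x: since (s^x)^i = sum_j (i*ln s)^j/j! * x^j,
# we have V(s^x) = M @ V(x) with M[i, j] = (i*ln s)^j / j!.
ln_s = np.log(S)
fact = np.array([factorial(k) for k in range(N)], dtype=float)
i = np.arange(N).reshape(-1, 1)
j = np.arange(N).reshape(1, -1)
M = (i * ln_s) ** j / fact

w, Q = np.linalg.eig(M)                     # Bs = Qs * Ds * Qs^-1
Qinv = np.linalg.inv(Q)
V1 = np.ones(N)                             # V(1) = [1, 1, 1, ...]

def tet(t):
    """s^^t by exponentiating the eigenvalues elementwise (principal branch)."""
    Mt = (Q * w.astype(complex) ** t) @ Qinv
    return float((Mt @ V1)[1].real)         # read off the x^1 component

print(tet(1.0))   # ~s
print(tet(1.5))   # compare with the table row at x = 1.5
print(tet(3.0))   # ~e, since s^^3 = e
```

Integer powers of M reproduce the tower values directly; the eigendecomposition is what makes the fractional iterates possible.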

Gottfried Helms

--

Gottfried Helms, Kassel

Aug 3, 2007, 11:47:31 AM

On Aug 3, 1:17 am, Gottfried Helms <he...@uni-kassel.de> wrote:

> On 02.08.2007 22:51, jayd...@gmail.com wrote:

> > However, try using a linear approximation in the interval [0,1], and

> > you'll notice that all the bows disappear, and that the equation now

> > looks "right" when graphed.

>

> > f(z=1.76322..., x) =

> > {

> > 1+(z-1)*frac(x) if x in [0,1],

> > power(z, f(z,x-1)) if n > 1,

> > log_z(f(z, x+1)) if n in (-2, 0)

> > }

>

> > What we have done is taken a good first order approximation of "the"

> > function x^^n, and made it a good *second* order approximation. The

> > first derivative is now continuous.

>

> > Try the graph for f(z,n)=z^^x, z=1.60107515392372877932706476774783545

> > (z^(z^z) = z^^3 = e, for the curious)

>

> I tried my matrix-approach as described in earlier threads,

> using the eigensystem-decomposition, and then exponentiating

> the eigenvalues.

>

> So to say, using your z as s in my notation:

> s = z = 1.60107515392372877932706476774783545

[snip]

Fascinating. I'll need to study your method more, to see what it's

doing. I'm surprised you're able to get an answer with a matrix of 28 dimensions, but perhaps it works differently than I'm picturing. I tried my hand at a matrix method for finding the Taylor series for e^^x on the interval [-1,0], and I hit a steep wall at degree 11.

I ended up having to solve a degree 10 equation, and manually tweak

the 11th coefficient until I closed in on the solution. The 11th

degree matrix was extremely unstable, but then again, I'm pretty sure

this is because the coefficients for degree 1 through 5 feed back into

the matrix. (It's at this point that I admit that I'm using Excel with

a multiprecision library called X-Numbers, not having access to

Mathematica or any other proper mathematical tool.)

What I really liked about seeing your solution is that it confirmed a

suspicion of mine that there is an inflection point very close to the

center of the unit interval I gave, which would explain why the

solution can be so closely approximated by a linear equation on that

interval. Using the values in your table at 1.4, 1.5, and 1.6, one can

verify that the secant approximation of the second derivative is very close to 0, and is about 18-20 times smaller than the second

derivative at 1.4 and 1.6, with a sign change occurring just before

1.5.

Actually, if f(z,a)=e, then I'm wondering how close to x = a-1.5 that inflection point occurs. Has anyone studied this particular

question?

Anyway, this would seem to confirm my analysis, at least for use as a

very quick approximation, especially for small z. But for more

precision, or for larger z, I'm not sure what direction to head, so

I'll start by studying your method.

Aug 4, 2007, 3:39:21 AM

My method is possibly not much more than reinventing an already well-running wheel found elsewhere, so it is likely of interest to check the literature.

Andrew Robbins (site: http://tetration.itgo.com )

pointed me to the keywords Bell matrix and Carleman matrix. Using these keywords in a search you'll find lots of articles dealing with "fractional iteration", which may then serve as a further key.

I haven't read much yet, but it seems that the eigen-analysis of this (and similar) operators is well studied in other contexts.

Also, I tend to the opinion that the eigenanalysis may be more of theoretical interest, in the sense of giving continuous tetration a solid base. The scalar methods described at Robbins' and, for instance, Galidakis' sites are possibly better suited for high numerical precision, but I haven't applied them yet.

On 03.08.2007 17:47, jay...@gmail.com wrote:

> verify that the secant approximation of the second derivative very

> close to 0, and is about 18-20 times smaller than the second

> derivative at 1.4 and 1.6, with a sign change occurring just before

> 1.5.

>

> Actually, if f(z,a)=e, then I'm wondering how closely to f(z,a-1.5)

> that inflection point occurs. Has anyone studied this particular

> question?

Maybe this inflection point is just e^(1/e) ~ 1.4447... ?

I'm currently putting my scattered results together into another, better-focused overview article; I'll send you a notification mail when I'm finished.

Gottfried

--

Gottfried Helms, Kassel

Aug 6, 2007, 7:59:19 PM

On Aug 4, 12:39 am, Gottfried Helms <he...@uni-kassel.de> wrote:

> My method is possibly not much more than reinventing an

> elsewhere and already well running wheel, so likely it

> is of interest to check the literature.

>

> Andrew Robbins (site: http://tetration.itgo.com)

> pointed me to the keywords Bell-matrix and Carleman-

> matrix. Using this keywords in a search you'll find

> lots of articles dealing with "fractional iteration",

> which may serve then as a further key.

> I didn't read much yet, but it seems, that the eigen-analysis

> of this (and similar) operators are well studied in other

> contexts.

>

> Also I' going to tend to the opinion, that the eigenanalysis

> may be more of theoretical interest, in the sense to give the

> continuous tetration a solid base.

> The scalar methods described at Robbins' and for instance

> Galidakis' sites are possibly more suited for higher

> numerical precision, but I didn't apply them yet.

>

> On 03.08.2007 17:47, jayd...@gmail.com wrote:

>

> > verify that the secant approximation of the second derivative very

> > close to 0, and is about 18-20 times smaller than the second

> > derivative at 1.4 and 1.6, with a sign change occurring just before

> > 1.5.

>

> > Actually, if f(z,a)=e, then I'm wondering how closely to f(z,a-1.5)

> > that inflection point occurs. Has anyone studied this particular

> > question?

>

> Maybe this inflection point is just e^(1/e)~ 1.446... ?

>

> I'm currently putting my scattered results together

> in another and better focused overview article,

> I'll send you a notification-mail, if I'm finished.

>

> Gottfried

>

> --

>

> Gottfried Helms, Kassel

I've made a breakthrough, at least for my own personal knowledge. I've

found an iterative function that's continuous and that should return

the tetration of base e (2.718...). I can only get about 200 digits of

accuracy, due to the math library I have, but in theory one should be

able to get any desired degree of precision, so long as one has a math

library up to the challenge.

Essentially, I have a function F(x,y,n), where x is the base of

tetration. Define F as follows:

F(x,y,n) =

{

a + y + x^(F(x,y-1,n-1)) if n>0,

0 if n=0

}

Now, x^^y = lim m,n -> infinity ln^m(F(x,y+m, m+n))

Here, n should be reasonably large, 20 or more. For 200 digits of

precision, n=30 or so works for x=e. Essentially, n is an escape

condition, once the desired degree of numerical precision is reached.

However, m will typically be very small. For y<0.4, m=3 works quite

well for x=e. Why? Because e^^3=3814279.1047602205922..., and the next

term is ~= e^^4, so any discrepancy will require at least log_10(e^^4)

~= 1656520 decimal digits to show up. I don't have that much numerical

precision in my calculations, but if you do, then by all means, use

m=4. To get good accuracy up to y=1.4, you'll need m=4 anyway. m=5

requires precision that probably exceeds what is calculable in this

universe, but in theory, precision will continue to increase as m goes

to infinity, and in the limit, the function will be precisely the

continuous tetration function. And at any rate, once you have a value

in the range [-1,0], you can just use iterated exponentiation to get

values in higher ranges. So m=3 should suffice for most intents and

purposes.

Okay, so now, the magic constant a. It's different for each base x,

and I haven't found a formula for it yet, but it's easy to determine

numerically to any desired degree of precision (for a given m).

For x=e, I've found a to be (backslashes mean continued on the next line):

0.014322263393121982517624482505047871744221168446\

20378831685782250473459332397481731747217314400277\

47590061365298794608719281309232472324215103926733\

00491056244191834658508783559764260025638841454243

What does e^^y look like, then?

e^^y ~=

ln(

ln(

ln(

3.01432226... + e^(

2.01432226... + e^(

1.01432226... + e^(

0.01432226... + e^(

-0.98567773... + e^(

-1.98567773... + e^(

-2.98567773... + e^(

-3.98567773... + e^(...)))))))))))

The first term is 3.014 + e^q, where q is approximately e^e. Notice that ln(3 + e^(e^e)) ~= e^e. The added constant 3.014 essentially drops out (it's what determines the precision). That's the beauty of it.

And because each successive exponent carries an additive constant that is smaller than the previous one by exactly 1, shifting the equation by 1 guarantees that, up to the discrepancy caused by ln(a + e^b) = b + epsilon when e^b is much larger than a, taking the logarithm of this function shifts it 1 unit to the right, exactly what we'd expect of the true tetration function.
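A double-precision Python sketch of the scheme (my naming; the 200-digit a is truncated to a double here, and with m=3 only y near [-1, 0.4] is trustworthy):

```python
import math

# the constant a for base e, truncated from the value quoted above
A = 0.0143222633931219825176244825050478717442211684462

def F(x, y, n):
    """F(x, y, n) = a + y + x^F(x, y-1, n-1), with F(x, y, 0) = 0."""
    v = 0.0
    for k in range(n - 1, -1, -1):   # build the tower from the inside out
        v = A + (y - k) + x ** v
    return v

def tet_e(y, m=3, n=30):
    """e^^y ~ ln^m(F(e, y+m, m+n)); keep y small at m = 3."""
    v = F(math.e, y + m, m + n)
    for _ in range(m):
        v = math.log(v)
    return v

print(tet_e(0.0))    # ~1.0  (e^^0)
print(tet_e(-1.0))   # ~0.0  (e^^-1)
```

Larger m needs precision far beyond a double, as noted above, so this sketch is only a proof of concept.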

I've been tinkering around, trying to find other values of a for other

bases x. For x=1.76322... (x^^2=e), a ~= -0.367538516 (close to -1/e,

but not exactly), and I had to go to m=4 to get a good result at y=0.

Anyway, I still have a lot of tinkering to do. For one thing, while

I'm confident this method converges for all x> e^(1/e), I haven't

tested values less than that. So at worst this is a partial solution.

And given the size of the numbers involved, it'd probably be best to

use high numerical precision math to derive a Taylor series for the

critical interval I described in my earlier posts in this thread, then

save the coefficients for that series and use them in future

calculations.

PS: Apologies for any math errors, I'll correct them if/as I find

them. In my excitement, I wanted to post this and get feedback. I've

seen other formulae that are close to this one (in an altered form

for the superlog), but instead of an arithmetic sequence of constants

added to each exponentiation, I've seen a fixed value such as 1. I

think the descending arithmetic sequence is the key to guaranteeing

that it behaves as tetration should.
