
Dec 13, 2006, 3:36:08 AM

Tetration (also hyperpower, power tower, super-exponentiation,

and hyper4) can be seen as the fourth operation in the chain addition,

multiplication, exponentiation, ...

I have proposed a way to extend tetration to the range of real numbers,

see

http://forums.wolfram.com/mathgroup/archive/2006/Dec/msg00207.html and

http://forums.wolfram.com/mathgroup/archive/2006/Dec/msg00133.html. I

have also made a Mathematica notebook, with an extended PowerTower

function, available for download on my web page

http://web.telia.com/~u31815170/Mathematica/.

I got into this business by solving the equations f(f(x)) = x^2 + 1 and

f(f(x)) = exp(x) for f(x). The value at zero for the

first equation is

f(0) = 0.6420945043908285

and for the second equation

f(0) = 0.498743364531670879182837375041686052396842401697695

with derivatives

f'(0) = 0.87682850111901739234161348908444926737095344685410,

f''(0) = 0.489578037438247862041903165991088781934758253721,

f'''(0) = 0.1222616742183662064640316453590024071635142693

I am grateful for any comments on this, and hope that someone can check

if my ideas are sound and relevant.

Thus Pi to the superpower e (or "Pi tetrated to e") is

1885451.906681809677772360465630708698

Math is fun

Ingolf Dahl

ingol...@telia.com

Dec 13, 2006, 7:26:09 AM

Because this topic comes up often, let me mention some key

points and references.

There are two problems that look alike but are quite different:

1. Extension of the hyperpower to fractional/real hyperexponents.

2. Fractional/real iteration, i.e. extension of natural-number

iterates to fractional/real iterates.

The problem with 1. is not finding a continuous/differentiable

extension but finding uniqueness criteria. There are some proposed

extensions on the internet already:

Ioannis Galidakis

http://ioannis.virtualcomposer2000.com/math/papers/Extensions.pdf

Robert Munafo

http://home.earthlink.net/~mrob/pub/math/ln-notes1.html#real-hyper4

The problem is that the usual compatibility with multiplication,

which holds for multiplication and exponentiation,

1x = x; (mn)x = m(nx)

x^1 = x; x^(mn) = (x^m)^n

no longer holds for tetration:

x^^1 = x; x^^(mn) != (x^^m)^^n

But this compatibility is what allowed us to uniquely define, for example,

x^(1/n):

(x^(1/n))^n = x^(n/n) = x

That is why x^(1/n) is the inverse function of x^n, and further

x^(m/n) = (x^m)^(1/n) = (x^(1/n))^m. All of this breaks down for

tetration.
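As a quick illustration in Python (a sketch; the helper `tetrate` is my own name for integer tetration, right-associated as usual):

```python
def tetrate(x, n):
    # x ^^ n: a right-associated tower of n copies of x, for positive integer n
    result = x
    for _ in range(n - 1):
        result = x ** result
    return result

# exponentiation is compatible with multiplication of exponents...
assert 2 ** (2 * 2) == (2 ** 2) ** 2
# ...but tetration is not: 2^^(2*2) = 65536, while (2^^2)^^2 = 4^4 = 256
assert tetrate(2, 2 * 2) == 65536
assert tetrate(tetrate(2, 2), 2) == 256
```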

We see that the problem is the non-associativity of the power, which

would also allow other bracketings in the definition of tetration. I

therefore developed a domain that can deal with different bracketings

and non-associativity, and in which indeed

(ab) ** x = a ** ( b ** x ) for all higher operations **.

See http://math.eretrandre.org/tree-aoc/pdf/latest.pdf

As a byproduct of this research there emerged a rather intricate

recursive structure, "fractional trees", which is a (non-associative)

generalization of the fractional numbers, and for which I wrote a little

visualization and experimentation web application:

http://math.eretrandre.org/cgi-bin/ftc/ftc.pl

Now to 2., fractional iteration.

Here again the main problem is uniqueness.

For the fractional iteration of e^x one should at least have read the

papers of Szekeres on this topic

G. Szekeres, Fractional iteration of exponentially growing functions,

1961

G. Szekeres, Abel's equation and regular growth: Variations ..., 1998

To me the results look rather unpromising for finding "the" fractional

iteration of e^x.

For fractional iteration of real-analytic functions:

G. Szekeres, Regular iteration of real and complex functions, 1958

The topic of analytic iteration (i.e. fractional/real/complex iteration

in the set of functions developable as power series) seems to me the most

interesting field regarding uniqueness of the solutions.

The first interesting thing is that in the ring of formal power series

with a_0 = 0 and a_1 = 1 there is only one solution to the equation

f o f o ... o f = F. There is even a little-known explicit formula describing

the fractional/real/complex iterates:

(f^os)_n = sum_{k=0}^{n-1} (-1)^{n-1-k} (s over k) (s-k-1 over n-k-1) (f^ok)_n

where f^os denotes the s-fold iterate of f (s complex), f^ok the k-fold

iterate of f, and _n the nth coefficient of the power series.
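A minimal sketch of this interpolation formula in Python, using exact rational arithmetic; the test series f(x) = x + x^2 and all function names are my own choices, not from the papers:

```python
from fractions import Fraction
from math import factorial

N = 8  # truncation order of the power series

def mul(p, q):
    # product of two power series given as coefficient lists, truncated at N
    r = [Fraction(0)] * (N + 1)
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                if i + j > N:
                    break
                r[i + j] += pi * qj
    return r

def compose(a, b):
    # composition a(b(x)) truncated at N; assumes b[0] = 0
    r = [Fraction(0)] * (N + 1)
    bp = [Fraction(1)] + [Fraction(0)] * N  # running powers b^m
    for m in range(N + 1):
        for i in range(N + 1):
            r[i] += a[m] * bp[i]
        bp = mul(bp, b)
    return r

def binom(s, k):
    # generalized binomial coefficient (s over k) for rational s
    out = Fraction(1)
    for j in range(k):
        out *= s - j
    return out / factorial(k)

def frac_iterate(f, s):
    # coefficients of f^os from the interpolation formula; needs f[0]=0, f[1]=1
    iters = [[Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 1)]  # f^o0 = id
    for _ in range(N - 1):
        iters.append(compose(f, iters[-1]))  # f^o1, f^o2, ...
    g = [Fraction(0)] * (N + 1)
    g[1] = Fraction(1)
    for n in range(2, N + 1):
        g[n] = sum((-1) ** (n - 1 - k) * binom(s, k)
                   * binom(s - k - 1, n - k - 1) * iters[k][n]
                   for k in range(n))
    return g

f = [Fraction(0), Fraction(1), Fraction(1)] + [Fraction(0)] * (N - 2)  # x + x^2
half = frac_iterate(f, Fraction(1, 2))
assert half[2] == Fraction(1, 2) and half[3] == Fraction(-1, 4)
assert compose(half, half) == f  # the half-iterate composes back to f
```

Since the coefficients of f^os are polynomials in s interpolating the integer iterates, composing the computed half-iterate with itself reproduces f coefficient by coefficient, up to the truncation order.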

And there exists an "iterational logarithm" L. I.e. L maps a

powerseries to a powerseries such that L(f^or) = r(Lf).

For all this see:

Jabotinsky & Erdős, On analytic iteration, 1961

Jabotinsky, Analytic iteration, 1963

It can happen, though, that the corresponding iterates do not

converge. For example, the iterates of e^x - 1 converge only for integer

(composition) exponents. However, Szekeres showed in the above-mentioned

paper that for real-coefficient power series, a formal iterate at 0 can

always be uniquely approximated by an analytic function iterate defined on

x > 0.

Dec 14, 2006, 7:46:16 AM

Thank you a lot for your engagement and elaborate answer, I really

love it. Nothing is worse than indifference.

In Mathematica notation, which I find convenient here:

The PowerTower (tetration) function for integers is defined this way in

Mathematica

PowerTower[(y_)?NumericQ, (n_Integer)?Positive] := Nest[y^#1 & , y, n - 1]

We then easily find that

PowerTower[y, n] == y^PowerTower[y, n - 1]

Thus

Subscript[g, y][x_] := y^x

is a "step-up" function:

PowerTower[y, n] = Subscript[g, y][PowerTower[y, n - 1]]

I have used these step-up functions for my algorithm. If we start with

the PowerTower function for some given y, we can note that only some

isolated point values of the step-up function are used. We could thus define an

infinite number of step-up functions that do the same job, as long as

the function values at these isolated points are the same. And for each

such function, we might find a way of extending the hyperexponent to

real values. So we remove a lot of ambiguity if we associate PowerTower

for real hyperexponents both with the PowerTower for integer

hyperexponents and with a specified step-up function. I find it quite natural to

demand that we choose y^x as "the" step-up function of interest.

If we find a function f[x] for which f[f[x]] = g[x] when g[x] =

Subscript[g, y][x], we can see that f[x] can act as a "halfway" step-up

function.

Then, one thing that I have used a lot is the following observation:

Suppose that we have obtained a solution f[x] to the equation f[f[x]]

== g[x]. Let h[x] be a well-behaved function with the inverse hinv[x].

Then f1[x] = h[f[hinv[x]]] should be a solution to the equation

f1[f1[x]] == h[g[hinv[x]]].
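This observation is easy to check numerically. A sketch with a hypothetical example where everything is known in closed form: f(x) = x + 1 satisfies f(f(x)) = x + 2, and conjugating by h = exp turns it into a half-iterate of x -> e^2 x:

```python
import math

def f(x):
    # a known half-iterate: f(f(x)) = x + 2, i.e. g(x) = x + 2
    return x + 1

h, hinv = math.exp, math.log

def f1(x):
    # the conjugated function h(f(hinv(x))) = exp(log(x) + 1) = e * x
    return h(f(hinv(x)))

x = 3.7
# f1 solves f1(f1(x)) == h(g(hinv(x))) = exp(log(x) + 2) = e^2 * x
assert math.isclose(f1(f1(x)), math.exp(2) * x)
assert math.isclose(f1(x), math.e * x)
```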

Another thing I have used is the assumption that if f1 is the solution

for g(x) = 2 Sinh[x] and f2 is the solution for g(x) = Exp[x], we can

with very good precision approximate f2 with f1 if x is large enough

(say x > 20 or so). It seems reasonable that the relative error we make

in that approximation should have an upper bound of Exp[-2x]. The function

f1 can be found by using the series expansion of 2 Sinh[x] for small x.
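The claimed bound can be verified directly: since 2 Sinh[x] = Exp[x] - Exp[-x], the relative error of the approximation is exactly Exp[-2x]. A quick numerical check:

```python
import math

x = 5.0
rel_err = (math.exp(x) - 2 * math.sinh(x)) / math.exp(x)
# 2*sinh(x) = exp(x) - exp(-x), so the relative error is exp(-x)/exp(x)
assert math.isclose(rel_err, math.exp(-2 * x), rel_tol=1e-8)
```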

With these assumptions there is not very much ambiguity left in the

definition of the halfway function for exp[x]. If there are some other

candidates, obtained by some other procedure, one might check the

asymptotic behaviour.

Considering your first point, you imply that (in Mathematica notation)

PowerTower[PowerTower[2.17, 1./3], 3]

should be different from 2.17. Let us try with my algorithm: evaluating

PowerTower[PowerTower[2.17, 1./3], 3]

returns 1.51434, so there is agreement between your view and the

algorithm I have made. When I posted a strategy for solving the

equation f[f[x]] == exp[x] on December 1, I did not even know the word

tetration, and I have never been tempted to fall into that trap.

But we can instead do the following to get back to 2.17, using the

two-third step-up function when Exp[x Log[2.17]] is the step-up

function for a unit step:

fexpk[PowerTower[N[217/100, 100], N[1/3, 100]], N[Log[217/100], 100], N[2/3, 100]]

(returns 2.1700000000000000000000000000000000000000000)

and we might do it in two equal steps

fexpk[fexpk[PowerTower[N[217/100, 100], N[1/3, 100]], N[Log[217/100], 100], N[1/3, 100]], N[Log[217/100], 100], N[1/3, 100]]

(returns 2.1700000000000000000000000000000000000000000)

or two unequal steps (here with some loss in precision; I have to check

why):

fexpk[fexpk[PowerTower[N[217/100, 100], N[1/3, 100]], N[Log[217/100], 100], N[1/2, 100]], N[Log[217/100], 100], N[1/6, 100]]

(returns 2.170000000000000000000)

(Note that my PowerTower routines presently use the real value even for

rational arguments.)

Such experiments have convinced me that there is some sense in what I

am doing. If you have Mathematica available, try to play with my

routines yourself. I think that my egg still is standing.

The formula describing the fractional/real/complex iterates seems very

handy. I will try to implement it. Thanks a lot.

Best regards

Ingolf Dahl

Dec 14, 2006, 2:23:09 PM

First of all it would make my life much easier if you were *not* using

Mathematica notation, but normal mathematical/semi-TeX notation. I can

program a bit Maple but am not especially inclined to learn

Mathematica.

So let me rephrase what I understood despite the notation. We have the

power-tower, which I will denote as ^^ in ascii notation. (Btw. we can

discuss this topic also on my forum http://math.eretrandre.org/mybb/

which supports LaTeX, I would be glad to welcome you there.)

The hyperpower/tetration is usually defined as

x^^1 = x ; x^^(n+1) = x^(x^^n)

Now you say if we take the step function g_y(x) = y^x then we can

represent

y^....y^x = g_y o g_y o ... o g_y (x)

x^^{n+1} = (g_x)^{n}(x) (where g^n means the n-times iteration of g)

And hence if we can appropriately define (g_y)^r then we have a

definition of ^^, i.e. x^^r = (g_x)^{r-1}(x).

I guess your Mathematica function fexpk[x,y,k] is just (g_y)^k(x);

the function x -> y^x iterated k times. Now you demonstrate

(2.17 ^^ 1/3) ^^ 3 != 2.17

but

(g_2.17)^{2/3} (2.17 ^^ 1/3) = 2.17

(g_2.17)^{1/2}o(g_2.17)^{1/6}(2.17 ^^ 1/3) = 2.17

But this is no wonder; this is how fractional iteration is defined,

i.e. the fractional iterates f^r of f must satisfy the so-called

translation equation:

f^0(x) = x and f^{r+s}(x) = f^r(f^s(x))

In your case you merely exemplify that your functions fexpk indeed

form a set of iterates:

(g_2.17)^{2/3} (2.17 ^^ 1/3) = (g_2.17)^{2/3}((g_2.17)^{1/3-1}(2.17)) =

g_2.17^{2/3 + 1/3 - 1}(2.17) = g_2.17^0 (2.17) = 2.17
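One standard way to realize such a set of iterates numerically also lets the translation equation be checked. This is a sketch of regular iteration at the attracting fixed point, valid for 1 < y < e^(1/e); it is not necessarily how fexpk works, and the function name and iteration depth m are my own choices:

```python
import math

def g_iter(y, t, x, m=40):
    # t-th fractional iterate of g_y(x) = y**x for 1 < y < e**(1/e), by
    # regular iteration at the attracting fixed point a = y**a: push x
    # toward a with g_y, scale the deviation by lam**t in the linearized
    # coordinate, then pull back with the inverse of g_y.
    a = 1.0
    for _ in range(500):
        a = y ** a                      # attracting fixed point a = y**a
    lam = math.log(y) * a               # multiplier g_y'(a)
    z = x
    for _ in range(m):
        z = y ** z                      # z = g_y^m(x), now close to a
    z = a + (z - a) * lam ** t          # fractional step in linearized form
    for _ in range(m):
        z = math.log(z) / math.log(y)   # apply g_y^(-1) m times
    return z

y, x = math.sqrt(2), 1.0
# translation equation f^r(f^s(x)) = f^(r+s)(x), here with r + s = 1:
assert abs(g_iter(y, 0.5, g_iter(y, 0.5, x)) - y ** x) < 1e-5
assert abs(g_iter(y, 1/3, g_iter(y, 2/3, x)) - y ** x) < 1e-5
```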

Unfortunately I am still not sure how you compute fexpk (due to

Mathematica notation and my laziness). So it would be nice to present

it in "normal" notation and also to explain why there is not much

ambiguity.

For example the "normal" method to construct exp^{1/2} is to imagine

any function f on the interval (0,1] which lies between x->x and

x->exp(x) and is continuous and strictly increasing and exp(f(0))=f(1).

From this start you can then define f on (1,e] by f(x):=exp(f(ln(x)))

and so on you can define f on (exp^{n}(0),exp^{n+1}(0)] and in a

similar way in the other direction.

So you can start with nearly any function f on (0,1] (or even on any

interval (x,exp(x)]) and extend it to a continuous strictly increasing

half-iterate of exp on (-oo,+oo). Though of course some functions

oscillate a bit stronger than others (draw it on paper!).

It becomes a bit more difficult if you demand that the fractional

iterates be analytic (as exp is). But even then fractional iterates

are not unique (as you can read in Szekeres' latest paper; the formula

for unique analytic iteration I gave in my previous post is only for

functions with a fixed point at 0 (a_0=0) and a_1=1), and there is no

well-established criterion for which set of iterates should be considered

the best (for example, what it should mean to minimize that oscillation).

Dec 18, 2006, 7:27:04 AM

As was just pointed out to me, I made quite a blunder in the

arbitrary construction of exp^{1/2}(x). You don't start by defining f on

(0,1]; you start by defining f on (0,f(0)] with f(f(0))=exp(0)=1. This

determines f on (f(0),1], (1,f(1)], (f(1),e], (e,f(e)], (f(e),e^e], ...

etc.
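A sketch of the corrected construction, under the arbitrary (and hypothetical) choice f(0) = 1/2 with f linear on the base segment [0, 1/2], so that f(f(0)) = 1 holds; the extension rule f(x) = exp(f(ln x)) then produces a continuous half-iterate of exp for x > 0:

```python
import math

def f(x):
    # a continuous half-iterate of exp for x >= 0, grown from the arbitrary
    # base choice f(0) = 1/2 with f linear on the base segment [0, 1/2]
    if x <= 0.5:
        return x + 0.5                   # base: f(0) = 1/2, f(1/2) = 1
    if x <= 1.0:
        return math.exp(x - 0.5)         # f(x) = exp(f_base^(-1)(x)) on (1/2, 1]
    return math.exp(f(math.log(x)))      # extension rule for x > 1

for x in [0.0, 0.3, 0.7, 1.0, 2.0, 3.0]:
    assert math.isclose(f(f(x)), math.exp(x), rel_tol=1e-10)
```

Any other increasing base segment with the same endpoint values would give another valid, but different, half-iterate; that is precisely the non-uniqueness discussed in this thread.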

Dec 18, 2006, 8:15:00 AM

A letter very similar to this one has been sent to MathGroup.

I think I have to back off half a step from my claims. But I do not

regret what I have written: I think it is a really good math lesson for

me, and maybe for others. And I think the conclusions are quite

interesting: it is possible to extend PowerTower to real exponents in a

consistent way, BUT the solution will not be unique. And that is not due to

our ignorance; it is a mathematical principle. The case is similar

to 0^0, but without a clear "best choice". So we are free to let the UN

pass a law and decide which solution to choose. And constants such as

Pi^^E will become somewhat fuzzy, and the value I have published is

possible but not necessary. It is not a very common situation in

mathematics that constants have intrinsic uncertainty, even if we are

used to the fact that the primitive function of an arbitrary function

always contains a constant of integration.

And at the end of this letter I think I nevertheless see the opening of the

tunnel...

What is the problem? One thing that I had assumed was that if f1 is the

solution to the equation f[f[x]] == g[x] for g[x] = 2*Sinh[x] and f2 is

the solution for g[x] = E^x, we could with very good precision

approximate f2 with f1 if x is large enough (say x > 20 or so). It

seems reasonable that the relative error we make in that approximation

should have an upper bound E^(-2*x). The function f1 can be found by

using the series expansion of 2*Sinh[x] for small x, since x=0 is a

fixed point of 2*Sinh[x].

But the mathematics is not so kind and well-behaved. If we compare f1

with the corresponding solution f3 for g[x] = Exp[x]-1, we see that

they too should have the same asymptotic behaviour. Both are derived

from the analytical expression at small x, but the difference

between f1 and f3 will nevertheless show oscillations when x gets large. When we

pick out the analytic expressions for the solutions f1 and f3, these

solutions are special and unique only at the fixed point; a bit

away from the fixed point the mathematics does not discern between that

special solution and other solutions (which might oscillate near the

fixed point, perhaps similarly to x*Sin[1/x]). I have obtained a solution

f2 for g[x] = E^x by identifying the asymptotic behaviour with that of

f1, but I could just as well have used f3, and would then obtain another

good solution. f2 is thus not a unique solution, even if it seems to

be a nice one.

This is also illustrated if we solve the equation f[f[x]] == g[x] for

g[x] = Exp[k*x] for small k (0 < k < 1/E). There g[x] has two fixed

points, and the smaller one can be used to find the solution for x < E,

and the larger fixed point can be used for x > E, by identification of

Taylor expansion coefficients. It is thus not necessary to compare

asymptotes in this case. If we then calculate the function f, we will

find that the curves do not connect at x = E. There is a jump of size

approx. 0.003 for k = 0.01. I plan to include a curve describing this in

the next version of the PowerTower notebook on my web site. The

PowerTower function is not affected by this jump, since this function

only samples values on the left side of the smaller fixed point.

Thus we can anyway use this fixed point to define PowerTower for real

exponents for 1 < x <= Exp[1/E].

For larger values of k, there are thus several solutions f(x)

available, where it is difficult to say that one is "unique", since

there are no fixed points for Exp[k * x]. (Maybe one could find a

solution that is the "best", according to some criterion.) Then this

spills over to PowerTower too. If x > Exp[1/E], the range of

PowerTower[x, n] for real values of n >= -2 is from -Infinity up to

(approaching) +Infinity. Then if we had PowerTower defined in

some unique way, the relation

PowerTower[y, n] = f[PowerTower[y, n - 1/2]]

would define a unique solution f(x) to f[f[x]] == exp[k*x] with k =

Log[y], and thus we get a contradiction if we state that there is no

unique solution to this equation.

All this does not seem to be common knowledge today (e.g. it is not

clear to me from Wikipedia), but much of the behaviour follows

directly from the description by G. Szekeres in "Abel's equation and

regular growth: Variations of a theme by Abel", see

www.emis.de/journals/EM/restricted/7/7.2/szekeres.ps. I have only just found

it myself (via a tip from the previous poster) and have not yet understood

all the details there.

Anyway, even if the solution f is not unique, it could be useful to

have some nice solutions from the solution set available. Then we might

at least be able to choose "the best" or "the best known so far". I

have solved the equation by letting f[x] ride on the asymptotic

behaviour of some "horse function", with fixed points. (I think that

this is my contribution to this field of mathematics.) In the paper by

Szekeres it is indicated that Exp[k*x] - 1 should display "regular

growth" for k > 1. If that is true, it could be the ideal horse

function. Another similar alternative, which also might have regular

growth behaviour, is Exp[k*x] - (1 + Log[k])/k, which becomes identical

to Exp[k*x] at the k value 1/E. So just now it appears to me as the

case is almost solved. Are there more small devils hiding in the bush?

I have received some nice letters and posts commenting on my proposals, but

have not yet had time to address all comments and answer all questions;

I am working on it.

Best regards

Ingolf Dahl

Dec 18, 2006, 9:45:01 AM

Just a small addendum:

If g(x) = exp(x) -1

and

h(x) = x/k - Log[k]/k

with the inverse function hinv, then

h(g(hinv(x))) = exp[k*x] - (1 + Log(k))/k

This implies that if f(x) is a solution to f(f(x)) = exp(x) -1, the

function h(f(hinv(x))) is a solution to f(f(x)) = exp[k*x] - (1 +

Log(k))/k.

If the function f has regular growth, we could thus expect that

h(f(hinv(x))) also has regular growth. Thus exp[k*x] - (1 + Log(k))/k

can be used as a "horse function" (according to my previous post)

for exp[k*x] with regular growth.

Wow!

Ingolf Dahl

Dec 18, 2006, 11:04:51 AM

Ingolf Dahl wrote:

> interesting: It is possible to extend PowerTower to real exponents in a

> consistent way, BUT the solution will not be unique.

The question is even what "consistent" should mean. For fractional

iteration one has at least the condition of the translation equation

F^0=id, F^{s+t}= F^s o F^t

while for the extension of the power tower I don't see any algebraic rules.

What one could demand is that x^^t should be at least continuous in x

and t, or stronger, infinitely many times differentiable (which Galidakis

showed), or even stronger, analytic.

If we use analytic iterates of g_y(x):=y^x for the extension of the

power tower, i.e. x^^t = (g_x)^{t-1}(x), then it looks as if x^^t is

indeed analytic in x and in t.

But extending the power tower by analytic iterates (which are not themselves

unique) is just one way. If I remember correctly, Galidakis followed

another path.

> But the mathematics is not so kind and well-behaved. If we compare f1

> with the corresponding solution f3 for g[x] = Exp[x]-1, we see that

> they also should have the same asymptotic behaviour. Both are derived

> from the analytical expression at small x, but anyway the difference

> between f1 and f3 will show oscillations when x gets large. When we

> pick out the analytic expression for the solution f1 and f3,

Note though that all formal fractional iterates of exp-1 (i.e. the formal

solutions for the power series, which are even unique) do not converge at the

fixed point 0, except the trivial iterates (exp-1)^k, k integer.

> All this does not seem to be common knowledge today (e.g. it is not

> clear for me from Wikipedia),

Maybe I will start to write an article on wikipedia about analytic

iteration.

Dec 21, 2006, 7:44:50 AM

Now the bumblebee is flying...

I have uploaded a new version of my notebook PowerTower.nb.

Since the last version, I have changed the "horse function" to (Exp[x] -

1), to avoid oscillating functions when x -> +Infinity. This means

slightly changed values of the constants I have calculated. I have also

included some theory and some graphs in this file, and have structured

it better.

Thus the new value of Pi to the hyper power e (or "Pi tetrated to e")

is

1921616.48318907465

Now I want to comment on the posts by bo198214:

The reference to Szekeres has been of great help, thanks again. But the

books by Szekeres and by Jabotinsky are hard to find.

"For example the iterates of e^x-1 only converge for integer

(composition) exponents."

A reference for this? My numerical tests of the convergence indicate a

circle of convergence with radius around pi/2, which seems reasonable due to

the relationship with sin(x), where the derivative becomes negative at

pi/2. I need good approximations for x < 0.07 or so.

"First of all it would make my life much easier if you were *not* using

Mathematica notation, but normal mathematical/semi-TeX notation. I can

program a bit Maple but am not especially inclined to learn

Mathematica. "

And I am not inclined to learn TeX. OK, I could do it, but maybe not use

it for the first time in a context such as this, since the probability of

errors would approach one. And TeX notation was designed for making nice

mathematical notation of traditional kind (without really caring about

the mathematical content), while the Mathematica notation was designed

for giving unambiguous specification of mathematical relations. And a

simple guess is that you should gain more by learning Mathematica than

I should gain by learning to master TeX. Think about it, if Mathematica

can help a lazy bum like me produce so much mathematical trash in three

weeks, think of what you could produce...

"I guess your Mathematica function fexpk[x,y,k] is just (g_y)^k(x);

the function x -> y^x iterated k times."

It is the function x -> exp[x y] = exp(y)^x iterated k times (0<= k <

1) (y is a constant parameter).

"Unfortunately I am still not sure how you compute fexpk (due to

Mathematica notation and my laziness). So it would be nice to present

it in "normal" notation and also to explain why there is not much

ambiguity.

For example the "normal" method to construct exp^{1/2} is to imagine

any function f on the interval (0,1] which lies between x->x and

x->exp(x) and is continuous and strictly increasing and exp(f(0))=f(1).

From this start you can then define f on (1,e] by f(x):=exp(f(ln(x)))

and so on you can define f on (exp^{n}(0),exp^{n+1}(0)] and in a

similar way in the other direction.

So you can start with nearly any function f on (0,1] (or even on any

interval (x,exp(x)]) and extend it to a continuous strictly increasing

half-iterate of exp on (-oo,+oo). Though of course some functions

oscillate a bit stronger than others (draw it on paper!). "

I am not doing it exactly like that, and I believe that this is my

contribution to this field. It is well known how to get the principal

solution near fixed points, and in those few cases I have tested, the

series expansion works nicely near the fixed point. In some cases an

expansion in x^(Sqrt(2)) might be better. But when there is no fixed

point available, my idea is to look at very large x. If I can define

the function there, I could use this to extend the function to any x,

exactly as you propose, with the exception that I might have to iterate

with the inverse function. At large x, I might approximate f(x) with

the corresponding function for exp(x) - 1, or some other function with

the same asymptotic behaviour, with an error that I can estimate. I

have denoted such a function a "horse function", since I let my

solution ride on it. Then of course the rider will have to follow all

oscillations of the horse.

To calculate the iterate for any k, I write k in base two. Then I

take successive half-iterates a sufficient number of times (maybe 188),

and apply an iterate where there is a "1" in the corresponding place in

the binary expansion. Nice and fast. But I cannot do that in TeX.
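The binary-digit scheme can be sketched with a toy function whose fractional iterates are known in closed form (f(x) = 2x, so f^t(x) = 2^t x; the successive half-iterates are then known exactly, which lets the bookkeeping be tested):

```python
import math

# toy model: f(x) = 2x has exact fractional iterates f^t(x) = 2**t * x, so
# the successive half-iterates f^(1/2), f^(1/4), ... are known in closed form
halves = [lambda x, i=i: 2 ** (2.0 ** -(i + 1)) * x for i in range(20)]

def iterate_binary(x, k_bits):
    # apply f^k for k = sum over i of k_bits[i] * 2**-(i+1):
    # one half-iterate per "1" in the binary expansion of k
    for i, bit in enumerate(k_bits):
        if bit:
            x = halves[i](x)
    return x

# k = 0.1101 in base two = 1/2 + 1/4 + 1/16 = 0.8125
assert math.isclose(iterate_binary(3.0, [1, 1, 0, 1]), 2 ** 0.8125 * 3.0)
```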

"there is no well-established criterion which set of iterates should be

considered the best (for example what it should mean to minimize that

oscillation)."

How about the criterion by Szekeres? He seems quite sure that exp(x) -

1 displays regular growth. I have not tested yet if my (new) solutions

have inherited this, but I find it reasonable. Anyway, my first

solution involved 2 Sinh(x), which seems to give violent oscillations,

but anyway gave function values at low x very similar to those I obtain

now. (It is hard to see any differences in the graphs; I have to look at

the decimals.) If there are several solutions available with regular

growth (I doubt that), the differences for small x are probably even

smaller. And then it might be useful to have one solution available

anyway.

Best regards

Ingolf Dahl

Download file PowerTower.nb from

http://web.telia.com/~u31815170/Mathematica/

Dec 22, 2006, 8:51:59 AM

Ingolf Dahl wrote:

> The reference to Szekeres has been of great help, thanks again. But the

> books by Szekeres and by Jabotinsky are hard to find.

Oh, these aren't books, just papers.

Paul Erdős and Eri Jabotinsky, On analytic iteration, J. Analyse

Math. 8 (1960/1961), 361-376. MR0125943 (23 #A3240)

Eri Jabotinsky, Analytic iteration, Trans. Amer. Math. Soc. 108 (1963),

457-477. MR0155971 (27 #5904)

G. Szekeres, Regular iteration of real and complex functions, Acta

Math. 100 (1958), 203-258. MR0107016 (21 #5744)

G. Szekeres, Fractional iteration of exponentially growing functions,

J. Austral. Math. Soc. 2 (1961/1962), 301-320. MR0141905 (25 #5302)

> "For example the iterates of e^x-1 only converge for integer

> (composition) exponents."

Irvine Noel Baker, Zusammensetzungen ganzer Funktionen, Math. Z. 69

(1958), 121-163 (German). MR0097532 (20 #4000)

> "First of all it would make my life much easier if you were *not* using

> Mathematica notation, but normal mathematical/semi-TeX notation. I can

> program a bit Maple but am not especially inclined to learn

> Mathematica. "

>

> And I am not inclined to learn TeX. OK, I can do it, but maybe not as

> first task use it in such contexts as this, since the probability of

> errors will approach one. And TeX notation was designed for making nice

> mathematical notation of traditional kind (without really caring about

> the mathematical content), while the Mathematica notation was designed

> for giving unambiguous specification of mathematical relations. And a

> simple guess is that you should gain more by learning Mathematica than

> I should gain by learning to master TeX. Think about it, if Mathematica

> can help a lazy bum like me produce so much mathematical trash in three

> weeks, think of what you could produce...

>

That all wasn't my point at all. My points:

* Don't use a proprietary language to describe normal mathematical

content. More people understand standard math notation than TeX

notation, and more understand TeX notation than Mathematica notation.

* Standard ASCII math notation is: functions are written f(x),

subscripts a_i, and superscripts a^i.

* I already speak Maple (and don't you think Maple can produce nice

results too?), so why don't you explain it to me in Maple? ;)

> But when there is no fixed

> point available, my idea is to look at very large x.

OK, that's why you used exp(x) - exp(-x) as an approximation of exp(x),

with the benefit that it has a fixed point at 0 with a series

expansion. And one can uniquely take the fractional (or just half)

iterate of a series expansion.

And if one can compute the half iterate, then one can easily compute the

value at any binary fraction.

But then I wonder a bit: does exp(x) - 1 not approximate exp(x) as

well as exp(x) - exp(-x) does?

> "there is no well-established criterion which set of iterates should be

> considered the best (for example what is should mean to minimize that

> oscillation). "

>

> How about the criterion by Szekeres? He seems quite sure that exp(x) -

> 1 displays regular growth.

Sorry, but that paper of Szekeres is full of conjectures and numerical

"evidence". If some of them are proved or disproved, we can speak

about it again.

I personally doubt that there is *the* fractional iteration of exp; I

guess it is more a question of personal aesthetics (and of the human

psyche's need for an ideal ;) )

Did you, by the way, already try to move the fixed point at infinity to 0 by

h(x) = 1/x, i.e. to consider h^-1 o exp o h? I mean, we generally have

the rule (g^-1 o f o g)^(m/n) = g^-1 o f^(m/n) o g, which you can

easily verify by taking the compositional power n. In this

particular case (h^-1 o exp o h)(x) = exp(-1/x), which has a fixed point

at 0 (though no series expansion there). So if you had a half-iterate f

of exp(-1/x) = f(f(x)), then 1/f(1/x) would be a half-iterate of exp.
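The conjugation here is easy to check numerically; with h(x) = 1/x (which is its own inverse), phi = h o exp o h is exp(-1/x), and conjugation commutes with composition:

```python
import math

h = lambda x: 1.0 / x                 # h is its own inverse: h(h(x)) = x

def phi(x):
    # (h^-1 o exp o h)(x) = 1/exp(1/x) = exp(-1/x), fixed point at 0+
    return math.exp(-1.0 / x)

x = 0.4
# conjugation commutes with composition: phi o phi = h o exp o exp o h
assert math.isclose(phi(phi(x)), h(math.exp(math.exp(h(x)))))
```

So any half-iterate f of phi would indeed give 1/f(1/x) as a half-iterate of exp, by the same identity.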
