fractional iteration of functions


qma...@yahoo.com

Jan 26, 2005, 11:30:04 AM
I have recently discovered some results on how to define fractional
iterations of arbitrary functions. I will not go into details beyond
describing what a fractional iteration is. Let f(x) be a function, and
use the notation

f_2(x) = f(f(x))
f_3(x) = f(f(f(x)))
etc.
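For integer subscripts this notation is just repeated composition; a minimal Python sketch (the helper name `iterate` is my own, for illustration):

```python
def iterate(f, n):
    """Return f_n, i.e. f composed with itself n times (n a non-negative integer)."""
    def f_n(x):
        for _ in range(n):
            x = f(x)
        return x
    return f_n

f = lambda x: x * x + 1
assert iterate(f, 2)(1) == 5    # f(f(1)) = f(2) = 5
assert iterate(f, 3)(1) == 26   # f(f(f(1))) = f(5) = 26
```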

The question is how to define f_n(x) when n is not an integer. I have
seen the fractional Fourier transform, and it gives a good idea of what
I am talking about. Has there been any work on this already? I am
considering writing a paper but need references.

-- NPC

Peter Flor

Jan 27, 2005, 9:00:08 AM
qma...@yahoo.com wrote:
>I have discovered some results on how to define fractional iterations
>of arbitrary functions recently. I will not go into details besides
>describing what a fractional iteration is. f(x) is a function, use the
>notation
>
>f_2(x) = f(f(x))
>f_3(x) = f(f(f(x)))
>etc.
>
>The question is how to define f_n(x) when n is not integer.
>
>-- NPC
This problem has been treated in a large body of literature for at
least forty years, under the heading "Iteration Theory". It is
documented in many books on functional equations, such as
KUCZMA-CHOCZEWSKI-GER: "Iterative Functional Equations" (see the notes
on the "translation equation" and on "flows"). Gy. TARGONSKI wrote
a monograph, "Topics in Iteration Theory" (Vandenhoeck & Ruprecht,
1981), containing more than two hundred references. So starting from
scratch might not be advisable.
Best regards, Peter Flor.

G. A. Edgar

Jan 27, 2005, 9:00:09 AM

Perhaps look for articles posted by Alain Verghote during the past year
in this newsgroup and in sci.math.

For a reference, see the last chapter of the book:
Lectures on functional equations and their applications, by J. Aczel

Daniel Geisler

Jan 27, 2005, 9:00:08 AM
I have links to information on continuous and fractional iteration at
http://www.tetration.org/Dynamics/index.html if that is what you are
asking for. A number of schemes have been proposed for continuous and
fractional iteration, but they are mostly algorithmic in nature. Someone
I respect suggested that a number of scientists realize many such
algorithms can exist; the issue is whether someone can provide a
rigorous axiomatic basis that gives these studies a solid foundation
and a deeper mathematical understanding of the subject.
Daniel Geisler

qma...@yahoo.com

Jan 28, 2005, 9:00:05 AM

> A number of schemes have been proposed for continuous and
> fractional iteration, but they are mostly algorithmic in nature. Someone
> I respect suggested that a number of scientists realize many such
> algorithms can exist; the issue is whether someone can provide a
> rigorous axiomatic basis for such studies that provides a solid
> foundation and provides a deeper mathematical understanding of the subject.
> Daniel Geisler

Wow, first, thanks for all the responses. I now have more than enough
references to investigate. I have gotten a good response on this
question. Second, I would like to respond to Mr. Geisler's last comment
about an axiomatic basis for function iteration. I think that will be
the goal of the paper I write. Well, at least an axiomatic basis for
well-behaved functions over the complex plane.

It seems there are some very serious open questions relating to this
subject. One question I find personally very interesting relates to
how many iterative function solutions a particular function should
have. For instance take f(f(x)) = e^x. The function e^x has no fixed
point on the real line but an infinity of them in C. Does each new
fixed point create its own solution for an iteration function? Are
there solutions of the iteration function outside of the fixed points?

For instance, f(x) = 6 + 2x - x^2 has two fixed points, x = 3 and x =
-2. Using the method I alluded to in the first post, you can generate
two series solutions to f(x) from the fixed points. So does the
function above have more than 2 solutions for the iteration function,
or does it have exactly 2?
Of course this could have been answered years ago in other papers for
all I know...
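In any case, the two fixed points are easy to check numerically, along with the multiplier f'(p) at each one, which is what the local series solution is built from; a quick Python sketch (the derivative is worked out by hand):

```python
f = lambda x: 6 + 2 * x - x * x
df = lambda x: 2 - 2 * x        # derivative of f, computed by hand

for p in (3, -2):
    assert f(p) == p            # both points really are fixed
# The multipliers differ, so the two local series solutions are genuinely distinct.
assert df(3) == -4
assert df(-2) == 6
```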

Yours,
-- NPC

Daniel Geisler

Jan 29, 2005, 5:00:15 PM
qma...@yahoo.com wrote:
> Wow, first, thanks for all the responses. I now have more than enough
> references to investigate. I have gotten a good response on this
> question. Second, I would like to respond to Mr. Geisler's last comment
> about axiomatic basis for function iteration. I think that will be the
> goal of the paper I write. Well, at least an axiomatic basis for well
> behaved functions over the complex plane.

That would be great. Iterated functions are considered a valid approach
to dynamical systems in physics. As far as I know, Aldrovandi and
Freitas[1] are the only ones who have mastered the mathematics of
either continuously iterated or fractionally iterated functions well
enough to provide additional insight into physics; in their case the
Navier-Stokes equations. Fractionally iterated functions would also be
critical in providing a mechanism for extending the Ackermann function
from the integers to the rational numbers.

> It seems there are some very serious open questions relating to this
> subject. One question I find personally very interesting, relates to
> how many iterative function solutions a particular function should
> have. For instance take f(f(x)) = e^x. The function e^x has no fixed
> point on the real line but an infinity of them in C. Does each new
> fixed point create its own solution for an iteration function? Are
> there solutions of the iteration function outside of the fixed points?
>
> For instance f(x) = 6 + 2x - x^2, has two fixed points x = 3 and x =
> -2. Using the method I alluded to in the first post, you can generate
> two series solutions to f(x) from the fixed points. So does the
> function above have more than 2 solutions for the iteration
> function, or does it have exactly 2?
> Of course this could have been answered years ago in other papers for
> all I know...

I believe this is one of the most important, if not the central,
question that needs to be addressed in extending iterated functions.
Gralewicz and Kowalski[2] are quite critical of Aldrovandi and
Freitas's paper because they find a second solution to a continuously
iterated function (I believe using a second fixed point) where
Aldrovandi and Freitas maintain that they have found a unique solution.
Personally I think that degree of criticism is inappropriate for a
paper I feel is deserving of a Nobel Prize.

Consider the iterates of f(z); typically there will be not only fixed
points but period-k fixed points. Given that f(z) has a period-two
fixed point, now consider the iterates of g(z) such that g(z) = f(f(z)).
Both of the period-two fixed points of f(z) serve as period-one fixed
points of g(z).

Cris Moore suggested an interesting test for validating a continuously
iterated function. It's easy to kludge together a function that is
faithful to the Lyapunov exponent of a fixed point in the fixed point's
neighborhood; the trick is for it to also be faithful to the Lyapunov
exponents of the other fixed points in their neighborhoods. I had some
positive but not conclusive results with my own work when I modeled the
Taylor series, from a fixed point, of the continuous iteration of a very
low-entropy exponential function c^z with c near 1. This takes an
incredible amount of computational power to do right, since as z moves
away from the fixed point the Taylor series must model the fixed
point's Lyapunov exponent, further out model chaotic behavior, and
then, as z takes even larger values, settle down and model a second
fixed point and its Lyapunov exponent.

Since my own work is based on finding the Taylor series of iteration
functions from a fixed point, assuming the existence of a fixed point
and that the function is smooth, the very existence of a solution
indicates that a solution is associated with every fixed point. If there
is a solution for each fixed point it may be that we are only talking
about one solution appearing as many under translation from one fixed
point to another.

I think it may be appropriate to extend your question about whether a
solution is associated with each fixed point to ask if a solution is
associated with every set of period k fixed points. What about solutions
for when k goes to infinity? What about solutions for strange attractors?

Good luck,
Daniel


[1] R. Aldrovandi and L. P. Freitas,
Continuous iteration of dynamical maps,
J. Math. Phys. 39, 5324 (1998)

[2] P. Gralewicz and K. Kowalski,
Continuous time evolution from iterated maps and Carleman
linearization,
arXiv, 2000

Alain Verghote

Jan 29, 2005, 5:00:16 PM

Dear Qmagick,

You've picked up some 'stuff' for your study about fractional
iteration functions.
You've written "one question I find personally interesting relates to
how many iterative function solutions a particular function should
have". I am mainly concerned with real, continuous functions.
I want to show two examples.
1°) f^[2](x) = x: just write a symmetrical expression in {f, x},
like x*f + f + x = 3 ... Is there a generalization to [n]?

2°) f(x) = 2x + 1, f^[1/2](x) = ? The iterated line (ax+b)^[r]
gives two roots:
f^[1/2](x) = -x sqrt(2) - 1 - sqrt(2);
f^[1/2](x) = x sqrt(2) - 1 + sqrt(2).
Compute f^[1/3](x)!!!
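Both roots can be verified numerically; a small Python sketch (my own check of Alain's two formulas, nothing more):

```python
from math import sqrt, isclose

g_plus  = lambda x:  sqrt(2) * x + (sqrt(2) - 1)   # x sqrt(2) - 1 + sqrt(2)
g_minus = lambda x: -sqrt(2) * x - (1 + sqrt(2))   # -x sqrt(2) - 1 - sqrt(2)

for g in (g_plus, g_minus):
    for x in (-3.0, 0.0, 2.5):
        assert isclose(g(g(x)), 2 * x + 1)         # each is a half-iterate of 2x + 1
```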
Please do tell me about the significant results you arrive at.
Honestly, I think we still have to work in order to clarify
continuously iterated functions and make them more 'catchable'.

Sincerely, Alain.

Daniel Asimov

Jan 30, 2005, 10:30:06 AM
NPC (qma...@yahoo.com) wrote:

<<
One question I find personally very interesting, relates to
how many iterative function solutions a particular function should
have. For instance take f(f(x)) = e^x. The function e^x has no fixed
point on the real line but an infinity of them in C. Does each new
fixed point create its own solution for an iteration function? Are
there solutions of the iteration function outside of the fixed points?
>>

I have studied these questions and have some answers (kindly cite me
as the source if you quote the following -- thanks).

1. Yes, each fixed point creates its own solution for an iteration
function.

2. Yes, there are solutions of the iteration function outside of the
fixed points; in particular there are infinitely many real analytic
flows into which e^x embeds on the reals.

Daniel Asimov

Dave Rusin

Jan 30, 2005, 10:30:06 AM
In article <ctdgh5$s1d$1...@news.ks.uiuc.edu>, <qma...@yahoo.com> wrote:

>For instance f(x) = 6 + 2x - x^2, has two fixed points x = 3 and x =
>-2. Using the method I alluded to in the first post, you can generate
>two series solutions to f(x) from the fixed points. So does the
>function above have more than 2 solutions for the iteration
>function, or does it have exactly 2?

Or does it have any at all?

By "solution" you seem to mean a smooth function, domain unspecified.
If you compute this solution by finding a power series at a fixed point
(so that what you're computing is actually the germ of a solution, but
never mind), then I think you need to know whether the power series
converges before you can assert that there is a solution (defined
somewhere besides the fixed point, of course).

For example, I looked for a solution to g o g = x - x^2 of the form
g(x) = x + a2 x^2 + a3 x^3 + ...; the rational coefficients seem to
grow too rapidly to permit convergence (I went as far as the coefficient
of x^100, and it appeared log(|a_n|) grew faster than linearly). So if
this series has a zero radius of convergence, in what sense is it a solution?
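Solving for the coefficients order by order can be automated; a sympy sketch (my own reconstruction of the computation, not the Maple code used above) recovers the first values a2 = -1/2, a3 = -1/4:

```python
from sympy import symbols, solve, Rational

x = symbols('x')
N = 6                                   # solve for coefficients up to x^N
a = symbols('a2:7')                     # unknowns a2 .. a6
g = x + sum(a[i] * x**(i + 2) for i in range(len(a)))

# Equate g(g(x)) with x - x^2, one power of x at a time; the coefficient
# of x^k involves a_k linearly, so the system solves sequentially.
resid = (g.subs(x, g) - (x - x**2)).expand()
sols = {}
for k in range(2, N + 1):
    eq = resid.coeff(x, k).subs(sols)
    sols[a[k - 2]] = solve(eq, a[k - 2])[0]

assert sols[a[0]] == Rational(-1, 2)    # a2 = -1/2
assert sols[a[1]] == Rational(-1, 4)    # a3 = -1/4
```

Raising N (and extending the symbol range) lets one watch the rapid coefficient growth described above.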

Your specific quadratic may be different. I looked instead at the function
f(x) = 4x - 3x^2 so I could avoid surds, looking for a power-series
solution starting g(x) = 2x + a2 x^2 + ... . I "solved" this in a very
round-about way. First I found the coefficients a3, a4, ..., a_75 in turn.
Factoring numerators and denominators and looking for a pattern, I found
that a_{n+1} / a_n = (3n^2 - 6)/(4n^2 + 6n + 2), so the series appears to
converge for |z| < 4/3. In particular the value g(x) would be the limit of
the partial sums S_n, which satisfy the recurrence
(4n^2 + 6n + 2) S_{n+1} = ((4+3x)n^2 + 6n + (2-6x)) S_n - 3x(n^2 - 2) S_{n-1} .
Maple solves this recurrence relation and I can massage the answer to be S_n =
(2*x)*( hypergeom([1, 1-2^(1/2), 1+2^(1/2)],[3/2, 2],3/4*x) +
(2/Pi)^(1/2)*(sin(Pi*(-1+2^(1/2)))/4)*n^(-3/2)*((3/4)*x)^n
*hypergeom([1, n+1-2^(1/2), n+1+2^(1/2)],[n+2, n+3/2],3/4*x)
*(GAMMA(n+1-2^(1/2))*GAMMA(n+1+2^(1/2))/GAMMA(2*n+3)*4^n*n^(3/2)*4/Pi^(1/2)) )
The first line is free of n's and the last line tends to 1 as n->oo;
Maple did not succeed in evaluating the limit of the rest as n --> oo, except
numerically, and for the few x I tried it seems this limit is zero, leaving
g(x) = 2*x*hypergeom([1, 1-2^(1/2), 1+2^(1/2)],[3/2, 2],3/4*x) .
Indeed, this g has the correct Taylor-series coefficients (through x^75).
I have to say I didn't expect a "closed form" solution. Computing numerical
values of g(g(x)), we check that we do indeed get 4x - 3x^2, but only for
x < 1.07 or so, after which the function g starts to decrease.
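The nominated closed form can be checked to high precision with mpmath (my own numerical check of the identity, for small x well inside the convergence region; this is numerical evidence, not a proof):

```python
from mpmath import mp, mpf, sqrt, hyper

mp.dps = 30
s = sqrt(2)

def g(x):
    # Candidate half-iterate of 4x - 3x^2 from the hypergeometric expression above.
    return 2 * x * hyper([1, 1 - s, 1 + s], [mpf(3) / 2, 2], mpf(3) / 4 * x)

for x in (mpf('0.05'), mpf('0.2'), mpf('0.5')):
    assert abs(g(g(x)) - (4 * x - 3 * x**2)) < mpf('1e-12')
```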

Well, this is a lot of guesswork but it seems to nominate a specific
function to be a solution to the functional equation; I suppose someone who
uses the hypergeometric functions regularly could prove g o g = 4x - 3x^2.
I want only to observe that the function g numerically takes the value
g(1) = 1.3226017298723... rather than g(1) = 1, so this is not the
same as the solution you would construct from the fixed point of f at 1.

Since you asked about solving g o g = exp and seem to be interested in
complex-analytic solutions, I will just point out that there is no
such solution defined everywhere in the complex plane, as can be deduced
from thinking about the order of entire functions; see Polya,
"On an integral function of an integral function",
Journal London Math. Soc. 1 (1926), p. 12-15.

dave

qma...@yahoo.com

Jan 30, 2005, 10:30:07 AM
I wanted to add one more thing: I forgot to give one other trivial
solution.

f(x) = c, F(n,x) = c, where c is a constant belonging to C. No other
conditions on the iterate.

I was also thinking about piecewise-continuous functions p(x) built
from the various trivial solutions, to see if they are amenable to
solution. I was using Mathematica to define them and play around. For
instance:

p(x) = x,      if x > 1
       2x + 1, if 0 < x <= 1
       x^3,    if x <= 0

This function has overlap in the values it produces [p(3) = p(1)]. If
I could find a general solution or general method, the results might be
helpful in approximating other, more general functions as well. This is
just an idea currently... It looks like if the pieces do not have
overlap, the solution for the total iterate will be the set of piecewise
iterate solutions; otherwise the iterations of p(x) become much harder.
So for now this method would only work for functions that are always
increasing or always decreasing and can be modeled piecewise from the
trivial solutions. Of course don't take any of this to be definite
except the solution for f(x) = c for the iterate (sorry).
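A direct Python transcription of p(x) makes the overlap concrete (my own sketch):

```python
def p(x):
    if x > 1:
        return x
    elif x > 0:              # 0 < x <= 1
        return 2 * x + 1
    else:                    # x <= 0
        return x ** 3

# Two distinct inputs hit the same value, so p is not injective on the reals;
# this is the overlap that makes a piecewise iterate hard to stitch together.
assert p(3) == 3 and p(1) == 3
```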

Sincerely, Nathan

Daniel Geisler

Jan 30, 2005, 10:30:03 AM
I forgot to mention that it might be useful to investigate what happens
when you use different types of fixed points, because of complex (or
topological) conjugacy. I had written a Mathematica program in the
mid-nineties that was giving some nice results for tetration of complex
numbers (continuously iterating the exponential function), but because I
was using real numbers in my calculations there was always some error. I
realized that if I continuously iterated the sine function, which has a
fixed point at zero, then I could test my results where all the
calculations used rational numbers instead of real numbers. To my dismay
my calculations just produced garbage. I was able to write a different
program that did continuously iterate the sine function without any
error terms, but it took a while for me to realize that my attempts to
tetrate complex numbers had always used a hyperbolic fixed point, while
the sine function has a parabolic rationally neutral fixed point.

The classification of fixed points states that complex dynamics (the
dynamics of the complex plane) can result in hyperbolic, irrationally
neutral, rationally neutral, parabolic rationally neutral, and
superattracting fixed points. They all have dynamical properties
distinct from one another. A great deal is known about the topological
(qualitative) nature of iterated functions, so any quantitative theory
of iterated functions must be consistent with what is already known
about complex dynamics. I dealt with the different types of fixed points
by realizing that complex dynamics creates nested geometric
progressions. A little bit of experimentation with summing geometric
progressions will show that the formula everyone learns in high school
actually has several exceptions. I maintain that understanding these
exceptions allows one to derive the entire classification of fixed
points, including the specifics of the complex conjugacy of the
different fixed points.

For people without a background in complex dynamics who are interested
in either fractional or continuous iteration, I highly recommend Robert
Devaney’s Chaotic Dynamical Systems and then Carleson and Gamelin’s
Complex Dynamics.

Daniel

qma...@yahoo.com

Jan 30, 2005, 10:30:04 AM
Alain et al.

I am sorry that I cannot at this point release what results I have
arrived at in this public forum. First, I need to do a lot of work and
read what everyone else has done to make sure I am not stealing anyone
else's ideas or saying anything blatantly stupid. Second, in the event
that I have come up with something original, I do want to get credit,
of course. If I have not come up with anything, I will be the first to
tell you.

To give a hint at what I have found, check out this paper:
L. Smolarek, The formula of fractional iteration of function
differentiable at its fixed point. Zeszyty Nauk. Mat. Uniw. Gdans. 7
(1987), 99-107

Just looking at the paper's title, I get the feeling that Mr. Smolarek
might have discovered something similar to my own results. If anyone
here has read this paper, please let me know his methodology/results so
I can be sure... I have already emailed Mr. Smolarek, but he has
not given any response as of yet. His paper is in Polish, so that
might also be a problem (I don't speak Polish!). Maybe we can all
pressure him to tell us.

Alain, I would like to answer your two questions with some definiteness,
so here goes. f(f(x)) = x does not have just one solution for the
iterate of the function f. In any event, it is not the iterate we are
looking for but the set of functions with period two intersecting y
= x. I am using the term iterate to mean a continuous iteration
solution of a function f(x) with continuous variables n, x over some
range for the function F(n,x). F is the "iterate" of f(x). F has to
have the following properties:

1. F(0,x) = x
2. F(1,x) = f(x)
3. F(m, F(n, x)) = F(n, F(m, x)) = F(n+m, x)
4. n is over at least one continuous range including n = 0, and n = 1.
x must also be continuous over a given range for each n so defined.
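For the simplest translation flow, f(x) = x + a with proposed iterate F(n,x) = x + na, the first three conditions can be checked mechanically (a minimal Python sketch of the kind of test meant here):

```python
from math import isclose

a = 2.7
f = lambda x: x + a
F = lambda n, x: x + n * a                       # proposed iterate of f(x) = x + a

for x in (-1.0, 0.0, 3.5):
    assert F(0, x) == x                          # condition 1
    assert isclose(F(1, x), f(x))                # condition 2
for n, m, x in ((0.25, 1.5, 2.0), (-1.0, 0.5, -3.0)):
    assert isclose(F(m, F(n, x)), F(n + m, x))   # condition 3
```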

So what I am saying is that the functional equation f(f(x)) = x has an
infinity of solutions. Any iterate with a period-2 solution along y = x
will suffice, and there are an infinity of them that I do not yet know
how to classify. This is a guess though. Sorry. I could definitely be
wrong on this one. Maybe I will work on it. I would just be happy with
a general solution for iterates of functions, though...

In answer to question 2°) above: the following functions have iterate
solutions for the whole of the complex plane over suitably defined
regions that are easily seen (x, n belong to C):

1. f(x) = x, F(n,x) = x; no need for explaining, I hope.
2. f(x) = x + a, F(n,x) = x + na; ditto. The above is the special case
a = 0 of this one.
3. f(x) = ax + b, F(n,x) = a^n x + (a^n - 1)b/(a-1), a != 1. If a = 1,
use the formula above.
4. f(x) = (ax+b)/(cx+d), F(n,x) = (M_00 x + M_01)/(M_10 x + M_11),

where M_ij is the ij component of the matrix

| a b | n
| c d |

i.e. the matrix formed from the coefficients, raised to the continuous
power n. I think this is just the Moebius group. I imagine you could
generalize this result...

5. f(x) = ax^p, F(n,x) = a^((p^n-1)/(p-1)) x^(p^n), a != 0, p != 1.
This function is rather boring if you ask me. It just grows insanely
fast or decays insanely slowly.
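The Moebius case can be tried out numerically by taking a fractional matrix power via eigendecomposition (a sketch assuming numpy and a diagonalizable coefficient matrix with positive eigenvalues, so the fractional power is unambiguous):

```python
import numpy as np

def mobius_iterate(a, b, c, d, n):
    """F(n, .) for f(x) = (ax + b)/(cx + d), via the n-th power of [[a, b], [c, d]]."""
    M = np.array([[a, b], [c, d]], dtype=complex)
    w, V = np.linalg.eig(M)                      # assumes M is diagonalizable
    Mn = V @ np.diag(w ** n) @ np.linalg.inv(V)  # M raised to the continuous power n
    return lambda x: (Mn[0, 0] * x + Mn[0, 1]) / (Mn[1, 0] * x + Mn[1, 1])

f = lambda x: (2 * x + 1) / (x + 1)
h = mobius_iterate(2, 1, 1, 1, 0.5)              # a candidate half-iterate of f
x = 0.7
assert abs(h(h(x)) - f(x)) < 1e-9                # h o h reproduces f
```

This works because composing Moebius maps multiplies their coefficient matrices, so the square of the matrix square root gives back f exactly.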

These are what I am calling the trivial solutions. Whether they are
trivial, whether they are true, or whether they have already been found
out, I do not know and do not much care. Try them out yourself to see
if they meet the conditions 1-4 I have presented for iterate solutions.
They should, but if they do not, let me know! I will then have to
recant my inexcusable error.

So for the example Alain gave, f(x) = 2x + 1 with f^[1/3](x) wanted,
we have a = 2, b = 1, n = 1/3. This gives:

F(1/3, x) = 2^(1/3) x + (2^(1/3) - 1) 1 / (2 - 1)
= 2^(1/3) x + (2^(1/3) - 1)

Now for a test. F(1/3, F(1/3, F(1/3, x))) should equal 2x + 1. Let's
break it up into two parts. First, F(1/3, F(1/3, x)) should equal
F(2/3, x), right?

F(2/3, x) = 2^(2/3) x + (2^(2/3) - 1) 1 / (2 - 1)
= 2^(2/3) x + (2^(2/3) - 1)

F(1/3, F(1/3, x)) = 2^(1/3) [2^(1/3) x + (2^(1/3) - 1)] + (2^(1/3) - 1)
= 2^(2/3) x + (2^(2/3) - 1)
= F(2/3, x)

So far so good. Now to finish.

F(1/3, F(2/3, x)) = 2^(1/3)[2^(2/3) x + (2^(2/3) - 1)] + (2^(1/3) - 1)
= 2x + 1 = F(1,x)

By the way, F(1/2, x) gives the same formula Alain gave, which is a good
thing.
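The same arithmetic can be delegated to a few lines of Python (my own check of formula 3 with a = 2, b = 1):

```python
from math import isclose

a, b = 2.0, 1.0
f = lambda x: a * x + b
F = lambda n, x: a**n * x + (a**n - 1) * b / (a - 1)   # trivial solution 3

for x in (-2.0, 0.0, 1.5):
    assert isclose(F(1/3, F(1/3, F(1/3, x))), f(x))    # three third-iterates give f
    assert isclose(F(0.5, F(0.5, x)), f(x))            # two half-iterates give f
```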

There you go! As I said, this is a "trivial" solution, but you should do
it on your own to make sure the solution meets conditions 1-4. If
you work through the formulas above enough you will see the trick in
each one that makes them work. Let me know if these are original or
not, but I doubt it very seriously.

Well, now for the bad news, or just a recap of what everyone has said
and what I have observed to date. Mr. Geisler has said there are a
number of "methods" for finding the iterates of various functions. From
what little I have read, this seems definitely to be true. I believe
they can roughly be categorized in the following list:

I. Interpolation.
Using a computer, create the functions f(x), f(f(x)), etc., and
interpolate between F(n,x) and F(n+1,x) to estimate F(m,x) for
n < m < n + 1. The downside of this is that it is not analytic and is
merely a very serious "fudge".

II. Pattern recognition.
Take a function f(x) and iterate it over and over again. If you find a
pattern in F(n,x), promote n to a continuous variable and check whether
the result meets conditions 1-4. This is how the trivial solutions
above were generated, but it might work for other functions. You cannot
know until you try.

III. Coefficient solutions.
Use various series expansions of f(x) and iterate them. Create another
expansion for F(n,x) and equate coefficients. This technique is
interesting but has the drawback that it cannot be shown rigorously to
be the actual iterate solution; in fact, it probably is not. One other
approach in this method is making the coefficients in F(n,x) functions
of n. Then one can produce what is called a "flow", if I remember
correctly. This method also has the drawback that it is
computationally insane and not rigorously analytic.

IV. Taylor or Laurent series expansions.
Finding the Taylor series or Laurent series expanded around a fixed
point or otherwise. This is what I am also doing (as is Geisler).
What are the problems? Most functions cannot be expressed as a series,
nor do we know that an iterate of one can be! Aside from that, the
method seems a very good one. Yet again, I can't say exactly what I am
doing as of yet. What is true is that if f(x) is sufficiently well
behaved, there is a chance this method could come out correctly for a
large class of functions, like f(z) = z^2 - 1 and so on.

Finally I want to respond to Mr. Geisler:

Daniel Geisler Jan 29, 2:00 pm


>Since my own work is based on finding the Taylor series of iteration
>functions from a fixed point, assuming the existence of a fixed point
>and that the function is smooth, the very existence of a solution
>indicates that a solution is associated with every fixed point. If there
>is a solution for each fixed point it may be that we are only talking
>about one solution appearing as many under translation from one fixed
>point to another.

This is an intriguing idea. Let me summarise it, and tell me if I am
somewhat correct. For a function f(x) there exists a set of iterate
solutions generated via the fixed points, in a one-to-one and onto
correspondence.

f(p0) = p0
...
f(p_n) = p_n

Make a set p = { p0, p1, ..., p_n }, where the set can be infinite or
not. Solve the iterates of f(x) for each p_m and label them F_m(n, x).
I already see the difference. I am very sorry. What you were talking
about were points x0, x1, ..., x_k such that
f(x0) = x1
f(x1) = x2
etc.
f(x_k) = x0

Or a period-k cycle. It seems then that if F(n,x) is the iterate of
f(x), then the function F would have to traverse all the points from 0
to k continuously. The fixed points would just be cycles of period 0 or
1, depending how you look at or define it. I also do not see why you
could not have cycles of non-integral period. Of course, other points
would not belong to a period at all. This brings to mind all sorts of
topological questions that I did not even begin to think of. Perhaps
chaos is just a non-periodic set of points with boundaries? I really
have no idea. This requires a lot more thought. What about those
non-integral periods though, huh? I think I still did not get the whole
import of your idea though...

I would very much like to hear about how Mr. Geisler's method works if
it is already published, just to make sure it is not the same as my
own. I also want to note that solving the iterates of functions is not
necessarily related to more general functional equations. For instance,
if we knew what the iterate of e^x is, we would not necessarily know
how to solve f(f(x)) + f(x) = e^x. This is just one example, but it
shows that "functional equations" and "function iterates" should be
treated as two separate, albeit related, topics.

Plus, I hope, Mr. Geisler, that you don't mind if I use Mr. Moore's
idea to test the functions I generate. Is his idea already published,
or should I use it only for my own confidence tests?

Sincerely to everyone,
-- NPC

qma...@yahoo.com

Jan 30, 2005, 3:30:03 PM
Dave,

I am now relatively sure that the solution method I employ for solving
the iterates of functions is not the same as the methods you have
described.

You use method III as listed in my Jan 30, 7:30 post. I do not use
that method. In fact, I do not use any method associated with expanding
polynomials inside of other polynomials. Thanks for showing me the
pitfalls of such a technique (analytic/differentiable etc.). Plus, I
very much like the beauty of the result you showed in your Jan 30, 7:30
post. It must have taken a good bit of time to derive...

However, I also want to note the following on the line of thinking used
in your derivation. If you have a functional equation f^n(x) = g(x),
you can give at least one solution by finding the iterates of g(x),
let's call them G(m,x). Then if you want f(x), you would just take
G(1/n, x) = f(x). The question is: does this produce all of the solutions?

No, it probably does not. Consider the following case: f(f(x)) = x. The
iterate of the identity f(x) = x is F(n,x) = x. So one solution to the
functional equation just listed is F(1/2, x) = f(x) = x. Is that the
only solution? No, f(x) = 1/x is also a solution. If you think about
it, you can get a lot of solutions to f(f(x)) = x. To do them all would
be insane. How many solutions, though, does the iterate of f(x) = x
have? Only one: F(n,x) = x.

However, what does that have to do with the beautiful math you just
derived for us? The problem is that by solving the functional equation
g o g = x - x^2, you were finding just one of a possible infinity of
such solutions. Ah, I don't know. I hope you see what I mean. You can
either solve g o g = x - x^2, or you can try to solve for G(n,x). I
think we would all be on better ground if we solved the latter, but the
former is the one that was first introduced, so here we are...

-- NPC

alexand...@ulbsibiu.ro

Jan 31, 2005, 11:30:04 AM
Possible References:

[1] N. H. Abel, Détermination d'une fonction au moyen d'une équation
qui ne contient qu'une seule variable, Œuvres complètes de N. H. Abel
(par L. Sylow et S. Lie), Tome 2, 36-39. Grondahl & Søn, Christiania,
1881.

[2] E. Kasner, Conformal geometry, Proceedings Fifth Int. Congress
Math. 1912 (Eds. E. W. Hobson and A. E. H. Love), 2, 81-87. Cambridge
University Press, Cambridge, 1913.

[3] H. Kneser, Reelle analytische Lösungen der Gleichung ... und
verwandten Funktionalgleichungen, J. Reine Angew. Math. 187 (1949),
56-67.

[4] G. Koenigs, Sur les intégrales de certaines équations
fonctionnelles, C. R. Acad. Sci. Paris 99 (1884), 1016-1017.

[5] M. Kuczma, Functional equations in a single variable, Monografie
Mat. 46, Polish Scientific Publishers, Warsaw, 1968.

[6] M. Kuczma, B. Choczewski and R. Ger, Iterative functional
equations, Encyclopedia of Mathematics and its Applications 32,
Cambridge University Press, 1990.

[7] E. Schroeder, Ueber iterierte Funktionen, Math. Ann. 3 (1871),
296-322.

[8] G. Szekeres, Fractional iteration of exponentially growing
functions, J. Australian Math. Soc. 2 (1962), 301-333.

[9] Gy. Targonski, A bibliography on functional equations, Research
Report, Fordham University, Bronx, N.Y., 1964.

[10] Gy. Targonski, Topics in iteration theory, Studia Mathematica
Skript 6, Vandenhoeck & Ruprecht, Göttingen, 1981.


Dave L. Renfro

Feb 22, 2005, 7:59:51 AM
qma...@yahoo.com
[sci.math.research: January 28, 2005 14:00:05 +0000 (UTC)]
http://mathforum.org/epigone/sci.math.research/dralyoysli

wrote (in part):

> Wow, first, thanks for all the responses. I now have more
> than enough references to investigate. I have gotten a good
> response on this question. Second, I would like to respond
> to Mr. Geisler's last comment about axiomatic basis for
> function iteration. I think that will be the goal of the
> paper I write. Well, at least an axiomatic basis for well
> behaved functions over the complex plane.

Here's another reference that you might want to look at.
(I didn't see it among those suggested in this thread.)

Daniel S. Alexander, "A History of Complex Dynamics from
Schröder to Fatou and Julia", Aspects of Mathematics E 24,
Friedr. Vieweg & Sohn, Braunschweig, 1994.
[MR 95d:01014; Zbl 788.30001]
http://www.emis.de/cgi-bin/MATH-item?0788.30001

The Zbl review is especially long (the URL above takes you
to a publically available webpage). Two additional reviews
that I know of are:

Theodore W. Gamelin, Historia Mathematica 23 (1996), 74-84

Robert B. Burckel, SIAM Review 36(4) (Dec. 1994), 663-664.

Alexander's book is useful for its survey of early work
on what you're interested in. For example, Section 2.2
"Analytic Iteration", is preceded by this paragraph:

"Before reviewing the responses of Korkine and Farkas to
Schröder's study of functional equations it will be useful
to first say a few words about analytic iteration, and
then to briefly outline the respective approaches of
Schröder, Korkine and Farkas to this problem." (p. 24)


Dave L. Renfro
