At the critical point of a 3-D system of ODEs, the eigenvalues of the
Jacobian become infinite. The system otherwise "would be" of the
saddle type: a one-dimensional stable manifold converges to the
critical point, while all other trajectories eventually diverge. But
as I take the limit at the critical point, the eigenvalues diverge to
minus infinity.
What is the interpretation?
The Jacobian matrix at the critical point is:
[1, -1, 0]
[0, 0, -1]
[a*b, -a*b, a+b]
where both a and b tend to plus infinity when the limit is evaluated
at the critical point.
The simulations show a one-dimensional stable manifold,
notwithstanding the eigenvalues I computed. The pragmatist in me
believes the simulation, but what about these infinite eigenvalues? I
know that in the case of zero eigenvalues the interpretation is that
the Jacobian is not informative about the local dynamics and that
higher-order terms of the Taylor expansion must be considered (and the
manifold in this case is called a center manifold, which may be stable
or unstable). I couldn't find anything about infinite eigenvalues.
Thanks for your help!
P.S. "eigenvalue" is probably the most complicated word I have ever
used, so please forgive my ignorance!
> The Jacobian matrix at the critical point is:
>
> [1, -1, 0]
> [0, 0, -1]
> [a*b, -a*b, a+b]
>
> where both a and b tend to plus infinity when the limit is evaluated
> at the critical point.
The other problem is that this Jacobian has a vanishing eigenvalue,
which requires a centre manifold analysis (as you correctly pointed
out).
Could you maybe publish the equations and give a bit more information
on
-- why the parameters a and b tend to infinity
-- how you did numerical simulations with parameter values tending to
infinity?
Then it will be much easier to answer your questions.
Cheers,
Ivo
> you cannot assess local stability of critical points from eigenvalues tending to infinity
Alright, that's sort of what I suspected (or hoped for). Do you have a
reference on this "theorem" that I could follow up?
I was reluctant to copy-paste the system because it is quite messy,
but I have concocted something like a minimal example below:
The ODE system (x,y,z) is given by:
> xd:=f(x)-y;
> yd:=fx(x)-z;
> zd:=(z-p1)*(p2-z)-(p-z)*(fx(x)-z)/(f(x)-y);
x, y, and z are strictly positive reals; the independent variable is
time t. f is a function, e.g. f(x)=x^(1/2). fx(x) denotes the
derivative of f(x) with respect to x and fxx(x) the second derivative.
p1, p2, p are strictly positive real parameters such that 0<p1<p<p2,
e.g. p=(p1+p2)/2 and p=1.
I call a "stationary state" a critical point where xd->0, yd->0,
zd->0. (I realize that zd is not, strictly speaking, defined where
xd=0.) I denote the corresponding critical values by x*, y*, z*.
For some reason I found it useful to introduce the following notation:
> a := (fx(x)-z)/(f(x)-y);
> b := (p-z)/(f(x)-y);
> c := a*fx(x)-fxx(x);
I'm particularly interested in a>0 and c>0 but agnostic about the sign
of b.
The Jacobian matrix may then be written:
[
[fx,-1,0],
[fxx,0,-1],
[b*c,-a*b,p1+p2-2*z+a+b]
]
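For the record, this Jacobian can be verified symbolically (a sympy sketch, with f left as a generic function; not part of the original derivation):

```python
import sympy as sp

x, y, z, p, p1, p2 = sp.symbols('x y z p p1 p2')
f = sp.Function('f')(x)          # f kept generic
fx = sp.diff(f, x)
fxx = sp.diff(f, x, 2)

# the ODE right-hand sides
xd = f - y
yd = fx - z
zd = (z - p1)*(p2 - z) - (p - z)*(fx - z)/(f - y)

# the shorthand notation introduced above
a = (fx - z)/(f - y)
b = (p - z)/(f - y)
c = a*fx - fxx

J = sp.Matrix([xd, yd, zd]).jacobian([x, y, z])
J_claimed = sp.Matrix([[fx, -1, 0],
                       [fxx, 0, -1],
                       [b*c, -a*b, p1 + p2 - 2*z + a + b]])

# the entrywise difference should vanish identically
residual = sp.simplify(J - J_claimed)
```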
There seems to be a continuum of critical points of the system. The
main critical point of interest is where z*=p, but I'm interested in
other critical points too.
A special case will perhaps clarify certain things.
In the special case of some f such that fxx=0, with the special values
p=1 and p=(p1+p2)/2, the Jacobian simplifies to:
[
[1,-1,0],
[0,0,-1],
[a*b,-a*b,a+b]
]
where fx(x*)=z*=p=1, fxx(x)=0, p1+p2-2*z=0, and b*c=fx(x*)*a*b=a*b.
In this special case, we can compute the eigenvalues. I get this:
e1 = 0,
e2 = (1 + a + b + (a^2 + b^2 + 6*a*b - 2*a - 2*b + 1)^(1/2))/2
e3 = (1 + a + b - (a^2 + b^2 + 6*a*b - 2*a - 2*b + 1)^(1/2))/2
Indeed, there is (in this special case) a zero eigenvalue. The other
thing is that (it seems to me, so let's call it a conjecture) the
critical point is also characterized by:
|a|-->infinity
|b|-->infinity
and therefore |e2|-->infinity and |e3|-->infinity
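The special-case eigenvalues above can be recomputed symbolically (a sympy sketch; a and b are treated as free positive symbols, which is an assumption for the sketch only):

```python
import sympy as sp

a, b = sp.symbols('a b', positive=True)
lam = sp.Symbol('lambda')

# Jacobian of the special case fxx = 0, fx(x*) = z* = p = 1
J = sp.Matrix([[1, -1, 0],
               [0, 0, -1],
               [a*b, -a*b, a + b]])

p_char = J.charpoly(lam).as_expr()
roots = sp.solve(p_char, lam)     # three roots, one of them 0
disc = a**2 + b**2 + 6*a*b - 2*a - 2*b + 1   # the discriminant above
```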
This conjecture comes from the equation for zd, which may be written:
zd=(z-p1)*(p2-z)-(p-z)*a
=(z-p1)*(p2-z)-b*yd
=(z-p1)*(p2-z)-a*b*xd
Given the parameter restrictions we have (z-p1)*(p2-z)>0, so the only
way to get zd-->0 is (p-z)*a-->L, equivalently b*yd-->L and
a*b*xd-->L, where L=(z*-p1)*(p2-z*). [In the special case z*=p,
L=(p-p1)*(p2-p).] Therefore, given that yd-->0, it must be that
|b|-->infinity, and in the special case z-->p it must be that
|a|-->infinity.
Therefore, by my calculations, under the parameter assumptions, it
looks to me like two of the eigenvalues have infinite real parts at
the special critical point under consideration (z-->p).
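The three equivalent forms of zd used in this argument are straightforward to confirm symbolically (a sympy sketch, taking the example f(x)=x^(1/2)):

```python
import sympy as sp

x, y, z, p, p1, p2 = sp.symbols('x y z p p1 p2', positive=True)
f = sp.sqrt(x)                 # the example f(x) = x^(1/2)
fx = sp.diff(f, x)

xd = f - y
yd = fx - z
zd = (z - p1)*(p2 - z) - (p - z)*(fx - z)/(f - y)

a = (fx - z)/(f - y)
b = (p - z)/(f - y)

# the three equivalent ways of writing zd used in the limit argument
form1 = (z - p1)*(p2 - z) - (p - z)*a
form2 = (z - p1)*(p2 - z) - b*yd
form3 = (z - p1)*(p2 - z) - a*b*xd
```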
I simulated the above system. I also looked at transformations of the
system in which I can safely make an approximation that reduces it to
2-D. The conclusion from the numerical analysis is that there is one
stable manifold converging to the critical point z*=p if xd>0 but not
if xd<0 (i.e. the stable manifold lies to the left of x*), and there
appears to be no stable manifold converging to any of the other
candidate critical points in the continuum. That's what the plots seem
to show, but... can I trust my simulations? And can I trust the logic
just outlined?
How did I do the simulations? I just assumed functional forms and
parameter values and used Maple's dsolve routine. I also used the
dfieldportrait routine on 2-D approximations.
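For readers without Maple, a rough Python/SciPy equivalent of such a simulation could look like this; the functional form f(x)=x^(1/2), the parameter values, and the initial condition below are illustrative assumptions, not the ones from my actual runs:

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameter choices (0 < p1 < p < p2, p = (p1+p2)/2 = 1)
p1, p2 = 0.5, 1.5
p = (p1 + p2) / 2.0

def f(x):  return np.sqrt(x)       # example f(x) = x^(1/2)
def fx(x): return 0.5 / np.sqrt(x)

def rhs(t, u):
    x, y, z = u
    return [f(x) - y,
            fx(x) - z,
            (z - p1)*(p2 - z) - (p - z)*(fx(x) - z)/(f(x) - y)]

# start away from the singular set f(x) = y and integrate a short while
sol = solve_ivp(rhs, (0.0, 1.0), [4.0, 1.0, 0.9], rtol=1e-8, atol=1e-10)
```

Near the critical point f(x)-y vanishes, so the tolerances and the stopping time matter; Maple's dsolve faces the same stiffness there.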
The system written at the top of this post is very much the system I
want to understand in a neighborhood of a critical point xd->0, yd->0,
zd->0. Any hints, suggestions, references are very much appreciated. A
more systematic approach to the analysis would be great too.
Thanks so much Ivo Sans Terre for taking the trouble!
Patrick.
Let's first say something about your special case:
>
> In the special case of some f such that fxx=0, and the special values
> p=1, and p=(p1+p2)/2
The problem is that here z* is not a stationary solution unless
p1=p2=p=1, which is probably not what you are interested in...
> zd=(z-p1)*(p2-z)-(p-z)*a
> =(z-p1)*(p2-z)-b*yd
> =(z-p1)*(p2-z)-a*b*xd
because limits like THESE:
>
> Given the parameter restrictions we have (z-p1)*(p2-z)>0, the only way
> for zd-->0 is (p-z)*a-->L, and b*yd-->L, and a*b*xd-->L, where L=(z*-
> p1)*(p2-z*). [And in the special case z*=p, L=(p*-p1)*(p2-p*)]. And
> therefore given that yd-->0 it must be that |b|-->infinity and in the
> special case z-->p it must be that |a|-->infinity.
are NOT defined (in general, you cannot compute a product of limits if
one of the limits does not exist, i.e. tends to infinity!).
If you wish to investigate the stability of a stationary solution (x*,
y*, z*) you might want to try to transform the system to other
coordinates. Try for example \tilde{z} := z/(f(x) - y). You have to
rewrite the system by computing the derivative of the new variable
\tilde{z} with respect to t and replacing z by (f(x) - y)\tilde{z}.
Here are two references which show how this can be done for concrete
examples:
Gaucel, S. & Langlais, M. (2007): Some remarks on a singular
reaction-diffusion system arising in predator-prey modeling, Discrete
and Continuous Dynamical Systems - Series B, Vol. 8, 61-72
Hilker, F. & Malchow, H. (2006): Strange periodic attractors in a
prey-predator system with infected prey, Mathematical Population
Studies, Vol. 13, 119-134
or I can send you a preprint of
Siekmann et al. (2008): An extension of the Beretta-Kuang model of
viral diseases, Mathematical Biosciences and Engineering, Vol. 5,
549-565
if you do not have access to the other two references.
For an introduction to centre manifold reduction WITH examples (in
case you have a vanishing eigenvalue) I recommend either
King, A. C., Billingham, J., Otto, S. R. (2003): Differential
equations : linear, nonlinear, ordinary, partial, Cambridge Univ.
Press, Cambridge
or the relevant chapters in
Yuri A. Kuznetsov (1995): Elements of applied bifurcation theory,
Springer, New York
Let me know if you have further questions - do not hesitate to give me
the messiest possible version of your system if necessary!
Ivo
http://www.scribd.com/share/upload/12491211/1p7qk8dcnx1s64qs8pek
thanks,
Not sure I follow you on the non-existence of the limits. Say I
consider the limit of x as x goes to infinity, i.e. infinity. Now I
consider the limit of A*x as x goes to infinity, with A a strictly
positive real: that's infinity again. I suppose it's more than a
question of semantics whether "infinity" exists (I for one have never
seen it), but my "limits" above are to be understood in this sense --
big things with a plus or minus sign in front of them, bearing in mind
that when dividing one big thing by another just about anything can
happen.
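To make my "big things" picture concrete, here is a tiny sympy illustration (not part of the original argument): a constant times a divergent factor still diverges, while a divergent factor times a vanishing one can come out as anything, depending on the rates:

```python
import sympy as sp

x = sp.Symbol('x', positive=True)
A = sp.Symbol('A', positive=True)   # a strictly positive real constant

# constant times a divergent factor: still divergent
lim1 = sp.limit(A*x, x, sp.oo)

# divergent times vanishing: the result depends on the relative rates
lim2 = sp.limit(x * (1/x), x, sp.oo)
lim3 = sp.limit(x * (1/x**2), x, sp.oo)
lim4 = sp.limit(x**2 * (1/x), x, sp.oo)
```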
Anyway, back to the main topic:
> King, A. C., Billingham, J., Otto, S. R. (2003)
> Yuri A. Kuznetsov (1995)
I had actually looked up these books, and found Kuznetsov to be
particularly enlightening on the center manifold. The other book I
have greatly benefited from is:
Stephen Wiggins, Introduction to Applied Nonlinear Dynamical Systems
and Chaos, Second Edition, Springer, 2003.
As far as I can tell there is no reference to "infinite eigenvalues"
in these books. That's why I look forward to reading the articles you
mention. It will take me a day or so to locate them.
I want to be able to write something along the lines of "Computing the
limit of the Jacobian matrix at the critical point reveals the
existence of singularities: the three eigenvalues are zero, plus
infinity, and minus infinity. It is well known that in this case
nothing may be learned from the linearization -- higher order
approximations must be computed, see e.g. Kuznetsov (1995) and MyMan
(2009)" where MyMan is the author of "Infinite Eigenvalues über
Alles".
I'm also going to prepare a summary of my simulations, together with
the pictures (a pdf), so you can see what I mean when I say that I
think I have identified a one-dimensional stable manifold in the case
z-->p. You'll probably see immediately the reasons why I'm not sure.
It will be a lot easier to explain with pictures.
To be continued...
Patrick.
> Gaucel, S. & Langlais, M. (2007)
Quote:
"The reaction-diffusion system is a singular system of PDEs, the
denominator of one of the reaction terms vanishing along with one of
the state variables."
Exactly what happens in my system! Well spotted Ivo! I'll be reading
this article very closely.
>Hilker, F & H. Malchow (2006)
Quote:
"it is simplified through a convenient transformation"
Okay, that's my most urgent task: to find a convenient transformation.
What I've attempted so far has been less than convenient. But my
eagerness has grown, let's try.
Thanks Ivo, that was quality advice.
I'm curious about the other article you cite too :-)
Patrick.
> I want to be able to write something along the lines of "Computing the
> limit of the Jacobian matrix at the critical point reveals the
> existence of singularities: the three eigenvalues are zero, plus
> infinity, and minus infinity. It is well known that in this case
> nothing may be learned from the linearization -- higher order
> approximations must be computed, see e.g. Kuznetsov (1995) and MyMan
> (2009)" where MyMan is the author of "Infinite Eigenvalues über
> Alles".
>
Higher-order approximation/centre manifold reduction of a dynamical
system is only necessary if one or more eigenvalue(s) has/have
vanishing real part. The centre manifold does not help if eigenvalues
are infinite.
>> Hilker, F & H. Malchow (2006)
>> Quote:
>> "it is simplified through a convenient transformation"
> Okay, that's my most urgent task: to find a convenient transformation.
> What I've attempted so far has been less than convenient. But my
> eagerness has grown, let's try.
You are right: it is NOT a pleasant task to find "convenient
transformations". If it is too difficult, would it maybe be sufficient
just to give some heuristic arguments for why your system behaves in a
certain way, and to show numerical simulations? In my opinion, if you
are preparing a paper focusing on a certain model, it might be enough.
> The other book I
> have greatly benefited from is:
>
> Stephen Wiggins, Introduction to Applied Nonlinear Dynamical Systems
> and Chaos, Second Edition, Springer, 2003.
Right, also a good book...
> You cannot conclude ANYTHING from infinite eigenvalues:
> Neither stability NOR INstability of a stationary solution. Look at
> Gaucel et al., for example
Right, I have checked Gaucel et al. Thanks a lot for the pointer.
Their finding is consistent with my initial intuition, which was based
on the idea that exp(-e*t) is constant whether e is 0 or infinity. But
that was just some vague intuition and could have been wrong, that's
why I was looking for some published theorem. The case e=0 is
mentioned in many books (center manifold theory), but the case
e=infinity is never mentioned in this connection, as far as I could
make out. However, there are references on the "Generalized Eigenvalue
Problem", which show that there always exist "equivalence
transformations" taking infinite eigenvalues to zero eigenvalues and
vice versa. I was not 100% sure that this theorem applied, but I guess
it does. Perhaps the equivalence is thought by textbook authors to be
obvious?
One reference on the "Generalized Eigenvalue Problem" is:
Stewart, G.W. (2001): Matrix Algorithms, Volume II: Eigensystems,
SIAM, Philadelphia.
My understanding is: the cases e=0 and e=infinity are formally
equivalent, since there always exists an equivalence transformation
taking a system with an infinite eigenvalue to a system with a zero
eigenvalue.
Correct?
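A toy illustration of that equivalence in the matrix-pencil setting (my own sketch, not taken from Stewart): for the pencil A v = lambda B v with singular B, one eigenvalue sits "at infinity", and reversing the pencil (mu = 1/lambda) brings it back as a zero eigenvalue:

```python
import sympy as sp

lam, mu = sp.symbols('lambda mu')

A = sp.Matrix([[1, 0], [0, 1]])
B = sp.Matrix([[1, 0], [0, 0]])   # singular B: the pencil A - lambda*B
                                  # has one finite and one infinite eigenvalue

# det(A - lambda*B) has degree 1 only: the missing root is "at infinity"
finite_eigs = sp.solve(sp.det(A - lam*B), lam)

# reversing the pencil (mu = 1/lambda) turns infinity into zero
reversed_eigs = sp.solve(sp.det(B - mu*A), mu)
```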
>It is NOT a pleasant task to find "convenient transformations"
I spent several hours yesterday thinking again about transformations.
For every transformation I tried in order to remove the singularity,
some other singularity would appear elsewhere.
The singularity appears in (p-z)*(fx(x)-z)/(f(x)-y). I've tried things
like:
l <--> (f(x)-y)/(fx(x)-z)
m <--> (f(x)-y)/(p-z)
because in both l and m the numerators and denominators vanish at the
candidate equilibrium while my simulations seem to show that l-->0 and
m-->0.
I've also tried all manners of xd/yd, yd/zd, zd/xd, etc..
One transformation you suggested is z/(f(x)-y). However that's
divergent, since z-->z*>0 and f(x)-y-->0, so that would yield a
transformed system without a stationary state. Would that be a
problem? But perhaps one could try (z-z*)/(f(x)-y)? However, if my
reasoning is correct, f(x)-y tends to 0 faster than fx(x)-z does,
which is why I was attempting to have (f(x)-y) in the *numerator* of
my transformed variables.
After several attempts, I couldn't see how the candidate
transformations mentioned above could remove the infinite eigenvalues.
If you have other ideas for transformations, don't hesitate to fire
off!
I still have to look up this reference: Siekmann et al. (2008).
Thanks Ivo for this stimulating dialogue,
Patrick.
> I spent several hours yesterday thinking again about transformations.
> For every transformation I tried in order to remove the singularity,
> some other singularity would appear elsewhere.
>
... I needed MONTHS (fortunately occupied with other things, too) for
removing the singularity...
> The singularity appears in (p-z)*(fx(x)-z)/(f(x)-y).
> [...]
> One transformation you suggested is z/(f(x)-y). However that's
> divergent, since z-->z*>0 and f(x)-y-->0, so that would yield a
> transformed system without a stationary state. Would that be a
> problem?
This is exactly the point! I did this in my manuscript (see above),
for example. Instead of having i*=0 in the denominator, a more
complicated stationary solution for v* and w* appeared, which,
however, could be investigated. I cannot promise, though, that my
proposed transformation will do the job. Torture Maple... and good
luck!
If stationary solutions disappear you might nevertheless see more
clearly that the solutions tend to infinity, for example because the
right-hand side of one of the equations is positive all the time - as
in Gaucel et al. in the case of the "blow-up" solutions.
I think I found the trick! Consider the time rescaling
dt/ds = (x'(t))^2
to be applied to the following system:
x'(t) = f(x(t))-y(t)
y'(t) = fx(x(t))-z(t)
z'(t) = (z(t)-p1)*(p2-z(t))-(p-z(t))*(fx(x(t))-z(t))/(f(x(t))-y(t))
The resulting system has independent variable s instead of t:
X'(s) = (f(X(s))-Y(s))^3
Y'(s) = (fx(X(s))-Z(s))*(f(X(s))-Y(s))^2
Z'(s) = (Z(s)-p1)*(p2-Z(s))*(f(X(s))-Y(s))^2
-(p-Z(s))*(fx(X(s))-Z(s))*(f(X(s))-Y(s))
The eigenvalues of this transformed system at the critical point are
all zero.
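This can be double-checked symbolically (a sympy sketch, using the example f(x)=x^(1/2)): on the stationary surface f(X)=Y the Jacobian of the rescaled system is nilpotent, so all three eigenvalues vanish:

```python
import sympy as sp

X, Y, Z, p, p1, p2 = sp.symbols('X Y Z p p1 p2', positive=True)
f = sp.sqrt(X)                 # the example f(x) = x^(1/2)
fX = sp.diff(f, X)

u = f - Y                      # dt/ds = u^2: each RHS gains a factor u^2
Xs = u**3
Ys = (fX - Z)*u**2
Zs = (Z - p1)*(p2 - Z)*u**2 - (p - Z)*(fX - Z)*u

J = sp.Matrix([Xs, Ys, Zs]).jacobian([X, Y, Z])

# evaluate on the stationary surface Y = f(X), where u = 0
J0 = sp.simplify(J.subs(Y, f))
eigs = J0.eigenvals()          # expect a triple zero eigenvalue
```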
With the benefit of hindsight, it's pretty trivial. I saw a trick very
much like this one somewhere, but without the square -- the square is
here to ensure that sign(dt)=sign(ds). I can't for the life of me
remember where I saw the trick, so if you know a reference to
something like this, please do fire off.