[SciPy-user] integrate ODE to steady-state?


Zachary Pincus

Jun 24, 2008, 7:45:10 PM
to SciPy Users List
Hi all,

So, after a brief bout of the stupids (thanks Robert), I have
formulated my optimization problem as a physical system governed by an
ODE, and I wish to find the equilibrium configuration of the system.

Any thoughts on what the easiest way to do this with scipy.integrate
is? Ideally, I'd just want the solver to take as large steps as
possible until things converge, and so I don't really care about the
"time" values. One option would be to use odeint and just tell it to
integrate to a distant time-point when I'm sure things will be in
equilibrium, but that seems dorky, wasteful, and potentially incorrect.

Alternatively, I could use the ode class and keep asking it to integrate
small time-steps until the RMS change drops below a threshold. Even then,
I'd still need to choose a reasonable time-step, and the inner loop would
be in Python instead of Fortran.

Any recommendations? (Or am I again being daft? I never really took a
class in numerical methods, so sorry for the dim-bulb questions!)

Zach
_______________________________________________
SciPy-user mailing list
SciPy...@scipy.org
http://projects.scipy.org/mailman/listinfo/scipy-user

Nathan Bell

Jun 24, 2008, 8:51:57 PM
to SciPy Users List
On Tue, Jun 24, 2008 at 6:45 PM, Zachary Pincus <zachary...@yale.edu> wrote:
>
> Any thoughts on what the easiest way to do this with scipy.integrate
> is? Ideally, I'd just want the solver to take as large steps as
> possible until things converge, and so I don't really care about the
> "time" values. One option would be to use odeint and just tell it to
> integrate to a distant time-point when I'm sure things will be in
> equilibrium, but that seems dorky, wasteful, and potentially incorrect.
>

Regarding numerical methods, you should read about the "Forward Euler"
integrator, which is very simple and intuitive. For applications
where accuracy is unimportant and a reasonable time step is known a
priori, Forward Euler is sufficient.
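For concreteness, here's a minimal (untested) Forward Euler sketch; f,
dt, and the tolerance are placeholders you'd choose for your problem:

import numpy as np

def forward_euler(f, x0, dt, max_steps=100000, tol=1e-8):
    """Integrate dx/dt = f(x) with fixed steps until the state stops moving."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_steps):
        dx = dt * f(x)
        x = x + dx
        # stop once the RMS update falls below the tolerance
        if np.sqrt(np.mean(dx**2)) < tol:
            break
    return x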

Also, explicit ODE integrators (like Forward Euler and Runge-Kutta)
are generally cheap, so the dominant cost of the integration is
usually computing the time derivatives for each variable.

You are right that you'll want to take as-large-as-possible timesteps
when integrating through fictitious time. However, even though the
timescale is arbitrary, the allowable timestep is limited by both the
accuracy of the trajectory (probably not a concern in your case) and
the stability of the system. In short, there's no free lunch :)

I don't know what interface scipy's ode solvers support, but I would
suggest setting t_final to be a large value and instructing the solver
to use at most N iterations. I would use an "adaptive" method like
RK45 (or whatever scipy supports) which automatically finds a timestep
that achieves numerical stability and the desired accuracy.

Alternatively, you could use your second proposal, and terminate
relaxation when the functional is sufficiently small.
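Something like this (untested) sketch of the chunked-integration idea,
using scipy.integrate.ode with the adaptive 'vode' integrator -- the
right-hand side f, the chunk length, and the tolerance are all things
you'd tune for your problem:

import numpy as np
from scipy.integrate import ode

def relax(f, x0, chunk=10.0, max_chunks=1000, tol=1e-8):
    """Step an adaptive solver through fictitious time until convergence."""
    solver = ode(lambda t, x: f(x))  # the solver expects f(t, x)
    solver.set_integrator('vode', method='adams')  # adaptive step-size control
    solver.set_initial_value(np.asarray(x0, dtype=float), 0.0)
    x_prev = np.asarray(x0, dtype=float)
    for _ in range(max_chunks):
        x = solver.integrate(solver.t + chunk)
        if not solver.successful():
            raise RuntimeError('integration failed')
        # terminate once the state stops changing between chunks
        if np.sqrt(np.mean((x - x_prev) ** 2)) < tol:
            break
        x_prev = x
    return x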

BTW, are you using a Lennard-Jones-like potential to distribute points?
I did something similar to distribute points evenly over a mesh
surface using a simple integrator:
http://graphics.cs.uiuc.edu/~wnbell/publications/2005-05-SCA-Granular/BeYiMu2005.pdf

An interesting alternative is the CVT (centroidal Voronoi tessellation):
http://www.math.psu.edu/qdu/Res/Pic/gallery3.html

--
Nathan Bell wnb...@gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

Rob Clewley

Jun 24, 2008, 8:55:42 PM
to SciPy Users List
Zach,

It somewhat depends on your equations, but in principle you don't need
to actually integrate the ODEs at all. An equilibrium corresponds to
no motion, i.e. when the right-hand sides of all the ODEs are
zero-valued. So for a system dx/dt = f(x), where x is a vector, you
just need to solve f(x) = 0. This may have multiple solutions, of
course, if f is nonlinear. There's no scipy code that will do all of
this for you, but fsolve will do in most cases. It's best to have a
good initial guess as a starting point for the search. So, integration
can help you find that. If f is strongly nonlinear there could be many
equilibria and your task will be to identify which initial conditions
lead to which equilibria. This is not a trivial task - you may need an
exhaustive search of initial conditions (e.g. based on sampling your
phase space of x) to get started. I have some naive code that
pre-samples the phase space and then uses fsolve; it works OK on some
nonlinear problems. It's the find_fixedpoints function in PyDSTool,
but it's extremely easy to remove the PyDSTool dependence :)
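A stripped-down sketch of that presample-then-fsolve idea (the bounds,
sample count, and tolerances here are made up for illustration; this is
not the actual PyDSTool code):

import numpy as np
from scipy.optimize import fsolve

def find_equilibria(f, lo, hi, n_samples=100, tol=1e-6):
    """Run fsolve from scattered initial guesses and keep the distinct roots."""
    lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
    guesses = np.random.uniform(lo, hi, size=(n_samples, len(lo)))
    roots = []
    for x0 in guesses:
        x, info, ier, mesg = fsolve(f, x0, full_output=True)
        # keep only genuine roots we haven't already found
        if ier == 1 and np.max(np.abs(f(x))) < tol:
            if not any(np.allclose(x, r, atol=1e-4) for r in roots):
                roots.append(x)
    return roots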

If this approach turns out to be too numerically problematic, you'll
just have to go back to integrating for long times...

Rob

Lou Pecora

Jun 24, 2008, 9:05:23 PM
to SciPy Users List
I agree with Rob Clewley's approach. I suspect that finding the zeros of the vector field f(x) is the best way to go. Integration is harder and still leaves the problem of multiple equilibria. This is often expressed as finding multiple basins of attraction (each basin being the set of initial conditions that end up at a particular equilibrium point).

You may have an additional problem in that you want to find the stability of the equilibrium points. If you want a point that the system will stay at even under the influence of some local noise or perturbations, then once you find the equilibrium points you want to determine their stability. This is done quite easily by evaluating the Jacobian of the vector field (the matrix which is the "gradient" of the vector field: df(x)/dx, where f and x are vectors of the same dimension). You then find the eigenvalues of the Jacobian. If all of them have negative real parts, you have a stable equilibrium. If any has a positive real part, you have an unstable equilibrium, and the actual system will probably *not* ever end up there.

I hope that's clear and that it helps. Your problem is only as hard as f(x) is complex and nonlinear. It is not really an ODE integration problem.
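A quick (untested) sketch of that stability check, approximating the
Jacobian by central differences -- the step size eps is a guess you may
need to adjust for your problem's scale:

import numpy as np

def is_stable(f, x_eq, eps=1e-6):
    """Classify an equilibrium via the eigenvalues of a finite-difference Jacobian."""
    x_eq = np.asarray(x_eq, dtype=float)
    n = len(x_eq)
    jac = np.empty((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        # central-difference approximation of column j of df/dx
        jac[:, j] = (f(x_eq + dx) - f(x_eq - dx)) / (2 * eps)
    # stable if every eigenvalue has negative real part
    return bool(np.all(np.linalg.eigvals(jac).real < 0))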

-- Lou Pecora, my views are my own.

Nathan Bell

Jun 24, 2008, 9:21:59 PM
to lou_bo...@yahoo.com, SciPy Users List
On Tue, Jun 24, 2008 at 8:05 PM, Lou Pecora <lou_bo...@yahoo.com> wrote:
> It is not an ODE integration problem, really.

Casting the minimization of a functional as a system of ODEs is a
completely reasonable thing to do.

Zachary Pincus

Jun 24, 2008, 10:50:20 PM
to SciPy Users List
Hi all,

Thanks for the help!

Nathan:


> I don't know what interface scipy's ode solvers support, but I would
> suggest setting t_final to be a large value and instructing the solver
> to use at most N iterations. I would use an "adaptive" method like
> RK45 (or whatever scipy supports) which automatically finds a timestep
> that achieves numerical stability and the desired accuracy.
>
> Alternatively, you could use your second proposal, and terminate
> relaxation when the functional is sufficiently small.

I'll try both of these -- I hadn't thought to use the solver's limit
on iterations, which is a good idea...

Nathan:


> BTW, are you using a Lennard-Jones-like potential to distribute points?
> I did something similar to distribute points evenly over a mesh
> surface using a simple integrator:
> http://graphics.cs.uiuc.edu/~wnbell/publications/2005-05-SCA-Granular/BeYiMu2005.pdf
>
> An interesting alternative is the CVT (centroidal Voronoi tessellation):
> http://www.math.psu.edu/qdu/Res/Pic/gallery3.html

My problem is similar, actually, but in 2D. I have the outline of a
bacterium as a 2D polygon and want to lay down a coordinate system on
it, with a midline running from pole to pole (the "x-axis") and well-
defined lines normal to the midline (the "y-axis"). The classic solution
would be to use the medial axis transform, which I had been computing
with Martin Held's excellent if quirky VRONI code. However, the medial
axis is only a partial midline, it may branch, and there's no
guarantee that it is possible to find reasonable straight-line normals
to the axis (you get a clearance radius at each point instead). So I
formulated the problem as trying to find pairs of positions along the
outline of the shape where the distance between the pairs is
minimized, but adjacent points don't "clump" too much, which naturally
led to this sort of spring-system picture. (This is a very brief
explanation -- sorry if it's unclear.) It doesn't seem particularly
elegant to solve it like this, but I wanted to see if it worked at all...

Also, thanks for the references -- interesting indeed.

Rob and Lou:
> [try solving for dy/dt=0]

Good idea too! I'll try that one as well, and see which seems to work
better...


Thanks again, everyone,

Zach

Anne Archibald

Jun 24, 2008, 11:57:56 PM
to SciPy Users List
2008/6/24 Zachary Pincus <zachary...@yale.edu>:

> My problem is similar, actually, but in 2D. I have the outline of a
> bacterium as a 2D polygon and want to lay down a coordinate system on
> it, with a midline running from pole to pole (the "x-axis") and well-
> defined lines normal to the midline (the "y-axis"). The classic solution
> would be to use the medial axis transform, which I had been computing
> with Martin Held's excellent if quirky VRONI code. However, the medial
> axis is only a partial midline, it may branch, and there's no
> guarantee that it is possible to find reasonable straight-line normals
> to the axis (you get a clearance radius at each point instead). So I
> formulated the problem as trying to find pairs of positions along the
> outline of the shape where the distance between the pairs is
> minimized, but adjacent points don't "clump" too much, which naturally
> led to this sort of spring-system picture. (This is a very brief
> explanation -- sorry if it's unclear.) It doesn't seem particularly
> elegant to solve it like this, but I wanted to see if it worked at all...
>
> Also, thanks for the references -- interesting indeed.
>
> Rob and Lou:
>> [try solving for dy/dt=0]
>
> Good idea too! I'll try that one as well, and see which seems to work
> better...

I'd set it up as a numerical minimization procedure - minimizing the
"potential energy" of all those springs. If you give it a good initial
guess and are lucky, the minimizer will follow a path down to the
optimum not too different from the one your ODEs would have followed,
but since the optimizer knows you don't care how it gets there it can
be much more efficient. The danger with both approaches is that there
may be some rather bad local minimum they can get trapped in. The best
approach is to choose a decent initial guess - maybe using the medial
axis transform, since you already have the code working - so that the
nearest, most obvious solution is the real optimum. Failing that, you
can look at some of the global optimizers in OpenOpt.
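If you can write down the potential, the setup is only a few lines --
here's a rough (untested) sketch, where energy and grad are callables
you'd supply and the flattened coordinate layout is just one convenient
choice:

import numpy as np
from scipy.optimize import fmin_bfgs

def relax_springs(energy, grad, x0):
    """Minimize the spring potential energy directly instead of integrating."""
    # energy maps a flattened coordinate vector to a scalar potential;
    # grad is its gradient, i.e. minus the force on each coordinate
    x_opt = fmin_bfgs(energy, np.ravel(x0), fprime=grad, disp=False)
    return x_opt.reshape(np.shape(x0))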

Better, of course, is to find an algorithm specifically adapted to
solving your particular problem, but I gather you've already done some
considerable searching.

Anne
