[SciPy-User] odeint with saturation?


Brian Blais

Oct 12, 2010, 8:12:41 AM
to SciPy Users List
Hello,

I am using odeint to solve some diff eqs, and it works great, but some of my cases have saturating values. In many cases a value can't go negative (if it does, it should just be set equal to zero). It doesn't seem as if odeint can do this, but is there an easy or preferred way of solving that kind of system?


thanks!

Brian Blais

--
Brian Blais
bbl...@bryant.edu
http://web.bryant.edu/~bblais
http://bblais.blogspot.com/

_______________________________________________
SciPy-User mailing list
SciPy...@scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user

Tveraa, Torkild

Oct 12, 2010, 9:10:04 AM
to SciPy Users List
Dear All,

I have been able to use the optimize.leastsq function to minimize a given function (see below), but since my data is sparse I have convergence problems and would ideally like to put bounds on the parameters. If I have understood correctly, this can be done with the optimize.fmin_l_bfgs_b function, but I am unable to figure out how. Some help & hints would be most appreciated :-)

Cheers,
Torkild

-------------------------------------------------------
import numpy
import pylab
from scipy import *
from scipy import optimize

## This is y-data:
y_data = numpy.array([0.2867, 0.1171, -0.0087, 0.1326, 0.2415, 0.2878, 0.3133,
                      0.3701, 0.3996, 0.3728, 0.3551, 0.3587, 0.1408, 0.0416,
                      0.0708, 0.1142, 0, 0, 0])

## This is x-data:
t = numpy.array([67, 88, 104, 127, 138, 160, 169, 188, 196, 215, 240, 247,
                 271, 278, 303, 305, 321, 337, 353])

## This is the equation:
fitfunc = lambda p, x: p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(t-p[3])))) + (1/(1+exp(p[4]*(t-p[5])))) -1)

##
errfunc = lambda p, x, y: fitfunc(p,x) -y

guess = [0, max(y_data), 0.1, 140, -0.1, 270]

bounds = [(-0.2, 0.1),(0.1,0.97), (0.05,0.8), (120,190), (-0.8, -0.05), (200,300) ]

## This seems to work ok:
p2, success = optimize.leastsq(errfunc, guess, args=(t, y_data), full_output=0)
print 'Estimates from leastsq \n', p2, success


## But this does not:
best, val, d = optimize.fmin_l_bfgs_b(errfunc, guess, bounds=bounds, args=(t, y_data), iprint=2)

Skipper Seabold

Oct 14, 2010, 5:21:22 PM
to SciPy Users List

The minimization routines in fmin, I believe, expect a function that
maps from the parameter vector to a scalar. So you need to tell
fmin_l_bfgs_b that you want to minimize the sum of squared errors;
optimize.leastsq assumes this. So just define one more function that
sums the squared errors and minimize it:

errfuncsumsq = lambda p, x, y: np.sum(errfunc(p,x,y)**2)

Now, run it without bounds to make sure we get the same thing

boundsnone = [(None,None)]*6

Notice that you also have to tell fmin_l_bfgs_b to approximate the
gradient or else it assumes that your objective function also returns
its gradient

best, val, d = optimize.fmin_l_bfgs_b(errfuncsumsq, guess,
approx_grad=True, bounds=boundsnone, args=(t, y_data), iprint=2)

p2
array([ 6.79548883e-02, 3.68922503e-01, 7.55565728e-02,
1.41378227e+02, 2.91307814e+00, 2.70608242e+02])

best
array([ 6.79585333e-02, -2.33026316e-01, -7.55409880e-02,
1.41388265e+02, -1.36069434e+00, 2.70160779e+02])

Cheers,

Skipper

Skipper Seabold

Oct 14, 2010, 5:32:42 PM
to SciPy Users List

I just realized that these don't come up with the same thing. I don't
have an answer for why yet.

Skipper Seabold

Oct 14, 2010, 5:58:50 PM
to SciPy Users List

Oh,

ret = optimize.leastsq(errfunc, guess, args=(t,y_data))

ret2 = optimize.fmin_l_bfgs_b(errfuncsumsq, guess, approx_grad=True,
bounds=boundsnone, args=(t, y_data), iprint=2)

fitfunc(ret[0],t)
array([ 0.0690421 , 0.0731951 , 0.08481868, 0.14388978, 0.199337 ,
0.30971974, 0.33570587, 0.3602918 , 0.36414477, 0.36777158,
0.36874788, 0.36881958, 0.14080121, 0.06794499, 0.06795339,
0.0679536 , 0.0679545 , 0.06795477, 0.06795485])

fitfunc(ret2[0],t)
array([ 0.06904625, 0.07319943, 0.0848205 , 0.14386744, 0.19929593,
0.30968735, 0.3356897 , 0.36029973, 0.3641578 , 0.36779021,
0.36876834, 0.3688402 , 0.14077023, 0.06795562, 0.06795703,
0.06795724, 0.06795815, 0.06795842, 0.0679585 ])

errfuncsumsq(ret[0], t, y_data)
0.079297668259408899

errfuncsumsq(ret2[0], t, y_data)
0.079298042836826454

Sebastian Haase

Oct 15, 2010, 4:31:22 AM
to SciPy Users List

Very nice example !
However, is it correct that, *within* the given bounds, the result is
just a constant?
>>> ret3 = optimize.fmin_l_bfgs_b(errfuncsumsq, guess, approx_grad=True, bounds=bounds, args=(t, y_data), iprint=2)
>>> ret3
([ 1.00000000e-01 1.00000000e-01 5.00000000e-02 1.39979309e+02
-5.00000000e-02 2.70003604e+02], 0.55408092, {'warnflag': 0,
'task': 'CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL', 'grad':
array([-4.41837799, 1.03037829, 0. , 0. , 0.
, 0. ]), 'funcalls': 2})
>>>
>>> errfuncsumsq(ret3[0], t, y_data)
0.55408092
>>>
>>>
>>> bounds
[(-0.2, 0.1), (0.1, 0.97), (0.05, 0.8), (120, 190), (-0.8, -0.05), (200, 300)]
>>> fitfunc(ret3[0],t)
[ 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1
0.1 0.1 0.1 0.1]
>>>

(Note: fitfunc above used the wrong 't' instead of 'x'; correct is:
>>> fitfunc = lambda p, x: p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(x-p[3])))) + (1/(1+exp(p[4]*(x-p[5])))) -1)
)

Thanks,
Sebastian Haase

Skipper Seabold

Oct 15, 2010, 11:53:41 AM
to SciPy Users List

I am far from an expert on (constrained) optimization, but it looks
like, given the bounds, the solver doesn't know where to go: the
function is flat in the region of the (bounded) starting parameters.

from scipy.optimize import approx_fprime

ret3 = optimize.fmin_l_bfgs_b(errfuncsumsq, guess, bounds=bounds,
approx_grad=True, args=(t, y_data), iprint=2)

approx_fprime(ret3[0], errfuncsumsq, 1e-8, *(t, y_data))


array([-4.41837799, 1.03037829, 0. , 0. , 0.
, 0. ])

approx_fprime(guess, errfuncsumsq, 1e-8, *(t, y_data))
array([ -1.40430234e+01, 8.29307161e+00, 2.48736942e-01,
2.06907380e-02, -1.91057303e+00, -3.60373953e-03])

Jonathan Stickel

Oct 15, 2010, 11:55:42 AM
to scipy...@scipy.org, bbl...@bryant.edu
On 10/12/10 11:00 , scipy-use...@scipy.org wrote:
> Date: Tue, 12 Oct 2010 08:12:41 -0400
> From: Brian Blais<bbl...@bryant.edu>
> Subject: [SciPy-User] odeint with saturation?
> To: SciPy Users List<scipy...@scipy.org>

>
> Hello,
>
> I am using odeint to solve some diff eqs, and it works great, but
> some of my cases have saturating values. In many cases a value can't
> go negative (if it does, it should just be set equal to zero). It
> doesn't seem as if odeint can do this, but is there an easy or
> preferred way of solving that kind of system?
>
>
> thanks!
>
> Brian Blais
>

Brian

Maybe you have figured this out on your own already, but I have one
suggestion. In your function that defines the set of ODEs, do something
like:

...
if var < eps and dvar < 0:
    dvar = 0
...
return [array of rate variables including dvar]

Where 'var' is the variable you want to prevent being negative and
'dvar' is the rate for var calculated earlier in your function.
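For completeness, here is one runnable sketch of this clamping idea; the one-variable system, the constant decay rate, the eps threshold, and the hmax value are all made up for illustration:

```python
import numpy as np
from scipy.integrate import odeint

eps = 1e-8   # threshold below which the variable is treated as zero

def rhs(y, t):
    var = y[0]
    dvar = -0.5          # a made-up constant decay rate
    if var < eps and dvar < 0:
        dvar = 0.0       # clamp: don't let var go (further) negative
    return [dvar]

t = np.linspace(0, 10, 101)
# hmax keeps the solver from stepping far past the discontinuity
sol = odeint(rhs, [1.0], t, hmax=0.1)
```

The solution decays linearly toward zero and then stays (approximately) there instead of going negative; because the right-hand side is discontinuous, the solver may overshoot zero slightly before the clamp takes effect.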

HTH,
Jonathan

P.S. I found that the effort to learn how to use scipy.integrate.ode
was worth it. It provides more control and a slightly better solver
than scipy.integrate.odeint. YMMV.

Brian Blais

Oct 15, 2010, 1:58:42 PM
to Jonathan Stickel, scipy...@scipy.org
On Oct 15, 2010, at 11:55 AM, Jonathan Stickel wrote:

>> some of my cases have saturating values. In many cases a value can't
>> go negative (if it does, it should just be set equal to zero). It
>

> ...
> if var < eps and dvar < 0:
> dvar = 0
> ...

thanks! so the obvious works...I feel a little sheepish now... :)

>
> P.S. I found that the effort to learn how to use scipy.integrate.ode
> was worth it. It provides more control and a slightly better solver
> than scipy.integrate.odeint. YMMV.

what's the difference? it looks like ode lets you choose the integrator, and has some fine-grained options dependent on integrator method. is that right, or is there another advantage to using ode instead of odeint?

thanks,

bb


Jonathan Stickel

Oct 15, 2010, 3:30:24 PM
to Brian Blais, scipy...@scipy.org
On 10/15/10 11:58 , Brian Blais wrote:
> On Oct 15, 2010, at 11:55 AM, Jonathan Stickel wrote:
>
>>> some of my cases have saturating values. In many cases a value
>>> can't go negative (if it does, it should just be set equal to
>>> zero). It
>>
>> ... if var < eps and dvar < 0: dvar = 0 ...

>
> thanks! so the obvious works...I feel a little sheepish now... :)
>
>>
>> P.S. I found that the effort to learn how to use
>> scipy.integrate.ode was worth it. It provides more control and a
>> slightly better solver than scipy.integrate.odeint. YMMV.
>
> what's the difference? it looks like ode lets you choose the
> integrator, and has some fine-grained options dependent on integrator
> method. is that right, or is there another advantage to using ode
> instead of odeint?
>

The help strings indicate that odeint uses LSODA and ode uses VODE.
Based on a little research, it seems that VODE is a slightly newer and
improved algorithm. The primary difference that I have observed is that
ode does a bit better with its automatic time step determination.

I also like the way that ode is typically used, i.e. in a loop. If I
use a while loop, I can stop based on some criteria other than time,
e.g. a variable's value or rate of convergence. I suppose odeint could
be used like this as well (in a loop), but it doesn't seem that it was
intended to be used this way.
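As an illustration of that loop style, here is a minimal sketch (the exponential-decay right-hand side and the 0.01 stopping threshold are invented for this example): the integration stops when the variable's value crosses a threshold, not at a preset end time.

```python
import numpy as np
from scipy.integrate import ode

def rhs(t, y):
    return [-0.5 * y[0]]   # made-up exponential decay

solver = ode(rhs)
solver.set_integrator('vode', method='adams')
solver.set_initial_value([1.0], 0.0)

dt = 0.1
# stop on a value criterion instead of a fixed end time
while solver.successful() and solver.y[0] > 0.01:
    solver.integrate(solver.t + dt)
```

After the loop, solver.t holds the (approximate) time at which the variable first dropped below the threshold.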

Jonathan

Tveraa, Torkild

Oct 14, 2010, 6:39:11 PM
to SciPy Users List

Thanks a lot for your help and insight Skipper.

Could it be that the difference between the two simply arises because it is a bit ambiguous to estimate 6 parameters from just 19 observations? At least that is my gut feeling :-) The data are a bit noisy (see attached figure), so the results might vary accordingly?

Thanks,
Torkild

Estimates from leastsq:
[ 6.79548906e-02 3.68922902e-01 7.55558888e-02 1.41378372e+02
2.25898321e+00 2.70494848e+02]

Estimates from fmin_l_bfgs_b (without bounds):
[ -2.24017466e-01 1.51747173e+00 1.62043085e-02 1.20009182e+02
1.27344545e-02 2.10006033e+02]

Estimates from fmin_l_bfgs_b (with bounds):
(if you try this please note that bounds for p[4] should be positive)
[ 7.56582544e-02 3.72058134e-01 7.57033694e-02 1.42823970e+02
7.99990970e-01 2.50821346e+02]

[Attachment: plot_of_data.png]

Tveraa, Torkild

Oct 15, 2010, 5:05:05 AM
to SciPy Users List

-----Original Message-----
From: scipy-use...@scipy.org [mailto:scipy-use...@scipy.org] On Behalf Of Sebastian Haase
Sent: 15. oktober 2010 10:31
To: SciPy Users List
Subject: Re: [SciPy-User] beginner's question regarding optimize.fmin_l_bfgs_b


Yep, thanks - for me it was a very, very helpful lesson.

I realize the equation here is somewhat pulled out of thin air, sorry. It is slightly modified from Beck et al. 2006, "Improved monitoring of vegetation dynamics at very high latitudes: A new method using MODIS NDVI", Remote Sensing of Environment 100:321-334. The first parameter (p[0]) estimates the minimum NDVI/EVI, parameter 2 estimates the maximum NDVI, parameters 3 & 5 estimate the rate of increase in EVI in spring and the rate of decrease in NDVI in fall, and parameters 4 & 6 estimate the inflection points in spring and fall (cf. Fig 3 in Beck et al.).

Thanks to you both,

Torkild

Tveraa, Torkild

Oct 15, 2010, 7:45:17 PM
to SciPy Users List
-----Original Message-----
From: scipy-use...@scipy.org [mailto:scipy-use...@scipy.org] On Behalf Of Skipper Seabold
Sent: 15. oktober 2010 17:54
To: SciPy Users List
Subject: Re: [SciPy-User] beginner's question regarding optimize.fmin_l_bfgs_b


A colleague and R-guru came up with what seems to be a solution:

fitfunc = lambda p, x: (p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(x-p[3])))) + (1/(1+exp(p[4]*(x-p[5])))) -1))

errfunc = lambda p, x, y_data: np.sum((y_data - fitfunc(p, x))**2)

ret = optimize.fmin_l_bfgs_b(errfunc, guess, bounds=bounds, approx_grad=True, args=(t, y_data), iprint=-1)
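Putting the thread's pieces together, a self-contained version might look like the following sketch. Note that x is passed into errfunc explicitly rather than relying on a global, and the guess and bounds for p[4] are made positive, per the note earlier in the thread:

```python
import numpy as np
from scipy import optimize

y_data = np.array([0.2867, 0.1171, -0.0087, 0.1326, 0.2415, 0.2878, 0.3133,
                   0.3701, 0.3996, 0.3728, 0.3551, 0.3587, 0.1408, 0.0416,
                   0.0708, 0.1142, 0.0, 0.0, 0.0])
t = np.array([67, 88, 104, 127, 138, 160, 169, 188, 196, 215, 240, 247,
              271, 278, 303, 305, 321, 337, 353], dtype=float)

def fitfunc(p, x):
    # double-logistic curve (after Beck et al. 2006)
    return p[0] + (p[1] - p[0]) * (1.0 / (1.0 + np.exp(-p[2] * (x - p[3])))
                                   + 1.0 / (1.0 + np.exp(p[4] * (x - p[5]))) - 1.0)

def errfunc(p, x, y):
    # scalar objective: sum of squared residuals
    return np.sum((y - fitfunc(p, x)) ** 2)

guess = [0.0, max(y_data), 0.1, 140.0, 0.1, 270.0]
bounds = [(-0.2, 0.1), (0.1, 0.97), (0.05, 0.8),
          (120, 190), (0.05, 0.8), (200, 300)]

best, val, info = optimize.fmin_l_bfgs_b(errfunc, guess, bounds=bounds,
                                         approx_grad=True, args=(t, y_data))
```

best holds the bounded parameter estimates and val the achieved sum of squared errors; every component of best should respect its bounds.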
