Fit using dogleg method, how do I use Jacobian and Hessian?


Julien Ballbé

Aug 29, 2024, 10:40:36 AM
to lmfit-py
Hi everyone, 

First, thank you for the lmfit package; I've been using it for some time and it is really great!
About my question: I am trying to fit some very simple data to a custom (Hill) function, now using the dogleg method. According to the lmfit documentation, I need to pass a Jacobian function and a Hessian function (see the enclosed script, which contains the data to fit). Yet when I pass the corresponding functions to minimize, I get the following error:
" " "
Traceback (most recent call last):

  File ~/anaconda3/envs/wBMTKAllenSDKNEST/lib/python3.11/site-packages/spyder_kernels/py3compat.py:356 in compat_exec
    exec(code, globals, locals)

  File ~/Downloads/Test_Dogleg.py:171
    result = minimize(residual_Hill,

  File ~/anaconda3/envs/wBMTKAllenSDKNEST/lib/python3.11/site-packages/lmfit/minimizer.py:2602 in minimize
    return fitter.minimize(method=method)

  File ~/anaconda3/envs/wBMTKAllenSDKNEST/lib/python3.11/site-packages/lmfit/minimizer.py:2346 in minimize
    return function(**kwargs)

  File ~/anaconda3/envs/wBMTKAllenSDKNEST/lib/python3.11/site-packages/lmfit/minimizer.py:980 in scalar_minimize
    ret = scipy_minimize(self.penalty, variables, **fmin_kws)

  File ~/anaconda3/envs/wBMTKAllenSDKNEST/lib/python3.11/site-packages/scipy/optimize/_minimize.py:729 in minimize
    res = _minimize_dogleg(fun, x0, args, jac, hess,

  File ~/anaconda3/envs/wBMTKAllenSDKNEST/lib/python3.11/site-packages/scipy/optimize/_trustregion_dogleg.py:33 in _minimize_dogleg
    return _minimize_trust_region(fun, x0, args=args, jac=jac, hess=hess,

  File ~/anaconda3/envs/wBMTKAllenSDKNEST/lib/python3.11/site-packages/scipy/optimize/_trustregion.py:175 in _minimize_trust_region
    sf = _prepare_scalar_function(fun, x0, jac=jac, hess=hess, args=args)

  File ~/anaconda3/envs/wBMTKAllenSDKNEST/lib/python3.11/site-packages/scipy/optimize/_optimize.py:402 in _prepare_scalar_function
    sf = ScalarFunction(fun, x0, args, grad, hess,

  File ~/anaconda3/envs/wBMTKAllenSDKNEST/lib/python3.11/site-packages/scipy/optimize/_differentiable_functions.py:189 in __init__
    self.H = hess(np.copy(x0), *args)

TypeError: hebbian_hill() missing 1 required positional argument: 'x'
" " "

Do you have any idea how I can modify the code to make it work?
Thank you for any help you can give me!

Best,
Julien
Test_Dogleg.py

Matt Newville

Aug 29, 2024, 3:51:13 PM
to lmfi...@googlegroups.com
Hi Julien,

I have a couple of comments. First, that is a lot of hand-written, mathematically heavy code for what ought to be "not that hard of a fit". I would ask what happens if you simply ignore all of that and do a "normal fit", allowing the fit to find its own solution.
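For concreteness, here is the kind of minimal "plain" fit I mean. The parameter names, starting values, and synthetic data are only placeholders, since I am guessing at the exact form of your Hill function:

import numpy as np
from lmfit import Parameters, minimize, fit_report

def residual_hill(params, x, data):
    # Hill-type model: amplitude * x**n / (halfpoint**n + x**n)
    v = params.valuesdict()
    model = v['amplitude'] * x**v['ncoef'] / (v['halfpoint']**v['ncoef'] + x**v['ncoef'])
    return model - data

# synthetic stand-in data
xdat = np.linspace(0, 10, 101)
ydat = 3.0 * xdat**2.5 / (4.0**2.5 + xdat**2.5)
ydat += np.random.default_rng(1).normal(scale=0.02, size=xdat.size)

params = Parameters()
params.add('amplitude', value=1.0, min=0)
params.add('halfpoint', value=1.0, min=0)
params.add('ncoef', value=1.0, min=0)

result = minimize(residual_hill, params, args=(xdat, ydat))
print(fit_report(result))

If something like that converges sensibly on your data, it would suggest the extra machinery is not needed.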

A second point is that it sure looks like you are using a fitting parameter (x0) as a discrete value.  That can cause problems - and there really is not anything we can do about it.  See https://lmfit.github.io/lmfit-py/faq.html#can-parameters-be-used-for-array-indices-or-discrete-values for more details.
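Something like this toy residual shows what I mean; it is my own sketch, not your actual function. Because x0 only decides which side of the breakpoint each data point falls on, the residual is piecewise-constant in x0, its derivative is zero almost everywhere, and gradient-based solvers have no reason to move it away from its starting value.

import numpy as np

def residual_with_breakpoint(params, x, data):
    v = params.valuesdict()
    # x0 is only used as a discrete breakpoint: the model does not change
    # until x0 crosses one of the data points.
    model = np.where(x < v['x0'], v['baseline'], v['baseline'] + v['step'])
    return model - data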

A third point is that it sure looks like you are fitting some combination of power laws.   That can also cause trouble, especially if the parameters are very far from reasonable values.  I do not know if that is happening in your case or not.    Are you sure that is the right model for your data?

You could try fitting the logarithm of your data to the logarithm of your model (well, except that you have many values that are 0 -- I don't know what to tell you for that). That sometimes helps.
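One possible shape for such a residual is sketched below, purely as an illustration; it simply drops the zero-valued points rather than really solving that problem:

import numpy as np

def log_residual_hill(params, x, data):
    v = params.valuesdict()
    model = v['amplitude'] * x**v['ncoef'] / (v['halfpoint']**v['ncoef'] + x**v['ncoef'])
    good = data > 0                      # fixed mask: leave the zeros out
    return np.log(data[good]) - np.log(np.maximum(model[good], 1e-300))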

Anyway, I would ask whether you really need all that code just to do such a fit?




--
--Matt Newville <newville at cars.uchicago.edu> 630-327-7411

Julien Ballbé

Aug 30, 2024, 8:00:12 AM
to lmfit-py
Hi Matt,

Thank you for your answer.
I realize the fitting function looks too complicated for the data shown here. The reason is that the fit has to handle more complex data that can be either convex or concave, with a larger or smaller first jump and a more or less sparse distribution of points along the x-axis, which justifies the complex target function and a defined way of selecting the initial conditions (not shown here; I just copy-pasted the corresponding values to make the example clearer).
The fit generally performs very well (with the default method "leastsq"), but when I inspect some of the failing cases, they are usually stuck at the initial conditions. After looking into the reasons that can happen, I tried other fitting algorithms. With 'least_squares', the fit performed well and did not get stuck at the initial conditions, even with the x0 parameter treated as a breakpoint in the function. To be more consistent within our lab, I tried implementing the fit with the "dogleg" method (which other people in the lab use, in another programming language, and for whom it works perfectly).
My question was more about how to implement this method, in particular how to supply the Jacobian and Hessian callables that the dogleg method requires.
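In case it helps clarify what I am after, here is a minimal, self-contained sketch of what I currently understand the dogleg path to require. This is only my reading of the traceback, so please treat the signatures as an assumption, and the finite-difference derivatives as placeholders for the analytical ones in my script: the jac= and hess= keywords seem to be forwarded to scipy.optimize.minimize, which then calls them with the 1-D array of varying parameter values (in the order the Parameters were added), and they should return the gradient and Hessian of the scalar penalty (by default the sum of squared residuals), not derivatives of the model itself.

import numpy as np
from lmfit import Parameters, minimize, fit_report

# Synthetic stand-in data (placeholders for my real data).
xdat = np.linspace(0.1, 10.0, 101)
ydat = 3.0 * xdat**2.5 / (4.0**2.5 + xdat**2.5)
ydat += np.random.default_rng(0).normal(scale=0.02, size=xdat.size)

def hill_from_vals(vals, x):
    # Hill curve from a flat array vals = [amplitude, halfpoint, ncoef].
    amp, half, n = vals
    return amp * x**n / (half**n + x**n)

def residual_hill(params, x, data):
    # Vector residual for lmfit, built on the same flat-array model.
    v = params.valuesdict()
    vals = np.array([v['amplitude'], v['halfpoint'], v['ncoef']])
    return hill_from_vals(vals, x) - data

def penalty_jac(vals):
    # Gradient of sum(residual**2) w.r.t. vals, by forward differences
    # (standing in for the analytical gradient).
    r0 = hill_from_vals(vals, xdat) - ydat
    f0 = np.sum(r0**2)
    eps = 1e-7
    grad = np.empty(len(vals))
    for i in range(len(vals)):
        step = np.array(vals, dtype=float)
        step[i] += eps
        ri = hill_from_vals(step, xdat) - ydat
        grad[i] = (np.sum(ri**2) - f0) / eps
    return grad

def penalty_hess(vals):
    # Gauss-Newton approximation 2 * J^T J to the Hessian of the penalty,
    # which stays positive (semi-)definite, as the dogleg solver needs.
    r0 = hill_from_vals(vals, xdat) - ydat
    eps = 1e-7
    jac_r = np.empty((xdat.size, len(vals)))
    for i in range(len(vals)):
        step = np.array(vals, dtype=float)
        step[i] += eps
        jac_r[:, i] = (hill_from_vals(step, xdat) - ydat - r0) / eps
    return 2.0 * jac_r.T @ jac_r

# No min/max bounds on purpose: with bounds, lmfit transforms the variables
# internally, and the derivatives above would no longer apply directly to the
# values scipy passes in.
params = Parameters()
params.add('amplitude', value=2.0)
params.add('halfpoint', value=3.0)
params.add('ncoef', value=2.0)

result = minimize(residual_hill, params, args=(xdat, ydat),
                  method='dogleg', jac=penalty_jac, hess=penalty_hess)
print(fit_report(result))

Is that the intended way to wire this up, or is there a more direct lmfit-level mechanism that I am missing?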

Thank you!
Julien
