I am writing with a question regarding the optional `ftol` argument to `minimize`. When working with the example below (from
https://lmfit.github.io/lmfit-py/examples/example_fit_with_bounds.html), I found that even when `ftol` was set to an enormous value (say `ftol=10000000`), so large that one would expect the algorithm to terminate immediately, the code still took 4 function evaluations to produce a fit.
from numpy import exp, pi, sign, sin
from lmfit import Parameters, minimize, report_fit

p_true = Parameters()
p_true.add('amp', value=14.0)
p_true.add('period', value=5.4321)
p_true.add('shift', value=0.12345)
p_true.add('decay', value=0.01000)

def residual(pars, x, data=None):
    argu = (x * pars['decay'])**2
    shift = pars['shift']
    if abs(shift) > pi/2:
        shift = shift - sign(shift)*pi
    model = pars['amp'] * sin(shift + x/pars['period']) * exp(-argu)
    if data is None:
        return model
    return model - data

# x and data are set up as in the linked example
fit_params = Parameters()
fit_params.add('amp', value=13.0, max=20, min=0.0)
fit_params.add('period', value=2, max=10)
fit_params.add('shift', value=0.0, max=pi/2., min=-pi/2.)
fit_params.add('decay', value=0.02, max=0.10, min=0.00)

out = minimize(residual, fit_params, args=(x,), kws={'data': data}, ftol=10000000)
fit = residual(out.params, x)
report_fit(out, show_correl=True, modelpars=p_true)
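For what it's worth, the same behaviour shows up with a minimal one-parameter problem passed straight to `scipy.optimize.leastsq`, which is what `minimize` wraps by default (this reduction is my own, not from the linked page):

```python
from scipy.optimize import leastsq

# One-parameter least-squares problem: residual p - 3, solution p = 3.
def resid(p):
    return p - 3.0

# Even with an enormous ftol, leastsq still evaluates the residual at the
# starting point, and once more per parameter for a finite-difference
# Jacobian, before any convergence test can be applied.
best, cov, infodict, mesg, ier = leastsq(resid, [0.0], ftol=1e7,
                                         full_output=True)
print(infodict['nfev'])  # more than one evaluation despite the huge ftol
```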
What is the reason for these preliminary iterations? I would think that since `ftol` is set so high, the convergence criterion would be met immediately, so there would be no need for these extra function evaluations.
Thanks,
Gabriel Myers