Hi,
I've been using lmfit for a while now and it is by far the most concise and easiest-to-use fitting tool I've seen, thanks to everyone involved!
I was wondering about the difference between minimizing an objective function and fitting a model function, beyond the convenience of the wrapper and the returned objects. I don't expect (and haven't seen) any efficiency difference, but I'd like to ask whether I'm missing something.
My typical use case involves fitting an arbitrary number of spectra or similar data sets that share some of their parameters (e.g., if they were Gaussians, all widths would be the same but amplitude and center could vary). Using the expression attribute when setting a parameter, I can easily compare the shared vs. independent cases. Since I don't know the number of parameters beforehand, they are just created programmatically, and in the function I want to minimize I do a groupby over the data sets and finally concatenate the residuals.
In the case of Model.fit, I can still work with arbitrary parameters by 'hiding' them in **params and unpacking as above. But since the advantage of having parameters assigned automatically is then gone, and calculating the residual is really just one line taking the difference between the function and the data, I was wondering whether there's another advantage (maybe some optimization, because the Model 'sees' the underlying function, whereas the minimizer only ever gets the residuals handed back)?
Cheers
Sebastian