Model fitting without providing weights but setting scale_covar=False


Maximilian Philipp

Dec 12, 2021, 7:12:42 PM
to lmfit-py
Context:
I am currently attending some physics labs, and whenever I need to fit data I use the standard Model fitting method:
out = mod.fit(y_fit, pars, **independent_vars)
Currently I am trying to fit exponentials, linearized exponential data, and functions of the form a*x/(x^2+b^2). While reading through the documentation I noticed that there are several different ways the uncertainties of the parameters can be calculated.
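
For concreteness, here is a minimal, self-contained sketch of the kind of fit I mean for the a*x/(x^2+b^2) case. The model function name, starting values, and synthetic data below are placeholders I made up for this post, not my actual lab data:

    import numpy as np
    from lmfit import Model

    # Placeholder model of the form a*x / (x^2 + b^2)
    def resonance(x, a, b):
        return a * x / (x**2 + b**2)

    # Synthetic data standing in for the lab measurements
    rng = np.random.default_rng(0)
    x = np.linspace(0.1, 10, 50)
    y_fit = resonance(x, a=2.0, b=1.5) + rng.normal(scale=0.02, size=x.size)

    mod = Model(resonance)
    pars = mod.make_params(a=1.0, b=1.0)

    # The "default" call: no weights, scale_covar left at its default of True
    out = mod.fit(y_fit, pars, x=x)
    print(out.fit_report())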

The first being:
Using the default, specifying nothing:
out = mod.fit(y_fit, pars, **independent_vars)

The second one being:
Specifying the weights and setting scale_covar=False:
    weights = 1/(self.data[f"d{y}"])
    weights = weights.values.reshape(1, -1)
    out = mod.fit(y_fit, pars, weights=weights, scale_covar=False, **independent_vars)

The third one being:
Specifying only the weights and leaving scale_covar=True
    weights = 1/(self.data[f"d{y}"])
    weights = weights.values.reshape(1, -1)
    out = mod.fit(y_fit, pars, weights=weights, **independent_vars)


I was wondering what would happen if I do not specify weights and set scale_covar=False:
    out = mod.fit(y_fit, pars, scale_covar=False, **independent_vars)


If I understand the first three correctly:
The first scales the covariance matrix by the reduced chi-squared,
self.result.covar *= self.result.redchi
(equivalent to rescaling the measurement uncertainties so that the reduced chi-squared would equal 1), and it should only be used if the residuals are Gaussian distributed around zero.

The second should be used if the measurement uncertainties are known and you are sure the error bars have the correct size.

The third should be used if the measurement uncertainties are known but you are not confident in the size of the error bars.
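
To check my understanding, I put together the sketch below that runs the same fit with all four combinations and compares the reported uncertainties. Again, the data and the assumed measurement errors (sigma_y) are made up for illustration; if I read the source correctly, with weights given the scale_covar=True error should come out as the scale_covar=False error times sqrt(redchi):

    import numpy as np
    from lmfit import Model

    def resonance(x, a, b):
        return a * x / (x**2 + b**2)

    rng = np.random.default_rng(1)
    x = np.linspace(0.1, 10, 50)
    sigma_y = 0.05 * np.ones_like(x)             # assumed measurement uncertainties
    y_fit = resonance(x, a=2.0, b=1.5) + rng.normal(scale=sigma_y)

    mod = Model(resonance)

    cases = {
        "default (no weights, scale_covar=True)": dict(),
        "no weights, scale_covar=False":          dict(scale_covar=False),
        "weights, scale_covar=True":              dict(weights=1/sigma_y),
        "weights, scale_covar=False":             dict(weights=1/sigma_y, scale_covar=False),
    }

    results = {}
    for label, kwargs in cases.items():
        pars = mod.make_params(a=1.0, b=1.0)     # fresh starting values for each fit
        results[label] = mod.fit(y_fit, pars, x=x, **kwargs)
        out = results[label]
        print(f"{label:42s} a = {out.params['a'].value:.4f} "
              f"+/- {out.params['a'].stderr:.4f}  redchi = {out.redchi:.3f}")

    # With weights, scale_covar=True multiplies the covariance by redchi, so the
    # reported stderr differs by a factor of sqrt(redchi):
    scaled = results["weights, scale_covar=True"]
    unscaled = results["weights, scale_covar=False"]
    print(scaled.params['a'].stderr,
          unscaled.params['a'].stderr * np.sqrt(unscaled.redchi))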

I played around with the different ways of performing the fit and, as expected, got different answers for the fit parameters and their uncertainties. According to the lab assistant, the default gave too small uncertainties, and the only method which gave big enough error bars for the parameters was the one without specifying any weights and setting scale_covar=False. How do I interpret this method correctly?

