Dear lmfit community,

I am a new user and have a question about iteratively updating weights during least-squares fitting.

I want to fit a line using both x and y errors. To tackle this, I would like to incorporate the x error into the weights, producing an additional 'effective' error in y. So my weights become

    w = 1 / sigma_eff^2,  with  sigma_eff^2 = sigma_y^2 + (df/dx)^2 * sigma_x^2,

where df/dx is the partial derivative of the fit function with respect to x, and sigma_x is the error estimate in x. As the partials will contain best-fit parameters, the fitting process then becomes iterative.
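To show the iteration I have in mind concretely for a straight line y = a + b*x, here is a minimal numpy-only sketch with made-up data, independent of any fitting library (the iteration cap and tolerance are my own choices):

```python
import numpy as np

# Toy data with errors in both x and y (made-up values for illustration)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 2.1, 3.9, 6.2, 7.9])
sig_x = np.full_like(x, 0.05)
sig_y = np.full_like(y, 0.10)

a, b = 0.0, 1.0                       # starting guesses
for _ in range(20):                   # iteration cap; usually converges quickly
    # effective variance: sigma_eff^2 = sigma_y^2 + (df/dx)^2 * sigma_x^2
    # for y = a + b*x the partial df/dx is just b
    var_eff = sig_y**2 + b**2 * sig_x**2
    w = 1.0 / var_eff                 # weights for this iteration

    # weighted linear least squares in closed form
    W = np.sum(w)
    Sx, Sy = np.sum(w * x), np.sum(w * y)
    Sxx, Sxy = np.sum(w * x * x), np.sum(w * x * y)
    delta = W * Sxx - Sx**2
    a_new = (Sxx * Sy - Sx * Sxy) / delta
    b_new = (W * Sxy - Sx * Sy) / delta

    converged = abs(a_new - a) < 1e-12 and abs(b_new - b) < 1e-12
    a, b = a_new, b_new
    if converged:
        break

print(a, b)
```

That is the whole scheme: fit, recompute the weights from the new slope, refit, until the parameters stop changing.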
Initially I am attempting to do this with simple polynomials, for which lmfit appears to be overkill, but I may need non-linear or more complex fits in the future.
Is there a straightforward way to incorporate this iterative weighting into an lmfit-based workflow?
Furthermore, is there a simple way to save the weights for each iteration?
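What I picture is something like a small closure that records every set of weights it computes, along the lines of this plain-Python sketch (the names are my own, and this is not lmfit API):

```python
import numpy as np

def make_weight_recorder():
    """Return a weight function that logs each set of weights it computes."""
    history = []   # one entry per call, i.e. per fit iteration

    def weights(b, sig_x, sig_y):
        # effective variance for a line: sigma_eff^2 = sigma_y^2 + b^2 * sigma_x^2
        w = 1.0 / (sig_y**2 + b**2 * sig_x**2)
        history.append(w.copy())
        return w

    return weights, history

weights, history = make_weight_recorder()
sig_x = np.array([0.1, 0.1])
sig_y = np.array([0.2, 0.2])
weights(1.0, sig_x, sig_y)   # first "iteration", slope b = 1
weights(2.0, sig_x, sig_y)   # second "iteration", slope b = 2
print(len(history))          # two saved weight sets
```

The objective function would then call `weights(...)` instead of computing them inline, and `history` holds the weights from every iteration afterwards.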
Finally, is it possible to calculate the partial derivatives automatically for a given fit function? I've looked at using sympy, but I don't know whether I can incorporate sympy-generated functions into lmfit.
Hi Matt,

Thanks for the quick reply. Here's a quick example that I hope better illustrates what I am trying to do.

import numpy as np
from lmfit import Parameters, minimize, report_fit

# Define x and y data, and associated errors
x_data = np.array([0.130638, 0.001421, 0.117021, 0.033787, 0.034766, 0.033745])
y_data = np.array([3.07e+00, 1.00e-07, 2.82e+00, 6.40e-01, 7.50e-01, 8.20e-01])
x_err = np.array([3.12668e-04, 1.62169e-04, 2.16837e-04, 9.59135e-05, 0.000119049, 7.59220e-05])
y_err = np.array([0.12, 0.10, 0.11, 0.13, 0.11, 0.12])

# Define the objective function to be minimised
def obj_fun(params, x_data, y_data, x_err, y_err):
    p = params.valuesdict()
    a = p['a']
    b = p['b']
    # Could these partials be calculated automatically from the fit
    # function and parameters, using sympy?
    partial_x = b   # partial derivative wrt x of: y = a + b*x
    partial_y = 1.
    # effective variance accounting for the error in x
    var_eff = (partial_y**2)*y_err**2 + (partial_x**2)*x_err**2
    delta_calc_obs = (a + b*x_data) - y_data
    # scale residuals by 1/sigma_eff (note the sqrt, so the summed squares
    # give chi-square), not by 1/var_eff
    return delta_calc_obs / np.sqrt(var_eff)
# Initialise Parameters
params = Parameters()
params.add_many(('a', 1.), ('b', 1.))

# Perform fit (the method name is 'leastsq', not 'lstsq')
out = minimize(fcn=obj_fun, params=params, method='leastsq', args=(x_data, y_data, x_err, y_err))
report_fit(out)

This produces a fit that, to my untrained eye, seems reasonable. I'm not sure, however, that the fit statistics are then being calculated using the calculated effective weights. To save the intermediate weights, I suspect I need to separate the weight calculation out of my objective function into its own function, which also appends them to another variable.

In the long run, I would like to be able to define a fit function with a string, e.g., "a + b*x" or "a + b*x + c*x**2", then calculate the partial derivative with sympy and perform the fit as above. I have managed to produce a partial derivative that can be evaluated at multiple values of x if needed (e.g., df/dx of a + b*x + c*x**2 is b + 2*c*x):
import lmfit
import sympy
from sympy.abc import x

FIT_FUNCTION = 'a + b*x + c*x**2'
PARAMS = lmfit.create_params(a=1.0, b=1.0, c=1.0)

ff_s = sympy.parsing.sympy_parser.parse_expr(FIT_FUNCTION)
ff_pd_x_s = sympy.diff(ff_s, x)  # symbolic partial derivative wrt x
# Partial derivative capable of accepting vectors of values.
# (free_symbols is a set, so sort it to pin down the argument order.)
ff_pd_x = sympy.lambdify(sorted(ff_pd_x_s.free_symbols, key=lambda s: s.name),
                         ff_pd_x_s, 'numpy')
But I don't know if it's possible to remove the need to explicitly define the parameters within the objective function.
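Something like the following is the shape I am hoping for: the parameter names are discovered from the expression rather than hard-coded, and both the fit function and its x-derivative come from sympy (the helper names are my own, and I assume sorting the free symbols pins down lambdify's argument order):

```python
import numpy as np
import sympy
from sympy.abc import x

FIT_FUNCTION = 'a + b*x + c*x**2'

f_s = sympy.parse_expr(FIT_FUNCTION)
dfdx_s = sympy.diff(f_s, x)

# Parameter symbols are every free symbol except x, sorted for a stable order.
param_syms = sorted(f_s.free_symbols - {x}, key=lambda s: s.name)
param_names = [s.name for s in param_syms]

# Callables taking (x, a, b, c, ...) in a known order.
f = sympy.lambdify([x] + param_syms, f_s, 'numpy')
dfdx = sympy.lambdify([x] + param_syms, dfdx_s, 'numpy')

def residuals(param_values, x_data, y_data, x_err, y_err):
    """Generic effective-variance residuals: no hard-coded parameter names."""
    vals = [param_values[name] for name in param_names]
    var_eff = y_err**2 + dfdx(x_data, *vals)**2 * x_err**2
    return (f(x_data, *vals) - y_data) / np.sqrt(var_eff)

# Quick check with data lying exactly on the model curve for a=1, b=2, c=0.5,
# so the residuals should come out as zero.
r = residuals({'a': 1.0, 'b': 2.0, 'c': 0.5},
              np.array([0.0, 1.0]), np.array([1.0, 3.5]),
              np.array([0.1, 0.1]), np.array([0.2, 0.2]))
print(r)
```

The dict argument could then be fed from `params.valuesdict()` so the same objective works unchanged for any expression string.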