I am currently using LMFIT with the leastsq method to estimate parameters for my numerical model. The objective function takes these parameters as input, modifies the simulation script accordingly, and launches the simulation (so the model is not given explicitly; I call it through a subprocess). It then retrieves the numerical results from the simulation and computes the error array by comparing them with the experimental data.
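To make the setup concrete, here is a minimal sketch of what my objective function looks like. The script name, file formats, and parameter names below are just placeholders, not my actual code:

import subprocess
import numpy as np
from lmfit import minimize, Parameters

def residual(params, exp_data):
    # Write the current parameter values into the simulation input file
    with open("sim_input.txt", "w") as f:
        f.write(f"a = {params['a'].value}\n")
        f.write(f"b = {params['b'].value}\n")
    # Launch the external simulation as a subprocess and wait for it
    subprocess.run(["python", "run_simulation.py", "sim_input.txt"], check=True)
    # Read back the numerical results produced by the simulation
    sim_data = np.loadtxt("sim_output.dat")
    # Error array: simulated values minus experimental values
    return sim_data - exp_data

params = Parameters()
params.add("a", value=1.0)
params.add("b", value=0.5)
exp_data = np.loadtxt("experiment.dat")
result = minimize(residual, params, args=(exp_data,), method="leastsq")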
What I don't understand is how the Jacobian matrix is calculated to determine the parameter increment. As far as I can tell, only ONE simulation is launched after each iteration, to compute the error function with the updated parameter set, so I'm curious about the specific method used to compute the Jacobian within this iterative process.
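For reference, this is roughly what I would expect a forward-difference Jacobian approximation to look like, needing one extra residual evaluation (i.e. one extra simulation) per free parameter, which is why seeing a single simulation per iteration confuses me. This is only my own sketch with a simplified step-size rule, not the actual library code:

import numpy as np

def approx_jacobian(residual, p, eps=1.49e-8):
    r0 = np.asarray(residual(p))           # residuals at the current point
    J = np.empty((r0.size, p.size))
    for j in range(p.size):                # one extra evaluation per parameter
        dp = np.zeros_like(p)
        dp[j] = eps * max(abs(p[j]), 1.0)  # simplified relative step size
        J[:, j] = (np.asarray(residual(p + dp)) - r0) / dp[j]
    return J

Thank you,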