Hi! I just spent hours trying to pass fit keywords to the differential evolution method without any success. If I download the simple example https://lmfit.github.io/lmfit-py/examples/example_diffev.html#sphx-glr-examples-example-diffev-py and then try to pass fit keywords to the scipy differential_evolution function, I am completely unable to do so at line 47. This does not work, in the sense that the code runs as if popsize were not there:

o2 = lmfit.minimize(resid, params, args=(x, yn), method='differential_evolution', popsize=2)
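As a workaround, I can call scipy's differential_evolution directly, where popsize is definitely honored. This is only a sketch, reusing resid, params, x and yn from the linked example and assuming every Parameter has finite min/max bounds (which differential evolution requires anyway):

import numpy as np
from scipy.optimize import differential_evolution

def cost(vals, params, x, data):
    # copy the trial vector into the lmfit Parameters, then collapse
    # the residual array into the scalar cost that scipy expects
    for name, v in zip(params, vals):
        params[name].value = v
    r = resid(params, x, data)
    return np.sum(r**2)

bounds = [(p.min, p.max) for p in params.values()]
result = differential_evolution(cost, bounds, args=(params, x, yn), popsize=2)

But of course I would much rather pass popsize through lmfit.minimize directly.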
Hi Matt, thank you for your answer once again. I am sorry, I probably did not explain myself clearly: I do not expect to solve the problem in fewer than 30 function calls. The problem I have is that each function call is very slow, about 20 s at best. At each call I am computing electric and thermoelectric transport using the Boltzmann formalism (physics), so if I can spare 30 useless function calls, that would be appreciated. I had written my own differential evolution code, and that works, but I would prefer to use the one from lmfit, which includes many strategies, for example.
What I am doing in one call of the objective function is using a "heavy" calculation to generate sigma (electrical conductivity) and alpha (thermoelectric conductivity), which I then compare to the data in a dual fit (same objective function).
Another problem I have is that sigma and alpha are not quantities in the same units and do not have the same order of magnitude once expressed in SI units (sigma ~ 10^-9 and alpha ~ 10^-11), so sigma and alpha are not weighted the same in the algorithm. I guess it is a common problem, but I do not know how to work around it, as I have to concatenate the two quantities in the objective function.
I also need to understand how to follow the evolution of the differential evolution algorithm; I would appreciate being able to see the best individual so far while the fit is still running (as it takes forever). I guess a callback function is what I need.
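From the documentation, lmfit's minimize() accepts an iter_cb callback that is called at every function evaluation; if I understand correctly, a minimal sketch (again reusing resid, params, x, yn from the example) would be something like:

import numpy as np
import lmfit

def per_iteration(pars, iteration, residual, *args, **kws):
    # print the cost and current parameter values every 50 evaluations,
    # so the long-running fit can be monitored
    if iteration % 50 == 0:
        cost = np.sum(np.asarray(residual)**2)
        print(iteration, cost, [(name, p.value) for name, p in pars.items()])

out = lmfit.minimize(resid, params, args=(x, yn), method='differential_evolution', iter_cb=per_iteration)

Would that be the right way to watch the best individual, or is there something better?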
Hi Matt, thank you so much for trying to help me here. You may well be right that differential evolution is not the best solver for me here. I turned to it because sigma and alpha are obtained from highly non-linear computations and the number of free parameters can vary from 3 to almost 10 (I fix parameters depending on what I want to test). Here I would stick with 4 for what I want to do today. Therefore, in order to avoid local minima, I thought differential evolution would be the most suitable solver, but the reality is that, apart from leastsq and differential evolution, I do not know much about the world of solvers. I am open to any other recommendations! :)
I wish I could find a simple way to explain what the algorithm does to compute sigma and alpha, but for now I don't know how. What I can say is that it is not an analytical function, but a numerical calculation that yields sigma and alpha.
I have data as a function of temperature, in the form of arrays sigma_data and alpha_data, where each index corresponds to a temperature. Therefore, in the objective function, there is a for loop that calls my "heavy" algorithm to compute sigma and alpha at each temperature. In the end, sigma_model and alpha_model are both arrays of the same length as the temperature array. My objective function returns:

diff_sigma = sigma_data - sigma_model  # in (Ohm m)^-1
diff_alpha = alpha_data - alpha_model  # in A / (K^2 m)
return np.concatenate((diff_sigma, diff_alpha))

I would like sigma and alpha to be equally weighted in the solver. But as I said, in SI units their numerical values are far from the same order of magnitude. I agree that keeping the data around unity is good; I usually keep mine in SI units because it prevents conversion errors, but I should change that here. So, are you recommending weighting alpha so that sigma and alpha end up with the same order of magnitude? Is that the only way?
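For instance, would something like this sketch be reasonable? Here sigma_scale and alpha_scale are names I am making up for some characteristic magnitude (say, the typical size of the data, or per-point uncertainties if I had them):

diff_sigma = (sigma_data - sigma_model) / sigma_scale  # dimensionless
diff_alpha = (alpha_data - alpha_model) / alpha_scale  # dimensionless
return np.concatenate((diff_sigma, diff_alpha))

That way both blocks of residuals would be dimensionless and of comparable size.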
Hi Matt, thank you again! Great, regarding the units I should definitely do that; I had not realized how crucial it was. Well, I know that local minima can indeed be a huge pain, but do you mean something deeper by "false" minima?
It never occurred to me that doing `leastsq` with 20 different sets of randomly selected starting values could be a fast and efficient practice; would you recommend it as a conventional approach for certain problems?
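To make sure I understand what that would look like in practice, here is a rough multi-start sketch; the parameter names and bounds ('a', 'b') are made up for illustration, and resid, x, yn are as in the example above:

import numpy as np
import lmfit

rng = np.random.default_rng()
best = None
for trial in range(20):
    params = lmfit.Parameters()
    # draw each starting value uniformly within its own bounds
    params.add('a', value=rng.uniform(0.0, 10.0), min=0.0, max=10.0)
    params.add('b', value=rng.uniform(-5.0, 5.0), min=-5.0, max=5.0)
    out = lmfit.minimize(resid, params, args=(x, yn), method='leastsq')
    if best is None or out.chisqr < best.chisqr:
        best = out

print(lmfit.fit_report(best))

Is that roughly what you mean?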
Right now I know there are parameter values that work for my data, but the fit does not find them, so indeed, if I were getting the same answer 15 times, I might consider that answer correct. It is a shame that I do not know more about solvers for my kind of problem; now I am curious whether there is a better solver for me out there. Differential evolution was sold to me by my advisor as the ideal solver for exploring large parameter spaces, and I must confess that over the past year my own genetic algorithm worked better than "leastsq", which was getting trapped in local minima as soon as I added more than 4 free parameters. But I do not know what else is out there.