Hello,
thank you for this very interesting library.
I'm planning to use it as part of my PhD project in the field of solar energy.
My optimization problem is single-objective, with about 10 free, real-valued variables, each with different bounds. The variables have different physical dimensions (lengths, angles, ...) and are on rather different scales. Evaluating my objective function is rather expensive, so I'll be using "multiprocessing".
As I'm not so interested in studying optimization algorithms in detail, applying an existing one from inspyred seems reasonable to me.
I looked at the "Differential Evolution Algorithm".
Is its implementation in inspyred similar to
https://en.wikipedia.org/wiki/Differential_evolution ?
If not, what's its theoretical basis?
What would be a good source for learning how to set the algorithm's parameters in a way suited to my optimization problem?
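For reference, this is the scheme I have in mind from the Wikipedia article (classic DE/rand/1/bin) -- my own reconstruction for comparison, not a claim about inspyred's code:

```python
import random

def de_step(pop, fitness, f=0.8, cr=0.9):
    """One generation of classic DE/rand/1/bin, as I read it from the
    Wikipedia description (minimizing `fitness`). NOT inspyred's code."""
    dim = len(pop[0])
    new_pop = []
    for i, x in enumerate(pop):
        # pick three distinct agents a, b, c, all different from x
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        r = random.randrange(dim)  # one index is always taken from the mutant
        trial = [a[k] + f * (b[k] - c[k])
                 if (random.random() < cr or k == r) else x[k]
                 for k in range(dim)]
        # greedy selection: the trial replaces x only if it is no worse
        new_pop.append(trial if fitness(trial) <= fitness(x) else x)
    return new_pop
```

Is inspyred's DEA essentially this, or something different?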
While trying to understand the DEA implementation in inspyred a bit better, I looked into the code of "heuristic_crossover".
There is a particular code snippet that I don't understand:
"""
mom_is_better = lookup[pickle.dumps(mom, 1)] > lookup[pickle.dumps(dad, 1)]
for i, (m, d) in enumerate(zip(mom, dad)):
negpos = 1 if mom_is_better else -1
val = d if mom_is_better else m
bro[i] = val + random.random() * negpos * (m - d)
sis[i] = val + random.random() * negpos * (m - d)
"""
Assuming that "random.random()" yields a uniformly distributed random number in [0, 1), could you explain what the "mom_is_better" term is actually doing?
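My current reading is that, in both branches, each offspring gene is sampled on the segment starting at the worse parent and moving a random fraction toward the better one. Here is a tiny numeric check of that reading (made-up parent values, and the random factor fixed for reproducibility -- not inspyred code):

```python
def heuristic_child(mom, dad, mom_is_better, r):
    """One offspring, gene by gene, mirroring the snippet above;
    `r` stands in for random.random() and is fixed here for checking."""
    child = []
    for m, d in zip(mom, dad):
        negpos = 1 if mom_is_better else -1
        val = d if mom_is_better else m
        # mom better:  d + r * (m - d)  -> start at dad, move toward mom
        # dad better:  m - r * (m - d)  -> start at mom, move toward dad
        child.append(val + r * negpos * (m - d))
    return child

mom, dad = [1.0, 2.0], [3.0, 6.0]
print(heuristic_child(mom, dad, True, 0.25))   # -> [2.5, 5.0], dad pulled toward mom
print(heuristic_child(mom, dad, False, 0.25))  # -> [1.5, 3.0], mom pulled toward dad
```

Is that interpretation correct, i.e. the children always lie between the parents, biased toward the fitter one as r grows?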
Another question is related to the "gaussian_mutation".
Do I understand correctly that using it with a single "stdev" on my free variables might be a bad idea, as they are on different scales?
Should I try to normalize them first, to be in the same range?
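I'm not sure whether inspyred offers a built-in way to do this, so the naive wrapper I'm considering looks like the following -- the BOUNDS values are made up, and the objective is a dummy stand-in for my expensive solar-energy model:

```python
# Hypothetical per-variable bounds (e.g. a length in m, an angle in deg, ...)
BOUNDS = [(0.5, 10.0), (0.0, 90.0), (-1.0, 1.0)]

def to_physical(unit_vec):
    """Map a candidate in [0, 1]^n back to physical units."""
    return [lo + u * (hi - lo) for u, (lo, hi) in zip(unit_vec, BOUNDS)]

def evaluate(unit_vec):
    x = to_physical(unit_vec)
    # ... the expensive objective would go here; dummy stand-in:
    return sum(v * v for v in x)

# The optimizer then only ever sees variables on the common [0, 1] scale,
# so a single gaussian stdev (say 0.1) perturbs each of them comparably.
```

Would you consider this a sensible approach with inspyred, or is there a more idiomatic one?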
Thank you very much in advance!
Best regards
Peter Schöttl