DEA, heuristic_crossover, gaussian_mutation


peter.s...@gmail.com

Mar 22, 2018, 3:47:46 PM
to Inspyred
Hello,

thank you for this very interesting library.
I'm planning to use it in the scope of my PhD project in the field of solar energy.

My optimization problem is single-objective, with about 10 free, real-valued variables with different bounds. The variables have different dimensions (lengths, angles, ...) and are on rather different scales. The evaluation of my objective function is rather expensive, so I'll be using "multiprocessing".

As I'm not so interested in studying optimization algorithms in detail, the application of an existing one in inspyred seems reasonable to me.

I looked into the "Differential Evolution Algorithm".
Is its implementation in inspyred similar to https://en.wikipedia.org/wiki/Differential_evolution ?
If not, what's its theoretical basis?

What would be a good source for learning how to set the algorithm's parameters in a way that suits the optimization problem?

While trying to understand the DEA implementation in inspyred a bit better, I looked into the code of "heuristic_crossover".
There is a particular code snippet that I don't understand:
"""
mom_is_better = lookup[pickle.dumps(mom, 1)] > lookup[pickle.dumps(dad, 1)]
for i, (m, d) in enumerate(zip(mom, dad)):
negpos = 1 if mom_is_better else -1
val = d if mom_is_better else m
bro[i] = val + random.random() * negpos * (m - d)
sis[i] = val + random.random() * negpos * (m - d)
"""
Assuming that "random.random()" yields a uniformly distributed random number in [0, 1], could you explain what the "mom_is_better" term is actually doing?

Another question is related to the "gaussian_mutation".
Do I understand correctly that using it with a single "stdev" on my free variables might be a bad idea, as they are on different scales?
Should I try to normalize them first, to be in the same range?

Thank you very much in advance!

Best regards
Peter Schöttl

Aaron Garrett

Mar 22, 2018, 6:27:26 PM
to insp...@googlegroups.com
Yes, the DEA is based on that same idea, but it uses two points instead of the three described in that Wikipedia article. These algorithms are not particularly sensitive to parameter choices, so you don't have to worry too much about that for most problems.
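[For reference, the classic DE/rand/1/bin scheme from that Wikipedia article can be sketched as follows. This is a minimal standalone sketch, not inspyred's implementation; the sphere objective and all parameter values are illustrative only.]

```python
import random

def sphere(x):
    """Toy objective to minimize (illustrative only)."""
    return sum(xi * xi for xi in x)

def de_rand_1_bin(f, bounds, pop_size=20, F=0.8, CR=0.9, generations=50, seed=42):
    """Classic DE/rand/1/bin as described in the Wikipedia article."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct agents a, b, c, all different from i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantees at least one mutated component
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clamp to the variable's bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            tf = f(trial)
            if tf <= fit[i]:  # greedy selection keeps the better vector
                pop[i], fit[i] = trial, tf
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

Because selection only ever replaces a vector with an equal-or-better trial, the best fitness in the population never gets worse from one generation to the next.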

That code needs to figure out whether the "mom" or the "dad" has the better fitness. I look up each one's fitness in a table (called "lookup") to see which is better, and then the offspring move toward the better parent.

If your candidate dimensions are on radically different scales, then you can create a custom mutation that applies Gaussian noise with a different standard deviation on each dimension. I could show you how to do that if it's needed later.
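[Such a per-dimension mutation could look roughly like the sketch below. The function name, its stdevs argument, and all values are illustrative, not part of inspyred; to plug it into inspyred you would wrap it with the @inspyred.ec.variators.mutator decorator and read the per-dimension standard deviations out of the args dictionary instead.]

```python
import random

def per_dim_gaussian_mutation(rng, candidate, stdevs, bounds, rate=0.1):
    """Mutate each component with its own Gaussian standard deviation.

    rng       -- a random.Random instance
    candidate -- list of floats (one value per free variable)
    stdevs    -- one standard deviation per dimension, matched to its scale
    bounds    -- one (low, high) pair per dimension, used to clamp
    rate      -- per-component mutation probability
    """
    mutant = list(candidate)
    for i, (value, stdev, (lo, hi)) in enumerate(zip(candidate, stdevs, bounds)):
        if rng.random() < rate:
            # Add noise proportional to this dimension's scale, then clamp.
            mutant[i] = min(max(value + rng.gauss(0, stdev), lo), hi)
    return mutant
```

Choosing each stdev as a small fraction of that dimension's bound range (say, a few percent) has the same effect as normalizing the variables first.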

--
You received this message because you are subscribed to the Google Groups "Inspyred" group.
To unsubscribe from this group and stop receiving emails from it, send an email to inspyred+u...@googlegroups.com.
To post to this group, send email to insp...@googlegroups.com.
Visit this group at https://groups.google.com/group/inspyred.
For more options, visit https://groups.google.com/d/optout.
--
Aaron Garrett, Ph.D.
Assistant Professor
Computer Science
Wofford College
429 North Church Street
Spartanburg, SC 29303

Peter Schöttl

Mar 22, 2018, 7:24:59 PM
to Inspyred
Dear Prof. Garrett,

thank you for the quick response!

Do I understand you correctly that neither num_selected/tournament_size (for the tournament_selection) nor gaussian_stdev (for the gaussian_mutation) has a significant impact on the rate of convergence or the success in finding the global optimum?

Regarding the "offspring moving towards the better parent": to my understanding, the value of mom_is_better in the code above doesn't make any difference.
For every candidate parameter, a random value between the two parents' parameters is chosen and no bias towards the better parent is introduced.
If mom_is_better == True: sis[i] = bro[i] = d + random * (+1) * (m - d) = random * m + (1 - random) * d
If mom_is_better == False: sis[i] = bro[i] = m + random * (-1) * (m - d) = (1 - random) * m + random * d
Of course, this assumes that random is uniformly distributed between 0 and 1.
Am I wrong?

Best regards
Peter Schöttl

Peter Schöttl

Mar 22, 2018, 7:27:54 PM
to Inspyred
OK, a small correction:

sis[i] != bro[i], since a different random number is drawn for each of them.

Aaron Garrett

Mar 23, 2018, 8:25:25 AM
to insp...@googlegroups.com
I don't see why you have random twice in each equation. If mom_is_better, then it is d + random * (m - d). Otherwise, it is m + random * (d - m). Move from the worse parent toward the better parent by a random amount along the distance between them.



Peter Schöttl

Mar 23, 2018, 9:43:30 AM
to Inspyred
Dear Prof. Garrett,

thank you again for the active discussion; I really appreciate it!

I agree that you "move from the worse parent toward the better parent by a random amount along the distance between them".
But since the "random amount" is uniformly distributed, it doesn't matter whether you start at the worse parent or the better one.
You always end up somewhere between the two, with no bias towards the better or the worse parent.
In my opinion, the mom_is_better distinction therefore has no effect.

To my understanding, the snippet in inspyred

mom_is_better = lookup[pickle.dumps(mom, 1)] > lookup[pickle.dumps(dad, 1)]
for i, (m, d) in enumerate(zip(mom, dad)):
    negpos = 1 if mom_is_better else -1
    val = d if mom_is_better else m
    bro[i] = val + random.random() * negpos * (m - d)
    sis[i] = val + random.random() * negpos * (m - d)

and the version without mom_is_better

for i, (m, d) in enumerate(zip(mom, dad)):
    bro[i] = d + random.random() * (m - d)
    sis[i] = d + random.random() * (m - d)

are essentially equivalent.
This would also make the pickle-based fitness lookup obsolete.
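[The equivalence claim above is easy to check empirically. The sketch below samples both variants and compares their empirical means; the values of m, d, and the sample size are arbitrary toy choices, and the two helper functions are just for this check, not part of inspyred.]

```python
import random
import statistics

def with_flag(m, d, mom_is_better, rng):
    """Inspyred-style: start at val and step negpos * (m - d)."""
    negpos = 1 if mom_is_better else -1
    val = d if mom_is_better else m
    return val + rng.random() * negpos * (m - d)

def without_flag(m, d, rng):
    """Simplified variant: always start at d and step toward m."""
    return d + rng.random() * (m - d)

rng = random.Random(123)
m, d, n = 2.0, 5.0, 100_000
a = [with_flag(m, d, False, rng) for _ in range(n)]  # mom is the worse parent
b = [without_flag(m, d, rng) for _ in range(n)]

# Both variants draw uniformly from [min(m, d), max(m, d)], so both
# empirical means should be close to (m + d) / 2 = 3.5, with no bias
# toward either parent.
print(statistics.mean(a), statistics.mean(b))
```

Both samples cover the same interval with the same uniform distribution, which is exactly the point: the mom_is_better branch only flips the starting endpoint, not the resulting distribution.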

Best regards
Peter Schöttl

Aaron Garrett

Mar 23, 2018, 10:23:29 AM
to insp...@googlegroups.com
Yes... that must be true. Not what I intended, but true nonetheless.