Entropy-Based Termination Criterion for Multiobjective Evolutionary Algorithms

Manuel Belmadani

Aug 23, 2016, 2:03:11 AM
to deap-users
Hello everyone,

I recently came across this paper: 
Entropy-Based Termination Criterion for Multiobjective Evolutionary Algorithms - http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=7273886

which proposes an algorithm to automatically terminate multiobjective evolutionary algorithms once the Pareto front stabilizes. Their method uses a dissimilarity measure between successive generations to estimate entropy. I wrote an implementation of the algorithm to try it out, and it seems to offer a nice alternative to time limits or generation caps when you don't know a priori how long your algorithm should run.
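To sketch the general idea (this is just my illustration of one way to compute such an entropy, not necessarily the paper's exact construction): partition the objective space into a fixed number of bins per dimension, histogram the front members over those cells, and take the Shannon entropy of the histogram; the generation-to-generation dissimilarity can then be tracked from how that quantity changes.

import math
from collections import Counter

def front_entropy(front, n_b=10):
    # front: list of objective vectors, e.g. [[f1, f2], ...]
    # n_b: number of bins per objective (illustrative default)
    if not front:
        return 0.0
    dims = len(front[0])
    lo = [min(p[d] for p in front) for d in range(dims)]
    hi = [max(p[d] for p in front) for d in range(dims)]

    def cell(p):
        # map a point to its bin index along every objective
        idx = []
        for d in range(dims):
            span = hi[d] - lo[d]
            b = 0 if span == 0 else min(int((p[d] - lo[d]) / span * n_b), n_b - 1)
            idx.append(b)
        return tuple(idx)

    counts = Counter(cell(p) for p in front)
    total = float(len(front))
    return -sum(c / total * math.log(c / total) for c in counts.values())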


The code is in the class MOEATerminationDetection from automoea.py. There's a bit of sample code in the __main__ at the bottom, though it doesn't use actual Pareto fronts, just mock lists of "fitnesses". The MOEATerminationDetection class is not DEAP-specific, but you can use it in a DEAP project with the deap.tools.ParetoFront class (http://deap.gel.ulaval.ca/doc/dev/api/tools.html) by doing something like this:

terminator = MOEATerminationDetection()
paretofront = deap.tools.ParetoFront()
gen = 0
while not terminator.done():

    # ...near the end of the evolutionary algorithm, when paretofront is updated...
    P = [list(y) for y in [x.fitness.values for x in paretofront]]
    paretofront.update(offspring)
    Q = [list(y) for y in [x.fitness.values for x in paretofront]]
    terminator.update(P, Q, gen)
    gen += 1

    # ...


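One caveat: since termination now depends on the criterion actually firing, I'd still keep a hard generation cap as a safety net, so a bad parameter choice can't run forever (MAX_GEN below is just a placeholder, pick whatever suits your problem):

MAX_GEN = 1000  # fallback cap, placeholder value
while not terminator.done() and gen < MAX_GEN:
    # ...same loop body as above...
    pass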
I've also modified the DEAP knapsack example (using algorithm.py and knapsack.py) to use automatic termination; assuming you have DEAP installed, try it out with:

 python knapsack.py

If anyone has suggestions, feel free to discuss. I'm especially interested in the correctness of the implementation. I'd also be interested to discuss the impact of the parameters. You can actually make the algorithm more (or less) sensitive by setting the number of generations to consider when evaluating dissimilarity, the number of significant digits used when comparing the mean and standard deviation between generations, or the amount of information content you attribute when you partition your solution space. The algorithm isn't truly automatic, unfortunately, as you have to set these parameters, but so far I've had good runs by setting them somewhat arbitrarily. These are set on initialization:

def __init__(self, n_s=20, n_p=3, n_b=10, DEBUG=False)

DEBUG=True prints the trace of the algorithm to standard output.
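For example, to experiment with the sensitivity you could try different values (these are purely illustrative, not recommendations):

# different stability window, precision, and partitioning than the defaults
terminator = MOEATerminationDetection(n_s=10, n_p=4, n_b=20, DEBUG=True)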

Cheers,