parameters for multi-objective optimization


Thomas Evangelidis

Jul 25, 2014, 12:52:43 PM
to deap-...@googlegroups.com
Greetings,

Is it necessary to use the ParetoFront() hall of fame when doing multi-objective optimization? I had a look at the HallOfFame() class and found that it sorts the individuals by the first value of the fitness, which is not what we want in multi-objective optimization.

Another relevant question is how to get the best individual in that case. For example, I am using:

    pop = toolbox.population(n=populationNo)
    hof = tools.ParetoFront()
    stats = tools.Statistics(lambda ind: ind.fitness.values)
    stats.register("max", max)


But in the output I am getting the maximum value of each objective function found in the population, not the objective function values of a single best individual (assuming "objective function" is even the proper term for the 3 numbers shown below). E.g.

gen      evals    max    
0        300      [0.1205, 0.16, 0.1279]
1        590      [0.1205, 0.16, 0.1364]
2        611      [0.1205, 0.16, 0.1364]
3        602      [0.1205, 0.16, 0.1364]
4        582      [0.1205, 0.1784, 0.1364]
5        597      [0.1205, 0.1784, 0.1364]
6        623      [0.1205, 0.1784, 0.1364]
7        576      [0.1205, 0.1784, 0.1364]
8        620      [0.1205, 0.1784, 0.1364]
9        612      [0.1205, 0.1784, 0.1364]
10       609      [0.1205, 0.1784, 0.1364]

But the individuals in the hof at the last generation have the following fitness values:


(0.12051855597184445, 0.11850680042046814, 0.096363795642769104)
(0.0944539699428395, 0.17841238174356525, 0.12666687603357699)
(0.020997711353802316, 0.15600067880897239, 0.13636386175863552)

None of them has (0.1205, 0.1784, 0.1364). By what criterion can I select the individual that has the highest possible values of all three objective functions? And can I print, in every generation, the fitness values of that best individual?
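(For reference, the per-objective maxima in the log above can be reproduced with a column-wise maximum over the population's fitness values, e.g. numpy.max with axis=0 — each entry of the logged vector may then come from a different individual, which would explain why no hof member matches it:)

```python
import numpy

# Fitness values of the three hof individuals quoted above (rounded).
fitnesses = [(0.1205, 0.1185, 0.0964),
             (0.0944, 0.1784, 0.1267),
             (0.0210, 0.1560, 0.1364)]

# Column-wise maximum: each entry is the best value seen for that
# objective, possibly taken from a different individual each time.
per_objective_max = numpy.max(fitnesses, axis=0).tolist()
# per_objective_max == [0.1205, 0.1784, 0.1364]
```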

thanks in advance,
Thomas



Marc-André Gardner

Jul 25, 2014, 3:02:20 PM
to deap-...@googlegroups.com
Hi Thomas,

I think the "issue" you mention is not related to DEAP, but to the very nature of a multiobjective fitness.

I'll give you an example. Take a look at this graph (taken from Wikipedia) : https://commons.wikimedia.org/wiki/File:PareoEfficientFrontier1024x1024.png#mediaviewer/File:PareoEfficientFrontier1024x1024.png
It shows a classic two-objective problem. The Pareto front is the set of solutions for which no other solution is better on one objective without being worse on another. For instance, "A" is a member of the Pareto front because no other solution has a better value for objective #2. "B" is also on the Pareto front: while it is worse than "A" on objective #2, it is better on objective #1. Conversely, "K" is not on the Pareto front, because some solutions (for instance "D" or "E") are simultaneously better on objectives #1 AND #2.

That's the theory. Now consider this figure and ask yourself: which is _the_ best solution? If you had to keep only one, which would you keep?

As you can see, without making further assumptions about the problem, it is impossible to single out one solution as "better" than all the others. The same applies to your case. Look at your 3 "best" individuals: which one would you choose? From my perspective (and DEAP's), it is impossible to make a choice: all three are simply different compromises. That's why there is a "ParetoFront" class, which keeps all the individuals representing the best compromises between the objectives.
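In code, the dominance test is just a pair of comparisons per objective. Here is a minimal DEAP-independent sketch for a maximization problem (DEAP's own Fitness objects provide an equivalent dominates() method that also accounts for the weights):

```python
def dominates(a, b):
    """True if fitness vector a Pareto-dominates b (maximization):
    a is at least as good on every objective, strictly better on one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(fitnesses):
    """Keep only the non-dominated fitness vectors."""
    return [f for f in fitnesses
            if not any(dominates(g, f) for g in fitnesses)]
```

Applied to your three hof fitness values, no vector dominates another, so all three survive on the front — which is exactly what ParetoFront keeps.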

Of course, if you have some insight into your problem, you may add other heuristics to guide the evolution. Maybe the value of the 3rd objective is irrelevant when the first is too low, or maybe the second objective is less important than the other two. But, as I said, this is specific to your problem, so you have to "tell" DEAP about it by writing adequate evaluation/selection functions. Without that, the only thing DEAP can do is filter out the individuals that are worse on every considered aspect, as it did here.

Hope it's a bit more clear. Do not hesitate if you have any remaining questions :)

Marc-André

Thomas Evangelidis

Jul 26, 2014, 6:19:04 AM
to deap-...@googlegroups.com

Hi Marc-Andre,

Thank you for the detailed explanation. Still, are there any methods implemented in Python that you would suggest for selecting one optimal individual? I have found many in the literature but I can't decide which one is best and easiest to incorporate into my code.

Thanks,
Thomas

--
You received this message because you are subscribed to a topic in the Google Groups "deap-users" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/deap-users/wUcLPGJnJDg/unsubscribe.
To unsubscribe from this group and all its topics, send an email to deap-users+...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

François-Michel De Rainville

Jul 26, 2014, 9:25:16 AM
to deap-...@googlegroups.com

Dear Thomas,

As MA said, there is no single way to do this; it depends on your specific problem and the kind of compromise you are willing to make. You may have to work this out on your own.

Either you can keep the multiple objectives during the evolution and make your decision at the end, or you can combine your multiple fitness values into a single value at evaluation time. These are the two main solutions, IMHO.

Cheers,
François-Michel

Marc-André Gardner

Jul 28, 2014, 10:05:10 AM
to deap-...@googlegroups.com
Hi Thomas,

I'm not aware of any "method for the selection of _one_ optimum individual". There are many approaches to selecting _a group_ of individuals from a population, like NSGA-II, SPEA-II, etc. (the most important ones being already implemented in DEAP). But again, without assumptions about your problem, I don't know of any general method to achieve what you want.

If you know the numerical range of each objective, you may use something like a weighted mean. This is simple to implement, at least...
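A minimal sketch of that idea in plain Python (the ranges and weights below are made-up placeholders; you would substitute the actual bounds and priorities of your own objectives):

```python
def weighted_mean(fitness, ranges, weights):
    """Normalize each objective by its known (min, max) range, then take
    a weighted mean, so no objective dominates purely by its scale."""
    normed = [(f - lo) / (hi - lo) for f, (lo, hi) in zip(fitness, ranges)]
    return sum(w * n for w, n in zip(weights, normed)) / sum(weights)

# Hypothetical ranges/weights for a 3-objective problem:
score = weighted_mean((0.1205, 0.1784, 0.1364),
                      ranges=[(0.0, 0.2), (0.0, 0.2), (0.0, 0.2)],
                      weights=(2.0, 1.0, 1.0))
```

Computing this score for every member of the final Pareto front and keeping the argmax gives you one individual, at the cost of having fixed the compromise in advance.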

Have fun with DEAP,

Marc-André

Thomas Evangelidis

Jul 28, 2014, 11:36:42 AM
to deap-...@googlegroups.com
Hi everyone,

Is the Pareto front hall of fame necessary in multi-objective optimization, or can I just use the tools.HallOfFame(1) class to get one solution at the end? I use the NSGA-II selector, so I guess in the case of tools.HallOfFame(1) the final solution will be made by NSGA-II, right?

The other idea I had, to select one solution from the Pareto front that does not lie at the extreme ends, was to use a utility function, e.g. for two objective functions OF1, OF2:

(w1*OF1)^2 + (w2*OF2)^2 = R

or

(OF1/w1)^2 + (OF2/w2)^2 = R

where w1, w2 are the weights and R is the total score of each individual lying on an ellipse. In a real case with w1=1.0 and w2=0.5 these plots look like this (the red dot is the selected solution):

https://www.dropbox.com/s/v1c2depovzl4uk7/Pareto_Front_optimum_solution1.tiff

https://www.dropbox.com/s/7fyiwozk8il6wz6/Pareto_Front_optimum_solution2.tiff

The first equation seems more suitable to me, since it puts more emphasis on the 1st objective function (OF1). I would like to read your opinion on this.
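To make it concrete, this is roughly how I would pick the solution from the front using the first utility function (plain Python; the front values here are made-up for illustration):

```python
def ellipse_score(fitness, weights=(1.0, 0.5)):
    """R = (w1*OF1)^2 + (w2*OF2)^2: individuals lying on the same
    ellipse share the same score; a larger R means an outer ellipse."""
    of1, of2 = fitness
    w1, w2 = weights
    return (w1 * of1) ** 2 + (w2 * of2) ** 2

# Pick the front member lying on the outermost ellipse (the "red dot"):
front = [(0.9, 0.1), (0.6, 0.6), (0.1, 0.9)]   # made-up Pareto front
best = max(front, key=ellipse_score)
```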

thanks,
Thomas
--

======================================================================

Thomas Evangelidis

PhD student

University of Athens
Faculty of Pharmacy
Department of Pharmaceutical Chemistry
Panepistimioupoli-Zografou
157 71 Athens
GREECE

email: tev...@pharm.uoa.gr

          tev...@gmail.com


website: https://sites.google.com/site/thomasevangelidishomepage/


François-Michel De Rainville

Jul 28, 2014, 11:54:24 AM
to deap-...@googlegroups.com
Dear Thomas,

The standard Hall of Fame will sort the individuals lexicographically (for minimization, [1, 2, 6] is better than [1, 3, 1]), while the Pareto Hall of Fame will use Pareto dominance ([1, 2, 6] and [1, 3, 1] are equally valid compromises). The hall of fame is completely independent of the selection scheme.
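The lexicographic part is exactly how Python compares tuples, so you can check the difference between the two orderings by hand:

```python
# For minimization, lexicographic order calls (1, 2, 6) "better" than
# (1, 3, 1): the first elements tie, and 2 < 3 settles it -- the third
# objective (6 vs 1) is never even looked at.
a = (1, 2, 6)
b = (1, 3, 1)
assert a < b  # lexicographic comparison, as in the standard HallOfFame

# Pareto dominance disagrees: neither vector dominates the other, since
# each is strictly better on at least one objective (2 < 3, but 6 > 1).
a_dominates_b = all(x <= y for x, y in zip(a, b)) and a != b
assert not a_dominates_b
```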

Your second idea seems like a good way to choose between multiple compromises.

Cheers,
François-Michel


Marc-André Gardner

Jul 28, 2014, 11:55:05 AM
to deap-...@googlegroups.com
Hi again,



I use the NSGA-II selector, so I guess in the case of tools.HallOfFame(1) the final solution will be made by NSGA-II, right?

No, it won't:
The hall of fame is lexicographically sorted at all time so that the first element of the hall of fame is the individual that has the best first fitness value ever seen, according to the weights provided to the fitness at creation time.
(source : http://deap.gel.ulaval.ca/doc/default/api/tools.html#deap.tools.HallOfFame)

This is intended since, as I said, NSGA-II (and other multiobjective selection methods) is not meant to select one individual, but rather to select them in groups (the Pareto front). It may happen to select only one individual (if, for instance, the first Pareto front contains only one individual...), but this is a special case.

I mean, you can ask NSGA-II to produce only one individual, and it will not crash:
firstRankedNSGAIndividual = tools.selNSGA2(population, 1)[0]

But the result will be of little use, as it will be an arbitrary choice from the first Pareto front (OK, not totally "arbitrary", since NSGA-II uses the crowding distance to choose among individuals on the same front, but for your use case it can be seen as random).
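For reference, the crowding distance can be sketched in a few lines of plain Python (this follows the NSGA-II definition; DEAP ships its own internal implementation):

```python
def crowding_distance(front):
    """NSGA-II crowding distance for a list of fitness tuples: for each
    objective, sort the front and add the normalized gap between each
    individual's two neighbours; boundary individuals get infinity, so
    they are always kept (they preserve the extremes of the front)."""
    n, n_obj = len(front), len(front[0])
    dist = [0.0] * n
    for m in range(n_obj):
        order = sorted(range(n), key=lambda i: front[i][m])
        dist[order[0]] = dist[order[-1]] = float("inf")
        lo, hi = front[order[0]][m], front[order[-1]][m]
        if hi == lo:
            continue  # all equal on this objective: no contribution
        for k in range(1, n - 1):
            dist[order[k]] += (front[order[k + 1]][m]
                               - front[order[k - 1]][m]) / (hi - lo)
    return dist

# The middle point of this small front is the most "crowded-in" one;
# the two extremes get an infinite distance:
d = crowding_distance([(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)])
```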


Your second approach may work, but you have to carefully adjust the weights, or else one objective will have more effect than the other (especially considering that you square the terms).


Have fun with DEAP,

Marc-André