Use of the mu + lambda algorithm with contradictory objectives?


clemen...@gmail.com

Oct 2, 2015, 10:42:49 AM
to deap-users
I want to apply a mu + lambda algorithm to a multi-objective optimization problem with contradictory objectives.

I tried it on a basic example. Each individual has 5 attributes, all floats randomly chosen between 0.1 and 1 at initialization.
The number of generations is 800, and mu = lambda = 1000.

I create my fitness function like this: "creator.create("Fitness", base.Fitness, weights=(1.0, -1.0))"

If the evaluation function is defined as:

    def funcs_obj(individual):
        sum1 = sum((individual[0], individual[1], individual[2]))
        sum2 = sum((individual[3], individual[4]))
        return sum1, sum2

The results are logical: all the individuals of the last population have these attributes:
 [0.99778040518554, 0.9916939853173194, 0.9858341534536241, 0.0, 0.0]
So the sum of the first 3 attributes is maximized, and the sum of the last 2 is minimized.
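As a sanity check on my reading of the weights, I also verified this result with a small standalone dominance test (plain Python, no DEAP needed; `dominates` is just a toy helper written for this illustration, not a DEAP function):

```python
def funcs_obj(individual):
    # Same evaluation as above: maximize sum1, minimize sum2.
    sum1 = sum((individual[0], individual[1], individual[2]))
    sum2 = sum((individual[3], individual[4]))
    return sum1, sum2

def dominates(fit_a, fit_b, weights=(1.0, -1.0)):
    # fit_a Pareto-dominates fit_b if it is no worse on every weighted
    # objective and strictly better on at least one (DEAP's convention:
    # weighted fitnesses are always maximized).
    wa = [w * f for w, f in zip(weights, fit_a)]
    wb = [w * f for w, f in zip(weights, fit_b)]
    return (all(a >= b for a, b in zip(wa, wb))
            and any(a > b for a, b in zip(wa, wb)))

best = [1.0, 1.0, 1.0, 0.0, 0.0]    # what the run converged to
other = [0.5, 0.5, 0.5, 0.5, 0.5]   # an arbitrary competitor
assert dominates(funcs_obj(best), funcs_obj(other))
assert not dominates(funcs_obj(other), funcs_obj(best))
```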

Now, if:

    sum1 = sum(individual)
    sum2 = sum((individual[3], individual[4]))

This time, all the individuals of the last population have these attributes:
[0.9894994476920232, 0.9758085001565546, 0.9947429983760391, 0.9887476300324795, 1.0]
I expected the values of the last 2 attributes to be around 0.5, since the first objective pushes them up and the second pushes them down.
As you can see, the algorithm prioritizes the first objective and maximizes all the attributes.

And if:

    sum1 = sum((individual[3], individual[4]))
    sum2 = sum(individual)

I obtain: [0.0, 0.0, 0.0, 0.9976277774304089, 1.0]. That means, again, that the first objective is prioritized (the last 2 attributes are maximized) and the second is not taken into account.

Is there something special to specify so that contradictory objectives are taken into account simultaneously, without prioritizing one?

The whole code is here...

Thanks in advance!

Clément

François-Michel De Rainville

Oct 2, 2015, 11:07:49 AM
to deap-...@googlegroups.com
You are looking for multi-objective optimisation. DEAP has NSGA-II, SPEA-II and MO-CMA-ES.

Regards,
François-Michel


clemen...@gmail.com

Oct 5, 2015, 4:23:12 AM
to deap-users
Thank you for your answer!

Actually, I have read many things about multi-objective optimisation (theory, though; I lack practical experience...).

In this post, you told me:
First, using NSGA-2 with the simpleGA is a terrible idea. The simple GA requires stochastic selection (like tournament or roulette), while NSGA-2 is deterministic. You'll have the same individuals before and after selection. You should switch to a mu + lambda or mu , lambda algorithm like in all multiobjective examples.

So I thought that a mu + lambda algorithm was suitable for multi-objective optimisation!

If it is not, I must have misunderstood something. Can you please give me some indication of why a mu + lambda algorithm cannot be used for multi-objective optimisation with contradictory objectives?

Again, sorry for my ignorance; I'm really a beginner in programming.

Best regards,

Clément

François-Michel De Rainville

Oct 5, 2015, 8:38:58 AM
to deap-...@googlegroups.com
Mu + lambda and Mu , lambda are exactly what you are looking for, but with a multi-objective selection algorithm.

toolbox.register("select", tools.selNSGA2)

Moreover, if you have few objectives (2-3) and continuous variables, I strongly suggest you try MO-CMA-ES, which uses the generate-update algorithm. It has proven very efficient on many problems. Plus, there is an (undocumented) example in DEAP ;) https://github.com/DEAP/deap/blob/master/examples/es/cma_mo.py

*** Be sure to use the C extensions, otherwise the hypervolume computation will be very slow.

Here is the paper on which DEAP is based: Voss, Hansen, Igel, “Improved Step Size Adaptation for the MO-CMA-ES”, 2010.

Cheers,
François-Michel

clemen...@gmail.com

Oct 7, 2015, 12:14:16 PM
to deap-users
Hi, 

Again, thanks a lot; now I understand better what you meant.

Actually, I tried just changing the selection algorithm in the code that I gave you, the one with:

    sum1 = sum(individual)
    sum2 = sum((individual[3], individual[4]))

and creator.create("Fitness", base.Fitness, weights=(1.0, -1.0))

(I replaced "tools.selTournament, tournsize=3" with "tools.selNSGA2".)

and I still have my problem: with mu = lambda = 1000, after 800 generations I obtain identical individuals with the values [0.9946180277328509, 0.9974649820122388, 0.9999688763544647, 1.0, 1.0].

I suppose that I also have to change the way I vary the population (using the TournamentDCD selection operator to select the individuals that will participate in the reproduction phase, the SimulatedBinaryBounded crossover operator and the mutPolynomialBounded mutation operator), and to add:

    pop = toolbox.select(pop, len(pop))

before beginning the generational process, to assign the crowding distance to the individuals.

In other words, I suppose that I have to use the entire framework of the NSGA-II algorithm as described in [Deb2002] and in the deap library (here). Do you agree, or do you think it can work with just the NSGA-II selection algorithm?
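To be sure I understand what the NSGA-II selection actually does, I also sketched its non-dominated sorting step in plain Python (a naive O(n^2) toy version for my own understanding, not DEAP's actual implementation; tools.selNSGA2 additionally uses the crowding distance to break ties within a front):

```python
def dominates(fa, fb, weights):
    # Pareto dominance on weighted fitnesses (DEAP maximizes w * f).
    wa = [w * f for w, f in zip(weights, fa)]
    wb = [w * f for w, f in zip(weights, fb)]
    return (all(x >= y for x, y in zip(wa, wb))
            and any(x > y for x, y in zip(wa, wb)))

def nondominated_fronts(fits, weights):
    # Naive O(n^2) sort of fitness tuples into successive fronts:
    # front 0 is dominated by nobody, front 1 only by front 0, etc.
    remaining = list(range(len(fits)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(fits[j], fits[i], weights)
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# With weights (1.0, -1.0): (3.0, 0.0) dominates everything here,
# (2.0, 0.0) and (3.0, 1.0) are mutually non-dominated, (1.5, 1.0) is last.
fits = [(3.0, 0.0), (1.5, 1.0), (2.0, 0.0), (3.0, 1.0)]
fronts = nondominated_fronts(fits, (1.0, -1.0))
assert fronts == [[0], [2, 3], [1]]
```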

Thank you for your advice about MO-CMA-ES. I began with 2 objectives but I will have to work with between 5 and 10 objectives... 

Apparently, for this application (more than 4 objectives) I will have to use some scalarization method, since Pareto-based methods work well on problems with up to 4 objectives, not more. Since I know the NSGA-II algorithm better, I want to begin with it for 2, 3 and maybe 4 objectives. I will compare its performance on my problem with MO-CMA-ES too, and will look at scalarization methods later... Is there any in DEAP?

Best regards,

Clement


François-Michel De Rainville

Oct 8, 2015, 8:19:37 AM
to deap-...@googlegroups.com
See my answers below.

and I still have my problem, with mu = lambda = 1000 and after 800 generations I obtain identical individuals with the values : [0.9946180277328509, 0.9974649820122388, 0.9999688763544647, 1.0, 1.0].

Uniform crossover and flip-bit mutation are poor choices of operators for floating-point numbers. This code produces a nice output (IMHO), given that your problem is unbounded.

I suppose that I also have to change the way I vary the population 

Not necessarily. You can still obtain good results with other variation operators.
 
before to begin the generational process to assign the crowding distance to the individuals.

If you wish to use the original NSGA-2 algorithm here is an example: https://github.com/DEAP/deap/blob/master/examples/ga/nsga2.py
 
In other words, I suppose that I have to use the entire framework of the NSGA-II algorithm as it is described in [Deb2002], and in the deap library (here). Do you agree with that, or do you think it can work just with the NSGA-II selection algorithm ?

I have used only the selection quite a few times and it worked great!
 
Thank you for your advice about MO-CMA-ES. I began with 2 objectives but I will have to work with between 5 and 10 objectives... 

The hypervolume complexity is exponential in the number of objectives, so it might take very long even with modest populations. I'm not aware of many fast algorithms in high (>3) dimensions. Fortin et al. (https://www.lri.fr/~hansen/proceedings/2013/GECCO/proceedings/p623.pdf) implemented a fast version of the crowding distance that might help in high dimensionality.
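For the 2-objective case at least, the hypervolume itself stays cheap. Here is a minimal sketch, assuming a minimization front and a reference point dominated by every front member (plain Python; `hypervolume_2d` is a toy helper written for this illustration, not DEAP's C extension):

```python
def hypervolume_2d(front, ref):
    # Area dominated by a mutually non-dominated 2-objective
    # *minimization* front, measured against the reference point `ref`.
    # Sorting by the first objective makes the second one descending,
    # so the dominated region decomposes into disjoint horizontal slabs.
    pts = sorted(front)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Three points carving out an area of 3 + 2 + 1 = 6 inside the (4, 4) box.
assert hypervolume_2d([(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)], (4.0, 4.0)) == 6.0
```

No such simple decomposition exists beyond 2-3 objectives, which is where the exponential cost comes from.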
 
Apparently for this application (more than 4 objectives) I will have to use some scalarization method, since Pareto-based methods work well with problems that have until 4 objectives, not more. Since I know better the NSGA-II algorithm I want to begin with it, for 2, 3 and maybe 4 objectives. I will compare its performance on my problem with MO-CMA-ES too, and will see later for the scalarization method... Is there any in DEAP ?

I don't know anything about scalarization. Is it just combining multiple objective values into a scalar?

Best regards,
François-Michel
 

clemen...@gmail.com

Oct 12, 2015, 10:33:42 AM
to deap-users
Thanks for this detailed answer.

I had a look at the descriptions of the deap crossover (https://github.com/DEAP/deap/blob/master/deap/tools/crossover.py) and mutation (https://github.com/DEAP/deap/blob/master/deap/tools/mutation.py) operators and indeed, my choices were not good.

Your proposal with the SimulatedBinary crossover and the Gaussian mutation works well. Actually, my problem is unbounded in this initial version, but I want it to be bounded; I just wanted to make it as simple as possible to begin with.

One of the advantages of the original NSGA-II algorithm is that it uses bounded crossover and mutation operators (the SimulatedBinaryBounded crossover operator and the PolynomialBounded mutation operator). I tried to use them in your code to keep all the parameters between 0.1 and 1 (so I used this code), and it works well too.

After that, I tried the original NSGA-II algorithm (=> see this code) and it seems to work better for this case: it's faster and provides a more diversified population of solutions than the eaMuPlusLambda algorithm (with the same selection, crossover and mutation operators, and the same values of "eta" and "indpb").

I'm discovering how to add constraints to the algorithm (with your article http://deap.readthedocs.org/en/master/tutorials/advanced/constraints.html) and more... It seems to be more powerful than just using bounded operators.
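For example, the decorator idea from that tutorial can be sketched like this in plain Python (`check_bounds` and `toy_mutate` are names I made up for the illustration; with DEAP you would attach it via toolbox.decorate("mate", ...) and toolbox.decorate("mutate", ...)):

```python
import functools

def check_bounds(lower, upper):
    # Decorator that clips every attribute of every returned individual
    # back into [lower, upper] after a crossover or mutation operator runs.
    def decorator(operator):
        @functools.wraps(operator)
        def wrapper(*args, **kwargs):
            offspring = operator(*args, **kwargs)
            for child in offspring:
                for i, value in enumerate(child):
                    child[i] = min(max(value, lower), upper)
            return offspring
        return wrapper
    return decorator

@check_bounds(0.1, 1.0)
def toy_mutate(individual):
    # Stand-in for a real mutation operator that pushes values out of range.
    return ([x + 0.5 for x in individual],)

mutant, = toy_mutate([0.0, 0.7, 0.9])
assert mutant == [0.5, 1.0, 1.0]   # 1.2 and 1.4 were clipped to the bound
```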

Thank you for the article about the fast version of the crowding distance.

I don't know much about scalarization, but yes, it seems to be about combining various objective functions into one and minimizing that one. In the French book I am reading, they mention an "epsilon-dominance based steady state MOEA", a scalarization method proposed by Deb in 2003 (see this article, where you can find some comparisons with NSGA2 and SPEA2...).
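If it is just that, the simplest (weighted-sum) form would look like the sketch below, in plain Python and reusing DEAP's sign convention for weights. Interestingly, with my second evaluation function (sum1 = sum(individual), sum2 = sum of the last 2 attributes), a weighted sum cannot even distinguish the all-ones individual from the balanced one I expected:

```python
def weighted_sum(fitness, weights):
    # Collapse a multi-objective fitness into one scalar to maximize:
    # a positive weight means the objective is maximized, a negative
    # weight means it is minimized (same convention as DEAP's weights).
    return sum(w * f for w, f in zip(weights, fitness))

weights = (1.0, -1.0)
# All-ones individual: sum1 = 5, sum2 = 2  ->  5 - 2 = 3
# "Balanced" [1, 1, 1, 0.5, 0.5]: sum1 = 4, sum2 = 1  ->  4 - 1 = 3
assert weighted_sum((5.0, 2.0), weights) == 3.0
assert weighted_sum((4.0, 1.0), weights) == 3.0
```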

Clément 

Divyam Aggarwal

Feb 27, 2017, 6:53:01 AM
to deap-users
I am currently working in the optimization area, and maybe I can help you out.

To scale NSGA2 up to many objectives (>= 4), you can use NSGA3 or MOEA/D, as they work well on many objectives compared to NSGA2, which fails on many-objective problems (>= 4).

Just so you know: in the field of optimization, problems with 2 or 3 objectives are referred to as multi-objective, and those with 4 or more objectives are called many-objective.

If you need a more efficient algorithm than NSGA3 or any other existing many-objective algorithm, I will let you know, as we are finalizing an algorithm that can beat all the existing ones.

Divyam