In short, the weights are used in the crowding-distance computation of NSGA-II. If your objectives are not on the same scale, the weights can put or remove emphasis on a subset of them. Other than that, they are of no use. I'm not aware of any study on the subject, however.
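To illustrate the point, here is a minimal pure-Python sketch (not DEAP's actual implementation) of a crowding-distance computation in which the raw objective values are multiplied by the weights first, so that objectives measured on very different scales contribute comparably:

```python
def crowding_distances(front, weights):
    """Crowding distance for each point of a non-dominated front.
    `front` is a list of objective tuples. Scaling each objective by
    its weight rebalances objectives with very different ranges."""
    n = len(front)
    if n == 0:
        return []
    scaled = [tuple(v * w for v, w in zip(point, weights)) for point in front]
    dist = [0.0] * n
    for m in range(len(weights)):
        # Sort indices along objective m; boundary points get infinite distance.
        order = sorted(range(n), key=lambda i: scaled[i][m])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = scaled[order[-1]][m] - scaled[order[0]][m]
        if span == 0:
            continue
        for j in range(1, n - 1):
            dist[order[j]] += (scaled[order[j + 1]][m]
                               - scaled[order[j - 1]][m]) / span
    return dist
```

With a front whose second objective spans [0, 1000], weights such as `(1.0, 0.001)` bring both objectives onto a comparable scale before the distances are summed.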
Cheers,
François-Michel
Hi Jaime,
If you plan to do multi-objective optimization and you do not have prior knowledge of the importance of each objective, you should stay away from aggregating functions and instead use a multi-objective selection algorithm based on the concept of dominance, most commonly Pareto dominance. Linear aggregating functions, such as the weighted sum, have a well-known limitation: they are unable to generate solutions on non-convex portions of the Pareto front [1].
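As a quick sketch, Pareto dominance between two fitness tuples (assuming maximization on every objective, for simplicity) can be written as:

```python
def dominates(a, b):
    """True if `a` Pareto-dominates `b`: at least as good on every
    objective and strictly better on at least one (maximization)."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))
```

Note that `(1, 0)` and `(0, 1)` are mutually non-dominated and both survive under dominance-based selection, whereas a weighted sum would collapse them onto a single scalar and discard one of them.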
DEAP currently includes two multi-objective selection operators: NSGA-II and SPEA2. However, for these operators' elitism to work properly, you have to use an algorithm that expands your population, such as algorithms.eaMuPlusLambda. There are many examples of usage in the examples folder; here is a short list:
- Discrete optimization with NSGA-II : GA Knapsack
- Discrete optimization with NSGA-II : GA Evolutionary KNN
- Continuous optimization with NSGA-II : GA Kursawe
- Discrete optimization with NSGA-II + custom algorithm: GA Evolutionary Sorting Network
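The reason these operators need an expanding algorithm is that (mu + lambda) selection pools parents and offspring before truncating back to mu survivors, which is what lets elites persist. A minimal pure-Python sketch of one such generation (not DEAP's implementation; a real NSGA-II would rank the pool by dominance and crowding rather than a single `key`):

```python
import random

def mu_plus_lambda_step(parents, lam, vary, key):
    """One (mu + lambda) generation: breed `lam` offspring from the
    parents, then keep the best `mu` individuals from parents AND
    offspring combined, so an elite parent is only displaced by an
    offspring that is actually better."""
    mu = len(parents)
    offspring = [vary(random.choice(parents)) for _ in range(lam)]
    pool = parents + offspring
    return sorted(pool, key=key, reverse=True)[:mu]
```

By contrast, a (mu, lambda) scheme selects from the offspring only, so a good parent can be lost even when every child is worse.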
NSGA-II, as François-Michel mentioned, can use the weights in the crowding-distance computation to normalize the objectives. However, it would be preferable to do the normalization in the evaluation function, as it can be more complex than a simple multiplication. Ideally, each objective should be defined on the same range of values.
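A sketch of that idea, with made-up objectives and bounds (everything here is hypothetical, just to show the shape of an evaluation function that returns objectives already mapped onto [0, 1]):

```python
def evaluate_normalized(individual, bounds=((0.0, 1.0), (0.0, 1000.0))):
    """Hypothetical evaluation returning each objective rescaled to
    [0, 1] using known (or estimated) per-objective bounds, so no
    single objective dominates the crowding-distance computation."""
    # Two illustrative raw objectives on very different scales.
    raw = (sum(individual), max(individual) * 1000.0)
    return tuple((v - lo) / (hi - lo) for v, (lo, hi) in zip(raw, bounds))
```

If the true bounds are unknown, running a few generations first and normalizing by the observed minima and maxima is a common fallback.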
Regards,
Félix-Antoine
Hi! Thanks for your time. That was quick!
I think I've understood most of the concepts, but I would like to ask a couple of questions to confirm my thoughts.
Since the scores are compared lexicographically in the tournaments, would it be acceptable to clone that function and edit the comparison to use, for example, a weighted sum of the fitness values? In your example, Marc-André, I'd try something like this (rough draft):
https://gist.github.com/enoyx/2f1ae960bf1d6bad5f83
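A generic sketch of that idea, independent of the gist above (all names here are hypothetical; `get_fitness` stands in for however the fitness tuple is extracted from an individual):

```python
import random

def sel_weighted_tournament(individuals, k, weights, tournsize=2,
                            get_fitness=lambda ind: ind):
    """Hypothetical tournament selection that ranks contestants by the
    weighted sum of their fitness values instead of lexicographically."""
    def score(ind):
        return sum(w * f for w, f in zip(weights, get_fitness(ind)))
    return [max(random.sample(individuals, tournsize), key=score)
            for _ in range(k)]
```

As noted earlier in the thread, collapsing the objectives into one scalar like this reintroduces the weighted-sum limitations, so it trades away the benefits of dominance-based selection.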
If that's not advisable, maybe I should rearrange the fitness scores in my evaluation function, with the presumably most important components first and the rest after?
Thanks again,
Jaime.