Constraints handling for multi-objective problem using NSGA-2


Wahdat Safia

unread,
Mar 14, 2016, 8:09:31 AM3/14/16
to deap-users
I have a multi-objective virtual machine placement problem that I am implementing using NSGA-II, but I have multiple constraints along with multiple objective functions. How can I apply the constraints so that infeasible solutions get wiped out over the generations? Constraint handling in multi-objective GAs is still a research topic. Many suggest crafting your genetic operators so that they always produce feasible solutions, or using constrained NSGA-II. Also, we cannot simply apply penalty functions to the objectives, because in NSGA-II the fitness of a solution is evaluated on the basis of its Pareto dominance rank, not directly from its objective function values.
Please suggest a constraint handling mechanism for multi-objective problems that would let me search in the feasible region only.
Is there any provision for implementing constrained NSGA-II in DEAP?


Manuel Belmadani

unread,
Mar 15, 2016, 1:48:33 AM3/15/16
to deap-users
Would it not be possible to use a penalty function that returns 0.0 for every fitness value when an undesirable condition is met? I'm actually interested in this as well, if it is possible.

Otherwise, if you're looking to get something working before we hear from a DEAP developer (or anyone who knows more about this), you could always do this via the fitness function.

I have a setup with a multi-objective fitness function. The set of objective functions is arbitrary and can come in any order. The only assumption is that every objective function receives the same parameters, in the same order, even if it doesn't use all of them (i.e., they share the same signature). My approach is to compute all the metrics I need, pass them to each function for evaluation, and return a tuple of the appropriate length. You can then check whether the individual or its fitness is what you expect, and if not, return a fitness of [0.0, 0.0, 0.0, ...] (depending on how many objectives you have).

Here's what my generalized fitness function looks like:

def fitness_function(individual, FITNESS_FUNCTIONS):
    """
    individual is your compiled solution.
    FITNESS_FUNCTIONS is a list of functions, like [sum, compute_something, another_function]
                      P.S., not [sum(), compute_something(), another_function()] -- just the
                      functions themselves, as they'll be invoked later.
                      In my setup, all my objectives take the same set of parameters/metrics,
                      even if they don't require all of them.
    """
    # Compute the metrics you need, independently of your fitness functions.
    metric1 = compute_metric1(individual)
    metric2 = compute_metric2(individual)
    metric3 = compute_metric3(individual)

    metrics = [metric1, metric2, metric3]

    fitnesses = []
    for f in FITNESS_FUNCTIONS:
        # Again, assuming that all your functions take the same parameters.
        fitnesses.append(f(*metrics))

    # If there's something you don't like about the solution, return
    # [0.0] * len(fitnesses) as the fitness score, i.e. the lowest possible one.
    # You can check at the individual level or at the fitness level.
    fitnesses = handle_constraints_over_individual(individual, fitnesses)  # if the individual doesn't look right, returns [0.0, 0.0, ...]
    fitnesses = handle_constraints_over_fitnesses(fitnesses)  # if the fitness has something wrong, returns [0.0, 0.0, ...]

    return tuple(fitnesses)
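To make the shape of this concrete, here is a self-contained toy version of the same idea (the objectives `total_load` and `spread`, the metrics, and the feasibility rule are all made up for illustration; substitute your own):

```python
def total_load(m1, m2, m3):
    # First toy objective: sum of the shared metrics.
    return m1 + m2 + m3

def spread(m1, m2, m3):
    # Second toy objective: range of the shared metrics.
    return max(m1, m2, m3) - min(m1, m2, m3)

def evaluate(individual, objective_functions):
    # Compute the shared metrics once, then feed them to every objective.
    metrics = [sum(individual), len(individual), max(individual)]
    fitnesses = [f(*metrics) for f in objective_functions]
    # Constraint check: if the individual is infeasible (here: any negative
    # gene), nerf every objective to the worst possible score.
    if any(gene < 0 for gene in individual):
        return tuple(0.0 for _ in objective_functions)
    return tuple(fitnesses)

print(evaluate([1, 2, 3], [total_load, spread]))   # feasible -> (12, 3)
print(evaluate([1, -2, 3], [total_load, spread]))  # infeasible -> (0.0, 0.0)
```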


I still think it would be more elegant to use the penalty decorator, but maybe there's a limitation there?

Wahdat Safia

unread,
Mar 15, 2016, 2:32:36 AM3/15/16
to deap-users

Thanks Manuel for your suggestion,

But I have read that "implementation of penalty function strategies, which is the most frequently used constraint handling strategy in single-objective GA, is not straightforward in multi-objective GA due to the fact that fitness assignment is based on the non-dominance rank of a solution, not on its objective function values".
Initially I was using the penalty decorator on the evaluation function: each solution is checked against all the constraints, and if the solution is valid it returns the original objective values, otherwise it returns a constant penalty value. For example, if all my objectives are of minimization type, I can return some large penalty value (choosing this constant penalty value is also an issue). But will this help the NSGA-II selection operator to demarcate such solutions and choose only the best solutions from the feasible search space?
Given the reference above, I am a little confused about whether NSGA-II will treat such penalized fitnesses as infeasible solutions.
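For reference, the constrained NSGA-II described in Deb's work sidesteps the penalty-constant problem by replacing ordinary dominance with a constrained-dominance test during non-dominated sorting. A minimal sketch of that comparison, assuming minimization objectives and a single non-negative number summarizing each solution's total constraint violation (all names here are mine, not DEAP's):

```python
def dominates(a, b):
    # Ordinary Pareto dominance for minimization: a is no worse than b in
    # every objective and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def constrained_dominates(fit_a, viol_a, fit_b, viol_b):
    # Deb's constrained-domination rule:
    # 1. A feasible solution dominates any infeasible one.
    # 2. Between two infeasible solutions, the smaller total violation wins.
    # 3. Between two feasible solutions, use ordinary Pareto dominance.
    if viol_a == 0 and viol_b > 0:
        return True
    if viol_a > 0 and viol_b > 0:
        return viol_a < viol_b
    if viol_a > 0 and viol_b == 0:
        return False
    return dominates(fit_a, fit_b)

print(constrained_dominates((9, 9), 0.0, (1, 1), 2.0))  # True: feasible beats infeasible
print(constrained_dominates((1, 2), 0.0, (2, 3), 0.0))  # True: ordinary dominance
```

With this rule, infeasible solutions are always ranked behind feasible ones, so no penalty magnitude has to be tuned.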

Thanks for your answers; your suggestions are highly valuable to me.

Manuel Belmadani

unread,
Mar 15, 2016, 3:00:04 AM3/15/16
to deap-users
Interesting. Where did you get that quote from? 

In principle, if you "nerf" (add a large penalty to) all objectives when a given individual is not what you expect, its score would be the worst possible, and it would be dominated by other solutions on the Pareto front, so it likely wouldn't be preserved. In the NSGA-II selection procedure, it would likely not make it from Pn + Qn (population + offspring at generation n) into Pn+1 (the next generation's initial population). However, I use the word likely because if you have enough solutions with nerfed fitness scores (more than the size of Qn), some may still get passed to Pn+1, since the top len(Pn) individuals (the size of your population) are always selected. In that case, it might be better to directly filter undesirable solutions out of your population: at the beginning of every generation, scan the population for undesirable individuals, remove them, and then carry on with mating and mutation.
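The per-generation filtering idea could be sketched like this (the `is_feasible` predicate and the refill-with-fresh-individuals policy are placeholders for whatever fits your problem):

```python
import random

def prune_and_refill(population, is_feasible, new_individual):
    # Drop infeasible individuals at the start of a generation, then top the
    # population back up to its original size with freshly generated ones, so
    # the selection pool doesn't shrink over the generations.
    target = len(population)
    survivors = [ind for ind in population if is_feasible(ind)]
    while len(survivors) < target:
        survivors.append(new_individual())
    return survivors

# Toy demo: individuals are lists of ints, feasible iff all genes non-negative.
rng = random.Random(0)
pop = [[1, 2], [-1, 3], [4, 0]]
pruned = prune_and_refill(pop,
                          lambda ind: min(ind) >= 0,
                          lambda: [rng.randint(0, 5), rng.randint(0, 5)])
print(len(pruned))  # 3: the two feasible survivors plus one fresh individual
```

Note that the freshly generated individuals should themselves be checked (or generated so as to be feasible by construction), otherwise the refill step can reintroduce the problem.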

What kind of constraints are you looking for exactly? Is it more about fitnesses that are too low for a solution to be useful, or is it something about the way the individual looks?

Wahdat Safia

unread,
Mar 15, 2016, 4:45:46 AM3/15/16
to deap-users

That quote is from the "Constraint handling" section of the paper "Multi-objective optimization using genetic algorithms: A tutorial".

Yes, well pointed out; I am indeed worried about the case where we have many solutions with nerfed fitness scores.
I am generating a random set of solutions where each solution takes the form of a boolean 2D numpy array, and both initialization and the genetic operators may produce feasible as well as infeasible solutions. So by applying the constraints (it is basically about how the individuals look) I want to remove the infeasible solutions, so that only feasible solutions participate in the evaluations of further generations.

Manuel Belmadani

unread,
Mar 15, 2016, 5:37:42 AM3/15/16
to deap-users
I see. I haven't tried GAs of this sort, but have you considered doing GP instead of a GA?

GP (Genetic Programming) would allow you to enforce feasible solutions via a grammar: http://deap.readthedocs.org/en/master/examples/gp_spambase.html

I'm not sure whether it's an appropriate paradigm for what you're looking to do, but it may help you get around infeasible solutions if you design a template grammar that only produces valid individuals. Feel free to ask if you have any questions about GP. In my (limited) experience, GAs worked pretty well when any possible solution was 'valid', but GP helped me a lot when I had a specific structure in mind.