PSO Solutions Clustering on Pareto Front


hobscrk777

Sep 30, 2014, 4:18:01 PM9/30/14
to deap-...@googlegroups.com


I'm trying to wrap my head around the Particle Swarm Optimization implementation introduced in the DEAP documentation, using the ZDT1 benchmark to explore the various parameters. For reference, the Pareto front should look like this: ZDT1 Pareto Front


What I'm finding, however, is that the output of my PSO framework is clustered around the upper-left portion of the Pareto front. See here. What's more, I should have 30 individuals in my population, but somehow many of them end up with identical fitness values (many points sit on top of one another at (0.0, 1.0)). This clearly has something to do with my penalty function, but I don't see any reason why the penalty function should, in general, force all out-of-bounds individuals to have output responses of (0.0, 1.0).

My code is pasted below. Does anyone have any suggestions to steer me in the right direction, or shed light on why the Pareto front looks so lopsided?


Thanks.



import random, operator
import numpy as np
import matplotlib.pyplot as plt
from deap import creator, base, tools

def F1(x):
    return x[0]

def F2(x):
    return G(x) * (1 - np.sqrt(F1(x) / G(x)))

def G(x):
    n = len(x)
    return 1 + 9 / (n - 1) * np.sum(x[1:])

def evalZDT1(individual):
    return F1(individual), F2(individual), G(individual)

creator.create("FitnessMin", base.Fitness, weights=(-1.0, -1.0, -1.0))
creator.create("Particle", list, fitness=creator.FitnessMin, speed=list,
               smin=None, smax=None, best=None)

def generate(size, pmin, pmax, smin, smax):
    part = creator.Particle(random.uniform(pmin, pmax) for _ in range(size))
    part.speed = [random.uniform(smin, smax) for _ in range(size)]
    part.smin = smin
    part.smax = smax
    return part

def updateParticle(part, best, phi1, phi2):
    u1 = (random.uniform(0, phi1) for _ in range(len(part)))
    u2 = (random.uniform(0, phi2) for _ in range(len(part)))
    v_u1 = map(operator.mul, u1, map(operator.sub, part.best, part))
    v_u2 = map(operator.mul, u2, map(operator.sub, best, part))
    part.speed = list(map(operator.add, part.speed, map(operator.add, v_u1, v_u2)))
    for i, speed in enumerate(part.speed):
        if speed < part.smin:
            part.speed[i] = part.smin
        elif speed > part.smax:
            part.speed[i] = part.smax
    part[:] = list(map(operator.add, part, part.speed))

def feasibility(individual):
    """Feasibility function for the individual. Returns True if feasible,
    False otherwise."""
    return all(0 <= x <= 1 for x in individual)

def feasible(individual):
    """Return the closest valid particle by clipping to the [0, 1] bounds."""
    ind = np.clip(np.array(individual), 0, 1)
    return creator.Particle(ind.tolist())

toolbox = base.Toolbox()
toolbox.register("particle", generate, size=30, pmin=0.45, pmax=0.55,
                 smin=-0.01, smax=0.01)
toolbox.register("population", tools.initRepeat, list, toolbox.particle)
toolbox.register("update", updateParticle, phi1=1, phi2=1)
toolbox.register("evaluate", evalZDT1)
toolbox.decorate("evaluate", tools.ClosestValidPenalty(feasibility, feasible, 10))

pop = toolbox.population(n=25)
stats = tools.Statistics(lambda ind: ind.fitness.values)
stats.register("avg", np.mean)
stats.register("std", np.std)
stats.register("min", np.min)
stats.register("max", np.max)

logbook = tools.Logbook()
logbook.header = ["gen", "evals"] + stats.fields

GEN = 100
best = None
for g in range(GEN):
    for part in pop:
        part.fitness.values = toolbox.evaluate(part)
        if not part.best or part.best.fitness < part.fitness:
            part.best = creator.Particle(part)
            part.best.fitness.values = part.fitness.values
        if not best or best.fitness < part.fitness:
            best = creator.Particle(part)
            best.fitness.values = part.fitness.values
    for part in pop:
        toolbox.update(part, best)

    # Gather all the fitnesses in one list and print the stats
    logbook.record(gen=g, evals=len(pop), **stats.compile(pop))
    print(logbook.stream)

front = np.array([ind.fitness.values for ind in pop])
plt.figure()
plt.scatter(front[:, 0], front[:, 1], c="b")
plt.axis("tight")
plt.show()









Félix-Antoine Fortin

Sep 30, 2014, 5:24:34 PM9/30/14
to deap-...@googlegroups.com
Hi Jonathan,

You are using a mono-objective algorithm to solve a tri-objective problem. Therefore, you are only optimizing the first objective, and using the other objectives only when there are ties on the first one. This is why all your individuals have the same fitness.

Félix-Antoine

François-Michel De Rainville

Oct 1, 2014, 9:28:22 AM10/1/14
to deap-...@googlegroups.com
You should have a look at the multi-objective PSO literature (MOPSO). There are many possible algorithms.

hobscrk777

Oct 1, 2014, 12:36:06 PM10/1/14
to deap-...@googlegroups.com
Thanks for the fast reply, Félix-Antoine.

I think that by modifying the comparisons of the best positions, the particles are now being evaluated on the basis of all the objectives, and not merely the first.

François-Michel De Rainville

Oct 2, 2014, 12:30:13 PM10/2/14
to deap-...@googlegroups.com
"<" does compare all objectives, but it is a lexicographic comparison.

You should use a multi-objective PSO that maintains multiple best particles.
