Thank you, developers and community, for this great package! It is helping me a lot in my research.
After two days of debugging, I finally found out why my optimisation loop isn't working. I am using NSGA-III on a 14-objective optimisation problem with 4 parameters to optimise in each individual, and a population size of 212.
The problem comes when I want to calculate the hypervolume of my population. With fewer than 20 points there is no problem, but the computation becomes slower as the size increases, and at 212 points the kernel crashes. I am attaching the fitness values as a text file in case you are interested. You can already see in the code below that with slices of 31 points ('i+31') the calculation becomes noticeably slow; if you keep increasing the slice size, it gets slower and slower and eventually crashes. Do you have an idea of a way around this problem?
Here is the code (sorry about the formatting; I tried pastebin and Notepad++ but they didn't preserve it):
import numpy as np

try:
    # try importing the C version
    from deap.tools._hypervolume import hv
except ImportError:
    # fall back on the pure Python version
    print("no C")
    from deap.tools._hypervolume import pyhv as hv

# fitness values of the population (one 14-dimensional point per row)
ttt = np.loadtxt('fitness_values.txt')

# 14-dimensional reference point for the hypervolume
ref_hyp = np.array([19.685825, 962.5723419, 5.69326178, 0.043162976,
                    11.61159696, 0.01699155, 0.018175168, 223965.1889,
                    2.61386E-06, 0.016530917, 1.9237E-06, 2.82397E-06,
                    4472.93138, 4.81537E-06])

for i in range(len(ttt)):
    print('iteration number ', i)
    # hypervolume of a sliding window of up to 31 points
    value_hypervolume = hv.hypervolume(ttt[i:i+31], ref_hyp)
    print('value_hypervolume is ', value_hypervolume)
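In case it helps with reproducing this, here is a minimal timing sketch showing how the runtime grows with the number of points. It assumes the same fitness_values.txt attached to this issue, and the cut-off sizes are just ones I picked:

import time

import numpy as np
from deap.tools._hypervolume import hv  # swap in pyhv as hv if the C build is unavailable

ttt = np.loadtxt('fitness_values.txt')
ref_hyp = np.array([19.685825, 962.5723419, 5.69326178, 0.043162976,
                    11.61159696, 0.01699155, 0.018175168, 223965.1889,
                    2.61386E-06, 0.016530917, 1.9237E-06, 2.82397E-06,
                    4472.93138, 4.81537E-06])

# Time the hypervolume of the first n points for growing n.
# Going much past ~40 points reproduces the slowdown (and eventually the crash).
for n in (10, 20, 31, 40):
    start = time.perf_counter()
    value = hv.hypervolume(ttt[:n], ref_hyp)
    elapsed = time.perf_counter() - start
    print('n =', n, 'hypervolume =', value, 'time =', round(elapsed, 3), 's')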
Have a good day!
Hemant