Hi Keith,
On 3 October 2012 16:37, Keith Nelson <krne...@gmail.com> wrote:
> Hello Colin,
> My first attempt to train on a sliding window of evaluation data uses
> the EvaluationInfo class with FitnessHistoryLength set to 10, and I am
> getting some undesired behaviour. What appears to be happening is that,
> each generation, newly created offspring are awarded a higher fitness
> "average" (based on a single fitness value scored off the latest sliding
> window) than older genomes with evaluation counts greater than one,
> scored over several sliding windows - so the older genomes get dropped
> quickly and never reach 10 evaluations.
I've had a look through the code and tried running the prey capture
domain with fitness history length = 10. I didn't spot any problems; the
only thing I noticed was that genomes seem not to live through very many
generations. I think this is because I bias selection towards newer
genomes (though I don't recall 100% whether that is the case).
Also, the actual fitness buffer calculation of the mean fitness over all
evaluations looks OK.
Could it be a side effect of your fitness scoring? E.g. if it gives
different scores for the same genome (non-deterministic scoring), then
you might get the occasional good score and lots of bad scores, so older
genomes will tend to have a fairly low mean, while a few new genomes
will be seen with a higher one-off score.
I don't want to go into detail, but I've been suspecting for a while now
that the standard/canonical evolutionary algorithm approach is far too
efficient at throwing away low-scoring genomes. As an initial suggestion
and stopgap measure you could try setting the elitism and selection
proportions to 80-90% (instead of the default 20%), and see if that
helps. Otherwise I'll probably need some more info to help you out any
further. Let me know how you get on.
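Why raising the elite fraction should help can be seen in another toy
model (again not SharpNEAT's actual algorithm; population size, noise
level and trial count are made-up): a genome of exactly average true
fitness survives a generation only if it lands inside the kept elite
fraction when ranked against single-sample noisy scores.

```python
import random

random.seed(1)

def survival_prob(keep_fraction, pop_size=100, trials=2000):
    """Estimate the per-generation survival probability of a genome
    whose true mean fitness (0.5) is exactly average, when it is ranked
    against pop_size rivals scored on a single noisy draw ~ N(0.5, 0.2)
    and only the top keep_fraction of the population is retained."""
    keep = int(keep_fraction * pop_size)
    survived = 0
    for _ in range(trials):
        rivals_above = sum(
            random.gauss(0.5, 0.2) > 0.5 for _ in range(pop_size))
        if rivals_above < keep:
            survived += 1
    return survived / trials

for frac in (0.2, 0.5, 0.9):
    print(f"keep top {frac:.0%}: survival/generation "
          f"= {survival_prob(frac):.2f}")
```

With only the top 20% kept, an average genome is almost never retained,
so it cannot live long enough to fill a 10-slot fitness history; keeping
80-90% lets it survive many generations and accumulate evaluations.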
Regards,
Colin.