It's pretty widely accepted that no single set of genetic algorithm
parameters (e.g. population size, mutation rate, crossover rate)
is optimal for all problems, and it sounds like you're trying
to work out an approach to that issue. One common
approach has been to vary the mutation rate linearly over the
course of a run, from a higher value at the beginning (thus
initially emphasizing exploration of the search space) down to a
lower value at the end (where presumably crossover will be more
successful at producing good solutions).
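A minimal sketch of that schedule, assuming you track the current
generation number (the function name and the particular rate values
are illustrative, not from any specific GA package):

```python
def mutation_rate(generation, max_generations,
                  initial_rate=0.10, final_rate=0.01):
    """Interpolate the per-gene mutation rate linearly from
    initial_rate at generation 0 down to final_rate at the
    final generation."""
    if max_generations <= 1:
        return final_rate
    fraction = generation / (max_generations - 1)
    return initial_rate + (final_rate - initial_rate) * fraction
```

You would call this once per generation and use the result as the
per-gene mutation probability when producing children.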
Another recent, related piece of work to look at is the "adaptive
operator probabilities" work of Lawrence Davis: he takes a set
of genetic operators (e.g. mutation, single-point crossover,
two-point crossover, uniform crossover, hillclimbing operators)
and gives them each an initial probability at the beginning of
the run, and then modifies the operator's probability on the
basis of the performance of the children produced by this
operator (and their descendants). He's had quite good results
with this approach, and it also provides an interesting way of
testing out new operators to see if they improve performance or
hurt it. Note, however, that he maintains a single probability
for each operator across the entire population, as opposed to
having a different probability for each member.
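One way to sketch that idea in code: keep a single population-wide
probability per operator, select operators by those probabilities,
and shift probability mass toward operators whose children do well.
The class name, update rule, and constants below are illustrative
simplifications, not Davis's exact algorithm (in particular, his
credit assignment also passes credit back to the operators that
produced a child's ancestors):

```python
import random

class OperatorAdapter:
    """Maintain one selection probability per genetic operator,
    shared across the whole population, and adapt it from the
    performance of the children each operator produces."""

    def __init__(self, operators, learning_rate=0.1, floor=0.01):
        self.operators = list(operators)
        n = len(self.operators)
        # Start all operators at equal probability.
        self.probs = {op: 1.0 / n for op in self.operators}
        self.learning_rate = learning_rate
        self.floor = floor  # keep every operator selectable

    def choose(self, rng=random):
        """Pick an operator according to the current probabilities."""
        weights = [self.probs[op] for op in self.operators]
        return rng.choices(self.operators, weights=weights, k=1)[0]

    def credit(self, operator, reward):
        """Increase an operator's probability in proportion to the
        reward its child earned (e.g. fitness improvement over its
        parents), then renormalize with a floor so no operator's
        probability drops to zero."""
        self.probs[operator] += self.learning_rate * reward
        total = sum(self.probs.values())
        for op in self.operators:
            self.probs[op] = max(self.floor, self.probs[op] / total)
        total = sum(self.probs.values())
        for op in self.operators:
            self.probs[op] /= total
```

The floor is one way to get the side benefit mentioned above: a new
operator you're testing keeps getting occasional trials even if it
starts out performing poorly, so you can see whether it ever earns
its probability back.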
Lawrence Davis, "Adapting Operator Probabilities in Genetic Algorithms,"
Proceedings of the 3rd International Conference on Genetic Algorithms,
Morgan Kaufmann, 1989.
Philip Resnik
pre...@bbn.com