For those who are wondering what "tutorial" Forrest is referring to,
the book started life as a chapter in a forthcoming handbook on
computational intelligence. (There's more on this in the Preface.) A
version of that chapter was also released as <a href="http://
www.essex.ac.uk/dces/research/publications/technicalreports/2007/ces475.pdf">a
technical report</a>, and that's what Forrest's group was using.
There were a lot of changes between that report and the final book,
including changing sections into chapters and moving a lot of material
around. In particular, the Section 7.11 that Forrest talks about
turned into Section 12.1 in the book, and along the way it changed
from being a simple bullet list to being an actual sequence of
paragraphs.
I'd obviously hope that the additional text might clarify some of
these issues, but I suspect the underlying issue still remains: When
is an evolutionary algorithm (in particular GP) the "right" (or at
least a promising) tool for a particular problem? The book provides a
zillion examples of applications of GP in Chapter 12, and some general
principles in 12.1, but that all still leaves the reader with a
substantial induction problem when faced with a new domain.
And this is clearly a common and important question; it came up from
the audience more than once at the panel discussion at EuroGP 2008 two
weeks ago in Naples. One answer from the audience was effectively
"Use GP when you don't know what else to try, and you don't mind
magic". There were people (including at least one of my co-authors)
who disagreed with the "method of last resort" tone, but I think
there's a kernel of truth in it. If there are well-behaved,
well-understood deterministic solutions to your problem, then you probably
want to use them rather than using GP to evolve some mysterious
beast. On the other hand, I wouldn't recommend spending 6 months
trying to develop a "traditional" solution if a week with GP might
have turned up something entirely workable.
All of which leaves the question of which problems GP might be good
for, which I don't think has a simple answer. As experienced
practitioners know, there's a certain amount of black magic and
experience that goes into getting an evolutionary algorithm (or a
neural net or most machine learning systems) to successfully address a
hard problem. Applying such tools is still more craft than science
(and changing that would be extremely valuable). In the bowling
score problem, for example, I suspect there are clever ways to pose the
problem and organize the large number of test cases that would make
the problem fairly tractable. Similarly, there are also ways of
framing the problem that are almost guaranteed to fail.
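To make that concrete, here's a minimal sketch in Python of one way the test cases might be organized; this is my own illustration, not anything from the book, and the names (bowling_score, fitness, the case groupings) and the grouping-by-difficulty idea are just one hypothetical framing:

```python
# A sketch of organizing bowling-score test cases for a GP fitness
# function: group cases by difficulty so fitness rewards partial
# progress instead of being all-or-nothing. Hypothetical illustration.

def bowling_score(rolls):
    """Reference scorer used to label test cases (standard ten-pin rules)."""
    score, i = 0, 0
    for _ in range(10):
        if rolls[i] == 10:                   # strike: 10 + next two rolls
            score += 10 + rolls[i + 1] + rolls[i + 2]
            i += 1
        elif rolls[i] + rolls[i + 1] == 10:  # spare: 10 + next roll
            score += 10 + rolls[i + 2]
            i += 2
        else:                                # open frame: sum of two rolls
            score += rolls[i] + rolls[i + 1]
            i += 2
    return score

# Cases grouped from easy (no bonus scoring) to hard (strikes), so an
# evolving program can earn credit on simple games before it handles
# the full rules.
CASES = {
    "open_frames": [[3, 4] * 10, [0, 0] * 10],
    "spares":      [[5, 5, 3, 4] + [1, 2] * 8],
    "strikes":     [[10] * 12],              # perfect game: 300
}

def fitness(candidate):
    """Summed absolute error over all cases; lower is better.
    `candidate` is an evolved function from a list of rolls to a score."""
    error = 0
    for group in CASES.values():
        for rolls in group:
            try:
                error += abs(candidate(rolls) - bowling_score(rolls))
            except Exception:
                error += 1000                # penalize crashing programs
    return error
```

The point of the grouping is to give the search a gradient: a program that scores open frames correctly but botches strikes still beats one that gets nothing right, whereas a single exact-match test on full games would make almost every early candidate look equally bad.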
I'm not sure how much this helps, but I think it's an important
question and look forward to continuing the conversation.
Nic