I finished my autobiography at the end of last month, and part of it was on AI.
Some scientists believed that the best way to
create artificial intelligence was to imitate existing intelligence and use
techniques based on biological metaphors. We’ll call this the biological
approach. Other scientists insisted that computer intelligence need
not be like human intelligence at all in its manner of function, even though it
might achieve the same ends. The computer could be exploited for
its speed and limitless patience to solve problems in a way that humans could
not. We’ll call them the mechanists.
The situation in this respect was rather like
that which surrounded early attempts to fly. Some people thought that the
best way to fly was to imitate birds and insects, since they were the only
successful fliers around. And so you have these early motion pictures of
the tragic bird men, absurdly dressed in wings and feathers, who flapped their
arms when jumping off high points, only to plummet to their deaths. In
the other camp were the proponents of fixed-wing flight. Of course the
fixed-wing camp won out.
Unlike the bird man/fixed wing conflict that
was played out conclusively in favour of fixed wing, the biological/mechanist
contest was not resolved conclusively in favour of one side. At the
moment, the biological side is in ascendancy. Modern AI is based on
neural nets and positive reinforcement, both of which are based on
biological models. But there were periods when that approach was out of
favour, and one of those periods lasted for more than 20 years after 1970.
This was the period when I entered the subject.
Instead the mechanist school was in ascendancy, and the
most powerful contingent was the one which favoured the use of symbolic
logic. This was an exciting time because it was believed that, by endowing
the computer with the power to conduct general reasoning in logic, general
intelligence would emerge given sufficient data. That is:

intelligence = general reasoning ability (logic) + information
We’ll call this idea computational logicism.
Computational logicism depended on building powerful inference engines -
programs that would give the computer the power to reason logically.
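To make the idea concrete, here is a minimal sketch of what an inference engine does, in the spirit of computational logicism: a forward-chaining loop over Horn-clause rules (premises implying a conclusion) until no new facts can be derived. This is purely illustrative; the rules, names, and facts are invented and bear no relation to any historical system.

```python
# A toy forward-chaining inference engine. Rules are Horn clauses:
# a tuple of premises and a conclusion, all atomic propositions.

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (("greek",), "human"),   # every Greek is human
    (("human",), "mortal"),  # every human is mortal
]
facts = forward_chain({"greek"}, rules)  # derives "human", then "mortal"
```

Real inference engines of the period worked over full first-order formulas with unification, which is where the difficulties described below set in.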
It's interesting to see what happened to computational logicism. It faltered for several reasons, but some of these were not inevitable.
1. A weak model for connecting natural language to logic. The models based on context-free (CF) grammars don't really work very well because of the size of the grammars needed - maybe 20,000 rules!
2. The combinatorial problems in using logic to drive reasoning.
3. The weakness of first-order logic with respect to encoding natural language.
4. The failure of Prolog.
5. Intractability of more powerful logics.
6. Lack of cash!
Some of these failures were not inevitable. My Pygmalion used a different model to do the job, and had it been finished I estimate the rules needed would have numbered 500-800. Theorem provers today are much better than they were in the 70s and 80s. THORN, for instance, solves Schubert's Steamroller in 0.2 seconds, whereas in 1978 it was totally unsolvable by machine and was only solved in 1984 by annotating the problem.
The lack of cash in early AI was a problem. People would do interesting pilot experiments, but the billions spent today were not there, so pilot experiments were all one could do. I've been tempted to do something on the old lines, but like my predecessors I too would only be able to do a probe.
Will the old AI come back? Perhaps; I don't know.
Mark