Comp.ai.imho FAQ (Version 1.5)
a short course in AI, with net-pointers
jo...@mcs.com 26 December 1995
"A just machine to make big decisions
Programmed by fellows with compassion and vision..."
Donald Fagen "I.G.Y."
A more current and better linked version of this FAQ is on the WWWeb at:
<URL: http://www.mcs.net/~jorn/html/ai.html >
--- Why comp.ai.imho? [imho = in my humble opinion]
Here's a slogan:
============ Artificial Intelligence ===========
=========== is *way* too important ==========
============ to leave to the experts!!! ===========
There used to be a sense of idealism about AI-- that it was going to be
a powerful force for the good of humanity-- but that idealism is being
squeezed out by hypocrites who crave money, status, and power.
These 'experts' have turned AI into a battle for territory, obstructing
progress, obscuring their trivialities behind impressive-sounding
jargon, and turning this fundamental, urgently important domain of
science into an exclusive club, with an artificially limited 'union
card'...
But AI is way too important to let this happen! What lies ahead for AI
isn't just fancier productivity software-- we should be looking, much
more, for a profound revolution in human self-understanding, to be
swiftly followed by a parallel revolution in human self-government... so
we shouldn't tolerate academic politics and grantsmanship!
I've pulled together this "comp.ai.imho" FAQ as an opinionated
alternative to the "comp.ai.status-quo" FAQ created by Mark Kantrowitz,
which, while an impressive piece of work (that I've pillaged freely
from), doesn't really address the needs of *AI outsiders* to find a
convenient entrée to its study on the Net.
I don't have a degree in AI, and there are many areas where my ignorance
is near-total. I'm taking a skeptical, show-me attitude towards these
areas, so if there are new technologies that haven't reached the popular
market, I'll be happy to add them to this FAQ later, once I'm
convinced of their promise.
What I know about AI was mostly worked out on my own, and then extended
and focused by three years as a programmer (and, effectively, *spy* ;^)
at Roger Schank's AI lab at Northwestern University.
-- The layout of this FAQ, and the shape of AI's past
This FAQ will try to embed a full range of useful net-pointers (as any
FAQ should) within a *readable* narrative of AI's history.
Briefly:
- AI's long prehistory begins with divination, then proceeds thru
Aristotle to Leibniz, Roget, and Polti-- all looking for a neat,
universal sorting of the *full range* of human experience. The primary
tools of this era were the abstraction hierarchy, and the concept of
orthogonal dimensions... imho.
- With the invention of the programmable digital computer, such *neat
systematizations* began to promise a whole new level of payback. One
conceptual hurdle to be crossed, though, was the transition from
arithmetic computation to symbolic computation, most notably via the
invention of the LISP language. LISP's dramatic (imho) history will be
explored.
- An early goal of AI research was automated translation between natural
languages (eg, Russian to English). One line of research has focused on
parsing the grammatical (syntactic) structure of sentences (basically a
pattern-matching problem), another on representing the meanings, or
semantics (that bigger problem that dates back to Aristotle). Emacs and
SGML are useful tools for thinking about these problems. Current
grammar-checkers and text-adventure games, poor as they are, may
adequately represent the state of the art in these areas... imho.
- AI is prone to collective 'manias', regarding 'magical' hardware
solutions like parallel processing, LISP machines, neural nets, etc.
***TANSTAAFL*** ...imho. It's also prone to being poisoned by ego and
greed, because of the combination of low standards and high stakes...
again, imho. [***There Ain't No Such Thing As A Free Lunch]
- *Expert systems* are AI's greatest success story, but so far their
construction has normally been a one-off affair-- each one must be built
from scratch by a labor-intensive process of 'knowledge extraction' and
knowledge representation. The CYC project is trying to fix this by
building a huge, universal expert system by 'brute force'. From a
programmer's point-of-view, though, the laying out of the data
structures is the important thing-- the algorithms of logical expertise
are comparatively trivial. And while CYC is the most ambitious attack
ever on the 'Aristotle problem', it offers as yet few breakthrus on the
problems of representing human mental (and emotional) states... (imho).
- The natural way to represent these mental states, and their laws, is
in the form of *stories*. The Aristotle-problem turns out to require a
universal inventory of human stories, following the groundwork laid out
by Polti and, surprisingly, James Joyce. A new data topology, that
unites the concept of the abstraction hierarchy with the idea of
orthogonal dimensions, promises a new direction for story
representation... imho.
- One application for AI, near and dear to most of us, is netnews
message filtering. Another, related one might be called "object-
oriented word processing", where the 'objects' include words and
phrases, concepts and stories. Designing intuitive software interfaces
also has an important AI component... imho.
- The ancient goal of self-contained robots with videocamera-eyes is
still a long way off. A simpler form of robotics research focuses on
moving virtual robots thru *simulated* worlds. Games like SimCity are
state-of-the-art simulations. Along related lines, *planning* algorithms
have tended to emphasize exhaustive combinatoric search, which is
*hopeless*... imho.
- Discussions about consciousness and pain and the ethics of
artificially-created intelligences... *bore me silly*! They're
premature, at best, and at worst, neurotic... imho!
-- The prehistory of AI
Strangely enough, the first recorded human attack on the problems of AI
came from the *fortune tellers*. If their systems for generating
predictions had overlooked certain classes of human events, then
obviously those systems could never even *accidentally* predict them...
Beginning before 1000 BCE, astrologers were already exploring an
especially rich (though arbitrary) system of planetary relationships
with three orthogonal dimensions (planet, sign, and house), trying to
map them onto human experiences. The I Ching, slightly later, explored
the 64 precise permutations of a six-bit binary system, as well as the
eight three-bit half-words they contained. The Kabbalah and Tarot
offered simpler systems in the medieval era, tied more directly to
particular human meanings like virtue and vice.
"Orthogonality" is a critical concept in software design. The name
implies a set of *dimensions* that are at 'right angles' to each other,
so that a concept can be defined in terms of one-value-for-each-dimension.
The most vivid example I know of was when MacPaint first appeared,
allowing one to easily vary the following orthogonal dimensions for each
graphic 'object': shape, size, position, fill-pattern, border-thickness,
and border-pattern.
The beauty of orthogonality in software design is that it allows an
*extremely* broad range of objects to be defined with a minimal set of
parameters. Consequently, one need only remember these few commands
to master all the objects so created. ("An ounce of orthogonality is
worth a *ton* of 'added-features' tinsel.") And the programming-code
required to implement them is also minimized! So the dream of an
orthogonal analysis of all natural and social phenomena, is an enticing
one...
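To make this concrete, here's a minimal sketch in Common LISP (the
names and numbers are invented for illustration, not taken from any
real drawing program) of how six orthogonal dimensions define an
enormous space of graphic objects with one small structure:

  ;; Each dimension is independent: any shape may combine with any
  ;; size, position, fill, and so on, so six slots define
  ;; shapes x sizes x positions x ... distinct objects.
  (defstruct graphic
    (shape    'rectangle)     ; rectangle, oval, line, polygon...
    (size     '(100 50))      ; width, height
    (position '(0 0))         ; x, y
    (fill     'gray)          ; fill-pattern
    (border   1)              ; border-thickness, in pixels
    (border-pattern 'solid))

  ;; Changing one dimension never disturbs the others:
  (let ((g (make-graphic)))
    (setf (graphic-fill g) 'black)   ; only the fill changes
    g)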
Aristotle made a much more *grounded* assault on the range of human
meanings around 300 BCE. Aquinas later extended Aristotle's analysis to
include Christian ethics. The Middle Ages brought Raymond Lull, playing
mystical combinatorial 'games', leading eventually to Leibniz's (1646-
1716) dream of a purely rational culture, where all concepts would be
encoded as mathematical formulae, and philosophical disputes would be
met with the cry, *Calculemus*... "Let us calculate!"
Giambattista Vico in his "New Science" (1725, pars 161-162) was likely the
first to anticipate a universal dictionary of *concepts*, realized in 1852
with Peter Mark Roget's Thesaurus. The Dewey Decimal System (1876) and
Library of Congress Classification are two later evolutions, but all of
these are plagued by redundancies and ambiguities. Two net-specific
proposals are Yahoo's WWWeb index, and the Usenet hierarchy itself.
AI views *hierarchies* as networks of 'nodes' connected by 'isA' links.
In computer memory, any clump of data can be a node, and any *pointer* to
such a clump can be a link. 'IsA' is the particular relationship between
a more-specialized and a less-specialized form of the same thing: "hunger
isA motive" translates as: "Motive is a general class that includes hunger
as one specialized form." (Another common sort of link is 'partOf'.)
While hierarchical thinking comes naturally to most people, the
implementing of hierarchies in computer memory allows one to extend the
hierarchy-structure in ways that are less intuitively obvious. It's cheap
and easy, for example, to allow a single element to be 'multiply indexed'
at more than one location in the hierarchy... but even this minor tweak
comes only slowly to human thinking-habits.
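As a rough sketch (my own names, nothing standard), nodes and isA
links reduce to structures and pointers, and 'multiple indexing' is
just a node with more than one parent:

  (defstruct node name parents)   ; parents = the node's isA links

  (defparameter *motive* (make-node :name 'motive))
  (defparameter *bodily-state* (make-node :name 'bodily-state))

  ;; "hunger isA motive"-- and hunger is multiply indexed,
  ;; appearing under bodily-state as well:
  (defparameter *hunger*
    (make-node :name 'hunger
               :parents (list *motive* *bodily-state*)))

  (defun isa-p (node class)
    "Walk the isA links upward."
    (or (eq node class)
        (some (lambda (p) (isa-p p class)) (node-parents node))))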
-resources-
(I expect I'll turn up some Net resources on divination systems,
eventually. There's an I Ching for Unix somewhere, I'm sure...)
For Lull and his lineage, see Frances Yates, "The Art of Memory".
Roget's 1911 Thesaurus is available by anonymous FTP from the Consortium
for Lexical Research: clr.nmsu.edu:/CLR/lexica/roget-1911 [128.123.1.12]
Project Gutenberg also has Roget's 1911 Thesaurus, and many other
classics. The Project Gutenberg archive is at
mrcnext.cso.uiuc.edu:/pub/etext/
or: src.doc.ic.ac.uk:/literary/collections/project_gutenberg/
The Online Book Initiative maintains a text repository on ftp.std.com
Fritz Lehmann (fr...@rodin.wustl.edu) is collecting a master-list of
indexing schemes, or "concept systems", that currently numbers over 150
entries, many *extremely* obscure. ;^/
The Usenet hierarchy can usually be examined on Unix systems via:
/usr/lib/news/active or /usr/lib/news/newsgroups
-- LISP and symbolic computation
Writing elegant programming code is a really hard task! Designing
algorithms and data structures requires such a difficult sort of
analytic thought, it's a wonder anyone can do it well, at all. The main
barrier to AI is the need to innovate new ways of thinking about
programming-- such as object-oriented design, and declarative code.
(Jargon is the greatest enemy of the required clarity of thought!)
(Briefly, object-oriented design recognizes that each type of data-
object-- number, character-string, database record, text document,
etc.-- has its own set of properties and 'verbs' that the programmer must
define. This definition of a 'type' can be called a 'type object'.
Declarative languages like Prolog try to free programmers from thinking
about what specific sequence of actions the computer will go thru,
allowing them instead to spell out a series of 'declarative' statements
from which the computer can itself determine the sequence.)
The LISP language was invented by John McCarthy in 1958, to simplify the
handling of arbitrary lists of arbitrary data objects. LISP builds
(almost) everything out of "cons cells", an elegant structure consisting
of two memory locations, and nothing else. These may hold pointers to
other cons cells, or to numbers-- and these numbers may represent
quantities, or *qualities, concepts, symbols*. Out of these simple
building blocks, patterns of data can be constructed that embody real-
world intelligence.
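In Common LISP notation, a minimal illustration-- conses chained into
a list whose elements are *symbols* rather than quantities:

  ;; (cons a b) allocates one two-slot cell; lists are chains of them.
  (cons 'hunger (cons 'isa (cons 'motive nil)))   ; => (HUNGER ISA MOTIVE)

  '(hunger isa motive)         ; the same structure, written the usual way

  ;; car and cdr read the two halves of a cell:
  (car '(hunger isa motive))   ; => HUNGER
  (cdr '(hunger isa motive))   ; => (ISA MOTIVE)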
The basic task of a LISP program is to wander along a straight chain of
cons cells, pointer to pointer to pointer, looking for some pattern in
the other half of the cell. LISP-code represents these chains very
elegantly by enclosing them in parentheses:
(pattern1 pattern2 pattern3 ... )
This crawl-and-compare strategy is the normal approach to *search*, but
search is an aspect of computation that should be minimized, because it
necessarily proceeds relatively slowly. Speed gains require that data
be *sorted*, again, into neat systematizations, accessible via an
*indexing system*. (Hash tables are a poor compromise, allowing at best
the fastest possible search in cases where systematic indexing is
impossible.)
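A sketch of the difference, with Common LISP's built-in hash tables
standing in for a real indexing scheme:

  ;; Crawl-and-compare: time grows with the length of the chain.
  (defun find-fact (key facts)
    (cond ((null facts) nil)
          ((eq (car (car facts)) key) (car facts))
          (t (find-fact key (cdr facts)))))

  ;; Indexed: near-constant time, but the data must be sorted
  ;; into the table ahead of time.
  (defparameter *index* (make-hash-table))
  (setf (gethash 'hunger *index*) '(hunger isa motive))
  (gethash 'hunger *index*)   ; => (HUNGER ISA MOTIVE)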
LISP is cleverly designed to allow a great deal of flexibility in
incremental program modification, and for this reason it's much favored
by symbolic AI researchers. But it's not very efficient, and it may
foster a sloppy attitude towards economical coding. The early
successes of expert systems developed in LISP led to a sudden influx of
venture capital, starting around 1980, into companies developing
dedicated LISP hardware, a financial 'bubble' that had collapsed by
1988, as expert-system development moved on to C, and competitive LISP
compilers became available for general-purpose microcomputers.
Slight differences of 'dialect' in the many different early LISPs led
to severe problems translating applications between computer platforms.
To solve this, an impressive 'Common LISP' standard was hacked out, by
committee. An object-oriented extension to the Common LISP standard is
called the Common LISP Object System (CLOS, often pronounced CEE-loss).
The 'meta-object protocol' problem for CLOS remains under debate: what
sorts of information should type-objects (etc) know about their objects,
and how should it be accessed?
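For flavor, a tiny CLOS example (hypothetical classes, invented for
illustration): a class, a subclass, and a method dispatched on the
class of its argument:

  (defclass animal ()
    ((name :initarg :name :accessor name)))

  (defclass dog (animal) ())   ; dog isA animal, in CLOS terms

  (defgeneric speak (creature))
  (defmethod speak ((c dog))
    (format t "~a says woof~%" (name c)))

  (speak (make-instance 'dog :name "Rex"))   ; prints: Rex says woof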
Another direction LISP might yet evolve in would be to supplement its two-
element cons cells with *eight-element* cells, for building well-
balanced content-addressable hierarchies. (See below.)
-resources-
GNU Emacs is a word processor fully programmable in 'elisp', and
can make a useful development platform for many sorts of text-oriented AI
research. GNU Emacs is available...
A Mac version is now finally available at:
A free LISP for DOS machines is...
A Lisp FAQ is also available by anonymous ftp, from the same ftp
locations as the comp.ai.status-quo FAQ (see biblio at end)
There's a comp.lang.lisp newsgroup.
Steele's beautifully-written text is the standard reference for Common LISP.
It's now available on the WWWeb.
John McCarthy, net curmudgeon, can usually be found on rec.arts.books.
-- Natural language translation, natural language processing (NLP)
On first thought, you'd expect that a *dictionary of word-substitutions*
ought to take us the best part of the way to comprehensible translated
text. In fact, though, any brief test of this idea shows it to be
massively undermined by two difficulties: 1) many idiomatic phrases
don't work at all the same in literal translation (eg, "I give you my
word"), and 2) most words admit several entirely different
meanings. AI researchers' first bouts with this took place
between 1956 and 1966, when the ALPAC report killed (for the time being)
all government funding for translation research.
A slightly more evolved approach focuses on 'parsing' the syntactic
structure of each sentence (ie, constructing a sentence-diagram) as a
way of disambiguating shades of meaning, by eliminating (at least) those
shadings that imply an impossible part-of-speech. This school of
thought continues to try and add more and more complex algorithms for
finding more and more subtle syntactic patterns... but less effort has
been put into trying to collect a huge dictionary of the (idiomatic)
patterns themselves, probably because the latter task de-emphasizes the
element of 'programmer macho' (a factor that steers research directions
much more than it ought!).
The poverty of current NLP can easily be seen by exploring the pathetic
grammar-checkers offered, eg, with Microsoft Word! The minimalist
parsers familiar from Infocom-style text-adventure games offer about as
much grammatical sophistication as one should expect from algorithms
alone. Several toolkits for adventure-game development are available on
the Net, allowing one to experiment with parsers and their limitations.
Front-ends for databases are another target-domain for parser research--
reducing the amount of structure required in "structured query
languages".
Emacs, with its facilities for 'grepping' complex patterns expressed as
'regular expressions', is another useful tool for NLP experiments.
Griswold's text languages, SNOBOL and Icon, are similarly useful.
Another direction is offered by SGML, TEI, and HTML, three related
projects exploiting additional layers of 'markup' within text documents.
Automated document analysis can be given an easy boost if the creators of
the documents add some signposts to the content via SGML markup.
The Text Encoding Initiative (TEI) has been working out detailed
conventions for marking up various classes of literary text, using the
Standard Generalized Markup Language (SGML). SGML markup looks like
<emphasis>this</emphasis>. The HyperText Markup Language (HTML) is an
extension of SGML to support hypertext linkages within and between
documents, and has gained great success via the WorldWide Web project.
The first round of *speech-understanding* research was funded by DARPA
until 1976, when it became clear that no quick solution was emerging.
The state of the art is still limited-vocabulary-spoken-by-a-single-
user, and probably can't do better until improved language-understanding
allows the software to *predict* which words are likeliest. An accurate
mechanical model of speech production would help, too. Handwriting
recognition has done a little better, since the time-path of the stylus
can now be tracked, but again the big improvements depend on word-
prediction.
One important subfield of NLP ought to be focusing on the meanings
(especially the emotions) carried by rhythms and tones in ordinary
speech-- "prosody". I don't know how far this has gotten-- "Sentics"
by Manfred Clynes was an interesting half-baked first attempt.
Natural language *generation* is largely the domain of ELIZA (1966,
Weizenbaum and Colby) and RACTER (1984, Etter and Chamberlain). The
annual Loebner competition for such programs, the closest thing to a
real "Turing test", was won in 1992, 93, and 95 by Joe Weintraub's PC
Politician and PC Therapist. The remarkable thing about these efforts
is their occasionally uncanny successes at mimicking intelligence,
despite absurdly primitive knowledgebases. Thom Whalen's 1994 winner
used a very different, and promising, approach.
-resources-
comp.ai.nat-lang newsgroup
The standard text on NLP is James F. Allen's "Natural Language
Understanding", Addison-Wesley 1988 (A new edition is imminent.)
WordNet, a richly interconnected hyper-thesaurus experiment, is
available by anonymous ftp from: clarity.princeton.edu:/pub/
rec.arts.int-fiction is a newsgroup for adventure game programmers.
The text-adventure archives are at: ftp.gmd.de in /if-archive/
TADS is the most popular platform.
For emacs, see above.
Icon for the Mac is available at cs.arizona.edu in:
  /icon/library/bipl.hqx and /icon/library/info.hqx
      (the Icon program library: sample procedures and programs)
  /icon/packages/macintosh/met.hqx (the executables of Icon)
Newsgroup: comp.text.sgml
To subscribe to TEI-L and get an index of the available files, send
electronic mail to the address: LISTSERV@UICVM or List...@uicvm.uic.edu
containing these two lines:
subscribe TEI-L [your name spelled out normally]
index TEI-L
comp.infosystems.www
CORPORA is a mailing list for Text Corpora. It welcomes information and
questions about text corpora such as availability, aspects of compiling
and using corpora, software, tagging, parsing, and bibliography. To be
added to the list, send a message to corpora...@x400.hd.uib.no.
Contributions should be sent to cor...@x400.hd.uib.no.
comp.speech
"doctor.el" is an implementation of Eliza that comes standard with
GNU-Emacs. Invoke it with "M-x doctor". "M-x psychoanalyze-pinhead" is
also amusing, pitting Eliza against Zippy the Pinhead.
AI_ATTIC is an anonymous ftp collection of classic AI programs and other
information maintained by the University of Texas at Austin. It
includes Parry (hi, Kibo!), Adventure, Shrdlu, Doctor, Eliza, Animals,
Trek, Zork, Babbler, Jive, and some AI-related programming languages.
This archive is available by anonymous ftp from ftp.cc.utexas.edu
(bongo.cc.utexas.edu, 128.83.186.13) in the directory /pub/AI_ATTIC. For
more information, contact attic...@bongo.cc.utexas.edu.
For a FAQ on RACTER, try ftp.mcs.com in /mcsnet.users/jorn/racterfaq.txt
-- Hardware (& other) manias: parallelism, neural nets, LISP machines, etc.
Speed increases in hardware are great, but they're not AI. Parallelism
and neural nets allow some new implementation strategies, but don't
begin to solve the "Aristotle problem".
Neural nets timeline:
1959: Frank Rosenblatt introduces Perceptron
1969: Minsky & Papert's book "Perceptrons" kills funding for neural net
research, apparently unjustly
1970: SciAm articles on Conway's Game of Life (cellular automata)
1975: Cooper & Elbaum found Nestor to develop neural net technology
1982: John Hopfield resuscitates neural nets
These hardware manias are one form of a more general problem plaguing
AI, caused by the unfortunate combination of very high stakes
(especially DARPA grant money), and a very immature domain, in which
bold bluffing can take you far. "Citation inflation" is another
symptom-- concealing your poverty of ideas behind an imposing
bibliography.
Another problem is a tendency to reify programming abstractions. One
antidote to this is to make a practice of contemplating one's program
structures as pure *topologies*, with all symbolic labels stripped off.
Old topologies with fancy new names are less than worthless, compared to
entirely new topologies!
-resources-
comp.ai.neural-nets newsgroup
"Who's munging the hacker ethic?" is a short essay reflecting on some of
the 'pathologies of scientific communication' apparent on comp.ai. It
can be ftp'd from ftp.mcs.com in mcsnet.users/jorn as "munging.us"
-- Expert systems and CYC
The history of AI has shown a gratifying series of successes in the
realm of expert systems, all thru the 60s, 70s, and 80s.
The idea of expert systems is that expertise involves *logical*
thinking, and can be modelled by compiling lists of logical propositions
and performing logical transformations upon them. This might be called
the "Euclidean (or geometric) model", implying a small set of axioms
from which a wide range of theorems are then generated.
An alternative view might be called the "proverbs model". By this view,
natural selection, over the course of geologic eons, has introduced
millions of genetically-programmed *details* into the human nervous
system, anticipating particular sorts of survival-challenges that may
arise, and venturing effective ways of reacting to them. Compared to
Euclid's geometry, there will be many more 'axioms' and relatively fewer
'theorems', so the *logic* will probably be comparatively trivial--
almost all the expertise will be in the data structures (and their
contents). So once again, the Aristotle-problem is the real hurdle to
expert-system development-- how do you concisely represent *knowledge*
in a computer memory?
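To see how trivial the 'logic engine' can be once the knowledge is
laid out, here's a toy forward-chainer in Common LISP-- a sketch only
(real shells add conflict resolution, certainty factors, etc.), with
facts and rules invented for illustration:

  ;; All the expertise is in the data; the inference loop is trivial.
  (defparameter *facts* '(has-feathers lays-eggs cannot-fly))
  (defparameter *rules*
    '(((has-feathers lays-eggs) bird)
      ((bird cannot-fly)        flightless-bird)))

  (defun run-rules (facts rules)
    "Forward-chain: keep firing rules until nothing new is concluded."
    (let ((changed t))
      (loop while changed do
        (setf changed nil)
        (dolist (rule rules)
          (destructuring-bind (premises conclusion) rule
            (when (and (subsetp premises facts)
                       (not (member conclusion facts)))
              (push conclusion facts)
              (setf changed t)))))
      facts))

  (run-rules *facts* *rules*)
  ;; => (FLIGHTLESS-BIRD BIRD HAS-FEATHERS LAYS-EGGS CANNOT-FLY)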
The Japanese Fifth Generation Project between 1982 and 1992 failed in
its goals largely because it focused on logic, eg by choosing Prolog
over LISP as its implementation language (a choice that may have been
simple anti-Americanism, since Prolog was developed by the French).
Doug Lenat's CYC (from enCYClopedia) Project at MCC in Austin, a ten-
year $35 million project begun in 1984, uses logic, too, but emphasizes
the Aristotle-problem, employing a dozen 'ontological engineers' to
enumerate tens of millions of common-sense facts that will ultimately
'add up' to logical intelligence. Lenat's goal is a system that can
understand and speak ordinary language, and detect violations of common
sense as readily as humans can.
As of 1994, CYC's sponsors were: Apple, Bellcore, DEC, the DOD, Interval,
Kodak and Microsoft. Versions of CYC for Macs and Suns are supposed to
be available now, but have enormous requirements for RAM and storage.
Lenat has revised his estimate of the total number of 'rules' required
for this, upward by a factor of ten (to 20-40 million), and extended the
time needed by another ten years. It bothers me a lot that the sort of
thing being added apparently includes rules like, "A creature with two
arms probably has two legs." This seems out-of-control to me. CYC's
ontology includes abstractions like:
Thing
Intangible
IndividualObject
Event
Stuff
Process
SomethingExisting
TangibleObject
The Text Adventure Development System (TADS), by contrast, offers the
following, much more pragmatic, partial object hierarchy:
Thing
  Item: vehicle, surface, lightsource, key, food, container, clothing
  FixedItem: switch, dial, button, decoration, actor, chair
  Room
CYC's data-objects offer such slots as: instanceOf, inverse,
makesSenseFor, entryIsA, specSlots, slotConstraints, becameTrueIn,
qualitativeValue, sufficientCondition. One must expect that some of
these slots will ultimately have thousands of fillers, requiring hash-
tables and slowing processing proportionately, and that some objects
will have thousands of slots, causing similar problems. Interestingly,
in a recent interview Lenat claimed that the number of *facts* has been
at a plateau lately, even as the amount of useful knowledge continues to
grow, because various redundancies in the representation are also being
detected and repaired.
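From a programmer's point of view, such a data-object is just a frame
of slot/filler pairs. A minimal sketch (the slot name is borrowed from
the CYC list above; the code and the fillers are mine):

  ;; A frame: a name consed onto a property list of slots and fillers.
  (defparameter *fred* (list 'fred))

  (defun add-filler (frame slot filler)
    (push filler (getf (cdr frame) slot)))

  (add-filler *fred* 'instanceOf 'Person)
  (add-filler *fred* 'instanceOf 'SomethingExisting)

  ;; When a slot accumulates thousands of fillers, this list must
  ;; give way to a hash table-- exactly the problem noted above.
  (getf (cdr *fred*) 'instanceOf)   ; => (SOMETHINGEXISTING PERSON)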
Lenat, at some point, asked John McCarthy to take a shot at enumerating
the laws of human emotion, a critical challenge for the project as a
whole. I don't know where this led (although Lenat claims CYC now 'knows'
about emotions), but McCarthy could have done worse than to start with
Andrew Ortony's dimensional analysis of emotions. Ortony's classes of
emotion:
Emotions about things: liking, disliking.
Emotions about persons: approving, disapproving.
    about self: pride, shame.
    about others: admiration, reproach.
Emotions about events for self: pleasing, displeasing.
    for other: gloating, pity, resentment, happy-for.
    about events in the future: hope, fear.
    realized (positive): satisfaction, relief.
    realized (negative): disappointment, fears-confirmed.
Emotions about another person's role in events: gratitude, anger.
    about self's role: 'gratification', remorse.
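Ortony's analysis is dimensional, so it maps directly onto the
orthogonality idea from the prehistory section. A sketch of the lookup
(my encoding, not Ortony's):

  ;; Key: (about . valence) -> emotion term, per the table above.
  (defparameter *ortony*
    '(((thing . positive) . liking)
      ((thing . negative) . disliking)
      ((self  . positive) . pride)
      ((self  . negative) . shame)
      ((other . positive) . admiration)
      ((other . negative) . reproach)))

  (defun emotion (about valence)
    (cdr (assoc (cons about valence) *ortony* :test #'equal)))

  (emotion 'self 'negative)   ; => SHAME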
Another admirable style of 'Aristotelian' thinking about whole systems
can be found in an obscure anthropology text called "Man's Place in
Nature" by C.F. Hockett (McGraw-Hill 1973). Hockett is a fearlessly
original thinker, always striving to find the *apt generalization*
behind the variations of cultures in history. For example, a series of
generalizations about techniques of primitive peoples for strengthening
the surfaces of their artifacts is characterized with the motto:
"Save the surface and you save all." (This is AI at its best!)
-resources-
comp.ai.shells newsgroup
The Expert Systems Shells FAQ is also available by anonymous ftp from
the same ftp location as the comp.ai.status-quo FAQ (see biblio at end).
CYC/MCC's presence on the Net is unfortunately very low-profile. There
was an excellent Nova (?) program about it, in a series on the computer--
"The Machine That Changed The World".
Here are some periodical references, and the only book so far:
"CYC" AI Magazine 7(1), 1986
"When will machines learn?" Machine Learning, Dec 1989
"Cyc: Toward Programs With Common Sense" CACM, Aug 1990
"Knowledge and Natural Language Processing" CACM, Aug 1990
"Common Sense and the Computer" Discover magazine, Aug 1990
"Cyc: A Mid-Term Report" AI Magazine, Fall 1990
"The commonsense reviews" Artificial Intelligence, 61(1), 1993
"CYC-O" Wired magazine, Apr 1994
"Enabling agents to work together" CACM, 37(7), 1994
Douglas B. Lenat and R.V. Guha, "Building Large Knowledge-Based
Systems" Addison-Wesley 1991
Andrew Ortony, Clore and Collins: "The Cognitive Structure of
Emotions" (about $12 paper from Cambridge U.P.)
An implementation of this theory as an a-life microworld is described
in: Elliott: The Affective Reasoner (the TaxiWorld thesis). It can be
ordered for a few dollars from ILS Tech reports, 1890 Maple
Avenue, Evanston, IL 60201.
The brand-new "Wisdom FAQ" mailing list (wisdom-...@mcs.com) is a
first stab at an Internet version of the CYC project, aimed at creating
a comparable public domain knowledgebase especially for interactive
fiction (IF) and social simulations. Its archives are at ftp.mcs.com
in mcsnet.users/jorn/wisdom/.
-- Understanding human behavior via *stories*
The basic data structures in AI might be the *rule*, the *frame*, and
the *script*... but from a programmer's point-of-view these are really
interchangeable-- just arrangements of pointers in memory-space. So the
real challenge is to discover which *interpretations* of these various
arrangements of pointers best match the way human thinking works.
"Case-based reasoning" (CBR) tries to identify nodes with 'cases'-- often
actual realworld events that could (theoretically) be described to any
infinite level of detail without being exhausted. A case is thus
approximately equal to a *story*. But this sort of thinking, again,
comes very hard to conventional computer hackers. If a story is
infinitely complex, for example, how can it be indexed as *more similar*
to some stories than others? (The solution to this paradox requires that
we recognize some details within a story as *more significant* than
others-- an importance-ranking.)
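A sketch of the way out of the paradox: give each story-feature an
importance weight, and rank cases by weighted overlap. (The features
and weights here are invented for illustration.)

  ;; Similarity = summed weights of the features two stories share.
  (defparameter *weights*
    '((betrayal . 10) (revenge . 8) (journey . 3) (meal . 1)))

  (defun weight (feature)
    (or (cdr (assoc feature *weights*)) 0))

  (defun similarity (story1 story2)
    (reduce #'+ (intersection story1 story2) :key #'weight))

  (similarity '(betrayal meal journey) '(betrayal revenge meal))
  ;; => 11 -- the shared betrayal dominates the shared meal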
In 1900, a French literary critic named Georges Polti published an
analysis of literary plots entitled "The 36 Dramatic Situations"
(reprinted 1977 by The Writer Inc, $8.95). Polti also further subdivided
each of the 36 (citing particular plays and novels that embodied each
variant), and included for each an enumeration of the basic 'elements'
needed for the plot, eg for Supplication: "The dynamic elements
necessary are: a Persecutor, a Suppliant and a Power in authority, whose
decision is doubtful."
Here's a very rough re-sorting of Polti's thirty-six, according to a
preliminary reworking of those elements:
person thing: Obtaining
person motive: Victim of misfortune, Disaster, Ambition
person motive motive: Self-sacrifice for an ideal
person motive modality: Daring enterprise, Remorse
person modality: Enigma, Madness, Fatal imprudence, Faulty judgment
person person: Revolt, Familial hatred, Family rivalry, Conflict with
a god, Loss of loved ones
person person place: Recovery of a lost one
person person place place: Pursuit, Abduction
person person motive: Supplication, Victim of cruelty, Rivalry between
superior and inferior, Crimes of love, Deliverance
person person modality: Kinsman kills unrecognized kinsman, Obstacles
to love, Mistaken jealousy
person person motive motive: Revenge, All sacrifice for passion,
Sacrifice of loved ones, An enemy loved, Self sacrifice for kindred
person person motive modality: Involuntary crimes of love, Discovery
of dishonor of a loved one
person person person: Adultery, Murderous adultery
person person person person motive motive: Vengeance by family upon
family
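These groupings suggest indexing each plot under the *multiset of its
elements*; here's a hypothetical sketch, following the sorting above:

  ;; Key each plot by its sorted bag of elements.
  (defparameter *plots* (make-hash-table :test #'equal))

  (defun signature (elements)
    (sort (copy-list elements) #'string< :key #'symbol-name))

  (defun index-plot (elements plot)
    (push plot (gethash (signature elements) *plots*)))

  (index-plot '(person person place place) 'pursuit)
  (index-plot '(person person place place) 'abduction)

  (gethash (signature '(place person place person)) *plots*)
  ;; => (ABDUCTION PURSUIT) -- element order doesn't matter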
Folklorists Vladimir Propp and Stith Thompson offered alternate
approaches to the story-indexing problem, but Polti's remains the most
useful for AI researchers, because (unlike Propp) it was constructed
empirically ("bottom up") by surveying literature, and (unlike Thompson)
it offers a manageable set of classes.
Abelson et al, at Yale, introduced the concept of scripts in "Scripts,
Plans, Goals, and Understanding" in 1977. That book also recapped a
primitive vocabulary called Conceptual Dependency (CD) notation,
consisting of the following verbs: atrans, ptrans, propel, move, grasp,
ingest, expel, mtrans, mbuild, speak, attend. While this proposal was
far too simplistic to work, it has tenaciously dominated the field ever
since (not to say sinisterly).
James Meehan's Yale thesis on his CD-based story-generation program
"TaleSpin" was published as "The Metanovel" (Garland, 197?) and includes
a hilarious chapter on the surreal stories that TaleSpin 'wrote' while
being debugged. Later work in this Abelsonian school proposed a clumsy
structure called a MOP (memory organization packet), then later (at
NWU's ILS) a mulligan-stew called the Universal Indexing Frame.
The original programmer for the UIF (yours truly) was inspired, after
wrestling at length with it on a low, data-structures-and-algorithms
level, to envision a much more elegant structure he later named a
"fractal thicket". (Unhappily, when he asked to present this idea to
the lab, he was terminated!) The fractal-thicket data-structure is
built on top of a simple abstraction hierarchy, each node of which can
contain a (self-similar) image of the whole hierarchy (and on and on,
as deep as one needs). This allows one to choose an arbitrary *set* of
hierarchy-elements, and represent this entire set as one single particular
node, as for example "person person place place" above. This could be
neatly implemented with the 'eight-element cons cells' described above,
so that sparsely populated regions never need to be instantiated in full.
And such a structure will minimize the necessity for *search*, effectively
substituting detailed *indexing*.
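A minimal sketch of the idea (mine, and necessarily speculative):
each node can lazily grow a self-similar sub-hierarchy, so an
arbitrary *set* of elements becomes a single path:

  ;; A thicket node: a name, plus sub-hierarchies created on demand.
  (defstruct (tnode (:constructor make-tnode (name)))
    name
    (subnodes (make-hash-table)))   ; lazily populated

  (defun descend (node name)
    "Return NODE's sub-node for NAME, instantiating it on demand,
  so sparsely populated regions never exist in full."
    (or (gethash name (tnode-subnodes node))
        (setf (gethash name (tnode-subnodes node))
              (make-tnode name))))

  ;; The set (person person place place) becomes one node:
  (defparameter *root* (make-tnode 'root))
  (reduce #'descend '(person person place place)
          :initial-value *root*)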
Another implication of this proposal is that anyone trying to represent
knowledge can easily take their abstraction hierarchy, two elements at
a time, and analyse these pairs for 'semantic content'. The first
question to ask of such a set is, what are the *basic relationships* of
these elements to each other? And these basic relationships should also
form a natural *story sequence*. For example, the two-element combination
"person thing" might imply the following *chronology of relationships*
between a typical person and a typical thing:
person thing: wants, makes, acquires, uses, maintains, changes,
disposes, destroys
There's a deep correspondence between these dyadic 'relationships' and
what Minsky's frame-theory calls 'slots'. In fact, looking at it this
way we should expect frames to suffer from a "Minsky bottleneck"-- an
inevitable tendency to bog down under *thousands* of slots, corresponding
to every possible relationship the object can participate in. One logical
solution is to *sort* the slots, by the *types* of their fillers... which
is just another way of describing a fractal thicket!
The more-complex *three-element* set "person person thing" will
certainly imply, among other stories, the obvious "gives" and "takes"
and "contests".
And relationships themselves can be in relationship to each other:
relationship relationship: enables disables while causes followedBy etc
Here's a (tentative) further unpacking of one of the Polti-groupings
above, for "person person motive":
Supplication: person asks person for motive
Victim of cruelty: person causes (person suffers motive)
Rivalry between superior and inferior: (person1 over person2) and (person1
gratifies motive) and (person2 suffers motive)
Crimes of love: person indulges motive towards person
Person, place, and thing are obvious, familiar, basic categories.
Motive and modality are somewhat less familiar:
thing: food tool weapon vehicle clothes bodypart etc (cf TADS)
motive: food safety sex esteem family self-expression etc (cf Maslow)
modality: real imaginary desired possible feared etc (cf Ortony)
The person-motive relationships:
person motive: suffers abstains denies gratifies indulges etc
form a sort of mythic chronological sequence that I call the *pride
cycle*. It forms the basis for James Joyce's universal inventory of
story-elements, "Finnegans Wake" (1939). Joyce's earlier novel, "Ulysses"
(1922) was a preliminary sorting of the universe of story-elements into
18 chapters, derived from episodes of Homer's Odyssey, which Joyce
considered the most well-rounded portrait of an everyman in all of
literature. The 18 chapters cover a single day in the life of a humble
Dubliner named Leopold Bloom. In order to pack every sort of story into
Bloom's day, Joyce transforms them *via metaphor* into completely mundane
everyday details. Analysis of Joyce's AI here has barely been begun by
scholars, who have further tended to dismiss FW as a perfectly baffling
mystery.
Joyce, an avowed Aristotelian who is known to have owned Polti's book,
found it necessary in his work to challenge fearlessly all the *taboos*
of the literary censors. Anyone who tries to inventory the whole range
of the human experience must follow Joyce here, and this requires a
level of self-honesty that has never been emphasized in AI research!
-resources-
The ILS tech report about the UIF, and others, can be ordered for a few
dollars from ILS Tech Reports, 1890 Maple Avenue, Evanston, IL
60201.
Schank & Osgood et al: A Content Theory of Memory Indexing [UIF]
Schank & Fano: A Thematic Hierarchy for Indexing Stories [another try]
Kolodner & Jona: Case-Based Reasoning: an overview [simple and clear]
ftp ftp.mcs.com
cd mcsnet.users/jorn
get README.jb |cat
get ilsmemoir.txt [intro to AI, for game designers, written as a memoir]
get thicketfaq.txt [intro to fractal thicket indexing]
get diykr.txt [do-it-youself knowledge-representation, very short]
get aijoyce.txt [intro to Joyce for AI types]
get joycefaq.txt [intro to Joyce for literary types]
get storymath.txt [intro to AI for Joyceans]
get fwdigest1.txt [100k analysis of FW chapter 4, paragraph 1]
-- Message filtering, object-oriented word-processing, software design
The netnews hierarchy encourages a very high level of pre-sorting among
postings. Killfiles can supplement this, inefficiently. The newsreader
'strn' is the current state-of-the-art in complex killfiling. To go
beyond this will probably require that posters 'tag' keywords, and also
that a system for message-rating be agreed to.
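Beyond killfiles, the obvious first step is weighted keyword scoring.
A sketch in Common LISP, with keywords, weights, and threshold all
invented for illustration:

  ;; Score a message by summing the weights of the keywords it
  ;; mentions; read it only if the score clears a threshold.
  (defparameter *interests*
    '(("lisp" . 5) ("joyce" . 4) ("make money fast" . -100)))

  (defun score (message)
    (loop for (keyword . weight) in *interests*
          when (search keyword (string-downcase message)) sum weight))

  (defun worth-reading-p (message &optional (threshold 3))
    (>= (score message) threshold))

  (worth-reading-p "Re: LISP and Finnegans Wake")   ; => T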
The next stage in the evolution of word processing must be to treat
words, phrases, concepts, etc. as *objects*.
Interface design might be transformed by analysing the *tasks* the
software will be used for, and having the software use this analysis
to anticipate the user's needs. (Current software offers instead
many 'micro-commands' that the user must combine into sequences,
or at best into 'macros'.)
-resources-
news.software.readers
Some specs for an object-oriented word processor are available at
ftp.mcs.com as /mcsnet.users/jorn/decentwrite.txt
The students.chi mailinglist is dedicated to the Human-Computer
Interaction field, focusing on the research, design, development and
evaluation of human-computer communication and interaction. The ii.chi
mailinglist is more specifically about intelligent interfaces.
To be added to either, send a request to Nick Briggs at
"Regist...@xerox.com". (I have no direct knowledge of this list!)
-- Vision, robotics, planning, simulation and artificial life
Some miscellaneous related topics:
Human vision must work, in part, via the imagination-- the mind
projecting multiple possible images of the future (or the present). At
any moment, each section of the visual field must be accounted for, to
some degree, in terms of an imaginary projected image, *not at all unlike
a hallucination*. (Learning about the world means fine-tuning our
hallucinations!) This parallelism between creating images and
perceiving them might be generalized as the "AI complementarity
principle".
There's a cute book called "Vehicles: experiments in synthetic
psychology" by Valentino Braitenberg (MIT Press, 1984), that shows how
very simple electronic 'nervous systems' can allow very simple robots to
exhibit fairly interesting behaviors, like approach and avoidance. This
sort of *emergent behavior* is the ultimate goal of alife research.
There's a common fallacy that 'virtual worlds' could be built entirely
out of the basic laws of physics-- a Grand Unified Theory of
Everything-- so that the path of each particle in the world would be
traced independently, by some super-super-supercomputer doing millions
of math calculations each nanosecond. But this isn't even practical for
a billiards simulator! What's needed, instead, is an inventory of
'particle stories' (cf Feynman diagrams?) which are *qualitatively*
distinct, which is likely how the brain itself does most of its physics
predictions. There's a famous AI paper called "The Naive Physics
Manifesto" (1978, Patrick Hayes) that broke early ground here.
Computer *planning* is in a very primitive state. The normal models
require that the computer do a thorough search of a large number of
combinations of plan-steps, looking for the most efficient one. This
might be compared to searching a large rectangular field, one row at a
time. An alternative model might be compared to a widening spiral
pattern, where the first circular pass returns some extremely coarse
proposal, and successive passes refine it further and further. The
advantage here is that in real life one may not have enough time to
complete the thorough search, so the 'spiral' model allows one to
abandon the search at any point without ending up empty-handed. This
spiral approach once again requires *prioritising* the elements of the
planning domain.
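A sketch of the spiral model (everything here invented for
illustration, with a stand-in 'refine' step): the planner can be
interrupted after any pass and still hand back its best plan so far:

  (defun spiral-plan (initial-plan refine passes)
    "Anytime planning: start coarse, refine on each pass, and keep
  the best-so-far in case time runs out."
    (let ((best initial-plan))
      (dotimes (i passes best)
        (setf best (funcall refine best)))))

  ;; Toy use: 'refining' a route by adding one waypoint per pass.
  (spiral-plan '(start goal)
               (lambda (plan) (cons 'start (cons 'waypoint (cdr plan))))
               3)
  ;; => (START WAYPOINT WAYPOINT WAYPOINT GOAL)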
Computer gaming is the only domain of commercial software in which
programs are *routinely* evaluated for the quality of their AI-- which
in this case normally refers to the program's ability to serve as an
intelligent opponent, even to stand in for a second human game-player.
This sort of AI can be implemented as a set of possible story-cases to
be executed under certain conditions.
Here's a breakdown of the current status of computers at various human
games, compiled by Victor Allis (vic...@cs.vu.nl):
Othello: Strongest program (Logistello by Michael Buro) stronger than
the strongest human players.
Checkers: Strongest program (Chinook by Jonathan Schaeffer et al.) has
now defeated the world champion.
Backgammon: Strongest program (TD-gammon by Gerald Tesauro) at world
champion level.
Chess: Strongest program (Deep Thought II, by Feng Hsu and Murray Campbell)
at about 100th-150th place in the world rankings.
Bridge: Strongest programs (unclear which) at amateur level.
Go: Strongest programs (several, see rec.games.go) at about 8-10 kyu,
a little above novice level.
There's also a whole category of computer games devoted to *simulations*
(although flight simulators, whose AI is comparatively trivial,
consistently lead the field in sales). Maxis has done very well with
SimCity, SimEarth, SimAnt, SimLife, SimFarm, SimHealth, and A-Train.
"Civilization" is something like SimCity with inter-city wars. Games
based on economic simulations go back as far as Lemonade Stand and the
like.
Chris Crawford is the grandfather of *social* simulations, with Balance
of Power, Balance of the Planet, and Trust & Betrayal. The first of
these involves the realization that international politics is almost
entirely a question of *maintaining face* (the esteem motive). The
second, Balance of the Planet, shows how the interconnectedness of
ecological factors makes simple solutions radically ineffective. The third is a
very primitive first try at simulating a community of humanoid
alliances. AI-history collectors will want to get the source code for
T&B from Chris for $150.
Other interesting sims:
Hidden Agenda: you're president of a Central American country-in-crisis,
and must weigh the requests of various factions, with the help of your
fallible advisors.
Shadow President: ditto for US politics?
Carnegie Mellon has an artificial-personality/ virtual-world project
called "Oz" led by Joseph Bates.
-resources-
comp.robotics newsgroup
The most affordable robot kits so far, Hero robots from Heathkit, are no
longer available except secondhand. But for as little as $10 you can
build a six-legged walking robot called a "Stiquito" that fits in the
palm of your hand. It uses an unusual nickel-titanium alloy called
'nitinol' that directly converts the energy of a 9-volt battery into
mechanical force. You can order the kit ($10), tech report ($5), and
how-to video ($10) from:
Computer Science Dept, Attn: Stiquito/TR 363a/Video [specify which]
215 Lindley Hall, Indiana University, Bloomington, Indiana 47405
Checks or money orders should be made payable to "Indiana University".
IUCS Technical Report 363a describes Stiquito's construction and is
available by anonymous ftp from
cs.indiana.edu:/pub/stiquito/ [129.79.254.191]
Questions about Stiquito should be sent to Prof. Jonathan W. Mills
<stiq...@cs.indiana.edu>. To join the Stiquito mailing list run by Jon
Blow of UC/Berkeley, send mail to: stiquito...@xcf.berkeley.edu.
comp.ai.vision
comp.simulation
A critique of SimCity can be found in:
Paul Starr, Seductions of Sim: Policy as a simulation game, The American
Prospect 17, Spring 1994, pages 19-29.
comp.ai.alife
The basic resources on alife:
Langton, Chris G., editor, "Artificial Life" (Proceedings of the First
International Conference '87), Addison-Wesley, 1989.
Langton, C.G. et al, eds, "Artificial Life II", Addison-Wesley, 1991.
Langton, C.G., editor, "Artificial Life III", Addison-Wesley, 1994.
Steven Levy's "Artificial Life"
There's an alife mailing list at: alife-...@cognet.ucla.edu
but it may have been superseded now by the comp.ai.alife newsgroup.
CMU Oz Project:
http://www.cs.cmu.edu:8001/afs/cs.cmu.edu/project/oz/web/oz.html
Some of the project's papers are also accessible as
ftp.cs.cmu.edu:/afs/cs.cmu.edu/project/oz/ftp/papers/
Artificial Morality: artmor...@unixg.ubc.ca
This is a mailing list for discussion of Peter Danielson's book,
"Artificial Morality: Virtuous Robots for Virtual Games" (Routledge,
New York, 1992) and related issues. It explores theories of rational
morality with Prolog. To join the list, send an email message to
artmoral-l...@unixg.ubc.ca
[I have no direct knowledge of this list.]
comp.ai.games
-- Consciousness
Searle and Dreyfus and Penrose are just naive about symbolic AI. It
will ultimately succeed, to *some* extent, anyway. Their arguments are
not really insightful. (For example, Lenat claims with some justice that
CYC is already conscious, because it 'knows' that it's a computer
program and not a human... but this knowledge is, of course, just a
pointer in memory!) Pain and pleasure, though, may be a matter of
analog neural 'hardware', and may *not* be replicable by conventional
digital algorithms. (On the other hand, 'consciousness' itself may be a
neurotic illusion-- see Julian Jaynes, "The Origin of Consciousness in
the Breakdown of the Bicameral Mind", and compare it also to Zen
Buddhism's mystical goal of transcending consciousness.)
The way to deal with human *subjectivity* is to observe it
dispassionately, and describe it with the most precisely evocative
language available. Coining new words for subjective distinctions is
never a good idea-- natural language will have dealt better with the
problem, millennia ago!
-resources-
comp.ai.philosophy
alt.consciousness
alt.fan.hofstadter
PSYCHE is a quarterly refereed electronic journal concerning the
interdisciplinary exploration of the nature of consciousness and its
relationship to the brain. To subscribe, send a message with
"SUBSCRIBE PSYCHE-L Firstname Lastname" in the body to
LISTSERV%NKI.B...@cunyvm.cuny.edu.
A discussion group PSYCHE-D has also been created for discussion of
the contents of the journal and related topics. To subscribe, send a
message with "SUBSCRIBE PSYCHE-D Firstname Lastname" in the body to
the list server. The moderator of PSYCHE-D is David Casacuberta,
<IL...@cc.uab.es>. [I have no direct knowledge of these, either.]
-- Bibliography, etc.
The latest version of the 'comp.ai.status-quo' FAQ is available via
anonymous FTP from:
ftp.cs.cmu.edu:/user/ai/pubs/faqs/ai/ [128.2.206.173]
using username "anonymous" and password "name@host" (substitute your
email address) or via AFS in the Andrew File System directory
/afs/cs.cmu.edu/project/ai-repository/ai/pubs/faqs/ai/
as the files ai_1.faq, ai_2.faq, ai_3.faq, ai_4.faq, ai_5.faq and
ai_6.faq.
You can also obtain a copy of that FAQ by sending a message to
ai+q...@cs.cmu.edu with
Send AI FAQ
in the message body.
The FAQ postings are also archived in the periodic posting archive on
rtfm.mit.edu:/pub/usenet/news.answers/ai-faq/ [18.181.0.24]
If you do not have anonymous ftp access, you can access the archive by
mail server as well. Send an E-mail message to mail-...@rtfm.mit.edu
with "help" and "index" in the body on separate lines for more
information.
Cool free offer-- check this out!
The Computists' Communique is a weekly online newsletter for AI/IS/CS
scientists. It covers research and funding news; career, consulting,
and entrepreneurial issues; AI-related job postings and journal calls;
FTPable & other resource leads; market trends; analysis and discussion.
Subscriptions are fairly pricey, but it's top quality stuff. You can
get a free-sample subscription-- every fourth issue only-- by replying
to la...@ai.sri.com and asking to be added to the Full Moon distribution.
For a general overview of AI, I recommend this coffeetable book:
Raymond Kurzweil's "The Age of Intelligent Machines", MIT Press,
1990, 565 pages, ISBN 0-262-11121-7, $39.95.
The following timeline is based mostly on "The Brain Makers" by H.P.
Newquist (ISBN 0-672-30412-0), and some Kurzweil. It has been plagiarized
by Mark Kantrowitz, without credit:
1000BCE: I Ching
300BCE: Aristotle, Euclid
1617: John Napier's "Napier's Bones"
1642: Blaise Pascal's Pascaline (automatic calculating machine)
1694: Leibniz's "Computer" can do multiplication
1725: Vico's "New Science" calls for universal thesaurus of concepts
1822: Babbage's Difference Engine (not completed)
1832: Babbage's Analytic Engine (never built)
1847: Boole's symbolic logic
1852: Roget's Thesaurus
1876: Dewey Decimal System
1890: Hollerith's punched-card computer
1900: Polti's "36 Dramatic Situations"
1922: James Joyce's "Ulysses"
1939: Joyce's "Finnegans Wake"
1940: First electronic computers in US, UK, and Germany
1946: ENIAC
1950: Alan Turing "Computing Machinery and Intelligence"
1953: Shannon gives Minsky and McCarthy summer jobs at Bell Labs
1956: Rockefeller funds M&M's AI conference at Dartmouth
1956: CIA funds GAT machine-translation project
1956: Newell, Shaw, and Simon's Logic Theorist
1957: Newell, Shaw, and Simon's General Problem Solver
1958: McCarthy creates first LISP
1959: M&M establish MIT AI Lab
1959: Frank Rosenblatt introduces Perceptron
1960: Bar-Hillel deflates dreams of easy machine translation
1962: First industrial robots
1962: McCarthy moves to Stanford, creates Stanford AI Lab in '63
1963: Quillian lays groundwork for semantic nets
1963: ARPA gives $2 million grant to MIT AI Lab
1964: Bobrow's "Student" solves math word-problems
1965: Feigenbaum takes over SAIL; Noftsker takes over MIT AI Lab
1965: Feigenbaum and Lederberg begin DENDRAL expert system project
1966: Weizenbaum and Colby create ELIZA
1966: ALPAC report kills funding for machine translation
1967: Greenblatt's MacHack defeats Hubert Dreyfus at chess
1969: Minsky & Papert's "Perceptrons" kills funding for neural net
research
1969: Kubrick's "2001" introduces AI to mass audience
1970: Terry Winograd's SHRDLU, minor NLP success
1972: Colmerauer creates PROLOG
1972: DARPA cancels funding for robotics at Stanford (Shakey)
1973: Lighthill report kills AI funding in UK
1973: LOGO funding scandal: Minsky & Papert turn MIT lab over to Winston
1974: Edward Shortliffe's thesis on MYCIN
1974: Minsky reifies the 'frame'
1976: DARPA cancels funding for speech understanding research
1976: Greenblatt creates first LISP machine, "CONS"
1976: Doug Lenat's AM (Automated Mathematician)
1976: Marr's "primal sketch" improves computer vision
1978: Marr & Nishihara's "2.5-D sketch"
1978: SRI's PROSPECTOR discovers molybdenum vein
1978: Patrick Hayes' "Naive Physics Manifesto"
1980: First AAAI conference at Stanford
1980: McDermott's XCON for configuring VAX systems
1981: Kazuhiro Fuchi announces Japanese Fifth Generation Project
1982: John Hopfield resuscitates neural nets
1983: MCC consortium formed under Bobby Ray Inman
1983: DARPA's Strategic Computing Initiative commits $600 million over 5 yrs
1984: Austin AAAI conference launches AI into financial spotlight
1984: Doug Lenat begins CYC project at MCC
1984-86: Corporations invest some $50 million in AI startups
1985: GM and Campbell's Soup find expert systems don't need LISP
machines
1986: Thinking Machines Inc introduces Connection Machine
1987: "AI Winter" sets in
1987: Bottom drops out of LISP-machine market due to saturation
1988: AI revenues reach $1 billion
1988: The 386 chip brings PC speeds into competition with LISP machines
1988: Schank forced to resign from Yale and Cognitive Systems
1990: MacArthur Foundation gives Richard Stallman $240,000 genius grant
1992: Japanese Fifth Generation Project ends with a whimper
1992: Japanese Real World Computing Project begins with a big-money bang
1985-present: many other expert-systems success stories
-==---
. hypertext theory : artificial intelligence : finnegans wake . _+m"m+_"+_
lynx http://www.mcs.net/~jorn/ ! Jp Jp qh qh
ftp://ftp.mcs.net/mcsnet.users/jorn/ O O O O
news:alt.music.category-freak Yb Yb dY dY
...do you ever feel your mind has started to erode? "Y_ "Y5m2Y" " no.