
What should I read next?


Shaw Green
Jan 18, 1996

Jeff Gerke (jge...@airmail.net) wrote:
: I am brand spanking new to the world of AI, but I want to know more.

: I have read an introductory "text"--a magazine,
: actually--"Hitchhiker's Guide to Artificial Intelligence." That
: magazine introduced me to Genetic Algorithms, Neural Networks, Fuzzy
: Systems, Case-Based Reasoning, Expert Systems, and Speech Recognition.

: What should I read next? I am no scientist. I am a novelist. So
: whatever texts you recommend need to be fairly readable to someone
: without a degree in computer science or quantum physics. (I do have a
: Master of Divinity degree, though, so the discussions of Free Will
: I've been monitoring here shouldn't be beyond me.)

: The areas of AI I'm most interested in after reading that magazine are
: Genetic Algorithms, Neural Networks, and anything having to do with
: teams of robots.

: Thank you for your kind assistance.

: Jeff


Well, a fairly readable book for someone with a basic maths background on
the subject of genetic algorithms is David E. Goldberg's "Genetic
Algorithms in Search, Optimization, and Machine Learning". When I say
basic maths I mean really basic: simple probabilities and other
high-school maths should do, unless you are interested in some of the
theoretical stuff about schemas and such.
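
(To show how little maths is involved, here is a rough, untested Lisp
sketch of one GA generation on a toy problem -- individuals are bit
vectors, and the made-up fitness is just the number of 1 bits. The
"simple probabilities" are the selection and mutation steps.)

;; One generation of a toy genetic algorithm, in the spirit of what
;; Goldberg covers.  Fitness: count the 1 bits ("one-max").
(defun fitness (individual)
  (count 1 individual))

(defun tournament-select (population)
  "Pick two individuals at random; keep the fitter one."
  (let ((a (elt population (random (length population))))
        (b (elt population (random (length population)))))
    (if (> (fitness a) (fitness b)) a b)))

(defun crossover (mom dad)
  "One-point crossover of two bit vectors."
  (let ((point (random (length mom))))
    (concatenate 'vector (subseq mom 0 point) (subseq dad point))))

(defun mutate (individual rate)
  "Flip each bit with probability RATE."
  (map 'vector
       (lambda (bit) (if (< (random 1.0) rate) (- 1 bit) bit))
       individual))

(defun next-generation (population &optional (rate 0.01))
  (loop repeat (length population)
        collect (mutate (crossover (tournament-select population)
                                   (tournament-select population))
                        rate)))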

hope this is of help

Shaw Green

Jeff Gerke
Jan 18, 1996

Will Dwinnell
Jan 20, 1996

Jeff Gerke <jge...@airmail.net> writes:

>The areas of AI I'm most interested in after reading that magazine are
>Genetic Algorithms, Neural Networks, and anything having to do with
>teams of robots.

I am not sure how much technical information you want, but I'll take a
stab at this anyway:

For neural networks, consider:

"Naturally Intelligent Systems" by Caudill and Butler, pub. by MIT Press
This is a good layman's introduction to neural networks.

"Neural Networks for Statistical Modeling" by Smith, pub. by VNR
This is more technical than the last title, focusing on backprop-trained
feedforward neural networks, but covering some important issues.
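
In case it helps: "feedforward" just means layers of weighted sums. A
rough Lisp sketch (mine, with made-up example weights) of a single
layer; backprop training, which Smith's book covers, is not shown:

;; One layer of a feedforward net: each output unit takes a weighted
;; sum of the inputs plus a bias, then squashes it with the logistic
;; function.
(defun logistic (x)
  (/ 1.0 (+ 1.0 (exp (- x)))))

(defun layer-output (weights biases inputs)
  "WEIGHTS is a list of weight lists, one row per output unit."
  (mapcar (lambda (row bias)
            (logistic (+ bias (reduce #'+ (mapcar #'* row inputs)))))
          weights biases))

;; E.g., two inputs feeding two units with made-up weights:
;;   (layer-output '((0.5 -0.3) (0.8 0.1)) '(0.0 -0.2) '(1.0 0.0))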

If you have more specific interests or want more titles, let me know.

Will Dwinnell
Commercial Intelligence Inc.

Drew McDermott
Jan 22, 1996

In article <4dkmr2$d...@news-f.iadfw.net>, jge...@airmail.net (Jeff Gerke) writes:
|>
|> What should I read next? I am no scientist. I am a novelist. So
|> whatever texts you recommend need to be fairly readable to someone
|> without a degree in computer science or quantum physics.
|>
|> The areas of AI I'm most interested in after reading that magazine are
|> Genetic Algorithms, Neural Networks, and anything having to do with
|> teams of robots.

I don't recommend reading about AI without some prior acquaintance with computer
science or mathematics. The danger that you'll overglamorize AI is just too
great. If you're reading a book and getting more excited as you go, stop and
throw it away; it's probably misleading. If, on the other hand, the book seems
to just drag on with technicality after technicality that seems to have nothing
to do with intelligence, creativity, and free will, keep going. That's what AI
really is.

-- Drew McDermott

Ramachandran Lakshmanan
Jan 24, 1996

Without meaning to be offensive (as I have the highest regard for him as
one of the foremost researchers in the area of AI) I am prompted to say:
"I don't recommend reading Drew McDermott's post above unless you have
some (significant) prior acquaintance with AI!!!"

As someone who started becoming familiar with AI during my graduate
research in chemical engineering, I can remember some especially trying
times ... The books that were (initially) accessible to me (AI by
Winston, e.g.), while very well-written and certainly not *inaccurate*,
still left a big gap between what I read and anything that seemed even
remotely related to my thesis!

This situation was compounded by the fact that Expert Systems were the
thing at the time, and my advisor was one of the first to embrace their
application. Swayed by the popular opinion of the time, he was
convinced that an inference engine and some systematic gathering of
information were all that was needed to solve problems that had
stubbornly challenged the brightest of engineering minds for decades.

As it happened, I was lucky enough to have access to courses and
personnel at the AI lab at MIT, a most wonderful place to learn about
this area. It became rapidly apparent (as Drew has implied) that to
really do good work, one needed to delve not just into the material
labelled "AI" but also into other branches of computer science,
particularly the mathematical. It took a LOT of work, and countless
"all-nighters" (but then these were par for the course around those
parts), but gradually one's appreciation of the area grew.

Strangely enough, Drew says that if a book gets progressively more
exciting, then it is "probably misleading". Well, this is only true if
you *don't* get excited about nifty proofs of theorems that seem quite
irrelevant until you're shown their application ... Am I a geek? I
suppose so ...

Removing my tongue from my cheek for a moment, I'd like to point out
that there is, however, an actual *danger* in relying too heavily on
mathematical results. They give us useful guidelines as to what is or
isn't possible, how long things will take, how much memory is required,
but there are always caveats that result from the fact that our
mathematical models don't always fit real life exactly. And though I've
always been one of the "plodders" who *needed* maths to help me along,
I've also been privileged to meet several people who can (correctly)
draw conclusions (mathematical or otherwise) without having to resort to
the mechanics of reasoning step-by-step according to the laws of
mathematics.

In summary, I guess I'm not trying to refute what Drew's saying, but
rather to elaborate, somewhat, on it, from the perspective of a
non-computer-scientist who has an interest in the area.

To the original poster, I'd say: *Do* read about AI (it's a fascinating
area) but don't (as Drew warns) get carried away by claims that are not
substantiated as clearly as is possible in language you understand. Most
importantly, assume something is *IMPOSSIBLE* until someone
demonstrates, to your satisfaction, that it *isn't*!

Naturally, your perception of the field (and, indeed, my own) is
unlikely to be as near to the truth as that of someone with appropriate
training, and may turn out to be akin to eating a cone without the
ice-cream. But you may (as did I) have fun in the process.

Despite my comment about it above, I *would* recommend reading
"Aritificial Intelligence" by Winston. It is certainly dated (unless
there is a more recent edition that I'm not aware of), but is a very
good introduction, IMHO.

Cheers,

Rama Lakshmanan

Jorn Barger
Jan 24, 1996

In article <4e0m3n...@aden.ai.cs.yale.edu>,
Drew McDermott <mcdermo...@cs.yale.edu> wrote:
>[...] If you're reading a book and getting more excited as you go, stop and
>throw it away; it's probably misleading. If, on the other hand, the book
>seems to just drag on with technicality after technicality that seems to have
>nothing to do with intelligence, creativity, and free will, keep going.
>That's what AI really is.

Similarly:

- Watch only black and white movies (and close your eyes if they start
kissing!)

- No rock'n'roll. Only classical music. (Even opera can overstimulate
your hormones.)

- Avoid books with illustrations, and novels.

- Always wear a tie (even if you're female), and get a crewcut or
shave your head.

- No spicy or sweet foods.

- Fast several times a month, and wear sackcloth and ashes (and a tie).

- Avoid the company of the opposite sex. Take cold showers when you
have impure thoughts.

et cetera...


j

-==---
. hypertext theory : artificial intelligence : finnegans wake . _+m"m+_"+_
lynx http://www.mcs.net/~jorn/ ! Jp Jp qh qh
ftp://ftp.mcs.net/mcsnet.users/jorn/ O O O O
news:alt.music.category-freak Yb Yb dY dY
...do you ever feel your mind has started to erode? "Y_ "Y5m2Y" " no.


Drew McDermott
Jan 25, 1996

In article <4e55r5$h...@aros.chemeng.ed.ac.uk>, ra...@chemeng.ed.ac.uk (Ramachandran Lakshmanan) writes:
|>
|> To the original poster, I'd say: *Do* read about AI (it's a fascinating
|> area) but don't (as Drew warns) get carried away by claims that are not
|> substantiated as clearly as is possible in language you understand. Most
|> importantly, assume something is *IMPOSSIBLE* until someone
|> demonstrates, to your satisfaction, that it *isn't*!

And above all, do not assume that a mental feat is easy or straightforward just
because introspection tells you it is. Popular books on AI serve as amplifiers
of this fallacy. Their authors get caught up in an introspective frenzy about
what it would be like if a machine could think, and they pass this mood along to
their readers.

|> Despite my comment about it above, I *would* recommend reading
|> "Aritificial Intelligence" by Winston. It is certainly dated (unless
|> there is a more recent edition that I'm not aware of), but is a very
|> good introduction, IMHO.

The best book available now is Russell and Norvig's "Artificial
Intelligence: A Modern Approach". It's almost as good as the book Charniak
and I wrote, but more up to date.
(Okay, I'll admit it, it may even be better than our book.)

-- Drew McDermott

Jorn Barger
Jan 27, 1996

In article <4e0m3n...@aden.ai.cs.yale.edu>,
Drew McDermott <mcdermo...@cs.yale.edu> wrote:
>I don't recommend reading about AI without some prior acquaintance
>with computer science or mathematics.

You should definitely master symbolic logic, and I think some version
of machine language. My experiences at ILS have led me to think
LISP leads to incredibly bad habits of thought about data structures.

AI has a tendency to bog down in 'combinatorial explosions' if you
don't have a very fine-tuned sense of where you can trim wasted
search. LISP conceals these details too successfully to promote
good design habits.
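
To make that concrete: blind search over a tree with branching factor
b and depth d touches on the order of b^d nodes, so b=10, d=10 is
already ten billion. A rough sketch (mine; GOAL-P, PRUNE-P, and
CHILDREN are assumed, problem-specific functions) of where the
trimming has to happen:

;; Depth-first search with an explicit pruning test.  Every subtree
;; that PRUNE-P rejects removes a whole block of b^k nodes from the
;; search -- this is the "fine-tuned sense of wasted search" at work.
(defun search-tree (node goal-p children prune-p)
  (cond ((funcall goal-p node) node)            ; found a solution
        ((funcall prune-p node) nil)            ; trim the whole subtree
        (t (some (lambda (child)
                   (search-tree child goal-p children prune-p))
                 (funcall children node)))))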

> The danger that you'll
>overglamorize AI is just too great. If you're reading a book and
>getting more excited as you go, stop and throw it away; it's probably
>misleading.

This doesn't apply at all if you're sophisticated enough to be excited
by good math.

> If, on the other hand, the book seems to just drag on
>with technicality after technicality that seems to have nothing to do
>with intelligence, creativity, and free will, keep going. That's what
>AI really is.

I suppose this is 90% tongue-in-cheek, but even that last 10% is
pretty scary. AI's toolkit of tricks is not that great or
complicated-- if you understand what a CPU's instruction set is
capable of, you can guess most of the rest.

The really hard part of AI is learning how to map these computer
techniques onto the obscene complexities of the Real World.
This requires a dextrous touch with philosophy, literature, and
every other field of knowledge:

AI is an aspect of the scientific method, more than
a scientific domain with content of its own.

George Polti's "36 Dramatic Situations" is a work of literary
criticism that should be required reading for all AI students, because
it tries to systematize the sorts of human interaction that any
simulation of human behavior will have to accommodate.

Drew McDermott
Jan 29, 1996

In article <4edvod$j...@Mars.mcs.com>, jo...@MCS.COM (Jorn Barger) writes:

|> [me]
|> >The danger that you'll
|> >overglamorize AI is just too great. If you're reading a book and
|> >getting more excited as you go, stop and throw it away; it's probably
|> >misleading.
|>
|> This doesn't apply at all if you're sophisticated enough to be excited
|> by good math.

True.

|> [me]
|> >If, on the other hand, the book seems to just drag on
|> >with technicality after technicality that seems to have nothing to do
|> >with intelligence, creativity, and free will, keep going. That's what
|> >AI really is.
|>
|> I suppose this is 90% tongue-in-cheek, but even that last 10% is
|> pretty scary. AI's toolkit of tricks is not that great or
|> complicated-- if you understand what a CPU's instruction set is
|> capable of, you can guess most of the rest.

I didn't mean to imply that AI's toolkit is complicated. But AI makes increasing
use these days of sophisticated techniques from other fields (decision theory,
differential-equation theory, control theory, computational geometry, ...). It
may have been true in the heady days of the 70's that knowing how a CPU works was
enough (or at least that we *thought* it was), but it's not true now.

|> The really hard part of AI is learning how to map these computer
|> techniques onto the obscene complexities of the Real World.
|> This requires a dextrous touch with philosophy, literature, and
|> every other field of knowledge...

I disagree. It's fun to speculate about the impact of AI on philosophy and other
fields, but most substantial results in the field come from finding something that
works in a simple subset of the obscenely complicated real world. Disappointing,
but true.

|>
|> AI is an aspect of the scientific method, more than
|> a scientific domain with content of its own.
|>

I disagree again. AI is defined by a set of problems, not by a set of solution
techniques.

-- Drew McDermott

Jorn Barger
Jan 30, 1996

In article <1996Jan29.2...@jarvis.cs.toronto.edu>,
David G. Mitchell <mitc...@cs.toronto.edu> wrote:
>Symbolic logic yes. Machine language certainly not: Probably any
>computer scientist should have exposure to an assembly language,
>but to master one is -- for most -- silly (and sillier for a
>non-specialist). What you need to understand about computation
>to tackle AI problems you will not learn that way.

Well, as I was trying to say, I believe the critical issue is to
find the most elegant possible embodiment of Knowledge (-with-a-K).
If you're buffered from the machine language, you won't develop the
skills to recognize what's possible.

>AI has a substantial toolkit of non-trivial theory and analysis.
>Some of it is standard computer science (computational complexity),
>some overlaps with other areas (discrete math, logic, control theory,
>numerical optimization), and some is pure AI.

Every science in its early stages clings desperately to whatever
trappings of esoteric knowledge it can muster. If you never learn to
view these with detachment, you'll be *trapped* by them, unable to
see your way to deeper truths.

Most of the toolkit you're speaking of has this character... imho.

>It took decades of work, some of it by very smart people,

Penalty, brown-nosing.

>to get to the current stage, and to understand it you need to start
>with the basics of computer science (not programming - although you
>need a bit of that too).

Sort of like New Math?

> Knowing the instruction set for some CPU
>is irrelevant, and if you think you can just guess or make up the
>implications as you go, then you're going to spend your life
>re-inventing ideas several other people thought up in the 60's or 70's.

Well, the data structures seem pretty obvious to *me*. The way they
can and can't be mapped onto reality is another question, though.

> Much of the early history of AI is about
>learning the hard way which valuable and insightful intuitions
>just aren't going to cash out into useful science.

(So, why are they so valuable and insightful? More brownnosing???)

>>The really hard part of AI is learning how to map these computer
>>techniques onto the obscene complexities of the Real World.

>The really hard part of AI is understanding some cognitive task - in
>all its apparent obscene complexity - well enough to formalize it,
>and in such a way that there is a chance in hell of coming up with
>a computational method of solving it.

Thanks for the *paraphrase*.

j

David G. Mitchell
Jan 30, 1996

In article <4edvod$j...@Mars.mcs.com>, Jorn Barger <jo...@MCS.COM> wrote:
>In article <4e0m3n...@aden.ai.cs.yale.edu>,
>Drew McDermott <mcdermo...@cs.yale.edu> wrote:
>>I don't recommend reading about AI without some prior acquaintance
>>with computer science or mathematics.
>
>You should definitely master symbolic logic, and I think some version
>of machine language. My experiences at ILS have led me to think

Symbolic logic, yes. Machine language, certainly not: probably any
computer scientist should have exposure to an assembly language,
but to master one is -- for most -- silly (and sillier for a
non-specialist). What you need to understand about computation
to tackle AI problems you will not learn that way.

>AI has a tendency to bog down in 'combinatorial explosions' if you

True, and one thing you want is to learn enough math to understand
what such expressions really mean -- and in particular that
programming tricks are not going to solve the problem.

>> If, on the other hand, the book seems to just drag on
>>with technicality after technicality that seems to have nothing to do
>>with intelligence, creativity, and free will, keep going. That's what
>>AI really is.
>
>I suppose this is 90% tongue-in-cheek, but even that last 10% is

I supposed it was about 10% tongue-in-cheek: real AI has interesting
content too, but if you're new to it, it's certainly going to
have the appearance described.

>pretty scary. AI's toolkit of tricks is not that great or
>complicated-- if you understand what a CPU's instruction set is
>capable of, you can guess most of the rest.

I'm sorry, but now you're really showing your ignorance, Jorn.

AI has a substantial toolkit of non-trivial theory and analysis.
Some of it is standard computer science (computational complexity),
some overlaps with other areas (discrete math, logic, control theory,
numerical optimization), and some is pure AI.

It took decades of work, some of it by very smart people, to get to
the current stage, and to understand it you need to start with the
basics of computer science (not programming - although you need a bit
of that too). Knowing the instruction set for some CPU is irrelevant,
and if you think you can just guess or make up the implications as
you go, then you're going to spend your life re-inventing ideas
several other people thought up in the 60's or 70's. Much of the
early history of AI is about learning the hard way which valuable and
insightful intuitions just aren't going to cash out into useful
science.

>The really hard part of AI is learning how to map these computer
>techniques onto the obscene complexities of the Real World.

The really hard part of AI is understanding some cognitive task - in
all its apparent obscene complexity - well enough to formalize it,
and in such a way that there is a chance in hell of coming up with
a computational method of solving it.


David

PS: The original poster might want to read something like David
Harel's "Algorithmics: The Spirit of Computing" (or "The Science of
Computing: Exploring the Nature and Power of Algorithms", depending
on edition), which gives a correct but non-mathematical introduction
to some important basics.

Jorn Barger
Jan 30, 1996

In article <4eiuk9...@aden.ai.cs.yale.edu>,
Drew McDermott <mcdermo...@cs.yale.edu> wrote:
>I didn't mean to imply that AI's toolkit is complicated. But AI
>makes increasing use these days of sophisticated techniques from
>other fields (decision theory, differential-equation theory, control
>theory, computational geometry, ...).

Surely one of the most important things for new students in AI is to
get a balanced perspective on the whole domain. I contend that when
you focus on the most serious, central problems (which involve
simulating human psychology), these tools are distinctly secondary.

> It may have been true in the
>heady days of the 70's that knowing how a CPU works was enough (or at
>least that we *thought* it was), but it's not true now.

Sorry... I don't buy this.

[...]


>I disagree. It's fun to speculate about the impact of AI on
>philosophy and other fields, but most substantial results in the
>field come from finding something that works in a simple subset of
>the obscenely complicated real world. Disappointing, but true.

Philosophy and literature are full of useful terminology and insight
about the o.c.r.w. You can't end-run them if you're looking for
something that works.

>|> AI is an aspect of the scientific method, more than
>|> a scientific domain with content of its own.
>I disagree again. AI is defined by a set of problems, not by a set of
>solution techniques.

Well, I'd like to wrestle about this one till we achieve a common
ground, because it seems uncontroversial to me, and urgently
important.

The "set of problems" boils down to 'capturing all knowledge in
computer code'. Which includes every branch of science, *re-thought*
so that it can be so captured.

There are common threads among AI-for-geology, AI-for-chemistry,
AI-for-medicine, and AI-for-sociology, but these threads are so highly
abstract that they hardly have any content of their own.

Jorn Barger
Jan 31, 1996

In article <4el8bn$d...@mars.mcs.com>, I wrote:
>In article <1996Jan29.2...@jarvis.cs.toronto.edu>,
>David G. Mitchell <mitc...@cs.toronto.edu> wrote:
>>Symbolic logic yes. Machine language certainly not: Probably any
>>computer scientist should have exposure to an assembly language,
>>but to master one is -- for most -- silly (and sillier for a
>>non-specialist). What you need to understand about computation
>>to tackle AI problems you will not learn that way.
>
>Well, as I was trying to say, I believe the critical issue is to
>find the most elegant possible embodiment of Knowledge (-with-a-K).
>If you're buffered from the machine language, you won't develop the
>skills to recognize what's possible.

I've been thinking I should find a nice example to illustrate this, so
here's a sample Cyc 'fact', taken from the published literature:

(LogImplication
  (LogAnd
    (allInstancesOf $AIL Asphyxiation)
    (bodilyInvolved $AIL $AGT))
  (holdsIn $AIL
    (LogNot (behaviorCapable $AGT Breathing bodilyInvolved))))

where $AIL and $AGT are the variables for ailment and agent; LogAnd,
LogNot, and LogImplication are simple logical connectives; and
"holdsIn" defines the context within which this 'axiom' holds.

All of which translates as "If someone is asphyxiating, they cannot
breathe" ...which seems like a definition more than a proposition!

I contend that this could hardly be expressed *less* elegantly, and
that this is at least partly due to the 'comfy chair' esthetics of
LISP.

In consequence, any sort of reasoning about this 'fact' bogs down in
massive amounts of search-and-compare parsing, and the sorts of
de-/in-/ab-/dys-duction that ought to be essentially instantaneous
become serious time-drains.

I expect that Cyc includes 'compiled' versions of this fact, for this
very reason, but I think if you look at the problem from a pure-
machine-language point of view, you'll eventually notice that any
propositions like this will be most efficiently represented as an
individual node in a tree (or thicket ;^) so that the assertion of
the proposition is equivalent to the instantiation of the node...

See <URL:ftp://ftp.mcs.net/mcsnet.users/jorn/thicketfaq.txt>
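
(A rough sketch of the node idea -- my own guess at an implementation,
not anything from Cyc or the FAQ: each proposition is a node hung off
its parent concept, so asserting it is just creating the node, and
testing it is a pointer walk up the tree rather than a parse of nested
logical forms.)

;; Hypothetical node representation: assertion = instantiation.
(defstruct node name parent children)

(defun assert-under (parent name)
  "Instantiate NAME as a child node of PARENT -- the assertion itself."
  (let ((n (make-node :name name :parent parent)))
    (push n (node-children parent))
    n))

(defun holds-p (node name)
  "Does NAME hold at NODE or anywhere above it?  A pointer walk."
  (and node
       (or (find name (node-children node) :key #'node-name)
           (holds-p (node-parent node) name))))

;; E.g., asphyxiation implies cannot-breathe by construction:
;;   (let* ((root (make-node :name 'ailment))
;;          (asphyx (assert-under root 'asphyxiation)))
;;     (assert-under asphyx 'cannot-breathe)
;;     (holds-p asphyx 'cannot-breathe))   ; => the cannot-breathe node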

Drew McDermott
Feb 2, 1996

Jorn Barger wrote:

> > Drew McDermott <mcdermo...@cs.yale.edu> wrote:
> > AI
> >makes increasing use these days of sophisticated techniques from
> >other fields (decision theory, differential-equation theory, control
> >theory, computational geometry, ...).

> Surely one of the most important things for new students in AI is to
> get a balanced perspective on the whole domain. I contend that when
> you focus on the most serious, central problems (which involve
> simulating human psychology), these tools are distinctly secondary.

Secondary to what?

> Philosophy and literature are full of useful terminology and insight
> about the o.c.r.w. You can't end-run them if you're looking for
> something that works.

Philosophy and literature are based on introspection. AI just can't
be. It's entirely possible that the reason the human nervous system
is organized in such-and-such a way can only be explicated in terms of
decision theory, or some other theoretical framework that is not
accessible to the organism itself. To take another example: it's
entirely possible that intuitions about shape are based on distance in
a space defined by a family of differential equations. People can
tell you when shapes look similar to them, but asking them to tell you
why is likely to yield nothing of interest, because the reason why is
just not accessible to introspection.

> >|> AI is an aspect of the scientific method, more than
> >|> a scientific domain with content of its own.
> >I disagree again. AI is defined by a set of problems, not by a set of
> >solution techniques.
>
> Well, I'd like to wrestle about this one till we achieve a common
> ground, because it seems uncontroversial to me, and urgently
> important.
>
> The "set of problems" boils down to 'capturing all knowledge in
> computer code'. Which includes every branch of science, *re-thought*
> so that it can be so captured.

Here is where we disagree. For me a key problem of AI is building a
creature that can walk through a building without bumping into the
furniture. But it's probably not a good idea to try to "capture
knowledge" about bumping, buildings, and furniture. It *is* a good
idea to try to build the creature. When it can behave in the desired
way, it will be academic whether anything inside it corresponds to
captured knowledge.

In the long run, the question whether everyday life is more like
science or more like physical skill will arise, but I don't think
we're even ready to pose it yet.

-- Drew McDermott

Conn Copas
Feb 5, 1996

Drew McDermott <mcdermo...@cs.yale.edu> wrote:
>
>> >|> AI is an aspect of the scientific method, more than
>> >|> a scientific domain with content of its own.
>> >I disagree again. AI is defined by a set of problems, not by a set of
>> >solution techniques.
>>
At the risk of pitting AI notables against each other, what about the
Newell/Simon view that most computation is (or should be) ultimately
about AI, because most of it is performed for some problem-solving
end? Hence the area of application is not so important, because, the
argument goes, conventional software engineering could use a healthy
dose of AI (e.g., by exploiting constraint satisfaction). And then of
course there is the old adage that "if the problem is well-understood
enough to be solved by a deterministic algorithm, then it ain't AI".
This suggests there is at least one distinctive solution technique,
which eventually involves search in one form or another. I am not
actually aware of any rebuttals of that argument, in the way that I am
aware of rebuttals to the notions that 'knowledge' or 'pattern
recognition' uniquely define AI.
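
For what it's worth, that distinctive core can be written down in a
few lines. A rough sketch (mine, untested) of backtracking constraint
satisfaction, with the variables, the domains function, and the
consistency test all assumed given:

;; Rough sketch of "search in one form or another": backtracking
;; constraint satisfaction.  VARS is a list of variable names, DOMAINS
;; is a function from a variable to its candidate values, and
;; CONSISTENT-P judges a partial assignment (an alist) -- all assumed
;; given by the problem at hand.
(defun solve (vars domains consistent-p &optional assignment)
  (if (null vars)
      assignment
      (some (lambda (value)
              (let ((trial (acons (first vars) value assignment)))
                (and (funcall consistent-p trial)
                     (solve (rest vars) domains consistent-p trial))))
            (funcall domains (first vars)))))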

Otherwise, if we identify AI with a set of problems, we arrive at the situation
we currently have with agents: a number of practically useful but theoretically
unremarkable systems for filtering the Internet, which are claimed to be
intelligent because of what they do rather than how they are designed.
