
IF in LISP


Daniel J. Dobson

Mar 2, 1992, 10:12:25 PM
I was wondering if anyone had developed any LISP routines for dealing
with IF. LISP seems like an ideal language to write IF in, since it has
built-in inferencing and association lists. I wrote an inferencer last
week that could answer simple yes/no (or T/NIL, if you understand LISP)
questions off of an association list, which was just tons of fun to play
with. It would easily generalize to IF--for instance, if you defined
"person" to have various body parts: "fingers" "toes" "head" "neck",
etc., and then said "player isa person", you could have the program
successfully divine the intention of questions such as "Do I have
fingers?" or commands such as "Put the iron spike through my hand." The
details of such I haven't worked out yet (I'll work them out and post
them if anyone cares) but if someone had beaten me to the punch, I'd be
most interested. No sense reinventing the wheel, I believe is the
proper response to such matters.

Later,
Wolff
--
********************************************************************
*djdo...@princeton.edu OR (BITNET) djdobson@PUCC* Disclaimer!?! *
* "If this is a consular ship, where is the * Hardly even *
* Ambassador!?!" * know 'er! *
********************************************************************

Jacob S. Weinstein

Mar 3, 1992, 7:41:26 PM
In article <1992Mar3.0...@Princeton.EDU> djdo...@door.Princeton.EDU (Daniel J. Dobson) writes:
>I was wondering if anyone had developed any LISP routines for dealing
>with IF. LISP seems like an ideal language to write IF in, since it has

Yup - a few years ago (maybe as many as ten, although I think
fewer) there was a fairly long article in Dr. Dobb's Journal
by the author of AAL (Adventure Authoring Language), which was written in
LISP. I believe the author is somewhere on the net... I still have the
article somewhere at home, so, if you're interested, I can get more
details, but I won't have a chance to get at it till April.

--
***********************************************************************
* || I have discovered a truly wonderful quote, but, *
* Jacob Weinstein || unfortunately, this .sig is too small to contain*
* || it. *

Phil Goetz

Mar 4, 1992, 4:49:38 PM
In article <1992Mar3.0...@Princeton.EDU> djdo...@door.Princeton.EDU (Daniel J. Dobson) writes:
>I was wondering if anyone had developed any LISP routines for dealing
>with IF. LISP seems like an ideal language to write IF in, since it has
>built in inferencing and association lists.

LISP does not have built-in inferencing. The inferencing you observed,
if you didn't write it yourself, must have been written by someone else
(perhaps your professor or TA) and loaded into your LISP package.
Prolog has built-in inferencing.

Most advanced parsers are written in LISP. The nice thing about LISP
is that association lists are easy to make & work with, and so semantic
networks are easy to make.
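
For what it's worth, the isa reasoning described in the original post
is only a handful of clauses in Prolog, where the chaining comes for
free. A rough, untested sketch (the predicate names are just made up
for illustration):

    isa(player, person).
    has_part(person, fingers).
    has_part(person, toes).
    has_part(person, head).
    %% anything inherits the parts of whatever it "isa"
    has_part(Thing, Part) :- isa(Thing, Kind), has_part(Kind, Part).

    %% ?- has_part(player, fingers).   succeeds by chaining through isa/2

In LISP you would keep the same facts on an association list; the data
structure is easy, but the inference loop is yours to write.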

Phil Goetz
go...@cs.buffalo.edu

Ice

Mar 7, 1992, 11:21:22 PM
In article <1992Mar4.2...@acsu.buffalo.edu> go...@acsu.buffalo.edu (Phil Goetz) writes:
>
>LISP does not have built-in inferencing. The inferencing you observed,
>if you didn't write it yourself, must have been written by someone else
>(perhaps your professor or TA) and loaded into your LISP package.

>Prolog has built-in inferencing.

I am new to Prolog... but this inferencing is, I believe, a resolution
proof by refutation. Something like that. I am also told that this
resolution works with Horn clauses only, but like I said, I'm naive,
and I can't see from that alone why this is a weakness.

Could someone on the net who understands Horn clauses and resolution
please explain this? All I know about Horn clauses is that they contain
at most one positive literal. I have no idea what a clause with two
or no positive literals looks like and why the Prolog inference
engine isn't very awe-inspiring.

My serious gratitude to anyone who can explain this; I'm sure you'd be
helping a lot of people with some stuff at the foundations of logic
programming.

>
>Phil Goetz
>go...@cs.buffalo.edu

Ice. i...@skynet.uucp
--
/* Ice's Hypermedia Sig */ #include <cyberpunk.h> #include <industrial.h>
Hardware required: biological neural net with _unsupervised_learning_
Audio() Burning Inside by Ministry; "The Mind is a Terrible Thing to Taste"
Visual() Sarah Conner's flesh on fire blasted away leaving screaming skeleton

Michael A. Covington

Mar 8, 1992, 6:30:46 PM
In response to Ice's question about Horn clauses and Prolog:

(1) You are right in thinking Prolog is an inference system for Horn
clauses, not for full first-order logic.

(2) The main limitation is that you can't express negative facts
(like "Kermit is not a frog") nor draw negative conclusions (as in
"X is not a frog if X has fur"). This is not as serious as it sounds;
in years of working with Prolog I've had no serious trouble encoding
knowledge bases in Horn clauses.

(3) Prolog also fouls up on a few (very easily recognizable) configurations
of Horn clauses, such as:
ancestor(X,Z) :- ancestor(X,Y), ancestor(Y,Z).
which causes an endless loop. There are ways to work around this.

(4) The choice of Horn clauses and the particular inference procedure
in Prolog was a deliberate compromise between completeness and speed.
Other people might have preferred to make the trade-off differently.
For example, you _can_ put explicit negation into Prolog, but then you
have an inference procedure that runs considerably slower.
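
To make (2) and (3) concrete: in clausal form a rule like green(X) :- frog(X)
is green(X) v ~frog(X), i.e. one positive literal; a disjunctive conclusion
such as "frog(X) or toad(X) if croaks(X)" would need two positive literals,
which is exactly what a Horn clause cannot have. The standard dodges look
roughly like this (an untested sketch; \+ is negation as failure and is only
safe on ground goals):

    %% (2) either name the "negative" relation explicitly...
    not_frog(kermit).
    not_frog(X) :- has_fur(X).       % "X is not a frog if X has fur"
    %%     ...or use negation as failure, which succeeds when the positive
    %%     goal cannot be proved (weaker than classical negation, but
    %%     usually good enough):
    surely_not_frog(X) :- has_fur(X), \+ frog(X).

    %% (3) the usual repair for the looping ancestor clause is to recurse
    %%     through the base relation, so every call gets strictly closer
    %%     to a fact:
    ancestor(X, Y) :- parent(X, Y).
    ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).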

--
==========================================================================
Michael A. Covington, Ph.D. | mcov...@uga.cc.uga.edu | ham radio N4TMI
Artificial Intelligence Programs | U of Georgia | Athens, GA 30602 U.S.A.
==========================================================================

Thomas James Jones

Mar 9, 1992, 3:27:07 AM
In article <1992Mar8.2...@athena.cs.uga.edu>, mcov...@athena.cs.uga.edu (Michael A. Covington) writes:
|> In response to Ice's question about Horn clauses and Prolog:
|>
|> (1) You are right in thinking Prolog is an inference system for Horn
|> clauses, not for full first-order logic.
|>
|> (2) The main limitation is that you can't express negative facts
|> (like "Kermit is not a frog") nor draw negative conclusions (as in
|> "X is not a frog if X has fur"). This is not as serious as it sounds;
|> in years of working with Prolog I've had no serious trouble encoding
|> knowledge bases in Horn clauses.

C'mon. It's the whole bloody problem with symbolic AI.
(I'm a connexionist and believe that, eventually, it is
with ANNs that these _basic_ but _fundamental_ problems
with AI will be solved.)

|> ==========================================================================
|> Michael A. Covington, Ph.D. | mcov...@uga.cc.uga.edu | ham radio N4TMI
|> Artificial Intelligence Programs | U of Georgia | Athens, GA 30602 U.S.A.
|> ==========================================================================

tom j. jones

Peter Van Roy

Mar 9, 1992, 5:03:37 AM
In article <43...@cluster.cs.su.oz.au>, t...@minnie.cs.su.OZ.AU (Thomas James Jones) writes:
> (I'm a connexionist and believe that, eventually it is
> with ANNs that these _basic_ but _fundamental_ problems
> with AI will be solved.)
>
> tom j. jones

Aha, a connectionist. Wonderful. I've always wanted to find one, to
ask some questions. I confess, I'm ignorant about the frontiers of
research in neural nets, but I do try to keep informed of what's going
on by reading general articles that appear in IEEE Computer and such.
I am eager to learn about neural nets, and I have some experience in
symbolic programming and compiler construction.

In all that I've ever heard about neural nets, the only examples I've
seen are ones where the neural net is being used to do pattern matching,
e.g. it can quickly find the local minimum in a given space that is
closest to the input it is given: handwriting recognition and speech
recognition, for example. Articles on neural nets focus on
topics that are oriented towards this goal, e.g. they develop
techniques for fast learning, they talk about hardware
implementations, they talk about fast recognition.

There is a large gap between this kind of pattern matching and the
kinds of reasoning that people do. For example, I believe that I (and
my fellow humans) have a rich inner world where all kinds of reasoning
takes place, reasoning that can't be modeled by pattern matching.

So here's my question: Where's the beef? What developments in neural
net research point towards building systems that do reasoning? Please
give me a detailed answer with references and examples.

There's only one example I know of where a complex system that you
might think of as a neural net does specific tasks that are the
rudimentary beginnings of reasoning: the human visual system.
In the book on Vision (Scientific American Library), a detailed
exposition is given of the current understanding of how we see. It's
a very bottom-up approach: the idea is to start from the eye, and
trace back the neural connections into the brain, and figure out what
they do. Several layers have so far been traced, and the conclusion
is that the neurons are connected in a very precise manner, like a
deterministic computing engine, and that no "local minimum in a given
space" kind of pattern matching is going on. The connections are
quite clever: For example, they recognize borders, not interiors.
From a standpoint of practical engineering, this is eminently
reasonable: the border is 1D, whereas the interior is 2D, so fewer
neurons are needed. Another clever idea is the logarithmic reduction
of precision as you go farther away from the focal point. This also
saves hardware.

Regards,
Peter Van Roy

----------------------------------------------------------------
Peter Van Roy
Digital Equipment Corporation Net: van...@prl.dec.com
Paris Research Laboratory Tel: [33] (1) 47 14 28 65
85, avenue Victor Hugo Fax: [33] (1) 47 14 28 99
92563 RUEIL MALMAISON CEDEX
FRANCE
----------------------------------------------------------------



P. Singleton

Mar 9, 1992, 11:28:58 AM
From article <1992Mar8.2...@athena.cs.uga.edu>, by mcov...@athena.cs.uga.edu (Michael A. Covington):

> (2) The main limitation is that you can't express negative facts
> (like "Kermit is not a frog") nor draw negative conclusions (as in
> "X is not a frog if X has fur"). This is not as serious as it sounds;
> in years of working with Prolog I've had no serious trouble encoding
> knowledge bases in Horn clauses.

Are there standard workarounds for expressing negative or disjunctive facts?
Or do we just conspire to disregard problems which Prolog can't handle, e.g.
the "red blocks" problem? I do the latter, but I'd be glad to learn the
former.

> (3) Prolog also fouls up on a few (very easily recognizable) configurations
> of Horn clauses, such as:
> ancestor(X,Z) :- ancestor(X,Y), ancestor(Y,Z).
> which causes an endless loop. There are ways to work around this.

Are there techniques of static analysis to find most/all such troublespots?

I believe that if Prolog encounters a goal which is a variant of an
ancestor of that goal (I mean the same up to renaming of variables), then it
is doomed to infinite recursion, and a meta-interpreter can be built which
detects these cases.
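
Such a meta-interpreter is short enough to sketch (untested, assuming the
object program is visible to clause/2; built-ins would need an extra case).
Variance is tested by comparing numbervars'd copies:

    solve(Goal) :- solve(Goal, []).

    solve(true, _) :- !.
    solve((A, B), Ancestors) :- !,
        solve(A, Ancestors),
        solve(B, Ancestors).
    solve(Goal, Ancestors) :-
        \+ variant_member(Goal, Ancestors),   % prune variant loops
        clause(Goal, Body),
        solve(Body, [Goal | Ancestors]).

    variant_member(Goal, [A | _])  :- variant_of(Goal, A).
    variant_member(Goal, [_ | As]) :- variant_member(Goal, As).

    %% two terms are variants iff their numbervars'd copies are identical
    variant_of(T1, T2) :-
        copy_term(T1, C1), copy_term(T2, C2),
        numbervars(C1, 0, _), numbervars(C2, 0, _),
        C1 == C2.

Of course this only prunes goals which repeat up to renaming.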

But what about "f(X) :- f(X+X)." which generates novel goals without a hope
of getting anywhere? This example is obviously hopeless, but I think it
sometimes happens non-obviously in well-intentioned code, so tools to
detect it (preferably statically) would be valuable.
---
__ __ Paul Singleton (Mr) JANET: pa...@uk.ac.keele.cs
|__) (__ Computer Science Dept. other: pa...@cs.keele.ac.uk
| . __). Keele University, Newcastle, tel: +44 (0)782 621111 x7355
Staffs ST5 5BG, ENGLAND fax: +44 (0)782 713082

ef80

Mar 10, 1992, 9:38:03 AM
In article <43...@cluster.cs.su.oz.au> t...@minnie.cs.su.OZ.AU (Thomas James Jones) writes:
> [innocuous stuff about Prolog deleted]

> c'mon. It's the whole bloody problem with symbolic AI.
> (I'm a connexionist and believe that, eventually it is
> with ANNs that these _basic_ but _fundamental_ problems
> with AI will be solved.)
> tom j. jones

Tom,
How does this help me write interactive fiction? Have you produced a good
interactive fiction game using Neural Nets? Does it produce interesting
NPC's? If you have, share the wealth, man!

Otherwise, keep the academic posturing on comp.ai where it belongs, O.K.?


Disgruntled,
John Deighan
--
net: jde...@relay.nswc.navy.mil -- or just -- jdeigha@relay
U.S. Snail: c/o Sinetics, 24 Danube Drive, King George, VA 22485

Olivier Ridoux

Mar 10, 1992, 2:43:44 PM
From article <1992Mar8.2...@athena.cs.uga.edu>, by mcov...@athena.cs.uga.edu (Michael A. Covington):
> In response to Ice's question about Horn clauses and Prolog:
>
> (1) You are right in thinking Prolog is an inference system for Horn
> clauses, not for full first-order logic.
>
> (4) The choice of Horn clauses and the particular inference procedure
> in Prolog was a deliberate compromise between completeness and speed.

Other compromises exist. E.g. Hereditary Harrop formulas as in LambdaProlog.
I agree it is a little more complicated than Horn formulas, but not that much.
It may be that the "deliberateness" only came from aiming at model-theoretical
results such as: the intersection of all models is a model, is equal to the
lfp of the immediate consequence function, and is equal to the success set of
SLD resolution. Similar results can be obtained for hereditary Harrop
formulas; they are just much less natural.
Proof-theoretical results are more natural for hereditary Harrop formulas
considered as a fragment of an intuitionistic predicate calculus:
goal-directed sequent proofs are complete for this fragment.
Horn formulas are only a smaller fragment.
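
(Roughly, and ignoring the higher-order features of LambdaProlog: first-order
hereditary Harrop program clauses D and goals G are

    D ::= A  |  G => A  |  D & D  |  forall x. D
    G ::= A  |  G & G  |  G ; G  |  D => G  |  forall x. G  |  exists x. G

so, unlike Horn clauses, goals may contain implications and universal
quantifiers; operationally, an implication in a goal is solved by adding its
antecedent to the program for the duration of that subproof, and a universal
goal by introducing a fresh constant.)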

Hoping it helps,

Olivier Ridoux

Fernando Pereira

Mar 12, 1992, 2:42:30 PM
In article <1992Mar10.1...@irisa.fr> rid...@irisa.fr (Olivier Ridoux) writes:
>From article <1992Mar8.2...@athena.cs.uga.edu>, by mcov...@athena.cs.uga.edu (Michael A. Covington):
>> (4) The choice of Horn clauses and the particular inference procedure
>> in Prolog was a deliberate compromise between completeness and speed.
>
>Other compromises exist. E.g. Hereditary Harrop formulas as in LambdaProlog.
>I agree it is a little more complicated than Horn formulas, but not that much.
Much as I am fond of Lambda Prolog, it is not at all clear that the
HOHH (higher-order hereditary Harrop) fragment is an ideal compromise of
that kind, because of the very high costs of managing lambda normalization
and higher-order unification. Something like Dale Miller's LLambda, which
has a much simpler (for one thing, decidable) unification problem but still
much of the logical power of HOHH, may be a better example of such a
compromise extending Horn clauses. As for proof and model theory,
HOHH (and LLambda?) are sufficiently delicate that, for example, we do
not know how to formalize bottom-up HOHH proof procedures, which would
be very useful for some applications I have tried. In contrast, HCL
(Horn clause logic) allows many varieties of bottom-up or mixed-strategy
proofs as well as the usual SLD proofs, which has proved to be essential
in deductive databases and in drawing analogies between parsing and HCL
deduction. Until such issues are resolved, we cannot really say that HOHH
(or LLambda) is as balanced a compromise as HCL, even if they provide a
very interesting and fruitful set of research questions.

Fernando Pereira
2D-447, AT&T Bell Laboratories
600 Mountain Ave, PO Box 636
Murray Hill, NJ 07974-0636
per...@research.att.com

Richard A. O'Keefe

Mar 13, 1992, 3:53:52 AM
In article <22...@keele.keele.ac.uk>, cs...@seq1.keele.ac.uk (P. Singleton) writes:
> > (3) Prolog also fouls up on a few (very easily recognizable) configurations
> > of Horn clauses, such as:
> > ancestor(X,Z) :- ancestor(X,Y), ancestor(Y,Z).
> > which causes an endless loop. There are ways to work around this.
>
> Are there techniques of static analysis to find most/all such troublespots?
>
> I believe that if Prolog encounters a goal which is a variant of an
> ancestor of that goal (I mean same but for renaming of variables), then it
> is doomed to infinite recursion, and a meta-interpreter can be built which
> detects these cases.

> But what about "f(X) :- f(X+X)." which generates novel goals without a hope
> of getting anywhere? This example is obviously hopeless, but I think it
> sometimes happens non-obviously in well-intentioned code, so tools to
> detect it (preferably statically) would be valuable.

Brough & Walker had an article on loop detection several years ago.
The bottom line of their article was that there is no one _best_ method,
but there are several _useful_ methods.

There are two cases:
(1) You are using Prolog as an unusually pleasant imperative language.
    In this case, you don't _want_ anything coming around behind your
    back and messing with your program. The simpler the semantics the
    better, even if that does mean infinite loops can happen.

(2) You are using Prolog as a theorem prover for Horn clauses (e.g.
    for natural language parsing, or for deductive data base work).
    In this case, you want a simple _logical_ semantics, and don't
    care about the procedural behaviour. In that case, a different
    execution strategy may be a better way to go.
    a) The simplest method is to use iterative deepening (a sketch
       follows at the end of this list). It is _so_ easy to do, and
       it works amazingly well. I have been surprised at how often
       there is some measure of problem difficulty which can naturally
       be folded into the computation. (My hack for making reverse/2
       and permutation/2 behave well turns out to be a special case of
       this, so now I can say it's principled...)
    b) The entire Earley deduction family is there to explore...
    c) Bottom-up computation may be appropriate. (Check the deductive
       data base literature.)
    d) Automatic program transformation...
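
Here is about how small the iterative-deepening version of (a) is
(an untested sketch: it assumes the program is visible to clause/2,
ignores built-ins, and will not terminate if there is no proof at any
depth):

    %% retry with ever larger depth bounds; any solution at finite
    %% depth is eventually found, at the price of redoing the shallow
    %% work and of repeating solutions already found at smaller bounds
    prove(Goal) :- depth(Bound), prove(Goal, Bound).

    depth(0).
    depth(N) :- depth(M), N is M + 1.

    prove(true, _) :- !.
    prove((A, B), Bound) :- !,
        prove(A, Bound),
        prove(B, Bound).
    prove(Goal, Bound) :-
        Bound > 0,
        Bound1 is Bound - 1,
        clause(Goal, Body),
        prove(Body, Bound1).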

Automatic loop stifling is not as useful as some might think. Why?
Because a program where such a loop arises is likely to be spectacularly
inefficient. (For example,
    ancestor(X, Y) :- parent(X, Y).
    ancestor(X, Y) :- ancestor(X, Z), ancestor(Z, Y).
is a _fine_ specification, but a truly dreadful program.) If you do a lot
of things like that, write
    transitive_closure(ancestor(X,Y), parent(X,Y)).
in your programs and use term_expansion/2 to rewrite that into something
more efficient.
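
(The expansion itself is only a few lines. An untested sketch, for Prologs
that provide the term_expansion/2 hook, and assuming both argument terms are
binary and share their two variables as in the example above:

    term_expansion(transitive_closure(Closure, Step),
                   [ (Closure :- Step),             % ancestor(X,Y) :- parent(X,Y).
                     (Closure :- Step1, Rest) ]) :- % ancestor(X,Y) :- parent(X,Z), ancestor(Z,Y).
        Closure =.. [C, X, Y],
        Step    =.. [S, X, Y],
        Step1   =.. [S, X, Z],                      % fresh intermediate variable Z
        Rest    =.. [C, Z, Y].

This replaces the declaration by a base clause and a right-recursive step, so
the dreadful doubly-recursive version never gets written.)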

Automatic loop detection as part of a declarative debugging environment,
now that's another thing.

--
I am writing a book on debugging aimed at 1st & 2nd year CS students using
C/Modula/Pascal-like languages. Please send suggestions (other than "you
_must_ cite "C Traps and Pitfalls") to o...@goanna.cs.rmit.oz.au

Richard A. O'Keefe

Mar 13, 1992, 4:02:44 AM
In article <1992Mar10.1...@irisa.fr>, rid...@irisa.fr (Olivier Ridoux) writes:
> Other compromises exist. E.g. Hereditary Harrop formulas as in LambdaProlog.
> I agree it is a little more complicated than Horn formulas, but not that much.

How about posting a short definition of hereditary Harrop formulas,
and a couple of examples of their use?

Olivier Ridoux

Mar 16, 1992, 3:08:42 PM
From article <22...@alice.att.com>, by per...@alice.att.com (Fernando Pereira):

> In article <1992Mar10.1...@irisa.fr> rid...@irisa.fr (Olivier Ridoux) writes:
>>From article <1992Mar8.2...@athena.cs.uga.edu>, by mcov...@athena.cs.uga.edu (Michael A. Covington):
>>> (4) The choice of Horn clauses and the particular inference procedure
>>> in Prolog was a deliberate compromise between completeness and speed.
>>
>>Other compromises exist. E.g. Hereditary Harrop formulas as in LambdaProlog.
>>I agree it is a little more complicated than Horn formulas, but not that much.
> Much that I am fond of Lambda Prolog, it is not at all clear that the
> HOHH (higher-order hereditary Harrop) fragment is an ideal compromise of
> that kind, because of the very high costs of managing lambda normalization
> and higher order unification.

Hereditary Harrop (HH) formulas are not necessarily higher-order.
Uniform proofs (a special brand of goal-directed proof) are complete for HH.
Assuming that goal-directedness has to do with speed, we have such a compromise.

> Something like Dale Miller's LLambda, which
> has a much simpler (for one thing, decidable) unification problem but still
> much of the logical power of HHOH, may be a better example of such a
> compromise extending Horn clauses.

LLambda also has the HH clause structure.

> As for proof and model theory,
> HHOH (and LLambda?) are sufficiently delicate that, for example, we do
> not know how to formalize bottom up HHOH proof procedures, which would
> be very useful for some applications I have tried. In contrast, HCL
> allows many varieties of bottom up or mixed strategy proofs as well
> as the usual SLD proofs, which has proved to be essential in deductive
> databases and in drawing analogies between parsing and HCL deduction.
> Until such issues are resolved, we cannot really say that HHOH (or LLambda)
> are as balanced a compromise as HCL, even if they provide a very interesting
> and fruitful set of research questions.

For those who are interested, Miller proposes a Herbrand theorem and a
model-theoretic semantics for LambdaProlog in
[Miller et al., "HH formulas and uniform proof systems", LICS 1987].
It uses the notion of worlds (as in modal logic).
I agree it does not directly yield a bottom-up proof procedure.

Olivier Ridoux

Nicole Tedesco

Mar 21, 1992, 9:18:01 AM
van...@prl.dec.com (Peter Van Roy) writes:

> So here's my question: Where's the beef? What developments in neural
> net research point towards building systems that do reasoning? Please
> give me a detailed answer with references and examples.
>

> Regards,
> Peter Van Roy

The next step in neural net development, as you have pointed out, is the
marriage of the neural net and the expert system (I guess that's one way to
put it). The neural net must aid in symbolic processing, not merely pattern
recognition. It is my hypothesis that neural nets must be used in the
CONSTRUCTION of expert systems, in which they become the "atoms" of
decision-making at the various nodes in decision state space. They will
also help to acquire knowledge, and so forth. A true connectionist system
will build an "expert system" entirely out of different types of neural
nets. One type will handle Pavlovian learning, one type will handle fuzzy
decision-making, and so forth. Small nets can contain the "if" clauses of
the various expert system frames, and the outputs of the nets will be the
triggering actions.

- Nicole

---------------------------------------------------------------------
nic...@toz.buffalo.ny.us (Nicole Tedesco)
-- Change this!

Jorn Barger

Mar 23, 1992, 10:54:24 AM
nic...@toz.buffalo.ny.us (Nicole Tedesco) writes:
> The next step in neural net development, as you have pointed out, is the
> marriage of the neural net and the expert system (I guess that's one way to
> put it). The neural net must aid in symbolic processing, not merely pattern
> recognition. It is my hypothesis that the neural net must be used in the
> CONSTRUCTION of expert systems, whereas neural nets become the "atoms" of
> decision-making at the various nodes in decision state space. They will
> also help to aquire knowledge, and so forth. A true connectionist system
> will build an "expert system" entirely out of different types of neural
> nets. One type will handle Pavlovian learning. One type will handle fuzzy
> decision-making, and so forth. Small nets can contain the "if" clauses of
> the various expert system frames, and the output of the nets will be the
> triggering actions.

Being firmly entrenched in the Schankian trench, I guess I'll stick my
neck out again and challenge this. (But I'm running on intuition here,
not any deep understanding of neural net technologies):

This argument sounds to me like: expert systems can't handle the
complexities of the real world, but neural nets might be able to,
because they're trained with real inputs. So if you can somehow
squeeze that magic trick down to its essence, you can build a network
of neural nets that are connected logically. (Is there some way to
predict how many neurons will be necessary for a given problem?
Once the network is trained, can it be optimized/shrunk?)

But you'll somehow need to break down your problem into subproblems that
are simple enough for neural nets to handle, yet not so simple that other
technologies could handle them more efficiently.

And this just seems like wishful thinking... The real problem is
to break the real world down into any stable categories at all, which
is what your expert-framework will have to do.

Am I missing the point?

Bill Armstrong

Mar 23, 1992, 12:45:21 PM
bar...@ils.nwu.edu (Jorn Barger) writes:

>nic...@toz.buffalo.ny.us (Nicole Tedesco) writes:
>> The next step in neural net development, as you have pointed out, is the
>> marriage of the neural net and the expert system (I guess that's one way to
>> put it). The neural net must aid in symbolic processing, not merely pattern
>> recognition. It is my hypothesis that the neural net must be used in the
>> CONSTRUCTION of expert systems, whereas neural nets become the "atoms" of
>> decision-making at the various nodes in decision state space. They will
>> also help to aquire knowledge, and so forth. A true connectionist system
>> will build an "expert system" entirely out of different types of neural

>> nets. .....

>This argument sounds to me like: expert systems can't handle the
>complexities of the real world, but neural nets might be able to,
>because they're trained with real inputs.

The "complexities of the real world" is a bit too broadly stated here,
I think. Does anyone dispute that a system that deals with real-world
inputs like images and sounds has to be able to perform pattern
recognition? Or that finding an analogy to something is a form of
pattern recognition at a high level? I agree with Nicole Tedesco's
remarks.

>So if you can somehow
>squeeze that magic trick down to its essence, you can build a network
>of neural nets that are connected logically.

>Once the network is trained, can it be optimized/shrunk?)

Some nets can be shrunk, e.g. adaptive logic networks. For very
large systems, one serious problem would be retrieving network
parameters from mass storage, which is most efficiently done in large
chunks. On the other hand, an efficient network may only need to use
a part of the data. Adaptive logic networks, for example, use only a
small fraction of the total data, while BP nets need all of it. The
solutions to these problems are probably similar to what is done in
filesystems and operating systems -- keeping frequently used nets *or
parts of nets* in RAM where they can be instantly called upon, and
leaving the rest in mass storage.

>But you'll need somehow to break down your problem into subproblems that
>are simple enough for neural nets to handle, but not simple enough
>for other technologies to handle more efficiently.

Yes, but I don't think you will ever have to consider breaking all
basic pattern recognition problems down into simpler problems that can
be handled by an expert system, say. There is a point at which the
inefficiencies would make further breakup counterproductive, and
that's about the right point for the interface to be set up.
--
***************************************************
Prof. William W. Armstrong, Computing Science Dept.
University of Alberta; Edmonton, Alberta, Canada T6G 2H1
ar...@cs.ualberta.ca Tel(403)492 2374 FAX 492 1071

Jorn Barger

Mar 24, 1992, 11:17:12 AM
ar...@cs.UAlberta.CA (Bill Armstrong) writes:
> The "complexities of the real world" is a bit too broadly stated here,
> I think. Does anyone dispute that a system that deals with real-world
> inputs like images and sounds has to be able to perform pattern
> recognition? Or that finding an analogy to something is a form of
> pattern recognition at a high level? I agree with Nicole Tedesco's
> remarks.

The real world I'm thinking of is the world of human motives and plans,
not edge-detection!

> [...] I don't think you will ever have to consider breaking all
> basic pattern recognition problems down into simpler problems that can
> be handled by an expert system, say. There is a point at which the
> inefficiencies would make further breakup counterproductive, and
> that's about the right point for the interface to be set up.

Again, maybe I'm misunderstanding, but this is the opposite of my
complaint: I'm not worried about breaking nets down all the way to
logic, I'm worried about breaking the brain-as-a-whole into a series
of small nets connected by logic-- because the big-picture first-cut
problem is being sloughed off onto some mystical supernet that can
figure out which smaller nets to pass off its results to-- but this
first task seems absurdly infinite without some semantic theory.

(Sorry that I don't speak the lingo: I'm self-taught, via videogame
hacking)
