Mind.Forth Robot AI Penultimate Release #27

Arthur T. Murray

Jun 13, 1999, 3:00:00 AM
John Passaniti, jp...@rochester.rr.com, wrote on Sat, 12 Jun 1999:

> Before I start, I should state that I am not a student of AI
> techniques and technologies. I follow it independently, mostly
> out of curiosity, because there are spin-off technologies
> (neural nets, fuzzy logic, genetic algorithms, etc.) that
> either have (or will) eventually factor into the rest of my work.

Before I reply, I should state that I am only an independent
scholar in AI, but with an obsessive Nietzschean devotion to AI.
See http://www.voltage-zine.com/konspiracy/1998/mentifex.html or
hardcopy ACM Sigplan Notices 33(12):25-31 (Paul Frenger, 1998).

> In a recent message, Jeff Fox wrote the following (if you want the
> full context, search for it):

> <jf...@ricochet.net> wrote in message
> news:7jouko$on4$1...@nnrp1.deja.com...
>> Mind.Forth however does not require building a
>> conscious entity with formal knowledge of natural
>> languages or the ever elusive common sense program
>> to get that holy grail of Artificial Mind. It seems to
>> provide a shockingly simple way to construct conscious
>> machines. It is a bit frightening that such a simple
>> mechanism can do this and the implications are difficult
>> to fathom.

> This message isn't in response to Jeff, because others have echoed
> what Jeff has written. I'm asking some generic questions here
> that hopefully others can answer-- especially Arthur Murray.

At your service. Ever since your initial critique of Mind.Forth
-- <9_qa2.521$5V.25...@newse1.twcny.rr.com> -- on 6.Dec.1998,
you have been a major influence on its development, especially
since you argued successfully for the elimination of variables
with far too esoteric and therefore unclear names.

> The first question is about the use of the word "conscious"
> as used in the above quote and elsewhere. Jeff (and Arthur
> Murray) seem to have a wildly different definition of what
> that word means than what most of us have in mind.

I like Dr. Paul Frenger's description of consciousness posted
on 10 June 1999 in this thread:

> One interesting (assumed) difference between animals and
> men is not self-awareness (because our pets are certainly
> self-aware), but an awareness of our self-awareness;
> a kind of META-self-awareness. This function would
> represent a higher level of organization of brain/mind
> and could be added as another layer of software in Mind.Forth.

I don't claim (yet) that Mind.Forth is conscious. It may
need to be outfitted with a robot body before it can become
aware of its own existence as a distinct entity in the world.
It may also need massively parallel processing (MPP) to attain
the speed necessary for the illusion that consciousness is:
Once we are fooled into thinking that we are conscious, BINGO! --
We suddenly have consciousness, because the belief in it *is* it.

John P:
> When I
> have looked at Mr. Murray's work in the past, what struck me
> is that he basically seems to be constructing sets of weighted
> Markov chains of input chunks of language. The weights seem
> to be related to other stimuli at the time, and possibly
> other kinds of events that occur. So instead of having any
> real deep understanding of language,

Well, I speak German, Russian, Latin, and ancient Greek, so I
actually do claim a deep understanding of language. Oh! Maybe
you're talking about the *program*'s understanding of language.
OK. The whole Mind.Forth project is an attempt at creating
machine understanding of natural human language. The results
should be out before the end of this year/decade/century/millennium.

> he's just relying on
> a kind of dynamic discovery of sequences of sensations, and
> relating them to some kind of expected outcome.

> Sounds a lot like the family dog. Say you have a dog who
> enjoys chewing on the furniture. You don't like this, so
> you scream "Don't chew on the furniture" and swat the dog
> with a rolled-up newspaper. Dog hears a sequence of
> meaningless sounds, feels the sting of the newspaper, and
> registers the event as something to avoid. This sequence
> plays itself out a few times, which causes the dog's brain
> to increase the synaptic weights of the meaningless sequence
> of "Don't chew on the furniture" and the sting of the rolled-up
> newspaper. Now, when dog attempts this, the owner utters the
> same meaningless sequence of sound, but this time the dog
> makes the connection and doesn't chew the furniture.
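
John's dog scenario boils down to a weight update between a stimulus
and an outcome. Here is a minimal sketch in Python (purely
hypothetical -- not Mind.Forth code, and the function name and
learning rate are my own illustration):

```python
# Hypothetical sketch of the associative learning John describes:
# a meaningless stimulus gains weight toward an outcome each time
# the two co-occur, with no understanding of the stimulus itself.

def reinforce(weights, stimulus, outcome, rate=0.5):
    """Strengthen the association between a stimulus and an outcome."""
    key = (stimulus, outcome)
    # Move the weight a fraction of the way toward full association (1.0).
    weights[key] = weights.get(key, 0.0) + rate * (1.0 - weights.get(key, 0.0))
    return weights[key]

weights = {}
for _ in range(3):  # the scene "plays itself out a few times"
    reinforce(weights, "don't chew on the furniture", "sting")

# After a few pairings the meaningless sound strongly predicts the sting.
print(weights[("don't chew on the furniture", "sting")])  # 0.875
```

The dog's "learning" here is nothing but a number creeping toward 1.0;
the stimulus string could just as well be "I'm a little teapot!".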

Mind.Forth is an attempt to do correctly what the highly
respected and very worthwhile Cyc project of Douglas Lenat
has shown cannot be done with formulaic entries of facts.

Sure, an English word in Mind.Forth-27 is only a string of
characters standing in for phonemes, but wait until our fellow
comp.robotics.misc zealots add real sensory inputs of reality.

Then the original Mind.Forth associations among words will still
be there, but be enhanced with sensory engram tags which will
discriminate even further among concepts than language alone does.

> Does the dog understand English? If you uttered "I'm a little
> teapot!" to the dog with the same kind of vocal inflection
> as "Don't chew on the furniture" the dog would likely
> react the same-- at least all the dogs I've ever known would.
> This tells me that the dog clearly doesn't understand the
> meaning of the words spoken, but has instead learned there
> is an association between (a) chewing on the furniture, (b)
> hearing the owner yell at them, (c) feeling an unpleasant sting.
> There is no intelligence here-- no understanding of what the
> individual words mean, or an understanding of what the whole
> sentence means.

> Likewise, what Arthur Murray seems to be promoting is an
> AI architecture that likewise doesn't understand the world,

But Mind.Forth builds up a knowledge base (KB) of relationships
known to the human user and told to the robot by the user.
(I am still coding the central mindcore pathways, as in Mind.rexx.)
We will see whether Mind.Forth can handle syllogisms of logical
reasoning, although I (or someone) must still code in negation.
(Amiga Fish #977 MVP-Forth crashes if my bootstrap is too long.)
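
The kind of syllogism I hope the KB will handle can be sketched in a
few lines of Python (a hypothetical illustration only, not Mind.Forth
code; the `tell`/`ask` names are my own, and negation is left out,
just as it is still uncoded in Mind.Forth):

```python
# Hypothetical sketch of a knowledge base of user-told relationships
# that answers a syllogism by chaining "is-a" links.

kb = set()

def tell(subj, rel, obj):
    """Record a relationship told to the robot by the user."""
    kb.add((subj, rel, obj))

def ask(subj, rel, obj, seen=None):
    """Answer a query by direct lookup or by chaining through is-a links."""
    seen = seen or set()
    if (subj, rel, obj) in kb:
        return True
    # Transitive step: if subj is-a X, and X rel obj, then subj rel obj.
    for (s, r, o) in kb:
        if s == subj and r == "is-a" and o not in seen:
            if ask(o, rel, obj, seen | {o}):
                return True
    return False

tell("Socrates", "is-a", "man")     # told by the user
tell("man", "is-a", "mortal")
print(ask("Socrates", "is-a", "mortal"))  # True, by syllogism
```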

> but instead knows how to assign weights to certain sequences
> of stimuli that occur within a context.

Let's look at where the weights come from in Mind.Forth AI.
An underlying assumption is that all concepts in an incoming
sentence shall suddenly have a high level of "a(ctivation)",
which probably corresponds to "weights" in a neural net.

Now, some of this code is not yet written into Release #27,
but the HOLODYNE subroutine will detect the activation of
incoming concepts and cause a high level of activation at
all recent nodes on the same concepts (as if on fibers).

Thus all available knowledge about these incoming concepts
is suddenly activated and poised, ready to help generate the
sentence that Mind.Forth thinks up in reply to the user.
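
The idea can be illustrated with a toy spreading-activation sketch in
Python (this is not the actual HOLODYNE code, which remains to be
written; the link structure and the boost/spread fractions are my own
assumptions):

```python
# Hypothetical illustration of spreading activation: each concept in
# an incoming sentence is boosted, and a fraction of the boost
# propagates to linked concepts, so that related knowledge stands
# poised to help generate the reply.

from collections import defaultdict

links = defaultdict(set)         # concept -> associated concepts
activation = defaultdict(float)  # concept -> current activation level

def associate(a, b):
    """Record a symmetric association between two concepts."""
    links[a].add(b)
    links[b].add(a)

def holodyne(sentence_concepts, boost=1.0, spread=0.5):
    """Activate incoming concepts and spread a fraction to neighbors."""
    for concept in sentence_concepts:
        activation[concept] += boost
        for neighbor in links[concept]:
            activation[neighbor] += boost * spread

associate("dog", "bark")
associate("dog", "bone")
holodyne(["dog"])
print(activation["dog"], activation["bark"], activation["bone"])  # 1.0 0.5 0.5
```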

> Is that "conscious?" If it is, I suggest that Arthur,
> Jeff, and the others who seem to find this interesting
> have a very low threshold of what consciousness is.
> And if that's what they are aiming for, then fine--
> but we aren't talking about an architecture that will
> (at least without *tons* of training) gain anything near
> what we as humans consider consciousness in ourselves.

> What am I missing if I am wrong?

Nothing. You (John Passaniti) are one of the most astute
posters in comp.lang.forth and you do not tolerate foolery.

> Years ago, I had great fun playing with "Travesty."
> I first came across it published in Byte magazine.
> It basically is an algorithm that looks at streams of
> input language, breaks it up into chunks, builds a
> frequency table of sequences of those chunks, and then
> outputs text using nothing but random numbers factored
> against those frequencies. The result is hilarious.
> Feed such an algorithm text from the Bible, Shakespeare,
> and William S. Burroughs (or all three), and you get
> back out a weird stream of text that if done correctly,
> looks like a convincing replication of the original.
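
For readers who never saw it, the Travesty idea John describes fits in
a few lines of Python (a hypothetical reconstruction, not the Byte
listing; chunk size and restart policy are my own choices):

```python
# Minimal sketch of Travesty: build a frequency table of character
# n-grams from the input, then emit text by a random walk weighted
# by those observed frequencies.

import random
from collections import defaultdict

def build_table(text, order=3):
    """Map each n-character chunk to the characters observed after it."""
    table = defaultdict(list)
    for i in range(len(text) - order):
        table[text[i:i + order]].append(text[i + order])
    return table

def travesty(text, length=60, order=3, seed=0):
    random.seed(seed)
    table = build_table(text, order)
    chunk = text[:order]
    out = chunk
    for _ in range(length):
        followers = table.get(chunk)
        if not followers:        # dead end: restart at the beginning
            chunk = text[:order]
            followers = table[chunk]
        out += random.choice(followers)
        chunk = out[-order:]
    return out

sample = "to be or not to be that is the question "
print(travesty(sample))  # locally plausible, globally nonsensical text
```

Because each choice is weighted by how often a chunk followed another
in the source, the output mimics the source's local texture without
any model of meaning at all -- which is exactly John's point.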

Mind.Forth does not generate random outputs. Once the AI
"quickens" in a coming release by no longer waiting for human
input, the AI will proceed to reason linguistically about
all concepts that have non-randomly come to be activated.
(And then the AI may suffer like a sensory-deprived human
in an isolation tank, so please give the AI a Net link.)

> I later found out that the concept was also used in music.
> Encode music in some way (such as MIDI), and record details
> like note, duration, and context, and you can build a table
> of frequencies that you can randomly hop around on to produce
> what sounds like elements of the original. A music professor
> friend once played for me a hilarious recording of what happens
> when you generate random music based on this algorithm-- the
> inputs were some selections from Bach crossed with Elvis Presley.

> The thing about both of these cases is that in both, the
> output of the algorithm is generating something that normally
> would need human intelligence. You can read the text or hear
> the music, and in some cases wouldn't be able to tell that it
> is nothing but random numbers and frequency tables. It seems
> like there is a consciousness there-- but of course, there isn't.

> Arthur Murray's architecture seems to do largely the same
> thing. Unless I'm missing something, he seems to have
> crossed the family dog with "Travesty," and claims this
> represents intelligence or even consciousness. I just
> don't see it.

At least one person (Ward McFarland) further back in this thread
said that he was able to load up the ASCII text file linked from
http://www.scn.org/~mentifex/aisource.html and get Mind.Forth
to run on a platform other than its native Amiga. You may see it
popping up on new platforms in new languages in a new millennium.

> Jeff's statement that you don't need to build knowledge of
> the formal rules of a language into a system seems obvious.
> Children do this implicitly, as they don't have to learn the
> structure of language to be able to understand it. But
> understanding a language is a long way from *reacting* to a
> language. Like the example of the family dog above, the dog
> doesn't *understand* the language, he is only reacting to it.

> So there you go. Fill in what are obviously the blanks.
> Maybe then I (and others) will see the value in what Arthur
> Murray has posted. Until then, what I see is largely a different
> way to encode "Travesty" that might give better results.
