
Theory: 'contemplation factor'


Jorn Barger

Jun 23, 2001, 8:44:30 AM
At various times in our day, we're confronted with data that we
need to *contemplate*.

Complex data is more difficult to contemplate, simple data is easier...
but these differences are entirely in the mind of the contemplator, not
in the data: if we don't _understand_ the data, it will be hard to
contemplate; if we do, it should be easier.

Complex, poorly-understood data requires the mind to make more effort--
and presumably the brain is also expending more calories.

And I'll call this level of effort the 'contemplation factor', with zero
being the easiest and ten the most taxing.

The process of _scientific inquiry_ always aims to improve (lower) the
contemplation factor for a given class of phenomena.

But the process of constructing _sentences about data_ in natural
language should include an attempt to optimise the contemplation-factor
of the resulting sentence-- which is necessarily closely tied to the
internal contemplation-factor of the speaker.

Eg: if I don't understand the data, I can't possibly articulate it
clearly... but sometimes even when I do understand it, words fail me.

And in these cases, I have to do a sort of meta-contemplation that
includes arranging and rearranging all the available words...

I think the Cyc ontology [1] fails because its most abstract layers
aren't easier to contemplate than its deeper layers-- they're so
abstract they're actually harder to contemplate!

The simplest concepts for the mind are person-place-thing, so these
should be the ontological roots.

[1] http://www.robotwisdom.com/ai/cyc.html
--
http://www.robotwisdom.com/ "Relentlessly intelligent
yet playful, polymathic in scope of interests, minimalist
but user-friendly design." --Milwaukee Journal-Sentinel

Lionel Bonnetier

Jun 23, 2001, 4:12:09 PM
Jorn Barger wrote:

> Eg: if I don't understand the data, I can't possibly articulate it
> clearly... but sometimes even when I do understand it, words fail me.
>
> And in these cases, I have to do a sort of meta-contemplation that
> includes arranging and rearranging all the available words...

Do you see the meta as one layer above, or something
different? Do you see the words as external, or entangled
with the things you "contemplate"? How do you picture
"arranging the words"? What parallel would you draw
between what you describe here and speechless spiritual
contemplation?


> I think the Cyc ontology [1] fails because its most abstract layers
> aren't easier to contemplate than its deeper layers-- they're so
> abstract they're actually harder to contemplate!
>
> The simplest concepts for the mind are person-place-thing, so these
> should be the ontological roots.

Why should the abstract layers be easier to contemplate?
Is language closer to the abstract layers? Why? By
"simplest concepts at the ontological roots", do you mean
less abstract? Is "thing" less abstract than "rock" or
"rain" or "pine tree"?


Jorn Barger

Jun 24, 2001, 3:47:03 AM
Lionel Bonnetier <spamk...@kill.spam> wrote:
> > Eg: if I don't understand the data, I can't possibly articulate it
> > clearly... but sometimes even when I do understand it, words fail me.
> > And in these cases, I have to do a sort of meta-contemplation that
> > includes arranging and rearranging all the available words...
>
> Do you see the meta as one layer above, or something
> different?

'above' is a plausible metaphor, but 'meta' is more like _adding_ a new
set of data (so you have to 'stand back' to see everything).

> Do you see the words as external, or entangled
> with the things you "contemplate"?

Some are deeply entangled, but when you get stuck in inarticulacy you
need to explore new words from 'outside'.

> How do you picture "arranging the words"?

Eg James Joyce: "Perfume of embraces all him assailed. With hungered
flesh obscurely, he mutely craved to adore."

Composing easy-to-contemplate sentences isn't just about arranging,
though.

> What parallel would you draw
> between what you describe here and speechless spiritual
> contemplation?

Most contemplation is speechless. Eg, Montessori theory emphasizes the
need for children to get deeply absorbed in playing by themselves,
sometimes.

And 'spiritual' is just a matter of degree, I think-- but a
contemplation-factor of zero might always include an element of 'cosmic
consciousness'.

> > I think the Cyc ontology [1] fails because its most abstract layers
> > aren't easier to contemplate than its deeper layers-- they're so
> > abstract they're actually harder to contemplate!
> > The simplest concepts for the mind are person-place-thing, so these
> > should be the ontological roots.
>

> Why should the abstract layers be easier to contemplate?

Because you can't build something simple out of complex parts.

Here's a bit from Cyc's ontology:

Thing
Intangible
IndividualObject
Event
Stuff
Process
SomethingExisting
TangibleObject
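
For concreteness, a toy Python sketch of what rooting an ontology in
person-place-thing might look like as a plain data structure, with depth
below the root as a crude proxy for abstraction -- everything below the
three roots is invented for illustration, and none of it is taken from Cyc:

    # Toy ontology with person, place, and thing as the roots.  Everything
    # below the roots is invented purely for illustration.
    ONTOLOGY = {
        "person": {"parent": None},
        "place":  {"parent": None},
        "thing":  {"parent": None},
        "rock":   {"parent": "thing"},
        "tool":   {"parent": "thing"},
        "hammer": {"parent": "tool"},
        "city":   {"parent": "place"},
    }

    def path_to_root(concept):
        # Walk parent links up to a root; path length is a crude proxy
        # for how abstract a concept is.
        path = [concept]
        while ONTOLOGY[path[-1]]["parent"] is not None:
            path.append(ONTOLOGY[path[-1]]["parent"])
        return path

    print(path_to_root("hammer"))   # ['hammer', 'tool', 'thing']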

> Is language closer to the abstract layers? Why?

I'd say no-- eg, for AntiMath I had to revert to symbols.
http://www.robotwisdom.com/ai/antimath.html

The plus-sign means 'thing' but it's more flexible, I think, because
'thing' has superfluous connotations associated with its five
letter-shapes and/or its phonemes.

> By "simplest concepts at the ontological roots", do you mean
> less abstract?

To some extent-- abstractions are usually harder to master than concrete
ideas.

> Is "thing" less abstract than "rock" or "rain" or "pine tree"?

'thing' is more abstract, but it's a very intuitive abstraction, unlike
'tangible existing process'.

Melanie Baran

Jun 24, 2001, 3:44:12 PM
Does this idea not correspond with the idea of propositional modularity?
I.e., nouns (person/place/thing) are the basic modules that are strung
together via more complex items (such as prepositions, perhaps),
and finally the entire network represents the whole "idea" -- essentially, the
entire network represents what is most abstract.
Am I correct in saying that your idea of person/place/thing (nouns) being
the roots fits with the idea of propositional modularity?

Jorn Barger

Jun 24, 2001, 3:52:37 PM
Melanie Baran <melani...@sympatico.ca> wrote:
> Does this idea not correspond with the idea of propositional modularity?
> ie: nouns (person/place/thing) are the basic modules that are strung
> together via the usage of more complex items (such as prepositions, perhaps)

See: http://www.robotwisdom.com/ai/thicketfaq.html

I prefer 'stories' to 'propositions', but they definitely require
complex relationships between various simple elements like
person-place-thing.

> and finally the entire network represents the whole "idea"-essentially, the
> entire network represents what is most abstract.

No, relationships are indexed in very specific places on a 'fractal'
hierarchy. The specificity is what makes them quickly retrievable-- it
uses indexing, not search. (And there's nothing connectionist about
it.)
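
A rough Python sketch of the 'indexing, not search' idea -- the path scheme
below is invented for illustration, not the thicket's actual layout:

    # Sketch of 'indexing, not search': every relationship is filed under an
    # explicit address (a path of simple elements), so retrieval is a direct
    # lookup rather than a scan.  The path scheme is invented for illustration.
    index = {}

    def file_relationship(path, relationship):
        index[tuple(path)] = relationship

    def retrieve(path):
        return index.get(tuple(path))          # direct lookup, no searching

    file_relationship(["person", "goes-to", "place"], "simple travel story")
    file_relationship(["person", "takes", "thing", "to", "place"], "delivery story")
    print(retrieve(["person", "goes-to", "place"]))   # 'simple travel story'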

Charles

Jun 24, 2001, 3:55:23 PM
Melanie Baran wrote:
>
> Does this idea not correspond with the idea of propositional modularity?
> ie: nouns (person/place/thing) are the basic modules that are strung
> together via the usage of more complex items (such as prepositions, perhaps)
> and finally the entire network represents the whole "idea"-essentially, the
> entire network represents what is most abstract.
> Am I correct in saying that your idea of person/place/thing (nouns) being
> the roots fits with the idea of propositional modularity??

That's just rehashed Aristotle. Amerindian languages disprove it.

Xee

Jun 29, 2001, 10:32:38 PM
Language is probably closer to the abstract layer, simply because a
thought can be expressed in many (any) languages equally well. To think
that language is anything but abstract is, imho, absurd.

-Xee

Xee

Jun 29, 2001, 10:48:15 PM
I doubt that thought is based on modules that are strung together. I
think that an idea is a relationship of various concepts, which are the
relationships of yet other concepts.

Imagine, if you will, a tangled web (mess) of strings. Each string is
stretched vertically, but at various points it is twisted around
another string. Each point may be a connection to any other string.
There are no real "rules" as to which strings can interact. For this
example, let us assume that the strings are all identical and that
twisting any two together at any point is just as good as any other
two at any other point.

Now, back to my point: each twisting, or intersection, of two strings
is a concept. An entire string, defined not by itself but by its
relations (twistings) with other strings, is an idea. This provides a
much better relational model of the mind. I could draw you a picture
if it would help, but I hope that my description is suitable.

So, if you wanted to define a concept (noun, verb, adjective, adverb,
anyTHING), you would do it as an intersection of ideas; but if you
wanted to define an idea, you would do it as a linking of concepts. To
take one single idea and show it off alone, away from all the others:
it would look like a branch of a tree, kind of like a line bent in a
bunch of places. It is the bends (angle, direction, etc.), and their
exact placement, that make that idea (string) unique from all the
others. As for how the strings got there in the first place, or
what's at their ends -- that's not important for this example. Or
maybe it is, and that's just the magic that is consciousness.
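
One way to write this string picture down as a data structure, purely as an
illustrative Python sketch:

    # Each Idea is a string; a Concept is a twist, i.e. a point where exactly
    # two strings intersect.  An idea is characterized only by its twists.
    class Idea:
        def __init__(self, name):
            self.name = name
            self.twists = []                   # concepts this string passes through

    class Concept:
        def __init__(self, idea_a, idea_b, position):
            self.ideas = (idea_a, idea_b)
            self.position = position           # where along the strings the twist sits
            idea_a.twists.append(self)
            idea_b.twists.append(self)

    rain, weather = Idea("rain"), Idea("weather")
    Concept(rain, weather, position=3)
    print([(c.ideas[0].name, c.ideas[1].name) for c in rain.twists])   # [('rain', 'weather')]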

-Xee

Charles

Jun 30, 2001, 12:12:14 AM
I really try to avoid philosophy, but ...

Xee wrote:

> I doubt that thought is based on modules that are strung together. I
> think that an idea is a relationship of various concepts,

I distinguish between the act and the object of thought.
It may be Aristotelian/naive, but seems to work just fine.
Everything else is "merely" the mathematics of relations,
which you map geometrically:

> each twisting, or intersection of two strings is a concept

A nice visualization. Of course there may be others as well.
ISTM a string of modules might be effective, without being
the only or necessarily best way to do it. I like multiple
independent heterogeneous agents for political reasons.
And I think the all-in-one approach is much tougher.

If I had a sig, it would be: "Let 1000 flowers bloom."

Jorn Barger

Jun 30, 2001, 8:58:38 AM
Charles <ca...@spam.com> wrote:
> If I had a sig, it would be: "Let 1000 flowers bloom."

From your rhetorical style, I would have guessed the opposite.

Lionel Bonnetier

Jun 30, 2001, 9:28:49 AM
Xee wrote:

> Language is probably closer to the abstract layer, simply because a
> thought can be expressed in many (any) languages equally well. To think
> that language is anything but abstract is, imho, absurd.

Keeping within the realm of structure metaphors, how would
you picture language expression in a multilingual person's
mind? In particular, how do you implement language in your
string model of cognition? Similar network models have been
used by several systems, but no one, as far as I know, was
able to produce language directly from the structure, all
had to add extra algorithms, like grafting a calculator to
a brain.

(Sorry if I'm focused on language, I'm on comp.ai.nat-lang.
I overlooked that Jorn had crossposted to sci.cognition and
I forgot to trim, now I'm split on the twin path).


Charles Moeller

Jun 30, 2001, 1:41:36 PM
In article <Xuk%6.2216$A51.5...@monolith.news.easynet.net>, "Lionel
Bonnetier" <spamk...@kill.spam> writes:

>Xee wrote:
>
>> Language is probably closer to the abstract layer, simply because a
>> thought can be expressed in many (any) languages equally well. To think
>> that language is anything but abstract is, imho, absurd.
>
>Keeping within the realm of structure metaphors, how would
>you picture language expression in a multilingual person's
>mind? In particular, how do you implement language in your
>string model of cognition? Similar network models have been
>used by several systems,

>but no one, as far as I know, was
>able to produce language directly from the structure, all
>had to add extra algorithms, like grafting a calculator to
>a brain.

Yes, most temporal logics need "bolt-on" characteristics
that really do not and can not do the job in the final analysis.

IMHO, I have produced language from "dynamic process"
(which is definitely structured), both in space and in time.
My "natural language" description then can be exchanged
for logic elements that can actually do (or monitor) the process
in real time. An example can be shown in my solution to the
Yale Shooting Problem, which to my knowledge, has not been
truly solved in conventional logics.

Regards,
Charlie (cmoe...@aol.com)

Lionel Bonnetier

Jun 30, 2001, 11:10:48 PM
Charles Moeller wrote:

> >but no one, as far as I know, was
> >able to produce language directly from the structure, all
> >had to add extra algorithms, like grafting a calculator to
> >a brain.
>
> Yes, most temporal logics need "bolt-on" characteristics
> that really do not and can not do the job in the final analysis.

Not sure if time is as important as structure in
language production, but maybe you mean sequence
in this particular case?


> IMHO, I have produced language from "dynamic process"
> (which is definitely structured), both in space and in time.
> My "natural language" description then can be exchanged
> for logic elements that can actually do (or monitor) the process
> in real time. An example can be shown in my solution to the
> Yale Shooting Problem, which to my knowledge, has not been
> truly solved in conventional logics.

I still find only the abstract of your paper on
the conference site. Maybe you will publish only
in paper book?


Xee

Jul 1, 2001, 2:17:10 AM

Lionel Bonnetier wrote:
>
> Xee wrote:
>
> > Language is probably closer to the abstract layer, simply because a
> > thought can be expressed in many (any) languages equally well. To think
> > that language is anything but abstract is, imho, absurd.
>
> Keeping within the realm of structure metaphors, how would
> you picture language expression in a multilingual person's
> mind? In particular, how do you implement language in your
> string model of cognition? Similar network models have been
> used by several systems, but no one, as far as I know, was
> able to produce language directly from the structure, all
> had to add extra algorithms, like grafting a calculator to
> a brain.

Why does language need to come directly from this structure? The
structure is merely a relational model of ideas and concepts. Language
is an abstract tool used to describe these relations. Since this is
cross posted to an AI newsgroup, why not use a computer analogy...

The structure I describe is the "back-end" database that the mind uses
for information storage and retrieval. Language may be just one type of
facility that reads from and writes to this database. More languages
may just be more independent agents who have access to the database.
:) I learn something in German, and my "German Language Processing
Facility" translates the language into relations between raw cognitive
materials: concepts. Then, if I need to teach what I've recently
learned to someone speaking Japanese, my "Japanese Language Processing
Module" would read the appropriate idea(s), translating the raw
concepts into flowing Japanese speech.
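
A minimal Python sketch of that back-end-database analogy -- the 'parsing'
below is a fake stand-in, and only the shape of the architecture is the point:

    # One shared store of concept relations, plus per-language modules that
    # write to and read from it.  The 'parsing' is a fake stand-in; only the
    # shape of the architecture is the point.
    concept_store = set()                      # relations between raw concepts

    class LanguageModule:
        def __init__(self, name):
            self.name = name

        def learn(self, sentence):
            # pretend-parse: store each word as a bare 'concept'
            concept_store.update(sentence.lower().split())

        def explain(self):
            # pretend-generate: verbalize whatever concepts are stored
            return "[" + self.name + "] " + ", ".join(sorted(concept_store))

    german = LanguageModule("German")
    japanese = LanguageModule("Japanese")
    german.learn("Der Hund bellt")             # written via the German module
    print(japanese.explain())                  # read back via the Japanese module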

And another thing: maybe you should look into (if you haven't already)
Marvin Minsky's "The Society of Mind", or Ben Goertzel's 'Psynet' model
of the mind. These models are strikingly similar; they both involve
multiple agents interacting to form the mind as a whole. I don't see
this as being different from "grafting a calculator to a brain".
Actually, I tend to see the brain as a whole mess of calculators all
grafted together (which is literally what Ray Kurzweil is after).

-Xee

Resources:
Ben Goertzel's site: www.goertzel.org
Marvin Minsky's book: The Society of Mind (ISBN: 0671657135)
Ray Kurzweil's book: The Age of Spiritual Machines (ISBN: 0140282025)

Xee

Jul 1, 2001, 2:52:26 AM

> A nice visualization. Of course there may be others as well.

thanks... yes, of course.

> ISTM a string of modules might be effective, without being
> the only or necessarily best way to do it.

Yes, exactly. This allows for similes, metaphors, and analogies as
well as confusion and ambiguity.

> I like multiple
> independent heterogeneous agents for political reasons.
> And I think the all-in-one approach is much tougher.

Me too. I made that "string" thing up to demonstrate the abstractness
of language -- not as a model of consciousness. I'm not sure how I'd
incorporate it into a model of consciousness, though. I think it would
fit together nicely, since they are not modeling the same things.

Are consciousness and memory one and the same? Or do they work together
much as the independent heterogeneous agents do in their model of
consciousness? I haven't seen memory addressed (pun intended) by the
'agents' model of consciousness. --pause-- I just dusted off The Society
of Mind, and looked up Memory (15.3 and 15.8) to find that Minsky proposes
that the agents (for the most part) do their own memory management.
This seems awkward, and counterintuitive to Minsky's overall scheme
(which I'm a big fan of). I would've guessed that he'd have "Memory
Management" agents to manage memories on behalf of any agent that needs
to store or retrieve something. I figure that my "strings" would be a
model of what these Memory Management agents accessed.

-Xee

Lionel Bonnetier

Jul 1, 2001, 3:43:47 AM
Xee wrote:

> Why does language need to come directly from this structure? The
> structure is merely a relational model of ideas and concepts. Language
> is an abstract tool used to describe these relations. Since this is
> cross posted to an AI newsgroup, why not use a computer analogy...
>
> The structure I describe is the "back-end" database that the mind uses
> for information storage and retrieval.

Like Cyc and similar, but (see below)


> Language may be just one type of
> facility that reads from and writes to this database.

Language seems to be much more than just a facility
-otherwise natural language programs would do much
better with their silly Chomskyan algorithms. It
looks like language is as complicated as the string
model you describe.


> And another thing, maybe you should look into (if you havn't already)
> Marvin Minsky's "The Society of Mind", or Ben Goertzel's 'Psynet' model
> of the mind. These models are strikingly similar; they both involve
> multiple agents interacting to form the mind as a whole. I don't see
> this as being different from "grafting a calculator to a brain".
> Actually, I tend to see the brain as a whole mess of calulators all
> graphted together (which is literally what Ray Kurzweil is after).

By "calculator on a brain" I meant that current systems
forget all the suppleness of node models and get back into
decades-old algorithms, like putting bicycle wheels to a
jaguar. See what I mean?


Charles

Jul 1, 2001, 5:01:43 AM
Xee wrote:

Interesting post, you should do a bunch more.
Here are my tangential thoughts ...

> I made that "string" thing up to demonstrate the abstractness
> of language - not as a model of consciousness.

> I'm not sure how I'd
> incorporate it into a model of consiousness, though.

As the Platonic ideals that we *think* we think with, but actually
only can try to approach as a target or goal? Geometry is somehow
intuitively pleasing, and works too well to be "coincidental".
It seems to map almost directly into our visual perception.

I'd guess it's possibly isomorphic to SQL, which is a true subset
of natural language (via 1st-order predicate logic). Math is much
simpler and more precise than fuzzy language, and if it turns out
to be easier to implement AI that way (using crude brute-force
mechanistic technique), then that's just the way it goes. But
I think we need more besides, or at least different formulations
for different tasks.

My only advice is to go ahead and implement several different schemes,
expecting to integrate them as networked agents. If some *magical*
component is needed, somebody will add it eventually, but we shouldn't
wait for a complete and perfect design before building the parts we
already know will be needed. Nature didn't wait; she built reptiles
first and then mammals as an improvement to them.

> Are consciousness and memory one in the same? Or, do they work together
> much as the independant heterogeneous agents do in their model of
> consciousness? I haven't seen memory addressed (pun) by the 'agents'
> model of consciousness. --pause-- I just dusted off The Society of
> Mind, and looked up Memory (15.3 and 15.8) to find that Minsky proposes
> that the agents (for the most part) do their own memory management.
> This semes awkward, and counterintuitive to Minksy's overall scheme
> (which I'm a big fan of).

Minsky is eclectic (which is good), but his ideas don't always mesh (ISTM).
Some people admire "society of mind", others the "case frames/slots",
but those are really quite distinct ... I am more inspired by Bartlett
about memory, his "schemas" should have become fashionable before now.
I agree with Minsky's idea that memory is distributed, and implemented
in several divergent ways; image memory is unlike episodic memory,
and short term "refresh" memory (the "spotlight" of attention) is
clearly separate from cerebellum, etc. Each brain component does seem
to have its own separate memory. Anyway, centralization is a bad
strategy/design, Nature loves redundancy and fallbacks.

> I would've guessed that he'd have "Memory
> Management" agents to manage memories on behalf of any agent that needs
> to store or retrieve something. I figure that my "strings" would be a
> model of what these Memory Management agents accessed.

Could be, and should be tried. Maybe they are really "schemas".
The idea of bundling data together with its access methods is
currently fashionable, and maybe appropriate there.

What I have reacted against are some who presume to define "the one"
correct standard way; I'm sure we need multiple cooperating but
orthogonal methods. That's where I see language as centrally important,
integrating the agents by symmetric, cooperative communications,
rather than limiting them to rigid (and inevitably false) ontological
commitments, e.g. XML and Cyc.

But many who embrace language become enmeshed in the "logic" of it,
never to emerge from its quicksand of mutability. Language should
be seen as "just" a series of hints between intelligent entities,
to be interpreted situationally and purposefully. Words don't
have meanings, only their speakers do.

Jim Bromer

Jul 1, 2001, 7:29:52 AM

Xee <x...@a.d.p> wrote in message news:3B3EC8AA...@a.d.p...

>
> > I like multiple
> > independent heterogeneous agents for political reasons.
> > And I think the all-in-one approach is much tougher.
>
> Me too. I made that "string" thing up to demonstrate the abstractness
> of language - not as a model of consciousness. I'm not sure how I'd
> incorporate it into a model of consiousness, though. I think it would
> fit together nicely, since they are not modeling the same things.

Language doesn't just use the string. Its usage relies on the fact that a
person will be able to incorporate the ideas being expressed and thereby
create a more complex relation between the objects of the entire story. But
a person can only remember so much information at a time.

It's much easier to read a story in which each character is developed in a
few paragraphs focused around him and where previously mentioned
characteristics significant to a story are sometimes briefly mentioned as
memory refreshers. The same thing is true about the significant aspects of
the story and how the individual characteristics of the story integrate.

However, I think that using geometry and three dimensional objects as
metaphors for thought shouldn't be taken to their metaphoric extremes. The
dimensionality of thought is not three but n.


Jim Bromer

Jul 1, 2001, 1:27:02 PM

Lionel Bonnetier <spamk...@kill.spam> wrote in message
news:_uA%6.2384$A51.6...@monolith.news.easynet.net...

> ,,,better with their silly Chomskyan algorithms.

Radical conservatism?


Bengt Richter

Jul 1, 2001, 3:57:34 PM
On Sun, 1 Jul 2001 09:43:47 +0200, "Lionel Bonnetier"
<spamk...@kill.spam> wrote:

>Xee wrote:
>
>> Why does language need to come directly from this structure? The
>> structure is merely a relational model of ideas and concepts. Language
>> is an abstract tool used to describe these relations. Since this is
>> cross posted to an AI newsgroup, why not use a computer analogy...
>>
>> The structure I describe is the "back-end" database that the mind uses
>> for information storage and retrieval.
>
>Like Cyc and similar, but (see below)
>

Is it possible to ask Cyc if the number of letters in its
first name is a fibonacci number (and get a correct response ;-) ?

What I'm trying to indicate by the example is a question whose
answer must be _generated_ by selecting appropriate procedures
(letter counting, enumeration of fibonacci numbers, comparison)
and applying them to element(s) of internal state and other
procedural results.
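
In Python, a toy version of that generated answer might look like this
(three small procedures composed and applied):

    # Is the number of letters in "Cyc" a Fibonacci number?  Three small
    # procedures composed: letter counting, Fibonacci enumeration, comparison.
    def is_fibonacci(n):
        a, b = 0, 1
        while a < n:
            a, b = b, a + b
        return a == n

    name = "Cyc"
    letters = len(name)                        # letter counting
    print(letters, is_fibonacci(letters))      # 3 True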

This example is very simple, but if you generalize the principle
to the application of broader concepts of procedure and internal
state (including anything a human can attend to), it gets more
interesting.

It may be interesting to consider the dialog that might ensue if
you ask a human the same question. I.e., you are likely to
get a question back as to what a fibonacci number is.

Can Cyc initiate a goal-seeking procedure in a case like this, to find
the necessary procedures (or equivalent 3rd-party agent services)
in order to answer the original question?

(For this example, I think it would be useful if it could initiate
a parallel 3rd-party or n-party search for a definition of a
fibonacci-enumeration algorithm represented in a form that it could
use, or a protocol through which it could get an answer from a third
party doing the computation, with some attributes as to
authoritativeness, etc.)

One of the concepts involved in this is "knowledge" represented by
an unopened book or an unexamined physical object.

In a sense, the whole universe is a partially examined object that
we all can derive knowledge from by exploring and examining. Perhaps
an AI should know that it could potentially improve its model of
the world and/or answer questions about it, by undertaking
exploratory expeditions.

By knowing about its internal model and potential for improving it,
I mean not just a built-in provision for exploring. I mean an internal
model that permits the AI to consider its own existence and its
relation to the world, as well as modeling the fact that its
model is a model. I.e., in a sense understanding its entrapment within
the bubble of its own states. Thus motivations could be defined in terms
of the model's existence and "desirable" futures, instead of just an
appetite.

As with humans, it is probably hard to decide what books to leave
unopened, and what to leave unexamined and unexplored ;-)

>
>> Language may be just one type of
>> facility that reads from and writes to this database.
>

I guess I could see thinking of a linguistic expression
as a dynamically constructed 'view' in database terms.

>Language seems to be much more than just a facility
>-otherwise natural language programs would do much
>better with their silly Chomskyan algorithms. It
>looks like language is as complicated as the string
>model you describe.
>

ISTM the trouble with defining language is that it is hard to talk
about one aspect in isolation from others, and if you succeed by
some clinical dissection in isolating something interesting, it is
necessarily excluding important parts of the living whole, but I
wouldn't sneer at such work.

In my view also, language is much more than "just a facility."
>
[...]

Bengt Richter

Jul 1, 2001, 4:12:45 PM
On Sun, 01 Jul 2001 02:52:26 -0400, Xee <x...@a.d.p> wrote:

>
>
>> A nice visualization. Of course there may be others as well.
>
>thanks... yes, of course.
>
>> ISTM a string of modules might be effective, without being
>> the only or necessarily best way to do it.
>
>Yes, exactly. This allows for similies, metaphors, and analogies as
>well as confusion and ambiguity.
>
>> I like multiple
>> independent heterogeneous agents for political reasons.
>> And I think the all-in-one approach is much tougher.
>
>Me too. I made that "string" thing up to demonstrate the abstractness
>of language - not as a model of consciousness. I'm not sure how I'd
>incorporate it into a model of consiousness, though. I think it would

I'm afraid my news reader didn't get your '"string" thing' -- was it
on another newsgroup? I am just reading c.a.n-l for the AI subject
area. If you would re-post the essentials of the '"string" thing'
here, that would be easiest for me. Maybe some others missed it too.
TIA.


Xee

Jul 1, 2001, 4:33:38 PM
I was on sci.cognitive, and didn't realize at the time that I was
replying to a cross-posted message. So, here's my original...

---from sci.cognitive-----

I doubt that thought is based on modules that are strung together. I
think that an idea is a relationship of various concepts, which are the
relationships of yet other concepts. [...]

Charles Moeller

Jul 1, 2001, 5:01:55 PM
In article <wOw%6.2377$A51.6...@monolith.news.easynet.net>, "Lionel
Bonnetier" <spamk...@kill.spam> writes:

>Charles Moeller wrote:
>
>> >but no one, as far as I know, was
>> >able to produce language directly from the structure, all
>> >had to add extra algorithms, like grafting a calculator to
>> >a brain.
>>
>> Yes, most temporal logics need "bolt-on" characteristics
>> that really do not and can not do the job in the final analysis.
>
>Not sure if time is as important as structure in
>language production, but maybe you mean sequence
>in this particular case?

In a critical use, language describes process.
Structure, if not both in time and in space, has limited meaning
and application. The ordinary logic and its execution are
constrained by the necessary translation of all temporal factors
into the space domain, where they can be "comfortably" operated
upon by static logic operators. The common restriction to static,
spatial logic is the reason that we lack autonomous factories, and
the frame problem can't be solved.

In the Natural Logic (NL) viewpoint, Time is the progression of
events. The perceived sequence of events determine our
understanding of cause-effect and natural laws.
(Physics is one portion of natural laws.)
If real time sequences are removed from our process controllers
(via translation to the space domain) they become inefficient and
unable to perform certain essential dynamic operations (which is
the case at present).

>> IMHO, I have produced language from "dynamic process"
>> (which is definitely structured), both in space and in time.
>> My "natural language" description then can be exchanged
>> for logic elements that can actually do (or monitor) the process
>> in real time. An example can be shown in my solution to the
>> Yale Shooting Problem, which to my knowledge, has not been
>> truly solved in conventional logics.
>
>I still find only the abstract of your paper on
>the conference site. Maybe you will publish only
>in paper book?

The standard operating procedure for the IWLS workshop is for
limited publication (to the attendees and a few other "officials"),
thus reserving the authors' copyrights.
The authors then may expand or revise (or not) their papers and
may submit to another conference/publication. This IWLS mode
is for the purpose of encouraging early (partial) exposure to
new ideas.

I would be willing to state the YSP and describe the solution
as is shown in my paper for the general readership of
comp.ai.nat-lang. Would you like me to do that?

Regards,
Charlie

Marvin mMnsky

Jul 2, 2001, 3:25:19 AM
Xee <x...@a.d.p> wrote in message news:<3B3EC8AA...@a.d.p>...
I just dusted off The Society of
> Mind, and looked up Memory (15.3 and 15.8) to find that Minsky proposes
> that the agents (for the most part) do their own memory management.
> This semes awkward, and counterintuitive to Minksy's overall scheme
> (which I'm a big fan of). I would've guessed that he'd have "Memory
> Management" agents to manage memories on behalf of any agent that needs
> to store or retrieve something. I figure that my "strings" would be a
> model of what these Memory Management agents accessed.

Yes, I don't much like what I said in 15.3. Finding useful memories
needs much better theories than that one. It does make sense for many
resources to have some local ways to cache some memories, and ad hoc
ways to get others that they've found useful—but for hard problems
where you get stuck, we need much better ideas.

As for section 15.8, I can't figure out much about what that diagram
is supposed to mean, and I can't find any old notes about it. I hope
anyone else can. (Still, I think I learned more from the misprints in
N.Wiener's Cybernetics, and the incomprehensible parts of the 1943
McCulloch-Pitts paper, than from the parts that were clear and
correct.)

I think I'll have much better ideas in my new book, "The Emotion
Machine" (some draft chapters of which you can find on my home page).
In later chapters I'll discuss two memory issues that seem important:

1. With Roger Schank, I think that among our most useful memories are
the cleaned-up "stories" and "scripts" that we reformulate after
significant experiences. These are presumably the results of
"credit-assignment" processes that try to 'make sense' of important
events, and then represent them in cleaned up forms.

2. However, the only plausible way to retrieve relevant such
'case-based' memories is by using schemas that search for 'good'
analogies. To do this, one must use the context of your present
problem to determine which features of old ones are likely to be
relevant. In easy cases, this can be done effectively by using
something like the microfeature bus described in S.o.M—but in harder
cases, it takes us a good deal of time and 'deliberation' to figure
out what kinds of records we need—and that sort of process must engage
a good deal of 'the rest of the mind'—which means that some agents have
to engage others for help.
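
A bare-bones Python sketch of retrieval by feature overlap -- far cruder than
the microfeature machinery in S.o.M, and the cases below are invented:

    # Pick the stored 'story' sharing the most features with the present
    # problem.  A crude stand-in for the microfeature idea; cases invented.
    cases = {
        "stuck-drawer story": {"stuck", "wooden", "pull", "jiggle"},
        "flat-tire story":    {"stuck", "roadside", "replace", "tools"},
    }

    def best_analogy(problem_features):
        return max(cases, key=lambda name: len(cases[name] & problem_features))

    print(best_analogy({"stuck", "wooden", "door"}))   # 'stuck-drawer story'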

I don't know what your strings are like, but I'll take a look if
they're on some site. But surely in many cases, the brain will need
to make serially encoded messages for trying to find matches to all
those useful stories and scripts.

Arthur T. Murray

Jul 3, 2001, 12:30:15 AM
min...@media.mit.edu (Marvin mMnsky) wrote on 2 Jul 2001:

[...]


>I don't know what your strings are like , but I'll take a look if
>they're on some site. But surely in many cases, the brain will need
>to make serially encoded messages for trying to find matches to all
>those useful stories and scripts

ATM:
http://www.scn.org/~mentifex/jsaimind.html -- AI in JavaScript --
displays "serially encoded messages" in Troubleshoot mode,
as phonemic engrams stored in the auditory memory channel.

The JavaScript AI tries "to find matches" for any incoming
"serially encoded messages" by letting any matching phoneme
pass activation to the serially next phoneme, for as long
as the two comparand strings of phonemes continue to match.
In the end, only perfect matches are recognized as words
and go on to activate concepts in the deeper areas of the
AI mind.
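
A much-simplified Python sketch of that serial matching (the phoneme codes
and stored engrams below are invented; this is not the actual Mentifex
JavaScript):

    # Activation passes from one phoneme to the next only while the incoming
    # string keeps matching a stored engram; only a complete match is
    # recognized as a word.
    stored_engrams = {"K AE T": "cat", "K AA R": "car"}

    def recognize(incoming):
        for engram, word in stored_engrams.items():
            stored = engram.split()
            matched = 0
            for heard, expected in zip(incoming, stored):
                if heard != expected:
                    break                      # activation stops at the first mismatch
                matched += 1
            if matched == len(stored) == len(incoming):
                return word                    # only perfect matches are recognized
        return None

    print(recognize(["K", "AE", "T"]))         # 'cat'
    print(recognize(["K", "AE"]))              # None (partial match is not accepted)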

Xee

Jul 3, 2001, 3:52:54 AM
Sir, is that a typo in your last name?

-Xee

Lionel Bonnetier

Jul 5, 2001, 12:48:04 PM
Charles Moeller wrote:

> >Not sure if time is as important as structure in
> >language production, but maybe you mean sequence
> >in this particular case?
>
> In a critical use, language describes process.
> Structure, if not both in time and in space, has limited meaning
> and application.

Not neglecting time, I believe language lives in a space
of its own where the usual physical dimensions are not
necessarily the most pertinent ones. Please note that
I'm talking about language production, not general mental
processes. I do believe that time and sequences are of
utmost importance in a ton of cognitive abilities, the
most obvious being the emergence of causality. But
language has a genuine talent for mixing time, space, and
abstract relations, and also has an impish tendency to
jump across industrial chains of production.


> In the Natural Logic (NL) viewpoint, Time is the progression of
> events. The perceived sequence of events determine our
> understanding of cause-effect and natural laws.
> (Physics is one portion of natural laws.)

I agree 100%. Add to that co-temporality, which is an
important basis for recognizing and anticipating things
and events, even when no causality has been extracted.


> I would be willing to state the YSP and describe the solution
> as is shown in my paper for the general readership of
> comp.ai.nat-lang. Would you like me to do that?

I would like to know what your system is, though I'm not
sure I will understand it. But nat-lang may not be the most
appropriate forum for that.


Lionel Bonnetier

Jul 5, 2001, 12:58:35 PM
Bengt Richter wrote:

> Is it possible to ask Cyc if the number of letters in its
> first name is a fibonacci number (and get a correct response ;-) ?

(...)


> Can Cyc initiate a goal-seeking procedure in a case like this, to find
> the necessary procedures (or equivalent 3rd-party agent services)
> in order to answer the original question?

It sounds like this is what Cycorp is after with
their "networked cyc".


> As with humans, it is probably hard to decide what books to leave
> unopened, and what to leave unexamined and unexplored ;-)

Personal trends? What makes us become obsessed with
some particular topic and become an expert in it?


Lionel Bonnetier

Jul 5, 2001, 12:52:57 PM
Jim Bromer wrote:

> > ,,,better with their silly Chomskyan algorithms.
>
> Radical conservatism?

I'm not familiar with US politics. I mean the observed
structures of produced language are not necessarily a
product of a transformational engine. I would rather
imagine some mental alchemy, an emergent thingy, like
the spots and stripes on animals.


Lionel Bonnetier

Jul 5, 2001, 1:03:25 PM
Jim Bromer wrote:

> However, I think that using geometry and three dimensional objects as
> metaphors for thought shouldn't be taken to their metaphoric extremes. The
> dimensionality of thought is not three but n.

There might as well be no dimension at all in thoughts,
if you use a graph model. Graphs have no dimension in
the usual sense of distance measurement.


Lionel Bonnetier

Jul 5, 2001, 1:13:22 PM
Marvin mMnsky wrote:

> I think I'll have much better ideas in my new book. "The Emotion
> Machine" (some draft chapters of which you can find on my home page.
> In later chapters I'll discuss two memory issues that seem important:
>
> 1. With Roger Schank, I think that among our most useful memories are
> the cleaned-up "stories" and "scripts" that we reformulate after
> significant experiences. These are presumably the results of
> "credit-assignment" processes that try to 'make sense' of important
> events, and then represent them in cleaned up forms.
>
> 2. However, the only plausible way to retrieve relevant such
> 'case-based' memories is by using schems that search for 'good'
> analogies. To do this, one must use the context of your present
> problem to determine which features of old ones are likely to be
> relevant. In easy cases, this can be done effectively by using
> something like the microfeature bus described in S.o.M—but in harder
> cases, it takes us a good deal of time and 'deliberation' to figure
> out what kinds of records we need—and that sort of process must engage
> a good deal of 'the rest of the mind'—which means that some agents have
> to engage others for help.

What if we didn't have to search for "the relevant features"
and the selection was done after the swarming memories have
all triggered?


Jim Bromer

Jul 5, 2001, 7:54:24 PM

Lionel Bonnetier <spamk...@kill.spam> wrote in message
news:Dv117.8461$A51.1...@monolith.news.easynet.net...
I misread Xee's original post, but I agree with you that there are no
dimensions in thoughts, because dimension is used as a term to refer to a
measurable field. However, there is some tendency to try to put knowledge
into a scalable arrangement, and a number of natural language search
systems use this sort of method by the way! For example, every 5 years or
so someone reinvents a Pythagorean solution to measure the distance between
a group of search words to compare against a prorated categorical evaluation
assigned to books and articles in a library. It's a neat idea, only, well,
it just doesn't fly.
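
For readers who haven't met it, the oft-reinvented scheme looks roughly like
this in Python (the ratings below are entirely made up):

    # Score each document by the Euclidean distance between its category
    # ratings and the ratings implied by the query; all numbers are made up.
    import math

    library = {
        "Field Guide to Birds": {"nature": 0.9, "history": 0.1},
        "Fall of Rome":         {"nature": 0.1, "history": 0.9},
    }

    def distance(query_profile, doc_profile):
        cats = set(query_profile) | set(doc_profile)
        return math.sqrt(sum((query_profile.get(c, 0) - doc_profile.get(c, 0)) ** 2
                             for c in cats))

    query = {"nature": 0.8, "history": 0.0}    # e.g. the words 'heron', 'marsh'
    print(min(library, key=lambda title: distance(query, library[title])))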

Forty-five years ago my cousin's husband did some research on something they
called the semantic differential. They asked people to rate the
difference between the meanings of related words, and then they averaged the
results out. Although it was interesting, that is just not how we use
words. Attempting to put a value on the semantic difference between a
number of related words just isn't going to be sufficient to formulate how
those words are used.

So, knowing that there will be people who will hear the siren song of
mathematical methods to interpret language, I just wanted to point out that
there are numerous potential associations between ideas, and they don't
usually fan out in some simple and neat arrangement as did the legendary
dimensions of geometry and physics.


Jorn Barger

Jul 6, 2001, 4:40:27 AM
Jim Bromer <jbr...@concentric.net> wrote:
> there are numerous potential associations between ideas, and they don't
> usually fan out in some simple and neat arrangement as did the legendary
> dimensions of geometry and physics.

The overlooked dimension is complexity-- a count of the simple elements
in any relationship. So 'person goes to place' is a simplification of
'person takes thing to place'.

To keep the data structure neat, it has to begin from a simple
hierarchy, with (self-similar) copies of itself added at each node.
This is just the orthogonality principle, applied to trees in the only
way possible. You can keep it manageable by only instantiating the
nodes you need. http://www.robotwisdom.com/ai/thicketfaq.html
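
A small Python sketch of such a self-similar hierarchy with lazy
instantiation -- the three-element template is illustrative, not the full
thicket design:

    # Every node can sprout children drawn from the same simple template, but
    # a node is only created when it is actually needed.
    TEMPLATE = ("person", "place", "thing")

    class Node:
        def __init__(self, label):
            self.label = label
            self.children = {}

        def child(self, label):
            if label not in TEMPLATE:
                raise ValueError("not a simple element: " + label)
            if label not in self.children:     # lazy: instantiate on demand
                self.children[label] = Node(label)
            return self.children[label]

    root = Node("root")
    # 'person takes thing to place' touches one path; only those nodes exist.
    path = root.child("person").child("thing").child("place")
    print(path.label)                          # 'place'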

Jim Bromer

Jul 7, 2001, 10:55:51 AM

Jorn Barger <jo...@enteract.com> wrote in message
news:1ew3ts0.d76ndym2hm0oN%jo...@enteract.com...

> Jim Bromer <jbr...@concentric.net> wrote:
> > there are numerous potential associations between ideas, and they don't
> > usually fan out in some simple and neat arrangement as did the legendary
> > dimensions of geometry and physics.
>
> The overlooked dimension is complexity-- a count of the simple elements
> in any relationship. So 'person goes to place' is a simplification of
> 'person takes thing to place'.
>
> To keep the data structure neat, it has to begin from a simple
> hierarchy, with (self-similar) copies of itself added at each node.
> This is just the orthogonality principle, applied to trees in the only
> way possible. You can keep it manageable by only instantiating the
> nodes you need. http://www.robotwisdom.com/ai/thicketfaq.html

Yes but - can it learn?

I just started reading your web site page, and I am enjoying it. I think
your idea of what you call the fractal thicket (of semantic relationships)
is interesting, although I am skeptical of your belief that your system is
the solution to the problem currently besetting ai programming. Can your
system learn from its attempts to reason? It may be capable of
representing situations derived through abstract processes, but is it capable
of analysis and conclusion? The idea that Cyc has failed to go further only
because of a weakness in its system of representation doesn't seem likely to
me (though I don't really know anything about Cyc).

I think the possibility of using a relatively simple system as a
representation of more complex data is interesting. Can it be modified to
learn and dynamically write new conclusions through interacting with its
environment (with people)?


Jim Bromer

Jul 7, 2001, 11:14:22 AM

Lionel Bonnetier <spamk...@kill.spam> wrote in message
news:Cv117.8459$A51.1...@monolith.news.easynet.net...

I assumed you were from the US. I was just joking.

I just learned that Chomsky's transformational syntax established that the
syntactic rules of language could be constructed through computable
functions. I think that he suggested that there would have to be an
additional filtering system to filter out over-generated strings, but that
too was computable. Are you aware that there is a theory that the growth of
the spots and stripes on animals is computable as well?

Chomsky's language transformations can be transformed into abstractions that
can analyze sentences using rules that may not have been used in the earlier
algorithms that you mentioned.

My one criticism of contemporary ai is that the systems that are capable of
sophisticated representations aren't capable of interactive learning.


Jorn Barger

Jul 7, 2001, 11:24:48 AM
Jim Bromer <jbr...@concentric.net> wrote:
> > http://www.robotwisdom.com/ai/thicketfaq.html
> Yes but - can it learn?

For me, hearing AI people claim they can master human psychology by
building a learning machine is as likely as a ten-year-old saying he'll
avoid homework by building a homework-robot...

Lionel Bonnetier

Jul 7, 2001, 1:10:40 PM
Jim Bromer wrote:

> I just learned that Chomsky's transformational syntax established that the
> syntactic rules of language could be constructed through computable
> functions. I think that he suggested that there would have to be an
> additional filtering system to filter out over generated strings, but that
> too was computable. Are you aware that there is a theory that the growth of
> the spots and stripes on animals are computable as well?

Yes, I've seen pigment distribution done with cellular
automata techniques. I've done pseudo-fractal geo map
generation the same way. And many biological processes
are simulated with fair accuracy. The issue is not
computability, but the computation model: as far as I
can tell, transfo grammar bears very little resemblance
to the alife (artificial life) models, which would be
more likely to reflect mental processes. Chomskyan
researchers struggle to fit complex psychological
things into their model --how we choose between active
and passive constructions, why we sometimes swap words
by mistake, etc.-- but there are too many of those.
Ordinary lapsus linguae, for instance, could benefit
from a swarm model where forces compete to pop out into
speech.
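
As a small, well-known example of the kind of cellular-automaton patterning
mentioned above (illustrative only, not the code referred to), the elementary
'rule 30' automaton in Python:

    # The elementary 'rule 30' cellular automaton: each cell's next state is
    # read off from the rule's bits, indexed by its three-cell neighborhood.
    def step(cells, rule=30):
        n = len(cells)
        return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                for i in range(n)]

    row = [0] * 31
    row[15] = 1                                # a single 'pigmented' cell
    for _ in range(12):
        print("".join(".#"[c] for c in row))
        row = step(row)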


> Chomsky's language transformations can be transformed into abstractions that
> can analyze sentences using rules that may not have been used in the earlier
> algorithms that you mentioned.

Are you referring to the plug-ins such as actant
availability? Or some meta-model?


> My one criticism of contemporary ai is that the systems that are capable of
> sophisticated representations aren't capable of interactive learning.

Maybe because the representations used are just that:
representations. Whereas the systems that are best at
learning and acting (i.e. NNs) are the ones we whine about
being black boxes, out of which no analytical
representation can be extracted :) The day we finally
make that super-AI, we may have to accept that we
f***ing don't know how it works in detail.


Charles

Jul 7, 2001, 2:42:15 PM
Lionel Bonnetier wrote:

> Usual lapsus linguae, for instance, could benefit from a
> swarm model where forces compete to pop out into speech.

Optimality Theory.

Jim Bromer

Jul 8, 2001, 9:09:36 AM

Jorn Barger <jo...@enteract.com> wrote in message
news:1ew67gd.1u5b55v1bph6hfN%jo...@enteract.com...

> Jim Bromer <jbr...@concentric.net> wrote:
> > > http://www.robotwisdom.com/ai/thicketfaq.html
> > Yes but - can it learn?
>
> For me, hearing AI people claim they can master human psychology by
> building a learning machine is as likely as a ten-year-old saying he'll
> avoid homework by building a homework-robot...

I am not sure that mastering human psychology is a focal issue of ai
enthusiasts. However, I do believe that contemporary theories that will be
used to push ai past its current barriers will help illuminate psychological
phenomena.

The idea is that we can take functions that have been non-computational in
the past and turn them into computer programs. As more processes are
realized, the entire range of activities that a computer can do will become
more sophisticated.

We use whatever we can to help discover strategies of creative thought. The
suggestion that computers can not be programmed to learn and integrate new
ideas is not valid. It should be obvious that it would be easy to improve
on current models of learning.

These programs of the near future will not work perfectly because they will
be innovative. Sometimes you can't break through an obstacle, you have to
take the long way around it. And so the next generation of ai programs will
make mistakes. With judgement comes error.


Jorn Barger

Jul 8, 2001, 9:46:49 AM
Jim Bromer <jbr...@concentric.net> wrote:
> I am not sure that mastering human psychology is a focal issue of ai
> enthusiasts.

Yeah, I'm the lonely voice of reason!

*All* indexing schemes founder on human psych, especially motivation
(imho).

> However, I do believe that contemporary theories that will be used to push
> ai past its current barriers will help illuminate psychological
> phenomena.

This is true, and important, for cognitive psych. But cognitive models
offer very little to motivational psych, which evolved slowly and
painfully via billions of years of natural selection. (It's not about
propositions and rules of inference!)

Lionel Bonnetier

Jul 8, 2001, 11:54:59 AM
Charles wrote:

Me seems OT describes long-term adaptations, how
distributions and structures freeze in usage, not
individual productions, variations, events, which
depend on temporary motivations, state of mind.

Yet the usual structures do emerge at some moment
and the process is driven by particular forces
that become "statistical" only afterwards.

A big difference between deeply felt expressions
and OT structures shows in the fact they can split
from each other with no return -- the "me seems"
I used experimentally above is an illustration of
the phenomenon.

I'll have to study more, but for the time being I
perceive things that way: transform grammar is a
subset of optim theory, and optim theory is a
subset of a motivational swarm theory yet to be
described. My two cent head scratching...


Jim Bromer

Jul 8, 2001, 12:45:35 PM

Lionel Bonnetier <spamk...@kill.spam> wrote in message
news:TmH17.9125$A51.1...@monolith.news.easynet.net...

> Jim Bromer wrote:
> > Chomsky's language transformations can be transformed into abstractions that
> > can analyze sentences using rules that may not have been used in the earlier
> > algorithms that you mentioned.
>
> Are you referring to the plug-ins such as actant
> availability? Or some meta-model?


I look at ideas that can be used as components of algorithms or which can be
used as design models. But that means that I can modify them as well, so we
may be talking about different aspects of the same idea.

>
> > My one criticism of contemporary ai is that the systems that are capable of
> > sophisticated representations aren't capable of interactive learning.
>
> Maybe because the representations used are just that:
> representations. Whereas the systems that are best at
> learning and acting (i.e. NN) are those we whine that
> they are black boxes out of which no analytical
> representation can be extracted :) The day we finally
> make that super-AI, we may have to accept that we
> f***ing don't know how it works in detail.

All computer programs are rule-based programs - even neural networks. I
believe that highly abstract principles of learning can be used in a
rule-based ai model in which knowledge will emerge through learning. And just as
language must be represented through some sort of symbolization, effective
knowledge must also be represented with some kind of symbols. They may not
be understandable on inspection.

One of the failures of psychology was that the efforts to apply theory to
education were more insipid than inspiring. By examining the failures of
old ai - including neural networks - we can see that the old theories of
learning have been seriously inadequate. The next generation of ai may seem
like a giant step backwards because with the capability of judgement comes
the capability of and the responsibility for error. But regardless of the
success of ai development, learning theory will undergo a significant step
forward.

The educational methods of the twenty first century won't usher in an era of
totalitarian educational mind control. With increased educational potential
comes increased potential for individual and group wisdom. And wisdom is
the best bet in the house.


Jim Bromer

Jul 9, 2001, 1:45:07 PM

Jorn Barger <jo...@enteract.com> wrote in message
news:1ew7xkb.ivu7zr1sqpnwcN%jo...@enteract.com...
> Jim Bromer <jbr...@concentric.net> wrote:

> *All* indexing schemes founder on human psych, especially motivation
> (imho).
>
> > However, I do believe that contemporary theories that will be used to push
> > ai past its current barriers will help illuminate psychological
> > phenomena.
>
> This is true, and important, for cognitive psych. But cognitive models
> offer very little to motivational psych, which evolved slowly and
> painfully via billions of years of natural selection. (It's not about
> propositions and rules of inference!)

I think the programming effectively creates an artificial motivation for the
program. Although a program can't have the same range of experiences that a
person can have, it can be programmed to try to learn. I believe that
learning is a complex process. By developing more sophisticated learning
algorithms, contemporary ai will seem more complex. This sort of
sophistication - if feasible - will create a more sophisticated artificial
motivation.


Lionel Bonnetier

Jul 9, 2001, 2:44:06 PM
Jim Bromer wrote:

> I think the programming effectively creates an artificial motivation for the
> program. Although a program can't have the same range of experiences that a
> person can have, it can be programmed to try to learn. I believe that
> learning is a complex process. By developing more sophisticated learning
> algorithms, contemporary ai will seem more complex. This sort of
> sophistication - if feasible - will create a more sophisticated artificial
> motivation.

The relations between learning abilities and motivations are
not very clear. Curiosity is often considered a motivation
--if not an instinct-- but the things may be more complicated
in the detail.

If we look for instance at the limited but tangible results
of Gerald Edelman's works ("Bright Air, Brilliant Fire" and
other more technical books) we can observe the emergence of
complex motivations out of fundamental "values", while the
learning abilities are inborn abilities of the artificial
mind that apply to whatever the sensory modules and self-
observation modules are focused on.

An example of an evolved motivation: "I should move my arm so
as to touch the object in my visual field". This wasn't an
inborn motivation. The inborn "values" were: "It is good to
move my arm", "It is good to touch something", "It is good to
move my eye", and "It is good to have something in the visual
field". Edelman's robot learned that by moving its arm it
could touch things, and how to move its arm coherently, and
how to move its eye to follow things.
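
A toy Python sketch of learning driven by inborn 'values' in this spirit --
everything numeric below is invented, and it is not Edelman's actual model:

    # Random motor babbling, with action tendencies reinforced whenever an
    # innately valued outcome (touching the object) occurs.
    import random

    values = {"touched_object": 1.0}           # inborn: 'it is good to touch something'
    tendency = {"reach_left": 0.0, "reach_right": 0.0}
    object_side = "right"

    for _ in range(200):
        action = max(tendency, key=lambda a: tendency[a] + random.random())
        if action.endswith(object_side):       # the value signal strengthens the habit
            tendency[action] += values["touched_object"]

    print(tendency)                            # 'reach_right' ends up strongly preferred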

Jim Bromer

Jul 10, 2001, 11:19:20 PM

Lionel Bonnetier <spamk...@kill.spam> wrote in message
news:WXm27.13794$Mk7.7...@nnrp6.proxad.net...

> The relations between learning abilities and motivations are
> not very clear. Curiosity is often considered a motivation
> --if not an instinct-- but the things may be more complicated
> in the detail.
>
> If we look for instance at the limited but tangible results
> of Gerald Edelman's works ("Bright Air, Brilliant Fire" and
> other more technical books) we can observe the emergence of
> complex motivations out of fundamental "values", while the
> learning abilities are inborn abilities of the artificial
> mind that apply to whatever the sensory modules and self-
> observation modules are focused on.
>
> An example of an evolved motivation: "I should move my arm so
> as to touch the object in my visual field". This wasn't an
> inborn motivation. The inborn "values" were: "It is good to
> move my arm", "It is good to touch something", "It is good to
> move my eye", and "It is good to have something in the visual
> field". Edelman's robot learned that by moving its arm it
> could touch things, and how to move its arm coherently, and
> how to move its eye to follow things.
>

That is very interesting, and it shows how high-level associations can be
used to represent real actions. On the other hand, references to other ideas
can also be considered real.

Thanks, I think I may read the book.

