
Darwin, Dennett, Algorithms, Axioms


Kenneth Colby

Aug 26, 1995

In his new book Darwin's Dangerous Idea (recommended), Dennett
proposes that natural selection, as an explanation of evolution,
be viewed as an algorithmic process, a "mindless" sort of
process without point or purpose.

In AI, the term "algorithm" is used in at least two ways: (1) an
idealized abstract theory-type, and (2) an up-and-running computer
program that embodies the algorithm of (1). But such algorithms
have a desired outcome, a purpose, a telos, a goal to be achieved.

Dennett's exclusion of purpose from his algorithmic explanation
brings the idea of natural selection closer to an axiom than
an algorithm in the AI sense. Individual organisms have purposes
but the overall process of evolution would not have a goal or
purpose if Darwin's idea is taken as an axiom with testable
consequences.
KMC

Neil Rickert

Aug 26, 1995
In <41nnng$4...@oahu.cs.ucla.edu> co...@oahu.cs.ucla.edu (Kenneth Colby) writes:

> In his new book Darwin's Dangerous Idea (recommended), Dennett
> proposes that natural selection, as an explanation of evolution,
> be viewed as an algorithmic process, a "mindless" sort of
> process without point or purpose.

I haven't read DDI yet, although it is on my list for future
reading.

It is difficult to see how one can talk about an algorithmic process
without a purpose. Mindless algorithms cause no difficulty, and some
would say that all computer algorithms are mindless. Perhaps
mindlessness is a requirement for an algorithm, although the
algorithm itself might generate a mind.

> In AI, the term "algorithm" is used in at least two ways (1) an
> idealized abstract theory-type and (2) an up-and-running computer
> program that embodies the algorithm of (1). But such algorithms
> have a desired outcome, a purpose , a telos, a goal to be achieved.

I completely agree.

It seems to me that the purpose of evolution is to occupy every
possible ecological niche. I do not mean to imply that evolution is
consciously seeking to accomplish that purpose. It is mindless (i.e.
non-conscious) but is apparently purposeful. Or, to say it
differently, we might find it useful to interpret evolution as having the
suggested purpose in order to allow us to interpret it as being an
algorithmic process.

> Dennett's exclusion of purpose from his algorithmic explanation
> brings the idea of natural selection closer to an axiom than
> an algorithm in the AI sense. Individual organisms have purposes
> but the overall process of evolution would not have a goal or
> purpose if Darwin's idea is taken as an axiom with testable
> consequences.

I think we need to distinguish between subjective purposes and
objective purposes. The purposes and goals that I pursue are
subjective or internal purposes, in that they are the product of my
conscious mind. However, viewing humans as an evolved species, we have
an objective or externally imposed purpose of supporting the
continuation of the species.


Oliver Sparrow

Aug 27, 1995
One has to be very careful of words such as "purposeful" and, indeed,
"algorithmic". Gas molecules will rush through a pinhole in a vessel until
pressures equalise. Few would regard this as a process which is usefully
described in the common meaning of either of those two words, however.
Why?

I suggest that by "algorithmic" we mean that something is captured in a body
of explicitly stated rules. Whilst the playing out of the switches and
information in the genome is entirely algorithmic, statistical processes - such
as pertain in the example of the pressure vessel - are not. Genes are not
purposeful, however, any more than are gas molecules: indeed, they are
themselves bits of a big molecule. Newspapers have the capacity to move people
to change their behaviour in complex ways; but we would neither talk of
newspapers as being algorithmic nor, of themselves, as being purposeful.

Genes (and newspapers) perform their tasks within a system of transduction.
A gene is nothing without the apparatus which is necessary for its being
switched on, for its information being sculpted into a three-dimensional pattern
of electrostatic forces, and for the provision of the context within which that
enzyme or structural protein performs its task in the dance that is the cell,
the organism, the ecosystem. *Purpose*, in this sense, advances the processes of
that dance. The purpose of the cell is to do what cells do; and that which
helps it to do this is purposeful. Intensional purpose - in which there is an
aware purpose holder, having values and derived goals - is, of course, a whole
other domain.

Manfred Eigen (whose name I am no doubt misspelling) has been working for a
lifetime on the concept of evolution as the climbing of a fitness surface,
where the evolutionary process is - his words - a hill climbing algorithm. I
feel that "algorithm" is the wrong word for this: it's a hill climbing process,
in which the act of being further up the hill is a self-justifying criterion of
success, and the interaction of the information in the genome, the fluctuating
environment, and the phenotypic expression of that genome all set the conditions
within which hill climbing - like virtually everything that we see around us -
just happens. Purpose is that which just happens and which just happens to
advance the needs of the system.
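
A minimal sketch of such a process (a toy in Python; the fitness surface,
step size and loop count are arbitrary choices of mine, not Eigen's model)
shows how blind variation plus retention climbs a hill with no goal
represented anywhere in the system:

    import random

    def fitness(x):
        # An arbitrary "fitness surface"; the process knows nothing of its shape.
        return -(x - 3.7) ** 2

    x = 0.0
    for _ in range(10000):
        candidate = x + random.gauss(0, 0.1)   # blind variation
        if fitness(candidate) > fitness(x):    # whatever happens to be further up the hill is kept
            x = candidate

    print(round(x, 2))   # ends up near 3.7 without anything having "aimed" at 3.7

Nothing in the loop refers to the peak; being further up the hill is simply
what survives.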

To the point: can self-assembling purposefulness - in the non-intensional sense
in which I have described it above - be a general property of systems? That is
to say, do the elements of brains or social systems fall into ways of acting
that cause them to climb fitness hills? Almost certainly, in some instances,
the answer is yes. Are the more primitive systems in our brains what are
normally called genetic algorithms (but where those two words are used in a quite
different sense from what I have outlined above!); that is, do they fall into
systems of self-ordering such that they become better at doing tasks, where
"better" means what it means to the system: tautology made useful?

_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk

Alan J. Robinson

Aug 27, 1995
On 26 Aug 1995 11:04:32 -0700,
Kenneth Colby <co...@oahu.cs.ucla.edu > wrote:

>
> In his new book Darwin's Dangerous Idea (recommended), Dennett
> proposes that natural selection, as an explanation of evolution,
> be viewed as an algorithmic process, a "mindless" sort of
> process without point or purpose.
>
> In AI, the term "algorithm" is used in at least two ways (1) an
> idealized abstract theory-type and (2) an up-and-running computer
> program that embodies the algorithm of (1). But such algorithms
> have a desired outcome, a purpose , a telos, a goal to be achieved.
>
> Dennett's exclusion of purpose from his algorithmic explanation
> brings the idea of natural selection closer to an axiom than
> an algorithm in the AI sense. Individual organisms have purposes
> but the overall process of evolution would not have a goal or
> purpose if Darwin's idea is taken as an axiom with testable
> consequences.
> KMC
>


It's important to keep in mind that books such as this are not serious
works of scholarship, especially when they are written by people whose
expertise lies in other disciplines. If they are well written, there
is a market for such books, which is why they are published. Just
don't take them as authoritative. Read the original articles in the
peer-reviewed scientific literature if at all possible, or, failing
that, good review articles by experts in the field.

AJR


Burt Voorhees

Aug 27, 1995
ric...@cs.niu.edu (Neil Rickert) writes:

>In <41nnng$4...@oahu.cs.ucla.edu> co...@oahu.cs.ucla.edu (Kenneth Colby) writes:

>> In his new book Darwin's Dangerous Idea (recommended), Dennett
>> proposes that natural selection, as an explanation of evolution,
>> be viewed as an algorithmic process, a "mindless" sort of
>> process without point or purpose.

I haven't read this book yet, either.
It seems to me, however, that one might
say instead that evolution, as it is
observed to happen in the world, can be
modelled by an algorithmic process.
That is, we, by observing, can come up
with an algorithmic set of rules which
apply within the symbolic system which we
define as being sufficient to capture the
appearances, and which rules lead to changes
which are similar to the evolutionary changes
we observe. One would have to be careful here:
there might be many ways that a proto-horse
(for example) could evolve other than into
a Kentucky Derby winner.

To say that evolution itself, as it occurs in
the world, is algorithmic is an additional
assumption, and one which to me makes sense
only if I make the computational physics
assumption that all of physical reality
is computational. I'm not sure that I want
to make that assumption.

>I haven't read DDI yet, although it is on my list for future
>reading.

>It is difficult to see how one can talk about an algorithmic process
>without a purpose. Mindless algorithms cause no difficulty, and some
>would say that all computer algorithms are mindless. Perhaps
>mindlessness is a requirement for an algorithm, although the
>algorithm itself might generate a mind.

Whether algorithms can generate minds or not is, of course,
a question much debated in this newsgroup. The question
of how a mindless algorithm could generate a mind is
particularly interesting. Suggestions, anybody?

>> In AI, the term "algorithm" is used in at least two ways (1) an
>> idealized abstract theory-type and (2) an up-and-running computer
>> program that embodies the algorithm of (1). But such algorithms
>> have a desired outcome, a purpose , a telos, a goal to be achieved.

>I completely agree.

>It seems to me that the purpose of evolution is to occupy every
>possible ecological niche. I do not mean to imply that evolution is
>conciously seeking to accomplish that purpose. It is mindless (i.e.
>non conscious) but is apparently purposeful. Or, to say it
>differently, we might it useful to interpret evolution as having the
>suggested purpose in order to allow us to interpret it as being an
>algorithmic process.

But in fact, not every niche is
occupied. In a finite system, only
finitely many niches will be possible,
and it may be that the fact that some niches
are occupied leads to the exclusion of others.
In other words, the system itself obeys some set
of dynamical laws (which govern its evolution),
and those laws determine which niches will be
occupied, and how they will change over time.
If one views evolution as a dialectical process
then there will be a continual progression
of increasing complexity. There are other
possible views, however, in which evolution
may sometimes be progressive, and other times
regressive.

I'm not disagreeing with the comment about
apparent purposefulness, but I don't think
that the "purpose" is to occupy every niche.
Perhaps it is to evolve conscious beings who
can assign apparent purposes and thus generate
meaning in an otherwise meaningless world.

>> Dennett's exclusion of purpose from his algorithmic explanation
>> brings the idea of natural selection closer to an axiom than
>> an algorithm in the AI sense. Individual organisms have purposes
>> but the overall process of evolution would not have a goal or
>> purpose if Darwin's idea is taken as an axiom with testable
>> consequences.

>I think we need to distinguish between subjective purposes and
>objective purposes. The purposes and goals that I pursue are
>subjective or internal purposes, in that they are the product of my
>conscious mind. However, viewing humans as evolved species, we have
>an objective or externally imposed purpose of supporting the
>continuation of the species.

It seems to me that an organism must be
self-conscious before it can have a subjective
purpose. Otherwise, it is just following mindless instinct.
Talking of purpose in any way requires conscious
individuals and again brings up the question of how
consciousness can come out of purely mechanical algorithms.

bv

Kenneth Colby

Aug 28, 1995

An AI algorithm is an artifact instantiating (not just following)
rules created by us to achieve a humanly desired goal. According
to Dennett, we are Mother Nature's evolutionary artifacts (DDI, p. 429).
To follow this maternal metaphor, what is Mother Nature's goal?

One advantage of viewing natural selection as an axiom is that we
can think up new alternatives, although it may take a long time,
e.g. Euclid's 5th.

To wax unabashedly meta-scientific, the goal of evolution may be
to achieve a way for the Universe to reflexively understand itself.
Thus, much as the ancients believed (but not for the same reasons),
we really are at the center of things in that mankind exemplifies
the most highly developed and expanding minds finally capable of
progressively penetrating, understanding, controlling, and changing
the Universe "for the better".
KMC


Julian Assange

Aug 30, 1995
Kenneth Colby (co...@oahu.cs.ucla.edu) wrote:
:
: In his new book Darwin's Dangerous Idea (recommended), Dennett
: proposes that natural selection, as an explanation of evolution,
: be viewed as an algorithmic process, a "mindless" sort of
: process without point or purpose.

[...]

Of course it has a purpose. To lead to the present state of observation.
Only those changes which eventually lead to this state are mandated. History,
after all, does not exist; we just infer it from the current state of
what we can observe.

--
+----------------------------------+-----------------------------------------+
| Julian Assange | "if you think the United States has |
| | has stood still, who built the largest |
| pr...@suburbia.net | shopping centre in the world?" - Nixon |
+----------------------------------+-----------------------------------------+

kand. Pontus Gagge

Aug 30, 1995
ric...@cs.niu.edu (Neil Rickert) writes:

>In <41nnng$4...@oahu.cs.ucla.edu> co...@oahu.cs.ucla.edu (Kenneth Colby) writes:

>> In his new book Darwin's Dangerous Idea (recommended), Dennett
>> proposes that natural selection, as an explanation of evolution,
>> be viewed as an algorithmic process, a "mindless" sort of
>> process without point or purpose.
[...]

>It is difficult to see how one can talk about an algorithmic process
>without a purpose. Mindless algorithms cause no difficulty, and some
>would say that all computer algorithms are mindless. Perhaps
>mindlessness is a requirement for an algorithm, although the
>algorithm itself might generate a mind.

Purpose is in the eye of the beholder. From the description, the
subject of the book sounds like Dennett's "Intentional stance" taken
towards both nature and processes. A reasonable development of his
previous lines of argument. Oh, well, yet another book I should read
someday.

To be fussy, evolution is not and cannot be viewed as algorithms.
It sounds as if he should have used "heuristics": when, exactly,
can evolution be said to terminate with a well-defined result...?
Repeat until Übermensch?

--
/--- Ego sum --------\ /------------------------\
! kand. Pontus Gagge ! c89ponga.und.ida.liu.se !
\---- Enjoyment is an overrated pleasure. ------/

Neil Rickert

Aug 31, 1995
In <c89ponga.809806435@news> c89p...@ida.liu.se (kand. Pontus Gagge) writes:
>ric...@cs.niu.edu (Neil Rickert) writes:

>>It is difficult to see how one can talk about an algorithmic process

>>without a purpose. ...

>Purpose is in the eye of the beholder.

Sure. But algorithm is also in the eye of the beholder.

>To be fussy, evolution is not and cannot be viewed as algorithms.

Why not? In particular, why can it not be viewed as algorithmic?

>It sounds as if he should have used "heuristics": when, exactly,
>can evolution be said to terminate with a well-defined result...?

An algorithm might be used repeatedly in an infinite iterative loop.
In such a case it would not terminate with a well-defined result.


Oliver Sparrow

Aug 31, 1995
I think it misuses the word "algorithm" to say that this lies in the eye
of the beholder. An algorithm is an explicitly stated set of rules. These
may be purposeless and often are :) They are, however, explicit. A heuristic
is - as I use the word - a set of rules which have emerged from activity,
which are regular and which govern what happens in a system. They are neither
explicit (unless someone makes them so) nor purposeful, although they may
give rise to equilibria and other regularities. Nor are they purposive, except
in the sense that I (re)defined the word elsewhere in this newsgroup a few
days ago.

_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk

Neil Rickert

Aug 31, 1995

>I think it misuses the word "algorithm" to say that this lies in the eye
>of the beholder. An algorithm is an explicitly stated set of rules.

Well, if you want to be strict about it, then we shall have to
conclude that there are no such things as algorithms. Nothing is
ever fully explicit. As you mentioned in another post, you cannot
know about trees without knowing about forests, and you cannot know
about forests without knowing about trees. There is a great deal of
implicit knowledge behind all of our common concepts. Nothing is
ever fully explicit.

When I program a computer, the transistors are doing things with
electrons. It is convenient for me to think of the action of the
computer as corresponding to the following of rules for arithmetic
operations. But the transistors are not following those rules -- I
am just interpreting the computer as if they were following rules.
That is what I mean when I say that "algorithm" is in the eye of the
beholder.


Oliver Sparrow

Aug 31, 1995
> Kenneth Colby (co...@oahu.cs.ucla.edu) wrote:
> : be viewed as an algorithmic process, a "mindless" sort of
> : process without point or purpose.
>
> pr...@suburbia.net "Julian Assange" replied:
>
> Of course it has a purpose. To lead to the present state of observation.
> Only those changes which eventually to this state are mandated. History,
> after all does not exist, we just infer it from the current state of
> what we can observere.

Someone called Driesch - I think - proposed this under the name of "entelechy"
in the C19th. There was to be a sort of latent yearning to be which somehow
drove evolution, where - not to sound too Jay Gould-ish - it was social
scientism, European nationalism and the related expansionist ethos which were
to bring this process to its culmination. It was not a very helpful concept
then, has added nothing to the debate since and currently has nothing factual
or conceptual to support it.

_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk

R. Mounce

Aug 31, 1995
Peter Lupton <Lup...@luptonpj.demon.co.uk> wrote:
>Being in the eye of the beholder means being highly subjective, being
>a matter of personal taste.

It could mean just the first portion, that the eye is the eye of a
subject who "beholds" an object. It does not have to follow that the
subject/object relationship depends a lot on personal taste.

>What is needed is some argument
>to the effect that this interpretation is no more than a matter of
>personal taste.

There is no reason that two subjects should not be able to agree on an
interpretation of an object while maintaining their full sense of
subjective presence in doing so. The interpretation you want, I think, is
one where the rules have first been defined and agreed upon by the
subjects, and then the subjects are constrained to operate within those
confines if they are to converse in the agreed-upon manner. The axioms,
perhaps, are matters of personal taste, although it is easy enough to
agree that some have broader appeal and use than others.


Neil Rickert

Aug 31, 1995
In <638192...@luptonpj.demon.co.uk> Peter Lupton <Lup...@luptonpj.demon.co.uk> writes:
>In article: <424pja$o...@mp.cs.niu.edu> ric...@cs.niu.edu (Neil
>Rickert) writes:
>> In <809886...@chatham.demon.co.uk> Oliver Sparrow
><oh...@chatham.demon.co.uk> writes:

>> >I think it misuses the word "algorithm" to say that this lies in the eye
>> >of the beholder. An algorithm is an explicitly stated set of rules.

>> Well, if you want to be strict about it, then we shall have to
>> conclude that there are no such things as algorithms. Nothing is
>> ever fully explicit. ...

>Oliver did not say that the rules were stated *fully* explicitly.

If 'explicit' doesn't mean 'fully explicit', then it doesn't mean
anything at all.

>But if you feel that there is an argument from 'Nothing is ever
>fully explicit' to algorithms being in the eye of the beholder,
>don't be coy. Let's have it.

I'll give an example. Think of a UTM. The tape contains the
specification of an algorithm and the data to be used by that
algorithm. Which of the data on the tape is the algorithm, and which
is the data? There is no canonical way of saying where the algorithm
ends and the data begins. Thus the identification of what is the
algorithm is a matter of interpretation. The algorithm is in the eye
of the beholder.
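
A toy illustration of the point (a sketch in Python with an invented
instruction set, not a real UTM): the machine below reads everything from a
single tape, and whether one calls the leading items "the algorithm" and the
rest "the data", or treats the whole tape as data for the interpreter, is a
choice of description.

    def universal(tape):
        # One tape holds instructions and operands alike.
        acc, pc = 0, 0
        while True:
            op = tape[pc]
            if op == "LOAD":
                acc = tape[pc + 1]; pc += 2
            elif op == "ADD":
                acc += tape[pc + 1]; pc += 2
            elif op == "HALT":
                return acc

    tape = ["LOAD", 3, "ADD", 5, "HALT"]
    print(universal(tape))   # 8 -- is the tape a program, or data for universal()?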

>> When I program a computer, the transistors are doing things with
>> electrons. It is convenient for me to think of the action of the
>> computer as corresponding to the following of rules for arithmetic
>> operations. But the transistors are not following those rules -- I
>> am just interpreting the computer as if they were following rules.
>> That is what I mean when I say that "algorithm" is in the eye of the
>> beholder.

>Being in the eye of the beholder means being highly subjective, being
>a matter of personal taste.

Why are there beauty contests? Presumably the existence of such
contests indicates that there is some semblance of an objective
standard for beauty. Yet we still say that beauty is in the eye of
the beholder. I reject your suggested meaning for "in the eye of the
beholder."


Peter Lupton

Aug 31, 1995
In article: <424pja$o...@mp.cs.niu.edu> ric...@cs.niu.edu (Neil
Rickert) writes:
>
> In <809886...@chatham.demon.co.uk> Oliver Sparrow
<oh...@chatham.demon.co.uk> writes:
>
> >I think it misuses the word "algorithm" to say that this lies in the eye
> >of the beholder. An algorithm is an explicitly stated set of rules.
>
> Well, if you want to be strict about it, then we shall have to
> conclude that there are no such things as algorithms. Nothing is
> ever fully explicit. As you mentioned in another post, you cannot
> know about trees without knowing about forests, and you cannot know
> about forests without knowing about trees. There is a great deal of
> implicit knowledge behind all of our common concepts. Nothing is
> ever fully explicit.

Oliver did not say that the rules were stated *fully* explicitly.


But if you feel that there is an argument from 'Nothing is ever
fully explicit' to algorithms being in the eye of the beholder,
don't be coy. Let's have it.

> When I program a computer, the transistors are doing things with
> electrons. It is convenient for me to think of the action of the
> computer as corresponding to the following of rules for arithmetic
> operations. But the transistors are not following those rules -- I
> am just interpreting the computer as if they were following rules.
> That is what I mean when I say that "algorithm" is in the eye of the
> beholder.

Being in the eye of the beholder means being highly subjective, being
a matter of personal taste. Nothing you have said justifies the claim
that being a certain algorithm is something in the eye of the beholder.
That a beholder is required in order to interpret the computer as acting
in accord with an algorithm is not sufficient for the claim that the
algorithm is in the eye of the beholder. What is needed is some argument
to the effect that this interpretation is no more than a matter of
personal taste.

Cheers,
Pete Lupton

Neil Rickert

Sep 1, 1995
In <809939...@chatham.demon.co.uk> Oliver Sparrow <oh...@chatham.demon.co.uk> writes:
>In article <424pja$o...@mp.cs.niu.edu> ric...@cs.niu.edu "Neil Rickert" writes:

>Damn. Hoisted on my own petard. However: within a given framework,
>something can be stated explicitly, in the sense that the interpretation of
>its instructions achieves the end intended. One has to take the framework
>as a given. An algorithm exists, therefore, only relative to its framework.

I can agree with that. However there is no certainty that different
people will adopt the identical framework.

>Information exists as information only relative to its framework. But
>the information may be a part of the framework. Hum. Self-referential and
>relativist domains, once again. There is something deeply juicy embedded in
>this particular bit of the treetrunk.

One cannot escape from relativism, although some people try to
pretend otherwise.


R. Mounce

Sep 1, 1995
Neil Rickert <ric...@cs.niu.edu> wrote:
>Oliver Sparrow <oh...@chatham.demon.co.uk> writes:

>>One has to take the framework
>>as a given. An algorithm exists, therefore, only relative to its framework.
>
>I can agree with that. However there is no certainty that different
>people will adopt the identical framework.

There is also no reason to suggest that similar people will not adopt
similar frameworks. They will do similar things more often than not. I
got the idea that Mr. Sparrow was suggesting a codependency between the
algorithm object (its existence and its relativity) and the framework of
the subject (the "One" who *has to take* the framework as a given). Or,
the framework and the algorithm are the same thing, and the subject (the
"One") has no choice in adopting anything except the thing which is
given.

>>Information exists as information only relative to its framework. But
>>the information may be a part of the framework. Hum. Self-referential and
>>relativist domains, once again.
>

>One cannot escape from relativism, although some people try to
>pretend otherwise.

In the first response, relativism was cast as "certainty". Is there a
difference? Aren't there categories where we can expect, with certainty,
that subjects will make the same interpretation from the codependent
interaction between subject and object? That is, Homo sapiens subjects are
similar enough by their object foundation to practically require that
certain interpretations will be consistent for all subjects. Practically,
that is the way science works for the majority. If someone insists on
calling everything yellow-jello, of course, the only logical thing to say
about them is that they are in the minority (but is that relativism?).


Neil Rickert

Sep 1, 1995
In <427koe$j...@nntp5.u.washington.edu> mou...@u.washington.edu (R. Mounce) writes:
>Neil Rickert <ric...@cs.niu.edu> wrote:
>>Oliver Sparrow <oh...@chatham.demon.co.uk> writes:

>>>One has to take the framework
>>>as a given. An algorithm exists, therefore, only relative to its framework.

>>I can agree with that. However there is no certainty that different
>>people will adopt the identical framework.

>There is also no reason to suggest that similar people will not adopt
>similar frameworks.

I can agree with that too. However there is that gnawing problem of
giving a meaning to 'similar'. For example most physicists think
that the conceptual framework of general relativity is quite similar
to that of Newtonian mechanics. Many philosophers consider these two
frameworks to be separated by an enormous gulf. Likewise two people
may appear outwardly to be very similar, yet due to different
lifetime experiences they may really be very different in how they
view the world.

> I
>got the idea that Mr. Sparrow was intending some idea that codependency is
>between the algorithm object, the existence and the relative, and the
>framework of the subject (the "One" who "*has to take* the framework).

Oliver was trying to find a way of disagreeing with my view that
algorithms are observer relative. The great difficulty is that the
framework I choose when teaching a computer architecture class is
different from the framework I use in a programming language class.
Accordingly, I will present different ideas about algorithms in the
two classes, and they may appear to the non-relativist to be in
conflict.

>Or, the framework and the algorithm are the same thing and the subject)
>the "One") has no choice in adopting any thing except the thing which is
>given.

>>>Information exists as information only relative to its framework. But
>>>the information may be a part of the framework. Hum. Self-referential and
>>>relativist domains, once again.

>>One cannot escape from relativism, although some people try to
>>pretend otherwise.

>In the first reponse, relativism was cast as "certainty". Is there a
>difference? Aren't there categories where we can expect, with certainty,
>that subjects will make the same interpretation from the codependent
>interaction between subject and object?

I don't think we can ever be certain that two subjects will make
the same interpretation. It is common to assume that there is such a
certainty. People who hold that there is only one possible
interpretation and that this interpretation is certain to be held by
others, often are surprised at what the others say and may even label
them as retards or morons or liars.

> That is, homo sapien subjects are
>similar enough by their object foundation to practically require that
>certain interpretations will be consistent for all subjects.

Similarity in their DNA is not sufficient. To a large extent we are
the products of our experiences. The vast differences in personal
experience cause wide divergence of interpretations. Consider the
wide gulf between how different people interpret the O.J. Simpson
case, even though this should be a simple matter of objective
interpretation of the evidence presented in court.

> Practically,
>that is the way science works for the majority.

Or perhaps it works because the majority controls the press.
Astronomers who disagree with the big bang, for example, find it
increasingly difficult to get published in scientific journals.

> If someone insists on
>calling everything yellow-jello, of course, the only logical thing to say
>about them is that they are in the minority (but is that relativism?).

Or perhaps everything is yellow-jello, but the someone cannot get his
reasoning past the peer reviewers. We have to remember that, by the
prevailing Ptolemaic standards, Copernicus was calling everything
yellow-jello. By the standards of entomology, Rachel Carson was
calling everything yellow-jello when she wrote "Silent Spring."
Barbara McClintock won a Nobel prize for her work on genetic changes
in maize, but for a long time she was viewed as being a borderline
yellow-jello scientist.

I'm not suggesting that everyone who comes up with an absurd sounding
idea is right. But I am suggesting that these cases provide evidence
that the 'necessity' of an identity of interpretation is not nearly
as much a necessity as is commonly supposed.


R. Mounce

Sep 1, 1995
I tend to use John Myhill's classification analogy (after Turing, Church and
Goedel) of existence at different levels of complexity. Furthermore,
although I would say no thing is absolute without some one saying so, I do
tend to accept that the way math works indicates some axiomatic
associations which are consistent and certain.

I suppose the difficulty is setting up the axiom, but there are some that
are pretty common, similar, so useful, etc. that I don't mind if someone
calls them pre-given. I usually just point out that some one has called
them that.

Myhill says we can agree on what is computational. You could probably
write a program to evaluate these words and conclude, with certainty, that
they are grammatical English. All computational machines would come to the
same conclusion, as would any reasonable person, because they would be using
computational methods to make that determination. The Halting Problem for
both man and machine is figuring out whether I would have written this, or how
I would have put the grammar together. Ultimately, the beauty or truth of
this message is prospective.


Shankar Ramakrishnan

Sep 1, 1995
In article <425rom$d...@mp.cs.niu.edu> ric...@cs.niu.edu (Neil Rickert) writes:
>
>Why are their beauty contests? Presumably the existence of such
>contests indicates that there is some semblance of an objective
>standard for beauty. Yet we still say that beauty is in the eye of
>the beholder. I reject your suggested meaning for "in the eye of the
>beholder."
>

Actually, as some wisecrack said, beauty is in the eyes of the beer-holder.

Shankar

Oliver Sparrow

Sep 1, 1995
In article <424pja$o...@mp.cs.niu.edu> ric...@cs.niu.edu "Neil Rickert" writes:

> Well, if you want to be strict about it, then we shall have to
> conclude that there are no such things as algorithms. Nothing is
> ever fully explicit. As you mentioned in another post, you cannot
> know about trees without knowing about forests, and you cannot know
> about forests without knowing about trees. There is a great deal of
> implicit knowledge behind all of our common concepts. Nothing is
> ever fully explicit.

Damn. Hoisted on my own petard. However: within a given framework,
something can be stated explicitly, in the sense that the interpretation of
its instructions achieves the end intended. One has to take the framework
as a given. An algorithm exists, therefore, only relative to its framework.

Information exists as information only relative to its framework. But
the information may be a part of the framework. Hum. Self-referential and
relativist domains, once again. There is something deeply juicy embedded in
this particular bit of the treetrunk.

_________________________________________________

Oliver Sparrow (or Woodpecker...)
oh...@chatham.demon.co.uk

Peter Lupton

Sep 2, 1995
In article: <425rom$d...@mp.cs.niu.edu> ric...@cs.niu.edu (Neil
Rickert) writes:
>
> >But if you feel that there is an argument from 'Nothing is ever
> >fully explicit' to algorithms being in the eye of the beholder,
> >don't be coy. Let's have it.
>
> I'll give an example. Think of a UTM. The tape contains the
> specification of an algorithm and the data to be used by that
> algorithm. Which of the data on the tape is the algorithm, and which
> is the data? There is no canonical way of saying where the algorithm
> ends and the data begins. Thus the identification of what is the
> algorithm is a matter of interpretation. The algorithm is in the eye
> of the beholder.

Before we continue, I want to know better the nature of the claim
Neil is making. That is, whether Neil is making the claim that
'there exists a case (which Neil is free to choose) where the
algorithm is in the eye of the beholder' or whether the claim is
that 'in all cases, the algorithm is in the eye of the beholder.'
I take it that Neil intends the latter, but I'd just like to be
sure. Further, Neil has not explained what he means by 'being in
the eye of the beholder' (he has only said he doesn't mean what
I take it to mean) and it would be helpful if this was also
clarified.

Neil's point above is that, in the UTM case, the separation between
program and data is not determined. There are two separate claims
which could be being made:

1) That the extension of the program is in the eye of the
beholder.

2) That given the program and its execution environment, the
algorithm implemented by that program is in the eye of the
beholder.

Neil's argument is directed at (1) - I wonder if there is an
argument directed at (2)?

Now Neil's argument shows that a single case may be insufficient
to determine the extent of a program. This, in itself, is not
sufficient to establish conclusion (1). In real computer systems
programs are executed and re-executed again and again with different
data. An observer of such a system cannot help but notice this and
make the separation between the fixed program and the variable data. As
the execution and the re-execution of the program continues, this
separation will become all the more sharp.

At the end of this process of observation, an observer can say 'this
program implements the following algorithm'. As more cases occur,
the interpretations become all the more definite, and there is every
reason to suppose that such analyses will tend to become true. Such
interpretations are bound to be highly constrained - not at all
'in the eye of the beholder'.

Cheers,
Pete Lupton

Neil Rickert

Sep 2, 1995
In <333651...@luptonpj.demon.co.uk> Peter Lupton <Lup...@luptonpj.demon.co.uk> writes:
>In article: <425rom$d...@mp.cs.niu.edu> ric...@cs.niu.edu (Neil
>Rickert) writes:

>> >But if you feel that there is an argument from 'Nothing is ever
>> >fully explicit' to algorithms being in the eye of the beholder,
>> >don't be coy. Let's have it.

>> I'll give an example. Think of a UTM. The tape contains the
>> specification of an algorithm and the data to be used by that
>> algorithm. Which of the data on the tape is the algorithm, and which
>> is the data? There is no canonical way of saying where the algorithm
>> ends and the data begins. Thus the identification of what is the
>> algorithm is a matter of interpretation. The algorithm is in the eye
>> of the beholder.

>Before we continue, I want to know better the nature of the claim
>Neil is making. That is, whether Neil is making the claim that
>'there exists a case (which Neil is free to choose) where the
>algorithm is in the eye of the beholder' or whether the claim is
>that 'in all cases, the algorithm is in the eye of the beholder.'
>I take it that Neil intends the latter,

Quite right.

>Neil's point above is that, in the UTM case, the separation between
>program and data is not determined. There are two separate claims
>which are possible being made:

>1) That the extension of the program is in the eye of the
> beholder.

>2) That given the program and its execution environment, the
> algorithm implemented by that program is in the eye of the
> beholder.

>Neil's argument is directed at (1) - I wonder if there is an
>argument directed at (2)?

(2) is a little tricky. It could be argued that once there is
agreement on the program, there is agreement on the algorithm. So
let's look at it a little differently. Take a program, and burn that
program into read-only memory on a special-purpose computer. Let's
say that the computer has a single input channel and a single output
channel. Thus we have a mechanism which interacts with the
environment. You may identify the program with that mechanism. I
want to suggest that there are alternative descriptions of the
mechanism which are equally consistent with the I/O behavior.

The computer itself contains many logic gates. There may perhaps be
in excess of 1 million logic gates. Consider an AND gate with
inputs A and B, and with output X, and keep in mind that similar
circumstances apply to other logic gates. There are three reasonable
interpretations of what the AND gate does:

It computes the logical AND of inputs A and B, and
places the result on X.

It acts as a switch with A as the switch controller. When
input A contains a 1 the data received on input B is copied
to X. When input A contains a 0 the output remains a 0.

It acts as a switch with B as the switch controller. When
input B contains a 1 the data received on input A is copied
to X. When input B contains a 0 the output remains a 0.

Note that in a standard interpretation of what the computer is doing
some AND gates are interpreted in each way. The gates used for the
AND opcode of the CPU are given the first interpretation. The gates
connecting memory chips to the bus are interpreted so that the input
from the control bus is considered the switch controller, and the
input from the data bus is considered the data to be switched.

If a computer has 1000000 logic gates, and each can be given three
distinct interpretations, then the mechanism as a whole can be given
at least 3^1000000 distinct interpretations.
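
For what it is worth, a small sketch (my own, purely illustrative) shows
that the three readings describe one and the same input-output behaviour,
which is why nothing in the gate itself selects among them:

    def and_gate(a, b):
        return a & b                       # reading 1: logical AND

    def switch_a_controls_b(a, b):
        return b if a == 1 else 0          # reading 2: A gates the data on B

    def switch_b_controls_a(a, b):
        return a if b == 1 else 0          # reading 3: B gates the data on A

    # All three agree on every input, so choosing between them is interpretive.
    assert all(and_gate(a, b) == switch_a_controls_b(a, b) == switch_b_controls_a(a, b)
               for a in (0, 1) for b in (0, 1))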

>Now Neil's argument shows that a single case may be insufficient
>to determine the extent of a program. This, in itself, is not
>sufficient to establish conclusion (1). In real computer systems
>programs are executed and re-executed again and again with different
>data. An observer of such a system cannot help but notice this and
>make the separation between the fixed program and the variable data. As
>the execution and the re-execution of the program continues, this
>separation will become all the more sharp.

Let me give another example. Consider an ordinary computer executing
an ordinary program. I am quite entitled, if I wish, to say that the
algorithm is the basic CPU execution cycle, and the program is the
data being operated on by that algorithm. A computer architect may
well have that view.

Since the same CPU cycle is re-executed with different programs (i.e.
different data), there is plenty of evidence for the observer to
agree that the algorithm is the CPU cycle and the program is the
data.


Peter Lupton

Sep 3, 1995
In article: <428odn$i...@mp.cs.niu.edu> ric...@cs.niu.edu (Neil
Rickert) writes (in response to my para below):

> >Before we continue, I want to know better the nature of the claim
> >Neil is making. That is, whether Neil is making the claim that
> >'there exists a case (which Neil is free to choose) where the
> >algorithm is in the eye of the beholder' or whether the claim is
> >that 'in all cases, the algorithm is in the eye of the beholder.'
> >I take it that Neil intends the latter,
>
> Quite right.

I am glad that we agree thus far. But Neil then goes on to be naughty.
The paragraph above is incomplete. The final sentence was:

Further, Neil has not explained what he means by 'being in
the eye of the beholder' (he has only said he doesn't mean what
I take it to mean) and it would be helpful if this was also
clarified.

Now it may be that I have not received some note explaining Neil's
meaning (my connection can sometimes be faulty). But I did ask
specially. The reason I asked is that there are two different concerns
here:

(1) Algorithmic identification is non-unique.

(2) Algorithmic identification is highly subjective.

The two are not the same. For example, two people might disagree
about who is most beautiful. We would tend to say, in such
circumstances, that neither is right nor wrong: beauty is in the
eye of the beholder. This is account (2) but (1) is out of place.
Further, a question might have multiple answers. That makes it
non-unique, but not necessarily highly subjective. I still don't
know what Neil is arguing here and it may be that Neil conflates
(1) and (2).

In response to the question of which claim Neil is making:

> >1) That the extension of the program is in the eye of the
> > beholder.
>
> >2) That given the program and its execution environment, the
> > algorithm implemented by that program is in the eye of the
> > beholder.

Neil's response is to argue for both.

In the first case, Neil observes that an AND gate has three
equally acceptable descriptions, one symmetrical and two
asymmetrical:

(1) A & B
(2) IF A then B else false
(3) IF B then A else false

The argument is directed at non-uniqueness, not at a high
degree of subjectivity. We can see the difference here. Suppose
one person identifies gates as AND gates and another as if then
else's. What will happen when they notice the difference? They'll
quickly agree that it makes no difference, it's arbitrary, they
each can multiply represent without demur.

This is not the beauty contest case at all, and does not speak
for a high degree of subjectivity.

Of course there is an argument from non-uniqueness to high
subjectivity. It presupposes that algorithmic identification
must be unique. Once that ideology is embraced, two people may
choose distinct non-unique accounts and then quarrel to no
effect. But that is an outcome predicated upon a false ideological
commitment - that algorithmic identification must be unique.

Neil's response to extension being in the eye of the beholder
is to give the example of CPU/Memory as one algorithm and
program/data being another. This is of course a more interesting
example of algorithmic non-uniqueness. Once again it is a
convincing argument for non-uniqueness, but wholly unconvincing
as an argument for a high degree of subjectivity.

To finish my posting (and here I take a part of Neil's AND gate
example out of context and out of order), Neil wrote:

> If a computer has 1000000 logic gates, and each can be given three
> distinct interpretations, then the mechanism as a whole can be given
> at least 3^1000000 distinct interpretations.

What Neil says is both right and wrong. Of course the computer *could*
be given 3^1000000 interpretations. But it wouldn't. Let me construct
an argument which produces Neil's result:

1) Each AND gate has three descriptions
2) There are 1000000 AND gates
3) There are 3^1000000 possible descriptions
4) All 3^1000000 descriptions are equally good
-----------------------------------------------
5) So there is nothing to choose between them

It is a Quinean argument of extensionalist leanings. The faulty
premise is 4), where part of the claim is that, say, a random
assignment of descriptions to each of the gates is equally
good. This is just not the case. Such ascriptions are no more
than wanton profligacy, out of place in the process of forming
algorithmic descriptions.

Cheers,
Pete Lupton

Aaron Sloman

Sep 3, 1995
ric...@cs.niu.edu (Neil Rickert) writes:

> Date: 2 Sep 1995 00:00:39 -0500
> Organization: Northern Illinois University

> ....

> The computer itself contains many logic gates. There may perhaps be
> in excess of 1 million logic gates. Consider and AND gate with
> inputs A and B, and with output X, and keep in mind that similar
> circumstances apply to other logic gates. There are three reasonable
> interpretations of what the AND gate does:
>
> It computes the logical AND of inputs A and B, and
> places the result on X.
>
> It acts as a switch with A as the switch controller. When
> input A contains a 1 the data received on input B is copied
> to X. When input A contains a 0 the output remains a 0.
>
> It acts as a switch with B as the switch controller. When
> input B contains a 1 the data received on input A is copied
> to X. When input B contains a 0 the output remains a 0.
>
> Note that in a standard interpretation of what the computer is doing
> some AND gates are interpreted in each way.

There's at least one more interpretation: switch the interpretations
of 1 and 0 and instead of this truth table

A B (A AND B)
1 1 1
1 0 0
0 1 0
0 0 0

you get
A B (A OR B)
0 0 0
0 1 1
1 0 1
1 1 1

or, equivalently

A B (A OR B)
1 1 1
1 0 1
0 1 1
0 0 0

I.e. every AND gate can be interpreted as an OR gate.
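
A quick check of the duality (a throwaway sketch of mine, not from the
post): relabel the two levels, i.e. complement every input and output,
and the AND truth table turns into the OR truth table.

    def and_gate(a, b):
        return a & b

    # The same gate seen through swapped labels: what one observer calls 1,
    # the other calls 0.
    def reinterpreted(a, b):
        flip = lambda x: 1 - x
        return flip(and_gate(flip(a), flip(b)))

    assert all(reinterpreted(a, b) == (a | b) for a in (0, 1) for b in (0, 1))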

Whether a particular physical state is interpreted as 1 or 0 depends
on how it is treated by the rest of the system, e.g. conditional
jumps. However even there you get a duality: what looks like IF on
one interpretation looks like UNLESS ( = IF NOT ) on another.

This is not all totally trivial. The point is that at least since
Boole (or Frege?) we have appreciated the symmetry between TRUE and
FALSE from a (mathematical) logical point of view. I believe this
symmetry remains in digital computers: there's always a dual
interpretation.

I pointed out in my "What enables a machine to understand?" in
IJCAI-95 (available via my Web page) that this requires us to show
how an asymmetry can arise out of the design of an intelligent
system.

It is suggested that this can be done by adopting one value as the
default which need not be specified when expressing, communicating,
or recording a proposition's truth-value. Thus instead of always
having to say <p 1>, <q 1>, ... one is allowed to say p, q, ....

Then ~p, ~q, ... etc. are abbreviations for <p 0>, <q 0>, ... etc.

In this way a formalism in which truth values are always made
explicit becomes more compact by choosing one as the default.
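
A tiny sketch (my own, just to make the compaction concrete) of rendering a
set of <proposition, truth-value> pairs with one value taken as the default,
so that only the other value needs marking:

    def render(assignment, default=1):
        # assignment: dict mapping proposition names to 0 or 1.
        # With 1 as the default, <p 1> is written "p" and <p 0> is written "~p".
        return ", ".join(name if value == default else "~" + name
                         for name, value in assignment.items())

    print(render({"p": 1, "q": 1, "r": 0}))             # p, q, ~r
    print(render({"p": 1, "q": 1, "r": 0}, default=0))  # ~p, ~q, r  (the dual choice:
                                                        #  with 0 as default, truth is marked)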

It is left as an exercise for the reader to consider whether this
is inherently circular, because full duality in the initial formalism
would have required <p 1> to be expressed as < <p 1> 1>, and the
latter as < < <p 1> 1> 1>, etc., generating an infinite regress.

All this is connected with the question whether the concept of
computation is inherently a syntactic concept (concerned only with
the structural properties of sequences of formulas, or sequences of
state-descriptions) or whether it is also partly semantic (as Brian
Smith, for example, has claimed).

There's a more general point about what's in the eye of the
beholder, which I'll post separately.

Aaron
--
Aaron Sloman, ( http://www.cs.bham.ac.uk/~axs )
School of Computer Science, The University of Birmingham, B15 2TT, England
EMAIL A.Sl...@cs.bham.ac.uk
Phone: +44-121-414-4775 Fax: +44-121-414-4281

Neil Rickert

Sep 3, 1995
In <42cfl1$o...@percy.cs.bham.ac.uk> A.Sl...@cs.bham.ac.uk (Aaron Sloman) writes:
>ric...@cs.niu.edu (Neil Rickert) writes:

>> ... There are three reasonable
>> interpretations of what the AND gate does:

>> ...

>There's at least one more interpretation: Switch the interpreations
>of 1 and 0 and instead of this truth table

>...


>I.e. every AND gate can be interpreted as an OR gate.

Aaron continues with a theoretical discussion of the importance of
this. Let me add that there is also a practical importance. AND
gates are used as OR gates, and vice versa, in practical computers.
The engineers refer to this as using reverse logic. In circuit
diagrams an input would be labelled as A-bar (A with a horizontal bar
above it) to indicate that this is a negative logic input. Complex
chips such as CPUs and device controller chips commonly use negative
logic for some inputs and positive logic for others.


Neil Rickert

Sep 3, 1995
In <434245...@luptonpj.demon.co.uk> Peter Lupton <Lup...@luptonpj.demon.co.uk> writes:
>In article: <428odn$i...@mp.cs.niu.edu> ric...@cs.niu.edu (Neil
>Rickert) writes (in response to my para below):

>I am glad that we agree thus far. But Neil then goes on to be naughty.

>The paragraph above is incomplete. The final sentence was:

> Further, Neil has not explained what he means by 'being in
> the eye of the beholder' (he has only said he doesn't mean what
> I take it to mean) and it would be helpful if this was also
> clarified.

There was no "naughy" involved. I ignored something which seemed
obvious. Since you assure me that it is not obvious, I comment more
below.

> The reason I asked is that there are two different concerns
>here:
>
>(1) Algorithmic identification is non-unique.

>(2) Algorithmic identification is highly subjective.

I intended that algorithmic identification is subjective. I reject
the adjective "highly" as being of dubious meaning.

In the first example I gave, I suggested that there was no canonical
choice of algorithmic identification. If the choice were not
subjective, there would have to be some canonical means of making
that identification. You could base the definition of canonical
meaning on the way people choose if a reliable non-subjective method
of choice could be found.

But even in simpler cases of varying identification, it is hard to
see how there could be such a canonical choice. Given 4 bytes of
memory, should it be interpreted as a 32 bit integer, as 2 16-bit
integers, or as 4 characters. In at least some instances the same
instructions could be used. I have known of a program which stripped
trailing blanks from a record by loading 8 spaces into a 64 bit
double precision floating point register, and doing floating point
compares. Anyone trying to interpret that code from a core dump
would be excused for assuming that the data was a double precision
floating point number rather than 8 characters.
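
A small illustration of the point (using Python's struct module; the byte
values are my own example): the same four bytes read equally well as one
32-bit integer, two 16-bit integers, or four characters, and nothing in the
bytes themselves decides which.

    import struct

    raw = b"ABCD"                          # four bytes of memory
    print(struct.unpack(">i", raw)[0])     # one 32-bit integer: 1094861636
    print(struct.unpack(">hh", raw))       # two 16-bit integers: (16706, 17220)
    print(raw.decode("ascii"))             # four characters: ABCD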

>In response to the question of which claim Neil is making:

>> >1) That the extension of the program is in the eye of the
>> > beholder.
>>
>> >2) That given the program and its execution environment, the
>> > algorithm implemented by that program is in the eye of the
>> > beholder.

>Neil's response is to argue for both.

>In the first case, Neil observes that an AND gate has three
>equally acceptable descriptions, one symmetrical and two
>asymetrical:
>
> (1) A & B
> (2) IF A then B else false
> (3) IF B then A else false

>The argument is directed at non-uniqueness, not at a high
>degree of subjectivity. We can see the difference here.

No, the difference you suggest is not at all clear. The fact is,
most people adopt the standard interpretation because they are
indoctrinated into that interpretation by our computer science
courses. Imagine someone from a future generation discovering one of
our computers as an artifact, and attempting to understand it. I
suggest there is no obvious way that they could decide between the
different interpretations without a great deal more information as to
how the artifact was used. And physical use would not be sufficient,
for the multiple interpretations are semantic interpretations which
do not affect the physical behavior of the system as a shaper of
electrical currents.

>Of course there is an argument from non-uniqueness to high
>subjectivity. It presupposes that algorithmic identification
>must be unique. Once that ideology is embraced, two people may
>choose distinct non-unique accounts and then quarrel to no
>effect.

These sorts of quarrels occur all the time on comp.ai.philosophy.

> But that is an outcome predicated upon a fase ideological
>committment - that algorithmic identification must be unique.

If there were a way of identifying the algorithm, your argument
might be more persuasive. But there is no canonical way that
I know of, and you have not come up with one.

>Neil's response to extension being in the eye of the beholder
>is to give the example of CPU/Memory as one argorithm and
>program/data being another. This is of course a more interesting
>example of algorithmic non-uniqueness. Once again it is a
>convincing argument for non-uniqueness, but wholly unconvincing
>as an argument for a high degree of subjectivity.

It is my understanding that before IBM constructed the first S/360,
it emulated that system on a 7090, and designed fortran compilers and
linkage editors which were tested under that emulation. When the
fortran compiler was running under emulation on a 7090, was the
fortran compiler the algorithm, or was it data for the emulation
program which really constituted the algorithm?

I would imagine that the programmer who designed the emulator, and
the programmer who designed the fortran compiler might have very
different views about that.

>To finish my posting (and here I take a part of Neil's AND gate
>example out of context and out of order), Neil wrote:

>> If a computer has 1000000 logic gates, and each can be given three
>> distinct interpretations, then the mechanism as a whole can be given
>> at least 3^1000000 distinct interpretations.

>What Neil says is both right and wrong. Of course the computer *could*
>be given 3^1000000 interpretations. But it wouldn't. Let me construct
>an argument which produces Neil's result:
>
> 1) Each AND gate has three descriptions
> 2) There are 1000000 AND gates
> 3) There are 3^1000000 possible descriptions
> 4) All 3^1000000 descriptions are equally good
>-----------------------------------------------
> 5) So there is nothing to choose between them

>It is a Quinean argument of extensionalist leanings. The faulty
>premise is 4), where part of the claim is that, say, a random
>assignment of descriptions to each of the gates is equally
>good. This is just not the case. Such ascriptions are no more
>than wanton profligacy, out of place in the process of forming
>algorithmic descriptions.

You are placing a standard of "good". But "good" implies a purpose
for which the goodness is evaluated. That is a subjective standard.
Your argument actually supports my claim of subjectivity.
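
One standard, concrete instance of the gate-level point both sides
take for granted: the very same physical device computes AND under a
positive-logic reading of its voltage levels and OR under a
negative-logic reading. A small Python check (illustrative only, not
a claim about any particular machine):

    # One physical device, two semantic readings (the usual logic duality).
    # Physically, the gate output is HIGH exactly when both inputs are HIGH.

    def gate(a_high, b_high):
        """Physical behaviour only: no logical interpretation yet."""
        return a_high and b_high

    def read_positive(level):   # interpretation 1: HIGH means logical 1
        return int(level)

    def read_negative(level):   # interpretation 2: HIGH means logical 0
        return int(not level)

    for a in (False, True):
        for b in (False, True):
            out = gate(a, b)
            # Under positive logic the device computes AND of the read values;
            # under negative logic the same behaviour computes OR.
            assert read_positive(out) == (read_positive(a) & read_positive(b))
            assert read_negative(out) == (read_negative(a) | read_negative(b))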


Aaron Sloman

unread,
Sep 3, 1995, 3:00:00 AM9/3/95
to
ric...@cs.niu.edu (Neil Rickert) writes:

> Date: 31 Aug 1995 10:22:00 -0500
> Organization: Northern Illinois University
>
> In <c89ponga.809806435@news> c89p...@ida.liu.se (kand. Pontus Gagge) writes:
> >ric...@cs.niu.edu (Neil Rickert) writes:
>

> >>It is difficult to see how one can talk about an algorithmic process
> >>without a purpose. ...
>
> >Purpose is in the eye of the beholder.
>

> Sure. But algorithm is also in the eye of the beholder.

There is, I believe a deep sense in which EVERYTHING is in the eye
of the beholder and therefore claiming that any particular type of
thing is in the eye of the beholder says nothing specially of
interest regarding that type of thing.

The key idea is that in itself reality is totally unarticulated
though different sorts of entities that sense and interact with
their environments articulate it in different ways.

How particular agents articulate reality (or `experience' reality,
if you prefer), depends on:

(a) their sensory apparatus (transducers, processing circuits,
interpretation software)
(b) their available representational formalisms (see below)
(c) their needs, purposes, functional requirements, whether
explicit or implicit.

(WARNING: lots of imprecise hand-waving now follows. I need to
expand this into a book!)

Any object interacting with an environment has to do so through
internal state changes that generate the forces and/or signals that
influence the environment. There are many different sorts of cases,
some simply involving causal interactions that are fully describable
and explainable in terms of physics (though what physics is changes
over time).

Some cannot be described fully without using `higher level' concepts
involving virtual or abstract machines with their own laws, even
though they may be implemented in physics. Examples are systems with
feedback control hierarchies. Yet more sophisticated systems involve
storage, manipulation, and use of information about the environment,
where the acquisition of information causes long term changes.

Still more sophisticated systems have architectures that support
distinctions between desire-like states, belief-like states, and
possibly others (intentions, plans, moods, personalities, etc.)

Theories about these different levels of abstraction in machines are
not all theories of physics. E.g. the theory of chess-playing
machines is determined by the structure of chess not the structure
of the physical world or the properties of particular mechanisms
used to implement those machines.

Whether a `higher level' theory is needed to account for all
interesting features of a system (or agent) depends on what sort of
system it is.

Some of the simpler systems are totally reactive, e.g. a perfectly
elastic ball bouncing on a perfectly elastic floor. The interactions
produce state changes which in turn produce interactions all in
accordance with the laws of behaviour of the systems involved (in
this case laws of physics) and there is no historical influence.

Others change themselves permanently (or at least for some time) as
a result of their interactions with the environment.

Some of them include self-sensors that can detect internal states
and use the results alongside detection of external states as part
of the process of producing new behaviour.

Similarly some of the effectors (behaviour-producing transducers)
are self-effectors, changing the internal states.

I.e. in these two cases the distinction between internal and
external environment disappears, or is at least significantly
diminished.

In all these cases we can talk about the information gained, stored,
transmitted, and used in the generation of behaviour (internal or
external).

However all such information has to be somehow expressed or encoded
in states that the system controls or has access to (which can be
both internal and external - e.g. trail-blazing in a forest, notes
or diagrams on paper, scent-marks, etc. as well as internal
records).

I call these control-states. Some control-states are not total
states of the whole system but may have an enduring history of their
own. For that reason I also talk about control SUB-states. Lots of
different control sub-states can co-exist within ONE agent,
interacting and changing in different ways over different
time-scales under different influences, and playing different
causal/functional roles in the whole system.

(WARNING: the next bit offends narrow minded materialist
metaphysicians).

Some of these control sub-states will be essentially states in
virtual or abstract machines, like the type of feedback in a
feedback loop or the data-structures in a prolog machine, or the
information structures in an organisation's data-bases, or the
states of a huge array implemented as a `sparse' array so that most
of its locations have no physical embodiment, or the email messages
travelling around the internet.

These states may all be *implemented* in physics, but they are
nevertheless distinct from physical states since they can be
implemented in different ways at different times and they have their
own laws which are not determined by the laws of physics (you can
change a virtual machine's laws without changing the laws of
physics, simply by changing a compiler or interpreter).

These abstract states and processes involving them can have causal
powers. (That claim requires analysis and argument.)

Now for any behaving system (agent) the information-bearing control
sub-states will have a number of properties, which can vary from one
type of agent to another, and from one sub-state of a particular
agent to another. These will limit what the agent can perceive,
imagine, contemplate, select, want, intend, etc.

In particular the control sub-states have a *syntax*, which I define
in terms of the type of structural variability supported by the
mechanisms underlying the control sub-state (e.g. a thermostat
has linear (one-dimensional), continuous variability, whereas an
operating system's information tables generally determine a high
dimensional discrete vector, and a parser's information state
includes a high degree of structural variability allowing the
construction of "trees" or "networks" of varying depth and
complexity.)
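
Those three kinds of syntax can be caricatured in a few lines of
Python (the structures below are invented; only the contrast in
structural variability matters):

    # Three caricatures of structural variability in control sub-states.

    # (1) Thermostat-like: one continuous dimension.
    setpoint_error = 1.7          # a single real number

    # (2) Operating-system-like: a high-dimensional discrete vector.
    process_entry = {"pid": 412, "state": "RUNNABLE", "priority": 5,
                     "open_files": 3}

    # (3) Parser-like: recursively structured, of varying depth.
    parse_tree = ("S",
                  ("NP", ("Det", "the"), ("N", "cat")),
                  ("VP", ("V", "sat")))

    def depth(node):
        """Structural depth only makes sense for the tree-like case."""
        if isinstance(node, tuple):
            return 1 + max(depth(child) for child in node[1:])
        return 0

    print(depth(parse_tree))      # -> 3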

Note that the same bit of the external environment may be `parsed'
differently by different agents, and therefore may implement control
sub-states with different syntax for different agents. The same
goes, less obviously, for how similar brain-materials are used by
different agents. (All this needs to be explained further).

Secondly the control sub-states have a type of *pragmatics* which
has to do with the functional roles of those states for the system.

(Again: two agents may use the same physical structures in quite
different ways, so that one physical structure can implement
abstract information structures with both different syntax and
different semantics. This can also happen within the same agent.
Examples: two identical bit-patterns in different computers (or the
same computer) may be interpreted quite differently because they
occur and are used in different contexts -- in one context a bit
pattern may be a pointer and in another an instruction.)
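
The "same pattern, different role" point can be shown directly with a
few bytes (a Python sketch; both decodings below are invented
conventions, not any real machine's formats):

    import struct

    # The same four bytes, 'parsed' under two different conventions.
    raw = b"\x10\x27\x00\x00"

    as_address = struct.unpack("<I", raw)[0]     # read as a 32-bit pointer
    as_fields  = struct.unpack("<HBB", raw)      # read as (opcode, reg, flag)

    print(hex(as_address))   # -> 0x2710, i.e. address 10000
    print(as_fields)         # -> (10000, 0, 0)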

(Defining functional roles in terms of structural and causal
relationships is my preferred way forward. Some people want to
define function in terms of the intentions of a designer (or user)
or selection by evolution, but I think that approach is much too
narrow to do justice to the variety of types of mechanisms that can
exist that support the notion of functional differentiation,
including mechanisms that are neither designed nor evolved nor used
by anyone.)

Thirdly some control sub-states have a semantics. Explaining how
this is possible in general terms is very hard, and there are
different philosophical theories about this.

(One mistake is to assume there's a sharp and unique boundary
between things which do and things which do not interpret structures
as having semantics -- this is a mistake because the concept is quite
complex, and different fragments of the idea may be found in
different systems, without any clear set of necessary and sufficient
conditions existing for grasping semantic content. It's a `cluster'
concept that cannot be defined in terms of some boolean combination
of capabilities. Another mistake is to assume that Dennett's
intentional stance is required. But I won't argue against that now.)

That was all background. The point I was leading up to is this:

How any agent that stores or manipulates information about its
environment (including itself) does so is going to depend on what
kinds of syntax, pragmatics and semantics are supportable by its
underlying mechanisms and its architecture, and these will differ
enormously from one sort of agent to another.

E.g. trees, ants, chimpanzees, human beings, and contemporary office
information systems all use information about the environment in
generating behaviour, but they differ enormously in the mechanisms
they have available, and their architectures, and therefore in the
kinds of syntax, pragmatics and semantics their control sub-states
can have.

In particular there may be NO way of translating from one to
another. If an animal has a brain with sufficient components to
allow it to express K possible distinct states and another animal
has far more components able to express M possible distinct states,
and which are organised differently and used for different purposes,
then the information content of the second animal may just not be
expressible in the states of the first. Thus how a chimp sees its
environment may not be something that an ant can even consider to be
a true or false view of its environment, even if they are both
sitting on the same branch of the same tree facing in the same
direction.
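
Behind this is only a counting argument: a repertoire of K distinct
states carries at most log2(K) bits, so when M greatly exceeds K most
of the second agent's distinctions have no image at all in the first.
With illustrative numbers:

    import math

    # Pigeonhole arithmetic only; the state counts are invented.
    K = 2**10     # a small repertoire: 10 bits' worth of distinctions
    M = 2**20     # a much larger one: 20 bits' worth

    print(math.log2(K), math.log2(M))   # -> 10.0 20.0
    print(M / K)  # on average this many M-states per K-state -> 1024.0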

(If I understand Penrose and Wiener correctly they both think that
quantum engines in brains provide support for syntactic structures
of infinite complexity that overcome limitations of Turing
computability. Whether that's correct is an empirical question.
Whether it is NECESSARY for most aspects of human and animal
intelligence is another sort of question.)

Less obviously, it may also be impossible for the semantics of the
states of the simpler brain to be expressible in the states of the
more complex brain (though some sort of simulation is possible the
fact that it is an explicit simulation changes its role in the
system compared with the original.) For example, popular science
books trying to show us how the world looks to a fly, by presenting
a collage of slightly different pictures of a scene, are potentially
seriously misleading. That's because how we see those pictures may
involve the perception of spatial properties and relationships that
no fly has access to.

From this viewpoint, there is something that it's like to be a bat,
[See T.Nagel's paper in The Mind's I ed. Hofstadter & Dennett]
or an ant, or even a ball rolling down a helter skelter, but WHAT
it's like (e.g. what information is used and how it is used) may be
unthinkable in human minds and indescribable in any form of language
that we can understand.

(We can summarise certain aspects of what it's like to be a bat,
e.g. by describing `from the outside' the number of things it can
discriminate, but we cannot use a description or thought or image
that could also be part of the bat's control state. We can't specify
from the bat's viewpoint, what it's like to be a bat. Similarly from
the ball's viewpoint we can't specify what it's like to be a ball
rolling down a helter skelter. (The differences between the two
cases need discussion I don't have time for here.))

Philosophers, physicists (and other scientists), robots and
mathematicians will all have a particular collection of (virtual and
physical) mechanisms supporting whatever information-rich states
they use to think about the world, or anything else.

Since there's nothing uniquely correct about these mechanisms and
what they can express there's no uniquely correct set of concepts,
beliefs, thoughts, theories about reality. There's no way the world
really is, in abstraction from the semantic capabilities of any kind
of agent. Even the language of physics does not necessarily have any
absolute unique status. (A thousand years hence physicists may
totally reject most of current physics because they have found
something that is far more powerful for the purposes of physics.
Even the purposes can change.)

WE INHABIT OUR DATA-STRUCTURES ?
The person who introduced me to AI was Max Clowes, one of the
pioneering AI vision researchers, who died in 1980. He used to say
"We inhabit our data-structures".

I don't know exactly what he meant by that, but it seems to me to be
a nice summary of everything I have been saying.

Different organisms, or robots, will, from their viewpoints,
`inhabit' different types of data-structures. The same may be true
of young children and adults, or people from different cultures
insofar as they have developed different kinds of representational
systems.

This must not be taken as a way of supporting an `anything-goes'
view of science, or knowledge. It isn't. For agents whose
architectures and cultures have a great deal in common may engage in
common, even cooperative, activities aimed at finding out what's
*really* out there. Moreover, even one individual can discover that
he's got the environment wrong (and sometimes this is literally a
painful discovery). Thus when different world views come into
conflict they are not always totally incommensurable.

There are conditions under which different agents can agree or
disagree in such a way that one is right and one is wrong: because
they use representations with some common semantic content. This
often (but not always) provides a basis for agreeing on how to
investigate who is right on some point of disagreement.

But that's a long and complicated story, and I have already gone on
long enough.

From this standpoint, claims about things being "In the eye of the
beholder" are extremely obscure because it is not clear whether they
are claims of the sorts I have been making, according to which
EVERYTHING is in the eye/mind/data-structures of the beholder, or
whether they are making some different claim about some portions of
`reality' being in the eye of the beholder while others are not.

Anyone who wants to make such a distinction had better answer the
question: what does it mean for something NOT to be in the eye of a
beholder.

Put another way: we tend to think that there's a sharp and clear
division between what is subjective and what is objective: there is
no such sharp and clear division that I am aware of. Thus claims
that X is objective or that X is subjective are both inherently
obscure claims.

Incidentally, I think that what I have been saying could be
construed as an explanation of Wittgenstein's remark to the
effect that if a lion could talk we would not understand it, but I
don't really know what he was thinking about.

[I've enlarged on various of these points in papers in our ftp site:
ftp://ftp.cs.bham.ac.uk/pub/groups/cog_affect
also available via
http://www.cs.bham.ac.uk/~axs/cog_affect]

On a different point, in the same message Neil continues:
(Pontus)


> >To be fussy, evolution is not and cannot be viewed as algorithms.

(Neil)


> Why not? In particular, why can it not be viewed as algorithmic?
>
> >It sounds as if he should have used "heuristics": when, exactly,
> >can evolution be said to terminate with a well-defined result...?
>
> An algorithm might be used repeatedly in an infinite iterative loop.
> In such a case it would not terminate with a well-defined result.

I suspect Neil failed to notice an ambiguity here.

There is a wide spread confusion between two senses of "algorithm",
roughly:
(a) An algorithm is a completely specified procedure, determining at
every stage under all conditions what is to be done. Some people
generalise this to allow two sorts of algorithms
(a1) deterministic algorithms and
(a2) non-deterministic algorithms that allow some choices to be
made at random, e.g. under the control of a geiger counter.

(b) An algorithm in the second sense is a method guaranteed to solve
some problem or achieve some specified result (e.g. an algorithm
for deciding whether one string is a substring of another, or
for sorting a list). This sense of "algorithm" is often
contrasted with "heuristic" -- a method that sometimes, or even
mostly, works, but either does not always work, or is not
guaranteed to find the best result. etc.

The confusion arises because a non-algorithmic heuristic in sense
(b) can be a perfect example of an algorithm in sense (a1).

A further confusion arises between those who don't allow things of
type (a2) to be algorithms. Maybe evolution is algorithmic in this
broad sense (a2), but not in the narrow sense (a1), and also
heuristic rather than algorithmic in sense (b).
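
The contrast can be lined up in toy code. In the Python sketch below
(the task, finding a list's maximum, is chosen only for brevity) the
first routine is an algorithm in senses (a1) and (b); the second is an
algorithm in sense (a2) but only a heuristic in sense (b):

    import random

    def max_exact(xs):
        """Deterministic and guaranteed correct: senses (a1) and (b)."""
        best = xs[0]
        for x in xs[1:]:
            if x > best:
                best = x
        return best

    def max_sampled(xs, tries=10):
        """Every step fully specified, some steps random draws: sense (a2).
        Not guaranteed to return the true maximum, so merely a heuristic
        in sense (b)."""
        return max(random.choice(xs) for _ in range(tries))

    data = list(range(1000))
    print(max_exact(data))     # always 999
    print(max_sampled(data))   # usually large, sometimes wrong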

Bad teaching of AI (and cognitive science) is often the cause of
these confusions.

(A brief attempt at redress can be found on page 94 of the textbook
by Russell and Norvig: AI a Modern Approach. It's interesting that
Winston's text book (3rd edition) includes neither "algorithm" nor
"heuristic" in the index, though both words occur in section
headings.)

Neil Rickert

unread,
Sep 3, 1995, 3:00:00 AM9/3/95
to
In <42csb0$p...@percy.cs.bham.ac.uk> A.Sl...@cs.bham.ac.uk (Aaron Sloman) writes:
>ric...@cs.niu.edu (Neil Rickert) writes:
>> In <c89ponga.809806435@news> c89p...@ida.liu.se (kand. Pontus Gagge) writes:

>> >Purpose is in the eye of the beholder.

>> Sure. But algorithm is also in the eye of the beholder.

>There is, I believe a deep sense in which EVERYTHING is in the eye
>of the beholder and therefore claiming that any particular type of
>thing is in the eye of the beholder says nothing specially of
>interest regarding that type of thing.

I actually agree with that. I came to this view when I realized that
in order to be effective, an autonomous robot would need to have a
subjective view of its environment.

I like your argument for this position (which I have not quoted).

>(If I understand Penrose and Wiener correctly they both think that
>quantum engines in brains provide support for syntactic structures
>of infinite complexity that overcome limitations of Turing
>computability. Whether that's correct is an empirical question.
>Whether it is NECESSARY for most aspects of human and animal
>intelligence is another sort of question.)

I think that the problem with the Penrose view of cognition is that
it assumes objectivity is basic. With that assumption, it is not
easy to see how subjectivity (such as having unassailable beliefs)
can be constructed. If, however, subjectivity is basic, and
objectivity is just a term for a culturally shared subjectivity, then
the problem of attaining subjective views is simpler.

>Since there's nothing uniquely correct about these mechanisms and
>what they can express there's no uniquely correct set of concepts,
>beliefs, thoughts, theories about reality. There's no way the world
>really is, in abstraction from the semantic capabilities of any kind
>of agent.

Again, I find myself in agreement. As far as I can tell, this makes
me an anti-realist in the standard terminology of philosophy.
Nevertheless, I consider myself a realist. The literature on
realism/ anti-realism seems to me to be confused. Searle, in "The
construction of social reality" attempts to break through this
confusion by making realism a question of ontology rather than one of
epistemology. Unfortunately he comes across as even more confused.

>Anyone who wants to make such a distinction had better answer the
>question: what does it mean for something NOT to be in the eye of a
>beholder.

I will admit to not responding to that point. To clarify, I think it
rather more apparent that interpretations of processes as
corresponding to an algorithm are subjective, than it is that
everything is subjective. Or, if you like, I came upon my view of
the subjectivity of algorithmic interpretation considerably earlier
than I came upon the realization that everything is subjective.

Actually, I had in mind (a1), but not (b).

You perhaps have in mind for (b) something like Newton's method
for finding square roots. It is guaranteed to converge, so gives
good solutions. For the present I would prefer to evade any
discussion of whether that is an algorithm. However a single
iteration is an algorithm which is being repeated.
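
For concreteness, here is the usual iteration written so that both
readings are visible: the single step that gets repeated, and the loop
that repeats it until successive estimates agree (a routine textbook
sketch):

    def newton_step(x, a):
        """One iteration of Newton's method for sqrt(a): the repeated unit."""
        return 0.5 * (x + a / x)

    def newton_sqrt(a, tol=1e-12):
        """The loop that repeats the step until the estimates settle."""
        x = a if a > 1 else 1.0
        while True:
            nxt = newton_step(x, a)
            if abs(nxt - x) < tol:
                return nxt
            x = nxt

    print(newton_sqrt(2.0))   # ~1.4142135623730951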

As a better example, I would like to consider a database lookup
algorithm. It may operate iteratively, in the sense that there is a
continued sequence of inquiries. There is no sense of convergence,
since each inquiry is new. I want to think of evolution as acting on
current data to produce a change in the distribution of life forms.
But the algorithm is repeated with new data, and there is no
expectation that the system will converge. Part of the new data
comes from the result of the previous iteration, but part of the data
comes from the sun, from radioactive emissions, from earthquakes,
etc.
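
A toy version of that picture, with every detail invented: a fixed,
fully specified update step is iterated indefinitely on the previous
result plus fresh external input, and nothing converges; the loop is
simply cut off.

    import random

    def generation(population, external_input):
        """One fully specified update step, applied to the previous
        distribution of 'forms' plus data arriving from outside."""
        mutated = [x + random.gauss(0, 1) + external_input for x in population]
        survivors = sorted(mutated, key=abs)[: len(population) // 2]
        return survivors + [x + random.gauss(0, 0.1) for x in survivors]

    population = [random.gauss(0, 5) for _ in range(20)]
    for t in range(100):
        population = generation(population, external_input=random.gauss(0, 2))

    # The loop never 'terminates with a well-defined result'; we just stopped it.
    print(len(population), round(sum(population) / len(population), 2))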

I have probably just demonstrated that the distinction between (a1),
(a2) and (b) is also in the eye of the beholder.


Gordon Joly

unread,
Sep 4, 1995, 3:00:00 AM9/4/95
to
In article <shankarD...@netcom.com>,
Shankar Ramakrishnan <sha...@vlibs.com> wrote:


>
>Actually, as some wisecrack said, beauty is in the eyes of the beer-holder.
>
>Shankar

"Intelligence is in the mind of the beholder" - circa 1982

Gordo


--
Gordon Joly || http://www.tecc.co.uk/gordo/ || go...@dircon.co.uk

Philip Jackson

unread,
Sep 4, 1995, 3:00:00 AM9/4/95
to
In article <42csb0$p...@percy.cs.bham.ac.uk>, Aaron Sloman writes:

[skipping past a very good discussion by Aaron on the nature of perception
and its relation to the mental processes and data structures of perceptual
agents]

>(We can summarise certain aspects of what it's like to be a bat,
>e.g. by describing `from the outside' the number of things it can
>discriminate, but we cannot use a description or thought or image
>that could also be part of the bat's control state. We can't specify
>from the bat's viewpoint, what it's like to be a bat.[...]
>
>Philosophers, physicists (and other scientists), robots and
>mathematicians will all have a particular collection of (virtual and
>physical) mechanisms supporting whatever information-rich states
>they use to think about the world, or anything else.

Aaron, to what extent does your argument depend on not knowing the full
description of another entity's mental processes? For example, one robot
might know the full program for another robot's mind. It could analyze and
run the other robot's program within its own mind, and notice, and perhaps
"understand" differences in how both robots would perceive something...?

>Since there's nothing uniquely correct about these mechanisms and
>what they can express there's no uniquely correct set of concepts,
>beliefs, thoughts, theories about reality. There's no way the world
>really is, in abstraction from the semantic capabilities of any kind
>of agent.

Suppose that the world or the universe itself is somehow a perceptual
agent, able to perceive itself in the process of constructing reality --
If so, could that be the way the world "really is"?

Of course this supposition is open to debate, though I do not think it
necessarily has to be stated in mystical or religious terms.

[...]
>There is a wide spread confusion between two senses of "algorithm",
>roughly:
>(a) An algorithm is a completely specified procedure, determining at
> every stage under all conditions what is to be done. Some people
> generalise this to allow two sorts of algorithms
> (a1) deterministic algorithms and
> (a2) non-deterministic algorithms that allow some choices to be
> made at random, e.g. under the control of a geiger counter.

In algorithms theory a more standard definition is (a3): non-deterministic
algorithms pursue in parallel all choices at any given choice-point,
without any notion of randomness or probability involved. This is the
sense in which NP-completeness (NP = nondeterministic polynomial time)
is defined.
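
In that textbook sense the "choices" are usually simulated by
exploring every branch, with no randomness anywhere. A tiny
subset-sum search (illustrative code only) makes the contrast with
(a2)'s random draws visible:

    def np_style_subset_sum(xs, target):
        """Simulate (a3)-style nondeterminism: at each choice point pursue
        BOTH branches (include / exclude the element); nothing is random."""
        def explore(i, remaining):
            if remaining == 0:
                return True
            if i == len(xs):
                return False
            return (explore(i + 1, remaining - xs[i])   # branch: take xs[i]
                    or explore(i + 1, remaining))       # branch: skip xs[i]
        return explore(0, target)

    print(np_style_subset_sum([3, 7, 12, 5], 15))   # -> True  (3 + 12)
    print(np_style_subset_sum([3, 7, 12, 5], 6))    # -> False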

>(b) An algorithm in the second sense is a method guaranteed to solve
> some problem or achieve some specified result (e.g. an algorithm
> for deciding whether one string is a substring of another, or
> for sorting a list). This sense of "algorithm" is often
> contrasted with "heuristic" -- a method that sometimes, or even
> mostly, works, but either does not always work, or is not
> guaranteed to find the best result. etc.
>
>The confusion arises because a non-algorithmic heuristic in sense
>(b) can be a perfect example of an algorithm in sense (a1).
>
>A further confusion arises between those who don't allow things of
>type (a2) to be algorithms. Maybe evolution is algorithmic in this
>broad sense (a2), but not in the narrow sense (a1), and also
>heuristic rather than algorithmic in sense (b).

Some texts on algorithms theory seem to combine several of these notions.
Referring to Knuth, "Fundamental Algorithms", p.4, we find the definition
of an algorithm as being a "set of rules which gives a sequence of
operations for solving a specific type of problem" which is finite,
definite, possesses input and output, and is effective.

From this perspective, true randomness (based on geiger counters, etc.)
would be disallowed, but "pseudorandomness" defined as starting from a
known state and computing a well-defined pseudorandom number generating
function, might be allowed.
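
A linear congruential generator is the stock example of such
pseudorandomness (the kind of generator Knuth analyses at length later
in the same series): every step is finite, definite and effective, yet
the output passes for random. The parameters below are the well-known
Park-Miller "minimal standard" ones, used purely for illustration.

    def lcg(seed):
        """Park-Miller 'minimal standard' generator: deterministic,
        finite, definite, effective -- pseudorandom, not random."""
        m, a = 2**31 - 1, 16807
        x = seed
        while True:
            x = (a * x) % m
            yield x / m          # a float in (0, 1)

    gen = lcg(seed=42)
    print([round(next(gen), 4) for _ in range(3)])
    # Same seed, same sequence, every run: nothing depends on a geiger counter.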

Cheers,

Phil Jackson
-------------------------
"...for the word is the sole sign and the only certain mark of the presence
of thought hidden and wrapt up in the body..." --- Descartes
-------------------------
Copyright 1995, Philip C. Jackson, Jr. All Rights Reserved.
Standard disclaimers. <pjac...@pipeline.com><pjac...@ic.net>

Chris Malcolm

unread,
Sep 4, 1995, 3:00:00 AM9/4/95
to

>I think it misuses the word "algorithm" to say that this lies in the eye
>of the beholder. An algorithm is an explicitly stated set of rules. These
>may be purposeless and often are :) They are, however, explicit. A heuristic
>is - as I use the word - a set of rules which have emerged from activity
>which are regular and which govern what happens in a system. They are neither
>explicit (unless someone makes them so) nor purposeful, although they may
>give rise to equilibria and other regularities. Nor are they purposive, except
>in the sense that I (re)defined the word elsewhere in this newsgroup a few
>days ago.

Which is liable -- in this group -- to give rise to unfortunate
misunderstandings, since "heuristic" in AI is an important technical
term with a couple of well-established (related) uses, both of which
*necessarily* involve clear explicit statement and purposeful
activity. See for example Newell & Simon's "Computer Science as
Empirical Enquiry", Turing Award lecture, Comms of ACM, 19:3, 112-126,
1976, or the chapter on "Search" in almost any introductory AI
textbook.
--
Chris Malcolm c...@aifh.ed.ac.uk +44 (0)131 650 3085
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK DoD #205
"The mind reigns, but does not govern" -- Paul Valery

Kenneth Colby

unread,
Sep 4, 1995, 3:00:00 AM9/4/95
to

In saying that natural selection is a mindless "algorithmic" process,
Dennett may mean it is just mechanical with no mind making
goal-directed choices. The other axiom of Neo-Darwinism, "blind"
mutation, might not be so blind in that favorable mutations could be
stimulated by environmental conditions (e.g. mutations of staph
aureus that thrive on vancomycin).

A research tradition should share a lexicon of common kind-terms.
Hence I vote for "algorithm" as meaning a rule-governed process
that achieves a goal.
KMC


Aaron Sloman

unread,
Sep 5, 1995, 3:00:00 AM9/5/95
to
pjac...@nyc.pipeline.com (Philip Jackson) writes:

> Date: 4 Sep 1995 10:48:04 -0400
> Organization: The Pipeline


>
> In article <42csb0$p...@percy.cs.bham.ac.uk>, Aaron Sloman writes:

> ...
[AS]


> >(We can summarise certain aspects of what it's like to be a bat,
> >e.g. by describing `from the outside' the number of things it can
> >discriminate, but we cannot use a description or thought or image
> >that could also be part of the bat's control state. We can't specify
> >from the bat's viewpoint, what it's like to be a bat.[...]
> >
> >Philosophers, physicists (and other scientists), robots and
> >mathematicians will all have a particular collection of (virtual and
> >physical) mechanisms supporting whatever information-rich states
> >they use to think about the world, or anything else.

[pj]


> Aaron, to what extent does your argument depend on not knowing the full
> description of another entity's mental processes?

My point is simply that your representational apparatus may be
constitutionally (structurally) incapable of encoding exactly the
information used by another entity. Your phrase "not knowing"
suggests that a little more information will provide a remedy.

You can't remedy non-translatability between two formalisms by
providing a user of the second with more information.

[pj]
> ..For example, one robot
> might know the full program for another robot's mind. It could analyze and
> run the other robot's program within its own mind, and notice, and perhaps
> "understand" differences in how both robots would perceive something...?

This is a bit like the suggestion I discussed in this paragraph:

[as]


>| Less obviously, it may also be impossible for the semantics of the
>| states of the simpler brain to be expressible in the states of the
>| more complex brain (though some sort of simulation is possible the
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>| fact that it is an explicit simulation changes its role in the
   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>| system compared with the original.)
   ^^^^^^^^^^^^^^^^^^^^^

So up to a point I agree with you. I accept that that sort of
simulation may have predictive and explanatory power. But it doesn't
necessarily tell you `what it's like to be' the other kind of agent.

>| .....For example, popular science
>| books trying to show us how the world looks to a fly, by presenting
>| a collage of slightly different pictures of a scene, are potentially
>| seriously misleading. That's because how we see those pictures may
>| involve the perception of spatial properties and relationships that
>| no fly has access to.

I.e. you can't turn off your semantic capabilities when you try to
take on the viewpoint of another simpler agent, and you can't simply
extend your semantic capabilities to enable you to take on the
viewpoint of another more complex agent.

[as]


> >Since there's nothing uniquely correct about these mechanisms and
> >what they can express there's no uniquely correct set of concepts,
> >beliefs, thoughts, theories about reality. There's no way the world
> >really is, in abstraction from the semantic capabilities of any kind
> >of agent.
>

[pj]


> Suppose that the world or the universe itself is somehow a perceptual
> agent, able to perceive itself in the process of constructing reality --
> If so, could that be the way the world "really is"?
>
> Of course this supposition is open to debate, though I do not think it
> necessarily has to be stated in mystical or religious terms.

I've also played with such an idea from time to time. But even if it
does make sense it still does not show that there's one uniquely
correct view of the world.

Maybe we can introduce some sort of partial ordering of views of
reality: in terms of which types subsume others in the sense of
being capable of some sort of simulation of them. But it's tricky.

You now know more about the world than you did when you were two
days old. Yet that doesn't mean that you are capable, now, of
thinking about how you used to perceive or think about the world as
an infant, so that you can judge it correct or incorrect. (It's not
even a matter of forgetting: keeping a full record of the
information would be no help, for your brain has changed so that it
can no longer interpret it as before.)

I'm probably too sleepy to write sense. Must go to bed.

Aaron Sloman

unread,
Sep 5, 1995, 3:00:00 AM9/5/95
to
ric...@cs.niu.edu (Neil Rickert) writes:

> Date: 3 Sep 1995 23:32:26 -0500
> Organization: Northern Illinois University
>


> In <42csb0$p...@percy.cs.bham.ac.uk> A.Sl...@cs.bham.ac.uk (Aaron Sloman)
> writes:
>
[AS]


> >There is, I believe a deep sense in which EVERYTHING is in the eye
> >of the beholder and therefore claiming that any particular type of
> >thing is in the eye of the beholder says nothing specially of
> >interest regarding that type of thing.

[NR]


> I actually agree with that. I came to this view when I realized that
> in order to be effective, an autonomous robot would need to have a
> subjective view of its environment.

[AS]


> >Since there's nothing uniquely correct about these mechanisms and
> >what they can express there's no uniquely correct set of concepts,
> >beliefs, thoughts, theories about reality. There's no way the world
> >really is, in abstraction from the semantic capabilities of any kind
> >of agent.

[NR]


> Again, I find myself in agreement. As far as I can tell, this makes
> me an anti-realist in the standard terminology of philosophy.

Yes, in some sense of "realist". But if you accept that despite
there being no unique correct view of the world, an individual can
discover that he is in part mistaken about the world, then you
accept some form of realism.

Given a particular conceptual framework, some information structures
will have correct information about the world some incorrect. It's
when perceivers use quite different forms of representation that
their views are not comparable.

> Nevertheless, I consider myself a realist. The literature on
> realism/ anti-realism seems to me to be confused. Searle, in "The
> construction of social reality" attempts to break through this
> confusion by making realism a question of ontology rather than one of
> epistemology. Unfortunately he comes across as even more confused.

I stopped reading Searle some time ago, so I can't comment.

[AS]


> >Anyone who wants to make such a distinction had better answer the
> >question: what does it mean for something NOT to be in the eye of a
> >beholder.

[NR]


> I will admit to not responding to that point. To clarify, I think it
> rather more apparent that interpretations of processes as
> corresponding to an algorithm are subjective, than it is that
> everything is subjective. Or, if you like, I came upon my view of
> the subjectivity of algorithmic interpretation considerably earlier
> than I came upon the realization that everything is subjective.

Describing the algorithm that is instantiated in a process is no
more (or less) subjective than describing the pattern that is
instantiated in a static scene.

............
............
............
............

You can see the above as four rows of dots or as 12 columns or ...

Similarly there may be different ways of imposing an algorithmic
structure on some observed state-changes in a machine.

But often that's because of the necessary equivalence of different
algorithms, just like the necessary equivalence (in the world) of 4
rows of 12 dots and 12 columns of 4 dots, and two juxtaposed
rectangular arrays of 12 by 2 dots, etc.
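
In code the same point is the difference between two ways of grouping
one flat collection of 48 items, both exactly faithful to it (an
illustrative Python sketch):

    dots = list(range(48))                    # the 'scene': 48 dots

    rows    = [dots[i * 12:(i + 1) * 12] for i in range(4)]   # 4 rows of 12
    columns = [dots[j::12] for j in range(12)]                # 12 columns of 4

    # Both descriptions answer exactly to the same underlying dots:
    assert sorted(sum(rows, []))    == dots
    assert sorted(sum(columns, [])) == dots
    print(len(rows), len(rows[0]), len(columns), len(columns[0]))   # 4 12 12 4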

When the structure gets more and more complicated and irregular it
may become harder and harder to find two equally simple ways of
construing it. Similarly with complex processes. The bit
manipulations involved in searching this file for a sequence of
characters matching a certain regular expression are unlikely to
have a ready interpretation as an implementation of an algorithm for
solving a partial differential equation.

Thus the eye is not free to behold any algorithm it desires.

The most interesting question, for me, is when a process in one part
of a machine is interpreted as implementing a particular algorithm
by another part of the machine, without any external interpreter
being involved.

I think this is bound up with the question what it means for part of
a machine to have a *function* within the machine, independently of
it being designed or used or selected to perform that function by
some external agency (or evolution).

AI and philosophy come together here in some tricky questions.

[AS]
> >On a different point, ....

> >(Pontus)
> >> >To be fussy, evolution is not and cannot be viewed as algorithms.
>
> >(Neil)
> >> Why not? In particular, why can it not be viewed as algorithmic?
>
> >> >It sounds as if he should have used "heuristics": when, exactly,
> >> >can evolution be said to terminate with a well-defined result...?
>
> >> An algorithm might be used repeatedly in an infinite iterative loop.
> >> In such a case it would not terminate with a well-defined result.
>
> >I suspect Neil failed to notice an ambiguity here.

All I meant was that Pontus was contrasting algorithmic with
heuristic, i.e. sense (b) below, and you were talking about sense
(a) (actually a1). I think his emphasis was not on termination but
on not having well-defined, guaranteed, results??

[AS]


> >There is a wide spread confusion between two senses of "algorithm",
> >roughly:
> >(a) An algorithm is a completely specified procedure, determining at
> > every stage under all conditions what is to be done. Some people
> > generalise this to allow two sorts of algorithms
> > (a1) deterministic algorithms and
> > (a2) non-deterministic algorithms that allow some choices to be
> > made at random, e.g. under the control of a geiger counter.
>
> >(b) An algorithm in the second sense is a method guaranteed to solve
> > some problem or achieve some specified result (e.g. an algorithm
> > for deciding whether one string is a substring of another, or
> > for sorting a list). This sense of "algorithm" is often
> > contrasted with "heuristic" -- a method that sometimes, or even
> > mostly, works, but either does not always work, or is not
> > guaranteed to find the best result. etc.
>
> >The confusion arises because a non-algorithmic heuristic in sense
> >(b) can be a perfect example of an algorithm in sense (a1).

[NR]


> Actually, I had in mind (a1), but not (b).

As I thought.

[NR]


> You perhaps have in mind for (b) something like Newton's method
> for finding square roots. It is guaranteed to converge, so gives
> good solutions.

It's not so much what I had in mind that's relevant as what the
previous poster whom you were responding to had in mind: he was
thinking of algorithms as things that are guaranteed to solve
problems and since evolutionary processes are not guaranteed to
solve design problems he seemed to want to call them heuristic
rather than algorithmic (in sense b).

Whether the procedure terminates is a secondary question. Algorithms
of types (a1) and (a2) don't ever need to terminate: in that sense
an operating system can implement an algorithm.

Whether something non-terminating is an algorithm in sense (b)
depends on what problem is being solved. If, for instance, it is
controlling some complex machine (e.g. driving a car on the basis of
input from a TV camera) then perhaps we can make a distinction
between an implementation that is guaranteed to meet its
specifications and one that is perhaps cheaper and quicker but
sometimes makes mistakes. The latter is then non-algorithmic, but
heuristic, in sense (b), but not because it never terminates.

> ..For the present I would prefer to evade any
> discussion of whether that is an algorithm. However a single
> iteration is an algorithm which is being repeated.

I don't see the problem about repetitions. Many algorithms involve
iteration or recursion.

[NR]


> As a better example, I would like to consider a database lookup
> algorithm. It may operate iteratively, in the sense that there is a
> continued sequence of inquiries. There is no sense of convergence,
> since each inquiry is new. I want to think of evolution as acting on
> current data to produce a change in the distribution of life forms.
> But the algorithm is repeated with new data, and there is no
> expectation that the system will converge. Part of the new data
> comes from the result of the previous iteration, but part of the data
> comes from the sun, from radioactive emissions, from earthquakes,
> etc.
>
> I have probably just demonstrated that the distinction between (a1),
> (a2) and (b) is also in the eye of the beholder.

I think natural selection is clearly an algorithm of type (a2)
because it does not use procedures that are guaranteed to produce
the same next step in two identical situations (it's stochastic?).

It is however non-algorithmic in sense (b) because insofar as it
makes any sense to talk of design goals (finding designs with high
survival value) evolution uses procedures that are NOT guaranteed to
achieve those goals.

The only sense in which that's in the eye of the beholder is that
you can change your specification of evolution's design goals and
then show that those goals are guaranteed to be met. Something
that's only a heuristic method of achieving a certain goal may be an
algorithmic method of achieving that goal with a certain
probability.

In that sense the distinction between what's heuristic and what's
algorithmic in sense (b) can be in the mind of the beholder since
redefining the target can turn something that was not guaranteed a
successful outcome to something that is. I.e. it turns a heuristic
method into an algorithmic one.

Oliver Sparrow

unread,
Sep 5, 1995, 3:00:00 AM9/5/95
to
In article <DEE5u...@festival.ed.ac.uk>
c...@castle.ed.ac.uk "Chris Malcolm" writes:

>
> [This] is liable -- in this group -- to give rise to unfortunate
> misunderstandings, since "heuristic" in AI is an important technical
> term with a couple of well-established (related) uses, both of which
> *necessarily* involve clear explicit statement and purposeful
> activity. See for example....
> ... the chapter on "Search" in almost any introductory AI textbook.

Thanks, teach. My quibble generator inflamed beyond measure, let me do a
Longley. Herewith nice Mr Chamber's dictionary on the issue:

Heuristic adj. serving or leading to find out; encouraging desire to find
out: (of method, argument) depending on assumptions based on past
experience: consists of guided trial and error. N. the art of discovery in
logic; the method of teaching by which a pupil is set to find things out
for himself: (in pl.) principles used in making decisions when all
possibilities cannot be fully explored. (Gr. heuriskein, to find: hence
Heureka!)


_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk

Peter Lupton

unread,
Sep 5, 1995, 3:00:00 AM9/5/95
to
In article: <42csan$j...@mp.cs.niu.edu> ric...@cs.niu.edu (Neil
Rickert) writes:

> > The reason I asked is that there are two different concerns
> >here:
> >
> >(1) Algorithmic identification is non-unique.
>
> >(2) Algorithmic identification is highly subjective.
>
> I intended that algorithmic identification is subjective. I reject
> the adjective "highly" as being of dubious meaning.

I think I understand why you say that - although you don't say why.
What it means, I think, is that you are arguing for some very strong
version of subjectivity. Say that algorithmic identification is
culture-relative? And that *no* algorithmic identification is ruled
out modulo *some* cultural context? This in line with your anti-realism?

> In the first example I gave, I suggested that there was no canonical
> choice of algorithmic identification. If the choice were not
> subjective, there would have to be some canonical means of making
> that identification. You could base the definition of canonical
> meaning on the way people choose if a reliable non-subjective method
> of choice could be found.

The above is unsound. The argument goes:

1. There is no canonical algorithmic identification
2. Objective algorithmic identification entails canonical
algorithmic identification
3. Therefore, algorithmic identification is subjective

However, step (2) is just not so. The relationship between
non-uniqueness and subjective is one of entailment - but the
other way round. If something is subjective, it must be non-unique.
That's true, because if it were unique, there would be no variation
for subjectivity to bring about. The converse, however, just isn't
so. Algorithmic identification may be non-unique - that says nothing
much about whether those identifications are made subjectively.

Not at all. It would hinge upon the nature of those purposes - whether
they were subjective or not. Not everything which involves subjects
is thereby subject*ive*.

Cheers,
Pete Lupton

Peter Lupton

unread,
Sep 5, 1995, 3:00:00 AM9/5/95
to
In article: <42csb0$p...@percy.cs.bham.ac.uk> A.Sl...@cs.bham.ac.uk
(Aaron Sloman) writes:
>
> There is, I believe a deep sense in which EVERYTHING is in the eye
> of the beholder and therefore claiming that any particular type of
> thing is in the eye of the beholder says nothing specially of
> interest regarding that type of thing.
>
> The key idea is that in itself reality is totally unarticulated
> though different sorts of entities that sense and interact with
> their environments articulate it in different ways.

Perhaps I don't understand what is meant by 'articulated' here.
It seems to be a very strong and far from truistic assertion that
reality is, in itself, totally unarticulated where one is denying
that it has entities and joints, so to speak. When I think of the
haemoglobin molecule and its oxygen/carbon dioxide toggle and when
I think of the action of the ribosome in translating RNA codes into
amino acids, I tend to think that reality *is* articulated.

Further, I don't understand how a subject is to achieve the feat
of constructing articulated entities unless the patch of reality
constituting our subject is in itself articulated. That is, if subject
and object *are* articulated, then these articulations can be related.
However, if neither subject and object are articulated, I don't see
how that situation is going to spontaneously change. But if the
subject is in itself articulated, I don't see why the object shouldn't
be also.

> How particular agents articulate reality (or `experience' reality,
> if you prefer), depends on:
>
> (a) their sensory apparatus (transducers, processing circuits,
> interpretation software)

This is an argument for subjectivity, well taken. However,
we can also see how to reduce this subjectivity - widen and
deepen our sensory apparatus. Some part of science and
engineering is doing exactly that.

> (b) their available representational formalisms (see below)

So let's expand that set and ensure that they are recruited
to do their work in the context of (c) below. This may mean
expanding our intellectual abilities and that might turn out
to be impossible. That would leave us subjective to that
degree, but does not argue that what we would be unable to
achieve makes no sense.

> (c) their needs, purposes, functional requirements, whether
> explicit or implicit.

Well, let's eliminate those needs and functional requirements
which are subjective. What's left? - Not nothing. What's left
is the ability to predict and the ability to exploit such
predictions. Isn't that an important part of what science (and
engineering) is about?

I don't see here an argument for irredeemable subjectivity. On the
contrary, I see many reasons why increasing objectivity is a difficult
and important task.

I'm sorry that I won't be responding until the week-end, since I'm off
to a conference.

Cheers,
Pete Lupton

Neil Rickert

unread,
Sep 5, 1995, 3:00:00 AM9/5/95
to
In <42gefp$d...@percy.cs.bham.ac.uk> A.Sl...@cs.bham.ac.uk (Aaron Sloman) writes:
>ric...@cs.niu.edu (Neil Rickert) writes:

>Describing the algorithm that is instantiated in a process is no
>more (or less) subjective than describing the pattern that is
>instantiated in a static scene.

> ............
> ............
> ............
> ............

>You can see the above as four rows of dots or as 12 columns or ...

Nice example.

>Similarly there may be different ways of imposing an algorithmic
>structure on some observed state-changes in a machine.

>But often that's because of the necessary equivalence of different
>algorithms, just like the necessary equivalence (in the world) of 4
>rows of 12 dots and 12 columns of 4 dots, and two juxtaposed
>rectangular arrays of 12 by 2 dots, etc.

>When the structure gets more and more complicated and irregular it
>may become harder and harder to find two equally simple ways of
>construing it. Similarly with complex processes. The bit
>manipulations involved in searching this file for a sequence of
>characters matching a certain regular expression are unlikely to
>have a ready interpretation as an implementation of an algorithm for
>solving a partial differential equation.

>Thus the eye is not free to behold any algorithm it desires.

Although I hold that algorithm identification is subjective, I do not
suggest a Feyerabendian ("anything goes") type of relativism. There
are standards which the algorithm must meet, such as matching the
input/output behavior of the system. I completely agree that the eye
is not free to behold an arbitrary algorithm.

>The most interesting question, for me, is when a process in one part
>of a machine is interpreted as implementing a particular algorithm
>by another part of the machine, without any external interpreter
>being involved.

This question depends on the possibility of the machine making
interpretations, rather than merely having interpretations imposed by
an external interpreter. This is an important problem for AI and
cognitive science.

>I think natural selection is clearly an algorithm of type (a2)
>because it does not use procedures that are guaranteed to produce
>the same next step in two identical situations (it's stochastic?).

Depending on one's physical assumptions, one could equally argue that
it is stochastic only because it is guaranteed that there will never
be two identical situations.

>The only sense in which that's in the eye of the beholder is that
>you can change your specification of evolution's design goals and
>then show that those goals are guaranteed to be met. Something
>that's only a heuristic method of achieving a certain goal may be an
>algorithmic method of achieving that goal with a certain
>probability.

Clearly the details of evolution are in the eye of the beholder. You
are evidently seeing rows of dots, while I am seeing columns of
dots. More precisely, you are seeing evolution as a selection
process. I am seeing evolution as a biological process (which
includes sexual reproduction and the DNA crossovers that occur in
meiosis) for creating the genetic diversity from which selections are
made.

>In that sense the distinction between what's heuristic and what's
>algorithmic in sense (b) can be in the mind of the beholder since
>redefining the target can turn something that was not guaranteed a
>successful outcome to something that is. I.e. it turns a heuristic
>method into an algorithmic one.

Evidently there are two meanings for "heuristic". Oliver Sparrow
posted a comment which distinguished heuristic from algorithmic.
Chris Malcolm responded that this disagreed with usage in AI. I tend
to agree with Chris, in that I think of heuristics in terms such as
"heuristic algorithms".


Neil Rickert

unread,
Sep 5, 1995, 3:00:00 AM9/5/95
to

What I think this amounts to is that there are two different ways
that 'heuristic' is used.

(1) As an algorithm which will usually give a near-optimum
solution to a problem, but which cannot be guaranteed to give
an optimum solution, or perhaps which cannot be guaranteed to
always succeed in giving a solution. I think this is what
Chris had in mind. Computer scientists are, after all,
interested in finding algorithms which solve problems.

(2) As a hand-waving way of saying that one does not know what
the algorithm is, and one does not even know whether there is
an algorithm. But whatever is going on seems to more-or-less
work.

I don't see that either usage contradicts the quoted definition.


Neil Rickert

unread,
Sep 5, 1995, 3:00:00 AM9/5/95
to
In <20468...@luptonpj.demon.co.uk> Peter Lupton <Lup...@luptonpj.demon.co.uk> writes:
>In article: <42csb0$p...@percy.cs.bham.ac.uk> A.Sl...@cs.bham.ac.uk
>(Aaron Sloman) writes:

I am not attempting to put words in Aaron's mouth, and I may well have
interpreted Aaron's points differently than intended. However I do
have some comments on one point.

[Aaron]


>> How particular agents articulate reality (or `experience' reality,
>> if you prefer), depends on:

>>...

>> (c) their needs, purposes, functional requirements, whether
>> explicit or implicit.

[Peter]


> Well, let's eliminate those needs and functional requirements
> which are subjective. What's left? - Not nothing. What's left
> is the ability to predict and the ability to exploit such
> predictions. Isn't that an important part of what science (and
> engineering) is about?

If you eliminate those needs and functional requirements that are
subjective, you eliminate the agent's need for food, sustenance,
shelter, relaxation, etc. In so doing, you might eliminate all of
the agent's motivation for using what remains. Worse still, the
ability to predict and exploit predictions might well have been
constructed on top of the basic subjective requirements, in such a
way that you cannot eliminate the subjective requirements without
also destroying that which you wish to retain.

Perhaps you have in mind the construction of an artificial agent
without any of the subjective requirements. I suppose this is
approximately what early AI ideas had been about. I am inclined to
think that such a plan is doomed to failure. I doubt that you can
have a successful agent unless it has goals of its own, and such
goals would necessarily be subjective.


Neil Rickert

unread,
Sep 5, 1995, 3:00:00 AM9/5/95
to
In <614187...@luptonpj.demon.co.uk> Peter Lupton <Lup...@luptonpj.demon.co.uk> writes:
>In article: <42csan$j...@mp.cs.niu.edu> ric...@cs.niu.edu (Neil
>Rickert) writes:

>> > The reason I asked is that there are two different concerns
>> >here:
>> >(1) Algorithmic identification is non-unique.
>> >(2) Algorithmic identification is highly subjective.

>> I intended that algorithmic identification is subjective. I reject
>> the adjective "highly" as being of dubious meaning.

>I think I understand why you say that - although you don't say why.
>What it means, I think, is that you are arguing for some very strong
>version of subjectivity. Say that algorithmic identification is
>culture-relative? And that *no* algorithmic identification is ruled
>out modulo *some* cultural context? This in line with your anti-realism?

I will comment on my alleged anti-realism later in this message. I
am most certainly not arguing for the claim that "*no* algorithmic
identification is ruled out modulo *some* cultural context". That
would be to say that there are no standards which an
algorithmic identification must meet. However there is a clear
standard, namely that the algorithmic identification must be
consistent with the behavior of the system, and the examples I have
given have all observed that standard. Thus you appear to be drawing
an incorrect conclusion from my comments.

[Peter comments on my argument for subjectivity of algorithmic
identification]


>The above is unsound. The argument goes:
> 1. There is no canonical algorithmic identification
> 2. Objective algorithmic identification entails canonical
> algorithmic identification
> 3. Therefore, algorithmic identification is subjective

>However, step (2) is just not so. The relationship between
>non-uniqueness and subjective is one of entailment - but the
>other way round. If something is subjective, it must be non-unique.
>That's true, because if it were unique, there would be no variation
>for subjectivity to bring about. The converse, however, just isn't
>so. Algorithmic identification may be non-unique - that says nothing
>much about whether those identifications are made subjectively.

You are correct on a technical point, but it does not seem to apply
to this case. Perhaps we have very different meanings for
"subjective". Consider the standard optical illusion, which can be
seen as either a vase or a pair of faces. In my meaning of
"subjective," the way you happen to see that picture is subjective.
I admit it is not just the non-uniqueness that is involved here. It
is also important that you cannot see it as both a vase and a pair of
faces at one and the same time. Thus there is also a mutual
exclusivity of the two interpretations.

You are right to point out that non-uniqueness is not sufficient, but
I claim that adding the additional requirement of mutual exclusivity
is sufficient to correct that oversight. In the case of identifying
an algorithm, it seems to me that such identifications are mutually
exclusive. I assumed this without explicitly stating it.

-------

On my alleged anti-realism:

Consider the case of two observers in different relativistic
space-time frames. These observers view the world with different
coordinate systems. The truth of a statement may depend on the
coordinate system. For example, what one observer sees as a square
may appear distinctly rectangular to the other observer, due to the
relativistic contraction.
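
To make the frame-dependence concrete, here is a minimal Python sketch of
the Lorentz contraction; the 0.8c relative speed is an arbitrary assumption
chosen purely for illustration:

import math

C = 299_792_458.0  # speed of light, m/s

def contracted_length(rest_length, v):
    """Length along the direction of motion, as measured by an
    observer moving at speed v relative to the object."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return rest_length / gamma

side = 1.0     # a 1 m x 1 m square in its own rest frame
v = 0.8 * C    # assumed relative speed of the second observer
print(contracted_length(side, v))  # ~0.6 m along the motion; still 1.0 m across it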

If questions of truth are determinants of whether one is a realist,
then it must follow that almost all physicists are anti-realists,
since there is near universal support for relativity among
physicists. However physicists are usually taken as exemplars of
realism. There is something seriously wrong with philosophical views
of realism which tie it to such questions of epistemology as the
determination of which statements are true.

My subjectivism is simply of the form that each observer has a
personal coordinate system, and presumably places himself at the
origin of coordinates. Similarly each culture adopts a shared
coordinate system, which it describes as 'objective'. In talking of
coordinate systems, I do not restrict myself to space-time
coordinates. All questions of measurement are done in coordinate
systems, and typically the evaluation of the truth of a statement is
relative to the coordinate system in which it is being considered.

Typically scientific theories also bring with them a particular
coordinate system. For example the main difference between Ptolemaic
astronomy and Copernican astronomy is that one is based on a
geocentric coordinate system, while the other is based on a
heliocentric coordinate system. The purported truths which
distinguish Copernican astronomy from Ptolemaic astronomy are truths
only on the basis of the coordinate system in which they are
evaluated.

The motion of the planets is simpler under a Copernican coordinate
system than it is under a Ptolemaic coordinate system. On the other
hand, terrestrial navigation is simpler under a Ptolemaic coordinate
system than under a Copernican system. There may be good reasons to
prefer one coordinate system over the other. However, there is no
basis for saying that one choice of coordinate system is 'true' while
the other is 'false'. There are relatively straightforward
mathematical procedures for converting statements from one coordinate
system to the other, but these transformations are not
truth-preserving.
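
As a minimal sketch of such a conversion (assuming circular, coplanar orbits
and using Mars as the example, purely for illustration), the same facts about
where a planet is can be re-described from a heliocentric frame into a
geocentric one by a simple change of origin, at the cost of the motion looking
far more complicated:

import math

def heliocentric(radius_au, period_years, t_years):
    """Position of a planet on an assumed circular orbit."""
    angle = 2.0 * math.pi * t_years / period_years
    return (radius_au * math.cos(angle), radius_au * math.sin(angle))

for t in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    earth = heliocentric(1.0, 1.0, t)
    mars = heliocentric(1.52, 1.88, t)
    # The same state of affairs, re-expressed with the Earth at the origin:
    geo_mars = (mars[0] - earth[0], mars[1] - earth[1])
    print(t, mars, geo_mars)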

My conclusion from this is that 'truth' is not applicable to
scientific theories. The scientific theories set the standard for
truth within those theories. A scientific theory is necessarily true
(apart from observational anomalies) when considered by the standard
for truth that it specifies. Thus 'truth' is not a standard by which
theories themselves can be judged.

I see 'truth' as simply a linguistic and logical convention. Its
linguistic application is one of evaluation within the assumptions
shared by speakers of a language. It has no role in matching
linguistic statements to reality. In particular, correspondence
theories of truth do not work. I note that Putnam also rejects
correspondence theories of truth.

To say that 'truth' is not a standard for evaluating theories is not
to say that theories are completely arbitrary. A theory can be
evaluated by the adequacy with which it describes reality. The
relativism of a Feyerabend, summed up in the slogan "anything goes,"
is an absurdity based on jumping to the silly conclusion that if
'truth' is not a standard for scientific theories, then there is no
standard at all.


Philip Jackson

unread,
Sep 5, 1995, 3:00:00 AM9/5/95
to
In article <42ggnt$d...@percy.cs.bham.ac.uk>, Aaron Sloman writes:

>pjac...@nyc.pipeline.com (Philip Jackson) writes:
>>
>> In article <42csb0$p...@percy.cs.bham.ac.uk>, Aaron Sloman writes:
>
[...]
>[as]
>> >Since there's nothing uniquely correct about these mechanisms and
>> >what they can express there's no uniquely correct set of concepts,
>> >beliefs, thoughts, theories about reality. There's no way the world
>> >really is, in abstraction from the semantic capabilities of any kind
>> >of agent.
>>
>[pj]
>> Suppose that the world or the universe itself is somehow a perceptual
>> agent, able to perceive itself in the process of constructing reality --

>> If so, could that be the way the world "really is"?
>>
>> Of course this supposition is open to debate, though I do not think it
>> necessarily has to be stated in mystical or religious terms.
>
>I've also played with such an idea from time to time. But even if it
>does make sense it still does not show that there's one uniquely
>correct view of the world.

I find it difficult to shake the perception that there really is a reality
which is independent of my perceptions, and indeed independent of the
perceptions of all perceptual agents within it.

On the other hand I agree with you that no perceptual agent within reality
can claim a uniquely correct view of reality.

Oliver Sparrow

unread,
Sep 6, 1995, 3:00:00 AM9/6/95
to
The issue of "articulation" can entail either

1: how an information-processing entity chunks (articulates, dissects) the
field that it perceives so as to be able to adduce structure, models,
heuristics :) and the like.

2: that actual substantive things that do stuff perform their feats with
similar chunks which are, through their emitted properties or internal
structures, not reducible to a single language of description: say, of
particle physics. There is an observer effect here, in that entities which
are knowable only through their emitted properties are known only by the
properties which a given observer - such as a rotary polariser - observes.
They are what they do to whatever it is to which they are being real. Several
such chunks create the environment within which the observer chunk is embedded:
they are its reality. What it does depends on, amongst other things, what
they do to it. What it does may also affect what they are, as the operating
environment may be a mutual artifact of their interaction. Lo! A thingie is
born: articulation created from the interaction of partly accessible
dynamic processes.

Both of these happen. Often, (1) acts on the products of (2), finding ways of
accommodating articulations.

_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk

Oliver Sparrow

unread,
Sep 6, 1995, 3:00:00 AM9/6/95
to
Evolution entails competition. Firms which compete create a system in which
competition feeds upon itself, creating what we have been educated to call
"escalation". This occurs in bursts, with latent positioning building up,
being triggered by events - new entrants, corporate predators, technological
change - or through happenstance. In the fashion typical of an arrhythmic but
regular process, a burst of competitive positioning occurs which, as is
typical of earthquakes, has an intensity which is inversely proportional to
the period since the previous relaxation of the system. We are living
through a particularly strong realignment at the moment.

I describe this because it is accessible to us: we live in it and can see it
happen in ways which are less true of slower or more abstract processes. It is
not algorithmic, in the sense of rules running on some
processor-equivalent. There are, however, regularities: one can understand the
imperatives upon firms; running one, you can do things about what is happening
to you. There are what I have been rebuked for calling heuristics: partially
boiled rules of thumb which are helpful and which, within their domain of
validity, work.

Notice that if you look back in time, to what happened to Bloggins Inc., you
can understand it both in terms of the heuristics and the details over which
these regularities broke down. You could construct an algorithm - a detailed
set of instructions - which would simulate the Bloggins event in detail,
reconstructing it exactly. One could do this in a large number of different
ways. Stylistically, one could tell a reconstructing system (actors, a
processor) exactly what to do; or you could feed them the scenario (a
rule base, conditions and so forth) and let them self assemble, tweaking the
system until the self-assembly reconstructed the historical events.

Most would feel that the second approach was "deeper" than the first. If I am
the manager of a Soviet desk lamp factory, and I produce 35 kg desk lamps in
a lumpy black finish, I can be sure of their use because (1) workers are
required to have desk lamps and (2) I have the only factory making desk lamps
in the Union. I cannot, however, assert that I have as deep an understanding of
being a lamp maker as someone who has to create products that survive on their
own merits in a competitive world. If I have the history of Bloggins and tell a
system how to reconstruct this, then I can hardly be astonished when it does
so; and hardly smug that I have discovered the truth of matters.

Any regular event is subject to heuristic analysis (as I have used the word
above). Equally, any known history can be rendered into an algorithm which
will reconstruct it: if by no more than an instruction to print x, then y,
then z. The critical issue is that of predictive homology: if you capture US
energy demand from 1880 to 1995 in a near perfect econometric model (r2=.999,
something easily achievable) then an important issue is the degree to which
you can be confident that this model will remain homologous to reality when
you extrapolate it into the future. In the case of such a model, it fails
profoundly: one can reproduce the facts to brilliant effect for the period
1880-1990; but if one sets this to predict 1994, it is quite wrong as compared
to what happened. The heuristic - that price, economic growth, car population
and so forth have an effect on energy demand - remains true, but only true-ish,
as life is a complicated thing. The algorithm (in this case, any number of
alternative, marginally- or radically-different formulations of econometric
equations, annealing-optimised) can offer astonishingly good reproduction of
history, but that is all that they can do.
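
A minimal Python sketch of the point, using an invented stand-in series rather
than real energy figures: a flexible enough fit reproduces the 'history' almost
perfectly, yet extrapolates badly just beyond it:

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 111)                  # stands in for 1880-1990
history = 3.0 * x + 0.2 * np.sin(16.0 * x) + rng.normal(0.0, 0.05, x.size)

coeffs = np.polyfit(x, history, deg=15)         # an 'algorithm' fitted to the past
fitted = np.polyval(coeffs, x)
print("in-sample fit, r^2:", 1.0 - np.var(history - fitted) / np.var(history))

# Replaying the movie of the past works; projecting it forward does not.
print("extrapolation a little beyond the data:", np.polyval(coeffs, 1.04))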

Competition - and evolution - does what it does in many different ways: great
regularities give way to sudden events, heuristics arise and are broken,
regularities emerge from the soup and dissolve back into it. At a high level of
abstraction, we can define some general principles, as we can say that, in
general, when the US economy grows, then so does energy demand. That we can
adduce fixed rules - embodiable as algorithms - is true only insofar as we are
prepared to constrain those rules to replay the movie of the past; but in doing
so, we cannot expect it to churn out the movie of the future. That which gives
a foggy preview of what may come offers cloudy, multiple views of the past: it
can self-assemble into many different versions which satisfy the same rules but
lead to differing outcomes. Life is not a 35 kg desk lamp, in black.

_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk

Neil Rickert

unread,
Sep 6, 1995, 3:00:00 AM9/6/95
to
In <810398...@longley.demon.co.uk> David Longley <Da...@longley.demon.co.uk> writes:
>In article <42k6n2$7...@mp.cs.niu.edu> ric...@cs.niu.edu "Neil Rickert" writes:
>> In <810352...@longley.demon.co.uk> David Longley <Da...@longley.demon.co.uk>
>> writes:

>> [David]
>> >A car needs petrol and most animals need shelter. There's nothing
>> >subjective about those needs.

>> You rightly point out that there is a notion of need which is
>> attributed to an animal or automobile. Since such needs are
>> attributions from outside they can be said to be objective.

>> However the existence of externally attributed needs cannot be used
>> to deny the existence of internal needs. These internal needs are
>> recognized in the form of appetites, thirsts, urges, etc. These
>> internally recognized needs are subjective. It is the subjective
>> internal needs which supply the motivation for an organism to act.

>Well, again with my psychologist's hat on, you are referring to
>'drives' rather than needs perhaps.

Actually, I was referring to the needs as underlying causes for the
drives. I agree that it is the drives which are important. Drives
are subjective.

> In the days of Hull and
>Spence, these internal ie (physiological states of interoceptors)
>drive stimuli were conceived in terms of stimulus intensity,
> ...

>All of these can be measured without reference to subjective
>states.

One could, at least in principle, design a system with the right
chemistry so that it would have all of the biochemical indicators of
drives, yet not have any drives. Such a system would sit and rust,
just like the passive automobile. It is the subjective drive that
matters, not the objective biochemical indicator which happens to be
correlated to that drive.

> Whilst one could argue that such stimulations *only*
>work because they stimulate internal, subjective sensations which
>motivate the animal to act, I would say that the evidence
>suggests that the processes are far more mechanical.

You are arguing as if there were a contradiction between reacting to
internal subjective sensations, and being mechanical. These are just
different levels of description, and there is no contradiction
between them.

When I disagree with your behaviorism, I am not arguing that it is
logically wrong. I am arguing that the complexity of the behavioral
relationships is such that you will never understand behavior without
considering what goes on inside the head. A child's experiences at
age 1 may affect the behavior of that person, now an adult, 30 years
later. Yet the childhood experience never gets into your database.

Now consider that passive automobile which is just sitting and
rusting away. I suggest that your behaviorist methods will work far
more effectively for the automobile than they will for people.


David Longley

unread,
Sep 6, 1995, 3:00:00 AM9/6/95
to
In article <42i6kj$b...@mp.cs.niu.edu> ric...@cs.niu.edu "Neil Rickert" writes:

> In <20468...@luptonpj.demon.co.uk> Peter Lupton


> <Lup...@luptonpj.demon.co.uk> writes:
> >In article: <42csb0$p...@percy.cs.bham.ac.uk> A.Sl...@cs.bham.ac.uk
> >(Aaron Sloman) writes:
>
> I am not attempting to put words in Aaron's mouth, and I may well have
> interpreted Aaron's points differently than intended. However I do
> have some comments on one point.
>
> [Aaron]

> >> How particular agents articulate reality (or `experience' reality,
> >> if you prefer), depends on:

> >>...


>
> >> (c) their needs, purposes, functional requirements, whether
> >> explicit or implicit.
>

> [Peter]
> > Well, let's eliminate those needs and functional requirements
> > which are subjective. What's left? - Not nothing. What's left
> > is the ability to predict and the ability to exploit such
> > predictions. Isn't that an important part of what science (and
> > engineering) is about?
>
> If you eliminate those needs and functional requirements that are
> subjective, you eliminate the agent's need for food, sustenance,
> shelter, relaxation, etc. In so doing, you might eliminate all of
> the agent's motivation for using what remains. Worse still, the
> ability to predict and exploit predictions might well have been
> constructed on top of the basic subjective requirements, in such a
> way that you cannot eliminate the subjective requirements without
> also destroying that which you wish to retain.
>

Unless you are *very* careful, you are going to debate the old issues
of intensionalism vs. extensionalism as methodological solipsism vs.
naturalism/behaviourism again.

A car needs petrol and most animals need shelter. There's nothing
subjective about those needs.

--
David Longley

David E. Weldon, Ph.D.

unread,
Sep 6, 1995, 3:00:00 AM9/6/95
to

}==========Neil Rickert, 9/3/95==========
}
}In <42cfl1$o...@percy.cs.bham.ac.uk> A.Sl...@cs.bham.ac.uk
}(Aaron Sloman) writes:
}>ric...@cs.niu.edu (Neil Rickert) writes:
}
}>> ... There are three reasonable
}>> interpretations of what the AND gate does:
}>> ...
}
}>There's at least one more interpretation: Switch the
}interpreations
}>of 1 and 0 and instead of this truth table
}>...
}>I.e. every AND gate can be interpreted as an OR gate.
}
}Aaron continues with a theoretical discussion of the importance of
}this. Let me add that there is also a practical importance. AND
}gates are used as OR gates, and vice versa, in practical
}computers.
}The engineers refer to this as using reverse logic. In circuit
}diagrams an input would be labelled as A-bar (A with a horizontal
}bar
}above it) to indicate that this is a negative logic input. Complex
}chips such as CPUs and device controller chips commonly use
}negative
}logic for some inputs and positive logic for others.
}
This is not quite right. It can be shown that every possible logical circuit
(e.g., EXOR, Implicational, etc.) can be built entirely from NAND gates, or
entirely from NOR gates. This is what is meant by negative logic (e.g., NAND =
Not AND, etc.). It is this fact that permits VLSI technology; i.e., you can
do everything you need to do to get a CPU using a single gate type 99.9 % of
the time.
}
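
Two small illustrations of the points above, as a Python sketch rather than
silicon: NAND by itself is enough to build the other gates, and relabelling
0 and 1 turns an AND into an OR (De Morgan duality):

def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

# Swap the interpretation of 0 and 1 on an AND gate and it behaves as OR.
for a in (0, 1):
    for b in (0, 1):
        assert or_(a, b) == 1 - and_(1 - a, 1 - b)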


Neil Rickert

unread,
Sep 6, 1995, 3:00:00 AM9/6/95
to
>In article <42i6kj$b...@mp.cs.niu.edu> ric...@cs.niu.edu "Neil Rickert" writes:
>> In <20468...@luptonpj.demon.co.uk> Peter Lupton
>> <Lup...@luptonpj.demon.co.uk> writes:

>> [Peter]
>> > Well, let's eliminate those needs and functional requirements
>> > which are subjective. What's left? - Not nothing. What's left
>> > is the ability to predict and the ability to exploit such
>> > predictions. Isn't that an important part of what science (and
>> > engineering) is about?

[Neil]


>> If you eliminate those needs and functional requirements that are
>> subjective, you eliminate the agent's need for food, sustenance,
>> shelter, relaxation, etc. In so doing, you might eliminate all of
>> the agent's motivation for using what remains. Worse still, the
>> ability to predict and exploit predictions might well have been
>> constructed on top of the basic subjective requirements, in such a
>> way that you cannot eliminate the subjective requirements without
>> also destroying that which you wish to retain.

[David]


>A car needs petrol and most animals need shelter. There's nothing
>subjective about those needs.

You rightly point out that there is a notion of need which is
attributed to an animal or automobile. Since such needs are
attributions from outside they can be said to be objective.

However the existence of externally attributed needs cannot be used
to deny the existence of internal needs. These internal needs are
recognized in the form of appetites, thirsts, urges, etc. These
internally recognized needs are subjective. It is the subjective
internal needs which supply the motivation for an organism to act.

I'll grant that an automobile can be said to have only objective
needs. This has something to do with the fact that in its normal
state the automobile is entirely passive. Left completely to its own
devices an automobile would just sit in one spot and slowly rust
away. In its static state it doesn't exactly exhibit much in the way
of an ability to predict and an ability to exploit such predictions.


David Longley

unread,
Sep 6, 1995, 3:00:00 AM9/6/95
to
In article <42k6n2$7...@mp.cs.niu.edu> ric...@cs.niu.edu "Neil Rickert" writes:

<snip>

Well, again with my psychologist's hat on, you are referring to
'drives' rather than needs perhaps. In the days of Hull and
Spence, these internal (i.e. physiological states of interoceptors)
drive stimuli were conceived in terms of stimulus intensity,
which in turn could be equated with levels of hypoglycemia,
hypernatremia, circulating sex hormones etc etc. These could be
measured either directly via physiological probes or indirectly
via measures such as weight loss. In more recent decades, these
physiological states have been tied to activity in the
ventromedial and lateral hypothalamus and adrenaline and
noradrenaline systems, the renin-angiotensin and vasopressin
systems, 5-HT and dopaminergic systems.

All of these can be measured without reference to subjective
states. One can stimulate bits of the neurochemical sub-systems
and generate eating and drinking, or sex, or shelter seeking
behaviour. Whilst one could argue that such stimulations *only*
work because they stimulate internal, subjective sensations which
motivate the animal to act, I would say that the evidence
suggests that the processes are far more mechanical.

--
David Longley

Andrzej Pindor

unread,
Sep 6, 1995, 3:00:00 AM9/6/95
to
In article <42csb0$p...@percy.cs.bham.ac.uk>,

Aaron Sloman <A.Sl...@cs.bham.ac.uk> wrote:
>ric...@cs.niu.edu (Neil Rickert) writes:
>
>> Date: 31 Aug 1995 10:22:00 -0500
>> Organization: Northern Illinois University
>>
>> In <c89ponga.809806435@news>
>> c89p...@ida.liu.se (kand. Pontus Gagge) writes:
>> >ric...@cs.niu.edu (Neil Rickert) writes:
>>
>> >>It is difficult to see how one can talk about an algorithmic process
>> >>without a purpose. ...
>>
>> >Purpose is in the eye of the beholder.
>>
>> Sure. But algorithm is also in the eye of the beholder.
>
>There is, I believe a deep sense in which EVERYTHING is in the eye
>of the beholder and therefore claiming that any particular type of
>thing is in the eye of the beholder says nothing specially of
>interest regarding that type of thing.
>
>The key idea is that in itself reality is totally unarticulated
>though different sorts of entities that sense and interact with
>their environments articulate it in different ways.
.....................
{a lot of intersting stuff omitted for brevity}
..........

>This must not be taken as a way of supporting an `anything-goes'
>view of science, or knowledge. It isn't. For agents whose
>architectures and cultures have a great deal in common may engage in
>common, even cooperative, activities aimed at finding out what's
>*really* out there. Moreover, even one individual can discover that
>he's got the environment wrong (and sometimes this is literally a
>painful discovery). Thus when different world views come into
>conflict they are not always totally incommensurable.
..............
I agree basically with everything you have written (and you have done so
in such a clear way too!). However, I have some comments concerning the
following:

>Put another way: we tend to think that there's a sharp and clear
>division between what is subjective and what is objective: there is
>no such sharp and clear division that I am aware of. Thus claims
>that X is objective or that X is subjective are both inherently
>obscure claims.
>

It seems to me that one has to ask 'subjective' or 'objective' relative to
what? This may seem as a contradiction when applied to the word 'objective',
but the point is that one should give up the notion of 'absolute objectivity'.
As a species, we have a certain access to 'reality', determined by our
physical and mental structure. We articulate reality in a certain way and we
cannot go beyond it, even though our articulations may be more and more
sophisticated. On the other hand, as individuals, we may differ in the ways
we articulate the reality and those aspects we can agree on and which will
survive a falsification process can be considered (species) objective, whereas
those which will be individual-specific can be considered subjective. You are
right that sometimes the division may be blurred, but you agree in the quote
I have retained above, that "anything-goes" is incorrect. Some articulations
will survive the falsification process (and can then be considered species-
objective) and some will not. There is of course a possibility that two
different physical theories will explain facts equally well, and then
the choice between them will be subjective.

>Aaron
>--
>Aaron Sloman, ( http://www.cs.bham.ac.uk/~axs )

Andrzej
--
Andrzej Pindor The foolish reject what they see and
University of Toronto not what they think; the wise reject
Instructional and Research Computing what they think and not what they see.
pin...@gpu.utcc.utoronto.ca Huang Po

David Longley

unread,
Sep 7, 1995, 3:00:00 AM9/7/95
to
In article <810401...@chatham.demon.co.uk>
oh...@chatham.demon.co.uk "Oliver Sparrow" writes:

> Sorry to say this David, but a *car* doesn't need petrol, a driver does.
> A car that lacks petrol may want of gasoline in order to function; and a
> person's expectations may perceive that want.
> _________________________________________________
>
> Oliver Sparrow
> oh...@chatham.demon.co.uk
>

A car needs petrol just as a human needs glucose.
--
David Longley

Jim Balter

unread,
Sep 8, 1995, 3:00:00 AM9/8/95
to
In article <shankarD...@netcom.com>,
Shankar Ramakrishnan <sha...@vlibs.com> wrote:
>In article <jqbDEK...@netcom.com> j...@netcom.com (Jim Balter) writes:
>>In article <810445...@longley.demon.co.uk>,
>>There are many ways in which the claim is wrong, not the least of which is that
>>a car does not lose its integrity over time in the absence of petrol. But I
>>suppose that a complete blindness to the significance of intensional
>>statements can easily carry with it a blindness to extensional matters as
>>well.
>>
>
>For humans, the need for water is associated with the quale of thirst,
>which distracts whatever you are doing. While a car itself is not going
>to worry about running out of gas, the driver would, esp. if he is driving
>on I-15 in the Mohave desert. The car becomes an extension of the driver,
>and the absence of gasoline is reflected upon the driver as an *emotional*
>quale. Of course, humans cannot experience a *physical* quale like thirst
>or hunger associated with a lack of gasoline for the reason that evolution
>neither had the inclination nor the time to do so.

I remind you that Longley said "just as", and that phrase must be presumed to
have some content. And that I mentioned loss of integrity. The quale of
thirst is not the same as the result of the human body being deprived of
water, even though it is evolutionarily linked to it. No such result occurs
in cars as a result of deprivation of gasoline. (I suppose that the presence
of a desiccated corpse in the driver's seat might cause some erosion.) Your
comments, while interesting in their own right, seem a bit off the mark.

--
<J Q B>


Andrzej Pindor

unread,
Sep 8, 1995, 3:00:00 AM9/8/95
to
In article <810563...@longley.demon.co.uk>,
David Longley <Da...@longley.demon.co.uk> wrote:
>In article <jqbDEK...@netcom.com> j...@netcom.com "Jim Balter" writes:
>
>> In article <810548...@longley.demon.co.uk>,
>> David Longley <Da...@longley.demon.co.uk> wrote:
>> >In article <810544...@chatham.demon.co.uk>
>> > oh...@chatham.demon.co.uk "Oliver Sparrow" writes:
>> >
>> >> Sharp change of subject: an interesting article in Science about lions.
>> >> Territorial challenges are an important feature of lioness pack life.
>> >> Some lions are characteristically aggressive in responding to such
>> >> challenges, whilst some are cowardly. Observation shows that individual
>> >> lionesses are aware of the characteristics of these others and factor them
>> >> into their responses to a given challenge. It also demonstrates that
>> >> individual lionesses evaluate the current mental state of others before
>> >> choosing a strategy. T'ain't just humans, therefore; and it happens.
>> >>
>> >> _________________________________________________
>> >>
>> >> Oliver Sparrow
>> >> oh...@chatham.demon.co.uk
>> >>
>> >
>> >The feature extraction you refer to are clear behaviour patterns. Darwin
>> >'The Expression of the Emotions in Man and Animals (1872), along with
>> >many more subsequent zoologists and ethologists since have surely rendered
>> >such 'Fixed Action Patterns' familiar? The particular vectors extracted are,
>> >however one looks at them, *behaviours*, not 'mental states'.
>> >
>> >This is what makes acting or feigning so effective.
>>
>> How does one define "feigning" in behavioral terms?
>>
>> --
>> <J Q B>
>>
>>
>Probably as a sub-set, (fragments) of the behaviours which comprise a
>full action pattern (cf. lion cubs in play-fighting). See Lorenz and
>others for further elaboration.
>
From my knowledge of Lorenz's writings, he has a lot of similar observations.
It seems to me that the controversy about the reality of mental states distracts
us from the important fact that the behavior of the lionesses in a given
confrontational situation seems to depend on the observed behavior of other
lionesses in the past. What interpretation we give to it is less important
than the fact that such a _behavior_ in humans is taken as one of the signs
of thinking, intelligence etc. In other words we are not so different from
them, as some may claim (hope).
Perhaps lionesses (or even lions) could also come up with a belief in ZFC?
As we heard this belief is not due to any evolutionary pressures, but comes
from quantum effects in the brain. Is there a reason that these effects would
only be present in human brains?

Andrzej
>--
>David Longley

Andrzej Pindor

unread,
Sep 8, 1995, 3:00:00 AM9/8/95
to
In article <shankarD...@netcom.com>,
Shankar Ramakrishnan <sha...@vlibs.com> wrote:
>In article <jqbDEK...@netcom.com> j...@netcom.com (Jim Balter) writes:
>>In article <810445...@longley.demon.co.uk>,

>>David Longley <Da...@longley.demon.co.uk> wrote:
>>>In article <810401...@chatham.demon.co.uk>
>>> oh...@chatham.demon.co.uk "Oliver Sparrow" writes:
>>>
>>>> Sorry to say this David, but a *car* doesn't need petrol, a driver does.
>>>> A car that lacks petrol may want of gasoline in order to function; and a
>>>> person's expectations may perceive that want.
>>>> _________________________________________________
>>>>
>>>> Oliver Sparrow
>>>> oh...@chatham.demon.co.uk
>>>>
>>>
>>>A car needs petrol just as a human needs glucose.
>>
>>There are many ways in which the claim is wrong, not the least of which is that
>>a car does not lose its integrity over time in the absence of petrol. But I
>>suppose that a complete blindness to the significance of intensional
>>statements can easily carry with it a blindness to extensional matters as
>>well.
>>
>
>For humans, the need for water is associated with the quale of thirst,
>which distracts whatever you are doing. While a car itself is not going
>to worry about running out of gas, the driver would, esp. if he is driving
>on I-15 in the Mohave desert. The car becomes an extension of the driver,
>and the absence of gasoline is reflected upon the driver as an *emotional*
>quale. Of course, humans cannot experience a *physical* quale like thirst
>or hunger associated with a lack of gasoline for the reason that evolution
>neither had the inclination nor the time to do so.
>
>Shankar

You prove conclusively that the need of petrol (for a car) or glucose/water
(for a human) is a purely subjective need, only present from the human
perspective. For a disinterested alien, one more set of human bones and one
more pile of rusting metal in the Mohave desert is of no more importance than
the lack of them - none of these eventualities indicates a "need" of anything.

Andrzej

David Longley

unread,
Sep 8, 1995, 3:00:00 AM9/8/95
to
In article <810544...@chatham.demon.co.uk>
oh...@chatham.demon.co.uk "Oliver Sparrow" writes:

The feature extractions you refer to are clear behaviour patterns. Darwin's
'The Expression of the Emotions in Man and Animals' (1872), along with
many more subsequent zoologists and ethologists since, have surely rendered
such 'Fixed Action Patterns' familiar? The particular vectors extracted are,
however one looks at them, *behaviours*, not 'mental states'.

This is what makes acting or feigning so effective.

--
David Longley

David Longley

unread,
Sep 8, 1995, 3:00:00 AM9/8/95
to
In article <jqbDEK...@netcom.com> j...@netcom.com "Jim Balter" writes:

> In article <810548...@longley.demon.co.uk>,

> How does one define "feigning" in behavioral terms?
>
> --
> <J Q B>
>
>

Probably as a sub-set, (fragments) of the behaviours which comprise a
full action pattern (cf. lion cubs in play-fighting). See Lorenz and
others for further elaboration.

--
David Longley

David Longley

unread,
Sep 11, 1995, 3:00:00 AM9/11/95
to
In article <810803...@chatham.demon.co.uk>
oh...@chatham.demon.co.uk "Oliver Sparrow" writes:

> Back to lions. It is clear that animals model the internal states of others.
^^^^^^^^^^^^^^^^

Try 'I think that', or 'I believe that', or 'I imagine that' or better still,
'I conjecture that'.

Then wonder why all the other zoologists & ethologists haven't expressed their
explanations in such mentalistic (intensional) terms.

Such accounts are lacking in parsimony and go way beyond the information
available surely <g>.
--
David Longley

Chris Malcolm

unread,
Sep 11, 1995, 3:00:00 AM9/11/95
to
In article <42i6kj$b...@mp.cs.niu.edu> ric...@cs.niu.edu (Neil Rickert) writes:
>In <20468...@luptonpj.demon.co.uk> Peter Lupton <Lup...@luptonpj.demon.co.uk> writes:
>>In article: <42csb0$p...@percy.cs.bham.ac.uk> A.Sl...@cs.bham.ac.uk
>>(Aaron Sloman) writes:

>[Aaron]
>>> How particular agents articulate reality (or `experience' reality,
>>> if you prefer), depends on:

>>> (c) their needs, purposes, functional requirements, whether
>>> explicit or implicit.

>[Peter]


>> Well, let's eliminate those needs and functional requirements
>> which are subjective. What's left? - Not nothing. What's left
>> is the ability to predict and the ability to exploit such
>> predictions. Isn't that an important part of what science (and
>> engineering) is about?

>If you eliminate those needs and functional requirements that are


>subjective, you eliminate the agent's need for food, sustenance,
>shelter, relaxation, etc. In so doing, you might eliminate all of
>the agent's motivation for using what remains. Worse still, the
>ability to predict and exploit predictions might well have been
>constructed on top of the basic subjective requirements, in such a
>way that you cannot eliminate the subjective requirements without
>also destroying that which you wish to retain.

>Perhaps you have in mind the construction of an artificial agent


>without any of the subjective requirements. I suppose this is
>approximately what early AI ideas had been about. I am inclined to
>think that such a plan is doomed to failure. I doubt that you can
>have a successful agent unless it has goals of its own, and such
>goals would necessarily be subjective.

"Subjective" is a slippery word. To begin with, whether or not it is
applicable depends on which point of view you are speaking from,
e.g., a scientist, a human being, a dog, an AI KBS, a robot, etc.. In
the old days when academics were the only things in the universe with
subjective experience :-) one could get away with all sorts of
question-begging short-cuts, such as presuming that what was not
objective was subjective, and vice versa, without bothering a hoot
about relativity to points of view, knowledge, purposes, and so on. We
no longer have this luxury. It is possible for something to be
subjective from the point of view of A, and objective from the point
of view of B. It is possible for something to have both subjective and
objective aspects from a single point of view.

So, I find the discussion I cite quite interesting, but so vague that
I can't really get hold of it. For example, by "subjective" do you
mean from your point of view, or from the agent's point of view, and
do you consider that this *implies* non-objectivity from another point
of view, such as the human scientific?

For example, let us suppose that I build a very simple robot whose
purpose in life is to find a dark place and sit still. It doesn't even
have a computer in it, just a few transistors and transducers. This
device has a purpose of its own, which I designed it to have, and this
purpose is objectively verifiable by anyone who cares to experiment
with it or take it to bits. Can you explain what you mean by "such
goals would necessarily be subjective"?
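
A minimal Python sketch of the sort of control law such a device might embody
(the sensor readings, wheel speeds and threshold are illustrative assumptions
only, not the actual circuit):

def dark_seeker_step(light_left, light_right, dark_enough=0.2):
    """One control step for a hypothetical dark-seeking robot.
    Returns (left_wheel_speed, right_wheel_speed)."""
    if max(light_left, light_right) < dark_enough:
        return (0.0, 0.0)   # dark enough: sit still
    if light_left > light_right:
        return (1.0, 0.3)   # brighter on the left: veer right, away from it
    return (0.3, 1.0)       # brighter on the right: veer left, away from it

print(dark_seeker_step(0.9, 0.4))   # -> (1.0, 0.3)
print(dark_seeker_step(0.1, 0.05))  # -> (0.0, 0.0)
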
--
Chris Malcolm c...@aifh.ed.ac.uk +44 (0)131 650 3085
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK DoD #205
"The mind reigns, but does not govern" -- Paul Valery

Oliver Sparrow

unread,
Sep 11, 1995, 3:00:00 AM9/11/95
to
Excuse my ignorance, but what is a ZFC and why should it require a quantum
to think about it? Is it very small, that such tweezers are needed?

Back to lions. It is clear that animals model the internal states of others.

A lioness estimates how another *individual* is likely to behave in a given
situation and changes her behaviour accordingly. She does this by observing the
other individual as *another* *individual*. These observations are carried on
emitted behaviours the way these thoughts are carried on binary states, ASCII,
the seven layer comms protocol, the processor on your PC.... choose your
preferred substratum, they are all in there, playing their parts. As you are
taking in these chicken tracks and making something of them, so the lioness
takes in what she sees around her, interpreting the bits of behaviour in the
mesh of their context, the track record of the individual concerned; and from
this, an intentional lioness weaves together a picture - internalised, a
created object, like all of cognition - but, nonetheless, a real view of the
other as like her but separate, individually willed but open to influence, a
potential collaborator and possible betrayer. The picture is real because it
does things: it changes what the lioness does and it changes how the lioness
perceives.


....and speaking of perception, an interesting aside. Dogs, as readers will
know, have no colour vision, lacking the receptors in the retina by which to
perceive colour. They may well see in high definition shades of grey. The
interesting aside is that the part of the cortex given over to colour vision
in the primates appears to do duty for motion in the dog. Might the grey
rabbit explode into a polychrome rainbow as it moves? Might motion -
acceleration, repetition - each have their respective colour codes? Que quale!

_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk

Neil Rickert

unread,
Sep 11, 1995, 3:00:00 AM9/11/95
to

>Back to lions. It is clear that animals model the internal states of others.

I don't think that is at all clear. For that matter, it is not
particularly clear that humans (other than psychologists,
philosophers, physicians, etc) spend much effort modelling the
internal states of others.

>A lioness estimate how another *individual* is likely to behave in a given
>situation and changes her behaviour accordingly.

All that matters is that she treats others as individuals. I don't
have to model the internal states of a rock or of a feather, in order
to expect them to behave differently when falling. It suffices that
I deal with their external behavior, and treat them as individuals.
In the same way the lioness need only deal with the external behavior
of others in the group, provided she treats them as individuals,
rather than expecting them to be all of the same kind and all to
behave identically.

>....and speaking of perception, an interesting aside. Dogs, as readers will
>know, have no colour vision, lacking the receptors in the retina by which to
>perceive colour. The may well see in high definition shades of grey. The
>interesting aside is that the part of the cortex given over to colour vision
>in the primates appears to do duty for motion in the dog. Might the grey
>rabbit explode into a polychrome rainbow as it moves? Might motion -
>acceleration, repetition - each have their respective colour codes? Que quale!

Blind humans also have the part of the cortex normally used for color
vision. As far as I know, they manage to use it. It does not
atrophy, and there is neural activity. Should we perhaps assume that
blind people think in color?


David E. Weldon, Ph.D.

unread,
Sep 11, 1995, 3:00:00 AM9/11/95
to

}==========Neil Rickert, 9/11/95==========

}
}In <810803...@chatham.demon.co.uk> Oliver Sparrow
}<oh...@chatham.demon.co.uk> writes:
}
}>Back to lions. It is clear that animals model the internal states of
}others.
}
}I don't think that is at all clear. For that matter, it is not all
}particularly clear that humans (other than psychologists,
}philosophers, physicians, etc) spend much effort modelling the
}internal states of others.
}
}>A lioness estimate how another *individual* is likely to behave
}in a given
}>situation and changes her behaviour accordingly.
}
}All that matters is that she treats others as individuals. I don't
}have to model the internal states of a rock or of a feather, in order
}to expect them to behave differently when falling. It suffices that
}I deal with their external behavior, and treat them as individuals.
}In the same way the lioness need only deal with the external behavior
}of others in the group, provided she treats them as individuals,
}rather than expecting them to be all of the same kind and all to
}behave identically.
}
But then you place an incredibly large burden on the poor lioness' cerebral
cortex to process the other lioness as an individual since you imply that the
observing lioness has no expectations or structures to deal with the other's
presence. That is, your observing lioness must take each behavior fragment of
the other, perform a significant number of computations to determine the
"behavior fragment's" meaning, dredge up the response "associated" with that
meaning and emit it. Then wait for the other lioness to emit the next
behavior fragment, and so on. It is the computational economy of a structured
response to a stimulus pattern that gives the explanation based on internal
structures its efficacy.
}
}>[[deleted stuff on dogs and blind people]]
}
}


Neil Rickert

unread,
Sep 11, 1995, 3:00:00 AM9/11/95
to
In <DEr23...@intruder.daytonoh.attgis.com> David E. Weldon, Ph.D. <David.E...@DaytonOH.ATTGIS.COM> writes:
>}==========Neil Rickert, 9/11/95==========
>}In <810803...@chatham.demon.co.uk> Oliver Sparrow
>}<oh...@chatham.demon.co.uk> writes:

>}>Back to lions. It is clear that animals model the internal states of
>}others.

>}I don't think that is at all clear.

>}All that matters is that she treats others as individuals.

>But then you place an incredibly large burden on the poor lioness' cerebral


>cortex to process the other lioness as an individual since you imply that the
>observing lioness has no expectations or structures to deal with the other's
>presence.

I don't see this at all. The lioness may well have expectations and
structures associated with distinguishing between different
lionesses, just as I presumably have expectations and structures for
distinguishing between a feather and a rock.

What I am arguing is that it presumes too much to say that these
structures and expectations can be identified with ascription of
mental states to the various other lionesses.

> That is, your observing lioness must take each behavior fragment of
>the other, perform a significant number of computations to determine the
>"behavior fragment's" meaning, dredge up the response "associated" with that
>meaning and emit it. Then wait for the other lioness to emit the next
>behavior fragment, and so on. It is the computational economy of a structured
>response to a stimulus pattern that gives the explanation based on internal
>structures efficacy.

It is not at all obvious that the amount of computation would depend
on whether recorded information is in the form of ascribed mental
states, or in some other form. If you have a good argument which
requires that it be in the form of mental states rather than, for
example, statistical summaries of past behavior, I would be
interested in seeing that argument.
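
A minimal Python sketch of what such statistical summaries might look like
(the joined/hung-back categories and the names are illustrative assumptions):
per-individual tallies of observed behaviour, with no mental states ascribed:

from collections import defaultdict

# Per-individual tallies of observed behaviour in past challenges.
history = defaultdict(lambda: {"joined": 0, "hung_back": 0})

def record(individual, joined):
    history[individual]["joined" if joined else "hung_back"] += 1

def expected_support(individual):
    h = history[individual]
    total = h["joined"] + h["hung_back"]
    return 0.5 if total == 0 else h["joined"] / total

record("lioness_A", True); record("lioness_A", True); record("lioness_B", False)
print(expected_support("lioness_A"), expected_support("lioness_B"))  # 1.0 0.0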


Neil Rickert

unread,
Sep 11, 1995, 3:00:00 AM9/11/95
to
In <shankarD...@netcom.com> sha...@netcom.com (Shankar Ramakrishnan) writes:

>In article <431fh8$b...@mp.cs.niu.edu> ric...@cs.niu.edu (Neil Rickert) writes:

>>>For example, let us suppose that I build a very simple robot whose
>>>purpose in life is to find a dark place and sit still.

>>I find it quite unconvincing that this device has a purpose of its
>>own. It seems to me that it only has the purpose you imposed upon
>>it.

>It doesn't really matter if the 'purpose' is imposed upon by the creator
>of the robot or by evolution as far as the individual being is concerned.
>The meaning of the word 'purpose' in this context makes sense only in
>the larger scheme of things. For example, if there is a demand for
>robots that exhibit this kind of behavior, then the robot that follows
>it can be said to have a sense of purpose (its tribe would increase).

It matters immensely whether we are talking about the robot having a purpose of
its own, or having a sense of purpose. If it has no sense, then it
has no sense of purpose. If the purpose is imposed on the robot,
then it is not the robot's own purpose.

>>Compare this with the case of humans. Evolution has imposed upon us
>>the purpose of widely distributing our DNA. Most of us (with the
>>possible exception of rapists) do not consider that to be our own
>>purpose.

>With humans, because of the highly developed neocortex, evolution doesn't
>make sense anymore. What matters more than the distribtuion of DNA is
>the quality of life of each individual (at least in most Western societies).

You are just changing the meaning of words, so that they mean
something different for the robot than for the human.


Neil Rickert

unread,
Sep 11, 1995, 3:00:00 AM9/11/95
to
In <DEqpE...@festival.ed.ac.uk> c...@castle.ed.ac.uk (Chris Malcolm) writes:
>In article <42i6kj$b...@mp.cs.niu.edu> ric...@cs.niu.edu (Neil Rickert) writes:

>> I doubt that you can
>>have a successful agent unless it has goals of its own, and such
>>goals would necessarily be subjective.

>"Subjective" is a slippery word.

Perhaps so. The conventional wisdom agrees with you. However
"objective" seems to be at least as slippery, as the interminable
debates between realists, anti-realists, non-realists, etc, suggest.

> To begin with, whether or not it is
>applicable depends on which point of view you are speaking from,
>e.g., a scientist, a human being, a dog, an AI KBS, a robot, etc..

I am not quite sure of the point you are making. The very existence
of the many points of view that you have listed argues for the
importance of subjectivity.

> It is possible for something to be
>subjective from the point of view of A, and objective from the point
>of view of B.

It is possible for something to be seen from the point of view of A,
and from the point of view of B. B might reach the subjective
conclusion that his point of view is objective, but that does not
make it so.

> It is possible for something to have both subjective and
>objective aspects from a single point of view.

I think it more accurate to say that a person can often see the same
thing from many points of view. There is a tendency to say that one
of these points of view is objective, but the notion of 'objective'
is too slippery to allow that assumption to be tested. Most commonly
we use the term 'objective' to refer to the view achieved by social
consensus. But that would be better thought of as a socially shared
subjectivity, than as objectivity. There is no guarantee that
distinct societies will share the same view.

>So, I find the discussion I cite quite interesting, but so vague that
>I can't really get hold of it. For example, by "subjective" do you
>mean from your point of view, or from the agent's point of view, and
>do you consider that this *implies* non-objectivity from another point
>of view, such as the human scientific?

In the context in which I said it, I was saying that the agent needs
to have a point of view. I cannot readily argue the point of whether
that implies non-objectivity, since I am doubtful that there is such
a thing as objectivity.

>For example, let us suppose that I build a very simple robot whose

>purpose in life is to find a dark place and sit still. It doesn't even
>have a computer in it, just a few transistors and transducers. This
>device has a purpose of its own, which I designed it to have, and this
>purpose is objectively verifiable by anyone who cares to experiment
>with it or take it to bits. Can you explain what you mean by "such
>goals would necessarily be subjective"?

I find it quite unconvincing that this device has a purpose of its
own. It seems to me that it only has the purpose you imposed upon
it. It is perhaps verifiable by others that the robot is executing a
procedure which will have the effect of it finding a dark place and
sitting still. However it is not clear that you can get from a
procedure to a purpose.

Compare this with the case of humans. Evolution has imposed upon us
the purpose of widely distributing our DNA. Most of us (with the
possible exception of rapists) do not consider that to be our own
purpose. Indeed, the government in China sees it as one of its purposes to
reduce the population, and this would seem to be directly contrary to
the purposes imposed on us by evolution.


Oliver Sparrow

unread,
Sep 11, 1995, 3:00:00 AM9/11/95
to
Changes in scale and scope in nature.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The following comes off the back of a menu for a lunch celebrating a
ceremonious and solemn occasion of such splendid tedium that I was fortunate in
my neighbour. Our discussion shifted from capital risk and its measurement to
more interesting issues: to stuff and to what makes stuff do stuff. Save your
pity for my partner, however, for it was I who got his ear bent. There is many
a retooled physicist labouring before the sweaty screens in the City of
London, longing for intellectual freedom. Who knows? There may be a Physicists'
Liberation Front balaclava-ing to the rescue.

The thought that emerged from this session of menu defacement is related
to that which I have posted a number of times. Structures which are more
complex than their component parts cannot be represented by rules that govern
those components. There are layers to reality. Furthermore, some of these
layers appear to result from self-reference. Co-evolving organisms, for
example, are the product of what they create: the dynamic which lies between
competitors, predators and prey, mutualists and symbionts. They come from what
they have made; and what they make shapes and changes what they are. Such
layers are genuinely fundamental in the scheme of things, in that they are
irreducible.

Bruce Sterling is fond of citing Prigoginic levels of complexity, something
that may be more a characteristic of his novels than can be found in the
writings of the good Professor himself. However, we felt that there were
indeed such layers to be found; and we found nine of them between the soup and
the cheese. The resulting cladogram is reproduced from the menu, with each
clade bursting the bounds of its predecessor. Each is just as "fundamental" as
the other. They should probably be drawn in a circle, but this goes beyond
ascii clunk-o-graphics.

The labels, reading down the tree:

    ??
    Symmetry breaking
    The properties of monadic, isolated entities: QM, QCD
    Newtonian dynamics; the properties of small populations
    Statistical properties of large populations
    Emergence of dynamical systems: convection cells etc.
    Information bearing replicative systems: 'life as we know it, scottie.'
    Information transducing systems: rote responses
    Awareness, modelling and symbol processing
    Transcendental creativity

The last clade sounds terminally new age and may need a word of support.

Each of the clades transcends its substrata and is therefore transcendental,
which is what the word means. This one transcends perceptive awareness, in
that it takes symbols, percepts and models and makes new things from them.
Humans do things here, as may the higher vertebrates, human-created structures
such as companies and discussion groups; and perhaps machines. Creativity
which transcends the rules that are set by what is perceived and adduced is, I
fear, transcendental creativity. It is the outward sign of great things
within. It is probably the true Turing test.

_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk

Shankar Ramakrishnan

unread,
Sep 11, 1995, 3:00:00 AM9/11/95
to
In article <810803...@chatham.demon.co.uk> oh...@chatham.demon.co.uk writes:
>
>....and speaking of perception, an interesting aside. Dogs, as readers will
>know, have no colour vision, lacking the receptors in the retina by which to
>perceive colour. The may well see in high definition shades of grey.

Or high definition shades of blue or green, who knows?

The
>interesting aside is that the part of the cortex given over to colour vision
>in the primates appears to do duty for motion in the dog. Might the grey
>rabbit explode into a polychrome rainbow as it moves? Might motion -
>acceleration, repetition - each have their respective colour codes? Que quale!

Hmm. I read somewhere that some parakeets have four kinds of color receptors
instead of just three as in primates. For humans, there are only three
primary colors and any other color that we may see or even *imagine* is
a combination of these three. I wonder if those parakeets can see
colors that we humans cannot even imagine.

The same with bats. Do bats 'see' or 'hear' echolocation signals? Or is
it some kind of sense that is undescribable by humans?

I have a feeling that, without the appropriate homunculi for a particular
quale, it is not possible to even *imagine* what that quale would be
like. I may be wrong here.

Shankar

David Longley

unread,
Sep 11, 1995, 3:00:00 AM9/11/95
to
In article <DEr23...@intruder.daytonoh.attgis.com>
David.E...@DaytonOH.ATTGIS.COM "David E. Weldon, Ph.D." writes:
<snip>

> But then you place an incredibly large burden on the poor lioness' cerebral
> cortex to process the other lioness as an individual since you imply that the
> observing lioness has no expectations or structures to deal with the other's
> presence. That is, your observing lioness must take each behavior fragment of

> the other, perform a significant number of computations to determine the
> "behavior fragment's" meaning, dredge up the response "associated" with that
> meaning and emit it. Then wait for the other lioness to emit the next
> behavior fragment, and so on. It is the computational economy of a structured
> response to a stimulus pattern that gives the explanation based on internal
> structures efficacy.


A cub's relatively long period of development, play and training will suffice
to provide the necessary flexibility for later recognition. Additionally, most
of the behaviour fragments will be 'hard wired' as fixed action patterns
(UCRs).

Millions of computations already have to be done to just *process* the visual
vectors (features) in the first place, so there's no need to conceive of the
process as one of "dredging up". In fact, response rates force one to think
in terms of "resonance" and plastic configurability.

One can (should?) of course think of all computational systems this way:
large state spaces with vector-vector transformations do not require one
to conceive of 'internal' structures except metaphorically or as shorthand
descriptions.
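
A minimal Python sketch of that way of talking (the weights are random and
purely illustrative): a feature vector goes in, a response vector comes out,
and any 'internal structure' is just shorthand for the mapping itself:

import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8))   # a fixed 'wiring' from 8 features to 4 responses

def respond(features):
    """Map a feature vector straight to a response vector."""
    return W @ features

print(respond(rng.normal(size=8)))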

--
David Longley

Shankar Ramakrishnan

unread,
Sep 11, 1995, 3:00:00 AM9/11/95
to
In article <431fh8$b...@mp.cs.niu.edu> ric...@cs.niu.edu (Neil Rickert) writes:
>
>>For example, let us suppose that I build a very simple robot whose
>>purpose in life is to find a dark place and sit still. It doesn't even
>>have a computer in it, just a few transistors and transducers. This
>>device has a purpose of its own, which I designed it to have, and this
>>purpose is objectively verifiable by anyone who cares to experiment
>>with it or take it to bits. Can you explain what you mean by "such
>>goals would necessarily be subjective"?
>
>I find it quite unconvincing that this device has a purpose of its
>own. It seems to me that it only has the purpose you imposed upon
>it. It is perhaps verifiable by others that the robot is executing a
>procedure which will have the effect of it finding a dark place and
>sitting still. However it is not clear that you can get from a
>procedure to a purpose.

It doesn't really matter whether the 'purpose' is imposed by the creator
of the robot or by evolution, as far as the individual being is concerned.
The meaning of the word 'purpose' in this context makes sense only in
the larger scheme of things. For example, if there is a demand for
robots that exhibit this kind of behavior, then the robot that follows
it can be said to have a sense of purpose (its tribe would increase).
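
For concreteness, the dark-seeking robot quoted above (a device said to
contain no computer at all, just transistors and transducers) can be
sketched as a control loop in software; the sensor and motor stand-ins
and the threshold below are invented, purely for illustration.

    import random

    DARK_THRESHOLD = 0.2                 # invented value

    def read_light():
        return random.random()           # stand-in for a photosensor reading in [0, 1]

    def move_randomly():
        pass                             # stand-in for driving the motors

    def run(steps=100):
        for _ in range(steps):
            if read_light() < DARK_THRESHOLD:
                break                    # dark enough: sit still
            move_randomly()              # otherwise keep wandering

    run()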

>
>Compare this with the case of humans. Evolution has imposed upon us
>the purpose of widely distributing our DNA. Most of us (with the
>possible exception of rapists) do not consider that to be our own
>purpose. Indeed, the government in China sees one of its purposes to
>reduce the population, and this would seem to be directly contrary to
>the purposes imposed on us by evolution.
>

With humans, because of the highly developed neocortex, evolution doesn't
make sense anymore. What matters more than the distribution of DNA is
the quality of life of each individual (at least in most Western societies).

Shankar

Oliver Sparrow

unread,
Sep 12, 1995, 3:00:00 AM9/12/95
to
In article <431dm3$a...@mp.cs.niu.edu> ric...@cs.niu.edu "Neil Rickert" writes:

> Oliver Sparrow writes:
> >Back to lions. It is clear that animals model the internal states of others.
>

> I don't think that is at all clear. For that matter, it is not all
> particularly clear that humans (other than psychologists,

> philosophers, physicians, etc) spend much effort modelling the
> internal states of others.

Social interactions consist of just such modelling, for without it we would
be a bundle of reflexive rote responses. We see this in psychotics who have
defects in this area: they have a repertoire of responses that fill the space
- joke telling, catchphrases, monologues on sport, on fixed ideas - which are
trotted out irrespective of their appropriateness or welcome. The pathology is
self-evident. That we do not spend effort is the product of our childhood
learning, when we did spend such effort. We have internalised it and are now
able to perform such complex tasks effortlessly, as we drive cars without
overtly solving differential equations in our heads.

Soft sciences always feel themselves limper, less *potent* than the good ol'
hard sciences. They feel the need to stiffen themselves up a bit. None of this
subjective stuff: observations only are permissible; and from this it is a
short step to saying that only the observable is real. Balls, to continue the
metaphor: real is what does. One can convolute observations into a set of
transactions between IO-mapping black boxes - called "people" or "lionesses" -
and feel oneself hard all through; but particularly hard between the ears.

> ........... I don't
> have to model the internal states of a rock or of a feather, in order
> to expect them to behave differently when falling. It suffices that
> I deal with their external behavior, and treat them as individuals.
> In the same way the lioness need only deal with the external behavior
> of others in the group, provided she treats them as individuals,

This seems convoluted in the sense used above. Rocks and feathers have a
simple set of repertoires that we can easily model. We capture this
quickly, thanks to Galileo. Lionesses, by contrast, have complex repertoires.
In order to be usefully predictive of which aspect of that repertoire will be
presented, she - the observer - needs to model the paths to an outcome. These
paths are, however, occult and within the domed skull thereof. The model -
if it is to be useful - needs to address the issues of the internal state of
the questionable lioness. (Rather a good children's book title, that?) Thus
logic and observation combine to make it clear that the observing lioness has
a model (however projective, however partial) of the internal processes and
states of the subject lioness.

This model may be (is, probably) built largely upon paths laid down by past
interactions, by cubhood play, by general truths about how the states which
the lioness undergoes subjectively are homologous with what she sees occurring
in respect of others. It may well be, and probably is, implicit rather than
declarative: she doesn't know that she knows. The status of the model is
undoubtedly updated by the data observed in the subject lioness. Individuals
can and do manipulate the internal models of others by issuing false signals,
by trying to excite others and the like. (Watch dogs psych each other up to go
hunting.) Baby baboons quickly learn to hang around an adult who has food; and,
if not fed, to pretend to have been hurt. Mum gallops up; squabble; baby gets
the discarded food. This only works if Mum is in the right place in the hierarchy,
however, which tells one a great deal about the subjective modelling going on
in a two-month-old skull and which has *nothing* to do with current or even
recently emitted signals from any of the parties involved.

Why worry? Because open-design machine systems also have to model a complex
environment from the receipt of noise-embedded signals. They need to learn
which signals matter; and to do this, they need to create a model of what
matters. This has to be heaved up by recursion from the basic principles of
ordering with which they have been supplied. Complicated systems will - I
assert - increasingly have to make their own order as they go along, defining
temporary objective functions, temporary heuristics, temporary filters, any of
which may pass their characteristics on to what is useful in a later situation.
Much of interpretation - such as why has Du Pont done *that*? - requires one to
put oneself in Du Pont's collective shoes; which implies.... Thus building an
interpretive framework may well have much to do with building models of the
various exogenous actors, which have complex or simple properties, such that one
can understand what they may do, understand what they have done and map their
repertoire and extent into a representational framework that allows us - or the
machine - to make sense of the deluge of data and noise to which both we and
it are exposed.

_________________________________________________

oh...@chatham.demon.co.uk

Oliver Sparrow

unread,
Sep 12, 1995, 3:00:00 AM9/12/95
to
In article <432ase$5...@mp.cs.niu.edu> ric...@cs.niu.edu "Neil Rickert" writes:

> It matters immensely if talking about the robot having a purpose of
> its own, or having a sense of purpose. If it has no sense, then it
> has no sense of purpose. If the purpose is imposed on the robot,
> then it is not the robot's own purpose.

It's jump on Rickert day. However: pleasure, pain, social acceptance and
a host of tropisms are wired into us. They are, sensu Dennett, a fair whack of
what we call "purpose", or anyway the roots thereof. They may be the Yggdrasil,
the entire root of purpose. We don't know; but I would suggest not, because of
our ability to create new goals by conceiving of them. That these goals draw
upon the tropisms does not, perhaps, make them entirely *of* them.
Nevertheless, we are as captured by what we are as Chris Malcolm's lucifuge
heap of mobile transistors, or a frightened goldfish set to seek dark places by
its neuropeptide secretions. If it's purpose for us, then it's purpose for the
widget, the fish. The issue of options and mechanisms of choice is key to
where the idea of purpose ceases to be useful. It is a continuous variable:
like asking how many hairs make a beard: 1, 1000, 10,000?

--------------------------------
Oliver Sparrow
oh...@chatham.demon.co.uk

Sherwood Vine

unread,
Sep 12, 1995, 3:00:00 AM9/12/95
to
In answer to the question "Does an algorithm generate a mind?", I would say that a determinist (which I am) would say that an algorithm is the way we "think", "decide", etc. We call the process a "mind".

Andrzej Pindor

unread,
Sep 12, 1995, 3:00:00 AM9/12/95
to
In article <432ase$5...@mp.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
>In <shankarD...@netcom.com> sha...@netcom.com (Shankar Ramakrishnan) writes:
....
>>The meaning of the word 'purpose' in this context makes sense only in
>>the larger scheme of things. For example, if there is a demand for
>>robots that exhibit this kind of behavior, then the robot that follows
>>it can be said to have a sense of purpose (its tribe would increase).
>
>It matters immensely if talking about the robot having a purpose of
>its own, or having a sense of purpose. If it has no sense, then it
>has no sense of purpose. If the purpose is imposed on the robot,
>then it is not the robot's own purpose.
>
I am a bit surprised that you make this argument. From previous discussions
I understood that in your view whatever humans (just like robots) do springs
ultimately from basically deterministic physical processes in the brain.
These processes may perhaps be chaotic (and hence impossible to model exactly
and predict), but what does "own" mean in this context? You seem to be implying
that humans have some "self" which in turn has a "sense of purpose" and that
a robot does not have this. Are you suggesting a qualitative difference?
That would not be compatible with your view of the mind as I thought it to be.
Perhaps I have misunderstood your point of view; could you elaborate?

Shankar Ramakrishnan

unread,
Sep 12, 1995, 3:00:00 AM9/12/95
to
In article <432ase$5...@mp.cs.niu.edu> ric...@cs.niu.edu (Neil Rickert) writes:
>In <shankarD...@netcom.com> sha...@netcom.com (Shankar Ramakrishnan) writes:
>
>>It doesn't really matter if the 'purpose' is imposed upon by the creator
>>of the robot or by evolution as far as the individual being is concerned.
>>The meaning of the word 'purpose' in this context makes sense only in
>>the larger scheme of things. For example, if there is a demand for
>>robots that exhibit this kind of behavior, then the robot that follows
>>it can be said to have a sense of purpose (its tribe would increase).
>
>It matters immensely if talking about the robot having a purpose of
>its own, or having a sense of purpose. If it has no sense, then it
>has no sense of purpose. If the purpose is imposed on the robot,
>then it is not the robot's own purpose.

But where do you draw the line? The behavior of insects and lower animals
is hard-coded, with alleles corresponding to favorable behavior gaining
an evolutionary advantage. Even in a brain as complex as the human's,
the vestiges of reptilian behavior still remain. Behavioral psychology is
successful because of the predictability of human behavior to a remarkable
degree - and a lot of that predictability has to do with what evolution has
imposed on us.

Shankar

Andrzej Pindor

unread,
Sep 12, 1995, 3:00:00 AM9/12/95
to
In article <43430m$9...@mp.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
..................
>Philosophers talk about zombies, which behave like ordinary
>individuals yet have no mental states. Imagine a zombie lioness.
>Since the zombie lioness has no mental states, it would be pointless
>for another lioness to model those non-existent mental states. Thus,
>on your theory, the other lioness would be at a loss to account for
>and predict the behavior of the zombie, even though the zombie
>behaves just like an ordinary conscious lioness.
>
>To me this seems preposterous. Notice that the problem disappears
>if it is behavior profiles that are modelled.
>
Excellent argument. Note that it establishes behaviorism as a sensible course
of action when dealing with human psychology: if zombies are possible, then
clearly behaviorism is sufficient. If mental states are reflected in
behavior, then talking about behavior profiles is equivalent to talking about
mental states which lead to these profiles. Any aspect of mental states
which is not reflected in behavior is irrelevant for the purpose of psychology
as the science which tries to explain human behavior.

Neil Rickert

unread,
Sep 12, 1995, 3:00:00 AM9/12/95
to
In <DEszo...@gpu.utcc.utoronto.ca> pin...@gpu.utcc.utoronto.ca (Andrzej Pindor) writes:

>In article <432ase$5...@mp.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
>>In <shankarD...@netcom.com> sha...@netcom.com (Shankar Ramakrishnan) writes:
>....
>>>The meaning of the word 'purpose' in this context makes sense only in
>>>the larger scheme of things. For example, if there is a demand for
>>>robots that exhibit this kind of behavior, then the robot that follows
>>>it can be said to have a sense of purpose (its tribe would increase).

>>It matters immensely if talking about the robot having a purpose of
>>its own, or having a sense of purpose. If it has no sense, then it
>>has no sense of purpose. If the purpose is imposed on the robot,
>>then it is not the robot's own purpose.

>I am a bit surprised that you make this argument. From previous discussions
>I understood that in your view whatever humans (just like robots) do springs
>ultimately from basically deterministic physical processes in the brain.

Agreed.

>These processes may be perhaps chaotic (and hence impossible to model exactly
>and predict) but what it means "own" in this context? You seem to be implying
>that humans have some "self" which in turn has a "sense of purpose" and
>a robot does not have this.

I think it obvious that people have a "self" and a "sense of purpose". I
also think it obvious that the type of simple robot that Chris described
did not have a concept of self.

I am not at all suggesting that it is impossible to have a robot with
a sense of self. Quite the contrary. But I took it from the brief
description Chris gave that he was not discussing such a robot.


Neil Rickert

unread,
Sep 12, 1995, 3:00:00 AM9/12/95
to
In <shankarD...@netcom.com> sha...@netcom.com (Shankar Ramakrishnan) writes:
>In article <432ase$5...@mp.cs.niu.edu> ric...@cs.niu.edu (Neil Rickert) writes:
>>In <shankarD...@netcom.com> sha...@netcom.com (Shankar Ramakrishnan) writes:

>>>It doesn't really matter if the 'purpose' is imposed upon by the creator
>>>of the robot or by evolution as far as the individual being is concerned.

>>>The meaning of the word 'purpose' in this context makes sense only in
>>>the larger scheme of things. For example, if there is a demand for
>>>robots that exhibit this kind of behavior, then the robot that follows
>>>it can be said to have a sense of purpose (its tribe would increase).

>>It matters immensely if talking about the robot having a purpose of
>>its own, or having a sense of purpose. If it has no sense, then it
>>has no sense of purpose. If the purpose is imposed on the robot,
>>then it is not the robot's own purpose.

>But where do you draw the line? The behavior of insects and lower animals
>is hard-coded, with alleles corresponding to favorable behavior gaining
>an evolutionary advantage.

It is not completely clear to what extent insect behavior is hard-coded.
I tend to think it largely is. However, there was an article,
I think in American Scientist some time in spring, about ant
colonies. Apparently an ant colony learns. That is, there are
noticeable differences between the behavior emanating from a 2-year-old
ant colony and that from a 6-month-old colony. The interesting thing is
that the individual worker ants are relatively short-lived, so the
learning appears to be a property of the colony as a whole, rather
than of the individual ants.

> Behavioral psychology is
>successful because of the predictability of human behavior to a remarkable
>degree - and a lot of it has to do with the imposition of the same due to
>evolution.

I think it could be equally said that behavioral psychology is
unsuccessful because of the degree of unpredictability of human
behavior. There are psychologists who will offer one of these views
on Monday, Wednesday and Friday, and the other view on Tuesday,
Thursday and Saturday.


Neil Rickert

unread,
Sep 12, 1995, 3:00:00 AM9/12/95
to
In <DEt23...@gpu.utcc.utoronto.ca> pin...@gpu.utcc.utoronto.ca (Andrzej Pindor) writes:

>In article <43430m$9...@mp.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:

>>Philosophers talk about zombies, which behave like ordinary
>>individuals yet have no mental states. Imagine a zombie lioness.
>>Since the zombie lioness has no mental states, it would be pointless
>>for another lioness to model those non-existent mental states. Thus,
>>on your theory, the other lioness would be at a loss to account for
>>and predict the behavior of the zombie, even though the zombie
>>behaves just like an ordinary conscious lioness.

>>To me this seems preposterous. Notice that the problem disappears
>>if it is behavior profiles that are modelled.

>Excellent argument. Note that it establishes behaviorism as a sensible course
>of action when dealing with human psychology: if zombies are possible, then
>clearly behaviorism is sufficient.

I think you are trying to carry the argument a little further than it
can be taken. It may establish that behaviorism is a sensible course
when dealing with human behavior. But that is not quite the same as
dealing with human psychology. It is my impression that psychology
is often thought of as not just a science of behavior, but also a
science of internal states. That is, the aim of psychology may be
not just to predict behavior, but also to give an explanatory account
of the internal mechanisms which generate that behavior.

On that view of psychology, if zombies are possible their possibility
would demonstrate the inadequacy of behaviorism as a theory of those
internal mechanisms. Incidentally, I seriously doubt that zombies are
possible.

I should perhaps contrast my position with that of David Longley.
David has argued that the study of psychology must be based on the
study of behavior. He has also made the criticism that there is too
much loose talk about folk psychology, and that much of the talk is
contradicted by the evidence.

I largely agree with David up to that point. But then David goes on
and criticizes any cognitive scientist who dares talk about internal
states. That is where I think he goes too far. If a person's
experience during childhood (for example, that person's education)
can be one of the determinants of behavior in later life, then
evidently there are internal states which in some manner record
information about the childhood experience.

In my opinion it is completely appropriate for the cognitive
scientist to investigate internal states. Preferably the scientist
should do so without any prior commitment to assumptions that those
internal states are necessarily the mental states talked about in
folk psychology.


Neil Rickert

unread,
Sep 12, 1995, 3:00:00 AM9/12/95
to
In <810893...@chatham.demon.co.uk> Oliver Sparrow <oh...@chatham.demon.co.uk> writes:
>In article <431dm3$a...@mp.cs.niu.edu> ric...@cs.niu.edu "Neil Rickert" writes:

> > Oliver Sparrow writes:
> > >Back to lions. It is clear that animals model the internal states of others.

> > I don't think that is at all clear. For that matter, it is not all
> > particularly clear that humans (other than psychologists,
> > philosophers, physicians, etc) spend much effort modelling the
> > internal states of others.

>Social interactions consist of just such modelling, for without it, we would
>be a bundle of reflexive rote responses.

I think you misunderstand my point. I don't for one moment
underestimate the value of modelling. I am questioning what it is
that is modelled.

You claim that the lioness models internal mental states of other
lionesses. I claim that the lioness models behavior of other
lionesses, treating each as individuals with individual behavior
profiles.

I have difficulty making sense of the idea that a lioness would model
non-observable mental states rather than observable behavior
profiles. Moreover the non-observable mental states would seem less
useful, since the inference from mental states to behavior
predictions is rather tenuous. The inference from behavior profiles
to behavior predictions is more direct and is likely to be more
reliable.
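
As a toy sketch of what modelling a "behavior profile" might amount to (the
acts and observations below are invented), a per-individual statistical
summary of past behavior can be used to predict what usually follows the
currently observed act, with no mental states ascribed anywhere:

    from collections import defaultdict, Counter

    class BehaviorProfile:
        """Statistical summary of one individual's past behavior."""
        def __init__(self):
            self.transitions = defaultdict(Counter)   # act -> counts of what followed

        def observe(self, act, next_act):
            self.transitions[act][next_act] += 1

        def predict(self, act):
            follow_ups = self.transitions[act]
            return follow_ups.most_common(1)[0][0] if follow_ups else None

    profiles = defaultdict(BehaviorProfile)           # one profile per individual

    # Invented observations of individual "A"
    for act, next_act in [("crouch", "stalk"), ("stalk", "charge"), ("crouch", "stalk")]:
        profiles["A"].observe(act, next_act)

    print(profiles["A"].predict("crouch"))            # -> stalk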

> > ........... I don't
> > have to model the internal states of a rock or of a feather, in order
> > to expect them to behave differently when falling. It suffices that
> > I deal with their external behavior, and treat them as individuals.
> > In the same way the lioness need only deal with the external behavior
> > of others in the group, provided she treats them as individuals,

>This seems convoluted in the sense used above. Rocks and feathers have a
>simple set of repertoires that we can easily model. We capture this
>quickly, thanks to Galileo.

I rather suspect that people were quite capable of distinguishing
between the behaviors of rocks and feathers well before Galileo spoke
on the matter.

> Lionesses, by contrast, have complex repertoires.

Lionesses are able to have complex repertoires because they have
complex brains. Because they have complex brains they are also able
to handle the complex repertoires of other lionesses.

>In order to be usefully predictive of what aspect of that repretoir will be
>presented, she -the observer - needs to model the paths to an outcome.

I agree. But you have produced no argument as to why it is mental states
that must be modelled.

> The model -
>if it is to be useful -needs to address the issues of the internal state of
>the questionable lioness.

This is simply dogma.

Philosophers talk about zombies, which behave like ordinary
individuals yet have no mental states. Imagine a zombie lioness.
Since the zombie lioness has no mental states, it would be pointless
for another lioness to model those non-existent mental states. Thus,
on your theory, the other lioness would be at a loss to account for
and predict the behavior of the zombie, even though the zombie
behaves just like an ordinary conscious lioness.

To me this seems preposterous. Notice that the problem disappears
if it is behavior profiles that are modelled.

>This model may be (is, probably) built largely upon paths laid down by past
>interactions, by cubhood play, by general truths about how the states which
>the lioness undergoes subjectively are homologous with what she sees occurring
>in respect of others.

In other words, it is really a model of behavior patterns, rather
than a model of internal mental states.

> It may well be and probably is implicit rather than
>declarative: she doesn't know that she knows.

Aha. The purported internal mental state is nothing more than an
invention, which you conceal behind the word 'implicit'.

> Individuals
>can and do manipulate the internal models of others by issuing false signals,
>by trying to excite others and the like. (Watch dogs psych each other up to go
>hunting.)

I am not questioning whether there are internal models. I am
questioning what they are models of.


Neil Rickert

unread,
Sep 12, 1995, 3:00:00 AM9/12/95
to
In <810894...@chatham.demon.co.uk> Oliver Sparrow <oh...@chatham.demon.co.uk> writes:
> > It matters immensely if talking about the robot having a purpose of
> > its own, or having a sense of purpose. If it has no sense, then it
> > has no sense of purpose. If the purpose is imposed on the robot,
> > then it is not the robot's own purpose.

>It's jump on Rickert day.

No problem. I am not feeling oppressed.

>It's jump on Rickert day. However: pleasure, pain, social acceptance and
>a host of tropisms are wired into us. They are, sensu Dennett, a fair whack of
>what we call "purpose", or anyway the roots thereof.

I mainly agree with this.

> They may be the Yggdrasil,
>the entire root of purpose. We don't know; but I would suggest not, because of
>our ability to create new goals by conceiving of them.

They could be the root even so, depending on how one defines 'root'.

>Nevertheless, we are as captured by what we are as Chris Malcolm's lucifuge
>heap of mobile transistors;

Surely many of our purposes do indeed come from the emotions and
personality traits that we inherit from our genes. But we don't say
that those emotions and personality traits are our purposes. Rather,
they are the enablers which make it possible for us to form our own
purposes on the basis of our experience. This seems quite different
from the case of Chris Malcolm's robot, which apparently has no
ability to form its own purposes.


David Longley

unread,
Sep 12, 1995, 3:00:00 AM9/12/95
to
In article <DEt23...@gpu.utcc.utoronto.ca>
pin...@gpu.utcc.utoronto.ca "Andrzej Pindor" writes:

> In article <43430m$9...@mp.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:

> ..................


> >Philosophers talk about zombies, which behave like ordinary
> >individuals yet have no mental states. Imagine a zombie lioness.
> >Since the zombie lioness has no mental states, it would be pointless
> >for another lioness to model those non-existent mental states. Thus,
> >on your theory, the other lioness would be at a loss to account for
> >and predict the behavior of the zombie, even though the zombie
> >behaves just like an ordinary conscious lioness.
> >
> >To me this seems preposterous. Notice that the problem disappears
> >if it is behavior profiles that are modelled.
> >

> Excellent argument. Note that it establishes behaviorism as a sensible course
> of action when dealing with human psychology: if zombies are possible, then

> clearly behaviorism is sufficient. If mental states are reflected in
> behavior, then talking about behavior profiles is equivalent to talking about
> mental states which lead to these profiles. Any aspect of mental states
> which is not reflected in behavior is irrelevant for the purpose of psychology
> as the science which tries to explain human behavior.
>
> Andrzej
> --
> Andrzej Pindor The foolish reject what they see and
> University of Toronto not what they think; the wise reject
> Instructional and Research Computing what they think and not what they see.
> pin...@gpu.utcc.utoronto.ca Huang Po
>

Yep....that's about it I reckon.
--
David Longley

Neil Rickert

unread,
Sep 13, 1995, 3:00:00 AM9/13/95
to
In <811006...@longley.demon.co.uk> David Longley <Da...@longley.demon.co.uk> writes:
>In article <436q6q$r...@mp.cs.niu.edu> ric...@cs.niu.edu "Neil Rickert" writes:

>> When I think of a computer, the notion of internal store is very much
>> more than a metaphor. It is easy enough to point to the silicon and
>> the iron oxide where that store is physically located. They do sell
>> memory chips and disk drives on the open market. (Whether there is
>> such a store for the brain is, of course, a rather different
>> question).
>>

>Internal to what? Internal to the hermetically sealed drive case perhaps?
>All that changes is its physical state, so too with animal behaviour.
> ^^^^^^^^^^^^^^^^
>(In human RAM, the power is *never* turned off).

If you are counting all physical states as behavior, then your
definition of 'behavior' has become so broad as to be almost
meaningless. Most people want to restrict the use of 'behavior' to
externally observable changes of physical state.


Neil Rickert

unread,
Sep 13, 1995, 3:00:00 AM9/13/95
to
In <810977...@chatham.demon.co.uk> Oliver Sparrow <oh...@chatham.demon.co.uk> writes:

>Neil has set out a position with which I fully agree. I hope that he does,
>too. :) We do not model other's mental states in the sense of simulating
>them and sampling what happens. Empathy consists, however, of the ability
>to place yourself in another's position. Without it, social interaction would
>be impossible, art meaningless and Hollywood bankrupt.

Yes, I pretty much agree with this.

>We learn this structure. Children go through well-described stages in the
>development of socialisation and the display of empathy. It is an essential
>part of our tool kit as social beings.

The emphasis on the social is important here. This, I think, is
where AI may have missed the boat. Most AI seems to have ignored
many of the requirements of social relations. Instead, AI has thought
of its problem as developing an autonomous system which does not
require a social organization for its survival. In a sense, this is
understandable, for the complexities of social structure are high.
However AI has aimed for a system which could use natural language.
Yet natural language seems to be very much a system which evolved in
a society, and which serves the needs of social organization. Why
should we think that a non-social AI system can manage to use
natural language?


Andrzej Pindor

unread,
Sep 13, 1995, 3:00:00 AM9/13/95
to
In article <434ouk$q...@mp.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
>In <DEt23...@gpu.utcc.utoronto.ca> pin...@gpu.utcc.utoronto.ca (Andrzej Pindor) writes:
..............

>>Excellent argument. Note that it establishes behaviorism as a sensible course
>>of action when dealing with human psychology: if zombies are possible, then
>>clearly behaviorism is sufficient.
>
>I think you are trying to carry the argument a little further than it
>can be taken. It may establish that behaviorism is a sensible course
>when dealing with human behavior. But that is not quite the same as
>dealing with human psychology. It is my impression that psychology
>is often thought of as not just a science of behavior, but also a
>science of internal states. That is, the aim of psychology may be
>not just to predict behavior, but also to give an explanatory account
>of the internal mechanisms which generate that behavior.
>
I have nothing against internal states/mechanisms, but only in the same sense
as, for instance, internal states of an atomic nucleus are postulated in order
to explain observable physical properties of the nuclei. In other words,
only those internal states of the brain/mind which are reflected in behavior
make sense. If you are dealing with a black-box system you should only postulate
as much internal structure as may be determinable from input-output data.
Fantasizing about what colour the gears inside may be, or whether there are
any drawings on the inside of the box, is not useful if these attributes do not
have demonstrable influence on input-output data.
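
A toy illustration of the black-box point (everything in it is invented):
two internally different implementations with identical input-output
behaviour are indistinguishable to an observer of that behaviour, so the
data warrant nothing beyond the shared I/O mapping itself.

    # Two black boxes with different internals but identical I/O behaviour.
    class BoxA:
        def respond(self, x):
            return x * x                       # computes the square directly

    class BoxB:
        def __init__(self):
            self._gears = {i: i * i for i in range(100)}   # precomputed table
        def respond(self, x):
            return self._gears[x]              # looks the answer up

    # From input-output data alone the two are indistinguishable:
    assert all(BoxA().respond(x) == BoxB().respond(x) for x in range(100))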

>On that view of psychology, if zombies are possible their possibility
>would demonstrate the inadequacy of behaviorism as a theory of those
>internal mechanisms.

No, this would demonstrate that we have no basis to consider those internal
mechanisms, that talking about them is an empty exercise as far as
science is concerned.

>...................Incidently, I seriously doubt that zombies are
>possible.
>
Me too.

>I should perhaps contrast my position with that of David Longley.
>David has argued that the study of behavior must be based on the
>study of behavior. He has also made the criticism that there is too
>much loose talk about folk psychology, and that much of the talk is
>contradicted by the evidence.
>
>I largely agree with David up to that point. But then David goes on
>and criticizes any cognitive scientist who dares talk about internal
>states. That is where I think he goes too far. If a person's
>experience during childhood (for example, that person's education)
>can be one of the determinants of behavior in later life, then
>evidently there are internal states which in some manner record
>information about the childhood experience.
>

From another of David Longley's postings I understand that he is not against
talking about internal states as describing states of the system leading to
given behavior. I believe that he is against giving those internal states
lives of their own, independent of their role in behavior. I agree with him.
Consider for the moment that zombies are possible. Clearly the machines which
zombies are have internal states, like any other complex system. However,
consciousness is then inaccessible to scientific study, because we have no
handle on it, and David would rightly object (am I right, David?) to any talk
about it in the context of discussing human behavior. Again, I'd fully agree
with this.

>In my opinion it is completely appropriate for the cognitive
>scientist to investigate internal states. Preferably the scientist
>should do so without any prior committment to assumptions that those
>internal states are necessarily the mental states talked about in
>folk psychology.
>

I agree. Moreover, I think that the mental states talked about in folk
psychology are very ill-defined heuristic correlates of behavioral patterns
and should be given no more attention than folk weather predictors. Do
you think that weather scientists should incorporate into their theories
the length of a groundhog's shadow on a certain day, or even the colour of the sunset?

Andrzej Pindor

unread,
Sep 13, 1995, 3:00:00 AM9/13/95
to
In article <4373ae$3...@mp.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
>In <810977...@chatham.demon.co.uk> Oliver Sparrow <oh...@chatham.demon.co.uk> writes:
>
>>Neil has set out a position with which I fully agree. I hope that he does,
>>too. :) We do not model other's mental states in the sense of simulating
>>them and sampling what happens. Empathy consists, however, of the ability
>>to place yourself in another's position. Without it, social interaction would
>>be impossible, art meaningless and Hollywood bankrupt.
>
>Yes, I pretty much agree with this.
>
Well, if this were the case as a rule, then we would expect others to always
behave/feel like us.
Clearly a lot of people have problems accepting that others may behave/feel
differently from us. However, some are capable of this and, it seems to me,
they manage this by noticing the behavior of others and detecting patterns in it.
Do you expect that aggressive lionesses empathize with the cowardly ones?


>>We learn this structure. Children go through well-described stages in the
>>development of socialisation and the display of empathy. It is an essential
>>part of our tool kit as social beings.
>
>The emphasis on the social is important here. This, I think, is
>where AI may have missed the boat. Most AI seems to have ignored
>many of the requirement of social relations. Instead, AI has thought
>of its problem as developing an autonomous system which does not
>require a social organization for its survival. In a sense, this is
>understandable, for the complexities of social structure are high.
>However AI has aimed for a system which could use natural language.
>Yet natural language seems to be very much a system which evolved in
>a society, and which serves the needs of social organization. Why
>should we think that a non-social AI system can manage to use
>natural language?
>
In principle I agree with your point about the role of social organization
in the use of language. I think that semantics is completely determined by
social interactions. Also, I suspect that consciousness may be very closely
tied to language and social interactions. However, I think that many
aspects of AI are independent of the social context. Consider for instance the
artificial insects constructed by Brooks (sp?).

David E. Weldon, Ph.D.

unread,
Sep 13, 1995, 3:00:00 AM9/13/95
to

}==========Neil Rickert, 9/11/95==========

}
}In <DEr23...@intruder.daytonoh.attgis.com> David E. Weldon,
}Ph.D. <David.E...@DaytonOH.ATTGIS.COM> writes:
}>}==========Neil Rickert, 9/11/95==========
}>}In <810803...@chatham.demon.co.uk> Oliver Sparrow
}>}<oh...@chatham.demon.co.uk> writes:
}
}>}>Back to lions. It is clear that animals model the internal states
}of
}>}others.
}
}>}I don't think that is at all clear.
}>}All that matters is that she treats others as individuals.
}
}>But then you place an incredibly large burden on the poor lioness' cerebral
}>cortex to process the other lioness as an individual since you imply that the
}>observing lioness has no expectations or structures to deal with the other's
}>presence.
}
}I don't see this at all. The lioness may well have expectations and
}structures associated with distinguishing between different
}lionesses, just as I presumably have expectations and structures for
}distinguishing between a feather and a rock.
}
}What I am arguing is that it presumes too much to say that these
}structures and expectations can be identified with ascription of
}mental states to the various other lionesses.
}
}> That is, your observing lioness must take each behavior fragment of
}>the other, perform a significant number of computations to determine the
}>"behavior fragment's" meaning, dredge up the response "associated" with that
}>meaning and emit it. Then wait for the other lioness to emit the next
}>behavior fragment, and so on. It is the computational economy of a structured
}>response to a stimulus pattern that gives the explanation based on internal
}>structures efficacy.
}
}It is not at all obvious that the amount of computation would depend
}on whether recorded information is in the form of ascribed mental
}states, or in some other form. If you have a good argument which
}requires that it be in the form of mental states rather than, for
}example, statistical summaries of past behavior, I would be
}interested in seeing that argument.
}
There are numerous levels of evidence that point toward the use of structures
to avoid computational explosions. At the purely computational level, indexed
table lookups are far more efficient than computations when dealing with symbol
manipulation, for example (consider a 1000-row table that converts an error
code to the English explanation, as compared to an algorithm that looks at each
character of the code and then computes the meaning of the character code,
then combines that meaning with the previously computed character code
meanings to finally produce a response).
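
A toy contrast of the two approaches, purely for illustration (the error
codes and messages below are invented):

    # Indexed lookup: one table access per error code.
    ERROR_TABLE = {"E001": "disk full", "E002": "file not found", "E003": "access denied"}

    def explain_by_lookup(code):
        return ERROR_TABLE.get(code, "unknown error")

    # Character-by-character "computation": build the meaning piecemeal.
    def explain_by_computation(code):
        parts = []
        for ch in code:
            if ch == "E":
                parts.append("error")
            elif ch.isdigit():
                parts.append("digit " + ch)    # combine per-character meanings
        return " / ".join(parts)

    print(explain_by_lookup("E002"))           # one step
    print(explain_by_computation("E002"))      # many steps, cruder result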

Now consider psychophysical experiments on motion that demonstrate that it is
possible to so impoverish a stimulus pattern that observers will report a
moving light when in fact there are actually two or more lights flashing
in sequence. Or observers will report seeing a rectangle inclined in the
visual field when in fact the stimulus is a flat trapezoid.

Now consider a person who sees the entire environment moving when in fact
he/she is just jiggling one eyeball by placing his/her finger next to the eyeball
and moving his/her finger back and forth. Note that the person is fully aware
of the eyeball being moved, yet perceives the visual field as in motion.

Now consider a popular tune played to the observer in a different key. If the
observer "knows" the tune, he/she will recognize the tune, even though the two
stimulus patterns share no notes whatsoever. Anyone who would claim
that this is done computationally stretches the limits of credibility. Or
consider two speakers reading the same passage to an observer. Assuming no
distortion, the observer will have no difficulty hearing and understanding
both speakers, even though, when the acoustic patterns are laid side by side,
there is very little (even superficial) resemblance between the two patterns.

I could go on and on. But I want to return to the lioness for a second to
make another point. All mammals have exceptionally well-developed visual
systems that store whole images. What prevents these systems from storing a
whole sequence of these patterns (kind of like a snippet of videotape)? This
pattern sequence could have tags all over it identifying the meaning of the
pattern, its implications for the observing lioness and the appropriate
response. It is just a small step further to assuming that these motion-picture
patterns can branch at various points, with the branches indicating
different meanings and therefore requiring a different response. The
computations involved in this scenario are simply pattern-matching
computations, which are far easier to do in the brain than on any machine.
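
A rough sketch of tagged-sequence matching along these lines (all tags and
sequences below are invented for the illustration):

    # Stored "snippets": behavior sequences tagged with a meaning and a response.
    SNIPPETS = [
        (("ears_back", "crouch", "tail_twitch"),
         {"meaning": "aggression", "response": "retreat"}),
        (("roll_over", "paw_wave"),
         {"meaning": "play", "response": "approach"}),
    ]

    def match(observed):
        obs = tuple(observed)
        for pattern, tags in SNIPPETS:
            n = len(pattern)
            # Does the stored snippet occur anywhere in the observed sequence?
            if any(obs[i:i + n] == pattern for i in range(len(obs) - n + 1)):
                return tags
        return None

    print(match(["yawn", "ears_back", "crouch", "tail_twitch"]))
    # {'meaning': 'aggression', 'response': 'retreat'}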

I could go on
}


David E. Weldon, Ph.D.

unread,
Sep 13, 1995, 3:00:00 AM9/13/95
to

}==========David Longley, 9/11/95==========

}
}In article <DEr23...@intruder.daytonoh.attgis.com>
} David.E...@DaytonOH.ATTGIS.COM "David E.
}Weldon, Ph.D." writes:
}<snip>
}> But then you place an incredibly large burden on the poor lioness' cerebral
}> cortex to process the other lioness as an individual since you imply that the
}> observing lioness has no expectations or structures to deal with the other's
}> presence. That is, your observing lioness must take each behavior fragment of
}> the other, perform a significant number of computations to determine the
}> "behavior fragment's" meaning, dredge up the response "associated" with that
}> meaning and emit it. Then wait for the other lioness to emit the next
}> behavior fragment, and so on. It is the computational economy of a structured
}> response to a stimulus pattern that gives the explanation based on internal
}> structures efficacy.
}
}
}A cub's relatively long period of development, play and training will suffice
}to provide the necessary flexibility for later recognition.

This is exactly my point. The time to learn the structure patterns (clusters
of behavior that imply aggression, submission, flight, approach-friendly,
approach-curious, etc.) is there. Since in mammals, the visual cortex is a
significant chunk of the entire cortex (even in man), the substrate to support
the structures is there. And the motivation to learn them is there as well.

} Additionally, most
}of the behaviour fragments will be 'hard wired' as fixed action patterns
}(UCRs).

This is a UCR on your part that requires too much of my time to refute.
Experience has also taught me that the sequence of exchanges will follow such
a predictable pattern that responding to it is too aversive.


}
}Millions of computations already have to be done to just *process* the visual
}vectors (features) in the first place, so there's no need to conceive of the
}process as one of "dredging up". In fact, response rates force one to think
}in terms of "resonance" and plastic configurability.
}

You are confusing what you have to do on a computer to simulate or match what
is done in the visual, auditory, kinesthetic, and other sensory transducer
systems of mammals. There is an energy transformation that converts a variety
of types of physical signals to information-carrying electrical pulses. During
this process, assimilation and contrast computations do occur, but this is
done to clean up the "images" so they more closely match their platonic forms
(i.e., their gestalts). When the transduced signals arrive at the brain, they
are reformed into a mental isomorph of the original pattern. If Karl Pribram
is right, this mental isomorph is superimposed on a noise background (which
seems to sharpen the pattern even further). In any case, this process requires
far, far, far fewer computations than the corresponding computer simulation.
Otherwise, the constant need for more CPU cycles would not be there and
computers would be faster perceivers than we are.

}
}One can (should?) of course think of all computational systems this way.
}large state spaces with vector-vector transformations do not require one
}to conceive of 'internal' structures except metaphorically or as shorthand
}descriptions.
}

Wrongo bigtime!!! If this were true, human mental activity could be modelled
by a finite state machine, and a fairly small one at that. Your conceptual
metaphor is only slightly more sophisticated than the classic telephone
switchboard model and adds no additional value to our
understanding--so you might as well use the telephone switchboard metaphor.
Both are totally off the mark.
}


Neil Rickert

unread,
Sep 13, 1995, 3:00:00 AM9/13/95
to
In <DEuuL...@intruder.daytonoh.attgis.com> David E. Weldon, Ph.D. <David.E...@DaytonOH.ATTGIS.COM> writes:
>}==========Neil Rickert, 9/11/95==========

>}I don't see this at all. The lioness may well have expectations and
>}structures associated with distinguishing between different
>}lionesses, just as I presumably have expectations and structures for
>}distinguishing between a feather and a rock.

>}What I am arguing is that it presumes too much to say that these
>}structures and expectations can be identified with ascription of
>}mental states to the various other lionesses.

>There are numerous levels of evidence that point toward the use of structures
>to computational explosions.

I am having trouble seeing what you are trying to argue here. I have
already agreed that there is likely to be data structuring. My
disagreement with you is as to what is represented in those
structures. I am arguing against the view that what is represented
is mental states of the lionesses being observed. Traditionally this
means things such as beliefs, desires, intentions, etc. I think it
far more likely that behavior profiles are represented.

You seem to only be arguing the need for structures, and I have never
disagreed with that. Either you should be agreeing with me, or you
should be giving more support to the idea that it must be mental
states that are represented.

>I could go on and on. But I want to return to the lioness for a second to
>make another point. All mammals have exceptionally well developed visual

Perhaps an exaggeration (did you ever hear the saying "as blind as
a bat"?). But probably correct for the lioness.

>systems that store whole images.

Wait just a minute. The evidence for copy theories of memory is
quite weak.

> What prevents these systems from storing a
>whole sequence of these patterns (kind of like a snippet of videotape). This
>pattern sequence could have tags all over it identifying the meaning of the
>pattern, its implications for the observing lioness and the appropriate
>response.

Well, ok. I think it quite unlikely. But even so, this would fit
better with the recorded data as behavior profiles than as mental states
of the lionesses being observed.

> The
>computations involved in this scenario are simply pattern-matching
>computations wich are far easier to do in the brain than on any machine.

Have you thought of the computational complexity of the pattern
matching which you require? In my opinion, the computational
complexity of the pattern matching problem rules out your videotape
scenario. It is a little facile to say that the pattern matching
computations are easier to do in the brain, when we don't know
whether that is how the brain works. At best you can say that
if you assume the brain does pattern matching, then the empirical
evidence suggests that it does it well. But then you are claiming
that if we assume the brain does pattern matching, we then
have evidence that the brain does pattern matching. This is
not very persuasive.


Neil Rickert

unread,
Sep 13, 1995, 3:00:00 AM9/13/95
to
In <DEuwF...@gpu.utcc.utoronto.ca> pin...@gpu.utcc.utoronto.ca (Andrzej Pindor) writes:
>In article <434ouk$q...@mp.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:

>I have nothing against internal states/mechanisms, but only in the same sense
>as for instance internal states of an atomic nucleus are postulated in order
>to explain observable physical properties of the nuclei. In other words,
>only these internal states of the brain/mind make sense which reflect on
>behavior. If you are dealing with a black box system you should only postulate
>as much internal structure as may be determinable from input-output data.
>Fantasizing about what colour may the gears inside be, or are there any
>drawings on the inside of the box is not useful if these atributes do not
>have demonstrable influence on input-output data.

I have no disagreement with this.

>>On that view of psychology, if zombies are possible their possibility
>>would demonstrate the inadequacy of behaviorism as a theory of those
>>internal mechanisms.

>No, this would demonstrate that we have no basis to consider those internal
>mechanisms, that talking about them is an empty exercise as far as
>science is concerned.

This seems to say that if zombies exist, their existence proves that
they don't exist. It must have lost something in the translation.

>>In my opinion it is completely appropriate for the cognitive
>>scientist to investigate internal states. Preferably the scientist
>>should do so without any prior committment to assumptions that those
>>internal states are necessarily the mental states talked about in
>>folk psychology.

>I agree. Moreover, I think that the mental states talked about in folk
>psychology are very ill-defined heuristic correlates of behavioral patterns
>and should be given no more attention than the folk weather predictors.

I am as skeptical of the folk psychological mental states as are you,
and I have made that clear. But at various times when I have talked
about internal states (not the folk psychological ones), David has
jumped on me. His position is, to say the least, confusing.


David Longley

unread,
Sep 13, 1995, 3:00:00 AM9/13/95
to
In article <DEuuL...@intruder.daytonoh.attgis.com>

David.E...@DaytonOH.ATTGIS.COM "David E. Weldon, Ph.D." writes:
<snip>
It's a straw man.

Anyone who has been impressed by the Gibsonian 'direct perception' approach
would take each one of the above and point out that the eye is always moving
and for a reason, as stabilized images fade through habituation, and movement
is required for feature extraction. All of the examples you cite assume an
ecologically invalid model of the perceptual systems. It makes far more sense
to believe that each perceptual system has developed to extract patterns or
invariants out of the different arrays of energy which are afforded to them.
All the examples you cite just make the systems work in ways they are not
designed for. The same argument goes for illusions.
>
Just about all of the research in behavioural neuroscience uses classical
conditioning preparations, so I fail to understand why you object to my
remarks to that effect.

AP seems to have a sound grasp of the basics (and probably much more) of the
position I have outlined elsewhere in 'Fragments of Behaviour: The Extensional
Stance'. What surprises me is that so many even *want* to get into the
quagmire of cognitivism. I can only assume that they don't know the long and
fruitless history of psychology, or that they *do* know how lonely work as a
behaviour scientist can be and have no taste for the hard work required.

However, I don't want to have to go over old ground again either, so let's
not feud over this. Suffice it to say that I consider the rhetoric of nearly
all of contemporary cognitivism and AI to be a retreat from sound scientific
work.

Anyone unfamiliar with the theme developed in 'Fragments..' can ask me for
the 9 articles via e-mail.
--
David Longley

Oliver Sparrow

unread,
Sep 13, 1995, 3:00:00 AM9/13/95
to
In article <811006...@longley.demon.co.uk>
Da...@longley.demon.co.uk "David Longley" writes:

> Internal to what? Internal to the hermetically sealed drive case perhaps?
> All that changes is its physical state, so too with animal behaviour.

Nope. Internal to its data architecture. A bit is meaningful only in relation
to other bits, a concept to other concepts, a pattern of neural excitation or
induration to the relations that it has with other systems. Being relational,
therefore, it has to be described, evaluated and used in a contextual
framework. Behaviours are only behaviours when we see them as such: a twitch
matters, as I observed, only when a twitch matters; and what constitutes
something that matters is a complex issue which has a great deal to do with
internal states.

_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk

Andrzej Pindor

unread,
Sep 13, 1995, 3:00:00 AM9/13/95
to
In article <4372hu$3...@mp.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
>In <811006...@longley.demon.co.uk> David Longley <Da...@longley.demon.co.uk> writes:
>>In article <436q6q$r...@mp.cs.niu.edu> ric...@cs.niu.edu "Neil Rickert" writes:
>
>>> When I think of a computer, the notion of internal store is very much
>>> more than a metaphor. It is easy enough to point to the silicon and
>>> the iron oxide where that store is physically located. They do sell
>>> memory chips and disk drives on the open market. (Whether there is
>>> such a store for the brain is, of course, a rather different
>>> question).
>>>
>
>>Internal to what? Internal to the hermetically sealed drive case perhaps?
>>All that changes is its physical state, so too with animal behaviour.
>> ^^^^^^^^^^^^^^^^
>>(In human RAM, the power is *never* turned off).
>
>If you you are counting all physical states as behavior, then your
>definition of 'behavior' has become so broad as to be almost
>meaningless. Most people want to restrict the use of 'behavior' to
>externally observable changes of physical state.
>
Changes which are not externally observable one way or another fall prey
to Occam's razor. Why consider them at all?

Neil Rickert

unread,
Sep 13, 1995, 3:00:00 AM9/13/95
to
In <810977...@longley.demon.co.uk> David Longley <Da...@longley.demon.co.uk> writes:

>In article <434ouk$q...@mp.cs.niu.edu> ric...@cs.niu.edu "Neil Rickert" writes:

>> In my opinion it is completely appropriate for the cognitive
>> scientist to investigate internal states. Preferably the scientist
>> should do so without any prior committment to assumptions that those
>> internal states are necessarily the mental states talked about in
>> folk psychology.

>Where 'internal' states are studied or discussed, they are only studied
>*as* behaviours.

If you define 'behavior' so broadly that everything counts as
behavior, then your consistent complaint that we should only study
behavior becomes an empty one.

> The belief to the contrary is just an example of our
>tendency 'to go beyond the information given'. Secondly, where we do
>discuss such states it is really, I would say, in lieu of more careful,
>ie explicit, step by step, (effective) explanation. This is why I object
>to so called 'Cognitive' *scientists*. They are poets and sophists who
>cheat real scientists out of work through their rhetoric and false promises.

When I write a computer program, that program specifies only changes
to internal states of the computer. The behavior of the computer
system is controlled by those internal states. If we were to observe
your proscription on dealing with internal states, then all computer
programmers would have to cease and desist. We would be left with
computers as black boxes, and people hoping and praying that they
would behave as desired.
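
A minimal illustration of this point (the example itself is invented): the
program below specifies nothing but changes to an internal counter, yet that
hidden counter is what governs the only externally observable behaviour.

    # The only externally observable behaviour here is what gets printed, yet
    # the program text specifies nothing but changes to an internal counter.
    class Counter:
        def __init__(self):
            self._count = 0              # internal state, invisible from outside

        def poke(self):
            self._count += 1             # the program specifies only this state change
            if self._count % 3 == 0:
                print("beep")            # the externally observable behaviour

    c = Counter()
    for _ in range(9):
        c.poke()                         # prints "beep" three times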

>Reference to past behaviour does not require one to subsequently make
>reference to 'internal' states. A re-configuration of a systems states
>on the basis of a learning episode at time T1 may well lead to such
>behaviour being emitted again at time T2, but that will be because the
>system is in a different state from what it was before T1.

You have just contradicted yourself. In one sentence you say that
there is no need to refer to internal states. In the very next
sentence you talk about the system being in a different internal
state at T2 than at T1.

> Think of a
>computer and its RAM and Hard Disk bit map at T1 and T2. What is different
>at T1 and T2 is the overall system *configuration*. The notion of an
>*internal* store with retrieval is just a convenient Cartesian metaphor.

David Longley

unread,
Sep 13, 1995, 3:00:00 AM9/13/95
to
In article <4372hu$3...@mp.cs.niu.edu> ric...@cs.niu.edu "Neil Rickert" writes:

> In <811006...@longley.demon.co.uk> David Longley <Da...@longley.demon.co.uk> writes:
> >In article <436q6q$r...@mp.cs.niu.edu> ric...@cs.niu.edu "Neil Rickert"
> >writes:

> >> When I think of a computer, the notion of internal store is very much
> >> more than a metaphor. It is easy enough to point to the silicon and
> >> the iron oxide where that store is physically located. They do sell
> >> memory chips and disk drives on the open market. (Whether there is
> >> such a store for the brain is, of course, a rather different
> >> question).
> >>
>

> >Internal to what? Internal to the hermetically sealed drive case perhaps?
> >All that changes is its physical state, so too with animal behaviour.
> >                   ^^^^^^^^^^^^^^^^
> >(In human RAM, the power is *never* turned off).
>
> If you you are counting all physical states as behavior, then your
> definition of 'behavior' has become so broad as to be almost
> meaningless. Most people want to restrict the use of 'behavior' to
> externally observable changes of physical state.

The above reply warrants further consideration than you have given it perhaps.

Your use of metaphor becomes mixed in the Von Neumann context. Storage is
the basic term, and primary and secondary are used to differentiate between
RAM and disk. There's nothing 'internal' in the obfuscatory sense of the
'cognitive rhetoricians' who, in my view, *are* the bane of progress in the
behavioural sciences. At the moment, such views just sound eccentric, but
in time I reckon we'll look back at the latter part of the 20th century &
see cognitivism for the retreat from behaviour *science* that it is. It's the
fast food of academia.

The CNS and its function are externally observable, as is the universe more
generally. The whole of physical science *is not* rendered meaningless through
its commitment to what is observable and measurable.

Where cognitivists go wrong is in taking their metaphors literally. As I have
said at length in 'Fragments of Behaviour: The Extensional Stance', this is
because they believe they can quantify into the intensional (they reify where
they should explicate). I doubt whether I will have much more to say on this
that I have not said before. My remarks about changes in state space being
behavioural (physical) should be given due consideration, though.
--
David Longley

David Longley

unread,
Sep 14, 1995, 3:00:00 AM9/14/95
to
In article <437jam$f...@mp.cs.niu.edu> ric...@cs.niu.edu "Neil Rickert" writes:
>
> I'm having trouble working out what you are talking about. On the one
> hand you criticize anyone who talks about internal states. But then
> you exclude lots of things that seem to me to be internal states.
>
> Let me try some questions:
>
> We can write the number 2. So the number 2 can be thought of as
> physical. But we cannot write the complete expansion of the
> square root of 2. Does that mean that all talk of the square
> root of 2 should be abolished?
>
> If a belief could be shown to have a physical representation,
> would it be alright to talk about such mental states as beliefs?
>
> Since it is possible that beliefs have a physical representation,
> is it permissible to talk about beliefs even before that
> representation is known, as part of a research program to
> find out what that representation is?
>
> Or is it just that you have a case against cognitive psychology,
> but you generalize it until challenged, but then backtrack to
> a more specific criticism?
>
>
I consider the questions either subjunctive conditionals or metaphysical.
My argument is simple, and it has been clearly understood and expressed
by AP. All talk of 'internal states' is metaphorical for either 'I can't
be bothered to outline the details, this will have to do' or 'I don't
know how it actually works but this will fill the gap'. The point is that
these so-called "explanations" are hokum, redescriptions of the problem.
Skinner put it very nicely in the diatribe he launched against cognitive
science back in 1987.

Once upon a time scientists said what they knew and then said "that's it,
I'll tell you how much further I get another time" (or let someone else
take those steps). Now, everyone has a 'gap filler', but they don't seem
to realise how many gaps there are and how all the "filling" amounts to a
library of fiction: "cognitivism".

Such was 70s/80s intensionalism - lots of inflationary hot air with nothing of
substance behind it. But so long as a good chat and 'stimulating ideas' are
all that's wanted, that will do, eh?

Unfortunately, as with everything else, there's a price to pay, and the price
has been paid in the pursuit and delivery of truth. Professional psychology is
just about bankrupt (cf. R. Dawes, "House of Cards: Psychology & Psychotherapy
Built on Myth", 1994), but from the way the AI and Cognitive Science community
around here talk, they are so well advanced both theoretically and practically
that such mundane issues are way beneath their concerns. However, for grants
and student places one needs to be able to write good stories and make almost
Walter Mitty-like claims (a somewhat psychopathic disposition does no harm
either!).

The *really* sad fact is that those doing so actually admit that this is
largely what they are "forced to do"!
--
David Longley

David Longley

unread,
Sep 14, 1995, 3:00:00 AM9/14/95
to
<If this is a repeat, put it down to problems with the demon service tonight
in the UK>

Oliver Sparrow

unread,
Sep 14, 1995, 3:00:00 AM9/14/95
to
There is a danger with too much abstraction. We understand some things about
some visual cortexes. It seems reasonably established that within these, image
primitives are recognised at various levels of abstraction: edges are
established, areas defined, colours assigned to these; and that the limbic
system has learned to note clusters of such percepts as being meaningful, in
the sense that reacting to them has been reinforced. How this reinforcement
operates in the naive brain is an interesting set of problems, revolving
around what might be called the "!" detector: the entity which recognises what
matters.

This aside, it seems reasonable to assume that hierarchicality is general:
that structures which are evoked by one set of activities are themselves
components of another set. In some instances, this may be circular, where a
half-recognised percept triggers a process of searching for stronger
evidence for its presence. High orders of this are accessible to us - "was
that John?" - and it is well-proven that we can, for example, find things
better in a muddle - the router bit in the mess on the toolshed bench - when
we have visualised the item first. This architecture happens; we can watch it
happen, test for its presence, view parts of the brain light up as centres of
excitation ripple back and forth. Such systems undoubtedly underpin what we
experience as mental states: when they fail, the states are altered. When they
act, they correlate with the experienced states. To suggest that the processes
which appear to support mental states are inaccessible defies the mountainous
literature, millions in investment and some useful investigatory tools. It
denies that pharmaceuticals which intervene in known ways on specific classes
of operand can have predictable impacts on mental states. It follows that,
unless one takes the idea of "emitted behaviour" to cover "anything that one
can know", it is wilful to deny important evidence on the basis of some
catch phrases.

I have no idea what proportion of the brain's activity is concerned with
internal housekeeping and what with what, in a computer, would be called
non-storage IO. My guess is that the ratio is very high - that most of what
happens is internally driven and necessary to the operation of the ensemble.
Indeed, we spend a fair fraction of our lives in complete solipsism, asleep
or dreaming. Most of what happens is, therefore, internal and not emitted.
What is internal is, to a fair degree, accessible. Knowledge representation
- how to walk, how to understand other beings and relate to them, how to
catch a gnu - is a mixture of symbolic processing (which, curiously and
measurably, seems to need an aware mind to occur) and processes which are not
susceptible to analysis by awareness, where we - the cluster of primitives
and capabilities - have learned to 'just do it'. Such processes are
orchestrated and managed; and it is these management structures which are
simultaneously the most fascinating and the least accessible quality about
mentation.

It is possible to argue that all beings are best seen as clusters of
capabilities which have acquired the capacity to "just do it": that the
management systems are as reflexive and as learned as the subsystems of
visual recognition described above. It is unquestionably the case that many
of them are exactly this: motor skills, for example, are learned and honed
through practice and are apparently independent of much of what else goes on
in the brain. Critically, however, in certain lesions - such as Parkinsonism
- the volitional connections are damaged and motor indeterminacy results. The
system cannot "decide" what to do and gets stuck, or oscillates between
alternative attractors. Many aspects of the nervous system may be analogous
to "fire and forget" weapons: get me over there, something urges; and the
motor processes fulfil the request. (In, of course, far more complex ways
than this suggests.)

This remains the key issue, however: if the mind is volition-free, a bundle
of systems which exert weights and achieve results that are optimised by
learning, then neo-behaviourism, taking account of internal processes and
what they emit, will be sufficient. If the epiphenomenon of awareness of
which we all appear to be aware has the capacity to change outcomes, however,
then this is an operand and has to be understood on its own terms, which are
those of mental states. This may be a damn nuisance and may muddy the pure
waters; but that is, I fear, tough.

I have suggested elsewhere how epiphenomena acquire degrees of freedom that
allow them to transcend the limiting dimensionality of their support systems.
The means of decoupling mental states from neural architecture is, therefore,
available, given that you accept this view. If you do not, then the doors of
system determinism (or the escape hatches of quantum thinginess) close on
you, creaking hinges of the charnel house foretell... etc. Given that you buy
this (or remain agnostic as to how this is achieved but accept that it is
achieved), then one is left with a model of a multilevel brain, something as
follows:

1: Self-assembling, competing pruned networks delivering primitives. Loops of
self-referentiality error-correct and present a surface for optimisation.

2: Structures assembled from primitives, bound together by similar loops and
other forms of resonance; perhaps independent data storage. Each structure
is more than somewhat specialised - Broca through Minsky to J.N.Sci. - and
has a set of repertoires which are evokable by "calls" to them. Recursive
calls possible; internal error correction etc. Much of the lioness's
understanding of the way others work would happen at this level. Pleasure,
pain, physiological status integration and presentation.

3: Integration and management; co-ordination amongst (2); and probably poking
down into (1) where this is germane. Learning management.

4.1: Epiphenomena of the management system: the theatre of consciousness,
binding the systems that generate drives, pleasure, pain, percepts, status
descriptors, and the emitted behaviour of parallel systems in (2).

4.2: Symbolic representation, symbol processing.

Well; there are other ways of doing this and this is certainly wrong, muddled
and stupid. Nevertheless, it hints at the huge complexity that is bounded by
the castle of the skull. It is unlikely that successful AI would be any less
complex unless it was focused on less complex tasks, such as opening lift
doors.
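
Purely as a caricature of the layering proposed above (every class and method
name here is invented for the sketch; it illustrates the levels, not any real
neural system), the model might be put in code as follows:

    # Levels 1 to 4.1/4.2 of the sketch above, as a toy object hierarchy.

    class PrimitiveNet:                 # 1: self-assembling nets delivering primitives
        def detect(self, stimulus):
            return {"edges": len(stimulus), "tone": sum(map(ord, stimulus)) % 256}

    class Structure:                    # 2: specialised assemblies, evokable by "calls"
        def __init__(self, nets):
            self.nets = nets
        def evoke(self, stimulus):
            return [net.detect(stimulus) for net in self.nets]

    class Manager:                      # 3: integration, co-ordination, learning management
        def __init__(self, structures):
            self.structures = structures
        def coordinate(self, stimulus):
            return [s.evoke(stimulus) for s in self.structures]

    class Theatre:                      # 4.1/4.2: binding into a presented "scene" plus symbols
        def __init__(self, manager):
            self.manager = manager
        def present(self, stimulus):
            scene = self.manager.coordinate(stimulus)
            return {"scene": scene, "report": "bound %d structure(s)" % len(scene)}

    brain = Theatre(Manager([Structure([PrimitiveNet(), PrimitiveNet()])]))
    print(brain.present("a gnu on the savannah"))

The point of the toy is only that each level calls down into the one below and
presents a simplified surface upward, as the numbered sketch suggests.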

Apologies for so long an entry: aeroplanes woke me at 04.15 this morning.

_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk

Marty Stoneman

unread,
Sep 14, 1995, 3:00:00 AM9/14/95
to
Neil Rickert (ric...@cs.niu.edu) wrote:

[SNIP]

: The emphasis on the social is important here. This, I think, is
: where AI may have missed the boat. Most AI seems to have ignored
: many of the requirements of social relations. Instead, AI has thought
: of its problem as developing an autonomous system which does not
: require a social organization for its survival. In a sense, this is
: understandable, for the complexities of social structure are high.
: However AI has aimed for a system which could use natural language.
: Yet natural language seems to be very much a system which evolved in
: a society, and which serves the needs of social organization. Why
: should we think that a non-social AI system can manage to use
: natural language?

Exactly! And why should we think our human brain architectures (even
those supporting language) are not based upon and built upon those of our
non-human mammalian social forebears?
Marty Stoneman
ma...@indirect.com
P.S. BTW, some of us caught this boat.

