
Re: Request for comments: 'Orthogonal recombinable competences'


Douglas Newman and Betty Ng

30 Mar 2006, 13:27:18

Many thanks, Aaron, for your detailed reply to my comments. You wrote:

> E.g. Doug thought a difference between the precocial species (e.g.
> houseflies, deer, chicks) and the altricial species (e.g. lions, chimps,
> humans) had something to do with consciousness.
As explained below, I don't like the concept of altricial species.
>
> Most of the time I do not find use of the noun 'consciousness' useful in
> scientific discussions, because it refers very loosely to a large,
> ill-defined collection of different things when used by different people
> and sometimes even by the same person on different occasions. And there
> are conflicting theories about what 'it' is and how 'it' works
> with little hope of resolution because it's not clear that there is any
> 'it' that everyone is talking about.
>
> So I prefer to couch both descriptions of what needs to be explained and
> explanatory theories in terms that don't use the word.

I agree that the noun 'consciousness' can lead to confusion, and is best
avoided in discussions of mechanisms, as I should have avoided it here. In
the present context it should be replaced with the concept of a
'self-referential process'. Exactly what is meant by this is described on
Pages 3, 3a and 3b of my web site.

I suggest that altricial processes are self-referential (as detailed on my
web site), so that information is selected on the basis of its relevance to
the system that selects it. This avoids random collection and provides a
mechanism for the organisation and retention of the collected information.
My web site argues that all mammal and bird species have some
self-referential abilities, although the range of these abilities varies
considerably.
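
To make the idea of relevance-based selection concrete, here is a toy
sketch in Python (the class, the 'concerns' and the items are all invented
for illustration; none of this is taken from my web site). Incoming items
are retained only if they bear on one of the system's own current concerns,
and are stored under that concern, which organises the collected
information as a side effect:

class SelectiveLearner:
    def __init__(self, concerns):
        self.concerns = concerns                  # the system's own concerns
        self.memory = {c: [] for c in concerns}   # storage organised by concern

    def relevance(self, item, concern):
        # Stand-in for whatever mechanism judges relevance; here
        # perception is assumed to have tagged the item already.
        return concern in item.get('tags', ())

    def observe(self, item):
        for concern in self.concerns:
            if self.relevance(item, concern):
                self.memory[concern].append(item)  # retained and organised
                return True
        return False                               # irrelevant: not collected

learner = SelectiveLearner({'food', 'shelter'})
learner.observe({'what': 'red berry', 'tags': ('food',)})   # kept under 'food'
learner.observe({'what': 'cloud shape', 'tags': ()})        # discarded

The only point of the sketch is that nothing in it is random: what is kept,
and where, is determined by the system's own concerns.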

>
> So I am looking for a description of visual mechanisms that can
>
> (a) identify and extract some common information about the environment
> in a variety of very different physical environments (different
> lighting, different materials, shapes, different games being played
> etc.), where retinal patterns are globally very different e.g.
> - information that two surfaces are moving together with a
> protrusion from another object between them (e.g. when two
> fingers grasp a cup handle, or upper and lower jaw grasp a
> bone), or two hands grasp a rock
>
> - information that something long and thin can go through
> a circular ring and while it is in that state if it starts
> moving in a direction in the plane of the ring then it will
> cause the ring to move
>
> and
> (b) extract other information from other circumstances, e.g. that some
> kinds of things can be bent and will keep their shape after bending,
> or some kinds of things can be prodded to change their shape and
> will return to the original shape after being prodded,
>
> and
>
> (c) in a new situation solve a problem by combining those pieces of
> information, e.g. getting something with a handle attached, but out
> of reach to move into reach, by making a hook from a straight piece
> of wire and passing an end of the hook through the handle, then
> pulling, as Betty the New Caledonian crow in Oxford does in order to
> get at food in a bucket at the base of a glass tube:
> http://users.ox.ac.uk/~kgroup/tools/tools_main.html
>
> Of course, a newborn human infant (and most crows) cannot do that. So
> there is also the question of what exactly has changed between animal
> species that cannot do such things and their descendants that can, or
> between a child at time 1 that cannot do that thing and at time 2 can.
>
It follows from my suggestion (above) that no organism searches for
information about its environment in the abstract. Hence my answer to the
above is that the self-referential process develops through shifts in
motivation, which are in turn related to organisms getting a better
understanding of their relationship with their environment.

> The question may have different kinds of (correct) answers, some
> referring to physiological changes in the brain, some referring to
> changes in virtual machines implemented in the brain, some referring to
> new information pathways linking pre-existing mechanisms, etc.
>
My suggestion relates to a process that involves the brain (as a virtual
machine), the physical capabilities of the organism and the affordances
provided by its environment. There is (I think) a crucial difference with
Aaron's characterisation of the problem here: I have argued that
self-referential processes are non-algorithmic.

>
> [B]
> Explanations that treat the child or the crow as talking to itself in
> something like English, e.g.
>
> 'If, in my current situation (A), I carry out action B,
> outcome C will be produced.'
>
> can't go unchallenged when we are talking about animals or infants that
> give no evidence of being able to use the vocabulary and syntax of
> something like an adult human language.
>
I didn't mean to imply that the process depended on language. The statement
can be expressed in different words, of course, but words still have to be
used. A third-person account reads:

An organism receives sensory inputs that characterise its situation (A) as
one member of a set of discrete possibilities. On the basis of its previous
experience, it can then map possible actions (B), chosen from a discrete
set and constrained by A and its competencies, onto expected outcomes (C)
expressed as effects on itself.
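
To show that nothing linguistic is needed, the account can be written as a
toy table in Python (the names and example entries are invented purely for
illustration): experience is just a mapping from discrete (situation,
action) pairs to outcomes described as effects on the organism.

class ForwardModel:
    def __init__(self):
        self.experience = {}   # (situation, action) -> expected outcome

    def record(self, situation, action, outcome):
        self.experience[(situation, action)] = outcome

    def expected_outcomes(self, situation, possible_actions):
        # Map each available action B onto the expected outcome C,
        # given the current situation A and past experience.
        return {a: self.experience.get((situation, a))
                for a in possible_actions}

model = ForwardModel()
model.record('cup-in-reach', 'grasp', 'holding-cup')
model.record('cup-in-reach', 'push', 'cup-moved-away')
print(model.expected_outcomes('cup-in-reach', ['grasp', 'push', 'wait']))
# {'grasp': 'holding-cup', 'push': 'cup-moved-away', 'wait': None}

The situations, actions and outcomes are symbols here only for brevity;
nothing in the scheme requires them to be words.
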
> However if it is postulated that
> there is some information processing mechanism in the brain of the
> animal or child, or in a virtual machine running on its brain, that
> builds information structures that are interpreted as universally
> quantified condition-consequence statements or rules,
>
(which is more or less what I meant)

> then that *may* be part of a good theory, but we need a lot more detail
> as to *how* such generalisations are derived from, or triggered by, all
> the patterns of sensory information and motor signals that were
> previously encountered.
>
My web site makes some suggestions in this respect, but is far from
complete.

> We also need to know how such structures are implemented in
> brain mechanisms, including how the *semantic* competence (treating
> these internal structures as referring to external structures,
> relationships and processes) develops.
>
Some recent papers mention the role of the orbitofrontal cortex in
predicting outcomes, e.g. Schoenbaum et al., Trends in Neurosciences
29 (2006) 116-124, and Roberts, Trends in Cognitive Sciences 10 (2006)
83-90.

> =======================
> [C]
>
> Many of the existing theories that refer to discovery and use of
> sensorimotor contingencies refer to mechanisms that learn associations
> between combinations of sensor and motor signals.
>
> But *those* associations do not carry the information about 3-D
> structures and processes that can be perceived and produced in all sorts
> of different ways, and can be thought about when they are not being
> produced (e.g. in planning).
>
> E.g. patterns of retinal stimulation bear no simple relation to
> conditions in which two surfaces are moving together in 3-D so that an
> object between them is grasped: there's an infinite variety of sensor
> patterns corresponding to the same 3-D process.
>
> How is the commonality found? How is it stored for future use? How
> is it combined with other information e.g. about fragility of certain
> substances, so as to control the grasping process?
>

In my approach commonality arises from being able to predict and reproduce
what is effectively the same outcome from an action on the environment.
Simple organisms can carry out relatively few distinct actions, and have
correspondingly few expectations. Human skills and expectations are far
more complex, but still discrete and finite.
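
A toy version of this in Python (the episodes are invented for
illustration): superficially different situations are grouped together
simply because the same action reliably yields the same outcome in each of
them.

from collections import defaultdict

def group_by_outcome(experiences):
    # experiences: iterable of (situation, action, outcome) triples
    groups = defaultdict(set)
    for situation, action, outcome in experiences:
        groups[(action, outcome)].add(situation)
    return groups

episodes = [
    ('handle-seen-from-left', 'grasp', 'object-held'),
    ('handle-seen-from-right', 'grasp', 'object-held'),
    ('handle-in-dim-light', 'grasp', 'object-held'),
    ('flat-surface', 'grasp', 'nothing-held'),
]
for key, situations in group_by_outcome(episodes).items():
    print(key, sorted(situations))
# ('grasp', 'object-held') collects the three retinally different views:
# the commonality is recovered from the shared action-outcome pair, not
# from any similarity between the sensory patterns themselves.
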
>
> In contrast I am asking about mechanisms and forms of representation
> that refer to, represent and use associations between, objective 3-D
> structures and processes in the *environment*, e.g. what is common to
> grasping with your right hand, your left hand, two hands holding an
> object between them, or your mouth. (This was discussed on the
> web site that I referred to when I started this thread.)
>
The problem, as I see it, is how we progress from an implicit self-model,
which forms the basis of the self-referential process, to the explicit world
and self-models that derive from this process. Human world models are
heavily dependent on language, but simple spatial models (e.g. the spatial
maps used by rats) preceded this.

> [E]
>
> Doug Newman thought I was treating altricial knowledge collection as an
> almost random process.
Not quite. While emphasising that it could not be random, I could not find
the feature of your (present) model that made it non-random. But that,
perhaps, is the core problem.
>
> On the contrary, although there is a huge amount of research in AI which
> does depend on random processes (e.g. in evolutionary computations,
> simulated annealing, and other non-deterministic search mechanisms), in
> contrast the learning done by human infants (at least after several
> months) seems to use powerful, mostly deterministic mechanisms seeking
> and finding a great deal of structure in the environment possibly using
> hypothesise and test processes, but not random ones.
>
> Random processes could take as long as evolution did!
>
> (Some of the randomness is reduced by the role of parents in providing
> appropriate environments, including toys, at various stages of
> development. Some of it is reduced by children finding out what adults
> do, or being challenged by tasks adults set them.)
>
I agree with the above.
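
To make the contrast with random search concrete, here is a toy
hypothesise-and-test loop in Python (the rule language and the observations
are invented for illustration): candidate generalisations are tried in a
fixed order and checked against the evidence, rather than sampled at random
as in evolutionary computation or simulated annealing.

def hypothesise_and_test(observations, candidate_rules):
    # candidate_rules: an ordered list of predicates over (action, outcome)
    for rule in candidate_rules:
        if all(rule(action, outcome) for action, outcome in observations):
            return rule                  # first hypothesis the evidence allows
    return None                          # no candidate fits

observations = [('bend-wire', 'stays-bent'), ('bend-twig', 'stays-bent')]
candidates = [
    lambda a, o: o == 'springs-back',                          # refuted
    lambda a, o: a.startswith('bend') and o == 'stays-bent',   # survives
]
rule = hypothesise_and_test(observations, candidates)
print(rule is candidates[1])             # True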

Thanks for the reference to the baby video. A short account of baby
watching appears on Page 4a of my web site. I agree that much more is
needed.

>
> [G]
>
> Doug wrote
>> Aaron has proposed an altricial-precocial spectrum. However, there seems
>> to
>> be a discontinuity in this spectrum in that altricial learning in living
>> organisms is (on my interpretation) necessarily conscious, while
>> inherited
>> precocial skills can operate without consciousness.
>
> As mentioned above, the label 'consciousness' does not refer to any
> explanatory mechanism, and I cannot see how it is helpful to say that
> the young of precocial species using innately programmed competences,
> like chicks pecking for food or young deer running with the herd shortly
> after birth, are 'without consciousness'.
>
Apologies again! If I replace 'conscious' with 'self-referential', then it
makes sense to say that precocial mechanisms do not involve self-reference.
I did not intend to rule out the possibility that a given organism could
use both precocial and altricial mechanisms. On the contrary, I think of
altricial mechanisms as overlaying precocial mechanisms, just as the
neocortex overlays the 'old' parts of the brain. Hence, while it makes
sense to talk of precocial species, I don't like the concept of altricial
species.

> Doug again:
>
>> If this is right, attempts to produce unconscious robots with altricial
>> skills are likely to run into the same problems as traditional AI.
Let me again replace 'conscious' with 'self-referential'.
>
> I don't see why robots have to be unconscious.

They certainly could be self-referential but, as far as I am aware, no robot
has yet been constructed that could become self-referential. (A preliminary
suggestion appears on Page 8 of my web site.)

Apologies, again, for unnecessarily bringing 'consciousness' into the
discussion. I hope the above makes my comments more useful.

Regards,
Doug
www.con-structure.org.uk
