Conscious robots


meekerdb

Sep 26, 2012, 2:35:16 PM
to EveryThing
An interesting paper which comports with my idea that "the problem of consciousness" will be "solved" by engineering.  Or John Clark's point that consciousness is easy, intelligence is hard.

Consciousness in
Cognitive Architectures
A Principled Analysis of RCS, Soar and ACT-R

Here's an excerpt:

"The justifiable quest for methods for managing reasoning about selves in this
context is driven by the desire of moving responsibility for system robustness
from the human engineering and operation team to the system itself. This is
also the rationale behind the autonomic computing movement but in our case
the problem is much harder as the bodies of our machines are deeply embedded
in the physics of the world.
But the rationale for having self models is even deeper than that: if model-based
control overpasses in capabilities to those of error-based control, the
strategy to follow in the global governing of a concrete embedded system is
not just recognising departure from setpoints but anticipating the behavior
emerging from the interaction of the system with its surrounding reality.
Hence the step from control systems that just exploit models of the object,
to control systems that exploit models of the pair system + object is a necessary
one in the ladder of increased performance and robustness. This step
is also observable in biological systems and while there are still loads of unsolved
issues around, the core role that “self” plays in the generation of sophisticated
behaviour is undeniable. Indeed, part of the importance of self-consciousness
is related to distinguishing oneself from the environment in
this class of models (e.g. for action/agency attribution in critical, bootstrapping
learning processes)."
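The excerpt's step from error-based control to control over a model of the pair system + object can be sketched in a toy example. All names, numbers and dynamics below are my own illustration, not code or notation from the paper:

```python
# Toy contrast between error-based and model-based control.
# Purely illustrative; not from the paper under discussion.

def error_based_step(setpoint, measurement, gain=0.5):
    """React only to the current departure from the setpoint."""
    return gain * (setpoint - measurement)

class ModelBasedController:
    """Keeps a model of the pair (system + object) and anticipates the
    behaviour emerging from their interaction."""
    def __init__(self, object_drift, actuator_lag):
        self.object_drift = object_drift  # model of the controlled object
        self.actuator_lag = actuator_lag  # model of the system itself

    def step(self, setpoint, measurement):
        # Predict where the object will be by the time our own
        # (lagged) action takes effect, and act on that prediction.
        predicted = measurement + self.object_drift * self.actuator_lag
        return setpoint - predicted

print(error_based_step(10.0, 8.0))   # reacts to the current error only
ctrl = ModelBasedController(object_drift=1.0, actuator_lag=2.0)
print(ctrl.step(10.0, 8.0))          # corrects for predicted drift as well
```

The second controller needs a model of its own actuation (here just a lag constant) in addition to a model of the object, which is the paper's point about modelling the pair rather than the object alone.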

http://cogprints.org/6228/1/ASLAB-R-2008-004.pdf

Brent
"The perfect machine does not exist, mechanically speaking. The only perfect machine is a woman."
      --- Ettore Bugatti, quoted by Enzo Ferrari

Craig Weinberg

Sep 26, 2012, 3:19:06 PM
to everyth...@googlegroups.com

On Wednesday, September 26, 2012 2:35:27 PM UTC-4, Brent wrote:
An interesting paper which comports with my idea that "the problem of consciousness" will be "solved" by engineering.  Or John Clark's point that consciousness is easy, intelligence is hard.

Consciousness is easy if you already have consciousness. It is impossible if you don't. Intelligence is hard if you already have consciousness, but it is impossible if you don't. Everything assumes that consciousness exists as a possibility in the universe prior to the existence of the universe itself.
 

Consciousness in
Cognitive Architectures
A Principled Analysis of RCS, Soar and ACT-R

Here's an excerpt:

"The justifiable quest for methods for managing reasoning about selves in this
context is driven by the desire of moving responsibility for system robustness
from the human engineering and operation team to the system itself. This is
also the rationale behind the autonomic computing movement but in our case
the problem is much harder as the bodies of our machines are deeply embedded
in the physics of the world.
But the rationale for having self models is even deeper than that: if model-based
control overpasses in capabilities to those of error-based control, the
strategy to follow in the global governing of a concrete embedded system is
not just recognising departure from setpoints but anticipating the behavior
emerging from the interaction of the system with its surrounding reality.
Hence the step from control systems that just exploit models of the object,
to control systems that exploit models of the pair system + object is a necessary
one in the ladder of increased performance and robustness. This step
is also observable in biological systems and while there are still loads of unsolved
issues around, the core role that “self” plays in the generation of sophisticated
behaviour is undeniable. Indeed, part of the importance of self-consciousness
is related to distinguishing oneself from the environment in
this class of models (e.g. for action/agency attribution in critical, bootstrapping
learning processes)."

Just because the self plays a role doesn't mean that the self is nothing but a role. A king or chief plays an important role, but it is an actual person doing the role playing. There isn't just 'kingness' that fills in for functions which benefit by executive control.

Craig
 

meekerdb

Sep 26, 2012, 3:37:01 PM
to everyth...@googlegroups.com
On 9/26/2012 12:19 PM, Craig Weinberg wrote:
On Wednesday, September 26, 2012 2:35:27 PM UTC-4, Brent wrote:
An interesting paper which comports with my idea that "the problem of consciousness" will be "solved" by engineering.  Or John Clark's point that consciousness is easy, intelligence is hard.

Consciousness is easy if you already have consciousness. It is impossible if you don't. Intelligence is hard if you already have consciousness, but it is impossible if you don't.

So are you now contending that intelligent machines *must be* conscious, and that therefore there are no intelligent machines?


Everything assumes that consciousness exists as a possibility in the universe prior to the existence of the universe itself.

I don't even know how to parse "everything assumes"?

Brent

Craig Weinberg

Sep 26, 2012, 4:27:20 PM
to everyth...@googlegroups.com


On Wednesday, September 26, 2012 3:37:09 PM UTC-4, Brent wrote:
On 9/26/2012 12:19 PM, Craig Weinberg wrote:
On Wednesday, September 26, 2012 2:35:27 PM UTC-4, Brent wrote:
An interesting paper which comports with my idea that "the problem of consciousness" will be "solved" by engineering.  Or John Clark's point that consciousness is easy, intelligence is hard.

Consciousness is easy if you already have consciousness. It is impossible if you don't. Intelligence is hard if you already have consciousness, but it is impossible if you don't.

So are you now contending that intelligent machines *must be* conscious, and that therefore there are no intelligent machines?

I am saying that consciousness is a prerequisite for developing intelligence. If you are conscious and intelligent, you can record intelligent functions and automate their playback in an intelligent way in physical media which support that level of control (not fog, not live hamsters...computers need reliable discrete bits that change or don't change unless they are supposed to.)

What you call intelligent machines I would call advanced automated services. They have no consciousness at all at the personal level, but in order to function, those services must be supported by consciousness on the sub-personal (molecular-electronic correlate) level. Our personal sense and motives are riding on top of the impersonal consequences of the sub-personal activities of the machine.

See if this diagram helps: http://24.media.tumblr.com/tumblr_maxawpbuDl1qeenqko1_1280.jpg

There are qualitative distinctions on the right hand side, and quantitative distinctions of scale on the left. To create automated services, we exploit the impersonal side of lower levels to reflect back our own reconstructed depersonalized self image. An automaton. A meticulously crafted emptiness to serve our personal motives.

By conflating the left and right sides and flattening the levels, it becomes plausible to think of an exterior as the same thing as an interior or a collection of digits as a gestalt whole. All of these distinctions, however, require a conscious agent to provide sense, participation, and perspective to begin with - things which are qualitative, right-hand features and never impersonal left-hand functions.
 

Everything assumes that consciousness exists as a possibility in the universe prior to the existence of the universe itself.

I don't even know how to parse "everything assumes"?

Eh, yeah, that was maybe not such a good way to put it. I meant that every way that we can possibly think of to model the universe already takes for granted the assumption of the potential for consciousness. Without that assumption, there is no way to get from whatever we are starting with to where we are now.

Craig


Brent

Evgenii Rudnyi

Oct 11, 2012, 5:36:45 AM
to everyth...@googlegroups.com
On 26.09.2012 20:35 meekerdb said the following:
> An interesting paper which comports with my idea that "the problem of
> consciousness" will be "solved" by engineering. Or John Clark's
> point that consciousness is easy, intelligence is hard.
>
> Consciousness in Cognitive Architectures A Principled Analysis of
> RCS, Soar and ACT-R
>

I have started reading the paper. Thanks a lot for the link.

An interesting quote from the beginning.

p. 13 "A possible path to the solution of the increasing control
software complexity is to extend the adaptation mechanism from the core
controller to the whole implementation of it.

Adaptation of a technical system like a controller can be during
construction or at runtime. In the first case the amount of rules for
cases and situations results in huge codes, besides being impossible to
anticipate all the cases at construction time. The designers cannot
guarantee by design the correct operation of a complex controller. The
alternative is *move the adaptation from the implementation phase into
the runtime phase*. To do it while addressing the pervasive requirement
for increasing autonomy the single possibility is to move the
responsibility for correct operation to the system itself."
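The move from construction-time adaptation to runtime adaptation that the quote describes can be illustrated with a toy controller that retunes its own parameter from runtime feedback instead of relying on a rule table enumerated at design time. This is my own hypothetical sketch, not code from the paper:

```python
# Toy illustration of moving adaptation into the runtime phase.
# Illustrative only; not from the paper under discussion.

class SelfTuningController:
    """Adapts its one parameter at runtime instead of relying on a
    rule table enumerated at construction time."""
    def __init__(self, gain=0.1, learn_rate=0.05):
        self.gain = gain
        self.learn_rate = learn_rate

    def act(self, error):
        return self.gain * error

    def adapt(self, error_before, error_after):
        # Runtime adaptation: if acting made things worse, back off.
        if abs(error_after) >= abs(error_before):
            self.gain *= (1 - self.learn_rate)

plant_state = 0.0                # toy plant we want to drive to 1.0
ctrl = SelfTuningController()
for _ in range(50):
    error = 1.0 - plant_state
    plant_state += ctrl.act(error)
    ctrl.adapt(error, 1.0 - plant_state)
print(round(plant_state, 3))     # converges close to the target 1.0
```

The designer never enumerates cases here; the responsibility for keeping the operation correct sits in the adaptation loop that runs with the system.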

I guess that this is exactly what happens within the software industry.

"*move the adaptation from the implementation phase into the runtime phase*"

They just do something and then test in the runtime phase what happens
and what should be corrected. I am not sure that I like it, although it
seems impossible to change.

One typical way out is not to use the newest version of the software
and let the software companies test it on early adopters. Unfortunately
this does not work well, as often the features needed are available in the
newest version only. In this case, another typical strategy is to
keep both versions (the newest and the previous) running and switch
between them when necessary. The complexity indeed grows.

Evgenii

Evgenii Rudnyi

Oct 11, 2012, 6:01:02 AM
to everyth...@googlegroups.com
On 11.10.2012 11:36 Evgenii Rudnyi said the following:
> On 26.09.2012 20:35 meekerdb said the following:
>> An interesting paper which comports with my idea that "the problem
>> of consciousness" will be "solved" by engineering. Or John
>> Clark's point that consciousness is easy, intelligence is hard.
>>
>> Consciousness in Cognitive Architectures A Principled Analysis of
>> RCS, Soar and ACT-R
>>
>
> I have started reading the paper. Thanks a lot for the link.
>

Another interesting quote.

p. 18 "Architectures that model human cognition. One of the mainstreams
in cognitive science is producing a complete theory of human mind
integrating all the partial models, for example about memory, vision or
learning, that have been produced. These architectures are based upon
data and experiments from psychology or neurophysiology, and tested upon
new breakthroughs. However, these architectures do not limit themselves
to be theoretical models, and have also practical application, i.e.
ACT-R is applied in software based learning systems: the Cognitive
Tutors for Mathematics, that are used in thousands of schools across the
United States. Examples of this type of cognitive architectures are
ACT-R and Atlantis."

I will repeat the point that I have made previously. If there are already
practical applications of this type working, they would be very good
candidates to check what happens with consciousness. I could imagine two
different situations.

1) Engineers have developed such an architecture without thinking about
consciousness. Now imagine that an empirical study nevertheless demonstrates
that consciousness is already there. This, in my view, would prove
epiphenomenalism of consciousness.

2) Engineers have developed such an architecture taking
consciousness into account. Now imagine that an empirical study confirms
that consciousness is there. This, in my view, would be the solution of
Hard Problem.

I am curious what answer I will find at the end of the report.

Evgenii Rudnyi

Oct 11, 2012, 7:58:57 AM
to everyth...@googlegroups.com
On 11.10.2012 11:36 Evgenii Rudnyi said the following:
> On 26.09.2012 20:35 meekerdb said the following:
>> An interesting paper which comports with my idea that "the problem
>> of consciousness" will be "solved" by engineering. Or John
>> Clark's point that consciousness is easy, intelligence is hard.
>>
>> Consciousness in Cognitive Architectures A Principled Analysis of
>> RCS, Soar and ACT-R
>>
>
> I have started reading the paper. Thanks a lot for the link.
>

I have finished reading the paper. I should say that I am not impressed.
First, interestingly enough:

p. 30 "The observer selects a system according to a set of main features
which we shall call traits."

Presumably this means that without an observer a system does not exist.
In a way this is logical, as without a human being what is available is
just an ensemble of interacting strings.

Now let me make some quotes to show you what the authors mean by
consciousness in the order they appear in the paper.

p. 45 "This makes that, in reality, the state of the environment, from
the point of view of the system, will not only consist of the values of
the coupling quantities, but also of its conceptual representations of
it. We shall call this the subjective state of the environment."

p. 52 "These principles, biologically inspired by the old metaphor (or
not so metaphor but an actual functional definition) of the brain-mind
pair as the controller-control laws of the body (the plant), provides a
base characterisation of cognitive or intelligent control."

p. 60 "Principle 5: Model-driven perception - Perception is the
continuous update of the integrated models used by the agent in a
model-based cognitive control architecture by means of real-time
sensorial information."

p. 61 "Principle 6: System awareness - A system is aware if it is
continuously perceiving and generating meaning from the continuously
updated models."

p. 62 "Awareness implies the partitioning of predicted futures and
postdicted pasts by a value function. This partitioning we call meaning
of the update to the model."

p. 65 "Principle 7: System attention - Attentional mechanisms allocate
both physical and cognitive resources for system processes so as to
maximise performance."

p. 116 "From this perspective, the analysis proceeds in a similar way:
if model-based behaviour gives adaptive value to a system interacting
with an object, it will give also value when the object modelled is the
system itself. This gives rise to metacognition in the form of
metacontrol loops that will improve operation of the system overall."

p. 117 "Principle 8: System self-awareness/consciousness - A system is
conscious if it is continuously generating meanings from continuously
updated self-models in a model-based cognitive control architecture."

p. 122 'Now suppose that for adding consciousness to the operation of
the system we add new processes that monitor, evaluate and reflect the
operation of the "unconscious" normal processes (Fig.
fig:cons-processes). We shall call these processes the "conscious" ones.'

If I understood it correctly, when the authors develop the software they
just mark some bits as a subjective state and some processes as
conscious. Voilà! We have a conscious robot.
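The principles quoted above (6 through 8) can be rendered as a toy loop that continuously updates a self-model and derives "meaning" by scoring predicted futures with a value function. This is my own hypothetical reading of the quoted principles, not the authors' implementation:

```python
# Toy rendering of Principles 6-8 as quoted above. Hypothetical
# sketch; the field names and thresholds are mine, not the paper's.

def value(state):
    # The "meaning" of a predicted state is just a good/bad score here.
    return "good" if state["battery"] > 0.2 else "bad"

def perceive(self_model, sensor_reading):
    # Principles 5-6: perception as continuous update of the model.
    self_model["battery"] = sensor_reading
    return self_model

def predict(self_model, drain=0.1):
    # Principle 8: the model is applied to the system itself.
    return {"battery": self_model["battery"] - drain}

self_model = {"battery": 1.0}
for reading in [0.9, 0.5, 0.25]:
    self_model = perceive(self_model, reading)
    meaning = value(predict(self_model))
    print(reading, meaning)
```

On this reading the "conscious" part really is just the loop that keeps the self-model current and attaches a value to its predictions, which is exactly Evgenii's complaint: the labels do all the work.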

Let us see what happens.

Evgenii
--
http://blog.rudnyi.ru/2012/10/consciousness-in-cognitive-architectures.html

Roger Clough

Oct 11, 2012, 10:13:06 AM
to everything-list
Hi Evgenii Rudnyi

The following components are inextricably mixed:

life, consciousness, free will, intelligence

You can't have one without the others,
and (or because) they're all nonphysical, all subjective.
So only the computer can know for sure whether it
has any of these.


Roger Clough, rcl...@verizon.net
10/11/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Evgenii Rudnyi
Receiver: everything-list
Time: 2012-10-11, 07:58:57
Subject: Re: Conscious robots



John Clark

Oct 11, 2012, 11:05:22 AM
to everyth...@googlegroups.com
On Wed, Sep 26, 2012  Craig Weinberg <whats...@gmail.com> wrote:

> Consciousness is easy if you already have consciousness. It is impossible if you don't.

But you believe in "panexperientialism"; you believe that everything is conscious, so if you are correct then consciousness is not only possible, it's easy. QED.

> Everything assumes that consciousness exists as a possibility in the universe

It's not an assumption, it's a fact that for consciousness to exist there must have been a time when the possibility of the existence of consciousness existed. In a similar way, some religious types have criticized Krauss's book "A Universe From Nothing" because it's not really from nothing, it's just from very, very little: Krauss had to start from a place where there was at least the potential for something, and they insist that that very potential is something. Apparently those same religious types don't consider God to be something, and for once I agree with them.


> prior to the existence of the universe

It's not clear what that means. Without the universe you can't have time because time involves change and if nothing exists then nothing changes; and without time the word "prior" has no meaning.

  John K Clark

 





Russell Standish

Oct 11, 2012, 4:53:42 PM
to everyth...@googlegroups.com
On Thu, Oct 11, 2012 at 10:13:06AM -0400, Roger Clough wrote:
> Hi Evgenii Rudnyi
>
> The following components are inextricably mixed:
>
> life, consciousness, free will, intelligence
>
> you can't have one without the others,

I disagree. You can have life without any of the others. Also, I
suspect you can have intelligence without life, and intelligence
without consciousness.

> and (or because) they're all nonphysical, all subjective.

Yes - they share those in common, as do a lot of other concepts such
as emergence, complexity, information, entropy, creativity and so on.
--

----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics hpc...@hpcoders.com.au
University of New South Wales http://www.hpcoders.com.au
----------------------------------------------------------------------------

Craig Weinberg

Oct 11, 2012, 10:12:59 PM
to everyth...@googlegroups.com


On Thursday, October 11, 2012 11:05:23 AM UTC-4, John Clark wrote:
On Wed, Sep 26, 2012  Craig Weinberg <whats...@gmail.com> wrote:

> Consciousness is easy if you already have consciousness. It is impossible if you don't.

But you believe in "panexperientialism", you believe that everything is conscious, so if you are correct then consciousness is not only possible it's easy. QED.

We are saying the same thing but you are not acknowledging that you assume consciousness. I acknowledge that we have no choice but to assume some kind of experience.
 

> Everything assumes that consciousness exists as a possibility in the universe

It's not an assumption, it's a fact that for consciousness to exist there must have been a time when the possibility of the existence of consciousness existed. In a similar way, some religious types have criticized Krauss's book "A Universe From Nothing" because it's not really from nothing, it's just from very, very little: Krauss had to start from a place where there was at least the potential for something, and they insist that that very potential is something. Apparently those same religious types don't consider God to be something, and for once I agree with them.

I say they are both wrong. Physics and God are both arbitrary somethings that are no more of an explanation of the universe than just starting from what it is right now and saying 'ta dah!'. If you start with sense though, you don't have that problem. It's a whole shift in mindset. You have to essentially realize that one is the first number, not zero - that all of arithmetic exists as fractions within the number one.



> prior to the existence of the universe

It's not clear what that means. Without the universe you can't have time because time involves change and if nothing exists then nothing changes; and without time the word "prior" has no meaning.

Think of it this way. Time is not change, but rather a limitation on the scope of attention which sweeps away the experiences into a non-present. Before time, there is no non-present. It's all present.

Craig
 

  John K Clark

 





Alberto G. Corona

Oct 12, 2012, 6:40:53 AM
to everyth...@googlegroups.com
 life, consciousness, free will, intelligence

I try to give a physical definition of each one:

Life: whatever maintains its internal entropy in a non-trivial way (a diamond is not alive). That is, it makes use of hardwired and acquired information to maintain its internal entropy by making use of low-entropy matter in the environment.

Consciousness: To avoid dangers an animal has to identify chemical agents, for example, but also predators that may consider it prey. While non-teleological dangers, like chemicals, can be avoided with simple reactions, teleological dangers, like predators, are different. The animal has to go a step further than automatic responses, because it has to deliberate between fight or flight depending on its perceived internal state: health, size, whether it has offspring to protect, etc. It needs to know its own state, as well as the boundary of its body. It has to calibrate the menace by watching the reactions of the predator as the predator sees its own reactions; there is a processing of "I do this, he responds with that" at some level.
So a primitive consciousness probably started with predation. That is not self-consciousness in the human sense. Self-consciousness manages a history of the self that consciousness does not.

Free will: There are many dilemmas that living beings must confront, like fight or flight: for example, whether to abandon a wounded cub, or whether to cross a river infested with crocodiles in order to reach the green pastures on the other side. Many of these reactions are automatic, like fight and flight, because speed of response is very important (even most humans report this automatism of behaviour after a traumatic experience). But other dilemmas are not. A primitive perception of an internal conflict (that is, free will) may appear in animals that have the luxury of time to consider one course of action or the other, in order to get enough data. This is not very common in the animal kingdom, where life is short and decisions have to be fast. Probably only animals with a long life span and social protection can evolve such an internal conflict. When there is no time to spend, even humans act automatically. If you want to know how an animal feels, go to a conflict zone.

Intelligence: The impulse of curiosity and the ability to elaborate activities with the exclusive goal of learning and acquiring experience, rather than direct survival. Of course that curiosity is not arbitrary but focused on promising activities that teach something valuable for survival. A cat will inspect a new piece of furniture, because its impulse of curiosity is directed towards the search for places to hide, watch and shelter, and towards knowledge of its surroundings. That is intelligence, but a focused intelligence; it is not general intelligence. We also have a focused curiosity, but it is not so narrow.

Alberto

2012/10/11 Russell Standish <li...@hpcoders.com.au>



--
Alberto.

Roger Clough

Oct 12, 2012, 8:23:33 AM
to everything-list
Hi Russell Standish


Life cannot survive without making choices,
like where to go next: to avoid an enemy, to get food.

This act of living obviously requires an autonomous choice.
Nobody can make it for you. It can't be pre-programmed.

Free autonomous choice is, in my view, a description of intelligence.

QED

Roger Clough, rcl...@verizon.net
10/12/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Russell Standish
Receiver: everything-list
Time: 2012-10-11, 16:53:42
Subject: Re: Re: Conscious robots

Roger Clough

Oct 12, 2012, 10:08:51 AM
to everything-list
Hi Alberto G. Corona

There is a robot-programming model called the
BDI model, where

b = belief
d = desire
i = intention

In my thinking, consciousness might sort of
fit into such a model:

b = belief = thinking or intelligence (sort of)
d = desire = missing from my model
i = intention = free will or will

In Leibniz's monad, these could possibly be associated with:

b = belief = the monad's "perceptions"
d = desire = the monad's "appetite"
i = intention = free will or will = if we take this as doing, it might be the
monad's internal energy source.

These might also replace the three realms of
the human monad's homunculus.
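The BDI (belief-desire-intention) model can be sketched as a minimal agent loop. This is my own illustrative toy, far simpler than real BDI systems, and none of the names below come from any actual BDI implementation:

```python
# Minimal belief-desire-intention (BDI) sketch, purely illustrative.
# Beliefs are facts the agent holds about the world, desires are
# goals with preconditions, and the intention is the desire the
# agent commits to acting on.

def deliberate(beliefs, desires):
    """Commit to the first desire whose precondition is believed true."""
    for goal, precondition in desires:
        if beliefs.get(precondition, False):
            return goal
    return None

beliefs = {"hungry": True, "door_open": False}
desires = [("leave_room", "door_open"), ("eat", "hungry")]

intention = deliberate(beliefs, desires)
print(intention)  # the agent commits to "eat"
```

Roger's mapping puts the monad's "perceptions" in the `beliefs` dictionary, its "appetite" in `desires`, and will in the committed `intention`.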


G. Corona
Receiver: everything-list
Time: 2012-10-12, 06:40:53
Subject: Re: Re: Conscious robots


?ife, consciousness, free will, intelligence


I try to give a phsical definition of each one:


Life: whathever that maintain its internal entropy in a non trivial way (A diamant is not alive). That is, to make use of hardwired ?nd adquired information to maintain the internal entropy by making use of low entropic matter in the environment.


Consciousness: To avoid dangers he has to identify chemical agents, for example, but also (predators that may consider him as a prey. While non teleol?ical dangers, like chemicals, can be avoided with simple reactions, teleol?ical dangers, like the predators are different. He has to go a step further than automatic responses, because he has to deliberate between fight of flight, depending on its perceived internal state: healt, size, wether he has breeding descendence to protect etc. He needs to know the state of himself, as well as the boundary of his body. ? He has to calibrate the menace by looking at the reactions of the predator when he see its own reactions. there is a processing of "I do this- he is responding with that", at some level.
So a primitive consciouness probably started with predation. that is not self consciousness in the human sense. Self consciousness manages an history of the self that consciousness do not.?


Free will: There are many dylemmas that living beings must confront, like fight of flight: For example, to abandon an wounded cub or not, ?o pass the river infested of crocodriles in orde to reach the green pastures in the other side etc. ?any of these reactions are automatic, like fight and fligh. because speed of response is very important (Even most humans report this automatism of behaviour when had a traumatic experience). But other dilemmas are not. A primitive perception of an internal conflict (that is free will) may appear in animals who had the luxury of having time for considerating either one course of action or the other, in order to get enough data. This is not very common in the animal kingdom, where life is short and decission have to be fast. Probably only animals with a long life span with a social protection can evolve such internal conflict. When there is no time to spend, even humans act automatically. If you want to know how an animal feel, go to a conflict zone.

Intelligence: The impulse of curiosity and the hability to elaborate activities with the exclusive goal of learning and adquiring experience, rather than direct survivival. of course that curiositiy is not arbitrary but focused in promising activities that learn something valuable for survival. ? cat would inspect a new furniture. Because its impulse for curiosity is towards the search of locations for hiding, watch and shelter and for the knowledge of the surroundings. That is intelligence, but a focused intelligence. It is not general intelligence.We have also a focused curiosity but it is not so narrow.?


Alberto


2012/10/11 Russell Standish
Prof Russell Standish                  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics      hpc...@hpcoders.com.au
University of New South Wales          http://www.hpcoders.com.au
----------------------------------------------------------------------------


--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.







--
Alberto.

meekerdb

unread,
Oct 12, 2012, 1:41:41 PM10/12/12
to everyth...@googlegroups.com
On 10/12/2012 3:40 AM, Alberto G. Corona wrote:
 life, consciousness, free will, intelligence

I try to give a phsical definition of each one:

Life: whatever maintains its internal entropy in a non-trivial way (a diamond is not alive). That is, making use of hardwired and acquired information to maintain internal entropy by consuming low-entropy matter from the environment.

Consciousness: To avoid dangers he has to identify chemical agents, for example, but also predators that may consider him as prey. While non-teleological dangers, like chemicals, can be avoided with simple reactions, teleological dangers, like predators, are different. He has to go a step further than automatic responses, because he has to deliberate between fight or flight, depending on his perceived internal state: health, size, whether he has breeding descendants to protect, etc. He needs to know the state of himself, as well as the boundary of his body. He has to calibrate the menace by watching the reactions of the predator when it sees his own reactions: there is a processing of "I do this, he responds with that", at some level.
So a primitive consciousness probably started with predation. That is not self-consciousness in the human sense. Self-consciousness manages a history of the self that consciousness does not.

Free will: There are many dilemmas that living beings must confront, like fight or flight: for example, whether to abandon a wounded cub, or to cross a river infested with crocodiles in order to reach the green pastures on the other side. Many of these reactions are automatic, like fight and flight, because speed of response is very important (even most humans report this automatism of behaviour after a traumatic experience). But other dilemmas are not. A primitive perception of an internal conflict (that is free will) may appear in animals that have the luxury of time for considering one course of action or the other, in order to gather enough data. This is not very common in the animal kingdom, where life is short and decisions have to be fast. Probably only animals with a long life span and social protection can evolve such internal conflict. When there is no time to spare, even humans act automatically. If you want to know how an animal feels, go to a conflict zone.

I generally agree with your analysis.  And I think you are right that what is called 'free will' is a feeling about conflicting internal values.  This comports with the legal idea of coerced (not-free) choice.  Coercion externally imposes a cost on your decision so that values are shifted and what would have had a negative value has a positive value competing with normally dominant alternatives. 


Intelligence: The impulse of curiosity and the ability to elaborate activities with the exclusive goal of learning and acquiring experience, rather than direct survival. Of course, that curiosity is not arbitrary but focused on promising activities that teach something valuable for survival. A cat will inspect a new piece of furniture, because its impulse of curiosity is directed towards the search for places to hide, watch and shelter, and towards knowledge of its surroundings. That is intelligence, but a focused intelligence. It is not general intelligence.

But if you define 'general intelligence' as not having any goal, you are defining it out of existence. Our own goals may not be consciously present, but I don't think they are any less motivated than the cat's.

Brent

Russell Standish

unread,
Oct 12, 2012, 4:54:43 PM10/12/12
to everyth...@googlegroups.com
On Fri, Oct 12, 2012 at 08:23:33AM -0400, Roger Clough wrote:
> Hi Russell Standish
>
>
> Life cannot survive without making choices,
> like where to go next. To avoid an enemy. To get food.
>
> This act of life obviously requires an autonomous choice.
> Nobody can make it for you. It can't be pre-programmed.
>
> Free autonomous choice is a description in my view of intelligence.
>
> QED
>

The algorithm employed by certain bacteria is to travel in a straight
line if nutrient concentration is below a certain threshold, and to
tumble randomly if the nutrient concentration is above a certain
threshold.

Why is this effective? Ballistic motion (the straight-line case) exhibits
<\Delta x> proportional to <\Delta t> (average position change is
proportional to time), so it's a good way to get somewhere where resources
are more plentiful. By contrast, chaotic motion has <\Delta x>
proportional to <sqrt\Delta t>, which means you stick around longer
and hoover up more of the good stuff.

Is this autonomous? You bet. Is it living? Yes - it's bacteria, although
a robot doing the same thing would not necessarily be living. Is it
intelligent? - nup.

Cheers
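[Editor's aside: Russell's scaling claim is easy to check numerically. The sketch below is illustrative code, not from the thread; all names are mine. It compares the mean displacement of a ballistic walker against a randomly tumbling one, showing linear versus square-root growth in time.]

```python
import random

def mean_displacement(step_fn, n_steps, n_trials=2000, seed=1):
    """Average |x(t)| over many independent one-dimensional walkers."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        x = 0.0
        for _ in range(n_steps):
            x += step_fn(rng)
        total += abs(x)
    return total / n_trials

ballistic = lambda rng: 1.0                      # straight-line "run"
tumbling  = lambda rng: rng.choice((-1.0, 1.0))  # random "tumble"

# Quadruple the time and compare how far each walker gets, on average.
r_ball = mean_displacement(ballistic, 400) / mean_displacement(ballistic, 100)
r_tum  = mean_displacement(tumbling, 400) / mean_displacement(tumbling, 100)
print(round(r_ball, 2))  # 4.0: displacement proportional to time
print(round(r_tum, 2))   # ~2: displacement proportional to sqrt(time)
```

So a bacterium that runs straight when nutrients are scarce covers ground quickly, while one that tumbles when nutrients are plentiful lingers near the food, exactly as described above.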

meekerdb

unread,
Oct 12, 2012, 5:40:10 PM10/12/12
to everyth...@googlegroups.com
On 10/12/2012 1:54 PM, Russell Standish wrote:
> On Fri, Oct 12, 2012 at 08:23:33AM -0400, Roger Clough wrote:
>> Hi Russell Standish
>>
>>
>> Life cannot survive without making choices,
>> like where to go next. To avoid an enemy. To get food.
>>
>> This act of life obviously requires an autonomous choice.
>> Nobody can make it for you. It can't be pre-programmed.
>>
>> Free autonomous choice is a description in my view of intelligence.
>>
>> QED
>>
> The algorithm employed by certain bacteria is to travel in a straight
> line if nutrient concentration is below a certain threshold, and to
> tumble randomly if the nutrient concentration is above a certain
> threshold.
>
> Why is this effective? Ballistic motion (straight line case) exhibits
> <\Delta x> proportional to <\Delta t> (average position change is
> proportional to time), so its a good way to somewhere where resources
> are more plentiful. By contrast chaotic motion has <\Delta x>
> proportional to <sqrt\Delta t>, which means you stick around longer
> and hoover up more of the good stuff.
>
> Is this autonomous? You bet. Is it living? Yes - it's bacteria, although
> a robot doing the same thing would not necessarily be living. Is it
> intelligent? - nup.

I'd say "a little"; it's smarter than just ballistic motion alone. Intelligent behavior
isn't very well defined and admits of degrees.

Brent

Evgenii Rudnyi

unread,
Oct 13, 2012, 6:16:06 AM10/13/12
to everyth...@googlegroups.com
On 12.10.2012 22:54 Russell Standish said the following:
> On Fri, Oct 12, 2012 at 08:23:33AM -0400, Roger Clough wrote:
>> Hi Russell Standish
>>
>>
>> Life cannot survive without making choices, like where to go next.
>> To avoid an enemy. To get food.
>>
>> This act of life obviously requires an autonomous choice. Nobody
>> can make it for you. It can't be pre-programmed.
>>
>> Free autonomous choice is a description in my view of
>> intelligence.
>>
>> QED
>>
>
> The algorithm employed by certain bacteria is to travel in a
> straight line if nutrient concentration is below a certain threshold,
> and to tumble randomly if the nutrient concentration is above a
> certain threshold.
>
> Why is this effective? Ballistic motion (straight line case)
> exhibits <\Delta x> proportional to <\Delta t> (average position
> change is proportional to time), so its a good way to somewhere where
> resources are more plentiful. By contrast chaotic motion has <\Delta
> x> proportional to <sqrt\Delta t>, which means you stick around
> longer and hoover up more of the good stuff.
>
> Is this autonomous? You bet. Is it living? Yes - it's bacteria,
> although a robot doing the same thing would not necessarily be
> living. Is it intelligent? - nup.

Another question here would be who divides the state space into a
bacterium and an environment. Let us imagine that we have somehow
implemented a bacterium in Game of Life (or, even better, in Continuous
Game of Life). What is the meaning of "A bacterium travels" when there
is no human observer?

Or let me put it this way. To find out whether a bacterium is there
and to find out its coordinates, one could imagine an extra algorithm
that analyses the state space of, for example, Continuous Game of Life.
Now we run two different simulations.

1) Continuous Game of Life as it is.

2) Continuous Game of Life with an extra algorithm to find out if a
bacterium is there and to report coordinates of the bacterium.

Is there any difference between 1) and 2)?

Evgenii
--
http://blog.rudnyi.ru/2012/07/playing-chess-in-the-game-of-life.html
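[Editor's aside: Evgenii's two runs can be made concrete with a minimal sketch (hypothetical code, not from the thread): a standard Game of Life step, plus an "extra algorithm" that scans the state for a blinker oscillator and reports its coordinates. The point it illustrates is that the observer algorithm leaves the dynamics untouched; runs 1) and 2) differ only in what gets reported.]

```python
from collections import Counter

def step(cells):
    """One Game of Life generation; cells is a set of live (x, y) coordinates."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

def detect_blinker(cells):
    """The 'extra algorithm': report left ends of horizontal 3-in-a-row patterns."""
    return sorted((x, y) for (x, y) in cells
                  if {(x + 1, y), (x + 2, y)} <= cells)

blinker = {(0, 0), (1, 0), (2, 0)}

g1 = blinker            # run 1: the dynamics alone
g2 = blinker            # run 2: the same dynamics, plus the observer
reports = []
for _ in range(4):
    g1 = step(g1)
    g2 = step(g2)
    reports.append(detect_blinker(g2))

print(g1 == g2)   # True: the observer changes nothing in the state evolution
print(reports)    # [[], [(0, 0)], [], [(0, 0)]], the oscillation as reported
```

Whether the identical state evolutions mean there is "no difference" between the two runs is, of course, exactly the question being asked.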

Roger Clough

unread,
Oct 13, 2012, 12:27:01 PM10/13/12
to everything-list
Hi Russell Standish

I should stay away from discussing bacteria. Brownian motion and
chemical actions could in fact make intelligence ("free" choice-making)
unnecessary, as you may have suggested.


Roger Clough, rcl...@verizon.net
10/13/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Russell Standish
Receiver: everything-list
Time: 2012-10-12, 16:54:43
Subject: Re: Re: Re: Conscious robots


On Fri, Oct 12, 2012 at 08:23:33AM -0400, Roger Clough wrote:
> Hi Russell Standish
>
>
> Life cannot survive without making choices,
> like where to go next. To avoid an enemy. To get food.
>
> This act of life obviously requires an autonomous choice.
> Nobody can make it for you. It can't be pre-programmed.
>
> Free autonomous choice is a description in my view of intelligence.
>
> QED
>

The algorithm employed by certain bacteria is to travel in a straight
line if nutrient concentration is below a certain threshold, and to
tumble randomly if the nutrient concentration is above a certain
threshold.

Why is this effective? Ballistic motion (straight line case) exhibits
<\Delta x> proportional to <\Delta t> (average position change is
proportional to time), so its a good way to somewhere where resources
are more plentiful. By contrast chaotic motion has <\Delta x>
proportional to <sqrt\Delta t>, which means you stick around longer
and hoover up more of the good stuff.

Is this autonomous? You bet. Is it living? Yes - it's bacteria, although
a robot doing the same thing would not necessarily be living. Is it
intelligent? - nup.

Cheers
--

----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics hpc...@hpcoders.com.au
University of New South Wales http://www.hpcoders.com.au
----------------------------------------------------------------------------

Roger Clough

unread,
Oct 13, 2012, 12:46:17 PM10/13/12
to everything-list
Hi meekerdb

That's exactly the point. Intelligence can't
be intelligent if it's defined.


Roger Clough, rcl...@verizon.net
10/13/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: meekerdb
Receiver: everything-list
Time: 2012-10-12, 17:40:10
Subject: Re: Conscious robots


On 10/12/2012 1:54 PM, Russell Standish wrote:
> On Fri, Oct 12, 2012 at 08:23:33AM -0400, Roger Clough wrote:
>> Hi Russell Standish
>>
>>
>> Life cannot survive without making choices,
>> like where to go next. To avoid an enemy. To get food.
>>
>> This act of life obviously requires an autonomous choice.
>> Nobody can make it for you. It can't be pre-programmed.
>>
>> Free autonomous choice is a description in my view of intelligence.
>>
>> QED
>>
> The algorithm employed by certain bacteria is to travel in a straight
> line if nutrient concentration is below a certain threshold, and to
> tumble randomly if the nutrient concentration is above a certain
> threshold.
>
> Why is this effective? Ballistic motion (straight line case) exhibits
> <\Delta x> proportional to <\Delta t> (average position change is
> proportional to time), so its a good way to somewhere where resources
> are more plentiful. By contrast chaotic motion has <\Delta x>
> proportional to <sqrt\Delta t>, which means you stick around longer
> and hoover up more of the good stuff.
>
> Is this autonomous? You bet. Is it living? Yes - it's bacteria, although
> a robot doing the same thing would not necessarily be living. Is it
> intelligent? - nup.

I'd say "a little"; it's smarter than just ballistic motion alone. Intelligent behavior
isn't very well defined and admits of degrees.

Brent

Russell Standish

unread,
Oct 13, 2012, 7:42:50 PM10/13/12
to everyth...@googlegroups.com
On Sat, Oct 13, 2012 at 12:27:01PM -0400, Roger Clough wrote:
> Hi Russell Standish
>
> I should stay away from discussing bacteria. Brownian motion and
> chemical actions could in fact make intelligence ("free" choice-making)
> unnecessary, as you may have suggested.
>
>

Why avoid the topic? By studying these sorts of "edge cases", we get a
clearer idea of what we mean by various things.

Although edge case hardly describes the situation. The dominant
lifeform on this planet is bacteria.

Russell Standish

unread,
Oct 13, 2012, 7:46:58 PM10/13/12
to everyth...@googlegroups.com
On Sat, Oct 13, 2012 at 12:16:06PM +0200, Evgenii Rudnyi wrote:
> Another question here would be who will divide the state space to a
> bacterium and environment. Let us imagine that we have implemented
> somehow a bacterium in Game of Life (or even better in Continuous
> Game of Life). What is meaning of "A bacterium travels" when there
> is no human observer?
>
> Or let me can put it this way. To find out whether a bacterium is
> there and to find out its coordinates, one could imagine an extra
> algorithms that analyses the state space of for example Continuous
> Game of Life. Now we run two different simulations.
>
> 1) Continuous Game of Life as it is.
>
> 2) Continuous Game of Life with an extra algorithm to find out if a
> bacterium is there and to report coordinates of the bacterium.
>
> Is there any difference between 1) and 2)?
>

A very pertinent question. I wish I knew the answer (myself and many
others too!).

There is a sort of proto-answer in the work of Jim Crutchfield. It is
possible (sort of) to come up with a workable definition of emergence
that doesn't require the presence of an observer (or rather he has a
metric to indicate what sorts of things are likely to be interesting
to an abstract observer - my emphasis).

Evgenii Rudnyi

unread,
Oct 14, 2012, 4:55:59 AM10/14/12
to everyth...@googlegroups.com
On 14.10.2012 01:46 Russell Standish said the following:
> On Sat, Oct 13, 2012 at 12:16:06PM +0200, Evgenii Rudnyi wrote:
>> Another question here would be who will divide the state space to
>> a bacterium and environment. Let us imagine that we have
>> implemented somehow a bacterium in Game of Life (or even better in
>> Continuous Game of Life). What is meaning of "A bacterium travels"
>> when there is no human observer?
>>
>> Or let me can put it this way. To find out whether a bacterium is
>> there and to find out its coordinates, one could imagine an extra
>> algorithms that analyses the state space of for example Continuous
>> Game of Life. Now we run two different simulations.
>>
>> 1) Continuous Game of Life as it is.
>>
>> 2) Continuous Game of Life with an extra algorithm to find out if
>> a bacterium is there and to report coordinates of the bacterium.
>>
>> Is there any difference between 1) and 2)?
>>
>
> A very pertinent question. I wish I knew the answer (myself and many
> others too!).
>
> There is a sort of proto-answer in the work of Jim Crutchfield. It
> is possible (sort of) to come up with a workable definition of
> emergence that doesn't require the presence of an observer (or rather
> he has a metric to indicate what sorts of things are likely to be
> interesting to an abstract observer - my emphasis).
>

Do you know some papers/books that discuss this question in depth?

I have taken this idea from

Raymond Tallis, Aping Mankind: Neuromania, Darwinitis and the
Misrepresentation of Humanity

where it was expressed just in a general manner.

Doesn't this mean, that simulations kind of 1) are dead ends?

Evgenii
--
http://blog.rudnyi.ru/2012/10/aping-mankind.html

Russell Standish

unread,
Oct 14, 2012, 6:10:57 AM10/14/12
to everyth...@googlegroups.com
On Sun, Oct 14, 2012 at 10:55:59AM +0200, Evgenii Rudnyi wrote:
> On 14.10.2012 01:46 Russell Standish said the following:
> >On Sat, Oct 13, 2012 at 12:16:06PM +0200, Evgenii Rudnyi wrote:
> >>Another question here would be who will divide the state space to
> >>a bacterium and environment. Let us imagine that we have
> >>implemented somehow a bacterium in Game of Life (or even better in
> >>Continuous Game of Life). What is meaning of "A bacterium travels"
> >>when there is no human observer?
> >>
> >>Or let me can put it this way. To find out whether a bacterium is
> >>there and to find out its coordinates, one could imagine an extra
> >>algorithms that analyses the state space of for example Continuous
> >>Game of Life. Now we run two different simulations.
> >>
> >>1) Continuous Game of Life as it is.
> >>
> >>2) Continuous Game of Life with an extra algorithm to find out if
> >>a bacterium is there and to report coordinates of the bacterium.
> >>
> >>Is there any difference between 1) and 2)?
> >>
> >
> >A very pertinent question. I wish I knew the answer (myself and many
> >others too!).
> >
> >There is a sort of proto-answer in the work of Jim Crutchfield. It
> >is possible (sort of) to come up with a workable definition of
> >emergence that doesn't require the presence of an observer (or rather
> >he has a metric to indicate what sorts of things are likely to be
> >interesting to an abstract observer - my emphasis).
> >
>
> Do you know some papers/books that discuss this question in depth?

Not so much. I have some speculative remarks along these lines towards
the end of my 2000 paper "Evolution in the Multiverse".

Also implicit in my view of complexity as information, outlined in my
2001 paper "Complexity and Emergence", and also in the more pragmatic
2003 paper "Open-ended artificial evolution", is that the question of
generating an "observer" endemic to the system becomes crucial. But I
haven't written about it - it's more an embryonic thought at this
stage. As I mention, Jim Crutchfield has taken a hard-nosed
objectivist approach to emergence, which I think must contain a kernel
of what is required to do this.

I'm not sure which of Jim's paper to recommend, but perhaps:

J. P. Crutchfield and M. Mitchell, The evolution of emergent
computation, PNAS November 7, 1995 vol. 92 no. 23 10742-10746.

or maybe J. P. Crutchfield (1994) Physica D, 75, 11-54.


Of more recent papers, I was particularly impressed by Anil Seth
(2010), Artificial Life 16, 179-196.

Cheers

>
> I have taken this idea from
>
> Raymond Tallis, Aping Mankind: Neuromania, Darwinitis and the
> Misrepresentation of Humanity
>
> where it was expressed just in a general manner.
>
> Doesn't this mean, that simulations kind of 1) are dead ends?
>
> Evgenii
> --
> http://blog.rudnyi.ru/2012/10/aping-mankind.html
>

Alberto G. Corona

unread,
Oct 16, 2012, 10:19:23 AM10/16/12
to everyth...@googlegroups.com


2012/10/12 meekerdb <meek...@verizon.net>
This definition of intelligence is more a hint for discovering a form of intelligence than a definition of intelligence as such. General intelligence is more subtle. Somewhere else I said that general intelligence is not limited by goals, even if it is limited by them at the beginning, because if a generally intelligent being is not permitted to ask himself (or be asked) about his own goals, then he has no general intelligence. And this is the reason why I think that any general intelligence must have existential problems to be solved that he cannot solve.

Brent


We have also a focused curiosity but it is not so narrow. 

Alberto




--
Alberto.