
Emotion in Robots


Arthur T. Murray

Nov 30, 1999
Is it irrational to program emotions into our robots? Fear not,
the autopoietic machines will get rid of any unneeded feelings.

http://mentifex.netpedia.net/ Mind.Forth Robot PD AI Source Code
includes a provisional stub for the eventual coding of artificial
http://www.geocities.com/Athens/Agora/7256/emotion.html emotions.

Emotions may not have a high priority in the open source AI work
but they have already been mapped out in our Theory of Cognition:

[ASCII mind-diagram: columns labeled Hearing, Vision, Concepts, Volition,
Emotion, Motor Output. An auditory "dog" percept and a re-entrant "old image"
activate an (idea) concept; the idea feeds a volitional (decision) node and an
emotional (fear) node, and both route out to the motor-output channels
(SHAKE, RUN, PET).]

patrik bagge

Nov 30, 1999
Arthur T. Murray wrote in message <3843f...@news.victoria.tc.ca>...

>Is it irrational to program emotions into our robots? Fear not,
>the autopoietic machines will get rid of any unneeded feelings.


nice work Arthur, have you considered Java or normal C
as a language alternative?
I know that Forth enthusiasts think, live and breathe Forth, but
still, some (most) of us are 'Casio' kind of individuals.

From my personal little experience regarding emotions or feelings,
one should best avoid using those words;
it often creates an (emotional) storm....

If we should call it an inner measurement functionality, indicating the amount
of success obtained in reaching a certain goal (desire), then it
might be a little more neutral.
One other type of feeling is 'hunger', which could be directly
translated into the charge condition of a robot's battery.
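
As a rough illustration of that battery-as-hunger idea, here is a minimal
C sketch; the read_battery_voltage() routine and the voltage thresholds
are invented stand-ins, not anything taken from Mind.Forth or from a real
controller:

#include <stdio.h>

/* Hypothetical sensor read; a real robot would poll an ADC here. */
static double read_battery_voltage(void)
{
    return 11.2;   /* stub value for illustration */
}

/* Map battery voltage onto a 0..100 "hunger" drive:
   full charge -> 0, empty -> 100.  Thresholds are invented. */
static int hunger_drive(double volts)
{
    const double full = 12.6, empty = 10.5;
    if (volts >= full)  return 0;
    if (volts <= empty) return 100;
    return (int)(100.0 * (full - volts) / (full - empty));
}

int main(void)
{
    int hunger = hunger_drive(read_battery_voltage());
    printf("hunger drive = %d\n", hunger);
    if (hunger > 70)
        printf("seek charging station\n");  /* goal (desire) picked by drive level */
    return 0;
}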

Best
/pat


Robert Posey

Nov 30, 1999

"Arthur T. Murray" wrote:
>
> Is it irrational to program emotions into our robots? Fear not,
> the autopoietic machines will get rid of any unneeded feelings.

I thought that the state of the art in behavioral science was still having
a very hard time even defining what emotions were. Has someone
made the leap to identifying what actions or thoughts are controlled by
emotions vs. whatever? It would seem this has to be well defined first.
Your diagram looks like a standard avoid behavior; how does it qualify
as emotion vs. a simple survival response?

Muddy

Arthur T. Murray

Dec 1, 1999
Robert Posey, mu...@raytheon.com, wrote on Tue, 30 Nov 1999:

> "Arthur T. Murray" wrote:
>>
>> Is it irrational to program emotions into our robots? Fear not,
>> the autopoietic machines will get rid of any unneeded feelings.

Posey:

> I thought that the state of the art in behavioral science was
> still having a very hard time even defining what emotions were.

ATM:
For that very reason, I use "fear" in my diagram example, as
an emotion that very few discussants would deny as an emotion,
as they might deny such phenomena as "jealousy" or "regret."

Posey:


> Has someone made the leap to identifying what actions or
> thoughts are controlled by emotions vs. whatever? It would
> seem this has to be well defined first.

ATM:
Mind.Forth (q.v.) is based on a theory of mind which necessarily
adumbrates both *emotion* and *consciousness* -- two features of
mind which may seem second-tier to software engineers intent upon
coding an artificial intellect above all, but which for some
reason engender an intense, nigh-unto-irrational interest
in roboticists and behavioral scientists. Therefore we
mind-coders pander to the emotions-and-consciousness crowd
in order to stimulate interest in the basic goal of PD AI.

Posey:


> Your diagram looks
> like a standard avoid behavior; how does it qualify
> as emotion vs. a simple survival response?

ATM:
The appended ASCII mind-diagram shows the triangular (non-linear)
nature of emotion in the Mind.Forth AI theory of mind. Whereas
a non-emotional pure intellect ("die reine Vernunft" -- pure reason,
Immanuel Kant) [ Hey! I just turned on a TV and a State of Civil
Emergency has been declared here in Seattle because of 50K people
protesting against the World Trade Organization. My fellow
Seattleites are being tear-gassed! Gotta finish this post, quick!]

... a pure intellect would calmly decide what to do, but EMOTION
is a short-cutting [ the announcer just said that the noise I heard
was a "concussion grenade" going off ] and a warping of the otherwise
linear nature of thinking. Please visit the "emotion.html" page
and follow the links. I have to stop posting now; there is some
kind of Revolution going on here in my native city of Seattle.
- ATM

ax...@my-deja.com

Dec 1, 1999

> ... a pure intellect would calmly decide what to do, but EMOTION
> is a short-cutting [ the announcer just said that the noise I heard
> was a "concussion grenade" going off ] and a warping of the otherwise
> linear nature of thinking. Please visit the "emotion.html" page
> and follow the links. I have to stop posting now; there is some
> kind of Revolution going on here in my native city of Seattle.
> - ATM
>

See? The rational, intellectually calm response would be to stay
inside, to minimize the risk of being maimed. But emotions of anger and
curiosity overcome you and you just HAVE to run out on the street...
This is a complex non-rational emotion, linked to some higher concepts
of democratic process and social partaking. Of course, robots
will only have simple abstract table entries to link to when
shortcutting the ordinary survival behavior. A human's emotions,
though, are linked to some more complicated understanding that evokes
the emotion. That is a crucial difference, I think.



Bloxy's

Dec 1, 1999
In article <3843f...@news.victoria.tc.ca>, uj...@victoria.tc.ca (Arthur T. Murray) wrote:
>Is it irrational to program emotions into our robots? Fear not,
>the autopoietic machines will get rid of any unneeded feelings.

Ok.

>http://mentifex.netpedia.net/ Mind.Forth Robot PD AI Source Code
>includes a provisional stub for the eventual coding of artificial
>http://www.geocities.com/Athens/Agora/7256/emotion.html emotions.

>Emotions may not have a high priority in the open source AI work
>but they have already been mapped out in our Theory of Cognition:

Well, they are ALREADY having symposiums on the subject of emotion
in AI.

Just wait a few more months...
;)

> [quoted ASCII mind-diagram: Hearing, Vision, Concepts, Volition, Emotion,
> Motor Output, with (fear) wired in beside the motor-output channels]

Maaan. I think these diagrams are better than most of what I've
seen on the subject of Artificial Suckology, which they call Intelligence.

And, interestingly enough, that fear is there next to the action door.
I like that.

Bloxy's

Dec 1, 1999
In article <38443CC7...@raytheon.com>, Robert Posey <mu...@raytheon.com> wrote:

>
>
>"Arthur T. Murray" wrote:
>>
>> Is it irrational to program emotions into our robots? Fear not,
>> the autopoietic machines will get rid of any unneeded feelings.
>
>I thought that the state of the art in behavioral science was still having
>a very hard time even defining what emotions were.

And there are VERY good reasons for it.
"Don't take it personally" is the essense of that campaign.

Now, why?

Well, it turns out that if you DON't "take it personally",
then your MIND is a guiding factor, and the mind can be
manipulated via millions of ways, and the most potent
tool for mind manipulation is a technique of repetition
of very short sentences and most simplistic ideas.

Eventually, the mind simply gives up and accepts ANYTHING
as "reality", or "that is how it is".

This was discovered by Adolf Hitler with the help of that
lady who developed this technique for him, the same lady
who was found to be a war criminal after WWII and who
spent just a couple of years in jail before the arrangements
were made to release her.

Now, why do you think was she released and what happened to
her next? Well, because her "research" was found to be the
most powerful technique of mass manipulation. She was taken
from the prison and brought to America, and brought for what?
Well, to teach those techniques in one of the most prestigious
universities around.

Anybody has a clue?

Now, the mind can be manipulated very easily and you see it
every single minute of your life.

But ...

When you "take it personally", then you operate in the emotional
domain and your intelligence functions as a DIRECT experience
of the state of the other human beings and life as such.
You FEEL it first hand, well before the mind manipulation takes
place.

And, the emotion gets in the way of the idea of "efficiency"
[of maximization of the rate of sucking].

That is why these powerful tricks of mind manipulation
were programmed into your very core assumptions via ideas
of the kind "don't take it personally", "it is not MY problem",
"I am just doing my job", and on and on and on.

> Has someone
>made the leap to identifying what actions or thoughts are controlled by
>emotions vs. whatever?

ALL, if you are alive and your heart is connected to the head,
and NONE, if you have been converted into bio-robots.

> It would seem this has to be well defined first.

Do you love your child? Do you love your mother?
Do you love your friend, wife or ANYBODY?

Then the "answers" will be so simple, you'll just stand there
dubstuck, asking yourself: "where have i been all these years?
The sky has ALWAYS been blue".

>Your diagram looks like a standard avoid behavior,

Not true. That diagram is the KEY to ANY "behavior".
Sure, no one is willing to admit they are fear-driven,
but ...

If you have even a glimpse of awareness of your own energy,
you'll see it in nearly every single act of yours.

After all, what is this idea of "survival" of the "fittest"?
Where are the roots of it?

> how does it qualify
>as emotion vs. a simple survival response?

Uhu.
Tell me about it.

>Muddy


Bloxy's

Dec 1, 1999
In article <38448...@news.victoria.tc.ca>, uj...@victoria.tc.ca (Arthur T. Murray) wrote:
>Robert Posey, mu...@raytheon.com, wrote on Tue, 30 Nov 1999:
>
>> "Arthur T. Murray" wrote:
>>>
>>> Is it irrational to program emotions into our robots? Fear not,
>>> the autopoietic machines will get rid of any unneeded feelings.
>
>Posey:
>> I thought that the state of the art in behavioral science was
>> still having a very hard time even defining what emotions were.
>
>ATM:
>For that very reason, I use "fear" in my diagram example, as
>an emotion that very few discussants would deny as an emotion,

Well, you'll be surprised.
The root of fear is NOT emotion, but the same rotten mind,
driven by the ideas of survival. Fear LOOKS like emotion,
it grabs you like the reality of an emotion, but ...

But you are LOST in it.
In emotion you are FOUND, not lost.

;)

Ok, good enough fer now.


Rick Harker

Dec 1, 1999
"Arthur T. Murray" wrote:
> [ Hey! I just turned on a TV and a State of Civil
> Emergency has been declared here in Seattle because of 50K people
> protesting against the World Trade Organization. My fellow
> Seattleites are being tear-gassed! Gotta finish this post, quick!]

I have been trying to follow this too...
But what the !@#$ good would a curfew do either side?!
Geesh... you'd think both sides had lost their minds.

Oh well... back to the sane world of computers.

--
http://www.geocities.com/aibrain/ -Email: aib...@usa.net

"Warning: Witty signature not found."

Pogo Possum, Ph.D.

Dec 1, 1999
You guys might try reading Damasio's "Descartes' Error: Emotion, Reason,
and the Human Brain" or Lewis & Haviland's "Handbook of Emotions."

Bloxy's <Bloxy's...@hotmail.com> wrote in message
news:NJ314.301$ji4....@news.wenet.net...

Bloxy's

Dec 2, 1999
In article <823i0m$man$1...@ash.prod.itd.earthlink.net>, "Pogo Possum, Ph.D." <pogo...@earthlink.net> wrote:
>You guys might try reading Damasio's "Descartes' Error: Emotion, Reason,
>and the Human Brain" or Lewis & Haviland's "Handbook of Emotions."

No book will help on this subject.
Books talk to your mind.
Emotion is related to your being.
Unless you become AWARE of your own energy,
all you are going to be doing is engaging in mental masturbation.

That is all.

Bloxy's

Dec 2, 1999
In article <822o8g$vli$1...@nnrp1.deja.com>, ax...@my-deja.com wrote:
>
>
>> ... a pure intellect would calmly decide what to do, but EMOTION
>> is a short-cutting [ the announcer just said that the noise I heard
>> was a "concussion grenade" going off ] and a warping of the otherwise
>> linear nature of thinking. Please visit the "emotion.html" page
>> and follow the links. I have to stop posting now; there is some
>> kind of Revolution going on here in my native city of Seattle.
>> - ATM
>>
>See? The rational, intellectually calm response would be to stay
>inside, to minimize the risk of being maimed. But emotions of anger

This is the KEY fallacy about emotion as such, and there
is a VERY GOOD reason you mention the emotion of ANGER.

Now, anger is the ONLY "emotion" "allowed".
That is EXACTLY the program.
Why didn't you mention the emotional state of being overwhelmed
by the beauty of a flower or the silence of nature?

Why ANGER?

Look inside your CPU between your shoulders, as that is where
that program is stored.

Again, anger is not the emotion, although it LOOKS like one.
Anger is a result of a mental process, largely triggered by
intolerance, when something does not fit the program inside
your CPU.

The whole ideology of perfectionism is EXACTLY the strategy
to keep your emotional level compatible with the program
in the CPU.

Thus, the concept of bio-robots, programmed to behave along
a particular set of lines, all based in fear of survival,
reinforced by guilt and complex of inferiority.

Unless you learn the emotional side effect of love,
you know not what emotion is.

> and
>curiosity overcame and you just HAVE to run out on the street...

Why?

>This is a complex non-rational emotion,

You have to dig MUCH deeper than this.

> linked to some higher concepts

LOWER concepts, programmed into your CPU.

>of democratic process and social partaking.

NOTHING of the kind indeed.

Democratic process has NOTHING to do with the domain
of emotion, it has something to do with the domain of
the mind and underlying principles of "reason".

Same is for "social partaking". They are all IDEAS
in your mind. Again, unless you feel the other individual,
and not some abstract notion of "society", which does not
even exist as it is not an entity of ANY kind.



> > Of course, robots
>will only have simple abstract table entries to link to,

And this statement does not link to a previous statement of yours.
They are simply disconnected.
The previous statement is based on nothing but PURE delusion.

> when shortcutting the ordinary survival behavior.

Survival is the PROGRAM in your mind, reinforced forever.
And you bought into it.

There is no such concept in ANY other species but mankind.
Sure, ALL species run away and avoid the destruction for whatever
reason, but there is no concept of this kind. It is all intrinsic.
All there. You don't have to think how to make your next pulse
beat or your next breath. It is inherent in your structure of
functional intelligence, down to cellular level.
ALL UTTER and COMPLETE intelligence at work,
only if you could even begin to comprehend that much.

The ideas of "survival" are lies, created for the purpose of
maintaining that fear, which can be and IS being exploited by those
who suck the blood of all others, calling life some
horrible jungle of never-ending destruction.

These very ideas are being perpetuated and programmed into your
subconscious every single day on the idiot box you call TV.

After a while, they become the equivalent of reality.
Yes, mental ideas of reality, but as powerful as it gets.

> A human's emotions,
>though, are linked to some more complicated understanding that evokes
>the emotion.

Emotions are not linked to "understanding".
Yes, ALL aspects are part of functioning intelligence,
but emotion is WELL beyond the "understanding".
It is direct perception of the beauty of life itself.
It is the emotional state of orgasmic existence and intuitive
appreciation of the unlimited intelligence of ALL THERE IS.

Bloxy's

Dec 2, 1999
In article <384525CD...@yahoo.com>, AIB...@yahoo.com wrote:
>"Arthur T. Murray" wrote:
>> [ Hey! I just turned on a TV and a State of Civil
>> Emergency has been declared here in Seattle because of 50K people
>> protesting against the World Trade Organization. My fellow
>> Seattleites are being tear-gassed! Gotta finish this post, quick!]
>
>I have been trying to follow this too...
>But what the !@#$ good would a curfew do either side?!
>Geesh... you'd think both sides had lost their minds.

The statement by flinton that these demonstrations are just
"hupla" is a reflection of his COMPLETE corruptness to the
very core of the being.

He'll be peddling the money god in the middle of the worst
nukelar [that is how they spell it nowadays in the highest
places with mutual annihilation buttons] disaster ever.

Here is his ONLY "law":

Money = god, and god = money.

>Oh well... back to the sane world of computers.

Uhu.
Compared to those insane "public servants" even computers
look pretty sane.

David Emrich

Dec 2, 1999
> Same is for "social partaking". They are all IDEAS
> in your mind. Again, unless you feel the other individual,
> and not some abstract notion of "society", which does not
> even exist as it is not an entity of ANY kind.
>

Society isn't an entity of ANY kind?? I'd like to see you demonstrate that.

Society has now become a self-supporting, all-encompassing, resource-draining
living thing. It feeds off people, absorbs them, spits them out on the
street and leaves them to rot. It rolls over businesses, the environment,
individuals, emotions, feelings and logic. And we feed it with our taxes and
the people we elect to run it.

Tell me that it isn't an entity, and one we will have to be careful doesn't
overtake us in its greed because if that happens, we're all in trouble.

Ponder that.

David.


Bloxy's

Dec 2, 1999
In article <newscache$2ai3mf$6...@gw.ihg.com>, "David Emrich" <dem...@ihgtech.com.au> wrote:

>> Same is for "social partaking". They are all IDEAS
>> in your mind. Again, unless you feel the other individual,
>> and not some abstract notion of "society", which does not
>> even exist as it is not an entity of ANY kind.

>Society isn't an entity of ANY kind?? I'd like to see you demonstrate that.

Ok, first of all, let's just pull out a definition of an entity.
Not that it is a FINAL "answer", but it may provide a clue.
Oxford American version.
Entity:
1) thing with distinct existence.
2) Existence or essential nature of a thing regarded distinctly.

Ok, sure it IS a delusion, because they use the term "thing".
But...
ESSENTIAL NATURE of that BEING is what makes an entity.

It needs the integrity of a being. And it needs to be
DISTINCT, even according to this definition, which
pretty much borders on delusion.

Now, the "society" is what?
Well, it is a bunch of distinct entities you call people.
Every single entity is in itself complicated and
multidimensional in scope. All have all sorts of desires,
needs, interests, and on and on and on.

Yes, some of them align along particular lines, such as
football, computers, and on and on and on, thus forming
the ASSOCIATIONS of entities.

But...
The kicker is that even those very individuals, ALSO
unite in OTHER organizations and associations along
a different set of criteria.
The mothers may be interested in the issues of raising
children, and SOME mothers may be interested in science,
or whatever. So, the same entity is a part of a different
group AT THE SAME time.

Now, in terms of geographic boundaries, there are certain
"countries", pretending to be societies. But even within
those societies there are ALL sorts of groups.

Zo...

The rest is a piece of cake.

>Society now has become a self-supporting all-encompassing resource-draining
>living thing.

Wake up first, and clear up that confusion in your head.
Yes, certain INDIVIDUALS, engaged in the process of exploitation
of EVERYTHING that moves, and does not, for that matter,
ARE interested in these things, as they do not even trust
the validity of their own being, but you can not blanket
classify those destructive groups as "society".

> It feeds off people,

Not IT, but some principles are used to do certain things.
There is no IT.
It is just a pure delusion.
Society is NOT an entity of ANY kind.

> absorbs them, spits them out on the
>street and leaves them to rot.

Same thing. You need to clarify the issues first.
Yes, what you say about the DOMINANT influences is not
"wrong", but you identify the parasites and the lowest
grade scum of the Earth with "society".

> Rolls over businesses, the environment,
>individuals, emotions feeling and logic. And we feed it with our taxes, and
>the people we elect to run it.

Well, "you get what you deserve".

>Tell me that it isn't an entity,

I told you once and I told you twice and I will tell you
as many times as necessary.

> and one we will have to be careful doesn't
>overtake us in its greed because if that happens, we're all in trouble.

>Ponder that.

It is NOT a matter of "ponderance", but of a FEELING.
Unless you reconnect the heart to the brain, you ARE doomed
as people indeed.

>David.

James S. Adelman

Dec 2, 1999
Whatever.

Bloxy's wrote on Thu, 02 Dec 1999 04:37:31 GMT in sci.psychology.psychotherapy:

--
James Samuel Adelman
Liverpool
--
#sci.psychology.psychotherapy on DAL.net
use server viking.dal.net or webbernet.dal.net
This IRC channel is open to anyone with an interest in psychology.

ax...@my-deja.com

Dec 3, 1999
In article <gWl14.363$ji4....@news.wenet.net>,

Bloxy's...@hotmail.com (Bloxy's) wrote:

> >See? The rational, intellectually calm response would be to stay
> >inside, to minimize the risk of being maimed. But emotions of anger
>
> This is the KEY fallacy about emotion as such, and there
> is a VERY GOOD reason you mention the emotion of ANGER.
>
> Now, anger is the ONLY "emotion" "allowed".

Not in my CPU.

> Again, anger is not the emotion, although it LOOKS like one.
> Anger is a result of a mental process, largely triggered by
> intolerance, when something does not fit the program inside
> your CPU.
>

Or also, when something outside the CPU seems too stupid
to be accepted by the program.

> The whole ideology of perfectionism is EXACTLY the strategy
> to keep your emotional level compatible with the program
> in the CPU.
>
> Thus, the concept of bio-robots, programmed to behave along
> a particular set of lines, all based in fear of survival,
> reinforced by guilt and complex of inferiority.
>
> Unless you learn the emotional side effect of love,
> you know not what emotion is.
>
> > and
> >curiosity overcome you and you just HAVE to run out on the street...
>
> Why?
>
> >This is a complex non-rational emotion,
>
> You have to dig MUCH deeper than this.
>

I did that: "linked to democratic process and social partaking".

> > linked to some higher concepts
>
> LOWER concepts, programmed into your CPU.
>
> >of democratic process and social partaking.
>
> NOTHING of the kind indeed.
>
> Democratic process has NOTHING to do with the domain
> of emotion, it has something to do with the domain of
> the mind and underlying principles of "reason".
>

The breaking/obscuring of the democratic process IS evoking
an emotion in millions of people worldwide, because the
breaking/obscuring is irrational in itself.
"Counter the enemy with the enemy's own tactics."

> Same is for "social partaking". They are all IDEAS
> in your mind. Again, unless you feel the other individual,
> and not some abstract notion of "society", which does not
> even exist as it is not an entity of ANY kind.
>

Society IS an individual, an individual slowly growing into
a "creature" of its own. Of course we can't see or understand "it"
because it exists on a higher conceptual level.

> > Of course, robots
> >will only have simple abstract table entries to link to,
>
> And this statement does not link to a previous statement of yours.
> They are simply disconnected.
> The previous statement is based on nothing but PURE delusion.
>

Is that so? You are entitled to your opinion. It's part of
the democratic process that you strangely enough think is a delusion.

> > when shortcutting the ordinary survival behavior.
>
> Survival is the PROGRAM in your mind, reinforced forever.
> And you bought into it.
>

I didn't buy anything.

> There is no such concept in ANY other species but mankind.
> Sure, ALL species run away and avoid the destruction for whatever
> reason, but there is no concept of this kind. It is all intrinsic.
> All there. You don't have to think how to make your next pulse
> beat or your next breath. It is inherent in your structure of
> functional intelligence, down to cellular level.
> ALL UTTER and COMPLETE intelligence at work,
> only if you could even begin to comprehend that much.

So first you say it is at a cellular "low-tech" level, and then it
is suddenly intelligence at work? Sure I won't comprehend...


>
> The ideas of "survival" are lies, created for the purpose of
> maintaining that fear, which can be and IS being exploited by those
> who suck the blood of all others, calling life some
> horrible jungle of never-ending destruction.
>

That is why we need the democratic process and the social partaking.
I see that you are also getting angry about the issue.

> These very ideas are being perpetuated and programmed into your
> subconscious every single day on the idiot box you call TV.

Pure entertainment. Seeing 2 hours of idiocy a day really puts
things in the right perspective.

Oh, I don't live in the US. Thank God, or I would also be a very
angry person.

rick++

Dec 10, 1999
The recognition, expression, and understanding of emotion has been
a major thrust of the MIT Robotic Lab and MIT AI Lab.
Some think this is important for human-computer interfaces of all
kinds.

Arthur T. Murray

Dec 10, 1999
rick++ .... ri...@kana.stanford.edu ... wrote on Fri, 10 Dec 1999:

>The recognition, expression, and understanding of emotion has been
>a major thrust of the MIT Robotic Lab and MIT AI Lab.

And one of the Four Horsemen of the AI Apocalypse (the Dartmouth 1956
AI Urkonferenz: McCarthy + Shannon + Rochester +) Marvin Minsky
has been writing a book on emotion -- how is it coming along?

Now, one Joseph LeDoux has written prolifically on emotion, so
that he seems to be the standard reference/expert -- any Web site?

>Some think this is important for human-computer interfaces of all
>kinds.

http://www.geocities.com/Athens/Agora/7256/emotion.html The
Mind.Forth emotion documentation does not yet deal with actual code
for emotion, but I would like to give here a succinct explanation
of how emotion fits into the theory of mind beneath Mind.Forth.

Have we not noticed that almost every emotion has one or more
physiological excitations inextricably associated with the emotion?
Such physiological arousal/pathos/response amounts to a form of
hardwiring or genetic connectionism between thoughts and motorium.

Is shame an emotion? It would seem so, because even blushing is a
physiological event linked to mental states.

The Mind.Forth AI is claimed to be an artificial mind, but a
straightforward, *orthogonal* one with all associations
proceeding along directly straight or at least orthogonal
pathways from concept to concept.

Now, if an association has to detour through a physiological
response, the thought process itself then becomes warped or
short-circuited, because the emotive mind must now think not
only about the original, associand ideas, but also about the
physiological response which is *intruding* most forcefully
into the otherwise calm and placid arena of the mind.
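
A toy C sketch of that warping, under one reading of the passage (concept
names, arousal values, and the attention tally are all invented; this is
not actual Mind.Forth code): an association normally passes straight from
concept to concept, but routing it through a physiological-arousal node
both amplifies the signal and steals attention for the body.

#include <stdio.h>

/* Invented toy model: activation flows from a source concept to a target
   concept, optionally detouring through a physiological-response node
   whose arousal both boosts the link and grabs attention for the body. */
struct concept { const char *name; double activation; };

static void associate(struct concept *from, struct concept *to,
                      double arousal, double *attention_on_body)
{
    double signal = from->activation;
    if (arousal > 0.0) {               /* detour through the body */
        signal += arousal;             /* emotion amplifies the association */
        *attention_on_body += arousal; /* ...and intrudes on attention      */
    }
    to->activation += signal;
}

int main(void)
{
    struct concept dog  = { "dog",  1.0 };
    struct concept flee = { "flee", 0.0 };
    double attention_on_body = 0.0;

    /* calm, orthogonal path from concept to concept */
    associate(&dog, &flee, 0.0, &attention_on_body);
    printf("calm:    flee=%.1f  body-attention=%.1f\n",
           flee.activation, attention_on_body);

    /* the same association routed through trembling/arousal */
    flee.activation = 0.0;
    associate(&dog, &flee, 2.5, &attention_on_body);
    printf("aroused: flee=%.1f  body-attention=%.1f\n",
           flee.activation, attention_on_body);
    return 0;
}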

Any roboticist, or a-lifer, or Minskyesque AI-lifer,
who can code in software a physiological response,
such as trembling, or horripilation, or weeping of tears,
ought to be able to bring that one phenomenon of mind
into the circumstantial society of mind -- the other agentries.


kenneth Collins

Dec 10, 1999
I've worked the problem through for the brain, and will be glad to
receive an in-person presentation opportunity.

ken (K. P. Collins)

rick++ wrote:

> The recognition, expression, and understanding of emotion has been
> a major thrust of the MIT Robotic Lab and MIT AI Lab.

> Some think this is important for human-computer interfaces of all
> kinds.
>


Pogo Possum, Ph.D.

Dec 12, 1999

rick++ <ri...@kana.stanford.edu> wrote in message
news:82r8el$35t$1...@nnrp1.deja.com...

> The recognition, expression, and understanding of emotion has been
> a major thrust of the MIT Robotic Lab and MIT AI Lab.
> Some think this is important for human-computer interfaces of all
> kinds.
>

At MIT they go about understanding emotion by studying the extensive
literature on the subject produced by psychologists and
neuroscientists whose research focuses on motivation and emotion.
They don't invent fanciful personal theories about it and discuss them
authoritatively as if they were true.


Gene Douglas

Dec 18, 1999

rick++ wrote in message <82r8el$35t$1...@nnrp1.deja.com>...


>The recognition, expression, and understanding of emotion has been
>a major thrust of the MIT Robotic Lab and MIT AI Lab.
>Some think this is important for human-computer interfaces of all
>kinds.
>

I once had a professor who said that if computers ever became able to think,
he would be in a trench with a bazooka, shooting at computers as they came
over a hill.

I think there are several problems here. Firstly, we have to define the
word "think" in operational terms. Then we can look at humans, and ask if
humans "think," and how do we know. Then we can ask the same of various
computers, and eventually, it may become impossible to tell the difference,
if we make our definition in other than vague terms.

Secondly, it would be possible to think without having survival instincts.
If those instincts were built in, then computers would have to have body
parts, to enable them to seek survival resources. At that point, we might
have a need to shoot them as they try to take over, though I'm not holding
my breath.

I suppose emotions would temper such behavior. A number from one to 100
might indicate the strength of a need, indicating the strength of a behavior
or setting priorities for behavior, and various emotions relating to
compassion, cooperation, or dominance might also cause priorities of
behavioral choices to be re-arranged.
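
A minimal C sketch of that kind of numeric arbitration, with invented
drives and weights (purely illustrative, not anyone's actual design):

#include <stdio.h>

/* Each need carries a strength from 1 to 100, as in the paragraph above. */
struct need { const char *name; int strength; };

int main(void)
{
    struct need needs[] = {
        { "recharge",      75 },
        { "explore",       30 },
        { "avoid_contact", 55 },
    };
    const int n = (int)(sizeof needs / sizeof needs[0]);

    /* An "emotion" here is just a re-weighting: a fear-like state bumps
       avoidance and suppresses exploration.  All numbers are invented. */
    int fear = 40;
    needs[2].strength += fear / 2;
    needs[1].strength -= fear / 2;

    struct need *best = &needs[0];
    for (int i = 1; i < n; i++)
        if (needs[i].strength > best->strength)
            best = &needs[i];

    printf("chosen behavior: %s (strength %d)\n", best->name, best->strength);
    return 0;
}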

A question which would arise would be, does the computer "feel" anything?
We would have to operationally define the word, "feel." Likewise, we would
ask if humans feel, and how do we know. Feeling might be defined as the
stimulation of an area of brain tissue resulting in a modification of
behavior.

If an address in a computer were stimulated, causing a modification of
behavior, then by the same definition, the computer might be said to "feel."
-
GeneDou...@prodigy.net
--
One out of every four Americans is suffering from some form of mental
illness.
Think of your three best friends. If they're OK, then it's you.
--
THE POLITICAL THERAPIST-- http://www.geocities.com/HotSprings/3616
-
"Justice will only be achieved when those who are not injured by
crime feel as indignant as those who are." - King Solomon
--
"A Native American elder once described his own inner struggles
in this manner: Inside of me there are two dogs. One of the dogs
is mean and evil. The other dog is good. The mean dog fights the
good dog all the time." When asked which dog wins, he reflected
for a moment and replied, "The one I feed the most." (George
Bernard Shaw)


Gary Forbis

Dec 19, 1999
Gene Douglas <gene...@prodigy.net> wrote in message
news:83he5f$434u$1...@newssvr03-int.news.prodigy.com...

>
>
> rick++ wrote in message <82r8el$35t$1...@nnrp1.deja.com>...
> >The recognition, expression, and understanding of emotion has been
> >a major thrust of the MIT Robotic Lab and MIT AI Lab.
> >Some think this is important for human-computer interfaces of all
> >kinds.
> >
> I once had a professor who said that if computers ever became able to think,
> he would be in a trench with a bazooka, shooting at computers as they came
> over a hill.
>
> I think there are several problems here. Firstly, we have to define the
> word "think" in operational terms. Then we can look at humans, and ask if
> humans "think," and how do we know. Then we can ask the same of various
> computers, and eventually, it may become impossible to tell the difference,
> if we make our definition in other than vague terms.

The focus on finding operational terms for mushy words used by humans every
day is a diversion for AI implementers. It really doesn't matter if computers
or robots "think," "feel," "emote," etc. All that is necessary for a robust
implementation is that it recognize behaviors associated with these things in
humans and generate situationally appropriate behaviors. In a way it's being
used as an excuse for failure.

> Secondly, it would be possible to think without having survival instincts.
> If those instincts were built in, then computers would have to have body
> parts, to enable them to seek survival resources. At that point, we might
> have a need to shoot them as they try to take over, though I'm not holding
> my breath.

Why assert "it would be possible to think without having survival
instincts"? Is it so you can, under certain circumstances, say "this
computer thinks"?

Rocks "survive," yet I'm not too afraid of them. What is it about your
psyche that leads you to associate "try to take over" with "survival"?

> I suppose emotions would temper such behavior. A number from one to 100
> > might indicate the strength of a need, indicating the strength of a behavior
> or setting priorities for behavior, and various emotions relating to
> compassion, cooperation, or dominance might also cause priorities of
> behavioral choices to be re-arranged.

I'm not sure how you get from "a number... might indicate..." to "and
various emotions..." but I suspect numbers can control behaviors even if
machines have no emotions.

> A question which would arise would be, does the computer "feel" anything?
> We would have to operationally define the word, "feel." Likewise, we would
> ask if humans feel, and how do we know. Feeling might be defined as the
> stimulation of an area of brain tissue resulting in a modification of
> behavior.

I don't know how you "know" humans feel. I know I feel because I experience
it. I assume other humans (and many animals) feel because they are very much
like me physically and behaviorally. I don't need an operational definition
to know I feel.

> If an address in a computer were stimulated, causing a modification of
> behavior, then by the same definition, the computer might be said to "feel."

Maybe, but I don't know how saying a computer feels is related to knowing
a computer feels. I don't know what one gains by saying a computer feels
and it's unlikely we'll ever know if a computer feels.

Gene Douglas

Dec 19, 1999

Gary Forbis wrote in message ...

I'm saying that thinking alone would not be sufficient to create the
scenario described by my math professor. In addition, there must be drives
impelling the computer toward self-determined goals.

>
>Rocks "survive," yet I'm not too afraid of them. What is it about your
>psyche
>that leads you to associate "try to take over" with "survival"?

Firstly, we must define the word, "survive." If we use a very broad
definition, then rocks might be included. However, not much is required for
a rock's survival. Possibly if the rock needed to survive a billion years,
it would need to be protected from rain, stream water, and sources of
crushing. However, rocks have no means of behavior, so I wouldn't worry
about their "trying" to do something to ensure survival.


>
>> I suppose emotions would temper such behavior. A number from one to 100
>> might indicate the strength of a need, indicating the strength of a behavior
>> or setting priorities for behavior, and various emotions relating to
>> compassion, cooperation, or dominance might also cause priorities of
>> behavioral choices to be re-arranged.
>
>I'm not sure how you get from "a number... might indicate..." to "and
>various emotions..." but I suspect numbers can control behaviors even if
>machines have no emotions.
>

I'm not sure how I would diagram the above sentence. However, emotions
moderate our thinking and behavior, both in direction and in intensity. If
a machine had competing calculations for its behavior, then if one behavior
was assigned a priority of 75, and another a priority of 30, the 75 would
predominate. That might make a metaphor to a human's pitting his anger
against his desire for social cooperation.

>> A question which would arise would be, does the computer "feel" anything?
>> We would have to operationally define the word, "feel." Likewise, we would
>> ask if humans feel, and how do we know. Feeling might be defined as the
>> stimulation of an area of brain tissue resulting in a modification of
>> behavior.
>
>I don't know how you "know" humans feel. I know I feel because I experience
>it. I assume other humans (and many animals) feel because they are very much
>like me physically and behaviorally. I don't need an operational definition
>to know I feel.
>

Then, once we have a computer so sophisticated that we have to ask, we just
ask the computer if it feels. If it says yes, we can assume that its
subjective experience is sufficient to provide that information.

>> If an address in a computer were stimulated, causing a modification of
>> behavior, then by the same definition, the computer might be said to "feel."
>
>Maybe, but I don't know how saying a computer feels is related to knowing
>a computer feels. I don't know what one gains by saying a computer feels
>and it's unlikely we'll ever know if a computer feels.
>

I think the original post to the thread was about emotion in robots. We
viewed R2D2 and C3PO, and perceived feeling in them. Were we just
perceiving the result of calculation going through their wiring? When we
perceive feeling in a human, are we just observing the result of tissue
reactions and signals passing through nerve paths? If HAL appeared to
experience affection, fear and anger, was that just the end result of
calculation and programming? If so, then would not all such emotion under
all possible conditions be the same? And would not the same responses
coming from a soft-material robot (flesh) be regardable in the same way?
-
GeneDou...@prodigy.net
-
Portal to un-mod UU group at: http://www.deja.com/~soc_religion_uu/
Remember: (current list) Rich Puchalski / Richard Kulisz / --------- /
(Your name here)
New Un-Moderated group at alt.religion.unitarian-univ, or use URL to
go there.
-
"I quoted Rich Puchalski."


--
One out of every four Americans is suffering from some form of mental
illness.
Think of your three best friends. If they're OK, then it's you.
--

Bob

Dec 19, 1999
Gene Douglas wrote in message
<83iqhn$9pm2$1...@newssvr03-int.news.prodigy.com>...
...

>I'm not sure how I would diagram the above sentence. However, emotions
>moderate our thinking and behavior, both in direction and in intensity. If
>a machine had competing calculations for its behavior, then if one behavior
>was assigned a priority of 75, and another a priority of 30, the 75 would
>predominate. That might make a metaphor to a human's pitting his anger
>against his desire for social cooperation.

But humans don't calculate a number and react one way or another based on
the resulting value. They balance and adjust their actions based on this
stimulus and perhaps perform entirely different actions if both are present
than if either one exists alone. Human behaviors are far more complex than
these simplistic priority-based examples. It's not a matter of
stimulus-response but of interaction of all possible stimuli to possible
variation of response tied in with resulting feedback over time and
experiential pattern matching plus some level of random action response in
extreme circumstances. Even this example is highly simplified.

>
>>> A question which would arise would be, does the computer "feel" anything?

...


>Then, once we have a computer so sophisticated that we have to ask, we just
>ask the computer if it feels. If it says yes, we can assume that its
>subjective experience is sufficient to provide that information.

"I am, therefore I think" is not the same as "I think, therefore I am".
This kind of thing could just be a programmed response and not an indication
of any emotion or intelligence. I think that you could only determine this
the same way we determine it in humans, by psychological observation.
Examining responses to observed stimulation and attempting to determine if
it exceeds its initial programming. Instinct versus learning or conditioned
response versus intellectual response. Feelings are even tougher to
determine, based on seemingly illogical or non-intellectual responses.

...


>I think the original post to the thread was about emotion in robots. We
>viewed R2D2 and C3PO, and perceived feeling in them. Were we just
>perceiving the result of calculation going through their wiring? When we
>perceive feeling in a human, are we just observing the result of tissue
>reactions and signals passing through nerve paths? If HAL appeared to
>experience affection, fear and anger, was that just the end result of
>calculation and programming? If so, then would not all such emotion under
>all possible conditions be the same? And would not the same responses
>coming from a soft-material robot (flesh) be regardable in the same way?


Odd that the examples you give are all from movies where human actors
controlled the actions and imposed their emotions on the "characters".
Clearly human emotions can be mimicked or expressed with non-human features.
The question isn't whether humans observe these actions and respond with
specific feelings but whether robots would respond to these actions in
similar or recognizable ways. Can a programmed response ever be considered
an emotional "feeling", is emotional "feeling" some sort of unusual response
chaotically created by extreme complexity, or is there some substantive
difference between emotion and programmed response?

Bob


Arthur T. Murray

Dec 20, 1999
Gordon McComb, gmc...@gmccomb.com, wrote on Sun, 19 Dec 1999:

> JimT9999 wrote:

>> Pain & Fear
>> Keep in mind that all bumper switches are now considered "painful."

> <rest snipped>
GMcC:
> How do you reconcile these "emotions" in a robot designed
> specifically for making physical contact? If a robot is
> made to sense its environment via tactile feedback, how are
> these senses then considered painful?

ATM:
http://www.geocities.com/Athens/Agora/7256/emotion.html is a sub-
page on how eventually to code emotion into Mind.Forth Robot AI.

A sensor for tactile feedback will only register *pain* if signals
from the sensor enter the mindgrid and activate special grids that
are *interpreted* as pain: burning, or impact, or freezing, etc.

It is theorized that physical pain and pleasure are not emotions,
but are instead *translations* of specially dedicated neural signals
into the *qualia* -- the phenomena -- of physical pain and pleasure.

Personally I speculate that pain is evinced in a mindgrid when
the interleaved neurons of the "quale" of pain fire at a faster
frequency than the rest of the mindgrid, literally forcing the
consciousness to be unshakeably aware of the cause of the pain.

Pleasure, by the same token, is a slowing down of the rate of
firing of the pain-pleasure intergrid, so that the rest of the
mind diverts its attention from everyday concerns and notices
-- because pain and pleasure are functions of *attention* --
that a pleasurable sensation is saying, "Whoa! Savor the moment."
Thus to slow reality down is pleasurable, but to speed it up is
painful. Most important here is the *translation* of signals.
Why should a faster signal be painful? Because we don't want to
pay attention to a burn wound, or a cut, but we are forced to.
Our consciousness tries to escape but cannot, so it feels pain.
When consciousness indulges in a slow signal, it feels pleasure.
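
A toy C sketch of that speculation, treating firing frequency as a plain
number and "attention" as whichever signal source deviates most from the
mindgrid baseline; every name and value here is invented for illustration:

#include <stdio.h>

/* Invented toy: each signal source has a firing rate in Hz; attention goes
   to whichever source deviates most from the mindgrid baseline.  A fast
   pain quale wins by racing ahead; a slow pleasure quale wins by standing
   out below the baseline ("savor the moment"). */
struct source { const char *name; double rate_hz; };

static const struct source *grab_attention(const struct source *s, int n,
                                           double baseline)
{
    const struct source *winner = &s[0];
    double best = 0.0;
    for (int i = 0; i < n; i++) {
        double deviation = s[i].rate_hz - baseline;
        if (deviation < 0.0) deviation = -deviation;
        if (deviation > best) { best = deviation; winner = &s[i]; }
    }
    return winner;
}

int main(void)
{
    struct source sources[] = {
        { "everyday thought",          40.0 },
        { "burn (pain quale)",         90.0 },
        { "warm sun (pleasure quale)", 15.0 },
    };
    const struct source *focus = grab_attention(sources, 3, 40.0);
    printf("attention locked on: %s\n", focus->name);
    return 0;
}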

GMcC:
> It seems to me that such
> emotion is then dependent on the design of the robot. But isn't
> one of the chief values of emotions is that they are, to a great
> extent, consistent (therefore they can be given understandable
> terms we can all use so we know what we're talking about)?
> For humans, touch can bring both pleasure and pain, not only
> by degree, but by association with what was touched.

> I understand there has been quite a bit of study and discussion
> regarding a "light seeking robot" exhibiting either love, or
> aggression, or maybe both. Personally, I think it's all hokum.
> I have to wonder if people like Braitenberg really intended their
> works to be perceived as a Disneyesque literal interpretation of
> human-like emotions applied to machines, rather than an attempt to
> discover more about *human emotions* through the use of fictional
> machine examples.

> -- Gordon

MadCat13

Dec 20, 1999
Hmm...

Has anyone read the book "Affective Computing" by Rosalind W. Picard?

I haven't yet and would like opinions on it.

(Please ignore all of the following that is irrelevant, as I've missed some
of this thread)

Random thoughts on AI and emotions...

What are the simplest emotions? Fear, anger, attraction (liking), etc.
I think that each of the simple emotions would have an opposite at the other
end of its spectrum (e.g. fear and attraction), and stress would be the result
of both a positive and a negative emotion placed on the same subject of thought.

As in the case of survival...

CaveBot (bear with me, I already hear the groans) sees a small fire.
Attraction (curiosity) draws it to the flame. Sensors reveal that the flame
could be harmful, thus Fear is activated. CaveBot now has both a
positive (curiosity) and a negative (fear) emotion (not that the positive and
negative emotions would have to be on the same spectrum, as these two are)
attached to the same mental subject (flame). How CaveBot uses its database of
information (what is known about flame already, and predefined needs for
survival) along with its emotional adds and subtracts determines its course of
action.
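
A minimal C sketch of that CaveBot arbitration as described, with invented
numbers for curiosity, fear, and the database entry for flame:

#include <stdio.h>

/* Invented illustration of the CaveBot example: a positive pull (curiosity)
   and a negative push (fear) attach to the same subject, the "database"
   scales the fear, and the net drive picks the action.  The middle branch
   is the "stress" case of opposing emotions on one subject. */
int main(void)
{
    double curiosity  = 0.6;   /* attraction toward the flame          */
    double raw_fear   = 0.5;   /* sensors: the flame could be harmful  */
    double known_harm = 1.5;   /* database: flame has burned us before */

    double fear  = raw_fear * known_harm;
    double drive = curiosity - fear;      /* the emotional adds and subtracts */

    if (drive > 0.2)
        printf("approach flame (net drive %.2f)\n", drive);
    else if (drive < -0.2)
        printf("back away (net drive %.2f)\n", drive);
    else
        printf("stressed: hold position and keep sensing (net drive %.2f)\n", drive);
    return 0;
}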

I know this is highly simplified, but I wanted to jump in somewhere.

MadCat13
- Please keep all flames within specified location in order to keep
from destroying relevant posts :)

Gordon McComb

Dec 20, 1999
The part of Dr. Picard's work on allowing a machine to cognize emotional states
in humans is, I believe, worthwhile research. The part of her work where they
are attempting to synthesize emotions into a machine is, IMO, not nearly as
valuable. I haven't seen the book, but her MIT page discusses the goals of her
research, and I don't believe she does an adequate job of truly explaining *why*
a machine endowed with emotions would be better than one that lacks them.

Note that this is not the same as a machine interpreting human emotions, which I
believe has practical applications. Windows really ought to know what I think
of it...

I have yet to see anyone put forth a credible reason why a machine should have
emotions. Emotions, by their nature, are part of human non-rational thought
(love, fear, curiosity... these are all intangible reactions that can lead to
imprecise actions, perhaps flight in one instance, fight in another -- who
knows). I have a hard time imagining why we need irrational machines, seeing
there are enough people who act that way as it is.

This is the state of robotics today: It's a rare machine that can travel a
straight line and accurately get from point A to B -- particularly *outside* the
laboratory -- without having to stop every now and then for its bearings. And to
do just this much requires a ridiculous amount of processing power and sensors.
Why are people so consumed with trying to give such a hunk of junk emotions?
Seems to me the real work of robotics is still in making the thing do the simple
tasks given it. Endowing emotions into a robot is like step 264, and we're
still on number 39.

For CaveBot: couldn't it learn just as effectively, if not more simply,
*without* the addition of emotions? Why would an emotion be inherently better
than simple, traditional pattern analysis, basically A+B=C stuff? The next time
A and B are present, the robot knows C is likely to result. No emotions
necessary. The same for sensation: an LM355 temperature sensor giving a reading
of 100 deg C. is an obvious sign of danger. The machine doesn't need emotion to
tell it that; just one line of code is about all that's needed:

If LM355 > 100 Then Gosub GetTheHellOut
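
Spelled out in C, that same no-emotion reflex might look like this; the
read_temperature() stub merely stands in for whatever the LM355 actually
returns:

#include <stdio.h>

/* Stand-in for the LM355 reading from the post, in degrees C. */
static double read_temperature(void)
{
    return 104.0;   /* stub value for illustration */
}

int main(void)
{
    /* The post's one-liner spelled out: a plain threshold reflex,
       with no emotion anywhere in the loop. */
    if (read_temperature() > 100.0) {
        printf("GetTheHellOut: reversing away from the heat source\n");
        /* ...motor commands would go here... */
    }
    return 0;
}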

Now, one might argue that having emotions would allow the machine to think
"outside the box," going beyond its programming. Again, I don't see the
connection. Emotions are not part of rational thought in humans, so why would
they be in robots? IMO, true learning is a rational application of the known
facts (i.e. data).

-- Gordon

Gene Douglas

Dec 20, 1999

Bob wrote in message <83jmvi$ji1$1...@ffx2nh5.news.uu.net>...

>Gene Douglas wrote in message
><83iqhn$9pm2$1...@newssvr03-int.news.prodigy.com>...
>...
>>I'm not sure how I would diagram the above sentence. However, emotions
>>moderate our thinking and behavior, both in direction and in intensity. If
>>a machine had competing calculations for its behavior, then if one behavior
>>was assigned a priority of 75, and another a priority of 30, the 75 would
>>predominate. That might make a metaphor to a human's pitting his anger
>>against his desire for social cooperation.
>
>But humans don't calculate a number and react one way or another based on
>the resulting value. They balance and adjust their actions based on this
>stimulus and perhaps perform entirely different actions if both are present
>than if either one exists alone.

That's the whole point, of course. And each competing motivation has a
comparative strength. There is the logic of Spock. There is what our
mother taught us. There is a drive resulting from our biology. And there
is an emotion resulting from our general socialization. They act
simultaneously, even though we are capable of only one behavior at a time.

>Human behaviors are far more complex than
>these simplistic priority-based examples. It's not a matter of
>stimulus-response but of interaction of all possible stimuli to possible
>variation of response tied in with resulting feedback over time and
>experiential pattern matching plus some level of random action response in
>extreme circumstances. Even this example is highly simplified.
>

  Consider that we are speaking of a hypothetical, very sophisticated
computer. And since there is no such thing as true randomness, even in
computers, the unexplained part of human behavior is sometimes just called
random. And a set of "random" numbers can also be programmed into a
computer without actually being random at all.
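The point about computer "randomness" is easy to demonstrate: a seeded
pseudo-random generator replays exactly the same sequence every time
(standard Python library behavior, nothing robot-specific assumed):

    import random

    random.seed(42)
    first = [random.random() for _ in range(3)]
    random.seed(42)
    second = [random.random() for _ in range(3)]
    assert first == second   # the "random" sequence repeats exactly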


>>
>>>> A question which would arise would be, does the computer "feel"
>>>> anything?

>...


>>Then, once we have a computer so sophisticated that we have to ask, we just
>>ask the computer if it feels. If it says yes, we can assume that its
>>subjective experience is sufficient to provide that information.
>

>"I am, therefore I think" is not the same as "I think, therefore I am".

How do you know a human thinks, or feels? The previous poster said that
you just ask him.

>This kind of thing could just be a programmed response and not an indication
>of any emotion or intelligence. I think that you could only determine this
>the same way we determine it in humans, by psychological observation.

Very well. You observe the machine, and it behaves against simple logic,
and behaves more or less strongly in certain ways. You make the same
observations you would make in humans, and you get the same result. So you
conclude that the machine "feels."

>Examining responses to observed stimulation and attempting to determine if
>it exceeds its initial programming. Instinct versus learning or conditioned
>response versus intellectual response. Feelings are even tougher to
>determine, based on seemingly illogical or non-intellectual responses.
>
>...

>>I think the original post to the thread was about emotion in robots. We
>>viewed R2D2 and C3PO, and perceived feeling in them. Were we just
>>perceiving the result of calculation going through their wiring? When we
>>perceive feeling in a human, are we just observing the result of tissue
>>reactions and signals passing through nerve paths? If HAL appeared to
>>experience affection, fear and anger, was that just the end result of
>>calculation and programming? If so, then would not all such emotion under
>>all possible conditions be the same? And would not the same responses
>>coming from a soft-material robot (flesh) be regardable in the same way?
>
>

>Odd that the examples you give are all from movies where human actors
>controlled the actions and imposed their emotions on the "characters".

They are hypothetical, of course, and examples of the possibility of a
machine behaving in a way that forces us to interpret what the behavior
means in terms of ultimate reality.

>Clearly human emotions can be mimicked or expressed with non-human features.
>The question isn't whether humans observe these actions and respond with
>specific feelings but whether robots would respond to these actions in
>similar or recognizable ways.

If a robot should behave in an identical way, would we then conclude that it
"feels?"

>Can a programmed response ever be considered
>an emotional "feeling", is emotional "feeling" some sort of unusual response
>chaotically created by extreme complexity, or is there some substantive
>difference between emotion and programmed response?
>
>Bob
>

Is an emotion "real" in and of iteslf, as experienced by the human, or is it
just a chemical and mechanical result of cell activity? If it is just
biological activity, then is not any other chemical/mechanical activity
which results in a behavior just as "real?"

As an aside, it is interesting that a sociopath sometimes believes that
other people don't experience certain emotions, but that they are just
pretending, just as he is. Sometimes a sociopath sees certain emotions of
other people as being, not emotions, but just a result of weak-mindedness.

Gene
>-
GeneDou...@prodigy.net


--
One out of every four Americans is suffering from some form of mental
illness.
Think of your three best friends. If they're OK, then it's you.
--

THE POLITICAL THERAPIST-- http://www.geocities.com/HotSprings/3616
-

Gene Douglas

unread,
Dec 20, 1999, 3:00:00 AM12/20/99
to

MadCat13 wrote in message ...


>Hmm...
>
>Has anyone read the book "Affective Computing" by Rosalind W. Picard?
>
>I haven't yet and would like opinions on it.
>
>(Please ignore all of the following that is irrelevant, as I've missed some
>of this thread)
>
>Random thoughts on AI and emotions...
>
>What are the simplest emotions? Fear, anger, attraction(like), etc.
>I think that each of the simple emotions would have opposite ends of the
>spectrum (e.g. fear and attraction) and stress would be the result of both a
>positive and negative emotion placed on the same subject of thought.
>

I would think that the simplest emotions would be those which are
biological, and do not require learning. Those would probably be present
at birth (except possibly sexual response.) That might include a response
to pain, drives such as hunger, fear of strong new sensations, and any
emotion resulting from a deprivation of comfort.

>As in the case of survival...
>
> CaveBot (bear with me, I already hear the groans) sees a small fire.
>Attraction(curiosity) draws it to the flame. Sensors reveal that the flame
>could be harmful thus Fear is activated. CaveBot now has both a
>positive(curiosity) and negative(fear) (not that the positive and negative
>emotions would have to be on the same spectrum such as these are) attached
>to the same mental subject(flame). How CaveBot uses its database(what is
>known about flame already and predefined needs for survival) of information
>along with its emotional adds and subtracts determines its course of action.
>
>I know this is highly simplified, but I wanted to jump in somewhere.
>

A present-day computer could be programmed to avoid fire. A question would
be how strongly it should behave to do so, in the face of competing
motivations. Suppose it had to achieve a goal, and the fire competed with
that objective. Suppose the fire suddenly became larger, and the machine
finds itself trapped. Or suppose the fire prevents the machine from
reaching an objective, such as saving a life.

How does each of these conditions moderate the response of the machine?
Does the machine weigh the fire against the goal, say, with numbers
indicating priorities? Would more fire and fewer opportunities of escape
moderate the priority numbers in the machine? Would the fact that the
objective is saving a life alter the balance of priorities? Would the
strength of these numbers (size on a scale of 10) be considered to be
emotion? Is this metaphorical to what happens in humans? In dogs? In
grasshoppers? In earthworms? In flatworms?
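One way to make these questions concrete is to let the circumstances scale
each competing priority before they are compared; every weight below is an
invented, illustrative assumption, not a claim about how it should be done:

    # All weights are invented for illustration only.
    def choose_action(fire_size, escape_routes, goal_is_life_saving):
        avoid_fire = fire_size * (2.0 if escape_routes == 0 else 1.0)
        pursue_goal = 5.0 * (3.0 if goal_is_life_saving else 1.0)
        return "retreat" if avoid_fire > pursue_goal else "advance"

    print(choose_action(fire_size=4, escape_routes=2, goal_is_life_saving=True))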

Patrik Bagge

unread,
Dec 20, 1999, 3:00:00 AM12/20/99
to
>even though we are capable of only one behavior at a time.


allow me to disagree a little, the H. sapiens nervous system
seems to be a parallel processing entity,

> Very well. You observe the machine, and it behaves against simple logic,
>and behaves more or less strongly in certain ways. You make the same
>observations you would make in humans, and you get the same result. So you
>conclude that the machine "feels."


yes, what other interpretation is there? Of course it 'feels' in its
own subjective little way. The concept of 'feeling' could, imho,
be directly translated to 'measurement'.

>As an aside, it is interesting that a sociopath sometimes believes that
>other people don't experience certain emotions, but that they are just
>pretending, just as he is. Sometimes a sociopath sees certain emotions of
>other people as being, not emotions, but just a result of weak-mindedness.


good example, illustrating lack of identification, empathy.
The same song might apply to artificial beings.

/pat

Gene Douglas

unread,
Dec 20, 1999, 3:00:00 AM12/20/99
to

Patrik Bagge wrote in message <9br74.3042$Sfm.18...@news.telia.no>...

>>even though we are capable of only one behavior at a time.
>
>
>allow me to disagree a little, H.Sapiens nervous system
>seems to be a parallell processing entity,
>
>> Very well. You observe the machine, and it behaves against simple logic,
>>and behaves more or less strongly in certain ways. You make the same
>>observations you would make in humans, and you get the same result. So you
>>conclude that the machine "feels."
>
>
>yes, what other interpretation is there ?, of course it 'feels' in it's
>on subjective little way. The concept of 'feeling' could, imho,
>be directly translated to 'mesurement'
>
Likewise in humans. When we say a human is feeling, we are saying that
nerve cells are discharging and communicating in a particular pattern.
Centers of pleasure and pain have been discovered in rats and perhaps humans
(at least a pain center and an anxiety center) so when a human is feeling
pain, that means that a certain area of brain tissue is active, and is
putting off some amount of signal, or is in contact with certain other brain
areas. Metaphorical statements could be made about a computer.

>>As an aside, it is interesting that a sociopath sometimes believes that
>>other people don't experience certain emotions, but that they are just
>>pretending, just as he is. Sometimes a sociopath sees certain emotions of
>>other people as being, not emotions, but just a result of weak-mindedness.
>
>

>good example, illustrating lack of identification, empathy.
>The same song might apply to artificial beings.
>
>/pat

Freddy

unread,
Dec 20, 1999, 3:00:00 AM12/20/99
to
the mechanisation of emotion also finds expression in dildonics - see
http://www.fa-b.com and go to new books "Playing the Love Market" for
details and to read chapter




Bob

unread,
Dec 20, 1999, 3:00:00 AM12/20/99
to
Gene Douglas wrote in message
<83lcir$94go$1...@newssvr03-int.news.prodigy.com>...
...

> >Human behaviors are far more complex than
> >these simplistic priority-based examples. It's not a matter of
> >stimulus-response but of interaction of all possible stimuli to possible
> >variation of response tied in with resulting feedback over time and
> >experiential pattern matching plus some level of random action response in
> >extreme circumstances. Even this example is highly simplified.
> >
> Consider that we are speaking of a hypothetical very sophisticated
>computer. And as there is no such thing as randomness, even in computers,
>the unexplained part of human behavior is sometimes just called random.
>And a set of random numbers can also be programmed into a computer, not
>actually being random at all.

Perhaps you would prefer chaotic complexity, complexity beyond the point
where you can control or analyze it. The point is that many human responses
to emotional triggers don't seem to be entirely explainable, even by the
person performing them. It's an entirely illogical, unpredictable,
unexplainable response. Maybe that's really the definition of emotion.
It's our explanation of our response to things that are either too
physically hardwired or too chaotically complex for us to control or
intellectually understand. Emotion may be one word we use for several
different things.

For example, take raw fear, as in fight-or-flight response type fear, which
seems to be hardwired or close to it, versus love, which is virtually
undefinable. Things like fear and reflexive response would be at one
extreme but love and other higher emotions would be at the other extreme.
Both are beyond our intellectual control, one because of the reflexive or
primitive brain hardwiring and the other because of the complexity of the
combination of factors that go into the reaction.

> How do you know a human thinks, or feels? The previous poster said that
>you just ask him.

But the reason a computer answers "yes" could be very different from the
reason a human answers "yes". The computer could be designed to always
answer "yes" to this question whether it actually feels emotion or not.
Just asking isn't enough to be certain that the answer is valid.

I once used a digital oscilloscope that printed "ouch" on the screen when
the input voltage exceeded the max voltage for the display scale set. Does
this mean the scope has emotion and was expressing an emotional response to
extreme stimulus? I suppose you could argue that, but by that definition,
just about everything has emotion at some level and emotion becomes
something that is so common as to be uninterresting and irrelevant.
Something so ordinary that it's nothing special. It becomes an inherent
feature of every electrical system and maybe many other things. "My TV has
emotion because it changes channels when I push buttons on the
remote-control." This implies that the TV "likes" to change channels when
the buttons are pushed. Something that there is evidence for since it
doesn't always change channels when I push buttons.

>
> Very well. You observe the machine, and it behaves against simple logic,
>and behaves more or less strongly in certain ways. You make the same
>observations you would make in humans, and you get the same result. So you
>conclude that the machine "feels."

Nope. My car does this stuff all the time, at least to a certain extent,
and I don't think my car has emotions. I, and others, do sometimes impose a
human emotional image onto the unexplained or unusual behavior of cars and
other machinery, but that doesn't mean they have emotions. This is what is
called anthropomorphism. We impose our own human emotional or intellectual
image onto things that remind us of our own emotional feelings or
intellectual thoughts.

This isn't a question just for computers and robots. It comes up in all
sorts of places from ecology to medicine to farming. Where is the point at
which you say something has emotion or intelligence? If you set the point
too low, virtually everything has it and it becomes cheap, meaningless and
insignificant. If you set it too high, then only one person, or even
nobody, has it (basically the sociopath you mention). Where do you define
this point? Some define it only in humans. Some define it in all living
things or all animals or only in higher vertebrates or only in humans and
co-survival animals (like pets).

Part of the problem is defining this stuff to a point where all or most
people agree that the definition is valid. (I suppose you'll never define
it so all people agree since some people don't even agree that all humans
are human.) Is there a difference in value and/or substance between human
and animal emotion and intelligence? Does the same difference apply to
machines? Many would say that machine intelligence or emotion can never be
more than clever programming, no matter how expressive or responsive it is.

>One out of every four Americans is suffering from some form of mental
>illness.
>Think of your three best friends. If they're OK, then it's you.


But what if all three of them are nuts? ;-)

Bob


Bob

unread,
Dec 20, 1999, 3:00:00 AM12/20/99
to
Arthur T. Murray wrote in message <385d9...@news.victoria.tc.ca>...

>Gordon McComb, gmc...@gmccomb.com, wrote on Sun, 19 Dec 1999:
>
>> JimT9999 wrote:
>
>>> Pain & Fear
>>> Keep in mind that all bumper switches are now considered "painful."
>
>> <rest snipped>
>GMcC:
>> How do you reconcile these "emotions" in a robot designed
>> specifically for making physical contact? If a robot is
>> made to sense its environment via tactile feedback, how are
>> these senses then considered painful?
>
>ATM:
>...
>A sensor for tactile feedback will only register *pain* if signals
>from the sensor enter the mindgrid and activate special grids that
>are *interpreted* as pain: burning, or impact, or freezing, etc.


I think the point here is that if you assume all bumper switches are
considered "pain", won't your robot soon learn to grind to a halt and sit
perfectly still in order to avoid the pain? Unless, of course, you can
define your robot to be a masochist.

Bob


Bob

unread,
Dec 21, 1999, 3:00:00 AM12/21/99
to
Gordon McComb wrote in message <385DD5C1...@gmccomb.com>...
...

>This is the state of robotics today: It's a rare machine that can travel a
>straight line and accurately get from point A to B -- particularly *outside*
>the laboratory -- without having to stop every now and then for its bearings.
>And to do just this much requires a ridiculous amount of processing power and
>sensors. Why are people so consumed with trying to give such a hunk of junk
>emotions? Seems to me the real work of robotics is still in making the thing
>do the simple tasks given it. Endowing emotions into a robot is like step
>264, and we're still on number 39.
>


I think that a lot of this comes from the desire to short-cut a lot of the
complexity you mention. If you can develop a machine that can "learn" to do
what you want on its own, instead of you having to program (and understand)
all that, it may simplify and speed up this development. You don't need to
worry about exactly what it's doing or how it's doing it, just that it's
doing things right or wrong and how you influence it more toward the right
or desired way. Emotions seem to be the basis for, or at least related to,
intelligent thought so it's a matter of taking the first steps in this
direction.

IMHO, I think this just adds a lot more complexity and impossible questions
to something that is already incredibly complex, but there is some
interesting stuff out there.

Bob


Arthur Ed LeBouthillier

unread,
Dec 21, 1999, 3:00:00 AM12/21/99
to
On Mon, 20 Dec 1999 07:13:17 GMT, Gordon McComb <gmc...@gmccomb.com>
wrote:

>I have yet to see anyone put forth a credible reason why a machine should have
>emotions. Emotions, by their nature, are part of human non-rational thought
>(love, fear, curiousity...there are all intangible reactions that can lead to
>imprecise actions, perhaps flight in once instance, fight in another -- who
>knows). I have a hard time imaging why we need irrational machines, seeing
>there are enough people who act that way as it is.

Emotions are an important part of any reasoning creature because
they represent a subconscious alerting mechanism. They allow
the consciousness to concentrate on more mundane issues
and then they alert it to highly symbolic critical state
changes. They represent a highly efficient way of divorcing
one's consciousness from constantly asking "how does this
event benefit/hurt my goals." You don't need to constantly
ask yourself this question because your emotional mechanism
will alert you to events which are critical to your goals.
Therefore, they will be an important part of any intelligent
entity that does not have infinite processing resources.

My view is not that they are "irrational," but rather part of a
rational system. Emotions are a sub-conscious monitoring
system which alerts one's consciousness to positive or negative
conditions which require conscious attention.

The reason that they are not "irrational" is because they
are "programmed" by the rational system. Each emotional
state represents a specific goal (either positive or negative)
which must be attended to. As such, emotions are a product
of one's values and goals.

Cheers,
Art Ed LeBouthillier

Gary Forbis

unread,
Dec 21, 1999, 3:00:00 AM12/21/99
to
While I believe emotions will have to be simulated in our machines if we want
them to behave similarly to humans (for the reasons given below), I don't
think this makes the philosophical issues for those holding "Strong AI" views
any easier. Now one doesn't just have to support the notion that a suitably
programmed computer will be conscious but that it will have emotions. Further
there is a need to explain how shared data structures and processes cause
either consciousness, emotions, neither, or both depending upon the parent
process as envisioned by the system's designers. (Those who use this stuff
metaphorically as an aid to the design process have no such philosophical
problem since they aren't creating emotions or consciousness but are merely
conditioning behavior.)

Arthur Ed LeBouthillier <apen...@earthlink.net.nospam> wrote in message
news:385ef8c...@news.earthlink.net...

Phil Roberts, Jr.

unread,
Dec 21, 1999, 3:00:00 AM12/21/99
to

Unless, of course, the robot is truly rational, in which case it is
likely to go into a diesel effect (like us) which some might construe
as free will, but which is actually more of a will that is ongoing
(an insatiable lust for self-significating experience) rather than
"free" in any metaphysical sense:


A Sketch of a Divergent Theory of Emotional Instability

Objective: To account for self-worth related emotion (i.e., needs for
love, acceptance, moral integrity, recognition, achievement,
purpose, meaning, etc.) and emotional disorder (e.g., depression,
suicide, etc.) within the context of an evolutionary scenario; i.e., to
synthesize natural science and the humanities; i.e., to answer the
question: 'Why is there a species of naturally selected organism
expending huge quantities of effort and energy on the survivalistically
bizarre non-physical objective of maximizing self-worth?'

Observation: The species in which rationality is most developed is
also the one in which individuals have the greatest difficulty in
maintaining an adequate sense of self-worth, often going to
extraordinary lengths in doing so (e.g., Evel Knievel, celibate monks,
self-endangering Greenpeacers, etc.).

Hypothesis: Rationality is antagonistic to psychocentric stability (i.e.,
maintaining an adequate sense of self-worth).

Synopsis: In much the manner reasoning allows for the subordination
of lower emotional concerns and values (pain, fear, anger, sex, etc.)
to more global concerns (concern for the self as a whole), so too,
these more global concerns and values can themselves become
reevaluated and subordinated to other more global, more objective
considerations. And if this is so, and assuming that emotional
disorder emanates from a deficiency in self-worth resulting from
precisely this sort of experientially based reevaluation, then it can
reasonably be construed as a natural malfunction resulting from
one's rational faculties functioning a tad too well.

Normalcy and Disorder: Assuming this is correct, then some
explanation for the relative "normalcy" of most individuals would
seem necessary. This is accomplished simply by postulating
different levels or degrees of consciousness. From this perspective,
emotional disorder would then be construed as a valuative affliction
resulting from an increase in semantic content in the engram indexed
by the linguistic expression, "I am insignificant", which all persons of
common sense "know" to be true, but which the "emotionally
disturbed" have come to "realize", through abstract thought,
devaluing experience, etc.

Implications: So-called "free will" and the incessant activity presumed
to emanate from it is simply the insatiable appetite we all have for
self-significating experience which, in turn, is simply nature's way of
attempting to counter the objectifying influences of our rational
faculties. This also implies that the engine in the first "free-thinking"
artifact is probably going to be a diesel.


"Another simile would be an atomic pile of less than critical size: an
injected idea is to correspond to a neutron entering the pile from
without. Each such neutron will cause a certain disturbance which
eventually dies away. If, however, the size of the pile is sufficiently
increased, the disturbance caused by such an incoming neutron will
very likely go on and on increasing until the whole pile is destroyed.
Is there a corresponding phenomenon for minds?" (A. M. Turing).


Additional Implications: Since the explanation I have proposed
amounts to the contention that the most rational species
(presumably) is beginning to exhibit signs of transcending the
formalism of nature's fixed objective (accomplished in man via
intentional self-concern, i.e., the prudence program) it can reasonably
be construed as providing evidence and argumentation in support of
Lucas (1961) and Penrose (1989, 1994). Not only does this imply
that the aforementioned artifact probably won't be a computer,
but it would also explain why a
question such as "Can Human Irrationality Be Experimentally
Demonstrated?" (Cohen, 1981) has led to controversy, in that it
presupposes the possibility of a discrete (formalizable) answer to a
question which can only be addressed in comparative
(non-formalizable) terms (e.g. X is more rational than Y, the norm, etc.).
Along these same lines, the theory can also be construed as an
endorsement or metajustification for comparative approaches in
epistemology (explanationism, plausiblism, etc.)


"The short answer [to Lucas/Godel and more recently, Penrose]
is that, although it is established that there are limitations to the
powers of any particular machine, it has only been stated, without
any sort of proof, that no such limitations apply to human intellect "
(A. M. Turing).


"So even if mathematicians are superb cognizers of mathematical
truth, and even if there is no algorithm, practical or otherwise,
for cognizing mathematical truth, it does not follow that the power
of mathematicians to cognize mathematical truth is not entirely
explicable in terms of their brain's executing an algorithm. Not
an algorithm for intuiting mathematical truth -- we can suppose that
Penrose [via Godel] has proved that there could be no such thing.
What would the algorithm be for, then? Most plausibly it would be an
algorithm -- one of very many -- for trying to stay alive ... " (D. C.
Dennett).


Oops! Sorry! Wrong again, old bean.


"My ruling passion is the love of literary fame" (David Hume).


"I have often felt as though I had inherited all the defiance and all the
passions with which our ancestors defended their Temple and could
gladly sacrifice my life for one great moment in history" (Sigmund
Freud).


"He, too [Ludwig Wittgenstein], suffered from depressions and for long
periods considered killing himself because he considered his life
worthless, but the stubbornness inherited from his father may have
helped him to survive" (Hans Sluga).


"The inquest [Alan Turing's] established that it was suicide. The
evidence was perfunctory, not for any irregular reason, but because
it was so transparently clear a case" (Andrew Hodges)

--

Phil Roberts, Jr.

Feelings of Worthlessness and So-Called Cognitive Science
http://www.geocities.com/Athens/5476

Gordon McComb

unread,
Dec 21, 1999, 3:00:00 AM12/21/99
to
Yep, that's my point. I don't feel that "emotions" are any better at designing
a self-learning machine than other, simpler methods. A programmer doesn't need
to know that A+B is always (or even usually) C -- or what precisely A and B are
-- but the result can certainly be stored in the robot for later lookup.
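A minimal sketch of that lookup idea, assuming nothing about the robot beyond
a table of observed outcomes (the example entries are invented):

    # Remember observed outcomes, then look them up; no feelings involved.
    memory = {}

    def record(a, b, outcome):
        memory[(a, b)] = outcome

    def predict(a, b):
        return memory.get((a, b))        # None if this pairing was never seen

    record("hot surface", "contact", "gripper damage")
    print(predict("hot surface", "contact"))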

A robot does not need to "fear" fire to stay clear of it when necessary. In
fact, unless we're only designing robots to play God, we *want* our machines to
be fearless of these dangers (within the confines of their design), because
these things are supposed to do the dirty, dangerous work for us. I can just
see it now: "Master fire chief, I don't want to go into that burning building to
rescue those children. I 'fraid of fire!!"

Emotions are, by their nature, abstract and intangible (though because we humans
collectively experience them we can discuss them, "knowing" what each term
means, even though we can't really describe what "love," "rage," or "longing"
really is). Even with the most optimistic outlook of AI and fuzzy systems, I
have a hard time seeing how abstract, intangible, non-rational behavior will
make for smarter robots. Random robots, more like it!

-- Gordon

Gordon McComb

unread,
Dec 21, 1999, 3:00:00 AM12/21/99
to
All of a sudden we're up to creating machines that are self-realizing (conscious)?
I assume they also have a conscience? I'm sorry, but this does NOT exist, in any
shape or form other than in the minds of SF writers. You've leap-frogged over
decades of research, and what may be an impossibility anyway, in order to justify
emotions in machines.

You said: "The reason that they are not 'irrational' is because they are
'programmed' by the rational system." Then they're not emotions, because emotions
are abstract. If you go for Brooksian cognition theory which suggests that
intelligence manifests itself in the observer, then reactions to abstract
functionality will appear irrational, just like it does in humans. This is a
packaged deal: if you want emotions, you have to accept unpredictable results. I'm
not sure this is a good thing in a machine.

-- Gordon

Arthur T. Murray

unread,
Dec 21, 1999, 3:00:00 AM12/21/99
to
"Bob" -- bo...@saracon.com -- wrote on Mon, 20 Dec 1999:

>Arthur T. Murray wrote in message <385d9...@news.victoria.tc.ca>...
>>Gordon McComb, gmc...@gmccomb.com, wrote on Sun, 19 Dec 1999:
>>
>>> JimT9999 wrote:
>>
>>>> Pain & Fear
>>>> Keep in mind that all bumper switches are now considered "painful."
>>
>>> <rest snipped>
>>GMcC:
>>> How do you reconcile these "emotions" in a robot designed
>>> specifically for making physical contact? If a robot is
>>> made to sense its environment via tactile feedback, how are
>>> these senses then considered painful?
>>
>>ATM:
>>...
>>A sensor for tactile feedback will only register *pain* if signals
>>from the sensor enter the mindgrid and activate special grids that
>>are *interpreted* as pain: burning, or impact, or freezing, etc.
>
>
> I think the point here is that if you assume all bumper switches
> are considered "pain" won't your robot soon learn to grind to a
> halt and sit perfectly still in order to avoid the pain?

ATM quondam Mentifex:
Here I must make perhaps my most theory-intensive post ever.

No robot-maker -- not even the great Steve Waltz -- can get away with
simply declaring that "all bumper switches shall register pain."

We animals experience pain when trauma happens to us in the world.
Even if trauma happens to a bumper switch, a few wires are not going
to re-create that trauma in the CP/R -- control program/robot.

Indeed, o ye neuroscientists and neurotheoreticians, how can we
re-create trauma, as felt in the mind of man or beast, in a robot?

The answer, my friend, is blowing in the wind: By doing
informational violence to the information grid of the mind, e.g.,
http://www.geocities.com/Athens/Agora/7256/mind-fpc.html Mind.4th.

If the sensory wires from the bumper switch feed into a para-grid,
a pain-registering subset of the otherwise purely logical mindgrid,
then, by massively parallel associative tags, the pain grid
submerged within the logic grid can do violence to the mind
by screaming for attention, but not permitting mind-business-as-
usual until the beast or bot deals with the pain.

In other words, you can do violence to an immaterial mind,
but only in an immaterial manner: by the violence of forced
unrelenting attention seizure.

The incomplete theory is at:
http://www.geocities.com/Athens/Agora/7256/ntj1.html (naive)
http://www.geocities.com/Athens/Agora/7256/ntj2.html (simple)
http://www.geocities.com/Athens/Agora/7256/ntj3.html (eureka)
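Read purely as software, the "attention seizure" described above can be
sketched as a pending-pain queue that preempts the normal think loop until it
is handled; this is an interpretive sketch, not the Mind.Forth code itself:

    # Interpretive sketch only: pain preempts business-as-usual until handled.
    pending_pain = []

    def on_bumper_hit(location):
        pending_pain.append(location)    # the pain grid "screams for attention"

    def think(normal_work, handle_pain):
        while pending_pain:              # no mind-business-as-usual while pain is pending
            handle_pain(pending_pain.pop())
        normal_work()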

> Unless, of course, you can define your robot to be a masochist.
>

> Bob

--
http://mentifex.netpedia.net/

Gordon McComb

unread,
Dec 21, 1999, 3:00:00 AM12/21/99
to
If a machine cannot experience an emotion, then why is it an emotion,
and not merely a mechanical contrivance that only *appears* to be an
emotion to a human observer?

By rational/irrational I mean "consistent with logic." There is no
logic to "love" or "hate" or "envy," nor are these emotions predictable.
If you'd prefer me to use the term "unpredictable" instead of "irrational"
I will, but the sentiment is the same. In your message you refer to the
emotion, not the reaction to the emotional stimulus (e.g. fire produces
fear which generates X response). Give two people the same emotional
stimulus and they may well respond differently, based on a wide variety
of histories and conditioning. Unpredictable, irrational,
whatever...this inconsistency in the reaction would be dangerous and
unproductive in a machine.

Consider the ant, which despite the cartoon movies otherwise, likely has
few emotional capabilities given the size of its brain; yet it still
exhibits remarkable instincts, social interaction, and work
capabilities. IMO, it's proof that a constructive machine need not
have, or even "process," emotion in order to function in a worthwhile
manner.

-- Gordon

John Casey wrote:
>
> The operation of emotional systems with their associated behaviors and the
> subjective (conscious) experiences that we have when these systems are
> activated are two different subjects. Just as a machine can process colors
> and respond differently to them doesn't mean it has a conscious experience
> of color. Computers can already process emotional states. :) happy :( sad
> although I don't suggest they 'experience' emotional 'feelings'.
>
> Emotions are not irrational. They are a guide, a motivating state, subject
> in higher organisms such as us to rational analysis and modification.
> This takes place mainly in the right hemisphere. Brain damage to the left
> hemisphere can result in an emotional feeling of loss whereas damage to the
> right can leave them unconcerned at their condition. This lack of emotion
> about their condition is not always rational.
>
> I see no rational reason why we should be rational, only an emotional
> reason to be so. The importance of emotional intelligence is now being
> recognized.
>
> Emotions have survival value even if at times they are not rational.
>
> An emotional person has lost control of their emotions. That is when
> emotional behaviour becomes irrational. The upper cortex can be hijacked
> by anger, for example. Anger can be a positive rational reaction if
> directed properly. No anger at 'wrongs', no improvements in our lives.
>
> The inability to process emotional information can be a serious social
> handicap.
>

Gene Douglas

unread,
Dec 21, 1999, 3:00:00 AM12/21/99
to

Bob wrote in message <83n6a1$q4s$7...@ffx2nh5.news.uu.net>...


>Gene Douglas wrote in message
><83lcir$94go$1...@newssvr03-int.news.prodigy.com>...
>...
>> >Human behaviors are far more complex than
>> >these simplistic priority-based examples. It's not a matter of
>> >stimulus-response but of interaction of all possible stimuli to possible
>> >variation of response tied in with resulting feedback over time and
>> >experiential pattern matching plus some level of random action response in
>> >extreme circumstances. Even this example is highly simplified.
>> >
>> Consider that we are speaking of a hypothetical very sophisticated
>>computer. And as there is no such thing as randomness, even in computers,
>>the unexplained part of human behavior is sometimes just called random.
>>And a set of random numbers can also be programmed into a computer, not
>>actually being random at all.
>
>Perhaps you would prefer chaotic complexity, complexity beyond the point
>where you can control or analyze it. The point is that many human responses
>to emotional triggers don't seem to be entirely explainable, even by the
>person performing them.

Especially by the person performing them. That's why we have psychologists.

>It's an entirely illogical, unpredictable,
>unexplainable response. Maybe that's really the definition of emotion.

Emotions are generally more predictable and explainable than that. They can
be boiled down to the simplest emotions, with hormones flowing, brain areas
stimulated or connecting to other areas, and various body responses such as
perspiration, blood pressure, etc. As to chaos, well, our hypothetical
computer could produce that, too.

>It's our explanation of our response to things that are either too
>physically hardwired or too chaotically complex for us to control or
>intellectually understand. Emotion may be one word we use for several
>different things.
>

Yes, in addition to basic drives and emotions, we have simultaneous
combinations of emotions, and combinations of emotions with ideas, or
thoughts.

>For example, take raw fear, as in fight-or-flight response type fear which
>seems to be hardwired or close to it, versus love which is virtually
>undefineable.

The word love may seem undefinable, because it is used in so many different
ways. For example, "I love fried chicken" "I love my mother," and "I love
my fiancee," have very different meanings. However, if we separate them
out, and give them different names, I think it becomes less undefinable than
it may seem at first.

>Things like fear and reflexive response would be at one
>extreme but love and other higher emotions would be at the other extreme.

We call them higher emotions because we put a value on them. However, in
lower animals (which we sometimes admire when they show human
characteristics) cows have a tendency to herd, mothers have a proneness to
nurture their offspring, and baboons will courageously defend a member of
their troop. Babies cling to their mothers, and even penguins care which
chick is their own, even though a thousand adults and a thousand chicks in
the same place look exactly alike. Some animals, including birds, mate for
life, and show signs of depression when a mate is killed. Perhaps love is
more biologically rooted than we might at first think, and less akin to the
stuff of poets and philosophers.

>Both are beyond our intellectual control, one because of the reflexive or
>primitive brain hardwiring and the other because of the complexity of the
>combination of factors that go into the reaction.
>

"Love" (pick a type) is most likely based on hard wiring, and conditioned
stimuli from the environment. Since conditioning is often unconscious, or
less than totally conscious, we may have difficulty understanding the
compulsions that go along with (pick one) eating a favorite food,
sacrificing for the good of a child, lusting after a hot babe, (or hunk) or
maintaining fan status for a baseball team.

>> How do you know a human thinks, or feels? The previous poster said that
>>you just ask him.
>
>But the reason a computer answers "yes" could be very different from the
>reason a human answers "yes". The computer could be designed to always
>answer "yes" to this question whether it actually feels emotion or not.
>Just asking isn't enough to be certain that the answer is valid.
>

So what would be the reason a human answers, "yes?" The human feels fear.
The human reports that the feeling is there. A robot senses aversive
responses to stimuli, promoting a priority setting of 10 on a scale of 10.
The robot reports that it feels fear. So what is the difference?

>I once used a digital oscilloscope that printed "ouch" on the screen when
>the input voltage exceeded the max voltage for the display scale set. Does
>this mean the scope has emotion and was expressing an emotional response to
>extreme stimulus? I suppose you could argue that, but by that definition,
>just about everything has emotion at some level and emotion becomes
>something that is so common as to be uninterresting and irrelevant.

But the oscilloscope is a simple mechanism, compared to the hypothetical
robots we are discussing. And the oscilloscope was lying (not
intentionally, of course.) It's a little like the husband saying "fine,
dear," when his wife asks, "how do I look?" The answer has nothing to do
with the question. It's just the answer he always gives to that question.

>Something so ordinary that it's nothing special. It becomes an inherent
>feature of every electrical system and maybe many other things. "My TV has
>emotion because it changes channels when I push buttons on the
>remote-control." This implies that the TV "likes" to change channels when
>the buttons are pushed. Something that there is evidence for since it
>doesn't always change channels when I push buttons.
>
>>
>> Very well. You observe the machine, and it behaves against simple logic,
>>and behaves more or less strongly in certain ways. You make the same
>>observations you would make in humans, and you get the same result. So you
>>conclude that the machine "feels."
>
>Nope. My car does this stuff all the time, at least to a certain extent,
>and I don't think my car has emotions.

Then you must determine why you believe this in the case of humans, and why
you do not in the case of the car. Once you have determined this, you will
have your answer as to what is the difference. Then apply that thinking to
the hypothetical sophisticated robot, and see in what way that would be
different from the randomness of a car, which may simply have a dirty
carburetor, or worn plugs.

>I, and others, do sometimes impose a
>human emotional image onto the unexplained or unusual behavior of cars and
>other machinery, but that doesn't mean they have emotions. This is what is
>called anthropomorphism. We impose our own human emotional or intellectual
>image onto things that remind us of our own emotional feelings or
>intellectual thoughts.
>

That is why we must set criteria for answering our questions. If we are to
say a human "knows" when he is feeling an emotion, how do we verify that he
does? (As an aside, a lot of humans don't know when they are feeling an
emotion.)

Once we have carefully established our criteria, without leaving anything
out, then we have a set of rules to apply to anything else.

>This isn't a question just for computers and robots. It comes up in all
>sorts of places from ecology to medicine to farming. Where is the point at
>which you say something has emotion or intelligence? If you set the point
>too low, virtually everything has it and it becomes cheap, meaningless and
>insignificant. If you set it too high, then only one person, or even
>nobody, has it (basically the sociopath you mention). Where do you define
>this point? Some define it only in humans. Some define it in all living
>things or all animals or only in higher vertebrates or only in humans and
>co-survival animals (like pets).
>

Do we worry about inflicting pain on an insect? Perhaps our disrespect for
his intelligence and our revulsion at his invading our space cause us to
disregard his pain as anything more than nerve signals. But if we carefully
define feeling, and then apply that definition equally to all things, then
we might make a determination. We might determine, for example, that his
inability to appreciate his pain means that he isn't perceiving what we call
pain, but is merely exhibiting a reflexive reaction to stimuli. Since we
can never walk a mile in his shoes, we may never know if intelligence is
required to "appreciate" pain. So we consider that it is O.K. to boil a
live lobster, because, what's a lobster know, anyway? Possibly a subjective
interpretation.

But our hypothetical sophisticated robot would have the intelligence to
appreciate its signals. Its main difference is that it is made of hard,
rather than soft materials, such as a chimp or dog might be, and it is
deliberately constructed, rather than evolved from other robots, grown
inside another robot, or grown from a smaller size. But what would any of
that have to do with feeling?

>Part of the problem is defining this stuff to a point where all or most
>people agree that the definition is valid. (I suppose you'll never define
>it so all people agree since some people don't even agree that all humans
>are human.) Is there a difference in value and/or substance between human
>and animal emotion and intelligence? Does the same difference apply to
>machines? Many would say that machine intelligence or emotion can never be
>more than clever programming, no matter how expressive or responsive it is.
>

Can human intelligence be any more than programming? I won't say "clever,"
because that implies external intent. But the result of evolution is a
product that it would take cleverness for us to produce.

>>One out of every four Americans is suffering from some form of mental
>>illness.
>>Think of your three best friends. If they're OK, then it's you.
>
>
>But what if all three off them is nuts? ;-)
>
>Bob
>

Then you're off the hook, and so are three of your other friends.


-
GeneDou...@prodigy.net
-
Portal to un-mod UU group at: http://www.deja.com/~soc_religion_uu/
Remember: (current list) Rich Puchalski / Richard Kulisz / --------- /
(Your name here)
New Un-Moderated group at alt.religion.unitarian-univ, or use URL to
go there.
-
"I quoted Rich Puchalski."
--

MadCat13

unread,
Dec 21, 1999, 3:00:00 AM12/21/99
to

Gordon McComb <gmc...@gmccomb.com> wrote in message
news:386006...@gmccomb.com...
-snips-

> Consider the ant, which despite the cartoon movies otherwise, likely has
> few emotional capabilities given the size of its brain; yet it still
> exhibits remarkable instincts, social interaction, and work
> capabilities. IMO, it's proof that a constructive machine need not
> have, or even "process," emotion in order to function in a worthwhile
> manner.
>
> -- Gordon
-more snips-

Two things that emotion definitely has going for it are passion and
imagination.
Without emotion I doubt there would have been an Albert Einstein, Charles
Babbage, etc. There was a lot of work where, while the work itself was
rational, the extent to which the people went to achieve it usually was not.
I don't know if this kind of drive can be reproduced without emotional
simulation.

I know there are downsides to this though. Drive can easily be misplaced.
(Insert human catastrophe of your choice here)

MadCat13
(Asleep at the wheel-mouse)

John Casey

unread,
Dec 22, 1999, 3:00:00 AM12/22/99
to

--- John

Gordon McComb

unread,
Dec 22, 1999, 3:00:00 AM12/22/99
to
I wouldn't have thought that imagination (I guess that's the same as creativity)
was an emotion, if that's what you meant. I agree that emotions can manifest
themselves as passion, either in the obsessive sense, or the adoration sense.

I have no qualms with humans having, and displaying, emotion. I question the
need for it in robots.

-- Gordon

Arthur Ed LeBouthillier

unread,
Dec 22, 1999, 3:00:00 AM12/22/99
to
On Tue, 21 Dec 1999 15:02:10 -0800, Gordon McComb
<gmc...@gmccomb.com> wrote:

>If a machine cannot experience an emotion, then why is it an emotion,
>and not merely a mechanical contrivance that only *appears* to be an
>emotion to a human observer?

One cannot speak for the experiential aspects of emotions for anyone
else other than himself. Psychology has shown that people have
certain expectations and biases in interpreting social/emotional
responses among each other. What the robot will "experience"
will seem like an emotion to it.

>By rational/irrational I mean "consistent with logic."

People are hardly rational. People have a capability for
rationality but that often has no effect on their behavior.
Seriously, people are not all that rational. Being "consistent
with logic" is meaningless. Logic is merely a set of processes
one applies to beliefs which ensures consistency. If one were
to look at it as mathematical mappings, then logic is a mapping
between one set of beliefs and another set of beliefs.

Rationality (i.e. following the mappings/rules of logic) has a role
in problem solving. However, *ALL* logic starts with premises and
the premises are pre-logical. Of course, one can apply logic to one's
premises and therefore re-arrange his premises, but that does not
change the fact that they are premises.

> There is no logic to "love" or "hate" or "envy," nor are these emotions predictable.

Of course there is. They are behaviors of a living creature which
motivate it to be social in certain ways. They are motivating factors
which instill certain goals in the person.

Goal selection precedes logic. All that logic can tell you is
hypotheticals. Logic can tell you "If you want X then Y" but
it cannot tell you that you should want X in the first place.
Although I disagree with his conclusion, Kant outlined this
quite well in his philosophy of ethics. Aristotle in his ethics merely
says that ethical thinking is reasoning from principles and finally
to first principles. Logic will lead you to an understanding of your
premises and it will help you reason from your premises, but it will
not tell you what premises to have.

Rationality is merely one of the resources you have to apply to
your beliefs, but it takes an act of will for individuals to choose to
be rational and modify their beliefs for consistency.

>If you'd prefer me to use the term "predictable" instead of "irrational"
>I will, but the sentiment is the same. In your message you refer to the
>emotion, not the reaction to the emotional stimulus (e.g. fire produces
>fear which generates X response).

I would disagree with your understanding of the emotional "chain."
An emotion, as I've said, is part of a goal mechanism. Its function
is to evaluate reality in accordance with a number of goal states and
then alert one's consciousness to deviations from or accomplishments
of them. Implicit in the fear response is the goal to avoid certain
states in reality (i.e. avoid harm to oneself, avoid uncertainty or
avoid harm to something which is important). It is the goal which
precedes the response. Understandably in the case of the fear
emotion, that emotional "goal" is part of the firmware of being human.
Nature predisposes living creatures such as humans to have certain
preservation goals.

Therefore, I would say that the emotional stimulus chain would
best start with the goal "avoid harm", which elicits responses in
several mechanisms, some spinal/reflexive and going up the brain
stem to one's consciousness. A feeling is not an emotion.

I think it is important to understand that the human body has
several different response mechanisms and that "emotions" are
the highest level one which directly interacts with that part of the
higher abstract process known as "consciousness." The lower level
response mechanisms are not emotions. The unconscious reflex which
causes you to pull your hand from fire is not an emotion. Emotions
can be elicited merely by knowledge of existing circumstances. A call
on the phone, followed by mere information of harm to something
important can elicit an emotion. Knowledge, alone, can cause an
emotional response.

> Give two people the same emotional stimulus and they may well respond
> differently, based on a wide variety of histories and conditioning. Unpredictable,
> irrational, whatever...this inconsistency in the reaction would be dangerous and
>unproductive in a machine.

First of all, the problem is what constitutes "an emotional stimulus."
Emotions, being part of an abstract goal mechanism, depend on
abstract goals. One's understanding of the relevance of these goals,
how they interact with the myriad of other goals one has, and how
significant a particular goal is to oneself all relate to the
particular response one has. For example, many people may have the
"deer in the headlight" response to certain threats because they don't
understand the implications of them. However, those who have a fuller
ramification of those threats are able to have a more complete
emotional response because they are able to better relate those
threats to particular goals. Emotional response is mediated by ones
knowledge and goals.

>Consider the ant, which despite the cartoon movies otherwise, likely has
>few emotional capabilities given the size of its brain; yet it still
>exhibits remarkable instincts, social interaction, and work
>capabilities. IMO, it's proof that a constructive machine need not
>have, or even "process," emotion in order to function in a worthwhile
>manner.

I would say that an ant has no emotional response because the nature
of its neural system is so simple that an ant does not choose goals
and have to make tradeoffs between goals. Ants are purely reactive
mechanisms whose behaviors are elicited by chemical signals and
hormone/pheromone levels. It does not have an abstract goal system;
it has a behavior system which is largely reactive. Ants don't have an
advanced situation awareness system which alerts a higher level
consciousness process/mechanism to changes of significant state.

Sure, an advanced, intelligent machine can exist without emotions, but
I think that they will be implemented because they will make machines
more useful or they will arise as a natural byproduct of an abstract
goal system which is able to run multiple goal processing threads.
Without an alerting mechanism such as emotions, every goal would
be reviewed like an expert system "If condition X1 then action Y1...
If condition X2 then action Y2..." All goals (and conditions in the
world with regard to those goals) would be evaluated with equal
importance. I think an emotional mechanism is a more efficient way
to evaluate key conditions in the world constantly for
beneficial/harmful conditions because then another higher goal
maintenance/problem solving system can be attuned to nearer-
term problem solving (or lollygagging).
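The contrast drawn here, between re-checking every rule each cycle and being
alerted only on significant change, can be sketched roughly as follows; the
goals, thresholds, and emotion labels are all invented for illustration:

    # Invented goals and thresholds, for illustration only.
    def watchers(state):
        alerts = []
        if state["battery"] < 0.2:
            alerts.append("anxiety: charge low")
        if state["temperature"] > 80:
            alerts.append("fear: overheating")
        return alerts

    def control_cycle(state, attend_to, do_current_task):
        for alert in watchers(state):    # emotion-like interrupts, not constant rule review
            attend_to(alert)
        do_current_task()                # otherwise carry on with the current plan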

Cheers,
Art Ed LeBouthillier

Bob

unread,
Dec 22, 1999, 3:00:00 AM12/22/99
to
Gene Douglas wrote in message
<83pffp$2gps$1...@newssvr04-int.news.prodigy.com>...
...

> >Perhaps you would prefer chaotic complexity, complexity beyond the point
> >where you can control or analyze it. The point is that many human responses
> >to emotional triggers don't seem to be entirely explainable, even by the
> >person performing them.
>
>Especially by the person performing them. That's why we have psychologists.

And can you imagine having to call in a robo-psychologist to get your
assembly line running again or to get your computer to run the program you
want? Come to think of it this probably couldn't hurt for my PC running
Windows.

...


>We call them higher emotions because we put a value on them. However, in
>lower animals (which we sometimes admire when they show human
>characteristics) cows have a tendency to herd, mothers have a proneness to
>nurture their offspring, and baboons will courageously defend a member of
>their troupe. Babies cling to their mothers, and even penguins care which
>chick is their own, even though a thousand adults and a thousand chicks in
>the same place look exactly alike. Some animals, including birds, mate for
>life, and show signs of depression when a mate is killed. Perhaps love is
>more biologically rooted than we might at first think, and less akin to the
>stuff of poets and philosophers.

The point is that animal emotions and human emotions are fundamentally
different. They may perform the same actions but from different
motivations. They may have the same motivations but perform very different
actions. Just because they seem to be the same as our emotions doesn't mean
they are remotely similar. If you can ever create robotic emotions, they
may be inherently unreliable because they can never be the same as humans
since the robot could never be that similar. There is a strong human
tendency to impose our image of human emotions on things like animals and
even inanimate objects.

...


>So what would be the reason a human answers, "yes?" The human feels fear.
>The human reports that the feeling is there. A robot senses aversive
>responses to stimuli, promoting a priority setting of 10 on a scale of 10.
>The robot reports that it feels fear. So what is the difference?

The difference may be in how we define this. A simple machine can answer
this question, even though it clearly has no emotion. A human baby can't
answer the question, even though he/she clearly seems to have emotion.
But read on:

> >I once used a digital oscilloscope that printed "ouch" on the screen when
> >the input voltage exceeded the max voltage for the display scale set. Does
> >this mean the scope has emotion and was expressing an emotional response to
> >extreme stimulus? I suppose you could argue that, but by that definition,
> >just about everything has emotion at some level and emotion becomes
> >something that is so common as to be uninterresting and irrelevant.
>
>But the oscilloscope is a simple mechanism, compared to the hypothetical
>robots we are discussing. And the oscilloscope was lying (not
>intentionally, of course.) It's a little like the husband saying "fine,
>dear," when his wife asks, "how do I look?" The answer has nothing to do
>with the question. It's just the answer he always gives to that question.

So would you say the scope experienced an emotion, or not? If not, how is
it any different from the robot example you presented above? The scope
wasn't lying, it was expressing a response to extreme stimulus. If the
ability to analyze input and express a response that can be viewed in
emotional terms indicates emotion, then we already have emotion in a lot of
different things, all electrical or electronic systems and probably most or
all mechanical systems. I don't buy it.

...


> >Nope. My car does this stuff all the time, at least to a certain extent,
> >and I don't think my car has emotions.
>
>Then you must determine why you believe this in the case of humans, and why
>you do not in the case of the car. Once you have determined this, you will
>have your answer as to what is the difference. Then apply that thinking to
>the hypothetical sophistocated robot, and see in what way that would be
>different from the randomness of a car, which may simply have a dirty
>carburetor, or worn plugs.

That's simple. I believe it in humans because they are human, and not in
cars because they are machines and not human. Similarly, the robot could be
said to never be capable of human emotions because it's not human, or even
biological. It's still just a machine, no matter how sophisticated it seems
to be or how well it's able to mimic a human being. This may all be pretty
arbitrary. I might feel differently if I were faced with an independently
intelligent and responsive robot like those popular in Sci-Fi stories, but
we're a long way from those, assuming they're even possible at all.


>I, and others, do sometimes impose a
> >human emotional image onto the unexplained or unusual behavior of cars
and
> >other machinery, but that doesn't mean they have emotions. This is what
>is
> >called anthropomorphism. We impose our own human emotional or
>intellectual
> >image onto things that remind us of our own emotional feelings or
> >intellectual thoughts.
> >
>That is why we must set criteria for answering our questions. If we are to
>say a human "knows" when he is feeling an emotion, how do we verify that he
>does? (As an aside, a lot of humans don't know when they are feeling an
>emotion.)
>
>Once we have carefully established our criterea, without leaving anything
>out, then we have a set of rules to apply to anything else.

One perfectly valid set of rules, and one that has been used in our society
for millennia, could be that only humans have valid emotions; everything else
is just anthropomorphism. We might have emotional feelings about animals
and even machines, but they don't really have emotions remotely similar to
ours, even if they appear to act in ways that seem to be similar.

Another set of rules could be that only neuro-biological animals have valid
emotions, but they are a bit different in kind from species to species.
Electro-mechanical systems do not, regardless of how clever their
programming becomes to mimic these emotions.

You're unlikely to be able to create a set of rules that most people would
agree on, much less that all would agree on. If you can't even agree on the
rules to be applied, imagine how tough it will be to create conclusions
based on these rules.

Now you've made a jump to robot intelligence, which is a similarly uncertain
possibility, but this is part of the problem in defining our rules, isn't
it? If it's ever possible for a machine to have emotion, then we have that
now in every electro-mechanical machine (and probably a lot of other
things), albeit at an insect-like level. If you say machines or computers
don't have emotion now already, then a more complex and sophisticated
machine won't really ever have emotions either. If it's a matter of the
ability to "appreciate the meaning" of stimulus, then it becomes valid to
regard less intelligent creatures or machines (or even humans) as less able
to appreciate this meaning, having fewer emotions and, therefore, less
important or valued or having fewer rights.

>
> >Part of the problem is defining this stuff to a point where all or most
> >people agree that the definition is valid. (I suppose you'll never
>define
> >it so all people agree since some people don't even agree that all humans
> >are human.) Is there a difference in value and/or substance between
human
> >and animal emotion and intelligence? Does the same difference apply to
> >machines? Many would say that machine intelligence or emotion can never
>be
> >more than clever programming, no matter how expressive or responsive it
>is.
> >
>Can human intelligence be any more than programming? I won't say "clever,"
>because that implies external intent. But the result of evolution is a
>product what would take cleverness for us to produce the same thing.


Certainly it could be. I know there are those who believe, with almost
religious fervor, that this is all just a matter of processing power, an
almost blind faith that some magical barrier will be crossed and this will
poof into existence. I think way too little is known yet to start drawing
definitive conclusions one way or the other. Even if it is possible it may
take us as long, and as many resources, to produce as evolution has taken,
or even longer. I have an extremely hard time believing this is solely a
matter of processing power. I think it will at least require, once we have
the theoretical processing power, a massive programming effort and a long
period of refinement if it is even possible at all. Until we have a system
that seems to mimic independent intelligence and/or emotional expression, it
will be difficult or impossible to evaluate the quality of it.

Bob


Gene Douglas

unread,
Dec 23, 1999, 3:00:00 AM12/23/99
to

Bob wrote in message <83siga$7ag$1...@ffx2nh5.news.uu.net>...


>Gene Douglas wrote in message
><83pffp$2gps$1...@newssvr04-int.news.prodigy.com>...
>...
>> >Perhaps you would prefer chaotic complexity, complexity beyond the point
>> >where you can control or analyze it. The point is that many human responses
>> >to emotional triggers don't seem to be entirely explainable, even by the
>> >person performing them.
>>
>>Especially by the person performing them. That's why we have
>psychologists.
>
>And can you imagine having to call in a robo-psychologist to get your
>assembly line running again or to get your computer to run the program you
>want? Come to think of it this probably couldn't hurt for my PC running
>Windows.
>

I would imagine that is happening already. At one time, engineers designed
computers, using drafting equipment and a slide rule. They proceeded to
scientific calculators, and by today, I would suspect that 90% of the
design work is done by computer. When a dozen different improvements are
considered, a computer can put them together in a few seconds, and compare
them, offering the best circuitry for the purpose considered.

Likewise, if there is a glitch in a program, rather than hiring a tech to
read the program line by line, a computer program to search for certain
patterns can probably find it in a few seconds, and then offer solutions and
test them for side effects of the change. It would be pretty primitive to
continue using pencil and paper to improve Windows, when a computer can do
it better, faster, and cheaper.
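
The "search for certain patterns" idea is roughly what lint-style tools already
do. A minimal sketch of that kind of scan is below; the rule table and the file
name example.c are invented for illustration, and real static analyzers use
parsers and data-flow analysis rather than bare regular expressions.

<PRE>
import re

# Hypothetical rule table: pattern -> note.  Real tools use far richer rules.
SUSPICIOUS_PATTERNS = [
    (r"if\s*\(.*[^=!<>]=[^=].*\)", "assignment inside an if-condition (did you mean ==?)"),
    (r"^\s*malloc\s*\(",           "return value of malloc ignored"),
    (r"\bstrcpy\s*\(",             "unbounded strcpy (possible buffer overrun)"),
]

def scan_source(path):
    """Report (line number, note, text) for lines matching a suspicious pattern."""
    findings = []
    with open(path) as src:
        for lineno, line in enumerate(src, start=1):
            for pattern, note in SUSPICIOUS_PATTERNS:
                if re.search(pattern, line):
                    findings.append((lineno, note, line.rstrip()))
    return findings

if __name__ == "__main__":
    for lineno, note, text in scan_source("example.c"):   # example.c is hypothetical
        print("line %d: %s\n    %s" % (lineno, note, text))
</PRE>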


>...
>>We call them higher emotions because we put a value on them. However, in
>>lower animals (which we sometimes admire when they show human
>>characteristics) cows have a tendency to herd, mothers have a proneness to
>>nurture their offspring, and baboons will courageously defend a member of
>>their troupe. Babies cling to their mothers, and even penguins care which
>>chick is their own, even though a thousand adults and a thousand chicks in
>>the same place look exactly alike. Some animals, including birds, mate for
>>life, and show signs of depression when a mate is killed. Perhaps love is
>>more biologically rooted than we might at first think, and less akin to the
>>stuff of poets and philosophers.
>
>The point is that animal emotions and human emotions are fundamentally
>different. They may perform the same actions but from different
>motivations.

We both begin with the same things. But we apply thinking to the process,
which makes it different. Our hypothetical sophisticated computer could
also provide that thinking.

>They may have the same motivations but perform very different
>actions. Just because they seem to be the same as our emotions doesn't mean
>they are remotely similar.

"Remotely" would be an exaggeration. If your dog cuddles with you, you can
assume that he experiences affection, and a desire for security, or at
least an assumption of security. So does an infant, even though he doesn't
know the word for it. In fact, he might be 10 years old before he applies
language and reasoning to a process he simply assumes.

>If you can ever create robotic emotions, they
>may be inherently unreliable because they can never be the same as humans
>since the robot could never be that similar. There is a strong human
>tendency to impose our image of human emotions on things like animals and
>even inanimate objects.
>

There may be no need for robot emotions to be identical to human ones.
Feeling in robots would be tailored to the need, in regard to a purpose. At
any rate, how can we say that two humans experience exactly the same thing?
If I see yellow, and you say you see yellow, how do we know that you and I
are experiencing the same thing? If I like milk and you can't stand the
stuff, how do we know that we are both tasting the same thing? If I
experience say, loneliness, and you say you experience an emotion with the
same name, how do we know that you and I are experiencing the same thing?


>...
>>So what would be the reason a human answers, "yes?" The human feels fear.
>>The human reports that the feeling is there. A robot senses aversive
>>responses to stimuli, promoting a priority setting of 10 on a scale of 10.
>>The robot reports that it feels fear. So what is the difference?
>
>The difference may be in how we define this. A simple machine can answer
>this question, even though it clearly has no emotion. A human baby can't
>answer the question, even though he/she clearly seems to have emotion.
>But read on:
>

Why does it clearly have no emotion? If all conditions are met, then how
can we say the one is different from the other? The human baby exhibits
certain behaviors, which adults interpret as emotion. If R2D2 should
exhibit certain behaviors, humans might also interpret that as emotion. How
do we say that one perception is better than the other?

>> >I once used a digital oscilloscope that printed "ouch" on the screen
when
>> >the input voltage exceeded the max voltage for the display scale set.
>>Does
>> >this mean the scope has emotion and was expressing an emotional
response
>>to
>> >extreme stimulus? I suppose you could argue that, but by that
>definition,
>> >just about everything has emotion at some level and emotion becomes
>> >something that is so common as to be uninterresting and irrelevant.
>>
>>But the oscilloscope is a simple mechanism, compared to the hypothetical
>>robots we are discussing. And the oscilloscope was lying (not
>>intentionally, of course.) It's a little like the husband saying "fine,
>>dear," when his wife asks, "how do I look?" The answer has nothing to do
>>with the question. It's just the answer he always gives to that
question.
>
>So would you say the scope experienced an emotion, or not?

Firstly, a human programmer had put the ouch in the machine. Like the
husband who says "you look fine," without looking up from his newspaper, it
is just producing a prefabricated response, unrelated to reality.

Secondly, the example is too simple to meet conditions required for us to
say that a human had experienced a feeling. If we can list all criteria for
the human, and then apply those same criteria to a machine, then how are we
to say that the machine still does not meet the criteria, if it appears to
do so?

>If not, how is
>it any different from the robot example you presented above? The scope
>wasn't lying, it was expressing a response to extreme stimulus. If the
>ability to analyze input and express a response that can be viewed in
>emotional terms indicates emotion, then we already have emotion in a lot of
>different things, all electrical or electronic systems and probably most or
>all mechanical systems. I don't buy it.
>

The human manufacturer had programmed a lie (or a joke) into the machine.
The printout could as well have read "error."


>...
>> >Nope. My car does this stuff all the time, at least to a certain
extent,
>> >and I don't think my car has emotions.
>>
>>Then you must determine why you believe this in the case of humans, and
why
>>you do not in the case of the car. Once you have determined this, you
will
>>have your answer as to what is the difference. Then apply that thinking
to
>>the hypothetical sophistocated robot, and see in what way that would be
>>different from the randomness of a car, which may simply have a dirty
>>carburetor, or worn plugs.
>
>That's simple. I believe it in humans because they are human, and not in
>cars because they are machines and not human. Similarly, the robot could
be
>said to never be capable of human emotions because it's not human, or even
>biological. It's still just a machine, no matter how sophisticated it
seems
>to be or how well it's able to mimic a human being. This may all be
pretty
>arbitrary. I might feel differently if I were faced with an independently
>intelligent and responsive robot like those popular in Sci-Fi stories, but
>we're a long way from those assuming they're even possible at all.
>

I can be just as arbitrary. I believe it in machines because they are machines,
and not in humans because they are humans and not machines. A human could be
said to never be capable of machine emotions, because it's not a machine,
or even mechanical.

Yes, it's all hypothetical. No such device has yet been built.

But that is an arbitrary assumption. And "remotely" is still an
exaggeration.

>Another set of rules could be that only neuro-biological animals have
valid
>emotions, but they are a bit different in kind from species to species.
>Electro-mechanical systems do not, regardless of how clever their
>programming becomes to mimic these emotions.
>

We can probably say with confidence that a chicken feels pain, and that
biological experiments with live chickens would inflict pain on the chicken.
However, if we should say that a chicken is lonely for its mother, or that
it would rather run free than have all the food, water, and shelter it could
use, then we are projecting our own characteristics onto the chicken. There
are degrees to which there is sameness, and there are degrees to which we
just imagine motivations and perceptions in lower animals.

It's hard to say that a machine doesn't have intelligence, if we can't even
say what intelligence is. If intelligence is the ability to pass an IQ
test, then machines can probably do that now.

>If it's ever possible for a machine to have emotion, then we have that
>now in every electro-mechanical machine (and probably a lot of other
>things), albeit at an insect-like level. If you say machines or computers
>don't have emotion now already, then a more complex and sophisticated
>machine won't really ever have emotions either.

You've again made a leap to the arbitrary.

>If it's a matter of the
>ability to "appreciate the meaning" of stimulus, then it becomes valid to
>regard less intelligent creatures or machines (or even humans) as less able
>to appreciate this meaning, having fewer emotions and, therefore, less
>important or valued or having fewer rights.
>

Which we already do. We don't worry about boiling a lobster, because we
regard his pain as just some nerve signals, which don't result in "horror"
or "terror" or "grief" in himself or his fellow lobsters. Likewise, we
don't worry about the kind of poison we apply to a cockroach, because a
bellyache in a roach is not likely to result in "worry." We don't worry
about leaving a cow outside, because the cow isn't aware that there is an
"inside" by comparison. We don't worry about taking a chimp from the
jungle, because the chimp doesn't know how much better his grandfather had
it. Likewise, dictators change history for humans, so they will believe the
alternatives are worse than what they know now.


>>
>> >Part of the problem is defining this stuff to a point where all or
most
>> >people agree that the definition is valid. (I suppose you'll never
>>define
>> >it so all people agree since some people don't even agree that all
humans
>> >are human.) Is there a difference in value and/or substance between
>human
>> >and animal emotion and intelligence? Does the same difference apply to
>> >machines? Many would say that machine intelligence or emotion can
never
>>be
>> >more than clever programming, no matter how expressive or responsive it
>>is.
>> >
>>Can human intelligence be any more than programming? I won't say "clever,"
>>because that implies external intent. But the result of evolution is a
>>product that would take cleverness for us to produce the same thing.
>
>
>Certainly it could be. I know there are those who believe, with almost
>religious fervor, that this is all just a matter of processing power, an
>almost blind faith that some magical barrier will be crossed and this will
>poof into existence.

More likely, a gradual evolution will take place, with each generation
approaching that point. Suppose scientists found a way to make a soft
computer. Suppose they took a brain cell from a sea worm, multiplied it into
large numbers, guided the way the cells connected, and even manipulated the
number of axons per cell.

Suppose they produced a computer of tremendous complexity in this way,
which could produce human-like responses in every detail. By the mere fact
that the components are soft, would you think that if such a computer could
report a feeling, it has a feeling?

>I think way too little is known yet to start drawing
>definitive conclusions one way or the other. Even if it is possible it may
>take us as long, and as many resources, to produce as evolution has taken,
>or even longer.

Currently, computers double their speed and processing power every 18
months, while most biological organisms do not. If we could say that
lower animals did something similar every 100,000 years, then maybe we will
progress about 50,000 times faster than that.
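
For what it's worth, the figure works out as follows, taking the two intervals
above at face value (a back-of-envelope check only, not a prediction):

<PRE>
# Back-of-envelope check of the "50,000 times faster" figure above,
# taking the quoted intervals at face value.
computer_doubling_years = 1.5          # "every 18 months"
biological_interval_years = 100000.0   # the assumed interval for lower animals

ratio = biological_interval_years / computer_doubling_years
print("speed-up ratio: about %d" % round(ratio))   # ~67,000, same order as "about 50,000"
</PRE>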

>I have an extremely hard time believing this is solely a
>matter of processing power. I think it will at least require, once we have
>the theoretical processing power, a massive programming effort and a long
>period of refinement if it is even possible at all.

Actually, it's the ghost inside the machine. He even has a name:
Homunculus. I forgot to mention that.

>Until we have a system
>that seems to mimic independent intelligence and/or emotional expression, it
>will be difficult or impossible to evaluate the quality of it.
>
>Bob

-
GeneDou...@prodigy.net
--


One out of every four Americans is suffering from some form of mental
illness.
Think of your three best friends. If they're OK, then it's you.

--
THE POLITICAL THERAPIST-- http://www.geocities.com/HotSprings/3616
-

Arthur Ed LeBouthillier

unread,
Dec 23, 1999, 3:00:00 AM12/23/99
to
On Wed, 22 Dec 1999 21:28:00 -0600, "Bob" <bo...@saracon.com> wrote:
[ deletia ]

>Certainly it could be. I know there are those who believe, with almost
>religious fervor, that this is all just a matter of processing power, an
>almost blind faith that some magical barrier will be crossed and this will
>poof into existence. I think way too little is known yet to start drawing
>definitive conclusions one way or the other. Even if it is possible it may
>take us as long, and as many resources, to produce as evolution has taken,
>or even longer. I have an extremely hard time believing this is solely a
>matter of processing power. I think it will at least require, once we have
>the theoretical processing power, a massive programming effort and a long
>period of refinement if it is even possible at all. Until we have a system
>that seems to mimic independent intelligence and/or emotional expression, it
>will be difficult or impossible to evaluate the quality of it.

I suggest you read "In-Depth Understanding" by Michael Dyer. This book
reviews an understanding program which understands (but does not
model) emotions. It is an old work, but it is representative of the
capability of a computer program or robot to understand.

If you take the baseline capabilities and expand them several orders
of magnitude, then I think you begin to understand the capability of
a robot to understand emotions. Once the ability to understand
emotions exists, that same model can be applied to the robot itself
and therefore, the robot understands its own emotions.

Seriously, Dyer's book is fascinating stuff and it even includes
code examples.

Cheers,
Art Ed LeBouthillier


Elvis Lives

unread,
Dec 23, 1999, 3:00:00 AM12/23/99
to
I would say, pain and pleasure (in the physical sense, like an orgasm) are no
different from colors and sounds. They are just interpreted inputs,
not emotions. They could be linked to emotions later on, though...
smells could make you remember a loved one...

On 20 Dec 99 02:46:35 GMT, uj...@victoria.tc.ca (Arthur T. Murray)
wrote:

>Gordon McComb, gmc...@gmccomb.com, wrote on Sun, 19 Dec 1999:
>
>> JimT9999 wrote:
>
>>> Pain & Fear
>>> Keep in mind that all bumper switches are now considered "painful."
>
>> <rest snipped>
>GMcC:
>> How do you reconcile these "emotions" in a robot designed
>> specifically for making physical contact? If a robot is
>> made to sense its environment via tactile feedback, how are
>> these senses then considered painful?
>
>ATM:

>http://www.geocities.com/Athens/Agora/7256/emotion.html is a sub-
>page on how eventually to code emotion into Mind.Forth Robot AI.


>
>A sensor for tactile feedback will only register *pain* if signals
>from the sensor enter the mindgrid and activate special grids that
>are *interpreted* as pain: burning, or impact, or freezing, etc.
>

>It is theorized that physical pain and pleasure are not emotions,
>but are instead *translations* of specially dedicated neural signals
>into the *qualia* -- the phenomena -- of physical pain and pleasure.
>
>Personally I speculate that pain is evinced in a mindgrid when
>the interleaved neurons of the "quale" of pain fire at a faster
>frequency than the rest of the mindgrid, literally forcing the
>consciousness to be unshakeably aware of the cause of the pain.
>
>Pleasure, by the same token, is a slowing down of the rate of
>firing of the pain-pleasure intergrid, so that the rest of the
>mind diverts its attention from everyday concerns and notices
>-- because pain and pleasure are functions of *attention* --
>that a pleasurable sensation is saying, "Whoa! Savor the moment."
>Thus to slow reality down is pleasurable, but to speed it up is
>painful. Most important here is the *translation* of signals.
>Why should a faster signal be painful? Because we don't want to
>pay attention to a burn wound, or a cut, but we are forced to.
>Our consciousness tries to escape but cannot, so it feels pain.
>When consciousness indulges in a slow signal, it feels pleasure.
>
>GMcC:
>> It seems to me that such
>> emotion is then dependent on the design of the robot. But isn't
>> one of the chief values of emotions is that they are, to a great
>> extent, consistent (therefore they can be given understandable
>> terms we can all use so we know what we're talking about)?
>> For humans, touch can bring both pleasure and pain, not only
>> by degree, but by association with what was touched.
>
>> I understand there has been quite a bit of study and discussion
>> regarding a "light seeking robot" exhibiting either love, or
>> aggression, or maybe both. Personally, I think it's all hokum.
>> I have to wonder if people like Braitenberg really intended their
>> works to be perceived as a Disneyesque literal interpretation of
>> human-like emotions applied to machines, rather than an attempt to
>> discover more about *human emotions* through the use of fictional
>> machine examples.
>
>-- Gordon
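
For what it's worth, the speculation quoted above -- pain as faster-than-baseline
firing of a dedicated "quale" grid, pleasure as slower firing -- boils down to a
very small decision rule. The sketch below only illustrates that speculation;
the rates and thresholds are invented, and nothing here is taken from Mind.Forth.

<PRE>
# Illustration of the speculation quoted above: a dedicated pain/pleasure
# grid is read as "pain" when it fires faster than the surrounding mindgrid
# and as "pleasure" when it fires slower.  All numbers are invented.
BASELINE_HZ = 40.0       # assumed background firing rate of the mindgrid
PAIN_FACTOR = 1.5        # assumed threshold: well above baseline reads as pain
PLEASURE_FACTOR = 0.67   # assumed threshold: well below baseline reads as pleasure

def interpret_quale(grid_rate_hz):
    """Translate a firing rate on the dedicated grid into a reported quale."""
    if grid_rate_hz >= BASELINE_HZ * PAIN_FACTOR:
        return "pain"        # attention is forced onto the stimulus
    if grid_rate_hz <= BASELINE_HZ * PLEASURE_FACTOR:
        return "pleasure"    # attention lingers; "savor the moment"
    return "neutral"

if __name__ == "__main__":
    for rate in (90.0, 40.0, 20.0):
        print(rate, "Hz ->", interpret_quale(rate))
</PRE>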


Valter Hilden

unread,
Dec 24, 1999, 3:00:00 AM12/24/99
to
Gene Douglas wrote:
> ...

> >The difference may be in how we define this. A simple machine can answer
> >this question, even though it clearly has no emotion. A human baby can't
> >answer the question, even though he/she clearly seems to have emotion.
> >But read on:
> >
> Why does it clearly have no emotion? If all conditions are met, then how
> can we say the one is different from the other? The human baby exhibits
> certain behaviors, which adults interpret as emotion. If R2D2 should
> exhibit certain behaviors, humans might also interpret that as emotion. How
> do we say that one perception is better than the other?
> ...

At this moment, we have absolutely no means to enter the brain, or mind,
or soul of any other human being, baby or adult, or of any animal or
machine, so we are limited to philosophical conjectures in this respect.

However, I can enter my own younger brain, and there is a moment in my
childhood that I very distinctly remember to this day. When I was three
years old, I had a sudden perception of myself as a person; I realized
that I was a person, that I had a "self-consciousness", or whatever you may
call it. I cannot remember any single event in my life that happened
before that moment, but I have a more or less complete and continuous
memory of my life after that. Ever since, I have wondered what
happened to my brain at that moment; something just "clicked" and I
became conscious of myself.

So, based on my own, admittedly anecdotal, experience, I assume that
there is a certain minimum complexity required in a brain in order to exhibit
conscious behavior. A baby may have emotions, may feel pain or pleasure,
but is not philosophically aware of the existence of his or her own
self. Unfortunately, our technology is still far from simulating in a
computer the brain of a three-year-old child; we do not have enough CPU
power for that.

Gene Douglas

unread,
Dec 24, 1999, 3:00:00 AM12/24/99
to

Valter Hilden wrote in message <38636117...@infolink.com.br>...
>Gene Douglas wrote:
>> ...


>> >The difference may be in how we define this. A simple machine can
answer
>> >this question, even though it clearly has no emotion. A human baby
can't
>> >answer the question, even though he/she clearly seems to have emotion.
>> >But read on:
>> >
>> Why does it clearly have no emotion? If all conditions are met, then
how
>> can we say the one is different from the other? The human baby
exhibits
>> certain behaviors, which adults interpret as emotion. If R2D2 should
>> exhibit certain behaviors, humans might also interpret that as emotion.
How
>> do we say that one perception is better than the other?

>> ...
Incidentally, the below was written by a machine, the HAL 9000 of 2001. However,
because the message is machine-generated, I have no reason to believe it is
anything but the output of mechanical computation, and has nothing to do
with thinking, feeling, or self-awareness.


>
>At this moment, we have absolutely no means to enter the brain, or mind,
>or soul of any other human being, baby or adult, or of any animal or
>machine, so we are limited to philosophical conjectures on this respect.
>
>However, I can enter my own younger brain, and there is a moment in my
>childhood that I very distinctly remember to this day. When I was three
>years old, I had a sudden perception of myself as a person; I realized
>that I was a person, that I had a "self conscience", or whatever you may
>call it. I cannot remember any single event in my life that happened
>before that moment, but I have a more or less complete and continuous
>memory of my life after that. Ever since, I have wondered on what
>happened to my brain at that moment, something just "clicked" and I
>became conscient of myself.
>
>So, based on my own, admittedly anectodal, experience, I assume that
>there is a certain minimum complexity in a brain in order to exhibit
>conscient behavior. A baby may have emotions, may feel pain or pleasure,
>but is not philosophically aware of the existence of his or her own
>self. Unfortunately, our technology is still far from simulating in a
>computer the brain of a three year old child, we do not have enough cpu
>power for that.

As I said before, if you say you see "yellow," and I see "yellow," how do we
know that we are experiencing the same thing?
-
GeneDou...@prodigy.net

Valter Hilden

unread,
Dec 24, 1999, 3:00:00 AM12/24/99
to
Gene Douglas wrote:
> ...
> As I said before, if you say you see "yellow," and I see "yellow," how do we
> know that we are experiencing the same thing?
> ...

I agree with you - we have no means of knowing. I know a person who says
he only discovered he was color blind when he flunked his first driver's
license examination. Before that, he had always thought "red" was the
name of a dark shade of green. The only comparison we can do is within
our own minds.

Therefore, my conjecture is: since I have no memories before a certain
moment when I was three years old, then before that time I was not
conscious in the way I am now. Consciousness arose spontaneously when my
brain reached a certain capability, either by growing in size, or by
growing in experience, or both. The only way I can think of proving this
in a concrete and objective way is by constructing an artificial neural
network simulating the brain of a three year old child.

Today this is a nearly impossible task. The amount of hardware is
actually within reach; I estimate a three-year-old human has a number of
neurons that could be simulated by a thousand or so Pentium computers,
less than a million US$ worth of hardware. But what kind of software are
we talking about? No one knows. We can find the topology of a few
neurons by a painstaking effort of microscopic examination of a
cadaver's brain, but the large-scale structure still eludes us. Perhaps
the next decades of research will throw some light on this matter.
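
For what it's worth, the "thousand or so Pentiums" figure only comes out under
very generous assumptions. A back-of-envelope version of it follows; the
per-machine rate and the per-box cost are assumptions chosen to reproduce the
claim, and synapse counts and update rates are ignored entirely.

<PRE>
# Back-of-envelope version of the estimate above.  The neuron count is the
# usual textbook order of magnitude; the neurons-per-machine figure is an
# assumption chosen to show what the "thousand machines" claim requires,
# and synapses and update rates are ignored entirely.
NEURONS_IN_BRAIN = 1e11         # ~10^11 neurons, order of magnitude
NEURONS_PER_MACHINE = 1e8       # assumed: one Pentium handles ~10^8 simple units
COST_PER_MACHINE_USD = 1000     # assumed late-1990s commodity box

machines = NEURONS_IN_BRAIN / NEURONS_PER_MACHINE
print("machines needed: %d" % machines)                                # 1000
print("hardware cost: about $%d" % (machines * COST_PER_MACHINE_USD))  # ~$1,000,000
</PRE>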

kenneth Collins

unread,
Dec 24, 1999, 3:00:00 AM12/24/99
to
Gene Douglas wrote:

>[...]

> As I said before, if you say you see "yellow," and I see "yellow," how do we
> know that we are experiencing the same thing?

get out an EM-frequency detector and a color chart, and cross-correlate.

i =like= your Shaw quote.

cheers, ken collins


Bloxy's

unread,
Dec 25, 1999, 3:00:00 AM12/25/99
to
In article <386215d...@news.earthlink.net>, apen...@earthlink.net.nospam (Arthur Ed LeBouthillier) wrote:
>On Wed, 22 Dec 1999 21:28:00 -0600, "Bob" <bo...@saracon.com> wrote:
>[ deletia ]
>
>>Certainly it could be. I know there are those who believe, with almost
>>religious fervor, that this is all just a matter of processing power, an
>>almost blind faith that some magical barrier will be crossed and this will
>>poof into existence. I think way too little is known yet to start drawing
>>definitive conclusions one way or the other. Even if it is possible it may
>>take us as long, and as many resources, to produce as evolution has taken,
>>or even longer. I have an extremely hard time believing this is solely a
>>matter of processing power. I think it will at least require, once we have
>>the theoretical processing power, a massive programming effort and a long
>>period of refinement if it is even possible at all. Until we have a system
>>that seems to mimic independent intelligence and/or emotional expression, it
>>will be difficult or impossible to evaluate the quality of it.
>
>I suggest you read "In Depth Understanding" by Michael Dyer. This book
>reviews an understanding program which understands (but does not
>model) emotions. It is an old work but it is representative of the
>capability to understand by a computer program/robot.
>
>If you take the baseline capabilities and expand them several orders
>of magnitude, then I think you begin to understand the capability of
>a robot to understand emotions.

You can not UNDERSTAND emotions.
This is simply obscene.
Understanding is mental.
Emotion is beyond mental.
It is much closer to the telepathic, or energy level of the
essence of the being. Emotion is tuning in on a particular
wavelength of the overall energy field of existence.

It is WELL beyond mere mentation.
It is a DIRECT communication, thus telepathic, between
the entities.

Emotion stands at the CORE of your being, expressing itself
in the physical domain. It IS an expression of your very
essence.

Mental is merely to facilitate the temple of the body and
help to maintain its well being in the physical domain.
You need to eat. You need clothes. You need shelter.
That is where mental comes in.
It is responsible for providing you such.

Emotion is outside the scope of these basic necessities
of the purely physical. It is the expression of your very
essence.

Furthermore, it provides the very impetus to be.
No emotion - no impetus to be.
You simply wither away.

This is what you see at this very moment in so-called
developed countries. They COMPLETELY suppressed the
emotion and operate on a purely physical level, driven
by the fear of survival.

That is why there is such a profound frustration.
There simply is no longer an impetus to be.
You need JOY and expression of your essence,
forever craving to reconcile you and your "environment".
You need jokes and playfulness, which provides you with
orgasmic aspects of existence, thus providing the very
impetus to be.

> Once the ability to understand
>emotions exists, that same model can be applied to the robot itself
>and therefore, the robot understands its owns emotions.

Simply obscene.

>Seriously, Dyer's book is fascinating stuff and it even includes
>code examples.

"Blind, leading the blind, WILL fall into the ditch".

>Cheers,
>Art Ed LeBouthillier
>

Bloxy's

unread,
Dec 25, 1999, 3:00:00 AM12/25/99
to
In article <386418A7...@infolink.com.br>, Valter Hilden <vhi...@infolink.com.br> wrote:
>Gene Douglas wrote:
>> ...
>> As I said before, if you say you see "yellow," and I see "yellow," how do we
>> know that we are experiencing the same thing?
>> ...
>

>I agree with you - we have no means of knowing. I know a person who says
>he only discovered he was color blind when he flunked his first driver's
>license examination. Before that, he had always thought "red" was the
>name of a dark shade of green. The only comparison we can do is within
>our own minds.
>
>Therefore, my conjecture is: since I have no memories before a certain
>moment when I was three years old, then before that time I was not
>conscient in the way I am now. Conscience arised spontaneously when my
>brain reached a certain capability,

Simply obscene.
You can not even begin to comprehend ANY of it.
Even the cells in your body are and ALWAYS were conscious.

If you were not conscious before the age of 3,
you would not be able to become conscious,
no matter what.

What happens after about the age of 5 is that you are now formed.
You have seen enough repetitions to start classifying
things and create the ideas of your own identity,
or your sense of self or ego, classifying everything.

But the very mechanism for perception and classification
was ALREADY there, as soon as you opened your eyes
for the first time. Otherwise, you would NEVER be able
to learn ANYTHING.

All you had to do at birth is refocus into a physical
domain. In the beginning it was strange. But as months
passed, you noticed many things repeat. So you latched
onto these repetitions and formed your own estimates
of the meaning of them.

Thus your ego was formed.

> either by growing in size, or by
>growing in experience, or both.

None of the above.
First of all, by simply accumulating the patterns,
you don't all of a sudden become conscious.
That is simply absurd.

You become conscious upon attunement along particular
aspects of existence, based on your inner gravitations.
You tune in on a particular wavelength, and you extract
those aspects you are excited about, for whatever reason.

Bloxy's

unread,
Dec 25, 1999, 3:00:00 AM12/25/99
to
In article <38642580...@earthlink.net>, kenneth Collins <kpa...@earthlink.net> wrote:
>Gene Douglas wrote:
>
>>[...]
>
>> As I said before, if you say you see "yellow," and I see "yellow," how do we
>> know that we are experiencing the same thing?
>
>get out an EM-frequency detector and a color chart, and cross-correlate.

And that is where you fall on your face.
You fail to comprehend that the frequency meter and
a color chart are both just AGREEMENTS, and not absolute
definitions of ANYTHING.

The spectrum is continuous and virtually infinite.
The biological life operates SIMULTANEOUSLY on multiple
frequencies, ranging from the cycles of the oceans,
days, to cycles of your heart beat, sound, light, and
even FAR beyond.

Again, SIMULTANEOUSLY.
So, there is simply no possibility of having an "objective"
interpretation of a snapshot of the vibrational spectrum
and active filters, every human being and other biological
life OBJECTIVELY perceives as color, sound, etc.

You view existence in these simple, single dimensional
and isolated compartments of "color", "sound", etc.

That is NOT complete picture and there is simply no
way to determine the OBJECTIVE value of ANYTHING.
It all depends on particular intent and purpose of given
individual. One may have the entire visual spectrum filtered
out in favor of green. Others may have it in ANY other way.
The entire perception thus changes.

It is simply impossible to reduce the multi-dimensionality
to some single-dimensional "objective" criteria,
ABSOLUTELY defined, as there simply is no "objective"
reference point.

It ALL exists in your mind.

Arthur Ed LeBouthillier

unread,
Dec 25, 1999, 3:00:00 AM12/25/99
to
On Sat, 25 Dec 1999 11:00:15 GMT, Bloxy's...@hotmail.com (Bloxy's) wrote:

>You can not UNDERSTAND emotions.

Maybe you can't.

>This is simply obscene.
>Understanding is mental.
>Emotion is beyond mental.
>It is much closer to telepatic, or energy level of the
>essense of the being. Emotion is tuning in on a particular
>wavelength of the overall energy field of existence.

I disagree with you on the mystical, magical nature
of emotions. Emotions are largely understandable.

Emotions are a mechanism of the human mind. They
serve a purpose: they alert the consciousness to
key states in reality that must be attended to. They must
be attended to because they relate to the attainment/failure
of certain important conditions. These conditions are the
goals of the being.

There are several mechanisms that create the emotion
response. They are understandable. The reason for emotional
responses can be inferred.

>It is WELL beyond mere mentation.

Not for some of us.

>It is a DIRECT communication, thus telepatic, between
>the entities.

Should I merely take your word for this? Just because you
say they're un-understandable and "telepathic?"

If you'd studied emotions a bit better, you'd realize first,
that not all emotions are involved in communication acts.
There are some emotions which are purely social, but some
are not. Second, you'd realize that the communicative
aspect of these emotions is not perfect; expressions of emotions
can be misinterpreted.

Another thing you would realize is that there are two classes
of emotions:

Positive - those emotions which motivate the
individual to continue their cause

Negative - those emotions which motivate the
individual to stop their cause.

>Emotion stands at the CORE of your being, expressing itself
>in the physical domain. It IS an expression of your very
>essense.

Again, should I merely take your word, or do you have
something other than mystical assertions about emotions?

>Mental is merely to facilitate the temple of the body and
>help to maintain its well being in the physical domain.
>You need to eat. You need clothes. You need shelter.
>That is where mental comes in.
>It is responsible for providing you such.

So? Emotions serve a purpose in that scheme. They
are an interrupt mechanism for the consciousness. The
emotion mechanism is programmed with certain goal
states which must be maintained/avoided and the
mechanism's job is to alert the consciousness of key
states related to these goals.
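
Read as an engineering claim rather than a psychological one, the "interrupt
mechanism over goal states" idea above fits in a few lines of code. The sketch
below is one possible reading of it, not an implementation of any existing
system; the goal names, thresholds, priorities, and the two-valued
positive/negative signal are all invented for illustration.

<PRE>
# One possible reading of the "emotions as interrupts over goal states"
# idea above.  Goal names, thresholds, and priorities are invented.
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    target: float      # desired value of the monitored quantity
    current: float     # latest sensed value
    priority: int      # how urgently the planner should be interrupted

def emotional_interrupts(goals, tolerance=0.1):
    """Return (valence, goal name, priority) alerts, most urgent first.

    A goal that is failing raises a negative signal (stop / change course);
    a goal on track raises a positive one (keep doing this)."""
    alerts = []
    for g in goals:
        shortfall = (g.target - g.current) / max(abs(g.target), 1e-9)
        valence = "negative" if shortfall > tolerance else "positive"
        alerts.append((valence, g.name, g.priority))
    return sorted(alerts, key=lambda a: a[2], reverse=True)

if __name__ == "__main__":
    goals = [Goal("battery_charge", 1.0, 0.2, priority=9),
             Goal("task_progress", 1.0, 0.95, priority=3)]
    for valence, name, prio in emotional_interrupts(goals):
        print("%s signal on %s (priority %d)" % (valence, name, prio))
</PRE>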

>Emotion is outside the scope of these basic necessities
>of purely phisical. It is the expression of your very
>essense.

Nonsense. Emotions are part of an abstract goal
mechanism.

>Furthermore, it provides the very imptetus to be.
>No emotion - no impetus to be.
>You simply wither away.

Do you have proof of this or should I merely take
your word for it? Do you have proof of someone
who has "withered away" for lack of emotions?

Anyways, even if someone should "wither away"
due to lack of emotion, it would be the lack of goals
that caused the withering (or the pursuit of improper
goals).

>This is what you see at this very moment in so called
>developed countries. They COMPLETELY suppressed the
>emotion and operate on purely physical level, driven
>by the fear of survival.

I don't see that.

>That is why there is such a profound frustration.

The profound frustration is caused by people like
you who confuse thinking...caused by people like
you who, lost in their own self-aggrandizement,
seek to confuse others...caused by people like you
who, lacking basic thinking skills, think they can
be experts on difficult subjects.

>There simply is no longer an impetus to be.

Some of us have strong impetuses to "be."

>You need JOY and expression of your essense,
>forever craving to reconcile you and your "environment".

Whatever. Existence is struggle.

>You need jokes and playfulness, which provides you with
>orgasmic aspects of existence, thus providing the very
>impetus to be.

Whatever. Keep your "orgasmic aspects" to yourself. I don't
want to hear about them.

>> Once the ability to understand
>>emotions exists, that same model can be applied to the robot itself
>>and therefore, the robot understands its owns emotions.
>
>Simply obscene.

Not really. If you have an implementable model, then you
can begin to implement it. You obviously don't have an
implementable model.

>"Blind, leading the blind, WILL fall into the ditch".

Nanu nanu.

Cheers,
Art Ed LeBouthillier


Gene Douglas

unread,
Dec 25, 1999, 3:00:00 AM12/25/99
to

Arthur Ed LeBouthillier wrote in message
<3864db6...@news.earthlink.net>...


>On Sat, 25 Dec 1999 11:00:15 GMT, Bloxy's...@hotmail.com (Bloxy's) wrote:
>
>>You can not UNDERSTAND emotions.
>
>Maybe you can't.
>
>>This is simply obscene.
>>Understanding is mental.
>>Emotion is beyond mental.
>>It is much closer to telepatic, or energy level of the
>>essense of the being. Emotion is tuning in on a particular
>>wavelength of the overall energy field of existence.
>
>I disagree with you on the mystical, magical nature
>of emotions. Emotions are largely understandable.
>
>Emotions are a mechanism of the human mind. They
>serve a purpose: they alert the consciousness to
>key states in reality that must be attended to. They must
>be attended to because they relate to the attainment/failure
>of certain important conditions. These conditions are the
>goals of the being.
>
>There are several mechanisms that create the emotion
>response. They are understandable. The reason for emotional
>responses can be inferred.
>

In time, it will be possible to name an emotion, and describe it in terms of
hormones flowing, affecting certain brain cells and organs, and evoking
certain physical feelings at various locations in the body. Brain circuits
carrying increased or decreased signals from one area of tissue to another
will be identified. Mystery will be removed from the event.

The internal perception of the emotion will be another matter, as will the
complex thoughts which both produce and result from the emotions, and bring
combinations of emotions into simultaneous play.

>>It is WELL beyond mere mentation.
>
>Not for some of us.
>
>>It is a DIRECT communication, thus telepatic, between
>>the entities.
>
>Should I merely take your word for this? Just because you
>say they're un-understandable and "telepathic?"
>

Communication is more than verbal. It involves tone of voice, rapidity of
speech, volume, and choice of synonyms, as well as non-speech cues involving
any of the other senses.

>If you'd studied emotions a bit better, you'd realize first,
>that not all emotions are involved in communication acts.
>There are some emotions which are purely social, but some
>are not. Second, you'd realize that the communicative
>aspect of these emotions is not perfect; expressions of emotions
>can be misinterpreted.

-
GeneDou...@prodigy.net
-
Portal to un-mod UU group at: http://www.deja.com/~soc_religion_uu/
Remember: (current list) Rich Puchalski / Richard Kulisz / --------- /
(Your name here)
New Un-Moderated group at alt.religion.unitarian-univ, or use URL to
go there.
-
"I quoted Rich Puchalski."
--

One out of every four Americans is suffering from some form of mental
illness.
Think of your three best friends. If they're OK, then it's you.

Gordon McComb

unread,
Dec 25, 1999, 3:00:00 AM12/25/99
to
> Emotions are a mechanism of the human mind.

I'm curious as to why you chose this particular phrase. While we can't be
certain animals have emotions, it certainly looks like many of them do.
And the most basic of emotions for survival (fear, aggression) are
exhibited in similar ways as ours. (Though this is not always the case. A
"grinning" chimpanzee is anything but happy. A dog wagging its tail is
glad to see its master; a cat wagging its tail is about ready to claw
you.)

Let's forget for a moment a machine's ability to sense human reactions and
infer from them a certain emotion. That has a practical application in
the machine-human interface. Let's just talk about the other way around,
a machine "feeling" emotions.

1. Would there be any purpose in having a machine feel an emotion if it
didn't also exhibit that emotion? If sensors can accurately determine
human emotions (galvanic response, voice stress, etc. are generally more
accurate than human judgement), why give robots emotions if it doesn't
require them to work with us?

2. What would be the purpose of a machine in exhibiting emotions? Do we
want machines to show hate? Despair? Sadness? In what way would that
make them better machines to suit our needs? (Or are we really only doing
this to play God?)

3. If we want machines to have emotions at all, why should they be human
emotions? Why not a cat's emotions? They're curious, they answer the
hunger need when it occurs, they exhibit self-preservation. Wouldn't it
be more logical to endow a working robot with the emotions of a beaver,
with its strong apparent work ethic and familial social unit, than Uncle
George who goes around all day in his underwear doing nothing?

4. Unless all emotions are ultimately for the goal of self-preservation,
what goal does the emotion of fear have? If there is a single goal to
fear, then why is it manifested in so many different ways in different
humans, and animals? If fear is an emotion that we seek to escape from,
why do many humans like to be scared? (Ask Stephen King how big his
royalty checks are if you don't believe this.) Is fear a "positive" or
"negative" emotion (your words) if people both want it and don't want it
at the same time?

-- Gordon

Pogo Possum, Ph.D.

unread,
Dec 25, 1999, 3:00:00 AM12/25/99
to

Gene Douglas <gene...@prodigy.net> wrote in message
news:842p81$3r1i$1...@newssvr04-int.news.prodigy.com...

>
> In time, it will be possible to name an emotion, and describe it in
terms of
> hormones flowing, affecting certain brain cells and organs, and
evoking
> certain physical feelings at various locations in the body. Brain
circuits
> carrying increased or decreased signals from one area of tissue to
another
> will be identified. Mystery will be removed from the event.

Richard D. Lane & Lynn Nadel (2000). Cognitive Neuroscience of
Emotion. Oxford University Press.

Jaak Panksepp (1998). Affective Neuroscience: The foundations of
human and animal emotions. Oxford University Press.

Joseph LeDoux (1996). The Emotional Brain. Touchstone Books, Simon &
Schuster (in paperback).

Antonio R. Damasio (1994). Descartes' Error: Emotion, reason and the
human brain. Avon Science (in paperback).

You guys (except Gene) are embarrassing yourselves. Read something
about emotion.


Arthur Ed LeBouthillier

unread,
Dec 25, 1999, 3:00:00 AM12/25/99
to
On Sat, 25 Dec 1999 09:54:27 -0600, "Gene Douglas"
<gene...@prodigy.net> wrote:

>In time, it will be possible to name an emotion, and describe it in terms of
>hormones flowing, affecting certain brain cells and organs, and evoking
>certain physical feelings at various locations in the body. Brain circuits
>carrying increased or decreased signals from one area of tissue to another
>will be identified. Mystery will be removed from the event.
>

>The internal perception of the emotion will be another matter, as will the
>complex thoughts which both produce and result from the emotions, and bring
>combinations of emotions into simultaneous play.

Most likely.

>Communication is more than verbal. It involves tone of voice, rapidity of
>speech, volume, choice of synonyms, as well as non-speech involving any of
>the other senses as well.

You're communicating with me right now without all of the other
non-verbal benefits. The non-verbal cues can help establish
a lot about the emotional state of an individual, true.

Cheers,
Art


Arthur Ed LeBouthillier

unread,
Dec 25, 1999, 3:00:00 AM12/25/99
to
On Sat, 25 Dec 1999 16:42:51 GMT, Gordon McComb <gmc...@gmccomb.com>
wrote:

>> Emotions are a mechanism of the human mind.
>
>I'm curious as to why you chose this particular phrase.

Because it portrays emotions as I understand them. It appears
that there is no single neural mechanism which creates
emotional experiences. There are several mechanisms which
respond to differing stimuli/conditions. The unity of the emotional
experience, however, is a product of the mind/consciousness.

> While we can't be certain animals have emotions, it certainly looks
> like many of them do. And the most basic of emotions for survival
> (fear, aggression) are exhibited in similar ways as ours. (Though this
> is not always the case. A "grinning" chimpanzee is anything but happy.
> A dog wagging its tail is glad to see its master; a cat wagging its tail is
> about ready to claw you.)

Right, the physical display of emotion, which is not under conscious
control is often species-specific.

As to the "grinning" chimpanzee, of course that is a threat display.
It has been surmised by some sociologists that human grinning is
a modified threat display as well. Underlying most humor is a
threat, and the purpose behind the grin is to signify that
the intent was not a real threat.

>Let's forget for a moment a machine's ability to sense human reactions and
>infer from them a certain emotion. That has a practical application in
>the machine-human interface. Let's just talk about the other way around,
>a machine "feeling" emotions.
>
>1. Would there be any purpose in having a machine feel an emotion if it
>didn't also exhibit that emotion? If sensors can accurately determine
>human emotions (galvanic response, voice stress, etc. are generally more
>accurate that human judgement), why give robots emotions if it doesn't
>require them to work with us?

Yes. Not all emotions are social. Some emotions, like pride, are
social emotions and their subjects/objects are "social agents."
In some cases, the display of these emotions is completely
sub-conscious. It has been shown that some emotional displays
are completely beyond one's control (i.e. the expression of true
laughter is different than feigned laughter).

But, not all emotions exist for the purpose of communicating internal
state. I would say that the communication of internal emotional state
is a secondary aspect of these emotions, allowing internal emotional
state to facilitate social activity. However, emotions are also a
product of an abstract goal system and I would say that goal
accomplishment is their primary function.

But, I wouldn't say that robots need to be able to display their
emotional state, although it could be useful/nice.

>2. What would be the purpose of a machine in exhibiting emotions? Do we
>want machines to show hate? Dispair? Sadness? In what way would that
>make them better machines to suit our needs? (Or are we really only doing
>this to play God?)

To facilitate social interaction with humans.

As to whether we want machines to show hate/despair/sadness etc, I
would say, probably yes. We would want them to display their internal
states when it is useful. However, we may not want robots to act out
these emotional states. In fact, because emotions serve as a
programmed orienting reflex, we certainly want robots to be able to
orient themselves reflexively as well as cognitively. However, we want
robots to be useful machines and therefore, rather than exhibiting
fright, we might make robots more aggressively seek to control
situations. In these cases, robots might have what is known as
"character;" they would have a strong moral reasoning ability as
well as a strong understanding of their own roles in society that
would make them more useful.

If you examine emotions in terms of their motivations, you will see
that underlying an emotion is a goal state (either positive or
negative). Additionally, one can identify the subjects/objects of
these goals. My view is that in engineering robots, we would
engineer their emotional capabilities. There would be certain
hardwired and non-modifiable emotions with extreme priority
that would monitor the cognitive state of the robot to ensure that
it does not/cannot harm people in the ways that we want.

With hatred, there is a subject and an object. The subject of
hatred is one who is perceived to have engaged in some
wrong-doing, the object being some social group of extreme
importance. My view is that we would not want our robots to
have this emotion because, although we want our robots to
be moral agents, we would like to limit their abilities to promote
certain kinds of states. Hatred is a biologically useful emotion
because it causes humans to protect their social groups. However,
I don't think we want robots to be telling us how we should live
in social groups and therefore, I don't think we would want this
emotion in robots. At most, we would want robots to observe
such social issues dispassionately. In my view, we don't want
warrior robots whose purpose is to analyze human social
situations and take it upon themselves to re-engineer us. I
think we should leave that value/goal to humans, no matter
how tempting. This is a dangerous value to have in robots,
as it obviously leads to strife among humans.
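
A sketch of the layering implied above -- a small set of hardwired,
highest-priority checks that the robot's adjustable goals can never override --
is below. The constraint names and the action format are invented for
illustration; this is one possible reading of the idea, not a known design.

<PRE>
# Sketch of the layering described above: a fixed set of highest-priority
# constraints that the robot's modifiable goals can never override.
# Constraint names and the action format are invented for illustration.
HARDWIRED_CONSTRAINTS = (
    lambda action: not action.get("risks_human_harm", False),
    lambda action: not action.get("acts_against_owner_instructions", False),
)

def select_action(candidate_actions, score_by_learned_goals):
    """Pick the best-scoring action that passes every hardwired constraint.

    score_by_learned_goals is the robot's adjustable preference function;
    the hardwired layer filters candidates before it is ever consulted."""
    permitted = [a for a in candidate_actions
                 if all(check(a) for check in HARDWIRED_CONSTRAINTS)]
    if not permitted:
        return {"name": "halt_and_ask_for_help"}   # nothing safe to do
    return max(permitted, key=score_by_learned_goals)

if __name__ == "__main__":
    actions = [{"name": "speed_through_crowd", "risks_human_harm": True},
               {"name": "take_long_safe_route"}]
    best = select_action(actions, lambda a: 0.9 if "safe" in a["name"] else 0.5)
    print(best["name"])   # take_long_safe_route
</PRE>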

Despair is an emotion that is a product of two components:
continued goal failure and a perceived lower social status
(in my opinion based on an incomplete analysis). I think this
is also an emotion which we would want to avoid giving a
robot. Sadness is also an emotion related to goal failure. We
want robots to have some component of this, I believe
because this will motivate the robot to avoid certain states.
However, we don't want sadness to result in despair or shock.

The mechanism to ensure against these kinds of conditions
will be emotions. Emotions will monitor the cognitive thought
train and focus the robot to more productive behaviors.

Humans are social creatures. However it came about, we
have propensities to form and break from social groupings.
Some emotions support this capability (and some are so
important that we cannot control their public display
completely). Forming and breaking social groupings is
a messy business. Some are motivated to a high degree
to conserve certain social groupings; there are emotions
to support that. This desire to maintain certain social groupings
is so strong that the individual may be motivated to kill another.

Robots need to be social creatures but with a limited range
of social goals. Some social goals must be hardwired in
so that they can't be overridden. It will be within this pre-defined
social goal tree that the robots other emotions will/should operate.

>3. If we want machines to have emotions at all, why should they be human
>emotions? Why not a cat's emotions? They're curious, they answer the
>hunger need when it occurs, they exhibit self-preservation. Wouldn't it
>be more logical to endow a working robot with the emotions of a beaver,
>with its strong apparent work ethic and familial social unit, than Uncle
>George who goes around all day in his underwear doing nothing?

I don't think that the issue is whether they are "human" emotions or
not. Robots will have robot emotions in order to make them more
useful to people.

Beavers are also social animals. It is likely that they have many
of the same emotions that we have because of that. My belief
is that emotions are probably a mechanism common to all mammals
although each species exhibits them differently.

As robots become more capable of reasoning, they become more
capable moral agents. We will, without a doubt, want to engineer
the "character" of robots.

>4. Unless all emotions are ultimately for the goal of self-preservation,
>what goal does the emotion of fear have?

The goal is to avoid an undesirable state leading to failure of a
preservation goal for something of extreme importance. Robots
should feel fear in order to protect humans. Humans can be
brought to a condition called shock when some key preservation
goals are violated. We probably don't want robots to go into shock.
We probably want robots to become socially attached to all humans,
in general with a slight preference for their owners but not at the
expense of their higher moral obligations.

> If there is a single goal to fear, then why is it manifested in so many
> different ways in different humans, and animals?

I don't think that fear has the goal merely to preserve one's life. I
think that it relates to several key values, including one's life. It
can be motivated by a number of key conditions. Should robots
feel fear? I think yes.
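
As a purely illustrative sketch (the names and weights below are my
own assumptions, not anyone's published design), fear can be appraised
over several protected values at once, with a nearby human weighted
above the robot's own hardware:

<PRE>
#include <stdio.h>

/* One value the robot is charged with preserving. */
struct protected_value {
    const char *name;
    double importance;   /* how much the robot cares, 0.0 .. 1.0    */
    double threat;       /* estimated chance of imminent loss, 0..1 */
};

/* Fear tracks the worst credible threat to any protected value,
 * not just a threat to the robot's own existence. */
double fear_level(const struct protected_value *v, int n)
{
    double fear = 0.0;
    int i;
    for (i = 0; i < n; i++) {
        double f = v[i].importance * v[i].threat;
        if (f > fear)
            fear = f;
    }
    return fear;
}

int main(void)
{
    struct protected_value vals[] = {
        { "nearby human", 1.0, 0.30 },   /* humans outrank the robot itself */
        { "own chassis",  0.4, 0.80 }
    };
    printf("fear = %.2f\n", fear_level(vals, 2));
    return 0;
}
</PRE>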

> If fear is an emotion that we seek to escape from, why do many humans
> like to be scared? (Ask Stephen King how big his royalty checks are if you
> don't believe this.)

I don't know; that's a good question. Let's examine it.

A positive emotion makes one seek to maintain a particular
state. A negative emotion makes one seek to eliminate a particular
state. However, the reason that people continue such activity
may not be related to their emotions. I've tried to distinguish
emotions from feelings.

My guess is that there must be another motivational response which
causes an individual to enjoy surviving a fear emotion (perhaps
endorphins are released in a fear state and afterward one is "high"
from them). It is obviously positively motivating, and the
desire for endorphins overrides the fear emotion; by surviving a
fear-causing situation, endorphins are released. Having endorphins
running through your veins is a great feeling. For years, I ran a lot,
upwards of 10 or more miles a day. In retrospect, it was probably
endorphins that motivated me to continue running. It is probably
the same thing with entering fearful conditions.

> Is fear a "positive" or "negative" emotion (your words) if people both
> want it and don't want it at the same time?

Again, I think it is important to distinguish between what I'm calling
emotions (which are a focusing mechanism of cognition) and
feelings in general (or other sub-conscious motivating mechanisms).
It is obvious that several mechanisms are involved in motivating
people to do things. I don't think that people want "fear"; what they
want is the feeling after the fear (a state of well-being).
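
A small sketch of that split, with everything invented for
illustration: the negative emotion operates during the episode (a push
to leave the state), while the feeling is a separate reward signal
that only shows up after the episode has been survived.

<PRE>
#include <stdio.h>

struct episode {
    double fear;      /* negative emotion felt during the episode */
    int    survived;  /* did it end without real harm?            */
};

/* During: the negative emotion biases the agent toward escaping. */
double escape_drive(const struct episode *e)
{
    return e->fear;
}

/* After: a relief/reward feeling, largest when high fear was survived
 * (a stand-in for the endorphin story above, not a claim about it). */
double post_reward(const struct episode *e)
{
    return e->survived ? e->fear * 0.8 : 0.0;
}

int main(void)
{
    struct episode ride = { 0.9, 1 };   /* a horror novel or roller coaster */
    printf("escape drive during episode: %.2f\n", escape_drive(&ride));
    printf("reward felt afterwards:      %.2f\n", post_reward(&ride));
    return 0;
}
</PRE>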

Cheers,
Art Ed LeBouthillier


Valter Hilden

unread,
Dec 26, 1999, 3:00:00 AM12/26/99
to
Gordon McComb wrote:
> ...

> 2. What would be the purpose of a machine in exhibiting emotions? Do we
> want machines to show hate? Dispair? Sadness? In what way would that
> make them better machines to suit our needs? (Or are we really only doing
> this to play God?)
>
> 3. If we want machines to have emotions at all, why should they be human
> emotions? Why not a cat's emotions? They're curious, they answer the
> hunger need when it occurs, they exhibit self-preservation. Wouldn't it
> be more logical to endow a working robot with the emotions of a beaver,
> with its strong apparent work ethic and familial social unit, than Uncle
> George who goes around all day in his underwear doing nothing?
> ...
> -- Gordon

A machine with human emotions would be very useful in understanding
ourselves. If we could isolate emotions and study them in the same way
we study isolated cells and tissues, that would mean a large step
forward in psychotherapy.

On the other hand, we still have many boring, dangerous or unhealthy
jobs that need to be performed by humans today because machines aren't
intelligent enough. We need to develop artificial intelligence to create
machines capable of performing such jobs. We have to be sure that the
intelligence in such machines is enough to perform the job, but we would
not want to send a machine capable of suffering to do it. It is just as
cruel to torture an emotionally sentient robot as it is to torture a human.

Bob

unread,
Dec 26, 1999, 3:00:00 AM12/26/99
to
Gene Douglas wrote in message
<83tdk7$8rnc$1...@newssvr03-int.news.prodigy.com>...
...

>We both begin with the same things. But we apply thinking to the process,
>which makes it different. Our hypothetical sophistocated computer could
>also provide that thinking.

All this sort of depends on your definition of sophistication. If you mean
that you magically have a computer that displays something approximating
human emotion, and all evidence shows that these are equivalent, or at least
similar within their range and quality, in response and motivation in all
ways, then you may have some argument that they are also equivalent or
similar in the value we put on them. This is sort of an identity from a
logic point of view. IF and IF and IF... THEN conclusion. You have a lot
of IF's here that you haven't examined thoroughly. This argument borders on
fantasy.

If you begin to examine the requirements for these things to come about, you
find this is nowhere near as simple and straightforward as your assumptions
presume. You base these on circumstances of the moment and highly
questionable intermediate conclusions without examining the path or
determining the likelihood that these circumstances will continue their
current trends long enough to reach the required level.

For example, a faster computer with more memory is not necessarily a
"smarter" computer. It's the same basic principle with the same basic
programming. The increase in power has really bought you nothing in the way
of intelligence or sophistication. It may provide a higher potential, but
only if its programming can be improved by the same margin at the same rate.
This has not been proven to be the case.

> If you can ever create robotic emotions, they
> >may be inherently unreliable because they can never be the same as humans
> >since the robot could never be that similar. There is a strong human
> >tendency to impose our image of human emotions on things like animals and
> >even inanimate objects.
> >
> There may be no need for robots emotions to be identical to humans.
>Feeling in robots would be tailored to the need in regard to a purpose. At
>any rate, how can we say that two humans experience exactly the same
thing.
>If I see yellow, and you say you see yellow, how do we know that you and I
>are experiencing the same thing? If I like milk and you can't stand the
>stuff, how do we know that we are both tasting the same thing? If I
>experience say, loneliness, and you say you experience an emotion with the
>same name, how do we know that you and I are experiencing the same thing?

It's all in the definition you use, of course. We can equate human emotions
because we are human. We relate somewhat to animal emotions because we
assume a similarity between these biological systems. A digital electronic
system may not be remotely similar. You might as well call it all bugs or
errors. A more complex system has more complex bugs, not necessarily
emotions.

...


>Why does it clearly have no emotion? If all conditions are met, then how
>can we say the one is different from the other? The human baby exhibits
>certain behaviors, which adults interpret as emotion. If R2D2 should
>exhibit certain behaviors, humans might also interpret that as emotion.
How
>do we say that one perception is better than the other?
>
> >> >I once used a digital oscilloscope that printed "ouch" on the screen
>when
> >> >the input voltage exceeded the max voltage for the display scale set.
> >>Does
> >> >this mean the scope has emotion and was expressing an emotional
>response
> >>to
> >> >extreme stimulus?

...


> >So would you say the scope experienced an emotion, or not?
>
>Firstly, a human programmer had put the ouch in the machine. Like the
>husband who says "you look fine," without looking up from his newspaper,
it
>is just producing a prefabricated response, unrelated to reality.
>
> Secondly, the example is too simple to meet conditions required for us to
>say that a human had experienced a feeling. If we can list all criteria
for
>the human, and then apply those same criteria to a machine, then how are
we
>to say that the machine still does not meet the criteria, if it appears to
>do so?
>
> > If not, how is
> >it any different from the robot example you presented above? The scope
> >wasn't lying, it was expressing a response to extreme stimulus. If the
> >ability to analyze input and express a response that can be viewed in
> >emotional terms indicates emotion, then we already have emotion in a lot
>of
> >different things, all electical or electronic systems and probably most
or
> >all mechanical systems. I don't buy it.
> >
>The human manufacturer had programmed in a lie (or a joke) to the machine.
>The printing could as well have read "error."

You failed to answer the simple question. You seem to be arguing both sides
of this. "Emotions are similar at similar levels of complexity" and
"Ability to express emotion is some proof of the existence of that level"
but also "The scope wasn't expressing an emotion because it is not complex
enough" and "The scope's expression was not really valid because it's a
machine programmed to do so". Where do you draw these lines and set your
definitions?

...


>I can be as arbitrary. I believe machines because they are machines, and
>not in humans because they are humans and not machines. A human could be
>said to never be capable of machine emotions, because it's not a machine,
>or even mechanical.

But emotion is part of most people's definition of human, or at least
animal. Some people trust machines precisely because they have no emotions.
They are reliable because we design them to be that way and we understand
their function. If they become unreliable, they generally become useless
and are destroyed or modified until they are reliable again.

...


> >Now you've made a jump to robot intelligence, which is a similarly
>uncertain
> >possibility, but this is part of the problem in defining our rules, isn't
> >it?
>
>It's hard to say that a machine doesn't have intelligence, if we can't even
>say what intelligence is. If intelligence is the ability to pass an IQ
>test, then machines can probably do that now.

Certainly it could be programmed, by a human, to pass a specific test.


>
> If it's ever possible for a machine to have emotion, then we have that
> >now in every electro-mechanical machine (and probably a lot of other
> >things), albeit at an insect-like level. If you say machines or
computers
> >don't have emotion now already, then a more complex and sophisticated
> >machine won't really ever have emotions either.
>
> You've again made a leap to the arbitrary.

It's not arbitrary. It's the fundamental, root question. Is it justifiable
to equate machine emotional mimicry, at any level, to animal or human
emotion? If so, then emotion becomes trivial and cheap. Virtually
everything has it. If not, then where exactly do you draw the line so it
can provide a reasonable definitional divide? What is your definition of
emotion?

> More likely, a gradual evolution will take place, with each generation
>approaching that point. Suppose scientists found a way to make a soft
>computer. Suppose they took a brain cell from a sea worm, multiplied it
to
>large numbers, and guided the way they connected, and even manipulated the
>number of axons per cell.

Now you're arguing a difference in kind differentiated by a difference in
structure. This is the very argument that animal emotion is fundamentally
different than similar effects from machines. A difference in structure or
materials imparts a difference in results and the definitions and value we
put on those results.

> Suppose they produced a computer of tremendous complexity in this way,
>which could produce human-like responses in every detail. By the mere fact
>that the components are soft, would you think that if such a computer
could
>report a feeling, that it has a feeling?

Perhaps. It's not really due to the fact that it's soft, but that it is
fundamentally different from the machine. If you build this up to the point
of similarity to humans in shape and function and development and
experience, then you might come up with something we would recognize as
having emotion because of that similarity. How much of this similarity is
necessary? This is an awfully large "might" and also may not result in
anything we equate to emotion.

> I think way too little is known yet to start drawing
> >definitive conclusions one way or the other. Even if it is possible it
>may
> >take us as long, and as many resources, to produce as evolution has
taken,
> >or even longer.
>
> Currently, computers double their speed and processing power every 18
>months, while most biological organisms have not. If we could say that
>lower animals did something similar every 100,000 years, then maybe we
will
>progress about 50,000 times faster than that.

This was discussed before. It is specious logic and the conclusions drawn
from it are invalid. You are taking the momentary slope of a function and a
near-term rate increase, assuming it will continue forever and extend to
infinity, applying this imaginary mathematical pattern to other functions in
the same industry (which haven't shown a similar pattern) and extrapolating
wild conclusions from the result. None of this is reasonable to assume. In
a truly infinite universe, this might be possible, but the universe, while
physically extremely large, is not infinite in this sense.

> I have an extremely hard time believing this is solely a
> >matter of processing power. I think it will at least require, once we
>have
> >the theoretical processing power, a massive programming effort and a long
> >period of refinement if it is even possible at all.
>
> Actually, it's the ghost inside the machine. He even has a name:
>Homunculus. I forgot to mention that.


Please! You can't be serious about THIS. You might as well be talking
about the "boogie man". Patterns in the chaos are not the same as
intelligence or emotion. There are patterns and structure in the complexity
of tornados and hurricanes and fire, but few would believe this indicates
emotion or intelligence in these. There's no indication that this kind of
function would take any different pattern in a highly complex computer
system. There is no inherent guidance in a highly complex computer, beyond
our level of control, just because it is sometimes used to perform
computation.

If you're hoping that random, or chaotically complex, effects in a computer
will magically result in an intelligent or emotional system, I suspect that
this is unlikely. The odds for it are so astronomically small as to put it
on a par with fantasy or irrational fears. There's no indication that
random, or uncontrollably complex functions will be convergent on
intelligence or emotion and it is far, far, far more likely that it will be
divergent.

Bob


Jim Balter

unread,
Dec 27, 1999, 3:00:00 AM12/27/99
to
Valter Hilden wrote:
>
> Gene Douglas wrote:
> > ...
> > As I said before, if you say you see "yellow," and I see "yellow," how do we
> > know that we are experiencing the same thing?
> > ...
>
> I agree with you - we have no means of knowing. I know a person who says
> he only discovered he was color blind when he flunked his first driver's
> license examination. Before that, he had always thought "red" was the
> name of a dark shade of green. The only comparison we can do is within
> our own minds.

Ah, but in fact we *do* have a means of knowing that *his* perception
of red isn't the same as *ours* (he more easily confuses red and green
lights, for one thing), so a comparison *is* possible if in fact the
color spaces differ.

The problem here is erroneous reification of "experience".
There is no such *thing* as "yellow experience". "yellow"
names a *relationship* within human perceptual space,
and the relationship in my space is (nearly; there are tests that
can show differences at the edges, where I might call yellow what you
call orange, for instance) the same as the relationship in your
space. It is not the same in a color blind person's space; s/he
experiences some stimuli identically when we experience them
differently, and tests reveal this.

The spectrum inversion thought experiment involves replacing
"my experience of red" with "my experience of blue" while
holding all the *relationships* the same, but the
experiment doesn't apply to the real world because there are
no autonomous "qualia" such that one can be replaced by another.
After the inversion, blood continues to be called red and
its color continues to seem "warm" relative to the color of
the sky. Ex hypothesi, if we woke up one day with our qualia
inverted, we wouldn't be able to tell (and what would we say if we
could? "What has happened to me? Yesterday the sky was blue,
but today it's *blue*!"). If we take away all of these relationships
among our qualia, there is nothing left, and thus nothing to be
exchanged.

I suggest C. L. Hardin's _Color for Philosophers_ for those who
are seriously interested in learning about such things.

--
<J Q B>

Jim Balter

unread,
Dec 27, 1999, 3:00:00 AM12/27/99
to
Valter Hilden wrote:
>
> Gene Douglas wrote:
> > ...
> > >The difference may be in how we define this. A simple machine can answer
> > >this question, even though it clearly has no emotion. A human baby can't
> > >answer the question, even though he/she clearly seems to have emotion.
> > >But read on:
> > >
> > Why does it clearly have no emotion? If all conditions are met, then how
> > can we say the one is different from the other? The human baby exhibits
> > certain behaviors, which adults interpret as emotion. If R2D2 should
> > exhibit certain behaviors, humans might also interpret that as emotion. How
> > do we say that one perception is better than the other?
> > ...

>
> At this moment, we have absolutely no means to enter the brain, or mind,
> or soul of any other human being, baby or adult, or of any animal or
> machine, so we are limited to philosophical conjectures on this respect.

We can't enter the sun but we still know something about it.
We "enter" other human beings by having dialogues with them.
Anecdotes such as the one below are evidence of the content and
structure of the human mind (the evidence is not conclusive,
but no empirical evidence truly is). (BTW, the rubric regarding
"anecdotal evidence" is not against anecdotes per se -- all evidence
consists of anecdotes. Rather it is against isolated reports without
concern for statistical significance or bias.)

This is an example of
Dennett's "heterophenomenology". The idea that one must be inside
a mind to gather evidence about the mind is simply wrong, but is
fostered by having special access to our own minds. If we were suns,
we would probably think that you had to get inside a sun to know
about it, and that all else was "philosophical conjecture".

> However, I can enter my own younger brain,

Not really. You have memories, which you are now dumping to an
external medium.

> and there is a moment in my
> childhood that I very distinctly remember to this day. When I was three
> years old, I had a sudden perception of myself as a person; I realized
> that I was a person, that I had a "self conscience", or whatever you may
> call it. I cannot remember any single event in my life that happened
> before that moment, but I have a more or less complete and continuous
> memory of my life after that. Ever since, I have wondered on what
> happened to my brain at that moment, something just "clicked" and I
> became conscious of myself.
>
> So, based on my own, admittedly anecdotal, experience, I assume that
> there is a certain minimum complexity in a brain in order to exhibit
> conscious behavior. A baby may have emotions, may feel pain or pleasure,
> but is not philosophically aware of the existence of his or her own
> self. Unfortunately, our technology is still far from simulating in a
> computer the brain of a three year old child, we do not have enough cpu
> power for that.

--
<J Q B>

Gary Forbis

unread,
Dec 27, 1999, 3:00:00 AM12/27/99
to
Jim Balter <j...@sandpiper.net> wrote in message
news:3866B7D7...@sandpiper.net...

> If we take away all of these relationships
> among our qualia, there is nothing left, and thus nothing to be
> exchanged.

I agreed with everything you wrote up to this point. The qualia themselves
remain as they have no relationship to the outside world in and of themselves.
It is true that one cannot invert color qualia if there is no relationship
between the qualia and colors but just as people and places can exist without
any relationship connecting them, photons and quail can exist without any
relationship connecting them.

Jim Balter

unread,
Dec 27, 1999, 3:00:00 AM12/27/99
to
Gary Forbis wrote:
>
> Jim Balter <j...@sandpiper.net> wrote in message
> news:3866B7D7...@sandpiper.net...
> > If we take away all of these relationships
> > among our qualia, there is nothing left, and thus nothing to be
> > exchanged.
>
> I agreed with everything you wrote up to this point.

Apparently not, since I said that a reification error is involved.

> The qualia themselves
> remain as they have no relationship to the outside world in and of
> themselves.
> It is true that one cannot invert color qualia if there is no relationship
> between
> the qualia and colors but just as people and places can exist without any
> relationship connecting them, photons and quail can exist without any
> relationship
> connecting them.

Silly baseless claims. It is particularly obvious that *places* do not
exist autonomously (imagine exchanging New York and California
without moving them, renaming them, changing their shape, and so on).
Even photons only exist as a relationship among observations; at the
quantum mechanical level, they are only statistical; and talk about
exchanging photons is nonsensical; all photons are identical.
And if two people A and B are exchanged but A has the same
hair color and body shape as C, is still as old as D, and so on,
if all the *attributes*, which are determined by relationships within
our conceptual space, are unchanged, then it is nonsense to say that
they have been exchanged. Nothing exists separate from its attributes.
There are various ways to exchange two people and have them retain
their attributes, but the spectrum inversion thought experiment
imagines exchanging qualia but leaving all the attributes the same,
except the "qualia themselves", the "redness of red" and so on.
But there is no "Forbisness of Forbis" separate from all of Forbis's
attributes.

Apart from relationships within the conceptual space, quail have no
attributes. The claim that they exist is empty; if they are
"exchanged" but all the relationships are left unchanged, then what
is exchanged are two featureless identical null entities, which is no
different from not exchanging them at all. People imagine exchanging
red and blue but leaving all the relationships intact (oceans and sky
now look "red" but still seem cool and still are called "blue"), but
that isn't really what they are imagining, because there is no way to
imagine "blue" having all the relational attributes of "red". If
"blue" has all the relational attributes of "red", then it *is* red,
because that's all there is to red. Saying "the quail themselves
remain" is nonsense; "blue" just *isn't* itself if it is warm, applies
to apples and receding stars, and so on.

Right now as I look at my screen, I see a page with two hyperlinks
displayed in blue. One is on a white background, and one is on a green
background. While I know "objectively" (i.e., via information separate
from my direct perception) that the same wavelengths are being
transmitted in both cases, perceptually they are quite different hues.
The notion that there are "qualia in and of themselves"
is a myth that cannot be maintained in the light of careful
examination. And understanding and then abandoning the myth allows us
to solve conundra like spectrum inversion, and allows us to actually
explain consciousness instead of invoking "and here there be dualism" majik.

--
<J Q B>

Jim Balter

unread,
Dec 27, 1999, 3:00:00 AM12/27/99
to
Jim Balter wrote:

> Apart from relationships within the conceptual space, quail have no

Lord, now *I'm* doing it. It's my spell checker's fault, as *I*
typed "qualia" (the singular of which is "quale"; "quail" is a
funny looking bird with a topknot).

--
<J Q B>

Gene Douglas

unread,
Dec 27, 1999, 3:00:00 AM12/27/99
to

If an electronic device can be made which will let the blind "see," or the
deaf "hear," are they seeing or hearing the same thing as you or I, even
though they give the same names to them?

Would the blind person appreciate a blue sky, or an artwork, or the deaf
person appreciate music in the same way as you or I? Would there be any way
to know?

Gene Douglas

Jim Balter wrote in message <3867A836...@sandpiper.net>...

Gene Douglas

unread,
Dec 27, 1999, 3:00:00 AM12/27/99
to

Bob wrote in message <846f5i$m50$1...@ffx2nh5.news.uu.net>...


>Gene Douglas wrote in message
><83tdk7$8rnc$1...@newssvr03-int.news.prodigy.com>...
>...
>>We both begin with the same things. But we apply thinking to the
process,
>>which makes it different. Our hypothetical sophistocated computer could
>>also provide that thinking.
>
>All this sort of depends on your definition of sophistication. If you
mean
>that you magically have a computer that displays something approximating
>human emotion, and all evidence shows that these are equivalent, or at
least
>similar within their range and quality, in response and motivation in all
>ways, then you may have some argument that they are also equivalent or
>similar in the value we put on them. This is sort of an identity from a
>logic point of view. IF and IF and IF... THEN conclusion. You have a lot
>of IF's here that you haven't examined thoroughly. This argument borders
on
>fantasy.
>

If computers were far better than they are today -- and they tend to
double in quality every 18 months -- say in the year 2010, and if it could
exhibit behaviors comparable to R2D2, C3PO or HAL, then would we be able to
arbitrarily say they are not experiencing emotion? If we should say so, by
what reasoning?

>If you begin to examine the requirements for these things to come about,
you
>find this is nowhere near as simple and straightforward as your assuptions
>presume. You base these on circumstances of the moment and highly
>questionable intermediate conclusions without examining the path or
>determining the likelyhood that these circumstances will continue their
>current trends long enough to reach the required level.
>

But, should they do so... You are simply suggesting that the technology
would not become available, not that, if it did, there would be no
"feeling."

>For example, a faster computer with more memory is not neccessarily a
>"smarter" computer. It's the same basic principle with the same basic
>programming. The increase in power has really bought you nothing in the
way
>of intelligence or sophistication. It may provide a higher potential, but
>only if its programming can be improved by the same margin at the same
rate.
>This has not been proven to be the case.
>

So, in what other ways are our biological computers different? Once we
define that, then we can set out to find ways to construct that, too. Even
programs can be partially constructed with computers, and as computers get
better, construction of programs can get better, too.

>> If you can ever create robotic emotions, they
>> >may be inherently unreliable because they can never be the same as
humans
>> >since the robot could never be that similar. There is a strong human
>> >tendency to impose our image of human emotions on things like animals
and
>> >even inanimate objects.
>> >

You arbitrarily assume that the robot could never be that similar. At any
rate, human emotions are notoriously "unreliable," by whatever standard you
wish to define that word.

>> There may be no need for robots emotions to be identical to humans.
>>Feeling in robots would be tailored to the need in regard to a purpose.
At
>>any rate, how can we say that two humans experience exactly the same
>thing.
>>If I see yellow, and you say you see yellow, how do we know that you and
I
>>are experiencing the same thing? If I like milk and you can't stand the
>>stuff, how do we know that we are both tasting the same thing? If I
>>experience say, loneliness, and you say you experience an emotion with
the
>>same name, how do we know that you and I are experiencing the same
thing?
>
>It's all in the definition you use, of course. We can equate human
emotions
>because we are human. We relate somewhat to animal emotions because we
>assume a similarity between these biological systems. A digital
electronic
>system may not be remotely similar. You might as well call it all bugs or
>errors. A more complex system has more complex bugs, not neccessarily
>emotions.
>

We can define as precisely as we wish. If we define "anger" according to
some very narrow parameters, then we need only apply that definition to
computers to determine if they fit the same requirements.

Simply define emotion with as many requirements as you wish. Then apply all
of those requirements to the machine. If you should find a machine that
you say has no emotion, then either it doesn't fit the requirements, or you
are being arbitrary. If you say the oscilloscope does not have emotion,
then you can either produce reasons why it does not, or reasons why it
does. Without reasons, you can say nothing definitive about it.


>...
>>I can be as arbitrary. I believe machines because they are machines, and
>>not in humans because they are humans and not machines. A human could
be
>>said to never be capable of machine emotions, because it's not a
machine,
>>or even mechanical.
>
>But emotion is part of most people's definition of human, or at least
>animal. Some people trust machines precisely because they have no
emotions.
>They are reliable because we design them to be that way and we understand
>their function. If they become unreliable, they generally become useless
>and are destroyed or modified until they are reliable again.
>

If your definition says, "one must be human to experience emotion," then
you've got the machine beat. But you would also say there is no such thing
as an angry chimpanzee, or a fearful dog. On the other hand, you would
have to say, "what does it take to be human?" Possibly if you can create a
complete definition, then a machine of the same description could be said to
be human, also. (See Blade Runner. In the end, the hero marries the
robot.)


>...
>> >Now you've made a jump to robot intelligence, which is a similarly
>>uncertain
>> >possibility, but this is part of the problem in defining our rules,
isn't
>> >it?
>>
>>It's hard to say that a machine doesn't have intelligence, if we can't
even
>>say what intelligence is. If intelligence is the ability to pass an IQ
>>test, then machines can probably do that now.
>
>Certainly it could be programmed, by a human, to pass a specific test.
>>
>> If it's ever possible for a machine to have emotion, then we have that
>> >now in every electro-mechanical machine (and probably a lot of other
>> >things), albeit at an insect-like level. If you say machines or
>computers
>> >don't have emotion now already, then a more complex and sophisticated
>> >machine won't really ever have emotions either.
>>
>> You've again made a leap to the arbitrary.
>
>It's not arbitrary. It's the fundamental, root question. Is it
justifiable
>to equate machine emotional mimickry, at any level, to animal or human
>emotion? If so, then emotion becomes trivial and cheap.

And I guess I would have to say, "so what?" You are now speaking of
values, not definitions.

Virtually
>everything has it. If not, then where exactly do you draw the line so it
>can provide a reasonable definitional divide? What is your definition of
>emotion?
>

I would say that you can take any definition you wish. Once you are
satisfied with that definition, then you just apply it to anything you are
trying to evaluate.

>> More likely, a gradual evolution will take place, with each generation
>>approaching that point. Suppose scientists found a way to make a soft
>>computer. Suppose they took a brain cell from a sea worm, multiplied it
>to
>>large numbers, and guided the way they connected, and even manipulated
the
>>number of axons per cell.
>
>Now you're arguing a difference in kind differentiated by a difference in
>structure. This is the very argument that animal emotion is fundamentally
>different than similar effects from machines. A difference in structure
or
>materials imparts a difference in results and the definitions and value we
>put on those results.
>

We have hard parts and soft parts. You seem to be biased in favor of soft
parts. Perhaps if we used silicone rather than silicon, you would be
satisfied with a soft machine.

>> Suppose they produced a computer of tremendous complexity in this way,
>>which could produce human-like responses in every detail. By the mere
fact
>>that the components are soft, would you think that if such a computer
>could
>>report a feeling, that it has a feeling?
>
>Perhaps. It's not really due to the fact that it's soft, but that it is
>fundamentally different from the machine. If you build this up to the
point
>of similarity to humans in shape and function and development and
>experience, then you might come up with something we would recognize as
>having emotion because of that similarity. How much of this similarity is
>neccessary? This is an awfully large "might" and also may not result in
>anything we equate to emotion.
>

Again, just define emotion, and then apply the yardstick to whatever.

>> I think way too little is known yet to start drawing
>> >definitive conclusions one way or the other. Even if it is possible it
>>may
>> >take us as long, and as many resources, to produce as evolution has
>taken,
>> >or even longer.
>>
>> Currently, computers double their speed and processing power every 18
>>months, while most biological organisms have not. If we could say that
>>lower animals did something similar every 100,000 years, then maybe we
>will
>>progress about 50,000 times faster than that.
>
>This was discussed before. It is specious logic and the conclusions drawn
>from it are invalid. You are taking the momentary slope of a function and
a
>near-term rate increase, assuming it will continue forever and extend to
>infinity, applying this imaginary mathematical pattern to other functions
in
>the same industry (which haven't shown a similar pattern) and
extrapolating
>wild conclusions from the result. None of this is reasonable to assume.
In
>a truly infinite universe, this might be possible, but the universe, while
>physically extremely large, is not infinite in this sense.
>

All you are saying is that it will take longer, or that it will not be
technically possible. That is not the question. It is entirely possible
that we will never have a C3PO, HAL, or Blade Runner warrior. In which
case, we will never have the issue we are discussing in real time. But
should somebody build one of that complexity, then how do we say, rationally,
that it has no emotion?

>> I have an extremely hard time believing this is solely a
>> >matter of processing power. I think it will at least require, once we
>>have
>> >the theoretical processing power, a massive programming effort and a
long
>> >period of refinement if it is even possible at all.
>>
>> Actually, it's the ghost inside the machine. He even has a name:
>>Homunculus. I forgot to mention that.
>
>
>Please! You can't be serious about THIS. You might as well be talking
>about the "boogie man".

Yes. Exactly.

Patterns in the chaos are not the same as
>intelligence or emotion.

Then what is going on inside of us? Could it just be chaos resembling
randomness? And could this occur in a machine? (Actually, it does, in
little novelties made to put on a desktop.) Do we equate the apparent
randomness of humans as "free will?" (Sorry, different topic. See:
http://www.geocities.com/HotSprings/3616/will.html )

There are patterns and structure in the complexity
>of tornados and hurricanes and fire, but few would believe this indicates
>emotion or intelligence in these. There's no indication that this kind of
>function would take any different pattern in a highly complex computer
>system. There is no inherent guidance in a highly complex computer,
beyond
>our level of control, just because it is sometimes used to perform
>computation.
>
>If you're hoping that random, or chaotically complex, effects in a
computer
>will magically result in an intelligent or emotional system, I suspect
that
>this is unlikely. The odds for it are so astronomically small as to put
it
>on a par with fantasy or irrational fears. There's no indication that
>random, or uncontrollably complex functions will be convergent on
>intelligence or emotion and it is far, far, far more likely that it will
be
>divergent.
>
>Bob
>

On the other hand, similar effects are tolerated in a human system, and we
experience occasional intelligence despite them. If they become totally
random, then we experience seizures or cardiac fibrillations.

Jim Balter

unread,
Dec 27, 1999, 3:00:00 AM12/27/99
to
Gene Douglas wrote:
>
> If an electronic device can be made which will let the blind "see," or the
> deaf "hear," are they seeing or hearing the same thing as you or I, even
> though they give the same names to them?
>
> Would the blind person appreciate a blue sky, or an artwork, or the deaf
> person appreciate music in the same way as you or I? Would there be any way
> to know?

"appreciate in the same way" erroneously reifies "ways of appreciation". We can talk to people and get a sense of
whether the attributes of their senses of appreciation are the
same. We can of course miss some details, or they can mislead
us; such is the nature of *any* empirical investigation.
But, beyond the measurable attributes of appreciation, there isn't
something else.

>
> Gene Douglas
>
> Jim Balter wrote in message <3867A836...@sandpiper.net>...
> >Jim Balter wrote:
> >
> >> Apart from relationships within the conceptual space, quail have no
> >
> >Lord, now *I'm* doing it. It's my spell checker's fault, as *I*
> >typed "qualia" (the singular of which is "quale"; "quail" is a
> >funny looking bird with a topknot).
> >
> >--
> ><J Q B>


--
<J Q B>

Gary Forbis

unread,
Dec 27, 1999, 3:00:00 AM12/27/99
to
Jim Balter <j...@sandpiper.net> wrote in message
news:3867A3E9...@sandpiper.net...

> Gary Forbis wrote:
> >
> > Jim Balter <j...@sandpiper.net> wrote in message
> > news:3866B7D7...@sandpiper.net...
> > > If we take away all of these relationships
> > > among our qualia, there is nothing left, and thus nothing to be
> > > exchanged.
> >
> > I agreed with everything you wrote up to this point.
>
> Apparently not, since I said that a reification error is involved.

OK.

> > The qualia themselves remain as they have no relationship to
> > the outside world in and of themselves. It is true that one
> > cannot invert color qualia if there is no relationship between
> > the qualia and colors but just as people and places can exist without any
> > relationship connecting them, photons and qualia can exist without any
> > relationship connecting them.
>
> Silly baseless claims. It is particularly obvious that *places* do not
> exist autonomously (imagine exchanging New York and California
> without moving them, renaming them, changing their shape, and so on).

I don't have the slightest idea how you come to this conclusion. It should
be obvious places exist autonomously from their names. I'm not sure why you
put name in with shape. When a state, such as Virginia, is redefined,
places aren't moved but rather the places are associated with a different name.

> Even photons only exist as a relationship among observations; at the
> quantum mechanical level, they are only statistical; and talk about
> exchanging photons is nonsensical; all photons are identical.

I don't think this is so. I see some objects and not others because
different photons carry different information and exist in different
locations.
The world doesn't stop existing just because I close my eyes or put my
head in a paper bag.

> And if two people are A and B exchanged but A has the same
> hair color and body shape as C, is still as old as D, and so on,
> if all the *attributes*, which are determined by relationships within
> our conceptual space, are unchanged, then it is nonsense to say that
> they have been exchanged.

Certainly the naming of an attribute requires a reference within our
conceptual space. Do you believe a thing cannot exist without being
a referent within our conceptual space?

> Nothing exists separate from its attributes.

Not all attributes are relationships.

> There are various ways to exchange two people and have them retain
> their attributes, but the spectrum inversion thought experiment
> imagines exchanging qualia but leaving all the attributes the same,
> except the "qualia themselves", the "redness of red" and so on.
> But there is no "Forbisness of Forbis" separate from all of Forbis's
> attributes.

"Qaulia" is often used in relationship to "yellow" because several quale are
referenced by the same name, and while they might be distinguishable they
seldom are. (In a secondary message you made a comment as if I used
"quail" for "quale". I intented "qualia" but misspelled it in haste.)

I'm not saying qualia can be separated from their attributes. There are the
qualia we associate with "red" and they exist apart from the association.
When people talk about spectrum inversion they aren't talking about (or
I hope they aren't talking about) inverting the qualia (as if this had any
meaning), but rather reassociating the qualia with different sense data
along a particular vector.

Consider a digital camera and a digital monitor. The camera converts colors
into numbers and the monitor converts numbers into colors. There's no
privileged relationship between numbers and colors. As long as the two
devices use the same conversion scheme everything's fine; however, one would
be wrong if one asserted all cameras convert colors into the same numbers
or that numbers didn't exist apart from colors.
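
To make the point concrete, here is a toy sketch in C (entirely made
up, not any real camera's scheme): the number assigned to a color is
an arbitrary permutation, and all that matters is that the decoder
applies the inverse of whatever permutation the encoder used.

<PRE>
#include <stdio.h>
#include <string.h>

static const char *colors[] = { "red", "green", "blue" };

/* "Camera": maps color i to number (i + 2) mod 3 -- any permutation works. */
static int encode(const char *color)
{
    int i;
    for (i = 0; i < 3; i++)
        if (strcmp(colors[i], color) == 0)
            return (i + 2) % 3;
    return -1;
}

/* "Monitor": must apply the inverse permutation of the same scheme. */
static const char *decode(int code)
{
    return colors[(code + 1) % 3];
}

int main(void)
{
    const char *seen = "blue";
    printf("%s -> %d -> %s\n", seen, encode(seen), decode(encode(seen)));
    return 0;
}
</PRE>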

> Apart from relationships within the conceptual space, quail have no

> attributes. The claim that they exist is empty; if they are
> "exchanged" but all the relationships are left unchanged, then what
> is exchanged are two featureless identical null entities, which is no
> different from not exchanging them at all.

The qualia are not featureless. The features cannot be communicated
because communication is done by way of the relationships.

> People imagine exchanging
> red and blue but leaving all the relationships intact (oceans and sky
> now look "red" but still seem cool and still are called "blue"), but
> that isn't really what they are imagining, because there is no way to
> imagine "blue" having all the relational attributes of "red". If
> "blue" has all the relational attributes of "red", then it *is* red,
> because that's all there is to red. Saying "the quail themselves
> remain" is nonsense; "blue" just *isn't* itself if it is warm, applies
> to apples and receding stars, and so on.

That a color feels "warm" or "cold" is due to the relationship between
different sets of qualia. It's a bit strange that you associate "red" with
"hot" and "blue" with "cold" when the photons associate with "blue" are
more energetic than those associated with "red".

> Right now as I look at my screen, I see a page with two hyperlinks
> displayed in blue. One is on a white background, and one is on a green
> background. While I know "objectively" (i.e., via information separate
> from my direct perception) that the same wavelengths are being
> transmitted in both cases, perceptually they are quite different hues.
> The notion that there are "qualia in and of themselves"
> is a myth that cannot be maintained in the light of careful
> examination.

Your ability to know objective and subjective facts and that they are
different should indicate there are qualia and they are independent of
their relationships to the outside world.

> And understanding and then abandoning the myth allows us
> to solve conundra like spectrum inversion, and allows us to actually
> explain consciousness instead of invoking "and here there be dualism"
majik.

If you only explain the relationships you aren't explaining consciousness.

Gene Douglas

unread,
Dec 27, 1999, 3:00:00 AM12/27/99
to

Jim Balter wrote in message <3866C031...@sandpiper.net>...


>Valter Hilden wrote:
>>
>> Gene Douglas wrote:
>> > ...
>> > >The difference may be in how we define this. A simple machine can
answer
>> > >this question, even though it clearly has no emotion. A human baby
can't
>> > >answer the question, even though he/she clearly seems to have
emotion.
>> > >But read on:
>> > >

>> > Why does it clearly have no emotion? If all conditions are met, then
how
>> > can we say the one is different from the other? The human baby
exhibits
>> > certain behaviors, which adults interpret as emotion. If R2D2 should
>> > exhibit certain behaviors, humans might also interpret that as
emotion. How
>> > do we say that one perception is better than the other?

>> > ...
>>
>> At this moment, we have absolutely no means to enter the brain, or mind,
>> or soul of any other human being, baby or adult, or of any animal or
>> machine, so we are limited to philosophical conjectures on this respect.
>
>We can't enter the sun but we still know something about it.
>We "enter" other human beings by having dialogues with them.
>Anecdotes such as the one below are evidence of the content and structure
of the human mind (the evidence is not conclusive,
>but no empirical evidence truly is). (BTW, the rubric regarding
>"anecdotal evidence" is not against anecdotes per se -- all evidence
>consists of anecdotes. Rather it is against isolated reports without
>concern for statistical significance or bias.
>
>This is an example of
>Dennett's "heterophenomenology". The idea that one must be inside
>a mind to gather evidence about the mind is simply wrong, but is
>fostered by having special access to our own minds. If we were suns,
>we would probably think that you had to get inside a sun to know
>about it, and that all else was "philosophical conjecture".
>

Yet, if you smell a rose, and I smell a rose, we can't ever know if you and
I are having the same experience. We know that the stimulus is the same,
and we know that the name we give it is the same. But I can't know if you
are having the same internal experience that I am having.

To complicate it further, we can add thinking and memory to the perception,
and know that we definitely are not having the same experience. For
example, a Chinese friend once turned down a root beer, because he told me
it had a medicinal taste to him. I then took a sip of my own, and though it
tasted the same as before, it reminded me of medicine. I have since that
time disliked root beer, because it now seems medicinal to me.

Likewise, I know of somebody who wore Old Spice cologne. When his
alcoholism became very bad, he always reeked of urine, and seldom bathed or
changed his clothes. He just applied more Old Spice. I now can't stand
the scent of Old Spice, because that experience is mixed with my perception
of it.

>> However, I can enter my own younger brain,
>
>Not really. You have memories, which you are now dumping to an
>external medium.

Gene Douglas

unread,
Dec 27, 1999, 3:00:00 AM12/27/99
to

Gordon McComb wrote in message <3864F231...@gmccomb.com>...


>> Emotions are a mechanism of the human mind.
>

>I'm curious as to why you chose this particular phrase. While we can't be


>certain animals have emotions, it certainly looks like many of them do.
>And the most basic of emotions for survival (fear, aggression) are
>exhibited in similar ways as ours. (Though this is not always the case. A
>"grinning" chimpanzee is anything but happy. A dog wagging its tail is
>glad to see its master; a cat wagging its tail is about ready to claw
>you.)
>

>Let's forget for a moment a machine's ability to sense human reactions
and
>infer from them a certain emotion. That has a practical application in
>the machine-human interface. Let's just talk about the other way around,
>a machine "feeling" emotions.
>
>1. Would there be any purpose in having a machine feel an emotion if it
>didn't also exhibit that emotion? If sensors can accurately determine
>human emotions (galvanic response, voice stress, etc. are generally more
>accurate that human judgement), why give robots emotions if it doesn't
>require them to work with us?

See the Stepford Wives. They might be useful as toys.


>
>2. What would be the purpose of a machine in exhibiting emotions? Do we
>want machines to show hate? Dispair? Sadness? In what way would that
>make them better machines to suit our needs? (Or are we really only doing
>this to play God?)
>

Suppose, as in Blade Runner, we used machines as warriors. We might want
the machines to increase their motivation in certain situations, or to
protect their safety in others. They might need to survive on their own, as
in seeking food (fuel) and increasing their motivation as they near
depletion. Priorities might become reversed, such that in order to obtain
fuel they would violate certain norms, or engage in violence, whereas in less
desperate situations they would not. Or like Arnold Schwarzenegger, they
might perform surgery on themselves in order to repair damage. (Ouch!)

>3. If we want machines to have emotions at all, why should they be human
>emotions? Why not a cat's emotions? They're curious, they answer the
>hunger need when it occurs, they exhibit self-preservation. Wouldn't it
>be more logical to endow a working robot with the emotions of a beaver,
>with its strong apparent work ethic and familial social unit, than Uncle
>George who goes around all day in his underwear doing nothing?
>

Good point. Porpoises or elephants might have different emotions from
humans. It might be reasonable to program a kind of emotion that fits the
situation, or for that matter, different kinds of intelligence, as well.
How about body-kinesthetic intelligence, or "street-smart" manipulative
intelligence?

>4. Unless all emotions are ultimately for the goal of self-preservation,

>what goal does the emotion of fear have?

To increase obedience?

If there is a single goal to
>fear, then why is it manifested in so many different ways in different

>humans, and animals? If fear is an emotion that we seek to escape from,


>why do many humans like to be scared?

Usually, we only like to be scared when we know it is imaginary, and safe.
However, there are the motorcyclists and sky divers... Perhaps they are
sometimes proving something about themselves, like G. Gordon Liddy.

(Ask Stephen King how big his

>royalty checks are if you don't believe this.) Is fear a "positive" or


>"negative" emotion (your words) if people both want it and don't want it
>at the same time?
>

>-- Gordon
>
It may be like some prison inmates seeking punishment, in order to prove to
their friends that they aren't afraid to talk up to "the man," or even
proving to themselves that they aren't afraid, and "don't take no shit from
nobody." On the other hand, their fear of humiliation, or loss of autonomy
may simply be greater than their fear of more obvious punishment.

Jim Balter

unread,
Dec 28, 1999, 3:00:00 AM12/28/99
to

The name is just one attribute among many. Does a place exist
autonomously of its neighboring places? Its altitude?
Just what is a "place", such that it can have autonomy?

> I'm not sure why you
> put name in with shape. When a state, such as Virginia, is redefined,
> places
> aren't moved but rather the places are associated with a different name.

Ok, the name stays the same. So exchanging red and blue is just
a matter of calling red "blue" and blue "red". Only the guy
who had his qualia exchanged still calls red "red" and blue "blue".
So what changed?


> > Even photons only exist as a relationship among observations; at the
> > quantum mechanical level, they are only statistical; and talk about
> > exchanging photons is nonsensical; all photons are identical.
>
> I don't think this is so. I see some objects and not other because
> different photons carry different information and exist in different
> locations.

The wavelength and direction are different. But in the spectrum
inversion experiment, all such issues remain unchanged.

> The world doesn't stop existing just because I close my eyes or put my
> head in a paper bag.

You'll have to take that up with Niels Bohr and the Copenhagen
folks, but that wasn't really at issue.

> > And if two people are A and B exchanged but A has the same
> > hair color and body shape as C, is still as old as D, and so on,
> > if all the *attributes*, which are determined by relationships within
> > our conceptual space, are unchanged, then it is nonsense to say that
> > they have been exchanged.
>
> Certainly the naming of an attribute requires a reference within our
> conceptual space. Do you believe a thing cannot exist without being
> a referent within our conceptual space?

I believe, along with Bertrand Russell, that talk about
"things existing" is confused. "horses exist" is equivalent
to "the set of horses is non-empty". But the meaning
of "this horse exists" isn't clear at all, especially considering
that "this horse doesn't exist" is nonsense -- "this" already implies
a referent. So, I frankly don't know what you mean by
"a thing cannot exist". If you want to claim that "red qualia
exist independent of the attributes of redness" I can't make out
that you have actually said anything -- that this claim confirms
or is confirmed by anything. My basic point is that the "things"
exchanged in the spectrum inversion experiment are featureless,
and thus saying that they have been interchanged
says nothing. The problem is that, despite the conditions
of the experiment, people can't help but imagine it with the features
being exchanged too; they imagine my seeing red where they see
blue by imagining it *as* red -- as warm, the color of blood, and so
on, and if they didn't, they wouldn't be thinking of red at all.
But that isn't what the experiment calls for.



> > Nothing exists separate from its attributes.
>
> Not all attributes are relationships.

Really? Name one. If you say "hue", for instance, I will again
refer you to Hardin's book, where you will find that it is
entirely defined by relationships, and nothing else.

> > There are various ways to exchange two people and have them retain
> > their attributes, but the spectrum inversion thought experiment
> > imagines exchanging qualia but leaving all the attributes the same,
> > except the "qualia themselves", the "redness of red" and so on.
> > But there is no "Forbisness of Forbis" separate from all of Forbis's
> > attributes.
>
> "Qaulia" is often used in relationship to "yellow" because several quale are
> referenced by the same name, and while they might be distinguishable they
> seldom are. (In a secondary message you made a comment as if I used
> "quail" for "quale". I intented "qualia" but misspelled it in haste.)
>
> I'm not saying qualia can be separated from their attributes. There are the
> qualia we associate with "red" and they exist appart from the association.

*What* exists apart from the association? You are just naysaying
something you don't understand. "red" or "redness" is a quale,
in philospeak. Just what is this that we "associate" with it?

> When people talk about specturm intversion they aren't talking about (or
> I hope they aren't talking about) inverting the qualia (as if this had any
> meaning,)
> but rather reassociating the qualia with different sense data along a
> particular
> vector.

You just don't get it. If I look at an ocean and see red,
nice cool red like that of the sky, the red I see in a pretty girl's
eyes, why the f am I calling it "red", when I obviously mean *blue*?
My point is that the sensation, the "blueness of blue", consists
of *nothing other than its object associations, emotional associations,
relationships to other hues, and so on*. There is no "blueness"
separate from that, something autonomous, the "look" of blue.
The "look" is a *representation* within our perceptual space
of all the attributes, the associations and relationships,
connected to "blueness". You can't hold all the latter unchanged
but change the look; that's a mythic reification of "look"
separate from, autonomous from, out cognitive function.
The "look of blue" is not some secret sauce squirted into out
brains; it is a part of our perceptual space, not a substance
in our brains.



> Consider a digital camera and a digital monitor. The camera converts
> colors

I'll take you to mean wavelengths and such.

> into numbers and the monitor converts numbers into colors. There's no
> privileged relationship between numbers and colors.

But there is a privileged relationship between the *relationships*
among the numbers and the *relationships* among the colors.
Blue has a relationship to green that is different from its
relationship to red, and these relationships must be preserved.

> As long as the two
> devices use the same conversion scheme everything's fine; however, one would
> be wrong if one asserted all cameras convert colors into the same numbers
> or that numbers didn't exist apart from colors.

But which numbers you end up with is a strict consequence
of the physical construction of the camera; your example actually
undermines the point of the spectrum inversion experiment,
which imagines "experience" to be *separate* from physical
composition, so that qualia could be exchanged without any
*physical change*. The exchange posited in the spectrum inversion
experiment is not an exchange of *internal representations*;
that would play right into the functionalists' hands.
If cameras are built differently, we can observe that they are, and if
visual processing centers of brains are built differently, we can
(eventually) observe that they are; that is not the point of
the inversion experiment, which is to undermine functionalism.
If it undermines cameras too, then the dualist has missed the mark!

Another point is that the different numbers must play the same *functional* role within each camera. It's hard to imagine
"getting inside the head" of a camera, but if we could, it
wouldn't see colors as numbers, but as colors -- a bundle
of attributes. What is relevant about red is not its 23ness
or whatever number is used to represent it, but what its
energy curve and so on are, all of which are in fact privileged;
if they weren't, then it wouldn't be possible to do color
correction or any other sort of manipulation that must map
into the same outputs for all cameras.
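
To put the camera point in rough C (a sketch of my own; the code numbers
and table layout are invented for illustration, not anyone's actual
design): two cameras can assign different numbers to the same hue, yet
any processing that reads each number against its own camera's table
lands on the same hue, because what is privileged is the role the
number plays, not the number itself.

/* cameras.c -- a rough sketch, with invented code numbers, of the point
   that what is privileged is the relationship between a camera's codes
   and the hues, not the codes themselves. */
#include <stdio.h>

#define NHUES 3
static const char *hues[NHUES] = { "red", "green", "blue" };

/* Two cameras, built differently, assign different arbitrary numbers. */
static const int camera_a[NHUES] = { 23, 57, 91 };  /* codes for red, green, blue */
static const int camera_b[NHUES] = {  7, 110, 42 }; /* codes for red, green, blue */

/* Interpret a code relative to the camera's own table -- the
   "functional role" the number plays inside that camera. */
static const char *decode(const int *table, int code)
{
    for (int i = 0; i < NHUES; i++)
        if (table[i] == code)
            return hues[i];
    return "unknown";
}

int main(void)
{
    /* Both cameras point at the same blue ocean. */
    int code_a = camera_a[2];
    int code_b = camera_b[2];

    /* The raw numbers differ because the hardware differs... */
    printf("camera A outputs %d, camera B outputs %d\n", code_a, code_b);

    /* ...but read against each camera's own encoding, the numbers play
       the same role, so any processing that respects the tables ends up
       at the same hue. */
    printf("camera A: %s, camera B: %s\n",
           decode(camera_a, code_a), decode(camera_b, code_b));
    return 0;
}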



> > Apart from relationships within the conceptual space, quail have no
> > attributes. The claim that they exist is empty; if they are
> > "exchanged" but all the relationships are left unchanged, then what
> > is exchanged are two featureless identical null entities, which is no
> > different from not exchanging them at all.
>
> The qualia are not featureless. The features cannot be communicated
> because communication is done by way of the relationships.

A purely religious statement. The exchange assumes that all features
other than "the blueness of blue" remain unchanged. For the purposes
of people doing real thinking rather than your sort of fantasy
religious thinking, what is exchanged is featureless, and so the
exchange is null.

Again you should read Hardin; you would be amazed at just how many
features of color qualia *can* be communicated, once you start
doing comparisons. The set of things that you see as blue and
that I see as blue are not coextensive, and the differences reveal
subtle differences in our physiologies.



> > People imagine exchanging
> > red and blue but leaving all the relationships intact (oceans and sky
> > now look "red" but still seem cool and still are called "blue"), but
> > that isn't really what they are imagining, because there is no way to
> > imagine "blue" having all the relational attributes of "red". If
> > "blue" has all the relational attributes of "red", then it *is* red,
> > because that's all there is to red. Saying "the quail themselves
> > remain" is nonsense; "blue" just *isn't* itself if it is warm, applies
> > to apples and receding stars, and so on.
>
> That a color feels "warm" or "cold" is due to the relationship between
> different sets of qualia. It's a bit strange that you associate "red" with
> "hot" and "blue" with "cold" when the photons associate with "blue" are
> more energetic than those associated with "red".

Not just I, but most people, associate red with hot and blue with
cool. It has more to do with visual physiology than with photon
energy. Your brain doesn't feel the energy of photons, it gets
rather different sorts of signals. Again you should read Hardin.


> > Right now as I look at my screen, I see a page with two hyperlinks
> > displayed in blue. One is on a white background, and one is on a green
> > background. While I know "objectively" (i.e., via information separate
> > from my direct perception) that the same wavelengths are being
> > transmitted in both cases, perceptually they are quite different hues.
> > The notion that there are "qualia in and of themselves"
> > is a myth that cannot be maintained in the light of careful
> > examination.
>
> Your ability to know objective and subjective facts and that they are
> different should indicate there are qualia and they are independent of
> their relationships to the outside world.

Quite a misunderstanding. The perception of blue is not a
pure "quale"; it is a multidimensional complex within a perceptual
space. That blue seems greenish on a green background indicates
that contrast and context, not just hue, are elements of the
"experience", the "feel of the quale" -- the quale as a thing
in itself is a myth.



> > And understanding and then abandoning the myth allows us
> > to solve conundra like spectrum inversion, and allows us to actually
> > explain consciousness instead of invoking "and here there be dualism"
> majik.
>
> If you only explain the relationships you aren't explaining consciousness.

What would an explanation consist of, other than God stepping
into your brain and scribbling "this is how it is" inside?

Or perhaps he's already been there, giving you inerrant
knowledge that consciousness is not a matter of functional
relationships. But suppose, just suppose, that it were -- then
things would seem to you exactly as they do. If you think that
is impossible, why is it impossible?

--
<J Q B>

kenneth Collins

unread,
Dec 28, 1999, 3:00:00 AM12/28/99
to
i worked all of this stuff out decades ago. i'll be happy to present
in-person.

K. P. Collins

rick++ wrote:
>
> The recognition, expression, and understanding of emotion has been
> a major thrust of the MIT Robotic Lab and MIT AI Lab.
> Some think this is important for human-computer interfaces of all
> kinds.
>
> Sent via Deja.com http://www.deja.com/
> Before you buy.

Seth Russell

unread,
Dec 28, 1999, 3:00:00 AM12/28/99
to
Jim Balter wrote:

> "appreciate in the same way" erroneously reifies "ways of appreciation". We can talk to people and get a sense of
> whether the attributes of their senses of appreciation are the
> same. We can of course miss some details, or they can mislead
> us; such is the nature of *any* empirical investigation.
> But, beyond the measurable attributes of appreciation, there isn't
> something else.

Have you forgotten the appreciation experience itself?

--
Seth Russell
Http://RobustAi.net/Ai/SymKnow.htm
Http://RobustAi.net/Ai/Conjecture.htm

Evan Langlinais

unread,
Dec 28, 1999, 3:00:00 AM12/28/99
to
Jim Balter wrote:
> Gary Forbis wrote:
<snippage>
> > ...photons and quail can exist without any relationship connecting them.
<and more snippage>

> Apart from relationships within the conceptual space, quail have no
> attributes.

Is anybody else having visions of extremely frustrated quail hunters?

--
Evan "Skwid" Langlinais
http://skwid.home.texas.net/
I am not Sears, and do not speak for them. Ignore the big button.

Gene Douglas

unread,
Dec 28, 1999, 3:00:00 AM12/28/99
to

There is a program, decades old now, which purports to psychoanalyze the
operator. It asks a question, you answer, and it responds to your answer.
However, as you experiment with the program, you realize that it
stereotypically responds to your replies in a rote and mechanical way. It
can be quite impressive at first, as if the machine is "understanding" you,
unless you experiment with some variations to see what will happen.
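
A toy sketch in rough C of that rote, keyword-driven style of response
(the keywords and canned replies are invented, and this is not the
actual decades-old program); experimenting with a few variations
quickly exposes the mechanical pattern Gene describes:

/* rote.c -- invented keyword/reply pairs, purely for illustration. */
#include <stdio.h>
#include <string.h>

struct rule { const char *keyword; const char *reply; };

static const struct rule rules[] = {
    { "mother", "Tell me more about your family." },
    { "always", "Can you think of a specific example?" },
    { "sad",    "Why do you think you feel that way?" },
};

static const char *fallback = "Please go on.";

/* Pick the reply whose keyword appears in the input; same trigger,
   same reply, every time -- which is the rote part. */
static const char *respond(const char *input)
{
    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
        if (strstr(input, rules[i].keyword) != NULL)
            return rules[i].reply;
    return fallback;
}

int main(void)
{
    char line[256];
    printf("How do you feel today?\n");
    while (fgets(line, sizeof line, stdin) != NULL)
        printf("%s\n", respond(line));
    return 0;
}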

None of that is relevant to what we are discussing, however, just a curious
aside.

Gene

kenneth Collins wrote in message <38684221...@earthlink.net>...


>i worked all of this stuff out decades ago. i'll be happy to present
>in-person.
>
>K. P. Collins
>
>rick++ wrote:
>>
>> The recognition, expression, and understanding of emotion has been
>> a major thrust of the MIT Robotic Lab and MIT AI Lab.
>> Some think this is important for human-computer interfaces of all
>> kinds.

kenneth Collins

unread,
Dec 29, 1999, 3:00:00 AM12/29/99
to
i really did work it all out decades ago.

part of what i'm doing is a study of how many times i can reiterate such
while no one says 'show me', which i'll gladly do, in-person, before
folks who work in Science.

K. P. Collins

Erik Max Francis

unread,
Dec 29, 1999, 3:00:00 AM12/29/99
to
kenneth Collins wrote:

> i really did work it all out decades ago.
>
> part of what i'm doing is a study of how many times i can reiterate
> such
> while no one says 'show me', which i'll gladly do, in-person, before
> folks who work in Science.

Go away, crank.

--
Erik Max Francis | email m...@alcyone.com | icq 16063900
Alcyone Systems | web http://www.alcyone.com/max/
San Jose, CA | languages en, eo | icbm 37 20 07 N 121 53 38 W
USA | 960.613 Ms pL | 369 days left | &tSftDotIotE
__
/ \ Success and failure are equally disastrous.
\__/ Tennessee Williams

kenneth Collins

unread,
Dec 29, 1999, 3:00:00 AM12/29/99
to
to ALL:

it appears to me that this fellow has some 'curious purpose' in mind,
because all he's ever done is what he's done in response to my prior
post.

he's never even tried to do anything else.

K. P. Collins

tel...@xenon.triode.net.au

unread,
Jan 1, 2000, 3:00:00 AM1/1/00
to
In comp.ai.alife kenneth Collins <kpa...@earthlink.net> wrote:
> i really did work it all out decades ago.

> part of what i'm doing is a study of how many times i can reiterate such
> while no one says 'show me', which i'll gladly do, in-person, before
> folks who work in Science.

> K. P. Collins

If it can only be shown ``in-person'' then it is stage magic or a charismatic
persona or a pretty face, it is not science. If you really know the answer
then post some source code or a good enough technical breakdown that I can
write source code for myself. Even post some good ideas that I can use to
further my own understanding. If you cannot express your discovery in a
way that may be transferred from person to person, regardless of the medium,
then your ideas need refinement.

So by all means show me, but since doing it in-person is impossible and not
very useful anyhow, use some creativity and convey your ideas in a portable
format. May I suggest ASCII text, C or latex?

- Tel

Erik Max Francis

unread,
Jan 1, 2000, 3:00:00 AM1/1/00
to
tel...@xenon.triode.net.au wrote:

> So by all means show me, but since doing it in-person is impossible
> and not
> very useful anyhow, use some creativity and convey your ideas in a
> portable
> format. May I suggest ASCII text, C or latex?

A sign nearby reads:

Please Do Not Feed the Cranks.

The crank looks at you, longing for attention.

--
Erik Max Francis | email m...@alcyone.com | icq 16063900
Alcyone Systems | web http://www.alcyone.com/max/
San Jose, CA | languages en, eo | icbm 37 20 07 N 121 53 38 W

USA | 960.916 Ms pL | 365 days left | &tSftDotIotE
__
/ \ If love is the answer, could you rephrase the question?
\__/ Lily Tomlin

Gene Douglas

unread,
Jan 1, 2000, 3:00:00 AM1/1/00
to

I think it's better that we take him at his word, and not worry about what
that word is.

Gene
tel...@xenon.triode.net.au wrote in message
<84kk9u$j3b$1...@hyperion.triode.net.au>...


>In comp.ai.alife kenneth Collins <kpa...@earthlink.net> wrote:
>> i really did work it all out decades ago.
>
>> part of what i'm doing is a study of how many times i can reiterate such
>> while no one says 'show me', which i'll gladly do, in-person, before
>> folks who work in Science.
>
>> K. P. Collins
>
>If it can only be shown ``in-person'' then it is stage magic or a charismatic
>persona or a pretty face, it is not science. If you really know the answer
>then post some source code or a good enough technical breakdown that I can
>write source code for myself. Even post some good ideas that I can use to
>further my own understanding. If you cannot express your discovery in a
>way that may be transferred from person to person, regardless of the medium,
>then your ideas need refinement.
>
>So by all means show me, but since doing it in-person is impossible and not
>very useful anyhow, use some creativity and convey your ideas in a portable
>format. May I suggest ASCII text, C or latex?
>
> - Tel
-
GeneDou...@prodigy.net


--
One out of every four Americans is suffering from some form of mental
illness.
Think of your three best friends. If they're OK, then it's you.

kenneth Collins

unread,
Jan 2, 2000, 3:00:00 AM1/2/00
to
tel...@xenon.triode.net.au wrote:
>
> In comp.ai.alife kenneth Collins <kpa...@earthlink.net> wrote:
> > i really did work it all out decades ago.
>
> > part of what i'm doing is a study of how many times i can reiterate such
> > while no one says 'show me', which i'll gladly do, in-person, before
> > folks who work in Science.
>
> > K. P. Collins
>
> If it can only be shown ``in-person'' then it is stage magic or a charismatic
> persona or a pretty face, it is not science.

or it's just me, tired of Jackasses 'borrowing' my work without giving
its stuff to the folks on whose behalf the work was done.

K. P. Collins

kenneth Collins

unread,
Jan 2, 2000, 3:00:00 AM1/2/00
to
Erik Max Francis wrote:

>
> tel...@xenon.triode.net.au wrote:
>
> > So by all means show me, but since doing it in-person is impossible
> > and not
> > very useful anyhow, use some creativity and convey your ideas in a
> > portable
> > format. May I suggest ASCII text, C or latex?
>
> A sign nearby reads:
>
> Please Do Not Feed the Cranks.
>
> The crank looks at you, longing for attention.

"Hawking Radiation" and 'gravity"...

...ho, ho, ho ^ (ho, ho, ho)

K. P. Collins

Phil Roberts, Jr.

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to

Maniaq wrote:
>
> Whatever happened to the Four F's ??
> Is this now a concept which is out of fashion, maybe?
>
> Personally, I can see the validity in asserting that
> instincts are quite simple trigger/response mechanisms that
> can be built into even the simplest organisms - the "Four
> F's" being your four basic instincts - Fighting, Fleeing,
> Feeding, and F@#king (sorry, make that Fornicating).
>
> Sure, emotions are usually more complex than basic
> instincts, but are they not made up of these basic
> instincts? Can you not define any given emotion as a
> combination of these basic building blocks?
>
> I can see it working recursively - so that a given emotion
> may be a combination of other emotions, which may in turn be
> a combination of other emotions, and so on - but at the
> bottom (top?) of every tree, you will find only F's...
>
> Whaddayathink?
>

You had to ask.

A Sketch of a Divergent Theory of Emotional Instability


Objective: To account for self-worth related emotion (i.e., needs for
love, acceptance, moral integrity, recognition, achievement,
purpose, meaning, etc.) and emotional disorder (e.g., depression,
suicide, etc.) within the context of an evolutionary scenario; i.e., to
synthesize natural science and the humanities; i.e., to answer the
question: 'Why is there a species of naturally selected organism
expending huge quantities of effort and energy on the survivalistically
bizarre non-physical objective of maximizing self-worth?'

Observation: The species in which rationality is most developed is
also the one in which individuals have the greatest difficulty in
maintaining an adequate sense of self-worth, often going to
extraordinary lengths in doing so (e.g., Evel Knievel, celibate monks,
self-endangering Greenpeacers, etc.).

Hypothesis: Rationality is antagonistic to psychocentric stability (i.e.,
maintaining an adequate sense of self-worth).

Synopsis: In much the manner reasoning allows for the subordination
of lower emotional concerns and values (pain, fear, anger, sex, etc.)
to more global concerns (concern for the self as a whole), so too,
these more global concerns and values can themselves become
reevaluated and subordinated to other more global, more objective
considerations. And if this is so, and assuming that emotional
disorder emanates from a deficiency in self-worth resulting from
precisely this sort of experientially based reevaluation, then it can
reasonably be construed as a natural malfunction resulting from
one's rational faculties functioning a tad too well.

Normalcy and Disorder: Assuming this is correct, then some
explanation for the relative "normalcy" of most individuals would
seem necessary. This is accomplished simply by postulating
different levels or degrees of consciousness. From this perspective,
emotional disorder would then be construed as a valuative affliction
resulting from an increase in semantic content in the engram indexed
by the linguistic expression, "I am insignificant", which all persons of
common sense "know" to be true, but which the "emotionally
disturbed" have come to "realize", through abstract thought,
devaluing experience, etc.

Implications: So-called "free will" and the incessant activity presumed
to emanate from it is simply the insatiable appetite we all have for
self-significating experience which, in turn, is simply nature's way of
attempting to counter the objectifying influences of our rational
faculties. This also implies that the engine in the first "free-thinking"
artifact is probably going to be a diesel.


"Another simile would be an atomic pile of less than critical size: an
injected idea is to correspond to a neutron entering the pile from
without. Each such neutron will cause a certain disturbance which
eventually dies away. If, however, the size of the pile is sufficiently
increased, the disturbance caused by such an incoming neutron will
very likely go on and on increasing until the whole pile is destroyed.
Is there a corresponding phenomenon for minds?" (A. M. Turing).


Additional Implications: Since the explanation I have proposed
amounts to the contention that the most rational species
(presumably) is beginning to exhibit signs of transcending the
formalism of nature's fixed objective (accomplished in man via
intentional self-concern, i.e., the prudence program) it can reasonably
be construed as providing evidence and argumentation in support of
Lucas (1961) and Penrose (1989, 1994). Not only does this imply
that the aforementioned artifact probably won't be a computer,
but it would also explain why a question such as "Can Human
Irrationality Be Experimentally Demonstrated?" (Cohen, 1981)
has led to controversy, in that it presupposes the possibility
of a discrete (formalizable) answer to a question which can only
be addressed in comparative (non-formalizable) terms (e.g. X is
more rational than Y, the norm, etc.). Along these same lines,
the theory can also be construed as an endorsement or
metajustification for comparative approaches in epistemology
(explanationism, plausiblism, etc.)


"The short answer [to Lucas/Godel and more recently, Penrose]
is that, although it is established that there are limitations to the
powers of any particular machine, it has only been stated, without
any sort of proof, that no such limitations apply to human intellect "
(A. M. Turing).


"So even if mathematicians are superb cognizers of mathematical
truth, and even if there is no algorithm, practical or otherwise,
for cognizing mathematical truth, it does not follow that the power
of mathematicians to cognize mathematical truth is not entirely
explicable in terms of their brain's executing an algorithm. Not
an algorithm for intuiting mathematical truth -- we can suppose that
Penrose [via Godel] has proved that there could be no such thing.
What would the algorithm be for, then? Most plausibly it would be an
algorithm -- one of very many -- for trying to stay alive ... " (D. C.
Dennett).


Oops! Sorry! Wrong again, old bean.


"My ruling passion is the love of literary fame" (David Hume).


"I have often felt as though I had inherited all the defiance and all the
passions with which our ancestors defended their Temple and could
gladly sacrifice my life for one great moment in history" (Sigmund
Freud).


"He, too [Ludwig Wittgenstein], suffered from depressions and for long
periods considered killing himself because he considered his life
worthless, but the stubbornness inherited from his father may have
helped him to survive" (Hans Sluga).


"The inquest [Alan Turing's] established that it was suicide. The
evidence was perfunctory, not for any irregular reason, but because
it was so transparently clear a case" (Andrew Hodges)

REFERENCES

1. Cohen, L. Jonathan, Can Human Irrationality be Experimentally
Demonstrated?, The Behavioral and Brain Sciences, 1981, 4, 317-370.

2. Lucas, J. R., Minds, Machines and Godel, Philosophy, Vol XXXVI (1961).
Reprinted in Anderson's Minds and Machines, and engagingly explored
in Hofstadter's Pulitzer prize winner, Godel, Escher, Bach: An
Eternal Golden Braid.

3. Penrose, Roger, The Emperor's New Mind, 1989; Shadows of the Mind,
1994.

--

Phil Roberts, Jr.

The Psychodynamics of Genetic Indeterminism:
Why We Turned Out Like Captain Kirk Instead of Mr. Spock
http://www.fortunecity.com/victorian/dada/90/

Phil Roberts, Jr.

unread,
Jan 5, 2000, 3:00:00 AM1/5/00
to

Gene Douglas wrote:
>
> Phil Roberts, Jr. wrote in message <38718A1D...@ix.netcom.com>...


>
> >
> > A Sketch of a Divergent Theory of Emotional Instability
> >
> >
> >Objective: To account for self-worth related emotion (i.e., needs for
> > love, acceptance, moral integrity, recognition, achievement,
> > purpose, meaning, etc.) and emotional disorder (e.g., depression,
> > suicide, etc.) within the context of an evolutionary scenario; i.e., to
> > synthesize natural science and the humanities; i.e., to answer the
> > question: 'Why is there a species of naturally selected organism
> > expending huge quantities of effort and energy on the survivalistically
> > bizarre non-physical objective of maximizing self-worth?'
> >
> >Observation: The species in which rationality is most developed is
> > also the one in which individuals have the greatest difficulty in
> > maintaining an adequate sense of self-worth, often going to
> > extraordinary lengths in doing so (e.g., Evel Knievel, celibate monks,
> > self-endangering Greenpeacers, etc.).
> >
> >Hypothesis: Rationality is antagonistic to psychocentric stability (i.e.,
> > maintaining an adequate sense of self-worth).
> >

> How about: We are not entirely rational. In fact, the rational aspect of
> ourselves is the least aspect, and must be cultivated in order to reach its
> full potential. Many areas of brain tissue are devoted to such things as
> breathing, heartbeat, nausea, yawning, pain, anxiety, fear, anger, and
> probably aspects which relate to dominance, affection, nurturing, empathy,
> etc.
>

Excellent, grasshopper. Indeed, the commonplace practice of referring to
X as rational or irrational is incompatible with Godel's theorem, if my
own understanding of the matter is basically correct. If rationality
cannot be reduced to logic, as I believe Godel suggests, then the only
intelligible rationality ascriptions would have to be comparative, for
example, X is more rational than Y, the norm, etc. If so, then rather
than the rational animal, man should be construed as the SOMEWHAT MORE
rational animal. And given that this rationality is sitting on top of
a half a billion years of arational motivational foundation, combined
with our egotistical need to think highly of ourselves, it might lead
one to suspect that reports of a rational species may have been highly
exaggerated. :)

Special concern for one's own future would be selected by
evolution: Animals without such concern would be more likely
to die before passing on their genes. Such concern would
remain, as a natural fact, even if we decided that it was not
justified. By thinking hard about the arguments, we might
be able briefly to stun this natural concern. But it would
soon revive... The fact that we have this attitude cannot
therefore be a reason for thinking it justified. Whether
it is justified [i.e. rational] is an open question, waiting
to be answered (Derek Parfit, 'Reasons and Persons').

> Further, we are a very plastic creature during our formative years. We can
> become American, African, Eskimo, Arab, Fijian, aggressive, passive,
> individualist, conformist, and on and on, just depending on experiences
> deriving from our environment.
>

Yes. I think the term you may be looking for is individualization. There
seems to be a correlation between the rationality of a species and the
individuality found in its members.


> >Synopsis: In much the manner reasoning allows for the subordination
> > of lower emotional concerns and values (pain, fear, anger, sex, etc.)
> > to more global concerns (concern for the self as a whole), so too,
> > these more global concerns and values can themselves become
> > reevaluated and subordinated to other more global, more objective
> > considerations. And if this is so, and assuming that emotional
> > disorder emanates from a deficiency in self-worth resulting from
> > precisely this sort of experientially based reevaluation, then it can
> > reasonably be construed as a natural malfunction resulting from
> > one's rational faculties functioning a tad too well.
> >

> And where does self-worth come from? Probably early childhood experiences,
> in which we are treated as having much or little of the same, attempts at
> autonomy, praise, affection, things which connect with a potential for
> self-estimation which is already there. If you should do the same to a
> turtle or a stone, you would get no effect, because they do not have the
> potential on which to overlay this experience.

Yes. In man, I suspect that self-worth IS the will to survive, and that
indeed, man may be the only species in which such a will is present. And
the fact that so many of us are having self-worth problems suggests, to
me at least, that this most rational species is also beginning to show
signs of being LESS DETERMINED by natural selection. IOW, we are a
species which appears to require REASONS for surviving (justification
for the high opinion nature would like us to have of ourselves) rather
than just blindly responding to stimuli.


> >Normalcy and Disorder: Assuming this is correct, then some
> > explanation for the relative "normalcy" of most individuals would
> > seem necessary. This is accomplished simply by postulating
> > different levels or degrees of consciousness. From this perspective,
> > emotional disorder would then be construed as a valuative affliction
> > resulting from an increase in semantic content in the engram indexed
> > by the linguistic expression, "I am insignificant", which all persons of
> > common sense "know" to be true, but which the "emotionally
> > disturbed" have come to "realize", through abstract thought,
> > devaluing experience, etc.
> >

> These things can be done consciously or unconsciously. In earliest
> childhood, they are probably just assumed, without awareness of them. When
> children get old enough to begin name-calling, they are probably defining
> self worth in logical, or at least verbal terms. They can be heard to say
> things like, "you can't..." or "you would probably do this..." etc.,
> projecting their own negative evaluations on to others. At older ages,
> children may evaluate their self worth, because they see themselves being
> evaluated on externally-imposed scales. Their parents and church people
> may estimate their likelihood of going to hell or being loved by Jesus, their
> schools may measure their learning performance and their athletic skills.
> Teachers may give lessons on the topic of self-worth, and as adults, they
> may study it in the classroom or in therapy.
>

Yes. My theory is about the evolution of rationality, most of which I
presume to be cultural or memetic in nature. Anywhere life evolves
via natural selection, we should expect to find the same valuative
anomalies of other-interestness (morality) and self-disinterestedness
(emotional instability) since these are simply features of rationality
itself, i.e., an increase in valuative objectivity/impartiality.

> >Implications: So-called "free will" and the incessant activity presumed
> > to emanate from it is simply the insatiable appetite we all have for
> > self-significating experience which, in turn, is simply nature's way of
> > attempting to counter the objectifying influences of our rational
> > faculties. This also implies that the engine in the first
> > "free-thinking"
> > artifact is probably going to be a diesel.
> >

> Ummm... uhh... O.K.
> >

The reason it implies that the engine in the first "free-thinking" AI is
going to be a diesel is because that is what we are, if my explanation
of feelings of worthlessness is correct, i.e., that it is the maladaptive
by-product of the evolution of rationality. IOW, we expend significant
quantities of effort and energy trying to maximize our self-worth, which
in turn leaves a residue of an increase in rationality which further
destabilizes the system requiring still more self-significating
experience, and on and on. Get it?

> > "Another simile would be an atomic pile of less than critical size: an
> > injected idea is to correspond to a neutron entering the pile from
> > without. Each such neutron will cause a certain disturbance which
> > eventually dies away. If, however, the size of the pile is sufficiently
> > increased, the disturbance caused by such an incoming neutron will
> > very likely go on and on increasing until the whole pile is destroyed.
> > Is there a corresponding phenomenon for minds?" (A. M. Turing).
> >

--

Phil Roberts, Jr.

unread,
Jan 5, 2000, 3:00:00 AM1/5/00
to

Gene Douglas wrote:
>
> Phil Roberts, Jr. wrote in message <38718A1D...@ix.netcom.com>...

> >


> > A Sketch of a Divergent Theory of Emotional Instability
> >
> >
> >

> > "The short answer [to Lucas/Godel and more recently, Penrose]
> > is that, although it is established that there are limitations to the
> > powers of any particular machine, it has only been stated, without
> > any sort of proof, that no such limitations apply to human intellect "
> > (A. M. Turing).
> >
> >
> > "So even if mathematicians are superb cognizers of mathematical
> > truth, and even if there is no algorithm, practical or otherwise,
> > for cognizing mathematical truth, it does not follow that the power
> > of mathematicians to cognize mathematical truth is not entirely
> > explicable in terms of their brain's executing an algorithm. Not
> > an algorithm for intuiting mathematical truth -- we can suppose that
> > Penrose [via Godel] has proved that there could be no such thing.
> > What would the algorithm be for, then? Most plausibly it would be an
> > algorithm -- one of very many -- for trying to stay alive ... " (D. C.
> > Dennett).
> >
> >
> >Oops! Sorry! Wrong again, old bean.
> >

> Any time a feature exists in an organism, we can assume that it somehow
> contributes to its staying alive, or at least to the preservation of its
> bloodline. Your quotes below suggest that the pleasure we derive from an
> experience can be exaggerated, such that it no longer serves the function
> that was developed through a thousand ancestors before one.
>

You are both right and wrong, IMHO. The notion that everything is adaptive
is referred to as panadaptationism, and generally construed as the hallmark
of the novice in natural science. However, that does not mean that the
existence of a feature does not have an evolutionary explanation. My own
explanation, if you will notice, is neither panadaptationist nor does it
ignore evolution, but rather amounts to the contention that morality and
emotional instability (both of which are the two sides of a valuative
objectivity coin) represent the MALADAPTIVE BY-PRODUCT of the evolution
of rationality. The presence of the features is explained circuitously,
by supposing that the benefits of our rationality outweigh the detriments
in terms of the cost-benefit equations.

Perhaps another way of saying this is that our rationality makes it easier
and easier for us to survive (the upside) but at the expense of an
ongoing decrease in the will to do so (i.e., an increase in the value
attached to others (morality) and a decrease in the value attached to
one's self (emotional instability)).


> Sometimes a characteristic promotes the survival, not of the individual,
> nor even his own progeny, but of his social group. So if the impulse to
> commit a sacrificial act exists with sufficient frequency in a group, the
> group may survive, though many of its individual members may not.

Group selection, another anathema to those who have spent a little time
studying natural selection:

Quotes:

The identification of individuals as the unit of
selection is a central theme in Darwin's thought.
This idea underlies his most radical claim: that
evolution is purposeless and without inherent
direction. ... Evolution does not recognize the 'good'
of the ecosystem' or even the 'good of the species.'
Any harmony or stability is only an indirect result of
individuals relentlessly pursuing their own self-interest
-- in modern parlance, getting more of their genes into
future generations by greater reproductive success.
Individuals are the unit of selection; the "struggle
for existence" is a matter among individuals (Stephen
Gould).


_With very few exceptions_, the only parts of the theory
of natural selection which have been supported by
mathematical models admit no possiblity of the
evolution of any characters which are on average to
the disadvantage of the individuals possessing them.
If natural selection followed the classical models
exclusively, species would not show any behavior more
positively social than the coming together of the
sexes and parental care....

Clearly from a gene's point of view it is worthwhile
to deprive a large number of distant relatives in order
to extract a small reproductive advantage. (W. D. Hamilton)

Like Chicago gangsters, our genes have survived, in some cases for
millions of years, in a highly competitive world. This entitles us to expect
certain qualities in our genes. I shall argue that a predominant quality
to be expected in a successful gene is ruthless selfishness. This gene
selfishness will usually give rise to selfishness in individual behavior.
However, as we shall see, there are special circumstances in which a
gene can achieve its own selfish goals best by fostering a limited form
of altruism. 'Special' and 'limited' are important words in the last
sentence. Much as we might wish to believe otherwise, universal love
and the welfare of the species as a whole are concepts which simply
do not make evolutionary sense (Dawkins).


Even with qualifications regarding the possibility
of group selection, the portrait of the biologically
based social personality that emerges is one of
predominantly self-serving opportunism _even_for_
_the_most_social_species_, for all species in which
there is genetic competition among the social co-
operators, that is, where all members have the chance
of parenthood (Donald Campbell).


It is ironic that Ashley Montagu should criticize Lorenz as
'a direct descendant of the "nature red in tooth and claw" thinkers
of the nineteenth century....' As I understand Lorenz' view of
evolution, he would be very much at one with Montagu in rejecting
the implications of Tennyson's phrase. Unlike both of them, I
think 'nature red in tooth and claw' sums up our modern understanding
of natural selection admirably. (Dawkins).

(END QUOTES)

The notable exceptions are the social insects, but
that's because the workers are dependent on the queen
for their DNA perpetuation. Yep.


> >
> > "My ruling passion is the love of literary fame" (David Hume).
> >
> >
> > "I have often felt as though I had inherited all the defiance and all
> > the
> > passions with which our ancestors defended their Temple and could
> > gladly sacrifice my life for one great moment in history" (Sigmund
> > Freud).
> >
> >
> > "He, too [Ludwig Wittgenstein], suffered from depressions and for long
> > periods considered killing himself because he considered his life
> > worthless, but the stubbornness inherited from his father may have
> > helped him to survive" (Hans Sluga).
> >

> Which may refer to a defect. However, depression exists with such high
> frequency among humans, one might wonder if it has a survival value in
> itself.

The PAIN of feelings of worthlessness has a survival value, in that it
motivates the organism to take remedial action. However, IMHO, the
worthlessness part is almost certainly maladaptive, in that even in
a group selection scenario a worthless organism is going to be, well....
worthless and ineffectual.

> If one refers to paranoia, one can see the survival value in that,
> though when it becomes exaggerated in an individual, it becomes a defect,
> degrading one's survival ability. Yet, it occurs with such frequency, one
> must assume that the potential exists in all of us.
>

I believe the paranoia is nature's way of alerting us to a life-threatening
condition, it's just not a physical threat. It's the fear of being overwhelmed
by one's insignificance. It's fine to KNOW this little unpleasantry,
but when one starts to REALIZE it, i.e., when it begins to enter the oval office
of consciousness, then it begins to become a bit of a sticky wicket, if you
know what I mean:

[Branden quotes]

Virtually all psychologists recognize that man experiences a need
of self-esteem. But what they have not identified is the nature
of self-esteem, the reasons why man needs it, and the conditions he
must satisfy if he is to achieve it. Virtually all psychologists
recognize, if only vaguely, that there is some relationship
between the degree of a man's self-esteem and the degree of his
mental health. But they have not identified the nature of that
relationship, nor the causes of it. Virtually all psychologists
recognize, if only dimly, that there is some relationship between
the nature and degree of a man's self-esteem and his motivation,
i.e., his behavior in the spheres of work, love and human
relationships. But they have not explained why, nor identified
the principle involved.


There is no value-judgment more important to man -- no factor
more decisive in his psychological development and motivation --
than the estimate he passes on himself. This estimate is
ordinarily experienced by him, not in the form of a conscious,
verbalized judgment, but in the form of a feeling, a feeling
that can be hard to isolate and identify because he experiences
it constantly: it is part of every other feeling, it is involved
in his every emotional response. ... it is the single most
significant key to his behavior.

[endquotes]

Phil Roberts, Jr.

unread,
Jan 5, 2000, 3:00:00 AM1/5/00
to

"Phil Roberts, Jr." wrote:
>
> Gene Douglas wrote:
> >

>
> > >Implications: So-called "free will" and the incessant activity presumed
> > > to emanate from it is simply the insatiable appetite we all have for
> > > self-significating experience which, in turn, is simply nature's way of
> > > attempting to counter the objectifying influences of our rational
> > > faculties. This also implies that the engine in the first
> > > "free-thinking"
> > > artifact is probably going to be a diesel.
> > >
> > Ummm... uhh... O.K.
> > >
>
> The reason it implies that the engine in the first "free-thinking" AI is
> going to be a diesel is because that is what we are, if my explanation
> of feelings of worthlessness is correct, i.e., that it is the maladaptive
> by-product of the evolution of rationality. IOW, we expend significant
> quantities of effort and energy trying to maximize our self-worth, which
> in turn leaves a residue of an increase in rationality which further
> destabilizes the system requiring still more self-significating
> experience, and on and on. Get it?
>

Maybe a better metaphor would be the hamster wheel. If my theory is
correct, the faster we run the faster we will have to run, which leads
to a pretty gloomy forecast for the species' emotional stability. However,
the metaphor also suggests a psychotherapy, in that rather than running
as fiercely, and taking everything so seriously, perhaps we could develop
a part of ourselves which stands outside of the wheel and watches the
running, but with a bit of a sense of humour about the whole thing.

Not running is not an option, BTW, if I am correct. Nature is simply
not going to allow us to become too rational (too valuatively objective)
without inflicting us with extreme pain, IMHO. But that doesn't mean
we have to take it all so seriously. So you continue to play the
self-significance game, only you do it a little more tongue in cheek,
and with a better developed sense of humour about the absurdity of
what it is you are trying to prove (that you are the most significant
entity in the universe, which is, of course, absurd, but then nature
is not a very rational lady now, is she?).

Maniaq

unread,
Jan 6, 2000, 3:00:00 AM1/6/00
to
Whatever happened to the Four F's ??
Is this now a concept which is out of fashion, maybe?

Personally, I can see the validity in asserting that
instincts are quite simple trigger/response mechanisms that
can be built into even the simplest organisms - the "Four
F's" being your four basic instincts - Fighting, Fleeing,
Feeding, and F@#king (sorry, make that Fornicating).

Sure, emotions are usually more complex than basic
instincts, but are they not made up of these basic
instincts? Can you not define any given emotion as a
combination of these basic building blocks?

I can see it working recursively - so that a given emotion
may be a combination of other emotions, which may in turn be
a combination of other emotions, and so on - but at the
bottom (top?) of every tree, you will find only F's...

Whaddayathink?
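
In rough C, that recursive picture might look like the sketch below
(the compound emotions and their ingredients are invented for
illustration, not a claim about real psychology): an emotion is either
one of the four F's or a combination of other emotions, so walking any
emotion tree to the bottom yields only F's.

/* four_fs.c -- invented compounds, purely for illustration. */
#include <stdio.h>

typedef struct Emotion {
    const char *name;
    int n_parts;                         /* 0 => a basic instinct (a leaf) */
    const struct Emotion *const *parts;
} Emotion;

/* The four F's as leaves. */
static const Emotion fighting    = { "fighting",    0, NULL };
static const Emotion fleeing     = { "fleeing",     0, NULL };
static const Emotion feeding     = { "feeding",     0, NULL };
static const Emotion fornicating = { "fornicating", 0, NULL };

/* Invented compound emotions. */
static const Emotion *const jealousy_parts[] = { &fighting, &fleeing, &fornicating };
static const Emotion jealousy = { "jealousy", 3, jealousy_parts };

static const Emotion *const despair_parts[] = { &jealousy, &fleeing, &feeding };
static const Emotion despair = { "despair", 3, despair_parts };

/* Walk the tree and print the basic instincts at the bottom. */
static void print_leaves(const Emotion *e)
{
    if (e->n_parts == 0) {
        printf("  %s\n", e->name);
        return;
    }
    for (int i = 0; i < e->n_parts; i++)
        print_leaves(e->parts[i]);
}

int main(void)
{
    printf("\"despair\" bottoms out in:\n");
    print_leaves(&despair);
    return 0;
}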



Gene Douglas

unread,
Jan 6, 2000, 3:00:00 AM1/6/00
to

If it's robots we're concerned with, we would probably program them for
survival. That would mean that they need fuel, lubrication, moderate
temperatures, shelter from water, avoidance of jostling, and avoidance of
violence. If they are warrior robots or robocops, that would mean the
violence of others, and of the external environment.

Then, we would program "emotions" around those basic needs. Emotion would
mean that as the threat to the need increases, the priority of protective
behaviors increases. Eventually, that priority would exceed that of another
programming mandate. If priorities could range from 1 to 100, and if there
is a mandate, "do not harm humans," which is set at 50, then once the
computer is likely to be submerged in water, and lower priorities have not
worked, its survival priority might go up to 51 (a little pushing and
shoving) 60 (a good whack) 70 (whack with liklihood of injury) 80 (whack
with likelinood of serious injury) 90 (activity with a probability of
lethality) or 100 (activity with certainty of lethality) if its programming
allows that.

As these priorities go up, its energy output goes up, its concentration on
the one goal increases, its hypervigilance to stimuli increases, and its bias
toward interpreting stimuli as danger goes up. Would that then
be "emotion?" If not, why?

Must emotion involve sweat, blood pressure, trembling, etc.? If so, does
that indicate a bias toward soft materials, as opposed to metal and plastic?

Maniaq wrote in message <2bc52c90...@usw-ex0108-062.remarq.com>...


>Whatever happened to the Four F's ??
> Is this now a concept which is out of fashion, maybe?
>
>Personally, I can see the validity in asserting that
>instincts are quite simple trigger/response mechanisms that
>can be built into even the simplest organisms - the "Four
>F's" being your four basic instincts - Fighting, Fleeing,
>Feeding, and F@#king (sorry, make that Fornicating).
>
>Sure, emotions are usually more complex than basic
>instincts, but are they not made up of these basic
>instincts? Can you not define any given emotion as a
>combination of these basic building blocks?
>
>I can see it working recursively - so that a given emotion
>may be a combination of other emotions, which may in turn be
>a combination of other emotions, and so on - but at the
>bottom (top?) of every tree, you will find only F's...
>
>Whaddayathink?
>

Patrik Bagge

unread,
Jan 7, 2000, 3:00:00 AM1/7/00
to
Short comment on the topic of
'higher directives'

There is one problem with
1) do not harm humans
2) survive

The problem is the same as for humans/countries etc.
Is it allowed to kill/injure in order to avoid greater killing/injury?
If the answer is yes (as it commonly is), then the scenario gets
tricky, because it involves a prediction of future events...

Yours
/pat


Gene Douglas

unread,
Jan 7, 2000, 3:00:00 AM1/7/00
to

Patrik Bagge wrote in message ...

Computers are good at that. Consider the chess-playing computers. They see
a situation, and consider ten possible moves to follow that, ten possible
moves to follow each move, ten possible moves to follow the move after that,
etc. They can do this well enough to beat human champions. The limiting
factors seem to be the amount of memory they have, and the speed of their
processor.
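
In rough C, the kind of bounded lookahead Gene describes might look like
the sketch below (the branching factor, depth, successor rule, and
evaluation function are invented stand-ins, not how any real chess
program scores positions):

/* lookahead.c -- a toy negamax search over an abstract game tree. */
#include <stdio.h>

#define BRANCH 10   /* "ten possible moves" at each step */
#define DEPTH   4   /* how many moves ahead we look */

/* Stand-in evaluation of a position (higher is better for the side to move). */
static int evaluate(long state) { return (int)(state % 7) - 3; }

/* Stand-in successor rule: applying move m to a position gives a new one. */
static long successor(long state, int move) { return state * BRANCH + move + 1; }

/* Negamax: a position is worth the best of the negated values of the
   positions reachable from it, because the opponent moves next. */
static int search(long state, int depth)
{
    if (depth == 0)
        return evaluate(state);
    int best = -1000;
    for (int m = 0; m < BRANCH; m++) {
        int v = -search(successor(state, m), depth - 1);
        if (v > best)
            best = v;
    }
    return best;
}

int main(void)
{
    long root = 0;
    int best_move = 0, best = -1000;
    for (int m = 0; m < BRANCH; m++) {
        int v = -search(successor(root, m), DEPTH - 1);
        if (v > best) { best = v; best_move = m; }
    }
    printf("after looking %d plies ahead, move %d looks best (value %d)\n",
           DEPTH, best_move, best);
    return 0;
}

The same shape of search is what the harm-versus-survival question above
would demand: predicting a few steps of consequences and choosing the
action whose worst foreseeable outcome is least bad.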
Gene
-
GeneDou...@prodigy.net

Valter Hilden

unread,
Jan 7, 2000, 3:00:00 AM1/7/00
to
Maniaq wrote:
>
> Whatever happened to the Four F's ??
> Is this now a concept which is out of fashion, maybe?
>
> Personally, I can see the validity in asserting that
> instincts are quite simple trigger/response mechanisms that
> can be built into even the simplest organisms - the "Four
> F's" being your four basic instincts - Fighting, Fleeing,
> Feeding, and F@#king (sorry, make that Fornicating).
>
> Sure, emotions are usually more complex than basic
> instincts, but are they not made up of these basic
> instincts? Can you not define any given emotion as a
> combination of these basic building blocks?
>
> I can see it working recursively - so that a given emotion
> may be a combination of other emotions, which may in turn be
> a combination of other emotions, and so on - but at the
> bottom (top?) of every tree, you will find only F's...
>
> Whaddayathink?
>
> * Sent from AltaVista http://www.altavista.com Where you can also find related Web Pages, Images, Audios, Videos, News, and Shopping. Smart is Beautiful

An interesting mathematical question: define a set of linearly
independent emotions, or an eigenvector of emotions.

Another interesting set of primitive emotions for robots:
1) Sloth - a robot should try to economize its energy
2) Lust - a robot should try to make copies of itself
3) Greed - a robot should try to not waste the resources it needs
4) Pride - a robot should try to divulge its qualities
5) Envy - a robot should try to absorb the qualities of its competitors
6) Gluttony - a robot should try to acquire more of the resources it needs
7) Anger - a robot should try to destroy the obstacles it finds
If entered in a survival competition, such a robot would probably win.
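
In rough C, one way such drives might compete is sketched below (the
weights, sensor readings, and urgency formulas are invented; pride and
envy are left out because they need a model of other robots):

/* drives.c -- a toy arbitration among invented primitive drives. */
#include <stdio.h>

typedef struct {
    const char *name;
    double weight;   /* how strongly this robot cares about the drive */
    double (*urgency)(double battery, double resources, double obstacles);
} Drive;

/* Toy urgency formulas over three sensor readings in [0, 1]. */
static double sloth(double b, double r, double o)    { (void)r; (void)o; return 1.0 - b; }
static double lust(double b, double r, double o)     { (void)o; return b * r; }
static double greed(double b, double r, double o)    { (void)b; (void)o; return 1.0 - r; }
static double gluttony(double b, double r, double o) { (void)o; return b * (1.0 - r); }
static double anger(double b, double r, double o)    { (void)b; (void)r; return o; }

int main(void)
{
    Drive drives[] = {
        { "sloth",    1.0, sloth    },
        { "lust",     0.5, lust     },
        { "greed",    0.8, greed    },
        { "gluttony", 0.6, gluttony },
        { "anger",    0.9, anger    },
    };
    int n = sizeof drives / sizeof drives[0];

    /* Current state of the robot's world (invented readings). */
    double battery = 0.3, resources = 0.4, obstacles = 0.7;

    int winner = 0;
    double best = -1.0;
    for (int i = 0; i < n; i++) {
        double u = drives[i].weight * drives[i].urgency(battery, resources, obstacles);
        printf("%-9s urgency %.2f\n", drives[i].name, u);
        if (u > best) { best = u; winner = i; }
    }
    printf("acting on: %s\n", drives[winner].name);
    return 0;
}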
