Thought Doesn’t Think That It Feels


Craig Weinberg

Sep 21, 2012, 4:48:14 PM
to everyth...@googlegroups.com

Post from my blog:

Simple as that, really. From psychological discoveries of the subconscious and unconscious, to cognitive bias and logical fallacies, to quasi-religious faith in artificial intelligence, we seem to have a mental blind spot for emotional realities.

What could be more human than making emotional mistakes or having one’s judgment cloud over because of favoritism or prejudice? Yet when it comes to assessing the feasibility of a sentient being composed of programmed functions, we tend to miss this little detail entirely: personal preference. Opinion. Bias. It doesn’t bother us that machines completely lack this dimension and in all cases exhibit nothing but impersonal computation. This tends to lead the feel-blind intellect to bond unknowingly with the computer. The consistency of an automaton’s function is comforting to our cognitive self, which longs to be free of emotional bias - so much so that it is able to hide that longing from itself and project the clean lines of perfect consequences outward onto a program.

It’s not that machines aren’t biased too - of course they are incredibly biased toward the most literal interpretations possible - but they are all biased in exactly the same way, so it seems to us a decent tradeoff. The rootless consciousness of the prefrontal cortex thinks that is a small price to pay, and one which will inevitably be mitigated with improvements in technology. In its crossword puzzle universe of Boolean games, something like a lack of personhood or feeling is a minor glitch, an aesthetic ‘to be continued’ which need only be set aside for now while the more important problems of function are solved.

It seems that the ocean of feelings and dreams tapped into by Freud, Jung, and others in the 20th century has been entirely dismissed in favor of a more instrumental approach. Simulation of behaviors. Turing machine emulation. This approach has the fatal flaw of drawing the mind upside down, with intellect and logic at the base building up to complex mimicry of mood and inflection. The mind has an ego and doesn’t know it. Thinking has promoted itself to a cause of feeling and experience rather than a highly specialized and esoteric elaboration of personhood.
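To make the contrast concrete, here is a minimal, hypothetical sketch of what “simulation of behaviors” amounts to: a lookup from stimulus to a mood-labelled response. All names in it are invented for illustration, and nothing in the program feels anything - the mapping is the whole story.

    # A minimal "behavior simulation" (Python, illustrative only).
    # It emits mood-labelled responses by table lookup; the "mood" is
    # just a string attached to an output, and swapping the labels
    # would change nothing about how the program runs.

    RESPONSES = {
        "insult": ("anger", "That was uncalled for."),
        "gift": ("joy", "Thank you so much!"),
        "loss": ("sadness", "I'm sorry to hear that."),
    }

    def respond(stimulus):
        mood, line = RESPONSES.get(stimulus, ("neutral", "I see."))
        return "[%s] %s" % (mood, line)

    for s in ("gift", "insult", "weather"):
        print(respond(s))

A Turing-machine emulation of mood can be made arbitrarily more elaborate than this table, but on the view above it differs only in degree, not in kind.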

We can see this of course in developmental psychology and anthropology. Babies don’t come out of the womb with a flashing cursor, ready to accept programming passively. Primitive societies don’t begin with impersonal state bureaucracies and progress to chiefdoms. We seem to have to learn, again and again, the lesson that our humanity is not a product of strategy and programming, but of authenticity and direct participation.

When people talk about building advanced robots and computers which will be indistinguishable from human beings or will far surpass them, they always seem to project a human agenda onto them. We define intelligence outside of ourselves as that which serves a function for us, not for the being itself. This again suggests to me the reflective quality of the mind, of being blinded by the reflection of our own eyes in our sunglasses. Thoughts have a hard time assessing the feeling behind themselves, and an even harder time admitting that it matters.

I think we see this more and more in all areas of our lives - an overconfidence in theoretical approaches and a continual disconnect from the results. We keep hoping that it will work this time, even though we probably know that it never will. It’s as if our collective psyche is waiting for our deluded minds to catch up. Waiting for us to figure out that in spite of the graphs and tests and retooling, the machine is really not working any better.

Bruno Marchal

Sep 22, 2012, 9:10:23 AM
to everyth...@googlegroups.com
You are right. We have very often dismissed emotion, feelings and consciousness in humans.

Unfortunately, dismissing emotion, feelings and consciousness in machines will not help.

Bruno







Craig Weinberg

Sep 22, 2012, 11:08:58 AM
to everyth...@googlegroups.com


On Saturday, September 22, 2012 9:10:30 AM UTC-4, Bruno Marchal wrote:



You are right. We have very often dismissed emotion, feelings and consciousness in humans.

Unfortunately, dismissing emotion, feelings and consciousness in machines will not help.

Bruno


You don't see a connection between the two? There is no chance of machine feelings being a psychological projection?

I'm not opposed to the idea of computers having emotions in theory, but the evidence we've seen so far shows precisely the opposite. If inorganic machines could grow and change and learn by themselves, then we would likely have seen at least a single example of just that. What we see instead is that even many brilliant minds working hard with the finest technology face a perpetual uphill battle. In spite of Moore's Law and 30 years of commercial explosion, there is still no sign of any authentic feeling or intentional act by a program. What we see is exactly what I would expect from a fundamentally flawed assumption being dragged out - like Ptolemaic astronomy... it just isn't working out because we aren't approaching it the right way. We are trying to build a house on top of a floating roof.

Craig


Bruno Marchal

Sep 22, 2012, 11:55:29 AM
to everyth...@googlegroups.com
On 22 Sep 2012, at 17:08, Craig Weinberg wrote:




You don't see a connection between the two? There is no chance of machine feelings being a psychological projection?

There is. But as far as we are concerned with the "emotion dismissing" problem, projecting emotion onto them when they behave in certain ways will be less dismissive of emotion than attributing puppetness to them by decision.




I'm not opposed to the idea of computers having emotions in theory, but the evidence we've seen so far shows precisely the opposite. If inorganic machines could grow and change and learn by themselves, then we would likely have seen at least a single example of just that. What we see instead is that even many brilliant minds working hard with the finest technology face a perpetual uphill battle. In spite of Moore's Law and 30 years of commercial explosion, there is still no sign of any authentic feeling or intentional act by a program.


Because the shadows of those experiences, which exist (epistemologically) in the comp theory, are still confined in complex mathematical theorems. But PA thinks like you and me, I think. From my perspective you are just not listening to such machines, and, for reasons that seem to me arbitrary, you have simply decided that they are zombies, when I think they just lack our kind of long, rich story; they don't lack a soul. You could look at a baby and decide that it is completely stupid, *at first sight*.
Give them time. You can't compare millions of years of evolutionary machinery with the hundred thousand years of machine evolution, or the one century of the universal machine.
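For reference, PA here is Peano Arithmetic, the standard first-order theory of the natural numbers. A usual axiomatization, with S the successor function, is:

    \forall x\, \lnot (S(x) = 0)
    \forall x\, \forall y\, (S(x) = S(y) \rightarrow x = y)
    \forall x\, (x + 0 = x)
    \forall x\, \forall y\, (x + S(y) = S(x + y))
    \forall x\, (x \cdot 0 = 0)
    \forall x\, \forall y\, (x \cdot S(y) = (x \cdot y) + x)

plus the induction schema, one instance per formula \varphi:

    (\varphi(0) \wedge \forall x\, (\varphi(x) \rightarrow \varphi(S(x)))) \rightarrow \forall x\, \varphi(x)

This is the "machine" whose theorems are at issue in the paragraph above.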

Bruno



Craig Weinberg

Sep 23, 2012, 8:47:12 AM
to everyth...@googlegroups.com


On Saturday, September 22, 2012 11:55:35 AM UTC-4, Bruno Marchal wrote:



You don't see a connection between the two? There is no chance of machine feelings being a psychological projection?

There is. But as far as we are concerned with the "emotion dismissing" problem, projecting emotion onto them when they behave in certain ways will be less dismissive of emotion than attributing puppetness to them by decision.


Why would it be any less dismissive? You just have the opposite problem of Chalmers' paper: spontaneously present and advancing qualia. If someone writes a program that draws Bugs Bunny, then as that program is improved to respond to other drawings of Elmer Fudd and Daffy Duck and to talk like Bugs Bunny, feelings and thoughts would have to begin to appear and gradually become more real. Bugs Bunny would have to feel himself and his world as the faintest hint of non-zombie, with sudden infusions of realism and phenomenology coinciding with each software upgrade.


I'm not opposed to the idea of computers having emotions in theory, but the evidence we've seen so far shows precisely the opposite. If inorganic machines could grow and change and learn by themselves, then we would likely have seen at least a single example of just that. What we see instead is that even many brilliant minds working hard with the finest technology face a perpetual uphill battle. In spite of Moore's Law and 30 years of commercial explosion, there is still no sign of any authentic feeling or intentional act by a program.


Because the shadows of those experiences, which exist (epistemologically) in the comp theory, are still confined in complex mathematical theorems. But PA thinks like you and me, I think.

That's what I'm writing about... we don't think like we think we think. Our thoughts bubble up from emotion and sensation, desire and personhood. PA thinks like we think we think - skimmed off the top of the prefrontal cortex's qualia of logical abstraction. It's the tip of the pyramid thinking that it's made up of smaller pyramidal peaks, but it isn't - it's made of sub-personal bricks of trans-rational, non-mereological qualia.



From my perspective you are just not listening to such machines, and, for reasons that seem to me arbitrary, you have simply decided that they are zombies, when I think they just lack our kind of long, rich story; they don't lack a soul. You could look at a baby and decide that it is completely stupid, *at first sight*.
Give them time. You can't compare millions of years of evolutionary machinery with the hundred thousand years of machine evolution, or the one century of the universal machine.
 
I understand why you think that, but my view isn't arbitrary at all. For 20 years I had your view. I saw 'patterns' as being the universal primitive. Babies cry, but machines never do. Babies have personal needs, but machines are only there to service our needs impersonally. I agree that the difference is only one of the richness of history, but I think that you are arbitrarily assuming that that history doesn't extend to the material substrate itself. I am saying that matter is history, and that it is the only vehicle of history within spacetime. I have no prejudice against non-human intelligence or non-biological experience at all, I just think that you dishonor the last billion years of biology by trying to skip right from inorganic mechanism to anthropomorphic psychology. You wind up with a puppet (not a zombie, since I don't have any expectation of biological-level feeling and experience in the first place). This isn't a person with no soul, it is an assembly of molecular souls that has been configured to impersonate human behavior - just like a cartoon or a puppet.

Craig





John Clark

Sep 23, 2012, 12:32:50 PM
to everyth...@googlegroups.com
On Sat, Sep 22, 2012 at 9:10 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:

> We have very often dismissed emotion

Nothing mysterious about emotion, it's just a condition that predisposes a computer or a human to behave in one way rather than another.

> feelings and consciousness in humans.
 
Unfortunately that is not true of philosophers; they don't dismiss consciousness in humans. In fact that's just about the only thing they want to talk about, despite the fact that such talk has never once produced anything of value. However, philosophers are reluctant to talk about intelligence in humans, even though the subject has proven to be much more fruitful, because it's also much harder, and unlike a bullshit consciousness theory, a bullshit intelligence theory is easy to shoot down: you can see with your own eyes that it just doesn't work, and that takes all the fun out of theorizing. Another advantage is that to become a consciousness theorist you really don't need to know anything; a grade school education is more than enough. But it takes years of study before you can even begin to figure out how intelligence works; with consciousness you can start producing hot air immediately, and to some that's more fun.

So philosophers continue to blather on and on about consciousness, and, since they have abandoned the subject, the study of intelligence has been left to computer scientists, programmers, mathematicians, and neurologists.

  John K Clark 


Craig Weinberg

Sep 23, 2012, 2:10:00 PM
to everyth...@googlegroups.com


On Sunday, September 23, 2012 12:32:51 PM UTC-4, John Clark wrote:
On Sat, Sep 22, 2012 at 9:10 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:

> We have very often dismissed emotion

Nothing mysterious about emotion, it's just a condition that predisposes a computer or a human to behave in one way rather than another.

That's not emotion, that's a behavioral factor. I can throw a bowling ball toward the left side or the right; it doesn't mean that there is an emotion which is magically present or necessary. To say that emotion is just a condition that affects behavior is so completely foreign to me that it's hard to even relate to. You don't see that you are already amputating 99% of the phenomenon from the start and then announcing casually that the remaining 1% is the whole thing. If I had not already been talking with you, I would think that you were trolling me. It's like saying "the entire history of human speech is just a way to make the vocal cords vibrate other people's eardrums".
 

> feelings and consciousness in humans.
 
Unfortunately that is not true of philosophers; they don't dismiss consciousness in humans. In fact that's just about the only thing they want to talk about, despite the fact that such talk has never once produced anything of value.

ALL science comes from philosophy and nowhere else. Science is a special case of philosophy. Not that philosophy isn't annoying, but really philosophy has literally produced everything of value in human civilization - which is why it is held in such high esteem throughout history.
 
However, philosophers are reluctant to talk about intelligence in humans, even though the subject has proven to be much more fruitful, because it's also much harder, and unlike a bullshit consciousness theory, a bullshit intelligence theory is easy to shoot down: you can see with your own eyes that it just doesn't work, and that takes all the fun out of theorizing.

"Blah blah, people who don't think exactly like me are stupid liars and worthless and their lives are worthless blah blah why am I cursed with their presence, blah blah ego, arrogance, ignorance."
 
Another advantage is that to become a consciousness theorist you really don't need to know anything; a grade school education is more than enough. But it takes years of study before you can even begin to figure out how intelligence works; with consciousness you can start producing hot air immediately, and to some that's more fun.

(see previous)
 

So philosophers continue to blather on and on about consciousness, and, since they have abandoned the subject, the study of intelligence has been left to computer scientists, programmers, mathematicians, and neurologists.

...and the peer-reviewed echo chamber expands forever, generating ever more inscrutable documents understandable only to tiny incestuous groups of ultra-specialists. The End.

Craig




Bruno Marchal

Sep 24, 2012, 4:39:29 AM
to everyth...@googlegroups.com
On 23 Sep 2012, at 18:32, John Clark wrote:

On Sat, Sep 22, 2012 at 9:10 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:

> We have very often dismissed emotion

Nothing mysterious about emotion, it's just a condition that predisposes a computer or a human to behave in one way rather than another.

That is a conditional instruction, and it is third-person describable.
"Emotion" refers to a first-person quale. I think you continue to dismiss the difference, despite some posts which show that you do see it. It means that you believe in a supervenience thesis which has been debunked in the computationalist frame.






> feelings and consciousness in humans.
 
Unfortunately that is not true of philosophers; they don't dismiss consciousness in humans. In fact that's just about the only thing they want to talk about, despite the fact that such talk has never once produced anything of value. However, philosophers are reluctant to talk about intelligence in humans, even though the subject has proven to be much more fruitful, because it's also much harder, and unlike a bullshit consciousness theory, a bullshit intelligence theory is easy to shoot down: you can see with your own eyes that it just doesn't work, and that takes all the fun out of theorizing. Another advantage is that to become a consciousness theorist you really don't need to know anything; a grade school education is more than enough. But it takes years of study before you can even begin to figure out how intelligence works; with consciousness you can start producing hot air immediately, and to some that's more fun.

So philosophers continue to blather on and on about consciousness, and, since they have abandoned the subject, the study of intelligence has been left to computer scientists, programmers, mathematicians, and neurologists.

So let us tackle the subject of consciousness with the scientific method. What about UDA step 4? Your argument for refusing step 3 has been shown, by a number of people on this list, to be a confusion between the 1-view and the 3-view, despite their 3p definitions. So what?

Bruno


Roger Clough

Sep 24, 2012, 7:37:59 AM
to everything-list
Hi John Clark


Emotions are strong feelings that are set off not primarily by our senses but when our will to do something is blocked. But they are still feelings and can be handled as such, except that they are often more strongly linked to muscular and bodily reactions. Animal studies would bring out what's going on from an instinctual point of view. In humans, fight-or-flight episodes are good examples. And in humans, emotions are more often than not triggered by unfortunate thoughts. Emotions are linked in particular to facial expressions; Darwin wrote a book about those.





Roger Clough, rcl...@verizon.net
9/24/2012
"Forever is a long time, especially near the end." -Woody Allen

