Aeon: "AIs should have the same ethical protections as animals"


cloud...@gmail.com

Apr 26, 2019, 9:33:31 AM
to Everything List


AIs should have the same ethical protections as animals

John Basl is assistant professor of philosophy at Northeastern University in Boston


...

A puzzle and difficulty arises here because the scientific study of consciousness has not reached a consensus about what consciousness is, and how we can tell whether or not it is present. On some views – ‘liberal’ views – for consciousness to exist requires nothing but a certain type of well-organised information-processing, such as a flexible informational model of the system in relation to objects in its environment, with guided attentional capacities and long-term action-planning. We might be on the verge of creating such systems already. On other views – ‘conservative’ views – consciousness might require very specific biological features, such as a brain very much like a mammal brain in its low-level structural details: in which case we are nowhere near creating artificial consciousness.

It is unclear which type of view is correct or whether some other explanation will in the end prevail. However, if a liberal view is correct, we might soon be creating many subhuman AIs who will deserve ethical protection. There lies the moral risk.

Discussions of ‘AI risk’ normally focus on the risks that new AI technologies might pose to us humans, such as taking over the world and destroying us, or at least gumming up our banking system. Much less discussed is the ethical risk we pose to the AIs, through our possible mistreatment of them.

...

 

My 'conservative' view: information processing (alone) does not achieve experience (consciousness) processing.




Cosmin Visan

Apr 26, 2019, 9:54:32 AM
to Everything List
Also my plush rabbit toy should have the same rights. Freedom for all the objects in the world!

Bruno Marchal

Apr 29, 2019, 8:27:26 AM
to everyth...@googlegroups.com
On 26 Apr 2019, at 15:33, cloud...@gmail.com wrote:



AIs should have the same ethical protections as animals

...

Discussions of ‘AI risk’ normally focus on the risks that new AI technologies might pose to us humans, such as taking over the world and destroying us, or at least gumming up our banking system. Much less discussed is the ethical risk we pose to the AIs, through our possible mistreatment of them.

Humans are still the main threat to humans. The idea of giving human rights to AI does not make much sense. It is part of the AIs' work to learn to defend themselves. We can be open-minded, and listen, but defending their rights can only threaten human rights, I would say. In the theology of the machine, it can be proved that hell is paved with good intentions … (amazingly enough, and accepting some definitions, of course).

 

My 'conservative' view: information processing (alone) does not achieve experience (consciousness) processing.

Mechanism makes you right on this, although it can depend on how information processing is defined. Consciousness is not in the processing, but in truth, or in the semantics related to that processing. The processing itself is only a relative concept, whereas consciousness is an absolute thing.

Bruno





--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To post to this group, send email to everyth...@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.

Cosmin Visan

Apr 29, 2019, 8:34:56 AM
to Everything List
There is no definition for such a thing. It is just a nonsensical concept. It's as if stepping in the mud and saying: "Look! The mud information-processed the shape of my foot! The mud is so intelligent! It must have rights!!!"

cloud...@gmail.com

Apr 29, 2019, 9:16:49 AM
to Everything List


On Monday, April 29, 2019 at 7:27:26 AM UTC-5, Bruno Marchal wrote:

On 26 Apr 2019, at 15:33, cloud...@gmail.com wrote:



...

My 'conservative' view: information processing (alone) does not achieve experience (consciousness) processing.

Mechanism makes you right on this, although it can depend on how information processing is defined. Consciousness is not in the processing, but in truth, or in the semantics related to that processing. The processing itself is only a relative concept, whereas consciousness is an absolute thing.

Bruno


On "Consciousness is not in the processing, but in truth, or in the semantic related to that processing, ...": I address this in the next article:


But my mode of thinking is that of an engineer, not a truth-seeker.

Telmo Menezes

Apr 29, 2019, 9:19:26 AM
to 'Brent Meeker' via Everything List
I would say that engineering is a form of truth-seeking.

Telmo.

Cosmin Visan

Apr 29, 2019, 9:50:30 AM
to Everything List
Semantics means meaning, and meaning is something that exists in consciousness. You cannot use that for any "programming".

cloud...@gmail.com

Apr 29, 2019, 2:54:47 PM
to Everything List

Now that is something programming language theorists would not agree with:


In programming language theory, semantics is the field concerned with the ... study of the meaning of programming languages.
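The quoted definition can be made concrete with a toy sketch (the language and names here are invented for illustration, not from the article): a denotational semantics maps each expression of a small language to the mathematical object it means, with no appeal to a mind.

```python
# Toy denotational semantics: map each expression of a tiny arithmetic
# language to the number it denotes. The "meaning" of a program here is
# a mathematical object, computed mechanically.

def denote(expr):
    """Return the denotation (an integer) of an expression tree."""
    op = expr[0]
    if op == "lit":            # ("lit", n) denotes n itself
        return expr[1]
    if op == "add":            # ("add", a, b) denotes [[a]] + [[b]]
        return denote(expr[1]) + denote(expr[2])
    if op == "mul":            # ("mul", a, b) denotes [[a]] * [[b]]
        return denote(expr[1]) * denote(expr[2])
    raise ValueError(f"unknown operator: {op}")

# (2 + 3) * 4 denotes 20
program = ("mul", ("add", ("lit", 2), ("lit", 3)), ("lit", 4))
print(denote(program))  # → 20
```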

@philipthrift

Brent Meeker

Apr 29, 2019, 3:02:21 PM
to everyth...@googlegroups.com


On 4/29/2019 5:27 AM, Bruno Marchal wrote:
...

Humans are still the main threat to humans. The idea of giving human rights to AI does not make much sense. It is part of the AIs' work to learn to defend themselves. We can be open-minded, and listen, but defending their rights can only threaten human rights, I would say. In the theology of the machine, it can be proved that hell is paved with good intentions … (amazingly enough, and accepting some definitions, of course).

A right is some action that society agrees both to not interfere with and to protect against interference. Humans give to other humans the rights they want for themselves, so it is a reciprocal agreement. The problem with extending this to AIs is that AIs are easily distinguished, and so there must be some commonality to be the basis for reciprocity. It's like the problem of racism: it was easy to deprive blacks of rights so long as they were seen as distinct. So what do we have in common with AIs? Intelligence. Do we have any values, like empathy, love of children, need for companionship... in common? I think it depends on the AI.

Brent



My 'conservative' view: information processing (alone) does not achieve experience (consciousness) processing.

Mechanism makes you right on this, although it can depend on how information processing is defined. Consciousness is not in the processing, but in truth, or in the semantics related to that processing. The processing itself is only a relative concept, whereas consciousness is an absolute thing.

Bruno






Cosmin Visan

Apr 30, 2019, 3:15:41 AM
to Everything List
Well... if you want to do word-play, you can word-play all day long. I see that AI believers are experts in word-play. They endow their toy with all the human capacities in the world and stand in awe of their living object. Their little puppy is alive, intelligent, smart, beautiful, can play Chess, can colonize the entire galaxy, lol. Probably too much loneliness and lack of genuine human interaction.

cloud...@gmail.com

Apr 30, 2019, 7:02:10 AM
to Everything List



It's simply distinguishing informational/physical semantics from experimental/psychical semantics.

- @philipthrift

Brent Meeker

Apr 30, 2019, 6:06:54 PM
to everyth...@googlegroups.com
And I see that you have no rational response to any criticism. Only "It doesn't exist" and unsupported assertions about what can't be true.

Brent


On 4/30/2019 12:15 AM, 'Cosmin Visan' via Everything List wrote:
Well... if you want to do word-play, you can word-play all day long. I see that AI believers are experts in word-play. They endow their toy with all the human capacities in the world and stand in awe of their living object. Their little puppy is alive, intelligent, smart, beautiful, can play Chess, can colonize the entire galaxy, lol. Probably too much loneliness and lack of genuine human interaction.




Cosmin Visan

May 1, 2019, 2:21:40 AM
to Everything List
Sure, I have lots of rational responses. But you have to be intelligent to understand them. Otherwise, what can you say to people who believe objects can become alive just by writing a line of code?


On Wednesday, 1 May 2019 01:06:54 UTC+3, Brent wrote:
And I see that you have no rational response to any criticism. Only "It doesn't exist" and unsupported assertions about what can't be true.

Brent



Bruno Marchal

May 1, 2019, 5:11:16 AM
to everyth...@googlegroups.com
On 29 Apr 2019, at 14:34, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:

There is no definition for such a thing.

Of “information processing”? 

Of course there are definitions, even formalised in arithmetic. To be clear, by information I ALWAYS mean Shannon's 3p (third-person) definition. It is not the mundane usage, which often puts the meaning of the information into the definition; that is reasonable, but confusing when we address the mind-body issue.

By information processing, I mean mainly “computation”, which means a sequence of expressions related through some universal system (universal in the mathematical sense of Church, Turing, etc.).
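For concreteness, Shannon's third-person notion can be computed with no reference whatsoever to meaning; a minimal sketch (illustrative code, not Bruno's arithmetical formalism):

```python
# Shannon entropy: the information content of a source, measured purely
# from symbol probabilities, with no reference to what the symbols mean.
from math import log2

def shannon_entropy(probabilities):
    """H = -sum(p_i * log2(p_i)), in bits per symbol."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

# A fair coin carries 1 bit per toss; a biased coin carries less.
print(shannon_entropy([0.5, 0.5]))  # → 1.0
print(shannon_entropy([0.9, 0.1]))  # ≈ 0.469
```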




It is just a non-sensical concept. It's as if stepping in the mud and saying: "Look! The mud information processed the shape of my foot! The mud is so intelligent! He must have rights!!!”


Not at all. Those will be defined by the notion of the first person, and eventually be related to the machine through the mechanist hypothesis, and also the self-referential discourse, including silence, of the universal machine.

Bruno





On Monday, 29 April 2019 15:27:26 UTC+3, Bruno Marchal wrote:
 it can depend how information processing is defined.


Bruno Marchal

May 1, 2019, 5:15:37 AM
to everyth...@googlegroups.com
On 29 Apr 2019, at 15:50, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:

Semantics means meaning, and meaning is something that exists in consciousness.

No problem with this.



You cannot use that for any "programming”.


But computer science is in large part the study of the relation between programs and their semantics. The machine which relates the two is the universal machine. If my computer were unable to associate some semantics to a program, this mail would never have been sent to you.

Bruno






Cosmin Visan

May 1, 2019, 6:10:32 AM
to Everything List
There are different ways of arriving at the shape of my foot in the mud:

1) I step in the mud.

2) I make a super-duper complicated AI that does pattern recognition and plays Chess, and make it sculpt the shape of my foot in the mud.

3) I personally, using my own consciousness, sculpt the shape of my foot.

Based on the belief that "behavior is everything" that lots of people hold, since the result in all 3 cases is identical, the mud itself must be intelligent, since it can process the shape of my foot exactly as the super-duper AI does and exactly as I do personally. And since people name option 2) computation, and claim that the result is the same in all cases, all 3 cases must be computations. So stepping in the mud is computation. QED


On Wednesday, 1 May 2019 12:11:16 UTC+3, Bruno Marchal wrote:
It is just a non-sensical concept. It's as if stepping in the mud and saying: "Look! The mud information processed the shape of my foot! The mud is so intelligent! He must have rights!!!”

Not at all. Those will be defined by the notion of the first person, and eventually be related to the machine through the mechanist hypothesis, and also the self-referential discourse, including silence, of the universal machine.

Bruno








Cosmin Visan

May 1, 2019, 6:16:21 AM
to Everything List


On Wednesday, 1 May 2019 12:15:37 UTC+3, Bruno Marchal wrote:

On 29 Apr 2019, at 15:50, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:

Semantics means meaning, and meaning is something that exists in consciousness.

No problem with this.



You cannot use that for any "programming”.


But computer science is in large part the study of the relation between programs and their semantics. The machine which relates the two is the universal machine. If my computer were unable to associate some semantics to a program, this mail would never have been sent to you.

Bruno


First you say that you have no problem with "semantics" meaning "meaning in consciousness", and one second later you talk about computers having semantics. What am I missing?

John Clark

May 1, 2019, 10:07:47 AM
to everyth...@googlegroups.com
On Fri, Apr 26, 2019 at 9:33 AM <cloud...@gmail.com> wrote:

> AIs should have the same ethical protections as animals

I would maintain that the question is of no practical importance whatsoever, because AIs won't need our protection. The important question is the one an AI might ask itself: should I give humans the same ethical protection that I give to other AIs?

 John K Clark

 

Bruno Marchal

May 1, 2019, 12:41:14 PM
to everyth...@googlegroups.com
On 1 May 2019, at 12:16, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:





First you say that you have no problem with "semantics" meaning "meaning in consciousness”

OK (without making the technical nuances needed to make this entirely clear): meaning and consciousness are deeply related, and quasi-identifiable if we take some precautions.




and 1 second later you talk about computers having semantics. What am I missing ?


That a computer can be conscious. To be sure, we can usually build a semantics for simple programs, and the computer is anything but a simple machine, so the semantics associated with a universal computer is infinitely complex, and accessible in a direct way by the person associated with the computer, but in a way which it cannot communicate or prove to another. Yet it can communicate part of it, and it can prove that it cannot prove it to another. By introspection, you can verify this on yourself, and this explains in part why people are confused by the term consciousness, because it involves the undefinable meaning of … meaning.

Bruno

Brent Meeker

May 1, 2019, 12:44:26 PM
to everyth...@googlegroups.com


On 5/1/2019 2:15 AM, Bruno Marchal wrote:

...


But computer science is in large part the study of the relation between programs and their semantics. The machine which relates the two is the universal machine. If my computer were unable to associate some semantics to a program, this mail would never have been sent to you.

Which illustrates that meaning is a relation to environments and actions.

Brent

Cosmin Visan

May 1, 2019, 1:13:27 PM
to Everything List
How is a computer conscious? Magic? Are you even aware of the Chinese Room argument? Are you even aware of what the phenomenon of Understanding is about? Are you aware that consciousnesses work by Understanding, namely bringing new qualia into existence out of nothing? Are you aware that a computer is just a collection of billiard balls banging into each other? Are you aware that consciousness is a unified entity, while a "computer" is not? And so on. It seems to me that you have no understanding of consciousness whatsoever. You just randomly play with concepts and live under the impression that you talk about consciousness, when in fact, since you have no understanding whatsoever of even elementary phenomenological facts, you are actually talking about anything but consciousness. For example, are you aware that time is just a quale in consciousness, so there is no "physical time"? If there is no physical time, your computer becomes once more what it has always been: a fantasy.

I would really like you to answer all of the above questions, to see for yourself your own ignorance regarding consciousness.

John Clark

May 1, 2019, 1:34:25 PM
to everyth...@googlegroups.com
On Wed, May 1, 2019 at 1:13 PM 'Cosmin Visan'  <everyth...@googlegroups.com> wrote:

> How is a computer conscious ?

The same way I am and perhaps you are.

>  Are you even aware of the Chinese Room argument ?

Yes, the silliest thought experiment in the history of the world; the only thing it proves is that Searle is a very bad philosopher.

 John K Clark 

Brent Meeker

May 1, 2019, 1:45:54 PM
to everyth...@googlegroups.com
What ethics attempts to do is to allow an interacting social group to realize their individual values to the greatest degree possible by some measure, even though they have some conflicting values. The problem with AIs is that they may have very different values, not only from humans but also from one another. For example, humans value the companionship of other humans. This is a big evolutionary advantage and appears in other social animals. But there's no reason that an AI, say one built as a Mars Rover, would be provided with a desire for the companionship of another Mars Rover. In fact we'd want them to explore independently, and would provide that as a hard-wired value the way evolution provides us with a hard-wired value for sex.

In some ways this may make the problem of AI ethics easier: they may have values that don't conflict with each other or with humans. An AI may not care if it's turned off for a year or scrapped for parts. But it also may not care if it has to eliminate the human race to achieve its values.

Brent

Quentin Anciaux

May 1, 2019, 3:02:12 PM
to everyth...@googlegroups.com


Le mer. 1 mai 2019 à 18:13, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> a écrit :
How is a computer conscious ? Magic ? Are you even aware of the Chinese Room argument ?

Yes, and how is the Chinese room not conscious? Because you have to associate it either with the dumb person acting as the processor or with the rules? The Chinese room, as a whole information-processing unit, is conscious. If you ask it, it will tell you so... Prove it is not.

Quentin

Terren Suydam

May 1, 2019, 3:34:40 PM
to Everything List

I would argue for "pancyberpsychism" (I'm no good at naming - is there a name for that already?), which is to say that there is something it is like to do information processing of any kind. However, the quality of the consciousness involved in that processing is related to its dynamics. So banging on a rock involves a primitive form of information processing, as vibrations ripple through the rock - there is something it is like for that rock to be banged on. For ongoing consciousness, some sort of feedback loop must be involved. A thermostat would be a primitive example of this, or a simple oscillating electric circuit. The main idea is that consciousness is associated with cybernetic organization and has nothing to do with substrate, which might be material or virtual.
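The thermostat example above can be sketched as a minimal cybernetic loop (the setpoint, gains, and function name here are invented for illustration): sense the environment, compare to a goal, act back on the environment, repeat.

```python
# Minimal cybernetic feedback loop: a thermostat that senses its
# environment, compares the reading to a setpoint, and acts back on
# that same environment, closing the loop.
def run_thermostat(setpoint, temp, steps):
    history = []
    for _ in range(steps):
        heater_on = temp < setpoint          # sense + compare
        temp += 0.5 if heater_on else -0.3   # act: heat, or drift back down
        history.append(round(temp, 2))       # observed trajectory
    return history

# Starting cold, the loop settles into oscillation around the setpoint.
print(run_thermostat(setpoint=20.0, temp=15.0, steps=20))
```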

In the Chinese Room example, the cybernetic characteristics of the thought experiment lack any true feedback mechanism. This is the case with most instances of software as we know it - e.g. traditional chess engines. There is something it is like to be them, but it's not anything we would recognize in terms of ongoing subjective awareness. One could argue that operating systems (including Mars Rovers) embody the cybernetic dynamics necessary for ongoing experience, but I'd guess that what it's like to be an operating system would be pretty alien.

With biological brains, it's all about feedback and recursivity. Small insects with rudimentary nervous systems are totally recursive, feeding sensory data in and processing it continuously. So insect consciousness is much closer to our own than ordinary von Neumann-architecture data processing.

As nervous systems get more complex, feeding in more data and processing data in much more sophisticated ways, the consciousness involved would likewise be experienced in a richer way.

Humans, with our intricate conceptual, language-based self-models, achieve true self-consciousness. The self-model is a quantum leap forward, giving us the ability to say "I am". The ego gets a bad rap but it's responsible for our ability to notice ourselves and live within and create ongoing narratives about what we are, in relation to what we aren't.  This explains why ego-dissolving psychedelics lead to such profound changes in consciousness.

Terren

Quentin Anciaux

May 1, 2019, 3:42:07 PM
to everyth...@googlegroups.com
Map lookup is a valid implementation for any program you can conceive, albeit a very inefficient one... The Chinese room is such an implementation... And just as my parts are not me, I'm not the sum of my parts...
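The lookup-table point can be illustrated over a finite domain (a toy sketch; the function and names are invented): a computed function and a precomputed table are behaviourally indistinguishable, despite wildly different innards.

```python
# Over a finite domain, any computed function can be replaced by a
# precomputed lookup table; from the outside, the two implementations
# are indistinguishable.
def is_prime(n):
    """Trial-division primality test (the 'computing' implementation)."""
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

DOMAIN = range(100)
LOOKUP = {n: is_prime(n) for n in DOMAIN}   # the 'rule book' implementation

# Extensionally equal on the whole domain.
assert all(LOOKUP[n] == is_prime(n) for n in DOMAIN)
print(LOOKUP[97], LOOKUP[98])  # → True False
```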

Quentin

John Clark

May 1, 2019, 4:39:03 PM
to everyth...@googlegroups.com
Searle's Chinese Room thought experiment is not just wrong, it's STUPID. I say this because it has 3 colossal flaws; just one would render it stupid, and 3 render it stupidity cubed:

1) It assumes that a small part of a system has all the properties of the entire system.

2) It assumes that slowing down consciousness would not make things strange and that strange things can not exist. Yes it's strange that a room considered as a whole can be conscious, but it would also be strange if the grey goo inside your head was slowed down by a factor of a hundred thousand million billion trillion.

3) This is the stupidest reason of the lot. Searle wants to prove that mechanical things may behave intelligently but only humans can be conscious. Searle starts by showing successfully that the Chinese Room does indeed behave intelligently, but then he concludes that no consciousness was involved in the operation of that intelligent room. How does he reach that conclusion? I will tell you. 

Searle assumes that mechanical things may behave intelligently but only humans can be conscious, and it is perfectly true that the little man is not aware of what's going on, therefore Searle concludes that consciousness was not involved in that intelligence. Searle assumes that if consciousness of Chinese exists anywhere in that room it can only be in the human and since the human is not conscious of Chinese he concludes consciousness was not involved. And by assuming the very thing he wants to prove he has only succeeded in proving that he's an idiot.

And now let me tell you about Clark's Chinese Room: You are a professor of Chinese Literature and are in a room with me and the great Chinese Philosopher and Poet Laozi. Laozi writes something in his native language on a paper and hands it to me. I walk 10 feet and give it to you. You read the paper and are impressed with the wisdom of the message and the beauty of its language. Now I tell you that I don't know a word of Chinese; can you find any deep philosophical implications from that fact? I believe Clark's Chinese Room is every bit as profound as Searle's Chinese Room. Not very.

John K clark

Brent Meeker

May 1, 2019, 6:48:47 PM
to everyth...@googlegroups.com


On 5/1/2019 12:34 PM, Terren Suydam wrote:

...

In the Chinese Room example the cybernetic characteristics of the thought experiment lack any true feedback mechanism. This is the case with most instances of software as we know it - e.g. traditional chess engines. There is something it is like to be them, but it's not anything we would recognize in terms of ongoing subjective awareness. One could argue that operating systems (including Mars Rovers) embody the cybernetic dynamics necessary for ongoing experience, but I'd guess that what it's like to be an operating system would be pretty alien.
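[To make the "cybernetic organization" idea concrete, here is a minimal sketch of the thermostat example as a feedback loop, in which the system's own output keeps shaping its future input. The function name and the numeric gains are my own illustrative choices, not anything from the post.]

```python
# Illustrative sketch only: a thermostat as a minimal cybernetic feedback loop.
# The control decision is fed back into the environment, which in turn
# changes what the sensor reads on the next cycle.

def thermostat_step(temperature, setpoint):
    """One control cycle: sense, compare, act."""
    heater_on = temperature < setpoint          # decision based on sensed state
    temperature += 0.5 if heater_on else -0.3   # environment responds to the action
    return temperature, heater_on

temperature, heater_on = 15.0, False
for _ in range(50):
    temperature, heater_on = thermostat_step(temperature, 20.0)

# The loop settles into a small oscillation around the setpoint: past outputs
# keep shaping future inputs, which is the feedback structure described above.
```

A chess engine or the rule-book in the Chinese Room lacks exactly this closed loop: its output does not come back around to alter its own future sensing.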

Yes, that's one of the things I find interesting about AI.  Human-like consciousness requires learning from memory, prediction and planning.  This means internal simulation of prospective actions.  But even within those conditions there could be a lot of variation.  For example, human memory involves a lot of confabulation, as shown by many experiments.  This obviously conserves memory, since only key things are actually stored in a narrative and much of a memory is reconstructed.  An AI Mars Rover wouldn't necessarily work this way.  Electronic memories can be much bigger and still have reasonable access times, so an AI might simply have everything recorded.  So what would it be like to be an AI Mars Rover with many more kinds of sensory systems and eidetic memory?
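[The contrast between confabulated and eidetic memory can be sketched in a few lines. This is purely my own toy illustration of the distinction, not anything from the post: one store keeps only salient events and reconstructs the rest at recall time, the other logs everything verbatim.]

```python
# Toy contrast: human-like memory stores key events and confabulates the gaps;
# a machine with cheap storage can simply record everything.

class ConfabulatingMemory:
    """Keeps only salient events; gaps are filled in at recall time."""
    def __init__(self):
        self.key_events = {}

    def store(self, t, event, salient=False):
        if salient:
            self.key_events[t] = event

    def recall(self, t):
        if t in self.key_events:
            return self.key_events[t]
        if not self.key_events:
            return None
        # Reconstruction: guess from the nearest stored event (confabulation).
        nearest = min(self.key_events, key=lambda k: abs(k - t))
        return f"probably {self.key_events[nearest]}"

class EideticMemory:
    """Records every event verbatim."""
    def __init__(self):
        self.log = {}

    def store(self, t, event, salient=False):
        self.log[t] = event

    def recall(self, t):
        return self.log.get(t)
```

Asked about a non-salient moment, the confabulating store returns a plausible reconstruction rather than the original event, while the eidetic store returns the event exactly.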

Brent


With biological brains, it's all about feedback and recursivity. Even small insects with rudimentary nervous systems are thoroughly recursive, feeding sensory data in and processing it continuously. So insect consciousness is much closer to our own than ordinary von Neumann-architecture data processing.

As nervous systems get more complex, feeding in more data and processing data in much more sophisticated ways, the consciousness involved would likewise be experienced in a richer way.

Humans, with our intricate conceptual, language-based self-models, achieve true self-consciousness. The self-model is a quantum leap forward, giving us the ability to say "I am". The ego gets a bad rap but it's responsible for our ability to notice ourselves and live within and create ongoing narratives about what we are, in relation to what we aren't.  This explains why ego-dissolving psychedelics lead to such profound changes in consciousness.

Terren

On Wed, May 1, 2019 at 3:02 PM Quentin Anciaux <allc...@gmail.com> wrote:


Le mer. 1 mai 2019 à 18:13, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> a écrit :
How is a computer conscious? Magic? Are you even aware of the Chinese Room argument?

Yes, and how is the Chinese Room not conscious? Because you have to associate it either with the dumb person acting as processor or with the rules? The Chinese Room, as a whole information-processing unit, is conscious. If you ask it, it will tell you so... Prove it is not.

Quentin

cloud...@gmail.com

unread,
May 1, 2019, 7:24:18 PM5/1/19
to Everything List
I would say that one could have a Jupiter-sized network of Intel® Core™ processors + whatever distributed program running on it, and it would not be conscious.

It is not composed of the kind of matter needed for consciousness, which could include biochemical alternatives.


@philipthrift


John Clark

unread,
May 1, 2019, 7:48:10 PM5/1/19
to everyth...@googlegroups.com
On Wed, May 1, 2019 at 7:24 PM <cloud...@gmail.com> wrote:

> I would say that one could have a Jupiter planet-sized network of Intel® Core™ processors + whatever distributed program running on it, and it will not be conscious. It is not composed of the kind of matter needed for consciousness, which could include biochemical alternatives.

Your theory, which you offer without a single particle of evidence, is that dry and hard things can be intelligent and even superintelligent but only wet and squishy things can be conscious. My theory, which has exactly as much supporting evidence as your theory, is that only people who wear a size 13 shoe are conscious. Guess what my shoe size is.

John K Clark


Brent Meeker

unread,
May 1, 2019, 8:10:03 PM5/1/19