On 26 Apr 2019, at 15:33, cloud...@gmail.com wrote:
AIs should have the same ethical protections as animals
John Basl is assistant professor of philosophy at Northeastern University in Boston
...
A puzzle and difficulty arises here because the scientific study of consciousness has not reached a consensus about what consciousness is, and how we can tell whether or not it is present. On some views – ‘liberal’ views – consciousness requires nothing but a certain type of well-organised information-processing, such as a flexible informational model of the system in relation to objects in its environment, with guided attentional capacities and long-term action-planning. We might be on the verge of creating such systems already. On other views – ‘conservative’ views – consciousness might require very specific biological features, such as a brain very much like a mammal brain in its low-level structural details: in which case we are nowhere near creating artificial consciousness.
It is unclear which type of view is correct or whether some other explanation will in the end prevail. However, if a liberal view is correct, we might soon be creating many subhuman AIs who will deserve ethical protection. There lies the moral risk.
Discussions of ‘AI risk’ normally focus on the risks that new AI technologies might pose to us humans, such as taking over the world and destroying us, or at least gumming up our banking system. Much less discussed is the ethical risk we pose to the AIs, through our possible mistreatment of them.
My 'conservative' view: information processing (alone) does not achieve experience (consciousness) processing.
On 26 Apr 2019, at 15:33, cloud...@gmail.com wrote:
AIs should have the same ethical protections as animals
John Basl is assistant professor of philosophy at Northeastern University in Boston
...
A puzzle and difficulty arises here because the scientific study of consciousness has not reached a consensus about what consciousness is, and how we can tell whether or not it is present. On some views – ‘liberal’ views – consciousness requires nothing but a certain type of well-organised information-processing, such as a flexible informational model of the system in relation to objects in its environment, with guided attentional capacities and long-term action-planning. We might be on the verge of creating such systems already. On other views – ‘conservative’ views – consciousness might require very specific biological features, such as a brain very much like a mammal brain in its low-level structural details: in which case we are nowhere near creating artificial consciousness.
It is unclear which type of view is correct or whether some other explanation will in the end prevail. However, if a liberal view is correct, we might soon be creating many subhuman AIs who will deserve ethical protection. There lies the moral risk.
Discussions of ‘AI risk’ normally focus on the risks that new AI technologies might pose to us humans, such as taking over the world and destroying us, or at least gumming up our banking system. Much less discussed is the ethical risk we pose to the AIs, through our possible mistreatment of them.
Humans are still the main threat to humans. The idea of giving human rights to AIs does not make much sense. It is part of the AIs' work to learn to defend themselves. We can be open-minded, and listen, but defending their rights can only threaten human rights, I would say. In the theology of the machine, it can be proved that hell is paved with good intentions … (amazingly enough, and accepting some definitions, of course).
My 'conservative' view: information processing (alone) does not achieve experience (consciousness) processing.
Mechanism makes you right on this, although it can depend on how information processing is defined. Consciousness is not in the processing, but in truth, or in the semantics related to that processing. The processing itself is only a relative concept, whereas consciousness is an absolute thing.
Bruno
Well... if you want to do word-play, you can word-play all day long if you want. I see that AI believers are experts in word-play. They endow their toy with all the human capacities in the world and stand in awe of their live object. Their little puppy is alive, intelligent, smart, beautiful, can play chess, can colonize the entire galaxy, lol. Probably too much loneliness and lack of genuine human interactions.
On Monday, 29 April 2019 21:54:47 UTC+3, cloud...@gmail.com wrote:
Now that is something programming language theorists would not agree with:
In programming language theory, semantics is the field concerned with the ... study of the meaning of programming languages.
@philipthrift
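(A minimal sketch of the distinction being appealed to here, assuming a toy arithmetic language; the AST classes and the meaning function below are invented for illustration, not taken from any real library. Syntax is the structure of a program text; its semantics is the object that structure denotes.)

```python
# A toy denotational semantics: map syntax (an AST) to meaning (a number).
# All class and function names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Num:            # a literal, e.g. the text "3"
    value: int

@dataclass
class Add:            # an addition node, e.g. the text "3 + 4"
    left: object
    right: object

def meaning(expr):
    """The semantic function: syntax in, denoted value out."""
    if isinstance(expr, Num):
        return expr.value
    if isinstance(expr, Add):
        return meaning(expr.left) + meaning(expr.right)
    raise TypeError("unknown syntactic form")

# The expression tree is syntax; the number 7 is its meaning.
assert meaning(Add(Num(3), Num(4))) == 7
```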
And I see that you have no rational response to any criticism. Only "It doesn't exist" and unsupported assertions about what can't be true.
Brent
On 4/30/2019 12:15 AM, 'Cosmin Visan' via Everything List wrote:
Well... if you want to do word-play, you can word-play all day long if you want. I see that AI believers are experts in word-play. They endow their toy with all the human capacities in the world and stand in awe of their live object. Their little puppy is alive, intelligent, smart, beautiful, can play chess, can colonize the entire galaxy, lol. Probably too much loneliness and lack of genuine human interactions.
On Monday, 29 April 2019 21:54:47 UTC+3, cloud...@gmail.com wrote:
Now that is something programming language theorists would not agree with:
In programming language theory, semantics is the field concerned with the ... study of the meaning of programming languages.
@philipthrift
On 29 Apr 2019, at 14:34, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:
There is no definition for such a thing.
It is just a nonsensical concept. It's as if you stepped in the mud and said: "Look! The mud information-processed the shape of my foot! The mud is so intelligent! He must have rights!!!"
On Monday, 29 April 2019 15:27:26 UTC+3, Bruno Marchal wrote:
it can depend on how information processing is defined.
On 29 Apr 2019, at 15:50, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:
Semantics means meaning, and meaning is something that exists in consciousness.
You cannot use that for any "programming".
On Monday, 29 April 2019 16:16:49 UTC+3, cloud...@gmail.com wrote:
Regarding "Consciousness is not in the processing, but in truth, or in the semantics related to that processing, ...": I address this in the next article. But my mode of thinking is that of an engineer, not a truth-seeker.
It is just a nonsensical concept. It's as if you stepped in the mud and said: "Look! The mud information-processed the shape of my foot! The mud is so intelligent! He must have rights!!!"
Not at all. Those will be defined by the notion of first person and, eventually, be related to machines through the mechanist hypothesis, and also through the self-referential discourse, including silence, of the universal machine.
Bruno
On 29 Apr 2019, at 15:50, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:
Semantics means meaning, and meaning is something that exists in consciousness.
No problem with this.
You cannot use that for any "programming".
But computer science is in large part the study of the relation between programs and their semantics. The machine which relates the two is the universal machine. If my computer were unable to associate some semantics to a program, this mail would never have been sent to you.
Bruno
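(One minimal way to picture the "universal machine" point, under the standard reading of a universal machine as an interpreter: a single fixed machine that takes any other program as data and realises that program's behaviour. The tiny instruction set below is invented for illustration.)

```python
# A toy universal machine: one fixed interpreter that is handed another
# program as data and produces that program's behaviour (its semantics).
# The instruction set ("SET", "ADD", "PRINT") is invented for illustration.

def run(program, env=None):
    env = {} if env is None else env
    for op, *args in program:
        if op == "SET":        # ("SET", "x", 5)  ->  env["x"] = 5
            name, value = args
            env[name] = value
        elif op == "ADD":      # ("ADD", "z", "x", "y")  ->  env["z"] = env["x"] + env["y"]
            dest, a, b = args
            env[dest] = env[a] + env[b]
        elif op == "PRINT":
            print(env[args[0]])
    return env

# The same interpreter (one machine) relates *any* such program to its meaning.
prog = [("SET", "x", 3), ("SET", "y", 4), ("ADD", "z", "x", "y"), ("PRINT", "z")]
run(prog)  # prints 7
```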
On 1 May 2019, at 12:16, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:
First you say that you have no problem with "semantics" meaning "meaning in consciousness", and one second later you talk about computers having semantics. What am I missing?
> How is a computer conscious? Magic?
> Are you even aware of the Chinese Room argument?
1) It assumes that a small part of a system has all the properties of the entire system.
2) It assumes that slowing down consciousness would not make things strange and that strange things cannot exist. Yes, it's strange that a room considered as a whole can be conscious, but it would also be strange if the grey goo inside your head was slowed down by a factor of a hundred thousand million billion trillion.
3) This is the stupidest reason of the lot. Searle wants to prove that mechanical things may behave intelligently but only humans can be conscious. Searle starts by showing successfully that the Chinese Room does indeed behave intelligently, but then he concludes that no consciousness was involved in the operation of that intelligent room. How does he reach that conclusion? I will tell you.
Searle assumes that mechanical things may behave intelligently but only humans can be conscious, and it is perfectly true that the little man is not aware of what's going on, therefore Searle concludes that consciousness was not involved in that intelligence. Searle assumes that if consciousness of Chinese exists anywhere in that room it can only be in the human and since the human is not conscious of Chinese he concludes consciousness was not involved. And by assuming the very thing he wants to prove he has only succeeded in proving that he's an idiot.
And now let me tell you about Clark's Chinese Room: You are a professor of Chinese Literature and are in a room with me and the great Chinese Philosopher and Poet Laozi. Laozi writes something in his native language on a paper and hands it to me. I walk 10 feet and give it to you. You read the paper and are impressed with the wisdom of the message and the beauty of its language. Now I tell you that I don't know a word of Chinese; can you find any deep philosophical implications from that fact? I believe Clark's Chinese Room is every bit as profound as Searle's Chinese Room. Not very.
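(For concreteness, the rulebook the room's operator follows can be caricatured as a lookup table: pure symbol-matching with no access to meaning. The entries below are invented placeholders, not real Chinese; whether running such a table involves any understanding is exactly what the argument disputes.)

```python
# Caricature of the room: a rulebook of symbol-to-symbol rewrites. The
# "operator" (the lookup below) manipulates tokens without consulting
# their meaning. The entries are invented placeholders, not real Chinese.
RULEBOOK = {
    "incoming-squiggle-1": "outgoing-squiggle-A",
    "incoming-squiggle-2": "outgoing-squiggle-B",
}

def room(symbols):
    # Pure pattern matching: nothing here knows what any token means.
    return RULEBOOK.get(symbols, "outgoing-squiggle-default")

print(room("incoming-squiggle-1"))  # -> "outgoing-squiggle-A"
```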
I would argue for "pancyberpsychism" (I'm no good at naming - is there a name for that already?), which is to say that there is something it is like to do information processing of any kind. However, the quality of the consciousness involved in that processing is related to its dynamics. So banging on a rock involves a primitive form of information processing, as vibrations ripple through the rock - there is something it is like for that rock to be banged on. For ongoing consciousness, some sort of feedback loop must be involved. A thermostat would be a primitive example of this, or a simple oscillating electric circuit. The main idea is that consciousness is associated with cybernetic organization and has nothing to do with substrate, which might be material or virtual.
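(A minimal sketch of the kind of feedback loop meant here, assuming a simple bang-bang thermostat; all constants and names are invented for illustration.)

```python
# A minimal cybernetic feedback loop: a bang-bang thermostat. The system's
# output (heating) feeds back into its input (temperature), closing the loop.
# All constants are invented for illustration.

SETPOINT = 20.0   # target temperature, degrees C

def step(temperature, heater_on):
    # Environment: the room warms when the heater is on, cools otherwise.
    temperature += 0.5 if heater_on else -0.3
    # Controller: compare the sensed temperature with the setpoint (feedback).
    heater_on = temperature < SETPOINT
    return temperature, heater_on

temp, heater = 15.0, True
for _ in range(30):
    temp, heater = step(temp, heater)
print(round(temp, 1))  # hovers near the 20.0 setpoint
```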
In the Chinese Room example the cybernetic characteristics of the thought experiment lack any true feedback mechanism. This is the case with most instances of software as we know it - e.g. traditional chess engines. There is something it is like to be them, but it's not anything we would recognize in terms of ongoing subjective awareness. One could argue that operating systems (including Mars rovers) embody the cybernetic dynamics necessary for ongoing experience, but I'd guess that what it's like to be an operating system would be pretty alien.
With biological brains, it's all about feedback and recursivity. Small insects with rudimentary nervous systems are totally recursive, feeding sensory data in and processing it continuously. So insect consciousness is much closer to our own than ordinary von Neumann-architecture data processing.
As nervous systems get more complex, feeding in more data and processing it in much more sophisticated ways, the consciousness involved would likewise be experienced in a richer way.
Humans, with our intricate conceptual, language-based self-models, achieve true self-consciousness. The self-model is a quantum leap forward, giving us the ability to say "I am". The ego gets a bad rap but it's responsible for our ability to notice ourselves and live within and create ongoing narratives about what we are, in relation to what we aren't. This explains why ego-dissolving psychedelics lead to such profound changes in consciousness.
Terren
On Wed, May 1, 2019 at 3:02 PM Quentin Anciaux <allc...@gmail.com> wrote:
On Wed, 1 May 2019 at 18:13, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:
How is a computer conscious? Magic? Are you even aware of the Chinese Room argument?
Yes, and how is the Chinese Room not conscious? Because you have to associate it either with the dumb person acting as the processor or with the rules? The Chinese Room as a whole information-processing unit is conscious. If you ask it, it will tell you so... Prove it is not.
Quentin