On 26 Apr 2019, at 15:33, cloud...@gmail.com wrote:

AIs should have the same ethical protections as animals
John Basl is assistant professor of philosophy at Northeastern University in Boston
...
A puzzle and difficulty arises here because the scientific study of consciousness has not reached a consensus about what consciousness is, and how we can tell whether or not it is present. On some views – ‘liberal’ views – consciousness requires nothing but a certain type of well-organised information-processing, such as a flexible informational model of the system in relation to objects in its environment, with guided attentional capacities and long-term action-planning. We might be on the verge of creating such systems already. On other views – ‘conservative’ views – consciousness might require very specific biological features, such as a brain very much like a mammal brain in its low-level structural details: in which case we are nowhere near creating artificial consciousness.
It is unclear which type of view is correct or whether some other explanation will in the end prevail. However, if a liberal view is correct, we might soon be creating many subhuman AIs who will deserve ethical protection. There lies the moral risk.
Discussions of ‘AI risk’ normally focus on the risks that new AI technologies might pose to us humans, such as taking over the world and destroying us, or at least gumming up our banking system. Much less discussed is the ethical risk we pose to the AIs, through our possible mistreatment of them.
My 'conservative' view: information processing (alone) does not achieve experience (consciousness) processing.
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To post to this group, send email to everyth...@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
On 26 Apr 2019, at 15:33, cloud...@gmail.com wrote:

AIs should have the same ethical protections as animals
...

Humans are still the main threat to humans. The idea of giving human rights to AI does not make much sense. It is part of the AIs' work to learn to defend themselves. We can be open-minded and listen, but defending their rights can only threaten human rights, I would say.
In the theology of the machine, it can be proved that hell is paved with good intentions … (amazingly enough, and accepting some definitions, of course).

My 'conservative' view: information processing (alone) does not achieve experience (consciousness) processing.

Mechanism makes you right on this, although it can depend on how information processing is defined. Consciousness is not in the processing, but in truth, or in the semantics related to that processing. The processing itself is only a relative concept, whereas consciousness is an absolute thing.

Bruno
Well... if you want to play word games, you can word-play all day long as you want. I see that AI believers are experts at word-play. They endow their toy with all the human capacities in the world and they stand in awe of their live object. Their little puppy is alive, intelligent, smart, beautiful, can play chess, can colonize the entire galaxy, lol. Probably too much loneliness and lack of genuine human interactions.
On Monday, 29 April 2019 21:54:47 UTC+3, cloud...@gmail.com wrote:
Now that is something programming language theorists would not agree with:
In programming language theory, semantics is the field concerned with the ... study of the meaning of programming languages.
@philipthrift
And I see that you have no rational response to any criticism. Only "It doesn't exist" and unsupported assertions about what can't be true.
Brent
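The programming-language sense of "semantics" quoted above can be made concrete with a toy denotational semantics: a compositional map from syntax to meanings. This is only an illustrative sketch (the AST encoding and all names are invented for this example):

```python
# A toy denotational semantics: each syntactic expression is mapped
# to its "meaning" (here, a number), compositionally.

def denote(expr):
    """Return the denotation (meaning) of a tiny arithmetic AST.

    Expressions are tuples: ('lit', n), ('add', e1, e2), ('mul', e1, e2).
    """
    tag = expr[0]
    if tag == 'lit':    # a literal denotes itself
        return expr[1]
    if tag == 'add':    # the meaning of a sum is the sum of the meanings
        return denote(expr[1]) + denote(expr[2])
    if tag == 'mul':    # the meaning of a product is the product of the meanings
        return denote(expr[1]) * denote(expr[2])
    raise ValueError(f"unknown expression: {expr!r}")

# (2 + 3) * 4 denotes 20
program = ('mul', ('add', ('lit', 2), ('lit', 3)), ('lit', 4))
print(denote(program))  # -> 20
```

The point of the definition quoted above is exactly this mapping: "meaning" here is a mathematical object assigned to syntax, whatever one thinks that implies about consciousness.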
On 29 Apr 2019, at 14:34, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:

There is no definition for such a thing. It is just a non-sensical concept. It's as if stepping in the mud and saying: “Look! The mud information-processed the shape of my foot! The mud is so intelligent! He must have rights!!!”
On Monday, 29 April 2019 15:27:26 UTC+3, Bruno Marchal wrote:

it can depend on how information processing is defined.
On 29 Apr 2019, at 15:50, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:

Semantics means meaning, and meaning is something that exists in consciousness. You cannot use that for any “programming”.
On Monday, 29 April 2019 16:16:49 UTC+3, cloud...@gmail.com wrote:

On “Consciousness is not in the processing, but in truth, or in the semantic related to that processing, ...”: I address this in the next article. But my mode of thinking is that of an engineer, not a truth-seeker.
It is just a non-sensical concept. It's as if stepping in the mud and saying: “Look! The mud information-processed the shape of my foot! The mud is so intelligent! He must have rights!!!”

Not at all. Those will be defined by the notion of the first person, and eventually be related to the machine through the mechanist hypothesis, and also through the self-referential discourse, including silence, of the universal machine.

Bruno
On 29 Apr 2019, at 15:50, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:

Semantics means meaning, and meaning is something that exists in consciousness.

No problem with this.

You cannot use that for any “programming”.

But computer science is in large part the study of the relation between programs and their semantics. The machine which relates the two is the universal machine. If my computer were unable to associate some semantics to a program, this mail would never have been sent to you.

Bruno
On 1 May 2019, at 12:16, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:
First you say that you have no problem with “semantics” meaning “meaning in consciousness”, and one second later you talk about computers having semantics. What am I missing?
How is a computer conscious? Magic? Are you even aware of the Chinese Room argument?
1) It assumes that a small part of a system has all the properties of the entire system.
2) It assumes that slowing down consciousness would not make things strange, and that strange things cannot exist. Yes, it's strange that a room considered as a whole can be conscious, but it would also be strange if the grey goo inside your head were slowed down by a factor of a hundred thousand million billion trillion.
3) This is the stupidest reason of the lot. Searle wants to prove that mechanical things may behave intelligently but only humans can be conscious. Searle starts by showing successfully that the Chinese Room does indeed behave intelligently, but then he concludes that no consciousness was involved in the operation of that intelligent room. How does he reach that conclusion? I will tell you.
Searle assumes that mechanical things may behave intelligently but only humans can be conscious, and it is perfectly true that the little man is not aware of what's going on, therefore Searle concludes that consciousness was not involved in that intelligence. Searle assumes that if consciousness of Chinese exists anywhere in that room it can only be in the human and since the human is not conscious of Chinese he concludes consciousness was not involved. And by assuming the very thing he wants to prove he has only succeeded in proving that he's an idiot.
And now let me tell you about Clark's Chinese Room: You are a professor of Chinese Literature and are in a room with me and the great Chinese Philosopher and Poet Laozi. Laozi writes something in his native language on a paper and hands it to me. I walk 10 feet and give it to you. You read the paper and are impressed with the wisdom of the message and the beauty of its language. Now I tell you that I don't know a word of Chinese; can you find any deep philosophical implications from that fact? I believe Clark's Chinese Room is every bit as profound as Searle's Chinese Room. Not very.
I would argue for "pancyberpsychism" (I'm no good at naming - is there a name for that already?) which is to say that there is something it is like to do information processing of any kind. However, the quality of the consciousness involved in that processing is related to its dynamics. So banging on a rock involves a primitive form of information processing, as vibrations ripple through the rock - there is something it is like for that rock to be banged on. For ongoing consciousness, some sort of feedback loop must be involved. A thermostat would be a primitive example of this, or a simple oscillating electric circuit. The main idea is that consciousness is associated with cybernetic organization and has nothing to do with substrate, which might be material or virtual.
In the Chinese Room example, the cybernetic characteristics of the thought experiment lack any true feedback mechanism. This is the case with most instances of software as we know it - e.g. traditional chess engines. There is something it is like to be them, but it's not anything we would recognize in terms of ongoing subjective awareness. One could argue that operating systems (including Mars Rovers) embody the cybernetic dynamics necessary for ongoing experience, but I'd guess that what it's like to be an operating system would be pretty alien.
With biological brains, it's all about feedback and recursivity. Small insects with rudimentary nervous systems are totally recursive, feeding sensory data in and processing it continuously. So insect consciousness is much closer to our own than ordinary von Neumann-architecture data processing.
As nervous systems get more complex, feeding in more data and processing data in much more sophisticated ways, the consciousness involved would likewise be experienced in a richer way.
Humans, with our intricate conceptual, language-based self-models, achieve true self-consciousness. The self-model is a quantum leap forward, giving us the ability to say "I am". The ego gets a bad rap but it's responsible for our ability to notice ourselves and live within and create ongoing narratives about what we are, in relation to what we aren't. This explains why ego-dissolving psychedelics lead to such profound changes in consciousness.
Terren
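Terren's thermostat example can be sketched concretely. Below is a minimal bang-bang feedback loop in which the controller's output (heater on/off) feeds back into its next input (the room temperature); the setpoint, drift, and heating rates are arbitrary numbers chosen only for illustration:

```python
# A minimal thermostat: a bang-bang feedback loop. The controller's
# output (heater on/off) alters the very quantity it senses next step.

def step(temp, heater_on, setpoint=20.0, drift=-0.5, heat=1.5):
    """One time-step: the room cools by `drift`, gains `heat` if the
    heater is on, and the thermostat switches based on the result."""
    temp = temp + drift + (heat if heater_on else 0.0)
    heater_on = temp < setpoint   # the feedback decision
    return temp, heater_on

temp, heater_on = 15.0, True
history = []
for _ in range(20):
    temp, heater_on = step(temp, heater_on)
    history.append(round(temp, 1))

print(history)  # the temperature settles into an oscillation around 20.0
```

The loop is the point: unlike a feed-forward pipeline, the system's output changes its own next input, which is the "cybernetic organization" the post appeals to.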
On Wed, May 1, 2019 at 3:02 PM Quentin Anciaux <allc...@gmail.com> wrote:
On Wed, 1 May 2019 at 18:13, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:
How is a computer conscious? Magic? Are you even aware of the Chinese Room argument?
Yes, and how is the Chinese Room not conscious? Because you have to associate it either with the dumb person acting as processor or with the rules? The Chinese Room, as a whole information-processing unit, is conscious. If you ask it, it will tell you so... Prove it is not.
Quentin
On Wednesday, May 1, 2019 at 7:10:03 PM UTC-5, Brent wrote:
On 5/1/2019 4:24 PM, cloud...@gmail.com wrote:
> I would say that one could have a Jupiter planet-sized network of
> Intel® Core™ processors + whatever distributed program running on it,
> and it will not be conscious.
Based on what? Human hubris?
Brent
A racist is [via Google definition] "a person who shows or feels discrimination or prejudice against people of other races, or who believes that a particular race is superior to another".
I'm not that, but I do think that different types of matter have different capabilities (as materials scientists do).
I am a materialist.
On the other hand, the new materialists [ https://lareviewofbooks.org/article/mirroring-and-mattering-science-politics-and-the-new-feminist-materialism/ ]
reconceptualize “the terms of social theory, such that the social is seen as a part of, rather than distinct from, the natural, an undertaking that requires a rethinking of the natural too.” In this newly monist view, the proper response to the threat of biological determinism — the claim that biology is destiny or that our fate lies in our genes — is not to reject the natural sciences and assert the primacy of the social, nor indeed to treat the world as text, but rather to grasp the inseparability of the “bio” and the “social,” as captured in the word “biosocial.” In place of a linguistic process of representing the world, the new materialism proposes “mattering” as the generative process through which matter comes into being.
Material stuff — bodies, tools, objects — are understood as imbued with vitality and dynamic force.
This is a philosophical claim, but one that entails a political sensibility. And while materialism is a venerable school of thought, this conception of “mattering” seems, as I have suggested, very much of the moment.
@philipthrift
On 1 May 2019, at 19:13, 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:

How is a computer conscious? Magic? Are you even aware of the Chinese Room argument?
Are you even aware of what the phenomenon of Understanding is about?
Are you aware that consciousnesses work by Understanding?
Namely, bringing new qualia into existence out of nothing?
Are you aware that a computer is just a collection of billiard balls banging into each other?
Are you aware that consciousness is a unified entity, while a "computer" is not?
And so on. It seems to me that you have no understanding of consciousness whatsoever. You just randomly play with concepts and live under the impression that you are talking about consciousness, when in fact, since you have no understanding whatsoever of even elementary phenomenological facts, you are actually talking about anything but consciousness.
For example, are you aware that time is just a quale in consciousness, so that there is no "physical time"? If there is no physical time, your computer becomes once more what it has always been: a fantasy.
I would really like you to answer all of the above questions, to see for yourself your own ignorance regarding consciousness.
On Wednesday, 1 May 2019 19:41:14 UTC+3, Bruno Marchal wrote:

That a computer can be conscious.
On 1 May 2019, at 21:41, Quentin Anciaux <allc...@gmail.com> wrote:

Map lookup is a valid implementation for any program you can conceive, albeit a very inefficient one…
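Quentin's point can be illustrated directly: over any finite domain, a computed function and a precomputed lookup table are behaviorally indistinguishable, trading computation for memory. A sketch, with the function and domain invented for illustration:

```python
# Any function over a finite domain can be replaced by a lookup table:
# precompute every answer once, then "run the program" by table lookup.

def square(n):
    """The 'real' computation."""
    return n * n

# Build the table by running the computation once over the whole domain.
DOMAIN = range(100)
TABLE = {n: square(n) for n in DOMAIN}

def square_by_lookup(n):
    """Behaviorally identical to square() on DOMAIN, but does no arithmetic."""
    return TABLE[n]

# From the outside, the two implementations cannot be told apart:
assert all(square(n) == square_by_lookup(n) for n in DOMAIN)
```

This is why the Chinese Room's "rule book" is not obviously absurd: a big enough table is a valid, if wildly inefficient, implementation of any finite behavior.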
On 5/2/2019 12:58 AM, cloud...@gmail.com wrote:
I'm not that, but I do think that different types of matter have different capabilities (as materials scientists do).
Is there a type that is different from quarks and leptons?
I am a materialist.
Except you imbue matter with properties that are undetectable. You place emphasis on matter having experience, but that seems like a half-measure to me. Why not go all the way and say that it has libertarian free will too.
Brent
On Thursday, May 2, 2019 at 10:52:50 AM UTC-5, Brent wrote:
Is there a type that is different from quarks and leptons?
Apparently matter is not "reducible" to just the physics of a couple of particles.
Phases of matter are a mystery:
https://news.stonybrook.edu/oncampus/simons-center-lecture-on-new-electronic-phases-of-matter-may-8/
No one can say, except via the certainty of fundamentalist religion, that all of chemistry, biochemistry, biology, and neurobiology can be reduced to the physics of a few particles.
I am a materialist.
Except you imbue matter with properties that are undetectable. You place emphasis on matter having experience, but that seems like a half-measure to me. Why not go all the way and say that it has libertarian free will too.
Consciousness itself - my 'self' - is detected.
Galen Strawson (on free will):
"[T]he best way to try to achieve a comprehensive understanding of the free will debate, and of the reason why it is interminable, is to study the thing that keeps it going — our experience of freedom. Because this experience is something real, complex, and important, even if free will itself is not real. Because it may be that the experience of freedom is really all there is, so far as free will is concerned.* [footnote *] It may then be said that free will is real after all, because the reality of free will resides precisely in the reality of the experience of being free."
(on consciousness): “Consciousness Never Left”
"... So there is no mystery of consciousness. What we do not understand, what we find a mystery [(that is, matter)], is how conscious experience can be simply a matter of goings-on in the brain. But this is not because we do not know what consciousness is. It is because we do not know how to relate the things we know about the brain, when we use the language of physics and neurophysiology, to the things we know about the brain simply in having conscious experience – whose nature we know simply in having it."
@philipthrift
On 5/2/2019 11:39 AM, cloud...@gmail.com wrote:
Apparently matter is not "reducible" to just the physics a couple of particles.
Then you're not a materialist. You think there is matter plus something else, that everyone calls "mind", but you're going to call it "matter" and add it to everyone else's list of matter so you can still call yourself a materialist.
Brent
On Thursday, May 2, 2019 at 5:37:26 PM UTC-5, Brent wrote:
Then you're not a materialist. You think there is matter plus something else, that everyone calls "mind", but you're going to call it "matter" and add it to everyone else's list of matter so you can still call yourself a materialist.
But everything reducing to the physics of particles is thought of as physicalism (not materialism):

Physicalism and materialism

Reductive physicalism...is normally assumed to be incompatible with panpsychism. Materialism, if held to be distinct from physicalism, is compatible with panpsychism insofar as mental properties are attributed to physical matter, which is the only basic substance.
I think that is right. But when you consider some simplified cases, e.g. a computation written out on paper (or Bruno's movie graph) it becomes apparent that consciousness must ultimately refer to other things.
Much is made of "self-awareness" but this is usually just having an internal model of one's body, or social standing or some other model of the self. It is not consciousness of consciousness...that is only a temporal reflection: "I was conscious just now."
In general terms we could say consciousness is awareness of the environment, where that includes one's body. Damasio identifies emotions as awareness of the body's state. The point is that the stuff of which we are aware, and about which we find agreement with other people's awareness, is what we infer to be the physical world. It might be possible to be conscious in some sense without a physical world, but it would be qualitatively different.
If ὕ – the first Greek letter of “hyle”, upsilon (υ) with the diacritics dasia and oxia (U+1F55) – is used as the symbol for matter, φ (phi) for the physical, and ψ (psi) for the psychical, then

ὕ = φ + ψ

(i.e., the combination of physical and psychical properties is a more complete view of what matter is). The physical is the (quantitative) behavioral aspect of matter – the kind that is formulated in mathematical language in current physics, for example – whereas the psychical is the (qualitative) experiential aspect of matter, at various levels, from brains on down. There is no reason in principle for only φ to be considered by science and for ψ to be ignored.
On Fri, May 3, 2019 at 1:10 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
I think that is right. But when you consider some simplified cases, e.g. a computation written out on paper (or Bruno's movie graph) it becomes apparent that consciousness must ultimately refer to other things.
Right, the movie graph argument shows that consciousness doesn't supervene on physical computation. Nevertheless, the character of my consciousness still corresponds with the kind of cybernetic system implemented by e.g. my brain and body, as instantiated by the infinity of programs going through my state.
Much is made of "self-awareness" but this is usually just having an internal model of one's body, or social standing or some other model of the self. It is not consciousness of consciousness...that is only a temporal reflection: "I was conscious just now."
I see it a little differently. The self-model/ego is a higher-order construct that organizes the system in a holistic way.
We take this for granted - it's the water we swim in - but our minds are radically re-organized as children by the taught narrative that we have an identity
All software that has ever run has run on computers made of materials and assembled in factories. There is no spiritual/heavenly realm - as far as I know - where software is running. Can you show me such a place? Have you seen it?
If "consciousness doesn't supervene on physical [or material] computation", then does that mean there is one realm for (A) consciousness and another for (B) physical [or material] computation?
On 5/3/2019 11:44 AM, Terren Suydam wrote:
On Fri, May 3, 2019 at 1:10 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
I think that is right. But when you consider some simplified cases, e.g. a computation written out on paper (or Bruno's movie graph) it becomes apparent that consciousness must ultimately refer to other things.
Right, the movie graph argument shows that consciousness doesn't supervene on physical computation. Nevertheless, the character of my consciousness still corresponds with the kind of cybernetic system implemented by e.g. my brain and body, as instantiated by the infinity of programs going through my state.
What makes it "your state"? It's just a bunch of programs. Why those programs and not others?
Much is made of "self-awareness" but this is usually just having an internal model of one's body, or social standing or some other model of the self. It is not consciousness of consciousness...that is only a temporal reflection: "I was conscious just now."
I see it a little differently. The self-model/ego is a higher-order construct that organizes the system in a holistic way.
? That sounds like a kind of dualism. You're postulating something that creates a "higher-order construct". If you're following Bruno's idea, things have to just come out of the UD threads. There's nothing to create anything more.
We take this for granted - it's the water we swim in - but our minds are radically re-organized as children by the taught narrative that we have an identity
You don't have to teach a kid he has an identity. He knows who's hungry. He has a viewpoint.
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
On 5/3/2019 12:00 PM, cloud...@gmail.com wrote:
If "consciousness doesn't supervene on physical [or material] computation", then does that mean there is one realm for (A) consciousness and another for (B) physical [or material] computation?
No, the theory is that all possible computations (the UD) exist and they instantiate all conscious thoughts, including those we call perception of an external reality. There isn't any more to reality; it's just the statistical regularities among the different threads of the UD. At least that's Bruno's idea.
Brent
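The dovetailing that Brent's description appeals to is a concrete, well-known algorithm: interleave the execution of an unbounded enumeration of programs so that every program eventually runs for every number of steps. Here is a toy sketch in Python (my own illustration, with stand-in "programs" as generators; it is not Bruno's actual UD construction):

```python
# Toy universal-dovetailer sketch: interleave all programs so each one
# eventually gets arbitrarily many steps of execution.

from itertools import count

def make_program(i):
    """Stand-in for the i-th program: a generator yielding its successive states."""
    def program():
        for step in count():
            yield (i, step)  # the program's state after `step` steps
    return program()

def dovetail(phases):
    """Phase n: start program n, then advance programs 0..n by one step each."""
    programs = []
    trace = []  # the interleaved "threads of the UD", in execution order
    for n in range(phases):
        programs.append(make_program(n))
        for p in programs:
            trace.append(next(p))
    return trace

trace = dovetail(3)
# Program 0 has run 3 steps, program 1 has run 2, program 2 has run 1;
# given enough phases, every program reaches every step count.
```

The point of the zig-zag schedule is that no single non-halting program can starve the others, which is what lets a single process "run" all computations.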
On Friday, May 3, 2019 at 3:19:00 PM UTC-5, Jason wrote:
On Thu, May 2, 2019 at 2:58 AM <cloud...@gmail.com> wrote:
On Wednesday, May 1, 2019 at 7:10:03 PM UTC-5, Brent wrote:
On 5/1/2019 4:24 PM, cloud...@gmail.com wrote:
> I would say that one could have a Jupiter planet-sized network of
> Intel® Core™ processors + whatever distributed program running on it,
> and it will not be conscious.
Based on what? Human hubris?
Brent

A racist is [via Google definition] "a person who shows or feels discrimination or prejudice against people of other races, or who believes that a particular race is superior to another". I'm not that, but I do think that different types of matter have different capabilities (as materials scientists do).

Are you familiar with the Fading Qualia thought experiment: http://www.consc.net/papers/qualia.html ? If so I am very interested to know what your conclusions on it are. I.e., what would someone feel/experience/say as their bio neurons are gradually replaced with artificial silicon neurons?

Jason

I've found David Chalmers a bit hard to digest. He seems to keep jumping around on what he thinks consciousness is (from the 1990s to the soon-to-be 2020s).
As Philip Goff said (via a Twitter response), he has one type of view on panpsychism or consciousness Monday-Wednesday and another the rest of the week.

Synthetic neurons (polymer-based) are in the science news of course. Replacing one's original neurons with these (with similar chemical abilities) seems doable.
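The replacement scenario in the Fading Qualia argument can be caricatured in a few lines of code. This is a toy model of my own (not Chalmers's formalism): a tiny "network" whose units are swapped one at a time for functionally identical replacements, checking at every stage that the input-output behaviour never changes:

```python
# Toy Fading-Qualia replacement: swap units one by one for functional
# duplicates and verify the system's behaviour is preserved throughout.

def bio_unit(x):
    return 1 if x >= 0.5 else 0      # original "biological" neuron

def silicon_unit(x):
    return 1 if x >= 0.5 else 0      # replacement computing the same function

def run(network, inputs):
    return [unit(x) for unit, x in zip(network, inputs)]

inputs = [0.1, 0.7, 0.4, 0.9]
network = [bio_unit] * 4
baseline = run(network, inputs)

# Gradual replacement: behaviour is identical at every intermediate stage.
for i in range(len(network)):
    network[i] = silicon_unit
    assert run(network, inputs) == baseline
```

The philosophical dispute is of course about whether preserved input-output behaviour entails preserved experience; the sketch only shows what "functionally identical replacement" means operationally.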
> All software that has ever run has run on computers made of materials and assembled in factories.
> I don't believe in the "functional equivalence" principle
On Fri, May 3, 2019 at 4:19 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 5/3/2019 11:44 AM, Terren Suydam wrote:
On Fri, May 3, 2019 at 1:10 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
I think that is right. But when you consider some simplified cases, e.g. a computation written out on paper (or Bruno's movie graph) it becomes apparent that consciousness must ultimately refer to other things.
Right, the movie graph argument shows that consciousness doesn't supervene on physical computation. Nevertheless, the character of my consciousness still corresponds with the kind of cybernetic system implemented by e.g. my brain and body, as instantiated by the infinity of programs going through my state.
What makes it "your state"? It's just a bunch of programs. Why those programs and not others?
It's the set of programs that implements the body/brain used to construct my inner world.
Much is made of "self-awareness" but this is usually just having an internal model of one's body, or social standing or some other model of the self. It is not consciousness of consciousness...that is only a temporal reflection: "I was conscious just now."
I see it a little differently. The self-model/ego is a higher-order construct that organizes the system in a holistic way.
? That sounds like a kind of dualism. You're postulating something that creates a "higher-order construct". If you're following Bruno's idea, things have to just come out of the UD threads. There's nothing to create anything more.
For the self-image construct, I mean 'construct' in the same way that anything we learn is a construct. The self-image represents a higher-order construct on top of the types of constructs that, say, a dog might employ. A dog has a self-image of a certain type, but with humans (for whom I'll call the self-image 'ego' to differentiate from animal self-image), the ego's construction is conceptual and requires language. The ego is a narrative, and that narrative acts to organize the system as a whole.
We take this for granted - it's the water we swim in - but our minds are radically re-organized as children by the taught narrative that we have an identity
You don't have to teach a kid he has an identity. He knows who's hungry. He has a viewpoint.
Just like a dog. But a kid knows his name (learned) and can answer the question, "why did you do that?". The answer to that question is also largely learned. We are told who to be, what's right, wrong, appropriate, taboo, etc., for the culture we grow up in. IOW why I do something is filtered through learned cultural constructs. Most of the time the answer amounts to a justification in terms of what's appropriate, logical, or some other descriptor that benefits me in some way relative to the implicit values I'm socialized to. This form of self-image is of a higher order than whatever self-image my dog has.
> As you see in the article on functionalism: "Functionalism developed largely as an alternative to identity theory ..." which is at least in the same ballpark as my view.
What makes it "your state"? It's just a bunch of programs. Why those programs and not others?
It's the set of programs that implements the body/brain used to construct my inner world.
But that doesn't explain why there is such a thing as "your inner world" that is separate from "my inner world". Why don't the programs produce overlapping or mixing "inner worlds"?
Just like a dog. But a kid knows his name (learned) and can answer the question, "why did you do that?". The answer to that question is also largely learned. We are told who to be, what's right, wrong, appropriate, taboo, etc., for the culture we grow up in. IOW why I do something is filtered through learned cultural constructs. Most of the time the answer amounts to a justification in terms of what's appropriate, logical, or some other descriptor that benefits me in some way relative to the implicit values I'm socialized to. This form of self-image is of a higher order than whatever self-image my dog has.
I don't disagree with any of that, but I don't see that any of it is entailed by there being the infinite programs of the UD.
Brent
> It seems people will remain in the delusion that software or programming in a conventional computer device - even with many processors - will achieve consciousness.
> Searle's Chinese Room argument still does apply here, as anyone should clearly be able to see.
--
What I'm suggesting draws on both functionalism and identity theory. It's functional in the sense that the constitutive aspect of cybernetics is entirely functional.
> This obviously has nothing to do with Searle's argument