How can we know if a robot is conscious?
On Wed, 10 Jun 2020 at 03:08, Jason Resch <jason...@gmail.com> wrote:
For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea:
"How can we know if a robot is conscious?"
Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence. Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains. Given that computationalism implies functional equivalence, identical inputs yield identical internal behavior (nerve activations, etc.) and identical outputs, in terms of muscle movement, facial expressions, and speech.
If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.
Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?
Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?
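Jason's black-box setup can be made concrete with a toy sketch, assuming (purely for illustration) that each "brain" is a deterministic function from stimulus to verbal report. The function names and stimulus encoding are invented for this sketch; the point is only that a third-person test which sees nothing but behaviour yields identical transcripts for the two systems.

```python
# Toy sketch: two "brains" with identical input/output behaviour are
# indistinguishable to any third-person test that only observes behaviour.
# All names and the stimulus encoding here are illustrative assumptions.

def biological_brain(stimulus: str) -> str:
    # Stand-in for the biological system's verbal report.
    return "I feel pain" if "nociceptor" in stimulus else "I feel nothing"

def emulated_brain(stimulus: str) -> str:
    # Exact functional emulation: same mapping, different substrate.
    return "I feel pain" if "nociceptor" in stimulus else "I feel nothing"

def third_person_test(brain, stimuli):
    """Any objective test reduces to a function of observed reports."""
    return [brain(s) for s in stimuli]

stimuli = ["nociceptor burst in the back", "light touch", "silence"]
assert third_person_test(biological_brain, stimuli) == \
       third_person_test(emulated_brain, stimuli)
# Identical transcripts: behaviour alone cannot separate the two.
```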
We can’t know if a particular entity is conscious,
but we can know that if it is conscious, then a functional equivalent, as you describe, is also conscious.
--
Stathis Papaioannou
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAH%3D2ypXRHEW6PSnb2Bj2vf1RbQ6CoLFzCoKAHxgJkXTsfg%3DWyw%40mail.gmail.com.
But I think science/technology can go a lot further. It can look at the information flow: where memory is, how it is formed and accessed, and whether this matters in the action of the entity. It can look at the decision processes: are there separate competing modules (as Dennett hypothesizes) or is there a global workspace, and again, does it make a difference? What does it take to make the entity act happy, sad, thoughtful, bored, etc.?
On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:
We can’t know if a particular entity is conscious,
If the term means anything, you can know one particular entity is conscious.
but we can know that if it is conscious, then a functional equivalent, as you describe, is also conscious.
So any entity functionally equivalent to yourself, you must know is conscious. But "functionally equivalent" is vague, ambiguous, and certainly needs qualifying by environment and other factors. Is a dolphin functionally equivalent to me? Not in swimming.
On Wed, 10 Jun 2020 at 09:15, Jason Resch <jason...@gmail.com> wrote:
On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou <stat...@gmail.com> wrote:
We can’t know if a particular entity is conscious, but we can know that if it is conscious, then a functional equivalent, as you describe, is also conscious. This is the subject of David Chalmers’ paper:
Chalmers' argument is that if a different brain is not conscious, then somewhere along the way we get either suddenly disappearing or fading qualia, which I agree are philosophically distasteful.
But what if someone is fine with philosophical zombies and suddenly disappearing qualia? Is there any impossibility proof for such things?
Philosophical zombies are less problematic than partial philosophical zombies. Partial philosophical zombies would render the idea of qualia absurd, because it would mean that we might be completely blind, for example, without realising it.
As an absolute minimum, although we may not be able to test for or define qualia, we should know if we have them. Take this requirement away, and there is nothing left.
Suddenly disappearing qualia are logically possible, but it is difficult to imagine how they could work. We would be normally conscious while our neurons were being replaced, but when one special glutamate receptor in a special neuron in the left parietal lobe was replaced, or when exactly 35.54876% replacement of all neurons was reached, the internal lights would suddenly go out.
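The gradual-replacement scenario can be sketched as a toy program (the unit structure and the io function are purely illustrative assumptions, not a model of any real neuron): behaviour is checked after every single swap and never changes, so a sudden "lights out" at any threshold would have to occur with no behavioural trace whatsoever.

```python
# Toy sketch of Chalmers-style gradual replacement (hypothetical model):
# each unit is swapped for a functional duplicate on a different substrate,
# and overall behaviour is re-checked at every replacement fraction.

def report(brain):
    # Behaviour depends only on each unit's input/output function,
    # not on its substrate tag.
    return sum(unit["io"](x) for x, unit in enumerate(brain))

brain = [{"substrate": "bio", "io": lambda x: x % 3} for _ in range(1000)]
baseline = report(brain)

for i in range(len(brain)):
    # Replace one unit, preserving its input/output function exactly.
    brain[i] = {"substrate": "silicon", "io": brain[i]["io"]}
    assert report(brain) == baseline  # no detectable change at any fraction

assert all(unit["substrate"] == "silicon" for unit in brain)
```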
--
Stathis Papaioannou
On 6/9/2020 4:45 PM, Stathis Papaioannou wrote:
On Wed, 10 Jun 2020 at 09:15, Jason Resch <jason...@gmail.com> wrote:
Chalmers' argument is that if a different brain is not conscious, then somewhere along the way we get either suddenly disappearing or fading qualia, which I agree are philosophically distasteful.
But what if someone is fine with philosophical zombies and suddenly disappearing qualia? Is there any impossibility proof for such things?
Philosophical zombies are less problematic than partial philosophical zombies. Partial philosophical zombies would render the idea of qualia absurd, because it would mean that we might be completely blind, for example, without realising it.
Isn't this what blindsight exemplifies?
As an absolute minimum, although we may not be able to test for or define qualia, we should know if we have them. Take this requirement away, and there is nothing left.
Suddenly disappearing qualia are logically possible, but it is difficult to imagine how they could work. We would be normally conscious while our neurons were being replaced, but when one special glutamate receptor in a special neuron in the left parietal lobe was replaced, or when exactly 35.54876% replacement of all neurons was reached, the internal lights would suddenly go out.
I think this all-or-nothing is misconceived. It's not internal cognition that might vanish suddenly, it's some specific aspect of experience. There are people who, through brain injury, lose the ability to recognize faces; recognition is a quale. Of course people's frequency range of hearing fades (don't ask me how I know). My mother, when she was 95, lost color vision in one eye, but not the other. Some people, it seems, cannot do higher mathematics. So how would you know if you lost the quale of empathy, for example? Could it not just fade, i.e. become evoked less and less?
On Wed, 10 Jun 2020 at 10:41, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 6/9/2020 4:45 PM, Stathis Papaioannou wrote:
Philosophical zombies are less problematic than partial philosophical zombies. Partial philosophical zombies would render the idea of qualia absurd, because it would mean that we might be completely blind, for example, without realising it.
Isn't this what blindsight exemplifies?
Blindsight entails behaving as if you have vision but not believing that you have vision.
Anton syndrome entails believing you have vision but not behaving as if you have vision.

Being a partial zombie would entail believing you have vision and behaving as if you have vision, but not actually having vision.
I think this all-or-nothing is misconceived. It's not internal cognition that might vanish suddenly, it's some specific aspect of experience. There are people who, through brain injury, lose the ability to recognize faces; recognition is a quale. Of course people's frequency range of hearing fades (don't ask me how I know). My mother, when she was 95, lost color vision in one eye, but not the other. Some people, it seems, cannot do higher mathematics. So how would you know if you lost the quale of empathy, for example? Could it not just fade, i.e. become evoked less and less?
I don't believe suddenly disappearing qualia can happen, but either this (leading to full zombiehood) or fading qualia (leading to partial zombiehood) would be a consequence of replacing the brain if behaviour could be replicated without replicating qualia.
On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:
On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:
We can’t know if a particular entity is conscious,
If the term means anything, you can know one particular entity is conscious.
Yes, I should have added that we can't know that a particular entity other than oneself is conscious.

but we can know that if it is conscious, then a functional equivalent, as you describe, is also conscious.
So any entity functionally equivalent to yourself, you must know is conscious. But "functionally equivalent" is vague, ambiguous, and certainly needs qualifying by environment and other factors. Is a dolphin functionally equivalent to me? Not in swimming.
Functional equivalence here means that you replace a part with a new part that behaves in the same way. So if you replaced the copper wires in a computer with silver wires, the silver wires would be functionally equivalent, and you would notice no change in using the computer. Copper and silver have different physical properties such as conductivity, but the replacement would be chosen so that this is not functionally relevant.
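The copper/silver point can be sketched in code (the class names and resistivity figures are illustrative stand-ins, not real part specifications): the physical difference between the two parts exists, but it is screened off from everything visible at the level of use.

```python
# Sketch of part-level functional equivalence: two parts with different
# physical properties but the same functional role are interchangeable
# at the level of the system that uses them.

class CopperWire:
    resistivity = 1.7e-8   # ohm-metres: a physical, non-functional property
    def carry(self, signal):
        return signal      # functional role: deliver the signal intact

class SilverWire:
    resistivity = 1.6e-8   # physically different...
    def carry(self, signal):
        return signal      # ...functionally identical

def computer(wire, data):
    """The 'user level': only the wire's functional role is visible here."""
    return [wire.carry(bit) for bit in data]

data = [1, 0, 1, 1]
assert computer(CopperWire(), data) == computer(SilverWire(), data)
# The physical difference is real but makes no functional difference:
assert CopperWire.resistivity != SilverWire.resistivity
```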
But that functional equivalence at a microscopic level is worthless in judging what entities are conscious. The whole reason for bringing it up is that it provides a criterion for recognizing consciousness at the entity level.
I think what you refer to as "very strange" is possible given a little fuzziness about being functionally identical. Suppose his vision was replaced by some combination of sonar and radar. He could be as close to you as a color-blind person in his answers.
On Wed, 10 Jun 2020 at 13:25, 'Brent Meeker' via Everything List <everything-list@googlegroups.com> wrote:
On 6/9/2020 7:48 PM, Stathis Papaioannou wrote:
On Wed, 10 Jun 2020 at 11:16, 'Brent Meeker' via Everything List <everything-list@googlegroups.com> wrote:
On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:
On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List <everything-list@googlegroups.com> wrote:
On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:
On Wed, 10 Jun 2020 at 03:08, Jason Resch <jason...@gmail.com> wrote:
For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea:
"How can we know if a robot is conscious?"
Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence. Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains.Given that computationalism implies functional equivalence, then identical inputs yield identical internal behavior (nerve activations, etc.) and outputs, in terms of muscle movement, facial expressions, and speech.
If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.
Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?
Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?
We can’t know if a particular entity is conscious,
If the term means anything, you can know one particular entity is conscious.
Yes, I should have added we can’t know know that a particular entity other than oneself is conscious.but we can know that if it is conscious, then a functional equivalent, as you describe, is also conscious.
So any entity functionally equivalent to yourself, you must know is conscious. But "functionally equivalent" is vague, ambiguous, and certainly needs qualifying by environment and other factors. Is a dolphin functionally equivalent to me. Not in swimming.
Functional equivalence here means that you replace a part with a new part that behaves in the same way. So if you replaced the copper wires in a computer with silver wires, the silver wires would be functionally equivalent, and you would notice no change in using the computer. Copper and silver have different physical properties such as conductivity, but the replacement would be chosen so that this is not functionally relevant.
But that functional equivalence at a microscopic level is worthless in judging what entities are conscious. The whole reason for bringing it up is that it provides a criterion for recognizing consciousness at the entity level.
The thought experiment involves removing a part of the brain that would normally result in an obvious deficit in qualia and replacing it with a non-biological component that replicates its interactions with the rest of the brain. Remove the visual cortex, and the subject becomes blind, staggering around walking into things, saying "I'm blind, I can't see anything, why have you done this to me?" But if you replace it with an implant that processes input and sends output to the remaining neural tissue, the subject will have normal input to his leg muscles and his vocal cords, so he will be able to navigate his way around a room and will say "I can see everything normally, I feel just the same as before". This follows necessarily from the assumptions. But does it also follow that the subject will have normal visual qualia? If not, something very strange would be happening: he would be blind, but would behave normally, including his behaviour in communicating that everything feels normal.
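The module swap in this thought experiment can be sketched as a toy program (the image encoding and both "modules" are hypothetical stand-ins, not models of any neural tissue): since the implant reproduces the cortex's input-to-output mapping exactly, everything downstream behaves identically by construction.

```python
# Sketch of the visual-cortex replacement: a module is swapped for one
# that reproduces its interactions with the rest of the system, so
# downstream behaviour (speech, navigation) cannot change.

def visual_cortex(image):
    # Stand-in for the biological module's output to the rest of the brain.
    return [pixel > 0.5 for pixel in image]

def implant(image):
    # Non-biological replacement chosen to match the mapping exactly.
    return [pixel > 0.5 for pixel in image]

def rest_of_brain(signals):
    # Downstream behaviour depends only on the signals it receives.
    return "I can see everything normally" if any(signals) else "I'm blind"

image = [0.1, 0.7, 0.3]
assert rest_of_brain(visual_cortex(image)) == rest_of_brain(implant(image))
```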
I understand the "Yes doctor" experiment. But Jason was asking about being able to recognize consciousness by the function of the entity, and I think that is a different problem, one that needs to take into account the possibility of different kinds and degrees of consciousness. The YD question makes it binary by equating consciousness with being exactly the same as pre-doctor. Applying that to Jason's question, you would conclude that you cannot infer that other people are conscious because, while they are functionally equivalent in a loose sense, they are not exactly the same as you. They don't give exactly the same answers to questions. They may not even be able to see or hear things you do.

My answer to Jason's question was that it is not possible to know that another entity is conscious, but it is possible to know that if it is conscious, replicating its behaviour would replicate its consciousness.
I think what you refer to as "very strange" is possible given a little fuzziness about being functionally identical. Suppose his vision was replaced by some combination of sonar and radar. He could be as close to you as a color-blind person in his answers.

If the subject suddenly became colour blind or his vision were replaced by a combination of sonar and radar, while he may be able to navigate his way around normally, there would be a test that could distinguish the change, like trying to pick a number in a coloured pattern, or simply asking him if he feels the same. Otherwise, in what sense is it meaningful to say there has been a change in qualia?

--
Stathis Papaioannou
Otherwise I think rather than say "it is possible to know that it is conscious", we need to amend it to "it is impossible to disprove that it is conscious".

Though perhaps there's an argument to be made from the Church-Turing thesis, which pertains to the possible states of knowledge accessible to a computer program/software. If consciousness is viewed as software, then the Church-Turing thesis implies that software could never know/realize if its ultimate computing substrate changed.

But this is assuming the thing we're trying to prove, so I'm not sure it helps establish computationalism definitively.

Jason
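The substrate-independence point can be illustrated with a minimal sketch (not a proof; the interpreter here is just a trivial extra layer of indirection standing in for a different substrate, and all names are invented for the example): the program's results, which are all it can "know", are identical either way.

```python
# Sketch: the same program run "natively" and under an extra layer of
# interpretation yields identical results, so nothing inside the program
# can reveal which substrate is running it.

def program(n):
    # The "software" whose state of knowledge we probe.
    total = 0
    for i in range(n):
        total += i * i
    return total

def interpreter(fn, arg):
    # Stand-in for a different computing substrate: an extra indirection
    # layer that faithfully executes the same steps.
    return fn(arg)

native = [program(n) for n in range(20)]
hosted = [interpreter(program, n) for n in range(20)]
assert native == hosted  # indistinguishable from the inside
```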
On 9 Jun 2020, at 19:08, Jason Resch <jason...@gmail.com> wrote:
For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea:
"How can we know if a robot is conscious?”
That question is very different from "is functionalism/computationalism unfalsifiable?".
Note that in my older paper I relate computationalism to Putnam's ambiguous functionalism, by defining computationalism as the assertion that there exists a level of description of my body/brain such that I survive (my consciousness remains relatively invariant) when a digital machine (supposedly physically implemented) replaces my body/brain.
Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence.
I guess you mean “for all possible inputs”.
Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains. Given that computationalism implies functional equivalence, identical inputs yield identical internal behavior (nerve activations, etc.) and identical outputs, in terms of muscle movement, facial expressions, and speech.
If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.
Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?
With computationalism (and perhaps without), we cannot prove that anything is conscious (we can know our own consciousness, but still cannot justify it to ourselves in any public, third-person communicable way).
Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?
Computationalism is indirectly testable. By verifying the physics implied by the theory of consciousness, we verify it indirectly.
As you know, I define consciousness by that indubitable truth that every universal machine, cognitively rich enough to know that it is universal, finds by looking inward (in the Gödel-Kleene sense), and which is also non-provable (non rationally justifiable) and even non-definable without invoking *some* notion of truth. Such consciousness then appears to be a fixed point for the doubting procedure, as in Descartes, and it gets a key role: self-speeding up relative to universal machine(s).
So, it seems so clear to me that nobody can prove that anything is conscious that I make this into one of the main ways to characterise it.
Consciousness is already very similar to consistency, which is (for effective theories, and sound machines) equivalent to a belief in some reality. No machine can prove its own consistency, and no machine can prove that there is a reality satisfying its beliefs.
In any case, it is never the machine per se which is conscious, but the first person associated with the machine. There is a core universal person common to each of “us” (with “us” in a very large sense of universal numbers/machines).
Consciousness is not much more than knowledge, and in particular indubitable knowledge.
Bruno
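As an editor's aside: the black-box setup in the quoted question (identical inputs, compare outputs) can be sketched as a simple equivalence test. This is my own illustration, not part of the thread; `make_a`/`make_b` are hypothetical constructors for the two systems, stubbed here with a trivial stateful toy.

```python
# A hedged sketch of the black-box test in Jason's question: feed two
# implementations identical input streams and check their outputs never
# diverge. The "brains" are hypothetical; here they are toy counters.
def functionally_equivalent(make_a, make_b, input_stream):
    a, b = make_a(), make_b()
    return all(a(x) == b(x) for x in input_stream)

def make_counter():
    # Toy stand-in for a stateful system: outputs a running total.
    total = 0
    def step(x):
        nonlocal total
        total += x
        return total
    return step

# Two fresh instances of the same specification agree on every input.
print(functionally_equivalent(make_counter, make_counter, [1, 2, 3]))  # True
```

Of course, as the thread notes, such a test can only ever confirm behavioral identity; it says nothing about consciousness itself.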
Jason
On 6/10/2020 8:50 AM, Jason Resch wrote:
> Thought perhaps there's an argument to be made from the Church-Turing
> thesis, which pertains to possible states of knowledge accessible to a
> computer program/software. If consciousness is viewed as software then
> the Church-Turing thesis implies that software could never know/realize if
> its ultimate computing substrate changed.
I don't understand the import of this. The very concept of software
means "independent of hardware" by definition. It is not affected by
whether CT is true or not, or whether the computation is finite or not.
If you think that consciousness evolved, then it is an obvious inference
that consciousness would not include consciousness of its hardware
implementation.
The details here then involve that computations are not well defined if you refer to a single instant of time; you need to at least appeal to a sequence of states the system goes through. Consciousness cannot then be located at a single instant, in conflict with our own experience.
I deny that our experience consists of instants without duration or
direction. This is an assumption made by computationalists to simplify
their analysis.
Brent
If one needs to appeal to finite time intervals in a single universe setting, then given that in principle observers only have direct access to the exact moment they exist
On 11 Jun 2020, at 21:26, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 6/11/2020 9:03 AM, Bruno Marchal wrote:
On 9 Jun 2020, at 19:08, Jason Resch <jason...@gmail.com> wrote:
For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea:
"How can we know if a robot is conscious?”
That question is very different than “is functionalism/computationalism unfalsifiable?”.
Note that in my older paper, I relate computationalism to Putnam's ambiguous functionalism, by defining computationalism as asserting the existence of a level of description of my body/brain such that I survive (my consciousness remains relatively invariant) with a digital machine (supposedly physically implemented) replacing my body/brain.
Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence.
I guess you mean “for all possible inputs”.
Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains. Given that computationalism implies functional equivalence, then identical inputs yield identical internal behavior (nerve activations, etc.) and outputs, in terms of muscle movement, facial expressions, and speech.
If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.
Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?
With computationalism (and perhaps without), we cannot prove that anything is conscious (we can know our own consciousness, but still cannot justify it to ourselves in any public, third-person communicable way).
Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?
Computationalism is indirectly testable. By verifying the physics implied by the theory of consciousness, we verify it indirectly.
As you know, I define consciousness by that indubitable truth that every universal machine, cognitively rich enough to know that it is universal, finds by looking inward (in the Gödel-Kleene sense), and which is also non-provable (non rationally justifiable) and even non-definable without invoking *some* notion of truth. Such consciousness then appears to be a fixed point for the doubting procedure, as in Descartes, and it gets a key role: self-speeding up relative to universal machine(s).
So, it seems so clear to me that nobody can prove that anything is conscious that I make this into one of the main ways to characterise it.
Of course as a logician you tend to use "proof" to mean deductive proof...but then you switch to a theological attitude toward the premises you've used and treat them as given truths, instead of mere axioms.
I appreciate your categorization of logics of self-reference.
But I doubt that it has anything to do with human (or animal) consciousness. I don't think my dog is unconscious because he doesn't understand Goedelian incompleteness.
And I'm not conscious because I do. I'm conscious because of the Darwinian utility of being able to imagine myself in hypothetical situations.
Consciousness is already very similar to consistency, which is (for effective theories, and sound machines) equivalent to a belief in some reality. No machine can prove its own consistency, and no machine can prove that there is a reality satisfying its beliefs.
First, I can't prove it because such a proof would be relative to premises which would simply be my beliefs.
Second, I can prove it in the sense of jurisprudence...i.e. beyond reasonable doubt. Science doesn't care about "proofs", only about evidence.
Brent
In any case, it is never the machine per se which is conscious, but the first person associated with the machine. There is a core universal person common to each of “us” (with “us” in a very large sense of universal numbers/machines).
Consciousness is not much more than knowledge, and in particular indubitable knowledge.
Bruno
Jason
On 12 Jun 2020, at 20:52, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 6/12/2020 11:38 AM, Jason Resch wrote:
On Thu, Jun 11, 2020 at 1:34 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 6/10/2020 8:50 AM, Jason Resch wrote:
> Thought perhaps there's an argument to be made from the Church-Turing
> thesis, which pertains to possible states of knowledge accessible to a
> computer program/software. If consciousness is viewed as software then
> the Church-Turing thesis implies that software could never know/realize if
> its ultimate computing substrate changed.
I don't understand the import of this. The very concept of software
means "independent of hardware" by definition. It is not affected by
whether CT is true or not, or whether the computation is finite or not.
You're right. The only relevance of CT is that it means any software can be run by any universal hardware. There's no software that requires special hardware of a certain kind.
If you think that consciousness evolved, then it is an obvious inference
that consciousness would not include consciousness of its hardware
implementation.
If consciousness is software, it can't know its hardware. But some like Searle or Penrose think the hardware is important.
I think the hardware is important when you're talking about a computer that is embedded in some environment.
The hardware can define the interaction with that environment.
We idealize the brain as a computer independent of its physical instantiation...but that's just a theoretical simplification.
Brent
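A small editor's illustration of the substrate-independence point being debated (my toy, not a claim about brains): the same program run on two different interpreter "substrates" yields identical results, so nothing observable from inside the program distinguishes them.

```python
# The same tiny program executed by two differently built interpreters.
# Nothing in the program's output reveals which "substrate" ran it.
PROGRAM = [("add", 2), ("add", 3), ("mul", 4)]

def run_naive(program):
    # Substrate 1: hard-coded conditional dispatch.
    acc = 0
    for op, n in program:
        acc = acc + n if op == "add" else acc * n
    return acc

def run_table(program):
    # Substrate 2: table-driven dispatch via a dict of lambdas.
    ops = {"add": lambda a, n: a + n, "mul": lambda a, n: a * n}
    acc = 0
    for op, n in program:
        acc = ops[op](acc, n)
    return acc

print(run_naive(PROGRAM) == run_table(PROGRAM))  # True: identical results
```

This is exactly the sense in which software is "independent of hardware"; Brent's point is that this independence is an idealization once the computer is coupled to an environment.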
Bruno has argued on the basis of this to motivate his theory, but this is a generic feature of any theory that assumes a computational theory of consciousness. In particular, the computational theory of consciousness is incompatible with a single-universe theory. So, if you prove that only a single universe exists, then that disproves the computational theory of consciousness. The details here then involve that computations are not well defined if you refer to a single instant of time; you need to at least appeal to a sequence of states the system goes through. Consciousness cannot then be located at a single instant, in conflict with our own experience. Therefore either single-world theories are false or the computational theory of consciousness is false.
Saibal
Hi Saibal,
I agree indirect mechanisms like looking at the resulting physics may be the best way to test it. I was curious if there are any direct ways to test it. It seems not, given the lack of any direct tests of consciousness.
Though most people admit other humans are conscious, many would reject the idea of a conscious computer.
Computationalism seems right, but it also seems like something that by definition can't result in a failed test. So it has the appearance of not being falsifiable.
A single universe, or digital physics would be evidence that either computationalism is false or the ontology is sufficiently small, but a finite/small ontology is doubtful for many reasons.
Jason
That doesn't follow. You've implicitly assumed that all those excess computational states exist…
which is then begging the question of other worlds.
Brent
On Wed, Jun 10, 2020 at 5:55 PM PGC <multipl...@gmail.com> wrote:
On Tuesday, June 9, 2020 at 7:08:30 PM UTC+2, Jason wrote:
For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea: "How can we know if a robot is conscious?"
Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence. Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains. Given that computationalism implies functional equivalence, then identical inputs yield identical internal behavior (nerve activations, etc.) and outputs, in terms of muscle movement, facial expressions, and speech.
If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.
Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?
Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?
Every piece of writing is a theory of mind; both within western science and beyond. What about the abilities to understand and use natural language, to come up with new avenues for scientific or creative inquiry, to experience qualia and report on them, adapting and dealing with unexpected circumstances through senses, and formulating + solving problems in benevolent ways by contributing towards the resilience of its community and environment?
Trouble with this is that humans, even world leaders, fail those tests lol, but it's up to everybody, the AI and Computer Science folks in particular, to come up with the math, data, and
complete their mission... and as amazing as developments have been around AI in the last couple of decades, I'm not certain we can pull it off, even if it would be pleasant to be wrong and some folks succeed.
It's interesting you bring this up; I just wrote an article about the present capabilities of AI: https://alwaysasking.com/when-will-ai-take-over/
Even if folks do succeed, a context of militarized nation states and monopolistic corporations competing for resources in self-destructive, short term ways... will not exactly help towards NOT weaponizing AI. A transnational politics, economics, corporate law, values/philosophies, ethics, culture etc. to vanquish poverty and exploitation of people, natural resources, life; while being sustainable and benevolent stewards of the possibilities of life... would seem to be prerequisite to develop some amazing AI. Ideas are all out there but progressives are ineffective politically on a global scale. The right wing folks, finance guys, large irresponsible monopolistic corporations are much more effective in organizing themselves globally and forcing agendas down everybody's throats. So why wouldn't AI do the same? PGC
AI will either be a blessing or a curse. I don't think it can be anything in the middle.
true=/=real.
But true=/=real.
I hear you! You are saying that the existence of number is like the existence of Sherlock Holmes, but that leads to a gigantic multiverse,
Only via your assumption that arithmetic constitutes universes. I take it as a reductio.
with infinitely many Brent having the same conversation with me, here and now, and they all become zombie, except one, because some Reality want it that way?
which is then begging the question of other worlds.
You are the one adding a metaphysical assumption, to make some people whose existence in arithmetic follows from digital mechanism into zombie.
You're the one asserting that people "exist in arithmetic" whatever that may mean.
On 15 Jun 2020, at 20:39, Brent Meeker <meek...@verizon.net> wrote:
So all those theorems about real analysis and Cantorian infinities are just as real as arithmetic. If you don't practice free logic.
Truth is a property of propositions relative to observations for a scientist.
But for arithmetic, we do have a pretty good idea of what is the “standard model of arithmetic” (the structure (N, 0, s, +, *)), and by true (without further precision) we always mean “true in the standard model of arithmetic”.
I hear you! You are saying that the existence of number is like the existence of Sherlock Holmes, but that leads to a gigantic multiverse,
Only via your assumption that arithmetic constitutes universes. I take it as a reductio.
Not at all. I use only the provable and proved fact that the standard model of arithmetic implements and runs all computations, with “implement” and “run” defined in computer science (by Turing, without any assumption in physics).
If you believe in mechanism, and in Kxy = x and Sxyz = xz(yz), then I can prove that there is an infinity of Brents in arithmetic, having the very conversation that we have here and now.
It needs the assumption that you can apply operators arbitrarily many times.
That does not need any other assumption than Digital Mechanism. Even without mechanism, the fact remains that all computations are run in arithmetic. That is why, if mechanism is false, the arithmetical reality (the standard model of arithmetic) is full of zombies.
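An editor's aside: the equations Bruno cites are the K and S combinators of combinatory logic, which suffice for universal computation. A minimal sketch (my illustration, using curried Python functions) of the defining laws and Brent's point about repeated application:

```python
# The K and S combinators, written curried so Kxy = x and Sxyz = xz(yz).
K = lambda x: lambda y: x                      # K x y = x
S = lambda x: lambda y: lambda z: x(z)(y(z))   # S x y z = x z (y z)

# Repeated application builds new combinators: S K K acts as identity,
# since S K K a = K a (K a) = a.
I = S(K)(K)

print(K("a")("b"))  # a
print(I(42))        # 42
```

Brent's caveat corresponds to the fact that universality only emerges when such operators may be composed arbitrarily many times.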
with infinitely many Brent having the same conversation with me, here and now, and they all become zombie, except one,
If they are having the same conversation in the same way then they are the same persons/events, per Leibniz's identity of indiscernibles.
because some Reality want it that way?
which is then begging the question of other worlds.
You are the one adding a metaphysical assumption, to make some people whose existence in arithmetic follows from digital mechanism into zombie.
You're the one asserting that people "exist in arithmetic" whatever that may mean.
It means that there exists a number k such that phi_k(x) = y iff Brent# gives y on x, where x describes some possible input (a giant number taking into account all your senses). As we change ourselves all the time, I use “Brent#” to denote you at some precise time. The codings here are huge, but the arithmetical reality counts without counting, if I may say. All the relative states of your brain, relative to, say, our cluster of galaxies, are run in arithmetic, in finitely many number relations, and unless you want them all to be zombies, they are all conscious, and belong to your personal range of first person indeterminacy, although in this case the measure is plausibly negligible, compared to all solutions of the DeWitt-Wheeler equation (whose negligibility or not is to be studied).
If interested, I can explain once more why the arithmetical reality runs all computations, with a highly structured redundancy,
Yes, I understand the theory of infinite computations.
which already suggests a non-trivial measure on the computations (with and without oracle).
So what is that measure, and how can it be compared to some observation?
Brent
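An editor's note on the "running all computations" idea exchanged above: the standard construction is dovetailing, which interleaves every program so that each halting one eventually completes. A toy sketch (my illustration, not Bruno's construction; "programs" here are functions of a step budget that return None until they have been given enough steps):

```python
# A toy dovetailer: run every program for 1 step, then 2, then 3, ...,
# so no single non-halting program blocks the others from finishing.
def dovetail(programs, max_rounds):
    results = {}
    for steps in range(1, max_rounds + 1):
        for i, prog in enumerate(programs):
            if i not in results:
                out = prog(steps)    # run prog with a budget of `steps`
                if out is not None:  # None means "has not halted yet"
                    results[i] = out
    return results

# Two toy programs that "halt" after 5 and 1 steps respectively.
slow = lambda steps: "done@5" if steps >= 5 else None
fast = lambda steps: "done@1" if steps >= 1 else None
print(dovetail([slow, fast], max_rounds=10))
```

A true dovetailer drops `max_rounds` and also enumerates programs themselves, which is the sense in which a fixed arithmetical structure can be said to run all computations.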
On 9 Jun 2020, at 19:24, John Clark <johnk...@gmail.com> wrote:
On Tue, Jun 9, 2020 at 1:08 PM Jason Resch <jason...@gmail.com> wrote:
> How can we know if a robot is conscious?
The exact same way we know that one of our fellow human beings is conscious when he's not sleeping or under anesthesia or dead.
John K Clark
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAJPayv31v1JHkaxWQfq4_OdJo32Ev-kkgXciVpTQaLXZ2YCcMA%40mail.gmail.com.
On 9 Jun 2020, at 19:24, John Clark <johnk...@gmail.com> wrote:
On Tue, Jun 9, 2020 at 1:08 PM Jason Resch <jason...@gmail.com> wrote:
> How can we know if a robot is conscious?
The exact same way we know that one of our fellow human beings is conscious when he's not sleeping or under anesthesia or dead.
That is how we believe that a human is conscious: we project our own incorrigible feeling of being conscious onto them, when they are similar enough. And that makes us know that they are conscious, in the weak sense of knowing (true belief), but we can’t “know-for-sure”.
It is unclear if we can apply this to a robot, which might look too different. If a Japanese sexual doll complains of having been raped, the judge will say that she was programmed to complain, but that she actually feels nothing, and many people will agree (wrongly or rightly).
It will take some time before the robots get freedom and social security. I guess we will digitalise ourselves before…
Bruno
John K Clark
On 9 Jul 2020, at 22:12, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 7/9/2020 3:36 AM, Bruno Marchal wrote:
On 9 Jun 2020, at 19:24, John Clark <johnk...@gmail.com> wrote:
On Tue, Jun 9, 2020 at 1:08 PM Jason Resch <jason...@gmail.com> wrote:
> How can we know if a robot is conscious?
The exact same way we know that one of our fellow human beings is conscious when he's not sleeping or under anesthesia or dead.
That is how we believe that a human is conscious: we project our own incorrigible feeling of being conscious onto them, when they are similar enough. And that makes us know that they are conscious, in the weak sense of knowing (true belief), but we can’t “know-for-sure”.
It is unclear if we can apply this to a robot, which might look too different. If a Japanese sexual doll complains of having been raped, the judge will say that she was programmed to complain, but that she actually feels nothing, and many people will agree (wrongly or rightly).
And when she argues that the judge is wrong she will prove her point.
Brent
It will take some time before the robots get freedom and social security. I guess we will digitalise ourselves before…
Bruno
John K Clark