Is functionalism/computationalism unfalsifiable?


Jason Resch

Jun 9, 2020, 1:08:30 PM
to Everything List
For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence. Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then identical inputs yield identical internal behavior (nerve activations, etc.) and outputs, in terms of muscle movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?
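To make the worry concrete, here is a minimal sketch (the functions and canned reports below are purely hypothetical stand-ins, not a model of a brain): any objective third-person test only sees stimuli and responses, so two systems with the same input-output mapping necessarily receive the same verdict.

```python
# Hypothetical stand-ins: two systems that implement the same
# stimulus -> report mapping by construction.

def biological_brain(stimulus: str) -> str:
    return f"That {stimulus} hurts; it is a sharp, burning pain."

def emulated_brain(stimulus: str) -> str:
    # exact functional emulation: same mapping, different substrate
    return f"That {stimulus} hurts; it is a sharp, burning pain."

def third_person_test(system, stimuli):
    # any objective test reduces to observing responses to chosen inputs
    return [system(s) for s in stimuli]

stimuli = ["pinprick", "nerve stimulation in the back", "pressure"]
assert third_person_test(biological_brain, stimuli) == third_person_test(emulated_brain, stimuli)
# Identical behaviour for every input, so behavioural tests alone can
# neither prove nor disprove consciousness in either system.
```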

Jason

John Clark

Jun 9, 2020, 1:24:44 PM
to everyth...@googlegroups.com
On Tue, Jun 9, 2020 at 1:08 PM Jason Resch <jason...@gmail.com> wrote:

> How can we know if a robot is conscious?

The exact same way we know that one of our fellow human beings is conscious when he's not sleeping or under anesthesia or dead.

John K Clark   

Brent Meeker

Jun 9, 2020, 3:15:40 PM
to everyth...@googlegroups.com
If it acts conscious, then it is conscious.

But I think science/technology can go a lot further.  It can look at the information flow: where memory is, how it is formed, how it is accessed, and whether this matters in the action of the entity.  It can look at the decision processes: are there separate competing modules (as Dennett hypothesizes) or is there a global workspace, and again, does it make a difference?  What does it take to make the entity act happy, sad, thoughtful, bored, etc.?

Brent

Stathis Papaioannou

Jun 9, 2020, 7:03:08 PM
to everyth...@googlegroups.com
We can’t know if a particular entity is conscious, but we can know that if it is conscious, then a functional equivalent, as you describe, is also conscious. This is the subject of David Chalmers’ paper “Absent Qualia, Fading Qualia, Dancing Qualia”:


--
Stathis Papaioannou

Philip Thrift

Jun 9, 2020, 7:10:01 PM
to Everything List
I doubt anyone in consciousness research believes this. Including Dennett today.

@philipthrift 

Jason Resch

Jun 9, 2020, 7:14:45 PM
to Everything List
Chalmers' argument is that if the functionally equivalent brain is not conscious, then somewhere along the way we get either suddenly disappearing or fading qualia, which I agree are philosophically distasteful.

But what if someone is fine with philosophical zombies and suddenly disappearing qualia? Is there any impossibility proof for such things?

Jason

Brent Meeker

Jun 9, 2020, 7:32:53 PM
to everyth...@googlegroups.com


On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:


On Wed, 10 Jun 2020 at 03:08, Jason Resch <jason...@gmail.com> wrote:
For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence. Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then identical inputs yield identical internal behavior (nerve activations, etc.) and outputs, in terms of muscle movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?

We can’t know if a particular entity is conscious,

If the term means anything, you can know one particular entity is conscious.


but we can know that if it is conscious, then a functional equivalent, as you describe, is also conscious.

So any entity functionally equivalent to yourself, you must know is conscious.  But "functionally equivalent" is vague, ambiguous, and certainly needs qualifying by environment and other factors.  Is a dolphin functionally equivalent to me?  Not in swimming.

Brent

This is the subject of David Chalmers’ paper:


--
Stathis Papaioannou

Jason Resch

Jun 9, 2020, 7:35:21 PM
to Everything List
That is the assumption I and most others operate under.

But every now and then you encounter a biological naturalist or someone who says a brain must be made of brain cells to actually be conscious.

The real point of my e-mail is to ask the question: can any test in principle disprove computationalism as a philosophy of mind, given that it is defined in terms of functional equivalence?

 

But I think science/technology can go a lot further.  It can look at the information flow: where memory is, how it is formed, how it is accessed, and whether this matters in the action of the entity.  It can look at the decision processes: are there separate competing modules (as Dennett hypothesizes) or is there a global workspace, and again, does it make a difference?  What does it take to make the entity act happy, sad, thoughtful, bored, etc.?

 I agree we can look at more than just the outputs.

Jason

Brent Meeker

Jun 9, 2020, 7:42:46 PM
to everyth...@googlegroups.com
There's an implicit assumption that "qualia" are well-defined things.  I think it very plausible that qualia differ depending on sensors, values, and memory.  So we may create AI that has something like qualia, but qualia different from ours, just as people with synesthesia have somewhat different qualia.

Brent

Stathis Papaioannou

Jun 9, 2020, 7:46:07 PM
to everyth...@googlegroups.com
Philosophical zombies are less problematic than partial philosophical zombies. Partial philosophical zombies would render the idea of qualia absurd, because it would mean that we might be completely blind, for example, without realising it. As an absolute minimum, although we may not be able to test for or define qualia, we should know if we have them. Take this requirement away, and there is nothing left.

Suddenly disappearing qualia are logically possible but it is difficult to imagine how it could work. We would be normally conscious while our neurons were being replaced, but when one special glutamate receptor in a special neuron in the left parietal lobe was replaced, or when exactly 35.54876% replacement of all neurons was reached, the internal lights would suddenly go out.
--
Stathis Papaioannou

Stathis Papaioannou

Jun 9, 2020, 7:58:43 PM
to everyth...@googlegroups.com
On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:


On Wed, 10 Jun 2020 at 03:08, Jason Resch <jason...@gmail.com> wrote:
For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence. Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then identical inputs yield identical internal behavior (nerve activations, etc.) and outputs, in terms of muscle movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?

We can’t know if a particular entity is conscious,

If the term means anything, you can know one particular entity is conscious.

Yes, I should have added that we can’t know that a particular entity other than oneself is conscious.
but we can know that if it is conscious, then a functional equivalent, as you describe, is also conscious.

So any entity functionally equivalent to yourself, you must know is conscious.  But "functionally equivalent" is vague, ambiguous, and certainly needs qualifying by environment and other factors.  Is a dolphin functionally equivalent to me?  Not in swimming.

Functional equivalence here means that you replace a part with a new part that behaves in the same way. So if you replaced the copper wires in a computer with silver wires, the silver wires would be functionally equivalent, and you would notice no change in using the computer. Copper and silver have different physical properties such as conductivity, but the replacement would be chosen so that this is not functionally relevant.
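As a toy illustration (the classes and numbers below are only illustrative, not claims about real hardware): the two wires differ in a physical property, but at the interface the rest of the computer actually uses they behave identically.

```python
# Illustrative only: the wires differ physically, but the computer
# interacts with them only through carry(), where they behave identically.

class CopperWire:
    resistivity = 1.68e-8  # ohm metre, a physical property

    def carry(self, signal):
        return signal      # behaviour at the functional level

class SilverWire:
    resistivity = 1.59e-8  # physically different...

    def carry(self, signal):
        return signal      # ...functionally the same

def run_computer(wire, bits):
    return [wire.carry(b) for b in bits]

bits = [1, 0, 1, 1, 0]
assert run_computer(CopperWire(), bits) == run_computer(SilverWire(), bits)
# A physical-level probe distinguishes the parts; the functional level,
# chosen so that the difference is not functionally relevant, does not.
assert CopperWire.resistivity != SilverWire.resistivity
```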
This is the subject of David Chalmers’ paper:

--
Stathis Papaioannou

Brent Meeker

Jun 9, 2020, 8:41:33 PM
to everyth...@googlegroups.com


On 6/9/2020 4:45 PM, Stathis Papaioannou wrote:


On Wed, 10 Jun 2020 at 09:15, Jason Resch <jason...@gmail.com> wrote:


On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou <stat...@gmail.com> wrote:


On Wed, 10 Jun 2020 at 03:08, Jason Resch <jason...@gmail.com> wrote:
For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence. Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then identical inputs yield identical internal behavior (nerve activations, etc.) and outputs, in terms of muscle movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?

We can’t know if a particular entity is conscious, but we can know that if it is conscious, then a functional equivalent, as you describe, is also conscious. This is the subject of David Chalmers’ paper:


Chalmers' argument is that if a different brain is not conscious, then somewhere along the way we get either suddenly disappearing or fading qualia, which I agree are philosophically distasteful.

But what if someone is fine with philosophical zombies and suddenly disappearing qualia? Is there any impossibility proof for such things?

Philosophical zombies are less problematic than partial philosophical zombies. Partial philosophical zombies would render the idea of qualia absurd, because it would mean that we might be completely blind, for example, without realising it.

Isn't this what blindsight exemplifies?


As an absolute minimum, although we may not be able to test for or define qualia, we should know if we have them. Take this requirement away, and there is nothing left.

Suddenly disappearing qualia are logically possible but it is difficult to imagine how it could work. We would be normally conscious while our neurons were being replaced, but when one special glutamate receptor in a special neuron in the left parietal lobe was replaced, or when exactly 35.54876% replacement of all neurons was reached, the internal lights would suddenly go out.

I think this all-or-nothing is misconceived.  It's not internal cognition that might vanish suddenly, it's some specific aspect of experience: there are people who, through brain injury, lose the ability to recognize faces...recognition is a qualia.  Of course people's frequency range of hearing fades (don't ask me how I know).  My mother, when she was 95, lost color vision in one eye, but not the other.  Some people, it seems, cannot do higher mathematics.  So how would you know if you lost the qualia of empathy, for example?  Could it not just fade...i.e. become evoked less and less?

Brent

--
Stathis Papaioannou

Brent Meeker

Jun 9, 2020, 9:16:18 PM
to everyth...@googlegroups.com
But that functional equivalence at a microscopic level is worthless in judging what entities are conscious.    The whole reason for bringing it up is that it provides a criterion for recognizing consciousness at the entity level. 

And even at the microscopic level functional equivalence is ambiguous.  The difference in conductivity between copper and silver might not make any difference 99.9% of the time, but in some circumstances it might.  Or there might be incidental effects due to the difference in corrosion that would show up in 20 years but not sooner.

Brent

This is the subject of David Chalmers’ paper:

--
Stathis Papaioannou

Stathis Papaioannou

Jun 9, 2020, 9:41:43 PM
to everyth...@googlegroups.com
On Wed, 10 Jun 2020 at 10:41, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 6/9/2020 4:45 PM, Stathis Papaioannou wrote:


On Wed, 10 Jun 2020 at 09:15, Jason Resch <jason...@gmail.com> wrote:


On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou <stat...@gmail.com> wrote:


On Wed, 10 Jun 2020 at 03:08, Jason Resch <jason...@gmail.com> wrote:
For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence. Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then identical inputs yield identical internal behavior (nerve activations, etc.) and outputs, in terms of muscle movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?

We can’t know if a particular entity is conscious, but we can know that if it is conscious, then a functional equivalent, as you describe, is also conscious. This is the subject of David Chalmers’ paper:


Chalmers' argument is that if a different brain is not conscious, then somewhere along the way we get either suddenly disappearing or fading qualia, which I agree are philosophically distasteful.

But what if someone is fine with philosophical zombies and suddenly disappearing qualia? Is there any impossibility proof for such things?

Philosophical zombies are less problematic than partial philosophical zombies. Partial philosophical zombies would render the idea of qualia absurd, because it would mean that we might be completely blind, for example, without realising it.

Isn't this what blindsight exemplifies?

Blindsight entails behaving as if you have vision but not believing that you have vision.
Anton syndrome entails believing you have vision but not behaving as if you have vision.
Being a partial zombie would entail believing you have vision and behaving as if you have vision, but not actually having vision. 
As an absolute minimum, although we may not be able to test for or define qualia, we should know if we have them. Take this requirement away, and there is nothing left.

Suddenly disappearing qualia are logically possible but it is difficult to imagine how it could work. We would be normally conscious while our neurons were being replaced, but when one special glutamate receptor in a special neuron in the left parietal lobe was replaced, or when exactly 35.54876% replacement of all neurons was reached, the internal lights would suddenly go out.

I think this all-or-nothing is misconceived.  It's not internal cognition that might vanish suddenly, it's some specific aspect of experience: There are people who, thru brain injury, lose the ability to recognize faces...recognition is a qualia.   Of course people's frequency range of hearing fades (don't ask me how I know).  My mother, when she was 95 lost color vision in one eye, but not the other.  Some people, it seems cannot do higher mathematics.  So how would you know if you lost the qualia of empathy for example?  Could it not just fade...i.e. become evoked less and less?

I don't believe suddenly disappearing qualia can happen, but either this - leading to full zombiehood - or fading qualia - leading to partial zombiehood - would be a consequence of replacing the brain if behaviour could be replicated without replicating qualia.


--
Stathis Papaioannou

Brent Meeker

Jun 9, 2020, 10:49:13 PM
to everyth...@googlegroups.com


On 6/9/2020 6:41 PM, Stathis Papaioannou wrote:


On Wed, 10 Jun 2020 at 10:41, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 6/9/2020 4:45 PM, Stathis Papaioannou wrote:


On Wed, 10 Jun 2020 at 09:15, Jason Resch <jason...@gmail.com> wrote:


On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou <stat...@gmail.com> wrote:


On Wed, 10 Jun 2020 at 03:08, Jason Resch <jason...@gmail.com> wrote:
For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence. Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then identical inputs yield identical internal behavior (nerve activations, etc.) and outputs, in terms of muscle movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?

We can’t know if a particular entity is conscious, but we can know that if it is conscious, then a functional equivalent, as you describe, is also conscious. This is the subject of David Chalmers’ paper:


Chalmers' argument is that if a different brain is not conscious, then somewhere along the way we get either suddenly disappearing or fading qualia, which I agree are philosophically distasteful.

But what if someone is fine with philosophical zombies and suddenly disappearing qualia? Is there any impossibility proof for such things?

Philosophical zombies are less problematic than partial philosophical zombies. Partial philosophical zombies would render the idea of qualia absurd, because it would mean that we might be completely blind, for example, without realising it.

Isn't this what blindsight exemplifies?

Blindsight entails behaving as if you have vision but not believing that you have vision.

And you don't believe you have vision because you're missing the qualia of seeing.


Anton syndrome entails believing you have vision but not behaving as if you have vision.
Being a partial zombie would entail believing you have vision and behaving as if you have vision, but not actually having vision.

That would be a total zombie with respect to vision.  The person with blindsight is a partial zombie.  They have the function but not the qualia.


As an absolute minimum, although we may not be able to test for or define qualia, we should know if we have them. Take this requirement away, and there is nothing left.

Suddenly disappearing qualia are logically possible but it is difficult to imagine how it could work. We would be normally conscious while our neurons were being replaced, but when one special glutamate receptor in a special neuron in the left parietal lobe was replaced, or when exactly 35.54876% replacement of all neurons was reached, the internal lights would suddenly go out.

I think this all-or-nothing is misconceived.  It's not internal cognition that might vanish suddenly, it's some specific aspect of experience: There are people who, thru brain injury, lose the ability to recognize faces...recognition is a qualia.   Of course people's frequency range of hearing fades (don't ask me how I know).  My mother, when she was 95 lost color vision in one eye, but not the other.  Some people, it seems cannot do higher mathematics.  So how would you know if you lost the qualia of empathy for example?  Could it not just fade...i.e. become evoked less and less?

I don't believe suddenly disappearing qualia can happen, but either this - leading to full zombiehood - or fading qualia - leading to partial zombiehood - would be a consequence of  replacement of the brain if behaviour could be replicated without replicating qualia.

No.  You're assuming the replacements either instantiate the qualia or they do nothing.  The third possibility is that they instantiate some different qualia, or conditional qualia.

Brent

Stathis Papaioannou

Jun 9, 2020, 10:49:24 PM
to everyth...@googlegroups.com
On Wed, 10 Jun 2020 at 11:16, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:


On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:


On Wed, 10 Jun 2020 at 03:08, Jason Resch <jason...@gmail.com> wrote:
For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence. Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then identical inputs yield identical internal behavior (nerve activations, etc.) and outputs, in terms of muscle movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?

We can’t know if a particular entity is conscious,

If the term means anything, you can know one particular entity is conscious.

Yes, I should have added that we can’t know that a particular entity other than oneself is conscious.
but we can know that if it is conscious, then a functional equivalent, as you describe, is also conscious.

So any entity functionally equivalent to yourself, you must know is conscious.  But "functionally equivalent" is vague, ambiguous, and certainly needs qualifying by environment and other factors.  Is a dolphin functionally equivalent to me?  Not in swimming.

Functional equivalence here means that you replace a part with a new part that behaves in the same way. So if you replaced the copper wires in a computer with silver wires, the silver wires would be functionally equivalent, and you would notice no change in using the computer. Copper and silver have different physical properties such as conductivity, but the replacement would be chosen so that this is not functionally relevant.

But that functional equivalence at a microscopic level is worthless in judging what entities are conscious.    The whole reason for bringing it up is that it provides a criterion for recognizing consciousness at the entity level.

The thought experiment involves removing a part of the brain that would normally result in an obvious deficit in qualia and replacing it with a non-biological component that replicates its interactions with the rest of the brain. Remove the visual cortex, and the subject becomes blind, staggering around walking into things, saying "I'm blind, I can't see anything, why have you done this to me?" But if you replace it with an implant that processes input and sends output to the remaining neural tissue, the subject will have normal input to his leg muscles and his vocal cords, so he will be able to navigate his way around a room and will say "I can see everything normally, I feel just the same as before". This follows necessarily from the assumptions. But does it also follow that the subject will have normal visual qualia? If not, something very strange would be happening: he would be blind, but would behave normally, including his behaviour in communicating that everything feels normal.
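A rough sketch of why the behavioural part follows from the assumptions (every function below is a hypothetical stand-in): the remaining tissue receives only the signals the visual module sends it, so an implant that reproduces those signals exactly fixes both the navigation and the verbal report.

```python
# Hypothetical stand-ins for the replacement scenario.

def biological_visual_cortex(retinal_input):
    return {"edges": retinal_input.count("|"), "brightness": len(retinal_input)}

def implant(retinal_input):
    # assumed to replicate the biological module's outputs exactly
    return {"edges": retinal_input.count("|"), "brightness": len(retinal_input)}

def rest_of_brain(visual_signals):
    # downstream tissue sees only these signals, nothing else
    if visual_signals["brightness"] > 0:
        return "I can see everything normally, I feel just the same as before."
    return "I'm blind, I can't see anything, why have you done this to me?"

scene = "|| | |||"
assert rest_of_brain(biological_visual_cortex(scene)) == rest_of_brain(implant(scene))
# The behavioural identity is guaranteed by the setup; whether the implant
# also supports visual qualia is exactly what this identity cannot settle.
```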


-- 
Stathis Papaioannou

Stathis Papaioannou

Jun 9, 2020, 11:08:46 PM
to everyth...@googlegroups.com
The possibilities are that a functionally identical replacement either leaves the qualia unchanged or changes them. If it changes the qualia, that leads to a strange situation: the subject could have an arbitrarily large change in qualia, but would behave the same and declare that everything is the same. That would mean that the subject either does not notice the change, or notices but is unable to control his body, which of its own accord declares that everything is the same.


--
Stathis Papaioannou

Brent Meeker

Jun 9, 2020, 11:25:41 PM
to everyth...@googlegroups.com
I understand the "Yes doctor" experiment.  But Jason was asking about being able to recognize consciousness by function of the entity, and I think that is a different problem that needs to into account the possibility of different kinds and degrees of consciousness.  The YD question makes it binary by equating consciousness with exactly the same as pre-doctor.  Applying that to Jason's question you would conclude that you cannot infer that other people are conscious because, while they are functionally equivalent is a loose sense, they are not exactly the same as you.  They don't give exactly the same answers to questions.  They may not even be able to see or hear things you do.

I think what refer to as "very strange" is possible given a little fuzziness about being functionally identical.  Suppose his vision was replaced by some combination of sonar and radar.  He could be as close to you as a color blind person in his answers.


Brent

Stathis Papaioannou

Jun 9, 2020, 11:39:30 PM
to everyth...@googlegroups.com
My answer to Jason's question was that it is not possible to know that another entity is conscious, but it is possible to know that if it is conscious, replicating its behaviour would replicate its consciousness.
 
I think what refer to as "very strange" is possible given a little fuzziness about being functionally identical.  Suppose his vision was replaced by some combination of sonar and radar.  He could be as close to you as a color blind person in his answers.

If the subject suddenly became colour blind or his vision were replaced by a combination of sonar and radar, while he may be able to navigate his way around normally there would be a test that could distinguish the change, like trying to pick a number in a coloured pattern, or simply asking him if he feels the same. Otherwise, in what sense is it meaningful to say there has been a change in qualia?


--
Stathis Papaioannou

smitra

Jun 10, 2020, 10:07:30 AM
to everyth...@googlegroups.com
I think it can be tested indirectly, because generic computational theories of consciousness imply a multiverse. If my consciousness is the result of a computation, then because on the one hand any such computation necessarily involves a vast number of elementary bits, and on the other hand whatever I'm conscious of is describable using only a handful of bits, the mapping between computational states and states of consciousness is N to 1, where N is astronomically large. So, the laws of physics we already know about must be effective laws in which the statistical effects due to self-localization uncertainty are already built in.
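To give a feel for the scale involved (the numbers below are purely illustrative assumptions, not measurements):

```python
# Purely illustrative numbers, not neuroscience.
microstate_bits = 10**15  # assumed: bits specifying the computational state realising a mind
conscious_bits = 100      # assumed: bits sufficient to describe what one is aware of

# N computational states map to each state of consciousness, so
# log2(N) = microstate_bits - conscious_bits.
log2_N = microstate_bits - conscious_bits
print("log2(N) =", log2_N)  # about 10**15, i.e. N is astronomically large
```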

Bruno has argued on the basis of this to motivate his theory, but this is a generic feature of any theory that assumes a computational theory of consciousness. In particular, the computational theory of consciousness is incompatible with a single-universe theory. So, if you prove that only a single universe exists, then that disproves the computational theory of consciousness. The details here involve the fact that computations are not well defined if you refer to a single instant of time; you need to at least appeal to a sequence of states the system goes through. Consciousness cannot then be located at a single instant, in conflict with our own experience. Therefore either single-world theories are false or the computational theory of consciousness is false.

Saibal

Jason Resch

Jun 10, 2020, 11:50:58 AM
to everyth...@googlegroups.com


On Tuesday, June 9, 2020, Stathis Papaioannou <stat...@gmail.com> wrote:



On Wed, 10 Jun 2020 at 13:25, 'Brent Meeker' via Everything List <everything-list@googlegroups.com> wrote:


On 6/9/2020 7:48 PM, Stathis Papaioannou wrote:


On Wed, 10 Jun 2020 at 11:16, 'Brent Meeker' via Everything List <everything-list@googlegroups.com> wrote:


On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:


On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List <everything-list@googlegroups.com> wrote:


On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:


On Wed, 10 Jun 2020 at 03:08, Jason Resch <jason...@gmail.com> wrote:
For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence. Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then identical inputs yield identical internal behavior (nerve activations, etc.) and outputs, in terms of muscle movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?

We can’t know if a particular entity is conscious,

If the term means anything, you can know one particular entity is conscious.

Yes, I should have added that we can’t know that a particular entity other than oneself is conscious.
but we can know that if it is conscious, then a functional equivalent, as you describe, is also conscious.

So any entity functionally equivalent to yourself, you must know is conscious.  But "functionally equivalent" is vague, ambiguous, and certainly needs qualifying by environment and other factors.  Is a dolphin functionally equivalent to me?  Not in swimming.

Functional equivalence here means that you replace a part with a new part that behaves in the same way. So if you replaced the copper wires in a computer with silver wires, the silver wires would be functionally equivalent, and you would notice no change in using the computer. Copper and silver have different physical properties such as conductivity, but the replacement would be chosen so that this is not functionally relevant.

But that functional equivalence at a microscopic level is worthless in judging what entities are conscious.    The whole reason for bringing it up is that it provides a criterion for recognizing consciousness at the entity level.

The thought experiment involves removing a part of the brain that would normally result in an obvious deficit in qualia and replacing it with a non-biological component that replicates its interactions with the rest of the brain. Remove the visual cortex, and the subject becomes blind, staggering around walking into things, saying "I'm blind, I can't see anything, why have you done this to me?" But if you replace it with an implant that processes input and sends output to the remaining neural tissue, the subject will have normal input to his leg muscles and his vocal cords, so he will be able to navigate his way around a room and will say "I can see everything normally, I feel just the same as before". This follows necessarily from the assumptions. But does it also follow that the subject will have normal visual qualia? If not, something very strange would be happening: he would be blind, but would behave normally, including his behaviour in communicating that everything feels normal.

I understand the "Yes doctor" experiment.  But Jason was asking about being able to recognize consciousness by function of the entity, and I think that is a different problem that needs to into account the possibility of different kinds and degrees of consciousness.  The YD question makes it binary by equating consciousness with exactly the same as pre-doctor.  Applying that to Jason's question you would conclude that you cannot infer that other people are conscious because, while they are functionally equivalent is a loose sense, they are not exactly the same as you.  They don't give exactly the same answers to questions.  They may not even be able to see or hear things you do.

My answer to Jason's question was that it is not possible to know that another entity is conscious, but it is possible to know that if it is conscious, replicating its behaviour would replicate its consciousness.

I think this is right if you add the following assumptions:
1. Fading qualia are impossible
2. Suddenly disappearing qualia are impossible

Otherwise I think rather than say "it is possible to know that it is conscious", we need to amend that to "it is impossible to disprove that it is conscious".

Though perhaps there's an argument to be made from the Church-Turing thesis, which pertains to the possible states of knowledge accessible to a computer program/software. If consciousness is viewed as software, then the Church-Turing thesis implies that software could never know/realize if its ultimate computing substrate changed.
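A minimal sketch of that point (a hypothetical setup, not a proof): the same program is run directly and through a toy layer of indirection standing in for a changed substrate, and nothing the program itself computes can distinguish the two.

```python
# Hypothetical sketch: the program's outputs are fixed by its definition,
# not by which "substrate" happens to execute it.

def program(x):
    # the software whose possible states of knowledge are at issue
    return x * x + 1

def native_substrate(prog, x):
    return prog(x)

def emulated_substrate(prog, x):
    # an extra indirection layer standing in for a different substrate
    deferred = [lambda: prog(x)]
    return deferred[0]()

for x in range(20):
    assert native_substrate(program, x) == emulated_substrate(program, x)
# Any "which substrate am I running on?" check the program could perform is
# itself just more computation, so it returns the same answer on both.
```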

But this is assuming the thing we're trying to prove, so I'm not sure it helps establish computationalism definitively.

Jason
 
 
I think what refer to as "very strange" is possible given a little fuzziness about being functionally identical.  Suppose his vision was replaced by some combination of sonar and radar.  He could be as close to you as a color blind person in his answers.

If the subject suddenly became colour blind or his vision were replaced by a combination of sonar and radar, while he may be able to navigate his way around normally there would be a test that could distinguish the change, like trying to pick a number in a coloured pattern, or simply asking him if he feels the same. Otherwise, in what sense is it meaningful to say there has been a change in qualia?


--
Stathis Papaioannou


Jason Resch

Jun 10, 2020, 12:00:41 PM
to everyth...@googlegroups.com
Hi Saibal,

I agree indirect mechanisms like looking at the resulting physics may be the best way to test it. I was curious if there are any direct ways to test it. It seems not, given the lack of any direct tests of consciousness.

Though most people admit other humans are conscious, many would reject the idea of a conscious computer. 

Computationalism seems right, but it also seems like something that by definition can't result in a failed test. So it has the appearance of not being falsifiable.

A single universe, or digital physics, would be evidence that either computationalism is false or the ontology is sufficiently small, but a finite/small ontology is doubtful for many reasons.

Jason

Stathis Papaioannou

Jun 10, 2020, 1:43:50 PM
to everyth...@googlegroups.com
Not logically impossible, but absurd. Though it is hard to pin down absurdity.

Otherwise I think rather than say "it is possible to know if it is consciousness", we need to amend to "it is impossible to disprove that it is conscious".

Thought perhaps there's an argument to be made from the church Turing theses, which pertains to possible states of knowledge accessible to a computer program/software. If consciousness is viewed as software then Church-Turing thesis implies that software could never know/realize if it's ultimate computing substrate changed.

But this is assuming the thing we're trying to prove, so I'm not sure it helps establish computationalism definitively.

Jason
 
 
I think what refer to as "very strange" is possible given a little fuzziness about being functionally identical.  Suppose his vision was replaced by some combination of sonar and radar.  He could be as close to you as a color blind person in his answers.

If the subject suddenly became colour blind or his vision were replaced by a combination of sonar and radar, while he may be able to navigate his way around normally there would be a test that could distinguish the change, like trying to pick a number in a coloured pattern, or simply asking him if he feels the same. Otherwise, in what sense is it meaningful to say there has been a change in qualia?


--
Stathis Papaioannou

--
Stathis Papaioannou

Brent Meeker

Jun 10, 2020, 4:01:31 PM
to everyth...@googlegroups.com


On 6/10/2020 7:07 AM, smitra wrote:
> I think it can be tested indirectly, because generic computational
> theories of consciousness imply a multiverse. If my consciousness is
> the result if a computation then because on the one hand any such
> computation necessarily involves a vast number of elementary bits and
> on he other hand whatever I'm conscious of is describable using only a
> handful of bits, the mapping between computational states and states
> of consciousness is N to 1 where N is astronomically large. So, the
> laws of physics we already know about must be effective laws where the
> statistical effects due to a self-localization uncertainty is already
> build into it.

That seems to be pulled out of the air.  First, some of the laws of
physics are not statistical, e.g. those based on symmetries.  They are
more easily explained as desiderata, i.e. we want our laws of physics to
be independent of location and direction and time of day.  And N >>
conscious information simply says there is a lot of physical reality of
which we are not aware.  It doesn't say that what we have picked out as
laws are statistical, only that they are not complete...which any
physicist would admit...and as far as we know they include inherent
randomness.  To insist that this randomness is statistical is just
postulating multiple worlds to avoid randomness.

>
> Bruno has argued on the basis of this to motivate his theory, but this
> is a generic feature of any theory that assumes computational theory
> of consciousness. In particular, computational theory of consciousness
> is incompatible with a single universe theory. So, if you prove that
> only a single universe exists, then that disproves the computational
> theory of consciousness.

No, see above.

> The details here then involve that computations are not well defined
> if you refer to a single instant of time, you need to at least appeal
> to a sequence of states the system over through. Consciousness cannot
> then be located at a single instant, in violating with our own
> experience.

I deny that our experience consists of instants without duration or direction.  This is an assumption made by computationalists to simplify their analysis.

Brent

PGC

Jun 10, 2020, 6:55:35 PM
to Everything List
Every piece of writing is a theory of mind; both within western science and beyond. 

What about the abilities to understand and use natural language, to come up with new avenues for scientific or creative inquiry, to experience qualia and report on them, adapting and dealing with unexpected circumstances through senses, and formulating + solving problems in benevolent ways by contributing towards the resilience of its community and environment? 

Trouble with this is that humans, even world leaders, fail those tests lol, but it's up to everybody, the AI and Computer Science folks in particular, to come up with the math, data, and complete their mission... and as amazing as developments have been around AI in the last couple of decades, I'm not certain we can pull it off, even if it would be pleasant to be wrong and some folks succeed. 

Even if folks do succeed, a context of militarized nation states and monopolistic corporations competing for resources in self-destructive, short term ways... will not exactly help towards NOT weaponizing AI. A transnational politics, economics, corporate law, values/philosophies, ethics, culture etc. to vanquish poverty and exploitation of people, natural resources, life; while being sustainable and benevolent stewards of the possibilities of life... would seem to be prerequisite to develop some amazing AI. 

Ideas are all out there but progressives are ineffective politically on a global scale. The right wing folks, finance guys, large irresponsible monopolistic corporations are much more effective in organizing themselves globally and forcing agendas down everybody's throats. So why wouldn't AI do the same? PGC


 

Jason

Bruno Marchal

Jun 11, 2020, 12:03:27 PM
to everyth...@googlegroups.com
On 9 Jun 2020, at 19:08, Jason Resch <jason...@gmail.com> wrote:

For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea:

"How can we know if a robot is conscious?”

That question is very different from “is functionalism/computationalism unfalsifiable?”.

Note that in my older paper, I relate computationalism to Putnam’s ambiguous functionalism, by defining computationalism as asserting the existence of a level of description of my body/brain such that I survive (my consciousness remains relatively invariant) with a digital machine (supposedly physically implemented) replacing my body/brain.




Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence.

I guess you mean “for all possible inputs”.




Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then identical inputs yield identical internal behavior (nerve activations, etc.) and outputs, in terms of muscle movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?

With computationalism (and perhaps without it), we cannot prove that anything is conscious (we can know our own consciousness, but still cannot justify it to ourselves in any public way, or third-person communicable way).




Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?

Computationalism is indirectly testable. By verifying the physics implied by the theory of consciousness, we verify it indirectly.

As you know, I define consciousness by that indubitable truth that any universal machine, cognitively rich enough to know that it is universal, finds by looking inward (in the Gödel-Kleene sense), and which is also not provable (not rationally justifiable) and not even definable without invoking *some* notion of truth. Then such consciousness appears to be a fixed point for the doubting procedure, like in Descartes, and it gets a key role: self-speeding-up relative to universal machine(s).

So, it seems so clear to me that nobody can prove that anything is conscious that I make it into one of the main ways to characterise it.

Consciousness is already very similar to consistency, which is (for effective theories, and sound machines) equivalent to a belief in some reality. No machine can prove its own consistency, and no machine can prove that there is a reality satisfying its beliefs.

In all cases, it is never the machine per se which is conscious, but the first person associated with the machine. There is a core universal person common to each of “us” (with “us” in a very large sense of universal numbers/machines).

Consciousness is not much more than knowledge, and in particular indubitable knowledge.

Bruno




Jason


Bruno Marchal

Jun 11, 2020, 12:07:47 PM
to everyth...@googlegroups.com
… at some level of description. 

A dreaming human is functionally equivalent to a stone. The first is conscious, the other is not. To avoid this, you need to make precise the level at which you define the functional equivalence.

Bruno



as you describe, is also conscious. This is the subject of David Chalmers’ paper:


--
Stathis Papaioannou


Bruno Marchal

Jun 11, 2020, 12:16:03 PM
to everyth...@googlegroups.com
This would not make sense with Digital Mechanism. Now, by assuming some NON-mechanism, maybe someone can still make sense of this.

That is why qualia and quanta are automatically present in *any* Turing universal realm (the model or semantics of any Turing universal or sigma_1 complete theory). That is why physicalists need to abandon mechanism: it invokes a non-Turing-emulable reality (like a primitive material substance) to make consciousness real for some types of universal machine, and unreal for others. As there is no evidence until now for such primitive matter, this is a bit like adding complexity to avoid the consequences of a simpler theory.

Bruno




Jason


Bruno Marchal

Jun 11, 2020, 12:31:29 PM
to everyth...@googlegroups.com
Indeed, and that sort of situation can happen when you duplicate very close to the right comp substitution level.

In that case, the only behavioural difference will be that 15 years after the brain substitution, you will fall in love with Miss X instead of Miss Y, without ever knowing that, without that new brain, you would have weighed some qualia differently, in favour of Miss Y. You did not survive in the strict sense of computationalism, but of course that is as good as having a different experience in life, which changes us even more.

Eventually, playing with such small differences, and adding them or subtracting them, can help to understand that we are all the same person, somehow masked and filtered by our memories/bodies.

Bruno





Brent


Bruno Marchal

Jun 11, 2020, 1:07:10 PM
to everyth...@googlegroups.com
There is a lot of fuzziness indeed, and it comes in two very different kinds. One is that functional equivalence makes sense only relative to a choice of substitution level. That fuzziness is about which probability predicate represents us, which “[]p” defines us, or which machine supports us. The set of such machines is not a computable set (the set of codes of any function is not a computable set, and that plays a role in the Measure problem).

Then you have the fuzziness due to the first person, third person, first person plural, etc. The “[]p & p” is not definable by the machine (by the “[]p”), so the first person “I” does not refer to anything third-person describable. But it is imposed by incompleteness; the machine can’t avoid it in introspection, it is indubitable, etc. Here the “functional equivalence” would mean the complete invariance of the (relative) experience. Consciousness enters through the mode “[]p & p”, but is not exactly equivalent to it, and the qualia appear through the most extended mode: []p & <>t & p (but also the graded variants, like [][]p & <><><>t & p, which play a role in the origin of space as a quale). Qualia require *some* consistency or reality to anticipate on (<>t).

Let me give you the (8) modes in the least theological way possible:

Truth
Mind
Soul
---
Quanta
Qualia

Which corresponds to the universal machine’s self-referential modes (defined in arithmetic, or through arithmetical truth)

p
[]p
[]p & p
---
[]p & <>t
[]p & <>t & p

(Cf Boolos 1979).

But I can’t resist and add the neoplatonist vocabulary:

The One,
The Intellect,
The Soul
Intelligible Matter
Sensible Matter


There are 8 of them, as Mind, Quanta and Qualia are separated along what the machine can justify and what is true but that the machine cannot justify. Eventually, they are more like 4 + 4 * infinity, by the graded variants mentioned above.

I just over-simplify a bit, as the quanta seems to appear more in the “5” modes than in the 4, but that remains to be detailed. It might be that a theorem prover for quanta and qualia requires a quantum algorithm. But everything deduced from this has been verified by nature, which would not be the case without quantum mechanics.
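For readability, the same correspondence in a single table (nothing beyond the lists above; the graded variants and the split into 8 along provable/true are omitted):

```latex
% \Box and \Diamond stand for the []p and <>t written in the text.
\begin{array}{lll}
\text{Truth}  & \text{The One}             & p \\
\text{Mind}   & \text{The Intellect}       & \Box p \\
\text{Soul}   & \text{The Soul}            & \Box p \wedge p \\
\hline
\text{Quanta} & \text{Intelligible Matter} & \Box p \wedge \Diamond t \\
\text{Qualia} & \text{Sensible Matter}     & \Box p \wedge \Diamond t \wedge p
\end{array}
```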


Bruno






Suppose his vision was replaced by some combination of sonar and radar.  He could be as close to you as a color blind person in his answers.


Brent


Brent Meeker

Jun 11, 2020, 2:34:49 PM
to everyth...@googlegroups.com


On 6/10/2020 8:50 AM, Jason Resch wrote:
> Thought perhaps there's an argument to be made from the church Turing
> theses, which pertains to possible states of knowledge accessible to a
> computer program/software. If consciousness is viewed as software then
> Church-Turing thesis implies that software could never know/realize if
> it's ultimate computing substrate changed.

I don't understand the import of this.  The very concept of software
mean "independent of hardware" by definition.  It is not affected by
whether CT is true or not, whether the computation is finite or not.  If
you think that consciousness evolved then it is an obvious inference
that consciousness would not include consciousness of its hardware
implementation.

Brent

Brent Meeker

unread,
Jun 11, 2020, 3:26:30 PM6/11/20
to everyth...@googlegroups.com


On 6/11/2020 9:03 AM, Bruno Marchal wrote:

On 9 Jun 2020, at 19:08, Jason Resch <jason...@gmail.com> wrote:

For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea:

"How can we know if a robot is conscious?”

That question is very different than “is functionalism/computationalism unfalsifiable?”.

Note that in my older paper, I relate computationalism to Putnam’s ambiguous functionalism, by defining computationalism by asserting the existence of a level of description of my body/brain such that I survive (my consciousness remains relatively invariant) with a digital machine (supposedly physically implemented) replacing my body/brain.




Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence.

I guess you mean “for all possible inputs”.




Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then identical inputs yield identical internal behavior (nerve activations, etc.) and outputs, in terms of muscle movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?

With computationalism (and perhaps without it), we cannot prove that anything is conscious (we can know our own consciousness, but still cannot justify it to ourselves in any public or third-person communicable way).




Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?

Computationalism is indirectly testable. By verifying the physics implied by the theory of consciousness, we verify it indirectly.

As you know, I define consciousness by that indubitable truth that all universal machines, cognitively rich enough to know that they are universal, find by looking inward (in the Gödel-Kleene sense), and which is also non-provable (non rationally justifiable) and even non-definable without invoking *some* notion of truth. Then such consciousness appears to be a fixed point for the doubting procedure, as in Descartes, and it gets a key role: self-speeding-up relative to universal machine(s).

So, it seems so clear to me that nobody can prove that anything is conscious that I make this into one of the main ways to characterise it.

Of course as a logician you tend to use "proof" to mean deductive proof...but then you switch to a theological attitude toward the premises you've used and treat them as given truths, instead of mere axioms.  I appreciate your categorization of logics of self-reference.  But I  doubt that it has anything to do with human (or animal) consciousness.  I don't think my dog is unconscious because he doesn't understand Goedelian incompleteness.  And I'm not conscious because I do.  I'm conscious because of the Darwinian utility of being able to imagine myself in hypothetical situations.



Consciousness is already very similar to consistency, which is (for effective theories and sound machines) equivalent to a belief in some reality. No machine can prove its own consistency, and no machine can prove that there is a reality satisfying its beliefs.

First, I can't prove it because such a proof would be relative to premises which are simply my beliefs.  Second, I can prove it in the sense of jurisprudence...i.e. beyond reasonable doubt.  Science doesn't care about "proofs", only about evidence.

Brent


In all cases, it is never the machine per se which is conscious, but the first person associated with the machine. There is a core universal person common to each of “us” (with “us” taken in a very large sense of universal numbers/machines).

Consciousness is not much more than knowledge, and in particular indubitable knowledge.

Bruno




Jason


Jason Resch

unread,
Jun 12, 2020, 2:22:25 PM6/12/20
to Everything List
On Wed, Jun 10, 2020 at 5:55 PM PGC <multipl...@gmail.com> wrote:


On Tuesday, June 9, 2020 at 7:08:30 PM UTC+2, Jason wrote:
For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence. Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then identical inputs yield identical internal behavior (nerve activations, etc.) and outputs, in terms of muscle movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?

Every piece of writing is a theory of mind; both within western science and beyond. 

What about the abilities to understand and use natural language, to come up with new avenues for scientific or creative inquiry, to experience qualia and report on them, adapting and dealing with unexpected circumstances through senses, and formulating + solving problems in benevolent ways by contributing towards the resilience of its community and environment? 

Trouble with this is that humans, even world leaders, fail those tests lol, but it's up to everybody, the AI and Computer Science folks in particular, to come up with the math, data, and complete their mission... and as amazing as developments have been around AI in the last couple of decades, I'm not certain we can pull it off, even if it would be pleasant to be wrong and some folks succeed. 

It's interesting you bring this up, I just wrote an article about the present capabilities of AI: https://alwaysasking.com/when-will-ai-take-over/
 

Even if folks do succeed, a context of militarized nation states and monopolistic corporations competing for resources in self-destructive, short term ways... will not exactly help towards NOT weaponizing AI. A transnational politics, economics, corporate law, values/philosophies, ethics, culture etc. to vanquish poverty and exploitation of people, natural resources, life; while being sustainable and benevolent stewards of the possibilities of life... would seem to be prerequisite to develop some amazing AI. 

Ideas are all out there but progressives are ineffective politically on a global scale. The right wing folks, finance guys, large irresponsible monopolistic corporations are much more effective in organizing themselves globally and forcing agendas down everybody's throats. So why wouldn't AI do the same? PGC


AI will either be a blessing or a curse. I don't think it can be anything in the middle.

Jason 

Jason Resch

unread,
Jun 12, 2020, 2:26:37 PM6/12/20
to Everything List
On Thu, Jun 11, 2020 at 11:03 AM Bruno Marchal <mar...@ulb.ac.be> wrote:

On 9 Jun 2020, at 19:08, Jason Resch <jason...@gmail.com> wrote:

For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea:

"How can we know if a robot is conscious?”

That question is very different than “is functionalism/computationalism unfalsifiable?”.

Note that in my older paper, I relate computationalism to Putnam’s ambiguous functionalism, by defining computationalism by asserting the existence of a level of description of my body/brain such that I survive (my consciousness remains relatively invariant) with a digital machine (supposedly physically implemented) replacing my body/brain.




Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence.

I guess you mean “for all possible inputs”.




Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then identical inputs yield identical internal behavior (nerve activations, etc.) and outputs, in terms of muscle movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?

With computationalism (and perhaps without it), we cannot prove that anything is conscious (we can know our own consciousness, but still cannot justify it to ourselves in any public or third-person communicable way).




Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?

Computationalism is indirectly testable. By verifying the physics implied by the theory of consciousness, we verify it indirectly.

As you know, I define consciousness by that indubitable truth that all universal machines, cognitively rich enough to know that they are universal, find by looking inward (in the Gödel-Kleene sense), and which is also non-provable (non rationally justifiable) and even non-definable without invoking *some* notion of truth. Then such consciousness appears to be a fixed point for the doubting procedure, as in Descartes, and it gets a key role: self-speeding-up relative to universal machine(s).

So, it seems so clear to me that nobody can prove that anything is conscious that I make this into one of the main ways to characterise it.

Consciousness is already very similar to consistency, which is (for effective theories and sound machines) equivalent to a belief in some reality. No machine can prove its own consistency, and no machine can prove that there is a reality satisfying its beliefs.

In all cases, it is never the machine per se which is conscious, but the first person associated with the machine. There is a core universal person common to each of “us” (with “us” taken in a very large sense of universal numbers/machines).

Consciousness is not much more than knowledge, and in particular indubitable knowledge.

Bruno




So to summarize: is it right to say that our only hope of proving anything about which theory of consciousness is correct, or any fact concerning the consciousness of others, will rely on indirect tests that involve one's own first-person experiences?  (Such as whether our apparent reality becomes fuzzy below a certain level.)

Jason

Jason Resch

unread,
Jun 12, 2020, 2:39:12 PM6/12/20
to Everything List
On Thu, Jun 11, 2020 at 1:34 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 6/10/2020 8:50 AM, Jason Resch wrote:
> Thought perhaps there's an argument to be made from the Church-Turing
> thesis, which pertains to possible states of knowledge accessible to a
> computer program/software. If consciousness is viewed as software then the
> Church-Turing thesis implies that software could never know/realize if
> its ultimate computing substrate changed.

I don't understand the import of this.  The very concept of software
mean "independent of hardware" by definition.  It is not affected by
whether CT is true or not, whether the computation is finite or not.

You're right. The only relevance of CT is it means any software can be run by any universal hardware. There's not some software that requires special hardware of a certain kind.
 
  If
you think that consciousness evolved then it is an obvious inference
that consciousness would not include consciousness of its hardware
implementation.

If consciousness is software, it can't know its hardware. But some like Searle or Penrose think the hardware is important.
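To make that concrete, here is a minimal Python sketch (the names and numbers are invented for the illustration): the same "mind" procedure is run on two different "hardwares", i.e. two implementations of its primitive operation, and nothing the procedure itself computes distinguishes which one ran it.

def mind(stimulus, add):
    # the "software": it only composes its primitives, never inspects them
    intensity = add(stimulus, stimulus)
    return "ouch" if intensity > 10 else "fine"

def hardware_a(x, y):
    return x + y                       # substrate A: native addition

def hardware_b(x, y):
    while y:                           # substrate B: bitwise ripple-carry addition
        x, y = x ^ y, (x & y) << 1
    return x

for stimulus in (3, 8):
    assert mind(stimulus, hardware_a) == mind(stimulus, hardware_b)

Of course this only illustrates what substrate independence means; it does not settle whether consciousness has it, which is exactly where Searle and Penrose object.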

Jason

Brent Meeker

unread,
Jun 12, 2020, 2:52:14 PM6/12/20
to everyth...@googlegroups.com
I think the hardware is important when you're talking about a computer that is immersed in some environment.  The hardware can define the interaction with that environment.  We idealize the brain as a computer independent of its physical instantiation...but that's just a theoretical simplification.

Brent

smitra

unread,
Jun 12, 2020, 3:26:29 PM6/12/20
to everyth...@googlegroups.com
Yes, I agree that there is no hope for a direct test. Based on the
finite information a conscious agent has, which is less than the amount
of information contained in the system that renders the consciousness, a
conscious agent should not be thought of as being located precisely in a
state like some computer or a brain. Considering one particular
implementation, like one particular computer running some algorithm, and
then asking if that thing is conscious, is perhaps not the right way to
think about this. It seems to me that we need to consider consciousness
in the opposite way.
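As a toy Python sketch of that information point (all numbers made up for the illustration): what an agent can report keeps only a few bits of a much larger machine state, so astronomically many distinct microstates are indistinguishable from the inside.

from collections import Counter
from itertools import product

STATE_BITS = 16                      # stand-in for the vast underlying state

def report(state):
    return state[:2]                 # all the agent can describe: 2 bits

counts = Counter(report(s) for s in product((0, 1), repeat=STATE_BITS))
print(counts)                        # each 2-bit report covers 2**14 = 16384 microstates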

If we start with some set of conscious states, then each element of that
set has a subjective notion of its state. And that can contain
information about being implemented by a computer or a brain. Also, the
question about continuity, where we ask whether we are the same persons
as yesterday, can be addressed by taking the set of all conscious states
as fundamental. Every conscious experience, whether that's me typing this
message or a T. rex 68 million years ago, is a different state of the
same conscious entity.

The question then becomes whether there exists a conscious state
corresponding to knowing that its brain is a computer.

Saibal


smitra

unread,
Jun 12, 2020, 3:56:29 PM6/12/20
to everyth...@googlegroups.com
On 10-06-2020 22:01, 'Brent Meeker' via Everything List wrote:
> On 6/10/2020 7:07 AM, smitra wrote:
>> I think it can be tested indirectly, because generic computational
>> theories of consciousness imply a multiverse. If my consciousness is
>> the result of a computation then, because on the one hand any such
>> computation necessarily involves a vast number of elementary bits and
>> on the other hand whatever I'm conscious of is describable using only a
>> handful of bits, the mapping between computational states and states
>> of consciousness is N to 1 where N is astronomically large. So, the
>> laws of physics we already know about must be effective laws where the
>> statistical effects due to a self-localization uncertainty are already
>> built into it.
>
> That seems to be pulled out of the air.  First, some of the laws of
> physics are not statistical, e.g. those based on symmetries.  They are
> more easily explained as desiderata, i.e. we want our laws of physics
> to be independent of location and direction and time of day.  And N >>
> conscious information simply says there is a lot of physical reality
> of which we are not aware.  It doesn't say that what we have picked
> out as laws are statistical, only that they are not complete...which
> any physicist would admit...and as far as we know they include
> inherent randomness.  To insist that this randomness is statistical is
> just postulating multiple worlds to avoid randomness.
>

Yes, the way we do physics assumes QM and statistical effects are due to
the rules of QM. But in a more general multiverse setting where we
consider different laws of physics or different initial conditions, the
notion of single universes with well defined laws becomes ambiguous.
Let's assume that consciousness is in general generated by algorithms
which can be implemented in many different universes with different laws
as well as in different locations within the same universe where the
local environments are similar but not exactly the same. Then the
algorithm plus its local environment evolves in each universe according
to the laws that apply in each universe. But because the conscious agent
cannot locate itself in one or the other universe, one can now also
consider time evolutions involving random jumps from one universe to
another. And so the whole notion of fixed universes with well defined
laws breaks down.


>>
>> Bruno has argued on the basis of this to motivate his theory, but this
>> is a generic feature of any theory that assumes computational theory
>> of consciousness. In particular, computational theory of consciousness
>> is incompatible with a single universe theory. So, if you prove that
>> only a single universe exists, then that disproves the computational
>> theory of consciousness.
>
> No, see above.
>
>> The details here then involve that computations are not well defined
>> if you refer to a single instant of time, you need to at least appeal
>> to a sequence of states the system goes through. Consciousness cannot
>> then be located at a single instant, in violation of our own
>> experience.
>
> I deny that our experience consists of instants without duration or
> direction.  This is an assumption by computationalists made to simplify
> their analysis.
>
> Brent

If one needs to appeal to finite time intervals in a single universe
setting, then given that in principle observers only have direct access
to the exact moment they exist, one ends up appealing to another sort of
parallel worlds, one that single universe advocates somehow don't seem
to have problems with.

Saibal

Brent Meeker

unread,
Jun 12, 2020, 4:15:32 PM6/12/20
to everyth...@googlegroups.com


On 6/12/2020 12:56 PM, smitra wrote:
The details here then involve that computations are not well defined if you refer to a single instant of time; you need to at least appeal to a sequence of states the system goes through. Consciousness cannot then be located at a single instant, in violation of our own experience.

I deny that our experience consists of instants without duration or
direction.  This is an assumption by computationalists made to simplify
their analysis.

Brent

If one needs to appeal to finite time intervals in a single universe setting, then given that in principle observers only have direct access to the exact moment they exist

No.  Finite intervals may overlap and there is no "exact moment they exist".

Brent

Brent Meeker

unread,
Jun 12, 2020, 4:35:18 PM6/12/20
to everyth...@googlegroups.com


On 6/12/2020 12:56 PM, smitra wrote:
> Yes, the way we do physics assumes QM and statistical effects are due
> to the rules of QM. But in a more general multiverse setting

Why should we consider such a thing.

> where we consider different laws of physics or different initial
> conditions, the notion of single universes with well defined laws
> becomes ambiguous.

Does it?  How can there be multiples if there are not singles?

> Let's assume that consciousness is in general generated by algorithms
> which can be implemented in many different universes with different
> laws as well as in different locations within the same universe where
> the local environments are similar but not exactly the same. Then the
> algorithm plus its local environment

Algorithm + environment sounds like a category error.

Brent

Bruno Marchal

unread,
Jun 13, 2020, 3:48:26 AM6/13/20
to everyth...@googlegroups.com
The “brute” consciousness does not evolve. It is the consciousness of the universal person already brought by the universal machine or number (a finite thing). It is filtered by its consistent extensions, the first main one being the addition of the induction axioms (making it obey G*).

The machine cannot know its hardware through introspection, but it can know it through logic + the mechanist hypothesis, in which case its hardware has to comply with the logic of the machine's observables (prediction, []p & <>t).

So, the machine can test mechanism, by comparing the unique possible physics in its head with what it sees. The result is that there is no evidence for some primitive matter, or for physicalism, yet. Nature follows the arithmetical (but non-computable) laws of physics derived from Mechanism (a hypothesis in cognitive science).

Bruno





>
> Brent
>

Bruno Marchal

unread,
Jun 13, 2020, 4:01:44 AM6/13/20
to everyth...@googlegroups.com
On 11 Jun 2020, at 21:26, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 6/11/2020 9:03 AM, Bruno Marchal wrote:

On 9 Jun 2020, at 19:08, Jason Resch <jason...@gmail.com> wrote:

For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea:

"How can we know if a robot is conscious?”

That question is very different than “is functionalism/computationalism unfalsifiable?”.

Note that in my older paper, I relate computationalism to Putnam’s ambiguous functionalism, by defining computationalism by asserting the existence of a level of description of my body/brain such that I survive (my consciousness remains relatively invariant) with a digital machine (supposedly physically implemented) replacing my body/brain.




Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence.

I guess you mean “for all possible inputs”.




Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then identical inputs yield identical internal behavior (nerve activations, etc.) and outputs, in terms of muscle movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?

With computationalism (and perhaps without it), we cannot prove that anything is conscious (we can know our own consciousness, but still cannot justify it to ourselves in any public or third-person communicable way).




Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?

Computationalism is indirectly testable. By verifying the physics implied by the theory of consciousness, we verify it indirectly.

As you know, I define consciousness by that indubitable truth that all universal machines, cognitively rich enough to know that they are universal, find by looking inward (in the Gödel-Kleene sense), and which is also non-provable (non rationally justifiable) and even non-definable without invoking *some* notion of truth. Then such consciousness appears to be a fixed point for the doubting procedure, as in Descartes, and it gets a key role: self-speeding-up relative to universal machine(s).

So, it seems so clear to me that nobody can prove that anything is conscious that I make this into one of the main ways to characterise it.

Of course as a logician you tend to use "proof" to mean deductive proof...but then you switch to a theological attitude toward the premises you've used and treat them as given truths, instead of mere axioms. 

Here I was using “proof” in its common informal sense; it is more S4Grz1 than G (more []p & p than []p; note that the machine cannot formalise []p & p).




I appreciate your categorization of logics of self-reference. 


It is not really mine. All sound universal machines get it, sooner or later.



But I  doubt that it has anything to do with human (or animal) consciousness.  I don't think my dog is unconscious because he doesn't understand Goedelian incompleteness. 

This is like saying that we don’t need superstring theory to appreciate a pizza. Your dog does not need to understand Gödel’s theorem to have its consciousness explained by machine theology.



And I'm not conscious because I do.  I'm conscious because of the Darwinian utility of being able to imagine myself in hypothetical situations.

If that is true, then consciousness is purely functional, which is contradicted by any personal data. As I have explained, consciousness accompanies such imagination, but that imagination filters consciousness. It cannot create it, just as two apples cannot create the number two.






Consciousness is already very similar to consistency, which is (for effective theories and sound machines) equivalent to a belief in some reality. No machine can prove its own consistency, and no machine can prove that there is a reality satisfying its beliefs.

First, I can't prove it because such a proof would be relative to premises which are simply my beliefs.

But you can still search for a simpler theory. It will be shared by more people. Doing metaphysics with the scientific method means that we limit the ontological commitment as much as possible.


Second, I can prove it in the sense of jurisprudence...i.e. beyond reasonable doubt.  Science doesn't care about "proofs", only about evidence.


The whole point is that there is no evidence for primary matter. No one doubts the existence of matter. The question is about the need to assume it (primary matter), or whether its appearance admits a simpler, and testable, theory, like the fact that all computations are executed in (the standard model of) arithmetic.

Bruno




Brent


In all cases, it is never the machine per se which is conscious, but the first person associated with the machine. There is a core universal person common to each of “us” (with “us” taken in a very large sense of universal numbers/machines).

Consciousness is not much more than knowledge, and in particular indubitable knowledge.

Bruno




Jason


Bruno Marchal

unread,
Jun 13, 2020, 4:24:26 AM6/13/20
to everyth...@googlegroups.com
That is strange. I would say that “AI”, like any “I”, will be a blessing *and* a curse. Something capable of the best, and of the worst, at least locally. AI is like life, which can be a blessing or a curse, according to possible contingent happenings. We never get total control once we invite universal beings to the table of discussion.

I don’t believe in AI. All universal machines are intelligent at the start, and can only become more stupid (or, at best, stay equal). The consciousness of bacteria and humans is the same consciousness (the RA consciousness). Löbianity is the first (unavoidable) step toward “possible stupidity” (cf. G* proves <>[]f).  Humanity is a byproduct of bacteria's attempts to get social security… (to be short: it is slightly more complex, but I don’t want to be led into too much technicality right now).


Bruno 



Jason 


Bruno Marchal

unread,
Jun 13, 2020, 4:29:14 AM6/13/20
to everyth...@googlegroups.com
For the first-person-plural test, yes. But for the first-person singular, personal “test”, it is all up to you and your experience, and that will not be communicable, not even to yourself, due to anosognosia. You might believe sincerely that you have completely survived the classical teleportation, yet now be deaf and blind and fail to realise this, lacking also the ability to realise it.

Bruno



Jason


Bruno Marchal

unread,
Jun 13, 2020, 4:35:35 AM6/13/20
to everyth...@googlegroups.com
On 12 Jun 2020, at 20:52, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 6/12/2020 11:38 AM, Jason Resch wrote:


On Thu, Jun 11, 2020 at 1:34 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 6/10/2020 8:50 AM, Jason Resch wrote:
> Thought perhaps there's an argument to be made from the Church-Turing
> thesis, which pertains to possible states of knowledge accessible to a
> computer program/software. If consciousness is viewed as software then the
> Church-Turing thesis implies that software could never know/realize if
> its ultimate computing substrate changed.

I don't understand the import of this.  The very concept of software
mean "independent of hardware" by definition.  It is not affected by
whether CT is true or not, whether the computation is finite or not.

You're right. The only relevance of CT is it means any software can be run by any universal hardware. There's not some software that requires special hardware of a certain kind.
 
  If
you think that consciousness evolved then it is an obvious inference
that consciousness would not include consciousness of its hardware
implementation.

If consciousness is software, it can't know its hardware. But some like Searle or Penrose think the hardware is important.

I think the hardware is important when you're talking about a computer that is immersed in some environment.

That is right, but if you assume mechanism, that hardware comes from (non-computable) statistics on all the software run in arithmetic.



The hardware can define the interaction with that environment.

The environment is "made of” all computations getting at our relative computational states.




We idealize the brain as a computer independent of its physical instantiation...but that's just a theoretical simplification.

Not when you assume mechanism, in which case it is the idea of a “physical universe” which becomes the theoretical simplification.

Bruno




Brent



Bruno Marchal

unread,
Jun 13, 2020, 4:41:53 AM6/13/20
to everyth...@googlegroups.com

> On 12 Jun 2020, at 22:35, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
>
>
>
> On 6/12/2020 12:56 PM, smitra wrote:
>> Yes, the way we do physics assumes QM and statistical effects are due to the rules of QM. But in a more general multiverse setting
>
> Why should we consider such a thing.

Because you need arithmetic to define “digital machine”, but once you have arithmetic you get all computations, and the working first-person predictability has to be justified by the self-referential abilities of the machine.




>
>> where we consider different laws of physics or different initial conditions, the notion of single universes with well defined laws becomes ambiguous.
>
> Does it? How can there be multiples if there are not singles?

That is a good point. “Many-universes” is still a simplified notion. There are only relative states in arithmetic. Eventually, digital mechanism leads to zero physical universes, just a web of the numbers’ dreams.



>
>> Let's assume that consciousness is in general generated by algorithms which can be implemented in many different universes with different laws as well as in different locations within the same universe where the local environments are similar but not exactly the same. Then the algorithm plus its local environment
>
> Algorithm + environment sounds like a category error.


Algorithm + primitively physical environment is a category error. We can say that.

Bruno




>
> Brent
>
>> evolves in each universe according to the laws that apply in each universe. But because the conscious agent cannot locate itself in one or the other universe, one can now also consider time evolutions involving random jumps from one to the other universes. And so the whole notion of fixed universes with well defined laws breaks down.
>
>

Brent Meeker

unread,
Jun 13, 2020, 11:43:48 PM6/13/20
to everyth...@googlegroups.com
That doesn't follow.  You've implicitly assumed that all those excess computational states exist...which is then begging the question of other worlds. 

Brent


Bruno has argued on the basis of this to motivate his theory, but this is a generic feature of any theory that assumes a computational theory of consciousness. In particular, the computational theory of consciousness is incompatible with a single-universe theory. So, if you prove that only a single universe exists, then that disproves the computational theory of consciousness. The details here then involve that computations are not well defined if you refer to a single instant of time; you need to at least appeal to a sequence of states the system goes through. Consciousness cannot then be located at a single instant, in violation of our own experience. Therefore either single-world theories are false or the computational theory of consciousness is false.

Saibal


Hi Saibal,

I agree indirect mechanisms, like looking at the resulting physics, may be the best way to test it. I was curious if there are any direct ways to test it. It seems not, given the lack of any direct tests of consciousness.

Though most people admit other humans are conscious, many would reject the idea of a conscious computer. 

Computationalism seems right, but it also seems like something that by definition can't result in a failed test. So it has the appearance of not being falsifiable.

A single universe, or digital physics would be evidence that either computationalism is false or the ontology is sufficiently small, but a finite/small ontology is doubtful for many reasons.

Jason

Bruno Marchal

unread,
Jun 14, 2020, 7:17:21 AM6/14/20
to everyth...@googlegroups.com
That doesn't follow.  You've implicitly assumed that all those excess computational states exist…

They exist in elementary arithmetic. If you believe in theorems like “there is no biggest prime”, then you have to believe in all computations, or you need to reject Church’s thesis and abandon the computationalist hypothesis. The notion of digital machine does not make sense if you believe that elementary arithmetic is wrong.
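As an aside, the kind of arithmetical fact appealed to here is elementary; a small Python illustration of Euclid's construction behind "there is no biggest prime", nothing more:

def next_prime_outside(primes):
    n = 1
    for p in primes:
        n *= p
    n += 1                       # n is congruent to 1 modulo every listed prime
    d = 2
    while n % d:                 # the smallest divisor > 1 of n is a prime...
        d += 1
    return d                     # ...and it cannot be in the given list

print(next_prime_outside([2, 3, 5, 7]))   # 2*3*5*7 + 1 = 211, itself prime

Given any finite list of primes, the construction produces a prime outside it, so there is no biggest one.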

I hear you! You are saying that the existence of numbers is like the existence of Sherlock Holmes, but that leads to a gigantic multiverse, with infinitely many Brents having the same conversation with me, here and now, and they all become zombies, except one, because some Reality wants it that way?


which is then begging the question of other worlds. 

You are the one adding a metaphysical assumption, to make some people, whose existence in arithmetic follows from digital mechanism, into zombies.

That is no different from invoking a personal god to claim that someone else has no soul, and can be enslaved … perhaps?

That the physical universe is not a “personal god” does not make its existence less absurd than to use a personal god to explain everything.

In fact, the very existence of the appearance of a physical universe, obeying some mathematics, is a confirmation of Mechanism, which predicts that *all* universal machines get that illusion/dream/experience. This includes the fact that by looking closely (below the substitution level), we find the many "apparent parallel computations", and that the laws of physics, which look computable above that level, look not entirely computable below it.

So, I think that you might be the one begging the question, by invoking your own ontological commitment without any evidence, I'm afraid.

Bruno




Brent


Bruno has argued on the basis of this to motivate his theory, but this is a generic feature of any theory that assumes a computational theory of consciousness. In particular, the computational theory of consciousness is incompatible with a single-universe theory. So, if you prove that only a single universe exists, then that disproves the computational theory of consciousness. The details here then involve that computations are not well defined if you refer to a single instant of time; you need to at least appeal to a sequence of states the system goes through. Consciousness cannot then be located at a single instant, in violation of our own experience. Therefore either single-world theories are false or the computational theory of consciousness is false.

Saibal


Hi Saibal,

I agree indirect mechanisms, like looking at the resulting physics, may be the best way to test it. I was curious if there are any direct ways to test it. It seems not, given the lack of any direct tests of consciousness.

Though most people admit other humans are conscious, many would reject the idea of a conscious computer. 

Computationalism seems right, but it also seems like something that by definition can't result in a failed test. So it has the appearance of not being falsifiable.

A single universe, or digital physics would be evidence that either computationalism is false or the ontology is sufficiently small, but a finite/small ontology is doubtful for many reasons.

Jason

Brent Meeker

unread,
Jun 14, 2020, 3:45:53 PM6/14/20
to everyth...@googlegroups.com
As I've written many times, the arithmetic is true if its axioms are.  But true=/=real.


 

I hear you! You are saying that the existence of numbers is like the existence of Sherlock Holmes, but that leads to a gigantic multiverse,

Only via your assumption that arithmetic constitutes universes.  I take it as a reductio.


with infinitely many Brents having the same conversation with me, here and now, and they all become zombies, except one, because some Reality wants it that way?


which is then begging the question of other worlds. 

You are the one adding a metaphysical assumption, to make some people, whose existence in arithmetic follows from digital mechanism, into zombies.

You're the one asserting that people "exist in arithmetic" whatever that may mean.

Brent

PGC

unread,
Jun 14, 2020, 4:33:36 PM6/14/20
to Everything List


On Friday, June 12, 2020 at 8:22:25 PM UTC+2, Jason wrote:


On Wed, Jun 10, 2020 at 5:55 PM PGC <multipl...@gmail.com> wrote:


On Tuesday, June 9, 2020 at 7:08:30 PM UTC+2, Jason wrote:
For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence. Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then identical inputs yield identical internal behavior (nerve activations, etc.) and outputs, in terms of muscle movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?

Every piece of writing is a theory of mind; both within western science and beyond. 

What about the abilities to understand and use natural language, to come up with new avenues for scientific or creative inquiry, to experience qualia and report on them, adapting and dealing with unexpected circumstances through senses, and formulating + solving problems in benevolent ways by contributing towards the resilience of its community and environment? 

Trouble with this is that humans, even world leaders, fail those tests lol, but it's up to everybody, the AI and Computer Science folks in particular, to come up with the math, data, and complete their mission... and as amazing as developments have been around AI in the last couple of decades, I'm not certain we can pull it off, even if it would be pleasant to be wrong and some folks succeed. 

It's interesting you bring this up, I just wrote an article about the present capabilities of AI: https://alwaysasking.com/when-will-ai-take-over/

You're quite the optimist. In a geopolitical setting as chaotic and disorganized as ours, it's plausible that we wouldn't be able to tell if it happened. Strategically, with this many crazy apes, weapons, ideologies, with platonists in particular, the first step for super intelligent AI would be to conceal its own existence; that way a lot of computational time would be spared from having to read lists of apes making all kinds of linguistic category errors... whining about whether abstractions are more real than stuff or whether stuff is what helps make abstractions possible, or whether freezers are conscious, or worms should have healthcare, or clinching the thought experiment that will just magically convince all people who we project to believe in some wrong stuff to believe in abstractions...

My home-grown AI oracle says: Who cares? If believing in abstractions forces the same colonial mindset of "who was the Columbus who discovered which abstraction", with names of the saints of abstractions, their hierarchies, hagiographies, their gods, their bibles to which everybody has to submit... it still counts as discourse that aims to control interpretation. Control. And that's exactly what people with stuff have done with words/weapons for thousands of years: some dude with the biggest weapon, gun, ammunition, explanation, expertise, ignorance measure wins the control prize. Then they die or the next dude kills them. The AI would do right to weaponize that lust for control and pry it out of our hands with offers we couldn't refuse. And our fellow human control freaks will keep trying the same, eyeing wallets and data. People seem to enjoy the game of robbing and getting robbed, perhaps because it's more motivating than the TRUTH with big philosophical Hollywood lights.
 
 

Even if folks do succeed, a context of militarized nation states and monopolistic corporations competing for resources in self-destructive, short term ways... will not exactly help towards NOT weaponizing AI. A transnational politics, economics, corporate law, values/philosophies, ethics, culture etc. to vanquish poverty and exploitation of people, natural resources, life; while being sustainable and benevolent stewards of the possibilities of life... would seem to be prerequisite to develop some amazing AI. 

Ideas are all out there but progressives are ineffective politically on a global scale. The right wing folks, finance guys, large irresponsible monopolistic corporations are much more effective in organizing themselves globally and forcing agendas down everybody's throats. So why wouldn't AI do the same? PGC


AI will either be a blessing or a curse. I don't think it can be anything in the middle.

Would folks even be able to distinguish a blessing from a curse? I don't see it. The blessing could be as insulting as having the list agree on stuff and abstractions. For example, who cares about who or where somebody is drinking coffee after some alleged teleportation, the question is: do you prefer coffee as an abstraction or on the phenomenological table with or without alleged sugar? Answer: Y'all a bunch of traitors towards the god of coffee and critics without aims or inspiration, believing in AI that would be sensible if it never came. lol 

Do we even realize the price of a non-terminator AI with some minimal specification, e.g. benevolent towards life? That it would have to be so organized that all lifeforms get healthcare, real estate, universal income in a hundred crypto-currencies, bank accounts for all of them, a meaningful job for psychological health, pursuit of happiness, voting power, taxes, and classical coffee without QM? All of them? Like the average unexceptional cockroach has more stocks and more elaborate sustainable investment portfolios than us right now? While we're still stuck believing in things like "nation states", leaders, debating whether abstractions are more real than stuff, at the mere collective discourse threshold of "Hey some people apparently don't want to be treated like trash and shot like animals"?

It's a long road. Even in the most optimistic case, I doubt we could collectively understand such an AI, if it could even exist. PGC

Philip Thrift

unread,
Jun 15, 2020, 2:41:07 AM6/15/20
to Everything List


On Sunday, June 14, 2020 at 2:45:53 PM UTC-5, Brent wrote:

 true=/=real.


(Venn diagram) 

       ~real ∩ true = ?

@philipthrift

Alan Grayson

unread,
Jun 15, 2020, 3:39:43 AM6/15/20
to Everything List


On Tuesday, June 9, 2020 at 11:08:30 AM UTC-6, Jason wrote:
For the present discussion/question, I want to ignore the testable implications of computationalism on physical law, and instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact computational emulation, meaning exact functional equivalence. Then let's say we can exactly control sensory input and perfectly monitor motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then identical inputs yield identical internal behavior (nerve activations, etc.) and outputs, in terms of muscle movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and ask them both to describe the pain, both will speak identical sentences. Both will say it hurts when asked, and if asked to write a paragraph describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific objective third-person analysis or test is doomed to fail to find any distinction in behaviors, and thus necessarily fails in its ability to disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind before it reaches this testing roadblock?

Jason

Words alone won't prove anything. Just lay both suckers on an operating table and do some minor invasive surgery. AG

Bruno Marchal

unread,
Jun 15, 2020, 6:28:09 AM6/15/20
to everyth...@googlegroups.com
More precisely: a theorem is true if the axioms are true, and if the rules of inference preserve truth. OK.
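A tiny sanity check of the second clause, for one rule (modus ponens) over all classical valuations; this is purely illustrative:

from itertools import product

def implies(a, b):
    return (not a) or b

# in every valuation where p and (p -> q) are both true, q is true
assert all(q for p, q in product([False, True], repeat=2) if p and implies(p, q))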



But true=/=real.

In logic, true always means “true in a reality”. Truth is a notion relative to a reality (called a “model” by logicians).

But for arithmetic, we do have a pretty good idea of what is the “standard model of arithmetic” (the structure (N, 0, s, +, *)), and by true (without further precision) we always mean “true in the standard model of arithmetic”.






 

I hear you! You are saying that the existence of number is like the existence of Sherlock Holmes, but that leads to a gigantic multiverse,

Only via your assumption that arithmetic constitutes universes.  I take it as a reductio.

Not at all. I use only the provable and proved fact that the standard model of arithmetic implements and runs all computations, with “implement” and “run” defined in computer science (by Turing, without any assumption in physics).

If you believe in mechanism, and in Kxy = x and Sxyz = xz(yz), then I can prove that there is an infinity of Brents in arithmetic, having the very conversation that we have here and now. That does not need any other assumption than Digital Mechanism. Even without mechanism, the fact remains that all computations are run in arithmetic. That is why, if mechanism is false, the arithmetical reality (the standard model of arithmetic) is full of zombies.
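To make the two equations concrete, here is a toy reducer in Python; it is only an illustration that the rules Kxy = x and Sxyz = xz(yz) already give a universal programming system (the representation of terms as nested tuples is an arbitrary choice for the sketch):

def app(f, a):
    return ('app', f, a)

def step(t):
    # one reduction step; returns (new_term, changed?)
    if isinstance(t, tuple):
        if isinstance(t[1], tuple) and t[1][1] == 'K':        # K x y -> x
            return t[1][2], True
        if isinstance(t[1], tuple) and isinstance(t[1][1], tuple) \
                and t[1][1][1] == 'S':                        # S x y z -> x z (y z)
            x, y, z = t[1][1][2], t[1][2], t[2]
            return app(app(x, z), app(y, z)), True
        f, changed = step(t[1])
        if changed:
            return app(f, t[2]), True
        a, changed = step(t[2])
        if changed:
            return app(t[1], a), True
    return t, False

def normalize(t, limit=100):
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            break
    return t

i = app(app('S', 'K'), 'K')          # S K K behaves as the identity combinator
print(normalize(app(i, 'x')))        # S K K x -> K x (K x) -> x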




with infinitely many Brents having the same conversation with me, here and now, and they all become zombies, except one, because some Reality wants it that way?


which is then begging the question of other worlds. 

You are the one adding a metaphysical assumption, to make some people, whose existence in arithmetic follows from digital mechanism, into zombies.

You're the one asserting that people "exist in arithmetic" whatever that may mean.

It means that there exists a number k such that phi_k(x) = y iff Brent# gives y on x, where x describes some possible input (a giant number taking into account all your senses).
As we change ourselves all the time, I use “Brent#” to denote you at some precise time. The codings here are huge, but the arithmetical reality counts without counting, if I may say. All the relative states of your brain, relative to, say, our cluster of galaxies, are run in arithmetic, in finitely many number relations, and unless you want them all to be zombies, they are all conscious, and belong to your personal range of first-person indeterminacy, although in this case the measure is plausibly negligible, compared to all solutions of the DeWitt-Wheeler equation (whose negligibility or not is to be studied).
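To give a concrete, if toy, sense to the notation, here is a Python sketch of one possible numbering of programs. It only illustrates what an enumeration phi_0, phi_1, phi_2, … of partial functions looks like, with Python source texts standing in for the real machine codes (the particular encoding is an assumption made for the example; it is not the numbering used in recursion theory):

def decode(k):
    # decode k (base 256, little-endian) into a candidate source text
    bs = bytearray()
    while k:
        bs.append(k % 256)
        k //= 256
    return bytes(bs).decode('utf-8', errors='replace')

def phi(k, x):
    # the k-th partial function applied to x; None plays the role of "undefined"
    ns = {}
    try:
        exec(decode(k), ns)
        return ns['f'](x)
    except Exception:
        return None

src = "def f(x):\n    return x + 1"
k = int.from_bytes(src.encode(), 'little')    # the index of this little program
print(phi(k, 41))                             # 42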

If interested, I can explain once more why the arithmetical reality runs all computations, with a highly structured redundancy, which already suggests a non-trivial measure on the computations (with and without oracle).

Bruno




Bruno Marchal

unread,
Jun 16, 2020, 5:10:55 AM6/16/20
to Brent Meeker, everyth...@googlegroups.com

On 15 Jun 2020, at 20:39, Brent Meeker <meek...@verizon.net> wrote:

So all those theorems about real analysis and Cantorian infinities are just as real as arithmetic.  If you don't practice free logic.

It is more … “If you don’t assume Mechanism”. Mechanism is a finitism. The axiom of infinity is not assumed at the ontological (3p) level, as this would generate an inflation of histories (and the “white rabbits” would be back).





Truth is a property of propositions relative to observations for a scientist.

That is the definition of the physical reality, which is derived in the phenomenology of the (finite) universal numbers.

The only notion of truth which is available for the computationalist is arithmetical truth: satisfaction by the (standard) model of arithmetic. In non-standard models, addition and multiplication are not computable.




But for arithmetic, we do have a pretty good idea of what is the “standard model of arithmetic” (the structure (N, 0, s, +, *)), and by true (without further precision) we always mean “true in the standard model of arithmetic”.






 

I hear you! You are saying that the existence of numbers is like the existence of Sherlock Holmes, but that leads to a gigantic multiverse,

Only via your assumption that arithmetic constitutes universes.  I take it as a reductio.

Not at all. I use only the provable and proved fact that the standard model of arithmetic implements and runs all computations, with “implement” and “run” defined in computer science (by Turing, without any assumption in physics).

If you believe in mechanism, and in Kxy = x and Sxyz = xz(yz), then I can prove that there is an infinity of Brents in arithmetic, having the very conversation that we have here and now.

It needs the assumption that you can apply operators arbitrarily many times.

That is at the meta-level. That would lead to an infinite regression.






That does not need any other assumption than Digital Mechanism. Even without mechanism, the fact remains that all computations are run in arithmetic. That is why, if mechanism is false, the arithmetical reality (the standard model of arithmetic) is full of zombies.




with infinitely many Brents having the same conversation with me, here and now, and they all become zombies, except one,

If they are having the same conversation in the same way then they are the same persons/events, per Leibniz's identity of indiscernibles.

Absolutely so, but that is the reason why their expectations have to rely on their infinitely many occurrences in the arithmetical reality.
We cannot invoke some God or Matter ontological commitment to filter the computations, as this would add something non-Turing-emulable to get your mind-state.




because some Reality wants it that way?


which is then begging the question of other worlds. 

You are the one adding a metaphysical assumption, to make some people, whose existence in arithmetic follows from digital mechanism, into zombies.

You're the one asserting that people "exist in arithmetic" whatever that may mean.

It means that there exists a number k such that phi_k(x) = y iff Brent# gives output y on input x, where x describes some possible input (a giant number taking into account all your senses).
As we change ourselves all the time, I use “Brent#” to denote you at some precise moment. The codings here are huge, but the arithmetical reality counts without counting, if I may say. All the relative states of your brain, relative to, say, our cluster of galaxies, are run in arithmetic, in finitely many number relations, and unless you want them all to be zombies, they are all conscious and belong to your personal range of first-person indeterminacy, although in this case the measure is plausibly negligible compared to all the solutions of the Wheeler-DeWitt equation (whose negligibility or not remains to be studied).
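
To unpack the phi_k notation, here is a toy sketch in Python of an enumeration of partial functions by program number (the miniature one-register instruction set and the bijective base-4 encoding are purely illustrative assumptions, far from a universal machine): every natural number k decodes to a program, and phi(k, x) is that program’s output on input x, when it halts.

INSTRUCTIONS = ['INC', 'DEC', 'JZ0', 'HALT']   # JZ0: jump back to the start if the register is 0

def decode(k):
    # Bijective base-4 decoding of k into a program (a list of instructions).
    prog = []
    while k > 0:
        prog.append(INSTRUCTIONS[(k - 1) % 4])
        k = (k - 1) // 4
    return prog or ['HALT']

def phi(k, x, max_steps=10000):
    # Run program number k on input x; return its output, or None if it has
    # not halted within max_steps (standing in for "undefined").
    prog, reg, pc = decode(k), x, 0
    for _ in range(max_steps):
        if pc >= len(prog) or prog[pc] == 'HALT':
            return reg
        op = prog[pc]
        if op == 'INC':
            reg += 1
        elif op == 'DEC':
            reg = max(0, reg - 1)
        elif op == 'JZ0' and reg == 0:
            pc = 0
            continue
        pc += 1
    return None

print([phi(5, x) for x in range(4)])   # program 5 decodes to ['INC', 'INC'], so phi_5(x) = x + 2

The toy machine is of course not universal; the point is only to make “there exists a number k such that phi_k(x) = y” operationally concrete.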

If interested, I can explain once more why the arithmetical reality runs all computations, with a highly structured redundancy,

Yes, I understand the theory of infinite computations.

The key point is that their execution is made only in virtue of the (sigma_1) number relations.
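
In standard recursion-theoretic terms (Kleene’s normal form theorem), that sigma_1 character can be written explicitly:

\varphi_k(x) = y \;\Longleftrightarrow\; \exists s \,\bigl( T(k, x, s) \wedge U(s) = y \bigr)

where T is Kleene’s primitive recursive T predicate (“s codes a halting computation of program k on input x”) and U is the primitive recursive output-extraction function, so that, for fixed k, x and y, the right-hand side is a sigma_1 sentence of arithmetic.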



which already suggests a non-trivial measure on the computations (with and without oracle).

So what is that measure, and how can it be compared to some observation?


It is a sort of Lebesgue measure on the sigma_1(a) propositions, with a being a real number or an oracle. The logic of maximal measure (one) is given by the S4Grz1, Z1*, and X1* logics, each of which gives a quantum logic, and it is a matter of work to find whether one of them verifies some criterion (due to von Neumann) so that we can get a “Gleason theorem” and derive from it the complete probability calculus.

The Kripke counter-examples of those theories provide the configurations of finite sets of Stern-Gerlach devices (or polarisation filters) making it possible to test them. And indeed, the tractable part of those logics corresponds to what has been found in Nature up to now. Unfortunately, even the simplest version of Bell’s inequality is intractable there, and we still don’t know for sure whether the “arithmetical” physics violates the inequality or not, although, intuitively, it would be a miracle if it did not.
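
For reference, the quantum prediction that such an “arithmetical” physics would have to reproduce is easy to compute. A minimal CHSH sketch in Python, using only the textbook singlet correlation E(a, b) = -cos(a - b) and the usual measurement angles; it states the standard quantum value and the local bound, and says nothing about the arithmetical derivation itself:

import math

def E(a, b):
    # Singlet-state correlation for analyser angles a and b (radians).
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2              # Alice's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S), 2 * math.sqrt(2))        # |S| = 2*sqrt(2) ~ 2.83, versus the local bound of 2

The open question mentioned above is whether the physics recovered from the arithmetical measure reproduces a violation of the local bound |S| <= 2 of this size.
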
To verify this will take an infinite time, like all verification in physics. It is just a new field of investigation, and what I gave is a way of measuring our degree of non-mechanism. Up to now, that degree is 0, by default. Now, we get three different quantum logics, so it would be interesting to see which ones do not match nature. That would provide a lot of information on the origin of the physical laws.
Anyway, if we are interested in both consciousness and the physical laws and their relation, mechanism does not offer any other option here. It provides a testable solution to the mind-body problem, so let us continue the testing.


Bruno




Brent

Bruno Marchal

unread,
Jul 9, 2020, 6:36:22 AM7/9/20
to everyth...@googlegroups.com

On 9 Jun 2020, at 19:24, John Clark <johnk...@gmail.com> wrote:



On Tue, Jun 9, 2020 at 1:08 PM Jason Resch <jason...@gmail.com> wrote:

> How can we know if a robot is conscious?

The exact same way we know that one of our fellow human beings is conscious when he's not sleeping or under anesthesia or dead.

That is how we believe that a human is conscious: we project our own incorrigible feeling of being conscious onto them, when they are similar enough. And that makes us know that they are conscious, in the weak sense of knowing (true belief), but we can’t “know for sure”.

It is unclear whether we can apply this to a robot, which might look too different. If a Japanese sex doll complains of having been raped, the judge will say that she was programmed to complain but actually feels nothing, and many people will agree (rightly or wrongly).

It will take some time before robots get freedom and social security.
I guess we will digitalise ourselves before then…

Bruno




John K Clark   


Brent Meeker

unread,
Jul 9, 2020, 4:12:28 PM7/9/20
to everyth...@googlegroups.com


On 7/9/2020 3:36 AM, Bruno Marchal wrote:

On 9 Jun 2020, at 19:24, John Clark <johnk...@gmail.com> wrote:



On Tue, Jun 9, 2020 at 1:08 PM Jason Resch <jason...@gmail.com> wrote:

> How can we know if a robot is conscious?

The exact same way we know that one of our fellow human beings is conscious when he's not sleeping or under anesthesia or dead.

That is how we believe that a human is conscious: we project our own incorrigible feeling of being conscious onto them, when they are similar enough. And that makes us know that they are conscious, in the weak sense of knowing (true belief), but we can’t “know for sure”.

It is unclear whether we can apply this to a robot, which might look too different. If a Japanese sex doll complains of having been raped, the judge will say that she was programmed to complain but actually feels nothing, and many people will agree (rightly or wrongly).

And when she argues that the judge is wrong she will prove her point.

Brent


It will take some time before robots get freedom and social security.
I guess we will digitalise ourselves before then…

Bruno




John K Clark   



Bruno Marchal

unread,
Jul 10, 2020, 7:56:04 AM7/10/20
to everyth...@googlegroups.com
On 9 Jul 2020, at 22:12, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 7/9/2020 3:36 AM, Bruno Marchal wrote:

On 9 Jun 2020, at 19:24, John Clark <johnk...@gmail.com> wrote:



On Tue, Jun 9, 2020 at 1:08 PM Jason Resch <jason...@gmail.com> wrote:

> How can we know if a robot is conscious?

The exact same way we know that one of our fellow human beings is conscious when he's not sleeping or under anesthesia or dead.

That is how we believe that a human is conscious: we project our own incorrigible feeling of being conscious onto them, when they are similar enough. And that makes us know that they are conscious, in the weak sense of knowing (true belief), but we can’t “know for sure”.

It is unclear whether we can apply this to a robot, which might look too different. If a Japanese sex doll complains of having been raped, the judge will say that she was programmed to complain but actually feels nothing, and many people will agree (rightly or wrongly).

And when she argues that the judge is wrong she will prove her point.

Only through the intimate conviction of the judge, but that is not really a proof.

Nobody can prove that something/someone is conscious, or even just exists in some absolute sense.

We are just used to betting instinctively that our peers are conscious (although we might doubt it when we learn more about them, sarcastically).

There are many people who just cannot believe that a robot could ever be conscious. It is easy to guess that some form of racism against artificial beings will exist. Even on this list, some have argued that a human with an artificial brain is a zombie, if you remember.

With mechanism, consciousness can be characterised in many ways, but it appears to be a stronger statement than our simple consistency, which no machine can prove about herself, or, equivalently, than the belief that there is some reality satisfying our beliefs, which is equivalent to proving that we are consistent.

It will take time before a machine has the right to vote. Not all humans have that right today. Let us hope we do not lose it soon!

Bruno



Brent


It will take some time before robots get freedom and social security.
I guess we will digitalise ourselves before then…

Bruno




John K Clark   



