Zombieopolis Thought Experiment


Craig Weinberg

Sep 13, 2012, 3:03:13 PM
to everyth...@googlegroups.com
If you are not familiar with David Chalmers' "Absent Qualia, Fading Qualia, Dancing Qualia", you should have a look at it first.

This thought experiment is intended to generalize principles common to both computationalism and functionalism so that the often confusing objections surrounding their assumptions can be revealed.

Say that we have the technology to scan the city of New York by means of releasing 100,000 specially fitted cats into the streets, which will return to the laboratory in a week's time with a fantastically large amount of data about what the cats see and feel, smell and taste, hear, their positions and movements relative to each other, etc.

We now set about computing algorithms to simulate the functions of Brooklyn such that we can tear down Brooklyn completely and replace it with a simulation which causes cats released into the simulated environment to behave in the same way as they would have according to the history of their initial release.

Indeed, cats in Manhattan travel to and from Brooklyn as usual. Perhaps to get this right, we had to take all of Brooklyn and grind it up in a giant blender until it becomes a paste of liquified corpses, garbage, concrete, wood, and glass, and then use this substrate to mold into objects that can be moved around remotely to suit the expectations of the cats.

Armed with the confidence of the feline thumbs-up, we go ahead and replace Manhattan and the other boroughs in the same way, effectively turning a city of millions into a cat-friendly cemetery. While the experiment is not a PR success (Luddites and Fundamentalists complain loudly about a genocide), our cats assure us that all is well and the experiment is a great success.

Craig



Stathis Papaioannou

Sep 13, 2012, 8:01:28 PM
to everyth...@googlegroups.com
Craig, this post of yours just shows me that you don't understand the
paper at all. If I am wrong, perhaps you could summarise it. I suspect
that the part you don't understand is what it means to make a
functional replacement of a neuron, which means replicating just the
third party observable behaviour. I'm not sure if you don't understand
"third party observable behaviour" or if you do understand but think
it's impossible to replicate it. Perhaps you could clarify by
explaining what you think "third party observable behaviour" actually
means.


--
Stathis Papaioannou

Roger Clough

Sep 14, 2012, 7:05:15 AM
to everything-list
Hi Craig Weinberg

His very first sentence is wrong. Conscious experience is an expression of nonphysical mind,
although it may deal with physical topics.

"It is widely accepted that conscious experience has a physical basis."


Roger Clough, rcl...@verizon.net
9/14/2012
Leibniz would say, "If there's no God, we'd have to invent him
so that everything could function."


----- Receiving the following content -----
From: Craig Weinberg
Receiver: everything-list
Time: 2012-09-13, 15:03:13
Subject: Zombieopolis Thought Experiment
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To view this discussion on the web visit https://groups.google.com/d/msg/everything-list/-/5BbVwrPfmSoJ.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.

Stephen P. King

Sep 14, 2012, 12:09:25 PM
to everyth...@googlegroups.com
On 9/14/2012 7:05 AM, Roger Clough wrote:
> Hi Craig Weinberg
>
> His very first sentence is wrong. Conscious experience is an expression of nonphysical mind,
> although it may deal with physical topics.
>
> "It is widely accepted that conscious experience has a physical basis."

Dear Roger,

No, you misunderstand his argument. If "Conscious experience is an
expression of nonphysical mind" in a strict "nothing but" sense, then
consciousness would be completely solipsistic and incapable of even
comprehending that it is not all that exists. It is because
consciousness is constrained to be Boolean representable (and thus
finite!) that it can "bet" on its incompleteness and thus go beyond
itself, escaping its solipsism.
--
Onward!

Stephen

http://webpages.charter.net/stephenk1/Outlaw/Outlaw.html


Craig Weinberg

Sep 14, 2012, 12:55:30 PM
to everyth...@googlegroups.com
What you think third party observable behavior means is the set of all properties which are externally discoverable. I am saying that is a projection of naive realism, and that in reality, there is no such set, and that in fact the process of discovery of any properties supervenes on the properties of all participants and the methods of their interaction.

My point of using cats in this thought experiment is to specifically point out our naivete in assuming that instruments which extend our perception in only the most deterministic and easy to control ways are sufficient to define a 'third person'. If we look at the brain with a microscope, we see those parts of the brain that microscopes can see. If we look at New York with a swarm of cats, then we see the parts of New York that cats can see.

This is the point of the thought experiment. The limitations of all forms of measurement and perception preclude all possibility of there ever being such a thing as an exhaustively complete set of third person behaviors of any system.

What is it that you don't think I understand?
 
Craig


Stathis Papaioannou

Sep 15, 2012, 6:20:41 AM
to everyth...@googlegroups.com
On Sat, Sep 15, 2012 at 2:55 AM, Craig Weinberg <whats...@gmail.com> wrote:

> What you think third party observable behavior means is the set of all
> properties which are externally discoverable. I am saying that is a
> projection of naive realism, and that in reality, there is no such set, and
> that in fact the process of discovery of any properties supervenes on the
> properties of all participants and the methods of their interaction.

Of course there is a set of all properties that are externally
discoverable, even if you think this set is very small! Moreover, this
set has subsets, and we can limit our discussion to these subsets. For
example, if we are interested only in mass, we can simulate a human
perfectly using the right number of rocks. Even someone who believes
in an immortal soul would agree with this.

> My point of using cats in this thought experiment is to specifically point
> out our naivete in assuming that instruments which extend our perception in
> only the most deterministic and easy to control ways are sufficient to
> define a 'third person'. If we look at the brain with a microscope, we see
> those parts of the brain that microscopes can see. If we look at New York
> with a swarm of cats, then we see the parts of New York that cats can see.

Yes, but there are properties of the brain that may not be relevant to
behaviour. Which properties are in fact important is determined by
experiment. For example, we may replace the myelin sheath with a
synthetic material that has similar electrical properties and then
test an isolated nerve to see if action potentials propagate in the
same way. If they do, then the next step is to incorporate the nerve
in a network and see if the pattern of firing in the network looks
normal. The step after that is to replace the myelin in the brain of a
rat to see if the animal's behaviour changes. The modified rats are
compared to unmodified rats by a blinded researcher to see if he can
tell the difference. If no-one can consistently tell the difference
then it is announced that the synthetic myelin appears to be a
functionally identical substitute for natural myelin. As is the nature
of science, another team of researchers may then find some deficit in
the behaviour of the modified rats under conditions the first team did
not examine. Scientists then make modifications to the formula of the
synthetic myelin and do the experiments again.
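The blinded-comparison step above amounts to a simple statistical question: does the blinded researcher classify modified vs. unmodified rats better than chance? The following is only an illustrative sketch of the protocol Stathis describes; the function names, the 100-rat sample size, and the accuracy parameter are my own assumptions, not anything from the thread.

```python
# Sketch of the blinded rat-comparison protocol as an exact binomial test.
# All names and numbers here are illustrative assumptions.
import random
from math import comb

def blinded_trial(n_rats, accuracy, rng):
    """Simulate one blinded researcher guessing over n_rats rats.

    `accuracy` is the probability each guess is correct; 0.5 means the
    synthetic myelin is behaviourally indistinguishable from natural myelin.
    Returns the number of correct classifications.
    """
    return sum(rng.random() < accuracy for _ in range(n_rats))

def p_value_at_least(k, n, p=0.5):
    """Exact binomial tail: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

rng = random.Random(0)
correct = blinded_trial(100, 0.5, rng)  # the indistinguishable case
# A large tail probability means we cannot reject "no difference".
print(correct, p_value_at_least(correct, 100))
```

With accuracy at 0.5 the tail probability will typically be large, so the team would announce functional equivalence; a later team finding a deficit corresponds to rerunning the test under new conditions where accuracy rises above chance.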

> This is the point of the thought experiment. The limitations of all forms of
> measurement and perception preclude all possibility of there ever being
> such a thing as an exhaustively complete set of third person behaviors of any
> system.
>
> What is it that you don't think I understand?

What you don't understand is that an exhaustively complete set of
behaviours is not required. I don't access an exhaustively complete
set of behaviours to determine if my friends are the same people from
day to day, and in fact they are *not* the same systems from day to
day, as they change both physically and psychologically. I have in
mind a rather vague set of behavioural limits and if the
people who I think are my friends deviate significantly from these
limits I will start to worry.


--
Stathis Papaioannou

Roger Clough

Sep 15, 2012, 8:52:18 AM
to everything-list
Hi Stephen P. King

I seem to have-- whoops-- totally misread him. Logical dyslexia?

His first sentence is correct:

"Conscious experience is an expression of nonphysical mind"

I don't follow the rest of your comments. Berkeley's solipsism has
never been disproven, as far as I know.


Roger Clough, rcl...@verizon.net
9/15/2012
Leibniz would say, "If there's no God, we'd have to invent him
so that everything could function."
----- Receiving the following content -----
From: Stephen P. King
Receiver: everything-list
Time: 2012-09-14, 12:09:25
Subject: Re: Zombieopolis Thought Experiment

Stephen P. King

Sep 15, 2012, 1:04:41 PM
to everyth...@googlegroups.com
On 9/15/2012 8:52 AM, Roger Clough wrote:
> Hi Stephen P. King
>
> I seem to have-- whoops-- totally misread him. Logical dyslexia?

Hi Roger,

Good catch! Yeah, my dyslexia distorts things in a weird "telephone
game" way...

>
> His first sentence is correct:
>
> "Conscious experience is an expression of nonphysical mind"
OK, but I agree with that remark. It is the idea that "all that
exists is the possible expressions of nonphysical mind" that I find to
be deeply flawed.

>
> I don't follow the rest of your comments. Berkeley's solipsism has
> never been disproven, as far as I know.

    The inability of Berkeley and those who support his thesis to
answer Mr. Johnson's retort of bouncing his foot off of a rock was
the evidence of the flaw. A thesis that makes a deep ontological
statement, as immaterialism does with its thesis that "all that
exists is the possible expressions of nonphysical mind", needs to be
able to explain the causal relationships of that which it claims is
"merely epiphenomena", as such can have (by definition) no causal
efficacy whatsoever.
    The fact that I experience a world that is not directly malleable
to my whim is a pretty good indication that it is not just the case
that "all that exists is the possible expressions of nonphysical
mind", since I have what very much appears to be a "nonphysical mind".
Onward!

Stephen

http://webpages.charter.net/stephenk1/Outlaw/Outlaw.html


Stathis Papaioannou

Sep 15, 2012, 6:11:19 PM
to everyth...@googlegroups.com
Just saw this article quite relevant to our discussion:

Researchers have used a neural implant to recapture a lost
decision-making process in monkeys—demonstrating that a neural
prosthetic can recover cognitive function in a primate brain.

http://www.technologyreview.com/news/429204/a-brain-implant-that-thinks/?nlid=nldly&nld=2012-09-14


--
Stathis Papaioannou

Roger Clough

Sep 16, 2012, 8:26:36 AM
to everything-list
Hi Stephen P. King
 
Now I see your problem with Chalmers.
It seems to be too sweeping a remark,
but Leibniz would agree, because
God, who is the supreme monad, causes all
to happen. Mind is the ruling power.
As I say below,
 
"If there's no God, we'd have to invent him
so that everything could function."
 
Roger Clough, rcl...@verizon.net
9/16/2012
Leibniz would say, "If there's no God, we'd have to invent him
so that everything could function."
----- Receiving the following content -----
Receiver: everything-list
Time: 2012-09-15, 13:04:41

Craig Weinberg

Sep 16, 2012, 8:42:42 AM
to everyth...@googlegroups.com


On Saturday, September 15, 2012 6:21:14 AM UTC-4, stathisp wrote:
On Sat, Sep 15, 2012 at 2:55 AM, Craig Weinberg <whats...@gmail.com> wrote:

> What you think third party observable behavior means is the set of all
> properties which are externally discoverable. I am saying that is a
> projection of naive realism, and that in reality, there is no such set, and
> that in fact the process of discovery of any properties supervenes on the
> properties of all participants and the methods of their interaction.

Of course there is a set of all properties that are externally
discoverable, even if you think this set is very small!

No, there isn't. That is what I am telling you. Nothing exists outside of experience, which is creating new properties all of the time. There is no set at all. There is no such thing as a generic externality...each exterior is only a reflection of the interior of the system which discovers the interior of other systems as exteriors.

 
Moreover, this
set has subsets, and we can limit our discussion to these subsets. For
example, if we are interested only in mass, we can simulate a human
perfectly using the right number of rocks. Even someone who believes
in an immortal soul would agree with this.

No, I don't agree with it at all. You are eating the menu. A quantity of mass doesn't simulate anything except in your mind. Mass is a normative abstraction which we apply in comparing physical bodies with each other. To reduce a human being to a physical body is not a simulation; it is only weighing a bag of organic molecules.
 

> My point of using cats in this thought experiment is to specifically point
> out our naivete in assuming that instruments which extend our perception in
> only the most deterministic and easy to control ways are sufficient to
> define a 'third person'. If we look at the brain with a microscope, we see
> those parts of the brain that microscopes can see. If we look at New York
> with a swarm of cats, then we see the parts of New York that cats can see.

Yes, but there are properties of the brain that may not be relevant to
behaviour. Which properties are in fact important is determined by
experiment. For example, we may replace the myelin sheath with a
synthetic material that has similar electrical properties and then
test an isolated nerve to see if action potentials propagate in the
same way. If they do, then the next step is to incorporate the nerve
in a network and see if the pattern of firing in the network looks
normal. The step after that is to replace the myelin in the brain of a
rat to see if the animal's behaviour changes. The modified rats are
compared to unmodified rats by a blinded researcher to see if he can
tell the difference. If no-one can consistently tell the difference
then it is announced that the synthetic myelin appears to be a
functionally identical substitute for natural myelin.

Except it isn't identical. No imitation substance is identical to the original. Sooner or later the limits of the imitation will be found - or they could be advantages. Maybe the imitation myelin prevents brain cancer or heat stroke or something, but it also maybe prevents sensation in cold weather or maybe certain amino acids now cause Parkinson's disease. There is no such thing as identical. There is only 'seems identical from this measure at this time'.

 
As is the nature
of science, another team of researchers may then find some deficit in
the behaviour of the modified rats under conditions the first team did
not examine. Scientists then make modifications to the formula of the
synthetic myelin and do the experiments again.

Which is great for medicine (although ultimately maybe unsustainably expensive), but it has nothing to do with the assumption of identical structure and the hard problem of consciousness. There is no such thing as identical experience. I have suggested that in fact we can perhaps define consciousness as that which has never been repeated. It is the antithesis of that which can be repeated (hence the experience of "now"), even though experiences themselves can seem very repetitive. They only seem so from the vantage point of a completely novel moment of consideration of the memories of previous iterations.


> This is the point of the thought experiment. The limitations of all forms of
> measurement and perception preclude all possibility of there ever being
> such a thing as an exhaustively complete set of third person behaviors of any
> system.
>
> What is it that you don't think I understand?

What you don't understand is that an exhaustively complete set of
behaviours is not required.

Yes, it is. Not for prosthetic enhancements or repairs to a nervous system, but to replace a nervous system without replacing the person who is using it, there is no set of behaviors which can ever be exhaustive enough in theory to accomplish that. You might be able to do it biologically, but there is no reason to trust it unless and until someone can be walked off of their brain for a few weeks or months and then walked back on.
 
I don't access an exhaustively complete
set of behaviours to determine if my friends are the same people from
day to day, and in fact they are *not* the same systems from day to
day, as they change both physically and psychologically. I have in
mind a rather vague set of behavioural limits and if the
people who I think are my friends deviate significantly from these
limits I will start to worry.

Which is exactly why you would not want to replace your friends with devices capable only of programmed deviations. Are simulated friends 'good enough'? Will it be good enough when your friends convince you to be replaced by your simulation?

Craig




Stephen P. King

Sep 16, 2012, 11:16:05 AM
to everyth...@googlegroups.com
On 9/16/2012 8:26 AM, Roger Clough wrote:
Hi Stephen P. King
 
Now I see your problem with Chalmers.
It seems to be too sweeping a remark,
but Leibniz would agree, because
God, who is the supreme monad, causes all
to happen. Mind is the ruling power.
As I say below,
 
"If there's no God, we'd have to invent him
so that everything could function."
 
Roger Clough, rcl...@verizon.net
9/16/2012
Leibniz would say, "If there's no God, we'd have to invent him
so that everything could function."
Hi Roger,

    I would agree completely with you but I cannot because of the phrase "causes all". To cause something implies that there is a choice to act or not to act; God does not have this freedom. God, in his omnipotence, does all things and does not do anything at all. Our confusion flows from our inability to see ourselves as part of God. We are the ones that cause things; we are the expression of God's Will.

Stephen P. King

Sep 16, 2012, 12:13:52 PM
to everyth...@googlegroups.com
On 9/16/2012 8:42 AM, Craig Weinberg wrote:


On Saturday, September 15, 2012 6:21:14 AM UTC-4, stathisp wrote:
On Sat, Sep 15, 2012 at 2:55 AM, Craig Weinberg <whats...@gmail.com> wrote:

> What you think third party observable behavior means is the set of all
> properties which are externally discoverable. I am saying that is a
> projection of naive realism, and that in reality, there is no such set, and
> that in fact the process of discovery of any properties supervenes on the
> properties of all participants and the methods of their interaction.

Of course there is a set of all properties that are externally
discoverable, even if you think this set is very small!

No, there isn't. That is what I am telling you. Nothing exists outside of experience, which is creating new properties all of the time. There is no set at all. There is no such thing as a generic externality...each exterior is only a reflection of the interior of the system which discovers the interior of other systems as exteriors.

 Hi Craig!

    EXACTLY!


 
Moreover, this
set has subsets, and we can limit our discussion to these subsets. For
example, if we are interested only in mass, we can simulate a human
perfectly using the right number of rocks. Even someone who believes
in an immortal soul would agree with this.

No, I don't agree with it at all. You are eating the menu. A quantity of mass doesn't simulate anything except in your mind. Mass is a normative abstraction which we apply in comparing physical bodies with each other. To reduce a human being to a physical body is not a simulation; it is only weighing a bag of organic molecules.

    Thus we can realistically claim that the physical world is exactly and only all things that we (as we truly are) have in common. What must be understood is that as the number of participating entities increases to infinity, the number of "things in common" goes to zero. Only for a large but finite set of entities will there be a semi-large number of relations that the entities have in common without a degeneracy relation between them.

    A black hole is a nice demonstration of the degeneracy idea. The effect of gravity is the force of degeneracy: when all the ground states are forced to normalize and become identical with each other, the "space" and "delay" (time) that differ between them collapse to zero, and thus we get a singularity in the limit of the degeneracy.


 

> My point of using cats in this thought experiment is to specifically point
> out our naivete in assuming that instruments which extend our perception in
> only the most deterministic and easy to control ways are sufficient to
> define a 'third person'. If we look at the brain with a microscope, we see
> those parts of the brain that microscopes can see. If we look at New York
> with a swarm of cats, then we see the parts of New York that cats can see.

Yes, but there are properties of the brain that may not be relevant to
behaviour. Which properties are in fact important is determined by
experiment. For example, we may replace the myelin sheath with a
synthetic material that has similar electrical properties and then
test an isolated nerve to see if action potentials propagate in the
same way. If they do, then the next step is to incorporate the nerve
in a network and see if the pattern of firing in the network looks
normal. The step after that is to replace the myelin in the brain of a
rat to see if the animal's behaviour changes. The modified rats are
compared to unmodified rats by a blinded researcher to see if he can
tell the difference. If no-one can consistently tell the difference
then it is announced that the synthetic myelin appears to be a
functionally identical substitute for natural myelin.

    Craig's point here is that if we are going to perform a substitution then the artificial component must be capable of reproducing *all* of the functions of the neuron, unless we are going to ignore the fact that neurons are not *just transistors*. We cannot fail to recognize that a neuron is not just one thing to other neurons, to the rest of the body, and to the environment beyond it. We need to drop the idea that the universe is made up of gears and levers and springs, and understand that it is not uniquely decomposable into isolated entities that can somehow retain their set of unique properties in isolation.



Except it isn't identical. No imitation substance is identical to the original. Sooner or later the limits of the imitation will be found - or they could be advantages. Maybe the imitation myelin prevents brain cancer or heat stroke or something, but it also maybe prevents sensation in cold weather or maybe certain amino acids now cause Parkinson's disease. There is no such thing as identical. There is only 'seems identical from this measure at this time'.

    Exactly. If we are going to invoke functional equivalence then we must invoke all functions that are involved, not just some of them.



 
As is the nature
of science, another team of researchers may then find some deficit in
the behaviour of the modified rats under conditions the first team did
not examine. Scientists then make modifications to the formula of the
synthetic myelin and do the experiments again.

Which is great for medicine (although ultimately maybe unsustainably expensive), but it has nothing to do with the assumption of identical structure and the hard problem of consciousness. There is no such thing as identical experience.

    Indeed! We can easily see that the principle of identity of indiscernibles is involved here. Minds, the "things that are conscious", do not exist "in space" as physical objects and thus do not have positions or momenta or spin or duration quantities that can be used to externally locate them in different places so that the PII can be safely ignored. OTOH, minds must be implemented or else they are just the "presupposition of a possible thought". They have to be functionally implemented "in the flesh" to have even the possibility of interacting with each other and thus gaining knowledge of themselves and the world (of other minds).


I have suggested that in fact we can perhaps define consciousness as that which has never been repeated. It is the antithesis of that which can be repeated (hence the experience of "now"), even though experiences themselves can seem very repetitive. They only seem so from the vantage point of a completely novel moment of consideration of the memories of previous iterations.

    The postulate of "No Doppelgangers" by Gordon Pask, the Kochen-Specker theorem, and the quantum no-cloning theorem speak to this directly.




> This is the point of the thought experiment. The limitations of all forms of
> measurement and perception preclude all possibility of there ever being
> such a thing as an exhaustively complete set of third person behaviors of any
> system.
>
> What is it that you don't think I understand?

What you don't understand is that an exhaustively complete set of
behaviours is not required.

Yes, it is. Not for prosthetic enhancements or repairs to a nervous system, but to replace a nervous system without replacing the person who is using it, there is no set of behaviors which can ever be exhaustive enough in theory to accomplish that.

    True if and only if the set of behaviors (functions) is truly infinite. What needs to be understood is that we can safely ignore all of the infinity except for a finite subset in our models of interactions. We must pay a price for doing this, and it is the price of not having a completely deterministic theory.


You might be able to do it biologically, but there is no reason to trust it unless and until someone can be walked off of their brain for a few weeks or months and then walked back on.
   
    LOL! Indeed!


 
I don't access an exhaustively complete
set of behaviours to determine if my friends are the same people from
day to day, and in fact they are *not* the same systems from day to
day, as they change both physically and psychologically. I have in
mind a rather vague set of behavioural limits and if the
people who I think are my friends deviate significantly from these
limits I will start to worry.

Which is exactly why you would not want to replace your friends with devices capable only of programmed deviations. Are simulated friends 'good enough'? Will it be good enough when your friends convince you to be replaced by your simulation?

    He wants complete predictability, Craig. That is why. To predict exactly what something is going to do is to be able to control it. We humans have this hang-up about having to control everything....


Craig




Craig Weinberg

Sep 16, 2012, 12:49:46 PM
to everyth...@googlegroups.com


On Sunday, September 16, 2012 12:13:57 PM UTC-4, Stephen Paul King wrote:
On 9/16/2012 8:42 AM, Craig Weinberg wrote:


On Saturday, September 15, 2012 6:21:14 AM UTC-4, stathisp wrote:
 
Moreover, this
set has subsets, and we can limit our discussion to these subsets. For
example, if we are interested only in mass, we can simulate a human
perfectly using the right number of rocks. Even someone who believes
in an immortal soul would agree with this.

No, I don't agree with it at all. You are eating the menu. A quantity of mass doesn't simulate anything except in your mind. Mass is a normative abstraction which we apply in comparing physical bodies with each other. To reduce a human being to a physical body is not a simulation; it is only weighing a bag of organic molecules.

    Thus we can realistically claim that the physical world is exactly and only all things that we (as we truly are) have in common. What must be understood is that as the number of participating entities increase to infinity, the number of "things in common" goes to zero. Only for a large but finite set of entities will there be a semi-large number of relations that the entities have in common and not have a degeneracy relation between them.

That's exactly the backbone equivalence of multisense realism. As the most common denominators, the characteristics of quanta are the most impersonal and common forms of qualia. Because of the privacy of qualia, it really drains almost everything out of it to make it shared to the point of near universality. Qualia drained of most quality is really involuted qualia, but because it has no existence of its own, it doesn't commute the other way around. Qualia isn't involuted quanta, as quanta isn't anything without first being qualia.

Both logical algebras and topologies are quanta. They are reductions of qualia, not particles or qualia-free information which generate qualia. An mp3 file is a compression, not of music, but of the articulations of current which match the acoustic dynamic conditions associated with sound perception. We can intellectually match the data of the mp3 file with any external topology (eardrum, needle on vinyl, speaker, laser pits on a CD, etc.), and the logical algebra which (figuratively) animates that topology (f(mp3)), but they only relate to each other, and not to any kind of subjective experience.


    A black hole is a nice demonstration of the degeneracy idea. The effect of gravity is the force of degeneracy: when all the ground states are forced to normalize and become identical with each other, the "space" and "delay" (time) that differ between them collapse to zero, and thus we get a singularity in the limit of the degeneracy.

That's cool. I can see that in my terms too, with gravity being like Kryptonite to motive participation. Systems lose their ability to differentiate themselves from a large mass.

Craig

Stephen P. King

Sep 16, 2012, 4:27:35 PM
to everyth...@googlegroups.com
    Hooray! We bisimulate!

Roger Clough

Sep 17, 2012, 8:59:55 AM
to everything-list
Hi Stephen P. King
 
The physical is, and only is, what you can measure.
 
 
Roger Clough, rcl...@verizon.net
9/17/2012
Leibniz would say, "If there's no God, we'd have to invent him
so that everything could function."
----- Receiving the following content -----
Time: 2012-09-16, 12:13:52
Subject: Re: Zombieopolis Thought Experiment

Stathis Papaioannou

unread,
Sep 17, 2012, 9:28:37 AM9/17/12
to everyth...@googlegroups.com


On Sep 16, 2012, at 10:42 PM, Craig Weinberg <whats...@gmail.com> wrote:

Moreover, this
set has subsets, and we can limit our discussion to these subsets. For
example, if we are interested only in mass, we can simulate a human
perfectly using the right number of rocks. Even someone who believes
in an immortal soul would agree with this.

No, I don't agree with it at all. You are eating the menu. A quantity of mass doesn't simulate anything except in your mind. Mass is a normative abstraction which we apply in comparing physical bodies with each other. To reduce a human being to a physical body is not a simulation; it is only weighing a bag of organic molecules.

I'm just saying that the mass of the human and the mass of the rocks is the same, not that the rocks and the human are the same. They share a property, which manifests as identical behaviour when they are put on scales. What's controversial about that?

Yes, but there are properties of the brain that may not be relevant to
behaviour. Which properties are in fact important is determined by
experiment. For example, we may replace the myelin sheath with a
synthetic material that has similar electrical properties and then
test an isolated nerve to see if action potentials propagate in the
same way. If they do, then the next step is to incorporate the nerve
in a network and see if the pattern of firing in the network looks
normal. The step after that is to replace the myelin in the brain of a
rat to see if the animal's behaviour changes. The modified rats are
compared to unmodified rats by a blinded researcher to see if he can
tell the difference. If no-one can consistently tell the difference
then it is announced that the synthetic myelin appears to be a
functionally identical substitute for natural myelin.

Except it isn't identical. No imitation substance is identical to the original. Sooner or later the limits of the imitation will be found - or they could be advantages. Maybe the imitation myelin prevents brain cancer or heat stroke or something, but it also maybe prevents sensation in cold weather or maybe certain amino acids now cause Parkinson's disease. There is no such thing as identical. There is only 'seems identical from this measure at this time'.

Yes, it's not *identical*. No-one has claimed this. And since it's not identical, under some possible test it would behave differently; otherwise it would be identical. But there are some changes which make no functional difference. If I have a drink of water, that changes my brain by decreasing the sodium concentration. But this change is not significant if we are considering whether I continue to manifest normal human behaviour, since firstly the brain is tolerant of moderate physical changes and secondly people can manifest a range of different behaviours and remain recognisably human and recognisably the same human. In other words humans have certain engineering tolerances in their components, and the aim in replacing components would be to do it within this tolerance. Perfection is not attainable by either engineers or nature.

 
As is the nature
of science, another team of researchers may then find some deficit in
the behaviour of the modified rats under conditions the first team did
not examine. Scientists then make modifications to the formula of the
synthetic myelin and do the experiments again.

Which is great for medicine (although ultimately maybe unsustainably expensive), but it has nothing to do with the assumption of identical structure and the hard problem of consciousness. There is no such thing as identical experience. I have suggested that in fact we can perhaps define consciousness as that which has never been repeated. It is the antithesis of that which can be repeated (hence the experience of "now"), even though experiences themselves can seem very repetitive. They only seem so from the vantage point of a completely novel moment of consideration of the memories of previous iterations.

Here is where you have misunderstood the whole aim of the thought experiment in the paper you have cited. The paper assumes that identical function does *not* necessarily result in identical consciousness and follows this idea to see where it leads.

> This is the point of the thought experiment. The limitations of all forms of
> measurement and perception preclude all possibility of there ever being a
> such thing as an exhaustively complete set of third person behaviors of any
> system.
>
> What is it that you don't think I understand?

What you don't understand is that an exhaustively complete set of
behaviours is not required.

Yes, it is. Not for prosthetic enhancements, or repairs to a nervous system, but to replace a nervous system without replacing the person who is using it, yes, there is no set of behaviors which can ever be exhaustive enough in theory to accomplish that. You might be able to do it biologically, but there is no reason to trust it unless and until someone can be walked off of their brain for a few weeks or months and then walked back on.

The replacement components need only be within the engineering tolerance of the nervous system components. This is a difficult task but it is achievable in principle.

I don't access an exhaustively complete
set of behaviours to determine if my friends are the same people from
day to day, and in fact they are *not* the same systems from day to
day, as they change both physically and psychologically. I have in
mind a rather vague set of behavioural limits and if the
people who I think are my friends deviate significantly from these
limits I will start to worry.

Which is exactly why you would not want to replace your friends with devices capable only of programmed deviations. Are simulated friends 'good enough'? Will it be good enough when your friends convince you to be replaced by your simulation?

I assume that my friends have not been replaced by robots. If they have been then that means the robots can almost perfectly replicate their behaviour, since I (and people in general) am very good at picking up even tiny deviations from normal behaviour. The question then is, if the function of a human can be replicated this closely by a machine does that mean the consciousness can also be replicated? The answer is yes, since otherwise we would have the possibility of a person having radically different experiences but behaving normally and being unaware that their experiences were different.


-- Stathis Papaioannou

Stephen P. King

unread,
Sep 17, 2012, 11:30:13 AM9/17/12
to everyth...@googlegroups.com
On 9/17/2012 8:59 AM, Roger Clough wrote:
Hi Stephen P. King
 
The physical is, and only is, what you can measure.
 
 
Roger Clough, rcl...@verizon.net
9/17/2012
Leibniz would say, "If there's no God, we'd have to invent him
so that everything could function."
    Yes, exactly. But what about what we do not measure, what about what we infer from what we measure?

Craig Weinberg

unread,
Sep 17, 2012, 4:39:12 PM9/17/12
to everyth...@googlegroups.com


On Monday, September 17, 2012 9:24:23 AM UTC-4, stathisp wrote:


On Sep 16, 2012, at 10:42 PM, Craig Weinberg <whats...@gmail.com> wrote:

Moreover, this
set has subsets, and we can limit our discussion to these subsets. For
example, if we are interested only in mass, we can simulate a human
perfectly using the right number of rocks. Even someone who believes
in an immortal soul would agree with this.

No, I don't agree with it at all. You are eating the menu. A quantity of mass doesn't simulate anything except in your mind. Mass is a normative abstraction which we apply in comparing physical bodies with each other. To reduce a human being to a physical body is not a simulation; it is only weighing a bag of organic molecules.

I'm just saying that the mass of the human and the mass of the rocks is the same, not that the rocks and the human are the same. They share a property, which manifests as identical behaviour when they are put on scales. What's controversial about that?

It isn't controversial, but I am suggesting that maybe it should be. It isn't that there is an independent and disembodied 'property' that human body and the rocks share, it is that we measure them in a way which allows us to categorize one's behavior as similar to another in a particular way.

Think of the fabric of the universe being like an optical illusion where colors change when they are adjacent to each other but not if they are against grey. There is no abstract property being manifested as concrete experiences, only concrete experiences can be re-presented as abstract properties.


Yes, but there are properties of the brain that may not be relevant to
behaviour. Which properties are in fact important is determined by
experiment. For example, we may replace the myelin sheath with a
synthetic material that has similar electrical properties and then
test an isolated nerve to see if action potentials propagate in the
same way. If they do, then the next step is to incorporate the nerve
in a network and see if the pattern of firing in the network looks
normal. The step after that is to replace the myelin in the brain of a
rat to see if the animal's behaviour changes. The modified rats are
compared to unmodified rats by a blinded researcher to see if he can
tell the difference. If no-one can consistently tell the difference
then it is announced that the synthetic myelin appears to be a
functionally identical substitute for natural myelin.

Except it isn't identical. No imitation substance is identical to the original. Sooner or later the limits of the imitation will be found - or they could be advantages. Maybe the imitation myelin prevents brain cancer or heat stroke or something, but it also maybe prevents sensation in cold weather or maybe certain amino acids now cause Parkinson's disease. There is no such thing as identical. There is only 'seems identical from this measure at this time'.

Yes, it's not *identical*. No-one has claimed this. And since it's not identical, under some possible test it would behave differently; otherwise it would be identical.

Not in the case of consciousness. There is no reason to believe that it is possible to test quality of consciousness. What might seem identical to a child may be completely dysfunctional as an adolescent - or it might be that tests done in a laboratory fail to reveal real world defects. We have no reason to believe that it is possible for consciousness to be anything other than completely unique and maybe even tied to the place and time of its instantiation.

 
But there are some changes which make no functional difference.

Absolutely, but consciousness is not necessarily a function, and function is subject to the form of measurement and interpretation applied.
 
If I have a drink of water, that changes my brain by decreasing the sodium concentration. But this change is not significant if we are considering whether I continue to manifest normal human behaviour, since firstly the brain is tolerant of moderate physical changes

But a few milligrams of LSD or ricin (LD100 of 25 µg/kg) will have a catastrophic effect on normal human capacities, so that the brain's tolerance has nothing to do with how moderate the physical changes are. That's a blanket generalization that doesn't pan out. It's folk neuroscience.
 
and secondly people can manifest a range of different behaviours and remain recognisably human and recognisably the same human. In other words humans have certain engineering tolerances in their components, and the aim in replacing components would be to do it within this tolerance. Perfection is not attainable by either engineers or nature.

Engineering may not be applicable to consciousness though. There is tolerance for the extension of consciousness - if you injure your spine, we could engineer a new segment, just like we could replace your leg with a prosthetic, but there is not necessarily a replacement for the self as a whole. A prosthetic head that doesn't replace the person is not necessarily a possibility. You assume that a person is a structure with interchangeable parts. I think it is an experience which is inherently irreducible and non-transferable.
 

 
As is the nature
of science, another team of researchers may then find some deficit in
the behaviour of the modified rats under conditions the first team did
not examine. Scientists then make modifications to the formula of the
synthetic myelin and do the experiments again.

Which is great for medicine (although ultimately maybe unsustainably expensive), but it has nothing to do with the assumption of identical structure and the hard problem of consciousness. There is no such thing as identical experience. I have suggested that in fact we can perhaps define consciousness as that which has never been repeated. It is the antithesis of that which can be repeated (hence the experience of "now"), even though experiences themselves can seem very repetitive. They only seem so from the vantage point of a completely novel moment of consideration of the memories of previous iterations.

Here is where you have misunderstood the whole aim of the thought experiment in the paper you have cited. The paper assumes that identical function does *not* necessarily result in identical consciousness and follows this idea to see where it leads.

I understand that, but it still assumes that there is such a thing as a set of functions which could be identified and reproduced that cause consciousness. I don't assume that, because consciousness isn't like anything else. It is the source of all functions and appearances, not the effect of them. Once you have consciousness in the universe, then it can be enhanced and altered in infinite ways, but none of them can replace the experience that is your own.


> This is the point of the thought experiment. The limitations of all forms of
> measurement and perception preclude all possibility of there ever being a
> such thing as an exhaustively complete set of third person behaviors of any
> system.
>
> What is it that you don't think I understand?

What you don't understand is that an exhaustively complete set of
behaviours is not required.

Yes, it is. Not for prosthetic enhancements, or repairs to a nervous system, but to replace a nervous system without replacing the person who is using it, yes, there is no set of behaviors which can ever be exhaustive enough in theory to accomplish that. You might be able to do it biologically, but there is no reason to trust it unless and until someone can be walked off of their brain for a few weeks or months and then walked back on.

The replacement components need only be within the engineering tolerance of the nervous system components. This is a difficult task but it is achievable in principle.

You assume that consciousness can be replaced, but I understand exactly why it can't. You can believe that there is no difference between scooping out your brain stem and replacing it with a functional equivalent as long as it was well engineered, but to me it's a completely misguided notion. Consciousness doesn't exist on the outside of us. Engineering only deals with exteriors. If the universe were designed by engineers, there could be no consciousness.
 

I don't access an exhaustively complete
set of behaviours to determine if my friends are the same people from
day to day, and in fact they are *not* the same systems from day to
day, as they change both physically and psychologically. I have in
mind a rather vague set of behavioural limits and if the
people who I think are my friends deviate significantly from these
limits I will start to worry.

Which is exactly why you would not want to replace your friends with devices capable only of programmed deviations. Are simulated friends 'good enough'? Will it be good enough when your friends convince you to be replaced by your simulation?

I assume that my friends have not been replaced by robots. If they have been then that means the robots can almost perfectly replicate their behaviour, since I (and people in general) am very good at picking up even tiny deviations from normal behaviour. The question then is, if the function of a human can be replicated this closely by a machine does that mean the consciousness can also be replicated? The answer is yes, since otherwise we would have the possibility of a person having radically different experiences but behaving normally and being unaware that their experiences were different.

The answer is no. A cartoon of Bugs Bunny has no experiences but behaves just like Bugs Bunny would if he had experiences. You are eating the menu.

Craig
 


-- Stathis Papaioannou

Jason Resch

unread,
Sep 17, 2012, 6:16:08 PM9/17/12
to everyth...@googlegroups.com
Craig,

Do you think if your brain were cut in half, but then perfectly put back together that you would still be conscious in the same way?

What if cut into a thousand pieces and put back together perfectly?

What if every atom was taken apart and put back together?

What if every atom was taken apart, and then atoms from a different pile were used to put you back together?

What then if the original atoms were put back, would they both experience what it is like to be you?

Does the identity of one's atoms matter or are they interchangeable?  If the identity is not what matters, what is it that does?

Jason 



--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To view this discussion on the web visit https://groups.google.com/d/msg/everything-list/-/1JuM_HGXyUoJ.

Craig Weinberg

unread,
Sep 17, 2012, 8:03:54 PM9/17/12
to everyth...@googlegroups.com


On Monday, September 17, 2012 6:18:00 PM UTC-4, Jason wrote:


Craig,

Do you think if your brain were cut in half, but then perfectly put back together that you would still be conscious in the same way?

There is no such thing as perfectly put back together. If you cut a living cell in half, it dies. The only way of putting it perfectly back together is to travel back in time and not cut it in half.
 

What if cut into a thousand pieces and put back together perfectly?

Same answer.
 

What if every atom was taken apart and put back together?

If you could take every atom in a living cell 'apart' and put it back together without killing the cell, then it seems like it would work, but I don't think that the cells would necessarily be 'the same' cells. To me consciousness is an event in time, not a structure in space. The structure is the vehicle of the event. If you mess with the vehicle, you mess with the event.
 

What if every atom was taken apart, and then atoms from a different pile were used to put you back together?

When the atoms are taken apart, you die. If you put them together in what you think is the same way, it is still a different performance of atoms, whether they are the same or different.
 

What then if the original atoms were put back, would they both experience what it is like to be you?

No.
 

Does the identity of one's atoms matter or are they interchangeable?  If the identity is not what matters, what is it that does?

Our atoms are replaced all the time. Our identity exists at the level of our experience as a whole. The experience of our body, our family, culture, etc. We are a lifetime that uses the whole brain as a way to participate in the human world as a human body.  Experience is what matters.

Craig
 

Jason 

Stathis Papaioannou

unread,
Sep 17, 2012, 11:01:43 PM9/17/12
to everyth...@googlegroups.com
On Tue, Sep 18, 2012 at 6:39 AM, Craig Weinberg <whats...@gmail.com> wrote:

> I understand that, but it still assumes that there is a such thing as a set
> of functions which could be identified and reproduced that cause
> consciousness. I don't assume that, because consciousness isn't like
> anything else. It is the source of all functions and appearances, not the
> effect of them. Once you have consciousness in the universe, then it can be
> enhanced and altered in infinite ways, but none of them can replace the
> experience that is your own.

No, the paper does *not* assume that there is a set of functions that
if reproduced will cause consciousness. It assumes that something
like what you are saying is right.
Yes, that is exactly what the paper assumes. Exactly that!

>> I assume that my friends have not been replaced by robots. If they have
>> been then that means the robots can almost perfectly replicate their
>> behaviour, since I (and people in general) am very good at picking up even
>> tiny deviations from normal behaviour. The question then is, if the function
>> of a human can be replicated this closely by a machine does that mean the
>> consciousness can also be replicated? The answer is yes, since otherwise we
>> would have the possibility of a person having radically different
>> experiences but behaving normally and being unaware that their experiences
>> were different.
>
>
> The answer is no. A cartoon of Bugs Bunny has no experiences but behaves
> just like Bugs Bunny would if he had experiences. You are eating the menu.

And if it were possible to replicate the behaviour without the
experiences - i.e. make a zombie - it would be possible to make a
partial zombie, which lacks some experiences but behaves normally and
doesn't realise that it lacks those experiences. Do you agree that
this is the implication? If not, where is the flaw in the reasoning?


--
Stathis Papaioannou

Craig Weinberg

unread,
Sep 17, 2012, 11:43:08 PM9/17/12
to everyth...@googlegroups.com


On Monday, September 17, 2012 11:02:16 PM UTC-4, stathisp wrote:
On Tue, Sep 18, 2012 at 6:39 AM, Craig Weinberg <whats...@gmail.com> wrote:

> I understand that, but it still assumes that there is a such thing as a set
> of functions which could be identified and reproduced that cause
> consciousness. I don't assume that, because consciousness isn't like
> anything else. It is the source of all functions and appearances, not the
> effect of them. Once you have consciousness in the universe, then it can be
> enhanced and altered in infinite ways, but none of them can replace the
> experience that is your own.

No, the paper does *not* assume that there is a set of functions that
if reproduced will cause consciousness. It assumes that something
like what you are saying is right.

By assume I mean the implicit assumptions which are unstated in the paper. The thought experiment comes out of a paradox arising from assumptions about qualia and the brain which are both false in my view. I see the brain as the flattened qualia of human experience.
 

It is still modeling the experience of qualia as having a quantitative relation with the ratio of brain to non-brain. That isn't the only way to model it, and I use a different model.

>> I assume that my friends have not been replaced by robots. If they have
>> been then that means the robots can almost perfectly replicate their
>> behaviour, since I (and people in general) am very good at picking up even
>> tiny deviations from normal behaviour. The question then is, if the function
>> of a human can be replicated this closely by a machine does that mean the
>> consciousness can also be replicated? The answer is yes, since otherwise we
>> would have the possibility of a person having radically different
>> experiences but behaving normally and being unaware that their experiences
>> were different.
>
>
> The answer is no. A cartoon of Bugs Bunny has no experiences but behaves
> just like Bugs Bunny would if he had experiences. You are eating the menu.

And if it were possible to replicate the behaviour without the
experiences - i.e. make a zombie - it would be possible to make a
partial zombie, which lacks some experiences but behaves normally and
doesn't realise that it lacks those experiences. Do you agree that
this is the implication? If not, where is the flaw in the reasoning?

The word zombie implies that you have an expectation of consciousness but there isn't any. That is a fallacy from the start, since there is no reason to expect a simulation to have any experience at all. It's not a zombie, it's a puppet.

A partial zombie is just someone who has brain damage, and yes if you tried to replace enough of a person's brain with a non-biological material, you would get brain damage, dementia, coma, and death.

Craig
 


--
Stathis Papaioannou

Jason Resch

unread,
Sep 18, 2012, 1:17:07 AM9/18/12
to everyth...@googlegroups.com
On Mon, Sep 17, 2012 at 7:03 PM, Craig Weinberg <whats...@gmail.com> wrote:


On Monday, September 17, 2012 6:18:00 PM UTC-4, Jason wrote:


Craig,

Do you think if your brain were cut in half, but then perfectly put back together that you would still be conscious in the same way?

There is no such thing as perfectly put back together. If you cut a living cell in half, it dies. The only way of putting it perfectly back together is to travel back in time and not cut it in half.

Why do you believe this?  We can put machines back together.  Cells are machines on a very small scale.  It would be difficult, but there is no physical reason that prevents us from putting a cell back together after it has come apart.
 
 

What if cut into a thousand pieces and put back together perfectly?

Same answer.
 

What if every atom was taken apart and put back together?

If you could take every atom in a living cell 'apart' and put it back together without killing the cell, then it seems like it would work, but I don't think that the cells would necessarily be 'the same' cells.

What is different about them?  They could have the same exact quantum state, and yet you believe that because at one point in the past some atoms had some distance put between them, this somehow rules out the possibility of those atoms ever being used to build a person or life form, or to be conscious?

Why would this be?  Our bodies continually take in and use atoms from things that were once not alive.  What is different here?
 
To me consciousness is an event in time, not a structure in space. The structure is the vehicle of the event. If you mess with the vehicle, you mess with the event.

What is the difference between putting someone back together and a baby slowly being constructed through a set of complex chemical reactions from previously lifeless matter?  In either case would the result not be a fully alive and conscious human?  Do you suppose life also requires that life forms be built in certain natural ways (rather than artificial ways)?
 
 

What if every atom was taken apart, and then atoms from a different pile were used to put you back together?

When the atoms are taken apart, you die. If you put them together in what you think is the same way,
it is still a different performance of atoms, whether they are the same or different.
 

The hypothetical did not involve some person thinking they were put back in the same way, but the atoms actually being put back in the same way.

Do you still think there would be a "different performance of atoms"?
 

What then if the original atoms were put back, would they both experience what it is like to be you?

No.

Why shouldn't they?
 
 

Does the identity of one's atoms matter or are they interchangeable?  If the identity is not what matters, what is it that does?

Our atoms are replaced all the time.

Right.
 
Our identity exists at the level of our experience as a whole.

I don't understand what you mean here.
 
The experience of our body, our family, culture, etc. We are a lifetime that uses the whole brain as a way to participate in the human world as a human body. 

Are you suggesting that things beyond one's skull are relevant to what someone experiences?

Jason

 
Experience is what matters.

Craig
 

Jason 


Roger Clough

unread,
Sep 18, 2012, 5:45:11 AM9/18/12
to everything-list
Hi Craig Weinberg

According to Leibniz (and common sense) the monads or "souls" of rocks do not contain
intelligence or feeling and are thus called "bare naked monads."
These should be much different from the monads of humans, which contain
intelligence and feelings and are true souls (Leibniz however
refers to human souls as spirits).



Roger Clough, rcl...@verizon.net
9/18/2012
"Forever is a long time, especially near the end."
Woody Allen

----- Receiving the following content -----
From: Craig Weinberg
Receiver: everything-list
Time: 2012-09-17, 16:39:12
Subject: Re: Zombieopolis Thought Experiment




On Monday, September 17, 2012 9:24:23 AM UTC-4, stathisp wrote:



But a few milligrams of LSD or ricin (LD100 of 25 µg/kg) will have a catastrophic effect on normal human capacities, so the brain's tolerance has nothing to do with how moderate the physical changes are. That's a blanket generalization that doesn't pan out. It's folk neuroscience.
To view this discussion on the web visit https://groups.google.com/d/msg/everything-list/-/1JuM_HGXyUoJ.

Roger Clough

unread,
Sep 18, 2012, 6:07:39 AM9/18/12
to everything-list
Hi Craig Weinberg

IMHO consciousness is not really anything in itself,
it is what the brain makes of its contents that the self
perceives. The self is intelligence, which is
able to focus all pertinent brain activity to a unified point.

Roger Clough, rcl...@verizon.net
9/18/2012
"Forever is a long time, especially near the end."
Woody Allen

----- Receiving the following content -----
From: Craig Weinberg
Receiver: everything-list
Time: 2012-09-17, 23:43:08
Subject: Re: Zombieopolis Thought Experiment




On Monday, September 17, 2012 11:02:16 PM UTC-4, stathisp wrote:
To view this discussion on the web visit https://groups.google.com/d/msg/everything-list/-/nrqkIqoR6xMJ.

Stephen P. King

unread,
Sep 18, 2012, 8:25:44 AM9/18/12
to everyth...@googlegroups.com
On 9/18/2012 6:07 AM, Roger Clough wrote:
> Hi Craig Weinberg
>
> IMHO consciousness is not really anything in itself,
> it is what the brain makes of its contents that the self
> perceives. The self is intelligence, which is
> able to focus all pertinent brain activity to a unified point.
>
> Roger Clough, rcl...@verizon.net
> 9/18/2012
> "Forever is a long time, especially near the end."
> Woody Allen
>
>
Hi Roger,

The brain as just a "lens" or "parabolic mirror", nice!

Roger Clough

unread,
Sep 18, 2012, 9:24:30 AM9/18/12
to everything-list
Hi Stephen P. King
 
Absolutely.  Science is supposed to be impersonal,
but an individual has to decide
 
what to measure (this can be influenced by politics)
how to measure it-- including how accurately
what theory to compare the results with (this can be influenced by politics)
how to interpret the results (this can be influenced by politics)
what meaning to infer from the results (this can be influenced by politics)
 
So science is often more politics than science.
 
 
Roger Clough, rcl...@verizon.net
9/18/2012
"Forever is a long time, especially near the end." -Woody Allen
 
 
----- Receiving the following content -----
Receiver: everything-list
Time: 2012-09-17, 11:30:13
Subject: Re: Zombieopolis Thought Experiment

Craig Weinberg

unread,
Sep 18, 2012, 1:44:27 PM9/18/12
to everyth...@googlegroups.com


On Tuesday, September 18, 2012 1:17:08 AM UTC-4, Jason wrote:


On Mon, Sep 17, 2012 at 7:03 PM, Craig Weinberg <whats...@gmail.com> wrote:


On Monday, September 17, 2012 6:18:00 PM UTC-4, Jason wrote:


Craig,

Do you think if your brain were cut in half, but then perfectly put back together that you would still be conscious in the same way?

There is no such thing as perfectly put back together. If you cut a living cell in half, it dies. The only way of putting it perfectly back together is to travel back in time and not cut it in half.

Why do you believe this?  We can put machines back together.  Cells are machines on a very small scale.  It would be difficult, but there is no physical reason that prevents us from putting a cell back together after it has come apart.

I'm sure when electricity was first being understood it was assumed that a dead body could be revived by electrical stimulation. The reality is that there are processes which are thermodynamically irreversible. This is also why cryonics has not yet been successful. It's not that simple. Living bodies and cells are more than the sum of their parts, and if you reduce the wholes to parts, there is no guarantee that, if you could force the parts into a whole again, it would be the same whole.

Machines don't die, but living organisms do. Machines are assembled from the outside, but organisms are born of their own internal nature. The two approaches could not be more opposite.

 
 

What if cut into a thousand pieces and put back together perfectly?

Same answer.
 

What if every atom was taken apart and put back together?

If you could take every atom in a living cell 'apart' and put it back together without killing the cell, then it seems like it would work, but I don't think that the cells would necessarily be 'the same' cells.

What is different about them? They could have the exact same quantum state, and yet you believe that because at some point in the past some atoms had distance put between them, the possibility of those atoms ever being used to build a person or life form, or being conscious, is somehow ruled out?

What's different is that everything in the universe has changed. It's a different moment. Particles are entangled through time as well as across space. You assume that there is such a thing as two identical instances of consciousness, when everything that we have to go on tells us exactly the opposite. No two moments, no two people, no two experiences are identical. They can't be, because every experience is shaped and influenced by every other experience.
 

Why would this be?  Our bodies continually take in and use atoms from things that were once not alive.  What is different here?

Yet mainly what we need to survive is molecules from things that were alive. Why would that be? Different levels of evolutionary development correspond to different layers of qualitative elaboration. A human being needs more than sunlight and water, more even than nutrients and shelter. People need social participation and perceptual stimulation to be truly human. It's irreducible. There is no information-only substitute.
 
 
To me consciousness is an event in time, not a structure in space. The structure is the vehicle of the event. If you mess with the vehicle, you mess with the event.

What's the difference between putting someone back together and a baby slowly being constructed through a set of complex chemical reactions from previously lifeless matter?

The difference is that the baby is growing by itself. It is the embodiment of a self-expressing human story as a lifetime-long event in the cosmos. It's like asking, if we beat and torture someone for ten years, but then restore their body and put them back into society, what could go wrong? Experience is the underlying reality, structure only represents the control of experience.
 
 In either case would the result not be a fully alive and conscious human?  Do you suppose life also requires that life forms be built in certain natural ways (rather than artificial ways)?

Life forms aren't built, they grow. If you create the right conditions, you can cause life to grow, but only because the potential for life to exist in the universe is already present over and above any mechanistic or informative purpose. If you grow a human being from human DNA then you get a human. If you assemble a machine that you think should behave like human DNA, then you'd get something else - maybe an alternate biology, maybe a cybernetic non-entity, but not a Homo sapiens with human experiences.
 
 
 

What if every atom was taken apart, and then atoms from a different pile were used to put you back together?

When the atoms are taken apart, you die. If you put them together in what you think is the same way,
it is still a different performance of atoms, whether they are the same or different.
 

The hypothetical did not involve some person thinking they were put back in the same way, but the atoms actually being put back in the same way.

If you took apart the wick of a burning candle, and then put it back together 'the same way', would it still be burning with the same flame?
 

Do you still think there would be a "different performance of atoms"?
 

What then if the original atoms were put back, would they both experience what it is like to be you?

No.

Why shouldn't they?

I would imagine that atoms have atomic experiences, not human experiences. This is the same reason that my computer is not reading what it is that I am writing right now.
 
 
 

Does the identity of one's atoms matter or are they interchangeable? If the identity is not what matters, what is it that does?

Our atoms are replaced all the time.

Right.
 
Our identity exists at the level of our experience as a whole.

I don't understand what you mean here.

I mean that Jason is a phenomenological event at the level of human interaction, not one that can be resolved at the sub-personal level. Levels change everything, especially when you consider that they may be intertwined in both bottom-up and top-down relations.
 
 
The experience of our body, our family, culture, etc. We are a lifetime that uses the whole brain as a way to participate in the human world as a human body. 

Are you suggesting that things beyond one's skull are relevant to what someone experiences?

If that weren't the case then all that we could experience would be our skull.

Craig
 

Craig Weinberg

unread,
Sep 18, 2012, 1:54:30 PM9/18/12
to everyth...@googlegroups.com


On Tuesday, September 18, 2012 6:08:46 AM UTC-4, rclough wrote:
Hi Craig Weinberg  

IMHO consciousness is not really anything in itself,
it is what the brain makes of its contents that the self
perceives.

It gets tricky. Depends what you mean by a thing. I would say that consciousness is the less-than-anything and the more-than-anything which experiences the opposite of itself as somethings. It is otherthanthing. In order to think or talk about this, we need to represent it as a subjective idea 'thing'.

Make no mistake though. The brain is nothing but an experience of many things, of our mind's experience of our body using our body's experience of medical instruments. The capacity to experience is primary. No structure can generate an experience unless it is made out of something which already has that capacity. If I make a perfect model of H2O out of anything other than actual hydrogen and oxygen atoms, I will not get water.
 
The self is intelligence, which is  
able to focus all pertinent brain activity to a unified point.

You don't need intelligence to have a self. Infants are pretty selfish, and not terribly intelligent. Brain activity is overrated as well. Jellyfish and worms have no brain. Bacteria have no brains, yet they behave intelligently (see also quorum sensing). Intelligence is everywhere - just not human intelligence.

Craig
 

John Mikes

unread,
Sep 18, 2012, 5:17:40 PM9/18/12
to everyth...@googlegroups.com
Ha ha: so not consciousness is the 'thing', but 'intelligence'? or is this one also a function (of the brain towards the self?) who is the self? how does the brain
DO something   
(as a homunculus?) on its own? Any suggestions?
John M       

Stephen P. King

unread,
Sep 18, 2012, 6:16:53 PM9/18/12
to everyth...@googlegroups.com
On 9/18/2012 5:17 PM, John Mikes wrote:
Ha ha: so not consciousness is the 'thing', but 'intelligence'? or is this one also a function (of the brain towards the self?) who is the self? how does the brain
DO something   
(as a homunculus?) on its own? Any suggestions?
John M       

Stathis Papaioannou

unread,
Sep 18, 2012, 7:13:46 PM9/18/12
to everyth...@googlegroups.com
On Tue, Sep 18, 2012 at 1:43 PM, Craig Weinberg <whats...@gmail.com> wrote:

>> No, the paper does *not* assume that there is a set of functions that
>> if reproduced will cause consciousness. It assumes that something
>> like what you are saying is right.
>
>
> By assume I mean the implicit assumptions which are unstated in the paper.
> The thought experiment comes out of a paradox arising from assumptions about
> qualia and the brain which are both false in my view. I see the brain as the
> flattened qualia of human experience.

Chalmers' position is that functionalism is true, and he states this
in the introduction, but this is not *assumed* in the thought
experiment. The thought experiment explicitly assumes that
functionalism is *false*; that consciousness is dependent on the
substrate and swapping a brain for a functional equivalent will not
necessarily give rise to the same consciousness or any consciousness
at all. Isn't that what you believe?

>> And if it were possible to replicate the behaviour without the
>> experiences - i.e. make a zombie - it would be possible to make a
>> partial zombie, which lacks some experiences but behaves normally and
>> doesn't realise that it lacks those experiences. Do you agree that
>> this is the implication? If not, where is the flaw in the reasoning?
>
>
> The word zombie implies that you have an expectation of consciousness but
> there isn't any. That is a fallacy from the start, since there is no reason
> to expect a simulation to have any experience at all. It's not a zombie,
> it's a puppet.

Replace the word "zombie" with "puppet" if that makes it easier to understand.

> A partial zombie is just someone who has brain damage, and yes if you tried
> to replace enough of a person's brain with a non-biological material, you
> would get brain damage, dementia, coma, and death.

Not if the puppet components perform the same purely mechanical
functions as the original components. In order for this to happen
according to the paper you have to accept that the physics of the
brain is in fact computable. If it is computable, then we can model
the behaviour of the brain, although according to the assumptions in
the paper (which coincide with your assumptions) modeling the
behaviour won't reproduce the consciousness. All the evidence we have
suggests that physics is computable, but it might not be. It may turn
out that there is some exotic physics in the brain which requires
solving the halting problem, for example, in order to model it, and
that would mean that a computer could not adequately simulate those
components of the brain which utilise this physics. But going beyond
the paper, the argument for functionalism (substrate-independence of
consciousness) could still be made by considering theoretical
components with non-biological hypercomputers.

--
Stathis Papaioannou

Stathis Papaioannou

unread,
Sep 18, 2012, 7:26:01 PM9/18/12
to everyth...@googlegroups.com
On Wed, Sep 19, 2012 at 3:44 AM, Craig Weinberg <whats...@gmail.com> wrote:

> I'm sure when electricity was first being understood it was assumed that a
> dead body could be revived by electrical stimulation. The reality is that
> there are processes which are thermodynamically irreversible. This is why
> cryogenics has not been successful yet also. It's not that simple. Living
> bodies and cells are more than the sum of their parts, and if you reduce the
> wholes to parts, there is no guarantee that if you could force the parts
> into a whole again, that it would be the same whole.
>
> Machines don't die, but living organisms do. Machines are assembled from the
> outside, but organisms are born of their own internal nature. The two
> approaches could not be more opposite.

It's difficult having a discussion with you when you believe something
contrary to all biological science for the last two centuries.


--
Stathis Papaioannou

Roger Clough

unread,
Sep 19, 2012, 8:48:27 AM9/19/12
to everything-list
Hi Craig Weinberg

"Things" have extension and are physical, a "non-thing" has no extension and is not physical.
Consciousness or mind is not physical, at least in my understanding. The brain is physical.


Roger Clough, rcl...@verizon.net
9/19/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Craig Weinberg
Receiver: everything-list
Time: 2012-09-18, 13:54:30
Subject: Re: IMHO consciousness is an activity not a thing
To view this discussion on the web visit https://groups.google.com/d/msg/everything-list/-/J0zrRDzijqwJ.

John Clark

unread,
Sep 19, 2012, 9:51:57 AM9/19/12
to everyth...@googlegroups.com

On Wed, Sep 19, 2012  Roger Clough <rcl...@verizon.net> wrote:

> "Things" have extension and are physical

In other words they are nouns. 

> a  "non-thing" has no extension and is not physical.

Like an adjective.

> Consciousness or mind is not physical

So it's not a noun.

> The brain is physical.

Yes, so mind must be what the brain does.

  John K Clark

Roger Clough

unread,
Sep 19, 2012, 9:53:45 AM9/19/12
to everything-list
Hi John Mikes
 
Once you leave the material world for the ideal one,
all things -- or at least many things-- now become possible.
 
 
Roger Clough, rcl...@verizon.net
9/19/2012
"Forever is a long time, especially near the end." -Woody Allen
 
 
----- Receiving the following content -----
From: John Mikes
Receiver: everything-list
Time: 2012-09-18, 17:17:40
Subject: Re: IMHO consciousness is an activity not a thing

Ha ha: so not consciousness is the 'thing', but 'intelligence'? or is this one also a function (of the brain towards the self?) who is the self? how does the brain
DO something
(as a homunculus?) on its own? Any suggestions?
John M

On Tue, Sep 18, 2012 at 6:07 AM, Roger Clough <rcl...@verizon.net> wrote:
Hi Craig Weinberg

IMHO consciousness is not really anything in itself,
it is what the brain makes of its contents that the self
perceives. The self is intelligence, which is
able to focus all pertinent brain activity to a unified point.

Roger Clough, rcl...@verizon.net
9/18/2012
"Forever is a long time, especially near the end."
Woody Allen

----- Receiving the following content -----
From: Craig Weinberg
Receiver: everything-list
Time: 2012-09-17, 23:43:08
Subject: Re: Zombieopolis Thought Experiment




On Monday, September 17, 2012 11:02:16 PM UTC-4, stathisp wrote:
On Tue, Sep 18, 2012 at 6:39 AM, Craig Weinberg wrote:

Roger Clough

unread,
Sep 19, 2012, 9:58:58 AM9/19/12
to everything-list
Hi John Clark

Very good. I might amplify it simply by saying that
mind can also operate on brain (through the will or an intention).

I have no idea at the present of what such a monadic structure
might be like.


Roger Clough, rcl...@verizon.net
9/19/2012
"Forever is a long time, especially near the end." -Woody Allen
----------------------------------------------------------------------------------------

----- Receiving the following content -----
From: John Clark
Receiver: everything-list
Time: 2012-09-19, 09:51:57
Subject: Re: Re: IMHO consciousness is an activity not a thing



On Wed, Sep 19, 2012, Roger Clough wrote:



> "Things" have extension and are physical

In other words they are nouns.


> a "non-thing" has no extension and is not physical.


Like an adjective.



> Consciousness or mind is not physical

So it's not a noun.



> The brain is physical.


Yes, so mind must be what the brain does.

John K Clark


Roger Clough

unread,
Sep 19, 2012, 10:38:14 AM9/19/12
to everything-list
Hi Stathis Papaioannou

OK, I'll bite.

How does modern biology define life ?



Roger Clough, rcl...@verizon.net
9/19/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Stathis Papaioannou
Receiver: everything-list
Time: 2012-09-18, 19:26:01
Subject: Re: Zombieopolis Thought Experiment



Stathis Papaioannou

unread,
Sep 19, 2012, 1:42:05 PM9/19/12
to everyth...@googlegroups.com
On Thu, Sep 20, 2012 at 12:38 AM, Roger Clough <rcl...@verizon.net> wrote:
> Hi Stathis Papaioannou
>
> OK, I'll bite.
>
> How does modern biology define life ?

It's rarely defined unless someone asks for a definition. Problems
arise with the definition when it comes to viruses and prions, which
have some characteristics of other entities commonly considered alive
but not others. We can imagine other cases of entities that grow,
replicate and maintain homeostasis but may or may not be said to be
alive based on some other arbitrary criterion, such as whether it uses
organic chemistry. Thus a machine (electronic and mechanical) that
maintains itself, seeks energy and spare parts from its environment
and makes copies of itself may or may not be called "alive" depending
on the whim of the definer.

But the important point I wanted to make is that biologists reject vitalism.


--
Stathis Papaioannou

Bruno Marchal

unread,
Sep 19, 2012, 2:18:30 PM9/19/12
to everyth...@googlegroups.com
On 19 Sep 2012, at 15:53, Roger Clough wrote:

Hi John Mikes
 
Once you leave the material world for the ideal one,
all things -- or at least many things-- now become possible.

Yes. Since always.

But there are many paths, and we can get lost.

Platonia before and after Gödel or Church is not the same. The circle and the regular polyhedra keep their majestic importance, but now they have the company of the Mandelbrot set, and UDs. Shit happens, when seen from inside. With comp, heaven and hell are not mechanically separable, nothing is easy near the boundaries. 

***

I think that your metaphysics and reading of Leibniz makes sense for me, and comp, but I have to say I don't follow your methodology or teaching method on the religious field, as it contains authoritative arguments. 

My feeling is that authoritative argument is the symptom of those who lack faith. 

That error is multiplied in the transfinite when an authoritative argument is attributed to God.

Can you answer the following question?

How could anyone love a God, or a Goddess, who threatens you with eternal torture if you don't love Him or Her?

That's bizarre.

How could even just an atom of sincerity reside in that love, with such an explicit horrible threat?

I hope you don't mind my frankness and the naïvety of my questioning.

Bruno



 
 
Roger Clough, rcl...@verizon.net
9/19/2012
"Forever is a long time, especially near the end." -Woody Allen
 
 
----- Receiving the following content -----
From: John Mikes
Receiver: everything-list
Time: 2012-09-18, 17:17:40
Subject: Re: IMHO consciousness is an activity not a thing

Ha ha: so not consciousness is the 'thing', but 'intelligence'? or is this one also a function (of the brain towards the self?) who is the self? how does the brain
DO something
(as a homunculus?) on its own? Any suggestions?
John M

On Tue, Sep 18, 2012 at 6:07 AM, Roger Clough <rcl...@verizon.net> wrote:
Hi Craig Weinberg

IMHO consciousness is not really anything in itself,
it is what the brain makes of its contents that the self
perceives. The self is intelligence, which is
able to focus all pertinent brain activity to a unified point.

Roger Clough, rcl...@verizon.net
9/18/2012
"Forever is a long time, especially near the end."
Woody Allen

----- Receiving the following content -----
From: Craig Weinberg
Receiver: everything-list
Time: 2012-09-17, 23:43:08
Subject: Re: Zombieopolis Thought Experiment




On Monday, September 17, 2012 11:02:16 PM UTC-4, stathisp wrote:
On Tue, Sep 18, 2012 at 6:39 AM, Craig Weinberg wrote:

meekerdb

unread,
Sep 19, 2012, 3:05:52 PM9/19/12
to everyth...@googlegroups.com
On 9/19/2012 10:42 AM, Stathis Papaioannou wrote:
> On Thu, Sep 20, 2012 at 12:38 AM, Roger Clough<rcl...@verizon.net> wrote:
>> Hi Stathis Papaioannou
>>
>> OK, I'll bite.
>>
>> How does modern biology define life ?
> It's rarely defined unless someone asks for a definition. Problems
> arise with the definition when it comes to viruses and prions, which
> have some characteristics of other entities commonly considered alive
> but not others.

I think they illustrate the point that the ability to live is always relative to an
environment.

Brent

Craig Weinberg

unread,
Sep 19, 2012, 4:27:10 PM9/19/12
to everyth...@googlegroups.com


On Tuesday, September 18, 2012 7:14:17 PM UTC-4, stathisp wrote:
On Tue, Sep 18, 2012 at 1:43 PM, Craig Weinberg <whats...@gmail.com> wrote:

>> No, the paper does *not* assume that there is a set of functions that
>> if reproduced will cause consciousness. It assumes that something
>> like what you are saying is right.
>
>
> By assume I mean the implicit assumptions which are unstated in the paper.
> The thought experiment comes out of a paradox arising from assumptions about
> qualia and the brain which are both false in my view. I see the brain as the
> flattened qualia of human experience.

Chalmers' position is that functionalism is true, and he states this
in the introduction, but this is not *assumed* in the thought
experiment. The thought experiment explicitly assumes that
functionalism is *false*; that consciousness is dependent on the
substrate and swapping a brain for a functional equivalent will not
necessarily give rise to the same consciousness or any consciousness
at all. Isn't that what you believe?

I believe that there is ontologically no such thing as a functional equivalent of an organism by an inorganic mechanism. If you use stem cells as the functional equivalent, then it could work fine. There is no 'good enough' as a criterion for being alive.
 

>> And if it were possible to replicate the behaviour without the
>> experiences - i.e. make a zombie - it would be possible to make a
>> partial zombie, which lacks some experiences but behaves normally and
>> doesn't realise that it lacks those experiences. Do you agree that
>> this is the implication? If not, where is the flaw in the reasoning?
>
>
> The word zombie implies that you have an expectation of consciousness but
> there isn't any. That is a fallacy from the start, since there is no reason
> to expect a simulation to have any experience at all. It's not a zombie,
> it's a puppet.

Replace the word "zombie" with "puppet" if that makes it easier to understand.

I have no trouble understanding what you are saying.
 

> A partial zombie is just someone who has brain damage, and yes if you tried
> to replace enough of a person's brain with a non-biological material, you
> would get brain damage, dementia, coma, and death.

Not if the puppet components perform the same purely mechanical
functions as the original components.

I am saying that consciousness is not a mechanical function, so it makes no difference if you have a trillion little puppet strings pushing dendrites around, there is still nothing there that experiences anything as a whole.
 
In order for this to happen
according to the paper you have to accept that the physics of the
brain is in fact computable. If it is computable, then we can model
the behaviour of the brain,

Except that we can't, because the behavior of the brain is contingent on the real experience of the person who is using that brain to experience their life. You would have to model the entire cosmos and separate out the experiences of a single person to model the brain.
 
although according to the assumptions in
the paper (which coincide with your assumptions)

No, they don't. I say that the paper's explicit assumptions are based on incorrect implicit assumptions (as are yours) that consciousness is the end product of brain mechanisms. I see consciousness as the beginning and ending of all things, and the brain as a representation of certain kinds of experiences.
 
modeling the
behaviour won't reproduce the consciousness. All the evidence we have
suggests that physics is computable, but it might not be. It may turn
out that there is some exotic physics in the brain which requires
solving the halting problem, for example, in order to model it, and
that would mean that a computer could not adequately simulate those
components of the brain which utilise this physics. But going beyond
the paper, the argument for functionalism (substrate-independence of
consciousness) could still be made by considering theoretical
components with non-biological hypercomputers.

Will functionalism make arsenic edible? Will it use numbers to cook food?

My point is this. I am programmed, but I am not a program. An electronic computer is also programmed but not a program. It doesn't matter what kind of program is installed on either one, neither of us can become the other.

Craig

Craig Weinberg

unread,
Sep 19, 2012, 4:32:22 PM9/19/12
to everyth...@googlegroups.com


On Wednesday, September 19, 2012 8:49:35 AM UTC-4, rclough wrote:
Hi Craig Weinberg  

"Things" have extension and are physical, a  "non-thing" has no extension and is not physical.
Consciousness or mind is not physical, at least in my understanding. The brain is physical.


Hi Roger,

Taking drugs changes the mind. Caffeine is physical and it causes changes in the brain which we experience subjectively. Everything is physical, but interior experiences are private, temporal, and sensory-motive, while exterior objects are public, spatial, and electromagnetic.

Craig

Stephen P. King

unread,
Sep 20, 2012, 12:40:21 AM9/20/12
to everyth...@googlegroups.com
Hi Craig,

    What you perhaps need to show is that you are not just making a new case for "vitalism": that there is something about each individual conscious thing that is not capturable in terms of recursively enumerable functions.

Stathis Papaioannou

unread,
Sep 20, 2012, 12:54:00 AM9/20/12
to everyth...@googlegroups.com
On Thu, Sep 20, 2012 at 6:27 AM, Craig Weinberg <whats...@gmail.com> wrote:

>> Chalmers' position is that functionalism is true, and he states this
>> in the introduction, but this is not *assumed* in the thought
>> experiment. The thought experiment explicitly assumes that
>> functionalism is *false*; that consciousness is dependent on the
>> substrate and swapping a brain for a functional equivalent will not
>> necessarily give rise to the same consciousness or any consciousness
>> at all. Isn't that what you believe?
>
>
> I believe that there is ontologically no such thing as a functional
> equivalent of an organism by an inorganic mechanism. If you use stem cells
> as the functional equivalent, then it could work fine. There is no 'good
> enough' as a criterion for being alive.

I've explained what "functional equivalent" means in this case. A
replacement neurological component is functionally equivalent to the
biological one if it results in a similar sequence of neural firings
ultimately causing muscle contraction. Note the important word
"similar". If I make a functional equivalent of you it means it will
behave similarly enough to you that people who know you can't tell a
difference. It does not necessarily mean that the functional
equivalent will display exactly the same behaviour that you will, and
in fact it would be extremely unlikely that even an atom for atom copy
would display exactly the same behaviour.

> I am saying that consciousness is not a mechanical function, so it makes no
> difference if you have a trillion little puppet strings pushing dendrites
> around, there is still nothing there that experiences anything as a whole.

Yes, yes, yes that is EXACTLY the assumption made in the thought
experiment. Sorry for shouting. How else can I say it so that it's
clear?

>> In order for this to happen
>> according to the paper you have to accept that the physics of the
>> brain is in fact computable. If it is computable, then we can model
>> the behaviour of the brain,
>
>
> Except that we can't, because the behavior of the brain is contingent on the
> real experience of the person who is using that brain to experience their
> life. You would have to model the entire cosmos and separate out the
> experiences of a single person to model the brain.

No you wouldn't. All you have to model is when a neuron will fire
given a certain input. If you can do this then you can model when the
motor neurons fire and you can model behaviour. You then have a zombie
or a puppet (in your view and in Chalmers' thought experiment) which
moves around like a person or an animal but lacks any experience. For
example, when you stick a pin in it, it says "ouch!" and says it will
hit you if you do it again, but it doesn't actually feel any pain, it
doesn't actually understand what has happened, and it doesn't actually
care about your actions or anything else. Is that clear enough?
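The claim above — that behaviour can be modelled if we can predict when a neuron fires given its input — can be sketched with a toy model. The leaky integrate-and-fire unit below is an illustration only; its name and parameters are invented for the sketch, not anything proposed in this thread.

```python
# Toy sketch of "model when a neuron will fire given a certain input":
# a leaky integrate-and-fire unit. Illustrative only -- real neural
# modelling is far richer; all names and parameters here are invented.

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Return the time steps at which the model neuron fires."""
    v = 0.0          # membrane potential
    spikes = []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in   # potential decays, then integrates the input
        if v >= threshold:    # firing condition
            spikes.append(t)
            v = 0.0           # reset after the spike
    return spikes

# A constant weak input makes the unit fire periodically once the
# integrated potential crosses threshold.
spikes = simulate_lif([0.3] * 20)
print(spikes)  # fires every fourth step
```

Chain such units from sensory input to motor output and you have the "zombie" of the thought experiment: predictable firing, hence predictable behaviour, with nothing said about experience.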

>> although according to the assumptions in
>> the paper (which coincide with your assumptions)
>
>
> No, they don't. I say that the paper's explicit assumptions are based on
> incorrect implicit assumptions (as are yours) that consciousness is the end
> product of brain mechanisms. I see consciousness as the beginning and ending
> of all things, and the brain as a representation of certain kinds of
> experiences.

And the assumption in the paper is also that consciousness is not
involved in the process whereby electronic circuits cause muscles to
twitch.

>> modeling the
>> behaviour won't reproduce the consciousness. All the evidence we have
>> suggests that physics is computable, but it might not be. It may turn
>> out that there is some exotic physics in the brain which requires
>> solving the halting problem, for example, in order to model it, and
>> that would mean that a computer could not adequately simulate those
>> components of the brain which utilise this physics. But going beyond
>> the paper, the argument for functionalism (substrate-independence of
>> consciousness) could still be made by considering theoretical
>> components with non-biological hypercomputers.
>
>
> Will functionalism make arsenic edible? Will it use numbers to cook food?

If functionalism is true then it will allow you to replace your brain
with a machine and remain you.

> My point is this. I am programmed, but I am not a program. An electronic
> computer is also programmed but not a program. It doesn't matter what kind
> of program is installed on either one, neither of us can become the other.

But you become the person you are in a year's time, even though almost
all the atoms in your body have changed.


--
Stathis Papaioannou

Roger Clough

unread,
Sep 20, 2012, 5:45:00 AM9/20/12
to everything-list



BRUNO: I think that your metaphysics and reading of Leibniz makes sense for me, and comp, but I have to say I don't follow your methodology or teaching method on the religious field, as it contains authoritative arguments.

ROGER: Everything I write should be prefaced with IMHO.

BRUNO: My feeling is that authoritative argument is the symptom of those who lack faith.

ROGER: That doesn't make sense, because faith = trust. And if you don't trust, nothing is authoritative.

BRUNO: That error is multiplied in the transfinite when an authoritative argument is attributed to God.

ROGER: Sorry, no comprehende.

BRUNO: Can you answer the following question?

How could anyone love a God, or a Goddess, threatening you with eternal torture in case you don't love Him or Her?

That's bizarre.

How could even just an atom of sincerity reside in that love, with such an explicit horrible threat?

ROGER: That love and all love, comes from God, not from me.

BRUNO: I hope you don't mind my frankness and the naivety of my questioning.

Bruno

ROGER: Not at all, as in my experience most agnosticism or atheism
is a product of ignorance, if you don't mind my saying that. :-)


Roger Clough, rcl...@verizon.net
9/19/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: John Mikes
Receiver: everything-list
Time: 2012-09-18, 17:17:40
Subject: Re: IMHO consciousness is an activity not a thing


Ha ha: so consciousness is not the 'thing', but 'intelligence'? Or is this one also a function (of the brain towards the self)? Who is the self? How does the brain
DO something (as a homunculus?) on its own? Any suggestions?
John M


On Tue, Sep 18, 2012 at 6:07 AM, Roger Clough wrote:

Hi Craig Weinberg

IMHO consciousness is not really anything in itself,
it is what the brain makes of its contents that the self
perceives. The self is intelligence, which is
able to focus all pertinent brain activity to a unified point.

Roger Clough, rcl...@verizon.net
9/18/2012
"Forever is a long time, especially near the end."
Woody Allen

----- Receiving the following content -----
From: Craig Weinberg
Receiver: everything-list
Time: 2012-09-17, 23:43:08
Subject: Re: Zombieopolis Thought Experiment




On Monday, September 17, 2012 11:02:16 PM UTC-4, stathisp wrote:
On Tue, Sep 18, 2012 at 6:39 AM, Craig Weinberg wrote:

Bruno Marchal

unread,
Sep 20, 2012, 6:06:06 AM9/20/12
to everyth...@googlegroups.com

On 20 Sep 2012, at 11:45, Roger Clough wrote:

>
>
>
> BRUNO: I think that your metaphysics and reading of Leibniz makes
> sense for me, and comp, but I have to say I don't follow your
> methodology or teaching method on the religious field, as it
> contains authoritative arguments.
>
> ROGER: Everything I write should be prefaced with IMHO.
>
> BRUNO: My feeling is that authoritative argument is the symptom of
> those who lack faith.
>
> ROGER: That doesn't make sense, because faith= trust. And if you
> don't trust, nothing is authoritative.
>
> BRUNO: That error is multiplied in the transfinite when an
> authoritative argument is attributed to God.
>
> ROGER: Sorry, no comprehende.

I can trust entities which provide explanations, not entities
threatening me with torture in case I do not love them.
Humans have attributed authoritative arguments to God, with the result
of justifying their own use of them.
I can understand such arguments in warfare, or when a decision must be
taken without the time to make a rational one, but in the
religious field I think that authoritative arguments have to fail;
they only display the lack of faith of those who use them or, more
often, their special terrestrial interests.




>
> BRUNO: Can you answer the following question?
>
> How could anyone love a God, or a Goddess, threatening you with
> eternal torture in case you don't love Him or Her?
>
> That's bizarre.
>
> How could even just an atom of sincerity reside in that love, with
> such an explicit horrible threat?
>
> ROGER: That love and all love, comes from God, not from me.

But then why does God have to threaten his creatures to get love from them?
And again, how could that love be sincere?
This does not make sense.

Bruno
> http://iridia.ulb.ac.be/~marchal/
>

http://iridia.ulb.ac.be/~marchal/



Roger Clough

unread,
Sep 20, 2012, 6:47:44 AM9/20/12
to everything-list
Hi Stathis Papaioannou

The brain is dead meat unless something like vitalism or intelligence
or consciousness is there not only to provide life but to explain
how the brain works. How is the brain able to focus its cacophony
of electromagnetic signals into a perception?

For the brain is a complex bundle of nerves and electromagnetic signals
that, without something to organize and focus its experiences,
would only provide us with chaotic noise. The self does that.
It provides us with a singular unified point of experience, that
is to say, the self provides us with perception.

Do you have an alternate explanation?


Roger Clough, rcl...@verizon.net
9/20/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Stathis Papaioannou
Receiver: everything-list
Time: 2012-09-19, 13:42:05
Subject: Re: Re: Zombieopolis Thought Experiment

Roger Clough

unread,
Sep 20, 2012, 6:54:25 AM9/20/12
to everything-list
Hi meekerdb

I would say that one necessary ability for
life is for an organism to be able to separate itself off
from its environment and thus to be able to make its
own decisions without outside interference. In
other words, to be autonomous.

Materialism provides no such focussing tool.
I would call that tool a self, primitive though it may be.


Roger Clough, rcl...@verizon.net
9/20/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: meekerdb
Receiver: everything-list
Time: 2012-09-19, 15:05:52
Subject: Re: Zombieopolis Thought Experiment


On 9/19/2012 10:42 AM, Stathis Papaioannou wrote:
> On Thu, Sep 20, 2012 at 12:38 AM, Roger Clough wrote:
>> Hi Stathis Papaioannou
>>
>> OK, I'll bite.
>>
>> How does modern biology define life ?
> It's rarely defined unless someone asks for a definition. Problems
> arise with the definition when it comes to viruses and prions, which
> have some characteristics of other entities commonly considered alive
> but not others.

I think they illustrate the point that the ability to live is always relative to an
environment.

Brent

> We can imagine other cases of entities that grow,
> replicate and maintain homeostasis but may or may not be said to be
> alive based on some other arbitrary criterion, such as whether it uses
> organic chemistry. Thus a machine (electronic and mechanical) that
> maintains itself, seeks energy and spare parts from its environment
> and makes copies of itself may or may not be called "alive" depending
> on the whim of the definer.
>
> But the important point I wanted to make is that biologists reject vitalism.
>
>

Roger Clough

unread,
Sep 20, 2012, 7:11:20 AM9/20/12
to everything-list
Hi Bruno Marchal

If you want to be the one who judges, who decides what
is best or if it is logical or not, that's not trust, it's
the way of the world. Secularism.

The problem with secularism is that it cannot
help you in a time of suffering or sorrow.


Roger Clough, rcl...@verizon.net
9/20/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Bruno Marchal
Receiver: everything-list
Time: 2012-09-20, 06:06:06

Roger Clough

unread,
Sep 20, 2012, 7:18:20 AM9/20/12
to everything-list
Hi Craig Weinberg
 
Consciousness requires an autonomous self.
So does life itself. And intelligence.
 
So, I hate to say this, but perhaps consciousness and life may be a
problem with mereology, don't know.
 
Also, have you seen Jan Smuts' "Holism"?
Maybe he solved the problem.
He was a lousy general but a good thinker otherwise.

Bruno Marchal

unread,
Sep 20, 2012, 7:41:31 AM9/20/12
to everyth...@googlegroups.com

On 20 Sep 2012, at 12:54, Roger Clough wrote:

> Hi meekerdb
>
> I would say that one necessary ability for
> life is for an organism to be able to separate itself off
> from its environment and thus to be able to make its
> own decisions without outside interference. In
> other words, to be autonomous.
>
> Materialism provides no such focussing tool.
> I would call that tool a self, primitive though it may be.

You really should study computer science, as the self is a solved
problem. Bot the 3p-self, and the non nameable 1p-self.

Bruno
http://iridia.ulb.ac.be/~marchal/



Craig Weinberg

unread,
Sep 20, 2012, 12:55:14 PM9/20/12
to everyth...@googlegroups.com


On Thursday, September 20, 2012 7:19:30 AM UTC-4, rclough wrote:
Hi Craig Weinberg
 
Consciousness requires an autonomous self.

Human consciousness requires an autonomous human self, but it is not necessarily true that consciousness requires a 'self'. It makes more sense to say that an autonomous self and consciousness both require awareness.

 
So does life itself. And intelligence.

We don't really know that. We can only speak for our own life and our own intelligence. I wouldn't presume a self, especially on low levels of awareness like molecular groupings.
 
 
So, I hate to say this, but perhaps consciousness and life may be a
problem with mereology, don't know.

Why is it a problem? Mereology is the public presentation of life, and the private presentation is the opposite: non-mereology.

 
Also, have you seen Jan Smuts' "Holism"?
Maybe he solved the problem.
He was a lousy general but a good thinker otherwise.

Nope. I'm familiar with holistic concepts though.

Craig
 

Stephen P. King

unread,
Sep 20, 2012, 3:46:54 PM9/20/12
to everyth...@googlegroups.com
On 9/20/2012 7:41 AM, Bruno Marchal wrote:
>
> On 20 Sep 2012, at 12:54, Roger Clough wrote:
>
>> Hi meekerdb
>>
>> I would say that one necessary ability for
>> life is for an organism to be able to separate itself off
>> from its environment and thus to be able to make its
>> own decisions without outside interference. In
>> other words, to be autonomous.
>>
>> Materialism provides no such focussing tool.
>> I would call that tool a self, primitive though it may be.
>
> You really should study computer science, as the self is a solved
> problem. Bot the 3p-self, and the non nameable 1p-self.
>
> Bruno
>
Dear Bruno,

Did you mean "both the 3p-self and the non-nameable 1p-self"? How
does the 1p-self name itself? Can the non-nameable 1p-self be
approximately named with an integration of an uncountable set of names?

Stephen P. King

unread,
Sep 20, 2012, 3:48:27 PM9/20/12
to everyth...@googlegroups.com
On 9/20/2012 6:54 AM, Roger Clough wrote:
> Hi meekerdb
>
> I would say that one necessary ability for
> life is for an organism to be able to separate itself off
> from its environment and thus to be able to make its
> own decisions without outside interference. In
> other words, to be autonomous.
>
> Materialism provides no such focussing tool.
> I would call that tool a self, primitive though it may be.
>
>
> Roger Clough,rcl...@verizon.net
> 9/20/2012
> "Forever is a long time, especially near the end." -Woody Allen
Dear Roger,

What are the necessary requirements for a Self? What defines
"autonomy"? I think that it is closure of some kind.

Stephen P. King

unread,
Sep 20, 2012, 9:50:01 PM9/20/12
to everyth...@googlegroups.com
On 9/20/2012 12:55 PM, Craig Weinberg wrote:


On Thursday, September 20, 2012 7:19:30 AM UTC-4, rclough wrote:
Hi Craig Weinberg
 
Consciousness requires an autonomous self.

Human consciousness requires an autonomous human self, but it is not necessarily true that consciousness requires a 'self'. It makes more sense to say that an autonomous self and consciousness both require awareness.

    What if awareness is what happens when autonomous self and consciousness mirror each other?



 
So does life itself. And intelligence.

We don't really know that. We can only speak for our own life and our own intelligence. I wouldn't presume a self, especially on low levels of awareness like molecular groupings.
 
 
So, I hate to say this, but perhaps consciousness and life may be a
problem with mereology, don't know.

Why is it a problem? Mereology is the public presentation of life, and the private presentation is the opposite: non-mereology.

    Huh? non-mereology. What is that?

Craig Weinberg

unread,
Sep 20, 2012, 10:04:04 PM9/20/12
to everyth...@googlegroups.com


On Thursday, September 20, 2012 9:49:58 PM UTC-4, Stephen Paul King wrote:
On 9/20/2012 12:55 PM, Craig Weinberg wrote:


On Thursday, September 20, 2012 7:19:30 AM UTC-4, rclough wrote:
Hi Craig Weinberg
 
Consciousness requires an autonomous self.

Human consciousness requires an autonomous human self, but it is not necessarily true that consciousness requires a 'self'. It makes more sense to say that an autonomous self and consciousness both require awareness.

    What if awareness is what happens when autonomous self and consciousness mirror each other?

There can't be an autonomous self without awareness as an ontological given to begin with, at least as an inevitable potential. What would a self be or do without awareness? You can have awareness without a self being presented within that awareness though. I've had dreams where there is no "I", there are just scenes that are taking place.



 
So does life itself. And intelligence.

We don't really know that. We can only speak for our own life and our own intelligence. I wouldn't presume a self, especially on low levels of awareness like molecular groupings.
 
 
So, I hate to say this, but perhaps consciousness and life may be a
problem with mereology, don't know.

Why is it a problem? Mereology is the public presentation of life, and the private presentation is the opposite: non-mereology.

    Huh? non-mereology. What is that?

I call it a-mereology also. That's the subjective conjugate to topology. In public realism there is the Stone Duality (topologies - logical algebras) while the private phenomenology duality is orthogonal to the Stone (a-mereology - transrational gestalt-algebra).

I posted about it a bit yesterday:

Our feeling of hurting is a (whole) experience of human reality, so that it is not composed of sub-personal experiences in a part-whole mereological relation but rather the relation is just the opposite. It is non-mereological or a-mereological. It is the primordial semi-unity/hyper-unity from which part-whole distinctions are extracted and projected outward as classical realism of an exterior world. I know that sounds dense and crazy, but I don’t know of a clearer way to describe it. Subjective experience is augmented along an axis of quality rather than quantity. Experiences of hurting recapitulate sub-personal experiences of emotional loss and disappointment, anger, and fear, with tactile sensations of throbbing, stabbing, burning, and cognitive feedback loops of worry, impatience, exaggerating and replaying the injury or illness, memories of associated experiences, etc. But we can just say ‘hurting’ and we all know generally what that means. No more particular description adds much to it. That is completely unlike exterior realism, where all we can see of a machine hurting would be that more processing power would seem to be devoted to some particular set of computations. They don’t run ‘all together and at once’, unless there is a living being who is there to interpret it that way - as we do when we look at a screen full of individual pixels and see images through the pixels rather than the changing pixels themselves.

Craig


Stephen P. King

unread,
Sep 20, 2012, 11:21:02 PM9/20/12
to everyth...@googlegroups.com
On 9/20/2012 10:04 PM, Craig Weinberg wrote:


On Thursday, September 20, 2012 9:49:58 PM UTC-4, Stephen Paul King wrote:
On 9/20/2012 12:55 PM, Craig Weinberg wrote:


On Thursday, September 20, 2012 7:19:30 AM UTC-4, rclough wrote:
Hi Craig Weinberg
 
Consciousness requires an autonomous self.

Human consciousness requires an autonomous human self, but it is not necessarily true that consciousness requires a 'self'. It makes more sense to say that an autonomous self and consciousness both require awareness.

    What if awareness is what happens when autonomous self and consciousness mirror each other?

There can't be an autonomous self without awareness as an ontological given to begin with, at least as an inevitable potential.

    I take that as a good starting point, but I am just sticking my head into a stream. That is "God", btw. ;-)


What would a self be or do without awareness?

    Not a thing!


You can have awareness without a self being presented within that awareness though. I've had dreams where there is no "I" there are just scenes that are taking place.

    And those scenes go without meaning... Reminds me of a scene from Macbeth...





 
So does life itself. And intelligence.

We don't really know that. We can only speak for our own life and our own intelligence. I wouldn't presume a self, especially on low levels of awareness like molecular groupings.
 
 
So, I hate to say this, but perhaps consciousness and life may be a
problem with mereology, don't know.

Why is it a problem? Mereology is the public presentation of life, and the private presentation is the opposite: non-mereology.

    Huh? non-mereology. What is that?

I call it a-mereology also. That's the subjective conjugate to topology. In public realism there is the Stone Duality (topologies - logical algebras) while the private phenomenology duality is orthogonal to the Stone (a-mereology - transrational gestalt-algebra).

    Mereology is the study of relations between "wholes" and "parts"....



I posted about it a bit yesterday:

Our feeling of hurting is a (whole) experience of human reality, so that it is not composed of sub-personal experiences in a part-whole mereological relation but rather the relation is just the opposite. It is non-mereological or a-mereological. It is the primordial semi-unity/hyper-unity from which part-whole distinctions are extracted and projected outward as classical realism of an exterior world. I know that sounds dense and crazy, but I don’t know of a clearer way to describe it. Subjective experience is augmented along an axis of quality rather than quantity. Experiences of hurting recapitulate sub-personal experiences of emotional loss and disappointment, anger, and fear, with tactile sensations of throbbing, stabbing, burning, and cognitive feedback loops of worry, impatience, exaggerating and replaying the injury or illness, memories of associated experiences, etc. But we can just say ‘hurting’ and we all know generally what that means. No more particular description adds much to it. That is completely unlike exterior realism, where all we can see of a machine hurting would be that more processing power would seem to be devoted to some particular set of computations. They don’t run ‘all together and at once’, unless there is a living being who is there to interpret it that way - as we do when we look at a screen full of individual pixels and see images through the pixels rather than the changing pixels themselves.

Craig

    ummmmm

Bruno Marchal

unread,
Sep 21, 2012, 4:34:49 AM9/21/12
to everyth...@googlegroups.com

On 20 Sep 2012, at 21:46, Stephen P. King wrote:

> On 9/20/2012 7:41 AM, Bruno Marchal wrote:
>>
>> On 20 Sep 2012, at 12:54, Roger Clough wrote:
>>
>>> Hi meekerdb
>>>
>>> I would say that one necessary ability for
>>> life is for an organism to be able to separate itself off
>>> from its environment and thus to be able to make its
>>> own decisions without outside interference. In
>>> other words, to be autonomous.
>>>
>>> Materialism provides no such focussing tool.
>>> I would call that tool a self, primitive though it may be.
>>
>> You really should study computer science, as the self is a solved
>> problem. Bot the 3p-self, and the non nameable 1p-self.
>>
>> Bruno
>>
> Dear Bruno,
>
> Did you mean "both the 3p-self and the non-nameable 1p-self"? How
> does the 1p-self name itself?

It cannot. In logic, a "name" is a definite description. The 3-self can
name itself (due to the existence of a solution to phi_x(y) = x), but
the 1-self cannot know who he is, and can only give relative pointers,
not a description/name.
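The fixed point phi_x(y) = x mentioned above is the kind guaranteed by Kleene's recursion theorem: there exist programs that can produce their own complete description. A quine is the classic concrete illustration (a sketch of the idea only, with variable names invented here, not Bruno's formal construction):

```python
# A quine: the three code lines below print an exact copy of themselves,
# a concrete instance of a program that "names itself" in the 3p sense.
# Illustration only; the variable names are ours.
src = 'src = {!r}\nfull = src.format(src)\nprint(full)'
full = src.format(src)  # the program's own complete source text
print(full)             # the program emits exactly itself
```

Running the printed text as a new program prints the same text again; that is the fixed-point property. No analogous trick yields the 1p-self, which is the point being made above.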



> Can the non-nameable 1p-self be approximately named with an
> integration of an uncountable set of names?

Yes. It is a sort of equivalent of a relativistic diabolo. You can
associate to a 1p-instant its computational state, and all
computations going through it. Of course the 1p itself is not such a
structure, as it is not a structure in any 3p sense.

Bruno


http://iridia.ulb.ac.be/~marchal/



Roger Clough

unread,
Sep 21, 2012, 6:54:44 AM9/21/12
to everything-list
Hi Stephen P. King

The self is the monarch of your personal world.
It is "you" in the personal sense.*
It perceives, it controls, it knows, it does,
it remembers, it lives.

Since the self is nonphysical, I say that it lives rather than exists.

Anything that lives is autonomous, anything that does not live is not autonomous.

*Note that in English, the personal you would be thee or thou.
German and Mandarin have comparable distinctions.

Roger Clough, rcl...@verizon.net
9/21/2012
"Forever is a long time, especially near the end." -Woody Allen


-----------------------------------------------------------------------------------
----- Receiving the following content -----
From: Stephen P. King
Receiver: everything-list
Time: 2012-09-20, 15:48:27
Subject: Re: Life requires autonomy

Roger Clough

unread,
Sep 21, 2012, 6:37:25 AM9/21/12
to everything-list
Hi Craig Weinberg

I suggest that we only use the word "exists" to refer to something being in spacetime.
Thus, the brain exists.

Otherwise, when something has its being outside of spacetime,
we say that it "lives". Thus mind lives, Platonia lives, numbers live,
consciousness lives, life lives. God lives.

Computers exist.
Computer programs live.

Intelligence lives.
Consciousness lives.

I both exist and live.

Numbers live.


Roger Clough, rcl...@verizon.net
9/21/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Craig Weinberg
Receiver: everything-list
Time: 2012-09-20, 22:04:04
Subject: Re: Zombieopolis Thought Experiment




On Thursday, September 20, 2012 9:49:58 PM UTC-4, Stephen Paul King wrote:
On 9/20/2012 12:55 PM, Craig Weinberg wrote:



On Thursday, September 20, 2012 7:19:30 AM UTC-4, rclough wrote:
Hi Craig Weinberg

Consciousness requires an autonomous self.

Human consciousness requires an autonomous human self, but it is not necessarily true that consciousness requires a 'self'. It makes more sense to say that an autonomous self and consciousness both require awareness.


What if awareness is what happens when autonomous self and consciousness mirror each other?


There can't be an autonomous self without awareness as an ontological given to begin with, at least as an inevitable potential. What would a self be or do without awareness? You can have awareness without a self being presented within that awareness though. I've had dreams where there is no "I" there are just scenes that are taking place.







So does life itself. And intelligence.

We don't really know that. We can only speak for our own life and our own intelligence. I wouldn't presume a self, especially on low levels of awareness like molecular groupings.



So, I hate to say this, but perhaps consciousness and life may be a
problem with mereology, don't know.

Why is it a problem? Mereology is the public presentation of life, and the private presentation is the opposite: non-mereology.


Huh? non-mereology. What is that?


I call it a-mereology also. That's the subjective conjugate to topology. In public realism there is the Stone Duality ( topologies - logical algebras) while the private phenomenology duality is orthogonal to the Stone (a-mereology - transrational gestalt-algebra).

I posted about it a bit yesterday:


Our feeling of hurting is a (whole) experience of human reality, so that it is not composed of sub-personal experiences in a part-whole mereological relation but rather the relation is just the opposite. It is non-mereological or a-mereological. It is the primordial semi-unity/hyper-unity from which part-whole distinctions are extracted and projected outward as classical realism of an exterior world. I know that sounds dense and crazy, but I don’t know of a clearer way to describe it. Subjective experience is augmented along an axis of quality rather than quantity. Experiences of hurting recapitulate sub-personal experiences of emotional loss and disappointment, anger, and fear, with tactile sensations of throbbing, stabbing, burning, and cognitive feedback loops of worry, impatience, exaggerating and replaying the injury or illness, memories of associated experiences, etc. But we can just say ‘hurting’ and we all know generally what that means. No more particular description adds much to it. That is completely unlike exterior realism, where all we can see of a machine hurting would be that more processing power would seem to be devoted to some particular set of computations. They don’t run ‘all together and at once’, unless there is a living being who is there to interpret it that way - as we do when we look at a screen full of individual pixels and see images through the pixels rather than the changing pixels themselves.

Stephen P. King

unread,
Sep 21, 2012, 11:12:15 AM9/21/12
to everyth...@googlegroups.com
On 9/21/2012 4:34 AM, Bruno Marchal wrote:
>
> On 20 Sep 2012, at 21:46, Stephen P. King wrote:
>
>> snip

>> Dear Bruno,
>>
>> Did you mean "both the 3p-self and the non-nameable 1p-self"? How
>> does the 1p-self name itself?
>
> It cannot. In logic "name" is for definite description. The 3-self can
> name itself (due to the existence of solution to phi_x(y) = x), but
> the 1-self cannot know who he is, and can only give relative pointers,
> not a description/name.
>
Dear Bruno,

A bisimulation between us is occurring! Finally, some progress!

I am considering 'particular physical systems" as "relative names"
of instances of universal computations.

>
>
>> Can the non-nameable 1p-self be approximately named with an
>> integration of an uncountable set of names?
>
> Yes. It is a sort of equivalent of a relativistic diabolo.

Please elaborate! http://en.wikipedia.org/wiki/Diabolo Interesting!
"the devil on two sticks". This fits nicely with my thinking of Pratt's
idea!

> You can associate to an 1p-instant, its computational state, and all
> computations going through it. Of course the 1p itself is not such a
> structure, as it is not a structure in any 3p sense.

Right. It is an illusion. But it is the illusion of "something".
Dreams = illusions.

John Clark

unread,
Sep 22, 2012, 1:53:50 PM9/22/12
to everyth...@googlegroups.com
On Thu, Sep 13, 2012 at 3:03 PM, Craig Weinberg <whats...@gmail.com> wrote:

> If anyone is not familiar with David Chalmers "Absent Qualia, Fading Qualia, Dancing Qualia" You should have a look at it first.

I confess I have not read it because I have little confidence it's any better than the Chinese Room. Well OK I exaggerate, it's probably better than that (what isn't), but there is something about all these anti-AI thought experiments that has always confused me. Let's suppose I'm dead wrong and Chalmers really has found something new and strange and maybe even paradoxical about consciousness; what I want to know is why am I required to explain it if I want to continue to believe that intelligent computers would be conscious? Whatever argument Chalmers has could just as easily be turned against the idea that the intelligent behavior of other people indicates consciousness, and yet not one person on this list believes in solipsism, not even the most vocal AI critics. Why? Why is it that I must find the flaws in all these thought experiments but the anti-AI people feel no need to do so?

In the extraordinarily unlikely event that Chalmers has shown that consciousness is paradoxical (and it's probably just as childish as all the others) I would conclude that he just made an error someplace that nobody has found yet. When Zeno showed that motion was paradoxical nobody thought that motion did not exist but that Zeno just made a mistake, and he did, although the error wasn't found till the invention of the calculus thousands of years later.

  John K Clark          
 

meekerdb

unread,
Sep 22, 2012, 5:04:46 PM9/22/12
to everyth...@googlegroups.com
On 9/22/2012 10:53 AM, John Clark wrote:
On Thu, Sep 13, 2012 at 3:03 PM, Craig Weinberg <whats...@gmail.com> wrote:

> If anyone is not familiar with David Chalmers "Absent Qualia, Fading Qualia, Dancing Qualia" You should have a look at it first.

It's some reductio arguments in favor of functionalism (i.e. comp). I find these arguments convincing. So in building an intelligent robot it is almost certain that at a sufficiently high level of intelligence we will have created a conscious robot. But I don't think it follows that the robot's consciousness will be the same as ours - because it's not the same even between different human beings. In particular I refer to synesthesia and certain mathematical savants who seem to have a different consciousness than I do. So for me the interesting question is how to build a robot with a different consciousness in prespecified ways.

Brent


--
You received this message because you are subscribed to the Google Groups "Everything List" group.

Stathis Papaioannou

unread,
Sep 23, 2012, 9:02:12 AM9/23/12
to everyth...@googlegroups.com
The paper presents a very strong argument *in favour* of computers
having consciousness. I haven't seen anyone who understands it refute
it, or even try to refute it. It's worth reading at least part 3, as
it constitutes a proof of that which you suspected.


--
Stathis Papaioannou

Craig Weinberg

unread,
Sep 23, 2012, 9:13:15 AM9/23/12
to everyth...@googlegroups.com

No, you are being prejudiced against the wrong thought experiment. This is the paper where Chalmers argues that it wouldn't make sense for functionalism not to be true, because gradually replacing a person's mind with functional equivalents that didn't provide qualia would create absurdity (someone who acts like they can taste strawberries but has no idea what they are doing). It's the only instance I can think of where I actually disagree with Chalmers, although I understand why he (and everyone else) assumes that consciousness would have to work that way.

What I see that he has not considered is that consciousness is the function of uniqueness itself, and propagates through time as experience, not as a product of mechanism. Its qualities are accessed and focused through the accumulated history of experience, seen by us as matter and mechanism. Instead of fading or absent qualia, I see only coma and death: as the replacement neurons encroach on the brain stem, irreversible damage would occur just as it would with dementia or a malignant brain tumor. The only difference is that the tumor will be trying to run parts of the body and mind which are no longer relevant to what's left of the person.

Craig


 

Stathis Papaioannou

unread,
Sep 23, 2012, 10:27:06 AM9/23/12
to everyth...@googlegroups.com
On Sun, Sep 23, 2012 at 11:13 PM, Craig Weinberg <whats...@gmail.com> wrote:

> What I see that he has not considered is that consciousness is the
> function of uniqueness itself, and propagates through time as experience,
> not as a product of mechanism. Its qualities are accessed and focused
> through the accumulated history of experience, seen by us as matter and
> mechanism. Instead of fading or absent qualia, I see only coma and death:
> as the replacement neurons encroach on the brain stem, irreversible damage
> would occur just as it would with dementia or a malignant brain tumor. The
> only difference is that the tumor will be trying to run parts of the body
> and mind which are no longer relevant to what's left of the person.

But the "tumour" would result in no behavioural change (since it makes
all the other neurons fire normally) and no experiential change (since
the subject would report that everything was fine and why would he do
that if everything were not?).


--
Stathis Papaioannou

Bruno Marchal

unread,
Sep 23, 2012, 11:24:55 AM9/23/12
to everyth...@googlegroups.com
On 22 Sep 2012, at 23:04, meekerdb wrote:

On 9/22/2012 10:53 AM, John Clark wrote:
On Thu, Sep 13, 2012 at 3:03 PM, Craig Weinberg <whats...@gmail.com> wrote:

> If anyone is not familiar with David Chalmers "Absent Qualia, Fading Qualia, Dancing Qualia" You should have a look at it first.

It's some reductio arguments in favor of functionalism (i.e. comp). 

To save time in replies against comp, I define comp as the existence of a level where functionalism applies. Putnam's and others' functionalisms are usually ambiguous about the level, and usually take it as given by the neural net in the biological brain.
Comp is "there exists a level n such that functionalism is correct at level n", meaning that at such a level you can make the digital functional substitution.
This makes the comp I talk about much weaker logically than the one commonly described in the literature. That can be important, as we cannot know our substitution level.
It blocks the environment argument, or the quantum-machine argument, against "classical" comp.

Bruno





John Clark

unread,
Sep 23, 2012, 11:52:34 AM9/23/12
to everyth...@googlegroups.com
On Sun, Sep 23, 2012 at 9:13 AM, Craig Weinberg <whats...@gmail.com> wrote:

> What I see that he has not considered is that consciousness is a the function of uniqueness itself

For me to understand what you mean by this you need to answer one question, was the Email message that you sent to the Everything list on Sunday Sep 23, 2012 at 9:13 AM on the east coast of the USA with the title "Re:Zombieopolis Thought Experiment" unique?

> the accumulated history of experience, seen to us as matter

Without information to organize it, matter doesn't seem like much of anything; it's just a chaotic amorphous lump of stuff containing nothing of interest.

> I see only coma and death as the replacement neurons encroach on the brain stem

Because you believe that the neurons are doing something magical, even though the scientific method cannot find one scrap of evidence that they are doing any such thing. No doubt you will say that science doesn't know everything and just hasn't found the answer, but the problem is that science hasn't even found evidence that there is a question that needs answering. Or, if you prefer to put it another way, science hasn't found any evidence that an intelligent conscious computer is more impossible than an intelligent conscious human. Unless you can show at a fundamental level that biology has something that electronics lacks, we must conclude that if computers can't be conscious then neither can humans.

> irreversible damage would occur just as it would with dementia or a malignant brain tumor.

I would say that would be more like a benign brain tumor, in fact given that it performs exactly like the original brain cells it would not be going too far to call it an Infinitely benign brain tumor.

  John K Clark



John Clark

unread,
Sep 23, 2012, 12:58:45 PM9/23/12
to everyth...@googlegroups.com
On Wed, Sep 19, 2012 at 9:58 AM, Roger Clough <rcl...@verizon.net> wrote:

> mind can also operate on brain (through the will or an intention). I have no idea at the present of what such a monadic structure might be like.

Will or Intention is a high-level description, as is pressure, but it's not the only valid description. It's true that pressure made the balloon expand, but it is also true that air molecules hitting the inside of the balloon made it expand, and molecules know nothing about pressure. It's true that I scratched my nose because I wanted to, but it's also true that it happened because an electrochemical signal was sent from my brain to the nerves in my hand.
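The two levels of description can even be checked numerically. In kinetic theory the "molecular" pressure N m <v^2> / (3V) equals the "macroscopic" ideal-gas pressure N k T / V. A short sketch, with illustrative values chosen for the example only:

```python
import math

# Macroscopic description: ideal gas law, P = N k T / V.
# Microscopic description: momentum transfer by molecules hitting the
# wall, P = N m <v^2> / (3 V), with <v^2> = 3 k T / m (equipartition).
k = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0           # temperature, K (illustrative)
N = 1.0e23          # number of molecules (illustrative)
V = 1.0e-3          # volume, m^3 (illustrative)
m = 4.65e-26        # approximate mass of an N2 molecule, kg

p_macro = N * k * T / V                 # "pressure expanded the balloon"
mean_v2 = 3.0 * k * T / m               # mean squared molecular speed
p_micro = N * m * mean_v2 / (3.0 * V)   # "molecules hitting the wall"
assert math.isclose(p_macro, p_micro)
```

Both formulas name the same fact at different levels, which is the point about will versus electrochemical signals.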

 John K Clark




Craig Weinberg

unread,
Sep 23, 2012, 12:58:55 PM9/23/12
to everyth...@googlegroups.com


On Sunday, September 23, 2012 11:52:40 AM UTC-4, John Clark wrote:
On Sun, Sep 23, 2012 at 9:13 AM, Craig Weinberg <whats...@gmail.com> wrote:

> What I see that he has not considered is that consciousness is a the function of uniqueness itself

For me to understand what you mean by this you need to answer one question, was the Email message that you sent to the Everything list on Sunday Sep 23, 2012 at 9:13 AM on the east coast of the USA with the title "Re:Zombieopolis Thought Experiment" unique?

My experience of sending it was unique. The experiences of people reading what I wrote were unique. The existence of an email message is only inferred through our experiences, but there is no email message outside of human interpretation.
 

> the accumulated history of experience, seen to us as matter

Without information to organize it, matter doesn't seem like much of anything; it's just a chaotic amorphous lump of stuff containing nothing of interest.

Without sense to be informed, organization is just a hypothetical morphology containing no possibilities of interest. With sense, you don't need information; you just need to be able to make sense of forms locally in some way.
 

> I see only coma and death as the replacement neurons encroach on the brain stem

Because you believe that the neurons are doing something magical,

I believe only that they facilitate our human experience. If you think that human experience is magical, then that is your projection, not mine.
 
even though the scientific method can not find one scrap of evidence that they are doing any such thing.

Yes, the scientific method can find no evidence of consciousness of any kind. If you think that means that consciousness has to be impossible, then again, that is your projection. I see clearly that this view is as obsolete and narrow as some kind of Inquisition-era church edict.
 
No doubt you will say that science doesn't know everything and just hasn't found the answer, but the problem is that science hasn't even found evidence that there is a question that needs answering, or if you prefer to put it another way, science hasn't found any evidence that an intelligent conscious computer is more impossible than an intelligent conscious human.

Because subjectivity is not an object, and you define science as the objective study of the behavior of objects, you cannot be surprised when science cannot locate what it is explicitly defined to disqualify. I don't understand how this isn't blindingly obvious, but I must accept that it is like gender orientation or political bias: not something that can be addressed by reason.
 
Unless you can show at a fundamental level that biology has something that electronics lacks, we must conclude that if computers can't be conscious then neither can humans.

If you try to live off of electronics then you will not survive. I have now shown that at a fundamental level biology, in the form of food, respiration, hydration, etc., has something that electronics lacks. When we have electronics that can be used as meal replacements, then I will consider the possibility that such an advancement in electronics might have additional capacities.


> irreversible damage would occur just as it would with dementia or a malignant brain tumor.

I would say that would be more like a benign brain tumor, in fact given that it performs exactly like the original brain cells it would not be going too far to call it an Infinitely benign brain tumor.

I'm saying that it cannot perform exactly like the original brain cells, though. It will never be possible for an inorganic system to perform exactly like a living cell, which is why you can't eat glass instead of food. It doesn't matter how great a computer you have in your brain, or how effectively it suppresses your experiences of hunger; your body will still starve if you don't consume actual food. There is no digital food.

Craig
 

  John K Clark



John Mikes

unread,
Sep 23, 2012, 4:30:42 PM9/23/12
to everyth...@googlegroups.com
Roger, no matter how hard I tried, here is my reply:
is your "material world" THE reality? I think it is a figment of our changing levels of a developing mentality. Do you really believe that all those additional items we learned over the past millennia are products of an "ideal"(?) world?
(Btw, I left the term 'ideal' because of its positively pointing connotations.) And "possible"? According to your physics, or your faith?
 
Bruno: Thanks for your excellent question:
"How could anyone love a God, or a Goddess, threatening you with eternal torture in case you don't love Him or Her?"
 
JohnM


On Wed, Sep 19, 2012 at 2:18 PM, Bruno Marchal <mar...@ulb.ac.be> wrote:

On 19 Sep 2012, at 15:53, Roger Clough wrote:

Hi John Mikes
 
Once you leave the material world for the ideal one,
all things -- or at least many things-- now become possible.

Yes. Since always.

But there are many paths, and we can get lost.

Platonia before and after Gödel or Church is not the same. The circle and the regular polyhedra keep their majestic importance, but now they have the company of the Mandelbrot set, and UDs. Shit happens, when seen from inside. With comp, heaven and hell are not mechanically separable; nothing is easy near the boundaries.

***

I think that your metaphysics and reading of Leibniz makes sense for me, and comp, but I have to say I don't follow your methodology or teaching method on the religious field, as it contains authoritative arguments. 

My feeling is that authoritative argument is the symptom of those who lack faith. 

That error is multiplied in the transfinite when an authoritative argument is attributed to God.

Can you answer the following question?

How could anyone love a God, or a Goddess, threatening you with eternal torture in case you don't love Him or Her?

That's bizarre.

How could even just an atom of sincerity reside in that love, with such an explicit horrible threat?

I hope you don't mind my frankness and the naïvety of my questioning.

Bruno



 
 
Roger Clough, rcl...@verizon.net
9/19/2012
"Forever is a long time, especially near the end." -Woody Allen
 
 
----- Receiving the following content -----
From: John Mikes
Receiver: everything-list
Time: 2012-09-18, 17:17:40
Subject: Re: IMHO consciousness is an activity not a thing

Ha ha: so consciousness is not the 'thing', but 'intelligence' is? Or is this one also a function (of the brain towards the self)? Who is the self? How does the brain DO something (as a homunculus?) on its own? Any suggestions?
John M

On Tue, Sep 18, 2012 at 6:07 AM, Roger Clough <rcl...@verizon.net> wrote:
Hi Craig Weinberg

IMHO consciousness is not really anything in itself;
it is what the brain makes of its contents that the self
perceives. The self is intelligence, which is
able to focus all pertinent brain activity to a unified point.

Roger Clough, rcl...@verizon.net
9/18/2012
"Forever is a long time, especially near the end."
Woody Allen

----- Receiving the following content -----
From: Craig Weinberg
Receiver: everything-list
Time: 2012-09-17, 23:43:08

Subject: Re: Zombieopolis Thought Experiment




On Monday, September 17, 2012 11:02:16 PM UTC-4, stathisp wrote:
On Tue, Sep 18, 2012 at 6:39 AM, Craig Weinberg wrote:
A partial zombie is just someone who has brain damage, and yes, if you tried to replace enough of a person's brain with non-biological material, you would get brain damage, dementia, coma, and death.

Craig




--
Stathis Papaioannou


Roger Clough

unread,
Sep 24, 2012, 7:40:33 AM9/24/12
to everything-list
Hi John Clark

I believe that the will in a monad is a desire to do something
which would show up as an appetite. The desired action is then seen
and effected by the supreme monad.

Roger Clough, rcl...@verizon.net
9/24/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: John Clark
Receiver: everything-list
Time: 2012-09-23, 12:58:45
Subject: Re: Re: Re: IMHO consciousness is an activity not a thing


On Wed, Sep 19, 2012 at 9:58 AM, Roger Clough wrote:


> mind can also operate on brain (through the will or an intention). I have no idea at the present of what such a monadic structure might be like.


Will or Intention is a high-level description, as is pressure, but it's not the only valid description. It's true that pressure made the balloon expand, but it is also true that air molecules hitting the inside of the balloon made it expand, and molecules know nothing about pressure. It's true that I scratched my nose because I wanted to, but it's also true that it happened because an electrochemical signal was sent from my brain to the nerves in my hand.

John K Clark

Roger Clough

unread,
Sep 24, 2012, 8:02:27 AM9/24/12
to everything-list
Hi Stathis Papaioannou
 
You need a self or observer to be conscious, and computers
have no self. So they can't be conscious. 
 
Consciousness =  a subject looking at, or aware of, an object.
 
Computers have no subject.
 
 
Roger Clough, rcl...@verizon.net
9/24/2012
"Forever is a long time, especially near the end." -Woody Allen
 
 
----- Receiving the following content -----
Receiver: everything-list
Time: 2012-09-23, 09:02:12
Subject: Re: Zombieopolis Thought Experiment


Stathis Papaioannou

unread,
Sep 24, 2012, 8:38:48 AM9/24/12
to everyth...@googlegroups.com
On Mon, Sep 24, 2012 at 10:02 PM, Roger Clough <rcl...@verizon.net> wrote:
> Hi Stathis Papaioannou
>
> You need a self or observer to be conscious, and computers
> have no self. So they can't be conscious.
>
> Consciousness = a subject looking at, or aware of, an object.
>
> Computers have no subject.

So where do you get the idea that computers have no self, no subject
and can't be observers or be conscious? You may as well claim that
women aren't conscious but just act as if they are conscious, like an
advanced computer pretending to have a self.


--
Stathis Papaioannou

Roger Clough

unread,
Sep 24, 2012, 8:51:32 AM9/24/12
to everything-list
Hi Stathis Papaioannou

Try to define consciousness. If you can't,
how do you know that a computer is conscious ?




Roger Clough, rcl...@verizon.net
9/24/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Stathis Papaioannou
Receiver: everything-list
Time: 2012-09-24, 08:38:48
Subject: Re: Re: Zombieopolis Thought Experiment


On Mon, Sep 24, 2012 at 10:02 PM, Roger Clough wrote:
> Hi Stathis Papaioannou
>
> You need a self or observer to be conscious, and computers
> have no self. So they can't be conscious.
>
> Consciousness = a subject looking at, or aware of, an object.
>
> Computers have no subject.

So where do you get the idea that computers have no self, no subject
and can't be observers or be conscious? You may as well claim that
women aren't conscious but just act as if they are conscious, like an
advanced computer pretending to have a self.


--
Stathis Papaioannou


Bruno Marchal

unread,
Sep 24, 2012, 9:29:50 AM9/24/12
to everyth...@googlegroups.com
On 24 Sep 2012, at 14:02, Roger Clough wrote:

Hi Stathis Papaioannou
 
You need a self or observer to be conscious, and computers
have no self. So they can't be conscious. 

A few lines of instructions give a computer a self. I told you that "self" is what computer science explains best.



 
Consciousness =  a subject looking at, or aware of, an object.
 
Computers have no subject.

That is quite a strong statement, akin to racism.

And it is false once you define the subject as the one who knows, since incompleteness can be used to justify a notion of (private, incommunicable) knowledge for computers.
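The "few lines of instructions" claim can be made concrete with a quine-style construction: a program that holds and reproduces its own complete description, the kind of self-reference Kleene's second recursion theorem guarantees in any sufficiently rich language. A minimal Python sketch (the variable names are arbitrary illustrations):

```python
# A two-line program whose value 'source' is its own full source text.
# Running that text reproduces the same text: a fixed point of execution,
# which is the formal core of a machine referring to itself.
s = 's = %r\nsource = s %% s'
source = s % s   # 'source' now contains exactly these two lines
print(source)
```

Feeding `source` back to the interpreter yields an identical `source`, so the program has, in a precise sense, access to its own description.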

Bruno



To unsubscribe from this group, send email to everything-li...@googlegroups.com.

For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.

Bruno Marchal

unread,
Sep 24, 2012, 9:52:34 AM9/24/12
to everyth...@googlegroups.com

On 24 Sep 2012, at 14:51, Roger Clough wrote:

> Hi Stathis Papaioannou
>
> Try to define consciousness. If you can't,
> how do you know that a computer is conscious ?

Try to define consciousness. If you can't
how do you know that a computer is not conscious?

Bruno
http://iridia.ulb.ac.be/~marchal/



Roger Clough

unread,
Sep 24, 2012, 9:59:40 AM9/24/12
to everything-list
Hi Bruno Marchal

By self I mean conscious self. Computers
are not conscious because codes can describe,
but they can't perceive. Perception requires a
live viewer or self.

I had no racial intentions in mind when I spoke
of not having a subject, and I find it difficult to
see how you could imagine that. And not having
a subject would mean you are dead.
 



Roger Clough, rcl...@verizon.net
9/24/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Bruno Marchal
Receiver: everything-list
Time: 2012-09-24, 09:29:50
Subject: Re: Zombieopolis Thought Experiment




On 24 Sep 2012, at 14:02, Roger Clough wrote:


Hi Stathis Papaioannou

You need a self or observer to be conscious, and computers
have no self. So they can't be conscious.


A few lines of instructions give a computer a self. I told you that "self" is what computer science explains best.







Consciousness = a subject looking at, or aware of, an object.

Computers have no subject.


That is a quite strong statement akin to racism.


And it is false once you define the subject by the one who knows, as incompleteness can be used to justify a notion of (private, incommunicable) knowledge for computers.


Bruno









Roger Clough

unread,
Sep 24, 2012, 10:13:30 AM9/24/12
to everything-list
Hi Bruno Marchal

A computer being not conscious? All computer operations
(to my mind, probably not yours) are actual (in spacetime).
But consciousness is an inherent (mental, not in spacetime)
activity.

Cs = subject + object

A computer has no inherent realms, no conscious self or observer.

Instead, a computer is all object (completely in the objective realm),
no subject.

Roger Clough, rcl...@verizon.net
9/24/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Bruno Marchal
Receiver: everything-list
Time: 2012-09-24, 09:52:34
Subject: Re: Zombieopolis Thought Experiment

Stephen P. King

unread,
Sep 24, 2012, 10:50:33 AM9/24/12
to everyth...@googlegroups.com
On 9/24/2012 9:59 AM, Roger Clough wrote:
> By self I mean conscious self. Computers
> are not conscious because codes can describe,
> but they can't perceive. Perception requires a
> live viewer or self.
>
> I had no racial intentions in mind when I spoke
> of not having a subject, and I find it difficult to
> see how you could imagine that. And not having
> a subject would mean you are dead.
HI Roger,


We can faithfully capture the idea of perception by considering a
process of actively generating and updating an internal model of the
entity and its interactions with its environment. The "subject" or "self"
is, in this reasoning, identified with the internal model. We can limit
the infinite regress problem
(http://en.wikipedia.org/wiki/Homunculus_argument) that might be
considered an argument against this idea by the following means:
1) Each model and any sub-model are (up to some limit) isomorphic
(see Kleene's theorems), so one only needs resources to code the initial
model and any bits that represent the differences between it and its
sub-models. The "self" is the model plus the updating mechanism.
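A minimal sketch of the model-plus-updater idea; the class, names, and numbers below are purely illustrative assumptions, not a specification from the post:

```python
# A toy self-modeling system: it has an actual state, an internal model
# of that state, and an update step that corrects the model using
# feedback from self-observation.
class SelfModelingAgent:
    def __init__(self):
        self.position = 0.0               # actual state "in the world"
        self.model = {"position": 0.0}    # internal self-model

    def act(self, step):
        self.position += step             # change the actual state
        self.model["position"] += step    # predict the change internally

    def perturb(self, amount):
        self.position += amount           # outside influence the model misses

    def update(self, gain=0.5):
        # Feedback: move the self-model toward the observed actual state.
        error = self.position - self.model["position"]
        self.model["position"] += gain * error
        return error
```

Repeated calls to update() shrink the gap between model and actual state; that convergent feedback loop is the minimal sense in which the system "perceives" itself, with the "self" being the model plus the updating mechanism.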

Bruno Marchal

unread,
Sep 24, 2012, 10:52:42 AM9/24/12
to everyth...@googlegroups.com

On 24 Sep 2012, at 16:13, Roger Clough wrote:

> Hi Bruno Marchal
>
> A computer being not conscious ? All computer operations
> (to my mind,probably not yours) are actual (in spacetime).
> But consciousness is an inherent (mental, not in spacetime)
> activity.

All right, in that sense a computer cannot think. I agree, but a brain
cannot think either, nor can any body. They can only manifest consciousness,
which, we agree on this, is in Platonia.

Computers can support a knowing self, like a brain, unless you decide
not, but then it looks like arbitrary racism. You just decide that
some entities cannot think, because *you* fail to recognize yourself
in them.

You could at least say that you don't know, or give an argument, but you
just repeat that a brain can support consciousness and that silicon
cannot, without giving an atom of justification. This can't be serious.


>
> Cs = subject + object
>
> A computer has no inherent realms, no conscious self or observer.
>
> Instead, a computer is all object (completely in the objective realm),
> no subject.

You can implement a self-transformative software on computers.

You should be more careful and study a bit of computer science before
judging computers, especially if you assert strong negative statements
about them.

Stephen P. King

unread,
Sep 24, 2012, 10:56:41 AM9/24/12
to everyth...@googlegroups.com
On 9/24/2012 10:13 AM, Roger Clough wrote:
> A computer being not conscious ? All computer operations
> (to my mind,probably not yours) are actual (in spacetime).
> But consciousness is an inherent (mental, not in spacetime)
> activity.
>
> Cs = subject + object
>
> A computer has no inherent realms, no conscious self or observer.
>
> Instead, a computer is all object (completely in the objective realm),
> no subject.
Hi Roger,

I disagree. You are merely stipulating that there is no "self"
possible and thus conclude the obvious implication. If we permit
consideration of an internal modeling system, then the possibility of a
conscious self becomes just a matter of discovering whether or not the
technical means of implementing an internal modeling and updating
process are actual. There are strong reasons to consider that a physical
object and its evolution in time are, effectively, the best possible
simulation of that physical system; thus a physical system is, FAPP, its
own best possible model. If there is a feedback between the physical
states of the system and its simulation that has some causal efficacy,
then I would propose that we must consider that physical system to be,
in fact, conscious.

Stephen P. King

unread,
Sep 24, 2012, 12:01:47 PM9/24/12
to everyth...@googlegroups.com
Hi Roger,

I agree with Bruno's remarks.

John Clark

unread,
Sep 24, 2012, 12:02:08 PM9/24/12
to everyth...@googlegroups.com
On Sun, Sep 23, 2012 Craig Weinberg <whats...@gmail.com> wrote:

>>  was the Email message that you sent to the Everything list on Sunday Sep 23, 2012 at 9:13 AM on the east coast of the USA with the title "Re:Zombieopolis Thought Experiment" unique?

> My experience of sending it was unique. The experiences of people reading what I wrote were unique.

That's all very nice but it doesn't answer my question, was the Email message that you sent to the Everything list on Sunday September 23, 2012 at 9:13 AM on the east coast of the USA with the title "Re:Zombieopolis Thought Experiment" unique?


> The existence of an email message is only inferred through our experiences

Obviously.


> there is no email message outside of human interpretation.

Thus the moon does not exist when you are not looking at it.


> Without sense to be informed, organization is just a hypothetical morphology containing no possibilities of interest.

Translation from the original bafflegab: without information information would contain nothing informative. I could not agree more.


> With sense, you don't need information, you just need to be able to make sense of forms locally in some way.

You made enough sense out of my message to respond to it and you only received that sense impression because it was sent over a wire, and if it can be sent over a wire then it's information.


> Yes, scientific method can find no evidence of consciousness of any kind.

The thing I don't understand is why this is supposed to be a problem only for those who think an intelligent computer is conscious and is supposed to be no problem for those who think that other intelligent humans are conscious.


> If you think that means that consciousness has to be impossible, then again, that is your projection.

You and I have both believed that consciousness exists since we were both infants and we both have been implicitly using the exact same theory to determine when something is conscious and when something is not, and that is that intelligent behavior indicates consciousness. In fact you don't even believe that you yourself are conscious when you don't behave in a complex intelligent manner, such as when you are in a dreamless sleep or under anesthesia, and that's why you and I fear death: when we eventually get into that state we won't be acting any smarter than a rock, and as a result we fear that we will be no more conscious than a rock. What I object to is that when we run across an intelligent computer the rules of the game are supposed to suddenly change, and that just doesn't seem very smart.


> you define science as the objective study of the behavior of objects,

No, I define science as the use of the scientific method, and that means looking at the evidence and developing a theory to explain it, NOT finding a theory that makes you feel good and then looking for evidence that supports it and ignoring evidence that refutes it. As illustrated in our debate on the free will noise you were even willing to embrace flat out logical contradictions if that's what it took for you to continue to believe what you found pleasant to believe, like X is not Y and X is also not not Y.  Using such procedures may be successful in inducing a pleasing stupor but you'll have to abandon any hope of finding things that are true.

> then you cannot be surprised when science cannot locate what it is explicitly defined to disqualify.

I'm not surprised and all I ask is that whatever method you use for determining the existence of consciousness, scientific or otherwise, you don't suddenly change the rules in the middle of the race just because you saw an intelligent computer. Use whatever test you want to infer consciousness, all I'm asking for is consistency.

> I don't understand how this isn't blindingly obvious, but I must accept that it is like gender orientation or political bias - not something that can be addressed by reason.

At one time it was blindingly obvious that human beings with a black skin didn't have the same sort of feelings as people with white skin do, even though they acted as if they did, that's how they convinced themselves that there was nothing wrong with slavery.


> If you try to live off of electronics then you will not survive. I have now shown that at a fundamental level, biology, in the form of food, respiration, hydration, etc, has something that electronics lack.

So the key to consciousness is that humans eat, breathe, drink, and shit but computers don't. Hmm, I don't quite see the connection; however, I do know that both biology and electronics are involved with quantum tunneling, the Schrödinger equation, and the Pauli exclusion principle, but electronics also has things that biology lacks, things like Bloch lattice functions, semiconductor valence bands, and the Hall effect; I don't understand why those functions have nothing to do with consciousness but defecation is intimately related with consciousness.

I also don't understand why the computer counterpart of Craig Weinberg couldn't make the argument that Human beings can behave intelligently but they can never be conscious because they don't have p-n silicon junctions, after all the link between p-n silicon junctions and consciousness is every bit as strong as the link between digestion and consciousness. For that matter I don't understand why the biological Craig Weinberg doesn't make the argument that biological women can't be conscious because they don't have testicles.

> When we have electronics that can be used as meal replacements, then I will consider the possibility that such an advancement in electronics might have additional capacities.

So you're only conscious when you eat.

John K Clark

 
 

Stephen P. King

unread,
Sep 24, 2012, 12:28:15 PM9/24/12
to everyth...@googlegroups.com
On 9/24/2012 12:02 PM, John Clark wrote:
> Thus the moon does not exist when you are not looking at it.
Hi John,

I expected better from you! This quip is based on the premise that
"you" are the only observer involved. Such nonsense! Considering that
there are a HUGE number of observers of the moon, the effects of the
observations of any one are negligible. If none of them measure the
presence of the moon or its effects, then the existence of the moon
becomes purely an object of speculation. Note that being affected by the
moon in terms of tidal effects is a measurement!

John Clark

unread,
Sep 24, 2012, 12:59:07 PM9/24/12
to everyth...@googlegroups.com

On Mon, Sep 24, 2012 at 12:28 PM, Stephen P. King <step...@charter.net> wrote:
 
>> Thus the moon does not exist when you are not looking at it.
 
  > I expected better from you! This quip is based on the premise that "you" are the only observer involved. Such nonsense! Considering that there are a HUGE number of observers of the moon

What about the crater Daedalus on the far side of the moon, nobody is observing it at this moment so does it exist right now? At any rate my point was that if it's true that "there is no email message outside of human interpretation" as Craig Weinberg asserted then it must also be true that the moon does not exist when you are not looking at it.

  John K Clark



Stephen P. King

unread,
Sep 24, 2012, 1:37:04 PM9/24/12
to everyth...@googlegroups.com
Hi John,

    Does the presence of the crater make a difference that makes a difference, or equivalently, have a causal effect on other entities in its environment? If yes then yes, it is "being observed". But its "existence", qua necessary possibility, is strictly a priori. Why do you insist on conflating the possibility of a measurement outcome with the measurement outcome? I think that Craig is discussing ideas that are flying right over your head.

Alberto G. Corona

unread,
Sep 24, 2012, 1:37:46 PM9/24/12
to everyth...@googlegroups.com
Hi John

This crater has been observed, so there is a currently observed phenomenon concerning it: our memory of it.

I observe that others have observed it, and I trust these people. This indirect account is also an "observation". I believe because I trust these people and trust science. But the original observers also believed in something: that their observations and their instruments gave an adequate image of "reality" (that is, those things that others also may perceive).

By contrast, this crater did not exist in the nineteenth century, any more than the crater R2D2, which will be discovered on a planet near Alpha Centauri in the year 2050, exists for us.

Percival Lowell convinced the world of the existence of Martian canals. At that time, those canals had a status of existence. But they do not exist today. Up to this point, existence is a matter of belief: trust in ourselves, trust in others, and trust in a set of principles. It is what Voegelin called "shared consciousness".


2012/9/24 John Clark <johnk...@gmail.com>

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.



--
Alberto.

John Clark

unread,
Sep 24, 2012, 1:44:13 PM9/24/12
to everyth...@googlegroups.com

On Mon, Sep 24, 2012 at 1:37 PM, Stephen P. King <step...@charter.net> wrote:

> Does the presence of the crater make a difference that makes a difference, or equivalently, have a causal effect on other entities in its environment? If yes then yes, it is "being observed".

Then Craig was wrong when he said  "there is no email message outside of human interpretation".

> I think that Craig is discussing ideas that are flying right over your head.

Or dropping under the radar.

  John K Clark  

Craig Weinberg

unread,
Sep 24, 2012, 1:45:10 PM9/24/12
to everyth...@googlegroups.com


On Monday, September 24, 2012 12:02:16 PM UTC-4, John Clark wrote:
On Sun, Sep 23, 2012 Craig Weinberg <whats...@gmail.com> wrote:

>>  was the Email message that you sent to the Everything list on Sunday Sep 23, 2012 at 9:13 AM on the east coast of the USA with the title "Re:Zombieopolis Thought Experiment" unique?

> My experience of sending it was unique. The experiences of people reading what I wrote were unique.

That's all very nice but it doesn't answer my question, was the Email message that you sent to the Everything list on Sunday September 23, 2012 at 9:13 AM on the east coast of the USA with the title "Re:Zombieopolis Thought Experiment" unique?

There was no email message from the perspective of 'objective reality' that you assume exists independently of all experience. I had an experience of sending a message, you and others have an experience of receiving a message, computers have an experience of voltage changes, and that's it. All of those experiences were unique. We are now having unique experiences of talking about it.

> The existence of an email message is only inferred through our experiences

Obviously.

> there is no email message outside of human interpretation.

Thus the moon does not exist when you are not looking at it.

I agree with Stephen's comment. The moon is a lot of experiences to a lot of things. Hypnotize someone and you can get them to think that an onion is an apple. That doesn't mean that every cell in your entire body now believes that it is metabolizing an apple.


> Without sense to be informed, organization is just a hypothetical morphology containing no possibilities of interest.

Translation from the original bafflegab: without information information would contain nothing informative. I could not agree more.

No. You are conflating sense with information. They aren't the same. These letters do not speak English. Books do not read the stories that they tell. It's hard for me to see what is so mystifying about this...but then again, it's hard for me to imagine what people see in knitting too.
 

> With sense, you don't need information, you just need to be able to make sense of forms locally in some way.

You made enough sense out of my message to respond to it and you only received that sense impression because it was sent over a wire, and if it can be sent over a wire then its information.

There is no information literally in the wire. The wire is a chain of molecular forms which change their relation to each other when stimulated properly at one end. It's like cracking a whip. I can wiggle a string on one end and the string wiggles on the other end because the medium has physical properties which propagate stimulation in that particular way. If the string was made of something that glows when you shake it, then you would see different patterns depending on the curves of the shapes in the string, etc.

There is no information there unless this formation is 'in'-terpreted in such a way as to 'in'-form something. Without a computer to translate the wiggling molecules in the wire to pixels of wiggling LCD molecules and a person to translate the wiggles of their brain and retina into an email, there isn't any email there. In fact, there is information there, but only because the molecules that are acting like strings and wires and brains feel informed on their own layer of perception and participation.

There is no human layer of information that is 'in' the wire. There is no independently persisting 'information' at all. It's all nested experiences happening at different quantitative rates and qualitative depths. Experiences are not made of information, information is made of experiences.


> Yes, scientific method can find no evidence of consciousness of any kind.

The thing I don't understand is why this is supposed to be a problem only for those who think a intelligent computer is conscious and is supposed to be no problem for those who think that other intelligent humans are conscious.

Because we have no reason to think that other people are fundamentally different from ourselves, and we have no reason to suspect that the behavior of machines indicates any capacity to feel anything.
 

> If you think that means that consciousness has to be impossible, then again, that is your projection.

You and I have both believed that consciousness exists since we were both infants and we both have been implicitly using the exact same theory to determine when something is conscious and when something is not, and that is that intelligent behavior indicates consciousness.

In reality, the fact of consciousness comes long before anything like 'belief' can be generated. Infants don't believe they are conscious, they have to already be conscious to believe anything. Intelligent behavior is not the indicator of consciousness. It's almost the opposite indicator. Consciousness is indicated by responding to feeling with feeling. Intelligence arises from feeling distancing itself from feeling.
 
In fact you don't even believe that you yourself are conscious when you don't behave in a complex intelligent manner, such as when you are in a dreamless sleep or under anesthesia, and that's why you and I fear death: when we eventually get into that state we won't be acting any smarter than a rock, and as a result we fear that we will be no more conscious than a rock. What I object to is that when we run across an intelligent computer the rules of the game are supposed to suddenly change, and that just doesn't seem very smart.

Computers aren't playing the same game as living organisms, even when we program them to pretend as such. They don't fear death or disconnection. They fear nothing at all and they desire nothing at all, which is why they can never experience being a living being. Life is fear and desire.
 

> you define science as the objective study of the behavior of objects,

No, I define science as the use of the scientific method,

Circular, irrelevant.
 
and that means looking at the evidence and developing a theory to explain it,

Experience is evidence, and all evidence is filtered by the expectations of experience.
 
NOT finding a theory that makes you feel good and then looking for evidence that supports it and ignoring evidence that refutes it.

The scientific method is never followed in linear order. Hypothesis, gathering information, experimentation are driven by the scientist themselves, not your toy models of arbitrary regimentation. Hypothesis is *exactly* the process of finding a theory that makes you feel like you have explained something and then testing it through experimentation, gathering more information, refining your hypothesis, etc. It's a cyclic process, not a linear recipe.
 
As illustrated in our debate on the free will noise you were even willing to embrace flat out logical contradictions if that's what it took for you to continue to believe what you found pleasant to believe, like X is not Y and X is also not not Y.  Using such procedures may be successful in inducing a pleasing stupor but you'll have to abandon any hope of finding things that are true.

Free will is pre-logical. I don't have to talk my arm into moving, I just move it. You identify completely with only your intellect so it is inconceivable for you to entertain that logic itself has limits and that those limits supervene on the integrity of consciousness...but they do. You are the one who is sentimentally clinging to the pleasing simplicity of "X" and "Y" rather than confront the ambiguity of the ground of being. You are the intellectual coward, the puritan, the inquisitor. I am the curious scientist, questioning dreamer, provocative madman. Your mirror needs cleaning.


> then you cannot be surprised when science cannot locate what it is explicitly defined to disqualify.

I'm not surprised and all I ask is that whatever method you use for determining the existence of consciousness, scientific or otherwise, you don't suddenly change the rules in the middle of the race just because you saw an intelligent computer. Use whatever test you want to infer consciousness, all I'm asking for is consistency.

To quote Emerson, "A foolish consistency is the hobgoblin of little minds".

I'm not being inconsistent though, you just don't understand that qualitative inertial frames do not commute quantitatively. In reality, Pinocchio doesn't turn into a real boy. He's just a puppet made of wood, even if you program a machine to operate that puppet flawlessly. There is no inconsistency in my position, it's just that the reality is subtle and confusing if you are accustomed to thinking of the universe as a place full of things rather than an experience with many qualities.


> I don't understand how this isn't blindingly obvious, but I must accept that it is like gender orientation or political bias - not something that can be addressed by reason.

At one time it was blindingly obvious that human beings with a black skin didn't have the same sort of feelings as people with white skin do, even though they acted as if they did, that's how they convinced themselves that there was nothing wrong with slavery.

Exactly my point. Humans are terribly biased by default in how they see others. It goes both ways. We see cartoon characters and stuffed animals as having personalities. If a computer was born out of an egg and healed when it got sick and ate other living things, then I wouldn't think twice about thinking it's conscious...but they aren't born that way and they don't do those things. They are dumb as rocks. They will execute their own destruction happily if you only instruct them carefully on how to do it. They will keep trying to compute Pi to the last digit forever if they can.


> If you try to live off of electronics then you will not survive. I have now shown that at a fundamental level, biology, in the form of food, respiration, hydration, etc, has something that electronics lack.

So the key to consciousness is that humans eat, breathe, drink, and shit but computers don't.

Not the key, but a clue. Symptomatic themes that should give us pause when diagnosing a sniffling computer as having the ebola of consciousness.
 
Hmm, I don't quite see the connection,

I know. That's the problem.
 
however I do know that both biology and electronics are involved with quantum tunneling, the Schrodinger Equation, and the Pauli Exclusion Principle but electronics also has things that biology lacks, things like Bloch lattice functions, semiconductor valence bands, and the Hall effect; I don't understand why those functions have nothing to do with consciousness but defecation is intimately related with consciousness.

QM has to do with consciousness too, just not human quality consciousness. Atoms have universal quality consciousness. Lowest common sense.
 

I also don't understand why the computer counterpart of Craig Weinberg couldn't make the argument that Human beings can behave intelligently but they can never be conscious because they don't have p-n silicon junctions, after all the link between p-n silicon junctions and consciousness is every bit as strong as the link between digestion and consciousness. For that matter I don't understand why the biological Craig Weinberg doesn't make the argument that biological women can't be conscious because they don't have testicles.

The consciousness of human beings is not in question. The consciousness of assembled equipment is.
 

> When we have electronics that can be used as meal replacements, then I will consider the possibility that such an advancement in electronics might have additional capacities.

So you're only conscious when you eat.

That has nothing to do with it. You asked for a way that what we are is different fundamentally from what computers are. Food is a good place to start. It's supposed to allow you to doubt your hasty generalizations and just-so stories long enough to become curious about the whole truth rather than your astonishingly limited designs on it.

Craig


John K Clark

 
 

meekerdb

unread,
Sep 24, 2012, 1:50:09 PM9/24/12
to everyth...@googlegroups.com
On 9/24/2012 9:28 AM, Stephen P. King wrote:
On 9/24/2012 12:02 PM, John Clark wrote:
Thus the moon does not exist when you are not looking at it.
Hi John,

    I expected better from you! This quip is based on the premise that "you" are the only observer involved. Such nonsense! Considering that there are a HUGE number of observers of the moon, the effects of the observations of any one are negligible. If none of them measure the presence of the moon or its effects, then the existence of the moon becomes purely an object of speculation. Note that being affected by the moon in terms of tidal effects is a measurement!


So who or what counts as an observer?  Young's double-slit experiments on fullerenes seem to indicate that a few IR photons or gas molecules qualify.

http://arxiv.org/pdf/0903.1614v1.pdf

Brent
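The which-path point Brent raises can be made concrete with a toy calculation. In a minimal sketch (the model and all parameter values below are illustrative assumptions, not figures from the linked paper), the environment is left in state |E_L> or |E_R> depending on the slit taken, and the interference cross term in the screen intensity gets multiplied by the overlap gamma = <E_L|E_R>. When gamma = 1 nothing recorded the path and fringes are full; when gamma = 0 even a single scattered photon fully records it and the fringes vanish.

```python
import numpy as np

# Toy double-slit model with partial which-path information.
# The environment record scales the fringe term by
# gamma = <E_L|E_R>: 1 = no record, 0 = perfect record.
# slit_sep, wavelength, and screen_dist are illustrative values.

def intensity(x, slit_sep=1.0, wavelength=0.1, screen_dist=10.0, gamma=1.0):
    """Relative screen intensity I(x) = 1 + gamma * cos(phase)."""
    phase = 2 * np.pi * slit_sep * x / (wavelength * screen_dist)
    return 1.0 + gamma * np.cos(phase)

x = np.linspace(-1.0, 1.0, 2001)
for gamma in (1.0, 0.5, 0.0):
    I = intensity(x, gamma=gamma)
    visibility = (I.max() - I.min()) / (I.max() + I.min())
    print(f"gamma = {gamma:.1f}  ->  fringe visibility = {visibility:.2f}")
```

In this toy model the fringe visibility simply equals |gamma|, which is the standard statement that fringe contrast measures how poorly the environment distinguishes the two paths.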

smi...@zonnet.nl

unread,
Sep 24, 2012, 11:02:54 PM9/24/12
to everyth...@googlegroups.com
Citeren meekerdb <meek...@verizon.net>:
If I don't observe it, then it doesn't matter who/what else observes
something, the rest of the universe is still a superposition. It
doesn't matter whether or not an interference pattern can be detected.

Saibal

smi...@zonnet.nl

unread,
Sep 24, 2012, 11:04:51 PM9/24/12
to everyth...@googlegroups.com
Citeren "Stephen P. King" <step...@charter.net>:
Thing is, the Moon doesn't exist, even if you do look at it.

Saibal

meekerdb

unread,
Sep 24, 2012, 11:17:01 PM9/24/12
to everyth...@googlegroups.com
?? It could matter. Suppose I bet you $100 that there's no interference pattern when the
buckyballs are hot? Then it would matter. But apparently it wouldn't matter whether
anyone observed the IR photons or not.

Brent

Stephen P. King

unread,
Sep 24, 2012, 11:51:06 PM9/24/12
to everyth...@googlegroups.com
Dear Saibal,

If you are operating under the stipulation that each observer is
uniquely isolated from all others, then I agree with you. But I hope you
understand the long-range implications of this, as it opens wide the need
for an explanation for the appearance of interactions/mutual
consistencies between observers.

Stephen P. King

unread,
Sep 24, 2012, 11:53:59 PM9/24/12
to everyth...@googlegroups.com
Hi Saibal,

I would have to disagree with you only because I wish to be
consistent with my definition of existence. The moon, as everything
else, is merely phenomenal appearance, but as an a priori necessary
possibility to even be an illusion, it must exist.

meekerdb

unread,
Sep 24, 2012, 11:55:34 PM9/24/12
to everyth...@googlegroups.com
On 9/24/2012 8:51 PM, Stephen P. King wrote:
>>
>> If I don't observe it, then it doesn't matter who/what else observes something, the
>> rest of the universe is still a superposition. It doesn't matter whether or not an
>> interference pattern can be detected.
>>
>> Saibal
>>
> Dear Saibal,
>
> If you are operating under the stipulation that each observer is uniquely isolated
> from all others, then I agree with you.

Or it's Chris Fuchs's instrumental Bayesianism, which regards QM as just a way of
representing one's knowledge of systems.

Brent

Stephen P. King

unread,
Sep 24, 2012, 11:57:09 PM9/24/12
to everyth...@googlegroups.com
Hi Brent,

If we are consistent with the rules of QM, the mere possibility of
detection of position basis information is sufficient to prevent the
interference pattern. Thus my prediction is that the temperature of the
buckyballs is irrelevant for the two slit experiment, so long as a
position basis measurement is not possible. Very hard to do...