--
Stathis Papaioannou
----- Receiving the following content -----
From: Stephen P. King
Receiver: everything-list
Time: 2012-09-15, 13:04:41
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-list+unsub...@googlegroups.com.
On Sat, Sep 15, 2012 at 2:55 AM, Craig Weinberg <whats...@gmail.com> wrote:
> What you think third party observable behavior means is the set of all
> properties which are externally discoverable. I am saying that is a
> projection of naive realism, and that in reality, there is no such set, and
> that in fact the process of discovery of any properties supervenes on the
> properties of all participants and the methods of their interaction.
Of course there is a set of all properties that are externally
discoverable, even if you think this set is very small!
Moreover, this
set has subsets, and we can limit our discussion to these subsets. For
example, if we are interested only in mass, we can simulate a human
perfectly using the right number of rocks. Even someone who believes
in an immortal soul would agree with this.
> My point of using cats in this thought experiment is to specifically point
> out our naivete in assuming that instruments which extend our perception in
> only the most deterministic and easy to control ways are sufficient to
> define a 'third person'. If we look at the brain with a microscope, we see
> those parts of the brain that microscopes can see. If we look at New York
> with a swarm of cats, then we see the parts of New York that cats can see.
Yes, but there are properties of the brain that may not be relevant to
behaviour. Which properties are in fact important is determined by
experiment. For example, we may replace the myelin sheath with a
synthetic material that has similar electrical properties and then
test an isolated nerve to see if action potentials propagate in the
same way. If they do, then the next step is to incorporate the nerve
in a network and see if the pattern of firing in the network looks
normal. The step after that is to replace the myelin in the brain of a
rat to see if the animal's behaviour changes. The modified rats are
compared to unmodified rats by a blinded researcher to see if he can
tell the difference. If no-one can consistently tell the difference
then it is announced that the synthetic myelin appears to be a
functionally identical substitute for natural myelin.
As is the nature
of science, another team of researchers may then find some deficit in
the behaviour of the modified rats under conditions the first team did
not examine. Scientists then make modifications to the formula of the
synthetic myelin and do the experiments again.
> This is the point of the thought experiment. The limitations of all forms of
> measurement and perception preclude all possibility of there ever being a
> such thing as an exhaustively complete set of third person behaviors of any
> system.
>
> What is it that you don't think I understand?
What you don't understand is that an exhaustively complete set of
behaviours is not required.
I don't access an exhaustively complete
set of behaviours to determine if my friends are the same people from
day to day, and in fact they are *not* the same systems from day to
day, as they change both physically and psychologically. I have in
mind a rather vague set of behavioural limits and if the
people who I think are my friends deviate significantly from these
limits I will start to worry.
--
Stathis Papaioannou
Hi Stephen P. King
Now I see your problem with Chalmers. It seems to be too sweeping a remark, but Leibniz would agree, because God, who is the supreme monad, causes all to happen. Mind is the ruling power. As I say below, "If there's no God, we'd have to invent him so that everything could function."
Roger Clough, rcl...@verizon.net
9/16/2012
Leibniz would say, "If there's no God, we'd have to invent him so that everything could function."
On Saturday, September 15, 2012 6:21:14 AM UTC-4, stathisp wrote:
On Sat, Sep 15, 2012 at 2:55 AM, Craig Weinberg <whats...@gmail.com> wrote:
> What you think third party observable behavior means is the set of all
> properties which are externally discoverable. I am saying that is a
> projection of naive realism, and that in reality, there is no such set, and
> that in fact the process of discovery of any properties supervenes on the
> properties of all participants and the methods of their interaction.
Of course there is a set of all properties that are externally
discoverable, even if you think this set is very small!
No, there isn't. That is what I am telling you. Nothing exists outside of experience, which is creating new properties all of the time. There is no set at all. There is no such thing as a generic externality...each exterior is only a reflection of the interior of the system which discovers the interior of other systems as exteriors.
Moreover, this
set has subsets, and we can limit our discussion to these subsets. For
example, if we are interested only in mass, we can simulate a human
perfectly using the right number of rocks. Even someone who believes
in an immortal soul would agree with this.
No, I don't agree with it at all. You are eating the menu. A quantity of mass doesn't simulate anything except in your mind. Mass is a normative abstraction which we apply in comparing physical bodies with each other. To reduce a human being to a physical body is not a simulation; it is only weighing a bag of organic molecules.
> My point of using cats in this thought experiment is to specifically point
> out our naivete in assuming that instruments which extend our perception in
> only the most deterministic and easy to control ways are sufficient to
> define a 'third person'. If we look at the brain with a microscope, we see
> those parts of the brain that microscopes can see. If we look at New York
> with a swarm of cats, then we see the parts of New York that cats can see.
Yes, but there are properties of the brain that may not be relevant to
behaviour. Which properties are in fact important is determined by
experiment. For example, we may replace the myelin sheath with a
synthetic material that has similar electrical properties and then
test an isolated nerve to see if action potentials propagate in the
same way. If they do, then the next step is to incorporate the nerve
in a network and see if the pattern of firing in the network looks
normal. The step after that is to replace the myelin in the brain of a
rat to see if the animal's behaviour changes. The modified rats are
compared to unmodified rats by a blinded researcher to see if he can
tell the difference. If no-one can consistently tell the difference
then it is announced that the synthetic myelin appears to be a
functionally identical substitute for natural myelin.
Except it isn't identical. No imitation substance is identical to the original. Sooner or later the limits of the imitation will be found - or they could be advantages. Maybe the imitation myelin prevents brain cancer or heat stroke or something, but it also maybe prevents sensation in cold weather, or maybe certain amino acids now cause Parkinson's disease. There is no such thing as identical. There is only 'seems identical from this measure at this time'.
As is the nature
of science, another team of researchers may then find some deficit in
the behaviour of the modified rats under conditions the first team did
not examine. Scientists then make modifications to the formula of the
synthetic myelin and do the experiments again.
Which is great for medicine (although ultimately maybe unsustainably expensive), but it has nothing to do with the assumption of identical structure and the hard problem of consciousness. There is no such thing as identical experience.
I have suggested that in fact we can perhaps define consciousness as that which has never been repeated. It is the antithesis of that which can be repeated (hence the experience of "now"), even though experiences themselves can seem very repetitive. They only seem so from the vantage point of a completely novel moment of consideration of the memories of previous iterations.
> This is the point of the thought experiment. The limitations of all forms of
> measurement and perception preclude all possibility of there ever being a
> such thing as an exhaustively complete set of third person behaviors of any
> system.
>
> What is it that you don't think I understand?
What you don't understand is that an exhaustively complete set of
behaviours is not required.
Yes, it is. Not for prosthetic enhancements, or repairs to a nervous system, but to replace a nervous system without replacing the person who is using it, yes, there is no set of behaviors which can ever be exhaustive enough in theory to accomplish that.
You might be able to do it biologically, but there is no reason to trust it unless and until someone can be walked off of their brain for a few weeks or months and then walked back on.
I don't access an exhaustively complete
set of behaviours to determine if my friends are the same people from
day to day, and in fact they are *not* the same systems from day to
day, as they change both physically and psychologically. I have in
mind a rather vague set of behavioural limits and if the
people who I think are my friends deviate significantly from these
limits I will start to worry.
Which is exactly why you would not want to replace your friends with devices capable only of programmed deviations. Are simulated friends 'good enough'? Will it be good enough when your friends convince you to be replaced by your simulation?
Craig
--
Stathis Papaioannou
On 9/16/2012 8:42 AM, Craig Weinberg wrote:
On Saturday, September 15, 2012 6:21:14 AM UTC-4, stathisp wrote:
Moreover, this
set has subsets, and we can limit our discussion to these subsets. For
example, if we are interested only in mass, we can simulate a human
perfectly using the right number of rocks. Even someone who believes
in an immortal soul would agree with this.
No, I don't agree with it at all. You are eating the menu. A quantity of mass doesn't simulate anything except in your mind. Mass is a normative abstraction which we apply in comparing physical bodies with each other. To reduce a human being to a physical body is not a simulation; it is only weighing a bag of organic molecules.
Thus we can realistically claim that the physical world is exactly and only all things that we (as we truly are) have in common. What must be understood is that as the number of participating entities increases to infinity, the number of "things in common" goes to zero. Only for a large but finite set of entities will there be a semi-large number of relations that the entities have in common without a degeneracy relation between them.
A black hole is a nice demonstration of the degeneracy idea. The effect of gravity is the force of degeneracy: when all the ground states are forced to normalize and become identical with each other, the "space" and "delay" (time) that differ between them collapse to zero, and thus we get a singularity in the limit of the degeneracy.
----- Receiving the following content -----
From: Stephen P. King
Receiver: everything-list
Time: 2012-09-16, 12:13:52
Subject: Re: Zombieopolis Thought Experiment
Moreover, this
set has subsets, and we can limit our discussion to these subsets. For
example, if we are interested only in mass, we can simulate a human
perfectly using the right number of rocks. Even someone who believes
in an immortal soul would agree with this.
No, I don't agree with it at all. You are eating the menu. A quantity of mass doesn't simulate anything except in your mind. Mass is a normative abstraction which we apply in comparing physical bodies with each other. To reduce a human being to a physical body is not a simulation; it is only weighing a bag of organic molecules.
Yes, but there are properties of the brain that may not be relevant to
behaviour. Which properties are in fact important is determined by
experiment. For example, we may replace the myelin sheath with a
synthetic material that has similar electrical properties and then
test an isolated nerve to see if action potentials propagate in the
same way. If they do, then the next step is to incorporate the nerve
in a network and see if the pattern of firing in the network looks
normal. The step after that is to replace the myelin in the brain of a
rat to see if the animal's behaviour changes. The modified rats are
compared to unmodified rats by a blinded researcher to see if he can
tell the difference. If no-one can consistently tell the difference
then it is announced that the synthetic myelin appears to be a
functionally identical substitute for natural myelin.
Except it isn't identical. No imitation substance is identical to the original. Sooner or later the limits of the imitation will be found - or they could be advantages. Maybe the imitation myelin prevents brain cancer or heat stroke or something, but it also maybe prevents sensation in cold weather, or maybe certain amino acids now cause Parkinson's disease. There is no such thing as identical. There is only 'seems identical from this measure at this time'.
As is the nature
of science, another team of researchers may then find some deficit in
the behaviour of the modified rats under conditions the first team did
not examine. Scientists then make modifications to the formula of the
synthetic myelin and do the experiments again.
Which is great for medicine (although ultimately maybe unsustainably expensive), but it has nothing to do with the assumption of identical structure and the hard problem of consciousness. There is no such thing as identical experience. I have suggested that in fact we can perhaps define consciousness as that which has never been repeated. It is the antithesis of that which can be repeated (hence the experience of "now"), even though experiences themselves can seem very repetitive. They only seem so from the vantage point of a completely novel moment of consideration of the memories of previous iterations.
> This is the point of the thought experiment. The limitations of all forms of
> measurement and perception preclude all possibility of there ever being a
> such thing as an exhaustively complete set of third person behaviors of any
> system.
>
> What is it that you don't think I understand?
What you don't understand is that an exhaustively complete set of
behaviours is not required.
Yes, it is. Not for prosthetic enhancements, or repairs to a nervous system, but to replace a nervous system without replacing the person who is using it, yes, there is no set of behaviors which can ever be exhaustive enough in theory to accomplish that. You might be able to do it biologically, but there is no reason to trust it unless and until someone can be walked off of their brain for a few weeks or months and then walked back on.
I don't access an exhaustively complete
set of behaviours to determine if my friends are the same people from
day to day, and in fact they are *not* the same systems from day to
day, as they change both physically and psychologically. I have in
mind a rather vague set of behavioural limits and if the
people who I think are my friends deviate significantly from these
limits I will start to worry.
Which is exactly why you would not want to replace your friends with devices capable only of programmed deviations. Are simulated friends 'good enough'? Will it be good enough when your friends convince you to be replaced by your simulation?
Hi Stephen P. King
The physical is, and only is, what you can measure.
Roger Clough, rcl...@verizon.net
9/17/2012
Leibniz would say, "If there's no God, we'd have to invent him so that everything could function."
Moreover, this
set has subsets, and we can limit our discussion to these subsets. For
example, if we are interested only in mass, we can simulate a human
perfectly using the right number of rocks. Even someone who believes
in an immortal soul would agree with this.
No, I don't agree with it at all. You are eating the menu. A quantity of mass doesn't simulate anything except in your mind. Mass is a normative abstraction which we apply in comparing physical bodies with each other. To reduce a human being to a physical body is not a simulation; it is only weighing a bag of organic molecules.

I'm just saying that the mass of the human and the mass of the rocks is the same, not that the rocks and the human are the same. They share a property, which manifests as identical behaviour when they are put on scales. What's controversial about that?
Yes, but there are properties of the brain that may not be relevant to
behaviour. Which properties are in fact important is determined by
experiment. For example, we may replace the myelin sheath with a
synthetic material that has similar electrical properties and then
test an isolated nerve to see if action potentials propagate in the
same way. If they do, then the next step is to incorporate the nerve
in a network and see if the pattern of firing in the network looks
normal. The step after that is to replace the myelin in the brain of a
rat to see if the animal's behaviour changes. The modified rats are
compared to unmodified rats by a blinded researcher to see if he can
tell the difference. If no-one can consistently tell the difference
then it is announced that the synthetic myelin appears to be a
functionally identical substitute for natural myelin.
Except it isn't identical. No imitation substance is identical to the original. Sooner or later the limits of the imitation will be found - or they could be advantages. Maybe the imitation myelin prevents brain cancer or heat stroke or something, but it also maybe prevents sensation in cold weather, or maybe certain amino acids now cause Parkinson's disease. There is no such thing as identical. There is only 'seems identical from this measure at this time'.

Yes, it's not *identical*. No-one has claimed this. And since it's not identical, under some possible test it would behave differently; otherwise it would be identical.
But there are some changes which make no functional difference.
If I have a drink of water, that changes my brain by decreasing the sodium concentration. But this change is not significant if we are considering whether I continue to manifest normal human behaviour, since firstly the brain is tolerant of moderate physical changes
and secondly people can manifest a range of different behaviours and remain recognisably human and recognisably the same human. In other words humans have certain engineering tolerances in their components, and the aim in replacing components would be to do it within this tolerance. Perfection is not attainable by either engineers or nature.
As is the nature
of science, another team of researchers may then find some deficit in
the behaviour of the modified rats under conditions the first team did
not examine. Scientists then make modifications to the formula of the
synthetic myelin and do the experiments again.
Which is great for medicine (although ultimately maybe unsustainably expensive), but it has nothing to do with the assumption of identical structure and the hard problem of consciousness. There is no such thing as identical experience. I have suggested that in fact we can perhaps define consciousness as that which has never been repeated. It is the antithesis of that which can be repeated (hence the experience of "now"), even though experiences themselves can seem very repetitive. They only seem so from the vantage point of a completely novel moment of consideration of the memories of previous iterations.

Here is where you have misunderstood the whole aim of the thought experiment in the paper you have cited. The paper assumes that identical function does *not* necessarily result in identical consciousness and follows this idea to see where it leads.
> This is the point of the thought experiment. The limitations of all forms of
> measurement and perception preclude all possibility of there ever being a
> such thing as an exhaustively complete set of third person behaviors of any
> system.
>
> What is it that you don't think I understand?
What you don't understand is that an exhaustively complete set of
behaviours is not required.
Yes, it is. Not for prosthetic enhancements, or repairs to a nervous system, but to replace a nervous system without replacing the person who is using it, yes, there is no set of behaviors which can ever be exhaustive enough in theory to accomplish that. You might be able to do it biologically, but there is no reason to trust it unless and until someone can be walked off of their brain for a few weeks or months and then walked back on.

The replacement components need only be within the engineering tolerance of the nervous system components. This is a difficult task but it is achievable in principle.
I don't access an exhaustively complete
set of behaviours to determine if my friends are the same people from
day to day, and in fact they are *not* the same systems from day to
day, as they change both physically and psychologically. I have in
mind a rather vague set of behavioural limits and if the
people who I think are my friends deviate significantly from these
limits I will start to worry.
Which is exactly why you would not want to replace your friends with devices capable only of programmed deviations. Are simulated friends 'good enough'? Will it be good enough when your friends convince you to be replaced by your simulation?

I assume that my friends have not been replaced by robots. If they have been then that means the robots can almost perfectly replicate their behaviour, since I (and people in general) am very good at picking up even tiny deviations from normal behaviour. The question then is, if the function of a human can be replicated this closely by a machine does that mean the consciousness can also be replicated? The answer is yes, since otherwise we would have the possibility of a person having radically different experiences but behaving normally and being unaware that their experiences were different.
-- Stathis Papaioannou
> This is the point of the thought experiment. The limitations of all forms of
> measurement and perception preclude all possibility of there ever being a
> such thing as an exhaustively complete set of third person behaviors of any
> system.
>
> What is it that you don't think I understand?
What you don't understand is that an exhaustively complete set of
behaviours is not required.
Yes, it is. Not for prosthetic enhancements, or repairs to a nervous system, but to replace a nervous system without replacing the person who is using it, yes, there is no set of behaviors which can ever be exhaustive enough in theory to accomplish that. You might be able to do it biologically, but there is no reason to trust it unless and until someone can be walked off of their brain for a few weeks or months and then walked back on.

The replacement components need only be within the engineering tolerance of the nervous system components. This is a difficult task but it is achievable in principle.
You assume that consciousness can be replaced, but I understand exactly why it can't. You can believe that there is no difference between scooping out your brain stem and replacing it with a functional equivalent as long as it was well engineered, but to me it's a completely misguided notion. Consciousness doesn't exist on the outside of us. Engineering only deals with exteriors. If the universe were designed by engineers, there could be no consciousness.
I don't access an exhaustively complete
set of behaviours to determine if my friends are the same people from
day to day, and in fact they are *not* the same systems from day to
day, as they change both physically and psychologically. I have in
mind a rather vague set of behavioural limits and if the
people who I think are my friends deviate significantly from these
limits I will start to worry.
Which is exactly why you would not want to replace your friends with devices capable only of programmed deviations. Are simulated friends 'good enough'? Will it be good enough when your friends convince you to be replaced by your simulation?

I assume that my friends have not been replaced by robots. If they have been then that means the robots can almost perfectly replicate their behaviour, since I (and people in general) am very good at picking up even tiny deviations from normal behaviour. The question then is, if the function of a human can be replicated this closely by a machine does that mean the consciousness can also be replicated? The answer is yes, since otherwise we would have the possibility of a person having radically different experiences but behaving normally and being unaware that their experiences were different.
The answer is no. A cartoon of Bugs Bunny has no experiences but behaves just like Bugs Bunny would if he had experiences. You are eating the menu.
Craig
-- Stathis Papaioannou
Craig,
Do you think if your brain were cut in half, but then perfectly put back together that you would still be conscious in the same way?
What if cut into a thousand pieces and put back together perfectly?
What if every atom was taken apart and put back together?
What if every atom was taken apart, and then atoms from a different pile were used to put you back together?
What then if the original atoms were put back, would they both experience what it is like to be you?
Does the identity of one's atoms matter or are they interchangeable? If the identity is not what matters, what is it that does?
Jason
On Tue, Sep 18, 2012 at 6:39 AM, Craig Weinberg <whats...@gmail.com> wrote:
> I understand that, but it still assumes that there is a such thing as a set
> of functions which could be identified and reproduced that cause
> consciousness. I don't assume that, because consciousness isn't like
> anything else. It is the source of all functions and appearances, not the
> effect of them. Once you have consciousness in the universe, then it can be
> enhanced and altered in infinite ways, but none of them can replace the
> experience that is your own.
No, the paper does *not* assume that there is a set of functions that
if reproduced will cause consciousness. It assumes that something
like what you are saying is right.
>> I assume that my friends have not been replaced by robots. If they have
>> been then that means the robots can almost perfectly replicate their
>> behaviour, since I (and people in general) am very good at picking up even
>> tiny deviations from normal behaviour. The question then is, if the function
>> of a human can be replicated this closely by a machine does that mean the
>> consciousness can also be replicated? The answer is yes, since otherwise we
>> would have the possibility of a person having radically different
>> experiences but behaving normally and being unaware that their experiences
>> were different.
>
>
> The answer is no. A cartoon of Bugs Bunny has no experiences but behaves
> just like Bugs Bunny would if he had experiences. You are eating the menu.
And if it were possible to replicate the behaviour without the
experiences - i.e. make a zombie - it would be possible to make a
partial zombie, which lacks some experiences but behaves normally and
doesn't realise that it lacks those experiences. Do you agree that
this is the implication? If not, where is the flaw in the reasoning?
--
Stathis Papaioannou
On Monday, September 17, 2012 6:18:00 PM UTC-4, Jason wrote:
Craig,
Do you think if your brain were cut in half, but then perfectly put back together that you would still be conscious in the same way?
There is no such thing as perfectly put back together. If you cut a living cell in half, it dies. The only way of putting it perfectly back together is to travel back in time and not cut it in half.
What if cut into a thousand pieces and put back together perfectly?
Same answer.
What if every atom was taken apart and put back together?
If you could take every atom in a living cell 'apart' and put it back together without killing the cell, then it seems like it would work, but I don't think that the cells would necessarily be 'the same' cells.
To me consciousness is an event in time, not a structure in space. The structure is the vehicle of the event. If you mess with the vehicle, you mess with the event.
What if every atom was taken apart, and then atoms from a different pile were used to put you back together?
When the atoms are taken apart, you die. If you put them together in what you think is the same way,
it is still a different performance of atoms, whether they are the same or different.
What then if the original atoms were put back, would they both experience what it is like to be you?
No.
Does the identity of one's atoms matter or are they interchangeable? If the identity is not what matters, what is it that does?
Our atoms are replaced all the time.
Our identity exists at the level of our experience as a whole.
The experience of our body, our family, culture, etc. We are a lifetime that uses the whole brain as a way to participate in the human world as a human body.
Experience is what matters.
Craig
Jason
----- Receiving the following content -----
From: Stephen P. King
Receiver: everything-list
Time: 2012-09-17, 11:30:13
Subject: Re: Zombieopolis Thought Experiment
On Mon, Sep 17, 2012 at 7:03 PM, Craig Weinberg <whats...@gmail.com> wrote:
On Monday, September 17, 2012 6:18:00 PM UTC-4, Jason wrote:
Craig,
Do you think if your brain were cut in half, but then perfectly put back together that you would still be conscious in the same way?
There is no such thing as perfectly put back together. If you cut a living cell in half, it dies. The only way of putting it perfectly back together is to travel back in time and not cut it in half.

Why do you believe this? We can put machines back together. Cells are machines on a very small scale. It would be difficult, but there is no physical reason that prevents us from putting a cell back together after it has come apart.
What if cut into a thousand pieces and put back together perfectly?
Same answer.
What if every atom was taken apart and put back together?
If you could take every atom in a living cell 'apart' and put it back together without killing the cell, then it seems like it would work, but I don't think that the cells would necessarily be 'the same' cells.

What is different about them? They could have the same exact quantum state, and yet you believe that because at one point in the past some atoms had some distance put between them, this somehow rules out the possibility of those atoms ever being used to build a person or life form, or be conscious?
Why would this be? Our bodies continually take in and use atoms from things that were once not alive. What is different here?
To me consciousness is an event in time, not a structure in space. The structure is the vehicle of the event. If you mess with the vehicle, you mess with the event.

What is the difference between putting someone back together and a baby slowly being constructed through a set of complex chemical reactions from previously lifeless matter?
In either case would the result not be a fully alive and conscious human? Do you suppose life also requires that life forms be built in certain natural ways (rather than artificial ways)?
What if every atom was taken apart, and then atoms from a different pile were used to put you back together?
When the atoms are taken apart, you die. If you put them together in what you think is the same way,it is still a different performance of atoms, whether they are the same or different.
The hypothetical did not involve some person thinking they were put back in the same way, but the atoms actually being put back in the same way.
Do you still think there would be a "different performance of atoms"?What then if the original atoms were put back, would they both experience what it is like to be you?
No.Why shouldn't they?
Does the identity of one's atoms matter or are they interchangable? If the identity is not what matters, what is it that does?
Our atoms are replaced all the time.Right.Our identity exists at the level of our experience as a whole.I don't understand what you mean here.
The experience of our body, our family, culture, etc. We are a lifetime that uses the whole brain as a way to participate in the human world as a human body.Are you suggesting that things beyond one's skull are relevant to what someone experiences?
Hi Craig Weinberg
IMHO consciousness is not really anything in itself,
it is what the brain makes of its contents that the self
perceives.
The self is intelligence, which is
able to focus all pertinent brain activity to a unified point.
Ha ha: so not consciousness is the 'thing', but 'intelligence'? Or is this one also a function (of the brain towards the self)? Who is the self? How does the brain DO something (as a homunculus?) on its own? Any suggestions?
John M
> "Things" have extension and are physical
> a "non-thing" has no extension and is not physical.
> Consciousness or mind is not physical
> The brain is physical.
----- Receiving the following content -----
From: John Mikes
Receiver: everything-list
Time: 2012-09-18, 17:17:40
Subject: Re: IMHO consciousness is an activity not a thing
Ha ha: so not consciousness is the 'thing', but 'intelligence'? Or is this one also a function (of the brain towards the self)? Who is the self? How does the brain
DO something
(as a homunculus?) on its own? Any suggestions?
John M
Hi John Mikes
Once you leave the material world for the ideal one, all things -- or at least many things -- now become possible.
Roger Clough, rcl...@verizon.net
9/19/2012
"Forever is a long time, especially near the end." - Woody Allen
On Tue, Sep 18, 2012 at 1:43 PM, Craig Weinberg <whats...@gmail.com> wrote:
>> No, the paper does *not* assume that there is a set of functions that
>> if reproduced will cause consciousness. It assumes that something
>> like what you are saying is right.
>
>
> By assume I mean the implicit assumptions which are unstated in the paper.
> The thought experiment comes out of a paradox arising from assumptions about
> qualia and the brain which are both false in my view. I see the brain as the
> flattened qualia of human experience.
Chalmers's position is that functionalism is true, and he states this
in the introduction, but this is not *assumed* in the thought
experiment. The thought experiment explicitly assumes that
functionalism is *false*; that consciousness is dependent on the
substrate and swapping a brain for a functional equivalent will not
necessarily give rise to the same consciousness or any consciousness
at all. Isn't that what you believe?
>> And if it were possible to replicate the behaviour without the
>> experiences - i.e. make a zombie - it would be possible to make a
>> partial zombie, which lacks some experiences but behaves normally and
>> doesn't realise that it lacks those experiences. Do you agree that
>> this is the implication? If not, where is the flaw in the reasoning?
>
>
> The word zombie implies that you have an expectation of consciousness but
> there isn't any. That is a fallacy from the start, since there is no reason
> to expect a simulation to have any experience at all. It's not a zombie,
> it's a puppet.
Replace the word "zombie" with "puppet" if that makes it easier to understand.
> A partial zombie is just someone who has brain damage, and yes if you tried
> to replace enough of a person's brain with a non-biological material, you
> would get brain damage, dementia, coma, and death.
Not if the puppet components perform the same purely mechanical
functions as the original components.
In order for this to happen
according to the paper you have to accept that the physics of the
brain is in fact computable. If it is computable, then we can model
the behaviour of the brain,
although according to the assumptions in
the paper (which coincide with your assumptions)
modeling the
behaviour won't reproduce the consciousness. All the evidence we have
suggests that physics is computable, but it might not be. It may turn
out that there is some exotic physics in the brain which requires
solving the halting problem, for example, in order to model it, and
that would mean that a computer could not adequately simulate those
components of the brain which utilise this physics. But going beyond
the paper, the argument for functionalism (substrate-independence of
consciousness) could still be made by considering theoretical
components with non-biological hypercomputers.
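The claim that computable physics lets us "model the behaviour of the brain" can be illustrated with a toy sketch (an illustration only, not anything proposed in the thread): a leaky integrate-and-fire neuron, the simplest standard model of action-potential generation. All parameter values below are arbitrary placeholders, not physiological fits.

```python
# Toy leaky integrate-and-fire neuron: if the relevant physics is
# computable, behaviour such as spike timing can be simulated step by
# step with ordinary arithmetic. Parameters are illustrative only.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-70.0,
                 v_threshold=-55.0, v_reset=-75.0, resistance=10.0):
    """Return spike times (ms) for a sampled input current (arbitrary units)."""
    v = v_rest
    spikes = []
    for step, current in enumerate(input_current):
        # Membrane potential decays toward rest, driven by the input.
        dv = (-(v - v_rest) + resistance * current) / tau
        v += dv * dt
        if v >= v_threshold:          # threshold crossed: record a spike
            spikes.append(step * dt)
            v = v_reset               # then reset the membrane potential
    return spikes

# A constant 2.0-unit current for 100 ms drives the neuron past threshold
# repeatedly, producing a regular spike train; zero input produces none.
spike_times = simulate_lif([2.0] * 1000)
```

Whether such a model reproduces consciousness is exactly what the thread disputes; the sketch only shows what "modeling the behaviour" means operationally.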
Hi Craig Weinberg
"Things" have extension and are physical, a "non-thing" has no extension and is not physical.
Consciousness or mind is not physical, at least in my understanding. The brain is physical.
--
Hi Craig,
Hi Craig Weinberg
Consciousness requires an autonomous self.
So does life itself. And intelligence.
So, I hate to say this, but perhaps consciousness and life may be a problem with mereology, I don't know.
Also, have you seen Jan Smuts' "Holism"? Maybe he solved the problem. He was a lousy general but a good thinker otherwise.
On Thursday, September 20, 2012 7:19:30 AM UTC-4, rclough wrote:
Hi Craig Weinberg
Consciousness requires an autonomous self.
Human consciousness requires an autonomous human self, but it is not necessarily true that consciousness requires a 'self'. It makes more sense to say that an autonomous self and consciousness both require awareness.
So does life itself. And intelligence.
We don't really know that. We can only speak for our own life and our own intelligence. I wouldn't presume a self, especially on low levels of awareness like molecular groupings.
So, I hate to say this, but perhaps consciousness and life may be a problem with mereology, I don't know.
Why is it a problem? Mereology is the public presentation of life, and the private presentation is the opposite: non-mereology.
On 9/20/2012 12:55 PM, Craig Weinberg wrote:
On Thursday, September 20, 2012 7:19:30 AM UTC-4, rclough wrote:
Hi Craig Weinberg
Consciousness requires an autonomous self.
Human consciousness requires an autonomous human self, but it is not necessarily true that consciousness requires a 'self'. It makes more sense to say that an autonomous self and consciousness both require awareness.
What if awareness is what happens when autonomous self and consciousness mirror each other?
So does life itself. And intelligence.
We don't really know that. We can only speak for our own life and our own intelligence. I wouldn't presume a self, especially on low levels of awareness like molecular groupings.
So, I hate to say this, but perhaps consciousness and life may be a problem with mereology, I don't know.
Why is it a problem? Mereology is the public presentation of life, and the private presentation is the opposite: non-mereology.
Huh? Non-mereology. What is that?
Our feeling of hurting is a (whole) experience of human reality, so it is not composed of sub-personal experiences in a part-whole mereological relation; rather, the relation is just the opposite. It is non-mereological or a-mereological. It is the primordial semi-unity/hyper-unity from which part-whole distinctions are extracted and projected outward as the classical realism of an exterior world. I know that sounds dense and crazy, but I don't know of a clearer way to describe it. Subjective experience is augmented along an axis of quality rather than quantity. Experiences of hurting recapitulate sub-personal experiences of emotional loss and disappointment, anger, and fear, with tactile sensations of throbbing, stabbing, burning, and cognitive feedback loops of worry, impatience, exaggerating and replaying the injury or illness, memories of associated experiences, etc. But we can just say 'hurting' and we all know generally what that means. No more particular description adds much to it. That is completely unlike exterior realism, where all we could see of a machine hurting would be that more processing power seems to be devoted to some particular set of computations. Those computations don't run 'all together and at once', unless there is a living being there to interpret them that way - as we do when we look at a screen full of individual pixels and see images through the pixels rather than the changing pixels themselves.
On Thursday, September 20, 2012 9:49:58 PM UTC-4, Stephen Paul King wrote:
On 9/20/2012 12:55 PM, Craig Weinberg wrote:
On Thursday, September 20, 2012 7:19:30 AM UTC-4, rclough wrote:
Hi Craig Weinberg
Consciousness requires an autonomous self.
Human consciousness requires an autonomous human self, but it is not necessarily true that consciousness requires a 'self'. It makes more sense to say that an autonomous self and consciousness both require awareness.
What if awareness is what happens when autonomous self and consciousness mirror each other?
There can't be an autonomous self without awareness as an ontological given to begin with, at least as an inevitable potential.
What would a self be or do without awareness?
You can have awareness without a self being presented within that awareness, though. I've had dreams where there is no "I"; there are just scenes taking place.
So does life itself. And intelligence.
We don't really know that. We can only speak for our own life and our own intelligence. I wouldn't presume a self, especially on low levels of awareness like molecular groupings.
So, I hate to say this, but perhaps consciousness and life may be a problem with mereology, I don't know.
Why is it a problem? Mereology is the public presentation of life, and the private presentation is the opposite: non-mereology.
Huh? Non-mereology. What is that?
I call it a-mereology also. That's the subjective conjugate to topology. In public realism there is the Stone Duality (topologies ┴ logical algebras), while the private phenomenology duality is orthogonal to the Stone duality (a-mereology ┴ transrational gestalt-algebra).
I posted about it a bit yesterday (the paragraph above beginning "Our feeling of hurting...").
Craig
On Thu, Sep 13, 2012 at 3:03 PM, Craig Weinberg <whats...@gmail.com> wrote:
> If anyone is not familiar with David Chalmers "Absent Qualia, Fading Qualia, Dancing Qualia" You should have a look at it first.
I confess I have not read it because I have little confidence it's any better than the Chinese Room. Well OK I exaggerate, it's probably better than that (what isn't?), but there is something about all these anti-AI thought experiments that has always confused me. Let's suppose I'm dead wrong and Chalmers really has found something new and strange and maybe even paradoxical about consciousness; what I want to know is why am I required to explain it if I want to continue to believe that intelligent computers would be conscious? Whatever argument Chalmers has could just as easily be turned against the idea that the intelligent behavior of other people indicates consciousness, and yet not one person on this list believes in solipsism, not even the most vocal AI critics. Why? Why is it that I must find the flaws in all these thought experiments but the anti-AI people feel no need to do so?
In the extraordinarily unlikely event that Chalmers has shown that consciousness is paradoxical (and it's probably just as childish as all the others) I would conclude that he just made an error someplace that nobody has found yet. When Zeno showed that motion was paradoxical nobody thought that motion did not exist, only that Zeno had made a mistake; and he had, although the error wasn't found until the invention of the calculus thousands of years later.
John K Clark
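The Zeno aside above comes down to the convergence of a geometric series: the infinitely many ever-halving steps of the dichotomy sum to a finite distance. A quick numerical check (an illustration only; the helper name is made up for this sketch):

```python
# Zeno's dichotomy: 1/2 + 1/4 + 1/8 + ...
# The partial sums approach 1, so infinitely many steps cover a finite
# distance, which is the calculus-era resolution of the paradox.

def zeno_partial_sum(n_terms):
    """Distance covered after the first n_terms halving steps."""
    return sum(0.5 ** k for k in range(1, n_terms + 1))

# The closed form of the partial sum is 1 - 2**(-n), which tends to 1.
gap_after_50_steps = 1.0 - zeno_partial_sum(50)
```

After 50 steps the remaining gap is already about 2**-50, i.e. below ordinary floating-point noise.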
On 9/22/2012 10:53 AM, John Clark wrote:
On Thu, Sep 13, 2012 at 3:03 PM, Craig Weinberg <whats...@gmail.com> wrote:
> If anyone is not familiar with David Chalmers "Absent Qualia, Fading Qualia, Dancing Qualia" You should have a look at it first.
It's a set of reductio arguments in favor of functionalism (i.e. comp).
I find these arguments convincing. So in building an intelligent robot it is almost certain that at a sufficiently high level of intelligence we will have created a conscious robot. But I don't think it follows that the robot's consciousness will be the same as ours - because it's not the same even between different human beings. In particular I refer to synesthesia and certain mathematical savants who seem to have a different consciousness than I do. So for me the interesting question is how to build a robot whose consciousness differs in prespecified ways.
Brent
On Sun, Sep 23, 2012 at 9:13 AM, Craig Weinberg <whats...@gmail.com> wrote:
> What I see that he has not considered is that consciousness is the function of uniqueness itself
For me to understand what you mean by this you need to answer one question, was the Email message that you sent to the Everything list on Sunday Sep 23, 2012 at 9:13 AM on the east coast of the USA with the title "Re:Zombieopolis Thought Experiment" unique?
> the accumulated history of experience, seen to us as matter
Without information to organize it, matter doesn't seem like much of anything; it's just a chaotic amorphous lump of stuff containing nothing of interest.
> I see only coma and death as the replacement neurons encroach on the brain stem
Because you believe that the neurons are doing something magical,
even though the scientific method cannot find one scrap of evidence that they are doing any such thing.
No doubt you will say that science doesn't know everything and just hasn't found the answer, but the problem is that science hasn't even found evidence that there is a question that needs answering, or if you prefer to put it another way, science hasn't found any evidence that an intelligent conscious computer is any more impossible than an intelligent conscious human.
Unless you can show at a fundamental level that biology has something that electronics lacks, we must conclude that if computers can't be conscious then neither can humans.
> irreversible damage would occur just as it would with dementia or a malignant brain tumor.
I would say that would be more like a benign brain tumor; in fact, given that it performs exactly like the original brain cells, it would not be going too far to call it an infinitely benign brain tumor.
John K Clark
On 19 Sep 2012, at 15:53, Roger Clough wrote:
Hi John Mikes
Once you leave the material world for the ideal one, all things -- or at least many things -- now become possible.
Yes. Since always. But there are many paths, and we can get lost.
Platonia before and after Gödel or Church is not the same. The circle and the regular polyhedra keep their majestic importance, but now they have the company of the Mandelbrot set, and UDs. Shit happens, when seen from inside. With comp, heaven and hell are not mechanically separable; nothing is easy near the boundaries.
***
I think that your metaphysics and reading of Leibniz makes sense for me, and comp, but I have to say I don't follow your methodology or teaching method on the religious field, as it contains authoritative arguments.
My feeling is that authoritative argument is the symptom of those who lack faith.
That error is multiplied in the transfinite when an authoritative argument is attributed to God.
Can you answer the following question?
How could anyone love a God, or a Goddess, threatening you with eternal torture in case you don't love Him or Her? That's bizarre. How could even just an atom of sincerity reside in that love, with such an explicit horrible threat?
I hope you don't mind my frankness and the naivety of my questioning.
Bruno
----- Receiving the following content -----
From: Stathis Papaioannou
Receiver: everything-list
Time: 2012-09-23, 09:02:12
Subject: Re: Zombieopolis Thought Experiment
Hi Stathis Papaioannou
You need a self or observer to be conscious, and computers have no self. So they can't be conscious.
Consciousness = a subject looking at, or aware of, an object. Computers have no subject.
>> Thus the moon does not exist when you are not looking at it.
> I expected better from you! This quip is based on the premise that "you" are the only observer involved. Such nonsense! Considering that there are a HUGE number of observers of the moon
> Does the presence of the crater make a difference that makes a difference, or equivalently, have a causal effect on other entities in its environment? If yes then yes, it is "being observed".
> I think that Craig is discussing ideas that are flying right over your head.
On Sun, Sep 23, 2012 Craig Weinberg <whats...@gmail.com> wrote:
>> was the Email message that you sent to the Everything list on Sunday Sep 23, 2012 at 9:13 AM on the east coast of the USA with the title "Re:Zombieopolis Thought Experiment" unique?
> My experience of sending it was unique. The experiences of people reading what I wrote were unique.
That's all very nice but it doesn't answer my question, was the Email message that you sent to the Everything list on Sunday September 23, 2012 at 9:13 AM on the east coast of the USA with the title "Re:Zombieopolis Thought Experiment" unique?
> The existence of an email message is only inferred through our experiences
Obviously.
> there is no email message outside of human interpretation.
Thus the moon does not exist when you are not looking at it.
> Without sense to be informed, organization is just a hypothetical morphology containing no possibilities of interest.
Translation from the original bafflegab: without information information would contain nothing informative. I could not agree more.
> With sense, you don't need information, you just need to be able to make sense of forms locally in some way.
You made enough sense out of my message to respond to it, and you only received that sense impression because it was sent over a wire; and if it can be sent over a wire then it's information.
> Yes, scientific method can find no evidence of consciousness of any kind.
The thing I don't understand is why this is supposed to be a problem only for those who think an intelligent computer is conscious, and no problem at all for those who think that other intelligent humans are conscious.
> If you think that means that consciousness has to be impossible, then again, that is your projection.
You and I have both believed that consciousness exists since we were both infants, and we both have been implicitly using the exact same theory to determine when something is conscious and when something is not: that intelligent behavior indicates consciousness.
In fact you don't even believe that you yourself are conscious when you don't behave in a complex intelligent manner, such as when you are in a dreamless sleep or under anesthesia, and that's why you and I fear death: when we eventually get in that state we won't be acting any smarter than a rock, and as a result we fear that we will be no more conscious than a rock. What I object to is that when we run across an intelligent computer the rules of the game are supposed to suddenly change, and that just doesn't seem very smart.
> you define science as the objective study of the behavior of objects,
No, I define science as the use of the scientific method,
and that means looking at the evidence and developing a theory to explain it,
NOT finding a theory that makes you feel good and then looking for evidence that supports it and ignoring evidence that refutes it.
As illustrated in our debate on the free will noise you were even willing to embrace flat out logical contradictions if that's what it took for you to continue to believe what you found pleasant to believe, like X is not Y and X is also not not Y. Using such procedures may be successful in inducing a pleasing stupor but you'll have to abandon any hope of finding things that are true.
> then you cannot be surprised when science cannot locate what it is explicitly defined to disqualify.
I'm not surprised, and all I ask is that whatever method you use for determining the existence of consciousness, scientific or otherwise, you don't suddenly change the rules in the middle of the race just because you saw an intelligent computer. Use whatever test you want to infer consciousness; all I'm asking for is consistency.
> I don't understand how this isn't blindingly obvious, but I must accept that it is like gender orientation or political bias - not something that can be addressed by reason.
At one time it was blindingly obvious that human beings with black skin didn't have the same sort of feelings as people with white skin, even though they acted as if they did; that's how people convinced themselves that there was nothing wrong with slavery.
> If you try to live off of electronics then you will not survive. I have now shown that at a fundamental level, biology, in the form of food, respiration, hydration, etc, has something that electronics lack.
So the key to consciousness is that humans eat, breathe, drink, and shit but computers don't.
Hmm, I don't quite see the connection. However, I do know that both biology and electronics involve quantum tunneling, the Schrodinger Equation, and the Pauli Exclusion Principle, while electronics also has things that biology lacks, things like Bloch lattice functions, semiconductor valence bands, and the Hall effect. I don't understand why those things have nothing to do with consciousness but defecation is intimately related to consciousness.
I also don't understand why the computer counterpart of Craig Weinberg couldn't make the argument that human beings can behave intelligently but can never be conscious because they don't have p-n silicon junctions; after all, the link between p-n silicon junctions and consciousness is every bit as strong as the link between digestion and consciousness. For that matter, I don't understand why the biological Craig Weinberg doesn't make the argument that biological women can't be conscious because they don't have testicles.
> When we have electronics that can be used as meal replacements, then I will consider the possibility that such an advancement in electronics might have additional capacities.
So you're only conscious when you eat.
John K Clark
On 9/24/2012 12:02 PM, John Clark wrote:
> Thus the moon does not exist when you are not looking at it.

Hi John,
I expected better from you! This quip is based on the premise that "you" are the only observer involved. Such nonsense! Considering that there is a HUGE number of observers of the moon, the effect of the observations of any one of them is negligible. If none of them measure the presence of the moon or its effects, then the existence of the moon becomes purely an object of speculation. Note that being affected by the moon in terms of tidal effects is a measurement!