Continuous Game of Life


Jason Resch

Oct 11, 2012, 5:14:15 PM
to Everything List

Russell Standish

Oct 11, 2012, 5:47:59 PM
to everyth...@googlegroups.com
That's seriously cool! I love the comment posted: "Stephen Wolfram is
very angry!"

They do discrete time (Euler integration), but one could easily make
it continuous by replacing it with a Runge-Kutta integration scheme.

Thanks for posting this.

On Thu, Oct 11, 2012 at 04:14:15PM -0500, Jason Resch wrote:
> http://www.jwz.org/blog/2012/10/smoothlifel/
>
> Jason
>
> --
> You received this message because you are subscribed to the Google Groups "Everything List" group.
> To post to this group, send email to everyth...@googlegroups.com.
> To unsubscribe from this group, send email to everything-li...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.
>

--

----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics hpc...@hpcoders.com.au
University of New South Wales http://www.hpcoders.com.au
----------------------------------------------------------------------------

Bruno Marchal

Oct 12, 2012, 8:44:10 AM
to everyth...@googlegroups.com

On 11 Oct 2012, at 23:47, Russell Standish wrote:

> That's seriously cool! I love the comment posted: "Stephen Wolfram is
> very angry!"
>
> They do discrete time (Euler integration), but one could easily make
> it continuous by replacing it with a Runge-Kutta integration scheme.
>
> Thanks for posting this.

Very cool videos indeed. Although those are no longer cellular
automata, they still feature digital phenomena, even with a
Runge-Kutta integration scheme. I guess this remark is obvious,
although the notion of computation on the reals has no standard
definition, nor an equivalent of the Church thesis. Of course, some
people are searching for that.

I bet those smooth life games are Turing universal, but that might not
be so easy to prove. I guess the simplest way to do it consists in
finding the right subrange of phenomena needed to build the elementary
parts of a "von Neumann" sort of machine, as with the usual GOL.
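For reference, the "usual GOL" that such universality constructions start from fits in a few lines; the glider below is the classic mobile pattern used to carry signals between the elementary parts of those constructions. This is a generic illustration, not code from the SmoothLife post:

```python
import numpy as np

def gol_step(grid):
    # One generation of Conway's Game of Life on a toroidal grid.
    # Count each cell's eight neighbours by summing shifted copies.
    n = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

# A glider: after four generations it reappears shifted one cell
# diagonally, which is what lets it carry a signal.
glider = np.zeros((8, 8), dtype=np.uint8)
glider[1, 2] = glider[2, 3] = 1
glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
```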

Bruno



>
> On Thu, Oct 11, 2012 at 04:14:15PM -0500, Jason Resch wrote:
>> http://www.jwz.org/blog/2012/10/smoothlifel/
>>
>> Jason
>>
>
>
>

http://iridia.ulb.ac.be/~marchal/



Craig Weinberg

Oct 12, 2012, 8:50:11 AM
to everyth...@googlegroups.com
They are certainly cool looking and biomorphic. The question I have is, at what point do they begin to have experiences...or do you think that those blobs have experiences already?

Would it give them more of a human experience if an oscillating smiley-face/frowny-face algorithm were added graphically into the center of each blob?

Craig

On Thursday, October 11, 2012 5:14:17 PM UTC-4, Jason wrote:
http://www.jwz.org/blog/2012/10/smoothlifel/

Jason

Bruno Marchal

Oct 12, 2012, 10:23:52 AM
to everyth...@googlegroups.com

On 12 Oct 2012, at 14:50, Craig Weinberg wrote:

> They are certainly cool looking and biomorphic. The question I have
> is, at what point do they begin to have experiences...or do you
> think that those blobs have experiences already?
>
> Would it give them more of a human experience if an oscillating
> smiley-face/frowny-face algorithm were added graphically into the
> center of each blob?


Here is a simple "deterministic" phenomenon that looks amazingly
"alive" (a non-Newtonian fluid):

http://www.youtube.com/watch?v=3zoTKXXNQIU

Is it alive? That question does not make sense to me. Yes with some
definitions, no with others. Unlike consciousness or intelligence,
"life" is not a definite concept for me. I usually use the definition
"has a reproductive cycle", but that makes cigarettes and stars alive.
No problem for me.

Bruno


http://iridia.ulb.ac.be/~marchal/



Roger Clough

Oct 12, 2012, 12:50:22 PM
to everything-list
Hi Bruno Marchal

Life is whatever operates autonomously,
not following any rules, laws, or programs.
Thus a Turing machine cannot be part of
a living creature. Even if it reprograms itself, it
must be constrained by its computer language
and operating system.

Roger Clough, rcl...@verizon.net
10/12/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Bruno Marchal
Receiver: everything-list
Time: 2012-10-12, 10:23:52
Subject: Re: Continuous Game of Life

Roger Clough

Oct 12, 2012, 12:52:56 PM
to everything-list
Hi Craig Weinberg

I would begin to believe that the life-game
is conscious if there were some sort of shepherding
done by a "shepherd": a watcher and director.


Roger Clough, rcl...@verizon.net
10/12/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Craig Weinberg
Receiver: everything-list
Time: 2012-10-12, 08:50:11
Subject: Re: Continuous Game of Life



Russell Standish

Oct 12, 2012, 4:42:56 PM
to everyth...@googlegroups.com
On Fri, Oct 12, 2012 at 05:50:11AM -0700, Craig Weinberg wrote:
> They are certainly cool looking and biomorphic. The question I have is, at
> what point do they begin to have experiences...or do you think that those
> blobs have experiences already?
>
> Would it give them more of a human experience if an oscillating
> smiley-face/frowny-face algorithm were added graphically into the center of
> each blob?
>
> Craig

Assuming this system exhibits universality like the original GoL, and
assuming COMP, then some patterns will exhibit consciousness. However,
the patterns will no doubt be astronomical in size. The movies you see
here would be like taking an electron microscopic movie of the inner
workings of part of one cell in the human body.

I was more struck by the apparent similarity of the movie to the formation of
lipid bilayer membranes.

Terren Suydam

Oct 12, 2012, 5:42:11 PM
to everyth...@googlegroups.com
Hi Russell,

Even more suggestive is its similarity to Butschli protocells... see
this video for example:

http://www.youtube.com/watch?v=9tmTDvL1AUs and many others uploaded by
Rachel Armstrong... as she describes them "a simple self-organizing
system that is formed by the addition of a drop of alkali to a field
of olive oil - first described by Otto Butschli 1898"

Terren

Craig Weinberg

Oct 13, 2012, 5:10:24 PM
to everyth...@googlegroups.com

"The good news is, after this operation you'll be every bit as alive as a cigarette is".

There are some cool videos out there of cymatic animation like that. All that it really tells me is that there are a limited number of morphological themes in the universe, not that those themes are positively linked to any particular private phenomenology. They are producing those patterns with a particular acoustic signal, but we could model it mathematically and see the same pattern on a video screen without any acoustic signal at all. Same thing happens when we model the behaviors of a conscious mind. It looks similar from a distance, but that's all.

Craig

 





Craig Weinberg

Oct 13, 2012, 5:11:59 PM
to everyth...@googlegroups.com


On Friday, October 12, 2012 4:42:56 PM UTC-4, Russell Standish wrote:
On Fri, Oct 12, 2012 at 05:50:11AM -0700, Craig Weinberg wrote:
> They are certainly cool looking and biomorphic. The question I have is, at
> what point do they begin to have experiences...or do you think that those
> blobs have experiences already?
>
> Would it give them more of a human experience if an oscillating
> smiley-face/frowny-face algorithm were added graphically into the center of
> each blob?
>
> Craig

Assuming this system exhibits universality like the original GoL, and
assuming COMP, then some patterns will exhibit consciousness. However,
the patterns will no doubt be astronomical in size. The movies you see
here would be like taking an electron microscopic movie of the inner
workings of part of one cell in the human body.

Unlike part of a human cell though, they are just an optical presentation with no mass or chemical composition.

Craig
 

Russell Standish

Oct 13, 2012, 7:51:15 PM
to everyth...@googlegroups.com
On Sat, Oct 13, 2012 at 02:11:59PM -0700, Craig Weinberg wrote:
>
>
> On Friday, October 12, 2012 4:42:56 PM UTC-4, Russell Standish wrote:
> > Assuming this system exhibits universality like the original GoL, and
> > assuming COMP, then some patterns will exhibit consciousness. However,
> > the patterns will no doubt be astronomical in size. The movies you see
> > here would be like taking an electron microscopic movie of the inner
> > workings of part of one cell in the human body.
> >
>
> Unlike part of a human cell though, they are just an optical presentation
> with no mass or chemical composition.
>
> Craig

I know you don't believe in COMP, but assuming COMP (I am open-minded
on the topic), mass and chemical composition are irrelevant to
consciousness.

Cheers

Craig Weinberg

Oct 13, 2012, 7:51:44 PM
to everyth...@googlegroups.com


On Saturday, October 13, 2012 7:41:10 PM UTC-4, Russell Standish wrote:
On Sat, Oct 13, 2012 at 02:11:59PM -0700, Craig Weinberg wrote:
>
>
> On Friday, October 12, 2012 4:42:56 PM UTC-4, Russell Standish wrote:
> > Assuming this system exhibits universality like the original GoL, and
> > assuming COMP, then some patterns will exhibit consciousness. However,
> > the patterns will no doubt be astronomical in size. The movies you see
> > here would be like taking an electron microscopic movie of the inner
> > workings of part of one cell in the human body.
> >
>
> Unlike part of a human cell though, they are just an optical presentation
> with no mass or chemical composition.
>
> Craig

I know you don't believe in COMP, but assuming COMP (I am open-minded
on the topic), mass and chemical composition are irrelevant to
consciousness.

Since we know that our consciousness is exquisitely sensitive to particular masses of specific chemicals, yet relatively tolerant of other kinds of chemical changes, we should strongly suspect that COMP is a fantasy.

Craig
 

Stathis Papaioannou

Oct 13, 2012, 7:54:12 PM
to everyth...@googlegroups.com
On Sun, Oct 14, 2012 at 10:51 AM, Russell Standish
<li...@hpcoders.com.au> wrote:

> I know you don't believe in COMP, but assuming COMP (I am open-minded
> on the topic), mass and chemical composition are irrelevant to
> consciousness.

Chalmers' "fading qualia" argument purports to prove the
substrate-independence of consciousness.


--
Stathis Papaioannou

Stathis Papaioannou

Oct 13, 2012, 8:04:55 PM
to everyth...@googlegroups.com
On Sun, Oct 14, 2012 at 10:51 AM, Craig Weinberg <whats...@gmail.com> wrote:

> Since we know that our consciousness is exquisitely sensitive to particular
> masses of specific chemicals, yet relatively tolerant of other kinds of
> chemical changes, it suggests that we should strongly suspect that COMP is a
> fantasy.

That proves nothing. Any machine will be sensitive to small physical
changes of one kind and tolerant of others. If you introduce a
little bit of saline into the brain, nothing will happen; if you
introduce it inside an integrated circuit, it will destroy it.


--
Stathis Papaioannou

Craig Weinberg

Oct 13, 2012, 8:10:42 PM
to everyth...@googlegroups.com

Fading qualia is the only argument of Chalmers' that I disagree with. It's a natural mistake to make, but I think he goes wrong by assuming a priori that consciousness is functional, i.e. that personal consciousness is an assembly of sub-personal parts which can be isolated and reproduced based on exterior behavior.

I don't assume that at all. I suspect the opposite case: that in fact any level of personal consciousness - be it sub-personal-reflex, personal-intentional, or super-signifying-synchronistic - cannot be modeled by the impersonal views from third person perspectives. The impersonal (micro, meso, macrocosm) is based on public extension, space, and quantifiable lengths, while the personal is based on private intention, time, and qualitative oscillation. Each layer of the personal relates to all of the impersonal layers in a different way, so that you can't necessarily replace a person with a sculpture and expect there to still be a person there - even if the sculpture seems extremely convincing to us from the outside appearance.

My prediction is that rather than fading qualia, we would simply see increasing pathology, psychosis, dementia, coma, and death.

Craig



--
Stathis Papaioannou

Craig Weinberg

Oct 13, 2012, 8:13:17 PM
to everyth...@googlegroups.com

But if you introduce digital saline into a program, even if there is an effect that we can imagine is destruction, we can just restore from a backup. No actual destruction has taken place. The question of COMP deals not with physical computing devices versus biological organisms, but with logic that is independent of all forms of matter, energy, space, and time.

Craig
 


--
Stathis Papaioannou

Stathis Papaioannou

Oct 13, 2012, 9:05:27 PM
to everyth...@googlegroups.com
On Sun, Oct 14, 2012 at 11:10 AM, Craig Weinberg <whats...@gmail.com> wrote:

> Fading qualia is the only argument of Chalmers' that I disagree with. It's a
> natural mistake to make, but I think he goes wrong by assuming a priori that
> consciousness is functional, i.e. that personal consciousness is an assembly
> of sub-personal parts which can be isolated and reproduced based on exterior
> behavior.

No, he does NOT assume this. He assumes the opposite: that
consciousness is a property of the brain and CANNOT be reproduced by
reproducing the behaviour in another substrate.

> I don't assume that at all. I suspect the opposite case, that in
> fact any level of personal consciousness - be it sub-personal-reflex,
> personal-intentional, or super-signifying-synchronistic cannot be modeled by
> the impersonal views from third person perspectives. The impersonal (micro,
> meso, macrocosm) is based on public extension, space, and quantifiable
> lengths, while the personal is based on private intention, time, and
> qualitative oscillation. Each layer of the personal relates to all of the
> impersonal layers in a different way, so that you can't necessarily replace
> a person with a sculpture and expect there to still be a person there - even
> if the sculpture seems extremely convincing to us from the outside
> appearance. My prediction is that rather than fading qualia, we would simply
> see increasing pathology, psychosis, dementia, coma, and death.

But since you misunderstand the first assumption you misunderstand the
whole argument.


--
Stathis Papaioannou

Craig Weinberg

Oct 13, 2012, 11:59:39 PM
to everyth...@googlegroups.com


On Saturday, October 13, 2012 9:05:58 PM UTC-4, stathisp wrote:
On Sun, Oct 14, 2012 at 11:10 AM, Craig Weinberg <whats...@gmail.com> wrote:

> Fading qualia is the only argument of Chalmers' that I disagree with. It's a
> natural mistake to make, but I think he goes wrong by assuming a priori that
> consciousness is functional, i.e. that personal consciousness is an assembly
> of sub-personal parts which can be isolated and reproduced based on exterior
> behavior.

No, he does NOT assume this. He assumes the opposite: that
consciousness is a property of the brain and CANNOT be reproduced by
reproducing the behaviour in another substrate.

I'm not talking about what the structure of the thought experiment assumes; I am talking about what David Chalmers himself assumed before coming up with the paper. We have been over this before. I'm not saying I disagree with the reasoning of the thought experiment; I am saying that I see a mistake in the initial assumptions which invalidates the thought experiment in the first place.
 

> I don't assume that at all. I suspect the opposite case, that in
> fact any level of personal consciousness - be it sub-personal-reflex,
> personal-intentional, or super-signifying-synchronistic cannot be modeled by
> the impersonal views from third person perspectives. The impersonal (micro,
> meso, macrocosm) is based on public extension, space, and quantifiable
> lengths, while the personal is based on private intention, time, and
> qualitative oscillation. Each layer of the personal relates to all of the
> impersonal layers in a different way, so that you can't necessarily replace
> a person with a sculpture and expect there to still be a person there - even
> if the sculpture seems extremely convincing to us from the outside
> appearance. My prediction is that rather than fading qualia, we would simply
> see increasing pathology, psychosis, dementia, coma, and death.

But since you misunderstand the first assumption you misunderstand the
whole argument.

Nope. You misunderstand my argument completely.

Craig
 


--
Stathis Papaioannou

Stathis Papaioannou

Oct 14, 2012, 1:04:22 AM
to everyth...@googlegroups.com
On Sun, Oct 14, 2012 at 2:59 PM, Craig Weinberg <whats...@gmail.com> wrote:

>> No, he does NOT assume this. He assumes the opposite: that
>> consciousness is a property of the brain and CANNOT be reproduced by
>> reproducing the behaviour in another substrate.
>
>
> I'm not talking about what the structure of the thought experiment assumes,
> I am talking about what David Chalmers himself assumed before coming up with
> the paper. We have been over this before. I'm not saying I disagree with the
> reasoning of the thought experiment, I am saying that I see a mistake in the
> initial assumptions which invalidate the thought experiment in the first
> place.

The validity of a proof is not dependent on the beliefs, habits or
psychology of its author!

>> But since you misunderstand the first assumption you misunderstand the
>> whole argument.
>
>
> Nope. You misunderstand my argument completely.

Perhaps I do, but you specifically misunderstand that the argument
depends on the assumption that computers don't have consciousness. You
also misunderstand (or pretend to) the idea that a brain or computer
does not have to know the entire future history of the universe and
how it will respond to every situation it may encounter in order to
function. What are some equivalently simple, uncontroversial things in
what you say that I misunderstand?


--
Stathis Papaioannou

Craig Weinberg

Oct 14, 2012, 1:10:20 PM
to everyth...@googlegroups.com


On Sunday, October 14, 2012 1:04:54 AM UTC-4, stathisp wrote:
On Sun, Oct 14, 2012 at 2:59 PM, Craig Weinberg <whats...@gmail.com> wrote:

>> No, he does NOT assume this. He assumes the opposite: that
>> consciousness is a property of the brain and CANNOT be reproduced by
>> reproducing the behaviour in another substrate.
>
>
> I'm not talking about what the structure of the thought experiment assumes,
> I am talking about what David Chalmers himself assumed before coming up with
> the paper. We have been over this before. I'm not saying I disagree with the
> reasoning of the thought experiment, I am saying that I see a mistake in the
> initial assumptions which invalidate the thought experiment in the first
> place.

The validity of a proof is not dependent on the beliefs, habits or
psychology of its author!

If someone sets out to estimate how many angels can fit on the head of a pin, you are disallowing any questioning of whether angels exist at all.
 

>> But since you misunderstand the first assumption you misunderstand the
>> whole argument.
>
>
> Nope. You misunderstand my argument completely.

Perhaps I do, but you specifically misunderstand that the argument
depends on the assumption that computers don't have consciousness.

No, I do understand that.
 
You
also misunderstand (or pretend to) the idea that a brain or computer
does not have to know the entire future history of the universe and
how it will respond to every situation it may encounter in order to
function.

Do you have to know the entire history of how you learned English to read these words? It depends what you mean by know. You don't have to consciously recall learning English, but without that experience, you wouldn't be able to read this. If you had a module implanted in your brain which would allow you to read Chinese, it might give you an acceptable capacity to translate Chinese phonemes and characters, but it would be a generic understanding, not one rooted in decades of human interaction. Do you see the difference? Do you see how words are not only functional data but also names which carry personal significance?
 
What are some equivalently simple, uncontroversial things in
what you say that i misunderstand?

You think that I don't get that Fading Qualia is a story about a world in which the brain cannot be substituted, but I do. Chalmers is saying 'OK, let's say that's true - how would that be? Would your blue be less and less blue? How could you act normally if you...blah, blah, blah'. I get that. It's crystal clear.

What you don't understand is that this carries a priori assumptions about the nature of consciousness, that it is an end result of a distributed process which is monolithic. I am saying NO, THAT IS NOT HOW IT IS.

Imagine that we had one eye in the front of our heads and one ear in the back, and that the whole of human history has been to debate over whether walking forward means that objects are moving toward you or whether it means changes in relative volume of sounds.

Chalmers is saying, 'if we gradually replaced the eye with parts of the ear, how would our sight gradually change to sound, or would it suddenly switch over?' Since both options seem absurd, then he concludes that anything that is in the front of the head is an eye and everything on the back is an ear, or that everything has both ear and eye potentials.

The MR model is to understand that these two views are not merely substance dual or property dual, they are involuted juxtapositions of each other. The difference between front and back is not merely irreconcilable, it is mutually exclusive by definition in experience. I am not throwing up my hands and saying 'ears can't be eyes because eyes are special', I am positively asserting that there is a way of modeling the eye-ear relation based on an understanding of what time, space, matter, energy, entropy, significance, perception, and participation actually are and how they relate to each other.

The idea that the newly discovered ear-based model out of the back of our head is eventually going to explain the eye view out of the front is not scientific; it's an ideological faith that I understand to be critically flawed. The evidence is all around us; we have only to interpret it that way rather than keep updating our description of reality to match the narrowness of our fundamental theory. The theory only works for the back view of the world...it says *nothing* useful about the front view. To the True Disbeliever, this is a sign that we need to double down on the back-end view because it's the best chance we have. The thinking is that any other position implies that we throw out the back-end view entirely and go back to the dark ages of front-end fanaticism. I am not suggesting a compromise; I propose a complete overhaul in which we start not from the front and move back, or from the back and move front, but from the split itself, and see how it can be understood as a double knot - a fold of folds.

Craig



--
Stathis Papaioannou

Roger Clough

Oct 15, 2012, 10:19:24 AM
to everything-list
Hi Craig Weinberg

I think that comp is a form of scientific idealism.
I don't know exactly what that means, but
there are clues at

http://en.wikipedia.org/wiki/Idealism#Idealism_in_the_philosophy_of_science


Roger Clough, rcl...@verizon.net
10/15/2012
"Forever is a long time, especially near the end." -Woody Allen
----- Receiving the following content -----
From: Craig Weinberg
Receiver: everything-list
Time: 2012-10-13, 20:13:17
Subject: Re: Continuous Game of Life





John Clark

Oct 15, 2012, 12:14:52 PM
to everyth...@googlegroups.com
On Sat, Oct 13, 2012 Craig Weinberg <whats...@gmail.com> wrote:

 > Since we know that our consciousness

You don't know diddly squat about "our consciousness"; you only know about your consciousness, assuming of course that you are conscious. If not, then you don't even know that.

> is exquisitely sensitive to particular masses of specific chemicals, yet relatively tolerant of other kinds of chemical changes,

And a computer is exquisitely sensitive to particular voltages and not sensitive at all to other voltages that don't make the threshold.

> it suggests that we should strongly suspect that COMP is a fantasy.

And so the computer strongly suspects that biological consciousness is a fantasy.

  John K Clark

John Clark

Oct 15, 2012, 12:38:29 PM
to everyth...@googlegroups.com
On Sat, Oct 13, 2012 at 8:10 PM, Craig Weinberg <whats...@gmail.com> wrote:
 
> I think he [Chambers] goes wrong by assuming a priori that consciousness is functional,

I've asked you this question dozens of times but you have never coherently answered it: If consciousness doesn't do anything then Evolution can't see it, so how and why did Evolution produce it? The fact that you have no answer to this means your ideas are fatally flawed.

> that personal consciousness is an assembly of sub-personal parts which can be isolated and reproduced based on exterior behavior. I don't assume that at all.

And I've asked you another question that you also have no answer for: If we can deduce nothing about consciousness from behavior then why do you believe that your fellow human beings are conscious when they are behaving as if they are awake, and why do you believe that they are not conscious when they are sleeping or undergoing anesthesia or behaving as if they were dead and rotting in the ground? 

  John K Clark




Craig Weinberg

Oct 15, 2012, 12:41:40 PM
to everyth...@googlegroups.com


On Monday, October 15, 2012 12:14:55 PM UTC-4, John Clark wrote:
On Sat, Oct 13, 2012 Craig Weinberg <whats...@gmail.com> wrote:

 > Since we know that our consciousness

You don't know diddly squat about "our consciousness", you only know about your consciousness; assuming of course that you are conscious, if not then you don't even know that.

If that were true, then you don't know diddly squat about what I know. You can't have it both ways. Either it is possible that we know things or it is not. You can't claim to be omniscient about my ignorance.
 

> is exquisitely sensitive to particular masses of specific chemicals, yet relatively tolerant of other kinds of chemical changes,

And a computer is exquisitely sensitive to particular voltages and not sensitive at all to other voltages that don't make the threshold.

Let's see how a computer fares under a giant junkyard magnet.
 

> it suggests that we should strongly suspect that COMP is a fantasy.

And so the computer strongly suspects that biological consciousness is a fantasy.

Maybe the doorknob thinks that hands aren't alive too? Maybe you can talk yourself into believing that sophistry, but I'm not buying it.

Craig

  John K Clark

Craig Weinberg

Oct 15, 2012, 12:46:44 PM
to everyth...@googlegroups.com


On Monday, October 15, 2012 12:38:30 PM UTC-4, John Clark wrote:
On Sat, Oct 13, 2012 at 8:10 PM, Craig Weinberg <whats...@gmail.com> wrote:
 
> I think he [Chambers] goes wrong by assuming a priori that consciousness is functional,

I've asked you this question dozens of times but you have never coherently answered it: If consciousness doesn't do anything then Evolution can't see it, so how and why did Evolution produce it?

Evolution did not produce consciousness. Consciousness produced evolution. Not human consciousness, but sense. I have said this repeatedly.

 
The fact that you have no answer to this means your ideas are fatally flawed.

I keep answering it. You keep putting your fingers in your ears.
 

> that personal consciousness is an assembly of sub-personal parts which can be isolated and reproduced based on exterior behavior. I don't assume that at all.

And I've asked you another question that you also have no answer for: If we can deduce nothing about consciousness from behavior then why do you believe that your fellow human beings are conscious when they are behaving as if they are awake, and why do you believe that they are not conscious when they are sleeping or undergoing anesthesia or behaving as if they were dead and rotting in the ground? 

We can deduce a great deal about the consciousness of things which are similar to ourselves. The more distant and unrelated a phenomenon is to ourselves, the less certain we can be about what the experience associated with it might be. It's not a question of conscious vs. unconscious; it is a question of the range of qualities of consciousness. Humans have a broad range.

Craig


  John K Clark




John Clark

unread,
Oct 15, 2012, 1:02:03 PM10/15/12
to everyth...@googlegroups.com
On Mon, Oct 15, 2012 at 12:41 PM, Craig Weinberg <whats...@gmail.com> wrote:


You don't know diddly squat about "our consciousness", you only know about your consciousness; assuming of course that you are conscious, if not then you don't even know that.

If that were true, then you don't know diddly squat about what I know.

Not true, I know you don't have a proof of the Goldbach Conjecture. Well OK, I don't know that with absolute certainty, maybe you have a proof but are keeping it secret for some strange reason, but my knowledge is more than diddly squat because I very strongly suspect you have no such proof and I'm probably right. But I do know for certain that you don't have a valid proof that 2+2=5 or a way to directly detect consciousness in any mind other than your own.

You can't have it both ways. Either it is possible that we know things or it is not.

That is most certainly true, it is possible to know things, it's just not possible to know all things. 

> You can't claim to be omniscient about my ignorance.

It's almost as if you're claiming your ignorance is vast, well I admit I am not omniscient about your ignorance, no doubt you are ignorant about things that I don't know you are ignorant of.

> Let's see how a computer fares under a giant junkyard magnet.

Let's see how you fare in a junkyard car crusher.

  John K Clark

 

John Clark

unread,
Oct 15, 2012, 1:53:50 PM10/15/12
to everyth...@googlegroups.com
On Mon, Oct 15, 2012  Craig Weinberg <whats...@gmail.com> wrote:

> Evolution did not produce consciousness. Consciousness produced evolution.

So you believe in the God theory, do you also believe that the Earth is only 6000 years old as so many of your religious colleagues do?

> Not human consciousness

Then how did human consciousness come to be? It can't be Evolution because it can only see behavior; Evolution can't see consciousness any better than we can.

> I have said this repeatedly.

You have said repeatedly that everything, absolutely positively everything, is conscious and that in some strange way that you never make clear this is somehow supposed to be different from nothing is conscious. And you have said repeatedly that "everything is conscious" explains something, but you have never said what.

> We can deduce a great deal about the consciousness of things which are similar to ourselves.

There are (possibly) an infinite number of factors that determine you or me, so which factors are important and which are not? You talk about things similar to ourselves, but similar in what way? What is more important, similarity in behavior or similarity in surface appearance? The bigots of the old South believed that the appearance of a person's skin, in particular the number of photons reflected off it, was of far more importance than what the person said or did. They believed this because that's the way light reflected off their own skin. I don't believe they were on the right track.

  John K Clark

 

Craig Weinberg

unread,
Oct 15, 2012, 2:02:39 PM10/15/12
to everyth...@googlegroups.com


On Monday, October 15, 2012 1:02:05 PM UTC-4, John Clark wrote:


On Mon, Oct 15, 2012 at 12:41 PM, Craig Weinberg <whats...@gmail.com> wrote:


You don't know diddly squat about "our consciousness", you only know about your consciousness; assuming of course that you are conscious, if not then you don't even know that.

If that were true, then you don't know diddly squat about what I know.

Not true, I know you don't have a proof of the Goldbach Conjecture. Well OK, I don't know that with absolute certainty, maybe you have a proof but are keeping it secret for some strange reason, but my knowledge is more than diddly squat because I very strongly suspect you have no such proof and I'm probably right. But I do know for certain that you don't have a valid proof that 2+2=5 or a way to directly detect consciousness in any mind other than your own.

Then you are claiming to know about "our consciousness" instead of just your own. If you can do that, why can't I? The difference is that I don't put some artificial constraint on what you can or can't know. I let consciousness be what it actually is, rather than what it needs to be to fit into my inherited worldview.
 

You can't have it both ways. Either it is possible that we know things or it is not.

That is most certainly true, it is possible to know things, it's just not possible to know all things. 

> You can't claim to be omniscient about my ignorance.

It's almost as if you're claiming your ignorance is vast, well I admit I am not omniscient about your ignorance, no doubt you are ignorant about things that I don't know you are ignorant of.

Whatever you can know about what I know, I can also know about what you know.
 

> Let's see how a computer fares under a giant junkyard magnet.

Let's see how you fare in a junkyard car crusher.

translation - "I concede, I have no argument."

Craig
 

  John K Clark

 

meekerdb

unread,
Oct 15, 2012, 2:40:56 PM10/15/12
to everyth...@googlegroups.com
On 10/15/2012 9:38 AM, John Clark wrote:
On Sat, Oct 13, 2012 at 8:10 PM, Craig Weinberg <whats...@gmail.com> wrote:
 
> I think he [Chalmers] goes wrong by assuming a priori that consciousness is functional,

I've asked you this question dozens of times but you have never coherently answered it: If consciousness doesn't do anything then Evolution can't see it, so how and why did Evolution produce it? The fact that you have no answer to this means your ideas are fatally flawed.

I don't see this as a *fatal* flaw.  Evolution, as you've noted, is not a paradigm of efficient design.  Consciousness might just be a side-effect of using some brain language modules as filters for remembering more important events, while forgetting most of them.  This would comport with Julian Jaynes' idea of the origin of consciousness.

Brent

meekerdb

unread,
Oct 15, 2012, 2:42:24 PM10/15/12
to everyth...@googlegroups.com
On 10/15/2012 9:41 AM, Craig Weinberg wrote:
And a computer is exquisitely sensitive to particular voltages and not sensitive at all to other voltages that don't make the threshold.

Let's see how a computer fares under a giant junkyard magnet.

Probably better than you will fare plugged into a 120V outlet.  :-)

Brent

Craig Weinberg

unread,
Oct 15, 2012, 2:48:45 PM10/15/12
to everyth...@googlegroups.com

Let's see who fares better in a swimming pool.

Craig
 

Brent

meekerdb

unread,
Oct 15, 2012, 3:09:29 PM10/15/12
to everyth...@googlegroups.com
I'll accept that as an admission that you've run out of cogent arguments.

Brent

Craig Weinberg

unread,
Oct 15, 2012, 3:42:58 PM10/15/12
to everyth...@googlegroups.com

No, I'm just making the point that human beings have a much more robust and complex relation to physical conditions. Computers reveal their rigidity and lack of sentience in their relatively uniform relation to temperature, chemicals, etc.

Craig


Brent

John Clark

unread,
Oct 16, 2012, 12:13:53 PM10/16/12
to everyth...@googlegroups.com
On Mon, Oct 15, 2012 at 2:02 PM, Craig Weinberg <whats...@gmail.com> wrote:


 I know you don't have a proof of the Goldbach Conjecture. Well OK, I don't know that with absolute certainty, maybe you have a proof but are keeping it secret for some strange reason, but my knowledge is more than diddly squat because I very strongly suspect you have no such proof and I'm probably right. But I do know for certain that you don't have a valid proof that 2+2=5 or a way to directly detect consciousness in any mind other than your own.

Then you are claiming to know about "our consciousness" instead of just your own.

I am claiming that you don't possess a valid proof that 2+2=5 because there is no such proof to possess.

 
>>> Let's see how a computer fares under a giant junkyard magnet.

>> Let's see how you fare in a junkyard car crusher.

> translation - "I concede, I have no argument."

So let's see, "a giant junkyard magnet" is a devastating logical argument but "a junkyard car crusher" is not. Explain to me how that works.

  John K Clark

John Clark

unread,
Oct 16, 2012, 12:37:34 PM10/16/12
to everyth...@googlegroups.com
On Mon, Oct 15, 2012 at 2:40 PM, meekerdb <meek...@verizon.net> wrote:

 >>  If consciousness doesn't do anything then Evolution can't see it, so how and why did Evolution produce it? The fact that you have no answer to this means your ideas are fatally flawed.

> I don't see this as a *fatal* flaw.  Evolution, as you've noted, is not a paradigm of efficient design.  Consciousness might just be a side-effect

But that's exactly what I've been saying for months: unless Darwin was dead wrong, consciousness must be a side effect of intelligence, so an intelligent computer must be a conscious computer. And I don't think Darwin was dead wrong.

  John K Clark

Craig Weinberg

unread,
Oct 16, 2012, 12:56:46 PM10/16/12
to everyth...@googlegroups.com


On Tuesday, October 16, 2012 12:13:55 PM UTC-4, John Clark wrote:


On Mon, Oct 15, 2012 at 2:02 PM, Craig Weinberg <whats...@gmail.com> wrote:


 I know you don't have a proof of the Goldbach Conjecture. Well OK, I don't know that with absolute certainty, maybe you have a proof but are keeping it secret for some strange reason, but my knowledge is more than diddly squat because I very strongly suspect you have no such proof and I'm probably right. But I do know for certain that you don't have a valid proof that 2+2=5 or a way to directly detect consciousness in any mind other than your own.

Then you are claiming to know about "our consciousness" instead of just your own.

I am claiming that you don't possess a valid proof that 2+2=5 because there is no such proof to possess.

Two men and two women live together. The woman has a child. 2+2=5

 
 
>>> Let's see how a computer fares under a giant junkyard magnet.

>> Let's see how you fare in a junkyard car crusher.

> translation - "I concede, I have no argument."

So let's see, "a giant junkyard magnet" is a devastating logical argument but "a junkyard car crusher" is not. Explain to me how that works.

Because talking about how you want to kill me in an argument about computers is pointless ad hominem venting, but talking about the effect of magnetism on computers in an argument about computers is relevant. I get it, my views upset you. You should discuss that with a professional.

Craig
 

  John K Clark

meekerdb

unread,
Oct 16, 2012, 3:16:51 PM10/16/12
to everyth...@googlegroups.com
But it might be a side-effect of the particular way in which evolution implemented human intelligence.  If we created an artificial intelligence that, for example, had a module for filtering and storing information about significant events that was separate from the language/communication module, then that AI might not be conscious in the way people are.  I agree that it would be conscious in *some* way, but different ways of processing and storing information, even though they produce roughly the same intelligent behaviour, might produce qualitatively different consciousness.  In fact I expect that cuttlefish, who are social and communicate by producing color patterns on their body, have a different kind of 'stream of consciousness', and if they evolved to be as intelligent as humans they would still have this qualitative difference in consciousness, somewhat as people with synesthesia do but more so.

Brent

John Mikes

unread,
Oct 16, 2012, 3:55:18 PM10/16/12
to everyth...@googlegroups.com
Bruno:
corn starch is not a fluid (Newtonian or not). It is a solid, and when dissolved in water (or whatever?) it makes a non-Newtonian fluid. My question about its 'live, or not' status is:
does it provide METABOLISM and REPAIR?
I doubt it.
Do not misunderstand me, please: this is not my word about "LIFE"; it pertains to the LIVE STATUS (process) which - according to Robert Rosen's brilliant distinction - shows a reliance upon environmental (material??) support for its subsistence (called metabolism) and a mechanism to repair damages that occur in the process of being alive.
 
Minds with chemistry impediment look differently at things.
 
John M
 
PS: I could not enjoy the video in the URL: I got a warning to close it down because it slows down my browser (to 0). :-)

On Sat, Oct 13, 2012 at 5:10 PM, Craig Weinberg <whats...@gmail.com> wrote:


On Friday, October 12, 2012 10:23:57 AM UTC-4, Bruno Marchal wrote:

On 12 Oct 2012, at 14:50, Craig Weinberg wrote:

> They are certainly cool looking and biomorphic. The question I have  
> is, at what point do they begin to have experiences...or do you  
> think that those blobs have experiences already?
>
> Would it give them more of a human experience if an oscillating  
> smiley-face/frowny-face algorithm were added graphically into the  
> center of each blob?


Here is a  "deterministic" simple phenomenon looking amazingly  
"alive" (non-newtonian fluid):

http://www.youtube.com/watch?v=3zoTKXXNQIU

Is it alive? That question does not make sense for me. Yes with some  
definition, no with other one. Unlike consciousness or intelligence  
"life" is not a definite concept for me. I use usually the definition  
"has a reproductive cycle". But this makes cigarettes and stars alive.  
No problem for me.

Bruno

"The good news is, after this operation you'll be every bit as alive as a cigarette is".

There are some cool videos out there of cymatic animation like that. All that it really tells me is that there are a limited number of morphological themes in the universe, not that those themes are positively linked to any particular private phenomenology. They are producing those patterns with a particular acoustic signal, but we could model it mathematically and see the same pattern on a video screen without any acoustic signal at all. Same thing happens when we model the behaviors of a conscious mind. It looks similar from a distance, but that's all.

Craig

 


http://iridia.ulb.ac.be/~marchal/



--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To view this discussion on the web visit https://groups.google.com/d/msg/everything-list/-/8-pjDX84CC4J.

Bruno Marchal

unread,
Oct 17, 2012, 10:13:37 AM10/17/12
to everyth...@googlegroups.com
Darwin does not need to be wrong. Consciousness' role can be deeper, in the "evolution/selection" of the laws of physics from the coherent dreams (computations from the 1p view) in arithmetic.

Bruno



Bruno Marchal

unread,
Oct 17, 2012, 10:16:29 AM10/17/12
to everyth...@googlegroups.com

On 16 Oct 2012, at 18:56, Craig Weinberg wrote:

Two men and two women live together. The woman has a child. 2+2=5

You mean two men + two women + a baby = five persons. 

You need the arithmetical 2+2=4, and 4+1 = 5, in your "argument".

Bruno





Craig Weinberg

unread,
Oct 17, 2012, 11:04:23 AM10/17/12
to everyth...@googlegroups.com


On Wednesday, October 17, 2012 10:16:52 AM UTC-4, Bruno Marchal wrote:

On 16 Oct 2012, at 18:56, Craig Weinberg wrote:

Two men and two women live together. The woman has a child. 2+2=5

You mean two men + two women + a baby = five persons. 

You need the arithmetical 2+2=4, and 4+1 = 5, in your "argument".

Bruno


I only see that one person plus another person can eventually equal three or more people. It depends when you start counting and how long it takes you to finish.

Craig
 

Bruno Marchal

unread,
Oct 17, 2012, 11:47:22 AM10/17/12
to everyth...@googlegroups.com
On 16 Oct 2012, at 21:55, John Mikes wrote:

Bruno:
corn starch is not a fluid (Newtonian or not). It is a solid, and when dissolved in water (or whatever?) it makes a non-Newtonian fluid. My question about its 'live, or not' status is:
does it provide METABOLISM and REPAIR?
I doubt it.
Do not misunderstand me, please: this is not my word about "LIFE"; it pertains to the LIVE STATUS (process) which - according to Robert Rosen's brilliant distinction - shows a reliance upon environmental (material??) support for its subsistence (called metabolism) and a mechanism to repair damages that occur in the process of being alive.

I can use such a provisional definition of carbon-based life. But I can conceive of other forms of life. And for life, I am large: it includes anything which adds and multiplies, basically.


 
Minds with chemistry impediment look differently at things.

Certainly. I hope you didn't believe I meant that those non-Newtonian things are alive. They just *look* amazingly alive to me, that's all. They are baby zombies (joke).

Bruno

Roger Clough

unread,
Oct 17, 2012, 1:19:49 PM10/17/12
to everything-list
Hi Bruno Marchal

IMHO all life must have some degree of consciousness
or it cannot perceive its environment.


Roger Clough, rcl...@verizon.net
10/17/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Bruno Marchal
Receiver: everything-list
Time: 2012-10-17, 10:13:37
Subject: Re: Continuous Game of Life




On 16 Oct 2012, at 18:37, John Clark wrote:


Bruno Marchal

unread,
Oct 18, 2012, 10:39:33 AM10/18/12
to everyth...@googlegroups.com

On 17 Oct 2012, at 19:19, Roger Clough wrote:

> Hi Bruno Marchal
>
> IMHO all life must have some degree of consciousness
> or it cannot perceive its environment.

Are you sure?

Would you say that plants are conscious? I do think so, but I am
not sure they have self-consciousness.

Self-consciousness accelerates information processing, and might
come from the need for this in self-moving living creatures of
significant mass.

"all life" is a very fuzzy notion.

Bruno

http://iridia.ulb.ac.be/~marchal/



Bruno Marchal

unread,
Oct 19, 2012, 3:29:39 AM10/19/12
to everyth...@googlegroups.com
On 17 Oct 2012, at 17:04, Craig Weinberg wrote:



On Wednesday, October 17, 2012 10:16:52 AM UTC-4, Bruno Marchal wrote:

On 16 Oct 2012, at 18:56, Craig Weinberg wrote:

Two men and two women live together. The woman has a child. 2+2=5

You mean two men + two women + a baby = five persons. 

You need the arithmetical 2+2=4, and 4+1 = 5, in your "argument".

Bruno


I only see that one person plus another person can eventually equal three or more people.

With the operation of sexual reproduction, not by the operation of addition. 




It depends when you start counting and how long it takes you to finish.

It depends on what we are talking about. Person with sex is not numbers with addition.

You are just changing definition, not invalidating a proof (the proof that 2+2=4, in arithmetic).

Bruno
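(Aside: the proof Bruno is pointing to is purely mechanical. In Peano arithmetic, addition is defined by recursion on the successor function, and 2+2=4 falls out by simply unfolding that definition. Here is a minimal sketch of that unfolding as a toy Python encoding; it is an illustration, not anything from the thread.)

```python
# Toy Peano numerals: a natural number is ZERO or the successor of one.
ZERO = ()

def succ(n):
    """Successor: wrap n in one more tuple layer."""
    return (n,)

def add(m, n):
    """Peano addition: m + 0 = m;  m + succ(k) = succ(m + k)."""
    return m if n == ZERO else succ(add(m, n[0]))

def to_int(n):
    """Decode a Peano numeral to a Python int (for display only)."""
    return 0 if n == ZERO else 1 + to_int(n[0])

two = succ(succ(ZERO))
four = add(two, two)
# 2 + 2 = 4 is forced by unfolding the recursive definition:
assert four == succ(succ(succ(succ(ZERO))))
# and Bruno's 4 + 1 = 5:
assert to_int(add(four, succ(ZERO))) == 5
```

Nothing here settles what counting cohabiting people and babies "means"; it only shows that, within the formal system, 2+2=4 is fixed by the definitions of the symbols.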




Craig
 


John Clark

unread,
Oct 20, 2012, 1:01:46 AM10/20/12
to everyth...@googlegroups.com
On Tue, Oct 16, 2012 at 12:56 PM, Craig Weinberg <whats...@gmail.com> wrote:

>> So lets see, "a giant junkyard magnet" is a devastating logical argument but  "a junkyard car crusher" is not. Explain to me how that works.

> Because talking about how you want to kill me in an argument about computers is pointless ad hominem venting, but talking about the effect of magnetism on computers in an argument about computers is relevant

A strong magnetic field will disrupt the operation of a computer and it will disrupt the operation of your brain too, and a junkyard car crusher will disrupt the operation of both as well.

  John K Clark 

John Clark

unread,
Oct 20, 2012, 1:15:42 AM10/20/12
to everyth...@googlegroups.com
On Wed, Oct 17, 2012 at 10:13 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:

> Darwin does not need to be wrong. Consciousness role can be deeper, in the "evolution/selection" of the laws of physics from the coherent dreams (computations from the 1p view) in arithmetic.

I have no idea what that means, not a clue, but I do know that Evolution can't select for something it can't see, and I do know that Evolution can see intelligence because it produces behavior.  Evolution can't see consciousness directly any better than we can, so if it produced it (and it did unless Darwin was dead wrong) then consciousness MUST be a byproduct of something that it can see. 

  John K Clark


Roger Clough

unread,
Oct 20, 2012, 6:56:33 AM10/20/12
to everything-list
Hi Bruno Marchal

Obviously, my statement wasn't very clear.

All living things can sense their environments.
Plants sometimes turn themselves to the light
and know night from day. I don't know
if they have the sensation of light, which is
a clear indication of what is produced in the
mind by consciousness. The degree to which
a plant can do that would be how conscious it is.
I would say that a plant's consciousness
would be more like the "consciousness" we have
when we dream. But that's just a speculation.
 



Roger Clough, rcl...@verizon.net
10/20/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Bruno Marchal
Receiver: everything-list
Time: 2012-10-18, 10:39:33

http://iridia.ulb.ac.be/~marchal/




Bruno Marchal

unread,
Oct 20, 2012, 11:17:30 AM10/20/12
to everyth...@googlegroups.com
On 20 Oct 2012, at 07:15, John Clark wrote:

On Wed, Oct 17, 2012 at 10:13 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:

> Darwin does not need to be wrong. Consciousness role can be deeper, in the "evolution/selection" of the laws of physics from the coherent dreams (computations from the 1p view) in arithmetic.

I have no idea what that means, not a clue,

Probably for the same reason that you stop at step 3 in the UD Argument.

You assume a physical reality, and you assume that our consciousness is some phenomenon related exclusively to some construct (brain, bodies) in that physical reality.

But once you grasp the first-person indeterminacy, and take into account its many invariance features (the first person can't immediately distinguish "real", "virtual", and "arithmetical", and can't be aware of the delays of reconstitution), you can see that comp makes the existence of a physical universe a form of vague "wishful thinking", as your future, from your first-person point of view, will depend on all the computations going through your actual current relative state(s).

Comp generalizes Everett (on QM) to arithmetic.

No doubt we share deep linear computations. Everett saves comp from solipsism. But QM has to be retrieved from the statistics of number dreams to confirm this.

Advantage? The subtlety of arithmetical self-reference makes it possible to distinguish many sorts of points of view, and suggests an explanation for the difference between the qualia and the quanta.




but I do know that Evolution can't select for something it can't see,

OK.



and I do know that Evolution can see intelligence because it produces behavior. 

OK.



Evolution can't see consciousness directly any better than we can,

Plausible.




so if it produced it

No. With comp, consciousness was there before. It just gets lost in relatively coherent sheaves of computational histories.
We share dreams.   (a dream is a computation to which a first person is attributable)



(and it did unless Darwin was dead wrong)

Darwin explains the evolution of species, in an Aristotelian framework. 

Comp refutes the Aristotelian framework and saves the main part of Darwin; indeed, it generalizes it to a realm where the laws of physics themselves arise by a process of arithmetical self-selection.




then consciousness MUST be a byproduct of something that it can see. 

On the contrary, if you say "yes" to the doctor by betting on comp, "consciously".

I think anybody can see that once he/she/it takes comp seriously and stays coldly rationalist on the subject.

I don't think it is so much more alluring than Everett QM. 

Bruno





  John K Clark




Craig Weinberg

unread,
Oct 20, 2012, 1:18:17 PM10/20/12
to everyth...@googlegroups.com


On Friday, October 19, 2012 3:29:39 AM UTC-4, Bruno Marchal wrote:

On 17 Oct 2012, at 17:04, Craig Weinberg wrote:



On Wednesday, October 17, 2012 10:16:52 AM UTC-4, Bruno Marchal wrote:

On 16 Oct 2012, at 18:56, Craig Weinberg wrote:

Two men and two women live together. The woman has a child. 2+2=5

You mean two men + two women + a baby = five persons. 

You need the arithmetical 2+2=4, and 4+1 = 5, in your "argument".

Bruno


I only see that one person plus another person can eventually equal three or more people.

With the operation of sexual reproduction, not by the operation of addition. 

Only if you consider the 2+2=5 to be a complex special case and 2+2=4 to be a simple general rule. It could just as easily be flipped. I can say 2+2=4 by the operation of reflexive neurology, and 2+2=5 is an operation of multiplication. It depends on what level of description you privilege by over-signifying and the consequence that has on the other levels which are under-signified. To me, the Bruno view is near-sighted when it comes to physics (only sees numbers, substance is disqualified) and far-sighted when it comes to numbers (does not question the autonomy of numbers). What is it that can tell one number from another? What knows that + is different from * and how? Why doesn't arithmetic truth need a meta-arithmetic machine to allow it to function (to generate the ontology of 'function' in the first place)?

It's all sense. It has to be sense.





It depends when you start counting and how long it takes you to finish.

It depends on what we are talking about. Person with sex is not numbers with addition.

You are just changing definition, not invalidating a proof (the proof that 2+2=4, in arithmetic).

I'm not trying to invalidate the proof within one context of sense, I'm pointing out that it isn't that simple. There are other contexts of sense which reduce differently.

Craig

 

Craig Weinberg

unread,
Oct 20, 2012, 1:21:10 PM10/20/12
to everyth...@googlegroups.com

I get your point, but at the same time, we aren't outfitting Apache helicopters with giant magnets to immobilize armies of people.

Craig
 

  John K Clark 

John Clark

unread,
Oct 20, 2012, 1:29:30 PM10/20/12
to everyth...@googlegroups.com
On Sat, Oct 20, 2012  Bruno Marchal <mar...@ulb.ac.be> wrote:

>>  I have no idea what that means, not a clue
 
> Probably for the same reason that you stop at step 3 in the UD Argument.

Probably. I remember I stopped reading after your proof of the existence of a new type of indeterminacy never seen before because the proof was in error, so there was no point in reading about things built on top of that; but I don't remember if that was step 3 or not. 

>You assume a physical reality,

I assume that if physical reality doesn't exist then either the words "physical" or "reality" or "exists" are meaningless, and I don't think any of those words are.
 
> and you assume that our consciousness is some phenomenon related exclusively to some construct (brain, bodies)

If you change your conscious state then your brain changes, and if I make a change in your brain then your conscious state changes too, so I'd say that it's a good assumption that consciousness is interlinked with a physical object, in fact it's a downright superb assumption.

>>  so if it [Evolution] produced it [consciousness]
 
>No. With comp, consciousness was there before.

Well I don't know about you but I don't think my consciousness was there before Evolution figured out how to make brains, I believe this because I can't seem to remember events that were going on during the Precambrian. I've always been a little hazy about what exactly "comp" meant but I had the general feeling that I sorta agreed with it, but apparently not. 

  John K Clark


Stathis Papaioannou

unread,
Oct 20, 2012, 1:48:06 PM10/20/12
to everyth...@googlegroups.com


On Oct 15, 2012, at 4:10 AM, Craig Weinberg <whats...@gmail.com> wrote:


>> But since you misunderstand the first assumption you misunderstand the
>> whole argument.
>
>
> Nope. You misunderstand my argument completely.

Perhaps I do, but you specifically misunderstand that the argument
depends on the assumption that computers don't have consciousness.

No, I do understand that.

Good.

You
also misunderstand (or pretend to) the idea that a brain or computer
does not have to know the entire future history of the universe and
how it will respond to every situation it may encounter in order to
function.

Do you have to know the entire history of how you learned English to read these words? It depends what you mean by know. You don't have to consciously recall learning English, but without that experience, you wouldn't be able to read this. If you had a module implanted in your brain which would allow you to read Chinese, it might give you an acceptable capacity to translate Chinese phonemes and characters, but it would be a generic understanding, not one rooted in decades of human interaction. Do you see the difference? Do you see how words are not only functional data but also names which carry personal significance?

The atoms in my brain don't have to know how to read Chinese. They only need to know how to be carbon, nitrogen, oxygen etc. atoms. The complex behaviour which is reading Chinese comes from the interaction of billions of these atoms doing their simple thing. If the atoms in my brain were put into a Chinese-reading configuration, either through a lot of work learning the language or through direct manipulation, then I would be able to understand Chinese.

What are some equivalently simple, uncontroversial things in what you say that I misunderstand?

You think that I don't get that Fading Qualia is a story about a world in which the brain cannot be substituted, but I do. Chalmers is saying 'OK, let's say that's true - how would that be? Would your blue be less and less blue? How could you act normally if you...blah, blah, blah'. I get that. It's crystal clear.

What you don't understand is that this carries a priori assumptions about the nature of consciousness, that it is an end result of a distributed process which is monolithic. I am saying NO, THAT IS NOT HOW IT IS.

Imagine that we had one eye in the front of our heads and one ear in the back, and that the whole of human history has been a debate over whether walking forward means that objects are moving toward you or whether it means changes in the relative volume of sounds.

Chalmers is saying, 'if we gradually replaced the eye with parts of the ear, how would our sight gradually change to sound, or would it suddenly switch over?' Since both options seem absurd, he concludes that anything that is in the front of the head is an eye and everything on the back is an ear, or that everything has both ear and eye potentials.

The MR model is to understand that these two views are not merely substance dual or property dual, they are involuted juxtapositions of each other. The difference between front and back is not merely irreconcilable, it is mutually exclusive by definition in experience. I am not throwing up my hands and saying 'ears can't be eyes because eyes are special', I am positively asserting that there is a way of modeling the eye-ear relation based on an understanding of what time, space, matter, energy, entropy, significance, perception, and participation actually are and how they relate to each other.

The idea that the newly discovered ear-based models out of the back of our head are eventually going to explain the eye view out of the front is not scientific, it's an ideological faith that I understand to be critically flawed. The evidence is all around us, we have only to interpret it that way rather than to keep updating our description of reality to match the narrowness of our fundamental theory. The theory only works for the back view of the world...it says *nothing* useful about the front view. To the True Disbeliever, this is a sign that we need to double down on the back end view because it's the best chance we have. The thinking is that any other position implies that we throw out the back end view entirely and go back to the dark ages of front end fanaticism. I am not suggesting a compromise, I propose a complete overhaul in which we start not from the front and move back or back and move front, but start from the split and see how it can be understood as a double knot - a fold of folds.

I'm sorry, but this whole passage is a non sequitur as far as the fading qualia thought experiment goes. You have to explain what you think would happen if part of your brain were replaced with a functional equivalent. A functional equivalent would stimulate the remaining neurons the same as the part that is replaced. The original paper says this is a computer chip, but this is not necessary to make the point: we could just say that it is any device other than the normal biological neurons. If consciousness is substrate-dependent (as you claim) then the device could do its job of stimulating the neurons normally while lacking or differing in consciousness. Since it stimulates the neurons normally you would behave normally. If you didn't then it would be a miracle, since your muscles would have to contract normally. Do you at least see this point, or do you think that your muscles would do something different?


-- Stathis Papaioannou

Craig Weinberg

unread,
Oct 20, 2012, 2:51:19 PM10/20/12
to everyth...@googlegroups.com


On Saturday, October 20, 2012 1:47:28 PM UTC-4, stathisp wrote:


On Oct 15, 2012, at 4:10 AM, Craig Weinberg <whats...@gmail.com> wrote:


>> But since you misunderstand the first assumption you misunderstand the
>> whole argument.
>
>
> Nope. You misunderstand my argument completely.

Perhaps I do, but you specifically misunderstand that the argument
depends on the assumption that computers don't have consciousness.

No, I do understand that.

Good.

You
also misunderstand (or pretend to) the idea that a brain or computer
does not have to know the entire future history of the universe and
how it will respond to every situation it may encounter in order to
function.

Do you have to know the entire history of how you learned English to read these words? It depends what you mean by know. You don't have to consciously recall learning English, but without that experience, you wouldn't be able to read this. If you had a module implanted in your brain which would allow you to read Chinese, it might give you an acceptable capacity to translate Chinese phonemes and characters, but it would be a generic understanding, not one rooted in decades of human interaction. Do you see the difference? Do you see how words are not only functional data but also names which carry personal significance?

The atoms in my brain don't have to know how to read Chinese. They only need to know how to be carbon, nitrogen, oxygen etc. atoms. The complex behaviour which is reading Chinese comes from the interaction of billions of these atoms doing their simple thing.

I don't think that is true. The other way around makes just as much sense, if not more: Reading Chinese is a simple behavior which drives the behavior of billions of atoms to do a complex interaction. To me, it has to be both bottom-up and top-down. It seems completely arbitrary prejudice to presume one over the other just because we think that we understand the bottom-up so well.

Once you can see how it is the case that it must be both bottom-up and top-down at the same time, the next step is to see that there is no possibility for it to be a cause-effect relationship, but rather a dual aspect ontological relation. Nothing is translating the functions of neurons into a Cartesian theater of experience - there is nowhere to put it in the tissue of the brain and there is no evidence of a translation from neural protocols to sensorimotive protocols - they are clearly the same thing.
 
If the atoms in my brain were put into a Chinese-reading configuration, either through a lot of work learning the language or through direct manipulation, then I would be able to understand Chinese.

It's understandable to assume that, but no, I don't think it's like that. You can't transplant a language into a brain instantaneously because there is no personal history of association. Your understanding of language is not a lookup table in space, it is made out of you. It's like if you walked around with Google translator in your brain. You could enter words and phrases and turn them into your language, but you would never know the language first hand. The knowledge would be impersonal - accessible, but not woven into your proprietary sense.
 

What are some equivalently simple, uncontroversial things in
what you say that i misunderstand?

You think that I don't get that Fading Qualia is a story about a world in which the brain cannot be substituted, but I do. Chalmers is saying 'OK, let's say that's true - how would that be? Would your blue be less and less blue? How could you act normally if you...blah, blah, blah'. I get that. It's crystal clear.

What you don't understand is that this carries a priori assumptions about the nature of consciousness, that it is an end result of a distributed process which is monolithic. I am saying NO, THAT IS NOT HOW IT IS.

Imagine that we had one eye in the front of our heads and one ear in the back, and that the whole of human history has been a debate over whether walking forward means that objects are moving toward you or whether it means changes in the relative volume of sounds.

Chalmers is saying, 'if we gradually replaced the eye with parts of the ear, how would our sight gradually change to sound, or would it suddenly switch over?' Since both options seem absurd, he concludes that anything in the front of the head is an eye and everything on the back is an ear, or that everything has both ear and eye potentials.

The MR model is to understand that these two views are not merely substance dual or property dual, they are involuted juxtapositions of each other. The difference between front and back is not merely irreconcilable, it is mutually exclusive by definition in experience. I am not throwing up my hands and saying 'ears can't be eyes because eyes are special', I am positively asserting that there is a way of modeling the eye-ear relation based on an understanding of what time, space, matter, energy, entropy, significance, perception, and participation actually are and how they relate to each other.

The idea that the newly discovered ear-based models out of the back of our head are eventually going to explain the eye view out of the front is not scientific, it's an ideological faith that I understand to be critically flawed. The evidence is all around us, we have only to interpret it that way rather than to keep updating our description of reality to match the narrowness of our fundamental theory. The theory only works for the back view of the world...it says *nothing* useful about the front view. To the True Disbeliever, this is a sign that we need to double down on the back end view because it's the best chance we have. The thinking is that any other position implies that we throw out the back end view entirely and go back to the dark ages of front end fanaticism. I am not suggesting a compromise, I propose a complete overhaul in which we start not from the front and move back or back and move front, but start from the split and see how it can be understood as a double knot - a fold of folds.

I'm sorry, but this whole passage is a non sequitur as far as the fading qualia thought experiment goes. You have to explain what you think would happen if part of your brain were replaced with a functional equivalent.

There is no functional equivalent. That's what I am saying. Functional equivalence when it comes to a person is a non-sequitur. Not only is every person unique, they are an expression of uniqueness itself. They define uniqueness in a never-before-experienced way. This is a completely new way of understanding consciousness and signal. Not as mechanism, but as animism-mechanism.

 
A functional equivalent would stimulate the remaining neurons the same as the part that is replaced.

No such thing. Does any imitation function identically to an original?
 
The original paper says this is a computer chip, but this is not necessary to make the point: we could just say that it is any device other than the normal biological neurons. If consciousness is substrate-dependent (as you claim) then the device could do its job of stimulating the neurons normally while lacking or differing in consciousness. Since it stimulates the neurons normally you would behave normally. If you didn't then it would be a miracle, since your muscles would have to contract normally. Do you at least see this point, or do you think that your muscles would do something different?

I see the point completely. That's the problem: you keep trying to explain to me what is obvious, while I am trying to explain to you something much more subtle and sophisticated. I can replace neurons which control my muscles because muscles are among the most distant and replaceable parts of 'me'. These nerves are outbound efferent nerves and the target muscle cells are for the most part willing servants. The same goes for amputating my arm. I can replace it in theory. What I am saying though is that amputating my head is not even theoretically possible. Wherever my head is, that is where I have to be. If I replace my brain with other parts, the more parts there are the less of me there is left.

The brain isn't like a computer though. You can't just pull out something and then put it back in if it doesn't work. In the brain, as soon as you screw it up, you get coma, death, dementia, stroke, etc. It's part of a living creature made of smaller living creatures. It doesn't matter how closely you think your substitute brain acts like my brain, I am never going to be found in your substitute brain, and the substitute brain will never even get close to working properly. Computers do not work very well. Every time I turn on my stupid phone there are like 25 updates, and I hardly do anything with it. Can you imagine how unreliable a network the size of a synthetic brain would be? How easy it would be to halt the thalamus program and kill you? It's wildly overconfident and factually misguided to think of the self and the brain in these terms. I see it like 19th century Jules Verne sci-fi now. It's just silly, and every week there are more studies which suggest that our neuroscientific models continue to be more and more inadequate. They don't add up.

Craig
 


-- Stathis Papaioannou

John Mikes

unread,
Oct 20, 2012, 5:16:08 PM10/20/12
to everyth...@googlegroups.com
Bruno,
especially in my identification as "responding to relations".
Now the "Self"? IT certainly refers to a more sophisticated level of thinking, more so than the average (animalic?)  mind. - OR: we have no idea. What WE call 'Self-Ccness' is definitely a human attribute because WE identify it that way. I never talked to a cauliflower to clarify whether she feels like having a self? (In cauliflowerese, of course).
JM


Stephen P. King

unread,
Oct 20, 2012, 5:29:38 PM10/20/12
to everyth...@googlegroups.com
On 10/20/2012 5:16 PM, John Mikes wrote:
Bruno,
especially in my identification as "responding to relations".
Now the "Self"? IT certainly refers to a more sophisticated level of thinking, more so than the average (animalic?)  mind. - OR: we have no idea. What WE call 'Self-Ccness' is definitely a human attribute because WE identify it that way. I never talked to a cauliflower to clarify whether she feels like having a self? (In cauliflowerese, of course).
JM

    If we were cauliflowers, we would have no concept of what it would be like to be "human" or, maybe, that humans even exist!
-- 
Onward!

Stephen

Stathis Papaioannou

unread,
Oct 21, 2012, 4:05:44 AM10/21/12
to everyth...@googlegroups.com
On Sun, Oct 21, 2012 at 5:51 AM, Craig Weinberg <whats...@gmail.com> wrote:

>> The atoms in my brain don't have to know how to read Chinese. They only
>> need to know how to be carbon, nitrogen, oxygen etc. atoms. The complex
>> behaviour which is reading Chinese comes from the interaction of billions of
>> these atoms doing their simple thing.
>
>
> I don't think that is true. The other way around makes just as much sense of
> not more: Reading Chinese is a simple behavior which drives the behavior of
> billions of atoms to do a complex interaction. To me, it has to be both
> bottom-up and top-down. It seems completely arbitrary prejudice to presume
> one over the other just because we think that we understand the bottom-up so
> well.
>
> Once you can see how it is the case that it must be both bottom-up and
> top-down at the same time, the next step is to see that there is no
> possibility for it to be a cause-effect relationship, but rather a dual
> aspect ontological relation. Nothing is translating the functions of neurons
> into a Cartesian theater of experience - there is nowhere to put it in the
> tissue of the brain and there is no evidence of a translation from neural
> protocols to sensorimotive protocols - they are clearly the same thing.

If there is a top-down effect of the mind on the atoms then we
would expect some scientific evidence of this. Evidence would
consist of, for example, neurons firing when measurements of
transmembrane potentials, ion concentrations etc. suggest that they
should not. You claim that such anomalous behaviour of neurons and
other cells due to consciousness is widespread, yet it has never been
experimentally observed. Why?

>> If the atoms in my brain were put into a Chinese-reading configuration,
>> either through a lot of work learning the language or through direct
>> manipulation, then I would be able to understand Chinese.
>
>
> It's understandable to assume that, but no I don't think it's like that. You
> can't transplant a language into a brain instantaneously because there is no
> personal history of association. Your understanding of language is not a
> lookup table in space, it is made out of you. It's like if you walked around
> with Google translator in your brain. You could enter words and phrases and
> turn them into you language, but you would never know the language first
> hand. The knowledge would be impersonal - accessible, but not woven into
> your proprietary sense.

I don't mean putting an extra module into the brain, I mean putting
the brain directly into the same configuration it is put into by
learning the language in the normal way.

>> I'm sorry, but this whole passage is a non sequitur as far as the fading
>> qualia thought experiment goes. You have to explain what you think would
>> happen if part of your brain were replaced with a functional equivalent.
>
>
> There is no functional equivalent. That's what I am saying. Functional
> equivalence when it comes to a person is a non-sequitur. Not only is every
> person unique, they are an expression of uniqueness itself. They define
> uniqueness in a never-before-experienced way. This is a completely new way
> of understanding consciousness and signal. Not as mechanism, but as
> animism-mechanism.
>
>
>>
>> A functional equivalent would stimulate the remaining neurons the same as
>> the part that is replaced.
>
>
> No such thing. Does any imitation function identically to an original?

In a thought experiment we can say that the imitation stimulates the
surrounding neurons in the same way as the original. We can even say
that it does this miraculously. Would such a device *necessarily*
replicate the consciousness along with the neural impulses, or could
the two be separated?
As I said, technical problems with computers are not relevant to the
argument. The implant is just a device that has the correct timing of
neural impulses. Would it necessarily preserve consciousness?


--
Stathis Papaioannou

Evgenii Rudnyi

unread,
Oct 21, 2012, 6:24:41 AM10/21/12
to everyth...@googlegroups.com
On 21.10.2012 10:05 Stathis Papaioannou said the following:
> On Sun, Oct 21, 2012 at 5:51 AM, Craig Weinberg
> <whats...@gmail.com> wrote:
>

...

>> I don't think that is true. The other way around makes just as much
>> sense of not more: Reading Chinese is a simple behavior which
>> drives the behavior of billions of atoms to do a complex
>> interaction. To me, it has to be both bottom-up and top-down. It
>> seems completely arbitrary prejudice to presume one over the other
>> just because we think that we understand the bottom-up so well.
>>
>> Once you can see how it is the case that it must be both bottom-up
>> and top-down at the same time, the next step is to see that there
>> is no possibility for it to be a cause-effect relationship, but
>> rather a dual aspect ontological relation. Nothing is translating
>> the functions of neurons into a Cartesian theater of experience -
>> there is nowhere to put it in the tissue of the brain and there is
>> no evidence of a translation from neural protocols to sensorimotive
>> protocols - they are clearly the same thing.
>
> If there is a top-down effect of the mind on the atoms then there we
> would expect some scientific evidence of this. Evidence would

Scientific evidence, in my view, is the existence of science. Do you
mean that, for example, scientific books have assembled themselves from
atoms according to M-theory?

Evgenii

Bruno Marchal

unread,
Oct 21, 2012, 9:07:18 AM10/21/12
to everyth...@googlegroups.com
On 20 Oct 2012, at 19:18, Craig Weinberg wrote:



On Friday, October 19, 2012 3:29:39 AM UTC-4, Bruno Marchal wrote:

On 17 Oct 2012, at 17:04, Craig Weinberg wrote:



On Wednesday, October 17, 2012 10:16:52 AM UTC-4, Bruno Marchal wrote:

On 16 Oct 2012, at 18:56, Craig Weinberg wrote:

Two men and two women live together. The woman has a child. 2+2=5

You mean two men + two women + a baby = five persons. 

You need the arithmetical 2+2=4, and 4+1 = 5, in your "argument".

Bruno


I only see that one person plus another person can eventually equal three or more people.

With the operation of sexual reproduction, not by the operation of addition. 

Only if you consider the 2+2=5 to be a complex special case and 2+2=4 to be a simple general rule.

2+2 = 5 is not a special case of 2+2=4.


It could just as easily be flipped.

Errors are possible for complex subjects.



I can say 2+2=4 by the operation of reflexive neurology, and 2+2=5 is an operation of multiplication. It depends on what level of description you privilege by over-signifying and the consequence that has on the other levels which are under-signified. To me, the Bruno view is near-sighted when it comes to physics (only sees numbers, substance is disqualified)

It means that you think that there is a flaw in the UDA, as the non-materiality of physics is a consequence of the comp hypothesis. There is no choice in the matter (pun included).



and far-sighted when it comes to numbers (does not question the autonomy of numbers).

Because computer science explains in detail how numbers can be autonomous, or less simplified: how arithmetical realization can generate the beliefs in bodies, relative autonomy, etc. You seem to want to ignore the computer science behind the comp hypothesis.



What is it that can tell one number from another?

It is not simple to prove, but the laws of addition and multiplication are enough. I am not sanguine on numbers; I can take Fortran programs in their place, with the same explanation for the origin of the consciousness/realities couplings.




What knows that + is different from * and how?


Because we know the definitions, and practice first-order logical language. Everything I say is a theorem in the theory:

x + 0 = x
x + s(y) = s(x + y)

x * 0 = 0
x * s(y) = x*y + x
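These four equations are not only axioms to reason about; they are a complete recursive program for addition and multiplication. A minimal sketch in Python (the `s`, `add`, `mul`, and `to_int` names are illustrative choices, not from the thread):

```python
# Peano-style naturals: zero is the token Z, s(y) is the pair ("S", y).
Z = "Z"

def s(y):
    """Successor: s(y)."""
    return ("S", y)

def add(x, y):
    # x + 0 = x        x + s(y) = s(x + y)
    return x if y == Z else s(add(x, y[1]))

def mul(x, y):
    # x * 0 = 0        x * s(y) = x*y + x
    return Z if y == Z else add(mul(x, y[1]), x)

def to_int(n):
    """Decode a Peano numeral back to a Python int for display."""
    k = 0
    while n != Z:
        n, k = n[1], k + 1
    return k

two = s(s(Z))
print(to_int(add(two, two)))  # prints 4 - computed purely from the four laws
```

Nothing beyond the two recursion equations per operation is needed; the evaluator's unfolding of the definitions does all the work.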






Why doesn't arithmetic truth need a meta-arithmetic machine to allow it to function (to generate the ontology of 'function' in the first place)?

It does not. That's the whole amazing point of theoretical computer science. The meta-arithmetic is already a consequence of the four laws above.

Bruno


It's all sense. It has to be sense.





It depends when you start counting and how long it takes you to finish.

It depends on what we are talking about. Persons with sex are not numbers with addition.

You are just changing definition, not invalidating a proof (the proof that 2+2=4, in arithmetic).

I'm not trying to invalidate the proof within one context of sense, I'm pointing out that it isn't that simple. There are other contexts of sense which reduce differently.

Craig

 

Bruno




Craig
 


Bruno Marchal

unread,
Oct 21, 2012, 9:17:59 AM10/21/12
to everyth...@googlegroups.com
On 20 Oct 2012, at 19:29, John Clark wrote:

On Sat, Oct 20, 2012  Bruno Marchal <mar...@ulb.ac.be> wrote:

>>  I have no idea what that means, not a clue
 
> Probably for the same reason that you stop at step 3 in the UD Argument.

Probably. I remember I stopped reading after your proof of the existence of a new type of indeterminacy never seen before because the proof was in error, so there was no point in reading about things built on top of that; but I don't remember if that was step 3 or not. 

From your "error" you have been obliged to say that in the WM duplication, you will live both at W and at M, yet you agree that both copies will feel to live in only one place, so the error you have seen was due to a confusion between first person and third person. We were many to tell you this, and it seems you are stuck in that confusion.

By the way, it is irrational to stop in the middle of a proof. Obviously, reading the sequel can help you to see the confusion you are making.




>You assume a physical reality,

I assume that if physical reality doesn't exist then either the words "physical" or "reality" or "exists" are meaningless, and I don't think any of those words are.

By assuming a physical reality at the start, you make it into a primitive ontology. But the physical reality can emerge or appear without a physical reality at the start, like in the numbers' dreams.



 
> and you assume that our consciousness is some phenomenon related exclusively to some construct (brain, bodies)

If you change your conscious state then your brain changes, and if I make a change in your brain then your conscious state changes too, so I'd say that it's a good assumption that consciousness is interlinked with a physical object, in fact it's a downright superb assumption.

But this is easily shown to be false when we assume comp. If your state appears in a far away galaxy, what happens far away might change the outcome of an experience you decided to do "here". You believe in an identity thesis which can't work, unless you singularize both the mind and the brain matter with special sorts of infinities.




>>  so if it [Evolution] produced it [consciousness]
 
>No. With comp, consciousness was there before.

Well I don't know about you but I don't think my consciousness was there before Evolution figured out how to make brains, I believe this because I can't seem to remember events that were going on during the Precambrian. I've always been a little hazy about what exactly "comp" meant but I had the general feeling that I sorta agreed with it, but apparently not. 

You keep defending comp, in your dialog with Craig, but you don't follow its logical consequences, I guess by not wanting to take seriously the first person and third person distinction, which is the key of the UD argument.

You can attach consciousness to the owner of a brain, but the owner itself must attach his consciousness to all states existing in arithmetic (or in a physical universe if that exists) and realizing that brain state.

Bruno




Bruno Marchal

unread,
Oct 21, 2012, 9:56:39 AM10/21/12
to everyth...@googlegroups.com
Hi John,

On 20 Oct 2012, at 23:16, John Mikes wrote:

Bruno,
especially in my identification as "responding to relations".
Now the "Self"? IT certainly refers to a more sophisticated level of thinking, more so than the average (animalic?)  mind. - OR: we have no idea. What WE call 'Self-Ccness' is definitely a human attribute because WE identify it that way. I never talked to a cauliflower to clarify whether she feels like having a self? (In cauliflowerese, of course).

My feeling was first that all homeotherm animals have self-consciousness, as they have the ability to dream, easily related to the ability to build a representation of oneself. Then I have enlarged the spectrum up to some spiders and the octopi, just by reading a lot about them and watching videos.

But this is just a personal appreciation. For the plants, let us say I know nothing, although I suspect possible consciousness, related to different scalings.

The following theory seems to have consciousness, for different reasons (the main one being that it is Turing universal):

x + 0 = x
x + s(y) = s(x + y)

x * 0 = 0
x * s(y) = x*y + x

But once you add the very powerful induction axioms, which say that if a property F is true for zero and is preserved by the successor operation, then it is true for all natural numbers - that is, the infinity of axioms:

(F(0) & Ax(F(x) -> F(s(x)))) -> AxF(x),

with F(x) being any formula in the arithmetical language (and thus defined with 0, s, +, *),
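As a sketch of how much the schema adds (written here in Lean 4, where the schema is available as the `induction` tactic; the example is mine, not from the thread): even the simple fact 0 + x = x does not follow from the four recursion equations alone, but it follows immediately by induction:

```lean
-- Induction schema instantiated with F(x) := 0 + x = x.
example (x : Nat) : 0 + x = x := by
  induction x with
  | zero => rfl                         -- F(0): 0 + 0 = 0, by the law x + 0 = x
  | succ n ih => rw [Nat.add_succ, ih]  -- F(n) -> F(s(n)), by x + s(y) = s(x + y)
```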

Then you get Löbianity, and this makes it as much conscious as you and me. Indeed, such machines get a rich theology about which they can develop maximal awareness, and even test it by comparing the physics retrievable from that theology with the observation and inference on their most probable neighborhoods.

Löbianity is the threshold at which any new axiom added will create and enlarge the machine's ignorance. It is the ultimate modesty threshold.


Bruno





Craig Weinberg

unread,
Oct 21, 2012, 10:48:14 AM10/21/12
to everyth...@googlegroups.com


On Sunday, October 21, 2012 4:06:16 AM UTC-4, stathisp wrote:
On Sun, Oct 21, 2012 at 5:51 AM, Craig Weinberg <whats...@gmail.com> wrote:

>> The atoms in my brain don't have to know how to read Chinese. They only
>> need to know how to be carbon, nitrogen, oxygen etc. atoms. The complex
>> behaviour which is reading Chinese comes from the interaction of billions of
>> these atoms doing their simple thing.
>
>
> I don't think that is true. The other way around makes just as much sense of
> not more: Reading Chinese is a simple behavior which drives the behavior of
> billions of atoms to do a complex interaction. To me, it has to be both
> bottom-up and top-down. It seems completely arbitrary prejudice to presume
> one over the other just because we think that we understand the bottom-up so
> well.
>
> Once you can see how it is the case that it must be both bottom-up and
> top-down at the same time, the next step is to see that there is no
> possibility for it to be a cause-effect relationship, but rather a dual
> aspect ontological relation. Nothing is translating the functions of neurons
> into a Cartesian theater of experience - there is nowhere to put it in the
> tissue of the brain and there is no evidence of a translation from neural
> protocols to sensorimotive protocols - they are clearly the same thing.

If there is a top-down effect of the mind on the atoms then we
would expect some scientific evidence of this.

These words are a scientific evidence of this. The atoms of my brain are being manipulated from the top down. I am directly projecting what I want to say through my mind in such a way that the atoms of my brain facilitate changes in the tissues of my body. Fingers move. Keys click.

 
Evidence would
consist of, for example, neurons firing when measurements of
transmembrane potentials, ion concentrations etc. suggest that they
should not.

Do not neurons fire when I decide to type?

What you are expecting would be nothing but another homunculus. If there were some special sauce oozing out of your neurons which looked like...what? pictures of me moving my fingers? How would that explain how I am inside those pictures? The problem is that you are committed to the realism of cells and neurons over thoughts and feelings - even when we understand that our ideas of neurons are themselves only thoughts and feelings. This isn't a minor glitch, it is The Grand Canyon.

What has to be done is to realize that thoughts and feelings cannot be made out of forms and functions, but rather forms and functions are what thoughts and feelings look like from an exterior, impersonal perspective. The thoughts and feelings are the full-spectrum phenomenon, the forms and functions a narrow band of that spectrum. The narrowness of that band is what maximizes the universality of it. Physics is looking at a slice of experience across all phenomena, effectively amputating all of the meaning and perceptual inertia which has accumulated orthogonally to that slice. This is the looong way around when it comes to consciousness, as consciousness is all about the longitudinal history of experience, not the spatial-exterior mechanics of the moment.
 
You claim that such anomalous behaviour of neurons and
other cells due to consciousness is widespread, yet it has never been
experimentally observed. Why?

Nobody except you and John Clark is suggesting any anomalous behavior. This is your blind spot. I don't know if you can see beyond it. I am not optimistic. If there were any anomalous behavior of neurons, it would STILL require another meta-level of anomalous behaviors to explain it. Whatever level of description you choose for human consciousness - the brain, the body, the extended body, CNS, neurons, molecules, atoms, quanta... it DOESN'T MATTER AT ALL to the hard problem. There is still NO WAY for us to be inside of those descriptions, and even if there were, there is no conceivable purpose for 'our' being there in the first place. This isn't a cause for despair or giving up, it is a triumph of insight. It is to see that the world is round if you are far away from it, but flat if you are on the surface. You keep trying to say that if the world were round you would see anomalous dips and valleys where the Earth begins to curve. You are not getting it. Reality is exactly what it seems to be, and it is many other things as well. Just because our understanding brings us sophisticated views of what we are from the outside in does not in any way validate the supremacy of the realism which we rely on from the inside out to even make sense of science.
 

>> If the atoms in my brain were put into a Chinese-reading configuration,
>> either through a lot of work learning the language or through direct
>> manipulation, then I would be able to understand Chinese.
>
>
> It's understandable to assume that, but no I don't think it's like that. You
> can't transplant a language into a brain instantaneously because there is no
> personal history of association. Your understanding of language is not a
> lookup table in space, it is made out of you. It's like if you walked around
> with Google translator in your brain. You could enter words and phrases and
> turn them into your language, but you would never know the language first
> hand. The knowledge would be impersonal - accessible, but not woven into
> your proprietary sense.

I don't mean putting an extra module into the brain, I mean putting
the brain directly into the same configuration it is put into by
learning the language in the normal way.

That can't be done. It's like saying you will put New York City directly in the same configuration as Shanghai. It's meaningless. Even if you could move the population of Shanghai to New York or demolish New York and rebuild it in the shape of Shanghai, it wouldn't matter because consciousness develops through time. It is made of significance which accumulates through sense experience - *not just 'data'*.
 

>> I'm sorry, but this whole passage is a non sequitur as far as the fading
>> qualia thought experiment goes. You have to explain what you think would
>> happen if part of your brain were replaced with a functional equivalent.
>
>
> There is no functional equivalent. That's what I am saying. Functional
> equivalence when it comes to a person is a non-sequitur. Not only is every
> person unique, they are an expression of uniqueness itself. They define
> uniqueness in a never-before-experienced way. This is a completely new way
> of understanding consciousness and signal. Not as mechanism, but as
> animism-mechanism.
>
>
>>
>> A functional equivalent would stimulate the remaining neurons the same as
>> the part that is replaced.
>
>
> No such thing. Does any imitation function identically to an original?

In a thought experiment we can say that the imitation stimulates the
surrounding neurons in the same way as the original.

Then the thought experiment is garbage from the start. It begs the question. Why not just say we can have an imitation human being that stimulates the surrounding human beings in the same way as the original? Ta-da! That makes it easy. Now all we need to do is make a human being that stimulates their social matrix in the same way as the original and we have perfect AI without messing with neurons or brains at all. Just make a whole person out of person stuff - like, as a thought experiment, suppose there is some stuff X which makes things that human beings think are another human being. Like marzipan. We can put the right pheromones in it and dress it up nice, and according to the thought experiment, let's say that works.

You aren't allowed to deny this because then you don't understand the thought experiment, see? Don't you get it? You have to accept this flawed pretext to have a discussion that I will engage in now. See how it works? Now we can talk for six or eight months about how human marzipan is inevitable because it wouldn't make sense if you replaced a city gradually with marzipan people that New York would gradually fade into less of a New York or that New York becomes suddenly absent. It's a fallacy. The premise screws up the result.

 
We can even say
that it does this miraculously. Would such a device *necessarily*
replicate the consciousness along with the neural impulses, or could
the two be separated?

Would the marzipan Brooklyn necessarily replicate the local TV and radio along with the traffic on the street, or could the two be separated? Neither. The whole premise is garbage because both Brooklyn and brain are made of living organisms who are aware of their description of the universe. We can't imitate their description of the universe because we can only get our own description of our measuring instruments' description of their exterior descriptions.


The timing of neural impulses can only be made completely correct by direct experience. The implant can't work as a source of consciousness on a personal level, only as a band-aid on a sub-personal level. Making a person out of band-aids doesn't work.

Craig



--
Stathis Papaioannou

Stephen P. King

unread,
Oct 21, 2012, 10:55:10 AM10/21/12
to everyth...@googlegroups.com
Hi Stathis,

How would you set up the experiment? How do you control for an
effect that may well be ubiquitous? Did you somehow miss the point that
consciousness can only be observed in 1p? Why are you so insistent on a
3p of it?

>
>>> If the atoms in my brain were put into a Chinese-reading configuration,
>>> either through a lot of work learning the language or through direct
>>> manipulation, then I would be able to understand Chinese.
>>
>> It's understandable to assume that, but no I don't think it's like that. You
>> can't transplant a language into a brain instantaneously because there is no
>> personal history of association. Your understanding of language is not a
>> lookup table in space, it is made out of you. It's like if you walked around
>> with Google translator in your brain. You could enter words and phrases and
>> turn them into your language, but you would never know the language first
>> hand. The knowledge would be impersonal - accessible, but not woven into
>> your proprietary sense.
> I don't mean putting an extra module into the brain, I mean putting
> the brain directly into the same configuration it is put into by
> learning the language in the normal way.

How might we do that? Alter 1 neuron and you might not have the
same mind.

>
>>> I'm sorry, but this whole passage is a non sequitur as far as the fading
>>> qualia thought experiment goes. You have to explain what you think would
>>> happen if part of your brain were replaced with a functional equivalent.
>>
>> There is no functional equivalent. That's what I am saying. Functional
>> equivalence when it comes to a person is a non-sequitur. Not only is every
>> person unique, they are an expression of uniqueness itself. They define
>> uniqueness in a never-before-experienced way. This is a completely new way
>> of understanding consciousness and signal. Not as mechanism, but as
>> animism-mechanism.
>>
>>
>>> A functional equivalent would stimulate the remaining neurons the same as
>>> the part that is replaced.
>>
>> No such thing. Does any imitation function identically to an original?
> In a thought experiment we can say that the imitation stimulates the
> surrounding neurons in the same way as the original. We can even say
> that it does this miraculously. Would such a device *necessarily*
> replicate the consciousness along with the neural impulses, or could
> the two be separated?

Is the brain strictly a classical system?
Let's see. If I ingest psychoactive substances, there is a 1p
observable effect.... Is this a circumstance that is different in kind
from that device?

--
Onward!

Stephen


Jason Resch

unread,
Oct 21, 2012, 12:42:55 PM10/21/12
to everyth...@googlegroups.com
On Sun, Oct 21, 2012 at 8:56 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:
Hi John,

On 20 Oct 2012, at 23:16, John Mikes wrote:

Bruno,
especially in my identification as "responding to relations".
Now the "Self"? IT certainly refers to a more sophisticated level of thinking, more so than the average (animalic?)  mind. - OR: we have no idea. What WE call 'Self-Ccness' is definitely a human attribute because WE identify it that way. I never talked to a cauliflower to clarify whether she feels like having a self? (In cauliflowerese, of course).

My feeling was first that all homeotherm animals have self-consciousness, as they have the ability to dream, easily related to the ability to build a representation of oneself. Then I have enlarged the spectrum up to some spiders and the octopi, just by reading a lot about them and watching videos.

But this is just a personal appreciation. For the plant, let us say I know nothing, although I suspect possible consciousness, related to different scalings.

The following theory seems to have consciousness, for a different reason (the main one is that it is Turing universal):

x + 0 = x
x + s(y) = s(x + y)

x * 0 = 0
x * s(y) = x*y + x

But once you add the very powerful induction axioms, which say that if a property F is true for zero and is preserved by the successor operation then it is true for all natural numbers - that is, the infinity of axioms:

(F(0) & Ax(F(x) -> F(s(x)))) -> AxF(x),

with F(x) being any formula in the arithmetical language (and thus defined with "0, s, +, *"),

Then you get Löbianity, and this makes it as conscious as you and me. Indeed, they get a rich theology about which they can develop maximal awareness, and even test it by comparing the physics retrievable from that theology with the observation and inference on their most probable neighborhoods.

Löbianity is the threshold at which any new axiom added will create and enlarge the machine's ignorance. It is the ultimate modesty threshold.
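The two recursion schemes and the induction schema above can be transcribed, for instance, as a Lean 4 sketch (the names Nat', add, mul, and the theorem statement are illustrative, not part of the original post):

```lean
-- A sketch of the theory above: the Turing-universal core (the two
-- recursion schemes for + and *) plus the induction schema.
inductive Nat' where
  | zero : Nat'
  | succ : Nat' → Nat'

-- x + 0 = x,  x + s(y) = s(x + y)
def add : Nat' → Nat' → Nat'
  | x, .zero   => x
  | x, .succ y => .succ (add x y)

-- x * 0 = 0,  x * s(y) = x*y + x
def mul : Nat' → Nat' → Nat'
  | _, .zero   => .zero
  | x, .succ y => add (mul x y) x

-- The induction schema: if F holds at zero and is preserved by the
-- successor operation, then F holds for every natural number.
theorem induction {F : Nat' → Prop}
    (h0 : F .zero) (hs : ∀ x, F x → F (.succ x)) : ∀ x, F x := by
  intro x
  induction x with
  | zero => exact h0
  | succ n ih => exact hs n ih
```

In first-order arithmetic the schema is one axiom per formula F (infinitely many); in the Lean sketch it collapses to a single theorem because the logic is higher-order.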



Bruno,

Might there be still other axioms (which we are not aware of, or at least do not use) that could lead to even higher states of consciousness than we presently have?

Also, it isn't quite clear to me how something needs to be added to Turing universality to expand the capabilities of consciousness, if all consciousness is the result of computation.

Thanks,

Jason

Jason Resch

unread,
Oct 21, 2012, 12:48:58 PM10/21/12
to everyth...@googlegroups.com

John,

I would also suggest that you read this link, it shows how an infinitely large cosmos leads directly to quantum mechanics due to the observer's inability to self-locate.  For someone who believes in both mechanism and platonism, it is the exact scenario platonic programs should find themselves in:

http://lesswrong.com/lw/3pg/aguirre_tegmark_layzer_cosmological/
http://arxiv.org/abs/1008.1066

Jason
 

John Clark

unread,
Oct 21, 2012, 1:46:57 PM10/21/12
to everyth...@googlegroups.com
On Sun, Oct 21, 2012  Bruno Marchal <mar...@ulb.ac.be> wrote:
 >> I stopped reading after your proof of the existence of a new type of indeterminacy never seen before because the proof was in error, so there was no point in reading about things built on top of that

> From your "error" you have been obliged to say that in the WM duplication, you will live both at W and at M

Yes.

> yet you agree that both copies will feel to live in only one place

Yes.

> so the error you have seen was due to a confusion between first person and third person.

Somebody is certainly confused but it's not me. The fact is that if we are identical then my first person experience of looking at you is identical to your first person experience of looking at me, and both our actions are identical for a third person looking at both of us. As long as we're identical it's meaningless to talk about 2 conscious beings regardless of how many bodies or brains have been duplicated. 

Your confusion stems from saying "you have been duplicated" but then not thinking about what that really means, you haven't realized that a noun (like a brain) has been duplicated but an adjective (like Bruno Marchal) has not been as long as they are identical; you are treating adjectives as if they were nouns and that's bound to cause confusion. You are also confused by the fact that if 2 identical things change in nonidentical ways, such as by forming different memories, then they are no longer identical. And finally you are confused by the fact that although they are not each other any more after those changes both still have an equal right to call themselves Bruno Marchal. After reading these multiple confusions in one step of your proof I saw no point in reading more, and I still don't.

> By the way, it is irrational to stop in the middle of a proof.

If one of the steps in a proof contains a blunder then it would be irrational to keep reading it.

> By assuming a physical reality at the start

That seems like a pretty damn good place to make an assumption.

 > But the physical reality can emerge or appear without a physical reality at the start

Maybe maybe not, but even if you're right that wouldn't make it any less real; and maybe physical reality didn't even need to emerge because there was no start.
 
>> If you change your conscious state then your brain changes, and if I make a change in your brain then your conscious state changes too, so I'd say that it's a good assumption that consciousness is interlinked with a physical object, in fact it's a downright superb assumption.

 > But this is easily shown to be false when we assume comp.

It's not false and I don't need to assume it and I haven't theorized it from armchair philosophy either, I can show it's true experimentally. And when theory and experiment come into conflict it is the theory that must submit not the experiment. If I insert drugs into your bloodstream it will change the chemistry of your brain, and when that happens your conscious state will also change. Depending on the drug I can make you happy-sad, friendly-angry, frightened-calm, alert-sleepy, dead-alive, you name it.
 
> If your state appears in a far away galaxies [...]

Then he will be me and he will remain me until differences between that far away galaxy and this one cause us to change in some way, such as by forming different memories; after that he will no longer be me, although we will still both be John K Clark because John K Clark has been duplicated, the machine duplicated the body of him and the environmental differences caused his consciousness to diverge. As I've said before this is an odd situation but in no way paradoxical.

> You keep defending comp, in your dialog with Craig,

I keep defending my ideas, "comp" is your homemade term not mine, I have no use for it.

> You can attach consciousness to the owner of a brain,

Yes, consciousness is what the brain does.

> but the owner itself must attach his consciousness to all states existing in arithmetic

Then I must remember events that happened in the Precambrian because arithmetic existed even back then, but I don't, I don't remember existing then at all. Now that is a paradox! Therefore one of the assumptions must be wrong, namely that the owner of a brain "must attach his consciousness to all states existing in arithmetic".

  John K Clark

Quentin Anciaux

unread,
Oct 21, 2012, 2:49:36 PM10/21/12
to everyth...@googlegroups.com


2012/10/21 John Clark <johnk...@gmail.com>

Therefore that shows that you do your best to turn the meaning of everything you read to be able to marvel at yourself... but well, that only fools you.

Quentin
 
namely that the owner of a brain "must attach his consciousness to all states existing in arithmetic".

  John K Clark

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.



--
All those moments will be lost in time, like tears in rain.

Roger Clough

unread,
Oct 21, 2012, 4:35:19 PM10/21/12
to everything-list
Hi Bruno Marchal

1p is to know by acquaintance (only possible to humans).
I conjecture that any statement pertaining to humans containing 1p is TRUE.

3p is to know by description (works for both humans and computers).
I believe that any statement pertaining to computers containing 1p is FALSE.

Consciousness would be to know that you are conscious, or

for a real person, 1p(1p) = TRUE
and saying that he is conscious to others would be 3p(1p) = TRUE
or even (3p(1p(1p))) = TRUE


But a computer cannot experience anything (is blocked from 1p), or

for a computer, 3p (1p) = FALSE (or any statement containing 1p)
but 3p(3p) = TRUE (or any proposition not containing 1p = TRUE)
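The assignments above follow a single rule, which can be encoded as a toy function (illustrative only; `truth` and its string encoding of statements like "3p(1p)" are mine, the rule is as stated in the post):

```python
# Toy encoding of the claimed truth values: a statement is written as a
# string of nested operators like "3p(1p)". For a computer, any statement
# that contains 1p evaluates to FALSE; every other case is TRUE.
def truth(subject: str, statement: str) -> bool:
    involves_1p = "1p" in statement
    if subject == "computer" and involves_1p:
        return False  # a computer is blocked from 1p
    return True       # humans get TRUE for 1p and 3p statements alike

# The cases listed in the post:
assert truth("human", "1p(1p)")         # knowing that you are conscious
assert truth("human", "3p(1p)")         # saying so to others
assert truth("human", "3p(1p(1p))")
assert not truth("computer", "3p(1p)")  # any statement containing 1p
assert truth("computer", "3p(3p)")      # pure description still works
```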


Roger Clough, rcl...@verizon.net
10/21/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Bruno Marchal
Receiver: everything-list
Time: 2012-10-21, 09:56:39
Subject: Re: Continuous Game of Life


Hi John,


On 20 Oct 2012, at 23:16, John Mikes wrote:


Bruno,
especially in my identification as "responding to relations".
Now the "Self"? IT certainly refers to a more sophisticated level of thinking, more so than the average (animalic?) mind. - OR: we have no idea. What WE call 'Self-Ccness' is definitely a human attribute because WE identify it that way. I never talked to a cauliflower to clarify whether she feels like having a self? (In cauliflowerese, of course).


My feeling was first that all homeotherm animals have self-consciousness, as they have the ability to dream, easily related to the ability to build a representation of oneself. Then I have enlarged the spectrum up to some spiders and the octopi, just by reading a lot about them and watching videos.


But this is just a personal appreciation. For the plant, let us say I know nothing, although I suspect possible consciousness, related to different scalings.


The following theory seems to have consciousness, for a different reason (the main one is that it is Turing universal):


x + 0 = x
x + s(y) = s(x + y)

x * 0 = 0
x * s(y) = x*y + x


But once you add the very powerful induction axioms, which say that if a property F is true for zero and is preserved by the successor operation then it is true for all natural numbers - that is, the infinity of axioms:


(F(0) & Ax(F(x) -> F(s(x)))) -> AxF(x),


with F(x) being any formula in the arithmetical language (and thus defined with "0, s, +, *"),


Then you get Löbianity, and this makes it as conscious as you and me. Indeed, they get a rich theology about which they can develop maximal awareness, and even test it by comparing the physics retrievable from that theology with the observation and inference on their most probable neighborhoods.


Löbianity is the threshold at which any new axiom added will create and enlarge the machine's ignorance. It is the ultimate modesty threshold.

Jason Resch

unread,
Oct 21, 2012, 6:25:21 PM10/21/12
to everyth...@googlegroups.com
On Sun, Oct 21, 2012 at 12:46 PM, John Clark <johnk...@gmail.com> wrote:
On Sun, Oct 21, 2012  Bruno Marchal <mar...@ulb.ac.be> wrote:

 >> I stopped reading after your proof of the existence of a new type of indeterminacy never seen before because the proof was in error, so there was no point in reading about things built on top of that

> From your "error" you have been obliged to say that in the WM duplication, you will live both at W and at M

Yes.

> yet you agree that both copies will feel to live in only one place

Yes.

> so the error you have seen was due to a confusion between first person and third person.

Somebody is certainly confused but it's not me. The fact is that if we are identical then my first person experience of looking at you is identical to your first person experience of looking at me, and both our actions are identical for a third person looking at both of us. As long as we're identical it's meaningless to talk about 2 conscious beings regardless of how many bodies or brains have been duplicated. 

Your confusion stems from saying "you have been duplicated" but then not thinking about what that really means, you haven't realized that a noun (like a brain) has been duplicated but an adjective (like Bruno Marchal) has not been as long as they are identical; you are treating adjectives as if they were nouns and that's bound to cause confusion. You are also confused by the fact that if 2 identical things change in nonidentical ways, such as by forming different memories, then they are no longer identical. And finally you are confused by the fact that although they are not each other any more after those changes both still have an equal right to call themselves Bruno Marchal. After reading these multiple confusions in one step of your proof I saw no point in reading more, and I still don't.

John,

I think you are missing something.  It is a problem that I noticed after watching the movie "The Prestige" and it eventually led me to join this list.

Unless you consider yourself to be only a single momentary atom of thought, you probably believe there is some stream of thoughts/consciousness that you identify with.  You further believe that these thoughts and consciousness are produced by some activity of your brain.  Unlike Craig, you believe that whatever horrible injury you suffered, even if every atom in your body were separated from every other atom, in principle you could be put back together, and if the atoms are put back just right, you will be revived, alive and well, and conscious again.

Further, you probably believe it doesn't matter if we even re-use the same atoms or not, since atoms of the same elements and isotopes are functionally equivalent.  We could take apart your current atoms, then put you back together with atoms from a different pile and your consciousness would continue right where it left off (from before you were obliterated).  It would be as if a simulation of your brain were running on a VM, we paused the VM, moved it to a different physical computer and then resumed it.  From your perspective inside, there was no interruption, yet your physical incarnation and location has changed.

Assuming you are with me so far, an interesting question emerges: what happens to your consciousness when duplicated?  Either an atom for atom replica of yourself is created in two places or your VM image which contains your brain emulation is copied to two different computers while paused, and then both are resumed.  Initially, the sensory input to the two duplicates could be the same, and in a sense they are still the same mind, just with two instances, but then something interesting happens once different input is fed to the two instances: they split.  You could say they split in the same sense as when someone opens the steel box to see whether the cat is alive or dead.  All the splitting in quantum mechanics may be the result of our infinite instances discovering/learning different things about our infinite environments.
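The pause/copy/resume picture above can be sketched as a toy program (illustrative only; a list of memories stands in for the brain state, and determinism stands in for "functionally equivalent atoms"):

```python
# Toy model of duplication: a "mind" as a deterministic state machine.
# Copying the paused state to two substrates leaves the instances
# identical until the first differing input, at which point they split.
from copy import deepcopy

class Mind:
    def __init__(self):
        self.memories = []

    def perceive(self, sensory_input):
        # Deterministic update: same state + same input -> same next state.
        self.memories.append(sensory_input)

original = Mind()
original.perceive("childhood")

# "Duplicate": copy the paused state onto two different substrates.
copy_a = deepcopy(original)
copy_b = deepcopy(original)

# Same sensory input: the two instances remain indistinguishable.
copy_a.perceive("white room")
copy_b.perceive("white room")
assert copy_a.memories == copy_b.memories

# Different input: the instances diverge into two distinct streams.
copy_a.perceive("sees Washington")
copy_b.perceive("sees Moscow")
assert copy_a.memories != copy_b.memories
```

The split happens not at the moment of copying but at the first differing input, which is the point of the argument.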

Jason
 

> By the way, it is irrational to stop in the middle of a proof.

If one of the steps in a proof contains a blunder then it would be irrational to keep reading it.

> By assuming a physical reality at the start

That seems like a pretty damn good place to make an assumption.

 > But the physical reality can emerge or appear without a physical reality at the start

Maybe maybe not, but even if you're right that wouldn't make it any less real; and maybe physical reality didn't even need to emerge because there was no start.
 
>> If you change your conscious state then your brain changes, and if I make a change in your brain then your conscious state changes too, so I'd say that it's a good assumption that consciousness is interlinked with a physical object, in fact it's a downright superb assumption.

 > But this is easily shown to be false when we assume comp.

It's not false and I don't need to assume it and I haven't theorized it from armchair philosophy either, I can show it's true experimentally. And when theory and experiment come into conflict it is the theory that must submit not the experiment. If I insert drugs into your bloodstream it will change the chemistry of your brain, and when that happens your conscious state will also change. Depending on the drug I can make you happy-sad, friendly-angry, frightened-calm, alert-sleepy, dead-alive, you name it.
 
> If your state appears in a far away galaxies [...]

Then he will be me and he will remain me until differences between that far away galaxy and this one cause us to change in some way, such as by forming different memories; after that he will no longer be me, although we will still both be John K Clark because John K Clark has been duplicated, the machine duplicated the body of him and the environmental differences caused his consciousness to diverge. As I've said before this is an odd situation but in no way paradoxical.

> You keep defending comp, in your dialog with Craig,

I keep defending my ideas, "comp" is your homemade term not mine, I have no use for it.

> You can attach consciousness to the owner of a brain,

Yes, consciousness is what the brain does.

> but the owner itself must attach his consciousness to all states existing in arithmetic

Then I must remember events that happened in the Precambrian because arithmetic existed even back then, but I don't, I don't remember existing then at all. Now that is a paradox! Therefore one of the assumptions must be wrong, namely that the owner of a brain "must attach his consciousness to all states existing in arithmetic".

  John K Clark

Stathis Papaioannou

unread,
Oct 21, 2012, 7:14:59 PM10/21/12
to everyth...@googlegroups.com
On Mon, Oct 22, 2012 at 1:55 AM, Stephen P. King <step...@charter.net> wrote:

>> If there is a top-down effect of the mind on the atoms then we
>> would expect some scientific evidence of this. Evidence would
>> constitute, for example, neurons firing when measurements of
>> transmembrane potentials, ion concentrations etc. suggest that they
>> should not. You claim that such anomalous behaviour of neurons and
>> other cells due to consciousness is widespread, yet it has never been
>> experimentally observed. Why?
>
>
> Hi Stathis,
>
> How would you set up the experiment? How do you control for an effect
> that may well be ubiquitous? Did you somehow miss the point that
> consciousness can only be observed in 1p? Why are you so insistent on a 3p
> of it?

A top-down effect of consciousness on matter could be inferred if
miraculous events were observed in neurophysiology research. The
consciousness itself cannot be directly observed.

>> I don't mean putting an extra module into the brain, I mean putting
>> the brain directly into the same configuration it is put into by
>> learning the language in the normal way.
>
>
> How might we do that? Alter 1 neuron and you might not have the same
> mind.

When you learn something, your brain physically changes. After a year
studying Chinese it goes from configuration SPK-E to configuration
SPK-E+C. If your brain were put directly into configuration SPK-E+C
then you would know Chinese and have a false memory of the year of
learning it.

>> In a thought experiment we can say that the imitation stimulates the
>> surrounding neurons in the same way as the original. We can even say
>> that it does this miraculously. Would such a device *necessarily*
>> replicate the consciousness along with the neural impulses, or could
>> the two be separated?
>
>
> Is the brain strictly a classical system?

No, although the consensus appears to be that quantum effects are not
significant in its functioning. In any case, this does not invalidate
functionalism.

>> As I said, technical problems with computers are not relevant to the
>> argument. The implant is just a device that has the correct timing of
>> neural impulses. Would it necessarily preserve consciousness?
>>
>>
> Let's see. If I ingest psychoactive substances, there is a 1p observable
> effect.... Is this a circumstance that is different in kind from that
> device?

The psychoactive substances cause a physical change in your brain and
thereby also a psychological change.


--
Stathis Papaioannou

Stephen P. King

unread,
Oct 21, 2012, 8:43:04 PM10/21/12
to everyth...@googlegroups.com
On 10/21/2012 7:14 PM, Stathis Papaioannou wrote:
> On Mon, Oct 22, 2012 at 1:55 AM, Stephen P. King <step...@charter.net> wrote:
>
>>> If there is a top-down effect of the mind on the atoms then we
>>> would expect some scientific evidence of this. Evidence would
>>> constitute, for example, neurons firing when measurements of
>>> transmembrane potentials, ion concentrations etc. suggest that they
>>> should not. You claim that such anomalous behaviour of neurons and
>>> other cells due to consciousness is widespread, yet it has never been
>>> experimentally observed. Why?
>>
>> Hi Stathis,
>>
>> How would you set up the experiment? How do you control for an effect
>> that may well be ubiquitous? Did you somehow miss the point that
>> consciousness can only be observed in 1p? Why are you so insistent on a 3p
>> of it?
> A top-down effect of consciousness on matter could be inferred if
> miraculous events were observed in neurophysiology research. The
> consciousness itself cannot be directly observed.

Hi Stathis,

This would be true only if consciousness were separate from matter,
such as in Descartes' failed theory of substance dualism. In the dual
aspect theory that I am arguing for, there would never be any "miracles"
that would contradict physical law. At most there would be statistical
deviations from classical predictions. Check out
http://boole.stanford.edu/pub/ratmech.pdf for details. My support for
this theory and not materialism follows from materialism's demonstrated
inability to account for 1p. Dual aspect monism has 1p built in from
first principles. BTW, I don't use the term "dualism" any more as what I
am advocating seems to be too easily confused with the failed version.

>
>>> I don't mean putting an extra module into the brain, I mean putting
>>> the brain directly into the same configuration it is put into by
>>> learning the language in the normal way.
>>
>> How might we do that? Alter 1 neuron and you might not have the same
>> mind.
> When you learn something, your brain physically changes. After a year
> studying Chinese it goes from configuration SPK-E to configuration
> SPK-E+C. If your brain were put directly into configuration SPK-E+C
> then you would know Chinese and have a false memory of the year of
> learning it.

Ah, but is that change, from SPK-E to SPK-E+C, one that is
measurable strictly in terms of the number of neurons changed? No. I
would conjecture that it is a computational problem that is at least
NP-hard. My reasoning is that if the change were emulable by a
computation X *and* that X could also be used to solve an NP-hard
problem, then there should exist an algorithm that could easily
translate any statement in one language into another *and* finding that
algorithm should require only some polynomial quantity of resources
(relative to the number of possible algorithms). It should be easy to
show that this is not the case.
I strongly believe that computational complexity plays a huge role
in many aspects of the hard problem of consciousness and that the
Platonic approach to computer science is obscuring solutions, as it is
blind to questions of resource availability and distribution.

>>> In a thought experiment we can say that the imitation stimulates the
>>> surrounding neurons in the same way as the original. We can even say
>>> that it does this miraculously. Would such a device *necessarily*
>>> replicate the consciousness along with the neural impulses, or could
>>> the two be separated?
>>
>> Is the brain strictly a classical system?
> No, although the consensus appears to be that quantum effects are not
> significant in its functioning. In any case, this does not invalidate
> functionalism.

Well, I don't follow the crowd. I agree that functionalism is not
dependent on the type of physics of the system, but there is an issue of
functional closure that must be met in my conjecture; there has to be
some way for the system (that supports the conscious capacity) to be
closed under the transformation involved.

>>> As I said, technical problems with computers are not relevant to the
>>> argument. The implant is just a device that has the correct timing of
>>> neural impulses. Would it necessarily preserve consciousness?
>>>
>>>
>> Let's see. If I ingest psychoactive substances, there is a 1p observable
>> effect.... Is this a circumstance that is different in kind from that
>> device?
> The psychoactive substances cause a physical change in your brain and
> thereby also a psychological change.
>
>
Of course. As I see it, there is no brain change without a mind
change and vice versa. The mind and brain are dual, as Boolean algebras
and topological spaces are dual: the relation is an isomorphism between
structures that have oppositely directed arrows of transformation. The
math is very straightforward... People just have a hard time
understanding the idea that all of "matter" is some form of topological
space, and there is no known calculus of variations for Boolean algebras
(no one is looking for it, except for me, that I know of). Care to help
me? The idea of SPK-E -> SPK-E+C, that you mentioned, is an example of a
variation of a Boolean algebra!
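The duality Stephen invokes here is Stone duality; a minimal textbook statement of it (added for reference, not taken from the thread) is:

```latex
% Stone duality: to a Boolean algebra B assign its space S(B) of
% ultrafilters; to a Stone space X (compact, Hausdorff, totally
% disconnected) assign its Boolean algebra Clop(X) of clopen sets.
% Each construction recovers the other:
B \;\cong\; \mathrm{Clop}(S(B)), \qquad X \;\cong\; S(\mathrm{Clop}(X)).
% Morphisms reverse direction -- the "oppositely directed arrows":
% a homomorphism f : A \to B induces a continuous map
% S(f) : S(B) \to S(A), giving a contravariant equivalence
\mathbf{Bool}^{\mathrm{op}} \;\simeq\; \mathbf{Stone}.
```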

--
Onward!

Stephen


Quentin Anciaux

unread,
Oct 22, 2012, 3:51:42 AM10/22/12
to everyth...@googlegroups.com


2012/10/22 Jason Resch <jason...@gmail.com>



On Sun, Oct 21, 2012 at 12:46 PM, John Clark <johnk...@gmail.com> wrote:
On Sun, Oct 21, 2012  Bruno Marchal <mar...@ulb.ac.be> wrote:

 >> I stopped reading after your proof of the existence of a new type of indeterminacy never seen before because the proof was in error, so there was no point in reading about things built on top of that

> From your "error" you have been obliged to say that in the WM duplication, you will live both at W and at M

Yes.

> yet you agree that both copies will feel to live in only one place

Yes.

> so the error you have seen was due to a confusion between first person and third person.

Somebody is certainly confused but it's not me. The fact is that if we are identical then my first person experience of looking at you is identical to your first person experience of looking at me, and both our actions are identical for a third person looking at both of us. As long as we're identical it's meaningless to talk about 2 conscious beings regardless of how many bodies or brains have been duplicated. 

Your confusion stems from saying "you have been duplicated" but then not thinking about what that really means; you haven't realized that a noun (like a brain) has been duplicated but an adjective (like Bruno Marchal) has not been, as long as they are identical; you are treating adjectives as if they were nouns and that's bound to cause confusion. You are also confused by the fact that if 2 identical things change in nonidentical ways, such as by forming different memories, then they are no longer identical. And finally you are confused by the fact that although they are not each other any more after those changes, both still have an equal right to call themselves Bruno Marchal. After reading these multiple confusions in one step of your proof I saw no point in reading more, and I still don't.

John,

I think you are missing something.  It is a problem that I noticed after watching the movie "The Prestige" and it eventually led me to join this list.

Unless you consider yourself to be only a single momentary atom of thought, you probably believe there is some stream of thoughts/consciousness that you identify with.  You further believe that these thoughts and consciousness are produced by some activity of your brain.  Unlike Craig, you believe that whatever horrible injury you suffered, even if every atom in your body were separated from every other atom, in principle you could be put back together, and if the atoms are put back just right, you will be revived and alive and well, and conscious again.

Further, you probably believe it doesn't matter if we even re-use the same atoms or not, since atoms of the same elements and isotopes are functionally equivalent.  We could take apart your current atoms, then put you back together with atoms from a different pile and your consciousness would continue right where it left off (from before you were obliterated).  It would be as if a simulation of your brain were running on a VM, we paused the VM, moved it to a different physical computer and then resumed it.  From your perspective inside, there was no interruption, yet your physical incarnation and location has changed.
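Jason's pause-and-move intuition can be sketched in a few lines of Python. The `BrainSim` class below is a toy invention for illustration (nothing in the thread specifies it), with the standard-library `pickle` module standing in for VM snapshotting:

```python
import pickle

class BrainSim:
    """Toy stand-in for a brain emulation: just a counter of 'thoughts'."""
    def __init__(self):
        self.thoughts = 0

    def step(self):
        self.thoughts += 1

sim = BrainSim()
sim.step(); sim.step()            # run on "machine A"

frozen = pickle.dumps(sim)        # pause: serialize the full state
resumed = pickle.loads(frozen)    # "move" to machine B and resume
resumed.step()

# From the inside there was no interruption: the state continued
# right where it left off, on a different physical incarnation.
print(resumed.thoughts)  # 3
```

The point of the sketch is only that the continuing stream depends on the serialized state, not on which physical object hosts it.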

Assuming you are with me so far, an interesting question emerges: what happens to your consciousness when duplicated?  Either an atom for atom replica of yourself is created in two places or your VM image which contains your brain emulation is copied to two different computers while paused, and then both are resumed.  Initially, the sensory input to the two duplicates could be the same, and in a sense they are still the same mind, just with two instances, but then something interesting happens once different input is fed to the two instances: they split.  You could say they split in the same sense as when someone opens the steel box to see whether the cat is alive or dead.  All the splitting in quantum mechanics may be the result of our infinite instances discovering/learning different things about our infinite environments.

I would add that what's interesting in the duplication is the probability of what happens next (when the "two" copies diverge). If you're about to do an experiment (for example, opening a door and looking at what is behind it) and, just before opening the door, you are duplicated, with the copy put in the same position in front of an identical door, then the fact that you were originally (just before duplication) in front of a door that opens on New York City raises the question: what is the probability that when you open it *it is* New York City? In the case of a single (limited) universe where no duplications of state can appear, the answer is straightforward: it is 100%. But in the case of comp or MWI, the probability is not 100%; you must take into account all duplications (now and then) and their relative measure. That is the "measure" problem. The "before" divergence is not interesting; that's the point where John stays stuck, willingly.
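Quentin's relative-measure point can be illustrated with a toy simulation. The 2-to-1 split of copies below is an arbitrary assumption chosen for the sketch, not anything from the thread:

```python
import random

# Toy "measure problem": just before you open the door you are duplicated,
# and the resulting copies face doors with different contents. The 1p
# probability of seeing New York is the relative measure of copies whose
# door opens on it -- here 2 copies out of 3 (an arbitrary choice).
doors = ["new york", "new york", "identical room"]

trials = 100_000
hits = sum(random.choice(doors) == "new york" for _ in range(trials))
print(round(hits / trials, 2))  # close to 0.67, not 1.0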

Quentin
 

Quentin Anciaux

unread,
Oct 22, 2012, 6:05:53 AM10/22/12
to everyth...@googlegroups.com


2012/10/22 Stephen P. King <step...@charter.net>

I don't understand why you're focusing on NP-hard problems... NP-hard problems are solvable algorithmically... but not efficiently. When I read you (I'm surely misinterpreting), it seems like you're saying you can't solve NP-hard problems... that's not the case... but as your input grows, the time to solve the problem may be bigger than the time elapsed since the Big Bang. You could say that NP-hard problems for most inputs are not technically/practically solvable, but they are in theory (you have the algorithm), unlike undecidable problems like the halting problem.
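Quentin's distinction, solvable in principle but infeasible in practice, can be made concrete with a brute-force solver for subset sum, a standard NP-complete problem (the example problem and code are illustrative additions, not from the thread):

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute-force subset sum: try every subset.

    Always terminates with a correct answer (the problem is decidable),
    but the loops examine up to 2**len(nums) subsets, so the running
    time explodes as the input grows: solvable in theory, impractical
    at scale.
    """
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

print(subset_sum([3, 9, 8, 4, 5, 7], 15))  # finds some subset summing to 15
print(subset_sum([2, 4], 7))               # None: no subset works
```

Contrast this with the halting problem, for which no such always-terminating algorithm exists at all, however slow.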

Quentin
 


Stathis Papaioannou

unread,
Oct 22, 2012, 6:53:11 AM10/22/12
to everyth...@googlegroups.com
On Mon, Oct 22, 2012 at 1:48 AM, Craig Weinberg <whats...@gmail.com> wrote:

>> If there is a top-down effect of the mind on the atoms then there we
>> would expect some scientific evidence of this.
>
>
> These words are scientific evidence of this. The atoms of my brain are
> being manipulated from the top down. I am directly projecting what I want to
> say through my mind in such a way that the atoms of my brain facilitate
> changes in the tissues of my body. Fingers move. Keys click.

You assert that there is top-down manipulation of the atoms in your
brain but the scientific evidence is against you.

>> Evidence would
>> constitute, for example, neurons firing when measurements of
>> transmembrane potentials, ion concentrations etc. suggest that they
>> should not.
>
>
> Do not neurons fire when I decide to type?

Yes, but you decide to type because neurons fire. You can't have a
decision without the physical process, so every decision or other
mental process has a correlating physical process.

> What you are expecting would be nothing but another homunculus. If there was
> some special sauce oozing out of your neurons which looked like...what?
> pictures of me moving my fingers? How would that explain how I am inside
> those pictures. The problem is that you are committed to the realism of
> cells and neurons over thoughts and feelings - even when we understand that
> our idea of neurons are themselves only thoughts and feelings. This isn't a
> minor glitch, it is The Grand Canyon.
>
> What has to be done is to realize that thoughts and feelings cannot be made
> out of forms and functions, but rather forms and functions are what thoughts
> and feelings look like from an exterior, impersonal perspective. The
> thoughts and feelings are the full-spectrum phenomenon, the forms and
> functions a narrow band of that spectrum. The narrowness of that band is
> what maximizes the universality of it. Physics is looking at a slice of
> experience across all phenomena, effectively amputating all of the meaning
> and perceptual inertia which has accumulated orthogonally to that slice.
> This is the looong way around when it comes to consciousness as
> consciousness is all about the longitudinal history of experience, not the
> spatial-exterior mechanics of the moment.

Craig, I have repeatedly explained how entertaining your hypothesis
that consciousness is substrate-dependent rather than
function-dependent, which on the face of it is not unreasonable, leads
to absurdity. You actually seem to agree with this below without
realising it.

>> You claim that such anomalous behaviour of neurons and
>> other cells due to consciousness is widespread, yet it has never been
>> experimentally observed. Why?
>
>
> Nobody except you and John Clark are suggesting any anomalous behavior. This
> is your blind spot. I don't know if you can see beyond. I am not optimistic.
> If there were any anomalous behavior of neurons, they would STILL require
> another meta-level of anomalous behaviors to explain them. Whatever level of
> description you choose for human consciousness - the brain, the body, the
> extended body, CNS, neurons, molecules, atoms, quanta... it DOESN'T MATTER
> AT ALL to the hard problem. There is still NO WAY for us to be inside of
> those descriptions, and even if there were, there is no conceivable purpose
> for 'our' being there in the first place. This isn't a cause for despair or
> giving up, it is a triumph of insight. It is to see that the world is round
> if you are far away from it, but flat if you are on the surface. You keep
> trying to say that if the world were round you would see anomalous dips and
> valleys where the Earth begins to curve. You are not getting it. Reality is
> exactly what it seems to be, and it is many other things as well. Just
> because our understanding brings us sophisticated views of what we are from
> the outside in does not in any way validate the supremacy of the realism
> which we rely on from the inside out to even make sense of science.

If the behaviour of neurons cannot be described and predicted
using physical laws then there must be anomalous forces at play. How
else could you explain it?

>> I don't mean putting an extra module into the brain, I mean putting
>> the brain directly into the same configuration it is put into by
>> learning the language in the normal way.
>
>
> That can't be done. It's like saying you will put New York City directly in
> the same configuration as Shanghai. It's meaningless. Even if you could move
> the population of Shanghai to New York or demolish New York and rebuild it
> in the shape of Shanghai, it wouldn't matter because consciousness develops
> through time. It is made of significance which accumulates through sense
> experience - *not just 'data'*.

Well, if you did disassemble New York and put the atoms into
Shanghai's configuration, including the population, then you would
have Shanghai. Not going to happen tomorrow but where is the
theoretical problem?

>> > No such thing. Does any imitation function identically to an original?
>>
>> In a thought experiment we can say that the imitation stimulates the
>> surrounding neurons in the same way as the original.
>
>
> Then the thought experiment is garbage from the start. It begs the question.
> Why not just say we can have an imitation human being that stimulates the
> surrounding human beings in the same way as the original? Ta-da! That makes
> it easy. Now all we need to do is make a human being that stimulates their
> social matrix in the same way as the original and we have perfect AI without
> messing with neurons or brains at all. Just make a whole person out of
> person stuff - like as a thought experiment suppose there is some stuff X
> which makes things that human beings think is another human being. Like
> marzipan. We can put the right pheromones in it and dress it up nice, and
> according to the thought experiment, let's say that works.

The imitation human stimulating his surrounding humans in the same way
as the original could be a zombie or a very good actor. That's what we
need for the neural implant in the thought experiment: a zombie or a
very good actor that stimulates the surrounding neurons in the same
way as the original. Do you think this is logically impossible?
Logical possibility is all that is needed in order to establish
functionalism.

> You aren't allowed to deny this because then you don't understand the
> thought experiment, see? Don't you get it? You have to accept this flawed
> pretext to have a discussion that I will engage in now. See how it works?
> Now we can talk for six or eight months about how human marzipan is
> inevitable because it wouldn't make sense if you replaced a city gradually
> with marzipan people that New York would gradually fade into less of a New
> York or that New York becomes suddenly absent. It's a fallacy. The premise
> screws up the result.

To state it as clearly as I can again, what is required is an
artificial component that stimulates the other neurons in the same way
as the original did. Chalmers says that this component is a computer
chip, which is necessary to establish computationalism, but not to
establish functionalism. To establish functionalism, it is not
necessary to specify how the component works, only that it does work.

>> We can even say
>> that it does this miraculously. Would such a device *necessarily*
>> replicate the consciousness along with the neural impulses, or could
>> the two be separated?
>
> Would the marzipan Brooklyn necessarily replicate the local TV and Radio
> along with the traffic on the street or could the two be separated? Neither.
> The whole premise is garbage because both Brooklyn and brain are made of
> living organisms who are aware of their description of the universe. We
> can't imitate their description of the universe because we can only get our
> own description of our measuring instruments description of their exterior
> descriptions.

Are you actually saying that it is *logically* impossible to replicate
a neuron's behaviour stimulating its neighbours? Not just that the
behaviour is not computable, but that not even an omnipotent being
could replicate it? So where is the logical contradiction?

>> As I said, technical problems with computers are not relevant to the
>> argument. The implant is just a device that has the correct timing of
>> neural impulses. Would it necessarily preserve consciousness?
>
>
> The timing of neural impulses can only be made completely correct by direct
> experience. The implant can't work as a source of consciousness on a
> personal level, only as band-aid on a sub-personal level. Making a person
> out of band-aids doesn't work.

But you said there are no scientifically anomalous events in neurons,
and if so it would mean that the timing can be calculated. And if
that fails, there is always God, who is omnipotent. If God got the
timing right would consciousness necessarily be preserved? If so,
functionalism is established as true. If not, we would have the
possibility of partial (as well as full) zombies, lacking aspects of
their consciousness but unaware of this.


--
Stathis Papaioannou

Bruno Marchal

unread,
Oct 22, 2012, 11:45:19 AM10/22/12
to everyth...@googlegroups.com
Yes, there are interesting transfinities below and beyond omega_1^CK (the Church–Kleene ordinal, the first non-constructive ordinal). This has plausibly, with comp, some relation with possible consciousness states (but that is not obvious and depends on definitions).




Also, it isn't quite clear to me how something needs to be added to Turing universality to expand the capabilities of consciousness, if all consciousness is the result of computation.


Gosh! It is only recently, for me, that I have come to think that universal machines are already conscious. I thought Löbianity was needed. But then it is basically the same as the consciousness --> self-consciousness type of consciousness "enrichment/delusion".  In a sense, abstract universality is maximally conscious, maximally undeluded, or awake, somehow.

But Turing universality is cheap and concerns an ability to imitate other machines, not to understand them, so for provability, beliefs, and knowledge there are transfinite improvements and enlargements possible.
We are not just conscious; we differentiate in developing beliefs, and get a greater and greater view on truth.


It is like you might be near making a kind of "Searle error", perhaps. A computation can emulate consciousness, but the computation is not conscious; only the person emulated by that computation is, and she can always progress infinitely (even if "restricted" to the search of arithmetical truth), developing more and more beliefs and knowledge. Particular universal machines will develop particular (even transfinite) parts of arithmetical truth.

But G and G*, that is, the modal logic of the provability of the Löbian machines, is a threshold. Despite growing transfinitely in their knowledge(s) of the arithmetical truth, as long as they remain self-referentially correct, they will obey G and G* for their theory of provability. If consistent, they will forever be unable to prove that they are consistent, for example, and they can prove that about themselves. The abstract theology is invariant despite the evolution of the arithmetical content of the B in Bp. PA and ZF have very different arithmetical beliefs, but both obey G and G*.
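For reference, the provability logics Bruno names can be stated compactly (this is the standard textbook presentation, added for the reader, not quoted from Bruno):

```latex
% G (Gödel-Löb logic, GL): normal modal logic K plus the Löb axiom
\Box(\Box p \to p) \to \Box p
% G* (the "true" provability logic) extends the theorems of G with
% reflection, true of sound machines but unprovable by them:
\Box p \to p
% Gödel's second incompleteness theorem in this notation: if the
% machine is consistent, it cannot prove its own consistency,
\neg\Box\bot \;\to\; \neg\Box\neg\Box\bot .
```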

Consciousness, from the first person perspective, is more related to "all computations" going through my states than to any particular computation. The living self is not a computer; it is a believer, supported by infinities of computations (by UDA).

I am happy you are open to the idea that universal machines are all conscious; it is, then, the state of "you" before developing any more beliefs than those making you universal. Your first person indeterminacy, in that state, is all other possible machines/dreams.
The Löbian machines know that they are universal, and so know the price to pay for "staying consistent", like the possibility of crashing in front of the unknown arithmetical truth.



Thanks,

You are welcome.

Bruno



John Clark

unread,
Oct 22, 2012, 12:04:06 PM10/22/12
to everyth...@googlegroups.com
On Sun, Oct 21, 2012 at 6:25 PM, Jason Resch <jason...@gmail.com> wrote

> I think you are missing something.  It is a problem that I noticed after watching the movie "The Prestige"

In my opinion "The Prestige" is the best movie made in the last 10 years, and this is one of those rare instances where the movie was better than the book. Before the movie back in 1996 I wrote a short scenario that had somewhat similar themes, this is part of it: 

" About a year ago I started building a matter duplicating machine. It could  find the position and velocity of every atom in a human being to the limit imposed by Heisenberg's law. It then used this information to construct a copy and it does it all in a fraction of a second and without harming the original in any way. You may be surprised that I was able to build such a complicated machine, but you wouldn't be if you knew how good I am with my hands. The birdhouse I made is simply lovely and I have all the latest tools from Sears.

I was a little nervous but I decided to test the machine by duplicating myself. The day before yesterday I walked into the chamber, it filled with smoke (damn those radio shack transformers) there was a flash of light, and then 3 feet to my left was a man who looked exactly like me. It was at that instant that the full realization of the terrible thing I did hit me. I yelled "This is monstrous, there can only be one of me", my copy yelled exactly the same thing. I thought he was trying to mock me, so I reached for my 44 magnum that I always carry with me (I wonder why people think I'm strange) and pointed it at my double. I noted with alarm that the double also had a gun and he was pointing it at me. I shouted "you don't have the guts to pull the trigger, but I do". Again he mimicked my words and did so in perfect synchronization, this made me even more angry and I pulled the trigger, he did too. My gun went off but due to a random quantum fluctuation his gun jammed. I buried him in my back yard.

Now that my anger has cooled and I can think more clearly I've had some pangs of guilt about killing a living creature, but that's not what really torments me. How do I know I'm not the copy? I feel exactly the same as before, but would a copy feel different? Actually there is a way to be certain, I have a video tape of the entire experiment. My memory is that the copy first appeared 3 feet to my LEFT, (if I had arranged things so he appeared 3 feet in front of me face to face things would have been more symmetrical, like looking in a mirror), if the tape shows the original walking into the chamber and the copy materializing 3 feet to his RIGHT, then I would know that I am the copy. But I'm afraid to look at the tape, should I be? If I found out I was the copy what should I do? I suppose I should mourn the death of John Clark, but how can I, I'm not dead. If I am the copy would that mean that I have no real past and my life is meaningless? Is it important, or should I just burn the tape and forget all about it?"

> you probably believe there is some stream of thoughts/consciousness that you identify with.

I can't conceive of anyone disagreeing with that.

  > You further believe that these thoughts and consciousness are produced by some activity of your brain.

Yes.

> Unlike Craig, you believe that whatever horrible injury you suffered, even if every atom in your body were separated from every other atom, in principle you could be put back together, and if the atoms are put back just right, you will be revived and alive and well, and conscious again.

Yes.
 
> Further, you probably believe it doesn't matter if we even re-use the same atoms or not, since atoms of the same elements and isotopes are functionally equivalent.  

Yes.

> We could take apart your current atoms, then put you back together with atoms from a different pile and your consciousness would continue right where it left off (from before you were obliterated).

Yes.
 
> It would be as if a simulation of your brain were running on a VM, we paused the VM, moved it to a different physical computer and then resumed it.  From your perspective inside, there was no interruption, yet your physical incarnation and location has changed.

Yes.
 
> what happens to your consciousness when duplicated?  

When what is duplicated? Adjectives, like consciousness or Jason Resch, do not duplicate in the same way that nouns, like brains, do. If I exactly duplicate an iPod playing loud music the iPod is duplicated but the adjective "loud" is not duplicated, but if I then change the loudness level on one of them but not the other then the two differentiate. In the same way if I exactly duplicate you and a cat as you consciously look at the cat then your body and brain are duplicated but the adjective describing what the brain is doing, consciousness, is not duplicated; however if I then change one cat but not the other then the conscious experience and memories formed by observing the cat will be different and the two of you will no longer be each other but both will be Jason Resch.
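John's noun/adjective point about the iPod can be sketched directly (a toy Python illustration added here, not anything John wrote):

```python
import copy

ipod_a = {"model": "iPod", "volume": 11}   # the noun: a concrete object
ipod_b = copy.deepcopy(ipod_a)             # duplicating the noun

# While the two are identical, "loud" (the adjective) describes
# both at once; there is only one describable state.
print(ipod_a == ipod_b)   # True

ipod_b["volume"] = 3                       # change one: they differentiate
print(ipod_a == ipod_b)   # False -- yet each is still an iPod
```

Duplicating the object does not create a second "loudness"; divergence, not duplication, is what makes two distinct states.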

> Initially, the sensory input to the two duplicates could be the same, and in a sense they are still the same mind, just with two instances

Two identical minds are not "in a sense" the same mind they ARE the same mind period.

> but then something interesting happens once different input is fed to the two instances: they split.

Yes, now let me tell you of a thought experiment of my own.

An exact duplicate of the earth, and its entire ecosystem, is created a billion light years away. The duplicate world would need some sort of feedback mechanism to keep the worlds in synchronization, non-linear effects would amplify tiny variations, even quantum fluctuations, into big differences, but this is a thought experiment so who cares. In the first two cases below the results would vary according to personalities, remember there's a lot of illogic even in the best of us.

1) I know all about the duplicate world and you put a 44 magnum to my head and tell me that in ten seconds you will blow my brains out. Am I concerned? You bet I am because I know that your double is holding an identical gun to the head of my double and making an identical threat.

2) I find out that for the first time since the Big Bang the worlds will diverge, in 10 seconds you will put a bullet in my head but my double will be spared. Am I concerned? Yes, and angry as well, in times of intense stress nobody is very logical. My double is no longer exact because I am going through a traumatic experience and my double is not. I'd be looking at that huge gun and wondering what it will be like when it goes off and if death will really be instantaneous. I'd be wondering if my philosophy was really as sound as I thought it was and I'd also be wondering why I get the bullet and not my double and cursing the unfairness of it all. My (semi) double would be thinking "it's a shame about that other fellow but I'm glad it's not me".

3) I know nothing about the duplicate world, a gun is at both our heads and we both are convinced we're going to die. One gun goes off, making a hell of a mess, but the other gun, for inexplicable reasons misfires. In this case NOBODY died and except for undergoing a terrifying experience I am completely unharmed. The real beauty part is that I don't even have to clean up the mess.

The bottom line is we don't have thoughts and emotions, we are thoughts and emotions, and the idea that the particular hardware that is rendering them changes their meaning is as crazy as my computer making the meaning of your post different from what it was on yours.

  John K Clark


 


 

Bruno Marchal

unread,
Oct 22, 2012, 12:17:35 PM10/22/12
to everyth...@googlegroups.com
On 21 Oct 2012, at 19:46, John Clark wrote:

On Sun, Oct 21, 2012  Bruno Marchal <mar...@ulb.ac.be> wrote:

 >> I stopped reading after your proof of the existence of a new type of indeterminacy never seen before because the proof was in error, so there was no point in reading about things built on top of that

> From your "error" you have been obliged to say that in the WM duplication, you will live both at W and at M

Yes.

> yet you agree that both copies will feel to live in only one place

Yes.

> so the error you have seen was due to a confusion between first person and third person.

Somebody is certainly confused but it's not me. The fact is that if we are identical then my first person experience of looking at you is identical to your first person experience of looking at me, and both our actions are identical for a third person looking at both of us. As long as we're identical it's meaningless to talk about 2 conscious beings regardless of how many bodies or brains have been duplicated. 

Your confusion stems from saying "you have been duplicated" but then not thinking about what that really means: you haven't realized that a noun (like a brain) has been duplicated but an adjective (like Bruno Marchal) has not been, as long as they are identical; you are treating adjectives as if they were nouns, and that's bound to cause confusion. You are also confused by the fact that if 2 identical things change in nonidentical ways, such as by forming different memories, then they are no longer identical.

The uncertainty question bears on the personal memories. You attribute imaginary identifications to me.






And finally you are confused by the fact that although they are not each other any more after those changes, both still have an equal right to call themselves Bruno Marchal. After reading these multiple confusions in one step of your proof I saw no point in reading more, and I still don't.


That is simply stopping thinking.





> By the way, it is irrational to stop in the middle of a proof.

If one of the steps in a proof contains a blunder then it would be irrational to keep reading it.


I say, with all the definition and the protocol, that P(W) = 1/2. What do you say?

You told me W and M. But when I interview the two John Clarks, neither of them has written in his personal diary: "I feel I am in W and in M."






> By assuming a physical reality at the start

That seems like a pretty damn good place to make an assumption.

In front of a deep conceptual problem, like the mind-body problem, it is better to remain neutral on the different possible rational ways to conceive of reality.





 > But the physical reality can emerge or appear without a physical reality at the start

Maybe maybe not, but even if you're right that wouldn't make it any less real; and maybe physical reality didn't even need to emerge because there was no start.
 
>> If you change your conscious state then your brain changes, and if I make a change in your brain then your conscious state changes too, so I'd say that it's a good assumption that consciousness is interlinked with a physical object, in fact it's a downright superb assumption.

 > But this is easily shown to be false when we assume comp.

It's not false and I don't need to assume it and I haven't theorized it from armchair philosophy either, I can show it's true experimentally.


Nothing can be shown true experimentally. Things can be disproved experimentally, but in science we cannot make any assertive statement about reality, except negative ones.

Even if someone survives with an artificial digital brain, that will still not be a public proof of comp.





And when theory and experiment come into conflict it is the theory that must submit not the experiment.

Of course.



If I insert drugs into your bloodstream it will change the chemistry of your brain, and when that happens your conscious state will also change. Depending on the drug I can make you happy-sad, friendly-angry, frightened-calm, alert-sleepy, dead-alive, you name it.

 
> If your state appears in a far away galaxies [...]

Then he will be me, and he will remain me until differences between that far away galaxy and this one cause us to change in some way, such as by forming different memories; after that he will no longer be me, although we will still both be John K Clark, because John K Clark has been duplicated: the machine duplicated his body, and the environmental differences caused his consciousness to diverge. As I've said before, this is an odd situation but in no way paradoxical.

You are the one talking about confusion and seeing paradox.
But don't you think that this other John Clark, in the galaxy far away,
will think "oh, that Marchal was right, my future was indeterminate, as
I have been unable to predict what just happened"?

You just stop doing the thought experiments. Yes, there is no paradox, just an indeterminacy from the first person point of view.





> You keep defending comp, in your dialog with Craig,

I keep defending my ideas, "comp" is your homemade term not mine, I have no use for it.

> You can attach consciousness to the owner of a brain,

Yes, consciousness is what the brain does.


> but the owner itself must attach his consciousness to all states existing in arithmetic

Then I must remember events that happened in the Precambrian because arithmetic existed even back then, but I don't, I don't remember existing then at all. Now that is a paradox! Therefore one of the assumptions must be wrong, namely that the owner of a brain "must attach his consciousness to all states existing in arithmetic".

Well, sorry, but you clearly misunderstand. That is normal, because you need to go at least to step seven to understand why consciousness is attached, not to all states in arithmetic (of course), but to all equivalent computational states belonging to the different computations going through them.

You know, when I say P(M) = 1/2, the question is not whether that is true or false, but of understanding what is meant by it. And what is meant can be given by the frequency interpretation of probability for iterated self-duplication: the many copies know very well that the numbers of them having gone to W or M will be *exactly* given by the binomial coefficients.
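
Bruno's frequency claim can be checked directly. The following sketch (mine, not from the thread, with illustrative variable names) enumerates all 2^n first-person histories produced by n iterated W/M duplications, and confirms that the number of copies whose diary records k occurrences of W is exactly the binomial coefficient C(n, k):

```python
from itertools import product
from math import comb

n = 10  # number of iterated self-duplications

# After n duplications there are 2**n copies, one per W/M history.
histories = [''.join(h) for h in product('WM', repeat=n)]

# Count the copies by how many times their diary says "W".
counts = {k: sum(1 for h in histories if h.count('W') == k)
          for k in range(n + 1)}

# The distribution is exactly binomial: C(n, k) copies saw k W's.
assert all(counts[k] == comb(n, k) for k in range(n + 1))

# On any single duplication step, exactly half the copies saw W,
# which is the frequency reading of P(W) = 1/2.
frac_w = sum(1 for h in histories if h[0] == 'W') / len(histories)
print(counts[n // 2], frac_w)  # → 252 0.5
```

No copy can predict in advance which history it will find in its own diary; the indeterminacy lives entirely in the first-person statistics, while the third-person count of copies is fully deterministic.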

I really fail to see why you stop, or seem to stop, thinking, especially for someone talking on a list open to Everett and observer multiplication.

Bruno




meekerdb

unread,
Oct 22, 2012, 12:26:39 PM10/22/12
to everyth...@googlegroups.com
On 10/22/2012 12:51 AM, Quentin Anciaux wrote:


2012/10/22 Jason Resch <jason...@gmail.com>


On Sun, Oct 21, 2012 at 12:46 PM, John Clark <johnk...@gmail.com> wrote:
On Sun, Oct 21, 2012  Bruno Marchal <mar...@ulb.ac.be> wrote:

 >> I stopped reading after your proof of the existence of a new type of indeterminacy never seen before because the proof was in error, so there was no point in reading about things built on top of that

> From your "error" you have been obliged to say that in the WM duplication, you will live both at W and at M

Yes.

yet you agree that both copies will feel they live in only one place

Yes.

> so the error you have seen was due to a confusion between first person and third person.

Somebody is certainly confused but it's not me. The fact is that if we are identical then my first person experience of looking at you is identical to your first person experience of looking at me, and both our actions are identical for a third person looking at both of us. As long as we're identical it's meaningless to talk about 2 conscious beings regardless of how many bodies or brains have been duplicated. 

Your confusion stems from saying "you have been duplicated" but then not thinking about what that really means: you haven't realized that a noun (like a brain) has been duplicated but an adjective (like Bruno Marchal) has not been, as long as they are identical; you are treating adjectives as if they were nouns, and that's bound to cause confusion. You are also confused by the fact that if 2 identical things change in nonidentical ways, such as by forming different memories, then they are no longer identical. And finally you are confused by the fact that although they are not each other any more after those changes, both still have an equal right to call themselves Bruno Marchal. After reading these multiple confusions in one step of your proof I saw no point in reading more, and I still don't.

John,

I think you are missing something.  It is a problem that I noticed after watching the movie "The Prestige" and it eventually led me to join this list.

Unless you consider yourself to be only a single momentary atom of thought, you probably believe there is some stream of thoughts/consciousness that you identify with.  You further believe that these thoughts and consciousness are produced by some activity of your brain.  Unlike Craig, you believe that whatever horrible injury you suffered, even if every atom in your body were separated from every other atom, in principle you could be put back together, and if the atoms are put back just right, you would be restored, alive and well, and conscious again.

Further, you probably believe it doesn't matter if we even re-use the same atoms or not, since atoms of the same elements and isotopes are functionally equivalent.  We could take apart your current atoms, then put you back together with atoms from a different pile and your consciousness would continue right where it left off (from before you were obliterated).  It would be as if a simulation of your brain were running on a VM, we paused the VM, moved it to a different physical computer and then resumed it.  From your perspective inside, there was no interruption, yet your physical incarnation and location has changed.

Assuming you are with me so far, an interesting question emerges: what happens to your consciousness when duplicated?  Either an atom for atom replica of yourself is created in two places or your VM image which contains your brain emulation is copied to two different computers while paused, and then both are resumed.  Initially, the sensory input to the two duplicates could be the same, and in a sense they are still the same mind, just with two instances, but then something interesting happens once different input is fed to the two instances: they split.  You could say they split in the same sense as when someone opens the steel box to see whether the cat is alive or dead.  All the splitting in quantum mechanics may be the result of our infinite instances discovering/learning different things about our infinite environments.

I would add that what's interesting in the duplication is the "what happens next" probability (when the "two" copies diverge). Suppose you're about to do an experiment (for example, opening a door and looking at what is behind it), and just before opening the door you are duplicated, with the copy put in the same position in front of an identical door. Given that you were originally (just before duplication) in front of a door that opens on New York City, what is the probability that when you open it, *it is* New York City? In the case of a single (limited) universe where no duplications of state can appear, the answer is straightforward: it is 100%. But in the case of comp or MWI, the probability is not 100%; you must take into account all duplications (now and then) and their relative measure. That is the "measure" problem. The "before" divergence is not interesting; that's the point where John stays stuck, willingly.

Quentin

There is something puzzling here.  Duplication at the lowest level, cloning the quantum state, is impossible.  And even duplicates at a relatively high level, e.g. neurons, must quickly diverge just because of interactions with the uncontrolled environment; in fact, if QM is correct, it is the interaction with the environment that permits the "higher" classical level to exist. In these thought experiments, Bruno sweeps these problems aside by considering conscious states.  Conscious states are very crude things.  We're not aware of very much of the world.  So Bruno notes that a given conscious state is consistent with a lot of different worlds, i.e. different computational states in different computational threads of a UD.  Then he proposes that the physical world is just a kind of consistency class within all the consciousness threads (intersubjective agreement).  But each thread of computation that contributes to a given consciousness only does so in virtue of being consistent with the other threads (one is never literally of two minds).  So it seems that consciousness, by this theory, is an epiphenomenon of certain classes of computation (e.g. those that 'hang together' enough to be conscious "of" something).  Then we're back to the same sort of question asked of materialism, but instead of "Why is this physical process conscious and not that one?" the question is "Why is this bundle of computational states conscious and not that one?"

Brent

Bruno Marchal

unread,
Oct 22, 2012, 1:18:13 PM10/22/12
to everyth...@googlegroups.com
Hi Roger,

You just describe the non-comp conviction. You don't give any
argument. With comp, you are the owner of an infinity of machines; it
does not matter whether they are in silicon or carbon, as long as the
components do the right relative things in the most probable history.

You are just insulting many creatures merely by referring to their 3p
shapes. You are not cautious. You might insult God in the process.
Certainly so in case they are conscious, imo.

Anyway, strong AI is the hypothesis that a machine can be conscious.
Comp is the assumption that your body behaves locally like a machine,
so that you might change it in some future.


Bruno



On 21 Oct 2012, at 22:35, Roger Clough wrote:

> Hi Bruno Marchal
>

Stephen P. King

unread,
Oct 22, 2012, 2:35:15 PM10/22/12
to everyth...@googlegroups.com
On 10/22/2012 6:05 AM, Quentin Anciaux wrote:
> I don't understand why you're focusing on NP-hard problems... NP-hard
> problems are solvable algorithmically... but not efficiently. When I
> read you (I'm surely misinterpreting), it seems like you're saying you
> can't solve NP-hard problems... that's not the case... but as your
> input grows, the time to solve the problem may be bigger than the time
> elapsed since the Big Bang. You could say that NP-hard problems
> for most inputs are not technically/practically solvable, but they are
> in theory (you have the algorithm), unlike undecidable problems like
> the halting problem.
>
> Quentin
Hi Quentin,

Yes, they are solved algorithmically. I am trying to get some focus
on the requirement of resources for a computation to be said to be
solvable. This is my criticism of the Platonic treatment of computer
theory: it completely ignores these considerations. The Big Bang theory
(considered in classical terms) has a related problem in its stipulation
of initial conditions, just as the Pre-Established Harmony of Leibniz's
Monadology does. Both require the prior existence of a solution to an
NP-Hard problem. We cannot consider the solution to be "accessible"
prior to its actual computation!
The calculation of the minimum-action configuration of the universe,
such that the universe we observe now is in the state that it is in and
is consistent with our existence in it, must be explained either as the
result of some fortuitous accident or, as some claim, some "intelligent
design", or some process working in some super-universe where our
universe was somehow selected, if the prior-computation idea is true.
I am trying to find an alternative that does not require
computations to occur prior to the universe's existence! Several people,
such as Lee Smolin, Stuart Kauffman and David Deutsch, have advanced the
idea that the universe is, literally, computing its next state in an
ongoing fashion, so my conjecture is not new. The universe is computing
solutions to NP-Hard problems, but not in any Platonic sense.

--
Onward!

Stephen


meekerdb

unread,
Oct 23, 2012, 2:03:13 AM10/23/12
to everyth...@googlegroups.com
On 10/22/2012 11:35 AM, Stephen P. King wrote:
> On 10/22/2012 6:05 AM, Quentin Anciaux wrote:
>> I don't understand why you're focusing on NP-hard problems... NP-hard problems are
>> solvable algorithmically... but not efficiently. When I read you (I'm surely
>> misinterpreting), it seems like you're saying you can't solve NP-hard problems... that's
>> not the case... but as your input grows, the time to solve the problem may be bigger
>> than the time elapsed since the Big Bang. You could say that NP-hard problems for
>> most inputs are not technically/practically solvable, but they are in theory (you have
>> the algorithm), unlike undecidable problems like the halting problem.
>>
>> Quentin
> Hi Quentin,
>
> Yes, they are solved algorithmically. I am trying to get some focus on the
> requirement of resources for computations to be said to be solvable. This is my
> criticism of the Platonic treatment of computer theory, it completely ignores these
> considerations. The Big Bang theory (considered in classical terms) has a related
> problem in its stipulation of initial conditions, just as the Pre-Established Harmony of
> Leibniz' Monadology. Both require the prior existence of a solution to a NP-Hard
> problem. We cannot consider the solution to be "accessible" prior to its actual
> computation!

Why not? NP-hard problems have solutions ex hypothesi; it's part of their definition. What
would a "prior" computation mean? Are you supposing that there is a computation and
*then* there is an implementation (in matter) that somehow realizes the computation that
was formerly abstract? That would seem muddled. If the universe is to be explained as a
computation then it must be realized by the computation - not by some later (in what time
measure?) events.

Brent

Stephen P. King

unread,
Oct 23, 2012, 6:40:22 AM10/23/12
to everyth...@googlegroups.com
"Having a solution" in the abstract sense is different from having
actual access to the solution. You cannot do any work with the abstract
fact that an NP-Hard problem has a solution; you must actually compute a
solution! The truth that there exists a minimum path for a traveling
salesman to follow, given N cities, does not guide her anywhere. This
should be obvious!
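
Stephen's point can be made concrete with his own example. A minimal sketch (mine, not from the thread): the optimal tour "exists" for any N, but actually producing it by exhaustive search costs (N-1)! tour evaluations, which is what separates abstract existence from access.

```python
from itertools import permutations
from math import dist

def tour_length(order, pts):
    """Total length of the closed tour visiting pts in the given order."""
    return sum(dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force_tsp(pts):
    """Exact TSP by exhaustive search over (n-1)! tours.

    Fixing the first city removes rotational symmetry. The optimal
    tour exists for every n, but this route to it is already hopeless
    around n ~ 20 (19! is about 1.2e17 tours).
    """
    return min(((0,) + rest for rest in permutations(range(1, len(pts)))),
               key=lambda order: tour_length(order, pts))

# Four cities on a unit square: the optimal closed tour is the
# perimeter, of length 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
best = brute_force_tsp(pts)
print(tour_length(best, pts))  # → 4.0
```

The salesman holding only the existence proof still has to pay this factorial cost (or find a cleverer algorithm, which for NP-hard problems is not known to escape superpolynomial time) before the solution does any work for her.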

> What would a "prior" computation mean?

Where did you get that cluster of words?

> Are you supposing that there is a computation and *then* there is an
> implementation (in matter) that somehow realizes the computation that
> was formerly abstract. That would seem muddled.

Right! It would be, at least, muddled. That is my point!

> If the universe is to be explained as a computation then it must be
> realized by the computation - not by some later (in what time
> measure?) events.

Exactly. The computation cannot occur before the universe! Did you
stop reading at this point?

>
> Brent
>
>> The calculation of the minimum action configuration of the
>> universe such that there
>> is a universe that we observe now is in the state that it is and such
>> is consistent with
>> our existence in it must be explained either as being the result of
>> some fortuitous
>> accident or, as some claim, some "intelligent design" or some process
>> working in some
>> super-universe where our universe was somehow selected, if the prior
>> computation idea is
>> true.
>> I am trying to find an alternative that does not require
>> computations to occur prior
>> to the universe's existence! Several people, such as Lee Smolin,
>> Stuart Kaufmann and
>> David Deutsch have advanced the idea that the universe is, literally,
>> computing its next
>> state in an ongoing fashion, so my conjecture is not new. The
>> universe is computing
>> solutions to NP-Hard problems, but not in any Platonic sense.
>>
>


--
Onward!

Stephen


Roger Clough

unread,
Oct 23, 2012, 8:50:36 AM10/23/12
to everything-list
Hi meekerdb

There are a number of theories to explain the collapse of the quantum wave function
(see below).
 
1) In subjective theories, the collapse is attributed
to consciousness (presumably of the intent or decision to make
a measurement).
 
2) In objective or decoherence theories, some physical
event (such as using a probe to make a measurement)
in itself causes decoherence of the wave function. To me,
this is the simplest and most sensible answer (Occam's Razor).

3) There is also the many-worlds interpretation, in which collapse
of the wave is avoided by creating an entire universe.
This sounds like overkill to me.
 
So I vote for decoherence of the wave by a probe.
 
Roger Clough
 
--------------------------------------------------------------------
Wave function collapse.

http://en.wikipedia.org/wiki/Wave_function_collapse

"The cluster of phenomena described by the expression wave function collapse
is a fundamental problem in the interpretation of quantum mechanics, and is known
 as the measurement problem. The problem is not really confronted by the Copenhagen
Interpretation, which postulates that this is a special characteristic of the "measurement" process.

 The Many-Worlds Interpretation deals with it by discarding the collapse-process,
thus reformulating the relation between measurement apparatus and system in
such a way that the linear laws of quantum mechanics are universally valid;
that is, the only process according to which a quantum system evolves is governed
by the Schrödinger equation or some relativistic equivalent. Often tied in with the Many-Worlds
Interpretation, but not limited to it, is the physical process of decoherence, which
causes an apparent collapse. Decoherence is also important for the interpretation
 based on Consistent Histories.

A general description of the evolution of quantum mechanical systems is possible by
using density operators and quantum operations. In this formalism (which is closely
related to the C*-algebraic formalism) the collapse of the wave function corresponds
to a non-unitary quantum operation.

The significance ascribed to the wave function varies from interpretation to interpretation,
and varies even within an interpretation (such as the Copenhagen Interpretation).
If the wave function merely encodes an observer's knowledge of the universe then
the wave function collapse corresponds to the receipt of new information.
This is somewhat analogous to the situation in classical physics, except that
the classical "wave function" does not necessarily obey a wave equation.
If the wave function is physically real, in some sense and to some extent,
then the collapse of the wave function is also seen as a real process, to the same extent."



Roger Clough, rcl...@verizon.net
10/23/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: meekerdb
Receiver: everything-list
Time: 2012-10-22, 12:26:39
Subject: Re: Continuous Game of Life


On 10/22/2012 12:51 AM, Quentin Anciaux wrote:



2012/10/22 Jason Resch




On Sun, Oct 21, 2012 at 12:46 PM, John Clark wrote:

Roger Clough

unread,
Oct 23, 2012, 9:35:50 AM10/23/12
to everything-list
Hi Bruno Marchal

Nothing is true, even comp, until it is proven by experiment.
Can you think of an experiment to verify comp ?


Roger Clough, rcl...@verizon.net
10/23/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Bruno Marchal
Receiver: everything-list
Time: 2012-10-22, 13:18:13

Roger Clough

unread,
Oct 23, 2012, 9:43:32 AM10/23/12
to everything-list
Hi Stephen P. King

I saw a paper once on the possibility of the universe
inventing itself as it goes along. I forget the result
or why, but it had to do with the amount of information
in the universe, the amount needed to do such a calculation,
etc. Is some limit exceeded ?


Roger Clough, rcl...@verizon.net
10/23/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Stephen P. King
Receiver: everything-list
Time: 2012-10-22, 14:35:15
Subject: Re: Interactions between mind and brain
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.

Bruno Marchal

unread,
Oct 23, 2012, 10:06:14 AM10/23/12
to everyth...@googlegroups.com
You are right. I have often said this: comp does not solve the mind-body problem. But it does something: it shows the problem to be twice as difficult, because it reduces the problem partially, but unavoidably, to a justification of the appearance of matter from only a statistics on computations.

That is why I explain those things preferably to people who have a good understanding of Everett. Comp gives no choice other than to generalize Everett's embedding of the physicist in the QM wave into an embedding of the thinker/dreamer in the arithmetical reality.

Comp solves 100% of the TOE problem. Arithmetic, or combinators, or anything Turing universal is enough, ontologically. Then the logic of self-reference and computer science solves 99% of the hard consciousness problem, by showing that there is something verifying the semi-axiomatics of consciousness for the machines which look at themselves, and eventually this gives a theory close to Plotinus and Plato, contradicting Aristotle.

The main point is that it also gives the physical laws, so that it is testable.

All this shows also that if you want to keep a primitively material universe, you have to never say yes to a digitalist surgeon.

Bruno




smi...@zonnet.nl

unread,
Oct 23, 2012, 11:27:16 AM10/23/12
to everyth...@googlegroups.com
Bruno was born 100 years too late, he would have predicted quantum mechanics.

Saibal


Citeren Roger Clough <rcl...@verizon.net>:

meekerdb

unread,
Oct 23, 2012, 1:29:24 PM10/23/12
to everyth...@googlegroups.com
But you wrote, "Both require the prior existence of a solution to an NP-Hard problem." An
existence that is guaranteed by the definition. When you refer to the universe computing
itself as an NP-hard problem, you are assuming that "computing the universe" is a member
of a class of problems. It actually doesn't make any sense to refer to a single problem as
NP-hard, since the "hard" refers to how the difficulty scales across problems of
increasing size. I'm not clear on what this class is. Are you thinking of something like
computing Feynman path integrals for the universe?

>
>> What would a "prior" computation mean?
>
> Where did you get that cluster of words?
From you, below, in the next to last paragraph (just because I quit writing doesn't mean
I quit reading at the same point).

>
>> Are you supposing that there is a computation and *then* there is an implementation (in
>> matter) that somehow realizes the computation that was formerly abstract. That would
>> seem muddled.
>
> Right! It would be, at least, muddled. That is my point!

But no one but you has ever suggested the universe is computed and then implemented in a
two-step process. So it seems to be a muddle of your own invention.

Brent

Stephen P. King

unread,
Oct 23, 2012, 2:08:09 PM10/23/12
to everyth...@googlegroups.com
On 10/23/2012 9:43 AM, Roger Clough wrote:
> Hi Stephen P. King
>
> I saw a paper once on the possibility of the universe
> inventing itself as it goes along. I forget the result
> or why, but it had to do with the amount of information
> in the universe, the amount needed to do such a calc,
> etc. Is some limnit exceeded ?
Hi Roger,

The currently accepted theoretical upper bound on computation is
the Bekenstein bound:
http://www.scholarpedia.org/article/Bekenstein_bound But this bound is
based on the assumption that the radius of a sphere that can enclose a
given system is equivalent to what is required to effectively isolate
that system, if an event horizon were to exist at the surface. It
ignores the implications of quantum entanglement, but for the sake of
0th-order approximations it works.
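
For a feel of the numbers, the bound is easy to evaluate. A sketch (mine, not from the thread), using the usual form I ≤ 2πRE / (ħ c ln 2) for the information content in bits of a system of enclosing radius R and total energy E:

```python
from math import pi, log

HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s
C = 2.997_924_58e8        # speed of light, m/s

def bekenstein_bits(radius_m, energy_j):
    """Bekenstein upper bound, in bits, on the information a system
    of the given enclosing radius and total energy can contain."""
    return 2 * pi * radius_m * energy_j / (HBAR * C * log(2))

# Example: 1 kg of mass-energy (E = m c^2) inside a 1 m sphere.
bits = bekenstein_bits(1.0, 1.0 * C**2)
print(f"{bits:.2e}")  # on the order of 2.6e43 bits
```

So even this 0th-order bound already puts an enormous but finite ceiling on the computation any bounded region can host.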
Onward!

Stephen


John Mikes

unread,
Oct 23, 2012, 4:53:48 PM10/23/12
to everyth...@googlegroups.com
Hi, Stephen,
you wrote some points in accordance with my thinking (whatever that is worth) with one point I disagree with:
if you want to argue a point, do not accept it as a base for your argument (even negatively not). You do that all the time. (SPK? etc.) -
My fundamental question: what do you (all) call 'mind'?
(Sub: does the brain do/learn mind functions? HOW?)
(('experimentally observed' is restricted to our present level of understanding/technology(instrumentation)/theories.
Besides: "miraculous" is subject to oncoming explanatory novel info, when it changes into merely 'functional'.))
 
To fish out some of my agreeing statements:
"Well, I don't follow the crowd...."
Science is no voting matter. 90+% believed the Flat Earth.
 
"... Alter 1 neuron and you might not have the same mind..."
(Meaning: the 'invasion(?)' called 'altering a neuron' MAY change the functionalist's complexity IN THE MIND!- which is certainly beyond our knowable domain. That makes the 'hard' hard. We 'like' to explain DOWN everything in today's knowable terms. (Beware my agnostic views!)
 
"Computation", of course, I consider a lot more than that (Platonistic?) algorithmic calculation on our existing (and so knowable?) embryonic device. I go for the Latin origin: to THINK together - mathematically, or beyond. That may be a deficiency from my (non-Indo-European) mother tongue, where the (improper?) translatable equivalent is closest to the term "expectable": "I am counting on your visit tomorrow".
 
 "I strongly believe that computational complexity plays a huge role in many aspects of the hard problem of consciousness and that the Platonic approach to computer science is obscuring solutions as it is blind to questions of resource availability and distribution."
(and a lot more, do we 'know' about them, or not (yet).
 
"Is the brain strictly a classical system? - No,..."
The "BRAIN" may be - as a 'Physical-World' figment of our bio-physio conventional science image, but its mind-related  function(?) (especially the hard one) is much more than a 'system': ALL 'parts' inventoried in explained functionality).
And: I keep away from the beloved "thought-experiments" invented to make uncanny ideas practically(?) feasible.
 
"...As I see it, there is no brain change without a mind change and vice versa. The mind and brain are dual,..."
Thanks, Stephen, originally I thought there may be some (tissue-related) minor brain-changes not affecting the mind of which the 'brains' serves as a (material) tool in our "sci"? explanations.
Reading your post(s) I realized that it is a complexity and ANY change in one part has consequences in the others.
So whatever 'part' we landscape as the 'neuronal brain' it is
still part of the wider complexity unknowable.
 
Have a good trip onward
 
John M
 
 


--
Onward!

Stephen



Stephen P. King

unread,
Oct 23, 2012, 6:35:10 PM10/23/12
to everyth...@googlegroups.com
Hi Brent,

OH! Well, I thank you for helping me clean up my language! Let me
try again. ;--) First I need to address the word "existence". I have
tried to argue that "to exist" is to be "necessarily possible", but that
attempt has fallen on deaf ears; well, it had until now, for you are
using it exactly how I am arguing it should be used, as in "An
existence that is guaranteed by the definition." Do you see that
existence does nothing for the issue of properties? The existence of a
pink unicorn and the existence of the 1234345465475766th prime number
are the same kind of existence, once we drop the pretense that existence
is dependent or contingent on physicality.
Perhaps physicality can be considered solely in terms of bundles of
particular properties, kind of like Bruno's bundles of computations that
define any given 1p. My thinking is that what is physical is exactly
what some quantity of separable 1p have as mutually consistent (or
representable as a Boolean algebra), but this consideration seems to run
independent of anything physical. What could reasonably constrain the
computations so that there is something "real" to a physical universe?
There has to be something that cannot be changed merely by changing
one's point of view.


> When you refer to the universe computing itself as an NP-hard problem,
> you are assuming that "computing the universe" is member of a class of
> problems.

Yes. It can be shown that computing a universe that contains
something consistent with Einstein's GR is NP-Hard, as the problem of
deciding whether or not there exists a smooth diffeomorphism between a
pair of 3,1 manifolds has been proven (by Markov) to be so. This tells
me that if we are going to consider the evolution of the universe to be
something that can be a simulation running on some powerful computer (or
an abstract computation in Platonia) then that simulation has to be at
least equivalent to solving an NP-Hard problem. The prior existence,
per se, of a solution is no different than the non-constructive proof
that Diffeo_3,1 ⊂ NP-Hard that Markov found.
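Brent's later point that "hard" describes scaling rather than a single instance can be made concrete with a toy illustration (generic Subset Sum, nothing specific to the diffeomorphism problem): a brute-force solver handles any small instance easily; NP-hardness is only about how the 2^n search blows up as instances grow.

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute-force Subset Sum (a standard NP-complete problem).

    Tries all 2**n subsets. Any single small instance is easily
    soluble; NP-hardness describes how this search grows with n,
    not the difficulty of one fixed problem.
    """
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

print(subset_sum([3, 9, 8, 4, 5, 7], 15))  # finds (8, 7)
```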

> It actually doesn't make any sense to refer to a single problem as
> NP-hard, since the "hard" refers to how the difficulty scales with
> different problems of increasing size.

These terms, "Scale" and "Size", do they refer to something
abstract or something physical or, perhaps, both in some sense?

> I'm not clear on what this class is.

It is an equivalence class of computationally soluble problems.
http://cs.joensuu.fi/pages/whamalai/daa/npsession.pdf There are many of
them.

> Are you thinking of something like computing Feynman path integrals
> for the universe?

Not exactly, but that is one example of a computational problem.

>
>>
>>> What would a "prior" computation mean?
>>
>> Where did you get that cluster of words?
> From you, below, in the next to last paragraph (just because I quit
> writing doesn't mean I quit reading at the same point).

Ah, I wrote "...if the prior computation idea is true. " I was
trying to contrast two different ideas: one where all of the
computations are somehow performed "ahead of time" (literally!) and the
other is where the computations occur as needed, subject to
restrictions such as only those computations that have resources
available can occur.

>
>>
>>> Are you supposing that there is a computation and *then* there is an
>>> implementation (in matter) that somehow realizes the computation
>>> that was formerly abstract. That would seem muddled.
>>
>> Right! It would be, at least, muddled. That is my point!
>
> But no one but you has ever suggested the universe is computed and
> then implemented to a two-step process. So it seems to be a muddle of
> your invention.

No, I am trying to explain something that is taken for granted; it
is more obvious for the Pre-established harmony of Leibniz, but I am
arguing that this is also the case in Big Bang theory: the initial
condition problem (also known as the foliation problem) is a problem of
computing the universe ahead of time.

>
> Brent
>
>>
>>> If the universe is to be explained as a computation then it must
>>> be realized by the computation - not by some later (in what time
>>> measure?) events.
>>
>> Exactly. The computation cannot occur before the universe!
>>
>>>
>>> Brent
>>>
>>>> The calculation of the minimum action configuration of the
>>>> universe such that there
>>>> is a universe that we observe now is in the state that it is and
>>>> such is consistent with
>>>> our existence in it must be explained either as being the result of
>>>> some fortuitous
>>>> accident or, as some claim, some "intelligent design" or some
>>>> process working in some
>>>> super-universe where our universe was somehow selected, if the
>>>> prior computation idea is
>>>> true.
>>>> I am trying to find an alternative that does not require
>>>> computations to occur prior
>>>> to the universe's existence! Several people, such as Lee Smolin,
>>>> Stuart Kaufmann and
>>>> David Deutsch have advanced the idea that the universe is,
>>>> literally, computing its next
>>>> state in an ongoing fashion, so my conjecture is not new. The
>>>> universe is computing
>>>> solutions to NP-Hard problems, but not in any Platonic sense.
>>>>
>>>
>>
>>
>


--
Onward!

Stephen


meekerdb

unread,
Oct 23, 2012, 7:16:29 PM10/23/12
to everyth...@googlegroups.com
I don't see that they are even similar. Existence of the aforesaid prime number just
means it satisfies a certain formula within an axiom system. The pink unicorn fails
existence of a quite different kind, namely an ability to locate it in spacetime. It may
still satisfy some propositions, such as, "The animal that is pink, has one horn, and
loses its power in the presence of a virgin is obviously metaphorical."; just not ones we
think of as axiomatic.

> once we drop the pretense that existence is dependent or contingent on physicality.

It's not a pretense; it's a rejection of Platonism, or at least a distinction between
different meanings of 'exists'.

> Is it possible that physicality can be defined solely in terms of bundles of
> particular properties, kinda like Bruno's bundles of computations that define any given
> 1p. My thinking is that what is physical is exactly what some quantity of separable 1p
> have as mutually consistent

But do the 1p have to exist? Can they be Sherlock Holmes and Dr. Watson?

> (or representable as a Boolean Algebra) but this consideration seems to run independent
> of anything physical. What could reasonably constrain the computations so that there is
> some thing "real" to a physical universe?

That's already assuming the universe is just computation, which I think is begging the
question. It's the same as saying, "Why this and not that."

> There has to be something that cannot be changed merely by changing one's point of view.

So long as you think other 1p viewpoints exist then intersubjective agreement defines the
'real' 3p world.

>
>
>> When you refer to the universe computing itself as an NP-hard problem, you are assuming
>> that "computing the universe" is member of a class of problems.
>
> Yes. It can be shown that computing a universe that contains something consistent
> with Einstein's GR is NP-Hard, as the problem of deciding whether or not there exists a
> smooth diffeomorphism between a pair of 3,1 manifolds has been proven (by Markov) to be
> so. This tells me that if we are going to consider the evolution of the universe to be
> something that can be a simulation running on some powerful computer (or an abstract
>> computation in Platonia) then that simulation has to be at least equivalent to solving
> an NP-Hard problem. The prior existence, per se, of a solution is no different than the
>> non-constructive proof that Diffeo_3,1 ⊂ NP-Hard that Markov found.

So the universe solves that problem. So what? We knew it was a soluble problem. Knowing
it was NP-hard didn't make it insoluble.

>
>> It actually doesn't make any sense to refer to a single problem as NP-hard, since the
>> "hard" refers to how the difficulty scales with different problems of increasing size.
>
> These terms, "Scale" and "Size", do they refer to something abstract or something
> physical or, perhaps, both in some sense?

They refer to something abstract (e.g. number of nodes in a graph), but they may have
application by giving them a concrete interpretation - just like any mathematics.

>
>> I'm not clear on what this class is.
>
> It is an equivalence class of computationally soluble problems.
> http://cs.joensuu.fi/pages/whamalai/daa/npsession.pdf There are many of them.
>
>> Are you thinking of something like computing Feynman path integrals for the universe?
>
> Not exactly, but that is one example of a computational problem.
>
>>
>>>
>>>> What would a "prior" computation mean?
>>>
>>> Where did you get that cluster of words?
>> From you, below, in the next to last paragraph (just because I quit writing doesn't
>> mean I quit reading at the same point).
>
> Ah, I wrote "...if the prior computation idea is true. " I was trying to contrast
> two different ideas: one where all of the computations are somehow performed "ahead of
> time" (literally!) and the other is where the computations occur as needed, subject
> to restrictions such as only those computations that have resources available can occur.
>
>>
>>>
>>>> Are you supposing that there is a computation and *then* there is an implementation
>>>> (in matter) that somehow realizes the computation that was formerly abstract. That
>>>> would seem muddled.
>>>
>>> Right! It would be, at least, muddled. That is my point!
>>
>> But no one but you has ever suggested the universe is computed and then implemented to
>> a two-step process. So it seems to be a muddle of your invention.
>
> No, I am trying to explain something that is taken for granted; it is more obvious
> for the Pre-established harmony of Leibniz, but I am arguing that this is also the case
> in Big Bang theory: the initial condition problem (also known as the foliation problem)
> is a problem of computing the universe ahead of time.

That problem assumes GR. But thanks to QM the future is not computed just from the past,
i.e. the past does not have to have enough information to determine the future. So the
idea that computing the next foliation in GR is 'too hard' may be an artifact of ignoring
QM. Also it's not clear what resources the universe has available with which to compute.
If you consider every Planck volume as capable of encoding a bit, and observe the
holographic bound on the information to be computed I think there's more than enough.
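That "more than enough" can be put to a rough number. A hedged back-of-envelope sketch (the Hubble-radius figure is approximate, and whether the holographic bound applies this way is itself part of the conjecture): one bit per Planck area on the horizon gives on the order of 10^122 bits.

```python
import math

# Rounded physical constants (SI units) -- illustrative values only.
l_planck = 1.616e-35   # Planck length, m
r_hubble = 1.4e26      # Hubble radius, m (approximate)

# Holographic bound: ~1 bit per Planck area on the horizon sphere.
horizon_area = 4 * math.pi * r_hubble**2
bits = horizon_area / l_planck**2

print(f"~10^{int(math.log10(bits))} bits")  # on the order of 10^122
```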

Brent

Stephen P. King

unread,
Oct 23, 2012, 8:00:08 PM10/23/12
to everyth...@googlegroups.com
On 10/23/2012 4:53 PM, John Mikes wrote:
Hi, Stephen,
you wrote some points in accordance with my thinking (whatever that is worth) with one point I disagree with:
if you want to argue a point, do not accept it as a base for your argument (even negatively not). You do that all the time. (SPK? etc.) -

Hi John,

    My English is pathetic and my rhetoric is even worse, I know this... I don't have an internal narrative in English, it's all proprioceptive sensations that I have to translate into English as best I can... Dyslexia sucks! What I try to do is lay down a claim and then argue for its validity; my language often is muddled... but the point gets across sometimes. I have to accept that limitation...



My fundamental question: what do you (all) call 'mind'?

    Actually, mind - for me- is a concept, an abstraction, it isn't a thing at all...


(Sub: does the brain do/learn mind functions? HOW?)

    The same way that we learn to communicate with each other. How exactly? hypothesis non fingo.


(('experimentally observed' is restricted to our present level of understanding/technology(instrumentation)/theories.
Besides: "miraculous" is subject to oncoming explanatory novel info, when it changes into merely 'functonal'.))

    I agree.


 
To fish out some of my agreeing statements:
"Well, I don't follow the crowd...."
Science is no voting matter. 90+% believed the Flat Earth.

    I wish more ppl understood that fact!


 
"... Alter 1 neuron and you might not have the same mind..."
(Meaning: the 'invasion(?)' called 'altering a neuron' MAY change the functionalist's complexity IN THE MIND!- which is certainly beyond our knowable domain. That makes the 'hard' hard. We 'like' to explain DOWN everything in today's knowable terms. (Beware my agnostic views!)

    Agnosticism is a good stance to take. I am a bit too bold and lean into my beliefs. Sometimes too far...


 
"Computation" of course I consider a lot more than that (Platonistic?) algorithmic calculation on our existing (and so knowable?) embryonic device. I go for the Latin orig.: to THINK together - mathematically, or beyond. That may be a deficiency from my (Non-Indo-European) mother tongue, where the (improper?) translatable equivalent is closest to the term "expectable". "I am counting on your visit tomorrow".

    That is similar to my notion of "faith" as "expectation of future truth"...


 
 "I strongly believe that computational complexity plays a huge role in many aspects of the hard problem of consciousness and that the Platonic approach to computer science is obscuring solutions as it is blind to questions of resource availability and distribution."
(and a lot more, whether we 'know' about them or not (yet)).

    yep, unknown unknowns!


 
"Is the brain strictly a classical system? - No,..."
The "BRAIN" may be - as a 'Physical-World' figment of our bio-physio conventional science image, but its mind-related function(?) (especially the hard one) is much more than a 'system' (ALL 'parts' inventoried in explained functionality).
And: I keep away from the beloved "thought-experiments" invented to make uncanny ideas practically(?) feasible.

    Ah, I love thought experiments, they are the laboratory of philosophy. ;-)


 
"...As I see it, there is no brain change without a mind change and vice versa. The mind and brain are dual,..."
Thanks, Stephen, originally I thought there may be some (tissue-related) minor brain-changes not affecting the mind of which the 'brains' serves as a (material) tool in our "sci"? explanations.
Reading your post(s) I realized that it is a complexity and ANY change in one part has consequences in the others.

    Right. I have to account for the degradation effects. Psycho-physical parallelism is either exact or not at all.


So whatever 'part' we landscape as the 'neuronal brain' it is
still part of the wider complexity unknowable.

    Indeed!


 
Have a good trip onward

    Thanks. ;-)
-- 
Onward!

Stephen

meekerdb

unread,
Oct 23, 2012, 8:31:43 PM10/23/12
to everyth...@googlegroups.com
On 10/23/2012 5:50 AM, Roger Clough wrote:
Hi meekerdb

There are a number of theories to explain the collapse of the quantum wave function
(see below).
 
1) In subjective theories, the collapse is attributed
to consciousness (presumably of the intent or decision to make
a measurement).

There are also 'subjective' epistemological interpretations in which the 'collapse' is just taking account of the change in information provided by a measurement (cf. Asher Peres or Chris Fuchs, arXiv:1207.2141).


 
2) In objective or decoherence theories, some physical
event (such as using a probe to make a measurement)
in itself causes decoherence of the wave function. To me,
this is the simplest and most sensible answer (Occam's Razor).

Decoherence has gone part way in explaining the apparent collapse of the wave function, but it still depends on the existence of a preferred (einselected) basis in which the density matrix is diagonalized by environmental interactions.  Tracing over the environmental degrees of freedom is our mathematical operation - it's not part of system physics.
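The "tracing over the environmental degrees of freedom" operation is mechanical enough to show in a few lines. A hedged NumPy sketch (a minimal one-qubit "environment", not a realistic model): entangle a qubit with the environment, trace the environment out, and the system's reduced density matrix comes out diagonal — the off-diagonal coherence is gone, which is the apparent collapse.

```python
import numpy as np

# System entangled with a one-qubit environment:
# (|0>|e0> + |1>|e1>) / sqrt(2), with orthogonal |e0>, |e1>.
psi = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)
rho = np.outer(psi, psi.conj())        # full 4x4 density matrix

# Partial trace over the environment (second tensor factor):
# reshape to indices (s, e, s', e') and sum over e = e'.
rho_sys = np.einsum('iaja->ij', rho.reshape(2, 2, 2, 2))

print(rho_sys)   # diag(0.5, 0.5): no off-diagonal coherence left
```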


3) There is also the many-worlds interpretation, in which collapse
of the wave is avoided by creating an entire universe.
This sounds like overkill to me.
 
So I vote for decoherence of the wave by a probe.

It's not true that disturbance by the measurement device causes the (apparent) collapse; it's the interaction with an environment, and ultimately it may require assumption of retarded wave propagation. I highly recommend the review article on decoherence by Schlosshauer arXiv:quant-ph/0312059.

Brent

Stephen P. King

unread,
Oct 24, 2012, 12:03:50 AM10/24/12
to everyth...@googlegroups.com
On 10/23/2012 7:16 PM, meekerdb wrote:
On 10/23/2012 3:35 PM, Stephen P. King wrote:
On 10/23/2012 1:29 PM, meekerdb wrote:
On 10/23/2012 3:40 AM, Stephen P. King wrote:

snip


But you wrote, "Both require the prior existence of a solution to a NP-Hard problem."  An existence that is guaranteed by the definition.

Hi Brent,

    OH! Well, I thank you for helping me clean up my language! Let me try again. ;--) First I need to address the word "existence". I have tried to argue that "to exist" is to be "necessarily possible" but that attempt has fallen on deaf ears, well, it has until now for you are using it exactly how I am arguing that it should be used, as in "An existence that is guaranteed by the definition." Do you see that existence does nothing for the issue of properties? The existence of a pink unicorn and the existence of the 1234345465475766th prime number are the same kind of existence,

I don't see that they are even similar.  Existence of the aforesaid prime number just means it satisfies a certain formula within an axiom system.  The pink unicorn fails existence of a quite different kind, namely an ability to locate it in spacetime.  It may still satisfy some propositions, such as, "The animal that is pink, has one horn, and loses it's power in the presence of a virgin is obviously metaphorical."; just not ones we think of as axiomatic.

 Hi Brent,

    Why are they so different in your thinking? If the aforesaid prime number is such that there does not exist a physical symbol to represent it, how is it different from the pink unicorn? Why the insistence on a Pink Unicorn being a "real" creature?
    I am using the case of the unicorn to force discussion of an important issue. We seem to have no problem believing in some mathematical object that cannot be physically constructed and yet balk at the idea of some cartoon creature. As I see it, the physical paper with a drawing of a pink horse with a horn protruding from its forehead or the brain activity of the little girl that is busy dreaming of riding a pink unicorn is just as physical as the mathematician scrawling out an elaborate abstract proof on her chalkboard. A physical process is involved. So why the prejudice against the Unicorn? Both exist in our minds and, if my thesis is correct, then there is a physical process involved somewhere. No minds without bodies and no bodies without minds, or so the expression goes...



once we drop the pretense that existence is dependent or contingent on physicality.

It's not a pretense; it's a rejection of Platonism, or at least a distinction between different meanings of 'exists'.

    Right, I am questioning Platonism and trying to clear up the ambiguity in the word 'exists'.



    Is it possible that physicality can be defined solely in terms of bundles of particular properties, kinda like Bruno's bundles of computations that define any given 1p. My thinking is that what is physical is exactly what some quantity of separable 1p have as mutually consistent

But do the 1p have to exist?  Can they be Sherlock Holmes and Dr. Watson?

    1p is the one thing that we cannot doubt, at least about our own 1p. Descartes did a good job discussing that in his Meditations... That something other than ourselves  has a 1p, well, that is part of the hard problem! BTW, my definition of physicality is not so different from Bruno's, neither of us assumes that it is ontologically primitive and both of us, AFAIK, consider it as emergent or something from that which is sharable between a plurality of 1p. Do you have a problem with his concept of it?



(or representable as a Boolean Algebra) but this consideration seems to run independent of anything physical. What could reasonably constrain the computations so that there is some thing "real" to a physical universe?

That's already assuming the universe is just computation, which I think is begging the question.  It's the same as saying, "Why this and not that."

    No, I am trying to nail down whether the universe is computable or not. If it is computable, then it is natural to ask if something is computing it. If it is not computable, well.. that's a different can of worms! I am testing a hypothesis that requires the universe (at least the part that we can observe and talk about) to be representable as a particular kind of topological space that is dual to a Boolean algebra; therefore it must be computable in some sense.
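The "topological space that is dual to a Boolean algebra" is Stone duality. In the finite case it is checkable in a few lines (a sketch only — the general case needs compact, totally disconnected spaces): a finite Boolean algebra is the powerset of its atoms, and those atoms are the points of the dual space.

```python
from itertools import combinations

def powerset(base):
    """All subsets of `base` as frozensets: a finite Boolean algebra
    under union, intersection, and complement."""
    base = list(base)
    return [frozenset(c) for r in range(len(base) + 1)
            for c in combinations(base, r)]

points = {'a', 'b', 'c'}
algebra = powerset(points)

# Finite Stone duality: the dual space's points are the atoms
# (minimal nonzero elements) -- here, the singletons.
atoms = [x for x in algebra if len(x) == 1]
print(len(algebra), len(atoms))   # 8 elements, 3 atoms

# Every nonzero element is recovered as the union of the atoms below
# it, so the algebra is isomorphic to the subsets of the dual space.
assert all(x == frozenset().union(*[a for a in atoms if a <= x])
           for x in algebra if x)
```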


There has to be something that cannot be changed merely by changing one's point of view.

So long as you think other 1p viewpoints exist then intersubjective agreement defines the 'real' 3p world.

    My thinking is that it exists as a necessary possibility in some a priori sense and it actually existing in a 'real 3p' sense are not the same thing. Is this a problem? The latter implies that it is accessible in some way. The former, well, there is some debate...




When you refer to the universe computing itself as an NP-hard problem, you are assuming that "computing the universe" is member of a class of problems.

    Yes. It can be shown that computing a universe that contains something consistent with Einstein's GR is NP-Hard, as the problem of deciding whether or not there exists a smooth diffeomorphism between a pair of 3,1 manifolds has been proven (by Markov) to be so. This tells me that if we are going to consider the evolution of the universe to be something that can be a simulation running on some powerful computer (or an abstract computation in Platonia) then that simulation has to be at least equivalent to solving an NP-Hard problem. The prior existence, per se, of a solution is no different than the non-constructive proof that Diffeo_3,1 ⊂ NP-Hard that Markov found.

So the universe solves that problem.  So what? We knew it was a soluble problem.  Knowing it was NP-hard didn't make it insoluble.

    I am assuming computability and thus solubility. The point is the question of available resources, this is where the Kolmogorov stuff comes in... My thesis is that if resources are not available for a given computation then it cannot be run, not complicated...




It actually doesn't make any sense to refer to a single problem as NP-hard, since the "hard" refers to how the difficulty scales with different problems of increasing size.

    These terms, "Scale" and "Size", do they refer to something abstract or something physical or, perhaps, both in some sense?

They refer to something abstract (e.g. number of nodes in a graph), but they may have application by giving them a concrete interpretation - just like any mathematics.

    What difference does what they refer to matter? Eventually there has to be some physical process or we would be incapable of even thinking about them! The resources to perform the computation are either available or they are not. Seriously, why are you overcomplicating the idea?



I'm not clear on what this class is.

    It is an equivalence class of computationally soluble problems. http://cs.joensuu.fi/pages/whamalai/daa/npsession.pdf There are many of them.

Are you thinking of something like computing Feynman path integrals for the universe?

    Not exactly, but that is one example of a computational problem.

snip.


    No, I am trying to explain something that is taken for granted; it is more obvious for the Pre-established harmony of Leibniz, but I am arguing that this is also the case in Big Bang theory: the initial condition problem (also known as the foliation problem) is a problem of computing the universe ahead of time.

That problem assumes GR.  But thanks to QM the future is not computed just from the past, i.e. the past does not have to have enough information to determine the future.  So the idea that computing the next foliation in GR is 'too hard' may be an artifact of ignoring QM.

   If the universe is QM then it can be considered as a quantum computer and its resource requirements are different from those of a classical machine. I think... I'm trying to get this right...


  Also it's not clear what resources the universe has available with which to compute.

    I am trying to figure the answer to that question.


  If you consider every Planck volume as capable of encoding a bit, and observe the holographic bound on the information to be computed I think there's more than enough.

    Yes, that is my hypothesis. The point is that the number of Planck voxels in our observable universe is a large but finite number. It is not infinite. This tells us that there is something strange about the Platonic idea of computation, as it assumes the availability of infinite resources for a Universal Turing Machine in its complete neglect of the question of resources. One way to escape this is to allow for the universe to actually be infinite or that there actually exist an infinite number of finite physical universes.



-- 
Onward!

Stephen

Bruno Marchal

unread,
Oct 24, 2012, 7:31:35 AM10/24/12
to everyth...@googlegroups.com
On 23 Oct 2012, at 14:50, Roger Clough wrote:

Hi meekerdb

There are a number of theories to explain the collapse of the quantum wave function
(see below).
 
1) In subjective theories, the collapse is attributed
to consciousness (presumably of the intent or decision to make
a measurement).

This leads to ... solipsism. See the work of Abner Shimony. 



 
2) In objective or decoherence theories, some physical
event (such as using a probe to make a measurement)
in itself causes decoherence of the wave function. To me,
this is the simplest and most sensible answer (Occam's Razor).

This is inconsistent with quantum mechanics. It forces some devices into NOT obeying QM.




3) There is also the many-worlds interpretation, in which collapse
of the wave is avoided by creating an entire universe.
This sounds like overkill to me.

This is just the result of applying QM to the couple "observer + observed". It is the literal reading of QM.



 
So I vote for decoherence of the wave by a probe.

You have to abandon QM, then, and not just QM, but comp too (which can only please you, I guess).

Bruno



Richard Ruquist

unread,
Oct 24, 2012, 7:46:15 AM10/24/12
to everyth...@googlegroups.com
Bruno,

What is your opinion of Cramer's Transactional Interpretation of
Quantum Mechanics (TIQM),
a 4th possible interpretation of QM?

http://en.wikipedia.org/wiki/Transactional_interpretation "More
recently he [Cramer] has also argued TIQM to be consistent with the
Afshar experiment, while claiming that the Copenhagen interpretation
and the many-worlds interpretation are not.[3]"
[3] ^ A Farewell to Copenhagen?, by John Cramer. Analog, December 2005.

Feynman used waves coming back from the future to solve his Quantum
Electrodynamics QED, the most experimentally accurate physics theory
extant, which in my mind lends TIQM credence. Such teleological
effects are expanded on for living systems in Terrence Deacon's book
"Incomplete Nature: How Mind Emerged from Matter".

And does Afshar's experiment negate MWI? QM?
Richard

Roger Clough

unread,
Oct 24, 2012, 7:49:44 AM10/24/12
to everything-list


According to Descartes, the physical is that which has extension in space.
That's a common definition of existence.

Roger Clough, rcl...@verizon.net
10/24/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Stephen P. King
Receiver: everything-list
Time: 2012-10-23, 18:35:10
Subject: Re: Interactions between mind and brain



Bruno Marchal

unread,
Oct 24, 2012, 7:51:41 AM10/24/12
to everyth...@googlegroups.com

On 23 Oct 2012, at 15:35, Roger Clough wrote:

> Hi Bruno Marchal
>
> Nothing is true, even comp, until it is proven by experiment.

Then your own consciousness is false, which I doubt.
Then the existence even of the appearance of a physical universe is
false.
Etc.
Since Gödel, we know that, even limiting ourselves to 3p truth on the
number relations, almost all the true ones are unprovable in any theory.
Truth is *far* bigger than proof.
And concerning reality, in science there is no proof at all, as easily
explained by the ancient dream argument. In science we never prove
anything about reality. We postulate theories, and prove only things
*in* the theories. Then experiment can disprove a theory, but never
prove it to be correct.
Except QM, all theories in physics have been refuted at some time.





> Can you think of an experiment to verify comp ?


To make comp scientific, we can only show comp to be experimentally
refutable, and yes this has been done, using also the classical theory
of knowledge. COMP + classical theory of knowledge entails the
physical laws, so to refute comp you can compare the physics extracted
from comp, and the physics extrapolated from observation.

Bruno

http://iridia.ulb.ac.be/~marchal/


