Movie Graph Argument


Joseph Knight

Dec 9, 2011, 12:30:18 AM12/9/11
to everyth...@googlegroups.com
Hi Bruno

I was cruising the web when I stumbled upon a couple of PDFs by Jean-Paul Delahaye criticizing your work. (PDF 1, PDF 2). I don't speak French, but google translate was able to help me up to a point. The main point of PDF 1, in relation to the UDA, seems to be that there is not necessarily a notion of probability defined for truly indeterministic events. (Is this accurate? Are there any results in this area? I couldn't find much.)

The translation of PDF 2, with regard to the Movie Graph argument, was much harder for me to understand. Could you help me out with what Delahaye is saying here, and what your response is? I am just curious about these things :) I noticed some discussion of removing stones from heaps, and comparing that to the removal of subparts of the filmed graph, which to me seemed an illegitimate analogy, but I would like to hear your take...

Thanks

--
Joseph Knight

Bruno Marchal

Dec 9, 2011, 4:55:32 AM12/9/11
to everyth...@googlegroups.com
On 09 Dec 2011, at 06:30, Joseph Knight wrote:

Hi Bruno

I was cruising the web when I stumbled upon a couple of PDFs by Jean-Paul Delahaye criticizing your work. (PDF 1, PDF 2). I don't speak French, but google translate was able to help me up to a point. The main point of PDF 1, in relation to the UDA, seems to be that there is not necessarily a notion of probability defined for truly indeterministic events. (Is this accurate? Are there any results in this area? I couldn't find much.)

Jean-Paul Delahaye was the director of my thesis, and in 2004, when I asked him why I did not get the gift (money, publication of the thesis, and promotion of it) of the prize I got in Paris for my thesis, he told me that he had refuted it (!). I had to wait more than six years to see that "refutation", which appears to be only a pack of crap. Most objections are either rhetorical tricks or contain elementary logical errors. I may, or may not, answer those fake objections. I have no clue why Delahaye acts like that. I think that if he had a real objection he would have told me in private first, and not behind my back. He showed a lack of elementary scientific deontology. He might have been under some pressure from Paris, which itself had witnessed some pressure from Brussels to hide a Belgian-French academic scandal, but of course he denies this.

So Delahaye is that unique "scientist", whom I have mentioned in some posts, who pretends to refute my thesis. My own thesis director!



The translation of PDF 2, with regards to the Movie Graph argument, was much harder for me to understand. Could you help me out with what Delayahe is saying here, and what your response is? I am just curious about these things :) I noticed some discussion of removing stones from heaps, and comparing that to the removal of subparts of the filmed graph, which to me seemed to be an illegitimate analogy, but I would like to hear your take...

The heap argument was already made when I was working on the thesis, and I answered it with the stroboscopic argument, which he understood without problem at that time. Such an argument is also answered by Chalmers' fading qualia paper, and it would introduce zombies into the mechanist picture. We can go through all of this if you are interested, but it would be simpler to study the MGA first, for example here:


There are many other errors in Delahaye's PDF, like saying that there is no uniform measure on N (there are, they are just non-sigma-additive measures); and that remark is beside the point anyway, because the measure bears on infinite histories, as the iterated self-duplication experience, which is part of the UD's work, already illustrates.

All along his critique, he confuses truth and validity, the practical and the in-principle, deduction and speculation, science and continental philosophy. He also adds assumptions, and talks as if I were defending the truth of comp, which I never did (that mistake is not infrequent, and is usually made by people who do not take the time to read the argument).

I proposed to him, in 2004, a public talk at Lille, so that he could make his objections publicly, but he did not answer. I had to insist to get those PDFs. I did not expect him to make them public before I answered them, though, and the tone used does not invite me to answer them with serenity. He has not convinced me, nor anyone else, that he himself takes his argument seriously.

The only remark which can perhaps be taken seriously about MGA is the same as the one made by Jacques Mallah on this list: the idea that a physically inactive piece of the machine could have a physical activity relevant for a particular computation, that is, the idea that comp does not entail what I call "the 323 principle". But as Stathis Papaioannou said, this would introduce a magical (non-Turing-emulable) role for matter in the computation, and that is against the comp hypothesis. No one on this list seems to take seriously the idea that comp does not entail 323, but I am willing to clarify this.

Indeed, it is not yet entirely clear to me whether comp implies 323 *logically*, due to the ambiguity of the "qua computatio". In the worst case, I can put 323 in the defining hypothesis of comp, but most of my students, and the reactions on this in the Everything list, suggest it is not necessary. It just shows how far some people will go to avoid the conclusion, by making matter and mind quite magical.

I think it is better to study UDA 1-7 before the MGA, and if you want I can answer publicly the remarks by Delahaye, both on the UDA and the MGA. I might send him a mail so that he can participate. Note that the two PDFs do not address the mathematical and main part of the thesis (AUDA).

So ask any question, and if Delahaye's texts suggest one to you, that is all good for our discussion here.

Bruno




Joseph Knight

Dec 9, 2011, 1:47:32 PM12/9/11
to everyth...@googlegroups.com
On Fri, Dec 9, 2011 at 3:55 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:

On 09 Dec 2011, at 06:30, Joseph Knight wrote:

Hi Bruno

I was cruising the web when I stumbled upon a couple of PDFs by Jean-Paul Delahaye criticizing your work. (PDF 1, PDF 2). I don't speak French, but google translate was able to help me up to a point. The main point of PDF 1, in relation to the UDA, seems to be that there is not necessarily a notion of probability defined for truly indeterministic events. (Is this accurate? Are there any results in this area? I couldn't find much.)

Jean-Paul Delahaye was the director of my thesis, and in 2004, when I asked him why I did not get the gift (money, publication of the thesis, and promotion of it) of the price I got in Paris for my thesis, he told me that he has refuted it (!). I had to wait for more than six year to see that "refutation" which appears to be only a pack of crap.
 
So you never got the money, publication, or promotion? 

Most objection are either rhetorical tricks, or contains elementary logical errors. I will, or not, answer to those fake objections. I have no clue why Delahaye acts like that. I think that if he had a real objection he would have told me this in private first, and not under my back. He showed a lacking of elementary scientific deontology. He might have some pressure from Paris, who witnessed some pressure from Brussels to hide a belgo-french academical scandal, but of course he denies this. 

So Delahaye is that unique "scientist", that i have mentionned in some post, who pretend to refute my thesis. My director thesis!



The translation of PDF 2, with regards to the Movie Graph argument, was much harder for me to understand. Could you help me out with what Delayahe is saying here, and what your response is? I am just curious about these things :) I noticed some discussion of removing stones from heaps, and comparing that to the removal of subparts of the filmed graph, which to me seemed to be an illegitimate analogy, but I would like to hear your take...

The heap argument was already done when I was working on the thesis, and I answered it by the stroboscopic argument, which he did understand without problem at that time. Such an argument is also answered by Chalmers fading qualia paper, and would introduce zombie in the mechanist picture. We can go through all of this if you are interested, but it would be simpler to study the MGA argument first, for example here:


There are many other errors in Delahaye's PDF, like saying that there is no uniform measure on N (but there are just non sigma-additive measures), and also that remark is without purpose because the measure bears on infinite histories, like the iterated self-duplication experience, which is part of the UD's work, already illustrates. 

All along its critics, he confuses truth and validity, practical and in principle, deduction and speculation, science and continental philosophy. He also adds assumptions, and talk like if I was defending the truth of comp, which I never did (that mistake is not unfrequent, and is made by people who does not take the time to read the argument, usually). 

I proposed him, in 2004, to make a public talk at Lille, so that he can make his objection publicly, but he did not answer. I have to insist to get those PDF. I did not expect him to make them public before I answered them, though, and the tone used does not invite me to answer them with serenity. He has not convinced me, nor anyone else, that he takes himself his argument seriously.

The only remark which can perhaps be taken seriously about MGA is the same as the one by Jacques Mallah on this list: the idea that a physically inactive material piece of machine could have a physical activity relevant for a particular computation, that is the idea that comp does not entail what I call "the 323 principle". But as Stathis Papaioannou said, this does introduce a magic (non Turing emulable) role for matter in the computation, and that's against the comp hypothesis. No one seems to take the idea that comp does not entail 323 seriously in this list, but I am willing to clarify this.

Could you elaborate on the 323 principle? It sounds like a qualm that I also have had, to an extent, with the MGA and also with Tim Maudlin's argument against supervenience -- the notion of "inertness" or "physical inactivity" seems to be fairly vague.


Indeed, it is not yet entirely clear for me if comp implies 323 *logically*, due to the ambiguity of the "qua computatio". In the worst case, I can put 323 in the defining hypothesis of comp, but most of my student, and the reaction on this in the everything list suggests it is not necessary. It just shows how far some people are taken to avoid the conclusion by making matter and mind quite magical.

I think it is better to study the UDA1-7, before MGA, and if you want I can answer publicly the remarks by Delahaye, both on UDA and MGA.

I feel quite confident with both the UDA and the MGA (It took me a little while). I read sane04, and quite a few old Everything discussions, including the link you gave for the MGA as well as the other posts for MGA 2 and 3. 
 
I might send him a mail so that he can participate. Note that the two PDF does not address the mathematical and main part of the thesis (AUDA).

So ask any question, and if Delahaye's texts suggest some one to you, that is all good for our discussion here.

My main question here would be: when Delahaye says you can't (necessarily) have probabilities for indeterministic events, is that true? How would it affect the first few steps of the UDA if there were no defined probability for arriving in, say, Washington vs Moscow?

--
Joseph Knight

Russell Standish

Dec 10, 2011, 1:23:00 AM12/10/11
to everyth...@googlegroups.com
On Fri, Dec 09, 2011 at 12:47:32PM -0600, Joseph Knight wrote (to Bruno):
>
> Could you elaborate on the 323 principle? It sounds like a qualm that I
> also have had, to an extent, with the MGA and also with Tim Maudlin's
> argument against supervenience -- the notion of "inertness" or "physical
> inactivity" seems to be fairly vague.
>

I discuss this on page 76 of my book.

AFAICT, Maudlin's argument only works in a single-universe setting. What is inert in one universe is alive and kicking in other universes for which the counterfactuals are true.

So it seems that COMP and single-world, deterministic materialism are incompatible, but COMP and many-worlds materialism are not (i.e. supervenience across parallel worlds whose histories are compatible with our present).

But then the UDA shows that parallel realities must occur, and
consciousness must supervene across all consistent histories, and that
the subjective future is indeterminate.

Cheers

--

----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics hpc...@hpcoders.com.au
University of New South Wales http://www.hpcoders.com.au
----------------------------------------------------------------------------

Joseph Knight

Dec 10, 2011, 2:31:05 AM12/10/11
to everyth...@googlegroups.com


On Sat, Dec 10, 2011 at 12:23 AM, Russell Standish <li...@hpcoders.com.au> wrote:
On Fri, Dec 09, 2011 at 12:47:32PM -0600, Joseph Knight wrote (to Bruno):
>
> Could you elaborate on the 323 principle? It sounds like a qualm that I
> also have had, to an extent, with the MGA and also with Tim Maudlin's
> argument against supervenience -- the notion of "inertness" or "physical
> inactivity" seems to be fairly vague.
>

I discuss this on page 76 of my book.

AFAICT, Maudlin's argument only works in a single universe
setting. What is inert in one universe, is alive and kicking in other
universes for which the counterfactuals are true.

So it seems that COMP and single world, deterministic, materialism are
incompatible, but COMP and many worlds materialism is not (ie
supervenience across parallel worlds whose histories are compatible
with our present).

But then the UDA shows that parallel realities must occur, and
consciousness must supervene across all consistent histories, and that
the subjective future is indeterminate.
 
Thanks, that makes a lot of sense. (Actually, I have read your book, but I read it before I really understood the issues at hand so I missed a lot. It's a good book, especially considering the breadth of topics it covers!) So you are saying that consciousness supervenes on the goings-on of other regions of the multiverse?


--
Joseph Knight

Bruno Marchal

Dec 10, 2011, 7:39:07 AM12/10/11
to everyth...@googlegroups.com
On 09 Dec 2011, at 19:47, Joseph Knight wrote:

On Fri, Dec 9, 2011 at 3:55 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:

On 09 Dec 2011, at 06:30, Joseph Knight wrote:

Hi Bruno

I was cruising the web when I stumbled upon a couple of PDFs by Jean-Paul Delahaye criticizing your work. (PDF 1, PDF 2). I don't speak French, but google translate was able to help me up to a point. The main point of PDF 1, in relation to the UDA, seems to be that there is not necessarily a notion of probability defined for truly indeterministic events. (Is this accurate? Are there any results in this area? I couldn't find much.)

Jean-Paul Delahaye was the director of my thesis, and in 2004, when I asked him why I did not get the gift (money, publication of the thesis, and promotion of it) of the price I got in Paris for my thesis, he told me that he has refuted it (!). I had to wait for more than six year to see that "refutation" which appears to be only a pack of crap.
 
So you never got the money, publication, or promotion? 

I got only defamation.



Most objection are either rhetorical tricks, or contains elementary logical errors. I will, or not, answer to those fake objections. I have no clue why Delahaye acts like that. I think that if he had a real objection he would have told me this in private first, and not under my back. He showed a lacking of elementary scientific deontology. He might have some pressure from Paris, who witnessed some pressure from Brussels to hide a belgo-french academical scandal, but of course he denies this. 

So Delahaye is that unique "scientist", that i have mentionned in some post, who pretend to refute my thesis. My director thesis!



The translation of PDF 2, with regards to the Movie Graph argument, was much harder for me to understand. Could you help me out with what Delayahe is saying here, and what your response is? I am just curious about these things :) I noticed some discussion of removing stones from heaps, and comparing that to the removal of subparts of the filmed graph, which to me seemed to be an illegitimate analogy, but I would like to hear your take...

The heap argument was already done when I was working on the thesis, and I answered it by the stroboscopic argument, which he did understand without problem at that time. Such an argument is also answered by Chalmers fading qualia paper, and would introduce zombie in the mechanist picture. We can go through all of this if you are interested, but it would be simpler to study the MGA argument first, for example here:


There are many other errors in Delahaye's PDF, like saying that there is no uniform measure on N (but there are just non sigma-additive measures), and also that remark is without purpose because the measure bears on infinite histories, like the iterated self-duplication experience, which is part of the UD's work, already illustrates. 

All along its critics, he confuses truth and validity, practical and in principle, deduction and speculation, science and continental philosophy. He also adds assumptions, and talk like if I was defending the truth of comp, which I never did (that mistake is not unfrequent, and is made by people who does not take the time to read the argument, usually). 

I proposed him, in 2004, to make a public talk at Lille, so that he can make his objection publicly, but he did not answer. I have to insist to get those PDF. I did not expect him to make them public before I answered them, though, and the tone used does not invite me to answer them with serenity. He has not convinced me, nor anyone else, that he takes himself his argument seriously.

The only remark which can perhaps be taken seriously about MGA is the same as the one by Jacques Mallah on this list: the idea that a physically inactive material piece of machine could have a physical activity relevant for a particular computation, that is the idea that comp does not entail what I call "the 323 principle". But as Stathis Papaioannou said, this does introduce a magic (non Turing emulable) role for matter in the computation, and that's against the comp hypothesis. No one seems to take the idea that comp does not entail 323 seriously in this list, but I am willing to clarify this.

Could you elaborate on the 323 principle?

With pleasure. Asap.



It sounds like a qualm that I also have had, to an extent, with the MGA and also with Tim Maudlin's argument against supervenience -- the notion of "inertness" or "physical inactivity" seems to be fairly vague.

I will explain why you can deduce something precise despite the vagueness of that notion. In fact, that vagueness is more of a problem for a materialist than for an immaterialist, in the end.





Indeed, it is not yet entirely clear for me if comp implies 323 *logically*, due to the ambiguity of the "qua computatio". In the worst case, I can put 323 in the defining hypothesis of comp, but most of my student, and the reaction on this in the everything list suggests it is not necessary. It just shows how far some people are taken to avoid the conclusion by making matter and mind quite magical.

I think it is better to study the UDA1-7, before MGA, and if you want I can answer publicly the remarks by Delahaye, both on UDA and MGA.

I feel quite confident with both the UDA and the MGA (It took me a little while).

Nice.


I read sane04, and quite a few old Everything discussions, including the link you gave for the MGA as well as the other posts for MGA 2 and 3. 
 
I might send him a mail so that he can participate. Note that the two PDF does not address the mathematical and main part of the thesis (AUDA).

So ask any question, and if Delahaye's texts suggest some one to you, that is all good for our discussion here.

My main question here would be: when Delahaye says you can't (necessarily) have probabilities for indeterministic events, is that true?

Simplifying things a little bit, I do agree with that statement. There are many ways to handle indeterminacy and uncertainty; probability measures are just a particular case. But the UDA does not rely on probability at all. All that matters, to understand that physics becomes a branch of arithmetic/computer science, is that whatever means you use to quantify the first person indeterminacy, that quantification will not change when you introduce the delays of reconstitution, the shift real/virtual, etc.
Formally, the math already excludes probability in favor of a credibility measure. But for simplicity of explanation, I often use probability for some precise protocols. The p = 1/2 for simple duplication is reasonable from the numerical identity of the reconstituted observers. We have a symmetry which cannot be hoped for with any coin!





How would it affect the first few steps of the UDA if there were no defined probability for arriving in, say, Washington vs Moscow?


Well, in that case, there are probability measures. In the infinite self-duplication, you can even use the usual Gaussian. But even if there were no such distribution, the result remains unchanged: physics becomes a calculus of first person uncertainty, with or without probability. As I said, only the invariance of that uncertainty calculus matters for the proof of the reversal.
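A small illustrative aside (not part of Bruno's argument): in n iterated W/M duplications there are 2^n continuations, and the number of W's in a continuation is binomially distributed, which is where the Gaussian shape mentioned above comes from. A minimal Python check of that claim:

```python
# Illustration only: with n iterated W/M duplications there are 2**n
# continuations; the fraction containing exactly k W's is binomial(n, 1/2),
# hence approximately Gaussian for large n.
from math import comb

n = 16
total = 2 ** n
for k in range(n + 1):
    print(k, comb(n, k) / total)   # peaks at k = n/2, bell-shaped
```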

Tell me if this answers your question.

Bruno



Bruno Marchal

Dec 10, 2011, 8:13:33 AM12/10/11
to everyth...@googlegroups.com

On 10 Dec 2011, at 07:23, Russell Standish wrote:

> On Fri, Dec 09, 2011 at 12:47:32PM -0600, Joseph Knight wrote (to
> Bruno):
>>
>> Could you elaborate on the 323 principle? It sounds like a qualm
>> that I
>> also have had, to an extent, with the MGA and also with Tim Maudlin's
>> argument against supervenience -- the notion of "inertness" or
>> "physical
>> inactivity" seems to be fairly vague.
>>
>
> I discuss this on page 76 of my book.
>
> AFAICT, Maudlin's argument only works in a single universe
> setting. What is inert in one universe, is alive and kicking in other
> universes for which the counterfactuals are true.

If that were true, I am not sure a quantum computer could still be emulable by a classical computer, but QM contradicts this. Nowhere does Maudlin postulate a single universe. He does postulate that a computation can be done in one single universe (but that is correct; even a quantum computation can be done in a single universe).


>
> So it seems that COMP and single world, deterministic, materialism are
> incompatible, but COMP and many worlds materialism is not (ie
> supervenience across parallel worlds whose histories are compatible
> with our present).

I am not sure about that. You might elaborate, or I might try to explain directly why "materialist many-worlds" cannot work, even in the case where we have a quantum algorithm working in the brain. I mean, in the sense of making physics again fundamental/primary.
I have to think about how to explain this.


>
> But then the UDA shows that parallel realities must occur, and
> consciousness must supervene across all consistent histories, and that
> the subjective future is indeterminate.

OK.

Bruno


http://iridia.ulb.ac.be/~marchal/

Russell Standish

Dec 11, 2011, 6:05:01 PM12/11/11
to everyth...@googlegroups.com
On Sat, Dec 10, 2011 at 02:13:33PM +0100, Bruno Marchal wrote:
>
> On 10 Dec 2011, at 07:23, Russell Standish wrote:
>
> >AFAICT, Maudlin's argument only works in a single universe
> >setting. What is inert in one universe, is alive and kicking in other
> >universes for which the counterfactuals are true.
>
> If that was true, I am not sure a quantum computer could still be
> emulable by a classical computer, but QM contradicts this. Nowhere
> does Maudlin postulate a single universe. But he postulates that a
> computation can be done in one single universe (but that is correct,
> even a quantum computation can be done in a single universe).
>

From my somewhat limited understanding, emulating a quantum computer
with a classical computer involves running something like a dovetailer
algorithm. But it is no longer absurd to suggest that merely executing
the steps of the dovetailer trace instantiates all of the programs the
dovetailer is executing.
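For readers who have not met the term, here is a minimal toy dovetailer, an illustrative sketch only (the names are invented, and it is not the UD itself): it interleaves the execution of an unbounded family of programs so that every program eventually receives arbitrarily many steps.

```python
# Toy dovetailer sketch (illustration only; names are hypothetical).
# Round n starts program n, then advances every already-started program
# by one step, so each program gets unboundedly many steps in the limit.

def program(i):
    """Stand-in for the i-th program: an endless generator of steps."""
    step = 0
    while True:
        yield (i, step)
        step += 1

def dovetail(rounds):
    started, trace = [], []
    for n in range(rounds):
        started.append(program(n))
        for p in started:
            trace.append(next(p))
    return trace

print(dovetail(4))
# [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0), (0, 3), (1, 2), (2, 1), (3, 0)]
```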

Maudlin's argument relies on the absurdity that the presence or absence of inert parts bears on whether something is conscious. This absurdity only works in a single-universe setting, however. If your computer is embedded in a Multiverse, the absurdity vanishes, because those inert parts are no longer inert. If you then fold the multiverse back into a single universe by dovetailing, one can reapply the Maudlin move. But then, in that case, one can embed that result into a Multiverse, and the cycle repeats.

The question is - where is the consciousness in all this? I think it
must move with the levels - and given the UDA and COMP, I would say
that consciousness appears at the Multiverse level, not the single
universe level.

BTW - I had a similar problem with your MGA - it is not intrinsically
absurd to me that a recording can be conscious. From the right point
of view (presumably that of the consciousness itself - aka the "inside
view"), it seems plausible that a recording could be conscious.

Joseph Knight

Dec 11, 2011, 10:39:44 PM12/11/11
to everyth...@googlegroups.com
On Sat, Dec 10, 2011 at 6:39 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:

On 09 Dec 2011, at 19:47, Joseph Knight wrote:

On Fri, Dec 9, 2011 at 3:55 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:

On 09 Dec 2011, at 06:30, Joseph Knight wrote:

Hi Bruno

I was cruising the web when I stumbled upon a couple of PDFs by Jean-Paul Delahaye criticizing your work. (PDF 1, PDF 2). I don't speak French, but google translate was able to help me up to a point. The main point of PDF 1, in relation to the UDA, seems to be that there is not necessarily a notion of probability defined for truly indeterministic events. (Is this accurate? Are there any results in this area? I couldn't find much.)

Jean-Paul Delahaye was the director of my thesis, and in 2004, when I asked him why I did not get the gift (money, publication of the thesis, and promotion of it) of the price I got in Paris for my thesis, he told me that he has refuted it (!). I had to wait for more than six year to see that "refutation" which appears to be only a pack of crap.
 
So you never got the money, publication, or promotion? 

I get only defamation. 



Most objection are either rhetorical tricks, or contains elementary logical errors. I will, or not, answer to those fake objections. I have no clue why Delahaye acts like that. I think that if he had a real objection he would have told me this in private first, and not under my back. He showed a lacking of elementary scientific deontology. He might have some pressure from Paris, who witnessed some pressure from Brussels to hide a belgo-french academical scandal, but of course he denies this. 

So Delahaye is that unique "scientist", that i have mentionned in some post, who pretend to refute my thesis. My director thesis!



The translation of PDF 2, with regards to the Movie Graph argument, was much harder for me to understand. Could you help me out with what Delayahe is saying here, and what your response is? I am just curious about these things :) I noticed some discussion of removing stones from heaps, and comparing that to the removal of subparts of the filmed graph, which to me seemed to be an illegitimate analogy, but I would like to hear your take...

The heap argument was already done when I was working on the thesis, and I answered it by the stroboscopic argument, which he did understand without problem at that time. Such an argument is also answered by Chalmers fading qualia paper, and would introduce zombie in the mechanist picture. We can go through all of this if you are interested, but it would be simpler to study the MGA argument first, for example here:


There are many other errors in Delahaye's PDF, like saying that there is no uniform measure on N (but there are just non sigma-additive measures), and also that remark is without purpose because the measure bears on infinite histories, like the iterated self-duplication experience, which is part of the UD's work, already illustrates. 

All along its critics, he confuses truth and validity, practical and in principle, deduction and speculation, science and continental philosophy. He also adds assumptions, and talk like if I was defending the truth of comp, which I never did (that mistake is not unfrequent, and is made by people who does not take the time to read the argument, usually). 

I proposed him, in 2004, to make a public talk at Lille, so that he can make his objection publicly, but he did not answer. I have to insist to get those PDF. I did not expect him to make them public before I answered them, though, and the tone used does not invite me to answer them with serenity. He has not convinced me, nor anyone else, that he takes himself his argument seriously.

The only remark which can perhaps be taken seriously about MGA is the same as the one by Jacques Mallah on this list: the idea that a physically inactive material piece of machine could have a physical activity relevant for a particular computation, that is the idea that comp does not entail what I call "the 323 principle". But as Stathis Papaioannou said, this does introduce a magic (non Turing emulable) role for matter in the computation, and that's against the comp hypothesis. No one seems to take the idea that comp does not entail 323 seriously in this list, but I am willing to clarify this.

Could you elaborate on the 323 principle?

With pleasure. Asap.



It sounds like a qualm that I also have had, to an extent, with the MGA and also with Tim Maudlin's argument against supervenience -- the notion of "inertness" or "physical inactivity" seems to be fairly vague.

I will explain why you can deduce something precise despite the vagueness of that notion. In fact that vagueness is more a problem fro a materialist than an immaterialist in fine.

How so? 




Indeed, it is not yet entirely clear for me if comp implies 323 *logically*, due to the ambiguity of the "qua computatio". In the worst case, I can put 323 in the defining hypothesis of comp, but most of my student, and the reaction on this in the everything list suggests it is not necessary. It just shows how far some people are taken to avoid the conclusion by making matter and mind quite magical.

I think it is better to study the UDA1-7, before MGA, and if you want I can answer publicly the remarks by Delahaye, both on UDA and MGA.

I feel quite confident with both the UDA and the MGA (It took me a little while).

Nice.


I read sane04, and quite a few old Everything discussions, including the link you gave for the MGA as well as the other posts for MGA 2 and 3. 
 
I might send him a mail so that he can participate. Note that the two PDF does not address the mathematical and main part of the thesis (AUDA).

So ask any question, and if Delahaye's texts suggest some one to you, that is all good for our discussion here.

My main question here would be: when Delahaye says you can't (necessarily) have probabilities for indeterministic events, is that true?

Simplifying things a little bit I do agree with that statement. There are many ways to handle indeterminacies and uncertainty. Probability measure are just a particular case. But UDA does not rely at all on probability. All what matters to understand that physics become a branch of arithmetic/computer science is that whatever means you can use to quantify the first person indeterminacy, those quantification will not change when you introduce the delays of reconstitution, the shift real/virtual, etc.
Formally, the math excludes already probability in favor of credibility measure. But for the simplicity of the explanation, I use often probability for some precise protocol. The p = 1/2 for simple duplication is reasonable from the numerical identity of reconstituted observers. We have a symmetry which cannot be hoped for any coins!

Credibility measure? What's that? 




How would it affect the first few steps of the UDA if there were no defined probability for arriving in, say, Washington vs Moscow?


Well, in that case, there are probability measure. In the infinite self-duplication, you can even use the usual gaussian. But even if there were no such distribution, the result remains unchanged: physics becomes a calculus of first person uncertainty with or without probability. As I said, only the invariance of that uncertainty calculus matter for the proof of the reversal.

Tell me if this answer your question.

That seems to make sense. Thanks




--
Joseph Knight

Craig Weinberg

Dec 12, 2011, 7:55:54 AM12/12/11
to Everything List
On Dec 11, 6:05 pm, Russell Standish <li...@hpcoders.com.au> wrote:

> The question is - where is the consciousness in all this? I think it
> must move with the levels - and given the UDA and COMP, I would say
> that consciousness appears at the Multiverse level, not the single
> universe level.

I don't see why invoking a Multiverse is preferable to a continuum of
public and private sense channels within one universe. If
consciousness appears at the Multiverse level, should we not be
conscious of the Multiverse? Instead, it is just the opposite. The one
characteristic which is shared most pervasively among sane subjects is
realism - a practical certainty of a single shared sense context which
is uncompromisingly and absolutely real.

>
> BTW - I had a similar problem with your MGA - it is not intrinsically
> absurd to me that a recording can be conscious. From the right point
> of view (presumably that of the consciousness itself - aka the "inside
> view"), it seems plausible that a recording could be conscious.

If recordings were conscious, wouldn't they evolve behind our backs?
Shouldn't Bugs Bunny continue to have new adventures inside the film
can that we can watch forever? It makes more sense to me that
recordings are not conscious, but rather they are artifacts arranged
so that consciousness may play them as an indirect or specular
sensorimotive experience. We have no reason to assume that
consciousness, or anything else, can be a disembodied arithmetic.

Craig

Stephen P. King

Dec 12, 2011, 9:05:08 AM12/12/11
to everyth...@googlegroups.com
Hi!

On 12/12/2011 7:55 AM, Craig Weinberg wrote:
> On Dec 11, 6:05 pm, Russell Standish<li...@hpcoders.com.au> wrote:
>
>> The question is - where is the consciousness in all this? I think it
>> must move with the levels - and given the UDA and COMP, I would say
>> that consciousness appears at the Multiverse level, not the single
>> universe level.
> I don't see why invoking a Multiverse is preferable to a continuum of
> public and private sense channels within one universe. If
> consciousness appears at the Multiverse level, should we not be
> conscious of the Multiverse? Instead, it is just the opposite. The one
> characteristic which is shared most pervasively among sane subjects is
> realism - a practical certainty of a single shared sense context which
> is uncompromisingly and absolutely real.

This is a good point, but let me flesh it out a bit. It seems to me that in a way the Multiverse is isomorphic to a "continuum of public and private sense channels within one universe"; the difference is just one of point of view. The Multiverse view is an abstraction taken as if one were somehow "outside" of the multiverse, a literal impossibility, for that would require a reality allowing observers outside of the multiverse; the latter view is an abstraction taking into account a plurality of disjoint 1p.
That a collection of communicating entities will have a collective
sense of a single context that is "real" follows immediately from the
necessity of mutual non-contradiction over those communications. If
there is no coherent content of one or more entities that would
contradict the "realness" of our world, why should we not have the
experience of a single world? We usually put this requirement onto the
notion of universal laws, but that assessment requires the unnecessary
explanatory burden of some prior measure on the possible worlds, a
burden that need not exist in the first place!

>
>> BTW - I had a similar problem with your MGA - it is not intrinsically
>> absurd to me that a recording can be conscious. From the right point
>> of view (presumably that of the consciousness itself - aka the "inside
>> view"), it seems plausible that a recording could be conscious.
> If recordings were conscious, wouldn't they evolve behind our backs?
> Shouldn't Bugs Bunny continue to have new adventures inside the film
> can that we can watch forever? It makes more sense to me that
> recordings are not conscious, but rather they are artifacts arranged
> so that consciousness may play them as an indirect or specular
> sensorimotive experience. We have no reason to assume that
> consciousness, or anything else, can be a disembodied arithmetic.

If recordings were conscious, then why bother with the "real thing"?!?!?!?!

Onward!

Stephen

Craig Weinberg

Dec 12, 2011, 10:09:40 AM12/12/11
to Everything List
On Dec 12, 9:05 am, "Stephen P. King" <stephe...@charter.net> wrote:
> Hi!
>
> On 12/12/2011 7:55 AM, Craig Weinberg wrote:
>
> > On Dec 11, 6:05 pm, Russell Standish<li...@hpcoders.com.au>  wrote:
>
> >> The question is - where is the consciousness in all this? I think it
> >> must move with the levels - and given the UDA and COMP, I would say
> >> that consciousness appears at the Multiverse level, not the single
> >> universe level.
> > I don't see why invoking a Multiverse is preferable to a continuum of
> > public and private sense channels within one universe. If
> > consciousness appears at the Multiverse level, should we not be
> > conscious of the Multiverse? Instead, it is just the opposite. The one
> > characteristic which is shared most pervasively among sane subjects is
> > realism - a practical certainty of a single shared sense context which
> > is uncompromisingly and absolutely real.
>
>      This is a good point but let me flesh it out a bit. It seems to me
> that in a way the Multiverse is isomorphic to a "continuum of public and
> private sense channels within one universe ", the difference is just one
> of point of view. The Multiverse view is an abstraction taken as if one
> where somehow "outside" of the multiverse a literal impossibility for
> such would require that there exist a reality allowing observers outside
> of the multiverse and the latter is an abstraction taking into account a
> plurality of disjoint 1p.

Right, multiverse can only ever be a model from the hypothetical
omniscient voyeur's perspective (HOVP) since each universe can only
contain observers of the universe which they inhabit. I suppose the
main difference between a single universe that embraces multisense
realism channels (like an Indra's Net where each node experiences the
other nodes in a way that represents them in terms which are
potentially sensible to them) and a true multiverse model is that the
multiverse provides a separate timeline for every possibility.

Since we can't tell the difference between a universe which seems to
make sense to us only because of the continuity of the single
anthropic timeline which we happen to be part of, or whether our naive
perception of a single universal timeline is more accurate, I tend to
give the benefit of the doubt to sense and significance. Sense gives
us the ability to sniff out better interpretations and precipitate a
consensus rather than splinter into billions of unknowable
possibilities. It seems like a universe anchored in multiverse would
have only relativistic realism. It seems like our waking experience
should be more disjointed and dreamlike as every moment would be a
complex dice throw to retain any resemblance to anything that came
before. It seems like there would be a lot of complex unexplained
rules governing possibilities of universes so that more universes
would have to be invoked to describe how that governing is arrived at.

If by multiverse we just mean private tunnel realities which overlap,
reflect, and refract each other, as well as project public consensus
realities, then sure, I'm all for that.

>      That a collection of communicating entities will have a collective
> sense of a single context that is "real" follows immediately from the
> necessity of mutual non-contradiction over those communications. If
> there is no coherent content of one or more entities that would
> contradict the "realness" of our world, why should we not have the
> experience of a single world? We usually put this requirement onto the
> notion of universal laws, but that assessment requires the unnecessary
> explanatory burden of some prior measure on the possible worlds, a
> burden that need not exist in the first place!

Right, sharing a set of common senses produces the same effect that we
would attribute to externally imposed 'laws'.

>
>
>
> >> BTW - I had a similar problem with your MGA - it is not intrinsically
> >> absurd to me that a recording can be conscious. From the right point
> >> of view (presumably that of the consciousness itself - aka the "inside
> >> view"), it seems plausible that a recording could be conscious.
> > If recordings were conscious, wouldn't they evolve behind our backs?
> > Shouldn't Bugs Bunny continue to have new adventures inside the film
> > can that we can watch forever? It makes more sense to me that
> > recordings are not conscious, but rather they are artifacts arranged
> > so that consciousness may play them as an indirect or specular
> > sensorimotive experience. We have no reason to assume that
> > consciousness, or anything else, can be a disembodied arithmetic.
>
>      If recordings where conscious then why bother with the "real
> thing"?!?!?!?!

Exactly. I was just thinking yesterday - if we could take a drug which would give us the delusion that we had done something great, even if everyone else on Earth shared that delusion, I think we would all agree that it isn't the same thing as actually doing something great. There has to be something that is actually more real about reality than fantasy, or else none of us would care very much about reality.

Craig

Bruno Marchal

Dec 12, 2011, 10:11:54 AM12/12/11
to everyth...@googlegroups.com

On 12 Dec 2011, at 00:05, Russell Standish wrote:

> On Sat, Dec 10, 2011 at 02:13:33PM +0100, Bruno Marchal wrote:
>>
>> On 10 Dec 2011, at 07:23, Russell Standish wrote:
>>
>>> AFAICT, Maudlin's argument only works in a single universe
>>> setting. What is inert in one universe, is alive and kicking in
>>> other
>>> universes for which the counterfactuals are true.
>>
>> If that was true, I am not sure a quantum computer could still be
>> emulable by a classical computer, but QM contradicts this. Nowhere
>> does Maudlin postulate a single universe. But he postulates that a
>> computation can be done in one single universe (but that is correct,
>> even a quantum computation can be done in a single universe).
>>
>
> From my somewhat limited understanding, emulating a quantum computer
> with a classical computer involves running something like a dovetailer
> algorithm. But it is no longer absurd to suggest that merely executing
> the steps of the dovetailer trace instantiates all of the programs the
> dovetailer is executing.
>
> Maudlin's argument relies on the absurdity the the presence or absence
> of inert parts bears on whether something is consious. This absurdity
> only works in a single universe setting, however. If your computer is
> embedded in a Multiverse, the absurdity vanishes, because thiose inert
> parts are no longer inert.

But they do not play a part in the computation, at the correct
substitution level.
They are playing a part concerning the first person indeterminacy,
like in the UD*, or in QM physics. But that is derived (and has to be)
from the indeterminacy.


> If you then fold the multiverse back into a
> single universe by dovetailing, one can then reapply the Maudlin
> move.

Indeed. That is the key point.

> But then, in that case, one can embed that result into a
> Multiverse, and the cycle repeats.

I don't think we can. That would be like saying that we have to start from the quantum multiverse, but the reasoning shows that we can start from any universal machinery, like the numbers. To start from the multiverse would be treacherous (for the derivation of matter) and ambiguous (we don't assume QM). And even with QM, the multiverse notion is quite complex and controversial: is it a non-computational multi-dream (as forced by comp), or is it a multi-physical material reality (as forbidden by the MGA)?


>
> The question is - where is the consciousness in all this? I think it
> must move with the levels - and given the UDA and COMP, I would say
> that consciousness appears at the Multiverse level, not the single
> universe level.

That is right, but with comp that "multiverse" is the mathematical
structure which needs to be entirely derived from the theory of
consciousness or from the self-reference logics.

>
> BTW - I had a similar problem with your MGA - it is not intrinsically
> absurd to me that a recording can be conscious.

There is no computation in a recording. There is only a fixed description of a computation. In arithmetic, it is like confusing p and Bp.
With p sigma_1, p and Bp look alike (which explains the subtlety of that nuance) in the sense that we have both
p -> Bp and
Bp -> p.
But Bp -> p is only true (provable at some [ ]*-logic level), not provable by the machine, so p and Bp will still behave in logically different ways.
Then you have the stroboscopic argument, which shows that a recording like a movie is not well defined in time and space.
But the simplest way, imo, to see that a recording cannot be conscious (with comp, 'qua computatio') is that there are no longer any computations done by a recording.
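A toy illustration of that last point (my own, with invented names, and not a substitute for the argument): a live function responds correctly to counterfactual inputs, while a replayed recording only re-emits the steps of one particular run.

```python
# Illustration only: computation vs. replay of a recording.

def live_adder(x, y):
    """Actually computes, so it also handles inputs that never occurred."""
    return x + y

# A "recording": the fixed trace of one particular run (adding 3 and 4).
recorded_trace = [("load", 3), ("load", 4), ("add", 7)]

def replay(trace):
    """Re-emits the stored steps; nothing is computed, and no counterfactual
    input can make any difference to what is replayed."""
    return list(trace)

print(live_adder(3, 4))        # 7
print(live_adder(5, 9))        # 14 -- the counterfactual case is handled
print(replay(recorded_trace))  # always the same fixed steps
```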

> From the right point
> of view (presumably that of the consciousness itself - aka the "inside
> view"), it seems plausible that a recording could be conscious.

Still another argument is that no piece of the movie has any causal relationship with any other part, and so can be removed, eventually making a *particular* consciousness (a dream about an ice cream, for example) supervene on the vacuum.

What is correct is that consciousness is related to all the events that made the recording possible, but this is only in virtue of some numbers having some special relations with other numbers, and we are back to the computationalist supervenience thesis.

We might come back to the MGA, given some other questions on the list. So if this is unclear you might ask questions, or perhaps wait until I re-explain the whole argument.

Bruno

http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

Dec 12, 2011, 11:05:28 AM12/12/11
to everyth...@googlegroups.com
On 12 Dec 2011, at 04:39, Joseph Knight wrote:

On Sat, Dec 10, 2011 at 6:39 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:

On 09 Dec 2011, at 19:47, Joseph Knight wrote:

On Fri, Dec 9, 2011 at 3:55 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:


The heap argument was already done when I was working on the thesis, and I answered it by the stroboscopic argument, which he did understand without problem at that time. Such an argument is also answered by Chalmers fading qualia paper, and would introduce zombie in the mechanist picture. We can go through all of this if you are interested, but it would be simpler to study the MGA argument first, for example here:


There are many other errors in Delahaye's PDF, like saying that there is no uniform measure on N (but there are just non sigma-additive measures), and also that remark is without purpose because the measure bears on infinite histories, like the iterated self-duplication experience, which is part of the UD's work, already illustrates. 

All along its critics, he confuses truth and validity, practical and in principle, deduction and speculation, science and continental philosophy. He also adds assumptions, and talk like if I was defending the truth of comp, which I never did (that mistake is not unfrequent, and is made by people who does not take the time to read the argument, usually). 

I proposed him, in 2004, to make a public talk at Lille, so that he can make his objection publicly, but he did not answer. I have to insist to get those PDF. I did not expect him to make them public before I answered them, though, and the tone used does not invite me to answer them with serenity. He has not convinced me, nor anyone else, that he takes himself his argument seriously.

The only remark which can perhaps be taken seriously about MGA is the same as the one by Jacques Mallah on this list: the idea that a physically inactive material piece of machine could have a physical activity relevant for a particular computation, that is the idea that comp does not entail what I call "the 323 principle". But as Stathis Papaioannou said, this does introduce a magic (non Turing emulable) role for matter in the computation, and that's against the comp hypothesis. No one seems to take the idea that comp does not entail 323 seriously in this list, but I am willing to clarify this.

Could you elaborate on the 323 principle?

With pleasure. Asap.



It sounds like a qualm that I also have had, to an extent, with the MGA and also with Tim Maudlin's argument against supervenience -- the notion of "inertness" or "physical inactivity" seems to be fairly vague.

I will explain why you can deduce something precise despite the vagueness of that notion. In fact that vagueness is more a problem fro a materialist than an immaterialist in fine.

How so? 

With comp, if you want to introduce a physical supervenience thesis, "physical activity" can only mean "physical activity relevant to the computation". So we can say that a physical piece of a computer is inert relative to a set S of computations in case the computations in S are executed exactly the same with and without that physical piece. Digitalness makes the notion of exactness meaningful here. If someone says that such a piece of matter still has some physical activity involved in the computation, it can only mean that we have not chosen the right level of implementation of those computations. If a materialist can convince someone that such a piece, which has no role for the computations in S, nevertheless has some role for S bearing a first person perspective, then we can no longer say "yes" to the doctor in virtue of building a device which correctly emulates the computations in S (assuming some of them correspond to a conscious experience).
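A toy example (invented names, purely to pin down the sense of "inert relative to a set S of computations"): a component that never fires during the runs in S can be removed without changing those runs at all.

```python
# Illustration only: a "spare gate" that is present but plays no role in
# the computations of S, so removing it leaves the trace of S unchanged.

def run(inputs, with_spare_gate=True):
    """A tiny machine that doubles non-negative inputs; the spare gate
    would only ever fire on negative inputs."""
    trace = []
    for x in inputs:
        if x < 0 and with_spare_gate:
            trace.append(("spare_gate", -x))
        else:
            trace.append(("double", 2 * x))
    return trace

S = [0, 1, 2, 3]                        # runs on which the gate never fires
assert run(S, True) == run(S, False)    # the gate is inert relative to S
print(run(S, True))
```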








Indeed, it is not yet entirely clear for me if comp implies 323 *logically*, due to the ambiguity of the "qua computatio". In the worst case, I can put 323 in the defining hypothesis of comp, but most of my student, and the reaction on this in the everything list suggests it is not necessary. It just shows how far some people are taken to avoid the conclusion by making matter and mind quite magical.

I think it is better to study the UDA1-7, before MGA, and if you want I can answer publicly the remarks by Delahaye, both on UDA and MGA.

I feel quite confident with both the UDA and the MGA (It took me a little while).

Nice.


I read sane04, and quite a few old Everything discussions, including the link you gave for the MGA as well as the other posts for MGA 2 and 3. 
 
I might send him a mail so that he can participate. Note that the two PDF does not address the mathematical and main part of the thesis (AUDA).

So ask any question, and if Delahaye's texts suggest some one to you, that is all good for our discussion here.

My main question here would be: when Delahaye says you can't (necessarily) have probabilities for indeterministic events, is that true?

Simplifying things a little bit I do agree with that statement. There are many ways to handle indeterminacies and uncertainty. Probability measure are just a particular case. But UDA does not rely at all on probability. All what matters to understand that physics become a branch of arithmetic/computer science is that whatever means you can use to quantify the first person indeterminacy, those quantification will not change when you introduce the delays of reconstitution, the shift real/virtual, etc.
Formally, the math excludes already probability in favor of credibility measure. But for the simplicity of the explanation, I use often probability for some precise protocol. The p = 1/2 for simple duplication is reasonable from the numerical identity of reconstituted observers. We have a symmetry which cannot be hoped for any coins!

Credibility measure? What's that? 

Let T be a finite set, and 2^T its power set. A belief function b is a function from 2^T to [0, 1], quite comparable to a probability function. We have, for example, b({ }) = 0 and b(T) = 1. The main difference is that we don't ask for the Poincaré (inclusion-exclusion) identity; we ask only for an inequality:

if the Ai are subsets of T, we ask that

b(A1 U A2 U ... U An) >= Sum_i b(Ai) - Sum_{i<j} b(Ai inter Aj) + ...

For probability we ask for equality. I see that in English they use "belief", but in French we use "croyance" (credibility).

This leads to a variant of the probability calculus which can handle the notion of ignorance better than the Bayesian approach does.
You might Google "Dempster-Shafer theory of evidence".
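A small sketch of the standard Dempster-Shafer construction (the particular masses below are made up, and this is only an illustration, not Bruno's own formalism): a belief function built from a mass assignment m over subsets of T, showing b({ }) = 0, b(T) = 1, and superadditivity rather than additivity.

```python
# Illustration only: a Dempster-Shafer belief function on T = {W, M}.
# Bel(A) = sum of the masses of all subsets of A.

T = frozenset({"W", "M"})
m = {frozenset(): 0.0,            # made-up mass assignment, sums to 1
     frozenset({"W"}): 0.2,
     frozenset({"M"}): 0.3,
     T: 0.5}                      # mass left on the whole of T expresses ignorance

def belief(A):
    return sum(mass for B, mass in m.items() if B <= A)

print(belief(frozenset()))        # 0.0
print(belief(frozenset({"W"})))   # 0.2
print(belief(frozenset({"M"})))   # 0.3
print(belief(T))                  # 1.0
# Superadditive rather than additive: belief({W}) + belief({M}) = 0.5 <= belief(T) = 1.
```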

Modal logics can be used to formalize the "certainty" case. Alechina, in Amsterdam, has shown that the normal modal logic K plus the formula D (Bp -> Dp) formalizes this "certainty" completely, and we get something similar in the relevant variants of the self-reference logics (B = [ ], D = < >), except that we lose the modal necessitation rule, as with the quantum logics.
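For reference, a textbook presentation of the modal system being named (standard material, not specific to Alechina's result), writing B for [ ] and D for < >:

```latex
\begin{align*}
\text{K:}\quad   & B(p \to q) \to (Bp \to Bq) \\
\text{D:}\quad   & Bp \to Dp, \qquad Dp \equiv \lnot B \lnot p \\
\text{Nec:}\quad & \text{from } \vdash p \text{ infer } \vdash Bp
  \quad \text{(the rule that is lost in the self-reference variants)}
\end{align*}
```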

This is just one example of a calculus of uncertainty (apart from probability). There are theories of possibility, of plausibility, etc. 
The Dempster-Shafer theory of evidence has had some success in criminal inquests, medical diagnosis, locating secret submarines, debugging, and inductive inference. It is particularly useful when we don't want to ascribe a uniform probability in case of ignorance, which is the case when we do not know the content of the set T, that is, when we have only partial knowledge of the elementary outcomes of the random experiments. It also works better on vague or fuzzy sets. I am not an expert in that field, but my late thesis director, Philippe Smets, the founder of IRIDIA, was working on an extension of such belief function theory (the theory of evidence).







How would it affect the first few steps of the UDA if there were no defined probability for arriving in, say, Washington vs Moscow?


Well, in that case, there are probability measure. In the infinite self-duplication, you can even use the usual gaussian. But even if there were no such distribution, the result remains unchanged: physics becomes a calculus of first person uncertainty with or without probability. As I said, only the invariance of that uncertainty calculus matter for the proof of the reversal.

Tell me if this answer your question.

That seems to make sense. Thanks

OK. Ask any question in case you want to grasp completely, or, who knows, refute the UDA argument. Please, for step 8, the MGA, use the most recent version, which exists only on this list:

Bruno




meekerdb

Dec 12, 2011, 12:36:03 PM12/12/11
to everyth...@googlegroups.com
On 12/12/2011 4:55 AM, Craig Weinberg wrote:
> If recordings were conscious, wouldn't they evolve behind our backs?

Why would consciousness imply evolution? Evolution requires reproduction and differential
selection.

Brent

Stephen P. King

unread,
Dec 12, 2011, 2:06:16 PM12/12/11
to everyth...@googlegroups.com
Hi,

     Reproduction and differential selection seem to be necessary
aspects of consciousness! Maybe there is not so much of a gap here
between evolution and consciousness. But Craig's point is one that I
echo: what is the necessary reason for actual 1p and qualia if recordings
are sufficient? Forget the question of how the recordings were made in
the first place...

Onward!

Stephen

meekerdb

unread,
Dec 12, 2011, 2:51:21 PM12/12/11
to everyth...@googlegroups.com
On 12/12/2011 11:06 AM, Stephen P. King wrote:
> On 12/12/2011 12:36 PM, meekerdb wrote:
>> On 12/12/2011 4:55 AM, Craig Weinberg wrote:
>>> If recordings were conscious, wouldn't they evolve behind our backs?
>>
>> Why would consciousness imply evolution? Evolution requires reproduction and
>> differential selection.
>>
>> Brent
>>
> Hi,
>
> Reproduction and differential selection seem to be necessary aspects of consciousness!

I have several friends who seem conscious, but aren't going to have children.

Brent

Stephen P. King

unread,
Dec 12, 2011, 7:04:30 PM12/12/11
to everyth...@googlegroups.com
On 12/12/2011 2:51 PM, meekerdb wrote:
> On 12/12/2011 11:06 AM, Stephen P. King wrote:
>> On 12/12/2011 12:36 PM, meekerdb wrote:
>>> On 12/12/2011 4:55 AM, Craig Weinberg wrote:
>>>> If recordings were conscious, wouldn't they evolve behind our backs?
>>>
>>> Why would consciousness imply evolution? Evolution requires
>>> reproduction and differential selection.
>>>
>>> Brent
>>>
>> Hi,
>>
>> Reproduction and differential selection seem to be necessary
>> aspects of consciousness!
>
> I have several friends who seem conscious, but aren't going to have
> children.
>

     I am thinking of the functions, not the physical activity... Your
friends are, effectively, self-selecting against the survival of their
genes. Not wise from a Dawkinsian P.O.V. I have at least one child, and
so the genes that led to this corporeality will live on. ;-)

Onward!

Stephen

Craig Weinberg

unread,
Dec 12, 2011, 8:04:41 PM12/12/11
to Everything List

I was talking about evolving in a general sense of changing and
progressing, not natural selection. If a cartoon is conscious in any
way, why doesn't it keep going by itself?

Craig

meekerdb

unread,
Dec 12, 2011, 8:40:27 PM12/12/11
to everyth...@googlegroups.com

Maybe it requires a certain environment. It requires a projector and screen. You require
oxygen, food, and water.

Brent

Craig Weinberg

unread,
Dec 12, 2011, 9:28:52 PM12/12/11
to Everything List

That's just sophistry. A projector and screen can't add scenes to a
cartoon. Recordings don't make any changes to themselves which
extend their semantic content. Unless you have a counterfactual in
mind?

Craig

meekerdb

unread,
Dec 12, 2011, 11:43:06 PM12/12/11
to everyth...@googlegroups.com

I don't think a cartoon is conscious either. But I was just pointing out that "keep going
by itself" can't be the criterion, since things we think are conscious don't keep going *by
themselves*.

Brent

Craig Weinberg

unread,
Dec 13, 2011, 7:11:17 AM12/13/11
to Everything List

It doesn't have to be *the* criterion but I think it's a valid
counterfactual. We have oxygen, food, water and our consciousness
keeps going, ie experiencing and generating both novelty and extending
continuity. A cartoon doesn't end because it's deprived of film or
pixels. You can provide extra blank frames which contain the same
elements, paper and ink, pixels of certain colors, etc, but no
extension of any pattern of the cartoon will occur.

What is your criterion for thinking that a cartoon isn't conscious?
Craig

smi...@zonnet.nl

unread,
Dec 13, 2011, 8:53:08 AM12/13/11
to everyth...@googlegroups.com
I explained my argument on this here:


http://arxiv.org/abs/1009.4472

Saibal

Citeren Craig Weinberg <whats...@gmail.com>:

Craig Weinberg

unread,
Dec 13, 2011, 10:47:07 AM12/13/11
to Everything List
On Dec 13, 8:53 am, smi...@zonnet.nl wrote:
> I explained my argument on this here:
>
> http://arxiv.org/abs/1009.4472
>
> Saibal
>

As with Bruno's argument, the problem I have is not with the
reasoning, it's with the beginning assumptions. You say

"According to the computationalist theory of the mind, conscious
experiences are
identified with computational states of algorithms [1, 2]. This view
is the logical
conclusion one arrives at if one assumes that physics applies to
everything, including
us."

I disagree completely. There is nothing logical about identifying
conscious experiences with computational states. Pain is not a number.
Blue is not an algorithm which can be exported to a non-visual
mechanism. It's false. A hopelessly unrecoverable category error which
is nonetheless quite intellectually seductive.

I agree that physics applies to everything, including us, which is why
the logical conclusion is:

1. What and who we are, our feelings and perceptions, apply to (at
least parts of) physics. It goes both ways. The universe feels. We are
the evidence of that.

2. Feeling is not a computation, otherwise it would be unexplainable
and redundant. If physics were merely the enactment of automatic
algorithms, then we would not be having this conversation. Nothing
would be having any conversation. What would be the point? Why would a
computation 'feel' like something?

3. Physics is feeling as well as computation. We know that we can tell
the difference between voluntary control of our mind and body and
involuntary processes. My feeling and intention can drive
physiological changes in my body and physiological changes in my body
can drive feelings, thoughts etc. If it were just computation, there
would be no difference, no subjective participation.

4. Computation is not primitive. It is a higher order sensorimotive
experience which intellectually abstracts lower order sensorimotive
qualities of repetition, novelty, symmetry, and sequence. When we
project arithmetic on the cosmos, we tokenize functional aspects of it
and arbitrarily privilege specific human perception channels.

5. Awareness is not primitive. Awareness does not exist absent a
material sensor. Some might argue for ghosts or out of body/near death
experiences, but even those are reported or interpreted by living
human subjects. There is no example of a disembodied consciousness
haunting a particular ip address or area of space.

6. Sense is primitive. Everything that can be said to be real in any
sense has to make sense. The universe has to make sense before we can
make sense of it. The capacity for being and experiencing inherently
derives from a distinction between what something is and everything
that it isn't. The subject-object relation is primary - well
beneath computation. Subjectivity is self-evident. It needs no
definition statement and no definition statement can be sufficient
without the meaning of the word 'I' already understood. If something
cannot understand 'I', it cannot ever be a subject. I cannot be
simulated, digitized, decohered, or reduced to an 'identification with
computation'. I may be computation in part, but then computation is
also me. Arithmetic must have all the possibilities of odor and sound.
Numbers must get dizzy and fall down.

7. Mistaking consciousness for computation has catastrophic
consequences. It is necessary to use computation to understand the
'back end' of consciousness through neurology, but building a
worldview on unrealism and applying it literally to ourselves is
dissociative psychosis. Even as a semi-literal folk ontology, the
notion of automatism as the authoritative essence of identity has ugly
consequences. Wal-Mart. Wall Street. The triumph of quantitative
analysis over qualitative aesthetics is emptying our culture of all
significance, leaving only a digital residue - the essence of generic
interchangeability - like money itself, a universal placeholder for
the power of nothingness to impersonate anything and everything. Just
as alchemists and mystics once gazed into mere matter and coincidence
looking for higher wisdom of a spiritual nature, physics and
mathematics now gaze into consciousness looking for a foregone
conclusion of objective certainty. It's a fool's errand. Without us,
the brain is a useless organ. All of its computations add up to
nothing more or less than a pile of dead fish rotting in the sun.

Bruno Marchal

unread,
Dec 13, 2011, 12:44:19 PM12/13/11
to everyth...@googlegroups.com

On 13 Dec 2011, at 16:47, Craig Weinberg wrote:

> On Dec 13, 8:53 am, smi...@zonnet.nl wrote:
>> I explained my argument on this here:
>>
>> http://arxiv.org/abs/1009.4472
>>
>> Saibal
>>
>
> As with Bruno's argument, the problem I have is not with the
> reasoning, it's with the beginning assumptions. You say
>
> "According to the computationalist theory of the mind, conscious
> experiences are
> identified with computational states of algorithms [1, 2]. This view
> is the logical
> conclusion one arrives at if one assumes that physics applies to
> everything, including
> us."
>
> I disagree completely.


Me too. It assumes physics is computational, which it most
plausibly is not, in case "we" are machines (and thus described by a
digital truncation of some physical processes).
This entails that we cannot even assume a physical theory, but have to
derive it from computer science.
Observation becomes a modality of (relative) self-reference.


> There is nothing logical about identifying
> conscious experiences with computational states.

Here I disagree with you.
Although there is nothing sure from which we could deduce such a
relationship, we might still *infer* or *believe* that the brain is a
"natural" computer (that is, the truncation of you at the digital
level is a universal machine in the Post-Church-Turing sense).

We can believe the brain is a computer in the way most of us would believe
that the heart is a pump.

We do have evidence that at whatever level we choose to look, when
we observe a heart or a brain, nothing seems to violate finite local
deterministic rules (a machine).


> Pain is not a number.

Sure.

> Blue is not a an algorithm which can be exported to non-visual
> mechanism.

You assume non-comp. The fact that the experience of blueness is not a
number does not make it impossible that "blueness" is "lived" through
an arithmetical phenomenon involving self-reference of a machine with
respect to infinities of machines and computations.


> It's false.

You don't know that. You assume non-comp. You have not produced a
refutation of comp, as far as I know.

> A hopelessly unrecoverable category error which
> is nonetheless quite intellectually seductive.
>
> I agree that physics applies to everything, including us, which is why
> the logical conclusion is:

We can enlarge the sense of the word physics, but currently, in the
Aristotelian physicalist tradition, this is a form of reductionism.
Physics assumes a special universal machine, whereas the digital mechanist
assumption forces us to take them all into consideration, and to extract the
one, or the cluster of "ones", justifying the locally possible
truncations. But as in Mitra, and in Everett, "we" are always "in"
an infinity of ones. (And that is indeed the natural place where the
counterfactuals can get some meaning and role, without attributing a
physical activity to a physically inactive piece of primitive matter.)


>
> 1. What and who we are, our feelings and perceptions, apply to (at
> least parts of) physics.


That's coherent with your non-comp assumption.

> It goes both ways. The universe feels. We are
> the evidence of that.

Which universe? All the universal beings can feel.
But the big whole, from inside, is just so big that it is not even
nameable, so I will not dare to address the question of "its"
thinking.


>
> 2. Feeling is not a computation,

Right. But this does not mean that it cannot be related to self-
referential truth about a universal machine relative to other
universal machines and infinities of computations, random noise
oracles, etc.

> otherwise it would be unexplainable
> and redundant.

Yes. An epiphenomenon. It is the same error as in formalism and
reductionism: trying to eliminate truth in favor of forms. This can
only come from a misunderstanding of Gödel's and Tarski's theorems. Even in
math we cannot eliminate truth and intuition, and assuming comp, and
*some amount* of self-consistency, we can "know" why.

> If physics were merely the enactment of automatic
> algorithms, then we would not be having this conversation.

OK. But I dare to insist that if we assume mechanism, physics is
everything but the enactment of an algorithm. Comp makes digital
physics wrong, a priori. I think that the UD even diagonalizes
"naturally" against all possible computable physics. But if that is
not the case, comp still forces us to extract the special physical
universal machine from the first-person-experience measure problem.


> Nothing
> would be having any conversation. What would be the point? Why would a
> computation 'feel' like something?

Well, a computation does not feel, just as a brain does not feel. But a
person (a Löbian self-referential being) can, and thanks to relatively
stable computations emulating the self relative to other machines,
that person can manifest herself through computations. Then that
person can be aware of the impossibility of communicating that feeling
to any probable universal neighbors, even when it is willing to do so.

>
> 3. Physics is feeling as well as computation.

?


> We know that we can tell
> the difference between voluntary control of our mind and body and
> involuntary processes.

Partially, yes.

> My feeling and intention can drive
> physiological changes in my body and physiological changes in my body
> can drive feelings, thoughts etc. If it were just computation, there
> would be no difference, no subjective participation.

OK.
But comp does not say that we are computation. It says only that we
are only *relatively* dependent on some universal computation going on
relatively to some probable computations. The subjective machine will
speed up, because it bets on its consistency, on the existence of
itself relatively to the possible other machines. Memories become a
scenario with a hero (you).

>
> 4. Computation is not primitive.

You get computation quickly. Universality is cheap. Assuming
elementary arithmetic (as everyone does in high school, notably)
already puts it there.

Its immunity to diagonalization makes it the most transcendental
mathematical reality, and yet still effective.

> It is a higher order sensorimotive
> experience which intellectually abstracts lower order sensorimotive
> qualities of repetition, novelty, symmetry, and sequence. When we
> project arithmetic on the cosmos, we tokenize functional aspects of it
> and arbitrarily privilege specific human perception channels.

You lost me. I guess it makes sense with some non-comp theory.

>
> 5. Awareness is not primitive.

I agree.

> Awareness does not exist absent a
> material sensor.

That's locally true. It might be necessary, but that's an open problem.


> Some might argue for ghosts or out of body/near death
> experiences, but even those are reported or interpreted by living
> human subjects. There is no example of a disembodied consciousness
> haunting a particular ip address or area of space.

How do you know that? I guess you are right today, but "human-made"
machines, programs and bugs are still very young, yet they grow
explosively on the net.

>
> 6. Sense is primitive.

Not with comp. Sense is primitive only from the first-person
perspective, but not in "God's eyes" (the unnameable arithmetical truth
talks to the machines).


> Everything that can be said to be real in any
> sense has to make sense.

Ah! In that sense? Then I am OK.
0=1 is false independently of me or anything.


> The universe has to make sense before we can
> make sense of it.

Probable with "we" = "humans".
false with "we" = "the universal beings", and universe meaning
physical universe.

> The capacity for being and experiencing inherently
> derives from a distinction between what something is and everything
> that is it isn't. The subject object relation is primary - well
> beneath computation. Subjectivity is self-evident. It needs no
> definition statement and no definition statement can be sufficient
> without the meaning of the word 'I' already understood.

Here you make a subtle error. You are correct (telling the truth), but
incorrect to assume that we cannot explain those truths (self-
evidence, no possible definition) when making some assumption (like
mechanism, and the non-expressible self-referential correctness on the
part of the machine).


> If something
> cannot understand 'I', it cannot ever be a subject.


Self-reference is the jewel of computer science. A machine can easily
understand the third-person I, and experience the first-person I. And
the first is finitely describable, while the second is only a door to
the unknown.


> I cannot be
> simulated, digitized,

Relatively? That's your non-comp assumption.


> decohered, or reduced to an 'identification with
> computation'.

Well, as paradoxical as it might sound, you are provably right when we
assume comp. If you are a machine, then no one can reduce you to any
particular knowable machine, and no one can do any thinking in your
place (but you can delegate thinking yourself).

> I may be computation in part, but then computation is
> also me. Arithmetic must have all the possibilities of odor and sound.
> Numbers must get dizzy and fall down.

Not numbers, but the hero appearing in the numbers' dreams.

>
> 7. Mistaking consciousness for computation has catastrophic
> consequences. It is necessary to use computation to understand the
> 'back end' of consciousness through neurology, but building a
> worldview on unrealism and applying it literally to ourselves is
> dissociative psychosis.

Not only will you not give a steak to my son-in-law, but I see you
will try to send his doctor to the asylum.
Well, thanks for the warning.


> Even as a semi-literal folk ontology, the
> notion of automatism as the authoritative essence of identity has ugly
> consequences.

Automata are below universality.

> Wal Mart. Wall Street. The triumph of quanitative
> analysis over qualitative aesthetics is emptying our culture of all
> significance, leaving only a digital residue - the essence of generic
> interchangeability - like money itself, a universal placeholder for
> the power of nothingness to impersonate anything and everything.


I am as sad about that as you, but your reductionist view of
machines will not help.


> Just
> as alchemists and mystics once gazed into mere matter and coincidence
> looking for higher wisdom of a spiritual nature, physics and
> mathematics now gazes into consciousness looking for a foregone
> conclusion of objective certainty.

No. The point is that we cannot do that even with machines.


> It's a fools errand. Without us,
> the brain is a useless organ.

You can say that.

> All of it's computations add up to
> nothing more or less than a pile of dead fish rotting in the sun.

Without us? Sure.

But who us?

Bruno

http://iridia.ulb.ac.be/~marchal/

meekerdb

unread,
Dec 13, 2011, 12:50:25 PM12/13/11
to everyth...@googlegroups.com
It doesn't take information from its environment, learn, and act on the environment.

Brent

Joseph Knight

unread,
Dec 13, 2011, 12:59:06 PM12/13/11
to everyth...@googlegroups.com
On Tue, Dec 13, 2011 at 9:47 AM, Craig Weinberg <whats...@gmail.com> wrote:
On Dec 13, 8:53 am, smi...@zonnet.nl wrote:
> I explained my argument on this here:
>
> http://arxiv.org/abs/1009.4472
>
> Saibal
>

As with Bruno's argument, the problem I have is not with the
reasoning, it's with the beginning assumptions. You say

"According to the computationalist theory of the mind, conscious
experiences are
identified with computational states of algorithms [1, 2]. This view
is the logical
conclusion one arrives at if one assumes that physics applies to
everything, including
us."

I disagree completely. There is nothing logical about identifying
conscious experiences with computational states. Pain is not a number.
Blue is not a an algorithm which can be exported to non-visual
mechanism. It's false. A hopelessly unrecoverable category error which
is nonetheless quite intellectually seductive.

It is a falsifiable hypothesis that has not been refuted (so far). You can't just declare it to be false. In fact, you commit several category errors/several straw men in the space of a couple of sentences. No computationalist would claim that "pain is a number", for example.

I agree that physics applies to everything, including us, which is why
the logical conclusion is:

1.  What and who we are, our feelings and perceptions, apply to (at
least parts of) physics. It goes both ways. The universe feels. We are
the evidence of that.

 Dennett would call this a deepity. It is trivially true on one reading, and incredibly important (but false) on another.


2. Feeling is not a computation, otherwise it would be unexplainable
and redundant. If physics were merely the enactment of automatic
algorithms, then we would not be having this conversation. Nothing
would be having any conversation. What would be the point? Why would a
computation 'feel' like something?

3. Physics is feeling as well as computation. We know that we can tell
the difference between voluntary control of our mind and body and
involuntary processes. My feeling and intention can drive
physiological changes in my body and physiological changes in my body
can drive feelings, thoughts etc. If it were just computation, there
would be no difference, no subjective participation.


 

4. Computation is not primitive. It is a higher order sensorimotive
experience which intellectually abstracts lower order sensorimotive
qualities of repetition, novelty, symmetry, and sequence.

 
When we
project arithmetic on the cosmos, we tokenize functional aspects of it
and arbitrarily privilege specific human perception channels.

Entirely possible, but irrelevant. That wouldn't make arithmetic any less important or real. You would have to try another tactic to make arithmetic "not real", just as saying "sets are abstractions" has nothing to do with the importance or reality of set theory.
 

5. Awareness is not primitive. Awareness does not exist absent a
material sensor. Some might argue for ghosts or out of body/near death
experiences, but even those are reported or interpreted by living
human subjects. There is no example of a disembodied consciousness
haunting a particular ip address or area of space.

6. Sense is primitive. Everything that can be said to be real in any
sense has to make sense.

Talk about arbitrarily privileging specific human perception channels.
 
The universe has to make sense before we can
make sense of it. The capacity for being and experiencing inherently
derives from a distinction between what something is and everything
that is it isn't. The subject object relation is primary - well
beneath computation. Subjectivity is self-evident. It needs no
definition statement and no definition statement can be sufficient
without the meaning of the word 'I' already understood. If something
cannot understand 'I', it cannot ever be a subject. I cannot be
simulated, digitized, decohered, or reduced to an 'identification with
computation'. I may be computation in part, but then computation is
also me. Arithmetic must have all the possibilities of odor and sound.
Numbers must get dizzy and fall down.

7. Mistaking consciousness for computation has catastrophic
consequences. It is necessary to use computation to understand the
'back end' of consciousness through neurology, but building a
worldview on unrealism and applying it literally to ourselves is
dissociative psychosis. Even as a semi-literal folk ontology, the
notion of automatism as the authoritative essence of identity has ugly
consequences. Wal Mart. Wall Street. The triumph of quanitative
analysis over qualitative aesthetics is emptying our culture of all
significance, leaving only a digital residue - the essence of generic
interchangeability - like money itself, a universal placeholder for
the power of nothingness to impersonate anything and everything.

I can buy that.
 
Just
as alchemists and mystics once gazed into mere matter and coincidence
looking for higher wisdom of a spiritual nature, physics and
mathematics now gazes into consciousness looking for a foregone
conclusion of objective certainty. It's a fools errand.

I'm glad we have you to tell us these things! 

Your position is legitimate, in that it is perfectly fine to deny computationalism. But you have no argument, so there is no reason to take you seriously.

--
Joseph Knight

Bruno Marchal

unread,
Dec 13, 2011, 1:11:11 PM12/13/11
to everyth...@googlegroups.com


Nor does it take information from itself, learn, and act on itself.


A movie of a computation is not a computation, like a movie of a
murder is not a murder.

So you can attribute a consciousness to the (abstract) person having
that filmed computation, but enacting the movie or not will not change
the measure on computational histories, unless you reconnect the movie
to the boolean graph, in which case you are re-instantiating the
person's ability to manifest its experience relative to you (and
making its experiences differentiate again in the relative way).

In a movie of a computation, there is just no computation, nor any
running of the filmed program. It would be like confusing the number
denoted by the string "IIIIIIIIIIIII" with the string itself
"IIIIIIIIIIIII".

But if you believe in comp, and in the physical supervenience thesis
(linking consciousness to the *physical activity* of a digital
machine), you are led to such an absurdity.

Bruno


http://iridia.ulb.ac.be/~marchal/

Stephen P. King

unread,
Dec 13, 2011, 1:25:08 PM12/13/11
to everyth...@googlegroups.com
On 12/13/2011 10:47 AM, Craig Weinberg wrote:
> 4. Computation is not primitive. It is a higher order sensorimotive
> experience which intellectually abstracts lower order sensorimotive
> qualities of repetition, novelty, symmetry, and sequence. When we
> project arithmetic on the cosmos, we tokenize functional aspects of it
> and arbitrarily privilege specific human perception channels.
Hi Craig,

     No. A computation is by definition not abstracting novelty. Novelty
is the essence of the non-computable. The halting theorem is an illustration
of this. Computations are about repetition and sequence and perhaps
symmetry, yes, but repetition and sequencing of physical states is what
memory is all about; without invariance over time there is nothing
to define a repetition on.

Onward!

Stephen

Craig Weinberg

unread,
Dec 13, 2011, 8:29:23 PM12/13/11
to Everything List

I was thinking of novelty in the sense of the experience of counting,
that it's not just rhythm, but actually progresses with a sense of
unveiling the next integer. Even though you know what it's going to
be, there is a sense of realizing that knowledge in expression - the
next digit of Pi, the solution of a problem, etc. All arithmetic is
fueled by a desire not merely to repeat but to make an unknown
quantity known or verify a known quantity. The motive of solving or
verifying I think requires the preexistence of a novelty concept.

Also, couldn't arithmetic be based on names instead of numbers though
and have no repetition? It could be all novelty. Meaningless novelty,
but novelty. I don't think humans would like it very much but a
computer should be able to function that way I would think. One giant
byte, addressable by name search algorithms.

Craig

Craig Weinberg

unread,
Dec 13, 2011, 9:09:52 PM12/13/11
to Everything List
On Dec 13, 12:59 pm, Joseph Knight <joseph.9...@gmail.com> wrote:

> On Tue, Dec 13, 2011 at 9:47 AM, Craig Weinberg <whatsons...@gmail.com>wrote:
>
>
>
>
>
>
>
>
>
> > On Dec 13, 8:53 am, smi...@zonnet.nl wrote:
> > > I explained my argument on this here:
>
> > >http://arxiv.org/abs/1009.4472
>
> > > Saibal
>
> > As with Bruno's argument, the problem I have is not with the
> > reasoning, it's with the beginning assumptions. You say
>
> > "According to the computationalist theory of the mind, conscious
> > experiences are
> > identified with computational states of algorithms [1, 2]. This view
> > is the logical
> > conclusion one arrives at if one assumes that physics applies to
> > everything, including
> > us."
>
> > I disagree completely. There is nothing logical about identifying
> > conscious experiences with computational states. Pain is not a number.
> > Blue is not a an algorithm which can be exported to non-visual
> > mechanism. It's false. A hopelessly unrecoverable category error which
> > is nonetheless quite intellectually seductive.
>
> It is a falsifiable hypothesis that has not been refuted (so far). You
> can't just declare it to be false. In fact, you commit several category
> errors/several straw men in the space of a couple of sentences. No
> computationalist would claim that "pain is a number", for example.

It is refuted by the experience of subjectivity itself. There is no
evidence to suppose that subjectivity is a form of computation, nor is
there anything to suggest that computation in itself could or would
generate anything like consciousness. What would a computationalist
claim that pain is?

>
>
>
> > I agree that physics applies to everything, including us, which is why
> > the logical conclusion is:
>
> > 1.  What and who we are, our feelings and perceptions, apply to (at
> > least parts of) physics. It goes both ways. The universe feels. We are
> > the evidence of that.
>
>  Dennett would call this a deepity. It is trivially true on one reading,
> and incredibly important (but false) on another.

I'm talking about the trivial truth. I'm not suggesting panpsychism,
just that feeling is obviously a physical possibility in this
universe.

>
>
>
>
>
>
>
>
>
> > 2. Feeling is not a computation, otherwise it would be unexplainable
> > and redundant. If physics were merely the enactment of automatic
> > algorithms, then we would not be having this conversation. Nothing
> > would be having any conversation. What would be the point? Why would a
> > computation 'feel' like something?
>
> > 3. Physics is feeling as well as computation. We know that we can tell
> > the difference between voluntary control of our mind and body and
> > involuntary processes. My feeling and intention can drive
> > physiological changes in my body and physiological changes in my body
> > can drive feelings, thoughts etc. If it were just computation, there
> > would be no difference, no subjective participation.
>
> > 4. Computation is not primitive. It is a higher order sensorimotive
> > experience which intellectually abstracts lower order sensorimotive
> > qualities of repetition, novelty, symmetry, and sequence.
>

> What? <http://en.wikipedia.org/wiki/Theory_of_computation>

What does the theory of computation have to do with the concrete
phenomenology of computation? I'm saying 'computers are arrays of
semiconductor materials arranged to conduct electrical current in a
dynamic and orderly fashion', and you're pointing me to references to
Boolean algebra.

>
> > When we
> > project arithmetic on the cosmos, we tokenize functional aspects of it
> > and arbitrarily privilege specific human perception channels.
>
> Entirely possible, but irrelevant. That wouldn't make arithmetic any less
> important or real.

It does when you are talking about arithmetic being universal or
primitive. It's not that arithmetic isn't part of realism, it's that
there are so many other senses which are equally universal and
justifiable as primitive.

> You would have to try another tactic to make arithmetic
> "not real", just as saying "sets are abstractions" has nothing to do with
> the importance or reality of set theory.

Arithmetic, like everything in the universe, is real in some senses,
unreal in others, and everything in between. I think that truth is
more primitive and universal than truths within arithmetic.

>
>
>
> > 5. Awareness is not primitive. Awareness does not exist absent a
> > material sensor. Some might argue for ghosts or out of body/near death
> > experiences, but even those are reported or interpreted by living
> > human subjects. There is no example of a disembodied consciousness
> > haunting a particular ip address or area of space.
>
> > 6. Sense is primitive. Everything that can be said to be real in any
> > sense has to make sense.
>
> Talk about arbitrarily privileging specific human perception channels.

I'm not talking about human sense or even biological sense. Everything
that is real has to be detectable or intelligible to something.

Sarcasm doesn't make me wrong.

>
> Your position is legitimate, in that it is perfectly fine to deny
> computationalism. But you have no argument, so there is no reason to take
> you seriously.

To announce that someone 'has no argument' without any specific
counter-arguments is meaningless to me, and the addition of the
condescension that follows reveals the unscientific nature of the
pseudo-criticisms of my position. To summarize, to my seven points,
your objections are that

You say computationalism is falsifiable and has not been disproved.
You say I make straw man fallacies but fail to specify them.
You admire Dan Dennett's mystical skepticism.
You cite a definition of the word computation as an authoritative
source to disqualify my observation of the reality which that word
addresses.
You say that my interpretation about the failure of arithmetic
universality is irrelevant because you consider set theory to be real
also.

Which of these qualifies as a case against any of the points that I
make? Did I miss something?

Craig


Craig Weinberg

unread,
Dec 13, 2011, 9:13:34 PM12/13/11
to Everything List

Why not? Wile E. Coyote sees the Road Runner, he tries various
strategies to kill him, and he learns that they don't work so he
doesn't repeat himself.

Craig

Craig Weinberg

unread,
Dec 13, 2011, 10:56:16 PM12/13/11
to Everything List
On Dec 13, 12:44 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 13 Dec 2011, at 16:47, Craig Weinberg wrote:
>
> > On Dec 13, 8:53 am, smi...@zonnet.nl wrote:
> >> I explained my argument on this here:
>
> >>http://arxiv.org/abs/1009.4472
>
> >> Saibal
>
> > As with Bruno's argument, the problem I have is not with the
> > reasoning, it's with the beginning assumptions. You say
>
> > "According to the computationalist theory of the mind, conscious
> > experiences are
> > identified with computational states of algorithms [1, 2]. This view
> > is the logical
> > conclusion one arrives at if one assumes that physics applies to
> > everything, including
> > us."
>
> > I disagree completely.
>
> Me too. It assumes physics is computational, which it is most
> plausibly not, in case "we" are machine (and thus described by a
> digital truncation of some physical processes).
> This entails that we cannot even assume a physical theory, but have to
> derive it from computer science.
> Observation becomes a modality of (relative) self-reference.

I'm not sure I get it. I thought your position is that physics is a
computational simulation.

>
> > There is nothing logical about identifying
> > conscious experiences with computational states.
>
> Here I disagree with you.
> Although there is nothing sure from which we could deduce such a
> relationship, we might still *infer* or *believe* that the brain is a
> "natural" computer, (that is the truncation of you at the digital
> level is a universal machine (in the Post, Church, Turing sense)).

I think that the brain is a biocomputer, but it also hosts
consciousness. Consciousness uses the computing capacity of the brain,
but awareness itself is not a disembodied computational state. It's
living cells. Their awareness scales up to our awareness. It is driven
by their first person agendas as well as ours, which cannot be
accessed objectively.

>
> We can believe the brain is a computer like most of us would believe
> that the hart is a pump.

I understand, and I agree, the brain functions like a computer. It
also functions like a pump, a radio, a coral reef, a pharmacy, a
library, a synaptic suburb, etc. Generally the brain is compared to
the most advanced technology of whatever era is considering it.

>
> We do have evidence that whatever the level we choose to look on, when
> we observe an heart or a brain, nothing seems to violate finite local
> deterministic rules (machine).

But when we observe our own interiority, nothing seems to follow
finite local deterministic rules. We appear to be able to conjure an
infinite universal indeterminacy at will. We don't know what a heart
can imagine, but it doesn't seem to do exactly what a brain does, and
neither does anything else. A brain really cannot be compared to
anything else until we can get outside of a brain.

>
> > Pain is not a number.
>
> Sure.
>
> > Blue is not a an algorithm which can be exported to non-visual
> > mechanism.
>
> You assume non-comp. The fact that the experience of blueness is not a
> number does not make it impossible that "blueness" is "lived" through
> an arithmetical phenomenon involving self-reference of a machine with
> respect to infinities of machines and computations.

But the specificity of it would be unnecessary. Why and how would
blueness be invoked just to set a self-referential equivalence? No
matter how powerful a computer we build, we're never going to need to
invent blue to perform some arithmetic operation, and no arithmetic
operation is ever going to have blue as a solution.

>
> > It's false.
>
> You don't know that. You assume non-comp. You have not produce a
> refutation of comp, as far as I know.

I am a refutation of comp. That's how I know it. I can care about
things and have preferences, computation cannot. Computation has
instructions and parameters, variables, and functions, but no
opinions, no point of view.

>
> > A hopelessly unrecoverable category error which
> > is nonetheless quite intellectually seductive.
>
> > I agree that physics applies to everything, including us, which is why
> > the logical conclusion is:
>
> We can enlarge the sense of the word physics, but currently, in the
> Aristotelian physicalist tradition, this is a form of reductionism.
> Physics assumes special universal machine, where the digital mechanist
> assumption force to take them all in consideration, and extract the
> one, or the cluster of "one" justifying the local possible
> truncations. But like in Mitra, and in Everett, "we" are always "in"
> an infinity of one. (And that's indeed the natural place where the
> counterfactuals can get some meaning and role, without attributing a
> physical activity to a physically inactive piece of primitive matter.

Hmm. I lost you in there with the cluster or infinity of one. I get
that physics at this time is limited to external objects, and my first
premise in Multisense Realism is that this limitation is not rooted in
science. It's invaluable for engineering of course, but it's an
insurmountable obstacle I think in understanding consciousness.

>
>
>
> > 1.  What and who we are, our feelings and perceptions, apply to (at
> > least parts of) physics.
>
> That's coherent with your non-comp assumption.

Even if it were comp: if a certain color or texture has an arithmetic
function associated with it, then doesn't that mean that function also
has at least the possibility of that color or texture within it?

>
> > It goes both ways. The universe feels. We are
> > the evidence of that.
>
> Which universe? All the universal being can feel.
> But the big whole, from inside, is just so big that it is not
> unnameable, so I will not dare to address the question of "its"
> thinking.

I was meaning more that the possibility of feeling exists within the
universe. Feeling is one of the things that the universe knows how to
physically produce.

>
>
>
> > 2. Feeling is not a computation,
>
> Right. But this does not mean that it cannot related to self-
> referential truth about a universal machine relatively to other
> universal machines and infinities of computations, random noise
> oracle, etc.

I agree, it could be related to different arithmetic consequences but
that is still not sufficient to explain the experience of feeling
itself. It's like saying that typing is related to language and
communication so therefore a keyboard must understand what you are
typing on it - that keystrokes inherently produce whatever meaning is
present in words.

>
> > otherwise it would be unexplainable
> > and redundant.
>
> Yes. An epiphenomena.

I think an epiphenomenon just has to be non-causally efficacious. I run
my car engine and the heat and exhaust are epiphenomena. Feeling makes
no sense as a possible exhaust of computation. The whole point of
computation is its normalized, parsimonious integrity. Where does a
picture of a nonexistent palm tree come from in the f(x)?

> It is the same error of formalism and
> reductionism trying to eliminate truth in favor of forms. This can
> only exist by a misunderstanding of Gödel and Tarski theorem. Even in
> math we cannot eliminate truth and intuition, and assuming comp, and
> *some amount* of self-consistency, we can "know" why.

I like this whole direction of mathematics, and even though my mind
isn't well suited to it, I do respect the importance of the
contribution. Turing too. I think the whole self-referential
revelation is the functional skeleton of the most literal, objective
sense of the cosmos. There is intelligence and wisdom there,
unquestionably. I just think that it's only *almost* the secret of the
universe. To get the whole secret, we have to bring ourselves all the
way into the laboratory. Everything that arithmetic is, the
universe also is not. Figurative, semantic, poetic, intuitive,
sensorimotive, sentient, etc. These aspects of our realism cannot be
meaningfully reduced to arithmetic, nor can arithmetic be understood
by wishes and fiction. What they can be reduced to is the sense of
order and symmetry which unites and divides them.

>
> > If physics were merely the enactment of automatic
> > algorithms, then we would not be having this conversation.
>
> OK. But I dare to insist that if we assume mechanism, physics is
> everything but an enactement of an algorithm. Comp makes digital
> physics wrong, a priori. I think that the DU even diagonalizes
> 'naturally" against all possible computable physics. But if that is
> not the case, comp still force to extract the special physical
> universal machine from the first person experience measure problem.

Hard for me to follow. Why doesn't physics include enactment? I
thought comp makes physics digital?

>
> > Nothing
> > would be having any conversation. What would be the point? Why would a
> > computation 'feel' like something?
>
> Well, a computation does not feel, like a brain does not feel. But a
> person (a Löbian self-referential being) can, and thanks to relatively
> stable computations emulating the self relatively to other machine,
> that person can manifest herself through computations. Then that
> person can be aware of the impossibility to communicate that feeling
> to any probable universal neighbors in case it is unwilling to do that.

How do you know that a Löbian being isn't just a simulation of a self-
referential being? It's only our sense of self projecting its own
image onto a generic arithmetic process, like a cartoon. Does acting
like a self automatically make it a self? What if you intentionally
want to make a Löbian being that only seems like it is self-
referential but actually is not?

>
>
>
> > 3. Physics is feeling as well as computation.
>
> ?

It relates to phenomena in the universe which are ultimately tangible
or have tangible consequences. It's not just computation for the sake
of computation.

>
> > We know that we can tell
> > the difference between voluntary control of our mind and body and
> > involuntary processes.
>
> Partially, yes.
>
> > My feeling and intention can drive
> > physiological changes in my body and physiological changes in my body
> > can drive feelings, thoughts etc. If it were just computation, there
> > would be no difference, no subjective participation.
>
> OK.
> But comp does not say that we are computation. It says only that we
> are only *relatively* dependent on some universal computation going on
> relatively to some probable computations. The subjective machine will
> speed up, because it bets on its consistency, on the existence of
> itself relatively to the possible other machines. Memories become a
> scenario with a hero (you).

I'm not opposed to the idea of us being relatively dependent on some
universal computation, but not in a strictly epiphenomenal way. The
universal computations are also influenced by us directly, our sense
and motive on the macro-person level.

>
>
>
> > 4. Computation is not primitive.
>
> You get computation quickly. Universality is cheap. Assuming
> elementary arithmetic (like everyone does in high school, notably)
> makes it already there.

Quickly, yes. Universal, sure, at least as far as objects go.

>
> Its immunity for diagonalization makes it the most transcendental
> mathematical reality, and yet still effective.

I believe it. There is almost certainly no more powerful tool to
manipulate our environment. It's just that the thing that wants to
exercise power and manipulate the environment in the first place has
to precede the tool, if we are talking about a Theory of Everything.
If it were a Theory of Engineering, I would bet on computation every
time.

>
> > It is a higher order sensorimotive
> > experience which intellectually abstracts lower order sensorimotive
> > qualities of repetition, novelty, symmetry, and sequence. When we
> > project arithmetic on the cosmos, we tokenize functional aspects of it
> > and arbitrarily privilege specific human perception channels.
>
> You lost me. I guess it makes sense with some non-comp theory.

In a material metaphor, I'm saying that plastic is a higher order
phenomenon of synthetic organic chemistry, not a molecular primitive.
Despite its utility and flexibility in simulating almost any kind
of material to our eyes, it's actually the deeper qualities underlying
the plastic which give it its pseudo-universality. When we mistake
plastic for the root of all matter, we focus on its plasticity as it
serves us (rather than questioning the underlying chemistry which
gives plastic its qualities).

>
>
>
> > 5. Awareness is not primitive.
>
> I agree.
>
> > Awareness does not exist absent a
> > material sensor.
>
> That's locally true. It might be necessary, but that's an open problem.
>
> > Some might argue for ghosts or out of body/near death
> > experiences, but even those are reported or interpreted by living
> > human subjects. There is no example of a disembodied consciousness
> > haunting a particular ip address or area of space.
>
> How do you know that? I guess you are right today, but "human made"
> machines, programs and bugs are still very young, yet they grows
> explosively on the net.

They still have to have a material net to grow on though. You can't
catch a programming bug from your computer. It seems like comp would
have a hard time explaining why that is - harder than it is for a six
year old to observe that it obviously can't happen.

>
>
>
> > 6. Sense is primitive.
>
> Not with comp. Sense are primitive only form the first person
> perspective, but not in "gods eyes" (The unnameable arithmetical truth
> talk to the machines).

What is arithmetical truth if it doesn't make sense?

>
> > Everything that can be said to be real in any
> > sense has to make sense.
>
> Ah! In that sense? Then I am OK.
> 0=1 is fase independently of me or anything.

Yes! Well yes in the literal sense that you intend. It could be said
that the 'knowledge of the nothingness of death' = the 'singularly
human experience' or something like that...1=0 in the sense 'each
thing begins from no thing'.

>
> > The universe has to make sense before we can
> > make sense of it.
>
> Probable with "we" = "humans".
> false with "we" = "the universal beings", and universe meaning
> physical universe.

How can sense arise from a universe which doesn't make sense? The
possibility of sense is itself sense.

>
> > The capacity for being and experiencing inherently
> > derives from a distinction between what something is and everything
> > that is it isn't. The subject object relation is primary - well
> > beneath computation. Subjectivity is self-evident. It needs no
> > definition statement and no definition statement can be sufficient
> > without the meaning of the word 'I' already understood.
>
> Here you make a subtle error. You are correct (telling truth), but
> incorrect to assume that we cannot explains those truth (self-
> evidence, no possible definition) when doing some assumption (like
> mechanism, and the non expressible self-referential correctness on the
> part of the machine).

It's not that I assume that we cannot explain those truths in other
ways, just that I don't assume that those other explanations can
dilute or negate the naive subjective orientation. Just because the
map is not the territory doesn't mean that the map is not a phenomenon in
its own right. It doesn't mean that map-making is an emergent
property of the territory.

>
> > If something
> > cannot understand 'I', it cannot ever be a subject.
>
> Self-reference is the jewel of computer science. machine can easily
> understand the third person I, and experience the first person I. And
> the first is finitely describable, and the second is only a door to
> the unknown.

How can you tell the difference between a machine reflecting our sense
of I and a first person experience of I? What gives us reason to think
a digital I is genuine?

>
> > I cannot be
> > simulated, digitized,
>
> Relatively? That's your non-comp assumption.

The simulation would have to turn me into someone else and still be
me. A simulation could act like me in every way, but the I that I am
now would not be extended into that simulation. Only I am I (how could
I not 'be'? Everywhere I go, there I am...)

>
> > decohered, or reduced to an 'identification with
> > computation'.
>
> Well, as paradoxical it might soon, you are provably right when we
> assume comp. If you are a machine, then no one can reduce you to any
> particular knowable machine, and no one can do any thinking at your
> place (but you can delegate thinking by yourself).

My only problem with being a machine is that we are as close as
you can get to being the opposite of a machine, so the term loses
all meaning if it encompasses everything. If a machine can make
choices based on preference rather than instructions, as we can, what
purpose does it serve to use the term machine?

>
> > I may be computation in part, but then computation is
> > also me. Arithmetic must have all the possibilities of odor and sound.
> > Numbers must get dizzy and fall down.
>
> Not numbers, but the hero appearing in the numbers' dreams.

What are those dreams made of?

>
>
>
> > 7. Mistaking consciousness for computation has catastrophic
> > consequences. It is necessary to use computation to understand the
> > 'back end' of consciousness through neurology, but building a
> > worldview on unrealism and applying it literally to ourselves is
> > dissociative psychosis.
>
> Not only you will not give a steak to my son in law, but I see you
> will try to send his doctor in the asylum.
> Well, thanks for the warning.

What would be the difference between an asylum and anywhere else?
Can't numbers dream just as well in an asylum?

>
> > Even as a semi-literal folk ontology, the
> > notion of automatism as the authoritative essence of identity has ugly
> > consequences.
>
> Automata are below universality.

Are they below identity?

>
> > Wal Mart. Wall Street. The triumph of quanitative
> > analysis over qualitative aesthetics is emptying our culture of all
> > significance, leaving only a digital residue - the essence of generic
> > interchangeability - like money itself, a universal placeholder for
> > the power of nothingness to impersonate anything and everything.
>
> I am as much sad about that than you, but your reductionist view on
> machine will not help.

Are you sure? What is economics but socially enforced
computationalism?

>
> > Just
> > as alchemists and mystics once gazed into mere matter and coincidence
> > looking for higher wisdom of a spiritual nature, physics and
> > mathematics now gazes into consciousness looking for a foregone
> > conclusion of objective certainty.
>
> No. The point is that we cannot do that even with machine.

Certainty of uncertainty.

>
> > It's a fools errand. Without us,
> > the brain is a useless organ.
>
> You can say that.
>
> > All of it's computations add up to
> > nothing more or less than a pile of dead fish rotting in the sun.
>
> Without us? Sure.
>
> But who us?

Us natural persons. Human beings extending psychologically into
autobiographical experience with historical context and corporeal
bodies with cells and molecules inside and cities, planets, and
galaxies outside.

Craig

meekerdb

unread,
Dec 13, 2011, 11:09:49 PM12/13/11
to everyth...@googlegroups.com

You're confusing the characters in a cartoon with the cartoon.

Brent

>
> Craig
>

meekerdb

unread,
Dec 13, 2011, 11:13:46 PM12/13/11
to everyth...@googlegroups.com
On 12/13/2011 7:56 PM, Craig Weinberg wrote:
> But when we observe our own interiority, nothing seems to follow
> finite local determistic rules. We appear to be able to conjure an
> infinite universal indeterminacy at will.

Really? I can't imagine more than five or six different things at the same time.

Brent

Kim Jones

unread,
Dec 14, 2011, 12:32:51 AM12/14/11
to everyth...@googlegroups.com
Any chance someone might précis for me/us dummies out here in maybe 3 sentences what Tim Maudlin's argument is? Nothing too heavy - just a quick refresher.

Jolly kind of you,

Kim Jones

Craig Weinberg

unread,
Dec 14, 2011, 9:32:09 AM12/14/11
to Everything List

No, I'm not. Either way it's the same. Cartoon characters appear to us
to take information, learn, and act on their environment. What does
the term we use to describe that appearance matter?

Craig

Craig Weinberg

unread,
Dec 14, 2011, 9:47:48 AM12/14/11
to Everything List

You can't imagine a view of the Sistine Chapel? A hairbrush with
thousands of bristles? The sound of millions of cicadas in the trees?
Not sure what you mean. Granted, there is a loose sense of the bandwidth of
sense we can make in any given moment, but I'm talking about the
infinite qualities of the content we can generate. We can
spontaneously make up things out of nowhere. The flavor of pinecone
ice cream, the sound of a hat being eaten by a crocodile, a Picasso
version of a Pollock painting. This is not a capacity that lends
itself to explanation by determinism or can be compared to things that
we can observe outside of ourselves.

Craig

Bruno Marchal

unread,
Dec 14, 2011, 11:49:16 AM12/14/11
to everyth...@googlegroups.com

That's not my position. My working hypothesis is that "I" am a
machine, in the sense that I could survive with a copy of my brain
made at some level.
From this I can show that whatever the physical universe may be, it
cannot be a "computational object". Indeed it is only an appearance
emerging from a non-computational statistics on computations.
Likewise, consciousness is not a computational thing either.

>
>>
>>> There is nothing logical about identifying
>>> conscious experiences with computational states.
>>
>> Here I disagree with you.
>> Although there is nothing sure from which we could deduce such a
>> relationship, we might still *infer* or *believe* that the brain is a
>> "natural" computer, (that is the truncation of you at the digital
>> level is a universal machine (in the Post, Church, Turing sense)).
>
> I think that the brain is a biocomputer, but it also hosts
> consciousness. Consciousness uses the computing capacity of the brain,
> but awareness itself is not a disembodied computational state.

Why not?


> It's
> living cells. Their awareness scales up to our awareness.

How?

> It is driven
> by their first person agendas as well as ours, which cannot be
> accessed objectively.
>
>>
>> We can believe the brain is a computer like most of us would believe
>> that the hart is a pump.
>
> I understand, and I agree, the brain functions like a computer.

Yes, there is much evidence, if only because locally everything
does, as far as we know. Except for the collapse of the quantum waves
(which nobody can explain, and which Everett explained away), we have not
yet found anything in nature which is not Turing emulable. That might
be a long-term problem for comp, because comp predicts that the
physical universe is NOT Turing emulable, though it might be everywhere
Turing emulable locally.

> It
> also functions like a pump,

A brain? Why?


> a radio,


Why?

> a coral reef,

Why?

> a pharmacy,

OK.

> a
> library,

OK.

> a synaptic suburb,

Why?


> etc. Generally the brain is compared to
> the most advanced technology of whatever era is considering it.


Not at all. It is compared to machines only, and wisely so given the
evidence.
Now, we have discovered the universal machine, and the comparison just
makes *much* more sense.

>
>>
>> We do have evidence that whatever the level we choose to look on,
>> when
>> we observe a heart or a brain, nothing seems to violate finite local
>> deterministic rules (machine).
>
> But when we observe our own interiority, nothing seems to follow
> finite local deterministic rules.

I agree with you. But that's exactly what introspecting machines are
saying, and can even explain.

> We appear to be able to conjure an
> infinite universal indeterminacy at will.

Yes. And we still don't know exactly how a machine can do that, but
their rich theology is promising in this respect.

> We don't know what a heart
> can imagine, but it doesn't seem to do exactly what a brain does, and
> neither does anything else. A brain really cannot be compared to
> anything else until we can get outside of a brain.

We can compare the brain with anything. And the comparison with the
computer, especially in the original mathematical sense of the word,
is worth studying. Universal machines and numbers are very rich objects.
They are already able to defeat all universal theories.

>
>>
>>> Pain is not a number.
>>
>> Sure.
>>
>>> Blue is not an algorithm which can be exported to non-visual
>>> mechanism.
>>
>> You assume non-comp. The fact that the experience of blueness is
>> not a
>> number does not make it impossible that "blueness" is "lived" through
>> an arithmetical phenomenon involving self-reference of a machine with
>> respect to infinities of machines and computations.
>
> But the specificity of it would be unnecessary.

Why?

> Why and how would
> blueness be invoked just to set a self-referential equivalence?

To accelerate decision.


> No
> matter how powerful a computer we build, we're never going to need to
> invent blue to perform some arithmetic operation,

Why should we need to invent it? It is already there, in the relations
between universal numbers.

> and no arithmetic
> operation is ever going to have blue as a solution.

You are right. An arithmetic operation, like a physical event, is just
not the right type of object for seeing blue. Only persons (including
animals) can do that. But this does not contradict the fact that they
might survive with a digital brain.


>
>>
>>> It's false.
>>
>> You don't know that. You assume non-comp. You have not produce a
>> refutation of comp, as far as I know.
>
> I am a refutation of comp.

You are not a proof. Even from your own private point of view.


> That's how I know it.

Neither comp nor non-comp is the kind of thing we can *know*. We can
assume them and reason. Besides, in science we *know* nothing for sure.
Even if God appears to you and tells you that you are not a machine,
that will prove nothing, even to you. Using that argument shows only
that you are influenceable by authoritative argument (the worst
possible kind of argument in fundamental science).

> I can care about
> things and have preferences, computation cannot.

But why couldn't people do that, when incarnated relatively through
computations (note the plural)?
If you just say that a machine cannot have preferences, you are just
begging the question.

> Computation has
> instructions and parameters, variables, and functions, but no
> opinions, no point of view.

I have displayed the math of the 8 types of opinion/points of view
that *any* sound machine canNOT NOT discover by introspection. One of
them is the physical modalities, making comp + the classical theory of
knowledge testable.

Computer science explains very well where the opinions, knowledge,
sensations, and observations of machines come from.

They might not be the correct explanations, but correct machines already
provide them. We might listen to them.

>
>>
>>> A hopelessly unrecoverable category error which
>>> is nonetheless quite intellectually seductive.
>>
>>> I agree that physics applies to everything, including us, which is
>>> why
>>> the logical conclusion is:
>>
>> We can enlarge the sense of the word physics, but currently, in the
>> Aristotelian physicalist tradition, this is a form of reductionism.
>> Physics assumes special universal machine, where the digital
>> mechanist
>> assumption force to take them all in consideration, and extract the
>> one, or the cluster of "one" justifying the local possible
>> truncations. But like in Mitra, and in Everett, "we" are always "in"
>> an infinity of one. (And that's indeed the natural place where the
>> counterfactuals can get some meaning and role, without attributing a
>> physical activity to a physically inactive piece of primitive matter.
>
> Hmm. I lost you in there with the cluster or infinity of one. I get
> that physics at this time is limited to external objects, and my first
> premise in Multisense Realism is that this limitation is not rooted in
> science. Its invaluable for engineering of course, but it's an
> insurmountable obstacle I think in understanding consciousness.

I was alluding to the movie graph argument (or Maudlin's), which
shows that if we are machines, consciousness cannot be attributed to
the physical activity of that machine, but only to the causal
(arithmetical, with comp) dependencies. We can come back to this.

>
>>
>>
>>
>>> 1. What and who we are, our feelings and perceptions, apply to (at
>>> least parts of) physics.
>>
>> That's coherent with your non-comp assumption.
>
> Even if it were comp. if a certain color or texture has an arithmetic
> function associated with it, then doesn't that mean that function also
> has at least the possibility of that color or texture within it?

It has not. Physical (and persistent) objects exist only in the
(sharable) dream of numbers.

>
>>
>>> It goes both ways. The universe feels. We are
>>> the evidence of that.
>>
>> Which universe? All the universal being can feel.
>> But the big whole, from inside, is just so big that it is not
>> unnameable, so I will not dare to address the question of "its"
>> thinking.
>
> I was meaning more that the possibility of feeling exists within the
> universe.

Which universe? The arithmetical one? The physical one? The
theological one?


> Feeling is one of the things that the universe knows how to
> physically produce.

?

>>
>>
>>> 2. Feeling is not a computation,
>>
>> Right. But this does not mean that it cannot related to self-
>> referential truth about a universal machine relatively to other
>> universal machines and infinities of computations, random noise
>> oracle, etc.
>
> I agree, it could be related to different arithmetic consequences but
> that is still not sufficient to explain the experience of feeling
> itself. It's like saying that typing is related to language and
> communication so therefore a keyboard must understand what you are
> typing on it - that keystrokes inherently produce whatever meaning is
> present in words.

Feelings are explained by the fact that machines can refer entirely to
their own bodies (at some level), and this in different ways, from
different points of view which obey different logics. In particular,
qualia correspond to available non-communicable truths. They do have a
role in speeding up relative computation and decision. In fact, the
more a machine introspects, the bigger the set of non-communicable
truths.


>
>>
>>> otherwise it would be unexplainable
>>> and redundant.
>>
>> Yes. An epiphenomena.
>
> I think an epiphenomena just has to be non causally efficacious.

I agree. That is why I like comp: it prevents consciousness and
private life from being epiphenomena. They are just real and very useful
(for surviving, for example) phenomena. Stephen would add here
that comp makes primitive matter epiphenomenal, but that is
nonsense: primitive matter just goes away.

> I run
> my car engine and the heat and exhaust are epiphenomena. Feeling makes
> no sense as a possible exhaust of computation.

Right. But that's a consequence of comp: feeling is not a computation.
What happens with comp is that a feeling is a truth about a person
incarnated at once by an infinity of computations.

> The whole point of
> computation is it's normalized, parsimonious integrity.

Hmm... You might be confusing machines before and after Gödel. We have
learned something fundamental about machines: we have learned that we
cannot know what they are capable of (and this can be justified
entirely if we assume we are machines ourselves).

> Where does a
> picture of a nonexistent palm tree come from in the f(x)?

By the unbounded imagination of the universal machines, especially
when they are glued into long and deep sharable histories.

>
>> It is the same error of formalism and
>> reductionism trying to eliminate truth in favor of forms. This can
>> only exist by a misunderstanding of Gödel and Tarski theorem. Even in
>> math we cannot eliminate truth and intuition, and assuming comp, and
>> *some amount* of self-consistency, we can "know" why.
>
> I like this whole direction of mathematics, and even though my mind
> isn't well suited to it, I do respect the importance of the
> contribution.

Nice.


> Turing too. I think the whole self-referential
> revelation is the functional skeleton of the most literal, objective
> sense of the cosmos.

Nice.

> There is intelligence and wisdom there,
> unquestionably.

OK.

> I just think that it's only *almost* the secret of the
> universe. To get the whole secret, we have to bring ourselves all the
> way into the laboratory. Everything that arithmetic is, the
> universe also is not.

We don't know what arithmetic is.

> Figurative, semantic, poetic, intuitive,
> sensorimotive, sentient, etc. These aspects of our realism cannot be
> meaningfully reduced to arithmetic,

You might be confusing a theory of arithmetic with arithmetic itself.
Today we know those things are far apart.
A theory of arithmetic is just a universal machine, or a Löbian
machine. Arithmetical truth is *far* beyond any machine.


> nor can arithmetic be understood
> by wishes and fiction. What they can be reduced to is the sense of
> order and symmetry which unites and divides them.
>
>>
>>> If physics were merely the enactment of automatic
>>> algorithms, then we would not be having this conversation.
>>
>> OK. But I dare to insist that if we assume mechanism, physics is
>> everything but an enactement of an algorithm. Comp makes digital
>> physics wrong, a priori. I think that the DU even diagonalizes
>> 'naturally" against all possible computable physics. But if that is
>> not the case, comp still force to extract the special physical
>> universal machine from the first person experience measure problem.
>
> Hard for me to follow. Why doesn't physics include enactment? I
> thought comp makes physics digital?

A lot of people develop that confusion; that is why I insist so much
that comp is in opposition to digital physics, at least as a
fundamental theory.


>
>>
>>> Nothing
>>> would be having any conversation. What would be the point? Why
>>> would a
>>> computation 'feel' like something?
>>
>> Well, a computation does not feel, like a brain does not feel. But a
>> person (a Löbian self-referential being) can, and thanks to
>> relatively
>> stable computations emulating the self relatively to other machine,
>> that person can manifest herself through computations. Then that
>> person can be aware of the impossibility to communicate that feeling
>> to any probable universal neighbors in case it is unwilling to do
>> that.
>
> How do you know that a Löbian being isn't just a simulation of a self-
> referential being?

It is, in the trivial sense that you might consider the number one to
be a simulation of itself. But that is rather misleading, and
certainly false if "simulation" is taken in the computer science
sense. In that case a Löbian machine is only a simulation (emulation)
of some other universal system (like arithmetic). In that sense we are
simulations too.

> It's only our sense of self projecting it's own
> image onto a generic arithmetic process, like a cartoon.

The cartoon lacks everything that would make it a computation. At best,
it gives a description of a computation.
The Gödel number of a computation is not a computation. A computation
is a complex relation between numbers and a universal number. The
Gödel number of a computation is just a number.


> Does acting
> like a self automatically make it a self?

Yes. Or you get zombies.


> What if you intentionally
> want to make a Löbian being that only seems like it is self-
> referential but actually is not?

Then it will fail on some self-referential task.

>
>>
>>
>>
>>> 3. Physics is feeling as well as computation.
>>
>> ?
>
> It relates to phenomena in the universe which is ultimately tangible
> or has tangible consequences. It's not just computation for the sake
> of computation.

I guess you mean "physical universe". I don't believe that exists in
any ontological sense. Physical reality is a (non-arithmetical)
projection made by non-arithmetical beings emerging from infinities of
arithmetical relations.


>
>>
>>> We know that we can tell
>>> the difference between voluntary control of our mind and body and
>>> involuntary processes.
>>
>> Partially, yes.
>>
>>> My feeling and intention can drive
>>> physiological changes in my body and physiological changes in my
>>> body
>>> can drive feelings, thoughts etc. If it were just computation, there
>>> would be no difference, no subjective participation.
>>
>> OK.
>> But comp does not say that we are computation. It says only that we
>> are only *relatively* dependent on some universal computation going
>> on
>> relatively to some probable computations. The subjective machine will
>> speed up, because it bets on its consistency, on the existence of
>> itself relatively to the possible other machines. Memories become a
>> scenario with a hero (you).
>
> I'm not opposed to the idea of us being relatively dependent on some
> universal computation, but not in a strictly epiphenomenal way.

I agree with you.

> The
> universal computations are also influenced by us directly, our sense
> and motive on the macro-person level.

Some are, locally and relatively, but most are not. You cannot change
at will the additive/multiplicative structure of numbers.

>
>>
>>
>>
>>> 4. Computation is not primitive.
>>
>> You get computation quickly. Universality is cheap. Assuming
>> elementary arithmetic (like everyone does in high school, notably)
>> makes it already there.
>
> Quickly, yes. Universal, sure, at least as far as objects go.

As far as computations go. Not sure what you mean by "objects" here.


>
>>
>> Its immunity for diagonalization makes it the most transcendental
>> mathematical reality, and yet still effective.
>
> I believe it. There is almost certainly no more powerful tool to
> manipulate our environment. It's just that the thing that wants to
> exercise power and manipulate the environment in the first place has
> to precede the tool, if we are talking about a Theory of Everything.
> If it were a Theory of Engineering, I would bet on computation every
> time.

Diagonalization exists in arithmetic, out of time and space. Time and
space come from the numbers' ability to diagonalize and refer to
themselves.
When I wrote "Amoeba, Planaria and dreaming machine" I thought
engineers would jump on that, and some did, but unfortunately, the
technique is still waiting for more powerful hardware.
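A toy rendering of the diagonal move mentioned above, in Python
(illustration only; the enumeration is an arbitrary example):

# Given any enumeration f(0), f(1), f(2), ... of total functions on the
# naturals, the diagonal function d(n) = f(n)(n) + 1 differs from every
# f(i) at the argument i, so d escapes the whole enumeration.

def f(i):
    # toy enumeration: the i-th function multiplies its argument by i
    return lambda n: i * n

def d(n):
    return f(n)(n) + 1

for i in range(10):
    assert d(i) != f(i)(i)   # d differs from the i-th function at input i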

And then it will not work, for the reason that nobody wants clever
machines (who could be choosy about their users and destiny), for the
same reason that nobody really wants children to be educated and free.
Humans love to chat about freedom, but I think they really hate it in
their hearts.

I don't think the machines will ever be intelligent thanks to the
humans; they will be intelligent *despite* the humans. We want slaves,
not competitors.


>
>>
>>> It is a higher order sensorimotive
>>> experience which intellectually abstracts lower order sensorimotive
>>> qualities of repetition, novelty, symmetry, and sequence. When we
>>> project arithmetic on the cosmos, we tokenize functional aspects
>>> of it
>>> and arbitrarily privilege specific human perception channels.
>>
>> You lost me. I guess it makes sense with some non-comp theory.
>
> In a material metaphor, I'm saying that plastic is a higher order
> phenomenon of synthetic organic chemistry, not a molecular primitive.
> Even though it's utility and flexibility in simulating almost any kind
> of material to our eyes, it's actually the deeper qualities underlying
> the plastic which gives it it's pseudo-universality. When we mistake
> plastic for the root of all matter, we focus on it's plasticity as it
> serves us (rather than questioning the underlying chemistry which
> gives plastic it's qualities).

Plastic sucks. We should use renewable plants instead!

>
>>
>>
>>
>>> 5. Awareness is not primitive.
>>
>> I agree.
>>
>>> Awareness does not exist absent a
>>> material sensor.
>>
>> That's locally true. It might be necessary, but that's an open
>> problem.
>>
>>> Some might argue for ghosts or out of body/near death
>>> experiences, but even those are reported or interpreted by living
>>> human subjects. There is no example of a disembodied consciousness
>>> haunting a particular ip address or area of space.
>>
>> How do you know that? I guess you are right today, but "human made"
>> machines, programs and bugs are still very young, yet they grow
>> explosively on the net.
>
> They still have to have a material net to grow on though.

Even if that existed, it could not help. That is the point of the MGA:
http://old.nabble.com/MGA-1-td20566948.html

> You can't
> catch a programming bug from your computer. It seems like comp would
> have a hard time explaining why that is - harder than it is for a six
> year old to observe that it obviously can't happen.

A six-year-old child has a brain which is the product of millions of
years of evolution. Give time to (human-made) machines; human-made
computers are in their infancy, and 99.9999% of applied computer
science consists in controlling them, not in letting them control
themselves.

>
>>
>>
>>
>>> 6. Sense is primitive.
>>
>> Not with comp. Sense is primitive only from the first person
>> perspective, but not in "God's eyes" (the unnameable arithmetical
>> truth talks to the machines).
>
> What is arithmetical truth if it doesn't make sense?

Nothing. It does make sense. That's the whole point: it makes sense to
the universal numbers inhabiting (in some sense) arithmetical truth.


>
>>
>>> Everything that can be said to be real in any
>>> sense has to make sense.
>>
>> Ah! In that sense? Then I am OK.
>> 0=1 is false independently of me or anything.
>
> Yes! Well yes in the literal sense that you intend.

OK.

> It could be said
> that the 'knowledge of the nothingness of death' = the 'singularly
> human experience' or something like that...1=0 in the sense 'each
> thing begins from no thing'.
>

Hmm... Then we would write 0 => 1. Not 0 = 1.


>>
>>> The universe has to make sense before we can
>>> make sense of it.
>>
>> Probably true with "we" = "humans".
>> False with "we" = "the universal beings", and universe meaning
>> physical universe.
>
> How can sense arise from a universe which doesn't make sense? The
> possibility of sense is itself sense.

Yes. And the arithmetical universe makes sense, to us, but also to a
vast class of (relative) numbers (that is a shorthand for "people
incarnated in infinities of number relations").

>
>>
>>> The capacity for being and experiencing inherently
>>> derives from a distinction between what something is and everything
>>> that it isn't. The subject object relation is primary - well
>>> beneath computation. Subjectivity is self-evident. It needs no
>>> definition statement and no definition statement can be sufficient
>>> without the meaning of the word 'I' already understood.
>>
>> Here you make a subtle error. You are correct (telling the truth), but
>> incorrect to assume that we cannot explain those truths (self-
>> evidence, no possible definition) when making some assumption (like
>> mechanism, and the non-expressible self-referential correctness on
>> the part of the machine).
>
> It's not that I assume that we cannot explain those truths in other
> ways, just that I don't assume that those other explanations can
> dilute or negate the naive subjective orientation.

And you are right on this. That's my whole point. We cannot and should
not discard the subjective feeling of 'numbers' and machines. That
would be an error, even for engineers.

> Just because the
> map is not the territory doesn't mean that map is not a phenomenon in
> it's own right. It doesn't mean that map-making is an emergent
> property of the territory.

OK. But that might still be possible.

>
>>
>>> If something
>>> cannot understand 'I', it cannot ever be a subject.
>>
>> Self-reference is the jewel of computer science. Machines can easily
>> understand the third person I, and experience the first person I. And
>> the first is finitely describable, and the second is only a door to
>> the unknown.
>
> How can you tell the difference between a machine reflecting our sense
> of I and a first person experience of I? What gives us reason to think
> a digital I is genuine?

The richness of a machine's introspection, and notably the difference
between what a machine can take as true and what she can justify
rationally. That might make comp the simplest fertile
explanation of the consciousness/realities coupling.

>
>>
>>> I cannot be
>>> simulated, digitized,
>>
>> Relatively? That's your non-comp assumption.
>
> The simulation would have to turn me into someone else and still be
> me. A simulation could act like me in every way, but the I that I am
> now would not be extended into that simulation.

You can't be sure of that.

> Only I am I (how could
> I not 'be'? Everywhere I go, there I am...)
>
>>
>>> decohered, or reduced to an 'identification with
>>> computation'.
>>
>> Well, as paradoxical as it might sound, you are provably right when we
>> assume comp. If you are a machine, then no one can reduce you to any
>> particular knowable machine, and no one can do any thinking in your
>> place (but you can delegate thinking by yourself).
>
> My only problem with being a machine is that we are as close as
> you can get to being the opposite of a machine, so the term loses
> all meaning if it encompasses everything. If a machine can make
> choices based on preference rather than instructions, as we can, what
> purpose does it serve to use the term machine?

Machine means only that the local behavior follows local computable
laws. Arithmetical truth is full of machines, but also full of
entities which cannot be emulated by any machine. Not everything is a
machine. And the behavior of most machines is beyond what machines can
handle and prove. The very notion of machine is already beyond
machines. A bit like it can be proved that the notion of finite number
is beyond what finite numbers/theories/machines can explain or justify.
Mathematical logic shows that the notions of finiteness, machines, etc.,
are very tricky. They look simple to us, but that simplicity is a delusion.
At the beginning of last century, Hilbert was hoping for a proof of
consistency of math in arithmetic, but Gödel showed that even the
consistency of arithmetic is beyond arithmetical means. Tarski showed
that arithmetical truth is not even definable in arithmetic. This
limitation, and the awareness of this limitation, extends to machines.
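For reference, the two limitative results alluded to here, stated for
Peano Arithmetic (PA) in the usual notation:

    \text{Gödel II:}\quad \text{if PA is consistent, then } \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA}).
    \text{Tarski:}\quad \text{no arithmetical formula } T(x) \text{ satisfies } \mathbb{N} \models T(\ulcorner\varphi\urcorner) \leftrightarrow \varphi \text{ for every sentence } \varphi.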


>
>>
>>> I may be computation in part, but then computation is
>>> also me. Arithmetic must have all the possibilities of odor and
>>> sound.
>>> Numbers must get dizzy and fall down.
>>
>> Not numbers, but the hero appearing in the numbers' dreams.
>
> What are those dreams made of?

They are only relational. Nothing is "made of" something.


>
>>
>>
>>
>>> 7. Mistaking consciousness for computation has catastrophic
>>> consequences. It is necessary to use computation to understand the
>>> 'back end' of consciousness through neurology, but building a
>>> worldview on unrealism and applying it literally to ourselves is
>>> dissociative psychosis.
>>
>> Not only will you not give a steak to my son-in-law, but I see you
>> will try to send his doctor to the asylum.
>> Well, thanks for the warning.
>
> What would be the difference between an asylum and anywhere else?

In an asylum you are forced to take toxic, harmful drugs. Less so
anywhere else (even in jail).

> Can't numbers dream just as well in an asylum?

Not with the kind of medication you get in an asylum. You can't even
dream there.

>
>>
>>> Even as a semi-literal folk ontology, the
>>> notion of automatism as the authoritative essence of identity has
>>> ugly
>>> consequences.
>>
>> Automata are below universality.
>
> Are they below identity?

?

>
>>
>>> Wal Mart. Wall Street. The triumph of quantitative
>>> analysis over qualitative aesthetics is emptying our culture of all
>>> significance, leaving only a digital residue - the essence of
>>> generic
>>> interchangeability - like money itself, a universal placeholder for
>>> the power of nothingness to impersonate anything and everything.
>>
>> I am as sad about that as you, but your reductionist view of
>> machines will not help.
>
> Are you sure? What is economics but socially enforced
> computationalism?

I really don't see the relation. In a democracy, economics is a way to
distribute money and enrich everyone, but only if bandits are not
perverting it for their own special interest.

>
>>
>>> Just
>>> as alchemists and mystics once gazed into mere matter and
>>> coincidence
>>> looking for higher wisdom of a spiritual nature, physics and
>>> mathematics now gazes into consciousness looking for a foregone
>>> conclusion of objective certainty.
>>
>> No. The point is that we cannot do that even with machines.
>
> Certainty of uncertainty.

Absolutely.

>
>>
>>> It's a fool's errand. Without us,
>>> the brain is a useless organ.
>>
>> You can say that.
>>
>>> All of its computations add up to
>>> nothing more or less than a pile of dead fish rotting in the sun.
>>
>> Without us? Sure.
>>
>> But who us?
>
> Us natural persons.

Nooooo.... us, the Löbian Universal Numbers. They are the ones giving
internal sense to "everything". They always fill the gaps. That makes
them wrong almost all the time, but it also gives them their meaning,
learning abilities, and purposes.

> Human beings extending psychologically into
> autobiographical experience with historical context and corporeal
> bodies with cells and molecules inside and cities, planets, and
> galaxies outside.

Locally.

Bruno


http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Dec 14, 2011, 1:04:46 PM12/14/11
to everything-list List
Kim,

If nobody answers this I will answer asap. It is not an easy task. Maudlin's paper is an (independent) variant of the Movie-Graph Argument.

Those arguments show the difficulty (if not the impossibility) of keeping the physical supervenience thesis (the idea that consciousness is produced by the *physical activity* of a computer) together with mechanism. It is the key to understanding why keeping comp leads to immaterialism, or idealism (of the objective kind).

Both arguments show that the physical activity related to a particular computation can be changed arbitrarily. Maudlin restores counterfactualness (the fact that the computation would correctly handle possibly different inputs) by adding inactive pieces of machinery for the particular computation (say a "conscious" one). COMP + physical supervenience then forces us to attribute physical activity relevant to a computation to a piece of matter which has no physical activity relevant to that computation.

Maudlin's argument assumes that comp necessitates the "323" principle, and we will come back to this sooner or later. MGA necessitates a plausible active equivalent form of the 323 principle; of that I am not sure. I will try a layman's sum-up when I have more time.

Bruno




MARCHAL B., 1988, Informatique théorique et philosophie de l'esprit. Actes du 3ème colloque international de l'ARC, Toulouse, pp. 193-227. (Explanation in English: http://old.nabble.com/MGA-1-td20566948.html)

MAUDLIN T., 1989, Computation and Consciousness, The Journal of Philosophy, pp. 407-432.









Joseph Knight

unread,
Dec 14, 2011, 1:40:54 PM12/14/11
to everyth...@googlegroups.com
On Tue, Dec 13, 2011 at 11:32 PM, Kim Jones <kimj...@ozemail.com.au> wrote:
Any chance someone might précis for me/us dummies out here in maybe 3 sentences what Tim Maudlin's argument is? Nothing too heavy - just a quick refresher.

I'll try, but with a few more than 3 sentences. Suppose the consciousness of a machine can be said to supervene on the running of some program X. We can replace that machine with one running a constant program Y that gives the same output as X for one given input. In other words, it cannot "handle" counterfactual inputs, because it is just a constant program that does the same thing no matter what. Surely such a machine is not conscious. It would be like if I decided "I will answer A B D B D D C A C..." in response to the Chemistry test I am about to run off and take, and happened to get them all correct: I wouldn't really know Chemistry, right?

So consciousness doesn't supervene on Y. But Maudlin (basically) shows that you can just add some additional parts to the machine that handle the counterfactuals as needed. These extra parts don't actually do anything, but their "presence" means the machine now could exactly emulate program X, i.e., is conscious. So a computationalist is forced to assert that the machine's consciousness supervenes on the presence of these extra parts, which in fact perform no computations at all. 
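A toy sketch of the shape of the argument, in Python (my own illustration, not Maudlin's actual construction; the programs and the recorded run are made up):

def program_x(stimulus):
    # the responsive program: the output genuinely depends on the input
    return [s * 2 for s in stimulus]

RECORDED_INPUT = [1, 2, 3]
RECORDED_OUTPUT = program_x(RECORDED_INPUT)

def program_y(stimulus):
    # constant replay: the same output no matter what the input is
    return RECORDED_OUTPUT

def program_y_with_inert_parts(stimulus):
    # Y plus "extra parts" that would restore counterfactual correctness,
    # but which stay inert on the one recorded run
    if stimulus != RECORDED_INPUT:        # never taken on the recorded run
        return program_x(stimulus)
    return RECORDED_OUTPUT

assert program_y(RECORDED_INPUT) == program_x(RECORDED_INPUT)
assert program_y([9]) != program_x([9])   # Y fails the counterfactual
assert program_y_with_inert_parts([9]) == program_x([9])

The untaken branch is the "presence" of the extra parts: on the recorded run it does nothing, yet it is exactly what makes the machine counterfactually equivalent to X.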

I think what Russell said about this earlier, i.e., in a multiverse the extra parts are doing things, so consciousness then appears at the scale of the multiverse -- is fascinating. But I am out of time. Hope this helped. I would recommend reading the original paper for the details.

 
Jolly kind of you,

Kim Jones



On 12/12/2011, at 10:05 AM, Russell Standish wrote:

Maudlin's argument relies on the absurdity that the presence or absence
of inert parts bears on whether something is conscious.




--
Joseph Knight

meekerdb

unread,
Dec 14, 2011, 1:57:10 PM12/14/11
to everyth...@googlegroups.com

It matters in that the cartoon is patterns on a piece of film whose environment is our world.

Brent

meekerdb

unread,
Dec 14, 2011, 2:03:41 PM12/14/11
to everyth...@googlegroups.com
On 12/14/2011 6:47 AM, Craig Weinberg wrote:
> On Dec 13, 11:13 pm, meekerdb<meeke...@verizon.net> wrote:
>> On 12/13/2011 7:56 PM, Craig Weinberg wrote:
>>
>>> But when we observe our own interiority, nothing seems to follow
>>> finite local deterministic rules. We appear to be able to conjure an
>>> infinite universal indeterminacy at will.
>> Really? I can't imagine more than five or six different things at the same time.
>>
>> Brent
> You can't imagine a view of the Sistine Chapel? A hairbrush with
> thousands of bristles? The sound of millions of cicadas in the trees?
> Not sure what you mean. Granted, there is loose sense of bandwidth of
> sense we can make in any given moment, but I'm talking about the
> infinite qualities of the content we can generate.

I deny that you imagine those multiplicities. You just have a word for them and you
imagine them as single things to which you attach the word.

> We can
> spontaneously make up things out of nowhere. The flavor of pinecone
> icecream, the sound of a hat being eaten by a crocodile, a Picasso
> version of a Pollack painting. This is not a capacity that lends
> itself to explanation by determinism or can be compared to things that
> we can observe outside of ourselves.

The random combination of known objects and concepts. Before computers it was commonly
implemented using concentric cardboard disks with words written along radii.
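Something like this toy sketch (illustration only; the word lists are
arbitrary):

import random

nouns = ["icecream", "hat", "painting", "hairbrush"]
events = ["being eaten by", "flavored with", "repainted as", "filled with"]
things = ["a crocodile", "pinecones", "a Picasso", "cicadas"]

def spin_the_disks():
    # pick one word from each "disk", like the old concentric cardboard toys
    return f"a {random.choice(nouns)} {random.choice(events)} {random.choice(things)}"

print(spin_the_disks())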

Brent

>
> Craig
>

meekerdb

unread,
Dec 14, 2011, 2:51:19 PM12/14/11
to everyth...@googlegroups.com
On 12/14/2011 10:40 AM, Joseph Knight wrote:


On Tue, Dec 13, 2011 at 11:32 PM, Kim Jones <kimj...@ozemail.com.au> wrote:
Any chance someone might précis for me/us dummies out here in maybe 3 sentences what Tim Maudlin's argument is? Nothing too heavy - just a quick refresher.

I'll try, but with a few more than 3 sentences. Suppose the consciousness of a machine can be said to supervene on the running of some program X. We can have a machine run the program but only running a constant program Y that gives the same output as X for one given input. In other words, it cannot "handle" counterfactual inputs because it is just a constant program that does the same thing no matter what. Surely such a machine is not conscious. It would be like, if I decided "I will answer A B D B D D C A C..." in response to the Chemistry test I am about to run off and take, and happened to get them all correct, I wouldn't really know Chemistry, right?

But I think Russell has reasonably questioned this.  You say X wouldn't know chemistry.  But that's a matter of intelligence, not necessarily consciousness.  We already know that computers can be intelligent, and there's nothing mysterious about intelligence "supervening" on machines.  Intelligence includes returning appropriate outputs for many different inputs.  But does consciousness?

Brent






Joseph Knight

unread,
Dec 14, 2011, 5:09:59 PM12/14/11
to everyth...@googlegroups.com
On Wed, Dec 14, 2011 at 1:51 PM, meekerdb <meek...@verizon.net> wrote:
On 12/14/2011 10:40 AM, Joseph Knight wrote:


On Tue, Dec 13, 2011 at 11:32 PM, Kim Jones <kimj...@ozemail.com.au> wrote:
Any chance someone might précis for me/us dummies out here in maybe 3 sentences what Tim Maudlin's argument is? Nothing too heavy - just a quick refresher.

I'll try, but with a few more than 3 sentences. Suppose the consciousness of a machine can be said to supervene on the running of some program X. We can have a machine run the program but only running a constant program Y that gives the same output as X for one given input. In other words, it cannot "handle" counterfactual inputs because it is just a constant program that does the same thing no matter what. Surely such a machine is not conscious. It would be like, if I decided "I will answer A B D B D D C A C..." in response to the Chemistry test I am about to run off and take, and happened to get them all correct, I wouldn't really know Chemistry, right?

But I think Russell has reasonably questioned this.  You say X wouldn't know chemistry.  But that's a matter of intelligence, not necessarily consciousness.  We already know that computers can be intelligent, and there's nothing mysterious about intelligence "supervening" on machines.  Intelligence includes returning appropriate outputs for many different inputs.  But does consciousness?

I was really just using my Chemistry test as an imperfect analogy to the machine running Y being conscious (or not), so it doesn't affect the rest of the argument. But I see your point. Would you argue that a constant program (giving the same output no matter the input) can be conscious in principle? Maudlin assumes that such a program cannot be conscious, in his words, "it would make a mockery of the computational theory of mind." I am agnostic. In my opinion the Filmed Graph argument is more convincing than Maudlin, because with Maudlin one can still fall back to the position "consciousness can in principle supervene on a constant program".

(For those interested, here is the article itself)

Craig Weinberg

unread,
Dec 14, 2011, 7:29:21 PM12/14/11
to Everything List
On Dec 14, 11:49 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 14 Dec 2011, at 04:56, Craig Weinberg wrote:

> > I'm not sure I get it. I thought your position is that physics is a
> > computational simulation.
>
> That's not my position. My working hypothesis is that "I" am a
> machine, in the sense that I could survive with a copy of my brain
> done at some level.

Right, but I thought your position is that matter is not primitive, so
that all of physics that pertains to matter is an epiphenomenon of the
underlying computation. I thought your view is that the brain itself
can be virtualized so that "I" can be run as a purely digital
application in any computing machine with enough power.

> From this I can show that whatever the physical universe can be, it
> cannot be a "computational object". Indeed it is only an appearance
> emerging from a non computational statistics on computations.
> Likewise, consciousness also is not a computational thing.

How do you know that it's not the computation which is only an
appearance emerging from the physical universe, which inherently
includes the potential for consciousness? If you ask computation which
is primitive, it can only tell you that it is because it has no
capacity to make sense of anything else. If you ask me what
is primitive, I say awareness. To me, the primitive is the symmetry of
the two and the function of that symmetry in influencing perspective -
which is to me, sense-making.

>
>
>
> >>> There is nothing logical about identifying
> >>> conscious experiences with computational states.
>
> >> Here I disagree with you.
> >> Although there is nothing sure from which we could deduce such a
> >> relationship, we might still *infer* or *believe* that the brain is a
> >> "natural" computer, (that is the truncation of you at the digital
> >> level is a universal machine (in the Post, Church, Turing sense)).
>
> > I think that the brain is a biocomputer, but it also hosts
> > consciousness. Consciousness uses the computing capacity of the brain,
> > but awareness itself is not a disembodied computational state.
>
> Why not?

Because awareness wouldn't be necessary for machine computation.
Computers require no monitor or keyboard to compute. Also, awareness
requires no familiarity with computation. We feel and see without
having to understand arithmetic, even indirectly. I don't need to know
that blue comes between green and violet on the spectrum, or that it is
a primary color, or the opposite of another color; I can see it
directly as a self-explanatory phenomenon.

>
> > It's
> > living cells. Their awareness scales up to our awareness.
>
> How?

By sharing the same history of being the same single cell and staying
in continuous contact I suppose. It's just how awareness works. It's
not like objects in space, it's subjects through time. They are
semantically entangled as a multi-leveled shared experience in the now
- a now which is the tip of the iceberg of all experience. It's like
this:

<Human-Primate> consciousness <Mammal-Vertebrate> awareness
<Organism-Body> perception <Organ-Tissue> feeling <Cell-Gene> sensation
<Molecule-Atom> detection <quantum-arithmetic*>

*quantum arithmetic embodiment is not a concrete realism but an
analytical interpretation. It’s just the sense that atoms make
together, not literal particle/waves flying through space
instantaneously.

>
> > It is driven
> > by their first person agendas as well as ours, which cannot be
> > accessed objectively.
>
> >> We can believe the brain is a computer like most of us would believe
> >> that the heart is a pump.
>
> > I understand, and I agree, the brain functions like a computer.
>
> Yes, there are many evidences, if only because locally everything
> does, as far as we know. Except for the collapse of the quantum waves
> (that nobody can explain, and that Everett explained away) we have not
> yet find anything in nature which is not Turing emulable. That might
> be a long term problem for comp, because comp predicts that the
> physical universe is NOT turing emulable, but it might be everywhere
> Turing emulable locally.

Actually, this article says recent neuroscience suggests the brain is
not a computer:

"The Cornell researchers found that the brain continuously shifts
between states rather than having internal "variables" that contain
discrete "values" that are updated as the result of calculation
processes. According to researcher Michael Spivey, "In thinking of
cognition as working as a biological organism does... you do not have
to be in one state or another like a computer, but can have values in
between -- you can be partially in one state and another, and then
eventually gravitate to a unique interpretation, as in finally
recognizing a spoken word." The brain is not composed of modules that
pass the results of calculations back and forth; there are no
"results," just continual modulation. "

http://www.kschroeder.com/archive/blog/1120137696/index_html.html

>
> > It
> > also functions like a pump,
>
> A brain? Why?

It's got ventricles that it's constantly filling with fluid that
circulates around the CNS. It's got billions of cellular neurotransmitter
pumps that are associated with changes in consciousness.

>
> > a radio,
>
> Why?

Alpha, beta, gamma, delta waves. The brain broadcasts electromagnetic
signals in the radio frequency range (0.1-60Hz).

>
> > a coral reef,
>
> Why?

It's a collective colony of individual organisms that construct
persistent structures tying them together.

>
> > a pharmacy,
>
> OK.
>
> > a
> > library,
>
> OK.
>
> > a synaptic suburb,
>
> Why?

It's a vast manifold of individual nodes and the pathways between them,
with electrochemical traffic circulating from node to node.

>
> > etc. Generally the brain is compared to
> > the most advanced technology of whatever era is considering it.
>
> Not at all. It is compared to machine only, and wisely so given the
> evidences.

Is there ever an advanced technology that isn't a machine?

"The seductiveness of the analogy between human neural activity and
digital symbol manipulators has proved irresistible. It has been
characteristic of Western thought throughout the modern period,
beginning with Lamettrie's L'Homme Machine in 1750. Seeing humanity in
the image of which ever machine most dominates contemporary life is
what might be called mechanemorphism. With Lamettrie it was the clock.
The combustion engine followed. Freud thought electromagnets were a
good metaphor for the brain. Today, this tendency finds its most
extreme expression with the computer, especially amongst the
proponents of 'strong' artificial intelligence. Mechanemorphism has
conditioned not only our overall attitude to computers but also the
very terminology that has arisen around them."

"Over at BLDGBLOG, Geoffrey makes an astute observation about how the
latest consumer technologies have a way of becoming metaphors for the
mind. Before the brain was a binary code running on three pounds of
cellular microchips, it was an impressive calculator, or a camera, or
a blank slate. In other words, we're constantly superimposing the
gadgets of the day onto the cortex. Geoffrey notes that a recent
article featured on the BBC on fMRI scans of taxicab drivers ("Taxi
drivers have brain sat-nav") is very similar to an earlier study,
except that the most recent article used satellite navigation as a
metaphor for the spatial memories storied in the hippocampus:"
http://scienceblogs.com/cortex/2008/09/brain_metaphors.php

Here's a fun list for kids: http://faculty.washington.edu/chudler/metaphor.html

> Now, we have discovered universal machine, and the comparison just
> makes *much* more sense.
>
>
>
> >> We do have evidence that whatever the level we choose to look on,
> >> when
> >> we observe an heart or a brain, nothing seems to violate finite local
> >> deterministic rules (machine).
>
> > But when we observe our own interiority, nothing seems to follow
> > finite local deterministic rules.
>
> I agree with you. But that's exactly what introspecting machine are
> saying, and can even explain.

Why do you believe that such introspections are dependent on
computation and not just as much the other way around?

>
> > We appear to be able to conjure an
> > infinite universal indeterminacy at will.
>
> Yes. And we still don't know exactly how a machine can do that, but
> their rich theology is promising with this respect.

Why make the machine the conjurer rather than the conjured?

>
> > We don't know what a heart
> > can imagine, but it doesn't seem to do exactly what a brain does, and
> > neither does anything else. A brain really cannot be compared to
> > anything else until we can get outside of a brain.
>
> We can compare the brain with anything. And the comparison with
> computer, especially in the mathematical original sense of the word,
> is worth to study. Universal machine or number are very rich objects.
> They are already able to defeat all universal theories.

How can we compare the brain with something else when the only
consciousness we have ever experienced is through our own brain? If we
compare the brain with anything else - like a loom, or a player piano,
or a block of cheese, we know that those things don't have human
consciousness, so why would we be interested in treating them as part
of the same category? Consciousness is the only thing that is
important about the brain.

>
>
>
> >>> Pain is not a number.
>
> >> Sure.
>
> >>> Blue is not a an algorithm which can be exported to non-visual
> >>> mechanism.
>
> >> You assume non-comp. The fact that the experience of blueness is
> >> not a
> >> number does not make it impossible that "blueness" is "lived" through
> >> an arithmetical phenomenon involving self-reference of a machine with
> >> respect to infinities of machines and computations.
>
> > But the specificity of it would be unnecessary.
>
> Why?

Because comp would just assign a memory pointer that has no aesthetic
qualities at all, let alone a specific category of qualities and
associated semantic tonalities and textures. Blueness would be a
functionally redundant addition on top of the actual quantitative
label.

>
> > Why and how would
> > blueness be invoked just to set a self-referential equivalence?
>
> To accelerate decision.

Just the opposite. It slows it down. If you need a frequency-
wavelength quantity to begin with to assign blueness to, you already
have the precise criterion that is useful for decision. It would be no
faster to add on a flavor or color, and even if it were, where do you
get these flavors and colors from?

>
> > No
> > matter how powerful a computer we build, we're never going to need to
> > invent blue to perform some arithmetic operation,
>
> Why should we need to invent it? It is already there, in the relation
> in-between universal numbers.

Why do we need eyes to see blue if numbers can already see it and we
are a process of numbers?

>
> > and no arithmetic
> > operation is ever going to have blue as a solution.
>
> You are right. An arithmetic operation, like a physical event are just
> not the right type of object for seeing blue. Only person (including
> animals) can do that. But this does not contradict the fact that they
> might survive with a digital brain.

It could. If being me is more like seeing blue than it is like a
computer program, then a digital brain may very well not be the right
type of object for being me.

>
>
>
> >>> It's false.
>
> >> You don't know that. You assume non-comp. You have not produce a
> >> refutation of comp, as far as I know.
>
> > I am a refutation of comp.
>
> You are not a proof. Even from your own private point of view.

Why not? Or say blue is a refutation of comp instead.

>
> > That's how I know it.
>
> Comp, nor non-comp, is not the kind of thing we can *know*. We can
> assume them and reason. besides in science we *know* nothing for sure.
> Even if God appears to you and tell you that you are not a machine,
> that will prove nothing, even to you. using that argument shows only
> that you are influenceable through authoritative argument (the worst
> possible kind of argument in fundamental science).

I would agree with you in any area other than consciousness.
Subjectivity can only be 'proved' by authoritative argument. The fact
that we are able to locate our own authority is itself a refutation of
comp. A machine cannot do that.

>
> > I can care about
> > things and have preferences, computation cannot.
>
> But why could not people do that, when incarnated relatively through
> computations (note the plural).
> If you just say that machine cannot have preference, you are just
> begging the question.

Because the preferences come out of the sense and motive of the body
it is being incarnated into. We are not the music, we are the musician
and the audience. It's not necessarily transferable, and if we
discovered how to make it so, there might not be any point in playing
the human game any more in the first place.

>
> > Computation has
> > instructions and parameters, variables, and functions, but no
> > opinions, no point of view.
>
> I have displayed the math of the 8 types of opinion/points of view
> that *any* sound machine canNOT NOT discover by introspection. One of
> them is the physical modalities, making comp + the classical theory of
> knowledge testable.

How can they be called opinions if there are 8 fixed possibilities?
Opinions aren't multiple choice variables, they are created
subjectively.

>
> Computer science explains very well were does the opinion, knowledge,
> sensation, observation of machines comes from.

I don't think computers or machines have any of those things except to
us. I can connect a fire extinguisher to a computer and it's still not
going to have the sense to activate it if I set the computer on fire.
It doesn't matter if it's Watson or Deep Blue, it's going to sit there
and burn down to a heap of molten slag, flashing 'Fire extinguisher
connected and ready' until the very end. If an entity can't muster an
opinion about it's own existence, if it has to be told how to act as
if it cared, then how can we really consider its intelligence
mimicking algorithms 'opinions' or points of view?

>
> It might not be the correct explanations, but correct machines already
> provide them. We might listen to them.

I agree, we can learn a lot from them and we should listen to
them. I think we can learn even more if we adjust our expectations so
that we understand that machines tell us about the exterior of the
universe and the opposite of the interior.

See that's the crazy part though. Why do numbers have dreams? Do their
dreams help them calculate? If anything, the dreamers should dream
numbers and not the other way around.

>
>
>
> >>> It goes both ways. The universe feels. We are
> >>> the evidence of that.
>
> >> Which universe? All the universal being can feel.
> >> But the big whole, from inside, is just so big that it is not
> >> unnameable, so I will not dare to address the question of "its"
> >> thinking.
>
> > I was meaning more that the possibility of feeling exists within the
> > universe.
>
> Which universe? The arithmetical one? The physical one? The
> theological one?

How can it be the universe if it doesn't encompass all of those? The
one and only uni-verse.

>
> > Feeling is one of the things that the universe knows how to
> > physically produce.
>
> ?

If I take an aspirin, have I not physically produced analgesia?

>
>
>
> >>> 2. Feeling is not a computation,
>
> >> Right. But this does not mean that it cannot related to self-
> >> referential truth about a universal machine relatively to other
> >> universal machines and infinities of computations, random noise
> >> oracle, etc.
>
> > I agree, it could be related to different arithmetic consequences but
> > that is still not sufficient to explain the experience of feeling
> > itself. It's like saying that typing is related to language and
> > communication so therefore a keyboard must understand what you are
> > typing on it - that keystrokes inherently produce whatever meaning is
> > present in words.
>
> Feeling are explained by the fact that machine can refer entirely to
> their own body (at some level), and this in different ways from
> different points of view which obeys different logics. In particular
> qualia correspond to available non communicable truth. They do have a
> role by speeding up relative computation and decision. In fact the
> more a machine introspect, the bigger is the set of non communicable
> truth.

Why do they speed up computation and decision? Isn't that like saying
it's faster to run a program in a high level programming language than
to compile it into machine language? Even if they did, why generate these
elaborate aesthetics, and most of all how? Are numbers omnipotent? If
they need to tell the difference between one thing and another can
they just invent whole new palettes of primary colors and novel
dimensions of sensation? To me it's like saying, 'hmm, these filing
cabinets are getting pretty full, maybe I'll reinvent space in a way
that it appears to be concretely charged with emotion, yet clearly
communicates an intention towards non-subjectivity.'

>
>
>
> >>> otherwise it would be unexplainable
> >>> and redundant.
>
> >> Yes. An epiphenomena.
>
> > I think an epiphenomena just has to be non causally efficacious.
>
> I agree. That is why I like comp: it prevents consciousness and
> private life to be epiphenomena. They are just real and very useful
> (for just surviving for example) phenomenon. Stephen would add here
> that comp makes primitive matter epiphenomenal, but that is a
> nonsense: primitive matter just goes away.

Sense does that too, but it doesn't require that things be useful to
be real.

>
> > I run
> > my car engine and the heat and exhaust are epiphenomena. Feeling makes
> > no sense as a possible exhaust of computation.
>
> Right. But that's a consequence of comp. feeling is not a computation.
> What happens with comp is that a feeling is a truth about a person
> incarnated at once by an infinity of computations.

What incarnates them and why? Why not just have the infinity of
computations?

>
> > The whole point of
> > computation is it's normalized, parsimonious integrity.
>
> Hmm... You might confuse machines before and after Gödel. We have
> learned something fundamental about machines: we have learned that we
> cannot know what they are capable of (and this can be justified
> entirely if we assume we are machines ourselves).

If you define machine that broadly though, you aren't really saying
anything about anything. If we can't know what they are capable of,
then we can't be sure that we should call them machines. It's
tautological to say we know that machines can be like us because if we
assume we are machines then machines would be doing what we do. If we
assume that we are weapons then we can say that we cannot know what
weapons are capable of. Maybe computation is just means to an end of
weaponry? It's somewhat of a sophist position, but really no more than
comp to me. Hence the term 'rocket science'. Where did computers and
networks come from? The military.

>
> > Where does a
> > picture of a nonexistent palm tree come from in the f(x)?
>
> By the unbounded imagination of the universal machines, especially
> when they are glued in long and deep sharable histories.

I do doubt their imagination though. My computer doesn't have an
imagination. The whole internet has no imagination. Just users of
brains. I'm willing to accept that computing is in its infancy, so I
would set the bar pretty low, but I have seen nothing yet which
strikes me as having an authentic voice. Computer music is the music
of plastic. It can be pretty, but it's not an expression of creative
teleology, it's just abstract noodling. It is the magnificence of the
human imagination, for better or worse, that is able to derive such
knowledge, insight, and power out of what I consider to be the
immaculate sterility of machine intelligence.

>
>
>
> >> It is the same error of formalism and
> >> reductionism trying to eliminate truth in favor of forms. This can
> >> only exist through a misunderstanding of Gödel's and Tarski's theorems. Even in
> >> math we cannot eliminate truth and intuition, and assuming comp, and
> >> *some amount* of self-consistency, we can "know" why.
>
> > I like this whole direction of mathematics, and even though my mind
> > isn't well suited to it, I do respect the importance of the
> > contribution.
>
> Nice.
>
> > Turing too. I think the whole self-referential
> > revelation is the functional skeleton of the most literal, objective
> > sense of the cosmos.
>
> Nice.
>
> > There is intelligence and wisdom there,
> > unquestionably.
>
> OK.
>
> > I just think that it's only *almost* the secret of the
> > universe. To get the whole secret, we have to bring ourselves all the
> > way into the the laboratory. Everything that arithmetic is, the
> > universe also is not.
>
> We don't know what arithmetic is.

Then why not call it sense?

>
> > Figurative, semantic, poetic, intuitive,
> > sensorimotive, sentient, etc. These aspects of our realism cannot be
> > meaningfully reduced to arithmetic,
>
> You might be confusing a theory of arithmetic with arithmetic itself.
> Today we know those things are far apart.
> A theory of arithmetic is just a universal machine, or a Löbian
> machine. Arithmetical truth is *far* beyond any machine.

It sounds like logos to me. Which is ok, but even that is not juicy
enough for biological realism. You need techne too.

>
> > nor can arithmetic be understood
> > by wishes and fiction. What they can be reduced to is the sense of
> > order and symmetry which unites and divides them.
>
> >>> If physics were merely the enactment of automatic
> >>> algorithms, then we would not be having this conversation.
>
> >> OK. But I dare to insist that if we assume mechanism, physics is
> >> everything but an enactment of an algorithm. Comp makes digital
> >> physics wrong, a priori. I think that the DU even diagonalizes
> >> 'naturally' against all possible computable physics. But if that is
> >> not the case, comp still forces us to extract the special physical
> >> universal machine from the first person experience measure problem.
>
> > Hard for me to follow. Why doesn't physics include enactment? I
> > thought comp makes physics digital?
>
> A lot of people develop that confusion, that is why I insist so much
> that comp is in opposition to digital physics, at least as a
> fundamental theory.
>
>

Isn't a digital brain a kind of digital physics?

>
> >>> Nothing
> >>> would be having any conversation. What would be the point? Why
> >>> would a
> >>> computation 'feel' like something?
>
> >> Well, a computation does not feel, like a brain does not feel. But a
> >> person (a Löbian self-referential being) can, and thanks to
> >> relatively
> >> stable computations emulating the self relatively to other machine,
> >> that person can manifest herself through computations. Then that
> >> person can be aware of the impossibility to communicate that feeling
> >> to any probable universal neighbors in case it is unwilling to do
> >> that.
>
> > How do you know that a Löbian being isn't just a simulation of a self-
> > referential being?
>
> It is, in the trivial sense that you might consider the number one
> being a simulation of itself. But that is rather misleading, and
> certainly false if "simulation" is taken in the computer science
> sense. In that case a Löbian machine is only a simulation (emulation)
> of some other universal system (like arithmetic). In that sense we are
> simulations too.

But what if the quality of Löbian self reference is not authentic?
What if it just behaves as if it is referencing itself because that's
what our interpretation has led us to expect?

>
> > It's only our sense of self projecting it's own
> > image onto a generic arithmetic process, like a cartoon.
>
> The cartoon lacks everything making it a computation. At best, it
> gives a description of a computation.
> The Gödel number of a computation is not a computation. A computation
> is a complex relation between numbers and a universal number. The
> Gödel number of a computation is just a number.
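A tiny sketch to make that distinction concrete (my illustration only: the string below is an arbitrary stand-in for a Gödel number, and Python's interpreter stands in for a universal machine). A description of a computation does nothing by itself; the computation only exists in its relation to a universal interpreter.

desc = "sum(range(10))"      # a mere description of a computation (a 'code')
# The string computes nothing on its own.  Only the relation between the
# description and a universal interpreter yields an actual computation:
result = eval(desc)          # 45 -- now a computation has taken place
print(desc, "->", result)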

A cartoon is different in its behavioral capacities from an
interactive program, but that just strikes me as a degree of
sophistication and not an indicator that a program is any more likely
to develop its own sense of self. I can make a cartoon where the
characters act like they are talking to the audience. All I have to do
is select my audience members carefully and my cartoon could address
audience members by name.

>
> > Does acting
> > like a self automatically make it a self?
>
> Yes. Or you get zombie.

What about a ventriloquism dummy or actor on a movie screen? Those are
real things that seem to us like they might have a self, but they do
not. Their selfhood is an extension of a human agent, just as a
machine is an extension of a group of programmers or engineers.

>
> > What if you intentionally
> > want to make a Löbian being that only seems like it is self-
> > referential but actually is not?
>
> Then it will fail on some self-referential task.

Like failing to try to put itself out if you set it on fire?

>
>
>
> >>> 3. Physics is feeling as well as computation.
>
> >> ?
>
> > It relates to phenomena in the universe which is ultimately tangible
> > or has tangible consequences. It's not just computation for the sake
> > of computation.
>
> I guess you mean "physical universe". I don't believe that exists in
> any ontological sense. Physical reality is a (non arithmetical)
> projection made by non arithmetical beings emerging from infinities of
> arithmetical relations.

I think it's just the opposite. Physical reality is a concrete
singularity, it is the divisions of that singularity which are
diffracted as object surfaces in 'space' and subject depths through
'time'. What you are talking about is true too, but it's inside out.
You're trying to model the outside of the universe when it can't have
an outside by definition. Matter is the stuff. That's where the action
is. Inside of matter. Like our brain. As a primitive, it's not matter,
because it's the divisions that make it matter. The divisions create
the realism.

>
>
>
> >>> We know that we can tell
> >>> the difference between voluntary control of our mind and body and
> >>> involuntary processes.
>
> >> Partially, yes.
>
> >>> My feeling and intention can drive
> >>> physiological changes in my body and physiological changes in my
> >>> body
> >>> can drive feelings, thoughts etc. If it were just computation, there
> >>> would be no difference, no subjective participation.
>
> >> OK.
> >> But comp does not say that we are computation. It says only that we
> >> are only *relatively* dependent on some universal computation going
> >> on
> >> relatively to some probable computations. The subjective machine will
> >> speed up, because it bets on its consistency, on the existence of
> >> itself relatively to the possible other machines. Memories become a
> >> scenario with a hero (you).
>
> > I'm not opposed to the idea of us being relatively dependent on some
> > universal computation, but not in a strictly epiphenomenal way.
>
> I agree with you.

ok

>
> > The
> > universal computations are also influenced by us directly, our sense
> > and motive on the macro-person level.
>
> Some are, locally and relatively, but most are not. You cannot change
> at will the additive/multiplicative structure of numbers.
>

Ok, I am getting more of the difference between universal computations
and computations, but I think that the universality is sort of like sense
spread out to its thinnest possible layer so that, in order to apply
to everything, it must mean nothing but what it literally refers to.
In this way it's not truly universal, because it's only true in this
one narrowly defined generic sense. Arithmetic is sense with all of
the significance boiled off, leaving you with the essence of a-
signifying semiotics. In this way, arithmetic truths trace a boundary
around where significance is supposed to be, revealing it by making
its absence clear. You can't change the structure of numbers because
only the formalism of their intent is real. I can make one drop of
water by adding four smaller drops together. That contradicts
universal numbers, but it doesn't make the truths it contradicts any
less true. What is real is the human sense and motive behind the
numbers, not the numbers themselves.

>
>
> >>> 4. Computation is not primitive.
>
> >> You get computation quickly. Universality is cheap. Assuming
> >> elementary arithmetic (like everyone does in high school, notably)
> >> makes it already there.
>
> > Quickly, yes. Universal, sure, at least as far as objects go.
>
> As far as computation go. Not sure what you mean by "objects" here.

As opposed to subjects. You can be alive as a person for years without
having to consciously compute anything.

>
>
>
> >> Its immunity for diagonalization makes it the most transcendental
> >> mathematical reality, and yet still effective.
>
> > I believe it. There is almost certainly no more powerful tool to
> > manipulate our environment. It's just that the thing that wants to
> > exercise power and manipulate the environment in the first place has
> > to precede the tool, if we are talking about a Theory of Everything.
> > If it were a Theory of Engineering, I would bet on computation every
> > time.
>
> Diagonalization exists in arithmetic, out of time and space. Time and
> space come from the numbers' ability to diagonalize and refer to
> themselves.
> When I wrote "Amoeba, Planaria and dreaming machine" I thought
> engineers would jump on that, and some did, but unfortunately, the
> technology is still waiting for more powerful hardware to do that.
>
> And then it will not work, for the reason that nobody wants clever
> machines (who could be choosy about their users and destiny), for the
> same reason that nobody really wants children to be educated and free.
> Humans love to chat about freedom, but I think they really hate it in
> their hearts.
>
> I don't think the machines will ever be intelligent thanks to the
> humans, they will be intelligent *despite* the humans. We want slaves,
> not competitors.
>

Definitely. That's why computationalism is really kind of a minor
point from an engineering perspective. We don't want real AGI so
knowing that the way we are trying to get it is wrong should free us
to make better servants.

>
>
> >>> It is a higher order sensorimotive
> >>> experience which intellectually abstracts lower order sensorimotive
> >>> qualities of repetition, novelty, symmetry, and sequence. When we
> >>> project arithmetic on the cosmos, we tokenize functional aspects
> >>> of it
> >>> and arbitrarily privilege specific human perception channels.
>
> >> You lost me. I guess it makes sense with some non-comp theory.
>
> > In a material metaphor, I'm saying that plastic is a higher order
> > phenomenon of synthetic organic chemistry, not a molecular primitive.
> > Despite its utility and flexibility in simulating almost any kind
> > of material to our eyes, it's actually the deeper qualities underlying
> > the plastic which give it its pseudo-universality. When we mistake
> > plastic for the root of all matter, we focus on its plasticity as it
> > serves us (rather than questioning the underlying chemistry which
> > gives plastic its qualities).
>
> Plastic sucks. We should use renewable plants instead!

Absolutely. All I want for Christmas is a flying solar yurt made of
water harvesting plant skin and with an internet connection.

>
>
>
> >>> 5. Awareness is not primitive.
>
> >> I agree.
>
> >>> Awareness does not exist absent a
> >>> material sensor.
>
> >> That's locally true. It might be necessary, but that's an open
> >> problem.
>
> >>> Some might argue for ghosts or out of body/near death
> >>> experiences, but even those are reported or interpreted by living
> >>> human subjects. There is no example of a disembodied consciousness
> >>> haunting a particular ip address or area of space.
>
> >> How do you know that? I guess you are right today, but "human made"
> >> machines, programs and bugs are still very young, yet they grow
> >> explosively on the net.
>
> > They still have to have a material net to grow on though.
>
> Even if that existed, it cannot help. That is the point of the MGA: http://old.nabble.com/MGA-1-td20566948.html
>
> > You can't
> > catch a programming bug from your computer. It seems like comp would
> > have a hard time explaining why that is - harder than it is for a six
> > year old to observe that it obviously can't happen.
>
> A six-year-old child has a brain which is the product of millions of
> years of evolution. Give time to (human made) machines; the human-made
> computers are in their infancy, and 99.9999% of applied computer
> science consists in controlling them, not in letting them control
> themselves.
>

You could just as easily say that computers are the product of all of
the millions of years of evolution of all of the computer scientists
and programmers.

>
>
> >>> 6. Sense is primitive.
>
> >> Not with comp. Sense is primitive only from the first person
> >> perspective, but not in "God's eyes" (the unnameable arithmetical
> >> truth
> >> talks to the machines).
>
> > What is arithmetical truth if it doesn't make sense?
>
> Nothing. It does make sense. That's the whole point: it makes sense to
> the universal numbers inhabiting (in some sense) arithmetical truth.
>

For it to make sense, the sensemaking has to already be possible in
the universe.

>
>
> >>> Everything that can be said to be real in any
> >>> sense has to make sense.
>
> >> Ah! In that sense? Then I am OK.
> >> 0=1 is false independently of me or anything.
>
> > Yes! Well yes in the literal sense that you intend.
>
> OK.
>
> > It could be said
> > that the 'knowledge of the nothingness of death' = the 'singularly
> > human experience' or something like that...1=0 in the sense 'each
> > thing begins from no thing'.
>
> Hmm... Then we would write 0 => 1. Not 0 = 1.

Sounds reasonable. Still, there are other examples of 0 = 1. Having No
answer for a test question can be One headache, etc.

>
>
>
> >>> The universe has to make sense before we can
> >>> make sense of it.
>
> >> Probably true with "we" = "humans".
> >> False with "we" = "the universal beings", and universe meaning
> >> physical universe.
>
> > How can sense arise from a universe which doesn't make sense? The
> > possibility of sense is itself sense.
>
> Yes. And the arithmetical universe makes sense, to us, but also to a
> vast class of (relative) numbers (that is a shorthand for "people
> incarnated in infinities of numbers relations").

Wouldn't they be enumerated rather than incarnated? I guess I agree
but I would say that there is a vast class of sense which we can make
which is not arithmetic and we do not share with numbers. If
arithmetic is that thin layer of sense stretched out to embrace as
many machine (external) truths as possible, then the psyche is a
towering pillar of sense reaching upward and inward to non-machine
experiences.

>
>
>
> >>> The capacity for being and experiencing inherently
> >>> derives from a distinction between what something is and everything
> >>> that is it isn't. The subject object relation is primary - well
> >>> beneath computation. Subjectivity is self-evident. It needs no
> >>> definition statement and no definition statement can be sufficient
> >>> without the meaning of the word 'I' already understood.
>
> >> Here you make a subtle error. You are correct (telling truth), but
> >> incorrect to assume that we cannot explain those truths (self-
> >> evidence, no possible definition) when making some assumptions (like
> >> mechanism, and the non expressible self-referential correctness on
> >> the
> >> part of the machine).
>
> > It's not that I assume that we cannot explain those truths in other
> > ways, just that I don't assume that those other explanations can
> > dilute or negate the naive subjective orientation.
>
> And you are right on this. That's my whole point. We cannot and should
> not discard the subjective feeling of 'numbers' and machines. That
> would be an error, even for engineers.
>
> > Just because the
> > map is not the territory doesn't mean that the map is not a phenomenon in
> > its own right. It doesn't mean that map-making is an emergent
> > property of the territory.
>

> OK. But yet that might still be possible..

But the map makes changes to the territory.

>
>
>
> >>> If something
> >>> cannot understand 'I', it cannot ever be a subject.
>
> >> Self-reference is the jewel of computer science. Machines can easily
> >> understand the third person I, and experience the first person I. And
> >> the first is finitely describable, and the second is only a door to
> >> the unknown.
>
> > How can you tell the difference between a machine reflecting our sense
> > of I and a first person experience of I? What gives us reason to think
> > a digital I is genuine?
>
> The richness of the machine's introspection, and notably the difference
> between what a machine can take as true and what she can justify
> rationally. That might make comp the simplest fertile
> explanation of the consciousness/realities coupling.

How do you know a machine's introspection is rich?

>
>
>
> >>> I cannot be
> >>> simulated, digitized,
>
> >> Relatively? That's your non-comp assumption.
>
> > The simulation would have to turn me into someone else and still be
> > me. A simulation could act like me in every way, but the I that I am
> > now would not be extended into that simulation.
>
> You can't be sure of that.

Brain-conjoined twins can refer to themselves as I, but identical
twins don't. That suggests that neurological connection is the basis,
not identical similarity.

It would seem that through volition, we generate some of our own
computable laws...which I think makes them not particularly lawful or
computable.

>
>
> >>> I may be computation in part, but then computation is
> >>> also me. Arithmetic must have all the possibilities of odor and
> >>> sound.
> >>> Numbers must get dizzy and fall down.
>
> >> Not numbers, but the hero appearing in the numbers' dreams.
>
> > What are those dreams made of?
>
> They are only relational. Nothing is "made of" something.

They could serve the same relational functions and not be dreams. Why
should they be dreams?

>
>
>
> >>> 7. Mistaking consciousness for computation has catastrophic
> >>> consequences. It is necessary to use computation to understand the
> >>> 'back end' of consciousness through neurology, but building a
> >>> worldview on unrealism and applying it literally to ourselves is
> >>> dissociative psychosis.
>
> >> Not only you will not give a steak to my son in law, but I see you
> >> will try to send his doctor in the asylum.
> >> Well, thanks for the warning.
>
> > What would be the difference between an asylum and anywhere else?
>
> In an asylum you are forced to take toxic harmful drugs. Less so,
> anywhere else (especially jail).

I think there's actually a lot of drugs in jail.

>
> > Can't numbers dream just as well in an asylum?
>
> Not with the kind of medication you get in an asylum. You can't even
> dream there.

Medication is just physical matter though. Surely not a problem for
numbers?

>
>
>
> >>> Even as a semi-literal folk ontology, the
> >>> notion of automatism as the authoritative essence of identity has
> >>> ugly
> >>> consequences.
>
> >> Automata are below universality.
>
> > Are they below identity?
>
> ?

Automatism as the essence of identity. Are you saying identity is the
phenomenon, or just an expression of a deeper essence which is
automatic?

>
>
>
> >>> Wal Mart. Wall Street. The triumph of quantitative
> >>> analysis over qualitative aesthetics is emptying our culture of all
> >>> significance, leaving only a digital residue - the essence of
> >>> generic
> >>> interchangeability - like money itself, a universal placeholder for
> >>> the power of nothingness to impersonate anything and everything.
>
> >> I am as much sad about that than you, but your reductionist view on
> >> machine will not help.
>
> > Are you sure? What is economics but socially enforced
> > computationalism?
>
> I really don't see the relation. In a democracy, economics is a way to
> distribute money and enrich everyone, but only if bandits are not
> perverting it for their own special interest.

What is money though? Computation enacted socially. All goods and
services reduced to interchangeable digital quantities.

>
>
>
> >>> Just
> >>> as alchemists and mystics once gazed into mere matter and
> >>> coincidence
> >>> looking for higher wisdom of a spiritual nature, physics and
> >>> mathematics now gazes into consciousness looking for a foregone
> >>> conclusion of objective certainty.
>
> >> No. The point is that we cannot do that even with machine.
>
> > Certainty of uncertainty.
>
> Absolutely.
>
>
>
> >>> It's a fool's errand. Without us,
> >>> the brain is a useless organ.
>
> >> You can say that.
>
> >>> All of its computations add up to
> >>> nothing more or less than a pile of dead fish rotting in the sun.
>
> >> Without us? Sure.
>
> >> But who us?
>
> > Us natural persons.
>
> Nooooo.... us, the Löbian Universal Numbers. They are the ones giving
> internal sense to "everything". They always fill the gaps. That makes
> them wrong almost all the time, but that also gives them their meaning,
> learning abilities and purposes.

?

>
> > Human beings extending psychologically into
> > autobiographical experience with historical context and corporeal
> > bodies with cells and molecules inside and cities, planets, and
> > galaxies outside.
>
> Locally.

Locally in two opposite senses. Local to the body and local to the
subject. My experiences are local to me in my memory, but not to my
body. My cells and organs are local to my body but not my psyche
directly.

Craig

Craig Weinberg

unread,
Dec 14, 2011, 7:34:43 PM12/14/11
to Everything List
On Dec 14, 1:57 pm, meekerdb <meeke...@verizon.net> wrote:

>
> It matters in that the cartoon is patterns on a piece of film whose environment is our world.

The cartoon only exists as a pattern to us, not to the piece of film.
The human psyche is the cartoon's only environment. The pigments or
emulsions or pixels which conduct the sense and motive of the
cartoonist to its human audience are not the cartoon. In the literal
sense of the film and its world, there is no pattern, just a stain.

Craig

Craig Weinberg

unread,
Dec 14, 2011, 7:44:45 PM12/14/11
to Everything List
On Dec 14, 2:03 pm, meekerdb <meeke...@verizon.net> wrote:
> On 12/14/2011 6:47 AM, Craig Weinberg wrote:

> > You can't imagine a view of the Sistine Chapel? A hairbrush with
> > thousands of bristles? The sound of millions of cicadas in the trees?
> > Not sure what you mean. Granted, there is loose sense of bandwidth of
> > sense we can make in any given moment, but I'm talking about the
> > infinite qualities of the content we can generate.
>
> I deny that you imagine those multiplicities.  You just have a word for them and you
> imagine them as single things to which you attach the word.

Huh? If I drop a box of matches in real life, I can see the
multiplicity, but unlike RainMan I don't automatically know precisely
how many. I can imagine a box of matches spilled out on the floor,
can't you? I can imagine the difference between around 200 matches and
around 2000 matches, can't you? I can imagine a tarp the size of a
picnic blanket piled high with matches. Not sure how many, maybe 1.5
million? Just guessing. Maybe your imagination works differently.
Maybe that makes you see the world differently than I do.

>
> > We can
> > spontaneously make up things out of nowhere. The flavor of pinecone
> > icecream, the sound of a hat being eaten by a crocodile, a Picasso
> > version of a Pollack painting. This is not a capacity that lends
> > itself to explanation by determinism or can be compared to things that
> > we can observe outside of ourselves.
>
> The random combination of known objects and concepts.  Before computers it was commonly
> implemented using concentric cardboard disks with words written along radii.

Dialing the word 'Picasso style' next to the word 'Pollack painting'
is not combining anything. They are just meaningless markings on a
cardboard disk. The ability to imagine what a Jackson Pollack painting
might look like if Pablo Picasso painted it is not something that a
cardboard disk can do. You take perception for granted each and every
time. You talk as if drawing a happy face on a rock makes the rock
happy.

Craig

meekerdb

unread,
Dec 14, 2011, 8:11:14 PM12/14/11
to everyth...@googlegroups.com
On 12/14/2011 2:09 PM, Joseph Knight wrote:


On Wed, Dec 14, 2011 at 1:51 PM, meekerdb <meek...@verizon.net> wrote:
On 12/14/2011 10:40 AM, Joseph Knight wrote:


On Tue, Dec 13, 2011 at 11:32 PM, Kim Jones <kimj...@ozemail.com.au> wrote:
Any chance someone might précis for me/us dummies out here in maybe 3 sentences what Tim Maudlin's argument is? Nothing too heavy - just a quick refresher.

I'll try, but with a few more than 3 sentences. Suppose the consciousness of a machine can be said to supervene on the running of some program X. We can have a machine run the program but only running a constant program Y that gives the same output as X for one given input. In other words, it cannot "handle" counterfactual inputs because it is just a constant program that does the same thing no matter what. Surely such a machine is not conscious. It would be like, if I decided "I will answer A B D B D D C A C..." in response to the Chemistry test I am about to run off and take, and happened to get them all correct, I wouldn't really know Chemistry, right?
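A toy sketch of that X-versus-Y distinction, for concreteness (the program, names and inputs below are invented for illustration; this is not Maudlin's actual construction):

def X(n):
    # Genuinely computes its answer, so it also handles counterfactual inputs.
    return sum(range(n + 1))

RECORDED_INPUT = 10
RECORDED_OUTPUT = X(RECORDED_INPUT)   # the one run that actually took place

def Y(n):
    # "Constant" program: replays the recorded answer whatever n is.
    return RECORDED_OUTPUT

print(X(RECORDED_INPUT) == Y(RECORDED_INPUT))   # True: same output on the given input
print(X(3) == Y(3))                             # False: Y cannot handle the counterfactual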

But I think Russell has reasonably questioned this.  You say X wouldn't know chemistry.  But that's a matter of intelligence, not necessarily consciousness.  We already know that computers can be intelligent, and there's nothing mysterious about intelligence "supervening" on machines.  Intelligence includes returning appropriate outputs for many different inputs.  But does consciousness?

I was really just using my Chemistry test as an imperfect analogy to the machine running Y being conscious (or not), so it doesn't affect the rest of the argument. But I see your point. Would you argue that a constant program (giving the same output no matter the input) can be conscious in principle?

I don't think something can be conscious in the human sense unless it is intelligent.  The question is can something be intelligent without being conscious.  I incline to not, but I'm not sure.  I think the interesting point is that there tends to be an unjustified slip from consciousness to intelligence in some arguments.  In particular the "323" argument implicitly assumes that not-intelligent=>not-conscious.

Brent

Maudlin assumes that such a program cannot be conscious, in his words, "it would make a mockery of the computational theory of mind." I am agnostic. In my opinion the Filmed Graph argument is more convincing than Maudlin, because with Maudlin one can still fall back to the position "consciousness can in principle supervene on a constant program".

(For those interested, here is the article itself)
 

Brent



So consciousness doesn't supervene on Y. But Maudlin (basically) shows that you can just add some additional parts to the machine that handle the counterfactuals as needed. These extra parts don't actually do anything, but their "presence" means the machine now could exactly emulate program X, i.e., is conscious. So a computationalist is forced to assert that the machine's consciousness supervenes on the presence of these extra parts, which in fact perform no computations at all. 

I think what Russell said about this earlier, i.e., in a multiverse the extra parts are doing things, so consciousness then appears at the scale of the multiverse -- is fascinating. But I am out of time. Hope this helped. I would recommend reading the original paper for the details.
--

meekerdb

unread,
Dec 14, 2011, 8:32:14 PM12/14/11
to everyth...@googlegroups.com
On 12/14/2011 4:29 PM, Craig Weinberg wrote:
> How do you know that it's not the computation which is only an
> appearance emerging from the physical universe, which inherently
> includes the potential for consciousness? If you ask computation which
> is primitive, it can only tell you that it is because it has no
> capacity to make sense of anything else. If you ask only myself what
> is primitive, I say awareness. To me, the primitive is the symmetry of
> the two and the function of that symmetry in influencing perspective -
> which is to me, sense-making.

Of course if you set out to explain consciousness one possibility is to just assume
consciousness, i.e. awareness, is fundamental. But then you need to explain everything else,
physics, Platonia,... in terms of awareness. Ordinarily "awareness" is a two part
relation: A is aware of B. So do you need another primitive to be the object of awareness?

Brent

meekerdb

unread,
Dec 14, 2011, 8:34:54 PM12/14/11
to everyth...@googlegroups.com

So you have muddled the question into, "Is our conscious awareness of the fictional
events in a cartoon conscious?"

Brent

Joseph Knight

unread,
Dec 14, 2011, 8:35:14 PM12/14/11
to everyth...@googlegroups.com
On Wed, Dec 14, 2011 at 7:11 PM, meekerdb <meek...@verizon.net> wrote:
On 12/14/2011 2:09 PM, Joseph Knight wrote:


On Wed, Dec 14, 2011 at 1:51 PM, meekerdb <meek...@verizon.net> wrote:
On 12/14/2011 10:40 AM, Joseph Knight wrote:


On Tue, Dec 13, 2011 at 11:32 PM, Kim Jones <kimj...@ozemail.com.au> wrote:
Any chance someone might précis for me/us dummies out here in maybe 3 sentences what Tim Maudlin's argument is? Nothing too heavy - just a quick refresher.

I'll try, but with a few more than 3 sentences. Suppose the consciousness of a machine can be said to supervene on the running of some program X. We can have a machine run the program but only running a constant program Y that gives the same output as X for one given input. In other words, it cannot "handle" counterfactual inputs because it is just a constant program that does the same thing no matter what. Surely such a machine is not conscious. It would be like, if I decided "I will answer A B D B D D C A C..." in response to the Chemistry test I am about to run off and take, and happened to get them all correct, I wouldn't really know Chemistry, right?

But I think Russell has reasonably questioned this.  You say X wouldn't know chemistry.  But that's a matter of intelligence, not necessarily consciousness.  We already know that computers can be intelligent, and there's nothing mysterious about intelligence "supervening" on machines.  Intelligence includes returning appropriate outputs for many different inputs.  But does consciousness?

I was really just using my Chemistry test as an imperfect analogy to the machine running Y being conscious (or not), so it doesn't affect the rest of the argument. But I see your point. Would you argue that a constant program (giving the same output no matter the input) can be conscious in principle?

I don't think something can be conscious in the human sense unless it is intelligent.  The question is can something be intelligent without being conscious. 

I have always assumed so. Maybe it is unjustified, but I see no compelling reason why intelligence implies consciousness. There are strong reasons to believe the two are correlated though, because I agree that consciousness probably implies high intelligence. 
 
I incline to not, but I'm not sure.  I think the interesting point is that there tends to be a unjustified slip from consciousness to intelligence in some arguments. 

Agreed; I have encountered this many times in discussions like this. I prefer to leave intelligence out of it entirely, because I don't think there is any real controversy over whether intelligent entities can be built with 1s and 0s. In fact, they already have.
 
In particular the "323" argument implicitly assumes that not-intelligent=>not-conscious.

I am still unsure of the 323 argument, could you or someone explain?
 

Brent

Maudlin assumes that such a program cannot be conscious, in his words, "it would make a mockery of the computational theory of mind." I am agnostic. In my opinion the Filmed Graph argument is more convincing than Maudlin, because with Maudlin one can still fall back to the position "consciousness can in principle supervene on a constant program".

(For those interested, here is the article itself)
 

Brent



So consciousness doesn't supervene on Y. But Maudlin (basically) shows that you can just add some additional parts to the machine that handle the counterfactuals as needed. These extra parts don't actually do anything, but their "presence" means the machine now could exactly emulate program X, i.e., is conscious. So a computationalist is forced to assert that the machine's consciousness supervenes on the presence of these extra parts, which in fact perform no computations at all. 

I think what Russell said about this earlier, i.e., in a multiverse the extra parts are doing things, so consciousness then appears at the scale of the multiverse -- is fascinating. But I am out of time. Hope this helped. I would recommend reading the original paper for the details.
--
Joseph Knight

meekerdb

unread,
Dec 14, 2011, 8:36:46 PM12/14/11
to everyth...@googlegroups.com
On 12/14/2011 4:44 PM, Craig Weinberg wrote:
> You talk as if drawing a happy face on a rock makes the rock
> happy.

You talk as if it's inventing happiness.

Brent

meekerdb

unread,
Dec 14, 2011, 9:17:12 PM12/14/11
to everyth...@googlegroups.com
On 12/14/2011 5:35 PM, Joseph Knight wrote:


On Wed, Dec 14, 2011 at 7:11 PM, meekerdb <meek...@verizon.net> wrote:
On 12/14/2011 2:09 PM, Joseph Knight wrote:


On Wed, Dec 14, 2011 at 1:51 PM, meekerdb <meek...@verizon.net> wrote:
On 12/14/2011 10:40 AM, Joseph Knight wrote:


On Tue, Dec 13, 2011 at 11:32 PM, Kim Jones <kimj...@ozemail.com.au> wrote:
Any chance someone might précis for me/us dummies out here in maybe 3 sentences what Tim Maudlin's argument is? Nothing too heavy - just a quick refresher.

I'll try, but with a few more than 3 sentences. Suppose the consciousness of a machine can be said to supervene on the running of some program X. We can have a machine run the program but only running a constant program Y that gives the same output as X for one given input. In other words, it cannot "handle" counterfactual inputs because it is just a constant program that does the same thing no matter what. Surely such a machine is not conscious. It would be like, if I decided "I will answer A B D B D D C A C..." in response to the Chemistry test I am about to run off and take, and happened to get them all correct, I wouldn't really know Chemistry, right?

But I think Russell has reasonably questioned this.  You say X wouldn't know chemistry.  But that's a matter of intelligence, not necessarily consciousness.  We already know that computers can be intelligent, and there's nothing mysterious about intelligence "supervening" on machines.  Intelligence includes returning appropriate outputs for many different inputs.  But does consciousness?

I was really just using my Chemistry test as an imperfect analogy to the machine running Y being conscious (or not), so it doesn't affect the rest of the argument. But I see your point. Would you argue that a constant program (giving the same output no matter the input) can be conscious in principle?

I don't think something can be conscious in the human sense unless it is intelligent.  The question is can something be intelligent without being conscious. 

I have always assumed so. Maybe it is unjustified, but I see no compelling reason why intelligence implies consciousness. There are strong reasons to believe the two are correlated though, because I agree that consciousness probably implies high intelligence. 
 
I incline to not, but I'm not sure.  I think the interesting point is that there tends to be a unjustified slip from consciousness to intelligence in some arguments. 

Agreed; I have encountered this many times in discussions like this. I prefer to leave intelligence out of it entirely, because I don't think there is any real controversy over whether intelligent entities can be built with 1s and 0s. In fact, they already have.
 
In particular the "323" argument implicitly assumes that not-intelligent=>not-conscious.

I am still unsure of the 323 argument, could you or someone explain?

As I understand it, if some computer, given a certain input, performs a computation that entails consciousness there will in general be some register in the computer that plays no active part in the computation, say register 323.  So the same computation, entailing the same consciousness, would be performed with register 323 removed.  In fact we could eliminate all the registers and components that do not change state during the computation without affecting the supervening consciousness.  We can essentially reduce the computer to a playback machine that only performs this computation given the certain input.  With any other input it will do something stupid or do nothing.  Hence it can't possibly be conscious and we must have been wrong to suppose that consciousness supervened on the computation.  Notice however that we have assumed that since it is now not intelligent, it can't be conscious.  It is assumed that consciousness implicitly requires intelligent response to counterfactuals.  But it is really intelligence or competence that requires this.
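A minimal sketch of the reduction described above, purely for illustration (the toy machine and the register names, including "r323", are invented here): for one fixed input, any register that never changes state can be dropped without affecting the run, and the run itself can be replaced by a playback of the recorded trace, which then handles no counterfactuals at all.

def toy_program(inp):
    # A toy 'machine run', expressed as a sequence of register updates.
    yield ("r1", inp + 1)
    yield ("r2", inp * 2)          # register "r323" is never touched on this input

def run(program, inp, registers):
    state = dict(registers)
    touched = set()
    for reg, value in program(inp):
        state[reg] = value
        touched.add(reg)
    return state, touched

initial = {"r1": 0, "r2": 0, "r323": 0}
final, touched = run(toy_program, 5, initial)
inert = set(initial) - touched       # registers playing no active part
print(sorted(inert))                 # ['r323']

# Playback machine: replays the recorded trace for that one input only,
# and does something stupid (or nothing) for any other input.
recorded_trace = list(toy_program(5))
def playback(_ignored_input):
    return dict(recorded_trace)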

Brent

Bruno Marchal

unread,
Dec 15, 2011, 3:37:02 AM12/15/11
to everyth...@googlegroups.com
On 15 Dec 2011, at 02:11, meekerdb wrote:

On 12/14/2011 2:09 PM, Joseph Knight wrote:


On Wed, Dec 14, 2011 at 1:51 PM, meekerdb <meek...@verizon.net> wrote:
On 12/14/2011 10:40 AM, Joseph Knight wrote:


On Tue, Dec 13, 2011 at 11:32 PM, Kim Jones <kimj...@ozemail.com.au> wrote:
Any chance someone might précis for me/us dummies out here in maybe 3 sentences what Tim Maudlin's argument is? Nothing too heavy - just a quick refresher.

I'll try, but with a few more than 3 sentences. Suppose the consciousness of a machine can be said to supervene on the running of some program X. We can have a machine run the program but only running a constant program Y that gives the same output as X for one given input. In other words, it cannot "handle" counterfactual inputs because it is just a constant program that does the same thing no matter what. Surely such a machine is not conscious. It would be like, if I decided "I will answer A B D B D D C A C..." in response to the Chemistry test I am about to run off and take, and happened to get them all correct, I wouldn't really know Chemistry, right?

But I think Russell has reasonably questioned this.  You say X wouldn't know chemistry.  But that's a matter of intelligence, not necessarily consciousness.  We already know that computers can be intelligent, and there's nothing mysterious about intelligence "supervening" on machines.  Intelligence includes returning appropriate outputs for many different inputs.  But does consciousness?

I was really just using my Chemistry test as an imperfect analogy to the machine running Y being conscious (or not), so it doesn't affect the rest of the argument. But I see your point. Would you argue that a constant program (giving the same output no matter the input) can be conscious in principle?

I don't think something can be conscious in the human sense unless it is intelligent.  The question is can something be intelligent without being conscious.  I incline to not, but I'm not sure.  I think the interesting point is that there tends to be a unjustified slip from consciousness to intelligence in some arguments.  In particular the "323" argument implicitly assumes that not-intelligent=>not-conscious.

It assumes only "no-computation => no-consciousness".

Intelligence (in the deep sense, not in the sense of competence) requires more than consciousness: it requires self-consciousness (which I think can be attributed to the Löbian machines, the universal machines "rich enough" to "know" that they are universal).

Intelligence is necessary to develop competence, but competence has a negative feedback on intelligence. But consciousness (raw data feeling) does not necessitate intelligence, I think. 

Bruno







Craig Weinberg

unread,
Dec 15, 2011, 5:54:26 PM12/15/11
to Everything List

I think the subjective / objective relation is actually the primitive.
One side is just the opposite or exterior of the other, heads and
tails. That's what I'm trying to get at by using the word 'sense'. You
can't get more primitive than sense, and it inherently requires a
detector and a detection. Sense is always something touching
something, either literally or figuratively. Making contact or
bridging the gap.

Really both A and B need to be able to detect each other, although
what they detect is the object side of the other. A sees B only as
something like (ɐq - A), i.e., for us the exterior of A is (ɐ) (our
body, sense organs, nervous system, cells, molecules, etc), the
exterior of B is (q) (size, shape, behavior, context, etc). and the
subtraction of A is the absence of subjective identification -
objectification. That loss is mediated by the degree to which (q)
resembles the reflected image of (ɐ), which would be (a). The more you
seem like me, the more I identify with you. What you seem like to me
is not actually you as you are to yourself, but as you are to me. If
I'm a person, an ant is a pest. If I'm an ant, another ant is a family
member.

Craig

Craig Weinberg

unread,
Dec 15, 2011, 5:59:53 PM12/15/11
to Everything List

No, I'm saying there obviously aren't any fictional events in any
cartoon consciousness; they are in our consciousness only. The literal
mechanism of the cartoon is an empty vessel which is somewhat
interchangeable (but not disposable and not arbitrarily
interchangeable - it has to be something cartoonable, like a flip book,
zoetrope, dynamic graphic display, etc).

Craig

Craig Weinberg

unread,
Dec 15, 2011, 6:02:28 PM12/15/11
to Everything List

How so? I'm saying that a drawing on a rock only looks like a happy
face to us. It looks like nothing in particular to other organisms.

Craig

Russell Standish

unread,
Dec 16, 2011, 4:39:53 AM12/16/11
to everyth...@googlegroups.com
On Mon, Dec 12, 2011 at 04:11:54PM +0100, Bruno Marchal wrote:
> >Maudlin's argument relies on the absurdity that the presence or absence
> >of inert parts bears on whether something is conscious. This absurdity
> >only works in a single universe setting, however. If your computer is
> >embedded in a Multiverse, the absurdity vanishes, because those inert
> >parts are no longer inert.
>
> But they do not play a part in the computation, at the correct
> substitution level.

They certainly look like they are. If these parts weren't present, the
calculation proceeds differently in the other branches of the
Multiverse. In other words, counterfactuals are not handled correctly.

> They are playing a part concerning the first person indeterminacy,
> like in the UD*, or in QM physics. But that is derived (and has to
> be) from the indeterminacy.
>

They do that as well, but this is not relevant to Maudlin's argument...

>
> >If you then fold the multiverse back into a
> >single universe by dovetailing, one can then reapply the Maudlin
> >move.
>
> Indeed. That is the key point.
>
>
>
> >But then, in that case, one can embed that result into a
> >Multiverse, and the cycle repeats.
>
> I don't think we can. That would be like saying that we have to
> start from the quantum multiverse, but the reasoning shows that we
> can start from any universal machinery, like numbers. To start from
> the multiverse would be treachery (for the derivation of matter) and
> ambiguous (we don't assume QM). And even with QM, the multiverse
> notion is quite complex and controversial: is it a non-computational
> multidream (as forced by comp), or is it a multi-physical material
> reality (as forbidden by the MGA).

I do start with a Multiverse for Occam's razor reasons (it is hardly
treachery), and I know you don't (since it is derived in your
case). However, that is beside the point for Maudlin's argument. I'm
only observing that Maudlin's argument fails in a Multiverse reality.


>
>
> >
> >The question is - where is the consciousness in all this? I think it
> >must move with the levels - and given the UDA and COMP, I would say
> >that consciousness appears at the Multiverse level, not the single
> >universe level.
>
> That is right, but with comp that "multiverse" is the mathematical
> structure which needs to be entirely derived from the theory of
> consciousness or from the self-reference logics.
>

Why? I can see how, but why?

>
>
> >
> >BTW - I had a similar problem with your MGA - it is not intrinsically
> >absurd to me that a recording can be conscious.
>
> There is no computation in a recording. There is only a fixed
> description of a computation. In arithmetic, it is like confusing p
> and Bp.

This also means there is no computation in a block universe like UD*.

I think this needs to be spelled out. It is not so obvious.

> With p sigma_1, p and Bp looks alike (which explains the subtlety
> of that nuance) in the sense that we have both
> p -> Bp and
> Bp -> p
> But Bp -> p is only true (provable at some [ ]*-logic level), and
> not provable by the machine, so p and Bp will still behave in a
> different logical way.
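A compact restatement of that nuance in standard provability-logic notation (my gloss, reading B as the box/provability predicate of the machine; the formulas below restate the point, they are not a quote):

% For a Sigma_1 sentence p and a consistent, sufficiently rich machine M:
\[
  \vdash_M \, p \rightarrow \Box p
  \qquad (\Sigma_1\text{-completeness: } p \text{ and } \Box p \text{ ``look alike''})
\]
\[
  \Box p \rightarrow p \ \text{ is true (at the } [\,]^{*} \text{ level), yet }
  \nvdash_M \, \Box p \rightarrow p \ \text{ unless } \vdash_M p \quad (\text{L\"ob}).
\]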

I see the difference between p and Bp, but not the relevance to
recordings and computation. Sorry to be difficult here.

> Then you have the stroboscopic argument which shows that a recording
> like a movie is not well defined in time and space.
> But the simplest way, imo, to see that a recording cannot be conscious
> (with comp, 'qua computatio') is that there are no longer any
> computations done by a recording.
>

The computations may be phenomenal to the consciousness in
question. If not, why not? We have often talked about what links
observer moments together in this list.

>
>
> >From the right point
> >of view (presumably that of the consciousness itself - aka the "inside
> >view"), it seems plausible that a recording could be conscious.
>
> Still another argument is that no piece of the movie can have any
> causal relationship with any other part, and so can be removed,
> eventually making a *particular* consciousness (a dream about an
> ice-cream, for example) supervene on the vacuum.

Isn't this what you were calling the "stroboscopic argument" above?

>
> What is correct is that consciousness is related to all events
> having made the recording possible, but this is only in virtue of
> some numbers having some special relations with other numbers, and we
> are back to the computationalist supervenience thesis.
>
> We might come back to the MGA, given some other questions on the list.
> So if this is unclear you might ask questions, or wait until I
> re-explain the whole argument.
>

I remember when you were explaining the MGA before, we got to this
point where you relied on recordings not being conscious, and I think
you said you hoped you didn't need to explain that bit :). I did ask
why at the time.

It's not a biggie though - just one of those "not understanding all the
steps" things.

-

----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics hpc...@hpcoders.com.au
University of New South Wales http://www.hpcoders.com.au
----------------------------------------------------------------------------

Bruno Marchal

unread,
Dec 16, 2011, 11:42:19 AM12/16/11
to everyth...@googlegroups.com

On 16 Dec 2011, at 10:39, Russell Standish wrote:

> On Mon, Dec 12, 2011 at 04:11:54PM +0100, Bruno Marchal wrote:
>>> Maudlin's argument relies on the absurdity that the presence or
>>> absence
>>> of inert parts bears on whether something is conscious. This
>>> absurdity
>>> only works in a single universe setting, however. If your computer
>>> is
>>> embedded in a Multiverse, the absurdity vanishes, because those
>>> inert
>>> parts are no longer inert.
>>
>> But they do not play a part in the computation, at the correct
>> substitution level.
>
> They certainly look like they are. If these parts weren't present, the
> calculation proceeds differently in the other branches of the
> Multiverse. In other words, counterfactuals are not handled correctly.

If you think of a quantum multiverse, then that argument would work if
the brain is a quantum computer. If it is classical, its states can be
considered as having been prepared in the classical basis, and the
computation (or non computation) will be handled correctly in each
branch of the quantum multiverse, in which the same MGA reasoning will
apply.

So you are introducing a different kind of physical multiverse, which
would handle the counterfactuals. But this will not work.
Either this physical multiverse, which plays the role of the
generalized brain, is Turing emulable, in which case I can emulate it
in a single Turing machine, for which the MGA will apply again. Or it
is not Turing emulable, but then the need for it would contradict the
comp assumption.


>
>> They are playing a part concerning the first person indeterminacy,
>> like in the UD*, or in QM physics. But that is derived (and has to
>> be) from the indeterminacy.
>>
>
> They do that as well, but this is not relevant to Maudlins argument...

The parallel realities do not play any role for a classical
computation, except for statistical interference (in the case of a quantum
computer). But if this plays a role, it means that we have not chosen
the right level of substitution. Once it has been chosen correctly (or
below), what happens in some other branch cannot interfere or play any
role in the computation.

>
>>
>>> If you then fold the multiverse back into a
>>> single universe by dovetailing, one can then reapply the Maudlin
>>> move.
>>
>> Indeed. That is the key point.
>>
>>
>>
>>> But then, in that case, one can embed that result into a
>>> Multiverse, and the cycle repeats.
>>
>> I don't think we can. That would be like saying that we have to
>> start from the quantum multiverse, but the reasoning show that we
>> can start from any universal machinery, like numbers. To start from
>> the multiverse would be treachery (for the derivation of matter) and
>> ambiguous (we don't assume QM). And even with QM, the multiverse
>> notion is quite complex and controversial: is it a non computational
>> multidreams (as forced by comp), or is it a multi-physical material
>> reality (as forbidden by the MGA).
>
> I do start with a Multiverse for Occams razor reasons (it hardly
> treachery), and I know you don't (since it is derived in your
> case). However, that is beside the point for Maudlin's argument. I'm
> only observing that Maudlin's argument fails in a Multiverse reality.

If the register "323" is missing in one branch of a quantum
multiverse, it is missing in all normal extension of the computational
state of the machine. Some rare branch will have the pieces, and from
there (and thus from the first person point of view of the subject)
everything will go well, by comp. But only because we fall back in a
branch where the piece is not missing. This is not different than the
comp or quantum immortality argument. The fact remains: the physical
activity in one normal branch missing the register is the same as the
physical activity in some branch not missing it, for the same
particular computation. Then Maudlins argument shows correctly that
the physical activity can be made arbitrary (and even non existing),
showing that comp links consciousness not on the physical activity of
the program, but on the computational (in the sense of computer
science) structure only, making matter and physics an epistemological
indexical for the conscious entity involved.


>
>
>>
>>
>>>
>>> The question is - where is the consciousness in all this? I think it
>>> must move with the levels - and given the UDA and COMP, I would say
>>> that consciousness appears at the Multiverse level, not the single
>>> universe level.
>>
>> That is right, but with comp that "multiverse" is the mathematical
>> structure which needs to be entirely derived from the theory of
>> consciousness or from the self-reference logics.
>>
>
> Why? I can see how, but why?

Keeping comp, we might say "only by Occam", but that would be weak,
given the fact that not much of known physics is handled by comp
currently.
But then the reason why we have to do that, even without Occam, is the
MGA. If some physical reality is at play in the brain, with a role in
the making of consciousness, comp makes it Turing emulable in a single
reality, and in that single reality we can change the computer's
structure so that its physical activity is arbitrary, by adding, like
Maudlin, some physically inactive pieces of matter to handle the
counterfactuals. And what I say above will apply.

>
>>
>>
>>>
>>> BTW - I had a similar problem with your MGA - it is not
>>> intrinsically
>>> absurd to me that a recording can be conscious.
>>
>> There is no computation in a recording. There is only a fixed
>> description of a computation. In arithmetic, it is like confusing p
>> and Bp.
>
> This also means there is no computation in a block universe like UD*.
>
> I think this needs to be spelled out. It is not so obvious.

UD* contains a lot of computations (indeed all of them). They are
executed by the UD, or by the additive and multiplicative structure of
the natural numbers.
I think that you are confusing UD* with a description of UD*, which
would contain all descriptions of all computations. But that is
already given by the counting algorithm, which generates 0, 1, 2, 3,
... and thus all descriptions of all computations. Yet the counting
algorithm is not Turing universal, and does not make any computation.
UD* is not just a collection of all descriptions of computations; it
is a mathematical structure which executes, even if only in the
arithmetical sense, all computations. At the UD* level you make the
same mistake made by those who think that a recording or a cartoon
executes a computation, when it only describes one.
UD*, unlike the counting algorithm, *executes* computations (and can
perhaps describe them too) in virtue of relating the numbers
arithmetically. The computation is in the true relational structure of
the numbers (arithmetical, combinator-related, etc.), not in the
description of the computations. (N, +) describes all computations,
but does not run any program, except the program sending x to x+1. The
UD, or the structure (N, +, *), does generate and run all programs.
The proof that the counting algorithm describes all computations can
be done in very few lines. The proof that sigma_1 arithmetic (part of
(N, +, *)) runs all computations cannot be made in less than 50 pages.
UD*, like a block universe, is a very rich and subtle structure.
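
(A rough Python sketch, just to fix ideas; the toy "programming model"
and the names are mine, not part of the argument. The first generator
only enumerates descriptions; the dovetailer actually executes every
program of the toy model, a few steps at a time.)

# Toy contrast (my own illustration, not Bruno's formalism): "counting"
# merely enumerates code numbers, i.e. descriptions, while the
# dovetailer below actually *runs* every program, interleaving steps.

from itertools import count

def counting():
    # Enumerates 0, 1, 2, 3, ... Each number can be read as the
    # description (code) of a program, but nothing is executed here.
    for n in count():
        yield n

def step(n, state):
    # One execution step of "program n" in a deliberately trivial
    # model, standing in for the step function of a universal machine.
    return state + n + 1

def dovetail(stages=4):
    # Stage k: start program k, then advance every program started so
    # far by one more step; each program gets arbitrarily many steps.
    states = {}
    for k in range(stages):
        states[k] = 0
        for n in range(k + 1):
            states[n] = step(n, states[n])
            print("stage", k, "- program", n, "now in state", states[n])

if __name__ == "__main__":
    gen = counting()
    print("first descriptions:", [next(gen) for _ in range(5)])
    dovetail()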


>
>> With p sigma_1, p and Bp looks alike (which explains the subtlety
>> of that nuance) in the sense that we have both
>> p -> Bp and
>> Bp -> p
>> But Bp -> p is only true (provable at some [ ]*-logic level), and
>> not provable by the machine, so p and Bp will still behave in a
>> different logical way.
>
> I see the difference between p and Bp, but not the relevance to
> recordings and computation. Sorry to be difficult here.


No problem. I know that the point is subtle. p here is supposed to
correspond to some computational truth, and Bp to a proof of that
computational truth. The problem is that "p" is also a proposition,
and as such it will involve a description of that computation. But the
computation is in the meaning, or the truth, of some number-theoretical
relation p, not in the description of p, which is needed only because
we are talking. Then the same occurs at the meta-level, with Bp. And,
for p sigma_1, the two are equivalent, but not provably so for the
machine.

The relation with the recordings and the computation is the following
one. To have a computation you need a universal system relating
(logico-arithmetically) the steps of the computation. A description of
a computation does not relate the steps by itself. It describes how
the steps are related, but only the original computation, executed by
some universal system (the filmed one), instantiates the true
existence of those relations; the movie can then describe them, but it
no longer does any computation.

A proof that a computation exists can only be done through a proof
that the description of the computation exists, and p and Bp will be
logically equivalent (for a self-referentially correct machine, and p
sigma_1 (computational)), but this does not mean that a computation is
the same object as a description of the computation.

I would like to say this in some easier way, but it is hardly
possible, because to talk about a computation, I have to describe it.
It is easier at the meta-level, where I can identify a computation
with, say, a universal machine and a sequence of numbers describing
the evolving states of the machine run by the universal machine. Then
a description of this by a machine will be just one number coding that
stuff. But typically you might accuse me of describing the difference
between Bp and BBp. It just happens that I cannot describe p. The
difference is really the same as the difference between the true fact
that 5 is a prime number, and the sentence "5 is a prime number",
which might be true, or false, according to the semantics a universal
machine will use to decode and interpret that string of letters.
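
(If it helps, here is that distinction in toy Python form; it is my
own illustration, with an arbitrary step function. The trace is
actually produced by running; the code number merely records it.)

# Toy illustration (mine, not Bruno's formalism): a computation as the
# run of a step function, versus a description of that computation as
# a single number coding the whole sequence of states.

def step(state):
    # One step of a tiny machine, standing in for a universal
    # machine's transition function.
    return (3 * state + 1) % 17

def run(initial, n_steps):
    # Actually doing the computation: each state comes from the last.
    trace = [initial]
    for _ in range(n_steps):
        trace.append(step(trace[-1]))
    return trace

def encode(trace, base=17):
    # A description of the computation: the whole trace packed into
    # one number. The number computes nothing; it only records it.
    code = 0
    for s in reversed(trace):
        code = code * base + s
    return code

def decode(code, base=17):
    digits = []
    while code:
        code, s = divmod(code, base)
        digits.append(s)
    return digits

if __name__ == "__main__":
    trace = run(2, 6)
    code = encode(trace)
    print("executed trace :", trace)
    print("its description:", code)          # one static number
    print("decoded again  :", decode(code))  # same states, nothing re-run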

>
>> Then you have the stroboscopic argument which shows that a recording
>> like a movie is not well defined in time and space.
>> But the simplest, imo, to see that a recording cannot be conscious
>> (with comp, 'qua computatio') is that there is no more any
>> computations done by a recording.
>>
>
> The computations may be phenomenal to the consciousness in
> question. If not, why not?

By definition of a computation. There is a real program linking the
states of a machine, like the UD, or like a local physical universe
(in some conception of them), or any universal machine.
With comp, the reason why you are conscious here and now is that there
exists a computation going through your state. But your consciousness
is not in any description of any computation; it is in the truth of
the (relative) existence of that computation.

> We have often talked about what links
> observer moments together in this list.

It is a universal system. If we start from the numbers, it is the
truth of number-theoretical propositions: the non-trivial
additive/multiplicative structure of the numbers does the relating.
Those relations are independent of us. The only problem for us is that
there is an infinity of such relations, and we can only bet on the
locally most probable universal history. Below our substitution level,
all universal machines run by the UD are fiercely competing.

>
>>
>>
>>> From the right point
>>> of view (presumably that of the consciousness itself - aka the
>>> "inside
>>> view"), it seems plausible that a recording could be conscious.
>>
>> Still another argument is that no piece of the movie can have any
>> causal relationship with any other part, and so can be removed,
>> eventually making a *particular* consciousness (a dream about an
>> ice-cream, for example) supervene on the vacuum.
>
> Isn't this what you were calling the "stroboscopic argument" above?

I am not sure why you say so. In the stroboscopic argument, we move an
observer along a giant version of the movie's film strip, with a
stroboscope sending a flash from infinity (say), so that the observer
sees a movie. But the presence of the observer plays no role in what
is present or absent in the device he is looking at. So we can remove
the observer, but then we have just a giant film strip + a
stroboscope, and the identification of consciousness with the state of
the movie in time no longer makes any sense.
We don't remove any part of the film strip, as in the argument just
sketched above.


>
>>
>> What is correct is that consciousness is related to all events
>> having made the recording possible, but this is only in virtue of
>> some numbers having some special relations with other number, and we
>> are back to the computationalist supervenience thesis.
>>
>> We might come back on MGA, given some other questions on the list.
>> So if this is unclear you might ask question, or wait that I
>> re-explain the whole argument perhaps.
>>
>
> I remember when you were explaining the MGA before, we got to this
> point where you relied on recordings not being conscious, and I think
> you said you hoped you didn't need to explain that bit :). I did ask
> why at the time.

It seems I have answered that. It is true that in the first 1988
version, I just said that to confuse a movie of a fact with the fact
itself is the biggest error a philosopher can make: confusing reality
with a representation of reality!
But, alarmed by the fact that some people seem to want to make that
confusion in the case of a running computer and a (high-resolution)
movie of a running computer, I provided the stroboscopic argument (in
the French thesis), and the usual removal of unnecessary components
(parts of the film strip).

>
> Its not a biggie though - just one of those "not understanding all the
> steps" things.


There is no problem, Russell. The MGA is at the heart of the mind-body
problem with comp. UDA 1-7 already proves a lot (indeterminacy,
non-locality, non-cloning), but you can still escape the
non-materialism by conceiving that physical reality is too small to
run a significant part of the UD. MGA is supposed to show that this
move is a red herring.

Some people believe that MGA is not needed, and indeed you can add to
comp some principles like "no arithmetical zombie in arithmetic", or
that "something non-material cannot act on something material + "I"
can act on matter", etc. Or even just that "physical reality is
infinite". But those principles are hard to make precise, or hard to
justify, and the MGA, it seems to me, makes them unnecessary.

The difficulty is that we cannot describe the truth of an arithmetical
proposition without going through a description of that arithmetical
proposition, and the same holds for computations.

It is like the difference between the fact that 2 + 3 = 5, and the
sentence "2 + 3 = 5".

It is even more like the difference between a proof that 2 + 3 = 5
and a description of a proof that 2 + 3 = 5.

It is also deeply related to the difference between the formal
implication p -> q and the deduction p => q, or, as I said, at a
higher level, the difference between a machine computing some function
and the sentence (perhaps written in a language that the machine can
'understand') "I, or the machine, compute(s) that function".

I hope this helps a bit; ask for any clarification at any level. It is
obviously a difficult matter, combining the nasty subtleties of
philosophy of mind with the nasty subtleties of theoretical computer
science and mathematical logic.

Bruno

http://iridia.ulb.ac.be/~marchal/

meekerdb

unread,
Dec 16, 2011, 1:27:47 PM12/16/11
to everyth...@googlegroups.com
On 12/16/2011 8:42 AM, Bruno Marchal wrote:
> If you think of a quantum multiverse, then that argument would work if the brain is a
> quantum computer. If it is classical, its states can be considered as having been
> prepared in the classical base,

But "prepared in the classical base" just means decoherence makes the state
(approximately) orthogonal to the counterfactual bases. You seem to be assuming that,
because the brain is quasi-classical, it doesn't exist in the other branches?

Brent

smi...@zonnet.nl

unread,
Dec 16, 2011, 6:30:25 PM12/16/11
to everyth...@googlegroups.com
Quoting meekerdb <meek...@verizon.net>:

Indeed, and the branches can be grouped together in sets such that the
person is unaware of differences within one set. If the pixels on the
screen representing the letters you read were arranged slightly
differently, your brain would detect these differences, but you would
still see exactly the same letters as long as the differences are small
enough. The same patterns will be recognized.

So, in general, when we perceive something, we don't make "sharp"
measurements and as a consequence, we are not located in what we
normally would call a single branch of the multiverse. Rather, the
brain-environment state will be in a complicated entangled state, and
that state itself defines the computation that has been carried out.

Saibal

Russell Standish

unread,
Dec 16, 2011, 11:06:32 PM12/16/11
to everyth...@googlegroups.com
On Fri, Dec 16, 2011 at 05:42:19PM +0100, Bruno Marchal wrote:
>
> On 16 Dec 2011, at 10:39, Russell Standish wrote:
>
> >On Mon, Dec 12, 2011 at 04:11:54PM +0100, Bruno Marchal wrote:
> >>>Maudlin's argument relies on the absurdity that the presence or
> >>>absence of inert parts bears on whether something is conscious.
> >>>This absurdity only works in a single universe setting, however.
> >>>If your computer is embedded in a Multiverse, the absurdity
> >>>vanishes, because those inert parts are no longer inert.
> >>
> >>But they do not play a part in the computation, at the correct
> >>substitution level.
> >
> >They certainly look like they are. If these parts weren't present, the
> >calculation proceeds differently in the other branches of the
> >Multiverse. In other words, counterfactuals are not handled correctly.
>
> If you think of a quantum multiverse, then that argument would work
> if the brain is a quantum computer. If it is classical, its states

Nobody here is proposing that the brain is a quantum computer. Penrose
and Lockwood do, but that's an entirely different hypothesis.

> can be considered as having been prepared in the classical base, and
> the computation (or non computation) will be handled correctly in
> each branch of the quantum multiverse, in which the same MGA
> reasoning will apply.

In Maudlin's argument, the inert parts are only inert by virtue of the
counterfactuals not having been realised. In a multiverse, the
counterfactuals are realised, but in different branches. Hence those
"inert" parts are no longer inert in all branches. If they were, they
could be removed from the computer altogether, without affecting the
computation.

>
> So you are introducing a different kind of physical multiverse,
> which would handle the counterfactuals. But this will not work.
> Either this physical multiverse, which plays the role of the
> generalized brain, is Turing emulable, in which case I can emulate
> it in a single Turing machine, for which the MGA will apply again.
> Or it is not Turing emulable, but then the need of it will
> contradict the comp assumption.
>

This step, as I understand it, is a form of dovetailing. Nobody really
thinks of the dovetailer algorithm as instantiating consciousness, so
the move is ultimately invalid, I would think.

>
>
>
> >
> >>They are playing a part concerning the first person indeterminacy,
> >>like in the UD*, or in QM physics. But that is derived (and has to
> >>be) from the indeterminacy.
> >>
> >
> >They do that as well, but this is not relevant to Maudlins argument...
>
> The parallel realities does not play any role for a classical
> computation, except for statistical interference (in case of a
> quantum computer).

It is not a question of the parallel realities playing a role in the
computation, but in the supervenience. Maudlin's argument says: if
COMP, then supervenience on a single universe is contradictory. But it
doesn't say anything about supervenience on multiple parallel
realities.

> But if this play a role, it means that we have
> not chosen the right level of substitution. Once it has be chosen
> correctly (or below), what happens in some other branch cannot
> interfere or play any role in the computation.
>

I don't follow...

>
>
> >
> >>
> >>>If you then fold the multiverse back into a
> >>>single universe by dovetailing, one can then reapply the Maudlin
> >>>move.
> >>
> >>Indeed. That is the key point.
> >>
> >>
> >>
> >>>But then, in that case, one can embed that result into a
> >>>Multiverse, and the cycle repeats.
> >>

I think I'm coming around to the view that neither of the above steps
are valid - but one could equally say they are as valid as each other.

... snip ...

>
> If the register "323" is missing in one branch of a quantum
> multiverse, it is missing in all normal extension of the
> computational state of the machine.

Yes...

> Some rare branch will have the
> pieces, and from there (and thus from the first person point of view
> of the subject) everything will go well, by comp.

This is a bit confused. Surely the register is missing in all future branches,

> But only because
> we fall back in a branch where the piece is not missing.

Why? Are you saying that if consciousness requires the presence of
register 323 at some particular point, then we find ourselves
instantiated by a computer with such a register? But then surely,
never at any point would we find ourselves instantiated by a machine
without register 323 - presumably for most of our history we would be
unaware of whether the register existed or not.

> This is not
> different than the comp or quantum immortality argument. The fact
> remains: the physical activity in one normal branch missing the
> register is the same as the physical activity in some branch not
> missing it, for the same particular computation.

In all branches, or just special ones? If all branches, then the
register is totally unnecessary. If just a special pair of branches,
then Maudlin's argument shows that supervenience must occur across
more branches than those two.

Yes - it shows that physical supervenience is impossible in that
single reality.

But this doesn't answer the why question. I could imagine that you
might feel that Multiverses are otiose, so would prefer a derivation
of their existence from something "simpler" - eg arithmetic of the
whole numbers.

That's fine and dandy - but the Multiverse is not otiose - it is far
less of an impost than a single reality.

I know you're keen to attack the Aristotelian primary matter
position. To be quite honest, it's not a question I care too much about
- I'm happy for my matter to be phenomenal, not primary. But I do
think we need to be careful about throwing out supervenience of mind
on matter (of whatever stripe), otherwise the Anthropic principle
becomes mysterious, and we're faced with what to do about the Occam
catastrophe.

I'm snipping the following text because it moves away from Maudlin's
argument in particular, and also there's some juicy stuff in it I need
to absorb before responding (if indeed I do :).

...snip...

--

Pierz

unread,
Dec 16, 2011, 11:26:21 PM12/16/11
to Everything List
So I’ve read Maudlin’s argument and I’ve decided to trade in my PC for
a brand new Olympia. Or maybe not – I heard the water bills are a
killer. Actually I’m not so impressed, and here’s why.

Maudlin attempts to show that consciousness cannot supervene on the
physical activity of a computing device by hypothesizing a mechanism
which, instead of responding to counterfactual input states in order
to accomplish a certain computation, is composed of numerous
interlocking mechanisms, each of which is ‘mindless’ in the sense of
producing the same output regardless of what is input (the state of
the ‘tape’ of water troughs), and each of which operates independently
of all others.

In order for this process to work out, Maudlin must ensure that the
trough containing the next bit of information being processed by the
armature is always the next trough being read, i.e., the spatial
sequence of troughs is the same as the computational sequence of
program pi. What then if the armature needs to read an earlier state
of the tape? Rather than allowing the armature to backtrack, Maudlin
invokes the notion of the ‘bilocation’ of certain addresses. By
connecting pipes to earlier troughs, the armature can read an earlier
address in the sequence. This is important because as the machine
progresses through its sequence of operations, it may fill or drain
the troughs it passes over, and these alterations to the tape may be
significant for its subsequent processing.

Now Maudlin claims that each copy of Klara cannot be conscious,
because a machine that produces the same outputs regardless of inputs
can never, by the computationalist’s own standards, be regarded as
conscious. It is only the conjunction of all the various Klaras in
Olympia that achieves a real computation, and yet, because only one
copy of Klara is running at any time, the other copies remaining
completely inert and separate, the computationalist has a problem.
Either he must give up the idea that consciousness depends on a ‘non-
trivial’ computation, or he must give up supervenience on the
physical.

Let us note that in order for Olympia to run a particular computation
on a particular input state, she must first be pre-programmed to that
input through the configuration of the float devices at each trough,
which ensure that a new Klara can be activated at each potential
counterfactual branch. This is fine so far as it goes, in that this
can be done in advance and amounts to the programming of any computer
before it is run. But note that at this programming stage, the results
of Olympia’s processing cannot be known, otherwise the program would
have to be run first, and that would lead to an infinite regress. That
is to say, when I program the float positions, I can’t know how
Olympia will fill or empty any particular trough in advance.

So here’s the rub. Let us imagine that Klara instance 1 (we’ll just
call her K1) is running and hits her first counterfactual at T4
(trough 4). K2 is now activated at T4 and K1 is shut down. Now let’s
say two troughs down at T6, program pi calls for the retrieval of the
contents of T1. By bilocation, we know she can do this. But now the
armature has a problem. Should it get the contents of T1 in K1 or in
K2? Clearly the answer is K1, because the armature has passed over K1-
T1 and therefore possibly altered its contents, but K2-T1 has never
been processed and contains ‘noise’ as far as the calculation is
concerned. As noted earlier, we cannot so arrange K2 that it contains
the same contents as K1 after processing, because that would entail
running the program in advance. So K2 must pipe the contents from
trough K1-T1, thereby activating a supposedly inactive copy of Klara.
This itself seems to shoot down Maudlin’s argument, since the
transferring of information (water, charge, whatever) from K1-T1 must
be accomplished physically, and involves the physical, causal
interaction of two different copies of Klara.

The problem is even deeper than this, however. How does the system
‘know’ when two locations should be bilocated? This works OK for a
single copy of Klara, since she is a static system. But if she must
physically interact with all the previous editions of herself further
back in the calculation chain, then she will be forced to ‘build’
pipes on the go, a ridiculously contrived procedure that totally
vitiates the idea of a mindlessly proceeding, inert system. And how
does Klara (or rather, Olympia) remember which path she has followed
in order to know which trough to drain? New mechanisms must be devised
which effectively mean retaining the activity of previous Klaras in
the chain and are no different from a form of backtracking.

If Maudlin’s argument is a foundation of the UDA, then it seems to me
the UDA is on shaky ground, though I have yet to investigate the MGA
in depth. People talk about the Movie Graph Argument, but the links
provided refer to Alice and a distant supernova with lucky rays that
substitute for functional neurons. I don’t see a connection to the
idea of a recording or a filmed graph. Can someone enlighten me?

Bruno, I'm aware I've left a prior discussion hanging regarding
measures of infinite sets. I read up on the material you linked to and
it seemed to me that amenability of integers depends on them being
taken as a sequence, not a mere set. But I don't really (at all)
understand the difference between sigma and simple additivity. Perhaps
you can explain? And is this discussion related to the 'credibility
measure' you mention elsewhere?


Russell Standish

unread,
Dec 17, 2011, 12:39:55 AM12/17/11
to everyth...@googlegroups.com
On Fri, Dec 16, 2011 at 08:26:21PM -0800, Pierz wrote:

...snip...

> The problem is even deeper than this, however. How does the system
> ‘know’ when two locations should be bilocated? This works OK for a
> single copy of Klara, since she is a static system. But if she must
> physically interact with all the previous editions of herself further
> back in the calculation chain, then she will be forced to ‘build’
> pipes on the go, a ridiculously contrived procedure that totally
> vitiates the idea of a mindlessly proceeding, inert system. And how
> does Klara (or rather, Olympia) remember which path she has followed
> in order to know which trough to drain? New mechanisms must be devised
> which effectively mean retaining the activity of previous Klaras in
> the chain and are no different from a form of backtracking.
>

My understanding is that to construct Olympia, we take n copies of
Klara, and run each copy to step i of the program, where i=1..n-1. Then
construct the sequence of water troughs such that they are equal to
that of K_i at step i. We also connect K_i to Olympia at that point,
ready to take over in the event of a counterfactual being true.
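
Something like this, in rough Python (my own reconstruction of that
recipe, so the dynamics of "Klara" are only a placeholder):

# Rough sketch of the construction as I read it (names and dynamics
# are placeholders): pre-run n copies of Klara, freeze copy i at step
# i, and keep each frozen copy beside Olympia as the spare that would
# take over if a counterfactual arose.

class Klara:
    # Stand-in for Maudlin's machine: a deterministic step-by-step device.
    def __init__(self, tape):
        self.tape = list(tape)
        self.pos = 0

    def step(self):
        # Toy transition: flip the current cell and move right.
        self.tape[self.pos] ^= 1
        self.pos = (self.pos + 1) % len(self.tape)

def build_olympia(tape, n_steps):
    # K_i is a fresh Klara advanced to step i; its troughs give the
    # pre-arranged contents Olympia passes over. The spares stay inert
    # unless a counterfactual input occurs.
    spares = []
    for i in range(n_steps):
        k = Klara(tape)
        for _ in range(i):
            k.step()
        spares.append(k)
    troughs = [list(k.tape) for k in spares]
    return troughs, spares

if __name__ == "__main__":
    troughs, spares = build_olympia([0, 1, 1, 0], 4)
    for i, t in enumerate(troughs):
        print("trough block for step", i, ":", t)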

I don't think the issue of pipes is a problem - we can assume each
trough in state i is connected to the troughs of state i-1, such that
when the armature moves through to state i, it closes a valve
connecting the troughs to the previous state's troughs.

It may seem complex, but it is mere complication, not complexity, if
you understand the difference.

> If Maudlin’s argument is a foundation of the UDA, then it seems to me
> the UDA is on shaky ground, though I have yet to investigate the MGA
> in depth. People talk about the Movie Graph Argument, but the links
> provided refer to Alice and a distant supernova with lucky rays that
> substitute for functional neurons. I don’t see a connection to the
> idea of a recording or a filmed graph. Can someone enlighten me?
>

Maudlin's argument has been compared with the MGA, which is step 8 of
the UDA. The previous steps are independent of Maudlin.

Olympia can be compared with a recording of the computation. That is
the "filmed graph" (aka movie graph).


--

----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders

Visiting Professor of Mathematics hpc...@hpcoders.com.au

Joseph Knight

unread,
Dec 17, 2011, 12:40:46 AM12/17/11
to everyth...@googlegroups.com
If Maudlin’s argument is a foundation of the UDA, then it seems to me
the UDA is on shaky ground, though I have yet to investigate the MGA
in depth. People talk about the Movie Graph Argument, but the links
provided refer to Alice and a distant supernova with lucky rays that
substitute for functional neurons. I don’t see a connection to the
idea of a recording or a filmed graph. Can someone enlighten me?

Bruno Marchal

unread,
Dec 17, 2011, 5:59:07 AM12/17/11
to everyth...@googlegroups.com

On 17 Dec 2011, at 05:06, Russell Standish wrote:

> On Fri, Dec 16, 2011 at 05:42:19PM +0100, Bruno Marchal wrote:
>>
>> On 16 Dec 2011, at 10:39, Russell Standish wrote:
>>
>>> On Mon, Dec 12, 2011 at 04:11:54PM +0100, Bruno Marchal wrote:
>>>>> Maudlin's argument relies on the absurdity that the presence or
>>>>> absence of inert parts bears on whether something is conscious.
>>>>> This absurdity only works in a single universe setting, however.
>>>>> If your computer is embedded in a Multiverse, the absurdity
>>>>> vanishes, because those inert parts are no longer inert.
>>>>
>>>> But they do not play a part in the computation, at the correct
>>>> substitution level.
>>>
>>> They certainly look like they are. If these parts weren't present,
>>> the
>>> calculation proceeds differently in the other branches of the
>>> Multiverse. In other words, counterfactuals are not handled
>>> correctly.
>>
>> If you think of a quantum multiverse, then that argument would work
>> if the brain is a quantum computer. If it is classical, its states
>
> Nobody here is proposing that the brain is a quantum computer. Penrose
> and Lockwood do, but that's an entirely different hypothesis.

OK. Just saying that MGA works independently of the comp level.


>
>> can be considered as having been prepared in the classical base, and
>> the computation (or non computation) will be handled correctly in
>> each branch of the quantum multiverse, in which the same MGA
>> reasoning will apply.
>
> In Maudlin's argument, the inert parts are only inert by virtue of the
> counterfactuals not having been realised.

OK.


> In a multiverse, the
> counterfactuals are realised, but in different branches.

Not necessarily. If the computation is classical, it is the same in
the normal continuations. The classical counterfactuals are not
realized in a quantum multiverse. You have to put the Klara in some
superposition state to do that. You need a quantum Olympia.


> Hence those
> "inert" parts are no longer inert in all branches.

I don't see this.

> If they were, they
> could be removed from the computer altogether, without affecting the
> computation.

But that is the case for the computation under consideration.

>
>>
>> So you are introducing a different kind of physical multiverse,
>> which would handle the counterfactuals. But this will not work.
>> Either this physical multiverse, which plays the role of the
>> generalized brain, is Turing emulable, in which case I can emulate
>> it in a single Turing machine, for which the MGA will apply again.
>> Or it is not Turing emulable, but then the need of it will
>> contradict the comp assumption.
>>
>
> This step, as I understand it, is a form of dovetailing. Nobody really
> thinks of the dovetailer algorithm as instantiating consciousness, so
> the move is ultimately invalid, I would think.

The problem is there. With comp + the physical supervenience thesis,
the dovetailing algorithm does instantiate consciousness.

>
>>
>>
>>
>>>
>>>> They are playing a part concerning the first person indeterminacy,
>>>> like in the UD*, or in QM physics. But that is derived (and has to
>>>> be) from the indeterminacy.
>>>>
>>>
>>> They do that as well, but this is not relevant to Maudlins
>>> argument...
>>
>> The parallel realities does not play any role for a classical
>> computation, except for statistical interference (in case of a
>> quantum computer).
>
> It is not a question of the parallel realities playing a role in the
> computation, but in the supervenience. Maudlin's argument says If
> COMP, then supervenience on single universe is contradictory. But it
> doesn't say anything about supervenience on multiple parallel
> realities.

Those are relevant for the relative measure on continuations.
Unless you are using a non Turing emulable multiverse, whatever
physical supervenience thesis you are using, you can come back to a
single reality. You will not say "yes" to a drunk doctor, because you
bet that in some parallel realities he will be sober.

>
>> But if this play a role, it means that we have
>> not chosen the right level of substitution. Once it has be chosen
>> correctly (or below), what happens in some other branch cannot
>> interfere or play any role in the computation.
>>
>
> I don't follow...

Computations are by definition based on the running of a single
universal machine. Even in the quantum case: a quantum computer can be
emulated in a single reality, or by a single well-defined universal
machine. The inert pieces handle classical counterfactuals at the comp
substitution level.
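
(For what it is worth, a tiny numpy illustration of that emulability
claim, with an example of my own choosing: one qubit through a
Hadamard gate, computed by ordinary matrix arithmetic on a classical
machine.)

# Minimal sketch: the "quantum" evolution below is just matrix algebra
# run by a classical program, illustrating that such computations are
# Turing emulable in a single classical reality.

import numpy as np

ket0 = np.array([1.0, 0.0])                              # the state |0>
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # Hadamard gate

state = H @ ket0                 # deterministic, classical computation
probs = np.abs(state) ** 2       # Born rule: outcome probabilities

print(probs)                     # [0.5 0.5]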


>
>>
>>
>>>
>>>>
>>>>> If you then fold the multiverse back into a
>>>>> single universe by dovetailing, one can then reapply the Maudlin
>>>>> move.
>>>>
>>>> Indeed. That is the key point.
>>>>
>>>>
>>>>
>>>>> But then, in that case, one can embed that result into a
>>>>> Multiverse, and the cycle repeats.
>>>>
>
> I think I'm coming around to the view that neither of the above steps
> are valid - but one could equally say they are as valid as each other.

Not sure I see which steps you are talking about. The MGA is a
reductio ad absurdum from comp + physical supervenience.


>
> ... snip ...
>
>>
>> If the register "323" is missing in one branch of a quantum
>> multiverse, it is missing in all normal extension of the
>> computational state of the machine.
>
> Yes...
>
>> Some rare branch will have the
>> pieces, and from there (and thus from the first person point of view
>> of the subject) everything will go well, by comp.
>
> This is a bit confused. Surely the register is missing in all future
> branches,

Not in the (rare) white rabbit type of branches, where the register
can reappear by a lucky vacuum fluctuation.


>
>> But only because
>> we fall back in a branch where the piece is not missing.
>
> Why? Are you saying that if consciousness requires the presence of
> register 323 at some particular point, then we find ourselves
> instantiated by a computer with such a register?

I am saying this assuming that the "323" principle is false, to get
the contradiction.


> But then surely,
> never at any point would we find ourselves instantiated by a machine
> without register 323 - presumably for most of our history we would be
> unaware of whether the register existed or not.

That's the point.

>
>> This is not
>> different than the comp or quantum immortality argument. The fact
>> remains: the physical activity in one normal branch missing the
>> register is the same as the physical activity in some branch not
>> missing it, for the same particular computation.
>
> In all branches, or just special ones? If all branches, then the
> register is totally unnecessary.

In this case the same computations, with the same inputs are done in
all branches.

> If just a special pair of branches,
> then Maudlin's argument shows that supervenience must occur across
> more branches than those two.

There are no more branches. We are now simulating all the branches in
a single reality. If that is not possible, then comp is already false.

OK, but then to make your argument you have to shift toward a multi-
multiverse, given that we have come back to a single universe. And the
argument can continue: I will just simulate that multi-multiverse in a
single branch on a single classical universal machine. If that is not
possible, then comp is false. If that is possible, then MGA will apply
again.

>
> But this doesn't answer the why question. I could imagine that you
> might feel that Multiverses are otiose, so would prefer a derivation
> of their existence from something "simpler" - eg arithmetic of the
> whole numbers.

Not at all. It is the idea that there is primary matter which is
otiose, or epistemologically contradictory. It is physicalism which is
shown wrong. The multiverse is shown to be emergent from the numbers'
multi-dream. Physics becomes a branch of machine's theology.

>
> That's fine and dandy - but the Multiverse is not otiose - it is far
> less of an impost than a single reality.

Yes. We agree on this. It is the main theme of this list. The question
is not about their existence, but their primitivity.

>
> I know you're keen to attack the Aristotelian primary matter
> position. To be quite honest, its not a question I care too much about
> - I'm happy for my matter to be phenomenal, not primary.

The MGA is just an help for people to realize that comp makes this
necessary.

> But I do
> think we need to be careful about throwing out supervenience of mind
> on matter (of whatever stripe), otherwise the Anthropic principle
> becomes mysterious, and we're faced with what to do about the Occam
> catastrophe.

Matter (like consciousness) is fundamental and plays a big role in
the big picture, but the point is that it is not primary.
UDA 1-7 has already reduced physics to classical computer science/
number theory, except that some can still make the move toward a
single little physical reality (with no concrete UD running in it).
MGA shows that such a move is a red herring.


>
> I'm snipping the following text because it moves away from Maudlin's
> argument in particular, and also there's some juicy stuff in it I need
> to absorb before responding (if indeed I do :).

Enjoy :)

Bruno


http://iridia.ulb.ac.be/~marchal/

Pierz

unread,
Dec 17, 2011, 7:08:32 AM12/17/11
to Everything List


> Maudlin's argument has been compared with the MGA, which is step 8 of
> the UDA. The previous steps are independent of Maudlin.

I understand that, but all the steps are necessary to support the
argument. If consciousness supervenes only on physical computation,
then one requires a physical instantiation of the UD, not a purely
arithmetical one.

> My understanding is that to construct Olympia, we take n copies of
> Klara, and run each copy to step i of the program, where i=1..n-1. The
> construct the sequence of water troughs such that they are equal to
> that of K_i at step i. We also connect K_i to Olympia at that point,
> ready to take over in the event of a counterfactual being true.

Invalid because of the infinite regress problem. How can we run the
program on the individual Klaras without connecting them to the
Olympia in the first place? The Klaras cannot calculate anything
without the counterfactual mechanism of all the other Klaras ensuring
they don't go wrong. If all the Klaras have already been run somehow
so the troughs prior to the branch onto the active Klara contain the
calculated values, then there is no need to run Olympia at all. The
state of the last Klara already contains the output of the calculation
and we can discard Olympia and just say that we already calculated the
value in the past. This makes a mockery of the entire elaborate
mechanism Maudlin postulates and the business about inert parts and so
on is irrelevant. I don't think that saying that a live calculation
can always be replaced by one that was completed in the past solves
anything. Certainly consciousness (or a computer) may draw on the
results of completed calculations in order to speed up its work (a
computer doesn't need to recalculate the value of pi every time it
needs that constant), but it cannot solve every problem that way,
obviously! A computer game may pre-render an explosion made by
computing hundreds of thousands of particles, as a shortcut, but it
cannot pre-render every possible game and just branch into the
relevant branch of that movie as required. Unless you grant it
infinite calculation resources in the past and none in the present, an
abject sophistry.

I can't find anything in Maudlin's paper that suggests the method you
propose - pre-running every copy of Klara as if it had dealt with all
prior counterfactuals. Each copy is merely another dumb Klara ready to
go wrong the next instant. That is both essential to the argument, and
its fatal flaw.

Bruno Marchal

unread,
Dec 17, 2011, 7:21:41 AM12/17/11
to everyth...@googlegroups.com

On 17 Dec 2011, at 05:26, Pierz wrote:

> So I’ve read Maudlin’s argument and I’ve decided to trade in my PC for
> a brand new Olympia.

Good luck.

> Or maybe not – I heard the water bills are a
> killer. Actually I’m not so impressed, and here’s why.
>
> Maudlin attempts to show that consciousness cannot supervene on the
> physical activity of a computing device by hypothesizing a mechanism
> which, instead of responding to counterfactual input states in order
> to accomplish a certain computation, is composed of numerous
> interlocking mechanisms, each of which is ‘mindless’

This is not relevant. Comp assumes at the start that the parts of a
computing machinery are mindless (contra Craig).

All Klaras are inert, for the specific conscious computation under
consideration.
Only the pre-Olympia part is active, but with its physical activity
already made arbitrary, if not nonexistent.
(pre-Olympia = Olympia without the Klaras.)

>
> Let us note that in order for Olympia to run a particular computation
> on a particular input state, she must first be pre-programmed to that
> input through the configuration of the float devices at each trough,
> which ensure that a new Klara can be activated at each potential
> counterfactual branch. This is fine so far as it goes, in that this
> can be done in advance and amounts to the programming of any computer
> before it is run. But note that at this programming stage, the results
> of Olympia’s processing cannot be known, otherwise the program would
> have to be run first, and that would lead to an infinite regress. That
> is to say, when I program the float positions, I can’t know how
> Olympia will fill or empty any particular trough in advance.

?


>
> So here’s the rub. Let us imagine that Klara instance 1 (we’ll just
> call her K1) is running and hits her first counterfactual at T4
> (trough 4). K2 is now activated at T4 and K1 is shut down. Now let’s
> say two troughs down at T6, program pi calls for the retrieval of the
> contents of T1. By bilocation, we know she can do this. But now the
> armature has a problem. Should it get the contents of T1 in K1 or in
> K2? Clearly the answer is K1, because the armature has passed over K1-
> T1 and therefore possibly altered its contents, but K2-T1 has never
> been processed and contains ‘noise’ as far as the calculation is
> concerned. As noted earlier, we cannot so arrange K2 that it contains
> the same contents as K1 after processing, because that would entail
> running the program in advance. So K2 must pipe the contents from
> trough K1-T1, thereby activating a supposedly inactive copy of Klara.
> This itself seems to shoot down Maudlin’s argument, since the
> transferring of information (water, charge, whatever) from K1-T1 must
> be accomplished physically, and involves the physical, causal
> interaction of two different copies of Klara.

Maudlin runs the initial computations. All Klaras are and remain
inactive during the whole computation.

>
> The problem is even deeper than this, however. How does the system
> ‘know’ when two locations should be bilocated?


We have run the computation in advance, completely.


> This works OK for a
> single copy of Klara, since she is a static system. But if she must
> physically interact with all the previous editions of herself further
> back in the calculation chain, then she will be forced to ‘build’
> pipes on the go, a ridiculously contrived procedure that totally
> vitiates the idea of a mindlessly proceeding, inert system.

I think you miss the point. We consider only the particular
computation for which we know in advance that the Klaras will not be
used. The role of the Klaras is only to restore counterfactual
correctness, but physical supervenience is jeopardized by that
computation, with the Klaras being inert.

> And how
> does Klara (or rather, Olympia) remember which path she has followed
> in order to know which trough to drain? New mechanisms must be devised
> which effectively mean retaining the activity of previous Klaras in
> the chain and are no different from a form of backtracking.
>
> If Maudlin’s argument is a foundation of the UDA, then it seems to me
> the UDA is on shaky ground, though I have yet to investigate the MGA
> in depth.

MGA is the last step, and is used only for immateriality. UDA has
already shown indeterminacy, non-locality, etc.
MGA is only a reminder that the mind-body problem is not solved, and
that we have no idea of what "matter" can be.

> People talk about the Movie Graph Argument, but the links
> provided refer to Alice and a distant supernova with lucky rays that
> substitute for functional neurons. I don’t see a connection to the
> idea of a recording or a filmed graph. Can someone enlighten me?

See "Le paradoxe du graphe filmé" (the filmed-graph paradox) here, in
volume 3:
http://iridia.ulb.ac.be/~marchal/bxlthesis/consciencemecanisme.html
You will find drawings showing how to translate Olympia in terms of
the filmed graph.
Or, if you want, we can come back to this. I just don't have much time
right now.
You can see that Maudlin also arrives at the movie idea, at the end of
his paper.


>
> Bruno, I'm aware I've left a prior discussion hanging regarding
> measures of infinite sets. I read up on the material you linked to and
> it seemed to me that amenability of integers depends on them being
> taken as a sequence, not a mere set. But I don't really (at all)
> understand the difference between sigma and simple additivity. Perhaps
> you can explain? And is this discussion related to the 'credibility
> measure' you mention elsewhere?


I was just pointing out the fact that it is false that there is no
measure on infinite discrete sets. But UDA does not suggest we need
that, so the sigma-additivity notion would distract us from the topic.
Eventually I can explain the difference in some other post, but it is
not related to the topic. I have begun the extraction of the measure
by the use of the self-referential logics, in AUDA. That comes after
the UDA.
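
(If it helps, one standard example, just to make the distinction
concrete: the asymptotic density d(A) = lim_{n->infinity} #(A
intersect {1,...,n})/n is defined for many subsets of N. It is simply
(finitely) additive: d(A union B) = d(A) + d(B) for disjoint A and B
that have densities, and it gives the even numbers measure 1/2. But it
is not sigma-additive: each singleton {k} has density 0, while their
countable union, N itself, has density 1, so countable additivity
fails.)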

Bruno


http://iridia.ulb.ac.be/~marchal/

Quentin Anciaux

unread,
Dec 17, 2011, 7:30:59 AM12/17/11
to everyth...@googlegroups.com


2011/12/17 Pierz <pie...@gmail.com>

The thought experiment is that:

1- Computationalism is true.
2- So it means there exists conscious program.
3- You just stumble accros one.
4- You run it.
5- During the run you've seen that some parts are never accessed.
6- You remove those parts.
7- You run it again... it must still implement the conscious program (3) by points 1 and 2.
...
N- you can build a machine that implements and can only run 3 but that can't handle counterfactual, but as the computation is the same as 3, it must be as conscious as when it was running on a complete physical computer.
N+1- you can restore the handling of conterfactual by adding inactive piece. But If N was not conscious, adding inactive pieces shouldn't render it conscious.
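
A toy sketch of steps 5 to 7 in Python (the "program" and its parts
are placeholders of my own, just to make "never accessed" concrete):

# Toy sketch of steps 5-7 (placeholders, not a real conscious
# program): run the program on one fixed input, note which parts were
# never used, strip them out, and observe that the run is unchanged.

def full_program(x):
    # Stand-in for the program of step 3; one branch is never taken
    # when x = 3.
    used = {"double"}
    y = 2 * x
    if y > 100:                      # counterfactual branch, inert here
        used.add("overflow_handler")
        y -= 100
    used.add("add_one")
    return y + 1, used

def stripped_program(x):
    # Step 6: the never-accessed part removed. It can no longer handle
    # the counterfactual, but on this input the computation is identical.
    return 2 * x + 1

if __name__ == "__main__":
    out, used = full_program(3)
    print("full run    :", out, "parts used:", sorted(used))
    print("stripped run:", stripped_program(3))   # same result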

Quentin
 




--
All those moments will be lost in time, like tears in rain.

Craig Weinberg

unread,
Dec 17, 2011, 8:05:50 AM12/17/11
to Everything List
On Dec 17, 7:30 am, Quentin Anciaux <allco...@gmail.com> wrote:

> N- you can build a machine that implements and can only run 3 but that
> can't handle counterfactual, but as the computation is the same as 3, it
> must be as conscious as when it was running on a complete physical computer.
> N+1- you can restore the handling of conterfactual by adding inactive
> piece. But If N was not conscious, adding inactive pieces shouldn't render
> it conscious.

Conscious of what? It sounds like this assumes that consciousness is a
binary feature.

Craig

Quentin Anciaux

unread,
Dec 17, 2011, 11:24:10 AM12/17/11
to everyth...@googlegroups.com


2011/12/17 Craig Weinberg <whats...@gmail.com>

You didn't read, that's not the argument.

It begins by *assuming we have a conscious program*. The argument is not about what consciousness is; it's about assuming consciousness to be computational, assuming the physical supervenience thesis to be true, and showing a contradiction.
 
Craig


Craig Weinberg

unread,
Dec 17, 2011, 2:53:33 PM12/17/11
to Everything List
On Dec 17, 11:24 am, Quentin Anciaux <allco...@gmail.com> wrote:
> 2011/12/17 Craig Weinberg <whatsons...@gmail.com>

>
> > On Dec 17, 7:30 am, Quentin Anciaux <allco...@gmail.com> wrote:
>
> > > N- you can build a machine that implements and can only run 3 but that
> > > can't handle counterfactual, but as the computation is the same as 3, it
> > > must be as conscious as when it was running on a complete physical
> > computer.
> > > N+1- you can restore the handling of conterfactual by adding inactive
> > > piece. But If N was not conscious, adding inactive pieces shouldn't
> > render
> > > it conscious.
>
> > Conscious of what? It sounds like this assumes that consciousness is a
> > binary feature.
>
> You didn't read, that's not the argument.
>
> It begins by *assuming we have a conscious program*. The argument is not
> about what is consciousness, it's about assuming consciousness to be
> computational and assuming physical supervenience thesis true and showing a
> contradiction.

No, I read it, I just think the argument is broken from the start if
you don't care what consciousness is in the first place. What kind of
consciousness are you assuming a program can have? Feeling? Sense of
smell? How does it manage that without a physical machine?

Craig

Quentin Anciaux

unread,
Dec 17, 2011, 2:58:27 PM12/17/11
to everyth...@googlegroups.com


2011/12/17 Craig Weinberg <whats...@gmail.com>

On Dec 17, 11:24 am, Quentin Anciaux <allco...@gmail.com> wrote:
> 2011/12/17 Craig Weinberg <whatsons...@gmail.com>
>
> > On Dec 17, 7:30 am, Quentin Anciaux <allco...@gmail.com> wrote:
>
> > > N- you can build a machine that implements and can only run 3 but that
> > > can't handle counterfactual, but as the computation is the same as 3, it
> > > must be as conscious as when it was running on a complete physical
> > computer.
> > > N+1- you can restore the handling of conterfactual by adding inactive
> > > piece. But If N was not conscious, adding inactive pieces shouldn't
> > render
> > > it conscious.
>
> > Conscious of what? It sounds like this assumes that consciousness is a
> > binary feature.
>
> You didn't read, that's not the argument.
>
> It begins by *assuming we have a conscious program*. The argument is not
> about what is consciousness, it's about assuming consciousness to be
> computational and assuming physical supervenience thesis true and showing a
> contradiction.

No, I read it, I just think the argument is broken from the start if
you don't care what consciousness is in the first place.

We care... The argument is about the computationalist hypothesis... in this setting consciousness is a computational thing; the computationalist hypothesis is the hypothesis that you can be run on a digital computer.

 
What kind of
consciousness are you assuming a program can have? Feeling? Sense of
smell?

Everything by hypothesis, talk about the argument... if you want to talk about non-comp, open another thread.
 
How does it manage that without a physical machine?

Craig


Craig Weinberg

unread,
Dec 17, 2011, 3:27:01 PM12/17/11
to Everything List
On Dec 17, 2:58 pm, Quentin Anciaux <allco...@gmail.com> wrote:
>
> > > You didn't read, that's not the argument.
>
> > > It begins by *assuming we have a conscious program*. The argument is not
> > > about what is consciousness, it's about assuming consciousness to be
> > > computational and assuming physical supervenience thesis true and
> > showing a
> > > contradiction.
>
> > No, I read it, I just think the argument is broken from the start if
> > you don't care what consciousness is in the first place.
>
> We care... The argument is about the computationalist hypothesis... in this
> setting consciousness is a computatational thing, the computationalist
> hypothesis is the hypothesis that you can be run on a digital computer.

Ok, so human consciousness. The program is nothing but digital
instruction code but it thinks it's a human being in a universe as a
human being experiences it regardless of whether it's running on a
player piano or motorized Legos. Your argument then is that since the
code supervenes upon the physical Legos, there is a contradiction to
computationalism? I agree. Doesn't that contradiction point to non-
comp?

>
> > What kind of
> > consciousness are you assuming a program can have? Feeling? Sense of
> > smell?
>
> Everything by hypothesis, talk about the argument... if you want to talk
> about non-comp, open another thread.

Some people are interested in understanding, others are interested in
telling people what to do. The two approaches are mutually exclusive.

Craig

meekerdb

unread,
Dec 17, 2011, 3:29:36 PM12/17/11
to everyth...@googlegroups.com
On 12/17/2011 4:30 AM, Quentin Anciaux wrote:
>
> The thought experiment is that:
>
> 1- Computationalism is true.
> 2- So it means there exists conscious program.
> 3- You just stumble accros one.
> 4- You run it.
> 5- During the run you've seen that some parts are never accessed.
> 6- You remove those parts.
> 7- You run it again... it must still implement the conscious program (3) by points 1 and 2.
> ...
> N- you can build a machine that implements and can only run 3 but that can't handle
> counterfactual, but as the computation is the same as 3, it must be as conscious as when
> it was running on a complete physical computer.
> N+1- you can restore the handling of conterfactual by adding inactive piece. But If N
> was not conscious, adding inactive pieces shouldn't render it conscious.
>
> Quentin

Yes, that's my understanding of the argument. But I find it curious that if we substitute
"intelligent" for "conscious" it no longer seems paradoxical. We readily conclude that
removing the ability to handle counterfactuals makes the machine unintelligent. Yet most
people think a machine should be intelligent in order to be conscious. Bruno thinks it
must understand finite induction. Yet there are very many people, whom we assume we are
conscious, but have never heard of finite induction (their "finite induction" register is
missing). Maudlin comments that "intelligence" is dispositional and so is different from
consciousness. But why isn't a computation dispositional too? If it were not then it
seems that you run into the paradox of the rock that computes everything.

I think the paradox arises from neglecting the fact that intelligence, computation, and
maybe consciousness are all relative to a context or environment. Intelligence is the
ability to learn from interacting with the environment AND acting to some purpose in the
environment.

Brent
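A minimal sketch, in Python, of the pruning move in points 5-7 of the procedure quoted above. Everything here is a made-up stand-in (a toy interpreter, a five-instruction program, the inputs); it only illustrates the shape of the move: trace a run, drop what the run never touched, and note that only the counterfactuals are lost.

# Toy interpreter over a list of instructions; `trace` collects the indices
# actually visited during a run.
def run(program, x, trace=None):
    acc, pc = x, 0
    while pc < len(program):
        if trace is not None:
            trace.add(pc)
        op = program[pc]
        if op[0] == "add":
            acc += op[1]
            pc += 1
        elif op[0] == "jump_if_neg":
            pc = op[1] if acc < 0 else pc + 1
        elif op[0] == "halt":
            break
    return acc

# index:       0            1                   2           3          4
program = [("add", 5), ("jump_if_neg", 4), ("add", 1), ("halt",), ("add", 100)]

visited = set()
result = run(program, x=3, trace=visited)   # points 4-5: run it, see what is used

# point 6: replace what this run never touched (index 4 here) by an inert piece
pruned = [ins if i in visited else ("halt",) for i, ins in enumerate(program)]

# point 7: on the same input the pruned machine goes through the same states...
assert run(pruned, x=3) == result
# ...but it can no longer handle the counterfactual input that would have
# needed the removed piece:
assert run(program, x=-10) != run(pruned, x=-10)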

meekerdb

unread,
Dec 17, 2011, 4:05:36 PM12/17/11
to everyth...@googlegroups.com
On 12/17/2011 8:24 AM, Quentin Anciaux wrote:


2011/12/17 Craig Weinberg <whats...@gmail.com>
On Dec 17, 7:30 am, Quentin Anciaux <allco...@gmail.com> wrote:

> N- you can build a machine that implements and can only run 3 but that
> can't handle counterfactual, but as the computation is the same as 3, it
> must be as conscious as when it was running on a complete physical computer.
> N+1- you can restore the handling of conterfactual by adding inactive
> piece. But If N was not conscious, adding inactive pieces shouldn't render
> it conscious.

Conscious of what? It sounds like this assumes that consciousness is a
binary feature.


You didn't read, that's not the argument.

It begins by *assuming we have a conscious program*. The argument is not about what is consciousness, it's about assuming consciousness to be computational and assuming physical supervenience thesis true and showing a contradiction.

But it seems like a play on our intuition as to what constitutes a computation.  We hypothesize consciousness supervenes on computation because computation is the kind of thing needed to make intelligent (and therefore "conscious") acting machines.  But then there is a subtle shift from computation as the basis of intelligent action, to computation as a sequence of physical states, to a sequence of physical states as the playback of a recording.  Then we intuit that something as simple as a playback can't be a computation that would support consciousness - but why should we still regard it as a computation?

Brent

Craig Weinberg

unread,
Dec 17, 2011, 4:50:09 PM12/17/11
to Everything List
On Dec 17, 4:05 pm, meekerdb <meeke...@verizon.net> wrote:

>
> But it seems like a play on our intuition as to what constitutes a computation.  We
> hypothesize consciousness supervenes on computation because computation is the kind of
> thing needed to make intelligent (and therefore "conscious") acting machines.

Exactly.

> But then
> there is a subtle shift from computation as the basis of intelligent action, to
> computation as a sequence of physical states, to a sequence of physical states as the
> playback of a recording.  Then we intuit that something as simple as a playback can't be a
> computation that would support consciousness - but why should we still regard it as a
> computation?
>

You're right, a playback is not just a computation in the sense of
being a purely arithmetic abstraction but neither is it 'conscious' in
the sense that we experience consciousness as a trillion cell living
organism. I would call it a detection. The external consequences of
that detection-reaction is what we model as computation, but it is the
subjective sense that it makes to the player that supports richer
elaborations of sense that lead to consciousness *not* the
computation. Computation is like the exoskeleton of sense.

Craig

Russell Standish

unread,
Dec 17, 2011, 5:52:09 PM12/17/11
to everyth...@googlegroups.com
On Sat, Dec 17, 2011 at 11:59:07AM +0100, Bruno Marchal wrote:

... snip ...


>
> >In a multiverse, the
> >counterfactuals are realised, but in different branches.
>
> Not necessarily. If the computation is classical, it is the same in
> the normal continuations. The classical counterfactuals are not
> realized in a quantum multiverse. You have to put the Klara in some
> superposition state to do that. You need a quantum Olympia.
>

They will automatically be in superposition, being just a classical
device replicated across the branches. But they are not quantum
devices (in the sense of processing qubits).

>
>
>
> >Hence those
> >"inert" parts are no longer inert in all branches.
>
> I don't see this.
>

Consider an instruction that monitors the circular polarisation of a
photon. If the photon is left polarised, then the program branches, if
right, the program continues to the next instruction.

In Maudlin's initial run, suppose the program didn't branch (we
now eliminate from consideration the MW branch in which the program
did branch prior to the construction of Olympia).

Now Maudlin constructs an Olympia without branching at that
point. Or we do your step of replacing the branch instruction by a no
operation, as the result will be the same. Then we run Olympia, with a
Klara attached at that step.

In the Multiverse branches where the photon is right polarised,
Olympia continues on, and the Klara remains inert. In the branch where
a left polarised photon is observed, the Klara springs to life, and
implements the missing branch instruction.

This is what I mean. At all stages, Klara and Olympia are classical
computing devices, embedded in a Multiverse.

>
>
> >If they were, they
> >could be removed from the computer altogether, without affecting the
> >computation.
>
> But that is the case for the computation under consideration.
>

No. See above.

>
>
> >
> >>
> >>So you are introducing a different kind of physical multiverse,
> >>which would handle the counterfactuals. But this will not work.
> >>Either this physical multiverse, which plays the role of the
> >>generalized brain, is Turing emulable, in which case I can emulate
> >>it in a single Turing machine, for which the MGA will apply again.
> >>Or it is not Turing emulable, but then the need of it will
> >>contradict the comp assumption.
> >>
> >
> >This step, as I understand it, is a form of dovetailing. Nobody really
> >thinks of the dovetailer algorithm as instantiating consciousness, so
> >the move is ultimately invalid, I would think.
>
> The problem is there. With comp + the physical supervenience thesis,
> the dovetailing algorithm does instantiate consciousness.
>

The dovetailer instantiates consciousness in exactly the same way that
a random block of marble instantiates the statue of David.

I think that for most people, calling a shapeless block of marble 'David'
is a bit perverse.

> >It is not a question of the parallel realities playing a role in the
> >computation, but in the supervenience. Maudlin's argument says If
> >COMP, then supervenience on single universe is contradictory. But it
> >doesn't say anything about supervenience on multiple parallel
> >realities.
>
> Those are relevant for the relative measure on continuations.

Quite possibly. But that is an independent question to Maudlin's
argument. I'm trying to stay focussed here.

> >>>>
> >>>>>If you then fold the multiverse back into a
> >>>>>single universe by dovetailing, one can then reapply the Maudlin
> >>>>>move.
> >>>>
> >>>>Indeed. That is the key point.
> >>>>
> >>>>
> >>>>
> >>>>>But then, in that case, one can embed that result into a
> >>>>>Multiverse, and the cycle repeats.
> >>>>
> >
> >I think I'm coming around to the view that neither of the above steps
> >are valid - but one could equally say they are as valid as each other.
>
> Not sure I see which steps you are talking about. The MGA is a
> reductio ad absurdum from comp + physical supervenience.

The step of folding a Multiverse into single universe by
dovetailing, followed by reembedding the single universe back into
another multiverse. It is what we've been discussing, but is _not_
part of either MGA nor Maudlin.

...snip...

> >
> >>This is not
> >>different than the comp or quantum immortality argument. The fact
> >>remains: the physical activity in one normal branch missing the
> >>register is the same as the physical activity in some branch not
> >>missing it, for the same particular computation.
> >
> >In all branches, or just special ones? If all branches, then the
> >register is totally unnecessary.
>
> In this case the same computations, with the same inputs are done in
> all branches.
>

But then, this is not a Multiverse. By definition, a multiverse's
> branches will be distinguished by the inputs.

>
>
> >If just a special pair of branches,
> >then Maudlin's argument shows that supervenience must occur across
> >more branches than those two.
>
> There are no more branches. We are now simulating all the branches in
> a single reality. If that is not possible, then comp is already
> false.
>

We're talking past each other here...

...snip...

> OK, but then to make your argument you have to shift toward a multi-
> multiverse, given that we have come back to a single universe. And
> the argument can continue: I will just simulate that
> multi-multiverse in a single branch on a single classical universal
> machine. If that is not possible, then comp is false. If that is
> possible, then MGA will apply again.
>

As I said above, in simulating the multiple branches of the Multiverse
by dovetailing, we are no longer instantiating consciousness. (Just
like all blocks of marble are not David.) This step is basically
invalid. It does not imply COMP is false, though.

Of course, if you can think of another way of simulating a multiverse
> within a single universe, I will naturally reconsider...

>
>
> >
> >But this doesn't answer the why question. I could imagine that you
> >might feel that Multiverses are otiose, so would prefer a derivation
> >of their existence from something "simpler" - eg arithmetic of the
> >whole numbers.
>
> Not at all. It is the idea that there is primary matter which is
> otiose, or epistemologically contradictory. It is physicalism which
> is shown wrong. The multiverse is shown to be emergent from a numbers'
> multi-dream. Physics becomes a branch of machine's theology.
>
>
>
> >
> >That's fine and dandy - but the Multiverse is not otiose - it is far
> >less of an impost than a single reality.
>
> Yes. We agree on this. It is the main theme of this list. The
> question is not about their existence, but their primitivity.
>

Actually, I have no problem with this. I am quite persuaded by Kant's
concept of the unknowable noumenon to apply Laplace's "Sire, je n'ai
besoin de cet hypothese" to the whole issue of primitive reality. I
don't need the MGA to conclude that. But others seem to require a
primitive "something" to exist, and are most upset at
immateriality. I'm happy to work out what is the limit of our
knowledge and move on from there. If any universal system can
reproduce phenomena, then that suffices. It is a meaningless question
to ask "which universal system" - it could be all of them, or even nothing
at all.

Coming back to the MGA - it would be interesting to know whether your
movie graph construction escapes the multiverse embedding move. Assume
your filmed graph has had most of its nodes removed, but by
coincidence, a supernova sends a blast of photons that exactly
reproduce the graph in one of the MW branches. In the vast majority of
branches, the filmed graph is as dead as a doornail.

Does this render the graph conscious in that single lucky branch?

I think this question bears on the issue of Boltzmann brains too,
something we haven't discussed enough here.

Cheers

Bruno Marchal

unread,
Dec 18, 2011, 6:46:10 AM12/18/11
to everyth...@googlegroups.com

On 17 Dec 2011, at 23:52, Russell Standish wrote:

> On Sat, Dec 17, 2011 at 11:59:07AM +0100, Bruno Marchal wrote:
>
> ... snip ...
>>
>>> In a multiverse, the
>>> counterfactuals are realised, but in different branches.
>>
>> Not necessarily. If the computation is classical, it is the same in
>> the normal continuations. The classical counterfactuals are not
>> realized in a quantum multiverse. You have to put the Klara in some
>> superposition state to do that. You need a quantum Olympia.
>>
>
> They will automatically be in superposition, being just a classical
> device replicated across the branches. But they are not quantum
> devices (in the sense of processing qubits).

But it will be in the same state across the multiverse. If different,
then they are different computations, and different consciousnesses, and we
are no longer in the situation of the argument.

>
>>
>>
>>
>>> Hence those
>>> "inert" parts are no longer inert in all branches.
>>
>> I don't see this.
>>
>
> Consider an instruction that monitors the circular polarisation of a
> photon. If the photon is left polarised, then the program branches, if
> right, the program continues to the next instruction.

OK, then we are in the case of quantum superposition (or something alike).


>
> In Maudlin's initial run, suppose the program didn't branch (we
> now eliminate from consideration the MW branch in which the program
> did branch prior to the construction of Olympia).
>
> Now Maudlin construct an Olympia with without branching at the
> point. Or we do your step of replacing the branch instruction by a no
> operation, as the result will be the same. The we run Olympia, with a
> Klara attached at that step.
>
> In the Multiverse branches where the photon is right polarised,
> Olympia continues on, and the Klara remains inert. In the branch where
> a left polarised photon is observed, the Klara springs to life, and
> implements the missing branch instruction.
>
> This is what I mean. At all stages, Klara and Olympia are classical
> computing devices, embedded in a Multiverse.

Consciousness, with comp, relies on interaction within a branch. The other
branches can change the statistics on the computations/continuations, but
consciousness does not depend on the presence of the other branches, unless
the brain is a quantum computer; but that only means that the level is lower
than usual, and we have to simulate it in a single branch to do the MGA or
Olympia reasoning again.

>
>>
>>
>>> If they were, they
>>> could be removed from the computer altogether, without affecting the
>>> computation.
>>
>> But that is the case for the computation under consideration.
>>
>
> No. See above.

Either the multiverse needed for that type of physical supervenience is
Turing emulable, or it is not.
If it is, we can do the MG reasoning, if it is not, comp is false.

>
>>
>>
>>>
>>>>
>>>> So you are introducing a different kind of physical multiverse,
>>>> which would handle the counterfactuals. But this will not work.
>>>> Either this physical multiverse, which plays the role of the
>>>> generalized brain, is Turing emulable, in which case I can emulate
>>>> it in a single Turing machine, for which the MGA will apply again.
>>>> Or it is not Turing emulable, but then the need of it will
>>>> contradict the comp assumption.
>>>>
>>>
>>> This step, as I understand it, is a form of dovetailing. Nobody
>>> really
>>> thinks of the dovetailer algorithm as instantiating consciousness,
>>> so
>>> the move is ultimately invalid, I would think.
>>
>> The problem is there. With comp + the physical supervenience thesis,
>> the dovetailing algorithm does instantiate consciousness.
>>
>
> The dovetailer instantiates consciousness in exactly the same way that
> a random block of marbles instantiates the statue of David.

The dovetailer generates precise programs, and runs them in the precise
computer science sense. Nobody has ever shown me one instance of a
program executed by a rock, except for little trivial programs.
Actually, nobody has succeeded in telling me what a rock is, or in what
sense it exists.
The universal dovetailer generates immensely big programs, which does
not make sense in most conceptions of rocks that I know of. That
the statue of David is in a random block of marble is irrelevant for
the issue of consciousness related to program execution.
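For concreteness, here is a minimal sketch of what dovetailing means operationally: at each phase a new program is started, and every already-started program is advanced by one more step, so each program eventually receives arbitrarily many steps even though none ever has to finish. The generator standing in for "the i-th program" is a toy placeholder, not the real enumeration of all programs.

def program(i):
    # Toy stand-in for the i-th program: an endless computation that just
    # yields successive multiples of i.
    k = 0
    while True:
        yield i * k
        k += 1

def universal_dovetailer(phases):
    running = []                      # programs started so far
    for n in range(phases):
        running.append(program(n))    # phase n starts program n
        for i, p in enumerate(running):
            value = next(p)           # ...and gives every started program one more step
            print(f"phase {n}: one more step of program {i} -> {value}")

universal_dovetailer(4)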

>
> I think that for most people call a shapeless block of marble 'David'
> is a bit perverse.

Sure. But that the UD runs the many lives of Russell is a correct
consequence of the comp hyp.


>
>>> It is not a question of the parallel realities playing a role in the
>>> computation, but in the supervenience. Maudlin's argument says If
>>> COMP, then supervenience on single universe is contradictory. But it
>>> doesn't say anything about supervenience on multiple parallel
>>> realities.
>>
>> Those are relevant for the relative measure on continuations.
>
> Quite possibly. But that is an independent question to Maudlin's
> argument. I'm trying to stay focussed here.

My point is just that the use of the multiverse does not change the
consequence of the MGA reasoning.

>
>>>>>>
>>>>>>> If you then fold the multiverse back into a
>>>>>>> single universe by dovetailing, one can then reapply the Maudlin
>>>>>>> move.
>>>>>>
>>>>>> Indeed. That is the key point.
>>>>>>
>>>>>>
>>>>>>
>>>>>>> But then, in that case, one can embed that result into a
>>>>>>> Multiverse, and the cycle repeats.
>>>>>>
>>>
>>> I think I'm coming around to the view that neither of the above
>>> steps
>>> are valid - but one could equally say they are as valid as each
>>> other.
>>
>> Not sure I see which steps you are talking about. The MGA is a
>> reductio ad absurdum from comp + physical supervenience.
>
> The step of folding a Multiverse into single universe by
> dovetailing, followed by reembedding the single universe back into
> another multiverse. It is what we've been discussing, but is _not_
> part of either MGA nor Maudlin.

Indeed. We just show that comp + physical supervenience are
contradictory. But you are the one introducing a multiverse to object
to the validity of the reasoning, and my point is that such a move
does not work because the multiverse, if needed, can be emulated in a single
computation, where we can do Maudlin's or the MGA reasoning again.


>
> ...snip...
>
>>>
>>>> This is not
>>>> different than the comp or quantum immortality argument. The fact
>>>> remains: the physical activity in one normal branch missing the
>>>> register is the same as the physical activity in some branch not
>>>> missing it, for the same particular computation.
>>>
>>> In all branches, or just special ones? If all branches, then the
>>> register is totally unnecessary.
>>
>> In this case the same computations, with the same inputs are done in
>> all branches.
>>
>
> But then, this is not a Multiverse. By definition, a multiverse's
> branches will be distunguished by the inputs.

That depends on many things. If I take a lift, I will take a lift in
the vast majority of my consistent extensions.

>
>>
>>
>>> If just a special pair of branches,
>>> then Maudlin's argument shows that supervenience must occur across
>>> more branches than those two.
>>
>> There is no more branches. We are now simulating all the branches in
>> a single reality. If that is not possible, then comp is already
>> false.
>>
>
> We're talking past each other here...
>
> ...snip...
>
>> OK, but then to make your argument you have to shift toward a multi-
>> multiverse, given that we have come back to a single universe. And
>> the argument can continue: I will just simulate that
>> multi-multiverse in a single branch on a single classical universal
>> machine. If that is not possible, then comp is false. If that is
>> possible, then MGA will apply again.
>>
>
> As I said above, in simulating the multiple branches of the Multiverse
> by dovetailing, we are no longer instantiating consciousness.

I don't follow you on this. The first person is not even aware of the
giant delays brought by the dovetailing procedure.
The UD (at first concrete) does instantiate consciousness (and "at
time" with the physical supervenience thesis). It is all what we need
for getting the (epistemological) contradiction.

> (Just
> like all blocks of marble are not David.)

It is different. There is no supervenience of experience in this case.


> This step is basically
> invalid. It does not imply COMP is false, though.
>
> Of course, if you can think of another way of simulating a multiverse
> within a single universe, I wil naturally reconsider...

The way of simulating it changes nothing. It is the point of the first
six steps of the UDA.

>
>>
>>
>>>
>>> But this doesn't answer the why question. I could imagine that you
>>> might feel that Multiverses are otiose, so would prefer a derivation
>>> of their existence from something "simpler" - eg arithmetic of the
>>> whole numbers.
>>
>> Not at all. It is the idea that there is primary matter which is
>> otiose, or epistemologically contradictory. It is physicalism which
>> is show wrong. The multiverse is shown to be emergent from a numbers
>> multi-dream. Physics becomes a branch of machine's theology.
>>
>>
>>
>>>
>>> That's fine and dandy - but the Multiverse is not otiose - it is far
>>> less of an impost than a single reality.
>>
>> Yes. We agree on this. It is the main theme of this list. The
>> question is not about their existence, but their primitivity.
>>
>
> Actually, I have no problem with this. I am quite persuaded by Kant's
> concept of the unknowable noumenon to apply Laplace's "Sire, je n'ai
> besoin de cet hypothese" to the whole issue of primitive reality. I
> don't need the MGA to conclude that. But others seem to require a
> primitive "something" to exist, and are most upset at
> immateriality.

Yes. The idea is to explain that the belief in a primitive physical
reality makes no sense once we bet on digital mechanism. Comp makes
physical reality emerge already from the additive and multiplicative
number relations. What emerges is a complete coupling of
consciousness and physical reality, and consciousness
has a key role in that emergence. (Not human consciousness, but
universal number consciousness).


> I'm happy to work out what is the limit of our
> knowledge and move on from there. If any universal system can
> reproduce phenomena, then that suffices.

I agree. But it took me 30 years to convince die-hard materialists. So
it is not obvious. That's why we need a proof/argument.

> It is a meaningless question
> to ask "which universal system" - it could be all of them, or even
> nothing
> at all.

We need at least one universal system.
UDA (including step 8) shows that the laws of physics are independent
of the choice of the universal system, because physics (below our
subst level) is a sum of the works/dreams of all universal machines.
This gives, together with self-referential correctness, enough precise
constraints (by UDA) to (begin) the derivation of the laws of physics
(AUDA).


>
> Coming back to the MGA - it would be interesting to know whether your
> movie graph construction escapes the multiverse embdding move. Assume
> your filmed graph has had most of its nodes removed, but by
> coincidence, a supernova sends a blast of photons that exactly
> reproduce the graph in one of the MW branches. In the vast majority of
> branches, the filmed graph is a dead as a doornail.
>
> Does this render the graph conscious in that single lucky branch?

Yes (in case of comp + physical supervenience). But no branch is
single. All branches have 2^aleph_zero counterparts. Branches can be
rare relative to others, in measure terms, but all have the power of
the continuum, in the first person sense (despite everything being
countable in the (big) third person picture). This is trivial, because
for every program the UD, stupidly enough, re-executes it by
dovetailing it dumbly on all initial segments of all real numbers
(which is a continuum in the first person perspective).

>
> I think this question bears on the issue of Boltzmann brains too,
> something we haven't discussed enough here.

The UD makes this irrelevant, or trivial. We are in infinities of
digital Boltzmann computers. We belong to infinite layers of universal
dreams. Matter is a first person plural mind construct. Boltzmann's idea
can be seen as a precursor of the UD type of argument. It is Church's
thesis and the notion of universal machine which make that idea
precise.

Best,

Bruno

http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Dec 18, 2011, 6:46:29 AM12/18/11
to everyth...@googlegroups.com
To save the physical supervenience thesis, some people do that. But it is a confusion between a computation and a description of a computation.

Bruno


Bruno Marchal

unread,
Dec 18, 2011, 6:48:21 AM12/18/11
to everyth...@googlegroups.com

Not really. Consciousness needs only universality, in some weak sense.
Self-consciousness needs the induction beliefs. I am open to the idea that
Robinson Arithmetic is conscious, and that Peano Arithmetic (and its
sound extensions) are self-conscious (Löbian). But consciousness is
abstract/immaterial and concerns abstract/immaterial persons, not
material relative tokens.
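For reference, the "induction beliefs" are the instances of the first-order induction schema, one axiom per formula phi: this is what Peano Arithmetic has and Robinson Arithmetic lacks, and it is the standard way to spell out the difference invoked here.

\[
\bigl(\varphi(0) \;\land\; \forall x\,(\varphi(x) \rightarrow \varphi(x+1))\bigr)
\;\rightarrow\; \forall x\,\varphi(x)
\]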

> Yet there are very many people, whom we assume we are conscious,
> but have never heard of finite induction (their "finite induction"
> register is missing).

Not sure. All humans (if not all mammals, or even much more) are
Löbian. To grasp the meaning of words like "anniversary", "annual",
"monthly", "death", "repetition", "constant", or any word with the
suffix "-able", you need implicitly some amount of induction.

> Maudlin comments that "intelligence" is dispositional and so is
> different from consciousness. But why isn't a computation
> dispositional too?

It is. Even for Maudlin, I think. And consciousness might have a
dispositional component, but it also has a factual (true) key
component (cf. the difference between Bp and Bp & p).
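Spelled out, the two modalities being contrasted are (with B read as the machine's provability/belief predicate):

\[
\text{believing } p:\; B p
\qquad\qquad
\text{knowing } p \text{ (Theaetetus)}:\; B p \land p
\]

For a sound machine the two hold of the same sentences, yet the machine cannot prove that equivalence, and the two obey different modal logics; that is the sense in which the second adds a factual (true) component to the dispositional one.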

> If it were not then it seems that you run into the paradox of the
> rock that computes everything.
>
> I think the paradox arises from neglecting the fact that
> intelligence, computation, and maybe consciousness are all relative
> to a context or environment.

I agree, for that paradox. But this does not make the immaterialist
conclusion invalid. Just to be precise.

> Intelligence is the ability to learn from interacting with the
> environment AND acting to some purpose in the environment.

OK. And consciousness is about the same together with some important
true semantical fixed point.

Bruno

http://iridia.ulb.ac.be/~marchal/

meekerdb

unread,
Dec 18, 2011, 1:56:59 PM12/18/11
to everyth...@googlegroups.com
I don't understand that remark.  Are you saying the playback is only a description of a computation and not a computation?  Or is the sequence of physical states only a description?

Brent

Russell Standish

unread,
Dec 18, 2011, 5:27:34 PM12/18/11
to everyth...@googlegroups.com
On Sun, Dec 18, 2011 at 12:46:10PM +0100, Bruno Marchal wrote:

...snip...


>
> Consciousness, with comp relies on interaction in a branch. The

I don't know what you mean by this.

> other branch can change statistics on the computations/continuation,
> but on the presence of other branch, unless the brain is a quantum
> computer, but this only means that the level is lower than usual,
> and we have to simulate it in a single branch to do the MGA or
> Olympia reasoning again.
>

I am not considering the case of the brain being a quantum computer
(eg Penrose's idea). As you say, it wouldn't make much difference anyway.

...snip...

> Either the mutiverse needed for that type of physical supervenience
> is Turing emulable, or it is not.
> If it is, we can do the MG reasoning, if it is not, comp is false.
>

If the emulation is by means of dovetailing, then I think not. A
dovetailer is not conscious. Nor is Peano arithmetic for that
matter. Some structures within these might be conscious however (by
COMP, necessarily so).

In this case, the distinction is more than merely linguistic. Often
when you say PA is conscious, I translate your comment as above, and
continue on. But this can't be done here.

...snip

> >The dovetailer instantiates consciousness in exactly the same way that
> >a random block of marbles instantiates the statue of David.
>
> The dovetailer generates precise programs, and run it in the precise
> computer science sense. Nobody has ever show me one instance of a
> program executed by a rock, except for little trivial program.
> Actually, nobody has succeeded to tell me what a rock is, or in what
> sense that exists.
> The universal dovetailer generates immensely big programs, which
> does not make sense in most possible conception of rocks that I
> know. That the statue of David is in a random marble of block is
> irrelevant for the issue of consciousness relate to program
> execution.
>

Actually, I wasn't alluding to computing rocks (in spite of a certain
similarity).

The issue is misidentification of the dovetailer with one of the
programs it is executing.

It is the same mistake to say that the Library of Babel contains all
the wisdom of the ages just because one can find a copy of every book that
has ever been written within its walls.

...snip...

>
> My point is just that the use of the multiverse does not change the
> consequence of the MGA reasoning.
>

This we disagree on, clearly. However, I'm still failing to understand
your point of view, because ISTM you want to call the UD conscious,
when it surely can't be. It's as dumb as.

> >
> >As I said above, in simulating the multiple branches of the Multiverse
> >by dovetailing, we are no longer instantiating consciousness.
>
> I don't follow you on this. The first person is not even aware of
> the giant delays brought by the dovetailing procedure.
> The UD (at first concrete) does instantiate consciousness (and "at
> time" with the physical supervenience thesis). It is all what we
> need for getting the (epistemological) contradiction.

The UD runs all programs, including conscious ones, but is not
conscious in itself. Therefore, you cannot apply Maudlin/MGA to a
dovetailer - it makes no sense.

>
>
>
>
>
> >(Just
> >like all blocks of marble are not David.)
>
> It is different. There is no supervenience of experience in this case.
>

It was an analogy. The analogue of consciousness was form in this case.

>
>
>
> >This step is basically
> >invalid. It does not imply COMP is false, though.
> >
> >Of course, if you can think of another way of simulating a multiverse
> >within a single universe, I wil naturally reconsider...
>
> The way of simulating it changes nothing. It is the point of the
> first six step of UDA.

Only the subjective view is unchanged. The 3rd person view is changed
utterly. In the 1st person view, we have consciousness supervening on a
multiverse, which doesn't change. But Maudlin's argument no longer
works in the 1st person view.

...snip...

Bruno Marchal

unread,
Dec 19, 2011, 8:56:10 AM12/19/11
to everyth...@googlegroups.com

On 18 Dec 2011, at 23:27, Russell Standish wrote:

> On Sun, Dec 18, 2011 at 12:46:10PM +0100, Bruno Marchal wrote:
>
> ...snip...
>>
>> Consciousness, with comp relies on interaction in a branch. The
>
> I don't know what you mean by this.

Assuming comp, consciousness can use a machine working only by
interaction of parts. It does not need parallel universes, which will
only change the relative statistics of consciousness content.


>
>> other branch can change statistics on the computations/continuation,
>> but on the presence of other branch, unless the brain is a quantum
>> computer, but this only means that the level is lower than usual,
>> and we have to simulate it in a single branch to do the MGA or
>> Olympia reasoning again.
>>
>
> I am not considering the case of the brain being a quantum computer
> (eg Penrose's idea). As you say, it wouldn't make much difference
> anyway.
>
> ...snip...

OK.

>
>> Either the mutiverse needed for that type of physical supervenience
>> is Turing emulable, or it is not.
>> If it is, we can do the MG reasoning, if it is not, comp is false.
>>
>
> If the emulation is by means of dovetailing, then I think not. A
> dovetailer is not conscious.

That is ambiguous. A dovetailer, like Robinson arithmetic (when
proving all its theorems) is not conscious, per se. But it
instantiates consciousness, indeed all possible machines' consciousness.

> Nor is Peano arithmetic for that
> matter.

PA is a Löbian machine, and I think it is as conscious as you and me.
It has the same Löbian theology (the 8 hypostases), and thus it has
even the same physics (but that's another topic).

> Some structures within these might be consious however (by
> COMP, necessarily so).

So we agree. The confusion is that something (a brain, a universe, a
universal dovetailer) can instantiate consciousness without being
conscious. For MGA the boolean graph needs to instantiate
consciousness. It does not need to be conscious (a good thing given
that we don't know our substitution level: we never know which program
we are).


>
> In this case, the distinction is more than merely linguistic. Often
> when you say PA is conscious, I translate your comment as above, and
> continue on.

When I say that PA is conscious, I mean it literally.
When I say that RA is conscious, I mean "the universal machine RA" is
conscious literally. But RA can also play the role of a universal
dovetailer, which is not a person, and thus not conscious. RA can play
both, somehow. Like any universal machine, even a Löbian one, can
implement a universal dovetailing (if patient enough!).


> But this can't be done here.

I agree that the UD is not a person, and as such its consciousness is
even non-sensical. But if you agree that the UD instantiates
consciousness, then the MGA applies to it. I can say yes to a doctor
who takes a much lower level than the correct one, putting much
too much in the artificial brain. And you were saying the MGA does not work
in case of a physical supervenience based on a multiverse. That is why
I put the level so low that I emulate that multiverse, making the MGA
work on that structure. An infinitely low level can only force me
to implement (as my brain) a multiverse, or even the universal
dovetailing itself. This will subsume all multi-multi-multi-
multi .... (^alpha) universes (alpha a constructive ordinal).

>
> ...snip
>
>>> The dovetailer instantiates consciousness in exactly the same way
>>> that
>>> a random block of marbles instantiates the statue of David.
>>
>> The dovetailer generates precise programs, and run it in the precise
>> computer science sense. Nobody has ever show me one instance of a
>> program executed by a rock, except for little trivial program.
>> Actually, nobody has succeeded to tell me what a rock is, or in what
>> sense that exists.
>> The universal dovetailer generates immensely big programs, which
>> does not make sense in most possible conception of rocks that I
>> know. That the statue of David is in a random marble of block is
>> irrelevant for the issue of consciousness relate to program
>> execution.
>>
>
> Actually, I wasn't alluding to computing rocks (in spite of a certain
> simularity) ).
>
> The issue is misidentification of the dovetailer with one of the
> programs it is executing.

But as I just said, this is not relevant for the movie graph or
Maudlin's argument.


>
> It is the same mistake to say that the Library of Babel contains all
> the wisdom of the ages just because one can find a copy of every
> book that
> has ever been written within its walls.

Well, here there is also another mistake, which is that a library of
Babel contains only descriptions, whereas a universal dovetailer actually
executes the 'descriptions'. The numbers, even with their order, are
not Turing universal. It is really the laws of addition and
multiplication which give rise to the genuine (universal) internal
'dynamics'.


>
> ...snip...
>
>>
>> My point is just that the use of the multiverse does not change the
>> consequence of the MGA reasoning.
>>
>
> This we disagree on, clearly. However, I'm still failing to understand
> your point of view, because ISTM you want to call the UD conscious,
> when it surely can't be. Its as dumb as.

The UD is not conscious, as a person, but once you add the
supervenience thesis, it instantiates consciousness at each moment
where it executes a conscious program (say Russell Standish's one,
then Bruno's one, etc). That is enough for applying the MGA argument.
We don't have to execute all of the UD: with a physical supervenience
thesis, "it" will instantiate consciousness after a finite time, and
the MGA will be able to be applied to a portion of its execution.

>
>>>
>>> As I said above, in simulating the multiple branches of the
>>> Multiverse
>>> by dovetailing, we are no longer instantiating consciousness.
>>
>> I don't follow you on this. The first person is not even aware of
>> the giant delays brought by the dovetailing procedure.
>> The UD (at first concrete) does instantiate consciousness (and "at
>> time" with the physical supervenience thesis). It is all what we
>> need for getting the (epistemological) contradiction.
>
> The UD runs all programs, including conscious ones, but is not
> conscious in itself. Therefore, you cannot apply Maudlin/MGA to a
> dovetailer - it makes no sense.

Why?
I think that you have to elaborate on this. The MGA works for any
single program generating (with supervenience) consciousness. Where do
we use the fact that the program has to be conscious? Even with a
brain, consciousness is attributed to some sub-program, and we cannot
even know which one. I don't see your point.


>
>>
>>
>>
>>
>>
>>> (Just
>>> like all blocks of marble are not David.)
>>
>> It is different. There is no supervenience of experience in this
>> case.
>>
>
> It was an analogy. The analogue of consciousness was form in this
> case.

OK.

>
>>
>>
>>
>>> This step is basically
>>> invalid. It does not imply COMP is false, though.
>>>
>>> Of course, if you can think of another way of simulating a
>>> multiverse
>>> within a single universe, I wil naturally reconsider...
>>
>> The way of simulating it changes nothing. It is the point of the
>> first six step of UDA.
>
> Only the subjective view is unchanged. The 3rd person view is changed
> utterly. In the 1st person view, we have conscousness supervening on a
> multiverse, which doesn't change. But Maudlin's argument no longer
> works in the 1st person view.

I don't see why.

Bruno

http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Dec 19, 2011, 10:13:38 AM12/19/11
to everyth...@googlegroups.com
Yes.



Or is the sequence of physical states only a description?

Only a description. If those states have been linked by a universal machine, then there is a genuine computation "in the air". That physical manifestation corresponds to a genuine computation (which is an abstract notion living in Platonia). Consciousness is attached to such a computation. The problem for those who want both comp and the physical supervenience thesis is that they have to attach consciousness to a physical implementation of a computation, but then MGA/Maudlin can be used to show that they have to attach consciousness to a movie, or to a physical description of a computation, which makes no sense, given that there are no more causal or arithmetical (or java-ical, etc.) relationships between the states: so it no longer corresponds to a computation at all.
Physics can implement computation only in virtue of having itself elementary causal processes making it Turing universal, but consciousness (abstract) is only attached to the (abstract) computation, not to any particular implementation.
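A toy contrast may help fix the distinction being drawn. In the first list below each state is produced from the previous one by a rule (a computation); in the second, the very same numbers are merely stored and read back (a description, a movie), and nothing in the stored list enforces any link between consecutive frames. The transition rule is an arbitrary illustrative choice.

def step(state):
    # the causal/arithmetical link between consecutive states
    return (3 * state + 1) % 17

def compute(initial, n):
    states, s = [initial], initial
    for _ in range(n):
        s = step(s)
        states.append(s)
    return states

trace = compute(5, 10)     # states linked, each forced by the previous one
movie = list(trace)        # the same numbers, now just a recording

assert movie == trace      # frame for frame, the movie matches the run...
movie[4] = 999             # ...but any frame can be altered and the "playback"
                           # still proceeds; in the computation, state 4 is
                           # fixed by state 3 via `step`.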

If you want, no bodies are conscious. People are conscious, and their bodies are only a way to manifest themselves relative to other people, again through their apparent bodies.

Things are bizarre only if we keep both comp and the idea that matter is a primitive concept, or that physics is the science of the ultimate reality. That explains why naturalists well versed in the mind-body difficulties eventually want to eliminate consciousness. But a second of introspection refutes the non-existence of consciousness. So *primitive matter* is easier to eliminate, even if it goes against our animal prejudices (reified by Aristotle).

As I have said once: it is easier to explain to a consciousness the illusion of matter, than to explain to a piece of matter the illusion of consciousness. 

Bruno




meekerdb

unread,
Dec 19, 2011, 1:55:27 PM12/19/11
to everyth...@googlegroups.com
On 12/19/2011 5:56 AM, Bruno Marchal wrote:
>> In this case, the distinction is more than merely linguistic. Often
>> when you say PA is conscious, I translate your comment as above, and
>> continue on.
>
> When I say that PA is conscious, I mean it literally.
> When I say that RA is conscious, I mean "the universal machine RA" is conscious
> literally. But RA can also play the role of a universal dovetailer, which is not a
> person, and thus not conscious. RA can play both, somehow. Like any universal machine,
> even a Löbian one, can implement a universal dovetailing (if patient enough!).

>
>
>
>
>> But this can't be done here.
>
> I agree that the UD is not a person, and as such its consciousness is even non-sensical.
> But if you agree that the UD instantiates consciousness, then the MGA applies to it. I
> can say yes to a doctor because it takes a much lower level than the correct one,
> putting much to much in the artificial brain. And you were saying MGA does not work in
> case of a physical supervenience based on a multiverse. That is why I put the level so
> down so that I emulate that multiverse, making MGA working on that structure. An
> infinitely low level can only force me to implement (as my brain) a multiverse, or even
> the universal dovetailing itself. This will subsumes all multi-multi-multi-multi ....
> (^alpha) universes (alpha constructive ordinal).

This is related to my point that consciousness is relative to some context. ISTM that
pushing the substitution level down so low that you are emulating the physics of the
environment as well as the brain vitiates the argument. If I emulate a universe or
multiverse in order to instantiate consciousness then I have not shown consciousness to be
independent of physics. I've only shown that consciousness supervenes on the physics of
the emulated multiverse.

Brent


Quentin Anciaux

unread,
Dec 19, 2011, 2:16:31 PM12/19/11
to everyth...@googlegroups.com


2011/12/19 meekerdb <meek...@verizon.net>
On 12/19/2011 5:56 AM, Bruno Marchal wrote:
In this case, the distinction is more than merely linguistic. Often
when you say PA is conscious, I translate your comment as above, and
continue on.

When I say that PA is conscious, I mean it literally.
When I say that RA is conscious, I mean "the universal machine RA" is conscious literally. But RA can also play the role of a universal dovetailer, which is not a person, and thus not conscious. RA can play both, somehow. Like any universal machine, even a Löbian one, can implement a universal dovetailing (if patient enough!).





But this can't be done here.

I agree that the UD is not a person, and as such its consciousness is even non-sensical. But if you agree that the UD instantiates consciousness, then the MGA applies to it. I can say yes to a doctor because it takes a much lower level than the correct one, putting much to much in the artificial brain. And you were saying MGA does not work in case of a physical supervenience based on a multiverse. That is why I put the level so down so that I emulate that multiverse, making MGA working on that structure. An infinitely low level can only force me to implement (as my brain) a multiverse, or even the universal dovetailing itself. This will subsumes all multi-multi-multi-multi .... (^alpha) universes (alpha constructive ordinal).
This is related to my point that consciousness is relative to some context.  ISTM that pushing the substitution level down so low that you are emulating the physics of the environment as well as the brain vitiates the argument.  If I emulate a universe or multiverse in order to instantiate consciousness then I have not shown consciousness to be independent of physics.  I've only shown that consciousness supervenes on the physics of the emulated multiverse.

Brent

Also I can't see how the view that physics emerges from the invariance of infinities of interfering computations allows physics to be entirely simulated in one computation (ultra low substitution level).

ISTM that if the level is that low... then comp is false, because physics ultimately must not be computable... digital physics is not compatible with computationalism, but an ultra-low level is digital physics, and that's contradictory.

And also, ISTM that a substitution level embedding all the multiverse is *infinite*, hence not Turing emulable.





meekerdb

unread,
Dec 19, 2011, 2:50:19 PM12/19/11
to everyth...@googlegroups.com
On 12/19/2011 7:13 AM, Bruno Marchal wrote:
...

> if you want, no bodies are conscious. People are conscious, and their bodies are only a
> way to manifest themselves relatively to other people, again through their apparent bodies.

But also, "only" a way to interact with the physics of their environment. It seems to me
that consciousness without this interaction is a dubious concept.

>
> Things are bizarre only if we keep both comp and the idea that matter is a primitive
> concept, or that physics is the science of the ultimate reality. That explains why
> naturalist well versed in the mind-body difficulties eventually want to eliminate
> consciousness. But a second of introspection refutes the non-existence of consciousness.
> So *primitive matter* is easier to eliminate, even if it go against our animal
> prejudices, (reifed by Aristotle).
>
> As I have said once: it is easier to explain to a consciousness the illusion of matter,
> than to explain to a piece of matter the illusion of consciousness.

OK. But a second of extrospection refutes the non-existence of matter. You take neither
as primitive, which is OK, but then you need to recover both consciousness and matter.

Brent

meekerdb

unread,
Dec 19, 2011, 4:26:27 PM12/19/11
to everyth...@googlegroups.com
On 12/19/2011 11:16 AM, Quentin Anciaux wrote:


2011/12/19 meekerdb <meek...@verizon.net>
On 12/19/2011 5:56 AM, Bruno Marchal wrote:
In this case, the distinction is more than merely linguistic. Often
when you say PA is conscious, I translate your comment as above, and
continue on.

When I say that PA is conscious, I mean it literally.
When I say that RA is conscious, I mean "the universal machine RA" is conscious literally. But RA can also play the role of a universal dovetailer, which is not a person, and thus not conscious. RA can play both, somehow. Like any universal machine, even a Löbian one, can implement a universal dovetailing (if patient enough!).




But this can't be done here.

I agree that the UD is not a person, and as such its consciousness is even non-sensical. But if you agree that the UD instantiates consciousness, then the MGA applies to it. I can say yes to a doctor because it takes a much lower level than the correct one, putting much to much in the artificial brain. And you were saying MGA does not work in case of a physical supervenience based on a multiverse. That is why I put the level so down so that I emulate that multiverse, making MGA working on that structure. An infinitely low level can only force me to implement (as my brain) a multiverse, or even the universal dovetailing itself. This will subsumes all multi-multi-multi-multi .... (^alpha) universes (alpha constructive ordinal).

This is related to my point that consciousness is relative to some context.  ISTM that pushing the substitution level down so low that you are emulating the physics of the environment as well as the brain vitiates the argument.  If I emulate a universe or multiverse in order to instantiate consciousness then I have not shown consciousness to be independent of physics.  I've only shown that consciousness supervenes on the physics of the emulated multiverse.

Brent

Also I can't see how the view that the physics emerge from invariance of infinities of interfering computations allows physics to be entirely simulated in one computation (ultra low substitution level).

ISTM that if the level is that low... then comp is false, because physics ultimately must not be computable... digital physics is not compatible with computationalism, but an ultra-low level is digital physics and that's contradictory.

But I think that's where our intuition misleads us.  It seems very likely that part or even all of one's brain could be replaced by a computer; and that computer could be emulated by a universal digital computer.  But this overlooks the essential part played by the rest of the world and its interaction with your brain.  This is passed over by saying the environment and its interactions can be simulated as well.  But now you are committed to a simulated consciousness that is dependent on simulated physics.

Brent


And also, ISTM that a substitution level embedding all the multiverse is *infinite* hence not turing emulable.








--
All those moments will be lost in time, like tears in rain.


Craig Weinberg

unread,
Dec 19, 2011, 5:28:33 PM12/19/11
to Everything List
On Dec 19, 4:26 pm, meekerdb <meeke...@verizon.net> wrote:

>
> But I think that's where our intuition misleads us.  It seems very likely that part or
> even all of one's brain could be replaced by computer; and that computer could be emulated
> by a universal digital computer.

Not to me. Does it seem very likely that part or even all of France
could be replaced by India? Or robot clones of French people?

Craig

meekerdb

unread,
Dec 19, 2011, 6:08:46 PM12/19/11
to everyth...@googlegroups.com

They are all replaced every 80yrs or so (some by Algerians).

Brent
"If you ask the wrong question, it won't matter what the answer is."

Russell Standish

unread,
Dec 19, 2011, 7:03:44 PM12/19/11
to everyth...@googlegroups.com
On Mon, Dec 19, 2011 at 02:56:10PM +0100, Bruno Marchal wrote:
>
> On 18 Dec 2011, at 23:27, Russell Standish wrote:
>
> >On Sun, Dec 18, 2011 at 12:46:10PM +0100, Bruno Marchal wrote:
> >
> >...snip...
> >>
> >>Consciousness, with comp relies on interaction in a branch. The
> >
> >I don't know what you mean by this.
>
> Assuming comp, consciousness can use a machine working only by
> interaction of parts. It does not need parallel universes, which
> will only change the relative statistics of consciousness content.
>

Even though the parts may be distributed across multiple branches of
the MV, and have different counterfactual histories?

Be careful of not including the conclusion in the definition of COMP.

...snip...

> >
> >If the emulation is by means of dovetailing, then I think not. A
> >dovetailer is not conscious.
>
> That is ambiguous. A dovetailer, like Robinson arithmetic (when
> proving all its theorems) is not conscious, per se. But it
> instantiates consciousness, indeed all possible machine's
> consciousness.
>

Great. But this is more than mere terminological wrangling. To an
observer of the dovetailer, no conscious processes are visible. To do
that would require a means of determining whether a computation is
conscious or not, something we don't have, and probably never will.

It is another manifestation of there being no God's viewpoint in a
Multiverse.

I feel this invalidates applying Maudlin's argument to a
dovetailer. But put another way, perhaps it means that consciousness
cannot supervene on a physical implementation of a dovetailer. Which
is probably what you're trying to get to.

...snip...

>
> >
> >In this case, the distinction is more than merely linguistic. Often
> >when you say PA is conscious, I translate your comment as above, and
> >continue on.
>
> When I say that PA is conscious, I mean it literally.
> When I say that RA is conscious, I mean "the universal machine RA"
> is conscious literally. But RA can also play the role of a universal
> dovetailer, which is not a person, and thus not conscious. RA can
> play both, somehow. Like any universal machine, even a Löbian one,
> can implement a universal dovetailing (if patient enough!).
>

I thought we agreed that these systems contain conscious structures,
rather than being conscious in themselves. I don't know otherwise how to
interpret "PA is literally conscious". It makes no sense...

>
>
>
> >But this can't be done here.
>
> I agree that the UD is not a person, and as such its consciousness
> is even non-sensical. But if you agree that the UD instantiates
> consciousness, then the MGA applies to it.

It doesn't eliminate the supervenience of the consciousness on the
simulated physics within the UD. It seems this is in accordance with
Brent's comments too.

Presumably you would argue that this is simulated matter, not
primitive matter. Sure. But what's to stop the primitive matter being
multiversal - whether it can be simulated or not is a little beside
the point.

...snip...

> >The issue is misidentification of the dovetailer with one of the
> >programs it is executing.
>
> But as I just said, this is not relevant for the movie graph or
> Maudlin's argument.

This is what I'm having difficulty seeing...

>
>
> >
> >It is the same mistake to say that the Library of Babel contains all
> >the wisdom of the ages just because one can find a copy of every
> >book that
> >has ever been written within its walls.
>
> Well, here there is also another mistake, which is that a library of
> Babel contains only description, where a universal dovetailer
> actually executes the 'descriptions'. The numbers, even with their
> order are not Turing universal. It is really the laws of addition
> and multiplication which gives rise to the genuine (universal)
> internal 'dynamics'.

Again, this is an analogy, like blocks of marble and David. Still, I
fail to see how instantiating all possible persons in all possible
environments actually creates a person at all. Just like instantiating
all possible books creates any particular book in the Library of Babel.

>
>
> >
> >...snip...
> >
> >>
> >>My point is just that the use of the multiverse does not change the
> >>consequence of the MGA reasoning.
> >>
> >
> >This we disagree on, clearly. However, I'm still failing to understand
> >your point of view, because ISTM you want to call the UD conscious,
> >when it surely can't be. Its as dumb as.
>
> The UD is not conscious, as a person, but once you add the
> supervenience thesis, it instantiates consciousness at each moment
> where it executes a conscious program (say Russell Standish's one,
> then Bruno's one, etc).

Why does this depend on supervenience?

> That is enough for applying the MGA
> argument. We don't have to execute all the UD, with aphysical
> supervenience thesis, "it" will instantiate consciousness after a
> finite time, and the MGA will be able to be applied on portion of
> its execution.
>
>
>
>
>
> >
> >>>
> >>>As I said above, in simulating the multiple branches of the
> >>>Multiverse
> >>>by dovetailing, we are no longer instantiating consciousness.
> >>
> >>I don't follow you on this. The first person is not even aware of
> >>the giant delays brought by the dovetailing procedure.
> >>The UD (at first concrete) does instantiate consciousness (and "at
> >>time" with the physical supervenience thesis). It is all what we
> >>need for getting the (epistemological) contradiction.
> >
> >The UD runs all programs, including conscious ones, but is not
> >conscious in itself. Therefore, you cannot apply Maudlin/MGA to a
> >dovetailer - it makes no sense.
>
> Why?
> I think that you have to elaborate on this. The MGA works for any
> single program generating (with supervenience) consciousness. Where
> do we use the fact that the program has to be conscious? Even with a
> brain, consciousness is attribute to some sub-program, and we cannot
> even know which one. I don't see your point.
>

Take the UD program, and unfold it into a list of all the states it
visits in its execution. Olympia is just the simple machine
that iterates over that list of states. Because the UD has no input, there
_are_ no counterfactuals, so there are no Klaras to be attached.
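
For concreteness, here is a toy sketch of what I mean (Python; the
program objects with initial_state() and step() are purely illustrative
placeholders, not anybody's actual code). Record the trace of states the
dovetailer visits, then "replay" it with a machine that does nothing but
walk the list. Since the run has no input, the replay is state-for-state
identical and there is nothing counterfactual to wire up:

    def dovetail(programs, rounds):
        """Interleave all programs, one step each per round, and return
        the flat list of visited states (the 'trace')."""
        trace = []
        states = [p.initial_state() for p in programs]
        for _ in range(rounds):
            for i, p in enumerate(programs):
                states[i] = p.step(states[i])   # advance program i by one step
                trace.append((i, states[i]))    # record the state just visited
        return trace

    def olympia_replay(trace):
        """A trivial machine that merely iterates over the recorded
        states. With no input, nothing can ever diverge from the
        recording, so no Klara-style counterfactual extensions are
        needed."""
        for i, state in trace:
            yield (i, state)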

Is there any contradiction with the UD computation being instantiated
on a trivial physical process? No - because clearly it happens. But I
would argue that the UD is not conscious anyway, so its a rather
irrelevant point. I don't see what supervenience has to do with it, as
the supervenience is of the simulated conscious entities on the
simulated environments in which they're embedded.

...snip...

> >
> >>
> >>
> >>
> >>>This step is basically
> >>>invalid. It does not imply COMP is false, though.
> >>>
> >>>Of course, if you can think of another way of simulating a
> >>>multiverse
> >>>within a single universe, I wil naturally reconsider...
> >>
> >>The way of simulating it changes nothing. It is the point of the
> >>first six step of UDA.
> >
> >Only the subjective view is unchanged. The 3rd person view is changed
> >utterly. In the 1st person view, we have conscousness supervening on a
> >multiverse, which doesn't change. But Maudlin's argument no longer
> >works in the 1st person view.
>
> I don't see why.

Because Maudlin assumes a single universe physics, and the 1st person
viewpoint is multiversal.

Craig Weinberg

unread,
Dec 20, 2011, 8:14:55 AM12/20/11
to Everything List
On Dec 19, 6:08 pm, meekerdb <meeke...@verizon.net> wrote:
> On 12/19/2011 2:28 PM, Craig Weinberg wrote:
>
> > On Dec 19, 4:26 pm, meekerdb<meeke...@verizon.net>  wrote:
>
> >> But I think that's where our intuition misleads us.  It seems very likely that part or
> >> even all of one's brain could be replaced by computer; and that computer could be emulated
> >> by a universal digital computer.
> > Not to me. Does it seem very likely that part or even all of France
> > could be replaced by India? Or robot clones of French people?
>
>
> They are all replaced every 80yrs or so (some by Algerians).

So it should be no problem to replace the brain with bone marrow.

meekerdb

unread,
Dec 20, 2011, 1:13:04 PM12/20/11
to everyth...@googlegroups.com
So long as the functionality is the same.

Brent

Bruno Marchal

unread,
Dec 20, 2011, 3:06:10 PM12/20/11
to everyth...@googlegroups.com

On 20 Dec 2011, at 01:03, Russell Standish wrote:

> On Mon, Dec 19, 2011 at 02:56:10PM +0100, Bruno Marchal wrote:
>>
>> On 18 Dec 2011, at 23:27, Russell Standish wrote:
>>
>>> On Sun, Dec 18, 2011 at 12:46:10PM +0100, Bruno Marchal wrote:
>>>
>>> ...snip...
>>>>
>>>> Consciousness, with comp relies on interaction in a branch. The
>>>
>>> I don't know what you mean by this.
>>
>> Assuming comp, consciousness can use a machine working only by
>> interaction of parts. It does not need parallel universes, which
>> will only change the relative statistics of consciousness content.
>>
>
> Even though the parts may be distributed across multiple branches of
> the MV, and have different counterfactual histories?

?

What is a branch of a world (W) in a many-worlds (MW) picture if you
allow interaction between worlds?
Comp, be it digital or quantum, makes classical computation
non-interacting with parallel computations. If our brains were quantum
computers this would be locally false, but not in a way relevant to
contradicting the consequences of the MGA: even if worlds interfere to
that extent, the Church thesis is not violated, and quantum computers
are Turing emulable.

Are you arguing that comp does not entail the "323" principle?


>
> Be careful of not including the conclusion in the definition of COMP.
>
> ...snip...
>
>>>
>>> If the emulation is by means of dovetailing, then I think not. A
>>> dovetailer is not conscious.
>>
>> That is ambiguous. A dovetailer, like Robinson arithmetic (when
>> proving all its theorems) is not conscious, per se. But it
>> instantiates consciousness, indeed all possible machine's
>> consciousness.
>>
>
> Great. But this is more than mere terminological wrangling. To an
> observer of the dovetailer, no conscious processes are visible.

No conscious processes are ever visible.


> To do
> that would require a means of determining whether a computation is
> conscious or not, something we don't have, and probably never will.

That's always the case. You judge by chatting with a person, or by
observing them and recognizing yourself in them.
That's how I became open to the idea that all Löbian entities (machine
or not) are conscious.

>
> It is another manifestation of there being no God's viewpoint in a
> Multiverse.

I am not sure about that. Many would say that the very idea of
"multiverse" is an attempt of describing "God's viewpoint".
With mechanism we have universal dreams and universal dreamers,
sharing, or not, dreams and subroutines.


>
> I feel this invalidates applying Maudlin's argument to a
> dovetailer.

Let me introduce a new definition. I define a closed generalized brain
(CGB) as the portion of reality that you need to emulate a dream. Many
neurophysiologists would say that such a portion of reality is in the
skull, and that the process is Turing emulable (I think that is your
position too). Comp implies that such a CGB exists. Given that the CGB
can be emulated by a Turing machine, why would it matter, from the first
person point of view, that the emulation is done by dovetailing?

> But put another way, perhaps it means that consciousness
> cannot supervene on a physical implementation of a dovetailer.
> Which
> is probably what you're trying to get to.

I just reason from the assumption. Consciousness would supervene on
the execution of a physical universal dovetailer. Why wouldn't it?


>
> ...snip...
>
>>
>>>
>>> In this case, the distinction is more than merely linguistic. Often
>>> when you say PA is conscious, I translate your comment as above, and
>>> continue on.
>>
>> When I say that PA is conscious, I mean it literally.
>> When I say that RA is conscious, I mean "the universal machine RA"
>> is conscious literally. But RA can also play the role of a universal
>> dovetailer, which is not a person, and thus not conscious. RA can
>> play both, somehow. Like any universal machine, even a Löbian one,
>> can implement a universal dovetailing (if patient enough!).
>>
>
> I thought we agreed that these systems contained conscious structures,
> not are conscious in themselves.

RA and the UD. They are computationally equivalent (to be short). RA
is a bit richer as a person, but still very shy in its provability
means. RA, as a computer, is Turing universal, but as a prover, is
very weak. Not Löbian. Probably conscious, but not yet self-conscious.
RA does not know her limits. It is the age of innocence. Principia
Mathematica, Peano Arithmetic (PA), and Zermelo-Fraenkel set theory
know their limits. They are Löbian: they obey the *modesty* law
(B(Bp->p)->Bp).
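(In the standard modal notation, with the box standing for the
belief/provability predicate B, this modesty law is just Löb's formula,

    \Box(\Box p \to p) \to \Box p,

which such Löbian theories prove for each sentence p of their language.)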

> I don't know otherwise how to
> interpret "PA is literally conscious". It makes no sense...

PA is a classical believer in elementary arithmetic, closed under the
induction principle for the formulas expressible in its language.
Its metamathematics shows how vertiginous is the range of what you can
discuss with PA, including the self-referential modal "tautologies",
which tell us something about PA itself. It gives a scientific account
of all the points of view, including the intensional nuances (forced by
the incompleteness phenomenon).

By being very simple, PA is closer to the idea of a universal person,
but PA, ZF, PM, etc. already have quite different characters; the
consciousness of the Löbian machines is bound to differentiate.


>
>>
>>
>>
>>> But this can't be done here.
>>
>> I agree that the UD is not a person, and as such its consciousness
>> is even non-sensical. But if you agree that the UD instantiates
>> consciousness, then the MGA applies to it.
>
> It doesn't eliminate the supervenience of the consciousness on the
> simulated physics within the UD. It seems this is in accordance with
> Brent's comments too.
>
> Presumably you would argue that this is simulated matter, not
> primitive matter. Sure.

You lost me here. The "primitive matter" in comp is not *a priori*
simulable; it appears below our sharable substitution level.

> But what's to stop the primitive matter being
> multiversal - whether it can be simulated or not is a little beside
> the point.

On the contrary, it is crucial. It makes the difference between being
emulable in one reality (in our branch of the quantum multiverse, in
case we imagine a concrete one), which is equivalent to being Turing
emulable, and necessitating non-Turing-emulable interactions or
interferences with parallel realities. The point is that if it is
Turing emulable, then the MGA applies. You then have to believe that a
physically inactive piece has a physical activity relevant to a
particular computation. I can't figure out what that could mean.

>
> ...snip...
>
>>> The issue is misidentification of the dovetailer with one of the
>>> programs it is executing.
>>
>> But as I just said, this is not relevant for the movie graph or
>> Maudlin's argument.
>
> This is what I'm having difficulty seeing...

If a dream can supervene on a Turing emulation of a closed generalized
brain, then it has to supervene on its emulation in one of its
classical instantiations, either in a concrete quasi-classical (normal)
history or, after MGA, in arithmetic or in the UD*. And in that
single-reality emulation, MGA can be applied. If you give a role to the
physically inactive pieces, by making them active in some other world,
you are forced to introduce a non-Turing-emulable *physical* component
in matter playing a role in consciousness, where comp shows that we get
it for free below our substitution level.

>
>>
>>
>>>
>>> It is the same mistake to say that the Library of Babel contains all
>>> the wisdom of the ages just because one can find a copy of every
>>> book that
>>> has ever been written within its walls.
>>
>> Well, here there is also another mistake, which is that a library of
>> Babel contains only description, where a universal dovetailer
>> actually executes the 'descriptions'. The numbers, even with their
>> order are not Turing universal. It is really the laws of addition
>> and multiplication which gives rise to the genuine (universal)
>> internal 'dynamics'.
>
> Again, this is an analogy, like blocks of marble and David. Still, I
> fail to see how instantiating all possible persons in all possible
> environments actually creates a person at all. Just like instantiating
> all possible books creates any particular book in the Library of
> Babel.

?

The instantiation of all persons in all *relatively* possible
environments, for example in UD*, gives the domain of your (hopefully
our) first person (plural) indeterminacies.

That's UDA 1-7, in the case where the UD is physically executed in a
steadily growing physical "?"verse.

MGA just shows that the steadily growing universe or multiverse is a
red herring.

>
>>
>>
>>>
>>> ...snip...
>>>
>>>>
>>>> My point is just that the use of the multiverse does not change the
>>>> consequence of the MGA reasoning.
>>>>
>>>
>>> This we disagree on, clearly. However, I'm still failing to
>>> understand
>>> your point of view, because ISTM you want to call the UD conscious,
>>> when it surely can't be. Its as dumb as.
>>
>> The UD is not conscious, as a person, but once you add the
>> supervenience thesis, it instantiates consciousness at each moment
>> where it executes a conscious program (say Russell Standish's one,
>> then Bruno's one, etc).
>
> Why does this depend on supervenience?

I meant "physical supervenience". It is just introduced to get the
contradiction.
MGA is a reductio ad absurdum.

We might agree. I am just introducing a notion of *physical*
supervenience to get the absurd conclusion: physically inactive pieces
of matter are physically active with respect to a computation.

>
> ...snip...
>
>>>
>>>>
>>>>
>>>>
>>>>> This step is basically
>>>>> invalid. It does not imply COMP is false, though.
>>>>>
>>>>> Of course, if you can think of another way of simulating a
>>>>> multiverse
>>>>> within a single universe, I wil naturally reconsider...
>>>>
>>>> The way of simulating it changes nothing. It is the point of the
>>>> first six step of UDA.
>>>
>>> Only the subjective view is unchanged. The 3rd person view is
>>> changed
>>> utterly. In the 1st person view, we have conscousness supervening
>>> on a
>>> multiverse, which doesn't change. But Maudlin's argument no longer
>>> works in the 1st person view.
>>
>> I don't see why.
>
> Because Maudlin assumes a single universe physics,

Where? It assumes only the Turing emulability.


> and the 1st person
> viewpoint is multiversal.

Yes, but with comp that's the result of the first person
indeterminacies. The first person cannot be an entity making those many
branches interact in a non-Turing-emulable way. That seems enough to me
to make the physical supervenience thesis devoid of any meaning.

You might try to refute the 323 principle as clearly as possible by
using a *physical* multiverse. I think you will see for yourself that
you have to endow some primitive Matter with some non-Turing-emulable
processes at some point.

Bruno


http://iridia.ulb.ac.be/~marchal/

Russell Standish

unread,
Dec 21, 2011, 4:58:35 AM12/21/11
to everyth...@googlegroups.com
On Tue, Dec 20, 2011 at 09:06:10PM +0100, Bruno Marchal wrote:
>
> On 20 Dec 2011, at 01:03, Russell Standish wrote:
>
> >Even though the parts may be distributed across multiple branches of
> >the MV, and have different counterfactual histories?
>
> ?
>
> What is a branch of a W in a MW if you allow interaction between worlds?

Who said anything about interaction between the worlds? I assume that
by interaction you mean what physicists usually mean (interference), or
information being passed.

> Comp, be it digital or quantum, makes classical computation non
> interacting with parallel computation. Locally, if our brains were
> quantum computers this would be locally false, but not in a relevant
> way to contradict the MGA consequences, by the fact that if worlds
> interfere that much still does not violate Church thesis, and
> quantum computer are Turing emulable.

We're not discussing quantum computers here.

>
> Are you arguing that comp does not entail the principle "323"?

I don't believe so.

>
>
> >
> >Be careful of not including the conclusion in the definition of COMP.
> >
> >...snip...
> >
> >>>
> >>>If the emulation is by means of dovetailing, then I think not. A
> >>>dovetailer is not conscious.
> >>
> >>That is ambiguous. A dovetailer, like Robinson arithmetic (when
> >>proving all its theorems) is not conscious, per se. But it
> >>instantiates consciousness, indeed all possible machine's
> >>consciousness.
> >>
> >
> >Great. But this is more than mere terminological wrangling. To an
> >observer of the dovetailer, no conscious processes are visible.
>
> No conscious processes are ever visible.
>
>
>
>
> >To do
> >that would require a means of determining whether a computation is
> >conscious or not, something we don't have, and probably never will.
>
> That's always the case. You judge by chatting with person or by
> observing them and recognizing yourself.
> That's how I became open to the idea that all löbian entities
> (machine or not machine) are conscious.
>
>
>
> >
> >It is another manifestation of there being no God's viewpoint in a
> >Multiverse.
>
> I am not sure about that. Many would say that the very idea of
> "multiverse" is an attempt of describing "God's viewpoint".
> With mechanism we have universal dreams and universal dreamers,
> sharing, or not, dreams and subroutines.
>

It cannot be a God's point of view. The Multiverse is too simple to
admit an observer...

>
> >
> >I feel this invalidates applying Maudlin's argument to a
> >dovetailer.
>
> Let me introduce a new definition. I define a closed generalized
> brain (CGB) the portion of reality that you need to emulate a dream.

This may require the input of random numbers at the synapses. It seems
to me that dreams are the result of the brain filtering and amplifying
random thermal noise. It is just a theory, of course, but it would mean
that the CGB is a Multiverse.

> Many neurophysiologists would be that such a portion of reality is
> in the skull, and that the process is Turing emulable (and I think
> it your position).

Sure, but the contents of the skull form an object that extends over
multiple branches of the Multiverse.

> Comp implies that such CGB exists. That CGM can
> be emulated by a turing machine, why would it matter the emulation
> is done by dovetailing from the first person point of view?

Because in the 1st person POV, the "inert" parts are not inert. Only
in the 3rd person dovetailed POV. And, I find it hard to think of the
dovetailer as conscious.

>
> >But put another way, perhaps it means that consciousness
> >cannot supervene on a physical implementation of a dovetailer.
> >Which
> >is probably what you're trying to get to.
>
> I just reason from the assumption. Consciousness would supervene on
> the execution of a physical universal dovetailer. Why wouldn't it?

Because the dovetailer is an incredibly simple program. It hardly
seems conscious. If I ask it a question, it is mute, so the Turing
test hardly helps.
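
To show just how simple: a skeleton of a universal dovetailer is only a
few lines of bookkeeping (Python; program(i) and step(s) are
placeholders for whatever enumeration of programs and single-step
interpreter one prefers - the point is only the shape of the loop):

    from itertools import count

    def universal_dovetail(program, step):
        """In phase n, start the n-th program and advance every program
        started so far by one more step. Runs forever."""
        states = []
        for n in count():                 # phase n = 0, 1, 2, ...
            states.append(program(n))     # start one more program
            for i in range(len(states)):  # advance all started programs
                states[i] = step(states[i])

There is nothing in there to chat with.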

...snip...


> >
> >It doesn't eliminate the supervenience of the consciousness on the
> >simulated physics within the UD. It seems this is in accordance with
> >Brent's comments too.
> >
> >Presumably you would argue that this is simulated matter, not
> >primitive matter. Sure.
>
> You lost me here. The "primitive matter" in comp is not *a priori*
> simulable, it appears below our sharable substitution level.
>

It may or may not be simulable a priori. Why would a materialist
assume that primitive matter is necessarily nonsimulable?

>
>
> >But what's to stop the primitive matter being
> >multiversal - whether it can be simulated or not is a little beside
> >the point.
>
> On the contrary it is crucial. It makes the difference between
> emulable in one reality (in our branch of the quantum multiverse in
> case we imagine a concrete one), which is equivalent with Turing
> emulable, or necessitating Non Turing emulable interactions or
> interferences with parallel realities.

I don't expect there will be interference between the realities. Why
does supervenience over multiple branches entail there must be
interactions between realities?

> The point is that if it is
> Turing emulable, then the MGA applies.

I don't see this.

> You have then to believe that


> a physical inactive piece has a physical activity relevant in a
> particular computation.

The physically "inactive" piece is only physically inactive in one
branch. If the supervenience is across multiple branches, then the
absurdum no longer follows.


...snip...

> If a dream can supervene on a closed generalized brain Turing
> emulation, then it has to supervene to its emulation in one of its
> classical instantiation, either in a concrete quasi-classical
> (normal) history (or, after MGA, in arithmetic, or in the UD*).

Why?

> And
> in that single reality emulation, MGA can be applied. If you give a
> role to physically inactive, by making them active in some other
> world you are forced to introduce a non Turing emulable *physical*
> component in matter playing a role in consciousness, where comp show
> that we get it for free below our substitution level.
>

Why?

...snip...

> >>
> >>The UD is not conscious, as a person, but once you add the
> >>supervenience thesis, it instantiates consciousness at each moment
> >>where it executes a conscious program (say Russell Standish's one,
> >>then Bruno's one, etc).
> >
> >Why does this depend on supervenience?
>
> I meant "physical supervenience". It is just introduced to get the
> contradiction.
> MGA is a reductio ad absurdum.

How does this work?

> >
> >Because Maudlin assumes a single universe physics,
>
> Where? It assumes only the Turing emulabilty.

It's the only way to get inactive parts, and so force the absurdum. The
assumption is not explicit in Maudlin's work, but it's there.

...snip...

>
> You might try to refute the 323 principle as clearly as possible by
> using a *physical* multiverse. I think you will see by yourself that
> you have to endow some primitive Matter with some non Turing
> emulable processes at some point.

I don't see the 323 principle as being relevant here - perhaps you can
explain further why it's needed.

Craig Weinberg

unread,
Dec 21, 2011, 6:36:49 AM12/21/11
to Everything List
That's a false equivalence. My example was replacing part of France
with part of India. By oversimplifying that to mean replacing the
citizens only, and then straw-manning my argument by equating
'replacement' with the population's natural growth and mortality, you
come to the erroneous conclusion that a brain transplant is no
different from a kidney transplant. Nobody has ever survived a brain
transplant. As far as we know, a living brain is completely unlike
anything in the universe. I'm not saying the brain is magic, but since
we have no way of detecting from the outside the degree to which its
'functionality' is the same, the argument that you can do a replacement
of the brain based on functional equivalence is begging the question.

David Nyman

unread,
Dec 21, 2011, 8:06:45 AM12/21/11
to everyth...@googlegroups.com
On 21 December 2011 09:58, Russell Standish <li...@hpcoders.com.au> wrote:

>> >Because Maudlin assumes a single universe physics,
>>
>> Where? It assumes only the Turing emulabilty.
>
> Its the only way to get inactive parts, and so force the absurdum. The
> assumption is not explicit in Maudlin's work, but its there.

Russell, isn't it central to the multiverse view that distinct,
univocal observer experiences supervene on each branch? In which
case, isn't it correct to apply Maudlin's argument to each branch
separately? If so, to oppose the conclusion by appealing to all the
branches simultaneously might seem like wanting to have your cake and
eat it too.

David
