OK, thanks.
Well, yes, it's true that I hadn't heard the term except in places like this...
So anyway, the argument that the exact arrangement of the substrate isn't necessary for consciousness means that the same experiences could be generated by different arrangements of a given substrate, or perhaps completely different substrates (which may just be a statement of computational universality, or something similar?)
That in itself doesn't make the substrate unnecessary to consciousness, surely? It's just saying that there isn't a one-to-one mapping, and (for example) silicon or carbon brains might in theory generate the same states. So a given conscious experience doesn't supervene on the exact same configuration of the substrate... So maybe it doesn't always matter to my experience if my brain has, say, 100 or 101 ions in a particular synapse (or 1000000 and 1000001 - I have no idea what figures are realistic).
I'm not sure how you get from that to the non-primariness of matter, however.
That's where the MGA comes in. It purports to show that one of the possible substrates is inert matter, which seems so absurd that we should conclude the matter plays no part whatsoever.
For me the MGA was illuminating, but an even more mind-bending demonstration of the supervenience idea is that one can fashion a Turing-complete computer using an array of simple mechanical switches, through which ping-pong balls would flow (see http://helge.ru-stad.name/ppb_comp/ppbcne.htm for example). As such, by Church-Turing, you could take whatever computational framework one might hypothesize necessary to support consciousness (e.g. a simulation of a human brain), and run it on this ping-pong computer (albeit on a time-scale many orders of magnitude slower than today's, or even yesterday's, devices). The physical scale of the ping-pong computer necessary to run a sophisticated sim like that would be pretty massive too, to allow for the large amount of 'tape' (Turing) or memory required... but that's merely a pragmatic point. In principle, it would be computing exactly the same program as the supercomputer you'd probably commission to run that sim... and therefore, consciousness would supervene on your ping-pong ball computer.
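(To make the universality point concrete, here is a toy illustration of my own -- it is not from the thread or from any paper, and all names are made up. A single NAND "switch", the role each ping-pong gate plays, is composed into a one-bit full adder and checked against ordinary arithmetic; nothing in the result depends on what the switch is physically made of.)

def nand(a: int, b: int) -> int:
    """One primitive 'switch': outputs 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

def full_adder(a, b, carry_in):
    """Add three bits using only the nand() switch; return (sum, carry_out)."""
    partial = xor(a, b)
    total = xor(partial, carry_in)
    carry_out = or_(and_(a, b), and_(partial, carry_in))
    return total, carry_out

# Exhaustive check against ordinary integer addition.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, cout = full_adder(a, b, c)
            assert 2 * cout + s == a + b + c
print("full adder built from a single switching primitive: OK")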
Or you can throw out the assumption that conscious thought is independent of an external world. This assumption comes easily to Platonists because they think Platonia exists independently of any world; but I find it very suspicious.
On 23 March 2015 at 16:09, meekerdb <meek...@verizon.net> wrote:
That's where the MGA comes in. It purports to show that one of the possible substrates is inert matter, which seems so absurd that we should conclude the matter plays no part whatsoever.
That sounds like Maudlin's Olympia argument....?
So far I get that different substrates can create the same computational states (by which I assume we mean the contents of registers and memory?) But how does the MGA get from showing that to showing that inert matter can be a possible substrate? (ISTM that a projected graph is not inert, if that's the argument.)
Yes, as I understand it that's the argument. It's consistent with Platonism. A computer program's execution written out on paper is just as much a calculation as a lot of transistors switching.
My caveat is that neither of them is conscious in THIS world because being conscious requires being conscious OF something. An isolated, pure consciousness is an oxymoron. Consciousness only exists as part of thoughts and thoughts only have meaning by reference to an external world and potential action in that world.
(if you are concerned that /some/ notion of time is essential, then it needs only that time be encoded in the states in some way. No external time parameter is needed. See Julian Barbour's book /The End of Time/)
Right. It's only order that is needed. But if this view is to be taken as fundamental then it requires that there be some overlap between states in order to define the sequence. Otherwise there's no inherent way to order them without assuming some known dynamics, 2nd law, etc.
Or contrariwise, can't all the inputs to the consciousness be provided as though it was in the world? (as for a brain in a vat, for example. I mean hypothetically, and to simplify the argument, not as a general model of consciousness.)
Brent
2015-03-24 1:57 GMT+01:00 meekerdb <meek...@verizon.net>:
On 3/23/2015 5:44 PM, LizR wrote:
On 24 March 2015 at 13:07, meekerdb <meek...@verizon.net> wrote:
Yes, as I understand it that's the argument. It's consistent with Platonism. A computer program's execution written out on paper is just as much a calculation as a lot of transistors switching.
So is the idea to show that a recording is just as conscious as the original calculation?
My caveat is that neither of them is conscious in THIS world because being conscious requires being conscious OF something. An isolated, pure consciousness is an oxymoron. Consciousness only exists as part of thoughts and thoughts only have meaning by reference to an external world and potential action in that world.
I am under the impression Bruno gets around that by potentially allowing the environment to be simulated as well. Or contrariwise, can't all the inputs to the consciousness be provided as though it was in the world? (as for a brain in a vat, for example. I mean hypothetically, and to simplify the argument, not as a general model of consciousness.)
Yes, he casually dismisses the objection by saying we'll just include the environment too. But that's my point that it's then no longer a new radical result. It's just saying that if you simulate a world it can include conscious beings who are conscious of that world. But IN THAT WORLD their substrate is not inert - even if it's inert in our world, e.g. consider the novel "Moby Dick" being simulated in a computer. To Ishmael and Ahab in the computer they'd be conscious and experiencing the hunt for the white whale. And, according to Platonists, they are as printed on the page too.
If the world is a computation, conscious parts of it are subprograms that can be isolated by definition...
Now, that when they run, for their consciousness to have meaning, they must be fed inputs that have meaning for the conscious subprogram is a tautology...
Also, the MGA *never* asserts that the consciousness simulated is conscious of *our* world
(as it is obvious it can't be, as it isn't fed inputs from our world)... it only assumes that you're running a program that is thought to be conscious (simulating a conscious being) and shows that if you accept that, and you accept the supervenience thesis, and so accept that it is conscious in virtue of running on bare matter, you have to accept that the same stream of consciousness supervenes on the projection + broken gate.
On 25 March 2015 at 00:11, "meekerdb" <meek...@verizon.net> wrote:
>> If the world is a computation, conscious parts of it are subprograms that can be isolated by definition...
> That's the point I disagree with.
If it's a program then you've no choice.
> When Bruno starts the comp argument by asking if you would say "Yes" to the doctor, it is implicit that the doctor is going to replace some part or all of your brain, BUT it's going to remain within the same environmental context.
Yes... But that context could be also simulated... In the end the only thing the conscious program can know, it knows it through an interface...
> I think the "consciousness subprogram" can run without the context, but I think it gets it's meaning, what it's about, from the context
The context is internal to the conscious subprogram as it is it by definition who gives meaning. The 'external' world is only inputs received from the interface of the subprogram, no more.
- and I think that context has to be very broad, including evolutionary history for example.
>> Now, that when they run, for their consciousness to have meaning, they must be fed inputs that have meaning for the conscious subprogram is a tautology...
>> Also, the MGA *never* asserts that the consciousness simulated is conscious of *our* world
> It's implied by his Alice discussion.
When rerunning the program with the recorded initial input, by hypothesis the second run must be as conscious as the first when the inputs came from the 'real' external world... The program itself can't tell as it receives exactly the same inputs... Not similar inputs but *exactly* the same. So either the second run is as conscious as the first or none are.
> If the computation were just some arbitrary program we would have no reason to think it instantiated consciousness.
I never said it's an arbitrary program, I said it's a program thought to instantiate a conscious moment... However you determine that it is a conscious moment in the first place is irrelevant to the argument.
> We only think that because it is a record of a conscious computation in our world.
>> (as it is obvious it can't be, as it isn't fed inputs from our world)... it only assumes that you're running a program that is thought to be conscious (simulating a conscious being) and shows that if you accept that, and you accept the supervenience thesis, and so accept that it is conscious in virtue of running on bare matter, you have to accept that the same stream of consciousness supervenes on the projection + broken gate.
> But I'm not accepting the supervenience thesis as applied to an isolated sequence of states. Without the context (which is implicit in the counterfactuals) the same sequence of computations could correspond to two different meanings, two different conscious thoughts - just as the same set of differential equations can model two different physical systems.
>
> I'm not sure how this plays into the UD because there are infinitely many threads of computation through the same state. The state cannot, by itself, instantiate a thought. A thought must require a long sequence of identical or similar states. But in the UD there are no counterfactuals, because every possibility occurs at some point and branches from the thread. At least that's how I understand it.
>
> Brent
>
On 25 March 2015 at 05:08, "Russell Standish" <li...@hpcoders.com.au> wrote:
>
> On Wed, Mar 25, 2015 at 12:25:04AM +0100, Quentin Anciaux wrote:
> > On 25 March 2015 at 00:11, "meekerdb" <meek...@verizon.net> wrote:
> >
> > When rerunning the program with the recorded initial input, by hypothesis
> > the second run must be as conscious as the first when the inputs came from
> > the 'real' external world... The program itself can't tell as it receives
> > exactly the same inputs... Not similar inputs but *exactly* the same. So
> > either the second run is as conscious as the first or none are.
>
> Or there is precisely one sequence of conscious observer moments no
> matter how many times it is rerun (or recorded and replayed, whatever).
>
> Cheers
Then in this case physical supervenience is false... The movie graph argument is showing just that: if you believe it's the physical activity token that generates consciousness, then you must accept that a consciousness supervenes on the movie plus broken gate. If you disbelieve physical supervenience from the start then the MGA is not needed.
Quentin
On 25 March 2015 at 07:27, "Quentin Anciaux" <allc...@gmail.com> wrote:
> On 25 March 2015 at 07:23, "meekerdb" <meek...@verizon.net> wrote:
> > How so? Supervenience doesn't forbid different substrates from producing the same supervening effect. In this case it would be two different instances of the physical process producing the same conscious thoughts.
> If it's different instances, both moments are conscious, not only one... How many times it is run matters because, under physical supervenience, it's the physical token that generates consciousness. So if you say that it doesn't matter how many times you run the conscious program with the correct inputs, then you reject physical supervenience.
Because there is only one conscious moment
Quentin Anciaux wrote:
If it's different instances, both moments are conscious, not only one... How many times it is run matters because, under physical supervenience, it's the physical token that generates consciousness. So if you say that it doesn't matter how many times you run the conscious program with the correct inputs, then you reject physical supervenience.
I do not think this follows. Consciousness supervenes on the brain states. It does not matter if these are instantiated in brain wetware or in an accurate record of these brain states on a film or in a computer memory. It is the states (or sequence of states) that make up the conscious experience. If the record is exact, then replaying it reproduces exactly the initial conscious experience (as Russell points out), not some other experience.
How does this undermine physical supervenience? The brain wetware, photographic film, and computer memory are all physical things that instantiate the appropriate states, and the conscious experience supervenes on these. The architecture of the computer that simulates consciousness does not matter as long as it accurately reproduces the appropriate brain states.
Bruce
2015-03-25 12:09 GMT+01:00 Bruce Kellett <bhke...@optusnet.com.au>:
If the record is exact, then replaying it reproduces exactly the initial conscious experience (as Russell points out), not some other experience.
Yes... that's what I said... replaying it N times under physical supervenience means you have N times the conscious moment supervening on the substrate *in real time* (exactly the same conscious moment), but it is instantiated N times, not only once... (When I say real time, it's not that the inner time of the conscious moment should be one-to-one with the external time on which that conscious moment is supervening, but that the conscious moment exists at the same time it is running, as Russell seems to say.)
Like, say, when I'm running an actual program (any one, not a "conscious" one, if such a thing exists) in a VM with recorded inputs from a previous run... every time I run it, it is running and instantiated (not just once)... Likewise for a "conscious" program: under physical supervenience, the conscious moment would be "existing" every time I run it, and not just once.
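(A toy sketch of the replay point -- purely my own illustration, with made-up names: a deterministic program whose inputs are recorded on a first run and then replayed. Because the replayed inputs are exactly the same, the program passes through exactly the same internal states, so from the inside the two runs are indistinguishable.)

import random

def run(inputs):
    """A toy deterministic 'program': fold inputs into a state, logging each state."""
    state, trace = 0, []
    for x in inputs:
        state = (state * 31 + x) % 10_000
        trace.append(state)
    return trace

# First run: inputs arrive from the 'external world' and are recorded.
recorded_inputs = [random.randrange(100) for _ in range(20)]
first_trace = run(recorded_inputs)

# Second run: the recorded inputs are replayed -- not similar, *exactly* the same.
second_trace = run(recorded_inputs)
assert first_trace == second_trace
print("the replayed run reproduces the original state sequence exactly")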
How does this undermine physical supervenience? The brain wetware, photographic film, and computer memory are all physical things that instantiate the appropriate states, and the conscious experience supervenes on these.
Multiple realisation does not undermine physical supervenience... What undermines it is that you're forced to accept (with the movie graph argument) that the consciousness is supervening on the movie + broken gate... which is absurd, and the conclusion is either that physical supervenience is false or computationalism is...
Quentin
I agree with that... I'm just saying that if you say, under physical supervenience, that running the conscious moment N times does not instantiate it N times, then you simply reject physical supervenience...
Under physical supervenience, stopping all the computers stops the conscious moment.
--
Stathis Papaioannou
Bruno Marchal wrote:
On 25 Mar 2015, at 12:25, Quentin Anciaux wrote:
Multiple realisation does not undermine physical supervenience... What undermines it is that you're forced to accept (with the movie graph argument) that the consciousness is supervening on the movie + broken gate... which is absurd, and the conclusion is either that physical supervenience is false or computationalism is...
Good summary. If you accept physical supervenience, you need to accept that non-active parts of the brain have an active part in the brain, basically.
It makes clear that it is not the material brain or material computer which does the thinking, but the abstract person run by any sufficiently robust programs, with robustness defined relative to its most plausible computations above and below the substitution level.
I think that all the MGA establishes is that if the film taken of the physical states of the brain is a good copy, then consciousness can supervene on that copy as well as on the original.
Let me try to summarize the argument as I see it. We are conscious and we have brains that seem to be connected with the conscious state, such that a reasonable first model is that consciousness supervenes on the physical brain -- we alter the brain, we affect the conscious state, and the conscious state, being deterministic, reciprocally affects the brain. (Changed thoughts are correlated with changed brain states.)
The observation is then made that we could, quite probably, simulate the brain state to any desired level in a computer (universal Turing machine). The question is: does consciousness supervene on the physical state, or on the abstract calculational state represented by the computer?
Given that the computer simulation has the same conscious state as the original brain, it follows that copies of the conscious state can be made. In so far as these are accurate copies of the original physical state, they are all the same conscious moments -- we only create different consciousnesses when the inputs differ between copies -- and then the states are no longer identical.
None of this argues against consciousness supervening on the physical rather than on an abstraction in Platonia. The MGA, as I understand it, was designed to undermine this conclusion. The movie image projected on the original neural plate recreates the original conscious state. But we can degrade the neural plate. As long as we project the same movie copy, the conscious state is unchanged. It is argued that this is absurd.
As far as I can tell, such an argument hinges on the notion of counterfactual equivalence: the original movie and the degraded plate are not counterfactually equivalent.
I simply say, so what! Counterfactual equivalence does not have any independent justification, and it is highly unlikely to be sensible, even in the context of computationalism. Basically, because the simulation of any given conscious state can be carried out on any computer -- whatever the architecture, physical construction, or programming language. As long as the original state is accurately simulated, the conscious state will be the same. But these different instances of the calculation are generally not counterfactually equivalent, nor need they be -- they only have to simulate the original state to the required degree of accuracy -- they may differ to any degree whatsoever in their calculated states before and after the target conscious moment.
This comes back to my original question: since all possible programs are run by the dovetailer, how do we ensure that conscious beings see an ordered and predictable world? Only a set of measure zero among all possible programs would give that result.
Bruce
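(For concreteness, a minimal toy dovetailer -- my own sketch, not Bruno's UD itself: each "program" is stood in for by an infinite output stream, and at stage k the dovetailer starts program k and advances every already-started program by one step. Every step of every program is eventually reached, and no single program ever has to halt, which is the sense in which "all possible programs are run".)

from itertools import count

def program(n):
    """Toy stand-in for the n-th program: an infinite stream of outputs."""
    for step in count():
        yield (n, step, (n * step) % 7)   # arbitrary 'output', just for illustration

def dovetail(stages):
    """Stage k: start program k, then advance every started program by one step."""
    running, log = [], []
    for k in range(stages):
        running.append(program(k))
        for proc in running:
            log.append(next(proc))
    return log

trace = dovetail(5)
assert (0, 0, 0) in trace and (2, 1, 2) in trace   # early steps of early programs appear
print(trace[:10])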
Quentin Anciaux wrote:
2015-03-26 8:05 GMT+01:00 Bruce Kellett <bhke...@optusnet.com.au>:
This comes back to my original question: since all possible programs are run by the dovetailer, how do we ensure that conscious beings see an ordered and predictable world? Only a set of measure zero among all possible programs would give that result.
Yes, it seems to me, we should see white noise, but maybe a selection attribute must be in play... like an anthropic argument.
Anthropic arguments are not going to work with computationalism because there is no basis on which you can assume underlying deterministic physical laws.
Bruce
Apologies"Movie Graph Argument" - from Bruno's 2004 paper I believe.
I also don't know whether it makes more or less sense for a recording to be conscious than a computation.
However I have a feeling Bruno addressed this when he was explaining the whole thing to me, some time ago - it doesn't matter whether the recording is conscious or not, it's just one of the infinite number of possible computations that contribute to generating that moment of consciousness.
Perhaps! Although it seems to me that is assuming comp, so maybe I didn't get that right.
On 24 March 2015 at 16:17, Bruce Kellett <bhke...@optusnet.com.au> wrote:
Russell Standish wrote:
> Broadly, the idea is to use the notion that movement is relative. If a machine is moving through a fixed sequence of states, we can equivalently set things up so the machine is inert, but the observer moves in such a way that the appearance is unchanged. The absurdity is that this implies consciousness depends on the motion of the observer.
No, it doesn't imply any such thing. The motion of the observer, or the rate of change of the sequence of states, is irrelevant to consciousness. The only relevant thing is the states themselves -- the rate at which they are observed (or even if they are static) does not matter.
Russell Standish wrote:
> Then clearly, you have no problem with the concept of a conscious recording.
> In order for the MGA to go through, conscious recordings need to be considered absurd. I personally don't have an opinion either way, which is why I consider that to be a rather serious flaw of the MGA.
If you take the block universe model seriously then we are nothing more than conscious recordings!
I don't know what MGA stands for, or what it means, so I can't comment on that.
Bruce
On 3/23/2015 3:48 PM, LizR wrote:
On 23 March 2015 at 16:09, meekerdb <meek...@verizon.net> wrote:
That's where the MGA comes in. It purports to show that one of the possible substrates is inert matter, which seems so absurd that we should conclude the matter plays no part whatsoever.
That sounds like Maudlin's Olympia argument....?
It is essentially the same. But I think Maudlin took the other side of the reductio and concluded that computationalism must be incomplete.
Brent
2015-03-26 13:02 GMT+01:00 Bruce Kellett <bhke...@optusnet.com.au>:
Quentin Anciaux wrote:
> It seems to me it works relatively... consciousness like ours can only experience worlds ordered like ours... even if almost all dreams/worlds produced by mathematics are not like that and do not allow consciousness like ours, as you can only experience worlds like ours, it's no magic that you do... like with Quantum Immortality, you cannot experience being dead, so no wonder you find yourself alive, even if in almost all worlds you're dead (or not existing at all).
But we do not need the degree of order that we observe. We could survive perfectly well with a reasonable number of miracles -- laws that don't quite always work. And there are vastly more possible worlds of that sort than those that are strictly deterministic. The measure problem gets you every time.
Well, we don't know that we could survive in such a world... but even so, if MWI is correct, most instances of me go every time into such worlds... and some of us don't. Why wonder that it's a miracle, when it's a given that there will always be a me in a non-magic world? I wonder why I'm not in a magic world... because I'm not.
Quentin
Agreed. But the question is more between: are we fundamentally mammals living on Earth, or are we universal numbers living in arithmetic, deluded by oracles or other universal numbers?
Bruno Marchal wrote:
> On 25 Mar 2015, at 16:35, Stathis Papaioannou wrote:
>>
>> If my mind is being run on two separate computers, I can't know which
>> one of the two, and I can't say that my last remembered moment was run
>> on one or other or my next anticipated moment will be run on one or
>> other. If one computer stops it makes no difference to me and if a
>> third computer running my mind comes online it makes no difference to
>> me. So effectively there is only one conscious moment. Under physical
>> supervenience, stopping all the computers stops the conscious moment.
>
> I am OK. I think Quentin is arguing in the reductio ad absurdum part.
>
> In a sense both Russell is right (there is only one 1p-experience) and Quentin is right: we can attribute consciousness to each running (but then, if we attribute it to the physical activity token, we get the absurd conclusion: playing records and real-time consciousness supervene on a static film, etc.).
One problem is that this is an invalid "argument from incredulity". The fact that you find this conclusion absurd is not an argument against the conclusion: it is merely a statement about how you feel about the conclusion -- which could be right or wrong, and in either case does not depend on how you feel about it.
Where is the contradiction?
PGC wrote:
Why or how is anybody arguing that the problem is generated or solved by "how somebody feels about it"?
It's via contradiction/standard reductio: assume the conclusion false and its negation to be true, and from this we derive a contradiction. If the latter is the case, the conclusion must be true.
Only two things are required: the law of excluded middle, and that if a statement implies something false, it must be false. PGC
Platonist Guitar Cowboy wrote:
Yes, physical supervenience is incompatible with computationalism.
On Fri, Mar 27, 2015 at 1:10 AM, Bruce Kellett <bhke...@optusnet.com.au> wrote:
Where is the contradiction?
Of what? MGA? I just described the mechanism, far from "just feelings".
I assumed you had read at least a paper: incompatibility of physical supervenience with comp. PGC
But it remains to be proved that physical supervenience is false and comp is true.
PGC - I think you may have skimmed over too much for me to grasp what you're saying. But maybe not. So... does the contradiction arise because you assume to start with that consciousness is created by computation, then show that it would also (assuming physical supervenience) arise from something that isn't computation?
I'm still not sure where the dreams come in, however. (Or the zombies...)
On the subject of counterfactual correctness, isn't that the point of Olympia and Klara? My problem with counterfactual correctness is (probably the same as Maudlin's?) -- how does the system know it's counterfactually correct if it doesn't actually pass through any of the "what-if" states?
To put it another way, when you have a recording of the conscious computational states being replayed, what difference could be made by the presence (or absence) of all the extra bits that would deal with counterfactual correctness if a different computation was being replayed, but happen in this case not to be used? I can't see how this could make any physical difference to the states being replayed (unless counterfactual correctness introduces some nonphysical magic into the system?)
On Fri, Mar 27, 2015 at 3:16 AM, LizR <liz...@gmail.com> wrote:
Bruno will kick my butt for vulgarizing his thesis in this improvisatory, overly short, imprecise manner. I suspect you're still assuming a physical universe without being aware of it.
"The system" is what here? "It" referring to what here? Would you tend to interpret these as physical or comp objects?
Remember that comp supervenience requires physics to become part of machine psychology/theology; thus every explanatory potency of a physical universe is left behind. The association is of some sensation [my joy in space-time (x,t)] to a [type] of relative computational state.
A machine from which we remove some redundant parts, resulting in a finite set of states or executions, loses counterfactual correctness:
The movie is not conscious. The universal machine viewing it via types, not tokens, of possible activities keeps CC intact, with consciousness supervening on potential activities, and not some brittle, particular branch of the same.
And yes, we can cite all manner of quantum weirdness and state that consciousness supervenes on physical processes that are not actualized. This is reasonable, since measurements depending on potential observations that are non-actualized depend on CC. But here, Bruno IIRC pointed out that this would be a case of tokens rather than types. In a short, "Bruno will definitely kill me for simplifying and shortening as much as I have" sense: consciousness relative to a computational state of a universal machine supervenes on the set of possible accessible extensions of these states, distributed over the entirety of the UD. PGC
On 3/26/2015 7:16 PM, LizR wrote:
On the subject of counterfactual correctness -- how does the system know it's counterfactually correct if it doesn't actually pass through any of the "what-if" states?
I see two possible answers. First, in a quantum world there is a superposition of all those "counterfactual" states, so they are really present, but only observable as different relative states. Of course this already invokes QM and physics, rather than deriving them. But maybe it can be shown that the infinite threads of the UD serve to test all the counterfactual states.
Or, secondly, although there is no physical difference in the sequence of states in the replaying, consciousness is not physical and so could be absent. This doesn't require that consciousness be magic. If it is the abstract thing called "computation" then in the abstract it needs to be counterfactually correct to count as computation.
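(To make the counterfactual point concrete, a toy contrast of my own, with invented names: a "genuine" machine that branches on every input versus a recording that merely plays back one particular run. On the input history that actually occurred their outputs are identical; they come apart only on inputs that never happen -- which is exactly where counterfactual correctness lives, and why nothing physical distinguishes them during the replay.)

def genuine_machine(inputs):
    """Branches on every input: it also handles the 'what-if' cases."""
    out = []
    for x in inputs:
        if x >= 0:
            out.append(x * 2)
        else:
            out.append(-x)          # the counterfactual branch, never used below
    return out

def make_recording(inputs):
    """A replay device: just plays back the outputs of one particular run."""
    recorded = genuine_machine(inputs)
    def replay(_ignored_inputs):
        return list(recorded)
    return replay

history = [3, 1, 4, 1, 5]                   # the inputs that actually occurred
replay = make_recording(history)

# On the actual history, machine and recording are indistinguishable:
assert genuine_machine(history) == replay(history)
# They differ only on an input that never happens:
assert genuine_machine([-2]) != replay([-2])
print("identical on the actual run; different only counterfactually")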