Brent
The drawback is that any physical system (which could be mapped onto
any information or any computation) would be conscious. This is only a
drawback if you believe, I guess as a matter of faith, that it is
false.
--
Stathis Papaioannou
I think "meaning" ultimately must be grounded in action. That's why
it's hard to see where the meaning lies in a computation, something that
is just the manipulation of strings. People tend to say the meaning is
in the interpretation, noting that the same string of 1s and 0s can have
different interpretations. But what constitutes interpretation? I
think it is interaction with the world. If you say, "What's a cat?"
and I point and say, "That." then I've interpreted "cat" (perhaps
wrongly if I point to a dog).
Brent
A computation is a sequence of numbers (or of strings, or of
combinators, etc.) resulting from an interpretation. For such an
interpretation you don't need a "world", only an "interpreter" that
is a universal system, like elementary arithmetic for example. If you
invoke a world you will run into the usual "physical supervenience"
trouble. If you abstract from the interpreter you run into the
confusion between a computation and a description of a computation. It
is useful to fix the universal system once and for all. A computation
can then be defined by a sequence of numbers, but there is an implicit
universal system behind it. The concept of information is probably a
little too quantitative and static in this setting, and it plays a
bigger role in the notion of the content of specific conscious
experiences.
The key notion needed to define "computation" is the notion of
universal system or machine.
Bruno
I would say that the drawback is that consciousness is not
information, although the idea that we can be conscious *of* something
is obviously related to information management.
The UDA is supposed to show that, having said yes to the doctor, we
have to define the notion of physical system from the notion of
computation. Only with such a definition in hand can we see whether
any physical system implements any computation, which I doubt (except
in some trivial senses).
Also, consciousness is a first person attribute (indeed the
paradigmatic first person attribute). As such I can associate my
consciousness only with the infinity of computations, in arithmetic,
going through my state. The physical has to emerge from the statistical
interference among all the computations going through my (current)
states that are indiscernible from my point of view.
Why such interference takes the form of wave interference is still a
(technical) open problem.
Bruno
No, I would agree that the robot is conscious. For the simulated world
I think the answer is a little more complicated. Of course the
simulated robot is conscious relative to the simulated world, but as
Stathis pointed out, the fact that the program is simulating a robot
and a cat is a consequence of our mapping of the program onto our
world. There are many possible mappings, so the program might also be
a simulation of this email exchange in our world. All it takes is a
different mapping. So I would say the simulated robot's consciousness
is relative to the
simulated world, which in turn takes its meaning from us (and might well
have other "meanings"). In general we rely on the programmer to tell us
the interpretation in terms of our world.
Brent
You put scare quotes around "interpreter". I don't see how arithmetic
is an interpreter - isn't it an interpretation (of Peano's axioms)? And
how does arithmetic avoid the problem of arbitrarily many mappings, as
raised by Stathis?
Brent
>>
>> A computation is a sequence of numbers (or of strings, or of
>> combinators, etc.) as resulting by an interpretation. For such an
>> interpretation, you don't need a "world", only an "interpreter" that
>> is a universal system, like elementary arithmetic for example.
>
> You put scare quotes around "interpreter".
Just because it is not a human interpreter, but a programming language
interpreter. I use the term in the computer science sense.
> I don't see how arithmetic
> is an interpreter - isn't it an interpretation (of Peano's axioms)?
Usually I use "Arithmetic" for the (usual) standard interpretation (in
the human sense) of arithmetic. By arithmetic I was thinking of a
formal system such as the formal system Robinson Arithmetic (or Peano
Arithmetic depending on the context).
It is not so easy to show that Robinson Arithmetic is a Turing
Universal interpreter, but it is standardly done in most good textbook
in mathematical logic(*). It is no more extraordinary that the Turing
universality of the SK combinators, or the universality of the
Conway's game of life, or the universality of any little universal
system.
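(For readers who want the precise statement: one standard form of the
result proved in the textbooks cited below, written here in my own
notation and restricted to total computable functions, is the
representability theorem for Robinson Arithmetic Q: for every total
computable f : N^k -> N there is a Sigma_1 formula phi_f(x1,...,xk,y)
such that f(n1,...,nk) = m implies Q |- phi_f(n1,...,nk,m), and
f(n1,...,nk) =/= m implies Q |- not phi_f(n1,...,nk,m). This is the key
lemma behind the claim that Q can serve as a universal "interpreter" in
the computer-science sense.)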
> And
> how does arithmetic avoid the problem of arbitrarily many mappings, as
> raised by Stathis?
Once you accept the computationalist hypothesis, not only is that
problem not avoided, but the problem of the existence of both physical
laws and consciousness is entirely reduced to it, or to the digital
(UD) version of that problem. The collection of all computations is a
well defined computational object, already existing or defined by a
tiny part of Arithmetical Truth, and not depending on the choice of
the initial basic formal system.
The mappings are well defined, though. The way Putnam, Mallah, Chalmers
and others put that problem just makes no sense with comp, given that
they postulated some primitively material or substantial universe,
which does not make any sense (as I have argued already). Then they
confuse a computation with a description of a computation. Sometimes
they also use the idea that real numbers actually occur in nature,
which just adds confusion. Now I usually don't insist on that, because
even if such mappings made sense, they would just add computational
histories to the universal dovetailing, or to Arithmetic, and this
does not change the measure problem. The only important fact here is
that with comp, digitalness makes the measure problem well defined: no
mapping is arbitrary: either there is a computation or there is no
computation.
For example, with numbers and succession (but without addition and
multiplication) there is no universal computation, even if there is a
sense in which all descriptions of computations are there. A counting
algorithm does not constitute a universal dovetailing. Now, numbers +
addition + multiplication gives universal computations and thus all
computations, with their typical super-redundancy, and the measure
problem makes sense. Ontologically we need no more.
Epistemologically we need *much* more: we need something so big that
even with the whole "Cantor Paradise" or the whole "Plato Heaven" at
our disposal we would not even be able to name what we need (and that
is how comp prevents first person reductionism or eliminativism, and
how it makes theology in need of a scientific endeavor, with science =
hypothetical axiomatics).
I agree with Kelly that we don't need a notion of causality, but we
need computations (Shannon information measures only a degree of
surprise, and consciousness is more general than being surprised, and
I agree with you that information is a static notion). But the
notion of computation needs the logical relations existing among
numbers, although other basic finite entities can be used in place
of numbers. In all cases, the computations exist through the logical
relations among those finite entities.
We could say that a state A has access to a state B if there is a
universal machine (a universal number relation) transforming A into B.
This works at the ontological level, or for the third person point of
view. But if A is a consciousness-related state, then to evaluate the
probability of personal access to B, you have to take into account
*all* computations going from A to B, and thus you have to take into
account the infinitely many universal number relations transforming A
into B. Most of them are indiscernible by "you" because they differ
below "your" substitution level.
(*)
- Richard L. Epstein and Walter A. Carnielli, Computability: Computable
Functions, Logic, and the Foundations of Mathematics, Wadsworth &
Brooks/Cole Mathematics Series, Pacific Grove, California, 1989.
- George Boolos, John Burgess and Richard Jeffrey, Computability and
Logic, Cambridge University Press, fourth edition, 2002.
Bruno
> The question was whether information was enough, or whether something
> else is needed for consciousness. I think that sequence is needed,
> which we experience as the passage of time. When you speak of
> computations "going from A to B" do you suppose that this provides the
> sequence? In other words are the states of consciousness necessarily
> computed in the same order as they are experienced or is the order
> something intrinsic to the information in the states (i.e. like
> Stathis'es observer moments which can be shuffled into any order without
> changing the experience they instantiate).
Say a machine is in two separate parts M1 and M2, and the information
on M1 in state A is written to a punchcard, walked over to M2, loaded,
and M2 goes into state B. Then what you are suggesting is that this
sequence could give rise to a few moments of consciousness, since A
and B are causally connected; whereas if M1 and M2 simply went into
the same respective states A and B at random, this would not give rise
to the same consciousness, since the states would not have the right
causal connection. Right?
But then you could come up with variations on this experiment where
the transfer of information doesn't happen in as straightforward a
manner. For example, what if the operator who walks over the punchcard
gets it mixed up in a filing cabinet full of all the possible
punchcards variations, and either (a) loads one of the cards into M2
because he gets a special vibe about it and it happens to be the right
one, or (b) loads all of the punchcards into M2 in turn so as to be
sure that the right one is among them? Would the machine be conscious
if the operator loads the right card knowingly, but not if he is just
lucky, and not if he is ignorant but systematic? If so, how could the
computation know about the psychological state of the operator?
--
Stathis Papaioannou
Maybe. But I'm questioning more than the lack of causal connection.
I'm questioning the idea that a static thing like a state can be
conscious. That consciousness goes through a set of states, each one
being an "instant", is an inference we make in analogy with how we would
write a program simulating a mind. I'm saying I suspect something
essential is missing when we "digitize" it in this way. Note that this
does not mean I'd say "No" to Bruno's doctor - because the doctor is
proposing to replace part of my brain with a mechanism that instantiates
a process - not just discrete states.
Brent
>
> On Apr 21, 11:31 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>> We could say that a state A access to a state B if there is a
>> universal machine (a universal number relation) transforming A into
>> B.
>> This works at the ontological level, or for the third person point of
>> view. But if A is a consciousness related state, then to evaluate the
>> probability of personal access to B, you have to take into account
>> *all* computations going from A to B, and thus you have to take into
>> account the infinitely many universal number relations transforming A
>> into B. Most of them are indiscernible by "you" because they differ
>> below "your" substitution level.
>
> So, going back to some of your other posts about "transmitting" a copy
> of a person from Brussels to Moscow. What is it that is transmitted?
> Information, right?
OK.
> So for that to be a plausible scenario we have to
> say that a person at a particular instant in time can be fully
> described by some set of data.
Not fully. I agree with Brent that you need an interpreter to make
that person manifest herself in front of you. A bit like with a CD:
you need a player to get the music. Now, any (immaterial, simple)
Turing universal system will do, so I take the simplest one, the one
that we learn at school: elementary arithmetic. (On some other planet
they learn the combinators at school, and in the long run it could be
better, but fundamentally it does not matter).
>
>
> It would seem to me that their conscious state at that instant must be
> recoverable from that set of data. The only question is, what
> conditions must be met for them to "experience" this state, which is
> completely described by the data set?
But from the first person perspective I need, and elementary
arithmetic provides, an infinity of universal histories going through
my current states. It is not just "information", it is information
relative to possible computations.
> I don't see any obvious reason
> why anything additional is needed. What does computation really add
> to this?
It adds the relative interpretation of that information. Information,
which you identify with some bit string, is just a number; it is just
an encoding of a person, not the person.
Consciousness is the state of mind of a person who believes in a
reality. This makes sense only relative to probable universal
histories.
>
>
> You say that computation is crucial for this "experience" to take
> place. But why would this be so? Why couldn't we just say that your
> various types of mathematical logic can describe various types of
> correlations, categories, patterns, and relationships between
> informational states, but don't actually contribute anything to
> conscious experience?
Remember I assume the computationalist hypothesis. This means I will
accept being encoded in an information string, but only under the
promise that it will be decoded relative to probable computational
histories I can bet on, having an idea of my current first person state.
>
>
> Conscious experience is with the information.
Conscious experience is more the content, or the interpretation of
that information, made by a person or by a universal machine.
If the doctor makes a copy of your brain, codes it into a bit string,
and then puts the bit string in the fridge, well, in that case you
will not survive, in our local probable history.
> Not with the
> computations that describe the relations between various informational
> states.
If you say yes to a doctor for a digital brain, you will ask for a
brain which functions relative to our probable computational
history. No?
>
>
>> But if A is a consciousness related state, then to evaluate
>> the probability of personal access to B, you have to take
>> into account *all* computations going from A to B
>
> I don't see how probability enters into it. A and B are both fully
> contained conscious states. Both will be realized, because both
> platonically exist as possible sets of information. State B may have
> a "memory" of State A. State A may have an "expectation" (or
> premonition) of State B. But that is the only link between the two.
The UD generates an infinity of computations going from A to B.
Probabilities, credibilities, plausibilities, provabilities will all
emerge unavoidably.
>
> Otherwise the exist independenty.
I don't see any sense in which the term "computational state" makes
sense independently of at least one computation. But from inside we
have to take into account the infinity of computations, including those
with the "dovetailing-on-the-reals" noisy background. (From inside we
cannot distinguish the many finite initial segments of those reals from
the reals themselves.)
>
>
> So Brian Greene had a good passage somewhat addressing this in his
> last book. He's actually talking about the block universe idea, but
> still applicable I think:
>
> "In this way of thinking, events, regardless of when they happen from
> any particular perspective, just are. They all exist. They eternally
> occupy their particular point in spacetime. There is no flow. If you
> were having a great time at the stroke of midnight on New Year's Eve,
> 1999, you still are, since that is just one immutable location in
> spacetime.
>
> The flowing sensation from one moment to the next arises from our
> conscious recognition of change in our thoughts, feelings, and
> perceptions. Each moment in spacetime - each time slice - is like one
> of the still frames in a film. It exists whether or not some projector
> light illuminates it. To the you who is in any such moment, it is the
> now, it is the moment you experience at that moment. And it always
> will be. Moreover, within each individual slice, your thoughts and
> memories are sufficiently rich to yield a sense that time has
> continuously flowed to that moment. This feeling, this sensation that
> time is flowing, doesn't require previous moments - previous frames -
> to be sequentially illuminated."
I totally agree with this picture. Brian Greene uses space-time,
where I use the numbers with their additive and multiplicative
structure, and my point is that, assuming comp, it has to work, and I
show how space-time and energy have to emerge from the inside view of a
tiny fragment of arithmetic. But both in Greene and with comp, the
information or the points make sense because they are structured: by
space-time in Greene, and by addition and multiplication in comp. A
computation is a mathematical object in Plato Heaven, or just in a
tiny part of the standard model of arithmetic.
>
>
> On your earlier post:
>
>> The physical has to emerge from the statistical
>> probability interference among all computations, going through my
>> (current) states that are indiscernible from my point of view.
>> Why such interference takes the form of wave interference is still a
>> (technical) open problem.
>
> In my view, I just happen to be inhabit a perceptual universe that is
> fairly orderly and follows laws of cause and effect.
You hope this! But with comp this is globally wrong, and only "locally
apparent". "You" (3-person) are distributed densely on the border of a
universal dovetailing. What you perceive is the mean over all possible
"continuations". My point is that it has to be so once we assume
comp, and that this is empirically verifiable/refutable.
I put "continuations" in quotes because it is not necessarily
related to the physical notion of the future. It is more a matter of
logically consistent extensions. They could develop on physical pasts,
and elsewhere.
> However, there
> are other conscious observers (including other versions of me) who
> inhabit perceptual universes that are much more chaotic and
> nonsensical.
>
> But everything that can be consciously experienced is experienced,
> because there exists information (platonically) that describes a mind
> (human, animal, or other) having that experience.
Yes, there is a world in which your computer will transform itself into
a green flying pig. The "scientific", but really everyday-life,
question is: what is the "probability" this will happen to "me" here
and now? If the probability were 99.9%, I would not find it worth even
beginning to write a post ....
Physics is the science of such prediction, and if comp is true, the
correct-by-definition predictions have to take into account all
histories and single out those which have measure near 1.
>
>
> I say that because it seems to me that this information could
> (theoretically) be produced by a computer simulation of such a mind,
> which would presumably be conscious.
Yes, because the computer will generate not just the states, but it
will relate them.
> So add platonism to that, and
> there you go!
We agree here. And comp makes arithmetical platonism sufficient. It
makes it highly undecidable whether there is anything more. From inside,
on the contrary, the bigness is not even measurable or nameable. That
follows from "simple" theoretical computer science.
Bruno
The experienced sequence will be the same, I think. I would even guess
that it will correspond to the sequence in most singular low-grained
computations going through those states (if our substitution level is
not too low...), but things get trickier with A, B, C very close, I
expect.
Remember that if the Mandelbrot set is creative (in the sense of Post),
or universal (in the sense of Turing), then all your 3-states of mind
(future, present, past, and elsewhere) are densely distributed on
its border. Subjective time is an internal construct, and with comp,
physical time is probably a first person plural construct (we share
our physical histories).
>>
>> I have still a residual doubt that a quantum computer makes sense
>> mathematically, but if that exists, then there exist a reversible
>> universal dovetailing.
>>
>>
>
> I don't understand that remark. Universal dovetailing is a completely
> abstract mathematical construct. It exists in Platonia. So how can
> the
> existence of a reversible (i.e. information preserving) UD depend on
> quantum computers?
Oh? It is just that I can use the quantum UD to provide an example.
But you are really correct, and if there is any reversible universal
machine, then I can build a reversible universal dovetailing. I could
use billiard-ball computers or Wang's non-erasing machines. The
difficulty is that I can execute it only from a point in an infinite
past.
I have the same difficulty with "running reversibly a program
computing all decimals of the square root of 2", or even just
counting. When would I start? I have to consider a non-well-founded
set of the type ... 6 5 4 3 2 1 0.
Bruno
On 21 Apr 2009, at 21:30, John Mikes wrote:
> Bruno,
> you made my day when you wrote:
> "SOMEHOW" - in:
> "...The machine has to be "runned" or "executed" relatively to a
> universal machine. You need the Peano or Robinson axiom to define
> such states and sequences of states.
> You can shuffled them if you want, and somehow the UD does shuffle
> them by its dovetailing procedure, but this will not change the
> arithmetical facts that those states belong or not too such or such
> computational histories...."
> *
> First: my vocablary sais about 'axiom' the reverse of how it is
> used, it is our artifact invented in order to facilitate the
> application of our theories IOW: explanations for the phenomena so
> poorly understood (if anyway). So it is MADE up for exactly the
> purpose what we evidence by it.
>
> Second: UD "shuffles 'them' by the ominous 'somehow', (no idea: how?)
By dovetailing. I say "somehow", to say literally "in some fashion
those who knows what the UD is can work by themselves as exercise,
because I am lazy right now and it will make the post too much longer,
also".
> but it has to be done for the result we invented as a 'must be'.
Absolutely. And it does it, all by itself, in the realm of numbers +
addition + multiplication.
>
> Third: the 'computational history' snapshots have to come together
> (I am not referring to the sequence, rather to the combination
> between 'earlier' and 'later' snapshots into a continuum from a
> discontinuum. That marvel bugs science for at least 250 years since
> chemical "thinking" started.
> A sequence of pictures is no history.
We agree on this. See my post to Kelly. From outside, the links are
given by universal (or not) programs. From inside, it is linked to the
most probable histories + interference between the indistinguishable
ones. QM without collapse confirms this, admittedly startling, view.
> *
> Then again: you wrote:
> "...The world you are observing is a sort of mean of all those
> computations, from your point of view. But the "running of the UD"
> is just a picturesque way to describe an infinite set of
> arithmetical relations..."
>
> I am not sure about the "mean" since we are not capable of even
> noticing 'all of them', not to evaluate the totality for a 'mean' -
> in my not arithmetic vocabulary: a median "meaning" of them all
> (nonsense).
By accepting the Church thesis, we accept Gödel's miracle. We can
define, from inside, the universal outside. We cannot compute the
correct inside mean, but it has to be partially computable for a
physical world to exist. So we can bet on reasonable approximations.
The "real" comp physics will be unusable in practice, but will explain
in theory (and thus prevent its elimination) the presence of the
subject.
> Your words may be a flowery (math that is) expression of 'viewing
> the totality in its entirety' which is just as impossible (for us,
> today) as to realize your 'infinite set of arithmetical relations'.
> If I leave out the 'arithmetical' (or substitute it by my
> meaningfulness) then we came together in 'viewing the totality' in
> our indiviual wording-ways.
> "Relations" is the punctum salience, it is a loose enough term to
> cover whatever is beyond our present comprehension.
No I really use "relation" in the usual math sense. For exemple a
binary relation on N can be seen as a subset of NXN. It is just an
association, a set of couples or triples, etc.
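(A throwaway illustration of that usage, in Python: the relation is
nothing but a set of ordered pairs, here the successor relation
restricted to 0..4.)

  succ = {(n, n + 1) for n in range(5)}   # {(0,1), (1,2), (2,3), (3,4), (4,5)}
  print((2, 3) in succ)   # True: 3 is the successor of 2
  print((3, 2) in succ)   # False: the relation is not symmetric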
> When relations look differently (maybe by just our observation from
> a different aspect?) we translate it into physical terms like
> change, movement, reaction, process or else, not realizing that WE
> look at it from different connotations.
You are far too quick here. But there is something like that.
> Use to that our coordinates (space and time) in the limited view we
> can muster (I call it: "model") and we arrived at causality of the
> conventional sciences (and common sense thinking as well).
That is what I hope for.
> Indeed it is our personal (mini)-solipsistic perceived reality of
> OUR world
> washed into some common pattern (partially!) by comp or math or else.
The advantage of the present approach is that it presupposes only
"yes doctor" and the Church thesis; all the rest emerges from, well, not
OUR (the human) prejudices/dreams, but OUR (the universal machine's)
prejudices/dreams.
> By the maze of such covering umbrella we believe in adjusted thinking.
> *
> Please do not conclude any denial from my part against the 'somehow'
> topics, the process-function-change manipulations (unknown, as I
> said),
> it is only reference to my ignorance directed in my agnosticism
> towards made-up explanations of any cultural era (and changing fast).
No problem,
Best,
Bruno
I was with you up to that last sentence. Forward or backward, we just
experience increasing entropy as increasing time, but that doesn't
warrant the conclusion that no process is required and that an "instant"
carries an arrow of time within itself.
> In fact, you could run the universe forwards and backwards as many
> times as you wanted like this. We would never notice anything. We
> would always percieve increasing entropy. For us, time would always
> move forward, never backwards.
>
> My point being, as always, that our experience of reality is always
> entirely dependent on our brain state. We can't know anything about
> the universe that is not represented in the information of our brain
> state at any given instant.
>
> Forwards or backwards, it's all just particles moving around, assuming
> various configurations, some of which give rise to consciousness.
>
> Again, assuming that there actually is an external physical world. We
> could, I think, apply the same idea to running a computer simulation
> of a human brain in reverse where instead of computing the next state,
> we compute the previous state.
That was my point in asking Bruno whether there is a universal
reversible computer (Turing machines in general aren't reversible).
Since a QM (without collapse) model of the universe is reversible,
absent a reversible computer either the universe could not be computed
or QM is wrong.
Brent
I think I agree with this, that consciousness is created by the
information associated with a brain state; however, I think two things
are missing:
The first is that I don't think there is enough information within a
single Planck time or other snapshot of the brain to constitute
consciousness. As you mention below, under the view of block time,
the brain and all other things are four-dimensional objects.
Therefore the total information composing a moment of consciousness may
be spread across some non-zero segment of time.
The second problem is immediately related to the first. Let's assume
that there is consciousness within a 10 second time period, so we make
a recording of someone's brain states across 10 seconds and store it
in some suitable binary file. The question is: Are there any logical
connections between successive states when stored in this file? I
would think not.
When the brain state is embedded in block time, the laws of physics
serve as a suitable interpreter which connects the information spread
out over four dimensions, but without computer software running the
stored brain state, there is no interpreter for the information when
it is just sitting on the disk. I think this is the reason some of us
feel a need to have information computed as opposed to it simply
existing.
Jason
>> Say a machine is in two separate parts M1 and M2, and the information
>> on M1 in state A is written to a punchcard, walked over to M2, loaded,
>> and M2 goes into state B. Then what you are suggesting is that this
>> sequence could give rise to a few moments of consciousness, since A
>> and B are causally connected; whereas if M1 and M2 simply went into
>> the same respective states A and B at random, this would not give rise
>> to the same consciousness, since the states would not have the right
>> causal connection. Right?
>>
>
> Maybe. But I'm questioning more than the lack of causal connection.
> I'm questioning the idea that a static thing like a state can be
> conscious. That consciousness goes through a set of states, each one
> being an "instant", is an inference we make in analogy with how we would
> write a program simulating a mind. I'm saying I suspect something
> essential is missing when we "digitize" it in this way. Note that this
> does not mean I'd say "No" to Bruno's doctor - because the doctor is
> proposing to replace part of my brain with a mechanism that instantiates
> a process - not just discrete states.
>
>
> Brent
What is needed for the series of states to qualify as a process? I
assume that a causal connection between the states, as in my example
above, would be enough, since it is what happens in normal brains and
computers. But what would you say about the examples I give below,
where the causal connection is disrupted in various ways: is there a
process or is there just an unfeeling sequence of states?
>> But then you could come up with variations on this experiment where
>> the transfer of information doesn't happen in as straightforward a
>> manner. For example, what if the operator who walks over the punchcard
>> gets it mixed up in a filing cabinet full of all the possible
>> punchcards variations, and either (a) loads one of the cards into M2
>> because he gets a special vibe about it and it happens to be the right
>> one, or (b) loads all of the punchcards into M2 in turn so as to be
>> sure that the right one is among them? Would the machine be conscious
>> if the operator loads the right card knowingly, but not if he is just
>> lucky, and not if he is ignorant but systematic? If so, how could the
>> computation know about the psychological state of the operator?
--
Stathis Papaioannou
I mainly agree. I add that once we assume comp, the laws of physics
themselves emerge from information processing, and that this
information processing is purely arithmetico-logical, or combinator-
logical. No need for substances. Consciousness, time, energy and space
are internal constructs.
Bruno
I think that the states, by themselves, cannot qualify. There has to be
something else, a rule of inference, a causal connection, that joins them
into a process.
> I
> assume that a causal connection between the states, as in my example
> above, would be enough, since it is what happens in normal brains and
> computers.
Yes, I certainly agree that it would be sufficient. But it may be more than is
necessary. The idea of physical causality isn't that well defined and it hardly
even shows up in fundamental physics except to mean no-action-at-a-distance.
> But what would you say about the examples I give below,
> where the causal connection is disrupted in various ways: is there a
> process or is there just an unfeeling sequence of states?
>
>>> But then you could come up with variations on this experiment where
>>> the transfer of information doesn't happen in as straightforward a
>>> manner. For example, what if the operator who walks over the punchcard
>>> gets it mixed up in a filing cabinet full of all the possible
>>> punchcards variations, and either (a) loads one of the cards into M2
>>> because he gets a special vibe about it and it happens to be the right
>>> one, or (b) loads all of the punchcards into M2 in turn so as to be
>>> sure that the right one is among them? Would the machine be conscious
>>> if the operator loads the right card knowingly, but not if he is just
>>> lucky, and not if he is ignorant but systematic? If so, how could the
>>> computation know about the psychological state of the operator?
So you are contemplating a process that consists of a sequence of states
and a rule that connects them, thus constituting a process: punch cards
(states) and a machine which physically implements some rule producing a
new punch card (state) from the previous one. And then you ask whether it
is still a process if, instead of the rule producing the next state, it is
produced in some other way. I'd say that so long as the rule is followed
(the operator loads the right card knowingly) it's the same process.
Otherwise it is not the same process (the operator selects the right card
by chance or by a different rule). If the process is a conscious one, is
the latter still conscious? I'd say that it is. If the selection is by
chance, it's an instance of a Boltzmann brain. But I don't worry about
Boltzmann brains; they're too improbable.
Brent
Boltzmann brains are improbable, but the example of the punchcards is
not. The operator could have two punchcards in his pocket, have a
conversation with someone on the way from M1 to M2 and end up
forgetting or almost forgetting which is the right one. That is, his
certainty of picking the right card could vary between 0.5 and 1.
Would you say that only if his certainty is 1 would the conscious
process be implemented, and not if it is, say, 0.9?
--
Stathis Papaioannou
I said it would be implementing *the same* consciousness if he was
following the rule. If not he might be implementing a different
consciousness by using a different rule. Of course if it were different
in only one "moment" that wouldn't really be much of a difference. I
don't think it depends on his certainty. Even more difficult we might
ask what it means for him to follow the rule - must he do it
*consciously*; in which case do we have to know whether his brain is
functioning according to the same rule?
You're asking a lot of questions, Stathis. :-) What do you think?
Brent
If by "state" you meant something like the state of one's brain or
perhaps including some local chunk of the universe, I'd agree with you.
But in general an "instant" of *consciousness* does not include any
memory. My conscious stream of thought only occasionally brings up
unique memories that one could trace back to my earlier thoughts. In
fact most of my thinking, in the sense of information processing, is
subconscious. I suppose one could expand the definition of "experience"
to include unconscious experience, although it's hard to say what that
would mean without assuming a physical reality to instantiate it.
> Again, it seems to me that the arithmetic logic that Bruno refers to
> just serves to "describe" the relations between datasets. It doesn't
> "produce" consciousness.
>
> If there are many "algorithms" that could be used to transition from
> state A to state B, it seems to me that all of them would produce the
> same conscious experience. If you end up at "state B", then it
> doesn't matter how you go there...your "memory" of the experience will
> be identical regardless of what path you took.
>
> And since all states (not just A and B) exist platonically, then every
> possible "process" can be "inferred" to connect them in every possible
> way. But I don't think this means that the processes are the source
> of consciousness. They are just descriptions of the ways that states
> could be connected.
And I assume (and I believe Bruno agreed) that all possible processes,
i.e. computations, also exist Platonically. This seems to me to
introduce a dense ordering on states: between every two states
there are countably many other states.
Brent
>
> On Apr 22, 12:24 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
>>> So for that to be a plausible scenario we have to
>>> say that a person at a particular instant in time can be fully
>>> described by some set of data.
>>
>> Not fully. I agree with Brent that you need an interpreter to make
>> that person manifest herself in front of you. A bit like a CD, you
>> will need a player to get the music.
>
> It seems to me that consciousness is the self-interpretation of
> information. David Chalmers has a good line: "Experience is
> information from the inside; physics is information from the outside."
First person experience and third person experiment. Glad to hear
Chalmers accept this at last.
In UDA, inside/outside are perfectly well defined in a pure third
person way: inside (first person) = memories annihilated and
reconstructed in classical teleportation, outside = the view outside
the teleporter. In AUDA I use the old classical definition by Plato in
the Theaetetus.
>
>
> I still don't see what an interpreter adds, except to satisfy the
> intuition that something is "happening" that "produces"
> consciousness. Which I think is an attempt to reintroduce "time".
I don't think so. The only "time" needed is the discrete order on the
natural numbers. An interpreter is needed to play the role of the
person who gives some content to the information handled through his
local "brain". (For this I need also addition and multiplication).
>
>
> But I don't see any advantage of this view over the idea that
> conscious states just "exist" as a type of platonic form (as Brent
> mentioned earlier).
The advantage is that we have the tools to derive physics in a way
which is precise enough for testing the comp hypothesis. Physics
becomes a branch of computer psychology, or theology.
> At any given instant that I'm awake, I'm
> conscious of SOMETHING.
To predict something, the difficulty is to relate that consciousness
to its computational histories. Physics is given by a measure of
probability on those comp histories.
> And I'm conscious of it by virtue of my
> mental state at that instant. In the materialist view, my mental
> state is just the state of the particles of my brain at that
> instant.
Which cannot be maintained with the comp hyp. Your consciousness is an
abstract type related to all computations going through your current
state.
>
>
> But I say that what this really means is that my mental state is just
> the information represented by the particles of my brain at that
> instant. And that if you transfer that information to a computer and
> run a simulation that updates that information appropriately, my
> consciousness will continue in that computer simulation, regardless of
> the hardware (digital computer, mechanical computer, massively
> parallel or single processor, etc) or algorithmic details of that
> computer simulation.
OK. But it is a very special form of information. Consciousness is
really the qualia associated with your belief in some reality. It is a
bet on self-consistency: it speeds up your reaction time relative to
your most probable histories.
>
>
> But, what is information? I think it has nothing to do with physical
> storage or instantiation. I think it has an existence separate from
> that. A platonic existence. And since the information that
> represents my brain exists platonically, then the information for
> every possible brain (including variations of my brain) should also
> exist platonically.
You make the same error as those who confuse a universal dovetailer
with a counting algorithm or the Babel library. The sequences
0, 1, 2, 3, 4, ..., or 0 1 10 11 100 101 110 111 ..., go through all
descriptions of all information, but they lack the infinitely subtle
redundancy contained in the space of all computations (the universal
dovetailing). You work in (N, succ); you lack addition and
multiplication, needed for having a notion of interpreter or universal
machine, the key entity capable of giving content to its information
structure. This is needed to have a coherent internal interpretation
of computerland.
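(To make the contrast concrete, here is the counting enumeration Bruno
is dismissing, sketched in Python: every finite bit string -- every
*description* -- eventually appears, but nothing is executed and
nothing relates one string to another. Compare with the dovetailing
sketch earlier in the thread. The function name and the max_len cut-off
are mine, the latter only so the example terminates.)

  from itertools import product

  def all_bitstrings(max_len):
      # length-lexicographic listing of all bit strings up to max_len
      out = ['']
      for n in range(1, max_len + 1):
          out.extend(''.join(bits) for bits in product('01', repeat=n))
      return out

  print(all_bitstrings(3))
  # ['', '0', '1', '00', '01', '10', '11', '000', '001', ..., '111']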
>>>
>>
>> Conscious experience is more the content, or the interpretation of
>> that information, made by a person or by a universal machine.
>> If the doctor makes a copy of your brain, and then codes it into a
>> bit
>> string, and then put the bit string in the fridge, in our probable
>> history, well in that case you will not survive, in our local
>> probable
>> history.
>
> Given the platonic nature of information, this isn't really a
> concern. In Platonia, you always have a "next moment". In fact, you
> experience all possible "next moments". The "no cul-de-sac" rule
> applies I think.
By definition, indeed, once we want to quantify the first person
indeterminacy.
"Next moment" makes sense only relative to a (universal) machine. It
is the "next step" relative to some computation, and thus to the
universal machine interpreting that "machine".
The cul-de-sac/no-cul-de-sac question depends on the point of view
adopted by the machine itself.
>
>
>
>> If you say yes to a doctor for a digital brain, you will ask for a
>> brain which functions relatively to our probable computational
>> history. No?
>
> I won't worry about it too much, as there is no doctor, only my
> perceptions of a doctor. Every possible outcome of the "brain
> replacement operation" that I can perceive, I will perceive.
Not in the relative way. You have to explain why you see apples
falling from a tree, and not any arbitrary information-theoretical data.
>
> Including outcomes that don't make any sense.
You have to explain why they are *rare*. If not, your theory does not
explain why you put water on the gas and not in the fridge when you
want a cup of coffee.
>
>
> Additionally, every possible outcome of the operation that the doctor
> can percieve, he will perceive. Including outcomes that don't make
> any sense.
>
> So it seems to me that a lot of your effort goes into explaining why
> we don't see strange "white rabbit" universes.
Indeed.
> Thus the talk of
> probabilities and measures. I'm willing to just say that all
> universes are experienced.
That is absolutely true. But we don't live in the absolute (except
perhaps with salvia :). We live in relative worlds/states. I
cannot go to my office by flying through the window. The probability
that I end up in a hospital is far greater than that of arriving in
one piece at my workplace.
> Strange ones, normal ones, good ones, bad
> ones, ones with unbreakable physical laws, ones with no obvious
> physical laws at all. It's all a matter of perception, not a matter
> of physical realization.
That is true, but we want to explain "the stable appearance of atoms
and galaxies", and what happens when we die.
>
>
>
>> Yes there is a world in which you computer will transform itself into
>> a green flying pig. The "scientific", but really everyday life
>> question, is, what is the "probability" this will happen to "me" here
>> and now.
>
> I'm not sure what it means to ask, "what is the probability that my
> computer will turn into a green pig". One of me will observe
> everything that can be observed in the next instant. How many things
> is that? I'm not sure. More than 10...ha! Setting aside physical
> limits, maybe infinitely many? Given that I might also get extra
> sensory capacity in that instant, or extra cognitive capacity, or
> whatever.
>
> So, of course all of that sounds somewhat crazy, but that's where you
> end up when you try to explain consciousness I think. Any explanation
> that doesn't involve eliminativism is going to be strange I think.
The comp theory explains why we cannot explain consciousness, nor
truth. But we can bet on computational states; the thought
experiments then show that physics is derivable from computer science/
number theory in terms of probabilities, and we can compare those
probabilities with the ones we extract from long observation of our
neighborhood. Comp is a concrete, testable theory, but we have to
derive the physics from it to test it. There is a bonus, because it
gives a complete arithmetical interpretation of an earlier type of
theory, like Plotinus' theology, which does not eliminate the person
the way modern materialism does; comp leads to a natural distinction
between what is true about us and what is provable by us.
>
>
> But, if you are willing to say that consciousness is an illusion, then
> you can just stick with materialism/physicalism and you're fine.
You are right. But consciousness is the only thing I have no doubt
about. The *only* undoubtable thing. The fixed point of the Cartesian
systematic doubting attitude. A theory which eliminates my first
person, or my consciousness, although irrefutable by me, is wrong, I
hope; and I hope it is wrong for you too. (Why would I send a post on
consciousness to a zombie?)
> In
> that case there's no need to invoke any of this more esoteric stuff
> like platonism. Right?
Right. Materialism is a trick based on a lie (consciousness, and thus
pain and suffering, are illusions) and an illusion (there is matter).
This is used to stop fundamental inquiry. It is not a coincidence that
authoritative theologies insist on materialism so much. Before
Darwinism God created man; after Darwinism God created matter.
Assuming comp, couple matter-man
Any content of consciousness can be an illusion. Consciousness itself
cannot be, because without consciousness there is no illusion at all.
Best,
Bruno
>> Boltzmann brains are improbable, but the example of the punchcards is
>> not. The operator could have two punchcards in his pocket, have a
>> conversation with someone on the way from M1 to M2 and end up
>> forgetting or almost forgetting which is the right one. That is, his
>> certainty of picking the right card could vary between 0.5 and 1.
>> Would you say that only if his certainty is 1 would the conscious
>> process be implemented, and not if it is, say, 0.9?
>>
>>
>
> I said it would be implementing *the same* consciousness if he was
> following the rule. If not he might be implementing a different
> consciousness by using a different rule. Of course if it were different
> in only one "moment" that wouldn't really be much of a difference. I
> don't think it depends on his certainty. Even more difficult we might
> ask what it means for him to follow the rule - must he do it
> *consciously*; in which case do we have to know whether his brain is
> functioning according to the same rule?
>
> You're asking a lot of questions, Stathis. :-) What do you think?
I don't think the rule matters, only the result, which could consist
of a series of disconnected states. The utility of a process is that
it reliably brings about the relevant states; but if they arose
randomly or by a different process that would be just as good. If not,
then you could have an apparently functionally identical machine which
has a different consciousness. One half of your brain might function
by a different process that gives the same neuronal outputs, and you
would have a feeling that something had radically changed, but your
mouth would seemingly of its own accord continue to declare that
everything is just the same. So, I agree with Kelly that the
consciousness consists in the information.
--
Stathis Papaioannou
> This implicitly assumes that you can dispense with the continuum and
> treat the process as a succession of discrete states. I question that.
So are you saying that, because we are conscious, that is evidence
that reality is at bottom continuous rather than discrete?
Do you think a computation would feel different from the inside
depending on whether it was done with pencil and paper, transistors or
vacuum tubes?
--
Stathis Papaioannou
If two processes always produce the same sequence they are the same
process in the abstract sense.
> If not,
> then you could have an apparently functionally identical machine which
> has a different consciousness. One half of your brain might function
> by a different process that gives the same neuronal outputs, and you
> would have a feeling that something had radically changed, but your
> mouth would seemingly of its own accord continue to declare that
> everything is just the same. So, I agree with Kelly that the
> consciousness consists in the information.
>
>
But is it the information in consciousness and is it discrete? If you
include the information that is in the brain, but not in consciousness,
I can buy the concept of relating states by similarity of content. Or
if you suppose a continuum of states that would provide a sequence. It
is only when you postulate discrete states containing only the contents
of instants of conscious thought, that I find difficulty.
Brent
> But is it the information in consciousness and is it discrete? If you
> include the information that is in the brain, but not in consciousness,
> I can buy the concept of relating states by similarity of content. Or
> if you suppose a continuum of states that would provide a sequence. It
> is only when you postulate discrete states containing only the contents
> of instants of conscious thought, that I find difficulty.
I'm not sure I understand. Are you saying that the information in most
physical processes, but not consciousness, can be discrete? I would
have said just the opposite: that even if it turns out that physics is
continuous and time is real, it would still be possible to chop up
consciousness into discrete parts (albeit of finite duration) and
there would still be continuity. In fact, I can't imagine how
consciousness could possibly be discontinuous if this was done, for
where would the information that tells you you've been chopped up
reside?
--
Stathis Papaioannou
> No, I don't think the medium makes a difference. But interpretation
> makes a difference. Most computations we do, on pencil and paper or
> transistors or neurons, have an interpretation in terms of our world.
> Kelly is supposing there is a "self-interpreting structure". I'm not sure
> what he means by this, but I imagine something like an elaborate
> simulation in which some parts of the computation simulate entities with
> values or purposes - on some mapping. But what about other mappings?
It's true that in the case of an ordinary computation in a brain or
digital computer the interpretation is fixed since the external world
with which it interacts is fixed. That's why brains and computers are
useful, after all. But if you take the system as a whole, there is no
a priori way to say what the interpretation of one part should be with
respect to another part. So we return to the position whereby a rock
could implement any finite state machine, if you only look at it the
right way.
--
Stathis Papaioannou
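(A bare-bones sketch of that mapping argument, in Python. The "rock
states" here are just labels I invented to stand for whatever distinct
physical microstates the rock passes through; the point is only that an
arbitrary lookup table is all it takes to read any finite state machine
run into them.)

  rock_states = ['t0', 't1', 't2', 't3']        # successive states of the rock
  fsm_run     = ['A', 'B', 'B', 'HALT']         # the FSM run we want to "find" in it

  mapping = dict(zip(rock_states, fsm_run))     # the arbitrary interpretation
  print([mapping[s] for s in rock_states])      # ['A', 'B', 'B', 'HALT']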
On Apr 24, 3:14 am, Jason Resch <jasonre...@gmail.com> wrote:
> Kelly, Your arguments are compelling and logical, you have put a lot
> of doubt in my mind about computationalism.
Excellent! It sounds like you are following the same path as I did on
all of this. So it makes sense to start with the idea of physicalism
and the idea that the mind is like a very complex computer, since this
explains third person observations of human behavior and ability very
well I think. BUT, then the question of first person subjective
consciousness arises. Where does that fit in with physicalism? So the
next step is to expand to physicalism + full computationalism, where
the computational activities of the brain also explain consciousness,
in addition to behavior and ability.
>
> On Apr 24, 2:41 am, Brent Meeker <meeke...@dslextreme.com> wrote:
>>>
>>> In the materialist view, my mental state is just the
>>> state of the particles of my brain at that instant.
>>
>> I think we need some definition of "state".
>
> Hmmm. Well, I think my view of the word is pretty much the dictionary
> definition. Though there are two different meanings in play here.
>
> The physical state:
>
> "the condition of matter with respect to structure, form,
> constitution, phase, or the like"
>
> And the mental state:
>
> "a particular condition of mind or feeling"
... and computational states.
Then assuming comp, you can attribute a mental state to a
computational state, and then you must attribute an infinity of
computational states to a mind state.
>
>
> Though ultimately I'm saying that there is no actual physical world
> that exists outside of and independent from our perceptions. You and
> I probably perceive a very similar world, but there other conscious
> observers who perceive very different worlds. But all worlds are
> virtual worlds that exist only inside the minds of conscious platonic
> observers. And I base this conclusion on the line of thought laid out
> in my previous posts.
This is close to the consequence of comp.
>
>
>
>> If we discretize your brain, say slice it into Planck
>> units of time as Jason suggested, now we need to
>> have something to connect one state to another.
>
> Why do we need to have something extra to connect one state to
> another? What does this add, exactly?
I would say this goes along with the very (mathematical) definition of
what a computational state is.
>
>
> I think that these instances of consciousness are like pieces from a
> picture puzzle. But not a jigsaw picture puzzle...instead let's say
> that each puzzle piece is perfectly square, and they combine to make
> the full picture.
>
> How do you know where each piece fits into the overall picture? By
> the contents of the image fragment that is on each puzzle piece.
>
> So each puzzle piece has, contained within it, the information that
> indicates it's position in the larger framework. The same is true of
> instances of consciousness.
Nice image. With comp, though, the same image admits an infinity of
pieces.
And thus there is a measure problem. Do we agree?
>
>
> Though, I think that this view does make a testable prediction. Which
> is: there will be no end to your experiences. There is no permanent
> first person death.
OK, but such first person experiences are excluded from scientific
thought. We can still talk about them scientifically. My point is that
comp also entails verifiable/refutable physical facts.
>
>
> Of course, many realities will be unpleasant enough that this isn't
> necessarily a good thing. All good things lie before you. But so do
> all bad things. Blerg.
Yet we have partial control; we can do things which change our
relative measure. It is useful when we want to drink coffee, to give an
example.
Bruno
Then you lose the measure problem, the physical laws, the partial and
relative control, the quantum nature of the computations, etc.
>
>
> So we seem to have two options: "computation + information" OR
> "information".
This is like replacing the universal dovetailing (with its redundancy,
its very long (deep) histories, its many internal dynamics) by a simple
counting algorithm.
>
>
> I can't really see what problem is solved by including computation.
Do you say "yes" to the digitalist doctor? If yes, you cannot avoid
"computer science" or "elementary number theory" even just to define
"information". Why avoiding computer science in a theory which relate
consciousness (as manifesting relatively to me) to working computer.
>
> To me, assigning consciousness to platonically existing information
> seems to be good enough, with nothing left over for computation to
> explain. So, I go with the "just information" choice.
Again: formally the difference is that your theory accepts the natural
numbers (the finite information strings) and succession (to get them
all). But if you add addition and multiplication you get computer
science + a measure which explains why apples can fall from a tree in
normal histories, and why white rabbits can be rare.
I could add that if you are a platonist, I don't see how you can avoid
the computations through which information fluxes develop themselves,
when seen from inside.
Perhaps I should just ask what your theory is. A measure on information
already needs a non-trivial mathematical apparatus.
Bruno
>
> On Apr 24, 11:39 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>>> At any given instant that I'm awake, I'm
>>> conscious of SOMETHING.
>>
>> To predict something, the difficulty is to relate that consciousness
>> to its computational histories. Physics is given by a measure of
>> probability on those comp histories.
>
> The laws of physics would seem to be contingent, not necessary.
On the contrary: physical laws are necessarily non-contingent
with comp. They are defined through all computations in Platonia.
> In
> that I can imagine a universe with an entirely different set of
> physical laws.
There is no universe. You already "belong" to all comp histories going
through your actual states. Of course all states are actual from
inside, but only "normal" states remain normal, and there are
physical laws only in normal histories. Physicalness is a product of
that "normality" condition on histories.
>
>
> Further, assuming that computer simulations of brains are possible
> and
> give rise to consciousness,
OK. That is comp. My working hypothesis.
> I can imagine that a simulation of such a
> brain could be altered in a way that the simulated consciousness
> begins to perceive a universe with these alternate physical laws.
Only relative to you. From the first person point of view of the
inhabitants of your altered simulation, they don't belong to it, but to
the infinity of simulations in Platonia. If your alteration is such
that the 1-view of those inhabitants escapes from normality, then from
their point of view they escape your "universe".
With comp there is no identity thesis. There is a 1-1 relation going
from a machine to a mind, but the inverse is 1-infinity: for each mind
state there is an infinity of machines realizing it. The first person
indeterminacy is exploitable to extract the laws of physics.
> Or
> even begins to perceive a universe with no consistent coherent
> physical laws at all.
The question is: what is their relative probability measure? What can
I expect?
>
>
>
>>> And I'm conscious of it by virtue of my
>>> mental state at that instant. In the materialist view, my mental
>>> state is just the state of the particles of my brain at that
>>> instant.
>>
>> Which cannot be maintained with the comp hyp. Your consciousness is
>> an
>> abstract type related to all computations going through your current
>> state.
>
> I see what my "current state" does here with respect to
> consciousness. But I don't see what the "computations going through
> it" contribute.
They contribute to the measure which gives sense to the "universal"
physical laws.
>
>
>
>>> I won't worry about it too much, as there is no doctor, only my
>>> perceptions of a doctor. Every possible outcome of the "brain
>>> replacement operation" that I can perceive, I will perceive.
>>
>> Not in the relative way. You have to explain why you see apples
>> falling from a tree, and not any arbitrary information-theoretical
>> data.
>
> I explain it by asserting that there are many versions of me, some who
> see apples, and some who see arbitrary information-theoretical data.
> Everything that can be perceived is perceived.
Without giving me a measure, it is as if your theory predicts
everything. This is contradicted by the facts. If I want coffee now, I
know all too well that I have to do something for it. Sorry, but I cannot
wait for a white rabbit to bring me my cup of coffee.
>
>
>
>>> Including outcomes that don't make any sense.
>>
>> You have to explain why they are *rare*. If not your theory does not
>> explain why you put water on the gas and not in the fridge when you
>> want a cup of coffee.
>
> I don't say that they are rare, I say they don't make any sense. A
> big difference.
>
If they make no sense, then they do not exist in Platonia, except in
non-standard mathematical representations (due to incompleteness). Then
they had better be rare relative to my current state, or your
theory is deflationary: it predicts every event.
> I say that every possible event is perceived to happen, and so nothing
> is more or less rare than anything else.
It has to be so at least in the relative way; if not, your theory predicts
all happenings, even in practice, but the facts contradict this.
> There are only things that
> are rare in your experience.
This is what comp can explain. This is how the universal dovetailer
gets normal explanations of measure one. The counting algorithm does not.
> They are not rare in an absolute sense.
Probably. I don't know, because the probabilities are always relative with
comp, but this is an old discussion (cf. ASSA/RSSA).
>
>
> Why do I say this? Because I think that platonism is the best
> explanation for conscious experience, and the above view is (I think)
> the logical conclusion of that platonic view of reality.
I agree with the platonism. And it is because the computations are in
Platonia that the whole thing works.
>
>
>
>>> Thus the talk of
>>> probabilities and measures. I'm willing to just say that all
>>> universes are experienced.
>>
>> That is absolutely true. But we don't live in the absolute (except
>> perhaps with salvia :).
>
> I say that we do live in the absolute. Not all experiences of the
> absolute will be strange. If all possible experiences exist in the
> absolute, then by definition some will be quite ordinary and mundane.
> Right?
Sure. But only in normal (measure one) histories do they remain normal. If
you don't address the first person, singular and plural, indeterminacy
problems, you don't solve the mind-body problem, nor the body problem
(the origin of the appearance of a stable physical universe).
>
>
> But, right, salvia gives a taste of how strange experience can be.
> And also schizophrenia, dementia, and various other mental conditions
> and abnormalities caused by damage to the brain are further examples.
>
> How does your computational theory of consciousness explain the
> perceptions of these people?
By the Galois connection between machines and behaviors, or equations
and surfaces, or theories and models.
If you have a system of equations and decide to drop some of them,
making the system smaller, you get more solutions: more hypersurfaces
satisfying the remaining equations. Similarly, when you drop axioms from
a theory, the theory has more models. Similarly, when you "diminish" a
brain, you enlarge the possible consciousness. The consciousness of the
universal person (universal consciousness) exists in Platonia. Its
"platonic brain" is any little theory plus induction axioms. It
differentiates through its consistent extensions, which also exist in
Platonia. So many such extensions exist that a notion of normality and
stability is necessarily perceived from inside: the physical laws
emerge. Making a part of your brain sleepy or perturbed can let you
experience unusual but real reality types.
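A toy sketch of the "fewer equations, more solutions" point (the
particular equations and grid below are arbitrary choices, only for
illustration):

    import itertools

    def solutions(equations, grid=range(-5, 6)):
        # integer points on a small grid satisfying every equation in the list
        return [(x, y) for x, y in itertools.product(grid, repeat=2)
                if all(eq(x, y) for eq in equations)]

    eq1 = lambda x, y: x + y == 2     # one line in the plane
    eq2 = lambda x, y: x - y == 0     # another line

    both = solutions([eq1, eq2])      # the intersection: a single point
    one  = solutions([eq1])           # drop an equation: the whole line survives

    print(len(both), len(one))        # the solution set strictly grows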
>
>
>
>> That is true, but we want to explain "the stable appearance of atoms
>> and galaxies", and what happens when we die.
>
> Some observers will see stable atoms and galaxies. Because that's one
> of the possible sets of experience. Other observers won't see these
> things.
I can explain to you that almost all observers will observe the same
physical laws everywhere in Platonia. You are the one saying that
physics is contingent now.
The nice thing with comp is that physics is necessary, and necessarily
the same for all observers/universal machines. Only geography and
history can be (very) different. Practically, when we die, things are
more complex to predict, because you shift toward less normal worlds
and (I think) you can come back to universal consciousness, an amnesic
state where you forget the more particular differentiated dreams. It
is an open problem to evaluate the probability of surviving with all your
memories, as if your normal life continued (it is not null, but at
each DU-step k it is multiplied by something (much) less than 1/2^k, I
think).
>
>
>
>> You are right. But consciousness is the only thing I have no doubt
>> about. The *only* undoubtable thing. The fixed point of the cartesian
>> systematic doubting attitude. A theory which eliminates my first
>> person, or my consciousness, although irrefutable by me, is wrong, I
>> hope; I hope it is wrong for you too. (Why would I send a post on
>> consciousness to a zombie?)
>
> Right, I'm with you on this. Consciousness is the one thing that
> can't be doubted. And that's where the trouble starts...
That's where science starts. That is why we have fun discussing
theories and arguments :)
Bruno
No, I think you're missing my point. Consider your analogy of fitting
together images to make a complete picture. You present this as a
spatial representation of the sequential flow of consciousness. Now
suppose your spatial elements have zero extent - they are "spatial
instants", i.e. points. What fits them together?
>
>
>>> Well, I'm not sure how much of the brain's information is needed to
>>> represent a particular state of consciousness. But I don't think that
>>> it's a crucial question.
>>>
>> It's a crucial question if the answer is "more than what is in an
>> instant of consciousness."
>>
>
> Why is it a crucial question in that case? I don't see what you're
> getting at.
It appears to me that you are implicitly supposing that information in
the brain (say in its structure) can be associated with an instant of
consciousness and hence allow its position in the "complete picture" to
be determined. But it would not be a legitimate move to use information
that was not in the instant itself. And that's what I find implausible:
that there is significant information content in a conscious interval of
infinitesimal duration.
Brent
I am not sure that the measure problem can be so easily
abandoned/ignored. Assuming every Observer Moment has an equal
measure, then the random/white-noise-filled OMs should vastly
outnumber the ordered and sensible OMs. Though I only ever have one
OM to go by, the fact that I was able to maintain a
non-random/non-white-noise-filled OM long enough to compose this post
should serve as some level of evidence that all OMs are not weighted
equally.
Bruno has suggested that computationalism is a candidate for answering
the measure problem in a testable way. However, there may be other
ways to answer it by considering platonic objects, for example
counting the number of paths to a state, that is, how often it reappears
as a substructure of other platonic objects, etc. Whether or not this
is testable is another question, but whether the ultimate explanation
of consciousness is computation or information, I feel that measure is
important.
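A rough sketch of the path-counting idea (the graph below is entirely
made up; it only illustrates how "number of paths reaching a state"
could serve as a weight):

    from collections import defaultdict

    # hypothetical transition structure among states, listed so that every
    # state appears before the states it points to
    order = ['start', 'a', 'b', 'c', 'd']
    edges = {'start': ['a', 'b'], 'a': ['c'], 'b': ['c', 'd'], 'c': ['d'], 'd': []}

    def path_counts(order, edges, root='start'):
        counts = defaultdict(int)
        counts[root] = 1
        for state in order:
            for nxt in edges[state]:
                counts[nxt] += counts[state]   # every path to `state` extends to `nxt`
        return dict(counts)

    print(path_counts(order, edges))   # states reached by more paths get more weight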
Jason
I understand that all possible experiences by definition are
experienced, and that rare experiences, however rare they may be, will
still be experienced. In fact I used that same argument with Russell
Standish when he said that ants aren't conscious because if they were
then we should expect to be experiencing life as ants and not humans.
However, in your theory you explain that there are always "next
moments" to be experienced. If you were to wager on your next
experience, would you guess that it will be random or ordered? If you
say ordered, is that not a contradiction when the random experiences
so greatly outnumber the ordered?
If your theory is true, then certainly there are observers who
experience every moment as sensible. Yet I would liken those to a
branch of the multiverse where, every time an experimenter measures the
quantum state of any particle, it comes out the same; in that branch
perhaps they never develop the field of quantum mechanics, but how
long into the future would you expect that illusion to hold? Perhaps
in your theory "next" and "previous" OMs aren't really connected, only
the illusion of such a connection?
Would you say you belong to the ASSA or RSSA camp?
http://everythingwiki.gcn.cx/wiki/index.php?title=ASSA
http://everythingwiki.gcn.cx/wiki/index.php?title=RSSA
Or perhaps something different entirely?
Jason
An untestable theory. But that's OK since if it's true it's also useless.
Brent
> I am not sure that the measure problem can be so easily
> abandoned/ignored. Assuming every Observer Moment has an equal
> measure, then the random/white-noise-filled OMs should vastly
> outnumber the ordered and sensible OMs. Though I only ever have one
> OM to go by, the fact that I was able to maintain a
> non-random/non-white-noise-filled OM long enough to compose this post
> should serve as some level of evidence that all OMs are not weighted
> equally.
One remark that could be made about this oft-stated assertion is that
you don't *know* you have maintained a series of non-random OM's
orderly enough and long enough to compose this post. All you can be
certain about is your present OM, and it may be the only OM in all the
universes, anywhere or ever. In ot
--
Stathis Papaioannou
[Oops, didn't finish!]
In other words, it's impossible to know anything about other OM's from
within your own OM, except from a godlike stance outside the
multiverse. But this doesn't stop us drawing conclusions from the
(perhaps untrue) assumption that there are many OM's and the present
one is sampled randomly from them.
--
Stathis Papaioannou
It's strictly equivalent to another older theory, "Whatever will be,
will be."
Brent
The interesting, informative thing consists in relating A and B.
This is my taste. I expect a "theory of everything" to explain
the miracle that makes me feel I regularly succeed in making a cup of
coffee when I am in the state of dreaming of, wanting, or expecting ...
a cup of coffee, and not only does the coffee occur, but soon after,
regularly, I smell it, I drink it, I enjoy it. How good dreams
happen, how bad dreams happen, how I can play the game in a way which
satisfies most good-willing universal machines. I need a theory which
is correct about what universal machines can do and think
relative to their most stable and probable dreams ... I am a long-term
researcher, but a practical one though.
All theories are hypothetical, but some theories are really questions. I
just propose the comp question to nature. From your post I can guess you
stop at step 3 of the Universal Dovetailer Argument (UDA).
So you do indeed need to abandon comp to maintain your form
of immaterialist platonism, but then you lose the tool for questioning
nature. It almost looks like choosing a theory because it does not even
address the question?
Bruno
>
>
> On Apr 26, 11:40 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>>
>> The question is; what are their relative probability measure? What
>> can
>> I expect.
>
> Any expectations you have are unfounded. The problem of induction
> applies.
There is no problem of induction. There is a problem of induction only
for those who believe that science can prove a fact about reality.
There are only worse, bad, good, and better theories, but all theories
are hypothetical beliefs: from the baby's theory according to which he
has parents, to the existence of moons, suns and bosons, theories are
hypothetical, and their interpretations preserve the "hypotheticalness".
A scientist never says "I know". (And this, I think, is the way Popper
solved the "induction problem".) The problem appears only to those who
want their theory to be true ... by authority.
>
>
> Any probabilities arrived at empirically are suspect; they will
> continue to hold for some Brunos but not for all...
Sure. I will ask a bank to lend me a huge amount of money, promising
to reimburse them when I win the big lottery ten times in a row.
>
>
> But there's really not a better option that I can think of, so we
> might as well stick with our expectations and probabilities.
>
> Not that we have a choice, since free will is an illusion also...
Free will does not exist for those who live to work.
Free will does exist for those who work to live. If not, they are
exploited: from universal they are forced to be particular. That's how
souls fall and englue themselves; they lose their free will. Neither work
nor war can make you free. You are free, and can remain so by resistance
and vigilance.
In the normal worlds.
>
>
>
>> Without giving me a measure, it is as if your theory predicts
>> everything.
>
> Right, it does basically predict everything.
Logicians call such theories inconsistent.
> Except an end to
> experience. There is no sweet, sweet release of death if I'm right.
> There will be no final rest in the comforting embrace of oblivion.
> Only the endless grind of a weary existence.
That is true. But no theories can predict this. Er... well, no
consistent theories can predict this. Inconsistent theories can prove
we are immortal indeed.
>
>
>
>> This is contradicted by the fact.
>
> How so? What fact? You know for certain that you are the only
> Bruno? You know for certain that there aren't parallel realities
> containing Brunos with different experiences? How did you come by
> this fact?
>
> Is it a fact, or just a belief?
Come on Kelly, you know my favorite postulate assumes all computational
histories (Robinson arithmetic), so I have no doubt there is a
continuum of Brunos (even densely distributed between any two points of
the M set), all englued in their histories and sooner or later quite
different: as different as Kelly and Bruno over a short time, as different
as Bruno and the amoeba over a longer time ...
The question is just this. I want coffee. How is it that I can realize
the dream "drinking a cup of coffee"? I am good-willing. I tried
your theory this morning. At first it worked. It was a real and big
pleasure to wait for the white rabbit to bring my cup of coffee to my
bed, where I could stay longer. But after two hours I was
missing something, and I asked myself: why did I bet I would necessarily
be the Bruno in the white rabbit world instead of, as it seems (at least),
in the usual normal (and a bit sadder) world where in most circumstances
I have to use my free will, my free time, my free action, my (not so
free) coffee, and this up to the point where I bring the warm product to
my mouth.
>
>
>
>> If I want coffee now, I
>> know all too well that I have to do something for it. Sorry, but I cannot
>> wait for a white rabbit to bring me my cup of coffee.
>
> God helps those who help themselves.
Ah! This is indeed in the "guardian angel" theory of the machine M, on
the machine M.
But of course the machine M, if she wants to remain scientific, or
just self-referentially correct, should say instead:
If God exists, then God helps those (universal machines) which help
themselves.
Plato's "truth" helps in the limit. That is why God is good, in Plato.
> However, some Brunos are more
> fortunate with respect to helpful rabbits than other Brunos. Stay
> optimistic.
I tell you, I will not try the white rabbit method of realizing the
experience of drinking coffee any more, in the morning.
OK, I admit I do it every day, implicitly, in the evening. It is perfect
how the white rabbit seems to understand that I don't want a cup of coffee
then; he never brings me coffee at that time. Is that evidence that I am
in a world with white rabbits?
>
>
>
>>> I say that every possible event is perceived to happen, and so
>>> nothing
>>> is more or less rare than anything else.
>>
>> It has to be so at least in the relative way; if not, your theory
>> predicts
>> all happenings, even in practice, but the facts contradict this.
>
> Again, what facts? If everything was happening in alternate versions
> of reality, how would you detect this? What facts do you possess that
> rule this out?
? I don't rule out the whole structure of possibilities, and I defend
the idea that actuality is possibility seen from inside. I include types
of relative or conditional possibilities. Indeed I explain that once you
assume comp, you have to take into account the interference of
histories due to your personal finite level of distinguishability
among a continuum of histories.
Mathematics illustrates that immaterial beings, like numbers and
digital machines, obey laws. Of course you can contemplate the
picture without trying to use it. I like poetry and many arts very much.
Actually, using comp physics to measure the mass of the Higgs
(Brout-Englert) boson would be like using string theory to prepare a
pizza, but comp gives a globally coherent picture, and it gives a frame
from which the "laws of physics" can emerge.
Nobody can be sure it is true, but once you say yes to the doctor,
then if you survive, you are under its play.
Bruno
> Sure. I will ask a bank to lend me a huge amount of money, promising
> to reimburse them when I win the big lottery ten times in a row.
Not so far fetched, really.
--
Stathis Papaioannou
The Financial Crisis Explained
Heidi is the proprietor of a bar in Berlin . In order to increase sales, she decides to allow her loyal customers - most of whom are unemployed alcoholics - to drink now but pay later. She keeps track of the drinks consumed on a ledger (thereby granting the customers loans).
Word gets around and as a result increasing numbers of customers flood into Heidi's bar.
Taking advantage of her customers' freedom from immediate payment constraints, Heidi increases her prices for wine and beer, the most-consumed beverages. Her sales volume increases massively.
A young and dynamic customer service consultant at the local bank recognizes these customer debts as valuable future assets and increases Heidi's borrowing limit. He sees no reason for undue concern since he has the debts of the alcoholics as collateral.
At the bank's corporate headquarters, expert bankers transform these customer assets into DRINKBONDS, ALKBONDS and PUKEBONDS. These securities are then traded on markets worldwide. No one really understands what these abbreviations mean and how the securities are guaranteed.
Nevertheless, as their prices continuously climb, the securities become top-selling items.
One day, although the prices are still climbing, a risk manager of the bank -- subsequently, of course, fired due to his negativity -- decides that the time has come to demand payment of the debts incurred by
the drinkers at Heidi's bar.
However they cannot pay back the debts.
Heidi cannot fulfill her loan obligations and claims bankruptcy.
DRINKBOND and ALKBOND drop in price by 95%. PUKEBOND performs better, stabilizing in price after dropping by 80%.
The suppliers of Heidi's bar, having granted her generous payment due dates and having invested in the securities, are faced with a new situation.
Her wine supplier claims bankruptcy, her beer supplier is taken over by a competitor.
The bank is saved by the government following dramatic round-the-clock consultations by leaders from the governing political parties.
The funds required for this purpose are obtained by a tax levied against the non-drinkers.
Finally an explanation I understand ...
> In fact, I can't imagine how
> consciousness could possibly be discontinuous if this was done, for
> where would the information that tells you you've been chopped up
> reside?
In Bruno's Washington/Moscow thought experiment that information isn't
in your consciousness, although it's available via third persons. My
view of the experiment is that you would lose a bit of consciousness,
that you can't slice consciousness arbitrarily finely in time.
Brent
What you are talking about is what I call the "Occam catastrophe" in
my book. The resolution of the paradox has to be that the
random/white-noise filled OMs are in fact unable to be observed. For
the Anthropic Principle to hold in an idealist theory, the OM must
contain a representation of the observer, i.e. observers must be
self-aware. Amongst such OMs containing observers,
ones that are the result of historically deep evolutionary processes
are by far the most common. And the evolution of those observer moments
must also be constrained to be similar to those previously observed,
eliminating white rabbits, due to the "robustness" of the observer.
Cheers
--
----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052 hpc...@hpcoders.com.au
Australia http://www.hpcoders.com.au
----------------------------------------------------------------------------
I did a calculation based on historical population levels with
exponential population growth and concluded there's at least a few
centuries left, although the world population at the end of the 21st
century is likely to be less than at the start. I guess we'll have to
wait a bit to see if that one's right.
> And also it predicts that there are no
> significant number of (conscious) aliens? Because if there were, we
> should expect to be one of them and not a human?
Remember the Chinese question discussed in the ant paper. There can be
alien planets with greater populations than the Earth, but they must
be relatively rarer than planets containing Earth population levels.
There are other reasons for suspecting intelligent life is rare in the
universe (e.g. Fermi's paradox).
>
> Sounds like over-use of a good idea. In this case it ignores all
> other available information to focus on just one narrow
> statistic. Why should we ignore everything else we know and only
> credit this single argument from probability? Surely, after studying
> ants and humans, the knowledge that we gain has to alter our initial
> expectations, right? But that isn't taken into account here (at least
> not in your one line description of the discussion...ha!).
>
> I think the problem with Russell's ant argument stems from trying to
> use "a priori" reasoning in an "a posteriori" situation. There is
> extra information available that he isn't taking into consideration.
>
What extra information do you have in mind? I'd gladly update my
priors with anything I can lay my hands on.
Excellent story, worth the brief deviation from the thread topic!
--
Stathis Papaioannou
>> I'm not sure I understand. Are you saying that the information in most
>> physical processes, but not consciousness, can be discrete? I would
>> have said just the opposite: that even if it turns out that physics is
>> continuous and time is real, it would still be possible to chop up
>> consciousness into discrete parts (albeit of finite duration) and
>> there would still be continuity.
>
> I could buy that if the finite duration was long enough that the content
> of the conscious interval was sufficient to order the intervals.
> Otherwise you'd need some extrinsic variable to order them (e.g. physical
> time, brain states).
It seems to me that if the seconds of my life were, according to an
external clock, being generated backwards or scrambled, I would have no
way of knowing this, nor any way of knowing how fast the clock was
running or if it was changing speed. So how would the external clock
be able to impose an order on moments of quasi-consciousness below the
critical minimal interval, when it has no subjective effect on
supercritical intervals?
>> In fact, I can't imagine how
>> consciousness could possibly be discontinuous if this was done, for
>> where would the information that tells you you've been chopped up
>> reside?
>
> In Bruno's Washington/Moscow thought experiment that information isn't
> in your consciousness, although it's available via third persons. My
> view of the experiment is that you would lose a bit of consciousness,
> that you can't slice consciousness arbitrarily finely in time.
Could the question be settled by actual experiment, i.e. asking the
subject if they noticed anything unusual?
--
Stathis Papaioannou
2009/4/29 Stathis Papaioannou <stat...@gmail.com>:
For this you would need an actual AI, and also for everybody to agree
that this AI is conscious and not a zombie.
If you can settle that, then an interview should be counted as proof.
But I'm not sure you can prove the AI is conscious; nor, by the same
argument, am I sure I could prove to you that I am.
Regards,
Quentin
--
All those moments will be lost in time, like tears in rain.
The atoms vibrating in a rock have a causal structure, insofar as an
atom moves when it is jiggled by its neighbours in perfect accordance
with the laws of physics. And in the possibility space of weird alien
computers it seems to me that there will always be a computer
isomorphic with the vibration of atoms in a given rock. This
requirement becomes even easier to satisfy if we allow a computation
to be broken up into short intervals on separate computers of
different design, with the final stream of consciousness requiring
nothing to bind it together other than the content of the individual
OM's.
--
Stathis Papaioannou
>>> In Bruno's Washington/Moscow thought experiment that information isn't
>>> in your consciousness, although it's available via third persons. My
>>> view of the experiment is that you would lose a bit of consciousness,
>>> that you can't slice consciousness arbitrarily finely in time.
>>
>> Could the question be settled by actual experiment, i.e. asking the
>> subject if they noticed anything unusual?
>>
>>
>> --
>> Stathis Papaioannou
>
> For this you would need an actual AI, and also for everybody to agree
> that this AI is conscious and not a zombie.
>
> If you can settle that, then an interview should be counted as proof.
> But I'm not sure you can prove the AI is conscious; nor, by the same
> argument, am I sure I could prove to you that I am.
Well, you could just ask the teleported human. If he says he feels
fine, didn't notice anything other than the scenery changing, would
that count for anything? I suppose you could argue that of course he
would say that since a gap in consciousness is by definition not
noticeable, but then you end up with a variant of the zombie argument:
he says everything feels OK, but in actual fact he experiences
nothing.
--
Stathis Papaioannou
>>And in the possibility space of weird alien
>> computers it seems to me that there will always be a computer
>> isomorphic with the vibration of atoms in a given rock.
>
> What do you mean by "weird alien computers"? If we had a way of defining the
> notion of "causal structure", I'm sure it would be true that in the space of
> all computer programs (running on any sort of computer) there would be
> programs whose causal structure was isomorphic to the causal structure of
> vibrations in a rock, but this might be quite distinct from the causal
> structure associated with the brains of sentient observers.
Two computers of different architecture running the same program will
go through, on the face of it, completely different physical activity.
Now consider every possible general purpose computer, or every
possible Turing complete machine, running a particular program. I
don't know how to show this rigorously, but it seems to me that the
physical activity in a rock will mirror the physical activity in at
least one of these possible computers, and that this requirement will be
easier to satisfy the shorter the period of the computation under
consideration.
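As a toy of the kind of mapping being appealed to here (the state labels
below are made up, and this only illustrates the Putnam/Searle-style
pairing rather than proving anything): any run of distinct "rock states"
can simply be paired off with the state trace of a chosen computation,
and under that pairing the rock "implements" the trace.

    rock_trace    = ['r17', 'r03', 'r88', 'r42']        # arbitrary distinct physical states
    program_trace = ['init', 'loop', 'loop', 'halt']     # states of some chosen computation

    # the "implementation" is nothing but the pairing itself
    mapping = dict(zip(rock_trace, program_trace))
    print(mapping)   # each rock state is read as the corresponding program state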
>>This
>> requirement becomes even easier to satisfy if we allow a computation
>> to be broken up into short intervals on separate computers of
>> different design, with the final stream of consciousness requiring
>> nothing to bind it together other than the content of the individual
>> OM's.
>
> As long as the separate computers are each passing the results of their
> computation on to the next computer in the series, then we can talk about
> the causal structure instantiated by the whole series. And if they aren't,
> then according to the idea of associating OMs with causal structures, we
> might have to conclude that these computers are not really instantiating an
> OM of a complex humanlike observer even if by some outrageous coincidence
> the output of all these separate computers *looked* just like the output of
> a single computer running a simulation of the brain of a humanlike observer.
I would have said that every computer can generate an OM in complete
causal isolation from every other computer, and the OM's still
associate to form a stream of consciousness simply by virtue of their
content. That seems to me perhaps the main utility of the idea of
OM's. But it appears you agree with Brent that this association won't
happen (or at least, there will be a gap at the seams) unless the
computers are causally connected.
--
Stathis Papaioannou
Hi Russell,
What you said reminded me of this article, which appeared in the Boston Globe:
http://www.boston.com/bostonglobe/ideas/graphics/011109_hacking_your_brain/
See the section on hallucinating with ping pong balls and a radio. It
would seem the way the brain is organized it doesn't accept perception
of pure randomness (at least not for long, I have not yet tried the
experiment myself). If it can't find patterns from the senses it
looks like it gives up and invents patterns of its own.
Jason
Kelly wrote:
>
> Not if information exists platonically. So the question is, what does
> it mean for a physical system to "represent" a certain piece of
> information? With the correct "one-time pad", any desired information
> can be extracted from any random block of data obtained by making any
> desired measurement of any physical system.
>
> If I take a randomly generated one-time pad and XOR it with some real
> block of data, the result will still be random. But somehow the
> original information is there. You have the same problem with
> computational processes, as pointed out by Putnam and Searle. The
> molecular/atomic vibrations of the particles in my chair could be
> interpreted, with the right mapping, as implementing any conceivable
> computation.
>
> So unambiguously connecting information to the "physical" is not so
> easy, I think.
This is essentially the problem discussed by Chalmers in "Does a Rock Implement Every Finite-State Automaton?" at http://consc.net/papers/rock.html ,
and I think it's the idea behind Maudlin's Olympia thought experiment as well.
But for anyone who wants to imagine some set of "psychophysical laws" connecting physical states to the measure of OMs, I think there may be ways around it. For example, instead of associating an OM with the passive idea of "information", can't you associate it with the causal structure instantiated by a computer program that's actually running, as opposed to something like a mere static printout of its states? Of course you'd need a precise mathematical definition of the "causal structure" of a set of causally-related physical events, but I don't see any reason why it should be impossible to come up with a good definition.
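To make the quoted XOR point concrete, here is a minimal sketch (the
strings are made up and only the standard library is used): a block XORed
with a random pad looks like noise, yet a suitably crafted pad "recovers"
any message of the same length from that very same block.

    import os

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    message = b'the cat sat'
    pad     = os.urandom(len(message))
    cipher  = xor(message, pad)                  # indistinguishable from random noise

    assert xor(cipher, pad) == message           # the "right" pad gives back the original

    other = b'dog ate fish'[:len(message)]       # any other message of the same length
    pad2  = xor(cipher, other)                   # craft a pad for it...
    assert xor(cipher, pad2) == other            # ...and the same block now "contains" it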
> It
> would seem the way the brain is organized it doesn't accept perception
> of pure randomness (at least not for long, I have not yet tried the
> experiment myself). If it can't find patterns from the senses it
> looks like it gives up and invents patterns of its own.
It is perhaps the other way around. The portion(s) of the brain
responsible for qualia perception appear to operate as a complex,
dynamical system with a variety of chaotic attractors, and sensory
information only serves to "nudge" this system from one set of attractor
cycles to another. In the absence of sensory input, these then operate
in open loop mode, and the person may experience all variety of
interesting qualia uncorrelated with the "real" world.
The overall mechanism of dissociative anaesthetic agents such as
Ketamine or nitrous oxide is poorly understood, but one notable property
they have is that in sub-clinical dosages they suppress sensory input
while retaining consciousness. This results in similar, "open loop"
qualia.
Johnathan Corgan
That assumes that one second can be cleanly (no causal or other
connection) sliced from the next second with no loss, which is what I doubt.
> So how would the external clock
> be able to impose an order on moments of quasi-consciousness below the
> critical minimal interval, when it has no subjective effect on
> supercritical intervals?
>
>
>>> In fact, I can't imagine how
>>> consciousness could possibly be discontinuous if this was done, for
>>> where would the information that tells you you've been chopped up
>>> reside?
>>>
>> In Bruno's Washington/Moscow thought experiment that information isn't
>> in your consciousness, although it's available via third persons. My
>> view of the experiment is that you would lose a bit of consciousness,
>> that you can't slice consciousness arbitrarily finely in time.
>>
>
> Could the question be settled by actual experiment, i.e. asking the
> subject if they noticed anything unusual?
>
Yes, I think it could - if we could do the experiment. Certainly when
I've been unconscious, either from concussion or anesthesia I've noticed
something unusual. :-)
Brent
>
>
>
>
> On Apr 27, 12:23 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
>>
>> So you do indeed need to abandon comp to maintain your form
>> of immaterialist platonism, but then you lose the tool for questioning
>> nature. It almost looks like choosing a theory because it does not even
>> address the question?
>
> Okay, going back to basics. It seems to me that there are two
> questions:
>
> A) The problem of explaining WHAT we perceive
> B) The problem of explaining THAT we perceive
>
> The first issue is addressed by the third-person process of physics,
> and of just generally trying to make sense of what we perceive as we go
> through the daily grind of life. Everybody has a grasp of this issue,
> because you're faced with it everyday as soon as you wake up in the
> morning, "what's going on here???".
Well, that is "grandmother physics". It works well locally, but is
refuted by quantum mechanics, and actually is not tenable with just
the assumption of computationalism. There is an hard problem of matter.
>
>
> The second issue is obviously the more subtle first-person problem of
> consciousness.
Computationalism makes necessary the reduction of the hard problem of
matter to the less hard problem of mind. The problem of mind is less
hard, with comp, because computer science and the provability logics
can reduce the classical mind problem to the study of the correct
discourse of the self-introspecting machine, using mathematical logic
to define the notion of self-referential correctness. We get for free
the many nuances between true, communicable, provable, knowable,
inferable, observable, sensible, etc.
I think you have opted for platonist idealism at the start, so perhaps
you are not motivated to run through the Universal Dovetailer
Argument, whose main goal is to show that if we assume
computationalism (the assumption that we can use classical teleportation
as a means of transport, or that we are Turing-emulable) then we have to
reduce the physical laws to number theory/computer science (and a theory
of consciousness, which I take as a theory of machine knowledge, with
the usual modal axioms of consciousness). Consciousness is the "true
belief in a reality". But with comp that reality is not the physical
reality; it is more the belief in elementary arithmetic, and from the
point of view of the machine it is a bet on self-consistency: there is
a reality which "satisfies me", there is a model making "me" valid. I
use the Gödel-Henkin "completeness" theorem here.
You can take a look:
http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract.html
I have already explained this ten times on this mailing list. I think
most grasp the first six or seven steps, but some trouble remains with
the 8th step. I intend also to come back sooner or later to the seventh
step (which contains a typical computer-science difficulty) in my
conversation with Kim.
Search for UDA in the archive of this list. I finished in March a new
version of the UDA (in 8 propositions) which has benefited from the
conversation with the list, and I will put it, someday, on my web page.
>
>
> But, for A, the fact that we are able to come up with rational-seeming
> explanations for what we experience, and that there seems to us to be
> an orderly pattern to what we perceive, doesn't answer the deeper
> question of the ultimate nature of this external world that we are
> observing.
I agree. Aristotle's naturalist hypothesis is an excellent
methodological simplification, but it departs from all the fundamental
questions asked by Plato. And once Aristotle's simplification is taken
as a granted "fact", or as an authoritative axiom, or worse, as obvious,
then you can only be led to person elimination, in theory or in
practice.
> Here we get into issues of scientific/structural realism.
> In other words, what do our scientific theories really mean? (http://
> plato.stanford.edu/entries/structural-realism/)
>
> But I don't think we can assign any real meaning to what we observe
> until we have an acceptable understanding of the first person
> subjective experience by which we make our observations.
The mind-body problem is the problem of the relation between being and
appearance. Between the always doubtful sharable objectivity and the
always non-communicable, non-doubtable direct true apprehension.
Between third person povs and singular or plural first person povs.
Between quanta and qualia.
Physical reality is a first-person-plural type of thing. QM confirms
this through the multiplication of entangled populations of observers.
Physics concerns invariant patterns in all universal machine (self-)
observation, and (unless I am wrong, of course) the physical reality is
the border of the intrinsic abyssal ignorance of all universal
machines. My point is only that this entails verifiable/refutable
facts, and up to now QM confirms this. Comp easily refutes any
Newtonian or classical physics.
>
>
> So the question of consciousness is more fundamental than the
> questions of physics.
Not at all. Those two questions are both fundamental and very
fundamentally related. If you define consciousness by the true belief
in a (not necessarily correct or probable) reality (we are conscious
in dreams), then computer science (and logic) alone can explain why and
how consciousness differentiates in universal machine histories, and
why stable and sharable dreams develop following long and deep (in
Bennett's sense) ultra-parallel computations (ultra-parallel =
2^aleph_0 histories).
> We can come up with scientific theories to
> explain our observations, but since we don't know what an observation
> really is, this can only get us so far in really understanding what's
> going on with reality. Until we have a foundation in place,
> everything built above is speculative. To rely on physics as your
> foundation is "with more than Baron Münchhausen’s audacity, to pull
> oneself up into existence by the hair, out of the swamps of
> nothingness."
We totally agree on this, except that such a point is not obvious, at
least for most people, after 1500 years of abuse of the Aristotelian
methodological simplification. See the paper for a proof that once we
assume computationalism (and thus keep the notion of consciousness at
the start) physics has to be entirely reduced to the last mystery:
the mystery of numbers. This one can be shown insoluble by *any*
machine. All self-referentially correct machines can understand that
none of them can ever explain where the (natural) numbers come from.
This makes the numbers again a natural start. Without assuming them, we
never get them.
>
>
> But here we hit a problem because the process that we use to explain
> objective data doesn't work when applied to subjective experience.
> There is a discontinuity. The third-person perceived reality vs.
> first-person experienced reality. The latter apparently can't be
> explained in terms of the former.
My whole point, Kelly, is that if you assume the computationalist
hypothesis (digital mechanism, or simply Mechanism), then the former
has to be explained from the latter.
> But without an explanation for the
> latter, I don't see how any meaning can be attached to the former.
But the latter is not so difficult ... once you take some time to study
computer science and mathematical logic. And then the former reappears
as a measure problem on computations.
I am amazed by the fact that we have been talking about comp for years
here, and it seems many still don't realize comp is made possible by the
discovery of the universal machine (by Babbage, Post, Turing, Church,
Kleene, Markov ...). It is a bomb! A creative bomb, for a change. "Nature"
invented it before, of course, again, and again, and again .... So do
the numbers, through their effective enumeration of their computable
relations.
>
>
> And I think that is for this reason that I don't get hung up on the
> "white rabbit" problem. Arguments based on the probability of finding
> yourself in this state or that state are fine if all other things are
> equal, and that's the only information you have to reason with. But I
> don't think that we're in that situation.
Read the UDA and tell me where you have a problem, because this
paragraph is far too ambiguous.
>
>
> So I start with the assumption of physicalism and then say that based
> on that assumption, a computer simulation should be conscious,
You mean a physical computer? Eventually this exists only in purely
mathematical universal machines' dreams.
> and
> then from there I find reasons to think that consciousness doesn't
> depend on physicalism.
I agree. But this is the reason why we have to justify completely the
physical appearance from computer science. No doubt information theory
has a role there, but it is just a part of a very large and rich
subject.
> To me, the most likely alternate explanation
> seems to be that consciousness depends on information.
What do you mean by "information"? Which theory are you referring
to? The term "information" is as tricky as the term "random" or
"infinite".
With comp, pure noise (iterated self-duplication) multiplies "freely".
But the deep things arrive in the many probable relative pieces of
information on possible histories; in the limit, the redundancy makes
the deep difference.
> However, I am
> relying on some of my thought experiments that assumed physicalism as
> support for my conclusion of "informationalism".
>
> But I think the discontinuity between first and third person
> experience is another important clue, because I think that this break
> will be noticeable to all rational conscious entities in all possible
> worlds (even chaotic, irrational worlds). They should all notice a
> difference in kind between what is observed (no matter how crazy it
> is), and the subjective experience of making the observation.
>
> Further, let's say that I am a rational observer in a world where
> changes to brain structure do not appear to cause changes to behavior
> or subjective experience.
? (I guess here you come back to the idea that comp is false, don't
you?) Remember comp is incompatible with the very idea that there is a
material thing somewhere (by UDA). Ontologically there are only numbers
together with their additive and multiplicative structure. This
determines the whole dreamy-reality inside-views structure of the
number-theoretical "matrix". See my URL if interested.
It seems to be consistent with your ontological view, but with comp,
computer science provides the math for the epistemological side, and
thanks to the incompleteness phenomenon (and others) all the nuances
give very different and, as such, incompatible inside views of
Arithmetic. Most are divided in two: the communicable and the
non-communicable.
> Physicalism wouldn't have much appeal in
> this world.
In any world. Weak materialism, the idea that matter exists
*primitively*, is provably false in the comp theory, with a minimal use
of Occam.
Matter is not the answer, matter is the question. Reversal.
> Rather, dualism would seem to have a clear edge as the
> default explanation.
Dualism does not work at all. It uses the identity theory. UDA
steps 8 and 7 give no choice in the matter (no pun intended).
But you are a platonist, so we can agree on this.
> But it might be even easier to make the leap to
> platonism in such a world, as presumably Plato's "ideal forms" might
> be even more appealing.
Especially with the Church thesis. Theoretical computer science provides
an entirely new realm. Theoretical computer science is a branch of math
100% unrelated to physics, a priori. Of course many physicists
follow Landauer and his idea that reality is based on quantum
information, but my point is that even that move cannot work: quantum
information has to be a first-person-plural reflection of classical
digitalness. Bits and qubits are related by a double arrow. This makes
comp a testable theory. David Deutsch also seems to believe that
physical computability supersedes in fundamentality the usual à-la-
Post-Church-Kleene-Turing classical computability, but I provide a
reason to believe that this form of quantum fundamentality is a
consequence of the fundamentality of numbers, in their extensional
relations (like Fermat's theorem) and intensional relations (like in
Kleene's or Gödel's theorems).
See my paper on Plotinus. The universal machine seems to plagiarize
Plotinus!
We have a theology here. Correct machines are always humble and can
*only* pray on their consistency, and pray and work on the
satisfiability of their dreams.
> So in such a world you wouldn't get to
> platonism by way of thinking about computer simulations of brains
> (since brain activity isn't correlated with behavior),
If you mean "brain physical activity isn't correlated in an one-one
way with subjectivity", then I am OK.
> but I think you
> would still get there.
>
> The question is, what kind of world would NOT lead you to Platonism? I
> think only a world that didn't have first person experience.
We agree on Platonism. But come on, the Pythagoreans already knew
that numbers kick back. Since then we have learned that there are
universal numbers which kick back universally. Comp and computer science
are interesting for providing the shape (the math) of the uncomputable
and insoluble that machines have to live with and sometimes name. And
physics is solidified by its ultimate foundation in arithmetic. With
comp there is a lot of work to do, that's sure.
> and I think it's the idea behind Maudlin's Olympia thought experiment as well.
OK, I hadn't been able to find Maudlin's paper online, but I finally located a pdf copy in a post from this list at http://www.mail-archive.com/everyth...@googlegroups.com/msg07657.html ...now that I read it I see the argument is distinct from Chalmers' "Does a Rock Implement Every Finite-State Automaton", although they are thematically similar in that they both deal with difficulties in defining what it means for a given physical system to "implement" a given computation. Chalmers' idea was that the idea of a rock implementing every possible computer program could be avoided if we defined an "implementation" in terms of counterfactuals, but Maudlin argues that this contradicts the "supervenience thesis" which says that "the presence or absence of inert, causally isolated objects cannot effect the presence or absence of phenomenal states associated with a system", since two systems may have different counterfactual structures merely by virtue of an inert subsystem in one which *would have* become active if the initial state of the system had been slightly different.
It seems to me that there might be ways of defining "causal structure" which don't depend on counterfactuals, though. One idea I had is that for any system which changes state in a lawlike way over time, all facts about events in the system's history can be represented as a collection of propositions, and then causal structure might be understood in terms of logical relations between propositions, given knowledge of the laws governing the system. As an example, if the system was a cellular automaton, one might have a collection of propositions like "cell 156 is colored black at time-step 36", and if you know the rules for how the cells are updated on each time-step, then knowing some subsets of propositions would allow you to deduce others (for example, if you have a set of propositions that tell you the states of all the cells surrounding cell 71 at time-step 106, in most cellular automata that would allow you to figure out the state of cell 71 at the subsequent time-step 107). If the laws of physics in our universe are deterministic, then you should in principle be able to represent all facts about the state of the universe at all times as a giant (probably infinite) set of propositions as well, and given knowledge of the laws, knowing certain subsets of these propositions would allow you to deduce others.
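A minimal sketch of that cellular-automaton illustration (the choice of
elementary Rule 110 and the particular row are mine; any deterministic
rule would do): the propositions giving a cell's three-cell neighbourhood
at step t fix that cell's state at step t+1.

    RULE = 110   # any elementary cellular-automaton rule would serve

    def next_cell(left: int, centre: int, right: int) -> int:
        index = (left << 2) | (centre << 1) | right
        return (RULE >> index) & 1

    def step(row):
        n = len(row)
        return [next_cell(row[(i - 1) % n], row[i], row[(i + 1) % n]) for i in range(n)]

    row_t  = [0, 0, 0, 1, 0, 0, 0, 0]   # "propositions" about every cell at step t
    row_t1 = step(row_t)                 # they entail the whole of step t+1

    # e.g. the proposition about cell 3 at t+1 follows from cells 2, 3, 4 at t:
    assert row_t1[3] == next_cell(row_t[2], row_t[3], row_t[4])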
"Causal structure" could then be defined in terms of what logical relations hold between the propositions, given knowledge of the laws governing the system. Perhaps in one system you might find a set of four propositions A, B, C, D such that if you know the system's laws, you can see that A&B imply C, and D implies A, but no other proposition or group of propositions in this set of four are sufficient to deduce any of the others in this set. Then in another system you might find a set of four propositions X, Y, Z and W such that W&Z imply Y, and X implies W, but those are the only deductions you can make from within this set. In this case you can say these two different sets of four propositions represent instantiations of the same causal structure, since if you map W to A, Z to B, Y to C, and D to X then you can see an isomorphism in the logical relations. That's obviously a very simple causal structure involving only 4 events, but one might define much more complex causal structures and then check if there was any subset of events in a system's history that matched that structure. And the propositions could be restricted to ones concerning events that actually did occur in the system's history, with no counterfactual propositions about what would have happened if the system's initial state had been different.
Thinking in this way, it's not obvious that Maudlin is right when he assumes that the original "Olympia" defined on p. 418-419 of the paper cannot be implementing a unique computation that gives rise to complex conscious experiences. It's true that the armature itself is not responding in any way to the states of successive troughs it passes over, but there is an aspect of the setup that might give the system a nontrivial causal structure, namely the fact that certain troughs may be connected by pipes to other troughs in the sequence, so that as the armature empties or fills one it is also emptying or filling the one it's connected to (this is done to emulate the idea of a Turing machine's read/write head returning to the same memory address multiple times, even though Olympia's armature just steadily progresses down the line of troughs in sequence--troughs connected by pipes are supposed to represent a single memory address). If we represented the Olympia system as a set of propositions about the state of each trough and the position of the armature at each time-step, then the fact that the armature's interaction with one trough changes the state of another trough the armature won't visit until a later step may be enough to give different programs markedly different causal structures, in spite of the fact that the armature itself is just dumbly moving from one trough to the next.
I see no contradiction in a "noticeable gap in consciousness". Whether
noticing such a gap depends on having some theory of the world or is
intrinsic seems to be the question.
Brent