Consciousness is information?


Kelly

Apr 20, 2009, 1:47:35 AM
to Everything List
What is the advantage of assigning consciousness to computational
processes (e.g. UDA), as opposed to just assigning it to the
information that is produced by computational processes?

For example, to take Maudlin's "Computation and Consciousness" paper,
if you just say that consciousness is found in the information
represented by the arrangement of the empty or full water troughs,
then that basically removes the problem he is pointing out.

Similarly, associating consciousness only with information seems to
resolve the problem of random processes interfering with the causal
structure of physically implemented computations, which, despite
having their causal chain interrupted, would still seem to produce
consciousness. (More on the irrelevance of causality:
http://platonicmindscape.blogspot.com/2009/02/irrelevance-of-causality.html)

Bruno Marchal has mentioned this in his movie graph argument, where a
cosmic ray interrupts a logical operation in a transistor on a
computer that is running a brain simulation, but due to good fortune
the result of the operation is still correct despite the break in the
causal chain that produced the answer.

Consciousness being associated with information would also seem to address
the problems with Davidson's "swampman" scenario, and the related
quantum swampman scenario (http://platonicmindscape.blogspot.com/
2009/03/quantum-swampman.html).

So, many different programs can produce the same information, using
many different algorithms, optimizations, shortcuts, etc. But if all
of these programs accurately simulate the same brain, then they
should produce the same conscious experience, regardless of the
various implementation details.

The most obvious thing that all such programs would have in common is
that they work with the same information...the state of the brain at
each given time slice. Even if this state is stored in different
forms by each of the various programs, there must always be a mapping
between those various storage formats, as well as a mapping back to
the original brain whose activity is being simulated.

Therefore, it seems better to me to say: Consciousness is
information, not the processes that produce the information.

What are the drawbacks of this view when contrasted with
computationalism?

Brent Meeker

Apr 20, 2009, 2:04:20 AM
to everyth...@googlegroups.com
The main difficulty I see is that it fails to explain the sequential
aspect of consciousness. If consciousness is identified with
information then it is atemporal. There are attempts to overcome this
objection by assuming a discretized consciousness and identifying
sequence with a partial ordering by similarity or content, but I find
them unconvincing because when you chop consciousness into "moments"
then the "moments" have very little content and it's not clear that it
is enough to define a sequence. It seems you have to allow each "moment"
to have a small duration - and then you're back to process. Or instead of
expanding consciousness in the time direction, you could get enough
information by expanding in the "orthogonal" direction - i.e. including
unconscious things like information stored in memory but not being
recalled (at the moment). But then you've slipped physics in.

Brent

Jason Resch

Apr 20, 2009, 3:27:40 AM
to everyth...@googlegroups.com
I think in regard to consciousness, you can't have one without the other.
Both information and computation are needed, as the computation
imparts meaning to the information, and the information accumulates
meaning making each computation and its result more meaningful.

If I sent you an arbitrary binary string, it would have no meaning
unless you either knew in advance how to interpret it or how it was
produced. Either interpretation or understanding of how it was
produced can be described with computer programs, but without that
foreknowledge the binary string is meaningless because there would be
an infinite number of ways to interpret that string.

To understand how information "accumulates" through successive
computations, consider how today's most common processors can only
operate on 32-bit numbers at a time, yet like any Turing machine they
are nonetheless capable of performing any computation, including those
involving numbers much larger than can be expressed in 32-bits.
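
As a rough sketch of how that works (illustrative Python, not tied to any particular processor): a number too big for one machine word can be stored as a list of 32-bit "limbs", and addition proceeds limb by limb with a carry, so each step only ever handles word-sized pieces.

    MASK = 0xFFFFFFFF  # 2**32 - 1

    def add_bignum(a, b):
        """Add two numbers stored as lists of 32-bit limbs (least significant first)."""
        result, carry = [], 0
        for i in range(max(len(a), len(b))):
            limb_a = a[i] if i < len(a) else 0
            limb_b = b[i] if i < len(b) else 0
            total = limb_a + limb_b + carry   # fits easily in one 64-bit register
            result.append(total & MASK)       # keep the low 32 bits
            carry = total >> 32               # carry the overflow to the next limb
        if carry:
            result.append(carry)
        return result

    # (2**32 + 1) + (2**32 - 1) = 2**33, i.e. limbs [0, 2]
    print(add_bignum([1, 1], [MASK]))  # -> [0, 2]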

Consider what neurons do (at least artificial ones): essentially
they only multiply and add (multiply the strength of a received signal
by the connection strength, then sum the received signals to determine
whether they meet the threshold to fire). At a low level the additions
might correspond to the intensity of one color for one pixel in a
visual field, say the brightness of red. Another neuron might then
sum the intensities of red, green, and blue colors to arrive at a
color for that pixel, while another one aggregates a collection of
those results into a field of colors. Finally this field of colors
might be processed by an object identification part of the neural
network to identify objects. Whether an object is identified
as a cat or a dog might ultimately be determined by the firing of
just one neuron, yet at every stage the same basic computation is done
(multiplication and addition). The only difference is the consequence
of the computation at each stage; how it is ultimately interpreted by
the next level.
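
In code, that basic multiply-and-add step is tiny (a sketch with made-up weights and threshold, nothing from a real network):

    def neuron_fires(inputs, weights, threshold):
        """Multiply each input signal by its connection strength, sum, compare to a threshold."""
        total = sum(signal * weight for signal, weight in zip(inputs, weights))
        return total >= threshold

    # e.g. combining red, green and blue intensities for one pixel
    print(neuron_fires([0.9, 0.1, 0.2], [0.5, 0.3, 0.2], threshold=0.4))  # True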

So the question comes down to where the consciousness lies: in
the computation of the information, in the computed result, or in the
computations upon the computed results. Maybe it requires a loop of
such hierarchies as Douglas Hofstadter suggests. I don't have an
answer but it is something I too wonder about.

Jason

Stathis Papaioannou

Apr 20, 2009, 8:14:31 AM
to everyth...@googlegroups.com
2009/4/20 Kelly <harm...@gmail.com>:

The drawback is that any physical system (which could be mapped onto
any information or any computation) would be conscious. This is only a
drawback if you believe, I guess as a matter of faith, that it is
false.


--
Stathis Papaioannou

Brent Meeker

Apr 20, 2009, 8:50:20 AM
to everyth...@googlegroups.com

I think "meaning" ultimately must be grounded in action. That's why
it's hard to see where the meaning lies in a computation, something that
is just the manipulation of strings. People tend to say the meaning is
in the interpretation, noting that the same string of 1s and 0s can have
different interpretations. But what constitutes interpretation? I
think it is interaction with the world. If you say, "What's a cat?"
and I point and say, "That." then I've interpreted "cat" (perhaps
wrongly if I point to a dog).

Brent


Jesse Mazer

Apr 20, 2009, 10:01:07 AM
to everyth...@googlegroups.com
Brent Meeker wrote:
> I think "meaning" ultimately must be grounded in action. That's why
> it's hard to see where the meaning lies in a computation, something that
> is just the manipulation of strings. People tend to say the meaning is
> in the interpretation, noting that the same string of 1s and 0s can have
> different interpretations. But what constitutes interpretation? I
> think it is interaction with the world. If you say, "What's a cat?"
> and I point and say, "That." then I've interpreted "cat" (perhaps
> wrongly if I point to a dog).
>

Well, suppose you have an A.I. computer program that's running a robot body--if you say "what's a cat" and the robot looks at a cat and points at it, and more generally interacts with the world and uses language in a way that suggests humanlike intelligence, do you grant that it probably has consciousness and that its statements have meaning? If so, suppose you take the same program and let it run a simulated body in a simulated world, and when some other simulated fellow asks it "what's a cat", it now points at a simulated cat in this world. Has your opinion about the consciousness/meaning-creation of this program changed because it's only taking actions in a simulated world rather than our "real" world?

Bruno Marchal

Apr 20, 2009, 10:03:07 AM
to everyth...@googlegroups.com

A computation is a sequence of numbers (or of strings, or of
combinators, etc.) resulting from an interpretation. For such an
interpretation, you don't need a "world", only an "interpreter" that
is a universal system, like elementary arithmetic for example. If you
invoke a world you will run into the usual "physical supervenience"
trouble. If you abstract from the interpreter you run into the
confusion between a computation and a description of a computation. It
is useful to fix once and for all the universal system. A
computation can then be defined by a sequence of numbers, but there is
an implicit universal system behind it. The concept of information may
be a little too quantitative and static in this setting, and it
probably plays a bigger role in the notion of the content of specific
conscious experiences.
The key notion needed to define "computation" is the notion of a
universal system or machine.

Bruno


http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

Apr 20, 2009, 10:19:46 AM
to everyth...@googlegroups.com

I would say that the drawback is that consciousness is not
information, although the idea that we can be conscious *of* something
is obviously related to information management.
The UDA is supposed to show that, having said yes to the doctor, we
have to define the notion of a physical system from the notion of
computation; only with such a definition at hand can we see whether
any physical system implements any computation, which I doubt (except
in some trivial senses).
Also, consciousness is a first person attribute (indeed the
paradigmatic first person attribute). As such I can associate my
consciousness only with the infinity of computations, in arithmetic,
going through my state. The physical has to emerge from the statistical
probability interference among all computations, going through my
(current) states that are indiscernible from my point of view.
Why such interference takes the form of wave interference is still a
(technical) open problem.

Bruno


http://iridia.ulb.ac.be/~marchal/

Brent Meeker

Apr 20, 2009, 11:36:33 AM
to everyth...@googlegroups.com

No, I would agree that the robot is conscious. For the simulated world
I think the answer is a little more complicated. Of course the
simulated robot is conscious relative to the simulated world, but as
Stathis pointed out, that the program is simulating a robot and a cat is
a consequence of our mapping of the program onto our world. There are
many possible mappings so the program might also be a simulation of this
email exchange in our world. All it takes is a different mapping. So I
would say the simulated robot's consciousness is relative to the
simulated world, which in turn takes its meaning from us (and might well
have other "meanings"). In general we rely on the programmer to tell us
the interpretation in terms of our world.

Brent

Brent Meeker

Apr 20, 2009, 11:41:12 AM
to everyth...@googlegroups.com

You put scare quotes around "interpreter". I don't see how arithmetic
is an interpreter - isn't it an interpretation (of Peano's axioms)? And
how does arithmetic avoid the problem of arbitrarily many mappings, as
raised by Stathis?

Brent


Kelly

Apr 20, 2009, 11:13:54 PM
to Everything List
On Apr 20, 2:04 am, Brent Meeker <meeke...@dslextreme.com> wrote:
>
> The main difficulty I see is that it fails to explain the sequential
> aspect of consciousness. If consciousness is identified with
> information then it is atemporal.
>

Time is just the dimension of experience. But experience is an
internal "psychological" concept, not an external concept. Therefore
"time" is also an internal feature of subjective experience, not
necessarily an external feature of objective reality.

So it seems to me that we have no direct access to the physical
world. Information about the physical world is conveyed to us via our
senses. BUT, we don't even have direct conscious access to our
sensory data. All of that sensory data is instead apparently heavily
processed by various neural subsystems and "feature detectors", the
outputs of which are then reintegrated into a simplified mental model
of reality, and THAT is what we are actually aware of. That mental
model is what we think of as "the real world". So it seems to me
that, even accepting physicalism, we can already think of ourselves as
living in a virtual world of abstract information.

The same is true of time. We experience time only because we
represent that experience internally as part of our simplified model
of the world. If there is an external time, it could be altered in
many ways, but our internal representation (and experience) of time
will remain unchanged. Time derives from Consciousness. Not vice
versa. Time IS an aspect of consciousness...and thus doesn't exist
separately from conscious experience.

And also you can go back to the computer simulation idea and think
about various scenarios. If you and your environment were simulated
on a fast computer or a slow computer...you wouldn't be able to tell
the difference. If the computer ran for a while, then the simulation
data was saved and the computer turned off, then a year later the
computer and the simulation were restarted where they left off, you
would have no way to detect that a year had passed in "external"
time. To you in the simulation, it would be as though nothing had
happened, because the computer simulation would pick up on the same
exact calculation where it had left off. There was no interruption in
your experience of time.

I agree that experience and consciousness require changes of state,
but I don't agree that it must be change with respect to an external
physical "time" dimension. The best analogy that I have heard is that
if you have a non-horizontal line, its Y value changes with respect
to the X axis. So some piece of information (the Y value) "changes"
with respect to another set of values (the X axis). But there is no
time involved in this type of change.
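
A toy illustration of that kind of change (illustrative values only): a static table of (x, y) pairs in which y "changes" with respect to x, with no time dimension anywhere in the data.

    # A static line y = 2*x + 1 tabulated over x; the "change" in y is just a
    # relation between entries, not a process unfolding in time.
    line = [(x, 2 * x + 1) for x in range(5)]   # [(0, 1), (1, 3), (2, 5), (3, 7), (4, 9)]
    differences = [line[i + 1][1] - line[i][1] for i in range(len(line) - 1)]
    print(differences)  # [2, 2, 2, 2]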

Your experience of the X axis will depend on how you represent the X
axis internally in your model of reality. Maybe you will experience
the X axis spatially...maybe you will experience it chronologically,
maybe you will experience it some other way entirely. Your experience
of it depends entirely on how it is represented internally in the
information that produces your conscious experience.

I think that the Sherlock Holmes approach is the correct one for
investigating and explaining the nature of consciousness and reality:
"When you have eliminated the impossible, whatever remains, however
improbable, must be the truth."

I come to the conclusion that consciousness is information by way of
process of elimination. I can think of experiments or scenarios where
you can do away with everything except information and still get
behavior that seems conscious and which therefore I assume is actually
conscious. Information is the only common factor in all situations
where consciousness seems to be in evidence. And really, it doesn't
seem that counter-intuitive to me that information is ultimately what
makes me what I am.

So, I agree with David Chalmers that the idea that some (all?)
information is conscious in some way is a fundamental aspect of
information, and not really reducible to more fundamental descriptions
or processes. Which again makes sense...how can you get more
fundamental than "information"?



Kelly

Apr 20, 2009, 11:24:11 PM
to Everything List
On Apr 20, 8:14 am, Stathis Papaioannou <stath...@gmail.com> wrote:
>
> The drawback is that any physical system (which could be mapped onto
> any information or any computation) would be conscious. This is only a
> drawback if you believe, I guess as a matter of faith, that it is
> false.

Right, the "Putnam mapping" thing. And the related idea that with the
correct "one-time pad" you can extract any information from any
source. And the whole "rock implementing every finate state automata"
thing.
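
The one-time-pad point can be made concrete (a sketch with arbitrary example bytes): for any source whatsoever and any target message, there is a pad that "extracts" that message from that source, so the information is really in the mapping rather than in the source.

    def make_pad(source: bytes, target: bytes) -> bytes:
        """Build the pad that XOR-decodes this particular source into this particular target."""
        return bytes(s ^ t for s, t in zip(source, target))

    source = bytes([17, 250, 3, 99, 180, 7, 42, 211, 90, 140, 61])  # any bytes at all
    pad = make_pad(source, b"hello world")
    print(bytes(s ^ p for s, p in zip(source, pad)))  # b'hello world'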

Given that information is so ubiquitous, I think it's best to just
bite the bullet and go with the idea that information exists
independently of any physical substrate, and without needing a source
(as in some variation of platonic realism). But, if you combine this
with the idea that consciousness is information, and take this as your
fundamental basis of reality, then there really are no other
questions. Everything else follows I think.

This does lead one to conclude that most conscious observers see
chaotic and nonsensical realities, because most possible information
patterns are random-ish and chaotic. BUT, so be it. We have examples
of such conscious observers right here in every day life. People with
schizophrenia, dementia, hallucinations, etc. All of these conditions
are caused by disruptions in the information represented by the brain.
Which is why I think that even starting with the assumption of
physicalism, you're still led back to idealism.

And of course, you have experience of nonsensical realities yourself,
when you dream. I would say the worlds we encounter in our dreams are
just as real (or unreal) as the world we see when we are awake, BUT we
don't spend much time there, and when we wake our memories of the
dream worlds fade and lose intensity. So we give them subordinate
status.

I would say that every possible conscious observer exists in a reality
of their own perceptions. And every perceivable reality (both hellish
and heavenly) IS perceived by every observer capable of perceiving it.
And the reason for this is that the information for these perceptions
exists in a platonic sense.


Kelly

Apr 21, 2009, 12:01:57 AM
to Everything List
On Apr 20, 3:27 am, Jason Resch <jasonre...@gmail.com> wrote:
> If I sent you an arbitrary binary string, it would have no meaning
> unless you either knew in advance how to interpret it or how it was
> produced.  Either interpretation or understanding of how it was
> produced can be described with computer programs, but without that
> foreknowledge the binary string is meaningless because there would be
> an infinite number of ways to interpret that string.

It seems to me that the particular causal structure of a computer
running a simulation is an "implementation detail" of that particular
computer's architecture, and that any physical system that produces
outputs that can be mapped to a conscious human brain will produce
consciousness in the same way that the human brain does. Only the
outputs matter. Which is to say that only the information matters.

Being able to follow the causal chain of the computer running the
simulation is important in being able to interpret the outputs of the
simulation, and also is important to being able to have confidence
that the simulation is actually running correctly, AND is also
important in terms of knowing how to feed inputs into the simulation
(assuming that the simulated consciousness isn't living in a simulated
world which provides the inputs).

So causality is critical to us in viewing, interpreting, and
interacting with the simulation.

However, while causal chains are useful/necessary for working with and
interpreting output from physically implemented computers, I don't
see that they are essential for producing first person conscious
experience, which doesn't require third person interpretation.

A random physical system (say a dust cloud) could "accidentally" have
the right internal structure so as to be equivalent to a conscious
human brain over some period of time (http://
schwitzsplinters.blogspot.com/2009/01/dust-hypothesis.html). But,
that doesn't mean that it's a good idea to go looking at dust clouds
to find interesting simulations of conscious entities. This would be
like converting data from a radioactive decay counter into ASCII and
scanning the resulting character stream looking for a new blockbuster
novel. If you live long enough and never give up, you'll eventually
find your novel, BUT it's not at all an efficient way of going about
the task.
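
A quick sketch of why (illustrative phrase, uniformly random bytes assumed): each byte of the stream matches a given phrase with probability 1/256, so even a short phrase is astronomically unlikely to show up in any amount of data you could realistically scan.

    import os

    phrase = b"dark and stormy"                # 15 bytes
    random_chunk = os.urandom(1_000_000)       # stand-in for the decay-counter output
    print(random_chunk.find(phrase))           # almost certainly -1 (not found)
    print(f"odds per position: 1 in {256 ** len(phrase):.3e}")  # roughly 1 in 1.3e36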




Brent Meeker

Apr 21, 2009, 1:50:02 AM
to everyth...@googlegroups.com
I don't disagree with any of your examples and ideas below. I agree
that consciousness deals with models of the world (assuming there is a
world). I agree that time is just sequence (I referred to the
"sequential aspect of consciousness). But ISTM that each of your
examples implicitly or even explicitly depends on a process, over and
above the information:

"...sensory data is instead apparently heavily *processed*..."

"...we *represent* that experience internally..."

"... and still get *behavior*..."

Brent

John Mikes

Apr 21, 2009, 9:51:30 AM
to everyth...@googlegroups.com
Jesse, I always appreciated your posts as considerate, logical and most professional. Now I am not so sure...
Brent mixed up the concepts a bit, even stirring interpretation into meaning; you speak about "our real world" - a joke. All because both of you are infected with a physicalistic, computer-minded thinking, a product of our 20th-century aberration of too much epistemic enrichment without the necessary base to use it properly. The old Greeks had it easy: with the minuscule 'knowledge-base' they had, it was just dandy to use their pure logic - which is tarnished in today's thinking, when a preschooler knows more about the universe than a Greek sage did. Not that I would vouch for the trueness of our knowledge; it is "interpreted" perceived reality, with lots of explanatory artifacts (figments) to match the 'equations'.
 
Let me épater le bourgeois: there is no such thing as meaning; we put it out by our ways of thinking. Vocabularies are not God-given(!). Loaded words are real. Diverse meanings are context-dependent. Information is also not an 'existing thing'; it is something we absorb from relations that reach our cognizance. Context-dependent again. Now what is this mysterious context? It is our setup in our perceived reality, which we apply in a certain case. Again our doing.
No two minds(?) work identically in ALL respects (this caution is for Bruno, who may (I don't know) posit that anybody with a mathematically inclined mind would work similarly with numbers; I am not sure). So if you identify a dog as a cat, it is your meaning - not mine.
 
I agreed with Brent that meaning is based on action - I was in the mindset of considering everything as 'action' until the question arose: what triggers such action, what provides the necessities to it (I don't know what to call energy) and the ways it proceeds? so what we see is what we supposed/assumed. Is that your "real world"?
I THINK we are part of an existence all interconnected in ways about which we have no idea, but explain it to the ever actual level of our epistemic cognitive inventory. There are relations that may be viewed in diverse aspects and the change of our views is interpreted in our physics impeded thinking as movement, action, change, function, etc.
It is hard to separate our figments from our own thinking: we think in them. KNOW we don't. Our perception is limited and we cannot include the totality (I think thinking in numbers is also only an aspect to try so).
It is comfortable to stay within our capabilities - we are not ready to accept that all we know is a fraction that fits our assumptions.
 
Sorry, I am far from expressing myself clearly, and that is not only a language problem. Human language - maybe.
 
John M
 
PS. My version of consciousness (universal): the course of responding to information (that is: in the above described sense). ANY. JM

Bruno Marchal

Apr 21, 2009, 11:31:57 AM
to everyth...@googlegroups.com

On 20 Apr 2009, at 17:41, Brent Meeker wrote:

>>
>> A computation is a sequence of numbers (or of strings, or of
>> combinators, etc.) as resulting by an interpretation. For such an
>> interpretation, you don't need a "world", only an "interpreter" that
>> is a universal system, like elementary arithmetic for example.
>
> You put scare quotes around "interpreter".


Just because it is not a human interpreter, but a programming language
interpreter. I use the term in the computer science sense.

> I don't see how arithmetic
> is an interpreter - isn't it an interpretation (of Peano's axioms)?


Usually I use "Arithmetic" for the (usual) standard interpretation (in
the human sense) of arithmetic. By arithmetic I was thinking of a
formal system such as the formal system Robinson Arithmetic (or Peano
Arithmetic depending on the context).

It is not so easy to show that Robinson Arithmetic is a Turing
universal interpreter, but it is standardly done in most good textbooks
in mathematical logic(*). It is no more extraordinary than the Turing
universality of the SK combinators, or the universality of
Conway's game of life, or the universality of any other little universal
system.


> And
> how does arithmetic avoid the problem of arbitrarily many mappings, as
> raised by Stathis?


Once you accept the computationalist hypothesis, not only is that problem
not avoided, but the problems of the existence of both physical laws
and consciousness are entirely reduced to it, or to the digital (UD)
version of that problem. The collection of all computations is a
well defined computational object, already existing or defined by a
tiny part of Arithmetical Truth, and not depending on the choice of
the initial basic formal system.

The mappings are well defined, though. The way Putnam, Mallah, Chalmers
and others put that problem just makes no sense with comp, given that
they postulated some primitively material or substantial universe,
which does not make any sense (as I have argued already). Then they
confuse a computation with a description of a computation. Sometimes
they also use the idea that real numbers actually occur in nature,
which just adds confusion. Now I usually don't insist on that because,
even if such mappings made sense, they would just add computational
histories to the universal dovetailing, or to Arithmetic, and this
does not change the measure problem. The only important fact here is
that with comp, the digitalness makes the measure problem well
defined: no mappings are arbitrary: either there is a computation or
there is no computation.
For example, with numbers and succession (but without addition and
multiplication) there is no universal computation, even if there is a
sense in which all descriptions of computations are there. A
counting algorithm does not constitute a universal dovetailing. Now,
numbers + addition + multiplication give universal computations and
thus all computations, with their typical super-redundancy, and the
measure problem makes sense. Ontologically we need no more.
Epistemologically we need *much* more; we need something so big that
even with the whole "Cantor Paradise" or the whole "Plato Heaven" at
our disposition we will not even be able to name what we need (and
that is how comp prevents first person reductionism or eliminativism,
and how it makes theology in need of a scientific endeavor, with
science = hypothetical axiomatics).

I agree with Kelly that we don't need a notion of causality, but we
need computations (Shannon information measures only a degree of
surprise, and consciousness is more general than being surprised, and
I agree with you that information is a static notion). But the
notion of computation needs the logical relations existing among
numbers, although other basic finite entities can be used in place
of numbers. In all cases, the computations exist through the logical
relations among those finite entities.

We could say that a state A accesses a state B if there is a
universal machine (a universal number relation) transforming A into B.
This works at the ontological level, or for the third person point of
view. But if A is a consciousness related state, then to evaluate the
probability of personal access to B, you have to take into account
*all* computations going from A to B, and thus you have to take into
account the infinitely many universal number relations transforming A
into B. Most of them are indiscernible by "you" because they differ
below "your" substitution level.


(*)
- Richard Epstein and Walter Carnielli, Computability, computable
Functions, Logic, and the Foundations of Mathematics, Wadsworth &
Brooks/Cole Mathematics series, Pacific Grove, California, 1989.
- Boolos, Burgess and Jeffrey, Computability and Logic, Cambridge
University Press, Fourth edition, 2002.

Bruno

http://iridia.ulb.ac.be/~marchal/

Brent Meeker

Apr 21, 2009, 12:59:33 PM
to everyth...@googlegroups.com
The question was whether information was enough, or whether something
else is needed for consciousness. I think that sequence is needed,
which we experience as the passage of time. When you speak of
computations "going from A to B" do you suppose that this provides the
sequence? In other words are the states of consciousness necessarily
computed in the same order as they are experienced or is the order
something intrinsic to the information in the states (i.e. like
Stathis's observer moments, which can be shuffled into any order without
changing the experience they instantiate).

A related question in my mind has to do with reversibility.
Computations in general are not reversible: Turing machines erase
symbols. You can't infer the factors from the product. But QM (without
collapse) is unitary and reversible in principle (though not in practice
because of statistical and light-speed reasons). So my question is, are
the computations of the UD reversible?
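
A minimal illustration of the irreversibility point (toy functions, nothing to do with the UD itself): a step that discards its inputs cannot be undone from the output alone, whereas a step that keeps one operand alongside the result can.

    def multiply(a, b):
        return a * b                      # 2*6 and 3*4 both give 12: the factors are lost

    def multiply_reversible(a, b):
        return (a * b, a)                 # keep one operand, so nothing is erased

    def undo(product, a):
        return (a, product // a)

    print(multiply(2, 6), multiply(3, 4))        # 12 12
    print(undo(*multiply_reversible(3, 4)))      # (3, 4)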

> and thus you have to take into
> account the infinitely many universal number relations transforming A
> into B. Most of them are indiscernible by "you" because they differ
> below "your" substitution level.
>

Does the UD have to complete the infinitely many computations from A to
B, i.e. must we think of these computations as being complete in Platonia?

Brent

Bruno Marchal

Apr 21, 2009, 2:00:06 PM
to everyth...@googlegroups.com

On 21 Apr 2009, at 18:59, Brent Meeker wrote:
>
> The question was whether information was enough, or whether something
> else is needed for consciousness. I think that sequence is needed,
> which we experience as the passage of time. When you speak of
> computations "going from A to B" do you suppose that this provides the
> sequence?

Not really. Subjective time, be it first person or first person plural
(and thus "physical"), relies on all computations made by the UD, and
the way they are taken into account is "self-referential".




> In other words are the states of consciousness necessarily
> computed in the same order

Your first person next instant depends on an infinity of computations
made by the UD. The time step of the UD is relevant, because it
determines the whole UD structure, but it is not related in any direct
way to "time". We can conjecture that the lower our substitution level
is, the more *time* looks like a computation independent of us:
so a relation of order can be made through indiscernible-computation
equivalence classes. I mean there are relations between states of
consciousness and computational histories, but the evolution of our
consciousness is not related directly to one computational sequence.


> as they are experienced or is the order
> something intrinsic to the information in the states (i.e. like
> Stathis'es observer moments which can be shuffled into any order
> without
> changing the experience they instantiate).

Consciousness is related to the sheaf of computations going through
those states. A computational state is a state of a computing
(mathematical) machine when doing a computation. The machine has to be
"run" or "executed" relatively to a universal machine. You need the
Peano or Robinson axioms to define such states and sequences of states.
You can shuffle them if you want, and somehow the UD does shuffle
them by its dovetailing procedure, but this will not change the
arithmetical facts that those states belong or not to such or such
computational histories. And consciousness relies on those
computational facts (and information plays an important role there, but
not to the point of identifying consciousness with information; I think
consciousness is more a filtering of information, somehow).



>
>
> A related question in my mind has to do with reversibility.
> Computations in general are not reversible: Turing machines erase
> symbols. You can't infer the factors from the product. But QM
> (without
> collapse) is unitary and reversible in principle (though not in
> practice
> because of statistical and light-speed reasons). So my question is,
> are
> the computations of the UD reversible?

I still have a residual doubt that a quantum computer makes sense
mathematically, but if it exists, then there exists a reversible
universal dovetailing.




>
>
>> and thus you have to take into
>> account the infinitely many universal number relations transforming A
>> into B. Most of them are indiscernible by "you" because they differ
>> below "your" substitution level.
>>
>
> Does the UD have to complete the infinitely many computations from A
> to
> B, i.e. we must think of these computations as being complete in
> Plationia?

Yes. Our first person expectations rely on the whole completion of
the UD, due to our non-awareness of the dovetailing delays. But it is
easier to describe the working of the UD by a program executed in
time than by an infinite set of arithmetical relations already true
in "Platonia".

If you accept comp, you accept that your "brain state" is accessed an
infinity of times by the UD through an infinity of computations. The
world you are observing is a sort of mean of all those computations,
from your point of view. But the "running of the UD" is just a
picturesque way to describe an infinite set of arithmetical relations.
From inside it is just a logical consequence that it looks analytical
and physical. Obviously a lot of work has to be done to see whether all
this will lead to a refutation of comp, or to a "theory of everything".

Bruno




http://iridia.ulb.ac.be/~marchal/



Brent Meeker

Apr 21, 2009, 2:33:01 PM
to everyth...@googlegroups.com
I understand that the UD computes all different histories so they are
interleaved. But each particular computation consists of an ordered set
of states. These states can belong to more than one sequence of
conscious experience. But the question is whether the order of the
states in the computation is always the same as their order in any
sequence of conscious experience in which they appear? For example, if
there is a computation of states A, B, and C then is that a possible
sequence in consciousness? In general there will be another, different
computation that computes the states in the order A, C, B, so is that
too a possible sequence in consciousness? Or is the experienced
sequence in consciousness the same - determined by something intrinsic
to the states?

> And consciousness relies on those
> computational facts (and information play important role there, but
> not up to identify consciousness and information. (I think
> consciousness is more a filtering of information, somehow).
>
>
>
>
>> A related question in my mind has to do with reversibility.
>> Computations in general are not reversible: Turing machines erase
>> symbols. You can't infer the factors from the product. But QM
>> (without
>> collapse) is unitary and reversible in principle (though not in
>> practice
>> because of statistical and light-speed reasons). So my question is,
>> are
>> the computations of the UD reversible?
>>
>
> I have still a residual doubt that a quantum computer makes sense
> mathematically, but if that exists, then there exist a reversible
> universal dovetailing.
>
>

I don't understand that remark. Universal dovetailing is a completely
abstract mathematical construct. It exists in Platonia. So how can the
existence of a reversible (i.e. information preserving) UD depend on
quantum computers?

Brent

John Mikes

Apr 21, 2009, 3:30:04 PM
to everyth...@googlegroups.com
Bruno,
you made my day when you wrote:
"SOMEHOW" - in:
 "...The machine has to be "runned" or "executed" relatively to a universal machine. You need the Peano or Robinson axiom to define such states and sequences of states.

You can shuffled them if you want, and somehow the UD does shuffle
them by its dovetailing procedure, but this will not change the
arithmetical facts that those states belong or not too such or such
computational histories...."
*
First: my vocabulary says about 'axiom' the reverse of how it is used: it is our artifact, invented in order to facilitate the application of our theories, IOW explanations for phenomena so poorly understood (if at all). So it is MADE up for exactly the purpose we evidence by it.
 
Second: UD "shuffles 'them' by the ominous 'somehow', (no idea: how?) but it has to be done for the result we invented as a 'must be'.
 
Third: the 'computational history' snapshots have to come together
(I am not referring to the sequence, rather to the combination of 'earlier' and 'later' snapshots into a continuum from a discontinuum). That marvel has bugged science for at least 250 years, since chemical "thinking" started.
A sequence of pictures is no history.
*
Then again: you wrote:
 "...The world you are observing is a sort of mean of all those computations, from your point of view. But the "running of the UD" is just a picturesque way to describe an infinite set of arithmetical relations..."
 
I am not sure about the "mean", since we are not capable of even noticing 'all of them', let alone evaluating the totality for a 'mean' - in my non-arithmetic vocabulary: a median "meaning" of them all (nonsense).
Your words may be a flowery (math, that is) expression of 'viewing the totality in its entirety', which is just as impossible (for us, today) as realizing your 'infinite set of arithmetical relations'. If I leave out the 'arithmetical' (or substitute it by my meaningfulness) then we come together in 'viewing the totality' in our individual wording-ways.
"Relations" is the punctum saliens; it is a loose enough term to cover whatever is beyond our present comprehension. When relations look different (maybe just by our observation from a different aspect?) we translate it into physical terms like change, movement, reaction, process or else, not realizing that WE look at it from different connotations.
Add to that our coordinates (space and time) in the limited view we can muster (I call it: "model") and we arrive at the causality of the conventional sciences (and common sense thinking as well).
Indeed it is our personal (mini)-solipsistic perceived reality of OUR world,
washed into some common pattern (partially!) by comp or math or else.
Under the maze of such a covering umbrella we believe in adjusted thinking.
*
Please do not conclude any denial on my part of the 'somehow' topics, the process-function-change manipulations (unknown, as I said);
it is only a reference to my ignorance, directed by my agnosticism towards made-up explanations of any cultural era (and changing fast).
John M

Kelly

Apr 22, 2009, 2:55:57 AM
to Everything List
On Apr 21, 11:31 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> We could say that a state A access to a state B if there is a
> universal machine (a universal number relation) transforming A into B.
> This works at the ontological level, or for the third person point of
> view. But if A is a consciousness related state, then to evaluate the
> probability of personal access to B, you have to take into account
> *all* computations going from A to B, and thus you have to take into
> account the infinitely many universal number relations transforming A
> into B. Most of them are indiscernible by "you" because they differ
> below "your" substitution level.

So, going back to some of your other posts about "transmitting" a copy
of a person from Brussels to Moscow. What is it that is transmitted?
Information, right? So for that to be a plausible scenario we have to
say that a person at a particular instant in time can be fully
described by some set of data.

It would seem to me that their conscious state at that instant must be
recoverable from that set of data. The only question is, what
conditions must be met for them to "experience" this state, which is
completely described by the data set? I don't see any obvious reason
why anything additional is needed. What does computation really add
to this?

You say that computation is crucial for this "experience" to take
place. But why would this be so? Why couldn't we just say that your
various types of mathematical logic can describe various types of
correlations, categories, patterns, and relationships between
informational states, but don't actually contribute anything to
conscious experience?

Conscious experience is with the information. Not with the
computations that describe the relations between various informational
states.

> But if A is a consciousness related state, then to evaluate
> the probability of personal access to B, you have to take
> into account *all* computations going from A to B

I don't see how probability enters into it. A and B are both fully
contained conscious states. Both will be realized, because both
platonically exist as possible sets of information. State B may have
a "memory" of State A. State A may have an "expectation" (or
premonition) of State B. But that is the only link between the two.
Otherwise they exist independently.

So Brian Greene had a good passage somewhat addressing this in his
last book. He's actually talking about the block universe idea, but
still applicable I think:

"In this way of thinking, events, regardless of when they happen from
any particular perspective, just are. They all exist. They eternally
occupy their particular point in spacetime. There is no flow. If you
were having a great time at the stroke of midnight on New Year's Eve,
1999, you still are, since that is just one immutable location in
spacetime.

The flowing sensation from one moment to the next arises from our
conscious recognition of change in our thoughts, feelings, and
perceptions. Each moment in spacetime - each time slice - is like one
of the still frames in a film. It exists whether or not some projector
light illuminates it. To the you who is in any such moment, it is the
now, it is the moment you experience at that moment. And it always
will be. Moreover, within each individual slice, your thoughts and
memories are sufficiently rich to yield a sense that time has
continuously flowed to that moment. This feeling, this sensation that
time is flowing, doesn't require previous moments - previous frames -
to be sequentially illuminated."

On your earlier post:

> The physical has to emerge from the statistical
> probability interference among all computations, going through my
> (current) states that are indiscernible from my point of view.
> Why such interference takes the form of wave interference is still a
> (technical) open problem.

In my view, I just happen to inhabit a perceptual universe that is
fairly orderly and follows laws of cause and effect. However, there
are other conscious observers (including other versions of me) who
inhabit perceptual universes that are much more chaotic and
nonsensical.

But everything that can be consciously experienced is experienced,
because there exists information (platonically) that describes a mind
(human, animal, or other) having that experience.

I say that because it seems to me that this information could
(theoretically) be produced by a computer simulation of such a mind,
which would presumably be conscious. So add platonism to that, and
there you go!



Stathis Papaioannou

Apr 22, 2009, 7:56:46 AM
to everyth...@googlegroups.com
2009/4/22 Brent Meeker <meek...@dslextreme.com>:

> The question was whether information was enough, or whether something
> else is needed for consciousness.  I think that sequence is needed,
> which we experience as the passage of time.  When you speak of
> computations "going from A to B" do you suppose that this provides the
> sequence?  In other words are the states of consciousness necessarily
> computed in the same order  as they are experienced or is the order
> something intrinsic to the information in the states (i.e. like
> Stathis'es observer moments which can be shuffled into any order without
> changing the experience they instantiate).

Say a machine is in two separate parts M1 and M2, and the information
on M1 in state A is written to a punchcard, walked over to M2, loaded,
and M2 goes into state B. Then what you are suggesting is that this
sequence could give rise to a few moments of consciousness, since A
and B are causally connected; whereas if M1 and M2 simply went into
the same respective states A and B at random, this would not give rise
to the same consciousness, since the states would not have the right
causal connection. Right?

But then you could come up with variations on this experiment where
the transfer of information doesn't happen in as straightforward a
manner. For example, what if the operator who walks over the punchcard
gets it mixed up in a filing cabinet full of all the possible
punchcard variations, and either (a) loads one of the cards into M2
because he gets a special vibe about it and it happens to be the right
one, or (b) loads all of the punchcards into M2 in turn so as to be
sure that the right one is among them? Would the machine be conscious
if the operator loads the right card knowingly, but not if he is just
lucky, and not if he is ignorant but systematic? If so, how could the
computation know about the psychological state of the operator?


--
Stathis Papaioannou

Brent Meeker

Apr 22, 2009, 11:49:58 AM
to everyth...@googlegroups.com
Here you are assuming the point in question - whether the states are, by
themselves, conscious. If they are then it would imply that a record,
written on paper or a CD, of the state information transmitted in
Bruno's thought experiment would also be conscious. Even further, if
you identify information as a Platonic form, then it doesn't even need a
physical instantiation. The conscious state will simply exist like the
number two exists.

> Both will be realized, because both
> platonically exist as possible sets of information. State B may have
> a "memory" of State A. State A may have an "expectation" (or
> premonition) of State B. But that is the only link between the two.
> Otherwise the exist independenty.
>
> So Brian Greene had a good passage somewhat addressing this in his
> last book. He's actually talking about the block universe idea, but
> still applicable I think:
>
> "In this way of thinking, events, regardless of when they happen from
> any particular perspective, just are. They all exist. They eternally
> occupy their particular point in spacetime. This is no flow.

But Greene is assuming a real-line topology, so a sequence of
consciousness is connected.

> If you
> were having a great time at the stroke of midnight on New Year's Eve,
> 1999, you still are, since that is just one immutable location in
> spacetime.
>
> The flowing sensation from one moment to the next arises from our
> conscious recognition of change in our thoughts, feelings, and
> perceptions. Each moment in spacetime - each time slice - is like one
> of the still frames in a film.

Again, that is part of the question: is the universe digital?

> It exists whether or not some projector
> light illuminates it. To the you who is in any such moment, it is the
> now, it is the moment you experience at that moment. And it always
> will be. Moreover, within each individual slice, your thoughts and
> memories are sufficiently rich to yield a sense that time has
> continuously flowed to that moment.

This is what I find dubious. It is certainly true in a sense if an
individual slice is thick enough, but it seems to me to be false in the
limit of thin slices - and if the slice cannot be arbitrarily thin, a
"point in time", then the question remains as to what is the dimension
along which it is thick.

Brent

Brent Meeker

Apr 22, 2009, 12:08:06 PM
to everyth...@googlegroups.com
Stathis Papaioannou wrote:
> 2009/4/22 Brent Meeker <meek...@dslextreme.com>:
>
>
>> The question was whether information was enough, or whether something
>> else is needed for consciousness. I think that sequence is needed,
>> which we experience as the passage of time. When you speak of
>> computations "going from A to B" do you suppose that this provides the
>> sequence? In other words are the states of consciousness necessarily
>> computed in the same order as they are experienced or is the order
>> something intrinsic to the information in the states (i.e. like
>> Stathis'es observer moments which can be shuffled into any order without
>> changing the experience they instantiate).
>>
>
> Say a machine is in two separate parts M1 and M2, and the information
> on M1 in state A is written to a punchcard, walked over to M2, loaded,
> and M2 goes into state B. Then what you are suggesting is that this
> sequence could give rise to a few moments of consciousness, since A
> and B are causally connected; whereas if M1 and M2 simply went into
> the same respective states A and B at random, this would not give rise
> to the same consciousness, since the states would not have the right
> causal connection. Right?
>

Maybe. But I'm questioning more than the lack of causal connection.
I'm questioning the idea that a static thing like a state can be
conscious. That consciousness goes through a set of states, each one
being an "instant", is an inference we make in analogy with how we would
write a program simulating a mind. I'm saying I suspect something
essential is missing when we "digitize" it in this way. Note that this
does not mean I'd say "No" to Bruno's doctor - because the doctor is
proposing to replace part of my brain with a mechanism that instantiates
a process - not just discrete states.


Brent

Bruno Marchal

Apr 22, 2009, 12:24:32 PM
to everyth...@googlegroups.com

On 22 Apr 2009, at 08:55, Kelly wrote:

>
> On Apr 21, 11:31 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>> We could say that a state A access to a state B if there is a
>> universal machine (a universal number relation) transforming A into
>> B.
>> This works at the ontological level, or for the third person point of
>> view. But if A is a consciousness related state, then to evaluate the
>> probability of personal access to B, you have to take into account
>> *all* computations going from A to B, and thus you have to take into
>> account the infinitely many universal number relations transforming A
>> into B. Most of them are indiscernible by "you" because they differ
>> below "your" substitution level.
>
> So, going back to some of your other posts about "transmitting" a copy
> of a person from Brussels to Moscow. What is it that is transmitted?
> Information, right?

OK.


> So for that to be a plausible scenario we have to
> say that a person at a particular instant in time can be fully
> described by some set of data.

Not fully. I agree with Brent that you need an interpreter to make
that person manifest herself in front of you. A bit like a CD, you
will need a player to get the music. Now, any (immaterial, simple)
Turing universal system will do, so I take the simplest one, the one
that we learn at school: elementary arithmetic. (On some other planet
they learn the combinators at school, and in the long run it could be
better, but fundamentally it does not matter).


>
>
> It would seem to me that their conscious state at that instant must be
> recoverable from that set of data. The only question is, what
> conditions must be met for them to "experience" this state, which is
> completely described by the data set?

But from the first person perspective I need, and elementary
arithmetic provides, an infinity of universal histories going through
my current states. It is not just "information", it is information
relative to possible computations.

> I don't see any obvious reason
> why anything additional is needed. What does computation really add
> to this?

It adds the relative interpretation of that information. Information,
which you identify with some bit string, is just a number; it is just
an encoding of a person, not the person.

Consciousness is the state of mind of a person who believes in a
reality. This makes sense only relatively to probable universal
histories.

>
>
> You say that computation is crucial for this "experience" to take
> place. But why would this be so? Why couldn't we just say that your
> various types of mathematical logic can describe various types of
> correlations, categories, patterns, and relationships between
> informational states, but don't actually contribute anything to
> conscious experience?

Remember I assume the computationalist hypothesis. This means I will
accept being encoded in an information string, but only under the
promise that it will be decoded relatively to probable computational
histories I can bet on, having an idea of my current first person state.

>
>
> Conscious experience is with the information.

Conscious experience is more the content, or the interpretation of
that information, made by a person or by a universal machine.
If the doctor makes a copy of your brain, and then codes it into a bit
string, and then puts the bit string in the fridge, well, in that case
you will not survive, in our local probable history.


> Not with the
> computations that describe the relations between various informational
> states.

If you say yes to a doctor for a digital brain, you will ask for a
brain which functions relatively to our probable computational
history. No?

>
>
>> But if A is a consciousness related state, then to evaluate
>> the probability of personal access to B, you have to take
>> into account *all* computations going from A to B
>
> I don't see how probability enters into it. A and B are both fully
> contained conscious states. Both will be realized, because both
> platonically exist as possible sets of information. State B may have
> a "memory" of State A. State A may have an "expectation" (or
> premonition) of State B. But that is the only link between the two.

The UD generates an infinity of computations going from A to B.
Probabilities, credibilities, plausibilities, provabilities will all
emerge unavoidably.


>
> Otherwise the exist independenty.

I don't see any sense in which the term "computational state" makes
sense independently of at least one computation. But from inside we
have to take into account the infinity of computations, including those
with the "dovetailing-on-the-reals" noisy background. (From inside we
cannot distinguish the many finite initial segments of those reals from
the reals themselves.)

>
>
> So Brian Greene had a good passage somewhat addressing this in his
> last book. He's actually talking about the block universe idea, but
> still applicable I think:
>
> "In this way of thinking, events, regardless of when they happen from
> any particular perspective, just are. They all exist. They eternally
> occupy their particular point in spacetime. There is no flow. If you
> were having a great time at the stroke of midnight on New Year's Eve,
> 1999, you still are, since that is just one immutable location in
> spacetime.
>
> The flowing sensation from one moment to the next arises from our
> conscious recognition of change in our thoughts, feelings, and
> perceptions. Each moment in spacetime - each time slice - is like one
> of the still frames in a film. It exists whether or not some projector
> light illuminates it. To the you who is in any such moment, it is the
> now, it is the moment you experience at that moment. And it always
> will be. Moreover, within each individual slice, your thoughts and
> memories are sufficiently rich to yield a sense that time has
> continuously flowed to that moment. This feeling, this sensation that
> time is flowing, doesn't require previous moments - previous frames -
> to be sequentially illuminated."

I totally agree with this picture. Brian Greene uses a space-time,
where I use the numbers with their additive and multiplicative
structure, and my point is that, assuming comp, it has to work, and I
show how space-time and energy have to emerge from the inside view of a
tiny fragment of arithmetic. But both in Greene's picture and with comp,
the information or the points make sense because they are structured: by
space-time in Greene, and by addition and multiplication in comp. A
computation is a mathematical object in Plato's Heaven, or just in a
tiny part of the standard model of arithmetic.


>
>
> On your earlier post:
>
>> The physical has to emerge from the statistical
>> probability interference among all computations, going through my
>> (current) states that are indiscernible from my point of view.
>> Why such interference takes the form of wave interference is still a
>> (technical) open problem.
>
> In my view, I just happen to inhabit a perceptual universe that is
> fairly orderly and follows laws of cause and effect.

You hope this! But with comp this is globally wrong, and only "locally
apparent". "You" (3-person) are distributed densely on the border of a
universal dovetailing. What you perceive is the mean over all possible
"continuations". My point is that it has to be such once we assume
comp, and that this is empirically verifiable/refutable.
I put "continuations" in quotes, because it is not necessarily
related to the physical notion of a future. It is more a matter of
logically consistent extensions. It could develop on physical pasts,
and elsewhere.


> However, there
> are other conscious observers (including other versions of me) who
> inhabit perceptual universes that are much more chaotic and
> nonsensical.
>
> But everything that can be consciously experienced is experienced,
> because there exists information (platonically) that describes a mind
> (human, animal, or other) having that experience.

Yes, there is a world in which your computer will transform itself into
a green flying pig. The "scientific", but really everyday-life,
question is: what is the "probability" this will happen to "me" here
and now? If the probability is 99.9%, I will not find it worth even
beginning to write a post ....
Physics is the science of such prediction, and if comp is true, the
correct-by-definition predictions have to take into account all histories
and to single out those which have measure near 1.

>
>
> I say that because it seems to me that this information could
> (theoretically) be produced by a computer simulation of such a mind,
> which would presumably be conscious.

Yes, because the computer will generate not just the states, but it
will relate them.

> So add platonism to that, and
> there you go!

We agree here. And comp makes arithmetical platonism sufficient. It
makes it highly undecidable whether there is anything more. From inside,
on the contrary, the bigness is not even measurable or nameable. That
follows from "simple" theoretical computer science.

Bruno

http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Apr 22, 2009, 12:45:41 PM4/22/09
to everyth...@googlegroups.com

On 21 Apr 2009, at 20:33, Brent Meeker wrote:
>
> I understand that the UD computes all different histories so they are
> interleaved. But each particular computation consists of an ordered
> set
> of states. These states can belong to more than one sequence of
> conscious experience. But the question is whether the order of the
> states in the computation is always the same as their order in any
> sequence of conscious experience in which they appear? For example, if
> there is a computation of states A, B, and C then is that a possible
> sequence in consciousness? In general there will be another,
> different
> computation that computes the states in the order A, C, B, so is that
> too a possible sequence in consciousness? Or is the experienced
> sequence in consciousness the same - determined by something intrinsic
> to the states?

The experienced sequence will be the same, I think. I would even guess
that it will correspond to the sequence in most of the singular,
low-grained computations going through those states (if our substitution
level is not too low...), but things get trickier with A, B, C very
close, I expect.
Remember that if the Mandelbrot set is creative (in the sense of Post),
or universal (in the sense of Turing), then all your 3-states of mind
(future, present, past, and elsewhere) are densely distributed on its
border. Subjective time is an internal construct, and with comp,
physical time is probably a first person plural construct (we share
our physical histories).


>>
>> I have still a residual doubt that a quantum computer makes sense
>> mathematically, but if that exists, then there exists a reversible
>> universal dovetailing.
>>
>>
>
> I don't understand that remark. Universal dovetailing is a completely
> abstract mathematical construct. It exists in Platonia. So how can
> the
> existence of a reversible (i.e. information preserving) UD depend on
> quantum computers?

Oh? It is just that I can use the quantum UD to provide an example.
But you are really correct, and if there is any reversible universal
machine, then I can build a reversible universal dovetailing. I could
use the billiard-ball machine or Wang's never-effacing machine. The
difficulty is that I can execute it only from a point in an infinite past.
I have the same difficulty with "running reversibly a program
computing all decimals of the square root of 2", or even just
counting. When will I start? I have to consider a non-well-founded
set of type ... 6 5 4 3 2 1 0.

Bruno

http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Apr 22, 2009, 1:04:48 PM4/22/09
to everyth...@googlegroups.com
John,

On 21 Apr 2009, at 21:30, John Mikes wrote:

> Bruno,
> you made my day when you wrote:
> "SOMEHOW" - in:
> "...The machine has to be "runned" or "executed" relatively to a
> universal machine. You need the Peano or Robinson axiom to define
> such states and sequences of states.
> You can shuffled them if you want, and somehow the UD does shuffle
> them by its dovetailing procedure, but this will not change the
> arithmetical facts that those states belong or not too such or such
> computational histories...."
> *
> First: my vocabulary says about 'axiom' the reverse of how it is
> used: it is our artifact, invented in order to facilitate the
> application of our theories, IOW explanations for the phenomena so
> poorly understood (if at all). So it is MADE up for exactly the
> purpose we evidence by it.
>
> Second: UD "shuffles" 'them' by the ominous 'somehow' (no idea: how?)


By dovetailing. I say "somehow" to mean, literally, "in some fashion
which those who know what the UD is can work out by themselves as an
exercise, because I am lazy right now and it would also make the post
too much longer".

> but it has to be done for the result we invented as a 'must be'.

Absolutely. And it does it, all by itself, in the realm of numbers +
addition + multiplication.

>
> Third: the 'computational history' snapshots have to come together
> (I am not referring to the sequence, rather to the combination
> between 'earlier' and 'later' snapshots into a continuum from a
> discontinuum). That marvel has bugged science for at least 250 years,
> ever since chemical "thinking" started.
> A sequence of pictures is no history.

We agree on this. See my post to Kelly. From outside, the links are
given by universal (or not) programs. From inside, it is linked to the
most probable histories + interference between the indistinguishable
ones. QM without collapse confirms this, admittedly startling, view.

> *
> Then again: you wrote:
> "...The world you are observing is a sort of mean of all those
> computations, from your point of view. But the "running of the UD"
> is just a picturesque way to describe an infinite set of
> arithmetical relations..."
>
> I am not sure about the "mean", since we are not capable of even
> noticing 'all of them', let alone evaluating the totality for a 'mean' -
> in my non-arithmetic vocabulary: a median "meaning" of them all
> (nonsense).

By accepting Church's thesis, we accept Gödel's Miracle. We can define,
inside, the universal-outside. We cannot compute the correct inside
mean, but it has to be partially computable for a physical world to
exist. So we can bet on reasonable approximations. The "real" comp
physics will be unusable in practice, but will explain in theory (and
thus prevent its elimination) the presence of the subject.

> Your words may be a flowery (math, that is) expression of 'viewing
> the totality in its entirety', which is just as impossible (for us,
> today) as realizing your 'infinite set of arithmetical relations'.
> If I leave out the 'arithmetical' (or substitute it by my
> meaningfulness) then we come together in 'viewing the totality' in
> our individual wording-ways.
> "Relations" is the punctum saliens; it is a loose enough term to
> cover whatever is beyond our present comprehension.

No, I really use "relation" in the usual math sense. For example, a
binary relation on N can be seen as a subset of N x N. It is just an
association, a set of couples or triples, etc.
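
To give one concrete, purely illustrative instance in Python, the
"divides" relation restricted to 1..12 is literally just such a set of
couples:

# the binary relation "m divides n" on {1,...,12}, as a set of couples
divides = {(m, n) for m in range(1, 13) for n in range(1, 13) if n % m == 0}
print((3, 9) in divides)   # True: 3 divides 9
print((5, 9) in divides)   # False: 5 does not divide 9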

> When relations look different (maybe just because of our observation
> from a different aspect?) we translate them into physical terms like
> change, movement, reaction, process or else, not realizing that WE
> look at them from different connotations.

You are far too quick here. But there is something like that.

> Use to that our coordinates (space and time) in the limited view we
> can muster (I call it: "model") and we arrived at causality of the
> conventional sciences (and common sense thinking as well).

That is what I hope for.

> Indeed it is our personal (mini)-solipsistic perceived reality of
> OUR world
> washed into some common pattern (partially!) by comp or math or else.


The advantage of the present approach is that it presupposes only
"yes doctor" and Church's thesis; all the rest emerges from, well, not
OUR (the human) prejudices/dreams, but OUR (the universal machine's)
prejudices/dreams.

> By the maze of such covering umbrella we believe in adjusted thinking.
> *
> Please do not conclude any denial on my part against the 'somehow'
> topics, the process-function-change manipulations (unknown, as I
> said);
> it is only a reference to my ignorance, directed by my agnosticism
> towards the made-up explanations of any cultural era (and changing fast).

No problem,

Best,

Bruno

http://iridia.ulb.ac.be/~marchal/

Kelly

unread,
Apr 22, 2009, 1:20:10 PM4/22/09
to Everything List
On Apr 21, 2:33 pm, Brent Meeker <meeke...@dslextreme.com> wrote:
> These states can belong to more than one sequence of
> conscious experience. But the question is whether the order of the
> states in the computation is always the same as their order in any
> sequence of conscious experience in which they appear? For example, if
> there is a computation of states A, B, and C then is that a possible
> sequence in consciousness? In general there will be another, different
> computation that computes the states in the order A, C, B, so is that
> too a possible sequence in consciousness?

Hypothetical situation, assuming an objectively existing physical
universe. All of the particles in the universe kick into reverse and
start going backwards: for some reason, every particle in the universe
instantaneously reverses course. And also space begins contracting
instead of expanding. Everything in the universe hits a rubber wall and
bounces back 180 degrees.

So now instead of expanding, everything is on an exact "rewind" mode,
and we're headed back to the "Big Bang".

The laws of physics work the same in both directions...if you solve
them forward in time, you can take your answers, reverse the equations
and get your starting values, right? With the possible exception of
kaon decay, but we'll leave that aside for now.

This is what they always go on about with the "arrow of time". The
laws of physics work the same forwards and backwards in time. It's not
impossible for an egg to "unscramble", it's just very very very very
very unlikely. But if it did so, no laws of physics would be broken.
And, in fact, if you wait long enough, it will eventually happen.

Okay, so everything has reversed direction. The actual reversal
process is, of course, impossible. But after everything reverses,
everything just plays out by the normal laws of physics. Only that one
instant of reversal breaks the laws of physics.

External time is still moving forward, in the same direction as
before. We didn't reverse time. We just reversed the direction of
every particle.

So, now photons and neutrinos no longer shoot away from the sun -
instead they now shoot towards the sun, and when the photons and the
neutrinos and gamma rays hit helium atoms, the helium atoms split back
into individual hydrogen atoms and absorb some energy in the process.
Again, no physical laws are broken, and time is moving forward.

Now, back on earth, everything is playing out in reverse as well. You
breathe in carbon dioxide and absorb heat from your surroundings and
use the heat to break the carbon dioxide into carbon and oxygen. You
exhale the oxygen, and you turn the carbon into sugars, which you
eventually return to your digestive tract where it's reconstituted
into food, which you regurgitate onto your fork and place it back onto
your plate.

Okay. So, still no physical laws broken. Entropy is decreasing, but
that's not impossible, just very unlikely under normal conditions.

Now. Your brain is also working backwards. But exactly backwards from
before. Every thought that you had yesterday, you will have again
tomorrow, in reverse. You will unthink it.

My question is, what would you experience in this case? What would it
be like to live in this universe where "external" time is still going
forward, but where all particles are retracing their steps precisely?

The laws of physics are still working exactly as before, but because
all particle trajectories were perfectly reversed, everything is
rolling back towards the big bang.

In my opinion, we wouldn't notice any difference. We would not
experience the universe moving in reverse, we would still experience
it moving forward exactly as we do now...we would still see the
universe as expanding even though it was contracting, we would still
see the sun giving off light and energy even though it was absorbing
both. In other words, we would still see a universe with increasing
entropy even though we actually would live in a universe with
decreasing entropy.

And why would that be the case? Because our mental states determine
what is the past for us and what is the future. There is no "external
arrow of time". The arrow of time is internal. The past is the past
because we remember it and because the neurons of our brains tell us
that it has already happened to us. The future is the future because
it's unknown, and because the neurons of our brains tell us that it
will happen to us soon.

If there is an external arrow of time, it is irrelevant, because as
this thought experiment shows it doesn't affect the way we perceive
time. Our internal mental state at any given instant determines what
is the future and what is the past for us.

In fact, you could run the universe forwards and backwards as many
times as you wanted like this. We would never notice anything. We
would always perceive increasing entropy. For us, time would always
move forward, never backwards.

My point being, as always, that our experience of reality is always
entirely dependent on our brain state. We can't know anything about
the universe that is not represented in the information of our brain
state at any given instant.

Forwards or backwards, it's all just particles moving around, assuming
various configurations, some of which give rise to consciousness.

Again, assuming that there actually is an external physical world. We
could, I think, apply the same idea to running a computer simulation
of a human brain in reverse where instead of computing the next state,
we compute the previous state.
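
A toy sketch of what I mean, in Python (not a brain simulation, of
course, just an illustration of a reversible update rule): with a
second-order rule of the form next = f(current) XOR previous, the very
same rule run the other way recovers earlier states exactly, whatever
mixing function f you pick. The function f below is an arbitrary example.

# Toy reversible dynamics: s[t+1] = f(s[t]) XOR s[t-1].
# Since XOR is its own inverse, s[t-1] = f(s[t]) XOR s[t+1], so the
# same rule steps the system backwards. f is an arbitrary example
# mixing function; reversibility comes from the XOR, not from f.

MASK = 2**64 - 1

def f(state):
    return ((state * 2654435761) ^ (state >> 13)) & MASK

def forward(prev, cur):
    return cur, (f(cur) ^ prev) & MASK     # (s[t], s[t+1])

def backward(cur, nxt):
    return (f(cur) ^ nxt) & MASK, cur      # (s[t-1], s[t])

pair = start = (12345, 67890)
for _ in range(10):
    pair = forward(*pair)                  # run "forward in time"
for _ in range(10):
    pair = backward(*pair)                 # now compute previous states instead
assert pair == start                       # back exactly where we began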

Brent Meeker

unread,
Apr 22, 2009, 2:02:16 PM4/22/09
to everyth...@googlegroups.com

I was with you up to that last sentence. Forward or backward, we just
experience increasing entropy as increasing time, but that doesn't
warrant the conclusion that no process is required and an "instant"
within itself has an arrow of time.

> In fact, you could run the universe forwards and backwards as many
> times as you wanted like this. We would never notice anything. We
> would always perceive increasing entropy. For us, time would always
> move forward, never backwards.
>
> My point being, as always, that our experience of reality is always
> entirely dependent on our brain state. We can't know anything about
> the universe that is not represented in the information of our brain
> state at any given instant.
>
> Forwards or backwards, it's all just particles moving around, assuming
> various configurations, some of which give rise to consciousness.
>
> Again, assuming that there actually is an external physical world. We
> could, I think, apply the same idea to running a computer simulation
> of a human brain in reverse where instead of computing the next state,
> we compute the previous state.

That was my point in asking Bruno whether there is a universal
reversible computer (standard Turing machines obviously aren't reversible).
Since a QM (without collapse) model of the universe is reversible,
absent a reversible computer either the universe could not be computed
or QM is wrong.

Brent

Jason Resch

unread,
Apr 22, 2009, 2:41:54 PM4/22/09
to everyth...@googlegroups.com
On Wed, Apr 22, 2009 at 1:55 AM, Kelly <harm...@gmail.com> wrote:
>
> On Apr 21, 11:31 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>> We could say that a state A has access to a state B if there is a
>> universal machine (a universal number relation) transforming A into B.
>> This works at the ontological level, or for the third person point of
>> view. But if A is a consciousness related state, then to evaluate the
>> probability of personal access to B, you have to take into account
>> *all* computations going from A to B, and thus you have to take into
>> account the infinitely many universal number relations transforming A
>> into B. Most of them are indiscernible by "you" because they differ
>> below "your" substitution level.
>
> So, going back to some of your other posts about "transmitting" a copy
> of a person from Brussels to Moscow.  What is it that is transmitted?
> Information, right?  So for that to be a plausible scenario we have to
> say that a person at a particular instant in time can be fully
> described by some set of data.
>
> It would seem to me that their conscious state at that instant must be
> recoverable from that set of data.  The only question is, what
> conditions must be met for them to "experience" this state, which is
> completely described by the data set?  I don't see any obvious reason
> why anything additional is needed.  What does computation really add
> to this?
>

I think I agree with this, that consciousness is created by the
information associated with a brain state; however, I think two things
are missing:

The first is that I don't think there is enough information within a
single Planck time or other snapshot of the brain to constitute
consciousness. As you mention below, under the view of block time,
the brain, and all other things, are four-dimensional objects.
Therefore the total information composing a moment of consciousness may
be spread across some non-zero segment of time.

The second problem is immediately related to the first. Let's assume
that there is consciousness within a 10-second time period, so we make
a recording of someone's brain states across 10 seconds and store it
in some suitable binary file. The question is: are there any logical
connections between successive states when stored in this file? I
would think not.

When the brain state is embedded in block time, the laws of physics
serve as a suitable interpreter which connects the information spread
out over the four dimensions, but without computer software running the
stored brain state, there is no interpreter for the information when
it is just sitting on the disk. I think this is the reason some of us
feel a need to have information computed as opposed to it simply
existing.

Jason

Stathis Papaioannou

unread,
Apr 23, 2009, 7:28:33 AM4/23/09
to everyth...@googlegroups.com
2009/4/23 Brent Meeker <meek...@dslextreme.com>:

>> Say a machine is in two separate parts M1 and M2, and the information
>> on M1 in state A is written to a punchcard, walked over to M2, loaded,
>> and M2 goes into state B. Then what you are suggesting is that this
>> sequence could give rise to a few moments of consciousness, since A
>> and B are causally connected; whereas if M1 and M2 simply went into
>> the same respective states A and B at random, this would not give rise
>> to the same consciousness, since the states would not have the right
>> causal connection. Right?
>>
>
> Maybe.  But I'm questioning more than the lack of causal connection.
> I'm questioning the idea that a static thing like a state can be
> conscious.  That consciousness goes through a set of states, each one
> being an "instant", is an inference we make in analogy with how we would
> write a program simulating a mind.  I'm saying I suspect something
> essential is missing when we "digitize" it in this way.  Note that this
>> does not mean I'd say "No" to Bruno's doctor - because the doctor is
> proposing to replace part of my brain with a mechanism that instantiates
> a process - not just discrete states.
>
>
> Brent

What is needed for the series of states to qualify as a process? I
assume that a causal connection between the states, as in my example
above, would be enough, since it is what happens in normal brains and
computers. But what would you say about the examples I give below,
where the causal connection is disrupted in various ways: is there a
process or is there just an unfeeling sequence of states?

>> But then you could come up with variations on this experiment where
>> the transfer of information doesn't happen in as straightforward a
>> manner. For example, what if the operator who walks over the punchcard
>> gets it mixed up in a filing cabinet full of all the possible
>> punchcards variations, and either (a) loads one of the cards into M2
>> because he gets a special vibe about it and it happens to be the right
>> one, or (b) loads all of the punchcards into M2 in turn so as to be
>> sure that the right one is among them? Would the machine be conscious
>> if the operator loads the right card knowingly, but not if he is just
>> lucky, and not if he is ignorant but systematic? If so, how could the
>> computation know about the psychological state of the operator?


--
Stathis Papaioannou

Bruno Marchal

unread,
Apr 23, 2009, 10:23:46 AM4/23/09
to everyth...@googlegroups.com

I mainly agree. I add that once we assume comp the laws of physics
themselves are emerging on information processing. That such an
information processing is purely arithmetico-logical, or combinator-
logical. No need for substances. Consciousness, time, energy and space
are internal constructs.

Bruno


http://iridia.ulb.ac.be/~marchal/

Brent Meeker

unread,
Apr 23, 2009, 12:56:41 PM4/23/09
to everyth...@googlegroups.com
Stathis Papaioannou wrote:
> 2009/4/23 Brent Meeker <meek...@dslextreme.com>:
>
>>> Say a machine is in two separate parts M1 and M2, and the information
>>> on M1 in state A is written to a punchcard, walked over to M2, loaded,
>>> and M2 goes into state B. Then what you are suggesting is that this
>>> sequence could give rise to a few moments of consciousness, since A
>>> and B are causally connected; whereas if M1 and M2 simply went into
>>> the same respective states A and B at random, this would not give rise
>>> to the same consciousness, since the states would not have the right
>>> causal connection. Right?
>>>
>> Maybe. But I'm questioning more than the lack of causal connection.
>> I'm questioning the idea that a static thing like a state can be
>> conscious. That consciousness goes through a set of states, each one
>> being an "instant", is an inference we make in analogy with how we would
>> write a program simulating a mind. I'm saying I suspect something
>> essential is missing when we "digitize" it in this way. Note that this
>> does not mean I'd say "No" to Bruno's doctor - because the doctor is
>> proposing to replace part of my brain with a mechanism that instantiates
>> a process - not just discrete states.
>>
>>
>> Brent
>
> What is needed for the series of states to qualify as a process?

I think that the states, by themselves, cannot qualify. There has to be something
else, a rule of inference, a causal connection, that joins them into a process.

> I
> assume that a causal connection between the states, as in my example
> above, would be enough, since it is what happens in normal brains and
> computers.

Yes, I certainly agree that it would be sufficient. But it may be more than is
necessary. The idea of physical causality isn't that well defined and it hardly
even shows up in fundamental physics except to mean no-action-at-a-distance.

> But what would you say about the examples I give below,
> where the causal connection is disrupted in various ways: is there a
> process or is there just an unfeeling sequence of states?
>
>>> But then you could come up with variations on this experiment where
>>> the transfer of information doesn't happen in as straightforward a
>>> manner. For example, what if the operator who walks over the punchcard
>>> gets it mixed up in a filing cabinet full of all the possible
>>> punchcards variations, and either (a) loads one of the cards into M2
>>> because he gets a special vibe about it and it happens to be the right
>>> one, or (b) loads all of the punchcards into M2 in turn so as to be
>>> sure that the right one is among them? Would the machine be conscious
>>> if the operator loads the right card knowingly, but not if he is just
>>> lucky, and not if he is ignorant but systematic? If so, how could the
>>> computation know about the psychological state of the operator?

So you are contemplating a process that consists of a sequence of states and a
rule that connects them: punch cards (states) and a
machine which physically implements some rule producing a new punch card (state)
from a previous one. And then you ask whether it is still a process if, instead
of the rule producing the next state, it is produced in some other way. I'd say
so long as the rule is followed (the operator loads the right card knowingly)
it's the same process. Otherwise it is not the same process (the operator
selects the right card by chance or by a different rule). If the process is a
conscious one, is the latter still conscious? I'd say that it is. If the
selection is by chance it's an instance of a Boltzmann brain. But I don't worry
about Boltzmann brains; they're too improbable.

Brent

Stathis Papaioannou

unread,
Apr 23, 2009, 7:37:56 PM4/23/09
to everyth...@googlegroups.com
2009/4/24 Brent Meeker <meek...@dslextreme.com>:

Boltzmann brains are improbable, but the example of the punchcards is
not. The operator could have two punchcards in his pocket, have a
conversation with someone on the way from M1 to M2 and end up
forgetting or almost forgetting which is the right one. That is, his
certainty of picking the right card could vary between 0.5 and 1.
Would you say that only if his certainty is 1 would the conscious
process be implemented, and not if it is, say, 0.9?


--
Stathis Papaioannou

Brent Meeker

unread,
Apr 23, 2009, 8:10:11 PM4/23/09
to everyth...@googlegroups.com

I said it would be implementing *the same* consciousness if he was
following the rule. If not he might be implementing a different
consciousness by using a different rule. Of course if it were different
in only one "moment" that wouldn't really be much of a difference. I
don't think it depends on his certainty. Even more difficult we might
ask what it means for him to follow the rule - must he do it
*consciously*; in which case do we have to know whether his brain is
functioning according to the same rule?

You're asking a lot of questions, Stathis. :-) What do you think?

Brent

Kelly

unread,
Apr 23, 2009, 8:37:53 PM4/23/09
to Everything List
On Apr 22, 2:02 pm, Brent Meeker <meeke...@dslextreme.com> wrote:
> I was with you up to that last sentence. Forward or backward, we just
> experience increasing entropy as increasing time, but that doesn't
> warrant the conclusion that no process is required and an "instant"
> within itself has an arrow of time.

It seems to me that each instant DOES contain within itself an arrow
of time, in the form of memories. Later instances are related to
earlier instances by virtue of having memory-information about those
earlier instances. That's what ties the various "states" together.
The nature of the computations that might transition you from instant
to instant are not relevant.

What matters is where you end up, not how you got there. If a
transition causes you to assume a state that contains no information
about earlier events (i.e., no memory of these events), then you have
lost a crucial part of what makes you who "you" are.

If you save your brain state at time A and then you save state again
at a subsequent time B, there is a relationship and an objectively
measurable degree of correlation between the information contained in
the two saved data sets.
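
One crude, purely illustrative way to make that "degree of correlation"
concrete in Python: treat the two saved states as equal-length byte
strings and count the fraction of bits on which they agree (the file
names below are of course made up).

# Fraction of bits on which two saved state dumps agree:
# 1.0 means identical, about 0.5 means essentially unrelated bit strings.
def bit_similarity(state_a: bytes, state_b: bytes) -> float:
    assert len(state_a) == len(state_b)
    differing = sum(bin(x ^ y).count("1") for x, y in zip(state_a, state_b))
    return 1.0 - differing / (8 * len(state_a))

# hypothetical usage with two snapshot files:
# with open("brain_state_A.bin", "rb") as fa, open("brain_state_B.bin", "rb") as fb:
#     print(bit_similarity(fa.read(), fb.read()))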

It is, I think, the degree of correlation between states that provides
the illusion of a "flow" of consciousness. This has nothing to do
with the type of computation that could be used to "transition"
between the two data sets.

Again, it seems to me that the arithmetic logic that Bruno refers to
just serves to "describe" the relations between datasets. It doesn't
"produce" consciousness.

If there are many "algorithms" that could be used to transition from
state A to state B, it seems to me that all of them would produce the
same conscious experience. If you end up at "state B", then it
doesn't matter how you go there...your "memory" of the experience will
be identical regardless of what path you took.

And since all states (not just A and B) exist platonically, then every
possible "process" can be "inferred" to connect them in every possible
way. But I don't think this means that the processes are the source
of consciousness. They are just descriptions of the ways that states
could be connected.

Brent Meeker

unread,
Apr 23, 2009, 9:57:49 PM4/23/09
to everyth...@googlegroups.com
Kelly wrote:
> On Apr 22, 2:02 pm, Brent Meeker <meeke...@dslextreme.com> wrote:
>
>> I was with you up to that last sentence. Forward or backward, we just
>> experience increasing entropy as increasing time, but that doesn't
>> warrant the conclusion that no process is required and an "instant"
>> within itself has an arrow of time.
>>
>
> It seems to me that each instant DOES contain within itself an arrow
> of time, in the form of memories. Later instances are related to
> earlier instances by virtue of having memory-information about those
> earlier instances. That's what ties the various "states" together.
> The nature of the computations that might transition you from instant
> to instant are not relevant.
>
> What matters is where you end up, not how you got there. If a
> transition causes you to assume a state that contains no information
> about earlier events (i.e., no memory of these events), then you have
> lost a crucial part of what makes you who "you" are.
>
> If you save your brain state at time A and then you save state again
> at a subsequent time B, there is a relationship and an objectively
> measurable degree of correlation between the information contained in
> the two saved data sets.
>
> It is, I think, the degree of correlation between states that provides
> the illusion of a "flow" of consciousness. This has nothing to do
> with the type of computation that could be used to "transition"
> between the two data sets.
>

If by "state" you meant something like the state of one's brain or
perhaps including some local chunk of the universe, I'd agree with you.
But in general an "instant" of *consciousness* does not include any
memory. My conscious stream of thought only occasionally brings up
unique memories that one could trace back to my earlier thoughts. In
fact most of my thinking, in the sense of information processing, is
subconscious. I suppose one could expand the definition of "experience"
to include unconscious experience, although it's hard to say what that
would mean without assuming a physical reality to instantiate it.


> Again, it seems to me that the arithmetic logic that Bruno refers to
> just serves to "describe" the relations between datasets. It doesn't
> "produce" consciousness.
>
> If there are many "algorithms" that could be used to transition from
> state A to state B, it seems to me that all of them would produce the
> same conscious experience. If you end up at "state B", then it
> doesn't matter how you go there...your "memory" of the experience will
> be identical regardless of what path you took.
>
> And since all states (not just A and B) exist platonically, then every
> possible "process" can be "inferred" to connect them in every possible
> way. But I don't think this means that the processes are the source
> of consciousness. They are just descriptions of the ways that states
> could be connected.

And I assume (and I believe Bruno agreed) that all possible processes,
i.e. computations, also exist Platonically. This seems to me to
introduce a dense, continuum-like topology on states: between every two
states there are countably many other states.

Brent

Kelly

unread,
Apr 24, 2009, 12:14:17 AM4/24/09
to Everything List
On Apr 22, 12:24 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
>> So for that to be a plausible scenario we have to
>> say that a person at a particular instant in time can be fully
>> described by some set of data.
>
> Not fully. I agree with Brent that you need an interpreter to make
> that person manifest herself in front of you. A bit like a CD, you
> will need a player to get the music.

It seems to me that consciousness is the self-interpretation of
information. David Chalmers has a good line: "Experience is
information from the inside; physics is information from the outside."

I still don't see what an interpreter adds, except to satisfy the
intuition that something is "happening" that "produces"
consciousness. Which I think is an attempt to reintroduce "time".

But I don't see any advantage of this view over the idea that
conscious states just "exist" as a type of platonic form (as Brent
mentioned earlier). At any given instant that I'm awake, I'm
conscious of SOMETHING. And I'm conscious of it by virtue of my
mental state at that instant. In the materialist view, my mental
state is just the state of the particles of my brain at that
instant.

But I say that what this really means is that my mental state is just
the information represented by the particles of my brain at that
instant. And that if you transfer that information to a computer and
run a simulation that updates that information appropriately, my
consciousness will continue in that computer simulation, regardless of
the hardware (digital computer, mechanical computer, massively
parallel or single processor, etc) or algorithmic details of that
computer simulation.

But, what is information? I think it has nothing to do with physical
storage or instantiation. I think it has an existence separate from
that. A platonic existence. And since the information that
represents my brain exists platonically, then the information for
every possible brain (including variations of my brain) should also
exist platonically.


>> Conscious experience is with the information.
>
> Conscious experience is more the content, or the interpretation of
> that information, made by a person or by a universal machine.
> If the doctor makes a copy of your brain, and then codes it into a bit
> string, and then puts the bit string in the fridge, in our probable
> history, well in that case you will not survive, in our local probable
> history.

Given the platonic nature of information, this isn't really a
concern. In Platonia, you always have a "next moment". In fact, you
experience all possible "next moments". The "no cul-de-sac" rule
applies I think.


> If you say yes to a doctor for a digital brain, you will ask for a
> brain which functions relatively to our probable computational
> history. No?

I won't worry about it too much, as there is no doctor, only my
perceptions of a doctor. Every possible outcome of the "brain
replacement operation" that I can perceive, I will perceive.
Including outcomes that don't make any sense.

Additionally, every possible outcome of the operation that the doctor
can perceive, he will perceive.  Including outcomes that don't make
any sense.

So it seems to me that a lot of your effort goes into explaining why
we don't see strange "white rabbit" universes. Thus the talk of
probabilities and measures. I'm willing to just say that all
universes are experienced. Strange ones, normal ones, good ones, bad
ones, ones with unbreakable physical laws, ones with no obvious
physical laws at all. It's all a matter of perception, not a matter
of physical realization.


> Yes, there is a world in which your computer will transform itself into
> a green flying pig. The "scientific", but really everyday-life,
> question is: what is the "probability" this will happen to "me" here
> and now?

I'm not sure what it means to ask, "what is the probability that my
computer will turn into a green pig". One of me will observe
everything that can be observed in the next instant. How many things
is that? I'm not sure. More than 10...ha! Setting aside physical
limits, maybe infinitely many? Given that I might also get extra
sensory capacity in that instant, or extra cognitive capacity, or
whatever.

So, of course all of that sounds somewhat crazy, but that's where you
end up when you try to explain consciousness I think. Any explanation
that doesn't involve eliminativism is going to be strange I think.

But, if you are willing to say that consciousness is an illusion, then
you can just stick with materialism/physicalism and you're fine. In
that case there's no need to invoke any of this more esoteric stuff
like platonism. Right?

Brent Meeker

unread,
Apr 24, 2009, 2:41:33 AM4/24/09
to everyth...@googlegroups.com
Kelly wrote:
> On Apr 22, 12:24 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
>
>>> So for that to be a plausible scenario we have to
>>> say that a person at a particular instant in time can be fully
>>> described by some set of data.
>>>
>> Not fully. I agree with Brent that you need an interpreter to make
>> that person manifest herself in front of you. A bit like a CD, you
>> will need a player to get the music.
>>
>
> It seems to me that consciousness is the self-interpretation of
> information. David Chalmers has a good line: "Experience is
> information from the inside; physics is information from the outside."
>
> I still don't see what an interpreter adds, except to satisfy the
> intuition that something is "happening" that "produces"
> consciousness. Which I think is an attempt to reintroduce "time".
>
> But I don't see any advantage of this view over the idea that
> conscious states just "exist" as a type of platonic form (as Brent
> mentioned earlier). At any given instant that I'm awake, I'm
> conscious of SOMETHING. And I'm conscious of it by virtue of my
> mental state at that instant. In the materialist view, my mental
> state is just the state of the particles of my brain at that
> instant.
>
>

I think we need some definition of "state". Supposing your brain were a
Newtonian system the state would be the position and velocity of all the
particles. Physically this leads to the next state by the Newtonian
dynamics. But those dynamics operate in a continuum. If we discretize
your brain, say slice it into Planck units of time as Jason suggested,
now we need to have something to connect one state to another. The
states are no longer part of a continuum. In a computer "running" your
brain this is provided by the hardware of the computer. In Bruno's
theory it is provided by a relation in Platonia, i.e. a computational rule.
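
A trivial illustration of that last point, in Python: a toy Newtonian
system (a harmonic oscillator, nothing brain-like), where each (x, v)
pair is a "state" and the update rule is the thing that joins state t to
state t+1 once the continuum has been discretized. The time step dt and
constants are arbitrary illustrative choices.

# Each (x, v) is a discrete "state"; step() is the rule connecting
# one state to the next (symplectic Euler with time step dt).
def step(x, v, dt=0.01, k=1.0, m=1.0):
    v = v - (k / m) * x * dt   # update velocity from the force -k*x
    x = x + v * dt             # then update position from the new velocity
    return x, v

state = (1.0, 0.0)             # initial state
for _ in range(1000):
    state = step(*state)       # the succession of states is generated by the rule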

In idealism, the content of a state of consciousness (a Planck slice, not
of a brain, but of a stream of consciousness) seems to me to be very
small, and it doesn't, so far as I can see, have anything analogous to
dynamical equations to connect it to another state. You say it is
connected by the correlation of information content, but is that
unique? Is there a best or most probable next state, or what?

Brent

Jason Resch

unread,
Apr 24, 2009, 3:14:04 AM4/24/09
to everyth...@googlegroups.com
Kelly,

Your arguments are compelling and logical; you have put a lot of doubt
in my mind about computationalism. I have actually been in somewhat
of a state of confusion since Bruno's movie graph argument, coupled
with a paper by Max Tegmark. In Tegmark's paper, he was explaining
that there is an appeal to many people in associating the time
dimension with the computational clock, but argued there is no reason
to do so: time is just another dimension after all, and with everything
being an atemporal platonic/mathematical object, any perception of
change is illusory. Later, when Bruno explained his movie graph
argument, it came to the point where we were asked: Is a recording of
Alice's brain activity itself conscious? I first thought obviously
no, but then realized the contradiction with space-time. Could the
block-time view of the universe not be considered a recording?
Perhaps the difference between a recording (like Tape or CD) and the
universe (or a computer program/simulation) is that with a physical
recording it is possible to alter a state at one point in time without
affecting future/past states. Or maybe consciousness is only created
from platonic objects / information or relationships that exist within
them. The appeal of computationalism for me is that it creates a
self-interpreting structure: the information or state has meaning only
because it is part of a state machine. We, being creatures who can only
experience through time, might be fooled into thinking change over time
is necessary for consciousness, but what if we could make a computer
that computed over the X dimension instead of T? What would such a
computer look like, how would it be logically different from a
recording (which is static over T), and how is it logically different
from a computer that computes across the T dimension?

I very much look forward to reading your and others' opinions on this.

Jason

Bruno Marchal

unread,
Apr 24, 2009, 10:42:37 AM4/24/09
to everyth...@googlegroups.com

On 24 Apr 2009, at 02:37, Kelly wrote:

>
> On Apr 22, 2:02 pm, Brent Meeker <meeke...@dslextreme.com> wrote:
>> I was with you up to that last sentence. Forward or backward, we
>> just
>> experience increasing entropy as increasing time, but that doesn't
>> warrant the conclusion that no process is required and an "instant"
>> within itself has an arrow of time.
>
> It seems to me that each instant DOES contain within itself an arrow
> of time, in the form of memories. Later instances are related to
> earlier instances by virtue of having memory-information about those
> earlier instances. That's what ties the various "states" together.
> The nature of the computations that might transition you from instant
> to instant are not relevant.
>
> What matters is where you end up, not how you got there.


What matters is the first person probability you find yourself ending
up "there". And this will depend on all computations going through
your current state (or below) and going through the state "up there".
http://iridia.ulb.ac.be/~marchal/



Bruno Marchal

unread,
Apr 24, 2009, 11:39:34 AM4/24/09
to everyth...@googlegroups.com

On 24 Apr 2009, at 06:14, Kelly wrote:

>
> On Apr 22, 12:24 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
>>> So for that to be a plausible scenario we have to
>>> say that a person at a particular instant in time can be fully
>>> described by some set of data.
>>
>> Not fully. I agree with Brent that you need an interpreter to make
>> that person manifest herself in front of you. A bit like a CD, you
>> will need a player to get the music.
>
> It seems to me that consciousness is the self-interpretation of
> information. David Chalmers has a good line: "Experience is
> information from the inside; physics is information from the outside."

First person experience and third person experiment. Glad to hear
Chalmers accepts this at last.
In UDA, inside/outside are perfectly well defined in a pure third
person way: inside (first person) = memories annihilated and
reconstructed in classical teleportation, outside = the view outside
the teleporter. In AUDA I use the old classical definition by Plato in
the Theaetetus.


>
>
> I still don't see what an interpreter adds, except to satisfy the
> intuition that something is "happening" that "produces"
> consciousness. Which I think is an attempt to reintroduce "time".


I don't think so. The only "time" needed is the discrete order on the
natural numbers. An interpreter is needed to play the role of the
person who gives some content to the information handled through his
local "brain". (For this I need also addition and multiplication).

>
>
> But I don't see any advantage of this view over the idea that
> conscious states just "exist" as a type of platonic form (as Brent
> mentioned earlier).

The advantage is that we have the tools to derive physics in a way
which is precise enough for testing the comp hypothesis. Physics has
become a branch of the computer's psychology or theology.

> At any given instant that I'm awake, I'm
> conscious of SOMETHING.

To predict something, the difficulty is to relate that consciousness
to its computational histories. Physics is given by a measure of
probability on those comp histories.

> And I'm conscious of it by virtue of my
> mental state at that instant. In the materialist view, my mental
> state is just the state of the particles of my brain at that
> instant.

Which cannot be maintained with the comp hyp. Your consciousness is an
abstract type related to all computations going through your current
state.


>
>
> But I say that what this really means is that my mental state is just
> the information represented by the particles of my brain at that
> instant. And that if you transfer that information to a computer and
> run a simulation that updates that information appropriately, my
> consciousness will continue in that computer simulation, regardless of
> the hardware (digital computer, mechanical computer, massively
> parallel or single processor, etc) or algorithmic details of that
> computer simulation.

OK. But it is a very special form of information. Consciousness is
really the qualia associated to your belief in some reality. It is a
bet on self-consistency: it speeds up your reaction time relatively to
your most probable histories.


>
>
> But, what is information? I think it has nothing to do with physical
> storage or instantiation. I think it has an existence separate from
> that. A platonic existence. And since the information that
> represents my brain exists platonically, then the information for
> every possible brain (including variations of my brain) should also
> exist platonically.


You make the same error as those who confuse a universal dovetailer
with a counting algorithm or the Babel library. The sequence:

0, 1, 2, 3, 4, ... , or 0 1 10 11 100 101 110 111, goes through all
descriptions of all information, but it lacks the infinitely subtle
redundancy contained in the space of all computations (the universal
dovetailing). You work in (N, succ); you lack addition and
multiplication, needed for having a notion of interpreter or universal
machine, the key entity capable of giving content to its information
structure. This is needed to have a coherent internal interpretation
of computerland.


>>>
>>
>> Conscious experience is more the content, or the interpretation of
>> that information, made by a person or by a universal machine.
>> If the doctor makes a copy of your brain, and then codes it into a
>> bit
>> string, and then puts the bit string in the fridge, in our probable
>> history, well in that case you will not survive, in our local
>> probable
>> history.
>
> Given the platonic nature of information, this isn't really a
> concern. In Platonia, you always have a "next moment". In fact, you
> experience all possible "next moments". The "no cul-de-sac" rule
> applies I think.

By definition, indeed, once we want to quantify the first person
indeterminacy.
"next moment" makes sense only relatively to (universal) machine. It
is the "next step" relatively to some computation and thus universal
machine interpreting that "machine".
The cul-de-sac/no-cul-de-sac depends on the points of view adopted by
the machine itself.


>
>
>
>> If you say yes to a doctor for a digital brain, you will ask for a
>> brain which functions relatively to our probable computational
>> history. No?
>
> I won't worry about it too much, as there is no doctor, only my
> perceptions of a doctor. Every possible outcome of the "brain
> replacement operation" that I can perceive, I will perceive.

Not in the relative way. You have to explain why you see apples
falling from a tree, and not any arbitrary information-theoretical data.


>
> Including outcomes that don't make any sense.


You have to explain why they are *rare*. If not, your theory does not
explain why you put water on the gas and not in the fridge when you
want a cup of coffee.


>
>
> Additionally, every possible outcome of the operation that the doctor
> can perceive, he will perceive.  Including outcomes that don't make
> any sense.
>
> So it seems to me that a lot of your effort goes into explaining why
> we don't see strange "white rabbit" universes.

Indeed.

> Thus the talk of
> probabilities and measures. I'm willing to just say that all
> universes are experienced.

That is absolutely true. But we don't live in the absolute (except
perhaps with salvia :). We live in the relative worlds/states. I
cannot go to my office by flying through the window. The probability
that I end up in a hospital is far greater than that of arriving in
one piece at my workplace.

> Strange ones, normal ones, good ones, bad
> ones, ones with unbreakable physical laws, ones with no obvious
> physical laws at all. It's all a matter of perception, not a matter
> of physical realization.

That is true, but we want to explain "the stable appearance of atoms
and galaxies", and what happens when we die.

>
>
>
>> Yes, there is a world in which your computer will transform itself into
>> a green flying pig. The "scientific", but really everyday-life,
>> question is: what is the "probability" this will happen to "me" here
>> and now?
>
> I'm not sure what it means to ask, "what is the probability that my
> computer will turn into a green pig". One of me will observe
> everything that can be observed in the next instant. How many things
> is that? I'm not sure. More than 10...ha! Setting aside physical
> limits, maybe infinitely many? Given that I might also get extra
> sensory capacity in that instant, or extra cognitive capacity, or
> whatever.
>
> So, of course all of that sounds somewhat crazy, but that's where you
> end up when you try to explain consciousness I think. Any explanation
> that doesn't involve eliminativism is going to be strange I think.

The comp theory explains why we cannot explain consciousness, nor
truth. But we can bet on computational states; then the thought
experiments show that physics is derivable from computer science/
number theory in terms of probabilities, and we can compare those
probabilities with the ones we extract from the long observation of our
neighborhoods. Comp is a concrete testable theory, but we have to
derive the physics from it to do so. There is a gift, because it gives
a complete arithmetical interpretation of an earlier type of theory,
like Plotinus' theology, which does not eliminate the person as modern
materialism does; comp leads to a natural distinction between what is
true about us and what is provable by us.


>
>
> But, if you are willing to say that consciousness is an illusion, then
> you can just stick with materialism/physicalism and you're fine.

You are right. But consciousness is the only thing I have no doubt
about. The *only* undoubtable thing. The fixed point of the Cartesian
systematic doubting attitude. A theory which eliminates my first
person, or my consciousness, although irrefutable by me, is wrong, I
hope; I hope it is wrong for you too. (Why would I send a post on
consciousness to a zombie?)

> In
> that case there's no need to invoke any of this more esoteric stuff
> like platonism. Right?

Right. Materialism is a trick based on a lie (consciousness, and thus
pain and suffering, are illusions) and an illusion (there is matter). This
is used to stop fundamental inquiry. It is not a coincidence that
authoritative theologies insist on materialism so much. Before
Darwinism, God created man; after Darwinism, God created matter.
Assuming comp, couple matter-man

Any content of consciousness can be an illusion. Consciousness itself
cannot, because without consciousness there is no more illusion at all.

Best,

Bruno

http://iridia.ulb.ac.be/~marchal/

Brent Meeker

unread,
Apr 24, 2009, 1:00:28 PM4/24/09
to everyth...@googlegroups.com
Jason Resch wrote:
> Kelly,
>
> Your arguments are compelling and logical, you have put a lot of doubt
> in my mind about computationalism. I have actually been in somewhat
> of a state of confusion since Bruno's movie graph argument coupled
> with a paper by Max Tegmark. In Tegmark's paper, he was explaining
> that there is an appeal to many people of associating the time
> dimension with the computational clock, but argued there is no reason
> to do so, time is just another dimension after all, and everything
> being an atemporal platonic/mathematical object any perception of
> change is illusory.

That's a common model but it's certainly not a settled question in
physics. Just recently Sean Carroll wrote a paper titled "What if Time
Really Exists?" http://arxiv.org/abs/0811.3772. And even in a block
universe model the time dimension is still different from the space
dimensions.

> Later, when Bruno explained his movie graph
> argument, it came to the point where we were asked: Is a recording of
> Alice's brain activity itself conscious? I first thought obviously
> no, but then realized the contradiction with space-time. Could the
> block-time view of the universe not be considered a recording?
> Perhaps the difference between a recording (like Tape or CD) and the
> universe (or a computer program/simulation) is that with a physical
> recording it is possible to alter a state at one point in time without
> affecting future/past states.

This implicitly assumes that you can dispense with the continuum and
treat the process as a succession of discrete states. I question that.
It is how we think and how we write and describe computer programs and
we know that if we make the time step small enough in the simulation we
can accurately reproduce processes. But I think we are fooling
ourselves by taking the description in terms of discrete states to be
sufficient - actually we are relying on the physics of the computer to
join one state to the next. Bruno proposes to abstract this whole
process up to Platonia where the role of the computer in interpreting
the program is taken over by abstract computations. But then to avoid
any choice he must allow all possible (countably infinite) computations
between any two states. ISTM this implies a strange topology of states
and I'm not clear on how it models consciousness.
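
To make the discretization point concrete, here is a minimal Python
sketch (the toy dynamics dx/dt = -x, the step sizes and the names are
just illustrative choices of mine): shrinking the time step makes the
succession of discrete states track the continuous trajectory more and
more closely, but it is still the loop running on the computer that
joins one state to the next.

import math

def euler_states(x0, dt, t_end):
    """Discretize dx/dt = -x into a succession of states via Euler steps."""
    states = [x0]
    for _ in range(int(round(t_end / dt))):
        # the loop itself is what joins one state to the next
        states.append(states[-1] + dt * (-states[-1]))
    return states

exact = math.exp(-1.0)  # continuous solution x(t) = exp(-t) at t = 1
for dt in (0.1, 0.01, 0.001):
    final = euler_states(1.0, dt, 1.0)[-1]
    print(f"dt={dt:<6} final={final:.6f} error={abs(final - exact):.6f}")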

> Or maybe consciousness is only created
> from platonic objects / information or relationships that exist within
> them. The appeal of computationalism for me is that it creates a
> self-interpreting structure, the information or state has meaning only
> because it is part a state machine. We, being creatures who can only
> experience through time might be fooled into thinking change over time
> is necessary for consciousness, but what if we could make a computer
> that computed over the X-dimension instead of T, what would such a
> computer look like and how would it be logically different from a
> recording (which is static over T), and how is it logically different
> from a computer that computes accross the T dimension?
>

I don't think it is *logically* different. Before computers, a
computation was something written out on sheets of paper (I know because
my first summer job in college was calculating coordinates and depths
for a geological research company and my official job title was
"Computer".) :-)

Brent

Stathis Papaioannou

unread,
Apr 24, 2009, 11:17:20 PM4/24/09
to everyth...@googlegroups.com
2009/4/24 Brent Meeker <meek...@dslextreme.com>:

>> Boltzmann brains are improbable, but the example of the punchcards is
>> not. The operator could have two punchcards in his pocket, have a
>> conversation with someone on the way from M1 to M2 and end up
>> forgetting or almost forgetting which is the right one. That is, his
>> certainty of picking the right card could vary between 0.5 and 1.
>> Would you say that only if his certainty is 1 would the conscious
>> process be implemented, and not if it is, say, 0.9?
>>
>>
>
> I said it would be implementing *the same* consciousness if he was
> following the rule.  If not he might be implementing a different
> consciousness by using a different rule.  Of course if it were different
> in only one "moment" that wouldn't really be much of a difference.  I
> don't think it depends on his certainty.  Even more difficult we might
> ask what it means for him to follow the rule - must he do it
> *consciously*; in which case do we have to know whether his brain is
> functioning according to the same rule?
>
> You're asking a lot of questions, Stathis.  :-)  What do you think?

I don't think the rule matters, only the result, which could consist
of a series of disconnected states. The utility of a process is that
it reliably brings about the relevant states; but if they arose
randomly or by a different process that would be just as good. If not,
then you could have an apparently functionally identical machine which
has a different consciousness. One half of your brain might function
by a different process that gives the same neuronal outputs, and you
would have a feeling that something had radically changed, but your
mouth would seemingly of its own accord continue to declare that
everything is just the same. So, I agree with Kelly that the
consciousness consists in the information.


--
Stathis Papaioannou

Stathis Papaioannou

unread,
Apr 24, 2009, 11:29:42 PM4/24/09
to everyth...@googlegroups.com
2009/4/25 Brent Meeker <meek...@dslextreme.com>:

> This implicitly assumes that you can dispense with the continuum and
> treat the process as a succession of discrete states.  I question that.

So are you saying that, because we are conscious, that is evidence
that reality is at bottom continuous rather than discrete?

Do you think a computation would feel different from the inside
depending on whether it was done with pencil and paper, transistors or
vacuum tubes?


--
Stathis Papaioannou

Brent Meeker

unread,
Apr 24, 2009, 11:38:45 PM4/24/09
to everyth...@googlegroups.com
Stathis Papaioannou wrote:
> 2009/4/24 Brent Meeker <meek...@dslextreme.com>:
>
>
>>> Boltzmann brains are improbable, but the example of the punchcards is
>>> not. The operator could have two punchcards in his pocket, have a
>>> conversation with someone on the way from M1 to M2 and end up
>>> forgetting or almost forgetting which is the right one. That is, his
>>> certainty of picking the right card could vary between 0.5 and 1.
>>> Would you say that only if his certainty is 1 would the conscious
>>> process be implemented, and not if it is, say, 0.9?
>>>
>>>
>>>
>> I said it would be implementing *the same* consciousness if he was
>> following the rule. If not he might be implementing a different
>> consciousness by using a different rule. Of course if it were different
>> in only one "moment" that wouldn't really be much of a difference. I
>> don't think it depends on his certainty. Even more difficult we might
>> ask what it means for him to follow the rule - must he do it
>> *consciously*; in which case do we have to know whether his brain is
>> functioning according to the same rule?
>>
>> You're asking a lot of questions, Stathis. :-) What do you think?
>>
>
> I don't think the rule matters, only the result, which could consist
> of a series of disconnected states. The utility of a process is that
> it reliably brings about the relevant states; but if they arose
> randomly or by a different process that would be just as good.

If two processes always produce the same sequence they are the same
process in the abstract sense.

> If not,
> then you could have an apparently functionally identical machine which
> has a different consciousness. One half of your brain might function
> by a different process that gives the same neuronal outputs, and you
> would have a feeling that something had radically changed, but your
> mouth would seemingly of its own accord continue to declare that
> everything is just the same. So, I agree with Kelly that the
> consciousness consists in the information.
>
>

But is it the information in consciousness and is it discrete? If you
include the information that is in the brain, but not in consciousness,
I can buy the concept of relating states by similarity of content. Or
if you suppose a continuum of states that would provide a sequence. It
is only when you postulate discrete states containing only the contents
of instants of conscious thought, that I find difficulty.

Brent


Brent Meeker

unread,
Apr 24, 2009, 11:47:54 PM4/24/09
to everyth...@googlegroups.com
No, I don't think the medium makes a difference. But interpretation
makes a difference. Most computations we do, on pencil and paper or
transistors or neurons, have an interpretation in terms of our world.
Kelly is supposing there is a "self-interpreting structure". I'm not
sure what he means by this, but I imagine something like an elaborate
simulation in which some parts of the computation simulate entities with
values or purposes - on some mapping. But what about other mappings?

Brent

Stathis Papaioannou

unread,
Apr 25, 2009, 5:52:20 AM4/25/09
to everyth...@googlegroups.com
2009/4/25 Brent Meeker <meek...@dslextreme.com>:

> But is it the information in consciousness and is it discrete?  If you
> include the information that is in the brain, but not in consciousness,
> I can buy the concept of relating states by similarity of content.  Or
> if you suppose a continuum of states that would provide a sequence. It
> is only when you postulate discrete states containing only the contents
> of instants of conscious thought, that I find difficulty.

I'm not sure I understand. Are you saying that the information in most
physical processes, but not consciousness, can be discrete? I would
have said just the opposite: that even if it turns out that physics is
continuous and time is real, it would still be possible to chop up
consciousness into discrete parts (albeit of finite duration) and
there would still be continuity. In fact, I can't imagine how
consciousness could possibly be discontinuous if this was done, for
where would the information that tells you you've been chopped up
reside?


--
Stathis Papaioannou

Stathis Papaioannou

unread,
Apr 25, 2009, 7:04:56 AM4/25/09
to everyth...@googlegroups.com
2009/4/25 Brent Meeker <meek...@dslextreme.com>:

> No, I don't think the medium makes a difference.  But interpretation
> makes a difference.  Most computations we do, on pencil and paper or
> transistors or neurons, have an interpretation in terms of our world.
> Kelly is supposing there is a "self-interpreting structure" I'm not sure
> what he means by this, but I imagine something like an elaborate
> simulation in which some parts of the computation simulate entities with
> values or purposes - on some mapping.  But what about other mappings?

It's true that in the case of an ordinary computation in a brain or
digital computer the interpretation is fixed since the external world
with which it interacts is fixed. That's why brains and computers are
useful, after all. But if you take the system as a whole, there is no
a priori way to say what the interpretation of one part should be with
respect to another part. So we return to the position whereby a rock
could implement any finite state machine, if you only look at it the
right way.
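
To spell out the kind of mapping at stake, here is a toy Python sketch
(the "rock states" are just clock ticks, and the target machine, a
two-bit counter, is an arbitrary choice of mine): any sequence of
distinct physical states can be relabelled, after the fact, as any
finite run of any finite state machine, and all of the work is done by
the mapping, which is the point.

# Toy "rock": its successive physical states are just distinct time indices.
rock_history = list(range(8))

# An arbitrary target run of a finite state machine (here a two-bit counter).
target_run = ["00", "01", "10", "11", "00", "01", "10", "11"]

# The "implementation" is nothing but a post-hoc relabelling of rock states.
mapping = dict(zip(rock_history, target_run))
interpreted_run = [mapping[t] for t in rock_history]

assert interpreted_run == target_run
print(interpreted_run)  # under this mapping the rock "runs" the counter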


--
Stathis Papaioannou

Kelly

unread,
Apr 25, 2009, 3:17:15 PM4/25/09
to Everything List
On Apr 24, 2:41 am, Brent Meeker <meeke...@dslextreme.com> wrote:
> >
> > In the materialist view, my mental state is just the
> > state of the particles of my brain at that instant.
>
> I think we need some definition of "state".

Hmmm. Well, I think my view of the word is pretty much the dictionary
definition. Though there are two different meanings in play here.

The physical state:

"the condition of matter with respect to structure, form,
constitution, phase, or the like"

And the mental state:

"a particular condition of mind or feeling"

Though ultimately I'm saying that there is no actual physical world
that exists outside of and independent from our perceptions. You and
I probably perceive a very similar world, but there are other conscious
observers who perceive very different worlds. But all worlds are
virtual worlds that exist only inside the minds of conscious platonic
observers. And I base this conclusion on the line of thought laid out
in my previous posts.


> If we discretize your brain, say slice it into Planck
> units of time as Jason suggested, now we need to
> have something to connect one state to another.

Why do we need to have something extra to connect one state to
another? What does this add, exactly?

I think that these instances of consciousness are like pieces from a
picture puzzle. But not a jigsaw picture puzzle...instead let's say
that each puzzle piece is perfectly square, and they combine to make
the full picture.

How do you know where each piece fits into the overall picture? By
the contents of the image fragment that is on each puzzle piece.

So each puzzle piece has, contained within it, the information that
indicates its position in the larger framework. The same is true of
instances of consciousness.

Based on how well the edges of their "images" line up, you can get
some idea about the relationship between two instances of
consciousness.
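
Here is a toy Python sketch of that idea of ordering by content alone
(the "moments" are just overlapping text fragments that I made up,
standing in for content-rich instants): given the pieces in no
particular order, the overlap of their "edges" is enough to recover the
whole sequence, with no extra glue beyond the content itself.

import random

stream = "the quick brown fox jumps over the lazy dog"
# "moments": overlapping fragments, each carrying a bit of its neighbours
pieces = [stream[i:i + 8] for i in range(0, len(stream) - 4, 4)]
random.shuffle(pieces)

def reassemble(pieces, overlap=4):
    pieces = list(pieces)
    tails = {p[-overlap:] for p in pieces}
    # the first moment is the one whose leading edge matches no trailing edge
    start = next(p for p in pieces if p[:overlap] not in tails)
    chain = [start]
    pieces.remove(start)
    while pieces:
        nxt = next(p for p in pieces if p[:overlap] == chain[-1][-overlap:])
        chain.append(nxt)
        pieces.remove(nxt)
    return chain[0] + "".join(p[overlap:] for p in chain[1:])

print(reassemble(pieces) == stream)  # True: the contents alone fix the order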


> In idealism, the content of a state consciousness (a Planck slice, not
> of a brain, but of a stream of consciousness) seems to me to be very
> small

Well, I'm not sure how much of the brain's information is needed to
represent a particular state of consciousness. But I don't think that
it's a crucial question. My answer is: more than none of it, but
less-than-or-equal-to all of it. Somewhere in that range. Ha!


> You say it is
> connected by the correlation of information content, but is that
> unique? Is there a best or most probable next state or what?

So I guess I'm taking the position of "extreme platonism" here. The
result is, I suppose, indistinguishable from that of modal realism.

All possible "next states" exist. None of them are "best" or "more
probable" than any other. Every possible future lies ahead of you,
and some version of you will experience each one of them. There will
be a version of you that never sees anything that strikes you as
unusual and who says "the universe is very normal, and this all makes
perfect sense, and how could it be any other way. These people who
advocate extreme platonism are crazy, because it doesn't match what I
observe."

But, there will also be a version of you who never has a normal
experience again. For eternity you will go from strange experience to
strange experience. And this version will say, "ah, ya, Kelly was
right about that extreme platonism thing."

And there will be all points between the two extremes.

Though, I think that this view does make a testable prediction. Which
is: there will be no end to your experiences. There is no permanent
first person death.

Of course, many realities will be unpleasant enough that this isn't
necessarily a good thing. All good things lie before you. But so do
all bad things. Blerg.

Kelly

unread,
Apr 25, 2009, 3:42:31 PM4/25/09
to Everything List
On Apr 24, 3:14 am, Jason Resch <jasonre...@gmail.com> wrote:
> Kelly,
>
> Your arguments are compelling and logical, you have put a lot of doubt
> in my mind about computationalism.

Excellent!

It sounds like you are following the same path as I did on all of
this.

So it makes sense to start with the idea of physicalism and the idea
that the mind is like a very complex computer, since this explains
third person observations of human behavior and ability very well I
think.

BUT, then the question of first person subjective consciousness
arises. Where does that fit in with physicalism? So the next step is
to expand to physicalism + full computationalism, where the
computational activities of the brain also explain consciousness, in
addition to behavior and ability.

But then you run into things like Maudlin's Olympia thought
experiment, and Bruno's movie graph examples, and many other strange
scenarios as well.

So the next step is to just get rid of physicalism altogether, as it
has other problems anyway (why something rather than nothing, the
ultimate nature of matter and energy, the origin of the universe, the
strangeness of QM, etc. etc.), and just go with pure computationalism.

But in the thought experiments that led to the jettisoning of
physicalism, the possibility appears of just associating consciousness
with information, instead of with the computations that produce the
information.

So we seem to have two options: "computation + information" OR
"information".

I can't really see what problem is solved by including computation.
To me, assigning consciousness to platonically existing information
seems to be good enough, with nothing left over for computation to
explain. So, I go with the "just information" choice.

Kelly

unread,
Apr 25, 2009, 4:52:39 PM4/25/09
to Everything List
On Apr 24, 11:39 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> > At any given instant that I'm awake, I'm
> > conscious of SOMETHING.
>
> To predict something, the difficulty is to relate that consciousness
> to its computational histories. Physics is given by a measure of
> probability on those comp histories.

The laws of physics would seem to be contingent, not necessary. In
that I can imagine a universe with an entirely different set of
physical laws.

Further, assuming that computer simulations of brains are possible and
give rise to consciousness, I can imagine that a simulation of such a
brain could be altered in a way that the simulated consciousness
begins to perceive a universe with these alternate physical laws. Or
even begins to perceive a universe with no consistent coherent
physical laws at all.


> > And I'm conscious of it by virtue of my
> > mental state at that instant. In the materialist view, my mental
> > state is just the state of the particles of my brain at that
> > instant.
>
> Which cannot be maintained with the comp hyp. Your consciousness is an
> abstract type related to all computations going through your current
> state.

I see what my "current state" does here with respect to
consciousness. But I don't see what the "computations going through
it" contribute.


> > I won't worry about it too much, as there is no doctor, only my
> > perceptions of a doctor. Every possible outcome of the "brain
> > replacement operation" that I can perceive, I will perceive.
>
> Not in the relative way. You have to explain why you see apples
> falling from a tree, and not any arbitrary information-theoretical data.

I explain it by asserting that there are many versions of me, some who
see apples, and some who see arbitrary information-theoretical data.
Everything that can be perceived is perceived.


> > Including outcomes that don't make any sense.
>
> You have to explain why they are *rare*. If not your theory does not
> explain why you put water on the gas and not in the fridge when you
> want a cup of coffee.

I don't say that they are rare, I say they don't make any sense. A
big difference.

I say that every possible event is perceived to happen, and so nothing
is more or less rare than anything else. There are only things that
are rare in your experience. They are not rare in an absolute sense.

Why do I say this? Because I think that platonism is the best
explanation for conscious experience, and the above view is (I think)
the logical conclusion of that platonic view of reality.


> > Thus the talk of
> > probabilities and measures. I'm willing to just say that all
> > universes are experienced.
>
> That is absolutely true. But we don't live in the absolute (except
> perhaps with salvia :).

I say that we do live in the absolute. Not all experiences of the
absolute will be strange. If all possible experiences exist in the
absolute, then by definition some will be quite ordinary and mundane.
Right?

But, right, salvia gives a taste of how strange experience can be.
And also schizophrenia, dementia, and various other mental conditions
and abnormalities caused by damage to the brain are further examples.

How does your computational theory of consciousness explain the
perceptions of these people?


> That is true, but we want to explain "the stable appearance of atoms
> and galaxies", and what happens when we die.

Some observers will see stable atoms and galaxies. Because that's one
of the possible sets of experience. Other observers won't see these
things.


> You are right. But consciousness is the only thing I have no doubt
> about. The *only* undoubtable thing. The fixed point of the cartesian
> systematic doubting attitude. A theory which eliminate my first
> person, or my consciousness, although irrefutable by me, is wrong, I
> hope, I hope it to be wrong for you too. (Why would I send a post on
> consciousness to a zombie?)

Right, I'm with you on this. Consciousness is the one thing that
can't be doubted. And that's where the trouble starts...






Colin Hales

unread,
Apr 25, 2009, 8:29:52 PM4/25/09
to everyth...@googlegroups.com


Kelly wrote:
> On Apr 24, 3:14 am, Jason Resch <jasonre...@gmail.com> wrote:
>> Kelly,
>>
>> Your arguments are compelling and logical, you have put a lot of doubt
>> in my mind about computationalism.
>
> Excellent!
>
> It sounds like you are following the same path as I did on all of
> this.
>
> So it makes sense to start with the idea of physicalism and the idea
> that the mind is like a very complex computer, since this explains
> third person observations of human behavior and ability very well I
> think.
>
> BUT, then the question of first person subjective consciousness
> arises.  Where does that fit in with physicalism?  So the next step is
> to expand to physicalism + full computationalism, where the
> computational activities of the brain also explain consciousness, in
> addition to behavior and ability.
It's really cool to see folks exploring where I have been and seeing the same problems. I might be able to shed a little light on a productive 'next step' for exploration:

Try understanding the difference between a natural world which IS
literally a mathematics and a natural world merely described BY a
mathematics. Note that a Turing machine is an instrument of 'BY'
computationalism, not the natural computation that I am speaking of. If
you can get your head around this, then the answers (to a first person
perspective) can be found. Stop thinking 'computation OF' and start
thinking 'natural computation that IS'.

Also very useful is the idea of explaining the capacity to do science
(grounded in a first person experience that is, in context, literally
scientific observation): this is a very testable behaviour, and it is
the last thing physicists seem to want to explain, namely themselves. It
is a green field in which it is obvious that cognition is most
definitely not computation in the 'computation BY' sense.

Enjoy!

colin hales


Brent Meeker

unread,
Apr 26, 2009, 1:08:23 AM4/26/09
to everyth...@googlegroups.com
These are "edges" in time, i.e. a future boundary and a past boundary.
If these two boundaries are different then we are not longer talking
about a state, we're talking about an interval, furthermore an interval
that has duration and direction.

>
>
>> In idealism, the content of a state consciousness (a Planck slice, not
>> of a brain, but of a stream of consciousness) seems to me to be very
>> small
>>
>
> Well, I'm not sure how much of the brain's information is needed to
> represent a particular state of consciousness. But I don't think that
> it's a crucial question.

It's a crucial question if the answer is "more than what is in an
instant of consciousness."

Brent

Kelly

unread,
Apr 26, 2009, 4:53:45 AM4/26/09
to Everything List
On Apr 26, 1:08 am, Brent Meeker <meeke...@dslextreme.com> wrote:
> These are "edges" in time, i.e. a future boundary and a past boundary.
> If these two boundaries are different then we are not longer talking
> about a state, we're talking about an interval, furthermore an interval
> that has duration and direction.

Uhhhhhhhhh....what???

I think you should re-read my post. I think you missed something.


> > Well, I'm not sure how much of the brain's information is needed to
> > represent a particular state of consciousness. But I don't think that
> > it's a crucial question.
>
> It's a crucial question if the answer is "more than what is in an
> instant of consciousness."

Why is it a crucial question in that case? I don't see what you're
getting at.

Bruno Marchal

unread,
Apr 26, 2009, 10:40:26 AM4/26/09
to everyth...@googlegroups.com

On 25 Apr 2009, at 21:17, Kelly wrote:

>
> On Apr 24, 2:41 am, Brent Meeker <meeke...@dslextreme.com> wrote:
>>>
>>> In the materialist view, my mental state is just the
>>> state of the particles of my brain at that instant.
>>
>> I think we need some definition of "state".
>
> Hmmm. Well, I think my view of the word is pretty much the dictionary
> definition. Though there are two different meanings in play here.
>
> The physical state:
>
> "the condition of matter with respect to structure, form,
> constitution, phase, or the like"
>
> And the mental state:
>
> "a particular condition of mind or feeling"


... and computational states.

Then assuming comp, you can attribute a mental state to a
computational state, and then you must attribute an infinity of
computational states to a mind state.

>
>
> Though ultimately I'm saying that there is no actual physical world
> that exists outside of and independent from our perceptions. You and
> I probably perceive a very similar world, but there other conscious
> observers who perceive very different worlds. But all worlds are
> virtual worlds that exist only inside the minds of conscious platonic
> observers. And I base this conclusion on the line of thought laid out
> in my previous posts.

This is close to the consequence of comp.


>
>
>
>> If we discretize your brain, say slice it into Planck
>> units of time as Jason suggested, now we need to
>> have something to connect one state to another.
>
> Why do we need to have something extra to connect one state to
> another? What does this add, exactly?

I would say this goes along with the very (mathematical) definition of
what a computational state is.

>
>
> I think that these instances of consciousness are like pieces from a
> picture puzzle. But not a jigsaw picture puzzle...instead let's say
> that each puzzle piece is perfectly square, and they combine to make
> the full picture.
>
> How do you know where each piece fits into the overall picture? By
> the contents of the image fragment that is on each puzzle piece.
>
> So each puzzle piece has, contained within it, the information that
> indicates it's position in the larger framework. The same is true of
> instances of consciousness.

Nice image. With comp the same image admits an infinity of pieces
though.

And thus there is a measure problem. We agree?


>
>
> Though, I think that this view does make a testable prediction. Which
> is: there will be no end to your experiences. There is no permanent
> first person death.

OK, but such first person experiences are excluded from scientific
thought; we can only talk about them scientifically. My point is that
comp also entails verifiable/refutable physical facts.


>
>
> Of course, many realities will be unpleasant enough that this isn't
> necessarily a good thing. All good things lie before you. But so do
> all bad things. Blerg.

Yet, we have partial control: we can do things which change our
relative measure. It is useful when we want to drink coffee, to give an
example.

Bruno

http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Apr 26, 2009, 10:50:39 AM4/26/09
to everyth...@googlegroups.com

Then you lose the measure problem, the physical laws, the partial and
relative control, the quantum nature of the computations, etc.


>
>
> So we seem to have two options: "computation + information" OR
> "information".

This is like replacing the universal dovetailing (with its redundancy,
its very long (deep) histories, its many internal dynamics) by a bare
collection of information strings.

>
>
> I can't really see what problem is solved by including computation.


Do you say "yes" to the digitalist doctor? If yes, you cannot avoid
"computer science" or "elementary number theory" even just to define
"information". Why avoiding computer science in a theory which relate
consciousness (as manifesting relatively to me) to working computer.


>
> To me, assigning consciousness to platonically existing information
> seems to be good enough, with nothing left over for computation to
> explain. So, I go with the "just information" choice.

Again: formally the difference is that your theory accepts the natural
numbers (the finite information strings) and succession (to get them
all). But if you add addition and multiplication you get computer
science + a measure which explains why apples can fall from a tree in
normal histories, and why white rabbits can be rare.
I could add that if you are a platonist, I don't see how you can avoid
the computations through which the information fluxes develop,
when seen from inside.

Perhaps I should just ask what your theory is. A measure on information
already needs a non-trivial mathematical apparatus.

Bruno

http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Apr 26, 2009, 11:40:59 AM4/26/09
to everyth...@googlegroups.com

On 25 Apr 2009, at 22:52, Kelly wrote:

>
> On Apr 24, 11:39 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>>> At any given instant that I'm awake, I'm
>>> conscious of SOMETHING.
>>
>> To predict something, the difficulty is to relate that consciousness
>> to its computational histories. Physics is given by a measure of
>> probability on those comp histories.
>
> The laws of physics would seem to be contingent, not necessary.


On the contrary: physical laws are necessarily non-contingent
with comp. They are defined through all computations in Platonia.


> In
> that I can imagine a universe with an entirely different set of
> physical laws.


There is no universe. You already "belong" to all comp histories going
through your actual states. Of course all states are actual from
inside, but only "normal states" remain normal, and there are
physical laws only in normal histories. Physicalness is a product of
those "normality" conditions on histories.

>
>
> Further, assuming that computer simulations of brains are possible
> and
> give rise to consciousness,


OK. That is comp. My working hypothesis.

> I can imagine that a simulation of such a
> brain could be altered in a way that the simulated consciousness
> begins to perceive a universe with these alternate physical laws.

Only relatively to you. From the first person point of view of the
inhabitants of your altered simulation, they don't belong to it, but to
the infinity of simulations in Platonia. If your alteration is such
that the 1-view of those inhabitants escapes from normality, from their
point of view they escape your "universe".
With comp there is no identity thesis. There is a 1-1 relation going
from a machine to a mind, but the inverse is 1-infinity: to each mind
state there is an infinity of machines realizing it. The first person
indeterminacy is exploitable to extract the laws of physics.


> Or
> even begins to perceive a universe with no consistent coherent
> physical laws at all.


The question is: what is their relative probability measure? What can
I expect?


>
>
>
>>> And I'm conscious of it by virtue of my
>>> mental state at that instant. In the materialist view, my mental
>>> state is just the state of the particles of my brain at that
>>> instant.
>>
>> Which cannot be maintained with the comp hyp. Your consciousness is
>> an
>> abstract type related to all computations going through your current
>> state.
>
> I see what my "current state" does here with respect to
> consciousness. But I don't see what the "computations going through
> it" contribute.


They contribute to the measure which gives sense to the "universal"
physical laws.

>
>
>
>>> I won't worry about it too much, as there is no doctor, only my
>>> perceptions of a doctor. Every possible outcome of the "brain
>>> replacement operation" that I can perceive, I will perceive.
>>
>> Not in the relative way. You have to explain why you see apples
>> falling from a tree, and not any arbitrary information-theoretical
>> data.
>
> I explain it by asserting that there are many versions of me, some who
> see apples, and some who see arbitrary information-theoretical data.
> Everything that can be perceived is perceived.


Without giving me a measure, it is like your theory predicts
everything. This is contradicted by the facts. If I want coffee now, I
know all too well I have to do something for that. Sorry but I cannot
wait for a white rabbit bringing me my cup of coffee.

>
>
>
>>> Including outcomes that don't make any sense.
>>
>> You have to explain why they are *rare*. If not your theory does not
>> explain why you put water on the gas and not in the fridge when you
>> want a cup of coffee.
>
> I don't say that they are rare, I say they don't make any sense. A
> big difference.
>


If they really make no sense, then they do not exist in Platonia,
except in non-standard mathematical representations (due to
incompleteness). Then they had better be rare relative to my current
state, or your theory is deflationary: it predicts every event.

> I say that every possible event is perceived to happen, and so nothing
> is more or less rare than anything else.

There has to be rarity at least in the relative way; if not, your
theory predicts all happenings, even in practice, but the facts
contradict this.

> There are only things that
> are rare in your experience.

This is what comp can explain. With the universal dovetailer the
normal histories get explanations of measure one; the counting
algorithm does not provide this.


> They are not rare in an absolute sense.

Probably. I don't know, because the probabilities are always relative
with comp, but this is an old discussion (cf. ASSA/RSSA).

>
>
> Why do I say this? Because I think that platonism is the best
> explanation for conscious experience, and the above view is (I think)
> the logical conclusion of that platonic view of reality.

I agree with the platonism. And it is because the computations are in
Platonia that the whole thing works.

>
>
>
>>> Thus the talk of
>>> probabilities and measures. I'm willing to just say that all
>>> universes are experienced.
>>
>> That is absolutely true. But we don't live in the absolute (except
>> perhaps with salvia :).
>
> I say that we do live in the absolute. Not all experiences of the
> absolute will be strange. If all possible experiences exist in the
> absolute, then by definition some will be quite ordinary and mundane.
> Right?

Sure. But only in normal (measure 1) histories do they remain normal.
If you don't address the first person (singular and plural)
indeterminacy problems, you don't solve the mind body problem, nor the
body problem (the origin of the appearance of a stable physical
universe).


>
>
> But, right, salvia gives a taste of how strange experience can be.
> And also schizophrenia, dementia, and various other mental conditions
> and abnormalities cause by damage to the brain are further examples.
>
> How does your computational theory consciousness explain the
> perceptions of these people?


By the Galois connection between machines and behaviors, or equations
and surfaces, or theories and models.

If you have a system of equations, and decide to drop some of the
equations, making the system smaller, you get more solutions, more
hypersurfaces realizing the remaining equations. Similarly, when you
drop axioms from a theory, the theory will have more models.
Similarly, when you "diminish" a brain, you enlarge the possible
consciousness. The consciousness of the universal person (universal
consciousness) exists in Platonia. Its "platonic brain" is any little
theory + induction axioms. It differentiates through the consistent
extensions, which also exist in Platonia. So many such extensions
exist that a notion of normality and stability is necessarily
perceived from inside: the physical laws emerge. Making a part of your
brain sleepy or perturbed can let you experience unusual but real
reality types.
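
A toy example of that inverse relation, with the equations chosen just
for illustration: the system

\{\, x + y = 2,\ x - y = 0 \,\} has the single solution \{(1,1)\},

while dropping the second equation leaves

\{\, x + y = 2 \,\} with the solution set \{(t,\, 2 - t) : t \in \mathbb{R}\}.

Fewer constraints give a strictly larger solution set, just as a theory
with fewer axioms has more models, and a "smaller" brain a larger set
of consistent extensions.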


>
>
>
>> That is true, but we want to explain "the stable appearance of atoms
>> and galaxies", and what happens when we die.
>
> Some observers will see stable atoms and galaxies. Because that's one
> of the possible sets of experience. Other observers won't see these
> things.

I can explain to you that almost all observers will observe the same
physical laws everywhere in Platonia. You are the one saying that
physics is contingent now.

The nice thing with comp is that physics is necessary, and necessarily
the same for all observers/universal machines. Only geography and
history can be (very) different. Practically, when we die, things are
more complex to predict, because you shift toward less normal worlds
and (I think) you can come back to universal consciousness, an amnesic
state where you forget the more particular differentiated dreams. It
is an open problem to evaluate the probability of surviving with all
your memories, as if your normal life just continued (it is not null,
but at each DU-step k it is multiplied by something (much) less than
1/2^k, I think).
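
To spell out the arithmetic of that rough bound (the factor 1/2^k is
only a guess, the rest is elementary): if at each DU-step k the
probability of continuing with all your memories picks up a factor
c_k < 2^{-k}, then after n steps the cumulative probability is below

\prod_{k=1}^{n} 2^{-k} = 2^{-\sum_{k=1}^{n} k} = 2^{-n(n+1)/2},

which vanishes extremely fast as n grows, yet is never exactly zero at
any finite step, consistent with "it is not null".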

>
>
>
>> You are right. But consciousness is the only thing I have no doubt
>> about. The *only* undoubtable thing. The fixed point of the cartesian
>> systematic doubting attitude. A theory which eliminate my first
>> person, or my consciousness, although irrefutable by me, is wrong, I
>> hope, I hope it to be wrong for you too. (Why would I send a post on
>> consciousness to a zombie?)
>
> Right, I'm with you on this. Consciousness is the one thing that
> can't be doubted. And that's where the trouble starts...

That's where science starts. That is why we have fun discussing
theories and arguments :)

Bruno


http://iridia.ulb.ac.be/~marchal/

Brent Meeker

unread,
Apr 26, 2009, 12:47:48 PM4/26/09
to everyth...@googlegroups.com
Kelly wrote:
> On Apr 26, 1:08 am, Brent Meeker <meeke...@dslextreme.com> wrote:
>
>> These are "edges" in time, i.e. a future boundary and a past boundary.
>> If these two boundaries are different then we are not longer talking
>> about a state, we're talking about an interval, furthermore an interval
>> that has duration and direction.
>>
>
> Uhhhhhhhhh....what???
>
> I think you should re-read my post. I think you missed something.
>

No, I think you're missing my point. Consider your analogy of fitting
together images to make a complete picture. You present this as a
spatial representation of the sequential flow of consciousness. Now
suppose your spatial elements have zero extent - they are "spatial
instants", i.e. points. What fits them together?

>
>
>>> Well, I'm not sure how much of the brain's information is needed to
>>> represent a particular state of consciousness. But I don't think that
>>> it's a crucial question.
>>>
>> It's a crucial question if the answer is "more than what is in an
>> instant of consciousness."
>>
>
> Why is it a crucial question in that case? I don't see what you're
> getting at.

It appears to me that you are implicitly supposing that information in
the brain (say in its structure) can be associated with an instant of
consciousness and hence allow its position in the "complete picture" to
be determined. But it would not be a legitimate move to use information
that was not in the instant itself. And that's what I find implausible,
that there is significant information content in a conscious interval of
infinitesimal duration.

Brent

Jason Resch

unread,
Apr 26, 2009, 2:01:49 PM4/26/09
to everyth...@googlegroups.com
On Sat, Apr 25, 2009 at 3:52 PM, Kelly <harm...@gmail.com> wrote:
>
> I don't say that they are rare, I say they don't make any sense.  A
> big difference.
>
> I say that every possible event is perceived to happen, and so nothing
> is more or less rare than anything else.  There are only things that
> are rare in your experience.  They are not rare in an absolute sense.
>
> Why do I say this?  Because I think that platonism is the best
> explanation for conscious experience, and the above view is (I think)
> the logical conclusion of that platonic view of reality.
>
>

I am not sure that the measure problem can be so easily
abandoned/ignored.  Assuming every Observer Moment has an equal
measure, then the random, white-noise-filled OMs should vastly
outnumber the ordered and sensible OMs.  Though I only ever have one
OM to go by, the fact that I was able to maintain a non-random,
non-white-noise stream of OMs long enough to compose this post
should serve as some level of evidence that all OMs are not weighted
equally.
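
A crude way to see the counting, in Python (the toy notion of "order"
used here, an n-bit string that repeats with some short period, is just
my stand-in for non-random, sensible content):

from itertools import product

def is_ordered(bits, max_period=4):
    """Toy notion of order: the string repeats with some period <= max_period."""
    return any(all(bits[i] == bits[i % p] for i in range(len(bits)))
               for p in range(1, max_period + 1))

n = 16
ordered = sum(is_ordered(s) for s in product("01", repeat=n))
print(f"{ordered} ordered strings out of {2 ** n} "
      f"(fraction {ordered / 2 ** n:.5f})")

Even with this generous notion of order, only a tiny fraction of the
2^16 strings qualify; under a uniform measure the disorderly ones swamp
them.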

Bruno has suggested that computationalism is a candidate for answering
the measure problem in a testable way. However there may be other
ways to answer it by considering platonic objects, for example
counting the number of paths to a state, that is, how often it reappears
as a substructure of other platonic objects, etc. Whether or not this
is testable is another question, but whether the ultimate explanation
of consciousness is computation or information, I feel that measure is
important.

Jason

Kelly

unread,
Apr 26, 2009, 8:04:46 PM4/26/09
to Everything List
On Apr 26, 2:01 pm, Jason Resch <jasonre...@gmail.com> wrote:
> I am not sure that the measure problem can be so easily
> abandoned/ignored. Assuming every Observer Moment had has an equal
> measure, then the random/white-noise filled OMs should vastly
> outnumber the ordered and sensible OMs.

The ordered and sensible OM's may be vastly outnumbered, but they are
there. And thus if you assume that everything happens, they will
happen, and that explains your current experience of an ordered and
sensible reality. I don't see the problem.

Again, I'm led to this conclusion by the line of reasoning mentioned
in my previous posts. I didn't start with this assumption and then
try to come up with supporting evidence.

It is a strange conclusion, but it seems to me that any theory that
explains conscious experience is going to have to be strange. I think
this one is only slightly odder than Bruno's. And it's not really
much odder than MWI, or the implications of an infinite universe
(e.g., infinite Kellys), or of infinite time (e.g., Poincaré
recurrence, Boltzmann brains). Or strange compared to thinking about
where a material universe could have come from, what preceded it,
what caused it, what underlies it, etc. That we exist at all is
pretty strange I think.


> Though I ever only have one
> OM to go by, the fact I was able to maintain a
> non-random/non-white-noise filled OMs long enough to compose this post
> should serve as some level of evidence that all OMs are not weighted
> equally.

If all possible OMs are real, then you will have successfully
completed all possible posts. So, where's the problem? You are one
of the Jasons who successfully completed a post.  Where does your
experience depart from what the theory predicts?

You can only experience one path through life. One reality per
customer. The reality that you are experiencing HAD to be experienced
by someone, this is mandatory in my theory. Using the fact that you
ARE in fact experiencing it to try to disprove my theory I think is
not a valid option.

My theory does make one definite prediction, and so is (first person)
falsifiable. It predicts that there is always a next moment. Always
another conscious experience.

So, if you die and that's it, just oblivion...then I was wrong. Oops.

So, we just have to wait...we will have our answer soon enough!


Jason Resch

unread,
Apr 26, 2009, 9:00:57 PM4/26/09
to everyth...@googlegroups.com

I understand that all possible experiences by definition are
experienced, and that rare experiences, however rare they may be, will
still be experienced. In fact I used that same argument with Russell
Standish when he said that ants aren't conscious because if they were
then we should expect to be experiencing life as ants and not humans.

However, in your theory you explain that there are always "next
moments" to be experienced.  If you were to wager on your next
experience, would you guess that it will be random or ordered?  If you
say ordered, is that not a contradiction when the random experiences
so greatly outnumber the ordered?

If your theory is true, then certainly there are observers who
experience every moment as sensible, yet I would liken those to a
branch of the multiverse where every time an experimenter measures the
quantum state of any particle, it comes out the same; in that branch
perhaps they never develop the field of quantum mechanics, but how
long into the future would you expect that illusion to hold?  Perhaps
in your theory "next" and "previous" OMs aren't really connected, and
there is only the illusion of such a connection?

Would you say you belong to the ASSA or RSSA camp?

http://everythingwiki.gcn.cx/wiki/index.php?title=ASSA
http://everythingwiki.gcn.cx/wiki/index.php?title=RSSA

Or perhaps something different entirely?

Jason

Kelly

unread,
Apr 26, 2009, 11:19:51 PM4/26/09
to Everything List

On Sun, Apr 26, 2009 at 9:00 PM, Jason Resch <jason...@gmail.com>
wrote:
>
> In fact I used that same argument with Russell
> Standish when he said that ants aren't conscious because if they were
> then we should expect to be experiencing life as ants and not humans.

Did you win or lose that argument?

I've heard that line of reasoning before also. Doesn't it also
conclude that we're living in the last days? If there are more
conscious beings in the future than in the present, then we should
expect to live there and not here, so there must not be more conscious
beings in the future? And also it predicts that there are no
significant number of (conscious) aliens? Because if there were, we
should expect to be one of them and not a human?

Sounds like over-use of a good idea. In this case it ignores all
other available information to just focus only on one narrow
statistic. Why should we ignore everything else we know and only
credit this single argument from probability? Surely, after studying
ants and humans, the knowledge that we gain has to alter our initial
expectations, right? But that isn't taken into account here (at least
not in your one line description of the discussion...ha!).

I think the problem with Russell's ant argument stems from trying to
use "a priori" reasoning in an "a posteriori" situation. There is
extra information available that he isn't taking into consideration.

Probably the same applies to the Doomsday argument and aliens. There
is extra information available that isn't being taken into account by
SSA. Pure SSA type reasoning only applies when there is no extra
information available on which to base your conclusion, I think.


> However, in your theory you explain that there are always "next
> moments" to be experienced, if you were to wager on your next
> experience would you guess that it will be random or ordered? If you
> say ordered, is that not a contradiction when the random experiences
> so greatly outnumber the ordered?

I have no choice in the matter. Some of me are going to bet random.
Some of me are going to bet ordered. When you come to a fork in the
road, take it.

Really and truly, I think the best rule of thumb is to bet the way
that leaves you looking LEAST FOOLISH if you're wrong. Usually
that'll be "ordered".


> Perhaps in your theory "next" and "previous" OMs aren't really
> connected, only the illusion of such a connection?

Right, that's exactly what I'm saying.


> Would you say you belong to the ASSA or RSSA camp?
> Or perhaps something different entirely?

I guess something different entirely. I'm saying that the only rule
is: "Everything happens. And sometimes, by sheer coincidence, it
makes sense."

Kelly

unread,
Apr 27, 2009, 12:40:13 AM4/27/09
to Everything List
On Apr 26, 12:47 pm, Brent Meeker <meeke...@dslextreme.com> wrote:
>
> No, I think you're missing my point. Consider your analogy of fitting
> together images to make a complete picture. You present this as a
> spatial representation of the sequential flow of consciousness. Now
> suppose your spatial elements have zero extent - they are "spatial
> instants", i.e. points. What fits them together?
>
>
> It appears to me that you are implicitly supposing that information in
> the brain (say in it's structure) can be associated with an instant of
> consciousness and hence allow it's position in the "complete picture" to
> be determined. But it would not be a legitimate move to use information
> that was not in the instant itself. And that's what I find implausible,
> that there is significant information content in a conscious interval of
> infinitesimal duration.


So, we have two things represented by a puzzle piece.

1) The contents of an instant of consciousness...which is the "image
fragment" on the surface of the piece.

2) How that instant of consciousness relates to the instants that
preceded it and follow it...which is the piece's position within the
larger picture


And you have two separate questions about information and conscious
states.

A) What information is responsible for a conscious state

B) What information is IN a conscious state.

And I think your questions focus on 2 and B.

So, as for 2...there is no actual relationship between the instants.
They fit together based solely on the first person subjective feeling
of flow, which undoubtedly involves some sort of short term memory.
Part of the feeling of an instant is how it is related to the previous
instant.

As for B, I'm not sure this matters, as it's really a separate
question from A. So I am saying consciousness is information, but I'm
not saying it's the information that describes the particular things
that you're conscious OF at any given instant.

If I write down the details of what I'm conscious of AT this moment,
that information isn't the information that caused my conscious
experience OF that moment.

Conscious experience is tied to A. Not B.

B has no special significance. I'm not sure what it even really means
to talk about the information in a conscious state. How much
information is in the feeling of anger? How many bits describe the
subjective experience of seeing red?

Kelly

unread,
Apr 27, 2009, 1:24:47 AM4/27/09
to Everything List

On Apr 26, 11:40 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>
> The question is; what are their relative probability measure? What can
> I expect.

Any expectations you have are unfounded. The problem of induction
applies.

Any probabilities arrived at empirically are suspect, they will
continue to hold for some Brunos but not for all...

But there's really not a better option that I can think of, so we
might as well stick with our expectations and probabilities.

Not that we have a choice, since free will is an illusion also...


> Without giving me a measure, it is like your theory predicts
> everything.

Right, it does basically predict everything. Except an end to
experience. There is no sweet, sweet release of death if I'm right.
There will be no final rest in the comforting embrace of oblivion.
Only the endless grind of a weary existence.


>This is contradicted by the fact.

How so? What fact? You know for certain that you are the only
Bruno? You know for certain that there aren't parallel realities
containing Brunos with different experiences? How did you come by
this fact?

Is it a fact, or just a belief?


> If I want coffee now, I
> know all to well I have to do something for that. Sorry but I cannot
> wait for a white rabbit bringing me my cup of coffee.

God helps those who help themselves. However, some Brunos are more
fortunate with respect to helpful rabbits than other Brunos. Stay
optimistic.


> > I say that every possible event is perceived to happen, and so nothing
> > is more or less rare than anything else.
>
> It has to be at least in the relative way, if not your theory predicts
> all happenings, even in practice, but the facts contradict this.

Again, what facts? If everything was happening in alternate versions
of reality, how would you detect this? What facts do you possess that
rule this out?


Brent Meeker

unread,
Apr 27, 2009, 2:27:04 AM4/27/09
to everyth...@googlegroups.com

An untestable theory. But that's OK since if it's true it's also useless.

Brent

Kelly

unread,
Apr 27, 2009, 2:47:31 AM4/27/09
to Everything List
On Apr 27, 2:27 am, Brent Meeker <meeke...@dslextreme.com> wrote:
>
> An untestable theory. But that's OK since if it's true it's also useless.

Ha! True, true. But it being true AND useless would have a certain
aesthetic/poetic appeal. Which makes me even more inclined to think
that this is the way things are.

Of course, if it's true, then everything is useless.

But really no more so than it was before. "Usefulness" is an
overrated concept.

Jason Resch

unread,
Apr 27, 2009, 3:08:14 AM4/27/09
to everyth...@googlegroups.com
Kelly,

Your position as you have described it sounds a lot like ASSA only
without taking measure into consideration. I am curious if you
believe there is any merit to counting OMs or not. Meaning, if I have
two computers and set them up to run simulations of the same mind, are
there two minds or one?

Let's say I devised an evil simulation in which a mind suffers
horribly and is tortured, and I set the simulation to run each day,
and at the end of the day reset the simulation to the initial state,
such that after the first day, no new information or computations take
place, but they are repeated.  If given the choice, would you unplug
the computer to stop the suffering of the mind in the computer, or,
having already been simulated once, would you consider it
futile/meaningless to stop it?

If the number of implementations of minds does not matter and if all
experiences already exist, then would it not be meaningless to do
anything?  All actions, whatever their consequences, would be rendered
neutral, having already happened somewhere.  If no act of good or evil
matters, this philosophy leads to utter fatalism.

I don't consider something happening with 100% probability to be
mutually exclusive with happening more than once. The question is
whether or not that makes any difference to the observer(s?).

Jason

Stathis Papaioannou

unread,
Apr 27, 2009, 10:29:56 AM4/27/09
to everyth...@googlegroups.com
2009/4/27 Jason Resch <jason...@gmail.com>:

> I am not sure that the measure problem can be so easily
> abandoned/ignored.  Assuming every Observer Moment had has an equal
> measure, then the random/white-noise filled OMs should vastly
> outnumber the ordered and sensible OMs.  Though I ever only have one
> OM to go by, the fact I was able to maintain a
> non-random/non-white-noise filled OMs long enough to compose this post
> should serve as some level of evidence that all OMs are not weighted
> equally.

One remark that could be made about this oft-stated assertion is that
you don't *know* you have maintained a series of non-random OM's
orderly enough and long enough to compose this post. All you can be
certain about is your present OM, and it may be the only OM in all the
universes, anywhere or ever. In ot


--
Stathis Papaioannou

Stathis Papaioannou

unread,
Apr 27, 2009, 10:37:58 AM4/27/09
to everyth...@googlegroups.com
2009/4/28 Stathis Papaioannou <stat...@gmail.com>:

[Oops, didn't finish!]

In other words, it's impossible to know anything about other OM's from
within your own OM, except from a godlike stance outside the
multiverse. But this doesn't stop us drawing conclusions from the
(perhaps untrue) assumption that there are many OM's and the present
one is sampled randomly from them.


--
Stathis Papaioannou

Brent Meeker

unread,
Apr 27, 2009, 12:12:34 PM4/27/09
to everyth...@googlegroups.com

It's strictly equivalent to another older theory, "Whatever will be,
will be."

Brent

Bruno Marchal

unread,
Apr 27, 2009, 12:23:06 PM4/27/09
to everyth...@googlegroups.com

The interesting, informative thing consists in relating A and B.
This is my taste. I expect a "theory of everything" to explain
the miracle that makes me feel I regularly succeed in making a cup of
coffee when I am in the state of dreaming of, or wanting, or expecting
... a cup of coffee, and not only does the coffee occur but, soon
after, regularly, I smell it, I drink it, I enjoy it. How good dreams
happen, how bad dreams happen, how I can play the game in a way which
satisfies most good-willing universal machines. I need a theory which
is correct about what universal machines can do and think relatively
to their most stable and probable dreams ... I am a long term
researcher, but a practical one though.

All theories are hypothetical, but some theories are really questions.
I just propose the comp question to nature. From your post I can guess
you stop at step 3 of the Universal Dovetailer Argument (UDA).
So you do indeed need to abandon comp to maintain your form of
immaterialist platonism, but then you lose the tool for questioning
nature. It almost looks like choosing a theory because it does not even
address the question?

Bruno



http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Apr 27, 2009, 1:21:43 PM4/27/09
to everyth...@googlegroups.com

On 27 Apr 2009, at 07:24, Kelly wrote:

>
>
> On Apr 26, 11:40 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>>
>> The question is; what are their relative probability measure? What
>> can
>> I expect.
>
> Any expectations you have are unfounded. The problem of induction
> applies.


There is no problem of induction. There is a problem of induction only
for those who believe that science can prove a fact about reality.
There are only worse, bad, good and better theories, but all theories
are hypothetical beliefs. From the baby's theory according to which he
has parents, to the existence of moons, suns and bosons, theories are
hypothetical, and their interpretations preserve the "hypotheticalness".
A scientist never says "I know". (And this, I think, is the way Popper
solved the "induction problem".) The problem appears only to those who
want their theory to be true ... by authority.

>
>
> Any probabilities arrived at empirically are suspect, they will
> continue to hold for some Brunos but not for all...


Sure. I will ask a bank to lend me a huge amount of money, promising
to reimburse them when I win the big lottery ten times in a row.


>
>
> But there's really not a better option that I can think of, so we
> might as well stick with our expectations and probabilities.
>
> Not that we have a choice, since free will is an illusion also...


Free will does not exist for those who live to work.
Free will does exist for those who work to live. If not, they are
exploited: from universal they are forced to be particular. That's how
souls fall and englue themselves; they lose their free will. Neither
work nor war can make you free. You are free, and can remain so by
resistance and vigilance.
In the normal worlds.

>
>
>
>> Without giving me a measure, it is like your theory predicts
>> everything.
>
> Right, it does basically predict everything.


Logicians call such theories inconsistent.

> Except an end to
> experience. There is no sweet, sweet release of death if I'm right.
> There will be no final rest in the comforting embrace of oblivion.
> Only the endless grind of a weary existence.


That is true. But no theory can predict this. Er... well, no
consistent theory can predict this. Inconsistent theories can indeed
prove that we are immortal.

>
>
>
>> This is contradicted by the fact.
>
> How so? What fact? You know for certain that you are the only
> Bruno? You know for certain that there aren't parallel realities
> containing Brunos with different experiences? How did you come by
> this fact?
>
> Is it a fact, or just a belief?


Come on, Kelly, you know my favorite postulate assumes all computational
histories (Robinson arithmetic), so I have no doubt there is a
continuum of Brunos (even densely distributed between any two points of
the M sets), all englued in their histories and sooner or later quite
different: as different as Kelly and Bruno after a short time, as
different as Bruno and the amoeba over a longer time ...

The question is just this. I want coffee. How is it that I can realize
the dream "drinking a cup of coffee"? I am good-willing. I tried
your theory this morning. At first it worked. It was a real and big
pleasure to await the white rabbit bringing my cup of coffee to my
bed, where I could stay longer. But after two hours I was missing
something, and I asked myself: why did I bet I would necessarily
be the Bruno in the white-rabbit world instead of, as it seems (at least),
the usual normal (and a bit sadder) world, where in most circumstances
I have to use my free will, my free time, my free action, my (not so
free) coffee, and this up to the point where I bring the warm product to my
mouth?


>
>
>
>> If I want coffee now, I
>> know all to well I have to do something for that. Sorry but I cannot
>> wait for a white rabbit bringing me my cup of coffee.
>
> God helps those who help themselves.


Ah! This is indeed in the "guardian angel" theory of the machine M, on
the machine M.

But of course the machine M, if she wants to remain scientific, or
just self-referentially correct, should say instead:

If God exists, then God helps those (universal machines) which help
themselves.

Plato's "truth" helps in the limit. That is why god is good, in Plato.


> However, some Brunos are more
> fortunate with respect to helpful rabbits than other Brunos. Stay
> optimisitic.


I tell you, I will no longer try the white-rabbit method of realizing the
experience of drinking coffee in the morning.

OK, I admit I do it every day, implicitly, in the evening. It is perfect
how the white rabbit seems to understand that I don't want a cup of coffee
then; he never brings me coffee at that time. Is that evidence that I am
in a world with white rabbits?


>
>
>
>>> I say that every possible event is perceived to happen, and so
>>> nothing
>>> is more or less rare than anything else.
>>
>> It has to be at least in the relative way, if not your theory
>> predicts
>> all happenings, even in practice, but the facts contradict this.
>
> Again, what facts? If everything was happening in alternate versions
> of reality, how would you detect this? What facts do you possess that
> rule this out?


? I don't rule out the whole structure of possibilities, and I defend
the idea that actuality is possibility seen from the inside. I include
types of relative or conditional possibilities. Indeed, I explain that once
you assume comp, you have to take into account the interference of
histories due to your personal, finite level of distinguishability
among a continuum of histories.

Mathematics illustrates that immaterial beings, like numbers and
digital machines, obey laws. Of course you can contemplate the
picture without trying to use it. I like poetry and many arts very much.
Actually, using the comp physics to measure the mass of the Higgs
(Brout-Englert) boson would be like using string theory to prepare a
pizza, but comp gives a globally coherent picture, and it gives a frame
from which the "laws of physics" can emerge.
Nobody can be sure it is true, but once you say yes to the doctor,
then, if you survive, you are in its play.

Bruno

http://iridia.ulb.ac.be/~marchal/

Brent Meeker

unread,
Apr 27, 2009, 1:42:26 PM4/27/09
to everyth...@googlegroups.com
Are you thinking of something like a linked list, in which each state, in
its inherent information, has a pointer to a previous (or future)
state? And the existence of this link constitutes the "feeling of flow"?

The idea of involving short-term memory would be more conventional, but
I think it also entails allowing that conscious states have some
duration in time, or form a continuum.
> As for B, I'm not sure this matters, as it's really a seperate
> question from A. So I am saying consciousness is information, but I'm
> not saying it's the information that describes the particular things
> that you're conscious OF at any given instant.
>
> If I write down the details of what I'm conscious of AT this moment,
> that information isn't the information that caused my conscious
> experience OF that moment.
>
> Conscious experience is tied to A. Not B.
>
> B has no special significance. I'm not sure what it even really means
> to talk about the information in a conscious state. How much
> information is in the feeling of anger? How many bits describe the
> subjective experience of seeing red?
>

My problem exactly. But if we are no longer talking about information
IN a conscious state, but rather information responsible for the
conscious state, then we have introduced the possibility of a whole
physics (a brain, a world) that may be responsible for many things, only
one of which is consciousness. In particular, it may be responsible for
limiting the conscious states and for fitting them together in succession.

Brent

>

Stathis Papaioannou

unread,
Apr 28, 2009, 8:59:47 AM4/28/09
to everyth...@googlegroups.com
2009/4/28 Bruno Marchal <mar...@ulb.ac.be>:

> Sure. I will ask a bank to lend me huge amount of money, I promise
> them to reimburse when I will win ten times the big lottery in a row.

Not so far fetched, really.


--
Stathis Papaioannou

John Mikes

unread,
Apr 28, 2009, 10:28:34 AM4/28/09
to everyth...@googlegroups.com
Stathis,
I think Bruno is not realistic enough. Here is a better story - a way of understanding the situation:
---------

The Financial Crisis Explained

Heidi is the proprietor of a bar in Berlin . In order to increase sales, she decides to allow her loyal customers - most of whom are unemployed alcoholics - to drink now but pay later. She keeps track of the drinks consumed on a ledger (thereby granting the customers loans).
Word gets around and as a result increasing numbers of customers flood into Heidi's bar.
Taking advantage of her customers' freedom from immediate payment constraints, Heidi increases her prices for wine and beer, the most-consumed beverages. Her sales volume increases massively.
A young and dynamic customer service consultant at the local bank recognizes these customer debts as valuable future assets and increases Heidi's borrowing limit. He sees no reason for undue concern since he has the debts of the alcoholics as collateral.
At the bank's corporate headquarters, expert bankers transform these customer assets into DRINKBONDS, ALKBONDS and PUKEBONDS. These securities are then traded on markets worldwide. No one really understands what these abbreviations mean and how the securities are guaranteed.
Nevertheless, as their prices continuously climb, the securities become top-selling items.
One day, although the prices are still climbing, a risk manager at the bank -- subsequently, of course, fired due to his negativity -- decides that the time has come to demand payment of the debts incurred by the drinkers at Heidi's bar.
However they cannot pay back the debts.
Heidi cannot fulfill her loan obligations and claims bankruptcy.
DRINKBOND and ALKBOND drop in price by 95%. PUKEBOND performs better, stabilizing in price after dropping by 80%.
The suppliers of Heidi's bar, having granted her generous payment due dates and having invested in the securities, are faced with a new situation.
Her wine supplier claims bankruptcy, her beer supplier is taken over by a competitor.
The bank is saved by the government following dramatic round-the-clock consultations by leaders from the governing political parties.
The funds required for this purpose are obtained by a tax levied against the non-drinkers.
   
Finally an explanation I understand ...

JohnM

Kelly

unread,
Apr 28, 2009, 4:14:20 PM4/28/09
to Everything List

On Apr 27, 12:23 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
>
> So you have indeed the necessity to abandon comp to maintain your form
> of immaterialist platonism, but then you lose the tool for questioning
> nature. It almost look like choosing a theory because it does not even
> address the question ?

Okay, going back to basics. It seems to me that there are two
questions:

A) The problem of explaining WHAT we perceive
B) The problem of explaining THAT we perceive

The first issue is addressed by the third-person process of physics,
and by just generally trying to make sense of what we perceive as we go
through the daily grind of life. Everybody has a grasp of this issue,
because you're faced with it every day as soon as you wake up in the
morning: "what's going on here???".

The second issue is obviously the more subtle first-person problem of
consciousness.

But, for A, the fact that we are able to come up with rational-seeming
explanations for what we experience, and that there seems to us to be
an orderly pattern to what we perceive, doesn't answer the deeper
question of the ultimate nature of this external world that we are
observing. Here we get into issues of scientific/structural realism.
In other words, what do our scientific theories really mean?
(http://plato.stanford.edu/entries/structural-realism/)

But I don't think we can assign any real meaning to what we observe
until we have an acceptable understanding of the first person
subjective experience by which we make our observations.

So the question of consciousness is more fundamental than the
questions of physics. We can come up with scientific theories to
explain our observations, but since we don't know what an observation
really is, this can only get us so far in really understanding what's
going on with reality. Until we have a foundation in place,
everything built above is speculative. To rely on physics as your
foundation is "with more than Baron Münchhausen’s audacity, to pull
oneself up into existence by the hair, out of the swamps of
nothingness."

But here we hit a problem because the process that we use to explain
objective data doesn't work when applied to subjective experience.
There is a discontinuity. The third-person perceived reality vs.
first-person experienced reality. The latter apparently can't be
explained in terms of the former. But without an explanation for the
latter, I don't see how any meaning can be attached to the former.

And I think it is for this reason that I don't get hung up on the
"white rabbit" problem. Arguments based on the probability of finding
yourself in this state or that state are fine if all other things are
equal, and that's the only information you have to reason with. But I
don't think that we're in that situation.

So I start with the assumption of physicalism and then say that based
on that assumption, a computer simulation should be conscious, and
then from there I find reasons to think that consciousness doesn't
depend on physicalism. To me, the most likely alternate explanation
seems to be that consciousness depends on information. However, I am
relying on some of my thought experiments that assumed physicalism as
support for my conclusion of "informationalism".

But I think the discontinuity between first and third person
experience is another important clue, because I think that this break
will be noticeable to all rational conscious entities in all possible
worlds (even chaotic, irrational worlds). They should all notice a
difference in kind between what is observed (no matter how crazy it
is), and the subjective experience of making the observation.

Further, let's say that I am a rational observer in a world where
changes to brain structure do not appear to cause changes to behavior
or subjective experience. Physicalism wouldn't have much appeal in
this world. Rather, dualism would seem to have a clear edge as the
default explanation. But it might be even easier to make the leap to
platonism in such a world, as presumably Plato's "ideal forms" might
be even more appealing. So in such a world you wouldn't get to
platonism by way of thinking about computer simulations of brains
(since brain activity isn't correlated with behavior), but I think you
would still get there.

The question is, what kind of world would NOT lead you to Platonism? I
think only a world that didn't have first-person experience.

Brent Meeker

unread,
Apr 28, 2009, 4:58:25 PM4/28/09
to everyth...@googlegroups.com
But appearances can change. At one time it was apparent that life could not
be explained in terms of chemistry and physics. What constitutes an
explanation is rather flexible. If we could make robots that acted in
every respect like conscious human beings and we could directly induce
any given conscious thought into anyone's brain, would you still say we
didn't understand subjective experience? I know we might still feel
there was a category gap, but people felt that about life too. I think
after some time we'd stop worrying about it and decide it was a
non-question.

> But without an explanation for the
> latter, I don't see how any meaning can be attached to the former.
>

Bruno seems to find that getting a cup of coffee in the morning is good
because of the subjective pleasure that follows.
> And I think that is for this reason that I don't get hung up on the
> "white rabbit" problem. Arguments based on the probability of finding
> yourself in this state or that state are fine if all other things are
> equal, and that's the only information you have to reason with. But I
> don't think that we're in that situation.
>
> So I start with the assumption of physicalism and then say that based
> on that assumption, a computer simulation should be conscious, and
> then from there I find reasons to think that consciousness doesn't
> depend on physicalism. To me, the most likely alternate explanation
> seems to be that consciousness depends on information.

But why did you feel justified in dispensing with the process
implemented by the computer? Did you consider the information in a
stack of punch cards to be conscious?
> However, I am
> relying on some of my thought experiments that assumed physicalism as
> support for my conclusion of "informationalism".
>
> But I think the discontinuity between first and third person
> experience is another important clue, because I think that this break
> will be noticeable to all rational conscious entities in all possible
> worlds (even chaotic, irrational worlds). They should all notice a
> difference in kind between what is observed (no matter how crazy it
> is), and the subjective experience of making the observation.
>
> Further, let's say that I am a rational observer in a world where
> changes to brain structure do not appear to cause changes to behavior
> or subjective experience. Physicalism wouldn't have much appeal in
> this world. Rather, dualism would seem to have a clear edge as the
> default explanation.
But we seem to be in a contrary world, in which structure and process
determine subjective experience. Maybe that's the kind of world that
would NOT lead you to Platonism.

Brent

Kelly

unread,
Apr 28, 2009, 5:14:12 PM4/28/09
to Everything List
On Apr 27, 3:08 am, Jason Resch <jasonre...@gmail.com> wrote:
>
> Your position as you have described it sounds a lot like ASSA only
> without taking measure into consideration. I am curious if you
> believe there is any merit to counting OMs or not. Meaning, if I have
> two computers and set them up to run simulations of the same mind, are
> there two minds or one?

If the simulations are identical, then there is only one mind, which
exists in Platonia. The information in the computer simulation is
just a "shadow" of the actual platonic information. In reality, there
is no objectively existing computer running a simulation. There is
just your perception of this.

So, ultimately, I think there's no merit in counting OMs. Every OM
exists once, and only once, and they exist along a continuum whereby
every variation is realized. Certain types of variation from one
instant to the next result in the subjective experience of the flow
of time. Variations "perpendicular" to this result in the subjective
feel of different personal identities.

See my previous reply to Bruno today for further details on white
rabbits.


> Let's say I devised an evil simulation in which a mind suffers
> horribly and is tortured, and I set the simulation to run each day,
> and at the end of the day reset the simulation to the initial state,
> such that after the first day, no new information or computations take
> place, but they are repeated. If given the choice, would you unplug
> the computer to stop the suffering of the mind in the computer, or
> having already been simulated once would you consider it
> futile/meaningless to stop it.

I would consider it futile or meaningless to stop it. Even if I
stopped you from running it the first time, that would be
meaningless, because the horribly suffering mind actually exists in
Platonia, not in your computer simulation. Your feeling of having
"caused" their suffering is an illusion.

BUT, if there was no cost to me, I'd probably go ahead and stop you,
just in case I'm wrong.


> If the number of implementations of minds does not matter and if all
> experiences already exist, then would it not be meaningless to do
> anything?

Everything is meaningless. BUT, as it turns out, I have no choice in
the matter. My will is not free. Even if I'd like to just say,
"screw it, none of this matters, I quit", I still get up every day and
go to work. Why? Because I am not the master of my fate. I am not
the captain of my soul. This lack of real choice (i.e., free will) is
already clear from physicalism, but even more so with a version of
platonism where all possible experiences must be realized.

But really I think we lost "meaningfulness" before we even got to
platonism.


> All actions, whatever the consequence would be rendered
> neutral, having already happened somewhere. If no act of good or evil
> matter this philosophy leads to utter fatalism.

Correct. Though, utter fatalism is not necessarily as bad as it
sounds. It just takes a little getting used to.


> I don't consider something happening with 100% probability to be
> mutually exclusive with happening more than once. The question is
> whether or not that makes any difference to the observer(s?).

I don't think it makes any difference to the observers if, from your
perspective, something happens to them 1000 times. What matters to
them is how many times it happens to them from their perspective. I
don't think that you having the experience of running your identical
torture simulation 1000 times has any significance whatsoever for the
entities being tortured.

There SEEMS to be a relationship between the observer moments, but in
fact there is not. In this respect it's kind of like the subjective
experience of the flow of time.

Brent Meeker

unread,
Apr 28, 2009, 5:24:07 PM4/28/09
to everyth...@googlegroups.com
Stathis Papaioannou wrote:
> 2009/4/25 Brent Meeker <meek...@dslextreme.com>:
>
>
>> But is it the information in consciousness and is it discrete? If you
>> include the information that is in the brain, but not in consciousness,
>> I can buy the concept of relating states by similarity of content. Or
>> if you suppose a continuum of states that would provide a sequence. It
>> is only when you postulate discrete states containing only the contents
>> of instants of conscious thought, that I find difficulty.
>>
>
> I'm not sure I understand. Are you saying that the information in most
> physical processes, but not consciousness, can be discrete? I would
> have said just the opposite: that even if it turns out that physics is
> continuous and time is real, it would still be possible to chop up
> consciousness into discrete parts (albeit of finite duration) and
> there would still be continuity.
I could buy that if the finite duration were long enough that the content
of the conscious interval was sufficient to order the intervals.
Otherwise you'd need some extrinsic variable to order them (e.g. physical
time, brain states).

> In fact, I can't imagine how
> consciousness could possibly be discontinuous if this was done, for
> where would the information that tells you you've been chopped up
> reside?

In Bruno's Washington/Moscow thought experiment that information isn't
in your consciousness, although it's available via third persons. My
view of the experiment is that you would lose a bit of consciousness,
that you can't slice consciousness arbitrarily finely in time.

Brent

Kelly

unread,
Apr 28, 2009, 5:42:30 PM4/28/09
to Everything List
On Apr 27, 1:42 pm, Brent Meeker <meeke...@dslextreme.com> wrote:
>
> Are you thinking of something like a linked list in which each state, in
> it's inherent information, has a pointer to a previous (or future)
> state. And the existence of this link constitutes the "feeling of flow"?

Hmmmm. As a metaphor, that works, I think. Though the pointer is
explicitly in the "subjective feeling" of the state, and only
implicitly in the information that underlies that subjective feeling.
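
For what it's worth, here is a minimal sketch of that metaphor in
Python (purely illustrative; the class and field names are my own, not
anything proposed in the thread). Each observer-moment node carries its
content plus a reference to the moment it "feels" it came from, so the
ordering can be read off the content alone, with no external clock:

class ObserverMoment:
    """Toy stand-in for an OM: a bundle of content that includes a
    reference to the moment it subjectively follows."""

    def __init__(self, content, felt_predecessor=None):
        self.content = content                    # what this moment is like
        self.felt_predecessor = felt_predecessor  # the "pointer" in the metaphor


# A short chain of moments; each one "remembers" the one before it.
m1 = ObserverMoment("smelling coffee")
m2 = ObserverMoment("tasting coffee", felt_predecessor=m1)
m3 = ObserverMoment("enjoying coffee", felt_predecessor=m2)

# Walking the pointers recovers the subjective ordering without any
# extrinsic variable such as physical time.
moment = m3
while moment is not None:
    print(moment.content)
    moment = moment.felt_predecessor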


> I think it also entails allowing that conscious states have some
> duration in time, or form a continuum.

If consciousness isn't something that exists at any given instant of
time, then when does it exist? It exists "outside" of time? What is
a good analogy for how this would work?


> My problem exactly. But if we are no longer talking about information
> IN a conscious state, but rather information responsible for the
> conscious state then we have introduced the possibility of a whole
> physics (a brain, a world) that may be responsible for many things, only
> one of which is consciousness. In particular, it may be responsible for
> limiting the conscious states and for fitting them together in succession.

Not if information exists platonically. So the question is, what does
it mean for a physical system to "represent" a certain piece of
information? With the correct "one-time pad", any desired information
can be extracted from any random block of data obtained by making any
desired measurement of any physical system.

If I take a randomly generated one-time pad and XOR it with some real
block of data, the result will still be random. But somehow the
original information is there. You have the same problem with
computational processes, as pointed out by Putnam and Searle. The
molecular/atomic vibrations of the particles in my chair could be
interpreted, with the right mapping, as implementing any conceivable
computation.
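
To make the XOR point concrete, here is a quick sketch in Python
(purely illustrative, standard library only). XORing data with a
uniformly random pad gives a block that is itself uniformly random, yet
the original is recoverable by anyone holding the pad; and by choosing a
different "pad" you can make the very same block decode to any other
message of the same length, so which information the block "contains"
depends entirely on the mapping you bring to it:

import secrets

data = b"the original information"
pad = secrets.token_bytes(len(data))        # uniformly random one-time pad

ciphertext = bytes(d ^ p for d, p in zip(data, pad))       # looks (and is) random
recovered = bytes(c ^ p for c, p in zip(ciphertext, pad))  # pad holder gets it back
assert recovered == data

# A different "pad" makes the same random-looking block yield a different message.
target = b"some other message entirely"[:len(data)]
fake_pad = bytes(c ^ t for c, t in zip(ciphertext, target))
assert bytes(c ^ p for c, p in zip(ciphertext, fake_pad)) == target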

So unambiguously connecting information to the "physical" is not so
easy, I think.

Jesse Mazer

unread,
Apr 28, 2009, 6:25:00 PM4/28/09
to everyth...@googlegroups.com

Kelly wrote:

>
> Not if information exists platonically. So the question is, what does
> it mean for a physical system to "represent" a certain piece of
> information? With the correct "one-time pad", any desired information
> can be extracted from any random block of data obtained by making any
> desired measurement of any physical system.
>
> If I take a randomly generated one-time pad and XOR it with some real
> block of data, the result will still be random. But somehow the
> original information is there. You have the same problem with
> computational processes, as pointed out by Putnam and Searle. The
> molecular/atomic vibrations of the particles in my chair could be
> interpreted, with the right mapping, as implementing any conceivable
> computation.
>
> So unambiguously connecting information to the "physical" is not so
> easy, I think.

This is essentially the problem discussed by Chalmers in "Does a Rock Implement Every Finite-State Automaton?" at http://consc.net/papers/rock.html, and I think it's also the idea behind Maudlin's Olympia thought experiment as well. But for anyone who wants to imagine some set of "psychophysical laws" connecting physical states to the measure of OMs, I think there may be ways around it. For example, instead of associating an OM with the passive idea of "information", can't you associate it with the causal structure instantiated by a computer program that's actually running, as opposed to something like a mere static printout of its states? Of course you'd need a precise mathematical definition of the "causal structure" of a set of causally related physical events, but I don't see any reason why it should be impossible to come up with a good definition. I think Chalmers attempts one based on counterfactuals in that paper, though I'm not sure if I like that approach.

Jesse

russell standish

unread,
Apr 29, 2009, 2:05:51 AM4/29/09
to everyth...@googlegroups.com

What you are talking about is what I call the "Occam catastrophe" in
my book. The resolution of the paradox has to be that the
random, white-noise-filled OMs are in fact unable to be observed. For
the Anthropic Principle to hold in an idealist theory, the OM must
contain a representation of the observer, i.e. observers must be
self-aware. Amongst such OMs containing observers, ones that are the
result of historically deep evolutionary processes are by far the most
common. And the evolution of those observer moments must also be
constrained to be similar to those previously observed, eliminating
white rabbits, due to the "robustness" of the observer.

Cheers

--

----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052 hpc...@hpcoders.com.au
Australia http://www.hpcoders.com.au
----------------------------------------------------------------------------

russell standish

unread,
Apr 29, 2009, 2:26:15 AM4/29/09
to everyth...@googlegroups.com
On Sun, Apr 26, 2009 at 08:19:51PM -0700, Kelly wrote:
>
>
> On Sun, Apr 26, 2009 at 9:00 PM, Jason Resch <jason...@gmail.com>
> wrote:
> >
> > In fact I used that same argument with Russell
> > Standish when he said that ants aren't conscious because if they were
> > then we should expect to be experiencing life as ants and not humans.
>
> Did you win or lose that argument?
>
> I've heard that line of reasoning before also. Doesn't it also
> conclude that we're living in the last days? If there are more
> conscious beings in the future than in the present, then we should
> expect to live there and not here, so there must not be more conscious
> beings in the future?

I did a calculation based on historical population levels with
exponential population growth and concluded there's at least a few
centuries left, although the world population at the end of the 21st
century is likely to be less than at the start. I guess we'll have to
wait a bit to see if that one's right.
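
I don't know the details of that calculation, but for anyone curious
about the general shape of such an estimate, here is a rough
doomsday-style sketch in Python. Every number in it is an assumption
chosen for illustration, not a figure from Russell's book or post; the
point is only to show how a self-sampling bound plus a growth model
turns into a number of years:

import math

# Illustrative assumptions only (not Russell's inputs):
past_births = 1.0e11    # roughly 100 billion humans born so far
confidence = 0.95       # self-sampling confidence level
birth_rate = 1.4e8      # current births per year, approximately
growth = 0.01           # assumed exponential growth rate of the birth rate

# If my birth rank is a uniform sample from all births ever, then with
# probability `confidence` the future holds at most
# confidence / (1 - confidence) times the past number of births.
max_future_births = past_births * confidence / (1 - confidence)

# Cumulative future births under exponential growth:
#   birth_rate/growth * (exp(growth * t) - 1) = max_future_births
years_left = math.log(1 + max_future_births * growth / birth_rate) / growth

print(f"95% upper bound on time remaining: about {years_left:,.0f} years")

With these particular (arbitrary) numbers the bound comes out at a few
hundred years; with a constant birth rate it stretches to many
millennia, which is why the assumed population model matters so much.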

> And also it predicts that there are no
> significant number of (conscious) aliens? Because if there were, we
> should expect to be one of them and not a human?

Remember the Chinese question discussed in the ant paper. There can be
alien planets with greater populations than the Earth, but they must
be relatively rarer than planets containing Earth population levels.

There are other reasons for suspecting intelligent life is rare in the
universe (e.g. Fermi's paradox).

>
> Sounds like over-use of a good idea. In this case it ignores all
> other available information to just focus only on one narrow
> statistic. Why should we ignore everything else we know and only
> credit this single argument from probability? Surely, after studying
> ants and humans, the knowledge that we gain has to alter our initial
> expectations, right? But that isn't taken into account here (at least
> not in your one line description of the discussion...ha!).
>
> I think the problem with Russell's ant argument stems from trying to
> use "a priori" reasoning in an "a posteriori" situation. There is
> extra information available that he isn't taking into consideration.
>

What extra information do you have in mind? I'd gladly update my
priors with anything I can lay my hands on.

Stathis Papaioannou

unread,
Apr 29, 2009, 8:30:40 AM4/29/09
to everyth...@googlegroups.com
2009/4/29 John Mikes <jam...@gmail.com>:

Excellent story, worth the brief deviation from the thread topic!


--
Stathis Papaioannou

Stathis Papaioannou

unread,
Apr 29, 2009, 8:58:02 AM4/29/09
to everyth...@googlegroups.com
2009/4/29 Brent Meeker <meek...@dslextreme.com>:

>> I'm not sure I understand. Are you saying that the information in most
>> physical processes, but not consciousness, can be discrete? I would
>> have said just the opposite: that even if it turns out that physics is
>> continuous and time is real, it would still be possible to chop up
>> consciousness into discrete parts (albeit of finite duration) and
>> there would still be continuity.
>
> I could buy that if the finite duration was long enough that the content
> of the conscious interval was sufficient to order the intervals.
> Otherwise you'd need some extrinsic variable to order them (e.g physical
> time, brain states).

It seems to me that if the seconds of my life were, according to an
external clock, being generated backwards or scrambled, I would have no
way of knowing this, nor any way of knowing how fast the clock was
running or whether it was changing speed. So how would the external clock
be able to impose an order on moments of quasi-consciousness below the
critical minimal interval, when it has no subjective effect on
supercritical intervals?

>> In fact, I can't imagine how
>> consciousness could possibly be discontinuous if this was done, for
>> where would the information that tells you you've been chopped up
>> reside?
>
> In Bruno's Washington/Moscow thought experiment that information isn't
> in your consciousness, although it's available via third persons. My
> view of the experiment is that you would lose a bit of consciousness,
> that you can't slice consciousness arbitrarily finely in time.

Could the question be settled by actual experiment, i.e. asking the
subject if they noticed anything unusual?


--
Stathis Papaioannou

Quentin Anciaux

unread,
Apr 29, 2009, 9:06:15 AM4/29/09
to everyth...@googlegroups.com
Hi,

2009/4/29 Stathis Papaioannou <stat...@gmail.com>:

For this you would need an actual AI, and also for everybody to agree
that this AI is conscious and not a zombie.

If you can settle that, then an interview should count as proof.
But I'm not sure you can prove the AI is conscious, nor, by the same
argument, am I sure I could prove to you that I am.

Regards,
Quentin

--
All those moments will be lost in time, like tears in rain.

Stathis Papaioannou

unread,
Apr 29, 2009, 9:24:35 AM4/29/09
to everyth...@googlegroups.com
2009/4/29 Jesse Mazer <laser...@hotmail.com>:

The atoms vibrating in a rock have a causal structure, insofar as an
atom moves when it is jiggled by its neighbours in perfect accordance
with the laws of physics. And in the possibility space of weird alien
computers it seems to me that there will always be a computer
isomorphic with the vibration of atoms in a given rock. This
requirement becomes even easier to satisfy if we allow a computation
to be broken up into short intervals on separate computers of
different design, with the final stream of consciousness requiring
nothing to bind it together other than the content of the individual
OM's.


--
Stathis Papaioannou

Stathis Papaioannou

unread,
Apr 29, 2009, 9:36:20 AM4/29/09
to everyth...@googlegroups.com
2009/4/29 Quentin Anciaux <allc...@gmail.com>:

>>> In Bruno's Washington/Moscow thought experiment that information isn't
>>> in your consciousness, although it's available via third persons. My
>>> view of the experiment is that you would lose a bit of consciousness,
>>> that you can't slice consciousness arbitrarily finely in time.
>>
>> Could the question be settled by actual experiment, i.e. asking the
>> subject if they noticed anything unusual?
>>
>>
>> --
>> Stathis Papaioannou
>
> For this you would need an actual AI and also that everybody agreed on
> the fact that this AI is conscious and not a zombie.
>
> If you can settle that, then an interview should be counted as proof.
> But I'm not sure you can prove the AI is conscious, nor with the same
> argument I'm not sure I could prove to you that I am.

Well, you could just ask the teleported human. If he says he feels
fine and didn't notice anything other than the scenery changing, would
that count for anything? I suppose you could argue that of course he
would say that, since a gap in consciousness is by definition not
noticeable, but then you end up with a variant of the zombie argument:
he says everything feels OK, but in actual fact he experiences
nothing.


--
Stathis Papaioannou

Jesse Mazer

unread,
Apr 29, 2009, 9:47:04 AM4/29/09
to everyth...@googlegroups.com
They do have *a* causal structure, but I don't see why we should expect to find a set of events in the rock whose causal structure is isomorphic to the causal structure of a computer running a detailed simulation of a human brain for some extended period of time.

>And in the possibility space of weird alien
> computers it seems to me that there will always be a computer
> isomorphic with the vibration of atoms in a given rock.

What do you mean by "weird alien computers"? If we had a way of defining the notion of "causal structure", I'm sure it would be true that in the space of all computer programs (running on any sort of computer) there would be programs whose causal structure was isomorphic to the causal structure of vibrations in a rock, but this might be quite distinct from the causal structure associated with the brains of sentient observers. If you take a panpsychist approach like Chalmers, it might be that all causal structures have *some* type of qualia associated with them, even the ones in a rock (just as we might suppose that even an insect or an amoeba is not totally void of inner experience), but the sort of self-aware conceptual thought that humans have would probably be limited to a small subset of all possible causal structures.


>This
> requirement becomes even easier to satisfy if we allow a computation
> to be broken up into short intervals on separate computers of
> different design, with the final stream of consciousness requiring
> nothing to bind it together other than the content of the individual
> OM's.

As long as the separate computers are each passing the results of their computation on to the next computer in the series, we can talk about the causal structure instantiated by the whole series. And if they aren't, then according to the idea of associating OMs with causal structures, we might have to conclude that these computers are not really instantiating an OM of a complex humanlike observer even if by some outrageous coincidence the output of all these separate computers *looked* just like the output of a single computer running a simulation of the brain of a humanlike observer.

Jesse

Quentin Anciaux

unread,
Apr 29, 2009, 9:48:39 AM4/29/09
to everyth...@googlegroups.com
2009/4/29 Stathis Papaioannou <stat...@gmail.com>:
>
> 2009/4/29 Quentin Anciaux <allc...@gmail.com>:
>
>>>> In Bruno's Washington/Moscow thought experiment that information isn't
>>>> in your consciousness, although it's available via third persons. My
>>>> view of the experiment is that you would lose a bit of consciousness,
>>>> that you can't slice consciousness arbitrarily finely in time.
>>>
>>> Could the question be settled by actual experiment, i.e. asking the
>>> subject if they noticed anything unusual?
>>>
>>>
>>> --
>>> Stathis Papaioannou
>>
>> For this you would need an actual AI and also that everybody agreed on
>> the fact that this AI is conscious and not a zombie.
>>
>> If you can settle that, then an interview should be counted as proof.
>> But I'm not sure you can prove the AI is conscious, nor with the same
>> argument I'm not sure I could prove to you that I am.
>
> Well, you could just ask the teleported human.

Yes... but that needs working teleportation ;) I don't know if it is
easier to build than an AI :D

But as you say below, the zombie argument still stands. So beforehand
we should have an accepted "test" that tells whether an entity is
conscious or not. And I have the feeling that is the most difficult part
(besides the construction of a teleportation device or an AI).

> If he says he feels
> fine, didn't notice anything other than the scenery changing, would
> that count for anything? I suppose you could argue that of course he
> would say that since a gap in consciousness is by definition not
> noticeable, but then you end up with a variant of the zombie argument:
> he says everything feels OK, but in actual fact he experiences
> nothing.
>
>
> --
> Stathis Papaioannou
>



Stathis Papaioannou

unread,
Apr 29, 2009, 10:45:17 AM4/29/09
to everyth...@googlegroups.com
2009/4/29 Jesse Mazer <laser...@hotmail.com>:

>>And in the possibility space of weird alien
>> computers it seems to me that there will always be a computer
>> isomorphic with the vibration of atoms in a given rock.
>
> What do you mean by "weird alien computers"? If we had a way of defining the
> notion of "causal structure", I'm sure it would be true that in the space of
> all computer programs (running on any sort of computer) there would be
> programs whose causal structure was isomorphic of the causal structure of
> vibrations in a rock, but this might be quite distinct from the causal
> structure associated with the brains of sentient observers.

Two computers of different architecture running the same program will
go through, on the face of it, completely different physical activity.
Now consider every possible general purpose computer, or every
possible Turing complete machine, running a particular program. I
don't know how to show this rigorously, but it seems to me that the
physical activity in a rock will mirror the physical activity in at
least one of these possible computers, and that this requirement will be
easier to satisfy the shorter the period of the computation under
consideration.
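
The worry being leaned on here (Putnam's and Searle's point, and the
target of Chalmers' rock paper) can be made concrete with a toy sketch
in Python; the names and the little "computation" are invented purely
for illustration. Given any record of distinct physical states and any
computation with the same number of steps, a lookup table mapping one
onto the other always exists, and under that mapping the rock
trivially "implements" the computation:

# Arbitrary labels for successive micro-states of a rock.
rock_states = ["r17", "r03", "r88", "r42", "r55"]

def target_computation(n_steps):
    """States of some computation we care about: a little counter program."""
    acc, states = 0, []
    for step in range(n_steps):
        acc += step
        states.append(("pc", step, "acc", acc))
    return states

program_states = target_computation(len(rock_states))

# The "interpretation" is just a lookup table pairing each rock state
# with a program state. It does no counterfactual work: a different rock
# history would simply get a different table, which is why Chalmers and
# others demand counterfactual (causal) structure rather than a bare
# state-to-state mapping.
interpretation = dict(zip(rock_states, program_states))

for phys in rock_states:
    print(phys, "->", interpretation[phys])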

>>This
>> requirement becomes even easier to satisfy if we allow a computation
>> to be broken up into short intervals on separate computers of
>> different design, with the final stream of consciousness requiring
>> nothing to bind it together other than the content of the individual
>> OM's.
>
> As long as the separate computers are each passing the results of their
> computation on to the next computer in the series, then we can talk about
> the causal structure instantiated by the whole series. And if they aren't,
> then according to the idea of associating OMs with causal structures, we
> might have to conclude that these computers are not really instantiating an
> OM of a complex humanlike observer even if by some outrageous coincidence
> the output of all these separate computers *looked* just like the output of
> a single computer running a simulation of the brain of a humanlike observer.

I would have said that every computer can generate an OM in complete
causal isolation from every other computer, and the OM's still
associate to form a stream of consciousness simply by virtue of their
content. That seems to me perhaps the main utility of the idea of
OM's. But it appears you agree with Brent that this association won't
happen (or at least, there will be a gap at the seams) unless the
computers are causally connected.


--
Stathis Papaioannou

Jason Resch

unread,
Apr 29, 2009, 11:28:36 AM4/29/09
to everyth...@googlegroups.com
On Wed, Apr 29, 2009 at 1:05 AM, russell standish <li...@hpcoders.com.au> wrote:
>
> What you are talking about is what I call the "Occam catastrophe" in
> my book. The resolution of the paradox has to be that the
> random/white-noise filled OMs are in fact unable to be observed. In
> order for the Anthropic Principle to hold in a idealist theory
> requires that the OM must contain a representation of the observer, ie
> observers must be self-aware. Amongst such OMs containing observers,
> ones that are the result of historically deep evolutionary processes
> are by far the most common. And evolution of those observer moments
> must also be constrained to be similar to those previously observed,
> eliminating white rabbits, due to "robustness" of the observer.
>
> Cheers
>

Hi Russell,

What you said reminded me of this article, which appeared in the Boston Globe:

http://www.boston.com/bostonglobe/ideas/graphics/011109_hacking_your_brain/

See the section on hallucinating with ping pong balls and a radio. It
would seem that, the way the brain is organized, it doesn't accept the
perception of pure randomness (at least not for long; I have not yet tried
the experiment myself). If it can't find patterns in the sensory input,
it looks like it gives up and invents patterns of its own.

Jason

Bruno Marchal

unread,
Apr 29, 2009, 11:38:54 AM4/29/09
to everyth...@googlegroups.com
On 29 Apr 2009, at 00:25, Jesse Mazer wrote:


Kelly wrote:

> 
> Not if information exists platonically. So the question is, what does
> it mean for a physical system to "represent" a certain piece of
> information? With the correct "one-time pad", any desired information
> can be extracted from any random block of data obtained by making any
> desired measurement of any physical system.
> 
> If I take a randomly generated one-time pad and XOR it with some real
> block of data, the result will still be random. But somehow the
> original information is there. You have the same problem with
> computational processes, as pointed out by Putnam and Searle. The
> molecular/atomic vibrations of the particles in my chair could be
> interpreted, with the right mapping, as implementing any conceivable
> computation.
> 
> So unambiguously connecting information to the "physical" is not so
> easy, I think.

This is essentially the problem discussed by Chalmers in "Does a Rock Implement Every Finite-State Automaton" at http://consc.net/papers/rock.html ,


Yes. And I don't buy that argument. I will not insist, because you did it well in your last post. Also, if it were the case that rocks implement sophisticated computations, it would just add some measure on some computations in the Universal Dovetailing. Also, a rock cannot be a computational object: it is a projection of an infinity of computations when we look at the rock at a level which would be below our common substitution level. Eventually we will meet the quantum vacuum (assuming comp implies QM, as I think), and in some "parallel world" that vacuum will go through all accessible states, but this is part of so many varied histories that they interfere destructively and do not generate any classical history stable relative to any observer coupled with the rock.



and I think it's also the idea behind Maudlin's Olympia thought experiment as well.


Maudlin's Olympia and the Movie Graph Argument are completely different. Those are arguments showing that computationalism is incompatible with the physical supervenience thesis. They show that consciousness is not related to any physical activity at all. Together with UDA steps 1-7, this shows that physics has to be reduced to a theory of consciousness based on a purely mathematical (even arithmetical) theory of computation, which exists by the Church Thesis.
The movie graph argument was originally only a tool for explaining how difficult the mind-body problem is, once we assume mechanism.





But for anyone who wants to imagine some set of "psychophysical laws" connecting physical states to the measure of OMs I think there may be ways around it. For example, instead of associating an OM with the passive idea of "information", can't you associate with the causal structure instantiated by a computer program that's actually running, as opposed to something like a mere static printout of its states? Of course you'd need a precise mathematical definition of the "causal structure" of a set of causally-related physical events, but I don't see any reason why it should be impossible to come up with a good definition.


Actually this is a good idea, if you define causality by the logical relation linking a universal machine and a computation. But from the first-person perspective you will have to take into account all the universal machines relating those states, that is, an infinity of computational histories. This comes from the invariance of the 1-perspective for arbitrarily long delays in the arithmetical universal dovetailer.

Bruno

Johnathan Corgan

unread,
Apr 29, 2009, 12:19:30 PM4/29/09
to everyth...@googlegroups.com
On Wed, 2009-04-29 at 10:28 -0500, Jason Resch wrote:

> It
> would seem the way the brain is organized it doesn't accept perception
> of pure randomness (at least not for long, I have not yet tried the
> experiment myself). If it can't find patterns from the senses it
> looks like it gives up and invents patterns of its own.

It is perhaps the other way around. The portion(s) of the brain
responsible for qualia perception appear to operate as a complex
dynamical system with a variety of chaotic attractors, and sensory
information only serves to "nudge" this system from one set of attractor
cycles to another. In the absence of sensory input, these then operate
in open-loop mode, and the person may experience all manner of
interesting qualia uncorrelated with the "real" world.

The overall mechanism of dissociative anaesthetic agents such as
Ketamine or nitrous oxide is poorly understood, but one notable property
they have is that in sub-clinical dosages they suppress sensory input
while retaining consciousness. This results in similar, "open loop"
qualia.

Johnathan Corgan

Brent Meeker

unread,
Apr 29, 2009, 1:28:59 PM4/29/09
to everyth...@googlegroups.com
Stathis Papaioannou wrote:
> 2009/4/29 Brent Meeker <meek...@dslextreme.com>:
>
>
>>> I'm not sure I understand. Are you saying that the information in most
>>> physical processes, but not consciousness, can be discrete? I would
>>> have said just the opposite: that even if it turns out that physics is
>>> continuous and time is real, it would still be possible to chop up
>>> consciousness into discrete parts (albeit of finite duration) and
>>> there would still be continuity.
>>>
>> I could buy that if the finite duration was long enough that the content
>> of the conscious interval was sufficient to order the intervals.
>> Otherwise you'd need some extrinsic variable to order them (e.g physical
>> time, brain states).
>>
>
> It seems to me that if the seconds of my life were according to an
> external clock being generated backwards or scrambled, I would have no
> way of knowing this, nor any way of knowing how fast the clock was
> running or if it was changing speed.

That assumes that one second can be cleanly (no causal or other
connection) sliced from the next second with no loss, which is what I doubt.

> So how would the external clock
> be able to impose an order on moments of quasi-consciousness below the
> critical minimal interval, when it has no subjective effect on
> supercritical intervals?
>
>
>>> In fact, I can't imagine how
>>> consciousness could possibly be discontinuous if this was done, for
>>> where would the information that tells you you've been chopped up
>>> reside?
>>>
>> In Bruno's Washington/Moscow thought experiment that information isn't
>> in your consciousness, although it's available via third persons. My
>> view of the experiment is that you would lose a bit of consciousness,
>> that you can't slice consciousness arbitrarily finely in time.
>>
>
> Could the question be settled by actual experiment, i.e. asking the
> subject if they noticed anything unusual?
>

Yes, I think it could - if we could do the experiment. Certainly when
I've been unconscious, whether from concussion or anesthesia, I've noticed
something unusual. :-)

Brent

Bruno Marchal

unread,
Apr 29, 2009, 3:15:01 PM4/29/09
to everyth...@googlegroups.com

On 28 Apr 2009, at 22:14, Kelly wrote:

>
>
> On Apr 27, 12:23 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
>>
>> So you have indeed the necessity to abandon comp to maintain your
>> form
>> of immaterialist platonism, but then you lose the tool for
>> questioning
>> nature. It almost look like choosing a theory because it does not
>> even
>> address the question ?
>
> Okay, going back to basics. It seems to me that there are two
> questions:
>
> A) The problem of explaining WHAT we perceive
> B) The problem of explaining THAT we perceive
>
> The first issue is addressed by the third-person process of physics,
> and of just generally trying make sense of what we perceive as we go
> through the daily grind of life. Everybody has a grasp of this issue,
> because you're faced with it everyday as soon as you wake up in the
> morning, "what's going on here???".

Well, that is "grandmother physics". It works well locally, but it is
refuted by quantum mechanics, and actually it is not tenable with just
the assumption of computationalism. There is a hard problem of matter.

>
>
> The second issue is obviously the more subtle first-person problem of
> consciousness.

Computationalism makes necessary the reduction of the hard problem of
matter to the less hard problem of mind. The problem of mind is less
hard, with comp, because computer science and the provability logics
can reduce the classical mind problem to the study of the correct
discourse of the self-introspecting machine, using mathematical logic
to define the notion of self-referential correctness. We get for free
the many nuances between true, communicable, provable, knowable,
inferable, observable, sensible, etc.

I think you have opted for platonist idealism at the start, so perhaps
you are not motivated to run through the Universal Dovetailer
Argument, whose main goal is to show that if we assume
computationalism (the assumption that we can use classical teleportation as
a means of transport, or that we are Turing-emulable) then we have to reduce
the physical laws to number theory/computer science (and a theory of
consciousness, which I take as a theory of machine knowledge, with
the usual modal axioms of consciousness). Consciousness is the "true
belief in a reality". But with comp that reality is not the physical
reality; it is more the belief in elementary arithmetic, and from the
point of view of the machine it is a bet on self-consistency: there is
a reality which "satisfies me", there is a model making "me" valid. I
use the Gödel-Henkin "completeness" theorem here.

You can take a look:
http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract.html

I have already explained this ten times on this mailing list. I think
most grasp the first six or seven steps, but some trouble remains with the
8th step. I also intend to come back, sooner or later, to the seventh step
(which contains a typical computer-science difficulty) in my
conversation with Kim.
Search for UDA in the archive of this list. In March I finished a new
version of the UDA (in 8 propositions) which has benefited from the
conversations on the list, and I will put it, someday, on my web page.


>
>
> But, for A, the fact that we are able to come up with rational-seeming
> explanations for what we experience, and that there seems to us to be
> an orderly pattern to what we perceive, doesn't answer the deeper
> question of the ultimate nature of this external world that we are
> observing.

I agree. Aristotle's naturalist hypothesis is an excellent
methodological simplification, but it departs from all the fundamental
questions asked by Plato. And once Aristotle's simplification is taken
for granted as "fact", or as an authoritative axiom, or worse, as obvious,
then you can only be led to the elimination of the person, in theory or in
practice.


> Here we get into issues of scientific/structural realism.
> In other words, what do our scientific theories really mean? (http://
> plato.stanford.edu/entries/structural-realism/)
>
> But I don't think we can assign any real meaning to what we observe
> until we have an acceptable understanding of the first person
> subjective experience by which we make our observations.


The mind-body problem is the problem of the relation between being and
appearance. Between the always doubtful, sharable objectivity and the
always non-communicable, non-doubtable, direct true apprehension.
Between third-person points of view and singular or plural first-person points of view.
Between quanta and qualia.
Physical reality is a first-person-plural type of thing. QM confirms
this through the multiplication of entangled populations of observers.
Physics concerns the invariant patterns in all universal machines' (self-)
observations, and (unless I am wrong, of course) physical reality is
the border of the intrinsic, abyssal ignorance of all universal
machines. My point is only that this entails verifiable/refutable
facts, and up to now QM confirms this. Comp easily refutes any
Newtonian or classical physics.


>
>
> So the question of consciousness is more fundamental than the
> questions of physics.

Not at all. Those two questions are both fundamental and very
fundamentally related. If you define consciousness by the true belief
in a (not necessarily correct or probable) reality (we are conscious
in dreams), then computer science (and logic) alone can explain why and
how consciousness differentiates in universal-machine histories, and
why stable and sharable dreams develop along long and deep (in
Bennett's sense) ultra-parallel computations (ultra-parallel =
2^aleph_zero histories).

> We can come up with scientific theories to
> explain our observations, but since we don't know what an observation
> really is, this can only get us so far in really understanding what's
> going on with reality. Until we have a foundation in place,
> everything built above is speculative. To rely on physics as your
> foundation is "with more than Baron Münchhausen’s audacity, to pull
> oneself up into existence by the hair, out of the swamps of
> nothingness."

We totally agree on this, except that such a point is not obvious, at
least for most people, after 1500 years of abuse of the Aristotelian
methodological simplification. See the paper for a proof that once we
assume computationalism (and thus keep the notion of consciousness at
the start), physics has to be entirely reduced to the last mystery:
the mystery of numbers. This mystery can be shown to be insoluble by *any*
machine. All self-referentially correct machines can understand that
none of them can ever explain where the (natural) numbers come from.
This makes the numbers again a natural starting point. Without assuming them,
we never get them.


>
>
> But here we hit a problem because the process that we use to explain
> objective data doesn't work when applied to subjective experience.
> There is a discontinuity. The third-person perceived reality vs.
> first-person experienced reality. The latter apparently can't be
> explained in terms of the former.

My whole point, Kelly, is that if you assume the computationalist
hypothesis (digital mechanism, or simply mechanism), then the former
has to be explained from the latter.


> But without an explanation for the
> latter, I don't see how any meaning can be attached to the former.

But the latter is not so difficult ... once you take some time to study
computer science and mathematical logic. And then the former reappears
as a measure problem on computations.

I am amazed by the fact that we have been talking about comp for years
here, and it seems many still don't realize that comp is made possible by
the discovery of the universal machine (by Babbage, Post, Turing, Church,
Kleene, Markov ...). It is a bomb! A creative bomb, for a change. "Nature"
invented it before, of course, again, and again, and again .... So do
the numbers, through their effective enumeration of their computable
relations.


>
>
> And I think that is for this reason that I don't get hung up on the
> "white rabbit" problem. Arguments based on the probability of finding
> yourself in this state or that state are fine if all other things are
> equal, and that's the only information you have to reason with. But I
> don't think that we're in that situation.

Read the UDA and tell me where you have a problem, because this
paragraph is far too ambiguous.


>
>
> So I start with the assumption of physicalism and then say that based
> on that assumption, a computer simulation should be conscious,

You mean a physical computer? Ultimately such a thing exists only in
the purely mathematical universal machine's dreams.

> and
> then from there I find reasons to think that consciousness doesn't
> depend on physicalism.

I agree. But this is the reason why we have to justify the physical
appearance completely from computer science. No doubt information
theory has a role there, but it is just a part of a very large and rich
subject.

> To me, the most likely alternate explanation
> seems to be that consciousness depends on information.


What do you mean by "information"? Which theory are you referring
to? The term "information" is as tricky as the terms "random" and
"infinite".
With comp, pure noise (iterated self-duplication) multiplies "freely".
But the deep things arrive in the many probable relative pieces of
information about possible histories; in the limit, the redundancy
makes the deep difference.

> However, I am
> relying on some of my thought experiments that assumed physicalism as
> support for my conclusion of "informationalism".
>
> But I think the discontinuity between first and third person
> experience is another important clue, because I think that this break
> will be noticeable to all rational conscious entities in all possible
> worlds (even chaotic, irrational worlds). They should all notice a
> difference in kind between what is observed (no matter how crazy it
> is), and the subjective experience of making the observation.
>
> Further, let's say that I am a rational observer in a world where
> changes to brain structure do not appear to cause changes to behavior
> or subjective experience.

? (I guess here you come back to the idea that comp is false, don't
you?) Remember that comp is incompatible with the very idea that there
is a material thing somewhere (by UDA). Ontologically there are only
numbers, together with their additive and multiplicative structure.
This determines the whole structure of dreamy-reality inside views of
the number-theoretical "matrix". See my URL if interested.
It seems to be consistent with your ontological view, but with comp,
computer science provides the math for the epistemological side, and
thanks to the incompleteness phenomenon (and others) all the nuances
give very different and, as such, incompatible inside views of
Arithmetic, most of them divided in two: the communicable and the non-
communicable.


> Physicalism wouldn't have much appeal in
> this world.

In any world. Weak materialism, the idea that matter exists
*primitively*, is provably false in the comp theory, with a minimal use
of Occam.
Matter is not the answer; matter is the question. Reversal.

> Rather, dualism would seem to have a clear edge as the
> default explanation.

Dualism does not work at all. It uses the identity theory. UDA steps 7
and 8 give no choice in the matter (no pun intended).

But you are a Platonist, so we can agree on this.


> But it might be even easier to make the leap to
> platonism in such a world, as presumably Plato's "ideal forms" might
> be even more appealing.

Especially with Church's thesis. Theoretical computer science provides
an entirely new realm. Theoretical computer science is a branch of math
100% unrelated to physics, a priori. Of course many physicists
follow Landauer and his idea that reality is based on quantum
information, but my point is that even that move cannot work: quantum
information has to be a first-person-plural reflection of classical
digitalness. Bits and qubits are related by a double arrow. This makes
comp a testable theory. David Deutsch also seems to believe that
physical computability supersedes the usual à-la-Post-Church-Kleene-
Turing classical computability in fundamentality, but I provide a
reason to believe that this form of quantum fundamentality is a
consequence of the fundamentality of numbers, in their extensional
relations (like Fermat's theorem) and intensional relations (like in
Kleene's or Gödel's theorems).
See my paper on Plotinus. The universal machine seems to plagiarize
Plotinus!
We have a theology here. Correct machines are always humble and can
*only* pray for their consistency, and pray and work for the
satisfiability of their dreams.

> So in such a world you wouldn't get to
> platonism by way of thinking about computer simulations of brains
> (since brain activity isn't correlated with behavior),

If you mean "brain physical activity isn't correlated in an one-one
way with subjectivity", then I am OK.


> but I think you
> would still get there.
>
> The question is, what kind of world would NOT lead you to Platonism? I
> think only a world that didn't have first person experience.

We agree on Platonism. But come on, the Pythagoreans already knew that
numbers kick back. Since then we have learned that there are universal
numbers which kick back universally. Comp and computer science are
interesting for providing the shape (the math) of the uncomputable and
insoluble with which machines have to live, and which they sometimes
name. And physics is solidified by its ultimate foundation in
arithmetic. With comp there is a lot of work to do, that's sure.

Bruno
http://iridia.ulb.ac.be/~marchal/

Jesse Mazer

unread,
Apr 29, 2009, 3:16:30 PM4/29/09
to everyth...@googlegroups.com
Bruno wrote:


On 29 Apr 2009, at 00:25, Jesse Mazer wrote:

and I think it's also the idea behind Maudlin's Olympia thought experiment as well.


>Maudlin's Olympia and the Movie Graph Argument are completely different. Those are arguments showing that computationalism is incompatible with the physical supervenience thesis. They show that consciousness is not related to any physical activity at all. Together with UDA1-7, this shows that physics has to be reduced to a theory of consciousness based on a purely mathematical (even arithmetical) theory of computation, which exists by Church's Thesis.
The movie graph argument was originally only a tool for explaining how difficult the mind-body problem is, once we assume mechanism.




OK, I hadn't been able to find Maudlin's paper online, but I finally located a pdf copy in a post from this list at http://www.mail-archive.com/everyth...@googlegroups.com/msg07657.html ...now that I've read it, I see the argument is distinct from Chalmers' "Does a Rock Implement Every Finite-State Automaton", although they are thematically similar in that they both deal with difficulties in defining what it means for a given physical system to "implement" a given computation. Chalmers' suggestion was that the idea of a rock implementing every possible computer program could be avoided if we defined an "implementation" in terms of counterfactuals, but Maudlin argues that this contradicts the "supervenience thesis" which says that "the presence or absence of inert, causally isolated objects cannot effect the presence or absence of phenomenal states associated with a system", since two systems may have different counterfactual structures merely by virtue of an inert subsystem in one which *would have* become active if the initial state of the system had been slightly different.


It seems to me that there might be ways of defining "causal structure" which don't depend on counterfactuals, though. One idea I had is that for any system which changes state in a lawlike way over time, all facts about events in the system's history can be represented as a collection of propositions, and then causal structure might be understood in terms of logical relations between propositions, given knowledge of the laws governing the system. As an example, if the system was a cellular automaton, one might have a collection of propositions like "cell 156 is colored black at time-step 36", and if you know the rules for how the cells are updated on each time-step, then knowing some subsets of propositions would allow you to deduce others (for example, if you have a set of propositions that tell you the states of cell 71 and all the cells surrounding it at time-step 106, in most cellular automata that would allow you to figure out the state of cell 71 at the subsequent time-step 107). If the laws of physics in our universe are deterministic then you should in principle be able to represent all facts about the state of the universe at all times as a giant (probably infinite) set of propositions as well, and given knowledge of the laws, knowing certain subsets of these propositions would allow you to deduce others.
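
To make that concrete, here is a toy sketch in Python (purely illustrative; the rule number, lattice size, and function names are my own, not anything from Chalmers or Maudlin). It stores a small cellular automaton's history as a set of propositions of the form "cell i has state s at time t", and then checks whether one such proposition is entailed by the propositions about the cell's neighborhood one step earlier, given knowledge of the rule:

RULE = 110  # Wolfram code for the update rule (an arbitrary choice)

def step_cell(left, center, right, rule=RULE):
    """State of a cell at time t+1, given its neighborhood at time t."""
    index = (left << 2) | (center << 1) | right
    return (rule >> index) & 1

def run(initial, steps):
    """Return the system's history as propositions {(cell, time): state}."""
    width = len(initial)
    props = {(i, 0): s for i, s in enumerate(initial)}
    row = list(initial)
    for t in range(1, steps + 1):
        row = [step_cell(row[(i - 1) % width], row[i], row[(i + 1) % width])
               for i in range(width)]
        props.update({(i, t): s for i, s in enumerate(row)})
    return props

def entailed(props, cell, t, width):
    """Do the propositions about (cell-1, cell, cell+1) at time t-1,
    plus knowledge of the rule, entail the proposition about (cell, t)?"""
    needed = [((cell - 1) % width, t - 1), (cell, t - 1), ((cell + 1) % width, t - 1)]
    if not all(p in props for p in needed):
        return False
    left, center, right = (props[p] for p in needed)
    return step_cell(left, center, right) == props[(cell, t)]

history = run([0, 0, 0, 1, 0, 0, 0, 0], steps=5)
print(entailed(history, cell=3, t=4, width=8))   # True

The point is just that the entailment facts are fixed by the history plus the laws, with no reference to what would have happened under different initial conditions.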


"Causal structure" could then be defined in terms of what logical relations hold between the propositions, given knowledge of the laws governing the system. Perhaps in one system you might find a set of four propositions A, B, C, D such that if you know the system's laws, you can see that A&B imply C, and D implies A, but no other proposition or group of propositions in this set of four are sufficient to deduce any of the others in this set. Then in another system you might find a set of four propositions X, Y, Z and W such that W&Z imply Y, and X implies W, but those are the only deductions you can make from within this set. In this case you can say these two different sets of four propositions represent instantiations of the same causal structure, since if you map W to A, Z to B, Y to C, and D to X then you can see an isomorphism in the logical relations. That's obviously a very simple causal structure involving only 4 events, but one might define much more complex causal structures and then check if there was any subset of events in a system's history that matched that structure. And the propositions could be restricted to ones concerning events that actually did occur in the system's history, with no counterfactual propositions about what would have happened if the system's initial state had been different.


Thinking in this way, it's not obvious that Maudlin is right when he assumes that the original "Olympia" defined on p. 418-419 of the paper cannot be implementing a unique computation that gives rise to complex conscious experiences. It's true that the armature itself is not responding in any way to the states of the successive troughs it passes over, but there is an aspect of the setup that might give the system a nontrivial causal structure, namely the fact that certain troughs may be connected by pipes to other troughs in the sequence, so that as the armature empties or fills one it is also emptying or filling the one it's connected to (this is done to emulate the idea of a Turing machine's read/write head returning to the same memory address multiple times, even though Olympia's armature just steadily progresses down the line of troughs in sequence--troughs connected by pipes are supposed to represent a single memory address). If we represented the Olympia system as a set of propositions about the state of each trough and the position of the armature at each time-step, then the fact that the armature's interaction with one trough changes the state of another trough the armature won't visit until a later step may be enough to give different programs markedly different causal structures, in spite of the fact that the armature itself is just dumbly moving from one trough to the next.
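
Here is a very crude toy model of that last point (my own simplification, with made-up numbers, not Maudlin's actual machine): the armature sweeps left to right writing a prescribed value into each trough, but a pipe copies each such write into a later trough, so some of what the armature encounters at later positions was fixed by its own earlier actions. Different pipe layouts then give the same dumb sweep different write-to-later-visit dependency structures:

def run_olympia(n_troughs, writes, pipes):
    """writes[i] is the value the armature puts in trough i at step i;
    pipes maps a trough index to a later trough fed by the same pipe."""
    troughs = [0] * n_troughs
    set_by = {}              # trough index -> step that last set it
    dependencies = []        # (earlier step, later step) pairs
    for step in range(n_troughs):
        if step in set_by and set_by[step] != step:
            # what the armature finds here was fixed by an earlier visit
            dependencies.append((set_by[step], step))
        troughs[step] = writes[step]
        set_by[step] = step
        if step in pipes:    # the pipe copies the new value forward
            troughs[pipes[step]] = writes[step]
            set_by[pipes[step]] = step
    return troughs, dependencies

# Same armature motion and same write sequence, two different pipe layouts:
print(run_olympia(6, [1, 0, 1, 1, 0, 1], pipes={0: 3, 2: 5}))  # deps [(0, 3), (2, 5)]
print(run_olympia(6, [1, 0, 1, 1, 0, 1], pipes={1: 4}))        # deps [(1, 4)]

The dependency lists differ even though the armature's motion is identical in both runs, which is all I mean by saying the pipes might carry a nontrivial causal structure.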

Brent Meeker

unread,
Apr 29, 2009, 4:18:15 PM4/29/09
to everyth...@googlegroups.com
Stathis Papaioannou wrote:
> 2009/4/29 Quentin Anciaux <allc...@gmail.com>:
>
>
>>>> In Bruno's Washington/Moscow thought experiment that information isn't
>>>> in your consciousness, although it's available via third persons. My
>>>> view of the experiment is that you would lose a bit of consciousness,
>>>> that you can't slice consciousness arbitrarily finely in time.
>>>>
>>> Could the question be settled by actual experiment, i.e. asking the
>>> subject if they noticed anything unusual?
>>>
>>>
>>> --
>>> Stathis Papaioannou
>>>
>> For this you would need an actual AI and also that everybody agreed on
>> the fact that this AI is conscious and not a zombie.
>>
>> If you can settle that, then an interview should be counted as proof.
>> But I'm not sure you can prove the AI is conscious, nor with the same
>> argument I'm not sure I could prove to you that I am.
>>
>
> Well, you could just ask the teleported human. If he says he feels
> fine, didn't notice anything other than the scenery changing, would
> that count for anything? I suppose you could argue that of course he
> would say that since a gap in consciousness is by definition not
> noticeable,

I see no contradiction in a "noticeable gap in consciousness". Whether
noticing such a gap depends on having some theory of the world or is
intrinsic seems to be the question.

Brent

Bruno Marchal

unread,
Apr 29, 2009, 4:19:56 PM4/29/09
to everyth...@googlegroups.com
Maudlin's point is that the causal structure has no physical role, so if you maintain the association of consciousness with the causal, actually computational, structure, you have to abandon physical supervenience. Or you reintroduce some magic, as if neurons had some knowledge, during some computations, of the absence of some other neurons to which they are not related.
But read the movie graph argument, which shows the same thing without going through the question of the counterfactuals. If you believe that consciousness supervenes on the physical implementation, or even on just one universal machine computation, then you will associate consciousness with a description of that computation. But the description, although containing the genuine information, is just not a computation at all. It misses the logical relations between the steps, made possible by the universal machine. So you can keep mechanism only by associating consciousness with the logical, immaterial relations between the states. From inside there are infinitely many such relations, and this means the physical has to supervene on the sum of those relations "as seen from inside". By Church's thesis and self-reference logic, they have a non-trivial, redundant structure.

Bruno