Olympia's Beautiful and Profound Mind


Brian Scurfield

May 13, 2005, 3:54:49 AM
to Fabric-o...@yahoogroups.com, everyth...@eskimo.com
Bruno recently urged me to read up on Tim Maudlin's movie-graph argument
against the computational hypothesis. I did so. Here is my version of the
argument.
...........................

According to the computational hypothesis, consciousness supervenes on brain
activity and the important level of organization in the brain is its
computational structure. So the same consciousness can supervene on two
different physical systems provided that they support the same computational
structure. For example, we could replace every neuron in your brain with a
functionally equivalent silicon chip and you would not notice the
difference.

Computational structure is an abstract concept. The machine table of a
Turing Machine does not specify any physical requirements and different
physical implementations of the same machine may not be comparable in terms
of the amount of physical activity each must engage in. We might enquire:
what is the minimal amount of physical activity that can support a given
computation, and, in particular, consciousness?

Consider that we have a physical Turing Machine that instantiates the
phenomenal state of a conscious observer. To do this, it starts with a
prepared tape and runs through a sequence of state changes, writing symbols
to the tape, and moving the read/write head as it does so. It engages in a lot of
physical activity. By assumption, the phenomenal state supervenes on this
physical computational activity. Each time we run the machine we will get
the same phenomenal state.
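The kind of run described above can be sketched in code. This is a minimal toy sketch: the machine table and tape below are hypothetical examples of mine, not the machine that instantiates a phenomenal state. The point is only that a run produces a definite sequence of (state, tape location) pairs.

```python
# A minimal Turing Machine runner that records the sequence of
# (state, tape location) pairs visited at each step of a run.

def run_tm(table, tape, state="S0", head=0, halt="HALT", max_steps=1000):
    """Run a TM and return its trace of (state, head position) pairs."""
    trace = [(state, head)]
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, 0)                 # blank cells read as 0
        state, write, move = table[(state, symbol)]
        tape[head] = write
        head += move
        trace.append((state, head))
    return trace

# Toy machine table (hypothetical): skip over 1s, flip the first two 0s.
table = {
    ("S0", 1): ("S0", 0, +1),
    ("S0", 0): ("S1", 1, +1),
    ("S1", 1): ("S1", 0, +1),
    ("S1", 0): ("HALT", 0, 0),
}
trace = run_tm(table, {0: 1, 1: 1})
# trace == [("S0", 0), ("S0", 1), ("S0", 2), ("S1", 3), ("HALT", 3)]
```

Running the machine twice from the same prepared tape yields the same trace, which is the sense in which each run brings about "the same" computation.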

Let's try to minimise the amount of computational activity that the Turing
Machine must engage in. We note that many possible pathways through the
machine state table are not used in our particular computation because
certain counterfactuals are not true. For example, on the first step, the
machine might actually go from S_0 to S_8 because the data location on the
tape contained 0. Had the tape contained a 1, it might have gone to S_10,
but this doesn't obtain because the 1 was not actually present.

So let's unravel the actual computational path taken by the machine when it
starts with the prepared tape. Here are the actual machine states and tape
locations at each step:

s_0 s_8 s_7 s_7 s_3 s_2 . . . s_1023
t_0 t_1 t_2 t_1 t_2 t_3 . . . t_2032

Re-label these as follows:

s_[0] s_[1] s_[2] s_[3] s_[4] s_[5] . . . s_[N]
t_[0] t_[1] t_[2] t_[3] t_[4] t_[5] . . . t_[N]

Note that t_[1] and t_[3] are the same tape location, namely t_1. Similarly,
t_[2] and t_[4] are both tape location t_2. These tape locations are
"multiply-located".

The tape locations t_[0], t_[1], t_[2], ..., can be arranged in physical
sequence provided that a mechanism links the multiply-located
locations. Thus t_[1] and t_[3] might be joined by a circuit that turns both
on when a 1 is written and both off when a 0 is written. Now when the
machine runs, it has to take account of the remapped tape locations when
computing what state to go into next. Nevertheless, the net-effect of all
this is that it just runs from left to right.

If the machine just runs from left to right, why bother computing the state
changes? We could just arrange for each tape location to turn on (1 = on) or
off (0 = off) when the read/write head arrives. For example, if t_[2] would
have been turned on in the original computation, then there would be a local
mechanism that turns that location on when the read/write head arrives (note
that t_[4] would also turn on because it is linked to t_[2]). The state
S_[i] is then defined to occur when the machine is at tape location t_[i]
(this machine therefore undergoes as many state changes as the original
machine). Now we have a machine that just moves from left to right
triggering tape locations. To make it even simpler, the read/write head can
be replaced by an armature that moves from left to right triggering tape
locations. We have a very lazy machine! Its name is Olympia.

What, then, is the physical activity on which the phenomenal state
supervenes? It cannot be in the activity of the armature moving from
left to right. That doesn't seem to have the required complexity. Is it in
the turning on and off of the tape locations as the armature moves?
Again that does not seem to have the required degree of complexity.

It might be objected that in stripping out the computational pathway that we
did, we have neglected all the other pathways that could have been executed
but never in fact were. But what difference do these pathways make? We could
construct similar left-right machines for each of these pathways. These
machines would be triggered when a counterfactual occurs at a tape location.
The triggering mechanism is simple. If, say, t_[3] was originally on just
prior to the arrival of the read/write head but is now in fact off, then we
can freeze the original machine and arrange for another left-right machine
to start from that tape location. This triggering and freezing can be done
using a simple local mechanism at t_[3].
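The freezing-and-triggering mechanism might be sketched like this. Names such as `expected` and `spares` are my illustrative inventions, not Maudlin's terminology.

```python
# Counterfactual machinery: before each armature step, a local mechanism
# checks that the cell holds the value the original run expected; if not,
# Olympia freezes and a spare left-right machine takes over from there.

def step(i, cells, expected, spares):
    """Advance the armature at location i, or freeze on a counterfactual."""
    if cells.get(i, 0) != expected[i]:    # a counterfactual has occurred
        return ("frozen", spares[i])      # hand off to the spare machine
    cells[i] = expected[i]                # otherwise trigger as scripted
    return ("running", None)

expected = {0: 1, 1: 0, 2: 1}             # values from the original run
spares = {0: "M0", 1: "M1", 2: "M2"}      # one spare machine per location
cells = {0: 1, 1: 1, 2: 1}                # cell 1 differs from the script
assert step(0, cells, expected, spares) == ("running", None)
assert step(1, cells, expected, spares) == ("frozen", "M1")
```

In the actual run none of the spares ever fire, which is the crux: the machinery sits there inert while exactly the scripted pass takes place.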

For brevity, I have just sketched how the counterfactuals might be
implemented (see the original article for more detail). The point is that we
have implemented all this extra machinery for supporting counterfactuals,
but none of it is actually used during the original computation. It remains
silent and inactive. Olympia runs just as well without them. Does connecting
up all the counterfactual machinery make Olympia phenomenally aware? And
does disconnecting the machinery make her not phenomenally aware even though
exactly the same computation is taking place?

From the above, it would seem the following are inconsistent with each
other.

1. Your phenomenal state at a time is entirely determined by your brain
activity at the time.
2. For any phenomenal state of consciousness there exists some program, some
tape configuration, and some sequence of machine states that brings about
that phenomenal state on any physical machine capable of running the
program.
3. A physical system supports a phenomenal state if the system can be
implemented as a Turing Machine performing some computation.

Maudlin's conclusion is that phenomenal states cannot supervene on physical
computational activity.

This, of course, is where Bruno and co. step in.
--------------------------------------


Notes:
1. Bruno Marchal independently discovered the movie-graph argument in 1988.
2. Maudlin considered a machine that used water troughs in place of tape
locations, but I really didn't want to inflict that kind of imagery on Bill!


Reference.

Maudlin, Tim (1989). Computation and Consciousness. Journal of Philosophy,
pp. 407-432.


Bruno Marchal

May 13, 2005, 8:40:34 AM
to Brian Scurfield, Fabric-o...@yahoogroups.com, everyth...@eskimo.com
Thanks for that very nice summary. I will let people think about it. We
discussed it at length before on the Everything-list. A keyword for
finding that discussion in the everything-list archive is "crackpot", as
Jacques Mallah named the argument.
It is good that we can come back to this, because we didn't conclude our
old discussion, and for the new people on the list, as for the FoR-list
people, it is a quite important step to figure out that the UDA is a
``proof", not just an ``argument". Well, at least I think so. Also,
thanks to Maudlin taking into account the necessity of the
counterfactuals in the notion of computation, and thanks to another
(more technical) paper by Hardegree, it is possible to use it to
motivate an equivalent but technically different path toward an
arithmetical quantum logic. I propose we talk about Hardegree later. But I
give the reference to Hardegree for those who are impatient ;) (also,
compared to many papers on quantum logic, this one is quite readable, and
constitutes perhaps a nice introduction to quantum logic, I would
add, especially for Many-Worlders). Hardegree shows that the most
standard implication connective available in quantum logic is formally
(at least) equivalent to a Stalnaker-Lewis notion of counterfactual. It
is the David Lewis of "On the Plurality of Worlds" and "Counterfactuals",
two books which deserve some room on the shelf of FoR-listers and
Everythingers, imo.
Also, I didn't know it, but the late David Lewis wrote a paper on Everett
(communicated to me by Adrien Barton). Alas, I have not yet found the
time to read it.

Hardegree, G. M. (1976). The Conditional in Quantum Logic. In Suppes,
P., editor, Logic and Probability in Quantum Mechanics, volume 78 of
Synthese Library, pages 55-72. D. Reidel Publishing Company,
Dordrecht-Holland.

Bruno

On 13 May 2005, at 09:50, Brian Scurfield wrote:

> Bruno recently urged me to read up on Tim Maudlin's movie-graph argument
> against the computational hypothesis. I did so. Here is my version of the
> argument.

> ............................

http://iridia.ulb.ac.be/~marchal/

Hal Finney

May 13, 2005, 1:03:38 PM
to brian.s...@clear.net.nz, Fabric-o...@yahoogroups.com, everyth...@eskimo.com
We had some discussion of Maudlin's paper on the everything-list in 1999.
I summarized the paper at http://www.escribe.com/science/theory/m898.html .
Subsequent discussion under the thread title "implementation" followed
up; I will point to my posting at
http://www.escribe.com/science/theory/m962.html regarding Bruno's version
of Maudlin's result.

I suggested a flaw in Maudlin's argument at
http://www.escribe.com/science/theory/m1010.html with followup
http://www.escribe.com/science/theory/m1015.html .

In a nutshell, my point was that Maudlin fails to show that physical
supervenience (that is, the principle that whether a system is
conscious or not depends solely on the physical activity of the system)
is inconsistent with computationalism. What he does show is that you
can change the computation implemented by a system without altering it
physically (by some definition). But his desired conclusion does not
follow logically, because it is possible that the new computation is
also conscious.

(In fact, I argued that the new computation is very plausibly conscious,
but that doesn't even matter, because it is sufficient to consider that
it might be, in order to see that Maudlin's argument doesn't go through.
To repair his argument it would be necessary to prove that the altered
computation is unconscious.)

You can follow the thread and date index links off the messages above
to see much more discussion of the issue of implementation.

Hal Finney

Brian Scurfield

May 13, 2005, 11:16:29 PM
to "Hal Finney", Fabric-o...@yahoogroups.com, everyth...@eskimo.com
Hal wrote:

> We had some discussion of Maudlin's paper on the everything-list in 1999.
> I summarized the paper at http://www.escribe.com/science/theory/m898.html .
> Subsequent discussion under the thread title "implementation" followed
> up; I will point to my posting at
> http://www.escribe.com/science/theory/m962.html regarding Bruno's version
> of Maudlin's result.

Thanks for those links.



> I suggested a flaw in Maudlin's argument at
> http://www.escribe.com/science/theory/m1010.html with followup
> http://www.escribe.com/science/theory/m1015.html .
>
> In a nutshell, my point was that Maudlin fails to show that physical
> supervenience (that is, the principle that whether a system is
> conscious or not depends solely on the physical activity of the system)
> is inconsistent with computationalism. What he does show is that you
> can change the computation implemented by a system without altering it
> physically (by some definition). But his desired conclusion does not
> follow logically, because it is possible that the new computation is
> also conscious.

So the system instantiates two different computations, when all things are
considered. The first instantiation is when the counterfactuals are enabled
(block removed) and the second instantiation is when the counterfactuals are
disabled (block added). Because there are two different computations, we
can't conclude that the second instantiation does not lead to a phenomenal
state of consciousness. But would you agree that there does not
appear to be sufficient physical activity taking place in the second
instantiation to sustain phenomenal awareness? After all, Maudlin went to a
lot of trouble to construct a lazy machine! To carry out the second
computation, all that needs to happen is that the armature travel from left
to right emptying or filling troughs (or, as in my summary, triggering tape
locations). It is supposed to be transparently obvious from the lack of
activity that if it is conscious then it can't be as a result of physical
activity. Now you maintain it is conscious, so wherein lies the
consciousness?

> (In fact, I argued that the new computation is very plausibly conscious,
> but that doesn't even matter, because it is sufficient to consider that
> it might be, in order to see that Maudlin's argument doesn't go through.
> To repair his argument it would be necessary to prove that the altered
> computation is unconscious.)
>
> You can follow the thread and date index links off the messages above
> to see much more discussion of the issue of implementation.

OK, I'm making my way through those. Apologies to the list if the points I
raise have been covered previously.

Brian Scurfield

Jesse Mazer

May 14, 2005, 12:37:55 AM
to everyth...@eskimo.com
Brian Scurfield wrote:

The main objection that comes to my mind is that in order to plan ahead of
time what number should be in each tape location before the armature begins
moving and flipping bits, you need to have already done the computation in
the regular way--so Olympia is not really computing anything, it's basically
just a playback device for showing us a *recording* of what happened during
the original computation. I don't think Olympia contributes anything more to
the measure of the observer-moment that was associated with the original
computation, any more than playing a movie showing the workings of each
neuron in my brain would contribute to the measure of the observer-moment
associated with what my brain was doing during that time.

>
>What, then, is the physical activity on which the phenomenal state
>supervenes? It cannot be in the activity of the armature moving from
>left to right. That doesn't seem to have the required complexity. Is it in
>the turning on and off of the tape locations as the armature moves?
>Again that does not seem to have the required degree of complexity.
>
>It might be objected that in stripping out the computational pathway that
>we did, we have neglected all the other pathways that could have been
>executed but never in fact were. But what difference do these pathways
>make? We could construct similar left-right machines for each of these
>pathways. These machines would be triggered when a counterfactual occurs
>at a tape location. The triggering mechanism is simple. If, say, t_[3]
>was originally on just prior to the arrival of the read/write head but is
>now in fact off, then we can freeze the original machine and arrange for
>another left-right machine to start from that tape location. This
>triggering and freezing can be done using a simple local mechanism at
>t_[3].

But this would still be just a playback device, it wouldn't have the same
"causal structure" (although I don't know precisely how to define that term,
so this is probably the weakest part of my argument) as the original
computation. To build all these pathways, you could expose a given
deterministic A.I. to all possible strings of sensory input of a certain
finite length, record the results for each string, and construct an
Olympia-type-playback device for each one. But although the original
computations contributed to the measure of a huge number of
observer-moments, playing back recordings of them would not, I think.

This actually brings up an interesting ethical problem. Suppose we put a
deterministic A.I. in a simulated environment with a simulated puppet body
whose motions are controlled by a string of numbers, and simulate what
happens with every possible input string to the puppet body of a certain
length. The vast majority of copies of the A.I. will observer the puppet
body flailing around in a completely random way, of course. But once we have
a database of what happened in the simulation with every possible input
string, we could then create an interactive playback device where I put on a
VR suit and my movements are translated into input strings for the puppet
body, and then the device responds by playing back the next bit of the
appropriate recording in its database. The question is, would it be morally
wrong for me to make the puppet body torture the A.I. in the simulation? I'm
really just playing back a recording of a simulation that already happened,
so it seems like I'm not contributing anything to the measure of painful
observer-moments for the A.I., and of course the fraction of histories where
the A.I. experienced the puppet body acting in a coherent manner would have
been extremely small. I guess one answer to why it's still wrong, besides
the fact that simulated torture might have a corrupting effect on my own
mind, is that there's bound to be some fraction of worlds where I was
tricked or deluded into believing the VR system was just playing back
recordings from a preexisting database, when in reality I'm actually
interacting with a computation being done in realtime, so if I engage in
torture I am at least slightly contributing to the measure of
observer-moments who are experiencing horrible pain.

Jesse


Lee Corbin

May 14, 2005, 2:02:33 AM
to Fabric-o...@yahoogroups.com, everyth...@eskimo.com
Hal writes

> We had some discussion of Maudlin's paper on the everything-list in 1999.
> I summarized the paper at http://www.escribe.com/science/theory/m898.html .
> Subsequent discussion under the thread title "implementation" followed

> ...


> I suggested a flaw in Maudlin's argument at
> http://www.escribe.com/science/theory/m1010.html with followup
> http://www.escribe.com/science/theory/m1015.html .
>
> In a nutshell, my point was that Maudlin fails to show that physical
> supervenience (that is, the principle that whether a system is
> conscious or not depends solely on the physical activity of the system)
> is inconsistent with computationalism.

It seemed to me that he made a leap at the end.

> (In fact, I argued that the new computation is very plausibly conscious,
> but that doesn't even matter, because it is sufficient to consider that
> it might be, in order to see that Maudlin's argument doesn't go through.
> To repair his argument it would be necessary to prove that the altered
> computation is unconscious.)

I know that Hal participated in a discussion on Extropians in 2002 or 2003
concerning Giant Look-Up Tables. I'm surprised that either in the course
of those discussions he didn't mention Maudlin's argument, or that I have
forgotten it.

Doesn't it all seem of a piece? We have, again, an entity that either
does not compute its subsequent states, (or as Jesse Mazer points out,
does so in a way that looks suspiciously like a recording of an actual
prior calculation).

The GLUT was a device that seemed to me to do the same thing, that is,
portray subsequent states without engaging in bonafide computations.
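The GLUT idea can be sketched in a few lines. The tabulated "program" here is a toy stand-in of mine, not a mind; the point is that once every input history has been precomputed, "responding" is a single lookup rather than a computation.

```python
from itertools import product

# Giant Look-Up Table: map every possible input history (up to some
# length) directly to a precomputed output, so subsequent states are
# portrayed by lookup, with no bona fide computation at "run time".

def build_glut(program, alphabet, depth):
    """Tabulate program() over every input string of length 0..depth."""
    table = {}
    for n in range(depth + 1):
        for history in product(alphabet, repeat=n):
            table[history] = program(history)
    return table

# Stand-in "mind": reply with the parity of 1s seen so far.
glut = build_glut(lambda h: sum(h) % 2, (0, 1), depth=3)
assert glut[(1, 0, 1)] == 0   # looked up, not computed when queried
```

Note the table grows exponentially in the input length (here 1 + 2 + 4 + 8 = 15 entries for depth 3), which is why the GLUT is a thought experiment rather than a practical device.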

Is all this really the same underlying issue, or not?

Lee

Lee Corbin

May 14, 2005, 2:35:22 AM
to FoR, EverythingList
Jesse comments on Brian's remarkable and exceedingly valuable
explication (thanks, Brian!), even if some old-timers are
having deja-vu all over again, and are wondering if indeed
the universe isn't hopelessly cyclic after all.

> > triggering tape locations. To make it even simpler, the read/write head can
> > be replaced by an armature that moves from left to right triggering tape
> > locations. We have a very lazy machine! Its name is Olympia.
>
> The main objection that comes to my mind is that in order to plan ahead of
> time what number should be in each tape location before the armature begins
> moving and flipping bits, you need to have already done the computation in
> the regular way--so Olympia is not really computing anything, it's basically
> just a playback device for showing us a *recording* of what happened during
> the original computation.

It seems to me that you are exactly correct! Admittedly I'm not
very articulate on this, partly because it's such a mystery, BUT:

It seems to me that there must be real information flow between
states of a real computation. When things are just laid out in a
way that circumvents this information flow, this causality, then
neither consciousness nor observer-moments obtain.

I admit that this is most peculiar. I admit that this may be
just another way of saying that time is mysterious. I admit
that it is logically possible that ditching the real universe
and regarding it as only a certain backwater of Platonia could
be correct. But so far: I can't accept it, and partly for the
*moral* aspects that Jesse brings up later.

> I don't think Olympia contributes anything more to the measure
> of the observer-moment that was associated with the original
> computation, any more than playing a movie showing the workings
> of each neuron in my brain would contribute to the measure of
> the observer-moment associated with what my brain was doing
> during that time.

Just so. But don't you find this difficult, as I do? Don't
we need stronger arguments than these to counter those who
believe that Platonia explains everything? Don't you also
feel that time and causality are linked here strongly, and
that somehow you and me and people like us seem to have a
"faith" that time and causality are real and independent of
such performances of Olympia, or "performances" by the
timeless Universal Dovetailer?

> But this would still be just a playback device, it wouldn't have the same
> "causal structure" (although I don't know precisely how to define that term,
> so this is probably the weakest part of my argument)

yes, we seem to have the same misgivings after all

> This actually brings up an interesting ethical problem. Suppose we put a
> deterministic A.I. in a simulated environment with a simulated puppet body
> whose motions are controlled by a string of numbers, and simulate what
> happens with every possible input string to the puppet body of a certain
> length.

An old idea, the Giant LookUp Table, or GLUT, did what to me
amounts to the same thing.

> The vast majority of copies of the A.I. will observe the puppet
> body flailing around in a completely random way, of course. But once we have
> a database of what happened in the simulation with every possible input
> string, we could then create an interactive playback device where I put on a
> VR suit and my movements are translated into input strings for the puppet
> body, and then the device responds by playing back the next bit of the
> appropriate recording in its database. The question is, would it be morally
> wrong for me to make the puppet body torture the A.I. in the simulation?

In my opinion: NO. It would not be morally wrong, because (as
you too believe) there are no observer moments in the playback.
When subsequent events are *causally* calculated from prior ones,
then, and only then, can moral issues arise, because then, and
only then, does an entity either benefit or suffer.

> I'm really just playing back a recording of a simulation that already
> happened,

that's right!

> so it seems like I'm not contributing anything to the measure of painful

> observer-moments for the A.I., and of course the fraction of histories where
> the A.I. experienced the puppet body acting in a coherent manner would have
> been extremely small.

Yes.

> I guess one answer to why it's still wrong, [IT IS???] besides

> the fact that simulated torture might have a corrupting effect on my own
> mind,

yeah, but that's not relevant. Let's hypothesize that you're
a very strong adult and can watch, say, violent or really scary
films or other portrayals, and not be too "disturbed"

> is that there's bound to be some fraction of worlds where I was
> tricked or deluded into believing the VR system was just playing back
> recordings from a preexisting database, when in reality I'm actually
> interacting with a computation being done in realtime, so if I engage in
> torture I am at least slightly contributing to the measure of
> observer-moments who are experiencing horrible pain.

Very good point. An important part of these hypotheses that we
play with is that we are given all the information. Thus, we do
not need to have the concern that you have just voiced.
(It's interesting enough, as is.)

Yes, indeed, whenever called upon to do something momentous, it
is wise for one to always question one's sanity, and to double-check
that (for example) the world really will be saved if I shoot down
that airliner, or throw an innocent person out of an airlock as in
the Tom Godwin story, "The Cold Equations". But those
considerations aren't really relevant here, so far as I can see.

As most readers will agree, it is better for a million portrayals
of Auschwitz to occur, than for one genuine kitten to be thrown into
boiling water. Now an *emulation* of Auschwitz? Well, that would be
different. Goodbye kitty, I'm sorry.

Lee

Brian Scurfield

May 14, 2005, 4:22:09 AM
to Jesse Mazer, everyth...@eskimo.com, Fabric-o...@yahoogroups.com
Jesse wrote:

> The main objection that comes to my mind is that in order to plan ahead of
> time what number should be in each tape location before the armature
> begins moving and flipping bits, you need to have already done the
> computation in the regular way--so Olympia is not really computing
> anything, it's basically just a playback device for showing us a
> *recording* of what happened during the original computation.
>
> I don't think Olympia contributes anything more to the measure of the
> observer-moment that was associated with the original computation, any
> more than playing a movie showing the workings of each neuron in my brain
> would contribute to the measure of the observer-moment associated with
> what my brain was doing during that time.

You have to be careful, however, not to maintain that when we perform a
computation and then do it in exactly the same way again, only the first
run counts as genuine. And taking note of the machine states and tape
configuration during each step of the first run does not mean that the
second run is any less of a computation.

This aside, you are right that we need knowledge of the steps of the
original computation to construct Olympia. We could have just filmed the
steps of the original computation and played back the film, claiming that
the film is also performing a computation. I would agree that this would not
fit the bill. Information does not flow. But is it correct to say that
information does not flow in Olympia? Consider the following. Olympia's tape
configuration is causally dependent on the armature being in a certain
state; that is, being at a certain location. And because the tape is
"multiply-located", what happens at one point can affect what is happening
at another point. Furthermore, Olympia goes through as many state changes as
the original machine and the machine state has been defined without
reference to the state of the tape and vice-versa. Lastly, unlike a film,
Olympia is sensitive to counterfactuals when in the unblocked state. Do
these indicate the flow of information? I'm tempted to say yes!

Brian Scurfield


Stephen Paul King

May 14, 2005, 8:44:40 PM
to Fabric-o...@yahoogroups.com, everyth...@eskimo.com
Dear Lee,

Let me use your post to continue our offline conversation here for the
benefit of all.

    The idea of a computation: is it well founded or non-well-founded?
Usually TMs and other finite (or infinite!) state machines are assumed to
have a well-founded set of states, such that there are no "circularities"
nor infinite descending sequences in their specifications. See:

http://www.answers.com/topic/non-well-founded-set-theory


    One of the interesting features that arises when we consider whether it
is possible to faithfully represent 1st person experience of the
world - "being in the world", as Sartre wrote - in terms of computationally
generated simulations is that circularities arise almost everywhere.

    Jon Barwise, Peter Wegner and others have pointed out that the usual
notions of computation fail to properly take into consideration the
necessity of dealing with this issue, and have been operating in a state of
denial about a crucial aspect of the notion of conscious awareness: how can
an a priori specifiable computation contain an internal representational
model of itself that is dependent on its choices and interactions with
"others", when these others are not specified within the computation?

http://www.informatik.uni-trier.de/~ley/db/indices/a-tree/b/Barwise:Jon.html
http://www.cs.brown.edu/people/pw/

Another aspect of this is the problem of concurrency.

http://www.cs.auckland.ac.nz/compsci340s2c/lectures/lecture10.pdf
http://boole.stanford.edu/abstracts.html

    I am sure that I am being a foolish tyro in this post. ;-)

Kindest regards,

Stephen


Stathis Papaioannou

May 15, 2005, 8:42:13 AM
to step...@charter.net, everyth...@eskimo.com
I appreciate that there are genuine problems in the theory of computation as
applied to intelligent and/or conscious minds. However, we know that
intelligent and conscious minds do in fact exist, running on biological
hardware. The situation is a bit like seeing an aeroplane in the sky then
trying to figure out the physics of heavier than air flight; if you prove
that it's impossible, then there has to be something wrong with your proof.

If it does turn out that the brain is not Turing emulable, what are the
implications of this? Could we still build a conscious machine with
appropriate soldering and coding, or would we have to surrender to dualism/
an immaterial soul/ Roger Penrose or what?

--Stathis Papaioannou


Stephen Paul King

May 15, 2005, 9:47:59 AM
to Stathis Papaioannou, everyth...@eskimo.com
Dear Stathis,

Two points: I am pointing out that the "non-interactional" idea of
computation and any form of monism will fail to account for the "necessity"
of 1st person viewpoints. I am advocating a form of dualism, a "process"
dualism based on the work of Vaughan Pratt.

http://boole.stanford.edu/pub/ratmech.pdf

    What is interesting about this form of dualism is that the dual aspects
become identical to each other (automorphic?) in the limit of infinite Minds
and Bodies. I don't have time to explain the details of this right now, but
your fears can be assuaged: there is no coherent notion of an "immaterial
soul" nor of a "mindless body". An example of the former is a complete atomic
Boolean algebra that cannot be instantiated physically by any means, and an
example of the latter is found in considering a physical object that has no
possible representation.

Stephen

----- Original Message -----
From: "Stathis Papaioannou" <stathispa...@hotmail.com>
To: <step...@charter.net>
Cc: <everyth...@eskimo.com>
Sent: Sunday, May 15, 2005 8:32 AM
Subject: Re: Olympia's Beautiful and Profound Mind

Bruno Marchal

May 15, 2005, 10:29:03 AM5/15/05
to Stephen Paul King, Stathis Papaioannou, everyth...@eskimo.com

Le 15-mai-05, à 15:40, Stephen Paul King a écrit :

> Two points: I am pointing out that the "non-interactional" idea of
> computation and any form of monism will fail to account for the
> "necessity" of 1st person viewpoints.

You know that the "necessity" of 1st person viewpoints is what I
consider the most easily explained (through the translation of the
Theaetetus in arithmetic or in any language of a lobian machine).
You refer to a paper as hard and technical as my thesis. You should
explain why you still believe the 1st person is dismissed in comp or any
monism.
Also, Pratt seems to me a monist, and his mathematical dualism does not
address the main question in philosophy of mind/cognitive science. His
paper is interesting but could hardly be referred to as an authority on
those questions at this stage.

Bruno


http://iridia.ulb.ac.be/~marchal/


Stephen Paul King

May 15, 2005, 2:04:36 PM5/15/05
to Bruno Marchal, everyth...@eskimo.com
Dear Bruno,

As for your showing of the "necessity" of a 1st person viewpoint, I still
do not understand your argument and that is a failure on my part. ;-) As to
Pratt's ideas, let me quote directly from one of his papers:

http://boole.stanford.edu/pub/ratmech.pdf

"Some of the questions however remain philosophically challenging even
today. A central tenet of Cartesianism is mind-body dualism, the principle
that mind and body are the two basic substances of which reality is
constituted. Each can exist separately, body as realized in inanimate
objects and lower forms of life, mind as realized in abstract concepts and
mathematical certainties. According to Descartes the two come together only
in humans, where they undergo causal interaction, the mind reflecting on
sensory perceptions while orchestrating the physical motions of the limbs
and other organs of the body.

The crucial problem for the causal interaction theory of mind and body
was its mechanism: how did it work?

Descartes hypothesized the pineal gland, near the center of the brain,
as the seat of causal interaction. The objection was raised that the mental
and physical planes were of such a fundamentally dissimilar character as to
preclude any ordinary notion of causal interaction. But the part about a
separate yet joint reality of mind and body seemed less objectionable, and
various commentators offered their own explanations for the undeniably
strong correlations of mental and physical phenomena.

Malebranche insisted that these were only correlations and not true
interactions, whose appearance of interaction was arranged in every detail
by God by divine intervention on every occasion of correlation, a theory
that naturally enough came to be called occasionalism. Spinoza freed God
from this demanding schedule by organizing the parallel behavior of mind and
matter as a preordained apartheid emanating from God as the source of
everything. Leibniz postulated monads, cosmic chronometers miraculously
keeping perfect time with each other yet not interacting.

These patently untestable answers only served to give dualism a bad
name, and it gave way in due course to one or another form of monism: either
mind or matter but not both as distinct real substances. Berkeley opined
that matter did not exist and that the universe consisted solely of ideas.
Hobbes ventured the opposite: mind did not exist except as an artifact of
matter. Russell [Rus27] embraced neutral monism, which reconciled Berkeley's
and Hobbes' viewpoints as compatible dual accounts of a common neutral
Leibnizian monad.

This much of the history of mind-body dualism will suffice as a
convenient point of reference for the sequel. R. Watson's Britannica article
[Wat86] is a conveniently accessible starting point for further reading. The
thesis of this paper is that mind-body dualism can be made to work via a
theory that we greatly prefer to its monist competitors. Reflecting an era
of reduced expectations for the superiority of humans, we have implemented
causal interaction not with the pineal gland but with machinery freely
available to all classical entities, whether newt, pet rock, electron, or
theorem (but not quantum mechanical wavefunction, which is sibling to if not
an actual instance of our machinery)."

and

"We have advanced a mechanism for the causal interaction of mind and
body, and argued that separate additional mechanisms for body-body and
mind-mind interaction can be dispensed with; mind-body interaction is all
that is needed. This is a very different outcome from that contemplated by
17th century Cartesianists, who took body-body and mind-mind interaction as
given and who could find no satisfactory passage from these to mind-body
interaction. Even had they found a technically plausible solution to their
puzzle, mind-body interaction would presumably still have been regarded as
secondary to body-body interaction. We have reversed that priority.
One might not expect mind-body duality as a mere philosophical problem
to address any urgent need outside of philosophy. Nevertheless we have
offered solutions to the following practical problems that could be
construed as particular applications of our general solution to Descartes'
mind-body problem, broadly construed to allow scarecrows and everything else
to have minds."


Those are his own words!

Stephen

----- Original Message -----
From: "Bruno Marchal" <mar...@ulb.ac.be>
To: "Stephen Paul King" <step...@charter.net>
Cc: "Stathis Papaioannou" <stathispa...@hotmail.com>;
<everyth...@eskimo.com>
Sent: Sunday, May 15, 2005 10:18 AM
Subject: Re: Olympia's Beautiful and Profound Mind

Stathis Papaioannou

May 16, 2005, 2:49:53 AM5/16/05
to step...@charter.net, mar...@ulb.ac.be, everyth...@eskimo.com
Dear Stephen,

The Pratt quote below shows disdain for historical solutions to the
mind-body problem, such as Descartes' theory that the two interact through
the pineal gland, but goes on to say that this is no reason to throw out
dualism altogether. Now, I have to admit, despite spending my adolescence in
the thrall of logical positivism (I still think A.J. Ayer's "Language, Truth
and Logic" is one of the great masterpieces of 20th century English
nonfictional prose), that there is something irreducible about 1st person
experience, forever beyond 3rd person verification or falsification; a blind
man might learn everything about visual perception, but still have no idea
what it is like to see. However, what reason is there to extrapolate from
this that there must be some special explanation for the interaction between
body and mind? What do you lose if you simply accept, as per Gilbert Ryle,
that the mind is what the brain does? Otherwise, you could seek a special
explanation for an electronic calculator's matter/mathematics dualism, or a
falling stone's matter/energy dualism, or any number of similar examples.
Occam's razor would suggest that such complications are unnecessary.

--Stathis Papaioannou


Stephen Paul King

May 16, 2005, 12:14:53 PM5/16/05
to Stathis Papaioannou, mar...@ulb.ac.be, everyth...@eskimo.com
Dear Stathis,

In a phrase, I would lose choice. What you are asking me is to give up
any hope of understanding how my sense of being-in-the-world is related to
any other phenomena in the world of experience and instead to just blindly
believe some claim. Are we so frustrated that we will accept "authority" as
a proof of our beliefs? I hope not!

Pratt's disdain follows from the obvious failures of other models. It
does not take a logician or mathematician or philosopher of unbelievable IQ
to see that the models of monism that have been advanced have a fatal flaw:
the inability to prove the necessity of epiphenomena. Maybe Bruno's theory
will solve this; I hold out hope that it does. But meanwhile, why can't we
consider and debate alternatives that offer wide-ranging explanations and
unifying threads, such as Pratt's Chu space idea?

Kindest regards,

Stephen

----- Original Message -----
From: "Stathis Papaioannou" <stathispa...@hotmail.com>

To: <step...@charter.net>; <mar...@ulb.ac.be>
Cc: <everyth...@eskimo.com>
Sent: Monday, May 16, 2005 2:36 AM
Subject: Re: Olympia's Beautiful and Profound Mind

Stathis Papaioannou

May 16, 2005, 9:12:53 PM5/16/05
to step...@charter.net, mar...@ulb.ac.be, everyth...@eskimo.com
Dear Stephen,

I have to confess that the mathematical intricacies of Chu spaces are quite
beyond me. However, this passage appears in the introduction to the cited
article:

"We propose to reduce complex mind-body interaction to the elementary
interactions of their constituents. Events of the body interact with states
of the mind. This interaction has two dual forms. A physical event a in the
body A impresses its occurrence on a mental state x of the mind X, written
a=|x. Dually, in state x the mind infers the prior occurrence of event a,
written x |= a."
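For concreteness, the quoted pairing can be sketched as a small program.
This is only my own toy rendering (all names here are invented, and it works
over the two-element set K = {0,1} only), not Pratt's actual construction:

```python
# A toy Chu space over K = {0,1}: events A (rows), states X (columns),
# and a matrix r with r[(a, x)] = 1 iff event a has impressed itself on
# state x. Purely illustrative; names and structure are not Pratt's.

class ChuSpace:
    def __init__(self, events, states, r):
        self.events = events          # "body": list of event labels
        self.states = states          # "mind": list of state labels
        self.r = r                    # dict: (event, state) -> 0 or 1

    def impresses(self, a, x):
        """a =| x : event a impresses its occurrence on state x."""
        return self.r[(a, x)] == 1

    def infers(self, x, a):
        """x |= a : in state x the mind infers the prior occurrence of a.
        By duality this is the same matrix entry read the other way."""
        return self.r[(a, x)] == 1

# two events, two mental states
cs = ChuSpace(
    events=["a1", "a2"],
    states=["x1", "x2"],
    r={("a1", "x1"): 1, ("a1", "x2"): 0,
       ("a2", "x1"): 1, ("a2", "x2"): 1},
)

print(cs.impresses("a1", "x1"))   # a1 =| x1
print(cs.infers("x1", "a1"))      # x1 |= a1 -- the dual reading agrees
```

The point of the sketch is only that the two "dual forms" are one relation
read in two directions, which is what makes the duality cheap to state.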

Tell me if I have completely misconstrued it, but it seems that this is
still discussing how the two entities (mind and body) are interacting, and
differs only in detail from the 17th century solutions. *Why* do you need to
"prove the necessity of epiphenomena", and *how* is such a "proof" providing
any more information than the simple observation that the epiphenomena
exist? You could go mad seeing dualism everywhere. If I wave my hand in a
circular pattern, we have (a) the physical action of moving my hand in a
circular pattern, and (b) the circular pattern. Arguably, these are two
completely different things. One is an event in the physical world, and the
other is a theoretical or mathematical abstraction. How is it that these two
completely different entities interact? How can you prove that the physical
action of moving my hand in a particular way necessitates the epiphenomenon
of the circular pattern? And if you manage to explain that one, how can you
explain the experience of being-a-circular-pattern from the inside, or,
conversely, the non-experience of being-a-circular-pattern from the inside,
whichever is the case? There comes a point where theory and explanation
make us more confused and no better informed than we were before.

--Stathis


Stephen Paul King

May 16, 2005, 10:38:17 PM5/16/05
to Stathis Papaioannou, mar...@ulb.ac.be, everyth...@eskimo.com, ti...@yahoogroups.com
Dear Stathis,

Thank you for reading the paper in its entirety. Pratt's idea is very
subtle, and the form of dualism that he is explaining is very different from
Descartes'. Pratt is considering "Mind" and "Body" as "process", not
"substance". It is the difference between a "Being" based paradigm and a
"Becoming" based paradigm.

Please continue and take a look at some of the other papers
(http://chu.stanford.edu/ ) and notice how Category theory is being used,
notice the contravariant morphisms, notice how non-well founded logic is
being used. BTW, non-well founded logics handle the circularity that you
appear to protest. This circularity is also a key feature that has to be
explained in models of consciousness because, at a minimum, we have to
explain "self-awareness"!

Pratt doesn't seem to have addressed the key notion of forgetfulness in
the previously referenced paper, which is necessary to deal with
irreversibility, but I am sure that that will be dealt with soon enough.

The "interaction" between the "hand" and the "abstraction" in your
example [or better the information representing the physical hand] is
obvious. It is not an "interaction", it is an identity in the same way that
there is an identification between a physical object and the class of
representations that it can have, be they "bitstrings" or whatever!
Interactions, as Pratt explains, need to be explained between the "bodies"
and between the "minds". How is it that my mind can interact with yours, or
to put it into COMP terms, how does one bitstring interact with another
without some physical instantiation?

The interaction problem becomes even more pronounced when we start
thinking about QM systems! If you look at the formalism carefully, it is
obvious that QM systems are separate from each other in such a way that even
the notion of "substance exchange" between them will simply not work. QM
systems are exactly like Leibniz' monads: "windowless".

Given this fact, how do we propose to explain interactions in general and
communication between observers in particular? We cannot have theories of
our universe of experience that only include a single observer! I know well
about particle physics theories talking about vector bosons being exchanged
but if you look carefully at the QM system involved, the vector bosons are
part of the single QM system being considered and not a separate system.
There are technical nuances involved here to be sure, but these ideas are
not being advanced without careful consideration. I understand all too well
the importance of Occam's Razor. ;-)

Lee Corbin

May 16, 2005, 10:52:25 PM5/16/05
to everyth...@eskimo.com
Stathis wrote

> > However, what reason is there to
> > extrapolate from this that there must be some special explanation for the
> > interaction between body and mind? What do you lose if you simply accept,
> > as per Gilbert Ryle, that the mind is what the brain does?

Stephen answers:

> In a phrase, I would lose choice. What you are asking me is to give up

> any hope of understanding how my sense of being-in-the-world is related to
> any other phenomena in the world of experience and instead to just blindly
> believe some claim.

I think that it comes down to a perceived need of an "answer".
Some of us require (demand) an explanation for the fact that
mathematics is so tightly coupled to the physical universe;
others need that, but more: they require an explanation beyond
the existence of mere abstract mathematical patterns; yet others
want that too, but more: they demand answers beyond naive
materialism for the existence of 1st person experiences. Yet
others go way beyond, and demand a Force to be responsible for
everything, and others beyond that, that this Force be personal
and omnipotent (how else to explain certain feelings?)

I'm scared sort of bad, that my own case may turn out to be
that I'm homozygous recessive in the religiosity gene, and
the people I mentioned first on the list are entirely heterozygous,
and the people I mentioned last are homozygous dominant! :-)

If not a religiosity gene, then some manner of unshakable almost
innate predisposition to be satisfied or unsatisfied with these
explanations. For the life of me, I can't see (nor have ever been
able to see) what the big deal is about 1st person experiences.
Of course I have 'em. So does everyone. It's as though people
expect 3rd person explanations for 1st person experiences, and I
think it is in principle impossible: i.e., that *no* possible
string of words in the English language could ever establish 1st
person "truths" by 3rd person descriptions of what physically is.

Stathis went on:

> > Otherwise, you
> > could seek a special explanation for an electronic calculator's
> > matter/mathematics dualism, or a falling stone's matter/energy dualism, or
> > any number of similar examples. Occam's razor would suggest that such
> > complications are unnecessary.

Another homozygous recessive, like me, eh? :-)

I won't agree that Occam's razor really can be deployed here.
It's just that you can't *see* the problem they're seeing.

> Pratt's disdain follows from the obvious failures of other models. It
> does not take a logician or mathematician or philosopher of unbelievable IQ
> to see that the models of monism that have been advanced have a fatal flaw:

> the inability to prove the necessity of epiphenomena. Maybe Bruno's theory

> will solve this, I hold out hope that it does; but meanwhile, why can't we
> consider and debate alternatives that offer wide-ranging explanations and
> unifying threads, such as Pratt's Chu space idea?

I just have to say that I have utterly no sense that anything
here needs explanation.

Now I am not kidding: Once, when I was exasperated, I joked with
a couple of friends and half-persuaded them that I was not
conscious. Do we really know for sure that Stephen, Lee, and
Stathis all have the same kind of consciousness? Could it be
that whether or not you see a problem depends on the extent
and kind of consciousness you have? Could we be making a fatal
mistake by assuming that everyone here has the same kind of
consciousness?

Lee

Jonathan Colvin

May 17, 2005, 1:35:48 AM5/17/05
to everyth...@eskimo.com

> > Stephen Paul King wrote: Pratt's disdain follows from the obvious
> > failures of other models.
> > It does not take a logician or mathematician or philosopher of
> > unbelievable IQ to see that the models of monism that have been
> > advanced have a fatal flaw: the inability to prove the necessity of
> > epiphenomena. Maybe Bruno's theory will solve this, I hold out hope
> > that it does; but meanwhile, why can't we consider and debate
> > alternatives that offer wide-ranging explanations and unifying
> > threads, such as Pratt's Chu space idea?
>
> I just have to say that I have utterly no sense that anything
> here needs explanation.

I have to agree. Perhaps it is because I'm a Dennett devotee, brainwashed
into a full denial of qualia/dualism, but I've yet to see any coherent
argument as to why there is anything about consciousness that needs
explaining. The only importance I see for consciousness is its role in
self-selection per Bostrom.

Jonathan Colvin

Stathis Papaioannou

May 17, 2005, 3:43:42 AM5/17/05
to jco...@ican.net, everyth...@eskimo.com
I agree with Lee's and Jonathan's comments, except that I think there is
something unusual about first person experience/ qualia/ consciousness in
that there is an aspect that cannot be communicated unless you experience it
(a blind man cannot know what it is like to see, no matter how much he
learns about the process of vision). Let me use the analogy of billiard
balls and Newtonian mechanics. Everything that billiard balls do by
themselves and with each other can be fully explained by the laws of
physics. Moreover, it can all be modelled by a computer program. But in
addition, there is the state of being-a-billiard-ball, which is something
very strange and cannot be communicated to non-billiard balls, because it
makes absolutely no difference to what is observed about them. It is not
clear if this aspect of billiard ball "experience" is duplicated by the
computer program, precisely because it makes no observable difference: you
have to be the simulated billiard ball to know.
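To make the third-person half of the analogy concrete, here is a toy sketch
of my own (1-D, equal masses, perfectly elastic collisions; none of it is
anyone's actual model) in which everything the balls *do* is computed, while
nothing in the code touches what it is like to *be* one of them:

```python
# Toy 1-D Newtonian billiards: equal-mass, perfectly elastic collisions.
# For equal masses, an elastic head-on collision simply swaps velocities.
# Illustrative sketch only.

def step(positions, velocities, dt=0.01, radius=0.5):
    """Advance every ball by dt, swapping velocities on contact."""
    n = len(positions)
    positions = [p + v * dt for p, v in zip(positions, velocities)]
    velocities = list(velocities)
    for i in range(n - 1):
        # balls stay ordered on the line; i and i+1 are neighbours
        closing = velocities[i] > velocities[i + 1]
        if positions[i + 1] - positions[i] <= 2 * radius and closing:
            velocities[i], velocities[i + 1] = velocities[i + 1], velocities[i]
    return positions, velocities

# ball 0 moves right at speed 1 toward a stationary ball 1
pos, vel = [0.0, 2.0], [1.0, 0.0]
for _ in range(200):
    pos, vel = step(pos, vel)

print(vel)  # after impact the moving ball stops, the other carries on
```

Momentum and energy come out conserved, the trajectories are fully
determined, and yet nothing here says anything about "being" ball 0, which
is exactly the gap being discussed.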

Before someone says that billiard balls are not complex enough to have an
internal life, I would point out that neither is there any way to deduce a
priori that humans have conscious experiences. You have to actually be a
human to know this.

You don't need to postulate a special mechanism whereby mind interacts with
matter. The laws of physics explain the workings of the brain, and conscious
experience is just the strange, irreducible effect of this as seen from the
inside.

--Stathis Papaioannou


Jonathan Colvin

May 17, 2005, 4:01:14 AM5/17/05
to everyth...@eskimo.com

>Stathis: I agree with Lee's and Jonathan's comments, except that I
> think there is something unusual about first person
> experience/ qualia/ consciousness in that there is an aspect
> that cannot be communicated unless you experience it (a blind
> man cannot know what it is like to see, no matter how much he
> learns about the process of vision). Let me use the analogy
> of billiard balls and Newtonian mechanics. Everything that
> billiard balls do by themselves and with each other can be
> fully explained by the laws of physics. Moreover, it can all
> be modelled by a computer program. But in addition, there is
> the state of being-a-billiard-ball, which is something very
> strange and cannot be communicated to non-billiard balls,
> because it makes absolutely no difference to what is observed
> about them. It is not clear if this aspect of billiard ball
> "experience" is duplicated by the computer program, precisely
> because it makes no observable difference: you have to be the
> simulated billiard ball to know.

But is this "state of being a billiard ball" any different than simple
existence? What in particular is unusual about first person qualia? We might
simply say that a *description* of a billiard ball is not the same as *a
billiard ball* (a description of a billiard ball can not bruise me like a
real one can); in the same way, a description of a mind is not the same as a
mind; but what is unusual about that? It is not strange to differentiate
between a real object and a description of such, so I don't see that there
is anything any more unusual about first person experience. Is it any
stranger that a blind man can not see, than that a description of a billiard
ball's properties (weight, diameter, colour etc) can not bruise me?

Jonathan Colvin

Bruno Marchal

May 17, 2005, 5:55:50 AM5/17/05
to EverythingList list, FoR

Le 17-mai-05, à 09:06, Stathis Papaioannou a écrit :

> I agree with Lee's and Jonathan's comments, except that I think there
> is something unusual about first person experience/ qualia/
> consciousness in that there is an aspect that cannot be communicated
> unless you experience it (a blind man cannot know what it is like to
> see, no matter how much he learns about the process of vision). Let me
> use the analogy of billiard balls and Newtonian mechanics. Everything
> that billiard balls do by themselves and with each other can be fully
> explained by the laws of physics. Moreover, it can all be modelled by
> a computer program. But in addition, there is the state of
> being-a-billiard-ball, which is something very strange and cannot be
> communicated to non-billiard balls, because it makes absolutely no
> difference to what is observed about them. It is not clear if this
> aspect of billiard ball "experience" is duplicated by the computer
> program, precisely because it makes no observable difference: you have
> to be the simulated billiard ball to know.
>
> Before someone says that billiard balls are not complex enough to have
> an internal life, I would point out that neither is there any way to
> deduce a priori that humans have conscious experiences. You have to
> actually be a human to know this.
>
> You don't need to postulate a special mechanism whereby mind interacts
> with matter. The laws of physics explain the workings of the brain,
> and conscious experience is just the strange, irreducible effect of
> this as seen from the inside.


I agree. Let me briefly try to explain why I think that, although the
first person cannot be reduced to any pure third person notion, it is
still possible to explain in a third person way why the first person
exists and why it cannot be so reduced.

I will consider a machine M. The machine is supposed to be a sort of
mathematician, or a theorem proving machine in arithmetic. I suppose
the machine is sound: this means that if the machine proves some
proposition p, then p is true. I will also suppose that the machine is
programmed so as to assert all the propositions she can prove, in some
order. So one day she proves 1+1=2. Another day she proves 17 is prime,
and so on. So I will use "M proves" and "M can prove" equivalently.

As everyone knows or should know since Goedel 1931, it is possible to
represent the *provability by the machine M* in the language of the
machine (here the language is first order arithmetic, but the details
are irrelevant in this short explanation). I will write Bp for
BEW(GN(p)), which is the representation of "M proves p" in the language
of the machine. BEW(GN(p)) really means that there is a number which
codes a proof of the formula coded by GN(p). GN(p) is the Godel
number of p, which is the traditional encoding of p in arithmetic. BEW
is for beweisbar: provable in German.


It has been proved by Hilbert, Bernays and Loeb that such a machine
verifies the following conditions, for any p representing an
arithmetical proposition:

1) If M proves p then M proves Bp (a sort of introspective ability: if
the machine can prove p, the machine can prove that the machine can
prove p)

2) M proves (Bp & B(p->q)) -> Bq (the machine can prove that if she
proves p and if she proves p->q, then she proves q). This means
that the machine can prove that she follows the modus ponens inference
rule (which is part of arithmetic). It is a second introspective
ability.

3) The machine M proves "1)", i.e. M proves Bp -> BBp. i.e. the machine
proves that if she proves p, then she can prove that she can prove p.


If the machine is sound, the machine is necessarily consistent. That
means the machine will not prove 1 + 1 = 3, or any false arithmetical
statement. To make things easier, I suppose there is a constant false
in the machine language, written f. I could have used the proposition
1+1=3 instead, but "f" is shorter.

Saying that M is consistent is equivalent to saying that (NOT Bf) is
true about M, where B is the provability predicate of the machine (I
will not keep repeating this).

Goedel's second incompleteness theorem asserts that if M is consistent
then M cannot prove that M is consistent. This can be translated in the
language of M: (NOT Bf) -> (NOT B (NOT Bf)).
We have two things: (NOT Bf) -> (NOT B (NOT Bf)) is true *about* the
machine, but also the machine is able to prove it about itself:
(NOT Bf) -> (NOT B (NOT Bf)) can be proved by the machine.
Of course from this you know that (NOT Bf) is true about the machine,
but that the machine cannot prove it.

I hope you are able to verify whether a proposition of classical
propositional logic is a tautology or not. In particular, NOT A has
the same truth value as A -> f, so consistency (NOT Bf) is equivalent
to (Bf -> f).
So if the machine is sound, and thus consistent, (Bf -> f) is true about
the machine, but cannot be proved by the machine.
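Anyone who wants to check this equivalence mechanically can do it with a
throwaway truth table. The snippet below is my own illustrative sketch,
treating f as the constant false:

```python
# Brute-force truth-table check that NOT A has the same truth value
# as A -> f, where f is the constant false. Purely illustrative.

def implies(p, q):
    """Classical material implication."""
    return (not p) or q

f = False

# NOT A and (A -> f) agree on every valuation of A
for A in (True, False):
    assert (not A) == implies(A, f)

# Hence, reading "Bf" as a single atom, consistency NOT Bf is
# equivalent to Bf -> f on every valuation as well.
for Bf in (True, False):
    assert (not Bf) == implies(Bf, f)

print("NOT A and A -> f agree on all valuations")
```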

We see that in general, for any p, Bp -> p is true about the machine
(it is the soundness of the machine), but the machine cannot prove it
for any p, in particular she cannot prove Bf -> f, which is equivalent
to its own consistency.

This means that the sentence (Bp & p) is equivalent to Bp, for any p,
but the machine cannot prove this for any p.
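Treating Bp and p as independent propositional atoms, the underlying
tautology (Bp -> p) -> (Bp <-> (Bp & p)) can be checked by brute force.
Again this is just an illustrative sketch of mine, abstracting away
everything interesting about B:

```python
# Check the tautology (Bp -> p) -> (Bp <-> (Bp & p)) by brute force,
# treating Bp and p as independent propositional atoms. Illustrative only.

from itertools import product

def implies(a, b):
    """Classical material implication."""
    return (not a) or b

for Bp, p in product((True, False), repeat=2):
    soundness = implies(Bp, p)          # Bp -> p
    equivalence = (Bp == (Bp and p))    # Bp <-> (Bp & p)
    # wherever soundness holds, Bp and (Bp & p) coincide
    assert implies(soundness, equivalence)

print("(Bp -> p) -> (Bp <-> (Bp & p)) is a tautology")
```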

So if we define a new sort of proof of p by Cp = (Bp & p), although
exactly the same arithmetical p will be proved, the very logic of Cp
will differ from the logic of Bp. (Bp <-> Cp) is true about the
machine but not provable by the machine. Actually NOT B(Bp <-> Cp) and
NOT C(Bp <-> Cp) are also true of the machine (and also not provable
by the machine).

There is thus a big gap between the truth on the machine, and what the
machine can prove about itself.

B can be used to model a form of the scientist's self-referential belief,
or third person self-reference. It is a belief notion due to the fact
that the machine cannot in general prove Bp -> p, as is required for
a knowledge notion. But the machine can prove Cp -> p (given that Cp
is Bp & p).
Actually it can be shown that Cp obeys the traditional axioms for
knowledge. C can be used to model a form of machine first person
knowledge. With comp, it is arguably not even a "modelization" (but I
will not develop this point here).

Important Remark: B can be translated in the language of the machine.
It is a third person self-reference: the machine talks about itself
through a third person description of itself (here = Goedel number):
BEW(GN(p)). But C cannot be translated in the language of the machine.
You will never find an equivalent of BEW for C. The simplest attempt
would be Cp = BEW(GN(p)) & TRUE(GN(p)), but by Tarski's theorem TRUE
itself cannot be translated in the language of the machine. So the C
logic is self-referential insofar as the machine does not attempt to
give a name or a third person description to the owner of the
knowledge. The definition Cp = Bp & p is what I call the Theaetetus'
definition of knowledge (knowledge as true belief).

It can be shown that Cp gives rise to a logic of subjective
antisymmetrical time equivalent to Brouwer's theory of consciousness.
It is the modal logic S4Grz.

Axioms:
t: Cp -> p
k: Cp & C(p->q) -> Cq
4: Cp -> CCp
Grz: C(C(p->Cp)->p)->p (Grz is responsible for the antisymmetry of
the modal accessibility relation among worlds, for those who remember
what I said on Kripke semantics).

Rules:
Modus Ponens: if the machine knows p, and knows "p->q", the machine
knows q.
Necessitation: if the machine knows p, the machine knows that it knows
p.
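For those who remember the Kripke semantics, the axioms t and 4 can be
checked mechanically on a tiny reflexive, transitive frame. The code below
is entirely my own sketch (the frame, the tuple encoding of formulas and
the function names are all invented for illustration); it reads Cp as
truth of p in every accessible world:

```python
# Minimal Kripke semantics for the knowledge operator C on a reflexive,
# transitive (S4-style) two-world frame. Only axioms t and 4 are checked
# here; Grz's antisymmetry condition is a property of the frame, not of
# this code. Illustrative sketch only.

WORLDS = (0, 1)
R = {(0, 0), (0, 1), (1, 1)}          # reflexive and transitive

def holds(w, formula, val):
    """Evaluate a tuple-encoded formula at world w.
    val maps each atom name to the set of worlds where it is true."""
    op = formula[0]
    if op == "atom":
        return w in val[formula[1]]
    if op == "not":
        return not holds(w, formula[1], val)
    if op == "imp":
        return (not holds(w, formula[1], val)) or holds(w, formula[2], val)
    if op == "C":                     # Cp: p holds at every accessible world
        return all(holds(v, formula[1], val) for v in WORLDS if (w, v) in R)
    raise ValueError(op)

def valid(formula):
    """True if the formula holds at every world under every valuation of p."""
    extensions = [set(), {0}, {1}, {0, 1}]
    return all(holds(w, formula, {"p": ext})
               for ext in extensions for w in WORLDS)

p = ("atom", "p")
axiom_t = ("imp", ("C", p), p)                    # Cp -> p   (reflexivity)
axiom_4 = ("imp", ("C", p), ("C", ("C", p)))      # Cp -> CCp (transitivity)

print(valid(axiom_t), valid(axiom_4))
```

As a sanity check, the converse p -> Cp comes out invalid on the same
frame, which is as it should be: knowing is stronger than being true.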

I hope this gives an idea of why we can "meta-define" the knower (by
linking it to the truth like Theaetetus), and explain why it is equivalent
to the sound believer, although neither the believer nor the knower can be
aware (believe or know) of that equivalence.

Cp is just one possible Theaetetical variant of the arithmetisable Bp.

I sum up and anticipate:

Bp = the scientist self-referential believer, (3-person)
Cp = the knower (the unnameable self-referential 1-person)
Dp = Bp & NOT (B NOT p) = the conjectural believer
(Brian, this one is close to the "observer" I promise to explain)
Ep = Bp & NOT (B NOT p) & p = the conjectural knower (and
this one too, it is almost the feeler or smeller ... :)

We have that Bp <-> Cp <-> Dp <-> Ep is true of the machine, but none of
those equivalences is believable, or knowable, ...

Now, if you restrict the arithmetical p on those p accessible by the
Universal Dovetailer (= COMP!) : you get more:

you get: p <-> Bp <-> Cp <-> Dp <-> Ep, but again, by
incompleteness, none of those equivalences is believable or knowable
... by the machine.

Now, with G and G* (*), all those logics split into a communicable part
and a non-communicable part, and I could use them to capture the true
sentences concerning the conjectural knower and the conjectural
believer on the states accessible by the UD. Following the UDA, they
should give the logic of *measure one* on the consistent extensions of
the machine, as conjecturally believed or known by them, and it is here
that I got quantum logics (but that was not the main point here).

To sum up: Godel's theorems make it possible to explain why no
machine can know (1-person) that it is a "machine" (obviously
3-person describable). Machines can never know they are machines, but
they can bet on it.

(*) G is the logic of Bp and NOT (B (NOT p)) as being provable by the
machine.
G* is the logic of Bp and NOT (B (NOT p)) as being true about the
machine.
It is the gap between G and G* which makes inescapable all those
nuances between provability, knowability, probability 1, etc...

I recall: Smullyan's FOREVER UNDECIDED = excellent introduction to the
logic G (the logic of Bp)
Boolos' THE LOGIC OF PROVABILITY = excellent treatise on
the logic G and G* (called GL and GLS in that book).

Bruno


http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

May 17, 2005, 6:30:01 AM5/17/05
to Jonathan Colvin, everyth...@eskimo.com

Le 17-mai-05, à 09:56, Jonathan Colvin a écrit :

> Is it any
> stranger that a blind man can not see, than that a description of a
> billiard
> ball's properties (weight, diameter, colour etc) can not bruise me?


It is different with comp, because a description of you + a description
of a billiard ball, done at some right level, can bruise you.

Bruno

http://iridia.ulb.ac.be/~marchal/


Stathis Papaioannou

unread,
May 17, 2005, 9:19:19 AM5/17/05
to jco...@ican.net, everyth...@eskimo.com
Jonathan,

Your post suggests to me a neat way to define what is special about first
person experience: it is the gap in information between what can be known
from a description of an object and what can be known from being the object
itself. This is a personal thing, but I think it is at least a little
surprising that there should be such a gap, and I would never have guessed
it had I not been conscious myself. I don't think it is a good idea to simply
ignore this gap, but on the other hand, I don't think there is any need to
postulate mind/body dualism and try to explain how the two interact. Aside
from this one difference I have focussed on, first person experience is just
something that occurs in the normal course of events in the physical
universe.

--Stathis Papaioannou


Ti Bo

unread,
May 17, 2005, 9:42:51 AM5/17/05
to Stathis Papaioannou, jco...@ican.net, everyth...@eskimo.com

Hi All,

I haven't been on the ball with this discussion, but it ties in
very neatly with the discussion we have been having here in Linz at
the Data Ecologies workshop. The aspect that seems to be needed is the
idea of an "intrinsic observer" - this idea seems to have been
introduced
by Tom Toffoli and followed up on in many ways by Karl Svozil:
http://tph.tuwien.ac.at/~svozil/publ/maryland.htm

I hope I have something more to say about it later today...

cheers,

tim

----- Tim Boykett TIME'S UP::Research Department
\ / Industriezeile 33b A-4020 Linz Austria
X +43-732-787804(ph) +43-732-7878043(fx)
/ \ t...@timesup.org http://www.timesup.org
-----
http://www.timesup.org/fieldresearch/setups/index.html

Jonathan Colvin

unread,
May 17, 2005, 3:49:29 PM5/17/05
to everyth...@eskimo.com

> Stathis: Your post suggests to me a neat way to define what is special
> about first person experience: it is the gap in information
> between what can be known from a description of an object and
> what can be known from being the object itself.

But how can "being an object" provide any extra information? I don't see
that information or knowledge has much to do with it. How can "being an
apple" provide any extra information about the apple? Obviously there is a
difference between *an apple* and *a description of an apple*, in the same
way there is a difference between *a person* and *a description of a
person*, but the difference is one of physical existence, not information.

Jonathan Colvin


Stephen Paul King

unread,
May 17, 2005, 5:06:48 PM5/17/05
to everyth...@eskimo.com
Dear Bruno,

Your claim reminds me of the scene in the movie Matrix: Reloaded where
Neo deactivates some Sentinels all the while believing that he is Unplugged.
This leads to speculations about "matrix in a matrix", etc.

http://www.thematrix101.com/reloaded/meaning.php#mwam

There is still one question that needs to be answered: what is it that
gives rise to the differentiation necessary for one "description" to
"bruise" (or cause any kind of change in) another "description", if we
disallow some thing that acts as an "interface" between the two.

What forms the "interface" in your theory?

http://arxiv.org/PS_cache/quant-ph/pdf/0001/0001064.pdf

Stephen

Jonathan Colvin

unread,
May 17, 2005, 5:39:39 PM5/17/05
to everyth...@eskimo.com
Bruno's claim is a straightforward consequence of Strong AI; that a
simulated mind would behave in an identical way to a "real" one, and would
experience the same "qualia". There's no special "interface" required here;
the simulated mind and the simulated billiard ball are in the same "world",
ie. at the same level of simulation. As far as the simulated person is
concerned, the billiard ball is "real". Of course, the simulation can also
contain a simulation of the billiard ball (2nd level simulation), which will
equally be unable to bruise the simulated person, and so on ad infinitum. If
we take Bostrom's simulation argument seriously, we all exist in some Nth
level simulation, while our simulated billiard ball exists at the (N+1)th
level.

Jonathan Colvin



> Stephen: Your claim reminds me of the scene in the movie Matrix:

Stephen Paul King

unread,
May 17, 2005, 6:27:17 PM5/17/05
to Jonathan Colvin, everyth...@eskimo.com
Dear Jonathan,

I am trying to address the point of how we consider the interactions and
communications between minds, simulated or otherwise. I do not question the
idea that simulated "minds" would be indistinguishable from "real" minds,
especially from a 1st person view. I am asking how such minds can
interact such that notions of "cause and effect" and, say, "signal to
noise ratios" are coherent notions.

Additionally, I still would like to understand how we can continue to
wonder about computations without ever considering the associated resource
costs. We cannot tacitly assume abstract perpetual motion machines to
power our abstract machines, or can we?

Stathis Papaioannou

unread,
May 17, 2005, 7:40:04 PM5/17/05
to jco...@ican.net, everyth...@eskimo.com
Jonathan,

Can you honestly say that there is no more information available to a
sighted person about the experience of vision than to a blind person? That
if a blind person who happens to be a scientific expert on vision
miraculously develops, for the first time in his life, the ability to see,
he hasn't really gained anything?

--Stathis Papaioannou


Lee Corbin

unread,
May 17, 2005, 11:43:07 PM5/17/05
to EverythingList
Jonathan contrasts descriptions and what the descriptions describe:

> > Stathis: Your post suggests to me a neat way to define what is special
> > about first person experience: it is the gap in information
> > between what can be known from a description of an object and
> > what can be known from being the object itself.
>
> But how can "being an object" provide any extra information? I don't see
> that information or knowledge has much to do with it. How can "being an
> apple" provide any extra information about the apple?

Let's remember some naive answers here. First, for a fixed physical
object, there exist infinitely many descriptions. It's a common
belief that beyond a certain amount of accuracy, differences don't
really matter. For example, one ought to be quite happy to teleport
even if there is one atomic error for every 10^20 atoms.

Second, a common interpretation of QM asserts that beyond a certain
accuracy, there is *no* additional information to be had whatsoever.
That is, that there exists some finite bit string that contains
*all* an object's information (cf. Bekenstein bound).

Still, the naive answer is that a description (or even a set of
descriptions) of a physical object is different from the physical
object itself: a physical object is a process, and a set of
descriptions is merely a set of bits frozen in time (and here
we are back again, you know where).

However, I hold with these "naive" answers, as do a lot of people.
And so therefore I proceed to answer the above question thusly:
"Being an apple" provides *no* information beyond that which would
be provided by a sufficiently rich description. Even if an
emulation of a person appreciating the sublime, or agonizing to
a truly horrific extent, or whatever----no information obtains
anywhere that is not in principle available to the experimenters,
i.e., available from the third-person.

You could make the experimenter *hurt*, and then say, "now you
know what it feels like", and given today's techniques, that
might very well be true. But this is only a limitation on what
is known and knowable today; it says nothing about what might be
knowable about a human subject of 20th century complexity to
entities living a thousand years from now.

(We ignore the possible effects on the experimenter's value
system, or possible effects on his incentives: we are just
talking about information as bit-strings, here.)

> Obviously there is a difference between *an apple* and *a
> description of an apple*, in the same way there is a difference
> between *a person* and *a description of a person*, but the
> difference is one of physical existence, not information.

Yeah, that's the way it seems to me too.

Lee

Lee Corbin

unread,
May 18, 2005, 12:00:31 AM5/18/05
to EverythingList
Stathis wrote

> [Here is] a neat way to define what is special about first

> person experience: it is the gap in information between what can be known
> from a description of an object and what can be known from being the object
> itself. This is a personal thing, but I think it is at least a little
> surprising that there should be such a gap, and would never have guessed had
> I not been conscious myself.

"Had you not been conscious" yourself? Do you think that this
is at all possible?

That sufficiently complex entities capable of making their way
in the world ought to be conscious seems very natural to me,
for some reason. When I examine the animal world, for instance,
and see small creatures chasing one another, I would expect
them to be making maps of their environment; I would expect
them to have feelings; and I would expect at some level of
development that their maps would include a bit of self-reference.
(Just a tad, for the *lower* life forms.)

Perhaps I am hard-wired to project my own thoughts and feelings
onto others, including animals. Solipsism *really* seems unscientific
somehow. I myself have likes and dislikes, and so why shouldn't everyone
and everything else?

> I don't think it is a good idea to simply ignore this gap,

Well, I'd agree to call it a *difference*: as I said in another
post, the way some of us see it is that there isn't an information
gap.

> but on the other hand, I don't think there is any need to postulate
> mind/body dualism and try to explain how the two interact.

Well, I think that everyone here agrees with that. But of course,
you are thinking about the resurgence of dualism. I probably agree
with you.

> Aside from this one difference I have focused on, first person


> experience is just something that occurs in the normal course of
> events in the physical universe.

Well said. I would like to quote that. I do also read that to
include consciousness, as I'm sure you meant. I would also
read it to include all the "gaps".

Lee

Stathis Papaioannou

unread,
May 18, 2005, 2:29:56 AM5/18/05
to lco...@tsoft.com, everyth...@eskimo.com
I was using the term "information" loosely, to include what is commonly
termed qualia, subjective experience etc. I agree that if a physical system
is fully specified, then that is all you need in order to duplicate or
emulate the system. The new system will do everything the original one did,
including have conscious experiences. It's worth stressing this point again:
you don't need any special, non-physical information to emulate or duplicate
a conscious system; you don't need God to provide it with a soul, you don't
need to purchase a mind-body interface kit, you don't need to meditate and
wave quartz crystals around, and you don't need to have 1st person knowledge
of its subjective experiences. All you need is a few kilograms of raw
materials, a molecular assembler mechanism, and the data which indicates
where each bit goes. Once the job is finished, you automatically have a
system which talks, eats, and is conscious. Psychology and biology have been
reduced to physics and chemistry. Consciousness has been shown to be just
an emergent phenomenon in a particular type of biological computer. Agree so
far? OK: having said all that, and assuming at this point that we know the
position and function of every atom in this newly created system, I *still*
would wonder what it feels like to actually *be* this system. My curiosity
could only be satisfied if I were in fact the duplicated system myself;
perhaps this could be achieved if I "became one" with the
new system by direct neural interface. I don't have to go to such lengths to
learn about the new system's mass, volume, behaviour, or any other property,
and in *this* consists the essential difference between 1st person and 3rd
person experience. You can minimise it and say it doesn't really make much
practical difference, but I don't think you can deny it.

--Stathis Papaioannou


Ti Bo

unread,
May 18, 2005, 2:51:27 AM5/18/05
to Stathis Papaioannou, lco...@tsoft.com, everyth...@eskimo.com

This fits in well: the philosopher of consciousness and mathematician,
David Chalmers, coined the phrase:
"Experience is information from the inside; Physics is information from
the outside."

Which I quite like. It's in his book "The Conscious Mind: towards a
fundamental theory" which is heavy going, but seems to have some really
good ideas.

okidokee,

tim

Bruno Marchal

unread,
May 18, 2005, 2:51:51 AM5/18/05
to Stathis Papaioannou, lco...@tsoft.com, everyth...@eskimo.com
Very well said. Very good description of the 1/3 distinction. Except
perhaps that I believe it does make a *practical* difference. If you
are duplicated, and one of the two duplicates is tortured, it is the
difference between I suffer and he suffers (or looks to be suffering). No
amount of compassion for another can evacuate that difference. It is
the difference between I *believe* he suffers, and I *know* I suffer.
And there is no mystery: even a machine as dumb as a theorem prover can
discover that difference, which (if I'm correct) is related to the
inescapable difference, discovered by Gödel, between proof (which
concerns 3-person description) and truth (which really is a pure
1-person notion), as I tried a little bit to explain yesterday (the
difference in logic between Bp and Bp & p).

Bruno

On 18-May-05, at 08:26, Stathis Papaioannou wrote:

http://iridia.ulb.ac.be/~marchal/


Lee Corbin

unread,
May 18, 2005, 2:54:21 AM5/18/05
to everyth...@eskimo.com
Jonathan writes

> Bruno's claim is a straightforward consequence of Strong AI; that a
> simulated mind would behave in an identical way to a "real" one, and would
> experience the same "qualia". There's no special "interface" required here;
> the simulated mind and the simulated billiard ball are in the same "world",
> ie. at the same level of simulation. As far as the simulated person is
> concerned, the billiard ball is "real". Of course, the simulation can also
> contain a simulation of the billiard ball (2nd level simulation), which will
> equally be unable to bruise the simulated person, and so on ad infinitum. If
> we take Bostrom's simulation argument seriously, we all exist in some Nth
> level simulation, while our simulated billiard ball exists at the (N+1)th
> level.

Now just to keep our bookkeeping accurate, Bruno Marchal's claims
far exceed what you have written.

What you have described is actually a pretty standard view among
most of Turing's followers. The MATRIX movie was a rather latecomer
to ideas that had been floating around long before the mid-eighties
when Vernor Vinge wrote "True Names". What Vinge appears
to have been the first to do is to think of *commands* (that one
gives while emulated) as *spells*, thus bringing in the sword &
sorcery crowd. Damien Broderick claims to have coined the term
"Virtual Reality" in the 1970's, and I think I saw a reference
where he had.

The moment that *anyone* suspected that Asimov's Robbie the Robot
might be conscious---might be a kind of being whose feelings could
be hurt---one immediately had MATRIX-like scenarios. How could one
fail to? And when Robert Sheckley wrote "Human Man's Burden" in the
late 50's, the robots clearly felt. Wasn't it an immediate
corollary that all their sensory input could be controlled? I know
that I had my own ideas just like this by 1966. And I'd be willing
to bet that Sheckley was *not* the first.

No, the important claims that Bruno makes go far beyond. He attempts
to derive physics from the theory of computation (i.e., recursive
functions, effective computability, incompleteness, and unsolvability).
His is also one set of the claims, hypotheses, and conjectures that
attempt to reduce physics to a completely timeless abstract world.
Julian Barbour, in The End of Time, gave, as you probably know, one
of the most brilliant presentations from this perspective.

Lee

Bruno Marchal

unread,
May 18, 2005, 2:57:04 AM5/18/05
to Stephen Paul King, EverythingList list
Hi Stephen,


On 17-May-05, at 22:39, Stephen Paul King wrote:

> There is still one question that needs to be answered: what is it
> that gives rise to the differentiation necessary for one "description"
> to "bruise" (or cause any kind of change) in another "description" if
> we disallow for some thing that acts as an "interface" between the
> two.
>
> What forms the "interface" in your theory?

I think we should come back to the difference between your assumptions
and mine. For me "interface" and "resource" are the thing I would like
to explain. But I accept the notion that 0 is a number, and if x is a
number then x+1 is a number, etc.
Ultimately interface and resource are explained in terms of the relation
between those numbers. I think you assume at the start a physical
world. I don't need that hypothesis.

Bruno


http://iridia.ulb.ac.be/~marchal/


Lee Corbin

unread,
May 18, 2005, 3:12:15 AM5/18/05
to EverythingList
Stathis writes

> I was using the term "information" loosely, to include what is commonly
> termed qualia, subjective experience etc. I agree that if a physical system
> is fully specified, then that is all you need in order to duplicate or
> emulate the system. The new system will do everything the original one did,
> including have conscious experiences. It's worth stressing this point again:
> you don't need any special, non-physical information to emulate or duplicate
> a conscious system; you don't need God to provide it with a soul, you don't
> need to purchase a mind-body interface kit, you don't need to meditate and
> wave quartz crystals around, and you don't need to have 1st person knowledge
> of its subjective experiences. All you need is a few kilograms of raw
> materials, a molecular assembler mechanism, and the data which indicates
> where each bit goes. Once the job is finished, you automatically have a
> system which talks, eats, and is conscious. Psychology and biology have been
> reduced to physics and chemistry. Consciousness has been shown to be just
> an emergent phenomenon in a particular type of biological computer. Agree so
> far?

Well, this is certainly all right by me---though hardly by everyone
here. You have described very well the ordinary reduction of humans
and animals to ordinary physical mechanisms, a view that was
widespread among materialists all through the 19th and 20th
centuries, even if they didn't have as much evidence as we do.

> OK: having said all that, and assuming at this point that we know the
> position and function of every atom in this newly created system, I *still*
> would wonder what it feels like to actually *be* this system.

Have you read Hofstadter's comments on Thomas Nagel's essay "What is
it Like to be a Bat?". (Most easily accessed in "The Mind's I" by
Hofstadter and Dennett.) And I presume that you're familiar with
Daniel Dennett's views on qualia, as in "Consciousness Explained",
but that you reject them? (I'm rather new to this list.)

Lee

Bruno Marchal

unread,
May 18, 2005, 3:26:20 AM5/18/05
to Bruno Marchal, Stephen Paul King, EverythingList list
Well, ...

On 18-May-05, at 08:53, Bruno Marchal wrote:

> I think you assume at the start a physical world. I don't need that
> hypothesis.

I should have said: I can't use that hypothesis, because the "physical
world" is what I would like to explain.
(Let us not exaggerate the partial success I got) ;)

Bruno

http://iridia.ulb.ac.be/~marchal/


John M

unread,
May 18, 2005, 3:14:40 PM5/18/05
to lco...@tsoft.com, EverythingList
Lee:
how would you relate to my generalization of the (non Shannon) information
concept:

---- Acknowledged difference ----

where the acknowledgor is not specified, nor is the nature of the
difference?
(Just 'difference' is no information unless absorbed into a pool of
organized data; identity does not constitute information, unless compared
with non-identity, to which it IS a difference.)
It can range from a differential electric charge to a Shakespeare story.

Cheers

John M

John M

unread,
May 18, 2005, 3:16:37 PM5/18/05
to everyth...@eskimo.com
In AI we consider (certain) qualia (characteristics) of the mind to get
simulated by the machine, in the first place those that are relevant to the
activities we are interested in, but really: a choice of only those we know
about at all in the model we have of human mentality. There is always more
to it, and we disregard the rest of the totality (of course).
The billiard ball also has more to it than in our model's characteristics
of the toy we consider. That's the way we think.

Which brings to my mind the silly young peasant girl who worked in my
grandparents' home in the 30s and was sent down to the cellar made by my
grandfather, with a horizontal trap-door covering the stairs down. She came
back desperate that the door would not open.

Are we not trying to explain our own consciousness using our own
consciousness, the mind using our mind, and the (rest of the?) world
'objectively', of which we are an intrinsic part?
Aren't we standing on the trap door and trying to lift it?

Excuse my rambling; I am not against advanced thinking, I just always apply
the notion of a humble insecurity: that's all I can think of with my limited
means, and there always may be much more to it.

Respectfully

John Mikes

Jonathan Colvin

unread,
May 18, 2005, 4:45:23 PM5/18/05
to everyth...@eskimo.com

>Stathis: I was using the term "information" loosely, to include what is
>commonly termed qualia, subjective experience etc.

But I think this is where a subtle dualism creeps in, because you are
ascribing something special to qualia, beyond mere existence, or so it
seems.

>I agree
>that if a physical system is fully specified, then that is all
>you need in order to duplicate or emulate the system. The new
>system will do everything the original one did, including have
>conscious experiences. It's worth stressing this point again:
>you don't need any special, non-physical information to
>emulate or duplicate a conscious system; you don't need God to
>provide it with a soul, you don't need to purchase a mind-body
>interface kit, you don't need to meditate and wave quartz
>crystals around, and you don't need to have 1st person
>knowledge of its subjective experiences. All you need is a few
>kilograms of raw materials, a molecular assembler mechanism,
>and the data which indicates where each bit goes. Once the job
>is finished, you automatically have a system which talks,
>eats, and is conscious. Psychology and biology have been
>reduced to physics and chemistry. Consciousness has been shown
>to be just an emergent phenomenon in a particular type of
>biological computer. Agree so far? OK: having said all that,
>and assuming at this point that we know the position and
>function of every atom in this newly created system, I *still*
>would wonder what it feels like to actually *be* this system.

What it would be like to be a bat, as Nagel puts it.

>My curiosity could only be satisfied if I were in fact the
>duplicated system myself; perhaps this could be achieved if I
>"became one" with the new system by direct neural interface. I
>don't have to go to such lengths to learn about the new
>system's mass, volume, behaviour, or any other property, and
>in *this* consists the essential difference between 1st person
>and 3rd person experience. You can minimise it and say it
>doesn't really make much practical difference, but I don't
>think you can deny it.

I can deny that there is anything special about it, beyond the difference
between A): *a description of an apple*; and B): *an apple*. I don't think
anyone would deny that there is a difference between A and B (even with comp
there is still a difference); but this "essential difference" does not seem
to have anything in particular to do with qualia or experience.

Jonathan Colvin

Jonathan Colvin

unread,
May 18, 2005, 4:51:02 PM5/18/05
to everyth...@eskimo.com
Lee writes:
>> Jonathan: Bruno's claim is a straightforward consequence of Strong AI;
>> that a
>> simulated mind would behave in an identical way to a "real" one, and
>> would experience the same "qualia". There's no special "interface"
>> required here; the simulated mind and the simulated billiard
>ball are
>> in the same "world", ie. at the same level of simulation. As far as
>> the simulated person is concerned, the billiard ball is "real". Of
>> course, the simulation can also contain a simulation of the billiard
>> ball (2nd level simulation), which will equally be unable to bruise
>> the simulated person, and so on ad infinitum. If we take Bostrom's
>> simulation argument seriously, we all exist in some Nth level
>> simulation, while our simulated billiard ball exists at the
>(N+1)th level.
>
>Now just to keep our bookkeeping accurate, Bruno Marchal's
>claims far exceed what you have written.
>
<snip>

>No, the important claims that Bruno makes go far beyond. He
>attempts to derive physics from the theory of computation
>(i.e., recursive functions, effective computability,
>incompleteness, and unsolvability).
>His is also one set of the claims, hypotheses, and conjectures
>that attempt to reduce physics to a completely timeless abstract world.
>Julian Barbour, in The End of Time, gave, as you probably
>know, one of the most brilliant presentations from this perspective.

Sure; but I was just addressing the observation by Bruno that a description
of a ball can bruise you (if you are also a description). That observation
is not unique to Bruno's Comp; it applies to any theory that accepts the
premise of Strong AI.

Jonathan


Russell Standish

unread,
May 19, 2005, 12:47:10 AM5/19/05
to John M, everyth...@eskimo.com
Pulling up the door you're standing on is known in the computer
industry as "bootstrapping", which comes from the expression "to pull
yourself up by your bootstraps".

Of course, over time, this has been shortened to "boot", as in
"booting your computer".

Initially, to boot a computer, one had to enter a small loader program
into the computer by flicking switches. Run the program, and it
would load a larger program from a disk or tape, which in turn would read
in an operating system (or whatever) and start it running. These
days, this initial program is burned into a nonvolatile silicon chip,
which loads and runs the first sector of the hard disk, and so on, so
the tedious stage of entering the first program by hand is
avoided.
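Russell's boot chain can be sketched as a toy model; the stage names and the DISK mapping below are invented for illustration, not a description of any real firmware:

```python
# A toy model of the boot chain described above: a tiny fixed (ROM) loader
# runs the first sector, which loads a bigger loader, which loads the OS.
# The stage names and the DISK mapping are invented for illustration.

DISK = {
    "boot_sector": "second_stage",   # the first sector names the next program
    "second_stage": "kernel",
    "kernel": None,                  # the OS itself: nothing left to load
}

def boot():
    """Follow the chain, starting from the ROM-resident loader."""
    chain = ["rom_loader"]           # stage 0: burned into silicon (or keyed in)
    current = "boot_sector"          # the ROM loads and runs the first sector
    while current is not None:
        chain.append(current)
        current = DISK[current]      # each stage loads its (larger) successor
    return chain

print(boot())   # ['rom_loader', 'boot_sector', 'second_stage', 'kernel']
```

Each stage is just big enough to locate and start the next, larger one, which is the whole trick of "pulling up the door you are standing on".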

Can a conscious mind be understood completely by a conscious mind?
This can be cast in terms of Cantor's diagonalisation
argument. Gödel's 2nd theorem effectively says that arithmetic "cannot
understand itself". However, the set of recursive functions is closed
under diagonalisation: recursive functions exist that can emulate
any other.
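The closure property Russell mentions, a recursive function that can emulate any other, including itself, can be illustrated with a minimal sketch using Python's exec as a stand-in universal interpreter; the names U, square, and U_src are my own toy choices, not anything from the thread:

```python
# A universal function U: given the source of any one-argument program
# (a string defining a function f), run it on an input.

def U(program: str, x):
    """Run `program` (source text defining a function f) on input x."""
    env = {}
    exec(program, env)       # "load" the program into a fresh environment
    return env["f"](x)       # then run it

square = "def f(x): return x * x"
assert U(square, 5) == 25    # U emulates the squaring program

# U can also emulate itself: U_src is a program computing U's own behaviour.
U_src = (
    "def f(args):\n"
    "    program, x = args\n"
    "    env = {}\n"
    "    exec(program, env)\n"
    "    return env['f'](x)\n"
)
assert U(U_src, (square, 5)) == 25   # U running U running square
```

So diagonalising over the recursive functions never leads outside the class, which is the sense in which a computational theory of mind escapes the trap-door objection.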

Coming back to the original question - Bruno Marchal would probably answer
yes, with respect to the assumption of COMP. Robert Rosen (to pick a
somewhat extreme opposing example) would probably argue no - that
consciousness lies in a class of systems outside the computable set.

Cheers

> > > > On 17-May-05, at 09:56, Jonathan Colvin wrote:


> > > >
> > > >> Is it any
> > > >> stranger that a blind man can not see, than that a
> > > description of a
> > > >> billiard
> > > >> ball's properties (weight, diameter, colour etc) can not bruise me?
> > > >
> > > >
> > > > It is different with comp. because a description of you + a
> > > description of
> > > > billiard ball, done at some right level, can bruise you.
> > > >
> > > > Bruno
> > > >
> > > > http://iridia.ulb.ac.be/~marchal/

--
*PS: A number of people ask me about the attachment to my email, which
is of type "application/pgp-signature". Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.

----------------------------------------------------------------------------
A/Prof Russell Standish Phone 8308 3119 (mobile)
Mathematics 0425 253119 (")
UNSW SYDNEY 2052 R.Sta...@unsw.edu.au
Australia http://parallel.hpc.unsw.edu.au/rks
International prefix +612, Interstate prefix 02
----------------------------------------------------------------------------

Lee Corbin

unread,
May 19, 2005, 3:04:45 AM5/19/05
to everyth...@eskimo.com
Jonathan writes

> Lee writes:
>
> > No, the important claims that Bruno makes go far beyond. He
> > attempts to derive physics from the theory of computation
> > (i.e., recursive functions, effective computability,
> > incompleteness, and unsolvability).
> > His is also one set of the claims, hypotheses, and conjectures
> > that attempt to reduce physics to a completely timeless abstract world.
> > Julian Barbour, in The End of Time, gave, as you probably
> > know, one of the most brilliant presentations from this perspective.
>
> Sure; but I was just addressing the observation by Bruno that a description
> of a ball can bruise you (if you are also a description). That observation
> is not unique to Bruno's Comp; it applies to any theory that accepts the
> premise of Strong AI.

I'm astonished to hear this; I thought that "strong AI" referred
merely to the claim that fully human or beyond intelligence might
be achieved by automatic machinery even if the programs only
push bits around one at a time. In other words, what distinguished
the strong AI camp from the weak AI camp was that the latter
believed that more is needed somehow or other: perhaps parallel
processing; perhaps biological program instantiation; perhaps
quantum gravity tubules or... something.

Also, strong vs. weak was/is distinguished so far as I know by
the claim about what is the best *practical* road to AI. That is,
some in the weak AI camp acknowledge that a purely "symbolic"
machine may some day achieve working human intelligence, just that
this is not the way most nearly at hand, (most easily achieved).

As far as believing that a billiard-ball *machine* or a hydraulic
machine might instantiate me (as a running program), I for one *do*
believe that. So in my understanding of the terms, as I said above,
then it follows that I myself am in the strong AI camp (ontologically).

But I (and I know I speak for others) don't think that I'm only
a description; we believe that we must be processes running during
some time interval on some kind of hardware in some physical reality.
So we are as yet unmoved :-) by Bruno's descriptions.

Lee

Stathis Papaioannou

unread,
May 19, 2005, 8:51:26 AM5/19/05
to jco...@ican.net, everyth...@eskimo.com
Jonathan Colvin wrote:

[quoting Stathis]


> >My curiosity could only be satisfied if I were in fact the
> >duplicated system myself; perhaps this could be achieved if I
> >"became one" with the new system by direct neural interface. I
> >don't have to go to such lengths to learn about the new
> >system's mass, volume, behaviour, or any other property, and
> >in *this* consists the essential difference between 1st person
> >and 3rd person experience. You can minimise it and say it
> >doesn't really make much practical difference, but I don't
> >think you can deny it.
>
>I can deny that there is anything special about it, beyond the difference
>between A): *a description of an apple*; and B): *an apple*. I don't think
>anyone would deny that there is a difference between A and B (even with
>comp
>there is still a difference); but this "essential difference" does not seem
>to have anything in particular to do with qualia or experience.
>
>Jonathan Colvin

Can the description of the apple, or bat, or whatever meaningfully include
what it is like to be that thing?

--Stathis Papaioannou


Bruno Marchal

unread,
May 19, 2005, 9:31:48 AM5/19/05
to Stathis Papaioannou, jco...@ican.net, everyth...@eskimo.com

On 19 May 2005, at 14:44, Stathis Papaioannou wrote:

> Jonathan Colvin wrote:
>
> [quoting Stathis]
>> >My curiosity could only be satisfied if I were in fact the
>> >duplicated system myself; perhaps this could be achieved if I
>> >"became one" with the new system by direct neural interface. I
>> >don't have to go to such lengths to learn about the new
>> >system's mass, volume, behaviour, or any other property, and
>> >in *this* consists the essential difference between 1st person
>> >and 3rd person experience. You can minimise it and say it
>> >doesn't really make much practical difference, but I don't
>> >think you can deny it.
>>
>> I can deny that there is anything special about it, beyond the
>> difference
>> between A): *a description of an apple*; and B): *an apple*. I don't
>> think
>> anyone would deny that there is a difference between A and B (even
>> with comp
>> there is still a difference); but this "essential difference" does
>> not seem
>> to have anything in particular to do with qualia or experience.
>>
>> Jonathan Colvin
>
> Can the description of the apple, or bat, or whatever meaningfully
> include what it is like to be that thing?


What do you mean by "include"? Does the artificial brain proposed by
your doctor "include" you?
In a 1-person sense: yes (assuming c.)
In a 3-person sense: no.
OK?

Bruno

http://iridia.ulb.ac.be/~marchal/


James N Rose

unread,
May 19, 2005, 10:40:34 AM5/19/05
to everyth...@eskimo.com
I would like to gather everyone's attention to point to
an essential conceptual error that exists in the current
debating points of this topic, which in fact has been
an egregious error in logic for the past 2500 years,
ever since Plato.

Recent postings cite:

Stathis Papaioannou wrote:
>
> Jonathan Colvin wrote:
>
> [quoting Stathis]
> > >My curiosity could only be satisfied if I were in fact the
> > >duplicated system myself; perhaps this could be achieved if I
> > >"became one" with the new system by direct neural interface. I
> > >don't have to go to such lengths to learn about the new
> > >system's mass, volume, behaviour, or any other property, and
> > >in *this* consists the essential difference between 1st person
> > >and 3rd person experience. You can minimise it and say it
> > >doesn't really make much practical difference, but I don't
> > >think you can deny it.
> >
> >I can deny that there is anything special about it, beyond the difference
> >between A): *a description of an apple*; and B): *an apple*. I don't think
> >anyone would deny that there is a difference between A and B (even with
> >comp
> >there is still a difference); but this "essential difference" does not seem
> >to have anything in particular to do with qualia or experience.
> >
> >Jonathan Colvin
>
> Can the description of the apple, or bat, or whatever meaningfully include
> what it is like to be that thing?
>
> --Stathis Papaioannou

In 1996 at "Towards a Science of Consciousness" (Tucson) I presented
several exhibits, each one highlighting some specific relational qualia
of existence in isolation, and identifying each/all in regard to a
potential single holistic description of being -and- performances of
being.

The one that has bearing here, was simply an apple - inside a black box
which no light could enter, until the box was opened and photons could
reach the surface of the apple.

The discussion point went something like this: In contradistinction to
the 2500 years old 'definition' of self and completeness set forth by
Plato in his discussions of 'real' vis a vis 'ideal', notice is here given
that the apple inside the closed box is - ideally - an entity which
is without color ... absolutely and always - even though weak-logic
presumes and assigns color 'to' things and entities, de facto.

The full existential extent and outer-bound limit of the apple goes
-only- up to BUT NOT BEYOND its physical manifestation; in this case
in entity: its skin. Where skin -ends-, "apple" .. -ends- and does
not 'exist'.

However,

'color' - that which we first-order associate -with- apple, exists -solely-
in that region -outside and beyond- ... where 'apple' does not exist. By
sheer rigid definition of 'existence' - and logical definitions re 'sets' -
apple and 'color' are and always must be -mutually exclusive-, with no Venn
intersection at all.

Conclusions:

1. No entity is 'complete' in and of itself; entities are "completed" only
in co-presence of external environmentals.

2. Systems and entities -will have- qualia that exist (emergently) from
I-Thou relations which they may not be internally aware of, or be self
appreciative of, nor the impacts of these qualia on their 'self'.

First and Third frames of reference can never be identical, and

'exhibition of qualia' versus 'access to qualia for feedback purposes'
are quite different things.

Cybernetic secondary connections 'smooth' and blur this relationship
of being.


(there is more, but I don't have time at the moment to continue; sorry
to do a 'fermat', but I'll write again, if anyone cares to explore this
thread after this posting today)

Jamie Rose
19 May 2005

John M

unread,
May 19, 2005, 3:41:18 PM5/19/05
to Russell Standish, everyth...@eskimo.com
Russell wrote:
---------------------quote-------

"Pulling up the door you're standing on is known in the computer
industry as "bootstrapping", which comes from the expression "to pull
yourself up by your bootstraps".

Of course, over time, this has been shortened to "boot", as in
"booting your computer".

Initially, to boot a computer, one had to enter a small loader program
into the computer by flicking switches. Run the program, and this
would load a larger program from a disk or tape, which in turn would read
in an operating system (or whatever) and start it running. These
days, this initial program is burned into a nonvolatile silicon chip,
which loads and runs the first sector of the hard disk, and so on, so
the tedious stage of entering the first program by hand is
avoided.

Can a conscious mind be understood completely by a conscious mind?
This can be cast in terms of Cantor's diagonalisation
argument. Goedel's 2nd theorem effectively says that arithmetic "cannot
understand itself". However, the set of recursive functions is closed
under diagonalisation; namely, recursive functions exist that can emulate
any other.

Coming back to the original question - Bruno Marchal would probably answer
yes, with respect to the assumption of COMP. Robert Rosen (to pick a
somewhat extreme opposing example) would probably argue no - that
consciousness lies in a class of systems outside the computable set.

Cheers"
----------------------------------end quote------------
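
Russell's contrast above can be sketched concretely. This is a toy
illustration of my own, not from the thread; the names `enumeration`,
`diagonal` and `universal` are made up for the sketch. Cantor-style
diagonalisation shows that no enumeration of total functions is complete,
while a universal interpreter keeps the partial recursive functions closed
under the same trick.

```python
# Cantor-style diagonalization: given any enumeration f(i, n) of
# total functions N -> N, the diagonal g differs from every f_i.

def enumeration(i, n):
    # a sample enumeration: f_i(n) = i * n  (stands in for any listing)
    return i * n

def diagonal(n):
    # g(n) = f_n(n) + 1 differs from f_i at argument i, so g is
    # not in the enumeration -- no listing of total functions is complete
    return enumeration(n, n) + 1

# g differs from each listed function on the diagonal:
assert all(diagonal(i) != enumeration(i, i) for i in range(100))

# By contrast, the partial recursive functions survive diagonalization:
# a universal interpreter U(p, n) can emulate any program p, so the
# diagonal over programs is itself computable -- it just fails to be
# total (it diverges wherever program p diverges on its own text).
def universal(program, n):
    # toy universal function: a "program" here is source text defining
    # a one-argument Python function named step (illustration only)
    env = {}
    exec(program, env)
    return env["step"](n)

prog = "def step(x):\n    return x + x"
assert universal(prog, 21) == 42
```

The escape hatch is partiality: the diagonal over programs is itself a
program, but it diverges exactly where the diagonalised program diverges,
so no contradiction arises.
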
If my memory serves well, I heard that explanation of "bootstrapping" some
decades ago (in cosmological Q-theories, I suppose), before it was applied to
computers. Thanks, Russell, for the computer class; however, I would not
equate 'booting' with 'bootstrapping', especially based on your info about
the 'special' initiating programs you have to provide to 'strap'. It may
well be that the usage of the word originated from such poor understanding
of the metaphor.

I wanted to stress the ignorance of that "bootstrapper" (to use your
preferred word) about the way she was acting. She did not want to lift
herself up; she thought she was lifting a trapdoor (to go down the stairs).
That is exactly the goal we are pursuing in trying to understand the 'world'
we belong to as part of it - together with its 'sense', of which we use a
small part to think with. Maybe Cantor and Goedel were smarter, yet I doubt
whether they could encompass the totality in its interactions, up and down,
to draw holistic conclusions upon the world of which they, too, were only a
small part. They might have known more than us. That's all.

If one considers an infinite set for comp (imaginary, that is), unlimited
and encompassing all (knowable and coming) ramifications into its
computations, that is a different ballgame. I try to stay within our
applicable limits and accept the limitations of our mental capabilities -
not only mine, but those of other humans as well. I have no computer working
with unlimited sets of data, unlimited ways of comp, unlimited options to
consider and the unlimited choice to apply for results; I envy all who have
that.

Without trying to defend Robert Rosen, his (unlimited) natural systems
(maximum models = the THING itself, not a model) are (in his words) "not
Turing-computable"; I think that is different from Bruno's unlimited
'comp'.

Excuse me for falling through a trap-door into a reply about things I am no
expert in. I may have the wrong bootstraps.

Cheers

John

Quentin Anciaux

unread,
May 19, 2005, 3:58:33 PM5/19/05
to John M, everyth...@eskimo.com
Hi,

Le Jeudi 19 Mai 2005 21:18, John M a écrit :
>
> Without trying to defend Robert Rosen, his (unlimited) natural systems
> (maximum models = the THING itself, not a model) are (in his words) "not
> Turing -computable", I think that is different from Bruno's unlimited
> 'comp'.

I think that is what Bruno explains (or rather my understanding of it): that
"consciousness" (a thing?) is emergent on all computations passing through
th(is/ese) state(s). If I understand, there is not one computation that
simulates a thing but a set of computations having this state. But it seems
to me that an infinity of computations passing through a particular state
exists, so I do not understand very well how a measure can be associated
with it.

> Excuse me for falling through a trap-door into a reply about things I am no
> expert in. I may have the wrong bootstraps.

I'm not an expert either ;)

> Cheers
>
> John

Regards,

Quentin Anciaux

Jonathan Colvin

unread,
May 19, 2005, 6:26:41 PM5/19/05
to everyth...@eskimo.com

> [quoting Stathis]
> > >My curiosity could only be satisfied if I were in fact the
> duplicated
> > >system myself; perhaps this could be achieved if I "became
> one" with
> > >the new system by direct neural interface. I don't have to
> go to such
> > >lengths to learn about the new system's mass, volume,
> behaviour, or
> > >any other property, and in *this* consists the essential
> difference
> > >between 1st person and 3rd person experience. You can
> minimise it and
> > >say it doesn't really make much practical difference, but I don't
> > >think you can deny it.
> >
> >I can deny that there is anything special about it, beyond the
> >difference between A): *a description of an apple*; and B):
> *an apple*.
> >I don't think anyone would deny that there is a difference between A
> >and B (even with comp there is still a difference); but this
> "essential
> >difference" does not seem to have anything in particular to do with
> >qualia or experience.
> >
> >Jonathan Colvin
>
> Stathis: Can the description of the apple, or bat, or whatever
> meaningfully include what it is like to be that thing?

My argument (which is Dennett's argument) is that "what it is like to be that
thing" is identical to "being that thing". As Bruno points out, at the 3rd
person level (i.e. the level where I am describing or simulating an apple), a
description cannot "be" a thing; but at the 1st person level (where a
description *is* the thing, from the point of view of the thing, inside the
simulation, as it were), the description does "include" what it is like
to be that thing. But "include" is not the correct word to use, since it
subtly assumes a dualism (that the qualia exist somehow separate from the
mere description of the thing); the description *just is* the thing.

Jonathan

Jonathan Colvin

unread,
May 19, 2005, 7:15:28 PM5/19/05
to everyth...@eskimo.com

> > >Lee: No, the important claims that Bruno makes go far beyond.
> He attempts
> > > to derive physics from the theory of computation (i.e., recursive
> > > functions, effective computability, incompleteness, and
> > > unsolvability).
> > > His is also one set of the claims, hypotheses, and
> conjectures that
> > > attempt to reduce physics to a completely timeless abstract world.
> > > Julian Barbour, in The End of Time, gave, as you probably
> know, one
> > > of the most brilliant presentations from this perspective.
> >
> >Jonathan: Sure; but I was just addressing the observation by Bruno that

a
> > description of a ball can bruise you (if you are also a
> description).
> > That observation is not unique to Bruno's Comp; it applies to any
> > theory that accepts the premise of Strong AI.
>
> I'm astonished to hear this; I thought that "strong AI"
> referred merely to the claim that fully human or beyond
> intelligence might be achieved by automatic machinery even if
> the programs only push bits around one at a time. In other
> words, what distinguished the strong AI camp from the weak AI
> camp was that the latter believed that more is needed somehow
> or other: perhaps parallel processing; perhaps biological
> program instantiation; perhaps quantum gravity tubules or...
> something.

No, the conventional meanings of strong vs. weak AI are merely:

Weak AI: machines can be made to act *as if* they were intelligent
(conscious, etc).
Strong AI: machines that act intelligently have real, conscious minds
(actually experience the world, qualia etc).

A claim that a description of an object (a simulated billiard ball for
instance) can bruise me (cause me pain etc) if I am a simulation, requires
strong AI, such that my simulation is conscious. Otherwise, under weak AI,
my simulation can only act *as if* it were bruised or in pain, since it is
not actually conscious.

> As far as believing that a billiard-ball *machine* or a
> hydraulic machine might instantiate me (as a running
> program), I for one *do* believe that. So in my understanding
> of the terms, as I said above, then it follows that I myself
> am in the strong AI camp (ontologically).

But Strong AI usually presumes substrate independence; so if you don't
believe that a mechanical ping pong ball machine for instance could
instantiate an intelligence, you would not be classed as in the Strong AI
camp.



> But I (and I know I speak for others) don't think that I'm
> only a description; we believe that we must be processes
> running during some time interval on some kind of hardware in
> some physical reality.
> So we are as yet unmoved :-) by Bruno's descriptions.

The usual reply is that this begs the question as to what a "process" is. If
we accept the block universe, time is a 1st person phenomenon anyway, so how
do we differentiate between what is a description and what is a process?

Jonathan Colvin

Stathis Papaioannou

unread,
May 19, 2005, 9:03:56 PM5/19/05
to jco...@ican.net, everyth...@eskimo.com
OK then, we agree! It's just that what I (and many others) refer to as
qualia, you refer to as the difference between a description of a thing and
being the thing. I hate the word "dualism" as much as you do (because of the
implication that we may end up philosophically in the 16th century if we
yield to it), but haven't you just defined a very fundamental kind of
dualism, in acknowledging this difference between a thing and its
description? It seems to me, in retrospect, that our whole argument has been
one over semantics. Dennett (whom I greatly respect) goes to great lengths
to avoid having impure thoughts about something being beyond empirical
science or logic. David Chalmers ("The Conscious Mind", 1996) accepts that
it is actually simpler to admit that consciousness is just an irreducible
part of physical existence. We accept that quarks, or bitstrings, or
whatever are irreducible, so why is it any different to accept consciousness
or
what-it-is-like-to-be-something-as-distinct-from-a-description-of-something
(which is more of a mouthful) on the same basis?

--Stathis Papaioannou


Stephen Paul King

unread,
May 19, 2005, 9:48:02 PM5/19/05
to everyth...@eskimo.com
Dear Jonathan,

Non-separateness and identity are not the same! Your argument against
dualism assumes that the duals are somehow separable and thus, lacking a
linking mechanism, fails as a viable theory. On the other hand, once we see
the flaw in the assumption that we are making, that Body and Mind - Physical
existence and Mathematical existence (or Information!) are separable in
the sense that one can have meaning and "reason to be" without the other, we
can again consider how dualism can be viable as people such as Vaughan Pratt
have done.

The hard part is in overcoming the prejudice that has built up since
Descartes' flawed theory was proposed. His failure was in assuming that Body
and Mind are "substances" that have independent yet equal existence. The use
of the assumption of "substance" carries with it the necessitation of a
"causal connector". When we consider the duality in terms of process or
types and tokens or hardware and software, it makes a lot more sense.

This is analogous to claiming that numbers can somehow exist without
there being any need for them to be representable in any way. Unless we can
somehow "read each other's minds", it is impossible for me to communicate
the difference between the number 1 and the number 2. Without some physical
structure to act as an interface between our Minds, minds can not interact
or even "know" anything; there is no "definiteness". Similarly, Bodies can
not ask questions, make predictions, or have anticipations or
self-representations without some Mind associated. Nature has given us
fingers with which to understand numbers...

Consciousness seems to be more of a functional relationship between the
Physical and the Mental, the Outside and the Inside, as Chalmers states.
When the two dual aspects are taken to the ultimate level of Existence
in-itself, the distinction between the two vanishes. Russell saw this long
ago, he denoted it as "neutral monism". It is too bad that he made the
mistake of excluding non-well founded sets from consideration.


Stephen

----- Original Message -----
From: "Jonathan Colvin" <jco...@ican.net>
To: <everyth...@eskimo.com>
Sent: Thursday, May 19, 2005 6:22 PM
Subject: RE: What do you lose if you simply accept...


snip

Russell Standish

unread,
May 19, 2005, 9:57:31 PM5/19/05
to James N Rose, everyth...@eskimo.com
On Thu, May 19, 2005 at 07:29:33AM -0700, James N Rose wrote:
> I would like to gather everyone's attention to point to
> an essential conceptual error that exists in the current
> debating points of this topic, which in fact has been
> an egregious error in logic for the past 2500 years,
> ever since Plato.
>

...

Agreed that colour is not a characteristic of an object in itself. How
does this impact on the debate, however?

Stathis Papaioannou

unread,
May 19, 2005, 10:10:25 PM5/19/05
to mar...@ulb.ac.be, jco...@ican.net, everyth...@eskimo.com
Yes, this is what I meant. What it is like to be something can only be
answered from the 1st person perspective.

--Stathis

>>Can the description of the apple, or bat, or whatever meaningfully include
>>what it is like to be that thing?
>
>
>What do you mean by " include" ? Does the artificial brain proposed by your
>doctor "includes" you ?
>In a 1-person sense: yes (assuming c.)
>In a 3-person sense: no.
>OK?
>
>Bruno
>
>http://iridia.ulb.ac.be/~marchal/
>


Jonathan Colvin

unread,
May 19, 2005, 10:26:48 PM5/19/05
to everyth...@eskimo.com

> Stathis: OK then, we agree! It's just that what I (and many others)
> refer to as qualia, you refer to as the difference between a
> description of a thing and being the thing. I hate the word
> "dualism" as much as you do (because of the implication that
> we may end up philosophically in the 16th century if we yield
> to it), but haven't you just defined a very fundamental kind
> of dualism, in aknowledging this difference between a thing
> and its description? It seems to me, in retrospect, that our
> whole argument has been one over semantics.

Well, that would be a novel application of "dualism", I think. A description
of a thing, and *a thing* seem to be two very different categories; dualism
would usually imply one is talking about dualistic properties of the *same
thing*. I'm still inclined to deny that "qualia" refers to anything. It is a
mental fiction.


>Dennett (whom I
> greatly respect) goes to great lengths to avoid having impure
> thoughts about something being beyond empirical science or
> logic. David Chalmers ("The Conscious Mind", 1996) accepts
> that it is actually simpler to admit that consciousness is
> just an irreducible part of physical existence. We accept
> that quarks, or bitstrings, or whatever are irreducible, so
> why is it any different to accept consciousness or
> what-it-is-like-to-be-something-as-distinct-from-a-description
> -of-something
> (which is more of a mouthful) on the same basis?


The argument from Dennett (which I'm inclined to agree with) would be that
we cannot accept "what-it-is-likeness" as an irreducible thing because
there is no such thing as "what-it-is-likeness".

Jonathan Colvin

Bruno Marchal

unread,
May 20, 2005, 6:10:28 AM5/20/05
to Quentin Anciaux, EverythingList list

On 19 May 2005, at 21:51, Quentin Anciaux wrote:

> Hi,
>
> On Thursday 19 May 2005 21:18, John M wrote:
>>
>> Without trying to defend Robert Rosen, his (unlimited) natural systems
>> (maximum models = the THING itself, not a model) are (in his words)
>> "not
>> Turing -computable", I think that is different from Bruno's unlimited
>> 'comp'.
>
> I think that is what Bruno explains (rather my understanding of it),
> that
> "consciousness" (a thing ?) is emergent on all computations passing
> through
> th(is/ese) state(s). If I understand, there is not one computation that
> simulate a thing but a set of computation having this state.


Right.


> But it seems to
> me that an infinity of computation passing through a particular state
> exists,

Right.

> so I do not very well understand how a measure can be associated to it.


Measure theory has been developed precisely for taking infinite sets into
account; the measure bears on such sets.
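
Quentin's worry above - how a measure can be attached to the infinity of
computations passing through one state - has a standard toy answer. This is
my own illustration, not Bruno's actual construction: a measure weighs
infinite sets of histories by their finite prefixes, as with the uniform
(coin-flip) measure on infinite bit sequences.

```python
from fractions import Fraction

def cylinder_measure(prefix):
    """Uniform measure of all infinite bit sequences extending `prefix`.

    Each of the uncountably many extensions is individually of measure
    zero, yet the set as a whole has weight 2**-len(prefix).
    """
    return Fraction(1, 2 ** len(prefix))

# the histories through "01" split exactly into those through "010"
# and those through "011": the measure is additive on this refinement
assert cylinder_measure("01") == cylinder_measure("010") + cylinder_measure("011")
assert cylinder_measure("") == 1  # all histories together have measure 1
```

So an infinity of continuations is no obstacle in principle; the open
problem in the comp setting is which measure on computational histories
is the right one, not whether one can exist.
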

>
>> Excuse me for falling through a trap-door into a reply about things I
>> am no
>> expert in. I may have the wrong bootstraps.
>
> I'm not an expert too ;)


Beware the experts: today, in the interdisciplinary fields, they are
on average worse than honest, inquiring lay(wo)men.

Bruno


http://iridia.ulb.ac.be/~marchal/


Bruno Marchal

unread,
May 20, 2005, 6:36:01 AM5/20/05
to John M, Russell Standish, everyth...@eskimo.com

On 19 May 2005, at 21:18, John M wrote:

> Without trying to defend Robert Rosen, his (unlimited) natural systems
> (maximum models = the THING itself, not a model) are (in his words)
> "not
> Turing -computable", I think that is different from Bruno's unlimited
> 'comp'.


I would like to insist on this key point: comp entails that first
person reality, whatever it is, is NOT COMPUTABLE (and the UDA shows
that physics is among the 1-realities).
If I am a machine, then whatever I am embedded in CANNOT BE CAPTURED
BY ANY PROGRAM, with the exception of the UD, which does not really
capture reality as we can know it, because the capture is provably NOT
EFFECTIVE. The UD generates all the machine "dreams", which by highly
non-trivial interference (not the quantum one but the comp one)
generate a non-computable "solidity".

To understand COMP = to understand we are infinitely more ignorant than
we could have thought. And this aspect of comp appears still more
clearly in the "interview" of the Loebian machine, which is the most
modest being ever conceived until now (to my knowledge).

John, I'm afraid you still have a reductionist, pre-Goedelian
understanding of machines. Or perhaps, by inattention, you are coming
back to such a reductionist conception of machines. Since Goedel 1931,
such a reductive view of machines is just wrong. Goedel's theorem is the
realisation that we just don't know what universal machines are, what
they are able to do. It makes us humble!

I insist because that's a widespread misconception. The real miracle is
that those machine dreams are still interfering in a way which makes
the appearance of physical reality locally testable, including the
testability of comp itself.

And so I do agree with Rosen's conclusion that "nature" is not
computable. But I extracted this from what amounts essentially to a
self-finiteness assumption (that's comp), where Rosen got it by assuming
at the start that he is natural and that nature is not computable. I
don't do that, because I have never understood what the word "Nature"
means in that context, except as some dogmatic oversimplification of
Aristotelian physics and theology.

Bruno


http://iridia.ulb.ac.be/~marchal/


Bruno Marchal

unread,
May 20, 2005, 6:40:56 AM5/20/05
to Stathis Papaioannou, jco...@ican.net, everyth...@eskimo.com

On 20 May 2005, at 02:59, Stathis Papaioannou wrote:

> OK then, we agree! It's just that what I (and many others) refer to as
> qualia, you refer to as the difference between a description of a
> thing and being the thing. I hate the word "dualism" as much as you do
> (because of the implication that we may end up philosophically in the
> 16th century if we yield to it), but haven't you just defined a very
> fundamental kind of dualism, in aknowledging this difference between a
> thing and its description? It seems to me, in retrospect, that our
> whole argument has been one over semantics. Dennett (whom I greatly
> respect) goes to great lengths to avoid having impure thoughts about
> something being beyond empirical science or logic. David Chalmers
> ("The Conscious Mind", 1996) accepts that it is actually simpler to
> admit that consciousness is just an irreducible part of physical
> existence. We accept that quarks, or bitstrings, or whatever are
> irreducible, so why is it any different to accept consciousness or
> what-it-is-like-to-be-something-as-distinct-from-a-description-of-
> something (which is more of a mouthful) on the same basis?


Yes, but then why not take everything for granted? I do think Chalmers
just abandons rationalism, unlike Dennett in Brainstorms (but then a
little bit too in "Consciousness Explained" ... explained away, as he
realises himself at the end of the book, at last).

Frankly, Stathis, if that is your last move, I prefer the short answer
by Norman Samish's wife: "because".

;)

Bruno

http://iridia.ulb.ac.be/~marchal/


James N Rose

unread,
May 20, 2005, 8:48:04 AM5/20/05
to everyth...@eskimo.com
Russell Standish wrote:
>
> On Thu, May 19, 2005 at 07:29:33AM -0700, James N Rose wrote:
> > I would like to gather everyone's attention to point to
> > an essential conceptual error that exists in the current
> > debating points of this topic, which in fact has been
> > an egregious error in logic for the past 2500 years,
> > ever since Plato.
> >
> ..... . . . . . . . .
>
> > 'color' - that which we first-order associate -with- apple, exists -solely-
> > in that region -outside and beyond- ... where 'apple' does not exist. By
> > sheer rigid definition of 'existence' - and logical definitions re 'sets' -
> > apple and 'color' are and always must be -mutually exclusive-, with no Venn
> > intersection at all.
> >
> > Conclusions:
> >
> > 1. No entity is 'complete' in and of itself; entities are "completed" only
> > in co-presence of external environmentals.
> >
> > 2. Systems and entities -will have- qualia that exist (emergently) from
> > I-Thou relations which they may not be internally aware of, or be self
> > appreciative of, nor the impacts of these qualia on their 'self'.
> >
> > First and Third frames of reference can never be identical, and
> >
> > 'exhibition of qualia' versus 'access to qualia for feedback purposes'
> > are quite different things.
> >
> > Cybernetic secondary connections 'smooth' and blur this relationship
> > of being.
> >
> >
> > (there is more, but I don't have time at the moment to continue; sorry
> > to do a 'fermat', but I'll write again, if anyone cares to explore this
> > thread after this posting today)
> >
> > Jamie Rose
> > 19 May 2005
>
> Agreed that colour is not a characteristic of an object in itself. How
> does this impact on the debate, however?


Russell,

Realize first that you just easily and agreeably opted to completely negate
Platonic 'real v. ideal' as a flawed logic. Identification of 'essential
qualia' is no longer an a priori valid 'given'. By the next logical extension
of this de-validation, which qualia - assigned to an entity by way of
external evaluation of the entity - represent qualia which the entity
functions on immediately and intimately, because the entity internally
has an information link to them?

The school prank of putting a secretly taped sign on a friend's back
saying 'kick me' ... the conscious performance of the student -excludes-
a qualia which the environmental world identifies -with- the
student-with-sign.

"A description of a system, and a system in and of itself, can never and
will never map perfectly one-to-one and onto."

QED

Conclusions:

1. Initial condition alternatives result in alternate eventstream outcomes.
2. Alternate information sets preclude precision cloning,
performances, decision gates.
3. Consciousness is not perfectly transferable.

Jamie

Stephen Paul King

unread,
May 20, 2005, 11:18:47 AM5/20/05
to mi...@metasciences-academy.org, everyth...@eskimo.com, ti...@yahoogroups.com
Dear Jonathan,

Non-separateness and identity are not the same thing! Your argument

against dualism assumes that the duals are somehow separable and

non-mutually dependent and thus lacking a linking mechanism dualism fails as

a viable theory. On the other hand, once we see the flaw in the assumption
that we are making, that Body and Mind - Physical existence and Mathematical

existence (or Information!) are separable in the sense that one can have

meaning and "reason to be" without the other, we can again consider how
dualism can be viable as people such as Vaughan Pratt have done.

The hard part is in overcoming the prejudice that has built up since
Descartes' flawed theory was proposed. His failure was in assuming that Body
and Mind are "substances" that have independent yet equal existence. The use
of the assumption of "substance" caries with it the necessitation of a

"causal connector" and barring the existence of such a connector the
assumption leads to an inconsistency. When we consider the duality in terms
of process or types and tokens or hardware and software, whose very meaning
vanishes with out the other, it makes a lot more sense.

This is analogous to claiming that numbers can somehow exist without
there being any need for them to be representable in any way. How does any
number, even a Gödel number, represent anything at all if the means by which
to perform the act of making a distinction is not present "in them"? There
is no such thing as an "action" in Platonia!

It is one thing to claim that any event or action or aspect of existence
can be faithfully represented as some number, just as a movie is encoded as
a string of ones and zeros in the tiny pits of a DVD, but without some means
that is not a number to act upon that number, the very notion of number
vanishes. Numbers, and all mathematics and logics, necessitate some means to
express themselves.
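[Editor's note: the claim above, that any digital artifact can be represented as a single number, is easy to make concrete. The sketch below (function names are illustrative, not from the thread) also shows Stephen's point that recovering the content requires a decoding process that is not itself the number.]

```python
def to_number(data: bytes) -> int:
    """Encode a byte string as one integer, in the spirit of Godel numbering.
    A sentinel byte 0x01 is prepended so leading zero bytes survive."""
    return int.from_bytes(b'\x01' + data, 'big')

def from_number(n: int) -> bytes:
    """Recover the byte string. Note that this decoder is a process,
    not a number: without it, the integer 'represents' nothing."""
    raw = n.to_bytes((n.bit_length() + 7) // 8, 'big')
    return raw[1:]  # drop the sentinel

frame = b'\x00\xffa frame of a movie'
assert from_number(to_number(frame)) == frame
```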

Unless we can somehow "read each other's minds", it is impossible for me
to communicate the difference between the number 1 and the number 2. Without
some physical structure to act as an interface between our Minds, minds
cannot interact or even "know" anything; there is no "definiteness".
Similarly, Bodies cannot ask questions, make predictions, or have
anticipations or self-representations without some Mind associated. Nature
has given us fingers with which to understand numbers...

A Platonist neglects his own brain when he demands that we believe that
numbers are all that exist.

Joao Leao

unread,
May 20, 2005, 11:50:10 AM5/20/05
to Stephen Paul King, mi...@metasciences-academy.org, everyth...@eskimo.com, ti...@yahoogroups.com
Dear SPK

Though I entirely agree with what you state above, I take issue with your
characterization of "Platonism" as some form of mathematical monism.
If you had applied the name "Pythagoreanism" to the doctrine that "only
numbers exist" you would most likely be correct. Platonism, however, is very
definitely a form of ontological dualism: Platonists never deny the
existence of the physical world, though they insist that it exists as
a corrupted copy of the world of forms, the latter holding the "true"
reality. It is one thing to reject the false, and another to deny its
existence! Sorry, but this is not a pedantic point.

In other words: in all you say above you argue as a true Platonist,
(only one that does not know he is one)!

 
    Consciousness seems to be more of a functional relationship between the
Physical and the Mental, the Outside and the Inside, as Chalmers states.
When the two dual aspects are taken to the ultimate level of Existence
in-itself, the distinction between the two vanishes. Russell saw this long
ago; he called it "neutral monism". It is too bad that he made the
mistake of excluding non-well-founded sets from consideration.
 

If that ultimate level of Existence, as you put it, were as accessible to us
as the world of (mathematical) forms is to our minds, then I would take it as
an indication that we (our souls) had already migrated back to it, in
old Platonic parlance. Till then, dualism seems quite unavoidable.

Kindly,

-Joao

 
Stephen

----- Original Message -----
From: "Jonathan Colvin" <jco...@ican.net>
To: <everyth...@eskimo.com>
Sent: Thursday, May 19, 2005 6:22 PM
Subject: RE: What do you lose if you simply accept...

snip
>> Stathis: Can the description of the apple, or bat, or whatever
>> meaningfully include what it is like to be that thing?
>
> My argument (which is Dennet's argument) is that "what it is like to be
> that thing" is identical to "being that thing". As Bruno points out, in
> 3rd
> person level (ie. the level where I am describing or simulating an apple),
> a description can not "be" a thing; but on the 1st person level (where a
> description *is* the thing, from the point of view of the thing, inside
> the
> simulation, as it were), then the description does "include" what it is
> like to be that thing. But "include" is not the correct word to use, since
> it
> subtly assumes a dualism (that the qualia exist somehow separate from the
> mere description of the thing); the description *just is* the thing.
>
> Jonathan
>

-- 

Joao Pedro Leao  :::  jl...@cfa.harvard.edu
Harvard-Smithsonian Center for Astrophysics
1815 Massachusetts Av. , Cambridge MA 02140
Work Phone: (617)-496-7990 extension 124
Cell-Phone: (617)-817-1800
----------------------------------------------
"All generalizations are abusive (specially this one!)"
-------------------------------------------------------
 

Stephen Paul King

unread,
May 20, 2005, 12:12:35 PM5/20/05
to jl...@cfa.harvard.edu, mi...@metasciences-academy.org, everyth...@eskimo.com, ti...@yahoogroups.com
Dear Joao,
 
    Your point is well taken! My failure was in not pointing out that my 'rant' was against those who claim that dualism can never be a viable alternative, especially to a numbers-are-all-that-exists monism. Thank you for pointing out that such a doctrine is called Pythagoreanism.
    OTOH, I see a failure in most discussions of Platonism in that nowhere is the concept of Becoming considered to be meaningful. I am trying, unsuccessfully it seems, to argue that it is a mistake to consider "Being" as fundamental and any form of Becoming as mere illusion. We can use the language of fixed points to show that Beingness, that which is immutable, can be faithfully identified with fixed points in a space(?) of Becoming. Platonia should be taken as that ultimate level of Existence where all forms of Becoming - not Being! - are fixed points, some kind of unique and irreducible Category of Automorphisms, and not as Existence in-itself.
 
    My words are ill-posed here, I apologize.
 
Kindest regards,
 
Stephen
 
----- Original Message -----
From: Joao Leao
Sent: Friday, May 20, 2005 11:40 AM
Subject: Re: In defense of Dualism (typos corrected)

snip

Dear SPK

Though I entirely agree with what you state above, I take issue with your
characterization of "Platonism" as some form of mathematical monism.
If you had applied the name "Pythagoreanism" to the doctrine that "only
numbers exist" you would most likely be correct. Platonism, however, is very
definitely a form of ontological dualism: Platonists never deny the
existence of the physical world, though they insist that it exists as
a corrupted copy of the world of forms, the latter holding the "true"
reality. It is one thing to reject the false, and another to deny its
existence! Sorry, but this is not a pedantic point.

In other words: in all you say above you argue as a true Platonist,
(only one that does not know he is one)!

[SPK]


    Consciousness seems to be more of a functional relationship between the
Physical and the Mental, the Outside and the Inside, as Chalmers states.
When the two dual aspects are taken to the ultimate level of Existence
in-itself, the distinction between the two vanishes. Russell saw this long
ago; he called it "neutral monism". It is too bad that he made the
mistake of excluding non-well-founded sets from consideration.
 

Joao Leao

unread,
May 20, 2005, 1:22:58 PM5/20/05
to Stephen Paul King, mi...@metasciences-academy.org, everyth...@eskimo.com, ti...@yahoogroups.com
 
Dear Stephen,

I think I catch your point. As it happens, the distinction Being/Becoming (like Form/Substance) is very Aristotelian, both in origin
and in the way we use it. If the distinction has any meaning within Platonism, it is probably as the reverse of the usual sense, i.e.,
Being refers only to the Forms (eternally) and Becoming to the finite, ever-changing, corrupt reality (= appearance) of which we
(and our souls) are part. Our access to mathematical archetypes is in this sense a "map" to help us "make our way back to the
garden", as Joni Mitchell (that great Platonist) would put it! Existence-in-itself, if you prefer. I guess that may be what all
committed Platonists are trying to do on their own (though some think they need a lot more "maps"...).

Let me close (before I mix my metaphors irrecoverably).

Best,

-Joao
 


scerir

unread,
May 20, 2005, 4:35:32 PM5/20/05
to everyth...@eskimo.com
From: "Joao Leao"
> Our access to mathematical archetypes is in
> this sense a "map" to help us "make our way back
> to the garden", as Joni Mitchell (that great
> Platonist) would put it!

If I remember well - but I studied all that 35
years ago - Aristotle called all that 'hylomorphism',
from hule = matter, or substance, and morphe = form,
or in-formation.

Whether or not hylomorphism has something to do
with the limited information carried by quantum
states and quantum states themselves, the carriers
of that limited information, is something which
I find interesting :-)

Saluti,
-serafino

Joao Leao

unread,
May 20, 2005, 5:01:18 PM5/20/05
to scerir, everyth...@eskimo.com
I am not sure that the Aristotelian term applies
here. I see hylomorphism as the position that
matter begets form (rather than the other way
around, which is the more Platonic position).

I think it applies fully to the group of attempts
to build Relational (Classical and Quantum)
Theories of space-time, such as the work of
Smolin, Rovelli, Barbour and others.
These follow Leibniz in proposing that Space
(and Time) are not things but objective relations
between material objects.

I find these interesting but anti-platonic.

-Joao
 
 


John M

unread,
May 20, 2005, 5:16:50 PM5/20/05
to everyth...@eskimo.com
Quentin Anciaux wrote:

----- Original Message -----
Subject: Re: a description of you + a description of billiard ball can
bruise you?

> > Hi,
> >
> > Le Jeudi 19 Mai 2005 21:18, John M a écrit :

> >>SNIP


> >
> > I think that is what Bruno explains (or rather my understanding of it):
> > that "consciousness" (a thing?) is emergent on all computations passing
> > through th(is/ese) state(s). If I understand, there is not one computation
> > that simulates a thing but a set of computations having this state.
> >
> > But it seems to me that an infinity of computations passing through a
> > particular state exists, so I do not very well understand how a measure can
> > be associated with it.
JM:
IMO consciousness is not "a thing", maybe a set of functions(?) - if we ever
agree. Bruno remarked:


>
> Measure theory has been developed to take into account infinite sets on
> which the measure bears.

JM:
that must be a good compromise between the holistic and model views. I
still hesitate to exempt a 'measure' from its reductionistic status in spite
of the holistic infinite set it is 'based on' - seemingly by
simulation, i.e., model construction.
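[Editor's note: Bruno's remark about measure theory on infinite sets can be illustrated with the standard cylinder-set construction: the uncountably many infinite binary histories extending a finite prefix are assigned a weight determined by the prefix alone, so an infinite set of computations can still carry a well-defined finite measure. The uniform weighting below is an illustrative choice, not anything proposed in the thread.]

```python
from fractions import Fraction
from itertools import product

def cylinder_measure(prefix: str) -> Fraction:
    """Uniform measure of the set of ALL infinite binary histories
    that extend `prefix`: each fixed bit halves the weight."""
    return Fraction(1, 2 ** len(prefix))

# The cylinders over '0' and '1' partition the whole space of histories.
assert cylinder_measure('0') + cylinder_measure('1') == 1

# Any finer partition by fixed-length prefixes also sums to 1, which is
# what lets a measure "bear on" an infinite set coherently.
total = sum(cylinder_measure(''.join(bits)) for bits in product('01', repeat=4))
assert total == 1
```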


> >
> > I'm not an expert too ;)
>
>
> Beware the experts. Today, in the interdisciplinary fields, they are
> on average worse than honest, inquiring lay(wo)men.
> Bruno

JM:
An 'expert' is a person "who knows all - better than others". As a technical
consultant I always preferred to be called "a specialist".

John M
>


Stathis Papaioannou

unread,
May 20, 2005, 11:39:36 PM5/20/05
to mar...@ulb.ac.be, jco...@ican.net, everyth...@eskimo.com
Bruno Marchal wrote:

People certainly seem to take their consciousness seriously on this list!
I've now managed to alienate both the "consciousness doesn't really exist"
and the "it exists and we can explain it" factions. I did not mean that
there is no explanation possible for consciousness. It is likely that in the
course of time the neuronal mechanisms behind the phenomenon will be worked
out and it will be possible to build intelligent, conscious machines.
Imagine that advanced aliens have already achieved this through
surreptitious study of humans over a number of decades. Their models of
human brain function are so good that by running an emulation of one or more
humans and their environment they can predict their behaviour better than
the humans can themselves. Now, I think you will agree (although Jonathan
Colvin may not) that despite this excellent understanding of the processes
giving rise to human conscious experience, the aliens may still have
absolutely no idea what the experience is actually like. For example, if
they lack any sense of vision, they cannot possibly know what it is like to
see red. This is the difference between 1st person and 3rd person
experience. At this point, Bruno, you may go further and say that the
1st/3rd person difference is not irreducible or inexplicable, but can be
shown to be a theorem in mathematical logic. This is a spectacular result,
and it is at a deeper explanatory level than the description of the neural
or computational basis of 1st person experience. However, does it help our
blind aliens understand what it is like for a human to see red? It is that
aspect of 1st person experience which cannot possibly be understood or
communicated in any way other than through oneself *being* the system that
has the experience which Chalmers calls the "hard problem" of consciousness.

--Stathis Papaioannou


Jonathan Colvin

unread,
May 21, 2005, 2:37:22 AM5/21/05
to everyth...@eskimo.com
Stathis:

> People certainly seem to take their consciousness seriously
> on this list!
> I've now managed to alienate both the "consciousness doesn't
> really exist"
> and the "it exists and we can explain it" factions. I did not
> mean that there is no explanation possible for consciousness.
> It is likely that in the course of time the neuronal
> mechanisms behind the phenomenon will be worked out and it
> will be possible to build intelligent, conscious machines.
> Imagine that advanced aliens have already achieved this
> through surreptitious study of humans over a number of
> decades. Their models of human brain function are so good
> that by running an emulation of one or more humans and their
> environment they can predict their behaviour better than the
> humans can themselves. Now, I think you will agree (although
> Jonathan Colvin may not) that despite this excellent
> understanding of the processes giving rise to human conscious
> experience, the aliens may still have absolutely no idea what
> the experience is actually like.

No, I'd agree that they have no idea what the experience is like. But this
is no more remarkable than the fact that although we may have an excellent
understanding of photons, we cannot travel at the speed of light, or that
although we may have an excellent understanding of trees, we cannot
photosynthesize. Neither of these "problems" seems particularly hard.

Jonathan Colvin

Russell Standish

unread,
May 21, 2005, 4:51:12 AM5/21/05
to James N Rose, everyth...@eskimo.com
On Fri, May 20, 2005 at 05:39:42AM -0700, James N Rose wrote:
> >
> > Agreed that colour is not a characteristic of an object in itself. How
> > does this impact on the debate, however?
>
>
> Russell,
>
> Realize first that you just easily and agreeably opted to completely negate
> the Platonic 'real v. ideal' as a flawed logic. Identification of 'essential
> qualia' is no longer an a priori valid 'given'. By the next logical extension
> of this de-validation, which qualia - assigned to an entity by way of
> external evaluation of the entity - represent qualia which the entity
> functions on immediately and intimately because the entity internally
> has an information link to them?
>
> The school prank of putting a secretly taped sign on a friend's back
> saying 'kick me': the conscious performance of the student -excludes-
> a qualia which the environmental world identifies -with- the
> student-with-sign.
>
> "A description of a system, and a system in and of itself, can never and
> will never map perfectly one to one and on to."
>
> QED
>
> Conclusions:
>
> 1. Initial condition alternatives result in alternate eventstream outcomes.
> 2. Alternate information sets preclude precision cloning,
> performances, and decision gates.
> 3. Consciousness is not perfectly transferable.
>
> Jamie

Sorry, but you've completely lost me here. I'm still looking for
relevance... What does your first sentence mean, for example? What is
the Platonic ideal vs. real? Is it Plato's cave metaphor? In which case, I
don't remember Plato's cave being brought up in discussion on this
list. It doesn't seem terribly relevant to me, or even to notions of
arithmetic Platonism, for example.

Cheers

Stathis Papaioannou

unread,
May 21, 2005, 6:46:10 AM5/21/05
to jco...@ican.net, everyth...@eskimo.com
Jonathan Colvin wrote:

We are thus at an impasse, agreeing on all the facts but differing in our
appraisal of the facts.

--Stathis Papaioannou


Bruno Marchal

unread,
May 21, 2005, 9:54:53 AM5/21/05
to Jonathan Colvin, everyth...@eskimo.com

Le 21-mai-05, à 08:31, Jonathan Colvin a écrit :


But we can photosynthesize. And we can understand why we cannot travel
at the speed of light. All this by using purely 3-person descriptions of
those phenomena in some theory.
With consciousness, the range of the debate goes from non-existence to
only-existing. The problem is that it seems that an entirely 3-person
explanation of the brain-muscles relations evacuates any purpose for
consciousness and the 1-person. That's not the case with
photosynthesis.


Bruno


>
> Jonathan Colvin
>
>
http://iridia.ulb.ac.be/~marchal/


Bruno Marchal

unread,
May 21, 2005, 11:44:12 AM5/21/05
to Bruno Marchal, Jonathan Colvin, everyth...@eskimo.com

Le 21-mai-05, à 15:48, Bruno Marchal a écrit :


... and from this don't infer that I am saying that consciousness is
not explainable. Just that consciousness cannot have the same *type* of
explanation as photosynthesis.

(With comp I would argue that an explanation of consciousness is of a
type similar to an explanation of why there is something instead of
just logic + arithmetic.)

Bruno


>
>
> Bruno
>
>
>
>
>>
>> Jonathan Colvin
>>
>>
> http://iridia.ulb.ac.be/~marchal/
>
>
>

http://iridia.ulb.ac.be/~marchal/


Lee Corbin

unread,
May 21, 2005, 3:06:38 PM5/21/05
to EverythingList
Stathis writes

> > > I did not
> > > mean that there is no explanation possible for consciousness.
> > > It is likely that in the course of time the neuronal
> > > mechanisms behind the phenomenon will be worked out and it
> > > will be possible to build intelligent, conscious machines.
> > > Imagine that advanced aliens have already achieved this
> > > through surreptitious study of humans over a number of
> > > decades. Their models of human brain function are so good
> > > that by running an emulation of one or more humans and their
> > > environment they can predict their behaviour better than the
> > > humans can themselves.

Well put.

An interesting point to add is that since human behavior
is almost surely not compressible, the *only* way that they
can learn what a human is going to do is to, in effect, run
one (the mocked up one in their lab). As you say, they run
an *emulation*.

But this could mean that they had *no* special insight into
consciousness, because by adjusting the teleporter, Scotty
can "find out" things too just by making a physical copy of
the Captain, and, for example, finding out what he'd say
about giving the engineers a raise.

But you have described Martian science very well. Here is
what I think that they are capable of that *is* important:
they could tell (or announce) with very high accuracy
whether a species was conscious, and to what extent, in
its natural environment, and do all this just from the
creature's DNA (and perhaps a little info on the inter-
uterine environment).

Here is an analogy: in a cold hut in the Scottish highlands
in 1440, two bright, but shivering, people are debating the
nature of warmth. Says one: "Brrr. Some day the scientists
will be so advanced that they can objectively measure hotness,
and you and I will more closely agree." And he turned out
to be right, as we know now.

> > > Now, I think you will agree (although
> > > Jonathan Colvin may not) that despite this excellent
> > > understanding of the processes giving rise to human conscious
> > > experience, the aliens may still have absolutely no idea what
> > > the experience is actually like.

Yes, but what does that mean? What does it mean for, say,
you to know what it's like when I play 1. e4 in a game of
chess? I can tell you that it's probably nothing at all
like when *you* play 1. e4. But it's strictly a function of
how similar our chess careers have been, whether we both
have the same opinion of the Alapin counter to the Sicilian,
and so forth. So in effect, it really comes down to how
much you are already me when you play 1. e4.

Somebody here said it much better than I: they said that
you have to almost *be* someone in order to know what
it's like to be them.

Jonathan then says

> > No, I'd agree that they have no idea what the experience is like. But this
> > is no more remarkable than the fact that although we may have an excellent
> > understanding of photons, we cannot travel at the speed of light, or that
> > although we may have an excellent understanding of trees, we cannot
> > photosynthesize. Neither of these "problems" seems particularly hard.

I totally agree.

> We are thus at an impasse, agreeing on all the facts but differing in our
> appraisal of the facts.

Maybe. But since you (Stathis) write so well, could you summarize
what your adversaries seem to be saying and what you say? I'm less
sure (than you) that no progress can be made.

thanks,
Lee

Stathis Papaioannou

unread,
May 22, 2005, 12:31:18 AM5/22/05
to mar...@ulb.ac.be, jco...@ican.net, everyth...@eskimo.com
Bruno Marchal wrote:


To be more strictly analogous with the situation for consciousness, what
Jonathan could have said is that we have no idea what it is like to *be* a
photon or to *be* a tree photosynthesising. Most people would say that
photons and trees aren't conscious, and therefore they *can* be entirely
understood from a 3rd person perspective. Perhaps this is true, but it is
not logically consistent to say that it must be true and still maintain the
1st person/ 3rd person distinction we have been discussing. This is because
the whole point of the distinction is that it is not possible to deduce or
understand that which is special about 1st person experience (namely,
consciousness) from an entirely 3rd person perspective. The aliens I have
described in my example could be as different from us as we are different
from trees, and they could easily conclude that an emulation of our minds is
not fundamentally different from an emulation of our weather.

--Stathis Papaioannou

Bruno Marchal

unread,
May 22, 2005, 1:57:15 AM5/22/05
to Stathis Papaioannou, jco...@ican.net, everyth...@eskimo.com

Le 22-mai-05, à 06:29, Stathis Papaioannou a écrit :

Which means we agree completely. I thought Jonathan, in the manner of
John Searle, was arguing that nothing in principle distinguishes a
phenomenon like consciousness from one like photosynthesis. And this is just a
traditional move made by the so-called eliminative materialists, who
just pretend consciousness (and the first person) does not exist. The error
they make, I think, comes from the fact that scientific discourses are
(by construction) made only in the 3-person manner. But nothing
prevents us from trying (at least) to build some axiomatics of first
person discourse and to make some 3-person statements about it. And
knowledge theories are like that. There is even quasi-unanimity on the
basic axiom of knowledge, "to know p entails p" (Cp -> p).
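[Editor's note: the axiom Bruno cites, Cp -> p (more often written Kp -> p, axiom T of epistemic logic), is exactly the axiom validated by reflexive Kripke frames. A minimal model checker makes the point concrete; the worlds, relation, and valuation below are illustrative choices, not anything from the thread.]

```python
# Minimal Kripke semantics for the knowledge axiom T: Kp -> p.
# The one structural requirement for T to hold everywhere is that
# the accessibility relation R be reflexive.

worlds = {'w1', 'w2'}
R = {('w1', 'w1'), ('w2', 'w2'), ('w1', 'w2')}  # reflexive relation
V = {'p': {'w1', 'w2'}}                         # worlds where p is true

def knows(w: str, prop: str) -> bool:
    """Kp holds at w iff prop holds at every world accessible from w."""
    return all(v in V[prop] for (u, v) in R if u == w)

def axiom_T(w: str, prop: str) -> bool:
    """Kp -> p, evaluated at world w."""
    return (not knows(w, prop)) or (w in V[prop])

# On a reflexive frame, T holds at every world.
assert all(axiom_T(w, 'p') for w in worlds)

# Drop reflexivity and T can fail: at w0 the agent "knows" p
# (p holds at every accessible world) while p is false at w0 itself.
R2 = {('w0', 'w3')}
V2 = {'p': {'w3'}}
knows_at_w0 = all(v in V2['p'] for (u, v) in R2 if u == 'w0')
assert knows_at_w0 and 'w0' not in V2['p']  # Kp true, p false
```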

Bruno


http://iridia.ulb.ac.be/~marchal/


Lee Corbin

unread,
May 22, 2005, 4:14:17 AM5/22/05
to EverythingList
Stathis writes

> photon or to *be* a tree photosynthesising. Most people would say that
> photons and trees aren't conscious, and therefore they *can* be entirely
> understood from a 3rd person perspective.

On this list?? You think that most people *here* presume that
photons and trees are not conscious? On what grounds could
they possibly think that?

After all, Consciousness is Deeply Mysterious, and thus might
penetrate anything or everything to an unknown degree. In fact,
it may turn out that there exists an inverse square law: something
is Conscious precisely to the square of the degree that it *appears*
to us not to be conscious. (The appearance of consciousness and
evidently conscious exchanges between Conscious entities, you see,
serves as an outlet, and diminishes True Consciousness.) Why not?

> Perhaps this is true, but it is
> not logically consistent to say that it must be true and still maintain the
> 1st person/ 3rd person distinction we have been discussing. This is because
> the whole point of the distinction is that it is not possible to deduce or
> understand that which is special about 1st person experience (namely,
> consciousness) from an entirely 3rd person perspective.

Yes, in other words, it is ineffable.

> The aliens I have described in my example [who were very clever and
> who could manufacture consciousness in objects under their control]


> could be as different from us as we are different from trees, and
> they could easily conclude that an emulation of our minds is
> not fundamentally different from an emulation of our weather.

Oh my. So while I understood earlier from you that your Martians were
wizards at creating human consciousness in objects, I didn't gather that
they *themselves* were possibly not anything-like-conscious. Have I
misunderstood anything?

Lee

Bruno Marchal

unread,
May 22, 2005, 11:30:14 AM5/22/05
to lco...@tsoft.com, EverythingList

Le 22-mai-05, à 10:13, Lee Corbin a écrit :

>
>> [Stathis] Perhaps this is true, but it is
>> not logically consistent to say that it must be true and still
>> maintain the
>> 1st person/ 3rd person distinction we have been discussing. This is
>> because
>> the whole point of the distinction is that it is not possible to
>> deduce or
>> understand that which is special about 1st person experience (namely,
>> consciousness) from an entirely 3rd person perspective.
>
> Yes, in other words, it is ineffable.


Exactly. Like consistency for sound (or just consistent) machines, if you
simplify "ineffable" to "unprovable". (Gödel's second incompleteness
theorem.)

Please don't infer that I identify consciousness with consistency, but
I do think consciousness is a "logical descendant" of consistency.

Bruno

http://iridia.ulb.ac.be/~marchal/


Stathis Papaioannou

unread,
May 22, 2005, 11:30:36 AM5/22/05
to lco...@tsoft.com, everyth...@eskimo.com
Lee,

There are some things that can be known by examination of an object, and
there are other things that can only be known by being the object. When the
object is a human brain, this latter class of things is consciousness. (When
the object is something else, this latter class of thing is... well, how
would I know?) I think that the distinction between these two types of
knowledge is surprising, and I would never have noticed it had I not been
conscious myself. I also think that there is a sense in which this special
first person knowledge can be called fundamental, because by definition it
cannot be derived from any other fact about the universe.

The response of those who think that consciousness is nothing special to the
above is that it is not surprising that there is a difference between a
description of an object and the object itself, and that what I have called
"knowledge" in reference to conscious experiences is not really knowledge,
but part of the package that comes with being a thing. I can't really argue
against this; as I said, it is just a different way of looking at the same
facts.

Much has been written about particular formulations of the mind/body problem
(or, if you prefer, "problem"). For example, Douglas Hofstadter's commentary
on Thomas Nagel's famous essay, "What Is It Like to Be a Bat?" (which I
looked up at your suggestion) makes the point that the logic of the titular
question itself is muddled: if Nagel were a bat, he would not be Nagel, and
he would therefore not be Nagel asking the question. If Nagel were actually
asking what it would be like for him to stay Nagel and experience being a
bat, perhaps by having his brain stimulated in a batty way, then that is (a)
a different question, and (b) in theory possible, and not the intractable
problem originally advertised. This is fair enough, so I shall try to avoid
talking about qualia in the way Nagel does. However, I can't get rid of the
idea that there is something special and fundamental about first person
experience.

--Stathis Papaioannou


Lee Corbin

unread,
May 22, 2005, 3:21:08 PM5/22/05
to EverythingList
Stathis writes

> There are some things that can be known by examination of an object, and
> there are other things that can only be known by being the object.

Okay; but some examples are probably necessary. (1) Only Mozart can
know what it's like for the Mozart auditory system to hear C-sharp
on a harpsichord. (2) Only a human being can know the feeling that
a human has at the loss of a family member.

(Those are the best I could do; can anybody come up with better ones?)

Note that my first example had this same peculiar linguistic structure
of "what-it-is-like-to-be". Not necessarily an indictment; but
something to notice. As for my second example: are feelings knowledge?

> When the object is a human brain, this latter class of things
> is consciousness. (When the object is something else, this
> latter class of thing is... well, how would I know?)

> I think that the distinction between these two types of knowledge
> is surprising, and I would never have noticed it had I not been
> conscious myself.

As you write (below), it's possibly debatable whether this
really is *knowledge*. Certainly it does not resemble the
usual kind of knowledge that is communicated from one person
to another. But here is my analysis of what knowledge is:

Knowledge is an internal map of something usually outside
of the skin. But then, a gunshot patient may also obtain
knowledge provided by his doctors of the exact location of
a bullet in his brain. Still, this is *knowledge* of what
conditions obtain in the physical world, encoded into a
yet different area of the patient's brain.

Are there other examples of knowledge? This is important
because, of course, one may be pressed to make the case
for consciousness *itself* to provide special knowledge
(of the non-communicable variety).



> The response of those who think that consciousness is nothing special to the
> above is that it is not surprising that there is a difference between a
> description of an object and the object itself, and that what I have called
> "knowledge" in reference to conscious experiences is not really knowledge,
> but part of the package that comes with being a thing. I can't really argue
> against this; as I said, it is just a different way of looking at the same
> facts.

Good. You anticipated my question. But your answer is oddly
interesting in a certain way: I would never have conflated this
question about "what is knowledge" with the difference between
"the description of an object" and "the object itself". Yet, it's
true: they are both examples of "the map" versus "the territory".
Interesting.

I wonder if this knower/known distinction can help even further.
After all, I might claim that in all the cases of this suspicious
different kind of "knowledge", it's as if those who see "the problem"
are trying to establish this difference between the knower and
the known in a case in which there isn't any actual difference.

In fact, the whole erection of the notion of *qualia* seems now
to me to be an effort to impose the knower/known dichotomy where
it doesn't apply. Hence the peculiar English language construction
of "what it is like to be a...".

(A good test to apply to doubtful cases where there may simply
be a semantic problem is to demand restatement using other terms.
For example, I am highly critical of the word "rights" used in
the abstract, such as "what gives X the right to do A?". So I
challenge people to try to say the same thing without using
the word "right". As near as I recall, they don't succeed without
greatly reducing the impact of what they want to say. So perhaps
a good challenge is this: we could try to articulate Nagel's
question without the construction "what it is like to be...".)

> Much has been written about particular formulations of the mind/body problem
> (or, if you prefer, "problem"). For example, Douglas Hofstadter's commentary
> on Thomas Nagel's famous essay, "What Is It Like to Be a Bat?" (which I
> looked up at your suggestion) makes the point that the logic of the titular
> question itself is muddled: if Nagel were a bat, he would not be Nagel, and
> he would therefore not be Nagel asking the question. If Nagel were actually
> asking what it would be like for him to stay Nagel and experience being a
> bat, perhaps by having his brain stimulated in a batty way, then that is (a)
> a different question, and (b) in theory possible, and not the intractable
> problem originally advertised. This is fair enough, so I shall try to avoid
> talking about qualia in the way Nagel does. However, I can't get rid of the
> idea that there is something special and fundamental about first person
> experience.

Yeah! I know the feeling! :-) I myself can't shake the feeling
that there *isn't* anything special about first person experience.

Thanks very much both for your effort to consult that source,
and, as usual, your perceptiveness and eloquence in explicating
a difficult matter.

Lee

Stathis Papaioannou
May 23, 2005, 8:31:31 AM
to lco...@tsoft.com, everyth...@eskimo.com
Lee,

What you are describing here is panpsychism. If I insist that it is
impossible to know whether and in what way an entity is conscious without
actually *being* that entity oneself, then to be consistent I have to admit
that anything and everything might be conscious. OK; I admit it;
technically, I'm a panpsychist. However, I can treat this belief in the same
way as I treat a belief in solipsism. Looking at the world around me, other
humans behave in roughly the same way I do, so by analogy with my own
experience, I assume they are conscious. Rocks, on the other hand, display
no behaviour, so I assume they are not conscious. Animals fall somewhere
between humans and rocks, so I assume they have varying levels of
consciousness depending on the complexity of their nervous system. The
implicit theory behind this classification scheme is that consciousness is
associated with the sort of information processing that occurs in organisms
with central nervous systems. Using empathy as a substitute for direct
experience, I can't be absolutely sure of this, of course, but then I can't
be absolutely sure that the world doesn't disappear when I turn my back on
it, either.

Now to my aliens. It is a nuisance when discussing philosophy of mind that
we cannot switch our consciousness off in order to study it as disinterested
observers. Addressing this problem, my hypothetical aliens are intelligent
but non-conscious or differently-conscious. I did not state this in my last
post, so you may have assumed that any intelligent entity would be
conscious. Maybe this is so; maybe it is even the case that any aliens able
to study us at all must have enough in common with us to recognise us as
fellow conscious entities. However, for the sake of argument, I wanted to
eliminate the kind of empathy that allows us to believe that other humans or
animals are conscious. The point I wanted to make is that *only* through
empathy (as a substitute for direct experience) would the aliens recognise
us as conscious. There is nothing they could go on from our behaviour alone,
no matter how well they understood it, that would provide them with an idea
of what it is like to be human from the point of view of a human. Even if
they had derived some rule through contact with multiple species, eg. "any
organism able to count to ten is conscious", this would only be understood
as an abstraction unless they were in some way able to empathise with us.

--Stathis Papaioannou


Lee Corbin
May 23, 2005, 1:04:19 PM
to everyth...@eskimo.com
Stathis writes

> If I insist that it is impossible to know whether and in
> what way an entity is conscious without actually *being*
> that entity oneself, then to be consistent I have to admit
> that anything and everything might be conscious. OK; I
> admit it; technically, I'm a panpsychist. However, I can
> treat this belief in the same way as I treat a belief in
> solipsism.

It's possible that a fundamental division between us is the
quest for certainty. It's a mistaken idea, with, IMO, a tragic
history, to worry about absolute certainty. It's unattainable
in any event. So on a literal level, I agree: to be consistent
we have to admit that anything and everything **might** be
conscious.

But that's absurd. I say that it is absurd to entertain highly
unlikely cases as being true, unless one is making some important
philosophic point of some kind.

I am glad that you then say that you treat your "belief" in
panpsychism the way that you treat belief in solipsism; namely
---if I may be so bold as to come out plainly and say it---you
just don't buy it. That is, you think it highly unlikely to
be true.

> Looking at the world around me, other humans behave in roughly
> the same way I do, so by analogy with my own experience, I
> assume they are conscious. Rocks, on the other hand, display
> no behaviour, so I assume they are not conscious.

Yes, exactly. We are in the position of people in the 14th
century who had only a vague idea of what warmth and heat were.
But just as one of them might presciently maintain (a) that
warmth is a real, not merely subjective, phenomenon (yes, a 1st
person experience, but much more importantly some kind of
scientific phenomenon in the world) and (b) that someday
careful investigators (i.e. scientists) will pin it down,
so today are *we* about consciousness: although *certainty*
will never be achieved, some day an extremely careful
examination of a physical object---the way that it manipulates
information---will reveal whether it is conscious or not.

In the meantime, we can only guess. Just as a 14th century person
might say "well, I don't know *exactly* what warmth is, but that
iceberg, by God, is *not* warm, and someday what I am saying
will be quantified", so we can say "rocks are *not* conscious, and
someday it will be proved". (Again, with the caveat that
all knowledge is conjectural, and nothing is ever "proved"
beyond doubt.)

More likely, of course, in keeping with my temperature analogy,
it will one day be proved that rocks have almost zero consciousness,
ants have a piddling amount, and dogs are very conscious.

> Animals fall somewhere between humans and rocks, so I assume
> they have varying levels of consciousness depending on the
> complexity of their nervous system. The implicit theory behind
> this classification scheme is that consciousness is associated
> with the sort of information processing that occurs in organisms
> with central nervous systems.

We agree completely here.

> Using empathy as a substitute for direct experience, I can't
> be absolutely sure of this, of course, but then I can't
> be absolutely sure that the world doesn't disappear when
> I turn my back on it, either.

Yes, and so don't worry about it. You can't be "absolutely sure"
of *anything*!

> Now to my aliens. It is a nuisance when discussing philosophy of mind that
> we cannot switch our consciousness off in order to study it as disinterested
> observers. Addressing this problem, my hypothetical aliens are intelligent
> but non-conscious or differently-conscious. I did not state this in my last
> post, so you may have assumed that any intelligent entity would be
> conscious.

For all practical purposes, and maybe for *all* purposes, we can
safely assume that any naturally evolved process that makes maps
of its surroundings, cunningly contrives to control its environment
to the point that it can survive, responds intelligently to challenges
---such a being is almost beyond doubt conscious. The only
counter-examples I know of are extremely contrived, extremely
bizarre, and involve almost infinitely much in the way of memory
resources and process time.

> Maybe this is so; maybe it is even the case that any aliens able
> to study us at all must have enough in common with us to recognise us as
> fellow conscious entities. However, for the sake of argument, I wanted to
> eliminate the kind of empathy that allows us to believe that other humans or
> animals are conscious. The point I wanted to make is that *only* through
> empathy (as a substitute for direct experience) would the aliens recognise
> us as conscious. There is nothing they could go on from our behaviour alone,
> no matter how well they understood it, that would provide them with an idea
> of what it is like to be human from the point of view of a human.

Okay, but I'd say that *all* they have to go by is what all the
rest of us have to go by: behavior. You think that someone or
something is conscious only by virtue of its behavior. You said
so above. Empathy is a *consequence* of observing such behavior;
I'm not sure why you want to accord it a special role.

> Even if they had derived some rule through contact with multiple
> species, eg. "any organism able to count to ten is conscious",
> this would only be understood as an abstraction unless they were
> in some way able to empathise with us.

Well, :-), those extremely capable aliens of yours will have
a more elaborate theory than "it can count to ten"! Let's
say that they have a complex theory involving the kinds of
circuits an entity has, the way it channels information,
the kinds of maps it makes of the world around it and, key,
the way it includes a place for itself in that map. Moreover
---vitally---their theory accords *extremely* well with their
informal (and our informal) observations.

(Oh, yes, there may have been a few surprises: they might have
discovered (in conformance with their theory, which they believe
almost as we are wed to the heliocentric theory) that, oddly, it
turned out that chipmunks were hardly conscious, or some other
unanticipated consequence.)

From your last sentence, you seem to want to demote certain
kinds of theories as being "only abstractions". But our
theories provide our best explanations, and on this usage
of words, are all that we have. And the best theories,
like the theory of evolution or the heliocentric theory, are
rightly taken by us as "factual", (never forgetting for an
instant that all knowledge is conjectural).

Lee

Stathis Papaioannou
May 24, 2005, 1:52:19 AM
to lco...@tsoft.com, everyth...@eskimo.com
Lee Corbin writes:

> > There are some things that can be known by examination of an object, and
> > there are other things that can only be known by being the object.
>
>Okay; but some examples are probably necessary. (1) Only Mozart can
>know what it's like for the Mozart auditory system to hear C-sharp
>on a harpsichord. (2) only a human being can know the feeling that
>a human has at the loss of a family member.
>

>Note that my first example had this same peculiar linguistic structure
>of "what-it-is-like-to-be". Not necessarily an indictment; but
>something to notice. As for my second example: are feelings knowledge?

That is the sort of thing I had in mind: any first person experience. I
would say that feelings are a kind of raw knowledge, as it doesn't add much
to say that they are not knowledge but what-it-is-like-to-have-a-feeling is.

> > When the object is a human brain, this latter class of things
> > is consciousness. (When the object is something else, this
> > latter class of thing is... well, how would I know?)
>
> > I think that the distinction between these two types of knowledge
> > is surprising, and I would never have noticed it had I not been
> > conscious myself.
>
>As you write (below), it's possibly debatable whether this
>really is *knowledge*. Certainly it does not resemble the
>usual kind of knowledge that is communicated from one person
>to another. But here is my analysis of what knowledge is:
>
>Knowledge is an internal map of something usually outside
>of the skin. But then, a gunshot patient may also obtain
>knowledge provided by his doctors of the exact location of
>a bullet in his brain. Still, this is *knowledge* of what
>conditions obtain in the physical world, encoded into a
>yet different area of the patient's brain.
>
>Are there other examples of knowledge? This is important
>because, of course, one may be pressed to make the case
>for consciousness *itself* to provide special knowledge
>(of the non-communicable variety).

This last sentence gets to the crux of the matter. I would say that
consciousness is a type of knowledge "of the non-communicable variety", and
it is *this* which makes it special. I say it is knowledge because if I
reflect, "I am now typing on a keyboard", it takes up RAM and hard disk
space in my brain. But I'd be happy enough if you decided it was *not*
knowledge, because that then makes it even *more* special.

Well, in that case I would have to repeat my reply to Jonathan Colvin, which
is that we basically agree on the facts of the matter but choose to
appraise/ interpret/ describe them in a different way.

--Stathis Papaioannou


Jonathan Colvin
May 25, 2005, 4:43:40 AM
to everyth...@eskimo.com
Stathis:
> >> Now, I think you will agree (although Jonathan Colvin may not)
> >> that despite this excellent understanding of the processes giving
> >> rise to human conscious experience, the aliens may still have
> >> absolutely no idea what the experience is actually like.
>
> > Jonathan Colvin: No, I'd agree that they have no idea what the
> > experience is like. But this is no more remarkable than the fact
> > that although we may have an excellent understanding of photons,
> > we can not travel at the speed of light, or that although we may
> > have an excellent understanding of trees, yet we can not
> > photosynthesize. Neither of these "problems" seems particularly
> > hard.
>
> Bruno: But we can photosynthesize. And we can understand why we
> cannot travel at the speed of light. All this by using purely
> 3-person description of those phenomena in some theory.
> With consciousness, the range of the debate goes from non-existence
> to only-existing. The problem is that it seems that an entirely
> 3-person explanation of the brain-muscles relations evacuates any
> purpose for consciousness and the 1-person. That's not the case
> with photosynthesis.

You can photosynthesize? I certainly can not (not being a tree). If I had
photosynthetic pigments in my skin, I suppose I could; and if I had rubbery
wings and sharp teeth I'd be a bat (if my aunt had wheels, she'd be a
wagon). I still can not see (intellectually) the "problem" of consciousness.
Consciousness/qualia, 1st person phenomena, etc., IMHO, being very poorly
defined, and likely non-existing entities, are a precarious pillar to base
any cosmology or metaphysics on. "Observer" is far superior, and lacks the
taint of dualism.
To borrow a page from Penrose, I see qualia in much the same light as a
shadow. Everyone can agree what a shadow is, point to one, and talk about
them. But a shadow is not a thing. The ancients made much ado about shadows,
ascribing all sorts of metaphysical significance and whatnot to them. I
think it is quite likely that the fuss about consciousness and qualia
resurrects this old mistake. Shadows of the mind, indeed.

Jonathan Colvin

Bruno Marchal
May 25, 2005, 5:51:54 AM
to Jonathan Colvin, everyth...@eskimo.com

On 25 May 2005, at 10:34, Jonathan Colvin wrote:

>>
>> Bruno: But we can photosynthesize. And we can understand why we
>> cannot travel at the speed of light. All this by using purely
>> 3-person description of those phenomena in some theory.
>> With consciousness, the range of the debate goes from
>> non-existence to only-existing. The problem is that it seems
>> that an entirely 3-person explanation of the brain-muscles
>> relations evacuates any purpose for consciousness and the
>> 1-person. That's not the case with photosynthesis.
>
> You can photosynthesize? I certainly can not (not being a tree). If I
> had
> photosynthetic pigments in my skin, I suppose I could; and if I had
> rubbery
> wings and sharp teeth I'd be a bat (if my aunt had wheels, she'd be a
> wagon). I still can not see (intellectually) the "problem" of
> consciousness.

I said I can photosynthesize, just as I would say I can fly by taking
a plane. I can photosynthesize by building some voltaic cells. This is
not the case with the brain-consciousness relation. A thorough
understanding of how the brain functions *seems* to put away any
purpose of consciousness. A thorough understanding of photosynthesis
does not lead to an equivalent problem.

> I still can not see (intellectually) the "problem" of consciousness.

It is the problem of relating first person subjective private
experience with third person sharable theories and experiments. There
is a vast literature. A good intro is:
Tye, M. (1995). Ten Problems of Consciousness. The MIT Press,
Cambridge, Massachusetts.

> Consciousness /qualia, 1st person phenomena, etc, IMHO, being very
> poorly
> defined,

Universes, matter, existence, ... are also not well defined. Perhaps
you are not interested in such problems. The success of "natural
science" is due in great part to the simplifying assumption of
psychophysical parallelism. I have proved such an assumption is just
incompatible with the computationalist assumption in cognitive science.
I have also reduced the problem of the existence of the 1-person to the
problem of the existence of third person sharable truth. And partially
solved it.
My problem: few physicists know what axiomatic methodology is. It is
the art of reasoning without even trying to define the concepts on
which we reason. We need just to agree on properties bearing on those
things, captured by formulas and inference rules. Mathematicians have
proceeded in this way for more than a century now.
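
To illustrate the axiomatic methodology described here (an example supplied by the editor, not in the original post), first-order Peano Arithmetic characterises the natural numbers without ever defining what a number *is*; one agrees only on properties, captured by formulas and inference rules:

```latex
% Two successor axioms and the induction schema of first-order Peano
% Arithmetic. Nothing says what 0 or the successor s "are"; only their
% properties are fixed, and all reasoning proceeds from these formulas.
\forall x\,\bigl(s(x) \neq 0\bigr)
\qquad
\forall x\,\forall y\,\bigl(s(x) = s(y) \rightarrow x = y\bigr)
\qquad
\bigl[\varphi(0) \wedge \forall x\,\bigl(\varphi(x) \rightarrow \varphi(s(x))\bigr)\bigr]
  \rightarrow \forall x\,\varphi(x)
```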

> and likely non-existing entities,

What about persons' rights? What about pleasure and pain? It
seems to me you have just excluded those things from your definition of
science, and I'm afraid you make the category error I have described
recently.


> are a precarious pillar to base
> any cosmology or metaphysics on.

With comp, we just have no choice in the matter. If you are interested,
at some point we can follow the proof step by step. I'm always
interested in where, precisely, people have difficulties.


> To borrow a page from Penrose, I see qualia in much the same light as a
> shadow.

As an (arithmetical) platonist, this is how I conceive anything
physical. Qualia are more colourful, it seems to me. Wavelength looks
more like shadows, imo.


> Everyone can agree what a shadow is, point to one, and talk about
> them. But a shadow is not a thing. The ancients made much ado about
> shadows,
> ascribing all sorts of metaphysical significance and whatnot to them. I
> think it is quite likely that the fuss about consciousness and qualia
> resurrects this old mistake. Shadows of the mind, indeed.

> "Observer" is far superior, and lacks the
> taint of dualism.

As Stephen knows, comp leads to monism. In a nutshell, there are only
numbers. And numbers (in the form of digital machines) can provide a
complete (and startling) explanation of why, from their point of view,
it looks as if there is much more than numbers, and why it necessarily
takes the shape of a 1-3 person distinction.

Bruno


http://iridia.ulb.ac.be/~marchal/

Jonathan Colvin
May 25, 2005, 6:28:23 PM
to everyth...@eskimo.com

**********
Interleaving;
***********

> Bruno: But we can photosynthesize. And we can understand why we
> cannot travel at the speed of light. All this by using purely
> 3-person description of those phenomena in some theory.
> With consciousness, the range of the debate goes from non-existence
> to only-existing. The problem is that it seems that an entirely
> 3-person explanation of the brain-muscles relations evacuates any
> purpose for consciousness and the 1-person. That's not the case
> with photosynthesis.

> JC: You can photosynthesize? I certainly can not (not being a tree).
> If I had photosynthetic pigments in my skin, I suppose I could; and
> if I had rubbery wings and sharp teeth I'd be a bat (if my aunt had
> wheels, she'd be a wagon). I still can not see (intellectually) the
> "problem" of consciousness.

> Bruno: I said I can photosynthesize, just as I would say I can fly
> by taking a plane. I can photosynthesize by building some voltaic
> cells. This is not the case with the brain-consciousness relation.
> A thorough understanding of how the brain functions *seems* to put
> away any purpose of consciousness. A thorough understanding of
> photosynthesis does not lead to an equivalent problem.


*************************
By "consciousness", I think you mean "qualia". "Consciousness" can easily be
conflated with "self-awareness", which has an evolutionary purpose (it
enables us to "step outside" our own minds (treat them as virtual machines),
and thus anticipate our own and others' actions).
*************************

> JC: I still can not see (intellectually) the "problem" of
> consciousness.

> Bruno: It is the problem of relating first person subjective private
> experience with third person sharable theories and experiments.
> There is a vast literature. A good intro is:
> Tye, M. (1995). Ten Problems of Consciousness. The MIT Press,
> Cambridge, Massachusetts.

*********************************
If you deny (as I do) that there is such a "thing" as first person
subjective experience (qualia) the problem goes away.
*********************************



> JC: Consciousness/qualia, 1st person phenomena, etc, IMHO, being
> very poorly defined,

> Bruno: Universes, matter, existence, ... are also not well defined.
> Perhaps you are not interested in such problems. The success of
> "natural science" is due in great part to the simplifying assumption
> of psychophysical parallelism. I have proved such an assumption is
> just incompatible with the computationalist assumption in cognitive
> science. I have also reduced the problem of the existence of the
> 1-person to the problem of the existence of third person sharable
> truth. And partially solved it.
> My problem: few physicists know what axiomatic methodology is. It is
> the art of reasoning without even trying to define the concepts on
> which we reason. We need just to agree on properties bearing on
> those things, captured by formulas and inference rules.
> Mathematicians have proceeded in this way for more than a century
> now.

> JC: and likely non-existing entities,

> Bruno: What about persons' rights? What about pleasure and pain? It
> seems to me you have just excluded those things from your definition
> of science, and I'm afraid you make the category error I have
> described recently.

****************************************************
Rights, pleasure, pain...I don't deny we can talk about these (like shadows)
*as if* they actually exist, but they do not fall into the same category of
things as electrons and universes, or indeed any other part of Platonia. I
do indeed exclude them from science, but I think the category error is not
mine.

****************************************************


> JC: are a precarious pillar to base any cosmology or metaphysics on.

> Bruno: With comp, we just have no choice in the matter. If you are
> interested, at some point we can follow the proof step by step. I'm
> always interested in where, precisely, people have difficulties.

> JC: To borrow a page from Penrose, I see qualia in much the same
> light as a shadow.

> Bruno: As an (arithmetical) platonist, this is how I conceive
> anything physical. Qualia are more colourful, it seems to me.
> Wavelength looks more like shadows, imo.

*******************************************************
I am also an arithmetical Platonist, but where we differ is our belief in
the relevance of 1st person phenomena. I just don't see that they are
relevant to anything other than "human discourse" (ie. "How are you feeling
today? Bit of a pain in the Gulliver.") You appear to be trying to extend
qualia into a category relevant to cosmology/science/Platonia, and it is
this initial step that I don't follow (mixing together Popper's Worlds I and
II). I agree self awareness is important for anthropic observer selection
phenomena, but you appear to be positing a much more fundamental role for
qualia. Mais je dois admettre que je ne commence pas a comprendre votre
theorie. [But I must admit that I do not begin to understand your theory.]

Jonathan Colvin
*******************************************************

Bruno Marchal
May 27, 2005, 12:41:30 PM
to Jonathan Colvin, everyth...@eskimo.com

On 26 May 2005, at 00:24, Jonathan Colvin wrote:

> I am also an arithmetical Platonist, but where we differ is our belief
> in
> the relevance of 1st person phenomena.

Relevant or not, they exist. And they are in need of explanation.

> I just don't see that they are
> relevant to anything other than "human discourse" (ie. "How are you
> feeling
> today? Bit of a pain in the Gulliver.")


That's not just human discourse. There is conscious life behind the
discourse. It needs to be explained, or explained away, but
rigorously. I find it much easier to explain matter *away*. You posit
a physical universe; I don't.


> You appear to be trying to extend
> qualia into a category relevant to cosmology/science/Platonia, and it
> is
> this initial step that I don't follow (mixing together Popper's Worlds
> I and
> II).

It is not at all an initial step. My initial step is just the question:
what can a machine really prove about itself, and what can it hope, or
bet, about its consistent extensions?


> I agree self awareness is important for anthropic observer selection
> phenomena,


I don't use anthropic observer selection, although some parts of what I
do can be recast in such a setting. But for about two years now I have
avoided such talk on the list. It could be misleading when used
together with the comp hyp.


> but you appear to be positing a much more fundamental role for
> qualia.

Not at all. I accept them for empirical reasons. I expect any serious
TOE to explain them, and to solve the problems raised by their
existence. I do not posit them. But I am glad to recover candidates for
qualia in the machine's discourses.
(Actually, in the machine's silences ...)


> Mais je dois admettre que je ne commence pas a comprendre votre
> theorie. [But I must admit I do not begin to understand your theory
> (literal translation by BM)]

Thanks for your frankness. To be honest, it is not really my theory. It
is the theory of any sound universal machine having enough
"introspective abilities", like Peano Arithmetic, or Zermelo-Fraenkel
set theory, or any effective extension of them. Actually the theory
remains sound (but no longer complete) for a vast class of
"super-Turing machines". I call them Lobian machines. This is because
Solovay related them to an important theorem and formula obtained by
the German logician Löb. It is an important extension of Gödel's second
incompleteness theorem.

Well, to be sure, my thesis has two parts. In the first part I make
the Universal Dovetailer Argument (UDA), which (along with Olympia's
stuff) just shows that if we accept the comp hyp in cognitive science,
then the mind-body problem is partially but necessarily reduced to the
problem of the appearances of stable third person discourses
(including talk about what is observable (physics)). This can be
explained without technicalities. In the second part, given the
startling consequences of the first part, I ask the opinion of a
Lobian machine. Given that for them comp is trivially true (by
construction), I can extract some physics from comp. Then I compare
with empirical physics. Comp is made refutable, but until now, what I
derive just confirms comp.

Hope this helps a little bit,

Bruno


http://iridia.ulb.ac.be/~marchal/


Bruno Marchal
May 28, 2005, 12:31:47 PM
to Stathis Papaioannou, lco...@tsoft.com, everyth...@eskimo.com

On 22 May 2005, at 17:03, Stathis Papaioannou wrote (in part):

> The response of those who think that consciousness is nothing special
> to the above is that it is not surprising that there is a difference
> between a description of an object and the object itself, and that
> what I have called "knowledge" in reference to conscious experiences
> is not really knowledge, but part of the package that comes with being
> a thing. I can't really argue against this; as I said, it is just a
> different way of looking at the same facts.


Exactly. And that is something utterly important, I think, which people
always forget. This has been understood and explained recurrently
throughout human history. For example, I have found it rather
explicitly stated in:

1) An Indian text of the eleventh century (I think): Drg-Drçya-Viveka
("How to discriminate the spectator from the spectacle?"), translated
from Nikhilânanda's English version by Marcel Sauton, Librairie
d'Amérique et d'Orient, Adrien Maisonneuve, Paris.

2) The old Wittgenstein in his last book, "On Certainty", where he
says that "to know" and "to believe" could be the same state of mind,
but in different contexts.

3) Plato's Theaetetus, where he defines knowledge as justified true
belief.

4) Stathis (see above). Er ... Correct me if I am wrong ;)

5) Boolos, Goldblatt, Kuznetsov and Muravitski, who discovered that
G* proves that Bp is equivalent to (Bp and p), but that G does not
prove it, so that the logic of Cp = (Bp and p) gives a knower logic.
Bp and (Bp and p) are just different ways of looking at the same
arithmetical fact, but the (Gödelian) gap between proof (G) and truth
(G*) makes the two logics quite different.
Stephen: it is the logic of Cp (= Bp & p) which gives rise to S4Grz
(the "canonical" machine first person knower/time logic).
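
The logical facts in 5) can be set down compactly. The following is a sketch in standard provability-logic notation (supplied here for reference; the statements of Löb's axiom and the Grz axiom come from the standard literature, not from the original post), with B rendered as the box:

```latex
% G (often called GL) = modal logic K + L\"ob's axiom, with necessitation.
% G* = the modal sentences true when \Box is read as provability in PA;
%      it is closed under modus ponens but not under necessitation.
\text{L\"ob: } \Box(\Box p \rightarrow p) \rightarrow \Box p
% Reflection is true but not provable:
G^{*} \vdash \Box p \rightarrow p
  \quad\text{hence}\quad
G^{*} \vdash \Box p \leftrightarrow (\Box p \wedge p),
  \qquad
G \nvdash \Box p \rightarrow p
% The "knower" Cp := \Box p \wedge p obeys S4Grz = S4 + Grz:
\text{Grz: } \Box(\Box(p \rightarrow \Box p) \rightarrow p) \rightarrow p
```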

Those who do not understand "4)" should take it as an advertisement
for Smullyan's Forever Undecided, and for the whole Gödel-Löb
provability/consistency field.

Bruno

PS: Axioms of G and G*, and of S4Grz, can be found in my 1999 post to
the list:
http://www.escribe.com/science/theory/m1417.html

http://iridia.ulb.ac.be/~marchal/

