Epiphenomenalism (was: Re: Bruno's Restaurant)


Stathis Papaioannou

Sep 24, 2012, 11:45:10 PM
to everyth...@googlegroups.com
On Wed, Sep 19, 2012 at 12:00 AM, Jason Resch <jason...@gmail.com> wrote:

> Pain is anything but epiphenomenal. The fact that someone is able to talk about it rules out it being an epiphenomenon.

The behaviour - talking about the pain - could be explained entirely
as a sequence of physical events, without any hint of underlying
qualia. By analogy, we can explain the behaviour of a billiard ball
entirely in physical terms, without any idea if the ball has qualia or
some other ineffable non-quale property. In the ball's case this
property, like the experience of pain, would be epiphenomenal, without
causal efficacy of its own.


--
Stathis Papaioannou

Jason Resch

Sep 25, 2012, 1:34:22 AM
to everyth...@googlegroups.com

If it has no causal efficacy, what causes someone to talk about the pain they are experiencing?  Is it all coincidental?

I find the entire concept of epiphenomenalism to be self-defeating: if it were true, there is no reason to expect anyone to ever have proposed it.  If consciousness were truly an epiphenomenon then the experience of it and the resulting wonder about it would necessarily be private and non-shareable.  In other words, whoever is experiencing the consciousness with all its intrigue can in no way effect changes in the physical world.  So then who is it that proposes the theory of epiphenomenalism to explain the mystery of conscious experience?  It can't be the causally inefficacious experiencer.  The only consistent answer epiphenomenalism can offer is that the theory of epiphenomenalism comes from a causally efficacious entity which is in no way affected by experiences.  It might as well be considered a non-experiencer, for it would behave the same regardless of whether it experienced something or if it were a zombie.

Epiphenomenalism is forced to defend the absurd notion that epiphenomenalism (and all other theories of consciousness) are proposed by things that have never experienced consciousness.  Perhaps instead, its core assumption is wrong.  The reason for all these books and discussion threads about consciousness is that experiences and consciousness are causally efficacious.  If they weren't, then why is anyone talking about them?

Jason

Bruno Marchal

Sep 25, 2012, 4:43:24 AM
to everyth...@googlegroups.com

On 25 Sep 2012, at 05:45, Stathis Papaioannou wrote:

> On Wed, Sep 19, 2012 at 12:00 AM, Jason Resch <jason...@gmail.com>
> wrote:
>
>> Pain is anything but epiphenomenal. The fact that someone is able
>> to talk about it rules out it being an epiphenomenon.
>
> The behaviour - talking about the pain - could be explained entirely
> as a sequence of physical events, without any hint of underlying
> qualia.

With comp, a physical event is explained in terms of measure and
machine/number relative consciousness selection (à la the WM-duplication).
Physics is phenomenal. It is an internal consciousness selection made
on coherent computations (arithmetical relations).
We can't explain physics without a theory of quanta, which, in comp,
is a sub-theory of a theory of consciousness/qualia.

Consciousness is not epiphenomenal: it is the "extractor" of the
physical realities in arithmetic. We could say that consciousness is
the universal self-accelerating property of the universal number which
makes possible the differentiation of the experience, and then the
physical reality is a projection. I could consider consciousness as
the main "force" in the universe, even if it is also a phenomenal
reality (the ontology being only arithmetic, or finite combinatorial
relations).

If you associate consciousness with the unconscious (automated)
inference in self-consistency, you can explain formally those self-
accelerating relative processes. It makes consciousness the "cause" of
all motions in the physical universe, even if the "causes" are given by
infinities of arithmetical relations + the (apparently plural
personal) self-selection.

Bruno



> By analogy, we can explain the behaviour of a billiard ball
> entirely in physical terms, without any idea if the ball has qualia or
> some other ineffable non-quale property. In the ball's case this
> property, like the experience of pain, would be epiphenomenal, without
> causal efficacy of its own.
>
>
> --
> Stathis Papaioannou
>

http://iridia.ulb.ac.be/~marchal/



Roger Clough

Sep 25, 2012, 8:59:16 AM
to everything-list
Hi Stathis Papaioannou

If you tell me that my mother has just died and I cry,
the process is initiated not by the objective brain, but
by a subjective thought. And more importantly, there has
to be a self to cause that thought and that reaction.
It was MY mother! I feel sad.
The brain does not have a self (an I, a my). So it cannot
consciously cause anything.


Roger Clough, rcl...@verizon.net
9/25/2012
"Forever is a long time, especially near the end." -Woody Allen



Craig Weinberg

Sep 25, 2012, 1:03:29 PM
to everyth...@googlegroups.com


On Tuesday, September 25, 2012 4:43:29 AM UTC-4, Bruno Marchal wrote:

On 25 Sep 2012, at 05:45, Stathis Papaioannou wrote:

> On Wed, Sep 19, 2012 at 12:00 AM, Jason Resch <jason...@gmail.com>  
> wrote:
>
>> Pain is anything but epiphenomenal.  The fact that someone is able  
>> to talk about it rules out it being an epiphenomenon.
>
> The behaviour - talking about the pain - could be explained entirely
> as a sequence of physical events, without any hint of underlying
> qualia.

With comp, a physical event is explained in terms of measure and
machine/number relative consciousness selection (à la the WM-duplication).
Physics is phenomenal. It is an internal consciousness selection made
on coherent computations (arithmetical relations).
We can't explain physics without a theory of quanta, which, in comp,
is a sub-theory of a theory of consciousness/qualia.

Consciousness is not epiphenomenal: it is the "extractor" of the  
physical realities in arithmetic. We could say that consciousness is  
the universal self-accelerating property of the universal number which  
makes possible the differentiation of the experience, and then the  
physical reality is a projection. I could consider consciousness as  
the main "force" in the universe, even if it is also a phenomenal  
reality (the ontology being only arithmetic, or finite combinatorial  
relations).

We are on the same page here then. My only question is, if consciousness is the main "force" in the universe, doesn't it make more sense to see arithmetic as the "condenser" of experiences into physical realism? I can easily see why experience would need semiotic compressions to organize itself, but I can see no reason that arithmetic or physical realities would possibly need to be 'extracted', or even what that would mean. Why execute a program if all possible outcomes are already computable?

Craig

Bruno Marchal

Sep 26, 2012, 3:44:59 AM
to everyth...@googlegroups.com
It makes sense once we assume comp, as we attach consciousness to computations, whose existence is guaranteed by arithmetic.


I can easily see why experience would need semiotic compressions to organize itself, but I can see no reason that arithmetic or physical realities would possibly need to be 'extracted', or even what that would mean.

This is what the Universal Dovetailer Argument explains.


Why execute a program if all possible outcomes are already computable?

To be computable is not enough if the computations are not done, relative to the situation you are in.
Your question is like "why should I pay for this beer if I can show that I can pay for it?".

Bruno
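
A note on "dovetailing", since the term recurs in this thread: it is the standard trick of interleaving the execution of every program in an enumeration, so that no single non-halting program ever blocks the others. The Python sketch below only illustrates that interleaving; it is not Bruno's Universal Dovetailer itself, and toy_program and the five-round cutoff are invented stand-ins (a real UD enumerates all programs and never stops).

def toy_program(i):
    # Stand-in for the i-th program in the enumeration: an endless
    # computation that yields one "state" per step.
    state = 0
    while True:
        yield (i, state)
        state += i + 1

def dovetail(rounds=5):
    # Round k: start the newly enumerated program k, then advance every
    # program started so far by one step each.
    running = []
    for k in range(rounds):              # a real dovetailer never stops
        running.append(toy_program(k))   # begin the newly enumerated program
        for prog in running:
            print("executed step", next(prog))

dovetail()

Run as-is, this prints one step of program 0, then one step each of programs 0 and 1, and so on; in the limit every enumerated program receives unboundedly many execution steps.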




Craig Weinberg

Sep 26, 2012, 1:37:31 PM
to everyth...@googlegroups.com

Are you saying that arithmetic guarantees consciousness because it obviously supervenes on awareness, or are you saying that consciousness is specifically inevitable from arithmetic truth? If the latter, then it sounds like you are saying that some arithmetic functions can only be expressed as pain or blue... in which case, how are they really arithmetic? Besides, we have never seen a computation turn blue or create blueness.
 


I can easily see why experience would need semiotic compressions to organize itself, but I can see no reason that arithmetic or physical realities would possibly need to be 'extracted', or even what that would mean.

This is what the Universal Dovetailer Argument explains.

If it does, then I don't understand it. If you can explain it with a common sense example as a metaphor, then I might be able to get more of it.
 


Why execute a program if all possible outcomes are already computable?

To be computable is not enough if the computations are not done, relative to the situation you are in.
Your question is like "why should I pay for this beer if I can show that I can pay for it?".

Yes, why should I pay for the beer if it's arithmetically inevitable that I have paid for the beer in the future?

Craig

Stathis Papaioannou

Sep 26, 2012, 10:24:15 PM
to everyth...@googlegroups.com
On Tue, Sep 25, 2012 at 3:34 PM, Jason Resch <jason...@gmail.com> wrote:

> If it has no causal efficacy, what causes someone to talk about the pain
> they are experiencing? Is it all coincidental?

There is a sequence of physical events from the application of the
painful stimulus to the subject saying "that hurts", and this
completely explains the observable behaviour. We can't observe the
experience itself. If the experience had separate causal powers we
would be able to observe its effects: we would see that neurons were
miraculously firing contrary to physical law, and explain this as the
immaterial soul affecting the physical world.

> I find the entire concept of epiphenominalism to be self-defeating: if it
> were true, there is no reason to expect anyone to ever have proposed it. If
> consciousness were truly an epiphenomenon then the experience of it and the
> resulting wonder about it would necessarily be private and non-shareable.
> In other words, whoever is experiencing the consciousness with all its
> intrigue can in no way effect changes in the physical world. So then who is
> it that proposes the theory of epiphenominalism to explain the mystery of
> conscious experience? It can't be the causally inefficacious experiencer.
> The only consistent answer epiphenominalism can offer is that the theory of
> epiphenominalism comes from a causally efficacious entity which in no way is
> effected by experiences. It might as well be a considered a
> non-experiencer, for it would behave the same regardless of whether it
> experienced something or if it were a zombie.

The experiencer would behave the same if he were a zombie, since that
is the definition of a zombie. I know I'm not a zombie and I believe
that other people aren't zombies either, but I can't be sure.

> Epiphenominalism is forced to defend the absurd notion that epiphenominalism
> (and all other theories of consciousness) are proposed by things that have
> never experienced consciousness. Perhaps instead, its core assumption is
> wrong. The reason for all these books and discussion threads about
> consciousness is that experiences and consciousness are causally
> efficacious. If they weren't then why is anyone talking about them?

The people talking about them could be zombies. There is nothing in
any observation of people's behaviour that *proves* they are
conscious, because consciousness is not causally efficacious. It is
emergent, at a higher level of description, supervenient or
epiphenomenal - but not separately causally efficacious, or the
problem of other minds and zombies would not exist.


--
Stathis Papaioannou

Jason Resch

Sep 26, 2012, 11:29:56 PM
to everyth...@googlegroups.com
On Wed, Sep 26, 2012 at 9:24 PM, Stathis Papaioannou <stat...@gmail.com> wrote:
On Tue, Sep 25, 2012 at 3:34 PM, Jason Resch <jason...@gmail.com> wrote:

> If it has no causal efficacy, what causes someone to talk about the pain
> they are experiencing?  Is it all coincidental?

There is a sequence of physical events from the application of the
painful stimulus to the subject saying "that hurts", and this
completely explains the observable behaviour.

But can you separate the consciousness from that sequence of physical events or not?  There are multiple levels involved here and you may be missing the forest for the trees by focusing only on the atoms.  Saying that consciousness is irrelevant in the processes of the brain may be like saying human psychology is irrelevant in the price moves of the stock market.  Of course, you might explain the price moves in terms of atomic interactions, but you are missing the effects of higher-level phenomena, which are real and do make a difference.
 
We can't observe the
experience itself.

I'm not convinced of this.  While today, we have difficulty in even defining the term, in the future, with better tools and understanding of minds and consciousness, we may indeed be able to tell if a certain process implements the right combination of processes to have what we would call a mind.  By tracing the flows of information in its mind, we might even know what it is and isn't aware of.

Albeit at a low resolution, scientists have already extracted from brain scans what people are seeing:
http://www.newscientist.com/article/dn16267-mindreading-software-could-record-your-dreams.html
 
If the experience had separate causal powers we
would be able to observe its effects: we would see that neurons were
miraculously firing contrary to physical law, and explain this as the
immaterial soul affecting the physical world.

It sounds like you are saying either epiphenomenalism is true or interactionism is true ( http://en.wikipedia.org/wiki/Dualism_(philosophy_of_mind)#Dualist_views_of_mental_causation ).  Both of these are forms of dualism, and I think both are false.

Violations of physics are not required for consciousness to have effects.  After all, no violations of physics are required for human psychology to have effects on stock prices.
 

> I find the entire concept of epiphenominalism to be self-defeating: if it
> were true, there is no reason to expect anyone to ever have proposed it.  If
> consciousness were truly an epiphenomenon then the experience of it and the
> resulting wonder about it would necessarily be private and non-shareable.
> In other words, whoever is experiencing the consciousness with all its
> intrigue can in no way effect changes in the physical world.  So then who is
> it that proposes the theory of epiphenominalism to explain the mystery of
> conscious experience?  It can't be the causally inefficacious experiencer.
> The only consistent answer epiphenominalism can offer is that the theory of
> epiphenominalism comes from a causally efficacious entity which in no way is
> effected by experiences.  It might as well be a considered a
> non-experiencer, for it would behave the same regardless of whether it
> experienced something or if it were a zombie.

The experiencer would behave the same if he were a zombie, since that
is the definition of a zombie.

Dualist theories, including epiphenomenalism, lead to the notion that zombies are logically consistent.  I don't think zombies make any sense.  Do you?
 
I know I'm not a zombie and I believe
that other people aren't zombies either, but I can't be sure.

If you were a zombie, you would still know that you were not a zombie, and still believe other people are not zombies either, but you could not be sure.

This follows because the notion of knowing, which I define as possessing information, applies equally to zombie and non-zombie brains.  Both brains have identical information content, so they both know exactly the same things.  They both know what red is like, they both know what pain is like.  It's just that there is some magical notion of there being a difference between them, which is completely illogical.  Zombies don't make sense, and therefore neither do dualist theories such as epiphenomenalism.
 

> Epiphenominalism is forced to defend the absurd notion that epiphenominalism
> (and all other theories of consciousness) are proposed by things that have
> never experienced consciousness.  Perhaps instead, its core assumption is
> wrong.  The reason for all these books and discussion threads about
> consciousness is that experiences and consciousness are causally
> efficacious.  If they weren't then why is anyone talking about them?

The people talking about them could be zombies. There is nothing in
any observation of peoples' behaviour that *proves* they are
conscious,

Consciousness is defined on dictionary.com as "awareness of sensations, thoughts, surrounds, etc."  Awareness is defined as "having knowledge".  So we can say consciousness is merely having knowledge of sensations, thoughts, surroundings, etc.

It then becomes a straightforward problem of information theory and computer science to know if a certain system possesses knowledge of those things or not.

This isn't startling.  Doctors today declare people brain dead and take them off life support using the same assumptions.  If we had no principles for determining if something is conscious or not, would we still do this?  Do you worry about stepping on rocks because it might hurt them?  We have good reasons not to worry about those things because we assume there are certain necessary levels of complexity and information processing ability needed to be conscious.  So perhaps if we can tell with reasonable certainty something is not conscious, we might also be reasonably certain that a certain other thing IS conscious.

Proof is another matter, and likely one we will never get.  Your entire life could be a big delusion and everything you might think you know could be wrong.  We can never really prove anything.
 
because consciousness is not causally efficacious.

I disagree with this.
 
It is
emergent, at a higher level of description, supervenient

Right, it could be emergent / supervenient, but that does not mean it is causally inefficacious.

You need to look at the counterfactual to say whether or not it is causally important.  Ask "If this thing were not conscious, would it still behave in the same way?"  If not, then how can we say that consciousness is causally inefficacious?
 
or
epiphenomenal - but not separately causally efficacious, or the
problem of other minds and zombies would not exist.


There is no problem of zombies if you can show the idea to be inconsistent.

Jason

Stephen P. King

Sep 27, 2012, 12:09:23 AM
to everyth...@googlegroups.com
On 9/26/2012 11:29 PM, Jason Resch wrote:


On Wed, Sep 26, 2012 at 9:24 PM, Stathis Papaioannou <stat...@gmail.com> wrote:
On Tue, Sep 25, 2012 at 3:34 PM, Jason Resch <jason...@gmail.com> wrote:

> If it has no causal efficacy, what causes someone to talk about the pain
> they are experiencing?  Is it all coincidental?

There is a sequence of physical events from the application of the
painful stimulus to the subject saying "that hurts", and this
completely explains the observable behaviour.

But can you separate the consciousness from that sequence of physical events or not?  There are multiple levels involved here and you may be missing the forest for the trees by focusing only on the atoms.  Saying the consciousness is irrelevant in the processes of the brain may be like saying human psychology is irrelevant in the price moves of the stock market.  Of course, you might explain the price moves in terms of atomic interactions, but you are missing the effects of higher-level phenomenon, which are real and do make a difference.
 
We can't observe the
experience itself.

I'm not convinced of this.  While today, we have difficulty in even defining the term, in the future, with better tools and understanding of minds and consciousness, we may indeed be able to tell if a certain process implements the right combination of processes to have what we would call a mind.  By tracing the flows of information in its mind, we might even know what it is and isn't aware of.

Albeit at a low resolution, scientists have already extracted from brain scans what people are seeing:
 
If the experience had separate causal powers we
would be able to observe its effects: we would see that neurons were
miraculously firing contrary to physical law, and explain this as the
immaterial soul affecting the physical world.

It sounds like you are saying either epiphenomenalism is true or interactionism is true ( http://en.wikipedia.org/wiki/Dualism_(philosophy_of_mind)#Dualist_views_of_mental_causation ).  Both of these are forms of dualism, and I think both are false.

    Because they assume a substantive and thus separable substrate, they are false.



Violations of physics are not required for consciousness to have effects.  After all, no violations of physics are required for human psychology to have effects on stock prices.
   
    Demonstrating that minds are not epiphenomena!


 

> I find the entire concept of epiphenominalism to be self-defeating: if it
> were true, there is no reason to expect anyone to ever have proposed it.  If
> consciousness were truly an epiphenomenon then the experience of it and the
> resulting wonder about it would necessarily be private and non-shareable.
> In other words, whoever is experiencing the consciousness with all its
> intrigue can in no way effect changes in the physical world.  So then who is
> it that proposes the theory of epiphenominalism to explain the mystery of
> conscious experience?  It can't be the causally inefficacious experiencer.
> The only consistent answer epiphenominalism can offer is that the theory of
> epiphenominalism comes from a causally efficacious entity which in no way is
> effected by experiences.  It might as well be a considered a
> non-experiencer, for it would behave the same regardless of whether it
> experienced something or if it were a zombie.

The experiencer would behave the same if he were a zombie, since that
is the definition of a zombie.

Dualist theories, including epiphenominalism, lead to the notion that zombies are logically consistent.  I don't think zombies make any sense.  Do you?

    These dualisms consider mind and body to be separable; this is where they fail. If mind and body are merely distinct aspects of the same basic primitive then we get a prediction that zombies are not possible. Every mind must have an embodiment and every body must have (some kind of) a mind.


 
I know I'm not a zombie and I believe
that other people aren't zombies either, but I can't be sure.

If you were a zombie, you would still know that you were not a zombie, and still believe other people are not zombies either, but you could not be sure.

    How does this follow from the definition of a zombie? They have no qualia, thus no ability to reason about qualia!



This follows because the notion of knowing, which I define as possessing information, applies equally to zombie and non-zombie brains.  Both brains have identical information content, so they both know exactly the same things.

    Then what makes a zombie a zombie???


 They both know what red is like, they both know what pain is like.   It's just there is some magical notion of there being a difference between them which is completely illogical.  Zombies don't make sense, and therefore neither do dualist theories such as epihenominalism.

    No, the reports that are uttered by a zombie, if we are consistent, are not reports of knowledge any more than the output of my calculator is knowledge!


 

> Epiphenominalism is forced to defend the absurd notion that epiphenominalism
> (and all other theories of consciousness) are proposed by things that have
> never experienced consciousness.  Perhaps instead, its core assumption is
> wrong.  The reason for all these books and discussion threads about
> consciousness is that experiences and consciousness are causally
> efficacious.  If they weren't then why is anyone talking about them?

The people talking about them could be zombies. There is nothing in
any observation of peoples' behaviour that *proves* they are
conscious,

Consciousness is defined on dictionary.com as "awareness of sensations, thoughts, surrounds, etc."  Awareness is defined as "having knowledge".  So we can say consciousness is merely having knowledge of sensations, thoughts, surroundings, etc.


    Right, and it is this that zombies lack.


It then becomes a straightforward problem of information theory and computer science to know if a certain system possesses knowledge of those things or not.

    Knowledge, at least tacitly, implies the ability to act upon the data, not just be guided by it.



This isn't startling.  Doctors today declare people brain dead and take them off life support using the same assumptions.  If we had no principles for determining if something is conscious or not, would we still do this?  Do you worry about stepping on rocks because it might hurt them?  We have good reasons not to worry about those things because we assume there are certain necessary levels of complexity and information processing ability needed to be conscious.  So perhaps if we can tell with reasonable certainty something is not conscious, we might also be reasonably certain that a certain other thing IS conscious.

Proof, is another matter, and likely one we will never get.  Your entire life could be a big delusion and everything you might think you know could be wrong.  We can never really prove anything.

    Rubbish! You are making perfection the enemy of the possible. We are fallible and thus can only reason within boundaries and error bars. So, does this knock proofs down? NO!


 
because consciousness is not causally efficacious.

I disagree with this.

    I agree with your disagreement!


 
It is
emergent, at a higher level of description, supervenient

Right, it could be emergent / supervenient, but that does not mean it is causally inefficacious.

You need to look at the counterfactual to say whether or not it is casually important.  Ask "If this thing were not conscious would it still behave in the same way?"  If not, then how can we say that consciousness is casually inefficacious?
 
or
epiphenomenal - but not separately causally efficacious, or the
problem of other minds and zombies would not exist.


There is no problem of zombies if you can show the idea to be inconsistent.

Jason

    Nice debate!
-- 
Onward!

Stephen

http://webpages.charter.net/stephenk1/Outlaw/Outlaw.html

Jason Resch

Sep 27, 2012, 1:01:10 AM
to everyth...@googlegroups.com
On Wed, Sep 26, 2012 at 11:09 PM, Stephen P. King <step...@charter.net> wrote:
On 9/26/2012 11:29 PM, Jason Resch wrote:


On Wed, Sep 26, 2012 at 9:24 PM, Stathis Papaioannou <stat...@gmail.com> wrote:
On Tue, Sep 25, 2012 at 3:34 PM, Jason Resch <jason...@gmail.com> wrote:

> If it has no causal efficacy, what causes someone to talk about the pain
> they are experiencing?  Is it all coincidental?

There is a sequence of physical events from the application of the
painful stimulus to the subject saying "that hurts", and this
completely explains the observable behaviour.

But can you separate the consciousness from that sequence of physical events or not?  There are multiple levels involved here and you may be missing the forest for the trees by focusing only on the atoms.  Saying the consciousness is irrelevant in the processes of the brain may be like saying human psychology is irrelevant in the price moves of the stock market.  Of course, you might explain the price moves in terms of atomic interactions, but you are missing the effects of higher-level phenomenon, which are real and do make a difference.
 
We can't observe the
experience itself.

I'm not convinced of this.  While today, we have difficulty in even defining the term, in the future, with better tools and understanding of minds and consciousness, we may indeed be able to tell if a certain process implements the right combination of processes to have what we would call a mind.  By tracing the flows of information in its mind, we might even know what it is and isn't aware of.

Albeit at a low resolution, scientists have already extracted from brain scans what people are seeing:
 
If the experience had separate causal powers we
would be able to observe its effects: we would see that neurons were
miraculously firing contrary to physical law, and explain this as the
immaterial soul affecting the physical world.

It sounds like you are saying either epiphenomenalism is true or interactionism is true ( http://en.wikipedia.org/wiki/Dualism_(philosophy_of_mind)#Dualist_views_of_mental_causation ).  Both of these are forms of dualism, and I think both are false.

    Because they assume a substantive and thus separable substrate, the y are false.



Violations of physics are not required for consciousness to have effects.  After all, no violations of physics are required for human psychology to have effects on stock prices.
   
    Demonstrating that minds are not epiphenomena!



Well, it at least shows emergent things can have effects.  A truck is an emergent phenomenon, but it can still run you over.  So though consciousness might be emergent, we can't simply conclude that it has no effects.
  
 

> I find the entire concept of epiphenominalism to be self-defeating: if it
> were true, there is no reason to expect anyone to ever have proposed it.  If
> consciousness were truly an epiphenomenon then the experience of it and the
> resulting wonder about it would necessarily be private and non-shareable.
> In other words, whoever is experiencing the consciousness with all its
> intrigue can in no way effect changes in the physical world.  So then who is
> it that proposes the theory of epiphenominalism to explain the mystery of
> conscious experience?  It can't be the causally inefficacious experiencer.
> The only consistent answer epiphenominalism can offer is that the theory of
> epiphenominalism comes from a causally efficacious entity which in no way is
> effected by experiences.  It might as well be a considered a
> non-experiencer, for it would behave the same regardless of whether it
> experienced something or if it were a zombie.

The experiencer would behave the same if he were a zombie, since that
is the definition of a zombie.

Dualist theories, including epiphenominalism, lead to the notion that zombies are logically consistent.  I don't think zombies make any sense.  Do you?

    These dualisms consider mind and body to be separable, this is where they fail. If Mind and body are merely distinct aspect of the same basic primitive then we get a prediction that zombies are not possible.

Right, and I think the converse is also true.  If zombies are not possible, then dualism must be wrong.
 
Every mind must have an embodiment and every body must have (some kind of) a mind.


 
I know I'm not a zombie and I believe
that other people aren't zombies either, but I can't be sure.

If you were a zombie, you would still know that you were not a zombie, and still believe other people are not zombies either, but you could not be sure.

    How does this follow the definition of a zombie? They have no qualia thus no ability to reason about qualia!

Zombies can reason.  They can do absolutely everything you can do, except they are not conscious.  They are also completely identical to, and indistinguishable from, you.  The only one who could (in principle) know they are a zombie is the zombie itself, but they don't know anything the non-zombie doesn't, for both the zombie and non-zombie brains have identical information content.  If you ask it if it is conscious, it will still say yes, and believe it.  It will not consider itself to be lying; it will, in fact, believe itself to be telling the truth.  There would be no lie detector test that could detect this lie; the lie is so good that the zombie itself believes it.  The zombie is, in fact, as certain of its own consciousness as the non-zombie.
 



This follows because the notion of knowing, which I define as possessing information, applies equally to zombie and non-zombie brains.  Both brains have identical information content, so they both know exactly the same things.

    Then what makes a zombie a zombie???


Right, I don't see that the difference makes a difference to anyone or anything, so the claim that there is still some difference must be questioned.  If there is no difference, then the whole notion of zombies becomes inconsistent.
 

 They both know what red is like, they both know what pain is like.   It's just there is some magical notion of there being a difference between them which is completely illogical.  Zombies don't make sense, and therefore neither do dualist theories such as epihenominalism.

    No, the reports that are uttered by a zombie, if we are consistent are not reports of knowledge any more than the output of my calculator is knowledge!

But ask the zombie what it can see, and it can describe everything it sees; inspect its brain, and you can see the information flow from the retinas, processed by the visual cortex, eventually making it to utterances about what it is looking at.  It knows what it is seeing; its brain contains that knowledge in the same way any other brain does.  You can even watch its hippocampus store memories of what it saw, and when you ask it what it saw a few minutes ago, you can watch this knowledge come out of its brain just as it does in a non-zombie brain.  So in what sense could its knowledge be any less valid than the knowledge in a non-zombie brain?  Remember, zombies are 100% physically identical to their non-zombie counterparts, in every third-person observable way.
 


 

> Epiphenominalism is forced to defend the absurd notion that epiphenominalism
> (and all other theories of consciousness) are proposed by things that have
> never experienced consciousness.  Perhaps instead, its core assumption is
> wrong.  The reason for all these books and discussion threads about
> consciousness is that experiences and consciousness are causally
> efficacious.  If they weren't then why is anyone talking about them?

The people talking about them could be zombies. There is nothing in
any observation of peoples' behaviour that *proves* they are
conscious,

Consciousness is defined on dictionary.com as "awareness of sensations, thoughts, surrounds, etc."  Awareness is defined as "having knowledge".  So we can say consciousness is merely having knowledge of sensations, thoughts, surroundings, etc.


    Right, and it is this that zombies lack.


Zombies can think, understand, solve problems, answer questions, remember, talk about their beliefs, and so on.  They just are not conscious of anything when they do these things.  So when a zombie thinks/says/understands/believes it is conscious, you might say it is wrong or lying.  But in what sense is it lying, or in what sense is it wrong?  Its brain does the same calculations as the other brain that is telling the truth.  Its brain contains the same neural patterns as the other brain that has true beliefs.

Daniel Dennett says it well: "when philosophers claim that zombies are conceivable, they invariably underestimate the task of conception (or imagination), and end up imagining something that violates their own definition".  He coined the term zimboes (p-zombies that have second-order beliefs) to argue that the idea of a p-zombie is incoherent; "Zimboes thinkZ they are conscious, thinkZ they have qualia, thinkZ they suffer pains – they are just 'wrong' (according to this lamentable tradition), in ways that neither they nor we could ever discover!"
 

It then becomes a straightforward problem of information theory and computer science to know if a certain system possesses knowledge of those things or not.

    Knowledge, at least tacitly, implies the ability to act upon the data, not just be guided by it.



This isn't startling.  Doctors today declare people brain dead and take them off life support using the same assumptions.  If we had no principles for determining if something is conscious or not, would we still do this?  Do you worry about stepping on rocks because it might hurt them?  We have good reasons not to worry about those things because we assume there are certain necessary levels of complexity and information processing ability needed to be conscious.  So perhaps if we can tell with reasonable certainty something is not conscious, we might also be reasonably certain that a certain other thing IS conscious.

Proof, is another matter, and likely one we will never get.  Your entire life could be a big delusion and everything you might think you know could be wrong.  We can never really prove anything.

    Rubbish! You are making perfection the enemy of the possible. We are fallible and thus can only reason within boundaries and error bars, so. Does this knock proofs down? NO!

We might be 99.99999% certain of some belief, but I don't know that we can ever be certain.  Some non-zero amount of doubt remains, since the correctness of any proof depends on our own consistency/sanity.

This is not to say that seeking out explanations, or evidence, or proof is fruitless.  So I don't see this as making perfection the enemy of the good.
 


 
because consciousness is not causally efficacious.

I disagree with this.

    I agree with your disagreement!


 
It is
emergent, at a higher level of description, supervenient

Right, it could be emergent / supervenient, but that does not mean it is causally inefficacious.

You need to look at the counterfactual to say whether or not it is casually important.  Ask "If this thing were not conscious would it still behave in the same way?"  If not, then how can we say that consciousness is casually inefficacious?
 
or
epiphenomenal - but not separately causally efficacious, or the
problem of other minds and zombies would not exist.


There is no problem of zombies if you can show the idea to be inconsistent.

Jason

    Nice debate!


Thanks,

Jason

Stephen P. King

Sep 27, 2012, 1:56:11 AM
to everyth...@googlegroups.com
On 9/27/2012 1:01 AM, Jason Resch wrote:
On Wed, Sep 26, 2012 at 11:09 PM, Stephen P. King <step...@charter.net> wrote:
On 9/26/2012 11:29 PM, Jason Resch wrote:
On Wed, Sep 26, 2012 at 9:24 PM, Stathis Papaioannou <stat...@gmail.com> wrote:
On Tue, Sep 25, 2012 at 3:34 PM, Jason Resch <jason...@gmail.com> wrote:

> If it has no causal efficacy, what causes someone to talk about the pain
> they are experiencing?  Is it all coincidental?

There is a sequence of physical events from the application of the
painful stimulus to the subject saying "that hurts", and this
completely explains the observable behaviour.

But can you separate the consciousness from that sequence of physical events or not?  There are multiple levels involved here and you may be missing the forest for the trees by focusing only on the atoms.  Saying the consciousness is irrelevant in the processes of the brain may be like saying human psychology is irrelevant in the price moves of the stock market.  Of course, you might explain the price moves in terms of atomic interactions, but you are missing the effects of higher-level phenomenon, which are real and do make a difference.

We can't observe the
experience itself.

I'm not convinced of this.  While today, we have difficulty in even defining the term, in the future, with better tools and understanding of minds and consciousness, we may indeed be able to tell if a certain process implements the right combination of processes to have what we would call a mind.  By tracing the flows of information in its mind, we might even know what it is and isn't aware of.

Albeit at a low resolution, scientists have already extracted from brain scans what people are seeing:

If the experience had separate causal powers we
would be able to observe its effects: we would see that neurons were
miraculously firing contrary to physical law, and explain this as the
immaterial soul affecting the physical world.

It sounds like you are saying either epiphenomenalism is true or interactionism is true ( http://en.wikipedia.org/wiki/Dualism_(philosophy_of_mind)#Dualist_views_of_mental_causation ).  Both of these are forms of dualism, and I think both are false.

    Because they assume a substantive and thus separable substrate, they are false.



Violations of physics are not required for consciousness to have effects.  After all, no violations of physics are required for human psychology to have effects on stock prices.

    Demonstrating that minds are not epiphenomena!



Well, it at least shows emergent things can have effects.  A truck is an emergent phenomenon, but it can still run you over.  So though consciousness might be emergent we can't plainly rule out that it can have no effects.
Hi!

    Surely! The truck and my body emerge from the same underlying process and thus are similarly efficacious. If we are both being simulated by the same underlying process, the truck and I will not be any different when it comes to the rules, unless a bias or distinction is built into the simulating program.

> I find the entire concept of epiphenominalism to be self-defeating: if it
> were true, there is no reason to expect anyone to ever have proposed it.  If

> consciousness were truly an epiphenomenon then the experience of it and the
> resulting wonder about it would necessarily be private and non-shareable.
> In other words, whoever is experiencing the consciousness with all its
> intrigue can in no way effect changes in the physical world.  So then who is

> it that proposes the theory of epiphenominalism to explain the mystery of
> conscious experience?  It can't be the causally inefficacious experiencer.

> The only consistent answer epiphenominalism can offer is that the theory of
> epiphenominalism comes from a causally efficacious entity which in no way is
> effected by experiences.  It might as well be a considered a

> non-experiencer, for it would behave the same regardless of whether it
> experienced something or if it were a zombie.

The experiencer would behave the same if he were a zombie, since that
is the definition of a zombie.

Dualist theories, including epiphenominalism, lead to the notion that zombies are logically consistent.  I don't think zombies make any sense.  Do you?

    These dualisms consider mind and body to be separable, this is where they fail. If Mind and body are merely distinct aspect of the same basic primitive then we get a prediction that zombies are not possible.

Right, and I think the converse is also true.  If zombies are not possible, then dualism must be wrong.

    Only for a substance dualism would this be true. It is not true for a dual aspect monism.

Every mind must have an embodiment and every body must have (some kind of) a mind.

I know I'm not a zombie and I believe
that other people aren't zombies either, but I can't be sure.

If you were a zombie, you would still know that you were not a zombie, and still believe other people are not zombies either, but you could not be sure.

    How does this follow from the definition of a zombie? They have no qualia, thus no ability to reason about qualia!

Zombies can reason.

    This is to equate reasoning to automatically following an algorithm. This implies perfect predictability at some level and thus the absence of any 1p-only aspects. Additionally, the recipe is something that needs explanation. How was it found...?
    This kind of zombie reasoning is an oxymoron as it assumes the possibility of evaluations and yet disallows the very possibility. Zombies have no qualia and thus cannot represent anything to themselves. A zombie has no "self" and thus lacks the capacity to impress anything upon that non-existent self.

They can do absolutely everything you can do, except they are not conscious.

    If they are not conscious then they are not conscious of their consciousness. Thus they do not have knowledge.

They are also completely identical and indistinguishable from you.

    From the point of view of an observer, sure. But this is just a retelling of the Turing test. It merely considers 3p behavior.

The only one who could (in principle) know they are a zombie is the zombie itself, but they don't know anything the non-zombie doesn't, for both the zombie and non-zombie brains have identical information content.

    This is where the zombie falls apart. The zombie cannot act on that difference as it cannot, by definition, act upon the representation of that information.

If you ask it if it is conscious, it will still say yes, and believe it.

    No, they have no qualia, thus no beliefs.

It will not consider itself to be lying, it will in fact, believe itself to be telling the truth.

    NO!

There would be no lie detector test that could detect this lie, the lie is so good, the zombie itself believes it.  The zombie is in fact, as certain of its own consciousness as the non-zombie.

    Contradiction!



This follows because the notion of knowing, which I define as possessing information, applies equally to zombie and non-zombie brains.  Both brains have identical information content, so they both know exactly the same things.

    Then what makes a zombie a zombie???


Right, I don't see that the difference makes a difference to anyone or anything, so the claim that there is still some difference must be questioned.

    Stick to the definition. If a zombie has no qualia then it does not have anything that supervenes on qualia. Does it "know" anything? NO!

If there is no difference then the whole notion of zombies becomes inconsistent.

    It is inconsistent! As I see it, an entity that behaves identically to a conscious being and yet has no consciousness - no quale - is, at most, an automaton.

They both know what red is like, they both know what pain is like.  It's just there is some magical notion of there being a difference between them which is completely illogical.  Zombies don't make sense, and therefore neither do dualist theories such as epihenominalism.

    No, the reports that are uttered by a zombie, if we are consistent, are not reports of knowledge any more than the output of my calculator is knowledge!

But ask the zombie what it can see, and it can describe everything it sees, inspect its brain and you can see the information flow from the retinas to be processed by the visual cortex, and eventually make it to utterances of what it is looking at.

    This claim assumes the possibility of the capacity to report the content of its senses. This is dangerously close to contradicting the definition of a zombie.

It knows what it is seeing, its brain contains that knowledge in the same way any other brain does.

    Knowledge implies the capacity of distinguishing a difference between the state of having knowledge and not having it. This seems to require an internal self-modeling capacity. How is this not a contradiction of the definition of a Zombie?

You can even watch its hippocampus store memories of what it saw, and when you ask it what it saw a few minutes ago, you can watch this knowledge come out of its brain just as it does in a non-zombie brain.

    Where does the "its" come from? There is nothing to "possess" an opinion or representation of the data (that is not, strictly speaking, the data). It can only come from the implicit third party of the narrative here. The zombie has no sense of self as different from the world and thus no capacity to know that "it" is the subject of the brain scan.

So in what sense could its knowledge be any less valid than the knowledge in a non-zombie brain?

    Zombies cannot have "knowledge", as has just been shown.

Remember, zombies are 100% physically identical to their non-zombie counterparts, in every third-person observable way.

    Exactly, and thus any reference to internal capacities is suspect when attributed to zombies.

> Epiphenominalism is forced to defend the absurd notion that epiphenominalism
> (and all other theories of consciousness) are proposed by things that have
> never experienced consciousness.  Perhaps instead, its core assumption is
> wrong.  The reason for all these books and discussion threads about

> consciousness is that experiences and consciousness are causally
> efficacious.  If they weren't then why is anyone talking about them?

The people talking about them could be zombies. There is nothing in
any observation of peoples' behaviour that *proves* they are
conscious,

Consciousness is defined on dictionary.com as "awareness of sensations, thoughts, surrounds, etc."  Awareness is defined as "having knowledge".  So we can say consciousness is merely having knowledge of sensations, thoughts, surroundings, etc.

    Right, and it is this that zombies lack.


Zombies can think, understand, solve problems, answer questions, remember, talk about their beliefs, and so on.

    How is this determined? First we must show that such capacities obtain, otherwise we abandon logic and hold the law of non-contradiction in contempt.

They just are not conscious of anything when they do these things.

    There is no "they"... There is no "agency" for a zombie, unless the narrative here of the observer (who is making the determination) is merely projecting their own capacities...

So when a zombie thinks/says/understands/believes he is conscious, you might say it is wrong or lying.  But in what sense is it lying or in what sense is it wrong?  Its brain does the same calculations as the other brain that is telling the truth.  Its brain contains the same neural patterns as the other brain that has true beliefs.

    "Same"? If we are considering an equivalence then that equivalence, unless restricted, is complete. The same holds for difference. If we are going to allow for a spectrum between these then we must simultaneously allow for both, at least in a possible world sense.


Daniel Dennett says it well: "when philosophers claim that zombies are conceivable, they invariably underestimate the task of conception (or imagination), and end up imagining something that violates their own definition". He coined the term zimboes (p-zombies that have second-order beliefs) to argue that the idea of a p-zombie is incoherent; "Zimboes thinkZ they are conscious, thinkZ they have qualia, thinkZ they suffer pains – they are just 'wrong' (according to this lamentable tradition), in ways that neither they nor we could ever discover!"

    Dennett is right here! He is pointing out the contradiction of the definition of Zombie in the idea or belief that zombies can have beliefs. An entity either has qualia/consciousness or it does not. If it has consciousness or anything that follows from consciousness - such as knowledge - then it is not a zombie.

It then becomes a straightforward problem of information theory and computer science to know if a certain system possesses knowledge of those things or not.

    Knowledge, at least tacitly, implies the ability to act upon the data, not just be guided by it.



This isn't startling.  Doctors today declare people brain dead and take them off life support using the same assumptions.  If we had no principles for determining if something is conscious or not, would we still do this?  Do you worry about stepping on rocks because it might hurt them?  We have good reasons not to worry about those things because we assume there are certain necessary levels of complexity and information processing ability needed to be conscious.  So perhaps if we can tell with reasonable certainty something is not conscious, we might also be reasonably certain that a certain other thing IS conscious.

Proof is another matter, and likely one we will never get.  Your entire life could be a big delusion and everything you might think you know could be wrong.  We can never really prove anything.

    Rubbish! You are making perfection the enemy of the possible. We are fallible and thus can only reason within boundaries and error bars. So, does this knock proofs down? NO!

We might be 99.99999% certain of some belief, but I don't know that we can ever be certain.  Some non-zero amount of doubt remains, since the correctness of any proof depends on our own consistency/sanity.

    There is truth and there is the ability to find a proof of that truth.

This is not to say that seeking out explanations, or evidence, or proof is fruitless.  So I don't see this as making perfection the enemy of the good.

    First, let us banish the ambiguity and inconsistency.

because consciousness is not causally efficacious.

I disagree with this.

    I agree with your disagreement!

It is
emergent, at a higher level of description, supervenient

Right, it could be emergent / supervenient, but that does not mean it is causally inefficacious.

You need to look at the counterfactual to say whether or not it is causally important.  Ask "If this thing were not conscious, would it still behave in the same way?"  If not, then how can we say that consciousness is causally inefficacious?

or
epiphenomenal - but not separately causally efficacious, or the
problem of other minds and zombies would not exist.


There is no problem of zombies if you can show the idea to be inconsistent.

Jason

    Nice debate!


Thanks,

Jason

Bruno Marchal

Sep 27, 2012, 4:37:59 AM
to everyth...@googlegroups.com
Consciousness is specifically inevitable from arithmetic truth, "seen from inside".



If the latter, then it sounds like you are saying that some arithmetic functions can only be expressed as pain or blue...

No. You confuse "Turing emulable" and "first person indeterminacy recoverable". Pain and blue have no arithmetical representations.




in which case, how are they really arithmetic.

They are not. Arithmetical truth is already not arithmetical.
Arithmetic seen from inside is *vastly* bigger than arithmetic. This needs a bit of "model theory" to be explained formally.




Besides, we have never seen a computation turn blue or create blueness.

It would not make sense to "see" that. Brains and electromagnetic fields, or any other 3p notion, cannot turn blue. "Blue" is a singular, informative, global experience of the first person.



 


I can easily see why experience would need semiotic compressions to organize itself, but I can see no reason that arithmetic or physical realities would possibly need to be 'extracted', or even what that would mean.

This is what the Universal Dovetailer Argument explains.

If it does, then I don't understand it. If you can explain it with a common sense example as a metaphor, then I might be able to get more of it.

Did you understand the first person indeterminacy? Tell me if you understand the seven first steps of the UDA, in




 


Why execute a program if all possible outcomes are already computable?

To be computable is not enough, if the computations are not done, relatively to the situation you are in.
Your question is like "why should I pay this beer if I can show that I can pay it?".

Yes, why should I pay for the beer if it's arithmetically inevitable that I have paid for the beer in the future?

It is not arithmetically inevitable. In some stories you don't pay. Comp, like QM, leads to a continuum of futures, and your decisions and acts "here-and-now" determine the general features of your normal (majority) futures. That is why life and discussion forums make some sense.

Bruno


Bruno Marchal

Sep 27, 2012, 4:06:00 AM
to everyth...@googlegroups.com
You can approximate consciousness by "belief in self-consistency".
This already has a "causal efficacy", notably a relative self-speeding
ability (by Gödel's "length of proof" theorem). But "belief in
self-consistency" is pure 3p, and is not consciousness; you get
consciousness because the machine will confuse the belief in self-
consistency with the truth of its self-consistency, and this will
introduce a quale. The machine can be aware of it, and (with enough
cognitive ability) the machine will be aware of its
non-communicability, making it into a personal quale.
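For reference, the "length of proof" result invoked here is usually cited as Gödel's speed-up theorem. A rough standard formulation follows; it is a sketch, with S' standing for S extended by some stronger axiom such as the consistency statement of S (the exact choice of extension is an assumption of the illustration):

    % Goedel (1936): passing to the stronger system S' shortens some proofs by more than
    % any recursive amount. Writing |phi|_S for the length of the shortest S-proof of phi:
    \forall f \text{ recursive } \;\exists \varphi \text{ provable in both } S \text{ and } S' : \quad |\varphi|_{S} \;>\; f\!\left(|\varphi|_{S'}\right)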

I think you are confusing levels, as if matter were real and
consciousness only emerged on it. I thought that some time ago you
understood the movie graph argument, so that it is the illusion of
brain and matter which emerges from consciousness, and this gives
another role to consciousness: the bringing forth of physical realities
through number relations being selected (non-causally, here).
Consciousness is what makes notions of causal efficacy meaningful to
start with.

I think it is the same error as using determinacy to refute free-will.
This would be correct if we were living at the determinist base level,
but we are not. Consciousness and free-will are real at the level
where we live, and unreal, in the big 3p picture, but this concerns
only the "outer god", not the "inner one" which can *know* a part of
its local self-consistency, and cannot know its local future.

Bruno





> It is
> emergent, at a higher level of description, supervenient or
> epiphenomenal - but not separately causally efficacious, or the
> problem of other minds and zombies would not exist.
>
>
> --
> Stathis Papaioannou
>

Stathis Papaioannou

unread,
Sep 27, 2012, 8:49:35 AM9/27/12
to everyth...@googlegroups.com
On Thu, Sep 27, 2012 at 1:29 PM, Jason Resch <jason...@gmail.com> wrote:

> But can you separate the consciousness from that sequence of physical events
> or not? There are multiple levels involved here and you may be missing the
> forest for the trees by focusing only on the atoms. Saying the
> consciousness is irrelevant in the processes of the brain may be like saying
> human psychology is irrelevant in the price moves of the stock market. Of
> course, you might explain the price moves in terms of atomic interactions,
> but you are missing the effects of higher-level phenomenon, which are real
> and do make a difference.

The higher level description is not an entity with *separate* causal
power. Was the stock market movement caused by physics, chemistry,
biochemistry or psychology? In a manner of speaking, it's correct to
say any of them; but we know that all the chemical, biochemical and
psychological properties are ultimately traceable to the physics, even
if it isn't practically useful to attempt stock market prediction by
analysing brain physics. What I object to is the idea of strong
emergence, that higher level properties are not merely surprising but
fundamentally unable to be deduced from lower level properties.

>> We can't observe the
>> experience itself.
>
>
> I'm not convinced of this. While today, we have difficulty in even defining
> the term, in the future, with better tools and understanding of minds and
> consciousness, we may indeed be able to tell if a certain process implements
> the right combination of processes to have what we would call a mind. By
> tracing the flows of information in its mind, we might even know what it is
> and isn't aware of.
>
> Albeit at a low resolution, scientists have already extracted from brain
> scans what people are seeing:
> http://www.newscientist.com/article/dn16267-mindreading-software-could-record-your-dreams.html

We still can't observe the experience. Advanced aliens may be able to
read our thoughts very accurately in this way but still have no idea
what we actually experience or whether we are conscious at all.

>> The people talking about them could be zombies. There is nothing in
>> any observation of peoples' behaviour that *proves* they are
>> conscious,
>
>
> Consciousness is defined on dictionary.com as "awareness of sensations,
> thoughts, surrounds, etc." Awareness is defined as "having knowledge". So
> we can say consciousness is merely having knowledge of sensations, thoughts,
> surroundings, etc.

The "merely" makes it an epiphenomenon. I think this is Daniel
Dennett's position. Dennett argues that zombies are logically impossible
as consciousness is nothing but the sort of information processing
that goes on in brains.


--
Stathis Papaioannou

Stathis Papaioannou

unread,
Sep 27, 2012, 9:08:41 AM9/27/12
to everyth...@googlegroups.com
On Thu, Sep 27, 2012 at 6:06 PM, Bruno Marchal <mar...@ulb.ac.be> wrote:

> You can approximate consciousness by "belief in self-consistency". This has
> already a "causal efficacy", notably a relative self-speeding ability (by
> Gödel "length of proof" theorem). But "belief in self-consistency" is pure
> 3p, and is not consciousness, you get consciousness because the machine will
> confuse the belief in self-consistency with the truth of its
> self-consistency, and this will introduce a quale. The machine can be aware
> of it, and (with enough cognitive ability) the machine will be aware of its
> non communicability, making it into a personal quale.
>
> I think you are doing a confusion level, like if matter was real, and
> consciousness only emerging on it. I thought that some times ago you did
> understand the movie graph argument, so that it is the illusion of brain and
> matter which emerges from consciousness, and this gives another role for
> consciousness: the bringing of physical realities through number relations
> being selected (non causally, here). Consciousness is what makes notions of
> causal efficacy meaningful to start with.

I object to the idea that consciousness will cause a brain or other
machine to behave in a way not predictable by purely physical laws.
Some people, like Craig Weinberg, seem to believe that this is
possible but it is contrary to all science. This applies even if the
whole universe is really just a simulation, because what we observe is
at the level of the simulation.

> I think it is the same error as using determinacy to refute free-will. This
> would be correct if we were living at the determinist base level, but we are
> not. Consciousness and free-will are real at the level where we live, and
> unreal, in the big 3p picture, but this concerns only the "outer god", not
> the "inner one" which can *know* a part of its local self-consistency, and
> cannot know its local future.


--
Stathis Papaioannou

Craig Weinberg

unread,
Sep 27, 2012, 9:26:01 AM9/27/12
to everyth...@googlegroups.com


On Thursday, September 27, 2012 1:01:12 AM UTC-4, Jason wrote:


On Wed, Sep 26, 2012 at 11:09 PM, Stephen P. King <step...@charter.net> wrote:
On 9/26/2012 11:29 PM, Jason Resch wrote:


On Wed, Sep 26, 2012 at 9:24 PM, Stathis Papaioannou <stat...@gmail.com> wrote:
On Tue, Sep 25, 2012 at 3:34 PM, Jason Resch <jason...@gmail.com> wrote:

> If it has no causal efficacy, what causes someone to talk about the pain
> they are experiencing?  Is it all coincidental?

There is a sequence of physical events from the application of the
painful stimulus to the subject saying "that hurts", and this
completely explains the observable behaviour.

But can you separate the consciousness from that sequence of physical events or not?  There are multiple levels involved here and you may be missing the forest for the trees by focusing only on the atoms.  Saying the consciousness is irrelevant in the processes of the brain may be like saying human psychology is irrelevant in the price moves of the stock market.  Of course, you might explain the price moves in terms of atomic interactions, but you are missing the effects of higher-level phenomena, which are real and do make a difference.

Exactly Jason. The moment we conflate  "physical events" with "painful stimulus" we have lost the war. If we assume that physical events can possibly be defined as full of 'pain', or that they stimulate (i.e. are received and responded to as a signifying experience - which is causally efficacious in changing observed behavior), then we are already begging the question of the explanatory gap. To assume that there can be a such thing as a purely physical event which nonetheless is full of pain and power to influence behavior takes the entirety of sense and awareness for granted but then fails to acknowledge that it was necessary in the first place. Once you have the affect of pain and the effect of behavioral stimulation, you don't need a brain as far as explaining consciousness - you already have consciousness on the sub-personal level.
 
We can't observe the
experience itself.

I'm not convinced of this.  While today, we have difficulty in even defining the term, in the future, with better tools and understanding of minds and consciousness, we may indeed be able to tell if a certain process implements the right combination of processes to have what we would call a mind.  By tracing the flows of information in its mind, we might even know what it is and isn't aware of.

Albeit at a low resolution, scientists have already extracted from brain scans what people are seeing:

This may not be what we are seeing at all, but rather what we are looking at. There was a recent study on the visual cortex which showed the same activity whether the subject actually saw something or not. There may be no activity in the brain at all which directly translates into any conscious experience that we have, only the event horizon where we interface with our body and the body's world. We aren't in there...we're in here. We are not extended across public spaces, we are intended within private times.  They are orthogonal sense modalities of the same essential process on multiple levels, each of which is cross-juxtaposed with every other. (This means one group of cells in my body can get my full attention, or that I can think abstractly without consciously considering any cells or bodies or conditions in the world).

 
If the experience had separate causal powers we
would be able to observe its effects: we would see that neurons were
miraculously firing contrary to physical law, and explain this as the
immaterial soul affecting the physical world.

It sounds like you are saying either epiphenomenalism is true or interactionism is true ( http://en.wikipedia.org/wiki/Dualism_(philosophy_of_mind)#Dualist_views_of_mental_causation ).  Both of these are forms of dualism, and I think both are false.

Yes, I agree they are both false, because they both fail to recognize the symmetry of extended public space and intended private time. It's understandable because we are inherently biased as being completely steeped in our privacy to the point that it seems largely transparent to us. Our ability to make sense of public space phenomena is so powerful and clear that we are, at least in the West, seduced into believing that the interior too surely must be nothing but a clever arrangement of exteriors. It isn't. The symmetry is the thing. Levels and symmetry are the answer, not linear functions. There is no magic required at all, unless you deny your own private experience from the start, which of course 'saws off the branch that you are sitting on' and logically disqualifies 'you' from having any opinion about anything.

    Because they assume a substantive and thus separable substrate, they are false.



Violations of physics are not required for consciousness to have effects.  After all, no violations of physics are required for human psychology to have effects on stock prices.
   
    Demonstrating that minds are not epiphenomena!



Well, it at least shows emergent things can have effects.  A truck is an emergent phenomenon, but it can still run you over.  So though consciousness might be emergent, we can't conclude from that alone that it has no effects.
  
 

> I find the entire concept of epiphenominalism to be self-defeating: if it
> were true, there is no reason to expect anyone to ever have proposed it.  If
> consciousness were truly an epiphenomenon then the experience of it and the
> resulting wonder about it would necessarily be private and non-shareable.
> In other words, whoever is experiencing the consciousness with all its
> intrigue can in no way effect changes in the physical world.  So then who is
> it that proposes the theory of epiphenominalism to explain the mystery of
> conscious experience?  It can't be the causally inefficacious experiencer.
> The only consistent answer epiphenominalism can offer is that the theory of
> epiphenominalism comes from a causally efficacious entity which in no way is
> effected by experiences.  It might as well be a considered a
> non-experiencer, for it would behave the same regardless of whether it
> experienced something or if it were a zombie.

The experiencer would behave the same if he were a zombie, since that
is the definition of a zombie.

Dualist theories, including epiphenomenalism, lead to the notion that zombies are logically consistent.  I don't think zombies make any sense.  Do you?

    These dualisms consider mind and body to be separable, this is where they fail. If Mind and body are merely distinct aspect of the same basic primitive then we get a prediction that zombies are not possible.

Right, and I think the converse is also true.  If zombies are not possible, then dualism must be wrong.
 
Every mind must have an embodiment and every body must have (some kind of) a mind.

Zombies are a malformed notion because they assume an expectation of sentience in an intentionally constructed illusion. Puppets exist, zombies are square circles. Simulation isn't an objectively real function, it's a matter of fooling some of the people some of the time.

The term mind is similar to soul in that it assumes a public extension of a private intention which isn't actually real. It makes it a lot easier to talk about if we reify our cognitive-level experiences as a 'mind', but it really is just the mental frequency range of your Self. The body of your Self is your entire body, brain, cells, and even more - your house, your friends, your world. The self is *not* defined in space or bodies, it is reflected in those things.
 


 
I know I'm not a zombie and I believe
that other people aren't zombies either, but I can't be sure.

If you were a zombie, you would still know that you were not a zombie, and still believe other people are not zombies either, but you could not be sure.

    How does this follow the definition of a zombie? They have no qualia thus no ability to reason about qualia!

Zombies can reason.  They can do absolutely everything you can do, except they are not conscious.

No, that's just a bad theoretical assumption. Understandable, but bad. In reality, puppets can appear to reason to the extent that someone thinks they know what behaviors should constitute reason and then leaps from observing those behaviors to inferring awareness. Zombies can't do anything. They cannot be themselves. They have no first person experience at all. There is no 'they' there.
 
 They are also completely identical to and indistinguishable from you.  The only one who could (in principle) know they are a zombie is the zombie itself, but they don't know anything the non-zombie doesn't, for both the zombie and non-zombie brains have identical information content.  If you ask it if it is conscious, it will still say yes, and believe it.  It will not consider itself to be lying; it will, in fact, believe itself to be telling the truth.  There would be no lie detector test that could detect this lie; the lie is so good, the zombie itself believes it.  The zombie is, in fact, as certain of its own consciousness as the non-zombie.

Confusion of exterior 3p and interior 1p. Assumption of 'identical'. Mistakes.
 
 



This follows because the notion of knowing, which I define as possessing information, applies equally to zombie and non-zombie brains.  Both brains have identical information content, so they both know exactly the same things.

    Then what makes a zombie a zombie???


Right, I don't see that the difference makes a difference to anyone or anything, so the truth that there is still some difference must be questioned.  If there is no difference then the whole notion of zombies becomes inconsistent.

Yup.
 
 

 They both know what red is like, they both know what pain is like.  It's just there is some magical notion of there being a difference between them which is completely illogical.  Zombies don't make sense, and therefore neither do dualist theories such as epiphenomenalism.

    No, the reports that are uttered by a zombie, if we are consistent are not reports of knowledge any more than the output of my calculator is knowledge!

But ask the zombie what it can see, and it can describe everything it sees; inspect its brain and you can see the information flow from the retinas to be processed by the visual cortex, and eventually make it to utterances of what it is looking at.  It knows what it is seeing; its brain contains that knowledge in the same way any other brain does.  You can even watch its hippocampus store memories of what it saw, and when you ask it what it saw a few minutes ago, you can watch this knowledge come out of its brain just as it does in a non-zombie brain.  So in what sense could its knowledge be any less valid than the knowledge in a non-zombie brain?  Remember, zombies are 100% physically identical to their non-zombie counterparts, in every third-person observable way.

Just a hypothetical. In theory, fire shouldn't feel hot. Theory based on exterior mechanics can never apply to interior experience completely, because if it could then it would be logically impossible for there to be any reason for experience to exist at all. The reason why zombies don't make sense is the same reason that we have the hard problem. If function is all there is, why is anyone watching the show?
 
 


 

> Epiphenominalism is forced to defend the absurd notion that epiphenominalism
> (and all other theories of consciousness) are proposed by things that have
> never experienced consciousness.  Perhaps instead, its core assumption is
> wrong.  The reason for all these books and discussion threads about
> consciousness is that experiences and consciousness are causally
> efficacious.  If they weren't then why is anyone talking about them?

The people talking about them could be zombies. There is nothing in
any observation of peoples' behaviour that *proves* they are
conscious,

Consciousness is defined on dictionary.com as "awareness of sensations, thoughts, surrounds, etc."  Awareness is defined as "having knowledge".  So we can say consciousness is merely having knowledge of sensations, thoughts, surroundings, etc.


    Right, and it is this that zombies lack.


Zombies can think, understand,

No.
 
solve problems, answer questions,

Yes. Like a Magic Eight Ball can do that too.
 
remember, talk about their beliefs, and so on.  

No. No beliefs, no memory. We can hear them talk, but 'they' aren't talking. No more than any other puppet.
 
They just are not conscious of anything when they do these things.

But since they *never* were conscious of anything, there never was a 'they' to begin with. You are assuming something that never was.
 
 So when a zombie thinks/says/understands/believes he is conscious, you might say that what it thinks is wrong, or that it is lying.

There is no belief, thought, or understanding. We can hear something being said, but it is only a clever set of automated recordings. What a zombie-puppet says is a fancy voicemail tree.
 
 But in what sense is it lying or in what sense is it wrong?  Its brain does the same calculations as the other brain that is telling the truth.  Its brain contains the same neural patterns as the other brain that has true beliefs.
 

Daniel Dennett says it well: "when philosophers claim that zombies are conceivable, they invariably underestimate the task of conception (or imagination), and end up imagining something that violates their own definition".[9][10] He coined the term zimboes (p-zombies that have second-order beliefs) to argue that the idea of a p-zombie is incoherent;[11] "Zimboes thinkZ they are conscious, thinkZ they have qualia, thinkZ they suffer pains – they are just 'wrong' (according to this lamentable tradition), in ways that neither they nor we could ever discover!"
 

Dennett is just incredibly wrong about everything related to consciousness and perception, but he is very convincing as he expresses the logic of the mistakenly exteriorized interiority very well. The problem isn't that it is illogical, it is that logic isn't the ground of being after all, and supervenes on first person awareness in the first place - which transcends logic and reason. 


It then becomes a straightforward problem of information theory and computer science to know if a certain system possesses knowledge of those things or not.

    Knowledge, at least tacitly, implies the ability to act upon the data, not just be guided by it.

Information cannot become significant on its own. Not possible.
 



This isn't startling.  Doctors today declare people brain dead and take them off life support using the same assumptions.  If we had no principles for determining if something is conscious or not, would we still do this?  Do you worry about stepping on rocks because it might hurt them?  We have good reasons not to worry about those things because we assume there are certain necessary levels of complexity and information processing ability needed to be conscious.  So perhaps if we can tell with reasonable certainty something is not conscious, we might also be reasonably certain that a certain other thing IS conscious.

Proof is another matter, and likely one we will never get.  Your entire life could be a big delusion and everything you might think you know could be wrong.  We can never really prove anything.

    Rubbish! You are making perfection the enemy of the possible. We are fallible and thus can only reason within boundaries and error bars. So, does this knock proofs down? NO!

We might be 99.99999% certain of some belief, but I don't know that we can ever be certain.  Some non-zero amount of doubt regarding the correctness of any proof depends on our own consistency/sanity.

This is not to say that seeking out explanations, or evidence, or proof is fruitless.  So I don't see this leading to an enemy of the good.
 


 
because consciousness is not causally efficacious.

Hahahaha "I am not having this conversation" also means  "I have no way of knowing that I am not having this conversation."
 

I disagree with this.

    I agree with your disagreement!

I third this disagreement, and escalate it to the level of a truth more fundamental and elemental than all of physics and arithmetic.
 


 
It is
emergent, at a higher level of description, supervenient

Right, it could be emergent / supervenient, but that does not mean it is causally inefficacious.

You need to look at the counterfactual to say whether or not it is causally important.  Ask "If this thing were not conscious would it still behave in the same way?"  If not, then how can we say that consciousness is causally inefficacious?
 
or
epiphenomenal - but not separately causally efficacious, or the
problem of other minds and zombies would not exist.


There is no problem of zombies if you can show the idea to be inconsistent.

You got it. Zombies were a very early and natural mistake, which I don't blame on Chalmers. Our naive realism is to conflate personal with impersonal, sub-personal with micro-impersonal, etc. He made a misstep, but anyone would have done the same. That's what pioneering a field is all about.

Craig
 

Craig Weinberg

unread,
Sep 27, 2012, 9:30:48 AM9/27/12
to everyth...@googlegroups.com


On Thursday, September 27, 2012 9:09:12 AM UTC-4, stathisp wrote:
On Thu, Sep 27, 2012 at 6:06 PM, Bruno Marchal <mar...@ulb.ac.be> wrote:

> You can approximate consciousness by "belief in self-consistency". This has
> already a "causal efficacy", notably a relative self-speeding ability (by
> Gödel "length of proof" theorem). But "belief in self-consistency" is pure
> 3p, and is not consciousness, you get consciousness because the machine will
> confuse the belief in self-consistency with the truth of its
> self-consistency, and this will introduce a quale. The machine can be aware
> of it, and (with enough cognitive ability) the machine will be aware of its
> non communicability, making it into a personal quale.
>
> I think you are doing a confusion level, like if matter was real, and
> consciousness only emerging on it. I thought that some times ago you did
> understand the movie graph argument, so that it is the illusion of brain and
> matter which emerges from consciousness, and this gives another role for
> consciousness: the bringing of physical realities through number relations
> being selected (non causally, here). Consciousness is what makes notions of
> causal efficacy meaningful to start with.

I object to the idea that consciousness will cause a brain or other
machine to behave in a way not predictable by purely physical laws.
Some people, like Craig Weinberg, seem to believe that this is
possible but it is contrary to all science. This applies even if the
whole universe is really just a simulation, because what we observe is
at the level of the simulation.

That is not what I believe at all. I have corrected you and others on this many times but you won't hear it. Nothing unusual needs to happen in the brain for ordinary consciousness to take place, it's just that physics has nothing to say about whether billions of synapses will suddenly begin firing in complex synchronized patterns or not. Physics doesn't care. Can neurons fire when conditions are right? Yes. Can our thoughts and intentions directly control conditions in the brain? YES. Of course. Obviously. Otherwise we wouldn't care any more about the human brain than we would a wasps nest. It's not that physics needs to be amended, it is that experience is part of physics, and physics is part of experience.

Craig
 

Jason Resch

unread,
Sep 27, 2012, 10:22:19 AM9/27/12
to everyth...@googlegroups.com
On Thu, Sep 27, 2012 at 12:56 AM, Stephen P. King <step...@charter.net> wrote:
On 9/27/2012 1:01 AM, Jason Resch wrote:
On Wed, Sep 26, 2012 at 11:09 PM, Stephen P. King <step...@charter.net> wrote:
On 9/26/2012 11:29 PM, Jason Resch wrote:
On Wed, Sep 26, 2012 at 9:24 PM, Stathis Papaioannou <stat...@gmail.com> wrote:
On Tue, Sep 25, 2012 at 3:34 PM, Jason Resch <jason...@gmail.com> wrote:

> If it has no causal efficacy, what causes someone to talk about the pain
> they are experiencing?  Is it all coincidental?

There is a sequence of physical events from the application of the
painful stimulus to the subject saying "that hurts", and this
completely explains the observable behaviour.

But can you separate the consciousness from that sequence of physical events or not?  There are multiple levels involved here and you may be missing the forest for the trees by focusing only on the atoms.  Saying the consciousness is irrelevant in the processes of the brain may be like saying human psychology is irrelevant in the price moves of the stock market.  Of course, you might explain the price moves in terms of atomic interactions, but you are missing the effects of higher-level phenomena, which are real and do make a difference.
 
We can't observe the
experience itself.

I'm not convinced of this.  While today, we have difficulty in even defining the term, in the future, with better tools and understanding of minds and consciousness, we may indeed be able to tell if a certain process implements the right combination of processes to have what we would call a mind.  By tracing the flows of information in its mind, we might even know what it is and isn't aware of.

Albeit at a low resolution, scientists have already extracted from brain scans what people are seeing:
 
If the experience had separate causal powers we

would be able to observe its effects: we would see that neurons were
miraculously firing contrary to physical law, and explain this as the
immaterial soul affecting the physical world.

It sounds like you are saying either epiphenomenalism is true or interactionism is true ( http://en.wikipedia.org/wiki/Dualism_(philosophy_of_mind)#Dualist_views_of_mental_causation ).  Both of these are forms of dualism, and I think both are false.

    Because they assume a substantive and thus separable substrate, they are false.



Violations of physics are not required for consciousness to have effects.  After all, no violations of physics are required for human psychology to have effects on stock prices.
   
    Demonstrating that minds are not epiphenomena!



Well, it at least shows emergent things can have effects.  A truck is an emergent phenomenon, but it can still run you over.  So though consciousness might be emergent, we can't conclude from that alone that it has no effects.
Hi!

    Surely! The truck and my body emerge from the same underlying process and thus are similarly efficacious. If we are both being simulated by the same underlying process, the truck and I will not be any different when it comes to the rules, unless a bias or distinction is built into the simulating program.
> I find the entire concept of epiphenominalism to be self-defeating: if it
> were true, there is no reason to expect anyone to ever have proposed it.  If

> consciousness were truly an epiphenomenon then the experience of it and the
> resulting wonder about it would necessarily be private and non-shareable.
> In other words, whoever is experiencing the consciousness with all its
> intrigue can in no way effect changes in the physical world.  So then who is

> it that proposes the theory of epiphenominalism to explain the mystery of
> conscious experience?  It can't be the causally inefficacious experiencer.

> The only consistent answer epiphenominalism can offer is that the theory of
> epiphenominalism comes from a causally efficacious entity which in no way is
> effected by experiences.  It might as well be a considered a

> non-experiencer, for it would behave the same regardless of whether it
> experienced something or if it were a zombie.

The experiencer would behave the same if he were a zombie, since that
is the definition of a zombie.

Dualist theories, including epiphenomenalism, lead to the notion that zombies are logically consistent.  I don't think zombies make any sense.  Do you?

    These dualisms consider mind and body to be separable, this is where they fail. If Mind and body are merely distinct aspect of the same basic primitive then we get a prediction that zombies are not possible.

Right, and I think the converse is also true.  If zombies are not possible, then dualism must be wrong.

    Only for a substance dualism would this be true. It is not true for a dual aspect monism.


 
Every mind must have an embodiment and every body must have (some kind of) a mind.


 
I know I'm not a zombie and I believe

that other people aren't zombies either, but I can't be sure.

If you were a zombie, you would still know that you were not a zombie, and still believe other people are not zombies either, but you could not be sure.
    How does this follow the definition of a zombie? They have no qualia thus no ability to reason about qualia!

Zombies can reason.

    This is to equate reasoning with automatically following an algorithm. This implies perfect predictability at some level and thus the absence of any 1p-only aspects. Additionally, the recipe is something that needs explanation. How was it found...?
    This kind of zombie reasoning is an oxymoron, as it assumes the possibility of evaluations and yet disallows that very possibility. A zombie has no qualia and thus cannot represent anything to itself. It has no "self" and thus lacks the capacity to impress anything upon that non-existent self.

Here, I disagree.  If you ask a zombie to solve a riddle, and it ponders it for several minutes and then gives you the correct answer, how can you say it was not reasoning?  It is like saying a computer is not multiplying when you ask it what 4*4 is and it gives you 16.

Note that I think we agree (some forms of reasoning probably require consciousness), which only provides another reason to doubt the consistency of the definition of zombies.  I don't think reasoning is normally assumed to require consciousness, which is why someone who defines zombies as non-conscious may still hold that they have a reasoning ability.

 


 They can do absolutely everything you can do, except they are not conscious.

    If they are not conscious then they are not conscious of their consciousness. Thus they do not have knowledge.


I think this is another example of the above.  When you ask the zombie what the capital of Idaho is, or how many protons an oxygen atom has, or what he had for breakfast yesterday, and the zombie gives you an answer, then the zombie knows these things.  Now perhaps knowledge requires consciousness which is yet another reason to doubt the consistency of zombies.

I think the only difference between what you are saying and what I am saying is that I say: look, the zombies can do these things (by their definition), so they must be conscious, and there is the inconsistency; whereas you say zombies cannot do these things since they are not conscious (by their definition), so zombie behavior cannot be indistinguishable to a third party.

It works out to the same conclusion: either zombies are conscious, or zombies can't behave indistinguishably, and hence the definition of a zombie that is non-conscious but has identical behavior is flawed.

Jason

 

 They are also completely identical to and indistinguishable from you.

    From the point of view of an observer, sure. But this is just a retelling of the Turing test. It merely considers 3p behavior.
 The only one who could (in principle) know they are a zombie is the zombie itself, but they don't know anything the non-zombie doesn't, for both the zombie and non-zombie brains have identical information content.

    This is where the zombie falls apart. The zombie cannot act on that difference as it cannot, by definition, act upon the representation of that information.
 If you ask it if it is conscious, it will still say yes, and believe it.

    No, they have no qualia, thus no beliefs.
    It will not consider itself to be lying; it will, in fact, believe itself to be telling the truth.

    NO!


 There would be no lie detector test that could detect this lie; the lie is so good, the zombie itself believes it.  The zombie is, in fact, as certain of its own consciousness as the non-zombie.

    Contradiction!


 



This follows because the notion of knowing, which I define as possessing information, applies equally to zombie and non-zombie brains.  Both brains have identical information content, so they both know exactly the same things.

    Then what makes a zombie a zombie???


Right, I don't see that the difference makes a difference to anyone or anything, so the truth that there is still some difference must be questioned.
    Stick to the definition. If a zombie has no qualia then it does not have anything that supervenes on qualia. Does it "know" anything? NO!
 If there is no difference then the whole notion of zombies becomes inconsistent.

    It is inconsistent! As I see it, an entity that behaves identically to a conscious being and yet has no consciousness - no qualia - is, at most, an automaton.


 

 They both know what red is like, they both know what pain is like.  It's just there is some magical notion of there being a difference between them which is completely illogical.  Zombies don't make sense, and therefore neither do dualist theories such as epiphenomenalism.

    No, the reports that are uttered by a zombie, if we are consistent are not reports of knowledge any more than the output of my calculator is knowledge!

But ask the zombie what it can see, and it can describe everything it sees, inspect its brain and you can see the information flow from the retinas to be processed by the visual cortex, and eventually make it to utterances of what it is looking at.
    This claim assumes the possibility of the capacity to report the content of its senses. This is dangerously close to contradicting the definition of a zombie.
 It knows what it is seeing; its brain contains that knowledge in the same way any other brain does.

    Knowledge implies the capacity of distinguishing a difference between the state of having knowledge and not having it. This seems to require an internal self-modeling capacity. How is this not a contradiction of the definition of a Zombie?
 You can even watch its hippocampus store memories of what it saw, and when you ask it what it saw a few minutes ago, you can watch this knowledge come out of its brain just as it does in a non-zombie brain.

    Where does the "it's" come from? There is nothing to "posses" an opinion or representation of the data (that is not, strictly speaking, the data). It can only come from the implicit third party of the narrative here. The zombie has no sense of self as different from the world and thus no capacity to know that "it" is the subject of the brain scan.
 So in what sense could its knowledge be any less valid than the knowledge in a non-zombie brain?

    Zombies cannot have "knowledge", as has just been shown.
 Remember, zombies are 100% physically identical to their non-zombie counterparts, in every third-person observable way.

    Exactly, and thus any reference to internal capacities is suspect when attributed to zombies.
> Epiphenominalism is forced to defend the absurd notion that epiphenominalism
> (and all other theories of consciousness) are proposed by things that have
> never experienced consciousness.  Perhaps instead, its core assumption is
> wrong.  The reason for all these books and discussion threads about

> consciousness is that experiences and consciousness are causally
> efficacious.  If they weren't then why is anyone talking about them?

The people talking about them could be zombies. There is nothing in
any observation of peoples' behaviour that *proves* they are
conscious,

Consciousness is defined on dictionary.com as "awareness of sensations, thoughts, surrounds, etc."  Awareness is defined as "having knowledge".  So we can say consciousness is merely having knowledge of sensations, thoughts, surroundings, etc.


    Right, and it is this that zombies lack.


Zombies can think, understand, solve problems, answer questions, remember, talk about their beliefs, and so on.
    How is this determined? First we must show that such capacities obtain, otherwise we abandon logic and hold the law of non-contradiction in contempt.
 They just are not conscious of anything when they do these things.

    There is no "they"... There is no "agency" for a zombie, unless the narrative here of the observer (who is making the determination) is merely projecting their own capacities...


 So when a zombie thinks/says/understands/believes he is conscious, you might say that what it thinks is wrong, or that it is lying.  But in what sense is it lying or in what sense is it wrong?  Its brain does the same calculations as the other brain that is telling the truth.  Its brain contains the same neural patterns as the other brain that has true beliefs.

    "Same" ? If we are considering an equivalence then that equivalence, unless restricted, is complete. The same holds for difference. If we are going to allow for a spectrum between these then we must simultaneously allow for both , at least in a possible world sense.


Daniel Dennett says it well: "when philosophers claim that zombies are conceivable, they invariably underestimate the task of conception (or imagination), and end up imagining something that violates their own definition".[9][10] He coined the term zimboes (p-zombies that have second-order beliefs) to argue that the idea of a p-zombie is incoherent;[11] "Zimboes thinkZ they are conscious, thinkZ they have qualia, thinkZ they suffer pains – they are just 'wrong' (according to this lamentable tradition), in ways that neither they nor we could ever discover!"

    Dennett is right here! He is pointing out the contradiction in the definition of a zombie implicit in the idea or belief that zombies can have beliefs. An entity either has qualia/consciousness or it does not. If it has consciousness or anything that follows from consciousness - such as knowledge - then it is not a zombie.


 

It then becomes a straightforward problem of information theory and computer science to know if a certain system possesses knowledge of those things or not.

    Knowledge, at least tacitly, implies the ability to act upon the data, not just be guided by it.



This isn't startling.  Doctors today declare people brain dead and take them off life support using the same assumptions.  If we had no principles for determining if something is conscious or not, would we still do this?  Do you worry about stepping on rocks because it might hurt them?  We have good reasons not to worry about those things because we assume there are certain necessary levels of complexity and information processing ability needed to be conscious.  So perhaps if we can tell with reasonable certainty something is not conscious, we might also be reasonably certain that a certain other thing IS conscious.

Proof is another matter, and likely one we will never get.  Your entire life could be a big delusion and everything you might think you know could be wrong.  We can never really prove anything.

    Rubbish! You are making perfection the enemy of the possible. We are fallible and thus can only reason within boundaries and error bars. So, does this knock proofs down? NO!

We might be 99.99999% certain of some belief, but I don't know that we can ever be certain.  Some non-zero amount of doubt regarding the correctness of any proof depends on our own consistency/sanity.

    There is truth and there is the ability to find a proof of that truth.



This is not to say that seeking out explanations, or evidence, or proof is fruitless.  So I don't see this leading to an enemy of the good.

    First, let us banish the ambiguity and inconsistency.


 


 
because consciousness is not causally efficacious.

I disagree with this.

    I agree with your disagreement!


 
It is

emergent, at a higher level of description, supervenient

Right, it could be emergent / supervenient, but that does not mean it is causally inefficacious.

You need to look at the counterfactual to say whether or not it is causally important.  Ask "If this thing were not conscious would it still behave in the same way?"  If not, then how can we say that consciousness is causally inefficacious?
 
or
epiphenomenal - but not separately causally efficacious, or the
problem of other minds and zombies would not exist.


There is no problem of zombies if you can show the idea to be inconsistent.

Jason

    Nice debate!


Thanks,

Jason


--

Jason Resch

unread,
Sep 27, 2012, 10:53:35 AM9/27/12
to everyth...@googlegroups.com
On Thu, Sep 27, 2012 at 7:49 AM, Stathis Papaioannou <stat...@gmail.com> wrote:
On Thu, Sep 27, 2012 at 1:29 PM, Jason Resch <jason...@gmail.com> wrote:

> But can you separate the consciousness from that sequence of physical events
> or not?  There are multiple levels involved here and you may be missing the
> forest for the trees by focusing only on the atoms.  Saying the
> consciousness is irrelevant in the processes of the brain may be like saying
> human psychology is irrelevant in the price moves of the stock market.  Of
> course, you might explain the price moves in terms of atomic interactions,
> but you are missing the effects of higher-level phenomenon, which are real
> and do make a difference.

The higher level description is not an entity with *separate* causal
power. Was the stock market movement caused by physics, chemistry,
biochemistry or psychology? In a manner of speaking, it's correct to
say any of them; but we know that all the chemical, biochemical and
psychological properties are ultimately traceable to the physics, even
if it isn't practically useful to attempt stock market prediction by
analysing brain physics. What I object to is the idea of strong
emergence, that higher level properties are not merely surprising but
fundamentally unable to be deduced from lower level properties.

I agree with your distaste for strong emergence, but I think that you can no more take the consciousness out of the brain than you could take out the chemical reactions.  Each is a fundamental part of what it is and does.
 

>> We can't observe the
>> experience itself.
>
>
> I'm not convinced of this.  While today, we have difficulty in even defining
> the term, in the future, with better tools and understanding of minds and
> consciousness, we may indeed be able to tell if a certain process implements
> the right combination of processes to have what we would call a mind.  By
> tracing the flows of information in its mind, we might even know what it is
> and isn't aware of.
>
> Albeit at a low resolution, scientists have already extracted from brain
> scans what people are seeing:
> http://www.newscientist.com/article/dn16267-mindreading-software-could-record-your-dreams.html

We still can't observe the experience. Advanced aliens may be able to
read our thoughts very accurately in this way but still have no idea
what we actually experience or whether we are conscious at all.


Maybe they could know what we experience.  If they moved their minds to alternate substrates they might have much greater neural plasticity and this could allow them to alter their own minds and know what we experience.  Perhaps with enough practice doing this with different creatures from all over the galaxy they could develop some pretty accurate theories about what processing patterns of information lead to what first person experiences.

 
>> The people talking about them could be zombies. There is nothing in
>> any observation of peoples' behaviour that *proves* they are
>> conscious,
>
>
> Consciousness is defined on dictionary.com as "awareness of sensations,
> thoughts, surrounds, etc."  Awareness is defined as "having knowledge".  So
> we can say consciousness is merely having knowledge of sensations, thoughts,
> surroundings, etc.

The "merely" makes it an epiphenomenon. I think this is Daniel
Dennett's position. Dennett argues that zombies are logically impossible
as consciousness is nothing but the sort of information processing
that goes on in brains.

Zombies are logically impossible precisely because consciousness is not an epiphenomenon.

Dennett explains his position on epiphenomenalism here:

He is "flabbergasted that anyone takes this view seriously"

Jason

Stephen P. King

unread,
Sep 27, 2012, 10:53:48 AM9/27/12
to everyth...@googlegroups.com
On 9/27/2012 4:37 AM, Bruno Marchal wrote:
On 26 Sep 2012, at 19:37, Craig Weinberg wrote:
in which case, how are they really arithmetic.

They are not. Arithmetical truth is already not arithmetical.
Arithmetic seen from inside is *vastly* bigger than arithmetic. This needs a bit of "model theory" to be explained formally.
Hi Bruno,

    Is this not just the direct implication of the Löwenheim–Skolem theorems? What is missing? The discussion here is wonderful! http://www.earlham.edu/~peters/courses/logsys/low-skol.htm#review It seems to run parallel to what I have been trying to discuss with you regarding the possibility that non-standard models allow for a "true" plurality of 1p in extensions of modal logic.

http://www.earlham.edu/~peters/courses/logsys/low-skol.htm#skolem
Skolem's Paradox

" LST has bite because we believe that there are uncountably many real numbers (more than 0). Indeed, let's insist that we know it; Cantor proved it in 1873, and we don't want to open the question again. What is remarkable about LST is the assertion that even if the intended interpretation of S is a system of arithmetic about the real numbers, and even if the system is consistent and has a model that makes its theorems true, its theorems (under a different interpretation) will be true for a domain too small to contain all the real numbers. Systems about uncountable infinities can be given a model whose domain is only countable. Systems about the reals can be interpreted as if they were about some set of objects no more numerous than the natural numbers. It is as if a syntactical version of "One-Thousand and One Arabian Nights" could be interpreted as "One Night in Centerville".

This strange situation is not hypothetical. There are systems of set theory (or number theory or predicate logic) that contain a theorem which asserts in the intended interpretation that the cardinality of the real numbers exceeds the cardinality of the naturals. That's good, because it's true. Such systems therefore say that the cardinality of the reals is uncountable. So the cardinality of the reals must really be uncountable in all the models of the system, for a model is an interpretation in which the theorems come out true (for that interpretation). Now one would think that if theorems about uncountable cardinalities are true for a model, then the domain of the model must have uncountably many members. But LST says this is not so. Even these systems, if they have models at all, have at least one countable model.

Insofar as this is a paradox it is called Skolem's paradox. It is at least a paradox in the ancient sense: an astonishing and implausible result. Is it a paradox in the modern sense, making contradiction apparently unavoidable? We know from history all too many cases of shocking results initially misperceived as contradictions. Think about the existence of pairs of numbers with no common divisor, no matter how small, or the property of every infinite set that it can be put into one-to-one correspondence with some of its proper subsets."
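To state the tension in the quoted passage compactly, in standard notation (a summary sketch, assuming a countable first-order language such as that of ZFC):

    % Cantor (1873): the reals are uncountable.
    |\mathbb{R}| \;=\; 2^{\aleph_0} \;>\; \aleph_0
    % Downward Löwenheim–Skolem: any consistent countable first-order theory T has a
    % model of cardinality at most aleph_0 -- even when T proves "the reals are uncountable".
    T \text{ consistent} \;\Rightarrow\; \exists M \,\big(M \models T \ \wedge\ |M| \le \aleph_0\big)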

    What broke the Skolem prison for me was a remark by Louis Kauffman that self-referencing systems allows for finite models of infinities as they can capture the mereology of infinite sets ( the property that there is a one-to-one correspondence between whole and proper subsets). We get stuck on what is "proper"....
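A concrete instance of the whole-to-proper-subset correspondence mentioned here (the Dedekind characterization of an infinite set) is simply:

    % The doubling map is a bijection between N and the even numbers, a proper subset of N.
    f : \mathbb{N} \to 2\mathbb{N}, \qquad f(n) = 2n
    % No finite set admits such a map; admitting one is exactly what "Dedekind-infinite" means.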

From: http://www.earlham.edu/~peters/courses/logsys/low-skol.htm#amb1

"To talk about incurable ambiguity suggests calamity, and this is how it seems to some logicians. LST shows that the real numbers cannot be specified uniquely by any first order theory. If that is so, then one of the most important domains in mathematics cannot be reached with the precision and finality we thought formal systems permitted us to attain.
    But this is not a calamity from another point of view. If formal languages were not ambiguous or capable of many interpretations, they would not be formal. From this standpoint, LST is less about deficiencies in our ability to express meanings univocally, or about deficiencies in our ability to understand the real numbers, than it is about the gap between form and content (syntax and semantics).
    Other metatheorems prove this ambiguity for statements and systems in general. LST proves this ambiguity for any system attempting to describe uncountable infinities: even if they succeed on one interpretation, there will always be other interpretations of the same underlying syntax by which they describe only countably many objects.
    LST has this similarity to Gödel's first incompleteness theorem. While Gödel's theorem only applies to "sufficiently strong" systems of arithmetic, LST only applies to first-order theories of a certain adequacy, namely, those with models, hence those that are consistent. Gödel's theorem finds a surprising weakness in strength; sufficiently powerful systems of arithmetic are incomplete. LST also finds a surprising weakness in strength; first-order theories with models are importantly ambiguous in a way that especially hurts set theory, arithmetic, and other theories concerned to capture truths about uncountable cardinalities.

Serious Incurable Ambiguity: Plural Models

But LST proves a kind of ambiguity much more important than the permanent plurality of interpretations. There can be plural models, that is, plural interpetations in which the theorems come out true.

As we become familiar with formalism and its susceptibility to various interpretations, we might think that plural interpretations are not that surprising; perhaps they are inevitable. Plural models are more surprising. We might think that we must go out of our way to get them. But on the contrary, LST says they are inevitable for systems of a certain kind.

Very Serious Incurable Ambiguity: Non-Categoricity

But the ambiguity is stronger still. The plural permissible models are not always isomorphic with one another. The isomorphism of models is a technical concept that we don't have to explain fully here. Essentially two models are isomorphic if their domains map one another; their elements have the same relations under the functions and predicates defined in the interpretations containing those domains.

If all the models of a system are isomorphic with one another, we call the system categorical. LST proves that systems with uncountable models also have countable models; this means that the domains of the two models have different cardinalities, which is enough to prevent isomorphism. Hence, consistent first-order systems, including systems of arithmetic, are non-categorical.

We might have thought that, even if a vast system of uninterpreted marks on paper were susceptible of two or more coherent interpretations, or even two or more models, at least they would all be "equivalent" or "isomorphic" to each other, in effect using different terms for the same things. But non-categoricity upsets this expectation. Consistent systems will always have non-isomorphic or qualitatively different models."

    This non-isomorphism is the point I have been trying to make, we cannot extract a true plurality of minds from a single Sigma_1 COMP model unless we allow for an inconsistency in the ontologically primitive level. Continuing the quote:


    " We don't even approach univocal reference "at the limit" or asymptotically, by increasing the number of axioms or theorems describing the real numbers until they are infinite in number. We might have thought that, even if a certain vast system of bits sustained non-isomorphic models, we could approach unambiguity (even if we could not reach it) by increasing the size of the system. After all, "10" could symbolize everything from day and night to male and female, and from two to ten; but a string of 1's and 0's a light-year in length must at least narrow down the range of possible referents. But this is not so, for LST applies even to infinitely large systems. LST proves in a very particular way that no first-order formal system of any size can specify the reals uniquely. It proves that no description of the real numbers (in a first-order theory) is categorical."

    This reasoning is parallel to my own argument that there must be a means to "book-keep" the differences and that this cannot be done "in the arithmetic" itself. This is the fundamental argument that I am making for the necessity of physical worlds, which we can represent faithfully as topological spaces, and we get this if we accept the Stone duality as an ontological principle. But that is an argument against your thesis of immaterial monism. :_(
    Continuing the quote:


    "Very Very Serious Incurable Ambiguity: Upward and Downward LST
If the intended model of a first-order theory has a cardinality of ℵ1, then we have to put up with its "shadow" model with a cardinality of ℵ0. But it could be worse. These are only two cardinalities. The range of the ambiguity from this point of view is narrow. Let us say that degree of non-categoricity is 2, since there are only 2 different cardinalities involved."

    Why not allow for arbitrary extensions via forcing? Why not the unnameable towers of cardinalities of Cantor, so long as it is possible to have pair-wise consistent constructions from the infinities?


"But it is worse. A variation of LST called the "downward" LST proves that if a first-order theory has a model of any transfinite cardinality, x, then it also has a model of every transfinite cardinal y, when y > x. Since there are infinitely many infinite cardinalities, this means there are first-order theories with arbitrarily many LST shadow models. The degree of non-categoricity can be any countable number."

    Implying the existence of sets of countable numbers within each model, subject to some constraint?


"There is one more blow. A variation of LST called the "upward" LST proves that if a first-order theory has a model of any infinite cardinality, then it has models of any arbitrary infinite cardinality, hence every infinite cardinality. The degree of non-categoricity can be any infinite number."

    Thus an argument for the Tower!
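
    For the record, here are the two directions in the standard textbook form, for a first-order theory T in a countable language (this compact restatement is mine, not the quoted page's):

        Downward LST: if T has a model of some infinite cardinality, then T has a countable model.
        Upward LST: if T has an infinite model, then T has a model of every infinite cardinality.

    Together: a consistent T of this kind with an infinite model has models of every infinite size, and is therefore non-categorical.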


"A variation of upward LST has been proved for first-order theories with identity: if such a theory has a "normal" model of any infinite cardinality, then it has normal models of any, hence every, infinite cardinality.

Coping
Most mathematicians agree that the Skolem paradox creates no contradiction. But that does not mean they agree on how to resolve it.

First we should note that the ambiguity proved by LST is curable in the sense that LST holds only in first-order theories. Higher-order logics are not afflicted with it, although they are afflicted with many weaknesses absent in first-order logic. The ambiguity is also curable as soon as we add ordering to our collection of domain objects supposed to be real numbers. Once ordering is added, systems intended to capture the reals can become categorical.

But the ambiguity remains baffling and frustrating for first-order theories prior to the introduction of ordering.

Can such a system really assert the uncountability of the reals if the assertion is "just as much" about some merely countable infinite? Or can it really assert that the cardinality of the continuum is Aleph_1 (assuming the continuum hypothesis) if the assertion is "just as much" about every other infinite cardinality? LST may not force us to retract our belief that the reals are uncountable; but on one terrifying reading it does, and to avoid that reading we may well have to alter the modality of our belief that the reals are uncountable.

    What of models that do not assume CH? We get a plenum of continua... (At least between Aleph_0 and Aleph_1) No? Do we necessarily lose countability so long as ordering can be imposed by some rule?


"In metalogic the term "model" is used in (at least) two senses. We have used the term in the more technical sense, as an interpretation of a system in which its theorems come out true for that interpretation. But the term "model" may also be used more casually to refer to the domain of things on which we want to focus, such as the real numbers, especially if we assume that such things have an existence independent of formal systems and human logicians. In this less strict second sense of "model", Platonists in mathematics who believe that the real numbers exist independently of human minds and formal systems can say that there is an uncountable model of the real numbers: namely, the real numbers themselves. However, they must (by LST) admit that first-order formal systems that seem to capture the real numbers can always be satisfied by a merely countable domain. For this reasons, Platonists will remain Platonic and will not pin their hopes on formalization."

    Bad news for Bruno! :_(


"One reading of LST holds that it proves that the cardinality of the real numbers is the same as the cardinality of the rationals, namely, countable. (The two kinds of number could still differ in other ways, just as the naturals and rationals do despite their equal cardinality.) On this reading, the Skolem paradox would create a serious contradiction, for we have Cantor's proof, whose premises and reasoning are at least as strong as those for LST, that the set of reals has a greater cardinality than the set of rationals.

The good news is that this strongly paradoxical reading is optional. The bad news is that the obvious alternatives are very ugly. The most common way to avoid the strongly paradoxical reading is to insist that the real numbers have some elusive, essential property not captured by system S. This view is usually associated with a Platonism that permits its proponents to say that the real numbers have certain properties independently of what we are able to say or prove about them.

The problem with this view is that LST proves that if some new and improved S' had a model, then it too would have a countable model. Hence, no matter what improvements we introduce, either S' has no model or it does not escape the air of paradox created by LST. (S' would at least have its own typographical expression as a model, which is countable.) As Morris Kline put it, while Gödel's first incompleteness theorem showed that certain strong formal systems always prove less than we'd like, LST shows that they also prove more than we'd like."


   Please note the discussion of Platonism!

Jason Resch

unread,
Sep 27, 2012, 11:09:30 AM9/27/12
to everyth...@googlegroups.com
On Thu, Sep 27, 2012 at 8:26 AM, Craig Weinberg <whats...@gmail.com> wrote:


On Thursday, September 27, 2012 1:01:12 AM UTC-4, Jason wrote:


On Wed, Sep 26, 2012 at 11:09 PM, Stephen P. King <step...@charter.net> wrote:
On 9/26/2012 11:29 PM, Jason Resch wrote:


On Wed, Sep 26, 2012 at 9:24 PM, Stathis Papaioannou <stat...@gmail.com> wrote:

On Tue, Sep 25, 2012 at 3:34 PM, Jason Resch <jason...@gmail.com> wrote:

> If it has no causal efficacy, what causes someone to talk about the pain
> they are experiencing?  Is it all coincidental?

There is a sequence of physical events from the application of the
painful stimulus to the subject saying "that hurts", and this
completely explains the observable behaviour.

But can you separate the consciousness from that sequence of physical events or not?  There are multiple levels involved here and you may be missing the forest for the trees by focusing only on the atoms.  Saying the consciousness is irrelevant in the processes of the brain may be like saying human psychology is irrelevant in the price moves of the stock market.  Of course, you might explain the price moves in terms of atomic interactions, but you are missing the effects of higher-level phenomena, which are real and do make a difference.

Exactly Jason. The moment we conflate "physical events" with "painful stimulus" we have lost the war. If we assume that physical events can possibly be defined as full of 'pain', or that they stimulate (i.e. are received and responded to as a signifying experience - which is causally efficacious in changing observed behavior), then we are already begging the question of the explanatory gap. To assume that there can be such a thing as a purely physical event which nonetheless is full of pain and power to influence behavior takes the entirety of sense and awareness for granted but then fails to acknowledge that it was necessary in the first place. Once you have the affect of pain and the effect of behavioral stimulation, you don't need a brain as far as explaining consciousness - you already have consciousness on the sub-personal level.

Well you need something to explain consciousness (besides consciousness itself), otherwise you haven't explained anything.
 
 
We can't observe the
experience itself.

I'm not convinced of this.  While today, we have difficulty in even defining the term, in the future, with better tools and understanding of minds and consciousness, we may indeed be able to tell if a certain process implements the right combination of processes to have what we would call a mind.  By tracing the flows of information in its mind, we might even know what it is and isn't aware of.

Albeit at a low resolution, scientists have already extracted from brain scans what people are seeing:

This may not be what we are seeing at all, but rather what we are looking at. There was a recent study on the visual cortex which showed the same activity whether the subject actually saw something or not.

You're right, it depends on what level we are doing the scanning.  That information might not propagate to the effect that it becomes known throughout the brain.
How do you define puppet here?

 
Simulation isn't an objectively real function, it's a matter of fooling some of the people some of the time.

The term mind is similar to soul in that it assumes a public extension of a private intention which isn't actually real. It makes it a lot easier to talk about if we reify our cognitive-level experiences as a 'mind', but it really is just the mental frequency range of your Self. The body of your Self is your entire body, brain, cells, and even more - your house, your friends, your world. The self is *not* defined in space or bodies, it is reflected in those things.
 


 
I know I'm not a zombie and I believe
that other people aren't zombies either, but I can't be sure.

If you were a zombie, you would still know that you were not a zombie, and still believe other people are not zombies either, but you could not be sure.

    How does this follow the definition of a zombie? They have no qualia thus no ability to reason about qualia!

Zombies can reason.  They can do absolutely everything you can do, except they are not conscious.

No, that's just a bad theoretical assumption. Understandable, but bad. In reality puppets can appear to reason to the extent that something thinks they know what behaviors should constitute reason and makes the leap from thinking that their observation of those behaviors implies awareness. Zombies can't do anything.

They are defined to be able to do anything.  I agree it seems to lead to contradictions.
 
They cannot be themselves. They have no first person experience at all. There is no 'they' there.
 
 They are also completely identical to, and indistinguishable from, you.  The only one who could (in principle) know they are a zombie is the zombie itself, but they don't know anything the non-zombie doesn't, for both the zombie and non-zombie brains have identical information content.  If you ask it if it is conscious, it will still say yes, and believe it.  It will not consider itself to be lying; it will, in fact, believe itself to be telling the truth.  There would be no lie detector test that could detect this lie; the lie is so good, the zombie itself believes it.  The zombie is, in fact, as certain of its own consciousness as the non-zombie.

Confusion of exterior 3p and interior 1p. Assumption of 'identical'. Mistakes.
 
 



This follows because the notion of knowing, which I define as possessing information, applies equally to zombie and non-zombie brains.  Both brains have identical information content, so they both know exactly the same things.

    Then what makes a zombie a zombie???


Right, I don't see that the difference makes a difference to anyone or anything, so the truth that there is still some difference must be questioned.  If there is no difference then the whole notion of zombies becomes inconsistent.

Yup.
 
 

 They both know what red is like, they both know what pain is like.   It's just there is some magical notion of there being a difference between them which is completely illogical.  Zombies don't make sense, and therefore neither do dualist theories such as epiphenomenalism.

    No, the reports that are uttered by a zombie, if we are consistent are not reports of knowledge any more than the output of my calculator is knowledge!

But ask the zombie what it can see, and it can describe everything it sees, inspect its brain and you can see the information flow from the retinas to be processed by the visual cortex, and eventually make it to utterances of what it is looking at.  It knows what it is seeing; its brain contains that knowledge in the same way any other brain does.  You can even watch its hippocampus store memories of what it saw, and when you ask it what it saw a few minutes ago, you can watch this knowledge come out of its brain just as it does in a non-zombie brain.  So in what sense could its knowledge be any less valid than the knowledge in a non-zombie brain?  Remember, zombies are 100% physically identical to their non-zombie counterparts, in every third-person observable way.

Just a hypothetical. In theory, fire shouldn't feel hot. Theory based on exterior mechanics can never apply to interior experience completely, because if it could then it would be logically impossible for there to be any reason for experience to exist at all. The reason why zombies don't make sense is the same reason that we have the hard problem. If function is all there is, why is anyone watching the show?
 

The machine in the state of executing the function is conscious.  It has to be, otherwise it would be a zombie but one that knows, thinks, understands, contemplates, feels, etc.

I think we can build machines that do these things, and so they will be conscious.  You think man-made machines cannot be conscious, so they will be unable to do these things.  It is logically consistent, but will be increasingly hard to justify as the repertoire of man-made machines advances.
 
 


 

> Epiphenominalism is forced to defend the absurd notion that epiphenominalism
> (and all other theories of consciousness) are proposed by things that have
> never experienced consciousness.  Perhaps instead, its core assumption is
> wrong.  The reason for all these books and discussion threads about
> consciousness is that experiences and consciousness are causally
> efficacious.  If they weren't then why is anyone talking about them?

The people talking about them could be zombies. There is nothing in
any observation of peoples' behaviour that *proves* they are
conscious,

Consciousness is defined on dictionary.com as "awareness of sensations, thoughts, surroundings, etc."  Awareness is defined as "having knowledge".  So we can say consciousness is merely having knowledge of sensations, thoughts, surroundings, etc.


    Right, and it is this that zombies lack.


Zombies can think, understand,

No.
 
solve problems, answer questions,

Yes. A Magic Eightball can do that too.
 
remember, talk about their beliefs, and so on.  

No. No beliefs, no memory. We can hear them talk, but 'they' aren't talking. No more than any other puppet.
 
They just are not conscious of anything when they do these things.

But since they *never* were conscious of anything, there never was a 'they' to begin with. You are assuming something that never was.
 
 So when a zombie thinks/says/understands/believes he is conscious, you might say it is wrong or lying.

There is no belief, thought, or understanding. We can hear something being said, but it is only a clever set of automated recordings. What a zombie-puppet says is a fancy voicemail tree.
 
 But in what sense is it lying or in what sense is it wrong?  Its brain does the same calculations as the other brain that is telling the truth.  Its brain contains the same neural patterns as the other brain that has true beliefs.
 

Daniel Dennett says it well: "when philosophers claim that zombies are conceivable, they invariably underestimate the task of conception (or imagination), and end up imagining something that violates their own definition".[9][10] He coined the term zimboes (p-zombies that have second-order beliefs) to argue that the idea of a p-zombie is incoherent;[11] "Zimboes think_Z they are conscious, think_Z they have qualia, think_Z they suffer pains – they are just 'wrong' (according to this lamentable tradition), in ways that neither they nor we could ever discover!"
 

Dennett is just incredibly wrong about everything related to consciousness and perception, but he is very convincing as he expresses the logic of the mistakenly exteriorized interiority very well. The problem isn't that it is illogical, it is that logic isn't the ground of being after all, and supervenes on first person awareness in the first place - which transcends logic and reason. 

I disagree with some of his materialist concepts and ideas on the limits of computers, but overall, of all the well-known philosophers of mind, he seems to be the one I agree with most closely.
 


It then becomes a straightforward problem of information theory and computer science to know if a certain system possesses knowledge of those things or not.

    Knowledge, at least tacitly, implies the ability to act upon the data, not just be guided by it.

Information cannot become significant on its own. Not possible.
 



This isn't startling.  Doctors today declare people brain dead and take them off life support using the same assumptions.  If we had no principles for determining if something is conscious or not, would we still do this?  Do you worry about stepping on rocks because it might hurt them?  We have good reasons not to worry about those things because we assume there are certain necessary levels of complexity and information processing ability needed to be conscious.  So perhaps if we can tell with reasonable certainty something is not conscious, we might also be reasonably certain that a certain other thing IS conscious.

Proof is another matter, and likely one we will never get.  Your entire life could be a big delusion and everything you might think you know could be wrong.  We can never really prove anything.

    Rubbish! You are making perfection the enemy of the possible. We are fallible and thus can only reason within boundaries and error bars. So, does this knock proofs down? NO!

We might be 99.99999% certain of some belief, but I don't know that we can ever be certain.  Some non-zero amount of doubt regarding the correctness of any proof depends on our own consistency/sanity.

This is not to say that seeking out explanations, or evidence, or proof is fruitless.  So I don't see this leading to an enemy of the good.
 


 
because consciousness is not causally efficacious.

Hahahaha "I am not having this conversation" also means  "I have no way of knowing that I am not having this conversation."
 

I disagree with this.

    I agree with your disagreement!

I third this disagreement, and escalate it to the level of truth more fundamental and elemental than all of physics and arithmetic.
 


 
It is
emergent, at a higher level of description, supervenient

Right, it could be emergent / supervenient, but that does not mean it is causally inefficacious.

You need to look at the counterfactual to say whether or not it is causally important.  Ask "If this thing were not conscious would it still behave in the same way?"  If not, then how can we say that consciousness is causally inefficacious?
 
or
epiphenomenal - but not separately causally efficacious, or the
problem of other minds and zombies would not exist.


There is no problem of zombies if you can show the idea to be inconsistent.

You got it. Zombies were a very early and natural mistake, which I don't blame on Chalmers. Our naive realism is to conflate personal with impersonal, sub-personal with micro-impersonal, etc. He made a misstep, but anyone would have done the same. That's what pioneering a field is all about.


Well we can learn something about what consciousness is not from errors too.

Jason

Stephen P. King

unread,
Sep 27, 2012, 11:27:24 AM9/27/12
to everyth...@googlegroups.com
On 9/27/2012 10:22 AM, Jason Resch wrote:
This is to equate reasoning with automatically following an algorithm. This implies perfect predictability at some level and thus the absence of any 1p-only aspects. Additionally, the recipe is something that needs explanation. How was it found...?
    This kind of zombie reasoning is an oxymoron as it assumes the possibility of evaluations and yet disallows the very possibility. Zombies have no qualia and thus cannot represent anything to themselves. A zombie has no "self" and thus lacks the capacity to impress anything upon that non-existent self.

Here, I disagree. If you ask a zombie to solve a riddle, and it ponders it for several minutes and then gives you the correct answer, how can you say it was not reasoning? It is like saying a computer is not multiplying when you ask it what 4*4 is and it gives you 16.

Note that I think we agree (some forms of reasoning probably require consciousness), which only provides another reason to doubt the consistency of the definition of zombies. I don't think reasoning is normally assumed to require consciousness, which is why someone who defines zombies as non-conscious may still hold that they have a reasoning ability.

Hi Jason,

    OK, but isn't that the point I made? Automaton behavior is pre-scripted. It does not result from an internal self-model. Is there some point where the two are identical in the 3p sense? Certainly! But only in that special case does your claim follow; it does not follow generally, as we need to take into account "novel" behavior.

Stephen P. King

unread,
Sep 27, 2012, 11:31:16 AM9/27/12
to everyth...@googlegroups.com
On 9/27/2012 10:22 AM, Jason Resch wrote:
> I think the only difference in what you are saying and what I am
> saying, is I say look the zombies can do these things (by their
> definition), so they must be conscious and there is the inconsistency,
> whereas you say zombies cannot do these things since they are
> not conscious (by their definition), so then zombie behavior cannot be
> indistinguishable to a third party.
>
> It works out to the same conclusion, either zombies are conscious, or
> zombies can't behave indistinguishably, and hence the definition of a
> zombie that is non-conscious but has identical behavior is flawed.
>
Hi Jason,

I am fine with identity of the two if and only if there is no
distinguishable difference in behavior, as this gives us a 3p
definition, but to only see that case as the whole of the gamut of
possibilities is a mistake. My claim is that the zombie idea can cause
as much confusion as its proponents intended to solve. Ideas are two
edged things.... Otherwise they are just meaningless noise.

Bruno Marchal

unread,
Sep 27, 2012, 12:52:30 PM9/27/12
to everyth...@googlegroups.com
But this cannot be entirely correct. Consciousness will make your
brain, at the level below the substitution level, have some well
defined state, with an electron, for example, described with some
precise position. Without consciousness there is no "material" brain
at all.

Of course, you will argue that this is what physics already describes,
with QM. In that sense I am OK, but consciousness is still playing a
role, even if it is not necessarily the seemingly magical role invoked
by Craig.





> Some people, like Craig Weinberg, seem to believe that this is
> possible but it is contrary to all science.

I agree with you on this. As an argument against mechanism, your point
is valid. My point is that the way you talk might be misleading, as
it looks like it is bearing on some notion of primitively causal
matter, but it does not. That plays some role when comparing the "comp
matter" and the QM matter.



> This applies even if the
> whole universe is really just a simulation, because what we observe is
> at the level of the simulation.

Not if we observe ourselves or our neighborhood below our substitution
level. In that case we can see only the trace of all infinitely many
possible simulations, or computations, leading to our actual current
computational states. Again we can say that QM confirms this a
posteriori.
In that case an observation will determine a brain state, in the same
way a self-localization after duplication determines a self-localized
state (like I am in this well defined city).

Bruno




>
>> I think it is the same error as using determinacy to refute free-
>> will. This
>> would be correct if we were living at the determinist base level,
>> but we are
>> not. Consciousness and free-will are real at the level where we
>> live, and
>> unreal, in the big 3p picture, but this concerns only the "outer
>> god", not
>> the "inner one" which can *know* a part of its local self-
>> consistency, and
>> cannot know its local future.
>
>
> --
> Stathis Papaioannou
>

meekerdb

unread,
Sep 27, 2012, 12:57:21 PM9/27/12
to everyth...@googlegroups.com
On 9/27/2012 5:49 AM, Stathis Papaioannou wrote:
Albeit at a low resolution, scientists have already extracted from brain
> scans what people are seeing:
> http://www.newscientist.com/article/dn16267-mindreading-software-could-record-your-dreams.html
We still can't observe the experience. Advanced aliens may be able to
read our thoughts very accurately in this way but still have no idea
what we actually experience or whether we are conscious at all.


But we could tell if it correctly read our thoughts (at least as well as we can tell anything).  We could look at the records and say, "Yes, that's what I was thinking."  And after we did that we would develop confidence that it could also read what other people were thinking - in just the same way we recognize other people as conscious.

Brent

Craig Weinberg

unread,
Sep 27, 2012, 1:00:09 PM9/27/12
to everyth...@googlegroups.com
On Thursday, September 27, 2012 11:09:32 AM UTC-4, Jason wrote:


On Thu, Sep 27, 2012 at 8:26 AM, Craig Weinberg <whats...@gmail.com> wrote:


On Thursday, September 27, 2012 1:01:12 AM UTC-4, Jason wrote:


On Wed, Sep 26, 2012 at 11:09 PM, Stephen P. King <step...@charter.net> wrote:
On 9/26/2012 11:29 PM, Jason Resch wrote:


On Wed, Sep 26, 2012 at 9:24 PM, Stathis Papaioannou <stat...@gmail.com> wrote:

On Tue, Sep 25, 2012 at 3:34 PM, Jason Resch <jason...@gmail.com> wrote:

> If it has no causal efficacy, what causes someone to talk about the pain
> they are experiencing?  Is it all coincidental?

There is a sequence of physical events from the application of the
painful stimulus to the subject saying "that hurts", and this
completely explains the observable behaviour.

But can you separate the consciousness from that sequence of physical events or not?  There are multiple levels involved here and you may be missing the forest for the trees by focusing only on the atoms.  Saying the consciousness is irrelevant in the processes of the brain may be like saying human psychology is irrelevant in the price moves of the stock market.  Of course, you might explain the price moves in terms of atomic interactions, but you are missing the effects of higher-level phenomena, which are real and do make a difference.

Exactly Jason. The moment we conflate "physical events" with "painful stimulus" we have lost the war. If we assume that physical events can possibly be defined as full of 'pain', or that they stimulate (i.e. are received and responded to as a signifying experience - which is causally efficacious in changing observed behavior), then we are already begging the question of the explanatory gap. To assume that there can be such a thing as a purely physical event which nonetheless is full of pain and power to influence behavior takes the entirety of sense and awareness for granted but then fails to acknowledge that it was necessary in the first place. Once you have the affect of pain and the effect of behavioral stimulation, you don't need a brain as far as explaining consciousness - you already have consciousness on the sub-personal level.

Well you need something to explain consciousness (besides consciousness itself), otherwise you haven't explained anything.

No, you need to realize that consciousness cannot be 'explained' any more than it already is through experience itself. *This* is the primitive to which all explanation is tied. Explanation itself is an experience within consciousness, but consciousness is not a function within the field of explanation. We are the explanation.
 
 
 
We can't observe the
experience itself.

I'm not convinced of this.  While today, we have difficulty in even defining the term, in the future, with better tools and understanding of minds and consciousness, we may indeed be able to tell if a certain process implements the right combination of processes to have what we would call a mind.  By tracing the flows of information in its mind, we might even know what it is and isn't aware of.

Albeit at a low resolution, scientists have already extracted from brain scans what people are seeing:

This may not be what we are seeing at all, but rather what we are looking at. There was a recent study on the visual cortex which showed the same activity whether the subject actually saw something or not.

You're right, it depends on what level we are doing the scanning.  That information might not propagate to the effect that it becomes known throughout the brain.

Yes. Once I read that paper on the visual cortex results it made much more sense why the figures lifted from our analysis of visual cortex data looked only vaaaguely like anything at all. If what we are looking at is someone using their brain to look rather than see, then I would expect the patterns to look just like that - vaguely following the outlines of boundaries with lots of noise and random shifts of focus. There's no image there. It's just proto-ocular intention mapped in 2d.
 

Any intentionally constructed dynamic experience which invites a projection of sentience. Cartoons, stuffed animals, marionettes, motion picture images of actors, avatars, AI bots, etc.


 
Simulation isn't an objectively real function, it's a matter of fooling some of the people some of the time.

The term mind is similar to soul in that it assumes a public extension of a private intention which isn't actually real. It makes it a lot easier to talk about if we reify our cognitive-level experiences as a 'mind', but it really is just the mental frequency range of your Self. The body of your Self is your entire body, brain, cells, and even more - your house, your friends, your world. The self is *not* defined in space or bodies, it is reflected in those things.
 


 
I know I'm not a zombie and I believe
that other people aren't zombies either, but I can't be sure.

If you were a zombie, you would still know that you were not a zombie, and still believe other people are not zombies either, but you could not be sure.

    How does this follow the definition of a zombie? They have no qualia thus no ability to reason about qualia!

Zombies can reason.  They can do absolutely everything you can do, except they are not conscious.

No, that's just a bad theoretical assumption. Understandable, but bad. In reality puppets can appear to reason to the extent that something thinks they know what behaviors should constitute reason and makes the leap from thinking that their observation of those behaviors implies awareness. Zombies can't do anything.

They are defined to be able to do anything.  I agree it seems to lead to contradictions.

Yes, it sets it up at the outset. To hypothesize a zombie is to give a hypothetical example of 'zombie water that is water in every possible way except that it is made of kerosene'.
 
 
They cannot be themselves. They have no first person experience at all. There is no 'they' there.
 
 They are also completely identical to, and indistinguishable from, you.  The only one who could (in principle) know they are a zombie is the zombie itself, but they don't know anything the non-zombie doesn't, for both the zombie and non-zombie brains have identical information content.  If you ask it if it is conscious, it will still say yes, and believe it.  It will not consider itself to be lying; it will, in fact, believe itself to be telling the truth.  There would be no lie detector test that could detect this lie; the lie is so good, the zombie itself believes it.  The zombie is, in fact, as certain of its own consciousness as the non-zombie.

Confusion of exterior 3p and interior 1p. Assumption of 'identical'. Mistakes.
 
 



This follows because the notion of knowing, which I define as possessing information, applies equally to zombie and non-zombie brains.  Both brains have identical information content, so they both know exactly the same things.

    Then what makes a zombie a zombie???


Right, I don't see that the difference makes a difference to anyone or anything, so the truth that there is still some difference must be questioned.  If there is no difference then the whole notion of zombies becomes inconsistent.

Yup.
 
 

 They both know what red is like, they both know what pain is like.   It's just there is some magical notion of there being a difference between them which is completely illogical.  Zombies don't make sense, and therefore neither do dualist theories such as epiphenomenalism.

    No, the reports that are uttered by a zombie, if we are consistent are not reports of knowledge any more than the output of my calculator is knowledge!

But ask the zombie what it can see, and it can describe everything it sees, inspect its brain and you can see the information flow from the retinas to be processed by the visual cortex, and eventually make it to utterances of what it is looking at.  It knows what it is seeing; its brain contains that knowledge in the same way any other brain does.  You can even watch its hippocampus store memories of what it saw, and when you ask it what it saw a few minutes ago, you can watch this knowledge come out of its brain just as it does in a non-zombie brain.  So in what sense could its knowledge be any less valid than the knowledge in a non-zombie brain?  Remember, zombies are 100% physically identical to their non-zombie counterparts, in every third-person observable way.

Just a hypothetical. In theory, fire shouldn't feel hot. Theory based on exterior mechanics can never apply to interior experience completely, because if it could then it would be logically impossible for there to be any reason for experience to exist at all. The reason why zombies don't make sense is the same reason that we have the hard problem. If function is all there is, why is anyone watching the show?
 

The machine in the state of executing the function is conscious.  It has to be, otherwise it would be a zombie but one that knows, thinks, understands, contemplates, feels, etc.

But it won't be able to execute the functions well enough to fool everyone all the time.
 

I think we can build machines that do these things, and so they will be conscious.  You think man-made machines cannot be conscious, so they will be unable to do these things.  

No, I think that man made machines can be conscious to the extent of the capacity of what he is making them out of. The capacity to be deeply conscious seems inversely proportional to the capacity to be deterministically controlled by exterior forces, so that mechanism and consciousness are mutually exclusive ends of the same qualitative spectrum. We can make things that are conscious by genetic engineering, but they aren't machines in the sense that we cannot program and control them like we would prefer.
 
It is logically consistent, but will be increasingly hard to justify as the repertoire of man-made machines advances.

Promissory materialism. First let's see one single man-made machine that demonstrates any sign of feeling or voluntary participation whatsoever. We haven't got to 1+1 yet and you're talking about a repertoire of higher math.
 

He's very popular, and probably deservedly so. I think that his view is almost right, but with the subject of consciousness, I think that we are dealing with *the most unique case possible in all possible universes*, so he is exactly wrong in assuming that it can be arrived at through recursive complexity alone.
 
 


It then becomes a straightforward problem of information theory and computer science to know if a certain system possesses knowledge of those things or not.

    Knowledge, at least tacitly, implies the ability to act upon the data, not just be guided by it.

Information cannot become significant on its own. Not possible.
 



This isn't startling.  Doctors today declare people brain dead and take them off life support using the same assumptions.  If we had no principles for determining if something is conscious or not, would we still do this?  Do you worry about stepping on rocks because it might hurt them?  We have good reasons not to worry about those things because we assume there are certain necessary levels of complexity and information processing ability needed to be conscious.  So perhaps if we can tell with reasonable certainty something is not conscious, we might also be reasonably certain that a certain other thing IS conscious.

Proof is another matter, and likely one we will never get.  Your entire life could be a big delusion and everything you might think you know could be wrong.  We can never really prove anything.

    Rubbish! You are making perfection the enemy of the possible. We are fallible and thus can only reason within boundaries and error bars. So, does this knock proofs down? NO!

We might be 99.99999% certain of some belief, but I don't know that we can ever be certain.  Some non-zero amount of doubt regarding the correctness of any proof depends on our own consistency/sanity.

This is not to say that seeking out explanations, or evidence, or proof is fruitless.  So I don't see this leading to an enemy of the good.
 


 
because consciousness is not causally efficacious.

Hahahaha "I am not having this conversation" also means  "I have no way of knowing that I am not having this conversation."
 

I disagree with this.

    I agree with your disagreement!

I third this disagreement, and escalate it to the level of truth more fundamental and elemental than all of physics and arithmetic.
 


 
It is
emergent, at a higher level of description, supervenient

Right, it could be emergent / supervenient, but that does not mean it is causally inefficacious.

You need to look at the counterfactual to say whether or not it is causally important.  Ask "If this thing were not conscious would it still behave in the same way?"  If not, then how can we say that consciousness is causally inefficacious?
 
or
epiphenomenal - but not separately causally efficacious, or the
problem of other minds and zombies would not exist.


There is no problem of zombies if you can show the idea to be inconsistent.

You got it. Zombies were a very early and natural mistake, which I don't blame on Chalmers. Our naive realism is to conflate personal with impersonal, sub-personal with micro-impersonal, etc. He made a misstep, but anyone would have done the same. That's what pioneering a field is all about.


Well we can learn something about what consciousness is not from errors too.

Absolutely. It may really be the only way to address it experimentally. It's only the ultimate interpretation that I take exception to. It's a small detail, but it is the detail that allows our existence.

Craig
 

Jason

meekerdb

unread,
Sep 27, 2012, 1:18:09 PM9/27/12
to everyth...@googlegroups.com
On 9/27/2012 9:52 AM, Bruno Marchal wrote:
>> I object to the idea that consciousness will cause a brain or other
>> machine to behave in a way not predictable by purely physical laws.
>
> But this cannot be entirely correct. Consciousness will make your brain, at the level
> below the substitution level, have some well defined state, with an electron, for
> example, described with some precise position. Without consciousness there is no
> "material" brain at all.

Why would the state be well defined *below* the substitution level? The substitution
level is classical or near classical and so already QM implies that there is a lower level
where the state is not well defined.

Brent

Bruno Marchal

unread,
Sep 27, 2012, 2:12:46 PM9/27/12
to everyth...@googlegroups.com
On 27 Sep 2012, at 16:53, Stephen P. King wrote:

On 9/27/2012 4:37 AM, Bruno Marchal wrote:
On 26 Sep 2012, at 19:37, Craig Weinberg wrote:
in which case, how are they really arithmetic.

They are not. Arithmetical truth is already not arithmetical.
Arithmetic seen from inside is *vastly* bigger than arithmetic. This needs a bit of "model theory" to be explained formally.
Hi Bruno,

    Is this not just the direct implication of the Löwenheim–Skolem theorems? What is missing? The discussion here is wonderful! http://www.earlham.edu/~peters/courses/logsys/low-skol.htm#review It seems to run parallel to what I have been trying to discuss with you regarding the possibility that non-standard models allow for a "true" plurality of 1p in extensions of modal logic.

http://www.earlham.edu/~peters/courses/logsys/low-skol.htm#skolem
Skolem's Paradox

" LST has bite because we believe that there are uncountably many real numbers (more than <aleph.gif>0). Indeed, let's insist that we know it; Cantor proved it in 1873, and we don't want to open the question again. What is remarkable about LST is the assertion that even if the intended interpretation of S is a system of arithmetic about the real numbers, and even if the system is consistent and has a model that makes its theorems true, its theorems (under a different interpretation) will be true for a domain too small to contain all the real numbers. Systems about uncountable infinities can be given a model whose domain is only countable. Systems about the reals can be interpreted as if they were about some set of objects no more numerous than the natural numbers. It is as if a syntactical version of "One-Thousand and One Arabian Nights" could be interpreted as "One Night in Centerville".

This strange situation is not hypothetical. There are systems of set theory (or number theory or predicate logic) that contain a theorem which asserts in the intended interpretation that the cardinality of the real numbers exceeds the cardinality of the naturals. That's good, because it's true. Such systems therefore say that the cardinality of the reals is uncountable. So the cardinality of the reals must really be uncountable in all the models of the system, for a model is an interpretation in which the theorems come out true (for that interpretation). Now one would think that if theorems about uncountable cardinalities are true for a model, then the domain of the model must have uncountably many members. But LST says this is not so. Even these systems, if they have models at all, have at least one countable model.

Insofar as this is a paradox it is called Skolem's paradox. It is at least a paradox in the ancient sense: an astonishing and implausible result. Is it a paradox in the modern sense, making contradiction apparently unavoidable? We know from history all too many cases of shocking results initially misperceived as contradictions. Think about the existence of pairs of numbers with no common divisor, no matter how small, or the property of every infinite set that it can be put into one-to-one correspondence with some of its proper subsets."

    What broke the Skolem prison for me was a remark by Louis Kauffman that self-referencing systems allow for finite models of infinities, as they can capture the mereology of infinite sets (the property that there is a one-to-one correspondence between the whole and proper subsets). We get stuck on what is "proper"....
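
    A concrete instance of that whole/proper-subset correspondence (the standard toy example, nothing specific to Kauffman's construction): the map

        f : N -> N,   f(n) = 2n

    is a bijection from the natural numbers onto the even numbers, a proper subset of N; a set admitting such a bijection onto one of its own proper subsets is exactly what "Dedekind-infinite" means.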

From: http://www.earlham.edu/~peters/courses/logsys/low-skol.htm#amb1

"To talk about incurable ambiguity suggests calamity, and this is how it seems to some logicians. LST shows that the real numbers cannot be specified uniquely by any first order theory. If that is so, then one of the most important domains in mathematics cannot be reached with the precision and finality we thought formal systems permitted us to attain.
    But this is not a calamity from another point of view. If formal languages were not ambiguous or capable of many interpretations, they would not be formal. From this standpoint, LST is less about deficiencies in our ability to express meanings univocally, or about deficiencies in our ability to understand the real numbers, than it is about the gap between form and content (syntax and semantics).
    Other metatheorems prove this ambiguity for statements and systems in general. LST proves this ambiguity for any system attempting to describe uncountable infinities: even if they succeed on one interpretation, there will always be other interpretations of the same underlying syntax by which they describe only countably many objects.
    LST has this similarity to Gödel's first incompleteness theorem. While Gödel's theorem only applies to "sufficiently strong" systems of arithmetic, LST only applies to first-order theories of a certain adequacy, namely, those with models, hence those that are consistent. Gödel's theorem finds a surprising weakness in strength; sufficiently powerful systems of arithmetic are incomplete. LST also finds a surprising weakness in strength; first-order theories with models are importantly ambiguous in a way that especially hurts set theory, arithmetic, and other theories concerned to capture truths about uncountable cardinalities.

Serious Incurable Ambiguity: Plural Models

But LST proves a kind of ambiguity much more important than the permanent plurality of interpretations. There can be plural models, that is, plural interpretations in which the theorems come out true.

As we become familiar with formalism and its susceptibility to various interpretations, we might think that plural interpretations are not that surprising; perhaps they are inevitable. Plural models are more surprising. We might think that we must go out of our way to get them. But on the contrary, LST says they are inevitable for systems of a certain kind.

Very Serious Incurable Ambiguity: Non-Categoricity

But the ambiguity is stronger still. The plural permissible models are not always isomorphic with one another. The isomorphism of models is a technical concept that we don't have to explain fully here. Essentially two models are isomorphic if their domains map one another; their elements have the same relations under the functions and predicates defined in the interpretations containing those domains.

If all the models of a system are isomorphic with one another, we call the system categorical. LST proves that systems with uncountable models also have countable models; this means that the domains of the two models have different cardinalities, which is enough to prevent isomorphism. Hence, consistent first-order systems, including systems of arithmetic, are non-categorical.

We might have thought that, even if a vast system of uninterpreted marks on paper were susceptible of two or more coherent interpretations, or even two or more models, at least they would all be "equivalent" or "isomorphic" to each other, in effect using different terms for the same things. But non-categoricity upsets this expectation. Consistent systems will always have non-isomorphic or qualitatively different models."

    This non-isomorphism is the point I have been trying to make: we cannot extract a true plurality of minds from a single Sigma_1 COMP model unless we allow for an inconsistency at the ontologically primitive level. Continuing the quote:


    " We don't even approach univocal reference "at the limit" or asymptotically, by increasing the number of axioms or theorems describing the real numbers until they are infinite in number. We might have thought that, even if a certain vast system of bits sustained non-isomorphic models, we could approach unambiguity (even if we could not reach it) by increasing the size of the system. After all, "10" could symbolize everything from day and night to male and female, and from two to ten; but a string of 1's and 0's a light-year in length must at least narrow down the range of possible referents. But this is not so, for LST applies even to infinitely large systems. LST proves in a very particular way that no first-order formal system of any size can specify the reals uniquely. It proves that no description of the real numbers (in a first-order theory) is categorical."







    This reasoning is parallel to my own argument that there must be a means to "book keep" the differences and that this cannot be done "in the arithmetic" itself. This is the fundamental argument that I am making for the necessity of physical worlds, which we can represent faithfully as topological spaces, and we get this if we accept the Stone duality as an ontological principle. But that is an argument against your thesis of immaterial monism. :_(



I have no clue what problem you are talking about, nor what you mean by "physical" as playing a role in making which theory categorical, nor why you want this.

I often mention LST and the "Skolem paradox", which is very natural in comp, as it provides a "model" of why a structure might look small from outside (enumerable, for example) and be gigantic from inside (non-enumerable). I gave more explanation and use of the Skolem phenomenon in "conscience et mécanisme": you can see the drawing to illustrate what happens:







    Continuing the quote:


    "Very Very Serious Incurable Ambiguity: Upward and Downward LST
If the intended model of a first-order theory has a cardinality of Aleph_1, then we have to put up with its "shadow" model with a cardinality of Aleph_0. But it could be worse. These are only two cardinalities. The range of the ambiguity from this point of view is narrow. Let us say that degree of non-categoricity is 2, since there are only 2 different cardinalities involved."

    Why not allow for arbitrary extensions via forcing? Why not the unnameable towers of cardinalities of Cantor, so long as it is possible to have pair-wise consistent constructions from the infinities?


"But it is worse. A variation of LST called the "downward" LST proves that if a first-order theory has a model of any transfinite cardinality, x, then it also has a model of every transfinite cardinal y, when y > x. Since there are infinitely many infinite cardinalities, this means there are first-order theories with arbitrarily many LST shadow models. The degree of non-categoricity can be any countable number."

    Implying the existence of sets of countable numbers within each model, subject to some constraint?


"There is one more blow. A variation of LST called the "upward" LST proves that if a first-order theory has a model of any infinite cardinality, then it has models of any arbitrary infinite cardinality, hence every infinite cardinality. The degree of non-categoricity can be any infinite number."

    Thus an argument for the Tower!


"A variation of upward LST has been proved for first-order theories with identity: if such a theory has a "normal" model of any infinite cardinality, then it has normal models of any, hence every, infinite cardinality.

Coping
Most mathematicians agree that the Skolem paradox creates no contradiction. But that does not mean they agree on how to resolve it.

First we should note that the ambiguity proved by LST is curable in the sense that LST holds only in first-order theories. Higher-order logics are not afflicted with it, although they are afflicted with many weaknesses absent in first-order logic. The ambiguity is also curable as soon as we add ordering to our collection of domain objects supposed to be real numbers. Once ordering is added, systems intended to capture the reals can become categorical.

But the ambiguity remains baffling and frustrating for first-order theories prior to the introduction of ordering.

Can such a system really assert the uncountability of the reals if the assertion is "just as much" about some merely countable infinite? Or can it really assert that the cardinality of the continuum is Aleph_1 (assuming the continuum hypothesis) if the assertion is "just as much" about every other infinite cardinality? LST may not force us to retract our belief that the reals are uncountable; but on one terrifying reading it does, and to avoid that reading we may well have to alter the modality of our belief that the reals are uncountable.


He is right on this. This is the reason I avoid set theories, as too ambiguous, and favor combinatorial theories whose intended models are enumerable. With comp the existence of absolutely non-enumerable infinities is absolutely undecidable. Sets are just an ultra-simplifying tool in math, and it is a bit insane to put sets in an ontology. Comp needs arithmetical realism, not set theoretical realism.




    What of models that do not assume CH? We get a plenum of continua... (At least between Aleph_0 and Aleph_1) No? Do we necessarily lose countability so long as ordering can be imposed by some rule?

You mean between Aleph_0 and 2^Aleph_0?
By definition Aleph_1 is the least non-enumerable cardinal. The question is: is 2^Aleph_0 equal to Aleph_1, or is it bigger than Aleph_1? With the forcing technique you can build models of ZF with 2^Aleph_0 = Aleph_17, or Aleph_666, or Aleph_{omega+1}, etc.
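
To make the constraint explicit (these are standard facts, assuming only the consistency of ZF): essentially the only restriction ZFC itself proves on which cardinal 2^Aleph_0 can be, beyond being uncountable, is König's theorem,

    cf(2^Aleph_0) > Aleph_0,

so it is consistent with ZFC that 2^Aleph_0 equals any prescribed cardinal of uncountable cofinality - Aleph_2, Aleph_17, Aleph_666, Aleph_{omega+1}, and so on - but it can never equal Aleph_omega, whose cofinality is countable.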




"In metalogic the term "model" is used in (at least) two senses. We have used the term in the more technical sense, as an interpretation of a system in which its theorems come out true for that interpretation. But the term "model" may also be used more casually to refer to the domain of things on which we want to focus, such as the real numbers, especially if we assume that such things have an existence independent of formal systems and human logicians. In this less strict second sense of "model", Platonists in mathematics who believe that the real numbers exist independently of human minds and formal systems can say that there is an uncountable model of the real numbers: namely, the real numbers themselves. However, they must (by LST) admit that first-order formal systems that seem to capture the real numbers can always be satisfied by a merely countable domain. For this reasons, Platonists will remain Platonic and will not pin their hopes on formalization."

    Bad news for Bruno! :_(




Come on Stephen. This is known material, and I have already explained more than once why all this is a chance for mechanism. Mechanism does not ask for, and if you want even puts in doubt, any mathematical realism and physical realism above the natural numbers. This was also one of my criticisms of Tegmark. He took too much math for granted, without distinguishing the mind's tools from the reality to which they refer.
With comp, analysis and physics are just in the imagination of the numbers. 

Still, LST is very interesting. I might come back with more on this, but I am not sure people have the formal background, especially when looking at your own jump in favor of physicalism, or weak materialism, from LST. You can try to elaborate on this, but do it in simple English without escaping into more technicalities, unless you can make a genuine logical point, not an analogical one.

Also, I use the term "realist" instead of platonic for such kind of mathematical platonism, so to be able to use "platonism" for the doctrine of Plato, which is only arithmetical realist with the earlier and post Plato platonist.






"One reading of LST holds that it proves that the cardinality of the real numbers is the same as the cardinality of the rationals, namely, countable. (The two kinds of number could still differ in other ways, just as the naturals and rationals do despite their equal cardinality.) On this reading, the Skolem paradox would create a serious contradiction, for we have Cantor's proof, whose premises and reasoning are at least as strong as those for LST, that the set of reals has a greater cardinality than the set of rationals.

The good news is that this strongly paradoxical reading is optional. The bad news is that the obvious alternatives are very ugly. The most common way to avoid the strongly paradoxical reading is to insist that the real numbers have some elusive, essential property not captured by system S. This view is usually associated with a Platonism that permits its proponents to say that the real numbers have certain properties independently of what we are able to say or prove about them.

The problem with this view is that LST proves that if some new and improved S' had a model, then it too would have a countable model. Hence, no matter what improvements we introduce, either S' has no model or it does not escape the air of paradox created by LST. (S' would at least have its own typographical expression as a model, which is countable.) As Morris Kline put it, while Gödel's first incompleteness theorem showed that certain strong formal systems always prove less than we'd like, LST shows that they also prove more than we'd like."


   Please note the discussion of Platonism!


The whole comment is entirely based on arithmetical platonism, and it does make sense to explain why machines can only remain fuzzy when they try to get a 3p description of 1p reality. The uncountable exists, but only from the machine's point of view. It is epistemological. What does exist in a more important sense is the non-mechanically or non-recursively enumerable, and this is sort of "absolute" by Church's thesis.

Cardinality is a relative notion, not an absolute one. That is what LST shows.

I often give Cantor's diagonalization proof, but it is just to prepare for Kleene's diagonalization proof. 
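
A minimal sketch of the shared diagonal step, in Python (the function names here are mine, purely illustrative):

# Given any table of total functions f_0, f_1, ... from N to N,
# the diagonal function g(n) = f_n(n) + 1 differs from every f_i at input i.
def diagonal(funcs):
    return lambda n: funcs[n](n) + 1

fs = [lambda n: 0,       # f_0: constantly zero
      lambda n: n,       # f_1: identity
      lambda n: n * n]   # f_2: squaring

g = diagonal(fs)
assert all(g(i) != f(i) for i, f in enumerate(fs))  # g escapes every row

# Cantor: no list of all functions N -> N (or of all reals) can be exhaustive.
# Kleene's twist: if the list were a computable enumeration of the total
# computable functions, g would be computable too, so no such enumeration exists.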

Non-standard models play only an indirect role in comp, notably in explaining the unavoidable ambiguity of many terms in our languages, formal or not. It can help to understand the difficulty of a term like "blue", for example. The same happens with the word "finite", which we cannot define formally, and which makes comp formally undefinable, except by things like the "yes doctor" for a finite procedure. That is why comp has the axioms it has. The level of substitution is also not knowable, etc.

As I said to John Mikes, all this is what will prevent us from building complete or categorical theories of machines and persons, and why comp, which has a reductionist look, is actually a vaccine against reductionism about machines, numbers and persons.

Bruno


meekerdb

unread,
Sep 27, 2012, 3:09:49 PM9/27/12
to everyth...@googlegroups.com
On 9/27/2012 8:27 AM, Stephen P. King wrote:
Note that I think we agree (some forms of reasoning probably require consciousness), which only provides another reason to doubt the consistency of the definition of zombies.  I don't think reasoning is normally assumed to require consciousness, which is why someone who defines zombies as non-conscious may still hold that they have a reasoning ability.

Most of reasoning, like other thinking, happens outside consciousness.  Suppose you are playing chess.  A move comes into your head, and the consequences of that move occur to you.  You are conscious of these two thoughts, but are you conscious of where they came from and why one followed the other?

Brent

Stathis Papaioannou

unread,
Sep 27, 2012, 7:44:35 PM9/27/12
to everyth...@googlegroups.com
On Thu, Sep 27, 2012 at 11:30 PM, Craig Weinberg <whats...@gmail.com> wrote:

>> I object to the idea that consciousness will cause a brain or other
>> machine to behave in a way not predictable by purely physical laws.
>> Some people, like Craig Weinberg, seem to believe that this is
>> possible but it is contrary to all science. This applies even if the
>> whole universe is really just a simulation, because what we observe is
>> at the level of the simulation.
>
>
> That is not what I believe at all. I have corrected you and others on this
> many times but you won't hear it. Nothing unusual needs to happen in the
> brain for ordinary consciousness to take place, it's just that physics has
> nothing to say about whether billions of synapses will suddenly begin firing
> in complex synchronized patterns or not. Physics doesn't care. Can neurons
> fire when conditions are right? Yes. Can our thoughts and intentions
> directly control conditions in the brain? YES. Of course. Obviously.
> Otherwise we wouldn't care any more about the human brain than we would a
> wasps nest. It's not that physics needs to be amended, it is that experience
> is part of physics, and physics is part of experience.

If physics cannot predict even in theory when the neurons will fire
then *by definition* the neurons behave contrary to physics.


--
Stathis Papaioannou

Craig Weinberg

unread,
Sep 27, 2012, 10:52:06 PM9/27/12
to everyth...@googlegroups.com

If the neurons fire based on the participation of a personal identity in response to events in a person's life, then how could physics predict them without predicting a person's entire life?

Craig
 


--
Stathis Papaioannou

Stathis Papaioannou

unread,
Sep 27, 2012, 11:28:41 PM9/27/12
to everyth...@googlegroups.com
On Fri, Sep 28, 2012 at 12:52 PM, Craig Weinberg <whats...@gmail.com> wrote:

>> If physics cannot predict even in theory when the neurons will fire
>> then *by definition* the neurons behave contrary to physics.
>
>
> If the neurons fire based on the participation of a personal identity in
> response to events in a person's life, then how could physics predict them
> without predicting a person's entire life?

When you replace the spark plugs in your car you don't need to know
everywhere the car is going to go for the duration of its existence.
You just need to know how the spark plugs respond to voltage, current,
temperature and so on. If you can't predict this even in theory then
your car has magical spark plugs and you won't be able to replace
them. Same with your brain.


--
Stathis Papaioannou

Stephen P. King

unread,
Sep 27, 2012, 11:31:43 PM9/27/12
to everyth...@googlegroups.com
    Good point! Physical systems are completely blind to their history, or so we are told...

Craig Weinberg

unread,
Sep 27, 2012, 11:49:12 PM9/27/12
to everyth...@googlegroups.com

The spark plugs don't fire in response to the will of the driver, the brain does. This isn't magic, this is the ordinary process by which we participate in the world in every waking moment of our lives. It is not the same. Building a car that you can drive with your mind is one thing, building a car that can predict where you want to drive to next month is completely different.

Craig



--
Stathis Papaioannou

Roger Clough

unread,
Sep 28, 2012, 5:56:18 AM9/28/12
to everything-list
Hi Bruno Marchal and all, from Leibniz's point of view:

1) Free will is possible with L's determinism if defined
in the following way: if the monad sees the appetite,
then the action is free will. If not, not.

2) Consciousness does not emerge from matter, it
is a "fulgeration" of the All (the monad of monads),
to use L's term. I think that means emanation, not sure.
Matter is never in complete control, nor are the monads,
nor in fact is the All. L's causation is cooperative.
The monad of monads appears to cause changes, but it can only
do so according to the monads' perceptions, according
to their individual desires, because monads are unaffected
by other monads. All changes in monads are actually
caused by their previous states. Since this must occur
according to the preestablished harmony, to me that all boils down
to mean that the preestablished harmony is a script for
monadic change. Like the prices of stock market stocks,
it contains all you need or can know to predict
the future states of all monads, those being individually
given by their previous states. The previous states are
constantly reset so that each monad knows everything
in the universe uniquely from its own point of view.



Roger Clough, rcl...@verizon.net
9/28/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Bruno Marchal
Receiver: everything-list
Time: 2012-09-27, 12:52:30
Subject: Re: Epiphenomenalism (was: Re: Bruno's Restaurant)


On 27 Sep 2012, at 15:08, Stathis Papaioannou wrote:

> On Thu, Sep 27, 2012 at 6:06 PM, Bruno Marchal
> wrote:
>
>> You can approximate consciousness by "belief in self-consistency".
>> This has
>> already a "causal efficacy", notably a relative self-speeding
>> ability (by
>> Gödel "length of proof" theorem). But "belief in self-consistency"

Roger Clough

unread,
Sep 28, 2012, 6:30:33 AM9/28/12
to everything-list
Hi Craig Weinberg
 
 
Roger Clough, rcl...@verizon.net
9/28/2012
"Forever is a long time, especially near the end." -Woody Allen
 
 
----- Receiving the following content -----
Receiver: everything-list
Time: 2012-09-27, 09:26:01
Subject: Re: Epiphenomenalism (was: Re: Bruno's Restaurant)


Bruno Marchal

unread,
Sep 28, 2012, 10:00:32 AM9/28/12
to everyth...@googlegroups.com

On 28 Sep 2012, at 11:56, Roger Clough wrote:

> Hi Bruno Marchal and all, from Leibniz's point of view:
>
> 1) Free will is possible with L's determinism if defined
> in the following way: if the monad sees the appetite,
> then the action is free will. If not, not.

I would have said that this is freedom, not free will.
To be frank, I still don't know how to interpret "monad".


>
> 2) Consciousness does not emerge from matter,

That's coherent with computationalism.


> it
> is a "fulgeration" of the All (the monad of monads),
> to use L's term. I think that means emanation, not sure.
> Matter is never in complete control, nor are the monads,
> nor in fact is the All.

OK.


> L's causation is cooperative.
> The monad of monads appears to cause changes, but it can only
> do so according to the monads' perceptions, according
> to their individual desires, because monads are unaffected
> by other monads. All changes in monads are actually
> caused by their previous states.

Looks like monad might be interpreted in the comp theory by a
computational state, or a relative number (relative to a universal
system or number).



> Since this must occur
> according to the preestablished harmony, to me that all boils down
> to mean that the preestablished harmony is a script for
> monadic change.

It looks like a script describing (a part of) arithmetical truth.



> Like the prices of stock market stocks,
> it contains all you need or can know to predict
> the future states of all monads, those being individually
> given by their previous states. Since the previous states have
> been constantly reset so that each monad knows everything
> in the universe uniquely from its own point of view.


That's not quite clear for me, sorry.

Bruno

Bruno Marchal

unread,
Sep 28, 2012, 1:55:35 PM9/28/12
to everyth...@googlegroups.com

On 27 Sep 2012, at 19:18, meekerdb wrote:

> On 9/27/2012 9:52 AM, Bruno Marchal wrote:
>>> I object to the idea that consciousness will cause a brain or other
>>> machine to behave in a way not predictable by purely physical laws.
>>
>> But this cannot be entirely correct. Consciousness will make your
>> brain, at the level below the substitution level, having some well
>> defined state, with an electron, for example, described with some
>> precise position. Without consciousness there is no "material"
>> brain at all.
>
> Why would the state be well defined *below* the substitution level?
> The substitution level is classical or near classical and so already
> QM implies that there is a lower level where the state is not well
> defined.

This is not quite clear, and depends on your interpretation or even
formulation of QM. The lower level where the state is not defined is
relative to your own state, and it is "well defined" relative to any
finer-grained computations; it just doesn't matter for your
computational state.

I *can* know the exact position of an electron in my brain, even if
this will make me totally ignorant of its momentum. I can know its
exact momentum too, even if this will make me totally ignorant of its
position. In both cases, the electron participates in two different
coherent computations leading to my computational state.
Of course this is just "in principle", as in continuous classical QM,
we need to use distributions, and reasonable Fourier transforms.
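
For concreteness, the Fourier relation alluded to here is the usual textbook one between the position and momentum representations of a state (nothing comp-specific in it):

  \tilde\psi(p) = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} \psi(x)\, e^{-ipx/\hbar}\, dx,
  \qquad \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}

so a state sharply peaked in x is necessarily spread out in p, which is why knowing the "exact position" leaves one ignorant of the momentum, and vice versa.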

The state is well defined, as your state belongs to a computation. It
is not well defined below your substitution level, but this is only
due to your ignorance of which computations you belong to. You can
"observe" yourself below the substitution level, but the details of
such observations are just not relevant for getting your computational
state.

Bruno



>
> Brent
>
>>
>> Of course, you will argue that this is what physics already
>> describes, with QM. In that sense I am OK, but consciousness is
>> still playing a role, even if it is not necessarily the seemingly
>> magical role invoked by Craig.
>>
>

meekerdb

unread,
Sep 28, 2012, 2:30:55 PM9/28/12
to everyth...@googlegroups.com
On 9/28/2012 10:55 AM, Bruno Marchal wrote:
>
> On 27 Sep 2012, at 19:18, meekerdb wrote:
>
>> On 9/27/2012 9:52 AM, Bruno Marchal wrote:
>>>> I object to the idea that consciousness will cause a brain or other
>>>> machine to behave in a way not predictable by purely physical laws.
>>>
>>> But this cannot be entirely correct. Consciousness will make your brain, at the level
>>> below the substitution level, having some well defined state, with an electron, for
>>> example, described with some precise position. Without consciousness there is no
>>> "material" brain at all.
>>
>> Why would the state be well defined *below* the substitution level? The substitution
>> level is classical or near classical and so already QM implies that there is a lower
>> level where the state is not well defined.
>
> This is not quite clear and depends on your interpretation or even formulation of QM.
> The lower level where the state is not defined, is relative to your own state, and it is
> "well defined" relatively to any finer grained computations, it just doesn't matter for
> your computational state.
>
> I *can* know the exact position of an electron in my brain, even if this will make me
> totally ignorant on its impulsions. I can know its exact impulsion too, even if this
> will make me totally ignorant of its position.

But that doesn't imply that the electron does not have a definite position and momentum;
only that you cannot prepare an ensemble in which both values are sharp.

> In both case, the electron participate two different coherent computation leading to my
> computational state.
> Of course this is just "in principle", as in continuous classical QM, we need to use
> distributions, and reasonable Fourier transforms.

But at the fundamental level of the UD 'the electron' has some definite representation in
each of infinitely many computations. The uncertainty comes from the many different
computations. Right?

>
> The state is well defined, as your state belongs to a computation. It is not well
> defined below your substitution level, but this is only due to your ignorance on which
> computations you belong.

Right. What I would generally refer to as 'my state' is a classical state (since I don't
experience Everett's many worlds).

But I still don't understand, "Consciousness will make your brain, at the level below the
substitution level, having some well defined state, with an electron, for example,
described with some precise position. Without consciousness there is no "material" brain
at all. "

How does consciousness "make a brain" or "make matter"? I thought your theory was that
both are made by computations. My intuition is that, within your theory of comp,
consciousness implies consciousness of matter and matter is a construct of consciousness;
so you can't have one without the other.

Brent

Bruno Marchal

unread,
Sep 29, 2012, 3:49:54 AM9/29/12
to everyth...@googlegroups.com
OK. This Fourier relation between complementary observables is quite
mysterious in the comp theory.




>
>> In both case, the electron participate two different coherent
>> computation leading to my computational state.
>> Of course this is just "in principle", as in continuous classical
>> QM, we need to use distributions, and reasonable Fourier transforms.
>
> But at the fundamental level of the UD 'the electron' has some
> definite representation in each of infinitely many computations.
> The uncertainty comes from the many different computations. Right?

Yes, and the fact that we cannot know which one bears us "here and
now". The QM indeterminacy is made into a particular first person comp
indeterminacy.



>
>>
>> The state is well defined, as your state belongs to a computation.
>> It is not well defined below your substitution level, but this is
>> only due to your ignorance on which computations you belong.
>
> Right. What I would generally refer to as 'my state' is a classical
> state (since I don't experience Everett's many worlds).
>
> But I still don't understand, "Consciousness will make your brain,
> at the level below the substitution level, having some well defined
> state, with an electron, for example, described with some precise
> position. Without consciousness there is no "material" brain at all. "
>
> How does consciousness "make a brain" or "make matter"? I thought
> your theory was that both at made by computations. My intuition is
> that, within your theory of comp, consciousness implies
> consciousness of matter and matter is a construct of consciousness;

That's what I was saying.


> so you can't have one without the other.

Exactly. Not sure if we disagree on something here.

Bruno

Roger Clough

unread,
Sep 29, 2012, 5:11:29 AM9/29/12
to everything-list
Hi Bruno Marchal
 
1) A monad is something like a soul, to which a homunculus is attached.
Human homunculi have intellect, feeling and body; animal
and perhaps vegetable monads have only feeling and body,
and bodies of matter have only the body partition. All are
considered to be alive.
 
In general I would say that a monad is a partless, isolated, individual concept
complete enough to represent the identity of a corporeal body.
It is windowless, and thus incapable of being acted on
or of acting on other monads. So it is blind and passive.
 
Although these characteristics would seem to make a monad
dead and useless, it is in fact alive and is being continuously
updated by the supreme monad, which is all-good, all-powerful and
omniscient.
 
The term "monad"  has been used in neoplatonism, gnosticism, computer programming,
and even in quantum  mechanics.  These all mean substantially different things,
although in all of these, "monad" refers loosely to "one" or "unitary."

For a general description see

http://en.wikipedia.org/wiki/Monadology

Leibniz's version, which I use, differs somewhat from these and is
difficult to understand because he does not give a concise
definition that is complete, but rather gives its 97 characteristics in his monadology:
 
 
You will note that each characteristic follows logically from the previous
one.
 
Logicians refer to a monad as a complete concept, meaning
that it is a proposition with all the predicates needed to
clearly define it.
 
Each monad has a stack of "perceptions" which reflect the
perceptions of the universe of monads from that monad's
point of view that are actually acquired and passed on
continuously by the monad of monads. You might think
of these as the energy states of a monad.
 
 
2) The monad also has a stack of "appetites", which
you might think of as its potential desire or will.  Hence
L's reference to the internally viewed appetite as free will.
Besides the appetites, the monad also has an internal energy source.




Roger Clough, rcl...@verizon.net
9/29/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Bruno Marchal
Receiver: everything-list
Time: 2012-09-28, 10:00:32

Stephen P. King

unread,
Sep 29, 2012, 6:21:40 AM9/29/12
to everyth...@googlegroups.com
HEY!

    It's nice to see other people noticing the same thing that I have been complaining about. Thank you, Brent!



On 9/29/2012 3:49 AM, Bruno Marchal wrote:

I *can* know the exact position of an electron in my brain, even if this will make me totally ignorant on its impulsions. I can know its exact impulsion too, even if this will make me totally ignorant of its position.

But that doesn't imply that the electron does not have a definite position and momentum; only that you cannot prepare an ensemble in which both values are sharp.

OK. This Fourier relation between complementary observable is quite mysterious in the comp theory.

    How about that! Bruno, you might wish to read up a little on Pontryagin duality, of which the Fourier relation is an example. It is a relation between spaces. How do you get spaces in your non-theory, Bruno?







In both case, the electron participate two different coherent computation leading to my computational state.
Of course this is just "in principle", as in continuous classical QM, we need to use distributions, and reasonable Fourier transforms.

But at the fundamental level of the UD 'the electron' has some definite representation in each of infinitely many computations.  The uncertainty comes from the many different computations.  Right?

Yes, and the fact that we cannot know which one bears us "here and now". The QM indeterminacy is made into a particular first person comp indeterminacy.

    Where is the "here and now" if not a localization in a physical world. This is defined as "centering" by Quine's Propositional Objects as discussed in Chalmers book, pg. 60-61...








The state is well defined, as your state belongs to a computation. It is not well defined below your substitution level, but this is only due to your ignorance on which computations you belong.

Right.  What I would generally refer to as 'my state' is a classical state (since I don't experience Everett's many worlds).

But I still don't understand, "Consciousness will make your brain, at the level below the substitution level, having some well defined state, with an electron, for example, described with some precise position. Without consciousness there is no "material" brain at all. "

How does consciousness "make a brain" or "make matter"?  I thought your theory was that both at made by computations.  My intuition is that, within your theory of comp, consciousness implies consciousness of matter and matter is a construct of consciousness;

That's what I was saying.

    Really!?




so you can't have one without the other.

Exactly. Not sure if we disagree on something here.

    What exactly are you agreeing about, Bruno? No consciousness without matter? Ah, you think that numbers have intrinsic properties... OK.



Bruno




Brent

You can "observe" yourself below the substitution result, but the detail of such observation are just not relevant for getting your computational state.

Bruno




Brent


Of course, you will argue that this is what physics already describes, with QM. In that sense I am OK, but consciousness is still playing a role, even if it is not necessarily the seemingly magical role invoked by Craig.

    Craig is not invoking any magic other than the fact that consciousness cannot be stripped from what allows it to define itself and remain the same thing. It has content and that content is important. It is not just a hall of mirrors effect that we can collapse with Kleene. No, there is a physical tie-in that cannot be ignored.

Bruno Marchal

unread,
Sep 29, 2012, 10:11:19 AM9/29/12
to everyth...@googlegroups.com
On 29 Sep 2012, at 12:21, Stephen P. King wrote:

HEY!

    It's nice to see other people noticing the same thing that I have been complaining about. Thank you, Brent!


On 9/29/2012 3:49 AM, Bruno Marchal wrote:

I *can* know the exact position of an electron in my brain, even if this will make me totally ignorant on its impulsions. I can know its exact impulsion too, even if this will make me totally ignorant of its position.

But that doesn't imply that the electron does not have a definite position and momentum; only that you cannot prepare an ensemble in which both values are sharp.

OK. This Fourier relation between complementary observable is quite mysterious in the comp theory.

    How about that! Bruno, you might wish to read up a little on Pontryagin duality, of which the Fourier relation is an example. It is a relation between spaces. How do you get spaces in your non-theory, Bruno?

?

The result is that we have to explain geometry, analysis and physics from numbers. It is constructive, as it shows the unique method which keeps distinct, and relates, the different views, and the quanta/qualia differences. But the result is a problem, indeed: a problem in intensional arithmetic.








In both case, the electron participate two different coherent computation leading to my computational state.
Of course this is just "in principle", as in continuous classical QM, we need to use distributions, and reasonable Fourier transforms.

But at the fundamental level of the UD 'the electron' has some definite representation in each of infinitely many computations.  The uncertainty comes from the many different computations.  Right?

Yes, and the fact that we cannot know which one bears us "here and now". The QM indeterminacy is made into a particular first person comp indeterminacy.

    Where is the "here and now" if not a localization in a physical world.

Perhaps, but you need to define what you mean by physical world without assuming a *primitive* physical world.



This is defined as "centering" by Quine's Propositional Objects as discussed in Chalmers book, pg. 60-61...







The state is well defined, as your state belongs to a computation. It is not well defined below your substitution level, but this is only due to your ignorance on which computations you belong.

Right.  What I would generally refer to as 'my state' is a classical state (since I don't experience Everett's many worlds).

But I still don't understand, "Consciousness will make your brain, at the level below the substitution level, having some well defined state, with an electron, for example, described with some precise position. Without consciousness there is no "material" brain at all. "

How does consciousness "make a brain" or "make matter"?  I thought your theory was that both at made by computations.  My intuition is that, within your theory of comp, consciousness implies consciousness of matter and matter is a construct of consciousness;

That's what I was saying.

    Really!?

?






so you can't have one without the other.

Exactly. Not sure if we disagree on something here.

    What exactly are you agreeing about, Bruno? No consciousness without matter? Ah, you think that numbers have intrinsic properties... OK.

Indeed. I think 17 is intrinsically a prime number in all possible realities. This is needed to define in an intrinsic way the non intrinsic, intensional properties of the relative number (machines). Being universal, or simply being a code, or an address is not intrinsic, but can be once we choose an initial Turing universal base.

Bruno



Stathis Papaioannou

unread,
Sep 29, 2012, 2:14:02 PM9/29/12
to everyth...@googlegroups.com
On Fri, Sep 28, 2012 at 1:49 PM, Craig Weinberg <whats...@gmail.com> wrote:

> The spark plugs don't fire in response to the will of the driver, the brain
> does. This isn't magic, this is the ordinary process by which we participate
> in the world in every waking moment of our lives. It is not the same.
> Building a car that you can drive with your mind is one thing, building a
> car that can predict where you want to drive to next month is completely
> different.

But the atoms in the food I ate for dinner that will be incorporated
into my brain don't know what I'm going to do next month. All they
have to do is behave like every other carbon, hydrogen, nitrogen etc.
atom in the universe. Whatever my brain does, it does it with those
interchangeable components.


--
Stathis Papaioannou

meekerdb

unread,
Sep 29, 2012, 3:33:09 PM9/29/12
to everyth...@googlegroups.com
On 9/29/2012 7:11 AM, Bruno Marchal wrote:
Yes, and the fact that we cannot know which one bears us "here and now". The QM indeterminacy is made into a particular first person comp indeterminacy.

    Where is the "here and now" if not a localization in a physical world.

Perhaps, but you need to define what you mean by physical world without assuming a *primitive* physical world.

Physical objects are exactly the kind of thing that are defined ostensively.

Brent

Stephen P. King

unread,
Sep 29, 2012, 7:54:56 PM9/29/12
to everyth...@googlegroups.com
On 9/29/2012 10:11 AM, Bruno Marchal wrote:

On 29 Sep 2012, at 12:21, Stephen P. King wrote:

HEY!

    It's nice to see other people noticing the same thing that I have been complaining about. Thank you, Brent!


On 9/29/2012 3:49 AM, Bruno Marchal wrote:

I *can* know the exact position of an electron in my brain, even if this will make me totally ignorant on its impulsions. I can know its exact impulsion too, even if this will make me totally ignorant of its position.

But that doesn't imply that the electron does not have a definite position and momentum; only that you cannot prepare an ensemble in which both values are sharp.

OK. This Fourier relation between complementary observable is quite mysterious in the comp theory.

    How about that! Bruno, you might wish to read up a little on Pontryagin duality, of which the Fourier relation is an example. It is a relation between spaces. How do you get spaces in your non-theory, Bruno?

?

The result is that we have to explain geometry, analysis and physics from numbers. It is constructive as it shows the unique method which keeps distinct and relate the different views, and the quanta/qualia differences. But the result is a problem, indeed: a problem in intensional arithmetic.
Hi Bruno,

    Whatever the means by which they are constructed, it is still a space that is the end result. A space is simply "a set with some added structure."










In both case, the electron participate two different coherent computation leading to my computational state.
Of course this is just "in principle", as in continuous classical QM, we need to use distributions, and reasonable Fourier transforms.

But at the fundamental level of the UD 'the electron' has some definite representation in each of infinitely many computations.  The uncertainty comes from the many different computations.  Right?

Yes, and the fact that we cannot know which one bears us "here and now". The QM indeterminacy is made into a particular first person comp indeterminacy.

    Where is the "here and now" if not a localization in a physical world.

Perhaps, but you need to define what you mean by physical world without assuming a *primitive* physical world.

    I am OK with the idea that a physical world is that which can be described by a Boolean algebra in a "sharable way". The trick is the "sharing". In order to share something there must be multiple entities that can each participate in some way, and those entities must be in some way distinguishable from each other.





This is defined as "centering" by Quine's Propositional Objects as discussed in Chalmers book, pg. 60-61...







The state is well defined, as your state belongs to a computation. It is not well defined below your substitution level, but this is only due to your ignorance on which computations you belong.

Right.  What I would generally refer to as 'my state' is a classical state (since I don't experience Everett's many worlds).

But I still don't understand, "Consciousness will make your brain, at the level below the substitution level, having some well defined state, with an electron, for example, described with some precise position. Without consciousness there is no "material" brain at all. "

How does consciousness "make a brain" or "make matter"?  I thought your theory was that both at made by computations.  My intuition is that, within your theory of comp, consciousness implies consciousness of matter and matter is a construct of consciousness;

That's what I was saying.

    Really!?

?



    I believe that it was Brent that wrote: "My intuition is that, within your theory of comp, consciousness implies consciousness of matter and matter is a construct of consciousness; " and you wrote that you agreed.






so you can't have one without the other.

Exactly. Not sure if we disagree on something here.

    What exactly are you agreeing about, Bruno? No consciousness without matter? Ah, you think that numbers have intrinsic properties... OK.

Indeed. I think 17 is intrinsically a prime number in all possible realities.

    It is not a reality in a world that only has 16 objects in it. I can come up with several other counter-examples in terms of finite fields, but that is overly belaboring the point.


This is needed to define in an intrinsic way the non intrinsic, intensional properties of the relative number (machines). Being universal, or simply being a code, or an address is not intrinsic, but can be once we choose an initial Turing universal base.

    How do you distinguish one version of the code X from another Y such that X interviews Y has a meaning?


Bruno

Craig Weinberg

unread,
Sep 29, 2012, 9:12:28 PM9/29/12
to everyth...@googlegroups.com

At the level in which there are interchangeable components, there is no brain. At the level at which we can describe a brain, the context is a whole living organism. Your view conflates what I call the micro-impersonal level with higher impersonal levels and the perceptual frame of impersonal representations with the perceptual frame of personal presentations.

Craig



--
Stathis Papaioannou

Bruno Marchal

unread,
Sep 30, 2012, 3:34:20 AM9/30/12
to everyth...@googlegroups.com
They are referred to ostensively. They are not "defined" in that way, at least not in the theory. Only in practice are they referred to ostensively. In our context, we search for a theory, not a practice.

Bruno

Bruno Marchal

unread,
Sep 30, 2012, 3:59:44 AM9/30/12
to everyth...@googlegroups.com
On 30 Sep 2012, at 01:54, Stephen P. King wrote:

On 9/29/2012 10:11 AM, Bruno Marchal wrote:

On 29 Sep 2012, at 12:21, Stephen P. King wrote:

HEY!

    It's nice to see other people noticing the same thing that I have been complaining about. Thank you, Brent!


On 9/29/2012 3:49 AM, Bruno Marchal wrote:

I *can* know the exact position of an electron in my brain, even if this will make me totally ignorant on its impulsions. I can know its exact impulsion too, even if this will make me totally ignorant of its position.

But that doesn't imply that the electron does not have a definite position and momentum; only that you cannot prepare an ensemble in which both values are sharp.

OK. This Fourier relation between complementary observable is quite mysterious in the comp theory.

    How about that! Bruno, you might wish to read up a little on Pontryagin duality, of which the Fourier relation is an example. It is a relation between spaces. How do you get spaces in your non-theory, Bruno?

?

The result is that we have to explain geometry, analysis and physics from numbers. It is constructive as it shows the unique method which keeps distinct and relate the different views, and the quanta/qualia differences. But the result is a problem, indeed: a problem in intensional arithmetic.
Hi Bruno,

    What ever means they are constructed, it is still a space that is the end result. A space is simply "a space is a set with some added structure."

A set is an epistemic construct, in the arithmetical TOE.












In both case, the electron participate two different coherent computation leading to my computational state.
Of course this is just "in principle", as in continuous classical QM, we need to use distributions, and reasonable Fourier transforms.

But at the fundamental level of the UD 'the electron' has some definite representation in each of infinitely many computations.  The uncertainty comes from the many different computations.  Right?

Yes, and the fact that we cannot know which one bears us "here and now". The QM indeterminacy is made into a particular first person comp indeterminacy.

    Where is the "here and now" if not a localization in a physical world.

Perhaps, but you need to define what you mean by physical world without assuming a *primitive* physical world.

    I am OK with the idea that a physical world is that which can be described by a Boolean Algebra in a "sharable way". The trick is the "sharing". It order to share something there must be multiple entities that can each participate in some way and that those entities are in some way distinguishable from each other.


Like they obviously are in arithmetic.







This is defined as "centering" by Quine's Propositional Objects as discussed in Chalmers book, pg. 60-61...







The state is well defined, as your state belongs to a computation. It is not well defined below your substitution level, but this is only due to your ignorance on which computations you belong.

Right.  What I would generally refer to as 'my state' is a classical state (since I don't experience Everett's many worlds).

But I still don't understand, "Consciousness will make your brain, at the level below the substitution level, having some well defined state, with an electron, for example, described with some precise position. Without consciousness there is no "material" brain at all. "

How does consciousness "make a brain" or "make matter"?  I thought your theory was that both at made by computations.  My intuition is that, within your theory of comp, consciousness implies consciousness of matter and matter is a construct of consciousness;

That's what I was saying.

    Really!?

?



    I believe that it was Brent that wrote: "My intuition is that, within your theory of comp, consciousness implies consciousness of matter and matter is a construct of consciousness; " and you wrote that you agreed.

OK.








so you can't have one without the other.

Exactly. Not sure if we disagree on something here.

    What exactly are you agreeing about, Bruno? No consciousness without matter? Ah, you think that numbers have intrinsic properties... OK.

Indeed. I think 17 is intrinsically a prime number in all possible realities.

    It is not a reality in a world that only has 16 objects in it.

That would be ultrafinitist, to say the least. But even this cannot work, even in a world with only 16 objects, in the case it can have self-aware creatures, by the MGA, in case comp can make sense in such a structure (which of course it does not). 




I can come up with several other counter-examples in terms of finite field, but that is overly belaboring a point.

Yes, and comp has to assume 0, s(0), s(s(0)), ... to provide sense to the term "computations". 




This is needed to define in an intrinsic way the non intrinsic, intensional properties of the relative number (machines). Being universal, or simply being a code, or an address is not intrinsic, but can be once we choose an initial Turing universal base.

    How do you distinguish one version of the code X from another Y such that X interviews Y has a meaning?

?

Like I distinguish factorial of 4 and factorial of 5.

Bruno



Roger Clough

unread,
Sep 30, 2012, 7:34:59 AM9/30/12
to everything-list
Hi Stephen P. King

With his relativity principle, Einstein showed us that
there is no such thing as space, because all distances
are relational, relative, not absolute.

The Michelson-Morley experiment also proved that
there is no ether, there is absolutely nothing
there in what we call space. Photons simply
jump across space, their so-called waves are
simply mathematical constructions.

Leibniz similarly said, in his own way, that
neither space nor time are substances.
They do not exist. They do exist, however,
when they join to become (extended) substances
appearing as spacetime.

Roger Clough, rcl...@verizon.net
9/30/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Stephen P. King
Receiver: everything-list
Time: 2012-09-29, 19:54:56
Subject: Re: Epiphenomenalism

Roger Clough

unread,
Sep 30, 2012, 8:05:10 AM9/30/12
to everything-list
Hi Stephen P. King

Leibniz would not go along with epiphenomena because
the matter that materialists base their beliefs in
is not real, so it can't emanate consciousness.

Leibniz did not believe in matter in the same way that
atheists today do not believe in God.

And with good reason. Leibniz contended that not only matter,
but spacetime itself (or any extended substance) could not be
real, because extended substances are infinitely divisible.

Personally, I substitute Heisenberg's uncertainty principle
as the basis for this view because the fundamental particles
are supposedly divisible. Or one might substitute
Einstein's principle of the relativity of spacetime.
The uncertainties left with us by Heisenberg on
the small scale and Einstein on the large scale
ought to cause materialists to base their beliefs on
something less elusive than matter.


Roger Clough, rcl...@verizon.net
9/30/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Stephen P. King
Receiver: everything-list
Time: 2012-09-29, 19:54:56
Subject: Re: Epiphenomenalism


meekerdb

unread,
Sep 30, 2012, 12:16:02 PM9/30/12
to everyth...@googlegroups.com
On 9/30/2012 12:34 AM, Bruno Marchal wrote:

On 29 Sep 2012, at 21:33, meekerdb wrote:

On 9/29/2012 7:11 AM, Bruno Marchal wrote:
Yes, and the fact that we cannot know which one bears us "here and now". The QM indeterminacy is made into a particular first person comp indeterminacy.

    Where is the "here and now" if not a localization in a physical world.

Perhaps, but you need to define what you mean by physical world without assuming a *primitive* physical world.

Physical objects are exactly the kind of thing that are defined ostensively.

They are referred too ostensively. They are not "defined" in that way, at least not in the theory.

Well of course, nothing can be referred to ostensively *in a theory*. But that's how theoretical definitions are given meaning via reference to what we perceive.


Only in practice, they referred too ostensively. In our context, we search a theory, not a practice.

A theory that can't be connected to practice is just abstract mathematics, a kind of language game.  In fact you do connect your theory to practice by reference to diaries and perceptions.

Brent

Bruno Marchal

unread,
Sep 30, 2012, 1:58:19 PM9/30/12
to everyth...@googlegroups.com
Hi Roger Clough,

I have regrouped my comments because they are related.


On 30 Sep 2012, at 13:34, Roger Clough wrote:

> Hi Stephen P. King
>
> With his relativity principle, Einstein showed us that
> there is no such thing as space, because all distances
> are relational, relative, not absolute.

With comp there is a clear sense in which there is no space, as there
are only numbers (or lambda terms), and they obey only two simple
laws: addition and multiplication (resp. application and abstraction).

Note that with Einstein, there is still an absolute space-time.



>
> The Michelson-Morley experiment also proved that
> there is no ether, there is absolutely nothing
> there in what we call space.

I agree, but there are little loopholes, perhaps. A friend of mine
did his PhD on a plausible interpretation of Poincaré relativity
theory, and points out that such a theory can explain some of
the "non-covariance" of Bohmian quantum mechanics (which is a many-
world theory + particles having necessarily unknown initial conditions,
so that an added potential will guide the particle in "one" universe
among those described by the universal quantum wave).
I don't take this seriously, though.


> Photons simply
> jump across space, their so-called waves are
> simply mathematical constructions.

In that case you will have to explain to me how a mathematical construction
can go through two slits and interfere.



>
> Leibniz similarly said, in his own way, that
> neither space nor time are substances.
> They do not exist. They do exist, however,
> when they join to become (extended) substances
> appearing as spacetime.

OK. (and comp plausible).

other post:
> Hi Stephen P. King
>
> Leibniz would not go along with epiphenomena because
> the matter that materialists base their beliefs in
> is not real, so it can't emanate consciousness.

Comp "true" .



>
> Leibniz did not believe in matter in the same way that
> atheists today do not believe in God.

Comp "true" .



>
> And with good reason. Leibniz contended that not only matter,
> but spacetime itself (or any extended substance) could not
> real because extended substances are infinitely divisible.

Space time itself is not "real" for a deeper reason.

Why would the physical not be infinitely divisible and extensible,
especially if "not real"?




>
> Personally. I substitute Heisenberg's uncertainty principle
> as the basis for this view because the fundamental particles
> are supposedly divisible.

By definition an atom is not divisible, and the "atoms" today are the
elementary particles. Not sure you can divide an electron or a Higgs
boson.
With comp, particles might get the same explanation as in physics: as
fixed points for some transformation in a universal group or universal
symmetrical system.
The simple groups, the exceptional groups, the Monster group can play
some role there (I speculate).



> Or one might substitute
> Einstein's principle of the relativity of spacetime.
> The uncertainties left with us by Heisenberg on
> the small scale and Einstein on the large scale
> ought to cause materialists to base their beliefs on
> something less elusive than matter.


I can't agree more. Matter is plausibly the last ether of physics.
Provably so if comp is true, and if there is no flaw in UDA.


OTHER POST
> Hi Bruno Marchal
>
> I'm still trying to figure out how numbers and ideas fit
> into Leibniz's metaphysics. Little is written about this issue,
> so I have to rely on what Leibniz says otherwise about monads.


OK. I will interpret your monad by "intensional number".

Let me be explicit on this. I fix once and for all a universal
system: I choose the programming language LISP. Actually, a subset of
it: the LISP programs computing only (partial) functions from N to N,
with some list representation of the numbers like (0), (S 0), (S S
0), ...

I enumerate all the LISP programs in lexicographic order: P_1, P_2,
P_3, ...

The ith partial computable function phi_i is the one computed by P_i.

I can place on N a new operation, written #, with a # b = phi_a(b),
that is, the result of applying the ath LISP program, P_a, in
the enumeration of all the LISP programs above, to b.

Then I define a number as being intensional when it occurs at the left
of an expression like a # b.

The choice of a universal system transforms each number into a
(partial) function from N to N.

A number u is universal if phi_u(a, b) = phi_a(b). u interprets or
understands the program a and applies it to b to give the result
phi_a(b). a is the program, b is the data, and u is the computer. (a,
b) here abbreviates some number coding the couple (a, b), so as to stay
with functions having one argument (so u is a P_i; there is a
universal program P_u).

Universal is an intensional notion: it concerns the number playing the
role of a name for the function, i.e. the left number in the (partial)
operation #.
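
A minimal sketch of this in Python rather than LISP (the three sample programs, the pairing by a tuple, and the index u are toy stand-ins, only meant to make a # b = phi_a(b) and phi_u(a, b) = phi_a(b) concrete):

# A toy enumeration P_0, P_1, ... of programs, each computing a function N -> N.
programs = [
    lambda n: n + 1,   # P_0: successor
    lambda n: 2 * n,   # P_1: doubling
    lambda n: n * n,   # P_2: squaring
]

def phi(i, n):
    # phi_i(n): run the i-th program on n (partial in general; total here).
    return programs[i](n)

def sharp(a, b):
    # a # b = phi_a(b): the left number is used intensionally, as the *name*
    # of a program; the right number is used extensionally, as data.
    return phi(a, b)

def universal(ab):
    # A universal program: given the pair (a, b), return phi_a(b).
    # In the real construction the pair is coded as a single number;
    # a Python tuple stands in for that coding here.
    a, b = ab
    return phi(a, b)

programs.append(universal)   # the universal program gets its own index u
u = len(programs) - 1

assert sharp(2, 5) == 25                    # 2 # 5 = phi_2(5)
assert programs[u]((1, 7)) == sharp(1, 7)   # phi_u(a, b) = phi_a(b)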



>
>
> Previously I noted that numbers could not be monads because
> monads constantly change.

They "change" relatively to universal numbers.

The universal numbers in arithmetic constitute a sort of INDRA NET,
as each universal number reflects (can emulate, and does emulate, in
the UD) all other universal numbers.

Universal numbers introduce many relative dynamics in arithmetic.

Given that "time is not real", this should not annoy you in any way.


> Another argument against numbers
> being monads is that all monads must be attached to corporeal
> bodies.

Ah?



> So monads refer to objects in the (already) created world,
> whose identities persist, while ideas and numbers are not
> created objects.

Hmm... They "emanate" from arithmetical truth, so OK.

The problem is in the "(already)" created world.

The existence of a "real physical world" is a badly express problem.
All we can ask is that vast category of sharable dreams admits some
(unique?) maximal consistent extension satisfying ... who? All
universal numbers?

I don't know. I mean, I cannot make sense of an "already created
world", nor of objects in there.

So my attempt to interpret monads by universal numbers fails, but in
your definition here you are using concepts which I attempt to explain,
and so I cannot use them.

But I refute your argument that numbers cannot change, as they do
change all the time through their arithmetical relations with the
universal numbers.



>
> While numbers and ideas cannot be monads, they have to
> be entities in the mind, feelings, and bodily aspects
> of monads.

Numbers get the two roles, at least from the pov of the universal
numbers. That's the beauty of it.



> For Leibniz refers to the "intellect" of human
> monads.

I refer to the "intellect" (terrestrial and divine) of the universal
numbers, among mainly the Löbian one (as the other are a bit too much
mute on the interesting question).


> And similarly, numbers and ideas must be used
> in the "fictional" construction of matter-- in the bodily
> aspect of material monads, as well as the construction
> of our bodies and brains.

OK. But even truer at another level made possible by comp. As I try to
illustrate. Arithmetic is full of life and dreams.

Bruno


http://iridia.ulb.ac.be/~marchal/



Bruno Marchal

unread,
Sep 30, 2012, 2:06:54 PM9/30/12
to everyth...@googlegroups.com
On 30 Sep 2012, at 18:16, meekerdb wrote:

On 9/30/2012 12:34 AM, Bruno Marchal wrote:

On 29 Sep 2012, at 21:33, meekerdb wrote:

On 9/29/2012 7:11 AM, Bruno Marchal wrote:
Yes, and the fact that we cannot know which one bears us "here and now". The QM indeterminacy is made into a particular first person comp indeterminacy.

    Where is the "here and now" if not a localization in a physical world.

Perhaps, but you need to define what you mean by physical world without assuming a *primitive* physical world.

Physical objects are exactly the kind of thing that are defined ostensively.

They are referred too ostensively. They are not "defined" in that way, at least not in the theory.

Well of course, nothing can be referred to ostensively *in a theory*. But that's how theoretical definitions are given meaning via reference to what we perceive.

Sure. But the meaning can also be kept clean of metaphysical prejudices.




Only in practice, they referred too ostensively. In our context, we search a theory, not a practice.

A theory that can't be connected to practice is just abstract mathematics, a kind of language game. 

I can't agree more, and that is why I criticize physicalism (not physics), as it cuts the possible link between conscious practice and theory.
Keep in mind that my goal is to explain the origin of matter and consciousness.



In fact you do connect your theory to practice by reference to diaries and perceptions.

Absolutely so. 
But I use the "yes doctor" practice, to still illustrate a conceptual point, which is that if comp is true, the mind is basically solved by the "dreams of the universal numbers", and matter is an open problem, as we have only the dreams and the persistence of laws might seem more difficult. The math just shows that the more we try to refute comp by that problem, the more we get a quantum like weirdness, making perhaps the quantum aspect of nature a reflect of our universal number dreamy nature.

Bruno



Stephen P. King

unread,
Sep 30, 2012, 2:16:32 PM9/30/12
to everyth...@googlegroups.com
On 9/30/2012 7:34 AM, Roger Clough wrote:
> Hi Stephen P. King
>
> With his relativity principle, Einstein showed us that
> there is no such thing as space, because all distances
> are relational, relative, not absolute.
>
> The Michelson-Morley experiment also proved that
> there is no ether, there is absolutely nothing
> there in what we call space. Photons simply
> jump across space, their so-called waves are
> simply mathematical constructions.
>
> Leibniz similarly said, in his own way, that
> neither space nor time are substances.
> They do not exist. They do exist, however,
> when they join to become (extended) substances
> appearing as spacetime.
>
> Roger Clough, rcl...@verizon.net
> 9/30/2012
> "Forever is a long time, especially near the end." -Woody Allen
>
>
>

Indeed! We just have different ideas about monads. I see the
monads, as Leibniz defined them, as flawed. I seek to fix that flaw so
that the theory of monads "works" with other modern concepts.

--
Onward!

Stephen


Stephen P. King

unread,
Sep 30, 2012, 2:19:52 PM9/30/12
to everyth...@googlegroups.com
On 9/30/2012 8:05 AM, Roger Clough wrote:
> Hi Stephen P. King
>
> Leibniz would not go along with epiphenomena because
> the matter that materialists base their beliefs in
> is not real, so it can't emanate consciousness.
>
> Leibniz did not believe in matter in the same way that
> atheists today do not believe in God.
>
> And with good reason. Leibniz contended that not only matter,
> but spacetime itself (or any extended substance) could not
> real because extended substances are infinitely divisible.
>
> Personally. I substitute Heisenberg's uncertainty principle
> as the basis for this view because the fundamental particles
> are supposedly divisible. Or one might substitute
> Einstein's principle of the relativity of spacetime.
> The uncertainties left with us by Heisenberg on
> the small scale and Einstein on the large scale
> ought to cause materialists to base their beliefs on
> something less elusive than matter.
I agree! The only problem that I have with Leibniz' model is his
concept of a Pre-established Harmony. This can be fixed, but it requires
some very subtle arguments from computational complexity theory.

--
Onward!

Stephen


Roger Clough

unread,
Oct 1, 2012, 10:56:36 AM10/1/12
to everything-list
Hi Stephen P. King

Good luck with "improving" Leibniz, but
I see no problem with his ideas. He
even has nonlocal QM in his schema.

Materialists hate that.


Roger Clough, rcl...@verizon.net
10/1/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Stephen P. King
Receiver: everything-list
Time: 2012-09-30, 14:16:32
Subject: Re: Einstein and space


On 9/30/2012 7:34 AM, Roger Clough wrote:
> Hi Stephen P. King
>
> With his relativity principle, Einstein showed us that
> there is no such thing as space, because all distances
> are relational, relative, not absolute.
>
> The Michelson-Morley experiment also proved that
> there is no ether, there is absolutely nothing
> there in what we call space. Photons simply
> jump across space, their so-called waves are
> simply mathematical constructions.
>
> Leibniz similarly said, in his own way, that
> neither space nor time are substances.
> They do not exist. They do exist, however,
> when they join to become (extended) substances
> appearing as spacetime.
>
> Roger Clough, rcl...@verizon.net
> 9/30/2012
> "Forever is a long time, especially near the end." -Woody Allen
>
>
>

Indeed! We just have different ideas about monads. I see the
monads, as Leibniz defined them, as flawed. I seek to fix that flaw so
that the theory of monads "works" with other modern concepts.

--
Onward!

Stephen



Jason Resch

unread,
Oct 6, 2012, 1:02:23 AM10/6/12
to everyth...@googlegroups.com
This can clearly be shown to be false.  For me to be responding to this post (using a secure connection to my mail server) requires the use of prime numbers of 153 decimal digits in length.

There are on the order of 10^90 particles in the observable universe.  This is far smaller than the prime numbers which are larger than 10^152.  So would you say these numbers are not prime, merely because we don't have 10^153 things we can point to?

If a number P can be prime in a universe with fewer than P objects in it, might P be prime in a universe with 0 objects?
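
A minimal sketch of the point in Python (a Miller-Rabin test with a fixed set of witnesses, so probabilistic for very large inputs): whether a number is prime is settled by the arithmetic of the number itself, not by how many physical objects exist.

def is_probable_prime(n, witnesses=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    # Miller-Rabin primality test; deterministic for modest n with these
    # witnesses, overwhelmingly reliable for larger n.
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n < 2:
        return False
    if n in small:
        return True
    if any(n % p == 0 for p in small):
        return False
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

print(is_probable_prime(17))        # True, with or without 17 objects around
p = 2**521 - 1                      # a 157-digit Mersenne prime
print(is_probable_prime(p))         # True
print(is_probable_prime(p + 2))     # False: 2**521 + 1 is divisible by 3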

Jason

Stephen P. King

unread,
Oct 6, 2012, 1:14:01 AM10/6/12
to everyth...@googlegroups.com
LOL Jason,

    Did you completely miss the point of "reality"? When is it even possible to have a "universe with 0 objects"? Nice oxymoron!
-- 
Onward!

Stephen

Roger Clough

unread,
Oct 6, 2012, 8:19:20 AM10/6/12
to everything-list
Hi Stephen P. King

IMHO A universe with 0 objects could still contain Mind--
which had to be there before the universe was created.


Roger Clough, rcl...@verizon.net
10/6/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Stephen P. King
Receiver: everything-list
Time: 2012-10-06, 01:14:01
Subject: Re: Epiphenomenalism


On 10/6/2012 1:02 AM, Jason Resch wrote:




Jason Resch

unread,
Oct 6, 2012, 10:40:30 AM10/6/12
to everyth...@googlegroups.com
Say there is a universe that consists only of an infinitely extended 3-manifold. Is this not a "universe with 0 objects"?

In any case, did my example change your opinion regarding the primality of 17 in a universe with 16 objects?

Jason

Stephen P. King

unread,
Oct 6, 2012, 11:59:02 AM10/6/12
to everyth...@googlegroups.com
    Were did the "infinitely extended 3-manifold" come from? You are treating it as if it where an object!

-- 
Onward!

Stephen