questions on machines, belief, awareness, and knowledge


Brian Tenneson

Sep 13, 2012, 4:04:53 PM
to everyth...@googlegroups.com
Bruno,

You use B as a predicate symbol for "belief" I think.  What are some properties of B and is there a predicate for knowing/being aware of that might lead to a definition for self-awareness?

btw, what is a machine and what types of machines are there?

Is there a generic description for a structure (in the math logic sense) to have a belief or to be aware; something like
A |= "I am the structure A"
?

Finally, on a different note, if there is a structure for which all structures can be 1-1 injected into it, does that in itself imply a sort of ultimate structure, perhaps what Max Tegmark views as the level IV multiverse?
Thanks.

Bruno Marchal

Sep 14, 2012, 4:20:01 AM
to everyth...@googlegroups.com
Hi Brian,


On 13 Sep 2012, at 22:04, Brian Tenneson wrote:

> Bruno,
>
> You use B as a predicate symbol for "belief" I think.

I use it for the modal unspecified box, in some contexts (in place of
the more common "[]").
Then I use it mainly for the box corresponding to Gödel's beweisbar
(provability) arithmetical predicate (definable with the symbols E, A,
&, ->, ~, s, 0, and parentheses).
Thanks to the fact that Bp -> p is not a theorem, it can play the
role of believability for the ideally correct machines.





> What are some properties of B and is there a predicate for knowing/
> being aware of that might lead to a definition for self-awareness?

Yes, B and its variants:
B_1 p = Bp & p
B_2 p = Bp & Dt
B_3 p = Bp & Dt & p,
and others.
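The difference between Bp and the knowledge-like Bp & p can be illustrated with a toy Kripke-model evaluator (my own illustrative sketch; the worlds, relation, and valuation are invented for the example, and Bruno's actual B is the arithmetical provability predicate, not a Kripke box). At a world where p is false, Bp can still hold, while Bp & p cannot:

```python
# Toy Kripke-model evaluator (illustrative sketch only).  A model is a set
# of worlds, an accessibility relation R, and a valuation V mapping each
# world to the set of atoms true there.

def holds(world, formula, R, V):
    """Evaluate a formula at a world.  Formulas are nested tuples:
    ('atom', 'p'), ('B', f) for the box, ('and', f, g), ('not', f)."""
    kind = formula[0]
    if kind == 'atom':
        return formula[1] in V[world]
    if kind == 'not':
        return not holds(world, formula[1], R, V)
    if kind == 'and':
        return holds(world, formula[1], R, V) and holds(world, formula[2], R, V)
    if kind == 'B':  # true iff the subformula holds at every accessible world
        return all(holds(w, formula[1], R, V) for w in R.get(world, []))
    raise ValueError(kind)

# A non-reflexive model: at world 0 the machine "believes" p (p holds in
# every world it can access) even though p is false at world 0 itself.
R = {0: [1, 2], 1: [], 2: []}
V = {0: set(), 1: {'p'}, 2: {'p'}}

p = ('atom', 'p')
Bp = ('B', p)
knows_p = ('and', Bp, p)   # the Bp & p variant

print(holds(0, Bp, R, V))       # True  -- belief without truth
print(holds(0, knows_p, R, V))  # False -- Bp & p forces truth at the world
```

Non-reflexive accessibility is exactly why Bp -> p fails in G; conjoining p by hand restores truthfulness, which is the point of the B_1 variant.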



>
> btw, what is a machine and what types of machines are there?

With comp we bet that we are, at some level, digital machines. The
theory is the one studied by logicians (Post, Church, Turing, etc.).



>
> Is there a generic description for a structure (in the math logic
> sense) to have a belief or to be aware; something like
> A |= "I am the structure A"
> ?

Yes, by using the Dx = xx method, you can define a machine having its
integral 3p plan available. But the 1p-self, given by Bp & p, does not
admit any name. It is the difference between "I have two legs" and "I
have a pain in a leg, even if a phantom one". G* proves them
equivalent (for correct machines), but G cannot identify them, and
they obey different logics (G and S4Grz).
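The Dx = xx trick is the diagonalization behind quines and Kleene's recursion theorem. A minimal Python sketch (my own, purely illustrative) of a program that constructs its own complete description:

```python
# Kleene's "Dx = xx" diagonalization in miniature: applying a description
# to itself yields a program text that contains its own complete
# description -- a machine with its integral "3p plan" available.
d = 'd = %r\nme = d %% d'
me = d % d

# `me` is now exactly the source text of the two assignment lines above:
print(me)
```

The first line is the description x; applying it to itself (d % d) produces the whole program, so the machine can refer to its complete third-person body without circularity.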



>
> Finally, on a different note, if there is a structure for which all
> structures can be 1-1 injected into it, does that in itself imply a
> sort of ultimate structure perhaps what Max Tegmark views as the
> level IV multiverse?

A 1-1 map is too cheap for that, and the set structure is too much of
a structural flattening. Comp uses the notion of simulation, at a
non-specifiable substitution level.

Bruno



http://iridia.ulb.ac.be/~marchal/



Stephen P. King

Sep 14, 2012, 9:41:15 AM
to everyth...@googlegroups.com
On 9/14/2012 4:20 AM, Bruno Marchal wrote:
Hi Brian,


On 13 Sep 2012, at 22:04, Brian Tenneson wrote:

Bruno,

You use B as a predicate symbol for "belief" I think.

I use it for the modal unspecified box, in some contexts (in place of the more common "[]").
Then I use it mainly for the box corresponding to Gödel's beweisbar (provability) arithmetical predicate (definable with the symbols E, A, &, ->, ~, s, 0, and parentheses).
Thanks to the fact that Bp -> p is not a theorem, it can play the role of believability for the ideally correct machines.





What are some properties of B and is there a predicate for knowing/being aware of that might lead to a definition for self-awareness?

Yes, B and its variants:
B_1 p = Bp & p
B_2 p = Bp & Dt
B_3 p = Bp & Dt & p,
and others.




btw, what is a machine and what types of machines are there?

With comp we bet that we are, at some level, digital machines. The theory is the one studied by logicians (Post, Church, Turing, etc.).

 Dear Bruno,

    Could you elaborate on what your definition of "a digital machine" is? Is it something that can be faithfully represented by a Boolean Algebra of some sort?






Is there a generic description for a structure (in the math logic sense) to have a belief or to be aware; something like
A |= "I am the structure A"
?

Yes, by using the Dx = xx method, you can define a machine having its integral 3p plan available.

    This "3p plan" would be like my internal model of my body that I have as part of my conscious awareness?


But the 1p-self, given by Bp & p, does not admit any name. It is the difference between "I have two legs" and "I have a pain in a leg, even if a phantom one". G* proves them equivalent (for correct machines), but G cannot identify them, and they obey different logics (G and S4Grz).

    This implies, to me, that the 1p-self cannot be defined by an equivalence class with a fixed equivalence relation. This is problematic if assumed to be true for all possible 1p-selves. AFAIK, your definition would only apply to a machine that is unnameably infinite, such as the totality of all that could exist, aka "God" or "cosmic intelligence". It reminds me more of the Azathoth of H.P. Lovecraft's mythos.






Finally, on a different note, if there is a structure for which all structures can be 1-1 injected into it, does that in itself imply a sort of ultimate structure perhaps what Max Tegmark views as the level IV multiverse?

A 1-1 map is too cheap for that, and the set structure is too much of a structural flattening.

    I agree, it is just a tautology.


Comp uses the notion of simulation, at a non-specifiable substitution level.

    But does not address the computational resource requirement. :_(


Bruno



http://iridia.ulb.ac.be/~marchal/





-- 
Onward!

Stephen

http://webpages.charter.net/stephenk1/Outlaw/Outlaw.html

Bruno Marchal

Sep 14, 2012, 2:56:20 PM
to everyth...@googlegroups.com
On 14 Sep 2012, at 15:41, Stephen P. King wrote:

On 9/14/2012 4:20 AM, Bruno Marchal wrote:
Hi Brian,


On 13 Sep 2012, at 22:04, Brian Tenneson wrote:

Bruno,

You use B as a predicate symbol for "belief" I think.

I use it for the modal unspecified box, in some contexts (in place of the more common "[]").
Then I use it mainly for the box corresponding to Gödel's beweisbar (provability) arithmetical predicate (definable with the symbols E, A, &, ->, ~, s, 0, and parentheses).
Thanks to the fact that Bp -> p is not a theorem, it can play the role of believability for the ideally correct machines.





What are some properties of B and is there a predicate for knowing/being aware of that might lead to a definition for self-awareness?

Yes, B and its variants:
B_1 p = Bp & p
B_2 p = Bp & Dt
B_3 p = Bp & Dt & p,
and others.




btw, what is a machine and what types of machines are there?

With comp we bet that we are, at some level, digital machines. The theory is the one studied by logicians (Post, Church, Turing, etc.).

 Dear Bruno,

    Could you elaborate on what your definition of "a digital machine" is?

Anything Turing emulable.




Is it something that can be faithfully represented by a Boolean Algebra of some sort?


Anything can be represented by a Boolean algebra of some sort, even quantum logic, despite its not being embeddable in Boolean logic.








Is there a generic description for a structure (in the math logic sense) to have a belief or to be aware; something like
A |= "I am the structure A"
?

Yes, by using the Dx = xx method, you can define a machine having its integral 3p plan available.

    This "3p plan" would be like my internal model of my body that I have as part of my conscious awareness?

Yes, you can say that. 




But the 1p-self, given by Bp & p, does not admit any name. It is the difference between "I have two legs" and "I have a pain in a leg, even if a phantom one". G* proves them equivalent (for correct machines), but G cannot identify them, and they obey different logics (G and S4Grz).

    This implies, to me, that the 1p-self cannot be defined by an equivalence class with a fixed equivalence relation. This is problematic if assumed to be true for all possible 1p-selves. AFAIK, your definition would only apply to a machine that is unnameably infinite, such as the totality of all that could exist, aka "God" or "cosmic intelligence". It reminds me more of the Azathoth of H.P. Lovecraft's mythos.

Proof?








Finally, on a different note, if there is a structure for which all structures can be 1-1 injected into it, does that in itself imply a sort of ultimate structure perhaps what Max Tegmark views as the level IV multiverse?

A 1-1 map is too cheap for that, and the set structure is too much of a structural flattening.

    I agree, it is just a tautology.

Comp uses the notion of simulation, at a non-specifiable substitution level.

    But does not address the computational resource requirement. :_(

It does not solve it, but it addresses it, like it addresses all of physics. I give the tools so that you can ask your question directly to the machine.

Bruno





Stephen P. King

Sep 15, 2012, 2:55:17 AM
to everyth...@googlegroups.com
On 9/14/2012 2:56 PM, Bruno Marchal wrote:

On 14 Sep 2012, at 15:41, Stephen P. King wrote:

On 9/14/2012 4:20 AM, Bruno Marchal wrote:
Hi Brian,


On 13 Sep 2012, at 22:04, Brian Tenneson wrote:

Bruno,

You use B as a predicate symbol for "belief" I think.

I use it for the modal unspecified box, in some contexts (in place of the more common "[]").
Then I use it mainly for the box corresponding to Gödel's beweisbar (provability) arithmetical predicate (definable with the symbols E, A, &, ->, ~, s, 0, and parentheses).
Thanks to the fact that Bp -> p is not a theorem, it can play the role of believability for the ideally correct machines.





What are some properties of B and is there a predicate for knowing/being aware of that might lead to a definition for self-awareness?

Yes, B and its variants:
B_1 p = Bp & p
B_2 p = Bp & Dt
B_3 p = Bp & Dt & p,
and others.




btw, what is a machine and what types of machines are there?

With comp we bet that we are, at some level, digital machines. The theory is the one studied by logicians (Post, Church, Turing, etc.).

 Dear Bruno,

    Could you elaborate on what your definition of "a digital machine" is?

Anything Turing emulable.

Dear Bruno,

    OK. But you do understand that this assumes an unnecessarily restrictive definition of computation. I define computation as "any transformation of information", and information is defined as "the difference between a pair that makes a difference to a third".






Is it something that can be faithfully represented by a Boolean Algebra of some sort?


Anything can be represented by a Boolean algebra of some sort, even quantum logic, despite its not being embeddable in Boolean logic.

    No, you cannot define a bijective map between a logical representation of a QM system and a Boolean algebra. The quantum logical structure (an orthomodular lattice) is not distributive, while a Boolean algebra *is* distributive.
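Stephen's non-distributivity claim can be checked concretely in the smallest quantum-logical lattice, the closed subspaces of the plane. The following sketch (my own toy encoding of that lattice, not anything from the thread) exhibits the failure of the distributive law:

```python
# Subspaces of the real plane, just enough structure to exhibit the failure
# of distributivity in the subspace lattice of quantum logic.
from fractions import Fraction

def line(x, y):
    """Canonical representative of the 1-D subspace spanned by (x, y)."""
    if x != 0:
        return ('line', Fraction(1), Fraction(y, x))
    return ('line', Fraction(0), Fraction(1))

ZERO, PLANE = ('zero',), ('plane',)

def meet(u, v):  # lattice meet = subspace intersection
    if u == v: return u
    if u == PLANE: return v
    if v == PLANE: return u
    return ZERO          # two distinct lines intersect only at the origin

def join(u, v):  # lattice join = span of the union
    if u == v: return u
    if u == ZERO: return v
    if v == ZERO: return u
    return PLANE         # two distinct lines span the whole plane

a, b, c = line(1, 1), line(1, 0), line(0, 1)

lhs = meet(a, join(b, c))            # a ^ (b v c) = a ^ plane = a
rhs = join(meet(a, b), meet(a, c))   # (a ^ b) v (a ^ c) = 0 v 0 = 0
print(lhs == rhs)  # False: the subspace lattice is not distributive
```

The diagonal line a lies inside the span of the two axes, yet meets each axis only at the origin, which is exactly the non-distributive configuration that blocks any Boolean embedding.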










Is there a generic description for a structure (in the math logic sense) to have a belief or to be aware; something like
A |= "I am the structure A"
?

Yes, by using the Dx = xx method, you can define a machine having its integral 3p plan available.

    This "3p plan" would be like my internal model of my body that I have as part of my conscious awareness?

Yes, you can say that.

    Good!





But the 1p-self, given by Bp & p, does not admit any name. It is the difference between "I have two legs" and "I have a pain in a leg, even if a phantom one". G* proves them equivalent (for correct machines), but G cannot identify them, and they obey different logics (G and S4Grz).

    This implies, to me, that the 1p-self cannot be defined by an equivalence class with a fixed equivalence relation. This is problematic if assumed to be true for all possible 1p-selves. AFAIK, your definition would only apply to a machine that is unnameably infinite, such as the totality of all that could exist, aka "God" or "cosmic intelligence". It reminds me more of the Azathoth of H.P. Lovecraft's mythos.

Proof?

    You ask for proof? I will try. Do you recall our discussion that "God has no name"? Why did we agree that "god has no name"?










Finally, on a different note, if there is a structure for which all structures can be 1-1 injected into it, does that in itself imply a sort of ultimate structure perhaps what Max Tegmark views as the level IV multiverse?

A 1-1 map is too cheap for that, and the set structure is too much of a structural flattening.

    I agree, it is just a tautology.

Comp uses the notion of simulation, at a non-specifiable substitution level.

    But does not address the computational resource requirement. :_(

It does not solve it, but it addresses it, like it addresses all of physics. I give the tools so that you can ask your question directly to the machine.

Bruno



    OK!

Russell Standish

Sep 15, 2012, 4:11:34 AM
to everyth...@googlegroups.com
On Sat, Sep 15, 2012 at 02:55:17AM -0400, Stephen P. King wrote:
> >> Dear Bruno,
> >>
> >> Could you elaborate on what your definition of "a digital
> >>machine" is?
> >
> >Anything Turing emulable.
>
> Dear Bruno,
>
> OK. But you do understand that this assumes an unnecessary
> restrictive definition of computation. I define computation as "any
> transformation of information" and Information is defined as "the
> difference between a pair that makes a difference to a third".
>

That is far too inclusive a definition of computation. A map from i in
N to the ith decimal place of Chaitin's number Omega would satisfy your
definition of transformation of information, yet the possession of such
an "algorithm" would render oneself omniscient. You could answer any
question posable in a formal language by running this algorithm for the
correct decimal place. See Li and Vitanyi, page 218, for a discussion,
or the reference they give:

Bennett & Gardner (1979), Scientific American, 241, 20-34.
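Russell's argument can be made concrete in a toy universe of finitely many "programs" with known behaviour (an entirely hypothetical miniature; real Turing machines and the real Omega are of course not enumerable like this). Given this universe's halting probability, dovetailing decides halting:

```python
# Toy illustration: if you possessed Omega, halting would become decidable
# by dovetailing until the measure of programs seen to halt reaches Omega.
from fractions import Fraction

# Each "program" has a bit-length (its measure is 2^-length) and either the
# number of steps after which it halts, or None for a non-halting program.
PROGRAMS = {
    'p0': (2, 3),     # length 2, halts after 3 steps
    'p1': (2, None),  # length 2, never halts
    'p2': (3, 7),     # length 3, halts after 7 steps
    'p3': (3, 1),     # length 3, halts after 1 step
}

# The "oracle": this universe's halting probability.
OMEGA = sum(Fraction(1, 2**L) for (L, steps) in PROGRAMS.values()
            if steps is not None)

def halts(name):
    """Decide halting for `name` using only dovetailed running plus OMEGA."""
    t = 0
    while True:
        t += 1
        # Measure of programs observed to halt within t steps of simulation.
        settled = sum(Fraction(1, 2**L) for (L, steps) in PROGRAMS.values()
                      if steps is not None and steps <= t)
        L, steps = PROGRAMS[name]
        if steps is not None and steps <= t:
            return True    # we watched it halt
        if settled == OMEGA:
            return False   # all halting measure accounted for, so anything
                           # still running never halts

print(halts('p0'), halts('p1'))  # True False
```

In the real case one only ever has finitely many bits of Omega, which still suffices to decide halting for all programs up to the corresponding length, hence the "omniscience" Russell describes.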

--

----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics hpc...@hpcoders.com.au
University of New South Wales http://www.hpcoders.com.au
----------------------------------------------------------------------------

Roger Clough

Sep 15, 2012, 6:43:08 AM
to everything-list
 
[IMHO a failed attempt to define qualia but still a nice try.]
 
 
 
 
Qualia: The Geometry of Integrated Information

David Balduzzi, Giulio Tononi*

Department of Psychiatry, University of Wisconsin, Madison, Wisconsin, United States of America
Abstract

According to the integrated information theory, the quantity of consciousness is the amount of integrated information generated by a complex of elements, 
and the quality of experience is specified by the informational relationships it generates. This paper outlines a framework for characterizing 
the informational relationships generated by such systems. Qualia space (Q) is a space having an axis for each possible state (activity pattern)
 of a complex. Within Q, each submechanism specifies a point corresponding to a repertoire of system states. Arrows between repertoires in
Q define informational relationships. Together, these arrows specify a quale: a shape that completely and univocally characterizes the 
quality of a conscious experience. Φ, the height of this shape, is the quantity of consciousness associated with the experience.
 Entanglement measures how irreducible informational relationships are to 
their component relationships, specifying concepts and modes. Several corollaries follow from these premises. 
The quale is determined by both the mechanism and state of the system. Thus, two different systems having 
identical activity patterns may generate different qualia. Conversely, the same quale may be generated by two 
systems that differ in both activity and connectivity. Both active and inactive elements specify a quale, but elements 
that are inactivated do not. Also, the activation of an element affects experience by changing the shape of the quale. 
The subdivision of experience into modalities and submodalities corresponds to subshapes in Q. In principle, different aspects of experience may 
be classified as different shapes in Q, and the similarity between experiences reduces to similarities between shapes. 
Finally, specific qualities, such as the "redness" of red, while generated by a local mechanism, cannot be reduced to it, but require considering the entire quale.
Ultimately, the present framework may offer a principled way for translating qualitative properties of experience into mathematics.
Author Summary

In prior work, we suggested that consciousness has to do with integrated information,
 which was defined as the amount of information generated by a system in a given state, above and beyond the
 information generated independently by its parts. In the present paper, we move from computing the quantity of 
integrated information to describing the structure or quality of the integrated information unfolded by interactions 
in the system. We take a geometric approach, introducing the notion of a quale as a shape that embodies the entire set of 
informational relationships generated by interactions in the system. The paper investigates how features of the quale relate to 
properties of the underlying system and also to basic features of experience, providing the beginnings of a mathematical 
dictionary relating neurophysiology to the geometry of the quale and the geometry to phenomenology.
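A very crude numerical caricature of the "information generated by the whole above its parts" idea (my own toy measure, for orientation only; the paper's Φ is defined over repertoires and partitions, not this simple mutual-information gap):

```python
# Toy "integrated information": for a two-node system whose nodes both
# update to the XOR of the current pair, the whole system's past-to-present
# mutual information exceeds the sum over its isolated parts.
from itertools import product
from math import log2
from collections import Counter

def mi(pairs):
    """Mutual information (bits) of an equally weighted list of (a, b) pairs."""
    n = len(pairs)
    pa, pb, pab = Counter(), Counter(), Counter()
    for a, b in pairs:
        pa[a] += 1; pb[b] += 1; pab[(a, b)] += 1
    return sum((c / n) * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

states = list(product([0, 1], repeat=2))
step = {s: (s[0] ^ s[1], s[0] ^ s[1]) for s in states}  # XOR dynamics

whole = mi([(s, step[s]) for s in states])        # system-level information
part1 = mi([(s[0], step[s][0]) for s in states])  # node 1 taken alone
part2 = mi([(s[1], step[s][1]) for s in states])  # node 2 taken alone

phi_toy = whole - (part1 + part2)
print(whole, part1, part2, phi_toy)   # 1.0 0.0 0.0 1.0
```

Each node alone carries zero information about its own next state (XOR with an unknown partner looks random), yet the whole system determines one full bit, which is the flavor of irreducibility the abstract calls integration.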

Citation: Balduzzi D, Tononi G (2009) Qualia: The Geometry of Integrated Information. PLoS Comput Biol 5(8): e1000462. doi:10.1371/journal.pcbi.1000462
 
Here's an image from the paper. Don't ask me what it means.
 
 
 
 
 
 
 

Stephen P. King

Sep 15, 2012, 12:35:30 PM
to everyth...@googlegroups.com
On 9/15/2012 4:11 AM, Russell Standish wrote:
On Sat, Sep 15, 2012 at 02:55:17AM -0400, Stephen P. King wrote:
Dear Bruno,

   Could you elaborate on what your definition of "a digital
machine" is?
Anything Turing emulable.
Dear Bruno,

    OK. But you do understand that this assumes an unnecessarily
restrictive definition of computation. I define computation as "any
transformation of information", and information is defined as "the
difference between a pair that makes a difference to a third".

Hi Russell,


That is far too inclusive a definition of computation.

    Not really, it only requires some way of representing the information such that it can be transformed. The integers are not the only kind of number with which we can represent numbers (or any other mathematical object). IMHO, we are naive to think that Nature is hobbled to use only integers to perform her Computations. We must never project our deficiencies onto Nature.


 A map from i in
N to the ith decimal place of Chaitin's number Omega would satisfy your
definition of transformation of information, yet the possession of such
an "algorithm" would render oneself omniscient.

    That is exactly my point! I am forcing the issue of the implications of Universal Turing Machines: they are implicitly omniscient unless they are restricted in some way. Turing et al. considered the case of computations via NxN -> N functions but abstracted away the resource requirements, and we get very smart people, like Bruno, taking this to mean that we can completely ignore the possibility of actually implementing a computation, and not just reason about some abstract object in our minds. The Ultrafinitists and Intuitionists (like Norman Wildberger, for instance) have a valid critique, but forget that they too are fallible and project their limitations on Nature. I am trying very hard not to do that!


 You can answer any
question posable in a formal language by means of running this
algorithm for the correct decimal place. See Li and Vitanyi, page 218
for a discussion, or the reference they give:

Bennett & Gardner (1979), Scientific American, 241, 20-34.

    Sure, but you are missing the point that I am trying to make. Unless there is at least the possibility, in principle, for a given computation to be implemented somehow, even if it is in the form of some pattern of chalk marks on a board or a pattern of neurons firing in a brain, there is no "reality" to an abstraction such as a Universal Turing Machine. I am arguing against Immaterialism (and Materialism!) of any kind and for a dual-aspect monism (like that which David Chalmers discusses and argues for in his book).

meekerdb

Sep 15, 2012, 3:56:39 PM
to everyth...@googlegroups.com
On 9/15/2012 9:35 AM, Stephen P. King wrote:
On 9/15/2012 4:11 AM, Russell Standish wrote:
On Sat, Sep 15, 2012 at 02:55:17AM -0400, Stephen P. King wrote:
Dear Bruno,

   Could you elaborate on what your definition of "a digital
machine" is?
Anything Turing emulable.
Dear Bruno,

    OK. But you do understand that this assumes an unnecessarily
restrictive definition of computation. I define computation as "any
transformation of information", and information is defined as "the
difference between a pair that makes a difference to a third".

Hi Russell,

That is far too inclusive a definition of computation.

    Not really, it only requires some way of representing the information such that it can be transformed. The integers are not the only kind of number with which we can represent numbers (or any other mathematical object). IMHO, we are naive to think that Nature is hobbled to use only integers to perform her Computations. We must never project our deficiencies onto Nature.

I would go even farther than Russell implies.  A lot of the muddle about computation and consciousness comes about because they are abstracted out of the world.  That's why I like to think in terms of robots or Mars rovers.  Consciousness and computation are given their meaning by their effecting actions in the world.  To find out what a string of 1s and 0s means in a Mars rover's memory, you need to see what effect it has on the rover's actions. You know that "1+1=10" means 1+1=2 when 10 in a register causes it to pick up two rocks.
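Brent's register example can be sketched in a few lines (the rover functions here are hypothetical, purely illustrative): the bit string "10" gets its meaning from the action it causes, not from the symbols themselves.

```python
# A miniature "rover": the register's content is only *about* a number
# because the control loop uses it as a count of rocks to pick up.
actions = []

def pick_up_rocks(n):
    # Hypothetical effector: record n rock-collecting actions.
    actions.extend(['rock'] * n)

def act_on_register(bits):
    # Grounding step: the binary string drives behavior in the world.
    pick_up_rocks(int(bits, 2))

act_on_register('10')   # the register holds "10"
print(len(actions))     # 2 rocks picked up: "10" meant two
```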

So to further abstract computation to mean "transformation of information" will lead to even more of a muddle.

Brent


 A map from i in
N to the ith decimal place of Chaitin's number Omega would satisfy your
definition of transformation of information, yet the possession of such
an "algorithm" would render oneself omniscient.

    That is exactly my point! I am forcing the issue of the implications of Universal Turing Machines: they are implicitly omniscient unless they are restricted in some way. Turing et al. considered the case of computations via NxN -> N functions but abstracted away the resource requirements, and we get very smart people, like Bruno, taking this to mean that we can completely ignore the possibility of actually implementing a computation, and not just reason about some abstract object in our minds. The Ultrafinitists and Intuitionists (like Norman Wildberger, for instance) have a valid critique, but forget that they too are fallible and project their limitations on Nature. I am trying very hard not to do that!

 You can answer any
question posable in a formal language by means of running this
algorithm for the correct decimal place. See Li and Vitanyi, page 218
for a discussion, or the reference they give:

Bennett & Gardner (1979), Scientific American, 241, 20-34.

    Sure, but you are missing the point that I am trying to make. Unless there is at least the possibility, in principle, for a given computation to be implemented somehow, even if it is in the form of some pattern of chalk marks on a board or a pattern of neurons firing in a brain, there is no "reality" to an abstraction such as a Universal Turing Machine. I am arguing against Immaterialism (and Materialism!) of any kind and for a dual-aspect monism (like that which David Chalmers discusses and argues for in his book).


-- 
Onward!

Stephen

http://webpages.charter.net/stephenk1/Outlaw/Outlaw.html
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.

Evgenii Rudnyi

Sep 16, 2012, 3:44:48 AM
to everyth...@googlegroups.com
On 15.09.2012 21:56 meekerdb said the following:
> On 9/15/2012 9:35 AM, Stephen P. King wrote:
>> On 9/15/2012 4:11 AM, Russell Standish wrote:

...

>> Hi Russell,
>>
>>> That is far too inclusive a definition of computation.
>>
>> Not really, it only requires some way of representing the
>> information such that it can be transformed. The integers are not
>> the only kind of number that we can represent numbers (or any other
>> mathematical object) with. IMHO, we are naive to think that Nature
>> is hobbled to only use integers to perform her Computations. We
>> must never project our deficiencies on Nature.
>
> I would go even farther than Russell implies. A lot of the muddle
> about computation and consciousness comes about because they are
> abstracted out of the world. That's why I like to think in terms of
> robots or Mars rovers. Consciousness and computation are given their
> meaning by their effecting actions in the world. To find out what a
> string of 1s and 0s means a Mars rovers memory you need to see what
> effect they have on its actions. You know that "1+1=10" means 1+1=2
> when 10 in a register causes it to pick up two rocks.
>
> So to further abstract computation to mean "transformation of
> information" will lead to even more of a muddle.
>
> Brent
>

So this is some kind of enactive model of consciousness, similar to what
Alva Noë writes in Out of Our Heads: Why You Are Not Your Brain, and
Other Lessons from the Biology of Consciousness.

One question in this respect. Let me start with a quote from Max
Velmans, Understanding Consciousness

Section Can qualia be reduced to the exercise of sensory-motor skills?

p. 102 "Piloting a 747 no doubt feels like something to a human pilot,
and the way that it feels is likely to have something to do with human
biology. But why should it feel the same way to an electronic autopilot
that replaces the skills exercised by a human being? Or why should it
feel like anything to be the control system of a guided missile system?
Anyone versed in the construction of electronic control systems knows
that if one builds a system in the right way, it will function just as
it is intended to do, whether it feels like anything to be that system
or not. If so, functioning in an electronic (or any other) system is
logically tangential to whether it is like anything to be that system,
leaving the hard problem of why it happens to feel a certain way in
humans untouched."

Do you mean that the meaning in a guided missile system happens as a
by-product of its development by engineers?

To me, it seems that the meaning you have defined for Mars rovers is yet
another theory of epiphenomenalism.

Evgenii
--
http://blog.rudnyi.ru/2012/06/visual-world-a-grand-illusion.html

meekerdb

Sep 16, 2012, 3:55:25 PM
to everyth...@googlegroups.com
And your quote and question are yet another example of "nothing buttery" and argument by
incredulity.

Brent

Roger Clough

Sep 17, 2012, 5:52:52 AM
to everything-list
Regarding computers, there are two types of knowledge. You say
"I know John Smith."

If you have actually met him, this is called knowledge by acquaintance.
If you have just heard about him, this is called knowledge by description.

Computers can't deal with the former, knowledge by acquaintance,
since it is an experience. They can only deal with the latter,
which is a fact.


Roger Clough, rcl...@verizon.net
9/17/2012
Leibniz would say, "If there's no God, we'd have to invent him
so that everything could function."



----- Receiving the following content -----
From: Evgenii Rudnyi
Receiver: everything-list
Time: 2012-09-16, 03:44:48
Subject: Re: questions on machines, belief, awareness, and knowledge


On 15.09.2012 21:56 meekerdb said the following:
> On 9/15/2012 9:35 AM, Stephen P. King wrote:
>> On 9/15/2012 4:11 AM, Russell Standish wrote:

...

>> Hi Russell,
>>
>>> That is far too inclusive a definition of computation.
>>
>> Not really, it only requires some way of representing the
>> information such that it can be transformed. The integers are not
>> the only kind of number that we can represent numbers (or any other
>> mathematical object) with. IMHO, we are naive to think that Nature
>> is hobbled to only use integers to perform her Computations. We
>> must never project our deficiencies on Nature.
>
> I would go even farther than Russell implies. A lot of the muddle
> about computation and consciousness comes about because they are
> abstracted out of the world. That's why I like to think in terms of
> robots or Mars rovers. Consciousness and computation are given their
> meaning by their effecting actions in the world. To find out what a
> string of 1s and 0s means a Mars rovers memory you need to see what
> effect they have on its actions. You know that "1+1=10" means 1+1=2
> when 10 in a register causes it to pick up two rocks.
>
> So to further abstract computation to mean "transformation of
> information" will lead to even more of a muddle.
>
> Brent
>

So this is some kind of enactive model of consciousness, similar to what
Alva Noë writes in Out of Our Heads: Why You Are Not Your Brain, and
Other Lessons from the Biology of Consciousness.

One question in this respect. Let me start with a quote from Max
Velmans, Understanding Consciousness

Section Can qualia be reduced to the exercise of sensory-motor skills?

p. 102 "Piloting a 747 no doubt feels like something to a human pilot,
and the way that it feels is likely to have something to do with human
biology. But why should it feel the same way to an electronic autopilot
that replaces the skills exercised by a human being? Or why should it
feel like anything to be the control system of a guided missile system?
Anyone versed in the construction of electronic control systems knows
that if one builds a system in the right way, it will function just as
it is intended to do, whether it feels like anything to be that system
or not. If so, functioning in an electronic (or any other) system is
logically tangential to whether it is like anything to be that system,
leaving the hard problem of why it happens to feel a certain way in
humans untouched."

Do you mean that the meaning in a guided missile system happens as
by-product of its development by engineers?

To me, it seems that meaning that you have defined in Mars Rovers is yet
another theory of epiphenomenalism.

Evgenii
--
http://blog.rudnyi.ru/2012/06/visual-world-a-grand-illusion.html

Evgenii Rudnyi

Sep 17, 2012, 2:27:02 PM
to everyth...@googlegroups.com
On 16.09.2012 21:55 meekerdb said the following:
I am not sure I understand you. I am not saying that I am right, but I
really do not understand your point. You say

"Consciousness and computation are given their meaning by their
effecting actions in the world."

and it seems that you imply that this could be applied to a robot as
well. My thought was that the engineers who design a robot know
everything about how it works. Your comment suggests, however, that in
the robot there is something else that has emerged independently of the
will of the engineers. I would just be interested to learn what it is. If
you know the answer, I would appreciate it.

Evgenii

Roger Clough

Sep 18, 2012, 6:10:52 AM
to everything-list
Hi Evgenii Rudnyi
 
Brent has a pragmatic view of consciousness in that
the meaning of things is what they do, not what they are.
This is Peirce's view of reality.  I tend to lean that way myself.
 
 
Roger Clough, rcl...@verizon.net
9/18/2012
"Forever is a long time, especially near the end."
Woody Allen
 
----- Original Message -----
To: everything-list
Sent: 2012-09-17, 14:27:02
Subject: Re: questions on machines, belief, awareness, and knowledge

>> what Alva Noë writes in Out of Our Heads: Why You Are Not Your

>> Brain, and Other Lessons from the Biology of Consciousness.
>>
>> One question in this respect. Let me start with a quote from Max
>> Velmans, Understanding Consciousness
>>
>> Section Can qualia be reduced to the exercise of sensory-motor
>> skills?
>>
>> p. 102 "Piloting a 747 no doubt feels like something to a human

>> pilot, and the way that it feels is likely to have something to do
>> with human biology. But why should it feel the same way to an
>> electronic autopilot that replaces the skills exercised by a human
>> being? Or why should it feel like anything to be the control system
>> of a guided missile system? Anyone versed in the construction of
>> electronic control systems knows that if one builds a system in the
>> right way, it will function just as it is intended to do, whether
>> it feels like anything to be that system or not. If so, functioning
>> in an electronic (or any other) system is logically tangential to
>> whether it is like anything to be that system, leaving the hard
>> problem of why it happens to feel a certain way in humans
>> untouched."

>>
>> Do you mean that the meaning in a guided missile system happens as
>> by-product of its development by engineers?
>>
>> To me, it seems that meaning that you have defined in Mars Rovers
>> is yet another theory of epiphenomenalism.
>
> And your quote and question are yet another example of "nothing
> buttery" and argument by incredulity.
>
> Brent
>

I am not sure if I understand you. I am not saying that I am right but I
really do not understand you point. You say

"Consciousness and computation are given their meaning by their
effecting actions in the world."

and it seems that you imply that this could be applied for a robot as
well. My thought were that engineers who have design a robot know
everything how it is working. You comment suggests however that in the
robot there is something else that has emerged independently from the
will of engineers. I would be just interested to learn what it is. If
you know the answer, I would appreciate it.

Evgenii

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-list+unsub...@googlegroups.com.

meekerdb

Sep 18, 2012, 6:57:33 PM9/18/12
to everyth...@googlegroups.com
On 9/17/2012 11:27 AM, Evgenii Rudnyi wrote:
>>> Do you mean that the meaning in a guided missile system happens as
>>> by-product of its development by engineers?
>>>
>>> To me, it seems that meaning that you have defined in Mars Rovers
>>> is yet another theory of epiphenomenalism.
>>
>> And your quote and question are yet another example of "nothing
>> buttery" and argument by incredulity.
>>
>> Brent
>>
>
> I am not sure if I understand you. I am not saying that I am right but I really do not
> understand you point. You say
>
> "Consciousness and computation are given their meaning by their effecting actions in the
> world."
>
> and it seems that you imply that this could be applied for a robot as well. My thought
> were that engineers who have design a robot know everything how it is working.

But they don't. A robot, even one as simple as a Mars Rover, perceives
and acts on things the engineers don't know. A more advanced robot will
also learn from experience and become as unpredictable as a person from
the engineer's standpoint.

Brent

Evgenii Rudnyi

Sep 21, 2012, 3:27:27 PM9/21/12
to everyth...@googlegroups.com
On 19.09.2012 00:57 meekerdb said the following:
> On 9/17/2012 11:27 AM, Evgenii Rudnyi wrote:
>>>> Do you mean that the meaning in a guided missile system happens
>>>> as by-product of its development by engineers?
>>>>
>>>> To me, it seems that meaning that you have defined in Mars
>>>> Rovers is yet another theory of epiphenomenalism.
>>>
>>> And your quote and question are yet another example of "nothing
>>> buttery" and argument by incredulity.
>>>
>>> Brent
>>>
>>
>> I am not sure if I understand you. I am not saying that I am right
>> but I really do not understand you point. You say
>>
>> "Consciousness and computation are given their meaning by their
>> effecting actions in the world."
>>
>> and it seems that you imply that this could be applied for a robot
>> as well. My thought were that engineers who have design a robot
>> know everything how it is working.
>
> But they don't a robot, even one as simple as a Mars Rover perceives
> and acts on things the engineers don't know. A more advanced robot
> will also learn from experience and become as unpredictable as a
> person from the engineer's standpoint.
>

Okay, let us take more advanced robots. I guess that

Dario Floreano and Claudio Mattiussi, Bio-Inspired Artificial
Intelligence: Theories, Methods, and Technologies

should be perfect here. In the book you will find material about
learning in behavioral systems. Yet the authors do not use the term
consciousness at all. They even talk about intelligence just once:

Conclusion, p. 585: "A careful reader have noticed that we have not yet
defined what intelligence is. This was done on purpose because
intelligence has different meanings for different persons and in
different situations. For example, some believe that intelligence is the
ability to be creative; other think that it is the ability to make
predictions; and others believe that intelligence exists only in the eye
of the observer. In this book we have shown that biological and
artificial intelligence manifests itself though multiple processes and
mechanisms that interact at different spatial and temporal scales to
produce emergent and functional behavior. The most important implication
of the approaches presented here is that understanding and engineering
intelligence does not reduce to replicating a mammalian brain in a
computer but requires also capturing multiply types and levels of
interactions, such as those between brains and bodies, individual and
societies, learning and behavior, evolution and development,
self-protection and self-repair, to mention a few".

Hence, again, let us imagine that a robot with artificial neural
networks developed as described in the book can indeed learn something;
the book even contains examples in this respect. Yet the engineers
developing it have not even thought about consciousness. Hence, in my
view, if consciousness happens to arise in such a robot, we could speak
without a problem of epiphenomenalism. Why not?
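To make the point concrete, here is a minimal, purely illustrative sketch of the kind of learning involved (my own toy example, not taken from Floreano and Mattiussi): a single threshold unit that a robot controller could train from sensor examples. Note that nothing in the code mentions, or depends on, consciousness.

```python
def train(samples, epochs=50, lr=0.1):
    """Perceptron rule: learn weights for a 2-input threshold unit
    from (sensor_pair, target_action) examples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
            err = target - out           # 0 when the unit is already right
            w[0] += lr * err * x0
            w[1] += lr * err * x1
            b += lr * err
    return w, b

def act(w, b, sensors):
    """Policy of the trained unit: 1 = turn, 0 = keep straight."""
    x0, x1 = sensors
    return 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0

# Hypothetical training data: turn (1) whenever the left proximity
# sensor reads higher than the right one.
samples = [((0.9, 0.1), 1), ((0.8, 0.3), 1), ((0.2, 0.7), 0), ((0.1, 0.9), 0)]
w, b = train(samples)
```

After training, `act(w, b, (0.9, 0.1))` returns 1 and `act(w, b, (0.1, 0.9))` returns 0: the robot has "learned" the behaviour, while the question of whether it is like anything to be this controller is left entirely open.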

I have seen a project where engineers at least talk about a module QUALIA

http://www.mindconstruct.com/

"MIND|CONSTRUCT is developing a 'strong-AI engine', a so called AI-mind,
that can be used in (human-like) robotics, healthcare, aerospace
sciences and every other area where 'conscious' man-machine interaction
is of any importance.

The MIND|CONSTRUCT organization is the culmination of many years in
AI-research and the so called 'hard-problems', and the application of
elaborate experience in knowledge-management, for the design and
development of a 'strong-AI engine'."

If consciousness happens here, then we could at least find that it was
planned this way.

Evgenii
--
http://blog.rudnyi.ru/2011/03/intelligence.html

http://blog.rudnyi.ru/2012/05/mindconstruct.html

Bruno Marchal

Sep 22, 2012, 8:58:47 AM9/22/12
to everyth...@googlegroups.com
> Conclusion, p. 585 : “A careful reader have noticed that we have not
> yet defined what intelligence is. This was done on purpose because
> intelligence has different meanings for different persons and in
> different situations. For example, some believe that intelligence is
> the ability to be creative; other think that it is the ability to
> make predictions; and others believe that intelligence exists only
> in the eye of the observer. In this book we have shown that
> biological and artificial intelligence manifests itself though
> multiple processes and mechanisms that interact at different spatial
> and temporal scales to produce emergent and functional behavior. The
> most important implication of the approaches presented here is that
> understanding and engineering intelligence does not reduce to
> replicating a mammalian brain in a computer but requires also
> capturing multiply types and levels of interactions, such as those
> between brains and bodies, individual and societies, learning and
> behavior, evolution and development, self-protection and self-
> repair, to mention a few”.
>
> Hence, again let us imagine that a robot with artificial neural
> networks developed as described in the book can learn something
> indeed. In the book there are even examples in this respect. Yet,
> the engineers developing it have not even thought about
> consciousness. Hence, in my view, if consciousness happens to be in
> such a robot, then we could talk without a problem about
> epiphenomenalism. Why not?
>
> I have seen a project where engineers at least talk about a module
> QUALIA
>
> http://www.mindconstruct.com/
>
> “MIND|CONSTRUCT is developing a ‘strong-AI engine’, a so called AI-
> mind, that can be used in (human-like) robotics, healthcare,
> aerospace sciences and every other area where ‘conscious’ man-
> machine interaction is of any importance.
>
> The MIND|CONSTRUCT organization is the culmination of many years in
> AI-research and the so called ‘hard-problems’, and the application
> of elaborate experience in knowledge-management, for the design and
> development of a ‘strong-AI engine’.“
>
> If consciousness happens here, then we could at least find that it
> was planned this way.

It is part of what a machine is that we cannot know what we are doing
in building them, so humans might as well build a conscious machine
without knowing it, except later, when the machine complains or fights
for its rights.
Comp is rather negative on the idea of programming consciousness. We
can only let consciousness manifest itself, or not. Or we can copy an
intelligent machine, partially or completely.

Bruno

http://iridia.ulb.ac.be/~marchal/



Evgenii Rudnyi

Sep 22, 2012, 9:29:57 AM9/22/12
to everyth...@googlegroups.com
On 22.09.2012 14:58 Bruno Marchal said the following:
>> Conclusion, p. 585: "A careful reader have noticed that we have
>> not yet defined what intelligence is. This was done on purpose
>> because intelligence has different meanings for different persons
>> and in different situations. For example, some believe that
>> intelligence is the ability to be creative; other think that it is
>> the ability to make predictions; and others believe that
>> intelligence exists only in the eye of the observer. In this book
>> we have shown that biological and artificial intelligence manifests
>> itself though multiple processes and mechanisms that interact at
>> different spatial and temporal scales to produce emergent and
>> functional behavior. The most important implication of the
>> approaches presented here is that understanding and engineering
>> intelligence does not reduce to replicating a mammalian brain in a
>> computer but requires also capturing multiply types and levels of
>> interactions, such as those between brains and bodies, individual
>> and societies, learning and behavior, evolution and development,
>> self-protection and self-repair, to mention a few".
>>
>> Hence, again let us imagine that a robot with artificial neural
>> networks developed as described in the book can learn something
>> indeed. In the book there are even examples in this respect. Yet,
>> the engineers developing it have not even thought about
>> consciousness. Hence, in my view, if consciousness happens to be in
>> such a robot, then we could talk without a problem about
>> epiphenomenalism. Why not?
>>
>> I have seen a project where engineers at least talk about a module
>> QUALIA
>>
>> http://www.mindconstruct.com/
>>
>> "MIND|CONSTRUCT is developing a 'strong-AI engine', a so called
>> AI-mind, that can be used in (human-like) robotics, healthcare,
>> aerospace sciences and every other area where 'conscious'
>> man-machine interaction is of any importance.
>>
>> The MIND|CONSTRUCT organization is the culmination of many years in
>> AI-research and the so called 'hard-problems', and the application
>> of elaborate experience in knowledge-management, for the design and
>> development of a 'strong-AI engine'."
>>
>> If consciousness happens here, then we could at least find that it
>> was planned this way.
>
> It is part of what a machine is that we cannot know what we are doing
> in building them, so human might as well build a conscious machine
> without knowing it; except later, when the machine complains or fight
> for its right. Comp is rather negative on the idea of programming
> consciousness. We can only let consciousness manifest itself, or not.
> Or we can copy intelligent machine, partially or completely.

Then, I am afraid, comp is of no help to the AI community, as it seems
it cannot guide engineers on how to develop an intelligent robot.

Evgenii

meekerdb

Sep 22, 2012, 4:49:06 PM9/22/12
to everyth...@googlegroups.com
On 9/22/2012 6:29 AM, Evgenii Rudnyi wrote:
On 22.09.2012 14:58 Bruno Marchal said the following:

On 21 Sep 2012, at 21:27, Evgenii Rudnyi wrote:

On 19.09.2012 00:57 meekerdb said the following:
On 9/17/2012 11:27 AM, Evgenii Rudnyi wrote:
Do you mean that the meaning in a guided missile system
happens as by-product of its development by engineers?

To me, it seems that meaning that you have defined in Mars
Rovers is yet another theory of epiphenomenalism.

And your quote and question are yet another example of
"nothing buttery" and argument by incredulity.

Brent


I am not sure if I understand you. I am not saying that I am
right but I really do not understand you point. You say

"Consciousness and computation are given their meaning by
their effecting actions in the world."

and it seems that you imply that this could be applied for a
robot as well. My thought were that engineers who have design a
robot know everything how it is working.

But they don't a robot, even one as simple as a Mars Rover
perceives and acts on things the engineers don't know. A more
advanced robot will also learn from experience and become as
unpredictable as a person from the engineer's standpoint.


Okay, let us take more advanced robots. I guess that

Dario Floreano and Claudio Mattiussi, Bio-Inspired Artificial
Intelligence: Theories, Methods, and Technologies

should be perfect here. You will find in the book about learning in
behavioral systems. Yet, the authors do not use the term
AI-research and the so called 'hard-problems', and the application
of elaborate experience in knowledge-management, for the design and
development of a 'strong-AI engine'."

If consciousness happens here, then we could at least find that it
was planned this way.

It is part of what a machine is that we cannot know what we are doing
in building them, so human might as well build a conscious machine
without knowing it; except later, when the machine complains or fight
for its right. Comp is rather negative on the idea of programming
consciousness. We can only let consciousness manifest itself, or not.
Or we can copy intelligent machine, partially or completely.

Then, I am afraid, comp is of no help to the AI community as it seems cannot guide engineers on how to develop an intelligent robot.

Evgenii

In the past, Bruno has said that a machine that understands transfinite induction will be conscious. But being conscious and intelligent are not the same thing.

Brent

Evgenii Rudnyi

Sep 23, 2012, 3:31:23 AM9/23/12
to everyth...@googlegroups.com
On 22.09.2012 22:49 meekerdb said the following:
> On 9/22/2012 6:29 AM, Evgenii Rudnyi wrote:
>> On 22.09.2012 14:58 Bruno Marchal said the following:

...

>>> It is part of what a machine is that we cannot know what we are
>>> doing in building them, so human might as well build a conscious
>>> machine without knowing it; except later, when the machine
>>> complains or fight for its right. Comp is rather negative on the
>>> idea of programming consciousness. We can only let consciousness
>>> manifest itself, or not. Or we can copy intelligent machine,
>>> partially or completely.
>>
>> Then, I am afraid, comp is of no help to the AI community as it
>> seems cannot guide engineers on how to develop an intelligent
>> robot.
>>
>> Evgenii
>
> In the past, Bruno has said that a machine that understands
> transfinite induction will be conscious. But being conscious and
> intelligent are not the same thing.
>
> Brent
>

In my view this is the same as epiphenomenalism. Engineers develop a
robot to achieve a prescribed function; they do not care about
consciousness in this respect. Consciousness would then appear
automatically, but the function developed by the engineers would not
depend on it. Hence epiphenomenalism seems to apply.

Evgenii

Bruno Marchal

Sep 23, 2012, 3:52:55 AM9/23/12
to everyth...@googlegroups.com
>>> Conclusion, p. 585 : “A careful reader have noticed that we have
>>> not yet defined what intelligence is. This was done on purpose
>>> because intelligence has different meanings for different persons
>>> and in different situations. For example, some believe that
>>> intelligence is the ability to be creative; other think that it is
>>> the ability to make predictions; and others believe that
>>> intelligence exists only in the eye of the observer. In this book
>>> we have shown that biological and artificial intelligence manifests
>>> itself though multiple processes and mechanisms that interact at
>>> different spatial and temporal scales to produce emergent and
>>> functional behavior. The most important implication of the
>>> approaches presented here is that understanding and engineering
>>> intelligence does not reduce to replicating a mammalian brain in a
>>> computer but requires also capturing multiply types and levels of
>>> interactions, such as those between brains and bodies, individual
>>> and societies, learning and behavior, evolution and development,
>>> self-protection and self-repair, to mention a few”.
>>>
>>> Hence, again let us imagine that a robot with artificial neural
>>> networks developed as described in the book can learn something
>>> indeed. In the book there are even examples in this respect. Yet,
>>> the engineers developing it have not even thought about
>>> consciousness. Hence, in my view, if consciousness happens to be in
>>> such a robot, then we could talk without a problem about
>>> epiphenomenalism. Why not?
>>>
>>> I have seen a project where engineers at least talk about a module
>>> QUALIA
>>>
>>> http://www.mindconstruct.com/
>>>
>>> “MIND|CONSTRUCT is developing a ‘strong-AI engine’, a so called
>>> AI-mind, that can be used in (human-like) robotics, healthcare,
>>> aerospace sciences and every other area where ‘conscious’
>>> man-machine interaction is of any importance.
>>>
>>> The MIND|CONSTRUCT organization is the culmination of many years in
>>> AI-research and the so called ‘hard-problems’, and the application
>>> of elaborate experience in knowledge-management, for the design and
>>> development of a ‘strong-AI engine’.“
>>>
>>> If consciousness happens here, then we could at least find that it
>>> was planned this way.
>>
>> It is part of what a machine is that we cannot know what we are doing
>> in building them, so human might as well build a conscious machine
>> without knowing it; except later, when the machine complains or fight
>> for its right. Comp is rather negative on the idea of programming
>> consciousness. We can only let consciousness manifest itself, or not.
>> Or we can copy intelligent machine, partially or completely.
>
> Then, I am afraid, comp is of no help to the AI community as it
> seems cannot guide engineers on how to develop an intelligent robot.

It can help to see how not to develop an intelligent robot, for
example by programming it directly. It helps in the negative, as the
proof of the irrationality of sqrt(2) helps us not to search for two
integers whose ratio equals sqrt(2).
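The classical negative result alluded to here fits in a few lines; as a reminder, the standard proof by contradiction (not part of the original post) runs as follows:

```latex
% Standard textbook argument; assumes the amsthm package for the proof environment.
\begin{proof}[Irrationality of $\sqrt{2}$]
Suppose $\sqrt{2} = p/q$ with $p, q$ coprime integers and $q \neq 0$.
Then $p^2 = 2q^2$, so $p^2$ is even, hence $p$ is even: write $p = 2k$.
Substituting gives $4k^2 = 2q^2$, i.e.\ $q^2 = 2k^2$, so $q$ is even as
well, contradicting the coprimality of $p$ and $q$. Hence no such pair
exists, and searching for one is provably futile.
\end{proof}
```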

The whole of theoretical AI is full of such negative, provably
non-constructive theorems.

It helps us to understand that "intelligence" is not a normative
notion, and that if a machine becomes intelligent, this might not be
known in advance; it is more a question of letting machines explore
than of having them obey specific instructions.

Comp helps us to be less naïve about both machine and human
intelligence, and it also separates the notion of consciousness from
the notion of intelligence.

Then our goal here is to get a TOE, not to help engineers in making an
intelligent machine. But still there is that important negative help
described above. Perhaps comp can even help humans to develop some more
respect for machines ... and humans.

Bruno

http://iridia.ulb.ac.be/~marchal/



Roger Clough

Sep 23, 2012, 9:20:55 AM9/23/12
to everything-list
 
Intelligence and consciousness require an agent outside
of spacetime (mental) to make choices about or manipulate
physical objects within spacetime.
 
Computers have no agent or self outside of spacetime.
So they have no intelligence and cannot be conscious.
 
Period.
 
 
 
Roger Clough, rcl...@verizon.net
9/23/2012
"Forever is a long time, especially near the end." -Woody Allen
 
 
----- Original Message -----
To: everything-list
Sent: 2012-09-22, 09:29:57
Subject: Re: questions on machines, belief, awareness, and knowledge

>> Conclusion, p. 585: "A careful reader have noticed that we have

>> not yet defined what intelligence is. This was done on purpose
>> because intelligence has different meanings for different persons
>> and in different situations. For example, some believe that
>> intelligence is the ability to be creative; other think that it is
>> the ability to make predictions; and others believe that
>> intelligence exists only in the eye of the observer. In this book
>> we have shown that biological and artificial intelligence manifests
>> itself though multiple processes and mechanisms that interact at
>> different spatial and temporal scales to produce emergent and
>> functional behavior. The most important implication of the
>> approaches presented here is that understanding and engineering
>> intelligence does not reduce to replicating a mammalian brain in a
>> computer but requires also capturing multiply types and levels of
>> interactions, such as those between brains and bodies, individual
>> and societies, learning and behavior, evolution and development,
>> self-protection and self-repair, to mention a few".

>>
>> Hence, again let us imagine that a robot with artificial neural
>> networks developed as described in the book can learn something
>> indeed. In the book there are even examples in this respect. Yet,
>> the engineers developing it have not even thought about
>> consciousness. Hence, in my view, if consciousness happens to be in
>> such a robot, then we could talk without a problem about
>> epiphenomenalism. Why not?
>>
>> I have seen a project where engineers at least talk about a module
>> QUALIA
>>
>> http://www.mindconstruct.com/
>>
>> "MIND|CONSTRUCT is developing a 'strong-AI engine', a so called

>> AI-mind, that can be used in (human-like) robotics, healthcare,
>> aerospace sciences and every other area where 'conscious'

>> man-machine interaction is of any importance.
>>
>> The MIND|CONSTRUCT organization is the culmination of many years in
>> AI-research and the so called 'hard-problems', and the application

>> of elaborate experience in knowledge-management, for the design and
>> development of a 'strong-AI engine'."

>>
>> If consciousness happens here, then we could at least find that it
>> was planned this way.
>
> It is part of what a machine is that we cannot know what we are doing
> in building them, so human might as well build a conscious machine
> without knowing it; except later, when the machine complains or fight
> for its right. Comp is rather negative on the idea of programming
> consciousness. We can only let consciousness manifest itself, or not.
> Or we can copy intelligent machine, partially or completely.

Then, I am afraid, comp is of no help to the AI community as it seems
cannot guide engineers on how to develop an intelligent robot.

Evgenii

> Bruno
>
>
>
>>
>> Evgenii -- http://blog.rudnyi.ru/2011/03/intelligence.html
>>
>> http://blog.rudnyi.ru/2012/05/mindconstruct.html
>>

Bruno Marchal

Sep 23, 2012, 10:51:02 AM9/23/12
to everyth...@googlegroups.com
Not at all. Study the UDA to see why exactly, but if comp is correct,
consciousness is somehow what defines the physical realities, making it
possible for engineers to build the machines, and then consciousness,
despite not being programmable per se, does have a role, like
relatively speeding up the computations. Like "non free will", the
"epiphenomenalism" is only "apparent" because you take the "outer
god's eye view", but with comp there is no matter, nor consciousness,
at that level, and we have no access at all to that level (without
assuming comp and accessing it intellectually, that is, only
arithmetic).

This is hard to explain if you fail to see the physics/machine's
psychology/theology reversal. You are still (consciously or not)
maintaining the physical supervenience thesis, or an Aristotelian
ontology, but comp prevents this from being possible.

Bruno


http://iridia.ulb.ac.be/~marchal/



Evgenii Rudnyi

Sep 23, 2012, 12:33:23 PM9/23/12
to everyth...@googlegroups.com
On 23.09.2012 16:51 Bruno Marchal said the following:
>
> On 23 Sep 2012, at 09:31, Evgenii Rudnyi wrote:
>
>> On 22.09.2012 22:49 meekerdb said the following:

...

>>> In the past, Bruno has said that a machine that understands
>>> transfinite induction will be conscious. But being conscious
>>> and intelligent are not the same thing.
>>>
>>> Brent
>>>
>>
>> In my view this is the same as epiphenomenalism. Engineers develop
>> a robot to achieve a prescribed function. They do not care about
>> consciousness in this respect. Then consciousness will appear
>> automatically but the function developed by engineers does not
>> depend on it. Hence epiphenomenalism seems to apply.
>
> Not at all. Study UDA to see why exactly, but if comp is correct,
> consciousness is somehow what defines the physical realities, making
> possible for engineers to build the machines, and then
> consciousness, despite not being programmable per se, does have a
> role, like relatively speeding up the computations. Like "non free
> will", the "epiphenomenalism" is only "apparent" because you take
> the "outer god's eyes view", but with comp, there is no matter, nor
> consciousness, at that level, and we have no access at all at that
> level (without assuming comp, and accessing it intellectually, that
> is only arithmetic).
>
> This is hard to explain if you fail to see the physics/machine's
> psychology/theology reversal. You are still (consciously or not)
> maintaining the physical supervenience thesis, or an aristotelian
> ontology, but comp prevents this to be possible.
>

Bruno,

I have considered a concrete case, in which engineers develop a robot,
not a general one. For such a concrete case, I do not understand your
answer.

I have understood Brent as saying that when engineers develop a robot
they need only care about the functionality to be achieved and can
ignore consciousness entirely. Whether it appears in the robot or not
is not the engineers' business. Do you agree with such a statement or
not?

Evgenii

Stathis Papaioannou

Sep 24, 2012, 2:44:10 AM9/24/12
to everyth...@googlegroups.com
On Sun, Sep 23, 2012 at 11:20 PM, Roger Clough <rcl...@verizon.net> wrote:
>
> Intelligence and consciousness require an agent outside
> of spacetime (mental) to make choices about or manipulate
> physical objects within spacetime.
>
> Computers have no agent or self outside of spacetime.
> So they have no intelligence and cannot be conscious.
>
> Period.

Roger,

How do you come up with this stuff?


--
Stathis Papaioannou

Bruno Marchal

Sep 24, 2012, 5:07:05 AM9/24/12
to everyth...@googlegroups.com
The robot might disagree.

You might disagree if you get a digital brain and people torture you
on the pretext that you are a zombie.

And you are right, we can dismiss consciousness. We have already
dismissed the emotions and feelings of human slaves for a very long
time. That does not mean those slaves were not conscious, nor that
consciousness has no role.

If you want a robot or slave with flexible, high cognitive capacities,
I doubt that it can harbor a mind without consciousness, which is just
what arises when the robot infers (interrogates) its own
sanity/consistency and gets aware of its non-communicable but "known"
features.

Then, with comp, you cannot understand where matter comes from without
using the concept of consciousness, or at least its approximation
through the main first-person notions, like access to personal
memories, belief, knowledge, sensations, etc.

You don't need to understand or even believe in the Higgs boson to
make a pizza, but if the standard model is correct, then there would
be no pizza at all without it.

If you adopt an instrumental policy, you can evacuate *all*
questioning, but when generalized, this attitude leads people to
depression, a crisis of sense and meaning, and disgust with science.
To separate science from spirituality can only lead to technological
idolatry in the hands of barbarians. Individuals become functional
objects. That means the suffering and death of humanity.


Bruno


http://iridia.ulb.ac.be/~marchal/



Roger Clough

Sep 24, 2012, 8:41:31 AM9/24/12
to everything-list
Hi Stathis Papaioannou

You'll have to ask Descartes.


Roger Clough, rcl...@verizon.net
9/24/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Original Message -----
From: Stathis Papaioannou
To: everything-list
Sent: 2012-09-24, 02:44:10
Subject: Re: A requirement of intelligence and consciousness which only humans have


On Sun, Sep 23, 2012 at 11:20 PM, Roger Clough wrote:
>
> Intelligence and consciousness require an agent outside
> of spacetime (mental) to make choices about or manipulate
> physical objects within spacetime.
>
> Computers have no agent or self outside of spacetime.
> So they have no intelligence and cannot be conscious.
>
> Period.

Roger,

How do you come up with this stuff?


--
Stathis Papaioannou


Roger Clough

unread,
Sep 24, 2012, 9:34:55 AM9/24/12
to everything-list
Hi meekerdb

The computer can mechanically prove something,
but it cannot know that it did so. It cannot
sit back with a beer and muse over how smart it is.


Roger Clough, rcl...@verizon.net
9/24/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: meekerdb
Receiver: everything-list
Time: 2012-09-22, 16:49:06
Subject: Re: questions on machines, belief, awareness, and knowledge


On 9/22/2012 6:29 AM, Evgenii Rudnyi wrote:
On 22.09.2012 14:58 Bruno Marchal said the following:


On 21 Sep 2012, at 21:27, Evgenii Rudnyi wrote:


On 19.09.2012 00:57 meekerdb said the following:

On 9/17/2012 11:27 AM, Evgenii Rudnyi wrote:

Do you mean that the meaning in a guided missile system
happens as by-product of its development by engineers?

To me, it seems that meaning that you have defined in Mars
Rovers is yet another theory of epiphenomenalism.


And your quote and question are yet another example of
"nothing buttery" and argument by incredulity.

Brent



I am not sure if I understand you. I am not saying that I am
right but I really do not understand you point. You say

"Consciousness and computation are given their meaning by
their effecting actions in the world."

and it seems that you imply that this could be applied to a
robot as well. My thought was that engineers who have designed a
robot know everything about how it works.


But they don't. A robot, even one as simple as a Mars Rover,
perceives and acts on things the engineers don't know. A more
advanced robot will also learn from experience and become as
unpredictable as a person from the engineer's standpoint.



Okay, let us take more advanced robots. I guess that

Dario Floreano and Claudio Mattiussi, Bio-Inspired Artificial
Intelligence: Theories, Methods, and Technologies

should be perfect here. You will find in the book about learning in
behavioral systems. Yet, the authors do not use the term
consciousness at all. They even talk about intelligence just once

Conclusion, p. 585: "A careful reader might have noticed that we have
not yet defined what intelligence is. This was done on purpose
because intelligence has different meanings for different persons
and in different situations. For example, some believe that
intelligence is the ability to be creative; others think that it is
the ability to make predictions; and others believe that
intelligence exists only in the eye of the observer. In this book
we have shown that biological and artificial intelligence manifests
itself through multiple processes and mechanisms that interact at
different spatial and temporal scales to produce emergent and
functional behavior. The most important implication of the
approaches presented here is that understanding and engineering
intelligence does not reduce to replicating a mammalian brain in a
computer but requires also capturing multiple types and levels of
interactions, such as those between brains and bodies, individuals
and societies, learning and behavior, evolution and development,
self-protection and self-repair, to mention a few".

Hence, again let us imagine that a robot with artificial neural
networks developed as described in the book can learn something
indeed. In the book there are even examples in this respect. Yet,
the engineers developing it have not even thought about
consciousness. Hence, in my view, if consciousness happens to be in
such a robot, then we could talk without a problem about
epiphenomenalism. Why not?

I have seen a project where engineers at least talk about a module
QUALIA

http://www.mindconstruct.com/

"MIND|CONSTRUCT is developing a 'strong-AI engine', a so called
AI-mind, that can be used in (human-like) robotics, healthcare,
aerospace sciences and every other area where 'conscious'
man-machine interaction is of any importance.

The MIND|CONSTRUCT organization is the culmination of many years in
AI-research and the so called 'hard-problems', and the application
of elaborate experience in knowledge-management, for the design and
development of a 'strong-AI engine'."

If consciousness happens here, then we could at least find that it
was planned this way.


It is part of what a machine is that we cannot know what we are doing
in building them, so humans might as well build a conscious machine
without knowing it; except later, when the machine complains or fights
for its rights. Comp is rather negative on the idea of programming
consciousness. We can only let consciousness manifest itself, or not.
Or we can copy an intelligent machine, partially or completely.


Then, I am afraid, comp is of no help to the AI community, as it seems it cannot guide engineers on how to develop an intelligent robot.

Evgenii

In the past, Bruno has said that a machine that understands transfinite induction will be conscious. But being conscious and intelligent are not the same thing.

Brent

Stephen P. King

unread,
Sep 24, 2012, 10:39:14 AM9/24/12
to everyth...@googlegroups.com
On 9/24/2012 9:34 AM, Roger Clough wrote:
> Hi meekerdb
>
> The computer can mechanically prove something,
> but it cannot know that it did so. It cannot
> sit back with a beer and muse over how smart it is.
>
>
Hi Roger,

    What you are claiming a computer does not have is the
ability to model itself within its environment and compute optimizations
of such a model to guide its future choices. This can be well
represented within a computational framework and it is something that
Bruno has worked out in his comp model. (My only beef with Bruno is that
his model is so abstract that it is completely disconnected from the
physical world and thus has a "body" problem.)

Bruno Marchal

unread,
Sep 24, 2012, 10:45:01 AM9/24/12
to everyth...@googlegroups.com

On 24 Sep 2012, at 16:39, Stephen P. King wrote:

> On 9/24/2012 9:34 AM, Roger Clough wrote:
>> Hi meekerdb
>>
>> The computer can mechanically prove something,
>> but it cannot know that it did so. It cannot
>> sit back with a beer and muse over how smart it is.
>>
>>
> Hi Roger,
>
> What you are considering that a computer does not have is the
> ability to model itself within its environment and compute
> optimizations of such a model to guide its future choices. This can
> be well represented within a computational framework and it is
> something that Bruno has worked out in his comp model. (My only beef
> with Bruno is that his model is so abstract that it is completely
> disconnected from the physical world and thus has a "body" problem.)

But that is the "scientific success" of the comp theory (not
"model"): it reduces the mind-body problem to a body problem, in a
precise realm, with a technique to extract the "laws of bodies", making
comp an utterly scientific theory, in Popper's sense. You still miss
the point. The body problem is not a defect, it is the main success of
comp.

Bruno

http://iridia.ulb.ac.be/~marchal/



meekerdb

unread,
Sep 24, 2012, 12:23:29 PM9/24/12
to everyth...@googlegroups.com
In my defense, I only said that the engineers could develop artificial intelligences
without considering consciousness. I didn't say they *must* do so, and in fact I think
they are ethically bound to consider it. John McCarthy already wrote on this years
ago. And it has nothing to do with whether supervenience or comp is true. In either case
an intelligent robot is likely to be a conscious being, and ethical considerations arise.

Brent

Brian Tenneson

unread,
Sep 24, 2012, 1:44:22 PM9/24/12
to everyth...@googlegroups.com
Hi Bruno

On Fri, Sep 14, 2012 at 1:20 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:
Hi Brian,



On 13 Sep 2012, at 22:04, Brian Tenneson wrote:

Bruno,

You use B as a predicate symbol for "belief" I think.

I use it for the unspecified modal box, in some contexts (in place of the more common "[]").
Then I use it mainly for the box corresponding to Gödel's beweisbar (provability) arithmetical predicate (definable with the symbols E, A, &, ->, ~, s, 0 and parentheses).
Thanks to the fact that Bp -> p is not a theorem, it can play the role of believability for the ideally correct machines.


How come Bp->p is not a theorem?




What are some properties of B and is there a predicate for knowing/being aware of that might lead to a definition for self-awareness?

Yes, B and its variants:
B_1 p = Bp & p
B_2 p = Bp & Dt
B_3 p = Bp & Dt & p,
and others.

D?  B_1? B_2? B_3?




btw, what is a machine and what types of machines are there?

With comp we bet that we are, at some level, digital machine. The theory is one studied by logicians (Post, Church, Turing, etc.).

I am also curious as to the definition of a digital machine.




Is there a generic description for a structure (in the math logic sense) to have a belief or to be aware; something like
A |= "I am the structure A"
?

Yes, by using the Dx = xx method, you can define a machine having its integral 3p plan available. But the 1p-self, given by Bp & p, does not admit any name. It is the difference between "I have two legs" and "I have a pain in a leg, even if a phantom one". G* proves them equivalent (for correct machines), but G cannot identify them, and they obey different logics (G and S4Grz).

Dx = xx?




Finally, on a different note, if there is a structure for which all structures can be 1-1 injected into it, does that in itself imply a sort of ultimate structure perhaps what Max Tegmark views as the level IV multiverse?

A 1-1 map is too cheap for that, and the set structure is too much of a structural flattening. Comp uses the simulation notion, at a non-specifiable substitution level.

This structure I have in mind having the property that all structures can be injected into it has more structure than a set structure.  See, I have revised my thoughts and put them into a fairly short document. You helped me a year or two ago to show me some flaws with my thoughts in a document. I could send it to you.

Bruno Marchal

unread,
Sep 25, 2012, 4:04:39 AM9/25/12
to everyth...@googlegroups.com
Hi Brian,


On 24 Sep 2012, at 19:44, Brian Tenneson wrote:

Hi Bruno

On Fri, Sep 14, 2012 at 1:20 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:
Hi Brian,



On 13 Sep 2012, at 22:04, Brian Tenneson wrote:

Bruno,

You use B as a predicate symbol for "belief" I think.

I use it for the unspecified modal box, in some contexts (in place of the more common "[]").
Then I use it mainly for the box corresponding to Gödel's beweisbar (provability) arithmetical predicate (definable with the symbols E, A, &, ->, ~, s, 0 and parentheses).
Thanks to the fact that Bp -> p is not a theorem, it can play the role of believability for the ideally correct machines.


How come Bp->p is not a theorem?

If it was, Bf -> f would be a theorem, and thus ~Bf would be a theorem, and this would contradict the second incompleteness theorem.

Or by Löb, if Bf -> f is a theorem, then by necessitation B(Bf->f) would be a theorem, and by Löb (B(Bp->p)->Bp) and modus ponens, we would get Bf, and by Bf -> f, we would get f, and the machine would be inconsistent.

Bf -> f is a theorem of G*, and so its arithmetical interpretation is true (in the standard model), but this the machine cannot prove.
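Spelled out in standard provability-logic notation (□ for B, f for falsity), the Löb route in the preceding paragraph is just:

```latex
\begin{align*}
1.&\ \Box f \to f && \text{(the assumed theorem)}\\
2.&\ \Box(\Box f \to f) && \text{(necessitation on 1)}\\
3.&\ \Box(\Box f \to f) \to \Box f && \text{(L\"ob's axiom, with } p := f\text{)}\\
4.&\ \Box f && \text{(modus ponens, 2 and 3)}\\
5.&\ f && \text{(modus ponens, 1 and 4: inconsistency)}
\end{align*}
```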






What are some properties of B and is there a predicate for knowing/being aware of that might lead to a definition for self-awareness?

Yes, B and its variants:
B_1 p = Bp & p
B_2 p = Bp & Dt
B_3 p = Bp & Dt & p,
and others.

D?  B_1? B_2? B_3?

Dp is defined by ~B~p.
B_1 p is defined by Bp & p, etc. (I mean by their arithmetical interpretation). When I have more time I will give the precise Solovay theorem, here or on FOAR.
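As an illustration only (this is not Bruno's arithmetical construction), the pattern of these definitions can be sketched by treating "provable" as membership in a toy machine's theorem set and "true" as membership in a toy world. All the sets and names below are made-up assumptions:

```python
# Toy model: B is provability-in-a-theorem-set, D its dual ~B~,
# and the Theaetetus variant B_1 p = Bp & p is "provable AND true".

truths = {"p", "q"}    # what is actually the case in the toy world
theorems = {"p", "r"}  # what the toy machine proves ("r" is a false belief)

def B(s):          # believability / provability
    return s in theorems

def neg(s):        # toy negation on sentence names
    return s[1:] if s.startswith("~") else "~" + s

def D(s):          # Dp := ~B~p (consistency-flavoured dual)
    return not B(neg(s))

def true(s):       # truth in the toy world
    return s in truths if not s.startswith("~") else s[1:] not in truths

def knows(s):      # B_1 p := Bp & p -- belief connected to truth
    return B(s) and true(s)

print(B("r"), true("r"), knows("r"))  # provable, false, hence not known
print(knows("p"))                     # provable and true: known
```

The point of the sketch: B and B_1 can disagree ("r" is believed but not known), which is the gap the Theaetetus move exploits.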







btw, what is a machine and what types of machines are there?

With comp we bet that we are, at some level, digital machine. The theory is one studied by logicians (Post, Church, Turing, etc.).

I am also curious as to the definition of a digital machine.

Anything emulable by a Turing machine, or anything definable by a sigma_1 arithmetical relation. These are provably equivalent, and the notion gets very general with Church's thesis.
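A minimal sketch of the Turing-machine side of that definition (the rule table and helper names are my own illustrative choices, not anything from the logicians cited):

```python
# A tiny one-tape Turing machine emulator. The sample machine below
# appends a 1 to a block of 1s, i.e. computes the successor in unary.

def run_tm(rules, tape, state="start", blank="_", max_steps=1000):
    # rules: (state, symbol) -> (write, move "L"/"R", next_state);
    # the table must cover every reachable configuration.
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Move right over the 1s, write a 1 on the first blank, then halt.
succ_rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run_tm(succ_rules, "111"))  # unary 3 -> "1111" (unary 4)
```

"Digital machine" in the comp sense is anything whose behavior such an emulator could, in principle, reproduce at some level of description.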








Is there a generic description for a structure (in the math logic sense) to have a belief or to be aware; something like
A |= "I am the structure A"
?

Yes, by using the Dx = xx method, you can define a machine having its integral 3p plan available. But the 1p-self, given by Bp & p, does not admit any name. It is the difference between "I have two legs" and "I have a pain in a leg, even if a phantom one". G* proves them equivalent (for correct machines), but G cannot identify them, and they obey different logics (G and S4Grz).

Dx = xx?

I will soon explain this with the phi_i notations. But the basic idea is that you can build a self-referential machine by applying a duplicator to itself. If Dx produces "xx", DD will produce "DD", that is, itself. It is an effective diagonalization. See how to build amoebas and self-regenerating programs with this in my paper "Amoeba, planaria and dreaming machines", where I illustrate in LISP how to use that idea. Some subroutines are explained in the appendix of "Conscience & mécanisme".
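The Dx = "xx" trick can be imitated outside LISP too. Here is a hedged Python sketch (the names D and dup are my own, purely illustrative): applying a textual duplicator to its own text yields an expression that evaluates to itself.

```python
# D takes a program text x and yields "x applied to the quotation of x".
# Applying the duplicator to itself (DD) gives a fixed point: a program
# whose value is its own source text.

def D(x):
    return f"({x})({x!r})"

# The duplicator as program text: given its own source s, rebuild D(s).
dup = "lambda s: f'({s})({s!r})'"

quine = D(dup)            # this is "DD"
result = eval(quine)      # evaluating DD...
print(result == quine)    # True: DD evaluates to itself
```

This is the same effective diagonalization: D never mentions itself, yet DD reproduces DD.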







Finally, on a different note, if there is a structure for which all structures can be 1-1 injected into it, does that in itself imply a sort of ultimate structure perhaps what Max Tegmark views as the level IV multiverse?

A 1-1 map is too cheap for that, and the set structure is a too much structural flattening. Comp used the simulation, notion, at a non specifiable level substitution.

This structure I have in mind having the property that all structures can be injected into it has more structure than a set structure.  See, I have revised my thoughts and put them into a fairly short document. You helped me a year or two ago to show me some flaws with my thoughts in a document. I could send it to you.

OK. September is busy, but I will have more time (I hope) in October; you can send it to me.

Best,

Bruno



Roger Clough

unread,
Sep 25, 2012, 7:40:44 AM9/25/12
to everything-list
Hi Bruno Marchal

Do you believe that a computer has a physical mind
that can be conscious ?


Roger Clough, rcl...@verizon.net
9/25/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Bruno Marchal
Receiver: everything-list
Time: 2012-09-24, 10:45:01
Subject: Re: questions on machines, belief, awareness, and knowledge


Roger Clough

unread,
Sep 25, 2012, 7:48:51 AM9/25/12
to everything-list
Hi Bruno Marchal

The immanent is that which is in spacetime, is extended and physical.
The transcendent is that which is outside of spacetime, is not extended and is nonphysical.

Platonia is transcendent, numbers are transcendent, arithmetic is transcendent.
Yet you seem to believe that mind is immanent, not transcendent.
Isn't there a conflict in such an understanding ?

Roger Clough, rcl...@verizon.net
9/25/2012
"Forever is a long time, especially near the end." -Woody Allen

====================================================


----- Receiving the following content -----
From: Bruno Marchal
Receiver: everything-list
Time: 2012-09-24, 10:45:01
Subject: Re: questions on machines, belief, awareness, and knowledge


Roger Clough

unread,
Sep 25, 2012, 8:26:02 AM9/25/12
to everything-list
Hi Stephen P. King

I don't deny that a computer can optimize itself,
but I deny that the operation is autonomous,
meaning independent, for ultimately it is software
dependent, using a program written by an outsider.
True intelligence and true consciousness must be
to whatever extent possible independent of outside
help or perspective.

Isn't the self 1p? Not sure.



Roger Clough, rcl...@verizon.net
9/25/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Stephen P. King
Receiver: everything-list
Time: 2012-09-24, 10:39:14
Subject: Re: questions on machines, belief, awareness, and knowledge


Bruno Marchal

unread,
Sep 25, 2012, 11:20:17 AM9/25/12
to everyth...@googlegroups.com

Hi Roger Clough

> Hi Bruno Marchal
>
> Do you believe that a computer has a physical mind
> that can be conscious ?

My personal beliefs are private.

With comp a computer (universal machine/number) has no physical mind,
nor a primitive physical body. But it has an infinity of non physical
bodies. It is bizarre, and I am not sure this can be understood
without taking the comp first person indeterminacy into account.
Knowing the work of Everett in QM can help to illustrate, but QM is
not assumed in comp.

> The immanent is that which is in spacetime, is extended and physical.
> The transcendent is that which is outside of spacetime, is not
> extended and is nonphysical.

I can be OK with that vocabulary.


>
> Platonia is transcendent, numbers are transcendent, arithmetic is
> transcendent.

OK. Although I myself use "transcendent" in a more restricted sense,
I can be OK with this for a while.


> Yet you seem to believe that mind is immanent, not transcendent.

The mind of the universal machine is transcendent, and it obeys
transcendental laws, but my particular mind yesterday, when listening
to music and drinking coffee, was immanent. The mind has the two
aspects: it is transcendent, but from its perspective it has, most
of the time, an immanent aspect. In fact, that is what consciousness does
all the time: connecting transcendence and immanence, through
self-differentiation. The physical has those two aspects too (with comp):
it connects the universal physical laws with the geographical, particular,
local and relative reality.


> Isn't there a conflict in such an understanding ?


You tell me.

> In idealism the ideal world is the reflection of the actual world,

That might not exist, even in Platonia. With comp, we can take a very
little Platonia (arithmetical truth, or even a tiny part of it).
Note that comp is neutral monist. The transcendental truth is very
simple, and entirely delimited by the laws of addition and
multiplication (or anything Turing equivalent). The rest are digital
machines (or relative number) psychological projections: they are
lawful too.


> so that the material brain is reflected in the ideal mind,
> but one critical difference.

> Thought requires that somewhere there's a someone or something
> in the driver's seat. I can't imagine a material self, it has
> to be mental-- transcendent, in Platonia or the mind.
> It is what causes motion and makes decisions.


No problems here, except that there is no physical brain in Platonia,
nor really (primitive) physical brain on earth, unless you redefine
"physical" explicitly through the coherence conditions on the possible
computations/dreams by numbers. Those coherence conditions cannot be
imposed on the theory. They have to be extracted from the logics of
(machine) self-reference.

> Platonia always rules !

OK, but like Plotinus and the neoplatonists, even Platonia is just a
"servant of God" or an "emanation of God", who or which is the reason
why Platonia "exists".
The advantage of comp is that it explains the origin of the "three
gods" from arithmetic, in the sense that almost all numbers will
believe correctly in three "objects/subjects" verifying most
discourses made about them by mystics and open-minded Indian,
Chinese and Greek rationalists. You can take a look at my "Plotinus" paper
for more on this.
Like the neoplatonists, comp leads to a form of platonist
pythagoreanism.

My main point is not a defense of that idea, but that such a theory
(mainly comp + the classical theory of knowledge) is empirically testable.
It is hard to imagine a more testable theory, as the whole of physics
is derivable from arithmetic in a precise way. Only local geographies
and local histories are not derivable, not even by a god.

Bruno
http://iridia.ulb.ac.be/~marchal/



William R. Buckley

unread,
Sep 25, 2012, 3:24:46 PM9/25/12
to everyth...@googlegroups.com
Roger:

Please then describe for us, in however painstaking detail, that
model of consciousness which you hold, and your means
of determining intelligence. That is, present for us in
clear text your measures; the waving of hands is specifically
disallowed as an offering of an answer to this challenge.

wrb

meekerdb

unread,
Sep 25, 2012, 3:33:39 PM9/25/12
to everyth...@googlegroups.com
On 9/25/2012 12:24 PM, William R. Buckley wrote:
From: everyth...@googlegroups.com [mailto:everything-
> li...@googlegroups.com] On Behalf Of Roger Clough
> Sent: Tuesday, September 25, 2012 5:26 AM
> To: everything-list
> Subject: Can a computer make independent choices ?
> 
> Hi Stephen P. King
> 
> I don't deny that a computer can optimize itself,
> but I deny that the operation is autonomous,
> meaning independent, for ultimately it is software
> dependent, using a program written by an outsider.
> True intelligence and true consciousness must be
> to whatever extent possible independent of outside
> help or perspective.

Which just means the learned component of behavior should be big compared to the built-in component.  Just as we don't think of animals as intelligent when they simply follow instincts but we do think them intelligent when they learn things.

Brent

Stephen P. King

unread,
Sep 25, 2012, 6:30:02 PM9/25/12
to everyth...@googlegroups.com
On 9/25/2012 8:26 AM, Roger Clough wrote:
> Hi Stephen P. King
>
> I don't deny that a computer can optimize itself,
> but I deny that the operation is autonomous,
> meaning independent, for ultimately it is software
> dependent, using a program written by an outsider.

Hi Roger,

    Please think a while about what "independence" means here. How
could you even know of the existence of a thing that is completely
independent of you? Autonomy, independence, etc. are "relative" terms in
the sense that there is always an implied "ideal" condition and/or
context within which we can define them and all of their "weakened" versions.

> True intelligence and true consciousness must be
> to whatever extent possible independent of outside
> help or perspective.

Sure, but is that even possible given the necessary requirements of
consciousness? Does consciousness need to have as its object more than
just itself? How does even a "consistent solipsist" know that it exists?

>
> Isn't the self 1p ? not sure.

The self is 1p, by definition.

Stephen P. King

unread,
Sep 25, 2012, 6:44:28 PM9/25/12
to everyth...@googlegroups.com
    Simple prejudice explains that...

Brian Tenneson

unread,
Sep 25, 2012, 6:48:45 PM9/25/12
to everyth...@googlegroups.com
So suppose there is a choice to be made, A or B. Is there software that enables the computer to independently choose A or B?
What about a neural network of many "nodes" and "connections" that has been through many epochs to the point where its outputs perfectly resemble pseudorandom number sequences?  Or put more simply, if its behavior cannot be predicted, does that make it independent?
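As a small aside on that last question: determinism and practical unpredictability are compatible, as even a three-line chaotic map shows (the map and parameters here are my own arbitrary illustrative choices, standing in for the trained network):

```python
# The logistic map x -> r*x*(1-x) is fully deterministic, yet at r = 4
# it is chaotic: orbits from nearly identical seeds separate quickly,
# so predicting it in practice requires knowing the seed exactly.

def logistic_orbit(x, r=4.0, n=40):
    orbit = []
    for _ in range(n):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit

a = logistic_orbit(0.2)
b = logistic_orbit(0.2000001)  # almost the same seed...
print(a == logistic_orbit(0.2))  # same seed, same orbit: deterministic
print(abs(a[-1] - b[-1]))        # ...but the two orbits have diverged
```

So "its behavior cannot be predicted" does not by itself settle whether the system is independent of its program.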


meekerdb

unread,
Sep 25, 2012, 6:48:44 PM9/25/12
to everyth...@googlegroups.com
Do you have definition of intelligence that doesn't depend on learning?

Brent

Stephen P. King

unread,
Sep 25, 2012, 7:14:23 PM9/25/12
to everyth...@googlegroups.com

    No. Intelligence is learning-dependent.

Jason Resch

unread,
Sep 26, 2012, 12:38:37 AM9/26/12
to everyth...@googlegroups.com
Bruno,

I am curious, what are the three gods?

Are these explained in your Plotinus paper?


> verifying most discourses made about them by mystics and open minded
> rationalists Indian, Chinese and Greeks. You can take a look at my
> "Plotinus" paper for more on this.
> Like the neoplatonists, comp leads to a form of platonist
> pythagoreanism.
>
> My main point is not a defense of that idea, but that such theory
> (mainly comp + classical theory of knowledge) is empirically testable.
> It is hard to imagine a more testable theory, as the whole of
> physics is derivable from arithmetic in a precise way. Only local
> geographies and local histories are not derivable, not even by a god.

Do you consider different geographies to include different places with
different particles or different dimensions of spacetime, or do you
think comp implies a single physics for all observers, one like that
of our standard model?

Jason

Bruno Marchal

unread,
Sep 26, 2012, 4:18:17 AM9/26/12
to everyth...@googlegroups.com

On 26 Sep 2012, at 00:30, Stephen P. King wrote:

> On 9/25/2012 8:26 AM, Roger Clough wrote:
>> Hi Stephen P. King
>>
>> I don't deny that a computer can optimize itself,
>> but I deny that the operation is autonomous,
>> meaning independent, for ultimately it is software
>> dependent, using a program written by an outsider.
>
> Hi Roger,
>
> Please think a while about that "independence" means here. How
> could you even know of the existence of a think that is completely
> independent of you? Autonomy, independence, etc. are "relative"
> terms in the sense that there is always an implied "ideal" condition
> and/or context that we can define them and all of their "weakened"
> versions.
>
>> True intelligence and true consciousness must be
>> to whatever extent possible independent of outside
>> help or perspective.
>
> Sure, but is that even possible given the necessary requirements
> of consciousness? Does consciousness need to have as its object more
> than just itself? How does even a "consistent solipsist" know that
> it exists?
>
>>
>> Isn't the self 1p ? not sure.
>
> The self is 1p, by definition.

Hmm.... The self obtained by the Dx = "xx" method is entirely 3p, and
is the one usually denoted by Gödel's predicate: Bp.

To get the 1p, we connect it to truth, which makes sense as Bp -> p,
although true (trivially, as we limit ourselves to ideally correct
machines), is not provable by the machine, so Bp & p defines a new modal
box, having an arithmetical interpretation, but no longer definable or
representable in arithmetic. That is the 1p. As it has no name and no
representation, it acts like a little god; and it plays the role of
the inner God in the arithmetical interpretation of Plotinus.

Bruno

http://iridia.ulb.ac.be/~marchal/



Bruno Marchal

unread,
Sep 26, 2012, 4:33:49 AM9/26/12
to everyth...@googlegroups.com
Yes.
The three (neoplatonist) gods are:

1) the outer god, or truth, for Plato. It is simple, but has no name,
nor description. With comp, we can take arithmetical truth.
2) the Noùs, or Platonia, the realm of the ideas, or the intelligible.
In arithmetic it is played by Bp, and it splits into what the machine
can say about it, G, and what is true about it, G*.
3) the inner god. It is the conjunct of the two preceding gods:
provability and truth: Bp & p. It has no name, but acts like the
machine. It is Plato's universal soul, the Theaetetus knower. In
arithmetic, it is played by the modal box of the S4Grz modal logic.

Then you have the intelligible matter (Bp & Dt), and the sensible
matter (Bp & Dt & p). The Dt condition makes it into a probability
one on the computations (obtained by restricting the arithmetical
interpretation of the sentence letters p, q, ..., to the sigma_1
sentences; this translates comp in arithmetic).

More on this when I have more time. Someday I will give you the
enunciation of Solovay's theorem, which is the key here.




>
>
>> verifying most discourses made about them by mystics and open
>> minded rationalists Indian, Chinese and Greeks. You can take a look
>> at my "Plotinus" paper for more on this.
>> Like the neoplatonists, comp leads to a form of platonist
>> pythagoreanism.
>>
>> My main point is not a defense of that idea, but that such theory
>> (mainly comp + classical theory of knowledge) is empirically
>> testable.
>> It is hard to imagine a more testable theory, as the whole of
>> physics is derivable from arithmetic in a precise way. Only local
>> geographies and local histories are not derivable, not even by a god.
>
> Do you consider different geographies to include different places
> with different particles or different dimensions of space time, or
> do you think comp implies a single physics for all observers. One
> like that of our standard model?

Comp implies the same physics for all universal machines. Physics is
really a collection of theorems in elementary arithmetic, or of true
but unprovable (by a fixed machine) arithmetical sentences (but this
will concern sensible matter more than intelligible matter). This
helps to separate the quanta and the qualia. If the mass of the
electron is not derivable from arithmetic, it will mean that it is
geographical, and that we can access physical realities where
electrons have another mass. Thanks to S4Grz1, Z1* and X1*, we know
already that physics does not collapse into classical logic. In such a
case, physics would have been shown trivial, and all "physical laws"
would be local, or geographical.

Bruno

http://iridia.ulb.ac.be/~marchal/



Roger Clough

unread,
Sep 26, 2012, 6:17:59 AM9/26/12
to everything-list
Hi Bruno Marchal

I'm still trying to digest it, but Leibniz' principles that

a) every explanation is a cause,

and

b) every substance can be causative

and

c) every substance is alive (and presumably intelligent)


Allow the possibility of computers being conscious.
And alive. And intelligent. At least in the Leibnizian sense
that all substances are alive, etc. I had been thinking in
conventional ways that according to Leibniz are wrong.


Roger Clough, rcl...@verizon.net
9/26/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Bruno Marchal
Receiver: everything-list
Time: 2012-09-25, 11:20:17

Bruno Marchal

unread,
Sep 26, 2012, 6:48:13 AM9/26/12
to everyth...@googlegroups.com
Hi Roger Clough,

> Hi Bruno Marchal
>
> I'm still trying to digest it, but Leibniz' principles that
>
> a) every explanation is a cause,
>
> and
>
> b) every substance can be causative
>
> and
>
> c) every substance is alive (and presumably intelligent)
>
>
> Allow the possibility of computers being conscious.
> And alive. And intelligent. At least in the Leibnizian sense
> that all substances are alive, etc. I had been thinking in
> conventional ways that according to Leibniz are wrong.

I have not much time to comment. If Leibniz helped you to open your
mind to the idea that computers can think, that is good news (for you
and Leibniz, and our fellows the machines).

Note that "computers can think" is an abbreviation for "a computer (the
material object) can support a thinking entity relatively to some
other universal environment". The material object does not really think
per se, be it brain or computer, nor does it really exist at the base
level. It is more a projected information pattern.

I try to avoid the term "substance", as in the West it often means
"primitive matter", but if you use it in the Greek sense (hypostasis),
as I think you do, I can make some sense of what you say. To be honest,
I am not sure "alive" or "causative" have clear referents. Those are
epistemological concepts. Yet I make sense of the idea that
explanations have a causal feature, but I can't explain what I mean by
this now.
Basically you can see a program as a cause for a universal machine to
act in some way, like the sign put on the Golem's forehead :)

You might elaborate on how Leibniz made your mind change on this
capital point.

Bruno

Stephen P. King

unread,
Sep 26, 2012, 9:10:37 AM9/26/12
to everyth...@googlegroups.com
On 9/26/2012 4:18 AM, Bruno Marchal wrote:
>>>
>>> Isn't the self 1p ? not sure.
>>
>> The self is 1p, by definition.
>
> Hmm.... The self obtained by the Dx = "xx" method is entirely 3p, and
> is the one usually denoted by Gödel's predicate: Bp.
>
> To get the 1p, we connect it to truth, which makes sense as Bp -> p,
> although true (trivially, as we limit ourselves to ideally correct
> machines), is not provable by the machine. So Bp & p defines a new modal
> box, having an arithmetical interpretation, but no more definable or
> representable in arithmetic. That is the 1p. As it has no name and no
> representation, it acts like a little god; and it plays the role of
> the inner God in the arithmetical interpretation of Plotinus.
Hi Bruno,

Your remark makes sense. I was only considering the inner aspect. I
will disagree a tiny bit about it having no representation: it has a
representation to itself, but this is just its automorphism, which
makes the representation vanish. It is the "homunculus" without regress.
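The "Dx = 'xx'" method quoted above is the classical diagonalization behind mechanical self-reference. A toy sketch of it (my own illustration, assuming nothing beyond standard Python): D applied to a program text yields that program applied to its own quotation, and feeding D a suitably written program produces a text that reproduces itself under evaluation.

```python
def D(x):
    # the diagonalization: apply the code denoted by x to x's own quotation
    return f"({x})({x!r})"

# a program that, handed its own quotation s, rebuilds its complete source
prog = "lambda s: ('(' + s + ')(' + repr(s) + ')')"

self_rep = D(prog)            # the fixed point: a text that describes itself
print(eval(self_rep) == self_rep)   # -> True: evaluating it reproduces it
```

This 3p fixed point is the mechanical analogue of the Bp self; the 1p knower (Bp & p) has, as the quoted passage notes, no such arithmetical representation.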

Jason Resch

unread,
Sep 26, 2012, 6:02:34 PM9/26/12
to everyth...@googlegroups.com
Thank you.  I look forward to this.
 







verifying most discourses made about them by Indian, Chinese and Greek mystics and open-minded rationalists. You can take a look at my "Plotinus" paper for more on this.
Like the neoplatonists, comp leads to a form of platonist pythagoreanism.

My main point is not a defense of that idea, but that such a theory (mainly comp + the classical theory of knowledge) is empirically testable.
It is hard to imagine a more testable theory, as the whole of physics is derivable from arithmetic in a precise way. Only local geographies and local histories are not derivable, not even by a god.

Do you consider different geographies to include different places with different particles or different dimensions of spacetime, or do you think comp implies a single physics for all observers, one like that of our standard model?

Comp implies the same physics for all universal machines. Physics is really a collection of theorems in elementary arithmetic, or of true but unprovable (by a fixed machine) arithmetical sentences (but this will concern sensible matter more than intelligible matter). This helps to separate the quanta and the qualia. If the mass of the electron is not derivable from arithmetic, it will mean that it is geographical, and that we can access physical realities where electrons have other masses.

So does comp provide any hints as to which aspects of our local universe should be universal and which are geographical? 
 
Thanks to S4Grz1, Z1* and X1*, we know already that physics does not collapse into classical logic. In such a case, physics would have been shown trivial, and all "physical laws" would be local, or geographical.

It was not 100% clear to me what you meant.  Did you mean that we already know all physical laws are local/geographical, or that we already know that all physical laws are not local/geographical?

Thanks,

Jason


Bruno Marchal

unread,
Sep 27, 2012, 4:17:08 AM9/27/12
to everyth...@googlegroups.com
On 27 Sep 2012, at 00:02, Jason Resch wrote:



On Wed, Sep 26, 2012 at 3:33 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:


More on this when I have more time. Someday I will give you the statement of Solovay's theorem, which is the key here.

Thank you.  I look forward to this.

OK. Nice.


 







verifying most discourses made about them by Indian, Chinese and Greek mystics and open-minded rationalists. You can take a look at my "Plotinus" paper for more on this.
Like the neoplatonists, comp leads to a form of platonist pythagoreanism.

My main point is not a defense of that idea, but that such a theory (mainly comp + the classical theory of knowledge) is empirically testable.
It is hard to imagine a more testable theory, as the whole of physics is derivable from arithmetic in a precise way. Only local geographies and local histories are not derivable, not even by a god.

Do you consider different geographies to include different places with different particles or different dimensions of spacetime, or do you think comp implies a single physics for all observers, one like that of our standard model?

Comp implies the same physics for all universal machines. Physics is really a collection of theorems in elementary arithmetic, or of true but unprovable (by a fixed machine) arithmetical sentences (but this will concern sensible matter more than intelligible matter). This helps to separate the quanta and the qualia. If the mass of the electron is not derivable from arithmetic, it will mean that it is geographical, and that we can access physical realities where electrons have other masses.

So does comp provide any hints as to which aspects of our local universe should be universal and which are geographical? 


Yes, as the logic of probability one for observation (given by S4Grz1, and/or the X and Z logics) already provides an arithmetical quantization explaining why the observables cannot be boolean, and are quantum-like, with a formal MWI aspect. It is open whether this is enough to explain why the physical universe looks like a quantum computer, but all the evidence goes in that direction, including the reason why the bottom is linear and symmetrical. The physical is quantum-like, even "quantum group"-like.
The hamiltonians might be more geographical, perhaps. Here some ASSA might even play some Bayesian-like role.
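A concrete way to see "the observables cannot be boolean" (my own sketch, not code from the thread): in the lattice of closed subspaces of a Hilbert space, with intersection as meet and span as join, the distributive law fails, and that failure is the signature of quantum logic. The plane R^2 already suffices; the encoding below (tagged slopes) is purely illustrative.

```python
# subspaces of R^2: "0" (the origin), ("line", slope) with slope None for
# the y-axis, and "R2" (the whole plane)

def meet(a, b):
    # intersection of two subspaces
    if a == b: return a
    if a == "R2": return b
    if b == "R2": return a
    return "0"                # two distinct lines meet only at the origin

def join(a, b):
    # smallest subspace containing both
    if a == b: return a
    if a == "0": return b
    if b == "0": return a
    return "R2"               # two distinct lines span the plane

X, Y, Z = ("line", 0), ("line", None), ("line", 1)

lhs = meet(X, join(Y, Z))             # X ∧ (Y ∨ Z) = X ∧ R2 = X
rhs = join(meet(X, Y), meet(X, Z))    # (X ∧ Y) ∨ (X ∧ Z) = 0 ∨ 0 = 0
print(lhs == rhs)                     # -> False: the lattice is not Boolean
```

So "not boolean" here means the observables form a non-distributive lattice, not that they cannot be represented at all.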




 
Thanks to S4Grz1, Z1* and X1*, we know already that physics does not collapse into classical logic. In such a case, physics would have been shown trivial, and all "physical laws" would be local, or geographical.

It was not 100% clear to me what you meant.  Did you mean that we already know all physical laws are local/geographical,

No. We already know that there are physical laws that are NOT local/geographical, mainly the quantum principles. Unfortunately, complex open problems abound. It is the price of translating the mind-body problem directly into arithmetic. We get a complex problem in math (the contrary would have been astonishing, though).


or that we already know that all physical laws are not local/geographical?

That's it.



Thanks,

You are welcome,

Bruno


Roger Clough

unread,
Sep 27, 2012, 7:24:14 AM9/27/12
to everything-list
Hi Bruno Marchal

I was thinking of a computer as a monad,
but whether it can think or not would
have to be an assumption (that it contains
an intellect). I forgot that inanimate matter
does not have an intellect. So I have to retract
that statement. Sorry.

This may be another mistake, or be trivial, as I am
not a mathematician, but studying his causation
theory again suggests that Leibniz's preestablished
harmony might be a special case of the Turing machine. It would be
more applicable to man and life in that
Leibniz allows for Aristotle's end-causation as well
as the traditional "efficient causation".


Roger Clough, rcl...@verizon.net
9/27/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Bruno Marchal
Receiver: everything-list
Time: 2012-09-26, 06:48:13
Subject: Re: WHOA! A reassessment of my position that computers cannot be alive, conscious, or intelligent

Stephen P. King

unread,
Sep 27, 2012, 9:05:26 AM9/27/12
to everyth...@googlegroups.com
On 9/27/2012 4:17 AM, Bruno Marchal wrote:
>> So does comp provide any hints as to which aspects of our local
>> universe should be universal and which are geographical?
>
>
> Yes, as the logic of probability one for observation (given by S4Grz1,
> and/or the X and Z logics) already provides an arithmetical
> quantization explaining why the observables cannot be boolean, and are
> quantum-like, with a formal MWI aspect. It is open whether this is
> enough to explain why the physical universe looks like a quantum
> computer, but all the evidence goes in that direction, including the
> reason why the bottom is linear and symmetrical. The physical is
> quantum-like, even "quantum group"-like.
> The hamiltonians might be more geographical, perhaps. Here some ASSA
> might even play some Bayesian-like role.
>
>
Dear Bruno,

    "Observables cannot be Boolean": what does this mean? Is this
different from "observables can be represented by Boolean algebras"? A
1p vs. 3p dichotomy?

Bruno Marchal

unread,
Sep 27, 2012, 12:37:38 PM9/27/12
to everyth...@googlegroups.com

On 27 Sep 2012, at 13:24, Roger Clough wrote:

> Hi Bruno Marchal
>
> I was thinking of a computer as a monad,
> but whether it can think or not would
> have to be an assumption (that it contains
> an intellect).

I don't think you have to assume this, unless you propose some magical
theory of the intellect. Computers, notably the Löbian computers, already
have an intellect, making it possible for them to reason and to access
the intelligible realm. They can't avoid having beliefs.

And, if you agree with Theaetetus definition of knowledge, and
Plotinus definition of the universal soul, computers have a soul too.




> I forgot that inanimate matter
> does not have an intellect.

I agree with this. With comp, there may be inanimate numbers, but
no primitively real inanimate matter. That is a dreamed projection.



> So I have to retract
> that statement. Sorry.

I knew it was too beautiful to be true, but maybe you will change
your mind again. This illustrates that you search, and have no
certainty in the matter, and that is the wise (scientific, doubting)
attitude.


>
> This may be another mistake or be trivial, as I am
> not a mathematician,but studying his causation
> theory again suggests that Leibniz's preestablished
> harmony might be a
> special case of the Turing machine. It would be
> more applicable to man and life in that
> Leibniz allows for Aristotle's end-causation as well
> as the traditional 'efficient causation".

I think comp implies a sort of end-causation too, but it has weird
properties. It is a complex subject matter. I have to think more on
that. Even Darwinian evolution can be said to have an end-causation,
from the points of view of the survivors. But the notion of
"causation" is itself fuzzy, especially in the "natural" realm.

Bruno

Evgenii Rudnyi

unread,
Sep 29, 2012, 8:43:39 AM9/29/12
to everyth...@googlegroups.com
On 24.09.2012 18:23 meekerdb said the following:
Dear Bruno and Brent,

Frankly speaking, I do not quite understand your answers. When I try to
convert your thoughts into some guidelines for engineers developing
robots, I get only something like the following.

1) When you make your design, do not care about consciousness, just
implement functions required.

2) When a robot is ready, it may have consciousness. We have no clue
how to check whether it has it, but you must consider the ethical
implications (say, shutting a robot down may be equivalent to murder).

Evgenii

P.S. In my view, 1) and 2) imply epiphenomenalism for consciousness.

Bruno Marchal

unread,
Sep 29, 2012, 10:29:23 AM9/29/12
to everyth...@googlegroups.com
If consciousness is epiphenomenal, how could matter be explained
through a theory of consciousness/first person, as this is made
obligatory when we assume that we are machines?

I remind you that things go in this way, if we are machine:

number ===> consciousness ===> matter

(and only then: matter ===> human consciousness ===> human notion of
number. That might explain the confusion.)

I assume some basic understanding of the FPI and the UDA here. (FPI =
first person indeterminacy).

Bruno



http://iridia.ulb.ac.be/~marchal/



meekerdb

unread,
Sep 29, 2012, 3:28:07 PM9/29/12
to everyth...@googlegroups.com
On 9/29/2012 5:43 AM, Evgenii Rudnyi wrote:
I have understood Brent in such a way that when engineers develop
a robot they must just care about the functionality to achieve, and
they can ignore consciousness altogether. Whether it appears in the
robot or not is not the business of engineers. Do you agree
with such a statement or not?

In my defense, I only said that the engineers could develop
artificial intelligences without considering consciousness.  I didn't
say they *must* do so, and in fact I think they are ethically bound
to consider it.  John McCarthy has already written on this years ago.
And it has nothing to do with whether supervenience or comp is true.
In either case an intelligent robot is likely to be a conscious being
and ethical considerations arise.



Dear Bruno and Brent,

Frankly speaking, I do not quite understand your answers. When I try to convert your thoughts into some guidelines for engineers developing robots, I get only something like the following.

1) When you make your design, do not care about consciousness, just implement functions required.

Where did I say that? Don't paraphrase; quote.

Brent

Evgenii Rudnyi

unread,
Sep 29, 2012, 3:36:22 PM9/29/12
to everyth...@googlegroups.com
On 29.09.2012 21:28 meekerdb said the following:
> On 9/29/2012 5:43 AM, Evgenii Rudnyi wrote:
>>>>> I have understood Brent in such a way that when engineers
>>>>> develop a robot they must just care about the functionality to
>>>>> achieve, and they can ignore consciousness altogether. Whether it
>>>>> appears in the robot or not is not the business of
>>>>> engineers. Do you agree with such a statement or not?
>>>
>>> In my defense, I only said that the engineers could develop
>>> artificial intelligences without considering consciousness. I
>>> didn't say they *must* do so, and in fact I think they are
>>> ethically bound to consider it. John McCarthy has already
>>> written on this years ago. And it has nothing to do with whether
>>> supervenience or comp is true. In either case an intelligent
>>> robot is likely to be a conscious being and ethical
>>> considerations arise.
>>>
>>
>>
>> Dear Bruno and Brent,
>>
>> Frankly speaking, I do not quite understand your answers. When I try
>> to convert your thoughts to some guidelines for engineers
>> developing robots, I get only something like as follows.
>>
>> 1) When you make your design, do not care about consciousness, just
>> implement functions required.
>
> Where did I say that? Don't paraphrase; quote.

It might well be that I have interpreted your words incorrectly. Sorry
if this is the case.

Evgenii

P.S. I would say that the text above

>>> In my defense, I only said that the engineers could develop
>>> artificial intelligences without considering consciousness.

belongs to you.

Roger Clough

unread,
Sep 30, 2012, 8:34:16 AM9/30/12
to everything-list
Hi Bruno Marchal

I'm still trying to figure out how numbers and ideas fit
into Leibniz's metaphysics. Little is written about this issue,
so I have to rely on what Leibniz says otherwise about monads.


Previously I noted that numbers could not be monads because
monads constantly change. Another argument against numbers
being monads is that all monads must be attached to corporeal
bodies. So monads refer to objects in the (already) created world,
whose identities persist, while ideas and numbers are not
created objects.

While numbers and ideas cannot be monads, they have to
be entities in the mind, feelings, and bodily aspects
of monads. For Leibniz refers to the "intellect" of human
monads. And similarly, numbers and ideas must be used
in the "fictional" construction of matter-- in the bodily
aspect of material monads, as well as in the construction
of our bodies and brains.


Roger Clough, rcl...@verizon.net
9/30/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Bruno Marchal
Receiver: everything-list
Time: 2012-09-29, 10:29:23
Subject: Re: questions on machines, belief, awareness, and knowledge



Stephen P. King

unread,
Sep 30, 2012, 2:22:03 PM9/30/12
to everyth...@googlegroups.com
On 9/30/2012 8:34 AM, Roger Clough wrote:
> Hi Bruno Marchal
>
> I'm still trying to figure out how numbers and ideas fit
> into Leibniz's metaphysics. Little is written about this issue,
> so I have to rely on what Leibniz says otherwise about monads.
>
>
> Previously I noted that numbers could not be monads because
> monads constantly change. Another argument against numbers
> being monads is that all monads must be attached to corporeal
> bodies. So monads refer to objects in the (already) created world,
> whose identities persist, while ideas and numbers are not
> created objects.
>
> While numbers and ideas cannot be monads, they have to
> be entities in the mind, feelings, and bodily aspects
> of monads. For Leibniz refers to the "intellect" of human
> monads. And similarly, numbers and ideas must be used
> in the "fictional" construction of matter-- in the bodily
> aspect of material monads, as well as the construction
> of our bodies and brains.
Dear Roger,

    Bruno's idea is a form of "Pre-Established Harmony", in that the
"truth" of the numbers is a pre-established ontological primitive.
--
Onward!

Stephen


Roger Clough

unread,
Oct 1, 2012, 10:17:05 AM10/1/12
to everything-list
Hi Stephen P. King

Good idea, but unfortunately monads are not numbers;
numbers will not guide them or replace them.
Monads have to be associated with corporeal bodies down here in
contingia, where crap happens.



Roger Clough, rcl...@verizon.net
10/1/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Stephen P. King
Receiver: everything-list
Time: 2012-09-30, 14:22:03
Subject: Re: Numbers and other inhabitants of Platonia are also inhabitants of monads

Stephen P. King

unread,
Oct 1, 2012, 4:32:54 PM10/1/12
to everyth...@googlegroups.com
On 10/1/2012 10:17 AM, Roger Clough wrote:
> Hi Stephen P. King
>
> Good idea, but unfortunately monads are not numbers,
> numbers will now guide them or replace them.
> Monads have to be associated with corporeal bodies down here in
> contingia, where crap happens.

Hi Roger,

I agree, monads are not numbers. Monads use numbers.

>
>
> Roger Clough,rcl...@verizon.net
--
Onward!

Stephen


Richard Ruquist

unread,
Oct 1, 2012, 4:51:44 PM10/1/12
to everyth...@googlegroups.com
String theory and variable fine-structure measurements across the
universe suggest that the discrete and distinct monads are
enumerable.

Roger Clough

unread,
Oct 2, 2012, 7:08:11 AM10/2/12
to everything-list
Hi Richard Ruquist

Absolutely.


Roger Clough, rcl...@verizon.net
10/2/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Richard Ruquist
Receiver: everything-list
Time: 2012-10-01, 16:51:44
Subject: Re: Numbers and other inhabitants of Platonia are also inhabitants of monads


String theory and variable fine-structure measurements across the
universe suggest that the discrete and distinct monads are
enumerable.
