The world is in the brain


Evgenii Rudnyi

Apr 6, 2013, 1:45:12 PM
to everyth...@googlegroups.com
Fingelkurts, A., Fingelkurts, A., and Neves, C. (2010). “Natural World
Physical, Brain Operational, and Mind Phenomenal Space-Time”. *Physics
of Life Reviews* 7(2): 195-249.

http://scireprints.lu.lv/141/1/Fingelkurts_Space-time_in_Physics_brain_and_mind.pdf

“We would like to discuss the hypothesis that via the brain operational
space-time the mind subjective space-time is connected to otherwise
distant physical space-time reality.”

See Fig 11 where the phenomenal world is in the brain.

Evgenii

Craig Weinberg

Apr 6, 2013, 4:33:00 PM
to everyth...@googlegroups.com
I like that diagram, and I think it's a step in the right direction...

but...

it does not explain why phenomenal consciousness should be considered to resemble space-time. It really doesn't. To the contrary, spatiotemporal memories merge seamlessly with imaginary places and times, or non-places and non-times. When we are sequestered from public interactions, we lose spatial and temporal continuity as daydream dissolves into dream and realism dissipates. If they took the diagram and twisted the top hemisphere 90 degrees... hmm, maybe I will give that a try...

Thanks,
Craig

meekerdb

Apr 6, 2013, 6:46:37 PM
to everyth...@googlegroups.com
On 4/6/2013 10:45 AM, Evgenii Rudnyi wrote:
> Fingelkurts, A., Fingelkurts, A., and Neves, C. (2010). "Natural World Physical, Brain
> Operational, and Mind Phenomenal Space-Time". *Physics of Life Reviews* 7(2): 195-249.
>
> http://scireprints.lu.lv/141/1/Fingelkurts_Space-time_in_Physics_brain_and_mind.pdf
>
> "We would like to discuss the hypothesis that via the brain operational
> space-time the mind subjective space-time is connected to otherwise distant physical
> space-time reality."

Which just says that you can think about things that are far away.

>
> See Fig 11 where the phenomenal world is in the brain.

I don't see anything in this paper to support Craig's "top down" magic. They write:



According to OA framework, the phenomenological architecture of consciousness and the
brain's operational architectonics correspond with one another; and they may also share
ontological identity. If this holds true, then we can make another claim that by
reproducing one architecture we can observe the self-emergence of the other. Then, the
problem of producing man-made "machine" consciousness is the problem of duplicating the
whole level of operational architecture (with its inherent governing laws and mechanisms)
found in the electromagnetic brain field, which directly constitutes the phenomenal level
of brain organization.


which, except for the assumption that only the electromagnetic field is relevant, sounds
just like Bruno's explication of "comp".

Brent

Craig Weinberg

Apr 6, 2013, 7:29:27 PM
to everyth...@googlegroups.com


On Saturday, April 6, 2013 6:46:37 PM UTC-4, Brent wrote:
On 4/6/2013 10:45 AM, Evgenii Rudnyi wrote:
> Fingelkurts, A., Fingelkurts, A., and Neves, C. (2010). "Natural World Physical, Brain
> Operational, and Mind Phenomenal Space-Time". *Physics of Life Reviews* 7(2): 195-249.
>
> http://scireprints.lu.lv/141/1/Fingelkurts_Space-time_in_Physics_brain_and_mind.pdf
>
> "We would like to discuss the hypothesis that via the brain operational
> space-time the mind subjective space-time is connected to otherwise distant physical
> space-time reality."

Which just says that you can think about things that are far way.

>
> See Fig 11 where the phenomenal world is in the brain.

I don't see anything in this paper to support Craig's "top down" magic.

(...and by "top down" magic you mean 'the ordinary capacities with which we participate in this very conversation'.)

Craig

 

Craig Weinberg

Apr 6, 2013, 8:40:03 PM
to everyth...@googlegroups.com
Ok, here's my modified version of Fig 11

http://multisenserealism.files.wordpress.com/2012/01/33ost_diagram.jpg


Craig

On Saturday, April 6, 2013 1:45:12 PM UTC-4, Evgenii Rudnyi wrote:

Evgenii Rudnyi

Apr 7, 2013, 2:54:08 AM
to everyth...@googlegroups.com
On 07.04.2013 02:40 Craig Weinberg said the following:
> Ok, here's my modified version of Fig 11
>
> http://multisenserealism.files.wordpress.com/2012/01/33ost_diagram.jpg
>

I believe that you have misunderstood the paper. The authors
literally believe that the observed 3D world is, geometrically speaking,
in the brain.

See for example

Section 3. Space and time in mind, 3.1. Phenomenal space

"As it was pointed Smythies [333] this phenomenal space may be identical
with some aspect of brain space but not with any aspect of external
physical space. The same idea was explicitly formulated by Searle [334]:
'The brain creates a body image, and pains, like all bodily sensations,
are parts of the body image. The pain-in-the-foot is literally in the
physical space of the brain.'"

This immediately leads to Max Velmans' paradox {"The real skull (as
opposed to the phenomenal skull) is beyond the perceived horizon and
dome of the sky."}, see

http://blog.rudnyi.ru/2012/05/brain-and-world.html

and to some further possible speculations like

'Another researcher, Kuhlenbeck [335] made an even stronger claim,
suggesting that "... physical events and mental events occur in
different space-time systems which have no dimensions in common."

Evgenii


meekerdb

Apr 7, 2013, 1:12:02 PM
to everyth...@googlegroups.com
On 4/6/2013 11:54 PM, Evgenii Rudnyi wrote:
> On 07.04.2013 02:40 Craig Weinberg said the following:
>> Ok, here's my modified version of Fig 11
>>
>> http://multisenserealism.files.wordpress.com/2012/01/33ost_diagram.jpg
>>
>
> I believe that you have understood the paper wrong. The authors literally believe that
> the observed 3D world is geometrically speaking in the brain.

Yes, our 3D model of the world is in our minds (not our brains). It's not "there",
geometrically speaking. Geometry and "there" are part of the model. Dog bites man.

Brent

Evgenii Rudnyi

Apr 7, 2013, 1:20:07 PM
to everyth...@googlegroups.com
On 07.04.2013 19:12 meekerdb said the following:
Well, if you look into the paper, you see that the authors take it literally,
since in neuroscience mind means brain. Mind belongs to philosophy.

Evgenii


Craig Weinberg

Apr 7, 2013, 9:48:25 PM
to everyth...@googlegroups.com


On Sunday, April 7, 2013 2:54:08 AM UTC-4, Evgenii Rudnyi wrote:
On 07.04.2013 02:40 Craig Weinberg said the following:
> Ok, here's my modified version of Fig 11
>
> http://multisenserealism.files.wordpress.com/2012/01/33ost_diagram.jpg
>

I believe that you have understood the paper wrong. The authors
literally believe that the observed 3D world is geometrically speaking
in the brain.

I haven't read the paper yet; I just thought the diagram was a good basis for an MR diagram.
 

See for example

Section 3. Space and time in mind, 3.1. Phenomenal space

"As it was pointed Smythies [333] this phenomenal space may be identical
with some aspect of brain space but not with any aspect of external
physical space. The same idea was explicitly formulated by Searle [334]:
'The brain creates a body image, and pains, like all bodily sensations,
are parts of the body image. The pain-in-the-foot is literally in the
physical space of the brain.'"

This immediately leads to Max Velmans paradox {"The real skull (as
opposed to the phenomenal skull) is beyond the perceived horizon and
dome of the sky."}, see

http://blog.rudnyi.ru/2012/05/brain-and-world.html

and to some further possible speculations like

'Another researcher, Kuhlenbeck [335] made an even stronger claim,
suggesting that "... physical events and mental events occur in
different space-time systems which have no dimensions in common."

Eh, I don't think it makes sense or explains anything to map consciousness to a matrix of positions. What does a flavor or a smell have to do with a location or shape? To me it's pretty obviously our own species' visual bias which compels us to conceive of reality in visual terms. In comparing visual phenomena to sensory experience in general, I can understand visual shapes and tangible objects as categories of experience, but experiences such as taste or emotion cannot be configurations of objects. I don't think that there is any way possible of getting around that, as it seems as self-evident as the impossibility of a square circle. Objects can be dreamed of subjectively or imagined, but subjects cannot appear out of the interactions of gears, regardless of how many gears there are.

Craig

 

Bruno Marchal

Apr 8, 2013, 5:38:44 AM
to everyth...@googlegroups.com
But mind is different from brain. And mind is part of both cognitive
science and theoretical computer science. To identify mind and brain
is possible in some strong non-computationalist theories, but such
theories don't yet exist, and are only speculated about. To confuse
mind and brain is like confusing literature and ink.
Neurophilosophers are usually computationalist and weakly materialist,
and so are basically inconsistent.

Bruno




http://iridia.ulb.ac.be/~marchal/



Craig Weinberg

Apr 9, 2013, 2:48:00 PM
to everyth...@googlegroups.com


On Monday, April 8, 2013 5:38:44 AM UTC-4, Bruno Marchal wrote:

On 07 Apr 2013, at 19:20, Evgenii Rudnyi wrote:

> On 07.04.2013 19:12 meekerdb said the following:
>> On 4/6/2013 11:54 PM, Evgenii Rudnyi wrote:
>>> On 07.04.2013 02:40 Craig Weinberg said the following:
>>>> Ok, here's my modified version of Fig 11
>>>>
>>>> http://multisenserealism.files.wordpress.com/2012/01/33ost_diagram.jpg
>>>>
>>>
>>>
>>>> I believe that you have understood the paper wrong. The authors
>>> literally believe that the observed 3D world is geometrically
>>> speaking in the brain.
>>
>> Yes our 3d model of the world is in our minds (not our brains). It's
>> not "there" geometrically speaking.  Geometry and "there" are part of
>> the model.  Dog bites man.
>
> Well, if you look into the paper, you see that authors take it  
> literally as in neuroscience mind means brain. Mind belongs to  
> philosophy.


But mind is different from brain. And mind is part of both cognitive  
science and theoretical computer science. To identify mind and brain  
is possible in some strong non computationalist theories, but such  
theories don't yet exist, and are only speculated about. To confuse  
mind and brain, is like confusing literature and ink.
Neurophilophers are usually computationalist and weakly materialist,  
and so are basically inconsistent.

If we used a logic-automata type of scheme, then mind and brain would be the same thing. Each bit would be an atomic configuration, and programs would be atomic assemblies. Maybe this makes it easier to see why forms and functions are not the same as sensory experiences, as no pile of logic automata would inspire feelings, flavors, thoughts, etc., but would output behaviors consistent with our expectations for those experiences.
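As a concrete toy version of the "logic automata" idea, where the bits just are the configuration and the update rule is the whole program, here is a minimal sketch of an elementary cellular automaton (Rule 110) in Python. It is purely illustrative and assumes nothing from the paper or from any real hardware.

# Minimal sketch of "the bits are the configuration, the rule is the program":
# an elementary cellular automaton (Rule 110). Purely illustrative.
RULE = 110

def step(cells):
    # New state of each cell from its left/self/right neighbours (wrap-around).
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31 + [1]   # one bit set in a row of 32 cells
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)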

Craig

Evgenii Rudnyi

Apr 9, 2013, 3:19:12 PM
to everyth...@googlegroups.com
On 08.04.2013 11:38 Bruno Marchal said the following:
>
> On 07 Apr 2013, at 19:20, Evgenii Rudnyi wrote:
>
>> On 07.04.2013 19:12 meekerdb said the following:
>>> On 4/6/2013 11:54 PM, Evgenii Rudnyi wrote:
>>>> On 07.04.2013 02:40 Craig Weinberg said the following:
>>>>> Ok, here's my modified version of Fig 11
>>>>>
>>>>> http://multisenserealism.files.wordpress.com/2012/01/33ost_diagram.jpg
>>>>>
>>>>
>>>>
>>>>>
>>>>>
I believe that you have understood the paper wrong. The authors
>>>> literally believe that the observed 3D world is geometrically
>>>> speaking in the brain.
>>>
>>> Yes our 3d model of the world is in our minds (not our brains).
>>> It's not "there" geometrically speaking. Geometry and "there"
>>> are part of the model. Dog bites man.
>>
>> Well, if you look into the paper, you see that authors take it
>> literally as in neuroscience mind means brain. Mind belongs to
>> philosophy.
>
>
> But mind is different from brain. And mind is part of both cognitive
> science and theoretical computer science. To identify mind and brain
> is possible in some strong non computationalist theories, but such
> theories don't yet exist, and are only speculated about. To confuse
> mind and brain, is like confusing literature and ink. Neurophilophers
> are usually computationalist and weakly materialist, and so are
> basically inconsistent.

I guess this is the way science develops. Neuroscientists study the brain
and just take a priori, from the materialist and reductionist paradigm,
that the mind must be in the brain. After that, they write papers
to bring this idea to its logical conclusion. To this end, they seem to
have two options. Either they should say that the 3D visual world is an
illusion (I guess Dennett goes this way) or put phenomenological
consciousness into the brain. Let us see what happens along this way.

The paper is in a way well written. The only flaw that I have seen in it
(one actually irrelevant to the content of the paper) is THE ENTROPY.
Biologists like entropy so much that they use it on any occasion. For
example, from the paper:

"Thus, changes in entropy provide an important window into
self-organization: a sudden increase of entropy just before the
emergence of a new structure, followed by brief period of negative
entropy (or negentropy)."

I have seen that this could be traced to Schrödinger's What is Life?,
reread his chapter on Order, Disorder and Entropy, and made my comments

http://blog.rudnyi.ru/2013/04/schrodinger-disorder-and-entropy.html

Evgenii

meekerdb

Apr 10, 2013, 1:16:48 AM
to everyth...@googlegroups.com
The materialist view is just that the mind is a process in the brain, like a computation
is the process of running a program in a computer. As processes they may be abstracted
from their physical instantiation and are not anywhere, except maybe in Platonia.

> After that, they write papers to bring this idea to the logical conclusion. To this end,
> they seem to have two options. Either they should say that the 3D visual world is
> illusion (I guess, Dennett goes this way)

I think "illusion" has too strong a connotation of being fallacious. I think "model" is more
accurate. So long as we realize that the world we conceptualize is a model, we are not
guilty of a fallacy.

> or put phenomenological consciousness into the brain.

I don't know what this means. That phenomenological consciousness depends on the brain is
empirically well established. But to "put it into" the brain implies making a spatial
placement of an abstract concept.



> Let us see what happens along this way.
>
> The paper in a way is well written. The only flaw (that actually is irrelevant to the
> content of the paper) that I have seen in it, is THE ENTROPY. Biologists like the
> entropy so much that they use it in any occasion. For example from the paper:
>
> “Thus, changes in entropy provide an important window into self-organization: a sudden
> increase of entropy just before the emergence of a new structure, followed by brief
> period of negative entropy (or negentropy).”
>
> I have seen that this could be traced to Schrödinger’s What is Life?,
> reread his chapter on Order, Disorder and Entropy and made my comments
>
> http://blog.rudnyi.ru/2013/04/schrodinger-disorder-and-entropy.html


Still tilting at that windmill?

"A) From thermodynamic tables, the mole entropy of silver at standard conditions S(Ag, cr)
= 42.55 J K-1 mol-1 is bigger than that of aluminum S(Al, cr) = 28.30 J K-1 mol-1. Does it
mean that there is more disorder in silver as in aluminium?"

Yes, there is more disorder in the sense that raising the temperature of a mole of Ag by 1 degree
increases the number of accessible conduction electron states more than
raising the temperature of a mole of Al does.

I agree that disorder is not necessarily a good metaphor for entropy. But dispersal of
energy isn't always intuitively equal to entropy either. Consider dissolving ammonium
nitrate in water. The process is endothermic, so the temperature drops and energy is
absorbed, but the process goes spontaneously because the entropy increases; there are a lot
more microstates accessible in the solution even at the lower temperature.
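A rough way to see why the endothermic dissolution still runs spontaneously is the sign of the Gibbs free energy, dG = dH - T*dS. The sketch below uses approximate textbook values for NH4NO3 dissolving in water at room temperature; the numbers are illustrative assumptions, not figures from the paper or this thread.

# Sketch: why an endothermic dissolution can still be spontaneous.
# Approximate literature values for NH4NO3(s) -> NH4+(aq) + NO3-(aq); illustrative only.
delta_H = 25.7e3     # J/mol, enthalpy of dissolution (endothermic, > 0)
delta_S = 108.0      # J/(mol*K), entropy of dissolution (> 0: more accessible microstates)
T = 298.15           # K

delta_G = delta_H - T * delta_S
print(f"dG = {delta_G / 1000:.1f} kJ/mol")   # about -6.5 kJ/mol: negative, so spontaneous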

Your quote of Arnheim makes me suspect that *he* is the one who has confounded our language.
Receiving information reduces uncertainty; it doesn't necessarily increase order. Chaos
and unpredictability do not "carry a maximum of information". What they do
is allow for a maximum increase of information when they are resolved. Disorder doesn't
provide information - it provides the opportunity for using information, just as ignorance
of what a message will be is a measure of how much information the message will contain
when it removes the ignorance.
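In Shannon's terms: the information a message carries equals the receiver's prior uncertainty (entropy) about it, so an unpredictable source yields more information per message once it is resolved. A minimal sketch with made-up distributions, nothing taken from the thread:

import math

def shannon_entropy(probs):
    # Entropy in bits of a discrete distribution = uncertainty before the message arrives.
    return -sum(p * math.log2(p) for p in probs if p > 0)

prior = [0.25, 0.25, 0.25, 0.25]      # four equally likely possible messages
print(shannon_entropy(prior))         # 2.0 bits of uncertainty = 2.0 bits gained on receipt

skewed = [0.97, 0.01, 0.01, 0.01]     # a highly predictable source
print(shannon_entropy(skewed))        # ~0.24 bits per message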

Brent

Bruno Marchal

Apr 10, 2013, 9:15:09 AM
to everyth...@googlegroups.com
?


Each bit would be an atomic configuration, and programs would be atomic assemblies.

Two apples is not the number two.



Maybe this makes it easier to see why forms and functions are not the same as sensory experiences, as no pile of logic automata would inspire feelings, flavors, thoughts, etc.

That is what we ask you to justify, or to assume explicitly, not to take for granted.

Bruno




but would output behaviors consistent with our expectations for those experiences.









Craig Weinberg

Apr 10, 2013, 9:21:56 AM
to everyth...@googlegroups.com

Models have no presence. The same model can be expressed in any sense modality, so that our ability to conceptualize models is not the same phenomenon as our ability to perceive and participate in the world. Modeling is based on equivalence, and equivalence is part of pattern recognition, so that in order to even conceive of a model, there first would have to be direct perception and participation in a real world. Caring about the world gives you a reason to care about modeling it. If your world is invisible, intangible, and unconscious, then no models are needed and all participation is better served by automatic algorithms.
 

> or put phenomenological consciousness into the brain.

I don't know what this means. That phenomenological consciousness depends on the brain is
empirically well established.

That human consciousness is influenced by the brain is empirically well established. There is enough data from things like hydrocephalus, the recent psilocybin study, and NDEs to cast some doubt even on human-brain dependence in theory. Those exotic possibilities are not necessary, however, to see that there is a great variety of brainless species which nonetheless participate in the world in ways that seem more conscious than non-biological structures.

Think of it this way. If our brain produced phenomenal awareness, then the tissues of the brain would have to be responsible for that - the phenomenal consciousness of the brain would be dependent on the proto-phenomenal consciousness of neuronal sub-brains... otherwise consciousness appears out of nothing, for no particular reason, to live nowhere.

Craig
 

Bruno Marchal

Apr 10, 2013, 9:22:21 AM
to everyth...@googlegroups.com
Which is close to nonsense. Of course it is also very fuzzy. If you
look in the brain, you see neurons; you don't see mind. Leibniz already
knew this, and the pre-Christian mechanists did too.



> After that, they write papers to bring this idea to the logical
> conclusion. To this end, they seem to have two options. Either they
> should say that the 3D visual world is illusion (I guess, Dennett
> goes this way)

This is unclear. You might give a reference. Dennett seems to take
physicalism for granted.

The problem with many of them is that they just seem unaware that the mind-body
problem is quite severe in the weak materialist framework.





> or put phenomenological consciousness into the brain. Let us see
> what happens along this way.
>
> The paper in a way is well written. The only flaw (that actually is
> irrelevant to the content of the paper) that I have seen in it, is
> THE ENTROPY. Biologists like the entropy so much that they use it in
> any occasion. For example from the paper:
>
> “Thus, changes in entropy provide an important window into self-
> organization: a sudden increase of entropy just before the
> emergence of a new structure, followed by brief period of negative
> entropy (or negentropy).”
>
> I have seen that this could be traced to Schrödinger’s What is
> Life?, reread his chapter on Order, Disorder and Entropy and made my
> comments
>
> http://blog.rudnyi.ru/2013/04/schrodinger-disorder-and-entropy.html

Not too much problem with this, but Schroedinger's book is also at the
origin of molecular biology, and is full of interesting insight. His
philosophy of mind is inspired by Hinduism, and in my opinion, it is
less wrong than material reductionism.

Craig Weinberg

Apr 10, 2013, 9:32:46 AM
to everyth...@googlegroups.com


http://www.youtube.com/watch?v=YDCwrbqHfTM

The Future of Computing -- Reuniting Bits and Atoms

Neil Gershenfeld talking about using digital fabrication to replace digital computation.


Each bit would be an atomic configuration, and programs would be atomic assemblies.

Two apples is not the number two.

With logic automata, the number two would not be necessary... matter would embody its own programs.
 



Maybe this makes it easier to see why forms and functions are not the same as sensory experiences, as no pile of logic automata would inspire feelings, flavors, thoughts, etc.

That is what we ask you to justify, or to assume explicitly, not to take for granted.

The fact that logic automata unite form and function as a single process should show that there is no implicit aesthetic preference. A program is a functional shape whose relation with other functional shapes is defined entirely by position. There is no room for, nor plausible emergence of, any kind of aesthetic difference between functions we would assume are associated with sight or sound, thought or feeling. Logic automata prove that none of these differences are meaningful in a functionalist universe.

Craig
 

Evgenii Rudnyi

Apr 10, 2013, 4:18:58 PM
to everyth...@googlegroups.com
On 10.04.2013 07:16 meekerdb said the following:
> On 4/9/2013 12:19 PM, Evgenii Rudnyi wrote:

...

>> I have seen that this could be traced to Schrödinger's What is
>> Life?, reread his chapter on Order, Disorder and Entropy and made
>> my comments
>>
>> http://blog.rudnyi.ru/2013/04/schrodinger-disorder-and-entropy.html
>
>>
>
> Still tilting at that windmill?
>
> "A) From thermodynamic tables, the mole entropy of silver at standard
> conditions S(Ag, cr) = 42.55 J K-1 mol-1 is bigger than that of
> aluminum S(Al, cr) = 28.30 J K-1 mol-1. Does it mean that there is
> more disorder in silver as in aluminium?"
>
> Yes, there is more disorder in the sense that raising the temperature
> of a mole of Ag 1deg increases the number of accessible conduction
> electron states available more than does raising the temperature of a
> mole of Al does.
>
> I agree that disorder is not necessarily a good metaphor for entropy.
> But dispersal of energy isn't always intuitively equal to entropy
> either. Consider dissolving ammonium nitrate in water. The process is
> endothermic, so the temperature drops and energy is absorbed, but
> the process goes spontaneously because the entropy increases; the are
> a lot more microstates accessible in the solution even at the lower
> temperature.
>

You'd better look at what biologists say. For example:

http://www.icr.org/article/270/

"and that the idea of their improving rather than harming organisms is
contrary to the Second Law of Thermodynamics, which tells us that matter
and energy naturally tend toward greater randomness rather than greater
order and complexity."

Do you like it?

Evgenii

meekerdb

Apr 10, 2013, 4:34:46 PM
to everyth...@googlegroups.com
You're referring me to an article on biological evolution by a guy with a Master of Arts,
on a Creationist website??

Do YOU like it?

Brent

Terren Suydam

Apr 10, 2013, 4:36:47 PM
to everyth...@googlegroups.com
This is close to an idea I have been mulling over for some time... that the source of the phenomenological feeling of pleasure is in some way identified with decreases in entropy, and pain is in some way identified with increases in entropy. It is a way to map the subjective experience of pain and pleasure to a 3p description of, say, a nervous system.  Damage to the body (associated with pain) can usually (always?) be characterized in terms of a sudden increase in entropy of the body. Perhaps this is also true in the mental domain, so that emotional loss (or e.g. embarrassment) can also be characterized as an increase in entropy of one's mental models, but this is pure speculation. The case is even harder to make with pleasure. It would be weird if it were true, but so far it is the only way I know of to map pleasure and pain onto anything objective at all.

Terren



Evgenii Rudnyi

Apr 10, 2013, 4:38:53 PM
to everyth...@googlegroups.com
On 10.04.2013 22:34 meekerdb said the following:
You will find a similar sentence on an evolutionary website as well; the
statement will be the same. Look, for example, at

Annila, A. & S.N. Salthe (2010) Physical foundations of evolutionary
theory. Journal of Non-equilibrium Thermodynamics 35: 301-321,
http://dx.doi.org/10.1515/jnetdy.2010.019

Evgenii

Evgenii Rudnyi

Apr 10, 2013, 4:40:30 PM
to everyth...@googlegroups.com
On 10.04.2013 22:36 Terren Suydam said the following:
> This is close to an idea I have been mulling over for some time...
> that the source of the phenomenological feeling of pleasure is in
> some way identified with decreases in entropy, and pain is in some
> way identified with increases in entropy. It is a way to map the
> subjective experience of pain and pleasure to a 3p description of,
> say, a nervous system. Damage to the body (associated with pain) can
> usually (always?) be characterized in terms of a sudden increase in
> entropy of the body. Perhaps this is also true in the mental domain,
> so that emotional loss (or e.g. embarrassment) can also be
> characterized as an increase in entropy of one's mental models, but
> this is pure speculation. The case is even harder to make with
> pleasure. It would be weird if it were true, but so far it is the
> only way I know of to map pleasure and pain onto anything objective
> at all.
>

This was my point. The entropy in your statement has nothing to do with
the thermodynamic entropy and the Second Law.

Evgenii

Telmo Menezes

Apr 10, 2013, 4:52:07 PM
to everyth...@googlegroups.com
On Wed, Apr 10, 2013 at 10:36 PM, Terren Suydam <terren...@gmail.com> wrote:
> This is close to an idea I have been mulling over for some time... that the
> source of the phenomenological feeling of pleasure is in some way identified
> with decreases in entropy, and pain is in some way identified with increases
> in entropy. It is a way to map the subjective experience of pain and
> pleasure to a 3p description of, say, a nervous system. Damage to the body
> (associated with pain) can usually (always?) be characterized in terms of a
> sudden increase in entropy of the body. Perhaps this is also true in the
> mental domain, so that emotional loss (or e.g. embarrassment) can also be
> characterized as an increase in entropy of one's mental models, but this is
> pure speculation. The case is even harder to make with pleasure. It would be
> weird if it were true, but so far it is the only way I know of to map
> pleasure and pain onto anything objective at all.

Hi Terren,

Interesting idea, but I can think of a number of counter examples:
cold/freezing, boredom, the rush of taking risks, masochism (for some
people), the general preference for freedom as opposed to being under
control, booze, ....

I suspect life is just meaningless from the outside. I'd say that pain
and pleasure are fine-tuned by evolution to maximise the
survivability of species in an environment that is largely also
generated by evolution. It's a strange loop.

Evgenii Rudnyi

Apr 10, 2013, 4:55:19 PM
to everyth...@googlegroups.com
On 10.04.2013 22:52 Telmo Menezes said the following:

...

> I suspect life is just meaningless from the outside. I'd say that
> pain and pleasure are fine-tunned by evolution to maximise the
> survivability of species in an environment that is largely also
> generated by evolution. It's a strange loop.
>

What difference do you see when one replaces evolution in your sentence
with god?

Evgenii

meekerdb

Apr 10, 2013, 4:58:43 PM
to everyth...@googlegroups.com
That wasn't the question. The question was: do you like it, do you believe it, can you
support it with your own arguments?

> Such a statement will be the same. Look for example at
>
> Annila, A. & S.N. Salthe (2010) Physical foundations of evolutionary theory. Journal of
> Non-equilibrium Thermodynamics 35: 301-321, http://dx.doi.org/10.1515/jnetdy.2010.019

Which is behind a paywall ($224), and says nothing like that in the abstract.

To say that mutations improving organisms are contrary to the 2nd law is wrong in so many
ways I hardly know where to start. First, the 2nd law is an approximate law that
expresses a statistical regularity. It doesn't forbid improbable events, even ones that
decrease entropy. Second, there is no teleological measure of "improving" in evolution;
there is only greater or lesser reproduction. And greater reproduction means more living
tissue, which increases the entropy of the whole Sun/Earth/biota system faster - and so is
consistent with the 2nd law. The 2nd law says nothing about randomness vs order or
complexity (ever heard of Bénard convection?).

Brent

>
> Evgenii
>

Terren Suydam

Apr 10, 2013, 5:08:20 PM
to everyth...@googlegroups.com
Hi Telmo,

Yes, those are good counter examples.

But I think to say "pain and pleasure are fine-tuned by evolution..." is a sleight of hand. Pain and pleasure are phenomenological primitives. If evolution created those primitives, how did it do that? By what mechanism?  

Another way to think of this is to acknowledge that pain signals are mediated by special nerves in the nervous system. But what makes those nerves any different from a nerve that carries information about gentle pressure?  You may be able to point to different neuroreceptors used, but then that shifts the question to why different neuroreceptors should result in different characters of experience.

One way out of this is to posit that phenomenological primitives are never "created" but are identified somehow with a particular characterization of an objective state of affairs, the challenge being to characterize the mapping between the objective and the phenomenological. That is my aim with my flawed idea above.

Terren

Telmo Menezes

Apr 10, 2013, 5:21:27 PM
to everyth...@googlegroups.com
A loss in explanatory power.

Craig Weinberg

Apr 10, 2013, 5:26:25 PM
to everyth...@googlegroups.com


On Wednesday, April 10, 2013 4:36:47 PM UTC-4, Terren Suydam wrote:
This is close to an idea I have been mulling over for some time... that the source of the phenomenological feeling of pleasure is in some way identified with decreases in entropy, and pain is in some way identified with increases in entropy. It is a way to map the subjective experience of pain and pleasure to a 3p description of, say, a nervous system.  Damage to the body (associated with pain) can usually (always?) be characterized in terms of a sudden increase in entropy of the body. Perhaps this is also true in the mental domain, so that emotional loss (or e.g. embarrassment) can also be characterized as an increase in entropy of one's mental models, but this is pure speculation. The case is even harder to make with pleasure. It would be weird if it were true, but so far it is the only way I know of to map pleasure and pain onto anything objective at all.

There's no sensation of pain associated with increasing entropy in the brain itself, though. Also, analgesia and anesthesia would be impossible if pain were automatically associated with entropy.

Craig
 


Telmo Menezes

Apr 10, 2013, 5:28:46 PM
to everyth...@googlegroups.com
On Wed, Apr 10, 2013 at 11:08 PM, Terren Suydam <terren...@gmail.com> wrote:
> Hi Telmo,
>
> Yes, those are good counter examples.
>
> But I think to say "pain and pleasure are fine-tuned by evolution..." is a
> sleight of hand. Pain and pleasure are phenomenological primitives. If
> evolution created those primitives, how did it do that? By what mechanism?

Completely agree. I mean pain and pleasure as things that you can
observe with an fMRI machine. As for the 1p experience of pain and
pleasure... wish I knew. I don't think evolution created these
primitives in this latter sense.

> Another way to think of this is to acknowledge that pain signals are
> mediated by special nerves in the nervous system. But what makes those
> nerves any different from a nerve that carries information about gentle
> pressure? You may be able to point to different neuroreceptors used, but
> then that shifts the question to why different neuroreceptors should result
> in different characters of experience.

Yes, I've always been puzzled by that.

> One way out of this to posit that phenomenological primitives are never
> "created" but are identified somehow with a particular characterization of
> an objective state of affairs,

I suspect the same.

> the challenge being to characterize the
> mapping between the objective and the phenomenological. That is my aim with
> my flawed idea above.

Cool. Sorry for not getting what you were saying at first. You still
have to deal with my counter-examples though, I'd say... (forgetting
the evolutionary rant)

Telmo.

meekerdb

Apr 10, 2013, 5:57:36 PM
to everyth...@googlegroups.com
On 4/10/2013 1:36 PM, Terren Suydam wrote:
> This is close to an idea I have been mulling over for some time... that the source of
> the phenomenological feeling of pleasure is in some way identified with decreases in
> entropy, and pain is in some way identified with increases in entropy. It is a way to
> map the subjective experience of pain and pleasure to a 3p description of, say, a
> nervous system.

You will just further muddle the meaning of entropy.


> Damage to the body (associated with pain) can usually (always?) be characterized in
> terms of a sudden increase in entropy of the body.

Consider dribbling some liquid nitrogen on your skin. Hurts, doesn't it? But the entropy
of your body is (locally) reduced. The pain comes from neurons sending signals to your
brain. They use a tiny amount of free energy to do this, which increases the entropy of
your body also. Your brain receives a few bits of information about the pain, which
represent an infinitesimal decrease in entropy if your brain was in a state of uncertainty
about whether your body hurt.
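To put a number on "infinitesimal": by Landauer's bound, one bit corresponds to at most k_B ln 2 of thermodynamic entropy, over twenty orders of magnitude below the entropy change from chilling even a gram of tissue. The sketch below is only an order-of-magnitude comparison with assumed tissue numbers; nothing in it comes from the post above.

import math

k_B = 1.380649e-23                     # J/K, Boltzmann constant

bits = 3                               # the "few bits" of pain information (assumed)
S_info = bits * k_B * math.log(2)      # ~2.9e-23 J/K, Landauer equivalent

# Rough entropy change of cooling ~1 g of tissue (specific heat ~3.5 J/(g K)) from 310 K to 300 K.
S_thermo = 1.0 * 3.5 * math.log(300 / 310)   # ~ -0.11 J/K

print(S_info, S_thermo, abs(S_thermo) / S_info)   # ratio ~ 4e21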

> Perhaps this is also true in the mental domain, so that emotional loss (or e.g.
> embarrassment) can also be characterized as an increase in entropy of one's mental
> models, but this is pure speculation.

It hardly even rises to speculation unless you have some idea of how to quantify and test it.

> The case is even harder to make with pleasure. It would be weird if it were true, but so
> far it is the only way I know of to map pleasure and pain onto anything objective at all.

Damasio proposes that pleasure and pain map into levels of various hormones as well as
neural activity.

Brent

meekerdb

Apr 10, 2013, 5:59:54 PM
to everyth...@googlegroups.com
Do you see no difference? Is the operation of both equally mysterious to you?

Brent

meekerdb

Apr 10, 2013, 6:08:31 PM
to everyth...@googlegroups.com
On 4/10/2013 2:08 PM, Terren Suydam wrote:
Hi Telmo,

Yes, those are good counter examples.

But I think to say "pain and pleasure are fine-tuned by evolution..." is a sleight of hand. Pain and pleasure are phenomenological primitives. If evolution created those primitives, how did it do that? By what mechanism?

Another way to think of this is to acknowledge that pain signals are mediated by special nerves in the nervous system. But what makes those nerves any different from a nerve that carries information about gentle pressure? You may be able to point to different neuroreceptors used, but then that shifts the question to why different neuroreceptors should result in different characters of experience.

You have to ground the interpretation in behavior and its relation to evolutionary advantage. People who put their hand in the fire withdraw it quickly and exclaim to warn others. People that don't, suffer a reproductive disadvantage.

Brent

John Mikes

Apr 10, 2013, 6:17:27 PM
to everyth...@googlegroups.com
Evgeniy, I did not read the paper either, but fundamentally agree with your evaluation - not in toto, of course. I even take it further: THE WORLD is in the MIND (not brain, see my reflection to Bruno below) and it is individually different for EACH OF US, as our "mini-solipsism" - the way we, in our personal differences, adjust those informative additions we absorb about the totality (and no two persons "get" the same of those, nor adjust them in the same fashion). We have similar connotations and are happy with those. Or: we argue about them.




Then Evgenii wrote (second refl.):

 "...Neuroscientists study brain and they just take a priori from the materialist and reductionism paradigm that mind must be in the brain. After that, they write papers to bring this idea to the logical conclusion. To this end, they seem to have two options. Either they should say that the 3D visual world is illusion (I guess, Dennett goes this way) or put phenomenological consciousness into the brain. ...."

JM: I may agree to the 3D as illusion by the mini-solipsism. The other alternative is pure reductionism: Brain we have, brain explains them all. I would not succumb to being pressured how I identify "mind". (Maybe: the Terra Incognita explaining 'the rest of it').

And Bruno concluded:

But mind is different from brain. And mind is part of both cognitive   
science and theoretical computer science. To identify mind and brain   
is possible in some strong non computationalist theories, but such   
theories don't yet exist, and are only speculated about. To confuse   
mind and brain, is like confusing literature and ink. 
Neurophilophers are usually computationalist and weakly materialist,   
and so are basically inconsistent. 
 
I would reverse the 'locations' of those nonlocal concepts:
(I do not take 'brain' as the tissue in physiology, rather the sum of observable(?) functions of our hypothetical organ wherever it may be 'located': skull, intestines, heart, etc.) and is the applied observable TOOL for the elusive MIND(!!!). 
Cognitive and theor. computer sci. ARE part of the mind(function?) whatever that may be. I appreciate Bruno's agnostic stance on those 'theories' that may explain them all, but do not yet(?) exist.
IMO theoretical computer science and cognitive science have one thing in common: at the point where they enter 'real complexity' they exceed the capabilities of the human thinking power (logic etc.). 
We are part of - and living in - a world (=infinite complexity) we know nothing about but our ignorance pretends to explain it all from the fraction we so far learned (rather: explained right or wrong) for ourselves. 

John Mikes





Craig Weinberg

Apr 10, 2013, 6:26:16 PM
to everyth...@googlegroups.com

That's begging the question. People would withdraw their hand with the exact same rapidity regardless of the aesthetic quality of the signal. Terren and I understand this, and we understand that your view does not understand this. In a deterministic universe, there is no need to motivate stones to roll down hill. You can't remove all causal efficacy from will on one hand and then rely on it to justify aesthetics on the other. It doesn't work, and even if it did, it doesn't answer Terren's question: "how did it do that? By what mechanism?". Does evolution simply conjure "pain" from a magical box of infinite experiences, or are there some rules in place as to their nature?

Craig
 

Brent

meekerdb

Apr 10, 2013, 6:38:46 PM
to everyth...@googlegroups.com
On 4/10/2013 3:26 PM, Craig Weinberg wrote:


On Wednesday, April 10, 2013 6:08:31 PM UTC-4, Brent wrote:
On 4/10/2013 2:08 PM, Terren Suydam wrote:
Hi Telmo,

Yes, those are good counter examples.

But I think to say "pain and pleasure are fine-tuned by evolution..." is a sleight of hand. Pain and pleasure are phenomenological primitives. If evolution created those primitives, how did it do that? By what mechanism?

Another way to think of this is to acknowledge that pain signals are mediated by special nerves in the nervous system. But what makes those nerves any different from a nerve that carries information about gentle pressure? You may be able to point to different neuroreceptors used, but then that shifts the question to why different neuroreceptors should result in different characters of experience.

You have to ground the interpretation in behavior and its relation to evolutionary advantage. People who put their hand in the fire withdraw it quickly and exclaim to warn others. People that don't, suffer a reproductive disadvantage.

That's begging the question. People would withdraw their hand with the exact same rapidity regardless of the aesthetic quality of the signal.

No, that's answering the question. Whatever aesthetic quality causes one to quickly withdraw and warn others is the answer to "What aesthetic quality is pain?"


Terren and I understand this, and we understand that your view does not understand this.

You use "understand" as a synonym for "assert".  Your "understanding" has no predictive power and is not consilient with other science.


In a deterministic universe, there is no need to motivate stones to roll down hill. You can't remove all causal efficacy from will on one hand and then rely on it to justify aesthetics on the other.

I'm not the one relying on will - you are.


It doesn't work, and even if it did, it doesn't answer Terren's question: "how did it do that? By what mechanism?". Does evolution simply conjure "pain" from a magical box of infinite experiences, or are there some rules in place as to their nature?

I gave the rules - that's why it's an answer.

Brent




Craig Weinberg

Apr 10, 2013, 7:28:25 PM
to everyth...@googlegroups.com


On Wednesday, April 10, 2013 6:38:46 PM UTC-4, Brent wrote:
On 4/10/2013 3:26 PM, Craig Weinberg wrote:


On Wednesday, April 10, 2013 6:08:31 PM UTC-4, Brent wrote:
On 4/10/2013 2:08 PM, Terren Suydam wrote:
Hi Telmo,

Yes, those are good counter examples.

But I think to say "pain and pleasure are fine-tuned by evolution..." is a sleight of hand. Pain and pleasure are phenomenological primitives. If evolution created those primitives, how did it do that? By what mechanism?

Another way to think of this is to acknowledge that pain signals are mediated by special nerves in the nervous system. But what makes those nerves any different from a nerve that carries information about gentle pressure? You may be able to point to different neuroreceptors used, but then that shifts the question to why different neuroreceptors should result in different characters of experience.

You have to ground the interpretation in behavior and its relation to evolutionary advantage. People who put their hand in the fire withdraw it quickly and exclaim to warn others. People that don't, suffer a reproductive disadvantage.

That's begging the question. People would withdraw their hand with the exact same rapidity regardless of the aesthetic quality of the signal.

No, that's answering the question.  Whatever aesthetic quality causes one to quickly withdraw and warn other is the answer to "What aesthetic quality is pain?"

How could an aesthetic quality cause one to do anything if "one" has no effective free will? If you can't explain aesthetic quality in terms of ion channels and brain activity, then you must be talking about magic.
 

Terren and I understand this, and we understand that your view does not understand this.

You use "understand" as a synonym for "assert".  Your "understanding" has no predictive power and is not consilient with other science.

The only assertion I make is that you are wasting your time trying to convince us that you're right when we can both see that you don't understand why you are wrong, and also why you think we're wrong.


In a deterministic universe, there is no need to motivate stones to roll down hill. You can't remove all causal efficacy from will on one hand and then rely on it to justify aesthetics on the other.

I'm not the one relying on will - you are.

"People who put their hand in the fire withdraw it quickly"

Why does this behavior occur as a consequence of some sensory experience rather than simple mechanics? If pain makes me withdraw my hand, it can only be because my will contributes to my hand's movements. Otherwise the pain feeling would be irrelevant, as I would be a spectator and my will would be an illusion.

 

It doesn't work, and even if it did, it doesn't answer Terren's question: "how did it do that? By what mechanism?". Does evolution simply conjure "pain" from a magical box of infinite experiences, or are there some rules in place as to their nature?

I gave the rules - that's why it's an answer.

Translation - you have no answer except to try to confuse the question.

Craig
 

Terren Suydam

Apr 11, 2013, 10:32:14 AM
to everyth...@googlegroups.com
On Wed, Apr 10, 2013 at 6:08 PM, meekerdb <meek...@verizon.net> wrote:
On 4/10/2013 2:08 PM, Terren Suydam wrote:
Hi Telmo,

Yes, those are good counter examples.

But I think to say "pain and pleasure are fine-tuned by evolution..." is a sleight of hand. Pain and pleasure are phenomenological primitives. If evolution created those primitives, how did it do that? By what mechanism?  

Another way to think of this is to acknowledge that pain signals are mediated by special nerves in the nervous system. But what makes those nerves any different from a nerve that carries information about gentle pressure?  You may be able to point to different neuroreceptors used, but then that shifts the question to why different neuroreceptors should result in different characters of experience.

You have to ground the interpretation in behavior and its relation to evolutionary advantage. People who put their hand in the fire withdraw it quickly and exclaim to warn others.  People that don't suffer reproductive disadvantage.

Brent


Of course, but it still involves a sleight of hand.  Let me offer this example by way of trying to make this clear.

You have creature A which does not suffer pain. Then some mutation occurs and creature B, descended from A, is born with the ability to feel pain when exposed to fire. We agree that creature B is more likely to reproduce than creature A. My question is, what is the nature of the mutation that suddenly ushered in the subjective experience of pain?  What is the mechanism?

Terren

Bruno Marchal

Apr 11, 2013, 10:54:43 AM
to everyth...@googlegroups.com
Interesting, but off topic.






Each bit would be an atomic configuration, and programs would be atomic assemblies.

Two apples is not the number two.

With logic automata, the number two would not be necessary....matter would embody its own programs.

With comp, matter relies on the numbers law, or Turing equivalent.



 



Maybe this makes it easier to see why forms and functions are not the same as sensory experiences, as no pile of logic automata would inspire feelings, flavors, thoughts, etc.

That is what we ask you to justify, or to assume explicitly, not to take for granted.

The fact that logic automata unites form and function as a single process should show that there is no implicit aesthetic preference. A program is a functional shape whose relation with other functional shapes is defined entirely by position. There is no room for, nor plausible emergence of any kind of aesthetic differences between functions we would assume are associated with sight or sound, thought or feeling.

Why?



Logic automata proves that none of these differences are meaningful in a functionalist universe.

?

Bruno

Terren Suydam

Apr 11, 2013, 10:54:59 AM
to everyth...@googlegroups.com
On Wed, Apr 10, 2013 at 5:57 PM, meekerdb <meek...@verizon.net> wrote:
On 4/10/2013 1:36 PM, Terren Suydam wrote:
This is close to an idea I have been mulling over for some time... that the source of the phenomenological feeling of pleasure is in some way identified with decreases in entropy, and pain is in some way identified with increases in entropy. It is a way to map the subjective experience of pain and pleasure to a 3p description of, say, a nervous system.

You will just further muddle the meaning of entropy.

I agree.
 

Damage to the body (associated with pain) can usually (always?) be characterized in terms of a sudden increase in entropy of the body.

Consider dribbling some liquid nitrogen on your skin.  Hurts doesn't it.  But the entropy of your body is (locally) reduced.  The pain comes from neurons sending signals to your brain.  They use a tiny amount of free energy to do this which increases the entropy of your body also.  Your brain receives a few bits of information about the pain which represent an infinitesimal decrease in entropy if your brain was in a state uncertainty about whether your body hurt.


Agree. I am abandoning the idea of entropy in the chemistry sense in light of Telmo's and your objections. However, there may be a way to characterize the mind - i.e. "the software that runs on the brain architecture" - in objective terms (such as the information-theoretic notion of entropy) that might yield possible mappings to subjective feelings of pain and pleasure. I subscribe to the idea that we only experience our internally constructed world, so it seems possible to abandon "physical" entropy without sacrificing the idea of a mental entropy.
 

Perhaps this is also true in the mental domain, so that emotional loss (or e.g. embarrassment) can also be characterized as an increase in entropy of one's mental models, but this is pure speculation.

It hardly even rises to speculation unless you have some idea of how to quantify and test it.


Sure. Our understanding of the emergent dynamics of neural activity is still pretty meager. But as I am assuming comp, I therefore assume that there is a lawful, deterministic relationship among these emergent dynamics as well (a determinism that is orthogonal to the determinism of ion channels etc) - and so I find it entirely plausible that one could quantify and test the higher level dynamics, in the same way that you could make a study of the causal relationships among patterns that emerge on a "Game of Life" automata. 

I think one of the more important areas of research is characterizing these emergent dynamics from the bottom up, modeling them, and then proceeding to the next level of emergent dynamics. My hunch is that there are several such emergent layers, corresponding with structures that scale up eventually to the size of the entire brain, resulting in chains of supervenience.  Psychology is the study of the highest layers - we need to connect them to the lower layers. Without that understanding we will never truly understand how drugs affect our psychology, for example. With that understanding we will have a much better grasp of the mechanism of mind, how to predict it, etc.
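
To give a toy flavor of the bottom-up approach (this is only a sketch - a random soup on a small Game of Life grid, nothing brain-like about it): the update rule is purely local, yet a coarse observable like live-cell density already has higher-level dynamics you can track and study in its own right.

import random

def step(grid):
    # One synchronous update of Conway's Game of Life on a toroidal grid.
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            new[r][c] = 1 if (n == 3 or (grid[r][c] and n == 2)) else 0
    return new

def density(grid):
    # A crude "higher-level" observable: the fraction of live cells.
    return sum(map(sum, grid)) / (len(grid) * len(grid[0]))

random.seed(0)
grid = [[random.randint(0, 1) for _ in range(30)] for _ in range(30)]
for t in range(10):
    print(t, round(density(grid), 3))
    grid = step(grid)

Doing the analogous thing for real neural data, layer by layer, is of course the hard part.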

 

The case is even harder to make with pleasure. It would be weird if it were true, but so far it is the only way I know of to map pleasure and pain onto anything objective at all.

Damasio proposes that pleasure and pain map into levels of various hormones as well as neural activity.


This may be true, and yield useful insights, but still just shifts the burden of explanation onto something else.
 
Terren

Brent

Terren Suydam

unread,
Apr 11, 2013, 11:04:39 AM4/11/13
to everyth...@googlegroups.com
On Wed, Apr 10, 2013 at 5:28 PM, Telmo Menezes <te...@telmomenezes.com> wrote:
On Wed, Apr 10, 2013 at 11:08 PM, Terren Suydam <terren...@gmail.com> wrote:
> Hi Telmo,
>
> Yes, those are good counter examples.
>
> But I think to say "pain and pleasure are fine-tuned by evolution..." is a
> sleight of hand. Pain and pleasure are phenomenological primitives. If
> evolution created those primitives, how did it do that? By what mechanism?

Completely agree. I mean pain and pleasure as things that you can
observe with an fMRI machine. As for the 1p experience of pain and
pleasure... wish I knew. I don't think evolution created these
primitives in this latter sense.

> Another way to think of this is to acknowledge that pain signals are
> mediated by special nerves in the nervous system. But what makes those
> nerves any different from a nerve that carries information about gentle
> pressure?  You may be able to point to different neuroreceptors used, but
> then that shifts the question to why different neuroreceptors should result
> in different characters of experience.

Yes, I've always been puzzled by that.


My hunch is that the 'pain' neurons feed into circuits that can be characterized objectively in a certain way, that is distinguishable from circuits that receive sensory information with no particular pain/pleasure valence, so that it doesn't matter in particular what the neurotransmitters or hormones are that mediate the circuitry itself. Rather, it is the cybernetic description of the circuits in question that provide the "hook" on which to hang distinguishable identification of various kinds of qualia.

Pressing forward with the entropy idea, perhaps the pain circuitry has the result of increasing the (information-theoretic) entropy of the global mind, and therefore we experience it as pain. Keep in mind I am not "arguing for" this - just exploring the idea. Maybe you or someone else who is sympathetic to this style of inquiry can improve on the idea of entropy... it certainly has its problems.
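
If anyone wanted to take the entropy framing literally, the most naive version is just Shannon entropy over a distribution of internal states. The numbers and the "calm"/"disrupted" labels below are made up - this is only meant to show that the claim is quantifiable in principle, not that it is right:

import math

def shannon_entropy(probs):
    # Shannon entropy (in bits) of a discrete probability distribution.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical "mind" as a distribution over internal states:
calm = normalize([10, 5, 1, 1, 1])       # activity concentrated on a few states
disrupted = normalize([4, 4, 3, 4, 3])   # activity spread across many states

print("calm      :", round(shannon_entropy(calm), 3), "bits")
print("disrupted :", round(shannon_entropy(disrupted), 3), "bits")

The disrupted distribution comes out with the higher entropy, which is all the toy version of the claim amounts to.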
 
Terren

Craig Weinberg

unread,
Apr 11, 2013, 11:13:33 AM4/11/13
to everyth...@googlegroups.com

Why is it off topic? It addresses exactly what we are talking about - the gap between pure function and form. By closing that gap, we can see that it makes no difference and that there is no problem with running an anesthetic program.
 






Each bit would be an atomic configuration, and programs would be atomic assemblies.

Two apples is not the number two.

With logic automata, the number two would not be necessary....matter would embody its own programs.

With comp, matter relies on the numbers law, or Turing equivalent.

Matter also relies on geometry, which comp cannot provide.
 



 



Maybe this makes it easier to see why forms and functions are not the same as sensory experiences, as no pile of logic automata would inspire feelings, flavors, thoughts, etc.

That is what we ask you to justify, or to assume explicitly, not to take for granted.

The fact that logic automata unites form and function as a single process should show that there is no implicit aesthetic preference. A program is a functional shape whose relation with other functional shapes is defined entirely by position. There is no room for, nor plausible emergence of any kind of aesthetic differences between functions we would assume are associated with sight or sound, thought or feeling.

Why?

Because the function is accomplished with or without any sensory presentation beyond positions of bits. With comp you already assume the immaterial, so it's easier to conflate that intangible principle with sensory participation, since sense can also be thought of as immaterial. With logical automata we can see clearly that the functions of computation need not be immaterial at all, and can be presented directly through 4-D material geometry. In doing this, we expose the difference between computation, which is an anesthetic automatism, and consciousness, which is an aesthetic direct participation.
 



Logic automata proves that none of these differences are meaningful in a functionalist universe.

?

That any function performed by a logical automata would be the same configuration of bricks whether we ultimately read the output as a visual experience or an auditory experience.

Craig
 

Bruno Marchal

unread,
Apr 11, 2013, 11:54:17 AM4/11/13
to everyth...@googlegroups.com
The difference is that evolution assumes some mechanism.

With comp you can define pain by the qualia associated to anything
contradicting some universal goal.
The most typical universal goal is "protect yourself".

Imagine we send robots to a far planet where there are acid rains which might demolish their circuits. We will provide a mechanism so that when such rain occurs the robots quickly find some shelter. No need of pain at this stage, but if the machine is Löbian, she will be able to rationalize her behavior, so that when we ask her why she protects herself, she will talk about the non-communicable qualia she gets when the rain is coming, and she might well call it pain.

Such a theory predicted that if someone burns alive through suicide, that person would not necessarily feel pain. As sad as it is, this has been confirmed by some testimony of people doing just that. They describe being burned even as pleasurable, until they are brought to some hospital, and then the pain becomes quite acute. (Hmm... I can't find the interview of the women who burn themselves in Afghanistan when their husbands cheat on them; I will search when I have more time.) This can also be related to some Zen techniques for diminishing pain by "accepting it", used in Japan to survive Chinese interrogations.

Pain can be the qualia brought by a frustration in a situation contradicting instinctive universal goals. The qualia itself can be explained by the combination of self-reference and truth, that is, the relatively correct self-reference, which leads the machine to acknowledge non-justifiable truths. The negative aspect of the affect is brought by the contradiction with respect to the universal goal, and is usually more intense when the goal is instinctive or hidden.

Note that this needs a notion of truth, so the Platonist God is not
far away, making your point, after all.

Bruno




>
> Evgenii
>
> --
> You received this message because you are subscribed to the Google
> Groups "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it,
> send an email to everything-li...@googlegroups.com.
> To post to this group, send email to everyth...@googlegroups.com.
> Visit this group at http://groups.google.com/group/everything-list?hl=en
> .
> For more options, visit https://groups.google.com/groups/opt_out.
>
>

http://iridia.ulb.ac.be/~marchal/



Bruno Marchal

unread,
Apr 11, 2013, 11:57:22 AM4/11/13
to everyth...@googlegroups.com
Glial cells seem to have some rôle in chronic pain, too.

Bruno




>
> Brent
>
> --
> You received this message because you are subscribed to the Google
> Groups "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it,
> send an email to everything-li...@googlegroups.com.
> To post to this group, send email to everyth...@googlegroups.com.
http://iridia.ulb.ac.be/~marchal/



Craig Weinberg

unread,
Apr 11, 2013, 12:15:57 PM4/11/13
to everyth...@googlegroups.com


On Thursday, April 11, 2013 11:54:17 AM UTC-4, Bruno Marchal wrote:

On 10 Apr 2013, at 22:55, Evgenii Rudnyi wrote:

> On 10.04.2013 22:52 Telmo Menezes said the following:
>
> ...
>
>> I suspect life is just meaningless from the outside. I'd say that
>> pain and pleasure are fine-tunned by evolution to maximise the
>> survivability of species in an environment that is largely also
>> generated by evolution. It's a strange loop.
>>
>
> What difference do you see when one changes evolution in your  
> sentence by god?


The difference is that evolution assumes some mechanism.

With comp you can define pain by the qualia associated to anything  
contradicting some universal goal.
The most typical universal goal is "protect yourself".

Why isn't the condition of "satisfying universal goal = false" sufficient?


I imagine we send robots on a far planet where there are some acid  
rains which might demolish their circuits. We will provide mechanism  
so that when such rain occurs the robots find quickly some shelter. No  
need of pain at this stage, but if the machine is Löbian, she will be  
able to rationalize her behavior, so that when we ask her why she  
protects herself, she will talk about her non-communicable qualia
she got when  the rain is coming, and she might well call it pain.

What does it mean to "talk about" that which is non-communicable? What she calls it is irrelevant, but do her reports describe the qualia as "sharp" or "dull"? Excruciating or irritating? Does it make her want to rip her eyes out of her skull or simply believe that it is time to escalate the priority of a search for protection? Is there any indication at all that a Löbian machine experiences any specific aesthetic qualities at all, or do you assume that every time we ask a machine a question and it fails to communicate an answer that it means that they must have a human-like conscious experience which they cannot express?


Such a theory predicted that if someone burns alive through suicide, that person would not necessarily feel pain. As sad as it is, this has been confirmed by some testimony of people doing just that. They describe being burned even as pleasurable, until they are brought to some hospital, and then the pain becomes quite acute. (Hmm... I can't find the interview of the women who burn themselves in Afghanistan when their husbands cheat on them; I will search when I have more time.) This can also be related to some Zen techniques for diminishing pain by "accepting it", used in Japan to survive Chinese interrogations.

Sure, pain is relative. Like all sense, it is defined by contrast, previous experience, and expectation.


Pain can be the qualia brought by a frustration in a situation  
contradicting instinctive universal goals.
The qualia itself can be explained by the combination self-reference +  
truth, that is the relatively correct self-reference, which lead the  
machine to acknowledge non justifiable truth. The negative aspect of  
the affect is brought by the contradiction with respect to universal  
goal, and is usually more intense when the goal is instinctive or  
hidden.

Note that this needs a notion of truth, so the Platonist God is not  
far away, making your point, after all.

Self-reference + truth is no substitute for aesthetic presence. The notion of self-reference you are using is a superficial one rooted in symbol manipulation rather than proprietary influence. Selfness defined this way is a silhouette with no content. In reality, authentic selfhood arises from aesthetic qualities experienced, not from logical conditions or non-communicable residues of arithmetic.

Craig

Evgenii Rudnyi

unread,
Apr 11, 2013, 2:07:41 PM4/11/13
to everyth...@googlegroups.com
On 10.04.2013 23:59 meekerdb said the following:
I do not see any difference. I do not see that the explanation through
Evolution in the sentence above is better than the explanation through
God. In the sentence above, in my view, the explanatory power is at the
same level, either with Evolution or with God.

Evgenii

Evgenii Rudnyi

unread,
Apr 11, 2013, 2:11:47 PM4/11/13
to everyth...@googlegroups.com
On 10.04.2013 22:58 meekerdb said the following:
> On 4/10/2013 1:38 PM, Evgenii Rudnyi wrote:
>> On 10.04.2013 22:34 meekerdb said the following:
>>> On 4/10/2013 1:18 PM, Evgenii Rudnyi wrote:
>>>> On 10.04.2013 07:16 meekerdb said the following:
>>>>> On 4/9/2013 12:19 PM, Evgenii Rudnyi wrote:
>>>>

...

>>>> You'd better look at what biologist say. For example:
>>>>
>>>> http://www.icr.org/article/270/
>>>>
>>>> "and that the idea of their improving rather than harming
>>>> organisms is contrary to the Second Law of Thermodynamics,
>>>> which tells us that matter and energy naturally tend toward
>>>> greater randomness rather than greater order and complexity."
>>>>
>>>> Do you like it?
>>>
>>> You're referring me to an article on biological evolution by a
>>> guy with a Masters of Art on a Creationist website??
>>>
>>> Do YOU like it?
>>
>> You will find a similar sentence also on an evolutionary website.
>
> That wasn't the question. The question was do you like it, do you
> believe it, can you support it with your own arguments?

No, I do not like it. I have made this example to show what happens when
people start mixing the thermodynamic entropy and biology. My note to
this was

"I am afraid that this is a misunderstanding. The Second Law tells that
the entropy increases in the isolated system. This is not the case with
life on the Earth, as energy comes in and go out. In this case, if to
speak of a system not far from the stationary state, Ilya Prigogine has
proved that then the production of the entropy should be minimal.
However, even this could not be generalized to the case when a system is
far from equilibrium (this seems to be case with life on the Earth).
Hence it is unlikely that the Second Law could help us when one
considers evolution problems. In any case, I would recommend you the
works of Ilya Prigogine � he was a great thermodynamicist."


>> Such a statement will be the same. Look for example at
>>
>> Annila, A. & S.N. Salthe (2010) Physical foundations of
>> evolutionary theory. Journal of Non-equilibrium Thermodynamics 35:
>> 301-321, http://dx.doi.org/10.1515/jnetdy.2010.019
>
> Which is behind a paywall ($224), and says nothing like that in the
> abstract.

If you type the title in Google, you will find a free version. My
comment to this paper is at

http://blog.rudnyi.ru/2013/02/physical-foundations-of-evolutionary-theory.html

You will find a link to a free version there as well.

Evgenii

Bruno Marchal

unread,
Apr 11, 2013, 2:38:26 PM4/11/13
to everyth...@googlegroups.com
?



 






Each bit would be an atomic configuration, and programs would be atomic assemblies.

Two apples is not the number two.

With logic automata, the number two would not be necessary....matter would embody its own programs.

With comp, matter relies on the numbers law, or Turing equivalent.

Matter also relies on geometry, which comp cannot provide.

?





 



 



Maybe this makes it easier to see why forms and functions are not the same as sensory experiences, as no pile of logic automata would inspire feelings, flavors, thoughts, etc.

That is what we ask you to justify, or to assume explicitly, not to take for granted.

The fact that logic automata unites form and function as a single process should show that there is no implicit aesthetic preference. A program is a functional shape whose relation with other functional shapes is defined entirely by position. There is no room for, nor plausible emergence of any kind of aesthetic differences between functions we would assume are associated with sight or sound, thought or feeling.

Why?

Because the function is accomplished with or without any sensory presentation beyond positions of bits.

So there is some sensory presentation. 



With comp you already assume the immaterial so its easier to conflate that intangible principle with sensory participation,

Which conflation? On the contrary, once a machine self-refers, many usually conflated views get unconflated.




since sense can also be thought of as immaterial also.

Which eases things, but does not solve them; you need a self in between.



With logical automata we can see clearly that the functions of computation need not be immaterial at all, and can be presented directly through 4-D material geometry.

Either it violates the Church thesis, and then it is very interesting, or it does not, and then it is a red herring for the mind-body problem, even if quite interesting in practical applications.




In doing this, we expose the difference between computation, which is an anesthetic automatism and consciousness which is an aesthetic direct participation.

In doing this, all I see is that you eliminate the person who got a brain prosthesis. 

Saying that God made humans in his own image also exposes a difference, but not in a very convincing way. 




 



Logic automata proves that none of these differences are meaningful in a functionalist universe.

?

That any function performed by a logical automata would be the same configuration of bricks whether we ultimately read the output as a visual experience or an auditory experience.

There is a big difference between computationalism and functionalism. Comp says that functionalism is correct at some unknown level, and in the end this plays some role, as we cannot know which machine we are. We are only free to bet on some level, in case we need some new body, or after death. 
If functionalism were correct, you could replace the entire universe by the program "do nothing", as it would do the same thing as the entire universe.
A machine is *much* more than a function. In math, we distinguish the intensional and the extensional to talk about that difference. Modal logic addresses the intensional aspects, already existing in the extensional math, when looked at from some (internal or not) point of view.
I think you conflate extension and intension (note the "s").

Bruno

Craig Weinberg

unread,
Apr 11, 2013, 3:18:58 PM4/11/13
to everyth...@googlegroups.com

It's not off topic.
 



 






Each bit would be an atomic configuration, and programs would be atomic assemblies.

Two apples is not the number two.

With logic automata, the number two would not be necessary....matter would embody its own programs.

With comp, matter relies on the numbers law, or Turing equivalent.

Matter also relies on geometry, which comp cannot provide.

?

Does that mean you think that comp can generate geometry, or that matter doesn't relay on geometry?
 





 



 



Maybe this makes it easier to see why forms and functions are not the same as sensory experiences, as no pile of logic automata would inspire feelings, flavors, thoughts, etc.

That is what we ask you to justify, or to assume explicitly, not to take for granted.

The fact that logic automata unites form and function as a single process should show that there is no implicit aesthetic preference. A program is a functional shape whose relation with other functional shapes is defined entirely by position. There is no room for, nor plausible emergence of any kind of aesthetic differences between functions we would assume are associated with sight or sound, thought or feeling.

Why?

Because the function is accomplished with or without any sensory presentation beyond positions of bits.

So there is some sensory presentation. 

In reality there would be low level sensory presentation, but without a theory of physics or computation which supports that, we should not allow it to be smuggled in.
 



With comp you already assume the immaterial so its easier to conflate that intangible principle with sensory participation,

Which conflation? On the contrary, once a machine self-refers, many usually conflated views get unconflated.

The conflation is between computation and sensation. A machine has no sensation, but the parts of a machine ultimately are associated with low-level sensations at the material level. It is on those low-level sensory-motor interactions that high-level logics can be executed, instrumentally, with no escalation of awareness.
 




since sense can also be thought of as immaterial also.

Which ease, but does not solve the things, you need a self between.

Not sure how that relates, but how do you know that a self is needed?
 



With logical automata we can see clearly that the functions of computation need not be immaterial at all, and can be presented directly through 4-D material geometry.

Either it violates the Church thesis, and then it is very interesting, or it does not, and then it is a red herring for the mind-body problem, even if quite interesting in practical applications.

My point is that computation need not have a mind - it can be executed using bodies alone, and logic automata demonstrates that is true.
 




In doing this, we expose the difference between computation, which is an anesthetic automatism and consciousness which is an aesthetic direct participation.

In doing this, all what I see is that you eliminate the person who got a brain prosthesis. 

Saying that God made the human following his own image also expose a difference, but not in a quite convincing way. 

Why isn't the logic automata example convincing? Are you saying that there still must be some mind there even though all functions are executed by bodies? What is your objection?
 




 



Logic automata proves that none of these differences are meaningful in a functionalist universe.

?

That any function performed by a logical automata would be the same configuration of bricks whether we ultimately read the output as a visual experience or an auditory experience.

There is a big difference between computationalism and functionalism. Comp says that functionalism is correct, at some unknown level, and in fine, this plays some role, as we cannot know which machine we are. We are only free to bet on some level, in case we need some new body, or after death. 
if functionalism was correct, you can replace the entire universe by the program "do nothing", as it will do the same thing as the entire universe.
A machine is *much* more than a function. In the math, we distinguish intensional and extensional, to talk about that difference. Modal logic aboard the intensional aspects, already existing in the extensional math, when looked from some (internal or not) point of view.
I think you conflate extension and intension (note the "s").

I would say that a machine is a collection of logical functions which produce another collection of logical functions. What more is there to it, or more to the point, what more is there which could generate any aesthetic experience?

Craig

meekerdb

unread,
Apr 11, 2013, 5:22:47 PM4/11/13
to everyth...@googlegroups.com
It needn't be one specific "pain" mechanism.  It could be a part of the brain that interprets a complex of neural signals as pain, it could be the release of some hormones, it could be the development of specific pain sensors.  All that is significant is that it elicits the "pain response".

Brent

Craig Weinberg

unread,
Apr 11, 2013, 5:30:45 PM4/11/13
to everyth...@googlegroups.com

That's exactly why it doesn't make sense that pain could exist at all from a mechanistic assumption. Nothing is necessary to elicit the pain response except for whatever signals engage those responses directly. You already have data, and that data can be copied or acted upon in whatever practical way is required - what possible purpose could this extra layer of "pain" serve, and how/where/what mechanism produces it?

Craig


Brent

Terren Suydam

unread,
Apr 11, 2013, 5:44:50 PM4/11/13
to everyth...@googlegroups.com
So you would identify the subjective experience of pain with an objective description of some agent's pain response. That's no worse than my original idea I suppose, though vulnerable to the same sorts of objections... for instance, how would you account for phantom limb pain? headaches? What kind of mechanism leads to the pain experience when it is impossible to identify a pain response?

Terren

 
 
 

meekerdb

unread,
Apr 12, 2013, 12:20:18 AM4/12/13
to everyth...@googlegroups.com
Why is it impossible to identify a pain response?  Don't people with phantom limb pain complain and try to alleviate it?

Brent

Terren Suydam

unread,
Apr 12, 2013, 9:44:33 AM4/12/13
to everyth...@googlegroups.com
Yes, of course. In the case of the fire though, the explanatory power comes from the fact that the "response" is a response to something objective. For phantom limb pain your explanation is that the pain response is a response to pain itself -  which is not helpful because it includes the phenomenon in the explanation of the thing you're trying to explain.

Terren

Telmo Menezes

unread,
Apr 12, 2013, 10:01:41 AM4/12/13
to everyth...@googlegroups.com
Hi Evgenii,

The difference is that with the evolutionary explanation above you can
create a model that you can test and make predictions with. I've done
part of this computationally for my PhD thesis.

Telmo.

> Evgenii
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-li...@googlegroups.com.
> To post to this group, send email to everyth...@googlegroups.com.

Bruno Marchal

unread,
Apr 12, 2013, 10:40:33 AM4/12/13
to everyth...@googlegroups.com
On 11 Apr 2013, at 18:15, Craig Weinberg wrote:



On Thursday, April 11, 2013 11:54:17 AM UTC-4, Bruno Marchal wrote:

On 10 Apr 2013, at 22:55, Evgenii Rudnyi wrote:

> On 10.04.2013 22:52 Telmo Menezes said the following:
>
> ...
>
>> I suspect life is just meaningless from the outside. I'd say that
>> pain and pleasure are fine-tunned by evolution to maximise the
>> survivability of species in an environment that is largely also
>> generated by evolution. It's a strange loop.
>>
>
> What difference do you see when one changes evolution in your  
> sentence by god?


The difference is that evolution assumes some mechanism.

With comp you can define pain by the qualia associated to anything  
contradicting some universal goal.
The most typical universal goal is "protect yourself".

Why isn't the condition of "satisfying universal goal = false" sufficient?

In which logic?
It is sufficient, but it must be written in the logic corresponding to the relevant hypostasis (arithmetical point of view), and three of them have qualia related to it. 





I imagine we send robots on a far planet where there are some acid  
rains which might demolish their circuits. We will provide mechanism  
so that when such rain occurs the robots find quickly some shelter. No  
need of pain at this stage, but if the machine is Löbian, she will be  
able to rationalize her behavior, so that when we ask her why she  
protects herself, she will talk about her non-communicable qualia
she got when  the rain is coming, and she might well call it pain.

What does it mean to "talk about" that which is non-communicable? What she calls it is irrelevant, but do her reports describe the qualia as "sharp" or "dull"? Excruciating or irritating? Does it make her want to rip her eyes out of her skull

It depends on the machine, the situation, the degree of the subgoals compared to the instinctive universal goal, etc., but roughly speaking: yes. That's the idea. 



or simply believe that it is time to escalate the priority of a search for protection? Is there any indication at all that a Löbian machine experiences any specific aesthetic qualities at all, or do you assume that every time we ask a machine a question and it fails to communicate an answer that it means that they must have a human-like conscious experience which they cannot express?

Not all the time, nor with every machine. Only when it comes from the relevant self-reference. 






Such a theory predicted that if someone burns alive through suicide, that person would not necessarily feel pain. As sad as it is, this has been confirmed by some testimony of people doing just that. They describe being burned even as pleasurable, until they are brought to some hospital, and then the pain becomes quite acute. (Hmm... I can't find the interview of the women who burn themselves in Afghanistan when their husbands cheat on them; I will search when I have more time.) This can also be related to some Zen techniques for diminishing pain by "accepting it", used in Japan to survive Chinese interrogations.

Sure, pain is relative.

I am not sure about this. I'm afraid pain is absolute, like consciousness.
But pain can disappear when the soul is disconnected from the universal goal, or when the soul finds a non-terrestrial way to satisfy it, if that exists, as it seems in the "theology of numbers".




Like all sense, it is defined by contrast, previous experience, and expectation.

Some qualia related to pain might have some relative aspect, but I think it quite plausible that pain is absolute.
If you escape a pain, you just escape it; you don't look at it from a new perspective, you change the perspective altogether. 






Pain can be the qualia brought by a frustration in a situation  
contradicting instinctive universal goals.
The qualia itself can be explained by the combination self-reference +  
truth, that is the relatively correct self-reference, which lead the  
machine to acknowledge non justifiable truth. The negative aspect of  
the affect is brought by the contradiction with respect to universal  
goal, and is usually more intense when the goal is instinctive or  
hidden.

Note that this needs a notion of truth, so the Platonist God is not  
far away, making your point, after all.

Self-reference + truth is no substitute for aesthetic presence.

I would need some precise definition of aesthetic presence to be sure of that. 



The notion of self-reference you are using is a superficial one rooted in symbol manipulation rather than proprietary influence.

Number self-reference gives 8 hypostases. Three of them are not related only to symbol manipulation, but to the truth of some proposition relating a possibility of "proprietary influence".





Selfness defined this way is a silhouette with no content.

You say so, but give no argument.



In reality, authentic selfhood arises from aesthetic qualities experienced,

I agree with this (being generous in the interpretation of the vocabulary). 
The machines agree too. I already told you this.




not from logical conditions or non-communicable residues of arithmetic.

Why? (To make comp false as you wish, I think).

Bruno







Craig


Bruno




>
> Evgenii
>
> --
> You received this message because you are subscribed to the Google  
> Groups "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it,  
> send an email to everything-li...@googlegroups.com.
> To post to this group, send email to everyth...@googlegroups.com.
> Visit this group at http://groups.google.com/group/everything-list?hl=en
> .
> For more options, visit https://groups.google.com/groups/opt_out.
>
>

http://iridia.ulb.ac.be/~marchal/




--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To post to this group, send email to everyth...@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.
 
 

Evgenii Rudnyi

unread,
Apr 12, 2013, 12:27:15 PM4/12/13
to everyth...@googlegroups.com
On 12.04.2013 16:01 Telmo Menezes said the following:
> On Thu, Apr 11, 2013 at 8:07 PM, Evgenii Rudnyi <use...@rudnyi.ru>
> wrote:
>> On 10.04.2013 23:59 meekerdb said the following:
>>
>>> On 4/10/2013 1:55 PM, Evgenii Rudnyi wrote:
>>>>
>>>> On 10.04.2013 22:52 Telmo Menezes said the following:
>>>>
>>>> ...
>>>>
>>>>> I suspect life is just meaningless from the outside. I'd say
>>>>> that pain and pleasure are fine-tunned by evolution to
>>>>> maximise the survivability of species in an environment that
>>>>> is largely also generated by evolution. It's a strange loop.
>>>>>
>>>>
>>>> What difference do you see when one changes evolution in your
>>>> sentence by god?
>>>
>>>
>>> Do you see no difference? Are the operation of both equally
>>> mysterious to you?
>>
>>
>> I do not see any difference. I do not see that the explanation
>> through Evolution in the sentence above is better than the
>> explanation through God. In the sentence above, in my view, the
>> explanatory power is at the same level, either with Evolution or
>> with God.
>
> Hi Evgenii,
>
> The difference is that with the evolutionary explanation above you
> can create a model that you can test and make predictions with. I've
> done part of this computationally for my PhD thesis.
>
> Telmo.

Could you please describe in a bit more detail what you can predict? Right now this is just a declaration.

Evgenii

Craig Weinberg

unread,
Apr 12, 2013, 1:23:52 PM4/12/13
to everyth...@googlegroups.com


On Friday, April 12, 2013 10:40:33 AM UTC-4, Bruno Marchal wrote:

On 11 Apr 2013, at 18:15, Craig Weinberg wrote:



On Thursday, April 11, 2013 11:54:17 AM UTC-4, Bruno Marchal wrote:

On 10 Apr 2013, at 22:55, Evgenii Rudnyi wrote:

> On 10.04.2013 22:52 Telmo Menezes said the following:
>
> ...
>
>> I suspect life is just meaningless from the outside. I'd say that
>> pain and pleasure are fine-tunned by evolution to maximise the
>> survivability of species in an environment that is largely also
>> generated by evolution. It's a strange loop.
>>
>
> What difference do you see when one changes evolution in your  
> sentence by god?


The difference is that evolution assumes some mechanism.

With comp you can define pain by the qualia associated to anything  
contradicting some universal goal.
The most typical universal goal is "protect yourself".

Why isn't the condition of "satisfying universal goal = false" sufficient?

In which logic?
It is the sufficient, but it must be written in the logic corresponding to the relevant hypostasis (arithmetical point-of-view), and three of them have qualia related to it. 

Why would a logic which relies on qualia be preferable to one which does not?
 





I imagine we send robots on a far planet where there are some acid  
rains which might demolish their circuits. We will provide mechanism  
so that when such rain occurs the robots find quickly some shelter. No  
need of pain at this stage, but if the machine is Löbian, she will be  
able to rationalize her behavior, so that when we ask her why she  
protects herself, she will talk about her non-communicable qualia
she got when  the rain is coming, and she might well call it pain.

What does it mean to "talk about" that which is non-communicable? What she calls it is irrelevant, but do her reports describe the qualia as "sharp" or "dull"? Excruciating or irritating? Does it make her want to rip her eyes out of her skull

It depends on the machine, the situation, the degree of the subgoals compared to the instinctive universal goal, etc., but roughly speaking: yes. That's the idea. 

What is an example of the kind of output you are basing your interpretation on?
 



or simply believe that it is time to escalate the priority of a search for protection? Is there any indication at all that a Löbian machine experiences any specific aesthetic qualities at all, or do you assume that every time we ask a machine a question and it fails to communicate an answer that it means that they must have a human-like conscious experience which they cannot express?

Not all the time, nor with every machine. Only when it comes from the relevant self-reference. 

Why does that self-reference suggest an expectation of qualia? If I say "this sentence is self-referential" would it be more likely to be associated with qualia than another sentence?
 






Such a theory predicted that if someone burns alive through suicide, that person would not necessarily feel pain. As sad as it is, this has been confirmed by some testimony of people doing just that. They describe being burned even as pleasurable, until they are brought to some hospital, and then the pain becomes quite acute. (Hmm... I can't find the interview of the women who burn themselves in Afghanistan when their husbands cheat on them; I will search when I have more time.) This can also be related to some Zen techniques for diminishing pain by "accepting it", used in Japan to survive Chinese interrogations.

Sure, pain is relative.

I am not sure about this. I'm afraid pain is absolute, like consciousness.
But pain can disappear when the soul disconnected from the universal goal, or when the soul find a non terrestrial way to satisfy it, if that exist, as it seems in the "theology of numbers".

Pain can disappear just from building up a tolerance to it. After repeated exposures, hot water no longer feels as hot.
 




Like all sense, it is defined by contrast, previous experience, and expectation.

Some qualia related to pain might have some relative aspect, but I think quite plausible that pain is absolute.
If you escape a pain, you just escape it, you don't look at it in a new perspective, but you change altogether of the perspective. 

I don't know if that's true. I think that people who are into masochistic stimulation feel pain, but they feel it as pleasurable also. I think they probably do change progressively to look at it from a new perspective. With qualia you can argue either way. In one sense, pain is only that which is unambiguously painful. In another sense, pain can be ambiguous and context-dependent.
 






Pain can be the qualia brought by a frustration in a situation  
contradicting instinctive universal goals.
The qualia itself can be explained by the combination self-reference +  
truth, that is the relatively correct self-reference, which lead the  
machine to acknowledge non justifiable truth. The negative aspect of  
the affect is brought by the contradiction with respect to universal  
goal, and is usually more intense when the goal is instinctive or  
hidden.

Note that this needs a notion of truth, so the Platonist God is not  
far away, making your point, after all.

Self-reference + truth is no substitute for aesthetic presence.

I would need some precise definition of aesthetic presence to be sure of that. 

Aesthetic presence cannot be defined, it can only be experienced directly. The reason for this is that all qualities of precision and definition are themselves nothing but aesthetic properties in the cognitive mode of appreciation.
 



The notion of self-reference you are using is a superficial one rooted in symbol manipulation rather than proprietary influence.

Number self-reference give 8 hypostases. Three of them are not related to only symbol manipulation, but through the truth of some proposition relating a possibility of 'proprietary influence".

A possibility is not the same thing as a positive indication.  As long as there are other possibilities which fit more sensibly within arithmetic, I don't see any reason to hope that our experience can be found there.






Selfness defined this way is a silhouette with no content.

You say so, but don't give argument.

I don't understand what argument could be any more persuasive. A puppet can be made to act as if it were an autonomous person. I can videotape this puppet show, and now the puppet can appear to be acting in a selfish way all by itself. Your argument seems to be that the only reason why this puppet is not literally a sentient being is just because the movie is not sophisticated enough, but that if the movie were programmed to be able to play different loops in response to the relevant questions from the audience, then there would begin to emerge (from somewhere/nowhere) some authentic new person. I can see that this is obviously not the case, and that this example, and the many others I have given (most convincingly perhaps the logic automata example), show that in fact a self-referent mechanism is neither necessary nor sufficient to explain aesthetic participation.




In reality, authentic selfhood arises from aesthetic qualities experienced,

I agree with this (being large in the interpretation of the vocabulary). 
The machines agree too. I already told you this.

A machine will seem to agree to anything that it is programmed to agree to. You would have to give me specific examples if you want me to believe that any machine has ever described aesthetic experiences or personal preferences.





not from logical conditions or non-communicable residues of arithmetic.

Why? (To make comp false as you wish, I think).

Because if logic and arithmetic could exist without aesthetic experiences and forms, then they would, and experience could not plausibly arise. It's not to make comp false, it's to show that if comp were true, then it would have no need of the universe we experience - not only no biological experiences, but not even geometry. If comp didn't run on sense, then the universe which would exist could not contain any sense, as none would be required.

Craig


Bruno Marchal

unread,
Apr 13, 2013, 6:47:47 AM4/13/13
to everyth...@googlegroups.com
On 11 Apr 2013, at 21:18, Craig Weinberg wrote:

With comp, matter relies on the numbers law, or Turing equivalent.

Matter also relies on geometry, which comp cannot provide.

?

Does that mean you think that comp can generate geometry, or that matter doesn't relay on geometry?

"comp can generate geometry" does not mean something clear.

But what can be shown is that in the comp theory you can assume only numbers (or combinators) and the + and * laws; this generates all the dreams, which can be shown to generate, from the machines' points of view, geometry, analysis, and physics. Then we can compare physics with the empirical data and confirm or refute comp (but not prove comp). 
Since Diophantus already, and then systematically since Descartes, the relations between geometry and arithmetic have been deep and multiple. It is a whole subject matter, a priori independent of comp.


 





 



 



Maybe this makes it easier to see why forms and functions are not the same as sensory experiences, as no pile of logic automata would inspire feelings, flavors, thoughts, etc.

That is what we ask you to justify, or to assume explicitly, not to take for granted.

The fact that logic automata unites form and function as a single process should show that there is no implicit aesthetic preference. A program is a functional shape whose relation with other functional shapes is defined entirely by position. There is no room for, nor plausible emergence of any kind of aesthetic differences between functions we would assume are associated with sight or sound, thought or feeling.

Why?

Because the function is accomplished with or without any sensory presentation beyond positions of bits.

So there is some sensory presentation. 

In reality there would be low level sensory presentation, but without a theory of physics or computation which supports that, we should not allow it to be smuggled in.

So we agree.



 



With comp you already assume the immaterial so its easier to conflate that intangible principle with sensory participation,

Which conflation? On the contrary, once a machine self-refers, many usually conflated views get unconflated.

The conflation is between computation and sensation. A machine has no sensation,

I can agree. We must distinguish a machine from the person who owns that machine, or who is supported by the machine.




but the parts of a machine ultimately are associated with low level sensations at the material level.

If that exists. 



It is on those low level sensory-motor interactions which high level logics can be executed, instrumentally, with no escalation of awareness.


Maybe you should work with Stephen. Although he defends comp, he constantly points to math which should be better suited to a non-comp theory like yours. 



 




since sense can also be thought of as immaterial also.

Which ease, but does not solve the things, you need a self between.

Not sure how that relates, but how do you know that a self is needed?

Because sense makes sense for a subject, which is a person, and which has different sorts of self (like the 8 "hypostases" in comp + some definition). 




 



With logical automata we can see clearly that the functions of computation need not be immaterial at all, and can be presented directly through 4-D material geometry.

Either it violates the Church thesis, and then it is very interesting, or it does not, and then it is a red herring for the mind-body problem, even if quite interesting in practical applications.

My point is that computation need not have a mind -

A computation has no mind. But some computations can be assumed to support a mind, or to make it possible for a mind (a subject) to manifest itself with respect to different universal minds in the local neighborhood.




it can be executed using bodies alone, and logic automata demonstrates that is true.

"bodies alone" don't make sense in the comp theory. 




 




In doing this, we expose the difference between computation, which is an anesthetic automatism and consciousness which is an aesthetic direct participation.

In doing this, all what I see is that you eliminate the person who got a brain prosthesis. 

Saying that God made the human following his own image also expose a difference, but not in a quite convincing way. 

Why isn't the logic automata example convincing? Are you saying that there still must be some mind there even though all functions are executed by bodies? What is your objection?

It introduces a notion of bodies, when a simpler theory can explain them, in a way making that simple theory testable.
Your argument that machines cannot support a mind mirrors the elimination of persons by materialists.




 




 



Logic automata proves that none of these differences are meaningful in a functionalist universe.

?

That any function performed by a logical automata would be the same configuration of bricks whether we ultimately read the output as a visual experience or an auditory experience.

There is a big difference between computationalism and functionalism. Comp says that functionalism is correct, at some unknown level, and in fine, this plays some role, as we cannot know which machine we are. We are only free to bet on some level, in case we need some new body, or after death. 
if functionalism was correct, you can replace the entire universe by the program "do nothing", as it will do the same thing as the entire universe.
A machine is *much* more than a function. In the math, we distinguish intensional and extensional, to talk about that difference. Modal logic aboard the intensional aspects, already existing in the extensional math, when looked from some (internal or not) point of view.
I think you conflate extension and intension (note the "s").

I would say that a machine is a collection of logical functions which produce another collection of logical functions. What more is there to it, or more to the point, what more is there which could generate any aesthetic experience?

Truth.

Craig Weinberg

unread,
Apr 13, 2013, 6:05:46 PM4/13/13
to everyth...@googlegroups.com


On Saturday, April 13, 2013 6:47:47 AM UTC-4, Bruno Marchal wrote:

On 11 Apr 2013, at 21:18, Craig Weinberg wrote:

With comp, matter relies on the numbers law, or Turing equivalent.

Matter also relies on geometry, which comp cannot provide.

?

Does that mean you think that comp can generate geometry, or that matter doesn't relay on geometry?

"comp can generate geometry" does not mean something clear.

I think it's pretty clear. Without a printer or video screen, my computer cannot generate geometry. It doesn't matter how much CPU power or memory I have, the functions will come no closer to taking on a coherent geometric form somewhere. I can make endless computations about circles and pi, but there is never any need for any literal presentation of a circle in the universe. No actual circle is present.
 

But what can be shown is that in the comp theory, you can assume only number (or combinators) and the + and * laws, this generates all the dreams, which can be shown to generate from the machine points of view, geometry, analysis, and physics.

That's only because you have given + and * the benefit of the dream to begin with. Comp is tautology.
 
Then we can compare physics with the empirical data and confirm of refute comp (but not proving comp). 
Since already Diophantus, but then systematically since Descartes, the relation between geometry and arithmetic are deep and multiple. It is a whole subject matter, a priori independent from comp.

What is the relation between comp and geometry?

Craig
 

Bruno Marchal

unread,
Apr 14, 2013, 1:27:24 PM4/14/13
to everyth...@googlegroups.com
On 14 Apr 2013, at 00:05, Craig Weinberg wrote:



On Saturday, April 13, 2013 6:47:47 AM UTC-4, Bruno Marchal wrote:

On 11 Apr 2013, at 21:18, Craig Weinberg wrote:

With comp, matter relies on the numbers law, or Turing equivalent.

Matter also relies on geometry, which comp cannot provide.

?

Does that mean you think that comp can generate geometry, or that matter doesn't relay on geometry?

"comp can generate geometry" does not mean something clear.

I think its pretty clear. Without a printer or video screen, my computer cannot generate geometry.

Why?
(printer and video screen are not geometry).

There are programs able to solve geometrical puzzles by "mentally" rotating complex geometrical figures (in their RAM, without using a screen or a printer), ...
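
For example (a toy sketch only, to fix the idea; the figure and the question asked of it are arbitrary):

import math

def rotate(points, angle_rad):
    # Rotate a list of (x, y) points about the origin, purely in memory.
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def edge_lengths(pts):
    # The program can answer geometric questions without drawing anything.
    return [math.dist(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts))]

square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
rotated = rotate(square, math.pi / 4)   # 45 degrees

print(edge_lengths(square))
print(edge_lengths(rotated))

Nothing is ever displayed; the "figure" exists only as numbers in memory, and the rotation is arithmetic on those numbers.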




It doesn't matter how much CPU power or memory I have, the functions will come no closer to taking on a coherent geometric form somewhere. I can make endless computations about circles and pi, but there is never any need for any literal presentation of a circle in the universe. No actual circle is present.

Not sure I have ever seen an actual circle anywhere, nor do I think that seeing proves existence ...




 

But what can be shown is that in the comp theory, you can assume only number (or combinators) and the + and * laws, this generates all the dreams, which can be shown to generate from the machine points of view, geometry, analysis, and physics.

That's only because you have given + and * the benefit of the dream to begin with.

No. I begin by assuming that the brain is Turing emulable. It is not that obvious to get everything (brain and consciousness) from + and *.





Comp is tautology.

If comp were a tautology, I would like you to give my son-in-law, the one with the digital brain, a little more tautological consideration. You should accept that he has consciousness then.

But of course that is not the case, as comp might be false, logically. Indeed, it can be shown refutable, and if the evidence were that physics is Newtonian, I would say that comp would be quite doubtful.




 
Then we can compare physics with the empirical data and confirm of refute comp (but not proving comp). 
Since already Diophantus, but then systematically since Descartes, the relation between geometry and arithmetic are deep and multiple. It is a whole subject matter, a priori independent from comp.

What is the relation between comp and geometry?

It already extends the very many relations between number and geometry discovered by Descartes.
Most elementary geometries on the reals are decidable, and so are common toys in the machines' dreams; then it is like an old couple, with relations for better and for worse. For example, the fact that the Diophantine equation x^2 = 2*y^2 has no non-trivial solution is equivalent to the fact that the diagonal of a square, in the Euclidean plane, is incommensurable with the side of the square. They have no common unit. 
You can sum up 90% of math as the study of the relations between the numbers and the geometries.
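
To recall the classical argument behind that equivalence (the standard infinite descent, nothing new): suppose x^2 = 2*y^2 with x, y positive integers. Then x^2 is even, so x is even, say x = 2x'. Substituting gives 4x'^2 = 2*y^2, hence y^2 = 2*x'^2, so y is even too, say y = 2y'. Then x'^2 = 2*y'^2 with x' < x and y' < y, so any solution would produce a strictly smaller one, which is impossible for positive integers. And since the diagonal d of the unit square satisfies d^2 = 2 (Pythagoras), a common unit, i.e. d = p/q, would give p^2 = 2*q^2.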

Bruno





Stathis Papaioannou

unread,
Apr 14, 2013, 5:45:04 PM4/14/13
to everyth...@googlegroups.com


On 15/04/2013, at 3:27 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:

> But of course that is not the case, as comp might be false, logically. Indeed, it can be shown refutable, and if the evidences were that physics is Newtonian, I would say that comp would be quite doubtful.

Why?

Craig Weinberg

unread,
Apr 14, 2013, 8:38:05 PM4/14/13
to everyth...@googlegroups.com


On Sunday, April 14, 2013 1:27:24 PM UTC-4, Bruno Marchal wrote:

On 14 Apr 2013, at 00:05, Craig Weinberg wrote:



On Saturday, April 13, 2013 6:47:47 AM UTC-4, Bruno Marchal wrote:

On 11 Apr 2013, at 21:18, Craig Weinberg wrote:

With comp, matter relies on the numbers law, or Turing equivalent.

Matter also relies on geometry, which comp cannot provide.

?

Does that mean you think that comp can generate geometry, or that matter doesn't relay on geometry?

"comp can generate geometry" does not mean something clear.

I think its pretty clear. Without a printer or video screen, my computer cannot generate geometry.

Why?
(printer and video screen are not geometry).

Printers and video screens have no purpose other than to manifest geometric forms in public.
 

There are programs able to solve geometrical puzzles by "mentally" rotating complex geometrical figures (in their RAM, without using a screen or a printer), ...

That's what I'm saying. All geometric functions can be emulated computationally with no literal geometry. The puzzle shapes aren't literally "in" the RAM. There is no presentation of shape in that universe, and the addition of shape (from screens or printers) would add nothing to that computation.





It doesn't matter how much CPU power or memory I have, the functions will come no closer to taking on a coherent geometric form somewhere. I can make endless computations about circles and pi, but there is never any need for any literal presentation of a circle in the universe. No actual circle is present.

Not sure I have ever seen an actual circle anywhere, nor do I think that seeing proves existence ...

I don't think that there is 'existence'. There is seeing, feeling, touching, etc. I don't understand what you mean by not being sure if you have seen an actual circle anywhere. OOOOOOOO see? Those are actual circles that you see on your screen. The computer doesn't see those though. It doesn't see the similarity between o,O,0,O,o, etc. To the computer there are different quantities associated with the ASCII characters, different codes for font rendering as screen pixels or printer instructions, etc, but unless you are running an OCR program, the computer by default has no notion of visual circularity associated with OOOOOOOO.
 




 

But what can be shown is that in the comp theory, you can assume only number (or combinators) and the + and * laws, this generates all the dreams, which can be shown to generate from the machine points of view, geometry, analysis, and physics.

That's only because you have given + and * the benefit of the dream to begin with.

No. I begin with assuming that the brain is Turing emulable. It is not that obvious to get everything (brain and consciousness) from + and *.

It's one thing to assume that the brain is Turing emulable, but another to assume that interior experience is isomorphic to brain activity. My view is that it is not. To the contrary, exteriority is the anesthetic, orthomodular reflection of interiority. This orthomodularity is total, so that it circumscribes both arithmetic truth and ontological realism entirely.

http://multisenserealism.com/2013/04/14/1060/






Comp is a tautology.

If comp were a tautology, I would like you to attribute to my son-in-law, the one with the digital brain, a little more tautological consideration. You should then accept that he has consciousness.

He doesn't have consciousness, but he has the capacity to broadly and deeply enrich our consciousness. I give him the appropriate consideration: he gets a nice juicy retro-memory implant of a generic steak-eating experience - free of charge!
 

But of course that is not the case, as comp might be false, logically. Indeed, it can be shown refutable, and if the evidence were that physics is Newtonian, I would say that comp would be quite doubtful.




 
Then we can compare physics with the empirical data and confirm or refute comp (but not prove comp).
Since Diophantus already, and then systematically since Descartes, the relations between geometry and arithmetic have been deep and multiple. It is a whole subject matter, a priori independent of comp.

What is the relation between comp and geometry?

It extends the very many relations between number and geometry discovered by Descartes.
Most elementary geometries on the reals are decidable, and so are common toys in the machine's dreams; it is like an old couple, and the relations are for better and for worse. For example, the fact that the Diophantine equation x^2 = 2*y^2 has no non-trivial solution is equivalent to the fact that the diagonal of a square, in the Euclidean plane, is incommensurable with the side of the square. They have no common unit.
You can sum up 90% of math as the study of the relations between the numbers and the geometries.
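For reference, the parity argument behind that equivalence, spelled out briefly (a standard proof sketch, not part of the original post): suppose $x^2 = 2y^2$ for integers $x, y > 0$ with $\gcd(x, y) = 1$. Then $x^2$ is even, so $x$ is even; write $x = 2k$. Substituting gives $4k^2 = 2y^2$, i.e. $y^2 = 2k^2$, so $y$ is even too, contradicting $\gcd(x, y) = 1$. Hence only the trivial solution exists, $\sqrt{2}$ is irrational, and a square's diagonal $s\sqrt{2}$ and its side $s$ admit no common measuring unit.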

It seems that there are implicit equivalences but only after the fact of geometry. Geometry itself is a separate aesthetic dimension which does not follow explicitly from comp.


See if you like my idea for a Sci-Fi story about artificial qualia/hyper-quanta: http://s33light.org/post/47951545367

While we disagree on whether machines experience qualia themselves, I think we can both agree on the idea that machine-discovered quanta can be translated into our experience as significantly novel qualitative content, if not new sensory modalities (by perceptual cheating or neural mod).

Craig

Bruno Marchal

unread,
Apr 15, 2013, 8:27:28 AM4/15/13
to everyth...@googlegroups.com
Comp implies that there is a substitution level, and it implies that
if we look around us below that substitution level we must find the
traces of the infinitely many parallel computations leading to our
states (by the first-person indeterminacy).

Comp implies that physics must be derived from arithmetic, and this
leads to an MW sort of physics, both qualitatively (as in the UDA),
and formally, when assuming the classical theory of knowledge, as in
the translation of the UDA into arithmetic. In that case we can
generate the set of experimental configurations capable of refuting
comp (or showing that we are in a second-order simulation built to
make us believe in non-comp).

There are other reasons as well. Newtonian physics uses action at a
distance, arguably a non-comp phenomenon. Newton was aware of that
problem, and he already took it as a symptom that his physics was
only an approximation, but this has of course been solved by
relativity theory.

Bruno


http://iridia.ulb.ac.be/~marchal/



Bruno Marchal

unread,
Apr 15, 2013, 8:41:19 AM4/15/13
to everyth...@googlegroups.com
On 15 Apr 2013, at 02:38, Craig Weinberg wrote:



On Sunday, April 14, 2013 1:27:24 PM UTC-4, Bruno Marchal wrote:

On 14 Apr 2013, at 00:05, Craig Weinberg wrote:



On Saturday, April 13, 2013 6:47:47 AM UTC-4, Bruno Marchal wrote:

On 11 Apr 2013, at 21:18, Craig Weinberg wrote:

With comp, matter relies on the numbers law, or Turing equivalent.

Matter also relies on geometry, which comp cannot provide.

?

Does that mean you think that comp can generate geometry, or that matter doesn't relay on geometry?

"comp can generate geometry" does not mean something clear.

I think it's pretty clear. Without a printer or video screen, my computer cannot generate geometry.

Why?
(printer and video screen are not geometry).

Printers and video screens have no purpose other than to manifest geometric forms in public.
 

There are programs able to solve geometrical puzzles by rotating "mentally" (in their RAM, without using a screen or a printer) complex geometrical figures, ...

That's what I'm saying. All geometric functions can be emulated computationally with no literal geometry. The puzzle shapes aren't literally "in" the RAM. There is no presentation of shape in that universe, and the addition of shape (from screens or printers) would add nothing to that computation.

That's a reason to doubt that what you call "literal geometry" makes sense.








It doesn't matter how much CPU power or memory I have; the functions will come no closer to taking on a coherent geometric form somewhere. I can make endless computations about circles and pi, but there is never any need for any literal presentation of a circle in the universe. No actual circle is present.

Not sure I have ever seen an actual circle anywhere, nor do I think that seeing proves existence ...

I don't think that there is 'existence'. There is seeing, feeling, touching, etc.

That makes my point. With comp all this is already implemented in arithmetic.



I don't understand what you mean by not being sure if you have seen an actual circle anywhere. OOOOOOOO see?

If those are circles, then I doubt that PI = 3.141592...




Those are actual circles that you see on your screen.

Not at all. They are gross polygonal approximations of a circle.



The computer doesn't see those though. It doesn't see the similarity between o,O,0,O,o, etc.

He can, if you let him learn the difference. Some software can already see such similarities.



To the computer there are different quantities associated with the ASCII characters, different codes for font rendering as screen pixels or printer instructions, etc, but unless you are running an OCR program, the computer by default has no notion of visual circularity associated with OOOOOOOO.

Of course. Current laptops are blind, but you can't derive a big universal fact from a small, biased sample.



 




 

But what can be shown is that in the comp theory, you can assume only numbers (or combinators) and the + and * laws; this generates all the dreams, which can be shown to generate, from the machine's point of view, geometry, analysis, and physics.

That's only because you have given + and * the benefit of the dream to begin with.

No. I begin with assuming that the brain is Turing emulable. It is not that obvious to get everything (brain and consciousness) from + and *.

It's one thing to assume that the brain is Turing emulable, but another to assume that interior experience is isomorphic to brain activity.

You cannot be more right on this. It has been part of my job to show that if the brain is Turing emulable, then the interior experience is not at all isomorphic to brain activity. It is already done explicitly in step seven (you don't need the more subtle step 8).




My view is that it is not.

Good, but it contradicts some of your other posts, where mind and matter seem to be dual.



To the contrary, exteriority is the anesthetic, orthomodular reflection of interiority. This orthomodularity is total, so that it circumscribes both arithmetic truth and ontological realism entirely.

http://multisenserealism.com/2013/04/14/1060/






Comp is a tautology.

If comp were a tautology, I would like you to attribute to my son-in-law, the one with the digital brain, a little more tautological consideration. You should then accept that he has consciousness.

He doesn't have consciousness, but he has the capacity to broadly and deeply enrich our consciousness. I give him the appropriate consideration: he gets a nice juicy retro-memory implant of a generic steak-eating experience - free of charge!

But that does not make sense if he cannot taste it consciously, and comp asserts, non-tautologically, that he will taste it, which was my point.


 

But of course that is not the case, as comp might be false, logically. Indeed, it can be shown refutable, and if the evidence were that physics is Newtonian, I would say that comp would be quite doubtful.




 
Then we can compare physics with the empirical data and confirm or refute comp (but not prove comp).
Since Diophantus already, and then systematically since Descartes, the relations between geometry and arithmetic have been deep and multiple. It is a whole subject matter, a priori independent of comp.

What is the relation between comp and geometry?

It extends the very many relations between number and geometry discovered by Descartes.
Most elementary geometries on the reals are decidable, and so are common toys in the machine's dreams; it is like an old couple, and the relations are for better and for worse. For example, the fact that the Diophantine equation x^2 = 2*y^2 has no non-trivial solution is equivalent to the fact that the diagonal of a square, in the Euclidean plane, is incommensurable with the side of the square. They have no common unit.
You can sum up 90% of math as the study of the relations between the numbers and the geometries.

It seems that there are implicit equivalences but only after the fact of geometry. Geometry itself is a separate aesthetic dimension which does not follow explicitly from comp.

Only because you assume non-comp at the start. I have no problem with that, but you should not pretend to refute comp.

Bruno





See if you like my idea for a Sci-Fi story about artificial qualia/hyper-quanta: http://s33light.org/post/47951545367

While we disagree on whether machines experience qualia themselves, I think we can both agree on the idea that machine-discovered quanta can be translated into our experience as significantly novel qualitative content, if not new sensory modalities (by perceptual cheating or neural mod).

Craig



meekerdb

unread,
Apr 15, 2013, 2:33:45 PM4/15/13
to everyth...@googlegroups.com
On 4/15/2013 5:27 AM, Bruno Marchal wrote:
There are other reasons as well. Newtonian physics uses action at a distance, arguably a non-comp phenomenon.

Why would comp not accommodate action at a distance?

Brent

Russell Standish

unread,
Apr 15, 2013, 10:14:31 PM4/15/13
to everyth...@googlegroups.com
On Mon, Apr 15, 2013 at 02:41:19PM +0200, Bruno Marchal wrote:
>
>
> You cannot be more right on this. It has been part of my job to show
> that if the brain is Turing emulable, then the interior experience
> is not at all isomorphic to brain activity. It is already done
> explicitely in step seven (you don't need the more subtle step 8).
>

If you have shown this, then empirically, COMP is falsified. However,
you yourself have stated that all that is shown is that the brain
(and physics generally) cannot be ontologically primitive.

We're getting close to the latter (on foar) - if we can find a way of
expressing the MGA that doesn't rely on an intuition.

--

----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics hpc...@hpcoders.com.au
University of New South Wales http://www.hpcoders.com.au
----------------------------------------------------------------------------

Bruno Marchal

unread,
Apr 16, 2013, 5:05:07 AM4/16/13
to everyth...@googlegroups.com
It does, but only due to the FPI. 

I am not sure the notion of action at a distance makes sense to me. But then we have not yet derived space and (physical) distances from comp, so we don't really know. I tend to credit Everett because it makes quantum physics local. If non-local physical causes exist, I am not sure I would dare to say "yes" to the digitalist surgeon.

Bruno



Brent


Bruno Marchal

unread,
Apr 16, 2013, 7:41:07 AM4/16/13
to everyth...@googlegroups.com

On 16 Apr 2013, at 04:14, Russell Standish wrote:

> On Mon, Apr 15, 2013 at 02:41:19PM +0200, Bruno Marchal wrote:
>>
>>
>> You cannot be more right on this. It has been part of my job to show
>> that if the brain is Turing emulable, then the interior experience
>> is not at all isomorphic to brain activity. It is already done
>> explicitely in step seven (you don't need the more subtle step 8).
>>
>
> If you have shown this, then empirically, COMP is falsified.

Why?
Only if you still believe in a primitive material brain. But the point
is that it does not exist primitively.




> However,
> you yourself, have stated that all that is shown is that the brain
> (and physics generally) cannot be ontologically primitive.

Yes.




>
> We're getting close to the latter (on foar) - if we can find a way of
> expressing the MGA that doesn't rely on an intuition.

All proofs and theories rely on some intuition and common sense.

Common sense is the only tool that we have to go beyond common sense.

But where you think I did use intuition, I was using only the
definition of comp. See my preceding posts.

Bruno






>
> --
>
> ----------------------------------------------------------------------------
> Prof Russell Standish Phone 0425 253119 (mobile)
> Principal, High Performance Coders
> Visiting Professor of Mathematics hpc...@hpcoders.com.au
> University of New South Wales http://www.hpcoders.com.au
> ----------------------------------------------------------------------------
>

Russell Standish

unread,
Apr 17, 2013, 2:41:55 AM4/17/13
to everyth...@googlegroups.com
On Tue, Apr 16, 2013 at 01:41:07PM +0200, Bruno Marchal wrote:
>
> On 16 Apr 2013, at 04:14, Russell Standish wrote:
>
> >On Mon, Apr 15, 2013 at 02:41:19PM +0200, Bruno Marchal wrote:
> >>
> >>
> >>You cannot be more right on this. It has been part of my job to show
> >>that if the brain is Turing emulable, then the interior experience
> >>is not at all isomorphic to brain activity. It is already done
> >>explicitely in step seven (you don't need the more subtle step 8).
> >>
> >
> >If you have shown this, then empirically, COMP is falsified.
>
> Why?
> Only if you still believe in primitive material brain. But the point
> is that does not exist primitively.
>

This has nothing to do with "primitiveness". Stick an fMRI scanner on
your brain, and think some thoughts. You will find you cannot have a
different thought without different brain activity. Moreover, there
appears to be quite a close correspondence, viz. the experiments where
computers are taught to read someone's mind. The experiments are
getting better over time - I would say it's pretty overwhelming
evidence that mind supervenes on (phenomenal) physical matter
(particularly the brain).

Furthermore, the anthropic principle is completely unexplainable with
some sort of "phenomenal" physical supervenience.

>
>
>
> >However,
> >you yourself, have stated that all that is shown is that the brain
> >(and physics generally) cannot be ontologically primitive.
>
> Yes.
>
>
>
>
> >
> >We're getting close to the latter (on foar) - if we can find a way of
> >expressing the MGA that doesn't rely on an intuition.
>
> All proofs and theories relies on some intuition and common sense.

Only in setting the axioms. You're allowed to argue with an axiom, of
course, but that does not invalidate a proof.

>
> Common sense is the only tool that we have to go beyond common sense.
>
> But where you think I did use intuition, I was using only the
> definition of comp. See my preceding posts.
>

Primarily where you assert that conscious states supervening on
recordings are an absurdity. There are some other places where
intuition has cropped up, but that seems to be the main one.

Craig Weinberg

unread,
Apr 17, 2013, 11:29:34 AM4/17/13
to everyth...@googlegroups.com


On Wednesday, April 17, 2013 2:41:55 AM UTC-4, Russell Standish wrote:
On Tue, Apr 16, 2013 at 01:41:07PM +0200, Bruno Marchal wrote:
>
> On 16 Apr 2013, at 04:14, Russell Standish wrote:
>
> >On Mon, Apr 15, 2013 at 02:41:19PM +0200, Bruno Marchal wrote:
> >>
> >>
> >>You cannot be more right on this. It has been part of my job to show
> >>that if the brain is Turing emulable, then the interior experience
> >>is not at all isomorphic to brain activity. It is already done
> >>explicitely in step seven (you don't need the more subtle step 8).
> >>
> >
> >If you have shown this, then empirically, COMP is falsified.
>
> Why?
> Only if you still believe in primitive material brain. But the point
> is that does not exist primitively.
>

This has nothing to do with "primitiveness". Stick an fMRI scanner on
your brain, and think some thoughts. You will find you cannot have a
different thought without different brain activity. Moreover, there
appears to be quite a close correspondence, viz. the experiments where
computers are taught to read someone's mind. The experiments are
getting better over time - I would say it's pretty overwhelming
evidence that mind supervenes on (phenomenal) physical matter
(particularly the brain).

Furthermore, the anthropic principle is completely unexplainable with
some sort of "phenomenal" physical supervenience.

One example that crossed my mind is eyelids. If comp were true, I would think that eyelids would be unnecessary. Any optical data could be accessed directly from the data source so that the physical character of a sense organ should not be relevant. An obvious adaptation would be for a machine who loses her eyes to simply use some other appendage to reproduce the effect.

I disagree, however, that the correlation between human brain activity and human experience is evidence that *all* experience *supervenes* on brain activity. We see the correlation, but since the fMRI can find no evidence whatsoever of any experience at all, we cannot claim supervenience. Our conscious experience is reflected in brain activity, yes. It is not reflected in liver activity or intestine activity, yes (I presume). Some brain activity is not correlated with conscious awareness, yes. That's all fine, but that does not mean that unconscious brain activity is not associated with experience on some extra-personal level, and it does not mean that experience is a product of brain activity, or is local to brain activity at all. Indeed, locality is not necessarily a coherent quality of consciousness, and my understanding is that this is because subjectivity is physically tied to time rather than space.

Craig


Bruno Marchal

unread,
Apr 17, 2013, 11:52:40 AM4/17/13
to everyth...@googlegroups.com

On 17 Apr 2013, at 08:41, Russell Standish wrote:


Primarily where you assert that conscious states supervening on
recordings are an absurdity. There are some other places where
intuition has cropped up, but that seems to be the main one.

In the comp frame. It is absurd because there is no computation done by a recording.
See my preceding posts; you have confused association and supervenience.
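One toy way to make that distinction concrete (a Python sketch; the function and the recorded trace are illustrative assumptions, not anything stated in the thread): a computation determines answers for inputs it has never seen, while a recording can only replay the states it stored.

def square(n):
    # A genuine computation: it yields an answer for any input,
    # including inputs that never occurred before.
    return n * n

# A "recording" of one past run: just a stored sequence of (input, output) states.
recorded_run = [(0, 0), (1, 1), (2, 4), (3, 9)]

print(square(7))                  # 49 -- the computation handles a new case
print(dict(recorded_run).get(7))  # None -- the recording has nothing to say

Replaying the recording reproduces the old states, but it supports none of the counterfactuals that individuate the computation.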

Bruno




meekerdb

unread,
Apr 17, 2013, 1:39:36 PM4/17/13
to everyth...@googlegroups.com
But a recording refers to a computation at a different time.  Is a computation, as done by a Turing machine, localized in time?

Brent

Bruno Marchal

unread,
Apr 18, 2013, 5:30:21 AM4/18/13
to everyth...@googlegroups.com
On 17 Apr 2013, at 19:39, meekerdb wrote:

On 4/17/2013 8:52 AM, Bruno Marchal wrote:

On 17 Apr 2013, at 08:41, Russell Standish wrote:


Primarily where you assert that conscious states supervening on
recordings are an absurdity. There are some other places where
intuition has cropped up, but that seems to be the main one.

In the comp frame. It is absurd because there is no computation done by a recording.
See my preceding posts; you have confused association and supervenience.

But a recording refers to a computation at a different time. 

That's correct.



Is a computation, as done by a Turing machine, localized in time?

Only relative to the universal machine/number running that machine. In that case you will not make consciousness supervene on the recording, but on the immaterial computations referred to by the recording. That is, you go from physical supervenience to comp supervenience. That's the right move.

Bruno 





Brent
