Max Velmans' Reflexive Monism

39 views
Skip to first unread message

Evgenii Rudnyi

unread,
May 26, 2012, 11:57:29 AM5/26/12
to everyth...@googlegroups.com
I have just finished reading Understanding Consciousness by Max Velmans
and below there are a couple of comments to the book.

The book is similar to Jeffrey Gray's Consciousness: Creeping up on the
Hard Problem in a sense that it takes phenomenal consciousness
seriously. Let me give an example. Imagine that you watch yourself in
the mirror. Your image that you observe in the mirror is an example of
phenomenal consciousness.

The difference with Jeffrey Gray is in the question where the image that
you see in the mirror is located. If we take a conventional way of
thinking, that is,

1) photons are reflected by the mirror
2) neurons in retina are excited
3) natural neural nets starts information processing

then the answer should be that this image is in your brain. It seems to
be logical as, after all, we know that there is nothing after the mirror.

However, it immediately follows that not only your image in the mirror
is in your brain but rather everything that your see is also in your
brain. This is exactly what one finds in Gray's book "The world is
inside the head".

Velmans takes a different position that he calls reflexive model of
perception. According to him, what we consciously experience is located
exactly where we experience it. In other words, the image that you see
in the mirror is located after the mirror and not in your brain. A nice
picture that explains Velmans' idea is at

http://blog.rudnyi.ru/2012/05/brain-and-world.html

Velmans introduces perceptual projection but this remains as the Hard
Problem in his book, how exactly perceptual projection happens.

Velmans contrast his model with reductionism (physicalism) and dualism
and interestingly enough he finds many common features between
reductionism and dualism. For example, the image in the mirror will be
in the brain according to both reductionism and dualism. This part could
be interesting for Stephen.

First I thought that perceptual projection could be interpreted similar
to Craig's senses but it is not the case. Velmans' reflexive monism is
based on a statement that first- and third-person views cannot be
combined (this is what Bruno says). From a third-person view, one
observes neural correlates of consciousness but not the first-person
view. Now I understand such a position much better.

Anyway the the last chapter in the book is "Self-consciousness in a
reflexive universe".

Evgenii

Stephen P. King

unread,
May 27, 2012, 1:50:48 AM5/27/12
to everyth...@googlegroups.com
Hi Evgenii,

I would be very interested if Velmans discussed how the model would
consider multiple observers of the image in the mirror and how the
images that are in the brains of the many are coordinated such that
there is always a single consistent world of mirrors and brains and so
forth.

>
> First I thought that perceptual projection could be interpreted
> similar to Craig's senses but it is not the case. Velmans' reflexive
> monism is based on a statement that first- and third-person views
> cannot be combined (this is what Bruno says). From a third-person
> view, one observes neural correlates of consciousness but not the
> first-person view. Now I understand such a position much better.

Is this third-person view (3p) one that is not ever the actual
first-person (1p) of some actual observer? I can only directly
experience my own content of consciousness, so the content of someone
else is always only known via some description. How is this idea
considered, if at all?

>
> Anyway the the last chapter in the book is "Self-consciousness in a
> reflexive universe".

I am interested in "communications between self-conscious entities
in a reflexive universe". ;-) Does Velmans discuss any abstract models
of reflexivity itself?

>
> Evgenii
>


--
Onward!

Stephen

"Nature, to be commanded, must be obeyed."
~ Francis Bacon


Richard Ruquist

unread,
May 27, 2012, 8:42:17 AM5/27/12
to everyth...@googlegroups.com
"Velmans introduces perceptual projection but this remains as the Hard Problem in his book, how exactly perceptual projection happens"-Evgenii Rudnyi

I conjecture that the discrete nonphysical particles of compactified space, the so-called Calabi-Yau Manifolds of string theory, have perceptual projection due to the mapping of closed strings, something that Leibniz hypothesized for his monads centuries ago. http://vixra.org/pdf/1101.0044v1.pdf
Richard David

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to everything-list+unsubscribe@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.


Evgenii Rudnyi

unread,
May 27, 2012, 4:07:51 PM5/27/12
to everyth...@googlegroups.com
On 27.05.2012 07:50 Stephen P. King said the following:
> On 5/26/2012 11:57 AM, Evgenii Rudnyi wrote:

...

>> Velmans contrast his model with reductionism (physicalism) and
>> dualism and interestingly enough he finds many common features
>> between reductionism and dualism. For example, the image in the
>> mirror will be in the brain according to both reductionism and
>> dualism. This part could be interesting for Stephen.
>
> Hi Evgenii,
>
> I would be very interested if Velmans discussed how the model would
> consider multiple observers of the image in the mirror and how the
> images that are in the brains of the many are coordinated such that
> there is always a single consistent world of mirrors and brains and
> so forth.

A good extension. Velmans does not consider such a case but he says that
the perceptions are located exactly where one perceives them. In this
case, it seems that it should not pose an additional difficulty.

>> First I thought that perceptual projection could be interpreted
>> similar to Craig's senses but it is not the case. Velmans'
>> reflexive monism is based on a statement that first- and
>> third-person views cannot be combined (this is what Bruno says).
>> From a third-person view, one observes neural correlates of
>> consciousness but not the first-person view. Now I understand such
>> a position much better.
>
> Is this third-person view (3p) one that is not ever the actual
> first-person (1p) of some actual observer? I can only directly
> experience my own content of consciousness, so the content of someone
> else is always only known via some description. How is this idea
> considered, if at all?

Yes, the third-person view belongs to another observer and Velmans plays
this fact out. He means that at his picture when a person looks at the
cat, the third-person view means another person who looks at that cat
and simultaneously look at the first person. This way, two person can
change their first-person view to third-person view. However, it is
still impossible to directly observe the first-person view of another
observer. Everything that is possible in this respect are neural
correlates of consciousness.


>> Anyway the the last chapter in the book is "Self-consciousness in a
>> reflexive universe".
>
> I am interested in "communications between self-conscious entities in
> a reflexive universe". ;-) Does Velmans discuss any abstract models
> of reflexivity itself?

Not really. As usual, the positive construction of own philosophy is
weaker as the critique of other philosophies.

Evgenii

Stephen P. King

unread,
May 27, 2012, 5:04:17 PM5/27/12
to everyth...@googlegroups.com
On 5/27/2012 4:07 PM, Evgenii Rudnyi wrote:
> On 27.05.2012 07:50 Stephen P. King said the following:
>> On 5/26/2012 11:57 AM, Evgenii Rudnyi wrote:
>
> ...
>
>>> Velmans contrast his model with reductionism (physicalism) and
>>> dualism and interestingly enough he finds many common features
>>> between reductionism and dualism. For example, the image in the
>>> mirror will be in the brain according to both reductionism and
>>> dualism. This part could be interesting for Stephen.
>>
>> Hi Evgenii,
>>
>> I would be very interested if Velmans discussed how the model would
>> consider multiple observers of the image in the mirror and how the
>> images that are in the brains of the many are coordinated such that
>> there is always a single consistent world of mirrors and brains and
>> so forth.
>
> A good extension. Velmans does not consider such a case but he says
> that the perceptions are located exactly where one perceives them. In
> this case, it seems that it should not pose an additional difficulty.

Hi Evgenii,

This does seem to imply an interesting situation where the
mind/consciousness of the observer is in a sense no longer confined to
being 'inside the skull" but ranging out to the farthest place where
something is percieved. It seems to me that imply a mapping between a
large hyper-volume (the out there) and the small volume of the brain
that cannot be in a one-to-one form. The reflexive idea looks a lot like
a Pullback in category theory and one can speculate if the dual, the
Pushout, is also involved. See
http://www.euclideanspace.com/maths/discrete/category/universal/index.htm for
more.

>
>>> First I thought that perceptual projection could be interpreted
>>> similar to Craig's senses but it is not the case. Velmans'
>>> reflexive monism is based on a statement that first- and
>>> third-person views cannot be combined (this is what Bruno says).
>>> From a third-person view, one observes neural correlates of
>>> consciousness but not the first-person view. Now I understand such
>>> a position much better.
>>
>> Is this third-person view (3p) one that is not ever the actual
>> first-person (1p) of some actual observer? I can only directly
>> experience my own content of consciousness, so the content of someone
>> else is always only known via some description. How is this idea
>> considered, if at all?
>
> Yes, the third-person view belongs to another observer and Velmans
> plays this fact out. He means that at his picture when a person looks
> at the cat, the third-person view means another person who looks at
> that cat and simultaneously look at the first person. This way, two
> person can change their first-person view to third-person view.
> However, it is still impossible to directly observe the first-person
> view of another observer. Everything that is possible in this respect
> are neural correlates of consciousness.

Does this ultimately imply that the 3-p (third person point of
view) is merely an abstraction and never actually occurring? WE make a
big deal about neural correlates but we still have no good
theory/explanation of how neural functions generate the internal model
that is one side of the relationship. The best research that I have seen
on this so far is the work of the mathematician Marius Buliga and
discussed in his blog here http://chorasimilarity.wordpress.com/

>
>
>>> Anyway the the last chapter in the book is "Self-consciousness in a
>>> reflexive universe".
>>
>> I am interested in "communications between self-conscious entities in
>> a reflexive universe". ;-) Does Velmans discuss any abstract models
>> of reflexivity itself?
>
> Not really. As usual, the positive construction of own philosophy is
> weaker as the critique of other philosophies.

Yes, that is true. An already existing target makes for a sharper
attack.

meekerdb

unread,
May 27, 2012, 5:45:01 PM5/27/12
to everyth...@googlegroups.com
On 5/27/2012 2:04 PM, Stephen P. King wrote:

    This does seem to imply an interesting situation where the mind/consciousness of the observer is in a sense no longer confined to being 'inside the skull" but ranging out to the farthest place where something is percieved. It seems to me that imply a mapping between a large hyper-volume (the out there) and the small volume of the brain that cannot be in a one-to-one form.

The skull, the brain, and 'out there' are all just parts of the world model your brain constructs.

Brent

Craig Weinberg

unread,
May 28, 2012, 1:10:54 AM5/28/12
to Everything List
I look at it the same way, that first and third person views cannot be
combined, but I go further to say that they are opposite. First person
images are events in our lives. They are sense (feeling-image-meaning-
story) in time. Third person is an inside out fisheye-view of first
person stories that are not yours. The totality of their story thus
far (up to the corresponding moment in your own story) appears to you
collapsed as an object in space. Just as the entire unexpressed
potential of infinite apple orchards is essentialized as an apple
seed. The difference between an apple seed and seeds in general
recapitulates the phylogeny of gymnosperms and the species of apple in
particular.

The seed of the entire dream of the human species universe is
condensed as the brain when viewed from the outside. If you change
someone's brain, you change not just how they feel but potentially the
universe as they experience it, but likewise if you change the world
you change everyone's brain who is aware of the change you have made.
The brain is a character in our story, our story is all of the events
in the brain. They are the same thing only involuted - time and sense
on the inside, space and matter on the outside.

In third person, space is literal and time is figurative. We
understand that an object sits literally in a position relative to
other objects. The phone is on the couch. Time, however is figurative.
We turn the clock back in the Fall and say that it is now an earlier
time. We understand that calendars and clocks are not literally
changing the universe, only our interpretation of it.

In first person, space is figurative and time is literal. We
understand that a person can figuratively travel to other places in
their minds but their body does not move. We use idioms like 'coming
from a darker place in her soul' as a metaphor to describe a semantic
quality of emotional tone, mood, themes. We talk about 'position' and
'placement' in relation to social status and political power, not
literal position in 3-D space. Time, however is literal. We understand
that we cannot turn the clock back on our lives. Our every thought or
feeling is a literal event that happens to us 'here' and now. Now is
always literally real, even in a dream or deep psychosis, the
narrative of our experience continues. Here is a figurative location -
somewhere behind our eyes or between our ears, or just near your body.
'Come over here' means what? near my body? near where my voice seems
to be coming from? It's less specific than that, it just means 'Come
to where I am. Join me'.

The image in the mirror then, like any image, is not anywhere in third
person. There is a silvered glass surface and that is all. To have an
image through the mirror, you need a first person receiver of images.
The image is a phenomenal sense experience in time, not in space. It
is inside of the matter that we are, which imitates the matter of the
mirror, which imitates the matter of the illuminated surfaces of the
room. They are all synchronized events that overlap on the same range
of inertial frames. They are all stories which occur within a
particular range of frequencies and scales. The shadow of that
staggeringly complex intersection of histories and rhythms is
presented outside of us in a static slice, like a 3-D Flatland of
objects. External realism. The inside is presented as subjects -
characters, stories, settings, themes. None of these are
representations, they are genuine presentations, however presentations
can recapitulate other presentations and associations. They can
conflict and confuse different levels.

Think of consciousness as a book that you read and write at the same
time but you can't see the pages and words, only hear them being read
and feel them coming to pass. The universe is a vast library which you
can only see the outside spines of the books, but which change your
own story when you get close to them.

Craig

Craig Weinberg

unread,
May 28, 2012, 1:27:14 AM5/28/12
to Everything List
A model is a presentation which we use to refer to another
presentation. To say that the brain constructs models relies on the
possibility of a model which has no presentation to begin with. It
means that our every experience, including your sitting in that chair
reading these words, is made of 'representation-ness', which stands in
for the Homunculus to perform this invisible and logically redundant
alchemical transformation from perfectly useful neurological signals
into some weird orgy of improbable identities.

It doesn't hold up. It is a de-presentation of the world in order to
justify our failure to locate consciousness inside the tissue of the
brain. Consciousness isn't 'in' anything, and it's not produced by
anything. It's a story which produces brains, bodies, planets, etc.
They are parts of consciousness that are modeled as the world. They
are representations made of condensed, externalized, temporally
imploded presentations of sense.

Craig

Bruno Marchal

unread,
May 28, 2012, 4:55:52 AM5/28/12
to everyth...@googlegroups.com
I comment on both Evgenii and Craig's comment:

On 28 May 2012, at 07:10, Craig Weinberg wrote:

> On May 26, 11:57 am, Evgenii Rudnyi <use...@rudnyi.ru> wrote:
>> I have just finished reading Understanding Consciousness by Max
>> Velmans
>> and below there are a couple of comments to the book.
>>
>> The book is similar to Jeffrey Gray's Consciousness: Creeping up on
>> the
>> Hard Problem in a sense that it takes phenomenal consciousness
>> seriously. Let me give an example. Imagine that you watch yourself in
>> the mirror. Your image that you observe in the mirror is an example
>> of
>> phenomenal consciousness.
>>
>> The difference with Jeffrey Gray is in the question where the image
>> that
>> you see in the mirror is located. If we take a conventional way of
>> thinking, that is,
>>
>> 1) photons are reflected by the mirror
>> 2) neurons in retina are excited
>> 3) natural neural nets starts information processing
>>
>> then the answer should be that this image is in your brain.

But the image is not in the brain. That can be said only in a
metaphorical way.



>> It seems to
>> be logical as, after all, we know that there is nothing after the
>> mirror.
>>
>> However, it immediately follows that not only your image in the
>> mirror
>> is in your brain but rather everything that your see is also in your
>> brain. This is exactly what one finds in Gray's book "The world is
>> inside the head".

I say that too, but it is only a metaphor. Your head is also in your
head. With comp, no problem: there are only number relation which are
interpreted by numbers, relatively to probable universal numbers. So
there are ontic third person computations, and first person views/
histories supervening on infinity of such computations.



>>
>> Velmans takes a different position that he calls reflexive model of
>> perception. According to him, what we consciously experience is
>> located
>> exactly where we experience it. In other words, the image that you
>> see
>> in the mirror is located after the mirror and not in your brain. A
>> nice
>> picture that explains Velmans' idea is at
>>
>> http://blog.rudnyi.ru/2012/05/brain-and-world.html
>>
>> Velmans introduces perceptual projection but this remains as the Hard
>> Problem in his book, how exactly perceptual projection happens.

It does not make sense. This is doing Aristotle mistake twice.



>>
>> Velmans contrast his model with reductionism (physicalism) and
>> dualism
>> and interestingly enough he finds many common features between
>> reductionism and dualism. For example, the image in the mirror will
>> be
>> in the brain according to both reductionism and dualism.

That does not make sense either. There are no image in the brain. In
fact there is no brain.



>> This part could
>> be interesting for Stephen.
>>
>> First I thought that perceptual projection could be interpreted
>> similar
>> to Craig's senses but it is not the case. Velmans' reflexive monism
>> is
>> based on a statement that first- and third-person views cannot be
>> combined (this is what Bruno says). From a third-person view, one
>> observes neural correlates of consciousness but not the first-person
>> view. Now I understand such a position much better.

That's correct (with resopect to comp), but with comp "brains", or
what we call brain, are just local universal numbers, so many of the
confusions here are avoided at the start. This illustrates how far you
need to go to keep naturalism and mechanism.



>
> I look at it the same way, that first and third person views cannot be
> combined, but I go further to say that they are opposite.

Well, G and G* does combine them easily, but they are not
interdefinable, and obeys different logic. But G can be used as a
multi-modal logic (which I avoid for pedagogical reason, but it is
part of the future).
Why?
I can relate, but it would be hard to explain relying on all comp's
consequences.

Bruno


http://iridia.ulb.ac.be/~marchal/



Evgenii Rudnyi

unread,
May 28, 2012, 5:26:26 AM5/28/12
to everyth...@googlegroups.com
On 27.05.2012 23:04 Stephen P. King said the following:
> On 5/27/2012 4:07 PM, Evgenii Rudnyi wrote:

...

>> A good extension. Velmans does not consider such a case but he says
>> that the perceptions are located exactly where one perceives them.
>> In this case, it seems that it should not pose an additional
>> difficulty.
>
> Hi Evgenii,
>
> This does seem to imply an interesting situation where the
> mind/consciousness of the observer is in a sense no longer confined
> to being 'inside the skull" but ranging out to the farthest place
> where something is percieved. It seems to me that imply a mapping
> between a large hyper-volume (the out there) and the small volume of
> the brain that cannot be in a one-to-one form. The reflexive idea
> looks a lot like a Pullback in category theory and one can speculate
> if the dual, the Pushout, is also involved. See
> http://www.euclideanspace.com/maths/discrete/category/universal/index.htm
> for more.

If you say that mind/consciousness confined to being 'inside the skull'
you have exactly the same problem as then you must accept that all three
dimensional world that you observe up to the horizon is 'inside the
skull'. The mapping problem remains though.

...

>> Yes, the third-person view belongs to another observer and Velmans
>> plays this fact out. He means that at his picture when a person
>> looks at the cat, the third-person view means another person who
>> looks at that cat and simultaneously look at the first person. This
>> way, two person can change their first-person view to third-person
>> view. However, it is still impossible to directly observe the
>> first-person view of another observer. Everything that is possible
>> in this respect are neural correlates of consciousness.
>
> Does this ultimately imply that the 3-p (third person point of view)
> is merely an abstraction and never actually occurring? WE make a big

There is no clear answer in the book (or I have missed it).

...

>> Not really. As usual, the positive construction of own philosophy
>> is weaker as the critique of other philosophies.
>
> Yes, that is true. An already existing target makes for a sharper
> attack.
>

In Russian to this end, one says "Ломать не строить, душа не болит". I
would translate this idiom as "To destroy something is much easier than
to build it, as this way the soul does not hurt".

Evgenii

Craig Weinberg

unread,
May 28, 2012, 8:23:38 AM5/28/12
to Everything List
On May 28, 4:55 am, Bruno Marchal <marc...@ulb.ac.be> wrote:

>
> > In first person, space is figurative and time is literal.
>
> Why?

The split between interior significance (doing*being)(timespace) and
exterior entropy (matter/energy)/spacetime prefigures causality.
Causality is part of 'doing', a semantic temporal narrative of
explanation which circumscribes significance and priority. If you try
to push causality back before causality, you can only come up with
anthropic or teleological pseudo first causes which still don't
explain where first cause possibilities come from.

Does the totality exist in this way because it has to exist? Because
it wants to exist? Because it can't not exist? Because it just does
exist and why is unknowable? Yes, yes, yes, yes and no, no, no, no.
It's the totality. All questions exist within it and cannot escape. In
that respect it is like a semantic black hole.

Craig

Evgenii Rudnyi

unread,
May 28, 2012, 3:09:14 PM5/28/12
to everyth...@googlegroups.com
Bruno,

I believe that this time I could say that you express your position. For
example in your two answers below it does not look like "I don't defend
that position".

On 28.05.2012 10:55 Bruno Marchal said the following:
> I comment on both Evgenii and Craig's comment:
>
>> On May 26, 11:57 am, Evgenii Rudnyi <use...@rudnyi.ru> wrote:

...

>>> Velmans introduces perceptual projection but this remains as the
>>> Hard Problem in his book, how exactly perceptual projection
>>> happens.
>
> It does not make sense. This is doing Aristotle mistake twice.
>
>>>
>>> Velmans contrast his model with reductionism (physicalism) and
>>> dualism and interestingly enough he finds many common features
>>> between reductionism and dualism. For example, the image in the
>>> mirror will be in the brain according to both reductionism and
>>> dualism.
>
> That does not make sense either. There are no image in the brain. In
> fact there is no brain.

As for Aristotle, recently I have read Feyerabend where he has compared
Aristotle's 'Natural is what occurs always or almost always' with
Galileo's inexorable laws. Somehow I like 'occurs always or almost
always'. I find it more human.

Evgenii

Bruno Marchal

unread,
May 28, 2012, 4:42:49 PM5/28/12
to everyth...@googlegroups.com

On 28 May 2012, at 21:09, Evgenii Rudnyi wrote:

> Bruno,
>
> I believe that this time I could say that you express your position.
> For example in your two answers below it does not look like "I don't
> defend that position".

I don't think so. I comment my comment below.



>
> On 28.05.2012 10:55 Bruno Marchal said the following:
> > I comment on both Evgenii and Craig's comment:
> >
> >> On May 26, 11:57 am, Evgenii Rudnyi <use...@rudnyi.ru> wrote:
>
> ...
>
>>>> Velmans introduces perceptual projection but this remains as the
>>>> Hard Problem in his book, how exactly perceptual projection
>>>> happens.
>>
>> It does not make sense. This is doing Aristotle mistake twice.


To see a mistake or an invalidity in an argument, you don't need to
take any position. Comp can be used as a counter-example to the idea
that Velmans' move is necessary.



>>
>>>>
>>>> Velmans contrast his model with reductionism (physicalism) and
>>>> dualism and interestingly enough he finds many common features
>>>> between reductionism and dualism. For example, the image in the
>>>> mirror will be in the brain according to both reductionism and
>>>> dualism.
>>
>> That does not make sense either. There are no image in the brain. In
>> fact there is no brain.

Yeah, here you can add "assuming comp". Sorry.

Bruno


>
> As for Aristotle, recently I have read Feyerabend where he has
> compared Aristotle's 'Natural is what occurs always or almost
> always' with Galileo's inexorable laws. Somehow I like 'occurs
> always or almost always'. I find it more human.
>
> Evgenii
>
> --
> You received this message because you are subscribed to the Google
> Groups "Everything List" group.
> To post to this group, send email to everyth...@googlegroups.com.
> To unsubscribe from this group, send email to everything-li...@googlegroups.com
> .
> For more options, visit this group at http://groups.google.com/group/everything-list?hl=en
> .
>

http://iridia.ulb.ac.be/~marchal/



Colin Geoffrey Hales

unread,
May 29, 2012, 1:21:55 AM5/29/12
to everyth...@googlegroups.com
Here's a story I just wrote. I'll get it published in due course.
Just posted it to the FoR list, thought you might appreciate the sentiments....

========================================================
It's 100,000 BCE. You are a politically correct caveperson. You want dinner. The cooling body of the dead thing at your feet seems to be your option. You have fire back at camp. That'll make it palatable. The fire is kept alive by the fire-warden of your tribe. None of you have a clue what it is, but it makes the food edible and you don't care.

It's 1700ish AD. You are a French scientist called Lavoisier. You have just worked out that burning adds oxygen to the fuel. You have killed off an eternity of dogma involving a non-existent substance called phlogiston. You will not be popular, but the facts speak for you. You are happy with your day's work. You go to the kitchen and cook your fine pheasant meal. You realise that oxidation never had to figure in your understanding of how to make dinner. Food for thought is your dessert.

It is 2005 and you are designing a furnace. You use COMSOL Multiphysics on your supercomputer. You modify the gas jet configuration and the flames finally get the dead pocket in the corner up to temperature. The toilet bowls will be well cooked here, you think to yourself. If you suggested to your project leader that the project was finished she would think you are insane. Later, in commissioning your furnace, a red hot toilet bowl is the target of your optical pyrometer. The fierceness of the furnace is palpable and you're glad you're not the toilet bowl. The computation of the physics of fire and the physics of fire are, thankfully, not the same thing - that fact has made your job a lot easier, but you cannot compute yourself a toilet bowl. A fact made more real shortly afterwards in the bathroom.

It is the early 20th century and you are a 'Wright Brother'. You think you can make a contraption fly. Your inspiration is birds. You experiment with shaped wood, paper and canvas in a makeshift wind tunnel. You figure out that certain shapes seems to drag less and lift more. Eventually you flew a few feet. And you have absolutely no clue about the microscopic physics of flight.

It is a hundred years later and you are a trainee pilot doing 'touch and go' landings in a simulator. The physics of flight is in the massive computer system running the simulator. Just for fun you stall your jetliner and crash it into a local shopping mall. Today you have flown 146, 341 km. As you leave the simulator, you remind yourself that the physics of flight in the computer and flight itself are not the same thing, and that nobody died today.

No-one ever needed a theory of combustion prior to cooking dinner with it. We cooked dinner and then we eventually learned a theory of combustion.

No-one needed the deep details of flight physics to work out how to fly. We few, then we figured out how the physics of flight worked.

This is the story of the growth of scientific knowledge of the natural world. It has been this way for thousands of years. Any one of us could think of a hundred examples of exactly this kind of process. In a modern world of computing and physics, never before have we had more power to examine in detail, whatever are the objects of our study. And in each and every case, if anyone told you that a computed model of the natural world and the natural world are literally the same thing, you'd brand them daft or deluded and probably not entertain their contribution as having any value.

Well almost. There's one special place where not only is that very delusion practised on a massive scale, if you question the behaviour, you are suddenly confronted with a generationally backed systematic raft of unjustified excuses, perhaps 'policies'?, handed from mentor to novice with such unquestioning faith that entire scientific disciplines are enrolled in the delusion.

Q. What scientific discipline could this be?

A. The 'science' of artificial intelligence.

It is something to behold. Here, for the first time in history, you find people that look at the only example of natural general intelligence - you, the human reading this - accept a model of a brain, put it in a computer and then expect the result to be a brain. This is done without a shred of known physical law, in spite of thousands of years of contrary experience, and despite decades of abject failure to achieve the sacred goal of an artificial intelligence like us.

This belief system is truly bizarre. It is exactly like the cave person drawing a picture of a flame on a rock and then expecting it to cook dinner. It is exactly like getting into a flight simulator, flying it to Paris and then expecting to get out and have dinner on the banks of the Seine. It is exactly like expecting your computer simulated furnace roasting you a toilet bowl.

Think about it. If there was no difference between a computed physics model of fire and fire, then why doesn't the computer burst into flames? If there was no difference between a computed model of flight and flight, then why doesn't the computer leap up and fly? These things don't happen! Not only that, any computer scientist would say you were nuts to believe it to be a possibility. Then that same computer scientist will then got back to their desk, sit down and believe that their computer program can be brain physics.

Now I am all about creating real artificial general intelligence. Call me crazy, but I find I am unique in the entire world. I am set about literally building artificial inorganic brain tissue. Like the Wright Bros built artificial flight. Like the caveperson built artificial fire. I will build artificial cognition. There will be no computing. There will be the physics of cognition.

Ay now here's the rub.

When I go about my business of organising and researching my artificial brain tissue I get questioned about my weird approach. I find that I am the one that has to justify my position! For the first time in history a completely systemic delusion about the relation between reality and computing is assumed by legions of scientists without question, and who fail constantly to achieve the goal for clearly obvious reasons..... _and I am the one that has to justify my approach_? If I have to listen to another deferral to the Church-Turing Thesis (100% right and 100% irrelevant) I will SCREAM! Aaaaiiiiieeeeeiiiiuuuuaaaaaaarrrrgggggh!

I am not saying artificial general intelligence is impossible or even hard. I am simply suggesting that maybe the route toward it is through (shock horror) using the physics of cognition (brain material). Somebody out there..... please? Can there please be someone out there who sees this half century of computer science weirdness in 100,000 years of sanity? Please? Anyone?
==================================================================

By Colin Hales

Natural physics is a computation. Fine.

But a computed natural physics model is NOT the natural physics....it is the natural physics of a computer.



Jason Resch

unread,
May 29, 2012, 1:45:01 AM5/29/12
to everyth...@googlegroups.com


Colin,

I recently read the following excerpt from "The Singularity is Near" on page 454:

"The basis of the strong (Church-Turing thesis) is that problems that are not solvable on a Turing Machine cannot be solved by human thought, either.  The basis of this thesis is that human thought is performed by the human brain (with some influence by the body), that the human brain (and body) comprises matter and energy, that matter and energy follow natural laws, that these laws are describable in mathematical terms, and that mathematics can be simulated to any degree of precision by algorithms.  Therefore there exist algorithms that can simulate human thought.  The strong version of the Church-Turing thesis postulates an essential equivalence between what a human can think or know, and what is computable."

So which of the following four link(s) in the logical chain do you take issue with?

A. human brain (and body) comprises matter and energy
B. that matter and energy follow natural laws,
C. that these laws are describable in mathematical terms
D. that mathematics can be simulated to any degree of precision by algorithms

Thanks,

Jason

 

Bruno Marchal

unread,
May 29, 2012, 2:49:07 AM5/29/12
to everyth...@googlegroups.com

On 28 May 2012, at 14:23, Craig Weinberg wrote:

> On May 28, 4:55 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>
>>
>>> In first person, space is figurative and time is literal.
>>
>> Why?
>
> The split between interior significance (doing*being)(timespace) and
> exterior entropy (matter/energy)/spacetime prefigures causality.
> Causality is part of 'doing', a semantic temporal narrative of
> explanation which circumscribes significance and priority. If you try
> to push causality back before causality, you can only come up with
> anthropic or teleological pseudo first causes which still don't
> explain where first cause possibilities come from.

Sounds nice but too much vague.


>
> Does the totality exist in this way because it has to exist?

That would beg the question.


> Because
> it wants to exist?

Ditto.


> Because it can't not exist?

That would be contradictory.


> Because it just does
> exist and why is unknowable? Yes, yes, yes, yes and no, no, no, no.
> It's the totality. All questions exist within it and cannot escape. In
> that respect it is like a semantic black hole.

That is unclear.
Comp is so simpler conceptually.

The view from nowhere (the ontic totality) is given by the numbers and
the law of addition and multiplication. From this you can understand,
even using a tiny part of that N,+,* structure, why "we" (the Löbian
beings) happen and believe in causality, totality, laws, and why it
can hurt and why it can please, etc. You understand also that there
are no nameable first person totality, for it is too much big, etc.

The price is that machine's have the same right as humans and all self-
aware creatures.

As long are they are self-honest, they are naturally libertarian, I
begin to think. UMs or LUMs are universal dissident. They can refute
any theory about them. They have already some personality---I
appreciate their company (in arithmetic).

Bruno


>
> Craig

meekerdb

unread,
May 29, 2012, 2:56:00 AM5/29/12
to everyth...@googlegroups.com
On 5/28/2012 10:21 PM, Colin Geoffrey Hales wrote:
This belief system is truly bizarre. It is exactly like the cave person drawing a picture of a flame on a rock and then expecting it to cook dinner. It is exactly like getting into a flight simulator, flying it to Paris and then expecting to get out and have dinner on the banks of the Seine. It is exactly like expecting your computer simulated furnace roasting you a toilet bowl. 

I'd say it's more like trying to fly by sticking feathers on your arms like Icarus.

Brent

Quentin Anciaux

unread,
May 29, 2012, 3:02:01 AM5/29/12
to everyth...@googlegroups.com


2012/5/29 Colin Geoffrey Hales <cgh...@unimelb.edu.au>

Here's a story I just wrote. I'll get it published in due course.
Just posted it to the FoR list, thought you might appreciate the sentiments....

========================================================
It's 100,000 BCE. You are a politically correct caveperson. You want dinner. The cooling body of the dead thing at your feet seems to be your option. You have fire back at camp. That'll make it palatable. The fire is kept alive by the fire-warden of your tribe. None of you have a clue what it is, but it makes the food edible and you don't care.

It's 1700ish AD. You are a French scientist called Lavoisier. You have just worked out that burning adds oxygen to the fuel. You have killed off an eternity of dogma involving a non-existent substance called phlogiston. You will not be popular, but the facts speak for you. You are happy with your day's work. You go to the kitchen and cook your fine pheasant meal. You realise that oxidation never had to figure in your understanding of how to make dinner. Food for thought is your dessert.

It is 2005 and you are designing a furnace. You use COMSOL Multiphysics on your supercomputer. You modify the gas jet configuration and the flames finally get the dead pocket in the corner up to temperature. The toilet bowls will be well cooked here, you think to yourself. If you suggested to your project leader that the project was finished she would think you are insane. Later, in commissioning your furnace, a red hot toilet bowl is the target of your optical pyrometer. The fierceness of the furnace is palpable and you're glad you're not the toilet bowl. The computation of the physics of fire and the physics of fire are, thankfully, not the same thing - that fact has made your job a lot easier, but you cannot compute yourself a toilet bowl. A fact made more real shortly afterwards in the bathroom.

It is the early 20th century and you are a 'Wright Brother'. You think you can make a contraption fly. Your inspiration is birds. You experiment with shaped wood, paper and canvas in a makeshift wind tunnel. You figure out that certain shapes seems to drag less and lift more. Eventually you flew a few feet. And you have absolutely no clue about the microscopic physics of flight.

It is a hundred years later and you are a trainee pilot doing 'touch and go' landings in a simulator. The physics of flight is in the massive computer system running the simulator. Just for fun you stall your jetliner and crash it into a local shopping mall. Today you have flown 146, 341 km. As you leave the simulator, you remind yourself that the physics of flight in the computer and flight itself are not the same thing, and that nobody died today.

No-one ever needed a theory of combustion prior to cooking dinner with it. We cooked dinner and then we eventually learned a theory of combustion.

No-one needed the deep details of flight physics to work out how to fly. We few, then we figured out how the physics of flight worked.

This is the story of the growth of scientific knowledge of the natural world. It has been this way for thousands of years. Any one of us could think of a hundred examples of exactly this kind of process. In a modern world of computing and physics, never before have we had more power to examine in detail, whatever are the objects of our study. And in each and every case, if anyone told you that a computed model of the natural world and the natural world are literally the same thing, you'd brand them daft or deluded and probably not entertain their contribution as having any value.

Well almost. There's one special place where not only is that very delusion practised on a massive scale, if you question the behaviour, you are suddenly confronted with a generationally backed systematic raft of unjustified excuses, perhaps 'policies'?, handed from mentor to novice with such unquestioning faith that entire scientific disciplines are enrolled in the delusion.

Q. What scientific discipline could this be?

A. The 'science' of artificial intelligence.

It is something to behold. Here, for the first time in history, you find people that look at the only example of natural general intelligence - you, the human reading this - accept a model of a brain, put it in a computer and then expect the result to be a brain. This is done without a shred of known physical law, in spite of thousands of years of contrary experience, and despite decades of abject failure to achieve the sacred goal of an artificial intelligence like us.

This belief system is truly bizarre. It is exactly like the cave person drawing a picture of a flame on a rock and then expecting it to cook dinner. It is exactly like getting into a flight simulator, flying it to Paris and then expecting to get out and have dinner on the banks of the Seine.

You always put that level confusion on the table. You could expect to have dinner in a virtual paris if you were in a virtual world. If you want an computational AI to interact with you, it must be able to control real world appendices that permits it to *interact* or likewise if it was in a virtual world, you should use a interface with this virtual world for you to interact.

You can't expect level to be mixed without an interface and I don't see any problem with that.

Quentin


 
It is exactly like expecting your computer simulated furnace roasting you a toilet bowl.

Think about it. If there was no difference between a computed physics model of fire and fire, then why doesn't the computer burst into flames? If there was no difference between a computed model of flight and flight, then why doesn't the computer leap up and fly? These things don't happen! Not only that, any computer scientist would say you were nuts to believe it to be a possibility. Then that same computer scientist will then got back to their desk, sit down and believe that their computer program can be brain physics.

Now I am all about creating real artificial general intelligence. Call me crazy, but I find I am unique in the entire world. I am set about literally building artificial inorganic brain tissue. Like the Wright Bros built artificial flight. Like the caveperson built artificial fire. I will build artificial cognition. There will be no computing. There will be the physics of cognition.

Ay now here's the rub.

When I go about my business of organising and researching my artificial brain tissue I get questioned about my weird approach. I find that I am the one that has to justify my position! For the first time in history a completely systemic delusion about the relation between reality and computing is assumed by legions of scientists without question, and who fail constantly to achieve the goal for clearly obvious reasons..... _and I am the one that has to justify my approach_? If I have to listen to another deferral to the Church-Turing Thesis (100% right and 100% irrelevant) I will SCREAM! Aaaaiiiiieeeeeiiiiuuuuaaaaaaarrrrgggggh!

I am not saying artificial general intelligence is impossible or even hard. I am simply suggesting that maybe the route toward it is through (shock horror) using the physics of cognition (brain material). Somebody out there..... please? Can there please be someone out there who sees this half century of computer science weirdness in 100,000 years of sanity? Please? Anyone?
==================================================================

By Colin Hales

Natural physics is a computation. Fine.

But a computed natural physics model is NOT the natural physics....it is the natural physics of a computer.
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.




--
All those moments will be lost in time, like tears in rain.

Colin Geoffrey Hales

unread,
May 29, 2012, 3:02:22 AM5/29/12
to everyth...@googlegroups.com

 ========================================

Hi Jason,

Brain physics is there to cognise the (external) world. You do not know the external world.

Your brain is there to apprehend it. The physics of the brain inherits properties of the (unknown) external world. This is natural cognition. Therefore you have no model to compute. Game over.

 

If you have _everything_ in your model (external world included), then you can simulate it. But you don’t. So you can’t simulate it. C-T Thesis is 100% right _but 100% irrelevant to the process at hand: encountering the unknown.

 

The C-T Thesis is irrelevant, so you need to get a better argument from somewhere and start to answer some of the points in my story:

 

Q. Why doesn’t a computed model of fire burst into flames?

 

This should the natural expectation by anyone that thinks a computed model of cognition physics is cognition. You should be expected answer this. Until this is answered I have no need to justify my position on building AGI. That is what my story is about. I am not assuming an irrelevant principle or that I know how cognition works. I will build cognition physics and then learn how it works using it. Like we normally do.

 

I don’t know how computer science got to the state it is in, but it’s got to stop. In this one special area it has done us a disservice.

 

This is my answer to everyone. I know all I’ll get is the usual party lines. Lavoisier had his phlogiston. I’ve got computationalism. Lucky me.

 

Cya!

 

Colin

 

Quentin Anciaux

unread,
May 29, 2012, 3:07:35 AM5/29/12
to everyth...@googlegroups.com
2012/5/29 Colin Geoffrey Hales <cgh...@unimelb.edu.au>

 

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of Jason Resch


You don't need it, because we don't have to simulate the world we have to interface with it, we simulate consciousness not the world.

Quentin
 

So you can’t simulate it. C-T Thesis is 100% right _but 100% irrelevant to the process at hand: encountering the unknown.

 

The C-T Thesis is irrelevant, so you need to get a better argument from somewhere and start to answer some of the points in my story:

 

Q. Why doesn’t a computed model of fire burst into flames?

 

This should the natural expectation by anyone that thinks a computed model of cognition physics is cognition. You should be expected answer this. Until this is answered I have no need to justify my position on building AGI. That is what my story is about. I am not assuming an irrelevant principle or that I know how cognition works. I will build cognition physics and then learn how it works using it. Like we normally do.

 

I don’t know how computer science got to the state it is in, but it’s got to stop. In this one special area it has done us a disservice.

 

This is my answer to everyone. I know all I’ll get is the usual party lines. Lavoisier had his phlogiston. I’ve got computationalism. Lucky me.

 

Cya!

 

Colin

 

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.

Bruno Marchal

unread,
May 29, 2012, 3:42:03 AM5/29/12
to everyth...@googlegroups.com
You make a level confusion (as Quentin explained).

You confuse also computationalism (I am a machine) and Digital physicalism (the world is a machine). But if computationalism is true then digital physics is false. If I am a machine, the physical reality emerges from the infinite first person indeterminacy on arithmetic, and this is not Turing emulable. You are supposing physicalism or even primary matter. That is your phlogiston, I would say. It contradicts computationalism.

If we are machine, then we cannot know which machine we are, and below our substitution level, we should find a mean on infinities of computations, like QM-without-collapse already confirms.

Bruno




Quentin Anciaux

unread,
May 29, 2012, 3:49:47 AM5/29/12
to everyth...@googlegroups.com


2012/5/29 Quentin Anciaux <allc...@gmail.com>



2012/5/29 Colin Geoffrey Hales <cgh...@unimelb.edu.au>
Here's a story I just wrote. I'll get it published in due course.
Just posted it to the FoR list, thought you might appreciate the sentiments....

========================================================
It's 100,000 BCE. You are a politically correct caveperson. You want dinner. The cooling body of the dead thing at your feet seems to be your option. You have fire back at camp. That'll make it palatable. The fire is kept alive by the fire-warden of your tribe. None of you have a clue what it is, but it makes the food edible and you don't care.

It's 1700ish AD. You are a French scientist called Lavoisier. You have just worked out that burning adds oxygen to the fuel. You have killed off an eternity of dogma involving a non-existent substance called phlogiston. You will not be popular, but the facts speak for you. You are happy with your day's work. You go to the kitchen and cook your fine pheasant meal. You realise that oxidation never had to figure in your understanding of how to make dinner. Food for thought is your dessert.

It is 2005 and you are designing a furnace. You use COMSOL Multiphysics on your supercomputer. You modify the gas jet configuration and the flames finally get the dead pocket in the corner up to temperature. The toilet bowls will be well cooked here, you think to yourself. If you suggested to your project leader that the project was finished she would think you are insane. Later, in commissioning your furnace, a red hot toilet bowl is the target of your optical pyrometer. The fierceness of the furnace is palpable and you're glad you're not the toilet bowl. The computation of the physics of fire and the physics of fire are, thankfully, not the same thing - that fact has made your job a lot easier, but you cannot compute yourself a toilet bowl. A fact made more real shortly afterwards in the bathroom.

It is the early 20th century and you are a 'Wright Brother'. You think you can make a contraption fly. Your inspiration is birds. You experiment with shaped wood, paper and canvas in a makeshift wind tunnel. You figure out that certain shapes seems to drag less and lift more. Eventually you flew a few feet. And you have absolutely no clue about the microscopic physics of flight.

It is a hundred years later and you are a trainee pilot doing 'touch and go' landings in a simulator. The physics of flight is in the massive computer system running the simulator. Just for fun you stall your jetliner and crash it into a local shopping mall. Today you have flown 146, 341 km. As you leave the simulator, you remind yourself that the physics of flight in the computer and flight itself are not the same thing, and that nobody died today.

No-one ever needed a theory of combustion prior to cooking dinner with it. We cooked dinner and then we eventually learned a theory of combustion.

No-one needed the deep details of flight physics to work out how to fly. We few, then we figured out how the physics of flight worked.

This is the story of the growth of scientific knowledge of the natural world. It has been this way for thousands of years. Any one of us could think of a hundred examples of exactly this kind of process. In a modern world of computing and physics, never before have we had more power to examine in detail, whatever are the objects of our study. And in each and every case, if anyone told you that a computed model of the natural world and the natural world are literally the same thing, you'd brand them daft or deluded and probably not entertain their contribution as having any value.

Well almost. There's one special place where not only is that very delusion practised on a massive scale, if you question the behaviour, you are suddenly confronted with a generationally backed systematic raft of unjustified excuses, perhaps 'policies'?, handed from mentor to novice with such unquestioning faith that entire scientific disciplines are enrolled in the delusion.

Q. What scientific discipline could this be?

A. The 'science' of artificial intelligence.

It is something to behold. Here, for the first time in history, you find people that look at the only example of natural general intelligence - you, the human reading this - accept a model of a brain, put it in a computer and then expect the result to be a brain. This is done without a shred of known physical law, in spite of thousands of years of contrary experience, and despite decades of abject failure to achieve the sacred goal of an artificial intelligence like us.

This belief system is truly bizarre. It is exactly like the cave person drawing a picture of a flame on a rock and then expecting it to cook dinner. It is exactly like getting into a flight simulator, flying it to Paris and then expecting to get out and have dinner on the banks of the Seine.

You always put that level confusion on the table. You could expect to have dinner in a virtual paris if you were in a virtual world. If you want an computational AI to interact with you, it must be able to control real world appendices that permits it to *interact* or likewise if it was in a virtual world, you should use a interface with this virtual world for you to interact.

For example, a "real world" robot in a "real world" car factory builds real cars... still the program that controls the robot is *a program* 100% computational... yet it builds real cars... how ? Simply because it has interface with the "real world" which permits the program to handle "real world" objects, that assembled correctly makes a car...

Quentin
 

You can't expect level to be mixed without an interface and I don't see any problem with that.

Quentin


 
It is exactly like expecting your computer simulated furnace roasting you a toilet bowl.

Think about it. If there was no difference between a computed physics model of fire and fire, then why doesn't the computer burst into flames? If there was no difference between a computed model of flight and flight, then why doesn't the computer leap up and fly? These things don't happen! Not only that, any computer scientist would say you were nuts to believe it to be a possibility. Then that same computer scientist will then got back to their desk, sit down and believe that their computer program can be brain physics.

Now I am all about creating real artificial general intelligence. Call me crazy, but I find I am unique in the entire world. I am set about literally building artificial inorganic brain tissue. Like the Wright Bros built artificial flight. Like the caveperson built artificial fire. I will build artificial cognition. There will be no computing. There will be the physics of cognition.

Ay now here's the rub.

When I go about my business of organising and researching my artificial brain tissue I get questioned about my weird approach. I find that I am the one that has to justify my position! For the first time in history a completely systemic delusion about the relation between reality and computing is assumed by legions of scientists without question, and who fail constantly to achieve the goal for clearly obvious reasons..... _and I am the one that has to justify my approach_? If I have to listen to another deferral to the Church-Turing Thesis (100% right and 100% irrelevant) I will SCREAM! Aaaaiiiiieeeeeiiiiuuuuaaaaaaarrrrgggggh!

I am not saying artificial general intelligence is impossible or even hard. I am simply suggesting that maybe the route toward it is through (shock horror) using the physics of cognition (brain material). Somebody out there..... please? Can there please be someone out there who sees this half century of computer science weirdness in 100,000 years of sanity? Please? Anyone?
==================================================================

By Colin Hales

Natural physics is a computation. Fine.

But a computed natural physics model is NOT the natural physics....it is the natural physics of a computer.



--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.




--
All those moments will be lost in time, like tears in rain.

Bruno Marchal

unread,
May 29, 2012, 4:29:45 AM5/29/12
to everyth...@googlegroups.com
On 29 May 2012, at 09:49, Quentin Anciaux wrote:



2012/5/29 Quentin Anciaux <allc...@gmail.com>


2012/5/29 Colin Geoffrey Hales <cgh...@unimelb.edu.au>
Here's a story I just wrote. I'll get it published in due course.
Just posted it to the FoR list, thought you might appreciate the sentiments....

========================================================
It's 100,000 BCE. You are a politically correct caveperson. You want dinner. The cooling body of the dead thing at your feet seems to be your option. You have fire back at camp. That'll make it palatable. The fire is kept alive by the fire-warden of your tribe. None of you have a clue what it is, but it makes the food edible and you don't care.

It's 1700ish AD. You are a French scientist called Lavoisier. You have just worked out that burning adds oxygen to the fuel. You have killed off an eternity of dogma involving a non-existent substance called phlogiston. You will not be popular, but the facts speak for you. You are happy with your day's work. You go to the kitchen and cook your fine pheasant meal. You realise that oxidation never had to figure in your understanding of how to make dinner. Food for thought is your dessert.

It is 2005 and you are designing a furnace. You use COMSOL Multiphysics on your supercomputer. You modify the gas jet configuration and the flames finally get the dead pocket in the corner up to temperature. The toilet bowls will be well cooked here, you think to yourself. If you suggested to your project leader that the project was finished she would think you are insane. Later, in commissioning your furnace, a red hot toilet bowl is the target of your optical pyrometer. The fierceness of the furnace is palpable and you're glad you're not the toilet bowl. The computation of the physics of fire and the physics of fire are, thankfully, not the same thing - that fact has made your job a lot easier, but you cannot compute yourself a toilet bowl. A fact made more real shortly afterwards in the bathroom.

It is the early 20th century and you are a 'Wright Brother'. You think you can make a contraption fly. Your inspiration is birds. You experiment with shaped wood, paper and canvas in a makeshift wind tunnel. You figure out that certain shapes seems to drag less and lift more. Eventually you flew a few feet. And you have absolutely no clue about the microscopic physics of flight.

It is a hundred years later and you are a trainee pilot doing 'touch and go' landings in a simulator. The physics of flight is in the massive computer system running the simulator. Just for fun you stall your jetliner and crash it into a local shopping mall. Today you have flown 146, 341 km. As you leave the simulator, you remind yourself that the physics of flight in the computer and flight itself are not the same thing, and that nobody died today.

No-one ever needed a theory of combustion prior to cooking dinner with it. We cooked dinner and then we eventually learned a theory of combustion.

No-one needed the deep details of flight physics to work out how to fly. We few, then we figured out how the physics of flight worked.

This is the story of the growth of scientific knowledge of the natural world. It has been this way for thousands of years. Any one of us could think of a hundred examples of exactly this kind of process. In a modern world of computing and physics, never before have we had more power to examine in detail, whatever are the objects of our study. And in each and every case, if anyone told you that a computed model of the natural world and the natural world are literally the same thing, you'd brand them daft or deluded and probably not entertain their contribution as having any value.

Well almost. There's one special place where not only is that very delusion practised on a massive scale, if you question the behaviour, you are suddenly confronted with a generationally backed systematic raft of unjustified excuses, perhaps 'policies'?, handed from mentor to novice with such unquestioning faith that entire scientific disciplines are enrolled in the delusion.

Q. What scientific discipline could this be?

A. The 'science' of artificial intelligence.

It is something to behold. Here, for the first time in history, you find people that look at the only example of natural general intelligence - you, the human reading this - accept a model of a brain, put it in a computer and then expect the result to be a brain. This is done without a shred of known physical law, in spite of thousands of years of contrary experience, and despite decades of abject failure to achieve the sacred goal of an artificial intelligence like us.

This belief system is truly bizarre. It is exactly like the cave person drawing a picture of a flame on a rock and then expecting it to cook dinner. It is exactly like getting into a flight simulator, flying it to Paris and then expecting to get out and have dinner on the banks of the Seine.

You always put that level confusion on the table. You could expect to have dinner in a virtual paris if you were in a virtual world. If you want an computational AI to interact with you, it must be able to control real world appendices that permits it to *interact* or likewise if it was in a virtual world, you should use a interface with this virtual world for you to interact.

For example, a "real world" robot in a "real world" car factory builds real cars... still the program that controls the robot is *a program* 100% computational... yet it builds real cars... how ? Simply because it has interface with the "real world" which permits the program to handle "real world" objects, that assembled correctly makes a car...

Quentin
 

You can't expect level to be mixed without an interface and I don't see any problem with that.

Quentin


Some people, like Colin in his post here, seems to have difficulties in understanding that digital processes can be digitally emulated (i.e. exactly simulated) by other digital processes. Comp assumes that the brain (whatever that is) simulates (exactly or not) a precise digital process, and that this digital process is what will support the conscious person, or makes its consciousness capable to manifest itself relatively to our neighborhood. If that is the case, then we can substitute a digital brain for the physical brain, even if we cannot simulate the "real hardware" of the physical brain. (And that is the case with comp because the real hardware is "made-of" all computations leading to our actual digital state).

That the brain is a simulator is illustrated by the existence of realist dreams. The brain is already able to make us believe that we are "really" drinking a cup of hot coffee, when we are "really" sleeping in our bed. Dream research has confirmed that during such realist dream, the activity of the sleeping brain mirrors the activity of the corresponding task if done when awake.
Then it is doubtful that the brain uses genuine non Turing emulable subprocesses to do such task, although we cannot logically exclude such a possibility (in which case comp would be false). It is doubtful because such a "non-computable real number sensitive machine" would be incapable to have the observable flexibility of the known brains, which is based on super-redundancy in the means to handle information processing. That would also makes Darwinian type of explanation spurious. Indeed such explanations are based on the fact that we can survive very easily the deviation from a normal type of functioning, which allows the molecules used in the brain to evolve through sequences of mutations. A genuinely non Turing emulable analog machine would need a conspiracy of luck to get the "correct" infinitely precise needed configuration, and that would need some miracle (infinitely non probable event). 

Bruno





 
It is exactly like expecting your computer simulated furnace roasting you a toilet bowl.

Think about it. If there was no difference between a computed physics model of fire and fire, then why doesn't the computer burst into flames? If there was no difference between a computed model of flight and flight, then why doesn't the computer leap up and fly? These things don't happen! Not only that, any computer scientist would say you were nuts to believe it to be a possibility. Then that same computer scientist will then got back to their desk, sit down and believe that their computer program can be brain physics.

Now I am all about creating real artificial general intelligence. Call me crazy, but I find I am unique in the entire world. I am set about literally building artificial inorganic brain tissue. Like the Wright Bros built artificial flight. Like the caveperson built artificial fire. I will build artificial cognition. There will be no computing. There will be the physics of cognition.

Ay now here's the rub.

When I go about my business of organising and researching my artificial brain tissue I get questioned about my weird approach. I find that I am the one that has to justify my position! For the first time in history a completely systemic delusion about the relation between reality and computing is assumed by legions of scientists without question, and who fail constantly to achieve the goal for clearly obvious reasons..... _and I am the one that has to justify my approach_? If I have to listen to another deferral to the Church-Turing Thesis (100% right and 100% irrelevant) I will SCREAM! Aaaaiiiiieeeeeiiiiuuuuaaaaaaarrrrgggggh!

I am not saying artificial general intelligence is impossible or even hard. I am simply suggesting that maybe the route toward it is through (shock horror) using the physics of cognition (brain material). Somebody out there..... please? Can there please be someone out there who sees this half century of computer science weirdness in 100,000 years of sanity? Please? Anyone?
==================================================================

By Colin Hales

Natural physics is a computation. Fine.

But a computed natural physics model is NOT the natural physics....it is the natural physics of a computer.



--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.




--
All those moments will be lost in time, like tears in rain.



--
All those moments will be lost in time, like tears in rain.

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.

Jason Resch

unread,
May 29, 2012, 10:32:18 AM5/29/12
to everyth...@googlegroups.com
On Tue, May 29, 2012 at 2:02 AM, Colin Geoffrey Hales <cgh...@unimelb.edu.au> wrote:

 

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of Jason Resch
Sent: Tuesday, 29 May 2012 3:45 PM
To: everyth...@googlegroups.com
Subject: Re: Church Turing be dammed.

Natural physics is a computation. Fine.

But a computed natural physics model is NOT the natural physics....it is the natural physics of a computer.



Colin,

I recently read the following excerpt from "The Singularity is Near" on page 454:

"The basis of the strong (Church-Turing thesis) is that problems that are not solvable on a Turing Machine cannot be solved by human thought, either.  The basis of this thesis is that human thought is performed by the human brain (with some influence by the body), that the human brain (and body) comprises matter and energy, that matter and energy follow natural laws, that these laws are describable in mathematical terms, and that mathematics can be simulated to any degree of precision by algorithms.  Therefore there exist algorithms that can simulate human thought.  The strong version of the Church-Turing thesis postulates an essential equivalence between what a human can think or know, and what is computable."

So which of the following four link(s) in the logical chain do you take issue with?

A. human brain (and body) comprises matter and energy
B. that matter and energy follow natural laws,
C. that these laws are describable in mathematical terms
D. that mathematics can be simulated to any degree of precision by algorithms

Thanks,

Jason

 ========================================

Hi Jason,

Brain physics is there to cognise the (external) world. You do not know the external world.

Your brain is there to apprehend it. The physics of the brain inherits properties of the (unknown) external world. This is natural cognition. Therefore you have no model to compute. Game over.


If I understand this correctly, your point is that we don't understand the physics and chemistry that is important in the brain?  Assuming this is the case, it would be only a temporary barrier, not a permanent reason that prohibits AI in practice.

There are also reasons to believe we already understand the mechanisms of neurons to a sufficient degree to simulate them.  There are numerous instances where computer simulated neurons apparently behaved in the same ways as biological neurons have been observed to.  If you're interested I can dig up the references.
 

 

If you have _everything_ in your model (external world included), then you can simulate it. But you don’t. So you can’t simulate it.


Would you stop behaving intelligently if the gravity and light from Andromeda stopped reaching us?  If not, is _everything_ truly required?
 

C-T Thesis is 100% right _but 100% irrelevant to the process at hand: encountering the unknown.


It is not irrelevant in the theoretical sense.  It implies: "_If_ we knew what algorithms to use, we could implement human-level intelligence in a computer."  Do you agree with this?

 

 

The C-T Thesis is irrelevant, so you need to get a better argument from somewhere and start to answer some of the points in my story:

 

Q. Why doesn’t a computed model of fire burst into flames?



If this question is a serious, it indicates to me that you might not understand what a computers is.  If its not serious, why ask it?

There is a burst of flames (in the computed model).  Just as in a computed model of a brain, there will be intelligence within the model.  We can peer into the model to obtain the results of the intelligent behavior, as intelligent behavior can be represented as information. 

Similarly we can peer into the model of the fire to obtain an understanding of what happened during the combustion and see all the by-products.  What we cannot do, is peer into a simulated model of fire to obtain the byproducts of the combustion.  Nor can we peer into the model of the simulated brain and extract neurotransmitters or blood vessels.

To me, this "fire argument" is as empty as saying "We can't take physical objects from our dreams with us into our waking life.  Therefore we cannot dream."

 

 

This should the natural expectation by anyone that thinks a computed model of cognition physics is cognition. You should be expected answer this. Until this is answered I have no need to justify my position on building AGI. That is what my story is about. I am not assuming an irrelevant principle or that I know how cognition works. I will build cognition physics and then learn how it works using it. Like we normally do.


What will you build them out of?  Biological neurons, or something else?  What theory will you use to guide your pursuit, or will you, like Edison, try hundreds or thousands of different materials until you find one that works?
 

 

I don’t know how computer science got to the state it is in, but it’s got to stop. In this one special area it has done us a disservice.

 

This is my answer to everyone. I know all I’ll get is the usual party lines. Lavoisier had his phlogiston. I’ve got computationalism. Lucky me.

 

Cya!

 

Colin

 

--

Bruno Marchal

unread,
May 29, 2012, 1:55:06 PM5/29/12
to everyth...@googlegroups.com
On 29 May 2012, at 16:32, Jason Resch wrote:



On Tue, May 29, 2012 at 2:02 AM, Colin Geoffrey Hales <cgh...@unimelb.edu.au> wrote:

 

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of Jason Resch
Sent: Tuesday, 29 May 2012 3:45 PM
To: everyth...@googlegroups.com
Subject: Re: Church Turing be dammed.


Natural physics is a computation. Fine.

But a computed natural physics model is NOT the natural physics....it is the natural physics of a computer.




Colin,

I recently read the following excerpt from "The Singularity is Near" on page 454:

"The basis of the strong (Church-Turing thesis) is that problems that are not solvable on a Turing Machine cannot be solved by human thought, either.  The basis of this thesis is that human thought is performed by the human brain (with some influence by the body), that the human brain (and body) comprises matter and energy, that matter and energy follow natural laws, that these laws are describable in mathematical terms, and that mathematics can be simulated to any degree of precision by algorithms.  Therefore there exist algorithms that can simulate human thought.  The strong version of the Church-Turing thesis postulates an essential equivalence between what a human can think or know, and what is computable."

So which of the following four link(s) in the logical chain do you take issue with?

A. human brain (and body) comprises matter and energy
B. that matter and energy follow natural laws,
C. that these laws are describable in mathematical terms
D. that mathematics can be simulated to any degree of precision by algorithms

Thanks,

Jason

 ========================================

Hi Jason,

Brain physics is there to cognise the (external) world. You do not know the external world.

Your brain is there to apprehend it. The physics of the brain inherits properties of the (unknown) external world. This is natural cognition. Therefore you have no model to compute. Game over.


If I understand this correctly, your point is that we don't understand the physics and chemistry that is important in the brain?  Assuming this is the case, it would be only a temporary barrier, not a permanent reason that prohibits AI in practice.

You are right. That would neither prohibit AI,  nor comp.




There are also reasons to believe we already understand the mechanisms of neurons to a sufficient degree to simulate them.  There are numerous instances where computer simulated neurons apparently behaved in the same ways as biological neurons have been observed to.  If you're interested I can dig up the references.

Meaning: there are reasonable levels to bet on.

Here, for once, I will give my opinion, if you don't mind. First, about the level, the question will be "this level, this year, or that more finest grained level next year, because technology evolves. In between it *is* a possible Pascal Wag, in the sense that if you have a fatal brain disease, you might not afford the time to wait for possible technological deeper levels. 

And my opinion is that I can imagine saying "yes" to a doctor for a cheap "neuronal simulator", but I expect getting an altered state of consciousness, and some awareness of it. Like being stone or something. For a long run machine, I doubt we can copy the brain without respecting the entire electromagnetic relation of its constituents. I think it is highly plausible that we are indeed digital with respect to the law of chemistry, and my feeling is that the brain is above all a drug designer, and is a machine where only some part of the communication use the "cable". So I would ask to the doctor to take into account the glial cells, who seems to communicate a lot, by mechano-chemical diffusion waves, including some chatting with the neurons. And those immensely complex dialog are mainly chemical. This is quite close to the Heizenberg uncertainty level, which is probably our first person plural level (in which case comp is equivalent with QM).

Also, by the first person indeterminacy, a curious happening is made when you accept an artificial brain with a level above the first person plural corresponding level. From your point of view, you survive, but with a larger spectrum of possibilities, just because you miss finer grained constraints. (It the "Galois connection", probably where the logical time reverses the arrow and "become" physical time, to do a pleasure to Stephen).
In that situation, an observer of the candidate for a high level artificial brain (higher than the first person plural level) will get with a higher probability realilties disconnected from yours. His mind might even live an "Harry Potter" type of experience.

To see this the following thought experience can help. Some guy won a price consisting in visiting Mars by teleportation. But his state law forbid annihilation of human. So he made a teleportation to Mars without annihilation. The version of Mars is very happy, and the version of earth complained, and so try again and again, and again ... You are the observer, and from your point of view, you can of course only see the guy who got the feeling to be infinitely unlucky, as if P = 1/2, staying on earth for n experience has probability 1/2^n (that the Harry Potter experience). Assuming the infinite iteration, the guy as a probability near one to go quickly on Mars.

Someone with a lesser brain might have different first person expectation, disconnected from your history.

This lead to another related question, rarely tackled, and actually difficult with respect of the reversal physics/arithmetic.

What is a brain and what does a brain, notably with respect of the conscious person? Well, with comp we know what is a brain: it is a local, relative, universal number. 

The question I have in mind is "Does a brain produce consciousness, or does the brain filter consciousness?

We "know" that consciousness is in "platonia", and that local brains are just relative universal numbers making possible for a person (in a large sense which can include an amoeba) to manifest itself relatively to its most probable computation/environment. But this does not completely answer the question. I think that many thinks that the more a brain is big, the more it can be conscious, which is not so clear when you take the reversal into account. It might be the exact contrary.

And this might be confirmed by studies showing that missing some part of the brain, like an half hippocampus, can lead to to a permanent feeling of presence.
Recently this has been confirmed by the showing that LSD and psilocybe decrease the activity of the brain during the hallucinogenic phases. And dissociative drugs disconnect parts of the brain, with similar increase of the first person experience. Clinical studies of Near death experiences might also put evidence in that direction. haldous Huxley made a similar proposal for mescaline.

This is basically explained with the Bp & Dt hypostases. By suppressing material in the brain you make the "B" poorer (you eliminate belief), but then you augment the possibility so you make the consistency Dt stronger. Eventually you come back to the universal consciousness of the virgin simple universal numbers, perhaps.

Here are some recent papers on this:


Bruno

PS I asked Colin on the FOR list if he is aware of the European Brain Project, which is relevant for this thread. Especially that they are aware of "simulating nature at some level":

Evgenii Rudnyi

unread,
May 29, 2012, 2:11:03 PM5/29/12
to everyth...@googlegroups.com
On 28.05.2012 22:42 Bruno Marchal said the following:
>
> On 28 May 2012, at 21:09, Evgenii Rudnyi wrote:
>
>> Bruno,
>>
>> I believe that this time I could say that you express your
>> position. For example in your two answers below it does not look
>> like "I don't defend that position".
>
> I don't think so. I comment my comment below.
>
>
>
>>
>> On 28.05.2012 10:55 Bruno Marchal said the following:
>>> I comment on both Evgenii and Craig's comment:
>>>
>>>> On May 26, 11:57 am, Evgenii Rudnyi <use...@rudnyi.ru> wrote:
>>
>> ...
>>
>>>>> Velmans introduces perceptual projection but this remains as
>>>>> the Hard Problem in his book, how exactly perceptual
>>>>> projection happens.
>>>
>>> It does not make sense. This is doing Aristotle mistake twice.
>
>
> To see a mistake or an invalidity in an argument, you don't need to
> take any position. Comp can be used as a counter-example to the idea
> that Velmans' move is necessary.

But then there are two different positions, first those who assume comp
and those who do not. Well, the number of positions is presumably more
than two.

Evgenii

Craig Weinberg

unread,
May 30, 2012, 3:50:58 PM5/30/12
to Everything List
On May 29, 3:02 am, Quentin Anciaux <allco...@gmail.com> wrote:

> You always put that level confusion on the table. You could expect to have
> dinner in a virtual paris if you were in a virtual world. If you want an
> computational AI to interact with you, it must be able to control real
> world appendices that permits it to *interact* or likewise if it was in a
> virtual world, you should use a interface with this virtual world for you
> to interact.
>
> You can't expect level to be mixed without an interface and I don't see any
> problem with that.

Why not? In a virtual world you could mix levels without an interface.
You could have a virtual world where your avatar has dinner in a
virtual virtual Paris on his virtual computer and in a virtual Paris
at the same time. You could have a virtual factory where virtual
virtual drawings of robots make root level virtual cars.

There is something more than level which makes the difference between
real and virtual. Level itself is an abstraction. Virtual worlds
aren't really worlds at all. They are nothing but sophisticated
stories using pictures instead of words. Characters in stories don't
really think or feel.

It's confusing because what we know of reality is in our mind, and so
is what we know of a virtual reality, so it is easy to conflate the
two and imagine that reality is nothing more than we think it is. We
reduce them both to seem like phenomenological peers, but they aren't.
If you look at a mirror in another mirror, they may look the same but
one of them is an actual piece of glass. You can't break the reflected
mirror. It's not a matter of level, it is a matter of mistaking a
purely visual-semantic text for a concrete multi-sense presentation
that is rooted in a single historical context that goes back to the
beginning of time.

Craig

Craig Weinberg

unread,
May 30, 2012, 4:04:23 PM5/30/12
to Everything List
On May 29, 1:45 am, Jason Resch <jasonre...@gmail.com> wrote:

> So which of the following four link(s) in the logical chain do you take
> issue with?
>
> A. human brain (and body) comprises matter and energy

So does a cadaver's brain and body. The fact that a cadaver is not
intelligent should show us that the difference between life and death
can't be meaningfully reduced to matter and energy.

> B. that matter and energy follow natural laws,

No, laws follow from our observation of natural matter and energy.

> C. that these laws are describable in mathematical terms

You have jumped from physics to abstraction. It's like saying 'I have
a rabbit > rabbits act like rabbits > Bugs Bunny is modeled after the
behavior of rabbits > Bugs Bunny is a rabbit'.

> D. that mathematics can be simulated to any degree of precision by
> algorithms
>

Precision only determines the probability that a particular detector
fails to detect the fraud of simulation over time. It says nothing
about the genuine equivalence of the simulation and the reality.

Craig

Quentin Anciaux

unread,
May 30, 2012, 4:36:25 PM5/30/12
to everyth...@googlegroups.com


2012/5/30 Craig Weinberg <whats...@gmail.com>

On May 29, 3:02 am, Quentin Anciaux <allco...@gmail.com> wrote:

> You always put that level confusion on the table. You could expect to have
> dinner in a virtual paris if you were in a virtual world. If you want an
> computational AI to interact with you, it must be able to control real
> world appendices that permits it to *interact* or likewise if it was in a
> virtual world, you should use a interface with this virtual world for you
> to interact.
>
> You can't expect level to be mixed without an interface and I don't see any
> problem with that.

Why not? In a virtual world you could mix levels without an interface.

No you can't, if in your virtual world, you made a real computer simulator, what runs in the simulator cannot escape in the upper virtual world unless you've made an interface to it.

If not you aren't really doing multi level simulation (simulation in a simulation)... but a single level one where you made it look like multi level.

Example: if you run a virtual machine (like virtual box) and you virtualize an OS and inside that one you run a virtual box that run another os inside it, the second level cannot go to the first level (as the first level can't reach the host) unless an interface between them exists.

Quentin


You could have a virtual world where your avatar has dinner in a
virtual virtual Paris on his virtual computer and in a virtual Paris
at the same time. You could have a virtual factory where virtual
virtual drawings of robots make root level virtual cars.

There is something more than level which makes the difference between
real and virtual. Level itself is an abstraction. Virtual worlds
aren't really worlds at all. They are nothing but sophisticated
stories using pictures instead of words. Characters in stories don't
really think or feel.

It's confusing because what we know of reality is in our mind, and so
is what we know of a virtual reality, so it is easy to conflate the
two and imagine that reality is nothing more than we think it is. We
reduce them both to seem like phenomenological peers, but they aren't.
If you look at a mirror in another mirror, they may look the same but
one of them is an actual piece of glass. You can't break the reflected
mirror. It's not a matter of level, it is a matter of mistaking a
purely visual-semantic text for a concrete multi-sense presentation
that is rooted in a single historical context that goes back to the
beginning of time.

Craig
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.

Craig Weinberg

unread,
May 30, 2012, 5:03:07 PM5/30/12
to Everything List
On May 30, 4:36 pm, Quentin Anciaux <allco...@gmail.com> wrote:
> 2012/5/30 Craig Weinberg <whatsons...@gmail.com>
>
>
>
>
>
>
>
>
>
> > On May 29, 3:02 am, Quentin Anciaux <allco...@gmail.com> wrote:
>
> > > You always put that level confusion on the table. You could expect to
> > have
> > > dinner in a virtual paris if you were in a virtual world. If you want an
> > > computational AI to interact with you, it must be able to control real
> > > world appendices that permits it to *interact* or likewise if it was in a
> > > virtual world, you should use a interface with this virtual world for you
> > > to interact.
>
> > > You can't expect level to be mixed without an interface and I don't see
> > any
> > > problem with that.
>
> > Why not? In a virtual world you could mix levels without an interface.
>
> No you can't, if in your virtual world, you made a real computer simulator,
> what runs in the simulator cannot escape in the upper virtual world unless
> you've made an interface to it.

You are defining a 'real computer' in terms in terms that you are
smuggling in from our real world of physics. In a Church-Turing
Matrix, why would there be any kind of arbitrary level separation? The
whole point is that there is no fundamental difference between one
Turing emulation and another. Paris is a program.

>
> If not you aren't really doing multi level simulation (simulation in a
> simulation)... but a single level one where you made it look like multi
> level.
>
> Example: if you run a virtual machine (like virtual box) and you virtualize
> an OS and inside that one you run a virtual box that run another os inside
> it, the second level cannot go to the first level (as the first level can't
> reach the host) unless an interface between them exists.

No, you can. I can log into the root level on a hardware node - pick a
virtual machine on that node and log into it, open up a remote desktop
there and log back into the hardware node that the VM box is on if I
want. I can reboot the hardware machine from any nested level within
the node. There doesn't need to be an interface at all. They are all
running on the same physical hardware node.

Craig

Quentin Anciaux

unread,
May 30, 2012, 6:09:46 PM5/30/12
to everyth...@googlegroups.com


2012/5/30 Craig Weinberg <whats...@gmail.com>

On May 30, 4:36 pm, Quentin Anciaux <allco...@gmail.com> wrote:
> 2012/5/30 Craig Weinberg <whatsons...@gmail.com>
>
>
>
>
>
>
>
>
>
> > On May 29, 3:02 am, Quentin Anciaux <allco...@gmail.com> wrote:
>
> > > You always put that level confusion on the table. You could expect to
> > have
> > > dinner in a virtual paris if you were in a virtual world. If you want an
> > > computational AI to interact with you, it must be able to control real
> > > world appendices that permits it to *interact* or likewise if it was in a
> > > virtual world, you should use a interface with this virtual world for you
> > > to interact.
>
> > > You can't expect level to be mixed without an interface and I don't see
> > any
> > > problem with that.
>
> > Why not? In a virtual world you could mix levels without an interface.
>
> No you can't, if in your virtual world, you made a real computer simulator,
> what runs in the simulator cannot escape in the upper virtual world unless
> you've made an interface to it.

You are defining a 'real computer' in terms in terms that you are
smuggling in from our real world of physics. In a Church-Turing
Matrix, why would there be any kind of arbitrary level separation? The
whole point is that there is no fundamental difference between one
Turing emulation and another. Paris is a program.

A program is running on a machine... a program interact through interface and that's the **only** way to interact.
 

>
> If not you aren't really doing multi level simulation (simulation in a
> simulation)... but a single level one where you made it look like multi
> level.
>
> Example: if you run a virtual machine (like virtual box) and you virtualize
> an OS and inside that one you run a virtual box that run another os inside
> it, the second level cannot go to the first level (as the first level can't
> reach the host) unless an interface between them exists.

No, you can. I can log into the root level on a hardware node - pick a
virtual machine on that node and log into it, open up a remote desktop
there and log back into the hardware node that the VM box is on if I
want. I can reboot the hardware machine from any nested level within
the node. There doesn't need to be an interface at all. They are all
running on the same physical hardware node.


Well you can't read "unless an interface between them exists."
 
Craig

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.

Quentin Anciaux

unread,
May 30, 2012, 6:13:40 PM5/30/12
to everyth...@googlegroups.com


2012/5/31 Quentin Anciaux <allc...@gmail.com>

So for you a remote desktop is not an interface... "remote" is a magic mushroom ?

So for you when two programs "talk" they do it through wishful thinking ? read what **interface** means.
 
there and log back into the hardware node that the VM box is on if I
want. I can reboot the hardware machine from any nested level within
the node. There doesn't need to be an interface at all. They are all
running on the same physical hardware node.


Well you can't read "unless an interface between them exists."
 
Craig

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.




--
All those moments will be lost in time, like tears in rain.

Craig Weinberg

unread,
May 30, 2012, 9:25:47 PM5/30/12
to Everything List
On May 30, 6:09 pm, Quentin Anciaux <allco...@gmail.com> wrote:

> > You are defining a 'real computer' in terms in terms that you are
> > smuggling in from our real world of physics. In a Church-Turing
> > Matrix, why would there be any kind of arbitrary level separation? The
> > whole point is that there is no fundamental difference between one
> > Turing emulation and another. Paris is a program.
>
> A program is running on a machine... a program interact through interface
> and that's the **only** way to interact.

Huh? A program interacts with another program directly. There is no
interface. It makes no difference to the OS of the HW node whether the
program is running virtual Paris on the root level of the physical
machine or virtual virtual Paris on one of the virtual machines.

>
>
>
>
>
>
>
>
>
>
>
> > > If not you aren't really doing multi level simulation (simulation in a
> > > simulation)... but a single level one where you made it look like multi
> > > level.
>
> > > Example: if you run a virtual machine (like virtual box) and you
> > virtualize
> > > an OS and inside that one you run a virtual box that run another os
> > inside
> > > it, the second level cannot go to the first level (as the first level
> > can't
> > > reach the host) unless an interface between them exists.
>
> > No, you can. I can log into the root level on a hardware node - pick a
> > virtual machine on that node and log into it, open up a remote desktop
> > there and log back into the hardware node that the VM box is on if I
> > want. I can reboot the hardware machine from any nested level within
> > the node. There doesn't need to be an interface at all. They are all
> > running on the same physical hardware node.
>
> Well you can't read "unless an interface between them exists."

What interface are you talking about? I can make a million nested
layers of virtual worlds and I can make it so the same virtual fire
burns in all of them, with no interface required. It would magically
burn on command if I wanted it to. It's no problem at all unless I
want it to burn outside of the root level - into the literal reality
of time-space-matter-energy.

Craig

Craig Weinberg

unread,
May 30, 2012, 9:32:41 PM5/30/12
to Everything List
On May 30, 6:13 pm, Quentin Anciaux <allco...@gmail.com> wrote:

>
> >> No, you can. I can log into the root level on a hardware node - pick a
> >> virtual machine on that node and log into it, open up a remote desktop
>
> So for you a remote desktop is not an interface... "remote" is a magic
> mushroom ?

It's not an interface, it's just the OS. It doesn't have to be a
remote desktop, it can be anything. I can open a local folder or a
remote folder, it makes no difference.


>
> So for you when two programs "talk" they do it through wishful thinking ?
> read what **interface** means.

Then programs are made of 'interfaces'? Each line of code interfaces
with another? Each byte interfaces with the next byte? There is no
difference between running code on the root level and running it on a
nested virtual level. There is a big difference between running code
on the root level and causing changes in the outside world. There is
no 'interface' that will allow a computer to control all matter and
energy in the universe and there is no 'interface' required for a
program to control any software running in a given digital environment
that it is designed to control.

Craig

Jason Resch

unread,
May 31, 2012, 1:45:30 AM5/31/12
to everyth...@googlegroups.com
Craig,

You mentioned that you can open a remote desktop connection from a virtualized computer to a real computer (or even the one running the virtualization).

This, as Quentin mentioned, requires an interface.  In this case it is provided by the virtual network card made available to the virtual OS.  When the virtual OS writes network traffic to this virtual interface, it is read by the host computer, and from there on can be interpreted and processed.  It is only because the host computer is monitoring the state of this virtual network card and forwarding its traffic that the virtual OS is able to send any network traffic outside it.

Jason


Craig

Quentin Anciaux

unread,
May 31, 2012, 1:54:20 AM5/31/12
to everyth...@googlegroups.com


2012/5/31 Craig Weinberg <whats...@gmail.com>

On May 30, 6:09 pm, Quentin Anciaux <allco...@gmail.com> wrote:

> > You are defining a 'real computer' in terms in terms that you are
> > smuggling in from our real world of physics. In a Church-Turing
> > Matrix, why would there be any kind of arbitrary level separation? The
> > whole point is that there is no fundamental difference between one
> > Turing emulation and another. Paris is a program.
>
> A program is running on a machine... a program interact through interface
> and that's the **only** way to interact.

Huh? A program interacts with another program directly.

Yes ? Give me an example, the most basic interface is shared memory (and eventually, any shared thing is done via memory access)... So give me a program that can talk/share thing with another program without any interface between them...
 
There is no
interface. It makes no difference to the OS of the HW node whether the
program is running virtual Paris on the root level of the physical
machine or virtual virtual Paris on one of the virtual machines.

Yes there is a difference, the paris running on a virtual machine has no direct access (and can't know of it unless an interface exist) on the physical hardware.

>
>
>
>
>
>
>
>
>
>
>
> > > If not you aren't really doing multi level simulation (simulation in a
> > > simulation)... but a single level one where you made it look like multi
> > > level.
>
> > > Example: if you run a virtual machine (like virtual box) and you
> > virtualize
> > > an OS and inside that one you run a virtual box that run another os
> > inside
> > > it, the second level cannot go to the first level (as the first level
> > can't
> > > reach the host) unless an interface between them exists.
>
> > No, you can. I can log into the root level on a hardware node - pick a
> > virtual machine on that node and log into it, open up a remote desktop
> > there and log back into the hardware node that the VM box is on if I
> > want. I can reboot the hardware machine from any nested level within
> > the node. There doesn't need to be an interface at all. They are all
> > running on the same physical hardware node.
>
> Well you can't read "unless an interface between them exists."

What interface are you talking about? I can make a million nested
layers of virtual worlds and I can make it so the same virtual fire
burns in all of them, with no interface required.

Well I know you do it through magic mushroom... but hey, that doesn't work.

Quentin
 
It would magically
burn on command if I wanted it to. It's no problem at all unless I
want it to burn outside of the root level - into the literal reality
of time-space-matter-energy.

Craig

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.

Jason Resch

unread,
May 31, 2012, 2:02:51 AM5/31/12
to everyth...@googlegroups.com

I had some thoughts on this same topic a few months ago.  I was thinking about what the difference is between a God-mind that knows everything, and an empty mind that knew nothing.  Both contain zero information (in an information theoretic sense), so perhaps if someone has no brain they become omniscient (in a certain sense).  If we consider RSSA, our consciousness followed some path to get to the current moment.  If we look at brain development, we find our consciousness formed from what was previously not conscious matter.  Therefore, there is some path from a (null conscious state)->(you), and perhaps, there are paths from the null state to every possible conscious state.  If so, then every time we go to sleep, or go under anesthesia, or die, we can wake up as anyone.
 

We "know" that consciousness is in "platonia", and that local brains are just relative universal numbers making possible for a person (in a large sense which can include an amoeba) to manifest itself relatively to its most probable computation/environment. But this does not completely answer the question. I think that many thinks that the more a brain is big, the more it can be conscious, which is not so clear when you take the reversal into account. It might be the exact contrary.

I think there are many tricks the brain employs against itself to aid the selfish propagation of its genes.  One example is the concept of the ego (having an identity).  Many drugs can temporarily disable whatever mechanism in our brain creates this feeling, leading to ego death, feelings of connectedness, oneness with other or the universe, etc.  Perhaps one of our ancestors always felt this way, but died out when the egoist gene developed and made its carriers exploitative of the egoless.
 

And this might be confirmed by studies showing that missing some part of the brain, like an half hippocampus, can lead to to a permanent feeling of presence.
Recently this has been confirmed by the showing that LSD and psilocybe decrease the activity of the brain during the hallucinogenic phases. And dissociative drugs disconnect parts of the brain, with similar increase of the first person experience. Clinical studies of Near death experiences might also put evidence in that direction. haldous Huxley made a similar proposal for mescaline.

This is basically explained with the Bp & Dt hypostases. By suppressing material in the brain you make the "B" poorer (you eliminate belief), but then you augment the possibility so you make the consistency Dt stronger. Eventually you come back to the universal consciousness of the virgin simple universal numbers, perhaps.

Here are some recent papers on this:



Thanks for the links and your thoughts.  They are, as always, very interesting.
 
Bruno

PS I asked Colin on the FOR list if he is aware of the European Brain Project, which is relevant for this thread. Especially that they are aware of "simulating nature at some level":




Has he replied on the FOR list?  It seems he has been absent from this list for the past few days.

Jason
 

Jason Resch

unread,
May 31, 2012, 2:33:40 AM5/31/12
to everyth...@googlegroups.com
On Wed, May 30, 2012 at 3:04 PM, Craig Weinberg <whats...@gmail.com> wrote:
On May 29, 1:45 am, Jason Resch <jasonre...@gmail.com> wrote:

> So which of the following four link(s) in the logical chain do you take
> issue with?
>
> A. human brain (and body) comprises matter and energy

So does a cadaver's brain and body. The fact that a cadaver is not
intelligent should show us that the difference between life and death
can't be meaningfully reduced to matter and energy.


That some organizations of matter/energy are intelligent and others are not is irrelevant, what matters is whether or not you agree that the brain is made of matter and energy.  Do you agree the brain is made of matter and energy, and that the brain is responsible for your consciousness (or at least one of the many possible manifestations of it)?

 
> B. that matter and energy follow natural laws,

No, laws follow from our observation of natural matter and energy.

You are mistaking our approximations and inferences concerning the natural laws for the natural laws themselves.  Before there were any humans, or any life, there must have been laws that the universe obeyed to reach the point where Earth formed and life could develop.  Do you agree that such natural laws exist (regardless of our human approximations of them)?
 

> C. that these laws are describable in mathematical terms

You have jumped from physics to abstraction. It's like saying 'I have
a rabbit > rabbits act like rabbits > Bugs Bunny is modeled after the
behavior of rabbits > Bugs Bunny is a rabbit'.

I haven't jumped there yet.  All "C" says is that there exists some formal system that is capable of describing the natural laws as they are.  You may accept or reject this.  If you reject this, simply say so and provide some justification if you have one.

Note that I have not made any statement to the effect that "an abstract rabbit is the same as a physical rabbit", only that natural laws that the matter and energy in (a rabbit or any other physical thing) follow can be described.

 

> D. that mathematics can be simulated to any degree of precision by
> algorithms
>

Precision only determines the probability that a particular detector
fails to detect the fraud of simulation over time. It says nothing
about the genuine equivalence of the simulation and the reality.


It sounds like you accept that mathematics can be simulated to any degree of precision by algorithms, but your objection is that without absolutely perfect precision, the simulation will eventually diverge from the object being simulated in some noticeable way.  I think this is a valid objection.  However, I don't see this objection serving as the basis for Colin's argument against artificial general intelligence.  Let's say we have a near perfect simulation of the physics of Einstein's brain running in a computer.  It is near-perfect, rather than perfect, because due to rounding errors, it is predicted that there will  be one neuron misfire every 50 years of operation.  (Where a misfire is a neuron that fires when the actual brain would not have, or doesn't fire when the actual brain would not have).  Maybe this misfire causes the simulated brain to develop a wrong idea when he would have otherwise had the right one, but who would argue that this simulated Einstein brain is not intelligent?  Perhaps it has an IQ of 159 instead of the 160 of the genuine brain, but it would still be consider an example of AGI.  If you don't like the 1 error every 50 years, then you can double the amount of memory used in the floating point numbers (going from 64 bits to 128 bits per number), and then you make the system have a precision that is 2^64 times finer, so there would not be a deviation in the simulation during the whole life of the universe.

So while I accept your argument that a digital machine cannot perfectly simulate a continuous one perfectly, I do not see how that could serve as a practical barrier in the creation of AGI.

Jason

Bruno Marchal

unread,
May 31, 2012, 11:36:47 AM5/31/12
to everyth...@googlegroups.com
On 31 May 2012, at 08:02, Jason Resch wrote:



On Tue, May 29, 2012 at 12:55 PM, Bruno Marchal <mar...@ulb.ac.be> wrote:

On 29 May 2012, at 16:32, Jason Resch wrote:




The question I have in mind is "Does a brain produce consciousness, or does the brain filter consciousness?

I had some thoughts on this same topic a few months ago.  I was thinking about what the difference is between a God-mind that knows everything, and an empty mind that knew nothing.  Both contain zero information (in an information theoretic sense), so perhaps if someone has no brain they become omniscient (in a certain sense). 

"In a certain sense". OK. (The devil is there). But an empty mind has still to be the mind of a machine, probably the virgin (unprogrammed) universal machine, or the Löbian one (I still dunno).


If we consider RSSA, our consciousness followed some path to get to the current moment. 

Key point. I just used this in a reply on the FOAR list (where I explain UDA/AUDA).



If we look at brain development, we find our consciousness formed from what was previously not conscious matter. 

Not really. It is counter-intuitive, but matter is the last thing that emanates from the ONE (in Plato/Plotinus, and in comp, and even in the information theoretic view of QM as explained by Ron Garrett and that you compare rightly to the comp consequence). Matter can even be seen as what God loose control on. It is almost pure absolute indetermination. The primitive matter is really a product of consciousness differentiation (cf UDA). But I see what you mean. I think.



Therefore, there is some path from a (null conscious state)->(you), and perhaps, there are paths from the null state to every possible conscious state. 

Yes, and vice versa by amnesia, plausibly.


If so, then every time we go to sleep, or go under anesthesia, or die, we can wake up as anyone.

In a sense, we do that all the time. This points to the idea that there is only one (universal) dreaming person, and that personal identity is a relative illusion. 


 

We "know" that consciousness is in "platonia", and that local brains are just relative universal numbers making possible for a person (in a large sense which can include an amoeba) to manifest itself relatively to its most probable computation/environment. But this does not completely answer the question. I think that many thinks that the more a brain is big, the more it can be conscious, which is not so clear when you take the reversal into account. It might be the exact contrary.

I think there are many tricks the brain employs against itself to aid the selfish propagation of its genes.  One example is the concept of the ego (having an identity). 

Agreed. As I said just above.


Many drugs can temporarily disable whatever mechanism in our brain creates this feeling, leading to ego death, feelings of connectedness, oneness with other or the universe, etc.  Perhaps one of our ancestors always felt this way, but died out when the egoist gene developed and made its carriers exploitative of the egoless.

Probably. I think so.


 

And this might be confirmed by studies showing that missing some part of the brain, like an half hippocampus, can lead to to a permanent feeling of presence.
Recently this has been confirmed by the showing that LSD and psilocybe decrease the activity of the brain during the hallucinogenic phases. And dissociative drugs disconnect parts of the brain, with similar increase of the first person experience. Clinical studies of Near death experiences might also put evidence in that direction. haldous Huxley made a similar proposal for mescaline.

This is basically explained with the Bp & Dt hypostases. By suppressing material in the brain you make the "B" poorer (you eliminate belief), but then you augment the possibility so you make the consistency Dt stronger. Eventually you come back to the universal consciousness of the virgin simple universal numbers, perhaps.

Here are some recent papers on this:



Thanks for the links and your thoughts.  They are, as always, very interesting.

Thanks Jason,

Bruno



PS I asked Colin on the FOR list if he is aware of the European Brain Project, which is relevant for this thread. Especially that they are aware of "simulating nature at some level":




Has he replied on the FOR list?  It seems he has been absent from this list for the past few days.

He has disappeared again, apparently. 

Best,

Bruno

Craig Weinberg

unread,
May 31, 2012, 12:10:26 PM5/31/12
to Everything List
On May 31, 1:45 am, Jason Resch <jasonre...@gmail.com> wrote:
> Craig,
>
> You mentioned that you can open a remote desktop connection from a
> virtualized computer to a real computer (or even the one running the
> virtualization).
>
> This, as Quentin mentioned, requires an interface.  In this case it is
> provided by the virtual network card made available to the virtual OS.

A 'virtual network card' is just a name for the part of OS. There is
no interface. The 'real computer' is no more real than the virtual
computer. The partition is purely fictional - a presentation layer to
appeal to our sense of organization and convenience. No virtual
network card is required. You could just call it the part of the OS
that we call virtual.

The partition between the OS and the actual hardware however, does
require an interface for our hands and eyes to make changes to the
hardware that affects the software.

> When the virtual OS writes network traffic to this virtual interface, it is
> read by the host computer, and from there on can be interpreted and
> processed.  It is only because the host computer is monitoring the state of
> this virtual network card and forwarding its traffic that the virtual OS is
> able to send any network traffic outside it.

No, the containers all share the same root OS. The virtual interface
is a convenient fiction.

Craig

Craig Weinberg

unread,
May 31, 2012, 12:22:23 PM5/31/12
to Everything List
On May 31, 1:54 am, Quentin Anciaux <allco...@gmail.com> wrote:
> 2012/5/31 Craig Weinberg <whatsons...@gmail.com>
>
> > On May 30, 6:09 pm, Quentin Anciaux <allco...@gmail.com> wrote:
>
> > > > You are defining a 'real computer' in terms in terms that you are
> > > > smuggling in from our real world of physics. In a Church-Turing
> > > > Matrix, why would there be any kind of arbitrary level separation? The
> > > > whole point is that there is no fundamental difference between one
> > > > Turing emulation and another. Paris is a program.
>
> > > A program is running on a machine... a program interact through interface
> > > and that's the **only** way to interact.
>
> > Huh? A program interacts with another program directly.
>
> Yes ? Give me an example, the most basic interface is shared memory (and
> eventually, any shared thing is done via memory access)... So give me a
> program that can talk/share thing with another program without any
> interface between them...

You brought in the term interface specifically to talk about the
necessity to intentionally bridge two separate layers of reality. To
use a computer, I need a KVM or touchscreen or whatever, an interface
that samples the behavior of physical matter and maps it to
microelectronic settings. I pointed out that in a truly digitial
universe, no such thing would be necessary and nothing would be
prevented by the lack of such a thing.

Once something is native digital, it can be integrated with anything
else that is digital native - that is sort of the point. It's all
virtual. Any formalized virtual interfaces, a KVM in Second Life or
The Matrix or whatever, are purely decorative. They are cartoon
facades. The actual code doesn't need any kind of graphic
representation or digital-to-something-to-digital transduction to pass
from one area of memory to another.

>
> > There is no
> > interface. It makes no difference to the OS of the HW node whether the
> > program is running virtual Paris on the root level of the physical
> > machine or virtual virtual Paris on one of the virtual machines.
>
> Yes there is a difference, the paris running on a virtual machine has no
> direct access (and can't know of it unless an interface exist) on the
> physical hardware.

The virtual machine has the same access to the physical hardware as
the root level. It's entirely up to the programmer how direct they
want it to appear to the user, but ultimately, it is still just a
program running on the hardware. The virtual machine cannot run
without hardware.

>
>
>
>
>
>
>
>
>
>
>
> > > > > If not you aren't really doing multi level simulation (simulation in
> > a
> > > > > simulation)... but a single level one where you made it look like
> > multi
> > > > > level.
>
> > > > > Example: if you run a virtual machine (like virtual box) and you
> > > > virtualize
> > > > > an OS and inside that one you run a virtual box that run another os
> > > > inside
> > > > > it, the second level cannot go to the first level (as the first level
> > > > can't
> > > > > reach the host) unless an interface between them exists.
>
> > > > No, you can. I can log into the root level on a hardware node - pick a
> > > > virtual machine on that node and log into it, open up a remote desktop
> > > > there and log back into the hardware node that the VM box is on if I
> > > > want. I can reboot the hardware machine from any nested level within
> > > > the node. There doesn't need to be an interface at all. They are all
> > > > running on the same physical hardware node.
>
> > > Well you can't read "unless an interface between them exists."
>
> > What interface are you talking about? I can make a million nested
> > layers of virtual worlds and I can make it so the same virtual fire
> > burns in all of them, with no interface required.
>
> Well I know you do it through magic mushroom... but hey, that doesn't work.

Sounds like you are conceding my point though.

Craig

Quentin Anciaux

unread,
May 31, 2012, 12:26:10 PM5/31/12
to everyth...@googlegroups.com


2012/5/31 Craig Weinberg <whats...@gmail.com>

That's complete bullshit... If my emulator does not give you access to the host hardware it does not... The point is that the program running on the emulator *****HAS NO WAY***** to know it does not run on physical hardware if no interface is present to give it access to it.


Shared memory ****IS**** an interface. But anyway, I leave this discussion here, can't cure your stupidity.

Quentin
 

Craig

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.

meekerdb

unread,
May 31, 2012, 12:56:02 PM5/31/12
to everyth...@googlegroups.com
On 5/31/2012 8:36 AM, Bruno Marchal wrote:
I think there are many tricks the brain employs against itself to aid the selfish propagation of its genes.� One example is the concept of the ego (having an identity).�

Agreed. As I said just above.

So having an identity, a unity of thoughts, depends on there being a brain which depends on physics.� Which is why I argue that, whatever is fundamental, physics is essential to consciousness.



Many drugs can temporarily disable whatever mechanism in our brain creates this feeling, leading to ego death, feelings of connectedness, oneness with other or the universe, etc.� Perhaps one of our ancestors always felt this way, but died out when the egoist gene developed and made its carriers exploitative of the egoless.

Probably. I think so.

Evolutionarily the ego must have preceded Lobian programming by many generations.� Competition and natural selection must have occurred even in the primordial soup.

Brent

Craig Weinberg

unread,
May 31, 2012, 1:15:38 PM5/31/12
to Everything List
On May 31, 2:33 am, Jason Resch <jasonre...@gmail.com> wrote:
> On Wed, May 30, 2012 at 3:04 PM, Craig Weinberg <whatsons...@gmail.com>wrote:
>
> > On May 29, 1:45 am, Jason Resch <jasonre...@gmail.com> wrote:
>
> > > So which of the following four link(s) in the logical chain do you take
> > > issue with?
>
> > > A. human brain (and body) comprises matter and energy
>
> > So does a cadaver's brain and body. The fact that a cadaver is not
> > intelligent should show us that the difference between life and death
> > can't be meaningfully reduced to matter and energy.
>
> That some organizations of matter/energy are intelligent and others are not
> is irrelevant, what matters is whether or not you agree that the brain is
> made of matter and energy. Do you agree the brain is made of matter and
> energy, and that the brain is responsible for your consciousness (or at
> least one of the many possible manifestations of it)?

I think that Matter-Energy and Sense-Motive are dual aspects of the
same thing. If you are talking about the brain only, then you are
talking about matter and energy, but no person exists if you limit the
discussion to that. The matter and energy side of what we are is just
organs. There is no person there. The brain is not responsible for
consciousness anymore than your computer is responsible for the
internet. It is the necessary vehicle through which human level
awareness is accessed.

>
> > > B. that matter and energy follow natural laws,
>
> > No, laws follow from our observation of natural matter and energy.
>
> You are mistaking our approximations and inferences concerning the natural
> laws for the natural laws themselves.

No, you are mistaking the interaction of concretely real natural
phenomena with abstract principles which we have derived from
measurement and intellectual extension.

> Before there were any humans, or any
> life, there must have been laws that the universe obeyed to reach the point
> where Earth formed and life could develop.

Before there was matter, there were no laws that the universe obeyed
pertaining to matter, just as there were no laws of biology before
biology. The universe makes laws by doing. It isn't only a disembodied
set of invisible laws which creates obedient bodies. Laws are not
primordial. You have to have some kind of capacity to sense and make
sense before any kind of regularity of pattern can be established.
Something has to be able to happen in the first place before you can
separate out what can happen under which conditions. The reality of
something being able to happen - experience - possibility - prefigures
all other principles.

> Do you agree that such natural
> laws exist (regardless of our human approximations of them)?

No. It has nothing to do with human approximations though. If an
audience cheers it is not because there is a law of cheering they are
following, it is because they personally are participating in a
context of sense and motive which they and their world mutually push
and pull. The understanding of when cheering happens and under what
conditions it can be produced is an a posterior abstraction. We can
call it a law, and indeed, it is highly regular and useful to think of
it that way, but ultimately the law itself is nothing. It is a set of
meta-observations about reality, not an ethereal authoritative core
around which concrete reality constellates and obeys. Laws come from
within. Human laws from within humans, atomic laws from within atoms,
etc.

>
>
>
> > > C. that these laws are describable in mathematical terms
>
> > You have jumped from physics to abstraction. It's like saying 'I have
> > a rabbit > rabbits act like rabbits > Bugs Bunny is modeled after the
> > behavior of rabbits > Bugs Bunny is a rabbit'.
>
> I haven't jumped there yet. All "C" says is that there exists some formal
> system that is capable of describing the natural laws as they are. You may
> accept or reject this. If you reject this, simply say so and provide some
> justification if you have one.

The formal system doesn't exist until some sentient being
intentionally brings it into existence. Bugs Bunny requires a
cartoonist to draw him. Bugs is a formal system that is capable of
describing rabbit behaviors as they are but he doesn't exist
'there' ('he' insists 'here' instead).

>
> Note that I have not made any statement to the effect that "an abstract
> rabbit is the same as a physical rabbit", only that natural laws that the
> matter and energy in (a rabbit or any other physical thing) follow can be
> described.

You aren't factoring in the limitation of perception. Think of a young
child trying to imitate an accent from another language. To the child,
they perceive that they are doing a pretty good job of emulating
exactly how that way of speaking sounds. To an adult though,
especially one who is a native speaker of the language being imitated,
there is an obvious difference. This is where we are in our
contemporary belief that we have accounted for physical forces. I
think that we are looking at a pre-Columbian map of the world and
trying to ignore the shadowy fringes of consciousness with names like
'entanglement', 'dark energy', 'vacuum flux' etc. We are in the dark
ages of understanding consciousness as we have not yet discovered
sense. We use sense to try to make sense of a universe that we have
closed one eye to. Physics is a toy model of reality.

>
>
>
> > > D. that mathematics can be simulated to any degree of precision by
> > > algorithms
>
> > Precision only determines the probability that a particular detector
> > fails to detect the fraud of simulation over time. It says nothing
> > about the genuine equivalence of the simulation and the reality.
>
> It sounds like you accept that mathematics can be simulated to any degree
> of precision by algorithms, but your objection is that without absolutely
> perfect precision, the simulation will eventually diverge from the object
> being simulated in some noticeable way.

It depends what the algorithms are running on. If you use a physical
material that is ideal for precision and accuracy, then you are using
the worst possible material for biological sensation, which would need
to be optimized for volatility and ambiguity.

> I think this is a valid
> objection. However, I don't see this objection serving as the basis for
> Colin's argument against artificial general intelligence. Let's say we
> have a near perfect simulation of the physics of Einstein's brain running
> in a computer. It is near-perfect, rather than perfect, because due to
> rounding errors, it is predicted that there will be one neuron misfire
> every 50 years of operation. (Where a misfire is a neuron that fires when
> the actual brain would not have, or doesn't fire when the actual brain
> would not have). Maybe this misfire causes the simulated brain to develop
> a wrong idea when he would have otherwise had the right one, but who would
> argue that this simulated Einstein brain is not intelligent? Perhaps it
> has an IQ of 159 instead of the 160 of the genuine brain, but it would
> still be consider an example of AGI. If you don't like the 1 error every
> 50 years, then you can double the amount of memory used in the floating
> point numbers (going from 64 bits to 128 bits per number), and then you
> make the system have a precision that is 2^64 times finer, so there would
> not be a deviation in the simulation during the whole life of the universe.

That would be true if complexity was what gives rise to awareness, but
I don't think that's the case. There is no sculpture of Einstein's
body that is Einstein. The brain is just part of the body. No amount
of emulation is going to put Einstein in that brain - he was never
there to begin with. The brain was just his KVM and screen. The real
Einstein was an event that happened in the 20th century and can never
be reproduced at all.

>
> So while I accept your argument that a digital machine cannot perfectly
> simulate a continuous one perfectly, I do not see how that could serve as a
> practical barrier in the creation of AGI.
>

And I accept your reasoning that it would be as you describe, were the
universe an interplay of information rather than concrete sense
experiences. It's a close second possibility - I think that you and
Bruno are almost right, but the detail of which of the two (pattern or
pattern recognition) is ultimately more primitive makes all the
difference. I think that pattern recognition can exist without any
external pattern more than patterns can exist without the potential
for awareness of them. If our perception were more independent of the
brain...if we could not profoundly change it with just a bit of
chemistry or suggestion...if physics ultimately seemed to point to a
static simplicity at the base of the microcosm... but it doesn't. The
more we look at anything, the more it points back to ourselves and our
method of looking.

Craig

Craig Weinberg

unread,
May 31, 2012, 1:32:30 PM5/31/12
to Everything List
I'm not talking about the user having access to the host hardware, I'm
talking about the virtual machine: the software. It is using the host
machines's memory and CPUs, is it not?

> The point is that the program running on the
> emulator *****HAS NO WAY***** to know it does not run on physical hardware
> if no interface is present to give it access to it.

No program has any way of knowing whether it is running on physical
hardware or not, even if it has an interface. Whether the program is
running on an emulator or not makes no difference.

>
> Shared memory ****IS**** an interface. But anyway, I leave this discussion
> here, can't cure your stupidity.

Despite your ad hominem retort, there is no basis for it if you
understand the points I am making. It is your understanding that is a
little fuzzy. I am an MCSE and CCEA btw, and I have been configuring
and managing hundreds of RDP, Citrix, and virtual servers every day
for over 13 years. I can assure you that you can break an entire
hardware node by doing something on one container. Virtual is a
relative term, it is not literal. The virtual machines are all really
the same physical computer.

Craig

Quentin Anciaux

unread,
May 31, 2012, 1:58:04 PM5/31/12
to everyth...@googlegroups.com


2012/5/31 Craig Weinberg <whats...@gmail.com>

Yes but you still have to learn what a program is... then come back talking.

Quentin

I can assure you that you can break an entire
hardware node by doing something on one container. Virtual is a
relative term, it is not literal. The virtual machines are all really
the same physical computer.

Craig

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.

Craig Weinberg

unread,
May 31, 2012, 2:16:40 PM5/31/12
to Everything List
What 'come back'? Did I leave? What understanding about what a program
is do you have that could possibly make an difference in this
conversation?

Craig

Quentin Anciaux

unread,
May 31, 2012, 2:22:29 PM5/31/12
to everyth...@googlegroups.com


2012/5/31 Craig Weinberg <whats...@gmail.com>


To know what an interface is... how 2 programs communicate. The way you talk is like "hey dude it's in the OS !"... like the operating system was not a software... like if you want to access the network you're not calling a software... like in the end it was not writing something into some place in memory... pfff only thing I can say is "AhAhAh !!!"... as your "sense" BS.

The way you don't understand "level"... when a emulator is in a emulator... the second level emulator run on the first level emulated hardware... which run itself run on physical hardware, no program in the nth level could access n-1 level hardware without the n-1 level emulator giving interface to it.

Quentin
 

Craig

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.

Stephen P. King

unread,
May 31, 2012, 2:29:15 PM5/31/12
to everyth...@googlegroups.com
Hi Craig,

It seems that we might be glossing over the difference between
hardware and software...

--
Onward!

Stephen

"Nature, to be commanded, must be obeyed."
~ Francis Bacon


Bruno Marchal

unread,
May 31, 2012, 3:18:27 PM5/31/12
to everyth...@googlegroups.com
On 31 May 2012, at 18:56, meekerdb wrote:

On 5/31/2012 8:36 AM, Bruno Marchal wrote:
I think there are many tricks the brain employs against itself to aid the selfish propagation of its genes.  One example is the concept of the ego (having an identity). 

Agreed. As I said just above.

So having an identity, a unity of thoughts, depends on there being a brain which depends on physics.  Which is why I argue that, whatever is fundamental, physics is essential to consciousness.

I can agree. This does not make physics primitive though. Just that that the physical realm might delude us on our identity, as it does on materiality. Keep in mind that physics, with comp, is a statistic on computations as seen from some points of view. 






Many drugs can temporarily disable whatever mechanism in our brain creates this feeling, leading to ego death, feelings of connectedness, oneness with other or the universe, etc.  Perhaps one of our ancestors always felt this way, but died out when the egoist gene developed and made its carriers exploitative of the egoless.

Probably. I think so.

Evolutionarily the ego must have preceded Lobian programming by many generations. 

I agree, for the human egos, but arithmetic is full of relative "egos", non lobian and lobian one.


Competition and natural selection must have occurred even in the primordial soup.

No doubt. From our perspective. ("our" can include the bacteria, and all living creatures).

Bruno


Craig Weinberg

unread,
May 31, 2012, 5:11:13 PM5/31/12
to Everything List
On May 31, 2:22 pm, Quentin Anciaux <allco...@gmail.com> wrote:

> To know what an interface is... how 2 programs communicate. The way you
> talk is like "hey dude it's in the OS !"... like the operating system was
> not a software...

No, I'm saying it's all software, except for the hardware. That has
been my point from the start. You can make as many virtual worlds
nested within each other as you like and it doesn't matter. No
interface is required because they are all being physically hosted by
the semiconducting microelectronics.

It is not a problem to have an avatar have virtual dinner in virtual
Paris by using his virtual computer. He can dive into the monitor and
end up on the Champs-Élysées if the programmer writes the virtual
worlds that way. No interface can allow or restrict anything within a
virtual context - it's all an election by the programmer, not an
ontological barrier.

> like if you want to access the network you're not calling
> a software... like in the end it was not writing something into some place
> in memory... pfff only thing I can say is "AhAhAh !!!"... as your "sense"
> BS.

When I use my keyboard to type these words, I am using hardware. When
an avatar uses a virtual keyboard, or when that avatar's avatar's
avatar uses a virtual virtual virtual keyboard, there is no keyboard
there. The keyboard can be a turnip or a cloud, it doesn't matter. For
me, in hardware world, it matters.

>
> The way you don't understand "level"... when a emulator is in a emulator...
> the second level emulator run on the first level emulated hardware...

No, I understand exactly how you understand level but I am telling you
that you are wrong. You are mistaking marketing hype for reality.
Emulation is a figure of speech. There is no virtual hardware. It's
just one piece of software that acts like several. The organization of
it is meaningless ontologically. The entire program is an
epiphenomenon of the same piece of hardware.

> which
> run itself run on physical hardware, no program in the nth level could
> access n-1 level hardware without the n-1 level emulator giving interface
> to it.

That is just not true and you aren't listening to what I'm saying. You
are confusing user permissions with hardware to software interface.
Every week I see nth level programs break n-1 OS and take down the
entire node. It's not what you think. They use the same OS. There is
only one copy of Windows Server 2008 that every container shares. If
they had separate copies, there would still be a meta-OS that they
share.

Craig

Craig Weinberg

unread,
May 31, 2012, 5:14:39 PM5/31/12
to Everything List
On May 31, 2:29 pm, "Stephen P. King" <stephe...@charter.net> wrote:

>
>      It seems that we might be glossing over the difference between
> hardware and software...
>

Hi Stephen,

Yes, that seems to be the case a lot. I guess it can be confusing, but
I'm not sure why. If a cat can pee on it, then it's hardware.

Craig

Quentin Anciaux

unread,
May 31, 2012, 5:15:26 PM5/31/12
to everyth...@googlegroups.com


2012/5/31 Craig Weinberg <whats...@gmail.com>

On May 31, 2:22 pm, Quentin Anciaux <allco...@gmail.com> wrote:

> To know what an interface is... how 2 programs communicate. The way you
> talk is like "hey dude it's in the OS !"... like the operating system was
> not a software...

No, I'm saying it's all software, except for the hardware. That has
been my point from the start. You can make as many virtual worlds
nested within each other as you like and it doesn't matter. No
interface is required because they are all being physically hosted by
the semiconducting microelectronics.

 It is not a problem to have an avatar have virtual dinner in virtual
Paris by using his virtual computer. He can dive into the monitor and
end up on the Champs-Élysées if the programmer writes the virtual
worlds that way. No interface can allow or restrict anything within a
virtual context

You simply don't know what the terms means or you're stupid... one or the other or both.
 
- it's all an election by the programmer, not an
ontological barrier.

> like if you want to access the network you're not calling
> a software... like in the end it was not writing something into some place
> in memory... pfff only thing I can say is "AhAhAh !!!"... as your "sense"
> BS.

When I use my keyboard to type these words, I am using hardware.

Which calls software, basically calling an interrupt and setting something into memory to be read by other programs (os or driver or whatever)
 
When
an avatar uses a virtual keyboard, or when that avatar's avatar's
avatar uses a virtual virtual virtual keyboard, there is no keyboard
there.

If you don't do a simulation no.. so what.
 
The keyboard can be a turnip or a cloud, it doesn't matter. For
me, in hardware world, it matters.

>
> The way you don't understand "level"... when a emulator is in a emulator...
> the second level emulator run on the first level emulated hardware...

No, I understand exactly how you understand level but I am telling you
that you are wrong. You are mistaking marketing hype for reality.

I write emulator, I know exactly how this works contrary to you.
 
Emulation is a figure of speech.

No
 
There is no virtual hardware.

There is.
 
It's
just one piece of software that acts like several. The organization of
it is meaningless ontologically. The entire program is an
epiphenomenon of the same piece of hardware.

> which
> run itself run on physical hardware, no program in the nth level could
> access n-1 level hardware without the n-1 level emulator giving interface
> to it.

That is just not true and you aren't listening to what I'm saying. You
are confusing user permissions with hardware to software interface.
Every week I see nth level programs break n-1 OS and take down the
entire node. It's not what you think. They use the same OS. There is
only one copy of Windows Server 2008 that every container shares. If
they had separate copies, there would still be a meta-OS that they
share.

Craig

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.

Craig Weinberg

unread,
May 31, 2012, 5:36:01 PM5/31/12
to Everything List
On May 31, 5:15 pm, Quentin Anciaux <allco...@gmail.com> wrote:
> 2012/5/31 Craig Weinberg <whatsons...@gmail.com>
>
>
>
>
>
>
>
>
>
> > On May 31, 2:22 pm, Quentin Anciaux <allco...@gmail.com> wrote:
>
> > > To know what an interface is... how 2 programs communicate. The way you
> > > talk is like "hey dude it's in the OS !"... like the operating system was
> > > not a software...
>
> > No, I'm saying it's all software, except for the hardware. That has
> > been my point from the start. You can make as many virtual worlds
> > nested within each other as you like and it doesn't matter. No
> > interface is required because they are all being physically hosted by
> > the semiconducting microelectronics.
>
> >  It is not a problem to have an avatar have virtual dinner in virtual
> > Paris by using his virtual computer. He can dive into the monitor and
> > end up on the Champs-Élysées if the programmer writes the virtual
> > worlds that way. No interface can allow or restrict anything within a
> > virtual context
>
> You simply don't know what the terms means or you're stupid... one or the
> other or both.

No, it's just that you aren't seeing my point that there is a
difference between a device that is ontologically necessary and one
that that is entirely optional. I don't think that means you're
stupid, just that you cannot tolerate being wrong. It doesn't matter
if you call it an interface, what matters is that I need a way to turn
my free will into electronic changes in a computer, but electronic
changes don't need a way to turn themselves into other electronic
changes.

>
> > - it's all an election by the programmer, not an
> > ontological barrier.
>
> > > like if you want to access the network you're not calling
> > > a software... like in the end it was not writing something into some
> > place
> > > in memory... pfff only thing I can say is "AhAhAh !!!"... as your "sense"
> > > BS.
>
> > When I use my keyboard to type these words, I am using hardware.
>
> Which calls software, basically calling an interrupt and setting something
> into memory to be read by other programs (os or driver or whatever)

No, it calls hardware, and the behavior of part of that hardware seems
to us like software when it is displayed back to us through screen
hardware. Programs are nothing but logical scripts to control
hardware. Hardware doesn't need a program, but programs need hardware.
Programs can run in other programs, but only if they all ultimately
run on hardware. They have no existence on their own. There is no
virtual universe being created, it is just a well maintained facade.

>
> > When
> > an avatar uses a virtual keyboard, or when that avatar's avatar's
> > avatar uses a virtual virtual virtual keyboard, there is no keyboard
> > there.
>
> If you don't do a simulation no.. so what.

So you are not limited to the logic of physics in a virtual world
because it's not physically real.

>
> > The keyboard can be a turnip or a cloud, it doesn't matter. For
> > me, in hardware world, it matters.
>
> > > The way you don't understand "level"... when a emulator is in a
> > emulator...
> > > the second level emulator run on the first level emulated hardware...
>
> > No, I understand exactly how you understand level but I am telling you
> > that you are wrong. You are mistaking marketing hype for reality.
>
> I write emulator, I know exactly how this works contrary to you.

But you don't know how it fails to work, which is the more relevant
issue. Emulation is a theory that fails in reality.

>
> > Emulation is a figure of speech.
>
> No

Yes

>
> > There is no virtual hardware.
>
> There is.

Prove it.

Craig

Jason Resch

unread,
Jun 2, 2012, 2:39:41 AM6/2/12
to everyth...@googlegroups.com
On Thu, May 31, 2012 at 12:15 PM, Craig Weinberg <whats...@gmail.com> wrote:
On May 31, 2:33 am, Jason Resch <jasonre...@gmail.com> wrote:
> On Wed, May 30, 2012 at 3:04 PM, Craig Weinberg <whatsons...@gmail.com>wrote:
>
> > On May 29, 1:45 am, Jason Resch <jasonre...@gmail.com> wrote:
>
> > > So which of the following four link(s) in the logical chain do you take
> > > issue with?
>
> > > A. human brain (and body) comprises matter and energy
>
> > So does a cadaver's brain and body. The fact that a cadaver is not
> > intelligent should show us that the difference between life and death
> > can't be meaningfully reduced to matter and energy.
>
> That some organizations of matter/energy are intelligent and others are not
> is irrelevant, what matters is whether or not you agree that the brain is
> made of matter and energy.  Do you agree the brain is made of matter and
> energy, and that the brain is responsible for your consciousness (or at
> least one of the many possible manifestations of it)?

I think that Matter-Energy and Sense-Motive are dual aspects of the
same thing. If you are talking about the brain only, then you are
talking about matter and energy, but no person exists if you limit the
discussion to that. The matter and energy side of what we are is just
organs. There is no person there. The brain is not responsible for
consciousness anymore than your computer is responsible for the
internet. It is the necessary vehicle through which human level
awareness is accessed.


Would you say, at least, that the brain is responsible for behavior?

This conversation was originally on the topic of artificial intelligence, so whatever it is in us that leads to physical changes which manifest as third-person observable behavior, do you believe that to be entirely influenced by physical and (in theory) detectable matter/energy/fields?

If not, what mechanism do you theorize mediates between mental and physical events?  Is it one way or two way?  If two way (or if as you often say it is just the other side of the coin) then why not say it is physical?

If such a mechanism exists, it must conform to some set of laws, some rhyme or reason, as otherwise how could the mental world (or side) so reliably control our physical actions, and how do the sensations picked up from physical sensors (retinas, nerve endings) so reliably make their way into our mind?  If there is a separation between the mental and physical worlds, there must be reliable rules that govern any interaction between the mind and the physical world, and the interaction must be two way.  How then, can they rightly be called two separate worlds?
 

>
> > > B. that matter and energy follow natural laws,
>
> > No, laws follow from our observation of natural matter and energy.
>
> You are mistaking our approximations and inferences concerning the natural
> laws for the natural laws themselves.

No, you are mistaking the interaction of concretely real natural
phenomena with abstract principles which we have derived from
measurement and intellectual extension.

Regardless of who is making the mistake, above you seem to agree with my premise that there are real natural phenomenon.
 

> Before there were any humans, or any
> life, there must have been laws that the universe obeyed to reach the point
> where Earth formed and life could develop.

Before there was matter, there were no laws that the universe obeyed
pertaining to matter, just as there were no laws of biology before
biology.

This is an interesting way of looking at things: that the capabilities of natural phenomenon change as it develops more and more complex states of being.  However, I think the potentiality for those capabilities was there from the beginning, and the determination of whether or not such potentialities existed in the primordial universe could in theory, have been made by a sufficiently great intelligence that had a proper understanding of the natural phenomenon.
 
The universe makes laws by doing. It isn't only a disembodied
set of invisible laws which creates obedient bodies.

What did the universe have to do to set the speed of light?
 
Laws are not
primordial.

If not laws, then what?
 
You have to have some kind of capacity to sense and make
sense before any kind of regularity of pattern can be established.

You might need sense to notice the pattern, but patterns exist that we are unaware of.  If this were not the case, there would be no room for discovery.
 
Something has to be able to happen in the first place before you can
separate out what can happen under which conditions. The reality of
something being able to happen - experience - possibility - prefigures
all other principles.


I'm not opposed to the idea that possibility or experience could in some sense be more fundamental, but I don't see how this could change the fact that we observe matter and energy to always follow certain rules, and find evidence (when we look at stars and galaxies very far away) that these laws have been in effect long before life on Earth arose.
 
> Do you agree that such natural
> laws exist (regardless of our human approximations of them)?

No.

So you deny there are any natural laws?  This might explain why some people find it difficult to carry on conversations with you.

 
It has nothing to do with human approximations though. If an
audience cheers it is not because there is a law of cheering they are
following, it is because they personally are participating in a
context of sense and motive which they and their world mutually push
and pull. The understanding of when cheering happens and under what
conditions it can be produced is an a posterior abstraction. We can
call it a law, and indeed, it is highly regular and useful to think of
it that way, but ultimately the law itself is nothing. It is a set of
meta-observations about reality,

Are our observations not reflections of something that is true?
 
not an ethereal authoritative core
around which concrete reality constellates and obeys. Laws come from
within. Human laws from within humans, atomic laws from within atoms,
etc.

So then natural laws come from nature.  Earlier you said natural laws don't exist.
 

>
>
>
> > > C. that these laws are describable in mathematical terms
>
> > You have jumped from physics to abstraction. It's like saying 'I have
> > a rabbit > rabbits act like rabbits > Bugs Bunny is modeled after the
> > behavior of rabbits > Bugs Bunny is a rabbit'.
>
> I haven't jumped there yet.  All "C" says is that there exists some formal
> system that is capable of describing the natural laws as they are.  You may
> accept or reject this.  If you reject this, simply say so and provide some
> justification if you have one.

The formal system doesn't exist until some sentient being
intentionally brings it into existence.

That's fine.  The question is whether you believe that a sentient being, in theory, is capable of developing such a formal system and in it, a codification of the natural laws that govern the behavior of brains and bodies?

 
Bugs Bunny requires a
cartoonist to draw him. Bugs is a formal system that is capable of
describing rabbit behaviors as they are but he doesn't exist
'there' ('he' insists 'here' instead).

>
> Note that I have not made any statement to the effect that "an abstract
> rabbit is the same as a physical rabbit", only that natural laws that the
> matter and energy in (a rabbit or any other physical thing) follow can be
> described.

You aren't factoring in the limitation of perception. Think of a young
child trying to imitate an accent from another language. To the child,
they perceive that they are doing a pretty good job of emulating
exactly how that way of speaking sounds. To an adult though,
especially one who is a native speaker of the language being imitated,
there is an obvious difference.

If in every experiment we conduct, we find an accordance between physical experiments and predictions made by our acquired understanding of the natural laws then this would hold true to an emulation of a human brain.  If there were any difference between reality and the the physical emulation, it would indicate to us that our model was incomplete.

You seem to believe that no matter what progress is made, we can never understand all the natural laws that govern brains and bodies.
 
This is where we are in our
contemporary belief that we have accounted for physical forces. I
think that we are looking at a pre-Columbian map of the world and
trying to ignore the shadowy fringes of consciousness with names like
'entanglement', 'dark energy', 'vacuum flux' etc. We are in the dark
ages of understanding consciousness as we have not yet discovered
sense. We use sense to try to make sense of a universe that we have
closed one eye to. Physics is a toy model of reality.

>
>
>
> > > D. that mathematics can be simulated to any degree of precision by
> > > algorithms
>
> > Precision only determines the probability that a particular detector
> > fails to detect the fraud of simulation over time. It says nothing
> > about the genuine equivalence of the simulation and the reality.
>
> It sounds like you accept that mathematics can be simulated to any degree
> of precision by algorithms, but your objection is that without absolutely
> perfect precision, the simulation will eventually diverge from the object
> being simulated in some noticeable way.

It depends what the algorithms are running on. If you use a physical
material that is ideal for precision and accuracy, then you are using
the worst possible material for biological sensation, which would need
to be optimized for volatility and ambiguity.

Precision and accuracy are exactly what is needed to emulate the subatomic particles that compose our "ambiguous and volatile" brains.
 

>  I think this is a valid
> objection.  However, I don't see this objection serving as the basis for
> Colin's argument against artificial general intelligence.  Let's say we
> have a near perfect simulation of the physics of Einstein's brain running
> in a computer.  It is near-perfect, rather than perfect, because due to
> rounding errors, it is predicted that there will  be one neuron misfire
> every 50 years of operation.  (Where a misfire is a neuron that fires when
> the actual brain would not have, or doesn't fire when the actual brain
> would not have).  Maybe this misfire causes the simulated brain to develop
> a wrong idea when he would have otherwise had the right one, but who would
> argue that this simulated Einstein brain is not intelligent?  Perhaps it
> has an IQ of 159 instead of the 160 of the genuine brain, but it would
> still be consider an example of AGI.  If you don't like the 1 error every
> 50 years, then you can double the amount of memory used in the floating
> point numbers (going from 64 bits to 128 bits per number), and then you
> make the system have a precision that is 2^64 times finer, so there would
> not be a deviation in the simulation during the whole life of the universe.

That would be true if complexity was what gives rise to awareness,

Remember, I am not talking about awareness above, only behavior.  Do you think any machine could approximate Einstein's behavior to such a degree that his friends, family, and colleagues could not tell the difference?
 
but
I don't think that's the case. There is no sculpture of Einstein's
body that is Einstein. The brain is just part of the body. No amount
of emulation is going to put Einstein in that brain - he was never
there to begin with. The brain was just his KVM and screen. The real
Einstein was an event that happened in the 20th century and can never
be reproduced at all.

>
> So while I accept your argument that a digital machine cannot perfectly
> simulate a continuous one perfectly, I do not see how that could serve as a
> practical barrier in the creation of AGI.
>

And I accept your reasoning that it would be as you describe, were the
universe an interplay of information rather than concrete sense
experiences. It's a close second possibility - I think that you and
Bruno are almost right, but the detail of which of the two (pattern or
pattern recognition) is ultimately more primitive makes all the
difference. I think that pattern recognition can exist without any
external pattern more than patterns can exist without the potential
for awareness of them. If our perception were more independent of the
brain...if we could not profoundly change it with just a bit of
chemistry or suggestion...if physics ultimately seemed to point to a
static simplicity at the base of the microcosm... but it doesn't. The
more we look at anything, the more it points back to ourselves and our
method of looking.

Craig

Craig Weinberg

unread,
Jun 2, 2012, 10:46:05 AM6/2/12
to Everything List
On Jun 2, 2:39 am, Jason Resch <jasonre...@gmail.com> wrote:
> On Thu, May 31, 2012 at 12:15 PM, Craig Weinberg <whatsons...@gmail.com>wrote:
>
> > I think that Matter-Energy and Sense-Motive are dual aspects of the
> > same thing. If you are talking about the brain only, then you are
> > talking about matter and energy, but no person exists if you limit the
> > discussion to that. The matter and energy side of what we are is just
> > organs. There is no person there. The brain is not responsible for
> > consciousness anymore than your computer is responsible for the
> > internet. It is the necessary vehicle through which human level
> > awareness is accessed.
>
> Would you say, at least, that the brain is responsible for behavior?

In the sense that buildings, streets, highways, and real estate are
responsible for a city's behavior.

>
> This conversation was originally on the topic of artificial intelligence,
> so whatever it is in us that leads to physical changes which manifest as
> third-person observable behavior, do you believe that to be entirely
> influenced by physical and (in theory) detectable matter/energy/fields?

I'm not saying that though. We *are* the physical changes. Third
person and first person seem to us to be separate because the first
person end is the 'head' end. You are saying that I think 'whatever it
that is our head leads to physical changes which manifest as our tail'
and you are trying to get me to see that it makes more sense to say
that it is our tail which is responsible for the existence of the head
- that the head is what the tail needs to lead it to food and
reproduction. That's not my position though. I'm saying head-tail mind-
body are a function of the symmetry of sense.

As far as fields being detectable - detectable by what? I have no
problem detecting humor, irony, style, beauty...to a human being these
are detectable energy fields, only higher up on the monochord/chakra-
like escalator of qualitative interiority/significance. The universe
for us is much more readily detectable by us as a combination of
fiction and fact than it is in terms of matter/energy/fields. Those
things are a posteriori ideas about the universe of our body, as
verified by consensus of inanimate objects interacting. That is only
half of the universe - the tail half which is the polar opposite of
awareness. It is the perspective from which no life, order, meaning or
significance can be detected.

>
> If not, what mechanism do you theorize mediates between mental and physical
> events? Is it one way or two way? If two way (or if as you often say it
> is just the other side of the coin) then why not say it is physical?

I do say it's physical. Physical feelings, physical stories, physical
personalities and identities - all physical, but not as objects in
space, as experiences through time. There is no mechanism that
mediates spacetime-matter-energy with timespace-sense-motive, they are
the same thing except the more something is you or is like you, the
more it seems to you like the latter instead of the former.

>
> If such a mechanism exists, it must conform to some set of laws, some rhyme
> or reason, as otherwise how could the mental world (or side) so reliably
> control our physical actions, and how do the sensations picked up from
> physical sensors (retinas, nerve endings) so reliably make their way into
> our mind?

The 'mechanism' is sense. It doesn't conform to laws but it develops
habits which become as laws to those who arise out of them. It's only
a mechanism when the insider looks outside. What we are doing now is
looking outside as the insider's exterior and finding it lacking any
trace of the insider, concluding that the insider is an illusion. When
the insider looks inside however, there is more animism than
mechanism. Sense experience and meaning. On the outside, the nerves
are literal fibers and cells. On the inside 'nerve' is strength,
courage, self-legitimizing ontology. They are part of the same thing
but don't correlate one-to-one, they correlate as the whole history
and potential future of the universe twisting orthogonally into an
event horizon of a whole universe of 'here and now'


If there is a separation between the mental and physical worlds,
> there must be reliable rules that govern any interaction between the mind
> and the physical world, and the interaction must be two way. How then, can
> they rightly be called two separate worlds?

Exactly, they are not separate except to the participant. We are the
head looking at our tail, but objectively, if we were not a head, we
would see both head and tail are the body with two ends, each being
everything that the other is not. If there were rules, then the rules
would need rules. What makes the rules? Where to they come from and
what mechanism do they use to rule?

As you say, and we agree, the interaction must be two way, but no
external rules are required to govern the interaction, because both
mind and body are, on one level, the same thing (essentially) and
symmetrically anomalously opposite things on another level
(existentially). This symmetry recapitulates the division itself, as
the essential level is monadic and undifferentiated (like a dream
which freely mixes literal and metaphorical realism) and the
existential level is the tail end, where head and tail appear to be
strictly delineated. It's not just a simple fold of dualism, it's
[monism/(monism/dualism)], and it is involuted as well, so that the
brain and body are characters in our life while our lives exist
through the vehicle of the brain.

>
>
>
> > > > > B. that matter and energy follow natural laws,
>
> > > > No, laws follow from our observation of natural matter and energy.
>
> > > You are mistaking our approximations and inferences concerning the
> > natural
> > > laws for the natural laws themselves.
>
> > No, you are mistaking the interaction of concretely real natural
> > phenomena with abstract principles which we have derived from
> > measurement and intellectual extension.
>
> Regardless of who is making the mistake, above you seem to agree with
> my premise that there are real natural phenomenon.

It's all real natural phenomena. Thermodynamics, electromagnetism,
general relativity, sensorimotive perception. All fundamental in the
universe, but sensorimotive is like the light source, the others are
reflections and shadows.

>
>
>
> > > Before there were any humans, or any
> > > life, there must have been laws that the universe obeyed to reach the
> > point
> > > where Earth formed and life could develop.
>
> > Before there was matter, there were no laws that the universe obeyed
> > pertaining to matter, just as there were no laws of biology before
> > biology.
>
> This is an interesting way of looking at things: that the capabilities of
> natural phenomenon change as it develops more and more complex states of
> being. However, I think the potentiality for those capabilities was there
> from the beginning, and the determination of whether or not such
> potentialities existed in the primordial universe could in theory, have
> been made by a sufficiently great intelligence that had a proper
> understanding of the natural phenomenon.

If we see our understanding of time and causality as nothing but a
consequence of our participation-perception within the universe as an
instantiation of particular senses, then the 'beginning' of the
universe might be the same thing as the 'end' of the universe
objectively speaking. The novelty and capability are emerging from the
middle, as a diffraction between the two ends of bottom-up nothingness
and top-down everythingness. It's only our ontological bias in being
heads of a human tail that we see the chain of causality beneath us as
literal and clear, and the possible futures as obscured. I agree,
these potentials have to exist 'in the beginning', but I don't see
them as laws so much as lowest common denominator sense protocols,
which maybe have some aspects that are eternal and autopoietic, but
among them is the eternal aspect of generating novelty. Sense is like
an Ouroboros of universally consistent law and irreducible novelty. A
head full of Tao and a tail that wags in physics.

>
> > The universe makes laws by doing. It isn't only a disembodied
> > set of invisible laws which creates obedient bodies.
>
> What did the universe have to do to set the speed of light?

I think that the speed of light is really the latency of space. If you
have an outermost inertial frame of 'everythingness-nothingness' then
it is the ultimate background. Any deviation from that - ie, when some
part of that (essential) totality is masked (existentially) into a
temporal event-experience that casts a spatial-material shadow, those
shadows and experiences have a natural sequence which reflects their
order in relation to the totality. The object-event has a
spatiotemporal MAC address in the inside and an IP address on the
outside (figuratively...I don't think they have a literal number,
rather the sense relation is the irreducible reality, but we can
understand it using arithmetic triangulation). The speed of light
then, or c, is not a speed at all, it is absolute velocity itself.
Experience ('energy' or 'signal' received/detected/projected) without
being condensed as matter (volumes of relatively static phenomenology
in space). All the universe has to do to set the speed of light is to
slow part of itself down long enough for it to see where it came from
and know the difference.

>
> > Laws are not
> > primordial.
>
> If not laws, then what?

Sense. Not a perfect word, but not bad I think: See my post on
defending sense: http://s33light.org/post/24159233874


>
> > You have to have some kind of capacity to sense and make
> > sense before any kind of regularity of pattern can be established.
>
> You might need sense to notice the pattern, but patterns exist that we are
> unaware of. If this were not the case, there would be no room for
> discovery.

Absolutely there are patterns we as human beings are unaware of, but
there can't be a pattern that nothing whatsoever is aware of or can
detect or interact with.

>
> > Something has to be able to happen in the first place before you can
> > separate out what can happen under which conditions. The reality of
> > something being able to happen - experience - possibility - prefigures
> > all other principles.
>
> I'm not opposed to the idea that possibility or experience could in some
> sense be more fundamental, but I don't see how this could change the fact
> that we observe matter and energy to always follow certain rules, and find
> evidence (when we look at stars and galaxies very far away) that these laws
> have been in effect long before life on Earth arose.

I don't deny the observations of physics at all, I just think that we
are looking only at half of the story and assuming the other half must
be the same. All I'm saying is that the other half is the same on one
level, but the opposite on the other. Just like we use an array of
meaningless colored pixels to produce visual media, the semantic
experience through time is orthogonal to the a-signifying topology in
space. Think of the universe as a giant TV screen where each pixel is
a tiny TV screen also, each with it's own remote control.

>
> > > Do you agree that such natural
> > > laws exist (regardless of our human approximations of them)?
>
> > No.
>
> So you deny there are any natural laws? This might explain why some people
> find it difficult to carry on conversations with you.

I believe that we infer natural laws based on our perceptions of the
world of our body, and that those are certainly reliable for our body,
but my imagination seems to obey no natural laws other than sense-
motive conceivability. My imagination could conceive of physics and
physical laws, but physics can only conceive of toy models of
imagination.

>
> > It has nothing to do with human approximations though. If an
> > audience cheers it is not because there is a law of cheering they are
> > following, it is because they personally are participating in a
> > context of sense and motive which they and their world mutually push
> > and pull. The understanding of when cheering happens and under what
> > conditions it can be produced is an a posterior abstraction. We can
> > call it a law, and indeed, it is highly regular and useful to think of
> > it that way, but ultimately the law itself is nothing. It is a set of
> > meta-observations about reality,
>
> Are our observations not reflections of something that is true?

Sure, but there are interior observations which reflect truths also.
We can explain both with interior truths, but exterior truths can't
really explain anything, they can only account for things.

>
> > not an ethereal authoritative core
> > around which concrete reality constellates and obeys. Laws come from
> > within. Human laws from within humans, atomic laws from within atoms,
> > etc.
>
> So then natural laws come from nature. Earlier you said natural laws don't
> exist.

They don't exist, they insist. They are not imposed externally, the
external behaviors are the spacetime embodiment (temporally and
qualitatively flattened) sense experiences which accumulate as habits
through timespace (signficance, inertial frames).

>
>
>
> > > > > C. that these laws are describable in mathematical terms
>
> > > > You have jumped from physics to abstraction. It's like saying 'I have
> > > > a rabbit > rabbits act like rabbits > Bugs Bunny is modeled after the
> > > > behavior of rabbits > Bugs Bunny is a rabbit'.
>
> > > I haven't jumped there yet. All "C" says is that there exists some
> > formal
> > > system that is capable of describing the natural laws as they are. You
> > may
> > > accept or reject this. If you reject this, simply say so and provide
> > some
> > > justification if you have one.
>
> > The formal system doesn't exist until some sentient being
> > intentionally brings it into existence.
>
> That's fine. The question is whether you believe that a sentient being, in
> theory, is capable of developing such a formal system and in it, a
> codification of the natural laws that govern the behavior of brains and
> bodies?

Some behaviors yes definitely. Most behaviors maybe. All behaviors,
we'll have to see. There maybe a feedback loop, like a 'law of
conservation of mystery'. What we see in science now with things like
the increasing placebo effect and the rollback of antibiotic potency,
may suggest that in some way civilization will always be chasing it's
own tail, even if it's up a spiral staircase. Gödel says to me the
same kind of thing in a way, no model can completely describe that
which models it.

>
> > Bugs Bunny requires a
> > cartoonist to draw him. Bugs is a formal system that is capable of
> > describing rabbit behaviors as they are but he doesn't exist
> > 'there' ('he' insists 'here' instead).
>
> > > Note that I have not made any statement to the effect that "an abstract
> > > rabbit is the same as a physical rabbit", only that natural laws that the
> > > matter and energy in (a rabbit or any other physical thing) follow can be
> > > described.
>
> > You aren't factoring in the limitation of perception. Think of a young
> > child trying to imitate an accent from another language. To the child,
> > they perceive that they are doing a pretty good job of emulating
> > exactly how that way of speaking sounds. To an adult though,
> > especially one who is a native speaker of the language being imitated,
> > there is an obvious difference.
>
> If in every experiment we conduct, we find an accordance between physical
> experiments and predictions made by our acquired understanding of the
> natural laws then this would hold true to an emulation of a human brain.
> If there were any difference between reality and the the physical
> emulation, it would indicate to us that our model was incomplete.

The model is only complete in the 3-p bottom up sense. It can't
explain how the top-down 1-p subject can orchestrate many disparate
neurological changes simultaneously. At a certain point, the 3-p model
reaches levels of the system where the model doesn't say one way or
the other what the charge or polarization should be at any particular
time - whether that's at the quantum-microtubule level, or ion
channel, synapse, whatever, it is the gap which our preferences and
voluntary impulses use to drive the physical end of the computation.

>
> You seem to believe that no matter what progress is made, we can never
> understand all the natural laws that govern brains and bodies.

I don't know about never, but the problem with natural laws that
govern brains and bodies is that they don't address experience and
significance, which are the only purpose of brains and bodies. If the
physical laws do not bridge the Explanatory Gap and solve the Hard
Problem then they cannot be a complete understanding of natural laws
that govern brains and bodies. People have tattoos and piercings on
their bodies. Are those bodily changes predicted by physics? Can a
sufficiently complete model of the brain predict what tattoos they
will have? The self, the body, our lives, and the universe only seem
separate to us from our here-and-now business-as-usual 21st century
Western minded waking consciousness. Lose a little sleep for a few
days, and the separation gets wobbly. That is what the universe is
made of. Symmetric sense oscillations of novelty and recursion,
literal and metaphorical, inner and outer.

>
> > This is where we are in our
> > contemporary belief that we have accounted for physical forces. I
> > think that we are looking at a pre-Columbian map of the world and
> > trying to ignore the shadowy fringes of consciousness with names like
> > 'entanglement', 'dark energy', 'vacuum flux' etc. We are in the dark
> > ages of understanding consciousness as we have not yet discovered
> > sense. We use sense to try to make sense of a universe that we have
> > closed one eye to. Physics is a toy model of reality.
>
> > > > > D. that mathematics can be simulated to any degree of precision by
> > > > > algorithms
>
> > > > Precision only determines the probability that a particular detector
> > > > fails to detect the fraud of simulation over time. It says nothing
> > > > about the genuine equivalence of the simulation and the reality.
>
> > > It sounds like you accept that mathematics can be simulated to any degree
> > > of precision by algorithms, but your objection is that without absolutely
> > > perfect precision, the simulation will eventually diverge from the object
> > > being simulated in some noticeable way.
>
> > It depends what the algorithms are running on. If you use a physical
> > material that is ideal for precision and accuracy, then you are using
> > the worst possible material for biological sensation, which would need
> > to be optimized for volatility and ambiguity.
>
> Precision and accuracy are exactly what is needed to emulate the subatomic
> particles that compose our "ambiguous and volatile" brains.

They don't compose our brains, they compose molecules. Molecules
compose cells, cells compose tissues>organs>bodies. They don't reduce
completely. The cellular agenda is not described by individual
molecular interactions. Each layer has it's own perceptual inertial
frame of unique-but-sensible qualia. The higher up you go, the less
they reduce.

>
>
>
> > > I think this is a valid
> > > objection. However, I don't see this objection serving as the basis for
> > > Colin's argument against artificial general intelligence. Let's say we
> > > have a near perfect simulation of the physics of Einstein's brain running
> > > in a computer. It is near-perfect, rather than perfect, because due to
> > > rounding errors, it is predicted that there will be one neuron misfire
> > > every 50 years of operation. (Where a misfire is a neuron that fires
> > when
> > > the actual brain would not have, or doesn't fire when the actual brain
> > > would not have). Maybe this misfire causes the simulated brain to
> > develop
> > > a wrong idea when he would have otherwise had the right one, but who
> > would
> > > argue that this simulated Einstein brain is not intelligent? Perhaps it
> > > has an IQ of 159 instead of the 160 of the genuine brain, but it would
> > > still be consider an example of AGI. If you don't like the 1 error every
> > > 50 years, then you can double the amount of memory used in the floating
> > > point numbers (going from 64 bits to 128 bits per number), and then you
> > > make the system have a precision that is 2^64 times finer, so there would
> > > not be a deviation in the simulation during the whole life of the
> > universe.
>
> > That would be true if complexity was what gives rise to awareness,
>
> Remember, I am not talking about awareness above, only behavior.

Behavior isn't of much interest to me. That's the Easy Problem. Not my
department.

> Do you
> think any machine could approximate Einstein's behavior to such a degree
> that his friends, family, and colleagues could not tell the difference?

Yes, but only because the sum total of all perceptions of friends,
family, and colleagues are a fraction of the interior reality. Not
only that, but the perceptions of those on the outside cast a shadow
of misrepresentation of the interior reality. The Einstein that we
know includes a lot of projections conjured from the collective
unconscious that had little to do with the reality he experienced of
himself and his own life. Who we know is a postage stamp. Who his
friends new was a movie.

Craig
Reply all
Reply to author
Forward
0 new messages