Turing Machines


Craig Weinberg

Aug 14, 2011, 10:38:30 AM
to Everything List
http://www.youtube.com/watch?v=E3keLeMwfHY

Does the idea of this machine solve the Hard Problem of Consciousness,
or are qualia something more than ideas?
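
For anyone who has not seen one written out, here is a minimal sketch (Python, purely illustrative; the rule table is made up) of the abstract machine the video realizes physically: a finite rule table driving a read/write head back and forth over a tape.

    # Rules map (state, symbol) -> (symbol to write, head move, next state).
    # This toy table just appends a 1 to a block of 1s and halts.
    def run(rules, tape, state="A", halt="HALT", max_steps=1000):
        cells = dict(enumerate(tape))      # sparse tape; blank cells read as 0
        pos = 0
        for _ in range(max_steps):
            if state == halt:
                break
            write, move, state = rules[(state, cells.get(pos, 0))]
            cells[pos] = write
            pos += 1 if move == "R" else -1
        return [cells[i] for i in sorted(cells)]

    rules = {("A", 1): (1, "R", "A"),        # scan right over the existing 1s
             ("A", 0): (1, "R", "HALT")}     # write one more 1, then halt
    print(run(rules, [1, 1, 1]))             # -> [1, 1, 1, 1]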

Jason Resch

Aug 14, 2011, 11:50:20 AM
to everyth...@googlegroups.com
Craig,

Thanks for the video, it is truly impressive.

Jason




Bruno Marchal

Aug 14, 2011, 1:39:32 PM
to everyth...@googlegroups.com

Quite a cute little physical implementation of a Turing machine.

Read Sane04, it explains how a slight variant of that machine, or how
some program you can give to that machine, will develop qualia, and
develop a discourse about them similar to ours, so that you have to
treat them as zombies if you want to have them without qualia. They can
even understand that their solution is partial, and necessarily
partial. Their theories are clear, transparent and explicit, unlike
yours, where it seems to be hard to guess what you assume and what you
derive.

But then you admit yourself that you are not really trying to convey
your intuition, and so it looks just like "racism": "you will not tell
me that this (pointing at silicon or a sort of clock) can think?" I
don't take such a move as an argument.

Bruno


http://iridia.ulb.ac.be/~marchal/

Craig Weinberg

Aug 14, 2011, 6:09:03 PM
to Everything List
On Aug 14, 11:50 am, Jason Resch <jasonre...@gmail.com> wrote:
> Craig,
>
> Thanks for the video, it is truly impressive.
>
> Jason

Oh glad you liked it. I agree, what a beautifully engineered project.

Craig

Craig Weinberg

Aug 14, 2011, 6:18:20 PM
to Everything List
On Aug 14, 1:39 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 14 Aug 2011, at 16:38, Craig Weinberg wrote:
>
> >http://www.youtube.com/watch?v=E3keLeMwfHY
>
> > Does the idea of this machine solve the Hard Problem of Consciousness,
> > or are qualia something more than ideas?
>
> Quite cute little physical implementation of a Turing machine.

So good. Wow.

> Read Sane04, it explains how a slight variant of that machine, or how  
> some program you can give to that machine, will develop qualia, and  
> develop a discourse about them semblable to ours, so that you have to  
> treat them as zombie if you want have them without qualia. They can  
> even understand that their solution is partial, and necessary partial.  
> Their theories are clear, transparent and explicit,

They aren't clear to me at all. I keep trying to read it but I don't
get why feeling should ever result from logic, let alone be an
inevitable consequence of any particular logic.

>unlike yours where  
> it seems to be hard to guess what you assume, and what you derive.
>
> But then you admit yourself not trying to really convey your  
> intuition, and so it looks just like "racism": "you will not tell me  
> that this (pointing on silicon or a sort of clock) can think?" I don't  
> take such move as argument.

It might think, but you can't tell me that it thinks it's a clock or
that it's telling time, let alone that it has feelings about that or
free will to change it. I'm open to being convinced of that, but it
doesn't make sense that we would perceive a difference between biology
and physics if there weren't in fact some kind of significant
difference. I don't see that comp provides for such a difference.

Craig

Colin Geoffrey Hales

Aug 14, 2011, 7:29:04 PM
to everyth...@googlegroups.com

Great video ... a picture of simplicity....

 

Q. ‘What is it like to be a Turing Machine?’ = Hard Problem.

A. It’s like being the pile of gear in the video, NO MATTER WHAT IS ON THE TAPE.

 

Colin

Craig Weinberg

Aug 14, 2011, 8:06:44 PM
to Everything List
On Aug 14, 7:29 pm, Colin Geoffrey Hales <cgha...@unimelb.edu.au>
wrote:
> Great video ... a picture of simplicity....
>
> Q. 'What is it like to be a Turing Machine?" = Hard Problem.
>
> A. It's like being the pile of gear in the video, NO MATTER WHAT IS ON
> THE TAPE.

Why doesn't it matter what's on the tape? If I manually move the tape
under the scanner myself, will the gear as a whole know the
difference? If I dismantle the machine or turn it off will it care?

Craig

Colin Geoffrey Hales

Aug 14, 2011, 8:18:10 PM
to everyth...@googlegroups.com

Craig

Colin ============

Precisely. How can it possibly 'care'? If the machine was (1) spread across the entire solar system, or (2) miniaturized to the size of an atom, (3) massively parallel, (4) quantum, (5) digital, (6) analog or (7) whatever..... it doesn't matter.... it will always be "what it is like to be the physical object (1), (2), (3), (4), (5), (6), (7)", resp., no matter what is on the tape. I find the idea that the contents of the tape somehow magically deliver a first-person experience to be intellectually moribund.

The point is: what magic is assumed whereby the contents of the tape, fiddled with 'Turing-ly', deliver first-person content? Legions of folks out there will say "it's all information processing!", to which I add... the brain, which is the 100% origin of the only 'what it is like' description we know of, is NOT doing what the video does.

So.... good question. I wish others would ask it.

Colin


Jason Resch

Aug 15, 2011, 12:01:24 AM
to everyth...@googlegroups.com

Colin and Craig,

Imagine that God has such a machine on his desk, which he uses to compute the updated positions of each particle in some universe over each unit of Planck time.  Would you agree it is possible for the following to occur in the simulation:

1. Stars to coalesce due to gravity and begin fusion?
2. Simple biological molecules to form?
3. Simple single-celled life forms to evolve?
4. More complex multi-cellular life forms to evolve?
5. Intelligent life forms to evolve (at least as intelligent as humans)?
6. Intelligent life in the simulation to solve problems and develop culture and technology?
7. For that intelligent life to question qualia?
8. For that intelligent life to define the hard problem?
9. For those beings to create an interconnected network of computers and debate this same topic?

If you disagree with any of the numbered possibilities, please state which ones you disagree with.
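
To make the "updated positions each tick" idea concrete, here is one possible toy sketch (Python; the Newtonian pairwise-gravity rule, the Euler step and all names are illustrative assumptions, not a claim about what the true dynamics of such a machine would be):

    import itertools

    G = 6.674e-11   # gravitational constant, SI units

    def step(particles, dt):
        """One naive Euler update of velocities and positions under pairwise gravity."""
        # particles: list of dicts with mass 'm' (kg), 'pos' and 'vel' as [x, y, z]
        forces = [[0.0, 0.0, 0.0] for _ in particles]
        for i, j in itertools.combinations(range(len(particles)), 2):
            d = [particles[j]["pos"][k] - particles[i]["pos"][k] for k in range(3)]
            r2 = sum(c * c for c in d) or 1e-30          # avoid division by zero
            f = G * particles[i]["m"] * particles[j]["m"] / r2
            for k in range(3):
                forces[i][k] += f * d[k] / r2 ** 0.5
                forces[j][k] -= f * d[k] / r2 ** 0.5
        for p, fv in zip(particles, forces):
            for k in range(3):
                p["vel"][k] += fv[k] / p["m"] * dt
                p["pos"][k] += p["vel"][k] * dt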

Thanks,

Jason

Colin Geoffrey Hales

Aug 15, 2011, 1:13:21 AM
to everyth...@googlegroups.com


Colin and Craig,

Imagine that God has such a machine on his desk, which he uses to compute the updated positions of each particle in some universe over each unit of Planck time.  Would you agree it is possible for the following to occur in the simulation:

1. Stars to coalesce due to gravity and begin fusion?
2. Simple biological molecules to form?
3. Simple single-celled life forms to evolve?
4. More complex multi-cellular life forms to evolve?
5. Intelligent life forms to evolve (at least as intelligent as humans)?
6. Intelligent life in the simulation to solve problems and develop culture and technology?
7. For that intelligent life to question qualia?
8. For that intelligent life to define the hard problem?
9. For those beings to create an interconnected network of computers and debate this same topic?

If you disagree with any of the numbered possibilities, please state which ones you disagree with.


Colin =============

I don’t know about Craig...but I disagree with all of them.

Your premise, that the God’s-Desk Turing machine is relevant, is misplaced.

A) The Turing Machine in the video is inside this reality (our reality). It uses reality (whatever it is) to construct the Turing machine. All expectations of the machine are constructed on this basis. It is the only basis for expectations of the creation of AGI within our reality.

B) The Turing machine on your God’s desk is not that (A) at all. You could be right or wrong or merely irrelevant... and it would change nothing from the (A) perspective.

Until you de-confuse these 2 points of view, your 9 points have no meaning. The whole idea that computation is necessarily involved in intelligence is likewise taken along for the ride. There’s no (A)-style Turing computation going on in a brain. (A)-style Turing-computing a model of a brain is not a brain, for the same reason that (A)-style computing a model of fire is not fire.

To me,

(i) reality-as-computation

(ii) computation of a model of reality within the reality

(iii) to be made of/inside an actual reality, and able to make a model of it from within

(iv) an actual reality

are all different things. The video depicts a bit of a (iv) doing (iii), from the perspective of an observer within (iv). I’m not interested in simulating anything. I want to create artificial cognition (AGI) the same way artificial flight is flight.

Colin

 

Jason Resch

Aug 15, 2011, 1:56:35 AM
to everyth...@googlegroups.com
On Mon, Aug 15, 2011 at 12:13 AM, Colin Geoffrey Hales <cgh...@unimelb.edu.au> wrote:


Colin and Craig,

Imagine that God has such a machine on his desk, which he uses to compute the updated positions of each particle in some universe over each unit of Planck time.  Would you agree it is possible for the following to occur in the simulation:

1. Stars to coalesce due to gravity and begin fusion?
2. Simple biological molecules to form?
3. Simple single-celled life forms to evolve?
4. More complex multi-cellular life forms to evolve?
5. Intelligent life forms to evolve (at least as intelligent as humans)?
6. Intelligent life in the simulation to solve problems and develop culture and technology?
7. For that intelligent life to question qualia?
8. For that intelligent life to define the hard problem?
9. For those beings to create an interconnected network of computers and debate this same topic?

If you disagree with any of the numbered possibilities, please state which ones you disagree with.


Colin =============

I don’t know about Craig...but I disagree with all of them.

Your premise, that the God’s-Desk Turing machine is relevant, is misplaced.


It was to avoid any distraction on the topics of run time, resources, tape length, etc.
 

A) The Turing Machine in the video is inside this (our reality) reality. It uses reality (whatever it is) to construct the Turing machine. All expectations of the machine are constructed on this basis. It is the only basis for expectations of creation of AGI within our reality.


Does it matter where a Turing machine is for it to be a Turing machine?  Do you think it matters from the program's point of view what is providing the basis for its computation?

In any case, if you find it problematic then assume the Turing machine is run by some advanced civilization instead of on God's desk.
 

B) The Turing machine on your God’s desk is not that (A) at all. You could be right or wrong or merely irrelevant... and it would change nothing in (A) perspective.

Until you de-confuse these 2 points of view, your 9 points have no meaning.

Can we accurately simulate physical laws or can't we?  Before you answer, take a few minutes to watch this amazing video, which simulates the distribution of mass throughout the universe on the largest scales: http://www.youtube.com/watch?v=W35SYkfdGtw
(Note each point of light represents a galaxy, not a star)

 

The whole idea that computation is necessarily involved in intelligence is also likewise taken along for the ride. There’s no (A)-style  Turing computation going on in a brain.

Either the brain follows predictable laws or it does not.  If it does follow predictable laws, then a model of the brain's behavior can be created.  The future evolution of this model can then be determined by a Turing machine.  The evolution of the model would be as generally intelligent as the brain it was based upon.

You must believe in some randomness, magic, infinities or undecidability somewhere in the physics of this universe that are relevant to the behavior of the brain.  Otherwise there is no reason for such a model to not be possible.
 

(A)-style Turing-Computing a model of a brain is not a brain for the same reason (A)-style  computing a model of fire is not fire.

But the question here is whether or not the model is intelligent, not what "style" of intelligence it happens to be.  I don't see how the "style" of intelligence can make any meaningful difference.  The intelligence of the model could drive the same behaviors: it would react the same way in the same situations, answer the same questions with the same answers, and fill out the bubbles in a standardized test in the same way, so how is this "A-intelligence" different from "B-intelligence"?  I think you are manufacturing a difference where there is none.  (Does that make it an artificial difference?)
 

To me,

(i) reality-as-computation

                (ii) computation of a model of reality within the reality

(iii) to be made of/inside inside an actual reality, and able to make a model of it from within

(iv) an actual reality

are all different things. The video depicts a bit of a (iv) doing (iii), from the perspective of an observer within (iv). I’m not interested in simulating anything. I want to create artificial cognition (AGI) the same way artificial flight is flight.



Your belief that AGI is impossible to achieve through computers depends on at least one of the following propositions being true:
1. Accurate simulation of the chemistry or physics underlying the brain is impossible
2. Human intelligence is something beyond the behaviors manifested by the brain
Which one(s) do you think is/are correct and why?

Thanks,

Jason

Colin Geoffrey Hales

Aug 15, 2011, 3:06:46 AM
to everyth...@googlegroups.com

Read all your comments....cutting/snipping to the chase...

 

[Jason ]


Your belief that AGI is impossible to achieve through computers depends on at least one of the following propositions being true:
1. Accurate simulation of the chemistry or physics underlying the brain is impossible
2. Human intelligence is something beyond the behaviors manifested by the brain
Which one(s) do you think is/are correct and why?


Thanks,

Jason

 

[Colin]

I think you’ve misunderstood the position in ways that I suspect are widespread...

 

1) simulation of the chemistry or physics underlying the brain is impossible

It’s quite possible, just irrelevant! ‘Chemistry’ and ‘physics’ are terms for models of the natural world used to describe how natural processes appear to an observer inside the universe. You can simulate (compute physics/chem. models) until you turn blue, and be as right as you want: all you will do is predict how the universe appears to an observer.

 

This has nothing to do with creating  artificial intelligence.

 

Natural intelligence is a product of the actual natural world, and is not a simulation. Logic dictates that, just like the wheel, fire, steam power, light and flight, artificial cognition involves the actual natural processes found in brains. This is not a physics model of the brain implemented in any sense of the word. Artificial cognition will be artificial in the same way that artificial light is light. Literally. In brains we know there are action potentials coupling/resonating with a large unified EM field system, poised on/around the cusp of an unstable equilibrium. So real artificial cognition will have, you guessed it, action potential coupling resonating with a large unified EM field system, poised on/around the cusp of an unstable equilibrium. NOT a model of it computed on something. Such inorganic cognition will literally have an EEG signature like humans. If you want artificially instantiated fire you must provide fuel, oxygen and heat/spark. In the same way, if you want artificial cognition you must provide equivalent minimal set of necessary physical ingredients.

 

 

2. Human intelligence is something beyond the behaviors manifested by the brain

This sounds very strange to me. Human intelligence (an ability to observe and produce the models called ‘physics and chemistry’) resulted from the natural processes (as apparent to us) described by us as physics and chemistry, not the models called physics & chemistry. It’s confusingly self-referential...but logically sound.

 

= = = = = = = = = = = = = = = =

The fact that you posed the choices the way you did indicates a profound confusion of natural processes with computed models of natural processes. The process of artificial cognition that uses natural processes in an artificial context is called ‘brain tissue replication’. In replication there is no computing and no simulation. This is the way to explore/understand and develop artificial cognition.... in exactly the way we used artificial flight to figure out the physics of flight. We FLEW. We did not examine a physics model of flying (we didn’t have one at the time!). Does a computed physics model of flight fly? NO. Does a computed physics model of combustion burn? NO. Is a computed physics model of a hurricane a hurricane? NO.

 

So how can a computed physics model of cognition be cognition?

 

I hope you can see the distinction I am trying to make clear. Replication is not simulation.

 

Colin

 

 

 

Craig Weinberg

Aug 15, 2011, 10:18:39 AM
to Everything List
Jason & Colin, I'm going to just try to address everything in one
reply.

I agree with Colin pretty much down the line. My position assumes that
worldview as axiomatic and then adds some hypotheses on top of that.
Jason, your original list of questions is predicated on the very
assumption that I've challenged all along but can't seem to get you
(or others) to look at. I have experienced this many many times
before, so it doesn't surprise me and I can't be sure that it's even
possible for a mind that is so well versed in 'right hand' logic to be
able to shift into a left hand mode, even if it wanted to. I have not
seen it happen yet.

As Colin says, the assumption is that the logic behind the Turing
machine has anything to do with the reality of the world we are
modeling through it. If you make a universe based upon Turing
computations alone, there is no gravity or fusion, no biological
molecules, etc. There are only meaningless patterns of 1 and 0 through
which we can plot out whatever abstract coordinates we wish to keep
track of. It means nothing to us until it is converted to physical
changes which we can sense with our eyes, like ink on tape or
illuminated pixels on a screen.

On Aug 15, 3:06 am, Colin Geoffrey Hales <cgha...@unimelb.edu.au>
wrote:
> Read all your comments....cutting/snipping to the chase...
>
> [Jason ]
> Your belief that AGI is impossible to achieve through computers depends
> on at least one of the following propositions being true:
> 1. Accurate simulation of the chemistry or physics underlying the brain
> is impossible

You can simulate it as far as being able to model the aspects of its
behavior that you can observe, but you can't necessarily predict that
behavior over time, any more than you can predict what other people
might say to you today. The chemistry and physics of the brain are
partially determined by the experiences of the environment through the
body, and partially determined by the sensorimotive agenda of the
mind, which are both related to but not identical with the momentum
and consequences of its neurological biochemistry. All three are
woven together as an inseparable whole.

> 2. Human intelligence is something beyond the behaviors manifested by
> the brain

Any intelligence is something beyond the behaviors of matter. It's not
as if a Turing machine is squirting out omnipotent toothpaste; you are
inferring that there is some world being created (metaphysically)
which can be experienced somewhere else beyond the behavior of the pen
and tape, motors and guides, chips and wires.

> Which one(s) do you think is/are correct and why?
>
> Thanks,
>
> Jason
>
> [Colin]
>
> I think you've misunderstood the position in ways that I suspect are
> widespread...
>
> 1) simulation of the chemistry or physics underlying the brain is
> impossible
>
> It's quite possible, just irrelevant! 'Chemistry' and 'physics' are
> terms for models of the natural world used to describe how natural
> processes appear to an observer inside the universe. You can simulate
> (compute physics/chem. models) until you turn blue, and be as right as
> you want: all you will do is predict how the universe appears to an
> observer.
>
> This has nothing to do with creating artificial intelligence.
>
> Natural intelligence is a product of the actual natural world, and is
> not a simulation. Logic dictates that, just like the wheel, fire, steam
> power, light and flight, artificial cognition involves the actual
> natural processes found in brains. This is not a physics model of the
> brain implemented in any sense of the word. Artificial cognition will be
> artificial in the same way that artificial light is light. Literally. In
> brains we know there are action potentials coupling/resonating with a
> large unified EM field system, poised on/around the cusp of an unstable
> equilibrium.

Colin, here is where you can consider my idea of sensorimotive
electromagnetism if you want. What really is an EM field? What is it
made of and how do we know? My hypothesis is that we actually don't
know, and that the so-called EM field is a logical inference from a
causal phenomenon to which matter (organic molecules within a neuron in
this case) reacts. Instead, I think that it makes more sense as sense. A
sensorimotive synchronization shared amongst molecules and cells alike
(albeit in different perceptual frames of reference - PRIFs). If two
or more people share a feeling and they act in synchrony, from a
distance it could appear as if they are subject to an EM field which
informs them from outside their bodies and exists in between their
bodies when in fact the synchronization arises from within, through
semantic sharing of sense. It's reproduced or imitated locally in each
body as a feeling - the same feeling figuratively but separate
instantiations literally in separate brains (or cells, molecules, as
the case may be).

All of our inferences of electromagnetism come through observing the
behaviors of matter with matter. In order for EM fields to be a
literal phenomenon independent of atoms, it would have to be shown
that a vacuum can detect an EM event in a vacuum. That this is so
problematic underscores the primitive level of our assumptions about
EM. We can't not be matter and can't not use matter to detect EM so
that we aren't even consciously aware that we are ascribing waves and
arrows to the behavior of materials rather than just mathematical
particle-waves. Like the lines radiating out of a cartoon light bulb,
the radiance is a subjective experience within our PRIF as sighted
humans, not literal rays of nano projectiles striking our eyes.

The waveness or particleness is in the eye of the beholder because it
is literally the beholder which is actively experiencing the effect.
The effect is figuratively the same for each beholder within the same
PRIF although locally instantiated, but outside of the PRIF, such as
in lab experiments with photomultipliers, the effect is not only
instantiated separately on the local level, but on the figurative
level as well. A different metaphor is invoked if you are working with
beholders of literal sequential events compared to organic beholders
of a greater range of qualitative experience. Hence the light that we
see through our eyes is not just electromagnetism, it is the
sensorimotive content of our nervous system's visual sense.

>So real artificial cognition will have, you guessed it,
> action potential coupling resonating with a large unified EM field
> system, poised on/around the cusp of an unstable equilibrium. NOT a
> model of it computed on something. Such inorganic cognition will
> literally have an EEG signature like humans. If you want artificially
> instantiated fire you must provide fuel, oxygen and heat/spark. In the
> same way, if you want artificial cognition you must provide equivalent
> minimal set of necessary physical ingredients.

Yes, I get this as well. Others seem reluctant to commit to it, which
I see as sentimental-protectionist to the occidental perspective, and
not progressive-scientific.


> 2. Human intelligence is something beyond the behaviors manifested by
> the brain
> This sounds very strange to me. Human intelligence (an ability to
> observe and produce the models called 'physics and chemistry') resulted
> from the natural processes (as apparent to us) described by us as
> physics and chemistry, not the models called physics & chemistry. It's
> confusingly self-referential...but logically sound.

Yes, I get this too. The model of the Krebs cycle doesn't produce
anything by itself. It's just a mathematical-logical understanding
(which is itself a sensorimotive experience and not a physical
artifact which exists independently of our minds). Again, it is
instantiated separately in each of our minds, with a degree of
distortion, and is relevant only to our PRIF in relation to a
biochemical level PRIF. On the level of the molecules, there is no
Krebs cycle, just as the cycles of the stock market are not the actual
activities of human beings doing business.

> = = = = = = = = = = = = = = = =
>
> The fact that you posed the choices the way you did indicates a profound
> confusion of natural processes with computed models of natural
> processes.

To my mind it is profound confusion as well, but my ACME-OMMM model
predicts that it would be how it should look from the OMMM facing
perspective. The mind can relate to the world as computed models too,
it's just using the cognitive PRIF only instead of the full spectrum
of PRIFs accessible through our perception. Such a worldview is 'black
and white' relative to other perceptual modes, but it provides higher
resolution of linear literal functions. In its extreme, though, it has
no capacity to allow for metaphorical resonance. Which is a problem
for consciousness, since it's made of metaphorical resonance of
matter's interior.

>The process of artificial cognition that uses natural
> processes in an artificial context is called 'brain tissue replication'.
> In replication there is no computing and no simulation. This is the way
> to explore/understand and develop artificial cognition.... in exactly
> the way we used artificial flight to figure out the physics of flight.
> We FLEW. We did not examine a physics model of flying (we didn't have
> one at the time!). Does a computed physics model of flight fly? NO. Does
> a computed physics model of combustion burn? NO. Is a computed physics
> model of a hurricane a hurricane? NO.
>
> So how can a computed physics model of cognition be cognition?
>
> I hope you can see the distinction I am trying to make clear.
> Replication is not simulation.

Yep. You said it.

Craig

Jason Resch

Aug 15, 2011, 10:20:10 AM
to everyth...@googlegroups.com
On Mon, Aug 15, 2011 at 2:06 AM, Colin Geoffrey Hales <cgh...@unimelb.edu.au> wrote:

Read all your comments....cutting/snipping to the chase...

 


It is a little unfortunate you did not answer all of the questions.  I hope that you will answer both questions (1) and (2) below.
 

[Jason ]


Your belief that AGI is impossible to achieve through computers depends on at least one of the following propositions being true:
1. Accurate simulation of the chemistry or physics underlying the brain is impossible
2. Human intelligence is something beyond the behaviors manifested by the brain
Which one(s) do you think is/are correct and why?


Thanks,

Jason

 

[Colin]

I think you’ve misunderstood the position in ways that I suspect are widespread...

 

1) simulation of the chemistry or physics underlying the brain is impossible


Question 1:

Do you believe correct behavior, in terms of the relative motions of particles, is possible to achieve in a simulation?  For example, take the Millennium Run.  The simulation did not produce dark matter, but the representation of dark matter behaved like dark matter does in the universe (in terms of relative motion).  If we can simulate the motions of particles accurately, to predict where they will be at time T given where they are now, then we can peek into the simulation to see what is going on.

Please answer if you agree the above is possible.  If you do not, then I do not see how your viewpoint is consistent with the fact that we can build simulations like the Millennium Run, or test aircraft designs before building them, etc.

Question 2:

Given the above (that we can predict the motions of particles in relation to each other) then we can extract data from the simulation to see how things are going inside.  Much like we had to convert a large array of floating point values representing particle positions in the Millennium simulation in order to render a video of a fly-through.  If the only information we can extract is the predicted particle locations, then even though the simulation does not create EM fields or fire in this universe, we can at least determine how the different particles will be arranged after running the simulation.

Therefore, if we simulated a brain answering a question in a standardized test, we can peer into the simulation to determine in which bubble the graphite particles are concentrated (from the simulated pencil, controlled by the simulated brain in the model of particle interactions within an entire classroom).  Therefore, we have a model which tells us what an intelligent person would do, based purely on positions of particles in a simulation.

What is wrong with the above reasoning?  It seems to me if we have a model that can be used to determine what an intelligence would do, then the model could stand in for the intelligence in question.

Jason

Evgenii Rudnyi

Aug 15, 2011, 2:17:20 PM
to everyth...@googlegroups.com
On 15.08.2011 07:56 Jason Resch said the following:

...

> Can we accurately simulate physical laws or can't we? Before you
> answer, take a few minutes to watch this amazing video, which
> simulates the distribution of mass throughout the universe on the
> largest scales: http://www.youtube.com/watch?v=W35SYkfdGtw (Note each
> point of light represents a galaxy, not a star)

The answer to your question depends on what you mean by accurately and
what by physical laws. I am working with finite elements (more
specifically with ANSYS Multiphysics) and I can tell for sure that if
you speak of simulation of the universe, then the current simulation
technology does not scale. Nowadays one could solve a linear system
reaching dimension of 1 billion but this will not help you. I would say
that either contemporary numerical methods are deadly wrong, or
simulated equations are not the right ones. In this respect, you may
want to look at how simulation is done, for example, in Second Life.
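
As a toy illustration of what "solving a linear system" looks like in this setting (a generic SciPy sparse solve of a 1D Poisson-type problem; the size n is an arbitrary stand-in, and this has nothing to do with ANSYS itself):

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import spsolve

    n = 100_000  # illustrative; industrial multiphysics models reach many millions of unknowns
    # Tridiagonal, stiffness-like matrix for a 1D Poisson problem
    A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)
    x = spsolve(A, b)  # memory and time grow with n; 3D problems grow far faster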

Well, today numerical simulation is a good business (computer-aided
engineering is about a billion per year) and it continues to grow. Yet,
if you look in detail, then there are some areas where it can be
employed nicely and some where it is better to forget about simulation.

I understand that you speak "in principle". Yet, I am not sure if
extrapolating too far away from current knowledge makes sense, as
eventually we come to "philosophical controversies".

Evgenii

Craig Weinberg

Aug 15, 2011, 4:36:31 PM
to Everything List

Jason Resch

Aug 15, 2011, 5:42:42 PM
to everyth...@googlegroups.com
On Mon, Aug 15, 2011 at 1:17 PM, Evgenii Rudnyi <use...@rudnyi.ru> wrote:
On 15.08.2011 07:56 Jason Resch said the following:

...


Can we accurately simulate physical laws or can't we?  Before you
answer, take a few minutes to watch this amazing video, which
simulates the distribution of mass throughout the universe on the
largest scales: http://www.youtube.com/watch?v=W35SYkfdGtw (Note each
point of light represents a galaxy, not a star)

The answer on your question depends on what you mean by accurately and what by physical laws. I am working with finite elements (more specifically with ANSYS Multiphysics) and I can tell for sure that if you speak of simulation of the universe, then the current simulation technology does not scale. Nowadays one could solve a linear system reaching dimension of 1 billion but this will not help you. I would say that either contemporary numerical methods are deadly wrong, or simulated equations are not the right ones. In this respect, you may want to look how simulation is done for example in Second Life.

Well, today numerical simulation is a good business (computer-aided engineering is about a billion per year) and it continues to grow. Yet, if you look in detail, then there are some areas when it could be employed nicely and some where it better to forget about simulation.

I understand that you speak "in principle".

Yes, this is why in my first post, I said consider God's Turing machine (free from our limitations).  Then it is obvious that with the appropriate tape, a physical system can be approximated to any desired level of accuracy so long as it is predictable.  Colin said such models of physics or chemistry are impossible, so I hope he elaborates on what makes these systems unpredictable.

 
Yet, I am not sure if extrapolation too far away from the current knowledge makes sense, as eventually we are coming to "philosophical controversies".


We're already simulating pieces of brain tissue on the order of fruit fly brains (10,000 neurons).  Computers double in power/price every year, so 6 years later we could simulate mouse brains, in another 6 cat brains, and in another 6 human brains. (By 2030)
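
The arithmetic behind those 6-year steps, as a quick sketch (the per-species neuron counts are the round figures quoted later in this thread, roughly a factor of 100 apart; "doubling every year" is the assumption):

    import math

    neurons = {"fruit fly": 1e5, "mouse": 1e7, "cat": 1e9, "human": 1e11}
    # Each ~100x jump needs log2(100) ~ 6.6 doublings, i.e. roughly 6-7 years
    # if computing capacity doubles once a year.
    pairs = list(neurons.items())
    for (a, na), (b, nb) in zip(pairs, pairs[1:]):
        print(f"{a} -> {b}: {math.log2(nb / na):.1f} doublings")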

But all of this is an aside from the point that I was making regarding the power and versatility of Turing machines.  Those who think Artificial Intelligence is not possible with computers must show what about the brain is unpredictable or unmodelable.

Jason

Craig Weinberg

Aug 15, 2011, 6:23:46 PM
to Everything List
On Aug 15, 5:42 pm, Jason Resch <jasonre...@gmail.com> wrote:

> We're already simulating peices of brain tissue on the order of fruit fly
> brains (10,000 neurons). Computers double in power/price every year, so 6
> years later we could simulate mouse brains, another 6 we can simulate cat
> brains, and in another 6 we can simulate human brains. (By 2030)

If you have a chance to listen and compare the following:

http://www.retrobits.net/atari/downloads/samg.mp3 Done in 1982 with a
program 6k in size. Six. thousand. bytes. on the Atari BASIC operating
system that was 8k ROM.

http://www.acapela-group.com/text-to-speech-interactive-demo.html
(for a side-by-side comparison, paste:

Four score and seven years ago our fathers brought forth on this
continent, a new nation, conceived in Liberty, and dedicated to the
proposition that all men are created equal.

into the text box and choose English (US) - Ryan for the voice.)

So in 29 years of computing progress, on software that is orders of
magnitude more complex and resource-heavy, we can definitely hear a
strong improvement; however, at this rate, in another 30 years, we are
still not going to have anything that sounds convincingly like natural
speech. This is just mapping vocal cord vibrations to digital logic -
a minuscule achievement compared to mapping even the simplest
neurotransmitter interactions. Computers double in power/price, but
they also probably halve in efficiency/memory. It takes longer now to
boot up and shut down the computer, longer to convert a string of text
into voice.

Like CGI, despite massive increases in computing power, it still only
superficially resembles what it's simulating. IMO, there has been
little or no ground gained even in simulating the appearance of genuine
feeling, let alone in producing something which itself feels.

Craig

Craig Weinberg

Aug 15, 2011, 6:22:27 PM
to Everything List
On Aug 15, 5:42 pm, Jason Resch <jasonre...@gmail.com> wrote:

> We're already simulating peices of brain tissue on the order of fruit fly
> brains (10,000 neurons).  Computers double in power/price every year, so 6
> years later we could simulate mouse brains, another 6 we can simulate cat
> brains, and in another 6 we can simulate human brains. (By 2030)

Jason Resch

Aug 15, 2011, 7:18:40 PM
to everyth...@googlegroups.com
On Mon, Aug 15, 2011 at 5:22 PM, Craig Weinberg <whats...@gmail.com> wrote:
On Aug 15, 5:42 pm, Jason Resch <jasonre...@gmail.com> wrote:

> We're already simulating peices of brain tissue on the order of fruit fly
> brains (10,000 neurons).  Computers double in power/price every year, so 6
> years later we could simulate mouse brains, another 6 we can simulate cat
> brains, and in another 6 we can simulate human brains. (By 2030)

If you have a chance to listen and compare the following:

http://www.retrobits.net/atari/downloads/samg.mp3  Done in 1982 with a
program 6k in size. Six. thousand. bytes. on the Atari BASIC operating
system that was 8k ROM.

http://www.acapela-group.com/text-to-speech-interactive-demo.html
(for side by side comparison paste:


Try this one, it is among the best I have found:
http://www.ivona.com/online/editor.php

 
Four score and seven years ago our fathers brought forth on this
continent, a new nation, conceived in Liberty, and dedicated to the
proposition that all men are created equal.

into the text box and choose English (US) - Ryan for the voice.

So in 29 years of computing progress, on software that is orders of
magnitude more complex and resource-heavy, we can definitely hear a
strong improvement, however, at this rate, in another 30 years, we are
still not going to have anything that sounds convincingly like natural
speech.

I think you will be surprised by the progress of the next 30 years.
 
This is just mapping vocal chord vibrations to digital logic -
a miniscule achievement compared to mapping even the simplest
neurotransmitter interactions. Computers double in power/price, but
they also probably halve in efficiency/memory. It takes longer now to
boot up and shut down the computer, longer to convert a string of text
into voice.

Lines of code (code complexity) have been found to grow even more quickly than Moore's law.  (At least in the example of Microsoft Word that I read about at one point.)
 

Like CGI, despite massive increases in computing power, it still only
superficially resembles what it's simulating. IMO, there has been
little or no ground even in simulating the appearance of genuine
feeling, let alone in producing something which itself feels.


That is the property of exponential processes and progress: looking back, the curve seems flat; look ahead to where it is going and you'll see an overwhelming spike.

Have you seen the recent documentary "Transcendent Man"?

You seem to accept that computing power is doubling every year.  The fruit fly has 10^5 neurons, a mouse 10^7, a cat 10^9, and a human 10^11.  It's only a matter of time (and not that much) before a $10 thumb drive will have enough memory to store a complete mapping of all the neurons in your brain.  People won't need to freeze themselves to be immortal at that point.
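
A back-of-the-envelope sketch of the storage side (every figure here is an assumption for illustration only):

    neurons = 1e11               # order-of-magnitude human neuron count
    bytes_per_neuron = 100       # assumed size of a per-neuron record
    total_tb = neurons * bytes_per_neuron / 1e12
    print(f"~{total_tb:.0f} TB")  # ~10 TB under these assumptions; recording every
                                  # synapse would raise this by orders of magnitude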

Jason

Colin Geoffrey Hales

Aug 15, 2011, 8:21:06 PM
to everyth...@googlegroups.com

On Mon, Aug 15, 2011 at 2:06 AM, Colin Geoffrey Hales <cgh...@unimelb.edu.au> wrote:

Read all your comments....cutting/snipping to the chase...

It is a little unfortunate you did not answer all of the questions.  I hope that you will answer both questions (1) and (2) below.

 

Yeah sorry about that... I’m really pressed at the moment.

 

[Jason ]


Your belief that AGI is impossible to achieve through computers depends on at least one of the following propositions being true:
1. Accurate simulation of the chemistry or physics underlying the brain is impossible
2. Human intelligence is something beyond the behaviors manifested by the brain
Which one(s) do you think is/are correct and why?


Thanks,

Jason

 

[Colin]

I think you’ve misunderstood the position in ways that I suspect are widespread...

 

1) simulation of the chemistry or physics underlying the brain is impossible


Question 1:

Do you believe correct behavior, in terms of the relative motions of particles is possible to achieve in a simulation? 

 

[Colin]

 

YES, BUT only if you simulate the entire universe. Meaning you already know everything, so why bother?

 

So NO, in the real practical world of computing an agency X that is ignorant of NOT_X.

 

For a computed cognitive agent X, this will come down to how much the natural processes of NOT_X (the external world) involve themselves in the natural processes of X.

 

I think there is a nonlocal direct impact of NOT_X on the EM fields inside X. The EM fields are INPUT, not OUTPUT.

But this will only be settled experimentally. I aim to do that.

 

For example, take the example of the millennium run.  The simulation did not produce dark matter, but the representation of dark matter behaved like dark matter did in the universe (in terms of relative motion).  If we can simulate accurately the motions of particles, to predict where they will be in time T given where they are now, then we can peek into the simulation to see what is going on.

Please answer if you agree the above is possible.  If you do not, then I do not see how your viewpoint is consistent with the fact that we can build simulations like the millenium run, or test aircraft designs before building them, etc.

Question 2:

Given the above (that we can predict the motions of particles in relation to each other) then we can extract data from the simulation to see how things are going inside.  Much like we had to convert a large array of floating point values representing particle positions in the Millennium simulation in order to render a video of a fly-through.  If the only information we can extract is the predicted particle locations, then even though the simulation does not create EM fields or fire in this universe, we can at least determine how the different particles will be arranged after running the simulation.

Therefore, if we simulated a brain answering a question in a standardized test, we can peer into the simulation to determine in which bubble the graphite particles are concentrated (from the simulated pencil, controlled by the simulated brain in the model of particle interactions within an entire classroom).  Therefore, we have a model which tells us what an intelligent person would do, based purely on positions of particles in a simulation.

What is wrong with the above reasoning?  It seems to me if we have a model that can be used to determine what an intelligence would do, then the model could stand in for the intelligence in question.

 

[Colin]

I think I already answered this. You can simulate a human if you already know everything, just like you can simulate flight if you simulate the environment you are flying in. In the equivalent case applied to human cognition, you have to simulate the entire universe in order that the simulation is accurate. But we are trying to create an artificial cognition that can be used to find out about the universe outside the artificial cognition ... like humans, you don’t know what’s outside...so you can’t do the simulation. The reasoning fails at this point, IMO.

 

The above issue about the X/NOT_X interrelationship stands, however.

 

The solution is: there is/can be no simulation in an artificial cognition. It has to use the same processes a brain uses: literally. This is the replication approach.

 

Is it really such a big deal that you can’t get AGI with computation? Who cares? The main thing is we can do it using replication. We are in precisely the same position the Wright Bros were when making artificial flight.

 

This situation is kind of weird. Insisting that simulation/computation is the only way to solve a problem is like saying ‘all buildings must be constructed out of paintings of bricks and only people doing it this way will ever build a building.’ For 60 years every building made like this has fallen down.

 

Meanwhile I want to build a building out of bricks, and I have to justify my position?

 

Very odd.

 

Colin

 

I literally just found out my PhD examination passed! Woohoo!

So that’s .....

 

Very odd.

 

Dr. Colin

:-)

 

Jason Resch

Aug 15, 2011, 10:08:57 PM
to everyth...@googlegroups.com
On Mon, Aug 15, 2011 at 7:21 PM, Colin Geoffrey Hales <cgh...@unimelb.edu.au> wrote:

On Mon, Aug 15, 2011 at 2:06 AM, Colin Geoffrey Hales <cgh...@unimelb.edu.au> wrote:

Read all your comments....cutting/snipping to the chase...

It is a little unfortunate you did not answer all of the questions.  I hope that you will answer both questions (1) and (2) below.

 

Yeah sorry about that... I’m really pressed at the moment.


No worries.
 

 

[Jason ]


Your belief that AGI is impossible to achieve through computers depends on at least one of the following propositions being true:
1. Accurate simulation of the chemistry or physics underlying the brain is impossible
2. Human intelligence is something beyond the behaviors manifested by the brain
Which one(s) do you think is/are correct and why?


Thanks,

Jason

 

[Colin]

I think you’ve misunderstood the position in ways that I suspect are widespread...

 

1) simulation of the chemistry or physics underlying the brain is impossible


Question 1:

Do you believe correct behavior, in terms of the relative motions of particles is possible to achieve in a simulation? 

 

[Colin]

 

YES, BUT Only if you simulate the entire universe. Meaning you already know everything, so why bother?

 


Interesting idea.  But do you really think the happenings of some asteroid floating in interstellar space in the Andromeda galaxy make any difference to your intelligence?  Could we get away with only simulating the light cone for a given mind instead of the whole universe?
 

So NO, in the real practical world of computing an agency X that is ignorant of NOT_X.

 

For a computed cognitive agent X, this will come down to how much impact the natural processes of NOT_X (the external world) involves itself in the natural processes of X.

 

I think there is a nonlocal direct impact of NOT_X on the EM fields inside X. The EM fields are INPUT, not OUTPUT.

But this will only be settled experimentally. I aim to do that.


I think I have a faint idea of what you are saying, but it is not fully clear.  Are you hypothesizing there are non-local effects between every particle in the universe which are necessary to explain the EM fields, and these EM fields are necessary for intelligent behavior?
 

 

For example, take the example of the millennium run.  The simulation did not produce dark matter, but the representation of dark matter behaved like dark matter did in the universe (in terms of relative motion).  If we can simulate accurately the motions of particles, to predict where they will be in time T given where they are now, then we can peek into the simulation to see what is going on.

Please answer if you agree the above is possible.  If you do not, then I do not see how your viewpoint is consistent with the fact that we can build simulations like the millenium run, or test aircraft designs before building them, etc.

Question 2:

Given the above (that we can predict the motions of particles in relation to each other) then we can extract data from the simulation to see how things are going inside.  Much like we had to convert a large array of floating point values representing particle positions in the Millennium simulation in order to render a video of a fly-through.  If the only information we can extract is the predicted particle locations, then even though the simulation does not create EM fields or fire in this universe, we can at least determine how the different particles will be arranged after running the simulation.

Therefore, if we simulated a brain answering a question in a standardized test, we can peer into the simulation to determine in which bubble the graphite particles are concentrated (from the simulated pencil, controlled by the simulated brain in the model of particle interactions within an entire classroom).  Therefore, we have a model which tells us what an intelligent person would do, based purely on positions of particles in a simulation.

What is wrong with the above reasoning?  It seems to me if we have a model that can be used to determine what an intelligence would do, then the model could stand in for the intelligence in question.

 

[Colin]

I think I already answered this. You can simulate a human if you already know everything,


We would need to know everything to be certain it is an accurate simulation, but we don't need to know everything to attempt to build a model based on our current knowledge.  Then see whether or not it works.  If the design fails, then we are missing something; if it does work like a human mind does, then it would appear we got the important details right.
 

just like you can simulate flight if you simulate the environment you are flying in.


But do we need to simulate the entire atmosphere in order to simulate flight, or just the atmosphere in the immediate area around the surfaces of the plane?  Likewise, it seems we could take shortcuts in simulating the environment surrounding a mind and get the behavior we are after.
 

In the equivalent case applied to human cognition, you have to simulate the entire universe in order that the simulation is accurate. But we are trying to create an artificial cognition that can be used to find out about the universe outside the artificial cognition ... like humans, you don’t know what’s outside...so you can’t do the simulation.


Why couldn't we simulate a space station, with a couple of intelligent agents on it, and place that space station in a finite volume of vacuum in which after particles pass a certain point we stop simulating them?  They would see no stars, but I don't know why seeing stars would be necessary for intelligence.
 

The reasoning fails at this point, IMO.


The idea that something outside this universe is necessary to explain the goings on in this universe is like the idea that an invisible undetectable (from inside this universe) soul exists and is necessary to explain why some things are conscious while others are not.

If something is truly outside the universe then it can't make a difference within this universe.  Are you suggesting the intervention of forces outside this universe determine whether or not a process can be intelligent?
 

 

The above issue about the X/NOT_X interrelationship stands, however.

 

The solution is: there is/can be no simulation in an artificial cognition. It has to use the same processes a brain uses: literally. This is the replication approach.

 


If we replicate the laws of physics in a simulation, then a brain in that simulation is a replication of a real physical brain, is it not?
 

Is it really such a big deal that you can’t get AGI with computation?


It would be a very surprising theoretical result.
 

Who cares? The main thing is we can do it using replication.



What is the difference between simulation and replication?  Perhaps all our disagreement stems from this difference in definitions.
 

We are in precisely the same position the Wright Bros were when making artificial flight.

 

This situation is kind of weird. Insisting that simulation/computation is the only way to solve a problem is like saying ‘all buildings must be constructed out of paintings of bricks and only people doing it this way will ever build a building.’. For 60 years every building made like this falls down.


It's not that all brains are computers; it's that the evolution of all finite processes can be determined by a computer.  There is a subtle difference between saying the brain is a computer and saying a computer can determine what a brain would do.

I think your analogy is a little off.  It is not that proponents of strong AI suggest that houses need to be made of paintings of bricks; it is that the anti-strong-AI position suggests that there are some bricks whose image cannot be depicted by a painting.

A process that cannot be predicted by a computer is like a sound that cannot be replicated by a microphone, or an image that can't be captured by a painting or photograph.  It would be very surprising for such a thing to exist.
 

 

Meanwhile I want to build a building out of bricks, and I have to justify my position?

 


You can build your buildings out of bricks, but don't tell the artists that it is impossible for some bricks to be painted (or that they have to paint every brick in the universe for their painting to look right!), unless you have some reason or evidence why that would be so.

 

Very odd.

 

Colin

 

I literally just found out my PhD examination passed ! Woohoo!

So that’s .....

 

Very odd.

 

Dr. Colin

:-)

 



Congratulations! :-)

Jason

meekerdb

Aug 15, 2011, 10:22:03 PM
to everyth...@googlegroups.com
On 8/15/2011 4:18 PM, Jason Resch wrote:
> You seem to accept that computing power is doubling every year. The
> fruit fly has 10^5 neurons, a mouse 10^7, a cat 10^9, and a human
> 10^11. It's only a matter of time (and not that much) before a $10
> thumb drive will have enough memory to store a complete mapping of all
> the neurons in your brain. People won't need to freeze themselves to
> be immortal at that point.

But they'll have to be rich enough to afford super-computer time if they
want to really live. :-)

Brent

Jason Resch

Aug 15, 2011, 10:43:52 PM
to everyth...@googlegroups.com
I am more worried for the biologically handicapped in the future.  Computers will get faster; brains won't.  By 2029, it is predicted that $1,000 worth of computer will buy a human brain's worth of computational power.  15 years later, you can get 1,000 times the human brain's power for $1,000.  Imagine: the simulated get to experience 1 century for each month the humans with biological brains experience.  Who will really be alive then?
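
The time arithmetic there, spelled out (taking the quoted 1,000x factor at face value):

    speedup = 1000                      # simulated mind running 1,000x faster
    days_per_month = 30
    subjective_years = speedup * days_per_month / 365.25
    print(f"~{subjective_years:.0f} subjective years per biological month")  # ~82, roughly a century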

Jason


meekerdb

Aug 15, 2011, 11:32:59 PM
to everyth...@googlegroups.com
On 8/15/2011 7:08 PM, Jason Resch wrote:

just like you can simulate flight if you simulate the environment you are flying in.


But do we need to simulate the entire atmosphere in order to simulate flight, or just the atmosphere in the immediate area around the surfaces of the plane?  Likewise, it seems we could take shortcuts in simulating the environment surrounding a mind and get the behavior we are after.

Why simulate?  Why not create a robot with sensors so it can interact with the natural environment?

Brent

Craig Weinberg

Aug 16, 2011, 12:23:15 AM
to Everything List
On Aug 15, 7:18 pm, Jason Resch <jasonre...@gmail.com> wrote:
> On Mon, Aug 15, 2011 at 5:22 PM, Craig Weinberg <whatsons...@gmail.com>wrote:

> Try this one, it is among the best I have found:http://www.ivona.com/online/editor.php

It's nicer, but still not significantly more convincing than the
oldest version to me.

> I think you will be surprised by the progress of the next 30 years.

That's exactly what I might have said 20 years ago. I could never have
prepared myself for how disappointing the future turned out to be, so
yes, if in 2041 we aren't living in a world that makes Idiocracy or
Soylent Green seem naively optimistic, then I will be pleasantly
surprised. If you compare the technological advances from 1890-1910 to
those of 1990-2010 I think you will see what I mean. We're inventing
cell phones that play games instead of replacements for cars,
electricity grids, moving pictures, radio, aircraft, etc etc.

> > This is just mapping vocal chord vibrations to digital logic -
> > a miniscule achievement compared to mapping even the simplest
> > neurotransmitter interactions. Computers double in power/price, but
> > they also probably halve in efficiency/memory. It takes longer now to
> > boot up and shut down the computer, longer to convert a string of text
> > into voice.
>
> Lines of code (code complexity) has been found to grow even more quickly
> than Moore's law.  (At least in the example of Microsoft Word that I read
> about at one point)

Exactly. There isn't an exponential net improvement.

> > Like CGI, despite massive increases in computing power, it still only
> > superficially resembles what it's simulating. IMO, there has been
> > little or no ground even in simulating the appearance of genuine
> > feeling, let alone in producing something which itself feels.
>
> That is the property of exponential processes and progress: looking back, the
> curve seems flat; look to see where it is going and you'll see an
> overwhelming spike.
>
> Have you seen the recent documentary "Transcendent Man"?
>
> You seem to accept that computing power is doubling every year.  The fruit
> fly has 10^5 neurons, a mouse 10^7, a cat 10^9, and a human 10^11.  It's
> only a matter of time (and not that much) before a $10 thumb drive will have
> enough memory to store a complete mapping of all the neurons in your brain.
> People won't need to freeze themselves to be immortal at that point.

Look at the interface that we're using to have this conversation.
Hunching over a monitor and keyboard to type plain text. Using ">>>"
characters like it was 1975 being printed out on a dot matrix printer
over an acoustic coupler. The quantitative revolution has turned out
to be as much of a mirage as space travel. An ever receding promise
with ever shorter intervals of satisfaction. Our new toys are only fun
for a matter of days or weeks now before we feel them lacking.
Facebook means less interest in old friendships. Streaming music and
video means disposable entertainment. All of our appetites are dulled
yet amplified under the monotonous influence of infoporn on demand.
Sure, it has its consolations, but, to quote Jim Morrison, "No eternal
reward will forgive us now for wasting the dawn." We may not need to
freeze ourselves, but we will wish we had frozen some of our reasons
for wanting to be immortal.

Craig

Craig Weinberg

unread,
Aug 16, 2011, 12:28:25 AM8/16/11
to Everything List
On Aug 15, 8:21 pm, Colin Geoffrey Hales <cgha...@unimelb.edu.au>
wrote:
> On Mon, Aug 15, 2011 at 2:06 AM, Colin Geoffrey Hales

> The solution is: there is/can be no simulation in an artificial
> cognition. It has to use the same processes a brain uses: literally.
> This is the replication approach.
>
> Is it really such a big deal that you can't get AGI with computation?
> Who cares? The main thing is we can do it using replication. We are in
> precisely the same position the Wright Bros were when making artificial
> flight.
>
> This situation is kind of weird. Insisting that simulation/computation
> is the only way to solve a problem is like saying 'all buildings must be
> constructed out of paintings of bricks and only people doing it this way
> will ever build a building.'. For 60 years every building made like this
> falls down.
>
> Meanwhile I want to build a building out of bricks, and I have to
> justify my position?
>
> Very odd.

Y E S. You've nailed it.

> I literally just found out my PhD examination passed! Woohoo!
>
> So that's .....
>
> Very odd.

Congratulations Dr.!

Craig Weinberg

unread,
Aug 16, 2011, 12:48:39 AM8/16/11
to Everything List
On Aug 15, 10:08 pm, Jason Resch <jasonre...@gmail.com> wrote:

> It would be a very surprising theoretical result.

Only if you have a very sentimental attachment to the theory. It
wouldn't surprise me at all.

> > Who cares? The main thing is *we can do it using replication*.
>
> What is the difference between simulation and replication?  Perhaps all our
> disagreement stems from this difference in definitions.

The difference is that simulation assumes that something can
really be something that it is not. Replication doesn't assume that,
but rather says that you can only be sure that something is what it
is.

> > We are in precisely the same position the Wright Bros were when making
> > artificial flight.
>
> > This situation is kind of weird. Insisting that simulation/computation is
> > the only way to solve a problem is like saying 'all buildings must be
> > constructed out of paintings of bricks and only people doing it this way
> > will ever build a building.'. For 60 years every building made like this
> > falls down.
>
> It's not that all brains are computers, it's that the evolution of all finite
> processes can be determined by a computer.  There is a subtle difference
> between saying the brain is a computer, and saying a computer can determine
> what a brain would do.
>
> I think your analogy is a little off.  It is not that proponents of strong
> AI suggest that houses need to be made of paintings of bricks, it is that
> the anti-strong-AI suggests that there are some bricks whose image cannot be
> depicted by a painting.

I have no problem with AI brick images making AI building images, but
an image is not a brick or a building.

> A process that cannot be predicted by a computer is like a sound that cannot
> be replicated by a microphone, or an image that can't be captured by a
> painting or photograph.  It would be very surprising for such a thing to
> exist.

That's where you're making a strawman of consciousness and awareness.
You're assuming that it's a 'process'. It isn't. Charge is not a
process, nor is mass. It's an experiential property of energy over
time. It is not like a sound or a microphone, it is the listener. Not
an image but the seer of images, the painter, the photographer. It
would be very surprising for such a thing to exist because it doesn't
ex-ist. It in-sists. It persists within. Within the brain, within
cells, within whales and cities, within microprocessors even but all
do not insist with the same bandwidth of awareness. The microprocessor
doesn't understand its program. If it did, it would make up a new one
by itself. If you pour water on your motherboard though, it will
figure out some very creative and unpredictable ways of responding.

> You can build your buildings out of bricks, but don't tell the artists that
> it is impossible for some bricks to be painted (or that they have to paint
> every brick in the universe for their painting to look right!), unless
> you have some reason or evidence why that would be so.

No, Colin is right. It's the strong AI position that is asserting that
painted bricks must be real if they are painted well enough. That's
your entire position. If you paint a brick perfectly, it can only be a
brick and not a zombie brick (painting). All we are pointing out is
that there is a difference between a painting of a brick and a brick,
and if you actually want the brick to function as a brick, the
painting isn't going to work, no matter how amazingly detailed the
painting is.

Craig

Craig Weinberg

unread,
Aug 16, 2011, 12:53:01 AM8/16/11
to Everything List
On Aug 15, 10:43 pm, Jason Resch <jasonre...@gmail.com> wrote:
> I am more worried for the biologically handicapped in the future.  Computers
> will get faster, brains won't.  By 2029, it is predicted $1,000 worth of
> computer will buy a human brain's worth of computational power.  15 years
> later, you can get 1,000 X the human brain's power for $1,000.  Imagine: the
> simulated get to experience 1 century for each month the humans with
> biological brains experience.  Who will really be alive then?

Speed and power are for engines, not brains. Good ideas don't come from
engines.

Craig

Colin Geoffrey Hales

unread,
Aug 16, 2011, 2:08:17 AM8/16/11
to everyth...@googlegroups.com

 

On 8/15/2011 7:08 PM, Jason Resch wrote:

just like you can simulate flight if you simulate the environment you are flying in.


But do we need to simulate the entire atmosphere in order to simulate flight, or just the atmosphere in the immediate area around the surfaces of the plane?  Likewise, it seems we could take shortcuts in simulating the environment surrounding a mind and get the behavior we are after.


Why simulate?  Why not create a robot with sensors so it can interact with the natural environment.

Brent

 

[Colin]

 

Hi Brent,

There seems to be another confusion operating here. What makes you think I am not creating a robot with sensors? What has this got to do with simulation?

 

1) Having sensors is not simulation. Humans have sensors, e.g. the retina.

2) The use of sensors does not connect the robot to the environment in any unique way. The incident photon could have come across the room or the galaxy. Nobody tells a human which, yet the brain sorts it out.

3) A robot brain based on replication uses sensors like any other robot.

4) What I am saying is that the replication approach will handle the sensors like a human brain handles sensors.

 

Of course we don’t have to simulate the entire universe to simulate flight. The fact is we simulate _some_ of the environment in order that flight simulation works. It’s a simulation. It’s not flight. This has nothing to do with the actual problem of real embedded embodied cognition of an unknown external environment by an AGI. You don’t know it! You are ‘cognising’ to find out about it. You can’t simulate it and the sensors don’t give you enough info. If a human supplies that info then you’re grounding the robot in the human’s cognition, not supplying the robot with its own cognition.

 

In replication there is no simulating going on! There are inorganic, artificially derived natural processes identical to what is going on in a natural brain. Literally. A brain has action potential comms. A brain has EM comms. Therefore a replicated brain will have the SAME action potentials mutually interacting with the same EM fields. The replicant chips will have an EEG/MEG signature like a human. There is no computing of anything. There is an inorganic version of the identical processes going on in a real brain.

 

I hope we’re closer to being on the same page.

 

Colin


Stathis Papaioannou

unread,
Aug 16, 2011, 3:22:29 AM8/16/11
to everyth...@googlegroups.com
On Tue, Aug 16, 2011 at 12:18 AM, Craig Weinberg <whats...@gmail.com> wrote:

> You can simulate it as far as being able to model the aspects of its
> behavior that you can observe, but you can't necessarily predict that
> behavior over time, any more than you can predict what other people
> might say to you today. The chemistry and physics of the brain are
> partially determined by the experiences of the environment through the
> body, and partially determined by the sensorimotive agenda of the
> mind, which are both related to but not identical with the momentum
> and consequences of its neurological biochemistry. All three are
> woven together as an inseparable whole.

If the brain does something not predictable by modelling its
biochemistry, that means it works by magic.


--
Stathis Papaioannou