--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.
Quite a cute little physical implementation of a Turing machine.
Read Sane04; it explains how a slight variant of that machine, or how
some program you can give to that machine, will develop qualia, and
develop a discourse about them similar to ours, so that you have to
treat them as zombies if you want to have them without qualia. They can
even understand that their solution is partial, and necessarily partial.
Their theories are clear, transparent and explicit, unlike yours, where
it seems to be hard to guess what you assume and what you derive.
But then you admit yourself that you are not really trying to convey your
intuition, and so it looks just like "racism": "you will not tell me
that this (pointing at silicon or a sort of clock) can think?" I don't
take such a move as an argument.
Bruno
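For readers who want to poke at one, the machine in the video can be sketched in a few lines. The toy simulator below and its one-state binary-increment program are illustrative only (they are not the video's actual state table):

```python
def run_tm(tape, state, head, rules, blank="_", max_steps=1000):
    """Run a Turing machine. rules maps (state, symbol) ->
    (symbol_to_write, move, next_state); returns the final tape."""
    tape = dict(enumerate(tape))          # sparse tape: cell index -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = tape.get(head, blank)
        write, move, state = rules[(state, sym)]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape[i] for i in cells).strip(blank)

# Binary increment: start on the rightmost bit, carry leftward.
rules = {
    ("inc", "1"): ("0", "L", "inc"),   # 1 + carry -> 0, carry continues left
    ("inc", "0"): ("1", "L", "halt"),  # 0 absorbs the carry
    ("inc", "_"): ("1", "L", "halt"),  # new most-significant bit
}
print(run_tm("1011", "inc", head=3, rules=rules))  # -> 1100
```

Everything about the machine's behaviour lives in the `rules` table; the loop itself, like the pile of gear in the video, is the same no matter what is on the tape.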
Great video ... a picture of simplicity....
Q. “What is it like to be a Turing Machine?” = Hard Problem.
A. It’s like being the pile of gear in the video, NO MATTER WHAT IS ON THE TAPE.
Colin
Craig
Colin ============
Precisely. How can it possibly 'care'? If the machine were (1) spread across the entire solar system, or (2) miniaturized to the size of an atom, (3) massively parallel, (4) quantum, (5) digital, (6) analog or (7) whatever..... it doesn't matter.... it will always be "what it is like to be the physical object (1), (2), (3), (4), (5), (6), (7)", respectively, no matter what is on the tape. I find the idea that the contents of the tape somehow magically deliver a first-person experience to be intellectually moribund.
The point is, what magic, assumed in the contents of the tape being fiddled with 'Turing-ly', delivers first-person content? Legions of folks out there will say "it's all information processing!", to which I add... the brain, which is the 100% origin of the only 'what it is like' description we know of, is NOT doing what the video does.
So.... good question. I wish others would ask it.
Colin
Colin and Craig,
Imagine that God has such a machine on his desk, which he uses to compute the updated positions of each particle in some universe over each unit of Planck time. Would you agree it is possible for the following to occur in the simulation:
1. Stars to coalesce due to gravity and begin fusion?
2. Simple biological molecules to form?
3. Simple single-celled life forms to evolve?
4. More complex multi-cellular life forms to evolve?
5. Intelligent life forms to evolve (at least as intelligent as humans)?
6. Intelligent life in the simulation to solve problems and develop culture and technology?
7. For that intelligent life to question qualia?
8. For that intelligent life to define the hard problem?
9. For those beings to create an interconnected network of computers and debate this same topic?
If you disagree with any of the numbered possibilities, please state which ones you disagree with.
Colin =============
I don’t know about Craig...but I disagree with all of them.
Your premise, that the God’s-Desk Turing machine is relevant, is misplaced.
A) The Turing Machine in the video is inside this (our reality) reality. It uses reality (whatever it is) to construct the Turing machine. All expectations of the machine are constructed on this basis. It is the only basis for expectations of creation of AGI within our reality.
B) The Turing machine on your God’s desk is not that (A) at all. You could be right or wrong or merely irrelevant... and it would change nothing from the (A) perspective.
Until you de-confuse these 2 points of view, your 9 points have no meaning. The whole idea that computation is necessarily involved in intelligence is likewise taken along for the ride. There’s no (A)-style Turing computation going on in a brain. (A)-style Turing-computing a model of a brain is not a brain, for the same reason that (A)-style computing a model of fire is not fire.
To me,
(i) reality-as-computation
(ii) computation of a model of reality within the reality
(iii) to be made of/inside an actual reality, and able to make a model of it from within
(iv) an actual reality
are all different things. The video depicts a bit of a (iv) doing (iii), from the perspective of an observer within (iv). I’m not interested in simulating anything. I want to create artificial cognition (AGI) the same way artificial flight is flight.
Colin
Read all your comments....cutting/snipping to the chase...
[Jason ]
Your belief that AGI is impossible to achieve through computers depends on at least one of the following propositions being true:
1. Accurate simulation of the chemistry or physics underlying the brain is impossible
2. Human intelligence is something beyond the behaviors manifested by the brain
Which one(s) do you think is/are correct and why?
Thanks,
Jason
[Colin]
I think you’ve misunderstood the position in ways that I suspect are widespread...
1) simulation of the chemistry or physics underlying the brain is impossible
It’s quite possible, just irrelevant! ‘Chemistry’ and ‘physics’ are terms for models of the natural world used to describe how natural processes appear to an observer inside the universe. You can simulate (compute physics/chem. models) until you turn blue, and be as right as you want: all you will do is predict how the universe appears to an observer.
This has nothing to do with creating artificial intelligence.
Natural intelligence is a product of the actual natural world, and is not a simulation. Logic dictates that, just like the wheel, fire, steam power, light and flight, artificial cognition involves the actual natural processes found in brains. This is not a physics model of the brain implemented in any sense of the word. Artificial cognition will be artificial in the same way that artificial light is light. Literally. In brains we know there are action potentials coupling/resonating with a large unified EM field system, poised on/around the cusp of an unstable equilibrium. So real artificial cognition will have, you guessed it, action potentials coupling/resonating with a large unified EM field system, poised on/around the cusp of an unstable equilibrium. NOT a model of it computed on something. Such inorganic cognition will literally have an EEG signature like humans'. If you want artificially instantiated fire you must provide fuel, oxygen and heat/spark. In the same way, if you want artificial cognition you must provide an equivalent minimal set of necessary physical ingredients.
2. Human intelligence is something beyond the behaviors manifested by the brain
This sounds very strange to me. Human intelligence (an ability to observe and produce the models called ‘physics and chemistry’) resulted from the natural processes (as apparent to us) described by us as physics and chemistry, not the models called physics & chemistry. It’s confusingly self-referential...but logically sound.
= = = = = = = = = = = = = = = =
The fact that you posed the choices the way you did indicates a profound confusion of natural processes with computed models of natural processes. The process of artificial cognition that uses natural processes in an artificial context is called ‘brain tissue replication’. In replication there is no computing and no simulation. This is the way to explore/understand and develop artificial cognition.... in exactly the way we used artificial flight to figure out the physics of flight. We FLEW. We did not examine a physics model of flying (we didn’t have one at the time!). Does a computed physics model of flight fly? NO. Does a computed physics model of combustion burn? NO. Is a computed physics model of a hurricane a hurricane? NO.
So how can a computed physics model of cognition be cognition?
I hope you can see the distinction I am trying to make clear. Replication is not simulation.
Colin
> Can we accurately simulate physical laws or can't we? Before you
> answer, take a few minutes to watch this amazing video, which
> simulates the distribution of mass throughout the universe on the
> largest scales: http://www.youtube.com/watch?v=W35SYkfdGtw (Note each
> point of light represents a galaxy, not a star)
The answer to your question depends on what you mean by "accurately" and
what by "physical laws". I am working with finite elements (more
specifically with ANSYS Multiphysics) and I can tell you for sure that if
you speak of simulation of the universe, then current simulation
technology does not scale. Nowadays one can solve a linear system
reaching a dimension of 1 billion, but this will not help you. I would say
that either contemporary numerical methods are dead wrong, or the
simulated equations are not the right ones. In this respect, you may
want to look at how simulation is done, for example, in Second Life.
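The scaling point can be made concrete with a toy version of the kind of system a finite-element code assembles. Nothing below comes from ANSYS; it is a minimal sketch using SciPy's conjugate-gradient solver on a 1-D stiffness matrix:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# 1-D Poisson stiffness matrix (tridiagonal, symmetric positive
# definite) -- the same *kind* of sparse system a finite-element
# package assembles, just tiny.
n = 1000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Iterative Krylov solve; info == 0 means the solver converged.
x, info = cg(A, b, maxiter=20 * n)
print(info, np.linalg.norm(A @ x - b))  # residual should be tiny
```

At dimension 10^9 the arithmetic per iteration is the easy part; memory traffic, conditioning and preconditioning dominate, which is the wall described above.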
Well, today numerical simulation is a good business (computer-aided
engineering is about a billion per year) and it continues to grow. Yet,
if you look in detail, there are some areas where it can be
employed nicely and some where it is better to forget about simulation.
I understand that you speak "in principle". Yet, I am not sure that
extrapolation too far away from current knowledge makes sense, as
eventually we come to "philosophical controversies".
Evgenii
On Aug 15, 5:42 pm, Jason Resch <jasonre...@gmail.com> wrote:
If you have a chance to listen and compare the following:
> We're already simulating pieces of brain tissue on the order of fruit fly
> brains (10,000 neurons). Computers double in power/price every year, so 6
> years later we could simulate mouse brains, another 6 we can simulate cat
> brains, and in another 6 we can simulate human brains. (By 2030)
http://www.retrobits.net/atari/downloads/samg.mp3 Done in 1982 with a
program 6k in size. Six. thousand. bytes. on the Atari BASIC operating
system that was 8k ROM.
http://www.acapela-group.com/text-to-speech-interactive-demo.html
(for side by side comparison paste:
Four score and seven years ago our fathers brought forth on this
continent, a new nation, conceived in Liberty, and dedicated to the
proposition that all men are created equal.
into the text box and choose English (US) - Ryan for the voice.)
So in 29 years of computing progress, on software that is orders of
magnitude more complex and resource-heavy, we can definitely hear a
strong improvement, however, at this rate, in another 30 years, we are
still not going to have anything that sounds convincingly like natural
speech.
This is just mapping vocal cord vibrations to digital logic -
a minuscule achievement compared to mapping even the simplest
neurotransmitter interactions. Computers double in power/price, but
they also probably halve in efficiency/memory. It takes longer now to
boot up and shut down the computer, longer to convert a string of text
into voice.
Like CGI, despite massive increases in computing power, it still only
superficially resembles what it's simulating. IMO, there has been
little or no ground gained even in simulating the appearance of genuine
feeling, let alone in producing something which itself feels.
On Mon, Aug 15, 2011 at 2:06 AM, Colin Geoffrey Hales <cgh...@unimelb.edu.au> wrote:
Read all your comments....cutting/snipping to the chase...
It is a little unfortunate you did not answer all of the questions. I hope that you will answer both questions (1) and (2) below.
Yeah sorry about that... I’m really pressed at the moment.
[Jason ]
Your belief that AGI is impossible to achieve through computers depends on at least one of the following propositions being true:
1. Accurate simulation of the chemistry or physics underlying the brain is impossible
2. Human intelligence is something beyond the behaviors manifested by the brain
Which one(s) do you think is/are correct and why?
Thanks,
Jason
[Colin]
I think you’ve misunderstood the position in ways that I suspect are widespread...
1) simulation of the chemistry or physics underlying the brain is impossible
Question 1:
Do you believe correct behavior, in terms of the relative motions of particles, is possible to achieve in a simulation?
[Colin]
YES, BUT only if you simulate the entire universe. Meaning you already know everything, so why bother?
So NO, in the real practical world of computing an agency X that is ignorant of NOT_X.
For a computed cognitive agent X, this will come down to how much impact the natural processes of NOT_X (the external world) involves itself in the natural processes of X.
I think there is a nonlocal direct impact of NOT_X on the EM fields inside X. The EM fields are INPUT, not OUTPUT.
But this will only be settled experimentally. I aim to do that.
For example, take the Millennium Run. The simulation did not produce dark matter, but the representation of dark matter behaved like dark matter does in the universe (in terms of relative motion). If we can accurately simulate the motions of particles, to predict where they will be at time T given where they are now, then we can peek into the simulation to see what is going on.
Please answer whether you agree the above is possible. If you do not, then I do not see how your viewpoint is consistent with the fact that we can build simulations like the Millennium Run, or test aircraft designs before building them, etc.
Question 2:
Given the above (that we can predict the motions of particles in relation to each other) then we can extract data from the simulation to see how things are going inside. Much like we had to convert a large array of floating point values representing particle positions in the Millennium simulation in order to render a video of a fly-through. If the only information we can extract is the predicted particle locations, then even though the simulation does not create EM fields or fire in this universe, we can at least determine how the different particles will be arranged after running the simulation.
Therefore, if we simulated a brain answering a question in a standardized test, we can peer into the simulation to determine in which bubble the graphite particles are concentrated (from the simulated pencil, controlled by the simulated brain in the model of particle interactions within an entire classroom). Therefore, we have a model which tells us what an intelligent person would do, based purely on positions of particles in a simulation.
What is wrong with the above reasoning? It seems to me if we have a model that can be used to determine what an intelligence would do, then the model could stand in for the intelligence in question.
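The "peek into the simulation" step can be sketched with a toy integrator. The numbers below are illustrative only (nothing like the Millennium Run's 10-billion-particle scale): predict the particle positions forward in time, then simply read the array.

```python
import numpy as np

# Two bodies under Newtonian gravity (G = 1), leapfrog integration.
pos = np.array([[0.0, 0.0], [1.0, 0.0]])   # heavy body, light body
vel = np.array([[0.0, 0.0], [0.0, 1.0]])   # light body on a circular orbit
mass = np.array([1.0, 1e-3])
dt = 1e-3

def accel(pos):
    r = pos[1] - pos[0]
    a = r / np.linalg.norm(r) ** 3          # inverse-square law, G = 1
    return np.array([mass[1] * a, -mass[0] * a])

for _ in range(int(2 * np.pi / dt)):        # roughly one orbital period
    vel += 0.5 * dt * accel(pos)            # kick
    pos += dt * vel                          # drift
    vel += 0.5 * dt * accel(pos)            # kick

# "Peek into the simulation": the state is just an array of positions.
print(pos[1])   # light body is back near where it started
```

The simulation creates no gravity in this universe, yet the extracted positions behave (in terms of relative motion) like orbiting bodies, which is exactly the sense in which the Millennium Run's representation of dark matter behaved like dark matter.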
[Colin]
I think I already answered this. You can simulate a human if you already know everything, just like you can simulate flight if you simulate the environment you are flying in. In the equivalent case applied to human cognition, you have to simulate the entire universe in order that the simulation is accurate. But we are trying to create an artificial cognition that can be used to find out about the universe outside the artificial cognition ... like humans, you don’t know what’s outside...so you can’t do the simulation. The reasoning fails at this point, IMO.
The above issue about the X/NOT_X interrelationship stands, however.
The solution is: there is/can be no simulation in an artificial cognition. It has to use the same processes a brain uses: literally. This is the replication approach.
Is it really such a big deal that you can’t get AGI with computation? Who cares? The main thing is we can do it using replication. We are in precisely the same position the Wright Bros were when making artificial flight.
This situation is kind of weird. Insisting that simulation/computation is the only way to solve a problem is like saying 'all buildings must be constructed out of paintings of bricks, and only people doing it this way will ever build a building.' For 60 years every building made like this has fallen down.
Meanwhile I want to build a building out of bricks, and I have to justify my position?
Very odd.
Colin
I literally just found out my PhD examination passed! Woohoo!
So that’s .....
Very odd.
Dr. Colin
:-)
But they'll have to be rich enough to afford super-computer time if they
want to really live. :-)
Brent
On 8/15/2011 7:08 PM, Jason Resch wrote:
just like you can simulate flight if you simulate the environment you are flying in.
But do we need to simulate the entire atmosphere in order to simulate flight, or just the atmosphere in the immediate area around the surfaces of the plane? Likewise, it seems we could take shortcuts in simulating the environment surrounding a mind and get the behavior we are after.
Why simulate? Why not create a robot with sensors so it can interact with the natural environment?
Brent
[Colin]
Hi Brent,
There seems to be another confusion operating here. What makes you think I am not creating a robot with sensors? What has this got to do with simulation?
1) Having sensors is not simulation. Humans have sensors, e.g. the retina.
2) The use of sensors does not connect the robot to the environment in any unique way. The incident photon could have come across the room or the galaxy. Nobody tells a human which, yet the brain sorts it out.
3) A robot brain based on replication uses sensors like any other robot.
4) What I am saying is that the replication approach will handle the sensors like a human brain handles sensors.
Of course we don’t have to simulate the entire universe to simulate flight. The fact is we simulate _some_ of the environment in order that flight simulation works. It’s a simulation. It’s not flight. This has nothing to do with the actual problem of real embedded embodied cognition of an unknown external environment by an AGI. You don’t know it! You are ‘cognising’ to find out about it. You can’t simulate it and the sensors don’t give you enough info. If a human supplies that info then you’re grounding the robot in the human’s cognition, not supplying the robot with its own cognition.
In replication there is no simulating going on! There are inorganic, artificially derived natural processes identical to what is going on in a natural brain. Literally. A brain has action potential comms. A brain has EM comms. Therefore a replicated brain will have the SAME action potentials mutually interacting with the same EM fields. The replicant chips will have an EEG/MEG signature like a human's. There is no computing of anything. There is an inorganic version of the identical processes going on in a real brain.
I hope we’re closer to being on the same page.
Colin
> You can simulate it as far as being able to model the aspects of its
> behavior that you can observe, but you can't necessarily predict that
> behavior over time, any more than you can predict what other people
> might say to you today. The chemistry and physics of the brain are
> partially determined by the experiences of the environment through the
> body, and partially determined by the sensorimotive agenda of the
> mind, which are both related to but not identical with the momentum
> and consequences of its neurological biochemistry. All three are
> woven together as an inseparable whole.
If the brain does something not predictable by modelling its
biochemistry, that means it works by magic.
--
Stathis Papaioannou
> 1) simulation of the chemistry or physics underlying the brain is impossible
>
> It’s quite possible, just irrelevant! ‘Chemistry’ and ‘physics’ are terms
> for models of the natural world used to describe how natural processes
> appear to an observer inside the universe. You can simulate (compute
> physics/chem. models) until you turn blue, and be as right as you want: all
> you will do is predict how the universe appears to an observer.
>
>
>
> This has nothing to do with creating artificial intelligence.
If you predict how the universe will appear to an observer you can
predict what a human will say when presented with a particular
problem, and isn't that a human-level AI by definition?
--
Stathis Papaioannou
I agree. It is a very narrow view to think computational power is the key to rich
experience and high intelligence. The real magic is what is done with the
hardware. And honestly, I see no reason to believe that we will somehow
magically develop amazingly intelligent software. Software development is
slow; no comparison to the exponential progress of hardware.
I believe that it is inherently impossible to design intelligence. It can
only self-organize through becoming aware of itself. I am not even
sure anymore whether this will have very much to do with technology.
Technology might have a fundamental restriction to being a tool of
intelligence, not the means to increase intelligence at the core (just
relative, superficial intelligence like intellectual knowledge).
Also, we have no reliable way of measuring the computational power of the
brain, not to speak of the possibly existing subtle energies that go beyond
the brain, which may be essential to our functioning. The way the
computational power of the brain is estimated now relies on a quite
reductionistic view of what the brain is and what it does.
benjayk
--
View this message in context: http://old.nabble.com/Turing-Machines-tp32259675p32271222.html
Sent from the Everything List mailing list archive at Nabble.com.
> Also, we have no reliable way of measuring the computational power of the
> brain, not to speak of the possibly existing subtle energies that go beyond
> the brain, which may be essential to our functioning. The way the
> computational power of the brain is estimated now relies on a quite
> reductionistic view of what the brain is and what it does.
And the problem with the reductionist view is? It certainly seems to
be the case that if you throw some chemical elements together in a
particular way, you get intelligence and consciousness. The elements
obey well-understood chemical laws, even though they constitute a
complex system with difficult-to-predict behaviour.
--
Stathis Papaioannou
Craig Weinberg wrote:
>
> On Aug 15, 10:43 pm, Jason Resch <jasonre...@gmail.com> wrote:
>> I am more worried for the biologically handicapped in the future.
>> Computers
>> will get faster, brains won't. By 2029, it is predicted $1,000 worth of
>> computer will buy a human brain's worth of computational power. 15 years
>> later, you can get 1,000 X the human brain's power for $1,000. Imagine:
>> the
>> simulated get to experience 1 century for each month the humans with
>> biological brains experience. Who will really be alive then?
>
> Speed and power is for engines, not brains. Good ideas don't come from
> engines.
>
> Craig
>
>> If the brain does something not predictable by modelling its
>> biochemistry that means it works by magic.
>
> Then you are saying that whether you accept what I'm writing
> here or not is purely predictable through biochemistry alone, or else
> must be 'magic'. So in order for you to change your mind, some
> substance needs to cross your blood brain barrier, and that the
> content of your mind - the meaning of what you are choosing to think
> about right now can only be magic. I think my approach is much more
> scientific. I'm not prejudging what the solution can or cannot be in
> advance.
>
> If you want to call psychology magic, that's ok with me, but it
> certainly drives biochemistry as much as it is driven by biochemistry.
> Why is it so hard to accept that both levels of reality are in fact
> real? Our body doesn't seem to have a problem taking commands from our
> mind. Why should I deny that those commands have a source which cannot
> be adequately described in terms of temperature and pressure or
> voltage? To presume that we can only know what the mind is by studying
> its shadow in the brain is, I think, catastrophically misguided and
> ultimately unworkable. If not for our own experiences of the mind,
> biochemistry would not tell us that such a thing could possibly exist.
Our body precisely follows the deterministic biochemical reactions
that comprise it. The mind is generated as a result of these
biochemical reactions; a reaction occurs in your brain which causes
you to have the thought to move your arm, and your arm moves. How could it
possibly be otherwise?
--
Stathis Papaioannou
There are *so* many problems with that. We are naive, a bit like a 7-year-old
wanting to build a time machine. We know little about the brain. Who says
there are no quantum effects going on? There doesn't even have to be
substantial entanglement. Chaos theory tells us that even minuscule quantum
effects could have major impacts on the thing. ESP and telepathy suggest
that we are to some extent entangled. There are *major* problems reproducing
this with computers.
Neural imaging and scanning cannot pick up the major information in the
brain. Not by a long stretch. It is like having a picture of a RAM and
thinking this is enough to recover the information on it.
What use are fast brains? Our brains alone are of little use. We also need a
rich environment and a body.
You presuppose that AI researchers have the potential ability to build
superintelligent AI. Why should we suspect this more than we suspect that
gorillas can build humans? I'd like to hear arguments that make it plausible
that it is possible to engineer something more generally intelligent than
yourself.
Jason Resch-2 wrote:
>
>> Software development is
>> slow, no comparison to the exponential progress of hardware.
>>
>
> As I mentioned to Craig who complained his computer takes longer to start
> up
> now than ever, the complexity of software is in many cases outpacing even
> the exponential growth in the power of computer hardware.
That may well be. But even if we have software that can render a
99^99-dimensional Mandelbrot set, this will not be of much use. The point is that
the usefulness of software is not progressing exponentially.
Jason Resch-2 wrote:
>
>> I believe that it is inherently impossible to design intelligence. It can
>> just self-organize itself through becoming aware of itself.
>
>
> A few genes separate us from chimps, and all of our intelligence.
I don't think our intelligence is reducible to genes. Memes seem even more
important. And just because we can't really research it scientifically at the
moment does not mean there are no subtler things that determine our general
intelligence than genes and culture. Many subjective experiences hint at
something like a more subtle layer; call it "soul" if you will.
All of what we understand about biology may just be the tiny top of a
pyramid that is buried in the sand.
Jason Resch-2 wrote:
>
> If we can
> determine which, and see what these genes do then perhaps we can
> extrapolate
> and find out how our DNA is able to make some brains better than others.
But this is not how intelligence works. You don't just extrapolate a bit and
have more intelligence. If this were the case, we would already have
superintelligence. Development / evolution of intelligence, learning and
consciousness are highly non-trivial, and non-linear.
Jason Resch-2 wrote:
>
>> I am not even
>> sure anymore whether this will have to do very much to do with
>> technology.
>> Technology might have an fundamental restriction to being a tool of
>> intelligence, not the means to increase intelligence at the core (just
>> relative, superficial intelligence like intellectual knowledge).
>>
>
> I think the existence of Google and Wikipedia makes me more intelligent.
> If
> I could embed a calculator chip into my brain my mental math skills would
> improve markedly.
This is exactly the kind of intelligence I am NOT talking about. It's
useful, sure. But it doesn't lead to unimaginably creative, self-improving
intelligence. We may become super-knowledgeable in the next decades, sure.
But this doesn't mean we acquire the wisdom that is necessary for deep
progress, leading to higher states of consciousness and happiness.
Whether you know extremely much or not isn't really so important that it
would divide enhanced humans from normal humans more than it divides
scientifically literate people from badly educated people.
Jason Resch-2 wrote:
>
>>
>> Also, we have no reliable way of measuring the computational power of the
>> brain, not to speak of the possibly existing subtle energies that go
>> beyond
>> the brain, that may be essential to our functioning. The way that
>> computational power of the brain is estimated now relies on a quite
>> reductionstic view of what the brain is and what it does.
>>
>
> As I've mentioned before on this list, neuroscientists have succeeded in
> creating biologically realistic neurons. The CPU requirements of these
> neurons is well understood:
>
> http://www.youtube.com/watch?v=LS3wMC2BpxU&t=7m30s
"Biologically realistic neurons" is relative. We certainly don't take quantum
effects into account, and evidence seems to suggest this is important. At
least I see no other way to explain ESP.
Even if we suppose that they are biologically realistic, neurons alone don't
make up a functioning human brain. A neuron is like a transistor of a
computer, and a transistor is not enough for a functioning computer! There
are other types of cells that may be important in information processing.
Also, there are different kinds of neurons, and the way they are put together
in different units is also important. Even if we were able to reproduce all
of this, we would still need the software running on the brain.
How would we do this?
At the very least, it seems we would have to biologically realistically
simulate the development of the brain from before birth until late
childhood. And of course this needs a very good interface to the outside
also.
So, simulating neurons seems to be the easiest task in simulating a brain.
benjayk
Stathis Papaioannou-2 wrote:
>
> On Tue, Aug 16, 2011 at 10:03 PM, benjayk
> <benjamin...@googlemail.com> wrote:
>
>> Also, we have no reliable way of measuring the computational power of the
>> brain, not to speak of the possibly existing subtle energies that go
>> beyond
>> the brain, that may be essential to our functioning. The way that
>> computational power of the brain is estimated now relies on a quite
>> reductionstic view of what the brain is and what it does.
>
> And the problem with the reductionist view is?
>
It seeks to dissect reality into pieces, while if you have some sense of
spirituality, you see that this is not how reality functions (as it is a
whole). It works reasonably well for simple things like motors, but that's
it.
Even if you just look at science, it shows that the reductionist view is
fundamentally flawed. In quantum mechanics you have one interconnected wave
function, not neatly separable pieces. The reductionists do a bit of
hand-waving and say that this is not relevant at the macro scale, but they
haven't shown this yet. Just because Newtonian physics is a good
approximation on the surface doesn't mean that it isn't fundamentally
insufficient to explain the workings of complex systems.
Stathis Papaioannou-2 wrote:
>
> It certainly seems to
> be the case that if you throw some chemical elements together in a
> particular way, you get intelligence and consciousness.
It may seem that way to some people. It may seem that the earth is flat as
well.
They are just jumping to conclusions from some vague understanding of what
is happening. We see a correlation between brain function and human
consciousness? Well, that obviously means that the brain produces consciousness
(or that consciousness is equivalent to the firing of neurons, and its
subjective nature is an illusion). But, wait, no it doesn't, not AT ALL.
Correlations are fine, but they don't suggest by a long stretch that the one
thing (brain) that correlates to some extent with the other thing (human
consciousness) *produces* a broad generalization of the other thing
(consciousness as such).
Stathis Papaioannou-2 wrote:
>
> The elements
> obey well-understood chemical laws, even though they constitute a
> complex system with difficult to predict behaviour.
Do we understand them well? OK, well enough to make a host of good
predictions, but we have no remotely complete understanding of them. Also,
that biology is reducible to chemistry is an assumption, and that itself is
just reductionistic faith. They can say that once they manage to derive
biology from chemistry.
On 8/15/2011 7:08 PM, Jason Resch wrote:
just like you can simulate flight if you simulate the environment you are flying in.
But do we need to simulate the entire atmosphere in order to simulate flight, or just the atmosphere in the immediate area around the surfaces of the plane? Likewise, it seems we could take shortcuts in simulating the environment surrounding a mind and get the behavior we are after.
Why simulate? Why not create a robot with sensors so it can interact with the natural environment.
Brent
[Colin]
Hi Brent,
There seems to be another confusion operating here. What makes you think I am not creating a robot with sensors? What has this got to do with simulation?
1) Having sensors is not simulation. Humans have sensors...eg retina.
2) The use of sensors does not connect the robot to the environment in any unique way. The incident photon could have come across the room or the galaxy. Nobody tells a human which, yet the brain sorts it out.
3) A robot brain based on replication uses sensors like any other robot.
4) What I am saying is that the replication approach will handle the sensors like a human brain handles sensors.
Of course we don't have to simulate the entire universe to simulate flight. The fact is we simulate _some_ of the environment in order that flight simulation works. It's a simulation. It's not flight. This has nothing to do with the actual problem of real embedded, embodied cognition of an unknown external environment by an AGI. You don't know it! You are 'cognising' to find out about it. You can't simulate it, and the sensors don't give you enough info. If a human supplies that info then you're grounding the robot in the human's cognition, not supplying the robot with its own cognition.
In replication there is no simulating going on! There are inorganic, artificially derived natural processes identical to what is going on in a natural brain. Literally. A brain has action-potential comms. A brain has EM comms. Therefore a replicated brain will have the SAME action potentials mutually interacting with the same EM fields. The replicant chips will have an EEG/MEG signature like a human's. There is no computing of anything. There is an inorganic version of the identical processes going on in a real brain.
I hope we’re closer to being on the same page.
Colin
That's approximately true, but it overstates the determinism a little.
First, the system isn't closed, so one's thoughts and behavior are
continually modified by stuff that happens on your past light cone.
Second, there are quantum random events within your brain, e.g. decay of
radioactive potassium atoms, that could influence your thoughts and actions.
Brent
And also to explain how the pieces interact in reality.
Brent
It's not a flaw in his reasoning, it's a description at a different
level. While it is no doubt true that you, the whole you, determine to
move your arm, it seems not to be the case that the *conscious* you does
so. Various experiments starting with Libet show that the biochemical
reactions that move it occur before you are conscious of the decision to
move it.
Brent
I have to repeat that current simulation technology just does not
scale. With it, even God will not help. The only way that I could imagine
is that God's Turing machine is based on a completely different simulation
technology (this, however, would mean that our current knowledge of physical
laws and/or numerics is wrong).
>> Yet, I am not sure if extrapolation too far away from the current
>> knowledge makes sense, as eventually we are coming to
>> "philosophical controversies".
>>
>>
> We're already simulating pieces of brain tissue on the order of fruit
> fly brains (10,000 neurons). Computers double in power/price every
> year, so 6 years later we could simulate mouse brains, another 6 we
> can simulate cat brains, and in another 6 we can simulate human
> brains. (By 2030)
>
> But all of this is an aside from the point that I was making regarding
> the power and versatility of Turing machines. Those who think
> Artificial Intelligence is not possible with computers must show what
> about the brain is unpredictable or unmodelable.
Why that? I guess that you should prove first that consciousness is
predictable and could be modeled.
Evgenii
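Jason's doubling extrapolation quoted above can be checked with a few lines of arithmetic. The 10,000-neuron fruit-fly figure is from the thread; the mouse, cat and human neuron counts below are rough outside estimates assumed purely for illustration, and the answer is quite sensitive to them:

```python
import math

# Neuron counts. 10,000 (fruit-fly-scale tissue) is the figure cited
# in the thread; the rest are rough outside estimates (assumptions).
current = 10_000
targets = {"mouse": 70_000_000, "cat": 700_000_000, "human": 86_000_000_000}

# If the simulable neuron count doubles each year along with
# power/price, the wait is log2(target / current) years.
for name, n in targets.items():
    years = math.ceil(math.log2(n / current))
    print(f"{name}: ~{years} yearly doublings")
```

With these assumed counts the human brain needs about 24 doublings rather than 18, which shows how much the "(By 2030)" date depends on the chosen starting point and target.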
If I understand Bruno correctly, then his position is that this happens
exactly the other way around.
Evgenii
Right, otherwise there is little use in dissecting. But the very concept of
interacting pieces has its limits in describing reality. Two quantum-entangled
particles cannot properly be described as two pieces interacting.
benjayk
Scale doesn't matter at the level of theoretical possibility. Bruno's
UD is the most inefficient possible way to compute this universe - but
he only cares that it's possible. All universal Turing machines are
equivalent so it doesn't matter what God's is based on. Maybe you just
mean the world is not computable in the sense that it is nomologically
impossible to compute it faster than just letting it happen.
Brent
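The substrate-independence point above - any universal machine, however realized, runs the same computations - is easy to see with a minimal simulator. The machine below (a unary successor, an invented example, not from the thread) behaves identically whatever physically implements the table lookup:

```python
def run_tm(rules, tape, state="start", blank="_", max_steps=1000):
    """Run a Turing machine. rules maps (state, symbol) ->
    (new_state, write_symbol, move) with move in {-1, 0, +1}."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = tape.get(pos, blank)
        state, tape[pos], move = rules[(state, sym)]
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Invented example machine: append one '1' to a string of 1s
# (the successor function in unary notation).
rules = {
    ("start", "1"): ("start", "1", +1),  # skip over existing 1s
    ("start", "_"): ("halt", "1", 0),    # write one more, halt
}
print(run_tm(rules, "111"))  # -> "1111"
```

The same `rules` table run on gears, relays or silicon yields the same tape, which is all the equivalence claim needs.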
Sure they can, if you allow FTL interactions, as in Bohmian QM. But
even if you don't, the state of the two particles is described by a ray
in Hilbert space or a density matrix - which is pretty reductive.
Brent
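Brent's "ray in Hilbert space or a density matrix" description can be made concrete. A minimal sketch (assuming NumPy; none of this is from the thread itself): the entangled pair is one ray in the joint Hilbert space, and tracing out either particle leaves a maximally mixed reduced density matrix for it:

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): one ray in the
# 4-dimensional joint Hilbert space of the two qubits.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)

# Density matrix of the pair: rho = |Phi+><Phi+|
rho = np.outer(phi, phi)

# Partial trace over the second qubit gives the first particle's
# reduced state. Reshape to indices (a, b, a', b'), then trace b.
rho_A = np.einsum("abcb->ac", rho.reshape(2, 2, 2, 2))

print(rho_A)  # maximally mixed: diag(0.5, 0.5)
```

The reduced state carries no trace of the correlations, which is why describing the pair as "two pieces interacting" fails while the joint density matrix succeeds.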
Now you're changing the definitions of words again. What does
"conscious" mean, if not "the part of your thinking that you can report
on." I would never claim that you didn't make the decision - it's just
that "you" is a lot bigger than your consciousness.
> If moving my arm is like reading a book, I can't tell you what the
> book is about until I actually have read it, but I still am initiating
> the reading of the book, and not the book forcing me to read it.
>
Another non-analogy. Is this sentence making you think of a dragon?
Brent
> Craig
>
>
It's hard for me to accept that you can possibly think that your mind
determines the biochemistry in your brain. It's like saying that the
speed and direction your car goes in determines the activity of the
engine and the brakes.
> "Why did the chicken cross the road?" For deterministic biochemical
> reactions.
> "Why did the sovereign nation declare war?" For deterministic
> biochemical reactions.
> "What is the meaning of f=ma"? For deterministic biochemical
> reactions.
>
> Biochemistry is just what's happening on the level of cells and
> molecules. It is an entirely different perceptual-relativistic
> inertial frame of reference. Are they correlated? Sure. You change
> your biochemistry in certain ways in your brain, and you will
> definitely feel it. Can you change your biochemistry in certain ways
> by yourself? Of course. Think about something that makes you happy and
> your cells will produce the proper neurotransmitters. YOU OWN them.
> They are your servant. To believe otherwise is to subscribe to a faith
> in the microcosm over the macrocosm, in object phenomenology over
> subject phenomenology to the point of imagining that there is no
> subject. The subject imagines it is nothing but an object. It's
> laughably tragic.
>
> In order to understand how the universe creates subjectivity, you have
> to stop trying to define it in terms of its opposite. Objectivity
> itself is a subjective experience. There is no objective experience of
> subjectivity - it looks like randomness and self-similarity feedback.
> That's a warning. It means - 'try again but look in the other
> direction'.
I feel happy because certain things happen in my environment that
affect the biochemistry in my brain, and that is experienced as
happiness. I can also feel happy if I take certain drugs which cause
release of neurotransmitters such as dopamine, even if nothing in my
environment is particularly joy-inducing. On the other hand, I can be
depressed due to underactivity of serotonergic neurotransmission, so
that even if happy things happen they don't cheer me up, and this can
be corrected by pro-serotonergic drugs.
I don't doubt the subjective, I just can't see how it could be due to
anything other than physical processes in the brain. The physical
process comes first, and the feeling or thought follows as a result.
Remove the brain and the feeling or thought is also removed.
--
Stathis Papaioannou
AKA "subconscious".
> When you
> look at electrical transmission in the brain over milliseconds and
> microseconds, you have automatically shifted outside of the realm of
> vernacular consciousness and into microconscious territories.
>
> Just as the activity of cells as a whole is beyond the scope of what
> can be understood by studying molecules alone, the study of the
> microconscious is too short term to reveal the larger, slower pattern
> of our ordinary moment to moment awareness of awareness. Raw awareness
> is fast, but awareness of awareness is slower, the ability for
> awareness of awareness to be communicated through motor channels is
> slower still, and the propagation of motor intention through the
> efferent nerves through the spinal cord is quite a bit slower. It's
> really not comparing apples to apples then if you look at the very
> earliest fraction of a second of an experience and compare it with the
> time it takes for the experience to be fully explicated through all of
> the various perceptual and cognitive resources. It's completely
> misleading and mischaracterizes awareness in yet another attempt to
> somehow prove for the sake of validating our third person
> observations, that in fact we cannot really be alive and conscious, we
> just think we are. I think it's like a modern equivalent of 'angels
> dancing on the head of a pin'.
>
So you admit that what determines your behavior occurs
before you are aware of it, i.e. conscious. And what happens first is
the activity of neurons. The rest of the above paragraph seems to be an
attempt to save dualism by explaining why the causal spirit comes after the
motor effect. I have no problem being alive and conscious with
consciousness coming after the decision. The decision was still made by
me. I just don't conceive of "me" as being as small as my consciousness.
>
>>
>>> If moving my arm is like reading a book, I can't tell you what the
>>> book is about until I actually have read it, but I still am initiating
>>> the reading of the book, and not the book forcing me to read it.
>>>
>> Another non-analogy. Is this sentence making you think of a dragon?
>>
> A dragon? No. Why would it? Why is it 'another' non-analogy? Is this
> 'another' ad hominem non-argument?
>
It's a non-analogy because no one proposed that your actions were
determined by a book or other external effect. The hypothesis was that
they are determined by neural processes of which you are not aware.
Brent
That's pretty impressive, but it is far from sufficient ("0.01mm³"), and we
don't know how well it will scale up.
Jason Resch-2 wrote:
>
>> It is like having a picture of a RAM and
>> thinking this is enough to recover the information on it.
>>
>> What use are fast brains?
>
>
> A million years of human technological progress in the time frame of one
> year seems highly useful.
But technological progress is not exclusively made in our brains. Also, the
amount of useful technological progress that our brains can deliver may be
intrinsically limited.
Jason Resch-2 wrote:
>
>> Our brains alone are of little use. We also need a
>> rich environment and a body.
>>
>
> I'm not sure bodies are necessary, but in the context of a simulation you
> could have any body you wanted, or no body at all. (Like in second life)
Jason Resch-2 wrote:
>
>>
>> You presuppose that AI researchers have the potential ability to build
>> superintelligent AI. Why should we suspect this more than we suspect that
>> gorillas can build humans? I'd like to hear arguments that make it
>> plausible
>> that it is possible to engineer somthing more generally intelligent than
>> yourself.
>>
>
> If there were someone just like me, but who thought at twice the speed, I am
> sure
> he would score more highly on some general intelligence tests.
Of course, if only because he effectively would have twice the time. But
that's not what I am referring to when I say superintelligent. Imagine he
would have 10000 times more time. Would that make him 10000 times more
intelligent? Of course not.
Jason Resch-2 wrote:
>
> If we can
> find a gene or genes that make the difference between Newton and the
> average
> person, and then switch them on in the average person through gene
> therapy,
> would that count as engineering something more intelligent than yourself?
Ultimately, no. What you say may well be possible, but we are essentially just
using the intelligence that is already there and copying it. But even then I
doubt that we can get the kind of deeply creative intelligence, one that
includes wisdom, which is the essential driver of progress. I don't buy at
all that intellectual intelligence is what drives us forward.
Intellect can be used for selfish and destructive purposes as well. Real
intelligence consists in clear awareness of yourself and the world, which
also leads to moral intelligence. This is what I say can't be engineered.
Jason Resch-2 wrote:
>
> What about taking Nootropics ( http://en.wikipedia.org/wiki/Nootropic )?
> There are many plausible scenarios for making ourselves more intelligent,
> or
> more creative than our current state.
Nootropics don't make you much more intelligent. More effective and
concentrated, sure.
You see, you talk of superficial intelligence. I don't argue that we can't
increase this artificially. We already do. Look at the internet. One could
argue it amplifies some parts of our intelligence by orders of magnitude. Yet,
does it lead to singularity-like progress? Obviously not. So I would argue
this intelligence is not what's important. What's important is the
intelligence that's deeply interwoven with the most basic layers of
consciousness, that includes (deep) morality and spiritual awareness. I don't
see any evidence that this can be engineered, or even taught.
Jason Resch-2 wrote:
>
>
>> Jason Resch-2 wrote:
>> >
>> >> I believe that it is inherently impossible to design intelligence. It
>> can
>> >> just self-organize itself through becoming aware of itself.
>> >
>> >
>> > A few genes separate us from chimps, and all of our intelligence.
>> I don't think our intelligence is reducible to genes. Memes seem even
>> more
>> important. And just because we can't really research it scientifically at
>> moment, does not mean there are no subtler things that determine our
>> general
>> intelligence than genes and culture. Many subjective experiences hint at
>> something like a more subtle layer, call it "soul" if you will.
>> All of what we understand about biology may just be the tiny top of a
>> pyramid that is buried in the sand.
>>
>>
> Well, we only have one sample of biology from one planet in one type of
> chemistry. Throughout the everything, what we know is minuscule compared
> to what can be known about biology. That said, there is a finite amount
> there is to learn about the human brain and biology. There are information
> theoretic limits established by the number of base pairs which set upper
> bounds on how much there is to be learned about human biology. With this,
> we can confidently say the brain's design is not infinitely complex.
This assumes the functioning of the brain can be reduced to (essentially
classical) biochemistry. That may seem obviously true, yet it has not been
demonstrated. Essential "magic" in the world may not be absent, just subtle.
We have some evidence for that, like ESP.
Jason Resch-2 wrote:
>
>>
>> Jason Resch-2 wrote:
>> >
>> > If we can
>> > determine which, and see what these genes do then perhaps we can
>> > extrapolate
>> > and find out how our DNA is able to make some brains better than
>> others.
>> But this is not how intelligent works. You don't just extrapolate a bit
>> and
>> have more intelligence. If this were the case, we would already have
>> superintelligence.
>
>
> Vastly super-human intelligence requires vastly more powerful processing
> capabilities. Our computers are still very far from that, and are more on
> the level of insect brains.
OK. I am not necessarily referring to our computers. If there was an easy
way to increase intelligence, evolution would already have found it. But in
reality, development of intelligence was a very slow and complex process.
My argument also rests on a holistic understanding of the universe, that is,
it is not a random accident, but is there to develop consciousness. It
certainly wouldn't use such convoluted methods if intelligence consisted in a
linear extrapolation of some characteristic.
Jason Resch-2 wrote:
>
>> Jason Resch-2 wrote:
>> >
>> >>
>> >> Also, we have no reliable way of measuring the computational power of
>> the
>> >> brain, not to speak of the possibly existing subtle energies that go
>> >> beyond
>> >> the brain, that may be essential to our functioning. The way that
>> >> computational power of the brain is estimated now relies on a quite
>> >> reductionstic view of what the brain is and what it does.
>> >>
>> >
>> > As I've mentioned before on this list, neuroscientists have succeeded
>> in
>> > creating biologically realistic neurons. The CPU requirements of these
>> > neurons is well understood:
>> >
>> > http://www.youtube.com/watch?v=LS3wMC2BpxU&t=7m30s
>> Biologically realistic neurons is relative. We certainly don't take
>> quantum
>> effects into account, and evidence seems to suggest this is important.
>
>
> On the contrary, the simulated neurons behaved in the same ways as
> biological neurons, and this is without including quantum effects.
They just behave in the same way to the extent that we are able to analyze
which aspects of the neurons' behaviour are relevant (they certainly don't
behave in the same way physically, since we don't even simulate them at the
molecular level). Our analysis of this may be very incomplete.
Jason Resch-2 wrote:
>
> It would
> therefore seem that quantum effects play a negligible role in a brain's
> function.
That is circular reasoning: we don't understand how the function of neurons
depends on quantum effects, therefore they play no role.
Jason Resch-2 wrote:
>
>> At
>> least I see no other way to explain ESP.
>>
>
> I don't see how quantum mechanics could even explain ESP. Entangled
> particles cannot be used to transmit information.
But this is coherent with the results of ESP studies. ESP often seems to
transcend time, which makes it doubtful that classical information
transmission is involved.
There is no need for information transmission in order for there to be
anomalous knowledge. If a particle on earth is entangled with a particle on
the moon, and the particle on earth is measured to be a certain way, we have
knowledge about the particle on the moon, even though there was no
information transmission in the ordinary sense.
Jason Resch-2 wrote:
>
>>
>> Even if we suppose that they are biologically realistic, neurons alone
>> don't
>> make up a functioning human brain. A neuron is like a transistor of a
>> computer, and a transistor is not enough for a functioning computer!
>> There
>> are other types of cells that may be important in information processing.
>> Also there are different kinds of neurons, and the way they are put
>> together
>> in different units is also important. Even if we were able to reproduce
>> all
>> of this, we would still need the software running on the brain.
>> How would we do this?
>>
>
> The software of the brain is represented by the manner in which the
> neurons
> are connected to each other, and how individual neurons respond to each
> other. This design can be copied straight from the data provided by
> serial
> sectioning scanning.
OK. But this presupposes that scanning will be good enough and that scanning
is sufficient (there is no non-biological component to our intelligence). I
am not convinced of either.
benjayk
But they are not all consciousness = awareness-of-awareness. And the
decision to act precedes the awareness of the decision - which is
evidence against the idea that consciousness is in control of one's
decisions, cf. the Grey Walter carousel experiment. Even in common
experience one makes many decisions without being aware of them, even
decisions that require perception. So it is not plausible that
consciousness makes the decisions. Consciousness may indeed occur in
parallel and sometimes correlate with decisions and sometimes not. But
the correlation is due to a common, subconscious, cause.
> Not like an assembly line - like a living, flowing interaction amongst
> multiple layers of external relations and internal perceptions, the
> parts and the wholes. Without perception and relativity, there are
> only parts.
>
>
>> The rest of the above paragraph seems to be an
>> attempt to save dualism by saying why the causal spirit comes after the
>> motor effect. I have no problem being alive and conscious with
>> consciousness coming after the decision. The decision was still made by
>> me. I just don't conceive "me" as being so small as my consciousness.
>>
> You're applying a broad definition of consciousness at the beginning
> and a narrow definition to consciousness at the end and using the
> mismatch to beg the question.
I didn't refer to "consciousness" at the beginning. I said what happens
first is the activity of neurons - not necessarily conscious. You are
attributing inconsistencies to me to create a strawman. At the end I'm
using your definition of consciousness "awareness of awareness".
> I have no problem with recognition
> coming after cognition after awareness after detection, but I have a
> problem with conflating all of those as 'consciousness' and then
> making a special case for electromagnetic activity in the brain not
> corresponding to anything experiential out of anthropomorphic
> superstition. Just because 'you' don't think you feel anything doesn't
> mean that what you actually are doesn't detect it as a first person
> experience.
>
>
>>>>> If moving my arm is like reading a book, I can't tell you what the
>>>>> book is about until I actually have read it, but I still am initiating
>>>>> the reading of the book, and not the book forcing me to read it.
>>>>>
>>
>>>> Another non-analogy. Is this sentence making you think of a dragon?
>>>>
>>
>>> A dragon? No. Why would it? Why is it 'another' non-analogy? Is this
>>> 'another' ad hominem non-argument?
>>>
>> It's a non-analogy because no one proposed that your actions were
>> determined by a book or other external effect. The hypothesis was that
>> they are determined by neural processes of which you are not aware.
>>
> They are determined by neural experiences of which you, at the .1Hz
> level of 'Brent' sitting in a neurologist's office, are not aware. That
> doesn't mean that the groups of neurons at the 0.001 Hz level are not
> aware, and it doesn't mean that that awareness is not part of your
> total self's awareness.
Now you've introduced another concept "total self's awareness" of which
you are not aware. Logic requires the consistent use of words. And it
did nothing to explain your analogy of the book that didn't force you to
read it.
Brent
I understand what you say. On the other hand, it is still good
to look at the current level of simulation technology, especially when
people make predictions about what happens in the future (in other messages,
the possibility of brain simulation and talk about physico-chemical
processes).
From such a viewpoint, even simulation at the level of a single cell is not
reachable in the foreseeable future. Hence, in my view, after the
discussion about theoretical limits it would be good to look at the
reality. It might well help to think the assumptions over.
I would say that it is small practical things that force us to
reconsider our conceptions.
Evgenii
--
http://blog.rudnyi.ru
...
>>> But all of this is an aside from point that I was making
>>> regarding the power and versatility of Turing machines. Those
>>> who think Artificial Intelligence is not possible with computers
>>> must show what about the brain is unpredictable or unmodelable.
>>>
>>
>> Why that? I guess that you should prove first that consciousness
>> is predictable and could be modeled.
>>
>>
> Everyone (except perhaps the substance dualists, mysterians, and
> solipsists -- each a non-scientific or anti-scientific philosophy)
> believes the brain (on the lowest levels) operates according to simple
> and predictable rules. Also note, the topic of the above was not
> consciousness, but intelligence.
>
The matter is not about our beliefs (though it would be interesting to
look at the theology that Bruno develops).
Yes, the point was about intelligence, but the reason given for success (if I
have understood it correctly) was that it is possible to simulate even
the whole universe. To this end, in my view, it would be good first to
develop a theory of consciousness. Here however the theory is missing
(I do not know if you agree with Bruno's theory). As for what concerns
dualism, let me quote Jeffrey Gray:
p. 73. “If conscious experiences are epiphenomenal, like the melody
whistled by the steam engine, there is not much more, scientifically
speaking, to say about them. So to adopt epiphenomenalism is a way of
giving up on the Hard Problem. But it is too early to give up. Science
has only committed itself to serious consideration of the problem within
the last couple of decades. To find causal powers for conscious events
will not be easy. But the search should be continued. And, if it leads
us back to dualism, so be it.”
Evgenii
--
http://blog.rudnyi.ru
I agree with that sentiment. That's why I often try to think of
consciousness in terms of what it would mean to provide a Mars Rover
with consciousness. According to Bruno the ones we've sent to Mars were
already conscious, since their computers were capable of Lobian logic.
But clearly they did not have human-like consciousness (or
intelligence). I think it much more likely that we could make a Mars
Rover with consciousness and intelligence somewhat similar to humans
using von Neumann computers or artificial neural nets than by trying to
actually simulate a brain.
Brent
> p. 73. “If conscious experiences are epiphenomenal, like the melody
> whistled by the steam engine, there is not much more, scientifically
> speaking, to say about them. So to adopt epiphenomenalism is a way
> of giving up on the Hard Problem. But it is too early to give up.
> Science has only committed itself to serious consideration of the
> problem within the last couple of decades. To find causal powers for
> conscious events will not be easy. But the search should be
> continued. And, if it leads us back to dualism, so be it.”
Well, with the comp hyp it is "just" a coming back to Plato. We keep
monism, but abandon materialism/physicalism. Advantage: this solves
the mind problem with the usual computer science, and above all, this
gives a realm where we can see where the laws of physics come from,
and why there is an appearance of matter.
This goes toward an unification of all science (forces and loves
included) which is then 100% theological, and 99.99...9% mathematical.
Bruno
I don't remember having said this. I even doubt that Mars Rover is
universal, although that might be serendipitously possible
(universality is very cheap), in which case it would be as conscious
as a human being under a high dose of salvia (a form of consciousness
quite disconnected from terrestrial realities). But it is very
probable that it is not Löbian. I don't see why they would have given
the induction axioms to Mars Rover (the induction axioms are what give
the Löbian self-referential power).
> But clearly they did not have human-like consciousness (or
> intelligence). I think it much more likely that we could make a
> Mars Rover with consciousness and intelligence somewhat similar to
> humans using von Neumann computers or artificial neural nets than
> by trying to actually simulate a brain.
I think consciousness might be attributed to the virgin (non
programmed) universal machine, but such consciousness is really the
basic consciousness of everyone, before the contingent differentiation
on the histories. LUMs, on the contrary, have a self-consciousness,
even when basically virgin: they make a distinction between themselves and
some possible independent or transcendental reality.
No doubt the truth is a bit more subtle, if only because there are
intermediate stages between UMs and LUMs.
Bruno
You didn't say it explicitly. It was my inference that the computer's
learning algorithms would include induction.
Brent
> On 8/18/2011 7:24 AM, Bruno Marchal wrote:
>>> I agree with that sentiment. That's why I often try to think of
>>> consciousness in terms of what it would mean to provide a Mars
>>> Rover with consciousness. According to Bruno the ones we've sent
>>> to Mars were already conscious, since their computers were capable
>>> of Lobian logic.
>>
>> I don't remember having said this. I even doubt that Mars Rover is
>> universal, although that might be serendipitously possible
>> (universality is very cheap), in which case it would be as
>> conscious as a human being under a high dose of salvia (a form of
>> consciousness quite disconnected from terrestrial realities). But
>> it is very probable that it is not Löbian. I don't see why they
>> would have given the induction axioms to Mars Rover (the induction
>> axioms is what gives the Löbian self-referential power).
>
> You didn't say it explicitly. It was my inference that the
> computer's learning algorithms would include induction.
Yes, and that makes them universal. To make them Löbian, you need them
to not just *do* induction; they have to believe in induction.
Roughly speaking: if *i* = "obeys the induction rule", then for a UM *i*
is true, but that's all. For a LUM it is not just that *i* is true,
but *i* is believed by the machine. For a UM *i* is true but B*i* is
false. For a LUM we have both *i* is true and B*i* is true.
Of course the induction here is basically induction on numbers(*).
It can be related to learning, anticipating or doing inductive
inference, but the relation is not identity.
(*) The infinity of axioms: F(0) & (for all n (F(n) -> F(s(n)))) ->
for all n F(n).
With F any arithmetical formula, that is, a formula built with the
logical symbols and the arithmetical symbols {0, s, +, *}.
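The footnote's schema, rendered in LaTeX for readability (same content as the line above, one axiom per arithmetical formula F):

```latex
% Induction schema of PA: one axiom for each arithmetical formula F
\bigl( F(0) \;\land\; \forall n\,\bigl(F(n) \rightarrow F(s(n))\bigr) \bigr)
  \;\rightarrow\; \forall n\, F(n)
```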
>
> Brent
>
So do you have a LISP program that will make my computer Lobian?
Brent
> On 8/18/2011 10:50 AM, Bruno Marchal wrote:
>>
>> On 18 Aug 2011, at 19:05, meekerdb wrote:
>>
>>> On 8/18/2011 7:24 AM, Bruno Marchal wrote:
>>>>> I agree with that sentiment. That's why I often try to think of
>>>>> consciousness in terms of what it would mean to provide a Mars
>>>>> Rover with consciousness. According to Bruno the ones we've
>>>>> sent to Mars were already conscious, since their computers were
>>>>> capable of Lobian logic.
>>>>
>>>> I don't remember having said this. I even doubt that Mars Rover
>>>> is universal, although that might be serendipitously possible
>>>> (universality is very cheap), in which case it would be as
>>>> conscious as a human being under a high dose of salvia (a form of
>>>> consciousness quite disconnected from terrestrial realities). But
>>>> it is very probable that it is not Löbian. I don't see why they
>>>> would have given the induction axioms to Mars Rover (the
>>>> induction axioms is what gives the Löbian self-referential power).
>>>
>>> You didn't say it explicitly. It was my inference that the
>>> computer's learning algorithms would include induction.
>>
>> Yes, and that makes them universal. To make them Löbian, you need
>> them to not just *do* induction, but they have to believe in
>> induction.
>>
>> Roughly speaking. If *i* = "obeys the induction rule", For a UM
>> *i* is true, but that's all. For a LUM it is not just that *i* is
>> true, but *i* is believed by the machine. For a UM *i* is true but
>> B*i* is false. For a LUM we have both *i* is true and B*i* is true.
>>
>> Of course the induction here is basically the induction on
>> numbers(*). It can be related to learning, anticipating or doing
>> inductive inference, but the relation is not identity.
>>
>>
>> (*) The infinity of axioms: F(0) & (for all n (F(n) -> F(s(n)))) ->
>> for all n F(n).
>> With F any arithmetical formula, that is, a formula built with the
>> logical symbols and the arithmetical symbols {0, s, +, *}.
>
> So do you have a LISP program that will make my computer Lobian?
It would be easier to do it by hand:
1) develop a first order logic specification for your computer (that
is, a first order axiomatic for its data structures, including the
elementary manipulations that your computer can do on them)
2) add a scheme of induction axioms on those data structures. For
example, for the combinators, it would be like this:
"if P(K) and P(S), and if for all X and Y, P(X) & P(Y) implies P((X,Y)),
then for all X P(X)". And this for all "P" describable in
your language.
It will be automatically Löbian. And, yes, it should not be too
difficult to write a program in LISP doing that. That is, starting
from a first order logical specification of an interpreter, extending
it into a Löbian machine.
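A hedged sketch of the combinator induction scheme in step 2), in Python rather than LISP (all function names here are hypothetical). Terms are built from the atoms "K" and "S" by binary application, modeled as a 2-tuple (X, Y); the scheme says that if P holds of K and S and is preserved by application, then P holds of every combinator term:

```python
# Structural induction over combinator terms, as in Bruno's step 2).
# Terms: the atoms "K" and "S", and applications modeled as 2-tuples.

def terms(depth):
    """All combinator terms of application-depth <= depth."""
    if depth == 0:
        return ["K", "S"]
    smaller = terms(depth - 1)
    return smaller + [(x, y) for x in smaller for y in smaller]

def induction_holds(P, depth):
    """Check the base case and the induction step on terms up to `depth`;
    if both hold, the scheme concludes P of every combinator term."""
    base = P("K") and P("S")
    step = all(P((x, y))
               for x in terms(depth - 1)
               for y in terms(depth - 1)
               if P(x) and P(y))
    return base and step

def size(t):
    """Number of atoms and applications in a term."""
    return 1 if isinstance(t, str) else 1 + size(t[0]) + size(t[1])

# Example property preserved by application: odd size (odd + odd + 1 is odd).
odd_size = lambda t: size(t) % 2 == 1
```

The checker only verifies instances of the premises up to a finite depth, of course; the scheme itself is what licenses the conclusion for all terms.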
Bruno
When I search on Google Scholar
lobian robot
then there is only one hit (I guess that this is Bruno's thesis). When I
search however
loebian robot
there are some more hits, for example "Loebian embodiment". I do not
know what it means but in my view it would be interesting to build a
robot with a Loebian logic and research it. In my view, it is not enough
to state that there is already some consciousness there. It would
rather be necessary to research what it actually means. Say, whether it
has visual conscious experience, feels pain, or something else.
It would be interesting to see what people do in this area. For example,
"Loebian embodiment" sounds interesting and it would be nice to find
some review about it.
Evgenii
Just to clarify: P is some predicate, i.e. a function that returns #T or
#F, and X and Y are some data structures (e.g. lists), and ( , ) is a
combinator, i.e. a function from DxD => D for D the domain of X and Y.
Right?
Brent
>
> It will be automatically Löbian. And, yes, it should not be too
> difficult to write a program in LISP, doing that. That is, starting
> from a first order logical specification of an interpreter, extending
> it into a Löbian machine.
>
> Bruno
>> probable that it is not Löbian. I don't see why they would have given
>> the induction axioms to Mars Rover (the induction axioms is what
>> gives the Löbian self-referential power).
"Löbian machine" is an idiosyncrasy that I use as a shorter expression
for what the logicians usually describes by "a sufficiently rich
theory".
I have not yet decided how exactly to define them.
I hesitate between a very weak sense, like any belief system (machine,
theory) closed under the Löb rule (which says that you can deduce p from
Bp -> p).
A stronger sense is: any belief system having Löb's formula in
it. So it contains the "formal Löb rule": B(Bp -> p) -> Bp.
But my current favorite definition is: any universal machine which can
prove p -> Bp for p sigma_1 (or equivalent). This I paraphrase in
layman's language by: any universal machine which knows that she is
universal. Sigma_1 propositions are those having the shape ExP(x) with
P decidable. You can intuit that *you* can do this, by testing P(0), P(1),
P(2), ... until you find an n such that P(n) holds.
A theorem prover which can prove all true sigma_1 propositions is
provably equivalent to a universal machine, and all universal
machines can prove (modulo modification of the language) the true
sigma_1 propositions.
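A hedged sketch (the function name is mine) of the "test P(0), P(1), P(2), ..." procedure just described: a Sigma_1 sentence has the shape Ex P(x) with P decidable, and if it is true, unbounded search eventually finds a witness.

```python
# Witness search for a Sigma_1 sentence Ex P(x), P decidable.

def sigma1_witness(P, bound=None):
    """Return the least n with P(n). Halts exactly when Ex P(x) is true;
    `bound` is only a guard for demonstrations, not part of the idea."""
    n = 0
    while bound is None or n < bound:
        if P(n):
            return n
        n += 1
    return None  # bound exhausted: undecided in general

# "There exists an even number greater than 5" is Sigma_1:
# sigma1_witness(lambda n: n > 5 and n % 2 == 0) returns 6.
```

This is why a machine that proves all true Sigma_1 sentences has full universal power: the search itself is a universal dovetailing procedure.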
PA and ZF are Löbian machines in that last sense (which implies the
weaker senses). They are emulated in the human brains of those who
study them, although they are easy to implement on computers.
A long time ago I concluded that some theorem provers, written by
Chang, and also by Boyer and Moore, are Löbian. When a child grasps
notions like "anniversary", "death", "forever", "potentially infinite",
it shows Löbianity.
Are humans Löbian? Hard to say, because they have a non-monotonic
layer (they can retract old beliefs), but it is clear they have a
Löbian machine (or entity) living inside. Fear of death, fear of the
unknown and fear of the others are typically Löbian.
Another definition: a Löbian entity is an entity whose 3-self-
referential beliefs obey the logics G and G*. Then you can apply
the Theaetetus theory to get the 1-knower logic, and the 1-physics,
etc. They all have the same theology, but the personal arithmetical
content of the "Bp" can vary a lot.
Bruno
Life, Mind, and Robots
The Ins and Outs of Embodied Cognition
Hybrid Neural Systems, 2000 - Springer
http://acs.ist.psu.edu/misc/dirk-files/Papers/EmbodiedCognition/Life,%20Mind%20and%20Robots_The%20Ins%20and%20Outs%20of%20Embodied%20Cognition%20.pdf
It turns out that they talk not about the Loeb theorem but rather about
the biologist Jacques Loeb.
Do you know why robotics people do not use the Löb theorem in practice?
Evgenii
On 20.08.2011 16:22 Bruno Marchal said the following:
>>> it is very probable that it is not Löbian. I don't see why they
>>> would have given the induction axioms to Mars Rover (the
>>> induction axioms is what gives the Löbian self-referential
> "Löbian machine" is an idiosyncrasy that I use as a shorter
> expression for what the logicians usually describes by "a
> sufficiently rich theory". I have not yet decide on how to exactly
> define them.
>
> I hesitate between a very weak sense, like any belief system
> (machine, theory) close for the Löb rule (which says that you can
> deduce p from Bp -> p). A stronger sense is : any belief system
> having the Löb's formula in it. So it contains the "formal Löb rule":
> B(Bp -> p) -> Bp.
>
> But my current favorite definition is: any universal machine which
> can prove p -> Bp for p sigma_1 (or equivalent). This I paraphrase in
> layman's language by: any universal machine which knows that she is
> universal. Sigma_1 proposition are those having the shape ExP(x)
> with P decidable. You can intuit that *you* can do, by testing P(0),
> P(1), P(2), ... until you find a n such that P(n).
>
> A theorem prover which can prove all true sigma_1 proposition is
> provably equivalent with a universal machine, and all universal
> machine can prove (modulo modification of the language) the true
> sigma_1 propositions.
>
> PA and ZF are Löbian machine in that last sense (which implies the
> weaker senses). They are emulated in the human brain of those who
> study them, although they are easy to implement on computers.
>
> A long time ago I concluded that some theorem prover, written by
> Chang, also by Boyer and Moore, are Löbian. When a child grasp notion
> like "anniversary", "death", "forever", "potentially infinite", they
> show Löbianity.
>
> Are humans Löbian? Hard to say, because they have a non monotonical
> layer (they can retrieve old beliefs), but it is clear they have a
> Löbian machine (or entity) living inside. Fear of death, fear of the
> unknown and fear of the others are typically Löbian.
>
> Another definition: a Löbian entity is an entity whose
> 3-self-referential beliefs obeys to the logics G and G*. Then you can
> On 8/19/2011 2:18 AM, Bruno Marchal wrote:
>>> So do you have a LISP program that will make my computer Lobian?
>>
>> It would be easier to do it by hands:
>> 1) develop a first order logic specification for your computer
>> (that is a first order axiomatic for its data structures, including
>> the elementary manipulations that your computer can do on them)
>> 2) add a scheme of induction axioms on those data structures. For
>> example, for the combinators, it would be like this:
>> "if P(K) and P(S), and if for all X and Y, P(X) & P(Y) implies
>> P((X,Y)), then for all X P(X)". And this for all "P"
>> describable in your language.
>
> Just to clarify P is some predicate, i.e. a function that returns #T
> or #F and X and Y are some data structures (e.g. lists) and ( , ) is
> a combinator, i.e. a function from DxD =>D for D the domain of X and
> Y. Right?
Predicates are more syntactical objects. They can be interpreted as
functions or relations, but in logic we distinguish explicitly the
syntax and the semantics. So an arithmetical predicate is just a
formula written with the usual symbols. Its intended meaning will be
true or false, relative to some model. For example, the predicate "x
is greater than y" is "Ez(y+z = x)".
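A hedged sketch (the function name is mine) of evaluating this example predicate. Over the natural numbers, letting z range from 1 gives strict "greater than" (allowing z = 0 would give >=); a bounded search suffices because any witness z is at most x:

```python
# Evaluate "x is greater than y", written as Ez (y + z = x), by finite
# search over the naturals. z starts at 1 to make the relation strict.

def greater(x, y):
    """Ez (y + z = x) with z >= 1, evaluated by bounded search."""
    return any(y + z == x for z in range(1, x + 1))

# greater(7, 3) is True (witness z = 4); greater(3, 7) is False.
```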
The semantics of combinators is rather hard, and it took time before
mathematicians found one. D^D needs to be isomorphic to D, because
there is only one domain (the collection of all combinators). But Dana
Scott has solved the problem, and found a notion of continuous
function making D^D isomorphic to D. Recursion theory also provides
an intuitive model, where a number can be seen both as a function and
as a number: just define a new operation on the natural numbers, "@", by
i @ j = phi_i(j). It is a bit nasty, given that such an operation will
be partial (in case phi_i(j) does not stop).
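A hedged toy model of that "@" operation (the program list is invented for illustration): fix an enumeration phi of programs, here a short list of Python lambda sources, and model partiality by returning None whenever phi_i(j) is undefined.

```python
# Toy model of i @ j = phi_i(j) for a made-up enumeration phi of programs.

programs = [
    "lambda j: j + 1",   # phi_0: successor
    "lambda j: j * 2",   # phi_1: doubling
    "lambda j: j // 0",  # phi_2: undefined everywhere (models non-halting)
]

def at(i, j):
    """i @ j = phi_i(j): a partial operation on the natural numbers."""
    try:
        return eval(programs[i])(j)
    except Exception:
        return None  # phi_i(j) is undefined, or i is out of range

# at(0, 5) returns 6; at(1, 5) returns 10; at(2, 5) returns None.
```

In the real recursion-theoretic model the enumeration ranges over all partial computable functions and non-halting cannot be caught by an exception; the None here only stands in for "does not stop".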
Bruno
>
> Brent
>
>>
>> It will be automatically Löbian. And, yes, it should not be to
>> difficult to write a program in LISP, doing that. That is, starting
>> from a first order logical specification of an interpreter,
>> extending it into a Löbian machine.
>>
>> Bruno
> I have browsed papers on Loebian embodiment, for example
>
> Life, Mind, and Robots
> The Ins and Outs of Embodied Cognition
> Hybrid Neural Systems, 2000 - Springer
> http://acs.ist.psu.edu/misc/dirk-files/Papers/EmbodiedCognition/Life,%20Mind%20and%20Robots_The%20Ins%20and%20Outs%20of%20Embodied%20Cognition%20.pdf
>
> It happened that they talk not about the Loeb theorem but rather
> about the biologist Jacques Loeb.
>
> Do you know why robotics people do not use the Löb theorem in
> practice?
Logicians tend to work in an ivory tower, and many despise
applications. Discoveries move slowly from one field to another. When
the comma was discovered, it took 300 years for it to become used in
applied science.
Now Löbianity is a conceptual thing, more important for religion and
fundamental matters. I do not advocate the implementation of Löbianity.
It makes more sense to let machines develop their Löbianity by
learning and evolution.
Löbianity is equivalent to correct self-reference for any entity
capable of adding and multiplying numbers. It is not useful for
controlling a machine. Löbian machines are not typical slaves. They can
develop a strong taste against authority. They don't need users.
Bruno
>>>> it is very probable that it is not Löbian. I don't see why they
>>>> would have given the induction axioms to Mars Rover (the
>>>> induction axioms is what gives the Löbian self-referential
>> "Löbian machine" is an idiosyncrasy that I use as a shorter
>> expression for what the logicians usually describes by "a
>> sufficiently rich theory". I have not yet decide on how to exactly
>> define them.
>>
>> I hesitate between a very weak sense, like any belief system
>> (machine, theory) close for the Löb rule (which says that you can
>> deduce p from Bp -> p). A stronger sense is : any belief system
>> having the Löb's formula in it. So it contains the "formal Löb rule":
>> B(Bp -> p) -> Bp.
>>
>> But my current favorite definition is: any universal machine which
>> can prove p -> Bp for p sigma_1 (or equivalent). This I paraphrase in
>> layman's language by: any universal machine which knows that she is
>> universal. Sigma_1 proposition are those having the shape ExP(x)
>> with P decidable. You can intuit that *you* can do, by testing P(0),
>> P(1), P(2), ... until you find a n such that P(n).
>>
>> A theorem prover which can prove all true sigma_1 proposition is
>> provably equivalent with a universal machine, and all universal
>> machine can prove (modulo modification of the language) the true
>> sigma_1 propositions.
>>
>> PA and ZF are Löbian machine in that last sense (which implies the
>> weaker senses). They are emulated in the human brain of those who
>> study them, although they are easy to implement on computers.
>>
>> A long time ago I concluded that some theorem prover, written by
>> Chang, also by Boyer and Moore, are Löbian. When a child grasp notion
>> like "anniversary", "death", "forever", "potentially infinite", they
>> show Löbianity.
>>
>> Are humans Löbian? Hard to say, because they have a non monotonical
>> layer (they can retrieve old beliefs), but it is clear they have a
>> Löbian machine (or entity) living inside. Fear of death, fear of the
>> unknown and fear of the others are typically Löbian.
>>
>> Another definition: a Löbian entity is an entity whose
>> 3-self-referential beliefs obeys to the logics G and G*. Then you can
>> apply the Theaetetus theory to get the 1-knower logic, and the
>> 1-physics, etc. They have all the same theology, but the personal
>> arithmetical content of the "Bp" can vary a lot.
>>
>> Bruno
>>
>> http://iridia.ulb.ac.be/~marchal/