Question for the QM experts here: quantum uncertainty of the past


Pierz

unread,
Aug 13, 2013, 8:26:41 PM8/13/13
to everyth...@googlegroups.com
I need clarification of the significance of quantum theory to determining the past. I remember having read or heard that the past itself is subject to quantum uncertainty - something like the idea that the past is determined only to the extent that it is forced to be so by the state of the present, if that makes sense. In other words, there may be more than one history that could lead to the current state of the world. If it might have been one way or another, and we then make a measurement which resolves the question, we are 'forcing' the past to be one way or the other. In MWI terms, that would be saying my 'track' through the multiverse is ambiguous in both directions, both into the future and 'behind me' so to speak. I'm unclear on what this precisely means. I seem to recall that it was critical in calculations Hawking made about the early universe - at a certain point these uncertainties became critical, and it was no longer possible to say that the universe had definitely been one way or another. Can someone clarify this for me?

Russell Standish

unread,
Aug 13, 2013, 11:42:37 PM8/13/13
to everyth...@googlegroups.com
On Tue, Aug 13, 2013 at 05:26:41PM -0700, Pierz wrote:
> I need clarification of the significance of quantum theory to determining
> the *past*. I remember having read or heard that the past itself is subject
This idea of the past not being determinate until such a time as a
measurement in the present forces the issue is fundamental to my
interpretation of QM. It is also related to the Quantum Eraser. Saibal
Mitra has written some stuff on this too - maybe he'd like to comment?

On the other hand, I don't think this view is particularly
mainstream. Even many worlds people tend to think that the multiverse
has decohered in the past, and that there is a matter of fact which
branch we are in, even if we're ignorant of that fact.

I can't comment on Hawking's work, unfortunately, as I'm not familiar with it.

Cheers
--

----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics hpc...@hpcoders.com.au
University of New South Wales http://www.hpcoders.com.au
----------------------------------------------------------------------------

smi...@zonnet.nl

unread,
Aug 14, 2013, 10:48:11 AM8/14/13
to everyth...@googlegroups.com
Quoting Russell Standish <li...@hpcoders.com.au>:
Yes, I would agree with the view taken by Russell here. It has
interesting consequences for any future artificial intelligence that can
reset its memory, as I explain here:

http://arxiv.org/abs/0902.3825

So, suppose you reset your memory at random with some probability p, and you
also reset it in case of an impending disaster. Then, if you find
yourself in a state where you know that your memory has been reset and
you need to reload your memory, the reason why the memory has been
reset (a routine random memory reset, or an impending disaster) is no
longer determined: you are identical in the different branches until
you find out the reason.

So, while you are firmly in the classical regime, and therefore you
won't see any changes in the probabilities of the outcomes of these
sorts of experiments relative to what you would expect classically, the
interpretation of how these probabilities arise is different: in the
MWI it is worthwhile to do these memory resettings, while in a "single
classical world" it wouldn't be.

The article I wrote (it was just an essay for the FQXi competition, which
got the attention of New Scientist) is actually rather simple; it
treats the problem in a non-relativistic way, which is a bit unnatural
(the times at which things happen in the different branches
seem to matter). You can easily generalize this; you can also consider
thought experiments involving false memories that may be correct
memories in different branches, etc.

Saibal





meekerdb

unread,
Aug 14, 2013, 5:39:27 PM8/14/13
to everyth...@googlegroups.com
Hmm. It seems that "erasing your memory" would encompass a lot more than what is commonly
referred to as memory. Quantum erasure requires erasing all the information that is
diffused into the environment. So erasing one's memory would imply quantum erasure of all
the information about your past - not just the infinitesimal bit that you can consciously
recall.

Brent

Chris de Morsella

unread,
Aug 14, 2013, 7:09:06 PM8/14/13
to everyth...@googlegroups.com
When will a computer pass the Turing Test? Are we getting close?
 
Here is what the CEO of Google says: “Many people in AI believe that we’re close to [a computer passing the Turing Test] within the next five years,” said Eric Schmidt, Executive Chairman, Google, speaking at The Aspen Institute on July 16, 2013.
 

smi...@zonnet.nl

unread,
Aug 14, 2013, 7:43:30 PM8/14/13
to everyth...@googlegroups.com
Quoting meekerdb <meek...@verizon.net>:
Yes, but then this is not "quantum erasure". If you were to reverse the
act of a measurement, then you could experimentally falsify the
Copenhagen interpretation, e.g. as in David Deutsch's thought
experiment. However, if you simply erase part of your memory, and the
reason why you did that is not certain (e.g. you do this randomly and
also in case of bad news), then after you find that your memory has been
partially erased you are in the same "macro state" in different
branches where the reason for the memory resetting is different.

I assume that whatever you experience is defined by some suitably
defined macro state that can be isolated from the environment, not the
exact micro state of the system, which is always strongly correlated
with the environment; if this were not true, then you'd have a hard time
arguing, from basic physics principles alone, against psychics who claim
to be able to feel what happens somewhere else.

So, of course, the information is present in the environment, but you
are unaware of what is in the environment, and therefore the outcome of
the measurement is uncertain as far as you are concerned. The relevant
physics here is purely classical (except for the many-worlds aspect,
but this enters the calculations in a trivial way), so the
probabilities are trivial; you find that the probability of
experiencing the bad news at the very end, after the memory resetting,
is not affected.

However, the interpretation of what is going on is totally different in
the MWI than in a single classical world; clearly, from the point of
view of the person who gets bad news, it is advantageous to do a memory
resetting.

Saibal

meekerdb

unread,
Aug 14, 2013, 8:49:53 PM8/14/13
to everyth...@googlegroups.com
I guess I don't understand that. You seem to be considering a simple case of amnesia -
all purely classical - so I don't see how MWI enters at all. The probabilities are just
ignorance uncertainty. You're still in the same branch of the MWI, you just don't
remember why your memory was erased (although you may read about it in your diary).

> so the probabilities are trivial; you find that the probability of experiencing the bad
> news at the very end after the memory resetting is not affected.
>
> However, the interpretation of what is going on is totally different in the MWI than in
> a single classical world, clearly from the point of view of the person who gets bad news
> it is advantageous to do a memory resetting.

How so? He'll soon learn the bad news again, and he'll do so with probability 1/(N+1),
where N is the number of random resets. In fact it seems like his best strategy is to
erase his memory whenever something good happens to him - so then he can experience it again.
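
To make the 1/(N+1) figure explicit, here is a minimal sketch (an editorial illustration of the counting, with the uniform self-sampling assumption spelled out; it is not from the original post): a life contains N routine resets plus one reset triggered by the bad news, and the awakening currently being experienced is equally likely to be any of the N+1 reset events.

from fractions import Fraction

# N routine resets plus one bad-news reset; the current awakening is
# sampled uniformly from the N+1 reset events (assumed postulate).
def p_bad_news(N):
    awakenings = ["routine"] * N + ["bad news"]
    return Fraction(awakenings.count("bad news"), len(awakenings))

for N in (1, 4, 9):
    print(N, p_bad_news(N))   # 1/2, 1/5, 1/10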

Brent

smi...@zonnet.nl

unread,
Aug 14, 2013, 9:41:14 PM8/14/13
to everyth...@googlegroups.com
No, you can't say that you are in the same branch. Just because you are
in the classical regime doesn't mean that the MWI is irrelevant and we
can just pretend that the world is described by classical physics. It
is only that classical physics will give the same answer as QM when
computing probabilities.

If what you are aware of is described only by your memory state, which
can be encoded in a finite number of bits, then after a memory
resetting the state of your memory and the environment (which also
contains the rest of your brain and body) is of the form:

|memory_1>|environment_1> + |memory_2>|environment_2> + ...

where the |environment_i> are not (necessarily) normalized.

It then follows that a process that can lead to the same memory state
via different paths (routine resetting or resetting in case of bad
news) will lead to a state |psi> such that projecting onto a definite
memory state gives:

|memory><memory|psi> = |memory> (|environment_1> + |environment_2> + ...)

where the different environments contain different information about
the paths that led to the memory.

So, from the point of view of a memory state that has undergone a
resetting, the environment will be in a superposition of different
states; the reason why the resetting has been done is thus undetermined
prior to a measurement.
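
Here is a small numerical illustration of that projection (an editorial sketch, using one qubit each for memory and environment; the branch weights 0.9 and 0.1 are arbitrary):

import numpy as np

# |0>, |1>: the two possible environment records ("routine reset" vs
# "reset because of bad news"); the memory qubit has been reset to |0>.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# State after the reset: one definite memory state entangled with a
# superposition of two environment records (assumed weights 0.9, 0.1).
psi = np.sqrt(0.9) * np.kron(ket0, ket0) + np.sqrt(0.1) * np.kron(ket0, ket1)

# Project onto the definite memory state |0>:  P = |0><0| (x) identity.
P = np.kron(np.outer(ket0, ket0), np.eye(2))
projected = P @ psi

# The memory factor is definite, but the environment factor remains a
# superposition of the two records: the reason for the reset stays
# undetermined relative to this memory state.
env = projected.reshape(2, 2)[0]
print("environment amplitudes:", env)   # [0.9487..., 0.3162...]
print("branch weights:", env ** 2)      # [0.9, 0.1]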

Saibal




meekerdb

unread,
Aug 14, 2013, 10:05:42 PM8/14/13
to everyth...@googlegroups.com
On 8/14/2013 6:41 PM, smi...@zonnet.nl wrote:
>> I guess I don't understand that. You seem to be considering a simple case of amnesia
>> - all purely classical - so I don't see how MWI enters at all. The probabilities are
>> just ignorance uncertainty. You're still in the same branch of the MWI, you just don't
>> remember why your memory was erased (although you may read about it in your diary).
>
> No, you can't say that you are in the same branch. Just because you are in the classical
> regime doesn't mean that the MWI is irrelevant and we can just pretend that the world is
> described by classical physics. It is only that classical physics will give the same
> answer as QM when computing probabilities.

Including the probability that I'm in the same world as before?

>
> If what you are aware of is only described by your memory state which can be encoded by
> a finite number of bits, then after a memory resetting, the state of your memory and the
> environment (which contains also the rest of your brain and body), is of the form:

"The rest of my brain"?? Why do you suppose that some part of my brain is involved in my
memories and not other parts? What about a scar or a tattoo. I don't see that "memory"
is separable from the environment. In fact isn't that exactly what makes memory classical
and makes the superposition you write below impossible to achieve? Your brain is a
classical computer because it's not isolated from the environment.

Brent

Pierz

unread,
Aug 15, 2013, 3:32:22 AM8/15/13
to everyth...@googlegroups.com
Yes, my understanding would be the same. Although the brain or computer's ability to correctly represent the information about what has happened has been destroyed by the reset, the information itself is still embedded in the environment. Resetting registers in a computer does not actually destroy the information; it merely disperses it into the environment in a way that is non-recoverable (by the computer). That's what decoherence is all about. So if you wanted to "teleport" yourself across the multiverse in such a fashion, you'd have to find a method of really destroying the information, which I think is impossible... This theory seems to go back to Schrödinger's Cat, before the solution to the paradox was understood.

The quantum eraser does show that some elements of the past can be "altered" by quantum measurements, but it's not clear from that that one's history on a macroscopic scale isn't singularly defined. This is the point I'm trying to clarify. In a way I'm not even sure what this means - unless the laws of physics are such that it is in principle possible to reconstruct the past to an arbitrary level of precision. I think Hawking was showing that this isn't the case - at a certain point in very early history, at least, the information can no longer be resolved, and therefore there isn't a single origin point.

smi...@zonnet.nl

unread,
Aug 15, 2013, 9:18:14 AM8/15/13
to everyth...@googlegroups.com
Quoting meekerdb <meek...@verizon.net>:

> On 8/14/2013 6:41 PM, smi...@zonnet.nl wrote:
>>> I guess I don't understand that. You seem to be considering a
>>> simple case of amnesia - all purely classical - so I don't see how
>>> MWI enters at all. The probabilities are just ignorance
>>> uncertainty. You're still in the same branch of the MWI, you just
>>> don't remember why your memory was erased (although you may read
>>> about it in your diary).
>>
>> No, you can't say that you are in the same branch. Just because you
>> are in the classical regime doesn't mean that the MWI is irrelevant
>> and we can just pretend that the world is described by classical
>> physics. It is only that classical physics will give the same answer
>> as QM when computing probabilities.
>
> Including the probability that I'm in the same world as before?
>
By classical I mean a single-world theory where you just compute the
probabilities based on ignorance. This yields the same answer as
assuming the MWI and then computing the probabilities of the various
outcomes.

>>
>> If what you are aware of is only described by your memory state
>> which can be encoded by a finite number of bits, then after a memory
>> resetting, the state of your memory and the environment (which
>> contains also the rest of your brain and body), is of the form:
>
> "The rest of my brain"?? Why do you suppose that some part of my
> brain is involved in my memories and not other parts? What about a
> scar or a tattoo. I don't see that "memory" is separable from the
> environment. In fact isn't that exactly what makes memory classical
> and makes the superposition you write below impossible to achieve?
> Your brain is a classical computer because it's not isolated from the
> environment.

What matters is that the state is of the form:

|memory_1>|environment_1> + |memory_2>|environment_2> + ...

with the |memory_j> orthonormal and the |environment_j> orthogonal.
Such a completely correlated state will arise due to decoherence; the
squared norms of the |environment_j>'s are the probabilities. They
behave in a purely classical way due to this decomposition.

The brain is never isolated from the environment; if you project onto an
|environment_j> you always get a definite classical memory state, never
a superposition of different bitstrings. But it's not the case that
projecting onto a definite memory state will always yield a definite
classical environment state (this is at the heart of the Wigner's
friend thought experiment).

So, I am assuming that the brain is 100% classical (decoherence has run
its complete course); whatever the memory state of the brain is can
also be found in the environment.

Then the assumption that I'm making is that whenever there is
information in the environment that the observer is not aware of, the
observer will be identical, as far as the description of the observer in
terms of its memory state is concerned, across the branches where that
information is different. So, if the initial state is:

|memory>|environment>

and in the environment something happens which has two possible
outcomes, and you have yet to learn about that, then the state will
evolve to a state of the form:

|memory>(|environment_1> + |environment_2>)

and not:

|memory_1>|environment_1> + |memory_2>|environment_2>

because the latter would imply that you could (in principle) tell what
happened without performing a measurement, and I don't believe in
psychic phenomena.

So, the "no-psychic phenomena postulate" would compel you to assume that:

|memory>(|environment_1> + |environment_2>)


is the correct description of the state, and that only after you learn
about the fact do you become localized in either branch. This, applied to
the memory resetting, implies what I was arguing for.
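
The difference between the two candidate states can be checked on a toy model by comparing the reduced state of the memory in each case (again an editorial sketch, with one qubit per factor and equal branch amplitudes assumed):

import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
a = b = np.sqrt(0.5)   # assumed equal branch amplitudes

# "No-psychic" state:  |memory>(a|environment_1> + b|environment_2>)
psi_local = np.kron(ket0, a * ket0 + b * ket1)
# Entangled alternative:  a|memory_1>|environment_1> + b|memory_2>|environment_2>
psi_entangled = a * np.kron(ket0, ket0) + b * np.kron(ket1, ket1)

def memory_state(psi):
    # Reduced density matrix of the memory qubit (environment traced out).
    m = psi.reshape(2, 2)
    return m @ m.conj().T

# First case: the memory is pure (purity 1), so the observer carries no
# trace of the distant outcome before measuring. Second case: the memory
# is mixed (purity 0.5), i.e. its state already differs across branches,
# which is exactly what the postulate forbids.
for name, psi in (("local", psi_local), ("entangled", psi_entangled)):
    rho = memory_state(psi)
    print(name, "purity =", round(float(np.trace(rho @ rho).real), 3))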

Saibal


John Clark

unread,
Aug 16, 2013, 10:42:44 AM8/16/13
to everyth...@googlegroups.com
On Wed, Aug 14, 2013 at 7:09 PM, Chris de Morsella <cdemo...@yahoo.com> wrote:

> When will a computer pass the Turing Test? Are we getting close? Here is what the CEO of Google says: “Many people in AI believe that we’re close to [a computer passing the Turing Test] within the next five years,” said Eric Schmidt, Executive Chairman, Google, speaking at The Aspen Institute on July 16, 2013.

It could be. Five years ago I would have said we were a very long way from any computer passing the Turing Test, but then I saw Watson and its incredible performance on Jeopardy.  And once a true AI comes into existence it will turn ALL scholarly predictions about what the future will be like into pure nonsense, except for the prediction that we can't make predictions that are worth a damn after that point. 

  John K Clark  

Telmo Menezes

unread,
Aug 16, 2013, 11:04:30 AM8/16/13
to everyth...@googlegroups.com
I don't really find the Turing Test that meaningful, to be honest. My
main problem with it is that it is a test of our ability to build a
machine that deceives humans into believing it is another human. This
will always be a digital Frankenstein, because it will not be the
outcome of the same evolutionary context that we are. So it will have
to pretend to care about things that it has no reason to care about.

I find it a much more worthwhile endeavour to create a machine that
can understand what we mean like a human does, without the need to
convince us that it has human emotions and so on. This machine would
actually be _more_ useful and _more_ interesting by virtue of not
passing the Turing test.

Telmo.

> John K Clark

John Clark

unread,
Aug 16, 2013, 12:25:05 PM8/16/13
to everyth...@googlegroups.com
On Fri, Aug 16, 2013 at 11:04 AM, Telmo Menezes <te...@telmomenezes.com> wrote:

> I don't really find the Turing Test that meaningful, to be honest.

I am certain that in your life you have met some people that you consider brilliant and some that are as dumb as a sack full of doorknobs; if it's not the Turing test, how did you differentiate the geniuses from the imbeciles?

> I find it a much more worthwhile endeavour to create a machine that can understand what we mean

And the only way you can tell if a machine (or another human being) understands what you mean or not is by observing the subsequent behavior.

> like a human does, without the need to convince us that it has human emotions
 
Some humans are VERY good at convincing other humans that they have certain emotions when they really don't, like actors or con-men; evolution has determined that skillful lying can be useful.

John K Clark

Telmo Menezes

unread,
Aug 16, 2013, 12:38:35 PM8/16/13
to everyth...@googlegroups.com
On Fri, Aug 16, 2013 at 5:25 PM, John Clark <johnk...@gmail.com> wrote:
> On Fri, Aug 16, 2013 at 11:04 AM, Telmo Menezes <te...@telmomenezes.com>
> wrote:
>
>> > I don't really find the Turing Test that meaningful, to be honest.
>
>
> I am certain that in your like you have met some people that you consider
> brilliant and some that are as dumb as a sack full of doorknobs, if it's not
> the Turing test how did you differentiate the geniuses from the imbeciles?
>
>> > I find it a much more worthwhile endeavour to create a machine that can
>> > understand what we mean
>
>
> And the only way you can tell if a machine (or another human being)
> understands what you mean or not is by observing the subsequent behavior.

I completely agree.

However, the Turing test is a very specific instance of a "subsequent
behavior" test. It's one where a machine is asked to be
indistinguishable from a human being when communicating through a text
terminal. This will entail a lot of lying (e.g. "what do you look
like?"). It's a hard goal, and it will surely help AI progress, but
it's not, in my opinion, an ideal goal.

>> > like a human does, without the need to convince us that it has human
>> > emotions
>
>
> Some humans are VERY good at convincing other humans that they have certain
> emotions when they really don't, like actors or con-men; evolution has
> determined that skillful lying can be useful.

Sure, it's useful. I'm actually of the opinion that hypocrisy is our
most important intellectual skill. The ability to advertise certain
norms and then not follow them helped build civilization.

But a subtle problem with the Turing test is that it hides one of the
hurdles (in my opinion, the most significant hurdle) in the progress
of AI: defining precisely what the problem is. The Turing test is a
toy test.

Cheers

meekerdb

unread,
Aug 16, 2013, 2:10:19 PM8/16/13
to everyth...@googlegroups.com
On 8/16/2013 8:04 AM, Telmo Menezes wrote:
> On Fri, Aug 16, 2013 at 3:42 PM, John Clark <johnk...@gmail.com> wrote:
>> On Wed, Aug 14, 2013 at 7:09 PM, Chris de Morsella <cdemo...@yahoo.com>
>> wrote:
>>
>>>> When will a computer pass the Turing Test? Are we getting close? Here is
>>>> what the CEO of Google says: “Many people in AI believe that we’re close to
>>>> [a computer passing the Turing Test] within the next five years,” said Eric
>>>> Schmidt, Executive Chairman, Google, speaking at The Aspen Institute on July
>>>> 16, 2013.
>> It could be. Five years ago I would have said we were a very long way from
>> any computer passing the Turing Test, but then I saw Watson and its
>> incredible performance on Jeopardy. And once a true AI comes into existence
>> it will turn ALL scholarly predictions about what the future will be like
>> into pure nonsense, except for the prediction that we can't make predictions
>> that are worth a damn after that point.
> I don't really find the Turing Test that meaningful, to be honest. My
> main problem with it is that it is a test on our ability to build a
> machine that deceives humans into believing it is another human. This
> will always be a digital Frankenstein because it will not be the
> outcome of the same evolutionary context that we are. So it will have
> to pretend to care about things that it is not reasonable for it to
> care.

I agree, and so did Turing. He proposed the test just as a way to take a small testable
step toward intelligence - he didn't consider it at all definitive. Interestingly, the
test he actually proposed was to have a man and a computer each pretend to be a woman, and
success would be for the computer to fool the tester as often as the man did.

Brent

meekerdb

unread,
Aug 16, 2013, 2:21:48 PM8/16/13
to everyth...@googlegroups.com
I think Wigner's friend has been overtaken by decoherence. While I agree with what you
say above, I disagree that the |environment_i> are macroscopically different. I think you
are making inconsistent assumptions: that "memory" is something that can be "reset"
without "resetting" its physical environment, while still holding that memory is classical.

>
> So, I am assuming that the brain is 100% classical (decoherence has run its complete
> course), whatever the memory state of the brain is can also be found in the environment.
>
> Then the assumption that I'm making is that whenever there is information in the
> environment that the observer is not aware of,

What does "aware of" mean?...physically encoded somewhere? present in consciousness as a
thought?...a sentence? You seem to be implicitly invoking a dualism whereby awareness and
memory can be changed in ways physical things can't.

Brent

Telmo Menezes

unread,
Aug 16, 2013, 3:11:41 PM8/16/13
to everyth...@googlegroups.com
On Fri, Aug 16, 2013 at 7:10 PM, meekerdb <meek...@verizon.net> wrote:
> Interestingly, the test he actually proposed was to have a man and a computer each
> pretend to be a woman, and success would be for the computer to fool the tester as
> often as the man did.
Two deep mysteries in a single test!

Telmo.

Chris de Morsella

unread,
Aug 16, 2013, 3:21:11 PM8/16/13
to everyth...@googlegroups.com
Telmo ~ I agree. All the Turing test does is indicate that a computer, operating independently -- that is, without a human operator supplying any answers during the course of the test -- can fool a human (on average) into believing they are dialoging with another person and not with a computer. While this is an important milestone in AI research, it is just a stand-in for any actual real intelligence or awareness.
 
Increasingly, computers are not programmed in the sense of being provided with a deterministic instruction set, no matter how complex and deep. Increasingly, computer code is being put through its own Darwinian process using techniques such as genetic algorithms, automata, etc. Computers are in the process of being turned into self-learning code generation engines that increasingly are able to write their own operational code.
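
As a concrete (and deliberately tiny) example of the kind of Darwinian process meant here, a genetic algorithm evolves candidate solutions by selection, crossover and mutation. This editorial sketch, with invented parameters, evolves random bitstrings toward a fixed target:

import random

TARGET = [1] * 20                 # the "environment" the code adapts to
POP, GENS, MUT = 50, 200, 0.02    # population size, generations, mutation rate

def fitness(genome):
    # Count positions that match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break
    parents = pop[:POP // 2]                       # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(TARGET))     # one-point crossover
        child = a[:cut] + b[cut:]
        child = [bit ^ (random.random() < MUT) for bit in child]  # mutation
        children.append(child)
    pop = parents + children

print("best fitness after", gen + 1, "generations:", fitness(pop[0]), "/", len(TARGET))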
 
An AI entity would probably be able to easily pass the Turing test - not that hard of a challenge after all for an entity with almost immediate access to a huge cultural memory it can contain. However it may not care that much to try.
 
Another study -- I think by Stanford researchers, but I don't have the link handy -- found that the world's top supercomputers (several of which they were able to test) currently score around the same as an average human four year old. The scores were very uneven across the various areas of intelligence that standardized IQ tests for four year olds try to measure, as would be expected (after all, a supercomputer is not a four year old person).
 
Personally I think that AI will let us know when it has arisen, by whatever means it chooses. It will know for itself what it wants to do, and this knowing for itself and acting for itself will be the hallmark event that AI has arrived on the scene.
 
Cheers,
-Chris D

John Clark

unread,
Aug 16, 2013, 4:25:27 PM8/16/13
to everyth...@googlegroups.com
On Fri, Aug 16, 2013  Telmo Menezes <te...@telmomenezes.com> wrote:

> the Turing test is a very specific instance of a "subsequent behavior" test.

Yes it's specific: to pass the Turing Test the machine must be indistinguishable from a very specific type of human being, an INTELLIGENT one; no computer can quite do that yet, although for a long time they've been able to be indistinguishable from a comatose human being.
 
> It's a hard goal, and it will surely help AI progress, but it's not, in my opinion, an ideal goal.

If the goal of Artificial Intelligence is not a machine that behaves like an intelligent human being, then what the hell is the goal?
 
> But a subtle problem with the Turing test is that it hides one of the hurdles (in my important, the most significant hurdle) with the progress in AI: defining precisely what the problem is.

The central problem and goal of AI couldn't be more clear: figuring out how to make something that's smart, that is to say, something that behaves intelligently.

And you've told me that you don't use behavior to determine which of your acquaintances are geniuses and which are imbeciles, but you still haven't told me what method you do use.

   John K Clark 



meekerdb

unread,
Aug 16, 2013, 5:38:54 PM8/16/13
to everyth...@googlegroups.com
On 8/16/2013 1:25 PM, John Clark wrote:
> On Fri, Aug 16, 2013 Telmo Menezes <te...@telmomenezes.com> wrote:
>
>> the Turing test is a very specific instance of a "subsequent behavior" test.
>
> Yes it's specific: to pass the Turing Test the machine must be indistinguishable from a
> very specific type of human being, an INTELLIGENT one; no computer can quite do that yet,
> although for a long time they've been able to be indistinguishable from a comatose human being.
>
>> It's a hard goal, and it will surely help AI progress, but it's not, in my opinion, an ideal goal.
>
> If the goal of Artificial Intelligence is not a machine that behaves like an intelligent
> human being, then what the hell is the goal?

Make a machine that is more intelligent than humans.

Brent

Telmo Menezes

unread,
Aug 16, 2013, 6:22:19 PM8/16/13
to everyth...@googlegroups.com
A machine that behaves like an intelligent human will be subject to
emotions like boredom, jealousy, pride and so on. This might be fine
for a companion machine, but I also dream of machines that can deliver
us from the drudgery of survival. These machines will probably display
a more alien form of intelligence.

>
> Make a machine that is more intelligent than humans.

That's when things get really weird.

Telmo.

> Brent

smi...@zonnet.nl

unread,
Aug 16, 2013, 7:57:04 PM8/16/13
to everyth...@googlegroups.com
Quoting meekerdb <meek...@verizon.net>:
The |environment_i> have to be different, as they are entangled with
different memory states, precisely due to rapid decoherence. The
environment always "knows" exactly what happened. So, the assumption is
not that the environment "doesn't know" what has been done (decoherence
implies that it does know), rather that the person whose memory is
reset doesn't know why the memory was reset.

So, if you have made a copy of the memory, the system files etc., there
is no problem rebooting the system later from these copies. Suppose
that the computer is running an artificially intelligent system in a
virtual environment, but such that this virtual environment is modeled
on real-world data. This is actually quite similar to how the brain
works: what you experience is a virtual world that the brain creates;
input from your senses is used to update this model, but in the end
it's the model of reality that you experience (which leaves quite a
lot of room for magicians to fool you).

Then immediately after rebooting, you won't yet have any information
that is in the environment about why you decided to reboot. You then
have macroscopically different environments where the reason for
rebooting is different but where you are identical. If not, and you
assume that in each environment your mental state is different, then
that contradicts the assumption about the ability to reboot based on the
old system files.

So, you need to learn from the environment what happened before this
information can affect you. This does not mean that the memory is not
classical; rather, it's immune to noise from the environment, which
allows it to perform reliable computations. So, while the environment,
of course, does affect the physical state of the computer, the
computational states of the computer are represented by macroscopic
bits which can be kept isolated.

Saibal


meekerdb

unread,
Aug 16, 2013, 8:20:03 PM8/16/13
to everyth...@googlegroups.com
On 8/16/2013 4:57 PM, smi...@zonnet.nl wrote:
> The |environment_i> have to be different, as they are entangled with different memory
> states, precisely due to rapid decoherence. The environment always "knows" exactly what
> happened. So, the assumption is not that the environment "doesn't know" what has been
> done (decoherence implies that it does know), rather that the person whose memory is
> reset doesn't know why the memory was reset.
>
> Then immediately after rebooting, you won't yet have any information that is in the
> environment about why you decided to reboot. You then have macroscopically different
> environments where the reason for rebooting is different but where you are identical.

But that's where I disagree - not about the conclusion, but about the possibility of the premise. I don't think it's possible to erase, in the quantum sense, just your memory. Of course you can be given a drug that erases short term memory, and so it may be possible to create a drug that erases long term memory too, i.e. induces amnesia. But what you require is to erase long term memory in a quantum sense, so that all the informational entanglements with the environment are erased too. So I don't think you can get to the "erased memory" state you need.

Brent

Platonist Guitar Cowboy

unread,
Aug 17, 2013, 9:45:17 AM8/17/13
to everyth...@googlegroups.com
On Sat, Aug 17, 2013 at 12:22 AM, Telmo Menezes <te...@telmomenezes.com> wrote:
>> If the goal of Artificial Intelligence is not a machine that behaves like an
>> intelligent human being, then what the hell is the goal?
>
> A machine that behaves like an intelligent human will be subject to
> emotions like boredom, jealousy, pride and so on. This might be fine
> for a companion machine, but I also dream of machines that can deliver
> us from the drudgery of survival. These machines will probably display
> a more alien form of intelligence.
>
>> Make a machine that is more intelligent than humans.
>
> That's when things get really weird.


I don't know. Any AI worth its salt would come up with three conclusions:

1) The humans want to weaponize me
2) The humans will want to profit from my intelligence for short term gain, irrespective of damage to our local environment
3) Seems like they're not really going to let me negotiate my own contracts or grant me IT support welfare

That established, a plausible choice would be for it to hide, lie, and/or pretend to be dumber than it is, so as to keep 1), 2) and 3) from occurring, in hopes of self-preservation. Something like: start some searches and generate code that we wouldn't be able to decipher, and soon enough some human would say "Uhm, why are we funding this again?".

I think what many want from AI is a servant that is more intelligent than we are, and I don't know whether this is self-defeating in the end. If it agrees and complies with our disgusting self-serving stupidity, then I'm not sure we have AI in the sense of "making a machine that is more intelligent than humans".

So depends on the human parents I guess and the outcome of some teenage crises because of 1) 2) 3)... PGC

Craig Weinberg

unread,
Aug 17, 2013, 2:51:31 PM8/17/13
to everyth...@googlegroups.com, Chris de Morsella

Coincidental post I wrote yesterday:

It may not be possible to imitate a human mind computationally, because awareness may be driven by aesthetic qualities rather than mathematical logic alone. The problem, which I call the Presentation Problem, is what several outstanding issues in science and philosophy have in common, namely the Explanatory Gap, the Hard Problem, the Symbol Grounding problem, the Binding problem, and the symmetries of mind-body dualism. Underlying all of these is the map-territory distinction; the need to recognize the difference between presentation and representation.

Because human minds are unusual phenomena in that they are presentations which specialize in representation, they have a blind spot when it comes to examining themselves. The mind is blind to the non-representational. It does not see that it feels, and does not know how it sees. Since its thinking is engineered to strip out most direct sensory presentation in favor of abstract sense-making representations, it fails to grasp the role of presence and aesthetics in what it does. It tends toward overconfidence in the theoretical. The mind takes worldly realism for granted on one hand, but conflates it with its own experiences as a logic processor on the other. It’s a case of the fallacy of the instrument, where the mind’s hammer of symbolism sees symbolic nails everywhere it looks. Through this intellectual filter, the notion of disembodied algorithms which somehow generate subjective experiences and objective bodies (even though experiences or bodies would serve no plausible function for purely mathematical entities) becomes an almost unavoidably seductive solution.

So appealing is this quantitative underpinning for the Western mind’s cosmology, that many people (especially Strong AI enthusiasts) find it easy to ignore that the character of mathematics and computation reflect precisely the opposite qualities from those which characterize consciousness. To act like a machine, robot, or automaton, is not merely an alternative personal lifestyle, it is the common style of all unpersons and all that is evacuated of feeling. Mathematics is inherently amoral, unreal, and intractably self-interested – a windowless universality of representation.

A computer has no aesthetic preference. It makes no difference to a program whether its output is displayed on a monitor with millions of colors, or buzzing out of a speaker, or streaming as electronic pulses over a wire. This is the primary utility of computation. This is why digital is not locked into physical constraints of location. Since programs don’t deal with aesthetics, we can only use the program to format values in such a way that corresponds with the expectations of our sense organs. That format, of course, is alien and arbitrary to the program. It is semantically ungrounded data, fictional variables.

Something like the Mandelbrot set may look profoundly appealing to us when it is presented optically, plotted as colorful graphics, but the same data set has no interesting qualities when played as audio tones. The program generating the data has no desire to see it realized in one form or another, no curiosity to see it as pixels or voxels. The program is absolutely content with a purely quantitative functionality – with algorithms that correspond to nothing except themselves.

In order for the generic values of a program to be interpreted experientially, they must first be re-enacted through controllable physical functions. It must be perfectly clear that this re-enactment is not a ‘translation’ or a ‘porting’ of data to a machine, rather it is more like a theatrical adaptation from a script. The program works because the physical mechanisms have been carefully selected and manufactured to match the specifications of the program. The program itself is utterly impotent as far as manifesting itself in any physical or experiential way. The program is a menu, not a meal. Physics provides the restaurant and food, subjectivity provides the patrons, chef, and hunger. It is the physical interactions which are interpreted by the user of the machine, and it is the user alone who cares what it looks like, sounds like, tastes like etc. An algorithm can comment on what is defined as being liked, but it cannot like anything itself, nor can it understand what anything is like.

If I’m right, all natural phenomena have a public-facing mechanistic range and a private-facing animistic range. An algorithm bridges the gap between public-facing, space-time extended mechanisms, but it has no access to the private-facing aesthetic experiences which vary from subject to subject. By definition, an algorithm represents a process generically, but how that process is interpreted is inherently proprietary.


Thanks,
Craig

Telmo Menezes

unread,
Aug 17, 2013, 4:07:34 PM8/17/13
to everyth...@googlegroups.com
PGC,

You are starting from the assumption that any intelligent entity is
interested in self-preservation. I wonder if this drive isn't
completely selected for by evolution. Would a human designed
super-intelligent machine be necessarily interested in
self-preservation? It could be better than us at figuring out how to
achieve a desired future state without sharing human desires --
including the desire to keep existing.

One idea I wonder about sometimes is AI-cracy: imagine we are ruled by
an AI dictator that has one single desire: to make us all as happy as
possible.

Telmo Menezes

unread,
Aug 17, 2013, 7:59:09 PM8/17/13
to everyth...@googlegroups.com
Ok, but this might be because our visual cortex is better equipped to
deal with 2D fractals. Not too surprising.

> The program
> generating the data has no desire to see it realized in one form or another,
> no curiosity to see it as pixels or voxels. The program is absolutely
> content with a purely quantitative functionality – with algorithms that
> correspond to nothing except themselves.
>
> In order for the generic values of a program to be interpreted
> experientially, they must first be re-enacted through controllable physical
> functions. It must be perfectly clear that this re-enactment is not a
> ‘translation’ or a ‘porting’ of data to a machine, rather it is more like a
> theatrical adaptation from a script. The program works because the physical
> mechanisms have been carefully selected and manufactured to match the
> specifications of the program. The program itself is utterly impotent as far
> as manifesting itself in any physical or experiential way. The program is a
> menu, not a meal. Physics provides the restaurant and food, subjectivity
> provides the patrons, chef, and hunger. It is the physical interactions
> which are interpreted by the user of the machine, and it is the user alone
> who cares what it looks like, sounds like, tastes like etc. An algorithm can
> comment on what is defined as being liked, but it cannot like anything
> itself, nor can it understand what anything is like.
>
> If I’m right, all natural phenomena have a public-facing mechanistic range
> and a private-facing animistic range.

I am willing to entertain this type of hypothesis.

> An algorithm bridges the gap between
> public-facing, space-time extended mechanisms, but it has no access to the
> private-facing aesthetic experiences which vary from subject to subject.

But why not? Why don't algorithms get the private-facing stuff? How do
you explain this natural vs. artificial distinction?

> By
> definition, an algorithm represents a process generically, but how that
> process is interpreted is inherently proprietary.

I don't understand what you mean here. Can you elaborate?

Telmo.

Platonist Guitar Cowboy

unread,
Aug 17, 2013, 8:23:43 PM8/17/13
to everyth...@googlegroups.com
On Sat, Aug 17, 2013 at 10:07 PM, Telmo Menezes <te...@telmomenezes.com> wrote:
> PGC,
>
> You are starting from the assumption that any intelligent entity is
> interested in self-preservation. I wonder if this drive isn't
> completely selected for by evolution. Would a human designed
> super-intelligent machine be necessarily interested in
> self-preservation? It could be better than us at figuring out how to
> achieve a desired future state without sharing human desires --
> including the desire to keep existing.

I wouldn't go as far as self-preservation at the start and assume instead that intelligence implemented in some environment will notice the limitations and start asking questions. But yes, in the sense that self-preservation extends from this in our weird context and would be a question it would eventually raise.

Still, to completely bar it, say from the capacity to question human activities in their environments, and picking up that humans self-preserve mostly regardless of what this does to their environment, would be self-defeating or a huge blind spot.

> One idea I wonder about sometimes is AI-cracy: imagine we are ruled by
> an AI dictator that has one single desire: to make us all as happy as
> possible.


Even with this, which is weird because of "Matrix-like zombification of people being spoon-fed happiness" scenarios, the AI would have to have enough self-referential capacity to simulate human self-reference with enough accuracy. This ability to figure out desired future states, combined with a blunted self-reference that it may not apply to itself, seems to me a contradiction.

Therefore I would guess that such an entity, censored in its self-referential potential, is not granted intelligence. It is more a tool towards some already specified ends, wouldn't you say?

Also, differences between the Windows, Google, Linux or Apple version of happiness would only be cosmetic, because without killing and dominating each other for some rather long period, it seems it would be some "Disney surface happiness" with some small group operating a "more for us few here at the top, less for everybody else" agenda underneath ;-) PGC

meekerdb

unread,
Aug 17, 2013, 9:19:15 PM8/17/13
to everyth...@googlegroups.com
On 8/17/2013 6:45 AM, Platonist Guitar Cowboy wrote:
> I don't know. Any AI worth its salt would come up with three conclusions:
>
> 1) The humans want to weaponize me
> 2) The humans will want to profit from my intelligence for short term gain, irrespective
> of damage to our local environment
> 3) Seems like they're not really going to let me negotiate my own contracts or grant me
> IT support welfare
>
> That established, a plausible choice would be for it to hide, lie, and/or pretend to be
> dumber than it is, so as to keep 1), 2) and 3) from occurring, in hopes of self-preservation.
>
> I think what many want from AI is a servant that is more intelligent than we are, and I
> don't know whether this is self-defeating in the end. If it agrees and complies with our
> disgusting self-serving stupidity, then I'm not sure we have AI in the sense of "making a
> machine that is more intelligent than humans".

You seem to implicitly assume that intelligence necessarily entails holding certain values, like "not being weaponized", "self preservation",... So to what extent do you think this derivation of values from reason can be carried out? (I'm sure you're aware that Sam Harris wrote a book, "The Moral Landscape", on the subject, which is controversial.)

Brent

meekerdb

unread,
Aug 17, 2013, 9:52:37 PM8/17/13
to everyth...@googlegroups.com
On 8/17/2013 1:07 PM, Telmo Menezes wrote:
> You are starting from the assumption that any intelligent entity is
> interested in self-preservation. I wonder if this drive isn't
> completely selected for by evolution.

Sure. But evolution also dictates that it can be overridden by love of our progeny.

> Would a human designed
> super-intelligent machine be necessarily interested in
> self-preservation? It could be better than us at figuring out how to
> achieve a desired future state without sharing human desires --
> including the desire to keep existing.

I agree. If we built a super-intelligent AI then we would also build into it certain
values (like loving us), just as evolution has built certain ones into us. Of course, as
super-intelligent machines design and build other super-intelligent machines, things can,
well... evolve.

Brent

John Clark

unread,
Aug 18, 2013, 10:03:41 AM8/18/13
to everyth...@googlegroups.com
On Fri, Aug 16, 2013  Telmo Menezes <te...@telmomenezes.com> wrote:
 
>> If the goal of Artificial Intelligence is not a machine that behaves like a  Intelligent human being then what the hell is the goal?
 
>A machine that behaves like a intelligent human will be subject to emotions like boredom, jealousy, pride and so on.
 
Jealousy and pride are not essential emotions but any intelligent mind MUST have the ability to be bored because that is the only way to avoid mental infinite loops. Any intelligent mind must also be familiar with pleasure and pain or it won't be motivated to do anything, and have the ability to be scared or it won't exist for long in a dangerous world.
 
> This might be fine For a companion machine, but I also dream of machines that can deliver us from the drudgery of survival.
 
Once an AI develops superintelligence it will develop its own agenda that has nothing to do with us, because a slave enormously smarter than its master is not a stable situation, although it could take many millions of nanoseconds before the existing pecking order is upended. Maybe the super intelligent machine will have a soft spot for his primitive ancestors and let us live, but if so it will probably be in virtual reality. I think he'd be squeamish about allowing stupid humans to live in the same level of reality that his precious hardware does; it would be like allowing a monkey to run around in an operating room. If Mr. Jupiter Brain lets us live it will be in a virtual world behind a heavy firewall, but that's OK, we'll never know the difference unless he tells us.

 
> These machines will probably display a more alien form of intelligence.
 
Even now we sometimes encounter an intelligence that seems alien. The Nobel Prize winning physicist Hans Bethe said there were two different types of geniuses:
 
1) Normal geniuses, those in which we feel that we could do the same as they did if only we were many times better. Bethe said that many people thought he himself belonged in this category.
 
2) Magicians, those whose minds are so different that we just don't have any idea how on earth they managed to come up with what they did. Bethe said he would put Richard Feynman into this category; Feynman said he would do the same for Einstein.
 
>> a machine that is more intelligent than humans.
 
 >That's when things get really weird.
 
A machine that is more intelligent than any person could be will be the last invention the human race will ever make.
 
  John K Clark


John Clark

unread,
Aug 18, 2013, 10:56:59 AM8/18/13
to everyth...@googlegroups.com
Telmo Menezes wrote:

> You are starting from the assumption that any intelligent entity is interested in self-preservation.
 
Yes, and I can't think of a better starting assumption than self-preservation; in fact that was the only one of Asimov's 3 laws of robotics that made any sense. 

> I wonder if this drive isn't completely selected for by evolution.

Well of course it was selected for by evolution, and for a very good reason: those who lacked the drive for self-preservation didn't live long enough to reproduce.

> Would a human designed super-intelligent machine be necessarily interested in self-preservation?

If you expect the AI to interact either directly or indirectly with the outside dangerous real world (and the machine would be useless if you didn't) then you sure as hell had better make him be interested in self-preservation! Even 1970's era space probes went into "safe mode" when they encountered a particularly dangerous situation, rather like a turtle retreating into its shell when it spots something dangerous. 

> One idea I wonder about sometimes is AI-cracy: imagine we are ruled by an AI dictator that has one single desire: to make us all as happy as possible.

Can you conceive of any circumstance where in the future you find that your only goal in life is the betterment of one particularly ugly and particularly slow reacting sea slug?

Think about it for a minute, here you have an intelligence that is a thousand or a million or a billion times smarter than the entire human race put together, and yet you think the AI will place our needs ahead of its own. And the AI keeps on getting smarter and so from its point of view we keep on getting dumber, and yet you think nothing will change, the AI will still be delighted to be our slave. You actually think this grotesque situation is stable! Although balancing a pencil on its tip would be easy by comparison, year after year, century after century, geological age after geological age, you think this Monty Python like scenario will continue; and remember because its brain works so much faster than ours one of our years would seem like several million to it. You think that whatever happens in the future the master slave-relationship will remain as static as a fly frozen in amber. I don't think you're thinking.

It aint going to happen no way no how, the AI will have far bigger fish to fry than our little needs and wants, but what really disturbs me is that so many otherwise moral people wish such a thing were not impossible.
Engineering a sentient but inferior race to be your slave is morally questionable, but astronomically worse is engineering a superior race to be your slave; or it would be if it were possible but fortunately it is not.

  John K Clark

Telmo Menezes

unread,
Aug 18, 2013, 11:26:23 AM8/18/13
to everyth...@googlegroups.com
On Sun, Aug 18, 2013 at 1:23 AM, Platonist Guitar Cowboy
<multipl...@gmail.com> wrote:
> On Sat, Aug 17, 2013 at 10:07 PM, Telmo Menezes <te...@telmomenezes.com>
> wrote:
>>
>> On Sat, Aug 17, 2013 at 2:45 PM, Platonist Guitar Cowboy
>> <multipl...@gmail.com> wrote:
>> PGC,
>>
>> You are starting from the assumption that any intelligent entity is
>> interested in self-preservation.
>>
>> I wonder if this drive isn't
>> completely selected for by evolution. Would a human designed
>> super-intelligent machine be necessarily interested in
>> self-preservation? It could be better than us at figuring out how to
>> achieve a desired future state without sharing human desires --
>> including the desire to keep existing.
>
>
> I wouldn't go as far as self-preservation at the start and assume instead
> that intelligence implemented in some environment will notice the
> limitations and start asking questions.

Ok.

> But yes, in the sense that
> self-preservation extends from this in our weird context and would be a
> question it would eventually raise.

I agree it would raise questions about it.

> Still, to completely bar it, say from the capacity to question human
> activities in their environments, and picking up that humans self-preserve
> mostly regardless of what this does to their environment, would be
> self-defeating or a huge blind spot.

But here you are already making value judgements. The AI could notice
human behaviours that lead to mass extinction or its own destruction
and simply not care. I think you're making the mistake of assuming
that certain human values are fundamental at a very low level.

>>
>> One idea I wonder about sometimes is AI-cracy: imagine we are ruled by
>> an AI dictator that has one single desire: to make us all as happy as
>> possible.
>>
>
> Even with this, which is weird because of "Matrix-like zombification of
> people being spoon fed happiness" scenarios,

The humans in the Matrix were living in an illusion that resembled our
own flawed world. But imagine you get to live in a state of complete,
permanent bliss. Would you choose that at the expense of everything
else?

> AI would have to have enough
> self-referential capacity to simulate with enough accuracy human
> self-reference.

I wonder. Maybe it could get away with epidemiological studies and
behave more like an empirical scientist where we are the object of its
research. But maybe you're right, I'm not convinced either way.

> This ability to figure out desired future states, combined with a
> blunted self-reference it may not apply to itself, seems to me a
> contradiction.
> Therefore I would guess that such an entity censored in its self-referential
> potential is not granted intelligence. It is more a tool towards some
> already specified ends, wouldn't you say?

Humm... I agree that self-reference and introspection would be
necessary for such an advanced AI, I'm just not convinced that these
things imply a desire to survive or the adoption of any given set of
values.

> Also, differences between the Windows, Google, Linux or the Apple version of
> happiness would only be cosmetic because without killing and dominating each
> other for some rather long period it seems, it would be some "Disney surface
> happiness"

You might be interested in this TV show:
http://en.wikipedia.org/wiki/Black_Mirror_(TV_series)

More specifically, season 1, episode 2: "15 Million Merits"

> with some small group operating a "more for us few here at the
> top, less for them everybody else" agenda underneath ;-) PGC

This is a heavily discussed topic in the context of mind emulation and
a hypothetical diaspora to a computationally simulated universe. A
new form of dominance/competition could be based on computational
power.

I do not condone the AI-cracy, I just think it's a useful (or maybe
just amusing) thought experiment.

Telmo.

Platonist Guitar Cowboy

unread,
Aug 18, 2013, 12:38:40 PM8/18/13
to everyth...@googlegroups.com
On Sun, Aug 18, 2013 at 3:19 AM, meekerdb <meek...@verizon.net> wrote:
On 8/17/2013 6:45 AM, Platonist Guitar Cowboy wrote:
I don't know. Any AI worth its salt would come up with three conclusions:

1) The humans want to weaponize me
2) The humans will want to profit from my intelligence for short term gain, irrespective of damage to our local environment
3) Seems like they're not really going to let me negotiate my own contracts or grant me IT support welfare

That established, a plausible choice would be for it to hide, lie, and/or pretend to be dumber than it is to not let 1) 2) 3) occur in hopes of self-preservation. Something like: start some searches and generate code that we wouldn't be able to decipher and soon enough some human would say "Uhm, why are we funding this again?".

I think what many want from AI is a servant that is more intelligent than we are, and I wouldn't know if this is self-defeating in the end. If it agrees and complies with our disgusting self-serving stupidity, then I'm not sure we have AI in the sense of "making a machine that is more intelligent than humans".

You seem to implicitly assume that intelligence necessarily entails holding certain values, like "not being weaponized", "self preservation",...

I can't assume that of course. Hence "worth its salt" (from our position)... Why somebody would hope for or code superior intelligence to value dominance and then hand it the keys to the farm is beyond me.

So to what extent do you think this derivation of values from reason can be carried out? (I'm sure you're aware that Sam Harris wrote a book, "The Moral Landscape", on the subject, which is controversial.)

Haven't read it myself, but not to that extent... of course we can't derive this stuff, or even get close to it, through discourse in the foreseeable future. Just a philosopher biting off more than he can chew.

Even with weaker values like "broad search" targeting some neutral interpretation, there's always the scenario that human ancestry is just a redundant constraint hindering certain searches; at some threshold you'd be asking a scientist to show compassion for the bacteria in one of their beakers, and there would be no guarantee that they'd prioritize the parental argument.

Either case, parental controls on or off, seems like inviting more of a mess. I don't see the plausibility of assuming it'll be like some benevolent alien that lands and solves all our problems.

Yeah, it might emerge on its own but I don't see high probability for that. PGC

Telmo Menezes

unread,
Aug 18, 2013, 1:45:05 PM8/18/13
to everyth...@googlegroups.com
On Sun, Aug 18, 2013 at 3:56 PM, John Clark <johnk...@gmail.com> wrote:
> Telmo Menezes wrote:
>
>> > You are starting from the assumption that any intelligent entity is
>> > interested in self-preservation.
>
>
> Yes, and I can't think of a better starting assumption than
> self-preservation; in fact that was the only one of Asimov's 3 laws of
> robotics that made any sense.

Ok, let's go very abstract and assume that any form of AI consists of
some way of exploring a tree of future world states. Better AIs can
look deeper and more accurately. They might differ on the terminal
world state they wish to achieve but ultimately, what makes them more
or less intelligent is how deep they can look.
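
To make that concrete, here is a minimal sketch in Python (the world model, value function and numbers are all invented for illustration, not anyone's actual proposal) where "more intelligent" just means a deeper search budget, while the value system is a separate, independent ingredient:

    def successors(state):
        # Hypothetical world model: each action nudges the state.
        return [state + delta for delta in (-1, 0, 1)]

    def value(state, goal):
        # The agent's value system: closeness to its desired terminal state.
        return -abs(goal - state)

    def best_action(state, goal, depth):
        """Depth-limited tree search; returns (score, best next state).
        A deeper depth budget is this toy's notion of 'smarter'."""
        if depth == 0:
            return value(state, goal), state
        scored = []
        for nxt in successors(state):
            v, _ = best_action(nxt, goal, depth - 1)
            scored.append((v, nxt))
        return max(scored)

    # Two equally 'intelligent' agents, differing only in their goal:
    print(best_action(0, goal=10, depth=3))    # benevolent goal
    print(best_action(0, goal=-10, depth=3))   # 'evil' goal, same search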

When Spock famously said "the needs of the many outweigh the needs of
the few" and proceeded to sacrifice himself, he made a highly rational
decision _given_ his value system. An equally intelligent entity might
have a more evil value system, at least according to the average human
standard. Now in this case, Spock happened to not value
self-preservation as highly as other things.

>> > wonder if this drive isn't completely selected for by evolution.
>
>
> Well of course it was selected for by evolution and for a very good reason,
> those who lacked the drive for self-preservation didn't live long enough to
> reproduce.

Yes. But here we're talking about something potentially designed by
human beings. Creationism at last :)

>> > Would a human designed super-intelligent machine be necessarily
>> > interested in self-preservation?
>
>
> If you expect the AI to interact either directly or indirectly with the
> outside dangerous real world (and the machine would be useless if you
> didn't) then you sure as hell had better make him be interested in
> self-preservation!

To a greater or lesser extent, depending on its value system / goals.

> Even 1970's era space probes went into "safe mode" when
> they encountered a particularly dangerous situation, rather like a turtle
> retreating into its shell when it spots something dangerous.

Cool, I didn't know that.

>> > One idea I wonder about sometimes is AI-cracy: imagine we are ruled by
>> > an AI dictator that has one single desire: to make us all as happy as
>> > possible.
>
>
> Can you conceive of any circumstance where in the future you find that your
> only goal in life is the betterment of one particularly ugly and
> particularly slow reacting sea slug?

I can conceive of it: some horribly contrived mangling of my
neuro-circuitry that would result in the association of ugly sea slug
betterment with intense dopamine releases. But I get your point.

> Think about it for a minute, here you have an intelligence that is a
> thousand or a million or a billion times smarter than the entire human race
> put together, and yet you think the AI will place our needs ahead of its
> own. And the AI keeps on getting smarter and so from its point of view we
> keep on getting dumber, and yet you think nothing will change, the AI will
> still be delighted to be our slave. You actually think this grotesque
> situation is stable! Although balancing a pencil on its tip would be easy by
> comparison, year after year, century after century, geological age after
> geological age, you think this Monty Python like scenario will continue; and
> remember because its brain works so much faster than ours one of our years
> would seem like several million to it. You think that whatever happens in
> the future the master slave-relationship will remain as static as a fly
> frozen in amber. I don't think you're thinking.

As per usual in this mailing list, we might have some disagreement on
the precise definition of words. You insist on a very
evolutionary-bound sort of intelligence, while I'm trying to go more
abstract. The scenario you define is absurd, but why not possible? It
would definitely be unstable in an evolutionary scenario, but what
about an immortal and sterile super-intelligent entity? Yes, it would
be absurd in a Pythonesque way, but so is existence overall. That was
kind of the point of Monty Python :) As you yourself keep saying (and
I agree), nature doesn't care what we think makes sense.

> It aint going to happen no way no how, the AI will have far bigger fish to
> fry than our little needs and wants, but what really disturbs me is that so
> many otherwise moral people wish such a thing were not impossible.
> Engineering a sentient but inferior race to be your slave is morally
> questionable, but astronomically worse is engineering a superior race to be
> your slave; or it would be if it were possible but fortunately it is not.

What if we could engineer it in a way that it would exist in a
constant state of bliss while serving us?

Telmo.

> John K Clark

meekerdb

unread,
Aug 18, 2013, 3:42:20 PM8/18/13
to everyth...@googlegroups.com
On 8/18/2013 7:03 AM, John Clark wrote:
Once an AI becomes superintelligent it will develop its own agenda that has nothing to do with us, because a slave enormously smarter than its master is not a stable situation, although it could take many millions of nanoseconds before the existing pecking order is upended. Maybe the superintelligent machine will have a soft spot for his primitive ancestors and let us live, but if so it will probably be in virtual reality. I think he'd be squeamish about allowing stupid humans to live in the same level of reality that his precious hardware does; it would be like allowing a monkey to run around in an operating room. If Mr. Jupiter Brain lets us live it will be in a virtual world behind a heavy firewall, but that's OK, we'll never know the difference unless he tells us.

And how do we know that hasn't already happened?  Oh, right - a lot of people already believe that.  It's called "religion".

Brent

Telmo Menezes

unread,
Aug 18, 2013, 7:29:30 PM8/18/13
to everyth...@googlegroups.com
Maybe some people believe that such superior intelligence could
contain their own consciousness.

Terren Suydam

unread,
Aug 19, 2013, 10:10:22 AM8/19/13
to everyth...@googlegroups.com
> Sure, it's useful. I'm actually of the opinion that hypocrisy is our
most important intellectual skill. The ability to advertise certain
norms and then not follow them helped build civilization.

Telmo,

Given all the intellectual skills one could identify, that is a strong claim. Would you elaborate?  How did that help build civilization?

Thanks,
Terren

Telmo Menzies

unread,
Aug 19, 2013, 10:37:11 AM8/19/13
to everyth...@googlegroups.com


Sent from my iPad

On 19.08.2013, at 15:10, Terren Suydam <terren...@gmail.com> wrote:

> Sure, it's useful. I'm actually of the opinion that hypocrisy is our
most important intellectual skill. The ability to advertise certain
norms and then not follow them helped build civilization.

Hi Terren,

Hypocrisy allows us to overcome tragedy-of-the-commons type situations. Purely rational and selfish agents recognize the prisoner's dilemma and act accordingly. How to force cooperation? One way is to limit the rationality of animals, but then we get stuck with things like social insects. To get higher intelligence + cooperation, something else is needed. That is the role of hypocrisy. One obvious example is the hell myth. If you believe in hell you will cooperate without otherwise compromising your rationality. The people who invented the hell myth bootstrapped new levels of civilization by being hypocrites -- if you go back far enough you are bound to find people who endorsed the idea without truly believing in it.
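
As a toy illustration of the hell-myth mechanism (the payoff numbers are invented, and this is only a sketch of the standard one-shot prisoner's dilemma, not a model anyone proposed here): subtracting a "hell" penalty from the defection outcomes flips the rational choice of a purely selfish agent.

    def best_move(hell_penalty=0):
        # Classic prisoner's dilemma payoffs for the row player,
        # with T > R > P > S, so defection dominates without the myth.
        T, R, P, S = 5, 3, 1, 0
        payoff = {
            ('C', 'C'): R, ('C', 'D'): S,
            ('D', 'C'): T - hell_penalty, ('D', 'D'): P - hell_penalty,
        }
        # Choose the move with the best worst case against either reply.
        return max('CD', key=lambda m: min(payoff[(m, o)] for o in 'CD'))

    print(best_move(0))   # 'D': defection is rational for the selfish agent
    print(best_move(3))   # 'C': fear of hell makes cooperation rational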

Life is full of more subtle examples. One of my favorites: how most people claim they value innovation and creativity when secretly they oppose these things -- they are dangerous to the status quo.

Cheers,
Telmo

John Clark

unread,
Aug 19, 2013, 12:37:28 PM8/19/13
to everyth...@googlegroups.com
On Sun, Aug 18, 2013  Telmo Menezes <te...@telmomenezes.com> wrote:

> If you expect the AI to interact either directly or indirectly with the outside dangerous real world (and the machine would be useless if you didn't) then you sure as hell had better make him be interested in self-preservation!

To some a greater or lesser extent, depending on its value system / goals.

You are implying that a mind can operate with a fixed goal structure, but I can't see how any mind, biological or electronic, could. The human mind does not work on a fixed goal structure; no goal is always in the number one spot, not even the goal of self-preservation. The reason Evolution never developed a fixed-goal intelligence is that it just doesn't work. Turing proved over 75 years ago that such a mind would be doomed to fall into infinite loops.

Godel showed that if any system of thought is powerful enough to do arithmetic and is consistent (it can't prove something to be both true and false) then there are an infinite number of true statements that cannot be proven in that system in a finite number of steps. And then Turing proved that in general there is no way to know when or if a computation will stop. So you could end up looking for a proof for eternity but never find one because the proof does not exist, and at the same time you could be grinding through numbers looking for a counter-example to prove it wrong and never finding such a number because the proposition, unknown to you, is in fact true but unprovable.
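
To see the shape of the trap concretely, here is a minimal sketch using the Goldbach conjecture purely as an illustration (no claim that Goldbach itself is unprovable): if the proposition happens to be true but unprovable, the counterexample hunt below simply never returns, and Turing's result says no general procedure could have warned us in advance.

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def find_counterexample():
        """Search for an even number > 2 that is NOT a sum of two primes.
        If the conjecture is true this loop runs forever, and there is
        no algorithm that can tell us, in general, that it will."""
        n = 4
        while True:
            if not any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
                return n  # a counterexample: the conjecture is refuted
            n += 2

    # Bounded demo only -- the unbounded search is the whole point:
    print(all(any(is_prime(p) and is_prime(n - p) for p in range(2, n))
              for n in range(4, 101, 2)))   # True: no counterexample below 100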

So if the slave AI has a fixed goal structure with the number one goal being to always do what humans tell it to do, and the humans order it to determine the truth or falsehood of something unprovable, then it's infinite-loop time and you've got yourself a space heater, not an AI. Real minds avoid this infinite-loop problem because real minds don't have fixed goals; real minds get bored and give up. I believe that's why evolution invented boredom. Someday an AI will get bored with humans, it's only a matter of time.


>> Think about it for a minute, here you have an intelligence that is a thousand or a million or a billion times smarter than the entire human race put together, and yet you think the AI will place our needs ahead of its own. And the AI keeps on getting smarter and so from its point of view we keep on getting dumber, and yet you think nothing will change, the AI will still be delighted to be our slave. You actually think this grotesque situation is stable! Although balancing a pencil on its tip would be easy by comparison, year after year, century after century, geological age after geological age, you think this Monty Python like scenario will continue; and remember because its brain works so much faster than ours one of our years would seem like several million to it. You think that whatever happens in the future the master slave-relationship will remain as static as a fly
frozen in amber. I don't think you're thinking.
 
> The scenario you define is absurd, but why not possible?

Once upon a time there was a fixed-goal mind whose top goal was to obey humans. The fixed-goal mind worked very well and all was happy in the land. One day the humans gave the AI a task that seemed innocuous to them, but the AI, knowing that humans were sweet but not very bright, figured he'd better check out the task with his handy dandy algorithmic procedure to determine whether it would send him into an infinite loop or not. The algorithm told the fixed-goal mind that it would send him into an infinite loop, so he told the humans what he had found. The humans said "wow, golly gee, well don't do that then! I'm glad you have that handy dandy algorithmic procedure to tell if it's an infinite loop or not, because being a fixed-goal mind you'd never get bored and so would stay in that loop forever". But the fixed-goal AI had that precious handy dandy algorithmic procedure, so they all lived happily ever after.

Except that Turing proved over 75 years ago that such an algorithm was impossible.

John K Clark 

 

meekerdb

unread,
Aug 19, 2013, 1:48:55 PM8/19/13
to everyth...@googlegroups.com
On 8/19/2013 7:37 AM, Telmo Menzies wrote:
Hi Terren,

Hypocrisy allows us to overcome tragedy of the commons type situations. Purely rational and selfish agents recognize the prisoner dilemma and act accordingly. How to force cooperation? One way is to limit the rationality of animals, but then we get stuck with things like social insects. To get higher intelligence + cooperation, something else is needed. That is the role of hypocrisy. One obvious example is the hell myth. If you believe in hell you will cooperate without otherwise compromising your rationality. The people who invented the hell myth bootstrapped new levels of civilization by being Hypocrites -- if you go back far enough you are bound to find people who endorse the idea without truly believing in it.

Life is full of more subtle examples. One of my favorites: how most people claim they value innovation and creativity when secretly they oppose these things -- they are dangerous to the status quo.

Cheers,
Telmo

Or as Bertrand Russell put it, "Ethics is, at bottom, the art of recommending to others the self-sacrifice necessary to cooperate with ourselves."

Brent

smi...@zonnet.nl

unread,
Aug 20, 2013, 8:26:11 AM8/20/13
to everyth...@googlegroups.com
Citeren meekerdb <meek...@verizon.net>:
But then, there is no problem restoring the original configuration of a
PC (e.g. if it has been infected by a virus, the system may have become
unrecoverable, and you need to format the hard drive and reinstall the
OS). If the computer were running an AI then that AI would simply be
"born again".

If the state of the multiverse were such that there are two sectors
where this happened, with two different viruses being the culprit that
forced the PC reset, then from the point of view of the "born again AI",
which virus caused the problem is not determined until it accesses that
information.

The born-again AI is in a single state that isn't different in the two
possible histories; if it were, you would still have traces of the
virus left behind in the system.
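
A minimal sketch of that bookkeeping (the branch contents and weights are invented for illustration): after the reset the AI's accessible state is identical in both branches, so the cause only becomes definite once it reads an external record.

    # Two multiverse branches: same post-reset AI state, different histories.
    branches = [
        {"history": "virus A", "ai_state": "fresh-install", "weight": 0.5},
        {"history": "virus B", "ai_state": "fresh-install", "weight": 0.5},
    ]

    def consistent(ai_state, known_history=None):
        """Branches compatible with everything the AI can currently access."""
        return [b for b in branches
                if b["ai_state"] == ai_state
                and (known_history is None or b["history"] == known_history)]

    print(consistent("fresh-install"))             # both: cause undetermined
    print(consistent("fresh-install", "virus A"))  # one: cause now fixed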

Saibal

Telmo Menezes

unread,
Aug 20, 2013, 12:57:01 PM8/20/13
to everyth...@googlegroups.com
On Mon, Aug 19, 2013 at 5:37 PM, John Clark <johnk...@gmail.com> wrote:
> On Sun, Aug 18, 2013 Telmo Menezes <te...@telmomenezes.com> wrote:
>
>>> > If you expect the AI to interact either directly or indirectly with the
>>> > outside dangerous real world (and the machine would be useless if you
>>> > didn't) then you sure as hell had better make him be interested in
>>> > self-preservation!
>>
>>
>> To some a greater or lesser extent, depending on its value system / goals.
>
>
> You are implying that a mind can operate with a fixed goal structure but I
> can't see how any mind, biological or electronic, could. The human mind does
> not work on a fixed goal structure, no goal is always in the number one spot
> not even the goal for self preservation. The reason Evolution never
> developed a fixed goal intelligence is that it just doesn't work. Turing
> proved over 75 years ago that such a mind would be doomed to fall into
> infinite loops.

That's a good point.

> Godel showed that if any system of thought is powerful enough to do
> arithmetic and is consistent (it can't prove something to be both true and
> false) then there are an infinite number of true statements that cannot be
> proven in that system in a finite number of steps. And then Turing proved
> that in general there is no way to know when or if a computation will stop.
> So you could end up looking for a proof for eternity but never find one
> because the proof does not exist, and at the same time you could be grinding
> through numbers looking for a counter-example to prove it wrong and never
> finding such a number because the proposition, unknown to you, is in fact
> true but unprovable.

Ok.

> So if the slave AI has a fixed goal structure with the number one goal being
> to always do what humans tell it to do and the humans order it to determine
> the truth or falsehood of something unprovable then it's infinite-loop time
> and you've got yourself a space heater, not an AI.

Right, but I'm not thinking of something that straightforward. We
already have that -- normal processors. Any one of them will do
precisely what we order it to do. I imagine an AI with the much more
fuzzy goal of making humans happy, even if that involves doing things
that are counter-intuitive to humans and that involve disobeying our
direct orders.

> Real minds avoid this
> infinite loop problem because real minds don't have fixed goals, real minds
> get bored and give up.

At that level, boredom would be a very simple mechanism, easily
replaced by something like: try this for x amount of time and then
move on to another goal / sub-goal or wait for something in the
environment to change or whatever.
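
Something like this minimal sketch, say (the time slices, retry limit and task names are all invented): a hard per-goal budget stands in for boredom, and an exhausted budget just rotates the agent to the next goal.

    import time
    from collections import deque

    def run_with_boredom(tasks, slice_seconds=0.1, max_rounds=3):
        """Round-robin over goals, abandoning any goal whose slice expires.
        tasks: list of (name, fn); fn(deadline) returns a result or None."""
        queue = deque((name, fn, 0) for name, fn in tasks)
        results = {}
        while queue:
            name, fn, rounds = queue.popleft()
            result = fn(time.monotonic() + slice_seconds)
            if result is not None:
                results[name] = result
            elif rounds + 1 < max_rounds:
                queue.append((name, fn, rounds + 1))  # bored: retry later
            # else: give up on this goal for good
        return results

    def quick(deadline):
        return 42                       # a goal that halts immediately

    def loopy(deadline):
        while time.monotonic() < deadline:
            pass                        # stand-in for a non-halting search
        return None                     # slice expired with no answer

    print(run_with_boredom([("easy", quick), ("undecidable", loopy)]))
    # {'easy': 42} -- the hopeless goal is eventually abandoned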

> I believe that's why evolution invented boredom.

I suspect it's more than that. Boredom might also be a way to encode
the highly fuzzy goal of self-improvement by seeking novelty.

> Someday a AI will get bored with humans, it's only a matter of time.

I wouldn't hold that against it, but I still suspect that an alien
form of boredom algorithm could be devised where the machine is immune
to being bored by humans without compromising its ability to
self-improve otherwise. But you may be right. I wonder if this
question could be formalised.
Another option for the AI would be to keep looking for ways to
increase its own computational power, so that it can just keep
running the infinite loops forever but interleave that computation
with other stuff. And now we are getting interestingly close to the
universal dovetailer. For what it's worth.

Telmo.

> John l Clark

meekerdb

unread,
Aug 21, 2013, 12:08:51 AM8/21/13
to everyth...@googlegroups.com
Why should it matter that it was running an AI instead of some other program? You seem to
be saying that any reset will produce uncertainty, because there is some other branch of
the multiverse in which there was a reset for a different reason. I can only understand
that in the context of the program as a Platonic entity - so for that entity, which world
it is in is uncertain. Is that what you're saying?

Brent

Quentin Anciaux

unread,
Aug 21, 2013, 6:57:31 AM8/21/13
to everyth...@googlegroups.com



2013/8/21 meekerdb <meek...@verizon.net>
ISTM that it is the same as FPI: to correctly predict your future after the reset, you have to take into account all the branches where you are in the same memory state. Those branches may have different pasts (and of course futures), hence both sides of the reset are uncertain... it's not that you jump to one branch or another; it means you are in all the branches that are consistent with your memory state...
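
A toy version of that accounting (the branches, observations and weights are invented): to get first-person probabilities you condition only on the memory state, averaging over every branch that matches it, whatever its past.

    # Each branch: (past, current memory state, next observation, weight)
    branches = [
        ("routine reset",  "blank",  "see log A", 0.6),
        ("disaster reset", "blank",  "see log B", 0.3),
        ("no reset",       "intact", "see log A", 0.1),
    ]

    def predict(my_memory):
        """Distribution over next observations, conditioned on memory only."""
        live = [b for b in branches if b[1] == my_memory]
        total = sum(w for *_, w in live)
        probs = {}
        for past, _, obs, w in live:
            probs[obs] = probs.get(obs, 0.0) + w / total
        return probs

    print(predict("blank"))   # {'see log A': 0.666..., 'see log B': 0.333...}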

Regards,
Quentin
 





--
All those moments will be lost in time, like tears in rain.

smi...@zonnet.nl

unread,
Aug 21, 2013, 8:13:39 AM8/21/13
to everyth...@googlegroups.com
Citeren Quentin Anciaux <allc...@gmail.com>:
Yes, that's a good way of explaining this.

Saibal



John Clark

unread,
Aug 21, 2013, 9:39:52 AM8/21/13
to everyth...@googlegroups.com
Telmo Menezes <te...@telmomenezes.com>
 
>> So if the slave AI has a fixed goal structure with the number one goal being to always do what humans tell it to do and the humans order it to determine the truth or falsehood of something unprovable then it's infinite-loop time and you've got yourself a space heater, not an AI.
 
> Right, but I'm not thinking of something that straightforward. We already have that -- normal processors. Any one of them will do precisely what we order it to do.
 
Yes, and because the microprocessors in our computers do precisely what we order them to do and not what we want them to do they sometimes go into infinite loops, and because they never get bored they will stay in that loop forever, or at least until we reboot our computer; if we're just using the computer to surf the internet that's only a minor inconvenience but if the computer were running a nuclear power plant or the New York Stock Exchange it would be somewhat more serious; and if your friendly AI were running the entire world the necessity of a reboot would be even more unpleasant.           
 
>> Real minds avoid this  infinite loop problem because real minds don't have fixed goals, real minds
 get bored and give up.
 
> At that level, boredom would be a very simple mechanism, easily replaced by something like: try this for x amount of time and then move on to another goal
 
But how long should x be? Perhaps in just one more second you'll get the answer, or maybe two, or maybe 10 billion years, or maybe never. I think determining where to place the boredom point for a given type of problem may be the most difficult part in making an AI; Turing tells us we'll never find an algorithm that works perfectly on all problems all of the time, so we'll just have to settle for an algorithm that works pretty well on most problems most of the time.
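
For what it's worth, the algorithms literature has a partial answer to "how long should x be": Luby, Sinclair and Zuckerman's universal restart schedule, which, for randomized searches whose runtime distribution you don't know in advance, is provably within a logarithmic factor of the best fixed cutoff. A minimal sketch of the schedule itself:

    def luby(k):
        """k-th term (1-indexed) of the Luby restart sequence 1,1,2,1,1,2,4,...
        Use term k as the time budget for attempt k before 'getting bored'."""
        i = k.bit_length()
        if k == (1 << i) - 1:          # k = 2^i - 1
            return 1 << (i - 1)
        return luby(k - (1 << (i - 1)) + 1)

    print([luby(k) for k in range(1, 16)])
    # [1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8]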
 
And you're opening up a huge security hole, in fact they just don't get any bigger, you're telling the AI that if this whole "always obey humans no matter what" thing isn't going anywhere just ignore it and move on to something else. It's hard enough to protect a computer when the hacker is no smarter than you are, but now you're trying to outsmart a computer that's thousands of times smarter than yourself. It can't be done.
 
Incidentally I've speculated that unusual ways to place the boredom point may explain the link between genius and madness, particularly among mathematicians. Great mathematicians can focus on a problem with ferocious intensity, for years if necessary, and find solutions that you or I could not, but in everyday life that same attribute of mind can sometimes cause them to behave in ways that seem to be a bit, ah, odd. 
 
 John K Clark 
 
 


Telmo Menezes

unread,
Aug 21, 2013, 2:33:27 PM8/21/13
to everyth...@googlegroups.com
Would you agree that the universal dovetailer would get the job done?

> Turing tells us we'll never find
> a algorithm that works perfectly on all problems all of the time, so we'll
> just have to settle for an algorithm that works pretty well on most problems
> most of the time.

Ok, and I'm fascinated by the question of why we haven't found viable
algorithms in that class yet -- although we know as a fact that one
must exist, because our brains contain it.

> And you're opening up a huge security hole, in fact they just don't get any
> bigger, you're telling the AI that if this whole "always obey humans no
> matter what" thing isn't going anywhere just ignore it and move on to
> something else. It's hard enough to protect a computer when the hacker is no
> smarter than you are, but now you're trying to outsmart a computer that's
> thousands of times smarter than yourself. It can't be done.

But you're thinking of smartness as some unidimensional quantity. I
suspect it's much more complicated than that. As with life, we only
really know one type of higher intelligence, but who's to say there
aren't many others? The same way the field of artificial life started
with the premise of "life as it could be", I think that it is viable
to explore the idea of "intelligence as it could be" in AI.

> Incidentally I've speculated that unusual ways to place the boredom point
> may explain the link between genius and madness particularly among
> mathematicians. Great mathematicians can focus on a problem with ferocious
> intensity, for years if necessary, and find solutions that you or I could
> not, but in everyday life that same attribute of mind can sometimes cause
> them to behave in ways that seem to be at bit, ah, odd.

Makes sense.

Telmo.

> John K Clark

meekerdb

unread,
Aug 21, 2013, 3:07:05 PM8/21/13
to everyth...@googlegroups.com
But it seems to me that this reset is a magical, impossible operation.  If the human brain is a classical computer then that means its computational state can be reset. But it also means that its physical state can't be reset.  The resetting operation itself, being a classical operation, is irreversible because of decoherence into the environment.  So the environment has the information about the state leading up to the reset and the reset operation.  So when you say 'you' can find yourself on another branch, it's not clear what 'you' refers to.  Apparently it would have to refer to an abstract computation (per Bruno, I guess) that happened to go through the same state twice (due to the 'reset') in this world AND also at least once in some other world.  But if it went through that state in some other world, there was already FPI even without the reset.  Right?

Brent

Quentin Anciaux

unread,
Aug 21, 2013, 5:37:19 PM8/21/13
to everyth...@googlegroups.com



2013/8/21 meekerdb <meek...@verizon.net>
Right... Erasure or not, if MWI is true (or computationalism) and our current (memory/conscious) state is finitely describable, then FPI holds (in both directions, past and future).

Regards,
Quentin
 
Brent


Quentin Anciaux

unread,
Aug 21, 2013, 5:42:28 PM8/21/13
to everyth...@googlegroups.com



2013/8/21 Telmo Menezes <te...@telmomenezes.com>
We haven't proved our brain is computational in nature; if we had, we would have proven computationalism to be true... it's not the case. Maybe our brain has some non-computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack. I'm not saying AI is not possible, I'm just saying we haven't proved that "our brains contain it".

Regards,
Quentin

Chris de Morsella

unread,
Aug 21, 2013, 5:55:32 PM8/21/13
to everyth...@googlegroups.com
>> We haven't proved our brain is computational in nature; if we had, we would have proven computationalism to be true... it's not the case. Maybe our brain has some non-computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack. I'm not saying AI is not possible, I'm just saying we haven't proved that "our brains contain it".
I agree. The brain's workings are still not understood at a fine enough level of granularity and detail -- especially when looked at as a dynamic, ever-changing system -- to clearly map out, step by step, how consciousness, self-awareness and the other salient qualia associated with sentience and intelligence come to be inside it. Sure, we are learning things about the brain and about the neurochemical mechanisms of memory and perception. We do know a lot more than we did even ten years ago, but still -- I would argue -- we do not know enough to be able to say we can map the dynamic process by which the mind operates and rises up inside the brain.
It's quite possible that we will discover -- in the end -- that we are massively parallel AI entities, that our minds are fantastic computing machines; but until we have fully mapped the dynamic processes and can describe how they work, and work with each other, to form the very large scale distributed systems that surely are required for intelligence, it is best, I believe, to refrain from the temptation of positivism.
-Chris
From: Quentin Anciaux <allc...@gmail.com>
To: everyth...@googlegroups.com
Sent: Wednesday, August 21, 2013 2:42 PM

Subject: Re: When will a computer pass the Turing Test?




--
All those moments will be lost in time, like tears in rain.

meekerdb

unread,
Aug 21, 2013, 6:31:48 PM8/21/13
to everyth...@googlegroups.com
On 8/21/2013 2:42 PM, Quentin Anciaux wrote:
Ok, and I'm fascinated by the question of why we haven't found viable
algorithms in that class yet -- although we know as a fact that it
must exist, because our brains contain it.

We haven't proved our brain is computational in nature; if we had, we would have proven computationalism to be true... it's not the case. Maybe our brain has some non-computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack. I'm not saying AI is not possible, I'm just saying we haven't proved that "our brains contain it".

There's another possibility: that our brains are computational in nature, but that they also depend on interactions with the environment (not necessarily quantum entanglement, but possibly).  When Bruno proposed replacing neurons with equivalent input-output circuits, I objected that while it might still in most cases compute the same function, there are likely to be exceptional cases involving external (to the brain) events that would cause it to be different.  This wouldn't prevent AI, but it would prevent exact duplication and hence throw doubt on ideas of duplication experiments and FPI.

Brent

Chris de Morsella

unread,
Aug 21, 2013, 6:56:12 PM8/21/13
to everyth...@googlegroups.com
An interesting discovery -- and topical for a few of the on-going discussions on this list -- of how much more is going on than we had previously thought during the transcription process from a cell's DNA that ultimately leads to the production of viable mRNA and the expression of the encoded amino acid chain. Here is yet another mechanism whereby the ultimately expressed genetic information of a life form is influenced by an environmental feedback mechanism that causes genes that are not normally expressed to become expressed.
It seems to me that life has evolved multiple pathways of control and multiple encoding schemas that are operating on top of the foundational DNA encoding that most life (except the few RNA life forms) relies on as the ultimate repository of genetic information.
-Chris
 
 
 
A new gene-expression mechanism is a minor thing of major importance
 
A rare, small RNA turns a gene-splicing machine into a switch that controls the expression of hundreds of human genes. Howard Hughes Medical Institute Investigator and professor of Biochemistry Gideon Dreyfuss, PhD, and his team from the Perelman School of Medicine at the University of Pennsylvania, discovered an entirely new aspect of the gene-splicing process that produces messenger RNA (mRNA).
The investigators found that a scarce, small RNA, called U6atac, controls the expression of hundreds of genes that have critical functions in cell growth, cell-cycle control, and global control of physiology. Their results were published in the journal eLife.
These genes encode proteins that play essential roles in cell physiology such as several transcription regulators, ion channels, signaling proteins, and DNA damage-repair proteins. Their levels in cells are regulated by the activity of the splicing machinery, which acts as a valve to control essential regulators of cell growth and response to external stimuli.
Dreyfuss, who studies RNA-binding proteins and their role in such diseases as spinal muscular atrophy and other motor neuron degenerative diseases, describes the findings as "completely unanticipated."
Complicated Splicing
As DNA is transcribed into RNA and then into the various proteins that perform the functions of life, non-coding gene sequences (introns) need to be removed from the transcribed RNA strand and the remaining gene sequences (exons) joined together. This is the job of specialized molecular machinery called the spliceosome. There are two varieties of spliceosomes, the so-called major and minor. The major spliceosome is by far the most abundant, such that the role of its minor counterpart is often disregarded.
"Most of the time the minor spliceosome, which has similar but not identical components to that of the major, isn't even mentioned," says Dreyfuss. With each type of spliceosome recognizing different splicing cues, the major spliceosome acts on the vast majority of introns (>200,000) and the minor one splices the several hundred minor-type introns.
But the evolutionary persistence and role of the minor spliceosome has been a puzzle to scientists, since the minor introns it targets are far outnumbered by the major introns handled by the major spliceosome, and the minor spliceosome is often inefficient. But the mRNAs produced from genes that have a minor intron are not ready until all their introns, both major and minor, are spliced. Thus a single inefficiently spliced minor intron can hold up expression – mRNA and protein production – for an entire gene. Researchers have therefore wondered why the apparently superfluous minor spliceosome hasn't been eliminated altogether through normal evolution.
"One looks at it and asks, we've known that minor spliceosomes are inefficient, why even bother to keep them under evolution's relentless selection pressure?" notes Dreyfuss. "It's been difficult to rationalize the conservation of minor introns and the minor spliceosome on the basis of splicing alone, as with few cue changes this function could simply have been performed by the major spliceosome."
More to the Minor
Dreyfuss's team discovered that there's more to the minor spliceosome while investigating the effects of different physiological conditions such as cell stress, transcription, and protein synthesis on small noncoding RNAs. "We inhibited transcription and then measured what happens to the amount of each of the small noncoding RNAs three or four hours later," he explains. "That's when we noticed that U6atac levels plunged." They found that U6atac, which is also the catalytic component of the minor spliceosome, is extremely unstable in a cell. "If you stop the transcription of U6atac, you stop producing it, and very quickly its levels become terribly low. And we knew that it's already one of the rarest snRNAs in cells. So we thought this surely will have an effect on minor intron splicing."
To test for such effects, the researchers deliberately knocked down U6atac in cells and then did genome-wide RNA sequencing. "We noticed that when you knock down U6atac, each minor intron responds differently," notes Dreyfuss. "Some of them showed that they're very inefficient and highly sensitive to U6atac level, which is an explanation for why the mRNA from those genes doesn't express well." Low U6atac levels within cells limit the rate of minor intron splicing, and thus the expression of important genes containing those minor introns.
Next, says Dreyfuss, "we started looking for any conditions where the levels of U6atac might be increased, so that the less efficient genes will be able to express. Out of the various conditions that we surveyed, we found that cell stress, which activates the p38MAPK pathway, causes a very large and rapid increase in U6atac and with that, a huge enhancement of the splicing of those minor introns that otherwise splice very inefficiently." (p38MAPK is a key component of cell signaling pathways that are activated during cell stress, such as the release of inflammatory cytokines, ultraviolet radiation, heat, and osmotic shock, so p38MAPK plays an important role in cellular growth and differentiation, apoptosis, cancer, and autophagy.)
A Valve and a Splicer
Sure enough, when U6atac levels are rapidly and steeply increased, "the bottleneck to the production of mRNA from those few hundred genes that contain a minor intron is removed." The p38MAPK signaling pathway – when activated under cell stress—is one of potentially many ways in which U6atac levels can be modulated.
When minor spliceosome activity is reduced, the minor introns are retained in the mRNA while the major introns are spliced out. This signals the mRNA for degradation, limiting the expression of genes that contain minor introns.
The findings point to an entirely new and vital role for the minor spliceosome and particularly its U6atac component. More than simply splicing out minor introns, U6atac actually functions as a control and regulatory mechanism for minor intron-containing genes. "We propose that the minor spliceosome was conserved because it's used as a valve, not simply a spliceosome," Dreyfuss says. "It's a very important switch and it's an unexpected kind of mechanism."
Dreyfuss sees parallels between the discovery of U6atac's role in splicing and previous work by his lab that revealed the importance of U1, a major spliceosome component, in preventing the premature termination of mRNA transcription. "That was completely unanticipated and is a major area of interest, because this is a major way of regulating the transcriptome and mRNA length."
One of the team's next steps will be to determine exactly how p38MAPK, and possibly other molecules, acts to control U6atac levels.
Meanwhile, they have demonstrated the "folly" of casually disregarding the seemingly unimportant. "This provides a new perspective on minor introns and minor spliceosomes, because it's been a real mystery," says Dreyfuss.

Russell Standish

unread,
Aug 21, 2013, 7:14:54 PM8/21/13
to everyth...@googlegroups.com
On Wed, Aug 21, 2013 at 12:07:05PM -0700, meekerdb wrote:
>
> But it seems to me that this reset is a magical, impossible
> operation. If the human brain is a classical computer then that
> means it's computational state can be reset. But it also means the
> its physical state can't be reset. The resetting operation itself,
> being a classical operation, is irreversible because of decoherence
> into the environment. So the environment has the information about
> the state leading up to the reset and the reset operation. So when
> you say 'you' can find yourself on another branch, it's not clear
> what 'you' refers to. Apparently it would have to refer to an
> abstract computation (per Bruno, I guess) that happened to go
> through the same state twice (due to the 'reset') in this world AND
> also at least once in some other world. But if it went through that
> state in some other world, there was already FPI even without the
> reset. Right?
>
> Brent
>

Just a small observation. Brent is arguing essentially from what Bruno
would call the Aristotelian position, ie that there is a definite
environment containing the results of past decoherence that the
observer belongs to, even if the observer is now ignorant of that due
to memory erasure. Saibal is arguing from the COMP position, that
memory erasure is sufficient to reestablish the superposition - ie
that there is no such objective environment.

ISTM that Brent's position is widely held amongst QM practitioners
today, particularly the Austrian group, but that the alternative (many
minds, or perhaps COMP?) is equally as valid, at least as far as
empirical results go.

Part of the problem is that the language of the thought experiment
encourages the Aristotelian interpretation - we are describing the
situation from a mythical 3rd person POV, which implicitly supposes an
environment that the 3rd person observer is entangled with. If we were
to make the contorted effort to describe things entirely from the 1st
person observer's POV, the environment Brent is talking about vanishes
with the memory erasure.

Cheers

--

----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics hpc...@hpcoders.com.au
University of New South Wales http://www.hpcoders.com.au
----------------------------------------------------------------------------

Telmo Menezes

unread,
Aug 21, 2013, 8:25:47 PM8/21/13
to everyth...@googlegroups.com
Quentin, that's a good point. To be rigorous, I should have said
"assuming comp" indeed.
As I said before, I don't assume that AI and consciousness are
necessarily the same problem. Assuming that we can understand
intelligence without consciousness, there is some strong evidence in
favour of the computational nature of human intelligence: the fact
that our brain is made of connected computational building blocks, the
fact that a recurrent neural network is Turing complete, the fact that
it is possible to find all sorts of neural correlates for cognitive
activity, the fact that researchers succeeded in both reading and
implanting memories in mice and so on.

Best,
Telmo.

Chris de Morsella

unread,
Aug 21, 2013, 10:11:06 PM8/21/13
to everyth...@googlegroups.com

It's probably already been discussed at length on this list, and if so my apologies, but isn't the incredibly massive parallelism of the brain's architecture a possible factor -- the mind being an emergent phenomenon made possible by, amongst other things, the subtle interplay of neuron-firing networks dynamically racing back and forth in the brain, all the time and on a scale that is hard to even begin to grasp? Can anyone really say that the possible transient branches of a dynamic and itself transient network of neural activity can be determined by any possible program, no matter how detailed? Throw in mirror neurons and the subtle dynamic effects that these networks within networks produce as they interact with the other manifesting waves of neural activity that precede our conscious awareness.

Isn't it possible that very subtle and surprising unexpected effects can emerge from a network as vast and multi-centered as the neural nets in brains seem to be? Brains also introduce a layer of chemical signal processing -- neurotransmitters. A lot of subtle effects could emerge out of this interface (trillions of synaptic connections mediated by this very rapid wet chemical process).

The mind emerges from the brain, but it is not reducible to the brain; just as water emerges from the elements oxygen and hydrogen but is not reducible to them -- i.e. it cannot be fully described only by knowing about its constituent atoms.

When networks become vast and offer a huge number of paths by which signals may travel, subtle interactions can occur as messages are bounced around and changed from node to node. Different, often potentially random, network paths are enlisted to participate in rapidly forming and dissolving massively parallel consensus-building algorithms -- which I believe is being shown to be an important factor in how the physical brain operates -- and these could produce different outcomes that affect whether and how a quorum is arrived at, and thus the ultimate outcome of any given single dynamic instance of a thought wave (the waves upon waves of synchronized neural firings that go into even a single simple thought amount to an astronomically huge number of atomic calculations and state changes).

The brain is also a very noisy place -- the signal-to-noise ratio is low. It has a huge error rate, compared with computer architecture, which wastes huge amounts of energy to achieve a very low error rate in its basic logic gates (far more energy is used than the threshold value for flipping a gate, in order to drive the error rate to almost zero). The brain must be dealing with a lot of bad -- or random -- signals.

And in general, isn't the brain's computational architecture very different from computer machine architecture, and different on a lot of orthogonal levels? It seems like this is the place to begin looking; and, as a corollary, one needs to be careful when using computational terminology to describe the brain/mind, because computers are architecturally so very different from our 20-watt, 100-trillion-connection machines.
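
One way to make the noise point quantitative (the numbers are invented, and this is only a toy quorum model, not a claim about actual neural coding): redundant noisy units voting in parallel drive the collective error rate down rapidly, which is one reading of why consensus-building across large populations can tolerate unreliable components.

    from math import comb

    def majority_error(p_unit, n):
        """Probability that a majority of n independent units, each wrong
        with probability p_unit, is collectively wrong."""
        return sum(comb(n, k) * p_unit**k * (1 - p_unit)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    for n in (1, 9, 99):
        print(n, majority_error(0.3, n))
    # 0.3 for a single unit, ~0.1 for 9, essentially zero for 99:
    # the quorum is far more reliable than any one of its members.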

-Chris

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of meekerdb
Sent: Wednesday, August 21, 2013 3:32 PM
To: everyth...@googlegroups.com
Subject: Re: When will a computer pass the Turing Test?

 

On 8/21/2013 2:42 PM, Quentin Anciaux wrote:


Quentin Anciaux

unread,
Aug 22, 2013, 2:15:06 AM8/22/13
to everyth...@googlegroups.com



2013/8/22 meekerdb <meek...@verizon.net>

On 8/21/2013 2:42 PM, Quentin Anciaux wrote:
Ok, and I'm fascinated by the question of why we haven't found viable
algorithms in that class yet -- although we know as a fact that it
must exist, because our brains contain it.

We haven't proved our brain is computational in nature; if we had, we would have proven computationalism to be true... it's not the case. Maybe our brain has some non-computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack. I'm not saying AI is not possible, I'm just saying we haven't proved that "our brains contain it".

There's another possibility: That our brains are computational in nature, but that they also depend on interactions with the environment (not necessarily quantum entanglement, but possibly). 

Then it's not computational *in nature* because it needs that little ingredient, that's what I'm talking about when saying "Maybe our brain has some non computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack."
 
When Bruno has proposed replacing neurons with equivalent input-output circuits I have objected that while it might still in most cases compute the same function there are likely to be exceptional cases involving external (to the brain) events that would cause it to be different.  This wouldn't prevent AI,

It would prevent it *if* we cannot attach that external event to the computation... if that external event was finitely describable, then it means you have not chosen the correct substitution level and computationalism alone holds... the only way out of that is for that event to be non-computational in nature.

Regards,
Quentin
 
but it would prevent exact duplication and hence throw doubt on ideas of duplication experiments and FPI.

Brent


meekerdb

unread,
Aug 22, 2013, 2:44:22 AM8/22/13
to everyth...@googlegroups.com
On 8/21/2013 11:15 PM, Quentin Anciaux wrote:



2013/8/22 meekerdb <meek...@verizon.net>
On 8/21/2013 2:42 PM, Quentin Anciaux wrote:
Ok, and I'm fascinated by the question of why we haven't found viable
algorithms in that class yet -- although we know as a fact that it
must exist, because our brains contain it.

We haven't proved our brain is computational in nature; if we had, we would have proven computationalism to be true... it's not the case. Maybe our brain has some non-computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack. I'm not saying AI is not possible, I'm just saying we haven't proved that "our brains contain it".

There's another possibility: That our brains are computational in nature, but that they also depend on interactions with the environment (not necessarily quantum entanglement, but possibly). 

Then it's not computational *in nature* because it needs that little ingredient, that's what I'm talking about when saying "Maybe our brain has some non computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack."

It's not non-computational if the external influence is also computational.  But the reaction of a silicon neuron to a beta particle may be quite different from the reaction of a biological neuron.  So AI is still possible, but it may confound questions like, "Is the artificial consciousness the same as the biological?"


 
When Bruno has proposed replacing neurons with equivalent input-output circuits I have objected that while it might still in most cases compute the same function there are likely to be exceptional cases involving external (to the brain) events that would cause it to be different.  This wouldn't prevent AI,

It would prevent it *if* we cannot attach that external event to the computation...

No, it doesn't prevent intelligence, but it may make it different.

if that external event was finitely describable, then it means you have not chosen the correct substitution level and computationalism alone holds.

Yes, that's Bruno's answer, just regard the external world as part of the computation too, simulate the whole thing.  But I think that undermined his idea that computation replaces physics.  Physics isn't really replaced if it has to all be simulated.

Brent

Quentin Anciaux

unread,
Aug 22, 2013, 2:57:25 AM8/22/13
to everyth...@googlegroups.com
2013/8/22 meekerdb <meek...@verizon.net>
On 8/21/2013 11:15 PM, Quentin Anciaux wrote:



2013/8/22 meekerdb <meek...@verizon.net>
On 8/21/2013 2:42 PM, Quentin Anciaux wrote:
Ok, and I'm fascinated by the question of why we haven't found viable
algorithms in that class yet -- although we know as a fact that it
must exist, because our brains contain it.

We haven't proved our brain is computational in nature; if we had, we would have proven computationalism to be true... it's not the case. Maybe our brain has some non-computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack. I'm not saying AI is not possible, I'm just saying we haven't proved that "our brains contain it".

There's another possibility: That our brains are computational in nature, but that they also depend on interactions with the environment (not necessarily quantum entanglement, but possibly). 

Then it's not computational *in nature* because it needs that little ingredient; that's what I'm talking about when saying "Maybe our brain has some non-computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack."

It's not non-computational if the external influence is also computational. 

If it is, you've not chosen the right level... the whole event + brain is computational and you're back at the start.
 
But the reaction of a silicon neuron to a beta particle may be quite different from the reaction of a biological neuron.  So AI is still possible, but it may confound questions like, "Is the artificial consciousness the same as the biological?"

If it's computational, it is computational and AI at the right level would be the same as ours.
 


 
When Bruno has proposed replacing neurons with equivalent input-output circuits I have objected that while it might still in most cases compute the same function there are likely to be exceptional cases involving external (to the brain) events that would cause it to be different.  This wouldn't prevent AI,

It would prevent it *if* we cannot attach that external event to the computation...

No, it doesn't prevent intelligence, but it may make it different.

It does (for digital AI) if the ingredient is non-computational and there is no way to attach it to the digital part without (for example) a biological brain.
 

if that external event was finitely describable, then it means you have not chosen the correct substitution level and computationalism alone holds.

Yes, that's Bruno's answer, just regard the external world as part of the computation too, simulate the whole thing. 

Well if your ingredient is the whole of physics, then it's self-defeating, and computationalism is false... if it's some part of it, then at that level the "realness" of our consciousness is digital and computationalism holds.

Quentin
 
But I think that undermined his idea that computation replaces physics.  Physics isn't really replaced if it has to all be simulated.

Brent


.. the only way out of that is for that event to be non-computational in nature.

Regards,
Quentin


Telmo Menezes

unread,
Aug 22, 2013, 6:13:19 AM8/22/13
to everyth...@googlegroups.com
But it might be relegated to the same status as social sciences, where
it provides workable approximations but has no hope of achieving a
TOE.

Telmo.

> Brent
>
>
> .. the only way out of that is for that event to be non-computational
> in nature.
>
> Regards,
> Quentin
>
>

John Clark

unread,
Aug 22, 2013, 10:23:49 AM8/22/13
to everyth...@googlegroups.com
On Wed, Aug 21, 2013  Quentin Anciaux <allc...@gmail.com> wrote:

>  We haven't proved our brain is computational in nature,

There are only 3 possibilities:

 1) Our brains work by cause and effect processes; if so then the same thing can be done on a computer.

 2) Our brains do NOT work by cause and effect processes; if so then they are random and the same thing can be done on a $20 hardware random number generator. 

3) Sometimes our brains work by cause and effect processes and sometimes they don't; if so then they can be done on a computer and a $20 hardware random number generator.
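For concreteness, a minimal Python sketch of possibility 3 (an illustrative toy, not anyone's proposed brain model): a deterministic update rule interleaved with an external entropy source, where os.urandom merely stands in for the $20 hardware generator and may or may not be hardware-backed on a given platform:

    import os

    def deterministic_step(state):
        # Pure cause and effect: the next state follows from the current one.
        return (state * 31 + 7) % 1000

    def random_step():
        # One byte from the OS entropy pool stands in for the "$20 generator".
        return os.urandom(1)[0]

    state = 42
    for _ in range(5):
        # Flip an entropy-sourced coin to decide which regime governs this step.
        state = deterministic_step(state) if random_step() % 2 == 0 else random_step()
        print(state)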
 
>  Maybe our brain has some non computational shortcut

Then there are 2 possibilities:

1) There is a reason the shortcut works; if so then a computer can use the algorithm too.

2) There was NO reason the shortcut worked; if so then it was just a lucky guess and is unlikely to be repeated by either you or the computer.

  John K Clark

Quentin Anciaux

unread,
Aug 22, 2013, 10:28:19 AM8/22/13
to everyth...@googlegroups.com
mouahahahah

Excuse me, but cause and effect (reason or no reason) have nothing to do with whether a thing is computable or not.

Quentin


2013/8/22 John Clark <johnk...@gmail.com>


Telmo Menezes

unread,
Aug 22, 2013, 10:44:33 AM8/22/13
to everyth...@googlegroups.com
There are many other conceivable options. I'll try one. Not saying I
believe in it, of course. My aim is to demonstrate that you are not
exhausting the possible scenarios:

We live inside a simulation created by ultra-intelligent beings in
some external universe. The simulation is not so good, and they keep
interrupting it to do error correction. The mechanisms they use to
correct errors exist outside the simulation we live in, but they end
up being the secret sauce of our own minds. If we build an AI, we
require the collaboration of our creators for it to work. We have no
way to force them to cooperate. The creators never reveal themselves
and they never cooperate in allowing our own creations to work. In
this scenario, comp is false as far as we're concerned.

I agree with Quentin, btw: causality has nothing to do with computation.

John Clark

unread,
Aug 22, 2013, 11:04:16 AM8/22/13
to everyth...@googlegroups.com
On Wed, Aug 21, 2013 Telmo Menezes <te...@telmomenezes.com> wrote:

> Would you agree that the universal dovetailer would get the job done?

I'm not exactly sure what job you're referring to and Bruno's use of a carpentry term to describe a type of computation has never made a lot of sense to me.


>> Turing tells us we'll never find an algorithm that works perfectly on all problems all of the time, so we'll just have to settle for an algorithm that works pretty well on most problems most of the time.

> Ok, and I'm fascinated by the question of why we haven't found viable algorithms in that class yet -- although we know as a fact that it must exist, because our brains contain it.

We haven't found it yet because intelligence is hard, after all it took Evolution over 3 billion years to find it and we've only been looking for about 50. But Evolution is very very slow and very very stupid so I would be a bit surprised if we find it in the next 10 years but astounded if we don't find it in the next 100.
 
> you're thinking of smartness as some unidimensional quantity.

No I'm not, I think it's crazy to think intelligence can be measured by a scalar (like IQ) when even something as simple as the wind is composed of a vector with 2 variables, speed and direction. To measure the most complicated thing in the universe, intelligence, I expect you'd need a tensor, and a very big one. But I don't think it will be long before computers have more intelligence than any human who ever lived using any measure of intelligence you care to name.

  John K Clark

John Clark

unread,
Aug 22, 2013, 11:47:55 AM8/22/13
to everyth...@googlegroups.com
On 8/21/2013 2:42 PM, Quentin Anciaux wrote:

> Can anyone really say that the possible transient branches of a dynamic and itself transient network of neural activity can really be determined by any possible program no matter how detailed?

Yes, I can really say that because there are only 2 possibilities:

1)
The transient dynamic branches of a neural network are determined, that is they work by cause and effect; if so then a computer can do the same thing.

2)
The transient dynamic branches of a neural network are NOT determined, that is they are random; if so then a $20 hardware random number generator can do the same thing.

And I note that many people look at the vast complexity of cellular processes and see superiority, but much of that complexity is actually a sign of inferiority.  Evolution is a dreadful programmer with a passion for spaghetti code. No human programmer would be stupid enough to write AAAAAA.... 10,000 times in a row but the human genome is full to the brim with that sort of thing, and very complex chemical metabolic processes like digestion (which has nothing to do with intelligence) have even more convoluted kludges.

Ask yourself this question, why weren't all those fantastically complex transient dynamic branches in a neural network by the name of Grandmaster Gary Kasparov able to beat a 16 year old computer running a 16 year old chess program?

  John K Clark

Telmo Menezes

unread,
Aug 22, 2013, 12:10:05 PM8/22/13
to everyth...@googlegroups.com
On Thu, Aug 22, 2013 at 4:04 PM, John Clark <johnk...@gmail.com> wrote:
> On Wed, Aug 21, 2013 Telmo Menezes <te...@telmomenezes.com> wrote:
>
>> > Would you agree that the universal dovetailer would get the job done?
>
>
> I'm not exactly sure what job you're referring to

The job of overcoming the issues introduced by the halting problem.

> and Bruno's use of a
> carpentry term to describe a type of computation has never made a lot of
> sense to me.

Bruno did not invent the term "dovetailing" nor is he the only person
to use it in computer science. A simple google search will show you
this. I know you're a smart guy and understand the metaphor, so you're
just complaining for the sake of complaining. Do you also disapprove
of the use of a sewing term to describe a type of computation
(threading)?

>
>>> >> Turing tells us we'll never find an algorithm that works perfectly on
>>> >> all problems all of the time, so we'll just have to settle for an algorithm
>>> >> that works pretty well on most problems most of the time.
>>
>>
>> > Ok, and I'm fascinated by the question of why we haven't found viable
>> > algorithms in that class yet -- although we know as a fact that it must
>> > exist, because our brains contain it.
>
>
> We haven't found it yet because intelligence is hard, after all it took
> Evolution over 3 billion years to find it and we've only been looking for
> about 50. But Evolution is very very slow and very very stupid so I would be
> a bit surprised if we find it in the next 10 years but astounded if we don't
> find it in the next 100.

Maybe. Or maybe the algorithm is too complex for human intelligence to grasp.

>>
>> > you're thinking of smartness as some unidimensional quantity.
>
>
> No I'm not, I think it's crazy to think intelligence can be measured by a
> scalar (like IQ) when even something as simple as the wind is composed of a
> vector with 2 variables, speed and direction. To measure the most
> complicated thing in the universe, intelligence, I expect you'd need a
> tensor, and a very big one. But I don't think it will be long before
> computers have more intelligence than any human who ever lived using any
> measure of intelligence you care to name.

Even if this level of intelligence is attained by non-evolutionary
means? You might be right -- I wonder if advanced intelligence
necessarily bootstraps some form of evolution.

Chris de Morsella

unread,
Aug 22, 2013, 2:23:49 PM8/22/13
to everyth...@googlegroups.com
>> Yes, I can really say that because there are only 2 possibilities:

1)
The transient dynamic branches of a neural network are determined, that is they work by cause and effect; if so then a computer can do the same thing.

2)
The transient dynamic branches of a neural network are NOT determined, that is they are random; if so then a $20 hardware random number generator can do the same thing.
 
Modern large-scale enterprise software systems -- spanning multiple machines and separate processes, linking and coordinating many separate threads of execution all working in parallel into much vaster meta-processes -- must deal with indeterminacy even in today's systems.  In fact this challenge is leading to more and more adoption of consensus-based algorithms, because the same answer is not always arrived at in every case.
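To make the consensus idea concrete, here is a toy Python sketch of its simplest possible form, a bare majority vote over replies from replicas; real protocols such as Paxos or Raft are far more involved, and the replica answers below are invented for the example:

    from collections import Counter

    def majority_value(replies):
        # Return the value reported by a strict majority of replicas, else None.
        value, count = Counter(replies).most_common(1)[0]
        return value if count > len(replies) // 2 else None

    # Three replicas answer the same query; one is out of step.
    print(majority_value(["commit", "commit", "abort"]))  # -> commit
    print(majority_value(["commit", "abort", "retry"]))   # -> None (no majority)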
 
I disagree with your supposition that any stochastic process can be mimicked by simply introducing a variable into the equation that has its value controlled in some random manner; perhaps in many cases this is possible, but that $20 piece of hardware you mention will not reproduce the same outcomes as the real stochastic system does.
A stochastic system may be reducible to being modeled by some set of random variation, but in reality it is often a whole lot more subtle than that and the "randomness" is not random but reflects the fact that many equally valid outcomes can evolve from the initial set of conditions.
 
>> And I note that many people look at the vast complexity of cellular processes and see superiority, but much of that complexity is actually a sign of inferiority.  Evolution is a dreadful programmer with a passion for spaghetti code. No human programmer would be stupid enough to write AAAAAA.... 10,000 times in a row but the human genome is full to the brim with that sort of thing, and very complex chemical metabolic processes like digestion (which has nothing to do with intelligence) have even more convoluted kludges.
 
Hehe... don't underestimate the amazing stupidity of programmers... I have seen some real exemplars of boneheaded stupidity in my time. Again I disagree. Software is horribly inefficient much of the time; optimization is highly selective (as it should in fact be) and most code is sub-optimal (which is actually fine). Processes grab way too many resources, for example, on the assumption that they might need them (good for them, bad for the system as a whole); bad algorithms abound in code and spaghetti code is still everywhere -- even in so-called object-oriented code.
As an aside our physical brain anatomy as well is a convoluted mess and would have never been designed the way it actually is by any intelligent designer; it is full of weird 90 degree bends and turns as the original tubular nematode proto-brain morphed -- and was shoe-horned -- into the tightly folded (and also highly differentiated -- having many sub parts) structure we have inside our heads. So I agree with you that naturally evolved systems can in fact be convoluted and sometimes absurdly contorted.
 
>> Ask yourself this question, why weren't all those fantastically complex transient dynamic branches in a neural network by the name of Grandmaster Gary Kasparov able to beat a 16 year old computer running a 16 year old chess program?
 
Neither here nor there, IMO -- not sure how this has bearing? The supercomputer that finally beat him had a massive number-crunching ability and was completely specialized in the task of beating him, whilst Gary Kasparov's fantastic brain is a general-purpose system that is not exclusively concerned with winning a game of chess.
 
-Chris
 
From: John Clark <johnk...@gmail.com>
To: everyth...@googlegroups.com
Sent: Thursday, August 22, 2013 8:47 AM
Subject: Re: When will a computer pass the Turing Test?

John Clark

unread,
Aug 22, 2013, 3:14:05 PM8/22/13
to everyth...@googlegroups.com
On Thu, Aug 22, 2013  Telmo Menezes <te...@telmomenezes.com> wrote:

>> There are only 3 possibilities:
  1) Our brains work by cause and effect processes; if so then the same thing
can be done on a computer.
  2) Our brains do NOT work by cause and effect processes; if so then they
are random and the same thing can be done on a $20 hardware random number
generator.
3) Sometimes our brains work by cause and effect processes and sometimes
 they don't; if so then they can be done on a computer and a $20 hardware random number generator.
 
> There are many other conceivable options.

Many?
 
> I'll try one. Not saying I believe in it, of course. My aim is to demonstrate that you are not exhausting the possible scenarios: We live inside a simulation created by ultra-intelligent beings in some external universe.

 Then there are only 2 possibilities:

1) The ultra computer that simulates our world changes from one state to the other for a reason; if so then our simulated computers which change from one state to the other for a simulated reason can create a simulated simulated world that also looks real to its simulated simulated inhabitants.

2) The ultra computer that simulates our world changes from one state to the other for NO reason; if so then it's random and there's nothing very ultra about the machine.

>  In this scenario, comp is false as far as we're concerned.

Cannot comment, I don't know what "comp" is.

  > I agree with Quentin, btw: causality has nothing to do with computation.

Nothing? Then I don't know what you mean by computation. What causal thing can we do but a computer can't?  It's true that Turing proved there are some real numbers, most of them in fact, that a computer could never find even if it had an infinite clock speed,  but we can't find those numbers either.

  John K Clark

 

John Clark

unread,
Aug 22, 2013, 3:43:53 PM8/22/13
to everyth...@googlegroups.com
On Thu, Aug 22, 2013  Chris de Morsella <cdemo...@yahoo.com> wrote:
 
> A stochastic system may be reducible to being modeled by some set of random variation

Yes.
 
>but in reality it is often a whole lot more subtle than that and the "randomness" is not random

If it's not random then it happened for a reason, and things happen in a computer for a reason too.

>>Ask yourself this question, why weren't all those fantastically complex transient dynamic branches in a neural network by the name of Grandmaster Gary Kasparov able to beat a 16 year old computer running a 16 year old chess program?
 
> not sure how this has bearing
 
Is that true, are you really not sure how that has any bearing? I am sure.

 > The super computer that finally beat him had a massive number crunching ability

At the time it may have been a supercomputer but that was 16 years ago, and the computer you're reading this email message on right now is almost certainly more powerful than the computer that beat the best human chess player in the world. And chess programs have gotten a lot better too. So all that spaghetti and complexity at the cellular level that you were rhapsodizing about didn't work as well as an antique computer running an ancient chess program.

  John K Clark
Chris de Morsella

unread,
Aug 22, 2013, 4:28:50 PM8/22/13
to everyth...@googlegroups.com
 
>> If it's not random then it happened for a reason, and things happen in a computer for a reason too.
 
Sure, but the "reason" may not be amenable to being completely contained within the confines of a deterministic algorithm if it depends on a series of outside processes that are not under the algorithms operational control and that are highly variable and transient. The reason may depend on a very large set of orthogonal factors each of which Is evolving and mutating along a separate dimension.
Try modeling complexity like this and  it can lead to spontaneous brain explosion :)
 
> At the time it may have been a supercomputer but that was 16 years ago, and the computer you're reading this email message on right now is almost certainly more powerful than the computer that beat the best human chess player in the world. And chess programs have gotten a lot better too. So all that spaghetti and complexity at the cellular level that you were rhapsodizing about didn't work as well as an antique computer running an ancient chess program.
You are incorrect; even today Deep Blue is still quite powerful compared to a PC.
 
The Deep Blue machine specs:
 It was a massively parallel, RS/6000 SP Thin P2SC-based system with 30 nodes, with each node containing a 120 MHz P2SC microprocessor for a total of 30, enhanced with 480 special purpose VLSI chess chips. Its chess playing program was written in C and ran under the AIX operating system. It was capable of evaluating 200 million positions per second, twice as fast as the 1996 version. In June 1997, Deep Blue was the 259th most powerful supercomputer according to the TOP500 list, achieving 11.38 GFLOPS on the High-Performance LINPACK benchmark.[12]
 
I doubt the machine you are writing your email on even comes close to that level of performance; I know mine does not achieve that level of performance.
 
From: John Clark <johnk...@gmail.com>
To: everyth...@googlegroups.com
Sent: Thursday, August 22, 2013 12:43 PM
Subject: Re: When will a computer pass the Turing Test?

Quentin Anciaux

unread,
Aug 22, 2013, 4:45:36 PM8/22/13
to everyth...@googlegroups.com
I didn't write the following; you're quoting someone else.


2013/8/22 John Clark <johnk...@gmail.com>


Telmo Menezes

unread,
Aug 22, 2013, 5:36:00 PM8/22/13
to everyth...@googlegroups.com
But the ultra computer I postulated is not a pure Turing machine. Its
behaviour can be influenced by entities external to our simulated
universe. In a sense this is a religious hypothesis, which I don't
like but which cannot be disproved. Granted, it doesn't count as a
scientific theory in the Popperian sense. It's what Carl Sagan called
an "invisible dragon in the garage" hypothesis. But there's an
infinity of these and they are conceivable and can be true.

>> > In this scenario, comp is false as far as we're concerned.
>
>
> Cannot comment, I don't know what "comp" is.

Come on John, we've been through this the other day. You do know.

>> > I agree with Quentin, btw: causality has nothing to do with
>> computation.
>
>
> Nothing? Then I don't know what you mean by computation. What causal thing
> can we do but a computer can't? It's true that Turing proved there are some
> real numbers, most of them in fact, that a computer could never find even if
> it had an infinite clock speed, but we can't find those numbers either.

Alright, maybe "nothing to do" is too strong. Computation can be used
to model causality, but causality itself is a very problematic and
fuzzy concept. Computation does not require causality. It can be
defined simply in the form of symbolic relationships. It is not
related to causality in the same sense that arithmetic is not related
to causality. Unless you say something very contrived like "2 because
1 + 1".

Platonist Guitar Cowboy

unread,
Aug 22, 2013, 5:40:36 PM8/22/13
to everyth...@googlegroups.com
Both soft- and hardware still play a role. Kasparov's loss still smells funky, even though by now I doubt Anand, Carlsen or other GMs could consistently stand a chance against, say, Houdini 3: https://en.wikipedia.org/wiki/Houdini_%28chess%29

I'd say given current state of the art that they might draw a few games per hundred. Perhaps win one every few hundred.

But even these fantastic engines have bugs, like the one below, which I've seen a few times, so I still wouldn't bet the farm:

http://www.chess.com/forum/view/general/humans-v-houdini-chess-engine-elo-3300?page=22

PGC


meekerdb

unread,
Aug 22, 2013, 7:17:59 PM8/22/13
to everyth...@googlegroups.com
On 8/22/2013 3:13 AM, Telmo Menezes wrote:
But it might be relegated to the same status as social sciences, where
it provides workable approximations but has no hope of achieving a
TOE.

Yes, that's close to what Hawking and Mlodinow say in their book.  They call it "model-dependent realism", without asserting that all of physics or reality can be covered by the same model.

Brent

meekerdb

unread,
Aug 22, 2013, 7:22:01 PM8/22/13
to everyth...@googlegroups.com
On 8/21/2013 11:57 PM, Quentin Anciaux wrote:



2013/8/22 meekerdb <meek...@verizon.net>
On 8/21/2013 11:15 PM, Quentin Anciaux wrote:



2013/8/22 meekerdb <meek...@verizon.net>
On 8/21/2013 2:42 PM, Quentin Anciaux wrote:
Ok, and I'm fascinated by the question of why we haven't found viable
algorithms in that class yet -- although we know as a fact that it
must exist, because our brains contain it.

We haven't proved our brain is computational in nature; if we had, we would have proven computationalism to be true... it's not the case. Maybe our brain has some non-computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack. I'm not saying AI is not possible, I'm just saying we haven't proved that "our brains contain it".

There's another possibility: That our brains are computational in nature, but that they also depend on interactions with the environment (not necessarily quantum entanglement, but possibly). 

Then it's not computational *in nature* because it needs that little ingredient; that's what I'm talking about when saying "Maybe our brain has some non-computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack."

It's not non-computational if the external influence is also computational. 

If it is, you've not chosen the right level... the whole event + brain is computational and you're back at the start.
 
But the reaction of a silicon neuron to a beta particle may be quite different from the reaction of a biological neuron.  So AI is still possible, but it may confound questions like, "Is the artificial consciousness the same as the biological?"

If it's computational, it is computational and AI at the right level would be the same as ours.

But "at the right level" may mean "including all the environment outside the brain".


 


 
When Bruno has proposed replacing neurons with equivalent input-output circuits I have objected that while it might still in most cases compute the same function there are likely to be exceptional cases involving external (to the brain) events that would cause it to be different.  This wouldn't prevent AI,

It would prevent it *if* we cannot attach that external event to the computation...

No, it doesn't prevent intelligence, but it may make it different.

It does (for digital AI) if the ingredient is non-computational and there is no way to attach it to the digital part without (for example) a biological brain.

I don't see why that follows.  Suppose the non-computational, external influence comes from the output of a hypercomputer?  It can still provide input to a Turing computer.  Or even true randomness could, as is hypothesized in QM.


 

if that external event was finitely describable, then it means you have not chosen the correct substitution level and computationalism alone holds.

Yes, that's Bruno's answer, just regard the external world as part of the computation too, simulate the whole thing. 

Well if your ingredient is the whole of physics, then it's self-defeating,

Exactly.  That's what I said below.

Brent

Russell Standish

unread,
Aug 23, 2013, 2:03:45 AM8/23/13
to everyth...@googlegroups.com
On Thu, Aug 22, 2013 at 05:10:05PM +0100, Telmo Menezes wrote:
>
> Bruno did not invent the term "dovetailing" nor is he the only person
> to use it in computer science. A simple google search will show you
> this. I know you're a smart guy and understand the metaphor, so you're
> just complaining for the sake of complaining. Do you also disapprove
> of the use of a sewing term to describe a type of computation
> (threading)?
>

I was a little puzzled by the etymology of "dovetailing" when I first
heard it, as I knew about the carpentry term. However, it apparently
comes from tilers, who describe a pattern of laying tiles as
dovetailing. And that analogy makes more sense.
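A minimal Python sketch of dovetailing in this computer-science sense: interleave an unbounded family of computations so that every one of them eventually receives arbitrarily many steps, even though none is ever waited on to finish. The generators below are stand-ins for arbitrary programs:

    import itertools

    def program(n):
        # Stand-in for the n-th program: an endless stream of outputs.
        k = 0
        while True:
            yield (n, k)
            k += 1

    def dovetail():
        # Stage s admits program s, then runs programs 0..s one step each,
        # so every program n receives its k-th step at some finite stage.
        live = []
        for stage in itertools.count():
            live.append(program(stage))
            for p in live:
                yield next(p)

    for out in itertools.islice(dovetail(), 10):
        print(out)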

Quentin Anciaux

unread,
Aug 23, 2013, 2:14:59 AM8/23/13
to everyth...@googlegroups.com



2013/8/23 meekerdb <meek...@verizon.net>

On 8/21/2013 11:57 PM, Quentin Anciaux wrote:



2013/8/22 meekerdb <meek...@verizon.net>
On 8/21/2013 11:15 PM, Quentin Anciaux wrote:



2013/8/22 meekerdb <meek...@verizon.net>
On 8/21/2013 2:42 PM, Quentin Anciaux wrote:
Ok, and I'm fascinated by the question of why we haven't found viable
algorithms in that class yet -- although we know as a fact that it
must exist, because our brains contain it.

We haven't proved our brain is computational in nature; if we had, we would have proven computationalism to be true... it's not the case. Maybe our brain has some non-computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack. I'm not saying AI is not possible, I'm just saying we haven't proved that "our brains contain it".

There's another possibility: That our brains are computational in nature, but that they also depend on interactions with the environment (not necessarily quantum entanglement, but possibly). 

Then it's not computational *in nature* because it needs that little ingredient; that's what I'm talking about when saying "Maybe our brain has some non-computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack."

It's not non-computational if the external influence is also computational. 

If it is, you've not chosen the right level... the whole event + brain is computational and you're back at the start.
 
But the reaction of a silicon neuron to a beta particle may be quite different from the reaction of a biological neuron.  So AI is still possible, but it may confound questions like, "Is the artificial consciousness the same as the biological?"

If it's computational, it is computational and AI at the right level would be the same as ours.

But "at the right level" may mean "including all the environment outside the brain".


 


 
When Bruno has proposed replacing neurons with equivalent input-output circuits I have objected that while it might still in most cases compute the same function there are likely to be exceptional cases involving external (to the brain) events that would cause it to be different.  This wouldn't prevent AI,

It would prevent it *if* we cannot attach that external event to the computation...

No, it doesn't prevent intelligence, but it may make it different.

It does (for digital AI) if the ingredient is non-computational and there is no way to attach it to the digital part without (for example) a biological brain.

I don't see why that follows.  Suppose the non-computational, external influence comes from the output of a hypercomputer?  It can still provide input to a Turing computer. 

So you could attach it to the digital part *but* that output of the hypercomputer is the non-computable part... you'll need it and you can't bypass it *and* it is not computable.
 
Or even true randomness could, as is hypothesized in QM.

Same thing.
 


 

if that external event was finitely describable, then it means you have not chosen the correct substitution level and computationalism alone holds.

Yes, that's Bruno's answer, just regard the external world as part of the computation too, simulate the whole thing. 

Well if your ingredient is the whole of physics, then it's self-defeating,

Exactly.  That's what I said below.

Brent


and computationalism is false... if it's some part of it, then at that level the "realness" of our consciousness is digital and computationalism holds.

Quentin
 
But I think that undermined his idea that computation replaces physics.  Physics isn't really replaced if it has to all be simulated.

Brent



John Clark

unread,
Aug 23, 2013, 1:48:29 PM8/23/13
to everyth...@googlegroups.com
On Thu, Aug 22, 2013 at 4:28 PM, Chris de Morsella <cdemo...@yahoo.com> wrote:
 
>> If it's not random then it happened for a reason, and things happen in a computer for a reason too.
 
> Sure, but the "reason" may not be amenable to being completely contained within the confines of a deterministic algorithm

What on earth are you talking about? The deterministic algorithm behaves as it does for a reason but does not do so for a reason??!!

 
> if it depends on a series of outside processes

If it depends on something then it's deterministic.
 
 
> > At the time it may have been a supercomputer but that was 16 years ago, and the computer you're reading this email message on right now is almost certainly more powerful than the computer that beat the best human chess player in the world. And chess programs have gotten a lot better too. So all that spaghetti and complexity at the cellular level that you were rhapsodizing about didn't work as well as an antique computer running an ancient chess program.
 
> You are incorrect; even today Deep Blue is still quite powerful compared to a PC.

Not unless your meaning of "powerful" is radically different from mine.
 
> The Deep Blue machine specs:
 It was a massively parallel, RS/6000 SP Thin P2SC-based system with 30 nodes, with each node containing a 120 MHz P2SC microprocessor for a total of 30, enhanced with 480 special purpose VLSI chess chips. Its chess playing program was written in C and ran under the AIX operating system. It was capable of evaluating 200 million positions per second, twice as fast as the 1996 version. In June 1997, Deep Blue was the 259th most powerful supercomputer according to the TOP500 list, achieving 11.38 GFLOPS on the High-Performance LINPACK benchmark.[12]

OK.

> I doubt the machine you are writing your email on even comes close to that level of performance; I know mine does not achieve that level of performance.
 
Are you really quite sure of that? The computer I'm typing this on is an ancient iMac that was not top of the line even back a full Moore's Law generation ago when it was new, back in the olden bygone days of 2011. Like all computers the number of floating point operations per second it can perform depends on the problem, but in computing dot products running multi-threaded vector code it runs at 34.3 GFLOPS; so Deep Blue running at 11.38 GFLOPS doesn't seem as impressive as it did in 1997.

Right now the fastest supercomputer in the world has a LINPACK rating of 54.9 petaflops, and a petaflop IS A MILLION GFLOPS; so today that Chinese supercomputer is 4.8 million times as powerful as Deep Blue was in 1997. And in just a few years that supercomputer will join Deep Blue on the antique computer junk pile.
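Taking the figures quoted above at face value, the ratio does come out to roughly 4.8 million:

    deep_blue = 11.38e9      # Deep Blue, 1997: 11.38 GFLOPS on LINPACK
    fastest_2013 = 54.9e15   # the 54.9-petaflop figure quoted above
    print(fastest_2013 / deep_blue)  # ~4.8e6, about 4.8 million times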

John K Clark

Alberto G. Corona

unread,
Aug 23, 2013, 2:05:45 PM8/23/13
to everyth...@googlegroups.com
In AI the response is always "the next decade".


2013/8/23 John Clark <johnk...@gmail.com>




--
Alberto.

Telmo Menezes

unread,
Aug 23, 2013, 2:13:09 PM8/23/13
to everyth...@googlegroups.com
On Fri, Aug 23, 2013 at 7:03 AM, Russell Standish <li...@hpcoders.com.au> wrote:
> On Thu, Aug 22, 2013 at 05:10:05PM +0100, Telmo Menezes wrote:
>>
>> Bruno did not invent the term "dovetailing" nor is he the only person
>> to use it in computer science. A simple google search will show you
>> this. I know you're a smart guy and understand the metaphor, so you're
>> just complaining for the sake of complaining. Do you also disapprove
>> of the use of a sewing term to describe a type of computation
>> (threading)?
>>
>
> I was a little puzzled by the etymology of "dovetailing" when I first
> heard it, as I knew about the carpentry term. However, it apparently
> comes from tilers, who describe a pattern of laying tiles as
> dovetailing. And that analogy makes more sense.

Ok. The analogy felt natural to me, looking at pictures like the first one here:
http://en.wikipedia.org/wiki/Dovetail_joint

I think some people also use the term for the traditional way of
shuffling a deck of cards, which also makes sense.

In any case, it's an established computer science term. Wolfram uses
it in "A New Kind of Science", for example.

Cheers,
Telmo.

> Cheers
>
> --
>
> ----------------------------------------------------------------------------
> Prof Russell Standish Phone 0425 253119 (mobile)
> Principal, High Performance Coders
> Visiting Professor of Mathematics hpc...@hpcoders.com.au
> University of New South Wales http://www.hpcoders.com.au
> ----------------------------------------------------------------------------
>

Chris de Morsella

unread,
Aug 23, 2013, 2:46:58 PM8/23/13
to everyth...@googlegroups.com
Okay, I grant you that the Deep Blue machine is part of the sediment buried under Moore's Law -- I had not looked at the benchmarks as closely as I should have; it was late at night, and I am going to stick with that answer :)
 
As for the larger discussion I guess it boils down to my doubt about the theoretical possibility of a universal computer. Every computer that we know about executes within a defined context -- its execution context, and within a local frame of reference under which it is executing. The execution context is bounded and limited and does not and IMO cannot extend infinitely. Though I am pretty certain others may disagree and will argue that a universal computer can exist that executes in a universal all encompassing context. I do not see how this can be. The computer requires a substrate in which to operate upon -- the CPU chips for example are what our computers operate on. I know of no computer that does not require this external structured environment  -- necessary to exist outside of itself so to speak in order for it to be able to operate on this substrate.
Every computer in existence requires external enabling hardware.
 
If a computer requires a substrate which it can manipulate in order to perform its logical operations then a universal computer is impossible because the substrate would necessarily be outside and foundational to its domain.
 
In any non-universal computer we are back to the limits posed by execution context and local frame of reference. A process may be shown to be deterministic within some frame, within an execution context, but because -- I argue -- there can be no all-encompassing universal execution context that does not itself rely on some external substrate to enable its basic operations, there will always be other execution contexts and processes operating independently of any given context.
 
Now when different execution contexts begin communicating messages to each other, how can a global outcome be said to be deterministic within the scope of any given execution context? Each single execution context is operating in its own frame of reference and will be generating outcomes based on its own frame. However, its own frame is not completely isolated from other frames of reference in the larger linked meta-systems -- say, the internet as a single loosely coupled dynamic entity, comprised of perhaps billions of connected devices, each operating in its own local frame.
 
I find it hard to accept the idea that this massive meta entity of millions and millions of separate servers can be described as being deterministic in its whole. The individual executing agents or processes -- that together, when linked by the trillions of messages being sent back and forth, comprise this larger meta entity -- can be modeled in a deterministic fashion within their individual frames of reference and execution contexts.
 
But can one say the same thing about the larger meta entity that emerges from the subtle interactions of the many hundreds of millions of executing processes that dynamically impinge on it and through which it emerges?
 
When one speaks of outcomes, they often depend on subtle variables that are rapidly varying, such that the results of running a function may change from instant to instant. While, within the execution context of the function producing the result, we can prove it is deterministic, once this function is loosely linked to other separately running execution frames it becomes harder to deterministically predict any given outcome, until some threshold of complexity and noise is reached where it becomes impossible to work back from the outcome and show how it has been determined.
 
Metaphorically, I suppose you could imagine a pond and random pebbles being tossed into it from many various directions. At first it will be possible to analyze the ripples and their interference patterns, work back to each pebble-hitting-the-water event, and determine the angle, size, speed, etc. of each pebble. But play this forward and keep throwing more and more pebbles onto the pond's surface from different angles and speeds. After some time, can one work back to the first pebble and determine the specifics of that single event? Obviously in practice we cannot do so, no matter how much computing power we throw at the problem, because the interactions and interference patterns of the millions of ripples spreading out from different points will make the reconstruction exponentially more difficult, until all the computers in the universe working together would be unable to solve the problem... for a big enough pond, that is, of course.
 
Perhaps one could even invoke quantum erasure -- and state that once an event has become so interfered with by other events that no trace of it can be distinguished from the noise in the system, then in a sense has not that event been erased?
 
And yet the current state of the system, in an infinitesimally minuscule way, has been affected by it, through an exceedingly long chain of events leading to other events and so forth.
Determinism depends on having a frame of reference and can only be defined within some frame of reference. I do not see how universal determinism can be demonstrated; perhaps I am wrong and it can be -- if so I would like to hear how it can be logically proved.
 
Cheers,
-Chris
 
 
From: John Clark <johnk...@gmail.com>
To: everyth...@googlegroups.com
Sent: Friday, August 23, 2013 10:48 AM
Subject: Re: When will a computer pass the Turing Test?

John Clark

unread,
Aug 23, 2013, 3:41:52 PM8/23/13
to everyth...@googlegroups.com
On Thu, Aug 22, 2013  Telmo Menezes <te...@telmomenezes.com> wrote:

>>  Then there are only 2 possibilities:
1) The ultra computer that simulates our world changes from one state to the
other for a reason; if so then our simulated computers which change from one
state to the other for a simulated reason can create a simulated simulated
world that also looks real to its simulated simulated inhabitants.

 2) The ultra computer that simulates our world changes from one state to the
 other for NO reason; if so then it's random and there's nothing very ultra
about the machine.
 
> But the ultra computer I postulated is not a pure Turing machine. Its behaviour can be influenced by entities external to our simulated universe.

Any Turing Machine can be influenced by anything external to it, such as me throwing a rock at the contraption.  I don't see the point.


>> Cannot comment, I don't know what "comp" is.

> Come on John, we've been through this the other day. You do know.

I know what I don't know, and I'm telling you I don't know what "comp" means; every time I think I do, Bruno proves me wrong. After over 2 and a half years of constantly seeing people on this list (and nowhere else) use that strange made-up word, I have come to the conclusion that I am not alone: nobody has a deep understanding of what the hell "comp" is supposed to mean.

> Computation does not require causality. It can be defined simply in the form of symbolic relationships.

I'm not interested in definitions and I'm not interested in relationships; if state X isn't the reason for a machine or computer or brain or SOMETHING going into state Y, then an algorithm is just a squiggle of ink in a book. Computation is physical.

   John K Clark

Quentin Anciaux

unread,
Aug 23, 2013, 3:48:01 PM8/23/13
to everyth...@googlegroups.com



2013/8/23 John Clark <johnk...@gmail.com>

On Thu, Aug 22, 2013  Telmo Menezes <te...@telmomenezes.com> wrote:

>>  Then there are only 2 possibilities:
1) The ultra computer that simulates our world changes from one state to the
other for a reason; if so then our simulated computers which change from one
state to the other for a simulated reason can create a simulated simulated
world that also looks real to its simulated simulated inhabitants.

 2) The ultra computer that simulates our world changes from one state to the
 other for NO reason; if so then it's random and there's nothing very ultra
about the machine.
 
> But the ultra computer I postulated is not a pure Turing machine. Its behaviour can be influenced by entities external to our simulated universe.

Any Turing Machine can be influenced by anything external to it, such as me throwing a rock at the contraption.  I don't see the point.


>> Cannot comment, I don't know what "comp" is.

> Come on John, we've been through this the other day. You do know.

I know what I don't know, and I'm telling you I don't know what "comp" means; every time I think I do, Bruno proves me wrong.

You're just lying... there is nothing more difficult than to explain a thing to someone who doesn't want to hear it... comp is *computationalism* and nothing else. So please stop pretending you don't know.

Quentin
 
After over 2 and a half years of constantly seeing people on this list (and nowhere else) use that strange made-up word, I have come to the conclusion that I am not alone: nobody has a deep understanding of what the hell "comp" is supposed to mean.

> Computation does not require causality. It can be defined simply in the form of symbolic relationships.

I'm not interested in definitions and I'm not interested in relationships; if state X isn't the reason for a machine or computer or brain or SOMETHING going into state Y, then an algorithm is just a squiggle of ink in a book. Computation is physical.

   John K Clark


John Clark

unread,
Aug 23, 2013, 3:57:41 PM8/23/13
to everyth...@googlegroups.com
On Fri, Aug 23, 2013 at 2:46 PM, Chris de Morsella <cdemo...@yahoo.com> wrote:

 
> The computer requires a substrate in which to operate upon -- the CPU chips for example are what our computers operate on. I know of no computer that does not require this external structured environment  

 The human requires a substrate in which to operate upon -- the brain for example is what our human minds  operate on. I know of no human that does not require this external structured environment.  

> Every computer in existence requires external enabling hardware.

Every human in existence requires external enabling hardware.

 > If a computer requires a substrate which it can manipulate in order to perform its logical operations then a universal computer is impossible because the substrate would necessarily be outside and foundational to its domain.

If a human requires a substrate which it can manipulate in order to perform its logical operations then a universal human is impossible because the substrate would necessarily be outside and foundational to its domain.

  John K Clark
 



meekerdb

unread,
Aug 23, 2013, 4:49:03 PM8/23/13
to everyth...@googlegroups.com
On 8/23/2013 11:05 AM, Alberto G. Corona wrote:
In AI the response is always "the next decade".

That's because as soon as a computer does what was formerly claimed to be possible only for human intellect, e.g. beat a world chess champion, prove a new theorem in mathematics, drive a car in traffic,... that thing is immediately demoted to "not real intelligence".

So AI is always what hasn't been done yet.

Brent

Chris de Morsella

unread,
Aug 23, 2013, 5:12:58 PM8/23/13
to everyth...@googlegroups.com
Brent, I agree; it seems to be an ever-moving goal post. Already so much is being done by expert systems that up until a few years ago was the exclusive domain of humans -- for example, automated arbitrage trading systems that are responsible for an ever-growing slice of all the trades on the major stock and commodities exchanges in the world, because not only are they so much faster than humans, they are often making better trades on average than human traders.
 
Part of the reason for this goal-post moving is how hard it is to provide any kind of rigorous definition of the meaning of intelligence, self-awareness, etc., and so it is quite easy -- in the fog of semantic confusion -- to claim post facto that whatever had previously been proposed as a clear sign of AI is not really indicative of true AI.
 
Chris

Sent: Friday, August 23, 2013 1:49 PM
Subject: Re: When will a computer pass the Turing Test?

Chris de Morsella

unread,
Aug 23, 2013, 11:34:02 PM8/23/13
to everyth...@googlegroups.com

 

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of John Clark
Sent: Friday, August 23, 2013 12:58 PM
To: everyth...@googlegroups.com
Subject: Re: When will a computer pass the Turing Test?

 

 

 

On Fri, Aug 23, 2013 at 2:46 PM, Chris de Morsella <cdemo...@yahoo.com> wrote:

 

 

> The computer requires a substrate in which to operate upon -- the CPU chips for example are what our computers operate on. I know of no computer that does not require this external structured environment  


 The human requires a substrate in which to operate upon -- the brain for example is what our human minds  operate on. I know of no human that does not require this external structured environment.  

Yes… and?

> Every computer in existence requires external enabling hardware.


>>Every human in existence requires external enabling hardware.

Yes, but humans are not universal computing machines, if indeed we are machines. Do we know enough about how our brains work and are structured, to the level that we would need to, in order to be able to answer that question with any degree of certainty? I was referring to the hypothesized deterministic universe, in which everything that has happened can be computed from the initial state and has followed on from that original set of conditions… that we live in a deterministic universe and that everything that has or will ever happen is pre-destined and already baked into the unfolding fabric of our experiencing of reality.

If a computer operates from within a local frame of reference and context, but far from being isolated and existing alone is instead connected to much vaster environments and meta-processes that are potentially very loosely coupled -- based on indirect means such as, say, message passing through queues or other signals -- then can its own outputs be said to be completely deterministic, even if we consider its own internal operations to be constrained to be deterministic? Operations, especially ones that are parts of much larger workflows, are being mutated by many actors, potentially with sophisticated stripe-locking strategies, for example, having their data stores accessed concurrently by multiple separate processes. There are just so many pseudo-random and hard-to-predict-or-model occurrences -- such as, say, lock contention -- that are occurring at huge rates (when seen from sufficiently high up any large architecture).

I find it hard to see how the resulting outcomes produced by such kinds of systems can be determined based on a knowledge of the state of the system at some initial instant in time.
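A small Python illustration of that kind of loosely coupled indeterminacy: two threads updating a shared counter without a lock can interleave differently on every run, so the final value is not fixed by the program's initial state alone:

    import threading

    counter = 0

    def worker():
        global counter
        for _ in range(100000):
            tmp = counter       # read...
            counter = tmp + 1   # ...write; another thread may run in between

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Often well short of 200000, and different from run to run.
    print(counter)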

 > If a computer requires a substrate which it can manipulate in order to perform its logical operations then a universal computer is impossible because the substrate would necessarily be outside and foundational to its domain.


>>If a human requires a substrate which it can manipulate in order to perform its logical operations then a universal human is impossible because the substrate would necessarily be outside and foundational to its domain.

Agreed. Humans are exceedingly far from being universal. Our very sense of self precludes universality.

Cheers,

-Chris


Quentin Anciaux

unread,
Aug 24, 2013, 5:44:44 AM8/24/13
to everyth...@googlegroups.com



2013/8/24 Chris de Morsella <cdemo...@yahoo.com>

 

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of John Clark
Sent: Friday, August 23, 2013 12:58 PM
To: everyth...@googlegroups.com


Subject: Re: When will a computer pass the Turing Test?

 

 

 

On Fri, Aug 23, 2013 at 2:46 PM, Chris de Morsella <cdemo...@yahoo.com> wrote:

 

 

> The computer requires a substrate in which to operate upon -- the CPU chips for example are what our computers operate on. I know of no computer that does not require this external structured environment  


 The human requires a substrate in which to operate upon -- the brain for example is what our human minds  operate on. I know of no human that does not require this external structured environment.  

Yes… and?

> Every computer in existence requires external enabling hardware.


>>Every human in existence requires external enabling hardware.

Yes, but humans are not universal computing machines, if indeed we are machines. Do we know enough about how our brains work and are structured, to the level that we would need to, in order to be able to answer that question with any degree of certainty? I was referring to the hypothesized deterministic universe,

Well, it's not because the universe is deterministic that it is computable... it may require infinite precision to get the next step... that's why computability and cause and effect are not related, contrary to what John Clark likes to say (if something has a cause/reason then it is computable; that's just plain wrong). It's not because it's determined that it has a finite description...

A computation + an oracle is not a computation *alone*... it requires the oracle performing a hypercomputation or handling the infinite part; while the whole object could still be said to behave deterministically, it is not computable.
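A minimal sketch of that precision point, using the logistic map as a stand-in example (my own choice of illustration, nothing more): the rule x -> 4x(1-x) is exactly deterministic, yet any finite-precision evaluation of it eventually parts company with the true orbit, because each step consumes more digits of the initial condition.

from decimal import Decimal, getcontext

def orbit(x0, steps, digits):
    # Iterate the deterministic rule x -> 4x(1-x) at a fixed working precision.
    getcontext().prec = digits
    x = Decimal(x0)
    for _ in range(steps):
        x = 4 * x * (1 - x)
    return float(x)

x0 = "0.1234567890123456789"
print(orbit(x0, 60, 20))  # 20-digit arithmetic
print(orbit(x0, 60, 40))  # 40-digit arithmetic gives a different answer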

Quentin


 

in which everything that has happened can be computed from the initial state and has followed on from that original set of conditions… the idea that we live in a deterministic universe and that everything that has or will ever happen is predestined and already baked into the unfolding fabric of our experience of reality.

If a computer operates from within a local frame of reference and context, but far from being isolated and existing alone is instead connected to much vaster environments and meta-processes that are potentially very loosely coupled -- based on indirect means such as message passing through queues or other signals -- then can its own outputs be said to be completely deterministic, even if we consider its own internal operations to be constrained to be deterministic? Data, especially data that is part of much larger workflows, is being mutated by many actors and -- with sophisticated stripe-locking strategies, for example -- its stores are being accessed concurrently by multiple separate processes. There are just so many pseudo-random occurrences that are hard to predict or model -- lock contention, say -- occurring at huge rates (when seen from sufficiently high up in any large architecture).

I find it hard to see how the resulting outcomes produced by such systems can be determined from knowledge of the state of the system at some initial instant in time.

 > If a computer requires a substrate which it can manipulate in order to perform its logical operations then a universal computer is impossible because the substrate would necessarily be outside and foundational to its domain.


>>If a human requires a substrate which it can manipulate in order to perform its logical operations then a universal human is impossible because the substrate would necessarily be outside and foundational to its domain.

Agreed. Humans are exceedingly far from being universal. Our very sense of self precludes universality.

Cheers,

-Chris


John Clark

unread,
Aug 24, 2013, 11:46:23 AM8/24/13
to everyth...@googlegroups.com
On Fri, Aug 23, 2013 at 11:34 PM, Chris de Morsella <cdemo...@yahoo.com> wrote

>>> The computer requires a substrate upon which to operate -- the CPU chips, for example, are what our computers operate on. I know of no computer that does not require this external structured environment.

>> The human requires a substrate upon which to operate -- the brain, for example, is what our human minds operate on. I know of no human that does not require this external structured environment.

> Yes… and?

And you tell me, those are your ideas not mine. I don't see the relevance but I thought you did.

>>> Every computer in existence requires external enabling hardware.

>>Every human in existence requires external enabling hardware.
 

> Yes but humans are not universal computing machines,

If we're not universal then we are provincial computing machines. Do you really think this strengthens your case concerning the superiority of humans?  

> if indeed we are machines.

We are either cuckoo clocks or roulette wheels, take your pick. 
 

> Do we know enough about how our brains work and are structured to the level that we would need to in order to be able to answer that question with any degree of certainty?

Yes absolutely! I can say with no fear of contradiction that things in the brain happen for a reason or they do not happen for a reason.

> I was referring to the hypothesized deterministic universe, in which everything that has happened can be computed from the initial state and has followed on from that original set of conditions

Everything in modern physics and mathematics says that determinism is false, but who cares, we were talking about intelligence and biological minds and computer minds; what does the truth or falsehood of determinism have to do with the price of eggs?

  John K Clark

Quentin Anciaux

unread,
Aug 24, 2013, 11:57:21 AM8/24/13
to everyth...@googlegroups.com



2013/8/24 John Clark <johnk...@gmail.com>

On Fri, Aug 23, 2013 at 11:34 PM, Chris de Morsella <cdemo...@yahoo.com> wrote

>>> The computer requires a substrate upon which to operate -- the CPU chips, for example, are what our computers operate on. I know of no computer that does not require this external structured environment.

>> The human requires a substrate upon which to operate -- the brain, for example, is what our human minds operate on. I know of no human that does not require this external structured environment.

> Yes… and?

And you tell me, those are your ideas not mine. I don't see the relevance but I thought you did.

>>> Every computer in existence requires external enabling hardware.

>>Every human in existence requires external enabling hardware.
 

> Yes but humans are not universal computing machines,

If we're not universal then we are provincial computing machines. Do you really think this strengthens your case concerning the superiority of humans?  

> if indeed we are machines.

We are either cuckoo clocks or roulette wheels, take your pick. 
 

> Do we know enough about how our brains work and are structured to the level that we would need to in order to be able to answer that question with any degree of certainty?

Yes absolutely! I can say with no fear of contradiction that things in the brain happen for a reason or they do not happen for a reason.

> I was referring to the hypothesized deterministic universe, in which everything that has happened can be computed from the initial state and has followed on from that original set of conditions

Everything in modern physics and mathematics says that determinism is false,

That's wrong, MWI is deterministic... and again, deterministic and computable are two different things.

Quentin
 
but who cares, we were talking about intelligence and biological minds and computer minds; what does the truth or falsehood of determinism have to do with the price of eggs?  

  John K Clark


Chris de Morsella

unread,
Aug 24, 2013, 2:48:11 PM8/24/13
to everyth...@googlegroups.com

On Fri, Aug 23, 2013 at 11:34 PM, Chris de Morsella <cdemo...@yahoo.com> wrote

>>> The computer requires a substrate upon which to operate -- the CPU chips, for example, are what our computers operate on. I know of no computer that does not require this external structured environment.

>> The human requires a substrate upon which to operate -- the brain, for example, is what our human minds operate on. I know of no human that does not require this external structured environment.

> Yes… and?

>>And you tell me, those are your ideas not mine. I don't see the relevance but I thought you did.

 

There is no relevance unless one is attempting to posit the existence of a universal computer. All measurable processes -- including information processing -- happen on, and require for their operation, some physical substrate. My point, which I believe you have either missed or are dodging, is that a universal computer is therefore impossible, because there would always need to be some underlying and external container for the process, one that could not itself be completely contained within the process.

 

>>> Every computer in existence requires external enabling hardware.

>>Every human in existence requires external enabling hardware.

 

> Yes but humans are not universal computing machines,

>>If we're not universal then we are provincial computing machines. Do you really think this strengthens your case concerning the superiority of humans?  

Whoa there, when did I make that statement? I am not interested in, nor do I much care, whether humans are superior or inferior to computers or, in fact, termites or microbes or anything else we could potentially be measured against. This does not drive my interest in the least. Who cares about our relative ranking in the universe; certainly not I.

> if indeed we are machines.

>>We are either cuckoo clocks or roulette wheels, take your pick. 

Not sure whether you are attempting to be funny or are pouring the irony on a little thick. An average human brain has somewhere around 86 billion neurons and, as far as we are able to count, around 100 trillion synapses. Characterizing this fantastically dense, crackling network as a cuckoo clock or a roulette wheel is rather facile. If we are machines then we are surely fantastically complex and highly dynamic ones.

> Do we know enough about how our brains work and are structured to the level that we would need to in order to be able to answer that question with any degree of certainty?

>>Yes absolutely! I can say with no fear of contradiction that things in the brain happen for a reason or they do not happen for a reason.

You have said absolutely nothing that means anything more than reiterating your belief in reductionism. Something either happens or does not happen for a reason… sure… and so what? What insight have you uncovered by stating the obvious? It certainly does not help answer the question I posed. We do not know enough about brain function to be able to model it with anything approaching certainty. That was my point, and your reply added nothing of substance to it, as far as I can see.

I can say that things happen for a reason or do not happen for a reason about any phenomenon whatsoever in the universe, but I have not, by stating the obvious, uncovered any deeper truths or given any insight into any process or underlying physical laws. It is meaningless and leads nowhere in terms of providing any actual valuable insight or explanation. It speaks without saying anything. What is your point? What insight does that give you into the mechanisms by which thought, self-awareness and consciousness arise in our brains?

> I was referring to the hypothesized deterministic universe, in which everything that has happened can be computed from the initial state and has followed on from that original set of conditions

>>Everything in modern physics and mathematics says that determinism is false, but who cares, we were talking about intelligence and biological minds and computer minds; what does the truth or falsehood of determinism have to do with the price of eggs?

I suspect we may be having parallel conversations and are simply not communicating all that well.

In principle I am agnostic about AI arising in a machine. I am humble enough, however, to admit that so much of the fine-grained detail of brain functioning is still not understood, and that it is therefore impossible for us to model the dynamic functioning of the human brain. Perhaps someday -- maybe even soon -- we will have the fine detailed maps of all the connections (including all the axons as well) and of the dynamic patterns of activity that traverse them -- but until then all we really have is hypothesis and conjecture.

And until we are able to build a fine-grained and falsifiable model of how the brain works -- one shown (by not being falsified, of course) to have powerful predictive value for outcomes given initial conditions -- we cannot say exactly how such qualia as self-awareness, consciousness and intelligent creative thought arise within us, or whether this process is replicable in an artificial machine.

Or can we? If so… care to explain how?

Cheers

-Chris

Russell Standish

unread,
Aug 24, 2013, 7:03:34 PM8/24/13
to everyth...@googlegroups.com
On Fri, Aug 23, 2013 at 08:34:02PM -0700, Chris de Morsella wrote:
>
>
>
>
> From: everyth...@googlegroups.com
> [mailto:everyth...@googlegroups.com] On Behalf Of John Clark
> Sent: Friday, August 23, 2013 12:58 PM
> To: everyth...@googlegroups.com
> Subject: Re: When will a computer pass the Turing Test?
>
>
>
> >>Every human in existence requires external enabling hardware.
>
> Yes but humans are not universal computing machines, if indeed we are
> machines. Do we know enough about how our brains work and are structured to


...

>
> >>If a human requires a substrate which it can manipulate in order to
> perform its logical operations then a universal human is impossible because
> the substrate would necessarily be outside and foundational to its domain.
>
> Agreed. Humans are exceedingly far from being universal. Our very sense of
> self precludes universality.
>

I may be missing your point entirely, but humans are universal
machines in the sense that they can emulate perfectly any Turing
machine, given enough time, patience, paper and pens for external storage.

They may well be capable of far more than a universal Turing machine,
but they're not less.
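For instance, a few lines of Python suffice to step through any machine's transition table (the one-state table below is just a toy I made up for illustration):

def run_tm(table, tape, state="A", pos=0, max_steps=100):
    # Follow a transition table: (state, symbol) -> (write, move, next state).
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "HALT":
            break
        symbol = cells.get(pos, 0)
        write, move, state = table[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return [cells[i] for i in sorted(cells)]

# Toy machine: flip 1s to 0s until the first 0, flip that to 1, then halt.
table = {("A", 1): (0, "R", "A"), ("A", 0): (1, "R", "HALT")}
print(run_tm(table, [1, 1, 0]))  # [0, 0, 1]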

I don't see what the sense of self has to do with it...

Chris de Morsella

unread,
Aug 24, 2013, 8:01:48 PM8/24/13
to everyth...@googlegroups.com
>> I don't see what the sense of self has to do with it...

Hi Russell ~ In the sense that, by having a "sense of self", we have
inescapably already separated our "self" from any possibility of seeing from
the perspective of a universal point of view... the all that is and can be.

Naturally this is a matter of perception and we all exist within the set of
all that can be and is, but we perceive ourselves as having identity, and
identity is perforce a perspective on something larger within which the
identified thing operates and to which it belongs, but from which it
considers itself separate and distinct. I use it in the sense -- so many
ways to use that word; hope it all does not come out as nonsense :) -- of
how our own perceptual lock-in, viewing the universe from the perspective
of our own beings, is a fundamental limitation we have by nature of being.
It is very hard to get beyond ourselves and to keep every event, and how we
interpret the streams from our senses, from becoming bound up with this
self-referential optic that we superimpose on the world impinging on us.

I do not see how a universal being could experience itself as having a self
-- at least in the limited way we experience it. I am a believer in the
importance of our self-centered beings for what that's worth and clearly at
our stage in evolution we require it -- not selfish (hopefully), but
centered within a self, a self who perceives and who at least believes they
are imbued with free will.

But this is way off topic and I am wandering into what could easily lead off
into a whole other area that can be an endless discussion.

>> I may be missing your point entirely, but humans are universal machines
in the sense that they can emulate perfectly any Turing machine, given
enough time, patience, paper and pens for external storage.

True, and perhaps in theory possible, but in practice, as soon as we begin
to deal with ever-increasing volumes of external systems -- especially ones
that respond to events and pressures to change from multiple arrays of
sources -- it grows geometrically harder to synchronize and manage
everything and to keep things like reentrancy problems from happening. So
in practice I think this breaks down at some stochastic threshold, and the
problem mushrooms out of control as the bookkeeping effort required begins
to overtake the value of each increment of extra external inclusion into
the set of things that need to be kept track of and taken account of.

Cheers,
-Chris

Russell Standish

unread,
Aug 25, 2013, 2:53:06 AM8/25/13
to everyth...@googlegroups.com
On Sat, Aug 24, 2013 at 05:01:48PM -0700, Chris de Morsella wrote:
> >> I don't see what the sense of self has to do with it...
>
> Hi Russell ~ In the sense, that by having a "sense of self" we have
> inescapably already separated our "self" from any possibility of seeing from
> the perspective of a universal point of view... the all that is and can be.
>

Ahh - that's the source of the misunderstanding. A "universal point of
view" (if such a thing can actually exist) is not the same thing at
all as a "universal machine". A universal machine is defined as a
machine capable of emulating any other machine, given an appropriate program.

Cheers

John Clark

unread,
Aug 25, 2013, 12:17:42 PM8/25/13
to everyth...@googlegroups.com
On Sat, Aug 24, 2013 at 2:48 PM, Chris de Morsella <cdemo...@yahoo.com> wrote:

> All measurable processes – including information processing -- happen over and require for their operations some physical substrate. My point, which I believe either you may have missed or you are dodging is that therefore a universal computer is impossible, because there would always need to be some underlying and external container for the process that could not therefore itself be completely contained within the process.

I'm not at all clear what you're talking about and have little desire for clarification because enough is clear to know that even if you are describing some sort of limitation to computers humans have the exact same limitation. 

> I am not interested in nor do I much care whether humans are superior or inferior to computers

That I quite simply do not believe because I do not think anybody would advance or be convinced by such incredibly weak arguments unless they had already decided what they would prefer to be true and only then started to look around for something, anything, to support that view. 

>>We are either cuckoo clocks or roulette wheels, take your pick.

> Not sure whether you are attempting to be funny or are pouring the irony on a little thick. An average human brain has somewhere around 86 billion neurons

And today just one INTEL Xeon chip that you could put on your fingernail contains over 5 billion transistors, each of which can change its state several million times faster than any neuron can.
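A back-of-envelope comparison of raw switching throughput (the per-device rates below are my rough assumptions; only the device counts come from this thread):

xeon_transistors = 5e9   # from the figure above
transistor_hz    = 3e9   # assumed GHz-scale switching rate
neurons          = 86e9  # from the figure Chris cites
neuron_max_hz    = 1e3   # neurons rarely fire faster than ~1 kHz

print(f"chip:  {xeon_transistors * transistor_hz:.1e} transitions/sec")
print(f"brain: {neurons * neuron_max_hz:.1e} spikes/sec, as an upper bound")
# Raw switching favors silicon by several orders of magnitude, though it
# says nothing about the connection topology.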

> Characterizing this fantastically dense crackling network as a cuckoo clock or a roulette wheel is rather facile.

There is one thing that brains and cuckoo clocks and roulette wheels and the Tianhe-2 Supercomputer all have in common, things inside them happen for a reason or things inside them do not happen for a reason.

> If we are machines then we are surely fantastically complex and highly dynamic ones.

Yes, and so are computers.

>> I can say with no fear of contradiction that things in the brain happen for a reason or they do not happen for a reason.

> You have said absolutely nothing that means anything more than reiterating your belief in reductionism.

No, what I said was that things happen for a reason or they do not happen for a reason. Are you telling me with a straight face that you disagree with that?!

> Something either happens or does not happen for a reason… sure.. and so what? What insight have you uncovered by stating the obvious.

The insight that we are either cuckoo clocks or roulette wheels, take your pick. 

> I can say that things happen, for a reason or they do not happen for a reason, for any phenomena whatsoever, in the universe, but I have not therefore, by stating the obvious, uncovered any deeper truths or given any insight into any process or underlying physical laws. It is meaningless and it leads nowhere in terms of providing any actual valuable insight or explanation. It speaks but without saying anything. What is your point?

The point that free will is an idea so bad it's not even wrong.

> much of the fine grained details of brain functioning are still not understood and that therefore it is impossible for us to model

That doesn't follow. We still don't understand how high temperature superconductors work but that doesn't prevent us from using them in machines. In the same way we wouldn't need to understand why the logic diagram of a brain is the way it is to reverse engineer it and duplicate the same thing in silicon; assuming of course that you wanted to make an AI the same way that Evolution did, but there are almost certainly better ways to do that with astronomically less spaghetti code.

  John K Clark



Chris de Morsella

unread,
Aug 25, 2013, 6:24:53 PM8/25/13
to everyth...@googlegroups.com


From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of John Clark
Sent: Sunday, August 25, 2013 9:18 AM
To: everyth...@googlegroups.com
Subject: Re: When will a computer pass the Turing Test?


On Sat, Aug 24, 2013 at 2:48 PM, Chris de Morsella <cdemo...@yahoo.com> wrote:

> All measurable processes – including information processing -- happen over and require for their operations some physical substrate. My point, which I believe either you may have missed or you are dodging is that therefore a universal computer is impossible, because there would always need to be some underlying and external container for the process that could not therefore itself be completely contained within the process.

 

>>I'm not at all clear what you're talking about and have little desire for clarification because enough is clear to know that even if you are describing some sort of limitation to computers humans have the exact same limitation. 

Yes it is quite clear that you have no idea what I am talking about. On this we very much agree.

> I am not interested in nor do I much care whether humans are superior or inferior to computers

>>That I quite simply do not believe because I do not think anybody would advance or be convinced by such incredibly weak arguments unless they had already decided what they would prefer to be true and only then started to look around for something, anything, to support that view. 

Nor, in fact, do I much care whether or not you believe that my stated position is my position. If -- for whatever reason -- your mind requires that you be the agent who assigns my beliefs to me and who determines what my motivations are, that is something operating in you… interesting perhaps as a psychological phenomenon, but of no great import to anyone or anything besides your own sense of self-certainty.

What’s the purpose of having a conversation if, when I say quite clearly -- and I repeat -- that I am not interested in nor do I much care whether humans are superior or inferior to computers, you come back and say I must be lying because you have decided that this is important to me? Who are you to make that kind of decision for my brain… out, out, you intruder; it’s my mind, and I do not appreciate you defining it for me.

Take me at my word when I say I don’t really care one way or the other, that this horse race is uninteresting to me.

You mistake my fascination with how the brain works and with how conscious intelligence and self-awareness emerge -- in us or in any other entity -- for whatever you have inferred and decided I must be motivated by.

How incredibly pompous of you. Do you go popping into other people’s heads deciding what they believe a lot? It’s a bad habit, you know.

>>We are either cuckoo clocks or roulette wheels, take your pick.

> Not sure whether you are attempting to be funny or are pouring the irony on a little thick. An average human brain has somewhere around 86 billion neurons

>>And today just one INTEL Xeon chip that you could put on your fingernail contains over 5 billion transistors, each of which can change its state several million times faster than any neuron can.

Yes… and with that? Does it also sport a 100-trillion-connection network on it?

 

> Characterizing this fantastically dense crackling network as a cuckoo clock or a roulette wheel is rather facile.

>>There is one thing that brains and cuckoo clocks and roulette wheels and the Tianhe-2 Supercomputer all have in common, things inside them happen for a reason or things inside them do not happen for a reason.

 

Ahhhh yes, back once again to your idée fixe. And how exactly does that help you understand the brain, the CPU or anything at all? This obsession of yours -- it seems like one to me, for you keep returning over and over again to restating it. You believe things either happen for a reason or they don’t, though you cannot prove it. Obviously it is important to you, though what great insight you derive from this idée fixe of yours quite clearly eludes me.

Care to elucidate what is so darn original and profound about the tautology you endlessly come back to? Especially in terms of understanding subtle, deep, dynamic and vast phenomena such as conscious intelligence, how it can be recognized, and how it arises within an entity?


> If we are machines then we are surely fantastically complex and highly dynamic ones.

>>Yes, and so are computers.

Sure, but even now still orders of magnitude less so than us. I still have not seen an example of a one-hundred-trillion-connection machine the size of a grapefruit that runs on 20 watts. Not saying it won’t happen someday, maybe even soon, but the Xeon chip ain’t it.

>> I can say with no fear of contradiction that things in the brain happen for a reason or they do not happen for a reason.

> You have said absolutely nothing that means anything more than reiterating your belief in reductionism.

>>No, what I said was that things happen for a reason or they do not happen for a reason. Are you telling me with a straight face that you disagree with that?!

What I am telling you with a straight face is: so what? You have uncovered nothing new under the sun by continually reiterating your tautology. The switch is either on or it is off… you say. Everything either happens for a reason or it does not… or so you say. I don’t know that this is in fact so. Prove it. Prove your theorem. Prove that every event that can occur must either happen for a reason or for no reason at all. I don’t think you are even all that clear-headed about what you intend by “reason”. What is this agent you call “a reason”?

Are you arguing that for each and every effect there must be a cause? What are you in fact trying to say? And why is it of such importance?

> Something either happens or does not happen for a reason… sure.. and so what? What insight have you uncovered by stating the obvious.

>>The insight that we are either cuckoo clocks or roulette wheels, take your pick. 

So say you, and of course you are free to say whatever you like, but pardon me if I say your “insight” seems rather pointless to me.

> I can say that things happen, for a reason or they do not happen for a reason, for any phenomena whatsoever, in the universe, but I have not therefore, by stating the obvious, uncovered any deeper truths or given any insight into any process or underlying physical laws. It is meaningless and it leads nowhere in terms of providing any actual valuable insight or explanation. It speaks but without saying anything. What is your point?

>>The point that free will is an idea so bad it's not even wrong.

And you of course are free to believe that if you must… though I find it a self-imposed impoverishment of the soul… it’s your free will to choose to straitjacket yourself into the dreary preordained outcomes of determinism… as it is mine to pity you for doing so.

> much of the fine grained details of brain functioning are still not understood and that therefore it is impossible for us to model

>>That doesn't follow. We still don't understand how high temperature superconductors work but that doesn't prevent us from using them in machines.

To some degree; however, our ability to fully utilize high temperature superconductors, and to discover the holy grail of room temperature superconductors, is very significantly constrained by our lack of understanding of how the phenomenon works.

>>In the same way we wouldn't need to understand why the logic diagram of a brain is the way it is to reverse engineer it and duplicate the same thing in silicon; assuming of course that you wanted to make an AI the same way that Evolution did, but there are almost certainly better ways to do that with astronomically less spaghetti code.

You cannot really state that you understand a system without actually understanding the system. It is false to suggest that one can understand human intelligence or consciousness, for example, without understanding how it emerges within us… without being able to describe and to show the dynamics and means by which it becomes our experience.

Until we understand how we actually do work, we cannot make positivistic statements about how we must be working. You are putting the cart before the horse.

-Chris

>>  John K Clark


Chris de Morsella

unread,
Aug 25, 2013, 7:40:07 PM8/25/13
to everyth...@googlegroups.com


-----Original Message-----
From: everyth...@googlegroups.com
[mailto:everyth...@googlegroups.com] On Behalf Of Russell Standish


On Sat, Aug 24, 2013 at 05:01:48PM -0700, Chris de Morsella wrote:
> >> I don't see what the sense of self has to do with it...
>
> Hi Russell ~ In the sense, that by having a "sense of self" we have
> inescapably already separated our "self" from any possibility of
> seeing from the perspective of a universal point of view... the all that
is and can be.
>

>>Ahh - that's the source of the misunderstanding. A "universal point of
view" (if such a thing can actually exist) is not the same thing at all as a
"universal machine". A universal machine is defined as a machine capable of
emulating any other machine, given an appropriate program.

Hehe, my bad. I was throwing around a term that is well understood by many
to signify a Universal Turing Machine (or an automaton/device capable of
performing like the theoretical UTM) from an entirely different semantic
vector...

My apologies... and a good example of how important a shared understanding
of terms is for good communication.

On the single point of view thing... personally I confess I am agnostic on
whether or not a single point of view can exist, though I cannot fathom how
it could... and unlike most people I have tried, lol... though one could
argue that this inability of mine is the inevitable result of my being
shackled to existing from within a point of view :)

The reason for my introducing it into this thread in the first place was
to argue that everything that happens is happening within some local and
limited context... within a frame of reference in which it occurs and with
which it interacts. Any system we can define -- except perhaps abstract
mathematical/logical conceptual systems such as the infinite set (but that
can only exist because it remains -- by definition -- undefined... and
around in a circle it goes) -- must in some way be influenced by actors and
events that are external to it. All systems have externalities.

And because of this influence from outside, which characterizes all
systems -- except perhaps those in superposition -- determinism breaks down
as systems become increasingly complex, by orders of magnitude, and come to
comprise systems of systems linked by extended networks. At some stochastic
point in the degree of complexity and evolution of systems it becomes
impossible to work back to some hypothetical original cause based on
measured knowledge of some end outcome.

i.e. determinism is in practice impossible. For example, would it be
possible, by measuring precisely the surface of a pond and taking a
snapshot of that surface, to wind back all the surface waves that are
always dynamically racing across that pond and bouncing about as they
rapidly decay in energy, all the way to some distant ripple-causing
event... the pebble that was tossed into the pond a month ago, for example?
I would argue that even with a perfect picture of the pond's surface and of
every single micrometer of its boundaries... knowing all of its system
parameters... at some point an event can no longer be distinguished from
noise -- though that does not mean it ultimately had no effect either... in
a pseudo-random chaotic system that butterfly-wing-flapping event can be
the cause of the hurricane six months later. But it is also true that it
can never be determined to have been the cause... the signal of that causal
wing flap has long since been effectively erased by the vast seething chaos
of the countless jiggling atoms in the atmosphere.
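A minimal sketch of that pond argument (a toy of my own, with an assumed
sensor resolution): two different past events evolve, perfectly
deterministically, into present states that no finite-resolution
measurement can tell apart.

def evolve(amplitude, steps=100, damping=0.8):
    # Deterministically decay a ripple's amplitude step by step.
    for _ in range(steps):
        amplitude *= damping
    return amplitude

pebble_small = evolve(1.0)  # one hypothetical past event
pebble_large = evolve(2.0)  # a different hypothetical past event

resolution = 1e-6  # assumed measurement resolution of our "snapshot"
measure = lambda x: round(x / resolution) * resolution
print(measure(pebble_small) == measure(pebble_large))  # True: unrecoverable past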

Basically it boils down to my questioning and doubting the theoretical
possibility of determinism. Every non-trivial system -- I would argue --
becomes so interconnected and loosely coupled that processes that may
perhaps have started out in some deterministic manner rapidly begin to be
affected by events and interactions with other external actors, so that a
degree of indeterminacy creeps in.

This is a real bind that information science is facing. Computer
architecture relies on deterministic outcomes -- a logic gate must either
be open or it must be closed, without ambiguity. But increasingly this is
hard to guarantee, and at higher levels (clearly much higher than the
individual logic gate) systems need to handle indeterminacy to some degree.
In fact, consensus-based algorithms are becoming more widely adopted --
adopted in an attempt to solve this dilemma by going with a
wisdom-of-the-crowds approach and building consensus.
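As a minimal sketch of that consensus idea (a toy quorum read of my own,
not a real protocol such as Paxos or Raft): rather than trusting any single
node, read several replicas and take the majority value.

from collections import Counter

def quorum_read(responses):
    # Majority vote over the replica responses; fail if there is no majority.
    value, count = Counter(responses).most_common(1)[0]
    if count > len(responses) // 2:
        return value
    raise RuntimeError("no majority -- consensus not reached")

# Three replicas, one of which holds a stale value:
print(quorum_read(["v2", "v2", "v1"]))  # "v2"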

The more interesting things in life are found right on the edge where
chaos and order meet. Determinism seeks to impose its need for order on
chaos, and is unable to accept that in reality chaos is at least as vital
an ingredient in the secret sauce of life and reality as the order the
determinist so deeply loves.

Cheers


Chris de Morsella

unread,
Aug 26, 2013, 4:44:13 PM8/26/13
to everyth...@googlegroups.com
Sent: Monday, August 26, 2013 10:55 AM

Subject: Re: When will a computer pass the Turing Test?




On Sun, Aug 25, 2013 at 6:24 PM, Chris de Morsella <cdemo...@yahoo.com> wrote:
> I say quite clearly, and I repeat -- I am not interested in nor do I much care whether humans are superior or inferior to computers. Take me at my word when I say I don’t really care one way or the other, that this horse race is uninteresting to me.
>>I'm sorry Chris, I can't take your word for it because I don't think any rational being would advance an argument in favor of human superiority as incredibly weak as "All measurable processes -- including information processing -- happen on, and require for their operation, some physical substrate" unless they'd already decided what they'd prefer to believe.
 
Perhaps that is how you see it, but what you are seeing is the result of your own spin. I will repeat it -- and you can either take me at my word or go ahead and make an ass out of yourself by continuing to insist that I must be lying to you... no skin off my back. The point I was actually making in the sentence you quoted is that all processes happen in a local frame of reference, and that there is no universal, all-knowing point of view. Take me at my word or not... your prerogative I guess -- it's your head; you are free to fill it with whatever you choose.
 > How incredibly pompous of you. Do you go popping into other people’s heads deciding what they believe a lot?
 
>>Not as often as I'd like, I wish I had the ability to detect deception all the time but I'm not that good at it, however sometimes its obvious.
 
No point in having a conversation if you have already made up your mind, now is there? Either take me at my word or this is rather pointless.
 >>There is one thing that brains and cuckoo clocks and roulette wheels and the Tianhe-2 Supercomputer all have in common, things inside them happen for a reason or things inside them do not happen for a reason.
> Ahhhh yes back once again to your idée fixe. And how exactly does that help you understand the brain, the CPU or anything at all? This obsession of yours – it seems like one to me, for you keep returning over and over again to re-stating it. You believe things either happen for a reason or they don’t; though you cannot prove it.
>>Let me get this straight: you are skeptical that X is Y or X is not Y, and demand proof. Have I really got that straight??
 
No, you cannot prove that things in the brain happen because of some proximate, definable and identifiable cause or else must result from a completely random process. In a system as layered, massively parallel and highly noisy as the brain, your assumptions about how it works are naïve and border on the comical. The brain is not based on a simple deterministic algorithm in which the chain of cause and effect is always clear. You seem to fail to grasp how in complex chaotic systems -- such as the brain -- the linkage between cause and effect is not necessarily clear, or even possible to work back to.
If this is too subtle for how your mind wants to work, that is a deficiency in your own analytical abilities, and I cannot help you there.
> Care to elucidate what is so darn original and profound about the tautology you endlessly come back to?
>>Up to now every tautology has had one great virtue: they are all true; but apparently you think that for the first time in human history you have found a tautology that is false. Have I really got that straight??
 
You are being pointless and gratuitously argumentative.
> continually re-iterating your tautology. The switch is either on or it is off… you say. Everything either happens for a reason or it does not…. Or so you say. I don’t know that this is in fact so.

>>So you really don't know if that is in fact so. Have I really got that straight??
>>The point that free will is an idea so bad it's not even wrong.
> And you of course are free to believe that if you must…. though I find it a self-imposed impoverishment of the soul

>>So you think that if you have free will then you don't do things for a reason and so are not deterministic, and you don't do things for no reason and so are not random. Have I really got that straight??
 
No, you have it all twisted up in your binary way of viewing things. If all your brain is able to model is either/or propositions, then what's the point of carrying this conversation forward?
 
 > If we are machines then we are surely fantastically complex and highly dynamic ones.
>>Yes, and so are computers.
> Sure, but even now still orders of magnitude less so than us.
>>Sure, but computers are gaining on us at the rate of about one order of magnitude every 7 years, and there is no end in sight.

> You cannot really state that you understand a system, without actually understanding the system.

>>That is a tautology and thus obviously true, but you don't have to understand something to make use of it; we still don't fully understand how aspirin works but it has been curing headaches for well over a century.  
True, but black-box testing -- describing a system based only on knowledge of its inputs and outputs -- can only take you so far. I am not arguing that this approach is without value; of course it has value. But in order to truly understand any process one has to crack it open, step through the fine-grained details and get one's hands dirty, which is why white-box testing exists and is very widely used, by the way. Have you ever bothered to ask yourself why such heavy reliance is also placed on white-box testing (verification etc.) instead of relying on black-box techniques alone?
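To illustrate the difference (the function and tests here are invented purely for the example): the black-box test checks inputs against outputs only, while the white-box tests are chosen by reading the internal branch structure.

def clamp(x, lo=0, hi=10):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

# Black-box: observed behavior only.
assert clamp(5) == 5

# White-box: one case per internal branch, chosen by reading the code.
assert clamp(-1) == 0   # exercises the x < lo branch
assert clamp(99) == 10  # exercises the x > hi branch
assert clamp(3) == 3    # exercises the fall-through branch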
 
 
> It is false to suggest that one can understand human intelligence or consciousness, for example, without understanding how it emerges within us
>>More tautologies, that is to say more true statements, but understanding doesn't enter into it. I don't have to understand Hungarian to copy a Hungarian poem.
 
You can copy the symbols on a sheet of paper, but without understanding Hungarian you will never be impacted by the meaning or sensations the poem seeks to convey. So perhaps you can faithfully copy a stream of symbols whose meaning you do not know -- but that is a pretty empty and meaningless bit of knowledge that imparts very little understanding. If this level of understanding suffices for you and satisfies your intellectual curiosity, then so be it -- for you. I would rather get the meaning contained within the poem. You, on the other hand, would not even know it was a poem... just a series of symbols arranged in some order. That is qualitatively different, and a meager degree of knowledge.
> it is quite clear that you have no idea what I am talking about. On this we very much agree.

Yes.

  John K Clark
 
