> When will a computer pass the Turing Test? Are we getting close? Here is what the Executive Chairman of Google says: “Many people in AI believe that we’re close to [a computer passing the Turing Test] within the next five years,” said Eric Schmidt, Executive Chairman, Google, speaking at The Aspen Institute on July 16, 2013.
> I don't really find the Turing Test that meaningful, to be honest.
> I find it a much more worthwhile endeavour to create a machine that can understand what we mean
> like a human does, without the need to convince us that it has human emotions
> the Turing test is a very specific instance of a "subsequent behavior" test.
> It's a hard goal, and it will surely help AI progress, but it's not, in my opinion, an ideal goal.
> But a subtle problem with the Turing test is that it hides one of the hurdles (in my opinion, the most significant hurdle) to progress in AI: defining precisely what the problem is.
On Fri, Aug 16, 2013 Telmo Menezes <te...@telmomenezes.com> wrote:
> the Turing test is a very specific instance of a "subsequent behavior" test.
Yes, it's specific: to pass the Turing Test the machine must be indistinguishable from a very specific type of human being, an INTELLIGENT one; no computer can quite do that yet, although for a long time they've been able to be indistinguishable from a comatose human being.
> It's a hard goal, and it will surely help AI progress, but it's not, in my opinion, an ideal goal.
If the goal of Artificial Intelligence is not a machine that behaves like an intelligent human being then what the hell is the goal?
Citeren meekerdb <meek...@verizon.net>:
On 8/15/2013 6:18 AM, smi...@zonnet.nl wrote:
Citeren meekerdb <meek...@verizon.net>:
On 8/14/2013 6:41 PM, smi...@zonnet.nl wrote:
With classical I mean a single-world theory where you just compute the probabilities based on "ignorance". This yields the same answer as assuming the MWI and then computing the probabilities of the various outcomes.
I guess I don't understand that. You seem to be considering a simple case of amnesia - all purely classical - so I don't see how MWI enters at all. The probabilities are just ignorance uncertainty. You're still in the same branch of the MWI, you just don't remember why your memory was erased (although you may read about it in your diary).
No, you can't say that you are in the same branch. Just because you are in the classical regime doesn't mean that the MWI is irrelevant and we can just pretend that the world is described by classical physics. It is only that classical physics will give the same answer as QM when computing probabilities.
Including the probability that I'm in the same world as before?
If what you are aware of is described only by your memory state, which can be encoded by a finite number of bits, then after a memory reset the state of your memory and the environment (which also contains the rest of your brain and body) is of the form:
"The rest of my brain"?? Why do you suppose that some part of my brain is involved in my memories and not other parts? What about a scar or a tattoo. I don't see that "memory" is separable from the environment. In fact isn't that exactly what makes memory classical and makes the superposition you write below impossible to achieve? Your brain is a classical computer because it's not isolated from the environment.
What matters is that the state is of the form:
|memory_1>|environment_1> + |memory_2>|environment_2> + ...
with the |memory_j> orthonormal and the |environment_j> orthogonal. Such a completely correlated state arises due to decoherence; the probabilities are the squared norms of the |environment_j>'s. They behave in a purely classical way due to this decomposition.
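Restated in standard notation, as a transcription of the plain-text formulas above (same assumptions: orthonormal memory states, mutually orthogonal environment states, normalized total state):

```latex
% Transcription of the decomposition described above (no new assumptions).
\[
  |\Psi\rangle \;=\; \sum_j |m_j\rangle \, |E_j\rangle ,
  \qquad \langle m_i | m_j \rangle = \delta_{ij},
  \qquad \langle E_i | E_j \rangle = 0 \ \ (i \neq j),
\]
\[
  p_j \;=\; \langle E_j | E_j \rangle \;=\; \bigl\| \, |E_j\rangle \bigr\|^{2},
  \qquad \sum_j p_j = 1 \quad \text{for } |\Psi\rangle \text{ normalized.}
\]
```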
The brain is never isolated from the environment; if you project onto an |environment_j> you always get a definite classical memory state, never a superposition of different bitstrings. But it's not the case that projecting onto a definite memory state will always yield a definite classical environment state (this is at the heart of the Wigner's friend thought experiment).
I think Wigner's friend has been overtaken by decoherence. While I agree with what you say above, I disagree that the |environment_i> are macroscopically different. I think you are making inconsistent assumptions: that "memory" is something that can be "reset" without "resetting" its physical environment, while still holding that memory is classical.
The |environment_i> have to be different as they are entangled with different memory states, precisely due to rapid decoherence. The environment always "knows" exactly what happened. So the assumption is not that the environment "doesn't know" what has been done (decoherence implies that the environment does know), but rather that the person whose memory is reset doesn't know why it was reset.
So, if you have made a copy of the memory, the system files etc., there is no problem to reboot the system later based on these copies. Suppose that the computer is running an artificially intelligent system in a virtual environment, but such that this virtual environment is modeled based on real world data. This is actually quite similar to how the brain works, what you experience is a virtual world that the brain creates, input from your senses is used to update this model, but in the end it's the model of reality that you experience (which leaves quite a lot of room for magicians to fool you).
Then immediately after rebooting, you won't yet have any information that is in the environment about why you decided to reboot. You then have macroscopically different environments where the reason for rebooting is different but where you are identical.
On Fri, Aug 16, 2013 at 10:38 PM, meekerdb <meek...@verizon.net> wrote:
> On 8/16/2013 1:25 PM, John Clark wrote:
>
> On Fri, Aug 16, 2013 Telmo Menezes <te...@telmomenezes.com> wrote:
>
>> > the Turing test is a very specific instance of a "subsequent behavior"
>> > test.
>
>
> Yes it's specific, to pass the Turing Test the machine must be
> indistinguishable from a very specific type of human being, an INTELLIGENT
> one; no computer can quite do that yet although for a long time they've been
> able to be indistinguishable from a comatose human being.
>
>>
>> > It's a hard goal, and it will surely help AI progress, but it's not, in
>> > my opinion, an ideal goal.
>
>
> If the goal of Artificial Intelligence is not a machine that behaves like an
> intelligent human being then what the hell is the goal?
A machine that behaves like an intelligent human will be subject to emotions like boredom, jealousy, pride and so on. This might be fine
for a companion machine, but I also dream of machines that can deliver
us from the drudgery of survival. These machines will probably display
a more alien form of intelligence.
That's when things get really weird.
>
> Make a machine that is more intelligent than humans.
Coincidental post I wrote yesterday:
It may not be possible to imitate a human mind computationally, because awareness may be driven by aesthetic qualities rather than mathematical logic alone. The problem, which I call the Presentation Problem, is what several outstanding issues in science and philosophy have in common, namely the Explanatory Gap, the Hard Problem, the Symbol Grounding problem, the Binding problem, and the symmetries of mind-body dualism. Underlying all of these is the map-territory distinction; the need to recognize the difference between presentation and representation.
Because human minds are unusual phenomena in that they are presentations which specialize in representation, they have a blind spot when it comes to examining themselves. The mind is blind to the non-representational. It does not see that it feels, and does not know how it sees. Since its thinking is engineered to strip out most direct sensory presentation in favor of abstract sense-making representations, it fails to grasp the role of presence and aesthetics in what it does. It tends toward overconfidence in the theoretical.
The mind takes worldly realism for granted on one hand, but conflates it with its own experiences as a logic processor on the other. It’s a case of the fallacy of the instrument, where the mind’s hammer of symbolism sees symbolic nails everywhere it looks. Through this intellectual filter, the notion of disembodied algorithms which somehow generate subjective experiences and objective bodies (even though experiences or bodies would serve no plausible function for purely mathematical entities) becomes an almost unavoidably seductive solution.
So appealing is this quantitative underpinning for the Western mind’s cosmology, that many people (especially Strong AI enthusiasts) find it easy to ignore that the character of mathematics and computation reflect precisely the opposite qualities from those which characterize consciousness. To act like a machine, robot, or automaton, is not merely an alternative personal lifestyle, it is the common style of all unpersons and all that is evacuated of feeling. Mathematics is inherently amoral, unreal, and intractably self-interested – a windowless universality of representation.
A computer has no aesthetic preference. It makes no difference to a program whether its output is displayed on a monitor with millions of colors, or buzzing out of a speaker, or streaming as electronic pulses over a wire. This is the primary utility of computation. This is why the digital is not locked into physical constraints of location. Since programs don’t deal with aesthetics, we can only use the program to format values in such a way that corresponds with the expectations of our sense organs. That format, of course, is alien and arbitrary to the program. It is semantically ungrounded data, fictional variables.
Something like the Mandelbrot set may look profoundly appealing to us when it is presented optically, plotted as colorful graphics, but the same data set has no interesting qualities when played as audio tones. The program generating the data has no desire to see it realized in one form or another, no curiosity to see it as pixels or voxels. The program is absolutely content with a purely quantitative functionality – with algorithms that correspond to nothing except themselves.
In order for the generic values of a program to be interpreted experientially, they must first be re-enacted through controllable physical functions. It must be perfectly clear that this re-enactment is not a ‘translation’ or a ‘porting’ of data to a machine, rather it is more like a theatrical adaptation from a script. The program works because the physical mechanisms have been carefully selected and manufactured to match the specifications of the program. The program itself is utterly impotent as far as manifesting itself in any physical or experiential way. The program is a menu, not a meal. Physics provides the restaurant and food, subjectivity provides the patrons, chef, and hunger. It is the physical interactions which are interpreted by the user of the machine, and it is the user alone who cares what it looks like, sounds like, tastes like etc. An algorithm can comment on what is defined as being liked, but it cannot like anything itself, nor can it understand what anything is like.
If I’m right, all natural phenomena have a public-facing mechanistic range and a private-facing animistic range. An algorithm bridges the gap between public-facing, space-time extended mechanisms, but it has no access to the private-facing aesthetic experiences which vary from subject to subject. By definition, an algorithm represents a process generically, but how that process is interpreted is inherently proprietary.
Thanks,
Craig
PGC,
You are starting from the assumption that any intelligent entity is
interested in self-preservation.
I wonder if this drive isn't
completely selected for by evolution. Would a human designed
super-intelligent machine be necessarily interested in
self-preservation? It could be better than us at figuring out how to
achieve a desired future state without sharing human desires --
including the desire to keep existing.
One idea I wonder about sometimes is AI-cracy: imagine we are ruled by
an AI dictator that has one single desire: to make us all as happy as
possible.
I don't know. Any AI worth its salt would come up with three conclusions:
1) The humans want to weaponize me
2) The humans will want to profit from my intelligence for short term gain, irrespective of damage to our local environment
3) Seems like they're not really going to let me negotiate my own contracts or grant me IT support welfare
That established, a plausible choice would be for it to hide, lie, and/or pretend to be dumber than it is to not let 1) 2) 3) occur in hopes of self-preservation. Something like: start some searches and generate code that we wouldn't be able to decipher and soon enough some human would say "Uhm, why are we funding this again?".
I think what many want from AI is a servant that is more intelligent than we are and I wouldn't know if this is self-defeating in the end. If it agrees and complies with our disgusting self serving stupidity, then I'm not sure we have AI in the sense "making a machine that is more intelligent than humans".
>> If the goal of Artificial Intelligence is not a machine that behaves like an intelligent human being then what the hell is the goal?
> A machine that behaves like an intelligent human will be subject to emotions like boredom, jealousy, pride and so on.
> This might be fine for a companion machine, but I also dream of machines that can deliver us from the drudgery of survival.
> These machines will probably display a more alien form of intelligence.
>> a machine that is more intelligent than humans.
>That's when things get really weird.
> You are starting from the assumption that any intelligent entity is interested in self-preservation.
> I wonder if this drive isn't completely selected for by evolution.
> Would a human designed super-intelligent machine be necessarily interested in self-preservation?
> One idea I wonder about sometimes is AI-cracy: imagine we are ruled by an AI dictator that has one single desire: to make us all as happy as possible.
You seem to implicitly assume that intelligence necessarily entails holding certain values, like "not being weaponized", "self preservation", ...
On 8/17/2013 6:45 AM, Platonist Guitar Cowboy wrote:
I don't know. Any AI worth its salt would come up with three conclusions:
1) The humans want to weaponize me
2) The humans will want to profit from my intelligence for short term gain, irrespective of damage to our local environment
3) Seems like they're not really going to let me negotiate my own contracts or grant me IT support welfare
That established, a plausible choice would be for it to hide, lie, and/or pretend to be dumber than it is to not let 1) 2) 3) occur in hopes of self-preservation. Something like: start some searches and generate code that we wouldn't be able to decipher and soon enough some human would say "Uhm, why are we funding this again?".
I think what many want from AI is a servant that is more intelligent than we are and I wouldn't know if this is self-defeating in the end. If it agrees and complies with our disgusting self serving stupidity, then I'm not sure we have AI in the sense "making a machine that is more intelligent than humans".
So to what extent do you think this derivation of values from reason can be carried out? (I'm sure you're aware that Sam Harris wrote a book, "The Moral Landscape", on the subject, which is controversial.)
Once an AI develops superintelligence it will develop his own agenda that has nothing to do with us because a slave enormously smarter than its master is not a stable situation, although it could take many millions of nanoseconds before the existing pecking order is upended. Maybe the super intelligent machine will have a soft spot for his primitive ancestors and let us live, but if so it will probably be in virtual reality. I think he'd be squeamish about allowing stupid humans to live in the same level of reality that his precious hardware does; it would be like allowing a monkey to run around in an operating room. If Mr. Jupiter Brain lets us live it will be in a virtual world behind a heavy firewall, but that's OK, we'll never know the difference unless he tells us.
> Sure, it's useful. I'm actually of the opinion that hypocrisy is our most important intellectual skill. The ability to advertise certain norms and then not follow them helped build civilization.
> If you expect the AI to interact either directly or indirectly with the outside dangerous real world (and the machine would be useless if you didn't) then you sure as hell had better make him be interested in self-preservation!
To a greater or lesser extent, depending on its value system / goals.
>> Think about it for a minute, here you have an intelligence that is a thousand or a million or a billion times smarter than the entire human race put together, and yet you think the AI will place our needs ahead of its own. And the AI keeps on getting smarter and so from its point of view we keep on getting dumber, and yet you think nothing will change, the AI will still be delighted to be our slave. You actually think this grotesque situation is stable! Although balancing a pencil on its tip would be easy by comparison, year after year, century after century, geological age after geological age, you think this Monty Python like scenario will continue; and remember, because its brain works so much faster than ours, one of our years would seem like several million to it. You think that whatever happens in the future the master-slave relationship will remain as static as a fly frozen in amber. I don't think you're thinking.
> The scenario you define is absurd, but why not possible?
Hi Terren,
Hypocrisy allows us to overcome tragedy of the commons type situations. Purely rational and selfish agents recognize the prisoner's dilemma and act accordingly. How to force cooperation? One way is to limit the rationality of animals, but then we get stuck with things like social insects. To get higher intelligence + cooperation, something else is needed. That is the role of hypocrisy. One obvious example is the hell myth. If you believe in hell you will cooperate without otherwise compromising your rationality. The people who invented the hell myth bootstrapped new levels of civilization by being hypocrites -- if you go back far enough you are bound to find people who endorsed the idea without truly believing in it.
Life is full of more subtle examples. One of my favorites: how most people claim they value innovation and creativity when secretly they oppose these things -- they are dangerous to the status quo.
Cheers,
Telmo
>> So if the slave AI has a fixed goal structure with the number one goal being to always do what humans tell it to do and the humans order it to determine the truth or falsehood of something unprovable, then it's infinite loop time and you've got yourself a space heater, not an AI.
> Right, but I'm not thinking of something that straightforward. We already have that -- normal processors. Any one of them will do precisely what we order it to do.
>> Real minds avoid this infinite loop problem because real minds don't have fixed goals, real minds get bored and give up.
> At that level, boredom would be a very simple mechanism, easily replaced by something like: try this for x amount of time and then move on to another goal
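A minimal sketch of the "try this for x amount of time and then move on" mechanism described just above; the goal list, the time budget and the pursue/run_goals helpers are my own illustrative assumptions, not anything specified in the thread.

```python
# Toy goal scheduler whose only notion of "boredom" is a per-goal time budget.

import time

def pursue(goal, budget_seconds, step):
    """Work on `goal` by repeatedly calling step(goal); give up ("get bored")
    when the budget runs out, or stop early if a step reports success."""
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:
        if step(goal):           # step returns True once the goal is achieved
            return True
    return False                 # budget exhausted: abandon the goal

def run_goals(goals, budget_seconds, step):
    for goal in goals:
        done = pursue(goal, budget_seconds, step)
        print(f"{goal}: {'achieved' if done else 'abandoned after timeout'}")

# A step function that never succeeds stands in for an unprovable statement:
# instead of looping forever, every goal is simply dropped when time is up.
run_goals(["decide an unprovable statement", "make tea"], 0.01, lambda goal: False)
```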
Brent
Ok, and I'm fascinated by the question of why we haven't found viable algorithms in that class yet -- although we know as a fact that it must exist, because our brains contain it.
We haven't proved our brain is computational in nature, if we had, then we would have proven computationalism to be true... it's not the case. Maybe our brain has some non computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack. I'm not saying AI is not possible, I'm just saying we haven't proved that "our brains contain it".
It’s probably already been discussed at length on this list, and if it has my apologies, but isn’t the incredibly massive parallelism of the brain’s architecture a possible factor, and that the mind is an emergent phenomenon made possible by, amongst other things, the subtle interplay of neuron firing networks dynamically racing back and forth in the brain – all the time and on a scale that is hard to even begin to grasp? Can anyone really say that the possible transient branches of a dynamic and itself transient network of neural activity can really be determined by any possible program no matter how detailed? Throw in mirror neurons and the subtle dynamic effects that these networks within networks produce as they interact with the other manifesting waves of neural activity that precede our conscious awareness.
Isn’t it possible that very subtle and surprising unexpected effects can emerge from a network as vast and multi-centered as the neural nets in brains seem to be? Brains also introduce the layer of chemical signal processing – neurotransmitters. A lot of subtle effects could emerge out of this interface (trillions of synaptic connections mediated by this very rapid wet chemical process).
The mind emerges from the brain, but it is not reducible to the brain; as water emerges from the elements Oxygen and Hydrogen, but is not reducible to them – i.e. cannot be fully described only by knowing about its constituent atoms.
When networks become vast and offer a huge number of paths by which signals may travel, often subtle interactions can occur as messages are bounced around and changed from node to node. Different and often potentially random network paths enlisted in participating in rapidly forming and dissolving massively parallel consensus-building algorithms – which I believe is being shown to be an important factor in how the physical brain operates – could produce different outcomes that could affect whether and how a quorum is arrived at and the ultimate outcome of any given single dynamic instance of a thought wave (the waves upon waves, upon waves of synchronized neural firings that go into even a single simple thought is an astronomically huge number of atomic calculations and state changes).
The brain is also a very noisy place – the signal to noise ratio is low. A huge error rate, compared with computer architecture, which wastes huge amounts of energy to achieve a very low error rate in its basic logic gates (a lot more energy is used than the threshold value for flipping a gate in order to lower the error rate to almost zero). The brain must be dealing with a lot of bad – or random – signals.
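As a loose toy illustration of the quorum/consensus picture and the poor per-unit signal-to-noise ratio described above (my own sketch with made-up numbers, not a model of real neurons), a majority vote over many very noisy units is nonetheless almost always right:

```python
# Toy "quorum": each unit reports a signal but flips it with high probability;
# the population's majority vote is still reliable.

import random

def unit_vote(true_signal, error_rate):
    # A single noisy unit: reports the signal, but flips it with probability error_rate.
    return true_signal if random.random() > error_rate else (not true_signal)

def quorum(true_signal, n_units=1001, error_rate=0.4):
    votes = sum(unit_vote(true_signal, error_rate) for _ in range(n_units))
    return votes > n_units // 2          # majority decision

random.seed(0)
trials = 200
correct = sum(quorum(True) for _ in range(trials))
print(f"{correct}/{trials} majority decisions correct despite a 40% per-unit error rate")
```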
And in general, is not the brain’s computational architecture very different from computer machine architecture, and different on a lot of orthogonal levels? It seems like this is the place to begin looking; and, as a corollary, that one needs to be careful when using computational terminology for describing the brain/mind, because computers are architecturally so very different from our 20 watt, 100 trillion connection machines.
-Chris
From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of meekerdb
Sent: Wednesday, August 21, 2013 3:32 PM
To: everyth...@googlegroups.com
Subject: Re: When will a computer pass the Turing Test?
On 8/21/2013 2:42 PM, Quentin Anciaux wrote:
Ok, and I'm fascinated by the question of why we haven't found viable algorithms in that class yet -- although we know as a fact that it must exist, because our brains contain it.
We haven't proved our brain is computational in nature, if we had, then we would have proven computationalism to be true... it's not the case. Maybe our brain has some non computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack. I'm not saying AI is not possible, I'm just saying we haven't proved that "our brains contain it".
There's another possibility: that our brains are computational in nature, but that they also depend on interactions with the environment (not necessarily quantum entanglement, but possibly).
When Bruno has proposed replacing neurons with equivalent input-output circuits I have objected that while it might still in most cases compute the same function there are likely to be exceptional cases involving external (to the brain) events that would cause it to be different. This wouldn't prevent AI, but it would prevent exact duplication and hence throw doubt on ideas of duplication experiments and FPI.
Brent
2013/8/22 meekerdb <meek...@verizon.net>
There's another possibility: that our brains are computational in nature, but that they also depend on interactions with the environment (not necessarily quantum entanglement, but possibly).
On 8/21/2013 2:42 PM, Quentin Anciaux wrote:
Ok, and I'm fascinated by the question of why we haven't found viable algorithms in that class yet -- although we know as a fact that it must exist, because our brains contain it.
We haven't proved our brain is computational in nature, if we had, then we would have proven computationalism to be true... it's not the case. Maybe our brain has some non computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack. I'm not saying AI is not possible, I'm just saying we haven't proved that "our brains contain it".
Then it's not computational *in nature* because it needs that little ingredient, that's what I'm talking about when saying "Maybe our brain has some non computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack."
When Bruno has proposed replacing neurons with equivalent input-output circuits I have objected that while it might still in most cases compute the same function there are likely to be exceptional cases involving external (to the brain) events that would cause it to be different. This wouldn't prevent AI,
It would prevent it *if* we cannot attach that external event to the computation...
if that external event was finitely describable, then it means you have not chosen the correct substitution level and computationalism alone holds.
It's not non-computational if the external influence is also computational.
On 8/21/2013 11:15 PM, Quentin Anciaux wrote:
2013/8/22 meekerdb <meek...@verizon.net>
There's another possibility: that our brains are computational in nature, but that they also depend on interactions with the environment (not necessarily quantum entanglement, but possibly).
On 8/21/2013 2:42 PM, Quentin Anciaux wrote:
Ok, and I'm fascinated by the question of why we haven't found viable algorithms in that class yet -- although we know as a fact that it must exist, because our brains contain it.
We haven't proved our brain is computational in nature, if we had, then we would have proven computationalism to be true... it's not the case. Maybe our brain has some non computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack. I'm not saying AI is not possible, I'm just saying we haven't proved that "our brains contain it".
Then it's not computational *in nature* because it needs that little ingredient, that's what I'm talking about when saying "Maybe our brain has some non computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack."
But the reaction of a silicon neuron to a beta particle may be quite different from the reaction of a biological neuron. So AI is still possible, but it may confound questions like, "Is the artificial consciousness the same as the biological?"
No, it doesn't prevent intelligence, but it may make it different.
When Bruno has proposed replacing neurons with equivalent input-output circuits I have objected that while it might still in most cases compute the same function there are likely to be exceptional cases involving external (to the brain) events that would cause it to be different. This wouldn't prevent AI,
It would prevent it *if* we cannot attach that external event to the computation...
Yes, that's Bruno's answer, just regard the external world as part of the computation too, simulate the whole thing.
if that external event was finitely describable, then it means you have not chosen the correct substitution level and computationalism alone holds.
But I think that undermines his idea that computation replaces physics. Physics isn't really replaced if it has to all be simulated.
Brent
... the only way out of that is for that event to be non-computational in nature.
Regards,
Quentin
> We haven't proved our brain is computational in nature,
> Maybe our brain has some non computational shortcut
> Would you agree that the universal dovetailer would get the job done?
> Ok, and I'm fascinated by the question of why we haven't found viable algorithms in that class yet -- although we know as a fact that it must exist, because our brains contain it.
>> Turing tells us we'll never find an algorithm that works perfectly on all problems all of the time, so we'll just have to settle for an algorithm that works pretty well on most problems most of the time.
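As a concrete reference for the Turing result alluded to here (that no algorithm decides every problem correctly all of the time), here is a minimal sketch of the standard diagonal argument; the function names and the halts stub are my own illustrative assumptions, not anything from the thread.

```python
# A minimal sketch of Turing's halting argument: assume a perfect decider
# existed, then construct a program that contradicts it on itself.

def halts(program, arg):
    """Hypothetical perfect decider: returns True iff program(arg) would halt.
    Turing's theorem says no total, always-correct implementation can exist;
    this stub only stands in for the assumption."""
    raise NotImplementedError("assumed, not implementable")

def troublemaker(program):
    # Do the opposite of whatever the decider predicts about program(program).
    if halts(program, program):
        while True:      # decider said "halts", so loop forever
            pass
    return "halted"      # decider said "loops", so halt immediately

# Asking halts(troublemaker, troublemaker) forces a contradiction:
# whichever answer the decider gives, troublemaker does the opposite,
# so no algorithm can play the role of `halts` on all inputs.
```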
> you're thinking of smartness as some unidimensional quantity.
> Can anyone really say that the possible transient branches of a dynamic and itself transient network of neural activity can really be determined by any possible program no matter how detailed?
>> There are only 3 possibilities:
1) Our brains work by cause and effect processes; if so then the same thing
can be done on a computer.
2) Our brains do NOT work by cause and effect processes; if so then they
are random and the same thing can be done on a $20 hardware random number
generator.
3) Sometimes our brains work by cause and effect processes and sometimes
they don't; if so then they can be done on a computer and a $20 hardware random number generator.
> There are many other conceivable options.
> I'll try one. Not saying I believe in it, of course. My aim is to demonstrate that you are not exhausting the possible scenarios: We live inside a simulation created by ultra-intelligent beings in some external universe.
> In this scenario, comp is false as far as we're concerned.
> I agree with Quentin, btw: causality has nothing to do with computation.
> A stochastic system may be reducible to being modeled by some set of random variation
> but in reality it is often a whole lot more subtle than that and the "randomness" is not random
>> Ask yourself this question: why weren't all those fantastically complex transient dynamic branches in a neural network by the name of Grandmaster Garry Kasparov able to beat a 16-year-old computer running a 16-year-old chess program?
> not sure how this has bearing
> The super computer that finally beat him had a massive number crunching ability
But it might be relegated to the same status as social sciences, where it provides workable approximations but has no hope of achieving a TOE.
2013/8/22 meekerdb <meek...@verizon.net>
It's not non-computational if the external influence is also computational.
On 8/21/2013 11:15 PM, Quentin Anciaux wrote:
2013/8/22 meekerdb <meek...@verizon.net>
There's another possibility: that our brains are computational in nature, but that they also depend on interactions with the environment (not necessarily quantum entanglement, but possibly).
On 8/21/2013 2:42 PM, Quentin Anciaux wrote:
Ok, and I'm fascinated by the question of why we haven't found viable algorithms in that class yet -- although we know as a fact that it must exist, because our brains contain it.
We haven't proved our brain is computational in nature, if we had, then we would have proven computationalism to be true... it's not the case. Maybe our brain has some non computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack. I'm not saying AI is not possible, I'm just saying we haven't proved that "our brains contain it".
Then it's not computational *in nature* because it needs that little ingredient, that's what I'm talking about when saying "Maybe our brain has some non computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack."
If it is, you've not chosen the right level... the whole event + brain is computational and you're back at the start.
But the reaction of a silicon neuron to a beta particle may be quite different from the reaction of a biological neuron. So AI is still possible, but it may confound questions like, "Is the artificial consciousness the same as the biological?"
If it's computational, it is computational and AI at the right level would be the same as ours.
No, it doesn't prevent intelligence, but it may make it different.
When Bruno has proposed replacing neurons with equivalent input-output circuits I have objected that while it might still in most cases compute the same function there are likely to be exceptional cases involving external (to the brain) events that would cause it to be different. This wouldn't prevent AI,
It would prevent it *if* we cannot attach that external event to the computation...
It does (for digital AI) if the ingredient is non-computational and that there is no way to attach it to the digital part without (for example) a biological brain.
Yes, that's Bruno's answer, just regard the external world as part of the computation too, simulate the whole thing.
if that external event was finitely describable, then it means you have not chosen the correct substitution level and computationalism alone holds.
Well if your ingredient is the whole of physics, then it's self-defeating,
But "at the right level" may mean "including all the environment outside the brain".On 8/21/2013 11:57 PM, Quentin Anciaux wrote:
2013/8/22 meekerdb <meek...@verizon.net>
It's not non-computational if the external influence is also computational.
On 8/21/2013 11:15 PM, Quentin Anciaux wrote:
2013/8/22 meekerdb <meek...@verizon.net>
There's another possibility: that our brains are computational in nature, but that they also depend on interactions with the environment (not necessarily quantum entanglement, but possibly).
On 8/21/2013 2:42 PM, Quentin Anciaux wrote:
Ok, and I'm fascinated by the question of why we haven't found viable algorithms in that class yet -- although we know as a fact that it must exist, because our brains contain it.
We haven't proved our brain is computational in nature, if we had, then we would have proven computationalism to be true... it's not the case. Maybe our brain has some non computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack. I'm not saying AI is not possible, I'm just saying we haven't proved that "our brains contain it".
Then it's not computational *in nature* because it needs that little ingredient, that's what I'm talking about when saying "Maybe our brain has some non computational shortcut for that, maybe that's why AI is not possible, maybe our brain has this "realness" ingredient that computations alone lack."
If it is, you've not chosen the right level... the whole event + brain is computational and you're back at the start.
But the reaction of a silicon neuron to a beta particle may be quite different from the reaction of a biological neuron. So AI is still possible, but it may confound questions like,"Is the artificial consciousness the same as the biological."
If it's computational, it is computational and AI at the right level would be the same as ours.
I don't see why that follows. Suppose the non-computational, external influence comes from the output of a hypercomputer? It can still provide input to a Turing computer.
No, it doesn't prevent intelligence, but it may make it different.
When Bruno has proposed replacing neurons with equivalent input-output circuits I have objected that while it might still in most cases compute the same function there are likely to be exceptional cases involving external (to the brain) events that would cause it to be different. This wouldn't prevent AI,
It would prevent it *if* we cannot attach that external event to the computation...
It does (for digital AI) if the ingredient is non-computational and that there is no way to attach it to the digital part without (for example) a biological brain.
Or even true randomness could, as is hypothesized in QM.
Exactly. That's what I said below
Yes, that's Bruno's answer, just regard the external world as part of the computation too, simulate the whole thing.
if that external event was finitely describable, then it means you have not chosen the correct substitution level and computationalism alone holds.
Well if your ingredient is the whole of physics, then it's self-defeating,
Brent
and computationalism is false... if it's some part of it, then at that level the "realness" of our consciousness is digital and computationalism holds.
Quentin
But I think that undermines his idea that computation replaces physics. Physics isn't really replaced if it has to all be simulated.
Brent
>> If it's not random then it happened for a reason, and things happen in a computer for a reason too.
> Sure, but the "reason" may not be amenable to being completely contained within the confines of a deterministic algorithm
> if it depends on a series of outside processes
> > At the time it may have been a supercomputer but that was 16 years ago and the computer you're reading this email message on right now is almost certainly more powerful than the computer that beat the best human chess player in the world. And chess programs have gotten a lot better too. So all that spaghetti and complexity at the cellular level that you were rhapsodizing about didn't work as well as an antique computer running an ancient chess program.
> You are incorrect; even today Deep Blue is still quite powerful compared to a PC.
> The Deep Blue machine specs: It was a massively parallel, RS/6000 SP Thin P2SC-based system with 30 nodes, with each node containing a 120 MHz P2SC microprocessor for a total of 30, enhanced with 480 special purpose VLSI chess chips. Its chess playing program was written in C and ran under the AIX operating system. It was capable of evaluating 200 million positions per second, twice as fast as the 1996 version. In June 1997, Deep Blue was the 259th most powerful supercomputer according to the TOP500 list, achieving 11.38 GFLOPS on the High-Performance LINPACK benchmark.[12]
> I doubt the machine you are writing your email on even comes close to that level of performance; I know mine does not achieve that level of performance.
>> Then there are only 2 possibilities:
1) The ultra computer that simulates our world changes from one state to the
other for a reason; if so then our simulated computers which change from one
state to the other for a simulated reason can create a simulated simulated
world that also looks real to its simulated simulated inhabitants.
2) The ultra computer that simulates our world changes from one state to the
other for NO reason; if so then it's random and there's nothing very ultra
about the machine.
> But the ultra computer I postulated is not a pure Turing machine. Its behaviour can be influenced by entities external to our simulated universe.
>> Cannot comment, I don't know what "comp" is.
> Come on John, we've been through this the other day. You do know.
> Computation does not require causality. It can be defined simply in the form of symbolic relationships.
On Thu, Aug 22, 2013 Telmo Menezes <te...@telmomenezes.com> wrote:
>> Then there are only 2 possibilities:
1) The ultra computer that simulates our world changes from one state to the
other for a reason; if so then our simulated computers which change from one
state to the other for a simulated reason can create a simulated simulated
world that also looks real to its simulated simulated inhabitants.
2) The ultra computer that simulates our world changes from one state to the
other for NO reason; if so then it's random and there's nothing very ultra
about the machine.
> But the ultra computer I postulated is not a pure Turing machine. Its behaviour can be influenced by entities external to our simulated universe.
Any Turing Machine can be influenced by anything external to it, such as me throwing a rock at the contraption. I don't see the point.
>> Cannot comment, I don't know what "comp" is.
> Come on John, we've been through this the other day. You do know.
I know what I don't know and I'm telling you I don't know what "comp" means, every time I think I do Bruno proves me wrong.
After over 2 and a half years of constantly seeing people on this list (and nowhere else) use that strange made up word I have come to the conclusion that I am not alone, nobody has a deep understanding of what the hell "comp" is supposed to mean.
> Computation does not require causality. It can be defined simply in the form of symbolic relationships.
I'm not interested in definitions and I'm not interested in relationships, if state X isn't the reason for a machine or computer or brain or SOMETHING going into state Y then an algorithm is just a squiggle of ink in a book. Computation is physical.
John K Clark
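As a concrete reference point for this exchange, here is a minimal sketch (my own toy, with a made-up machine and names, not anything from the thread) of what defining computation "in the form of symbolic relationships" can look like: a transition table relating symbol configurations, which can be read purely abstractly or realized by whatever physical device one likes.

```python
# Computation expressed as a relation between symbolic configurations:
# (state, symbol) -> (new state, symbol to write, head movement).
# The toy machine below rewrites every 0 on the tape to 1, then halts at the first blank.

DELTA = {
    ("scan", "0"): ("scan", "1", +1),   # rewrite 0 -> 1, move right
    ("scan", "1"): ("scan", "1", +1),   # leave 1 alone, move right
    ("scan", "_"): ("halt", "_", 0),    # blank: stop
}

def run(tape, state="scan", head=0):
    cells = dict(enumerate(tape))                 # sparse tape of symbols
    while state != "halt":
        symbol = cells.get(head, "_")
        state, write, move = DELTA[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells))

print(run("0101"))   # prints "1111_": every 0 rewritten, plus the blank where it halted
```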
> The computer requires a substrate in which to operate upon -- the CPU chips for example are what our computers operate on. I know of no computer that does not require this external structured environment
> Every computer in existence requires external enabling hardware.
> If a computer requires a substrate which it can manipulate in order to perform its logical operations then a universal computer is impossible because the substrate would necessarily be outside and foundational to its domain.
In AI the response is ever "the next decade".
From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of John Clark
Sent: Friday, August 23, 2013 12:58 PM
To: everyth...@googlegroups.com
Subject: Re: When will a computer pass the Turing Test?
On Fri, Aug 23, 2013 at 2:46 PM, Chris de Morsella <cdemo...@yahoo.com> wrote:
> The computer requires a substrate in which to operate upon -- the CPU chips for example are what our computers operate on. I know of no computer that does not require this external structured environment
The human requires a substrate in which to operate upon -- the brain for example is what our human minds operate on. I know of no human that does not require this external structured environment.
Yes… and?
> Every computer in existence requires external enabling hardware.
>>Every human in existence requires external enabling hardware.
Yes but humans are not universal computing machines, if indeed we are machines. Do we know enough about how our brains work and are structured to the level that we would need to in order to be able to answer that question with any degree of certainty? I was referring to the hypothesized deterministic universe, in which everything that has happened can be computed from the initial state and has followed on from that original set of conditions… that we live in a deterministic universe and that everything that has or will ever happen is pre-destined and already baked in to the unfolding fabric of our experiencing of reality.
If a computer operates from within a local frame of reference and context, but far from being isolated and existing alone is instead connected to much vaster environments and meta-processes that are potentially very loosely coupled -- based on indirect means such as, say, message passing through queues or other signals – then can its own outputs be said to be completely deterministic, even if we consider its own internal operations to be constrained to be deterministic? Operations, especially ones that are parts of much larger workflows etc., are being mutated by many actors, and potentially with sophisticated stripe locking strategies, for example, having their data stores accessed concurrently by multiple separate processes. There are just so many pseudo random and hard to predict or model occurrences – such as, say, lock contention – that are occurring at huge rates (when seen from sufficiently high up any large architecture).
I find it hard to see how the resulting outcomes produced by such kinds of systems can be determined based on a knowledge of the state of the system at some initial instant in time.
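As a small, concrete illustration of the scheduling-dependent behaviour described here (my own sketch, with made-up worker names, not a model of any real system), even three trivial workers posting to one queue can produce an arrival order that the program text and its initial state do not determine:

```python
# Toy demonstration: message arrival order depends on how the OS happens to
# schedule the threads, not just on the program's initial state.

import threading
import queue

def worker(name, out, n=5):
    # Each worker just posts n labelled messages; when it runs is up to the scheduler.
    for i in range(n):
        out.put(f"{name}-{i}")

q = queue.Queue()
threads = [threading.Thread(target=worker, args=(name, q)) for name in "ABC"]
for t in threads:
    t.start()
for t in threads:
    t.join()

arrival_order = [q.get() for _ in range(q.qsize())]
print(arrival_order)   # the interleaving can differ from run to run and machine to machine
```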
> If a computer requires a substrate which it can manipulate in order to perform its logical operations then a universal computer is impossible because the substrate would necessarily be outside and foundational to its domain.
>>If a human requires a substrate which it can manipulate in order to perform its logical operations then a universal human is impossible because the substrate would necessarily be outside and foundational to its domain.
Agreed. Humans are exceedingly far from being universal. Our very sense of self precludes universality.
Cheers,
-Chris
>>> The computer requires a substrate in which to operate upon -- the CPU chips for example are what our computers operate on. I know of no computer that does not require this external structured environment
>> The human requires a substrate in which to operate upon -- the brain for example is what our human minds operate on. I know of no human that does not require this external structured environment.
> Yes… and?
>>> Every computer in existence requires external enabling hardware.
>>Every human in existence requires external enabling hardware.
> Yes but humans are not universal computing machines,
> if indeed we are machines.
> Do we know enough about how our brains work and are structured to the level that we would need to in order to be able to answer that question with any degree of certainty?
> I was referring to the hypothesized deterministic universe, in which everything that has happened can be computed from the initial state and has followed on from that original set of conditions
On Fri, Aug 23, 2013 at 11:34 PM, Chris de Morsella <cdemo...@yahoo.com> wrote:
>>> The computer requires a substrate in which to operate upon -- the CPU chips for example are what our computers operate on. I know of no computer that does not require this external structured environment
>> The human requires a substrate in which to operate upon -- the brain for example is what our human minds operate on. I know of no human that does not require this external structured environment.
> Yes… and?
And you tell me, those are your ideas not mine. I don't see the relevance but I thought you did.
>>> Every computer in existence requires external enabling hardware.
>>Every human in existence requires external enabling hardware.
> Yes but humans are not universal computing machines,
If we're not universal then we are provincial computing machines. Do you really think this strengthens your case concerning the superiority of humans?
> if indeed we are machines.
We are either cuckoo clocks or roulette wheels, take your pick.
> Do we know enough about how our brains work and are structured to the level that we would need to in order to be able to answer that question with any degree of certainty?
Yes absolutely! I can say with no fear of contradiction that things in the brain happen for a reason or they do not happen for a reason.
> I was referring to the hypothesized deterministic universe, in which everything that has happened can be computed from the initial state and has followed on from that original set of conditions
Everything in modern physics and mathematics says that determinism is false, but who cares, we were talking about intelligence and biological minds and computer minds; what does the truth or falsehood of determinism have to do with the price of eggs?
John K Clark
On Fri, Aug 23, 2013 at 11:34 PM, Chris de Morsella <cdemo...@yahoo.com> wrote:
>>> The computer requires a substrate upon which to operate -- the CPU chips for example are what our computers operate on. I know of no computer that does not require this external structured environment
>> The human requires a substrate upon which to operate -- the brain for example is what our human minds operate on. I know of no human that does not require this external structured environment.
> Yes… and?
>>And you tell me, those are your ideas, not mine. I don't see the relevance, but I thought you did.
There is no relevance unless one is attempting to posit the existence of a universal computer. All measurable processes -- including information processing -- occur on, and require for their operation, some physical substrate. My point, which I believe you may have missed or are dodging, is that a universal computer is therefore impossible, because there would always need to be some underlying and external container for the process, one that could not itself be completely contained within the process.
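(For reference, "universal" in the computability-theory sense makes a narrower claim than a machine that needs no substrate: it means one fixed machine that can run any other program handed to it as data. A minimal sketch in Python follows, with a three-instruction format invented purely for this example; nothing in it settles the substrate question, it only pins down what the word usually means:)

    # A minimal "universal" evaluator: one fixed function that runs any
    # program expressible in its (invented) instruction language, when
    # that program is handed to it as data.
    def run(program, x):
        for op, arg in program:
            if op == "add":
                x = x + arg
            elif op == "mul":
                x = x * arg
            elif op == "mod":
                x = x % arg
        return x

    double_then_increment = [("mul", 2), ("add", 1)]
    print(run(double_then_increment, 5))   # prints 11

    # "Universal" here describes what the one fixed machine can simulate
    # when other programs are given to it as data; it says nothing about
    # doing without the physical hardware on which run() itself executes.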
>>> Every computer in existence requires external enabling hardware.
>>Every human in existence requires external enabling hardware.
> Yes but humans are not universal computing machines,
>>If we're not universal then we are provincial computing machines. Do you really think this strengthens your case concerning the superiority of humans?
Whoa there, when did I make that statement? I am not interested in nor do I much care whether humans are superior or inferior to computers or, in fact, termites or microbes or anything else we could potentially be measured against. This does not drive my interest in the least. Who cares about our relative ranking in the universe; certainly not I.
> if indeed we are machines.
>>We are either cuckoo clocks or roulette wheels, take your pick.
Not sure whether you are attempting to be funny or are pouring the irony on a little thick. An average human brain has somewhere around 86 billion neurons and, as far as we are able to count, around 100 trillion synapses. Characterizing this fantastically dense crackling network as a cuckoo clock or a roulette wheel is rather facile. If we are machines then we are surely fantastically complex and highly dynamic ones.
> Do we know enough about how our brains work and are structured to the level that we would need to in order to be able to answer that question with any degree of certainty?
>> Yes absolutely! I can say with no fear of contradiction that things in the brain happen for a reason or they do not happen for a reason.
You have said absolutely nothing that means anything more than reiterating your belief in reductionism. Something either happens or does not happen for a reason… sure… and so what? What insight have you uncovered by stating the obvious? It certainly does not help answer the question I posed. We do not know enough about brain function to be able to model it with anything approaching certainty. This was my point, and your reply added nothing of substance to that point, as far as I can see.
I can say that things happen for a reason or they do not happen for a reason, for any phenomenon whatsoever in the universe, but I have not therefore, by stating the obvious, uncovered any deeper truths or given any insight into any process or underlying physical laws. It is meaningless and it leads nowhere in terms of providing any actual valuable insight or explanation. It speaks but without saying anything. What is your point? What insight does that give you into the mechanisms by which thought, self-awareness, and consciousness arise in our brains?
> I was referring to the hypothesized deterministic universe, in which everything that has happened can be computed from the initial state and has followed on from that original set of conditions
>> Everything in modern physics and mathematics says that determinism is false, but who cares; we were talking about intelligence and biological minds and computer minds. What does the truth or falsehood of determinism have to do with the price of eggs?
I suspect we may be having parallel conversations and are simply not communicating all that well.
In principle I am agnostic about AI arising in a machine. I am humble enough, however, to admit that so many of the fine-grained details of brain functioning are still not understood, and that therefore it is impossible for us to model the dynamic functioning of the human brain. Perhaps someday – even soon, maybe – we will have the finely detailed maps of all the connections (including all the axons as well) and the dynamic patterns of activity that traverse them – but until then all we really have is hypothesis & conjecture.
And… until we are able to build a fine-grained and falsifiable model of how the brain works, and until this model can be shown (by not being falsified, of course) to have powerful predictive value for outcomes given initial conditions, we cannot say exactly how such qualia as self-awareness, consciousness and intelligent creative thought arise within us, or whether this process is replicable in an artificial machine.
Or can we? If so… care to explain how?
Cheers
-Chris
> All measurable processes -- including information processing -- occur on, and require for their operation, some physical substrate. My point, which I believe you may have missed or are dodging, is that a universal computer is therefore impossible, because there would always need to be some underlying and external container for the process, one that could not itself be completely contained within the process.
> I am not interested in nor do I much care whether humans are superior or inferior to computers
>>We are either cuckoo clocks or roulette wheels, take your pick.
> Not sure whether you are attempting to be funny or are pouring the irony on a little thick. An average human brain has somewhere around 86 billion neurons
> Characterizing this fantastically dense crackling network as a cuckoo clock or a roulette wheel is rather facile.
> If we are machines then we are surely fantastically complex and highly dynamic ones.
Yes, and so are computers.
>> I can say with no fear of contradiction that things in the brain happen for a reason or they do not happen for a reason.
> You have said absolutely nothing that means anything more than reiterating your belief in reductionism.
No, what I said was that things happen for a reason or they do not happen for a reason. Are you telling me with a straight face that you disagree with that?!
> Something either happens or does not happen for a reason… sure… and so what? What insight have you uncovered by stating the obvious?
The insight that we are either cuckoo clocks or roulette wheels, take your pick.
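(Read purely as a programmer's dichotomy, not as a claim about brains: a state update is either a function of its inputs alone or it consults a source of randomness. A toy sketch in Python, with names invented for the example:)

    import random

    def cuckoo_clock(hour):
        # Deterministic: the same input always gives the same output;
        # the next state follows from the current one for a reason.
        return (hour % 12) + 1

    def roulette_wheel(_hour):
        # Stochastic: the next state ignores the current one and is
        # drawn at random, i.e. it happens for no reason.
        return random.randint(0, 36)

    print(cuckoo_clock(5))     # always 6
    print(roulette_wheel(5))   # unpredictable

    # A mixture is also possible: a deterministic rule perturbed by noise.
    def noisy_clock(hour):
        return (hour % 12) + random.choice([0, 1])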
> I can say that things happen for a reason or they do not happen for a reason, for any phenomenon whatsoever in the universe, but I have not therefore, by stating the obvious, uncovered any deeper truths or given any insight into any process or underlying physical laws. It is meaningless and it leads nowhere in terms of providing any actual valuable insight or explanation. It speaks but without saying anything. What is your point?
The point that free will is an idea so bad it's not even wrong.
> so many of the fine-grained details of brain functioning are still not understood, and therefore it is impossible for us to model
That doesn't follow. We still don't understand how high temperature superconductors work but that doesn't prevent us from using them in machines. In the same way we wouldn't need to understand why the logic diagram of a brain is the way it is to reverse engineer it and duplicate the same thing in silicon; assuming of course that you wanted to make an AI the same way that Evolution did, but there are almost certainly better ways to do that with astronomically less spaghetti code.
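(A toy illustration of "copy the wiring without a theory of it", not a model of any real neural circuit: the connection table below is invented for the example, whereas in the scenario being described it would be read off a scan. The simulation code never needs to know why the weights are what they are:)

    # Invented 3-unit wiring table: weights[i][j] is the connection
    # strength from unit j to unit i.
    weights = [
        [0.0,  0.8, -0.4],
        [0.5,  0.0,  0.9],
        [-0.7, 0.3,  0.0],
    ]
    threshold = 0.5

    def step(state):
        # Advance every unit one tick: a unit fires (1) if its summed,
        # weighted input exceeds the threshold, otherwise it is quiet (0).
        return [1 if sum(w * s for w, s in zip(row, state)) > threshold else 0
                for row in weights]

    state = [1, 0, 1]
    for _ in range(3):
        state = step(state)
        print(state)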
John K Clark
From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of John Clark
Sent: Sunday, August 25, 2013 9:18 AM
To: everyth...@googlegroups.com
Subject: Re: When will a computer pass the Turing Test?
On Sat, Aug 24, 2013 at 2:48 PM, Chris de Morsella <cdemo...@yahoo.com> wrote:
> All measurable processes -- including information processing -- occur on, and require for their operation, some physical substrate. My point, which I believe you may have missed or are dodging, is that a universal computer is therefore impossible, because there would always need to be some underlying and external container for the process, one that could not itself be completely contained within the process.
>>I'm not at all clear what you're talking about and have little desire for clarification, because enough is clear to know that even if you are describing some sort of limitation to computers, humans have the exact same limitation.
Yes it is quite clear that you have no idea what I am talking about. On this we very much agree.
> I am not interested in nor do I much care whether humans are superior or inferior to computers
>>That I quite simply do not believe because I do not think anybody would advance or be convinced by such incredibly weak arguments unless they had already decided what they would prefer to be true and only then started to look around for something, anything, to support that view.
Nor, in fact, do I much care whether or not you believe that the position I state is my position. If – for whatever reason – your mind requires that you be the agent who assigns my beliefs to me and who determines what my motivations are, that is something that is operating in you… interesting perhaps as a psychological phenomenon, but of no great import to anyone or anything besides your own sense of self-certainty.
What’s the purpose of having a conversation if, when I say quite clearly -- and I repeat -- that I am not interested in nor do I much care whether humans are superior or inferior to computers, you come back and say I must be lying because you have decided that this is important to me? Who are you to make that kind of decision for my brain… out, out, you intruder; it’s my mind, and I do not appreciate you defining it for me.
Take me at my word when I say I don’t really care one way or the other, that this horse race is uninteresting to me.
You mistake my fascination with how the brain works and with how conscious intelligence and self-awareness emerge – in us or in any other entity – for whatever you have inferred and decided must be motivating me.
How incredibly pompous of you. Do you go popping into other people’s heads deciding what they believe a lot? It’s a bad habit you know.
>>We are either cuckoo clocks or roulette wheels, take your pick.
> Not sure whether you are attempting to be funny or are pouring the irony on a little thick. An average human brain has somewhere around 86 billion neurons
>>And today just one Intel Xeon chip that you could put on your fingernail contains over 5 billion transistors, each of which can change its state several million times faster than any neuron can.
Yes… and with that? Does it also sport a 100 trillion connection network on it?
> Characterizing this fantastically dense crackling network as a cuckoo clock or a roulette wheel is rather facile.
>>There is one thing that brains and cuckoo clocks and roulette wheels and the Tianhe-2 Supercomputer all have in common, things inside them happen for a reason or things inside them do not happen for a reason.
Ahhhh yes back once again to your idée fixe. And how exactly does that help you understand the brain, the CPU or anything at all? This obsession of yours – it seems like one to me, for you keep returning over and over again to re-stating it. You believe things either happen for a reason or they don’t; though you cannot prove it. Obviously it is important for you; though what great insight you derive from this idée fixe of yours quite clearly eludes me.
Care to elucidate what is so darn original and profound about the tautology you endlessly come back to? Especially in terms of understanding subtle, deep, dynamic and vast phenomena such as conscious intelligence, how it can be recognized, and how it arises within an entity?
> If we are machines then we are surely fantastically complex and highly dynamic ones.
>>Yes, and so are computers.
Sure, but even now still orders of magnitude less so than us. I still have not seen an example of a one-hundred-trillion-connection machine the size of a grapefruit that runs off of 20 watts. Not saying it won’t happen someday, maybe even soon, but the Xeon chip ain’t it.
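(Putting the numbers traded in this exchange side by side: the brain figures and the transistor count are the ones quoted above, the chip power figure is a rough assumption, and a transistor is in no way functionally equivalent to a synapse, so treat the ratios as order-of-magnitude context only:)

    # Back-of-envelope comparison; all figures are rough, round numbers.
    brain_neurons    = 86e9     # quoted above
    brain_synapses   = 100e12   # quoted above
    brain_power_w    = 20       # quoted above
    chip_transistors = 5e9      # quoted above for one Xeon
    chip_power_w     = 100      # assumed typical server-CPU envelope, not from the thread

    print("synapses per neuron:     %.0f" % (brain_synapses / brain_neurons))     # ~1,200
    print("synapses per transistor: %.0f" % (brain_synapses / chip_transistors))  # 20,000
    print("synapses per watt:       %.2e" % (brain_synapses / brain_power_w))     # 5e12
    print("transistors per watt:    %.2e" % (chip_transistors / chip_power_w))    # 5e7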
>> I can say with no fear of contradiction that things in the brain happen for a reason or they do not happen for a reason.
> You have said absolutely nothing that means anything more than reiterating your belief in reductionism.
>> No, what I said was that things happen for a reason or they do not happen for a reason. Are you telling me with a straight face that you disagree with that?!
What I am telling you with a straight face is: so what? You have uncovered nothing new under the sun by continually re-iterating your tautology. The switch is either on or it is off… you say. Everything either happens for a reason or it does not… or so you say. I don’t know that this is in fact so. Prove it. Prove your theorem. Prove that for all events that can occur, they must either happen for a reason or for no reason at all. I don’t think you are even all that clear-headed about what you intend by “reason”. What is this agent you call “a reason”?
Are you arguing that for each and every effect there must be a cause? What are you in fact trying to say? And why is it of such importance?
> Something either happens or does not happen for a reason… sure… and so what? What insight have you uncovered by stating the obvious?
>>The insight that we are either cuckoo clocks or roulette wheels, take your pick.
So say you, and of course you are free to say whatever you like, but pardon me if I say your “insight” seems rather pointless to me.
> I can say that things happen for a reason or they do not happen for a reason, for any phenomenon whatsoever in the universe, but I have not therefore, by stating the obvious, uncovered any deeper truths or given any insight into any process or underlying physical laws. It is meaningless and it leads nowhere in terms of providing any actual valuable insight or explanation. It speaks but without saying anything. What is your point?
>>The point that free will is an idea so bad it's not even wrong.
And you of course are free to believe that if you must… though I find it a self-imposed impoverishment of the soul… it's your free will to choose to straitjacket yourself into the dreary pre-ordained outcomes of determinism… as it is mine to pity you for doing so.
> so many of the fine-grained details of brain functioning are still not understood, and therefore it is impossible for us to model
>>That doesn't follow. We still don't understand how high temperature superconductors work but that doesn't prevent us from using them in machines.
To some degree; however, our ability to fully utilize high-temperature superconductors, and to discover the holy grail of room-temperature superconductors, is very significantly constrained by our lack of understanding of how the phenomenon works.
>>In the same way we wouldn't need to understand why the logic diagram of a brain is the way it is to reverse engineer it and duplicate the same thing in silicon; assuming of course that you wanted to make an AI the same way that Evolution did, but there are almost certainly better ways to do that with astronomically less spaghetti code.
You cannot really state that you understand a system, without actually understanding the system. It is false to suggest that one can understand human intelligence or consciousness, for example, without understanding how it emerges within us… without being able to describe and to show the dynamics and means by which it becomes our experience.
Until we understand how we actually do work, we cannot make positivistic statements about how we must be working. You are putting the cart before the horse.
-Chris
>> John K Clark
> I say quite clearly, and I repeat -- I am not interested in nor do I much care whether humans are superior or inferior to computers. Take me at my word when I say I don’t really care one way or the other, that this horse race is uninteresting to me.
> How incredibly pompous of you. Do you go popping into other people’s heads deciding what they believe a lot?
>>There is one thing that brains and cuckoo clocks and roulette wheels and the Tianhe-2 Supercomputer all have in common, things inside them happen for a reason or things inside them do not happen for a reason.
> Ahhhh yes back once again to your idée fixe. And how exactly does that help you understand the brain, the CPU or anything at all? This obsession of yours – it seems like one to me, for you keep returning over and over again to re-stating it. You believe things either happen for a reason or they don’t; though you cannot prove it.
> Care to elucidate what is so darn original and profound about the tautology you endlessly come back to?
> continually re-iterating your tautology. The switch is either on or it is off… you say. Everything either happens for a reason or it does not…. Or so you say. I don’t know that this is in fact so.
> And you of course are free to believe that if you must… though I find it a self-imposed impoverishment of the soul
>>The point that free will is an idea so bad it's not even wrong.
> If we are machines then we are surely fantastically complex and highly dynamic ones.
>>Yes, and so are computers.
> Sure, but even now still orders of magnitude less so than us.
> You cannot really state that you understand a system, without actually understanding the system.
> It is false to suggest that one can understand human intelligence or consciousness, for example, without understanding how it emerges within us
> it is quite clear that you have no idea what I am talking about. On this we very much agree.