The real reasons we don't have AGI yet: A response to David Deutsch's recent article on AGI. October 8, 2012, by Ben Goertzel
Hi Richard,

So in this view, the main missing ingredient in AGI so far is "cognitive synergy": the fitting-together of different intelligent components into an appropriate cognitive architecture, in such a way that the components richly and dynamically support and assist each other, interrelating very closely in a similar manner to the components of the brain or body and thus giving rise to appropriate emergent structures and dynamics. The reason this sort of intimate integration has not yet been explored much is that it's difficult on multiple levels, requiring the design of an architecture and its component algorithms with a view toward the structures and dynamics that will arise in the system once it is coupled with an appropriate environment.

Typically, the AI algorithms and structures corresponding to different cognitive functions have been developed based on divergent theoretical principles, by disparate communities of researchers, and have been tuned for effective performance on different tasks in different environments. Making such diverse components work together in a truly synergetic and cooperative way is a tall order, yet my own suspicion is that this - rather than some particular algorithm, structure or architectural principle - is the "secret sauce" needed to create human-level AGI based on technologies available today. Achieving this sort of cognitive-synergetic integration of AGI components is the focus of the OpenCog AGI project that I co-founded several years ago. We're a long way from human adult level AGI yet, but we have a detailed design and codebase and roadmap for getting there. Wish us luck!
-- Onward! Stephen
Hi Russell,
Question: Why has little if any thought been given in AGI to self-modeling and some capacity to track the model of self under the evolutionary transformations?
--
-- Onward! Stephen
Deutsch is right.
Searle is right.
Genuine AGI can only come when thoughts are driven by feeling and will rather than programmatic logic. It's a fundamental misunderstanding to assume that feeling can be generated by equipment which is incapable of caring about itself. Without personal investment, there is no drive to develop right-hemisphere awareness - to look around for enemies and friends, to be vigilant. These kinds of capacities cannot be burned into ROM; they have to be discovered through unscripted participation. They have to be able to lie and have a reason to do so.
I'm not sure about Deutsch's purported Popper fetish, but if that's true, I can see why that would be the case. My hunch is that although Ben Goertzel is being fair to Deutsch, he may be distorting Deutsch's position somewhat, insofar as I question whether Deutsch is really suggesting that we invest in developing philosophy instead of technology. Maybe he is, but it seems like an exaggeration. It seems to me that Deutsch is advocating the very reasonable position that we evaluate our progress with AGI before doubling down on the same strategy for the next 60 years. Nobody wants to cut off AGI funding - certainly not me; I just think that the approach has become unscientific and sentimental, like alchemists with their dream of turning lead into gold. Start playing with biology and maybe you'll have something. It will be a little messier though, since with biology, unlike with silicon computers, when you start getting close to something with human-like intelligence, people tend to object when you leave twitching half-persons moaning around the laboratory. You will know you have real AGI because there will be a lot of people screaming.
Craig
On 10/8/2012 3:49 PM, Stephen P. King wrote:
Hi Russell,
Question: Why has little if any thought been given in AGI to self-modeling and some capacity to track the model of self under the evolutionary transformations?
It's probably because AI's have not needed to operate in environments where they need a self-model. They are not members of a social community. Some simpler systems, like Mars Rovers, have limited self-models (where am I, what's my battery charge,...) that they need to perform their functions, but they don't have general intelligence (yet).
Brent
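[A minimal sketch of the kind of limited self-model Brent describes, in Python; every name and number here is invented for illustration and not taken from any actual rover software:]

from dataclasses import dataclass

@dataclass
class RoverSelfModel:
    # Hypothetical, minimal self-model: a few facts the system keeps about itself.
    x_m: float = 0.0            # where am I? (metres, local frame)
    y_m: float = 0.0
    battery_pct: float = 100.0  # what's my battery charge?

    def can_attempt(self, distance_m: float, cost_pct_per_m: float = 0.5) -> bool:
        """Consult the self-model before committing to a traverse; keep a 20% reserve."""
        return self.battery_pct - distance_m * cost_pct_per_m > 20.0

model = RoverSelfModel(x_m=3.0, y_m=-1.5, battery_pct=42.0)
print(model.can_attempt(50.0))  # False in this sketch: 42 - 25 = 17, below the 20% reserve

[The point is only that the model is about the agent itself rather than about its task domain; nothing in such bookkeeping approaches general intelligence.]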
On 10/9/2012 2:16 AM, meekerdb wrote:
On 10/8/2012 3:49 PM, Stephen P. King wrote:
Hi Russell,
Question: Why has little if any thought been given in AGI to self-modeling and some capacity to track the model of self under the evolutionary transformations?
It's probably because AI's have not needed to operate in environments where they need a self-model. They are not members of a social community. Some simpler systems, like Mars Rovers, have limited self-models (where am I, what's my battery charge,...) that they need to perform their functions, but they don't have general intelligence (yet).
Brent
--
Could the efficiency of the computation be subject to modeling? My thinking is that if an AI could rewire itself for some task to more efficiently solve that task...
-- Onward! Stephen
On 09 Oct 2012, at 13:22, Stephen P. King wrote:
On 10/9/2012 2:16 AM, meekerdb wrote:
On 10/8/2012 3:49 PM, Stephen P. King wrote:
Hi Russell,
Question: Why has little if any thought been given in AGI to self-modeling and some capacity to track the model of self under the evolutionary transformations?
It's probably because AI's have not needed to operate in environments where they need a self-model. They are not members of a social community. Some simpler systems, like Mars Rovers, have limited self-models (where am I, what's my battery charge,...) that they need to perform their functions, but they don't have general intelligence (yet).
Brent
--
Could the efficiency of the computation be subject to modeling? My thinking is that if an AI could rewire itself for some task to more efficiently solve that task...
Betting on self-consistency, and variants of that idea, shortens the proofs and speeds up the computations, sometimes in the "wrong direction".
On almost all inputs, universal machines (creative sets, by Myhill's theorem, and in a sense of Post) have the alluring property of being arbitrarily speedable.
Of course the trick is in "on almost all inputs", which means all except a finite number of exceptions, and this concerns evolution more than reason.
Evolution is basically computation + the halting oracle, implemented with physical time (which is itself based on computation + self-reference + arithmetical truth).
Bruno
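[Bruno's "arbitrarily speedable" remark appears to be the Blum speed-up theorem. The LaTeX statement below is a paraphrase of that standard result, added for readers who want the formal shape of the claim; the notation (phi_i for the function computed by program i, Phi_i for its step count) is chosen for this sketch and is not Bruno's.]

% Blum speed-up theorem (sketch). \varphi_i is the function computed by
% program i, and \Phi_i(x) its running time on input x.
\[
\forall r\,(\text{total computable})\;\exists f\,(\text{total computable})\;
\forall i\,\bigl[\varphi_i = f \;\Rightarrow\;
\exists j\,\bigl(\varphi_j = f \;\wedge\;
r\bigl(x,\Phi_j(x)\bigr) \le \Phi_i(x) \text{ for almost all } x\bigr)\bigr]
\]
% "Almost all x" means all but finitely many inputs -- exactly the finite
% set of exceptions Bruno points to above.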
On 10/9/2012 12:28 PM, Bruno Marchal wrote:
On 09 Oct 2012, at 13:22, Stephen P. King wrote:
On 10/9/2012 2:16 AM, meekerdb wrote:
On 10/8/2012 3:49 PM, Stephen P. King wrote:
Hi Russell,
Question: Why has little if any thought been given in AGI to self-modeling and some capacity to track the model of self under the evolutionary transformations?
It's probably because AI's have not needed to operate in environments where they need a self-model. They are not members of a social community. Some simpler systems, like Mars Rovers, have limited self-models (where am I, what's my battery charge,...) that they need to perform their functions, but they don't have general intelligence (yet).
Brent
--
Could the efficiency of the computation be subject to modeling? My thinking is that if an AI could rewire itself for some task to more efficiently solve that task...
Betting on self-consistency, and variants of that idea, shortens the proofs and speeds up the computations, sometimes in the "wrong direction".
Hi Bruno,
Could you elaborate a bit on the betting mechanism, so that it is clearer how the shortening of proofs and speed-up of computations obtains?
On almost all inputs, universal machines (creative sets, by Myhill's theorem, and in a sense of Post) have the alluring property of being arbitrarily speedable.
This is a measure issue, no?
Of course the trick is in "on almost all inputs", which means all except a finite number of exceptions, and this concerns evolution more than reason.
OK.
Evolution is basically computation + the halting oracle, implemented with physical time (which is itself based on computation + self-reference + arithmetical truth).
Bruno
So you are equating selection by fitness in a local environment with a halting oracle?
On 12/10/2012, at 16:27, "Bruno Marchal" <mar...@ulb.ac.be> wrote:
> On 10 Oct 2012, at 10:44, a b wrote:
>
>> On Wed, Oct 10, 2012 at 2:04 AM, Brett Hall <brha...@hotmail.com>
>> wrote:
>>> On 09/10/2012, at 16:38, "hibbsa" <asb...@gmail.com> wrote:
>>>> http://www.kurzweilai.net/the-real-reasons-we-dont-have-agi-yet
>>>
>>> Ben Goertzel's article that hibbsa sent and linked to above says in
>>> paragraph 7 that, "I salute David Deutsch’s boldness, in writing and
>>> thinking about a field where he obviously doesn’t have much
>>> practical grounding. Sometimes the views of outsiders with very
>>> different backgrounds can yield surprising insights. But I don’t
>>> think this is one of those times. In fact, I think Deutsch’s
>>> perspective on AGI is badly mistaken, and if widely adopted, would
>>> slow down progress toward AGI dramatically. The real reasons we
>>> don’t have AGI yet, I believe, have nothing to do with Popperian
>>> philosophy, and everything to do with:..." (Then he listed some
>>> things).
>>>
>>> That paragraph quoted seems an appeal to authority in an
>>> underhanded way. In a sense it says (in a condescending manner)
>>> that DD has little practical grounding in this subject and can
>>> probably be dismissed on that basis...but let's look at what he
>>> says anyways. As if "practical grounding" by the writer would
>>> somehow have made the arguments themselves valid or more valid (as
>>> though that makes sense). The irony is, Goertzel in almost the next
>>> breath writes that AGI has "nothing to do with Popperian
>>> philosophy..." Presumably, by his own criterion, he can only make
>>> that comment with any kind of validity if he has "practical
>>> grounding" in Popperian epistemology? It seems he has indeed
>>> written quite a bit on Popper...but probably as much as DD has
>>> written on stuff related to AI. So how much is enough before you
>>> should be taken seriously? I'm also not sure that Goertzel is
>>> expert in Popperian *epistemology*.
>>>
>>> Later he goes on to write, "I have conjectured before that once
>>> some proto-AGI reaches a sufficient level of sophistication in its
>>> behavior, we will see an “AGI Sputnik” dynamic — where various
>>> countries and corporations compete to put more and more money and
>>> attention into AGI, trying to get there first. The question is,
>>> just how good does a proto-AGI have to be to reach the AGI Sputnik
>>> level?"I'm not sure what "proto-AGO" means? It perhaps misses the
>>> central point that intelligence is a qualitative, not quantitative
>>> thing. Sputnik was a less advanced version of the International
>>> Space Station (ISS)...or a GPS satellite.
>>>
>>> But there is no "less advanced" version of being a universal
>>> explainer (i.e. a person, i.e. intelligent, i.e. AGI), is there? So
>>> the analogy is quite false. As a side point, is the "A" in AGI
>>> racist? Or does the A simply mean "intelligently designed" as
>>> opposed to "evolved by natural selection"? I'm not sure...what will
>>> Artificial mean to AGI when they are here? I suppose we might
>>> augment our senses in all sorts of ways so the distinction might be
>>> blurred anyways, as it is currently with race. So I think the Sputnik
>>> analogy is wrong.
>>>
>>> A better analogy would be...say you wanted to develop a *worldwide
>>> communications system* in the time of (say) the American Indians in
>>> the USA (say around 1200 AD for argument's sake). Somehow you knew
>>> *it must be possible* to create a communications system that
>>> allowed transmission of messages across the world at very very high
>>> speeds but so far your technology was limited to ever bigger fires
>>> and more and more smoke. Then the difference between (say) a smoke
>>> signal and a real communications satellite that can transmit a
>>> message around the world (like Sputnik) would be more appropriate.
>>> Then the smoke signal is the current state of AGI...and Sputnik is
>>> real AGI - what you get once you understand something brand new
>>> about orbits, gravity and radio waves...and probably most
>>> importantly - that the world was a giant *sphere* plagued by high
>>> altitude winds and diverse weather systems and so forth that would
>>> never even have entered your mind. Things you can't even conceive
>>> of if all you are doing in trying to devise a better world-wide
>>> communications system is making ever bigger fires and more and more
>>> smoke...because *surely* that approach will eventually lead to
>>> world-wide communications. After all - it's just a matter of bigger
>>> fires creating more smoke which travels a greater distance. Right? But
>>> even that analogy is no good really because the smoke signal and
>>> the satellite still have too much in common, perhaps. They are
>>> *both ways of communicating*. And yet, current "AI" and real "I" do
>>> *not* have in common "intelligence" or "thinking".
>>>
>>> What on Earth could "proto-AGI" be in Ben Goertzel's world? What
>>> would be the criterion for recognising it as distinct from actual
>>> AGI?
>>>
>>> I get the impression Ben might have missed the point that
>>> intelligence is just qualitatively different from non-
>>> intelligence because the entire article is fixated on it being all
>>> about improvements in hardware. If you're intelligent then you are
>>> a universal explainer. And you are either a universal
>>> explainer...or not. There's no "Sputnik" level of intelligence
>>> which will lead towards GPS and ISS levels of intelligence. Right?
>>>
>>> Brett.
>>
>> At the end of the day the guy has just been told he hasn't made any
>> progress, so it seems natural [to me] that he'll hit back with some
>> arsey comments, one of which is the line about Deutsch, which is supposed
>> to mean something like ".....for a guy who knows shit about the
>> subject"
>>
>> Personally I think that if you know why someone is getting something
>> like that in, then it's better to just ignore it and look for the main
>> ideas. The idea of his that intrigues me is how to get some emergence
>> taking place out of the underlying components.
>>
>> This has to be part of the problem because an inner sense of self
>> cannot be written directly into code. One criticism of Deutsch's
>> article, for me, was that he seemed to trivialise this aspect by
>> calling it nothing more than 'self-reference'. It isn't self-reference
>> alone; it's inner experience. It's what is going on in my head right now:
>> me thinking I am here and me seeing things in my room.
>
> It is explained by the difference between "provable(p)", which involves
> self-reference, and "provable(p) & p", which involves self-reference
> and truth. The first gives a theory of self-reference in a third-person
> way, like when you say "I have two arms", and the second provides a
> self-reference in a first-person subjective way. It defines a
> non-nameable knower, verifying the axioms of the classical theory of
> knowledge (S4), and which happens already to be non-definable by the
> subject itself. So if you are interested you can consult my sane04
> paper. It does confirm many ideas of Deutsch, but it uses a more
> standard vocabulary.
Bruno, I'm confused, but I feel like I'm 'almost there'. If you are some entity that can do "provable(p)", then you recognise your image in a mirror...or...what exactly?
With provable(p) & p, you recognise that the image in the mirror is you and that you are a self.
Or something like that. I'm sure you have something more formal. I'll try again;
Is provable(p) something like "It can be shown that I have two arms" because (p) is just
"I have two arms"?
Putting that into natural language, though, seems to suggest that provable(p) must, as a prior necessity, have "p" as true.
But that's not what you're saying. You seem to be saying it's *easier* for some entity which can do computations to get to provable(p) than (p).
I do think I understand the difference between third-person self-reference and first-person self-reference. It's almost like the difference between pointing at a mirror and asserting:
"That is me" (where the "me" is not a "self" but rather just a bunch of atoms you are in control of.)
And pointing at a mirror and asserting "That's my *reflection*. And this is me." (Where the "me" corresponds to some feeling that establishes one's own existence to one's own satisfaction).
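[For readers who want the formal shape of Bruno's distinction, here is a compact LaTeX sketch. It is a paraphrase of the standard provability-logic setup he alludes to (and of the presentation in his sane04 paper), not a quotation from it.]

% Third-person self-reference: Goedel's provability predicate, written Bp.
% Its modal logic is GL (Goedel-Loeb):
%   B(p \to q) \to (Bp \to Bq), \quad B(Bp \to p) \to Bp, \quad \text{from } p \text{ infer } Bp.
% First-person self-reference (the "knower"), defined a la Theaetetus as
% provability together with truth:
\[
Kp \;:=\; Bp \wedge p
\]
% This K validates the classical (S4) axioms of knowledge:
%   Kp \to p, \quad Kp \to KKp, \quad K(p \to q) \to (Kp \to Kq),
% and, on Bruno's account, K is not definable in the machine's own language,
% which is the sense in which the first-person knower cannot name itself.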