How does OpenCog handle episodic memory?


Ben Saunders

Jul 23, 2013, 2:33:50 PM
to ope...@googlegroups.com
Hello Ben and other OpenCog experts,

In a recent blog post BenG wrote:

The longer I work on AGI, the more convinced I am that an embodied approach will be the best way to fully solve the common sense problem. The AI needs to learn common sense by learning to control a robot that does commonsensical things.... Then the ability to draw analogies and understand words will emerge from the AI's ability to understand the world and relate different experiences it has had. Whereas, a system that answers questions based on ConceptNet is just manipulating symbols without understanding their meaning, an approach that will never lead to real human-like general intelligence.

Most AI researchers today would agree with Ben that the best way to acquire common-sense knowledge is by experiencing the world first-hand through an adequate physical body [1]. As Ben points out, along with a body there has to be episodic memory: a system capable of deciding which experiences are relevant and worth remembering, and of later, when faced with a new situation, recalling similar stored experiences that give guidance on what to expect. In the view of many cognitive scientists this is more or less the essence of intelligence.

So, I'm curious to know how OpenCog handles episodic memory?

[1] A decade ago this was not the case. See, for example, Marvin Minsky and his student Push Singh arguing that expanding on Doug Lenat's Cyc approach, of building a large corpus of declarative statements, is a better approach than trying to build 'baby machines'.

 - Ben Saunders

Ben Saunders

Jul 26, 2013, 12:13:47 PM
to ope...@googlegroups.com
An earlier post (also pasted below) by BenG more or less answers my question, though I have no idea what a dimensional embedding space is.

On Monday, April 8, 2013 1:42:11 PM UTC, BenGoertzel wrote:

The Atomspace is OpenCog's representation of declarative, semantic memory

Combo trees, used by MOSES and stored in a procedure repository, are
the system's key representation of procedural memory

Sensory memory (for vision at present) is stored in the DeSTIN hierarchy

Episodic memory is not adequately handled at present, but is intended
to be stored in a dimensional embedding space

Attentional memory is handled by the Short Term Importance values in
the AtomSpace, which also play a role in declarative memory via
guiding attractor formation...

All the non-declarative types of memory are linked into the Atomspace,
so that the declarative/semantic memory plays a central role...

In short, memory and processing are deeply intertwined, though in a
different way from how this happens in the human brain...

-- Ben G
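For what it's worth, a "dimensional embedding space" here presumably means mapping each stored episode to a point in R^n so that similar experiences land near each other, which turns recall into a nearest-neighbour lookup. A minimal sketch of the idea (purely illustrative -- the class, labels, and vectors below are hypothetical, not OpenCog's actual design):

```python
import math

# Hypothetical sketch: episodes as labelled points in an embedding space,
# recall as nearest-neighbour lookup over those points.
class EpisodicStore:
    def __init__(self):
        self.episodes = []  # list of (label, vector) pairs

    def remember(self, label, vector):
        """Store an episode as a labelled point in the space."""
        self.episodes.append((label, list(vector)))

    def recall(self, query, k=1):
        """Return the k stored episodes closest to the query situation."""
        ranked = sorted(self.episodes, key=lambda e: math.dist(e[1], query))
        return [label for label, _ in ranked[:k]]

store = EpisodicStore()
store.remember("spilled water, floor got wet", [1.0, 0.0, 0.2])
store.remember("pushed a string, nothing moved", [0.0, 1.0, 0.9])
store.remember("knocked over a cup", [0.8, 0.1, 0.3])

# A new situation near past spills recalls the most similar episode.
print(store.recall([0.95, 0.05, 0.25]))  # -> ['spilled water, floor got wet']
```

A real system would learn the embedding rather than hand-pick coordinates; the point is only that "similar situation" reduces to "nearby point".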

I think that, with declarative and semantic memory, it might be possible to simulate common sense in a chat-bot, with most of the data coming from crawling the web, I guess. As Peter Norvig pointed out in a talk or paper, "The Unreasonable Effectiveness of Data", the web now contains enough examples of declarative statements that everybody is assumed to know, like "water flows from high to low places" or "you can pull with a string but not push", oftentimes from the websites of elementary schools. It was the lack of enough such statements around 2002 that prompted Push Singh* to create openmind, where volunteers were supposed to manually enter such statements, the hope being that, with some intelligent parsing, this crowdsourced data would very quickly outpace that in Doug Lenat's Cyc.

However, to create a true toddler level intelligence, IMHO, episodic memory is a must, as is maturation of DeSTIN to do acceptable sensory perception. In fact, if one takes the advice of Hans Moravec or Rodney Brooks seriously, 99% of the effort to build an AGI will be in creating the perception parts, viz. future versions of DeSTIN.

from Intelligence without Representation (1987) -- Rodney Brooks:

It is instructive to reflect on the way in which earth-based
biological evolution spent its time. Single-cell entities arose out of
the primordial soup roughly 3.5 billion years ago. A billion years
passed before photosynthetic plants appeared. After almost another
billion and a half years, around 550 million years ago, the first fish
and Vertebrates arrived, and then insects 450 million years ago. Then
things started moving fast. Reptiles arrived 370 million years ago,
followed by dinosaurs at 330 and mammals at 250 million years ago. The
first primates appeared 120 million years ago and the immediate
predecessors to the great apes a mere 18 million years ago. Man
arrived in roughly his present form 2.5 million years ago. He invented
agriculture a mere 10,000 years ago, writing less than 5000 years ago
and "expert" knowledge only over the last few hundred years.

This suggests that problem solving behavior, language, expert knowledge
and application, and reason, are all pretty simple once the essence of
being and reacting are available. That essence is the ability to move
around in a dynamic environment, sensing the surroundings to a degree
sufficient to achieve the necessary maintenance of life and
reproduction. This part of intelligence is where evolution has
concentrated its time—it is much harder.


Well, you could say that Brooks and Moravec were roboticists and would naturally tout their approach over every other, but still one must admit they have a very good point.

Good Luck,
Ben Saunders

* P.S.: BTW, Push Singh shouldn't have taken such a rash step in 2006. Were any of you guys in touch with him? Was he frustrated by openmind's poor reception and the prospects of not making tenure at MIT?


Linas Vepstas

Jul 26, 2013, 12:41:39 PM
to opencog
Hi,

On 26 July 2013 11:13, Ben Saunders <bfrs...@gmail.com> wrote:

> An earlier post (also pasted below) by BenG more or less answers my question, though I have no idea what a dimensional embedding space is.

> On Monday, April 8, 2013 1:42:11 PM UTC, BenGoertzel wrote:
>
> The Atomspace is OpenCog's representation of declarative, semantic memory
>
> Combo trees, used by MOSES and stored in a procedure repository, are
> the system's key representation of procedural memory
>
> Sensory memory (for vision at present) is stored in the DeSTIN hierarchy
>
> Episodic memory is not adequately handled at present, but is intended
> to be stored in a dimensional embedding space
>
> Attentional memory is handled by the Short Term Importance values in
> the AtomSpace, which also play a role in declarative memory via
> guiding attractor formation...
>
> All the non-declarative types of memory are linked into the Atomspace,
> so that the declarative/semantic memory plays a central role...
>
> In short, memory and processing are deeply intertwined, though in a
> different way from how this happens in the human brain...
>
> -- Ben G

> I think that, with declarative and semantic memory, it might be possible to simulate common sense in a chat-bot, with most of the data coming from crawling the web, I guess. As Peter Norvig pointed out in a talk or paper, "The Unreasonable Effectiveness of Data", the web now contains enough examples of declarative statements that everybody is assumed to know, like "water flows from high to low places" or "you can pull with a string but not push", oftentimes from the websites of elementary schools. It was the lack of enough such statements around 2002 that prompted Push Singh* to create openmind, where volunteers were supposed to manually enter such statements,

Sure, the "Cogita" chatbot used a database of facts that were extracted from this set of English sentences. My only complaint is that the volunteers were not native English speakers, and composed atrocious sentences, such as "Golf is games played by peoples with a stick". Despite this, Cogita could manage to answer some simple questions correctly.

> with the hope being that, with some intelligent parsing, this crowdsourced data would very quickly outpace that in Doug Lenat's Cyc.

Well, "intelligent parsing" is easier said than done. After the cogita experiment, I realized that hand-writing parsers is insane, and that the only thing to do is to try to automatically learn both syntax and semantics. I wrote a proposal back then on how to do this, and am partly done re-writing and expanding it.

> However, to create a true toddler level intelligence, IMHO, episodic memory is a must, as is maturation of DeSTIN to do acceptable sensory perception. In fact, if one takes the advice of Hans Moravec or Rodney Brooks seriously, 99% of the effort to build an AGI will be in creating the perception parts, viz. future versions of DeSTIN.

> from Intelligence without Representation (1987) -- Rodney Brooks:
>
> It is instructive to reflect on the way in which earth-based
> biological evolution spent its time. Single-cell entities arose out of
> the primordial soup roughly 3.5 billion years ago.

Apparently, if you plot genetic complexity vs. time on a logarithmic scale, you get a straight line. The intercept of that line with a complexity of 1 is about 6 or 7 billion years ago.
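That extrapolation can be reproduced with toy numbers: assuming log complexity grows linearly with time, two samples fix the line, and solving for where complexity falls to 1 gives the extrapolated origin. The figures below are made up for illustration -- they are not the actual genomic data, and published estimates of the intercept vary:

```python
import math

def origin_from_two_points(t1, c1, t2, c2):
    """Given complexity c1 at t1 billion years ago and c2 at t2 (t2 < t1),
    fit log(c) = b - a*t and return the t at which complexity = 1."""
    a = (math.log(c2) - math.log(c1)) / (t1 - t2)  # log-growth per Gyr
    b = math.log(c1) + a * t1                      # log-complexity at t = 0
    return b / a                                   # log(c) = 0  =>  t = b / a

# Made-up sample points: complexity 1e6 at 3.5 Gya, 1e9 at 0.25 Gya.
print(round(origin_from_two_points(3.5, 1e6, 0.25, 1e9), 1))  # -> 10.0
```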

> * P.S.: BTW, Push Singh shouldn't have taken such a rash step in 2006. Were any of you guys in touch with him?

No.
 
> Was he frustrated by openmind's poor reception and the prospects of not making tenure at MIT?

Possibly. The basic idea was so brilliant, it was just too simple to be appreciated; it would not impress colleagues. Such is the nature of explaining yourself too clearly -- clarity is lauded only when you are already established; otherwise, readers assume it's just oversimplification and stupidity on the part of the author.

And then ... reviewing the text created by the volunteers -- "Golf is games played by peoples with a stick" -- would make any professor distressed.

-- Linas

Ben Goertzel

Jul 29, 2013, 6:30:49 AM
to ope...@googlegroups.com
>> * P.S.: BTW, Push Singh shouldn't have taken such a rash step in 2006.
>> Were any of you guys in touch with him?

I knew Push a bit, though we were not close friends.... He gave a talk at
Webmind Inc., and we corresponded a little thereafter...

He was a good person and a creative researcher, though I argued with him that
learning should play a much more central role in his work...

As for his tragic suicide, he was prone to occasional depression, and there were
issues in his romantic life ... lots of human stuff going on with him
as with all
of us.... I don't feel it was especially tied to his AI research or associated
frustrations, as some have alluded.... But the truth is hard for us to know at
this point....

If Tipler is right we will all find out as the Omega Point is neared, however ;p

-- Ben

Joanna Bryson

Nov 29, 2020, 12:52:16 PM
to opencog
Hi, I was just trying to work out if there was any relationship between OpenCog & OpenMind and came upon this thread.

I had been pretty good friends with Push since he was an undergraduate and I was a PhD student in the first year of the Cog project (1993). However, I hadn't talked to him for about a year before he died (I was living in another country and we were all just busy), but a lot of our mutual friends got in touch.

Push was under a lot of pressures, including being junior faculty at MIT and a highly valued relationship having gone long-distance / bicoastal. From what I understand, though, the most significant pressure was a recent, permanent back injury that forced him to choose between pain or drugs, and he felt the drugs made him not smart enough.

If anyone coming on this thread is ever undergoing this kind of stress, please do remember that humanity's knowledge keeps improving, including medicine, and that there are people out there who love you. 

Joanna