Should we encode the basic cognitive procedures (at least an initial version), or will they emerge automatically?


Alex

Apr 1, 2017, 6:58:19 AM
to opencog
Hi!

Should we encode the basic cognitive procedures:
- check the understanding of concepts (which can be done through the following activities: predicting the concept, explaining the concept, predicting goals regarding the concept, and recreating the concept, as defined by Steunebrink in AGI 2016)
- determine what knowledge is missing and issue queries (automatic scientific and artistic discovery) to gather that knowledge from the environment
- consciousness (the ability to understand itself and the world; to formulate, estimate, and change goals and subgoals; and to define and execute plans toward achieving those goals)
- the ability to experience emotions (20 types of them, according to personality as defined in the Big Five factor model).

So the question is: should we encode these basic cognitive procedures (e.g., as AtomSpace rules) in an OpenCog agent (AtomSpace + MindAgents), or can we idly wait for such procedures to emerge automatically from the soup of atoms (nodes and links) once the AtomSpace holds enough information (be it junk or something more structured)?

My dream would be to create some small agent with these basic procedures and let this agent learn and develop autonomously. Could that be possible?

Of course, I mean only encoding the initial rules. The agent itself should be able to fine-tune or even redefine the meaning and actual content of "understanding" and "consciousness", because even human beings are only approaching those notions via philosophy, the arts, cognitive science, and neuroscience. But the question still remains: should we encode at least an initial version of those cognitive procedures?
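To make the question concrete, here is a minimal sketch of what "encoding an initial version" could look like, in plain Python rather than Atomese. Everything here is my own invention for illustration, loosely following the four understanding-checking activities listed above; the point is only that the checks are explicit, replaceable data rather than emergent behavior:

```python
# Toy knowledge base: each concept maps to whatever the agent knows about it.
def can_predict(kb, concept):
    # "Prediction": the agent knows at least one property to predict with.
    return bool(kb.get(concept, {}).get("properties"))

def can_explain(kb, concept):
    # "Explanation": the agent holds a definition it could recite.
    return "definition" in kb.get(concept, {})

def can_predict_goals(kb, concept):
    # "Goal prediction": typical goals involving the concept are known.
    return bool(kb.get(concept, {}).get("typical_goals"))

def can_recreate(kb, concept):
    # "Recreation": the concept can be rebuilt from known parts.
    return bool(kb.get(concept, {}).get("parts"))

# The checks are first-class data, so a later version of the agent could
# fine-tune or replace them -- exactly the "initial rules only" idea.
UNDERSTANDING_CHECKS = [can_predict, can_explain, can_predict_goals, can_recreate]

def understanding_score(kb, concept):
    passed = sum(check(kb, concept) for check in UNDERSTANDING_CHECKS)
    return passed / len(UNDERSTANDING_CHECKS)

kb = {
    "lever": {
        "definition": "a rigid bar pivoting on a fulcrum",
        "properties": ["amplifies force"],
        "parts": ["bar", "fulcrum"],
    }
}
print(understanding_score(kb, "lever"))  # 0.75 -- goal knowledge is missing
```

A real AtomSpace encoding would store both the knowledge and the checks as atoms, so the agent could reason about and rewrite its own procedures.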

I have heard about the Eva chatbot and other Hanson characters; maybe those characters, as OpenCog agents, already have such general cognitive procedures encoded?

Alex

Apr 1, 2017, 6:59:18 AM
to opencog
Actually, I would be happy to make this my contribution (with guidance from the community).

Linas Vepstas

Apr 2, 2017, 3:47:49 PM
to opencog
Hi Alex,

I'm hoping for a two-pronged approach for some of the simpler stages of what you talk about: hand-coded rules, for now, to get some things going, and then also some automatically learned ... uhh ... things.

The hanson-robots chatbot consists of multiple parts, some of which are very traditional, old-style hand-authored chat. Another part tries to hook it up to physical sensations and motor movements. This second part is here: https://github.com/opencog/opencog/tree/master/opencog/eva and its theory is in the "architecture" directory. It consists mostly of hand-coded stuff, with the intent of someday replacing it with automatically learned behaviors.

You are welcome to hack on the code: to fix what is there, extend it, or even try to implement new theoretical approaches. The biggest problem is that what is there is already complex and hard to understand... and currently buggy, half-finished, and implemented in a hacky, inelegant way.

Besides the Eva code, we also have, somewhere (not sure where), a Minecraft embodiment interface. It does not have language attached to it, but adding that would be a worthy task.

--linas


--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to opencog+unsubscribe@googlegroups.com.
To post to this group, send email to ope...@googlegroups.com.
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/6998ddfb-7dbd-41e6-96fa-81d0f60cb95b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Marc Weber

May 3, 2017, 11:04:36 AM
to opencog
Your message lacks a goal. Humans have different traits/strengths, which already shows the trouble: without a problem space, you cannot say which "solver" (human) is better for a task. So start by defining the problem to be solved?

Linas Vepstas

May 5, 2017, 6:24:40 PM
to opencog
On Wed, May 3, 2017 at 10:04 AM, Marc Weber <wanaho...@gmail.com> wrote:
Your message lacks a goal. Humans have different traits/strengths, which already shows the trouble: without a problem space, you cannot say which "solver" (human) is better for a task. So start by defining the problem to be solved?

The goal of the Hanson Robots robot is to be sociable.  A robot you could go sit down and have a beer with.  Or something like that.

--linas
 


Alex

May 5, 2017, 7:50:04 PM
to opencog
No, my goal is to build AGI for the benefit of humanity, not a chatbot for entertainment. Apparently, there will be different characters/personalities, e.g.:
- a teacher that explores a pupil's current knowledge, beliefs, reasoning styles, personality, and motivational system, and provides precisely targeted information and explanations. A math teacher (from school to graduate level) would be a good example.
- a mathematician that discovers, explains, publishes, and implements new optimization algorithms
- a legal QA system
- an IT development system that can re-engineer legacy code, recover specifications from the code, and explore and explain code
and so on, and so on.

In each case the AGI agent should explore what she knows and what she does not know, then decide what knowledge is necessary and gather it. She, the agent, should estimate each fact and piece of knowledge, how deeply she understands it; and if she does not understand it, she should go out for further knowledge: she can read books, automatically post in forums and wait for replies, or seek conversations with other agents or human beings. So she starts with minimal knowledge, basic understanding, and self-learning capabilities, and then she gains more knowledge and deepens the notions of "what is understanding", "how should I learn", "what are my emotions and motivations", "what are the emotions and motivations of my students", and so on.
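That explore-then-query loop can be sketched in a few lines of plain Python (not OpenCog; the beliefs, confidence values, and threshold are all invented for illustration):

```python
# Toy version of the loop: score the agent's confidence in each belief, and
# turn every weak belief into an outgoing query -- a stand-in for reading
# books, posting in forums, or asking other agents.

def knowledge_gaps(beliefs, threshold=0.6):
    """beliefs maps a statement to the agent's confidence in it (0..1)."""
    return [stmt for stmt, conf in beliefs.items() if conf < threshold]

def make_queries(gaps):
    return [f"What evidence supports or refutes: {stmt}?" for stmt in gaps]

beliefs = {
    "primes are infinite": 0.99,
    "my student knows induction": 0.30,
}
queries = make_queries(knowledge_gaps(beliefs))
print(queries)  # one query, about the student's knowledge of induction
```

The hard part, of course, is estimating those confidences honestly; that is where the understanding checks from the first message would plug in.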

So the essence of my question was this: should we encode these basic capabilities, or can we just automatically translate all of Wikipedia (or a larger corpus) into the AtomSpace and wait until those basic skills, understanding and self-learning, emerge? My guess is that we should provide the basic skills and ignite the self-learning process. Actually, that is exactly how humanity develops. There is no corpus in the world that fully explains what understanding is, how to solve the problems of epistemology, what types of emotions humans have, and so on. There is no such corpus, and I guess there will never be full answers. So we should replicate the path of humanity, which approaches those questions step by step and is sometimes content with partial answers that may or may not converge to something that could be called truth.

So, I am trying this encoding, but everything is so slow for me.

Alex

May 5, 2017, 8:02:14 PM
to opencog
I just wanted to say that there are open questions both at the domain-knowledge level (e.g., in math) and at the self-learning-methodology (meta-) level. Humanity is solving those questions step by step, and AGI agents can join in this process as well. But those AGI agents should have some basic knowledge and basic learning skills, which they can improve over the course of their interaction and existence.

Alex

May 5, 2017, 8:14:43 PM
to opencog
Well, my endeavours are somewhat marginal, because machine learning today is almost nothing more than applied statistics: it estimates the parameters of statistical models. But I am trying another kind of machine learning: symbolic, logical machine learning that learns symbolic knowledge. Actually, at present I don't know any good reference or resource about symbolic machine learning, so if anyone can provide or mention one, I would be really happy.

The other field that lacks development is what I would call "structural optimization". E.g., one can imagine a complex business process/queue model with service times, waiting times, availability ratios, and so on. One can numerically optimize this model. But we can also imagine a structural transformation of this model into a completely different business process. So how do we select the best model from among the different structures? Not by just simulating each model and choosing the best structure, but by doing guided search (like gradient search in numerical optimization) towards the best structure/structural model. So if there are references in this field, I would be more than happy to hear about them as well!
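A minimal sketch of what such a guided structural search could look like, assuming a made-up cost function in place of a real queueing simulation (structures are just tuples of servers per serial stage; all numbers are invented):

```python
# Toy "structural optimization": search over structures (ways to assign
# servers to serial stages, encoded as tuples like (2, 2, 1)) using greedy
# local moves instead of simulating every possible structure. The cost
# function is a crude stand-in for a real queueing simulation.

def cost(structure):
    # Lightly staffed stages are slow; every extra stage adds overhead.
    return sum(1.0 / s for s in structure) + 0.2 * len(structure)

def neighbors(structure):
    out = set()
    n = len(structure)
    for i in range(n):          # move one server between stages
        for j in range(n):
            if i != j and structure[i] > 1:
                s = list(structure)
                s[i] -= 1
                s[j] += 1
                out.add(tuple(sorted(s, reverse=True)))
    if n > 1:                   # merge the two smallest stages (structural move)
        s = sorted(structure, reverse=True)
        out.add(tuple(sorted(s[:-2] + [s[-2] + s[-1]], reverse=True)))
    for i in range(n):          # split a stage in two (structural move)
        if structure[i] > 1:
            s = list(structure)
            a = s.pop(i)
            out.add(tuple(sorted(s + [a // 2, a - a // 2], reverse=True)))
    return out

def guided_search(start):
    current = start
    while True:
        best = min(neighbors(current), key=cost, default=current)
        if cost(best) >= cost(current):
            return current
        current = best

print(guided_search((2, 2, 1)))  # converges to a single pooled stage: (5,)
```

The interesting research question is exactly the one raised above: what plays the role of the gradient, i.e., how to rank structural moves (merge, split, reroute) without fully simulating each candidate.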

Linas Vepstas

May 5, 2017, 8:38:52 PM
to opencog
On Fri, May 5, 2017 at 6:50 PM, Alex <alexand...@gmail.com> wrote:

So, I am trying this encoding, but everything is so slow for me.

Yes, it is for us all.

Anyway, stop thinking of AGI as being "just like a human but smarter" -- it won't be that.

--linas

Linas Vepstas

May 5, 2017, 8:43:09 PM
to opencog
On Fri, May 5, 2017 at 7:14 PM, Alex <alexand...@gmail.com> wrote:
Actually, at present I don't know any good reference or resource about symbolic machine learning, so if anyone can provide or mention one, I would be really happy.

I don't either. The best one I know of is this one
 https://arxiv.org/abs/1401.3372

--linas


Alex

May 7, 2017, 10:38:54 AM
to opencog
In the depths of the Web I have found notions such as "seed AI" and "bootstrapping self-improving AI" (http://www.sciencedirect.com/science/article/pii/S1877050914015403). This is exactly the idea I am trying to pursue, but, unlike others, I am trying to do it with OpenCog.

BTW, there is the great BICA Society, and Elsevier publishes the quite prominent (indexed) journal Biologically Inspired Cognitive Architectures. I was not aware of this activity, and now I am happy to see it in action.

Ben Goertzel

May 7, 2017, 11:20:33 PM
to opencog
Yeah of course we are very familiar with the concept of a seed AI from
online discussions back to 2001 or so, on the AGI and SL4 email lists
etc. etc.

The concept is the right one... but bear in mind that a biological
seed itself is a very complex system... in my view we are now building
the seed (the current version of OpenCog) and it is simple compared to
what it will grow into, but still complex for our paltry human minds
to grapple with ...

The main obstacle we face now, apart from numerous annoying and
complex issues of "plumbing" (software scalability, documentation,
interfacing with other software systems) and funding developer time,
is making backward chaining inference algorithmically scalable via
solving the adaptive pruning problem (letting the system identify what
patterns have characterized successful inferences in the past, and use
these patterns to guide its future inferences, using the inference
engine itself to recursively assist with the pattern-identification
process). We are having a small workshop here next week in HK to
focus on this problem...
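A toy illustration of that adaptive-pruning idea, far removed from PLN itself: a miniature backward chainer over ground Horn clauses that credits the rules used in successful proofs and tries those rules first on later queries. All rules, facts, and the substitution scheme are invented for illustration:

```python
from collections import defaultdict

# Miniature backward chainer. After each successful proof it credits the
# rules that closed the proof; later queries try credited rules first --
# a crude stand-in for learned inference guidance.

RULES = [  # (head, body): head holds if every body goal holds
    ("mortal(X)", ["man(X)"]),
    ("mortal(X)", ["robot(X)", "rusty(X)"]),
]
FACTS = {"man(socrates)", "man(plato)"}
success_count = defaultdict(int)  # rule -> number of proofs it closed

def prove(goal, depth=8):
    if depth == 0 or "(" not in goal:
        return False
    if goal in FACTS:
        return True
    name, arg = goal.split("(", 1)
    candidates = [r for r in RULES if r[0].startswith(name + "(")]
    # Adaptive ordering: previously successful rules are tried first.
    candidates.sort(key=lambda r: -success_count[repr(r)])
    for head, body in candidates:
        # Ground the single variable X with the query's constant (toy-level
        # substitution; real unification is far more general).
        subgoals = [b.replace("X)", arg) for b in body]
        if all(prove(g, depth - 1) for g in subgoals):
            success_count[repr((head, body))] += 1
            return True
    return False

print(prove("mortal(socrates)"))  # True, via the man(X) rule
print(prove("mortal(plato)"))     # True; that rule now has priority
```

The recursive part of the idea described above, using the inference engine itself to find the patterns that predict success, is exactly what this toy version omits: here the "pattern" is just a per-rule success counter.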

ben



--
Ben Goertzel, PhD
http://goertzel.org

"I am God! I am nothing, I'm play, I am freedom, I am life. I am the
boundary, I am the peak." -- Alexander Scriabin