Is the atomspace an attempt to make contextual models from learning algorithms?


Gaurav Gautam

Feb 19, 2017, 8:37:07 PM2/19/17
to opencog
Hello again wonderful people of opencog!

You are the friendliest community toward outsiders in the AI world. That is amazing.

So I was watching a video on YouTube of a DARPA expert talking about how there have been two waves of AI technology. As I understood it, he said: the first-wave tech tried to use logic by encoding rules about the real world and letting the computer crunch the axioms. The second wave is of course ML. But it is incapable of extracting abstract rules from the models it learns. That is, while it could tell you that a picture is a cat, it couldn't tell you why it is a cat. Then he said that in the future we will have systems that can do both, i.e. learn from data and abstract knowledge from the models they learn.

And I started wondering if this is not very similar to atomspace. You represent concepts as graphs and use those graphs to perform logic. Now I am not sure if you would be making these graphs through learning algorithms or not. So that is my question. Is this what you are trying to do? Are you trying to make the atomspace through learning algorithms and then inform said algorithms from the contents of the atomspace?
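To make my question concrete, here is a toy sketch of what I mean by "concepts as graphs used to perform logic" (this is NOT the actual OpenCog AtomSpace API, just a made-up illustration with hypothetical names):

```python
# Toy sketch (not real OpenCog code): concepts connected by inheritance
# links, plus a simple deduction that walks the graph transitively.

# Each pair (child, parent) is one "inheritance link" in the graph.
inheritance = {("cat", "mammal"), ("mammal", "animal")}

def inherits(child, parent, links):
    """Deduce child->parent by following inheritance links transitively."""
    if (child, parent) in links:
        return True
    # Try every direct parent of `child` as an intermediate step.
    return any(inherits(mid, parent, links)
               for c, mid in links if c == child)

print(inherits("cat", "animal", inheritance))  # True: cat -> mammal -> animal
print(inherits("animal", "cat", inheritance))  # False: no path downward
```

The question is whether such graphs are meant to be built by hand or learned from data.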

Yours sincerely
Gaurav Gautam

Alex

Feb 21, 2017, 3:52:17 PM2/21/17
to opencog
Hi!

There are two kinds of knowledge (or representations of knowledge): symbolic (explicit, like logic and rules) and subsymbolic (black boxes like neural networks). Your question is about the possibility of transforming knowledge from the subsymbolic representation to the symbolic one. There are indeed such efforts. A quick Google search reveals articles on how to do logical reasoning (at least propositional and predicate logic) with neural networks, and how energy minimization in neural networks resembles a directed deduction process. So symbolic knowledge can be encoded into subsymbolic form. Going in the other direction can be trickier. I have no references on it, but I guess it is possible, e.g. to extract logical rules from a trained neural network classifier. There is a branch of science called Connection Science that investigates the connection between neural networks and logic, and this community has a journal: http://www.tandfonline.com/toc/ccos20/current
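As a minimal illustration of that subsymbolic-to-symbolic direction (a toy example I made up, not a reference to any particular paper): take a single trained threshold neuron, treat it as a black box, probe it exhaustively, and read the learned function back out as an explicit logical rule.

```python
# Toy rule extraction: a "trained" neuron (weights assumed given) is
# probed on all boolean inputs; the recovered truth table is the
# symbolic rule the network implicitly learned.

def neuron(x1, x2, w=(0.6, 0.6), bias=-1.0):
    """A threshold unit that fires only when both inputs are on."""
    return int(w[0] * x1 + w[1] * x2 + bias >= 0)

# "Rule extraction" by exhaustive probing of the black box.
truth_table = {(a, b): neuron(a, b) for a in (0, 1) for b in (0, 1)}

# The extracted symbolic rule: output is true iff both inputs are true,
# i.e. the neuron computes logical AND.
assert all(truth_table[(a, b)] == int(a and b) for a, b in truth_table)
print(truth_table)  # {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
```

For real networks exhaustive probing is infeasible, which is why rule extraction from large trained models is the hard research problem mentioned above.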

So my understanding is that OpenCog can represent rules and other symbolic knowledge extracted from a subsymbolic representation of knowledge, but that is not the only domain of application of OpenCog; there are a lot more possible applications.

Alex

Feb 21, 2017, 3:57:02 PM2/21/17
to opencog
There is a boom in neural network translation. Maybe it is possible to extract formal grammars and symbolic NLP processors from neural networks that are trained for translation...

Gaurav Gautam

Feb 21, 2017, 8:40:53 PM2/21/17
to opencog
Very interesting. I understand that opencog has diverse applications, but what I wanted to ask was whether going from subsymbolic to symbolic representation is one of the central goals. I am not an expert, but it seems to me that doing this is the key to general intelligence. Am I wrong to say that?

Roman Treutlein

Feb 22, 2017, 5:35:05 AM2/22/17
to opencog
Yes, bridging the gap between symbolic and subsymbolic representations is one of the important goals of the project.