Hello again wonderful people of opencog!
You are the friendliest community toward outsiders in the AI world. That is amazing.
So I was watching a YouTube video of a DARPA expert talking about how there have been two waves of AI technology. As I understood him: the first-wave tech tried to use logic, encoding rules about the real world and letting the computer crunch the axioms. The second wave is, of course, ML, but it is incapable of extracting abstract rules from the models it learns. That is, while it could tell you that a picture is of a cat, it couldn't tell you why it is a cat. He then said that in the future we will have systems that can do both, i.e. learn from data and abstract knowledge from the models they learn.
And I started wondering whether this is very similar to the AtomSpace: you represent concepts as graphs and use those graphs to perform logic. Now I am not sure whether you build these graphs through learning algorithms or not. So that is my question. Is this what you are trying to do? Are you trying to populate the AtomSpace through learning algorithms and then inform said algorithms from the contents of the AtomSpace?
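To make sure I'm being clear, here is a toy Python sketch of what I mean by "represent concepts as graphs and perform logic over them". This is just an illustration I made up, not the real AtomSpace API: concepts are nodes, relations are typed edges, and one hand-written rule (inheritance transitivity) derives new facts from the graph.

```python
# Toy sketch (NOT the real AtomSpace API): a graph of typed edges
# plus one logic rule applied until no new facts appear.

edges = set()  # (relation, source, target) triples

def add(relation, source, target):
    edges.add((relation, source, target))

# Encode some concept knowledge.
add("inherits", "cat", "mammal")
add("inherits", "mammal", "animal")

def infer_transitive(relation):
    """Rule: inherits(a, b) and inherits(b, c) => inherits(a, c)."""
    changed = True
    while changed:
        changed = False
        for (r1, a, b) in list(edges):
            for (r2, b2, c) in list(edges):
                if r1 == r2 == relation and b == b2 \
                        and (relation, a, c) not in edges:
                    edges.add((relation, a, c))
                    changed = True

infer_transitive("inherits")
print(("inherits", "cat", "animal") in edges)  # True: a derived fact
```

The open question for me is whether the base edges would come from a learning algorithm rather than being typed in by hand, as in this sketch.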
Yours sincerely,
Gaurav Gautam