On Oct 6, 2020, at 1:05 AM, Linas Vepstas <linasv...@gmail.com> wrote:

Hi Luke,

You wrote a long email, it deserves a long reply... tomorrow. I've been away for a while, just got back, and have a large backlog. In the meanwhile, several quick remarks:

-- use the mailing list; your comments are of general interest.

-- go through the examples in the github atomspace examples directory. They are key.

-- I suggest focusing on the atomspace at first. At some point, you will say to yourself "gee I wish I could do xxx ..." at which point you will discover that the URE is, or is not, the solution to xxx.

-- question-answering ... constraint satisfaction ... oooof. That is a very long and complex and tricky discussion. We did a question-answering system 10+ years ago, and learned a lot. Well, mostly that natural language isn't quirky "by accident"; there's a real reason why it is the way it is. The problem with question-answering is understanding the question; finding the answer is kind of "the easy part" ... well, I'm glossing ... there are many hard parts ....

-- neural nets: I'm working on a theory that reconciles neural nets w/ the atomspace, but it's 98% heavy-duty math so I'll spare you the details. Alexey ... is ... doing something different ...

I'll try to respond in slightly greater detail tomorrow. I think it would be awesome to gain a collaborator!

Linas

On Fri, Oct 2, 2020 at 3:56 AM Luke Peterson <luketp...@gmail.com> wrote:

Hi Linas & Nil,
Introductions first. I'm a reasonably accomplished systems-level software engineer (20+ years as a programmer; I worked on many parts of a major commercial operating system, from UI & apps down to the driver stack and low-level APIs). I spent the last decade managing an R&D team responsible for the hardware architecture (silicon) of one of the most widely adopted mobile (as in phones & tablets) GPU designs. But my personal interest has always been the quest for AGI.
Getting straight to the point, I am able to dedicate serious (personal) time to contributing to a project like OpenCog / the Atomspace, but I'm finding it a little tough to get my bearings and figure out whether OpenCog / the Atomspace is a good fit for me.
For the last year, I have spent my spare time developing (in Rust) a system called Hippocampus that has many similarities to the Atomspace. Hippocampus is also a graph language for representing both knowledge and transformations on knowledge, and also has typed values that flow through the graph.
My design for HC was entirely built from first-principles, i.e. just thinking about the behavior I want and how a system could be architected so that behavior would come about. Similarities with Atomspace I chalk up to convergent evolution at work.
There are, of course, immense design differences between Hippocampus and the Atomspace, not the least of which is that Hippocampus doesn't support any kind of general query capability: the only way to find a node (atom) is by following a link from another node. But HC addresses some of the places where the Atomspace uses queries by having "implied links", so in practice they're not all that different to use. There may be substantial performance differences in either direction, depending on usage patterns, however. In any case, HC is nowhere near as mature as the Atomspace.
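To make the distinction concrete, here is a tiny sketch (invented for this email; not actual Hippocampus or Atomspace code) contrasting the two lookup styles: link-following, where you must already hold a node that links to the target, versus a general query, which scans the whole store by property.

```python
# Hypothetical node/link store illustrating link-following vs. general query.
# All names here are made up for illustration.

class Node:
    def __init__(self, name):
        self.name = name
        self.links = {}          # label -> target Node

    def follow(self, label):
        """Link-following: a node is reachable only via an explicit outgoing link."""
        return self.links.get(label)

def query(nodes, predicate):
    """General query: scan every node in the store, no entry point needed."""
    return [n for n in nodes if predicate(n)]

# Toy graph: cat --is_a--> mammal --is_a--> animal
cat, mammal, animal = Node("cat"), Node("mammal"), Node("animal")
cat.links["is_a"] = mammal
mammal.links["is_a"] = animal
all_nodes = [cat, mammal, animal]

# Link-following (the HC-style path): start from a node you already hold.
assert cat.follow("is_a").name == "mammal"

# General query (the Atomspace-style path): find nodes by property alone.
assert [n.name for n in query(all_nodes, lambda n: "is_a" in n.links)] == ["cat", "mammal"]
```

One way to read "implied links" (my guess, not a statement about HC's design) is as a `follow` label whose target is computed on demand rather than stored, which would explain why the two approaches end up feeling similar in use.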
Nonetheless, HC was (is) showing a lot of promise, but I just got tired of working alone with nobody to bounce ideas off, and then I stumbled into the Atomspace. I don't have the background in predicate calculus and some of the other disciplines that inform the Atomspace design, so I'm having to learn your terminology and patterns from square one; please forgive any cluelessness on my part.
I’ve spent the last month making a serious effort to understand the best ways to use the Atomspace, but I can honestly say my understanding is still pretty murky.
I decided to write a “bootstrap” guide, putting into practice the adage that “You don’t understand something until you can explain it to someone else.” My guide (the start of it, anyway) is here. Of course it’s terribly incomplete.
https://luketpeterson.github.io/atomspace-bootstrap-guide/
Also, it is peppered with authoritative-sounding statements, e.g. "It is...". This is a stylistic choice, because the alternative, e.g. "It appears to me that...", would make the guide even more tedious to read than it already is. Obviously I don't know what I'm talking about in many cases, and I'm sure I'm often wrong in both subtle and egregious ways.
Any comments and edits would be most welcome.
In any case, the next item on my “getting up to speed” roadmap was to understand how OpenCog / Atomspace could best be used to implement some form of narrow question-answering system. Think Apache UIMA / IBM Watson, weighing all the evidence from its KB to support or reject a hypothesis.
Implicit in that is some form of constraint solver. It's not entirely clear to me which software module within OpenCog is the recommended component for tackling this: whether it's the Unified Rule Engine, or whether the intent is that, given the right KB formulas, inferencing behavior can be implemented using Atomspace primitives invoking the pattern matcher alone, e.g. like the pattern-matcher examples that implement deduction, etc.
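The "inference via pattern matching alone" idea can be sketched generically: repeatedly match a rule's antecedent pattern against the KB and assert the consequent until nothing new appears. The sketch below is my own toy over (subject, relation, object) triples, not OpenCog URE or pattern-matcher API.

```python
# Toy "inference as repeated pattern matching": one deduction rule,
# (A inherits B) & (B inherits C) => (A inherits C),
# applied to a KB of triples until a fixed point is reached.

def deduce(kb):
    """Close the KB under the deduction rule above."""
    kb = set(kb)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(kb):          # match first premise
            for (b2, r2, c) in list(kb):     # match second premise
                if r1 == r2 == "inherits" and b == b2:
                    new = (a, "inherits", c)  # assert the conclusion
                    if new not in kb:
                        kb.add(new)
                        changed = True
    return kb

facts = {("cat", "inherits", "mammal"), ("mammal", "inherits", "animal")}
closed = deduce(facts)
assert ("cat", "inherits", "animal") in closed
```

A real rule engine would, of course, index the KB and schedule rule applications rather than brute-force scan, but the control loop is the part I'm trying to understand in OpenCog terms.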
After I’ve wrapped my head around the OpenCog approach to inferencing, I wanted to try to use the system for visual reasoning. 2D, at first, so it can answer questions like “Is the red circle above the blue box?” For this, my plan would be to use pre-trained NN classifiers / annotators to generate Atomspace assertions, and then to apply spatial reasoning rules in the Atomspace. I know that’s not a good general solution, but my aim is to understand the software architecture, not to do any cutting-edge research yet.
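The pipeline I have in mind looks roughly like this sketch: pretend the NN detectors have already emitted symbolic assertions (color, shape, bounding box), then answer the spatial question with a hand-written rule. The detections, field names, and the "above" rule are all invented for illustration.

```python
# Pretend NN-annotator output: each detection is a symbolic assertion with a
# bounding box (x1, y1, x2, y2), where y grows downward as in image coordinates.
detections = [
    {"id": "obj1", "color": "red",  "shape": "circle", "box": (40, 10, 60, 30)},
    {"id": "obj2", "color": "blue", "shape": "box",    "box": (35, 50, 70, 80)},
]

def find(color, shape):
    """Look up the (assumed unique) detection matching color and shape."""
    return next(d for d in detections if d["color"] == color and d["shape"] == shape)

def above(a, b):
    """Spatial rule: a is above b if a's bottom edge is higher than b's top edge."""
    return a["box"][3] < b["box"][1]

# "Is the red circle above the blue box?"
assert above(find("red", "circle"), find("blue", "box"))
assert not above(find("blue", "box"), find("red", "circle"))
```

The interesting architectural question for me is where each piece lives: whether the `above` rule becomes a KB formula evaluated by the pattern matcher, or procedural code called from it.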
My ultimate goal is to work on a hybrid neuro-symbolic system capable of learning new patterns from input streams (not sure what to try first, but the general idea is something akin to training fractal auto-encoders, and then trying to reduce the meaningful dimensions of the auto-encoders into symbolic relationships).
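To gesture at what "reducing meaningful dimensions into symbolic relationships" might mean in the simplest possible case: threshold an (imagined) auto-encoder's latent vector into discrete predicates. The latent values, dimension names, and threshold below are all invented; real latent dimensions wouldn't come pre-labeled like this.

```python
# Toy latent-to-symbolic reduction: each latent dimension that fires above a
# threshold becomes a symbolic predicate. Purely illustrative values.
latent = [0.92, 0.05, 0.78]                       # pretend encoder output
dim_names = ["is_round", "is_textured", "is_red"]  # pretend learned meanings
THRESHOLD = 0.5

def symbolize(latent, names, thresh=THRESHOLD):
    """Emit a predicate for each latent dimension above the threshold."""
    return {name for name, v in zip(names, latent) if v > thresh}

assert symbolize(latent, dim_names) == {"is_round", "is_red"}
```

The hard research part, obviously, is discovering which dimensions are meaningful and what relationships hold between them, not the thresholding itself.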
Alexey Potapov’s blog post about representing the Atomspace operations in terms of a Neural Network is also interesting. It seems more flexible, but may ultimately give up some of the practical benefits of having a symbolic framework in the loop.
In any case, this kind of research is a little way away and I need to crawl before I try and run.
Where would you recommend I look next, to get a handle on the inferencing systems of OpenCog?
Anyway, thanks a lot, and thanks for your work on OpenCog to date.
-Luke
--
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.