Re: Interest in Contributing to OpenCog


Luke Peterson

Oct 6, 2020, 4:10:10 AM
to linasv...@gmail.com, ope...@googlegroups.com
Hi Linas,

Thank you very much for the reply.  This is certainly lower urgency than many other things on your plate.

-- use the mailing list; your comments are of general interest.

Copied the mailing list.  I was hesitant to spam everybody, but the list appears low-traffic enough that hopefully people can ignore it if they want.

-- go through the examples in the github atomspace examples directory. They are key.

I have.  I don’t mean this to sound critical, because the examples are very good and were essential to the understanding I have gleaned so far, but I still have many unanswered questions.

I’ve written up a guide that is largely a digestion and regurgitation of the material in the examples and documentation, but in an order I found easier to grok, and with more (possibly incorrect) conceptual exposition around the parts that tripped me up.  I try to clearly call out things I’m still confused about.

I’d say my guide covers about 60% (so far) of the material in the corpus of official examples (including pattern matcher examples).  What I feel is missing from both my guide and from the official documentation (not necessarily missing from OpenCog itself) is that “whole is greater than the sum of the parts” moment where you see how everything fits together.  Sorry if that criticism is a bit wooly, but I feel like some “big picture” explanation is missing, so I don’t have a framework to hang all the details on.

Here is my guide, although you and anyone else on the mailing list will likely find it tedious, as it’s pretty elementary stuff explained in far too many words: https://luketpeterson.github.io/atomspace-bootstrap-guide/

-- question-answering ... constraint satisfaction ... oooof. That is a very long and complex and tricky discussion. We did a question answering system 10+ years ago, and learned a lot. Well, mostly that natural language isn't quirky "by accident"; there's a real reason why it is the way it is. The problem with question-answering is understanding the question;

I should have been clearer.  When I said question answering, I meant the ability to answer a question that could be formulated as a precise query (possibly in Atomese).

I understand that, at the limit, any system's (or person’s) ability to precisiate (a word that sadly isn’t in the dictionary yet) an NL question requires a model of the entity asking the question.  This is true of all NL communication, not just questions.  Grice's Cooperative Principle and all that: https://en.wikipedia.org/wiki/Cooperative_principle

Personal side note: I learned the hard way never to respond to the question “Are you carrying any illegal drugs?” with a clarifying question “Illegal according to which legal framework?” when the former question is asked by an armed customs and border protection agent.  That’s 8 hours of my life I’ll never get back.

Anyway, really “understanding” a natural language question in the way people do requires the whole “intelligence capability stack”, more or less.

finding the answer is kind-of "the easy part" ... well, I'm glossing ... there are many hard parts ...

Perhaps I ran down the wrong path by mentioning Watson.  Say I’m not trying to query for an atom that already exists in the Atomspace, but instead for a value (or atom, or set of atoms) that is created by my query.  What I was intending to ask about was the “right way” to do “reason-driven exploration” in OpenCog / Atomspace, to find a suitable answer among quadrillions of possible answers.

Consider the question: "What time do I need to leave home and by which route should I travel in order to be 99% sure I will arrive at work before 9am?"  Let’s say the KB has the complete train map & the train schedule, time-distances between points on foot, and probabilities of accidents and delays on various train lines, etc.

A straightforward term rewriting system seems like it would get into trouble because the number of possible paths is explosive.  There are many different algorithms that could be brought to bear, approximating a solution to a traveling-salesman problem like this.  But my question is: what part of the OpenCog software architecture handles this kind of exploration?
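Here is a toy sketch of what the computation in the train question amounts to in the simplest case. All routes, legs, and probabilities are invented, and nothing here is OpenCog API; each leg is a discrete travel-time distribution, and legs are assumed independent. The point is only to show the shape of the problem before the route space explodes.

```python
# Hypothetical data: each route is a list of legs, each leg a discrete
# travel-time distribution {minutes: probability}.
ROUTES = {
    "red line": [{10: 0.9, 25: 0.1}, {15: 0.8, 30: 0.2}],
    "walk+bus": [{12: 1.0}, {20: 0.7, 35: 0.3}],
}

def convolve(legs):
    """Combine independent per-leg distributions into a total-time distribution."""
    total = {0: 1.0}
    for leg in legs:
        combined = {}
        for t1, p1 in total.items():
            for t2, p2 in leg.items():
                combined[t1 + t2] = combined.get(t1 + t2, 0.0) + p1 * p2
        total = combined
    return total

def latest_departure(arrive_by, confidence=0.99):
    """Latest departure time (minutes past midnight) per route that still
    arrives by `arrive_by` with probability >= confidence."""
    result = {}
    for name, legs in ROUTES.items():
        dist = convolve(legs)
        for budget in sorted(dist):  # smallest time budget clearing the bar
            if sum(p for t, p in dist.items() if t <= budget) >= confidence:
                result[name] = arrive_by - budget
                break
    return result

plan = latest_departure(540)  # 9:00 am = 540 minutes past midnight
```

The answer is then the route with the latest departure time in `plan`; the real question above is what part of the architecture performs this search when enumerating every candidate stops being feasible.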

BTW, I’m not suggesting a full-on traveling salesman solver is part of the requirement for general intelligence.  It probably isn’t.  But we humans certainly possess some limited ability to navigate through complex conceptual topologies with lots of uncertainty.  Nobody plays sudoku by intuition alone, but nobody walks straight ahead until they crash into a wall before changing direction either.

-- neural nets: I'm working on a theory that reconciles neural nets w/ the atomspace, but it's 98% heavy-duty math so I'll spare you the details.  Alexey ... is ... doing something different ...

Really looking forward to your theory.  I saw you teased an article about the difference between NNs and Symbolic frameworks being dense vs. sparse linkages between concepts.  I think you're exactly right, and being able to “guide” where the more dense linkages are appropriate will be immensely beneficial for efficiency, introspection, “instructibility” and many other desirable properties that are pretty weak in today’s NNs.

Thanks again.

-Luke

On Oct 6, 2020, at 1:05 AM, Linas Vepstas <linasv...@gmail.com> wrote:

Hi Luke,

You wrote a long email; it deserves a long reply... tomorrow.  I've been away for a while, just got back, and have a large backlog.  In the meanwhile, several quick remarks:
-- use the mailing list; your comments are of general interest.
-- go through the examples in the github atomspace examples directory. They are key.
-- I suggest focusing on the atomspace at first. At some point, you will say to yourself "gee, I wish I could do xxx ..." at which point you will discover that the URE is, or is not, the solution to xxx.
-- question-answering ... constraint satisfaction ... oooof. That is a very long and complex and tricky discussion. We did a question answering system 10+ years ago, and learned a lot. Well, mostly that natural language isn't quirky "by accident"; there's a real reason why it is the way it is. The problem with question-answering is understanding the question; finding the answer is kind-of "the easy part" ... well, I'm glossing ... there are many hard parts ...
-- neural nets: I'm working on a theory that reconciles neural nets w/ the atomspace, but it's 98% heavy-duty math so I'll spare you the details.  Alexey ... is ... doing something different ...

I'll try to respond in slightly greater detail tomorrow.   I think it would be awesome to gain a collaborator!

Linas

On Fri, Oct 2, 2020 at 3:56 AM Luke Peterson <luketp...@gmail.com> wrote:
Hi Linas & Nil,

Introductions first.  I’m a reasonably accomplished systems-level software engineer (20+ years as a programmer; I worked on many parts of a major commercial operating system, from UI & apps to the driver stack and low-level APIs).  I spent the last decade managing an R&D team responsible for the hardware architecture (silicon) of one of the most widely adopted mobile (as in phones & tablets) GPU designs.  But my personal interest has always been the quest for AGI.

Getting straight to the point, I am able to dedicate serious (personal) time to contributing to a project like the OpenCog / the Atomspace, but I’m finding it a little tough to get my bearings and figure out if OpenCog / Atomspace is a good fit for me.

For the last year, I have spent my spare time developing (in Rust) a system called Hippocampus that has many similarities to the Atomspace.  Hippocampus is also a graph language for representing both knowledge and transformations on knowledge, and also has typed values that flow through the graph.

My design for HC was built entirely from first principles, i.e. just thinking about the behavior I want and how a system could be architected so that behavior would come about.  Similarities with the Atomspace I chalk up to convergent evolution at work.

There are, of course, immense differences in design between Hippocampus and the Atomspace, not the least of which is that Hippocampus doesn’t support any kind of general query capability, i.e. the only way to find a node (atom) is by following a link from another node that links to it.  But HC addresses some of the places where the Atomspace uses queries by having “implied links”, so in practice they’re not all that different to use.  There may be substantial performance differences where one or the other would be faster, depending on the usage patterns, however.  In any case, HC is nowhere near as mature as the Atomspace.
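For what it's worth, here is a tiny sketch of that distinction, with a made-up API that is neither the real Hippocampus nor the AtomSpace: when an edge A -> B is added, a reverse edge B -> A is maintained automatically, so "who links to B?" becomes plain link-following instead of a scan.

```python
# Hypothetical graph store contrasting query-style lookup with "implied links".
class Graph:
    def __init__(self):
        self.out = {}   # node -> set of targets
        self.inc = {}   # node -> set of sources (the "implied" reverse links)

    def link(self, src, dst):
        self.out.setdefault(src, set()).add(dst)
        self.inc.setdefault(dst, set()).add(src)  # maintained eagerly on insert

    def query_sources(self, dst):
        # Query style: scan every node's outgoing links -- O(edges).
        return {s for s, targets in self.out.items() if dst in targets}

    def follow_sources(self, dst):
        # Implied-link style: one lookup -- O(1) amortized.
        return self.inc.get(dst, set())
```

Both calls return the same set; the difference is where the cost is paid, at insertion time or at query time.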

Nonetheless, HC was (is) showing a lot of promise, but I just got tired of working alone with nobody to bounce ideas off, and then I stumbled onto the Atomspace.  I don’t have the background in predicate calculus and some of the other disciplines that inform the Atomspace design, so I’m having to learn your terminology and patterns from square one; please forgive any cluelessness on my part.

I’ve spent the last month making a serious effort to understand the best ways to use the Atomspace, but I can honestly say my understanding is still pretty murky.

I decided to write a “bootstrap” guide, putting into practice the adage that “You don’t understand something until you can explain it to someone else.”  My guide (the start of it, anyway) is here.  Of course it’s terribly incomplete.

https://luketpeterson.github.io/atomspace-bootstrap-guide/

Also, it is peppered with authoritative-sounding statements, e.g. “It is…”.  This is a stylistic choice, because the alternative, e.g. “It appears to me that…”, would make the guide even more tedious to read than it already is.  Obviously I don’t know what I’m talking about in many cases, and I’m sure I’m often wrong in both subtle and egregious ways.

Any comments and edits would be most welcome.

In any case, the next item on my “getting up to speed” roadmap was to understand how OpenCog / Atomspace could best be used to implement some form of narrow question-answering system.  Think Apache UIMA / IBM Watson, weighing all the evidence from its KB to support or reject a hypothesis.

Implicit in that is some form of constraint solver.  It’s not entirely clear to me which software module within OpenCog is the recommended component for tackling this: whether it’s the Unified Rule Engine, or whether the intent is that, given the right KB formulas, inferencing behavior can be implemented using Atomspace primitives invoking the pattern matcher alone, e.g. like the pattern-matcher examples that implement deduction.
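To make the second option concrete, here is a toy forward chainer in plain Python. It only captures the flavor of a transitive deduction rule (A implies B, and B implies C, entail A implies C); it is not the URE or the pattern matcher, and the fact representation is invented.

```python
# Toy forward chaining: close a set of implications under transitivity.
def deduce(implications):
    """implications: set of (antecedent, consequent) pairs.
    Repeatedly apply the deduction rule until no new facts appear."""
    facts = set(implications)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(facts):
            for (b2, c) in list(facts):
                if b == b2 and (a, c) not in facts:
                    facts.add((a, c))  # deduction: (a->b, b->c) => (a->c)
                    changed = True
    return facts
```

In the pattern-matcher framing, the inner loop would instead be a single query that binds the shared middle term, which is exactly where the question of who drives the iteration (and when to stop) comes in.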

After I’ve wrapped my head around the OpenCog approach to inferencing, I wanted to try to use the system for visual reasoning, 2D at first, so it can answer questions like “Is the red circle above the blue box?”  For this, my plan would be to use pre-trained NN classifiers / annotators to generate Atomspace assertions, and then to apply spatial reasoning rules in the Atomspace.  I know that’s not a good general solution, but my aim is to understand the software architecture, not to do any cutting-edge research yet.
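A sketch of the symbolic half of that plan, with an invented box format and scene data: in practice the detections would become Atomspace assertions and `above` a rule, but the geometry is the same.

```python
# Hypothetical detector output: label -> bounding box in image coordinates,
# where y grows downward. Format and scene are made up for illustration.
Box = dict  # {"x": left, "y": top, "w": width, "h": height}

def above(a: Box, b: Box) -> bool:
    """True if box a lies entirely above box b (image coordinates)."""
    return a["y"] + a["h"] <= b["y"]

scene = {
    "red circle": {"x": 40, "y": 10, "w": 20, "h": 20},
    "blue box":   {"x": 35, "y": 60, "w": 30, "h": 25},
}

# "Is the red circle above the blue box?"
answer = above(scene["red circle"], scene["blue box"])
```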

My ultimate goal is to work on a hybrid neuro-symbolic system capable of learning new patterns from input streams (not sure what to try first, but the general idea is something akin to training fractal auto-encoders, and then trying to reduce the meaningful dimensions of the auto-encoders into symbolic relationships).

Alexey Potapov’s blog post about representing the Atomspace operations in terms of a Neural Network is also interesting.  It seems more flexible, but may ultimately give up some of the practical benefits of having a symbolic framework in the loop.

In any case, this kind of research is a little way away and I need to crawl before I try and run.

Where would you recommend I look next, to get a handle on the inferencing systems of OpenCog?

Anyway, thanks a lot, and thanks for your work on OpenCog to date.

-Luke




--
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.
 


Nil Geisweiller

Oct 20, 2020, 1:34:56 AM
to ope...@googlegroups.com
Hi Luke,

On 10/6/20 11:10 AM, Luke Peterson wrote:
>  Sorry if that criticism is a bit wooly, but I feel like some “big
> picture” explanation is missing, so I don’t have a framework to hang all
> the details on.

The big picture, IMHO, is that OpenCog can learn how to allocate
resources, either by creating Hebbian links via mining (in a general
sense) a record of its own behavior, or creating inference control rules
via mining (in a general sense) a record of its own reasoning. It should
eventually acquire the ability to rewrite its own code, starting with
Atomese and then the layers underneath, but we're still far from that.
However, we do have proto-experiments in creating Hebbian links and
control rules.
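A toy rendition of the Hebbian-link idea (plain Python, invented data, and not OpenCog's actual attention-allocation code): mine a record of which concepts were active together in each episode, and create weighted links between pairs that co-occur often.

```python
from collections import Counter
from itertools import combinations

def mine_hebbian_links(episodes, min_count=2):
    """episodes: iterable of sets of concepts active together.
    Returns {(a, b): weight} for pairs co-occurring at least min_count times,
    with weight = fraction of episodes in which the pair co-occurred."""
    pair_counts = Counter()
    for active in episodes:
        for a, b in combinations(sorted(active), 2):
            pair_counts[(a, b)] += 1
    total = len(episodes)
    return {pair: n / total for pair, n in pair_counts.items() if n >= min_count}

log = [{"kitchen", "coffee"}, {"kitchen", "coffee", "morning"}, {"office", "coffee"}]
links = mine_hebbian_links(log)
```

The "mining in a general sense" above is of course far richer than pair counting, but the loop from behavior record to new links is the part this illustrates.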

Whether this can work ultimately depends on the environment. If it's too
chaotic, it won't work; if it's partially chaotic, with attractors that
OpenCog can figure out, then it likely will. Biology has learned to
somewhat out-perform physics and chemistry, and humanity has learned to
somewhat out-perform biology (we're kind of at the top of the food chain
these days), so clearly we don't live in a completely chaotic environment
(even though it does seem to get more chaotic by the day).
> Here is my guide, although you and anyone else on the mailing list will
> likely find it tedious as it’s pretty elementary stuff explained in far
> too many words.
> https://luketpeterson.github.io/atomspace-bootstrap-guide/

It's really cool that you wrote one. I haven't had time to read it yet;
hopefully I will in the not-too-distant future.

>> Where would you recommend I look next, to get a handle on the
>> inferencing systems of OpenCog?

I would say have a look at the URE and PLN examples:

https://github.com/opencog/ure/tree/master/examples/ure
https://github.com/opencog/pln/tree/master/examples/pln

If you want to know more about the proto-experiments on learning
inference control rules, you may read:

https://blog.singularitynet.io/introspective-reasoning-within-the-opencog-framework-1bc7e182827
https://blog.opencog.org/2017/10/14/inference-meta-learning-part-i/

Nil
