questions around the eva-chatbot, and current state of the opencog-nlp pipeline


Apil Tamang

Jan 14, 2017, 4:12:35 PM1/14/17
to opencog
Hi,

Just wanted to ask about the current state of the opencog<->nlp pipeline... and what is the most current demonstrable (sub)system around it?

I'm preparing to get some hands-on interaction with the docker/indigo/eva-opencog chatbot (doing the docker builds at the moment). Is the code that powers this chatbot located in this folder: opencog/opencog/nlp/chatbot-eva? I think Linas pointed out some months back that this code-base is also what he discusses in a document he prepared called 'self-model.pdf'. Please let me know if I went wrong in any of these assumptions.

A few additional questions:

- Is opencog/opencog/nlp/chatbot-eva also really the state of the art for a chatbot powered by OpenCog?

- What kind of work is currently in progress for the eva-opencog chatbot? I know Linas mentioned some months back he was heavily refactoring it.

- Is "Relex2Logic" still an actively used step in the opencog<->nlp pipeline? This is a little over my head... but I could've sworn I read somewhere that Relex2Logic is now deprecated. However, the NLP presentation given at the May 2016 Cog'athon still mentioned using 'Relex2Logic' for the entire process. The eva-chatbot seems to include bits and pieces from a relex2logic module as well.

- How can I help? I've spent the last couple of months observing opencog from as many angles as I can, but I guess there's no substitute for getting hands-on. I think something like an online weekly meeting would be a great way to share information, problems, tasks, etc., if someone else were interested. Would that be feasible?


Ben Goertzel

Jan 14, 2017, 10:49:22 PM1/14/17
to opencog
Hi,

> - Is "Relex2Logic" is still an actively used step in the opencog<->nlp
> pipeline? This is a little over my head... but I could've sworn I read
> somewhere that the usage of Relex2Logic is now deprecated. However, the NLP
> presentation given on May 2016 Cog'athon still mentioned about using
> 'Relex2Logic' for the entire process. The eva-chatbot seems to include bits
> and pieces from a relex2logic module as well.

We are using RelEx2Logic. There is a plan to replace it, as follows:

http://wiki.opencog.org/w/Link_Parse_to_Logic_via_Lojban

and this is actively being worked on but it's early-stage and I don't
expect this alternative to be usable till early fall...

> - How can I help ? I've spent the last couple of months observing opencog
> from as many angles as I can, but I guess there's no substitute to getting
> hands-on.

There are many many ways you could help. Which aspects fascinate
you most? And what are your strengths as a programmer? Are you able
to dive in and muck with someone else's hairy C++ code, or would you
rather deal mainly w/ the Scheme shell?

There are aspects that need help related to NLP and to PLN reasoning,
and to pattern mining, for example... and then there's an initiative
to integrate deep learning vision w/ OpenCog ... all these could be
reasonable areas to plunge in...


>I think something like an online weekly meeting would be a great
> way to share information, problems, tasks et.c.... if someone else was
> interested. Would that be feasible?

Could well be worth a try, but if I'm going to organize it, it will
need to wait till Feb, I'm traveling the next couple weeks and don't
want to deal with additional scheduled events...

-- Ben




--
Ben Goertzel, PhD
http://goertzel.org

“I tell my students, when you go to these meetings, see what direction
everyone is headed, so you can go in the opposite direction. Don’t
polish the brass on the bandwagon.” – V. S. Ramachandran

Apil Tamang

Jan 16, 2017, 1:32:22 PM1/16/17
to opencog

Hi Ben,
Thanks for responding... 

I was hoping to get involved with the eva-chatbot, in particular, because I'd taken the time to really study it a while back. It also seemed like an important piece that would give me a very holistic hands-on experience using the opencog framework.

But if the above isn't that important in the grand scheme of things, then no matter. I can work on something else. I am generally okay mucking around in backend code (having worked in Java for some years now). I'm not sure how to describe my strengths as a programmer: I'm quite new to both C++ and Scheme, but I'm not really intimidated by that. I think (as you may agree) the real challenges lie much deeper...

Some months back, I did some solid reading of the PLN book, and that was very interesting! My chief interest at this moment would be anything at the intersection of NLP and PLN. That's where I believe I'd be most excited to work in the near future. I don't know if I could take the lead on any front, though, and ideally my role would be one of support (tests, bug-fixes, etc.) on some relevant branch for a little while. I work full time as a developer currently, and some nights my brain's a little washed out :)

Anyways, let me know...

Alex

Jan 16, 2017, 4:24:29 PM1/16/17
to opencog
Regarding NLP - I have always been suspicious of statistical methods in NLP; they are something like the subsymbolic methods of neural networks. Such subsymbolic methods require another science to make explicit inference (about results, about argumentation) possible - there is connection science for neural networks, but I don't know of a similar tool for statistics.

My ideal is NLP along the lines of the article "On Deep Computational Formalization of Natural Language" (available via Google search), and I wonder why this path has not been pursued so far. The lack of a suitably developed logic (deontic event calculus in this case) is one explanation.

There is clearly a need for a universal logic (as considered by the Springer journal Logica Universalis), and I guess that categorical logic may become such a logic - it already formalizes predicate and modal logics, and a similar formalization of probabilistic and adaptive (nonmonotonic) logic (Strasser) may be discovered in the future (I hope, though I have no idea in which direction this could be done; coalgebraic logic unifies modal and probabilistic logics, but I had a hard time understanding it). Then it will be possible to formalize natural language in such a way, in all its modes - starting from scientific reasoning and ending with emotional utterances.

Ben Goertzel

Jan 17, 2017, 12:02:43 AM1/17/17
to opencog
Alex,

As Linas has often pointed out, the link grammar OpenCog uses is
basically equivalent to pregroup grammar, which has been modeled
explicitly using category theory (asymmetric monoidal categories etc.)

Lambda calculus is modeled by cartesian closed categories; and the
algebra of sub-hypergraphs of a hypergraph can be modeled as a Heyting
algebra in various ways (and one can put an intuitionistic probability
distribution on this Heyting algebra)...
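To make the pregroup connection concrete (this is just the standard textbook example, nothing OpenCog-specific): in a pregroup grammar a sentence is grammatical iff the product of its word types reduces to the sentence type $s$. For "John sleeps", with John : $n$ and sleeps : $n^{r} s$,

\[
n \cdot (n^{r} s) \;\to\; (n \cdot n^{r})\, s \;\to\; 1 \cdot s \;=\; s ,
\]

using the pregroup contraction $n \cdot n^{r} \to 1$. Link-grammar connectors play essentially the same role as these adjoint type pairs, which is what makes the two formalisms basically equivalent.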

I have been fiddling around with these math foundations a bunch in the
last few weeks and will post some things on Arxiv in the next couple
weeks...

In practical terms, there are two initiatives related to this going on right now:

1) Linas is working on using unsupervised (statistical) learning on a
text corpus to infer a link grammar dictionary ...

2) Ruiting is working on this

http://wiki.opencog.org/w/Link_Parse_to_Logic_via_Lojban

(well she'll be taking a break for the next 2 weeks but then resuming Feb 5 ...)
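For concreteness, the corpus-statistics core of initiative (1) can be sketched in a few lines of Python. This is my illustration only, not Linas's actual code - the function name and toy corpus are made up, and the real learner goes on to do MST parsing and to cluster the resulting disjuncts into a dictionary:

```python
# Toy sketch: count word-pair co-occurrences within a window and
# score each ordered pair by pointwise mutual information (PMI).
# This is only the first, purely statistical step of inferring a
# link-grammar dictionary from a corpus.
import math
from collections import Counter

def pair_pmi(sentences, window=3):
    word_counts = Counter()
    pair_counts = Counter()
    total_pairs = 0
    for sent in sentences:
        words = sent.lower().split()
        word_counts.update(words)
        for i, w in enumerate(words):
            # count ordered pairs (w, later word) within the window
            for j in range(i + 1, min(i + window, len(words))):
                pair_counts[(w, words[j])] += 1
                total_pairs += 1
    total_words = sum(word_counts.values())
    # PMI = log2( p(a,b) / (p(a) * p(b)) )
    return {
        (a, b): math.log2((n / total_pairs) /
                          ((word_counts[a] / total_words) *
                           (word_counts[b] / total_words)))
        for (a, b), n in pair_counts.items()
    }

corpus = ["the cat chased the mouse", "the dog chased the cat"]
scores = pair_pmi(corpus)
# word pairs that co-occur more often than chance get positive PMI
```

High-PMI pairs are then treated as candidate links; everything downstream (building spanning-tree parses, generalizing word classes) is where the hard work lives.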

--Ben

Ben Goertzel

Jan 17, 2017, 12:04:18 AM1/17/17
to opencog
Thanks Apil, that's helpful background... I will follow up via private email...



Alex

Jan 29, 2017, 11:15:51 AM1/29/17
to opencog
Regarding the ideas about translating (expressing) PLN into the terms of categorical logics (ideas that are floating around and whose results are pretty much expected and hoped for), everyone is also invited to look at the MMT project - foundation-independent logics - by Florian Rabe. MMT is summarized in the "Future of Logics" article http://link.springer.com/article/10.1007/s11787-015-0132-x and the web page of the project itself is https://uniformal.github.io/

While MMT research is very promising, I have some doubts about its applicability. Categorical logic represents logics as categories: terms/expressions are the objects and the inference relationship forms the set of morphisms. MMT goes one step higher - it defines morphisms among theories (individual logics); therefore one can expect a higher-order category whose objects are logics and whose morphisms are translation/inheritance relationships among logics (or, alternatively, it defines similar functors among logics-as-categories). MMT in the end is something like (dependent) type theory, and although MMT distances itself from meta-logics, it still assumes some basic inference rules that are provided extra-linguistically. And although MMT generalizes many algorithms, apparently it still requires those algorithms to be provided in specific cases (they cannot be deduced automatically from the syntax and inference rules). So MMT can be good for the uniform representation of logics and for a smooth transition from one inference type (e.g. rational, fair agents) to another (irrational, narcissistic, greedy agents), but the discovery of logic algorithms is still necessary.

I feel that any cognitive system should be capable of using different inference styles, and therefore any discovery of a Universal Logic could be suitable for OpenCog. The basic question in this case is: is there a presentation style of logic in which every logic can be expressed?

There is this question about sequent calculus - http://math.stackexchange.com/questions/2063828/does-every-logic-have-sequent-calculus-if-not-what-are-alternatives-to-them - and the answer was no: there are still logics that cannot be presented by a sequent calculus.

My intuition was that ANY type of sequent calculus can be expressed by forward chaining rules. But that is false - other methods are required. Maybe a combination of backward and forward chaining systems fully covers any logic (including nonmonotonic, adaptive logics)?
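For clarity, here is a minimal sketch of what "forward chaining" means here: a naive closure over Horn-style rules with ground atoms (the rule set and atom names are invented for illustration). Nonmonotonic and adaptive logics are exactly what does not fit this scheme, since nothing here can ever be retracted.

```python
# Naive forward chainer: repeatedly fire any rule whose premises
# are all known until no new conclusions appear (a fixed point).
# Rules are (set_of_premises, conclusion) pairs over ground atoms.
def forward_chain(facts, rules):
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and premises <= known:
                known.add(conclusion)  # monotonic: facts only grow
                changed = True
    return known

rules = [
    ({"rain"}, "wet_ground"),
    ({"wet_ground", "cold"}, "icy_ground"),
]
closure = forward_chain({"rain", "cold"}, rules)
# closure now also contains "wet_ground" and "icy_ground"
```

A backward chainer works goal-directed instead, recursing from a query back to the facts; covering retraction (defeasible conclusions) needs machinery beyond either.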

There is an apparent need for many specific logics:
- e.g. my research is about deontic logics and the formalization of norms - this cannot be done in the usual first-order predicate logic or in standard modal logics.
- e.g. early research by Steunebrink (IDSIA) http://people.idsia.ch/~steunebrink/ on the formalization of emotions and BDI agents
- e.g. the already-mentioned formalization of language by deontic event calculus

And it would be nice to arrive at one reasoning engine that can handle all of them and that enables reasoning in any of them according to the situation. E.g. agents (we) usually have different reasoning styles in math exams, at work, and in free time.

There is a valid suggestion - can we outsource the reasoning part of OpenCog to some external system, like MMT (I don't know much about it yet), or Coq or Isabelle/HOL? As far as I understand, those systems are capable of compiling formally presented logics (e.g. given via the rules of a sequent calculus) into reasoning engines/provers for those logics (but again, those systems are based on a single, bounded meta-logic; to break those bounds is the main aim of MMT). Or should OpenCog try to outsmart those efforts?

And last but not least, there is this blog entry: http://blog.ruleml.org/post/32629706-the-sad-state-concerning-the-relationships-between-logic-rules-and-logic-programming It is true - many business rule engines lack a formal connection to logic; those rule engines seem to be good heuristics for inferring something from given premises. Maybe the OpenCog rule engine has a formalization that can enable its validation and generalization.

Alex
