Replacing Relex2Logic with Relex2Lojban


Ben Goertzel

Jul 8, 2016, 7:44:35 AM
to opencog, Jim Rutt, Roman Treutlein, Zarathustra Goertzel, Rodas Solomon, Aaron Nitzkin
Here is a modest proposal, which would replace Relex2Logic with
something vaguely similar in spirit but much superior,

http://wiki.opencog.org/wikihome/index.php/Lojbanic_Relex2Logic

Actually it's a bit closer to the spirit of the bad old RelEx2Frame,
but with the significant difference that Lojban is a language with
complete coverage of everyday semantics, whereas FrameNet is sorely
limited and hasn't been honed by usage...

-- Ben


--
Ben Goertzel, PhD
http://goertzel.org

Super-benevolent super-intelligence is the thought the Global Brain is
currently struggling to form...

Jim Rutt

Jul 8, 2016, 10:12:30 AM
to Ben Goertzel, opencog, Roman Treutlein, Zarathustra Goertzel, Rodas Solomon, Aaron Nitzkin
I like this idea very much.  I'm currently considering Lojban as a "knowledge engineering" language for a "really smart AI for games" project I'm starting to spin up.  Prior to full-on AGI, I see some fruitful problems to be solved using "sort of AGI-ish" software that depends on human-created, domain-specific declarative knowledge.  My hypothesis is that there is a useful and talented - and not too expensive - class of human talent that can learn Lojban well but would not be suited to tools that are less human-language-like.  These might include very bright but highly anti-quantitative liberal arts grads.  Lojban strikes me as a potentially quite good adapter between the world of humans and the world of machines.

ko pilno lo clearer pensi la lojban  [roughly: "use Lojban for clearer thinking"]

jim

--
===========================
Jim Rutt
JPR Ventures

Linas Vepstas

Jul 8, 2016, 8:03:40 PM
to opencog, Ben Goertzel, Roman Treutlein, Zarathustra Goertzel, Rodas Solomon, Aaron Nitzkin
FWIW, I am virulently anti-Lojban, mostly because I believe it doesn't solve any problems that we actually have. --linas

--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to opencog+u...@googlegroups.com.
To post to this group, send email to ope...@googlegroups.com.
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/CAPzPGw7p7MKC2d0MucQf8s4AimtSu-rNtkmm8Pprf03iJhJVdA%40mail.gmail.com.

For more options, visit https://groups.google.com/d/optout.

Matt Chapman

Jul 9, 2016, 12:09:56 AM
to opencog, Ben Goertzel, Roman Treutlein, Zarathustra Goertzel, Rodas Solomon, Aaron Nitzkin
How does storing ConceptNode atoms with Lojbanic labels improve over storing atoms with English labels? For practical applications, it seems like it would unnecessarily increase the size of the AtomSpace, and for training data, I expect there are vastly more English-to-$X translation examples than Lojban-to-$X. Lojban is a fun toy, but like Linas, I don't see the problem that is being solved here. Sure, Lojban has fewer rules to encode, but you still end up manually encoding them, as far as I can tell. Maybe it feels less like cheating because writing Lojban feels like writing code to begin with...

All the Best,

Matt

--
Standard Disclaimer:
Please interpret brevity as me valuing your time, and not as any negative intention.

Ben Goertzel

Jul 9, 2016, 12:14:19 AM
to opencog, Roman Treutlein, Zarathustra Goertzel, Rodas Solomon, Aaron Nitzkin
Hi Jim,

I totally agree that, in principle, Lojban could have a huge number of
applications involving human-machine interaction ... and human-human
interaction.... If biology and philosophy papers were written in
Lojban, knowledge would advance faster...

However I don't currently see a viable route to getting Lojban used by
a significant group of humans.... Maybe once we have advanced AGIs,
if some geeky people want to communicate with the AGIs in something
more closely resembling their native language, they will learn a
future version of Lojban to do so? ;)

On the other hand, as an internal tool within significantly
logic-focused AI systems, I can increasingly see a quite practical
near-term usage.... Potentially this usage will -- in some way we
can't exactly foresee right now -- lead to human adoption in some
subcommunity as well... but if it just helps us get to AGI a bit
faster and better, that's good enough ;) ...

-- Ben




Ben Goertzel

Jul 9, 2016, 1:21:44 AM
to Matt Chapman, opencog, Roman Treutlein, Zarathustra Goertzel, Rodas Solomon, Aaron Nitzkin
OK, let me try to clarify more thoroughly...

Firstly, I take as the premise of my discussion here that we are
building an AGI system which has explicit, abstract logical inference
as a significant component (e.g. PLN). If you want to argue for an
AGI path that is purely subsymbolic, then I'm not going to dispute the
viability of such a path, but I don't think it's optimal and anyway my
suggestion of Lojban for OpenCog is premised on the fact that PLN is a
big component of OpenCog...

The question then is how to map natural-language relationships into
logical relationships.... Four approaches are obvious given current
technologies:

1) Hand-code mapping rules in some form

2) Learn mapping rules via supervised learning, from a training corpus

3) Learn mapping rules via unsupervised learning, from e.g. a big
corpus of texts or speech

4) Learn mapping rules via an embodied system's experience, i.e. via
reinforcement and imitation learning combined with unsupervised
learning

...

(4) is obviously appealing to me. For (4) to work one probably needs
to hand-code mappings from nonlinguistic perception (e.g. vision,
audition) into logical representation, but this is perhaps less
problematic than hand-coding mappings from language into logic,
because vision and audition have simpler structures, in a way.

Without hand-coded mappings from nonlinguistic perception into logic,
it's hard to see how (4) would work *unless* one was also willing to
have the logic itself emerge via reinforcement/ imitation /
unsupervised learning. That is, unless one was willing to give up
starting from a fixed logic like PLN and let the logic be learned....
I think this is possible but IMO it gets into "evolution of a brain
architecture" territory rather than "learning within a brain
architecture" territory...

What I am hoping to do is seed (4) with a combination of (1) and (3)

Specifically, regarding (3), Linas and I already wrote a paper
pointing in the direction of what we want to do....

https://arxiv.org/abs/1401.3372

However, at the moment I don't personally see how that approach is
going to let us learn something analogous to the RelEx2Logic rules. I
think it can let us learn something analogous to the link parser
grammar plus the RelEx rules. But I don't see how the unsupervised
learning paradigm we describe there is going to learn rules that
connect to PLN logic specifically.... I can sorta imagine how this
might happen, but it seems really hard...

So then we could use our unsupervised learning method for (3) and then
do (4) just for learning R2L rules. That might be viable....

However, Lojban seems to me like it could yield a robust way of doing
(1), which could potentially accelerate the overall process of making
an AGI that really understands language...

Our current R2L rule-base is kind of a mess and is also very
incomplete. So if we're going to do practical NLP dialogue
applications with OpenCog in the near future we need to either
extend/improve R2L or replace it. Taking approach (4) or "(4) on top
of (3)" is too researchy and difficult to be relevant to near-term
application development, though it's an important research direction...

The value of Lojban for an R2L-type layer is based on the facts that

A) Lojban directly maps into predicate logic, so into PLN-friendly Atomese

B) Lojban expresses everything that natural language expresses, in
ways that are reasonably elegant and already worked-out by other
people, and honed by decades of practice

On the other hand, the current system of R2L outputs is kind of
unsystematic and messy... and turning it into something elegant and
coherent would be a lot of work...

C) via generating Relex2Lojban or LinkGrammar2Lojban rules from a
parallel English/Lojban corpus, one avoids hand-coding any rules...
instead one can use this sample corpus to generate an R2L-like layer
for any syntax parser, including one learned via (3) or (3)+(4)... or
Google's newly released parser... or whatever...

D) unlike hand-coding R2L rules, the approach is more
language-independent (one only needs to create a parallel corpus in
Lojban and the new language, to extend the approach to a new language)

...

Regarding B, please do not minimize this point. FrameNet doesn't do
this, CycL doesn't do this, SUMO doesn't do this... the system of R2L
outputs doesn't currently do this... Lojban does this...

I hope this long email at least conveys my line of thinking a bit better...

...

The point is not relabeling ConceptNodes with Lojban word-names
instead of English word-names. The point is that Lojban contains

B1) a more complete and commonsensical list of argument structures for
verbs than FrameNet

B2) systematic, commonsensical ways of dealing with everyday uses of
time, space, conjunction, possession, comparisons, etc. etc. in formal
logic

It's not the Lojban word-names that matter, it's the precisely-stated
logical relationships between the Lojban words...
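To make B1 concrete: each Lojban root word (gismu) comes with a fixed, numbered place structure, which is exactly what makes the mapping to predicate logic mechanical. Here is a minimal, hypothetical sketch (plain Python, not the OpenCog API or real Relex2Lojban code) of how a parsed bridi's ordered arguments would slot into named logical roles:

```python
# Place structures for two gismu, per their standard dictionary definitions:
#   klama: x1 goes to x2 from x3 via route x4 by means x5
#   dunda: x1 gives x2 to x3
# The role names below are illustrative glosses, not official terminology.
PLACE_STRUCTURES = {
    "klama": ["goer", "destination", "origin", "route", "means"],
    "dunda": ["giver", "gift", "recipient"],
}

def bridi_to_predicate(selbri, sumti):
    """Map a bridi (relation word + ordered arguments) to a predicate tuple.

    Unfilled trailing places are simply omitted, as in Lojban itself.
    """
    roles = PLACE_STRUCTURES[selbri]
    args = {role: term for role, term in zip(roles, sumti)}
    return (selbri, args)

# "mi klama le zarci" -- "I go to the store"
pred = bridi_to_predicate("klama", ["mi", "le zarci"])
print(pred)  # ('klama', {'goer': 'mi', 'destination': 'le zarci'})
```

The point of the sketch: because the argument list is fixed per word, no FrameNet-style frame induction or hand-tuned role labeling is needed; the translation to an EvaluationLink-style predicate is a lookup plus a zip.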

-- Ben

Ben Goertzel

Jul 9, 2016, 1:25:08 AM
to Linas Vepstas, opencog, Roman Treutlein, Zarathustra Goertzel, Rodas Solomon, Aaron Nitzkin
I think it could help a lot in solving one problem you currently have:
Mapping multiple roughly-synonymous ways of saying the same thing,
into the same logical relationship-set (you've referred to this as
phrase-level "synonymy" before). The way it helps is by constituting
a viable "logical normal form" for everyday commonsense statements...

I think it can also help, down the road, in terms of enabling
automatic (no hand coded rules) building of mapping rules between
syntactico-semantic rules learned by unsupervised learning, and
logical relationships that are tractable for PLN to reason on...

-- Ben

Ben Goertzel

Jul 9, 2016, 1:53:39 AM
to Linas Vepstas, opencog, Roman Treutlein, Zarathustra Goertzel, Rodas Solomon, Aaron Nitzkin
For those who are interested, some further potential particulars are here...

http://wiki.opencog.org/wikihome/index.php/Lojbanic_Relex2Logic#Some_Particulars