
Apr 1, 2021, 10:27:00 PM

to link-grammar, opencog

Not everyone is aware that Link Grammar's underlying theory is far broader than simply describing natural language. It would seem that the theory generalizes to one describing graphs in general (and knowledge graphs in particular).

I just updated the wiki page https://wiki.opencog.org/w/Link_Grammar to give a brief sketch of how this theory generalizes, and how it influences the AtomSpace design. The generalization is reviewed in https://wiki.opencog.org/w/Connectors_and_Sections

If you've heard these ideas before, then this page "doesn't say anything new"; it just reminds you of the names of the Atoms that implement the various concepts, such as connectors and disjuncts (connector sequences). It also points to the half-dozen PDFs that justify and articulate this generalization.

If you haven't heard of these ideas before... well, enjoy! Just be aware that the ideas are deceptively simple. You might read this stuff and nod to yourself "yes, of course, obvious" until you get to something like the https://wiki.opencog.org/w/CrossSection and wonder "what the heck?" The apparent simplicity of these concepts makes them sometimes hard to understand.

-- Linas

Apr 3, 2021, 3:18:21 PM

to opencog, link-grammar

On Fri, Apr 2, 2021 at 1:38 PM Ivan V. <ivan....@gmail.com> wrote:

Hi Linas, thanks for the response :) I'll try to explain myself and provide a link to a project supporting my reasoning. If I'm not mistaken, you are talking about *link grammar* rules capable of performing an inference process.

Link grammar does not have "rules" or "production rules". It is not a production grammar in the 1960's sense of a Chomsky context-free (or context-sensitive) grammar. In terms of linguistics, it is not a head-driven phrase-structure grammar (HPSG). It is a dependency grammar, with bi-directional dependencies. Thus, concepts like "inference" or "deduction" do not apply to link grammar or its generalizations. There is a left-over fragment of "chaining", but that fragment is indistinguishable from parsing. The form of parsing that it does is "assembly". In terms of code, that means that the URE is incompatible with link-grammar: there are no rules.
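To make "parsing as assembly" concrete, here is a minimal toy sketch, not the actual link-grammar algorithm: each word carries a disjunct (a sequence of connectors), and a link forms when a rightward "+" connector matches a leftward "-" connector of the same label. The lexicon, labels, and the greedy stack matcher below are all invented for this example; real link-grammar search is far more general.

```python
# Toy lexicon: one disjunct (connector sequence) per word.
# "D+" means "link labeled D to something on my right";
# "D-" means "link labeled D to something on my left".
LEXICON = {
    "the":  [["D+"]],            # determiner links rightward to a noun
    "cat":  [["D-", "S+"]],      # noun: determiner on the left, subject link to the right
    "runs": [["S-"]],            # verb: subject link on the left
}

def links(words):
    """Greedy left-to-right matcher for this toy lexicon: each '-'
    connector must match the most recently opened '+' connector.
    No rules fire and nothing is 'derived' -- parts just snap together."""
    pending = []   # stack of (word_index, label) awaiting a '-' match
    result = []
    for i, w in enumerate(words):
        for conn in LEXICON[w][0]:       # first (only) disjunct per word here
            label, direction = conn[:-1], conn[-1]
            if direction == "-":
                j, plabel = pending.pop()
                assert plabel == label, "no valid linkage"
                result.append((j, i, label))
            else:
                pending.append((i, label))
    assert not pending, "unsatisfied connectors"
    return result

print(links(["the", "cat", "runs"]))   # [(0, 1, 'D'), (1, 2, 'S')]
```

Note that nothing in the matcher has an "input" or an "output" side: a linkage either assembles or it does not, which is the point being made above.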

One reason I'm excited by link-grammar is that it gets rid of ideas like deduction and inference and chaining, and replaces them by assembly. This is exciting because... well, look at the vast amounts of effort put into PLN and the URE. They're problematic, and always have been, because of the difficulty of composing rules, and the difficulty of notating "input" and "output" when using lambda-calculus-style notation, or Hilbert-style natural deduction, or sequent calculus, or Bayesian inference.

Worse, the "rules" are hand-curated, human-crafted, carefully-engineered artifacts. I have not seen any coherent, complete proposal by which these rules could be learned by a learning system. I think those proposals are absent because it is genuinely complicated to think of ways to learn rules and production systems.

Link-grammar nukes these concepts, replaces them with the idea of assembling parts, and nukes the lambda-calculus notation in favor of a more generic, more general connector notation. It's a more powerful, more fundamental technique. Things like lambda calc and sequent calc and production rules (and rules in general) are narrow special cases. Things like beta reduction and term re-writing and deduction and inference and reasoning are special cases.

I'll mention the nature of the solution below: link-grammar gets rid of arrows, and replaces them by bi-directional links. One no longer writes A -> B to mean "A implies B" or "A produces B" or "a function that takes A as input and returns output values of type B". Or rather, one is no longer forced to write A->B. One still can, and in certain problem domains, it's handy. It's just that now, one is liberated from a forced sequential flow. There's no forward-chaining, backward-chaining; the constraint satisfaction problem is symmetric with respect to that arrow.
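The point about the arrow can be sketched in a few lines. This is an invented illustration, not anyone's implementation: the same fact relating `A` and `B`, stored once as a directed rule and once as an undirected link, answers queries differently.

```python
# A directed rule "A -> B" is usable only in the arrow's direction.
directed = {("A", "B")}

def forward(x):
    """Forward chaining: what does x produce? Cannot be run in reverse."""
    return {b for (a, b) in directed if a == x}

# The same fact stored as a symmetric, bi-directional link.
undirected = {frozenset({"A", "B"})}

def neighbors(x):
    """One query works from either end: no forward/backward split."""
    return {y for link in undirected if x in link for y in link if y != x}

print(forward("B"))      # set(): the directed rule is useless right-to-left
print(neighbors("B"))    # {'A'}: the symmetric link answers from either end
```

The constraint-satisfaction view treats both ends of the link alike, which is what "no forced sequential flow" means above.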

Getting rid of that arrow is what nukes deduction and induction (and replaces them by the more general act of assembling).

The similar thing I'm trying to articulate is an extension to the more popular grammar production rules which renders them deductively complete. The result stands as a URE equivalent, capable of reasoning about flexible syntax expressions (maybe a non-strict syntax form of the AtomSpace). Basically, it is a fusion between term rewriting rules and classical logic. Usual parsers take productions of the form `A -> B` (I write this backwards to align it with logic, as explained in the section on the correspondence between production rules and logical implication).

The direction of this arrow is the tip of the iceberg that led me to understand link-grammar (and to propose it as a replacement for productions and rules). What link-grammar says is "you don't need to assign a directionality to this arrow", and, more strongly: "assigning a direction to this arrow leads to pathological cases in reasoning, theorem proving, natural deduction, etc."

Within the extension I propose, having a kind of production:

    A1 /\ A2 -> B

reads as follows: when checking for `B`, we derive *both* `A1` *and* `A2` to be parsed in the same input string fragment at the appropriate place. Similarly,

    A -> B1 \/ B2

means that if *either* `B1` *or* `B2` is priorly derived, then derive `A` to parse the input string fragment at the appropriate place.
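Since a worked example was asked for, here is one possible minimal reading of the two rule shapes above, as backward chaining over symbols rather than string fragments. The rule encoding and the function below are my own sketch of the quoted semantics, not Ivan's implementation.

```python
# Each head maps to alternative bodies: alternatives are the disjunction,
# the goals inside one body are the conjunction.
RULES = {
    "B": [["A1", "A2"]],      # A1 /\ A2 -> B : checking B requires BOTH A1 and A2
    "A": [["B1"], ["B2"]],    # A -> B1 \/ B2 : EITHER B1 or B2 suffices for A
}

def derivable(goal, facts):
    """Backward-chain: a goal holds if it is a given fact, or if some
    alternative body has ALL of its subgoals derivable."""
    if goal in facts:
        return True
    return any(all(derivable(g, facts) for g in body)
               for body in RULES.get(goal, []))

print(derivable("B", {"A1", "A2"}))   # True : both conjuncts present
print(derivable("B", {"A1"}))         # False: A2 is missing
print(derivable("A", {"B2"}))         # True : one disjunct suffices
```

The real extension additionally requires the conjuncts to parse "in the same input string fragment at the appropriate place", which this symbol-level sketch deliberately leaves out.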

A worked example here would aid in understanding. I almost get what you are saying, but not quite.

This extension to the parsing is a matter of expecting ambiguous expressions, and it comes in handy when parsing logical expressions during proof construction. I imagined the whole inference mechanism around these extensions. I named it "expression logic", and it represents all of the following: expression recognizer and generator, SMT solver, deductive system, term rewriting framework, and metatheory language formalization. In fewer words, it is a problem solver. The implementation is still a work in progress, but I believe that the background theory won't change too much.

OK.

I'm trying to keep the size of the whole implementation below 300 javascript LOC.

!

Putting the background theory on trial once more, I wonder if the planned logic operators can be described using only basic term rewriting rules (only rules of the form `A -> B`) while retaining deductive completeness. The supported logic operators would still have the same significance to the theory behind it, only they would be implemented as a library, a level above the bare-metal term rewriting algorithm. I hope I managed to be more understandable this time.
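One corner of this question can be made concrete. In plain rewriting with only `A -> B` rules, disjunction on the left already falls out for free as rule multiplicity: `A1 \/ A2 -> B` is just the two rules `A1 -> B` and `A2 -> B`. The sketch below (invented names, symbol-to-symbol rules only) shows that; conjunction is the part that genuinely needs extra encoding, which is the open question.

```python
# "Bare metal": only rules of the form A -> B, over single symbols.
REWRITES = [("A1", "B"), ("A2", "B")]   # together these encode A1 \/ A2 -> B

def rewrite_closure(symbol):
    """All symbols reachable from `symbol` by applying A -> B rules."""
    seen, frontier = {symbol}, [symbol]
    while frontier:
        s = frontier.pop()
        for lhs, rhs in REWRITES:
            if lhs == s and rhs not in seen:
                seen.add(rhs)
                frontier.append(rhs)
    return seen

print("B" in rewrite_closure("A1"))   # True: either disjunct reaches B
print("B" in rewrite_closure("A2"))   # True
```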

Examples, please!

Thanks again for the response,
- ivan -

On Fri, Apr 2, 2021 at 19:09, Linas Vepstas <linasv...@gmail.com> wrote:

Hi Ivan,

On Fri, Apr 2, 2021 at 3:00 AM Ivan V. <ivan....@gmail.com> wrote:

I'm approaching a similar problem from a similar side. I start with a parsing algorithm and extend it to support left-side conjunctions and right-side disjunctions.

I don't understand what you are saying here. What does it mean to extend a parser in this way? What do you mean by conjunctions and disjunctions?

The conventional definition of a parser is a device that acts on a string of symbols and, when it's done, indicates how that string can be decomposed into parts. The conventional definition of conjuncts and disjuncts is strings of symbols combined with and/or operators. How are you modifying these conventional definitions to obtain what you are doing?

-- Linas

This extension makes the rules equivalent to the normalized sequents known from sequent calculus. The result is very interesting: any logical formula can be converted to these normalized sequents, suitable for (enhanced) parsing. All rules then become:

    A1 /\ A2 /\ A3 /\ ... -> B1 \/ B2 \/ B3 \/ ...

The parsing process then becomes equivalent to the logical abduction process. We start from the `Goal` atom and chain rules backwards to verify whether the input string matches the rules. I like to think that this system resembles a kind of URE on steroids.
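The normalized-sequent reading described above can be sketched as backward chaining from a `Goal` symbol. The sequents, token names, and the function below are invented for illustration; a real version would also track input positions.

```python
# Each rule is a normalized sequent: (conjunctive premises, disjunctive conclusions).
SEQUENTS = [
    ({"Subj", "Verb"}, {"Goal"}),    # Subj /\ Verb -> Goal
    ({"cat"},  {"Subj", "Noun"}),    # cat -> Subj \/ Noun
    ({"runs"}, {"Verb"}),            # runs -> Verb
]

def proves(goal, tokens):
    """Chain rules backwards from `goal`: it holds if it is an input
    token, or if some sequent concludes it and ALL of that sequent's
    premises hold against the input."""
    if goal in tokens:
        return True
    return any(goal in concl and all(proves(p, tokens) for p in prem)
               for prem, concl in SEQUENTS)

print(proves("Goal", {"cat", "runs"}))   # True : input matches the rules
print(proves("Goal", {"cat"}))           # False: no Verb derivable
```

Starting from `Goal` and working backwards toward the tokens is what makes this an abduction-flavored process rather than a forward derivation.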

--------

You received this message because you are subscribed to the Google Groups "opencog" group.

To unsubscribe from this group and stop receiving emails from it, send an email to opencog+u...@googlegroups.com.

To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/CAHrUA378PEv7%3Dm4A2yte-1icmL32JYrd-EndgE%2BuHGZvzAgc%3Dg%40mail.gmail.com.


--
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.


