Hi Linas, thanks for the response :)
I'll try to explain myself and provide a link to a project supporting my reasoning. If I'm not mistaken, you are talking about *link grammar* rules capable of performing an inference process.
Link grammar does not have "rules" or "production rules". It is not a production grammar in the 1960's sense of a Chomsky context-free (or context-sensitive) grammar. In terms of linguistics, it is not a head-driven phrase-structure grammar (HPSG). It is a dependency grammar, with bi-directional dependencies. Thus, concepts like "inference" or "deduction" do not apply to link grammar or its generalizations. There is a left-over fragment of "chaining", but that fragment is indistinguishable from parsing. The form of parsing that it does is "assembly". In terms of code, that means that the URE is incompatible with link-grammar: there are no rules.
One reason I'm excited by link-grammar is that it gets rid of ideas like deduction and inference and chaining, and replaces them by assembly. This is exciting because... well, look at the vast amounts of effort put into PLN and URE. They're problematic, they've always been problematic because of the difficulty of composing rules, and the difficulty of the notation of "input" and "output" when using lambda-calculus style notation, or Hilbert-style natural deduction, or sequent calculus, or Bayesian inference.
Worse, the "rules" are hand-curated, human-crafted, carefully-engineered artifacts. I have not seen any coherent, complete proposal by which these rules could be learned by a learning system. I think those proposals are absent because it is genuinely hard to see how rules and production systems could be learned.
Link-grammar nukes these concepts, replaces them with the idea of assembling parts, and nukes the lambda-calculus notation in favor of a more generic, more general connector notation. It's a more powerful, more fundamental technique. Things like lambda calc and sequent calc and production rules (and rules in general) are narrow special cases. Things like beta reduction and term re-writing and deduction and inference and reasoning are special cases.
I'll mention the nature of the solution below: link-grammar gets rid of arrows, and replaces them by bi-directional links. One no longer writes A -> B to mean "A implies B" or "A produces B" or "a function that takes A as input and returns output values of type B". Or rather, one is no longer forced to write A->B. One still can, and in certain problem domains, it's handy. It's just that now, one is liberated from a forced sequential flow. There's no forward-chaining or backward-chaining; the constraint satisfaction problem is symmetric with respect to that arrow.
Getting rid of that arrow is what nukes deduction and induction (and replaces them by the more general act of assembling).
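To make the contrast concrete, here is a toy sketch of connector-style assembly (the mini-lexicon, the greedy linker, and all names are my invention for illustration; this is not the actual link-grammar code or API). A connector is satisfied by a partner with the same label and opposite polarity; neither end of a link is an "input" or an "output".

```python
# Toy sketch of link-grammar-style connector matching; NOT the real
# link-grammar implementation.  Each word carries connectors: "X+"
# seeks a matching "X-" on some word to its right.  A link is a
# symmetric constraint between two words -- no direction, no arrow.

def can_link(left_conn, right_conn):
    """Two connectors join if labels match and polarities oppose."""
    return (left_conn.endswith('+') and right_conn.endswith('-')
            and left_conn[:-1] == right_conn[:-1])

# Hypothetical mini-lexicon: "dog" wants a determiner to its left (D-)
# and offers a subject link to its right (S+); "barks" wants S-.
lexicon = {
    'the':   ['D+'],
    'dog':   ['D-', 'S+'],
    'barks': ['S-'],
}

def parse(words):
    """Collect compatible connector pairs between earlier/later words.

    A real parser would also enforce planarity and single use of each
    connector; this toy version only gathers matching pairs.
    """
    pending = [(i, c) for i, w in enumerate(words) for c in lexicon[w]]
    links = []
    for n, (wi, ci) in enumerate(pending):
        for wj, cj in pending[n + 1:]:
            if wi < wj and can_link(ci, cj):
                links.append((words[wi], ci[:-1], words[wj]))
    return links

print(parse(['the', 'dog', 'barks']))
# -> [('the', 'D', 'dog'), ('dog', 'S', 'barks')]
```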
The similar thing I'm trying to articulate is an extension to the more widespread grammar production rules, one that renders them deductively complete. The result stands as a URE equivalent capable of reasoning about flexible syntax expressions (possibly a non-strict syntax form of Atomspace). Basically, it is a fusion between term rewriting rules and classical logic.
The direction of this arrow is the tip of the iceberg that led me to understand link-grammar (and to propose it as a replacement for productions and rules). What link-grammar says is "you don't need to assign a directionality to this arrow", and more strongly: "assigning a direction to this arrow leads to pathological cases in reasoning, theorem proving, natural deduction, etc."
Within the extension I propose, a production of the kind:
A1 /\ A2 -> B
reads as follows: when checking for `B`, we require *both* `A1` *and* `A2` to be parsed in the same input string fragment, at the appropriate places.
A -> B1 \/ B2
means that if *either* `B1` *or* `B2` has already been derived, then `A` is derived to parse the input string fragment at the appropriate place.
A worked example here would aid in understanding. I almost get what you are saying, but not quite.
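One way such rules might play out concretely, as a toy sketch (the rule table, the fragment-splitting checker, and all names here are invented for illustration, not taken from the discussion): checking `B` over a fragment means splitting the fragment so that each conjunct parses its piece, while a disjunction means any one alternative suffices.

```python
# Toy recursive checker for rules of the form  A1 /\ A2 -> B  and
# disjunctive alternatives.  The encoding is hypothetical: each head
# maps to a list of alternatives (the disjunction), and each
# alternative is a sequence of symbols that must cover the fragment
# left to right (the conjunction).

rules = {
    'B':  [['A1', 'A2']],     # A1 /\ A2 -> B
    'A1': [['a'], ['x']],     # A1 parses as 'a' or as 'x'
    'A2': [['b']],
}

def derives(symbol, s):
    """Can `symbol` be parsed over exactly the fragment s?"""
    if symbol not in rules:               # terminal symbol
        return symbol == s
    return any(matches(alt, s) for alt in rules[symbol])  # disjunction

def matches(seq, s):
    """Conjunction: split s so each piece derives the next symbol."""
    if not seq:
        return s == ''
    head, rest = seq[0], seq[1:]
    return any(derives(head, s[:k]) and matches(rest, s[k:])
               for k in range(len(s) + 1))

print(derives('B', 'ab'))   # True:  'a' parses as A1, 'b' as A2
print(derives('B', 'xb'))   # True:  via the alternative A1 -> x
print(derives('B', 'ba'))   # False: no split satisfies both conjuncts
```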
This extension to parsing amounts to expecting ambiguous expressions, and it comes in handy when parsing logical expressions during proof construction. I imagined the whole inference mechanism around these extensions and named it "expression logic". It represents all of the following: an expression recognizer and generator, an SMT solver, a deductive system, a term rewriting framework, and a metatheory formalization language. In fewer words, it is a problem solver. The implementation is still a work in progress, but I believe the background theory won't change much.
Putting the background theory on trial once more, I wonder whether the planned logic operators can be described using only basic term rewriting rules (only rules of the form `A -> B`) while retaining deductive completeness. The supported logic operators would still have the same significance to the underlying theory, only they would be implemented as a library, a level above the bare-metal term rewriting algorithm.
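A minimal sketch of what that bare-metal layer could look like, assuming plain string rewriting with rules of the form `A -> B` only. The encoding of `/\` and `\/` into several one-step rules is my own illustration (a conjunction flattened into one left-hand side, a disjunction split into two rules), not the author's actual compilation scheme.

```python
# "Bare metal" layer: an exhaustive rewriter for plain  A -> B  rules
# over strings.  Operators like /\ and \/ would live a level above,
# compiled down to such rules (this encoding is illustrative only).

rules = [
    ('A1 A2', 'B'),   # the conjunction  A1 /\ A2 -> B, flattened
    ('a', 'A1'),
    ('x', 'A1'),      # two rules stand in for the disjunction a \/ x
    ('b', 'A2'),
]

def rewrite(s, rules):
    """Apply rules anywhere in s until no rule fires (normalization)."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in s:
                s = s.replace(lhs, rhs, 1)  # one step at a time
                changed = True
                break
    return s

print(rewrite('a b', rules))   # 'a b' -> 'A1 b' -> 'A1 A2' -> 'B'
```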
I hope I managed to be more understandable this time.
Thanks again for the response,
- ivan -
I'm approaching a similar problem from a similar side. I start with a parsing algorithm and extend it to support left-side conjunctions and right-side disjunctions.
I don't understand what you are saying here. What does it mean to extend a parser in this way? What do you mean by conjunctions and disjunctions?
The conventional definition of a parser is a device that acts on a string of symbols, that, when it's done, indicates how that string can be decomposed into parts.
The conventional definition of conjuncts and disjuncts are strings of symbols combined with and/or operators.
How are you modifying these conventional definitions to obtain what you are doing?
This extension makes the rules equivalent to normalized sequents known from sequent calculus. The result is very interesting: any logical formula can be converted to these normalized sequents suitable for (enhanced) parsing. All rules then become:
A1 /\ A2 /\ A3 /\ ... -> B1 \/ B2 \/ B3 \/ ...
The parsing process then becomes equivalent to a logical abduction process. We start from the `Goal` atom and chain the rules backwards to verify whether the input string matches them.
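Such backward chaining from `Goal` might be sketched as follows (the rule set, the token-set treatment of the input, and the names are all hypothetical; in particular this toy ignores string positions and fragments, which the real scheme would track):

```python
# Toy backward chainer over normalized sequents
#   A1 /\ A2 /\ ... -> B1 \/ B2 \/ ...
# To establish a goal, pick a rule whose consequent mentions it and
# recursively establish every antecedent; atoms not defined by any
# rule must appear among the input tokens.

rules = [
    (['Subject', 'Verb'], ['Goal']),             # Subject /\ Verb -> Goal
    (['dog'],             ['Subject', 'Noun']),  # dog -> Subject \/ Noun
    (['barks'],           ['Verb']),
]

def prove(goal, facts, depth=0):
    """Chain backwards from `goal`; `facts` are the input tokens."""
    if goal in facts:
        return True
    for antecedents, consequents in rules:
        if goal in consequents and depth < 10:   # crude loop guard
            if all(prove(a, facts, depth + 1) for a in antecedents):
                return True
    return False

print(prove('Goal', {'dog', 'barks'}))   # True
print(prove('Goal', {'dog'}))            # False: no way to derive Verb
```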
I like to think that this system resembles a kind of URE on steroids.