probabilistic type theory with records ... variants of categorial grammar & semantics, etc.


Ben Goertzel

Aug 29, 2016, 5:41:29 PM
to Nil Geisweiller, Ruiting Lian, Linas Vepstas, Zarathustra Goertzel, opencog
Linas, Nil, etc. --

This variation of type theory

http://www.dcs.kcl.ac.uk/staff/lappin/papers/cdll_lilt15.pdf

seems like it may be right for PLN and OpenCog ... basically,
dependent type theory with records (persistent memory) and
probabilities ...

If we view PLN as having this sort of semantics, then RelEx+R2L is
viewed as enacting a morphism from:

-- link grammar, which is apparently equivalent to pregroup grammar,
which is a nonsymmetric cartesian closed category

to

-- lambda calculus endowed with the probabilistic TTR type system,
which is a locally cartesian closed category

https://ncatlab.org/nlab/show/relation+between+type+theory+and+category+theory#DependentTypeTheory

For the value of dependent types in natural language semantics, see e.g.

http://www.slideshare.net/kaleidotheater/hakodate2015-julyslide?qid=85e8a7fc-f073-4ded-a2c8-9622e89fd07d&v=&b=&from_search=1

(the examples regarding anaphora in the above are quite clear)

https://ncatlab.org/nlab/show/dependent+type+theoretic+methods+in+natural+language+semantics

This paper

http://www.slideshare.net/DimitriosKartsaklis1/tensorbased-models-of-natural-language-semantics?qid=fd4cc5b3-a548-46a7-b929-da8246e6c530&v=&b=&from_search=2

on the other hand, seems mathematically sound but conceptually wrong
in its linguistic interpretation.

It constructs a nice morphism from pregroup grammars (closed cartesian
categories) to categories defined over vector spaces -- where the
vector spaces are taken to represent co-occurrence vectors and such,
indicating word semantics.... The morphism is nice... however, the
idea that semantics consists of numerical vectors is silly ...
semantics is much richer than that

If we view grammar as link-grammar/pregroup-grammar/asymmetric-CCC ...
we should view semantics as {probabilistic TTR / locally cartesian
closed category *plus* numerical-vectors/linear-algebra}

I.e. semantics has a distributional aspect AND ALSO a more explicitly
logical aspect

Trying to push all of semantics into distributional word vectors leads
them into insane complexities like modeling determiners using
Frobenius algebras... which is IMO just not sensible ... it's trying
to achieve a certain sort of mathematical simplicity that does not
reflect the kind of simplicity seen in natural systems like natural
language...

Instead I would say RelEx+R2L+ECAN (on language) +
word-frequency-analysis can be viewed as enacting a morphism from:

-- link grammar, which is apparently equivalent to pregroup grammar,
which is a nonsymmetric cartesian closed category

to the product of

-- lambda calculus endowed with the probabilistic TTR type system,
which is a locally cartesian closed category

-- the algebra of finite-dimensional vector spaces

This approach accepts fundamental heterogeneity in semantic representation...

-- Ben

--
Ben Goertzel, PhD
http://goertzel.org

Super-benevolent super-intelligence is the thought the Global Brain is
currently struggling to form...

Linas Vepstas

Aug 29, 2016, 11:14:59 PM
to Ben Goertzel, Nil Geisweiller, Ruiting Lian, Zarathustra Goertzel, opencog, link-grammar
It will take me a while to digest this fully, but one error/confusion (and very important point) pops up immediately: link-grammar is NOT cartesian, and we most definitely do not want cartesian-ness in the system.  That would destroy everything interesting, everything that we want to have.  Here's the deal:

When we parse in link-grammar, we create multiple parses.  Each parse can be considered to "live" in its own unique world or universe (its own Kripke frame).  These universes are typically incompatible with each other: they conflict. Only one parse is right, the others are wrong (typically -- although sometimes there are some ambiguous cases, where more than one parse may be right, or where one parse might be 'more right' than another).

These multiple incompatible universes are symptomatic of a "linear type system".  Now, linear type theory finds applications in several places: it can describe parallel computation (each universe is a parallel thread) and also mutex locks and synchronization, and also vending machines: for one dollar you get a menu selection of items to pick from -- the ChoiceLink that drove Eddie nuts. 

The linear type system is the type system of linear logic, which is the internal language of the closed monoidal categories, of which the cartesian closed categories are a special case.

Let me return to multiple universes -- we also want this in PLN reasoning. A man is discovered standing over a dead body, a bloody sword in his hand -- did he do the deed, or is he simply the first witness to stumble onto the scene?  What is the evidence pro and con?  
This scenario describes two parallel universes: one in which he is guilty, and one in which he is not. It is the job of the prosecutor, defense, judge and jury to figure out which universe he belongs to.  The mechanism is a presentation of evidence and reasoning and deduction and inference. 

Please be hyper-aware of this, and don't get confused: just because we do not know his guilt does not mean he is "half-guilty" -- just like an unflipped coin is not some blurry, vague superposition of heads and tails.

Instead, as the evidence rolls in, we want to find that the probability of one universe is increasing, while the probability of the other one is decreasing.  It's just one guy -- he cannot be both guilty and innocent -- one universe must eventually be the right one, and it can be the only one.  (This is perhaps more clear in 3-way choices, or 4-way choices...)

Anyway, the logic of these parallel universes is linear logic, and the type theory is linear type theory, and the category is closed monoidal. 

(Actually, I suspect that we might want to use affine logic, which is per wikipedia "a substructural logic whose proof theory rejects the structural rule of contraction. It can also be characterized as linear logic with weakening.")

Anyway, another key point: lambda calculus is the internal language of *cartesian* closed categories.  It is NOT compatible with linear logic or linear types.   This is why I said in a different email that "this way lies madness".  Pursuit of lambda calc will leave us up a creek without a paddle; it will prevent us from being able to apply PLN to guilty/not-guilty court cases.

----
BTW, vector spaces are NOT cartesian closed! They are the prime example of a setting where one has the tensor-hom adjunction, i.e. one can do currying, and yet is NOT cartesian!  Vector spaces *are* closed monoidal.

The fact that some people are able to map linguistics onto vector spaces (although with assorted difficulties/pathologies) re-affirms that closed-monoidal is the way to go.  The reason that linguistics maps poorly onto vector spaces is due to their symmetry -- the linguistics is NOT symmetric, the vector spaces are.  So what we are actually doing (or need to do) is to develop the infrastructure for *cough cough* a non-symmetric vector space... which is kind-of-ish what the point of the categorial grammars is.

Enough for now.

--linas

Ben Goertzel

Aug 30, 2016, 10:28:16 AM
to link-grammar, Nil Geisweiller, Ruiting Lian, Zarathustra Goertzel, opencog
Linas,

Alas my window of opportunity for writing long emails on math-y
stuff has passed, so I'll reply to your email more thoroughly in a
couple days...

However, let me just say that I am not so sure linear logic is what we
really want.... I understand that we want to take resource usage into
account in our reasoning generally... and that in link grammar we want
to account for the particular exclusive nature of the disjuncts ...
but I haven't yet convinced myself linear logic is necessarily the
right way to do this... I need to take a few hours and reflect on it
more and try to assuage my doubts on this (or not)

-- ben

Linas Vepstas

Aug 30, 2016, 11:52:42 AM
to opencog, link-grammar, Nil Geisweiller, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Hi Ben,

Well, it might not have to be linear; it might be affine -- I have not thought it through myself. What is clear is that cartesian is wrong.

The reason I keep repeating the guilt/innocence example is that it's not just the "exclusive nature of disjuncts in link-grammar", but rather that it is a generic real-world reasoning problem.

I think I understand one of the points of confusion, though. In digital circuit verification (semiconductor chip industry), everyone agrees that the chips themselves behave according to classical boolean logic -- it's all just ones and zeros.  However, in verification, you have to prove that a particular chip design is working correctly.  That proof process does NOT use classical logic to achieve its ends -- it does use linear logic! Specifically, the proof process goes through sequences of Kripke frames, where you verify that certain ever-larger parts of the chip are behaving correctly, and you use the frames to keep track of how the various combinatorial possibilities feed back into one another.  Visualize it as a kind of lattice: at first, you have a combinatoric explosion, a kind of tree or vine, but then later, the branches join back together again, into a smaller collection. Those that fail to join up are either incompletely modelled, or indicate a design error in the chip.

There's another way of thinking of chip verification: one might say, in any given universe/Kripke frame, that a given transistor is in one of three states: on, off, or "don't know", with the "don't know" state corresponding to "we haven't simulated/verified that one yet".  The collection of possible universes shrinks as you eliminate the "don't know" states during the proof process.  This kind of tri-valued logic is called "intuitionistic logic" and has assorted close relationships to linear logic.
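To make the "don't know" bookkeeping concrete, here is a tiny illustrative Python sketch (my own toy, not anything from actual verification tools): each candidate universe is a full 0/1 assignment, a signal is "don't know" exactly when both values still occur across the surviving universes, and each newly-verified constraint prunes the set.

# A minimal sketch (my own illustration): each "universe" assigns 0/1 to
# every signal; a signal we have not yet verified is "don't know", i.e.
# both completions remain possible.  Verified constraints prune universes.
from itertools import product

signals = ["a", "b", "c"]

# Start with every assignment possible: 2^3 = 8 universes.
universes = [dict(zip(signals, bits)) for bits in product([0, 1], repeat=3)]

def verify(universes, constraint):
    """Keep only the universes consistent with a newly-verified constraint."""
    return [u for u in universes if constraint(u)]

# Verify the gate "c = a AND b": the universe count shrinks from 8 to 4.
universes = verify(universes, lambda u: u["c"] == (u["a"] & u["b"]))

# Verify an observed input "a = 1": shrinks again, from 4 to 2.
universes = verify(universes, lambda u: u["a"] == 1)

# A signal is "don't know" exactly when both values still occur somewhere.
for s in signals:
    values = {u[s] for u in universes}
    print(s, "->", values.pop() if len(values) == 1 else "don't know")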

These same ideas should generalize to PLN:  although PLN is itself a probabilistic logic, and I do not advocate changing that, the actual chaining process, the proof process of arriving at conclusions in PLN, cannot be, must not be. 

I hope the above pins down the source of confusion, when we talk about these things.  The logic happening at the proof level, the ludics level, is very different from the structures representing real-world knowledge.

--linas



Ben Goertzel

Aug 30, 2016, 6:10:32 PM
to link-grammar, Matthew Ikle, opencog, Nil Geisweiller, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Linas,

Actually, even after more thought, I still don't (yet) see why linear
logic is needed here...

In PLN, each statement is associated with at least two numbers

(strength, count)

Let's consider for now the case where the strength is just a probability...

Then in the guilt/innocence case, if you have no evidence about the
guilt or innocence, you have count =0 .... So you don't have to
represent ignorance as p=.6 ... you can represent it as

(p,n) = (*,0)

The count is the number of observations made to arrive at the strength figure...

PLN count rules propagate counts from premises to conclusions, and if
everything is done right without double-counting of evidence, then the
amount of evidence (number of observations) supporting the conclusion
is less than or equal to the amount of evidence supporting the
premises...
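To make that concrete, here is a toy Python sketch of the idea -- the min-based discount below is my own placeholder, not the actual PLN count formula:

# Toy illustration of (strength, count) truth values and a conservative
# count-propagation rule.  The min() bound below is my own stand-in,
# not the actual PLN count formula.
from collections import namedtuple

TV = namedtuple("TV", ["strength", "count"])

IGNORANCE = TV(strength=None, count=0)   # (*, 0): no evidence either way

def propagate_count(premise_tvs):
    """Evidence supporting a conclusion can't exceed that of its premises."""
    return min(tv.count for tv in premise_tvs)

guilty = IGNORANCE                       # before any evidence arrives
evidence = [TV(0.9, 12), TV(0.7, 5)]     # two premises with their own counts
print(propagate_count(evidence))         # -> 5, bounded by the weaker premise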

This does not handle estimation of resource utilization in inference,
but it does handle the guilt/innocence example

As for the resource utilization issue, certainly one can count the
amount of space and time resources used in drawing a certain inference
... and one can weight an inference chain via the amount of resources
it uses... and one can prioritize less expensive inferences in doing
inference control. This will result in inferences that are "simpler"
in the sense of resource utilization, and hence more plausible
according to some variant of Occam's Razor...

But this is layering resource-awareness on top of the logic, and using
it in the control aspect, rather than sticking it into the logic as
linear and affine logic do...

The textbook linear logic example of

"I have $5" ==> I can buy a sandwich
"I have $5" ==> I can buy a salad
|- (oops?)
"I have $5" ==> I can buy a sandwich and I can buy a salad

doesn't impress me much; I mean, you should just say

If I have $5, I can exchange $5 for a sandwich
If I have $5, I can exchange $5 for a salad
After I exchange $X for something else, I don't have $X anymore

or whatever, and that expresses the structure of the situation more
nicely than putting the nature of exchange into the logical deduction
apparatus.... There is no need to complicate one's logic just to
salvage a crappy representational choice...
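For instance, here's a toy Python sketch of that alternative (names invented for illustration): the money lives in the represented state, and an explicit exchange action consumes it, so no special logic is needed:

# Toy sketch of the alternative described above: represent the money as
# part of the world state, and let an explicit "exchange" action consume
# it, instead of baking resource-consumption into the deduction rules.
def exchange(state, cost, item):
    """Spend `cost` dollars for `item`, if the money is there."""
    if state["dollars"] < cost:
        return None                              # can't afford it in this state
    return {"dollars": state["dollars"] - cost,
            "items": state["items"] + [item]}    # fresh state, original untouched

state = {"dollars": 5, "items": []}
after_sandwich = exchange(state, 5, "sandwich")
print(after_sandwich)                            # {'dollars': 0, 'items': ['sandwich']}
print(exchange(after_sandwich, 5, "salad"))      # None: the $5 is already gone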

In linear logic, it is no longer the case that, given A implies B and
given A, one can deduce both A and B ...

In PLN, if one has

A <sA, nA>
(ImplicationLink A B) <sAB, nAB>

one can deduce

B <sB,nB>

but there is some math to do to deduce sB and nB, and one can base
this math on various assumptions including independence assumptions,
assumptions about the shapes of concepts, etc.
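For concreteness, one crude way the numbers could be combined, assuming independence and a default P(B|not A) of 0.5 -- this is a placeholder sketch of my own, not the actual PLN formulas:

# One crude way to cash out the deduction numerically, assuming independence
# and a default P(B|not A) = 0.5.  This is a placeholder, NOT the actual PLN
# formulas (which also use term probabilities, concept shapes, etc.).
def modus_ponens(sA, nA, sAB, nAB, s_B_given_notA=0.5, discount=0.9):
    sB = sAB * sA + s_B_given_notA * (1.0 - sA)   # total probability
    nB = discount * min(nA, nAB)                  # conclusion evidence <= premises
    return sB, nB

print(modus_ponens(sA=0.8, nA=20, sAB=0.9, nAB=15))  # -> (0.82, 13.5)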

In short I think if we extend probabilistic TTR to be "TTR with <p,n>
truth values", then we can use lambda calculus with a type system
drawn from TTR and with each statement labeled with a <p,n> truth
value ... and then we can handle the finitude of evidence without
needing to complicate the base logic...

A coherent and sensible way to assess <p,n> truth values for
statements with quantified variables was given by me and Matt in 2008,
in

http://www.agiri.org/IndefiniteProbabilities.pdf

Don't let the third-order probabilities worry you ;)

...

In essence, it seems, the linear logic folks push a bunch of
complexity into the logic itself, whereas Matt and I pushed the
complexity into the truth values, and the Occam bias on proofs (into
which resource utilization should be factored)

-- Ben

Ben Goertzel

Aug 31, 2016, 2:05:16 AM
to link-grammar, Matthew Ikle, opencog, Nil Geisweiller, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Regarding link parses and possible worlds...

In the TTR paper they point out that "possible worlds" is somehow
conceptually misleading terminology, and it may often be better to
think about "possible situations" (in a deep sense each possible
situation is some distribution over possible worlds, but it may rarely
be necessary to go that far)

In that sense, we can perhaps view the type of a link parse as a
dependent type that depends upon the situation ... (?)

This is basically the same as viewing the link-parser itself as a
function that takes (sentence, dictionary) pairs into functions that
map situations into sets of link-parse-links [but the latter is a
more boring and obvious way of saying it ;p]

But again, I don't (yet) see why linear logic would be required
here... it seems to me something like TTR with <p,n> truth values is
good enough, and we can handle resource management on the "Occam's
razor" level

As you already know (but others may not have thought about), weighting
possible link parses via their probabilities based on a background
corpus is itself a form of "resource usage based Occam's razor
weighting". The links and link-combinations with higher probability
based on the corpus are the ones that the OpenCog system doing the
parsing has more reason to retain in the Atomspace --- thus for
higher-weighted links or link-combinations, the "marginal memory
usage" required to keep those links/link-combinations in memory is
less. So we can view the probability weighting of a potential parse as
inversely related to the memory-utilization-cost of that parse, in the
context of a system with a long-term memory full of other parses from
some corpus (or some body of embodied linguistic experience,
whatever...).
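One standard way to make that correspondence concrete (my own illustration, in the MDL spirit): treat the marginal storage cost of a parse as its code length, -log2(p), so high-probability parses are cheap to retain:

# MDL-style reading of the point above (my own illustration): the marginal
# cost of retaining a parse, given a memory trained on the corpus, is its
# code length -log2(p) -- so high-probability parses are cheap to keep.
import math

def marginal_storage_cost_bits(parse_probability):
    return -math.log2(parse_probability)

for p in (0.5, 0.1, 0.001):
    print(p, round(marginal_storage_cost_bits(p), 2), "bits")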

Currently it seems to me that the probabilistic weighting of parses
(corresponding to possible situations) is already handling
resource-management implicitly and we don't need linear logic to do
that here...

Of course these things are pretty subtle when you really think about
them, and I may be missing something...

ben

Linas Vepstas

Aug 31, 2016, 3:19:44 PM
to opencog, link-grammar, Matthew Ikle, Nil Geisweiller, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Hi Ben,

OK, not sure what to say. I think you totally missed the point of my last email. Specifically, it is a  https://en.wikipedia.org/wiki/Category_mistake  which is a punny way of putting it.

So let me try again a different way:  you are a big fan of Agda -- so -- a rhetorical question: which logic is Agda based on?

"Some other type theories include Per Martin-Löf's intuitionistic type theory, which has been the foundation used in some areas of constructive mathematics and for the proof assistant Agda. Thierry Coquand's calculus of constructions and its derivatives are the foundation used by Coq and others. The field is an area of active research, as demonstrated by homotopy type theory."

"Dependent types play a central role in intuitionistic type theory and in the design of functional programming languages like Idris, ATS, Agda and Epigram."

"A prime example is Agda, a programming language which uses intuitionistic type theory for its type system."

"intuitionistic type theory is used by Agda which is both a programming language and proof assistant;

"As such, the use of proof assistants (such as Agda or Coq) is enabling modern mathematicians and logicians to develop and prove extremely complex systems,..."

"Type theory has been the base of a number of proof assistants, such as NuPRL, LEGO and Coq. Recently, dependent types also featured in the design of programming languages such as ATS, Cayenne, Epigram, Agda, and Idris."

"Besides Coq, Agda is one of the languages in which homotopy type theory has been implementsed"


well -- I can't cut-n-paste -- it's a copyright-protected e-book.

-------------
The category mistake is this: do NOT confuse the DOMAIN with REASONING ABOUT THE DOMAIN.  Which is what is being done when you start talking about PLN.

I am NOT talking about PLN. I am talking about REASONING ABOUT PLN!  Two different things. 

I mentioned "ludics" for a reason.

Think of it this way:  you can write any program at all in Agda.  For example, it could be a program that adds two integers together, and prints the result.

So, Agda uses intuitionistic logic to *run* that program. That does not mean the addition of integers needs, uses, or requires intuitionistic logic.  It only means that, if you write your program in Agda, that is the theory of logic that will be used to run your program.

Think of PLN as being the program that adds integers together. Or however you want to describe what PLN does.  It is what it is.   I am trying to talk about the correct way in which to implement the *algorithm* that implements PLN.

Until we get past this point, we will be confused about what we are talking about.

Same deal for relex2logic. Same deal for relex. Same deal for linkages in link-grammar.  Same deal for the so-called "word-graph" tokenizer that Amir implemented for link-grammar, to split up words into morphemes.  Ask Amir to explain how it works.  You promptly get Kripke frames, and NOT classical logic!

--linas




Linas Vepstas

Aug 31, 2016, 5:16:48 PM
to link-grammar, Matthew Ikle, opencog, Nil Geisweiller, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Hi Ben,

What's TTR? 

We can talk about link-grammar, but I want to talk about something a little bit different: not PLN, but the *implementation* of PLN.   This conversation requires resolving the "category error" email I sent, just before this.

Thus, I take PLN as a given, including the formulas that PLN uses, and every possible example of *using* PLN that you could throw my way.  I have no quibble about any of those examples, or with the formulas, or with anything like that. I have no objections to the design of the PLN rules of inference.

What I want to talk about is how the PLN rules of inference are implemented in the block of C++ code in github.   I also want to assume that the implementation is complete, and completely bug-free (even though it's not, but let's assume it is).

Now, PLN consists of maybe a half-dozen or a dozen rules of inference.  They have names like "modus ponens" but we could call them just "rule MP" ... or just "rule A", "rule B", etc...

Suppose I start with some atomspace contents, and apply the PLN rule A. As a result of this application, we have a "possible world 1".  If, instead, we started with the same original atomspace contents as before, but applied rule B, then we would get "possible world 2".  It might also be the case that PLN rule A can be applied to some different atoms from the atomspace, in which case, we get "possible world 3".   

Each possible world consists of the triple (some subset of the atomspace, some PLN inference rule, the result of applying the PLN rule to the input).  

Please note that some of these possible worlds are invalid or empty: it might not be possible to apply the chosen PLN rule to the chosen subset of the atomspace.  I guess we should call these "impossible worlds".  You can say that their probability is exactly zero.

Observe that the triple above is an arrow:  the tail of the arrow is "some subset of the atomspace", the head of the arrow is "the result of applying PLN rule X", and the shaft of the arrow is given a name: its "rule X".

(In fancy-pants, peacock language, the arrows are morphisms, and the slinging together, here, gives Kripke frames. But let's avoid the fancy language since it's confusing things a lot, just right now.)

Anyway -- considering this process, this clearly results in a very shallow tree, with the original atomspace as the root, and each branch of the tree corresponding to a possible world.  Note that each possible world is a new and different atomspace: The rules of the game here are that we are NOT allowed to dump the results of the PLN inference back into the original atomspace.  Instead, we MUST fork the atomspace.  Thus, if we have N possible worlds, then we have N distinct atomspaces (not counting the original, starting atomspace).

Now, for each possible world, we can apply the above procedure again. Naively, this is a combinatoric explosion. For the most part, each different possible world will be different than the others. They will share a lot of atoms in common, but some will be different.

Note, also, that *some* of these worlds will NOT be different, but will converge, or be "confluent", arriving at the same atomspace contents along different routes.  So, although, naively, we have a highly branching tree, it should be clear that sometimes, some of the branches come back together again.  

I already pointed out that some of the worlds are "impossible" i.e. have a probability of zero. These can be discarded.  But wait, there's more.  Suppose that one of the possible worlds contains the statement "John Kennedy is alive" (with a very very high confidence) , while another one contains the statement "John Kennedy is dead" (with a very very high confidence).  What I wish to claim is that, no matter what future PLN inferences might be made, these two worlds will never become confluent.

There is also a different effect: during inferencing, one might find oneself in a situation where the atoms being added to the atomspace, at each inference step, have lower and lower probability. At some point, this suggests that one should just plain quit -- that particular branch is just not going anywhere. It's a dead end.
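Here is a schematic Python sketch of the forking process just described (illustrative only -- the names and toy rules are mine, not the actual chainer code):

# Schematic sketch of the forking process (illustrative only, not the actual
# chainer code).  Each possible world is the triple (parent contents, rule
# applied, resulting contents); worlds whose rule could not apply (the
# "impossible worlds", probability zero) are simply dropped.
def fork_worlds(atomspace, rules):
    worlds = []
    for rule in rules:
        result = rule(frozenset(atomspace))     # never mutate the parent
        if result is not None:                  # None == rule did not apply
            worlds.append((frozenset(atomspace), rule.__name__, result))
    return worlds

def rule_a(contents):
    return contents | {"C"} if "A" in contents else None

def rule_b(contents):
    return contents | {"D"} if "B" in contents else None

start = {"A"}
level1 = fork_worlds(start, [rule_a, rule_b])   # rule_b yields an impossible world
print(level1)                                    # one surviving world: adds "C"
# Applying fork_worlds again to each surviving world grows the (shallow) tree;
# branches whose contents coincide are the "confluent" ones.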

OK, that's it, I think, for the overview.   Now for some commentary.

First (let me get it out of the way now), the above describes *exactly* how link-grammar works.  For "atomspace" substitute "linkage" and for "PLN rule of inference" substitute "disjunct".  That's it. End of story (QED).

Notice that each distinct linkage in link-grammar is a distinct possible-world. The result of parsing is to create a list of possible worlds (linkages, aka "parses").  Now, link-grammar has a "cost system" that assigns different probabilities (different costs) to each possible world: this is "parse ranking": some parses (linkages) are more likely than others.

Note that each different parse is, in a sense, "not compatible" with every other parse.  Two different parses may share common elements, but other parts will differ.

Claim: the link-grammar is a closed monoidal category, where the words are the objects, and the disjuncts are the morphisms. I don't have the time or space to articulate this claim, so you'll have to take it on faith, or think it through, or compare it to other papers on categorial grammar e.g. the Bob Coecke paper op. cit.  It is useful to think of link-grammar disjuncts as jigsaw-puzzle pieces, and the act of parsing as the act of assembling a jigsaw puzzle.  (See the original LG paper for a picture of the jigsaw pieces.  The Coecke paper also draws them. So does the Baez "rosetta stone" paper, though not as firmly.)

Theorem: the act of applying PLN, as described above, is a closed monoidal category.
Proof:  A "PLN rule of inference" is, abstractly, exactly the same thing as a link-grammar disjunct. The contents of the atomspace is exactly the same thing as a (partially or fully) parsed sentence.  QED.

There is nothing more to this proof than that.  I mean, it can be fleshed out in much greater detail, but that's the gist of it.

Observe two very important things:  (1) during the proof, I never once had to talk about modus ponens, or any of the other PLN inference rules.  (2) during the proof, I never had to invoke the specific mathematical formulas that compute the TV's -- that compute the strength and confidence.   Both of these aspects of PLN are completely and utterly irrelevant to the proof.  The only thing that mattered is that PLN takes, as input, some atoms, and applies some transformation, and generates atoms. That's it.

The above theorem is *why* I keep talking about possible worlds and Kripke-blah-blah and intuitionistic logic, and linear logic. It's got NOTHING TO DO WITH THE ACTUAL PLN RULES!!! The only thing that matters is that there are rules, that get applied in some way.  The generic properties of linear logic and etc. are the generic properties of rule systems and Kripke frames. Examples of such rule systems include link-grammar, PLN, NARS, classical logic, and many more.  The details of the specific rule system do NOT alter the fundamental process of rule application aka "parsing" aka "reasoning" aka "natural deduction" aka "sequent calculus".  Confusing the details of PLN with the act of parsing is a category error: the logic that describes parsing is not PLN, and PLN does not describe parsing: it's a category error to intermix the two.

Phew.

What remains to be done:  I believe that what I describe above, the "many-worlds hypothesis" of reasoning, can be used to create a system that is far more efficient than the current PLN backward/forward chainer.  It's not easy, though: the link-parser algorithm struggles with the combinatoric explosion, and has some deep, tricky techniques to beat it down.  ECAN was invented to deal with the explosion in PLN.  There are other ways.

By the way: the act of merging the results of a PLN inference back into the original atomspace corresponds, in a very literal sense, to a "wave function collapse". As long as you keep around multiple atomspaces, each containing partial results, you have "many worlds", but every time you discard or merge some of these atomspaces back into one, it's a "collapse".  That includes some of the TV merge rules that plague the system.

Next, I plan to convert this email into a blog post.

--linas




Linas Vepstas

Aug 31, 2016, 6:37:23 PM
to link-grammar, Matthew Ikle, opencog, Nil Geisweiller, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
And so here is the blog post -- it's a lightly reformatted version of this email, with lots of links to Wikipedia and a few papers.


I really really hope that this clarifies something that is often seen as mysterious.

--linas

Ben Goertzel

Sep 1, 2016, 1:09:13 PM
to link-grammar, Matthew Ikle, opencog, Nil Geisweiller, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Thanks Linas...

Of course you are right that link grammar/pregroup grammar is
modelable as an asymmetric closed monoidal category which is not
cartesian... I was just freakin' overtired when I typed that... too
much flying around and too little sleep..

However, dependent type systems do often map into locally cartesian
closed categories; that part was not a slip...

At least in very many cases, it seems to me we can view the RelEx/R2L
transformations as taking an asymmetric closed monoidal category into
a locally cartesian closed category... Cashing this out in terms of
examples will be important though, otherwise it's too abstract to be
useful. I started doing that on a flight but ran out of time; now
I'm back in HK and overwhelmed with meetings...

About Kripke frames etc. --- as I recall that was a model of the
semantics of modal logic with a Possibly operator as well as a
Necessarily operator.... But in PLN we have a richer notion of
possibility than in a standard modal logic, in the form of the
<probability, weight of evidence> truth values. I guess that if you
spelled out the formal semantics of logic with <p,n> truth values in
the right way, you would get some sort of extension of Kripke
semantics. Kripke semantics is based on unweighted graphs, so I
guess for the logic with <p,n> truth values you'd get something
similar with weighted graphs.... This would be interesting to spell
out in detail; I wish I had the time...
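Just to make the weighted-graph speculation concrete, here's a toy sketch (my own, not a worked-out semantics): keep Kripke's accessibility relation, but weight each edge, so "possibly P" at a world becomes a number rather than a bit:

# Toy picture of the weighted-graph idea (speculative illustration only, not
# a worked-out semantics): keep Kripke's accessibility relation, but put a
# probability weight on each edge, so "possibly P at w" becomes a number.
access = {            # world -> {accessible world: weight}, weights sum to 1
    "w0": {"w1": 0.7, "w2": 0.3},
}
holds = {             # which worlds satisfy the proposition P
    "w1": True,
    "w2": False,
}

def possibly(world, proposition_holds, accessibility):
    """Expected truth of P over the worlds accessible from `world`."""
    return sum(w for target, w in accessibility[world].items()
               if proposition_holds[target])

print(possibly("w0", holds, access))   # 0.7 instead of a crisp True/False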

Typos, dumb mistakes and hasty errors aside, I think I'm reasonably
comfortable with Kripke frames and pregroup grammars and
intuitionistic logic stuff...

However, the point I don't yet see is why we need linear logic ... and
you don't really touch on that in your blog post... if you could
elaborate that it would be interesting to me...

-- Ben

Linas Vepstas

Sep 1, 2016, 4:18:25 PM
to link-grammar, Matthew Ikle, opencog, Nil Geisweiller, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Hi Ben,

On Thu, Sep 1, 2016 at 12:09 PM, Ben Goertzel <b...@goertzel.org> wrote:

About Kripke frames etc. --- as I recall that was a model of the
semantics of modal logic with a Possibly operator as well as a
Necessarily operator....   But in PLN we have a richer notion of
possibility than in a standard modal logic,

Hey, I'm guessing that you're tired from travel, as here you repeat the same confusion from before. There is a difference between "reasoning" (which is what PLN does) and "reasoning about reasoning" (which is what I am talking about). 

What I am talking about applies to any rule-based system whatsoever; it's not specific to PLN. As long as you keep going back to PLN, you will have trouble figuring out what I'm saying.   This is why I keep trying to create non-PLN examples. But every time I create a non-PLN example, you zip back to PLN, and that misses the point of it all.

-- linas

Ben Goertzel

Sep 1, 2016, 10:32:36 PM
to opencog, link-grammar, Matthew Ikle, Nil Geisweiller, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Hmm, but the rules of a system like PLN are just predicate logic
formulas themselves, so "reasoning about reasoning" is formally a
sub-case of good old reasoning

The semantics is different for "reasoning about reasoning" ... but if
one is using a sufficiently rich probabilistic logic for reasoning,
then one is a fortiori doing probabilistic-reasoning-about-reasoning,
right? ... which (if one uses <s,n> truth values) is richer than
intuitionistic reasoning-about-reasoning, and inclusive of the
former...

I agree we are talking past each other in some way or another
though... Sometimes email is not optimal. It will be fun to take
this up F2F sometime, ideally after I've taken a day or so to review
the relevant math, which exists in my brain at varying levels of
recollection and fuzziness just now...

Nil Geisweiller

Sep 2, 2016, 7:19:25 AM
to linasv...@gmail.com, opencog, link-grammar, Nil Geisweiller, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale


On 08/30/2016 06:52 PM, Linas Vepstas wrote:
>
> These same ideas should generalize to PLN: although PLN is itself a
> probabilistic logic, and I do not advocate changing that, the actual
> chaining process, the proof process of arriving at conclusions in PLN,
> cannot be, must not be.
>
> I hope the above pins down the source of confusion, when we talk about
> these things. The logic happening at the proof level, the ludics level,
> is very different from the structures representing real-world knowledge.

Oh, it's a lot clearer then! But in the case of PLN inference control we
want to use meta-learning anyway, not "hacks" (sorry if I upset certain
people) like linear logic or intuitionistic logic. However, I feel an
area where something similar to linear logic, etc., might be very
worthwhile thinking about is in estimating how much evidence inference
traces have in common, so as to have the revision rule work correctly.
This is kinda the only way I manage to relate these
barely-understandable-word-soup-sounding-to-me abstract proposals to
PLN. Would really love to look deeper into that once it becomes more
prioritized though.

Nil

Nil Geisweiller

Sep 2, 2016, 7:41:59 AM
to linasv...@gmail.com, link-grammar, Matthew Ikle, opencog, Nil Geisweiller, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Linas,

thanks for explaining so clearly what you mean...

On 09/01/2016 12:16 AM, Linas Vepstas wrote:
> Observe that the triple above is an arrow: the tail of the arrow is
> "some subset of the atomspace", the head of the arrow is "the result of
> applying PLN rule X", and the shaft of the arrow is given a name: its
> "rule X".

Aha, I finally understand what you meant all these years!

> I already pointed out that some of the worlds are "impossible" i.e. have
> a probability of zero. These can be discarded. But wait, there's more.
> Suppose that one of the possible worlds contains the statement "John
> Kennedy is alive" (with a very very high confidence) , while another one
> contains the statement "John Kennedy is dead" (with a very very high
> confidence). What I wish to claim is that, no matter what future PLN
> inferences might be made, these two worlds will never become confluent.

I don't think that's true. I believe they should at least be somewhat
confluent -- I hope so, at least; if not, then PLN inference control is
pathological. Sure, you can't have John Kennedy being half-alive and
half-dead, but that is not what a probability distribution means.
Ultimately an event of a probability space is a crisp set; that is why
Ben suggested multi-set semantics over the SubSet formula when dealing
with fuzzy concepts.

I can't comment on link-grammar since I don't understand it.

Nil

Linas Vepstas

Sep 2, 2016, 10:00:13 PM
to Nil Geisweiller, opencog, link-grammar, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Hi Nil,



These same ideas should generalize to PLN:  although PLN is itself a
probabilistic logic, and I do not advocate changing that, the actual
chaining process, the proof process of arriving at conclusions in PLN,
cannot be, must not be.

I hope the above pins down the source of confusion, when we talk about
these things.  The logic happening at the proof level, the ludics level,
is very different from the structures representing real-world knowledge.

Oh, it's a lot clearer then! But in the case of PLN inference control we want to use meta-learning anyway, not "hacks" (sorry if I upset certain people) like linear logic or intuitionistic logic.

Well, hey, that is like saying that 2+2=4 is a hack -- 

The ideas that I am trying to describe are significantly older than PLN, and PLN is not some magical potion that somehow is not bound by the rules of reality, that can in some supernatural way violate the laws of mathematics.
 
However, I feel an area where something similar to linear logic, etc., might be very worthwhile thinking about is in estimating how much evidence inference traces have in common, so as to have the revision rule work correctly. This is kinda the only way I manage to relate these barely-understandable-word-soup-sounding-to-me abstract proposals to PLN.  Would really love to look deeper into that once it becomes more prioritized though.

OK, so in the blog post, at what point did things get too abstract, and too hard to follow? 

--linas

Linas Vepstas

Sep 2, 2016, 10:50:49 PM
to Nil Geisweiller, link-grammar, Matthew Ikle, opencog, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Hi Nil,


Observe that the triple above is an arrow:  the tail of the arrow is
"some subset of the atomspace", the head of the arrow is "the result of
applying PLN rule X", and the shaft of the arrow is given a name: its
"rule X".

Aha, I finally understand what you meant all these years!

I already pointed out that some of the worlds are "impossible" i.e. have
a probability of zero. These can be discarded.  But wait, there's more.
Suppose that one of the possible worlds contains the statement "John
Kennedy is alive" (with a very very high confidence) , while another one
contains the statement "John Kennedy is dead" (with a very very high
confidence). What I wish to claim is that, no matter what future PLN
inferences might be made, these two worlds will never become confluent.

I don't think that's true. I believe they should at least be somewhat confluent -- I hope so, at least; if not, then PLN inference control is pathological. Sure, you can't have John Kennedy being half-alive and half-dead, but that is not what a probability distribution means.

OK, the reason I focused on having separate, distinct copies of the atomspace at each step is that you (or some algo) gets to decide, at each point, whether you want to merge two atomspaces back together again into one, or not.

Today, by default, with the way the chainers are designed, the various different atomspaces are *always* merged back together again (into one single, global atomspace), and you are inventing things like "distributional TV" to control how that merge is done.

I am trying to point out that there is another possibility: one could, if desired, maintain many distinct atomspaces, and only sometimes merge them.   So, for just a moment, just pretend you actually did want to do that.  How could it actually be done?  Because doing it in the "naive" way is not practical.  Well, there are several ways of doing this more efficiently.  One way is to create a new TV, which stores the pairs (atomspace-id, simple-TV).  Then, if you wanted to merge two of these "abstract" atomspaces into one, you could just *erase* the atomspace-id.  Just as easy as that -- erase some info. You could even take two different (atomspace-id, simple-TV) pairs and mash them into one distributional TV.

The nice thing about keeping such pairs is that the atomspace-id encodes the PLN inference chain.  If you want to know *how* you arrived at some simple-TV, you just look at the atomspace-id, and you then can know how you got there -- the inference chain is recorded, folded into the id.  To create a distributional-TV, you simply throw away the records of the different inference chains, and combine the simple-TVs into the distributional TV.
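A minimal sketch of what such a TV could look like (all names invented here for illustration): a list of (atomspace-id, simple-TV) pairs, where merging worlds is just erasing the ids and mashing the simple TVs into one distribution:

# Minimal sketch (all names invented for illustration): a TV that is a list
# of (atomspace-id, simple-TV) pairs.  Merging two worlds is just erasing
# the ids and mashing the simple TVs together.
class ContextualTV:
    def __init__(self, pairs):
        # pairs: list of (atomspace_id, (strength, count))
        self.pairs = list(pairs)

    def merge_worlds(self):
        """Erase the atomspace-ids: collapse to a single distributional TV."""
        return [tv for _, tv in self.pairs]      # the bag of simple TVs

    def provenance(self):
        """The ids record how each simple TV was arrived at."""
        return [aid for aid, _ in self.pairs]

tv = ContextualTV([("world-rule_a-3", (0.9, 10)),
                   ("world-rule_b-7", (0.2, 4))])
print(tv.provenance())     # the folded-in inference chains
print(tv.merge_worlds())   # the distributional TV left after erasing them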

I hope this is clear.   The above indicates how something like this could work -- but we can't talk about whether it's a good idea, or how it might be useful, till we get past that.
 
I can't comment on link-grammar since I don't understand it.

Well, it's a lot like PLN -- it is a set of inference rules (called "disjuncts") that get applied, and each of these inference rules has a probability associated with it (actually, a log-probability -- the "cost").   However, instead of always merging the result of each inference step back into a single global atomspace (called a "linkage"), one keeps track of multiple linkages (multiple distinct atomspaces).  One keeps going and going, until it is impossible to apply any further inference rules.  At this point, parsing is done.  When parsing is done, one has a few or dozens or hundreds of these "linkages" (aka "atomspaces").

A parse is then the complete contents of the "atomspace" aka "linkage".   At the end of the parse, the "words" (aka OC Nodes; we actually use WordNodes after conversion) are connected with "links" (aka OC EvaluationLinks).

Let me be clear: when I say "it's a lot like PLN", I am NOT hand-waving or being metaphorical, nor am I trying to be abstract or obtuse.  I am trying to state something very real, very concrete, very central.  It might not be easy to understand; you might have to tilt your head sideways to get it, but it really is there.

Anyway, moving on -- now, you could, if you wished, mash all of the "linkages" (atomspaces) back together again into just one -- you could put a distributional TV on each "link" (EvaluationLink), and mash everything into one.   You could do even more violence, and mash such a distributional TV down to a simple TV.   It might even be a good idea to do this! No one has actually done so.

Historically, linguists really dislike the single-global-atomspace-with-probabilistic-TVs idea, and have always gone for the many-parallel-universes-with-crisp-TVs model of parsing. This dates back to before Chomsky, before Tesnière, and is rooted in 19th- or 18th-century or earlier concepts of grammar in, for example, Latin, etc. -- scholastic thinking maybe even back to the 12th century.  The core concepts are already present there; certainly in jurisprudence.

What I am suggesting is that perhaps, by stealing some of these rather very old ideas, and realizing that they also just happen to describe one way of operating PLN, we could create better inference control algorithms.   You don't have to always work with just a single atomspace.  It's OK to conceptualize having many of them, and think about what might happen in each one.

--linas



Ben Goertzel

Sep 2, 2016, 11:44:13 PM
to opencog, Nil Geisweiller, link-grammar, Matthew Ikle, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Linas,

On Sat, Sep 3, 2016 at 10:50 AM, Linas Vepstas <linasv...@gmail.com> wrote:
> Today, by default, with the way the chainers are designed, the various
> different atomspaces are *always* merged back together again (into one
> single, global atomspace), and you are inventing things like "distributional
> TV" to control how that merge is done.
>
> I am trying to point out that there is another possibility: one could, if
> desired, maintain many distinct atomspaces, and only sometimes merge them.
> So, for just a moment, just pretend you actually did want to do that. How
> could it actually be done? Because doing it in the "naive" way is not
> practical. Well, there are several ways of doing this more efficiently.
> One way is to create a new TV, which stores the pairs (atomspace-id,
> simple-TV) Then, if you wanted to merge two of these "abstract" atomspaces
> into one, you could just *erase* the atomspace-id. Just as easy as that --
> erase some info. You could even take two different (atomspace-id, simple-TV)
> pairs and mash them into one distributional TV.

I note that we used to have something essentially equivalent to this,
for basically this same reason.....

It was called CompositeTruthValue, and was a truth value object that
contained multiple truth values, indexed by a certain ID. The ID
was a version-ID not an atomspace-ID, but same difference...

A dude named Linas Vepstas got rid of this mechanism, because he
(probably correctly) felt it was a poor software design ;)

The replacement methodology is to use EmbeddedTruthValueLink and
ContextAnchorNode, as in the example

Evaluation
    PredicateNode "thinks"
    ConceptNode "Bob"
    ContextAnchorNode "123"

EmbeddedTruthValueLink <0>
    ContextAnchorNode "123"
    Inheritance Ben sane

which uses more memory but does not complicate the core code so much...

Ben Goertzel

Sep 2, 2016, 11:50:44 PM
to link-grammar, Nil Geisweiller, opencog, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
On Sat, Sep 3, 2016 at 9:59 AM, Linas Vepstas <linasv...@gmail.com> wrote:
> Hi Nil,
>
>>
>>>
>>> These same ideas should generalize to PLN: although PLN is itself a
>>> probabilistic logic, and I do not advocate changing that, the actual
>>> chaining process, the proof process of arriving at conclusions in PLN,
>>> cannot be, must not be.
>>>
>>> I hope the above pins down the source of confusion, when we talk about
>>> these things. The logic happening at the proof level, the ludics level,
>>> is very different from the structures representing real-world knowledge.
>>
>>
>> Oh, it's a lot clearer then! But in the case of PLN inference control we
>> want to use meta-learning anyway, not "hacks" (sorry if I upset certain)
>> like linear logic or intuitionistic logic.
>
>
> Well, hey, that is like saying that 2+2=4 is a hack --
>
> The ideas that I am trying to describe are significantly older than PLN, and
> PLN is not some magical potion that somehow is not bound by the rules of
> reality, that can in some supernatural way violate the laws of mathematics.

Hmm, no, but forms of logic with a Possibly operator are kinda crude
-- they basically lump all non-crisp truth values into a single
category, which is not really the most useful thing to do in most
cases...

Intuitionistic logic is indeed much older than probabilistic logic; but my
feeling is it is largely superseded by probabilistic logic in terms of
practical utility and relevance...

It's a fair theoretical point, though, that a lot of the nice theory
associated with intuitionistic logic could be generalized and ported
to probabilistic logic -- and much of this mathematical/philosophical
work has not been done...

As for linear logic, I'm still less clear on the relevance. It is
clear to me that integrating resource-awareness into the inference
process is important, but unclear to me that linear logic or affine
logic are good ways to do this in a probabilistic context. It may be
that deep integration of probabilistic truth values provides better
and different ways to incorporate resource-awareness...

As for "reasoning about reasoning", it's unclear to me that this
requires special treatment in terms of practicalities of inference
software.... Depending on one's semantic formalism, it may or may
not require special treatment in terms of the formal semantics of
reasoning.... It seems to me that part of the elegance of dependent
types is that one can suck meta-reasoning cleanly into the same
formalism as reasoning. This can also be done using type-free
domains (Dana Scott's old work, etc.).... But then there are other
formalisms where meta-reasoning and base-level reasoning are
formalized quite differently...

-- Ben

Linas Vepstas

Sep 3, 2016, 12:11:33 AM
to link-grammar, opencog, Nil Geisweiller, Matthew Ikle, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Yes. I am starting to get very annoyed. Whenever I talk about CompositeTruthValue, which I did earlier, I get the big brushoff. Now, when I finally was able to sneak it back into the conversation, I once again get the big brushoff.

I am starting to get really angry about this. I am spending wayyy too much time writing these emails, and all I get is blank stares and the occasional snide remark back.  This is just not that complicated, but as long as you do not bother to apply your considerable brainpower to all of this, the conversation is utterly completely stalled. 

I'm pretty angry right now.

--linas


Linas Vepstas

unread,
Sep 3, 2016, 12:17:47 AM9/3/16
to link-grammar, Nil Geisweiller, opencog, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
GOD DAMN IT BEN

Stop writing these ninny emails, and start thinking about what the hell is going on.  I've explained this six ways from Sunday, and I get the impression that you are just skimming everything I write, and not bothering to read it, much less think about it.  

I know you are really really smart, and I know you can understand this stuff, (cause it's really not that hard)  but you are simply not making the effort to do so.  You are probably overwhelmed with other work -- OK -- great -- so we can maybe follow up on this later on. But reading your responses is just plain highly unproductive, and just doesn't lead anywhere.  It's not interesting, it's not constructive, it doesn't solve any of the current problems in front of us.

--linas

Ben Goertzel

unread,
Sep 3, 2016, 12:19:03 AM9/3/16
to link-grammar, opencog, Nil Geisweiller, Matthew Ikle, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Hmm, I don't feel like I'm brushing you off. I'm actually trying to
understand why you think linear or affine logic is needed --- I don't
get why, I suspect you have some intuition or knowledge here that I'm
not grokking, and I'd honestly love more
clarification/elaboration/explanation from you...

About ContextLink / CompositeTruthValue -- an interesting relevant
question is whether we want/need to use it in the PLN backward chainer
which Nil is now re-implementing.... Quite possibly we do...

Ben Goertzel

unread,
Sep 3, 2016, 12:26:19 AM9/3/16
to link-grammar, Nil Geisweiller, opencog, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
hey... yes it's true that I have a shitload of less interesting things
to do this weekend, so I'm not reflecting on this stuff at great depth
at the moment... but I did spend some time reviewing papers on types
and categories and logic last week while flying around ...

Perhaps a useful way to move this discussion in a productive direction
would be to focus on some particular example or small set of examples?
Some of these things we're discussing are not going to be
practically relevant in OpenCog for a while, but some of them might be
important for Nil's near-future work on backward chaining and
inference control...

What I'm thinking is to posit a specific example of a real-world
situation and corresponding reasoning problem and then write down how
it would be formulated using

-- classical logic
-- intuitionistic logic
-- PLN

and then identify a corresponding "reasoning about reasoning" problem
and write down how it would be formulated in these various ways... and
see how the semantics can be formalized or otherwise expressed in each
case...

Regarding inference control, I could then use said example as an
illustration of my prior suggestion regarding
probabilistic-programming-based inference control... and perhaps you
could use it to explain how you think linear or affine logic can be
useful for inference control?

I could come up with an example or two myself but I'm afraid I might
come up with one that doesn't fully illustrate the points you're
trying to make...

Going through this stuff in detail in the context of some specific
example might help un-confuse others besides you, me and Nil who are
listening into this thread as well...

This is not urgent but could be interesting...

ben

Linas Vepstas

unread,
Sep 3, 2016, 1:25:09 AM9/3/16
to opencog, link-grammar, Nil Geisweiller, Matthew Ikle, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
On Fri, Sep 2, 2016 at 11:19 PM, Ben Goertzel <b...@goertzel.org> wrote:
Hmm, I don't feel like I'm brushing you off.   I'm actually trying to
understand why you think linear or affine logic is needed --- I don't
get why, I suspect you have some intuition or knowledge here that I'm
not grokking, and I'd honestly love more
clarification/elaboration/explanation from you...

Well, the blog post  http://blog.opencog.org/2016/08/31/many-worlds-reasoning-about-reasoning/  really tries to lay down the most basic ideas.  As you read it, I believe that you should be reminded of the old idea of "inference trails" which I diligently tried not to mention, because I am afraid it will confuse the issue.

If you read that, and think "how could one implement multiple atomspaces efficiently", then, yes, ContextLink and/or CompositeTruthValue is a way of doing that.

I claim that all the high-falutin mathematical terminology and concepts apply to this particular situation, but discussing that further, just right now, seems unproductive.  So let me set that aside.

But I do have the general sense that, as we move data through the system, using either ContextLinks, or something else that implements the multiverse model, would be useful.

There is a natural, obvious point of contact for this: every parse given to you, by relex, is effectively its own "universe", already.  That's just the way LG and relex were designed. It's how just about all parsers are designed.  The classic example is "I saw the man with the telescope", which has two parses, each parse is its own universe.  One parse is:

    _subj(see, I)
    _obj(see, man)
    _advmod(see, with)
    _pobj(with, telescope)

the other parse is nearly identical:

    _subj(see, I)
    _obj(see, man)
    _prepadj(man, with)
    _pobj(with, telescope)

We have several choices with how to deal with this.  We can try to keep these in separate "universes", maybe using ContextLinks (for example -- there are other ways, too), and then accumulate evidence until we can rule out one of these interpretations.

The other approach, that Nil was advocating with his distributional-TV proposals, is to jam these two into one, and say that _advmod(see, with) is half-true, and _prepadj(man, with) is half-true -- and then somehow hope that PLN is able to eventually sort it out. We currently don't do this approach, because it would break R2L -- the R2L rules would probably mis-behave, because they don't know how to propagate half-truths.

My gut instinct is that keeping the two different interpretations of "I saw the man with the telescope"  separate, for as long as possible, is better. Using the distributional-TV to (prematurely) merge them into one will probably lead to confusion.
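
To make this concrete, here's a throwaway Python sketch (toy code, not the actual OpenCog Atomspace API; the Multiverse class and context names are made up for illustration) of keeping the two parses in separate contexts and only discarding one when evidence rules it out:

# Toy illustration only -- NOT the OpenCog Atomspace API.
# Each parse of an ambiguous sentence lives in its own "context" (universe).

from collections import defaultdict

class Multiverse:
    def __init__(self):
        self.contexts = defaultdict(set)   # context id -> set of relations

    def add(self, ctx, relation):
        self.contexts[ctx].add(relation)

    def prune(self, ctx):
        # drop a whole interpretation once evidence rules it out
        self.contexts.pop(ctx, None)

mv = Multiverse()

# Parse 1: "with the telescope" modifies "saw"
for rel in ["_subj(see, I)", "_obj(see, man)",
            "_advmod(see, with)", "_pobj(with, telescope)"]:
    mv.add("parse-1", rel)

# Parse 2: "with the telescope" modifies "man"
for rel in ["_subj(see, I)", "_obj(see, man)",
            "_prepadj(man, with)", "_pobj(with, telescope)"]:
    mv.add("parse-2", rel)

# Later, if evidence arrives that the man was not holding a telescope:
mv.prune("parse-2")
print(sorted(mv.contexts.keys()))   # ['parse-1']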

However, this idea of keeping contexts separate, and when one should or should not merge them together, requires some sort of common shared vocabulary, so that we can talk about it.   The existing code talks about "parses" and "interpretations" and "word instances" and, maybe that is all that is needed, for now.   

But I suspect that it won't be enough, not for long, because PLN inferences done on one "interpretation" almost necessarily have to be distinct from the inferences done on the other interpretations, and we don't have good control over this.   Maybe ContextLinks or maybe ContextualTruthValues would be good for this. Maybe inference trails might help. I dunno.   The reason I dunno is because this level of abstraction is too low; it is too hard to think and plan and design at that level.  More abstraction makes understanding easier.

My hope was that, by talking about these other kinds of logics, it would clarify the issues -- for example, by understanding that "parsing" and "reasoning" are really the same kind of thing, and that therefore, the various tricks and techniques and algorithms developed  for parsing could be applied to reasoning as well.  Backward and forward chaining are very crude, very primitive tools. Far superior algorithms have been invented. I'm quite sure that we can do a lot lot better than merely backward/forward chaining in PLN.  But we can't get there until we start talking at the correct level of abstraction.

-- Linas



Linas Vepstas

unread,
Sep 3, 2016, 1:54:42 AM9/3/16
to opencog, link-grammar, Nil Geisweiller, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Here is another way of saying it.

On Fri, Sep 2, 2016 at 11:26 PM, Ben Goertzel <b...@goertzel.org> wrote:

  Some of these things we're discussing are not going to be
practically relevant in OpenCog for a while, but some of them might be
important for Nil's near-future work on backward chaining and
inference control...

Almost everything that I am talking about is aimed directly and explicitly at the concept of forward and backward chaining.   I claim even more: of all of the algorithms that are known for performing reasoning, forward/backward chaining are the worst and the slowest and the lowest-performance of all.   They are the most primitive possible tools for performing reasoning -- they are CPU hogs that get stuck in the mud of combinatorial explosion. 

What I'm thinking is to posit a specific example of a real-world
situation and corresponding reasoning problem and then write down how
it would be formulated using

-- classical logic
-- intuitionistic logic
-- PLN

Nothing that I care to talk about depends in the slightest on  this choice.  Whatever I  care to say about one applies equally well to the other two.  The differences between them mostly do not matter, at all, for the conversation that I wish to have.  One could add some green-cheese-from-the-moon logic to the list, and it just plain would not matter. It just doesn't matter.

The discussion I wish to have is about reasoning itself: the manner in which one applies rules to data.  So far, you have mentioned only two: forward and backward chaining.  I claim that there are many many more possibilities, that are far superior to these two. 

and then identify a corresponding "reasoning about reasoning" problem
and write down how it would be formulated in these various ways... and
see how the semantics can be formalized or otherwise expressed in each
case...

The rules about reasoning are formulated in the same way, independently of the actual logic which you wish to use.  

Well, this is actually a kind-of white lie. If you know that your reasoner is going to manipulate expressions written in classical predicate logic, then you can cheat in various ways. By "cheat" I mean "optimize performance of your reasoning algorithm".    But I would rather avoid getting tangled in the cheats/optimizations, at least, for a little while, and discuss reasoning in general, completely independent of the logical system on which the reasoning is being performed.

Regarding inference control, I could then use said example as an
illustration of my prior suggestion regarding
probabilistic-programming-based inference control... and perhaps you
could use it to explain how you think linear or affine logic can be
useful for inference control?

I think we need to take multiple steps backwards first, and long before we talk about inference control, we first need to agree on what we mean when we say "inference".  Right now, we don't share a common concept of what this is.

The blog post attempts to provide a provisional definition of inference. 

I claim that inference is like parsing, and that algorithms suitable for parsing can be transported and used for inference. I also claim that these algorithms will all provide superior performance to backward/forward chaining.

Until we can start to talk about inference as if it was a kind of parsing, then I think we'll remain stuck, for a while.

--linas
 


Ben Goertzel

unread,
Sep 3, 2016, 1:58:57 AM9/3/16
to link-grammar, opencog, Nil Geisweiller, Matthew Ikle, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
I will read your message carefully tonight or sometime in the next few
days when I'm not in a hurry...

But I will respond to this one point first, hoping to spur you on to
provide more detail on your thinking...

On Sat, Sep 3, 2016 at 1:24 PM, Linas Vepstas <linasv...@gmail.com> wrote:
> Backward and forward chaining are very crude, very primitive tools. Far
> superior algorithms have been invented. I'm quite sure that we can do a lot
> lot better than merely backward/forward chaining in PLN. But we can't get
> there until we start talking at the correct level of abstraction.

I do agree on that point.

The previous implementation of the backward chainer (by William Ma)
fell apart because of some "plumbing" related to lambdas and
variables.... Nil's re-implementation will get that plumbing right.
But even with correct variable-management plumbing, yeah, forward and
backward chaining are extremely crude mechanisms that are not
sufficient for AGI in themselves...

I wonder if you would like to try to spell out in slightly more detail
one of the alternatives you are thinking of, though.... This would be
potentially relevant to Nil's work during the next couple months...

You have mentioned Viterbi and forward-backward algorithms before, and
I understand how those work for standard belief propagation in Bayes
nets and Markov chains etc., but I don't yet see how to apply that
idea in a general logic context tractably.... Of course one can
alternate btw forward and backward steps, or do them concurrently, but
that's not really what you're getting at...

Probably the point in the above paragraph ties in with the
probabilistic-programming stuff I wrote about and linked to before,
but I haven't figured out exactly how yet...

Anyway I gotta leave the computer and take my f**king broken car to
the shop now, but I will read this stuff and reflect on it with more
care sometime in the next few days when the time emerges...

ben

Ben Goertzel

unread,
Sep 3, 2016, 9:33:09 AM9/3/16
to link-grammar, opencog, Nil Geisweiller, Matthew Ikle, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Linas, Nil, others,

OK, here is a more serious reply... which to be frank took a few hours
to write out... it may still not satisfy you, but at least it was
interesting to write down this stuff that has been bouncing in my head
a while ;) ;p ...


GENERAL FRAMEWORK OF OBJECTS AND TRANSFORMATIONS


OK, so let’s look at this at the level of abstraction of a general
rule engine… this is one level at which parsing and inference are the
same thing, right? (and after all, we do have a general rule engine
in OpenCog, so...)

In that context, what we have is some set of objects, and some set of
transformations…

Each transformation maps certain objects into certain other objects….

We could view each transformation as a category obviously… or we could
view any set of transformations as a category …

The transformations can also be viewed as objects. (Sometimes a
transformation is an object for a different transformation; but it’s
also possible for a transformation to be an object for itself and
transform itself.)

Some objects refer to observations from sensors or sets thereof; these
objects, among others, may come with count values. In the simplest
case, the count of an object may refer to the number of times an
observation falling into the set denoted by the object has been made
by the AI system associated with all these objects and
transformations…

Some objects are n-ary relations among other objects. For instance,
binary inheritance relations; predicates that transform objects into
truth values (truth values also being considerable as objects in this
general sense); etc.

We may also have types and type inheritance relationships (which are
a kind of object), so that objects may be seen as falling into a type
hierarchy. Since functions are a kind of object too, dependent types
may exist in this framework (and are also a kind of object)…

That is just the simple, abstract set-up, right? That obviously
encompasses parsing, logical inference, and plenty of other stuff…



SYNTAX PARSING



For the case of parsing… suppose we have a sentence object, i.e. a
linear series of word-instance objects. In link grammar (which seems
equivalent to pregroup grammar), each word can be viewed as a
transformation that maps (usually other) words into phrases. So for
instance if we have



     _____S_____
    |
  dogs


i.e. “dogs” with an S connector on the right, this is a transformation
that transforms {any word with an S connector on its left} into a
phrase. We may have


     _____S_____
                |
              bark


… in which case letting “dogs” transform “bark”, it will transform it into


     _____S_____
    |           |
  dogs        bark



i.e. into the phrase


dogs bark
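
A throwaway sketch of this "word as transformation via connectors" idea
(toy Python, not the real link-grammar parser; the Word class and the
connect function are invented here, and connector handling is drastically
simplified):

# Toy sketch of link-grammar-style connector matching (not the real LG
# parser).  A word carries left- and right-pointing connectors; a word
# acts as a transformation on another word when one of its right
# connectors matches a left connector of the same type.

class Word:
    def __init__(self, text, left=None, right=None):
        self.text = text
        self.left = left or []     # connectors expected on the left
        self.right = right or []   # connectors offered to the right

def connect(w1, w2, label):
    # let w1 transform w2 into a phrase via a shared connector label
    if label in w1.right and label in w2.left:
        return "(" + w1.text + " -" + label + "- " + w2.text + ")"
    return None

dogs = Word("dogs", right=["S"])
bark = Word("bark", left=["S"])
print(connect(dogs, bark, "S"))    # (dogs -S- bark)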


We can model this grammar categorically, by considering the process of
connecting a word to another word or phrase as a “product” …. Then
the entity “dog as an entity to connect to the left of another word or
phrase” is the right adjoint of “dog”; and the entity “dog as an
entity to connect to the right of another word” is the left adjoint of
“dog”, and so we have a monoidal category. Which is evidently an
asymmetric monoidal category -- because grammar is left-right
asymmetric: a word’s potential syntactic connections on the left are
not the same as its potential syntactic connections on the right.
Whether modeling the grammar categorically is useful, I am unsure.

Parsing a sentence is a case where we have a limited number of
objects of interest in a given multi-transformation process. That is,
a sentence may consist of (say) 5 specific word-instances (e.g. “The
dogs bark at me”) …. And we have a constraint that if we apply the
word-instance “dogs” to the word-instance “bark” as indicated above,
we can’t also apply the word-instance “dogs” to the word-instance “me”
(as in the case “This sentence dogs me” ) …



So in this case we need some notion of a “constrained transformation
system” … where we have a certain set of objects, and then a certain
set of transformations, and a set of constraints on which
transformations can be used together. The objects plus
transformations can be thought of as a category. We then have a
bunch of disjunctions among (object, transformation) pairs, indicating
that if e.g. (O1,T1) is in the transformation system, then (O2,T2)
cannot be…. These disjunctions are the “constraints” …
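
A toy sketch of what such a constrained transformation system looks like
(illustrative Python only; the attachment labels are invented), using the
telescope sentence's disjunction as the constraint:

# Sketch of a "constrained transformation system": a set of (object,
# transformation) applications plus disjunctive constraints saying that
# certain applications exclude others.  Purely illustrative.

def violates(applications, constraints):
    # constraints: list of pairs (a, b) meaning "not both a and b"
    chosen = set(applications)
    return any(a in chosen and b in chosen for a, b in constraints)

# In the telescope sentence, the word-instance "with" can attach to "see"
# or to "man", but not to both:
constraints = [(("with", "attach-to-see"), ("with", "attach-to-man"))]

parse_1 = [("with", "attach-to-see")]
parse_2 = [("with", "attach-to-man")]
bad     = parse_1 + parse_2

print(violates(parse_1, constraints))  # False
print(violates(bad, constraints))      # True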



CONSTRAINED TRANSFORMATION SYSTEMS IN PLN LOGIC


A logic like PLN is, in the sense of the above paragraph, also a
“constrained transformation system”, where here the constraint
pertains to multiple-counting of evidence…. The objects of PLN
may be any objects of appropriate types (the types that can serve as
premises to the logic rules, e.g. an InheritanceLink and its
associated targets and truth value); and the transformations are the
inference rules. The constraints are things like:




If we accumulate a piece of indirect evidence E in favor of
“Inheritance A C” as part of the input to the deduction-rule
transformation applied to the list object composed of the objects
“Inheritance A B” and “Inheritance B C” … then we cannot use this same
piece of evidence E as part of the input to a deduction-rule
transformation whose input contains “Inheritance A C” and whose output
contains “Inheritance A B” …




Since in practical PLN we are not keeping track of individual pieces
of evidence, we need to say this as




If we use the truth value of “Inheritance A C” as part of the input to
the deduction-rule transformation applied to the list object composed
of the objects “Inheritance A B” and “Inheritance B C” … then we cannot
use this same truth value as part of the input to a deduction-rule
transformation whose input contains “Inheritance A C” and whose output
contains “Inheritance A B” …



Inference trails are a crude way of enforcing the disjunctive
constraint involved in PLN (or NARS or other similar uncertain logic
systems), when the latter is considered as a constrained
transformation system.
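
A toy sketch of an inference trail used this way (illustrative Python;
atom names abbreviated, and the trail bookkeeping is deliberately
simplistic): each conclusion remembers which atoms its truth value was
derived from, and a rule application is blocked when its output already
sits in the trail of one of its inputs.

# Sketch of inference trails as a crude enforcement of the evidence
# constraint above.  Illustrative only.

trails = {}   # atom -> set of atoms whose truth value it was derived from

def record(conclusion, premises):
    trail = set(premises)
    for p in premises:
        trail |= trails.get(p, set())
    trails[conclusion] = trail

def allowed(premises, conclusion):
    return all(conclusion not in trails.get(p, set()) for p in premises)

record("Inh A C", ["Inh A B", "Inh B C"])          # deduction, forward
print(allowed(["Inh A C", "Inh B C"], "Inh A B"))  # False: would re-use evidence
print(allowed(["Inh A D", "Inh D C"], "Inh A C"))  # True: independent premises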

We can also view PLN categorically if we want to. PLN can be viewed
as building on typed lambda calculus, adding onto it probabilistic
truth values attached to expressions. In its full generality PLN
should be viewed as building on lambda calculus with dependent types.
Just as simply typed lambda calculus corresponds to closed Cartesian
categories,

http://www.goodmath.org/blog/2012/03/11/interpreting-lambda-calculus-using-closed-cartesian-categories/

similarly, lambda calculus with dependent types is known to
correspond to a locally Cartesian closed category. A category C is
locally Cartesian closed, if all of its slices C/X are Cartesian
closed (i.e. if it’s Cartesian closed for each parameter value used in
a parameter-dependent type). An arrow F: X —>Y can be interpreted as
a variable substitution and as a family of types indexed over Y in the
type theory. Basically, the category associated with a typed lambda
calculus has objects equal to the types, and morphisms equal to
type-inheritance relations between the types.

Note that unlike the asymmetric monoidal category used to model
syntax, here we have a SYMMETRIC category modeling logic expressions.
This is, crudely, because logic doesn’t care about left versus right,
whereas link grammar (and syntax in general) does.


MAPPING SYNTAX TO LOGIC

"RelEx + RelEx2Logic” maps syntactic structures into logical
structures. It takes in structures that care about left vs. right,
and outputs symmetric structures that don’t care about left vs. right.
The output of this semantic mapping framework, given a sentence, can
be viewed as a set of type judgments, i.e. a set of assignations of
terms to types. (Categorially, assigning term t to type T
corresponds to an arrow “t \circ ! : Gamma ---> T” where ! is an arrow
pointing to the unit of the category and Gamma is the set of type
definitions of the typed lambda calculus in question, and \circ is
function composition) .


For instance, if we map “dogs bark” into


EvaluationLink
    PredicateNode “bark”
    ConceptNode “dogs”


then we are in effect assigning “dogs” to the type T corresponding to


EvaluationLink
    PredicateNode “bark”
    ConceptNode $X



with free variable $X .


Categorially, the arrow


(ConceptNode $X) ----> (EvaluationLink (PredicateNode “bark”) (ConceptNode $X))


corresponds to this type expression.


The symmetry here consists in the fact that it is not “dog as it
connects to the left” or “dog as it connects to the right” that
belongs to this type T, it is just plain old “dog”…


Dependent types come in here when one has semantic mappings that are
left unresolved at the time of mapping. Anaphora are a standard
example, for instance sentences like “Every man who owns a donkey,
beats it”


http://www.slideshare.net/kaleidotheater/hakodate2015-julyslide


The point here is that we want to give “beat” a type of the form like


EvaluationLink
    PredicateNode “beat”
    ListLink
        SomeType $y
        SomeType $z


where then the type of $y is the type of men who own donkeys, and the
type of $z is the type of donkeys. But the rule for parsing “beats
it” should be generic and not depend on the specific types of the
subject and object of “beats”, which will be different in different
cases.



INFERENCE CONTROL


So, given this simple, general, abstract set-up, then we confront the
question of control….


What “control” means mostly here is something like pursuit of the
following goals:




We have a bunch of objects, and want to apply a bunch of
transformations to them, to produce a bunch of other objects
satisfying some desirable characteristics (where the “desirable
characteristics” are often formulable in terms of transformations
mapping objects into numbers)




Or




We have a specific transformation, and want to find/create some object
that this transformation can act on … or some object that this
transformation will map into the largest number possible … etc.



In cases like this we need to apply multiple transformations (could
be in serial or in parallel) to achieve the goal, and there is a
problem of knowing which ones to try and in what order.

The “meta-learning” approach to this is: Keep a record of what sets of
(serial/parallel) transformations have helped achieve similar goals.
Then, use this record as the starting-point for inference regarding
what transformations seem most probable to achieve the current goals.
Try these transformations that are “probable-looking based on
history” and see what happens. The information on what happened,
provides more data for meta-learning.


This can be viewed as a kind of probabilistic programming, as follows.
Consider a probability distribution p(A,G) over transformations of
object A aimed at goal G, where the probability of a transformation T
is proportional to:


· How useful T was for achieving some previously pursued goal G1

· How similar G1 is to G


Consider a similar distribution over objects A relative to goal G,
i.e. p(A,G) proportional to


· How useful transforming A was for achieving goal G1

· How similar G1 is to G


Then, consider a process as follows:


· Choose an object A relative to p(A,G)

· Choose a transformation T relative to p(T,G)

· Store the result

· Do some more inference to update the p(*,G) distributions

· Repeat


This is what I’ve called PGMC or “Probabilistic Growth and Mining of
Combinations”, and what Nil calls “metalearning for inference
control.”
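
A rough sketch of one step of this sampling loop (toy Python; the history
table, the similarity function, the 1e-6 smoothing and the example
transformations are all stand-ins, not an actual implementation):

# Sketch of the sampling loop described above (PGMC / metalearning-based
# inference control).  Illustrative only.

import random

def sample(weights):
    # pick a key with probability proportional to its weight
    total = sum(weights.values())
    r = random.uniform(0, total)
    for key, w in weights.items():
        r -= w
        if r <= 0:
            return key
    return key

def pgmc_step(objects, transforms, history, goal, similarity):
    # p(A,G): how useful transforming A was, for past goals similar to G
    p_obj = {a: 1e-6 + sum(u * similarity(g, goal)
                           for (g, u) in history.get(("obj", a), []))
             for a in objects}
    # p(T,G): how useful T was, for past goals similar to G
    p_tr = {t: 1e-6 + sum(u * similarity(g, goal)
                          for (g, u) in history.get(("tr", t), []))
            for t in transforms}
    a = sample(p_obj)
    t = sample(p_tr)
    result = transforms[t](a)   # apply the chosen transformation
    # ... evaluate result against the goal, append to history, repeat ...
    return a, t, result

objects = ["Inheritance A B", "Inheritance B C"]
transforms = {"deduction": lambda a: "deduced-from " + a,
              "abduction": lambda a: "abduced-from " + a}
history = {("tr", "deduction"): [("similar-goal", 5.0)]}
print(pgmc_step(objects, transforms, history, "goal", lambda g1, g2: 1.0))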


RESOURCE DEPENDENCY AND LOGIC


Now let’s address linear logic related ideas.


The lambda calculus with multiplicities is straightforward to think about,


ftp://ftp-sop.inria.fr/mimosa/personnel/gbo/rr2025.ps

https://www-lipn.univ-paris13.fr/~manzonetto/papers/bcem12.pdf

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.471.7213&rep=rep1&type=pdf


Basically, one does lambda calculus, but on multisets rather than
ordinary sets. There is some math (see the last paper above)
showing that this is equivalent to a form of linear logic, under some
circumstances. One can handle cases where some quantities can be
used indefinitely many times, and others can be used only a certain
fixed number of times (their degree of multiset membership).

In terms of constrained transformation systems as I discussed them
above, resource constraints are one possible type of constraint. In
the case of the link grammar / pregroup grammar system, the
constraints involved in parsing a sentence come down to assuming that
the left and right transformations corresponding to each
word-instance are finite resources with a multiplicity of one.

In the PLN case, the constraints involved in avoiding
multiple-counting of evidence in a sense come down to assuming that
each piece of evidence is a finite resource with a multiplicity of
one. However, this is a bit tricky, because each piece of evidence
can be used many times overall. But within a limited scope of
inference (such as application of the deduction rule among three
premises Inheritance(A,B), Inheritance(B,C), Inheritance(A,C)), then
the single observation must be treated as a finite resource with a
multiplicity of one.
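
A toy sketch of this "finite resource with a given multiplicity" idea
(illustrative Python, not tied to the formalisms in the papers above),
applied to evidence use within a single inference scope:

# Within one inference scope, a piece of evidence is a resource that may
# be consumed only as many times as its multiplicity allows.  Toy code.

from collections import Counter

class ResourcePool:
    def __init__(self, multiplicities):
        self.remaining = Counter(multiplicities)

    def consume(self, resource):
        if self.remaining[resource] <= 0:
            raise ValueError("resource exhausted in this scope: " + resource)
        self.remaining[resource] -= 1

# Within the scope of one deduction among Inh(A,B), Inh(B,C), Inh(A,C),
# a given piece of evidence may be used exactly once:
scope = ResourcePool({"evidence-E": 1})
scope.consume("evidence-E")      # fine
# scope.consume("evidence-E")    # would raise: double-counting is blocked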

In general, though, it seems to me that the management of resource
limitations in OpenCog is going to be more complicated than lambda
calculus with multiplicities or linear logic. There are some cases
where the issue is that a certain Atom, in a certain context, can only
be transformed in a certain way once. But in many other cases, the
resource limitations are different… the issues are stuff like: We
would rather prioritize use of an Atom that has a lot of other uses,
so that the marginal memory cost of this particular use of the Atom is
as small as possible. In this case, the issue isn’t that each Atom
can be used only a fixed number of times. Rather, it’s more like: we
want to probabilistically prioritize Atoms that have already been used
a lot of times. This is somewhat conceptually related to the
situations modeled by “lambda calculus with resources” and linear
logic, but not quite the same.
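
A toy sketch of the softer, usage-weighted prioritization meant here, as
opposed to a hard multiplicity cap (illustrative Python; the logarithmic
weighting is just one arbitrary choice):

# Weight each Atom by a slowly growing function of its prior usage, so
# heavily-used Atoms are preferred and the marginal memory cost of one
# more use stays small.  Illustrative only.

import math, random

def pick_atom(usage_counts):
    weights = {a: 1.0 + math.log1p(n) for a, n in usage_counts.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for atom, w in weights.items():
        r -= w
        if r <= 0:
            return atom
    return atom

print(pick_atom({"Inheritance dog mammal": 50, "Inheritance dog barker": 2}))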


POSSIBLE WORLDS

In cases where we have objects that are supposed to model some
situation (e.g. the PLN logical links coming out of parsing and
interpreting a sentence, or interpreting a visual scene, etc.), then
we can associate each set of objects with the set of situations that
they correctly model.

For instance, an interpretation of a sentence in the context of a
certain class of situations, is associated with the set of possible
situations within that class for which that interpretation is
accurate.

Similarly, a conclusion of a logical inference in the context of a
certain class of situations, is associated with the set of possible
situations within that class for which the conclusion is accurate.
(Or we can make it fuzzy if appropriate, and say that the conclusion
is fuzzily accurate regarding different situations, to different
degrees….)

In some cases there will be contradictory interpretations that cannot
(or are very unlikely to) both be accurate. In this case, if we
don’t have enough evidence to distinguish, we may want to keep both
around in the Atomspace, wrapped in different contexts. For
instance, we may do this with multiple parses of a sentence and their
ensuing logical interpretations. Or, in PLN, the Rule of Choice
specifies that we should do this if we have two very different
estimates of the truth value of the same Atom (keep them separate
rather than merge them via some sort of weighted average).



FORWARD AND BACKWARD CHAINING, ITERATED GOAL-ORIENTED PROBABILISTIC
OBJECT/TRANSFORMATION CHOICE, WHAT ELSE?

Forward and backward chaining are clearly somewhat crude control
mechanisms; however, they are what are standardly used in
theorem-provers. It’s not clear what great alternatives exist in a
general setting.

However, one alternative suggested by the above “meta-learning”
approach to inference control is to just use sampling based on the
specified distributions p(A,G) and p(T,G). If the transformations
involved include both e.g. forward and backward applications of PLN
logic rules, then this sort of sampling approach is going to combine
“forward and backward chains” in a manner guided by the distributions.
When the most likely-looking inference steps are forward, the process
will do forward chaining a while. When the most likely-looking
inference steps are backward, the process will do backward chaining a
while. Sometimes it will mix up forward and backward steps freely.

Ben Goertzel

unread,
Sep 3, 2016, 10:13:03 AM9/3/16
to link-grammar, opencog, Nil Geisweiller, Matthew Ikle, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
> MAPPING SYNTAX TO LOGIC
>
> "RelEx + RelEx2Logic” maps syntactic structures into logical
> structures. It takes in structures that care about left vs. right,
> and outputs symmetric structures that don’t care about left vs. right.
> The output of this semantic mapping framework, given a sentence, can
> be viewed as a set of type judgments, i.e. a set of assignations of
> terms to types. (Categorially, assigning term t to type T
> corresponds to an arrow “t \circ ! : Gamma ---> T” where ! is an arrow
> pointing to the unit of the category and Gamma is the set of type
> definitions of the typed lambda calculus in question, and \circ is
> function composition) .

One philosophically nice observation here is: Frege's "principle of
compositionality" here corresponds to the observation that there is a
morphism from the asymmetric monoidal category corresponding to link
grammar, into the symmetric locally cartesian closed category
corresponding to lambda calculus w/ dependent types...

This principle basically says that you can get the meaning of the
whole by combining the meaning of the parts, in language...

The case of "Every man who has a donkey, beats it" illustrates that in
order to get compositionality for weird sentences like this, you
basically want to have dependent types in your lambda calculus at the
logic end of your mapping...

-- Ben

Ben Goertzel

unread,
Sep 3, 2016, 1:30:08 PM9/3/16
to link-grammar, opencog, Nil Geisweiller, Matthew Ikle, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
One more somewhat amusing observation is that PLN would be expected to
CREATE Chomskyan deep syntactic structures for sentences in the course
of learning surface structure based on embodied experience...

Recall the notion of Chomskyan "deep structure." I suggest that
probabilistic reasoning gives a new slant on this...

Consider the example

"Who did Ben tickle?"

The Chomskyan theory would explain this as wh-movement from the "deep
structure" version

"Ben did tickle who?"

Now, arguably this hypothetical syntactic deep structure version is a
better parallel to the *semantic* deep structure. This certainly
follows if we take the OpenCog semantic deep structure, in which we
get

Who did Ben tickle?
==>
EvaluationLink
    PredicateNode "tickle"
    ListLink
        ConceptNode "Ben"
        VariableNode $w


InheritanceLink
    VariableNode $w
    ConceptNode "person"

Ben did tickle Ruiting.

EvaluationLink
    PredicateNode "tickle"
    ListLink
        ConceptNode "Ben"
        ConceptNode "Ruiting"

(let's call this L1 for future reference)

The relation between the semantic representations of "Ben did tickle
Ruiting" and "Ben did tickle who?" is one of substitution of the
representation of "Ruiting" for the representation of "who"...

Similarly, the relation between the syntactic representation of "Ben
did tickle Ruiting" and "Ben did tickle who?" is one of substitution
of the lexical representation of "Ruiting" for the lexical
representation of "who?"

On the other hand, the relationship between the syntactic
representation of "Ben did tickle Ruiting" and that of "Who did Ben
tickle?" is not one of simple substitution...

If we represent substitution as an algebraic operation on both the
syntax-parse and semantic-representation side, then there is clearly a
morphism between the symmetry (the invariance wrt substitution) on the
semantic-structure side and the deep-syntactic-structure side... But
there's no such straightforward morphism on the
shallow-syntactic-structure side... (though the syntax algebra and the
logical-semantics algebra are morphic generally, there is no morphism
btw the substitution algebras on the two sides...)

HOWEVER, and here is the interesting part... suppose a mind has all three of

Who did Ben tickle?

Ben did tickle Ruiting.

and

EvaluationLink
    PredicateNode "tickle"
    ListLink
        ConceptNode "Ben"
        VariableNode $w

(let's call this L for future reference)

in it?

I submit that, in that case, PLN is going to *infer* the syntactic form

Ben did tickle who?

with uncertainties on the links...

That is, the deep syntactic structure is going to get produced via
uncertain inference...

So what?

Well, consider what happens during language learning. At a certain
point, the system's understanding of the meaning of

Who did Ben tickle?

is going to be incomplete and uncertain ... i.e. its probability
weighting on the link from the syntactic form to its logical semantic
interpretation will be relatively low...

At that stage, the construction of the deep syntactic form

Ben did tickle who?

will be part of the process of bolstering the probability of the
correct interpretation of

Who did Ben tickle?

Loosely speaking, the inference will have the form

Who did Ben tickle?
==>
L

is bolstered by

(Ben did tickle who? ==> L)
(Who did Ben tickle? ==> Ben did tickle who?)
|-
(Who did Ben tickle ==> L)

where

(Ben did tickle who? ==> L)

comes by analogical inference from

(Ben did tickle Ruiting ==> L1)

and

(Who did Ben tickle? ==> Ben did tickle who?)

comes by

(Who did Ben tickle ==> Ben did tickle Ruiting)
(Ben did tickle Ruiting ==> Ben did tickle who)
|-
(Who did Ben tickle? ==> Ben did tickle who?)

and

(Ben did tickle Ruiting ==> Ben did tickle who)

comes from analogical inference on the syntactic links, and

(Who did Ben tickle ==> Ben did tickle Ruiting)

comes by analogical inference from

(Who did Ben tickle ==> L)
(Ben did tickle Ruiting ==> L1)
Similarity L L1
|-
(Who did Ben tickle ==> Ben did tickle Ruiting)

So philosophically the conclusion we come to is: The syntactic deep
structure will get invented in the mind during the process of language
learning, because it helps to learn the surface form, as it's a
bridging structure between the semantic structure and the surface
syntactic structure...

One thing this means is that, contra Chomsky, the presence of the deep
structure in language does NOT imply that the deep structure has to be
innate ... the deep structure would naturally emerge in the mind as a
consequence of probabilistic inference ... and furthermore, languages
whose surface form is relatively easily tweakable into deep structures
that parallel syntactic structure, are likely to be more easily
learned using probabilistic reasoning.... So one would expect surface
syntax to emerge via multiple constraints including

-- ease of tweakability into deep structures that parallel semantic structure

-- ease of comprehension and production of surface structure

I believe Jackendoff made this latter point a few times...

-- Ben

Nil Geisweiller

unread,
Sep 5, 2016, 2:17:32 AM9/5/16
to linasv...@gmail.com, link-grammar, Matthew Ikle, opencog, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Hi Linas,

On 09/03/2016 05:50 AM, Linas Vepstas wrote:
> What I am suggesting is that perhaps, by stealing some of these rather
> very old ideas, and realizing that they also just happen to describe one
> way of operating PLN, then perhaps we could create better inference
> control algorithms. You don't have to always work with just a single
> atomspace. Its OK to conceptualize about having many of them, and think
> about what might happen in each one.

totally agree on that, I'm just not sure how to best translate that into
practice, maybe ContextLinks would be useful, if necessary.

Nil


Nil Geisweiller

unread,
Sep 5, 2016, 2:41:40 AM9/5/16
to Ben Goertzel, opencog, link-grammar, Matthew Ikle, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Hi Ben,

On 09/03/2016 06:44 AM, Ben Goertzel wrote:
> The replacement methodology is to use EmbeddedTruthValueLink and
> ContextAnchorNode , as in the example
>
> Evaluation
> PredicateNode "thinks"
> ConceptNode "Bob"
> ContextAnchorNode "123"
>
> EmbeddedTruthValueLink <0>
> ContextAnchorNode "123"
> Inheritance Ben sane
>
> which uses more memory but does not complicate the core code so much...

I'm not sure again (as a few months ago) why we wouldn't want to use a
ContextLink instead. As the opencog wiki is inaccessible I can't find
the definition of EmbeddedTruthValueLink, though I believe I understand
what it is.

Nil

Nil Geisweiller

unread,
Sep 5, 2016, 2:48:14 AM9/5/16
to Ben Goertzel, link-grammar, opencog, Nil Geisweiller, Matthew Ikle, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale


On 09/03/2016 07:19 AM, Ben Goertzel wrote:
> About ContextLink / CompositeTruthValue -- an interesting relevant
> question is whether we want/need to use it in the PLN backward chainer
> which Nil is now re-implementing.... Quite possibly we do...

It's clear both the forward and backward chainer need to be able to
handle contextual reasoning rather than constantly un/contextualize
links. That is one should be able to launch reasoning queries in certain
contexts. Not supported at the moment but I feel we can afford
incremental progress in that respect.

Nil

Nil Geisweiller

unread,
Sep 5, 2016, 2:55:22 AM9/5/16
to linasv...@gmail.com, opencog, link-grammar, Nil Geisweiller, Matthew Ikle, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale


On 09/03/2016 08:24 AM, Linas Vepstas wrote:
> The other approach, that Nil was advocating with his distributional-TV
> proposals, is to jam these two into one, and say that _advmod(see,
> with) is half-true, and _prepadj(man, with) is half-true, -- and
> then somehow hope that PLN is able to eventually sort it out. We
> currently don't do this approach, because it would break R2L -- the R2L
> rules would probably mis-behave, because they don't know how to
> propagate half-truths.

Oh, if you're talking about my Generalized Distributional TV proposal,
it was not about this, it was just about fitting existing TV types into one.

Although, since GDTVs may actually represent conditional distributions,
it could serve as composite TV as well.

Nil

Nil Geisweiller

unread,
Sep 5, 2016, 3:36:45 AM9/5/16
to linasv...@gmail.com, opencog, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Linas,

On 09/03/2016 04:59 AM, Linas Vepstas wrote:
> However, I feel an area where something similar to linear logic,
> etc, might be very worthwhile thinking of is in estimating how much
> evidences inference traces have in common, as to have the revision
> rule work correctly. This is kinda the only way I manage to relate
> these barely-understandable-word-soup-sounding-to-me abstract
> proposals to PLN. Would really love to look deep into that once it
> becomes more prioritized though.
>
>
> OK, so in the blog post, at what point did things get too abstract, and
> too hard to follow?

The blog is clear and I believe I understood it well, and agree with it.
The only confusing part was when you mentioned the closed monoidal
category, etc. I tried to quickly understand it but it seems it would
suck me into layers of hyperlinks before I can get it. BTW, I would be
happy to spend a week reading a book on category theory, I'm just not
sure it's the best use of my time right now. But maybe it is, before
re-implementing the BC, not sure.

Nil


Nil Geisweiller

unread,
Sep 5, 2016, 4:11:44 AM9/5/16
to linasv...@gmail.com, opencog, Nil Geisweiller, Ruiting Lian, Zarathustra Goertzel, Pete Wolfendale
Linas,

On 09/03/2016 08:54 AM, Linas Vepstas wrote:
> I claim that inference is like parsing, and that algorithms suitable for
> parsing can be transported and used for inference. I also claim that
> these algorithms will all provide superior performance to
> backward/forward chaining.
>
> Until we can start to talk about inference as if it was a kind of
> parsing, then I think we'll remain stuck, for a while.

It is inconvenient that I do not know LG well (I do know the basics
about parsing regular or context-free grammars but that's all).

I do have however some experience with automatic theorem proving, my
take is that no matter what abstraction you may come up with, it will
always suffer from combinatorial explosion (as soon as the logic is
expressive enough). That is what I mean by linear or intuitionistic
logic being a hack, there is just no other way I can think of to tackle
that explosion than by using meta-learning so that it at least works
here on earth.

You say "of all of the algorithms that are known for performing
reasoning, forward/backward chaining are the worst and the slowest and
the lowest-performance of all", but I don't think that is how FC and BC
should be thought of. First of all BC is just FC with a target driven
inference control. Second, FC is neither bad nor good, it all depends on
the control, right?

That said, I totally like your multi-atomspace abstraction, looking
at confluence, etc. This is the way to go. I just fail to see how this
abstraction can help us simplify or optimize inference control. But I'm
certainly open to it.

Nil