Hi Linas,
Thank you very much for your answer. Sure, I can make the discussion public, but I don't know the correct channel for that. Discord? GitHub? Slack? Mailing list? All of them? :-)
About the Hobbs book (it is actually very lengthy): I'm attaching a paper that presents the idea. Basically it is an attempt to formalize something we call "image schemas", from Cognitive Linguistics (my research field).
To be more specific about my doubts:
- If I have a logic expression like
give(a, b, c)
representing "a gives b to c", I want to set a type for the 'variables' "a", "b", and "c", treating "give" as a seed and the variables as 'connectors'.
Some other atom "x" would be of "type a", in the sense that "x" would be a seed with a connector compatible with "a". In the generation process, "x" would (possibly) be attached to "give" (using the "a" connector). Then any knowledge relative to "a" would be indirectly available to "x" via graph links.
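To make my question concrete, here is a sketch of what I have in mind, using the Section/Connector notation from your wiki page (the bond names AGENT, THEME, RECIPIENT and the direction assignments are just my own invention, not anything from the repo):

(Section
    (Concept "give")
    (ConnectorSeq
        (Connector (Bond "AGENT") (Direction "-"))
        (Connector (Bond "THEME") (Direction "-"))
        (Connector (Bond "RECIPIENT") (Direction "-"))))

; An atom "x" of "type a" would then be a seed whose connector
; can mate with the AGENT connector above:
(Section
    (Concept "x")
    (ConnectorSeq
        (Connector (Bond "AGENT") (Direction "+"))))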
In terms of "parsing" (which is my real intent), I want, for example, to uncover hidden words, as in some cases of metonymy:
- John opens the beer.
- Actually: "John opens (the bottle of) beer."
This is handled using GLT (Generative Lexicon Theory). In this example, "open" has a connector to "bottle", and "bottle" has a connector to "beer". Although not all expressed in the sentence, we would have the three atoms in the AtomSpace, and via the generation process the "covered" word would (among other possibilities) be uncovered.
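Again as a sketch in the same Connector notation (the bond names OBJ and CONTENT are hypothetical; I'm only trying to show the chaining I have in mind):

; "open" expects a container as its object; "bottle" both fills
; that slot and offers a CONTENT slot that "beer" can fill.
(Section
    (Concept "open")
    (ConnectorSeq
        (Connector (Bond "OBJ") (Direction "+"))))
(Section
    (Concept "bottle")
    (ConnectorSeq
        (Connector (Bond "OBJ") (Direction "-"))
        (Connector (Bond "CONTENT") (Direction "+"))))
(Section
    (Concept "beer")
    (ConnectorSeq
        (Connector (Bond "CONTENT") (Direction "-"))))

The generator, when given only "open" and "beer", could then propose "bottle" as the covering word that completes the linkage.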
- After your explanation, I think my trouble is really with the graph export to GML. I tested using head/dependent pairs (H/D), but when the graph is exported, the link (in GML) is 'undirected' (that is, the 'source' can be either the 'H' or the 'D' connector, indistinctly). I guess that after the connection is made we 'lose' the information about directionality, because it lives on the connector, not on the seed (which is correct, since the seed can be 'H' for one connector but 'D' for another).
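For reference, what I would hope to get from the export is a directed GML graph, where the standard "directed 1" key is set and "source"/"target" reflect the H/D roles. Something like this (the node ids and labels are made up for illustration):

graph [
    directed 1
    node [ id 1 label "give" ]
    node [ id 2 label "x" ]
    edge [ source 1 target 2 label "AGENT" ]
]

Here the edge would always run from the head (source) to the dependent (target), instead of picking an arbitrary orientation.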
Once more, thank you! If you want, I can post these doubts on the mailing list (or another preferred channel).
Ely
From: Linas Vepstas [mailto:linasv...@gmail.com]
Sent: Friday, December 3, 2021 23:39
To: Ely Matos
Subject: Re: About Sheaf Theory - OpenCog implementation
Hi Ely,
On Fri, Dec 3, 2021 at 6:26 PM Ely Matos <ely....@gmail.com> wrote:
Dear Linas,
Sorry to disturb you with a direct message,
No problem. But there is nothing you are asking that couldn't also be said in a public forum; there is nothing embarrassing here.
but I really need help in (maybe too simple)
Nothing is ever too simple :-)
two questions about how to use the current sheaf implementation at OpenCog (I’ve read the papers and run the examples at opencog/generate github repo):
Let me tackle the questions in backwards order.
- I have a bunch of FOL theories (from the Gordon & Hobbs book [1]) and I'd like to know if, in the current version, it is possible to implement some kind of logic using sheaves.
I find this confusing in several different ways. For me, "FOL" stands for "first order logic"; the book you point at is about the psychology of common sense. I don't see what they have to do with each other. To me, common-sense reasoning has nothing to do with logic, but perhaps the authors make some strong argument about that. I have not seen this book before. Anyway, my confusion here is not that important.
More important: the "generate" code base aims to generate all possibilities of something, meeting some given constraints. Is that what you want to do? What do you want to generate?
In mathematics, and conventional first-order-logic, this is commonly done with theorem provers: the goal is to generate all possible theorems stemming from some axioms, or to generate all possible proofs subject to a statement of a theorem. The present day theorem-provers (there must be a dozen such projects) are surely 1000x faster than my code, although... they are not designed for generating English or other generic things; I am interested in the generic problem.
Now about the code itself:
* It's still rather primitive and undeveloped; it does a few basic things but much much more can be done.
* The original intent was to use it to generate English language sentences, given a collection of abstractions .... it does not do this yet. I still want to do this but am distracted by many other projects.
* The design was meant to allow it to generate "anything", not just language, but any kind of structures.
* The current generator works by exhaustively enumerating all possibilities. This is not terribly fast.
* I recently learned about this: https://marian42.de/article/wfc/ - it might be a more efficient algorithm for generation. I think I see how to generalize it, but have not tried. But that's off-topic, other than to note the current generation code is both primitive, and slow.
* The theorem provers I mentioned (HOL, Vampire, prover9... ) are probably filled with great ideas that could make the opencog/generate code run much faster... but it's a huge amount of work to implement these ideas. https://en.wikipedia.org/wiki/Automated_theorem_proving
- I see that network generation can be a way to ‘glue’ different sections, based on connector specification.
Yes.
But (a) I didn’t see how to implement constraints (on the linking between two nodes) and (b) I didn’t see how to force directionality.
I'm not sure how abstract I should be in answering. A somewhat short high-level review is here: https://wiki.opencog.org/w/Connectors_and_Sections
So let me give an answer using the low-level notation.
A connector has the form:
(Connector
    (Bond "FOO")
    (Direction "bar"))
The (default) rule is that connectors of type "FOO" can only connect to other connectors of type "FOO" and nothing else. (I envision non-default styles, too, but there might not be an API for that yet. I don't remember)
If the above is satisfied, then the next check is made: do the directions "match"? In one of the examples, there are only two directions, + and -, and they must always be opposites. Thus, FOO- can only attach to FOO+ and never to another FOO-. One of the examples shows how to declare which directions can attach to which: how to define +/- opposites, and also how to connect anything to anything.
You could also create rules that say, "a and b can join together, and b and c can join together (and nothing else)", by writing:
(define sexuality-set (Concept "abc example"))
(Member (Set (ConnectorDir "a") (ConnectorDir "b")) sexuality-set)
(Member (Set (ConnectorDir "c") (ConnectorDir "b")) sexuality-set)
So that is a "tri-sexual" example: a can only join to b, but not to other a's and not to c's; b can mate with a or c, but not with other b's.
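Following the same Member/Set convention as the tri-sexual example above, the ordinary +/- pairing could presumably be declared like this (the set name "polar example" is my own invention, not anything in the repo):

; Hypothetical declaration: "+" may only mate with "-", and vice versa.
(define polar-set (Concept "polar example"))
(Member (Set (ConnectorDir "+") (ConnectorDir "-")) polar-set)

Since neither (Set (ConnectorDir "+") (ConnectorDir "+")) nor the "-"/"-" pair is a member, same-sign connectors would never attach.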
Also -- I recently renamed "ConnectorDir" to "Direction". It's easier that way.
--linas
--
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.