Understanding the "represent" relation


Maxim Tarasov

Mar 30, 2023, 1:15:24 PM
to open-nars
I have been playing around with the represent relation and I'm trying to understand how it would work for more complex examples than the ones given in the book.
[attachment: Screenshot 2023-03-30 at 9.56.24 AM.png]
The book also states that the same approach can be applied to natural language.

I think I grasp the simple example of "this represents that" but I'm having trouble understanding how this scales further to more complex nested relations. I will try to formulate an example.

Say we try to teach NARS a simple phrase "dog is animal". We would do something like 
({x #x "is" #y} x {(#x -> #y)}) -> represent
Then if we add another phrase, say "small animal", we can add another piece of knowledge
({x "small" #x} x {(#x -> [small])}) -> represent

The syntax above might not be 100% correct but hopefully it is on the right track.

Now, how can we combine the two so that we can represent the phrase "dog is small animal" in Narsese? We could of course give NARS another explicit represent relation, but it seems to me that the system should be able to parse the phrase using just the two statements above; perhaps not the current system, but in theory at least.

The part I'm struggling to understand is what combination of rules will allow the system to derive the correspondence between "dog is small animal" and (dog -> animal) ^ (dog -> [small]) or some other representation of that sequence.

Perhaps this last Narsese statement is incorrect, maybe it shouldn't be a conjunction but something else? I'm not sure how to properly represent "dog is small animal" in Narsese and how the system can derive it from the given relations, and this is the crux of my question :)
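A toy sketch of what I have in mind (illustrative Python only, not OpenNARS code; the rule table, `match`, and `interpret` are all my own made-up names): each represent belief is treated as a phrase pattern plus a Narsese builder, and interpretation recurses into whatever sub-phrase a pattern variable binds.

```python
# Toy sketch, not OpenNARS: "represent" beliefs as (pattern -> builder) pairs.
# Pattern variables (single uppercase letters) bind to non-empty word spans.

def match(pattern, words):
    """Bind a phrase pattern against a word list; returns {var: span} or None."""
    if not pattern:
        return {} if not words else None
    head, rest = pattern[0], pattern[1:]
    if head.isupper():                      # variable: try every non-empty span
        for i in range(1, len(words) + 1):
            tail = match(rest, words[i:])
            if tail is not None:
                return {head: words[:i], **tail}
        return None
    if words and words[0] == head:          # literal word must match exactly
        return match(rest, words[1:])
    return None

# "X is Y"  ~ ({X "is" Y} x (X --> Y)) -> represent
# "small X" ~ read here as an intersection, (&, [small], X), so the two
#             rules compose instead of nesting a statement inside "-->"
RULES = [
    (["X", "is", "Y"], lambda b: f"({b['X']} --> {b['Y']})"),
    (["small", "X"],   lambda b: f"([small] & {b['X']})"),
]

def interpret(words):
    """Map a phrase to a Narsese-like string: apply the first matching rule,
    re-interpreting every bound sub-phrase recursively."""
    if len(words) == 1:
        return words[0]                     # a bare word names a term
    for pattern, build in RULES:
        b = match(pattern, words)
        if b is not None:
            return build({v: interpret(span) for v, span in b.items()})
    return " ".join(words)                  # no rule applies: leave as-is
```

With these two rules, `interpret("dog is small animal".split())` yields `(dog --> ([small] & animal))`: the "small X" rule fires on the bound sub-phrase, so "small animal" becomes a single compound term. Whether the intended reading is this intersection or the conjunction (dog -> animal) ^ (dog -> [small]) is exactly the interpretation question I am asking about.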

Thanks!

Pei Wang

Mar 30, 2023, 2:40:04 PM
to open...@googlegroups.com
Hi Maxim,

There are several issues in this example:
  • The '#x' in the book has been changed to '$x' in the current implementation, while '#x' now stands for what is written as '#x()' in the book --- the dependence list has been dropped.
  • Without simplification, your ({x #x "is" #y} x {(#x -> #y)}) -> represent should be something like (((* $word1 $term1) --> represent) && ((* $word2 $term2) --> represent)) ==> ((* [$word1 "is" $word2] ($term1 --> $term2)) --> represent).
  • The phrase "small animal" is not always taken to mean "is small and is an animal". In this context, it is interpreted as "smaller than the other animals". This issue is discussed in http://www.cis.temple.edu/~pwang/Publication/fuzziness.pdf, as well as in a new paper submitted to AGI-23.
The actual learning process is still under development. The general idea is to start from comparisons, so as to learn the meaning of "small" as a property from "smaller than" that is a relation.
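As a toy illustration of this comparison-based idea (the data, names, and 0.5 threshold below are all assumptions for the example, not the actual learning mechanism under development):

```python
# Toy sketch: derive the property "small" (relative to a class) from the
# relation "smaller than". The size numbers are made-up observation data.

sizes = {"chihuahua": 2.0, "beagle": 10.0, "labrador": 30.0, "mastiff": 70.0}

def smaller_than(x, y):
    """The relation the system is assumed to observe directly."""
    return sizes[x] < sizes[y]

def is_small(x, members):
    """'x is small' relative to the class: x is smaller than most members.
    The 0.5 threshold stands in for whatever the system would learn."""
    others = [y for y in members if y != x]
    return sum(smaller_than(x, y) for y in others) / len(others) > 0.5
```

Here `is_small("chihuahua", sizes)` holds while `is_small("mastiff", sizes)` does not; the same individual could count as small in one class and large in another, which is the context-dependence discussed in the fuzziness paper.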

Regards,

Pei



Maxim Tarasov

Mar 30, 2023, 3:20:30 PM
to open-nars
Yes, my Narsese is pretty rusty :) I intentionally skipped the part about relating words to terms, trying to keep it simple, but I see what you mean. In any case, thank you for the link, and I'm looking forward to the new paper!

Regarding the "small animal" example, perhaps I should have chosen something that is not a comparative property. I remember Patrick discussing learning correlations for properties like tall or short, big or small from observations. My question is a little different: I struggle to come up with a specific example, but what I'm trying to understand is how, by what combination of rules, the system is in theory expected to process a sequence that contains more than one potentially nested relation.

Maybe I can put it another way, and I hope this is not a silly question in the first place. Given your syntax, the system now possesses a belief ((* [$word1 "is" $word2] ($term1 --> $term2)) --> represent), so presumably, given a statement like (* ["dog" "is" "animal"]), it should be able to derive "dog -> animal". But what if $word2 is itself a relation, so that instead of "animal" it is something like "furry creature", and the whole phrase "dog is a furry creature" involves multiple represent relations? Hopefully this is a better example. I assume "furry" is not something we want to learn from comparison. Or another example: "dog is swimming in the river".

My thinking is that it should somehow recursively process the phrase and form multiple interpretations of its components, which together, as a set, form its understanding of the phrase. I would like to know your thoughts on this.

Thank you

Pei Wang

Mar 31, 2023, 9:39:48 AM
to open...@googlegroups.com
Yes, "dog is a furry creature" can be handled that way.

From this discussion, you can see that an "adjective noun" phrase can be mapped into different Narsese compound terms. The mapping rule is not "formal" in the traditional sense, but depends on the two words involved. This is what I suggested in https://cis.temple.edu/~pwang/Publication/NLP.pdf.
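A toy illustration of this word-dependence (my simplification, not the paper's actual rules; the adjective lists are assumptions for the example):

```python
# Toy sketch: the Narsese form of an "adjective noun" phrase depends on the
# adjective's kind, so there is no single formal mapping rule.

INTERSECTIVE = {"furry", "red"}   # "furry creature": is furry AND is a creature
RELATIVE = {"small", "big"}       # "small animal": small FOR an animal

def adj_noun(adj, noun):
    """Return (mapping kind, Narsese-like rendering) for 'adj noun'."""
    if adj in INTERSECTIVE:
        return ("intersection", f"(&, [{adj}], {noun})")
    if adj in RELATIVE:
        # calls for comparison within the noun's class, not a fixed term
        return ("comparative", f"[{adj}] relative to {noun}")
    return ("unanalyzed", f"{adj} {noun}")
```

So `adj_noun("furry", "creature")` maps to the intersection `(&, [furry], creature)`, while `adj_noun("small", "animal")` is flagged as needing comparison data: two phrases with the same surface form, two different Narsese treatments.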

Regards,

Pei


Maxim Tarasov

Mar 31, 2023, 12:28:55 PM
to open-nars
Thank you, this is helpful!

And regarding your comment about variables

> The '#x' in the book has been changed to '$x' in the current implementation, while '#x' now stands for what is written as '#x()'in the book --- the dependence list has been dropped.

Is there a publication that details this further? Based on some previous conversations, I thought it just meant that '#' got changed to '$', but you're saying the dependence list has been dropped. I'm wondering how some of the examples in the book that feature a dependence list are handled in the new design.

Pei Wang

Mar 31, 2023, 1:30:50 PM
to open...@googlegroups.com
The dependence list is not explicitly expressed in Narsese, because in the implementation it seems to be unnecessary. Conceptually it is still there, only implicitly represented by the scopes of the variables involved.
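For example (my reconstruction of the book's "every lock can be opened by some key" example; the exact notation should be checked against the book):

```
// Book notation: the dependent variable carries an explicit dependence list
(($y --> lock) ==> (&&, (#x($y) --> key), ((*, #x($y), $y) --> open)))

// Current notation: the list is dropped; #x is dependent on $y simply
// because it occurs inside the scope of $y
<<$y --> lock> ==> (&&, <#x --> key>, <(*, #x, $y) --> open>)>
```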

Regards,

Pei
