Yes, my Narsese is pretty rusty :) I intentionally skipped the part about relating words to terms, trying to keep it simple, but I see what you mean. In any case, thank you for the link, and I'm looking forward to the new paper!
Regarding the "small animal" example, perhaps i should have chosen something else that is not a comparative property. I remember Patrick discussing learning correlations for properties like tall or short, big or small from observations. My question is a little different. I struggle to come up with a specific example but what I'm trying to understand is how, by what combination of rules, in theory, is the system expected to process a sequence that contains more than one potentially nested relation.
Maybe I can put it another way, and I hope this is not a silly question in the first place. Given your syntax, the system now possesses a belief (* [$word1 "is" $word2] {$term1 --> $term2}) --> represent, so presumably, if given an input like ["dog" "is" "animal"], it should be able to derive "dog --> animal". But what if $word2 is a relation itself, so that instead of "animal" it is something like "furry creature", and the whole phrase "dog is a furry creature" involves multiple represent relations? Hopefully this is a better example. I assume "furry" is not something we want to learn from comparison. Or take another example like "dog is swimming in the river".
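To make it concrete, I would guess the intended end results are something like the following (my Narsese may well be off here, so the compound terms and the relation name swim_in are just my own hypothetical rendering):

    "dog is a furry creature"      =>  <dog --> (&, creature, [furry])>
    "dog is swimming in the river" =>  <(*, dog, river) --> swim_in>

i.e. the term side of the represent relation is itself a compound built out of several simpler terms.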
My thinking is that the system should somehow process the phrase recursively and form multiple interpretations of its components, which together, as a set, constitute its understanding of the phrase. Something like the sketch below is what I have in mind, and I would like to know your thoughts on this.
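For instance, for "dog is a furry creature" I imagine a set of beliefs along these lines would be involved (again only a sketch in your syntax, with hypothetical word-to-term mappings):

    (* ["dog"] dog) --> represent
    (* ["furry" "creature"] (&, creature, [furry])) --> represent
    (* [$word1 "is" "a" $word2] {$term1 --> $term2}) --> represent

where the second belief would itself have to be derived from separate mappings for "furry" and "creature", and then the three combined to yield <dog --> (&, creature, [furry])>.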
Thank you