Atomic terms and Seed AGI

Shubhamkar Ayare

Jun 17, 2023, 1:52:07 PM
to open-nars
Apologies if I'm missing something. From what I've understood so far, it seems that atomic terms are defined by the sensor, or by the person entering Narsese at the shell. In other words, a NARS system, on its own, cannot come up with new atomic terms, can it?

I'm not sure whether it actually matters, because terms get their internal meaning from their relations with other terms. And there are certainly ways to produce non-atomic (compound) terms, which can acquire meanings different from those of their constituents (?).

If this is the case, it seems that the atomic terms and the sentences produced by the sensors act like a seed for the AGI system. Different systems can have different sets of atomic terms and relations between them, and accordingly each seed can result in a system that does certain things more easily than others. This is similar to different organisms having different niches - if put in a different niche they will try to adapt, although some may adapt better than others. And while any such system will operate under the assumption of AIKR (because NARS does), the question of which seed can enable a human-compatible AGI remains open.

Robert w

Jun 19, 2023, 7:09:40 AM
to open-nars
Hello,


>it seems like atomic terms are defined by the sensor, or the person inputting narsese at the shell.
This is correct. The usual situation is that the sensor comes up with the names of atomic terms, which say something about the regularities observed in the environment.
For example
<tyqqq00346 --> [uuuq5432]>.
The names themselves do _not_ have to be defined or grounded by humans. We call this "experience-grounded semantics" (EGS) in the theory.


>In other words, a NARS system, on its own, cannot come up with new atomic terms, can it?
Correct; there is no need to, because this is covered by everything it derives from the premises that were fed into the system. That includes rules to build various compound terms, which is a form of explicitly "grouping" things. NARS also "groups" things implicitly through deduction, induction, abduction, and revision.
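To make the implicit "grouping" a bit more concrete, here is a minimal sketch (in Python, not copied from any NARS implementation) of two of the NAL truth functions as I understand them; a truth value is a (frequency, confidence) pair, and k is the evidential-horizon parameter, usually set to 1:

K = 1.0  # evidential horizon ("personality parameter"), usually 1

def deduction(f1, c1, f2, c2):
    # {<M --> P> (f1, c1), <S --> M> (f2, c2)} |- <S --> P>
    return f1 * f2, f1 * f2 * c1 * c2

def induction(f1, c1, f2, c2):
    # {<M --> P> (f1, c1), <M --> S> (f2, c2)} |- <S --> P>
    # This is the implicit "grouping": shared evidence about M generalizes
    # into a low-confidence statement relating S and P.
    w = f2 * c1 * c2        # amount of evidence
    return f1, w / (w + K)  # weight-to-confidence conversion

print(deduction(0.9, 0.9, 0.8, 0.9))   # ~(0.72, 0.58)
print(induction(0.9, 0.9, 1.0, 0.9))   # ~(0.90, 0.45)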


>And there certainly are ways to produce non-atomic terms which can certainly acquire meaning different from their constituents (?).
see above


>the question of which seed can enable a human-compatible AGI remains open.
One can indeed see the knowledge together with the whole program (which is NARS plus tools etc.) as a _seed_.
We all follow a certain philosophy regarding alignment (you put it as human-compatible AGI): to us it is a problem of educating the A(G)I system. There is no way to burn "friendliness" into a system like NARS, because such a system is always open to new experiences and thus to new knowledge, which also includes certain types of behaviour humans may see as "unfriendly". Given this scientific opinion, we "just" have to educate the AI system to _not_ do harm.



On the philosophical side of things:
One can look at humans as an example/inspiration for this. Whether a human turns out evil or not depends mostly on their education.
One argument against this could be that it is based on anthropomorphizing A(G)I / NARS. That isn't the case, because it goes back to EGS in systems like NARS.


You can read more about EGS and friendliness in NARS in the papers, if you would like a more in-depth discussion of these things.
Here is a link to most papers about NARS: https://github.com/opennars/opennars/wiki/Papers
EGS: https://cis.temple.edu/~pwang/Publication/semantics.pdf and http://91.203.212.130/NARS/wang.semantics.pdf
Friendliness/alignment is briefly discussed in Pei's roadmap "From NARS to a Thinking Machine", page 11: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=63ba2cc3e2f9e9a55e0a5f568b453691507a7810


Feel free to ask more questions about NARS; someone will surely answer them.

Shubhamkar Ayare

Jun 19, 2023, 8:02:32 AM
to open-nars
> friendliness

Maybe human-like is a better term than human-compatible for what I wanted to say. By this I mean an AGI that understands what humans mean when they refer to a chair, or a two-wheeler (one of whose tyres is out for repair); that knows how to handle sharp objects while working with humans; that can estimate how a rubber ball will bounce - what is more generally referred to as intuitive physics; and that even grasps the social side: what social conventions are, and how to infer people's goals and desires from their actions. If an AGI is to live and thrive in a human society, then it should be at least human-like, because our systems and societies are built for humans.

There is potentially a huge amount of such knowledge - the kind that led to projects like Cyc.

We also cannot have the sensors supply all the categories, because there can always be new categories. Since NARS cannot come up with new atomic terms, which atomic terms can serve as a seed for a human-like AGI requires research. For example, it seems reasonable that a sensor should supply terms corresponding to pixel- or super-pixel-level data rather than terms like chair or table, because the former form a more or less complete set, while the latter do not. A 100x100 image can convey a lot of things. However, it seems too hard a problem to infer that a given image contains a chair, across all the myriad shapes, viewpoints, lightings, colors, occlusions, etc., using the logical operators that NARS provides. I feel as if NARS serves as a good model of high-level human-like reasoning, but inferring the appropriate categories, reasonably efficiently, from sensors that are flexible enough for a fully autonomous system seems like a hard problem.
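To make the pixel-level idea concrete, here is a purely hypothetical sketch (in Python; the term and property names are my own invention, and no NARS implementation works exactly this way) of a sensor turning a small grayscale image into Narsese statements, with the pixel intensity encoded as the truth-value frequency:

def image_to_narsese(image, confidence=0.9):
    # image: 2D list of grayscale values in [0, 1];
    # yields one Narsese statement per pixel, with the intensity as frequency.
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            yield f"<{{px_{r}_{c}}} --> [bright]>. %{value:.2f};{confidence:.2f}%"

tiny_image = [[0.10, 0.95],
              [0.80, 0.05]]
for line in image_to_narsese(tiny_image):
    print(line)
# <{px_0_0} --> [bright]>. %0.10;0.90%
# <{px_0_1} --> [bright]>. %0.95;0.90%
# ...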

And thank you for confirming the earlier points! I'm certainly loving Dr. Pei Wang's insights in the papers (and the book) published so far!

Pei Wang

Jun 19, 2023, 9:21:25 AM
to open...@googlegroups.com
Hi, Shubhamkar,

You raised an important issue. NARS is human-like at the meta-level (the experience-behavior relation), not necessarily at the object-level (the contents of experience and behavior), as AIs usually don't have a human-like body (including the sensors and actuators named by atomic terms) or a human-like (perceived) environment.

These object-level differences can be reduced and compensated for to various extents via learning, though that takes time and has limitations. They will surely lead to application issues, though to me, perfectly human-like behaviors are neither possible nor desirable anyway.

On your initial technical question: NARS may produce atomic terms by “compressing” compound terms, especially for operations, though that hasn't been implemented yet.
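Just to give a rough picture of what such “compression” could look like (a toy illustration only, not a description of any implementation; the naming scheme and the use of a similarity statement are arbitrary choices):

import itertools

_fresh = itertools.count()

def compress(compound_term):
    # Return a new atomic term standing for the compound, plus a Narsese
    # similarity statement that links the two.
    atom = f"T{next(_fresh)}"   # arbitrary naming scheme
    link = f"<{atom} <-> {compound_term}>. %1.00;0.90%"
    return atom, link

atom, link = compress("(&, [red], ball)")
print(atom)   # T0
print(link)   # <T0 <-> (&, [red], ball)>. %1.00;0.90%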

We do have ideas about using NARS to directly handle sensorimotor data, but that would be a long story, and the terms/concepts won't be human-like.

Regards,

Pei

Shubhamkar Ayare

Jun 19, 2023, 10:52:53 AM
to open-nars
> NARS is human-like at the meta-level (the experience-behavior relation), not necessarily at the object-level (the contents of experience and behavior), as AIs usually don't have a human-like body (including the sensors and actuators named by atomic terms) or a human-like (perceived) environment.
I see, the difference in levels explains it!

> perfectly human-like behaviors are neither possible nor desirable anyway.
Certainly. In general, systems can have radically different sensors and actuators from humans; and even if they do have human-like sensors and actuators, perfectly human-like behavior (with all our flaws) doesn't seem desirable for an AGI system.

> NARS may produce atomic terms by “compressing” compound terms, especially for operations, though that hasn't been implemented yet.
That'd make sense.

> directly handle sensorimotor
One idea I had run into was to use embedding mappings at the sensor level and to augment the truth-value calculation with an embedding-based similarity metric. However, this would still be limited by the embedding mappings; for an AGI, we would want it to be responsive to new knowledge. Un/fortunately, there seems to be relatively little, if any, research on one-shot learning of embeddings from scratch. And given the role that sleep plays in human learning, it is also unclear whether one-shot learning of embeddings is even possible. This does seem like an interesting line of research, though, especially since nowadays there exist embeddings for images and other modalities too.
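For what it's worth, here is a minimal sketch of what I was imagining (the mapping, the fixed confidence, and the term names are all my own assumptions, not anything taken from NARS):

import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_to_truth(sim, confidence=0.9):
    # Map cosine similarity in [-1, 1] to a (frequency, confidence) pair,
    # so a sensor could assert e.g. <percept1 <-> chair>. %f;c%
    return (sim + 1.0) / 2.0, confidence

percept = [0.1, 0.9, 0.3]   # embedding of the current percept
chair   = [0.2, 0.8, 0.4]   # stored embedding for the term "chair"
f, c = similarity_to_truth(cosine_similarity(percept, chair))
print(f"<percept1 <-> chair>. %{f:.2f};{c:.2f}%")   # ~ %0.99;0.90%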

Thank you very much for clarifying things so far!

robe...@googlemail.com

Jun 19, 2023, 1:46:17 PM
to open...@googlegroups.com
We have a story about dealing with embeddings, or more specifically manifolds, for visual perception (done with an external vision system which will hopefully get attached to a NARS implementation pretty soon).
It's always open to new object categories, so it's not an unsolved problem.


Shubhamkar Ayare

Jun 19, 2023, 2:15:08 PM
to open-nars

Robert w

Jun 20, 2023, 6:51:55 PM
to open-nars
Yes, that's it.

Shubhamkar Ayare

Jun 23, 2023, 12:49:05 AM
to open-nars
Right, extending this line of work with different kinds of invariances and equivariances for unseen objects and backgrounds could be one way forward. Basically, as you suggest in the paper, in order to recognize that a particular car is the same car as before (so as to reduce 'createdNewCategory'), we will need rotational invariance; but in general, other kinds of invariances and equivariances would be required too.