Hello,
>it seems like atomic terms are defined by the sensor, or the person inputting narsese at the shell.
This is correct. The usual situation is that the sensor comes up with names for atomic terms, and those names say something about the regularities it observes in the environment.
For example:
<tyqqq00346 --> [uuuq5432]>.
(read: the thing named tyqqq00346 has the property named uuuq5432). The names themselves do _not_ have to be defined or grounded by humans. We call this "experience grounded semantics" (EGS) in the theory.
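As a minimal sketch (the term names are made up, just like in the example above): suppose the sensor later also reports
<tyqqq00346 --> [moving]>.
The meaning of tyqqq00346 is then nothing more than the relations it occurs in, and the system can already relate the two property terms by induction, e.g. deriving <[moving] --> [uuuq5432]> with low confidence, without any human ever defining what the names stand for.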
>In other words, a NARS system, on its own, cannot come up with new atomic terms, can it?
Correct, and there is no need to: everything new that appears in the system is derived from the premises which were fed into it. This includes rules for building various compound terms, which is a form of explicitly "grouping" things. NARS also "groups" things implicitly through its inference rules: deduction, induction, abduction, revision.
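For instance (again a sketch with invented names), from the two judgments
<tyqqq00346 --> bird>.
<tyqqq00346 --> [uuuq5432]>.
the NAL composition rules can build the compound term (&, bird, [uuqq5432]) and derive
<tyqqq00346 --> (&, bird, [uuuq5432])>.
That compound is a new non-atomic term the system formed on its own, and by EGS its meaning is again determined by the experience the system accumulates with it, which can diverge from the meanings of its constituents.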
>And there certainly are ways to produce non-atomic terms which can certainly acquire meaning different from their constituents (?).
See above: the compound term example covers exactly this case, since by EGS the compound's meaning comes from the system's experience with it, not just from its constituents.
>the question of what seed can enable a human-compatible AGI will remains an open question.
Indeed, one can see the initial knowledge together with the whole program (NARS plus tools etc.) as such a _seed_.
Regarding alignment (what you call a human-compatible AGI), we all follow a certain philosophy: to us it is a problem of educating the A(G)I system. There is no way to burn "friendliness" into a system like NARS, because such a system is always open to new experience and thus new knowledge, and that also includes types of behaviour humans may see as "unfriendly". Given this scientific position, we "just" have to educate the AI system to _not_ do harm.
On the philosophical side of things:
One can look at humans as an example/inspiration here. Whether a human is or isn't evil depends mostly on their education.
One could object that this argument anthropomorphizes A(G)I / NARS. It doesn't, because it goes back to EGS in systems like NARS: like a human, the system acquires the meaning of its concepts from experience.
You can read more about EGS and friendliness in NARS in the papers if you would like a more in-depth discussion of these things.
Here is a link to most papers about NARS:
https://github.com/opennars/opennars/wiki/Papers
On EGS:
https://cis.temple.edu/~pwang/Publication/semantics.pdf and
http://91.203.212.130/NARS/wang.semantics.pdf
Friendliness/alignment was briefly discussed in Pei's roadmap "From NARS to a Thinking Machine", page 11:
https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=63ba2cc3e2f9e9a55e0a5f568b453691507a7810
Feel free to ask more questions about NARS; someone will for sure answer them.